Try to generate better code for lt/eq/otherwise comparisons against constants.
Here is the minimal reproducer:
q = 0xBEEF :: Int
foo :: Int -> Int
foo !x
| x < q = 1
| x == q = 2
| otherwise = 3
I think in a perfect world we would compare x with q once and then branch on the resulting condition twice.
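Concretely, the ideal x86-64 sequence would set the flags once and branch on them twice (a sketch; the labels are illustrative, the register and constant are taken from the listings below):

```
cmpq $48879,%rax      # compare x with q once
jl   _blk_lt          # x < q  -> return 1
je   _blk_eq          # x == q -> return 2
                      # fall through: otherwise -> return 3
```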
However currently we end up with:
case <# [ipv_sUf 48879#] of {
__DEFAULT ->
case ipv_sUf of {
__DEFAULT -> M.foo3;
48879# -> M.foo2;
};
1# -> M.foo1;
};
===>
cUp: // global
_sUf::I64 = I64[R1 + 7];
if (%MO_S_Ge_W64(_sUf::I64, 48879)) goto cUG; else goto cUH;
cUG: // global
if (_sUf::I64 != 48879) goto cUD; else goto cUE;
===>
_blk_cUe:
movq 7(%rbx),%rax
cmpq $48879,%rax
jge _blk_cUv
_blk_cUw:
...
_blk_cUv:
cmpq $48879,%rax
jne _blk_cUs
We are not in bad company, as gcc produces something quite similar. But clang seems to do better:
cmpl $11, %edi
movl $111, %eax
movl $222, %ecx
cmovel %eax, %ecx
movl $1, %eax
cmovgel %ecx, %eax
retq
It seems the pattern we want to optimize here is:
if (ComparisonOp1 (_sUf::I64, 48879)) goto cUG; else goto cUH;
cUG: // global
if (ComparisonOp2 (_sUf::I64, 48879)) goto cUD; else goto cUE;
Here both comparisons use the same operand and the same constant, so we could share the status flags produced by a single comparison.
I'm not sure if we can reasonably represent this in Cmm currently. But it should be possible to optimize this kind of comparison "on the fly" during CmmToAsm.