Cmm: Needs CSE pass
libraries/base/System/Mem.hs manages to produce the following Cmm, full of nested loads. Note how often BaseReg + 872 is computed; the same goes for BaseReg + 888. Ideally these common subexpressions would be floated out and shared. The LLVM codegen is able to recognise this and float them out.
cLu: // global
I64[Sp - 8] = .LcLq_info;
Sp = Sp - 8;
I64[I64[I64[BaseReg + 872] + 24] + 16] = Sp;
P64[I64[BaseReg + 888] + 8] = Hp + 8;
I64[I64[BaseReg + 872] + 104] = I64[I64[BaseReg + 872] + 104] - ((Hp + 8) - I64[I64[BaseReg + 888]]);
(_uLw::I64) = call "ccall" arg hints: [PtrHint,] result hints: [PtrHint] suspendThread(BaseReg, 0);
call "ccall" arg hints: [] result hints: [] performMajorGC();
(_uLx::I64) = call "ccall" arg hints: [PtrHint] result hints: [PtrHint] resumeThread(_uLw::I64);
BaseReg = _uLx::I64;
_uLB::P64 = I64[I64[BaseReg + 872] + 24];
Sp = I64[_uLB::P64 + 16];
SpLim = _uLB::P64 + 192;
I64[BaseReg + 904] = 0;
_uLD::I64 = I64[I64[BaseReg + 888] + 8];
Hp = _uLD::I64 - 8;
_uLE::I64 = I64[I64[BaseReg + 888]];
I64[BaseReg + 856] = _uLE::I64 + ((%MO_SS_Conv_W32_W64(I32[I64[BaseReg + 888] + 48]) << 12) - 1);
I64[I64[BaseReg + 872] + 104] = I64[I64[BaseReg + 872] + 104] + (_uLD::I64 - _uLE::I64);
call (I64[Sp])() returns to cLq, args: 8, res: 8, upd: 8;
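Done by hand, floating the repeated loads out of the first half of the block could look roughly like this (the temporary names _cap and _bd are made up for illustration; note that BaseReg is reassigned after resumeThread, so the second half of the block would need its own copies):

```
cLu: // global
_cap::P64 = I64[BaseReg + 872];  // loaded once instead of three times
_bd::P64 = I64[BaseReg + 888];   // loaded once instead of twice
I64[Sp - 8] = .LcLq_info;
Sp = Sp - 8;
I64[I64[_cap + 24] + 16] = Sp;
P64[_bd + 8] = Hp + 8;
I64[_cap + 104] = I64[_cap + 104] - ((Hp + 8) - I64[_bd]);
```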
This is bad for the NCG: it sees a series of Assign/Store statements but does not see the common subexpressions (BaseReg + 888, BaseReg + 872, ...), so we end up generating two (or even three) loads per instruction.
We very much want to keep I64[x + off] = ..., but x should not itself be of the form I64[y + off'] if that expression occurs at least a second time in the same block. The LLVM codegen recognises this.
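A block-local CSE pass could be sketched like this. These are toy types, not GHC's real CmmExpr/CmmNode, and everything is assumed pure, whereas a real pass must invalidate loads across stores and calls (the suspendThread/resumeThread calls above clobber BaseReg, for example):

```haskell
-- Toy local CSE: find compound subexpressions that occur at least
-- twice in one block, bind them to fresh temporaries, and rewrite the
-- block to use the temporaries.
import qualified Data.Map.Strict as M

data Expr
  = Reg String        -- a register or temporary, e.g. BaseReg
  | Lit Int
  | Add Expr Expr     -- address arithmetic, e.g. BaseReg + 872
  | Load Expr         -- a memory read, e.g. I64[BaseReg + 872]
  deriving (Eq, Ord, Show)

data Stmt
  = Assign String Expr   -- reg = expr
  | Store Expr Expr      -- I64[addr] = expr
  deriving (Eq, Show)

-- Every compound subexpression (Add or Load) of an expression,
-- including the expression itself.
compoundSubs :: Expr -> [Expr]
compoundSubs e = case e of
  Add a b -> e : compoundSubs a ++ compoundSubs b
  Load a  -> e : compoundSubs a
  _       -> []

-- How often each compound subexpression occurs in the block.
counts :: [Stmt] -> M.Map Expr Int
counts block = M.fromListWith (+) [ (e, 1) | s <- block, e <- subs s ]
  where
    subs (Assign _ e) = compoundSubs e
    subs (Store a e)  = compoundSubs a ++ compoundSubs e

-- Bind each subexpression seen at least twice to a fresh temporary and
-- rewrite the block in terms of those temporaries.
cse :: [Stmt] -> [Stmt]
cse block = binds ++ map rwStmt block
  where
    shared = [ e | (e, n) <- M.toList (counts block), n >= 2 ]
    temps  = M.fromList (zip shared [ "_t" ++ show i | i <- [0 :: Int ..] ])
    rw e = case M.lookup e temps of
      Just t  -> Reg t
      Nothing -> case e of
        Add a b -> Add (rw a) (rw b)
        Load a  -> Load (rw a)
        _       -> e
    rwStmt (Assign r e) = Assign r (rw e)
    rwStmt (Store a e)  = Store (rw a) (rw e)
    binds = [ Assign t e | (e, t) <- M.toList temps ]

-- Two statements that both recompute I64[BaseReg + 872]:
example :: [Stmt]
example =
  [ Store (Add (Load (Add (Reg "BaseReg") (Lit 872))) (Lit 24))  (Reg "Sp")
  , Store (Add (Load (Add (Reg "BaseReg") (Lit 872))) (Lit 104)) (Lit 0)
  ]
```

Running cse on the example binds BaseReg + 872 and its load to temporaries and rewrites both stores to address off the load's temporary, which is roughly what we would want either the NCG or a Cmm-to-Cmm pass to do.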