1. 27 May, 2016 4 commits
    • StgCmmExpr: Remove a redundant list · 59250dce
      Ömer Sinan Ağacan authored
    • Comments and white space only · 72fd407e
      Simon Peyton Jones authored
    • More fixes for unboxed tuples · b43a7936
      Simon Peyton Jones authored
      This is a continuation of
         commit e9e61f18
         Date:   Thu May 26 15:24:53 2016 +0100
         Reduce special-casing for nullary unboxed tuple
      
      which related to Trac #12115.  But typecheck/should_run/tcrun051
      revealed that my patch was incomplete.
      
      This patch fixes it by removing another special case in Type.repType.
      I had also missed a case in UnariseStg.unariseIdBinder.
      
      I took the opportunity to add explanatory notes
        Note [Unarisation]
        Note [Unarisation and nullary tuples]
      in UnariseStg.
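
      As a rough illustration (not part of the patch), a nullary unboxed
      tuple is the zero-field unboxed tuple (# #). The hypothetical
      function below shows the degenerate shape of value that Type.repType
      and the unariser have to handle:

          {-# LANGUAGE UnboxedTuples #-}
          module NullaryTuple where

          -- 'done' is a made-up example: it returns the nullary unboxed
          -- tuple (# #), which has zero fields.
          done :: Int -> (# #)
          done _ = (# #)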
    • StgCmmCon: Do not generate moves from unused fields to local variables · cd50d236
      Ömer Sinan Ağacan authored
      Say we have a record like this:
      
          data Rec = Rec
            { f1 :: Int
            , f2 :: Int
            , f3 :: Int
            , f4 :: Int
            , f5 :: Int
            }
      
      Before this patch, the code generated for `f1` looked like this:
      
          f1_entry()
              {offset
                 ...
                 cJT:
                     _sI6::P64 = R1;
                     _sI7::P64 = P64[_sI6::P64 + 7];
                     _sI8::P64 = P64[_sI6::P64 + 15];
                     _sI9::P64 = P64[_sI6::P64 + 23];
                     _sIa::P64 = P64[_sI6::P64 + 31];
                     _sIb::P64 = P64[_sI6::P64 + 39];
                     R1 = _sI7::P64 & (-8);
                     Sp = Sp + 8;
                     call (I64[R1])(R1) args: 8, res: 0, upd: 8;
              }
      
      Note how every field of the record is loaded into a local variable,
      even though only the first one is used. These dead loads make it into
      the final assembly:
      
          f1_info:
              ...
          _cJT:
              movq 7(%rbx),%rax
              movq 15(%rbx),%rcx
              movq 23(%rbx),%rcx
              movq 31(%rbx),%rcx
              movq 39(%rbx),%rbx
              movq %rax,%rbx
              andq $-8,%rbx
              addq $8,%rbp
              jmp *(%rbx)
      
      With this patch we no longer generate moves for the unused fields.
      The Cmm becomes:
      
          f1_entry()
              {offset
                 ...
                 cJT:
                     _sI6::P64 = R1;
                     _sI7::P64 = P64[_sI6::P64 + 7];
                     R1 = _sI7::P64 & (-8);
                     Sp = Sp + 8;
                     call (I64[R1])(R1) args: 8, res: 0, upd: 8;
              }
      
      The assembly becomes:
      
          f1_info:
              ...
          _cJT:
              movq 7(%rbx),%rax
              movq %rax,%rbx
              andq $-8,%rbx
              addq $8,%rbp
              jmp *(%rbx)
      
      It turns out that CmmSink already eliminates these dead loads, but it
      is better not to generate them in the first place.
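
      For context (not part of the patch), the selector f1 that GHC derives
      for this record behaves like the hand-written function below; it
      demands only the first field, which is why the loads of the other
      four fields are dead:

          -- Sketch of a selector equivalent to the derived f1: only the
          -- first field is bound, so reading the remaining fields is
          -- wasted work.
          f1 :: Rec -> Int
          f1 (Rec x _ _ _ _) = x

      Dumps like the ones above can be produced with GHC's -ddump-cmm and
      -ddump-asm flags.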
      
      Reviewers: simonmar, simonpj, austin, bgamari
      
      Reviewed By: simonmar, simonpj
      
      Subscribers: rwbarton, thomie
      
      Differential Revision: https://phabricator.haskell.org/D2269
  2. 26 May, 2016 4 commits
  3. 25 May, 2016 7 commits
  4. 24 May, 2016 16 commits
  5. 23 May, 2016 4 commits
  6. 22 May, 2016 5 commits