1. 25 Oct, 2013 1 commit
  2. 19 Jun, 2013 1 commit
  3. 10 Mar, 2013 1 commit
  4. 09 Mar, 2013 1 commit
  5. 01 Feb, 2013 2 commits
    • Mimic OldCmm basic block ordering in the LLVM backend. · b39e4de1
      gmainlan@microsoft.com authored
      In OldCmm, the false case of a conditional was a fallthrough. In Cmm,
      conditionals have both true and false successors. When we convert Cmm to LLVM,
      we now first re-order Cmm blocks so that the false successor of a conditional
      occurs next in the list of basic blocks, i.e., it is a fallthrough, just
      as it (necessarily) was in OldCmm. Surprisingly, this can make a big
      performance difference.
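
      The reordering can be sketched as a greedy, depth-first layout that
      always tries to place a conditional's false successor immediately
      after it. The following is a minimal, hedged illustration in Haskell,
      not GHC's actual code; Block, falseSucc and otherSuccs are invented
      stand-ins for the real Cmm types.

      import qualified Data.Map as M
      import qualified Data.Set as S

      type BlockId = Int

      data Block = Block
        { blockId    :: BlockId
        , falseSucc  :: Maybe BlockId   -- successor we want as the fallthrough
        , otherSuccs :: [BlockId]
        }

      -- Depth-first layout that visits a block's false successor first,
      -- so it lands directly after the block that branches to it.
      layout :: BlockId -> [Block] -> [BlockId]
      layout entry blocks = go (S.singleton entry) [entry] []
        where
          bmap = M.fromList [(blockId b, b) | b <- blocks]
          go _    []       acc = reverse acc
          go seen (b:rest) acc =
            case M.lookup b bmap of
              Nothing  -> go seen rest acc
              Just blk ->
                let succs = maybe id (:) (falseSucc blk) (otherSuccs blk)
                    fresh = [s | s <- succs, s `S.notMember` seen]
                in  go (foldr S.insert seen fresh) (fresh ++ rest) (b : acc)

      main :: IO ()
      main = print (layout 0 sample)   -- [0,1,3,2]: block 1 falls through
        where
          -- block 0 ends in a conditional: false succ 1, true succ 2
          sample = [ Block 0 (Just 1) [2]
                   , Block 1 Nothing  [3]
                   , Block 2 Nothing  [3]
                   , Block 3 Nothing  []
                   ]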
    • Always pass vector values on the stack. · 6480a35c
      gmainlan@microsoft.com authored
      Vector values are now always passed on the stack. This isn't particularly
      efficient, but it will have to do for now.
  6. 12 Nov, 2012 1 commit
    • Remove OldCmm, convert backends to consume new Cmm · d92bd17f
      Simon Marlow authored
      This removes the OldCmm data type and the CmmCvt pass that converts
      new Cmm to OldCmm.  The backends (NCGs, LLVM and C) have all been
      converted to consume new Cmm.
      
      The main difference between the two data types is that conditional
      branches in new Cmm have both true/false successors, whereas in OldCmm
      the false case was a fallthrough.  To generate slightly better code we
      occasionally need to invert a conditional to ensure that the
      branch-not-taken becomes a fallthrough; this was previously done in
      CmmCvt, and it is now done in CmmContFlowOpt.
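
      The inversion can be pictured with a small hedged Haskell sketch;
      Cond, Last and fixFallthrough are invented for illustration and are
      not the real CmmContFlowOpt code:

      data Cond = Cond String | Neg Cond deriving Show

      data Last                           -- how a basic block ends
        = CondBranch Cond Int Int         -- condition, true succ, false succ
        | Goto Int
        deriving Show

      invert :: Cond -> Cond
      invert (Neg c) = c
      invert c       = Neg c

      -- 'next' is the block laid out immediately after this one.  If the
      -- true successor would be the fallthrough, negate the condition and
      -- swap the successors so the branch-not-taken falls through instead.
      fixFallthrough :: Int -> Last -> Last
      fixFallthrough next (CondBranch c t f)
        | t == next = CondBranch (invert c) f t
      fixFallthrough _ l = l

      main :: IO ()
      main = print (fixFallthrough 2 (CondBranch (Cond "r1 > 0") 2 3))
      -- prints: CondBranch (Neg (Cond "r1 > 0")) 3 2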
      
      We could go further and use the Hoopl Block representation for native
      code, which would mean that we could use Hoopl's postorderDfs and
      analyses for native code, but for now I've left it as is, using the
      old ListGraph representation for native code.
  7. 24 Oct, 2012 1 commit
  8. 08 Oct, 2012 1 commit
    • Produce new-style Cmm from the Cmm parser · a7c0387d
      Simon Marlow authored
      The main change here is that the Cmm parser now allows high-level cmm
      code with argument-passing and function calls.  For example:
      
      foo ( gcptr a, bits32 b )
      {
        if (b > 0) {
           // we can make tail calls passing arguments:
           jump stg_ap_0_fast(a);
        }
      
        return (a,b);
      }
      
      More details on the new cmm syntax are in Note [Syntax of .cmm files]
      in CmmParse.y.
      
      The old syntax is still more-or-less supported for those occasional
      code fragments that really need to explicitly manipulate the stack.
      However, there are a couple of differences: it is now obligatory to
      give a list of live GlobalRegs on every jump, e.g.
      
        jump %ENTRY_CODE(Sp(0)) [R1];
      
      Again, more details in Note [Syntax of .cmm files].
      
      I have rewritten most of the .cmm files in the RTS into the new
      syntax, except for AutoApply.cmm which is generated by the genapply
      program: this file could be generated in the new syntax instead and
      would probably be better off for it, but I ran out of enthusiasm.
      
      Some other changes in this batch:
      
       - The PrimOp calling convention is gone, primops now use the ordinary
         NativeNodeCall convention.  This means that primops and "foreign
         import prim" code must be written in high-level cmm, but they can
         now take more than 10 arguments.
      
       - CmmSink now does constant-folding (should fix #7219); a tiny
         sketch of the idea follows this list
      
       - .cmm files now go through the cmmPipeline, and as a result we
         generate better code in many cases.  All the object files generated
         for the RTS .cmm files are now smaller.  Performance should be
         better too, but I haven't measured it yet.
      
       - RET_DYN frames are removed from the RTS; lots of code goes away
      
       - we now have some more canned GC points to cover unboxed-tuples with
         2-4 pointers, which will reduce code size a little.
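
      As a tiny illustration of the kind of constant-folding meant above
      (a hedged Haskell sketch over an invented expression type, not the
      real CmmSink code):

      data Expr = Lit Int | Add Expr Expr | Mul Expr Expr deriving Show

      -- Fold subexpressions first, then collapse an operator applied to
      -- two literals into a single literal.
      cfold :: Expr -> Expr
      cfold (Add a b) = case (cfold a, cfold b) of
        (Lit x, Lit y) -> Lit (x + y)
        (a', b')       -> Add a' b'
      cfold (Mul a b) = case (cfold a, cfold b) of
        (Lit x, Lit y) -> Lit (x * y)
        (a', b')       -> Mul a' b'
      cfold e = e

      main :: IO ()
      main = print (cfold (Add (Lit 1) (Mul (Lit 2) (Lit 3))))   -- Lit 7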
  9. 18 Sep, 2012 5 commits
  10. 16 Sep, 2012 2 commits
  11. 12 Sep, 2012 3 commits
  12. 11 Sep, 2012 1 commit
  13. 31 Aug, 2012 2 commits
  14. 20 Jul, 2012 1 commit
  15. 05 Jul, 2012 1 commit
  16. 05 Jun, 2012 1 commit
  17. 15 May, 2012 1 commit
    • Support code generation for unboxed-tuple function arguments · 09987de4
      batterseapower authored
      This is done by a 'unarisation' pre-pass at the STG level which
      translates away all (live) binders binding something of unboxed
      tuple type; a simplified sketch follows the list below.
      
      This has the following knock-on effects:
        * The subkind hierarchy is vastly simplified (no UbxTupleKind or ArgKind)
        * Various relaxed type checks in the typechecker, 'foreign import
          prim', etc.
        * All case binders may be live at the Core level
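
      A simplified sketch of what unarisation does to a binder's type; the
      Ty type here is invented for illustration and far simpler than STG's
      real types:

      data Ty = IntTy | PtrTy | UbxTuple [Ty] deriving Show

      -- An unboxed-tuple binder is flattened into one binder per
      -- component; nested unboxed tuples flatten recursively.
      unariseTy :: Ty -> [Ty]
      unariseTy (UbxTuple ts) = concatMap unariseTy ts
      unariseTy t             = [t]

      main :: IO ()
      main =
        -- an argument of type (# Int, (# Ptr, Int #) #) becomes three args
        print (unariseTy (UbxTuple [IntTy, UbxTuple [PtrTy, IntTy]]))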
  18. 15 Mar, 2012 1 commit
  19. 14 Feb, 2012 1 commit
  20. 08 Feb, 2012 1 commit
  21. 23 Jan, 2012 1 commit
  22. 19 Jan, 2012 1 commit
  23. 17 Jan, 2012 1 commit
  24. 19 Dec, 2011 1 commit
  25. 04 Nov, 2011 1 commit
  26. 02 Oct, 2011 2 commits
  27. 25 Aug, 2011 4 commits