1. 27 Jan, 2020 1 commit
  2. 25 Jan, 2020 1 commit
  3. 13 Jan, 2020 1 commit
  4. 03 Dec, 2019 1 commit
  5. 23 Oct, 2019 1 commit
    • Fix bug in the x86 backend involving the CFG. · aa778152
      Andreas Klebinger authored
      This is part two of fixing #17334.
      
      There are two parts to this commit:
      
      - A bugfix for computing loop levels
      - A bugfix of basic block invariants in the NCG.
      
      -----------------------------------------------------------
      
      In the first bug we ended up with a CFG of the form [A -> B -> C].
      This was represented via maps as fromList [(A,B),(B,C)] and later
      transformed into an adjacency array. However, the transformation did
      not include block C in the array, since we only looked at the keys of
      the map.
      
      This was still fine until we tried to look up the successors of C, at
      which point we read outside of the array bounds.
      
      In order to prevent this in the future I refactored the code to include
      all nodes as keys in the map representation, and made this an invariant
      which is checked in a few places.
      
      Overall I expect this to make the code more robust, as any failed
      lookup now represents an error, whereas before failed lookups were
      sometimes expected and sometimes not.
      
      In terms of performance this makes some things cheaper (getting a list
      of all nodes) and others more expensive (adding a new edge). Overall
      this adds up to no noticeable performance difference.
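      A minimal sketch of the new invariant, using hypothetical names and a plain
      successor-list map rather than the actual CFG.hs types: every block is a key,
      even when it has no outgoing edges, so any failed lookup really is an error.

          import qualified Data.Map.Strict as M

          type BlockId = Int
          type CFG     = M.Map BlockId [BlockId]   -- every node maps to its successors

          -- Adding an edge also registers the target as a node with no successors,
          -- which is the "more expensive" part mentioned above.
          addEdge :: BlockId -> BlockId -> CFG -> CFG
          addEdge from to =
              M.insertWith (++) from [to]
            . M.insertWith (\_ old -> old) to []

          -- The invariant checked in a few places: every successor is itself a key.
          checkInvariant :: CFG -> Bool
          checkInvariant cfg = all (`M.member` cfg) (concat (M.elems cfg))

          -- With the old representation, fromList [(A,B),(B,C)] would have lost C.
          example :: CFG
          example = addEdge 0 1 (addEdge 1 2 M.empty)   -- fromList [(0,[1]),(1,[2]),(2,[])]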
      
      -----------------------------------------------------------
      
      Part 2: When the NCG generated a new basic block, it did
      not always insert a NEWBLOCK meta instruction in the stream, which
      caused a quite subtle bug.
      
          During instruction selection a statement `s`
          in a block B with control of the sort: B -> C
          will sometimes result in control
          flow of the sort:
      
                  ┌ < ┐
                  v   ^
            B ->  B1  ┴ -> C
      
          as is the case for some atomic operations.
      
          Now to keep the CFG in sync when introducing B1 we clearly
          want to insert it between B and C. However there is
          a catch when we have to deal with self loops.
      
          We might start with code and a CFG of these forms:
      
          loop:
              stmt1               ┌ < ┐
              ....                v   ^
              stmtX              loop ┘
              stmtY
              ....
              goto loop:
      
          Now we introduce B1:
                                  ┌ ─ ─ ─ ─ ─┐
              loop:               │   ┌ <  ┐ │
              instrs              v   │    │ ^
              ....               loop ┴ B1 ┴ ┘
              instrsFromX
              stmtY
              goto loop:
      
          This is simple, all outgoing edges from loop now simply
          start from B1 instead and the code generator knows which
          new edges it introduced for the self loop of B1.
      
          Disaster strikes if the statement Y follows the same pattern.
          If we apply the same rule that all outgoing edges change then
          we end up with:
      
              loop ─> B1 ─> B2 ┬─┐
                │      │    └─<┤ │
                │      └───<───┘ │
                └───────<────────┘
      
          This is problematic. The edge B1->B1 is modified as expected.
          However the modification is wrong!
      
          The assembly in this case looked like this:
      
          _loop:
              <instrs>
          _B1:
              ...
              cmpxchgq ...
              jne _B1
              <instrs>
              <end _B1>
          _B2:
              ...
              cmpxchgq ...
              jne _B2
              <instrs>
              jmp loop
      
          There is no edge _B2 -> _B1 here. It's still a self loop onto _B1.
      
          The problem here is that really B1 should be two basic blocks.
          Otherwise we have control flow in the *middle* of a basic block.
          A contradiction!
      
          So to account for this we add yet another basic block marker:
      
          _B:
              <instrs>
          _B1:
              ...
              cmpxchgq ...
              jne _B1
              jmp _B1'
          _B1':
              <instrs>
              <end _B1>
          _B2:
              ...
      
          Now when inserting B2 we will only look at the outgoing edges of B1' and
          everything will work out nicely.
      
          You might also wonder why we don't insert jumps at the end of _B1'. There is
          no way another block ends up jumping to the labels _B1 or _B2 since they are
          essentially invisible to other blocks. View them as control flow labels local
          to the basic block if you'd like.
      
          Not doing this ultimately caused (part 2 of) #17334.
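          A sketch of the bookkeeping this amounts to, reusing the Map-based mini-CFG
          from Part 1 above (hypothetical names, not the actual NCG code): the looping
          fragment and its continuation are separate blocks, and only the continuation
          inherits the original outgoing edges.

            -- cur is the block the statement started in, b1 the self-looping
            -- fragment, b1' the continuation block (the _B1' above).
            splitForSelfLoop :: BlockId -> BlockId -> BlockId -> CFG -> (BlockId, CFG)
            splitForSelfLoop cur b1 b1' cfg =
              let outgoing = M.findWithDefault [] cur cfg
                  cfg'     = M.insert cur [b1]          -- cur falls through into the looping fragment
                           . M.insert b1  [b1, b1']     -- the self loop, plus the exit out of it
                           . M.insert b1' outgoing      -- cur's old outgoing edges now leave from b1'
                           $ cfg
              in (b1', cfg')    -- b1' is the block the next statement (e.g. stmtY) must extend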
      aa778152
  6. 16 Oct, 2019 1 commit
    • Add loop level analysis to the NCG backend. · 535a88e1
      Andreas Klebinger authored
      For backends maintaining the CFG during codegen
      we can now find loops and their nesting level.
      
      This is based on the Cmm CFG and dominator analysis.
      
      As a result we can estimate edge frequencies a lot better
      for methods, resulting in far better code layout.
      
      Speedup on nofib: ~1.5%
      Increase in compile times: ~1.9%
      
      To make this feasible this commit adds:
      * Dominator analysis based on the Lengauer-Tarjan Algorithm.
      * An algorithm estimating global edge frequencies from branch
      probabilities - in CFG.hs
      
      A few static branch prediction heuristics:
      
      * Expect to take the backedge in loops.
      * Expect to take the branch NOT exiting a loop.
      * Expect integer vs constant comparisons to be false.
      
      We also treat heap/stack checks specially for branch prediction
      to avoid them being treated as loops.
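      A sketch of what these heuristics boil down to (hypothetical weights and names,
      not the actual CFG.hs code): each CFG edge gets a relative weight, which the
      layout pass later combines into global frequency estimates.

          data EdgeKind
            = Backedge        -- jump back to a loop header
            | LoopExit        -- the branch that leaves a loop
            | CmpConstTrue    -- "true" edge of an integer-vs-constant comparison
            | HeapStackCheck  -- failure branch of a heap/stack check
            | Other

          edgeWeight :: EdgeKind -> Double
          edgeWeight Backedge       = 0.9   -- expect to take the backedge in loops
          edgeWeight LoopExit       = 0.1   -- expect NOT to take the branch exiting a loop
          edgeWeight CmpConstTrue   = 0.3   -- expect integer-vs-constant comparisons to be false
          edgeWeight HeapStackCheck = 0.0   -- so the check's failure path never looks like a loop
          edgeWeight Other          = 0.5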
      535a88e1
  7. 13 Oct, 2019 1 commit
    • Fix #17334 where NCG did not properly update the CFG. · c1bd07cd
      Andreas Klebinger authored
      Statements can change the basic block in which instructions
      are placed during instruction selection.
      
      We have to keep track of such switches of the current basic block,
      as we need this information in order to properly update the CFG.
      
      This commit implements this change and fixes #17334.
      
      We do so by having stmtToInstr return the new block id
      if a statement changed the basic block.
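      A sketch of that idea with deliberately simplified types (the real stmtToInstr
      lives in the NCG monad): each statement reports the block it ended up in, and
      the caller threads that through so later CFG updates target the right block.

          type BlockId = Int

          data Stmt  = Plain String | OpensBlock String BlockId   -- e.g. an atomic op that loops
          data Instr = Instr String deriving Show

          stmtToInstr :: BlockId -> Stmt -> ([Instr], Maybe BlockId)
          stmtToInstr _ (Plain s)          = ([Instr s], Nothing)
          stmtToInstr _ (OpensBlock s new) = ([Instr s], Just new)

          stmtsToInstrs :: BlockId -> [Stmt] -> ([Instr], BlockId)
          stmtsToInstrs cur []       = ([], cur)
          stmtsToInstrs cur (s:rest) =
            let (is, mbNew)  = stmtToInstr cur s
                cur'         = maybe cur id mbNew      -- switch the current block if the statement did
                (rest', end) = stmtsToInstrs cur' rest
            in (is ++ rest', end)                      -- `end` is the block the CFG update must use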
      c1bd07cd
  8. 05 Oct, 2019 1 commit
    • Clean up `#include`s in the compiler · ee8118ca
      John Ericson authored
       - Remove unneeded ones
      
       - Use <..> for inter-package includes.
         Besides general cleanup, this helps distinguish between the RTS we link
         against vs. the RTS we compile for.
      ee8118ca
  9. 25 Sep, 2019 1 commit
    • PmCheck: Only ever check constantly many models against a single pattern · ebc65025
      Sebastian Graf authored
      Introduces a new flag `-fmax-pmcheck-deltas` to achieve that. Deprecates
      the old `-fmax-pmcheck-iter` mechanism in favor of this new flag.
      
      From the user's guide:
      
      Pattern match checking can be exponential in some cases. This limit makes sure
      we scale polynomially in the number of patterns, by forgetting refined
      information gained from a partially successful match. For example, when
      matching `x` against `Just 4`, we split each incoming matching model into two
      sub-models: One where `x` is not `Nothing` and one where `x` is `Just y` but
      `y` is not `4`. When the number of incoming models exceeds the limit, we
      continue checking the next clause with the original, unrefined model.
      
      This also retires the incredibly hard to understand "maximum number of
      refinements" mechanism, because the current mechanism is more general
      and should catch the same exponential cases, such as PrelRules, at the
      same time.
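      As a concrete illustration of the splitting described above: checking `x`
      against `Just 4` splits each incoming model into two (x is Nothing, or x is
      Just y with y /= 4), so with several such matches per clause the model count
      multiplies, which is exactly what `-fmax-pmcheck-deltas` caps.

          f :: Maybe Int -> Maybe Int -> Int
          f (Just 4) (Just 4) = 0    -- each argument match splits every incoming model in two
          f _        _        = 1    -- checked against at most the configured number of models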
      
      -------------------------
      Metric Decrease:
          T11822
      -------------------------
      ebc65025
  10. 13 Sep, 2019 1 commit
  11. 09 Sep, 2019 1 commit
    • Module hierarchy: StgToCmm (#13009) · 447864a9
      Sylvain Henry authored
      Add StgToCmm module hierarchy. Platform modules that are used in several
      other places (NCG, LLVM codegen, Cmm transformations) are put into
      GHC.Platform.
      447864a9
  12. 15 Aug, 2019 1 commit
    • Remove unused imports of the form 'import foo ()' (Fixes #17065) · ca71d551
      James Foster authored
      These kinds of imports are necessary in some cases such as
      importing instances of typeclasses or intentionally creating
      dependencies in the build system, but '-Wunused-imports' can't
      detect when they are no longer needed. This commit removes the
      unused ones currently in the code base (not including test files
      or submodules), with the hope that doing so may increase
      parallelism in the build system by removing unnecessary
      dependencies.
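      For reference, the import form in question looks like this (the imported
      module here is chosen purely for illustration):

          module Example () where

          -- The empty import list brings only the module's instances into scope (or
          -- records an intentional build dependency); -Wunused-imports cannot tell
          -- when such an import stops being needed.
          import Data.Functor.Classes ()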
      ca71d551
  13. 16 Jul, 2019 1 commit
  14. 03 Jul, 2019 1 commit
  15. 28 Jun, 2019 1 commit
    • Correct closure observation, construction, and mutation on weak memory machines. · 11bac115
      Travis Whitaker authored
      Here the following changes are introduced:
          - A read barrier machine op is added to Cmm.
          - The order in which a closure's fields are read and written is changed.
          - Memory barriers are added to RTS code to ensure correctness on
            out-of-order machines with weak memory ordering.
      
      Cmm has a new CallishMachOp called MO_ReadBarrier. On weak memory machines, this
      is lowered to an instruction that ensures memory reads that occur after said
      instruction in program order are not performed before reads coming before said
      instruction in program order. On machines with strong memory ordering properties
      (e.g. X86, SPARC in TSO mode) no such instruction is necessary, so
      MO_ReadBarrier is simply erased. However, such an instruction is necessary on
      weakly ordered machines, e.g. ARM and PowerPC.
      
      Weak memory ordering has consequences for how closures are observed and mutated.
      For example, consider a closure that needs to be updated to an indirection. In
      order for the indirection to be safe for concurrent observers to enter, said
      observers must read the indirection's info table before they read the
      indirectee. Furthermore, the entering observer makes assumptions about the
      closure based on its info table contents, e.g. an INFO_TYPE of IND implies the
      closure has an indirectee pointer that is safe to follow.
      
      When a closure is updated with an indirection, both its info table and its
      indirectee must be written. With weak memory ordering, these two writes can be
      arbitrarily reordered, and perhaps even interleaved with other threads' reads
      and writes (in the absence of memory barrier instructions). Consider this
      example of a bad reordering:
      
      - An updater writes to a closure's info table (INFO_TYPE is now IND).
      - A concurrent observer branches upon reading the closure's INFO_TYPE as IND.
      - A concurrent observer reads the closure's indirectee and enters it. (!!!)
      - An updater writes the closure's indirectee.
      
      Here the update to the indirectee comes too late and the concurrent observer has
      jumped off into the abyss. Speculative execution can also cause us issues,
      consider:
      
      - An observer is about to case on a value in the closure's info table.
      - The observer speculatively reads one or more of the closure's fields.
      - An updater writes to the closure's info table.
      - The observer takes a branch based on the new info table value, but with the
        old closure fields!
      - The updater writes to the closure's other fields, but it's too late.
      
      Because of these effects, reads and writes to a closure's info table must be
      ordered carefully with respect to reads and writes to the closure's other
      fields, and memory barriers must be placed to ensure that reads and writes occur
      in program order. Specifically, updates to a closure must follow this
      pattern:
      
      - Update the closure's (non-info table) fields.
      - Write barrier.
      - Update the closure's info table.
      
      Observing a closure's fields must follow this pattern:
      
      - Read the closure's info pointer.
      - Read barrier.
      - Read the closure's (non-info table) fields.
      
      This patch updates RTS code to obey this pattern. This should fix long-standing
      SMP bugs on ARM (specifically newer aarch64 microarchitectures supporting
      out-of-order execution) and PowerPC. This fixes issue #15449.
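      A Haskell-level sketch of the two orderings (the actual change is in the RTS's
      C code; the writeBarrier/loadLoadBarrier used here come from the atomic-primops
      package and merely stand in for the RTS barrier primitives):

          import Data.IORef
          import Data.Atomics (writeBarrier, loadLoadBarrier)

          data Closure = Closure { infoTable :: IORef String, indirectee :: IORef Int }

          updateClosure :: Closure -> Int -> IO ()
          updateClosure c x = do
            writeIORef (indirectee c) x       -- 1. update the (non-info table) fields
            writeBarrier                      -- 2. write barrier
            writeIORef (infoTable c) "IND"    -- 3. only now publish the new info table

          observeClosure :: Closure -> IO (String, Int)
          observeClosure c = do
            info <- readIORef (infoTable c)   -- 1. read the info pointer
            loadLoadBarrier                   -- 2. read barrier
            x    <- readIORef (indirectee c)  -- 3. then the fields it promises are valid
            pure (info, x)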
      Co-Authored-By: Ben Gamari <ben@well-typed.com>
      11bac115
  16. 20 Jun, 2019 1 commit
    • Move 'Platform' to ghc-boot · bff2f24b
      John Ericson authored
      ghc-pkg needs to be aware of platforms so it can figure out which
      subdirectory within the user package db to use. This is admittedly
      roundabout, but maybe Cabal could use the same notion of a platform as
      GHC to good effect too.
      bff2f24b
  17. 09 Jun, 2019 1 commit
  18. 14 Apr, 2019 1 commit
  19. 11 Apr, 2019 1 commit
    • removing x87 register support from native code gen · 42504f4a
      Carter Schonwald authored
      * simplifies registers to GPR, Float and Double, by removing the SSE2 and X87 constructors
      * makes -msse2 the assumed default for x86 platforms, fixing a long-standing nondeterminism in
      rounding behaviour of 32-bit Haskell code
      * removes the 80-bit floating point representation from the supported float sizes
      * there's still one tiny bit of x87 support needed,
      for handling float and double return values in FFI calls wrt the C ABI on x86_32,
      but this one piece does not leak into the rest of the NCG
      * lots of code that hadn't been touched in a long time got deleted as a
      consequence of all of this
      
      All in all, this change paves the way towards further improvements in
      how GHC handles floating point computations, and makes the native code
      generator more accessible to a larger pool of contributors.
      42504f4a
  20. 09 Apr, 2019 2 commits
    • bd2de4f0
    • codegen: fix memset unroll for small bytearrays, add 64-bit sets · af4cea7f
      Artem Pyanykh authored
      Fixes #16052
      
      When the offset in `setByteArray#` is statically known, we can provide
      better alignment guarantees than just 1 byte.
      
      Also, memset can now do 64-bit wide sets.
      
      The current memset intrinsic is not optimal however and can be
      improved for the case when we know that we deal with
      
      (baseAddress at known alignment) + offset
      
      For instance, on 64-bit
      
      `setByteArray# s 1# 23# 0#`
      
      given that the bytearray is 8-byte aligned, could be unrolled into
      `movb, movw, movl, movq, movq`; but currently it is
      `movb` x23, since an alignment of 1 is all we can embed into the MO_Memset op.
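      A sketch of the width selection such an unrolled memset wants (illustrative
      only, not the NCG code): bump the address to wider alignment with small stores,
      then use the widest stores that fit.

          -- alignment of the base address (e.g. 8), static offset into it,
          -- and the number of bytes to set; returns the store widths to emit.
          storeWidths :: Int -> Int -> Int -> [Int]
          storeWidths align off n
            | n <= 0    = []
            | otherwise = w : storeWidths align (off + w) (n - w)
            where
              w = maximum [ k | k <- [1, 2, 4, 8], k <= n, k <= align, off `mod` k == 0 ]

          -- storeWidths 8 1 23 == [1,2,4,8,8], i.e. movb, movw, movl, movq, movq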
      af4cea7f
  21. 01 Apr, 2019 1 commit
  22. 10 Feb, 2019 1 commit
  23. 30 Jan, 2019 3 commits
  24. 17 Nov, 2018 1 commit
    • NCG: New code layout algorithm. · 912fd2b6
      Andreas Klebinger authored
      Summary:
      This patch implements a new code layout algorithm.
      It has been tested for x86 and is disabled on other platforms.
      
      Performance varies slightly by CPU/machine but in general seems to be better
      by around 2%.
      Nofib shows only small differences of about +/- ~0.5% overall depending on
      flags/machine; performance in other benchmarks improved significantly.
      
      Other benchmarks includes at least the benchmarks of: aeson, vector, megaparsec, attoparsec,
      containers, text and xeno.
      
      While the magnitude of the gains differed, three different CPUs were tested,
      with all of them getting faster, although to differing degrees. I tested:
      Sandy Bridge (Xeon), Haswell and Skylake.
      
      * Library benchmark results summarized:
        * containers: ~1.5% faster
        * aeson: ~2% faster
        * megaparsec: ~2-5% faster
        * xml library benchmarks: 0.2%-1.1% faster
        * vector-benchmarks: 1-4% faster
        * text: 5.5% faster
      
      On average GHC compile times go down, as the speedup from GHC itself being
      compiled with the new layout outweighs the overhead of running the new
      layout algorithm.
      Things this patch does:
      
      * Move code responsible for block layout into its own module.
      * Move the NcgImpl Class into the NCGMonad module.
      * Extract a control flow graph from the input cmm.
      * Update this cfg to keep it in sync with changes during
        asm codegen. This has been tested on x64 but should work on x86.
        Other platforms still use the old codelayout.
      * Assign weights to the edges in the CFG based on type and limited static
        analysis which are then used for block layout.
      * Once we have the final code layout eliminate some redundant jumps.
      
        In particular, turn a sequence of:
            jne .foo
            jmp .bar
          foo:
        into
            je .bar
          foo:
            ..
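        A sketch of that peephole with deliberately simplified types (the real pass
        works on the NCG's instruction type): a conditional jump over an unconditional
        jump to the immediately following block folds into a single inverted
        conditional jump.

            data Cond  = CondE | CondNE deriving (Eq, Show)
            data Instr = JCC Cond String | JMP String | Label String deriving (Eq, Show)

            invert :: Cond -> Cond
            invert CondE  = CondNE
            invert CondNE = CondE

            shortcutJumps :: [Instr] -> [Instr]
            shortcutJumps (JCC c foo : JMP bar : Label l : rest)
              | foo == l             = JCC (invert c) bar : Label l : shortcutJumps rest
            shortcutJumps (i : rest) = i : shortcutJumps rest
            shortcutJumps []         = []

            -- shortcutJumps [JCC CondNE ".foo", JMP ".bar", Label ".foo"]
            --   == [JCC CondE ".bar", Label ".foo"]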
      
      Test Plan: ci
      
      Reviewers: bgamari, jmct, jrtc27, simonmar, simonpj, RyanGlScott
      
      Reviewed By: RyanGlScott
      
      Subscribers: RyanGlScott, trommler, jmct, carter, thomie, rwbarton
      
      GHC Trac Issues: #15124
      
      Differential Revision: https://phabricator.haskell.org/D4726
      912fd2b6
  25. 02 Nov, 2018 1 commit
    • Add Int8# and Word8# · 2c959a18
      Michal Terepeta authored
      This is the first step of implementing:
      https://github.com/ghc-proposals/ghc-proposals/pull/74
      
      The main highlights/changes:
      
          primops.txt.pp gets two new sections for two new primitive types for
          signed and unsigned 8-bit integers (Int8# and Word8# respectively) along
          with basic arithmetic and comparison operations. PrimRep/RuntimeRep get
          two new constructors for them. All of the primops translate into the
          existing MachOps.
      
          For CmmCalls the codegen will now zero-extend the values at the call
          site (so that they can be moved to the right register) and then truncate
          them back to their original width.
      
          x86 native codegen needed some updates, since it wasn't able to deal
          with the new widths, but all the changes are quite localized. LLVM
          backend seems to just work.
      
      This is the second attempt at merging this, after the first attempt in
      D4475 had to be backed out due to regressions on i386.
      
      Bumps binary submodule.
      Signed-off-by: Michal Terepeta <michal.terepeta@gmail.com>
      
      Test Plan: ./validate (on both x86-{32,64})
      
      Reviewers: bgamari, hvr, goldfire, simonmar
      
      Subscribers: rwbarton, carter
      
      Differential Revision: https://phabricator.haskell.org/D5258
      2c959a18
  26. 09 Oct, 2018 1 commit
    • Revert "Add Int8# and Word8#" · d728c3c5
      Ben Gamari authored
      This unfortunately broke i386 support since it introduced references to
      byte-sized registers that don't exist on that architecture.
      
      Reverts binary submodule
      
      This reverts commit 5d5307f9.
      d728c3c5
  27. 07 Oct, 2018 1 commit
    • Add Int8# and Word8# · 5d5307f9
      Michal Terepeta authored
      This is the first step of implementing:
      https://github.com/ghc-proposals/ghc-proposals/pull/74
      
      The main highlights/changes:
      
      - `primops.txt.pp` gets two new sections for two new primitive types
        for signed and unsigned 8-bit integers (`Int8#` and `Word8#`
        respectively) along with basic arithmetic and comparison
        operations. `PrimRep`/`RuntimeRep` get two new constructors for
        them. All of the primops translate into the existing `MachOp`s.
      
      - For `CmmCall`s the codegen will now zero-extend the values at the
        call site (so that they can be moved to the right register) and then
        truncate them back to their original width.
      
      - x86 native codegen needed some updates, since it wasn't able to deal
        with the new widths, but all the changes are quite localized. LLVM
        backend seems to just work.
      
      Bumps binary submodule.
      Signed-off-by: Michal Terepeta <michal.terepeta@gmail.com>
      
      Test Plan: ./validate with new tests
      
      Reviewers: hvr, goldfire, bgamari, simonmar
      
      Subscribers: Abhiroop, dfeuer, rwbarton, thomie, carter
      
      Differential Revision: https://phabricator.haskell.org/D4475
      5d5307f9
  28. 18 Sep, 2018 1 commit
    • Invert FP conditions to eliminate the explicit NaN check. · 6bb9bc7d
      Andreas Klebinger authored
      Summary:
      Optimisation: we don't have to test the parity flag if we
      know the test has already excluded the unordered case: e.g. >
      and >= test for a zero carry flag, which can only occur for
      ordered operands.
      
      By reversing comparisons we can avoid testing the parity
      for < and <= as well. This works since:
      * If any of the arguments is a NaN, CF gets set, resulting in a false result.
      * Since this allows us to rule out NaN, we can exchange the arguments and invert the
        direction of the arrows.
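      A sketch of the resulting rule with simplified types (not the NCG's actual Cond
      type): `>` and `>=` map onto CF/ZF tests that are already false for unordered
      (NaN) operands, so `<` and `<=` are handled by swapping the operands instead of
      emitting a parity check.

          data FpCond = FLt | FLe | FGt | FGe deriving Show

          -- Condition to use after emitting `cmp b a` instead of `cmp a b`.
          reverseOperands :: FpCond -> FpCond
          reverseOperands FLt = FGt
          reverseOperands FLe = FGe
          reverseOperands FGt = FLt
          reverseOperands FGe = FLe

          needsParityCheck :: FpCond -> Bool
          needsParityCheck FLt = True    -- jb  fires on CF=1, which unordered operands also set
          needsParityCheck FLe = True    -- jbe likewise
          needsParityCheck FGt = False   -- ja : CF=0 && ZF=0, never true for unordered operands
          needsParityCheck FGe = False   -- jae: CF=0, never true for unordered operands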
      
      Test Plan: ci/nofib
      
      Reviewers: carter, bgamari, alpmestan
      
      Reviewed By: alpmestan
      
      Subscribers: alpmestan, simonpj, jmct, rwbarton, thomie
      
      GHC Trac Issues: #15196
      
      Differential Revision: https://phabricator.haskell.org/D4990
      6bb9bc7d
  29. 21 Aug, 2018 1 commit
  30. 31 May, 2018 1 commit
    • Change jump targets in JMP_TBL from blocks to X86.JumpDest. · 5748c79e
      Andreas Klebinger authored
      Jump tables always point to blocks when we first generate them. However,
      there are rare situations where we can shortcut one of these blocks to a
      static address during the asm shortcutting pass.
      
      While we already updated the data section accordingly, this patch also
      extends this to the references stored in JMP_TBL.
      
      Test Plan: ci
      
      Reviewers: bgamari
      
      Reviewed By: bgamari
      
      Subscribers: thomie, carter
      
      GHC Trac Issues: #15104
      
      Differential Revision: https://phabricator.haskell.org/D4595
      5748c79e
  31. 16 May, 2018 1 commit
    • Allow CmmLabelDiffOff with different widths · fbd28e2c
      Simon Marlow authored
      Summary:
      This change makes it possible to generate a static 32-bit relative label
      offset on x86_64. Currently we can only generate word-sized label
      offsets.
      
      This will be used in D4634 to shrink info tables.  See D4632 for more
      details.
      
      Test Plan: See D4632
      
      Reviewers: bgamari, niteria, michalt, erikd, jrtc27, osa1
      
      Subscribers: thomie, carter
      
      Differential Revision: https://phabricator.haskell.org/D4633
      fbd28e2c
  32. 05 May, 2018 1 commit
    • Add 'addWordC#' PrimOp · 6243bba7
      Sebastian Graf authored
      This is mostly for congruence with 'subWordC#' and '{add,sub}IntC#'.
      I found 'plusWord2#' while implementing this, which both lacks
      documentation and has a slightly different specification than
      'addWordC#', which means the generic implementation is unnecessarily
      complex.
      
      While I was at it, I also added missing meta-information on PrimOps
      and refactored 'subWordC#'s generic implementation to be branchless.
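      A small usage sketch of the new primop through a boxed wrapper: the second
      component of the unboxed pair is non-zero exactly when the addition overflowed.

          {-# LANGUAGE MagicHash, UnboxedTuples #-}
          import GHC.Exts (Word (W#), Int (I#), addWordC#)

          addWithCarry :: Word -> Word -> (Word, Bool)
          addWithCarry (W# x) (W# y) =
            case addWordC# x y of
              (# s, c #) -> (W# s, I# c /= 0)   -- c is 1# on overflow, 0# otherwise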
      
      Reviewers: bgamari, simonmar, jrtc27, dfeuer
      
      Reviewed By: bgamari, dfeuer
      
      Subscribers: dfeuer, thomie, carter
      
      Differential Revision: https://phabricator.haskell.org/D4592
      6243bba7
  33. 19 Mar, 2018 1 commit
  34. 26 Jan, 2018 1 commit
    • Handle the likely:True case in CmmContFlowOpt · 52dfb25c
      Andreas Klebinger authored
      It's better to fall through to the likely case than to jump to it.
      
      We optimize for this in CmmContFlowOpt when likely:False.
      This commit extends the logic there to handle cases with likely:True
      as well.
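      A sketch of the transformation with deliberately simplified types (not the
      actual CmmContFlowOpt code): when the likely target is the one we would have to
      jump to, invert the condition and swap the targets so the likely case becomes
      the fall-through.

          data CondBranch = CondBranch
            { condition :: String
            , ifTrue    :: Int          -- target we jump to when the condition holds
            , ifFalse   :: Int          -- target we fall through to
            , likely    :: Maybe Bool   -- Just True: the true branch is the likely one
            } deriving Show

          preferFallThrough :: CondBranch -> CondBranch
          preferFallThrough (CondBranch c t f (Just True)) =
            CondBranch ("!(" ++ c ++ ")") f t (Just False)   -- likely case now falls through
          preferFallThrough b = b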
      
      Test Plan: ci
      
      Reviewers: bgamari, simonmar
      
      Reviewed By: bgamari
      
      Subscribers: simonmar, alexbiehl, rwbarton, thomie, carter
      
      Differential Revision: https://phabricator.haskell.org/D4306
      52dfb25c
  35. 21 Jan, 2018 1 commit
    • Add new mbmi and mbmi2 compiler flags · f8557696
      John Ky authored
      This adds support for the bit deposit and extraction operations provided
      by the BMI and BMI2 instruction set extensions on modern amd64 machines.
      
      * Implement x86 code generator for pdep and pext.
      * Properly initialise the bmiVersion field.
      * Add pdep and pext test cases.
      * Fix pattern match for pdep and pext instructions.
      * Fix build of pdep and pext code for 32-bit architectures.
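      A small usage sketch, assuming the pdep#/pext# primops this patch adds (the
      (source, mask) argument order shown here is an assumption); with -mbmi2 these
      should lower to single pdep/pext instructions, otherwise to a software fallback.

          {-# LANGUAGE MagicHash #-}
          import GHC.Exts (Word (W#), pdep#, pext#)

          -- Boxed wrappers for illustration only.
          pdepWord, pextWord :: Word -> Word -> Word
          pdepWord (W# src) (W# mask) = W# (pdep# src mask)  -- deposit bits of src at the mask's set positions
          pextWord (W# src) (W# mask) = W# (pext# src mask)  -- extract the bits of src selected by mask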
      
      Test Plan: Validate
      
      Reviewers: austin, simonmar, bgamari, angerman
      
      Reviewed By: bgamari
      
      Subscribers: trommler, carter, angerman, thomie, rwbarton, newhoggy
      
      GHC Trac Issues: #14206
      
      Differential Revision: https://phabricator.haskell.org/D4236
      f8557696
  36. 22 Nov, 2017 1 commit
  37. 15 Nov, 2017 1 commit
    • Add new mbmi and mbmi2 compiler flags · f5dc8ccc
      John Ky authored
      This adds support for the bit deposit and extraction operations provided
      by the BMI and BMI2 instruction set extensions on modern amd64 machines.
      
      Test Plan: Validate
      
      Reviewers: austin, simonmar, bgamari, hvr, goldfire, erikd
      
      Reviewed By: bgamari
      
      Subscribers: goldfire, erikd, trommler, newhoggy, rwbarton, thomie
      
      GHC Trac Issues: #14206
      
      Differential Revision: https://phabricator.haskell.org/D4063
      f5dc8ccc