1. 04 Jan, 2020 1 commit
  2. 21 Oct, 2019 1 commit
    • rts: Implement concurrent collection in the nonmoving collector · bd8e3ff4
      Ben Gamari authored
      This extends the non-moving collector to allow concurrent collection.
      
      The full design of the collector implemented here is described in detail
      in a technical note
      
          B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
          Compiler" (2018)
      
      This extension involves the introduction of a capability-local
      remembered set, known as the /update remembered set/, which tracks
      objects which may no longer be visible to the collector due to mutation.
      To maintain this remembered set we introduce a write barrier on
      mutations which is enabled while a concurrent mark is underway.
      
      The update remembered set representation is similar to that of the
      nonmoving mark queue: a chunked array of `MarkEntry`s. Each
      `Capability` maintains a single accumulator chunk, which it flushes
      when (a) the chunk fills up, or (b) the nonmoving collector enters its
      post-mark synchronization phase.
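
      To make the chunked representation concrete, here is a minimal sketch
      in C. All names here (CHUNK_CAPACITY, flush_chunk, upd_rem_set_push)
      are illustrative stand-ins, not the RTS's actual identifiers:

          /* Hypothetical sketch of a capability-local, chunked remembered set. */
          #include <stddef.h>

          #define CHUNK_CAPACITY 256             /* assumed size, for illustration */

          typedef struct { void *closure; } MarkEntry;

          typedef struct UpdRemSetChunk_ {
              MarkEntry entries[CHUNK_CAPACITY];
              size_t    used;                    /* number of entries filled */
          } UpdRemSetChunk;

          void flush_chunk(UpdRemSetChunk *c);   /* hands the chunk to the mark queue */

          /* Each Capability owns one accumulator chunk.  Case (a): flush on
           * fill.  Case (b), flushing at post-mark synchronization, is driven
           * by the collector itself. */
          static void upd_rem_set_push(UpdRemSetChunk *acc, void *referent)
          {
              if (acc->used == CHUNK_CAPACITY) {
                  flush_chunk(acc);
                  acc->used = 0;
              }
              acc->entries[acc->used++] = (MarkEntry){ .closure = referent };
          }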
      
      While the write barrier touches a significant amount of code it is
      conceptually straightforward: the mutator must ensure that the referent
      of any pointer it overwrites is added to the update remembered set (a
      sketch of the barrier follows the list below). However, there are a few
      details:
      
       * In the case of objects with a dirty flag (e.g. `MVar`s) we can
         exploit the fact that only the *first* mutation requires a write
         barrier.
      
       * Weak references, as usual, complicate things. In particular, we must
         ensure that the referent of a weak object is marked if dereferenced by
         the mutator. For this we (unfortunately) must introduce a read
         barrier, as described in Note [Concurrent read barrier on deRefWeak#]
         (in `NonMovingMark.c`).
      
       * Stable names are also a bit tricky as described in Note [Sweeping
         stable names in the concurrent collector] (`NonMovingSweep.c`).
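
      As a minimal sketch of the rule above, assuming a global flag that is
      set only while a concurrent mark is underway (names are illustrative;
      the push helper is the one sketched earlier):

          #include <stdbool.h>

          extern bool nonmoving_write_barrier_enabled;     /* set during concurrent mark */
          void upd_rem_set_push_referent(void *referent);  /* per-capability push */

          /* Overwrite a pointer field, preserving its old referent for the
           * collector: the old target may otherwise become invisible to the
           * concurrent mark. */
          static void write_barrier_store(void **field, void *new_value)
          {
              void *old = *field;
              if (nonmoving_write_barrier_enabled && old != NULL) {
                  upd_rem_set_push_referent(old);
              }
              *field = new_value;
          }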
      
      We take quite some pains to ensure that the high thread count often seen
      in parallel Haskell applications doesn't affect pause times. To this end
      we allow thread stacks to be marked either by the thread itself (when
      it executes or its stack underflows) or by the concurrent mark thread
      (if the thread owning the stack is never scheduled). There is a
      non-trivial handshake to ensure that this happens without racing,
      described in Note [StgStack dirtiness flags and concurrent marking].
      Co-authored-by: Ömer Sinan Ağacan <omer@well-typed.com>
  3. 28 Jun, 2019 1 commit
    • Correct closure observation, construction, and mutation on weak memory machines. · 11bac115
      Travis Whitaker authored
      Here the following changes are introduced:
          - A read barrier machine op is added to Cmm.
          - The order in which a closure's fields are read and written is changed.
          - Memory barriers are added to RTS code to ensure correctness on
            out-of-order machines with weak memory ordering.
      
      Cmm has a new CallishMachOp called MO_ReadBarrier. On weak memory machines,
      this is lowered to an instruction that ensures reads occurring after it in
      program order are not performed before reads that precede it. On machines
      with strong memory ordering (e.g. x86, SPARC in TSO mode) no such
      instruction is necessary, so MO_ReadBarrier is simply erased. However, it
      is required on weakly ordered machines, e.g. ARM and PowerPC.
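
      As a rough illustration (this is not GHC's code generator), the effect
      of MO_ReadBarrier corresponds to a C11 acquire fence:

          #include <stdatomic.h>

          /* Orders preceding loads before subsequent loads.  Compiles to no
           * instruction on x86/TSO; emits a fence (e.g. ARM dmb, PowerPC
           * lwsync) on weakly ordered machines. */
          static inline void read_barrier(void)
          {
              atomic_thread_fence(memory_order_acquire);
          }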
      
      Weak memory ordering has consequences for how closures are observed and
      mutated. For example, consider a closure that needs to be updated to an
      indirection. In order for the indirection to be safe for concurrent
      observers to enter, those observers must read the indirection's info table
      before they read the indirectee. Furthermore, the entering observer makes
      assumptions about the closure based on its info table contents, e.g. an
      INFO_TYPE of IND implies the closure has an indirectee pointer that is
      safe to follow.
      
      When a closure is updated with an indirection, both its info table and its
      indirectee must be written. With weak memory ordering, these two writes can be
      arbitrarily reordered, and perhaps even interleaved with other threads' reads
      and writes (in the absence of memory barrier instructions). Consider this
      example of a bad reordering:
      
      - An updater writes to a closure's info table (INFO_TYPE is now IND).
      - A concurrent observer branches upon reading the closure's INFO_TYPE as IND.
      - A concurrent observer reads the closure's indirectee and enters it. (!!!)
      - An updater writes the closure's indirectee.
      
      Here the update to the indirectee comes too late and the concurrent observer has
      jumped off into the abyss. Speculative execution can also cause us issues,
      consider:
      
      - An observer is about to case on a value in the closure's info table.
      - The observer speculatively reads one or more of the closure's fields.
      - An updater writes to the closure's info table.
      - The observer takes a branch based on the new info table value, but with the
        old closure fields!
      - The updater writes to the closure's other fields, but it's too late.
      
      Because of these effects, reads and writes to a closure's info table must be
      ordered carefully with respect to reads and writes to the closure's other
      fields, and memory barriers must be placed to ensure that reads and writes occur
      in program order. Specifically, updates to a closure must follow this
      pattern:
      
      - Update the closure's (non-info table) fields.
      - Write barrier.
      - Update the closure's info table.
      
      Observing a closure's fields must follow this pattern (both patterns
      are sketched in C below):
      
      - Read the closure's info pointer.
      - Read barrier.
      - Read the closure's (non-info table) fields.
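
      A sketch of both patterns in C11, with a deliberately simplified
      closure layout (field names and types are illustrative, not the
      RTS's):

          #include <stdatomic.h>

          typedef struct {
              const void *info;        /* info table pointer, written last */
              void       *indirectee;  /* payload field, written first */
          } Closure;

          /* Updater: fields first, write barrier, then the info table. */
          static void update_to_indirection(Closure *c, void *target,
                                            const void *ind_info)
          {
              c->indirectee = target;
              atomic_thread_fence(memory_order_release);  /* write barrier */
              c->info = ind_info;
          }

          /* Observer: info pointer first, read barrier, then the fields. */
          static void *observe_indirectee(const Closure *c)
          {
              const void *info = c->info;                 /* branch on this */
              atomic_thread_fence(memory_order_acquire);  /* read barrier */
              (void)info;
              return c->indirectee;                       /* now safe to read */
          }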
      
      This patch updates RTS code to obey this pattern. This should fix long-standing
      SMP bugs on ARM (specifically newer aarch64 microarchitectures supporting
      out-of-order execution) and PowerPC. This fixes issue #15449.
      Co-authored-by: Ben Gamari <ben@well-typed.com>
  4. 12 Jan, 2019 1 commit
  5. 02 Nov, 2018 1 commit
    • rts: Add FALLTHROUGH macro · 6bb8aaa3
      Ben Gamari authored
      Instead of using the GCC `/* fallthrough */` comment syntax we now use
      `__attribute__((fallthrough))`, which Phyx says should be more portable
      than the former.
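
      A sketch of what such a macro looks like (the exact compiler-version
      guards are illustrative):

          #if defined(__GNUC__) && __GNUC__ >= 7
          #define FALLTHROUGH __attribute__((fallthrough))
          #else
          #define FALLTHROUGH ((void)0)  /* no-op where the attribute is missing */
          #endif

          /* Usage: marks an intentional fall-through for -Wimplicit-fallthrough. */
          static int classify(int tag)
          {
              switch (tag) {
              case 0:
                  tag += 1;
                  FALLTHROUGH;
              case 1:
                  return tag;
              default:
                  return -1;
              }
          }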
      
      Also adds a missing fallthrough annotation in the MachO linker,
      fixing #14613.
      
      Reviewers: erikd, simonmar
      
      Reviewed By: simonmar
      
      Subscribers: rwbarton, carter
      
      GHC Trac Issues: #14613
      
      Differential Revision: https://phabricator.haskell.org/D5292
  6. 12 Jul, 2018 1 commit
    • Fix deadlock between STM and throwTo · 7fc418df
      Simon Marlow authored
      There was a lock-order reversal between lockTSO() and the TVar lock,
      see #15136 for the details.
      
      It turns out we can fix this pretty easily by just deleting all the
      locking code(!).  The principle for unblocking a `BlockedOnSTM` thread
      then becomes the same as for other kinds of blocking: if the TSO
      belongs to this capability then we do it directly, otherwise we send a
      message to the capability that owns the TSO. That is, a thread blocked
      on STM is owned by its capability, as it should be.
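
      A sketch of the principle with simplified types; the helper functions
      here are hypothetical, not the RTS's exact entry points:

          typedef struct Capability_ Capability;
          typedef struct StgTSO_ {
              Capability *cap;   /* the capability that owns this TSO */
          } StgTSO;

          void unblock_locally(Capability *cap, StgTSO *tso);      /* hypothetical */
          void send_wakeup_message(Capability *from, Capability *to,
                                   StgTSO *tso);                   /* hypothetical */

          /* Wake a thread blocked on STM: only the owning capability
           * mutates its own threads. */
          static void wakeup_stm_thread(Capability *me, StgTSO *tso)
          {
              if (tso->cap == me) {
                  unblock_locally(me, tso);   /* we own it: do it directly */
              } else {
                  /* may be sent more than once; safe, just not efficient */
                  send_wakeup_message(me, tso->cap, tso);
              }
          }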
      
      The possible downside of this is that we might send multiple messages
      to wake up a thread when the thread is on another capability. This is
      safe, it's just not very efficient.  I'll try to do some experiments
      to see if this is a problem.
      
      Test Plan: Test case from #15136 doesn't deadlock any more.
      
      Reviewers: bgamari, osa1, erikd
      
      Reviewed By: osa1
      
      Subscribers: rwbarton, thomie, carter
      
      GHC Trac Issues: #15136
      
      Differential Revision: https://phabricator.haskell.org/D4956
  7. 26 Jul, 2017 1 commit
  8. 03 Jul, 2017 1 commit
  9. 14 May, 2017 1 commit
  10. 29 Apr, 2017 1 commit
  11. 23 Feb, 2017 1 commit
  12. 29 Nov, 2016 1 commit
  13. 17 May, 2016 1 commit
    • rts: More const-correctness fixes · 33c029dd
      Erik de Castro Lopo authored
      In addition to more const-correctness fixes this patch fixes an
      infelicity of the previous const-correctness patch (995cf0f3) which
      left `UNTAG_CLOSURE` taking a `const StgClosure` pointer parameter
      but returning a non-const pointer. Here we restore the original type
      signature of `UNTAG_CLOSURE` and add a new function
      `UNTAG_CONST_CLOSURE` which takes and returns a const `StgClosure`
      pointer and uses that wherever possible.
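
      The resulting pair looks roughly like this (StgWord and TAG_MASK are
      simplified stand-ins for the RTS's definitions):

          #include <stdint.h>

          typedef uintptr_t StgWord;             /* simplified stand-in */
          typedef struct StgClosure_ StgClosure;
          #define TAG_MASK ((StgWord)7)          /* low pointer-tag bits, 64-bit */

          /* Original signature restored: non-const in, non-const out. */
          static inline StgClosure *UNTAG_CLOSURE(StgClosure *p)
          {
              return (StgClosure *)((StgWord)p & ~TAG_MASK);
          }

          /* New: const in, const out, used wherever possible. */
          static inline const StgClosure *UNTAG_CONST_CLOSURE(const StgClosure *p)
          {
              return (const StgClosure *)((StgWord)p & ~TAG_MASK);
          }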
      
      Test Plan: Validate on Linux, OS X and Windows
      
      Reviewers: Phyx, hsyl20, bgamari, austin, simonmar, trofi
      
      Reviewed By: simonmar, trofi
      
      Subscribers: thomie
      
      Differential Revision: https://phabricator.haskell.org/D2231
  14. 04 May, 2016 1 commit
  15. 07 Feb, 2016 1 commit
  16. 07 Jul, 2015 1 commit
    • fix EBADF unqueueing in select backend (Trac #10590) · 5857e0af
      Sergei Trofimovich authored
      Alexander found an interesting case:
      1. We have a queue of two waiters in blocked_queue.
      2. The first file descriptor changes state to RUNNABLE,
         the second to INVALID.
      3. awaitEvent dequeues the RUNNABLE thread onto the run
         queue and then attempts to unqueue the thread blocked
         on the INVALID descriptor.
      
      Unqueueing the INVALID waiter fails as follows:
              #3  0x000000000045cf1c in barf (s=0x4c1cb0 "removeThreadFromDeQueue: not found")
                                     at rts/RtsMessages.c:42
              #4  0x000000000046848b in removeThreadFromDeQueue (...) at rts/Threads.c:249
              #5  0x000000000049a120 in removeFromQueues (...) at rts/RaiseAsync.c:719
              #6  0x0000000000499502 in throwToSingleThreaded__ (...) at rts/RaiseAsync.c:67
              #7  0x0000000000499555 in throwToSingleThreaded (..) at rts/RaiseAsync.c:75
              #8  0x000000000047c27d in awaitEvent (wait=rtsFalse) at rts/posix/Select.c:415
      
      The problem is that throwToSingleThreaded tries to unqueue the TSO
      from blocked_queue, but awaitEvent leaves blocked_queue in an
      inconsistent state while it traverses the queue:
      
            case RTS_FD_IS_READY:
                IF_DEBUG(scheduler,
                    debugBelch("Waking up blocked thread %lu\n",
                               (unsigned long)tso->id));
                tso->why_blocked = NotBlocked;
                tso->_link = END_TSO_QUEUE;              // Here we break the queue head
                pushOnRunQueue(&MainCapability,tso);
                break;
      Signed-off-by: Sergei Trofimovich <siarheit@google.com>
      
      Test Plan: tested on a sample from T10590
      
      Reviewers: austin, bgamari, simonmar
      
      Reviewed By: bgamari, simonmar
      
      Subscribers: qnikst, thomie, bgamari
      
      Differential Revision: https://phabricator.haskell.org/D1024
      
      GHC Trac Issues: #10590, #4934
  17. 12 Nov, 2014 1 commit
  18. 21 Oct, 2014 1 commit
  19. 20 Oct, 2014 1 commit
  20. 02 Oct, 2014 1 commit
    • Rename _closure to _static_closure, apply naming consistently. · 35672072
      Edward Z. Yang authored
      Summary:
      In preparation for indirecting all references to closures,
      we rename _closure to _static_closure to ensure any old code
      will get an undefined symbol error.  In order to reference
      a closure foobar_closure (which is now undefined), you should instead
      use STATIC_CLOSURE(foobar).  For convenience, a number of these
      old identifiers are macro'd.
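
      The convenience macro plausibly has a shape along these lines; this is
      a guess for illustration, not the definition from the patch:

          /* Hypothetical: expand STATIC_CLOSURE(foobar) to the address of the
           * renamed symbol, so no explicit '&' is needed at use sites. */
          #define STATIC_CLOSURE(name) ((StgClosure *)&name##_static_closure)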
      
      Across C-- and C (Windows and otherwise), there were differing
      conventions on whether foobar_closure or &foobar_closure
      was the address of the closure.  Now, all foobar_closure references
      are addresses, and no & is necessary.
      
      CHARLIKE/INTLIKE were not changed, simply alpha-renamed.
      
      Part of remove HEAP_ALLOCED patch set (#8199)
      
      Depends on D265
      Signed-off-by: Edward Z. Yang <ezyang@mit.edu>
      
      Test Plan: validate
      
      Reviewers: simonmar, austin
      
      Subscribers: simonmar, ezyang, carter, thomie
      
      Differential Revision: https://phabricator.haskell.org/D267
      
      GHC Trac Issues: #8199
  21. 29 Sep, 2014 1 commit
  22. 28 Jul, 2014 1 commit
  23. 04 May, 2014 1 commit
  24. 02 May, 2014 1 commit
    • Per-thread allocation counters and limits · b0534f78
      Simon Marlow authored
      This tracks the amount of memory allocated by each thread in a
      counter stored in the TSO.  Optionally, when the counter drops below
      zero (it counts down), the thread can be sent an asynchronous
      exception: AllocationLimitExceeded.  When this happens, the thread is
      given a small additional limit so that it can handle the exception.
      See the documentation in GHC.Conc for more details.
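
      A sketch of the counting-down mechanism, with assumed field and helper
      names (in reality the counter lives in the TSO and is largely
      maintained by generated code):

          #include <stdbool.h>
          #include <stdint.h>

          typedef struct StgTSO_ {
              int64_t alloc_limit;    /* bytes remaining; counts down */
              bool    limit_enabled;
          } StgTSO;

          void raise_alloc_limit_exceeded(StgTSO *tso);  /* hypothetical helper */

          #define HANDLER_HEADROOM (100 * 1024)  /* assumed extra allowance */

          /* Conceptually runs whenever the thread allocates n bytes. */
          static void note_allocation(StgTSO *tso, int64_t n)
          {
              tso->alloc_limit -= n;
              if (tso->limit_enabled && tso->alloc_limit < 0) {
                  /* grant headroom so the handler itself can allocate; the
                   * limit stays armed, so catching the exception re-triggers */
                  tso->alloc_limit = HANDLER_HEADROOM;
                  raise_alloc_limit_exceeded(tso);
              }
          }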
      
      Allocation limits are similar to timeouts, but
      
        - timeouts use real time, not CPU time.  Allocation limits do not
          count anything while the thread is blocked or in foreign code.
      
        - timeouts don't re-trigger if the thread catches the exception,
          allocation limits do.
      
        - timeouts can catch non-allocating loops, if you use
          -fno-omit-yields.  This doesn't work for allocation limits.
      
      I couldn't measure any impact on benchmarks with these changes, even
      for nofib/smp.
  25. 21 Oct, 2013 1 commit
  26. 11 Oct, 2013 1 commit
  27. 09 Jul, 2013 1 commit
  28. 29 Mar, 2013 1 commit
    • ticky enhancements · 460abd75
      nfrisby authored
        * the new StgCmmArgRep module breaks a dependency cycle; I also
          untabified it, but made no real changes
      
        * updated the documentation in the wiki and changed the user guide to
          point there
      
        * moved the allocation enters for ticky and CCS to after the heap check
      
          * I left LDV where it was, which was before the heap check at least
            once, since I have no idea what it is
      
        * standardized all (active?) ticky alloc totals to bytes
      
        * in order to avoid double counting, StgCmmLayout.adjustHpBackwards
          no longer bumps ALLOC_HEAP_ctr
      
        * I resurrected the SLOW_CALL counters
      
          * the new module StgCmmArgRep breaks cyclic dependency between
            Layout and Ticky (which the SLOW_CALL counters cause)
      
          * renamed them SLOW_CALL_fast_<pattern> and VERY_SLOW_CALL
      
        * added ALLOC_RTS_ctr and _tot ticky counters
      
          * e.g. allocation by Storage.c:allocate or a BUILD_PAP in stg_ap_*_info
      
          * resurrected ticky counters for ALLOC_THK, ALLOC_PAP, and
            ALLOC_PRIM
      
          * added -ticky and -DTICKY_TICKY in ways.mk for debug ways
      
        * added a ticky counter for total LNE entries
      
        * new flags for ticky: -ticky-allocd -ticky-dyn-thunk -ticky-LNE
      
          * all off by default
      
          * -ticky-allocd: tracks allocation *of* a closure in addition to
             allocation *by* that closure
      
          * -ticky-dyn-thunk tracks dynamic thunks as if they were functions
      
          * -ticky-LNE tracks LNEs as if they were functions
      
        * updated the ticky report format, including making the argument
          categories (more?) accurate again
      
        * the printed name for things in the report includes the unique of
          their ticky parent, as well as whether or not they are top-level
  29. 08 Oct, 2012 1 commit
    • Produce new-style Cmm from the Cmm parser · a7c0387d
      Simon Marlow authored
      The main change here is that the Cmm parser now allows high-level cmm
      code with argument-passing and function calls.  For example:
      
      foo ( gcptr a, bits32 b )
      {
        if (b > 0) {
           // we can make tail calls passing arguments:
           jump stg_ap_0_fast(a);
        }
      
        return (a, b);
      }
      
      More details on the new cmm syntax are in Note [Syntax of .cmm files]
      in CmmParse.y.
      
      The old syntax is still more-or-less supported for those occasional
      code fragments that really need to explicitly manipulate the stack.
      However there are a couple of differences: it is now obligatory to
      give a list of live GlobalRegs on every jump, e.g.
      
        jump %ENTRY_CODE(Sp(0)) [R1];
      
      Again, more details in Note [Syntax of .cmm files].
      
      I have rewritten most of the .cmm files in the RTS into the new
      syntax, except for AutoApply.cmm which is generated by the genapply
      program: this file could be generated in the new syntax instead and
      would probably be better off for it, but I ran out of enthusiasm.
      
      Some other changes in this batch:
      
       - The PrimOp calling convention is gone, primops now use the ordinary
         NativeNodeCall convention.  This means that primops and "foreign
         import prim" code must be written in high-level cmm, but they can
         now take more than 10 arguments.
      
       - CmmSink now does constant-folding (should fix #7219)
      
       - .cmm files now go through the cmmPipeline, and as a result we
         generate better code in many cases.  All the object files generated
         for the RTS .cmm files are now smaller.  Performance should be
         better too, but I haven't measured it yet.
      
       - RET_DYN frames are removed from the RTS, lots of code goes away
      
       - we now have some more canned GC points to cover unboxed-tuples with
         2-4 pointers, which will reduce code size a little.
  30. 25 Aug, 2012 1 commit
    • Make a function for get_itbl, rather than using a CPP macro · 8413d838
      ian@well-typed.com authored
      This has several advantages (a sketch of the new function follows this list):
      * It can be called from gdb
      * There is more type information for the user, and type checking
        for the compiler
      * Less opportunity for things to go wrong, e.g. due to missing
        parentheses or repeated execution
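
      The replacement is roughly the following inline function, as it would
      sit among the RTS headers (simplified; INFO_PTR_TO_STRUCT adjusts the
      info pointer for tables-next-to-code builds):

          /* Sketch: an inline function instead of a CPP macro.  Relies on
           * StgClosure, StgInfoTable and INFO_PTR_TO_STRUCT from the
           * surrounding RTS headers. */
          static inline const StgInfoTable *get_itbl(const StgClosure *c)
          {
              return INFO_PTR_TO_STRUCT(c->header.info);
          }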
      
      The sizes of the non-debug .o files haven't changed (other than
      Inlines.o), so I'm pretty sure the compiled code is identical.
  31. 07 Jun, 2012 1 commit
  32. 27 Feb, 2012 1 commit
  33. 06 Jan, 2012 1 commit
  34. 05 Jan, 2012 1 commit
  35. 14 Nov, 2011 1 commit
  36. 02 Nov, 2011 1 commit
    • Overhaul of infrastructure for profiling, coverage (HPC) and breakpoints · 7bb0447d
      Simon Marlow authored
      User visible changes
      ====================
      
      Profiling
      ---------
      
      Flags renamed (the old ones are still accepted for now):
      
        OLD            NEW
        ---------      ------------
        -auto-all      -fprof-auto
        -auto          -fprof-exported
        -caf-all       -fprof-cafs
      
      New flags:
      
        -fprof-auto              Annotates all bindings (not just top-level
                                 ones) with SCCs
      
        -fprof-top               Annotates just top-level bindings with SCCs
      
        -fprof-exported          Annotates just exported bindings with SCCs
      
        -fprof-no-count-entries  Do not maintain entry counts when profiling
                                 (can make profiled code go faster; useful with
                                 heap profiling where entry counts are not used)
      
      Cost-centre stacks have a new semantics, which should in most cases
      result in more useful and intuitive profiles.  If you find this not to
      be the case, please let me know.  This is the area where I have been
      experimenting most, and the current solution is probably not the
      final version, however it does address all the outstanding bugs and
      seems to be better than GHC 7.2.
      
      Stack traces
      ------------
      
      +RTS -xc now gives more information.  If the exception originates from
      a CAF (as is common, because GHC tends to lift exceptions out to the
      top-level), then the RTS walks up the stack and reports the stack in
      the enclosing update frame(s).
      
      Result: +RTS -xc is much more useful now - but you still have to
      compile for profiling to get it.  I've played around a little with
      adding 'head []' to GHC itself, and +RTS -xc does pinpoint the problem
      quite accurately.
      
      I plan to add more facilities for stack tracing (e.g. in GHCi) in the
      future.
      
      Coverage (HPC)
      --------------
      
       * derived instances are now coloured yellow if they weren't used
       * likewise record field names
       * entry counts are more accurate (hpc --fun-entry-count)
       * tab width is now correct (markup was previously off in source with
         tabs)
      
      Internal changes
      ================
      
      In Core, the Note constructor has been replaced by
      
              Tick (Tickish b) (Expr b)
      
      which is used to represent all the kinds of source annotation we
      support: profiling SCCs, HPC ticks, and GHCi breakpoints.
      
      Depending on the properties of the Tickish, different transformations
      apply to Tick.  See CoreUtils.mkTick for details.
      
      Tickets
      =======
      
      This commit closes the following tickets, test cases to follow:
      
        - Close #2552: not a bug, but the behaviour is now more intuitive
          (test is T2552)
      
        - Close #680 (test is T680)
      
        - Close #1531 (test is result001)
      
        - Close #949 (test is T949)
      
        - Close #2466: test case has bitrotted (doesn't compile against current
          version of vector-space package)
  37. 02 Feb, 2011 1 commit
    • GC refactoring and cleanup · 18896fa2
      Simon Marlow authored
      Now we keep any partially-full blocks in the gc_thread[] structs after
      each GC, rather than moving them to the generation.  This should give
      us slightly better locality (though I wasn't able to measure any
      difference).
      
      Also in this patch: better sanity checking with THREADED.
  38. 15 Dec, 2010 1 commit
    • Implement stack chunks and separate TSO/STACK objects · f30d5273
      Simon Marlow authored
      This patch makes two changes to the way stacks are managed:
      
      1. The stack is now stored in a separate object from the TSO.
      
      This means that it is easier to replace the stack object for a thread
      when the stack overflows or underflows; we don't have to leave behind
      the old TSO as an indirection any more.  Consequently, we can remove
      ThreadRelocated and deRefTSO(), which were a pain.
      
      This is obviously the right thing, but the last time I tried to do it
      it made performance worse.  This time I seem to have cracked it.
      
      2. Stacks are now represented as a chain of chunks, rather than
         a single monolithic object.
      
      The big advantage here is that individual chunks are marked clean or
      dirty according to whether they contain pointers to the young
      generation, and the GC can avoid traversing clean stack chunks during
      a young-generation collection.  This means that programs with deep
      stacks will see a big saving in GC overhead when using the default GC
      settings.
      
      A secondary advantage is that there is much less copying involved as
      the stack grows.  Programs that quickly grow a deep stack will see big
      improvements.
      
      In some ways the implementation is simpler, as nothing special needs
      to be done to reclaim stack as the stack shrinks (the GC just recovers
      the dead stack chunks).  On the other hand, we have to manage stack
      underflow between chunks, so there's a new stack frame
      (UNDERFLOW_FRAME), and we now have separate TSO and STACK objects.
      The total amount of code is probably about the same as before.
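
      A sketch of the two new object shapes (field sets simplified, names
      approximate):

          #include <stdint.h>

          /* The stack, now an object separate from the TSO: */
          typedef struct StgStack_ {
              /* StgHeader header; */
              uint32_t  stack_size;   /* size of this chunk, in words */
              uint8_t   dirty;        /* chunk points into the young generation? */
              void    **sp;           /* current stack pointer within the chunk */
              void     *stack[];      /* the stack words themselves */
          } StgStack;

          /* At the bottom of every chunk but the oldest sits an
           * UNDERFLOW_FRAME linking to the next (older) chunk: */
          typedef struct {
              /* const StgInfoTable *info;   == the underflow frame's info table */
              struct StgStack_ *next_chunk;
          } StgUnderflowFrame;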
      
      There are new RTS flags:
      
         -ki<size> Sets the initial thread stack size (default 1k), e.g. -ki4k, -ki2m
         -kc<size> Sets the stack chunk size (default 32k)
         -kb<size> Sets the stack chunk buffer size (default 1k)
      
      -ki was previously called just -k, and the old name is still accepted
      for backwards compatibility.  These new options are documented.
  39. 03 Dec, 2010 2 commits