1. 14 Jun, 2016 1 commit
  2. 04 May, 2016 1 commit
  3. 22 Jul, 2015 1 commit
    • Two step allocator for 64-bit systems · 0d1a8d09
      gcampax authored
      The current OS memory allocator conflates the concepts of allocating
      address space and allocating memory, which makes the HEAP_ALLOCED()
      implementation excessively complicated (as the only thing it cares
      about is address space layout) and slow. Instead, what we want
      is to allocate a single insanely large contiguous block of address
      space (to make HEAP_ALLOCED() checks fast), and then commit subportions
      of that in 1MB blocks as we did before.
      This is currently behind a flag, USE_LARGE_ADDRESS_SPACE, that is only enabled for
      certain OSes.
      Test Plan: validate
      Reviewers: simonmar, ezyang, austin
      Subscribers: thomie, carter
      Differential Revision: https://phabricator.haskell.org/D524
      GHC Trac Issues: #9706
  4. 10 Jul, 2015 1 commit
  5. 29 Sep, 2014 1 commit
  6. 20 Aug, 2014 1 commit
  7. 28 Jul, 2014 1 commit
  8. 18 Jul, 2011 6 commits
    • Add new fully-accurate per-spark trace/eventlog events · 084b64f2
      Duncan Coutts authored
      Replaces the existing EVENT_RUN/STEAL_SPARK events with 7 new events
      covering all stages of the spark lifecycle:
        create, dud, overflow, run, steal, fizzle, gc
      The sampled spark events are still available. There are now two event
      classes for sparks, the sampled and the fully accurate. They can be
      enabled/disabled independently. By default +RTS -l includes the sampled
      but not full detail spark events. Use +RTS -lf-p to enable the detailed
      'f' and disable the sampled 'p' spark events.
      Includes work by Mikolaj <mikolaj.konarski@gmail.com>
    • Move allocation of spark pools into initCapability · 5d091088
      Duncan Coutts authored
      Rather than a separate phase of initSparkPools. It means all the spark
      stuff for a capability is initialised at the same time, which then
      becomes a good place to stick an initial spark trace event.
    • Classify overflowed sparks separately · fa8d20e6
      Duncan Coutts authored
      When you use `par` to make a spark, if the spark pool on the current
      capability is full then the spark is discarded. This represents a
      loss of potential parallelism and it also means there are simply a
      lot of sparks around. Both are things that might be of concern to a
      programmer when tuning a parallel program that uses par.
      The "+RTS -s" stats command now reports overflowed sparks, e.g.
      SPARKS: 100001 (15521 converted, 84480 overflowed, 0 dud, 0 GC'd, 0 fizzled)
    • Use a struct for the set of spark counters · 556557eb
      Duncan Coutts authored
    • Change tryStealSpark so it does not consume fizzled sparks · ededf355
      Duncan Coutts authored
      We want to count fizzled sparks accurately. Now tryStealSpark returns
      fizzled sparks, and the callers now update the fizzled spark count.
    • Improve the newSpark dud test by using the pointer tag bits · e0b98b42
      Duncan Coutts authored
      newSpark() checks if the spark is a dud, and if so does not add it to
      the spark pool. Previously, newSpark would discard the pointer tag bits
      and just check closure_SHOULD_SPARK(p). We can take advantage of the
      tag bits which can tell us if the pointer points to a value. If it is,
      it's a dud spark and we don't need to add it to the spark pool.
  9. 18 Mar, 2011 1 commit
  10. 14 Feb, 2011 1 commit
    • pruneSparkQueue: check for tagged pointers · 9ef55740
      Simon Marlow authored
      This was a bug in 6.12.3.  I think the problem no longer occurs due to
      the way sparks are treated as weak pointers, but it doesn't hurt to
      test for tagged pointers anyway: better to do the test than have a
      subtle invariant.
  11. 11 Nov, 2010 1 commit
  12. 01 Nov, 2010 1 commit
  13. 25 May, 2010 2 commits
  14. 23 Nov, 2009 1 commit
  15. 12 Dec, 2009 1 commit
    • Expose all EventLog events as DTrace probes · 015d3d46
      chak@cse.unsw.edu.au. authored
      - Defines a DTrace provider, called 'HaskellEvent', that provides a probe
        for every event of the eventlog framework.
      - In contrast to the original eventlog, the DTrace probes are available in
        all flavours of the runtime system (DTrace probes have virtually no
        overhead if not enabled); when -DTRACING is defined both the regular
        event log as well as DTrace probes can be used.
      - Currently, Mac OS X only.  User-space DTrace probes are implemented
        differently on Mac OS X than in the original DTrace implementation.
        Nevertheless, it shouldn't be too hard to enable these probes on other
        platforms, too.
      - Documentation is at http://hackage.haskell.org/trac/ghc/wiki/DTrace
  16. 29 Aug, 2009 1 commit
    • Unify event logging and debug tracing. · a5288c55
      Simon Marlow authored
        - tracing facilities are now enabled with -DTRACING, and -DDEBUG
          additionally enables debug-tracing.  -DEVENTLOG has been removed.
        - -debug now implies -eventlog
        - events can be printed to stderr instead of being sent to the
          binary .eventlog file by adding +RTS -v (which is implied by the
          +RTS -Dx options).
        - -Dx debug messages can be sent to the binary .eventlog file
          by adding +RTS -l.  This should help debugging by reducing
          the impact of debug tracing on execution time.
        - Various debug messages that duplicated the information in events
          have been removed.
  17. 02 Aug, 2009 1 commit
    • RTS tidyup sweep, first phase · a2a67cd5
      Simon Marlow authored
      The first phase of this tidyup is focussed on the header files, and in
      particular making sure we are exposing publicly exactly what we need
      to, and no more.
       - Rts.h now includes everything that the RTS exposes publicly,
         rather than a random subset of it.
       - Most of the public header files have moved into subdirectories, and
         many of them have been renamed.  But clients should not need to
         include any of the other headers directly, just #include the main
         public headers: Rts.h, HsFFI.h, RtsAPI.h.
       - All the headers needed for via-C compilation have moved into the
         stg subdirectory, which is self-contained.  Most of the headers for
         the rest of the RTS APIs have moved into the rts subdirectory.
       - I left MachDeps.h where it is, because it is so widely used in
         Haskell code.
       - I left a deprecated stub for RtsFlags.h in place.  The flag
         structures are now exposed by Rts.h.
       - Various internal APIs are no longer exposed by public header files.
       - Various bits of dead code and declarations have been removed.
       - More gcc warnings are turned on, and the RTS code is more
         warning-clean.
       - More source files #include "PosixSource.h", and hence only use
         standard POSIX (1003.1c-1995) interfaces.
      There is a lot more tidying up still to do; this is just the first
      pass.  I also intend to standardise the names for external RTS APIs
      (e.g. use the rts_ prefix consistently), and declare the internal APIs
      as hidden for shared libraries.
  18. 02 Jun, 2009 1 commit
  19. 23 Apr, 2009 1 commit
  20. 13 Apr, 2009 2 commits
  21. 03 Apr, 2009 1 commit
  22. 05 Feb, 2009 1 commit
  23. 19 Nov, 2008 2 commits
  24. 06 Nov, 2008 2 commits
  25. 05 Nov, 2008 1 commit
  26. 22 Oct, 2008 2 commits
    • Refactoring and reorganisation of the scheduler · 99df892c
      Simon Marlow authored
      Change the way we look for work in the scheduler.  Previously,
      checking to see whether there was anything to do was a
      non-side-effecting operation, but this has changed now that we do
      work-stealing.  This led to a refactoring of the inner loop of the
      scheduler.
      Also, lots of cleanup in the new work-stealing code, but no functional
      changes.
      One new statistic is added to the +RTS -s output:
        SPARKS: 1430 (2 converted, 1427 pruned)
      which lets you know something about the use of `par` in the program.
  27. 15 Sep, 2008 1 commit
    • Work stealing for sparks · cf9650f2
      berthold@mathematik.uni-marburg.de authored
        Spark stealing support for PARALLEL_HASKELL and THREADED_RTS versions of the RTS.
        Spark pools are per capability, separately allocated and held in the Capability
        structure. The implementation uses Double-Ended Queues (deque) and cas-protected
        access.
        The write end of the queue (position bottom) can only be used with
        mutual exclusion, i.e. by exactly one caller at a time.
        Multiple readers can steal()/findSpark() from the read end
        (position top), and are synchronised without a lock, based on a cas
        of the top position. One reader wins, the others return NULL for a
        failed steal.
        Work stealing is called when Capabilities find no other work (inside yieldCapability),
        and tries all capabilities 0..n-1 twice, unless a theft succeeds.
        Inside schedulePushWork, all considered cap.s (those which were idle and could
        be grabbed) are woken up. Future versions should wake up capabilities immediately when
        putting a new spark in the local pool, from newSpark().
      Patch has been re-recorded due to conflicting bugfixes in sparks.c, also fixing a
      (strange) conflict in the scheduler.
  28. 09 Sep, 2008 1 commit
  29. 23 Jul, 2008 2 commits