1. 11 Apr, 2011 1 commit
    • Simon Marlow's avatar
      Refactoring and tidy up · 1fb38442
      Simon Marlow authored
      This is a port of some of the changes from my private local-GC branch
      (which is still in darcs, I haven't converted it to git yet).  There
      are a couple of small functional differences in the GC stats: first,
      per-thread GC timings should now be more accurate, and second, we now
      report average and maximum pause times, e.g. from minimax +RTS -N8 -s:
      
                                          Tot time (elapsed)  Avg pause  Max pause
        Gen  0      2755 colls,  2754 par   13.16s    0.93s     0.0003s    0.0150s
        Gen  1       769 colls,   769 par    3.71s    0.26s     0.0003s    0.0059s
      1fb38442
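
      The new Avg/Max pause columns are just a fold over the per-collection
      elapsed times.  A minimal stand-alone sketch of that accumulation (names
      and layout are illustrative, not the RTS's own Stats code):

        #include <stdio.h>

        /* Accumulate pause statistics for one generation. */
        typedef struct {
            int    collections;
            double total_elapsed;   /* seconds */
            double max_elapsed;     /* seconds */
        } PauseStats;

        static void record_gc(PauseStats *s, double elapsed)
        {
            s->collections++;
            s->total_elapsed += elapsed;
            if (elapsed > s->max_elapsed) s->max_elapsed = elapsed;
        }

        int main(void)
        {
            PauseStats gen0 = {0, 0.0, 0.0};
            double pauses[] = {0.0003, 0.0150, 0.0002, 0.0004};
            for (int i = 0; i < 4; i++) record_gc(&gen0, pauses[i]);
            printf("Avg pause %.4fs  Max pause %.4fs\n",
                   gen0.total_elapsed / gen0.collections, gen0.max_elapsed);
            return 0;
        }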
  2. 11 Nov, 2010 1 commit
  3. 01 Nov, 2010 1 commit
  4. 21 Dec, 2010 1 commit
  5. 25 Nov, 2010 1 commit
  6. 25 May, 2010 1 commit
  7. 29 Mar, 2010 1 commit
    • Simon Marlow's avatar
      New implementation of BLACKHOLEs · 5d52d9b6
      Simon Marlow authored
      This replaces the global blackhole_queue with a clever scheme that
      enables us to queue up blocked threads on the closure that they are
      blocked on, while still avoiding atomic instructions in the common
      case.
      
      Advantages:
      
       - gets rid of a locked global data structure and some tricky GC code
         (replacing it with some per-thread data structures and different
         tricky GC code :)
      
       - wakeups are more prompt: parallel/concurrent performance should
         benefit.  I haven't seen anything dramatic in the parallel
         benchmarks so far, but a couple of threading benchmarks do improve
         a bit.
      
       - waking up a thread blocked on a blackhole is now O(1) (e.g. if
         it is the target of throwTo).
      
       - less sharing and better separation of Capabilities: communication
         is done with messages, the data structures are strictly owned by a
         Capability and cannot be modified except by sending messages.
      
       - this change will ultimately enable us to do more intelligent
         scheduling when threads block on each other.  This is what started
         off the whole thing, but it isn't done yet (#3838).
      
      I'll be documenting all this on the wiki in due course.
      5d52d9b6
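
      A toy model of the per-closure queueing described above (simplified
      stand-in types, not GHC's; the real RTS routes the enqueue through a
      message to the owning Capability, so only the owner mutates the queue):

        #include <stdio.h>

        typedef struct Tso { int id; struct Tso *link; } Tso;
        typedef struct     { Tso *owner; Tso *blocked; } BlackHole;

        /* A thread entering someone else's blackhole queues itself on that
           closure; no global blackhole_queue is involved. */
        static void block_on(BlackHole *bh, Tso *t)
        {
            t->link = bh->blocked;
            bh->blocked = t;
            printf("thread %d blocked on a blackhole owned by thread %d\n",
                   t->id, bh->owner->id);
        }

        /* When the owner updates the thunk, exactly the threads queued on
           this closure are woken -- a local operation, no global scan. */
        static void update_and_wake(BlackHole *bh)
        {
            for (Tso *t = bh->blocked; t != NULL; t = t->link)
                printf("waking thread %d\n", t->id);
            bh->blocked = NULL;
        }

        int main(void)
        {
            Tso owner = {1, NULL}, a = {2, NULL}, b = {3, NULL};
            BlackHole bh = {&owner, NULL};
            block_on(&bh, &a);
            block_on(&bh, &b);
            update_and_wake(&bh);
            return 0;
        }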
  8. 11 Mar, 2010 1 commit
    • Simon Marlow's avatar
      Use message-passing to implement throwTo in the RTS · 7408b392
      Simon Marlow authored
      This replaces some complicated locking schemes with message-passing
      in the implementation of throwTo. The benefits are
      
       - previously it was impossible to guarantee that a throwTo from
         a thread running on one CPU to a thread running on another CPU
         would be noticed, and we had to rely on the GC to pick up these
         forgotten exceptions. This no longer happens.
      
       - the locking regime is simpler (though the code is about the same
         size)
      
       - threads can be unblocked from a blocked_exceptions queue without
         having to traverse the whole queue now.  It's a rare case, but
         replaces an O(n) operation with an O(1).
      
       - generally we move in the direction of sharing less between
         Capabilities (aka HECs), which will become important with other
         changes we have planned.
      
      Also in this patch I replaced several STM-specific closure types with
      a generic MUT_PRIM closure type, which allowed a lot of code in the GC
      and other places to go away, hence the line-count reduction.  The
      message-passing changes resulted in about a net zero line-count
      difference.
      7408b392
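
      A toy model of the message-passing idea, with made-up names standing in
      for the RTS's message and Capability types:

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct Msg { int target; const char *exn; struct Msg *next; } Msg;
        typedef struct     { int no; Msg *inbox; } Cap;

        /* "throwTo" across capabilities: link a message onto the target's
           inbox (the real RTS protects this and then interrupts the target). */
        static void send_throwto(Cap *target, int tid, const char *exn)
        {
            Msg *m = malloc(sizeof *m);
            m->target = tid; m->exn = exn;
            m->next = target->inbox;
            target->inbox = m;
        }

        /* The target drains its inbox in its scheduler loop, so a pending
           exception is always noticed rather than left for the GC to find. */
        static void process_inbox(Cap *cap)
        {
            Msg *m = cap->inbox;
            cap->inbox = NULL;
            while (m != NULL) {
                printf("cap %d: raise %s in thread %d\n", cap->no, m->exn, m->target);
                Msg *next = m->next; free(m); m = next;
            }
        }

        int main(void)
        {
            Cap cap1 = {1, NULL};
            send_throwto(&cap1, 42, "ThreadKilled");
            process_inbox(&cap1);
            return 0;
        }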
  9. 09 Mar, 2010 2 commits
    • Simon Marlow's avatar
      Split part of the Task struct into a separate struct InCall · 7effbbbb
      Simon Marlow authored
      The idea is that this leaves Tasks and OSThread in one-to-one
      correspondence.  The part of a Task that represents a call into
      Haskell from C is split into a separate struct InCall, pointed to by
      the Task and the TSO bound to it.  A given OSThread/Task thus always
      uses the same mutex and condition variable, rather than getting a new
      one for each callback.  Conceptually it is simpler, although there are
      more types and indirections in a few places now.
      
      This improves callback performance by removing some of the locks that
      we had to take when making in-calls.  Now we also keep the current Task
      in a thread-local variable if supported by the OS and gcc (currently
      only Linux).
      7effbbbb
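
      The rough shape of the split, with POSIX types standing in for the RTS's
      own; these are not the actual declarations:

        #include <pthread.h>

        struct Task_;
        typedef struct StgTSO_ StgTSO;            /* opaque here */

        typedef struct InCall_ {
            StgTSO          *tso;          /* the TSO bound to this call into Haskell */
            struct Task_    *task;         /* the Task servicing the call             */
            struct InCall_  *prev_stack;   /* earlier in-calls made by the same Task  */
        } InCall;

        typedef struct Task_ {
            pthread_t        id;           /* exactly one OS thread per Task          */
            pthread_mutex_t  lock;         /* one mutex/condvar pair reused for every */
            pthread_cond_t   cond;         /*   callback, not a fresh pair each time  */
            InCall          *incall;       /* the in-call currently being serviced    */
        } Task;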
    • Simon Marlow's avatar
      Fix a rare deadlock when the IO manager thread is slow to start up · f8a01fb3
      Simon Marlow authored
      This fixes occasional failures of ffi002(threaded1) on a loaded
      machine.
      f8a01fb3
  10. 26 Jan, 2010 1 commit
  11. 12 Dec, 2009 1 commit
    • chak@cse.unsw.edu.au.'s avatar
      Expose all EventLog events as DTrace probes · 015d3d46
      chak@cse.unsw.edu.au. authored
      - Defines a DTrace provider, called 'HaskellEvent', that provides a probe
        for every event of the eventlog framework.
      - In contrast to the original eventlog, the DTrace probes are available in
        all flavours of the runtime system (DTrace probes have virtually no
        overhead if not enabled); when -DTRACING is defined, both the regular
        event log and DTrace probes can be used.
      - Currently, Mac OS X only.  User-space DTrace probes are implemented
        differently on Mac OS X than in the original DTrace implementation.
        Nevertheless, it shouldn't be too hard to enable these probes on other
        platforms, too.
      - Documentation is at http://hackage.haskell.org/trac/ghc/wiki/DTrace
      015d3d46
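
      A hedged sketch of the dual-dispatch pattern this describes; the probe
      macro and generated header names below are invented for illustration
      (a real build would use the header produced by dtrace -h):

        #include <stdio.h>

        #if defined(DTRACE)
        #include "HaskellEventProvider.h"              /* hypothetical dtrace -h output */
        #else
        #define HASKELLEVENT_GC_START()  ((void)0)     /* hypothetical probe macro      */
        #endif

        /* One event hook can feed both back ends: the eventlog writer when
           the RTS is built with -DTRACING, and a DTrace probe that costs
           essentially nothing unless a consumer has enabled it. */
        static void post_gc_start(void)
        {
        #if defined(TRACING)
            fprintf(stderr, "eventlog: GC start\n");   /* stands in for the binary writer */
        #endif
            HASKELLEVENT_GC_START();
        }

        int main(void) { post_gc_start(); return 0; }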
  12. 02 Dec, 2009 1 commit
  13. 01 Dec, 2009 2 commits
    • Simon Marlow's avatar
      Make allocatePinned use local storage, and other refactorings · 5270423a
      Simon Marlow authored
      This is a batch of refactoring to remove some of the GC's global
      state, as we move towards CPU-local GC.  
      
        - allocateLocal() now allocates large objects into the local
          nursery, rather than taking a global lock and allocating
          them in gen 0 step 0.
      
        - allocatePinned() was still allocating from global storage and
          taking a lock each time, now it uses local storage. 
          (mallocForeignPtrBytes should be faster with -threaded).
          
        - We had a gen 0 step 0, distinct from the nurseries, which are
          stored in a separate nurseries[] array.  This is slightly strange.
          I removed the g0s0 global that pointed to gen 0 step 0, and
          removed all uses of it.  I think now we don't use gen 0 step 0 at
          all, except possibly when there is only one generation.  Possibly
          more tidying up is needed here.
      
        - I removed the global allocate() function, and renamed
          allocateLocal() to allocate().
      
        - the alloc_blocks global is gone.  MAYBE_GC() and
          doYouWantToGC() now check the local nursery only.
      5270423a
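
      A toy model of the capability-local bump allocation that allocatePinned
      now uses; names, sizes and the slow path are simplified:

        #include <stdio.h>
        #include <stdlib.h>

        #define BLOCK_WORDS 4096

        typedef struct {
            long *block;    /* pinned block owned by this capability */
            long  free;     /* next free word within the block       */
        } Cap;

        static long *alloc_pinned_local(Cap *cap, long words)
        {
            if (cap->block == NULL || cap->free + words > BLOCK_WORDS) {
                /* slow path: fetch a fresh block from the (locked) global
                   block allocator; the fast path below needs no lock at all */
                cap->block = malloc(BLOCK_WORDS * sizeof(long));
                cap->free  = 0;
            }
            long *p = cap->block + cap->free;
            cap->free += words;
            return p;
        }

        int main(void)
        {
            Cap cap = {NULL, 0};
            printf("first:  %p\n", (void *)alloc_pinned_local(&cap, 16));
            printf("second: %p\n", (void *)alloc_pinned_local(&cap, 16));
            return 0;
        }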
    • Simon Marlow's avatar
      free cap->saved_mut_lists too · cf036e5b
      Simon Marlow authored
      fixes some memory leakage at shutdown
      cf036e5b
  14. 09 Oct, 2009 1 commit
  15. 07 Oct, 2009 1 commit
  16. 29 Aug, 2009 1 commit
    • Simon Marlow's avatar
      Unify event logging and debug tracing. · a5288c55
      Simon Marlow authored
        - tracing facilities are now enabled with -DTRACING, and -DDEBUG
          additionally enables debug-tracing.  -DEVENTLOG has been
          removed.
      
        - -debug now implies -eventlog
      
        - events can be printed to stderr instead of being sent to the
          binary .eventlog file by adding +RTS -v (which is implied by the
          +RTS -Dx options).
      
        - -Dx debug messages can be sent to the binary .eventlog file
          by adding +RTS -l.  This should help debugging by reducing
          the impact of debug tracing on execution time.
      
        - Various debug messages that duplicated the information in events
          have been removed.
      a5288c55
  17. 31 Aug, 2009 1 commit
  18. 02 Aug, 2009 1 commit
    • Simon Marlow's avatar
      RTS tidyup sweep, first phase · a2a67cd5
      Simon Marlow authored
      The first phase of this tidyup is focussed on the header files, and in
      particular making sure we are exposing publicly exactly what we need
      to, and no more.
      
       - Rts.h now includes everything that the RTS exposes publicly,
         rather than a random subset of it.
      
       - Most of the public header files have moved into subdirectories, and
         many of them have been renamed.  But clients should not need to
         include any of the other headers directly, just #include the main
         public headers: Rts.h, HsFFI.h, RtsAPI.h.
      
       - All the headers needed for via-C compilation have moved into the
         stg subdirectory, which is self-contained.  Most of the headers for
         the rest of the RTS APIs have moved into the rts subdirectory.
      
       - I left MachDeps.h where it is, because it is so widely used in
         Haskell code.
       
       - I left a deprecated stub for RtsFlags.h in place.  The flag
         structures are now exposed by Rts.h.
      
       - Various internal APIs are no longer exposed by public header files.
      
       - Various bits of dead code and declarations have been removed
      
       - More gcc warnings are turned on, and the RTS code is more
         warning-clean.
      
       - More source files #include "PosixSource.h", and hence only use
         standard POSIX (1003.1c-1995) interfaces.
      
      There is a lot more tidying up still to do, this is just the first
      pass.  I also intend to standardise the names for external RTS APIs
      (e.g. use the rts_ prefix consistently), and declare the internal APIs
      as hidden for shared libraries.
      a2a67cd5
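
      A minimal client after the reorganisation only needs the main public
      headers, for example:

        #include "HsFFI.h"

        int main(int argc, char *argv[])
        {
            hs_init(&argc, &argv);   /* start the RTS */
            /* ... call exported Haskell functions via their generated stubs ... */
            hs_exit();               /* shut it down */
            return 0;
        }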
  19. 12 Jun, 2009 1 commit
    • Duncan Coutts's avatar
      Add and export rts_unsafeGetMyCapability from rts · 85df606a
      Duncan Coutts authored
      We need this, or something equivalent, to be able to implement
      stgAllocForGMP outside of the rts. That's because we want to use
      allocateLocal which allocates from the given capability without
      having to take any locks. In the gmp primops we're basically in
      an unsafe foreign call, that is a context where we hold a current
      capability. So it's safe for us to use allocateLocal. We just
      need a way to get the current capability. The method to get the
      current capability depends on whether we're using the
      threaded rts or not. When stgAllocForGMP is built inside the rts
      that's ok because we can do it conditionally on THREADED_RTS.
      Outside the rts we need a single api we can call without knowing
      if we're talking to a threaded rts or not, hence this addition.
      85df606a
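
      A sketch of the intended use, assuming the declarations for
      rts_unsafeGetMyCapability() and that era's allocateLocal() (later renamed
      allocate(), see the 01 Dec 2009 entry above) are in scope; closure-header
      setup is elided:

        #include <stddef.h>
        #include "Rts.h"

        static void *
        stgAllocForGMP (size_t size_in_bytes)
        {
            /* We are inside an unsafe foreign call, so a Capability is
               already held; fetching it needs no lock. */
            Capability *cap = rts_unsafeGetMyCapability();
            StgWord words   = (size_in_bytes + sizeof(StgWord) - 1) / sizeof(StgWord);
            StgPtr  p       = allocateLocal(cap, words);
            /* ... initialise an ARR_WORDS closure around p and hand its
               payload back to GMP ... */
            return (void *)p;
        }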
  20. 02 Jun, 2009 1 commit
  21. 03 Apr, 2009 1 commit
  22. 17 Mar, 2009 1 commit
    • Simon Marlow's avatar
      Add fast event logging · 8b18faef
      Simon Marlow authored
      Generate binary log files from the RTS containing a log of runtime
      events with timestamps.  The log file can be visualised in various
      ways, for investigating runtime behaviour and debugging performance
      problems.  See for example the forthcoming ThreadScope viewer.
      
      New GHC option:
      
        -eventlog   (link-time option) Enables event logging.
      
        +RTS -l     (runtime option) Generates <prog>.eventlog with
                    the binary event information.
      
      This replaces some of the tracing machinery we already had in the RTS:
      e.g. +RTS -vg  for GC tracing (we should do this using the new event
      logging instead).
      
      Event logging has almost no runtime cost when it isn't enabled, though
      in the future we might add more fine-grained events and this might
      change; hence having a link-time option and compiling a separate
      version of the RTS for event logging.  There's a small runtime cost
      for enabling event logging, but for most programs it shouldn't make much
      difference.
      
      (Todo: docs)
      8b18faef
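
      A minimal sketch of appending a timestamped binary event record, in the
      spirit of (but not reproducing) the .eventlog encoding:

        #include <stdio.h>
        #include <stdint.h>
        #include <time.h>

        /* Append one (tag, timestamp) record; real eventlog records carry
           more fields and a defined byte order. */
        static void post_event(FILE *log, uint16_t tag)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            uint64_t ns = (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
            fwrite(&tag, sizeof tag, 1, log);   /* event type */
            fwrite(&ns,  sizeof ns,  1, log);   /* timestamp  */
        }

        int main(void)
        {
            FILE *log = fopen("prog.eventlog", "wb");  /* stands in for <prog>.eventlog */
            if (log == NULL) return 1;
            post_event(log, 1 /* e.g. "GC start" */);
            fclose(log);
            return 0;
        }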
  23. 13 Mar, 2009 1 commit
    • Simon Marlow's avatar
      Instead of a separate context-switch flag, set HpLim to zero · 304e7fb7
      Simon Marlow authored
      This reduces the latency between a context-switch being triggered and
      the thread returning to the scheduler, which in turn should reduce the
      cost of the GC barrier when there are many cores.
      
      We still retain the old context_switch flag which is checked at the
      end of each block of allocation.  The idea is that setting HpLim may
      fail if the target thread is modifying HpLim at the same time; the
      context_switch flag is a fallback.  It also allows us to "context
      switch soon" without forcing an immediate switch, which can be costly.
      304e7fb7
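
      A toy illustration of the trick: the compiled code's heap check compares
      Hp against HpLim, so zeroing HpLim makes the very next check fail and
      drops the thread back into the scheduler (names are stand-ins, not the
      real registers):

        #include <stdio.h>
        #include <stdint.h>

        typedef struct { uintptr_t hp, hp_lim; int context_switch; } Regs;

        /* Another capability requesting a context switch. */
        static void request_switch(Regs *r)
        {
            r->hp_lim = 0;           /* the next heap check fails immediately       */
            r->context_switch = 1;   /* fallback, checked per allocation block, in
                                        case the HpLim write races and is lost      */
        }

        /* What compiled code effectively asks at every heap check. */
        static int heap_check_fails(const Regs *r, uintptr_t bytes)
        {
            return r->hp + bytes > r->hp_lim;
        }

        int main(void)
        {
            Regs r = { 1000, 9000, 0 };
            printf("before request: check fails = %d\n", heap_check_fails(&r, 64));
            request_switch(&r);
            printf("after request:  check fails = %d\n", heap_check_fails(&r, 64));
            return 0;
        }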
  24. 12 Jan, 2009 1 commit
    • Simon Marlow's avatar
      Keep the remembered sets local to each thread during parallel GC · 6a405b1e
      Simon Marlow authored
      This turns out to be quite vital for parallel programs:
      
        - The way we discover which threads to traverse is by finding
          dirty threads via the remembered sets (aka mutable lists).
      
        - A dirty thread will be on the remembered set of the capability
          that was running it, and we really want to traverse that thread's
          stack using the GC thread for the capability, because it is in
          that CPU's cache.  If we get this wrong, we get penalised badly by
          the memory system.
      
      Previously we had per-capability mutable lists but they were
      aggregated before GC and traversed by just one of the GC threads.
      This resulted in very poor performance particularly for parallel
      programs with deep stacks.
      
      Now we keep per-capability remembered sets throughout GC, which also
      removes a lock (recordMutableGen_sync).
      6a405b1e
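
      A toy model of per-capability remembered sets, where each GC thread
      scavenges only its own capability's list (all names illustrative):

        #include <stdio.h>

        #define N_CAPS         2
        #define MAX_REMEMBERED 16

        typedef struct {
            int   no;
            void *remembered[MAX_REMEMBERED];   /* this capability's mutable list */
            int   n_remembered;
        } Cap;

        /* Write barrier: record a dirty object on the *current* capability's
           list; no global lock (the old recordMutableGen_sync) is needed. */
        static void record_mutable(Cap *cap, void *obj)
        {
            cap->remembered[cap->n_remembered++] = obj;
        }

        /* At GC time, GC thread i scavenges only capability i's list, so a
           dirty thread is scanned on the CPU whose cache already holds it. */
        static void scavenge_local(Cap *cap)
        {
            for (int i = 0; i < cap->n_remembered; i++)
                printf("gc thread %d scavenges %p\n", cap->no, cap->remembered[i]);
            cap->n_remembered = 0;
        }

        int main(void)
        {
            Cap caps[N_CAPS] = {{0}, {1}};
            int dirty_tso = 0;
            record_mutable(&caps[1], &dirty_tso);
            for (int i = 0; i < N_CAPS; i++) scavenge_local(&caps[i]);
            return 0;
        }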
  25. 21 Nov, 2008 1 commit
    • Simon Marlow's avatar
      Use mutator threads to do GC, instead of having a separate pool of GC threads · 3ebcd3de
      Simon Marlow authored
      Previously, the GC had its own pool of threads to use as workers when
      doing parallel GC.  There was a "leader", which was the mutator thread
      that initiated the GC, and the other threads were taken from the pool.
      
      This was simple and worked fine for sequential programs, where we did
      most of the benchmarking for the parallel GC, but falls down for
      parallel programs.  When we have N mutator threads and N cores, at GC
      time we would have to stop N-1 mutator threads and start up N-1 GC
      threads, and hope that the OS schedules them all onto separate cores.
      In practice it doesn't, as you might expect.
      
      Now we use the mutator threads to do GC.  This works quite nicely,
      particularly for parallel programs, where each mutator thread scans
      its own spark pool, which is probably in its cache anyway.
      
      There are some flag changes:
      
        -g<n> is removed (-g1 is still accepted for backwards compat).
        There's no way to have a different number of GC threads than mutator
        threads now.
      
        -q1       Use one OS thread for GC (turns off parallel GC)
        -qg<n>    Use parallel GC for generations >= <n> (default: 1)
      
      Using parallel GC only for generations >=1 works well for sequential
      programs.  Compiling an ordinary sequential program with -threaded and
      running it with -N2 or more should help if you do a lot of GC.  I've
      found that adding -qg0 (do parallel GC for generation 0 too) speeds up
      some parallel programs, but slows down some sequential programs.
      Being conservative, I left the threshold at 1.
      
      ToDo: document the new options.
      3ebcd3de
  26. 14 Nov, 2008 1 commit
  27. 18 Nov, 2008 1 commit
    • Simon Marlow's avatar
      Add optional eager black-holing, with new flag -feager-blackholing · d600bf7a
      Simon Marlow authored
      Eager blackholing can improve parallel performance by reducing the
      chances that two threads perform the same computation.  However, it
      has a cost: one extra memory write per thunk entry.  
      
      To get the best results, any code which may be executed in parallel
      should be compiled with eager blackholing turned on.  But since
      there's a cost for sequential code, we make it optional and turn it on
      for the parallel package only.  It might be a good idea to compile
      applications (or modules) that contain parallel code with
      -feager-blackholing.
      
      ToDo: document -feager-blackholing.
      d600bf7a
  28. 13 Nov, 2008 1 commit
  29. 06 Nov, 2008 2 commits
  30. 24 Oct, 2008 1 commit
  31. 23 Oct, 2008 1 commit
  32. 22 Oct, 2008 2 commits
    • Simon Marlow's avatar
      Refactoring and reorganisation of the scheduler · 99df892c
      Simon Marlow authored
      Change the way we look for work in the scheduler.  Previously,
      checking to see whether there was anything to do was a
      non-side-effecting operation, but this has changed now that we do
      work-stealing.  This led to a refactoring of the inner loop of the
      scheduler.
      
      Also, lots of cleanup in the new work-stealing code, but no functional
      changes.
      
      One new statistic is added to the +RTS -s output:
      
        SPARKS: 1430 (2 converted, 1427 pruned)
      
      lets you know something about the use of `par` in the program.
      99df892c
  33. 15 Sep, 2008 1 commit
    • berthold@mathematik.uni-marburg.de's avatar
      Work stealing for sparks · cf9650f2
      berthold@mathematik.uni-marburg.de authored
         Spark stealing support for PARALLEL_HASKELL and THREADED_RTS versions of the RTS.
        
        Spark pools are per capability, separately allocated and held in the Capability 
        structure. The implementation uses Double-Ended Queues (deque) and cas-protected 
        access.
        
        The write end of the queue (position bottom) can only be used with
        mutual exclusion, i.e. by exactly one caller at a time.
        Multiple readers can steal()/findSpark() from the read end
        (position top), and are synchronised without a lock, based on a cas
        of the top position. One reader wins, the others return NULL for a
        failure.
        
        Work stealing kicks in when a Capability finds no other work (inside yieldCapability);
        it tries all capabilities 0..n-1 twice, stopping early if a theft succeeds.
        
        Inside schedulePushWork, all considered capabilities (those which were idle and could
        be grabbed) are woken up. Future versions should wake up capabilities immediately when 
        putting a new spark in the local pool, from newSpark().
      
      The patch has been re-recorded due to conflicting bugfixes in sparks.c, which also
      fixed a (strange) conflict in the scheduler.
      cf9650f2
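
      A minimal sketch of the lock-free steal described above, using GCC's
      __sync_bool_compare_and_swap as the cas; indices, sizing, overflow and
      the owner's own take-from-bottom path are all simplified away:

        #include <stdio.h>

        #define POOL_SIZE 64

        typedef struct {
            void         *elems[POOL_SIZE];
            volatile long top;      /* read end: stolen from by other capabilities   */
            long          bottom;   /* write end: used only by the owning capability */
        } SparkPool;

        /* Owner only (e.g. from newSpark): no synchronisation at the bottom end. */
        static void push_spark(SparkPool *p, void *spark)
        {
            p->elems[p->bottom % POOL_SIZE] = spark;
            p->bottom++;
        }

        /* Any capability: race for the top slot with a cas; exactly one thief
           wins, the others see the cas fail and return NULL. */
        static void *steal_spark(SparkPool *p)
        {
            long t = p->top;
            if (t >= p->bottom) return NULL;                    /* pool empty    */
            void *spark = p->elems[t % POOL_SIZE];
            if (__sync_bool_compare_and_swap(&p->top, t, t + 1))
                return spark;                                   /* we won        */
            return NULL;                                        /* lost the race */
        }

        int main(void)
        {
            SparkPool pool = { .top = 0, .bottom = 0 };
            int work = 42;
            push_spark(&pool, &work);
            printf("first steal:  %p\n", steal_spark(&pool));
            printf("second steal: %p\n", steal_spark(&pool));   /* empty now */
            return 0;
        }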
  34. 19 Sep, 2008 1 commit
  35. 09 Sep, 2008 1 commit
    • Simon Marlow's avatar
      Fix race condition in wakeupThreadOnCapability() (#2574) · d572aed6
      Simon Marlow authored
      wakeupThreadOnCapability() is used to signal another capability that
      there is a thread waiting to be added to its run queue.  It adds the
      thread to the (locked) wakeup queue on the remote capability.  In
      order to do this, it has to modify the TSO's link field, which has a
      write barrier.  The write barrier might put the TSO on the mutable
      list, and the bug was that it was using the mutable list of the
      *target* capability, which we do not have exclusive access to.  We
      should be using the current Capability's mutable list in this case.
      d572aed6
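
      A toy illustration of the fix: the write barrier records the dirty TSO
      on the mutable list of the capability doing the write, never on the
      remote target's list (names illustrative):

        #include <stdio.h>

        typedef struct { int no; void *mut_list[8]; int n; } Cap;
        typedef struct { int dirty; void *link; } Tso;

        static void record_mutable(Cap *cap, void *obj)
        {
            cap->mut_list[cap->n++] = obj;      /* only safe on our own capability */
        }

        /* Queue `tso` for the remote capability, but let the write barrier use
           *our* capability's mutable list -- the point of the #2574 fix. */
        static void wakeup_thread_on_capability(Cap *my_cap, Cap *target, Tso *tso)
        {
            tso->link  = NULL;                  /* mutating the TSO dirties it... */
            tso->dirty = 1;
            record_mutable(my_cap, tso);        /* ...so record it locally        */
            printf("cap %d hands a thread to cap %d\n", my_cap->no, target->no);
        }

        int main(void)
        {
            Cap c0 = {0, {0}, 0}, c1 = {1, {0}, 0};
            Tso t  = {0, NULL};
            wakeup_thread_on_capability(&c0, &c1, &t);
            return 0;
        }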
  36. 19 Aug, 2008 1 commit