1. 05 Aug, 2009 1 commit
  2. 02 Aug, 2009 1 commit
    • RTS tidyup sweep, first phase · a2a67cd5
      Simon Marlow authored
      The first phase of this tidyup is focussed on the header files, and in
      particular making sure we are exposing publicly exactly what we need
      to, and no more.
      
       - Rts.h now includes everything that the RTS exposes publicly,
         rather than a random subset of it.
      
       - Most of the public header files have moved into subdirectories, and
         many of them have been renamed.  But clients should not need to
         include any of the other headers directly, just #include the main
         public headers: Rts.h, HsFFI.h, RtsAPI.h.
      
       - All the headers needed for via-C compilation have moved into the
         stg subdirectory, which is self-contained.  Most of the headers for
         the rest of the RTS APIs have moved into the rts subdirectory.
      
       - I left MachDeps.h where it is, because it is so widely used in
         Haskell code.
       
       - I left a deprecated stub for RtsFlags.h in place.  The flag
         structures are now exposed by Rts.h.
      
       - Various internal APIs are no longer exposed by public header files.
      
       - Various bits of dead code and declarations have been removed.
      
       - More gcc warnings are turned on, and the RTS code is more
         warning-clean.
      
       - More source files #include "PosixSource.h", and hence only use
         standard POSIX (1003.1c-1995) interfaces.
      
      There is a lot more tidying up still to do; this is just the first
      pass.  I also intend to standardise the names for external RTS APIs
      (e.g. use the rts_ prefix consistently), and declare the internal APIs
      as hidden for shared libraries.
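
      As a rough sketch of what the new layout means for a program that
      embeds the RTS (hs_init and hs_exit are the standard HsFFI.h entry
      points; the rest is illustrative boilerplate, not taken from this
      patch):

        #include "HsFFI.h"
        #include "Rts.h"    /* one umbrella header instead of many */

        int main (int argc, char *argv[])
        {
            hs_init(&argc, &argv);
            /* ... call foreign-exported Haskell functions here ... */
            hs_exit();
            return 0;
        }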
  3. 24 Jul, 2009 2 commits
  4. 04 Jun, 2009 1 commit
  5. 03 Apr, 2009 1 commit
  6. 17 Mar, 2009 1 commit
    • Add fast event logging · 8b18faef
      Simon Marlow authored
      Generate binary log files from the RTS containing a log of runtime
      events with timestamps.  The log file can be visualised in various
      ways, for investigating runtime behaviour and debugging performance
      problems.  See for example the forthcoming ThreadScope viewer.
      
      New GHC option:
      
        -eventlog   (link-time option) Enables event logging.
      
        +RTS -l     (runtime option) Generates <prog>.eventlog with
                    the binary event information.
      
      This replaces some of the tracing machinery we already had in the RTS:
      e.g. +RTS -vg  for GC tracing (we should do this using the new event
      logging instead).
      
      Event logging has almost no runtime cost when it isn't enabled, though
      in the future we might add more fine-grained events and this might
      change; hence the link-time option, and the separate version of the
      RTS compiled for event logging.  There's a small runtime cost to
      enabling event logging, but for most programs it shouldn't make much
      difference.
      
      (Todo: docs)
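
      Until the docs arrive, a usage sketch (the program name is
      hypothetical; the flags are the ones described above):

        ghc -eventlog Example.hs
        ./Example +RTS -l -RTS      # writes Example.eventlog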
  7. 13 Mar, 2009 2 commits
    • Use work-stealing for load-balancing in the GC · 4e354226
      Simon Marlow authored
      New flag: "+RTS -qb" disables load-balancing in the parallel GC
      (though this is subject to change; I think we will probably want to do
      something more automatic before releasing this).
      
      To get the "PARGC3" configuration described in the "Runtime support
      for Multicore Haskell" paper, use "+RTS -qg0 -qb -RTS".
      
      The main advantage of this is that it allows us to easily disable
      load-balancing altogether, which turns out to be important in parallel
      programs.  Maintaining locality is sometimes more important than
      spreading the work out in parallel GC.  There is a side benefit in
      that the parallel GC should have improved locality even when
      load-balancing, because each processor prefers to take work from its
      own queue before stealing from others.
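
      A schematic of the scheme in illustrative C (these are not the
      real RTS identifiers, and real termination needs a rendezvous
      between the GC threads, which is omitted here):

        /* Each GC thread drains its own work queue first, and only
           steals from other threads' queues when it runs dry. */
        static void gc_worker (GCThread *me, GCThread *threads, int n)
        {
            for (;;) {
                WorkBlock *wb = pop_local(me);        /* own queue first */
                for (int i = 0; wb == NULL && i < n; i++) {
                    if (&threads[i] != me)
                        wb = try_steal(&threads[i]);  /* steal when idle */
                }
                if (wb == NULL) break;                /* no work anywhere */
                scavenge_block(wb);
            }
        }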
  8. 09 Mar, 2009 1 commit
  9. 12 Jan, 2009 2 commits
    • sanity checking fixes · 1aaac347
      Simon Marlow authored
    • Keep the remembered sets local to each thread during parallel GC · 6a405b1e
      Simon Marlow authored
      This turns out to be quite vital for parallel programs:
      
        - The way we discover which threads to traverse is by finding
          dirty threads via the remembered sets (aka mutable lists).
      
        - A dirty thread will be on the remembered set of the capability
          that was running it, and we really want to traverse that thread's
          stack using the GC thread for the capability, because it is in
          that CPU's cache.  If we get this wrong, we get penalised badly by
          the memory system.
      
      Previously we had per-capability mutable lists but they were
      aggregated before GC and traversed by just one of the GC threads.
      This resulted in very poor performance particularly for parallel
      programs with deep stacks.
      
      Now we keep per-capability remembered sets throughout GC, which also
      removes a lock (recordMutableGen_sync).
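
      The shape of the change, as a hedged sketch (field and function
      names here are illustrative, not necessarily the real ones):

        /* Each capability keeps one remembered set (mutable list) per
           generation; recording a mutation touches only state owned by
           the current capability, so no global lock is needed. */
        struct Capability {
            /* ... other per-capability state ... */
            bdescr *mut_lists[MAX_GENERATIONS];
        };

        static void record_mutable (Capability *cap, StgClosure *p, int gen)
        {
            push_mut_list(&cap->mut_lists[gen], p);   /* capability-local */
        }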
  10. 21 Nov, 2008 1 commit
    • Use mutator threads to do GC, instead of having a separate pool of GC threads · 3ebcd3de
      Simon Marlow authored
      Previously, the GC had its own pool of threads to use as workers when
      doing parallel GC.  There was a "leader", which was the mutator thread
      that initiated the GC, and the other threads were taken from the pool.
      
      This was simple and worked fine for sequential programs, where we did
      most of the benchmarking for the parallel GC, but falls down for
      parallel programs.  When we have N mutator threads and N cores, at GC
      time we would have to stop N-1 mutator threads and start up N-1 GC
      threads, and hope that the OS schedules them all onto separate cores.
      In practice it doesn't, as you might expect.
      
      Now we use the mutator threads to do GC.  This works quite nicely,
      particularly for parallel programs, where each mutator thread scans
      its own spark pool, which is probably in its cache anyway.
      
      There are some flag changes:
      
        -g<n> is removed (-g1 is still accepted for backwards compat).
        There's no way to have a different number of GC threads than mutator
        threads now.
      
        -q1       Use one OS thread for GC (turns off parallel GC)
        -qg<n>    Use parallel GC for generations >= <n> (default: 1)
      
      Using parallel GC only for generations >=1 works well for sequential
      programs.  Compiling an ordinary sequential program with -threaded and
      running it with -N2 or more should help if you do a lot of GC.  I've
      found that adding -qg0 (do parallel GC for generation 0 too) speeds up
      some parallel programs, but slows down some sequential programs.
      Being conservative, I left the threshold at 1.
      
      ToDo: document the new options.
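
      In the meantime, an example of the suggested use (program name
      hypothetical; flags as described above):

        ghc -threaded Prog.hs
        ./Prog +RTS -N2 -qg0 -RTS   # parallel GC in generation 0 too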
  11. 12 Nov, 2008 1 commit
  12. 22 Oct, 2008 1 commit
  13. 09 Sep, 2008 3 commits
  14. 23 Jul, 2008 1 commit
  15. 16 Jun, 2008 1 commit
  16. 09 Jun, 2008 1 commit
  17. 03 Jun, 2008 2 commits
  18. 24 Apr, 2008 2 commits
  19. 17 Apr, 2008 1 commit
  20. 16 Apr, 2008 14 commits