1. 09 May, 2010 1 commit
  2. 06 May, 2010 1 commit
  3. 29 Mar, 2010 4 commits
    • Move a thread to the front of the run queue when another thread blocks on it · 2726a2f1
      Simon Marlow authored
      This fixes #3838, and was made possible by the new BLACKHOLE
      infrastructure.  To allow reordering of the run queue I had to make it
      doubly-linked, which entails some extra trickiness with regard to
      GC write barriers and suchlike.
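      For illustration only, a minimal C sketch of the idea, using made-up structures and a stand-in write barrier rather than the real RTS types: unlink the blocking thread from wherever it sits in the doubly-linked run queue and push it on the front, marking every TSO whose link fields are mutated.

        #include <stddef.h>

        /* Toy doubly-linked run queue; hypothetical stand-ins, not the
         * real StgTSO/Capability fields. */
        typedef struct TSO_ {
            struct TSO_ *prev, *next;
        } TSO;

        typedef struct {
            TSO *run_queue_hd, *run_queue_tl;
        } Capability;

        /* Stand-in for the GC write barrier: a TSO whose link fields are
         * mutated must be marked dirty so the GC rescans it. */
        static void write_barrier(TSO *tso) { (void)tso; }

        /* Move tso to the front of cap's run queue so it runs next. */
        static void promote_to_front(Capability *cap, TSO *tso)
        {
            if (cap->run_queue_hd == tso) return;      /* already first */

            /* unlink from its current position */
            if (tso->prev) { tso->prev->next = tso->next; write_barrier(tso->prev); }
            if (tso->next) { tso->next->prev = tso->prev; write_barrier(tso->next); }
            if (cap->run_queue_tl == tso) cap->run_queue_tl = tso->prev;

            /* push onto the front */
            tso->prev = NULL;
            tso->next = cap->run_queue_hd;
            if (cap->run_queue_hd) {
                cap->run_queue_hd->prev = tso;
                write_barrier(cap->run_queue_hd);
            }
            cap->run_queue_hd = tso;
            if (cap->run_queue_tl == NULL) cap->run_queue_tl = tso;
            write_barrier(tso);
        }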
    • tiny GC optimisation · 1373cd30
      Simon Marlow authored
    • New implementation of BLACKHOLEs · 5d52d9b6
      Simon Marlow authored
      This replaces the global blackhole_queue with a clever scheme that
      enables us to queue up blocked threads on the closure that they are
      blocked on, while still avoiding atomic instructions in the common
      case.
      
      Advantages:
      
       - gets rid of a locked global data structure and some tricky GC code
         (replacing it with some per-thread data structures and different
         tricky GC code :)
      
       - wakeups are more prompt: parallel/concurrent performance should
         benefit.  I haven't seen anything dramatic in the parallel
         benchmarks so far, but a couple of threading benchmarks do improve
         a bit.
      
       - waking up a thread blocked on a blackhole is now O(1) (e.g. if
         it is the target of throwTo).
      
       - less sharing and better separation of Capabilities: communication
         is done with messages, the data structures are strictly owned by a
         Capability and cannot be modified except by sending messages.
      
       - this change will ultimately enable us to do more intelligent
         scheduling when threads block on each other.  This is what started
         off the whole thing, but it isn't done yet (#3838).
      
      I'll be documenting all this on the wiki in due course.
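      A very rough C sketch of the scheme, with hypothetical types and names rather than the real RTS message machinery: the blocking Capability posts a message to the Capability that owns the blackhole, and the owner threads the blocked thread onto a queue hanging off the closure itself, so any individual sleeper can later be found and woken in O(1).

        #include <stddef.h>

        /* Hypothetical toy model; not the real RTS structures. */
        typedef struct Thread_ {
            struct Thread_ *next_blocked;      /* link in a blocking queue */
        } Thread;

        typedef struct {
            Thread *blocked;                   /* threads blocked on this thunk */
        } Blackhole;

        typedef struct Message_ {
            struct Message_ *next;
            Thread    *blocked_thread;
            Blackhole *bh;
        } Message;

        typedef struct {
            Message *inbox;                    /* only the owner dequeues this */
        } Capability;

        /* Sender: never touch the other Capability's structures directly,
         * just post a message (the real code needs an atomic push here). */
        static void send_block_message(Capability *owner, Message *m)
        {
            m->next = owner->inbox;
            owner->inbox = m;
        }

        /* Owner: attach the blocked thread to the blackhole's own queue;
         * when the thunk is updated, the whole queue is right there to wake. */
        static void handle_block_message(Message *m)
        {
            m->blocked_thread->next_blocked = m->bh->blocked;
            m->bh->blocked = m->blocked_thread;
        }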
    • e74936e3
  4. 25 Mar, 2010 1 commit
  5. 31 Dec, 2009 1 commit
  6. 07 Dec, 2009 1 commit
  7. 04 Dec, 2009 1 commit
  8. 03 Dec, 2009 2 commits
    • GC refactoring, remove "steps" · 214b3663
      Simon Marlow authored
      The GC had a two-level structure, G generations each of T steps.
      Steps are for aging within a generation, mostly to avoid premature
      promotion.  
      
      Measurements show that more than 2 steps is almost never worthwhile,
      and 1 step is usually worse than 2.  In theory fractional steps are
      possible, so the ideal number of steps is somewhere between 1 and 3.
      GHC's default has always been 2.
      
      We can implement 2 steps quite straightforwardly by having each block
      point to the generation to which objects in that block should be
      promoted, so blocks in the nursery point to generation 0, and blocks
      in gen 0 point to gen 1, and so on.
      
      This commit removes the explicit step structures, merging generations
      with steps, thus simplifying a lot of code.  Performance is
      unaffected.  The tunable number of steps is now gone, although it may
      be replaced in the future by a way to tune the aging in generation 0.
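      Schematically, with made-up struct names rather than the real bdescr/generation types, the scheme boils down to each block descriptor carrying the generation its live objects should be copied into:

        #include <stddef.h>

        /* Toy model only. */
        typedef struct generation_ {
            int no;                          /* generation number */
        } generation;

        typedef struct bdescr_ {
            generation *gen;                 /* generation this block is in */
            generation *dest;                /* where its live objects go   */
        } bdescr;

        /* Blocks promote into the next generation; the oldest generation
         * promotes into itself.  (The nursery is the youngest case.) */
        static void set_promotion_target(bdescr *bd, generation *gens, int n_gens)
        {
            int next = bd->gen->no + 1;
            bd->dest = &gens[next < n_gens ? next : n_gens - 1];
        }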
    • add a missing lock around allocGroup() · 8bc3f028
      Simon Marlow authored
  9. 02 Dec, 2009 2 commits
  10. 01 Dec, 2009 1 commit
    • Make allocatePinned use local storage, and other refactorings · 5270423a
      Simon Marlow authored
      This is a batch of refactoring to remove some of the GC's global
      state, as we move towards CPU-local GC.  
      
        - allocateLocal() now allocates large objects into the local
          nursery, rather than taking a global lock and allocating
          them in gen 0 step 0.
      
        - allocatePinned() was still allocating from global storage and
          taking a lock each time, now it uses local storage. 
          (mallocForeignPtrBytes should be faster with -threaded).
          
        - We had a gen 0 step 0, distinct from the nurseries, which are
          stored in a separate nurseries[] array.  This is slightly strange.
          I removed the g0s0 global that pointed to gen 0 step 0, and
          removed all uses of it.  I think now we don't use gen 0 step 0 at
          all, except possibly when there is only one generation.  Possibly
          more tidying up is needed here.
      
        - I removed the global allocate() function, and renamed
          allocateLocal() to allocate().
      
        - the alloc_blocks global is gone.  MAYBE_GC() and
          doYouWantToGC() now check the local nursery only.
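      A minimal sketch of the allocation pattern, with hypothetical names (not the real allocatePinned): each Capability bump-allocates from its own partly-filled pinned block and only falls back to the locked block allocator when that block is full.

        #include <stddef.h>

        #define BLOCK_WORDS 4096

        /* Toy per-capability pinned allocation state. */
        typedef struct {
            long *pinned_block;     /* current partly-filled pinned block */
            long  pinned_free;      /* index of the next free word in it  */
        } Capability;

        /* Stand-in for the block allocator, which does need a lock. */
        extern long *alloc_pinned_block_locked(void);

        /* Fast path takes no lock: bump a pointer in cap's own block. */
        static long *allocate_pinned(Capability *cap, long n)
        {
            if (cap->pinned_block == NULL || cap->pinned_free + n > BLOCK_WORDS) {
                cap->pinned_block = alloc_pinned_block_locked();   /* slow path */
                cap->pinned_free  = 0;
            }
            long *p = cap->pinned_block + cap->pinned_free;
            cap->pinned_free += n;
            return p;
        }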
  11. 29 Nov, 2009 1 commit
    • Store a destination step in the block descriptor · f9d15f9f
      Simon Marlow authored
      At the moment, this just saves a memory reference in the GC inner loop
      (worth a percent or two of GC time).  Later, it will hopefully let me
      experiment with partial steps and with simplifying the generation/step
      infrastructure.
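      The saving is just one fewer pointer chase per object in the copying loop; a toy comparison with hypothetical field names:

        /* Toy model: the old scheme reaches the destination via the step,
         * the new scheme caches it in the block descriptor itself. */
        typedef struct step_   { struct step_ *to; } step;
        typedef struct bdescr_ {
            step *step_info;     /* old: destination = bd->step_info->to (two loads) */
            step *dest;          /* new: destination = bd->dest          (one load)  */
        } bdescr;

        static step *dest_old(bdescr *bd) { return bd->step_info->to; }
        static step *dest_new(bdescr *bd) { return bd->dest; }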
  12. 28 Aug, 2009 1 commit
  13. 20 Aug, 2009 1 commit
  14. 19 Aug, 2009 1 commit
  15. 18 Aug, 2009 1 commit
    • Fix #3429: a tricky race condition · c5cafbcc
      Simon Marlow authored
      There were two bugs, and had it not been for the first one we would
      not have noticed the second one, so this is quite fortunate.
      
      The first bug was in stg_unblockAsyncExceptionszh_ret: when we found a
      pending exception to raise but didn't end up raising it, there was a
      missing adjustment to the stack pointer.
      
      The second bug was that this case was actually happening at all: it
      ought to be incredibly rare, because the pending exception thread
      would have to be killed between us finding it and attempting to raise
      the exception.  This made me suspicious.  It turned out that there was
      a race condition on the tso->flags field; multiple threads were
      updating this bitmask field non-atomically (one of the bits is the
      dirty-bit for the generational GC).  The fix is to move the dirty bit
      into its own field of the TSO, making the TSO one word larger (sadly).
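      The essence of the race, as a toy C sketch with made-up names: two parties doing a non-atomic read-modify-write of the same bitmask word can lose each other's updates, so the dirty bit is given a word of its own.

        /* Toy TSO layout; not the real RTS definitions. */
        #define TSO_DIRTY    1u
        #define TSO_BLOCKEX  2u

        typedef struct {
            unsigned flags;    /* before: dirty bit shared this bitmask   */
            unsigned dirty;    /* after: dirty bit lives in its own word  */
        } TSO;

        /* Both of these are read-modify-write on the same word; run them
         * concurrently from two threads and one update can be lost. */
        static void mark_dirty_racy(TSO *tso)  { tso->flags |= TSO_DIRTY; }
        static void set_blockex_racy(TSO *tso) { tso->flags |= TSO_BLOCKEX; }

        /* The fix: the dirty bit no longer shares a word with other flags. */
        static void mark_dirty(TSO *tso) { tso->dirty = 1; }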
  16. 02 Aug, 2009 1 commit
    • RTS tidyup sweep, first phase · a2a67cd5
      Simon Marlow authored
      The first phase of this tidyup is focussed on the header files, and in
      particular making sure we are exposing publicly exactly what we need
      to, and no more.
      
       - Rts.h now includes everything that the RTS exposes publicly,
         rather than a random subset of it.
      
       - Most of the public header files have moved into subdirectories, and
         many of them have been renamed.  But clients should not need to
         include any of the other headers directly, just #include the main
         public headers: Rts.h, HsFFI.h, RtsAPI.h.
      
       - All the headers needed for via-C compilation have moved into the
         stg subdirectory, which is self-contained.  Most of the headers for
         the rest of the RTS APIs have moved into the rts subdirectory.
      
       - I left MachDeps.h where it is, because it is so widely used in
         Haskell code.
       
       - I left a deprecated stub for RtsFlags.h in place.  The flag
         structures are now exposed by Rts.h.
      
       - Various internal APIs are no longer exposed by public header files.
      
       - Various bits of dead code and declarations have been removed
      
       - More gcc warnings are turned on, and the RTS code is more
         warning-clean.
      
       - More source files #include "PosixSource.h", and hence only use
         standard POSIX (1003.1c-1995) interfaces.
      
      There is a lot more tidying up still to do; this is just the first
      pass.  I also intend to standardise the names for external RTS APIs
      (e.g. use the rts_ prefix consistently), and declare the internal APIs
      as hidden for shared libraries.
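      For example, a C program embedding the RTS should only need the main public headers; something along these lines (hs_init/hs_exit are the standard entry points from HsFFI.h) ought to suffice:

        /* A client only includes the public headers, never the internal ones. */
        #include "HsFFI.h"

        int main(int argc, char *argv[])
        {
            hs_init(&argc, &argv);     /* start the Haskell runtime */
            /* ... call exported Haskell functions here ... */
            hs_exit();                 /* shut it down again */
            return 0;
        }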
  17. 24 Jul, 2009 1 commit
  18. 13 Jun, 2009 1 commit
  19. 03 Apr, 2009 1 commit
  20. 13 Mar, 2009 1 commit
    • Use work-stealing for load-balancing in the GC · 4e354226
      Simon Marlow authored
        
      New flag: "+RTS -qb" disables load-balancing in the parallel GC
      (though this is subject to change; I think we will probably want to do
      something more automatic before releasing this).
      
      To get the "PARGC3" configuration described in the "Runtime support
      for Multicore Haskell" paper, use "+RTS -qg0 -qb -RTS".
      
      The main advantage of this is that it allows us to easily disable
      load-balancing altogether, which turns out to be important in parallel
      programs.  Maintaining locality is sometimes more important that
      spreading the work out in parallel GC.  There is a side benefit in
      that the parallel GC should have improved locality even when
      load-balancing, because each processor prefers to take work from its
      own queue before stealing from others.
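      Schematically (hypothetical queue functions, not the real GC scanner): each GC thread drains its own work queue first and only steals when that runs dry, and with load-balancing disabled it simply stops instead of stealing.

        #include <stddef.h>
        #include <stdbool.h>

        typedef struct { int dummy; } WorkItem;     /* placeholder work item */

        /* Hypothetical per-thread work queues. */
        extern WorkItem *pop_own_queue(int me);
        extern WorkItem *steal_from(int victim);

        /* Locality first, stealing second (and not at all when
         * load-balancing is disabled, as with +RTS -qb). */
        static WorkItem *find_work(int me, int n_threads, bool load_balancing)
        {
            WorkItem *w = pop_own_queue(me);
            if (w != NULL || !load_balancing) return w;

            for (int i = 1; i < n_threads; i++) {
                w = steal_from((me + i) % n_threads);
                if (w != NULL) return w;
            }
            return NULL;      /* nothing anywhere: this GC thread can stop */
        }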
  21. 12 Mar, 2009 1 commit
  22. 06 Mar, 2009 1 commit
    • Partial fix for #2917 · 1b62aece
      Simon Marlow authored
       - add newAlignedPinnedByteArray# for allocating pinned BAs with
         arbitrary alignment
      
       - the old newPinnedByteArray# now aligns to 16 bytes
      
      Foreign.alloca will use newAlignedPinnedByteArray#, and so might end
      up wasting less space than before (we used to align to 8 by default).
      Foreign.allocaBytes and Foreign.mallocForeignPtrBytes will get 16-byte
      aligned memory, which is enough to avoid problems with SSE
      instructions on x86, for example.
      
      There was a bug in the old newPinnedByteArray#: it aligned to 8 bytes,
      but would have failed if the header was not a multiple of 8
      (fortunately it always was, even with profiling).  Also we
      occasionally wasted some space unnecessarily due to alignment in
      allocatePinned().
      
      I haven't done anything about Foreign.malloc/mallocBytes, which will
      give you the same alignment guarantees as malloc() (8 bytes on
      Linux/x86 here).
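      Under the hood this is just rounding the allocation pointer up to the requested power-of-two alignment before handing it out; a toy version (not the RTS allocator, which also has to account for the header word preceding the payload):

        #include <stddef.h>
        #include <stdint.h>

        /* Round p up to the next multiple of align (a power of two). */
        static char *align_up(char *p, size_t align)
        {
            return (char *)(((uintptr_t)p + align - 1) & ~(uintptr_t)(align - 1));
        }

        /* Toy bump allocator over a pinned block: align first, then bump. */
        static char *alloc_pinned_aligned(char **free_ptr, size_t bytes, size_t align)
        {
            char *p = align_up(*free_ptr, align);
            *free_ptr = p + bytes;
            return p;
        }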
  23. 12 Jan, 2009 2 commits
    • sanity checking fixes · 1aaac347
      Simon Marlow authored
    • Keep the remembered sets local to each thread during parallel GC · 6a405b1e
      Simon Marlow authored
      This turns out to be quite vital for parallel programs:
      
        - The way we discover which threads to traverse is by finding
          dirty threads via the remembered sets (aka mutable lists).
      
        - A dirty thread will be on the remembered set of the capability
          that was running it, and we really want to traverse that thread's
          stack using the GC thread for the capability, because it is in
          that CPU's cache.  If we get this wrong, we get penalised badly by
          the memory system.
      
      Previously we had per-capability mutable lists but they were
      aggregated before GC and traversed by just one of the GC threads.
      This resulted in very poor performance particularly for parallel
      programs with deep stacks.
      
      Now we keep per-capability remembered sets throughout GC, which also
      removes a lock (recordMutableGen_sync).
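      The shape of the change, in a toy C sketch with made-up names: instead of appending to a single global mutable list under a lock, each Capability records dirty objects on its own list, and the GC thread bound to that Capability traverses it.

        #include <stddef.h>

        /* Toy remembered-set entries; not the real RTS structures. */
        typedef struct MutListEntry_ {
            struct MutListEntry_ *next;
            void *dirty_object;
        } MutListEntry;

        typedef struct {
            MutListEntry *mut_list;   /* this Capability's remembered set */
        } Capability;

        /* No lock needed: only the owning Capability (or the GC thread
         * bound to it) ever touches cap->mut_list. */
        static void record_mutable(Capability *cap, MutListEntry *e, void *obj)
        {
            e->dirty_object = obj;
            e->next = cap->mut_list;
            cap->mut_list = e;
        }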
  24. 09 Dec, 2008 1 commit
  25. 21 Nov, 2008 1 commit
    • Use mutator threads to do GC, instead of having a separate pool of GC threads · 3ebcd3de
      Simon Marlow authored
      Previously, the GC had its own pool of threads to use as workers when
      doing parallel GC.  There was a "leader", which was the mutator thread
      that initiated the GC, and the other threads were taken from the pool.
      
      This was simple and worked fine for sequential programs, where we did
      most of the benchmarking for the parallel GC, but falls down for
      parallel programs.  When we have N mutator threads and N cores, at GC
      time we would have to stop N-1 mutator threads and start up N-1 GC
      threads, and hope that the OS schedules them all onto separate cores.
      In practice it doesn't, as you might expect.
      
      Now we use the mutator threads to do GC.  This works quite nicely,
      particularly for parallel programs, where each mutator thread scans
      its own spark pool, which is probably in its cache anyway.
      
      There are some flag changes:
      
        -g<n> is removed (-g1 is still accepted for backwards compat).
        There's no way to have a different number of GC threads than mutator
        threads now.
      
        -q1       Use one OS thread for GC (turns off parallel GC)
        -qg<n>    Use parallel GC for generations >= <n> (default: 1)
      
      Using parallel GC only for generations >=1 works well for sequential
      programs.  Compiling an ordinary sequential program with -threaded and
      running it with -N2 or more should help if you do a lot of GC.  I've
      found that adding -qg0 (do parallel GC for generation 0 too) speeds up
      some parallel programs, but slows down some sequential programs.
      Being conservative, I left the threshold at 1.
      
      ToDo: document the new options.
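      For instance, a parallel program (here a hypothetical MyProg.hs) built with -threaded might be run along these lines to try parallel GC in the young generation as well:

        $ ghc -threaded -O2 MyProg.hs
        $ ./MyProg +RTS -N2 -qg0 -RTS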
  26. 06 Nov, 2008 1 commit
  27. 19 Sep, 2008 1 commit
  28. 12 Sep, 2008 1 commit
  29. 11 Sep, 2008 1 commit
  30. 09 Sep, 2008 2 commits
  31. 08 Sep, 2008 1 commit
  32. 29 Jul, 2008 1 commit
  33. 09 Jun, 2008 1 commit