1. 28 Jul, 2014 1 commit
  2. 30 May, 2014 1 commit
  3. 01 Oct, 2013 1 commit
  4. 04 Sep, 2013 1 commit
      Don't move Capabilities in setNumCapabilities (#8209) · aa779e09
      Simon Marlow authored
      We have various problems with reallocating the array of Capabilities,
      due to threads in waitForReturnCapability that are already holding a
      pointer to a Capability.
      
      Rather than add more locking to make this safer, I decided it would be
      easier to ensure that we never move the Capabilities at all.  The
      capabilities array is now an array of pointers to Capability.  There
      are extra indirections, but it rarely matters - we don't often access
      Capabilities via the array, normally we already have a pointer to
      one.  I ran the parallel benchmarks and didn't see any difference.
  5. 13 Dec, 2012 1 commit
  6. 07 Sep, 2012 1 commit
      Deprecate lnat, and use StgWord instead · 41737f12
      Simon Marlow authored
      lnat was originally "long unsigned int" but we were using it when we
      wanted a 64-bit type on a 64-bit machine.  This broke on Windows x64,
      where long == int == 32 bits.  Using types of unspecified size is bad,
      but what we really wanted was a type with N bits on an N-bit machine.
      StgWord is exactly that.
      
      lnat was mentioned in some APIs that clients might be using
      (e.g. StackOverflowHook()), so we leave it defined but with a comment
      to say that it's deprecated.
  7. 07 Jun, 2012 1 commit
  8. 04 Apr, 2012 1 commit
  9. 27 Feb, 2012 1 commit
  10. 13 Feb, 2012 1 commit
      Allocate pinned object blocks from the nursery, not the global allocator · 67f4ab7e
      Simon Marlow authored
      
      Prompted by a benchmark posted to parallel-haskell@haskell.org by
      Andreas Voellmy <andreas.voellmy@gmail.com>.  This program exhibits
      contention for the block allocator when run with -N2 and greater
      without the fix:
      
      {-# LANGUAGE MagicHash, UnboxedTuples, BangPatterns #-}
      module Main where
      
      import Control.Monad
      import Control.Concurrent
      import System.Environment
      import GHC.IO
      import GHC.Exts
      import GHC.Conc
      
      main = do
       [m] <- fmap (fmap read) getArgs
       n <- getNumCapabilities
       ms <- replicateM n newEmptyMVar
       sequence [ forkIO $ busyWorkerB (m `quot` n) >> putMVar mv () | mv <- ms ]
       mapM takeMVar ms
      
      busyWorkerB :: Int -> IO ()
      busyWorkerB n_loops = go 0
        where go !n | n >= n_loops = return ()
                    | otherwise    =
                do p <- (IO $ \s ->
                          case newPinnedByteArray# 1024# s      of
                            { (# s', mbarr# #) ->
                                 (# s', () #)
                            }
                        )
                   go (n+1)
  11. 07 Feb, 2012 1 commit
  12. 15 Dec, 2011 1 commit
      Support for reducing the number of Capabilities with setNumCapabilities · 9bae7915
      Simon Marlow authored
      This patch allows setNumCapabilities to /reduce/ the number of active
      capabilities as well as increase it.  This is particularly tricky to
      do, because a Capability is a large data structure and ties into the
      rest of the system in many ways.  Trying to clean it all up would be
      extremely error prone.
      
      So instead, the solution is to mark the extra capabilities as
      "disabled".  This has the following consequences:
      
        - threads on a disabled capability are migrated away by the
          scheduler loop
      
        - disabled capabilities do not participate in GC
          (see scheduleDoGC())
      
        - No spark threads are created on this capability
          (see scheduleActivateSpark())
      
        - We do not attempt to migrate threads *to* a disabled
          capability (see schedulePushWork()).
      
      So a disabled capability should do no work, and does not participate
      in GC, although it remains alive in other respects.  For example, a
      blocked thread might wake up on a disabled capability, and it will get
      quickly migrated to a live capability.  A disabled capability can
      still initiate GC if necessary.  Indeed, it turns out to be hard to
      migrate bound threads, so we wait until the next GC to do this (see
      comments for details).
  13. 13 Dec, 2011 1 commit
      New flag +RTS -qi<n>, avoid waking up idle Capabilities to do parallel GC · a02eb298
      Simon Marlow authored
      This is an experimental tweak to the parallel GC that avoids waking up
      a Capability to do parallel GC if we know that the capability has been
      idle for a (tunable) number of GC cycles.  The idea is that if you're
      only using a few Capabilities, there's no point waking up the ones
      that aren't busy.
      
      e.g. +RTS -qi3
      
      says "A Capability will participate in parallel GC if it was running
      at all within the last 3 GC cycles."
      
      Results are a bit hit and miss, and I don't completely understand why
      yet.  Hence, for now it is turned off by default, and also not
      documented except in the +RTS -? output.
  14. 06 Dec, 2011 2 commits
      Allow the number of capabilities to be increased at runtime (#3729) · 92e7d6c9
      Simon Marlow authored
      At present the number of capabilities can only be *increased*, not
      decreased.  The latter presents a few more challenges!
      Make forkProcess work with +RTS -N · 8b75acd3
      Simon Marlow authored
      Consider this experimental for the time being.  There are a lot of
      things that could go wrong, but I've verified that at least it works
      on the test cases we have.
      
      I also did some API cleanups while I was here.  Previously we had:
      
      Capability * rts_eval (Capability *cap, HaskellObj p, /*out*/HaskellObj *ret);
      
      but this API is particularly error-prone: if you forget to discard the
      Capability * you passed in and use the return value instead, then
      you're in for subtle bugs with +RTS -N later on.  So I changed all
      these functions to this form:
      
      void rts_eval (/* inout */ Capability **cap,
                     /* in    */ HaskellObj p,
                     /* out */   HaskellObj *ret)
      
      It's much harder to use this version incorrectly, because you have to
      pass the Capability in by reference.
  15. 01 Dec, 2011 1 commit
      Fix a scheduling bug in the threaded RTS · 6d18141d
      Simon Marlow authored
      The parallel GC was using setContextSwitches() to stop all the other
      threads, which sets the context_switch flag on every Capability.  That
      had the side effect of causing every Capability to also switch
      threads, and since GCs can be much more frequent than context
      switches, this increased the context switch frequency.  When context
      switches are expensive (because the switch is between two bound
      threads or a bound and unbound thread), the difference is quite
      noticeable.
      
      The fix is to have a separate flag to indicate that a Capability
      should stop and return to the scheduler, but not switch threads.  I've
      called this the "interrupt" flag.
  16. 14 Aug, 2011 1 commit
  17. 18 Jul, 2011 2 commits
  18. 26 May, 2011 1 commit
      Rearrange shutdownCapability code slightly · 68b76e0e
      Duncan Coutts authored
      This is mostly for the benefit of having sensible places to put tracing
      code later. We want a code path that has somewhere to trace (in order):
       (1) starting up all capabilities;
       (2) N * starting up an individual capability;
       (3) N * shutting down an individual capability;
       (4) shutting down all capabilities.
      This has to work in both threaded and non-threaded modes.
      
      Locations (1) and (2) are provided by initCapabilities and
      initCapability respectively. Previously, there was no location for (4)
      and while shutdownCapability should be usable for (3) it was only called
      in the !THREADED_RTS case.
      
      Now, shutdownCapability is called unconditionally (and the body is
      conditional on THREADED_RTS) and there is a new shutdownCapabilities that
      calls shutdownCapability in a loop.
  19. 11 Apr, 2011 1 commit
      Refactoring and tidy up · 1fb38442
      Simon Marlow authored
      This is a port of some of the changes from my private local-GC branch
      (which is still in darcs, I haven't converted it to git yet).  There
      are a couple of small functional differences in the GC stats: first,
      per-thread GC timings should now be more accurate, and secondly we now
      report average and maximum pause times. e.g. from minimax +RTS -N8 -s:
      
                                          Tot time (elapsed)  Avg pause  Max pause
        Gen  0      2755 colls,  2754 par   13.16s    0.93s     0.0003s    0.0150s
        Gen  1       769 colls,   769 par    3.71s    0.26s     0.0003s    0.0059s
  20. 25 Nov, 2010 1 commit
  21. 11 Nov, 2010 1 commit
  22. 01 Nov, 2010 1 commit
  23. 17 Jun, 2010 1 commit
  24. 25 May, 2010 1 commit
  25. 20 May, 2010 1 commit
  26. 01 Apr, 2010 1 commit
      Change the representation of the MVar blocked queue · f4692220
      Simon Marlow authored
      The list of threads blocked on an MVar is now represented as a list of
      separately allocated objects rather than being linked through the TSOs
      themselves.  This lets us remove a TSO from the list in O(1) time
      rather than O(n) time, by marking the list object.  Removing this
      linear component fixes some pathological performance cases where many
      threads were blocked on an MVar and became unreachable simultaneously
      (nofib/smp/threads007), or when sending an asynchronous exception to a
      TSO in a long list of threads blocked on an MVar.
      
      MVar performance has actually improved by a few percent as a result of
      this change, slightly to my surprise.
      
      This is the final cleanup in the sequence, which let me remove the old
      way of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the
      new way (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent).  It
      is now the case that only the Capability that owns a TSO may modify
      its state (well, almost), and this simplifies various things.  More of
      the RTS is based on message-passing between Capabilities now.
  27. 29 Mar, 2010 1 commit
      New implementation of BLACKHOLEs · 5d52d9b6
      Simon Marlow authored
      This replaces the global blackhole_queue with a clever scheme that
      enables us to queue up blocked threads on the closure that they are
      blocked on, while still avoiding atomic instructions in the common
      case.
      
      Advantages:
      
       - gets rid of a locked global data structure and some tricky GC code
         (replacing it with some per-thread data structures and different
         tricky GC code :)
      
       - wakeups are more prompt: parallel/concurrent performance should
         benefit.  I haven't seen anything dramatic in the parallel
         benchmarks so far, but a couple of threading benchmarks do improve
         a bit.
      
       - waking up a thread blocked on a blackhole is now O(1) (e.g. if
         it is the target of throwTo).
      
       - less sharing and better separation of Capabilities: communication
         is done with messages, the data structures are strictly owned by a
         Capability and cannot be modified except by sending messages.
      
       - this change will ultimately enable us to do more intelligent
         scheduling when threads block on each other.  This is what started
         off the whole thing, but it isn't done yet (#3838).
      
      I'll be documenting all this on the wiki in due course.
  28. 11 Mar, 2010 1 commit
      Use message-passing to implement throwTo in the RTS · 7408b392
      Simon Marlow authored
      This replaces some complicated locking schemes with message-passing
      in the implementation of throwTo. The benefits are
      
       - previously it was impossible to guarantee that a throwTo from
         a thread running on one CPU to a thread running on another CPU
         would be noticed, and we had to rely on the GC to pick up these
         forgotten exceptions. This no longer happens.
      
       - the locking regime is simpler (though the code is about the same
         size)
      
       - threads can be unblocked from a blocked_exceptions queue without
         having to traverse the whole queue now.  It's a rare case, but
         replaces an O(n) operation with an O(1).
      
       - generally we move in the direction of sharing less between
         Capabilities (aka HECs), which will become important with other
         changes we have planned.
      
      Also in this patch I replaced several STM-specific closure types with
      a generic MUT_PRIM closure type, which allowed a lot of code in the GC
      and other places to go away, hence the line-count reduction.  The
      message-passing changes resulted in about a net zero line-count
      difference.
  29. 09 Mar, 2010 1 commit
      Split part of the Task struct into a separate struct InCall · 7effbbbb
      Simon Marlow authored
      The idea is that this leaves Tasks and OSThread in one-to-one
      correspondence.  The part of a Task that represents a call into
      Haskell from C is split into a separate struct InCall, pointed to by
      the Task and the TSO bound to it.  A given OSThread/Task thus always
      uses the same mutex and condition variable, rather than getting a new
      one for each callback.  Conceptually it is simpler, although there are
      more types and indirections in a few places now.
      
      This improves callback performance by removing some of the locks that
      we had to take when making in-calls.  Now we also keep the current Task
      in a thread-local variable if supported by the OS and gcc (currently
      only Linux).
  30. 16 Feb, 2010 1 commit
  31. 02 Dec, 2009 1 commit
  32. 01 Dec, 2009 1 commit
      Make allocatePinned use local storage, and other refactorings · 5270423a
      Simon Marlow authored
      This is a batch of refactoring to remove some of the GC's global
      state, as we move towards CPU-local GC.  
      
        - allocateLocal() now allocates large objects into the local
          nursery, rather than taking a global lock and allocating
          them in gen 0 step 0.
      
        - allocatePinned() was still allocating from global storage and
          taking a lock each time, now it uses local storage. 
          (mallocForeignPtrBytes should be faster with -threaded).
          
        - We had a gen 0 step 0, distinct from the nurseries, which are
          stored in a separate nurseries[] array.  This is slightly strange.
          I removed the g0s0 global that pointed to gen 0 step 0, and
          removed all uses of it.  I think now we don't use gen 0 step 0 at
          all, except possibly when there is only one generation.  Possibly
          more tidying up is needed here.
      
        - I removed the global allocate() function, and renamed
          allocateLocal() to allocate().
      
        - the alloc_blocks global is gone.  MAYBE_GC() and
          doYouWantToGC() now check the local nursery only.
  33. 09 Sep, 2009 1 commit
  34. 05 Aug, 2009 1 commit
  35. 02 Aug, 2009 1 commit
      RTS tidyup sweep, first phase · a2a67cd5
      Simon Marlow authored
      The first phase of this tidyup is focussed on the header files, and in
      particular making sure we are exposing publicly exactly what we need
      to, and no more.
      
       - Rts.h now includes everything that the RTS exposes publicly,
         rather than a random subset of it.
      
       - Most of the public header files have moved into subdirectories, and
         many of them have been renamed.  But clients should not need to
         include any of the other headers directly, just #include the main
         public headers: Rts.h, HsFFI.h, RtsAPI.h.
      
       - All the headers needed for via-C compilation have moved into the
         stg subdirectory, which is self-contained.  Most of the headers for
         the rest of the RTS APIs have moved into the rts subdirectory.
      
       - I left MachDeps.h where it is, because it is so widely used in
         Haskell code.
       
       - I left a deprecated stub for RtsFlags.h in place.  The flag
         structures are now exposed by Rts.h.
      
       - Various internal APIs are no longer exposed by public header files.
      
       - Various bits of dead code and declarations have been removed
      
       - More gcc warnings are turned on, and the RTS code is more
         warning-clean.
      
       - More source files #include "PosixSource.h", and hence only use
         standard POSIX (1003.1c-1995) interfaces.
      
      There is a lot more tidying up still to do, this is just the first
      pass.  I also intend to standardise the names for external RTS APIs
      (e.g use the rts_ prefix consistently), and declare the internal APIs
      as hidden for shared libraries.
  36. 26 Apr, 2009 1 commit
  37. 13 Mar, 2009 1 commit
      Instead of a separate context-switch flag, set HpLim to zero · 304e7fb7
      Simon Marlow authored
      This reduces the latency between a context-switch being triggered and
      the thread returning to the scheduler, which in turn should reduce the
      cost of the GC barrier when there are many cores.
      
      We still retain the old context_switch flag which is checked at the
      end of each block of allocation.  The idea is that setting HpLim may
      fail if the target thread is modifying HpLim at the same time; the
      context_switch flag is a fallback.  It also allows us to "context
      switch soon" without forcing an immediate switch, which can be costly.
  38. 12 Jan, 2009 1 commit
      Keep the remembered sets local to each thread during parallel GC · 6a405b1e
      Simon Marlow authored
      This turns out to be quite vital for parallel programs:
      
        - The way we discover which threads to traverse is by finding
          dirty threads via the remembered sets (aka mutable lists).
      
        - A dirty thread will be on the remembered set of the capability
          that was running it, and we really want to traverse that thread's
          stack using the GC thread for the capability, because it is in
          that CPU's cache.  If we get this wrong, we get penalised badly by
          the memory system.
      
      Previously we had per-capability mutable lists but they were
      aggregated before GC and traversed by just one of the GC threads.
      This resulted in very poor performance particularly for parallel
      programs with deep stacks.
      
      Now we keep per-capability remembered sets throughout GC, which also
      removes a lock (recordMutableGen_sync).