1. 06 Nov, 2011 1 commit
  2. 15 Dec, 2010 1 commit
    • Implement stack chunks and separate TSO/STACK objects · f30d5273
      Simon Marlow authored
      This patch makes two changes to the way stacks are managed:
      
      1. The stack is now stored in a separate object from the TSO.
      
      This means that it is easier to replace the stack object for a thread
      when the stack overflows or underflows; we don't have to leave behind
      the old TSO as an indirection any more.  Consequently, we can remove
      ThreadRelocated and deRefTSO(), which were a pain.
      
      This is obviously the right thing, but the last time I tried to do it
      it made performance worse.  This time I seem to have cracked it.
      
      2. Stacks are now represented as a chain of chunks, rather than
         a single monolithic object.
      
      The big advantage here is that individual chunks are marked clean or
      dirty according to whether they contain pointers to the young
      generation, and the GC can avoid traversing clean stack chunks during
      a young-generation collection.  This means that programs with deep
      stacks will see a big saving in GC overhead when using the default GC
      settings.
      
      A secondary advantage is that there is much less copying involved as
      the stack grows.  Programs that quickly grow a deep stack will see big
      improvements.
      
      In some ways the implementation is simpler, as nothing special needs
      to be done to reclaim stack as the stack shrinks (the GC just recovers
      the dead stack chunks).  On the other hand, we have to manage stack
      underflow between chunks, so there's a new stack frame
      (UNDERFLOW_FRAME), and we now have separate TSO and STACK objects.
      The total amount of code is probably about the same as before.
      
      There are new RTS flags:
      
         -ki<size> Sets the initial thread stack size (default 1k).  E.g.: -ki4k, -ki2m
         -kc<size> Sets the stack chunk size (default 32k)
         -kb<size> Sets the stack chunk buffer size (default 1k)
      
      -ki was previously called just -k, and the old name is still accepted
      for backwards compatibility.  These new options are documented.
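      
      For orientation, here is a hedged C sketch of the shape of the new
      separate stack object (modelled on the RTS's StgStack; the typedefs
      are stand-ins and details may differ).  An UNDERFLOW_FRAME at the
      bottom of a chunk transfers control to the chunk below it:
      
        #include <stdint.h>
        
        typedef uintptr_t StgWord;
        typedef StgWord  *StgPtr;
        typedef struct { const void *info; } StgHeader;   /* stand-in header */
        
        typedef struct StgStack_ {
            StgHeader header;
            uint32_t  stack_size;   /* size of this chunk, in words */
            uint32_t  dirty;        /* non-zero: chunk may point into the young
                                       generation, so the GC must scan it */
            StgPtr    sp;           /* current stack pointer within the chunk */
            StgWord   stack[];      /* the stack words themselves */
        } StgStack;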
  3. 25 Nov, 2010 1 commit
  4. 19 Sep, 2010 1 commit
    • Interruptible FFI calls with pthread_kill and CancelSynchronousIO. v4 · 83d563cb
      Edward Z. Yang authored
      This is a patch that adds support for interruptible FFI calls in the
      form of a new foreign import keyword 'interruptible', which can be
      used instead of 'safe' or 'unsafe'.  Interruptible FFI calls act like
      safe FFI calls, except that the worker thread they run on may be interrupted.
      
      Internally, it replaces BlockedOnCCall_NoUnblockEx with
      BlockedOnCCall_Interruptible, and changes the behavior of the RTS
      so that it does not modify the TSO_ flags in the event of an FFI
      call from a thread that was interruptible.  It also modifies the
      bytecode format for foreign calls, adding an extra Word16 to indicate
      interruptibility.
      
      The semantics of interruption vary from platform to platform, but the
      intent is that any blocking system calls are aborted with an error code.
      This is most useful for making function calls to system library
      functions that support interrupting.  There is no support for pre-Vista
      Windows.
      
      There is a partner testsuite patch which adds several tests for this
      functionality.
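      
      The commit title names pthread_kill as the POSIX mechanism.  As a
      hedged illustration of the intended semantics only (a standalone
      sketch, not RTS code), the C program below shows how a signal sent
      with pthread_kill makes a blocking system call fail with EINTR:
      
        #include <errno.h>
        #include <pthread.h>
        #include <signal.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        
        static void wakeup(int sig) { (void)sig; }  /* no-op: just interrupts */
        
        static void *worker(void *arg)
        {
            char buf[1];
            (void)arg;
            if (read(STDIN_FILENO, buf, 1) < 0 && errno == EINTR)
                printf("blocking read aborted with EINTR\n");
            return NULL;
        }
        
        int main(void)
        {
            struct sigaction sa;
            pthread_t t;
        
            memset(&sa, 0, sizeof sa);
            sa.sa_handler = wakeup;      /* no SA_RESTART: calls return EINTR */
            sigaction(SIGUSR1, &sa, NULL);
        
            pthread_create(&t, NULL, worker, NULL);
            sleep(1);                    /* let the worker block in read() */
            pthread_kill(t, SIGUSR1);    /* interrupt the blocking call */
            pthread_join(t, NULL);
            return 0;
        }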
  5. 01 Apr, 2010 1 commit
    • Change the representation of the MVar blocked queue · f4692220
      Simon Marlow authored
      The list of threads blocked on an MVar is now represented as a list of
      separately allocated objects rather than being linked through the TSOs
      themselves.  This lets us remove a TSO from the list in O(1) time
      rather than O(n) time, by marking the list object.  Removing this
      linear component fixes some pathological performance cases where many
      threads were blocked on an MVar and became unreachable simultaneously
      (nofib/smp/threads007), or when sending an asynchronous exception to a
      TSO in a long list of threads blocked on an MVar.
      
      MVar performance has actually improved by a few percent as a result of
      this change, slightly to my surprise.
      
      This is the final cleanup in the sequence, which let me remove the old
      way of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the
      new way (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent).  It
      is now the case that only the Capability that owns a TSO may modify
      its state (well, almost), and this simplifies various things.  More of
      the RTS is based on message-passing between Capabilities now.
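      
      As a hedged sketch, a separately allocated list cell can be pictured
      like this (modelled on the RTS's StgMVarTSOQueue; stand-in types,
      details may differ).  "Marking" a cell -- overwriting its info
      pointer -- is what makes removal O(1):
      
        typedef struct { const void *info; } StgHeader;   /* stand-in */
        typedef struct StgTSO_ StgTSO;                     /* opaque here */
        
        typedef struct StgMVarTSOQueue_ {
            StgHeader                header;  /* overwritten to mark the cell
                                                 as removed, in O(1) */
            struct StgMVarTSOQueue_ *link;    /* next cell in the MVar's queue */
            StgTSO                  *tso;     /* the blocked thread */
        } StgMVarTSOQueue;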
  6. 11 Mar, 2010 1 commit
    • Use message-passing to implement throwTo in the RTS · 7408b392
      Simon Marlow authored
      This replaces some complicated locking schemes with message-passing
      in the implementation of throwTo. The benefits are
      
       - previously it was impossible to guarantee that a throwTo from
         a thread running on one CPU to a thread running on another CPU
         would be noticed, and we had to rely on the GC to pick up these
         forgotten exceptions. This no longer happens.
      
       - the locking regime is simpler (though the code is about the same
         size)
      
       - threads can be unblocked from a blocked_exceptions queue without
         having to traverse the whole queue now.  It's a rare case, but
         replaces an O(n) operation with an O(1) one.
      
       - generally we move in the direction of sharing less between
         Capabilities (aka HECs), which will become important with other
         changes we have planned.
      
      Also in this patch I replaced several STM-specific closure types with
      a generic MUT_PRIM closure type, which allowed a lot of code in the GC
      and other places to go away, hence the line-count reduction.  The
      message-passing changes resulted in about a net zero line-count
      difference.
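      
      A hedged sketch of what a throwTo message carries (modelled on the
      RTS's MessageThrowTo; stand-in types, details may differ):
      
        typedef struct { const void *info; } StgHeader;   /* stand-in */
        typedef struct StgTSO_ StgTSO;
        typedef struct StgClosure_ StgClosure;
        
        typedef struct MessageThrowTo_ {
            StgHeader               header;
            struct MessageThrowTo_ *link;       /* next message in the target
                                                   Capability's inbox */
            StgTSO                 *source;     /* thread performing throwTo */
            StgTSO                 *target;     /* thread to be interrupted */
            StgClosure             *exception;  /* exception to raise in it */
        } MessageThrowTo;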
  7. 22 Jan, 2010 1 commit
    • When acquiring a spinlock, yieldThread() every 1000 spins (#3553, #3758) · 65ac2f4c
      Simon Marlow authored
      This helps when the thread holding the lock has been descheduled,
      which is the main cause of the "last-core slowdown" problem.  With
      this patch, I get much better results with -N8 on an 8-core box,
      although some benchmarks are still worse than with 7 cores.
      
      I also added a yieldThread() into the any_work() loop of the parallel
      GC when it has no work to do. Oddly, this seems to improve performance
      on the parallel GC benchmarks even when all the cores are busy.
      Perhaps it is due to reducing contention on the memory bus.
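      
      A hedged sketch of the spin-then-yield behaviour (illustrative only;
      the RTS's real ACQUIRE_SPIN_LOCK differs in detail, and sched_yield
      stands in for yieldThread):
      
        #include <sched.h>
        #include <stdint.h>
        
        typedef volatile uintptr_t SpinLock;   /* 1 = free, 0 = held */
        
        static void acquire_spin_lock(SpinLock *lock)
        {
            for (;;) {
                for (int i = 0; i < 1000; i++) {
                    /* try to flip the lock from free (1) to held (0) */
                    if (__sync_val_compare_and_swap(lock, 1, 0) == 1)
                        return;                /* acquired */
                }
                /* 1000 failed spins: the holder may have been descheduled,
                   so give up the CPU instead of burning it */
                sched_yield();
            }
        }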
  8. 17 Dec, 2009 1 commit
    • Fix #650: use a card table to mark dirty sections of mutable arrays · 0417404f
      Simon Marlow authored
      The card table is an array of bytes, placed directly following the
      actual array data.  This means that array reading is unaffected, but
      array writing needs to read the array size from the header in order to
      find the card table.
      
      We use a bytemap rather than a bitmap, because updating the card table
      must be multi-thread safe.  Each byte refers to 128 entries of the
      array, but this is tunable by changing the constant
      MUT_ARR_PTRS_CARD_BITS in includes/Constants.h.
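      
      In outline, a write to element i then dirties one byte of the
      trailing card table.  A hedged, self-contained C sketch of that
      arithmetic (the struct is a stand-in, not the RTS's real array
      layout):
      
        #include <stdint.h>
        
        #define MUT_ARR_PTRS_CARD_BITS 7        /* one card per 128 entries */
        
        typedef struct {
            uintptr_t ptrs;        /* element count, read from the header */
            void     *payload[];   /* elements, immediately followed by the
                                      card table */
        } MutArrPtrs;
        
        static void mark_card(MutArrPtrs *arr, uintptr_t i)
        {
            /* the card table is a byte array placed after the array data */
            uint8_t *cards = (uint8_t *)&arr->payload[arr->ptrs];
            cards[i >> MUT_ARR_PTRS_CARD_BITS] = 1;   /* dirty card for i */
        }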
  9. 25 Nov, 2009 2 commits
    • threadStackOverflow: check whether stack squeezing released some stack (#3677) · d2c874dc
      Simon Marlow authored
      In a stack overflow situation, stack squeezing may reduce the stack
      size, but we don't know whether it has been reduced enough for the
      stack check to succeed if we try again.  Fortunately stack squeezing
      is idempotent, so all we need to do is record whether *any* squeezing
      happened.  If we are at the stack's absolute -K limit, and stack
      squeezing happened, then we try running the thread again.
      
      We also want to avoid enlarging the stack if squeezing has already
      released some of it.  However, we don't want to get into a
      pathological situation where a thread has a nearly full stack (near
      its current limit, but not near the absolute -K limit), keeps
      allocating a little bit, squeezing removes a little bit, and then it
      runs again.  So to avoid this, if we squeezed *and* there is still
      less than BLOCK_SIZE_W words free, then we enlarge the stack anyway.
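      
      The decision just described can be summarised in a small hedged
      sketch (all names are illustrative stand-ins, not the RTS's):
      
        #include <stdbool.h>
        #include <stddef.h>
        
        #define BLOCK_SIZE_W 1024u   /* stand-in; platform-defined in the RTS */
        
        /* true: retry the thread as-is; false: enlarge the stack first */
        static bool retry_without_enlarging(bool squeezed,
                                            bool at_hard_K_limit,
                                            size_t free_words)
        {
            if (at_hard_K_limit)
                return squeezed;     /* can't enlarge; retry only if squeezing
                                        released anything at all */
            /* avoid enlarging only when squeezing freed a decent amount */
            return squeezed && free_words >= BLOCK_SIZE_W;
        }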
    • add a comment to TSO_MARKED · 69ba3e6b
      Simon Marlow authored
  10. 25 Aug, 2009 1 commit
  11. 18 Aug, 2009 1 commit
    • Fix #3429: a tricky race condition · c5cafbcc
      Simon Marlow authored
      There were two bugs, and had it not been for the first one we would
      not have noticed the second one, so this is quite fortunate.
      
      The first bug is in stg_unblockAsyncExceptionszh_ret: when we find a
      pending exception to raise but don't end up raising it, an adjustment
      to the stack pointer was missing.
      
      The second bug was that this case was actually happening at all: it
      ought to be incredibly rare, because the pending exception thread
      would have to be killed between us finding it and attempting to raise
      the exception.  This made me suspicious.  It turned out that there was
      a race condition on the tso->flags field; multiple threads were
      updating this bitmask field non-atomically (one of the bits is the
      dirty-bit for the generational GC).  The fix is to move the dirty bit
      into its own field of the TSO, making the TSO one word larger (sadly).
      c5cafbcc
  12. 02 Aug, 2009 1 commit
    • RTS tidyup sweep, first phase · a2a67cd5
      Simon Marlow authored
      The first phase of this tidyup is focussed on the header files, and in
      particular making sure we are exposing publicly exactly what we need
      to, and no more.
      
       - Rts.h now includes everything that the RTS exposes publicly,
         rather than a random subset of it.
      
       - Most of the public header files have moved into subdirectories, and
         many of them have been renamed.  But clients should not need to
         include any of the other headers directly, just #include the main
         public headers: Rts.h, HsFFI.h, RtsAPI.h.
      
       - All the headers needed for via-C compilation have moved into the
         stg subdirectory, which is self-contained.  Most of the headers for
         the rest of the RTS APIs have moved into the rts subdirectory.
      
       - I left MachDeps.h where it is, because it is so widely used in
         Haskell code.
       
       - I left a deprecated stub for RtsFlags.h in place.  The flag
         structures are now exposed by Rts.h.
      
       - Various internal APIs are no longer exposed by public header files.
      
       - Various bits of dead code and declarations have been removed.
      
       - More gcc warnings are turned on, and the RTS code is more
         warning-clean.
      
       - More source files #include "PosixSource.h", and hence only use
         standard POSIX (1003.1c-1995) interfaces.
      
      There is a lot more tidying up still to do, this is just the first
      pass.  I also intend to standardise the names for external RTS APIs
      (e.g use the rts_ prefix consistently), and declare the internal APIs
      as hidden for shared libraries.
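      
      For example, an external client of the RTS should now need only the
      main public headers.  A minimal hedged sketch (hs_init and hs_exit
      are the standard public entry points):
      
        #include "HsFFI.h"
        
        int main(int argc, char *argv[])
        {
            hs_init(&argc, &argv);   /* bring the RTS up via the public API */
            /* ... call exported Haskell functions here ... */
            hs_exit();
            return 0;
        }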
  13. 09 Sep, 2008 1 commit
  14. 10 Jul, 2008 1 commit
  15. 16 Apr, 2008 1 commit
  16. 11 Sep, 2007 1 commit
    • FIX #1466 (partly), which was causing concprog001(ghci) to fail · 066f1028
      Simon Marlow authored
      An AP_STACK now ensures that there is at least AP_STACK_SPLIM words of
      stack headroom available after unpacking the payload.  Continuations
      that require more than AP_STACK_SPLIM words of stack must do their own
      stack checks instead of aggregating their stack usage into the parent
      frame.  I have made this change for the interpreter, but not for
      compiled code yet - we should do this in the glorious rewrite of the
      code generator.
  17. 18 Jul, 2007 1 commit
  18. 17 Apr, 2007 1 commit
    • Re-working of the breakpoint support · cdce6477
      Simon Marlow authored
      This is the result of Bernie Pope's internship work at MSR Cambridge,
      with some subsequent improvements by me.  The main plan was to
      
       (a) Reduce the overhead for breakpoints, so we could enable 
            the feature by default without incurring a significant penalty
       (b) Scatter more breakpoint sites throughout the code
      
      Currently we can set a breakpoint on almost any subexpression, and
      execution is around 1.5x slower than normal GHCi.  I hope to be able to
      get this down further and/or allow breakpoints to be turned off.
      
      This patch also fixes up :print following the recent changes to
      constructor info tables.  (most of the :print tests now pass)
      
      We now support single-stepping, which just enables all breakpoints.
      
        :step <expr>     executes <expr> with single-stepping turned on
        :step            single-steps from the current breakpoint
      
      The mechanism is quite different to the previous implementation.  We
      share code with the HPC (haskell program coverage) implementation now.
      The coverage pass annotates source code with "tick" locations which
      are tracked by the coverage tool.  In GHCi, each "tick" becomes a
      potential breakpoint location.
      
      Previously breakpoints were compiled into code that magically invoked
      a nested instance of GHCi.  Now, a breakpoint causes the current
      thread to block and control is returned to GHCi.
      
      See the wiki page for more details and the current ToDo list:
      
        http://hackage.haskell.org/trac/ghc/wiki/NewGhciDebugger
  19. 28 Feb, 2007 1 commit
  20. 16 Jun, 2006 1 commit
    • Asynchronous exception support for SMP · b1953bbb
      Simon Marlow authored
      This patch makes throwTo work with -threaded, and also refactors large
      parts of the concurrency support in the RTS to clean things up.  We
      have some new files:
      
        RaiseAsync.{c,h}	asynchronous exception support
        Threads.{c,h}         general threading-related utils
      
      Some of the contents of these new files used to be in Schedule.c,
      which is smaller and cleaner as a result of the split.
      
      Asynchronous exception support in the presence of multiple running
      Haskell threads is rather tricky.  In fact, to my annoyance there are
      still one or two bugs to track down, but the majority of the tests run
      now.
  21. 07 Apr, 2006 1 commit
    • Reorganisation of the source tree · 0065d5ab
      Simon Marlow authored
      Most of the other users of the fptools build system have migrated to
      Cabal, and with the move to darcs we can now flatten the source tree
      without losing history, so here goes.
      
      The main change is that the ghc/ subdir is gone, and most of what it
      contained is now at the top level.  The build system now makes no
      pretense at being multi-project, it is just the GHC build system.
      
      No doubt this will break many things, and there will be a period of
      instability while we fix the dependencies.  A straightforward build
      should work, but I haven't yet fixed binary/source distributions.
      Changes to the Building Guide will follow, too.
  22. 08 Feb, 2006 1 commit
    • make the smp way RTS-only, normal libraries now work with -smp · beb5737b
      Simon Marlow authored
      We had to bite the bullet here and add an extra word to every thunk,
      to enable running ordinary libraries on SMP.  Otherwise, we would have
      needed to ship an extra set of libraries with GHC 6.6 in addition to
      the two sets we already ship (normal + profiled), and all Cabal
      packages would have to be compiled for SMP too.  We decided it best
      just to take the hit now, making SMP easily accessible to everyone in
      GHC 6.6.
      
      Incidentally, although this increases allocation by around 12% on
      average, the performance hit is around 5%, and much less if your inner
      loop doesn't use any laziness.
  23. 21 Oct, 2005 1 commit
    • [project @ 2005-10-21 14:02:17 by simonmar] · 03a9ff01
      simonmar authored
      Big re-hash of the threaded/SMP runtime
      
      This is a significant reworking of the threaded and SMP parts of
      the runtime.  There are two overall goals here:
      
        - To push down the scheduler lock, reducing contention and allowing
          more parts of the system to run without locks.  In particular,
          the scheduler does not require a lock any more in the common case.
      
        - To improve affinity, so that running Haskell threads stick to the
          same OS threads as much as possible.
      
      At this point we have the basic structure working, but there are some
      pieces missing.  I believe it's reasonably stable - the important
      parts of the testsuite pass in all the (normal,threaded,SMP) ways.
      
      In more detail:
      
        - Each capability now has a run queue, instead of one global run
          queue.  The Capability and Task APIs have been completely
          rewritten; see Capability.h and Task.h for the details, and the
          sketch after this list.
      
        - Each capability has its own pool of worker Tasks.  Hence, Haskell
          threads on a Capability's run queue will run on the same worker
          Task(s).  As long as the OS is doing something reasonable, this
          should mean they usually stick to the same CPU.  Another way to
          look at this is that we're assuming each Capability is associated
          with a fixed CPU.
      
        - What used to be StgMainThread is now part of the Task structure.
          Every OS thread in the runtime has an associated Task, and it
          can ask for its current Task at any time with myTask().
      
        - removed RTS_SUPPORTS_THREADS symbol, use THREADED_RTS instead
          (it is now defined for SMP too).
      
        - The RtsAPI has had to change; we must explicitly pass a Capability
          around now.  The previous interface assumed some global state.
          SchedAPI has also changed a lot.
      
        - The OSThreads API now supports thread-local storage, used to
          implement myTask(), although it could be done more efficiently
          using gcc's __thread extension when available.
      
        - I've moved some POSIX-specific stuff into the posix subdirectory,
          moving in the direction of separating out platform-specific
          implementations.
      
        - lots of lock-debugging and assertions in the runtime.  In particular,
          when DEBUG is on, we catch multiple ACQUIRE_LOCK()s, and there is
          also an ASSERT_LOCK_HELD() call.
      
      What's missing so far:
      
        - I have almost certainly broken the Win32 build, will fix soon.
      
        - any kind of thread migration or load balancing.  This is high up
          the agenda, though.
      
        - various performance tweaks to do
      
        - throwTo and forkProcess still do not work in SMP mode
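      
      As a rough illustration of the per-Capability state described in the
      list above, a hedged C sketch (field names follow Capability.h from
      memory and may differ; the types are opaque stand-ins):
      
        typedef struct StgTSO_ StgTSO;
        typedef struct Task_   Task;
        
        typedef struct Capability_ {
            Task   *running_task;   /* worker Task currently animating this
                                       Capability */
            StgTSO *run_queue_hd;   /* this Capability's own run queue ...   */
            StgTSO *run_queue_tl;   /* ... replacing the global run queue    */
            Task   *spare_workers;  /* pool of worker Tasks bound to this
                                       Capability */
        } Capability;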
  24. 22 Apr, 2005 1 commit
    • [project @ 2005-04-22 12:28:00 by simonmar] · ec0984a9
      simonmar authored
      - Now that labels are always prefixed with '&' in .hc code, we have to
        fix some sloppiness in the RTS .cmm code.  Fortunately it's not too
        painful.
      
      - SMP: acquire/release the storage manager lock around
        atomicModifyMutVar#.  This is a hack: atomicModifyMutVar# isn't
        atomic under SMP otherwise, but the SM lock is a large sledgehammer.
        I think I'll apply the sledgehammer to the MVar primitives too, for
        the time being.
  25. 27 Mar, 2005 1 commit
    • [project @ 2005-03-27 13:41:13 by panne] · 03dc2dd3
      panne authored
      * Some preprocessors don't like the C99/C++ '//' comments after a
        directive, so use '/* */' instead. For consistency, a lot of '//' in
        the include files were converted, too.
      
      * UnDOSified libraries/base/cbits/runProcess.c.
      
      * My favourite sport: Killed $Id$s.
  26. 10 Feb, 2005 1 commit
    • [project @ 2005-02-10 13:01:52 by simonmar] · e7c3f957
      simonmar authored
      GC changes: instead of threading the old-generation mutable lists
      through objects in the heap, keep them in a separate flat array.
      
      This has some advantages:
      
        - the IND_OLDGEN object is now only 2 words, so the minimum
          size of a THUNK is now 2 words instead of 3.  This saves
          some amount of allocation (about 2% on average according to
          my measurements), and is more friendly to the cache by
          squashing objects together more.
      
        - keeping the mutable list separate from the IND object
          will be necessary for our multiprocessor implementation.
      
        - removing the mut_link field makes the layout of some objects
          more uniform, leading to less complexity and special cases.
      
        - I also unified the two mutable lists (mut_once_list and mut_list)
          into a single mutable list, which led to more simplifications
          in the GC.
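      
      A hedged sketch of the flat-array idea (stand-in types; not the
      RTS's actual representation):
      
        #include <stdlib.h>
        
        typedef struct {
            void  **elems;   /* pointers to mutated old-generation objects */
            size_t  used;
            size_t  size;
        } MutList;
        
        static void recordMutable(MutList *ml, void *closure)
        {
            if (ml->used == ml->size) {     /* grow the flat array when full */
                ml->size = ml->size ? ml->size * 2 : 64;
                ml->elems = realloc(ml->elems, ml->size * sizeof *ml->elems);
            }
            ml->elems[ml->used++] = closure;   /* no mut_link in the object */
        }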
  27. 18 Nov, 2004 1 commit
  28. 13 Aug, 2004 1 commit
  29. 28 Apr, 2003 1 commit
    • [project @ 2003-04-28 09:55:20 by simonmar] · 1fd3ead7
      simonmar authored
      Following the recent change to the layout of the StgRetDyn frame, we
      now need to bump RESERVED_STACK_WORDS because this governs the amount
      of room which is guaranteed to be available on the stack in the event
      of a stack check failure.
      
      This accounts for at least one cause of recent crashes in the HEAD.
  30. 25 Mar, 2003 2 commits
  31. 11 Dec, 2002 1 commit
    • [project @ 2002-12-11 15:36:20 by simonmar] · 0bffc410
      simonmar authored
      Merge the eval-apply-branch on to the HEAD
      ------------------------------------------
      
      This is a change to GHC's evaluation model in order to ultimately make
      GHC more portable and to reduce complexity in some areas.
      
      At some point we'll update the commentary to describe the new state of
      the RTS.  Pending that, the highlights of this change are:
      
        - No more Su.  The Su register is gone, update frames are one
          word smaller.
      
        - Slow-entry points and arg checks are gone.  Unknown function calls
          are handled by automatically-generated RTS entry points (AutoApply.hc,
          generated by the program in utils/genapply).
      
        - The stack layout is stricter: there are no "pending arguments" on
          the stack any more, the stack is always strictly a sequence of
          stack frames.
      
          This means that there's no need for LOOKS_LIKE_GHC_INFO() or
          LOOKS_LIKE_STATIC_CLOSURE() any more, and GHC doesn't need to know
          how to find the boundary between the text and data segments (BIG WIN!).
      
        - A couple of nasty hacks in the mangler caused by the need to
          identify closure ptrs vs. info tables have gone away.
      
        - Info tables are a bit more complicated.  See InfoTables.h for the
          details.
      
        - As a side effect, GHCi can now deal with polymorphic seq.  Some bugs
          in GHCi which affected primitives and unboxed tuples are now
          fixed.
      
        - Binary sizes are reduced by about 7% on x86.  Performance is roughly
          similar, some programs get faster while some get slower.  I've seen
          GHCi perform worse on some examples, but haven't investigated
          further yet (GHCi performance *should* be about the same or better
          in theory).
      
        - Internally the code generator is rather better organised.  I've moved
          info-table generation from the NCG into the main codeGen where it is
          shared with the C back-end; info tables are now emitted as arrays
          of words in both back-ends.  The NCG is one step closer to being able
          to support profiling.
      
      This has all been fairly thoroughly tested, but no doubt I've messed
      up the commit in some way.
  32. 23 Sep, 2002 1 commit
  33. 28 Nov, 2001 1 commit
  34. 26 Nov, 2001 1 commit
    • [project @ 2001-11-26 16:54:21 by simonmar] · dbef766c
      simonmar authored
      Profiling cleanup.
      
      This commit eliminates some duplication in the various heap profiling
      subsystems, and generally centralises much of the machinery.  The key
      concept is the separation of a heap *census* (which is now done in one
      place only instead of three) from the calculation of retainer sets.
      Previously the retainer profiling code also did a heap census on the
      fly, and lag-drag-void profiling had its own census machinery.
      
      Value-adds:
      
         - you can now restrict a heap profile to certain retainer sets,
           but still display by cost centre (or type, or closure or
           whatever).
      
         - I've added an option to restrict the maximum retainer set size
           (+RTS -R<size>, defaulting to 8).
      
         - I've cleaned up the heap profiling options at the request of
           Simon PJ.  See the help text for details.  The new scheme
           is backwards compatible with the old.
      
         - I've removed some odd bits of LDV or retainer profiling-specific
           code from various parts of the system.
      
         - the time taken doing heap censuses (and retainer set calculation)
           is now accurately reported by the RTS when you say +RTS -Sstderr.
      
      Still to come:
      
         - restricting a profile to a particular biography
           (lag/drag/void/use).  This requires keeping old heap censuses
           around, but the infrastructure is now in place to do this.
  35. 03 Oct, 2001 1 commit
    • [project @ 2001-10-03 13:57:42 by simonmar] · b4623557
      simonmar authored
      Tidy up ghc/includes/Constants and related things.
      
      Now all the constants that the compiler needs to know, such as header
      size, update frame size, info table size and so on are generated
      automatically into a header file, DerivedConstants.h, by a small C
      program in the same way as NativeDefs.h.  The C code in the RTS is
      expected to use sizeof() directly (it already does).
      
      Also tidied up the constants in MachDeps.h - all the constants
      representing the sizes of various types are named SIZEOF_<foo>, to
      match the constants defined in config.h.  PrelStorable.lhs now doesn't
      contain any special knowledge about GHC's conventions as regards the
      size of certain types, this is all in MachDeps.h.
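      
      The generator idea can be sketched as a hedged, standalone
      illustration (the struct here is a stand-in, not GHC's real header
      type): a tiny C program prints sizeof()-derived #defines, and its
      output becomes DerivedConstants.h.
      
        #include <stdio.h>
        
        typedef struct { void *info; } StgHeader;   /* illustrative only */
        
        int main(void)
        {
            printf("/* DerivedConstants.h -- mechanically generated */\n");
            printf("#define SIZEOF_StgHeader %d\n", (int)sizeof(StgHeader));
            printf("#define STD_HDR_SIZE %d\n",
                   (int)(sizeof(StgHeader) / sizeof(void *)));
            return 0;
        }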
  36. 01 Aug, 2001 1 commit
  37. 31 Jul, 2001 2 commits