1. 24 Apr, 2008 1 commit
  2. 16 Apr, 2008 1 commit
  3. 24 Jul, 2007 1 commit
    • hs_exit()/shutdownHaskell(): wait for outstanding foreign calls to complete before returning · 681aad99
      Simon Marlow authored
      This is pertinent to #1177.  When shutting down a DLL, we need to be
      sure that there are no OS threads that can return to the code that we
      are about to unload, and previously the threaded RTS was unsafe in
      this respect.
      
      When exiting a standalone program we don't have to be quite so
      paranoid: all the code will disappear at the same time as any running
      threads.  Happily, exiting a program happens via a different path:
      shutdownHaskellAndExit().  If we're about to exit(), then there's no
      need to wait for foreign calls to complete.
  4. 15 Dec, 2006 1 commit
  5. 07 Oct, 2006 1 commit
  6. 16 Jun, 2006 1 commit
    • Asynchronous exception support for SMP · b1953bbb
      Simon Marlow authored
      This patch makes throwTo work with -threaded, and also refactors large
      parts of the concurrency support in the RTS to clean things up.  We
      have some new files:
      
        RaiseAsync.{c,h}	asynchronous exception support
        Threads.{c,h}         general threading-related utils
      
      Some of the contents of these new files used to be in Schedule.c,
      which is smaller and cleaner as a result of the split.
      
      Asynchronous exception support in the presence of multiple running
      Haskell threads is rather tricky.  In fact, to my annoyance there are
      still one or two bugs to track down, but the majority of the tests run
      now.
  7. 07 Apr, 2006 1 commit
    • Reorganisation of the source tree · 0065d5ab
      Simon Marlow authored
      Most of the other users of the fptools build system have migrated to
      Cabal, and with the move to darcs we can now flatten the source tree
      without losing history, so here goes.
      
      The main change is that the ghc/ subdir is gone, and most of what it
      contained is now at the top level.  The build system now makes no
      pretense at being multi-project, it is just the GHC build system.
      
      No doubt this will break many things, and there will be a period of
      instability while we fix the dependencies.  A straightforward build
      should work, but I haven't yet fixed binary/source distributions.
      Changes to the Building Guide will follow, too.
  8. 24 Mar, 2006 1 commit
    • Add some more flexibility to the multiproc scheduler · 4368121d
      Simon Marlow authored
      There are two new options in the -threaded RTS:
       
        -qm       Don't automatically migrate threads between CPUs
        -qw       Migrate a thread to the current CPU when it is woken up
      
      Previously, both of these were effectively off, i.e. threads were
      migrated between CPUs willy-nilly, and threads were always migrated to
      the current CPU when woken up.  This is the first step in tweaking the
      scheduling for more effective work balancing; there will no doubt be
      more to come.
  9. 09 Feb, 2006 1 commit
    • Merge the smp and threaded RTS ways · eba7b660
      Simon Marlow authored
      Now, the threaded RTS also includes SMP support.  The -smp flag is a
      synonym for -threaded.  The performance implications of this are small
      to negligible, and it results in a code cleanup and reduces the number
      of combinations we have to test.
  10. 25 Nov, 2005 1 commit
  11. 21 Nov, 2005 1 commit
  12. 27 Oct, 2005 1 commit
    • [project @ 2005-10-27 15:26:06 by simonmar] · 677c6345
      simonmar authored
      - Very simple work-sharing amongst Capabilities: whenever a Capability
        detects that it has more than 1 thread in its run queue, it runs
        around looking for empty Capabilities, and shares the threads on its
        run queue equally with the free Capabilities it finds.
      
      - unlock the garbage collector's mutable lists, by having private
        mutable lists per capability (and per generation).  The private
        mutable lists are moved onto the main mutable lists at each GC.
        This pulls the old-generation update code out of the storage manager
        mutex, which is one of the last remaining causes of (alleged) contention.
      
      - Fix some problems with synchronising when a GC is required.  We should
        synchronise quicker now.
  13. 26 Oct, 2005 3 commits
  14. 21 Oct, 2005 1 commit
    • [project @ 2005-10-21 14:02:17 by simonmar] · 03a9ff01
      simonmar authored
      Big re-hash of the threaded/SMP runtime
      
      This is a significant reworking of the threaded and SMP parts of
      the runtime.  There are two overall goals here:
      
        - To push down the scheduler lock, reducing contention and allowing
          more parts of the system to run without locks.  In particular,
          the scheduler does not require a lock any more in the common case.
      
        - To improve affinity, so that running Haskell threads stick to the
          same OS threads as much as possible.
      
      At this point we have the basic structure working, but there are some
      pieces missing.  I believe it's reasonably stable - the important
      parts of the testsuite pass in all the (normal,threaded,SMP) ways.
      
      In more detail:
      
        - Each capability now has a run queue, instead of one global run
          queue.  The Capability and Task APIs have been completely
          rewritten; see Capability.h and Task.h for the details.
      
        - Each capability has its own pool of worker Tasks.  Hence, Haskell
          threads on a Capability's run queue will run on the same worker
          Task(s).  As long as the OS is doing something reasonable, this
          should mean they usually stick to the same CPU.  Another way to
          look at this is that we're assuming each Capability is associated
          with a fixed CPU.
      
        - What used to be StgMainThread is now part of the Task structure.
          Every OS thread in the runtime has an associated Task, and it
          can ask for its current Task at any time with myTask().
      
        - removed RTS_SUPPORTS_THREADS symbol, use THREADED_RTS instead
          (it is now defined for SMP too).
      
        - The RtsAPI has had to change; we must explicitly pass a Capability
          around now.  The previous interface assumed some global state.
          SchedAPI has also changed a lot.
      
        - The OSThreads API now supports thread-local storage, used to
          implement myTask(), although it could be done more efficiently
          using gcc's __thread extension when available.
      
        - I've moved some POSIX-specific stuff into the posix subdirectory,
          moving in the direction of separating out platform-specific
          implementations.
      
        - lots of lock-debugging and assertions in the runtime.  In particular,
          when DEBUG is on, we catch multiple ACQUIRE_LOCK()s, and there is
          also an ASSERT_LOCK_HELD() call.
      
      What's missing so far:
      
        - I have almost certainly broken the Win32 build, will fix soon.
      
        - any kind of thread migration or load balancing.  This is high up
          the agenda, though.
      
        - various performance tweaks to do
      
        - throwTo and forkProcess still do not work in SMP mode
  15. 23 May, 2005 1 commit
    • [project @ 2005-05-23 15:44:10 by simonmar] · 6d16c476
      simonmar authored
      Simplify and improve the Capability-passing machinery for bound
      threads.
      
      The old story was quite complicated: if you find a thread on the run
      queue which the current task can't run, you had to call
      passCapability(), which set a flag saying where the next Capability
      was to go, and then release the Capability.  When multiple
      Capabilities are flying around, it's not clear how this story should
      extend.
      
      The new story is much simpler: each time around the scheduler loop,
      the task looks to see whether it can make any progress, and if not, it
      releases its Capability and wakes up a task which *can* make some
      progress.  The predicate for whether we can make any progress is
      encapsulated in the (inline) function ANY_WORK_FOR_ME(Condition).
      Waking up an appropriate task is encapsulated in the function
      threadRunnable() (previously it was in two places).
      
      The logic in Capability.c is simpler, but unfortunately it is now more
      closely connected with the Scheduler, because it inspects the run
      queue.  However, performance when communicating between bound and
      unbound threads might be better.
      
      The concurrency tests still work, so hopefully this hasn't broken
      anything.
  16. 19 May, 2005 1 commit
  17. 10 May, 2005 1 commit
    • [project @ 2005-05-10 13:25:41 by simonmar] · bf821981
      simonmar authored
      Two SMP-related changes:
      
        - New storage manager interface:
      
          bdescr *allocateLocal(StgRegTable *reg, nat words)
      
          which allocates from the current thread's nursery (being careful
          not to clash with the heap pointer).  It can do this without
          taking any locks; the lock only has to be taken if a block needs
          to be allocated.  allocateLocal() is now used instead of allocate()
          in a few PrimOps.
      
          This removes locks from most Integer operations, cutting down
          the overhead for SMP a bit more.
      
          To make this work, we have to be able to grab the current thread's
          Capability out of thin air (i.e. when called from GMP), so the
          Capability subsystem needs to keep a hash from thread IDs to
          Capabilities.
      
        - Small MVar optimisation: instead of taking the global
          storage-manager lock, do our own locking of MVars with a bit of
          inline assembly (x86 only for now).
  18. 12 Apr, 2005 1 commit
    • [project @ 2005-04-12 09:04:23 by simonmar] · 693550d9
      simonmar authored
      Per-task nurseries for SMP.  This was kind-of implemented before, but
      it's much cleaner now.  There is now one *step* per capability, so we
      have somewhere to hang the block count.  So for SMP, there are simply
      multiple instances of generation 0 step 0.  The rNursery entry in the
      register table now points to the step rather than the head block of
      the nursery.
  19. 06 Apr, 2005 1 commit
    • [project @ 2005-04-06 15:27:06 by simonmar] · 9a92cb1c
      simonmar authored
      Revamp the Task API: now we use the same implementation for threaded
      and SMP.  We also keep per-task timing stats in the threaded RTS now,
      which makes the output of +RTS -sstderr more useful.
  20. 05 Apr, 2005 1 commit
    • [project @ 2005-04-05 12:19:54 by simonmar] · 16214216
      simonmar authored
      Some multi-processor hackery, including
      
        - Don't hang blocked threads off BLACKHOLEs any more, instead keep
          them all on a separate queue which is checked periodically for
          threads to wake up.
      
          This is good because (a) we don't have to worry about locking the
          closure in SMP mode when we want to block on it, and (b) it means
          the standard update code doesn't need to wake up any threads or
          check for a BLACKHOLE_BQ, simplifying the update code.
      
          The downside is that if there are lots of threads blocked on
          BLACKHOLEs, we might have to do a lot of repeated list traversal.
          We don't expect this to be common, though.  conc023 goes slower
          with this change, but we expect most programs to benefit from the
          shorter update code.
      
        - Fixing up the Capability code to handle multiple capabilities (SMP
          mode), and related changes to get the SMP mode at least building.
  21. 27 Mar, 2005 1 commit
    • [project @ 2005-03-27 13:41:13 by panne] · 03dc2dd3
      panne authored
      * Some preprocessors don't like the C99/C++ '//' comments after a
        directive, so use '/* */' instead. For consistency, a lot of '//' in
        the include files were converted, too.
      
      * UnDOSified libraries/base/cbits/runProcess.c.
      
      * My favourite sport: Killed $Id$s.
  22. 14 Oct, 2004 1 commit
    • [project @ 2004-10-14 14:58:37 by simonmar] · bb01a96b
      simonmar authored
      Threaded RTS improvements:
      
       - Unix only: implement waitRead#, waitWrite# and delay# in Haskell,
         by having a single Haskell thread (the IO manager) performing a blocking
         select() operation.  Threads communicate with the IO manager
         via channels.  This is faster than doing the select() in the RTS,
         because we only restart the select() when a new request arrives,
         rather than each time around the scheduler.
      
         On Windows we just make blocking IO calls; we don't have a fancy IO
         manager (yet).
      
       - Simplify the scheduler for the threaded RTS, now that we don't have
         to wait for IO in the scheduler loop.
      
       - Remove detectBlackHoles(), which isn't used now (not sure how long
         this has been unused for... perhaps it was needed back when main threads
         used to be GC roots, so we had to check for blackholes manually rather
         than relying on the GC.)
      
      Signals aren't quite right in the threaded RTS.  In fact, they're
      slightly worse than before, because the thread receiving signals might
      be blocked in a C call - previously there would always be another thread
      stuck in awaitEvent() that would notice the signal, but that's not
      true now.  I can't see an easy fix yet.
  23. 13 Aug, 2004 1 commit
  24. 16 Dec, 2003 1 commit
    • [project @ 2003-12-16 13:27:31 by simonmar] · de02e02a
      simonmar authored
      Clean up Capability API
      ~~~~~~~~~~~~~~~~~~~~~~~
      
      - yieldToReturningWorker() is now yieldCapability(), and performs all
        kinds of yielding (both to returning workers and passing to other
        OS threads).  yieldCapability() does *not* re-acquire a capability.
      
      - waitForWorkCapability() is now waitForCapability().
      
      - releaseCapability() also releases the capability when passing to
        another OS thread.  It is the only way to release a capability (apart
        from yieldCapability(), which calls releaseCapability() internally).
      
      - passCapability() and passCapabilityToWorker() now do not release the
        capability.  They just set a flag to indicate where the capability
        should go when it is next released.
      
      
      Other cleanups:
      
        - Removed all the SMP stuff from Schedule.c.  It had extensive bitrot,
          and was just obfuscating the code.  If it is ever needed again,
          it can be resurrected from CVS.
      
        - Removed some other dead code in Schedule.c, in an attempt to make
          this file more manageable.
  25. 01 Oct, 2003 1 commit
    • [project @ 2003-10-01 10:49:07 by wolfgang] · 324e96d2
      wolfgang authored
      Threaded RTS:
      Don't start new worker threads earlier than necessary.
      After this commit, a Haskell program that uses neither forkOS nor forkIO is
      really single-threaded (rather than using two OS threads internally).
      
      Some details:
      Worker threads are now only created when a capability is released, and
      only when
      (there are no worker threads)
      	&& (there are runnable Haskell threads ||
      	    there are Haskell threads blocked on IO or threadDelay)
      awaitEvent can now be called from bound thread scheduling loops
      (so that we don't have to create a worker thread just to run awaitEvent)
  26. 21 Sep, 2003 1 commit
    • [project @ 2003-09-21 22:20:51 by wolfgang] · 85aa72b9
      wolfgang authored
      Bound Threads
      =============
      
      Introduce a way to use foreign libraries that rely on thread local state
      from multiple threads (mainly affects the threaded RTS).
      
      See the file threads.tex in CVS at haskell-report/ffi/threads.tex
      (not entirely finished yet) for a definition of this extension. A less formal
      description is also found in the documentation of Control.Concurrent.
      
      The changes mostly affect the THREADED_RTS (./configure --enable-threaded-rts),
      except for saving & restoring errno on a per-TSO basis, which is also necessary
      for the non-threaded RTS (a bugfix).
      
      Detailed list of changes
      ------------------------
      
      - errno is saved in the TSO object and restored when necessary:
      ghc/includes/TSO.h, ghc/rts/Interpreter.c, ghc/rts/Schedule.c
      
      - rts_mainLazyIO is no longer needed, main is no special case anymore
      ghc/includes/RtsAPI.h, ghc/rts/RtsAPI.c, ghc/rts/Main.c, ghc/rts/Weak.c
      
      - passCapability: a new function that releases the capability and "passes"
        it to a specific OS thread:
      ghc/rts/Capability.h ghc/rts/Capability.c
      
      - waitThread(), scheduleWaitThread() and schedule() get an optional
        Capability *initialCapability passed as an argument:
      ghc/includes/SchedAPI.h, ghc/rts/Schedule.c, ghc/rts/RtsAPI.c
      
      - Bound Thread scheduling (that's what this is all about):
      ghc/rts/Schedule.h, ghc/rts/Schedule.c
      
      - new Primop isCurrentThreadBound#:
      ghc/compiler/prelude/primops.txt.pp, ghc/includes/PrimOps.h, ghc/rts/PrimOps.hc,
      ghc/rts/Schedule.h, ghc/rts/Schedule.c
      
      - a simple function, rtsSupportsBoundThreads, that returns true if THREADED_RTS
        is defined:
      ghc/rts/Schedule.h, ghc/rts/Schedule.c
      
      - a new implementation of forkProcess (the old implementation stays in place
        for the non-threaded case). Partially broken; works for the standard
        fork-and-exec case, but not for much else. A proper forkProcess is
        really next to impossible to implement:
      ghc/rts/Schedule.c
      
      - Library support for bound threads:
          Control.Concurrent.
            rtsSupportsBoundThreads, isCurrentThreadBound, forkOS,
            runInBoundThread, runInUnboundThread
      libraries/base/Control/Concurrent.hs, libraries/base/Makefile,
      libraries/base/include/HsBase.h, libraries/base/cbits/forkOS.c (new file)
  27. 25 Jan, 2003 1 commit
    • [project @ 2003-01-25 15:54:48 by wolfgang] · af136096
      wolfgang authored
      This commit fixes many bugs and limitations in the threaded RTS.
      There are still some issues remaining, though.
      
      The following bugs should have been fixed:
      
      - [+] "safe" calls could cause crashes
      - [+] yieldToReturningWorker/grabReturnCapability
          -     It used to deadlock.
      - [+] couldn't wake blocked workers
          -     Calls into the RTS could go unanswered for a long time, and
                that includes ordinary callbacks in some circumstances.
      - [+] couldn't block on an MVar and expect to be woken up by a signal
            handler
          -     Depending on the exact situation, the RTS shut down or
                blocked forever and ignored the signal.
      - [+] The locking scheme in RtsAPI.c didn't work
      - [+] run_thread label in wrong place (schedule())
      - [+] Deadlock in GHC.Handle
          -     if a signal arrived at the wrong time, an mvar was never
                filled again
      - [+] Signals delivered to the "wrong" thread were ignored or handled
            too late.
      
      Issues:
      *) If GC can move TSO objects (I don't know - can it?), then ghci
      will occasionally crash when calling foreign functions, because the
      parameters are stored on the TSO stack.
      
      *) There is still a race condition lurking in the code
      (both threaded and non-threaded RTS are affected):
      If a signal arrives after the check for pending signals in
      schedule(), but before the call to select() in awaitEvent(),
      select() will be called anyway. The signal handler will be
      executed much later than expected.
      
      *) For Win32, GHC doesn't yet support non-blocking IO, so while a
      thread is waiting for IO, no call-ins can happen. If the RTS is
      blocked in awaitEvent, it uses a polling loop on Win32, so call-ins
      should work (although the polling loop looks ugly).
      
      *) Deadlock detection is disabled for the threaded rts, because I
      don't know how to do it properly in the presence of foreign call-ins
      from foreign threads.
      This causes the tests conc031, conc033 and conc034 to fail.
      
      *) "safe" is currently treated as "threadsafe". Implementing "safe" in
      a way that blocks other Haskell threads is more difficult than was
      thought at first. I think it could be done with a few additional lines
      of code, but personally, I'm strongly in favour of abolishing the
      distinction.
      
      *) Running finalizers at program termination is inefficient - there
      are two OS threads passing messages back and forth for every finalizer
      that is run. Also (just as in the non-threaded case) the finalizers
      are run in parallel to any remaining Haskell threads and to any
      foreign call-ins that might still happen.
  28. 23 Apr, 2002 1 commit
  29. 15 Feb, 2002 2 commits
    • [project @ 2002-02-15 21:07:19 by sof] · d031626e
      sof authored
      comments only
    • [project @ 2002-02-15 07:50:36 by sof] · 6d7576ef
      sof authored
      Tighten up the Scheduler synchronisation story some more:
      
      - moved thread_ready_cond + the counter rts_n_waiting_tasks
        to Capability.c, leaving only sched_mutex as a synchro
        variable in Scheduler (the less stuff that inhabits
        Schedule.c, the better, methinks.)
      - upon entry to the Scheduler, a worker thread will now call
        Capability.yieldToReturningWorker() to check whether it
        needs to give up its capability.
      - Worker threads that are either idle or lack a capability,
        will now call Capability.waitForWorkCapability() and block.
  30. 14 Feb, 2002 1 commit
    • [project @ 2002-02-14 07:52:05 by sof] · efa41d9d
      sof authored
      Restructured / tidied a bit:
      
      * Capability.grabReturnCapability() is now called by resumeThread().
        It takes care of waiting on the (Capability.c-local) condition
        variable, 'returning_worker_cond' (moved here from Schedule.c)
      
      * If a worker notices upon entry to the Scheduler that there are
        worker threads waiting to deposit results of external calls,
        it gives up its capability by calling Capability.yieldCapability().
      
      * Added Scheduler.waitForWork(), which takes care of blocking
        on 'thread_ready_cond' (+ 'rts_n_waiting_tasks' book-keeping).
      
      Note: changes haven't been fully tested, due to HEAD instability.
  31. 13 Feb, 2002 1 commit
    • [project @ 2002-02-13 08:48:06 by sof] · e289780e
      sof authored
      Revised implementation of multi-threaded callouts (and callins):
      
      - unified synchronisation story for threaded and SMP builds,
        following up on SimonM's suggestion. The following synchro
        variables are now used inside the Scheduler:
      
          + thread_ready_cond - condition variable that is signalled
            when a H. thread has become runnable (via the THREAD_RUNNABLE()
            macro) and there are available capabilities. Waited on:
               + upon schedule() entry (iff no caps. available).
               + when a thread inside of the Scheduler spots that there
                 are no runnable threads to service, but one or more
                 external calls are in progress.
               + in resumeThread(), waiting for a capability to become
                 available.
      
            Prior to waiting on thread_ready_cond, a counter rts_n_waiting_tasks
            is incremented, so that we can keep track of the number of
            readily available worker threads (need this in order to make
            an informed decision on whether or not to create a new thread
            when an external call is made).
      
      
          + returning_worker_cond - condition variable that is waited
            on by an OS thread that has finished executing an external
            call & now wants to feed its result back to the H thread
            that made the call. Before doing so, the counter
            rts_n_returning_workers is incremented.
      
            Upon entry to the Scheduler, this counter is checked for &
            if it is non-zero, the thread gives up its capability and
            signals returning_worker_cond before trying to re-grab a
            capability. (releaseCapability() takes care of this).
      
          + sched_mutex - protect Scheduler data structures.
          + gc_pending_cond - SMP-only condition variable for signalling
            completion of GCs.
      
      - initial implementation of call-ins, i.e., multiple OS threads
        may concurrently call into the RTS without interfering with
        each other. Implementation uses cheesy locking protocol to
        ensure that only one OS thread at a time can construct a
        function application -- stop-gap measure until the RtsAPI
        is revised (as discussed last month) *and* a designated
        block is used for allocating these applications.
      
      - In the implementation of call-ins, the OS thread blocks
        waiting for an RTS worker thread to complete the evaluation
        of the function application. Since main() also uses the
        RtsAPI, provide a separate entry point for it (rts_mainEvalIO()),
        which avoids creating a separate thread to evaluate Main.main,
        that can be done by the thread exec'ing main() directly.
        [Maybe there's a tidier way of doing this, a bit ugly the
        way it is now..]
      
      
      There are a couple of dark corners that need to be looked at,
      such as conditions for shutting down (and how) + consider what
      ought to happen when async I/O is thrown into the mix (I know
      what will happen, but that's maybe not what we want).
      
      Other than that, things are in a generally happy state & I hope
      to declare myself done before the week is up.
  32. 12 Feb, 2002 1 commit
    • [project @ 2002-02-12 15:34:25 by sof] · 3fa80568
      sof authored
      - give rts_n_free_capabilities an interpretation
        in threaded mode (possible values: 0,1)
      - noFreeCapabilities() -> noCapabilities()
  33. 08 Feb, 2002 1 commit
  34. 04 Feb, 2002 1 commit
  35. 31 Jan, 2002 1 commit
    • [project @ 2002-01-31 11:18:06 by sof] · 3b9c5eb2
      sof authored
      First steps towards implementing better interop between
      Concurrent Haskell and native threads.
      
      - factored out Capability handling into a separate source file
        (only the SMP build uses multiple capabilities tho).
      - factored out OS/native threads handling into a separate
        source file, OSThreads.{c,h}. Currently, just a pthreads-based
        implementation; Win32 version to follow.
      - scheduler code now distinguishes between multi-task threaded
        code (SMP) and single-task threaded code ('threaded RTS'),
        but sharing code between these two modes whenever poss.
      
      i.e., just a first snapshot; the bulk of the transitioning code
      remains to be implemented.