Work stealing for sparks · cf9650f2
Jost Berthold authored
       Spark stealing support for PARALLEL_HASKELL and THREADED_RTS versions of the RTS.
      
      Spark pools are per-capability: each pool is separately allocated and held in the
      Capability structure. The implementation uses a double-ended queue (deque) with
      CAS-protected access.
      
      The write end of the queue (position bottom) may only be used under
      mutual exclusion, i.e. by exactly one caller at a time.
      Multiple readers can steal()/findSpark() from the read end
      (position top); they are synchronised without a lock by a CAS on
      the top position. One reader wins, and the others return NULL to
      signal failure.
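      The following is a minimal, self-contained C sketch of that access protocol,
      not the RTS code itself: a bounded circular deque whose bottom is written only
      by the owning capability and whose top is claimed by concurrent readers via CAS.
      All names here (SparkDeque, push_bottom, steal_top, DEQUE_SIZE) are hypothetical,
      and resizing, overflow handling and the RTS's exact memory-ordering choices are
      omitted.

      #include <stdatomic.h>
      #include <stddef.h>

      #define DEQUE_SIZE 4096                /* fixed capacity for the sketch      */

      typedef struct {
          void *elements[DEQUE_SIZE];        /* spark closures (opaque here)       */
          atomic_size_t top;                 /* read end: readers advance via CAS  */
          atomic_size_t bottom;              /* write end: owner only              */
      } SparkDeque;

      /* Owner only: publish a new spark at the write end (bottom). */
      static void push_bottom(SparkDeque *q, void *spark)
      {
          size_t b = atomic_load_explicit(&q->bottom, memory_order_relaxed);
          q->elements[b % DEQUE_SIZE] = spark;
          /* Release store: the element must be visible before bottom advances. */
          atomic_store_explicit(&q->bottom, b + 1, memory_order_release);
      }

      /* Any capability: try to take one spark from the read end (top).
       * Exactly one of several concurrent callers wins the CAS; the
       * others get NULL and may retry or move on to another pool. */
      static void *steal_top(SparkDeque *q)
      {
          size_t t = atomic_load_explicit(&q->top, memory_order_acquire);
          size_t b = atomic_load_explicit(&q->bottom, memory_order_acquire);
          if (t >= b)
              return NULL;                   /* pool looks empty                   */
          void *spark = q->elements[t % DEQUE_SIZE];
          if (atomic_compare_exchange_strong(&q->top, &t, t + 1))
              return spark;                  /* this reader won the race           */
          return NULL;                       /* lost the CAS to another reader     */
      }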
      
      Work stealing is invoked when a capability finds no other work (inside
      yieldCapability); it tries all capabilities 0..n-1 twice, stopping early if a
      theft succeeds.
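      A sketch of that retry policy, reusing the hypothetical SparkDeque and
      steal_top from the sketch above (the pool array and its length are likewise
      assumptions, not RTS identifiers):

      static void *find_spark_anywhere(unsigned int n_caps, SparkDeque *pools[])
      {
          for (int attempt = 0; attempt < 2; attempt++) {        /* two full passes     */
              for (unsigned int i = 0; i < n_caps; i++) {        /* capabilities 0..n-1 */
                  void *spark = steal_top(pools[i]);
                  if (spark != NULL)
                      return spark;                              /* theft succeeded     */
              }
          }
          return NULL;                                           /* nothing to steal    */
      }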
      
      Inside schedulePushWork, all considered capabilities (those which were idle and
      could be grabbed) are woken up. Future versions should wake capabilities
      immediately when a new spark is put into the local pool from newSpark(), as
      sketched below.
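      One way that suggested wake-up could look, again as a hedged sketch built on
      the hypothetical SparkDeque/push_bottom above (the global condition variable
      and all names here are assumptions, not the scheduler's actual mechanism):

      #include <pthread.h>

      /* Idle capabilities would block on this condition while waiting for work. */
      static pthread_mutex_t idle_lock      = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  work_available = PTHREAD_COND_INITIALIZER;

      static void new_spark(SparkDeque *local_pool, void *spark)
      {
          push_bottom(local_pool, spark);            /* owner-only write end       */
          pthread_mutex_lock(&idle_lock);
          pthread_cond_broadcast(&work_available);   /* wake idle capabilities     */
          pthread_mutex_unlock(&idle_lock);
      }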
    
    The patch has been re-recorded due to conflicting bugfixes in sparks.c, which also
    resolved a (strange) conflict in the scheduler.