  1. 10 Feb, 2011 1 commit
  2. 02 Feb, 2011 5 commits
    • GC refactoring and cleanup · 18896fa2
      Simon Marlow authored
      Now we keep any partially-full blocks in the gc_thread[] structs after
      each GC, rather than moving them to the generation.  This should give
      us slightly better locality (though I wasn't able to measure any
      difference).
      
      Also in this patch: better sanity checking with THREADED.
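
      A minimal C sketch of the idea; the names (block, gc_thread,
      retain_partial_block) are invented stand-ins for the real RTS
      structures:

        #include <stddef.h>

        /* Illustrative stand-ins for the RTS types. */
        typedef struct block_ {
            size_t free;              /* words used so far */
            size_t size;              /* capacity in words */
            struct block_ *link;
        } block;

        typedef struct {
            block *part_list;         /* partially-full blocks kept between GCs */
        } gc_thread;

        /* At the end of a GC, instead of handing a partly-filled block back
         * to the generation, stash it on the worker's own list so the same
         * thread refills it next time (better locality). */
        static void retain_partial_block(gc_thread *t, block *b)
        {
            if (b->free < b->size) {  /* still has room */
                b->link = t->part_list;
                t->part_list = b;
            }
        }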
    • A small GC optimisation · bef3da1e
      Simon Marlow authored
      Store the *number* of the destination generation in the Bdescr struct,
      so that in evacuate() we don't have to deref gen to get it.
      This is another improvement ported over from my GC branch.
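
      A sketch of the saved dereference; the field names here are
      illustrative rather than the real Bdescr layout:

        #include <stdint.h>

        typedef struct generation_ {
            uint32_t no;              /* generation number */
            /* ... many other fields ... */
        } generation;

        typedef struct bdescr_ {
            generation *dest;         /* destination generation */
            uint16_t    dest_no;      /* cached copy of dest->no */
        } bdescr;

        /* Before: evacuate() needs a dependent load through gen ... */
        static uint32_t dest_no_slow(bdescr *bd) { return bd->dest->no; }

        /* ... after: the number comes straight from the block descriptor,
         * which is already hot in cache. */
        static uint32_t dest_no_fast(bdescr *bd) { return bd->dest_no; }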
    • add a const · d0bfe30b
      Simon Marlow authored
    • add TRY_ACQUIRE_LOCK() · d52d50e4
      Simon Marlow authored
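
      One plausible reading of the new macro on POSIX: a non-blocking
      acquire that reports success instead of waiting.  The macro name is
      from the commit; the pthread-based definition is an assumption:

        #include <pthread.h>
        #include <stdio.h>

        #define TRY_ACQUIRE_LOCK(m) (pthread_mutex_trylock(m) == 0)
        #define RELEASE_LOCK(m)     pthread_mutex_unlock(m)

        int main(void)
        {
            pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

            if (TRY_ACQUIRE_LOCK(&m)) {
                puts("got the lock without blocking");
                RELEASE_LOCK(&m);
            } else {
                puts("lock busy; do something else");
            }
            return 0;
        }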
    • Remove the per-generation mutable lists · 32907722
      Simon Marlow authored
      Now that we use the per-capability mutable lists exclusively.
  3. 01 Feb, 2011 1 commit
  4. 27 Jan, 2011 1 commit
  5. 21 Dec, 2010 1 commit
    • Count allocations more accurately · db0c13a4
      Simon Marlow authored
      The allocation stats (+RTS -s etc.) used to count the slop at the end
      of each nursery block (except the last) as allocated space; now we
      count the allocated words accurately.  This should also make
      allocation figures more predictable.
      
      This has the side effect of reducing the apparent allocations by a
      small amount (~1%), so remember to take this into account when looking
      at nofib results.
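
      A sketch of the difference, using an invented nursery-block shape:

        #include <stddef.h>
        #include <stdio.h>

        typedef struct nblock_ {
            size_t used;              /* words actually handed out */
            size_t size;              /* block capacity in words   */
            struct nblock_ *link;
        } nblock;

        /* Old scheme: every block except the last is charged in full, so
         * the slop at the end of each block inflates the total. */
        static size_t allocated_old(nblock *b)
        {
            size_t w = 0;
            for (; b != NULL; b = b->link)
                w += b->link ? b->size : b->used;
            return w;
        }

        /* New scheme: count only the words actually allocated. */
        static size_t allocated_new(nblock *b)
        {
            size_t w = 0;
            for (; b != NULL; b = b->link)
                w += b->used;
            return w;
        }

        int main(void)
        {
            nblock b2 = { 100, 1024, NULL };
            nblock b1 = { 900, 1024, &b2 };   /* 124 words of slop */
            printf("old: %zu, new: %zu\n",
                   allocated_old(&b1), allocated_new(&b1));
            return 0;
        }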
  6. 16 Dec, 2010 1 commit
  7. 15 Dec, 2010 1 commit
    • Implement stack chunks and separate TSO/STACK objects · f30d5273
      Simon Marlow authored
      This patch makes two changes to the way stacks are managed:
      
      1. The stack is now stored in a separate object from the TSO.
      
      This means that it is easier to replace the stack object for a thread
      when the stack overflows or underflows; we don't have to leave behind
      the old TSO as an indirection any more.  Consequently, we can remove
      ThreadRelocated and deRefTSO(), which were a pain.
      
      This is obviously the right thing, but the last time I tried to do
      it, it made performance worse.  This time I seem to have cracked it.
      
      2. Stacks are now represented as a chain of chunks, rather than
         a single monolithic object.
      
      The big advantage here is that individual chunks are marked clean or
      dirty according to whether they contain pointers to the young
      generation, and the GC can avoid traversing clean stack chunks during
      a young-generation collection.  This means that programs with deep
      stacks will see a big saving in GC overhead when using the default GC
      settings.
      
      A secondary advantage is that there is much less copying involved as
      the stack grows.  Programs that quickly grow a deep stack will see big
      improvements.
      
      In some ways the implementation is simpler, as nothing special needs
      to be done to reclaim stack as the stack shrinks (the GC just recovers
      the dead stack chunks).  On the other hand, we have to manage stack
      underflow between chunks, so there's a new stack frame
      (UNDERFLOW_FRAME), and we now have separate TSO and STACK objects.
      The total amount of code is probably about the same as before.
      
      There are new RTS flags:
      
         -ki<size> Sets the initial thread stack size (default 1k).  E.g.: -ki4k, -ki2m
         -kc<size> Sets the stack chunk size (default 32k)
         -kb<size> Sets the stack chunk buffer size (default 1k)
      
      -ki was previously called just -k, and the old name is still accepted
      for backwards compatibility.  These new options are documented.
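
      A much-simplified sketch of the chunked representation and the
      young-generation win; the real StgStack/UNDERFLOW_FRAME layouts live
      in the RTS headers and differ in detail:

        #include <stdint.h>
        #include <stddef.h>

        typedef uintptr_t StgWord;

        typedef struct chunk_ {
            uint32_t dirty;           /* may point into the nursery?      */
            uint32_t stack_size;      /* words in this chunk              */
            StgWord *sp;              /* stack pointer within this chunk  */
            struct chunk_ *next;      /* older chunk, via underflow frame */
        } chunk;

        /* On a young-generation GC, clean chunks are skipped wholesale:
         * only dirty chunks can contain pointers into the nursery, so a
         * deep but quiet stack costs almost nothing to scan. */
        static void scavenge_stack(chunk *c)
        {
            for (; c != NULL; c = c->next) {
                if (!c->dirty)
                    continue;
                /* ... scan from c->sp to the end of the chunk ... */
                c->dirty = 0;
            }
        }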
  8. 10 Dec, 2010 1 commit
  9. 08 Dec, 2010 1 commit
  10. 25 Nov, 2010 1 commit
  11. 23 Nov, 2010 1 commit
  12. 19 Sep, 2010 1 commit
    • Interruptible FFI calls with pthread_kill and CancelSynchronousIO. v4 · 83d563cb
      Edward Z. Yang authored
      This patch adds support for interruptible FFI calls in the form of a
      new foreign import keyword 'interruptible', which can be used instead
      of 'safe' or 'unsafe'.  Interruptible FFI calls act like safe FFI
      calls, except that the worker thread they run on may be interrupted.
      
      Internally, it replaces BlockedOnCCall_NoUnblockEx with
      BlockedOnCCall_Interruptible, and changes the behavior of the RTS
      so that it does not modify the TSO_ flags in the event of an FFI
      call from a thread that was interruptible.  It also modifies the
      bytecode format for foreign calls, adding an extra Word16 to
      indicate interruptibility.
      
      The semantics of interruption vary from platform to platform, but the
      intent is that any blocking system calls are aborted with an error code.
      This is most useful for calls to system library functions that
      support being interrupted.  There is no support for pre-Vista
      Windows.
      
      There is a partner testsuite patch which adds several tests for this
      functionality.
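
      A standalone demo of the POSIX mechanism the title refers to: a
      no-op handler installed without SA_RESTART makes a blocking read()
      in the worker thread return -1/EINTR when pthread_kill delivers the
      signal.  This sketches the mechanism, not the RTS code itself:

        #include <errno.h>
        #include <pthread.h>
        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        static void wakeup(int sig) { (void)sig; }

        static void *worker(void *arg)
        {
            (void)arg;
            char c;
            if (read(STDIN_FILENO, &c, 1) < 0 && errno == EINTR)
                puts("blocking call aborted with EINTR");
            return NULL;
        }

        int main(void)
        {
            struct sigaction sa;
            sa.sa_handler = wakeup;       /* note: no SA_RESTART */
            sa.sa_flags = 0;
            sigemptyset(&sa.sa_mask);
            sigaction(SIGUSR1, &sa, NULL);

            pthread_t t;
            pthread_create(&t, NULL, worker, NULL);
            sleep(1);                     /* let the worker block */
            pthread_kill(t, SIGUSR1);     /* interrupt it */
            pthread_join(t, NULL);
            return 0;
        }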
  13. 20 Sep, 2010 1 commit
  14. 15 Sep, 2010 1 commit
  15. 13 Sep, 2010 2 commits
  16. 18 Jul, 2010 1 commit
    • Don't check for swept blocks in -DS. · eff182c3
      marcotmarcot authored
      The checkHeap function assumed that the allocated part of a block
      contained only live objects and slop.  This is not true for blocks
      that are collected by mark/sweep.  The code in this patch skips the
      test for such blocks.
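
      A sketch of the fix's shape; BF_MARKED is the RTS flag for blocks
      collected by mark/sweep, but the surrounding code here is invented:

        #include <stdint.h>
        #include <stddef.h>

        #define BF_MARKED 0x1

        typedef struct bd_ {
            uint32_t flags;
            struct bd_ *link;
        } bd;

        static void check_block(bd *b) { (void)b; /* walk objects + slop */ }

        /* A swept block may contain dead objects that are no longer valid
         * closures, so it cannot be walked linearly -- skip it. */
        static void checkHeap(bd *blocks)
        {
            for (bd *b = blocks; b != NULL; b = b->link)
                if (!(b->flags & BF_MARKED))
                    check_block(b);
        }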
  17. 13 Aug, 2010 1 commit
  18. 24 Jul, 2010 1 commit
  19. 09 Jul, 2010 1 commit
    • * storage manager: preserve upper address bits on 64bit machines (thanks to zygoloid) · d12690d5
      Sergei Trofimovich authored
      The patch does not touch amd64: its addresses are at most 48 bits
      long, so amd64 is unaffected.
      
      The issue: during an ia64 GHC bootstrap (both 6.10.4 and 6.12.3) I
      got the following failure in the stage2 phase:
          "inplace/bin/ghc-stage2"   -H32m -O -H64m -O0 -w ...
          ghc-stage2: internal error: evacuate: strange closure type 15
              (GHC version 6.12.3 for ia64_unknown_linux)
              Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug
          make[1]: *** [libraries/dph/dph-base/dist-install/build/Data/Array/Parallel/Base/Hyperstrict.o] Aborted
      
      gdb backtrace (break on 'barf'):
      Breakpoint 1 at 0x400000000469ec31: file rts/RtsMessages.c, line 39.
      (gdb) run -B/var/tmp/portage/dev-lang/ghc-6.12.3/work/ghc-6.12.3/inplace/bin --info
      Starting program: /var/tmp/portage/dev-lang/ghc-6.12.3/work/ghc-6.12.3/inplace/lib/ghc-stage2 -B/var/tmp/portage/dev-lang/ghc-6.12.3/work/ghc-6.12.3/inplace/bin --info
      [Thread debugging using libthread_db enabled]
      
      Breakpoint 1, barf (s=0x40000000047915b0 "evacuate: strange closure type %d") at rts/RtsMessages.c:39
      39        va_start(ap,s);
      (gdb) bt
      #0  barf (s=0x40000000047915b0 "evacuate: strange closure type %d") at rts/RtsMessages.c:39
      #1  0x400000000474a1e0 in evacuate (p=0x6000000000147958) at rts/sm/Evac.c:756
      #2  0x40000000046d68c0 in scavenge_srt (srt=0x6000000000147958, srt_bitmap=7) at rts/sm/Scav.c:348
      ...
      
      > 16:52:53 < zygoloid> slyfox: i'm no ghc expert but it looks like HEAP_ALLOCED_GC(q)
      >                      is returning true for a FUN_STATIC closure
      > 17:18:43 < zygoloid> try: p HEAP_ALLOCED_miss((unsigned long)(*p) >> 20, *p)
      > 17:19:12 < slyfox> (gdb) p HEAP_ALLOCED_miss((unsigned long)(*p) >> 20, *p)
      > 17:19:12 < slyfox> $1 = 0
      > 17:19:40 < zygoloid> i /think/ that means the mblock_cache is broken
      > 17:22:45 < zygoloid> i can't help further. however i am suspicious that you seem to have pointers with similar-looking low 33
      >                      bits and different high 4 bits, and it looks like such pointers get put into the same bucket in
      >                      mblock_cache.
      ...
      > 17:36:16 < zygoloid> slyfox: try changing the definition of MbcCacheLine to StgWord64, see if that helps
      > 17:36:31 < zygoloid> that's in includes/rts/storage/MBlock.h
      And it helped!
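
      The failure mode is easy to reproduce in isolation: with a 32-bit
      cache-line type, two ia64-style addresses that differ only in their
      high bits produce the same truncated tag.  A sketch (the constants
      are illustrative):

        #include <stdint.h>
        #include <stdio.h>

        #define MBLOCK_SHIFT 20   /* mblock number = address >> 20 */

        int main(void)
        {
            /* same low 33 bits, different high bits */
            uint64_t p = 0x4000000004000000ULL;
            uint64_t q = 0x6000000004000000ULL;

            uint64_t mb_p = p >> MBLOCK_SHIFT;
            uint64_t mb_q = q >> MBLOCK_SHIFT;

            printf("32-bit tags collide: %d\n",
                   (uint32_t)mb_p == (uint32_t)mb_q);   /* 1: the bug */
            printf("64-bit tags collide: %d\n",
                   mb_p == mb_q);                       /* 0: the fix */
            return 0;
        }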
  20. 08 Jul, 2010 2 commits
    • New asynchronous exception control API (ghc parts) · ad3b79d2
      Simon Marlow authored
      As discussed on the libraries/haskell-cafe mailing lists
        http://www.haskell.org/pipermail/libraries/2010-April/013420.html
      
      This is a replacement for block/unblock in the asynchronous exceptions
      API to fix a problem whereby a function could unblock asynchronous
      exceptions even if called within a blocked context.
      
      The new terminology is "mask" rather than "block" (to avoid confusion
      due to overloaded meanings of the latter).
      
      In GHC, we changed the names of some primops:
      
        blockAsyncExceptions#   -> maskAsyncExceptions#
        unblockAsyncExceptions# -> unmaskAsyncExceptions#
        asyncExceptionsBlocked# -> getMaskingState#
      
      and added one new primop:
      
        maskUninterruptible#
      
      See the accompanying patch to libraries/base for the API changes.
    • remove outdated comment · cc94b30f
      Simon Marlow authored
  21. 22 Jun, 2010 1 commit
    • Add support for collecting PAPI native events · d4942f78
      dmp@rice.edu authored
      This patch extends the PAPI support in the RTS to allow collection of
      native events.  PAPI can collect data for native events that are
      exposed by the hardware beyond the PAPI preset events.  The native
      events supported on your hardware can be found using the
      papi_native_avail tool.
      
      The RTS already allows users to specify PAPI preset events from the command
      line. This patch extends that support to allow users to specify native events.
      The changes needed are:
      
      1) New option (#) for the RTS PAPI flag for native events. For example, to
         collect the native event 0x40000000, use ./a.out +RTS -a#0x40000000 -sstderr
      
      2) Update the PAPI_FLAGS struct to store whether the user specified event is a
         papi preset or a native event
      
      3) Update init_countable_events function to add the native events after parsing
         the event code and decoding the name using PAPI_event_code_to_name
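
      A sketch of the flag-parsing side only (the PAPI library calls are
      omitted); PAPI_FLAGS here is a stand-in for the RTS struct, extended
      per point (2) with a preset-vs-native bit:

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct {
            unsigned long eventcode;
            int is_native;            /* set when the user wrote -a#<code> */
        } PAPI_FLAGS;

        static void parse_papi_flag(const char *arg, PAPI_FLAGS *out)
        {
            if (arg[0] == '#') {      /* native event, given by code */
                out->is_native = 1;
                out->eventcode = strtoul(arg + 1, NULL, 0);  /* 0x... ok */
            } else {
                out->is_native = 0;
                out->eventcode = 0;   /* preset events resolved by name */
            }
        }

        int main(void)
        {
            PAPI_FLAGS f;
            parse_papi_flag("#0x40000000", &f);  /* +RTS -a#0x40000000 */
            printf("native=%d code=0x%lx\n", f.is_native, f.eventcode);
            return 0;
        }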
  22. 19 Jun, 2010 1 commit
  23. 01 Jan, 2010 1 commit
  24. 17 Jun, 2010 1 commit
  25. 26 May, 2010 1 commit
  26. 21 Apr, 2010 1 commit
    • Use StgWord64 instead of ullong · 81c95f7d
      Ian Lynagh authored
      This patch also fixes ullong_format_string (renamed to showStgWord64)
      so that it works with values outside the 32bit range (trac #3979), and
      simplifies the without-commas case.
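
      The with-commas case is the fiddly part: the formatter has to work
      over the full 64-bit range.  A sketch of one standard way to do it
      (not the RTS code itself):

        #include <inttypes.h>
        #include <stdio.h>

        /* Recurse on n/1000 so the leading group is unpadded and every
         * following group is zero-padded to three digits. */
        static char *show_commas(uint64_t n, char *buf)
        {
            if (n < 1000)
                return buf + sprintf(buf, "%" PRIu64, n);
            buf = show_commas(n / 1000, buf);
            return buf + sprintf(buf, ",%03" PRIu64, n % 1000);
        }

        int main(void)
        {
            char buf[32];               /* 20 digits + 6 commas + NUL */
            show_commas(UINT64_MAX, buf);
            puts(buf);                  /* 18,446,744,073,709,551,615 */
            return 0;
        }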
  27. 01 Apr, 2010 2 commits
    • Remove the IND_OLDGEN and IND_OLDGEN_PERM closure types · 70a2431f
      Simon Marlow authored
      These are no longer used: once upon a time they used to have different
      layout from IND and IND_PERM respectively, but that is no longer the
      case since we changed the remembered set to be an array of addresses
      instead of a linked list of closures.
    • Change the representation of the MVar blocked queue · f4692220
      Simon Marlow authored
      The list of threads blocked on an MVar is now represented as a list of
      separately allocated objects rather than being linked through the TSOs
      themselves.  This lets us remove a TSO from the list in O(1) time
      rather than O(n) time, by marking the list object.  Removing this
      linear component fixes some pathological performance cases where many
      threads were blocked on an MVar and became unreachable simultaneously
      (nofib/smp/threads007), or when sending an asynchronous exception to a
      TSO in a long list of threads blocked on an MVar.
      
      MVar performance has actually improved by a few percent as a result of
      this change, slightly to my surprise.
      
      This is the final cleanup in the sequence, which let me remove the old
      way of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the
      new way (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent).  It
      is now the case that only the Capability that owns a TSO may modify
      its state (well, almost), and this simplifies various things.  More of
      the RTS is based on message-passing between Capabilities now.
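
      A sketch of the O(1) removal trick with invented names: each blocked
      thread owns a separately allocated cell, removal just marks the cell,
      and wakeup skips dead cells lazily:

        #include <stddef.h>
        #include <stdlib.h>

        typedef struct TSO_ { int id; } TSO;

        typedef struct cell_ {
            TSO *tso;                 /* NULL once marked dead */
            struct cell_ *next;
        } cell;

        /* O(1): the TSO keeps a pointer to its own cell, so removing it
         * from the queue needs no traversal. */
        static void remove_blocked(cell *c) { c->tso = NULL; }

        /* Wakeup pops cells until it finds a live waiter. */
        static TSO *pop_blocked(cell **head)
        {
            while (*head != NULL) {
                cell *c = *head;
                *head = c->next;
                TSO *t = c->tso;
                free(c);
                if (t != NULL)
                    return t;
            }
            return NULL;
        }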
  28. 29 Mar, 2010 3 commits
    • Move a thread to the front of the run queue when another thread blocks on it · 2726a2f1
      Simon Marlow authored
      This fixes #3838, and was made possible by the new BLACKHOLE
      infrastructure.  To allow reordering of the run queue I had to make
      it doubly-linked, which entails some extra trickiness with regard to
      GC write barriers and suchlike.
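
      The point of the doubly-linked list in one function (a sketch; the
      real version also has to dirty the TSOs for the GC write barrier):

        #include <stddef.h>

        typedef struct TSO_ {
            struct TSO_ *prev, *next;
        } TSO;

        typedef struct { TSO *head, *tail; } RunQueue;

        /* Move t to the front of the run queue in O(1); without prev
         * pointers this would need a linear search for t's predecessor. */
        static void promote_to_front(RunQueue *q, TSO *t)
        {
            if (q->head == t)
                return;
            t->prev->next = t->next;            /* unlink */
            if (t->next != NULL)
                t->next->prev = t->prev;
            else
                q->tail = t->prev;
            t->prev = NULL;                     /* push on front */
            t->next = q->head;
            q->head->prev = t;
            q->head = t;
        }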
    • New implementation of BLACKHOLEs · 5d52d9b6
      Simon Marlow authored
      This replaces the global blackhole_queue with a clever scheme that
      enables us to queue up blocked threads on the closure that they are
      blocked on, while still avoiding atomic instructions in the common
      case.
      
      Advantages:
      
       - gets rid of a locked global data structure and some tricky GC code
         (replacing it with some per-thread data structures and different
         tricky GC code :)
      
       - wakeups are more prompt: parallel/concurrent performance should
         benefit.  I haven't seen anything dramatic in the parallel
         benchmarks so far, but a couple of threading benchmarks do improve
         a bit.
      
       - waking up a thread blocked on a blackhole is now O(1) (e.g. if
         it is the target of throwTo).
      
       - less sharing and better separation of Capabilities: communication
         is done with messages, the data structures are strictly owned by a
         Capability and cannot be modified except by sending messages.
      
       - this change will ultimately enable us to do more intelligent
         scheduling when threads block on each other.  This is what started
         off the whole thing, but it isn't done yet (#3838).
      
      I'll be documenting all this on the wiki in due course.
    • e74936e3
  29. 11 Mar, 2010 1 commit
    • Use message-passing to implement throwTo in the RTS · 7408b392
      Simon Marlow authored
      This replaces some complicated locking schemes with message-passing
      in the implementation of throwTo. The benefits are
      
       - previously it was impossible to guarantee that a throwTo from
         a thread running on one CPU to a thread running on another CPU
         would be noticed, and we had to rely on the GC to pick up these
         forgotten exceptions. This no longer happens.
      
       - the locking regime is simpler (though the code is about the same
         size)
      
       - threads can be unblocked from a blocked_exceptions queue without
         having to traverse the whole queue now.  It's a rare case, but it
         replaces an O(n) operation with an O(1) one.
      
       - generally we move in the direction of sharing less between
         Capabilities (aka HECs), which will become important with other
         changes we have planned.
      
      Also in this patch I replaced several STM-specific closure types with
      a generic MUT_PRIM closure type, which allowed a lot of code in the GC
      and other places to go away, hence the line-count reduction.  The
      message-passing changes resulted in about a net zero line-count
      difference.
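
      A sketch of the message-passing shape (names invented): each
      Capability owns a locked inbox, and throwTo from another Capability
      enqueues a message that the owner handles at its next safe point, so
      the exception can no longer be forgotten:

        #include <pthread.h>
        #include <stddef.h>

        typedef struct Message_ {
            struct Message_ *link;
            int target_tso;           /* stand-in for the real payload */
        } Message;

        typedef struct {
            pthread_mutex_t lock;
            Message *inbox;
        } Capability;

        static void send_message(Capability *cap, Message *m)
        {
            pthread_mutex_lock(&cap->lock);
            m->link = cap->inbox;
            cap->inbox = m;
            pthread_mutex_unlock(&cap->lock);
            /* ... then interrupt cap so it checks its inbox promptly ... */
        }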
  30. 09 Mar, 2010 1 commit
    • Split part of the Task struct into a separate struct InCall · 7effbbbb
      Simon Marlow authored
      The idea is that this leaves Tasks and OSThread in one-to-one
      correspondence.  The part of a Task that represents a call into
      Haskell from C is split into a separate struct InCall, pointed to by
      the Task and the TSO bound to it.  A given OSThread/Task thus always
      uses the same mutex and condition variable, rather than getting a new
      one for each callback.  Conceptually it is simpler, although there are
      more types and indirections in a few places now.
      
      This improves callback performance by removing some of the locks that
      we had to take when making in-calls.  Now we also keep the current Task
      in a thread-local variable if supported by the OS and gcc (currently
      only Linux).
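
      A sketch of the thread-local fast path mentioned at the end, using
      gcc's __thread (the InCall/Task shapes are simplified):

        #include <stddef.h>

        typedef struct InCall_ {
            struct InCall_ *prev_stack;   /* nested in-calls */
        } InCall;

        typedef struct Task_ {
            InCall *incall;               /* current in-call, if any */
            /* one Task per OS thread: its mutex/condvar live here,
             * so they are never re-created per callback */
        } Task;

        #if defined(__GNUC__)
        static __thread Task *my_task;    /* found without any locking */
        #else
        static Task *my_task;             /* fallback: a slower lookup */
        #endif

        static Task *myTask(void) { return my_task; }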
  31. 03 Feb, 2010 1 commit