  1. 04 Nov, 2011 1 commit
    • Add eventlog event for thread labels · c739d845
      Duncan Coutts authored
      The existing GHC.Conc.labelThread will now also emit the thread
      label into the eventlog. Profiling tools like ThreadScope could then
      use the thread labels rather than thread numbers.
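
      A minimal usage sketch (hedged: labelThread is the real GHC.Conc
      API; everything else here is illustrative). Run with +RTS -l and
      the label should show up against the thread in the eventlog:

        import Control.Concurrent (forkIO, threadDelay)
        import GHC.Conc (labelThread)

        main :: IO ()
        main = do
          tid <- forkIO (threadDelay 100000)  -- a worker that just sleeps
          labelThread tid "worker"            -- now also emitted to the eventlog
          threadDelay 200000                  -- give the worker time to finish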
  2. 02 Nov, 2011 1 commit
    • Overhaul of infrastructure for profiling, coverage (HPC) and breakpoints · 7bb0447d
      Simon Marlow authored
      User visible changes
      ====================
      
      Profiling
      ---------
      
      Flags renamed (the old ones are still accepted for now):
      
        OLD            NEW
        ---------      ------------
        -auto-all      -fprof-auto
        -auto          -fprof-exported
        -caf-all       -fprof-cafs
      
      New flags:
      
        -fprof-auto              Annotates all bindings (not just top-level
                                 ones) with SCCs
      
        -fprof-top               Annotates just top-level bindings with SCCs
      
        -fprof-exported          Annotates just exported bindings with SCCs
      
        -fprof-no-count-entries  Do not maintain entry counts when profiling
                                 (can make profiled code go faster; useful with
                                 heap profiling where entry counts are not used)
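
      As a rough illustration (hedged: the exact placement of the
      generated cost centres is up to the compiler), -fprof-auto behaves
      roughly as if every binding had been annotated by hand:

        f :: Int -> Int
        f x = {-# SCC "f" #-} g x + 1
          where
            g y = {-# SCC "g" #-} y * 2  -- local bindings too; -fprof-top
                                         -- would annotate only f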
      
      Cost-centre stacks have a new semantics, which should in most cases
      result in more useful and intuitive profiles.  If you find this not to
      be the case, please let me know.  This is the area where I have been
      experimenting most, and the current solution is probably not the
      final version; however, it does address all the outstanding bugs and
      seems to be better than GHC 7.2.
      
      Stack traces
      ------------
      
      +RTS -xc now gives more information.  If the exception originates from
      a CAF (as is common, because GHC tends to lift exceptions out to the
      top-level), then the RTS walks up the stack and reports the stack in
      the enclosing update frame(s).
      
      Result: +RTS -xc is much more useful now - but you still have to
      compile for profiling to get it.  I've played around a little with
      adding 'head []' to GHC itself, and +RTS -xc does pinpoint the problem
      quite accurately.
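
      For instance, this minimal program (a hedged reconstruction of that
      experiment) compiled with -prof -fprof-auto and run with +RTS -xc
      should pinpoint the call site of the failing head:

        main :: IO ()
        main = print (head ([] :: [Int]))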
      
      I plan to add more facilities for stack tracing (e.g. in GHCi) in the
      future.
      
      Coverage (HPC)
      --------------
      
       * derived instances are now coloured yellow if they weren't used
       * likewise record field names
       * entry counts are more accurate (hpc --fun-entry-count)
       * tab width is now correct (markup was previously off in source with
         tabs)
      
      Internal changes
      ================
      
      In Core, the Note constructor has been replaced by
      
              Tick (Tickish b) (Expr b)
      
      which is used to represent all the kinds of source annotation we
      support: profiling SCCs, HPC ticks, and GHCi breakpoints.
      
      Depending on the properties of the Tickish, different transformations
      apply to Tick.  See CoreUtils.mkTick for details.
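
      A self-contained sketch of the shape (hedged: constructors and
      fields here are simplified stand-ins; the real definitions in
      CoreSyn carry considerably more detail):

        -- The three kinds of source annotation mentioned above.
        data Tickish id
          = ProfNote   String    -- an SCC label (really a CostCentre)
          | HpcTick    Int       -- an HPC tick number within a module
          | Breakpoint Int [id]  -- a GHCi breakpoint and its free variables

        -- A toy expression type with the single Tick constructor that
        -- replaces Note.
        data Expr b
          = Var b
          | App (Expr b) (Expr b)
          | Tick (Tickish b) (Expr b)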
      
      Tickets
      =======
      
      This commit closes the following tickets, test cases to follow:
      
        - Close #2552: not a bug, but the behaviour is now more intuitive
          (test is T2552)
      
        - Close #680 (test is T680)
      
        - Close #1531 (test is result001)
      
        - Close #949 (test is T949)
      
        - Close #2466: test case has bitrotted (doesn't compile against current
          version of vector-space package)
  3. 08 Aug, 2011 1 commit
  4. 19 May, 2011 2 commits
  5. 11 Apr, 2011 1 commit
  6. 01 Mar, 2011 1 commit
  7. 10 Feb, 2011 1 commit
  8. 02 Feb, 2011 1 commit
  9. 15 Dec, 2010 1 commit
    • Implement stack chunks and separate TSO/STACK objects · f30d5273
      Simon Marlow authored
      This patch makes two changes to the way stacks are managed:
      
      1. The stack is now stored in a separate object from the TSO.
      
      This means that it is easier to replace the stack object for a thread
      when the stack overflows or underflows; we don't have to leave behind
      the old TSO as an indirection any more.  Consequently, we can remove
      ThreadRelocated and deRefTSO(), which were a pain.
      
      This is obviously the right thing, but the last time I tried to do
      it, it made performance worse.  This time I seem to have cracked it.
      
      2. Stacks are now represented as a chain of chunks, rather than
         a single monolithic object.
      
      The big advantage here is that individual chunks are marked clean or
      dirty according to whether they contain pointers to the young
      generation, and the GC can avoid traversing clean stack chunks during
      a young-generation collection.  This means that programs with deep
      stacks will see a big saving in GC overhead when using the default GC
      settings.
      
      A secondary advantage is that there is much less copying involved as
      the stack grows.  Programs that quickly grow a deep stack will see big
      improvements.
      
      In some ways the implementation is simpler, as nothing special needs
      to be done to reclaim stack as the stack shrinks (the GC just recovers
      the dead stack chunks).  On the other hand, we have to manage stack
      underflow between chunks, so there's a new stack frame
      (UNDERFLOW_FRAME), and we now have separate TSO and STACK objects.
      The total amount of code is probably about the same as before.
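
      A toy model of the chunk chain (hedged: real STACK objects are raw
      RTS heap objects, not Haskell values):

        data Frame
          = OrdinaryFrame      -- any normal stack frame
          | UnderflowFrame     -- boundary frame linking to the next chunk

        data Chunk = Chunk
          { chunkDirty  :: Bool     -- contains young-generation pointers?
          , chunkFrames :: [Frame]  -- frames in this chunk, newest first
          }

        -- A thread's stack is a chain of chunks, newest first; a
        -- young-generation collection can skip any chunk whose dirty
        -- flag is False.
        type Stack = [Chunk]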
      
      There are new RTS flags:
      
       -ki<size> Sets the initial thread stack size (default 1k), e.g. -ki4k, -ki2m
         -kc<size> Sets the stack chunk size (default 32k)
         -kb<size> Sets the stack chunk buffer size (default 1k)
      
      -ki was previously called just -k, and the old name is still accepted
      for backwards compatibility.  These new options are documented.
  10. 29 Oct, 2010 1 commit
  11. 23 Oct, 2010 1 commit
  12. 09 Sep, 2010 1 commit
  13. 20 Jul, 2010 1 commit
  14. 01 Jan, 2010 1 commit
  15. 07 Apr, 2010 3 commits
  16. 06 Apr, 2010 1 commit
  17. 01 Apr, 2010 1 commit
    • Change the representation of the MVar blocked queue · f4692220
      Simon Marlow authored
      The list of threads blocked on an MVar is now represented as a list of
      separately allocated objects rather than being linked through the TSOs
      themselves.  This lets us remove a TSO from the list in O(1) time
      rather than O(n) time, by marking the list object.  Removing this
      linear component fixes some pathological performance cases where many
      threads were blocked on an MVar and became unreachable simultaneously
      (nofib/smp/threads007), or when sending an asynchronous exception to a
      TSO in a long list of threads blocked on an MVar.
      
      MVar performance has actually improved by a few percent as a result of
      this change, slightly to my surprise.
      
      This is the final cleanup in the sequence, which let me remove the old
      way of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the
      new way (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent).  It
      is now the case that only the Capability that owns a TSO may modify
      its state (well, almost), and this simplifies various things.  More of
      the RTS is based on message-passing between Capabilities now.
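
      A hedged sketch of the O(1) removal idea (the real queue cells are
      small RTS heap objects, not IORefs):

        import Data.IORef

        -- Each blocked thread gets its own queue cell; removal is O(1):
        -- mark the cell invalid in place, and let later traversals skip
        -- marked cells.
        data Cell a = Cell { cellValid :: IORef Bool, cellPayload :: a }

        newCell :: a -> IO (Cell a)
        newCell x = Cell <$> newIORef True <*> pure x

        removeCell :: Cell a -> IO ()
        removeCell c = writeIORef (cellValid c) False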
  18. 29 Mar, 2010 1 commit
    • New implementation of BLACKHOLEs · 5d52d9b6
      Simon Marlow authored
      This replaces the global blackhole_queue with a clever scheme that
      enables us to queue up blocked threads on the closure that they are
      blocked on, while still avoiding atomic instructions in the common
      case.
      
      Advantages:
      
       - gets rid of a locked global data structure and some tricky GC code
         (replacing it with some per-thread data structures and different
         tricky GC code :)
      
       - wakeups are more prompt: parallel/concurrent performance should
         benefit.  I haven't seen anything dramatic in the parallel
         benchmarks so far, but a couple of threading benchmarks do improve
         a bit.
      
       - waking up a thread blocked on a blackhole is now O(1) (e.g. if
         it is the target of throwTo).
      
       - less sharing and better separation of Capabilities: communication
         is done with messages, the data structures are strictly owned by a
         Capability and cannot be modified except by sending messages.
      
       - this change will ultimately enable us to do more intelligent
         scheduling when threads block on each other.  This is what started
         off the whole thing, but it isn't done yet (#3838).
      
      I'll be documenting all this on the wiki in due course.
  19. 20 Mar, 2010 1 commit
  20. 11 Mar, 2010 1 commit
    • Use message-passing to implement throwTo in the RTS · 7408b392
      Simon Marlow authored
      This replaces some complicated locking schemes with message-passing
      in the implementation of throwTo. The benefits are
      
       - previously it was impossible to guarantee that a throwTo from
         a thread running on one CPU to a thread running on another CPU
         would be noticed, and we had to rely on the GC to pick up these
         forgotten exceptions. This no longer happens.
      
       - the locking regime is simpler (though the code is about the same
         size)
      
       - threads can now be unblocked from a blocked_exceptions queue
         without having to traverse the whole queue.  It's a rare case,
         but it replaces an O(n) operation with an O(1) one.
      
       - generally we move in the direction of sharing less between
         Capabilities (aka HECs), which will become important with other
         changes we have planned.
      
      Also in this patch I replaced several STM-specific closure types with
      a generic MUT_PRIM closure type, which allowed a lot of code in the GC
      and other places to go away, hence the line-count reduction.  The
      message-passing changes resulted in about a net zero line-count
      difference.
  21. 16 Feb, 2010 1 commit
    • Fix a bug that can lead to noDuplicate# not working sometimes. · c44aaa10
      Simon Marlow authored
      The symptom is that under some rare conditions when running in
      parallel, an unsafePerformIO or unsafeInterleaveIO computation might
      be duplicated, so e.g. lazy I/O might give the wrong answer (the
      stream might appear to have duplicate parts or parts missing).
      
      I have a program that demonstrates it: -N3 or more, some lazy I/O, and
      a lot of shared mutable state.  See the comment with stg_noDuplicatezh
      in PrimOps.cmm that explains the problem and the fix.  This took me
      about a day to find :-(
  22. 02 Feb, 2010 1 commit
  23. 17 Dec, 2009 1 commit
    • Fix #650: use a card table to mark dirty sections of mutable arrays · 0417404f
      Simon Marlow authored
      The card table is an array of bytes, placed directly following the
      actual array data.  This means that array reading is unaffected, but
      array writing needs to read the array size from the header in order to
      find the card table.
      
      We use a bytemap rather than a bitmap, because updating the card table
      must be multi-thread safe.  Each byte refers to 128 entries of the
      array, but this is tunable by changing the constant
      MUT_ARR_PTRS_CARD_BITS in includes/Constants.h.
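
      The indexing arithmetic is simple; a hedged Haskell rendering (the
      real code is C in the RTS):

        import Data.Bits (shiftL, shiftR)

        mutArrPtrsCardBits :: Int
        mutArrPtrsCardBits = 7  -- 2^7 = 128 array entries per card byte

        -- Which card byte covers element i of the array.
        cardFor :: Int -> Int
        cardFor i = i `shiftR` mutArrPtrsCardBits

        -- How many card bytes follow an array of n elements.
        cardTableSize :: Int -> Int
        cardTableSize n =
          (n + (1 `shiftL` mutArrPtrsCardBits) - 1)
            `shiftR` mutArrPtrsCardBits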
  24. 12 Dec, 2009 1 commit
    • Expose all EventLog events as DTrace probes · 015d3d46
      chak@cse.unsw.edu.au. authored
      - Defines a DTrace provider, called 'HaskellEvent', that provides a probe
        for every event of the eventlog framework.
      - In contrast to the original eventlog, the DTrace probes are available in
        all flavours of the runtime system (DTrace probes have virtually no
        overhead if not enabled); when -DTRACING is defined, both the regular
        event log and DTrace probes can be used.
      - Currently, Mac OS X only.  User-space DTrace probes are implemented
        differently on Mac OS X than in the original DTrace implementation.
        Nevertheless, it shouldn't be too hard to enable these probes on other
        platforms, too.
      - Documentation is at http://hackage.haskell.org/trac/ghc/wiki/DTrace
  25. 08 Dec, 2009 1 commit
  26. 07 Dec, 2009 1 commit
  27. 01 Dec, 2009 1 commit
    • Make allocatePinned use local storage, and other refactorings · 5270423a
      Simon Marlow authored
      This is a batch of refactoring to remove some of the GC's global
      state, as we move towards CPU-local GC.  
      
        - allocateLocal() now allocates large objects into the local
          nursery, rather than taking a global lock and allocating
          them in gen 0 step 0.
      
        - allocatePinned() was still allocating from global storage and
          taking a lock each time, now it uses local storage. 
          (mallocForeignPtrBytes should be faster with -threaded).
          
        - We had a gen 0 step 0, distinct from the nurseries, which are
          stored in a separate nurseries[] array.  This is slightly strange.
          I removed the g0s0 global that pointed to gen 0 step 0, and
          removed all uses of it.  I think now we don't use gen 0 step 0 at
          all, except possibly when there is only one generation.  Possibly
          more tidying up is needed here.
      
        - I removed the global allocate() function, and renamed
          allocateLocal() to allocate().
      
        - the alloc_blocks global is gone.  MAYBE_GC() and
          doYouWantToGC() now check the local nursery only.
  28. 14 Oct, 2009 1 commit
  29. 25 Sep, 2009 1 commit
    • Add a way to generate tracing events programmatically · 5407ad8e
      Simon Marlow authored
      added:
      
       primop  TraceEventOp "traceEvent#" GenPrimOp
         Addr# -> State# s -> State# s
         { Emits an event via the RTS tracing framework.  The contents
           of the event is the zero-terminated byte string passed as the first
           argument.  The event will be emitted either to the .eventlog file,
           or to stderr, depending on the runtime RTS flags. }
      
      and added the required RTS functionality to support it.  Also a bit of
      refactoring in the RTS tracing code.
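
      A hedged usage sketch of the primop (the IO wrapper here is
      illustrative; later base releases expose this more conveniently
      via Debug.Trace):

        {-# LANGUAGE MagicHash, UnboxedTuples #-}
        import GHC.Exts (Addr#, traceEvent#)
        import GHC.IO (IO(..))

        -- Wrap the raw primop in IO; unboxed string literals are
        -- zero-terminated, as the primop requires.
        traceEventAddr :: Addr# -> IO ()
        traceEventAddr s = IO (\st -> (# traceEvent# s st, () #))

        main :: IO ()
        main = traceEventAddr "hello, eventlog"#  -- run with +RTS -l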
  30. 18 Aug, 2009 1 commit
    • Fix #3429: a tricky race condition · c5cafbcc
      Simon Marlow authored
      There were two bugs, and had it not been for the first one we would
      not have noticed the second one, so this is quite fortunate.
      
      The first bug is in stg_unblockAsyncExceptionszh_ret: when we found a
      pending exception to raise but didn't end up raising it, there was a
      missing adjustment to the stack pointer.
      
      The second bug was that this case was actually happening at all: it
      ought to be incredibly rare, because the pending exception thread
      would have to be killed between us finding it and attempting to raise
      the exception.  This made me suspicious.  It turned out that there was
      a race condition on the tso->flags field; multiple threads were
      updating this bitmask field non-atomically (one of the bits is the
      dirty-bit for the generational GC).  The fix is to move the dirty bit
      into its own field of the TSO, making the TSO one word larger (sadly).
  31. 03 Aug, 2009 1 commit
  32. 24 Jun, 2009 1 commit
  33. 13 Jun, 2009 1 commit
  34. 10 Jun, 2009 1 commit
  35. 02 Jun, 2009 1 commit
  36. 15 May, 2009 1 commit
  37. 13 Mar, 2009 1 commit
    • Instead of a separate context-switch flag, set HpLim to zero · 304e7fb7
      Simon Marlow authored
      This reduces the latency between a context-switch being triggered and
      the thread returning to the scheduler, which in turn should reduce the
      cost of the GC barrier when there are many cores.
      
      We still retain the old context_switch flag which is checked at the
      end of each block of allocation.  The idea is that setting HpLim may
      fail if the target thread is modifying HpLim at the same time; the
      context_switch flag is a fallback.  It also allows us to "context
      switch soon" without forcing an immediate switch, which can be costly.