1. 30 Oct, 2015 1 commit
    • Fix segfault due to reading non-existent memory · 2624298a
      Simon Marlow authored
      It was possible to read non-existent memory if we tried to read the
      srt_offset field of an info table when there is no SRT and the info
      table is right at the start of the text section.
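      
      A minimal sketch of the kind of guard involved (the struct and helper
      below are illustrative, not GHC's actual info-table API; the real fix
      lives in the RTS's SRT handling):
      
          /* Illustrative only: check the "has SRT" count before touching
           * srt_offset, so we never read the word belonging to an info
           * table that sits at the very start of the text section. */
          #include <stddef.h>
      
          typedef struct {
              unsigned srt_len;    /* 0 means this table has no SRT */
              int      srt_offset; /* only meaningful when srt_len != 0 */
          } InfoTable;
      
          const void *srt_of(const InfoTable *info, const char *text_base)
          {
              if (info->srt_len == 0)
                  return NULL;     /* no SRT: don't read srt_offset */
              return text_base + info->srt_offset;
          }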
      
      This actually happened to me; I'm not sure why it never happened
      before.
      
      Test Plan: validate
      
      Reviewers: rwbarton, ezyang, austin, bgamari
      
      Reviewed By: austin, bgamari
      
      Subscribers: thomie
      
      Differential Revision: https://phabricator.haskell.org/D1401
  2. 11 Sep, 2015 1 commit
  3. 28 Jul, 2015 1 commit
    • Eliminate zero_static_objects_list() · f83aab95
      Simon Marlow authored
      Summary:
      [Revised version of D1076 that was committed and then backed out]
      
      In a workload with a large amount of code, zero_static_objects_list()
      takes a significant amount of time, and furthermore it is in the
      single-threaded part of the GC.
      
      This patch uses a slightly fiddly scheme for marking objects on the
      static object lists, using a flag in the low 2 bits that flips between
      two states to indicate whether an object has been visited during this
      GC or not.  We also have to take into account objects that have not
      been visited yet, which might appear at any time due to runtime linking.
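      
      A hedged sketch of the flag scheme (names and representation are
      illustrative, not the actual RTS code): the mark's meaning flips on
      every GC, so clearing the whole static objects list becomes
      unnecessary.
      
          #include <stdint.h>
      
          #define STATIC_FLAG_MASK 3   /* low 2 bits of the static link */
      
          /* Flips between 1 and 2 on every GC; 0 means "never visited",
           * which is what freshly linked objects start with. */
          static uintptr_t current_flag = 1;
      
          static int visited_this_gc(uintptr_t static_link)
          {
              return (static_link & STATIC_FLAG_MASK) == current_flag;
          }
      
          static void prepare_next_gc(void)
          {
              current_flag ^= 3;   /* 1 <-> 2: last GC's marks go stale */
          }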
      
      Test Plan: validate
      
      Reviewers: austin, ezyang, rwbarton, bgamari, thomie
      
      Reviewed By: bgamari, thomie
      
      Subscribers: thomie
      
      Differential Revision: https://phabricator.haskell.org/D1106
  4. 27 Jul, 2015 1 commit
  5. 22 Jul, 2015 1 commit
    • Eliminate zero_static_objects_list() · b949c96b
      Simon Marlow authored
      Summary:
      In a workload with a large amount of code, zero_static_objects_list()
      takes a significant amount of time, and furthermore it is in the
      single-threaded part of the GC.
      
      This patch uses a slightly fiddly scheme for marking objects on the
      static object lists, using a flag in the low 2 bits that flips between
      two states to indicate whether an object has been visited during this
      GC or not.  We also have to take into account objects that have not
      been visited yet, which might appear at any time due to runtime linking.
      
      Test Plan: validate
      
      Reviewers: austin, bgamari, ezyang, rwbarton
      
      Subscribers: thomie
      
      Differential Revision: https://phabricator.haskell.org/D1076
  6. 07 Jul, 2015 1 commit
  7. 20 Jan, 2015 1 commit
  8. 13 Jan, 2015 1 commit
    • Optimise scavenge_large_srt_bitmap · cf8e669b
      Simon Marlow authored
      Very large modules can sometimes contain very large SRT bitmaps (this
      is a separate problem that I need to look into).  The large bitmaps
      often contain a lot of zeros, so this patch skips over empty words in
      the bitmap.
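      
      A hedged sketch of the skipping idea (illustrative, not the real
      scavenge_large_srt_bitmap): when an entire word of the bitmap is
      zero, advance past all of its entries in one step instead of testing
      each bit individually.
      
          #include <stdint.h>
          #include <limits.h>
      
          #define WORD_BITS (sizeof(uintptr_t) * CHAR_BIT)
      
          void scan_srt(const uintptr_t *bitmap, void **entries, size_t n)
          {
              size_t i = 0;
              while (i < n) {
                  uintptr_t w = bitmap[i / WORD_BITS];
                  if (w == 0) {
                      /* every remaining bit in this word is clear */
                      i = (i / WORD_BITS + 1) * WORD_BITS;
                      continue;
                  }
                  if (w & ((uintptr_t)1 << (i % WORD_BITS))) {
                      /* a live entry: the real GC would evacuate it */
                      (void)entries[i];
                  }
                  i++;
              }
          }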
      
      It makes a dramatic difference in the particular example that I saw,
      where an old gen GC was taking 0.5s before this change and 0.07s after
      it.
  9. 23 Oct, 2014 1 commit
    • Fix a rare parallel GC bug · a11f71ef
      Simon Marlow authored
      When there's a conflict between two threads evacuating the same TSO,
      in some cases we would update the incall->tso pointer to point to the
      wrong copy of the TSO.  This would get fixed during the next GC, but
      if the thread completed in the meantime, it would likely crash.  We're
      seeing this about once per day on a heavily loaded machine (it varies
      a lot though).
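      
      A hedged sketch of the race being fixed (illustrative, using C11
      atomics rather than the RTS's own primitives): the loser of the
      evacuation race must take the winner's copy of the TSO before
      anything like incall->tso is updated.
      
          #include <stdatomic.h>
      
          typedef struct TSO {
              _Atomic(struct TSO *) forward;  /* set by the winning copier */
          } TSO;
      
          TSO *evacuate_tso(TSO *old, TSO *my_copy)
          {
              TSO *expected = NULL;
              if (atomic_compare_exchange_strong(&old->forward,
                                                 &expected, my_copy)) {
                  return my_copy;    /* we won: our copy is canonical */
              }
              return expected;       /* we lost: use the winner's copy */
          }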
  10. 21 Oct, 2014 1 commit
  11. 29 Sep, 2014 1 commit
  12. 28 Jul, 2014 1 commit
  13. 29 Apr, 2014 2 commits
    • Rts: Reuse scavenge_small_bitmap (#8742) · 05fcc333
      Arash Rouhani authored
      The function was already inlined in two places. Since the function
      has the STATIC_INLINE annotation, the assembly output should
      be the same.
      
      To convince myself, I did diff the output of the object files before
      and after the patch and they matched on my 64-bit Ubuntu 13.10 machine,
      running gcc 4.8.1-10ubuntu9.
      
      Also, I had to move scavenge_small_bitmap up a bit since it's not in any
      .h-file.
      
      While I was at it, I also applied the analogous patch for Compact.c.
      Though I had to write `thread_small_bitmap` instead of just moving it.
    • Rts: Consistently use StgWord for sizes of bitmaps · 43b3bab3
      Arash Rouhani authored
      There is a long debate in issue #8742, but the main motivation is that this
      allows for applying a patch to reuse the function scavenge_small_bitmap
      without changing the .o-file output.
      
      Similarly, I changed the types in rts/sm/Compact.c, so I can create
      a STATIC_INLINE function for the redundant code block:
      
              while (size > 0) {
                  if ((bitmap & 1) == 0) {
                      thread((StgClosure **)p);
                  }
                  p++;
                  bitmap = bitmap >> 1;
                  size--;
              }
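      
      A hedged sketch of the resulting helper (the exact signature and
      placement in rts/sm/Compact.c may differ):
      
              STATIC_INLINE StgPtr
              thread_small_bitmap (StgPtr p, StgWord size, StgWord bitmap)
              {
                  while (size > 0) {
                      if ((bitmap & 1) == 0) {
                          thread((StgClosure **)p);
                      }
                      p++;
                      bitmap = bitmap >> 1;
                      size--;
                  }
                  return p;
              }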
  14. 29 Mar, 2014 1 commit
    • Add SmallArray# and SmallMutableArray# types · 90329b6c
      tibbe authored
      These array types are smaller than Array# and MutableArray# and are
      faster when the array size is small, as they don't have the overhead
      of a card table. Having no card table reduces the closure size by 2
      words in the typical small-array case and leads to less work when
      updating or garbage-collecting the array.
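      
      A hedged sketch of the layout difference (field names modelled on the
      RTS closure structs; the stub typedefs are added for illustration):
      
          #include <stdint.h>
      
          typedef uintptr_t StgWord;                   /* stub */
          typedef struct { StgWord info; } StgHeader;  /* stub */
          typedef struct StgClosure_ StgClosure;       /* stub */
      
          typedef struct {
              StgHeader   header;
              StgWord     ptrs;        /* number of elements */
              StgWord     size;        /* payload size incl. card table */
              StgClosure *payload[];   /* elements, then card-table bytes */
          } StgMutArrPtrs;
      
          typedef struct {
              StgHeader   header;
              StgWord     ptrs;        /* number of elements */
              StgClosure *payload[];   /* elements only: no card table */
          } StgSmallMutArrPtrs;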
      
      Reduces both the runtime and memory allocation by 8.8% on my insert
      benchmark for the HashMap type in the unordered-containers package,
      which makes use of lots of small arrays. With tuned GC settings
      (i.e. `+RTS -A6M`) the runtime reduction is 15%.
      
      Fixes #8923.
  15. 01 Oct, 2013 2 commits
  16. 09 Jul, 2013 1 commit
  17. 28 Jan, 2013 1 commit
  18. 16 Nov, 2012 1 commit
    • Add a write barrier for TVAR closures · 6d784c43
      Simon Marlow authored
      This improves GC performance when there are a lot of TVars in the
      heap.  For instance, a TChan with a lot of elements causes a massive
      GC drag without this patch.
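      
      A hedged sketch of the barrier (illustrative types, not the RTS's
      actual dirty-TVar machinery): only the first write to a clean TVar
      pays for recording it, after which minor GCs can find old-to-young
      pointers without scanning every TVar in the heap.
      
          enum { CLEAN, DIRTY };
      
          typedef struct {
              int   state;           /* CLEAN or DIRTY */
              void *current_value;
          } TVar;
      
          /* assumed helper: adds tv to the capability's mutable list */
          extern void record_mutated(TVar *tv);
      
          void write_tvar(TVar *tv, void *new_value)
          {
              if (tv->state == CLEAN) {
                  tv->state = DIRTY;  /* barrier fires on first write only */
                  record_mutated(tv);
              }
              tv->current_value = new_value;
          }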
      
      There's more to do - several other STM closure types don't have write
      barriers, so GC performance when there are a lot of threads blocked on
      STM isn't great.  But fixing the problem for TVar is a good start.
  19. 08 Oct, 2012 1 commit
    • Produce new-style Cmm from the Cmm parser · a7c0387d
      Simon Marlow authored
      The main change here is that the Cmm parser now allows high-level cmm
      code with argument-passing and function calls.  For example:
      
      foo ( gcptr a, bits32 b )
      {
        if (b > 0) {
           // we can make tail calls passing arguments:
           jump stg_ap_0_fast(a);
        }
      
        return (x,y);
      }
      
      More details on the new cmm syntax are in Note [Syntax of .cmm files]
      in CmmParse.y.
      
      The old syntax is still more-or-less supported for those occasional
      code fragments that really need to explicitly manipulate the stack.
      However there are a couple of differences: it is now obligatory to
      give a list of live GlobalRegs on every jump, e.g.
      
        jump %ENTRY_CODE(Sp(0)) [R1];
      
      Again, more details in Note [Syntax of .cmm files].
      
      I have rewritten most of the .cmm files in the RTS into the new
      syntax, except for AutoApply.cmm which is generated by the genapply
      program: this file could be generated in the new syntax instead and
      would probably be better off for it, but I ran out of enthusiasm.
      
      Some other changes in this batch:
      
       - The PrimOp calling convention is gone, primops now use the ordinary
         NativeNodeCall convention.  This means that primops and "foreign
         import prim" code must be written in high-level cmm, but they can
         now take more than 10 arguments.
      
       - CmmSink now does constant-folding (should fix #7219)
      
       - .cmm files now go through the cmmPipeline, and as a result we
         generate better code in many cases.  All the object files generated
         for the RTS .cmm files are now smaller.  Performance should be
         better too, but I haven't measured it yet.
      
       - RET_DYN frames are removed from the RTS, lots of code goes away
      
       - we now have some more canned GC points to cover unboxed-tuples with
         2-4 pointers, which will reduce code size a little.
  20. 07 Sep, 2012 1 commit
    • Deprecate lnat, and use StgWord instead · 41737f12
      Simon Marlow authored
      lnat was originally "long unsigned int" but we were using it when we
      wanted a 64-bit type on a 64-bit machine.  This broke on Windows x64,
      where long == int == 32 bits.  Using types of unspecified size is bad,
      but what we really wanted was a type with N bits on an N-bit machine.
      StgWord is exactly that.
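      
      A hedged sketch of the resulting typedefs (illustrative; the real
      definitions live in the RTS headers):
      
          #include <stdint.h>
      
          typedef uintptr_t StgWord;   /* N bits on an N-bit machine */
          typedef StgWord   lnat;      /* deprecated: use StgWord instead */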
      
      lnat was mentioned in some APIs that clients might be using
      (e.g. StackOverflowHook()), so we leave it defined but with a comment
      to say that it's deprecated.
  21. 06 May, 2012 1 commit
  22. 26 Apr, 2012 1 commit
    • Fix warnings on Win64 · 1dbe6d59
      Ian Lynagh authored
      Mostly this meant getting pointer<->int conversions to use the right
      sizes. lnat is now size_t, rather than unsigned long, as that seems a
      better match for how it's used.
  23. 02 Feb, 2011 2 commits
    • GC refactoring and cleanup · 18896fa2
      Simon Marlow authored
      Now we keep any partially-full blocks in the gc_thread[] structs after
      each GC, rather than moving them to the generation.  This should give
      us slightly better locality (though I wasn't able to measure any
      difference).
      
      Also in this patch: better sanity checking with THREADED.
    • A small GC optimisation · bef3da1e
      Simon Marlow authored
      Store the *number* of the destination generation in the Bdescr struct,
      so that in evacuate() we don't have to deref gen to get it.
      This is another improvement ported over from my GC branch.
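      
      A hedged sketch of the idea (field names illustrative): caching the
      generation number in the block descriptor turns two dependent loads
      in evacuate() into one.
      
          typedef struct generation_ {
              unsigned int no;         /* generation number */
              /* ... */
          } generation;
      
          typedef struct bdescr_ {
              generation  *gen;        /* generation this block belongs to */
              unsigned int gen_no;     /* cached copy of gen->no */
          } bdescr;
      
          /* before: dest_no = bd->gen->no;   two dependent memory reads
             after:  dest_no = bd->gen_no;    one memory read            */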
  24. 15 Dec, 2010 1 commit
    • Implement stack chunks and separate TSO/STACK objects · f30d5273
      Simon Marlow authored
      This patch makes two changes to the way stacks are managed:
      
      1. The stack is now stored in a separate object from the TSO.
      
      This means that it is easier to replace the stack object for a thread
      when the stack overflows or underflows; we don't have to leave behind
      the old TSO as an indirection any more.  Consequently, we can remove
      ThreadRelocated and deRefTSO(), which were a pain.
      
      This is obviously the right thing, but the last time I tried to do it
      it made performance worse.  This time I seem to have cracked it.
      
      2. Stacks are now represented as a chain of chunks, rather than
         a single monolithic object.
      
      The big advantage here is that individual chunks are marked clean or
      dirty according to whether they contain pointers to the young
      generation, and the GC can avoid traversing clean stack chunks during
      a young-generation collection.  This means that programs with deep
      stacks will see a big saving in GC overhead when using the default GC
      settings.
      
      A secondary advantage is that there is much less copying involved as
      the stack grows.  Programs that quickly grow a deep stack will see big
      improvements.
      
      In some ways the implementation is simpler, as nothing special needs
      to be done to reclaim stack as the stack shrinks (the GC just recovers
      the dead stack chunks).  On the other hand, we have to manage stack
      underflow between chunks, so there's a new stack frame
      (UNDERFLOW_FRAME), and we now have separate TSO and STACK objects.
      The total amount of code is probably about the same as before.
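      
      A hedged sketch of the two objects involved (fields illustrative,
      modelled on the description above):
      
          typedef struct StgStack_ {
              /* closure header omitted */
              unsigned int  stack_size;  /* size of this chunk, in words */
              unsigned int  dirty;       /* any young-gen pointers here? */
              void        **sp;          /* current stack pointer */
              void         *stack[];     /* the frames themselves */
          } StgStack;
      
          /* The frame at the bottom of every chunk except the oldest:
           * falling off the end of a chunk "underflows" into the next. */
          typedef struct {
              const void *info;          /* UNDERFLOW_FRAME info pointer */
              StgStack   *next_chunk;    /* where the older frames live */
          } UnderflowFrame;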
      
      There are new RTS flags:
      
         -ki<size> Sets the initial thread stack size (default 1k), e.g. -ki4k, -ki2m
         -kc<size> Sets the stack chunk size (default 32k)
         -kb<size> Sets the stack chunk buffer size (default 1k)
      
      -ki was previously called just -k, and the old name is still accepted
      for backwards compatibility.  These new options are documented.
  25. 05 Oct, 2010 1 commit
    • Fix a very rare crash in GHCi · 4a05e613
      Simon Marlow authored
      When a BCO with a zero-length bitmap was right at the edge of
      allocated memory, we were reading a word of non-existent memory.
      
      This showed up as a segfault in T789(ghci) for me, but the crash was
      extremely sensitive and went away with most changes.
      
      Also, optimised scavenge_large_bitmap a bit while I was in there.
  26. 13 Jul, 2010 1 commit
  27. 01 Apr, 2010 2 commits
    • Remove the IND_OLDGEN and IND_OLDGEN_PERM closure types · 70a2431f
      Simon Marlow authored
      These are no longer used: once upon a time they used to have different
      layout from IND and IND_PERM respectively, but that is no longer the
      case since we changed the remembered set to be an array of addresses
      instead of a linked list of closures.
    • Change the representation of the MVar blocked queue · f4692220
      Simon Marlow authored
      The list of threads blocked on an MVar is now represented as a list of
      separately allocated objects rather than being linked through the TSOs
      themselves.  This lets us remove a TSO from the list in O(1) time
      rather than O(n) time, by marking the list object.  Removing this
      linear component fixes some pathological performance cases where many
      threads were blocked on an MVar and became unreachable simultaneously
      (nofib/smp/threads007), or when sending an asynchronous exception to a
      TSO in a long list of threads blocked on an MVar.
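      
      A hedged sketch of the O(1) removal (illustrative; the RTS marks the
      queue cell rather than unlinking it, and dead cells are compacted
      away lazily):
      
          typedef struct MVarTsoQueue_ {
              struct MVarTsoQueue_ *link;   /* next cell in the queue */
              void                 *tso;    /* blocked thread, or NULL */
          } MVarTsoQueue;
      
          static void remove_blocked(MVarTsoQueue *cell)
          {
              cell->tso = NULL;   /* O(1): no list traversal needed;
                                     dead cells are skipped on wakeup */
          }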
      
      MVar performance has actually improved by a few percent as a result of
      this change, slightly to my surprise.
      
      This is the final cleanup in the sequence, which let me remove the old
      way of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the
      new way (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent).  It
      is now the case that only the Capability that owns a TSO may modify
      its state (well, almost), and this simplifies various things.  More of
      the RTS is based on message-passing between Capabilities now.
  28. 29 Mar, 2010 2 commits
    • Move a thread to the front of the run queue when another thread blocks on it · 2726a2f1
      Simon Marlow authored
      This fixes #3838, and was made possible by the new BLACKHOLE
      infrastructure.  To allow reording of the run queue I had to make it
      doubly-linked, which entails some extra trickiness with regard to
      GC write barriers and suchlike.
    • New implementation of BLACKHOLEs · 5d52d9b6
      Simon Marlow authored
      This replaces the global blackhole_queue with a clever scheme that
      enables us to queue up blocked threads on the closure that they are
      blocked on, while still avoiding atomic instructions in the common
      case.
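      
      A hedged sketch of the per-closure queue (fields illustrative,
      modelled on the description): the blackhole's indirectee points at
      the owning TSO until someone blocks, and is then replaced by a queue
      object hanging off the closure itself.
      
          typedef struct BlockingQueue_ {
              struct BlockingQueue_ *link;    /* owner's list of queues */
              void                  *bh;      /* the blackhole itself */
              void                  *owner;   /* TSO evaluating the thunk */
              void                  *waiters; /* messages from blocked TSOs */
          } BlockingQueue;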
      
      Advantages:
      
       - gets rid of a locked global data structure and some tricky GC code
         (replacing it with some per-thread data structures and different
         tricky GC code :)
      
       - wakeups are more prompt: parallel/concurrent performance should
         benefit.  I haven't seen anything dramatic in the parallel
         benchmarks so far, but a couple of threading benchmarks do improve
         a bit.
      
       - waking up a thread blocked on a blackhole is now O(1) (e.g. if
         it is the target of throwTo).
      
       - less sharing and better separation of Capabilities: communication
         is done with messages, the data structures are strictly owned by a
         Capability and cannot be modified except by sending messages.
      
       - this change will ultimately enable us to do more intelligent
         scheduling when threads block on each other.  This is what started
         off the whole thing, but it isn't done yet (#3838).
      
      I'll be documenting all this on the wiki in due course.
  29. 19 Mar, 2010 2 commits
  30. 11 Mar, 2010 1 commit
    • Use message-passing to implement throwTo in the RTS · 7408b392
      Simon Marlow authored
      This replaces some complicated locking schemes with message-passing
      in the implementation of throwTo. The benefits are
      
       - previously it was impossible to guarantee that a throwTo from
         a thread running on one CPU to a thread running on another CPU
         would be noticed, and we had to rely on the GC to pick up these
         forgotten exceptions. This no longer happens.
      
       - the locking regime is simpler (though the code is about the same
         size)
      
       - threads can be unblocked from a blocked_exceptions queue without
         having to traverse the whole queue.  It's a rare case, but it
         replaces an O(n) operation with an O(1) one.
      
       - generally we move in the direction of sharing less between
         Capabilities (aka HECs), which will become important with other
         changes we have planned.
      
      Also in this patch I replaced several STM-specific closure types with
      a generic MUT_PRIM closure type, which allowed a lot of code in the GC
      and other places to go away, hence the line-count reduction.  The
      message-passing changes resulted in about a net zero line-count
      difference.
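      
      A hedged sketch of such a message (fields illustrative): throwTo
      posts one of these to the target's Capability, which executes or
      defers it at a safe point.
      
          typedef struct MessageThrowTo_ {
              struct MessageThrowTo_ *link;       /* capability's inbox */
              void                   *source;     /* TSO raising the exception */
              void                   *target;     /* TSO to receive it */
              void                   *exception;  /* the exception closure */
          } MessageThrowTo;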
  31. 23 Nov, 2009 1 commit
  32. 31 Dec, 2009 1 commit
  33. 17 Dec, 2009 1 commit
    • Fix #650: use a card table to mark dirty sections of mutable arrays · 0417404f
      Simon Marlow authored
      The card table is an array of bytes, placed directly following the
      actual array data.  This means that array reading is unaffected, but
      array writing needs to read the array size from the header in order to
      find the card table.
      
      We use a bytemap rather than a bitmap, because updating the card table
      must be multi-thread safe.  Each byte refers to 128 entries of the
      array, but this is tunable by changing the constant
      MUT_ARR_PTRS_CARD_BITS in includes/Constants.h.
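      
      A hedged sketch of the write path (illustrative, not the real RTS
      macros): each byte-sized card covers 2^MUT_ARR_PTRS_CARD_BITS
      elements, the card table is found by reading the size from the
      header, and a plain byte store keeps the update thread-safe.
      
          #include <stddef.h>
      
          #define CARD_BITS 7   /* 2^7 = 128 elements per card (default) */
      
          typedef struct {
              size_t size;        /* number of elements */
              void  *payload[];   /* elements, then one byte per card */
          } MutArr;
      
          static unsigned char *card_table(MutArr *arr)
          {
              return (unsigned char *)&arr->payload[arr->size];
          }
      
          static void write_elem(MutArr *arr, size_t i, void *v)
          {
              arr->payload[i] = v;
              card_table(arr)[i >> CARD_BITS] = 1;  /* mark card dirty */
          }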
  34. 08 Dec, 2009 1 commit