- 22 Feb, 2019 (40 commits)
-
Write barriers push the old value of the updated field. For example, for `*q = p;` we do `updateRemembSetPushClosure(*q, q); *q = p;`. Here the second argument (the "origin") is not useful: by the time we do the update, the origin is already invalidated (`q` is no longer the origin of the old `*q`). In general it does not make sense to record origins in write barriers, so we remove all origin arguments from them. Fixes #170.
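To make the resulting shape concrete, here is a minimal C sketch of a post-change write barrier. The single-argument `updateRemembSetPushClosure` follows this commit message; the surrounding declarations and the omitted capability handling are simplifying assumptions, not the actual RTS code.

```c
#include <stdbool.h>

/* Hedged sketch, not the actual RTS code (capability argument omitted): after
 * this change the remembered-set push takes only the overwritten value; the
 * "origin" (the field address q) is not recorded because the store below
 * immediately invalidates it. */
typedef struct StgClosure_ StgClosure;

extern bool nonmoving_write_barrier_enabled;            /* true while a mark is running */
extern void updateRemembSetPushClosure(StgClosure *p);  /* hypothetical one-argument form */

static inline void write_field(StgClosure **q, StgClosure *p)
{
    if (nonmoving_write_barrier_enabled)
        updateRemembSetPushClosure(*q);  /* push the old value only */
    *q = p;                              /* q is no longer the origin of the old *q */
}
```

The point is simply that nothing about the field address survives the store, so there is nothing useful to record.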
-
Ben Gamari authored
-
Ben Gamari authored
-
Fixes #162
-
These flush calls cause capabilities to flush their UpdRemSets too early and then set `upd_rem_set_syncd = true`. As a result the UpdRemSets are not synced in nonmovingBeginFlush and we lose track of some objects. Fixes #159 (return_mem_to_os)
-
This fixes a bug that happens when we start the exit sequence while concurrent mark is running. The mark copies the oldest generation's weak pointers to nonmoving_weak_ptr_list and nonmoving_old_weak_ptr_list (which also act as the snapshot of the weaks) before releasing capabilities, to allow concurrent minor collections (which may add weaks to the oldest_gen weak list). We need to move these weaks back to the oldest generation's weak list so that their C finalizers can be run by runAllCFinalizers in hs_exit_.
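A hedged sketch of what such a fix might look like: the list names follow the commit message, while the `StgWeak` layout, `append_weak_list`, and the entry point name are hypothetical simplifications.

```c
#include <stddef.h>

/* Hedged sketch: splice the snapshot weak lists back onto the oldest
 * generation's weak list at exit so runAllCFinalizers can see them. */
typedef struct StgWeak_ {
    struct StgWeak_ *link;
    /* key, value, finalizer, C-finalizer list omitted */
} StgWeak;

extern StgWeak *nonmoving_weak_ptr_list;       /* snapshot taken by the mark */
extern StgWeak *nonmoving_old_weak_ptr_list;
extern struct { StgWeak *weak_ptr_list; } *oldest_gen;

static StgWeak *append_weak_list(StgWeak *xs, StgWeak *ys)
{
    if (xs == NULL) return ys;
    StgWeak *last = xs;
    while (last->link != NULL)
        last = last->link;
    last->link = ys;
    return xs;
}

/* called on the exit path, before runAllCFinalizers (hypothetical entry point) */
void nonmovingMoveWeaksBack(void)
{
    oldest_gen->weak_ptr_list =
        append_weak_list(nonmoving_weak_ptr_list,
            append_weak_list(nonmoving_old_weak_ptr_list,
                             oldest_gen->weak_ptr_list));
    nonmoving_weak_ptr_list = NULL;
    nonmoving_old_weak_ptr_list = NULL;
}
```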
-
`resurrectThreads` runs code that triggers a write barrier, pushing objects onto the UpdRemSet. This causes some objects to be marked in the next GC cycle. We fix this by resetting the UpdRemSets before releasing the capabilities. See also the comments in the code. Fixes #142
-
Ben Gamari authored
Otherwise we may race with the nonmoving collector. I believe this was the cause of #155.
-
Ben Gamari authored
-
Ben Gamari authored
-
Ben Gamari authored
-
Ben Gamari authored
Fixes #154.
-
Ben Gamari authored
-
Ben Gamari authored
-
Ben Gamari authored
-
Ben Gamari authored
-
Ben Gamari authored
-
Ben Gamari authored
-
Ben Gamari authored
These two collection strategies are mutually exclusive. Fixes #132.
-
Ben Gamari authored
-
Ben Gamari authored
-
Ben Gamari authored
Previously we relied on blatantly undefined behavior. Now we at the very least use volatile loads.
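The commit does not show the code, but the difference can be illustrated generically. This is a plain C illustration, not the RTS source.

```c
#include <stdbool.h>

/* Illustration only: a flag written concurrently by another thread. */
extern bool flag;

bool read_plain(void)
{
    return flag;                      /* plain load under a data race: undefined behaviour */
}

bool read_with_volatile(void)
{
    /* A volatile load forces the compiler to re-read memory on every call.
     * It does not make the race well-defined in C11 terms, but it is the
     * minimal mitigation the commit message describes. */
    return *(volatile bool *)&flag;
}
```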
-
Ben Gamari authored
-
Ben Gamari authored
This uses the nonmoving collector when compiling the testcases.
-
Ben Gamari authored
Previously we would fail to account for changes made by putMVar#'s wakeup loop.
-
Ben Gamari authored
This may happen if two threads enter the thunk at the same time.
-
Ben Gamari authored
-
Ben Gamari authored
The nonmoving GC doesn't support `+RTS -G1`, which this test insists on.
-
Ben Gamari authored
-
This introduces a simple census of the non-moving heap (not to be confused with the heap census used by the heap profiler). It collects basic heap usage information (the number of allocated and free blocks), which is useful when characterising fragmentation of the nonmoving heap.
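A minimal sketch of what such a census computes, assuming a per-segment liveness bitmap; all structure and field names here are hypothetical, not the RTS implementation.

```c
#include <stddef.h>
#include <stdint.h>

/* Hedged sketch: tally allocated vs. free blocks by scanning each segment's
 * liveness bitmap. */
struct CensusSegment {
    struct CensusSegment *link;        /* segments belonging to one sub-allocator */
    uint8_t bitmap[];                  /* one byte per allocation block; nonzero = allocated */
};

struct CensusEntry {
    uint32_t n_allocated_blocks;
    uint32_t n_free_blocks;
};

static struct CensusEntry
nonmoving_census(const struct CensusSegment *segs, uint32_t blocks_per_segment)
{
    struct CensusEntry e = { 0, 0 };
    for (const struct CensusSegment *s = segs; s != NULL; s = s->link) {
        for (uint32_t i = 0; i < blocks_per_segment; i++) {
            if (s->bitmap[i]) e.n_allocated_blocks++;
            else              e.n_free_blocks++;
        }
    }
    return e;
}
```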
-
This introduces a few events to mark key points in the nonmoving garbage collection cycle. These include:

* `EVENT_CONC_MARK_BEGIN`, denoting the beginning of a round of marking. This may happen more than once in a single major collection since the major collector iterates until it hits a fixed point.

* `EVENT_CONC_MARK_END`, denoting the end of a round of marking.

* `EVENT_CONC_SYNC_BEGIN`, denoting the beginning of the post-mark synchronization phase.

* `EVENT_CONC_UPD_REM_SET_FLUSH`, indicating that a capability has flushed its update remembered set.

* `EVENT_CONC_SYNC_END`, denoting that all mutators have flushed their update remembered sets.

* `EVENT_CONC_SWEEP_BEGIN`, denoting the beginning of the sweep portion of the major collection.

* `EVENT_CONC_SWEEP_END`, denoting the end of the sweep portion of the major collection.

The expected ordering of these events within one cycle is sketched below.
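As a rough illustration of that ordering, here is a hedged sketch of where these events would fall in one collection cycle; the `trace` helper, the enum, and `marking_reached_fixed_point` are hypothetical stand-ins, not the RTS tracing API.

```c
#include <stdbool.h>

/* Hypothetical stand-ins, shown only to illustrate event ordering. */
enum ConcEvent {
    EVENT_CONC_MARK_BEGIN, EVENT_CONC_MARK_END,
    EVENT_CONC_SYNC_BEGIN, EVENT_CONC_UPD_REM_SET_FLUSH, EVENT_CONC_SYNC_END,
    EVENT_CONC_SWEEP_BEGIN, EVENT_CONC_SWEEP_END
};
extern void trace(enum ConcEvent ev);
extern bool marking_reached_fixed_point(void);

void concurrent_collection_cycle(void)
{
    do {
        trace(EVENT_CONC_MARK_BEGIN);
        /* ... one round of concurrent marking ... */
        trace(EVENT_CONC_MARK_END);
    } while (!marking_reached_fixed_point());  /* marking may take several rounds */

    trace(EVENT_CONC_SYNC_BEGIN);
    /* each capability flushes its update remembered set during the sync ... */
    trace(EVENT_CONC_UPD_REM_SET_FLUSH);       /* emitted once per capability */
    trace(EVENT_CONC_SYNC_END);                /* all mutators have flushed */

    trace(EVENT_CONC_SWEEP_BEGIN);
    /* ... sweep ... */
    trace(EVENT_CONC_SWEEP_END);
}
```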
-
This extends the non-moving collector to allow concurrent collection.

The full design of the collector implemented here is described in detail in a technical note:

B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell Compiler" (2018)

This extension involves the introduction of a capability-local remembered set, known as the /update remembered set/, which tracks objects which may no longer be visible to the collector due to mutation. To maintain this remembered set we introduce a write barrier on mutations which is enabled while a concurrent mark is underway.

The update remembered set representation is similar to that of the nonmoving mark queue, being a chunked array of `MarkEntry`s. Each `Capability` maintains a single accumulator chunk, which it flushes (a) when the chunk is filled, or (b) when the nonmoving collector enters its post-mark synchronization phase.

While the write barrier touches a significant amount of code, it is conceptually straightforward: the mutator must ensure that the referent of any pointer it overwrites is added to the update remembered set. However, there are a few details:

* In the case of objects with a dirty flag (e.g. `MVar`s) we can exploit the fact that only the *first* mutation requires a write barrier.

* Weak references, as usual, complicate things. In particular, we must ensure that the referent of a weak object is marked if dereferenced by the mutator. For this we (unfortunately) must introduce a read barrier, as described in Note [Concurrent read barrier on deRefWeak#] (in `NonMovingMark.c`).

* Stable names are also a bit tricky, as described in Note [Sweeping stable names in the concurrent collector] (`NonMovingSweep.c`).

We take quite some pains to ensure that the high thread count often seen in parallel Haskell applications doesn't affect pause times. To this end we allow thread stacks to be marked either by the thread itself (when it is executed or when its stack underflows) or by the concurrent mark thread (if the thread owning the stack is never scheduled). There is a non-trivial handshake to ensure that this happens without racing, which is described in Note [StgStack dirtiness flags and concurrent marking].

A sketch of the update remembered set's accumulate-and-flush behaviour is given below.

Co-Authored-by:
Ömer Sinan Ağacan <omer@well-typed.com>
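The following is a hedged sketch of the accumulate-and-flush behaviour of the update remembered set described in the commit above; the chunk size, struct layouts, and helper functions are assumptions, not the RTS implementation.

```c
#include <stddef.h>

/* Hedged sketch: each capability owns one accumulator chunk of mark entries
 * and flushes it when it fills or when the post-mark sync begins. */
typedef struct StgClosure_ StgClosure;

#define CHUNK_ENTRIES 256                    /* hypothetical chunk size */

typedef struct MarkEntry_ { StgClosure *closure; } MarkEntry;

typedef struct UpdRemSetChunk_ {
    struct UpdRemSetChunk_ *next;            /* flushed chunks are chained together */
    size_t n_entries;
    MarkEntry entries[CHUNK_ENTRIES];
} UpdRemSetChunk;

typedef struct UpdRemSet_ { UpdRemSetChunk *accum; } UpdRemSet;

extern void handOverToMarkQueue(UpdRemSetChunk *chunk);  /* hypothetical */
extern UpdRemSetChunk *allocEmptyChunk(void);            /* hypothetical */

/* case (b): called by the collector during the post-mark synchronization */
void updRemSetFlush(UpdRemSet *rs)
{
    if (rs->accum->n_entries == 0)
        return;                           /* nothing to hand over */
    handOverToMarkQueue(rs->accum);
    rs->accum = allocEmptyChunk();        /* start a fresh accumulator */
}

/* the write barrier records each overwritten pointer here */
void updRemSetPush(UpdRemSet *rs, StgClosure *overwritten)
{
    UpdRemSetChunk *c = rs->accum;
    c->entries[c->n_entries++].closure = overwritten;
    if (c->n_entries == CHUNK_ENTRIES)
        updRemSetFlush(rs);               /* case (a): the chunk is full */
}
```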
-
This simply runs the compile_and_run tests with `-xn`, enabling the nonmoving oldest generation.
-
This implements the core heap structure and a serial mark/sweep collector which can be used to manage the oldest-generation heap. This is the first step towards a concurrent mark-and-sweep collector aimed at low-latency applications.

The full design of the collector implemented here is described in detail in a technical note:

B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell Compiler" (2018)

The basic heap structure used in this design is heavily inspired by

K. Ueno & A. Ohori. "A fully concurrent garbage collector for functional programs on multicore processors." /ACM SIGPLAN Notices/ Vol. 51. No. 9 (presented at ICFP 2016)

This design is intended to allow both marking and sweeping concurrently with the execution of a multi-core mutator. Unlike the Ueno design, which requires no global synchronization pauses, the collector introduced here requires a stop-the-world pause at the beginning and end of the mark phase.

To avoid heap fragmentation, the allocator consists of a number of fixed-size /sub-allocators/. Each of these sub-allocators allocates into its own set of /segments/, themselves allocated from the block allocator. Each segment is broken into a set of fixed-size allocation blocks (which back allocations), in addition to a bitmap (used to track the liveness of blocks) and some additional metadata (also used to track liveness). A sketch of this segment layout is given below.

This heap structure enables collection via mark-and-sweep, which can be performed concurrently via a snapshot-at-the-beginning scheme (although concurrent collection is not implemented in this patch).

The mark queue is a fairly straightforward chunked-array structure. The representation is a bit more verbose than a typical mark queue to accommodate a combination of two features:

* a mark FIFO, which improves the locality of marking, reducing one of the major overheads seen in mark/sweep allocators (see [1] for details)

* the selector optimization and indirection shortcutting, which require that we track where we found each reference to an object in case we need to update the reference at a later point (e.g. when we find that it is an indirection). See Note [Origin references in the nonmoving collector] (in `NonMovingMark.h`) for details.

Beyond this the mark/sweep is fairly run-of-the-mill.

[1] R. Garner, S. M. Blackburn, D. Frampton. "Effective Prefetch for Mark-Sweep Garbage Collection." ISMM 2007.

Co-Authored-By:
Ben Gamari <ben@well-typed.com>
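The following is a hedged sketch of the segment layout and bitmap-driven allocation described in the commit above; the names, sizes, and the linear bump-search are illustrative assumptions rather than the actual implementation.

```c
#include <stddef.h>
#include <stdint.h>

/* Hedged sketch: each sub-allocator serves one fixed block size; a segment
 * carries a liveness bitmap plus the fixed-size allocation blocks. */
#define SEGMENT_BLOCKS 1024                 /* hypothetical blocks per segment */

struct Segment {
    struct Segment *link;                   /* segments owned by one sub-allocator */
    uint16_t next_free;                     /* next block index to try */
    uint8_t  bitmap[SEGMENT_BLOCKS];        /* nonzero = block is allocated/live */
    uint8_t  blocks[];                      /* fixed-size allocation blocks follow */
};

struct SubAllocator {
    size_t block_size;                      /* every block in its segments has this size */
    struct Segment *current;                /* segment currently being allocated into */
};

/* Search the bitmap for a free block; returns NULL when the segment is
 * exhausted (the real allocator would then take a new segment from the
 * block allocator). */
static void *alloc_from_segment(struct SubAllocator *a, struct Segment *seg)
{
    for (uint32_t i = seg->next_free; i < SEGMENT_BLOCKS; i++) {
        if (!seg->bitmap[i]) {
            seg->bitmap[i] = 1;                           /* claim the block */
            seg->next_free = (uint16_t)(i + 1);
            return seg->blocks + (size_t)i * a->block_size;
        }
    }
    return NULL;
}
```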
-
This flag will enable the use of a non-moving oldest generation.
-