Commit 1f5d8364 authored by Simon Marlow

Allow allocNursery() to allocate single blocks (#7257)

Forcing large allocations here can create serious fragmentation in
some cases, and since the large allocations are only a small
optimisation, we should allow the nursery to hoover up small blocks
before allocating large chunks.
parent a17da16f
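
In effect, the nursery allocation loop stops insisting on a run of at least
16 contiguous blocks (previously falling back to allocGroup() otherwise) and
instead accepts anything from a single block up to a megablock's worth. The
following is only a condensed restatement of the new loop from the second
hunk below, with the block-linking code elided; the identifiers are the RTS's
own:

    while (blocks > 0) {
        n  = stg_min(BLOCKS_PER_MBLOCK, blocks); // ask for up to a megablock's worth
        bd = allocLargeChunk(1, n);              // ...but accept as little as 1 block
        n  = bd->blocks;                         // how many blocks we actually got (<= n)
        blocks -= n;
        /* ... link the n blocks onto the nursery chain ... */
    }
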
@@ -393,12 +393,18 @@ finish:
 // Allocate a chunk of blocks that is at least min and at most max
 // blocks in size. This API is used by the nursery allocator that
 // wants contiguous memory preferably, but doesn't require it. When
-// memory is fragmented we might have lots of large chunks that are
+// memory is fragmented we might have lots of chunks that are
 // less than a full megablock, so allowing the nursery allocator to
 // use these reduces fragmentation considerably. e.g. on a GHC build
 // with +RTS -H, I saw fragmentation go from 17MB down to 3MB on a
 // single compile.
 //
+// Further to this: in #7257 there is a program that creates serious
+// fragmentation such that the heap is full of tiny <4 block chains.
+// The nursery allocator therefore has to use single blocks to avoid
+// fragmentation, but we make sure that we allocate large blocks
+// preferably if there are any.
+//
 bdescr *
 allocLargeChunk (W_ min, W_ max)
 {
......
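
To make the allocLargeChunk() contract described in the comment above concrete,
here is a small, self-contained toy program. It is not the RTS block allocator;
the Run free-list, alloc_chunk() and the sample sizes are invented purely for
illustration. It hands back a contiguous run of at least min and at most max
blocks, preferring the largest run available, so a caller passing min = 1 can
hoover up tiny runs instead of forcing fresh large allocations:

    #include <stdio.h>
    #include <stddef.h>

    /* One contiguous run of free blocks on a singly-linked free list. */
    typedef struct Run { size_t blocks; struct Run *next; } Run;

    /* Return a run of at least `min` and at most `max` blocks, preferring
     * the largest run available; 0 means nothing suitable was found. */
    static size_t alloc_chunk(Run **free_list, size_t min, size_t max)
    {
        Run **best = NULL;
        for (Run **p = free_list; *p != NULL; p = &(*p)->next) {
            if ((*p)->blocks >= min &&
                (best == NULL || (*p)->blocks > (*best)->blocks))
                best = p;                      /* largest adequate run so far */
        }
        if (best == NULL) return 0;            /* nothing of at least `min` blocks */

        Run *r = *best;
        size_t got = r->blocks > max ? max : r->blocks;
        if (got == r->blocks) *best = r->next; /* run consumed entirely */
        else r->blocks -= got;                 /* otherwise split the run */
        return got;
    }

    int main(void)
    {
        /* A fragmented heap: nothing but tiny runs, as in #7257. */
        Run runs[] = { {3, NULL}, {1, NULL}, {2, NULL}, {1, NULL} };
        for (int i = 0; i < 3; i++) runs[i].next = &runs[i + 1];
        Run *free_list = &runs[0];

        size_t want = 6;
        while (want > 0) {
            /* min = 1: hoover up whatever small runs exist rather than
             * demanding a fresh large (and fragmenting) allocation. */
            size_t got = alloc_chunk(&free_list, 1, want);
            if (got == 0) break;
            printf("took a run of %zu block(s)\n", got);
            want -= got;
        }
        return 0;
    }

Against the toy free list above this takes runs of 3, 2 and 1 blocks, satisfying
the 6-block request entirely from existing fragments rather than demanding one
contiguous run.
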
@@ -437,16 +437,12 @@ allocNursery (bdescr *tail, W_ blocks)
     // tiny optimisation (~0.5%), but it's free.
     while (blocks > 0) {
-        if (blocks >= BLOCKS_PER_MBLOCK / 4) {
-            n = stg_min(BLOCKS_PER_MBLOCK, blocks);
-            bd = allocLargeChunk(16, n); // see comment with allocLargeChunk()
-            // NB. we want a nice power of 2 for the minimum here
-            n = bd->blocks;
-        } else {
-            bd = allocGroup(blocks);
-            n = blocks;
-        }
+        n = stg_min(BLOCKS_PER_MBLOCK, blocks);
+        // allocLargeChunk will prefer large chunks, but will pick up
+        // small chunks if there are any available. We must allow
+        // single blocks here to avoid fragmentation (#7257)
+        bd = allocLargeChunk(1, n);
+        n = bd->blocks;
         blocks -= n;
         for (i = 0; i < n; i++) {
......