M32 allocator fixes and simplification

Ben Gamari requested to merge wip/m32-fixes into master

Previously, in an attempt to reduce fragmentation, each new allocator would map a region of M32_MAX_PAGES fresh pages to seed itself. However, this turns out to be extremely wasteful since we often use fewer pages than this. Consequently, the excess pages end up being freed, which fragments our address space more than if we had naively allocated pages on demand.

Here we refactor m32 to avoid this waste while achieving the fragmentation mitigation previously desired. In particular, we move all page allocation into the global m32_alloc_page, which will pull a page from the free page pool. If the free page pool is empty we then refill it by allocating a region of M32_MAP_PAGES and adding them to the pool.
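The refill scheme can be sketched roughly as follows. This is an illustrative model, not the actual RTS code: the names `m32_alloc_page`, `m32_refill_pool`, and the free-list layout are taken from the description above, the constants are made up, and plain `malloc` stands in for the low-address `mmap` the real allocator performs.

```c
#include <stddef.h>
#include <stdlib.h>

/* Illustrative constants; the real values live in the RTS. */
#define M32_MAP_PAGES 32
#define PAGE_SIZE 4096

/* Free pages are linked through their first word. */
struct m32_page {
    struct m32_page *next;
};

static struct m32_page *free_page_pool = NULL;
static size_t free_page_pool_size = 0;

/* Refill the pool by mapping one contiguous region of M32_MAP_PAGES
   pages and pushing each page onto the free list. Allocating in bulk
   is what mitigates fragmentation, without the waste of seeding every
   allocator up front. */
static void m32_refill_pool(void)
{
    char *region = malloc((size_t)M32_MAP_PAGES * PAGE_SIZE);
    if (region == NULL)
        abort();
    for (int i = 0; i < M32_MAP_PAGES; i++) {
        struct m32_page *pg =
            (struct m32_page *)(region + (size_t)i * PAGE_SIZE);
        pg->next = free_page_pool;
        free_page_pool = pg;
        free_page_pool_size++;
    }
}

/* All page allocation funnels through this single function: pop a page
   from the global pool, refilling only when the pool runs dry. */
static void *m32_alloc_page(void)
{
    if (free_page_pool == NULL)
        m32_refill_pool();
    struct m32_page *pg = free_page_pool;
    free_page_pool = pg->next;
    free_page_pool_size--;
    return pg;
}
```

The key property is that pages are handed out one at a time on demand, so an allocator that needs only a couple of pages never strands a large mostly-unused mapping.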

Furthermore, we do away with the initial seeding entirely. That is, the allocator starts with no active pages: pages are rather allocated on an as-needed basis.

On the whole this ends up being a pleasingly simple change, simultaneously making m32 more efficient, more robust, and simpler.

Fixes #18980 (closed).

In addition, we fix an outright bug, #18981 (closed): we would fail to reset a page's protection to read/write when adding it to the free list, resulting in a crash when we later attempted to reuse and fill the page.