Dynamic linking is fragile on Windows
@Phyx writes in !487 (comment 187112):
- TemplateHaskell (TH) is a recurrent pain in the backside for Windows users. At the core of the issue is the small memory model. In the current design, the linkers use the end of each object file to allocate jump islands; these are used essentially as PLT entries when a relocation resolves to a symbol outside the 2GB range.
This is also how the dynamic linker handles DLL calls, which will always be outside that range. This works reasonably well for PC-relative (PCREL) relocations, but it's a problem for absolute relocations such as R_X86_64_32, where the resulting address must fit in 32 bits. The reason this can fail is that the object file may have been loaded outside the 2GB range, so placing the jump table at the end of the file means the island's address can have the high bits set.
This happens quite easily due to fragmentation caused by fread'ing object files and archives. The new linker attempts to squeeze memory a bit more, and I have locally a TLSF-based memory allocator which packs them even tighter; fragmentation is practically zero there. However, I didn't like the complexity it caused. Instead I want to investigate using mmap to map images and objects into a high range, and using the TLSF allocator to reserve a chunk of memory, say 10MB (committed as needed), in the low 2GB range at startup, while there is still plenty left. Since each island is about 24 bytes, this gives us a decent capacity (~436k relocations), and we can double that without much issue.
This should fix the TH segfaults and aborts that happen sporadically; they depend only on where memory happens to be allocated. Dynamic linking would solve part of the problem as well, but I still think we should do this to hold the PLT entries for the DLLs.
This sounds like a sensible plan to me.