System.IO.openFile not safe with asynchronous exceptions; leaves inconsistent state
Summary
System.IO.openFile
obtains a file descriptor and locks the inode in GHC's runtime state. Interrupting it with an asynchronous exception can leak the FD or wrongly leave the inode locked.
Specifically, there is a scenario in which the FD is closed but the inode remains locked in the GHC runtime. This is an inconsistent state with a broken invariant. If the file underlying the (closed) FD is deleted, a subsequent openFile
call for a completely unrelated file name with WriteMode
can fail with:
openFile: resource busy (file is locked)
(if the filesystem reuses the inode).
The repro script linked below fails reliably with this error only on Linux. On macOS I instead observed running out of file descriptors ("too many open files").
I have prepared a fix, which mostly consists of applying masking and adding cleanup handlers. I'll open a PR.
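For context, a sketch of the standard pattern such a fix would follow: acquire the resource under `mask` and attach a cleanup with `onException`, so that a partially constructed Handle never leaks an FD or leaves a lock held. The names below are illustrative stand-ins, not GHC's actual internals:

```haskell
import Control.Exception (mask, onException)
import System.IO

-- Illustrative sketch (assumption: this mirrors the shape of the fix,
-- not the real GHC code): acquire under mask so an async exception
-- cannot strike between acquisition and installing the cleanup; if any
-- later setup step throws, release the resource before re-raising.
openAndPrepare :: FilePath -> IOMode -> IO Handle
openAndPrepare path mode = mask $ \restore -> do
  h <- openFile path mode                   -- stand-in for FD + inode-lock acquisition
  restore (hSetBuffering h LineBuffering)   -- stand-in for further Handle setup
    `onException` hClose h                  -- on failure, close so FD and lock stay consistent
  return h
```

When the resource's whole lifetime is scoped, `bracketOnError acquire release use` expresses the same idea more compactly.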
Steps to reproduce
1. Interrupt openFile with an asynchronous exception.
2. Delete the created file.
3. Open a new file with WriteMode.
Observe:
openFile: resource busy (file is locked)
The failure is intermittent, and more likely on multi-core machines.
https://github.com/luntain/misc/blob/562b3c6109208d67d8328dd27ea253e26d3e73e6/Repro2.hs
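For illustration, a minimal sketch of those steps (assumptions: file names, iteration count, and delay are arbitrary; the race is timing-dependent, so on an affected GHC the final open fails only sometimes, and any such failures are reported on stderr):

```haskell
import Control.Concurrent (forkIO, killThread, threadDelay)
import Control.Exception (IOException, try)
import Control.Monad (forM_)
import System.Directory (removeFile)
import System.IO

main :: IO ()
main = do
  forM_ [1 :: Int .. 100] $ \i -> do
    let victim = "victim-" ++ show i ++ ".tmp"
    -- Step 1: interrupt openFile with an asynchronous exception.
    t <- forkIO $ do
      r <- try (openFile victim WriteMode) :: IO (Either IOException Handle)
      either (const (return ())) hClose r
    killThread t
    threadDelay 1000
    -- Step 2: delete the created file (if openFile got far enough to create it).
    _ <- try (removeFile victim) :: IO (Either IOException ())
    -- Step 3: open a completely unrelated file with WriteMode; with the bug,
    -- this can fail with "openFile: resource busy (file is locked)".
    r <- try (openFile ("other-" ++ show i ++ ".tmp") WriteMode)
           :: IO (Either IOException Handle)
    case r of
      Left e  -> hPutStrLn stderr ("failed: " ++ show e)
      Right h -> hClose h >> removeFile ("other-" ++ show i ++ ".tmp")
  putStrLn "done"
```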
Expected behavior
Opening (i.e. creating) a new file for writing should not fail with file is locked
.
Environment
- GHC version used: 8.8.3
Optional:
- Operating System: Linux
- System Architecture: amd64