openFile gives "file is locked" error on Linux when creating a non-existing file
Viktor Dukhovni
ietf-dane at dukhovni.org
Sat Nov 16 07:32:07 UTC 2024
On Fri, Nov 15, 2024 at 06:45:40PM +0530, Harendra Kumar wrote:
> Coming back to this issue after a break. I reviewed the code carefully
> and I cannot find anything where we are doing something in the
> application code that affects the RTS locking mechanism. Let me walk
> through the steps of the test up to failure and what we are doing in
> the code. The test output is like this:
It is indeed not immediately clear where, in your code or in some
dependency (including base, GHC, ...), a descriptor that contributes to
the RTS file reader/writer count (indexed by (dev, ino)) might be closed
without adjusting that count, i.e. without calling the RTS `unlockFile()`
function (via GHC.IO.FD.release).
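For contrast, here is a minimal sketch (the path is made up) of the
guardrail working as intended: within a single process, a second
openFile for writing on the same file fails, because the first
descriptor is still registered in the RTS reader/writer table.

import System.IO

main :: IO ()
main = do
    h1 <- openFile "/tmp/lock-demo" WriteMode
    -- A second open of the same (dev, ino) for writing is expected to
    -- throw "openFile: resource busy (file is locked)" here:
    h2 <- openFile "/tmp/lock-demo" WriteMode
    hClose h2
    hClose h1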
It may be worth noting that GHC does not *reliably* prevent simultaneous
open handles for the same underlying file, because handles returned by
hDuplicate do not contribute to the count:
demo.hs:

import GHC.IO.Handle (hDuplicate)
import System.IO

main :: IO ()
main = do
    fh1 <- dupOpen "/tmp/foo"
    fh2 <- dupOpen "/tmp/foo"
    writeNow fh1 "abc\n"
    writeNow fh2 "def\n"
    readFile "/tmp/foo" >>= putStr
    hClose fh1
    hClose fh2
  where
    -- Look Mom, no lock!
    dupOpen path = do
        fh <- openFile path WriteMode
        hDuplicate fh <* hClose fh
    writeNow fh s = hPutStr fh s >> hFlush fh
$ ghc -O demo.hs
[1 of 2] Compiling Main ( demo.hs, demo.o )
[2 of 2] Linking demo
$ ./demo
def
(Note that only "def" survives: both duplicated handles write from
offset zero, so the second write clobbers the first.)

I am not sure that Haskell really should be holding the application's
hand in this area; corruption of output files by concurrent writers can
just as easily happen by running two independent processes. But letting
go of this guardrail would IIRC be a deviation from the Haskell report,
and there are likely applications that depend on it (and don't use
hDuplicate or an equivalent to break the reader/writer tracking).
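To make the two-process point concrete, here is a minimal sketch (file
name and path are made up, not from the original message): run two
copies of it in separate terminals, and both will hold open write
handles on the same file at once, since the RTS reader/writer table is
private to each process.

import System.IO

-- Each run opens the shared file for writing and parks on getLine so
-- the handle stays open.  The per-process RTS lock does not block a
-- concurrent second run the way a second openFile within one process
-- is blocked.
main :: IO ()
main = do
    fh <- openFile "/tmp/foo" WriteMode
    hPutStrLn fh "hello from one writer"
    hFlush fh
    _ <- getLine  -- keep the write handle open until Enter
    hClose fh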
--
Viktor.