GHC issues (https://gitlab.haskell.org/ghc/ghc/-/issues)

# Issue #17297: CNF.c:insertCompactHash doesn't correctly dirty regions
https://gitlab.haskell.org/ghc/ghc/-/issues/17297
Reported by Ben Gamari, 2019-11-03.

Consider `CNF.c:insertCompactHash`:
```c
void
insertCompactHash (Capability *cap,
                   StgCompactNFData *str,
                   StgClosure *p, StgClosure *to)
{
    insertHashTable(str->hash, (StgWord)p, (const void*)to);
    const StgInfoTable *strinfo = &str->header.info;
    if (strinfo == &stg_COMPACT_NFDATA_CLEAN_info) {
        strinfo = &stg_COMPACT_NFDATA_DIRTY_info;
        recordClosureMutated(cap, (StgClosure*)str);
    }
}
```
At first glance this looks reasonable. However, note how we are dirtying the region:
```c
strinfo = &stg_COMPACT_NFDATA_DIRTY_info;
```
This does not do what we want at all: it merely updates the local variable `strinfo`, not the closure's info table pointer.
For this reason, compact regions with sharing preservation enabled get added to the `mut_list` once for every object in the region. Terrible!

Milestone: 8.8.2. Assignee: Ben Gamari.

# Issue #16707: Hanging STM transaction with orElse on writeTVar
https://gitlab.haskell.org/ghc/ghc/-/issues/16707
Reported by roland, 2019-12-06.

In the following program the main thread's `atomically` hangs in the second iteration, although no other threads are running (and the transaction would have no reason to retry).
```haskell
import Control.Concurrent
import Control.Concurrent.STM
import Debug.Trace

main :: IO ()
main = (`mapM_` [1..1000]) $ \_ -> do
    traceEventIO "(((("
    x <- newTVarIO False
    forkIO $ atomically $ writeTVar x True
    traceEventIO "----"
    atomically $ do  -- hangs in the second iteration
        _ <- readTVar x
        writeTVar x True `orElse` return ()
    threadDelay 100000
    traceEventIO "))))"
```
Obviously, using `orElse` on `writeTVar` makes no sense; the code here is the result of boiling down a much bigger program.
```
$ ghc -debug Bug.hs
[1 of 1] Compiling Main ( Bug.hs, Bug.o )
Linking Bug ...
```
```
$ ./Bug +RTS -v
created capset 0 of type 2
created capset 1 of type 3
cap 0: initialised
assigned cap 0 to capset 0
assigned cap 0 to capset 1
cap 0: created thread 1
cap 0: running thread 1 (ThreadRunGHC)
cap 0: ((((
cap 0: created thread 2
cap 0: ----
cap 0: thread 1 stopped (blocked on a delay operation)
cap 0: running thread 2 (ThreadRunGHC)
cap 0: thread 2 stopped (finished)
cap 0: running thread 1 (ThreadRunGHC)
cap 0: thread 1 stopped (yielding)
cap 0: running thread 1 (ThreadRunGHC)
cap 0: ))))
cap 0: ((((
cap 0: created thread 3
cap 0: ----
cap 0: thread 1 stopped (yielding)
cap 0: running thread 3 (ThreadRunGHC)
cap 0: thread 3 stopped (finished)
cap 0: running thread 1 (ThreadRunGHC)
```
At this point the program hangs and uses 100% CPU. Only the main thread (1) is running (2 and 3 finished). It is past the `----`, but hasn't reached `threadDelay` yet (no `blocked on a delay operation` in the second iteration). So it must be working on the `atomically` block in between.
This is perfectly reproducible (exact same result on two different machines, tested several times).
Compiling with `-fno-omit-yields` (as suggested in
https://downloads.haskell.org/~ghc/8.6.5/docs/html/users_guide/bugs.html#bugs-in-ghc, which refers to #367) makes no difference. This also seems different to #15975, where the problem appears to be a failure to yield in between transactions. Removing either the `readTVar x` or the `` `orElse` return ()`` bit makes all 1000 iterations go through without hanging.
```
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 8.6.5
```

Milestone: 8.8.2.

# Issue #17629: internal error when I pass -hT to a profiling build
https://gitlab.haskell.org/ghc/ghc/-/issues/17629
Reported by Vanessa McHale, 2020-02-06.

## Summary
I get
```
decompress/compress
lzlib-test: internal error: dumpCensus; doHeapProfile
(GHC version 8.8.1 for x86_64_unknown_linux)
Please report this as a GHC bug: https://www.haskell.org/ghc/reportabug
Aborted (core dumped)
```
when trying to run a test suite with `+RTS -hT`.
## Steps to reproduce
Build [lzlib](https://github.com/vmchale/lzlib) with
```
cabal test --enable-profiling
```
and download test data with
```
make
```
Then run the generated executable with:
```
dist-newstyle/build/x86_64-linux/ghc-8.8.1/lzlib-0.3.0.5/t/lzlib-test/build/lzlib-test/lzlib-test +RTS -hT
```
## Expected behavior
It should [generate a heap profile](https://downloads.haskell.org/ghc/latest/docs/html/users_guide/runtime_control.html#rts-options-for-profiling) and not fail with such an error.
It works fine if I pass `-h` instead of `-hT`.
## Environment
* GHC version used: 8.8.1, 8.8.2 release candidate
I can't reproduce this with 8.6.5 or earlier. It seems to be fixed in 8.10.1.
Optional:
* Operating System: Linux
* System Architecture: x86_64, aarch64

Milestone: 8.8.2.

# Issue #16916: CPU burning loop when should be blocked on recv?
https://gitlab.haskell.org/ghc/ghc/-/issues/16916
Reported by Ivan Kasatenko, 2020-08-12.

## Summary
I have a pretty simple application that uses web sockets (websockets + wuss packages) to tunnel data. At the core there are two threads that read and write from the web socket. After some period of activity it seems to get stuck in a loop that consumes 100% CPU (1 CPU core) when it should rather be blocked on `recv`.
If you look at the `+RTS -Ds` output, you will notice that the IO manager thread constantly wakes up and goes to sleep, with the GC getting triggered from time to time. What's weird is that even though the IO manager thread apparently does some work, no other thread gets woken up to consume the results.
I have attached `+RTS -Ds`, `dtruss` and profile output (with <0.3% items filtered out).
[ds.txt](/uploads/aa6b31cdfd0dc34abb5f16656316c239/ds.txt)
[dtruss.txt](/uploads/939d0d85d4a74e738ce3455e9e8f1932/dtruss.txt)
[zaloopa-client-exe.prof](/uploads/5f75223c85715bfcad2a2766c953aef8/zaloopa-client-exe.prof)
## Steps to reproduce
Have `recv` and `send` going on concurrently on the same websocket.
## Expected behavior
Block with zero CPU usage.
## Environment
* GHC version used: 8.6.5 (Stack LTS-13.27), tried 8.4.4 (Stack LTS-12.26) with the same effect.
Optional:
* Operating System: macOS Mojave 10.14.5
* System Architecture: x64

Milestone: 8.8.2.