# GHC issues

Source: https://gitlab.haskell.org/ghc/ghc/-/issues

## [#14341](https://gitlab.haskell.org/ghc/ghc/-/issues/14341): Show instance for TypeReps is a bit broken

*David Feuer · updated 2020-01-17*

There are three problems.
1. Showing typereps of tuples can produce unnecessary parentheses:
```
Prelude K T> typeRep @(Int, Maybe Bool)
(Int,(Maybe Bool))
```
The fix is trivial.
2. Showing typereps of ticked (i.e., lifted) tuples and lists gives hard-to-read results, because it does not use the usual special syntax:
```
Prelude K T> typeRep @'(Int, Maybe Bool)
'(,) * * Int (Maybe Bool)
Prelude K T> typeRep @'[1,2,3]
': Nat 1 (': Nat 2 (': Nat 3 ('[] Nat)))
```
Fixing the lifted tuple case is trivial. Fixing the lifted list case is slightly less trivial, but not hard.
3. Type operator applications are not shown infix.
```
Prelude K T> typeRep @(Maybe :*: Either Int)
:*: * Maybe (Either Int)
```
This is the hardest problem to fix, although it's probably not too terribly hard. See [ticket:14341#comment:143749](https://gitlab.haskell.org//ghc/ghc/issues/14341#note_143749) for thoughts.

Milestone: 8.6.1

## [#14340](https://gitlab.haskell.org/ghc/ghc/-/issues/14340): Rename AND typecheck types before values

*Edward Z. Yang · updated 2019-07-07*

In a few cases, we get in trouble during renaming of values because we don't have access to information that would be computed during typechecking. Two examples of this:
- #13905 - Here, we need to determine if a Name is a newtype constructor or data type constructor during renaming (desugaring of applicative do), which is not known until after typechecking
- #12088 - Perhaps? We want to rename and typecheck instance declarations at the same time, since they can occur between the type declarations.
One nit: you can't compute SCCs until you rename. But if you just rename ALL of the types at once, then SCC them, that should be fine.
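To make the "rename everything, then SCC" idea concrete, here is a minimal sketch; `RenamedTyDecl` and its fields are invented for illustration, while `stronglyConnComp` is the real function from `Data.Graph` in the containers package:

```hs
import Data.Graph (SCC, stronglyConnComp)

type Name = String

-- Invented record: a renamed type declaration plus the type
-- constructors it mentions.
data RenamedTyDecl = RenamedTyDecl { declName :: Name, declDeps :: [Name] }

-- After renaming *all* type declarations, grouping them for
-- typechecking is an ordinary SCC computation over the dependency graph.
tyDeclGroups :: [RenamedTyDecl] -> [SCC RenamedTyDecl]
tyDeclGroups ds = stronglyConnComp [ (d, declName d, declDeps d) | d <- ds ]
```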
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ----------------------- |
| Version | 8.2.1 |
| Type | Task |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler (Type checker) |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
## [#14337](https://gitlab.haskell.org/ghc/ghc/-/issues/14337): typeRepKind can perform substantial amounts of allocation

*David Feuer · updated 2020-01-23*

I came up with a (rather contrived) test case to demonstrate that [D4082](https://phabricator.haskell.org/D4082) reduced big-O time complexity in pathological cases. But I expected it to increase space usage by a constant factor. What I found was very much the opposite: it dramatically reduced allocation. The reason for this is obvious in hindsight. Every time we call `typeRepKind`, we recalculate the kind entirely from scratch. That recalculation is only a potential *time* problem for `TrApp`, because we only need to walk down links, but it's also a *space* problem for `TrTyCon`, because we're building up a `TypeRep` from a `KindRep`.
The solution, assuming we choose to keep `typeRepKind`, seems fairly clear: whether or not we choose to cache the kind in `TrApp`, we should almost certainly do so in `TrTyCon`.
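A toy model of the proposed caching, with invented names (this is not GHC's real `TypeRep` representation): store the kind in the constructor case so that the kind query is a field access rather than a reconstruction from a `KindRep`:

```hs
-- Toy representation: every constructor node carries its kind, computed
-- once at construction time and shared by all subsequent queries.
data Rep
  = RTyCon String Rep  -- constructor name plus its cached kind
  | RStar              -- stand-in base kind, closing the chain

kindOf :: Rep -> Rep
kindOf (RTyCon _ k) = k     -- O(1): no fresh representation built per call
kindOf RStar        = RStar
```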
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 8.2.1 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Core Libraries |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | bgamari |
| Operating system | |
| Architecture | |
</details>
## [#14335](https://gitlab.haskell.org/ghc/ghc/-/issues/14335): Plugins don't work with -fexternal-interpreter

*Ben Gamari · updated 2023-08-08*

Plugins don't work with `-fexternal-interpreter`.
The current plan to fix this is to enable GHC to always use the internal interpreter for plugins, even when `-fexternal-interpreter` is given. `-fexternal-interpreter` only determines which interpreter is used for running Template Haskell splices. The following tasks have been identified:
- [X] Support loading two different `UnitState` (available units): one for the target, one for plugins
- [ ] Add command-line flags (`-plugin-package-db`, etc.) to build the plugins UnitState
- [ ] Refactor many functions to explicitly pass Platform configuration (Platform, ways, etc.) as arguments. Currently we often pass `DynFlags` and callee functions implicitly use the `UnitState` of the target platform. It doesn't compose well and we want to be explicit about the platform we are using (target or host) (see also #17957).
- [x] Pretty-printing of `Unit` (via its `Outputable` instance) implicitly queries the `UnitState` of the target platform (via `sdocWithDynFlags`). We need to remove this `Outputable` instance.

Milestone: 8.6.1 · Assignee: Ben Gamari

## [#14331](https://gitlab.haskell.org/ghc/ghc/-/issues/14331): Overzealous free-floating kind check causes deriving clause to be rejected

*Ryan Scott · updated 2023-02-01*

GHC rejects this program:
```hs
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE PolyKinds #-}
module Bug where
class C a b
data D = D deriving (C (a :: k))
```
```
GHCi, version 8.2.1: http://www.haskell.org/ghc/ :? for help
Loaded GHCi configuration from /home/rgscott/.ghci
[1 of 1] Compiling Bug ( Bug.hs, interpreted )
Bug.hs:8:1: error:
Kind variable ‘k’ is implicitly bound in datatype
‘D’, but does not appear as the kind of any
of its type variables. Perhaps you meant
to bind it (with TypeInType) explicitly somewhere?
|
8 | data D = D deriving (C (a :: k))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
But it really shouldn't, since it's quite possible to write the code that it should generate:
```hs
instance C (a :: k) D
```
Curiously, this does not appear to happen for data family instances, as this typechecks:
```hs
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE TypeFamilies #-}
module Bug where
class C a b
data family D1
data instance D1 = D1 deriving (C (a :: k))
class E where
  data D2

instance E where
  data D2 = D2 deriving (C (a :: k))
```
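An untested workaround sketch: with `StandaloneDeriving` (plus `DeriveAnyClass`) one can write by hand exactly the instance the report says GHC should have generated, binding `k` at the instance head:

```hs
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE StandaloneDeriving #-}
module Workaround where

class C a b

data D = D

-- The instance the deriving clause should have produced:
deriving instance C (a :: k) D
```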
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ----------------------- |
| Version | 8.2.1 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler (Type checker) |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
## [#14330](https://gitlab.haskell.org/ghc/ghc/-/issues/14330): Sparks are not started promptly

*Andrew Martin · updated 2019-07-07*

This was a question on StackOverflow. With some prompting from Yuras, I've decided to open this as an issue. Here is the original question (which has been satisfactorily answered): https://stackoverflow.com/questions/46586941/why-are-ghc-sparks-fizzling/46603680?noredirect=1#comment80163830_46603680
Here is a more narrowly tailored version of the code I have posted there:
```hs
{-# LANGUAGE BangPatterns #-}
{-# OPTIONS_GHC -O2 -Wall -threaded -fforce-recomp #-}
import Criterion.Main
import Control.Parallel.Strategies (runEval,rpar,rseq)
import qualified Data.Vector.Primitive as PV
main :: IO ()
main = do
  let fewNumbers = PV.replicate 10000000 1.00000001
      manyNumbers = PV.replicate 100000000 1.00000001
  defaultMain
    [ bgroup "serial"
      [ bench "few" $ whnf serialProduct fewNumbers
      , bench "many" $ whnf serialProduct manyNumbers
      ]
    , bgroup "parallel"
      [ bench "few" $ whnf parallelProduct fewNumbers
      , bench "many" $ whnf parallelProduct manyNumbers
      ]
    ]

serialProduct :: PV.Vector Double -> Double
serialProduct v =
  let !len = PV.length v
      go :: Double -> Int -> Double
      go !d !ix = if ix < len then go (d * PV.unsafeIndex v ix) (ix + 1) else d
  in go 1.0 0

-- | This only works when the vector length is a multiple of 4.
parallelProduct :: PV.Vector Double -> Double
parallelProduct v = runEval $ do
  let chunk = div (PV.length v) 4
  p2 <- rpar (serialProduct (PV.slice (chunk * 1) chunk v))
  p3 <- rpar (serialProduct (PV.slice (chunk * 2) chunk v))
  p4 <- rpar (serialProduct (PV.slice (chunk * 3) chunk v))
  p1 <- rseq (serialProduct (PV.slice (chunk * 0) chunk v))
  rseq (p1 * p2 * p3 * p4)
```
We can build and run this with:
```
> ghc -threaded parallel_compute.hs
> ./parallel_compute +RTS -N6
```
On my eight-core laptop, here are the results we get:
```
benchmarking serial/few
time 11.46 ms (11.29 ms .. 11.61 ms)
0.999 R² (0.998 R² .. 1.000 R²)
mean 11.52 ms (11.44 ms .. 11.62 ms)
std dev 222.8 μs (140.9 μs .. 299.6 μs)

benchmarking serial/many
time 118.1 ms (116.1 ms .. 120.0 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 117.2 ms (116.6 ms .. 117.9 ms)
std dev 920.3 μs (550.5 μs .. 1.360 ms)
variance introduced by outliers: 11% (moderately inflated)

benchmarking parallel/few
time 10.04 ms (9.968 ms .. 10.14 ms)
0.999 R² (0.999 R² .. 1.000 R²)
mean 9.970 ms (9.891 ms .. 10.03 ms)
std dev 172.9 μs (114.5 μs .. 282.9 μs)

benchmarking parallel/many
time 45.32 ms (43.55 ms .. 47.17 ms)
0.996 R² (0.993 R² .. 0.999 R²)
mean 45.93 ms (44.71 ms .. 48.10 ms)
std dev 3.041 ms (1.611 ms .. 4.654 ms)
variance introduced by outliers: 20% (moderately inflated)
```
Interestingly, in the benchmark with the smaller 10,000,000 element vector, we see almost no performance improvement from the sparks. But, in the one with the larger 100,000,000 element vector, we see a considerable speedup. It runs 2.5-3.0x faster. The reason for this is that sparks are not started between scheduling intervals. By default, this happens every 20ms. We can see the fizzling like this:
```
> ./parallel_compute 'parallel/few' +RTS -N6 -s
benchmarking parallel/few
...
SPARKS: 1536 (613 converted, 0 overflowed, 0 dud, 42 GC'd, 881 fizzled)
...
> ./parallel_compute 'parallel/many' +RTS -N6 -s
benchmarking parallel/many
...
SPARKS: 411 (411 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
...
```
For application developers, it's possible to work around this by tweaking the scheduling interval:
```
> ghc -threaded -rtsopts parallel_compute.hs
> ./parallel_compute 'parallel/few' +RTS -N6 -s -C0.001
benchmarking parallel/few
time 4.158 ms (4.013 ms .. 4.302 ms)
0.993 R² (0.987 R² .. 0.998 R²)
mean 4.094 ms (4.054 ms .. 4.164 ms)
std dev 178.5 μs (131.5 μs .. 243.7 μs)
variance introduced by outliers: 24% (moderately inflated)
...
SPARKS: 3687 (3687 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
```
Much better. But, there are two problems with this:
1. This may negatively impact the overall performance of an application.
1. It doesn't work at all for library developers. It isn't practical to tell end users of your library to use certain runtime flags.
I don't know enough about the RTS to suggest a way to improve this. However, intuitively, I would expect that if I spark something and there's an idle capability, the idle capability could immediately be given the spark instead of having it placed in the local queue. This may not be possible or may not be compatible with the minimal use of locks in the implementation of sparks though.
Here is a comment I made in the StackOverflow thread:
> I suppose that in the normal case, if you're going to be sparking things, you should ensure that the work done by all the sparks plus the main thread takes well over 20ms. Otherwise, nearly everything will fizzle unless scheduling happens to be coming soon. I've always wondered about the threshold for how fine-grained sparks should be, and my understanding is now that this is roughly it.
In short, I'd like for it to be possible to realize some of the benefits of parallelism for computations that take under 20ms without resorting to `forkIO` and `MVar`.
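For reference, here is a minimal sketch of the `forkIO`/`MVar` workaround the last sentence alludes to; unlike a spark, a forked thread can be picked up by an idle capability without waiting for the next scheduling interval:

```hs
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Exception (evaluate)

-- Evaluate f x and f y in parallel, without sparks.
parPair :: (a -> b) -> a -> a -> IO (b, b)
parPair f x y = do
  var <- newEmptyMVar
  _ <- forkIO (putMVar var $! f x) -- force on the forked thread
  ry <- evaluate (f y)             -- force the other half here
  rx <- takeMVar var
  pure (rx, ry)
```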
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 8.2.1 |
| Type | FeatureRequest |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | Yuras |
| Operating system | |
| Architecture | |
</details>
## [#14321](https://gitlab.haskell.org/ghc/ghc/-/issues/14321): -fsolve-constant-dicts is not very robust when dealing with GADTs

*Matthew Pickering · updated 2019-07-07*

I expected `-fsolve-constant-dicts` to nail #9701; it didn't fire at all, but a slightly modified version does.
```
{-# LANGUAGE GADTs #-}
{-# LANGUAGE TypeApplications #-}
module Foo where
data Silly a where
  Silly :: Ord a => a -> Silly a

isItSilly :: a -> Silly a -> Bool
isItSilly a (Silly x) = a < x

isItSillyIntTA :: Int -> Silly Int -> Bool
isItSillyIntTA = isItSilly @Int

isItSillyInt :: Int -> Silly Int -> Bool
isItSillyInt a x = isItSilly a x

isItSillyInt2 :: Int -> Silly Int -> Bool
isItSillyInt2 a (Silly x) = a < x

isItSillyInt3 :: Int -> Silly Int -> Bool
isItSillyInt3 a (Silly x) = isItSilly a (Silly x)
```
Both versions 2 and 3 specialise nicely using the `Int` `Ord` dictionary. The first two versions don't. I'm unsure whether it *should* fire or not but I am making this ticket to record this fact.
Clonable code and core dump - https://gist.github.com/mpickering/f84a5f842861211e8e731c63e82d5c01
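The crux, spelled out as a hand-expanded sketch that reuses the ticket's `Silly` type (this is not actual GHC output): matching on `Silly` unpacks the `Ord` dictionary stored in the constructor, so the solver can discharge `<` either from that local given or from the top-level `Ord Int` instance, and `-fsolve-constant-dicts` is what nudges it toward the latter:

```
isItSillyIntCore :: Int -> Silly Int -> Bool
isItSillyIntCore a s = case s of
  Silly x -> -- an Ord Int dictionary is unpacked from the constructor here
    a < x    -- (<) can be solved from the local given or from Ord Int
```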
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.2.1 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | dfeuer |
| Operating system | |
| Architecture | |
</details>
## [#14319](https://gitlab.haskell.org/ghc/ghc/-/issues/14319): Stuck type families can lead to lousy error messages

*David Feuer · updated 2020-01-23*

I first noticed this problem at the type level:
```hs
{-# language TypeFamilies, TypeInType, ScopedTypeVariables #-}
module ArityError where
import Data.Kind
import GHC.TypeLits
import Data.Proxy
type family F (s :: Symbol) :: Type
type family G (s :: Symbol) :: F s
type instance G "Hi" = Maybe
```
This produces the error message
```hs
ArityError.hs:10:24: error:
• Expecting one more argument to ‘Maybe’
Expected kind ‘F "Hi"’, but ‘Maybe’ has kind ‘* -> *’
• In the type ‘Maybe’
In the type instance declaration for ‘G’
|
10 | type instance G "Hi" = Maybe
| ^^^^^
```
This looks utterly bogus: `F "Hi"` is stuck, so we have no idea what arity it indicates.
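To see why the arity complaint is unwarranted: nothing rules out a program that later gives `F "Hi"` an arrow kind. Under `TypeInType`, `Type -> Type` is itself a `Type`, so this (hypothetical) instance would be accepted, and with it in scope the rejected declaration is perfectly well-kinded:

```hs
type instance F "Hi" = Type -> Type
-- Now F "Hi" reduces to Type -> Type, so
--   type instance G "Hi" = Maybe
-- has exactly the kind its signature demands.
```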
----
I just realized we have a similar problem at the term level:
```hs
f :: forall (s :: Symbol). Proxy s -> F s
f _ _ = undefined
```
produces
```hs
ArityError.hs:14:1: error:
• Couldn't match expected type ‘F s’ with actual type ‘p0 -> a0’
The type variables ‘p0’, ‘a0’ are ambiguous
• The equation(s) for ‘f’ have two arguments,
but its type ‘Proxy s -> F s’ has only one
• Relevant bindings include
f :: Proxy s -> F s (bound at ArityError.hs:14:1)
|
14 | f _ _ = undefined
| ^^^^^^^^^^^^^^^^^
```
The claim that `Proxy s -> F s` has only one argument is bogus; we only know that it has *at least* one argument. The fix (I imagine) is to refrain from reporting arity errors when we don't know enough about the relevant arities.

## [#14318](https://gitlab.haskell.org/ghc/ghc/-/issues/14318): TH shadowing bind statement triggers -Wunused-matches

*lyxia · updated 2019-07-07*

```
{-# LANGUAGE TemplateHaskell #-}
module Test where
import Language.Haskell.TH
m :: (a -> [b]) -> a -> [b]
m =
  $(newName "x" >>= \x ->
    newName "f" >>= \f ->
    lamE [varP f, varP x]
         (doE [ bindS (varP x) (listE [varE f `appE` varE x])
              , noBindS (varE x)])
   )
```
The splice generates the following expression:
```
\f x -> do
  x <- [f x]
  x
```
and `-Wunused-matches` complains that `x` is not used, while both bound occurrences are in fact used (the two uses have different types, so that's quite certain).

Assignee: Michael Sloan

## [#14317](https://gitlab.haskell.org/ghc/ghc/-/issues/14317): Solve Coercible constraints over type constructors

*Icelandjack · updated 2019-07-07*

The core question is: could `fails` type check?
```hs
import Data.Type.Coercion
works :: Identity a `Coercion` Compose Identity Identity a
works = Coercion
-- • Couldn't match representation of type ‘Identity’
-- with that of ‘Compose Identity Identity’
-- arising from a use of ‘Coercion’
-- • In the expression:
-- Coercion :: Identity `Coercion` Compose Identity Identity
fails :: Identity `Coercion` Compose Identity Identity
fails = Coercion
```
----
This arises from playing with [Traversable: A Remix](https://duplode.github.io/posts/traversable-a-remix.html).
Given `coerce :: Compose Identity Identity ~> Identity` I wanted to capture that `id1` and `id2` are actually the same arrow (up to representation)
```hs
(<%<) :: (Functor f, Functor g) => (b -> g c) -> (a -> f b) -> (a -> Compose f g c)
g <%< f = Compose . fmap g . f
id1 :: a -> Identity a
id1 = Identity
id2 :: a -> Compose Identity Identity a
id2 = Identity <%< Identity
```
So I define
```hs
data F :: (k -> Type) -> (Type -> k -> Type) where
  MkF :: Coercible f f' => (a -> f' b) -> F f a b
id1F :: Coercible Identity f => F f a a
id1F = MkF id1
id2F :: Coercible (Compose Identity Identity) f => F f b b
id2F = MkF id2
```
but we cannot unify the types of `id1F` and `id2F`. Does this require quantified class constraints? I'm not sure where they would go.
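For what it's worth, here is an untested sketch of how `QuantifiedConstraints` (GHC 8.6+) can at least express the pointwise version of the constraint, which is weaker than `Coercible` at the type-constructor kind but often suffices:

```hs
{-# LANGUAGE QuantifiedConstraints #-}
{-# LANGUAGE RankNTypes #-}
import Data.Coerce (Coercible, coerce)
import Data.Functor.Identity (Identity (..))
import Data.Functor.Compose (Compose (..))

-- Demand coercibility pointwise rather than at kind (k -> Type).
coerceF :: (forall x. Coercible (f x) (g x)) => f a -> g a
coerceF = coerce

demo :: Compose Identity Identity Int -> Identity Int
demo = coerceF
```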
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 8.2.1 |
| Type | FeatureRequest |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
## [#14314](https://gitlab.haskell.org/ghc/ghc/-/issues/14314): Consider changing CC detection strategy

*Ben Gamari · updated 2019-07-07*

Currently we grep through the `$CC --version` output to identify the C compiler (see `FP_GCC_VERSION` in `aclocal.m4`). This is error-prone (see [D4069](https://phabricator.haskell.org/D4069)). Phyx suggests that we instead just rely on CPP macros. There is a handy list of these [here](https://sourceforge.net/p/predef/wiki/Compilers/).
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.2.1 |
| Type | Task |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
## [#14307](https://gitlab.haskell.org/ghc/ghc/-/issues/14307): NamedFieldPuns should allow "ambiguous" field names

*Michael Sloan · updated 2019-07-07*

Consider the following example:
```haskell
{-# LANGUAGE NamedFieldPuns #-}
import DupType
data A = A { field :: Int }
f :: A -> Int
f A { field } = field
```
with
```haskell
module DupType where
data B = B { field :: Int }
```
This results in the following error:
```
A.hs:8:7: error:
Ambiguous occurrence ‘field’
It could refer to either ‘DupType.field’,
imported from ‘DupType’ at A.hs:3:1-14
(and originally defined at DupType.hs:3:14-18)
or ‘Main.field’, defined at A.hs:5:14
|
8 | f A { field } = field
| ^^^^^
```
This seems like poor behavior: since a particular constructor is used, it is unambiguous which field is intended. In particular, this is inconsistent with `RecordWildCards`. Consider that `f A { .. } = field` compiles perfectly fine.
I actually encountered this issue in a slightly different use case. I was using `NamedFieldPuns` along with `DuplicateRecordFields`; however, I got the constructor name wrong. After the scope error in the output, there was an ambiguous-field-name error. This was quite confusing, because `DuplicateRecordFields` was on, so ambiguity should be fine! It took me a while to realize that the scope error was the root issue. With the constructor name fixed, the code compiled. If the constructor were used to resolve field names, the second error wouldn't have been emitted.
I realize that broadening the code allowed by `NamedFieldPuns` could lead to issues where code written for newer GHC versions does not work with older GHC versions. This certainly will not change the meaning of older code. What's the policy on this?

## [#14299](https://gitlab.haskell.org/ghc/ghc/-/issues/14299): GHCi for GHC 8.2.1 crashed with simple function?

*mathiassm · updated 2019-07-07*

I'm a newbie here, but I'm following your instructions!
I started GHCi, defined a simple factorial function and then called it and... it crashed.
```hs
let f 0 = 0
    f n = n * f (n-1)
f 0
```
That is what I introduced in the console... After that, the computer hung and gave me the following error:
```
ghc: panic! (the 'impossible' happened)
(GHC version 8.2.1 for x86_64-apple-darwin):
thread blocked indefinitely in an MVar operation
Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug
```
EDIT: This was done on a macOS 10.12.6 system with the Haskell Platform installed via Homebrew, with no further "customization".

## [#14297](https://gitlab.haskell.org/ghc/ghc/-/issues/14297): make bindist packages the wrong binaries for cross compilers

*Moritz Angermann · updated 2021-07-21*

When building binary distributions via `make binary-dist`, the resulting binaries in the package end up being compiled for the target instead of the host when cross compiling.
E.g. building a cross compiler for iOS on macOS yields:
```
./inplace/bin/ghc-cabal: Mach-O 64-bit executable x86_64
./utils/ghc-cabal/dist/build/tmp/ghc-cabal: Mach-O 64-bit executable x86_64
./utils/ghc-cabal/dist-install/build/tmp/ghc-cabal: Mach-O 64-bit executable arm64
./inplace/lib/bin/hsc2hs: Mach-O 64-bit executable x86_64
./utils/hsc2hs/dist/build/tmp/hsc2hs: Mach-O 64-bit executable x86_64
./utils/hsc2hs/dist-install/build/tmp/hsc2hs: Mach-O 64-bit executable arm64
```
to name just `ghc-cabal` and `hsc2hs`.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 8.2.1 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | high |
| Resolution | Unresolved |
| Component | Package system |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | bgamari |
| Operating system | |
| Architecture | |
</details>
Milestone: 8.4.1 · Assignee: Moritz Angermann

## [#14295](https://gitlab.haskell.org/ghc/ghc/-/issues/14295): tagToEnum# leads to some silly closures

*David Feuer · updated 2019-09-05*

I don't know how important this is in practice, but it looks unfortunate.
Suppose I write
```hs
foo :: (Bool -> a) -> Int# -> a
foo f x = f (tagToEnum# x)
```
Since `tagToEnum#` can fail, GHC compiles this to
```hs
foo
  = \ (@ a_a10v)
      (f_s1by [Occ=Once!] :: GHC.Types.Bool -> a_a10v)
      (x_s1bz [Occ=Once] :: GHC.Prim.Int#) ->
      let {
        sat_s1bA [Occ=Once] :: GHC.Types.Bool
        [LclId]
        sat_s1bA = GHC.Prim.tagToEnum# @ GHC.Types.Bool x_s1bz } in
      f_s1by sat_s1bA
```
That seems pretty bad! We know that `tagToEnum#` is applied to `Bool`, so we can transform this to something like
```hs
foo f x = case leWord# (int2Word# x) 1## of
  1# -> f $! tagToEnum# x
  _  -> f (error "tagToEnum# was used at Bool with tag ...")
```
which avoids an extra closure at the cost of a single `Word#` comparison. The same goes for arbitrary known enumeration types. I suspect the right place to fix this up is in CorePrep.

## [#14293](https://gitlab.haskell.org/ghc/ghc/-/issues/14293): View patterns with locally defined functions in restructuring don't compile

*Gabor Greif · updated 2023-01-19*

```hs
{-# LANGUAGE ViewPatterns #-}
foo x = x
Just (id -> res) = pure 'a' -- WORKS
Just (foo -> res') = pure 'a' -- FAILS
bar (foo -> res) = res -- WORKS
{-
[1 of 1] Compiling Main ( T14293.hs, interpreted )
T14293.hs:6:7-9: error: Variable not in scope: foo :: Char -> t
|
6 | Just (foo -> res') = pure 'a'
| ^^^
Failed, 0 modules loaded.
-}
```
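Until this is fixed, an equivalent formulation that avoids the view pattern in the top-level pattern binding does compile — a trivial workaround sketch (`res''` is a fresh name added here):

```hs
res'' :: Char
res'' = case pure 'a' of Just c -> foo c -- same meaning, no view pattern
```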
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.2.1 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
## [#14292](https://gitlab.haskell.org/ghc/ghc/-/issues/14292): Coercing between constraints of newtypes

*Icelandjack · updated 2019-07-07*

This doesn't work
```hs
{-# Language ConstraintKinds #-}
{-# Language GADTs #-}
import Data.Coerce
newtype USD = USD Int
data Dict c where
  Dict :: c => Dict c
num :: Dict (Num Int) -> Dict (Num USD)
num = coerce
```
but this does
```hs
data NUM a = NUM (a -> a -> a)
num' :: NUM Int -> NUM USD
num' = coerce
```
Is this a fundamental limitation?
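If the end goal is simply to equip `USD` with `Int`'s instances, the supported route is `GeneralizedNewtypeDeriving`, which performs this kind of dictionary coercion internally — a sketch:

```hs
{-# Language GeneralizedNewtypeDeriving #-}

newtype USD = USD Int deriving Num
```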
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 8.2.1 |
| Type | FeatureRequest |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
## [#14287](https://gitlab.haskell.org/ghc/ghc/-/issues/14287): Early inlining causes potential join points to be missed

*jheek · updated 2022-10-17*

While trying to make stream fusion work with recursive step functions I noticed that the following filter implementation did not fuse nicely.
```haskell
data Stream s a = Stream (s -> Step s a) s
data Step s a = Done | Yield a s
sfilter :: (a -> Bool) -> Stream s a -> Stream s a
sfilter pred (Stream step s0) = Stream filterStep s0 where
  filterStep s = case step s of
    Done -> Done
    Yield x ns
      | pred x -> Yield x ns
      | otherwise -> filterStep ns

fromTo :: Int -> Int -> Stream Int Int
{-# INLINE fromTo #-}
fromTo from to = Stream step from where
  step i
    | i > to = Done
    | otherwise = Yield i (i + 1)

sfoldl :: (b -> a -> b) -> b -> Stream s a -> b
{-# INLINE sfoldl #-}
sfoldl acc z (Stream !step s0) = oneShot go z s0 where
  go !y s = case step s of
    Done -> y
    Yield x ns -> go (acc y x) ns

ssum :: (Num a) => Stream s a -> a
ssum = sfoldl (+) 0

filterTest :: Int
filterTest = ssum $ sfilter even (fromTo 1 101)
```
For this code to work nicely, GHC should detect that filterStep is a join point. However, in the definition of sfilter it is not one, because not all references to it are tail-called and saturated.
After inlining of sfilter and some trivial case-of-case transformations, filterStep should become a join point. But it seems the simplifier never gets the chance to do this, because the float-out optimization makes filterStep a top-level binding. With -fno-full-laziness, filterStep does become a join point at the call site, but of course this is not really a solution.
Then I found that the following also works:
```haskell
sfilter :: (a -> Bool) -> Stream s a -> Stream s a
sfilter pred (Stream step s0) = Stream filterStep s0 where
  {-# INLINE [2] filterStep #-}
  filterStep s = case step s of
    Done -> Done
    Yield x ns
      | pred x -> Yield x ns
      | otherwise -> filterStep ns
```
Simply adding an INLINE [2] pragma disables the inlining in the early run of the simplifier. Therefore, the float-out pass does not get the chance to float out before filterStep is recognized as a join point.
Or at least that is my interpretation of what is going on.
What surprises me about this issue is that the gentle run seems to perform inlining while the wiki mentions that inlining is not performed in this stage: https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Core2CorePipeline
Intuitively, I would think that floating out is sub-optimal when the simplifier has not yet used all its tricks, because inlining typically opens up possibilities for simplification while floating out typically reduces them.
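For comparison, this is roughly the loop one hopes fusion produces once filterStep is a join point — a hand-written sketch of the fully fused filterTest, not compiler output:

```haskell
{-# LANGUAGE BangPatterns #-}

-- The filter and the fold collapse into one local tail-recursive loop;
-- both the "Yield" and the "skip" paths loop directly without allocating.
filterTestFused :: Int
filterTestFused = go 0 1
  where
    go !acc !i
      | i > 101   = acc
      | even i    = go (acc + i) (i + 1)
      | otherwise = go acc (i + 1)
```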
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.2.1 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
## [#14283](https://gitlab.haskell.org/ghc/ghc/-/issues/14283): Remove the special case for tagToEnum# in the code generator?

*David Feuer · updated 2020-01-23*

The only remaining primop (aside from `unsafeCoerce#` and `seq`) that produces an enumeration type is `tagToEnum#`.
There is some magic in the code generator to make garbage collection relating to this work as it has historically. We wan...The only remaining primop (aside from `unsafeCoerce#` and `seq`) that produces an enumeration type is `tagToEnum#`. There is some magic in the code generator to make garbage collection relating to this work as it has historically. We want to get rid of the special case. Simply removing it altogether actually does seem to work (in that CI doesn't complain, see [D3980](https://phabricator.haskell.org/D3980), although I don't yet know why), but the code we generate likely isn't quite the same. The challenge here is that (despite what I thought earlier) we definitely *can* end up in code generation, under certain circumstances, with
```hs
case tagToEnum# @t x of
...
```
How does this happen? After all, in `caseRules` we carefully remove every case on `tagToEnum#` and `dataToTag#` applications! The trouble comes when CorePrep puts everything in A-normal form. Strict function applications are transformed into `case` forms, so
```hs
f (tagToEnum# x)
```
will become
```hs
case tagToEnum# x of y
  DEFAULT -> f y
```
Suddenly we have that ugly case! Hmph.

## [#14282](https://gitlab.haskell.org/ghc/ghc/-/issues/14282): tagToEnum# . dataToTag# not optimized away

*David Feuer · updated 2019-07-07*

Consider
```hs
foo :: Int# -> Int#
foo x = dataToTag# (tagToEnum# x :: Bool)
bar :: Bool -> Bool
bar x = tagToEnum# (dataToTag# x)
```
These are both effectively identity functions. But while `foo` simplifies to one, `bar` does not! We might want to fix that.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.2.1 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
Milestone: 8.6.1 · Assignee: David Feuer