GHC issues: https://gitlab.haskell.org/ghc/ghc/-/issues

https://gitlab.haskell.org/ghc/ghc/-/issues/15191
Deriving via DeriveAnyClass not behaving the same as an empty instance declaration (Darwin226)

I've opened [a question on StackOverflow](https://stackoverflow.com/questions/50557019/deriving-via-deriveanyclass-not-behaving-the-same-as-an-emply-instance-declarati) describing the issue. I'll copy it here:

----
I have the following code
```hs
{-# LANGUAGE PolyKinds, DefaultSignatures, FlexibleContexts, DeriveAnyClass, DeriveGeneric #-}
{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses, UndecidableInstances #-}
module DeriveTest where
import GHC.Generics
class GenericClass a m where
instance GenericClass f m => GenericClass (M1 i c f) m
instance Condition a m => GenericClass (K1 i a) m
class Condition (a :: k) (m :: * -> *) where
instance (Condition a m, Condition b m) => Condition (a b) m
instance {-# OVERLAPPABLE #-} Condition (a :: k) m
class Class (e :: (* -> *) -> *) where
classF :: e m -> ()
default classF :: GenericClass (Rep (e m)) m => e m -> ()
classF = undefined
```
It defines a class `Class` for types that take a higher-kinded type as a parameter, together with a generic way to derive instances of it. Now if I declare a new datatype like this and try to derive an instance of `Class`:
```hs
data T a m = T
{ field :: a }
deriving (Generic, Class)
```
I get the following error:
```
* Overlapping instances for Condition a m
arising from the 'deriving' clause of a data type declaration
Matching instances:
instance [overlappable] forall k (a :: k) (m :: * -> *).
Condition a m
instance forall k1 k2 (a :: k1 -> k2) (m :: * -> *) (b :: k1).
(Condition a m, Condition b m) =>
Condition (a b) m
(The choice depends on the instantiation of `a, m'
To pick the first instance above, use IncoherentInstances
when compiling the other instance declarations)
* When deriving the instance for (Class (T a))
|
22 | deriving (Generic, Class)
| ^^^^^
```
The error sort of makes sense, I guess: the instance really does depend on the instantiation of `a`. However, if I just write an empty instance like this:
```hs
data T a m = T
{ field :: a }
deriving (Generic)
instance Class (T a) -- works
```
It works. Why? And how can I make it behave the same with the deriving statement?
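A possible workaround (my sketch, not something confirmed in the thread) is standalone deriving: it takes the instance context you write literally instead of inferring one, so it should behave like the empty instance above:

```haskell
{-# LANGUAGE StandaloneDeriving, DeriveAnyClass #-}
-- Sketch: with StandaloneDeriving the context is given explicitly (here:
-- none), so GHC checks the instance like the empty one instead of trying
-- to infer a context and tripping over the overlapping Condition
-- instances. Assumes T and Class from the module above are in scope.
deriving instance Class (T a)
```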
----
Ryan Scott suggested I open a ticket and that the issue probably isn't with the deriving mechanisms. Still, I chose to keep the title because that's what the original problem was and I've seen [similar issues](https://github.com/GetShopTV/swagger2/issues/144) before.

https://gitlab.haskell.org/ghc/ghc/-/issues/15190
disabling haddock disables building of manuals (Jens Petersen)

As far as I can tell, disabling haddock in ghc causes the manuals also not to get built. Is this known/expected?

I have not been able to track down why this seems to be the case.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.2.2 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Build System |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
https://gitlab.haskell.org/ghc/ghc/-/issues/15184
T4442 fails on i386 (Ömer Sinan Ağacan)

```
omer@i386-chroot:~/ghc/testsuite/tests/primops/should_run$ ~/ghc/inplace/bin/ghc-stage2 T4442.hs
[1 of 1] Compiling Main ( T4442.hs, T4442.o )
T4442.hs:135:14: error: Not in scope: data constructor `I64#'
|
135 | (\arr i (I64# a) s -> write arr i a s)
| ^^^^
```
Looking at the code, it uses `Int` on 64-bit, but `Int64` on i386, probably to use 64-bit integers on both platforms. I think we can just use `Int64` on both platforms and get rid of the CPP to avoid further breakage in the future.
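The CPP-free shape suggested here would pattern-match on `Int64` directly; a minimal sketch (not the actual test file):

```haskell
{-# LANGUAGE MagicHash #-}
-- Sketch of the suggested fix: Int64 and its I64# constructor exist on
-- both 64-bit platforms and i386, so matching on them needs no CPP.
import GHC.Int (Int64 (I64#))

describe :: Int64 -> String
describe (I64# _) = "matched I64# without any CPP"
```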
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.2.2 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
https://gitlab.haskell.org/ghc/ghc/-/issues/15179
Unwinding info for stg_ap_v_info is wrong (niteria)

The `readelf --debug-dump=frames-interp` output for `stg_ap_v_info` is:
```
00000018 0000000000000034 00000000 FDE cie=00000000 pc=000000000000000f..000000000000021d
LOC CFA rbp rsp ra
000000000000000f rbp+0 v+0 u c+0
0000000000000037 rbp+0 v+0 vexp c+0
0000000000000047 rbp+0 v+0 s c+0
```
It's wrong because it unwinds to the same frame, see `cfa = rbp + 0`.
I know the reason, I will put it in the comments below.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------------- |
| Version | 8.5 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler (Debugging) |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | bgamari, simonmar |
| Operating system | |
| Architecture | |
</details>
https://gitlab.haskell.org/ghc/ghc/-/issues/15177
Faulty instance termination check, with PolyKinds and/or TypeInType (Simon Peyton Jones)

When checking the [Paterson conditions](http://downloads.haskell.org/~ghc/master/users-guide/glasgow_exts.html#instance-termination-rules) for termination of an instance declaration, we check the number of "constructors and variables" in the instance head and constraints. **Question**: Do we look at
* A. All the arguments, visible or invisible?
* B. Just the visible arguments?
I think both will ensure termination, provided we are consistent.
A current bug in GHC means that we are not consistent. In particular in `TcValidity.checkInstTermination` we see
```
checkInstTermination tys theta
= check_preds theta
where
...
head_size = sizeTypes tys
...
check pred
= case classifyPredType pred of
...
-> check2 pred (sizeTypes $ filterOutInvisibleTypes (classTyCon cls) tys)
```
Notice the `filterOutInvisibleTypes` in the context predicate, but not for the `head_size`! Similarly in `sizePred` (which itself looks very ad hoc; used only for 'deriving'). Moreover, `sizeTypes` itself does not do the `filterOutInvisibleTypes` when it finds a `TyConApp`.
Bottom line: GHC mostly uses Plan A, except for an inconsistent use of Plan B at top level of `checkInstTermination` and `sizePred`.
I tried doing it both ways and fell into a swamp.
Fails plan A:
- `rebindable/T5908` has `instance (Category w, ...) => Monad (WriterT w m)`. With kind arguments this is actually `instance (Category @* w, ...) => Monad (WriterT w m)`, and now the predicate in the context and the head both have size 4. So under (B) this is OK but not under (A).
- `deriving/should_compile/T11833` is similar
- So is `overloadedrecflds/should_run/hasfieldrun02`.
Fails plan B:
- `polykinds/T12718` has `instance Prelude.Num a => XNum a`, where `XNum` is poly-kinded. Under (A) this would be OK, but not under B.
- `typecheck/should_compile/T14441` is tricky. Putting in explicit kinds we have
```
type family Demote (k :: Type) :: Type
-- Demote :: forall k -> Type
type family DemoteX (a :: k) :: Demote k
-- DemoteX :: forall k. k -> Demote k
data Prox (a :: k) = P
-- P :: forall k (a:k). Prox @k a
type instance DemoteX P = P
-- type instance DemoteX (Prox @k a) (P @k @a)
-- = P @(Demote k) @(DemoteX @k a)
```
> So the LHS has 2 visible constructors and variables, namely `DemoteX` and `P`. But the type-family applications in the RHS also each have two visible, e.g. `Demote` and `k` for `Demote k`. Confusingly, these applications are hidden inside the invisible argument of `P`; but we really must look at them to ensure termination. Aaargh.
- `dependent/should_compile/T13910` is similar, but a lot more complicated.
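The `polykinds/T12718` shape above can be reconstructed concretely (a sketch, not the actual test file): under Plan A the head `XNum @* a` counts the invisible kind argument and is strictly bigger than the context `Num a`, while under Plan B both have size 1 and the check fails:

```haskell
{-# LANGUAGE PolyKinds, FlexibleInstances, UndecidableInstances #-}
-- Reconstruction of the T12718 shape: XNum is poly-kinded, so its
-- instance head carries an invisible kind argument that Num's does not.
class XNum (a :: k)
instance Num a => XNum a
```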
Currently, because of the bug, these all pass. But I think it should be possible to exploit the bug to defeat the termination check, so things are not good at all.

https://gitlab.haskell.org/ghc/ghc/-/issues/15176
Superclass `Monad m =>` makes program run 100 times slower (danilo2)

Hi! I've just encountered a very bizarre error.
### General description
The code:
```
class LayersFoldableBuilder__ t (layers :: [Type]) m where
buildLayersFold__ :: SomePtr -> m (Fold.Result t) -> m (Fold.Result t)
instance Monad m => LayersFoldableBuilder__ t '[] m where
buildLayersFold__ = \_ a -> a
{-# INLINE buildLayersFold__ #-}
instance ( MonadIO m
, Storable.Storable (Layer.Cons l ())
, Layer.StorableLayer l m
, LayerFoldableBuilder__ (EnabledLayer t l) t m l
, LayersFoldableBuilder__ t ls m )
=> LayersFoldableBuilder__ t (l ': ls) m where
buildLayersFold__ = \ptr mr -> do
let fs = buildLayersFold__ @t @ls ptr'
ptr' = Ptr.plusPtr ptr $ Layer.byteSize @l
layerBuild__ @(EnabledLayer t l) @t @m @l ptr $! fs mr
{-# INLINE buildLayersFold__ #-}
```
This is the typeclass `LayersFoldableBuilder__` and ALL of its instances. Note that every instance has a `Monad m` or `MonadIO m` constraint. The program that uses this code heavily runs in 40ms. If we only add the constraint `Monad m =>` to the class definition:
```
class Monad m => LayersFoldableBuilder__ t (layers :: [Type]) m where
buildLayersFold__ :: SomePtr -> m (Fold.Result t) -> m (Fold.Result t)
```
The program runs in 3.5s, which is almost 100 times slower.
Unfortunately I do not have a minimal example, but it is reproducible. It is part of the Luna Language codebase: https://github.com/luna/luna-core/blob/60bf6130691c23e52b97b067b52becb8fdb0c72e/core/src/Data/Graph/Traversal/Scoped.hs#L102
It was introduced in commit 60bf6130691c23e52b97b067b52becb8fdb0c72e on the static-layers branch. Building it is simple: `stack bench luna-core`. After invoking it we see the described results.
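One mitigation worth trying (my suggestion, not something verified against the Luna codebase) is to pin down the monad at the hot call sites with a `SPECIALIZE` pragma, so the superclass dictionary never has to be passed at runtime. A self-contained sketch with invented names:

```haskell
-- Hypothetical sketch: forcing specialisation so a superclass constraint
-- does not degrade to runtime dictionary passing. `go` stands in for a
-- hot loop like buildLayersFold__; all names here are invented.
class Monad m => Counter m where
  tick :: Int -> m Int
  tick = pure  -- trivial default, enough for the sketch

instance Counter IO

go :: Counter m => Int -> m Int
go 0 = tick 0
go n = tick n >> go (n - 1)
{-# SPECIALIZE go :: Int -> IO Int #-}

main :: IO ()
main = go 1000000 >>= print
```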
### Why it's important and what we should do to fix it
1. I am writing this because I care about the Haskell community. I want GHC and Haskell to be widely used. Right now, the only thing I hear from the companies around ours is that unpredictable performance, even when following the rules for "how to write efficient code", is the biggest pain people have. Haskell is gathering attention: pure functional programming, immutability, etc. are great. But it will not become a popular choice unless we care about predictable performance.
1. Such performance changes are unacceptable if Haskell and GHC are to be production-ready systems. We need a clear way to write high-performance Haskell without having to benchmark every part of our programs after every refactoring. GHC has enough information to discover that we want high performance here and there (even by looking at INLINE pragmas) and should warn us about a lack of optimization. We should also have a way to force GHC to apply optimizations in particular places, for example by explicitly marking code to always be specialized during compilation, so GHC would never fall back to dict-passing there. Such a possibility would solve MANY related problems and user fears.
1. Point 2 also applies to strictness. In my opinion, having clearer strictness resolution rules and tools is important. Right now the only way to know whether strictness analysis did a good job and we are not constantly boxing/unboxing things is to read Core, which is tedious, and 99% of Haskell users do not even know how to do it (we've got 10 really, really good Haskellers here; 2 of them are capable of reading Core, but not very fluently). I would love to chat more about these topics, because they are crucial for growing the Haskell community and making Haskell a more popular choice, which is what we want, right? We don't want Haskell to be just a research project with all its users being its authors at the same time.
### What happens in Core
I inspected the Core and found that indeed, after adding the constraint, GHC does not apply all optimizations to the definitions. To be honest, I don't understand it at all, because the code uses explicit `INLINE` pragmas everywhere to make sure everything is optimized away at compilation time:
```
--------------------------------------------------------------------------------
SLOW, with Monad m =>
--------------------------------------------------------------------------------
-- RHS size: {terms: 5, types: 12, coercions: 4, joins: 0/0}
buildLayersFold__ [InlPrag=INLINE]
:: forall t (layers :: [*]) (m :: * -> *).
LayersFoldableBuilder__ t layers m =>
SomePtr -> m (Fold.Result t) -> m (Fold.Result t)
[GblId[ClassOp],
Arity=1,
Caf=NoCafRefs,
Str=<S,U>,
Unf=Unf{Src=InlineStable, TopLvl=True, Value=True, ConLike=True,
WorkFree=True, Expandable=True,
Guidance=ALWAYS_IF(arity=1,unsat_ok=False,boring_ok=True)
Tmpl= \ (@ t_ao0O)
(@ (layers_ao0P :: [*]))
(@ (m_ao0Q :: * -> *))
(v_B1 [Occ=Once]
:: LayersFoldableBuilder__ t_ao0O layers_ao0P m_ao0Q) ->
v_B1
`cast` (Data.Graph.Traversal.Scoped.N:LayersFoldableBuilder__[0]
<t_ao0O>_N <layers_ao0P>_N <m_ao0Q>_N
:: (LayersFoldableBuilder__
t_ao0O layers_ao0P m_ao0Q :: Constraint)
~R# (SomePtr
-> m_ao0Q (Fold.Result t_ao0O)
-> m_ao0Q (Fold.Result t_ao0O) :: *))}]
buildLayersFold__
= \ (@ t_ao0O)
(@ (layers_ao0P :: [*]))
(@ (m_ao0Q :: * -> *))
(v_B1 :: LayersFoldableBuilder__ t_ao0O layers_ao0P m_ao0Q) ->
v_B1
`cast` (Data.Graph.Traversal.Scoped.N:LayersFoldableBuilder__[0]
<t_ao0O>_N <layers_ao0P>_N <m_ao0Q>_N
:: (LayersFoldableBuilder__
t_ao0O layers_ao0P m_ao0Q :: Constraint)
~R# (SomePtr
-> m_ao0Q (Fold.Result t_ao0O)
-> m_ao0Q (Fold.Result t_ao0O) :: *))
--------------------------------------------------------------------------------
FAST, without Monad m =>
--------------------------------------------------------------------------------
-- RHS size: {terms: 8, types: 25, coercions: 0, joins: 0/0}
Data.Graph.Traversal.Scoped.$p1LayersFoldableBuilder__
:: forall t (layers :: [*]) (m :: * -> *).
LayersFoldableBuilder__ t layers m =>
Monad m
[GblId[ClassOp],
Arity=1,
Caf=NoCafRefs,
Str=<S(SL),U(U,A)>,
RULES: Built in rule for Data.Graph.Traversal.Scoped.$p1LayersFoldableBuilder__: "Class op $p1LayersFoldableBuilder__"]
Data.Graph.Traversal.Scoped.$p1LayersFoldableBuilder__
= \ (@ t_ao0P)
(@ (layers_ao0Q :: [*]))
(@ (m_ao0R :: * -> *))
(v_B1 :: LayersFoldableBuilder__ t_ao0P layers_ao0Q m_ao0R) ->
case v_B1 of v_B1
{ Data.Graph.Traversal.Scoped.C:LayersFoldableBuilder__ v_B2
v_B3 ->
v_B2
}
-- RHS size: {terms: 8, types: 25, coercions: 0, joins: 0/0}
buildLayersFold__
:: forall t (layers :: [*]) (m :: * -> *).
LayersFoldableBuilder__ t layers m =>
SomePtr -> m (Fold.Result t) -> m (Fold.Result t)
[GblId[ClassOp],
Arity=1,
Caf=NoCafRefs,
Str=<S(LS),U(A,U)>,
RULES: Built in rule for buildLayersFold__: "Class op buildLayersFold__"]
buildLayersFold__
= \ (@ t_ao0P)
(@ (layers_ao0Q :: [*]))
(@ (m_ao0R :: * -> *))
(v_B1 :: LayersFoldableBuilder__ t_ao0P layers_ao0Q m_ao0R) ->
case v_B1 of v_B1
{ Data.Graph.Traversal.Scoped.C:LayersFoldableBuilder__ v_B2
v_B3 ->
v_B3
}
```

https://gitlab.haskell.org/ghc/ghc/-/issues/15167
DerivClause list is not populated for (TyConI (DataD ...)) (0xd34df00d)

```haskell
% cat Test.hs
{-# LANGUAGE LambdaCase #-}
module Test where
import Language.Haskell.TH
test :: Name -> Q [Dec]
test name = reify name >>= \case
TyConI dec -> do
runIO $ print dec
pure []
_ -> pure []
% cat Run.hs
{-# LANGUAGE TemplateHaskell #-}
import Test
data Foo = Foo deriving (Eq, Ord, Show)
test ''Foo
% ghc Run.hs
[2 of 2] Compiling Main ( Run.hs, Run.o )
DataD [] Main.Foo [] Nothing [NormalC Main.Foo []] []
```
One might expect the `DataD` to mention `Eq, Ord, Show` in the `DerivClause` list, but it doesn't.
This behavior manifests with every GHC version I tried: 8.0.2, 8.2.2, 8.4.2. I also asked on #haskell whether it's intended behaviour, and I've been advised to open a bug report, so here it is.
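Until the `DerivClause` list is populated, one way to recover the information (a sketch using the existing `isInstance` function from `Language.Haskell.TH`) is to query the instances instead of the declaration:

```haskell
{-# LANGUAGE TemplateHaskell #-}
module TestWorkaround where

import Language.Haskell.TH

-- Ask whether `ty` has an instance of `cls`, regardless of whether it
-- came from a deriving clause or a standalone instance declaration.
hasInstanceOf :: Name -> Name -> Q Bool
hasInstanceOf cls ty = isInstance cls [ConT ty]

-- Usage in another module, with Foo from above:
--   $(do b <- hasInstanceOf ''Eq ''Foo; runIO (print b); pure [])
```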
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ---------------- |
| Version | 8.4.2 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Template Haskell |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
https://gitlab.haskell.org/ghc/ghc/-/issues/15161
ghci cannot find symbols defined by TH's addForeignFilePath (Alp Mestanogullari)

This causes T14298 to fail in the `ghci` way, for example:
```sh
$ make fulltest TEST="T14298"
...
=====> T14298(ghci) 1 of 1 [0, 0, 0]
cd "./th/T14298.run" && "/home/alp/WT/ghc-slow-validate/inplace/test spaces/ghc-stage2" T14298.hs -dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output -XTemplateHaskell -package template-haskell --interactive -v0 -ignore-dot-ghci -fno-ghci-history +RTS -I0.1 -RTS -v0< T14298.genscript
Actual stderr output differs from expected:
diff -uw "/dev/null" "./th/T14298.run/T14298.run.stderr.normalised"
--- /dev/null 2018-05-11 09:42:48.644575222 +0200
+++ ./th/T14298.run/T14298.run.stderr.normalised 2018-05-17 15:59:33.340172963 +0200
@@ -0,0 +1,13 @@
+ghc: ^^ Could not load 'foo', dependency unresolved. See top entry above.
+
+
+ByteCodeLink: can't find label
+During interactive linking, GHCi couldn't find the following symbol:
+ foo
+This may be due to you not asking GHCi to load extra object files,
+archives or DLLs needed by your current session. Restart GHCi, specifying
+the missing library using the -L/path/to/object/dir and -lmissinglibname
+flags, or simply by naming the relevant files on the GHCi command line.
+Alternatively, this link failure might indicate a bug in GHCi.
+If you suspect the latter, please send a bug report to:
+ glasgow-haskell-bugs@haskell.org
*** unexpected failure for T14298(ghci)
```
More generally, there is some phasing issue; Ben indicated to me that we would have to suspend the compilation of the current file, compile and link the foreign one, and then resume. For now I'll just mark T14298 as expected broken for the `ghci` way.
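For context, the kind of program that trips this looks roughly like the following (a sketch, not the actual T14298 test; depending on the template-haskell version the entry point is `addForeignFilePath`, `addForeignSource`, or `addForeignFile`, and the C body is invented to match the `foo` symbol in the log):

```haskell
{-# LANGUAGE TemplateHaskell, ForeignFunctionInterface #-}
module ForeignTH where

import Language.Haskell.TH.Syntax (ForeignSrcLang (LangC), addForeignSource)

-- Embed a C definition of `foo` at compile time. Under --interactive the
-- bytecode linker needs this object linked before code mentioning `foo`
-- runs, which is the phasing problem described above.
$(do addForeignSource LangC "int foo(void) { return 42; }"
     pure [])

foreign import ccall "foo" foo :: IO Int
```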
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.5 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | GHCi |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
https://gitlab.haskell.org/ghc/ghc/-/issues/15159
Expand compatibility note for readMVar (David Feuer)

Currently, the note reads
> Compatibility note: Prior to base 4.7, readMVar was a combination of takeMVar and putMVar. This mean that in the presence of other threads attempting to putMVar, readMVar could block. Furthermore, readMVar would not receive the next putMVar if there was already a pending thread blocked on takeMVar. The old behavior can be recovered by \[...\]
I don't think this goes quite far enough. Consider the following scenario:
```hs
main = do
mv <- newEmptyMVar
forkIO $ putMVar mv "a" >> ... >> takeMVar mv >>= ...
forkIO $ putMVar mv "b" >> ... >> takeMVar mv >>= ...
...
readMVar mv
```
(assume none of the ...s touch `mv`)
With the current implementation of `readMVar`, each child thread will put its value in the `MVar` and then take that same value out. They may do so in either order, and the order will determine what value the main thread reads.
With the *old* implementation of `readMVar`, there are two additional (symmetrical) possibilities. In one, the first child puts `"a"`, the main thread takes `"a"`, the second child puts `"b"`, the first child takes `"b"`, the main thread puts `"a"`, and the second child takes `"a"`.
So the old version of `readMVar` could make an interaction between other threads substantially less deterministic. I believe the note should probably reflect that. Its language also could really use some clarification in general.
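For reference, the pre-4.7 behaviour the note describes corresponds to this definition (reconstructed from the note's own wording, not copied from old base):

```haskell
import Control.Concurrent.MVar

-- Pre-base-4.7 readMVar as described: a take followed by a put. Between
-- the two operations another thread's putMVar can win the race, which is
-- what produces the extra interleavings described above.
oldReadMVar :: MVar a -> IO a
oldReadMVar mv = do
  a <- takeMVar mv
  putMVar mv a
  pure a
```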
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 8.2.2 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Core Libraries |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
https://gitlab.haskell.org/ghc/ghc/-/issues/15153
GHC uses O_NONBLOCK on regular files, which has no effect, and blocks the runtime (Niklas Hambüchen)

This is the outcome of https://mail.haskell.org/pipermail/ghc-devs/2018-May/015749.html
Reading through the code of [readRawBufferPtr](http://hackage.haskell.org/package/base-4.11.1.0/docs/src/GHC.IO.FD.html#readRawBufferPtr), the first line jumped out at me:
```
| isNonBlocking fd = unsafe_read -- unsafe is ok, it can't block
```
This looks suspicious.
On Linux, if `fd` is a descriptor to a regular file (on disk, a networked filesystem, or a block device), then `O_NONBLOCK` will have no effect, yet `unsafe_read` is used, which will block the running OS thread.
You can read more about `O_NONBLOCK` not working on regular files on Linux here:
- https://www.nginx.com/blog/thread-pools-boost-performance-9x/
- https://stackoverflow.com/questions/8057892/epoll-on-regular-files
- https://jvns.ca/blog/2017/06/03/async-io-on-linux--select--poll--and-epoll/
- https://groups.google.com/forum/#!topic/comp.os.linux.development.system/K-fC-G6P4EA
And indeed, the following program does NOT keep printing things in the printing thread, and instead blocks for 30 seconds:
```
module Main where

import Control.Concurrent
import Control.Monad
import qualified Data.ByteString as BS
import System.Environment

main :: IO ()
main = do
  args <- getArgs
  case args of
    [file] -> do
      forkIO $ forever $ do
        putStrLn "still running"
        threadDelay 100000 -- 0.1 s
      bs <- BS.readFile file
      putStrLn $ "Read " ++ show (BS.length bs) ++ " bytes"
    _ -> error "Pass 1 argument (a file)"
```
when compiled with
```
~/.stack/programs/x86_64-linux/ghc-8.2.2/bin/ghc --make -O -threaded blocking-regular-file-read-test.hs
```
on my Ubuntu 16.04 and on a 2GB file like
```
./blocking-regular-file-read-test /mnt/images/ubuntu-18.04-desktop-amd64.iso
```
And `strace -f -e open,read` on it shows:
```
open("/mnt/images/ubuntu-18.04-desktop-amd64.iso", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 11
read(11, <unfinished ...>
```
So GHC is trying to use `O_NONBLOCK` on regular files, which cannot work and will block when used through unsafe foreign calls like that.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 8.2.2 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Runtime System |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | lehins, nh2 |
| Operating system | |
| Architecture | |
</details>
<!-- {"blocked_by":[],"summary":"GHC uses O_NONBLOCK on regular files, which has no effect, and blocks the runtime","status":"New","operating_system":"","component":"Runtime System","related":[],"milestone":"8.6.1","resolution":"Unresolved","owner":{"tag":"Unowned"},"version":"8.2.2","keywords":[],"differentials":[],"test_case":"","architecture":"","cc":["lehins","nh2"],"type":"Bug","description":"This is the outcome of https://mail.haskell.org/pipermail/ghc-devs/2018-May/015749.html\r\n\r\nReading through the code of [http://hackage.haskell.org/package/base-4.11.1.0/docs/src/GHC.IO.FD.html#readRawBufferPtr readRawBufferPtr] the first line jumped to my eye:\r\n\r\n{{{\r\n | isNonBlocking fd = unsafe_read -- unsafe is ok, it can't block\r\n}}}\r\n\r\nThis looks suspicious.\r\n\r\nOn Linux, if `fd` is a a descriptor to a regular file (on disk, a networked filesystem, or a block device), then `O_NONBLOCK` will have no effect, yet `unsafe_read` is used which will block the running OS thread.\r\n\r\nYou can read more about `O_NONBLOCK` not working on regular files on Linux here:\r\n\r\n* https://www.nginx.com/blog/thread-pools-boost-performance-9x/\r\n* https://stackoverflow.com/questions/8057892/epoll-on-regular-files\r\n* https://jvns.ca/blog/2017/06/03/async-io-on-linux--select--poll--and-epoll/\r\n* https://groups.google.com/forum/#!topic/comp.os.linux.development.system/K-fC-G6P4EA\r\n\r\nAnd indeed, the following program does NOT keep printing things in the printing thread, and instead blocks for 30 seconds:\r\n\r\n{{{\r\nmodule Main where\r\n\r\nimport Control.Concurrent\r\nimport Control.Monad\r\nimport qualified Data.ByteString as BS\r\nimport System.Environment\r\n\r\nmain :: IO ()\r\nmain = do\r\n args <- getArgs\r\n case args of\r\n [file] -> do\r\n\r\n forkIO $ forever $ do\r\n putStrLn \"still running\"\r\n threadDelay 100000 -- 0.1 s\r\n bs <- BS.readFile file\r\n putStrLn $ \"Read \" ++ show (BS.length bs) ++ \" bytes\"\r\n\r\n _ -> error \"Pass 1 
argument (a file)\"\r\n}}}\r\n\r\nwhen compiled with\r\n\r\n{{{\r\n~/.stack/programs/x86_64-linux/ghc-8.2.2/bin/ghc --make -O -threaded blocking-regular-file-read-test.hs\r\n}}}\r\n\r\non my Ubuntu 16.04 and on a 2GB file like\r\n\r\n{{{\r\n./blocking-regular-file-read-test /mnt/images/ubuntu-18.04-desktop-amd64.iso\r\n}}}\r\n\r\nAnd `strace -f -e open,read` on it shows:\r\n\r\n{{{\r\nopen(\"/mnt/images/ubuntu-18.04-desktop-amd64.iso\", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 11\r\nread(11, <unfinished ...>\r\n}}}\r\n\r\nSo GHC is trying to use `O_NONBLOCK` on regular files, which cannot work and will block when used through unsafe foreign calls like that.\r\n","type_of_failure":"OtherFailure","blocking":[]} -->https://gitlab.haskell.org/ghc/ghc/-/issues/15151Better Interaction Between Specialization and GND2019-07-07T18:14:02ZAndrew MartinBetter Interaction Between Specialization and GNDLet us consider the following code:
```
-- Sort.hs
sort :: (Prim a, Ord a) => MutablePrimArray s a -> ST s ()
sort mutArr = ...
{-# SPECIALIZE sort :: MutablePrimArray s Int -> ST s () -#}
{-# SPECIALIZE sort :: MutablePrimArray s Int8 ...Let us consider the following code:
```
-- Sort.hs
sort :: (Prim a, Ord a) => MutablePrimArray s a -> ST s ()
sort mutArr = ...
{-# SPECIALIZE sort :: MutablePrimArray s Int -> ST s () #-}
{-# SPECIALIZE sort :: MutablePrimArray s Int8 -> ST s () #-}
{-# SPECIALIZE sort :: MutablePrimArray s Word8 -> ST s () #-}
...
```
For reference, a `MutablePrimArray` is a `MutableByteArray` with a phantom type variable to tag the element type. This sorting algorithm may be implemented in any number of ways, and the implementation is unimportant here. The specialize pragmas are intended to capture a number of common use cases. Here's where a problem arises:
```
-- Example.hs
newtype PersonId = PersonId Int
  deriving (Eq,Ord,Prim)
sortPeople :: MutablePrimArray s PersonId -> MutablePrimArray s PersonId
sortPeople x = sort x
```
There isn't a rewrite rule that specializes the `sort` function when we are dealing with `PersonId`. So, we end up with a slower version of the code that explicitly passes all the dictionaries. One solution would be to just use `INLINABLE` instead. Then we don't have to try to list every type, and we just let the specialization be generated at the call site. But this isn't totally satisfying. There are a lot of types that are just newtype wrappers around `Int`. Why should we have an extra copy of the same code for each of them? (Even without newtypes, `INLINABLE` can still result in duplication if neither of the modules that needs a specialization transitively imports the other).
What I'm suggesting is that rewrite rules (like those generated by `SPECIALIZE`) could target not just the given type but also any newtype around it, provided that all typeclass instances required by the function were the result of GND. The only situations where this is unsound are situations where the user was already doing something unsound with rewrite rules. There are several implementation difficulties:
- In core, there is no good way to tell that a typeclass instance dictionary was the result of GND. I'm not sure how to work around this.
- `Eq` and `Ord` usually aren't handled by the `newtype` deriving strategy. They are handled by the `stock` strategy, which produces code with equivalent behavior but is nevertheless different code.
- The rewrite rule would need to look at additional arguments beyond the type arguments.
I suspect that these difficulties would make this feature hard to implement, but it would help me with some of my libraries and applications.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 8.4.2 |
| Type | FeatureRequest |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
<!-- {"blocked_by":[],"summary":"Better Interaction Specialization and GND","status":"New","operating_system":"","component":"Compiler","related":[],"milestone":"8.6.1","resolution":"Unresolved","owner":{"tag":"Unowned"},"version":"8.4.2","keywords":[],"differentials":[],"test_case":"","architecture":"","cc":[""],"type":"FeatureRequest","description":"Let us consider the following code:\r\n\r\n{{{\r\n-- Sort.hs\r\nsort :: (Prim a, Ord a) => MutablePrimArray s a -> ST s ()\r\nsort mutArr = ...\r\n{-# SPECIALIZE sort :: MutablePrimArray s Int -> ST s () -#}\r\n{-# SPECIALIZE sort :: MutablePrimArray s Int8 -> ST s () -#}\r\n{-# SPECIALIZE sort :: MutablePrimArray s Word8 -> ST s () -#}\r\n...\r\n}}}\r\n\r\nFor reference, a `MutablePrimArray` is a `MutableByteArray` with a phantom type variable to tag the element type. This sorting algorithm may be implemented in any number of ways, and the implementation is unimportant here. The specialize pragmas are intended to capture a number of common use cases. Here's where a problem arises:\r\n\r\n{{{\r\n-- Example.hs\r\nnewtype PersonId = PersonId Int\r\n deriving (Eq,Ord,Prim)\r\nsortPeople :: MutablePrimArray s PersonId -> MutablePrimArray s PersonId\r\nsortPeople x = sort x\r\n}}}\r\n\r\nThere isn't a rewrite rule that specializes the `sort` function when we are dealing with `PersonId`. So, we end up with a slower version of the code that explicitly passes all the dictionaries. One solution would be to just use `INLINABLE` instead. Then we don't have to try to list every type, and we just let the specialization be generate at the call site. But this isn't totally satisfying. There are a lot of types that are just newtype wrappers around `Int`. Why should we have an extra copy of the same code for each of them? 
(Even without newtypes, `INLINABLE` can still result in duplication if neither of the modules that needs a specialization transitively imports the other).\r\n\r\nWhat I'm suggesting is that rewrite rules (like those generated by `SPECIALIZE`) could target not just the given type but also any newtype around it, provided that all typeclass instances required by the function were the result of GND. The only situations where this is unsound are situations where the user was already doing something unsound with rewrite rules. There are several implementation difficulties:\r\n\r\n* In core, there is no good way to tell that a typeclass instance dictionary was the result of GND. I'm not sure how to work around this.\r\n* `Eq` and `Ord` usually aren't handled by the `newtype` deriving strategy. They are handled by the `stock` strategy, which produces code with equivalent behavior but is nevertheless different code.\r\n* The rewrite rule would need to look at additional arguments beyond the type arguments.\r\n\r\nI suspect that these difficulties would make such this feature difficult to implement, but this feature would help me with some of my libraries and applications.","type_of_failure":"OtherFailure","blocking":[]} -->Research neededhttps://gitlab.haskell.org/ghc/ghc/-/issues/15148Allow setting of custom alignments2020-01-23T19:20:41ZAndreas KlebingerAllow setting of custom alignmentsAlignment can introduce a bias to benchmarking results which isn't obvious. To help deal with that I want to add some options adjusting alignment for generated code.
To give one example I came across while looking into #15124:
Padding ...Alignment can introduce a bias to benchmarking results which isn't obvious. To help deal with that I want to add some options adjusting alignment for generated code.
To give one example I came across while looking into #15124:
Padding a function that **is not even called** in nofib/real/eff/FS by multiples of 8 bytes gave runtimes of ~240ms, ~220ms and ~210ms depending on the padding.
At 64 bytes it seems to loop around (the size of a cache line).
At a minimum it should be possible to specify alignment of functions.
Potentially we could also consider:
- Padding for sections.
- Padding for functions
- Alignment for info tables.
- Padding for info tables.
- Alignment for Data
The cases where this could have benefits:
----
Problem: We optimize one subroutine, changing its size. This leads to alignment changes for the following functions, potentially making overall performance worse.
Here aligning functions along cache lines would make benchmarks more resilient against accidental alignment changes.
----
Problem: We might try to optimize code based on a specific benchmark. Going purely by performance of this benchmark we might end up optimizing our code for this specific benchmark instead of the general case.
Varying the alignment would let us check whether our code simply found an optimum for the alignment caused by this benchmark or whether it is generally better.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 8.2.2 |
| Type | Task |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler (NCG) |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | jmct |
| Operating system | |
| Architecture | |
</details>
<!-- {"blocked_by":[],"summary":"Allow setting of custom alignments","status":"New","operating_system":"","component":"Compiler (NCG)","related":[],"milestone":"8.6.1","resolution":"Unresolved","owner":{"tag":"Unowned"},"version":"8.2.2","keywords":["CodeGen"],"differentials":[],"test_case":"","architecture":"","cc":["jmct"],"type":"Task","description":"Alignment can introduce a bias to benchmarking results which isn't obvious. To help deal with that I want to add some options adjusting alignment for generated code.\r\n\r\nTo give one example I came across while looking into #15124: \r\n\r\nPadding a function that **is not even called** in nofib/real/eff/FS by multiples of 8 bytes gave runtimes of ~240ms, ~220ms and ~210ms depending on the padding.\r\nAt 64 byte it seems to loop around (size of a cache line).\r\n\r\nAt a minimum it should be possible to specify alignment of functions.\r\n\r\nPotentially we could also consider:\r\n* Padding for sections.\r\n* Padding for functions\r\n* Alignment for info tables.\r\n* Padding for info tables.\r\n* Alignment for Data\r\n\r\n\r\nThe cases where this could have benefits:\r\n\r\n----\r\nProblem: We optimize one subroutine changing it's size. This leads to alignment changes for the following functions making overall performance potentially worse. \r\n\r\nHere aligning functions along cache lines would make benchmarks more resilient against accidental alignment changes.\r\n\r\n----\r\nProblem: We might try to optimize code based on a specific benchmark. 
Going purely by performance of this benchmark we might end up optimizing our code for this specific benchmark instead of the general case.\r\n\r\nVarying the alignment would to check if our code simply found a optimum for the alignment caused by this benchmark or if it is generally better.\r\n\r\n","type_of_failure":"OtherFailure","blocking":[]} -->https://gitlab.haskell.org/ghc/ghc/-/issues/15147Type checker plugin receives Wanteds that are not completely unflattened2021-06-23T19:27:32ZnfrisbyType checker plugin receives Wanteds that are not completely unflattenedWith the following, a plugin will receive a wanted constraint that includes a `fsk` flattening skolem.
```
-- "Reduced" via the plugin
type family F u v where {}
type family G a b where {}
data D p where
DC :: (p ~ F x (G () ())) => ...With the following, a plugin will receive a wanted constraint that includes a `fsk` flattening skolem.
```
-- "Reduced" via the plugin
type family F u v where {}
type family G a b where {}
data D p where
  DC :: (p ~ F x (G () ())) => D p
```
(Please ignore the apparent ambiguity regarding `x`; the goal is for the plugin to eliminate any ambiguity.)
A do-nothing plugin that merely logs its inputs gives the following for the ambiguity check on DC.
```
given
  [G] $d~_a4oh {0}:: (p_a4o2[sk:2] :: *)
                     ~ (p_a4o2[sk:2] :: *) (CDictCan)
  [G] $d~~_a4oi {0}:: (p_a4o2[sk:2] :: *)
                      ~~ (p_a4o2[sk:2] :: *) (CDictCan)
  [G] co_a4od {0}:: (G () () :: *)
                    ~# (fsk_a4oc[fsk:0] :: *) (CFunEqCan)
  [G] co_a4of {0}:: (F x_a4o3[sk:2] fsk_a4oc[fsk:0] :: *)
                    ~# (fsk_a4oe[fsk:0] :: *) (CFunEqCan)
  [G] co_a4og {1}:: (fsk_a4oe[fsk:0] :: *)
                    ~# (p_a4o2[sk:2] :: *) (CTyEqCan)
derived
wanted
  [WD] hole{co_a4or} {3}:: (F x_a4o6[tau:2] fsk_a4oc[fsk:0] :: *)
                           ~# (p_a4o2[sk:2] :: *) (CNonCanonical)
untouchables [fsk_a4oe[fsk:0], fsk_a4oc[fsk:0], x_a4o3[sk:2], p_a4o2[sk:2]]
```
Note the `fsk_a4oc[fsk:0]` tyvar in the Wanted constraint, which is why I'm opening this ticket. Its presence contradicts the claim that "The wanteds will be unflattened and zonked" in the https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker#Callingpluginsfromthetypechecker section.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ----------------------- |
| Version | 8.4.1 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler (Type checker) |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | adamgundry |
| Operating system | |
| Architecture | |
</details>
<!-- {"blocked_by":[],"summary":"Type checker plugin receives Wanteds that are not completely unflattened","status":"New","operating_system":"","component":"Compiler (Type checker)","related":[],"milestone":"8.6.1","resolution":"Unresolved","owner":{"tag":"Unowned"},"version":"8.4.1","keywords":["checker","plugins","type"],"differentials":[],"test_case":"","architecture":"","cc":["adamgundry"],"type":"Bug","description":"With the following, a plugin will receive a wanted constraint that includes a `fsk` flattening skolem.\r\n\r\n{{{\r\n-- \"Reduced\" via the plugin\r\ntype family F u v where {}\r\ntype family G a b where {}\r\n\r\ndata D p where\r\n DC :: (p ~ F x (G () ())) => D p\r\n}}}\r\n\r\n(Please ignore the apparent ambiguity regarding `x`; the goal is for the plugin to eliminate any ambiguity.)\r\n\r\nA do-nothing plugin that merely logs its inputs gives the following for the ambiguity check on DC.\r\n\r\n{{{\r\ngiven\r\n [G] $d~_a4oh {0}:: (p_a4o2[sk:2] :: *)\r\n ~ (p_a4o2[sk:2] :: *) (CDictCan)\r\n [G] $d~~_a4oi {0}:: (p_a4o2[sk:2] :: *)\r\n ~~ (p_a4o2[sk:2] :: *) (CDictCan)\r\n [G] co_a4od {0}:: (G () () :: *)\r\n ~# (fsk_a4oc[fsk:0] :: *) (CFunEqCan)\r\n [G] co_a4of {0}:: (F x_a4o3[sk:2] fsk_a4oc[fsk:0] :: *)\r\n ~# (fsk_a4oe[fsk:0] :: *) (CFunEqCan)\r\n [G] co_a4og {1}:: (fsk_a4oe[fsk:0] :: *)\r\n ~# (p_a4o2[sk:2] :: *) (CTyEqCan)\r\nderived\r\nwanted\r\n [WD] hole{co_a4or} {3}:: (F x_a4o6[tau:2] fsk_a4oc[fsk:0] :: *)\r\n ~# (p_a4o2[sk:2] :: *) (CNonCanonical)\r\nuntouchables [fsk_a4oe[fsk:0], fsk_a4oc[fsk:0], x_a4o3[sk:2], p_a4o2[sk:2]]\r\n}}}\r\n\r\nNote the `fsk_a4oc[fsk:0]` tyvar in the Wanted constraint, which is why I'm opening this ticket. 
Its presence contradicts the \"The wanteds will be unflattened and zonked\" claim from https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker#Callingpluginsfromthetypechecker section.","type_of_failure":"OtherFailure","blocking":[]} -->https://gitlab.haskell.org/ghc/ghc/-/issues/15137Install the "primitive" package on CI before running tests2019-07-07T18:14:06ZÖmer Sinan AğacanInstall the "primitive" package on CI before running tests#15038 added a regression test that requires primitive package. Because the package is not installed by validate script it won't run on CI. We should probably update validate script to install it before running the test suite.
<details>...#15038 added a regression test that requires primitive package. Because the package is not installed by validate script it won't run on CI. We should probably update validate script to install it before running the test suite.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.5 |
| Type | Task |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | bgamari |
| Operating system | |
| Architecture | |
</details>
<!-- {"blocked_by":[],"summary":"Install primitive on CI before running tests","status":"New","operating_system":"","component":"Compiler","related":[],"milestone":"","resolution":"Unresolved","owner":{"tag":"Unowned"},"version":"8.5","keywords":[],"differentials":[],"test_case":"","architecture":"","cc":["bgamari"],"type":"Task","description":"#15038 added a regression test that requires primitive package. Because the package is not installed by validate script it won't run on CI. We should probably update validate script to install it before running the test suite.","type_of_failure":"OtherFailure","blocking":[]} -->https://gitlab.haskell.org/ghc/ghc/-/issues/15135Overlapping typeclass instance selection depends on the optimisation level2020-01-23T19:20:40ZAntoine LeblancOverlapping typeclass instance selection depends on the optimisation levelA file A defines a typeclass, and gives an instance for all types a, and exports a function relying on said typeclass. A file B defines a data type, makes it a specific `OVERLAPPING` instance of that class, and uses the function defined ...A file A defines a typeclass, and gives an instance for all types a, and exports a function relying on said typeclass. A file B defines a data type, makes it a specific `OVERLAPPING` instance of that class, and uses the function defined in A. Which instance ends up being picked for B depends on the optimisation level those files are compiled with.
- \*Minimal test case\*\*
*A.hs*
```hs
{-# LANGUAGE FlexibleInstances #-}
{-# OPTIONS_GHC -fno-warn-simplifiable-class-constraints #-}

module A where

import Data.Maybe

class A a where
  someValue :: a -> Maybe Int

instance A a where
  someValue = const Nothing

getInt :: A a => a -> Int
getInt x = fromMaybe 0 $ someValue x
```
*B.hs*
```hs
module B where

import A

data B = B Int

instance {-# OVERLAPPING #-} A B where
  someValue (B x) = Just x

getBInt :: Int
getBInt = getInt $ B 42
```
*Main.hs*
```hs
import B
main :: IO ()
main = putStrLn $ "B: " ++ show getBInt
```
To reproduce:
```
$ ghc -O0 -fforce-recomp Main.hs && ./Main
[1 of 3] Compiling A ( A.hs, A.o )
[2 of 3] Compiling B ( B.hs, B.o )
[3 of 3] Compiling Main ( Main.hs, Main.o )
Linking Main ...
B: 42
$ ghc -O2 -fforce-recomp Main.hs && ./Main
[1 of 3] Compiling A ( A.hs, A.o )
[2 of 3] Compiling B ( B.hs, B.o )
[3 of 3] Compiling Main ( Main.hs, Main.o )
Linking Main ...
B: 0
```
The fix introduced for #14434 instructs the "short-cut solver" not to automatically choose a matching instance if it is marked as `INCOHERENT` or `OVERLAPPABLE`, but in this case the instance is not marked in any way. This might be the source of the bug?
Additionally, whatever the optimisation level, ghc emits a warning about the `A a =>` class constraint being simplifiable; but if it is removed, then the program prints "0" in both cases.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.4.2 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
<!-- {"blocked_by":[],"summary":"Overlapping typeclass instance selection depends on the optimisation level","status":"New","operating_system":"","component":"Compiler","related":[],"milestone":"8.6.1","resolution":"Unresolved","owner":{"tag":"Unowned"},"version":"8.4.2","keywords":[],"differentials":[],"test_case":"","architecture":"","cc":[""],"type":"Bug","description":"A file A defines a typeclass, and gives an instance for all types a, and exports a function relying on said typeclass. A file B defines a data type, makes it a specific `OVERLAPPING` instance of that class, and uses the function defined in A. Which instance ends up being picked for B depends on the optimisation level those files are compiled with.\r\n\r\n**Minimal test case**\r\n\r\n//A.hs//\r\n{{{#!hs\r\n{-# LANGUAGE FlexibleInstances #-}\r\n{-# OPTIONS_GHC -fno-warn-simplifiable-class-constraints #-}\r\n\r\nmodule A where\r\n\r\nimport Data.Maybe\r\n\r\nclass A a where\r\n someValue :: a -> Maybe Int\r\n\r\ninstance A a where\r\n someValue = const Nothing\r\n\r\ngetInt :: A a => a -> Int\r\ngetInt x = fromMaybe 0 $ someValue x\r\n}}}\r\n\r\n//B.hs//\r\n{{{#!hs\r\nmodule B where\r\n\r\nimport A\r\n\r\ndata B = B Int\r\n\r\ninstance {-# OVERLAPPING #-} A B where\r\n someValue (B x) = Just x\r\n\r\ngetBInt :: Int\r\ngetBInt = getInt $ B 42\r\n}}}\r\n\r\n//Main.hs//\r\n{{{#!hs\r\nimport B\r\n\r\nmain :: IO ()\r\nmain = putStrLn $ \"B: \" ++ show getBInt\r\n}}}\r\n\r\nTo reproduce:\r\n{{{\r\n$ ghc -O0 -fforce-recomp Main.hs && ./Main \r\n[1 of 3] Compiling A ( A.hs, A.o )\r\n[2 of 3] Compiling B ( B.hs, B.o )\r\n[3 of 3] Compiling Main ( Main.hs, Main.o )\r\nLinking Main ...\r\nB: 42\r\n\r\n$ ghc -O2 -fforce-recomp Main.hs && ./Main \r\n[1 of 3] Compiling A ( A.hs, A.o )\r\n[2 of 3] Compiling B ( B.hs, B.o )\r\n[3 of 3] Compiling Main ( Main.hs, Main.o )\r\nLinking Main ...\r\nB: 0\r\n}}}\r\n\r\n\r\nThe fix introduced to fix ticket:14434 instructs the \"short-cut solver\" to not automatically choose 
a matching instance if it marked as `INCOHERENT` or `OVERLAPPABLE`, but in this case the instance is not marked in any way. This might be the source of the bug?\r\n\r\nAdditionally, whatever the optimisation level, ghc emits a warning about the `A a =>` class constraint being simplifiable; but if it is removed, then the program prints \"0\" in both cases.","type_of_failure":"OtherFailure","blocking":[]} -->https://gitlab.haskell.org/ghc/ghc/-/issues/15130Hadrian doesn't rebuild changed `CoreUtils.hs`2020-01-23T19:20:40ZTobias Dammerstdammers@gmail.comHadrian doesn't rebuild changed `CoreUtils.hs`When compiling stage2 with Hadrian, freezing stage1 causes changes to `CoreUtils.hs` to be ignored, and `CoreUtils.hs` not being recompiled.
The following steps should reproduce the problem:
1. Make a debug build with Hadrian: `./hadri...When compiling stage2 with Hadrian, freezing stage1 causes changes to `CoreUtils.hs` to be ignored, and `CoreUtils.hs` not being recompiled.
The following steps should reproduce the problem:
1. Make a debug build with Hadrian: `./hadrian/build.sh --flavour=Devel2`
2. Rebuild with freeze1: `./hadrian/build.sh --flavour=Devel2 --freeze1`
3. Edit `compiler/coreSyn/CoreUtils.hs`, adding some change that is easily detected in the output
4. Rebuild with freeze1: `./hadrian/build.sh --flavour=Devel2 --freeze1`
5. Observe how `CoreUtils` does not appear in the list of modules Hadrian is compiling, and how changes to that file do not end up in the stage2 compiler.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.4.2 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Build System |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
<!-- {"blocked_by":[],"summary":"Hadrian doesn't rebuild changed `CoreUtils.hs`","status":"New","operating_system":"","component":"Build System","related":[],"milestone":"8.6.1","resolution":"Unresolved","owner":{"tag":"Unowned"},"version":"8.4.2","keywords":[],"differentials":[],"test_case":"","architecture":"","cc":[""],"type":"Bug","description":"When compiling stage2 with Hadrian, freezing stage1 causes changes to `CoreUtils.hs` to be ignored, and `CoreUtils.hs` not being recompiled.\r\n\r\nThe following steps should reproduce the problem:\r\n\r\n1. Make a debug build with Hadrian: `./hadrian/build.sh --flavour=Devel2`\r\n2. Rebuild with freeze1: `./hadrian/build.sh --flavour=Devel2 --freeze1`\r\n3. Edit `compiler/coreSyn/CoreUtils.hs`, adding some change that is easily detected in the output\r\n4. Rebuild with freeze1: `./hadrian/build.sh --flavour=Devel2 --freeze1`\r\n5. Observe how `CoreUtils` does not appear in the list of modules Hadrian is compiling, and how changes to that file do not end up in the stage2 compiler.","type_of_failure":"OtherFailure","blocking":[]} -->https://gitlab.haskell.org/ghc/ghc/-/issues/15129Expose ghc-pkg internals as a library2020-01-23T19:20:40ZryanreichExpose ghc-pkg internals as a libraryThe module [GHC.PackageDb](https://downloads.haskell.org/~ghc/latest/docs/html/libraries/ghc-boot-8.4.1/GHC-PackageDb.html), which is in the `ghc-boot` package and therefore shipped alongside `base`, and which exposes the crucial data ty...The module [GHC.PackageDb](https://downloads.haskell.org/~ghc/latest/docs/html/libraries/ghc-boot-8.4.1/GHC-PackageDb.html), which is in the `ghc-boot` package and therefore shipped alongside `base`, and which exposes the crucial data type `InstalledPackageInfo a b c d e f g` and functions `readPackageDbForGhc`, `readPackageDbForGhcPkg`, and `writePackageDb`, is nonetheless crippled outside of GHC:
- The first function requires instances of the `BinaryStringRep` and `DbUnitIdModuleRep` classes, of which there are none in any public package
- The second one uses a polymorphic parameter that, in order to run without error, actually needs to be [Distribution.InstalledPackageInfo](https://downloads.haskell.org/~ghc/latest/docs/html/libraries/Cabal-2.2.0.0/Distribution-InstalledPackageInfo.html), defined in the completely different `Cabal` package neither mentioning nor mentioned by `ghc-boot`
- The third one takes two arguments of which the first (a list of `InstalledPackageInfo a b c d e f g`) is redundant because it can be constructed from the second, but no conversion function is defined.
While it is possible to write both the instances and conversion function by hand, it is extremely awkward and I worry that it is also fragile, given the internal nature of this entire module.
The full functionality of this package is only available within the [Main](https://github.com/ghc/ghc/blob/master/utils/ghc-pkg/Main.hs) module of `ghc-pkg`, which is highly monolithic despite containing such [generally useful code](https://github.com/ghc/ghc/blob/master/utils/ghc-pkg/Main.hs#L1226-L1307) as the above instances and conversion function. Since `GHC.PackageDb` provides so much of `ghc-pkg`'s essential functionality as a library, it seems reasonable to request that this code be brought out of the opaque `Main` module and into a public library. I understand from the comments that `ghc-boot` is not the place for this, but perhaps one of the following is:
a. `Cabal`, because it already has the `InstalledPackageInfo` (no parameters) type used by `ghc-pkg`; however, `Cabal` seems committed to calling `ghc-pkg` as an executable, not providing linkage to its internals.
b. a separate `ghc-pkg` library containing, possibly, a more carefully chosen selection of that executable's code.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 8.2.2 |
| Type | FeatureRequest |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Core Libraries |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
---

https://gitlab.haskell.org/ghc/ghc/-/issues/15126
**Opportunity to compress common info table representation.** (Andreas Klebinger, 2020-01-23, milestone 8.6.1, keywords: CodeGen)

I've looked at a lot of GHC-produced assembly recently and noticed that most info tables
describing stacks have the form:
```
.align 8
.long SDjR_srt-(block_cHmk_info)+296
.long 0
.quad 6151
.quad 4294967326
```
I haven't managed to dig fully into the description; however, some observations:
- I noticed that the second .long directive almost always ends up being zero.
- When figuring out what is what, I realized the first quad (describing the pointers) is almost never fully used.
- The last entry (closure type + ?), here `4294967326`, also seems quite repetitive given the size reserved.
So I looked in detail at spectral/simple:
- There are 2012 info tables of this sort, all of them having a zero in the second long.
- We also reserve 8 bytes for the stack layout. However, only a single one of these tables requires more than 4 bytes.
The compiled module has a size of 276384 bytes, of which 16092 are redundant:
- 4 bytes for 0
- 4 bytes unused stack description
- times 2012 info tables.
That is an overhead of 5.8%, which seems like quite a lot to me.
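As a quick back-of-the-envelope check of those figures (the numbers are the ones reported above; the helper names are mine):

```haskell
-- Sanity check on the overhead figure: the ticket reports 16092
-- redundant bytes out of a 276384-byte compiled module, spread over
-- 2012 info tables of this shape.
overheadPercent :: Double
overheadPercent = redundantBytes / moduleBytes * 100
  where
    redundantBytes = 16092   -- zero word + unused layout bytes, summed
    moduleBytes    = 276384  -- size of the compiled module in bytes

main :: IO ()
main = print overheadPercent  -- a little over 5.8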
The question of where to put that information is a different one. But looking only at the data and not at how it is used, tagging the pointer to the SRT table seems like a possibility.
The info table description `4294967326` also appeared over 1k times. Maybe it's possible to come up with a more efficient encoding there as well.
I haven't given it much thought yet, since I don't have the time to do anything about it in the near future.
But I'm putting it here in case anyone is interested or looks into this in the future.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.2.2 |
| Type | Task |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
---

https://gitlab.haskell.org/ghc/ghc/-/issues/15117
**Add linting checks for DWARF unwind information** (Ben Gamari, 2020-01-23, milestone 8.6.1)

DWARF unwind information is rather subtle and needs to be manually specified in some cases. Having at least some basic sanity checks would help prevent things like #14999.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.2.2 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | niteria |
| Operating system | |
| Architecture | |
</details>
---

https://gitlab.haskell.org/ghc/ghc/-/issues/15113
**Do not make CAFs from literal strings** (Simon Peyton Jones, 2022-03-05)

Currently (as I discovered in #15038), we get the following code for `GHC.Exception.Base.patError`:
```
lvl2_r3y3 :: [Char]
[GblId]
lvl2_r3y3 = unpackCString# lvl1_r3y2
-- RHS size: {terms: 7, types: 6, coercions: 2, joins: 0/0}
patError :: forall a. Addr# -> a
[GblId, Arity=1, Str=<B,U>x, Unf=OtherCon []]
patError
= \ (@ a_a2kh) (s_a1Pi :: Addr#) ->
raise#
@ SomeException
@ 'LiftedRep
@ a_a2kh
(Control.Exception.Base.$fExceptionPatternMatchFail_$ctoException
((untangle s_a1Pi lvl2_r3y3)
`cast` (Sym (Control.Exception.Base.N:PatternMatchFail[0])
:: (String :: *) ~R# (PatternMatchFail :: *))))
```
That stupid `lvl2_r3y3 :: String` is a CAF, and hence `patError` has CAF-refs, and hence so does any function that calls `patError`, and any function that calls them.
That's bad! Lots more CAF entries in SRTs, lots more work traversing those SRTs in the garbage collector. And for what? To share the work of unpacking a C string! This is nuts.
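For context, a minimal reproducer (my own sketch, not taken from the ticket): any function with an incomplete pattern match calls `patError` behind the scenes, so under the current scheme it and all of its callers pick up CAF references purely because of the floated message string:

```haskell
module CafDemo where

-- Compiling with -ddump-simpl shows the pattern-match failure message
-- floated out as a top-level [Char] binding (a CAF), which patError,
-- fromJust', and every caller of fromJust' then reference via their SRTs.
fromJust' :: Maybe a -> a
fromJust' (Just x) = x   -- the missing Nothing case becomes a patError call
```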
What to do?
1. Somehow refrain from floating `unpackCString# lit` to the top level, even if you could otherwise do so. But that seems very ad hoc, and it makes the function bigger and less inlinable.
2. Treat a top level definition
```
x :: [Char]
x = unpackCString# y
```
as NOT a CAF, and make it single-entry so that the thunk is not updated. Then every use of `x` will unpack the string afresh, which is probably a good idea anyhow.
I like this more. It would be implemented somewhere in the code generator.

Assignee: Ben Gamari
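In ordinary source terms, the trade-off in option 2 looks roughly like the difference between the following two forms (a sketch, assuming `GHC.CString.unpackCString#` from `ghc-prim` is in scope; the names and the literal are mine):

```haskell
{-# LANGUAGE MagicHash #-}
module UnpackDemo where

import GHC.CString (unpackCString#)

-- Status quo: the unpacked list is a shared, updateable top-level
-- thunk, i.e. a CAF that taints the SRTs of everything that uses it.
sharedMsg :: String
sharedMsg = unpackCString# "some literal"#

-- Option 2, simulated with an explicit argument: unpack afresh at
-- every use, so there is no updateable top-level closure to keep alive.
-- (Note: full laziness may still float this out; the real change would
-- live in the code generator, as the ticket says.)
freshMsg :: () -> String
freshMsg _ = unpackCString# "some literal"#
```

The string is cheap to rebuild, which is exactly why the ticket judges that re-unpacking at each use "is probably a good idea anyhow".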