GHC issues: https://gitlab.haskell.org/ghc/ghc/-/issues

# Issue 14503: Killing a thread will block if there is another process reading from a handle
https://gitlab.haskell.org/ghc/ghc/-/issues/14503 (dnadales, 2019-07-07)

When trying to kill a thread, the program (which uses a thread) hangs if there is another process trying to read from a handle. This bug can be reproduced using [this sample code](https://github.com/capitanbatata/sandbox/tree/master/dangling-connections). I'll explain the relevant details below.
I have the following Haskell code:
```hs
someFuncWithChans :: IO ()
someFuncWithChans = withSocketsDo $ do
  h <- connectTo "localhost" (PortNumber 9090)
  hSetBuffering h NoBuffering
  ch <- newChan
  putStrLn "Starting the handler reader"
  readerTid <- forkIO $ handleReader h ch
  cmdsHandler h ch
  putStrLn "Killing the handler reader"
  killThread readerTid
  putStrLn "Closing the handle"
  hClose h

cmdsHandler :: Handle -> Chan Action -> IO ()
cmdsHandler h ch = do
  act <- readChan ch
  case act of
    Quit -> putStrLn "Bye bye"
    Line line -> do
      hPutStrLn h (reverse line)
      cmdsHandler h ch

handleReader :: Handle -> Chan Action -> IO ()
handleReader h ch = forever $ do
  line <- strip <$> hGetLine h
  case line of
    "quit" -> writeChan ch Quit
    _ -> writeChan ch (Line line)

data Action = Quit | Line String
```
If the function `someFuncWithChans` is run alongside the following Java program, the former will block while killing the handler reader (`readerTid`).
```java
public static void main(String[] args) throws IOException, InterruptedException {
    ServerSocket serverSock = new ServerSocket(9090);
    Socket sock = serverSock.accept();

    InputStream inStream = sock.getInputStream();
    BufferedReader sockIn = new BufferedReader(new InputStreamReader(inStream));
    OutputStream outStream = sock.getOutputStream();
    PrintWriter sockOut = new PrintWriter(new OutputStreamWriter(outStream));

    while (true) {
        Thread.sleep(1000);
        System.out.println("Sending foo");
        sockOut.println("foo");
        sockOut.flush();
        String s = sockIn.readLine();
        System.out.println("Got " + s);
        Thread.sleep(1000);
        System.out.println("Sending bar");
        sockOut.println("bar");
        sockOut.flush();
        s = sockIn.readLine();
        System.out.println("Got " + s);
        Thread.sleep(1000);
        System.out.println("Sending quit");
        sockOut.println("quit");
        sockOut.flush();
        // This will cause someFuncWithChans to block when killing the
        // reader thread.
        s = sockIn.readLine();
        System.out.println("Got " + s);
    }
}
```
If the `sockIn.readLine()` call is commented out, killing the thread succeeds. This problem appears only on my Windows machine (at work); it does not occur on my personal Linux machine.
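As background for why the hang can happen only when the reader is blocked in a read: `killThread` is built on `throwTo`, which completes only once the target thread reaches an interruptible point, and a thread stuck in a blocking OS-level read on Windows may never reach one. The following is a minimal sketch (not from the report) of the well-behaved case, where the target blocks in the interruptible `threadDelay`:

```haskell
import Control.Concurrent
import Control.Exception (finally)

main :: IO ()
main = do
  done <- newEmptyMVar
  -- The forked thread blocks in threadDelay, which is an interruptible
  -- operation, so the ThreadKilled exception can be delivered promptly.
  tid <- forkIO $
    threadDelay 10000000 `finally` putMVar done ()
  threadDelay 100000  -- give the thread time to start blocking
  killThread tid      -- returns as soon as the exception is delivered
  takeMVar done       -- wait for the thread's cleanup action to run
  putStrLn "killThread returned promptly"
```

Replacing the `threadDelay` with a blocking `hGetLine` on a socket handle is plausibly exactly the situation in the report: the exception cannot be delivered, so `killThread` never returns.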
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.0.2 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
Milestone: 8.2.1

# Issue 13739: very slow linking of executables with ld.bfd < 2.27
https://gitlab.haskell.org/ghc/ghc/-/issues/13739 (Douglas Wilson <douglas@well-typed.com>, 2019-07-07)

While building profiled executables with 8.2.1rc2 I've noticed the link times seem to have significantly regressed.
I don't have a minimal test case.
Testing on cabal head source tree
```
cabal --version
>cabal-install version 2.1.0.0
>compiled using version 2.1.0.0 of the Cabal library
cabal new-configure --enable-profiling --enable-newer --with-ghc=/opt/ghc-8.2.1/bin/ghc
cabal build cabal-install
# hit Ctrl-C during linking
time cabal build cabal-install
```
gives

    real    1m54.833s
    user    1m52.936s
    sys     0m1.620s
Doing the same with 8.0.2, it links in less than a second.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.2.1-rc2 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
Milestone: 8.2.1

# Issue 13290: Data constructors should not have RULES
https://gitlab.haskell.org/ghc/ghc/-/issues/13290 (Simon Peyton Jones, 2020-09-10)

GHC has never (knowingly) supported rules for data constructors, like
```
{-# RULES
"BadBadBad" Just [x,y] = Just []
#-}
```
Notice that the rule is for the *constructor itself*. Presumably the intent is that any occurrence of `Just` applied to a two-element list will rewrite to `Just []`.
But currently they are accidentally allowed through, and then behave in mysterious ways because of constructor wrappers. Duncan Coutts says
```
> Well I've certainly tried to use that in the past.
> A previous version of the cbor lib which used a different
> representation did a lot of matching on constructors to re-arrange
> input to an interpreter, until I discovered that GHC actually uses
> constructor wrappers and that matching on constructors was thus not
> reliable
```
I think we should simply make it illegal for now. If you really want it, use a smart constructor thus
```
mkJust x = Just x
{-# INLINE [0] mkJust #-}
{-# RULES "Good" mkJust [x,y] = mkJust [] #-}
```
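For completeness, here is the smart-constructor workaround written out as a self-contained module (a sketch, not from the ticket: real RULES syntax needs an explicit `forall`, and without `-O` the rule simply never fires):

```haskell
module Main where

-- Smart constructor: an ordinary function wrapping the real constructor,
-- so rule matching on it is reliable (no constructor wrapper involved).
mkJust :: a -> Maybe a
mkJust x = Just x
-- Delay inlining until phase 0 so the rule gets a chance to fire first.
{-# INLINE [0] mkJust #-}

{-# RULES
"Good" forall x y. mkJust [x, y] = mkJust []
  #-}

main :: IO ()
main =
  -- The rule only matches two-element list arguments; this call is
  -- untouched by it, so the output is the same with or without -O.
  print (mkJust 'x')
```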
So let us
- Add a check so that an attempt to write a rule for a data constructor is rejected.
- Document this in the user manual.

Milestone: 8.2.1, Assignee: David Feuer

# Issue 13194: Concurrent modifications of package.cache are not safe
https://gitlab.haskell.org/ghc/ghc/-/issues/13194 (Ghost User, 2020-05-31)

There are a couple of different issues here.
1. On Linux, issuing `ghc-pkg register` for multiple packages in parallel might result in lost updates to the package database because of how the `registerPackage` function works - it reads the existing package databases, picks the one to modify, checks that the package info for the package to register is fine, and replaces the package database with what was read in the beginning plus the new package info.
Therefore, if updates interleave, it might happen that process1 reads the database, then process2 updates it while process1 still has the old version and uses it for its update later, so update made by process2 is lost.
2. On Windows, updates to the package database might fail - the issue is that GHC attempts to update it using the rename trick, which fails whenever any other process has the file to be replaced open for reading. Combine that with the fact that GHC reads the package database when compiling packages and you get problems in both Stack (https://github.com/commercialhaskell/stack/issues/2617) and Cabal (https://github.com/haskell/cabal/issues/4005).
BTW, the rename trick (used for atomic database updates) not only doesn't work on Windows, it's also not atomic on e.g. NFS (https://stackoverflow.com/questions/41362016/rename-atomicity-and-nfs).
The solution to both problems is to use OS-specific features to lock the database file (in shared mode when reading and in exclusive mode when writing). This can be done on Windows using LockFileEx. Unfortunately, for POSIX things are a bit more complicated.
There are two ways to lock a file on Linux:
1. Using fcntl(F_SET_LK) (POSIX API)
2. Using flock (BSD API)
However, fcntl locks have a serious limitation:
> The record locks described above are associated with the process
> (unlike the open file description locks described below). This has
> some unfortunate consequences:
>
> - If a process closes any file descriptor referring to a file, then
>   all of the process's locks on that file are released, regardless
>   of the file descriptor(s) on which the locks were obtained. This
>   is bad: it means that a process can lose its locks on a file such
>   as /etc/passwd or /etc/mtab when for some reason a library
>   function decides to open, read, and close the same file.
>
> - The threads in a process share locks. In other words, a
>   multithreaded program can't use record locking to ensure that
>   threads don't simultaneously access the same region of a file.
Whereas flock is not guaranteed to work with NFS, according to https://en.wikipedia.org/wiki/File_locking#Problems:
> Whether and how flock locks work on network filesystems, such as NFS, is implementation dependent. On BSD systems, flock calls on a file descriptor open to a file on an NFS-mounted partition are successful no-ops. On Linux prior to 2.6.12, flock calls on NFS files would act only locally. Kernel 2.6.12 and above implement flock calls on NFS files using POSIX byte-range locks. These locks will be visible to other NFS clients that implement fcntl-style POSIX locks, but invisible to those that do not.\[4\]
Assuming that the solution would be to go with locking the database, we would need to:
1. In `registerPackage`, lock all read databases in shared mode except for the database that will later be modified, which has to be locked in exclusive mode. The handle also would need to be kept open and passed to `changeDB` later and used for rewriting the database with updated version in `GHC.PackageDb.writePackageDb` instead of `writeFileAtomic` (which is not actually unconditionally atomic, as demonstrated above).
2. `GHC.PackageDb.decodeFromFile` would lock the file in the appropriate mode and return the handle to the open file if appropriate.
3. Add support for locking a file. This should be fairly easy to do in GHC.IO.Handle.FD by extending the function `openFile'` with appropriate parameters and then adding a wrapper function `openLockedFile` or something. We can add both blocking and non-blocking locking, so that ghc-pkg can report when it is waiting on a locked package database.
Alternatively we could add a function similar to the following: `hLock :: Handle -> LockMode -> Bool{-block-} -> IO Bool`, but that requires extracting the file descriptor from a Handle, which as far as I can see is problematic.
Is going with locking an acceptable solution here?
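As an illustration only, here is a minimal sketch of what the POSIX (fcntl) side of such locking could look like, using `waitToSetLock` from the `unix` package; the file name is made up, and this approach inherits the fcntl caveats quoted above:

```haskell
import System.IO
import System.Posix.IO (handleToFd, waitToSetLock, LockRequest(..))

main :: IO ()
main = do
  -- Stand-in for the package database file; the path is illustrative only.
  h  <- openFile "package.cache.demo" WriteMode
  fd <- handleToFd h  -- note: takes over the Handle's file descriptor
  -- Exclusive (write) lock over the whole file, blocking until granted.
  waitToSetLock fd (WriteLock, AbsoluteSeek, 0, 0)
  putStrLn "exclusive lock acquired"
  -- ... rewrite the database here, then release the lock ...
  waitToSetLock fd (Unlock, AbsoluteSeek, 0, 0)
  putStrLn "lock released"
```

(For what it's worth, later `base` releases did grow `hLock`/`hTryLock` in `GHC.IO.Handle.Lock` along roughly these lines.)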
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 8.0.1 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | ghc-pkg |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
Milestone: 8.2.1

# Issue 12990: Partially applied constructors with unpacked fields simplified badly
https://gitlab.haskell.org/ghc/ghc/-/issues/12990 (David Feuer, 2021-02-11)

When a constructor is partially applied to at least one unpacked field, the simplifier produces pretty lousy results.
```hs
data Foo a = Foo !Int a

silly :: ((a -> Foo a) -> r) -> r
silly f = f (Foo 3)
```
compiles to
```hs
-- RHS size: {terms: 9, types: 7, coercions: 0}
$WFoo :: forall a. Int -> a -> Foo a
$WFoo =
  \ (@ a_aqI) (dt_ayl :: Int) (dt_aym :: a_aqI[sk:1]) ->
    case dt_ayl of { I# dt_ayn -> Foo dt_ayn dt_aym }

-- RHS size: {terms: 2, types: 0, coercions: 0}
silly2 :: Int
silly2 = I# 3#

-- RHS size: {terms: 3, types: 3, coercions: 0}
silly1 :: forall a. a -> Foo a
silly1 = \ (@ a_ayG) -> $WFoo silly2

-- RHS size: {terms: 5, types: 9, coercions: 0}
silly :: forall a r. ((a -> Foo a) -> r) -> r
silly =
  \ (@ a_ayG) (@ r_ayH) (f_axY :: (a -> Foo a) -> r) -> f_axY silly1
```
That is, GHC represents `Foo 3` as a closure containing a *boxed* `Int`. Manually eta-expanding would fix it.
```hs
silly :: ((a -> Foo a) -> r) -> r
silly f = f (\x -> Foo 3 x)
```
compiles to
```hs
-- RHS size: {terms: 5, types: 4, coercions: 0}
silly1 :: forall a. a -> Foo a
silly1 = \ (@ a_ayO) (x_ay9 :: a) -> Foo 3# x_ay9

-- RHS size: {terms: 5, types: 9, coercions: 0}
silly :: forall a r. ((a -> Foo a) -> r) -> r
silly =
  \ (@ a_ayO) (@ r_ayP) (f_ay8 :: (a -> Foo a) -> r) -> f_ay8 silly1
```
in which the `Int` is unpacked into the closure. This transformation is valid whenever the value to be stored unpacked is known not to be bottom, and is certainly a good idea if it's known to have been forced.

Milestone: 8.2.1

# Issue 12356: StaticPointers support in GHCi
https://gitlab.haskell.org/ghc/ghc/-/issues/12356 (Ben Gamari, 2021-07-29)

Currently GHCi does not support `-XStaticPointers`, which is quite annoying for those of us who actually use `-XStaticPointers` in day-to-day development. It would be great if GHCi could grow support for this extension.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 8.0.1 |
| Type | FeatureRequest |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | #12000 |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
Milestone: 8.2.1, Assignee: Ben Gamari

# Issue 11817: Add proper support for weak symbols to the runtime linker
https://gitlab.haskell.org/ghc/ghc/-/issues/11817 (Tamar Christina, 2021-11-11)

Currently weak symbols are only partially supported by the runtime linker. The current support is mostly there to provide `COMDAT` support.
In #11223 this support has been extended but there is still some work needed to finish weak symbols support.
Some investigation is also needed into what this support should look like, e.g. whether we only support linking weak symbols in C or want to support them in Haskell as well.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ----------------------- |
| Version | 8.1 |
| Type | FeatureRequest |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Runtime System (Linker) |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | #11223 |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
Milestone: 8.2.1

# Issue 11557: Unbundle Cabal from GHC
https://gitlab.haskell.org/ghc/ghc/-/issues/11557 (Edward Z. Yang, 2019-07-07)

Recently, Duncan made it so that GHC proper does not depend on Cabal (so it is just ghc-pkg that is the user-facing executable which links against Cabal). We should now seriously consider unbundling Cabal from GHC, so that the default global database we provide does NOT include Cabal.
Pros:
- Distributions will be more likely to pick up point releases of Cabal, as they no longer have to finesse updating Cabal without updating GHC, as they do now.
- Stack is (improperly) coupling the version of Cabal they build with the release of GHC; while they should fix this, unbundling Cabal would also give them more flexibility with picking LTS packages.
Cons:
- Bootstrapping Cabal/cabal-install becomes modestly harder. Fortunately, cabal-install is already pretty obnoxious to bootstrap, so SOP is to just distribute binaries for this, in which case things are as easy as before.
- We wouldn't be strictly adhering to the Cabal spec, which requires that the compiler always be able to build the Setup executable.
- ghc-pkg would have to be statically linked
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ----------------- |
| Version | 8.1 |
| Type | Task |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | libraries (other) |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | dcoutts, hvr |
| Operating system | |
| Architecture | |
</details>
Milestone: 8.2.1

# Issue 11425: The GHC API doesn't provide a good hscTarget option for tooling
https://gitlab.haskell.org/ghc/ghc/-/issues/11425 (Ben Gamari, 2019-07-07)

Tools like [ghc-mod](https://github.com/DanielG/ghc-mod/) typically just want `TypecheckedModule`s. Sadly, the GHC API currently doesn't provide a good way to get at these in all cases (see this [ghc-mod ticket](https://github.com/DanielG/ghc-mod/issues/205)). Each of the options on offer is cursed with its own limitations (largely quoting from the ghc-mod ticket):
## HscNothing
At first glance this looks like what you would want. But...
- Pros
  * Doesn't generate code of any sort and is therefore rather lightweight
- Cons
  * It lacks support for Template Haskell
  * Has trouble with `foreign export`s
  * [Fails](https://github.com/DanielG/ghc-mod/pull/145) to issue pattern match checker warnings (#10600)
## HscInterpreted
Okay, so `HscNothing` doesn't work. Maybe `HscInterpreted` is better?
- Pros
  * Supports Template Haskell
- Cons
  * Can't deal with unboxed tuples (#1257)
  * Slower, as we need to produce unnecessary bytecode
  * Large memory footprint
  * Also can't deal with `foreign export`s
## HscAsm
- Pros
  * Supports all compilable code
- Cons
  * Slow
  * Produces `.o` files
This is quite unfortunate. It seems like we need something in between `HscNothing` and `HscInterpreted` which is willing to use the interpreter to evaluate Template Haskell splices when necessary, but doesn't produce bytecode. Unfortunately it's unclear what to do about `foreign export` (but arguably most tools would be fine with some approximate representation).

Milestone: 8.2.1

# Issue 10986: GHC should delete all temporary files it creates in /tmp
https://gitlab.haskell.org/ghc/ghc/-/issues/10986 (erikd, 2023-11-03)

GHC creates large numbers of temporary files in /tmp. These would normally get cleaned up when the machine is rebooted, but for things like Jenkins build machines that don't get rebooted often, they accumulate pretty quickly. The /tmp on my Jenkins build machine currently has over 200k files and directories, but only about 40 of them are non-GHC related files.
Under normal operation, GHC should delete all temporary files it creates.

Milestone: 8.2.1
Assignee: erikd

---

https://gitlab.haskell.org/ghc/ghc/-/issues/10249
# GHCi leaky abstraction: error message mentions `ghciStepIO`
2021-12-06T19:23:08Z, reported by Icelandjack

```
$ ghci -fdefer-type-errors -ignore-dot-ghci
GHCi, version 7.10.0.20150316: http://www.haskell.org/ghc/ :? for help
Prelude> _
<interactive>:2:1: Warning:
Found hole ‘_’ with type: IO t0
Where: ‘t0’ is an ambiguous type variable
In the first argument of ‘GHC.GHCi.ghciStepIO ::
IO a_aly -> IO a_aly’, namely
‘_’
In a stmt of an interactive GHCi command:
it <- GHC.GHCi.ghciStepIO :: IO a_aly -> IO a_aly _
*** Exception: <interactive>:2:1:
Found hole ‘_’ with type: IO t0
Where: ‘t0’ is an ambiguous type variable
In the first argument of ‘GHC.GHCi.ghciStepIO ::
IO a_aly -> IO a_aly’, namely
‘_’
In a stmt of an interactive GHCi command:
it <- GHC.GHCi.ghciStepIO :: IO a_aly -> IO a_aly _
(deferred type error)
Prelude>
```
It should ideally not expose `ghciStepIO` to the user.

Milestone: 8.2.1
Assignee: Thomas Miedema

---

https://gitlab.haskell.org/ghc/ghc/-/issues/10174
# AArch64 : ghc-stage2 segfaults compiling libraries/parallel
2019-07-07T18:37:14Z, reported by erikd

Compiling git HEAD (e02ef0e6d4eefa5f0) on AArch64 hardware using ghc-7.6.3 from Debian as the bootstrap compiler. LLVM is version 3.6.
It fails with:
```
"inplace/bin/ghc-stage2" -hisuf hi -osuf o -hcsuf hc -static -H64m -O0 -fllvm \
-this-package-key paral_2NTMt5X9WUG6LNupBFIZti -hide-all-packages -i \
-ilibraries/parallel/. -ilibraries/parallel/dist-install/build \
-ilibraries/parallel/dist-install/build/autogen \
-Ilibraries/parallel/dist-install/build \
-Ilibraries/parallel/dist-install/build/autogen -Ilibraries/parallel/. \
-optP-include -optPlibraries/parallel/dist-install/build/autogen/cabal_macros.h \
-package-key array_BLJREAlFJ4zJ2kSDphVieY -package-key base_8gvrDSBdaidLA14EDtK6ja \
-package-key conta_1uqbEcrmZiO1C91Z8ePoyI -package-key deeps_Bw45clVBNVT4S1LLyUOPfh \
-Wall -feager-blackholing -XHaskell2010 -O -fllvm -no-user-package-db -rtsopts \
-odir libraries/parallel/dist-install/build \
-hidir libraries/parallel/dist-install/build \
-stubdir libraries/parallel/dist-install/build \
-dynamic-too -c libraries/parallel/./Control/Parallel/Strategies.hs \
-o libraries/parallel/dist-install/build/Control/Parallel/Strategies.o \
-dyno libraries/parallel/dist-install/build/Control/Parallel/Strategies.dyn_o
libraries/parallel/ghc.mk:5: recipe for target
'libraries/parallel/dist-install/build/Control/Parallel/Strategies.o' failed
make[1]: *** [libraries/parallel/dist-install/build/Control/Parallel/Strategies.o] Segmentation fault
```
It seems that `inplace/lib/bin/ghc-stage1` (which is working fine) is statically linked to all the Haskell libraries, whereas `inplace/lib/bin/ghc-stage2` is dynamically linked, which would explain why stage1 works fine and stage2 segfaults.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 7.11 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
Milestone: 8.2.1
Assignee: erikd

---

https://gitlab.haskell.org/ghc/ghc/-/issues/9601
# Make the rewrite rule system more powerful
2019-07-07T18:39:51Z, reported by Sophie Taylor

It would be amazing if the current RULES system could be upgraded to include various predicates, or even into a full-on strategic programming system. This would allow far, far more optimisations to be possible, rather than the current conservative system where you have to make sure that the optimisations ALWAYS apply.
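For context, the current system only supports unconditional rewrites; the classic map-fusion rule below is about as much as it can express, and anything needing a side condition or a rewriting strategy is out of reach:

```haskell
module MapFusion where

-- An unconditional rewrite: whenever the left-hand side matches, GHC
-- may replace it, so the equation has to be valid for every f, g, xs.
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}
```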
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 7.8.2 |
| Type | FeatureRequest |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
Milestone: 8.2.1

---

https://gitlab.haskell.org/ghc/ghc/-/issues/8971
# Native Code Generator for 8.0.1 is not as optimized as 7.6.3...
2019-07-07T18:42:29Z, reported by GordonBGood

The output assembly code is not as optimized for the Windows 32-bit version 8.0.1 compiler as for the Windows 7.6.3 compiler (32-bit) when the option switches are exactly the same, although the problem may not be limited to the Windows platform; it slows execution of tight loops by roughly a factor of two.
The following code will reproduce the problem:
```haskell
-- GHC_NCG_OptimizationBug.hs
-- it seems the Haskell GHC 7.8.1 NCG Native Code Generator (NCG) doesn't
-- optimize as well for (at least) the x86 target as version 7.6.3
{-# OPTIONS_GHC -O3 -rtsopts -v -dcore-lint -ddump-asm -ddump-to-file -dumpdir . #-} -- or O2

import Data.Bits
import Control.Monad.ST (runST, ST(..))
import Data.Array.Base

-- Uses a very simple Sieve of Eratosthenes to 2 ^ 18 to prove it.
accNumPrimes :: Int -> Int
accNumPrimes acc = acc `seq` runST $ do
  let bfSz = (256 * 1024 - 3) `div` 2
  let bfLmtWrds = (bfSz + 1) `div` 32
  bufw <- newArray (0, bfLmtWrds) (-1) :: ST s (STUArray s Int Int)
  -- to clear the last "uneven" bit(s)
  unsafeWrite bufw bfLmtWrds (complement ((-2) `shiftL` (bfSz .&. 31)))
  bufb <- (castSTUArray :: STUArray s Int Int -> ST s (STUArray s Int Bool)) bufw
  let cullp i =
        let p = i + i + 3 in
        let s = (p * p - 3) `div` 2 in
        if s > bfSz then
          let count i sm = do
                sm `seq` if i > bfLmtWrds then return (acc + sm) else do
                  wd <- unsafeRead bufw i
                  count (i + 1) (sm + (popCount wd)) in
          count 0 1 -- use '1' for the '2' prime not in the array
        else do
          v <- unsafeRead bufb i
          if v then
            let cull j = do -- very tight inner loop
                  if j > bfSz then cullp (i + 1) else do
                    unsafeWrite bufb j False
                    cull (j + p) in
            cull s
          else cullp (i + 1)
  cullp 0

main =
  -- run the program a number of times to get a reasonable time...
  let numloops = 2000 in
  let loop n acc =
        acc `seq` if n <= 0 then acc else
        loop (n - 1) (accNumPrimes acc) in
  print $ loop numloops 0
```
The above code takes almost twice as long to run when compiled under 7.8.1 RC2 for Windows (32-bit) as it does for the version 7.6.3 compiler (both 32-bit compilers).
The -ddump-simpl Core dump is almost identical between the two, which is also evidenced by the fact that using the -fllvm LLVM back-end switch for each results in code that runs at about the same speed for both compilers (which would use the same Core output as is used for the NCG, right?).
Under Windows, the compilation and run for 7.8.1 RC2 goes like this:
```
*Main> :! E:\ghc-7.8.0.20140228_32\bin\ghc --make -pgmlo "E:\llvm32\build\Release\bin\opt" -pgmlc "E:\llvm32\build\Release\bin\llc" "GHC_NCG_OptimizationBug.hs"
compile: input file WindowsVsLinuxNCG.hs
Created temporary directory: C:\Users\Gordon\AppData\Local\Temp\ghc15460_0
*** Checking old interface for main:Main:
*** Parser:
*** Renamer/typechecker:
[1 of 1] Compiling Main ( GHC_NCG_OptimizationBug.hs, GHC_NCG_OptimizationBug.o )
*** Desugar:
Result size of Desugar (after optimization)
= {terms: 260, types: 212, coercions: 0}
*** Core Linted result of Desugar (after optimization):
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 213, types: 136, coercions: 52}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=2
= {terms: 215, types: 148, coercions: 67}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=3
= {terms: 209, types: 135, coercions: 51}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 209, types: 135, coercions: 42}
*** Core Linted result of Simplifier:
*** Specialise:
Result size of Specialise = {terms: 209, types: 135, coercions: 42}
*** Core Linted result of Specialise:
*** Float out(FOS {Lam = Just 0, Consts = True, PAPs = False}):
Result size of Float out(FOS {Lam = Just 0,
Consts = True,
PAPs = False})
= {terms: 286, types: 185, coercions: 42}
*** Core Linted result of Float out(FOS {Lam = Just 0, Consts = True, PAPs = False}):
*** Float inwards:
Result size of Float inwards
= {terms: 286, types: 185, coercions: 42}
*** Core Linted result of Float inwards:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 502, types: 393, coercions: 103}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=2
= {terms: 428, types: 326, coercions: 29}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=3
= {terms: 420, types: 321, coercions: 29}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 420, types: 321, coercions: 29}
*** Core Linted result of Simplifier:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 418, types: 318, coercions: 29}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 418, types: 318, coercions: 29}
*** Core Linted result of Simplifier:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 475, types: 383, coercions: 32}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=2
= {terms: 444, types: 336, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 444, types: 336, coercions: 9}
*** Core Linted result of Simplifier:
*** Demand analysis:
Result size of Demand analysis
= {terms: 444, types: 336, coercions: 9}
*** Core Linted result of Demand analysis:
*** Worker Wrapper binds:
Result size of Worker Wrapper binds
= {terms: 579, types: 457, coercions: 9}
*** Core Linted result of Worker Wrapper binds:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 510, types: 415, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 420, types: 322, coercions: 9}
*** Core Linted result of Simplifier:
*** Float out(FOS {Lam = Just 0, Consts = True, PAPs = True}):
Result size of Float out(FOS {Lam = Just 0,
Consts = True,
PAPs = True})
= {terms: 426, types: 326, coercions: 9}
*** Core Linted result of Float out(FOS {Lam = Just 0, Consts = True, PAPs = True}):
*** Common sub-expression:
Result size of Common sub-expression
= {terms: 424, types: 326, coercions: 9}
*** Core Linted result of Common sub-expression:
*** Float inwards:
Result size of Float inwards
= {terms: 424, types: 326, coercions: 9}
*** Core Linted result of Float inwards:
*** Liberate case:
Result size of Liberate case
= {terms: 1,824, types: 1,259, coercions: 9}
*** Core Linted result of Liberate case:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 608, types: 422, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=2
= {terms: 604, types: 413, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=3
= {terms: 604, types: 413, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 604, types: 413, coercions: 9}
*** Core Linted result of Simplifier:
*** SpecConstr:
Result size of SpecConstr = {terms: 708, types: 505, coercions: 9}
*** Core Linted result of SpecConstr:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 702, types: 499, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 608, types: 405, coercions: 9}
*** Core Linted result of Simplifier:
*** Tidy Core:
Result size of Tidy Core = {terms: 608, types: 405, coercions: 9}
*** Core Linted result of Tidy Core:
*** CorePrep:
Result size of CorePrep = {terms: 825, types: 489, coercions: 9}
*** Core Linted result of CorePrep:
*** Stg2Stg:
*** CodeOutput:
*** New CodeGen:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** CPSZ:
*** Assembler:
"E:\ghc-7.8.0.20140228_32\lib/../mingw/bin/gcc.exe" "-U__i686" "-fno-stack-protector" "-DTABLES_NEXT_TO_CODE" "-I." "-x" "assembler-with-cpp" "-c" "C:\Users\Gordon\AppData\Local\Temp\ghc15460_0\ghc15460_2.s" "-o" "GHC_NCG_OptimizationBug.o"
Linking GHC_NCG_OptimizationBug.exe ...
*Main> :! GHC_NCG_OptimizationBug +RTS -s
46000000
32,965,096 bytes allocated in the heap
7,032 bytes copied during GC
41,756 bytes maximum residency (2 sample(s))
19,684 bytes maximum slop
2 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 61 colls, 0 par 0.00s 0.00s 0.0000s 0.0000s
Gen 1 2 colls, 0 par 0.00s 0.00s 0.0001s 0.0001s
INIT time 0.00s ( 0.00s elapsed)
MUT time 1.73s ( 1.73s elapsed)
GC time 0.00s ( 0.00s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 1.73s ( 1.73s elapsed)
%GC time 0.0% (0.0% elapsed)
Alloc rate 19,006,902 bytes per MUT second
Productivity 100.0% of total user, 100.2% of total elapsed
```
whereas under version 7.6.3 goes like this:
```
*Main> :! E:\ghc-7.6.3_32\bin\ghc --make -pgmlo "E:\llvm32\build\Release\bin\opt" -pgmlc "E:\llvm32\build\Release\bin\llc" "GHC_NCG_OptimizationBug.hs"
compile: input file GHC_NCG_OptimizationBug.hs
Created temporary directory: C:\Users\Gordon\AppData\Local\Temp\ghc28200_0
*** Checking old interface for main:Main:
*** Parser:
*** Renamer/typechecker:
[1 of 1] Compiling Main ( GHC_NCG_OptimizationBug.hs, GHC_NCG_OptimizationBug.o )
*** Desugar:
Result size of Desugar (after optimization)
= {terms: 247, types: 212, coercions: 0}
*** Core Linted result of Desugar (after optimization):
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 198, types: 132, coercions: 35}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=2
= {terms: 200, types: 144, coercions: 43}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=3
= {terms: 194, types: 131, coercions: 57}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 194, types: 131, coercions: 39}
*** Core Linted result of Simplifier:
*** Specialise:
Result size of Specialise = {terms: 194, types: 131, coercions: 39}
*** Core Linted result of Specialise:
*** Float out(FOS {Lam = Just 0, Consts = True, PAPs = False}):
Result size of Float out(FOS {Lam = Just 0,
Consts = True,
PAPs = False})
= {terms: 277, types: 191, coercions: 39}
*** Core Linted result of Float out(FOS {Lam = Just 0, Consts = True, PAPs = False}):
*** Float inwards:
Result size of Float inwards
= {terms: 277, types: 191, coercions: 39}
*** Core Linted result of Float inwards:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 514, types: 403, coercions: 103}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=2
= {terms: 420, types: 317, coercions: 29}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=3
= {terms: 412, types: 312, coercions: 29}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 412, types: 312, coercions: 29}
*** Core Linted result of Simplifier:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 410, types: 309, coercions: 29}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 410, types: 309, coercions: 29}
*** Core Linted result of Simplifier:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 455, types: 364, coercions: 32}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=2
= {terms: 422, types: 317, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 422, types: 317, coercions: 9}
*** Core Linted result of Simplifier:
*** Demand analysis:
Result size of Demand analysis
= {terms: 422, types: 317, coercions: 9}
*** Core Linted result of Demand analysis:
*** Worker Wrapper binds:
Result size of Worker Wrapper binds
= {terms: 536, types: 427, coercions: 9}
*** Core Linted result of Worker Wrapper binds:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 480, types: 391, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 400, types: 306, coercions: 9}
*** Core Linted result of Simplifier:
*** Float out(FOS {Lam = Just 0, Consts = True, PAPs = True}):
Result size of Float out(FOS {Lam = Just 0,
Consts = True,
PAPs = True})
= {terms: 408, types: 311, coercions: 9}
*** Core Linted result of Float out(FOS {Lam = Just 0, Consts = True, PAPs = True}):
*** Common sub-expression:
Result size of Common sub-expression
= {terms: 406, types: 311, coercions: 9}
*** Core Linted result of Common sub-expression:
*** Float inwards:
Result size of Float inwards
= {terms: 406, types: 311, coercions: 9}
*** Core Linted result of Float inwards:
*** Liberate case:
Result size of Liberate case
= {terms: 1,186, types: 824, coercions: 9}
*** Core Linted result of Liberate case:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 585, types: 411, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=2
= {terms: 569, types: 392, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=3
= {terms: 569, types: 392, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 569, types: 392, coercions: 9}
*** Core Linted result of Simplifier:
*** SpecConstr:
Result size of SpecConstr = {terms: 746, types: 566, coercions: 9}
*** Core Linted result of SpecConstr:
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 739, types: 560, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier iteration=2
= {terms: 762, types: 546, coercions: 9}
*** Core Linted result of Simplifier:
Result size of Simplifier = {terms: 642, types: 402, coercions: 9}
*** Core Linted result of Simplifier:
*** Tidy Core:
Result size of Tidy Core = {terms: 642, types: 402, coercions: 9}
*** Core Linted result of Tidy Core:
writeBinIface: 10 Names
writeBinIface: 34 dict entries
*** CorePrep:
Result size of CorePrep = {terms: 779, types: 483, coercions: 9}
*** Core Linted result of CorePrep:
*** Stg2Stg:
*** CodeOutput:
*** CodeGen:
*** Assembler:
"E:\ghc-7.6.3_32\lib/../mingw/bin/gcc.exe" "-fno-stack-protector" "-Wl,--hash-size=31" "-Wl,--reduce-memory-overheads" "-I." "-c" "C:\Users\Gordon\AppData\Local\Temp\ghc28200_0\ghc28200_0.s" "-o" "GHC_NCG_OptimizationBug.o"
Linking GHC_NCG_OptimizationBug.exe ...
*Main> :! GHC_NCG_OptimizationBug +RTS -s
46000000
32,989,396 bytes allocated in the heap
4,976 bytes copied during GC
41,860 bytes maximum residency (2 sample(s))
19,580 bytes maximum slop
2 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 61 colls, 0 par 0.00s 0.00s 0.0000s 0.0000s
Gen 1 2 colls, 0 par 0.00s 0.00s 0.0001s 0.0001s
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.64s ( 0.64s elapsed)
GC time 0.00s ( 0.00s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 0.66s ( 0.64s elapsed)
%GC time 0.0% (0.1% elapsed)
Alloc rate 51,495,642 bytes per MUT second
Productivity 100.0% of total user, 102.3% of total elapsed
```
Looking at the ASM dump for the innermost tight culling loop reveals the problem, with 7.8.1 RC2 outputting as follows:
```
_n3nx:
movl 76(%esp),%ecx
_c3gf:
cmpl %ecx,%eax
jg _c3jB
_c3jC:
movl %eax,%edx
sarl $5,%edx
movl %ecx,76(%esp)
movl $1,%ecx
movl %ecx,280(%esp)
movl %eax,%ecx
andl $31,%ecx
movl %eax,292(%esp)
movl 280(%esp),%eax
shll %cl,%eax
xorl $-1,%eax
movl 64(%esp),%ecx
addl $8,%ecx
movl (%ecx,%edx,4),%ecx
andl %eax,%ecx
movl 64(%esp),%eax
addl $8,%eax
movl %ecx,(%eax,%edx,4)
movl 292(%esp),%eax
addl $3,%eax
jmp _n3nx
```
and 7.6.3 outputting as follows:
```
.text
.align 4,0x90
.long 1894
.long 32
s1GZ_info:
_c1YB:
cmpl 16(%ebp),%esi
jg _c1YE
movl %esi,%edx
sarl $5,%edx
movl $1,%eax
movl %esi,%ecx
andl $31,%ecx
shll %cl,%eax
xorl $-1,%eax
movl 12(%ebp),%ecx
movl 8(%ecx,%edx,4),%ecx
andl %eax,%ecx
movl 12(%ebp),%eax
movl %ecx,8(%eax,%edx,4)
addl 4(%ebp),%esi
jmp s1GZ_info
_c1YE:
movl 8(%ebp),%esi
addl $8,%ebp
jmp s1GB_info
```
The second code is clearly much more efficient: the only memory accesses are reads and writes of the sieve buffer array plus one register reload of the prime value added to the current position index, whereas the first (7.8.1 RC2) code has three register spills and five register re-loads, almost as if debugging were still turned on.
This bug was tested under Windows, but likely applies to other platforms, at least other 32-bit versions and possibly others as well.

Milestone: 8.2.1

---

https://gitlab.haskell.org/ghc/ghc/-/issues/8299
# Add richer data model address arithmetic: AddrDiff and AddrInt (ie d Int_ptr_diff and Int_ptr_size)
2019-07-07T18:45:40Z, reported by Carter Schonwald

Currently GHC's internals and code gen don't provide a strong distinction between Ints as data and Ints for pointer / address arithmetic. This is problematic in a number of ways:
1. We wind up having many portability issues around Int and pointer / address sizes.
1. adds some inessential complexity / problems to adding new architectures to ghc. eg x32 ABI which has 32bit pointers and 64bit ints somewhat breaks current assumptions in the GHC primops (because arrays are indexed by a byte offset, and the valid range for those is determined by the ABI pointer size!)
1. This Ints-and-Ptrs confusion also means we can't leverage the Integer support in the SIMD registers of most modern CPUs! If we could separate those two better, there's a lot of low-level optimization we could do for Int/Word data that we currently can't do!
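The conflation is visible right in `Foreign.Ptr`, where byte offsets and pointer differences are plain `Int` rather than dedicated `AddrDiff`/`AddrInt`-style types:

```haskell
import Foreign.Marshal.Array (withArray)
import Foreign.Ptr (Ptr, plusPtr, minusPtr)

main :: IO ()
main = withArray [0 .. 7 :: Int] $ \p -> do
  -- plusPtr :: Ptr a -> Int -> Ptr b; minusPtr :: Ptr a -> Ptr b -> Int.
  -- Both spell "address arithmetic" with ordinary Int, which is exactly
  -- the assumption that breaks on ABIs like x32.
  let q = p `plusPtr` 40
  print (q `minusPtr` p)  -- prints 40
```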
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 7.6.3 |
| Type | FeatureRequest |
| TypeOfFailure | OtherFailure |
| Priority | high |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
Milestone: 8.2.1

---

https://gitlab.haskell.org/ghc/ghc/-/issues/7602
# Threaded RTS performing badly on recent OS X (10.8?)
2019-07-08T22:08:56Z, reported by Simon Marlow

This ticket is to remind us about the following problem: OS X is now using llvm-gcc, and as a result GHC's garbage collector with -threaded is much slower than it should be (approx 30% slower overall runtime). Some results here: [http://www.haskell.org/pipermail/cvs-ghc/2011-July/063552.html](http://www.haskell.org/pipermail/cvs-ghc/2011-July/063552.html)
This is because the GC code relies on having fast access to thread-local state. It uses one of two methods: either a register variable (gcc only) or `__thread` variables (which aren't supported on OS X). To make things work on OS X, we use calls to `pthread_getspecific` instead (see #5634), which is quite slow, even though it compiles to inline assembly.
I don't recall which OS X / XCode versions are affected, maybe a Mac expert could fill in the details.
We have tried other fixes, such as passing around the thread-local state as extra arguments, but performance wasn't good. Ideally Apple will implement TLS in OS X at some point and we can start to use it.
A workaround is to install a real gcc (using homebrew?) and use that to compile GHC. Whoever builds the GHC distributions for OS X should probably do it that way, so everyone benefits.
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ---------------- |
| Version | 7.6.1 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Runtime System |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | Unknown/Multiple |
| Architecture | Unknown/Multiple |
</details>
Milestone: 8.2.1
Assignee: thoughtpolice

---

https://gitlab.haskell.org/ghc/ghc/-/issues/4144
# Exception: ToDo: hGetBuf - when using custom handle infrastructure
2019-07-07T19:00:24Z, reported by AntoineLatter

When trying to use the custom handle infrastructure, `hGetContents` fails like so:
```
*** Exception: ToDo: hGetBuf
```
This exception occurs twice in `GHC.IO.Handle.Text`.
The handle implementation I'm using is attached.
It would be neat if I could pass along some witness that my device implements `RawDevice`; then we could just run the same code that we use for file descriptors. But I'd be happy enough with a general solution, as I just plan to use this for testing.

8.2.1

https://gitlab.haskell.org/ghc/ghc/-/issues/3399
Generalize the type of Data.List.{deleteBy, deleteFirstsBy}
2023-08-07T18:31:08Z
iago

Better (more general) type signatures would be
```
deleteBy :: (b -> a -> Bool) -> b -> [a] -> [a]
deleteFirstsBy :: (b -> a -> Bool) -> [b] -> [a] -> [a]
```
*Example of why it is useful*:
```
deleteBy ((==) . fst) 1 [(1,'a'), (2, 'b')]
```
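The generalization can be sketched directly. Below is a standalone version under the proposed signatures; the primed names `deleteBy'`/`deleteFirstsBy'` are hypothetical (not part of `Data.List`), and `deleteFirstsBy'` here takes the keys to delete as its first list, following the signature above:

```haskell
import Data.List (foldl')

-- Hypothetical generalized variants; the real Data.List versions
-- require a homogeneous (a -> a -> Bool) predicate.
deleteBy' :: (b -> a -> Bool) -> b -> [a] -> [a]
deleteBy' _  _ []     = []
deleteBy' eq x (y:ys)
  | x `eq` y  = ys                      -- drop the first match only
  | otherwise = y : deleteBy' eq x ys

-- Delete the first occurrence matching each key, in turn.
deleteFirstsBy' :: (b -> a -> Bool) -> [b] -> [a] -> [a]
deleteFirstsBy' eq keys xs = foldl' (flip (deleteBy' eq)) xs keys

main :: IO ()
main = do
  print (deleteBy' (\k p -> k == fst p) (1 :: Int) [(1,'a'), (2,'b')])
  print (deleteFirstsBy' (\k p -> k == fst p) [1 :: Int, 3] [(1,'a'), (2,'b'), (3,'c')])
```

Both calls delete pairs by key without first building a `(key, value)` pair to compare against, which is exactly what the current `(a -> a -> Bool)` signature forces.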
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 6.10.4 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | libraries/base |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |
</details>
8.2.1

https://gitlab.haskell.org/ghc/ghc/-/issues/2496
Invalid Eq/Ord instances in Data.Version
2020-03-12T11:24:03Z
guest

(From Adrian Hey)
In Data.Version we have:
```
data Version =
  Version { versionBranch :: [Int]
          , versionTags   :: [String]
          }

instance Eq Version where
  v1 == v2 = versionBranch v1 == versionBranch v2
          && sort (versionTags v1) == sort (versionTags v2)
          -- tags may be in any order

instance Ord Version where
  v1 `compare` v2 = versionBranch v1 `compare` versionBranch v2
```
The "laws" for valid Eq/Ord instances were argued about recently, but the H98 report seems reasonably clear that `(==)` is supposed to test for equality and the `compare` method is supposed to define a total ordering. There is also an implied but not explicitly stated law that:
```
(x == y = True) <-> (x `compare` y = EQ)
```
and also this I guess..
```
(x == y = False) <-> ~(x `compare` y = EQ)
```
This law is implied by the Eq constraint on the Ord class (which seems to serve no purpose otherwise).
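The violation is easy to demonstrate. This standalone sketch just replicates the two instances quoted above and compares a pair of versions that differ only in their tags:

```haskell
import Data.List (sort)

-- Local replica of the Data.Version instances quoted above.
data Version = Version { versionBranch :: [Int]
                       , versionTags   :: [String] }

instance Eq Version where
  v1 == v2 = versionBranch v1 == versionBranch v2
          && sort (versionTags v1) == sort (versionTags v2)

instance Ord Version where
  v1 `compare` v2 = versionBranch v1 `compare` versionBranch v2

main :: IO ()
main = do
  let v1 = Version [1,0] ["alpha"]
      v2 = Version [1,0] ["beta"]
  print (v1 == v2)       -- False: the tags differ
  print (compare v1 v2)  -- EQ: compare ignores the tags entirely
```

So `v1 == v2` is `False` while `v1 `compare` v2` is `EQ`, breaking the implied law.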
See also:
[http://www.haskell.org/pipermail/haskell-prime/2008-March/002330.html](http://www.haskell.org/pipermail/haskell-prime/2008-March/002330.html)
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | -------------- |
| Version | 6.8.3 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | libraries/base |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | Multiple |
| Architecture | Multiple |
</details>
8.2.1

https://gitlab.haskell.org/ghc/ghc/-/issues/1851
"make install-strip" should work
2019-07-07T19:11:26Z
Ian Lynagh <igloo@earth.li>

With the bindists (not sure about a normal build tree) install-strip doesn't work:
```
$ make install-strip
make: *** No rule to make target `install-strip'. Stop.
$
```
It is defined in mk/install.mk, so it presumably is meant to. The blurb after running configure should mention it, too.
The target is described in the GNU coding standards: http://www.gnu.org/prep/standards/html_node/Standard-Targets.html
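For reference, the coding standards define `install-strip` as an install that strips executables on the way in, and the conventional one-rule implementation delegates to `install` with a stripping install program (a sketch of the standard idiom, not GHC's actual `mk/install.mk`):

```make
# GNU-style install-strip: rerun install with a stripping install program.
install-strip:
	$(MAKE) INSTALL_PROGRAM='$(INSTALL_PROGRAM) -s' install
```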
<details><summary>Trac metadata</summary>
| Trac field | Value |
| ---------------------- | ------------ |
| Version | 6.8.1 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | normal |
| Resolution | Unresolved |
| Component | Build System |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | Unknown |
| Architecture | Unknown |
</details>
8.2.1
Thomas Miedema