  • #8457

Closed
Opened Oct 18, 2013 by errge@trac-errge

-ffull-laziness does more harm than good

In this bug report I'd like to argue that -ffull-laziness shouldn't be turned on automatically by either -O or -O2, because it is dangerous and can cause serious memory leaks that are hard to debug or prevent. I'll also try to show that its optimization benefits are negligible. In fact, my benchmarks show that it's beneficial to turn it off even in cases where we don't hit a space leak.

We ran into this issue last week, but it has been reported several times before, e.g. #917 and #5262.

A typical example is the following:

main :: IO ()
main = task () >> task ()

task :: () -> IO ()
task () = printvalues [1..1000000 :: Int]

printvalues :: [Int] -> IO ()
printvalues (x:xs) = print x >> printvalues xs
printvalues [] = return ()
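Conceptually, full laziness notices that [1..1000000] does not depend on task's () argument and floats it to the top level, where it is shared between both calls. The program then behaves roughly like this hand-written equivalent (our sketch of the transformation, not actual GHC output; main = task () >> task () is unchanged):

```haskell
-- The floated list is now a top-level constant (a CAF). After the
-- first traversal the whole million-element list stays reachable,
-- so the second call finds it fully materialised: a space leak.
shared :: [Int]
shared = [1 .. 1000000]

task :: () -> IO ()
task () = printvalues shared

printvalues :: [Int] -> IO ()
printvalues (x:xs) = print x >> printvalues xs
printvalues []     = return ()
```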

We succeed with -O0, but fail with -O:

errge@curry:~/tmp $ ~/tmp/ghc/inplace/bin/ghc-stage2 -v0 -O0 -fforce-recomp lazy && ./lazy +RTS -t >/dev/null
<<ghc: 1620098744 bytes, 3117 GCs, 32265/42580 avg/max bytes residency (3 samples), 2M in use, 0.00 INIT (0.00 elapsed), 1.28 MUT (1.28 elapsed), 0.02 GC (0.02 elapsed) :ghc>>
errge@curry:~/tmp $ ~/tmp/ghc/inplace/bin/ghc-stage2 -v0 -O -fforce-recomp lazy && ./lazy +RTS -t >/dev/null
<<ghc: 1444098612 bytes, 2761 GCs, 3812497/13044272 avg/max bytes residency (7 samples), 28M in use, 0.00 INIT (0.00 elapsed), 1.02 MUT (1.03 elapsed), 0.12 GC (0.12 elapsed) :ghc>>

28M? What the leak!? Well, it's -ffull-laziness:

errge@curry:~/tmp $ ~/tmp/ghc/inplace/bin/ghc-stage2 -v0 -O -fno-full-laziness  -fforce-recomp lazy && ./lazy +RTS -t >/dev/null
<<ghc: 1484098612 bytes, 2835 GCs, 34812/42580 avg/max bytes residency (2 samples), 1M in use, 0.00 INIT (0.00 elapsed), 1.04 MUT (1.04 elapsed), 0.02 GC (0.02 elapsed) :ghc>>

We get constant space and the fastest run time too, since we save some cycles on GC.

Note that in this instance we are deliberately trying to disable sharing by using () as a fake argument to the function. Also note that this function could easily be a utility function in a larger code base or in a library, so it's impractical to say that you simply shouldn't call it twice "too close together".
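Until the default changes, code that relies on this idiom can at least opt out of the pass per module with an OPTIONS_GHC pragma (the definition below is just our illustration):

```haskell
{-# OPTIONS_GHC -fno-full-laziness #-}

-- With full laziness disabled for this module, the list below is
-- rebuilt on every call instead of being floated out and retained.
task :: () -> IO ()
task () = mapM_ print [1 .. 1000000 :: Int]
```

This keeps the fix local, but it only protects the module it appears in; nothing stops the same leak from being reintroduced in a caller compiled with the default flags.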

Quoting from the GHC user guide:

 -O2:

    Means: “Apply every non-dangerous optimisation, even if it means
    significantly longer compile times.”

    The avoided “dangerous” optimisations are those that can make
    runtime or space worse if you're unlucky. They are normally turned
    on or off individually.

    At the moment, -O2 is unlikely to produce better code than -O.

This seems to be false at the moment.

We decided to investigate this issue more broadly and wanted to know whether we could disable this optimization without too much pain. We came up with this benchmark plan:

  • benchmark GHC itself,
  • compile all stages with -O, but hack the stage1 compiler to emit -t statistics for every file compiled,
  • gather these statistics while compiling the libraries and the stage2 compiler.

For the second run we compile the stage1 compiler with -O -fno-full-laziness, but leave everything else in the environment unchanged.

Once we have both sets of results from compiling ~1600 files, we match them up and compute the (logarithmic) ratio of the CPU and memory differences between the two compilations; these ratios are the final results of our benchmark.

The results and the raw data can be found at https://github.com/errge/notlazy.

The overall compilation time dropped from 26:20 to 25:12, a 4% improvement. Inspecting the full matching shows that this overall result comes from small improvements spread all over the place.

The results plotted:

  • https://github.com/errge/notlazy/blob/master/cpu.png
  • https://github.com/errge/notlazy/blob/master/mem.png

The graphs show the logarithmic (100*log_10(new/orig)) ratio of the change in CPU and memory consumption. Negative values therefore mean that the new compilation method is cheaper (faster, or using less memory).
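For concreteness, the plotted quantity is just the following (the helper name is ours):

```haskell
-- 100 * log10(new/orig): 0 means no change, -30.1 means the new
-- build used half the resource, +30.1 means it used twice as much.
logRatio :: Double -> Double -> Double
logRatio new orig = 100 * logBase 10 (new / orig)
```

For example, a file whose memory use goes from 69M to 103M scores logRatio 103 69, roughly +17 on the memory graph.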

As the CPU graph shows, in most cases the difference is negligible (smaller, in fact, than what can be measured on small files, which is why there is a spike at 0). Overall we see a small CPU improvement; there are outliers in both directions, but drastic improvements outnumber drastic regressions.

On the memory graph the situation is much closer to zero. There is one big positive memory outlier: DsListComp.lhs. It uses 69M originally and 103M now, but it compiles in 2 seconds either way, and there are files in the source tree that require 400M to compile, so this is not an issue.

After all this, I'd like to hear other opinions about simply disabling this optimization in -O and -O2 and leaving it as an option that can be turned on when needed. My reasons once more:

  • it's unsafe,
  • it's hard to debug when you hit its issues,
  • the optimization doesn't seem to be very productive,
  • it's always easy to force sharing, but it's not easy to force copying.
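On the last point: when sharing is what you want, ordinary Haskell expresses it directly with a let binding, no flag required (our illustrative rewrite of the example above):

```haskell
-- Explicit sharing: bind the list once and reuse the binding.
-- There is no comparably direct way to demand the opposite,
-- recomputation, once the compiler has decided to share.
main :: IO ()
main =
  let xs = [1 .. 1000000 :: Int]  -- computed once, used by both calls
  in printvalues xs >> printvalues xs

printvalues :: [Int] -> IO ()
printvalues (x:xs) = print x >> printvalues xs
printvalues []     = return ()
```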

Apparently a Haskell programmer should be lazy, but never fully lazy.

Research done by Gergely Risko <errge> and Mihaly Barasz <klao>, confirmed on two different machines with no other running processes.

Trac metadata

Trac field     Value
Version        7.7
Type           Bug
TypeOfFailure  OtherFailure
Priority       high
Resolution     Unresolved
Component      Compiler
Milestone      8.0.1 (past due)

Reference: ghc/ghc#8457