### TODOs

- Does Nick Frisby’s late λ-lifting alleviate problems when CPR’ing join points?
  - Need to see if his branch can be merged onto master.
- Paper write-up of CPR.
- Shouldn’t nested CPR help a lot with Complex-heavy code? Is there something in nofib?
- Which of the existing CPR tickets are solved right now?
- Try passing CPR information from the scrutinee to the pattern variables. For that: reverse the flow of analysis for complex scrutinees (for simple scrutinees we want the demand coming from the body; for complex ones this is not so important).
- Use ticky profiling to learn more about the effects of nested CPR.
- Look at DmdAnal-related \[SLPJ-Tickets\] and see which ones are affected by nested CPR.
- Why does nested CPR make some stuff so bad?
  - Possibly because of character reboxing. Try avoiding CPR’ing `C#` altogether!
- Do not destroy join points, or improve the code generator (see below).
- Can we make sure more stuff gets the `Converging` flag, e.g. after a `case` on an unboxed value? Should case binders get the `Converging` flag? What about pattern match variables in strict data constructors? Unboxed values? See below.
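
On the Complex-heavy code item above: loops like the following are the kind of place where nested CPR should pay off. This is a plain illustration (not compiler output): without nested CPR, every iteration returns a freshly allocated `Complex Double` containing two boxed `Double`s.

```haskell
module Main where

import Data.Complex (Complex ((:+)), magnitude)

-- One Mandelbrot-style step: without nested CPR, each call returns a
-- freshly boxed Complex with two boxed Doubles.
step :: Complex Double -> Complex Double -> Complex Double
step c z = z * z + c

-- Iterate the step n times from 0; the intermediate pairs are exactly
-- the allocations nested CPR would like to eliminate.
iterN :: Int -> Complex Double -> Complex Double
iterN n c = go n (0 :+ 0)
  where
    go 0 z = z
    go k z = go (k - 1) (step c z)

main :: IO ()
main = print (magnitude (iterN 3 (0.1 :+ 0.2)))
```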
### Degradation exploration and explanation

Here is a summary of the problems identified, and of the solution attempts.

Nested CPR is only sound if the nested values are known to converge for sure. The semantics is clear: if `f` has CPR `<...>m(tm(),)`, then in the body of `case f x of (a,b)`, when entering `a`, we are guaranteed termination.

What is the semantics of an outer `t`? Given `f` with CPR `<L>tm()` and `g` with CPR `<S>tm()`: does the latter even make sense? If so, should `f undefined` have CPR `m()` or `tm()`? Three possibilities:

1. The convergence information of a function is what holds if its strictness annotations are fulfilled: so `g x` has `tm()` if `x` has `t` (possibly because it has previously been evaluated by the caller), and otherwise `m()`. `f x` always has `m()` (presumably because `x` is _never_ entered when evaluating `f`).
1. The convergence information of a function is what holds always. This would in effect prevent `<S>tm()` from occurring.
1. The convergence information always holds, but special care is taken for unlifted types: `I#`, like any function expecting an unlifted parameter or free variable, would get `<S>tm()`. (For unlifted types, `<L>` and `<S>` are identical. One might turn that into a third mode `<#>`, but unless there is more use to that than clarification, we do not do that.)

For the implementation this implies that we need to be careful when `lub`’ing: if one branch is lazy (but not absent) in an argument or free variable, and the other branch is strict, then even if both branches claim to terminate, we need to remove the termination flag, as one branch established termination under a stronger hypothesis than the overall result. (Feels inelegant.)

Similar care is needed whenever one goes up in the lattice (so better always go up by using `lub`…).

In pictures: this is the lattice, which is no longer a simple product lattice:
```wiki
         ------- <L><L> -------
        /        |       \     \
       /         |     <L><L>t  \
   <S><L>        |              <L><S>
    |  \    \    |       |     /  |
    |   \  <S><L>t       |    /  <L><S>t
    |    \       |       |   /
     \    \      |       |  /
      \    ---- <S><S> ----
       \     |      \      |
        \    |    <S><S>t  /
         \   |            /
          ------- ⊥ ------
```
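
The rule above can be made concrete with a toy model of the lub (hypothetical types, not GHC’s `DmdType`): strictness and termination do not combine componentwise, because a termination claim established under a strict hypothesis must be dropped when the lub weakens that hypothesis.

```haskell
module Main where

-- Toy model of the non-product lattice sketched above (hypothetical
-- types; GHC's real demand types are richer).
data Str = Lazy | Strict deriving (Eq, Show)
data Term = MayDiverge | Terminates deriving (Eq, Show)

lubStr :: Str -> Str -> Str
lubStr Strict Strict = Strict
lubStr _      _      = Lazy

lubTerm :: Term -> Term -> Term
lubTerm Terminates Terminates = Terminates
lubTerm _          _          = MayDiverge

-- Not a simple product lattice: when the strictness hypotheses differ,
-- one branch's termination held under a stronger hypothesis than the
-- combined result, so the termination flag must be dropped.
lubDemand :: (Str, Term) -> (Str, Term) -> (Str, Term)
lubDemand (s1, t1) (s2, t2)
  | s1 /= s2  = (lubStr s1 s2, MayDiverge)
  | otherwise = (lubStr s1 s2, lubTerm t1 t2)

main :: IO ()
main = print (lubDemand (Strict, Terminates) (Lazy, Terminates))
```

Note how `<S>t ⊔ <L>t` comes out as `<L>` rather than `<L>t`, matching the prose above.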
When evaluating a demand type as a demand transformer, we need to compare the convergence flags of the argument expressions with the strictness demands, and possibly adjust the termination information. I believe this can be done using `deferAndUse` as before.

What about nested strictness annotations? For now, the hypothesis of the demand transformer only requires the arguments (and free variables) to be terminating; if there is a `<S(S)>`, then the result should not be marked as converging. Maybe this can be improved later, using nested CPR.

Clearly, 1. and 3. hold strictly more information than 2.: under variant 2, `<S>tm()` would not occur, while the other variants allow it. Also, under 2, `I#` would not be known to terminate for sure, as it is strict. This would destroy any hope for nested CPR for things like `(Int, Int)`.

Some implementation implications:

- There is no unit for `lubDmdType` any more. So for `case`, use `botDmdType` if there are no alternatives, and `foldr1` if there are multiple.
- Unlifted variables (e.g. `Int#`) are tricky. Everything is strict in them, so for an *unlifted* argument, `<L>t` implies `<S>t` and hence `<S>t ⊔ <L>t = <S>t`, and we really want to make use of that stronger equation. But when lub’ing, we no longer know whether this is the demand for an unlifted type. So instead, the demand type of `x :: Int#` itself is `{x ↦ <L>} t`, while `x :: Int` continues to have type `{x ↦ <S>} t`.
- It is important that functions (including primitive operations and constructors like `I#`) have a strict demand on their unlifted arguments. But it turned out to be easier to enforce this in the demand analyser: even if `f` claims to have a lazy demand on an argument of unlifted type, we make this demand strict before feeding it into the argument.
- **It is no longer safe to remove variables from the demand environment** if the demand on them is strict. There must be so many bugs lurking around...

Unfortunate example:
```wiki
g :: Int -> Int
g 1 = 0
g x = x
```
What should its signature be? We want `<S>t`, but we get `<S>`. Why? Because the branches have signatures `t` and `{x ↦ S} t`, so their lub is going to be `{x ↦ L}`. In a later step, the demand on `x` will be strict again, but that is not easily visible here.

Can we avoid this? Not if we want to keep using the strictness demand as the hypothesis for the termination information. The alternative would be to track the demand required separately from strictness. Then the lub of `t` and `{x ↦ required} t` would be `{x ↦ required} t`, and this case would work fine. But that would turn it into a completely separate analysis, it seems.

I worked on 1., but it turned out to be too complicated. Notes at [AdvancedConverges](nested-cpr/advanced-converges). So I’ll proceed with 3. now.
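
Concretely, the hope for `(Int, Int)` is a worker/wrapper split along these lines — a hand-written sketch with hypothetical names, not actual GHC output. The worker may only return the fields unboxed because both components are known to converge:

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
module Main where

import GHC.Exts (Int (I#), Int#, quotInt#, remInt#)

-- Wrapper: keeps the original boxed interface, reboxing the worker's
-- result (hypothetical name; GHC would call this the $w-worker split).
quotRem10 :: Int -> (Int, Int)
quotRem10 (I# n) = case quotRem10Worker n of
  (# q, r #) -> (I# q, I# r)

-- Sketch of a nested-CPR worker: the pair and both fields are unboxed.
-- This is only sound because both components surely converge; a
-- possibly-diverging component would be forced too early here.
quotRem10Worker :: Int# -> (# Int#, Int# #)
quotRem10Worker n = (# quotInt# n 10#, remInt# n 10# #)

main :: IO ()
main = print (quotRem10 137)
```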
### join points

CPR can kill join points; the following are attempts to mitigate that.
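
To make the problem concrete, a sketch (hypothetical code, not compiler output): in `f` below, both calls to `j` are tail calls, so `j` can compile to a local jump. `fCpr` mimics by hand what CPR’ing `j` does to the call sites: each call result is now scrutinised by a `case`, so the calls are no longer tail calls and `j` loses its join-point status.

```haskell
module Main where

-- j is a join point: both calls are in tail position, so the code
-- generator can compile them as jumps with no stack growth.
f :: Int -> Int -> (Int, Int)
f x y =
  let j v = (v, v + 1)
  in if x > 0 then j x else j y

-- Hand-written picture of f after CPR-ing j: the worker's result is
-- scrutinised and reboxed at every call site, so the calls to jWorker
-- are no longer in tail position.
fCpr :: Int -> Int -> (Int, Int)
fCpr x y =
  let jWorker v = (v, v + 1)  -- stands in for a worker returning (# Int#, Int# #)
  in if x > 0
       then case jWorker x of (a, b) -> (a, b)
       else case jWorker y of (a, b) -> (a, b)

main :: IO ()
main = print (f 3 0, fCpr 0 3)
```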
#### Common Context

Alternative: Detect join points during `dmdAnal` and make sure that their CPR in…
- Enabling CPR for sum types: (min -3.8%, mean -0.0%, max 1.7%) (slightly better than with Common Context)
- Enabling sum types inside nested CPR: TBD

Unfortunately, naive approaches are not possible: we need to know whether `j` is a join point not only for `let j = .. in ... j ..`, but also for expressions further out. Not nice.
#### Improvement to the code generator

It seems feasible to make the code generator generate better code for local functions that are no longer quite join points, by jumping, passing a continuation and a stack delta along with the live variables. To be investigated.
#### Late lambda lifting

Might also help. Need to see if Nick Frisby’s branch can be merged onto master. (But I like the code generator idea better.)
### Side tracks

- Should `runSTRep` be inlined (see [ticket:1600\#comment:34](https://gitlab.haskell.org//ghc/ghc/issues/1600))?
- Can we use `Converges` CPR information to eagerly evaluate thunks? Yes, and there is a small gain there: [\#8655](https://gitlab.haskell.org//ghc/ghc/issues/8655).
  - But why no allocation change? Understand this better!
- Can we statically and/or dynamically count the number of thunks, and the number of CBV’ed thunks?
- Can we use `Converges` in `exprOkForSpeculation`?
- Why is `cacheprof` not deterministic? (→ [\#8611](https://gitlab.haskell.org//ghc/ghc/issues/8611))
- What became of Simon’s better-ho-cardinality branch? See [better-ho-cardinality](nested-cpr/better-ho-cardinality).
- Try VTune to get better numbers.
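
On the eager-evaluation side track: the transformation in question is, roughly, turning a lazy `let` into an eager one when the analysis proves the right-hand side converges. Semantically it is the difference between these two definitions (illustrative only; the real change happens in Core, not in source):

```haskell
module Main where

-- Lazy: x is allocated as a thunk and only evaluated when demanded.
lazyLet :: Int -> Int
lazyLet n = let x = n + 1 in x * x

-- Eager (call-by-value): forcing x up front avoids the thunk, but is
-- only sound if `n + 1` is known to converge -- exactly what a
-- `Converges` flag would tell us.
eagerLet :: Int -> Int
eagerLet n = let x = n + 1 in x `seq` (x * x)

main :: IO ()
main = print (lazyLet 2, eagerLet 2)
```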