
DmdAnal/Meta: Is strictness the right property to look for?

In #19917, we have seen how strictness analysis can be "too aggressive". Small, executable reproducer:

main = print $ loop "foo" (error "woops")
  where
    loop xs l = case xs of
      x:xs -> loop xs x :: String
      []   -> l `seq` loop [l] l

This program will crash with "woops" when compiled with optimisations, because loop is inferred to be strict in its l parameter. That is very surprising! Note that l doesn't even occur freely in the cons branch of loop. Nor would error "woops" ever be evaluated under plain call-by-need; instead we'd get an infinite loop.
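For intuition, here is a hedged sketch of what the "too aggressive" transformation amounts to. loopSeq is a hypothetical variant of loop with the seq that strictness-driven call-by-value effectively inserts (it is not actual GHC output); the try/evaluate harness merely demonstrates the crash without taking the whole program down:

```haskell
import Control.Exception (ErrorCall (..), evaluate, try)

-- Hypothetical model of the worker/wrapper effect: because `loop` is
-- inferred strict in `l`, the argument gets forced before the body runs.
loopSeq :: String -> Char -> String
loopSeq xs l = l `seq` case xs of
  x : xs' -> loopSeq xs' x
  []      -> loopSeq [l] l

main :: IO ()
main = do
  -- Under plain call-by-need the original `loop` would spin forever;
  -- the seq'd variant crashes immediately instead.
  r <- try (evaluate (loopSeq "foo" (error "woops"))) :: IO (Either ErrorCall String)
  case r of
    Left (ErrorCall msg) -> putStrLn ("crashed with: " ++ msg)
    Right _              -> putStrLn "no exception"
```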

On the other hand, loop's parameter l certainly does satisfy the strictness definition: Whenever l is bottom/diverges, loop xs l will, too, simply because loop always diverges.

That is to say: The above program points out that The Strictness Definition of Olde entirely neglects diverging program traces, simply because Strictness is a property of denotational semantics and all such program traces share the same denotation ⊥.

I think a more useful definition is

Strong Strictness. f is strongly strict if f x evaluates x after finitely many steps in the small-step semantics.

NB: For terminating programs/traces, strong strictness and strictness coincide. Only on diverging traces do the two notions differ.

loop above is strongly strict in its first but not in its second argument, as loop "foo" (error "woops") will loop forever without ever throwing that error "woops". It is still strongly strict in xs. Nice!

Unfortunately, I'm pretty sure that this stronger notion of strictness would regress performance all over the place. For example, \x -> error "blah" would not be strongly strict (in x, obviously). But I think it's good to describe the ideal model so that one day we might even implement it.
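The \x -> error "blah" example can be observed operationally. In the following sketch, the exception we catch tells us which thunk was forced: seeing "blah" rather than "boom" confirms that x is never evaluated, even though f ⊥ = ⊥ holds denotationally:

```haskell
import Control.Exception (ErrorCall (..), evaluate, try)

-- Classically strict (f ⊥ = ⊥, since the result is always ⊥),
-- but not strongly strict: no evaluation step ever touches x.
f :: Int -> Int
f _x = error "blah"

main :: IO ()
main = do
  -- If f forced its argument, we would see "boom"; we see "blah" instead.
  r <- try (evaluate (f (error "boom"))) :: IO (Either ErrorCall Int)
  case r of
    Left (ErrorCall msg) -> putStrLn msg
    Right n              -> print n
```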


Here is a bit of related work:

  1. Usage analysis as implemented in GHC actually works for diverging programs, too: We'd never say that loop above does not use its first argument, although loop diverges. Usage/Absence analysis has to account for diverging/infinite traces to be useful.
  2. What Strictness Analysis is to variables in lazy functional languages, Very Busy Expression Analysis is to expressions in an imperative setting. Very Busy Expression Analysis is a backwards analysis over a MUST powerset lattice (e.g., ⊥ is the set of all Exprs and lub is set intersection). I investigated how this analysis handles infinite loops. It turns out that an infinite loop is very busy in every expression, even in expressions that do not occur in the loop. Pretty much the same problem as ours!
  3. Then I recalled that the gen/kill formulation of post-dominance analysis is also a backwards analysis over a MUST powerset lattice and shares all the same problems wrt. infinite loops. And sure enough, there is an abundance of discussion and literature about post-dominance and infinite loops.
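To make the very-busy-expressions observation from (2) concrete, here is a toy backwards MUST analysis over a one-node CFG that loops on itself. The two-expression universe and the node are made up for illustration; the point is only that Kleene iteration from the MUST-lattice ⊥ (the full set) never shrinks:

```haskell
import qualified Data.Set as S

type Expr = String

-- A made-up universe of candidate expressions; neither occurs in the loop.
universe :: S.Set Expr
universe = S.fromList ["a+b", "c*d"]

-- One CFG node `while True: skip`, whose only successor is itself.
-- Transfer: VB(n) = gen(n) ∪ (⋂ over successors of VB(s) \ kill(n)),
-- with gen(n) = kill(n) = ∅ for this node.
transfer :: S.Set Expr -> S.Set Expr
transfer vbSucc = S.empty `S.union` (vbSucc `S.difference` S.empty)

-- Kleene iteration from ⊥, which for a MUST analysis is the full universe.
fixpoint :: S.Set Expr -> S.Set Expr
fixpoint x = let x' = transfer x in if x' == x then x else fixpoint x'

main :: IO ()
main = print (S.toList (fixpoint universe))
-- The infinite loop comes out "very busy" in every expression, even though
-- neither occurs in it -- the analogue of the strictness problem above.
```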

Implementation-wise, I think we can do the following during fixed-point iteration (I don't want to touch strictness of error just yet):

  1. We keep initialising strictness signatures of recursive functions such as loop with bottom (meaning an application of loop is strict in every var).
  2. We do one round of fixed-point iteration and find an overly optimistic signature that involves just the arguments and free variables that occur during any finite evaluation of loop. E.g., we'll get something like <1B><1B>b.
  3. Now we'll do another iteration, but we'll omit the b, replacing it with x (exnDiv, could have chosen topDiv too). NB: We'll keep the argument demands and demands on free vars we accumulated, because they actually occur in finitely many steps.
  4. Then onto the next iteration: From the x:xs branch, we'll see that we are strict in x and xs, but not in l (because we switched from b to x!). So when we lub together with the [] branch, we see {l:->L}.

And then we'll get <1B><L>x, which is perfect! Still strict in the first argument, no longer strict in the second, and through x we get to see that evaluation of loop will never terminate, hence dead-code elimination will keep working as before.
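The lub in step (4) can be modelled with a made-up three-point demand lattice (A absent, S strict, L lazy); this is deliberately much coarser than GHC's real Demand type, and only illustrates how {l :-> L} arises:

```haskell
-- A made-up three-point demand lattice; GHC's actual Demand type is richer.
data Demand = A  -- absent: not used at all on this path
            | S  -- strict: certainly evaluated on this path
            | L  -- lazy: may or may not be used
  deriving (Eq, Show)

-- Least upper bound across branches: agreeing branches keep their demand;
-- disagreeing ones (e.g. strict in one branch, absent in the other) go to L.
lub :: Demand -> Demand -> Demand
lub x y | x == y    = x
        | otherwise = L

main :: IO ()
main = do
  -- Demand on l: absent in the x:xs branch (after b was replaced by x),
  -- strict in the [] branch; the lub is L, i.e. {l :-> L}.
  print (lub A S)
  -- Demand on xs: strict in both branches, so it stays strict.
  print (lub S S)
```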

What happened in (3) is that we basically used b to figure out, in the first iteration, just the right set of FVs that loop has to be strict in at the recursive call. We could instead have done a special FV traversal that accounts for FVs in strictness signatures and initialised the set of occurring FVs to demand B, I suppose. Maybe we should do that instead... The important part is that, instead of saying that loop is strict in every variable during the first iteration, we say that it is strict in just the sensible set of FVs.

I put up an experimental implementation of this idea in !8983.

Edited by Sebastian Graf