Clarify semantics of `evaluate`/`seq#`
Forked from a discussion at #15226 (closed).
There are two aspects of the behavior of `evaluate` and its underlying primop `seq#` that aren't so clear:
- Should the strictness of `evaluate`, when executed, be visible to demand analysis?
- Should `evaluate` be considered to throw a precise exception?
Question 1 mainly affects programs like `f x = someSideEffect >> evaluate x`. If the strictness of `evaluate` is visible to demand analysis, then GHC may eagerly evaluate the argument of `f` at a call site to avoid producing a thunk, meaning `x` gets evaluated before `someSideEffect` is actually performed. Since `evaluate` is the only tool we provide for sequencing evaluation relative to side effects, I think this behavior would be very undesirable. A user who wants strictness can ask for it as easily as `evaluate $! x`. But unfortunately we rewrite `case x of x' { _ -> seq# x' s }` to `case x of x' { _ -> (# s, x' #) }`, so `evaluate $! x` is equivalent to `pure $! x` with current GHC, which is a nasty surprise. (This rewrite rule is also the reason we may sometimes evaluate `y` before `x` in `evaluate x >> evaluate y`, even though `evaluate` is currently lazy.)
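A minimal sketch of the intended ordering, using a thunk that throws when forced so we can observe *when* forcing happens (the names `f`, `log_`, and the `"boom"` message are illustrative, not from the discussion; compiled without optimization, the side effect is recorded before the argument is forced):

```haskell
import Control.Exception (ErrorCall (..), evaluate, try)
import Data.IORef (IORef, modifyIORef', newIORef, readIORef)

-- f should perform its side effect *before* forcing x.
f :: IORef [String] -> a -> IO a
f log_ x = modifyIORef' log_ ("sideEffect" :) >> evaluate x

main :: IO ()
main = do
  log_ <- newIORef []
  -- Pass a thunk that throws when forced; if demand analysis made f
  -- strict in its argument, the exception could fire before the log write.
  r <- try (f log_ (error "boom")) :: IO (Either ErrorCall ())
  events <- readIORef log_
  putStrLn $ case (events, r) of
    (["sideEffect"], Left (ErrorCall "boom")) -> "side effect ran first"
    _                                         -> "unexpected ordering"
```

The worry raised above is precisely that, with `evaluate`'s strictness exposed to demand analysis, an optimized build could force the argument at the call site and lose this ordering.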
Question 2 mainly affects programs like `a = evaluate err1 >> err2`. If `evaluate` is considered to throw a precise exception, then execution of `a` is lazy in `err2` and can only throw `err1`, while if `evaluate` is not considered to throw a precise exception, then execution of `a` is strict in `err2` and may throw either `err1` or `err2`. I don't feel strongly about this question.