One thing that I have noticed in the past is that it is not possible to embed an efficient floating point literal in one's program. For instance, you might write:
```haskell
f :: Double -> ...
g = ... f (1/0)
```
However, GHC will end up floating out the free expression 1/0 to a top-level CAF. This means that you must take a tag check every time you refer to the literal. Surely we can do better than this.
Either we should avoid floating out such simple expressions or we should introduce constant-folding for this case and teach C-- about non-finite floating-point literals.
Other literals:

- `1/0` -> infinity (this one), and similarly for negative infinity
- `0/0` -> NaN (one particular NaN literal should suffice), originally reported in #20379 (closed)
- `-0.0` -> negative zero; currently compiles to `negateFloat# 0.0#`, originally reported in #20380 (closed)
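The values these source expressions denote are easy to check with plain Haskell (nothing GHC-internal is assumed here):

```haskell
-- The IEEE754 values denoted by the expressions listed above.
main :: IO ()
main = do
  print (1 / 0 :: Double)                        -- Infinity
  print (-1 / 0 :: Double)                       -- -Infinity
  print (isNaN (0 / 0 :: Double))                -- True
  print (isNegativeZero (negate 0.0 :: Double))  -- True
```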
I think there's a bit of tension between source-level Rational literals and the compiler-internal support for IEEE754 single- and double-precision literals. Ultimately, I think we really should represent the latter as IEEE754 literals in Core and do the conversion (which might lose information) in desugaring.
Note that source-level Haskell2010 doesn't allow IEEE754-specific literals such as -0.0 or infinity (the former might work with -XNegativeLiterals), because they are not Rationals. So we can only "expose" them through constant-folding -1.0*0.0 and 1.0/0.0.
Since we only support Float and Double at the moment, and because the semantics of Float and Double strictly follow IEEE754, we might just as well (design 1):
Use the IEEE754 support of the host architecture and have LitFloat !Float, LitDouble !Double. I don't think we ever plan to be able to run GHC on architectures that lack IEEE754 single and double precision numbers, so that seems like an acceptable compromise.
By contrast (design 2), just turning LitFloat Rational into LitFloat (NaN | Inf | NegInf | (Sign, Rational)) with appropriate functions for constant folding would just be a stop-gap and bound to be a buggy replica of FP semantics. Strictly following IEEE754 in terms of representation of FP literals might also prevent regressions like #19569 (closed) in the future.
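To make design 2 concrete, here is a minimal sketch (all names invented, not GHC's actual Literal type) of why it is bound to be error-prone: every folding rule must replicate the IEEE754 special-case table by hand.

```haskell
-- Hypothetical sketch of design 2: special-case the non-finite values on
-- top of Rational. Names are invented for illustration only.
data Sign = Pos | Neg deriving (Eq, Show)

data FloatingLit
  = FL_NaN
  | FL_Inf
  | FL_NegInf
  | FL_Finite Sign Rational  -- explicit sign, so 0.0 and -0.0 differ
  deriving (Eq, Show)

-- Each constant-folding rule must then spell out the IEEE754 special
-- cases itself; multiplication alone already needs a sizeable table:
mulLit :: FloatingLit -> FloatingLit -> FloatingLit
mulLit FL_NaN _ = FL_NaN
mulLit _ FL_NaN = FL_NaN
mulLit (FL_Finite s1 r1) (FL_Finite s2 r2)
  = FL_Finite (if s1 == s2 then Pos else Neg) (abs r1 * abs r2)
mulLit _ _ = FL_NaN  -- placeholder: Inf * finite, Inf * 0, ... omitted
```

Getting every one of those omitted cases (and their sign behaviour) right is exactly the "buggy replica of FP semantics" risk.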
What do you think about these designs, @simonpj? Since this concerns constant-folding, this might also be of interest to @hsyl20.
I'm curious how GHCJS works around that. I don't think it matters too much, though: we wouldn't dare to compile GHC itself to JavaScript, right? (Because then we'd probably observe broken floating-point constant folding. But no less broken than today, I suppose.) If so, I'd rather try to compile it to WASM, which has proper support for single-precision floats.
My point is: I don't think we ever want GHC to support (host! not target, as in JS) architectures where we can't expect IEEE754 Float or Double implementations. And then we might as well use the IEEE754 implementation of the host architecture.
If the semantics of the Float type is precisely that of 32-bit IEEE754 floats (and likewise Double of 64-bit floats), then representing Float literals that way seems a robust solution to me; anything else has to emulate IEEE754 semantics, which is pretty tricky.
I suppose that we must use the correct precision for the target architecture, since that's where it will be executed. Compile time transformations must faithfully follow target (not host) precision and semantics.
I think that all our Floats are single-precision, 32-bit IEEE754 floats and our Doubles are 64-bit IEEE754 floats. The question is whether we want to maintain that stance for any future target architectures.
If I read https://www.haskell.org/onlinereport/haskell2010/haskellch6.html#dx13-135001 correctly, then it's theoretically possible for a Haskell2010 compiler to have type Float = Double. If GHC were to do that (as probably is the case for GHCJS), then the constant folding done on Floats by a bootstrapped GHC (e.g. when the host of stage2 is the target of stage1) might no longer respect the single-precision IEEE754 semantics. We might get more precision.
But that would probably still be better than what we currently do, because Rational features infinite precision. For example, we'd constant-fold 0.1+0.2::Double to 0.3, when IEEE754 semantics would say 0.30000000000000004 instead. In fact, I wonder why this isn't a problem for GHC today (I just tried it; it isn't).
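The difference is easy to observe from plain Haskell: exact Rational arithmetic followed by one final rounding disagrees with rounding after every IEEE754 operation.

```haskell
import Data.Ratio ((%))

main :: IO ()
main = do
  -- Folding through Rational: one exact sum, then a single rounding step.
  print (fromRational (1 % 10 + 2 % 10) :: Double)  -- 0.3
  -- IEEE754 semantics: each operand is rounded to Double first.
  print (0.1 + 0.2 :: Double)                       -- 0.30000000000000004
```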
And here I found the answer: During constant-folding, we call this function after each operator:
```haskell
-- When excess precision is not requested, cut down the precision of the
-- Rational value to that of Float/Double. We confuse host architecture
-- and target architecture here, but it's convenient (and wrong :-).
convFloating :: RuleOpts -> Literal -> Literal
convFloating env (LitFloat f) | not (roExcessRationalPrecision env)
  = LitFloat (toRational (fromRational f :: Float))
convFloating env (LitDouble d) | not (roExcessRationalPrecision env)
  = LitDouble (toRational (fromRational d :: Double))
convFloating _ l = l
```
So at least constant folding already goes through the Float/Double of the host architecture. I wonder when we would ever want roExcessRationalPrecision to be on... Anyway, I don't see much point in doing these conversions when we could just as well use Float and Double in Literal to begin with.
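The precision cut that convFloating performs can be sketched with plain conversions (no GHC internals assumed): round a Rational to the nearest Float, then back.

```haskell
-- Round a Rational through Float precision, as convFloating does for
-- LitFloat literals.
cutToFloatPrecision :: Rational -> Rational
cutToFloatPrecision r = toRational (fromRational r :: Float)

main :: IO ()
main = do
  let r = 1 / 10 :: Rational
  -- 1/10 is not a binary fraction, so the round-trip loses information:
  print (cutToFloatPrecision r == r)  -- False
```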
I'm working on #24331 and I face this issue too. In #24331 we want to bitcast any Word64 literal into a Double literal (constant folding). But currently we can't because we represent Double literals as Rational and not as Double, so casting bits representing NaN for example would yield wrong results.
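The problem is visible with the cast primitives from GHC.Float: the bit patterns for NaN or the infinities simply have no Rational counterpart, so a Rational-backed literal cannot hold the folded result.

```haskell
import GHC.Float (castWord64ToDouble)

main :: IO ()
main = do
  -- A quiet-NaN bit pattern: folding this bitcast would have to produce
  -- a NaN literal, which LitDouble Rational cannot represent.
  print (isNaN (castWord64ToDouble 0x7FF8000000000000))      -- True
  -- Likewise for the +Infinity bit pattern.
  print (isInfinite (castWord64ToDouble 0x7FF0000000000000)) -- True
```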
We don't make any promise with -fexcess-precision (which is off by default):
> When this option is given, intermediate floating point values can have a greater precision/range than the final type. Generally this is a good thing, but some programs may rely on the exact precision/range of Float/Double values and should not use this option for their compilation.
> Use the IEEE754 support of the host architecture and have LitFloat !Float, LitDouble !Double.
It would be much simpler than current !7800 which has LitFloatingR Rational. If people require more precision, they should use Rational in their Haskell code imo. Constant-folding floating-point stuff is difficult enough.
About GHCJS, it uses this SaneDouble type in its JS AST. It uses Double as a backing type for Float, but as long as constant folding and primops truncate results to respect float precision it works fine.
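That truncation step can be sketched with the plain GHC.Float conversions (this is an illustration, not GHCJS's actual code): a backend using Double as the backing type for Float must round each result back to single precision.

```haskell
import GHC.Float (double2Float, float2Double)

-- Round a Double-backed Float result to single precision, as such a
-- backend would after every operation.
toFloatPrecision :: Double -> Double
toFloatPrecision = float2Double . double2Float

main :: IO ()
main = do
  print (toFloatPrecision 0.1)  -- 0.10000000149011612
  print (0.1 :: Double)         -- 0.1
```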
@clyring Should I open a new MR to test design 1 or do you agree that !7800 should be changed to remove Rational support?
I don't think the LitFloatingR stuff is a significant source of complexity in !7800. If I remember correctly I added it because it was specifically requested in review and it was very easy to do so.
In any case, as far as #24331 goes you can for now just not rewrite (with mzero) bitcasts that would result in "not-sane" values like NaNs or negative zero or infinity.
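A minimal sketch of that guard, assuming a rule that returns Maybe (with Nothing playing the role of mzero); the function name is hypothetical:

```haskell
import Data.Word (Word64)
import GHC.Float (castWord64ToDouble)

-- Hypothetical folding rule for a Word64 -> Double bitcast: only rewrite
-- when the result is representable by a Rational-backed literal, i.e.
-- finite, not NaN, and not negative zero; otherwise decline to fold.
foldBitcastW64ToD :: Word64 -> Maybe Double
foldBitcastW64ToD w
  | isNaN d || isInfinite d || isNegativeZero d = Nothing  -- mzero
  | otherwise = Just d
  where d = castWord64ToDouble w
```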
> I don't think the LitFloatingR stuff is a significant source of complexity in !7800. If I remember correctly I added it because it was specifically requested in review and it was very easy to do so.
I'd have to look more closely. I was hoping to get rid of canonicalization, of some conversions, and probably of most of what -fexcess-precision does. I.e. a Float/Double in Core stays a Float/Double until codegen.
> In any case, as far as #24331 goes you can for now just not rewrite (with mzero) bitcasts that would result in "not-sane" values like NaNs or negative zero or infinity.