Floating point woes: different behavior on 32- vs 64-bit x86
I have the following snippet:
```haskell
x, y, r :: Double
x = -4.4
y = 2.4999999999999956
r = x * y
```
Using GHC 7.8.3 on a Mac, I get the following response from ghci:

```
*Main> decodeFloat r
(-6192449487634421,-49)
```
Using GHC 7.8.3 on a Linux machine, I get the following response:

```
*Main> decodeFloat r
(-6192449487634422,-49)
```
Note the off-by-one difference in the first component of the output: `decodeFloat r` returns a pair `(m, e)` with `r` equal to `m * 2^e`, so the two results differ by exactly one unit in the last place (ulp).
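For concreteness, here is a small sketch (nothing beyond the Prelude) that reconstructs both reported results with `encodeFloat` and shows that they differ by exactly 2^(-49), i.e. one ulp at this magnitude:

```haskell
-- Reconstruct both reported results from their (significand, exponent) pairs.
macResult, linuxResult :: Double
macResult   = encodeFloat (-6192449487634421) (-49)
linuxResult = encodeFloat (-6192449487634422) (-49)

main :: IO ()
main = do
  print macResult
  print linuxResult
  -- The difference is exactly 2^(-49).
  print (macResult - linuxResult)
```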
I'm not 100% sure which one is actually correct, but the point is that these are IEEE floating-point numbers running on the same architecture (Intel x86), and thus the same product should decode in precisely the same way.
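One hardware-independent way to settle which result is correctly rounded is to go through exact `Rational` arithmetic. This is a minimal sketch, relying on the fact that `toRational` is exact for a `Double` and assuming GHC's `fromRational` rounds the exact product to the nearest `Double`:

```haskell
x, y :: Double
x = -4.4
y = 2.4999999999999956

-- toRational converts each Double exactly; the multiplication is then
-- performed with unbounded precision, and fromRational rounds the exact
-- product back to the nearest Double.
correctlyRounded :: Double
correctlyRounded = fromRational (toRational x * toRational y)

main :: IO ()
main = print (decodeFloat correctlyRounded)
```

Whichever of the two outputs this agrees with should be the correctly rounded one; the other machine is presumably computing the product at a different intermediate precision.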
While I observed this with 7.8.3, I don't think this is a new regression; I suspect it will show up in older versions as well.
Also, for full disclosure: I ran the Mac version natively, but the Linux version on top of a VirtualBox image. I very much doubt that makes a difference by itself, but there might be 32-/64-bit concerns. So, if someone can validate the Linux output on a true 64-bit Linux machine, that would really help track down the issue further.
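In case it helps anyone validating, here is a small self-contained test (assuming base >= 4.7, which ships with GHC 7.8) that reports the platform word size alongside the decoded product:

```haskell
import Data.Bits (finiteBitSize)

main :: IO ()
main = do
  -- Prints 32 or 64, depending on the platform's Int width.
  putStrLn ("Int width: " ++ show (finiteBitSize (0 :: Int)) ++ " bits")
  -- Decode the product of the two constants on this machine.
  print (decodeFloat ((-4.4) * 2.4999999999999956 :: Double))
```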
Trac metadata
| Trac field | Value |
|---|---|
| Version | 7.8.3 |
| Type | Bug |
| TypeOfFailure | OtherFailure |
| Priority | high |
| Resolution | Unresolved |
| Component | Compiler |
| Test case | |
| Differential revisions | |
| BlockedBy | |
| Related | |
| Blocking | |
| CC | |
| Operating system | |
| Architecture | |