Commit efba41e2 authored by Gabor Greif

Another batch of typo fixes in non-code

parent 46af6835
@@ -1333,7 +1333,7 @@ strictenDmd (JD { sd = s, ud = u})
poke_u Abs = UHead
poke_u (Use _ u) = u
-- Deferring and peeeling
-- Deferring and peeling
type DmdShell -- Describes the "outer shell"
-- of a Demand
......
@@ -438,7 +438,7 @@ floated them out. Well, a clever optimiser might leave one there to
avoid a space leak, deliberately recomputing a thunk. Also (and this
really does happen occasionally) let-floating may make a function f smaller
so it can be inlined, so now (f True) may generate a local no-fv closure.
This actually happened during bootsrapping GHC itself, with f=mkRdrFunBind
This actually happened during bootstrapping GHC itself, with f=mkRdrFunBind
in TcGenDeriv.) -}
-----------------------------------------------------------------------------
......
@@ -554,7 +554,7 @@ mkCoreAppsDs :: SDoc -> CoreExpr -> [CoreExpr] -> CoreExpr
mkCoreAppsDs s fun args = foldl (mkCoreAppDs s) fun args
mkCastDs :: CoreExpr -> Coercion -> CoreExpr
-- We define a desugarer-specific verison of CoreUtils.mkCast,
-- We define a desugarer-specific version of CoreUtils.mkCast,
-- because in the immediate output of the desugarer, we can have
-- apparently-mis-matched coercions: E.g.
-- let a = b
......
@@ -891,7 +891,7 @@ codegen time. I found that binary sizes jumped by 6-10% when I
started to specialise INLINE functions (again, Note [Inline
specialisations] in Specialise).
So it seeems better to drop the binding for f_spec, and the rule
So it seems better to drop the binding for f_spec, and the rule
itself, if the auto-generated rule is the *only* reason that it is
being kept alive.
......
@@ -446,7 +446,7 @@ Note [How tuples work] See also Note [Known-key names] in PrelNames
* When looking up an OccName in the original-name cache
(IfaceEnv.lookupOrigNameCache), we spot the tuple OccName to make sure
we get the right wired-in name. This guy can't tell the difference
betweeen BoxedTuple and ConstraintTuple (same OccName!), so tuples
between BoxedTuple and ConstraintTuple (same OccName!), so tuples
are not serialised into interface files using OccNames at all.
-}
......
@@ -1499,7 +1499,7 @@ Then we want to rewrite (g (h x)) to (k x) and only then try f's rules. If
we match f's rules against the un-simplified RHS, it won't match. This
makes a particularly big difference when superclass selectors are involved:
op ($p1 ($p2 (df d)))
We want all this to unravel in one sweeep.
We want all this to unravel in one sweep.
Note [Avoid redundant simplification]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
......
@@ -75,7 +75,7 @@ a short-hand, not an algorithm.
(y:ys) -> E1[y,ys]
[] -> E2
@
Transformations of this kind are almost embarassingly simple. How could
Transformations of this kind are almost embarrassingly simple. How could
anyone write a paper about them?
\end{itemize}
This paper is about humble transformations, and how to implement them.
......
@@ -1664,7 +1664,7 @@ have the big, un-optimised of f (albeit specialised) captured in an
INLINABLE pragma for f_spec, we won't get that optimisation.
So we simply drop INLINABLE pragmas when specialising. It's not really
a complete solution; ignoring specalisation for now, INLINABLE functions
a complete solution; ignoring specialisation for now, INLINABLE functions
don't get properly strictness analysed, for example. But it works well
for examples involving specialisation, which is the dominant use of
INLINABLE. See Trac #4874.
......
@@ -1173,7 +1173,7 @@ binders the CPR property. Specifically
fw False x = 3
Of course there is the usual risk of re-boxing: we have 'x' available
boxed and unboxed, but we return the unboxed verison for the wrapper to
boxed and unboxed, but we return the unboxed version for the wrapper to
box. If the wrapper doesn't cancel with its caller, we'll end up
re-boxing something that we did have available in boxed form.
......
@@ -1299,7 +1299,7 @@ did, we would do this:
This loop goes on for ever and triggers the simpl_loop limit.
Solution: kick out the CDictCan which will have pend_sc = False,
becuase we've already added its superclasses. So we won't re-add
because we've already added its superclasses. So we won't re-add
them. If we forget the pend_sc flag, our cunning scheme for avoiding
generating superclasses repeatedly will fail.
......
-- !!! Simple test of dupChan
-- Embarassingly, the published version fails!
-- Embarrassingly, the published version fails!
module Main where
......
{-# LANGUAGE GADTs #-}
-- Triggered a desugaring bug in earlier verison
-- Triggered a desugaring bug in earlier version
module Shouldcompile where
......
@@ -37,7 +37,7 @@ main = do
-- this time we should get an integer with all bits set, that is -1
print (I# (magicInt1# `orI#` magicInt2#) == -1)
-- suprising as the first two tests may look, this is what we expect from
-- surprising as the first two tests may look, this is what we expect from
-- bitwise negation in two's complement enccoding
print (I# (notI# 0#) == -1)
print (I# (notI# -1#) == 0)
......
@@ -951,7 +951,7 @@ strange move.
\end{tabular}}|
\end{center}
|15.~N*e5|
but black can easly win back the pawn.
but black can easily win back the pawn.
\begin{center}|
{\bf\begin{tabular}{rp{50pt}p{50pt}}
15 & \ldots & Rac8?\\
......
@@ -147,7 +147,7 @@ Be7 16. d4 d6 {<sab>}) 13. Nxf6+ (13. Bb6 Qc8 14. Nxf6+ gxf6 15. d4 Bc7
16. Bxc7 Qxc7 {<saw> and the black king is exposed.}) 13... Qxf6 14. Bb6 {
?! strange move.} (14. Qd2 Be7 15. c3 a5 16. a3 bxa3 17. bxa3 {<saw> with
the plan ofs owning the `b' file.}) 14... Bc5 (14... Be7) 15. Bc7 (15. Nxe5 {
but black can easly win back the pawn.}) 15... Rac8? (15... d6 16. d4
but black can easily win back the pawn.}) 15... Rac8? (15... d6 16. d4
exd4 17. e5 Qe7 18. exd6 Nxd6 19. Bxd6 Qxd6 {<ab>}) 16. Bxe5 Qg6 17. d4 (
17. Bg3 Rfe8 18. Ne5 Qf6 19. Nxd7 Qxb2 20. Re1 {<aw> white should now
try use his center pawns to push home his advantage.}) 17... Bd6 18.
......
@@ -31,7 +31,7 @@ safeRecomp01:
# at moment we revert to 'no flags' so we recompile if previously
# flags were specified. An alternate design would be to assume the
# safe haskell flags from the old compile still apply but we
# go with the previous design as that's the least suprise to a user.
# go with the previous design as that's the least surprise to a user.
# See [SafeRecomp02] though.
'$(TEST_HC)' -c SafeRecomp01.hs
'$(TEST_HC)' --show-iface SafeRecomp01.hi | grep -E '^trusted:'
......
-- !!! THIS TEST IS FOR TYPE SYNONIMS AND FACTORISATION IN THEIR PRESENCE.
-- !!! THIS TEST IS FOR TYPE SYNONYMS AND FACTORISATION IN THEIR PRESENCE.
module Test where
data M a = A | B a (M a)