- 12 Jan, 2011 1 commit
-
-
simonpj@microsoft.com authored
This patch embodies many, many changes to the constraint solver, which make it simpler, more robust, and more beautiful. But it has taken me ages to get right. The forcing issue was some obscure programs involving recursive dictionaries, but these eventually led to a massive refactoring sweep.

Main changes are:

* No more "frozen errors" in the monad. Instead "insoluble constraints" are now part of the WantedConstraints type.
* The WantedConstraints type is a product of bags, instead of (as before) a bag of sums. This eliminates a good deal of tagging and untagging.
* This same WantedConstraints data type is used
    - as the way that constraints are gathered
    - as a field of an implication constraint
    - as both argument and result of solveWanted
    - as the argument to reportUnsolved
* We do not generate any evidence for Derived constraints. They are purely there to allow "improvement" by unifying unification variables.
* In consequence, nothing is ever *rewritten* by a Derived constraint. This removes, by construction, all the horrible potential recursive-dictionary loops that were making us tear our hair out. No more isGoodRecEv search either. Hurrah!
* We add the superclass Derived constraints during canonicalisation, after checking for duplicates. So fewer superclass constraints are generated than before.
* Skolem tc-tyvars no longer carry SkolemInfo. Instead, the SkolemInfo lives in the GivenLoc of the Implication, where it can be tidied, zonked, and substituted nicely. This alone is a major improvement.
* Tidying is improved, so that we tend to get t1, t2, t3 rather than t1, t11, t111, etc. Moreover, unification variables are always printed with a digit (thus a0, a1, etc), so that plain 'a' is available for a skolem arising from a type signature etc. In this way:
    (a) we quietly say which variables are unification variables, for those who know and care;
    (b) types tend to get printed as the user expects. If he writes
            f :: a -> a
            f = ...blah...
        then types involving 'a' get printed with 'a', rather than some tidied variant.
* There are significant improvements in error messages, notably in the "Cannot deduce X from Y" messages.
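To illustrate the "product of bags" point above, here is a minimal sketch; these are not GHC's actual definitions -- the constructor and field names, and the use of lists in place of GHC's Bag type, are invented for this note.

    -- Sketch only: illustrative stand-ins, not GHC's real types.
    module WantedSketch where

    data SomeCt = SomeCt String      -- stand-in for a real constraint
    data Implic = Implic NewWanteds  -- implications carry nested wanteds

    -- Before: one bag whose elements are tagged with their flavour
    -- ("bag of sums"), so every traversal must tag and untag.
    data FlavouredCt = Wanted SomeCt | Derived SomeCt | Insoluble SomeCt
    type OldWanteds  = [FlavouredCt]

    -- After: one bag per flavour ("product of bags"); insoluble
    -- constraints live here too, replacing the monad's "frozen errors".
    data NewWanteds = NewWanteds
      { wc_simple    :: [SomeCt]   -- ordinary wanted constraints
      , wc_implic    :: [Implic]   -- implication constraints
      , wc_insoluble :: [SomeCt]   -- definitely-unsolvable constraints
      }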
-
- 13 Dec, 2010 1 commit
-
-
simonpj@microsoft.com authored
This patch finally deals with the super-delicate question of superclasses in possibly-recursive dictionaries. The key idea is the DFun Superclass Invariant (see TcInstDcls):

    In the body of a DFun, every superclass argument to the returned dictionary is either
      * one of the arguments of the DFun, or
      * constant, bound at top level

To establish the invariant, we add new "silent" superclass argument(s) to each dfun, so that the dfun does not do superclass selection internally. There's a bit of hoo-ha to make sure that we don't print those silent arguments in error messages; a knock-on effect was a change in interface-file format.

A second change: instead of the complex and fragile "self dictionary binding" in TcInstDcls and TcClassDcl, we now use the same mechanism as for existential pattern bindings. See Note [Subtle interaction of recursion and overlap] in TcInstDcls and Note [Binding when looking up instances] in InstEnv.

Main notes are here:
  * Note [Silent Superclass Arguments] in TcInstDcls, including the DFun Superclass Invariant

Main code changes are:
  * The code for MkId.mkDictFunId and mkDictFunTy
  * DFunUnfoldings get a little more complicated; their arguments are a new type DFunArg (in CoreSyn)
  * No "self" argument in tcInstanceMethod
  * No special tcSimplifySuperClasses
  * No "dependents" argument to EvDFunApp

IMPORTANT: it turns out that it's quite tricky to generate the right DFunUnfolding for a specialised dfun when you use SPECIALISE INSTANCE. For now I've just commented it out (in DsBinds), but that'll lose some optimisation, and I need to get back to this.
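As a rough illustration of the invariant, here is a hand-rolled dictionary encoding in ordinary Haskell; it is not the Core GHC generates, and the record and function names are invented.

    -- Sketch only: explicit dictionary records standing in for class
    -- dictionaries, to show the shape of a dfun with a "silent" argument.
    module SilentSketch where

    data EqD  a = EqD  { eqD :: a -> a -> Bool }
    data OrdD a = OrdD { ordEqSuper :: EqD a              -- superclass field
                       , cmpD       :: a -> a -> Ordering }

    -- "dfun" for   instance Ord a => Ord [a]
    -- Its first argument is the silent superclass dictionary Eq [a]:
    -- the superclass field of the returned dictionary is a dfun
    -- *argument*, so the dfun never does superclass selection itself.
    dfunOrdList :: EqD [a] -> OrdD a -> OrdD [a]
    dfunOrdList silentEqList dOrd =
      OrdD { ordEqSuper = silentEqList
           , cmpD       = cmpList (cmpD dOrd) }

    cmpList :: (a -> a -> Ordering) -> [a] -> [a] -> Ordering
    cmpList _ []     []     = EQ
    cmpList _ []     (_:_)  = LT
    cmpList _ (_:_)  []     = GT
    cmpList c (x:xs) (y:ys) = case c x y of
                                EQ -> cmpList c xs ys
                                o  -> o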
-
- 12 Nov, 2010 1 commit
-
-
simonpj@microsoft.com authored
Regression testing and user feedback for GHC 7.0 taught us a lot. This patch fixes numerous small bugs, and some major ones (e.g. Trac #4484, #4492), and improves type error messages. The main changes are:

* Entirely remove the "skolem equivalence class" stuff; a very useful simplification
* Instead, when flattening "wanted" constraints we generate unification variables (not flatten-skolems) for the flattened type-function application
* We then need a fixup pass at the end, TcSimplify.solveCTyFunEqs, which resolves any residual equalities of the form F xi ~ alpha
* When we come across a definite failure (e.g. Int ~ [a]), we now defer reporting the error until the end, in case we learn more about 'a'. That is particularly important for occurs-check errors. These are called "frozen" type errors.
* Other improvements in error message generation
* Better tracing messages
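For a feel of the kind of constraint involved, here is an invented example program; type-checking the body gives rise to an equality between a type-function application and the result type, and with this patch the flattened application is represented by a unification variable rather than a flatten-skolem, any leftovers being fixed up by solveCTyFunEqs.

    {-# LANGUAGE TypeFamilies #-}
    module FlattenSketch where

    type family F a
    type instance F Int = Bool

    -- The wanted constraint here is roughly  Bool ~ F Int; flattening the
    -- type-function application yields a fresh unification variable alpha
    -- together with a residual equality  F Int ~ alpha.
    g :: Int -> F Int
    g n = n > 0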
-
- 22 Oct, 2010 1 commit
-
-
simonpj@microsoft.com authored
There are two main changes:

* New LANGUAGE option RebindableSyntax, which implies NoImplicitPrelude
* if-then-else becomes rebindable, with function name "ifThenElse" (but case expressions are unaffected)

Thanks to Sam Anklesaria for doing most of the work here.
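A small example of the rebindable if-then-else; only the extension name and the ifThenElse function name come from this patch, while the Tri type and the function bodies are invented for illustration.

    {-# LANGUAGE RebindableSyntax #-}
    module IfThenElseSketch where

    import Prelude (String)

    -- RebindableSyntax implies NoImplicitPrelude; if/then/else now
    -- desugars to whatever 'ifThenElse' is in scope, so the condition
    -- need not be of type Bool.
    data Tri = Yes | No | Unknown

    ifThenElse :: Tri -> a -> a -> a
    ifThenElse Yes     t _ = t
    ifThenElse No      _ e = e
    ifThenElse Unknown _ e = e   -- arbitrary choice for this sketch

    describe :: Tri -> String
    describe t = if t then "yes-ish" else "no-ish"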
-
- 13 Oct, 2010 1 commit
-
-
benl@ouroborus.net authored
-
- 18 Sep, 2010 1 commit
-
-
Ian Lynagh authored
and remove the temporary DOpt class workaround.
-
- 15 Sep, 2010 1 commit
-
-
simonpj@microsoft.com authored
This resulted in an infinite loop in applyTypeToArgs, in syb
-
- 13 Sep, 2010 1 commit
-
-
simonpj@microsoft.com authored
This major patch implements the new OutsideIn constraint-solving algorithm in the typechecker, following our JFP paper "Modular type inference with local assumptions". Done with major help from Dimitrios Vytiniotis and Brent Yorgey.
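A tiny invented example of the "local assumptions" the paper's title refers to: matching on a GADT constructor brings a local equality into scope, and the constraints arising in that branch are solved under it as an implication.

    {-# LANGUAGE GADTs #-}
    module OutsideInSketch where

    data IsInt a where
      IsInt :: IsInt Int

    -- Inside the match, the local assumption  a ~ Int  is available, so
    -- the wanted  Num a  from (+) is discharged under that implication.
    addOne :: IsInt a -> a -> a
    addOne IsInt x = x + 1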
-
- 29 Oct, 2009 1 commit
-
-
simonpj@microsoft.com authored
This patch has been a long time in gestation and has, as a result, accumulated some extra bits and bobs that are only loosely related. I separated the bits that are easy to split off, but the rest comes as one big patch, I'm afraid. Note that:

* It comes together with a patch to the 'base' library
* Interface file formats change slightly, so you need to recompile all libraries

The patch is mainly a giant tidy-up, driven in part by the particular stresses of the Data Parallel Haskell project. I don't expect a big performance win for random programs. Still, here are the nofib results, relative to the state of affairs without the patch:

        Program           Size    Allocs   Runtime   Elapsed
        --------------------------------------------------------------------
        Min              -12.7%   -14.5%    -17.5%    -17.8%
        Max               +4.7%   +10.9%     +9.1%     +8.4%
        Geometric Mean    +0.9%    -0.1%     -5.6%     -7.3%

The +10.9% allocation outlier is rewrite, which happens to have a very delicate optimisation opportunity involving an interaction of CSE and inlining (see nofib/Simon-nofib-notes). The fact that the 'before' case found the optimisation is somewhat accidental. Runtimes seem to go down, but I never know whether to really trust this number. Binary sizes wobble a bit, but nothing drastic.

The main ideas are as follows.

InlineRules
~~~~~~~~~~~
When you say
        {-# INLINE f #-}
        f x = <rhs>
you intend that calls (f e) are replaced by <rhs>[e/x]. So we should capture (\x.<rhs>) in the Unfolding of 'f', and never meddle with it. Meanwhile, we can optimise <rhs> to our heart's content, leaving the original unfolding intact in the Unfolding of 'f'. So the representation of an Unfolding has changed quite a bit (see CoreSyn). An INLINE pragma gives rise to an InlineRule unfolding.

Moreover, it's only used when 'f' is applied to the specified number of arguments; that is, the number of arguments on the LHS of the '=' sign in the original source definition. For example, (.) is now defined in the libraries like this
        {-# INLINE (.) #-}
        (.) f g = \x -> f (g x)
so that it'll inline when applied to two arguments. If 'x' appeared on the left, thus
        (.) f g x = f (g x)
it'd only inline when applied to three arguments. This slightly-experimental change was requested by Roman, but it seems to make sense.

Other associated changes:

* Moving the deck chairs in DsBinds, which processes the INLINE pragmas
* In the old system an INLINE pragma made the RHS look like
        (Note InlineMe <rhs>)
  The Note switched off optimisation in <rhs>. But it was quite fragile in corner cases. The new system is more robust, I believe. In any case, the InlineMe note has disappeared.
* The workerInfo of an Id has also been combined into its Unfolding, so it's no longer a separate field of the IdInfo.
* Many changes in CoreUnfold, esp in callSiteInline, which is the critical function that decides which function to inline. Lots of comments added!
* exprIsConApp_maybe has moved to CoreUnfold, since it's so strongly associated with "does this expression unfold to a constructor application". It can now do some limited beta reduction too, which Roman found was important.

Instance declarations
~~~~~~~~~~~~~~~~~~~~~
It's always been tricky to get the dfuns generated from instance declarations to work out well. This is particularly important in the Data Parallel Haskell project, and I'm now on my fourth attempt, more or less. There is a detailed description in TcInstDcls, particularly in Note [How instance declarations are translated].
Roughly speaking we now generate a top-level helper function for every method definition in an instance declaration, so that the dfun takes a particularly stylised form:
        dfun a d1 d2 = MkD (op1 a d1 d2) (op2 a d1 d2) ...etc...
In fact, it's *so* stylised that we never need to unfold a dfun. Instead ClassOps have a special rewrite rule that allows us to short-cut dictionary selection. Suppose
        dfun :: Ord a -> Ord [a]
        d    :: Ord a
Then
        compare (dfun a d)  -->  compare_list a d
in one rewrite, without first inlining the 'compare' selector and the body of the dfun.

To support this
  a) ClassOps have a BuiltInRule (see MkId.dictSelRule)
  b) DFuns have a special form of unfolding (CoreSyn.DFunUnfolding) which is exploited in CoreUnfold.exprIsConApp_maybe

Implementing all this required a root-and-branch rework of TcInstDcls and bits of TcClassDcl.

Default methods
~~~~~~~~~~~~~~~
If you give an INLINE pragma to a default method, it should be just as if you'd written out that code in each instance declaration, including the INLINE pragma. I think that it now *is* so. As a result, library code can be simpler; less duplication.

The CONLIKE pragma
~~~~~~~~~~~~~~~~~~
In the DPH project, Roman found cases where he had
        p n k = let x = replicate n k
                in ...(f x)...(g x)....
        {-# RULE f (replicate x) = f_rep x #-}
Normally the RULE would not fire, because doing so involves (in effect) duplicating the redex (replicate n k). A new experimental modifier to the INLINE pragma, {-# INLINE CONLIKE replicate #-}, allows you to tell GHC to be prepared to duplicate a call of this function if it allows a RULE to fire. See Note [CONLIKE pragma] in BasicTypes. (A self-contained sketch of this pattern appears after the nofib table at the end of this entry.)

Join points
~~~~~~~~~~~
See Note [Case binders and join points] in Simplify

Other refactoring
~~~~~~~~~~~~~~~~~
* I moved endPass from CoreLint to CoreMonad, with associated jigglings
* Better pretty-printing of Core
* The top-level RULES (ones that are not rules for locally-defined things) are now substituted on every simplifier iteration. I'm not sure how we got away without doing this before. This entails a bit more plumbing in SimplCore.
* The necessary stuff to serialise and deserialise the new info across interface files.
* Something about bottoming floats in SetLevels; see Note [Bottoming floats]
* substUnfolding has moved from SimplEnv to CoreSubst, where it belongs

--------------------------------------------------------------------------------
        Program           Size    Allocs   Runtime   Elapsed
--------------------------------------------------------------------------------
        anna              +2.4%    -0.5%     0.16     0.17
        ansi              +2.6%    -0.1%     0.00     0.00
        atom              -3.8%    -0.0%    -1.0%    -2.5%
        awards            +3.0%    +0.7%     0.00     0.00
        banner            +3.3%    -0.0%     0.00     0.00
        bernouilli        +2.7%    +0.0%    -4.6%    -6.9%
        boyer             +2.6%    +0.0%     0.06     0.07
        boyer2            +4.4%    +0.2%     0.01     0.01
        bspt              +3.2%    +9.6%     0.02     0.02
        cacheprof         +1.4%    -1.0%   -12.2%   -13.6%
        calendar          +2.7%    -1.7%     0.00     0.00
        cichelli          +3.7%    -0.0%     0.13     0.14
        circsim           +3.3%    +0.0%    -2.3%    -9.9%
        clausify          +2.7%    +0.0%     0.05     0.06
        comp_lab_zift     +2.6%    -0.3%    -7.2%    -7.9%
        compress          +3.3%    +0.0%    -8.5%    -9.6%
        compress2         +3.6%    +0.0%   -15.1%   -17.8%
        constraints       +2.7%    -0.6%   -10.0%   -10.7%
        cryptarithm1      +4.5%    +0.0%    -4.7%    -5.7%
        cryptarithm2      +4.3%   -14.5%     0.02     0.02
        cse               +4.4%    -0.0%     0.00     0.00
        eliza             +2.8%    -0.1%     0.00     0.00
        event             +2.6%    -0.0%    -4.9%    -4.4%
        exp3_8            +2.8%    +0.0%    -4.5%    -9.5%
        expert            +2.7%    +0.3%     0.00     0.00
        fem               -2.0%    +0.6%     0.04     0.04
        fft               -6.0%    +1.8%     0.05     0.06
        fft2              -4.8%    +2.7%     0.13     0.14
        fibheaps          +2.6%    -0.6%     0.05     0.05
        fish              +4.1%    +0.0%     0.03     0.04
        fluid             -2.1%    -0.2%     0.01     0.01
        fulsom            -4.8%    +9.2%    +9.1%    +8.4%
        gamteb            -7.1%    -1.3%     0.10     0.11
        gcd               +2.7%    +0.0%     0.05     0.05
        gen_regexps       +3.9%    -0.0%     0.00     0.00
        genfft            +2.7%    -0.1%     0.05     0.06
        gg                -2.7%    -0.1%     0.02     0.02
        grep              +3.2%    -0.0%     0.00     0.00
        hidden            -0.5%    +0.0%   -11.9%   -13.3%
        hpg               -3.0%    -1.8%    +0.0%    -2.4%
        ida               +2.6%    -1.2%     0.17    -9.0%
        infer             +1.7%    -0.8%     0.08     0.09
        integer           +2.5%    -0.0%    -2.6%    -2.2%
        integrate         -5.0%    +0.0%    -1.3%    -2.9%
        knights           +4.3%    -1.5%     0.01     0.01
        lcss              +2.5%    -0.1%    -7.5%    -9.4%
        life              +4.2%    +0.0%    -3.1%    -3.3%
        lift              +2.4%    -3.2%     0.00     0.00
        listcompr         +4.0%    -1.6%     0.16     0.17
        listcopy          +4.0%    -1.4%     0.17     0.18
        maillist          +4.1%    +0.1%     0.09     0.14
        mandel            +2.9%    +0.0%     0.11     0.12
        mandel2           +4.7%    +0.0%     0.01     0.01
        minimax           +3.8%    -0.0%     0.00     0.00
        mkhprog           +3.2%    -4.2%     0.00     0.00
        multiplier        +2.5%    -0.4%    +0.7%    -1.3%
        nucleic2          -9.3%    +0.0%     0.10     0.10
        para              +2.9%    +0.1%    -0.7%    -1.2%
        paraffins        -10.4%    +0.0%     0.20    -1.9%
        parser            +3.1%    -0.0%     0.05     0.05
        parstof           +1.9%    -0.0%     0.00     0.01
        pic               -2.8%    -0.8%     0.01     0.02
        power             +2.1%    +0.1%    -8.5%    -9.0%
        pretty           -12.7%    +0.1%     0.00     0.00
        primes            +2.8%    +0.0%     0.11     0.11
        primetest         +2.5%    -0.0%    -2.1%    -3.1%
        prolog            +3.2%    -7.2%     0.00     0.00
        puzzle            +4.1%    +0.0%    -3.5%    -8.0%
        queens            +2.8%    +0.0%     0.03     0.03
        reptile           +2.2%    -2.2%     0.02     0.02
        rewrite           +3.1%   +10.9%     0.03     0.03
        rfib              -5.2%    +0.2%     0.03     0.03
        rsa               +2.6%    +0.0%     0.05     0.06
        scc               +4.6%    +0.4%     0.00     0.00
        sched             +2.7%    +0.1%     0.03     0.03
        scs               -2.6%    -0.9%    -9.6%   -11.6%
        simple            -4.0%    +0.4%   -14.6%   -14.9%
        solid             -5.6%    -0.6%    -9.3%   -14.3%
        sorting           +3.8%    +0.0%     0.00     0.00
        sphere            -3.6%    +8.5%     0.15     0.16
        symalg            -1.3%    +0.2%     0.03     0.03
        tak               +2.7%    +0.0%     0.02     0.02
        transform         +2.0%    -2.9%    -8.0%    -8.8%
        treejoin          +3.1%    +0.0%   -17.5%   -17.8%
        typecheck         +2.9%    -0.3%    -4.6%    -6.6%
        veritas           +3.9%    -0.3%     0.00     0.00
        wang              -6.2%    +0.0%     0.18    -9.8%
        wave4main        -10.3%    +2.6%    -2.1%    -2.3%
        wheel-sieve1      +2.7%    -0.0%    +0.3%    -0.6%
        wheel-sieve2      +2.7%    +0.0%    -3.7%    -7.5%
        x2n1              -4.1%    +0.1%     0.03     0.04
--------------------------------------------------------------------------------
        Min              -12.7%   -14.5%    -17.5%    -17.8%
        Max               +4.7%   +10.9%     +9.1%     +8.4%
        Geometric Mean    +0.9%    -0.1%     -5.6%     -7.3%
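Returning to the CONLIKE pragma described above, here is a self-contained sketch of the pattern it is meant for; the names replicateV, sumV and sumRep are invented, and only the pragma syntax comes from the patch.

    module ConlikeSketch where

    -- Cheap, constructor-like producer; CONLIKE tells GHC it may
    -- duplicate a saturated call if that lets a RULE fire.
    replicateV :: Int -> a -> [a]
    replicateV n k = replicate n k
    {-# INLINE CONLIKE replicateV #-}

    sumV :: Num a => [a] -> a
    sumV = sum
    {-# NOINLINE sumV #-}

    -- Shortcut: summing n copies of k is n*k (assuming n >= 0).
    sumRep :: Num a => Int -> a -> a
    sumRep n k = fromIntegral n * k

    {-# RULES "sumV/replicateV" forall n k.
          sumV (replicateV n k) = sumRep n k #-}

    -- Without CONLIKE the rule would tend not to fire here, because the
    -- replication is let-bound and shared rather than a visible redex.
    useTwice :: Int -> Double -> Double
    useTwice n k = let x = replicateV n k in sumV x + sumV x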
-
- 10 Sep, 2009 1 commit
-
-
simonpj@microsoft.com authored
This patch implements three significant improvements to Template Haskell.

Declaration-level splices with no "$"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This change simply allows you to omit the "$(...)" wrapper for declaration-level TH splices. A top-level expression all by itself is not otherwise legal Haskell, so we now treat it as a TH splice. Thus you can now say
        data T = T1 | T2
        deriveMyStuff ''T
where
        deriveMyStuff :: Name -> Q [Dec]
This makes a much nicer interface for clients of libraries that use TH: no scary $(deriveMyStuff ''T).

Nested top-level splices
~~~~~~~~~~~~~~~~~~~~~~~~
Previously TH would reject this, saying that splices cannot be nested:
        f x = $(g $(h 'x))
But there is no reason for this not to work. First $(h 'x) is run, yielding code <blah> that is spliced in place of the $(h 'x). Then (g <blah>) is typechecked and run, yielding code that replaces the $(g ...) splice. So this simply lifts the restriction.

Fix Trac #3467: non-top-level type splices
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It appears that when I added the ability to splice types in TH programs, I failed to pay attention to non-top-level splices -- that is, splices inside quotation brackets. This patch fixes the problem. I had to modify HsType, so there's a knock-on change to Haddock. It seems that a lot of lines of code have changed, but almost all the new lines are comments!

General tidying up
~~~~~~~~~~~~~~~~~~
As a result of thinking all this out I re-jigged the data type ThStage, which had far too many values before. And I wrote a nice state transition diagram to make it all precise; see Note [Template Haskell state diagram] in TcSplice. Lots more refactoring in TcSplice, resulting in significantly less code. (A few more lines, but actually less code -- the rest is comments.) I think the result is significantly cleaner.
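A toy stand-in for the deriveMyStuff of the message, just to make the bare-splice syntax concrete; the generated declaration and all bodies here are invented, and the client module is shown only in a comment because a top-level splice may mention imported functions only.

    {-# LANGUAGE TemplateHaskell #-}
    module DeriveMyStuff (deriveMyStuff) where

    import Language.Haskell.TH

    -- Generate   nameOfT :: String   containing the base name of the
    -- given type constructor.
    deriveMyStuff :: Name -> Q [Dec]
    deriveMyStuff ty =
      [d| nameOfT :: String
          nameOfT = $(litE (stringL (nameBase ty))) |]

    -- Client module (a separate file), using the new bare-splice syntax:
    --
    --   {-# LANGUAGE TemplateHaskell #-}
    --   module Use where
    --   import DeriveMyStuff
    --
    --   data T = T1 | T2
    --   deriveMyStuff ''T      -- no $(...) wrapper needed any more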
-
- 20 Aug, 2009 1 commit
-
-
chak@cse.unsw.edu.au. authored
-
- 03 Jun, 2009 1 commit
-
-
simonpj@microsoft.com authored
Roman found situations where he had
        case (f n) of _ -> e
where he knew that f (which was strict in n) would terminate if n did. Notice that the result of (f n) is discarded. So it makes sense to transform to
        case n of _ -> e
Rather than attempt some general analysis to support this, I've added enough support that you can do this using a rewrite rule:
        RULE "f/seq" forall n. seq (f n) e = seq n e
You write that rule. When GHC sees a case expression that discards its result, it mentally transforms it to a call to 'seq' and looks for a RULE. (This is done in Simplify.rebuildCase.) As usual, the correctness of the rule is up to you.

This patch implements the extra stuff. I have not documented it explicitly in the user manual yet... let's see how useful it is first.

The patch looks bigger than it is, because
  a) Comments; see esp MkId Note [seqId magic]
  b) Some refactoring. Notably, I moved the special desugaring for seq from MkCore back into DsUtils where it properly belongs. (It's really a desugaring thing, not a CoreSyn invariant.)
  c) Annoyingly, in a RULE left-hand side we need to be careful that the magical desugaring done in MkId Note [seqId magic] item (c) is *not* done on the LHS of a rule. Or rather, we arrange to un-do it, in DsBinds.decomposeRuleLhs.
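A sketch of the user-side of this feature, under the assumptions of the message: the function f and its body are invented, and the RULE is the one quoted above with e quantified explicitly.

    module SeqRuleSketch (f) where

    -- Stand-in for Roman's function: strict in its argument, and known
    -- (by the programmer) to terminate whenever the argument does.
    f :: Int -> Int
    f n = n * n + 1
    {-# NOINLINE f #-}

    -- GHC treats a discarded  case (f n) of _ -> e  as  seq (f n) e
    -- (see Simplify.rebuildCase), so this user-written rule turns it
    -- into  seq n e.  Its correctness is the programmer's obligation.
    {-# RULES "f/seq" forall n e. seq (f n) e = seq n e #-}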
-
- 28 May, 2009 1 commit
-
-
simonpj@microsoft.com authored
-
- 27 Apr, 2009 1 commit
-
-
chak@cse.unsw.edu.au. authored
- This patch changes the equality constraint solver such that it does not instantiate any type variables that occur in the constraints that are to be solved (or in the environment). Instead, it returns a bag of type bindings.
- If these type bindings (together with the other results of the solver) are discarded, solver invocation has no effect (outside the solver) and can be repeated (that's important for TcSimplifyRestricted).
- For the type bindings to take effect, the caller of the solver needs to execute them.
- The solver will still instantiate type variables that were created during solving (e.g., skolem flexibles used during type flattening).

See also http://hackage.haskell.org/trac/ghc/wiki/TypeFunctionsSolving
-
- 15 Mar, 2009 1 commit
-
-
chak@cse.unsw.edu.au. authored
- During finalisation we used to occasionally swivel variable-variable equalities.
- Now, normalisation ensures that they are always oriented as appropriate for instantiation.
- Also fixed #1899 properly; the previous fix fixed a symptom, not the cause.
-
- 11 Feb, 2009 1 commit
-
-
simonpj@microsoft.com authored
The function FunDeps.grow was not doing the right thing when type equality constraints were involved. That wasn't really its fault: its input was being filtered by fdPredsOfInsts. To fix this I did a bit of refactoring, so that the (revolting) fdPredsOfInsts is now less important (maybe we can get rid of it in due course).

The 'grow' function moves from FunDeps to
    Inst.growInstsTyVars
    TcMType.growThetaTyVars
    TcMType.growTyVars

The main comments are with the first of these, in Note [Growing the tau-tvs using constraints] in Inst.

Push to the branch if conflict-free.
-
- 16 Dec, 2008 1 commit
-
-
Simon Marlow authored
rolling back:

Fri Dec 5 16:54:00 GMT 2008  simonpj@microsoft.com
  * Completely new treatment of INLINE pragmas (big patch)

This is a major patch, which changes the way INLINE pragmas work. Although lots of files are touched, the net is only +21 lines of code -- and I bet that most of those are comments!

HEADS UP: interface file format has changed, so you'll need to recompile everything.

There is not much effect on overall performance for nofib, probably because those programs don't make heavy use of INLINE pragmas.

        Program          Size    Allocs   Runtime   Elapsed
        Min             -11.3%    -6.9%     -9.2%     -8.2%
        Max              -0.1%    +4.6%     +7.5%     +8.9%
        Geometric Mean   -2.2%    -0.2%     -1.0%     -0.8%

(The +4.6% on allocs is cichelli; see the other patch relating to -fpass-case-bndr-to-join-points.)

The old INLINE system
~~~~~~~~~~~~~~~~~~~~~
The old system worked like this. A function with an INLINE pragma got a right-hand side which looked like
        f = __inline_me__ (\xy. e)
The __inline_me__ part was an InlineNote, and was treated specially in various ways. Notably, the simplifier didn't inline inside an __inline_me__ note. As a result, the code for f itself was pretty crappy. That matters if you say (map f xs), because then you execute the code for f, rather than inlining a copy at the call site.

The new story: InlineRules
~~~~~~~~~~~~~~~~~~~~~~~~~~
The new system removes the InlineMe Note altogether. Instead there is a new constructor InlineRule in CoreSyn.Unfolding. This is a bit like a RULE, in that it remembers the template to be inlined inside the InlineRule. No simplification or inlining is done on an InlineRule, just like RULEs.

An Id can have an InlineRule *or* a CoreUnfolding (since these are two constructors of Unfolding). The simplifier treats them differently:
  - An InlineRule has the substitution applied (like RULES) but is otherwise left undisturbed.
  - A CoreUnfolding is updated with the new RHS of the definition, on each iteration of the simplifier.

An InlineRule fires regardless of size, but *only* when the function is applied to enough arguments. The "arity" of the rule is specified (by the programmer) as the number of args on the LHS of the "=". So it makes a difference whether you say
        {-# INLINE f #-}
        f x = \y -> e
or
        f x y = e
This is one of the big new features that InlineRule gives us, and it is one that Roman really wanted. In contrast, a CoreUnfolding can fire when it is applied to fewer args than the function has lambdas, provided the result is small enough.

Consequential stuff
~~~~~~~~~~~~~~~~~~~
* A 'wrapper' no longer has a WrapperInfo in the IdInfo. Instead, the InlineRule has a field identifying wrappers.
* Of course, IfaceSyn and interface serialisation change appropriately.
* Making implication constraints inline nicely was a bit fiddly. In the end I added a var_inline field to HsBind.VarBind, which is why this patch affects the type checker slightly.
* I made some changes to the way in which eta expansion happens in CorePrep, mainly to ensure that *arguments* that become let-bound are also eta-expanded. I'm still not too happy with the clarity and robustness of the result.
* We now complain if the programmer gives an INLINE pragma for a recursive function (previously we just ignored it). Reason for change: we don't want an InlineRule on a LoopBreaker, because then we'd have to check for loop-breaker-hood at occurrence sites (which isn't currently done). Some tests need changing as a result.

This patch has been in my tree for quite a while, so there are probably some other minor changes.
    M ./compiler/basicTypes/Id.lhs -11
    M ./compiler/basicTypes/IdInfo.lhs -82
    M ./compiler/basicTypes/MkId.lhs -2 +2
    M ./compiler/coreSyn/CoreFVs.lhs -2 +25
    M ./compiler/coreSyn/CoreLint.lhs -5 +1
    M ./compiler/coreSyn/CorePrep.lhs -59 +53
    M ./compiler/coreSyn/CoreSubst.lhs -22 +31
    M ./compiler/coreSyn/CoreSyn.lhs -66 +92
    M ./compiler/coreSyn/CoreUnfold.lhs -112 +112
    M ./compiler/coreSyn/CoreUtils.lhs -185 +184
    M ./compiler/coreSyn/MkExternalCore.lhs -1
    M ./compiler/coreSyn/PprCore.lhs -4 +40
    M ./compiler/deSugar/DsBinds.lhs -70 +118
    M ./compiler/deSugar/DsForeign.lhs -2 +4
    M ./compiler/deSugar/DsMeta.hs -4 +3
    M ./compiler/hsSyn/HsBinds.lhs -3 +3
    M ./compiler/hsSyn/HsUtils.lhs -2 +7
    M ./compiler/iface/BinIface.hs -11 +25
    M ./compiler/iface/IfaceSyn.lhs -13 +21
    M ./compiler/iface/MkIface.lhs -24 +19
    M ./compiler/iface/TcIface.lhs -29 +23
    M ./compiler/main/TidyPgm.lhs -55 +49
    M ./compiler/parser/ParserCore.y -5 +6
    M ./compiler/simplCore/CSE.lhs -2 +1
    M ./compiler/simplCore/FloatIn.lhs -6 +1
    M ./compiler/simplCore/FloatOut.lhs -23
    M ./compiler/simplCore/OccurAnal.lhs -36 +5
    M ./compiler/simplCore/SetLevels.lhs -59 +54
    M ./compiler/simplCore/SimplCore.lhs -48 +52
    M ./compiler/simplCore/SimplEnv.lhs -26 +22
    M ./compiler/simplCore/SimplUtils.lhs -28 +4
    M ./compiler/simplCore/Simplify.lhs -91 +109
    M ./compiler/specialise/Specialise.lhs -15 +18
    M ./compiler/stranal/WorkWrap.lhs -14 +11
    M ./compiler/stranal/WwLib.lhs -2 +2
    M ./compiler/typecheck/Inst.lhs -1 +3
    M ./compiler/typecheck/TcBinds.lhs -17 +27
    M ./compiler/typecheck/TcClassDcl.lhs -1 +2
    M ./compiler/typecheck/TcExpr.lhs -4 +6
    M ./compiler/typecheck/TcForeign.lhs -1 +1
    M ./compiler/typecheck/TcGenDeriv.lhs -14 +13
    M ./compiler/typecheck/TcHsSyn.lhs -3 +2
    M ./compiler/typecheck/TcInstDcls.lhs -5 +4
    M ./compiler/typecheck/TcRnDriver.lhs -2 +11
    M ./compiler/typecheck/TcSimplify.lhs -10 +17
    M ./compiler/vectorise/VectType.hs +7

Mon Dec 8 12:43:10 GMT 2008  simonpj@microsoft.com
  * White space only

    M ./compiler/simplCore/Simplify.lhs -2

Mon Dec 8 12:48:40 GMT 2008  simonpj@microsoft.com
  * Move simpleOptExpr from CoreUnfold to CoreSubst

    M ./compiler/coreSyn/CoreSubst.lhs -1 +87
    M ./compiler/coreSyn/CoreUnfold.lhs -72 +1

Mon Dec 8 17:30:18 GMT 2008  simonpj@microsoft.com
  * Use CoreSubst.simpleOptExpr in place of the ad-hoc simpleSubst (reduces code too)

    M ./compiler/deSugar/DsBinds.lhs -50 +16

Tue Dec 9 17:03:02 GMT 2008  simonpj@microsoft.com
  * Fix Trac #2861: bogus eta expansion

  Urgh! I "tidied up" the treatment of the "state hack" in CoreUtils, but missed an unexpected interaction with the way that a bottoming function simply swallows excess arguments. There's a long Note [State hack and bottoming functions] to explain (which accounts for most of the new lines of code).

    M ./compiler/coreSyn/CoreUtils.lhs -16 +53

Mon Dec 15 10:02:21 GMT 2008  Simon Marlow <marlowsd@gmail.com>
  * Revert CorePrep part of "Completely new treatment of INLINE pragmas..."

  The original patch said:

    * I made some changes to the way in which eta expansion happens in CorePrep, mainly to ensure that *arguments* that become let-bound are also eta-expanded. I'm still not too happy with the clarity and robustness of the result.

  Unfortunately this change apparently broke some invariants that were relied on elsewhere, and in particular led to panics when compiling with profiling on. Will re-investigate in the new year.
    M ./compiler/coreSyn/CorePrep.lhs -53 +58
    M ./configure.ac -1 +1

Mon Dec 15 12:28:51 GMT 2008  Simon Marlow <marlowsd@gmail.com>
  * revert accidental change to configure.ac

    M ./configure.ac -1 +1
-
- 05 Dec, 2008 1 commit
-
-
simonpj@microsoft.com authored
This is a major patch, which changes the way INLINE pragmas work. Although lots of files are touched, the net is only +21 lines of code -- and I bet that most of those are comments!

HEADS UP: interface file format has changed, so you'll need to recompile everything.

There is not much effect on overall performance for nofib, probably because those programs don't make heavy use of INLINE pragmas.

        Program          Size    Allocs   Runtime   Elapsed
        Min             -11.3%    -6.9%     -9.2%     -8.2%
        Max              -0.1%    +4.6%     +7.5%     +8.9%
        Geometric Mean   -2.2%    -0.2%     -1.0%     -0.8%

(The +4.6% on allocs is cichelli; see the other patch relating to -fpass-case-bndr-to-join-points.)

The old INLINE system
~~~~~~~~~~~~~~~~~~~~~
The old system worked like this. A function with an INLINE pragma got a right-hand side which looked like
        f = __inline_me__ (\xy. e)
The __inline_me__ part was an InlineNote, and was treated specially in various ways. Notably, the simplifier didn't inline inside an __inline_me__ note. As a result, the code for f itself was pretty crappy. That matters if you say (map f xs), because then you execute the code for f, rather than inlining a copy at the call site.

The new story: InlineRules
~~~~~~~~~~~~~~~~~~~~~~~~~~
The new system removes the InlineMe Note altogether. Instead there is a new constructor InlineRule in CoreSyn.Unfolding. This is a bit like a RULE, in that it remembers the template to be inlined inside the InlineRule. No simplification or inlining is done on an InlineRule, just like RULEs.

An Id can have an InlineRule *or* a CoreUnfolding (since these are two constructors of Unfolding). The simplifier treats them differently:
  - An InlineRule has the substitution applied (like RULES) but is otherwise left undisturbed.
  - A CoreUnfolding is updated with the new RHS of the definition, on each iteration of the simplifier.

An InlineRule fires regardless of size, but *only* when the function is applied to enough arguments. The "arity" of the rule is specified (by the programmer) as the number of args on the LHS of the "=". So it makes a difference whether you say
        {-# INLINE f #-}
        f x = \y -> e
or
        f x y = e
This is one of the big new features that InlineRule gives us, and it is one that Roman really wanted. In contrast, a CoreUnfolding can fire when it is applied to fewer args than the function has lambdas, provided the result is small enough.

Consequential stuff
~~~~~~~~~~~~~~~~~~~
* A 'wrapper' no longer has a WrapperInfo in the IdInfo. Instead, the InlineRule has a field identifying wrappers.
* Of course, IfaceSyn and interface serialisation change appropriately.
* Making implication constraints inline nicely was a bit fiddly. In the end I added a var_inline field to HsBind.VarBind, which is why this patch affects the type checker slightly.
* I made some changes to the way in which eta expansion happens in CorePrep, mainly to ensure that *arguments* that become let-bound are also eta-expanded. I'm still not too happy with the clarity and robustness of the result.
* We now complain if the programmer gives an INLINE pragma for a recursive function (previously we just ignored it). Reason for change: we don't want an InlineRule on a LoopBreaker, because then we'd have to check for loop-breaker-hood at occurrence sites (which isn't currently done). Some tests need changing as a result.

This patch has been in my tree for quite a while, so there are probably some other minor changes.
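An illustration of the arity point (the function names are invented; the behaviour described is the one in this message):

    module InlineAritySketch where

    -- One argument on the LHS, so the InlineRule can fire as soon as the
    -- function gets one argument:   twice succ   is already eligible.
    twice :: (a -> a) -> a -> a
    twice f = \x -> f (f x)
    {-# INLINE twice #-}

    -- Two arguments on the LHS, so the InlineRule fires only at
    -- saturated calls:   twice' succ   is not inlined;
    --                    twice' succ 3 is.
    twice' :: (a -> a) -> a -> a
    twice' f x = f (f x)
    {-# INLINE twice' #-}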
-
- 03 Oct, 2008 1 commit
-
-
simonpj@microsoft.com authored
nameModule fails on an InternalName. These ASSERTS tell you which call failed.
-
- 01 Oct, 2008 2 commits
-
-
chak@cse.unsw.edu.au. authored
- This cleans up some of the mess in reduceImplication and documents the precondition on the form of wanted equalities properly.
- I also made the back-off test a bit smarter, by allowing it to back off in the presence of wanted equalities as long as none of them got solved in the attempt. (That should save generating some superfluous bindings.)

MERGE TO 6.10
-
chak@cse.unsw.edu.au. authored
MERGE TO 6.10
-
- 16 Sep, 2008 1 commit
-
-
chak@cse.unsw.edu.au. authored
-
- 15 Sep, 2008 1 commit
-
-
chak@cse.unsw.edu.au. authored
-
- 14 Sep, 2008 1 commit
-
-
chak@cse.unsw.edu.au. authored
-
- 13 Sep, 2008 1 commit
-
-
chak@cse.unsw.edu.au. authored
- Implements normalisation of class constraints containing synonym family applications or skolems refined by local equalities.
- Clean-up of TcSimplify.reduceContext by using the new equality solver.
- Removed all the now-unused code of the old algorithm.
- This completes the implementation of the new algorithm, but it is largely untested => many regressions.
-
- 10 Sep, 2008 1 commit
-
-
simonpj@microsoft.com authored
-
- 07 Sep, 2008 1 commit
-
-
chak@cse.unsw.edu.au. authored
- This adds the new equational solver based on the notion of normalised equalities.
- The new algorithm is conceptually much simpler and will eventually enable us to implement a fully integrated solver that solves equality and dictionary constraints together.
- More details are at <http://hackage.haskell.org/trac/ghc/wiki/TypeFunctionsSolving>
- The code is there, but it is not being used yet.
-
- 31 Jul, 2008 1 commit
-
-
batterseapower authored
-
- 16 Jun, 2008 1 commit
-
-
Ian Lynagh authored
* Allow -ffoo flags to be deprecated
* Mark some -ffoo flags as deprecated
* Avoid using deprecated flags in error messages, in the build system, etc.
* Add a flag to en/disable the deprecated-flag warning
-
- 05 Jun, 2008 1 commit
-
-
simonpj@microsoft.com authored
Sorry -- my 'validate' didn't work right and I missed a trick. This patch must accompany
    * Fix Trac #2045: use big-tuple machinery for implication constraints
-
- 10 May, 2008 2 commits
-
-
Ian Lynagh authored
-
Ian Lynagh authored
-
- 06 May, 2008 3 commits
-
-
Ian Lynagh authored
-
Ian Lynagh authored
It's possible that returning Nothing was the right thing to do here, but the comment and variable name indicated that it was written for implicit parameters, so make it a panic for now just in case.
-
simonpj@microsoft.com authored
The real work of fixing Trac #2246 is to use shortCutLit in MatchLit.dsOverLit, so that type information discovered late in the day by the type checker can still be exploited during desugaring. However, as usual I found myself doing some refactoring along the way, to tidy up the handling of overloaded literals.

The main change is to split HsOverLit into a record, which in turn uses a sum type for the three variants. This makes the code significantly more modular.

    data HsOverLit id
      = OverLit {
            ol_val        :: OverLitVal,
            ol_rebindable :: Bool,           -- True <=> rebindable syntax
                                             -- False <=> standard syntax
            ol_witness    :: SyntaxExpr id,  -- Note [Overloaded literal witnesses]
            ol_type       :: PostTcType }

    data OverLitVal
      = HsIntegral   !Integer     -- Integer-looking literals
      | HsFractional !Rational    -- Frac-looking literals
      | HsIsString   !FastString  -- String-looking literals
-
- 23 Apr, 2008 1 commit
-
-
Ian Lynagh authored
-
- 12 Apr, 2008 1 commit
-
-
Ian Lynagh authored
-
- 29 Mar, 2008 1 commit
-
-
Ian Lynagh authored
Modules that need it import it themselves instead.
-
- 06 Mar, 2008 1 commit
-
-
simonpj@microsoft.com authored
The Inst.shortCutIntLit mechanism in the type checker was missing cases where a floating-point literal was given without an explicit decimal point. As a result, programs with lots of floating-point literals (without decimals) ended up with massive Static Reference Tables. This is not cool. See comments with Trac #783 for details.
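An invented example of the kind of source this bites: floating-point values written without a decimal point, which look like integer literals but are used at a fractional type.

    module LitSketch where

    -- Each literal here is written like an integer but used at type
    -- Double, so it goes through the overloaded-literal machinery;
    -- shortCutIntLit was missing this case.
    thresholds :: [Double]
    thresholds = [0, 1, 2, 4, 8, 16, 32, 64, 128, 256]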
-
- 29 Feb, 2008 1 commit
-
-
chak@cse.unsw.edu.au. authored
-