- 25 Jan, 2006 1 commit
-
-
simonpj@microsoft.com authored
This very large commit adds impredicativity to GHC, plus numerous other small things. *** WARNING: I have compiled all the libraries, and *** a stage-2 compiler, and everything seems *** fine. But don't grab this patch if you *** can't tolerate a hiccup if something is *** broken. The big picture is this: a) GHC handles impredicative polymorphism, as described in the "Boxy types: type inference for higher-rank types and impredicativity" paper b) GHC handles GADTs in the new simplified (and very slightly less expressive) way described in the "Simple unification-based type inference for GADTs" paper But there are lots of smaller changes, and since it was pre-Darcs they are not individually recorded. Some things to watch out for: c) The story on lexically-scoped type variables has changed, as per my email. I append the story below for completeness, but I am still not happy with it, and it may change again. In particular, the new story does not allow a pattern-bound scoped type variable to be wobbly, so (\(x::[a]) -> ...) is usually rejected. This is more restrictive than before, and we might loosen up again. d) A consequence of adding impredicativity is that GHC is a bit less gung ho about converting automatically between (ty1 -> forall a. ty2) and (forall a. ty1 -> ty2) In particular, you may need to eta-expand some functions to make typechecking work again. Furthermore, functions are now invariant in their argument types, rather than being contravariant. Again, the main consequence is that you may occasionally need to eta-expand function arguments when using higher-rank polymorphism. Please test, and let me know of any hiccups. Scoped type variables in GHC ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ January 2006 0) Terminology. A *pattern binding* is of the form pat = rhs A *function binding* is of the form f pat1 .. patn = rhs A binding of the form var = rhs is treated as a (degenerate) *function binding*. A *declaration type signature* is a separate type signature for a let-bound or where-bound variable: f :: Int -> Int A *pattern type signature* is a signature in a pattern: \(x::a) -> x f (x::a) = x A *result type signature* is a signature on the result of a function definition: f :: forall a. [a] -> a head (x:xs) :: a = x The form x :: a = rhs is treated as a (degenerate) function binding with a result type signature, not as a pattern binding. 1) The main invariants: A) A lexically-scoped type variable always names a (rigid) type variable (not an arbitrary type). THIS IS A CHANGE. Previously, a scoped type variable named an arbitrary *type*. B) A type signature always describes a rigid type (since its free (scoped) type variables name rigid type variables). This is also a change, a consequence of (A). C) Distinct lexically-scoped type variables name distinct rigid type variables. This choice is open; 2) Scoping 2(a) If a declaration type signature has an explicit forall, those type variables are brought into scope in the right hand side of the corresponding binding (plus, for function bindings, the patterns on the LHS). f :: forall a. a -> [a] f (x::a) = [x :: a, x] Both occurrences of 'a' in the second line are bound by the 'forall a' in the first line. A declaration type signature *without* an explicit top-level forall is implicitly quantified over all the type variables that are mentioned in the type but not already in scope. GHC's current rule is that this implicit quantification does *not* bring into scope any new scoped type variables. f :: a -> a f x = ...('a' is not in scope here)... 
This gives compatibility with Haskell 98 2(b) A pattern type signature implicitly brings into scope any type variables mentioned in the type that are not already in scope. These are called *pattern-bound type variables*. g :: a -> a -> [a] g (x::a) (y::a) = [y :: a, x] The pattern type signature (x::a) brings 'a' into scope. The 'a' in the pattern (y::a) is bound, as is the occurrence on the RHS. A pattern type signature is the only way you can bring existentials into scope. data T where MkT :: forall a. a -> (a->Int) -> T f x = case x of MkT (x::a) f -> f (x::a) 2a) QUESTION class C a where op :: forall b. b->a->a instance C (T p q) where op = <rhs> Clearly p,q are in scope in <rhs>, but is 'b'? Not at the moment. Nor can you add a type signature for op in the instance decl. You'd have to say this: instance C (T p q) where op = let op' :: forall b. ... op' = <rhs> in op' 3) A pattern-bound type variable is allowed only if the pattern's expected type is rigid. Otherwise we don't know exactly *which* skolem the scoped type variable should be bound to, and that means we can't do GADT refinement. This is invariant (A), and it is a big change from the current situation. f (x::a) = x -- NO; pattern type is wobbly g1 :: b -> b g1 (x::b) = x -- YES, because the pattern type is rigid g2 :: b -> b g2 (x::c) = x -- YES, same reason h :: forall b. b -> b h (x::b) = x -- YES, but the inner b is bound k :: forall b. b -> b k (x::c) = x -- NO, it can't be both b and c 3a) You cannot give different names for the same type variable in the same scope (Invariant (C)): f1 :: p -> p -> p -- NO; because 'a' and 'b' would be f1 (x::a) (y::b) = (x::a) -- bound to the same type variable f2 :: p -> p -> p -- OK; 'a' is bound to the type variable f2 (x::a) (y::a) = (x::a) -- over which f2 is quantified -- NB: 'p' is not lexically scoped f3 :: forall p. p -> p -> p -- NO: 'p' is now scoped, and is bound f3 (x::a) (y::a) = (x::a) -- to the same type variable as 'a' f4 :: forall p. p -> p -> p -- OK: 'p' is now scoped, and its occurrences f4 (x::p) (y::p) = (x::p) -- in the patterns are bound by the forall 3b) You can give a different name to the same type variable in different disjoint scopes, just as you can (if you want) give different names to the same value parameter g :: a -> Bool -> Maybe a g (x::p) True = Just x :: Maybe p g (y::q) False = Nothing :: Maybe q 3c) Scoped type variables respect alpha renaming. For example, function f2 from (3a) above could also be written: f2' :: p -> p -> p f2' (x::b) (y::b) = x::b where the scoped type variable is called 'b' instead of 'a'. 4) Result type signatures obey the same rules as pattern type signatures. In particular, they can bind a type variable only if the result type is rigid f x :: a = x -- NO g :: b -> b g x :: b = x -- YES; binds b in rhs 5) A *pattern type signature* in a *pattern binding* cannot bind a scoped type variable (x::a, y) = ... -- Legal only if 'a' is already in scope Reason: in type checking, the "expected type" of the LHS pattern is always wobbly, so we can't bind a rigid type variable. (The exception would be for an existential type variable, but existentials are not allowed in pattern bindings either.) Even this is illegal f :: forall a. a -> a f x = let ((y::b)::a, z) = ... in Here it looks as if 'b' might get a rigid binding; but you can't bind it to the same skolem as a. 6) Explicitly-forall'd type variables in the *declaration type signature(s)* for a *pattern binding* do not scope AT ALL. x :: forall a. 
a->a -- NO; the forall a does Just (x::a->a) = Just id -- not scope at all y :: forall a. a->a Just y = Just (id :: a->a) -- NO; same reason THIS IS A CHANGE, but one I bet that very few people will notice. Here's why: strange :: forall b. (b->b,b->b) strange = (id,id) x1 :: forall a. a->a y1 :: forall b. b->b (x1,y1) = strange This is legal Haskell 98 (modulo the forall). If both 'a' and 'b' both scoped over the RHS, they'd get unified and so cannot stand for distinct type variables. One could *imagine* allowing this: x2 :: forall a. a->a y2 :: forall a. a->a (x2,y2) = strange using the very same type variable 'a' in both signatures, so that a single 'a' scopes over the RHS. That seems defensible, but odd, because though there are two type signatures, they introduce just *one* scoped type variable, a. 7) Possible extension. We might consider allowing \(x :: [ _ ]) -> <expr> where "_" is a wild card, to mean "x has type list of something", without naming the something.
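To illustrate the eta-expansion point (d) above, here is a small sketch (illustrative code, not from the commit; needs the rank-N/impredicativity extensions) of the kind of program that may now need an eta-expansion:

    -- (->) is now invariant in its argument type, so passing 'g' directly to 'f'
    -- may be rejected even though the eta-expanded form is accepted.
    f :: ((forall a. a -> a) -> Int) -> Int
    f k = k id

    g :: (forall a. [a] -> [a]) -> Int
    g h = length (h [True, False])

    -- bad  = f g            -- may now be rejected
    good :: Int
    good = f (\h -> g h)     -- eta-expanding the argument restores typeability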
-
- 11 Jan, 2006 1 commit
-
-
simonmar authored
fix string desugaring: we can only use the ASCII unpackCString# if all the chars are <= 0x7F, not <= 0xFF. (fixes recent breakage in nofib/real/compress2)
-
- 06 Jan, 2006 1 commit
-
-
simonmar authored
Add support for UTF-8 source files GHC finally has support for full Unicode in source files. Source files are now assumed to be UTF-8 encoded, and the full range of Unicode characters can be used, with classifications recognised using the implementation from Data.Char. This incidentally means that only the stage2 compiler will recognise Unicode in source files, because I was too lazy to port the unicode classifier code into libcompat. Additionally, the following synonyms for keywords are now recognised: the forall symbol (U+2200) for 'forall', right arrow (U+2192) for '->', left arrow (U+2190) for '<-', and horizontal ellipsis (U+22EF) for '..'; there are probably more things we could add here. This will break some source files if Latin-1 characters are being used. In most cases this should result in a UTF-8 decoding error. Later on if we want to support more encodings (perhaps with a pragma to specify the encoding), I plan to do it by recoding into UTF-8 before parsing. Internally, there were some pretty big changes: - FastStrings are now stored in UTF-8 - Z-encoding has been moved right to the back end. Previously we used to Z-encode every identifier on the way in for simplicity, and only decode when we needed to show something to the user. Instead, we now keep every string in its UTF-8 encoding, and Z-encode right before printing it out. To avoid Z-encoding the same string multiple times, the Z-encoding is cached inside the FastString the first time it is requested. This speeds up the compiler - I've measured some definite improvement in parsing at least, and I expect compilations overall to be faster too. It also cleans up a lot of cruft from the OccName interface. Z-encoding is nicely hidden inside the Outputable instance for Names & OccNames now. - StringBuffers are UTF-8 too, and are now represented as ForeignPtrs. - I've put together some test cases, not by any means exhaustive, but there are some interesting UTF-8 decoding error cases that aren't obvious. Also, take a look at unicode001.hs for a demo.
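For example, a UTF-8 source file can now use the Unicode keyword synonyms directly (a small sketch, not from the commit; the file must be UTF-8 encoded and, as noted above, only the stage2 compiler recognises it):

    twice :: (a → a) → a → a
    twice f x = f (f x)

    main :: IO ()
    main = do
      line ← getLine
      putStrLn (twice (++ "!") line)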
-
- 04 Apr, 2005 1 commit
-
-
simonpj authored
This commit combines three overlapping things: 1. Make rebindable syntax work for do-notation. The idea here is that, in particular, (>>=) can have a type that has class constraints on its argument types, e.g. (>>=) :: (Foo m, Baz a) => m a -> (a -> m b) -> m b The consequence is that a BindStmt and ExprStmt must have individual evidence attached -- previously it was one batch of evidence for the entire Do Sadly, we can't do this for MDo, because we use bind at a polymorphic type (to tie the knot), so we still use one blob of evidence (now in the HsStmtContext) for MDo. For arrow syntax, the evidence is in the HsCmd. For list comprehensions, it's all built-in anyway. So the evidence on a BindStmt is only used for ordinary do-notation. 2. Tidy up HsSyn. In particular: - Eliminate a few "Out" forms, which we can manage without (e.g. - It ought to be the case that the type checker only decorates the syntax tree, but doesn't change one construct into another. That wasn't true for NPat, LitPat, NPlusKPat, so I've fixed that. - Eliminate ResultStmts from Stmt. They always had to be the last Stmt, which led to awkward pattern matching in some places; and the benefits didn't seem to outweigh the costs. Now each construct that uses [Stmt] has a result expression too (e.g. GRHS). 3. Make 'deriving( Ix )' generate a binding for unsafeIndex, rather than for index. This is loads more efficient. (This item only affects TcGenDeriv, but some of point (2) also affects TcGenDeriv, so it has to be in one commit.)
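As a sketch of what point 1 enables (illustrative code, not from the commit): with the rebindable-syntax behaviour of -fno-implicit-prelude, a do-block can use a (>>=) whose argument types carry class constraints, and each BindStmt gets its own evidence for them:

    import Prelude hiding ((>>=))

    -- A bind that logs each intermediate value, hence the Show constraint:
    newtype Logged a = Logged (a, [String])

    (>>=) :: Show a => Logged a -> (a -> Logged b) -> Logged b
    Logged (x, log1) >>= f =
        let Logged (y, log2) = f x
        in  Logged (y, log1 ++ [show x] ++ log2)

    unit :: a -> Logged a
    unit x = Logged (x, [])

    -- With -fno-implicit-prelude, a do-block   do { x <- m; k x }
    -- now elaborates to   m >>= \x -> k x   using this constrained (>>=).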
-
- 31 Mar, 2005 1 commit
-
-
simonmar authored
Tweaks to get the GHC sources through Haddock. Doesn't quite work yet, because Haddock complains about the recursive modules. Haddock needs to understand SOURCE imports (it can probably just ignore them as a first attempt).
-
- 01 Mar, 2005 1 commit
-
-
simonpj authored
Make desugaring of pattern-matching much more civilised. Before this change we wrapped new bindings around the right hand side; but that meant they ended up wrapped in reverse order. Now we accumulate the bindings separately, in the eqn_wrap field of an EqnInfo. This cures a desugaring bug encountered by Akos Korosmezey, immortalised as ds055.
-
- 27 Jan, 2005 1 commit
-
-
simonpj authored
-------------------------------------------- Replace hi-boot files with hs-boot files -------------------------------------------- This major commit completely re-organises the way that recursive modules are dealt with. * It should have NO EFFECT if you do not use recursive modules * It is a BREAKING CHANGE if you do ====== Warning: .hi-file format has changed, so if you are ====== updating into an existing HEAD build, you'll ====== need to make clean and re-make The details: [documentation still to be done] * Recursive loops are now broken with Foo.hs-boot (or Foo.lhs-boot), not Foo.hi-boot * An hs-boot file is a proper source file. It is compiled just like a regular Haskell source file: ghc Foo.hs generates Foo.hi, Foo.o ghc Foo.hs-boot generates Foo.hi-boot, Foo.o-boot * hs-boot files are precisely a subset of Haskell. In particular: - they have the same import, export, and scoping rules - errors (such as kind errors) in hs-boot files are checked You do *not* need to mention the "original" name of something in an hs-boot file, any more than you do in any other Haskell module. * The Foo.hi-boot file generated by compiling Foo.hs-boot is a machine-generated interface file, in precisely the same format as Foo.hi * When compiling Foo.hs, its exports are checked for compatibility with Foo.hi-boot (previously generated by compiling Foo.hs-boot) * The dependency analyser (ghc -M) knows about Foo.hs-boot files, and generates appropriate dependencies. For regular source files it generates Foo.o : Foo.hs Foo.o : Baz.hi -- Foo.hs imports Baz Foo.o : Bog.hi-boot -- Foo.hs source-imports Bog For an hs-boot file it generates similar dependencies Bog.o-boot : Bog.hs-boot Bog.o-boot : Nib.hi -- Bog.hs-boot imports Nib * ghc -M is also enhanced to use the compilation manager dependency chasing, so that ghc -M Main will usually do the job. No need to enumerate all the source files. * The -c flag is no longer a "compiler mode". It simply means "omit the link step", and is synonymous with -no-link.
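A minimal sketch of the new arrangement for a recursive pair of modules (illustrative module names, not from the commit):

    -- A.hs
    module A where
    import {-# SOURCE #-} B (bar)

    foo :: Int -> Int
    foo n = if n <= 0 then 0 else bar (n - 1)

    -- B.hs-boot    (ghc -c B.hs-boot produces B.hi-boot and B.o-boot)
    module B where
    bar :: Int -> Int

    -- B.hs         (its exports are checked against B.hi-boot)
    module B where
    import A (foo)

    bar :: Int -> Int
    bar n = foo n + 1

    -- Build order:  ghc -c B.hs-boot ; ghc -c A.hs ; ghc -c B.hs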
-
- 22 Dec, 2004 1 commit
-
-
simonpj authored
---------------------------------------- New Core invariant: keep case alternatives in sorted order ---------------------------------------- We now keep the alternatives of a Case in the Core language in sorted order. Sorted, that is: by constructor tag for DataAlt, and by literal for LitAlt. The main reason is that it makes matching and equality testing more robust. But in fact some lines of code vanished from SimplUtils.mkAlts. WARNING: no change to interface file formats, but you'll need to recompile your libraries so that they generate interface files that respect the invariant.
-
- 30 Sep, 2004 1 commit
-
-
simonpj authored
------------------------------------ Add Generalised Algebraic Data Types ------------------------------------ This rather big commit adds support for GADTs. For example, data Term a where Lit :: Int -> Term Int App :: Term (a->b) -> Term a -> Term b If :: Term Bool -> Term a -> Term a -> Term a ..etc.. eval :: Term a -> a eval (Lit i) = i eval (App a b) = eval a (eval b) eval (If p q r) | eval p = eval q | otherwise = eval r Lots and lots of related changes throughout the compiler to make this fit nicely. One important change, only loosely related to GADTs, is that skolem constants in the typechecker are genuinely immutable and constant, so we often get better error messages from the type checker. See TcType.TcTyVarDetails. There's a new module types/Unify.lhs, which has purely-functional unification and matching for Type. This is used both in the typechecker (for type refinement of GADTs) and in Core Lint (also for type refinement).
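To make the example runnable, here is one possible completion of the "..etc.." (the extra Prim constructor is hypothetical, purely for illustration; it needs the GADT support this commit adds):

    data Term a where
      Lit  :: Int -> Term Int
      Prim :: (a -> b) -> Term (a -> b)    -- hypothetical extra constructor
      App  :: Term (a -> b) -> Term a -> Term b
      If   :: Term Bool -> Term a -> Term a -> Term a

    eval :: Term a -> a
    eval (Lit i)    = i
    eval (Prim f)   = f
    eval (App a b)  = eval a (eval b)
    eval (If p q r) | eval p    = eval q
                    | otherwise = eval r

    demo :: Int
    demo = eval (If (App (Prim even) (Lit 4)) (Lit 1) (Lit 0))   -- ==> 1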
-
- 06 Apr, 2004 1 commit
-
-
simonpj authored
The "rebindable-syntax" stuff wasn't dealing with the new location information correctly. This commit fixes the problem, and thereby makes mdofail004 work right. Maybe others too.
-
- 10 Dec, 2003 1 commit
-
-
simonmar authored
Add accurate source location annotations to HsSyn ------------------------------------------------- Every syntactic entity in HsSyn is now annotated with a SrcSpan, which details the exact beginning and end points of that entity in the original source file. All honest compilers should do this, and it was about time GHC did the right thing. The most obvious benefit is that we now have much more accurate error messages; when running GHC inside emacs for example, the cursor will jump to the exact location of an error, not just a line somewhere nearby. We haven't put a huge amount of effort into making sure all the error messages are accurate yet, so there could be some tweaking still needed, although the majority of messages I've seen have been spot-on. Error messages now contain a column number in addition to the line number, eg. read001.hs:25:10: Variable not in scope: `+#' To get the full text span info, use the new option -ferror-spans. eg. read001.hs:25:10-11: Variable not in scope: `+#' I'm not sure whether we should do this by default. Emacs won't understand the new error format, for one thing. In a more elaborate editor setting (eg. Visual Studio), we can arrange to actually highlight the subexpression containing an error. Eventually this information will be used so we can find elements in the abstract syntax corresponding to text locations, for performing high-level editor functions (eg. "tell me the type of this expression I just highlighted"). Performance of the compiler doesn't seem to be adversely affected. Parsing is still quicker than in 6.0.1, for example. Implementation: This was an excrutiatingly painful change to make: both Simon P.J. and myself have been working on it for the last three weeks or so. The basic changes are: - a new datatype SrcSpan, which represents a beginning and end position in a source file. - To reduce the pain as much as possible, we also defined: data Located e = L SrcSpan e - Every datatype in HsSyn has an equivalent Located version. eg. type LHsExpr id = Located (HsExpr id) and pretty much everywhere we used to use HsExpr we now use LHsExpr. Believe me, we thought about this long and hard, and all the other options were worse :-) Additional changes/cleanups we made at the same time: - The abstract syntax for bindings is now less arcane. MonoBinds and HsBinds with their built-in list constructors have gone away, replaced by HsBindGroup and HsBind (see HsSyn/HsBinds.lhs). - The various HsSyn type synonyms have now gone away (eg. RdrNameHsExpr, RenamedHsExpr, and TypecheckedHsExpr are now HsExpr RdrName, HsExpr Name, and HsExpr Id respectively). - Utilities over HsSyn are now collected in a new module HsUtils. More stuff still needs to be moved in here. - MachChar now has a real Char instead of an Int. All GHC versions that can compile GHC now support 32-bit Chars, so this was a simplification.
-
- 17 Nov, 2003 1 commit
-
-
simonmar authored
GC export list.
-
- 09 Oct, 2003 1 commit
-
-
simonpj authored
------------------------- GHC heart/lung transplant ------------------------- This major commit changes the way that GHC deals with importing types and functions defined in other modules, during renaming and typechecking. On the way I've changed or cleaned up numerous other things, including many that I probably fail to mention here. Major benefit: GHC should suck in many fewer interface files when compiling (esp with -O). (You can see this with -ddump-rn-stats.) It's also some 1500 lines of code shorter than before. ** So expect bugs! I can do a 3-stage bootstrap, and run ** the test suite, but you may be doing stuff I haven't tested. ** Don't update if you are relying on a working HEAD. In particular, (a) External Core and (b) GHCi are very little tested. But please, please DO test this version! ------------------------ Big things ------------------------ Interface files, version control, and importing declarations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * There is a totally new data type for stuff that lives in interface files: Original names IfaceType.IfaceExtName Types IfaceType.IfaceType Declarations (type,class,id) IfaceSyn.IfaceDecl Unfoldings IfaceSyn.IfaceExpr (Previously we used HsSyn for type/class decls, and UfExpr for unfoldings.) The new data types are in iface/IfaceType and iface/IfaceSyn. They are all instances of Binary, so they can be written into interface files. Previous engronkulation concerning the binary instance of RdrName has gone away -- RdrName is not an instance of Binary any more. Nor does Binary.lhs need to know about the ``current module'' which it used to, which made it specialised to GHC. A good feature of this is that the type checker for source code doesn't need to worry about the possibility that we might be typechecking interface file stuff. Nor does it need to do renaming; we can typecheck direct from IfaceSyn, saving a whole pass (module TcIface). * Stuff from interface files is sucked in *lazily*, rather than being eagerly sucked in by the renamer. Instead, we use unsafeInterleaveIO to capture a thunk for the unfolding of an imported function (say). If that unfolding is ever pulled on, TcIface will scramble over the unfolding, which may in turn pull in the interface files of things mentioned in the unfolding. The External Package State is held in a mutable variable so that it can be side-effected by this lazy-sucking-in process (which may happen way later, e.g. when the simplifier runs). In effect, the EPS is a kind of lazy memo table, filled in as we suck things in. Or you could think of it as a global symbol table, populated on demand. * This lazy sucking is very cool, but it can lead to truly awful bugs. The intent is that updates to the symbol table happen atomically, but very bad things happen if you read the variable for the table, and then force a thunk which updates the table. Updates can get lost that way. I regret this subtlety. One example of the way it showed up is that the top level of TidyPgm (which updates the global name cache) had to be made much more disciplined about those updates, since TidyPgm may itself force thunks which allocate new names. * Version numbering in interface files has changed completely, fixing one major bug with ghc --make. Previously, the version of A.f changed only if A.f's type and unfolding was textually different. That missed changes to things that A.f's unfolding mentions; which was fixed by eagerly sucking in all of those things, and listing them in the module's usage list. 
But that didn't work with --make, because they might have been already sucked in. Now, A.f's version changes if anything reachable from A.f (via interface files) changes. A module with unchanged source code needs recompiling only if the versions of any of its free variables changes. [This isn't quite right for dictionary functions and rules, which aren't mentioned explicitly in the source. There are extensive comments in module MkIface, where all version-handling stuff is done.] * We don't need equality on HsDecls any more (because they aren't used in interface files). Instead we have a specialised equality for IfaceSyn (eqIfDecl etc), which uses IfaceEq instead of Bool as its result type. See notes in IfaceSyn. * The horrid bit of the renamer that tried to predict what instance decls would be needed has gone entirely. Instead, the type checker simply sucks in whatever instance decls it needs, when it needs them. Easy! Similarly, no need for 'implicitModuleFVs' and 'implicitTemplateHaskellFVs' etc. Hooray! Types and type checking ~~~~~~~~~~~~~~~~~~~~~~~ * Kind-checking of types is far far tidier (new module TcHsTypes replaces the badly-named TcMonoType). Strangely, this was one of my original goals, because the kind check for types is the Right Place to do type splicing, but it just didn't fit there before. * There's a new representation for newtypes in TypeRep.lhs. Previously they were represented using "SourceTypes" which was a funny compromise. Now they have their own constructor in the Type datatype. SourceType has turned back into PredType, which is what it used to be. * Instance decl overlap checking done lazily. Consider instance C Int b instance C a Int These were rejected before as overlapping, because when seeking (C Int Int) one couldn't tell which to use. But there's no problem when seeking (C Bool Int); it can only be the second. So instead of checking for overlap when adding a new instance declaration, we check for overlap when looking up an Inst. If we find more than one matching instance, we see if any of the candidates dominates the others (in the sense of being a substitution instance of all the others); and only if not do we report an error. ------------------------ Medium things ------------------------ * The TcRn monad is generalised a bit further. It's now based on utils/IOEnv.lhs, the IO monad with an environment. The desugarer uses the monad too, so that anything it needs can get faulted in nicely. * Reduce the number of wired-in things; in particular Word and Integer are no longer wired in. The latter required HsLit.HsInteger to get a Type argument. The 'derivable type classes' data types (:+:, :*: etc) are not wired in any more either (see stuff about derivable type classes below). * The PersistentComilerState is now held in a mutable variable in the HscEnv. Previously (a) it was passed to and then returned by many top-level functions, which was painful; (b) it was invariably accompanied by the HscEnv. This change tidies up top-level plumbing without changing anything important. * Derivable type classes are treated much more like 'deriving' clauses. Previously, the Ids for the to/from functions lived inside the TyCon, but now the TyCon simply records their existence (with a simple boolean). Anyone who wants to use them must look them up in the environment. 
This in turn makes it easy to generate the to/from functions (done in types/Generics) using HsSyn (like TcGenDeriv for ordinary derivings) instead of CoreSyn, which in turn means that (a) we don't have to figure out all the type arguments etc; and (b) it'll be type-checked for us. Generally, the task of generating the code has become easier, which is good for Manuel, who wants to make it more sophisticated. * A Name now says what its "parent" is. For example, the parent of a data constructor is its type constructor; the parent of a class op is its class. This relationship corresponds exactly to the Avail data type; there may be other places we can exploit it. (I made the change so that version comparison in interface files would be a bit easier; but in fact it tided up other things here and there (see calls to Name.nameParent). For example, the declaration pool, of declararations read from interface files, but not yet used, is now keyed only by the 'main' name of the declaration, not the subordinate names. * New types OccEnv and OccSet, with the usual operations. OccNames can be efficiently compared, because they have uniques, thanks to the hashing implementation of FastStrings. * The GlobalRdrEnv is now keyed by OccName rather than RdrName. Not only does this halve the size of the env (because we don't need both qualified and unqualified versions in the env), but it's also more efficient because we can use a UniqFM instead of a FiniteMap. Consequential changes to Provenance, which has moved to RdrName. * External Core remains a bit of a hack, as it was before, done with a mixture of HsDecls (so that recursiveness and argument variance is still inferred), and IfaceExprs (for value declarations). It's not thoroughly tested. ------------------------ Minor things ------------------------ * DataCon fields dcWorkId, dcWrapId combined into a single field dcIds, that is explicit about whether the data con is a newtype or not. MkId.mkDataConWorkId and mkDataConWrapId are similarly combined into MkId.mkDataConIds * Choosing the boxing strategy is done for *source* type decls only, and hence is now in TcTyDecls, not DataCon. * WiredIn names are distinguished by their n_sort field, not by their location, which was rather strange * Define Maybes.mapCatMaybes :: (a -> Maybe b) -> [a] -> [b] and use it here and there * Much better pretty-printing of interface files (--show-iface) Many, many other small things. ------------------------ File changes ------------------------ * New iface/ subdirectory * Much of RnEnv has moved to iface/IfaceEnv * MkIface and BinIface have moved from main/ to iface/ * types/Variance has been absorbed into typecheck/TcTyDecls * RnHiFiles and RnIfaces have vanished entirely. Their work is done by iface/LoadIface * hsSyn/HsCore has gone, replaced by iface/IfaceSyn * typecheck/TcIfaceSig has gone, replaced by iface/TcIface * typecheck/TcMonoType has been renamed to typecheck/TcHsType * basicTypes/Var.hi-boot and basicTypes/Generics.hi-boot have gone altogether
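The lazy overlap check described above, as a small sketch (made-up class and instances, not from the commit; needs the overlapping/flexible-instances flags):

    class C a b where
      op :: a -> b -> String

    instance C Int b where op _ _ = "C Int b"
    instance C a Int where op _ _ = "C a Int"

    ok :: String
    ok = op True (3 :: Int)            -- only 'C a Int' can match: accepted

    -- amb = op (1 :: Int) (2 :: Int)  -- both match and neither dominates the other,
    --                                 -- so the error is reported at this use site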
-
- 15 Jul, 2003 1 commit
-
-
ross authored
Add extra functions operating on outsized tuples (used by the translation of arrow notation).
-
- 24 Jun, 2003 1 commit
-
-
simonpj authored
---------------------------------------------- Add support for Ross Paterson's arrow notation ---------------------------------------------- Ross Paterson's ICFP'01 paper described syntax to support John Hughes's "arrows", rather as do-notation supports monads. Except that do-notation is relatively modest -- you can write monads by hand without much trouble -- whereas arrow-notation is more-or-less essential for writing arrow programs. It desugars to a massive pile of tuple construction and selection! For some time, Ross has had a pre-processor for arrow notation, but the resulting type error messages (reported in terms of the desugared code) are impenetrable. This commit integrates the syntax into GHC. The type error messages almost certainly still require tuning, but they should be better than with the pre-processor. Main syntactic changes (enabled with -farrows) exp ::= ... | proc pat -> cmd cmd ::= exp1 -< exp2 | exp1 >- exp2 | exp1 -<< exp2 | exp1 >>- exp2 | \ pat1 .. patn -> cmd | let decls in cmd | if exp then cmd1 else cmd2 | do { cstmt1 .. cstmtn ; cmd } | (| exp |) cmd1 .. cmdn | cmd1 qop cmd2 | case exp of { calts } cstmt ::= let decls | pat <- cmd | rec { cstmt1 .. cstmtn } | cmd New keywords and symbols: proc rec -< >- -<< >>- (| |) The do-notation in cmds was not described in Ross's ICFP'01 paper; instead it's in his chapter in The Fun of Programming (Palgrave 2003). The four arrow-tail forms (-<) etc cover (a) which order the pieces come in (-< vs >-), and (b) whether the locally bound variables can be used in the arrow part (-< vs -<<). In previous presentations, the higher-order-ness (b) was inferred, but it makes a big difference to the typing required so it seems more consistent to be explicit. The 'rec' form is also available in do-notation: * you can use 'rec' in an ordinary do, with the obvious meaning * using 'mdo' just says "infer the minimal recs" Still to do ~~~~~~~~~~~ Top priority is the user manual. The implementation still lacks an implementation of the case form of cmd. Implementation notes ~~~~~~~~~~~~~~~~~~~~ Cmds are parsed, and indeed renamed, as expressions. The type checker distinguishes the two.
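A small proc-notation example in the new syntax (illustrative only; needs -farrows and Control.Arrow):

    import Control.Arrow

    addA :: Arrow a => a b Int -> a b Int -> a b Int
    addA f g = proc x -> do
      y <- f -< x
      z <- g -< x
      returnA -< y + z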
-
- 03 Jun, 2003 1 commit
-
-
panne authored
Nuked debugging output
-
- 02 Jun, 2003 3 commits
-
-
simonpj authored
Wibbles to tuples
-
simonpj authored
Wibbles to nested tuples
-
simonpj authored
------------------------------------- Fix the big-tuple-from-desugaring problem ------------------------------------- The desugarer generates a tuple from - mutually recursive bindings - pattern bindings If either binds a lot of variables, GHC can generate a big tuple that isn't in the library, with subsequent disaster. This commit fixes the problem, by using nested tuples. It does *not* fix the problem with big tuples written by the user. And there's still a potential desugarer problem with parallel list comprehensions that bind a lot of variables (and parallel array comprehensions) -- but I expect they are much much rarer. The fix isn't fully tested yet -- I'll try to do that today.
-
- 07 Nov, 2002 1 commit
-
-
simonpj authored
------------------ Fix an obscure bug in implicit parameters, interacting with lazy pattern matching ------------------ MERGE TO STABLE BRANCH The problem was this: data UniqueSupply = US Integer newUnique :: (?uniqueSupply :: UniqueSupply) => Integer newUnique = r where US r = ?uniqueSupply The lazy pattern match in the where clause killed GHC 5.04 because the SourceType {?uniqueSupply::UniqueSupply} of the RHS of the 'where' didn't look like a UniqueSupply. The fix is simple: in DsUtils.mkSelectorBinds, use the pattern, not the rhs, to get the type reqd. More efficient too. Test is typecheck/should_compile/tc164.hs
-
- 30 Oct, 2002 1 commit
-
-
simonpj authored
Add string/rational literals, and e::t form to TH
-
- 09 Oct, 2002 1 commit
-
-
simonpj authored
----------------------------------- Lots more Template Haskell stuff ----------------------------------- At last! Top-level declaration splices work! Syntax is $(f x) not "splice (f x)" as in the paper. Lots of jiggling around, particularly with the top-level plumbing. Note the new data type HsDecls.HsGroup.
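For illustration, a top-level declaration splice looks roughly like this (written against today's template-haskell API names for concreteness; the THSyntax module of this era spelled things differently):

    {-# LANGUAGE TemplateHaskell #-}
    module Defs where
    import Language.Haskell.TH

    -- Generates a top-level binding   name = val
    mkConst :: String -> Integer -> Q [Dec]
    mkConst name val =
      return [ValD (VarP (mkName name)) (NormalB (LitE (IntegerL val))) []]

    -- In another module:
    --   $(mkConst "answer" 42)    -- brings   answer = 42   into scope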
-
- 13 Sep, 2002 1 commit
-
-
simonpj authored
-------------------------------------- Make Template Haskell into the HEAD -------------------------------------- This massive commit transfers to the HEAD all the stuff that Simon and Tim have been doing on Template Haskell. The meta-haskell-branch is no more! WARNING: make sure that you * Update your links if you are using link trees. Some modules have been added, some have gone away. * Do 'make clean' in all library trees. The interface file format has changed, and you can get strange panics (sadly) if GHC tries to read old interface files: e.g. ghc-5.05: panic! (the `impossible' happened, GHC version 5.05): Binary.get(TyClDecl): ForeignType * You need to recompile the rts too; Linker.c has changed However the libraries are almost unaltered; just a tiny change in Base, and to the exports in Prelude. NOTE: so far as TH itself is concerned, expression splices work fine, but declaration splices are not complete. --------------- The main change --------------- The main structural change: renaming and typechecking have to be interleaved, because we can't rename stuff after a declaration splice until after we've typechecked the stuff before (and the splice itself). * Combine the renamer and typechecker monads into one (TcRnMonad, TcRnTypes) These two replace TcMonad and RnMonad * Give them a single 'driver' (TcRnDriver). This driver replaces TcModule.lhs and Rename.lhs * The haskell-src library package has a module Language/Haskell/THSyntax which defines the Haskell data type seen by the TH programmer. * New modules: hsSyn/Convert.hs converts THSyntax -> HsSyn deSugar/DsMeta.hs converts HsSyn -> THSyntax * New module typecheck/TcSplice type-checks Template Haskell splices. ------------- Linking stuff ------------- * ByteCodeLink has been split into ByteCodeLink (which links) ByteCodeAsm (which assembles) * New module ghci/ObjLink is the object-code linker. * compMan/CmLink is removed entirely (was out of place) Ditto CmTypes (which was tiny) * Linker.c initialises the linker when it is first used (no need to call initLinker any more). Template Haskell makes it harder to know when and whether to initialise the linker. ------------------------------------- Gathering the LIE in the type checker ------------------------------------- * Instead of explicitly gathering constraints in the LIE tcExpr :: RenamedExpr -> TcM (TypecheckedExpr, LIE) we now dump the constraints into a mutable variable carried by the monad, so we get tcExpr :: RenamedExpr -> TcM TypecheckedExpr Much less clutter in the code, and more efficient too. (Originally suggested by Mark Shields.) ----------------- Remove "SysNames" ----------------- Because the renamer and the type checker were entirely separate, we had to carry some rather tiresome implicit binders (or "SysNames") along inside some of the HsDecl data structures. They were both tiresome and fragile. Now that the typechecker and renamer are more intimately coupled, we can eliminate SysNames (well, mostly... default methods still carry something similar). ------------- Clean up HsPat ------------- One big clean up is this: instead of having two HsPat types (InPat and OutPat), they are now combined into one. This is more consistent with the way that HsExpr etc is handled; there are some 'Out' constructors for the type checker output. 
So: HsPat.InPat --> HsPat.Pat HsPat.OutPat --> HsPat.Pat No 'pat' type parameter in HsExpr, HsBinds, etc Constructor patterns are nicer now: they use HsPat.HsConDetails for the three cases of constructor patterns: prefix, infix, and record-bindings The *same* data type HsConDetails is used in the type declaration of the data type (HsDecls.TyData) Lots of associated clean-up operations here and there. Less code. Everything is wonderful.
-
- 29 Apr, 2002 1 commit
-
-
simonmar authored
FastString cleanup, stage 1. The FastString type is no longer a mixture of hashed strings and literal strings, it contains hashed strings only with O(1) comparison (except for UnicodeStr, but that will also go away in due course). To create a literal instance of FastString, use FSLIT(".."). By far the most common use of the old literal version of FastString was in the pattern ptext SLIT("...") this combination still works, although it doesn't go via FastString any more. The next stage will be to remove the need to use this special combination at all, using a RULE. To convert a FastString into an SDoc, now use 'ftext' instead of 'ptext'. I've also removed all the FAST_STRING related macros from HsVersions.h except for SLIT and FSLIT, just use the relevant functions from FastString instead.
-
- 11 Apr, 2002 1 commit
-
-
simonpj authored
------------------- Mainly derived Read ------------------- This commit is a tangle of several things that somehow got wound up together, I'm afraid. The main course ~~~~~~~~~~~~~~~ Replace the derived-Read machinery with Koen's cunning new parser combinator library. The result should be * much smaller code sizes from derived Read * faster execution of derived Read WARNING: I have not thoroughly tested this stuff; I'd be glad if you did! All the hard work is done, but there may be a few nits. The Read class gets two new methods, not exposed in the H98 interface of course: class Read a where readsPrec :: Int -> ReadS a readList :: ReadS [a] readPrec :: ReadPrec a -- NEW readListPrec :: ReadPrec [a] -- NEW There are the following new libraries: Text.ParserCombinators.ReadP Koen's combinator parser Text.ParserCombinators.ReadPrec Ditto, but with precedences Text.Read.Lex An emasculated lexical analyser that provides the functionality of H98 'lex' TcGenDeriv is changed to generate code that uses the new libraries. The built-in instances of Read (List, Maybe, tuples, etc) use the new libraries. Other stuff ~~~~~~~~~~~ 1. Some fixes to the plumbing of external-core generation. Sigbjorn did most of the work earlier, but this commit completes the renaming and typechecking plumbing. 2. Runtime error-generation functions, such as GHC.Err.recSelErr, GHC.Err.recUpdErr, etc, now take an Addr#, pointing to a UTF8-encoded C string, instead of a Haskell string. This makes the *calls* to these functions easier to generate, and smaller too, which is a good thing. In particular, it means that MkId.mkRecordSelectorId doesn't need to be passed "unpackCStringId", which was GRUESOME; and that in turn means that tcTypeAndClassDecls doesn't need to be passed unf_env, which is a very worthwhile cleanup. Win/win situation. 3. GHC now faithfully translates do-notation using ">>" for statements with no binding, just as the report says. While I was there I tidied up HsDo to take a list of Ids instead of 3 (but now 4) separate Ids. Saves a bit of code here and there. Also introduced Inst.newMethodFromName to package a common idiom.
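A hand-written instance in the style the new deriving machinery produces, as a sketch (written against today's module layout of these libraries):

    import Text.Read (Read(..), Lexeme(Ident), lexP, parens)
    import Text.ParserCombinators.ReadPrec (prec, step)

    data Point = Point Int Int deriving Show

    instance Read Point where
      readPrec = parens $ prec 10 $ do
        Ident "Point" <- lexP
        x <- step readPrec
        y <- step readPrec
        return (Point x y)

    -- read "Point 3 4" :: Point   ==>   Point 3 4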
-
- 05 Apr, 2002 1 commit
-
-
sof authored
Friday afternoon pet peeve removal: define (Util.notNull :: [a] -> Bool) and use it
-
- 01 Apr, 2002 1 commit
-
-
simonpj authored
------------------------------------ Change the treatment of the stupid context on data constructors ----------------------------------- Data types can have a context: data (Eq a, Ord b) => T a b = T1 a b | T2 a and that makes the constructors have a context too (notice that T2's context is "thinned"): T1 :: (Eq a, Ord b) => a -> b -> T a b T2 :: (Eq a) => a -> T a b Furthermore, this context pops up when pattern matching (though GHC hasn't implemented this, but it is in H98, and I've fixed GHC so that it now does): f (T2 x) = x gets inferred type f :: Eq a => T a b -> a I say the context is "stupid" because the dictionaries passed are immediately discarded -- they do nothing and have no benefit. It's a flaw in the language. Up to now I have put this stupid context into the type of the "wrapper" constructors functions, T1 and T2, but that turned out to be jolly inconvenient for generics, and record update, and other functions that build values of type T (because they don't have suitable dictionaries available). So now I've taken the stupid context out. I simply deal with it separately in the type checker on occurrences of a constructor, either in an expression or in a pattern. To this end * Lots of changes in DataCon, MkId * New function Inst.tcInstDataCon to instantiate a data constructor I also took the opportunity to * Rename dataConId --> dataConWorkId for consistency. * Tidied up MkId.rebuildConArgs quite a bit, and renamed it mkReboxingAlt * Add function DataCon.dataConExistentialTyVars, with the obvious meaning
-
- 11 Feb, 2002 1 commit
-
-
chak authored
******************************* * Merging from ghc-ndp-branch * ******************************* This commit merges the current state of the "parallel array extension" and includes the following: * (Almost) completed Milestone 1: - The option `-fparr' activates the H98 extension for parallel arrays. - These changes have a high likelihood of conflicting (in the CVS sense) with other changes to GHC and are the reason for merging now. - ToDo: There are still some (less often used) functions not implemented in `PrelPArr' and a mechanism is needed to automatically import `PrelPArr' iff `-fparr' is given. Documentation that should go into the Commentary is currently in `ghc/compiler/ndpFlatten/TODO'. * Partial Milestone 2: - The option `-fflatten' activates the flattening transformation and `-ndp' selects the "ndp" way (where all libraries have to be compiled with flattening). The way option `-ndp' automagically turns on `-fparr' and `-fflatten'. - Almost all changes are in the new directory `ndpFlatten' and shouldn't affect the rest of the compiler. The only exception are the options and the points in `HscMain' where the flattening phase is called when `-fflatten' is given. - This isn't usable yet, but already implements function lifting, vectorisation, and a new analysis that determines which parts of a module have to undergo the flattening transformation. Missing are data structure and function specialisation, the unboxed array library (including fusion rules), and lots of testing. I have just run the regression tests on the thing without any problems. So, it seems, as if we haven't broken anything crucial.
-
- 26 Nov, 2001 1 commit
-
-
simonpj authored
---------------------- Implement Rank-N types ---------------------- This commit implements the full glory of Rank-N types, using the Odersky/Laufer approach described in their paper "Putting type annotations to work". In fact, I've had to adapt their approach to deal with the full glory of Haskell (including pattern matching, and the scoped-type-variable extension). However, the result is: * There is no restriction to rank-2 types. You can nest forall's as deep as you like in a type. For example, you can write a type like p :: ((forall a. Eq a => a->a) -> Int) -> Int This is a rank-3 type, illegal in GHC 5.02 * When matching types, GHC uses the cunning Odersky/Laufer coercion rules. For example, suppose we have q :: (forall c. Ord c => c->c) -> Int Then, is this well typed? x :: Int x = p q Yes, it is, but GHC has to generate the right coercion. Here's what it looks like with all the big lambdas and dictionaries put in: x = p (\ f :: (forall a. Eq a => a->a) -> q (/\c \d::Ord c -> f c (eqFromOrd d))) where eqFromOrd selects the Eq superclass dictionary from the Ord dictionary: eqFromOrd :: Ord a -> Eq a * You can use polymorphic types in pattern type signatures. For example: f (g :: forall a. a->a) = (g 'c', g True) (Previously, pattern type signatures had to be monotypes.) * The basic rule for using rank-N types is that you must specify a type signature for every binder that you want to have a type scheme (as opposed to a plain monotype) as its type. However, you don't need to give the type signature on the binder (as I did above in the defn for f). You can give it in a separate type signature, thus: f :: (forall a. a->a) -> (Char,Bool) f g = (g 'c', g True) GHC will push the external type signature inwards, and use that information to decorate the binders as it comes across them. I don't have a *precise* specification of this process, but I think it is obvious enough in practice. * In a type synonym you can use rank-N types too. For example, you can write type IdFun = forall a. a->a f :: IdFun -> (Char,Bool) f g = (g 'c', g True) As always, type synonyms must always occur saturated; GHC expands them before it does anything else. (Still, GHC goes to some trouble to keep them unexpanded in error messages.) The main plan is as before. The main typechecker for expressions, tcExpr, takes an "expected type" as its argument. This greatly improves error messages. The new feature is that when this "expected type" (going down) meets an "actual type" (coming up) we use the new subsumption function TcUnify.tcSub which checks that the actual type can be coerced into the expected type (and produces a coercion function to demonstrate). The main new chunk of code is TcUnify.tcSub. The unifier itself is unchanged, but it has moved from TcMType into TcUnify. Also checkSigTyVars has moved from TcMonoType into TcUnify. Result: the new module, TcUnify, contains all stuff relevant to subsumption and unification. Unfortunately, there is now an inevitable loop between TcUnify and TcSimplify, but that's just too bad (a simple TcUnify.hi-boot file). All of this doesn't come entirely for free. Here's the typechecker line count (INCLUDING comments) Before 16,551 After 17,116
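A runnable version of the last example above (with the appropriate extension flags enabled):

    type IdFun = forall a. a -> a

    f :: IdFun -> (Char, Bool)
    f g = (g 'c', g True)

    main :: IO ()
    main = print (f id)    -- prints ('c',True)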
-
- 25 Oct, 2001 1 commit
-
-
sof authored
- Pet peeve removal / code tidyup, replaced various sub-optimal uses of 'length' with something a bit better, i.e., replaced the following patterns * length as `cmpOp` length bs * length as `cmpOp` val -- incl. uses where val == 1 and val == 0 * {take,drop,splitAt} (length as) bs * length [ () | pat <- as ] with uses of misc Util functions. I'd be surprised if there's a noticeable reduction in running times as a result of these changes, but every little bit helps. [ The changes have been tested wrt testsuite/ - I'm seeing a couple of unexpected breakages coming from CorePrep, but I'm currently assuming that these are due to other recent changes. ] - compMan/CompManager.lhs: restored 4.08 compilability + some code cleanup. None of these changes are HEADworthy.
-
- 12 Jul, 2001 1 commit
-
-
simonpj authored
-------------------------------------------- Fix another bug in the squash-newtypes story. -------------------------------------------- [This one was spotted by Marcin, and is now enshrined in test tc130.] The desugarer straddles the boundary between the type checker and Core, so it sometimes needs to look through newtypes/implicit parameters and sometimes not. This is really a bit painful, but I can't think of a better way to do it. The only simple way to fix things was to pass a bit more type information in the HsExpr type, from the type checker to the desugarer. That led to the non-local changes you can see. On the way I fixed one other thing. In various HsSyn constructors there is a Type that is bogus (bottom) before the type checker, and filled in with a real type by the type checker. In one place it was a (Maybe Type) which was Nothing before, and (Just ty) afterwards. I've defined a type synonym HsTypes.PostTcType for this, and a named bottom value HsTypes.placeHolderType to use when you want the bottom value.
-
- 25 Jun, 2001 1 commit
-
-
simonpj authored
---------------- Squash newtypes ---------------- This commit squashes newtypes and their coerces, from the typechecker onwards. The original idea was that the coerces would not get in the way of optimising transformations, but despite much effort they continue to do so. There's no very good reason to retain newtype information beyond the typechecker, so now we don't. Main points: * The post-typechecker suite of Type-manipulating functions is in types/Type.lhs, as before. But now there's a new suite in types/TcType.lhs. The difference is that in the former, newtypes are transparent, while in the latter they are opaque. The typechecker should only import TcType, not Type. * The operations in TcType are all non-monadic, and most of them start with "tc" (e.g. tcSplitTyConApp). All the monadic operations (used exclusively by the typechecker) are in a new module, typecheck/TcMType.lhs * I've grouped newtypes with predicate types, thus: data Type = TyVarTy Tyvar | .... | SourceTy SourceType data SourceType = NType TyCon [Type] | ClassP Class [Type] | IParam Type [SourceType was called PredType.] This is a little weird in some ways, because NTypes can't occur in qualified types. However, the idea is that a SourceType is a type that is opaque to the type checker, but transparent to the rest of the compiler, and newtypes fit that as do implicit parameters and dictionaries. * Recursive newtypes still retain their coerces, exactly as before. If they were transparent we'd get a recursive type, and that would make various bits of the compiler diverge (e.g. things which do type comparison). * I've removed types/Unify.lhs (non-monadic type unifier and matcher), merging it into TcType. Ditto typecheck/TcUnify.lhs (monadic unifier), merging it into TcMType.
-
- 18 May, 2001 1 commit
-
-
simonpj authored
----------------------------- Get unbox-strict-fields right ----------------------------- The problem was that when a library was compiled *without* -funbox-strict-fields, and the main program was compiled *with* that flag, we were wrongly treating the fields of imported data types as unboxed. To fix this I added an extra constructor to StrictnessMark to express whether the "!" annotation came from an interface file (don't fiddle) or a source file (decide whether to unbox). On the way I tidied things up: * StrictnessMark moves to Demand.lhs, and doesn't have the extra DataCon fields that kept it in DataCon before. * HsDecls.BangType has one constructor, not three, with a StrictnessMark field. * DataCon keeps track of its strictness signature (dcRepStrictness), but not its "user strict marks" (which were never used) * All the functions, like getUniquesDs, that used to take an Int saying how many uniques to allocate, now return an infinite list. This saves arguments and hassle. But it involved touching quite a few files. * rebuildConArgs takes a list of Uniques to use as its unique supply. This means I could combine DsUtils.rebuildConArgs with MkId.rebuildConArgs (hooray; the main point of the previous change) I also tidied up one or two error messages.
-
- 26 Feb, 2001 1 commit
-
-
simonmar authored
Implement do-style bindings on the GHCi command line. The syntax for a command-line is exactly that of a do statement, with the following meanings: - `pat <- expr' performs expr, and binds each of the variables in pat. - `let pat = expr; ...' binds each of the variables in pat, doesn't do any evaluation - `expr' behaves as `it <- expr' if expr is IO-typed, or `let it = expr' followed by `print it' otherwise.
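An illustrative session (made-up input, not from the commit); the final expression is not IO-typed, so it behaves as `let it = x + y' followed by `print it':

    Prelude> let x = 40
    Prelude> Just y <- return (Just 2)
    Prelude> x + y
    42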
-
- 07 Nov, 2000 1 commit
-
-
simonmar authored
This commit completes the merge of compiler part of the HEAD with the before-ghci-branch to before-ghci-branch-merged.
-
- 18 Oct, 2000 1 commit
-
-
sewardj authored
Make the desugarer compile.
-
- 28 Sep, 2000 1 commit
-
-
simonpj authored
------------------------------------ Mainly PredTypes (28 Sept 00) ------------------------------------ Three things in this commit: 1. Main thing: tidy up PredTypes 2. Move all Keys into PrelNames 3. Check for unboxed tuples in function args 1. Tidy up PredTypes ~~~~~~~~~~~~~~~~~~~~ The main thing in this commit is to modify the representation of Types so that they are a (much) better for the qualified-type world. This should simplify Jeff's life as he proceeds with implicit parameters and functional dependencies. In particular, PredType, introduced by Jeff, is now blessed and dignified with a place in TypeRep.lhs: data PredType = Class Class [Type] | IParam Name Type Consider these examples: f :: (Eq a) => a -> Int g :: (?x :: Int -> Int) => a -> Int h :: (r\l) => {r} => {l::Int | r} Here the "Eq a" and "?x :: Int -> Int" and "r\l" are all called *predicates*, and are represented by a PredType. (We don't support TREX records yet, but the setup is designed to expand to allow them.) In addition, Type gains an extra constructor: data Type = .... | PredTy PredType so that PredType is injected directly into Type. So the type p => t is represented by PredType p `FunTy` t I have deleted the hackish IPNote stuff; predicates are dealt with entirely through PredTys, not through NoteTy at all. 2. Move Keys into PrelNames ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is just a housekeeping operation. I've moved all the pre-assigned Uniques (aka Keys) from Unique.lhs into PrelNames.lhs. I've also moved knowKeyRdrNames from PrelInfo down into PrelNames. This localises in PrelNames lots of stuff about predefined names. Previously one had to alter three files to add one, now only one. 3. Unboxed tuples ~~~~~~~~~~~~~~~~~~ Add a static check for unboxed tuple arguments. E.g. data T = T (# Int, Int #) is illegal
-
- 22 Sep, 2000 1 commit
-
-
simonpj authored
-------------------------------------------------- Tidying up HsLit, and making it possible to define your own numeric library Simon PJ 22 Sept 00 -------------------------------------------------- ** NOTE: I did these changes on the aeroplane. They should compile, and the Prelude still compiles OK, but it's entirely possible that I've broken something The original reason for this many-file but rather shallow commit is that it's impossible in Haskell to write your own numeric library. Why? Because when you say '1' you get (Prelude.fromInteger 1), regardless of what you hide from the Prelude, or import from other libraries you have written. So the idea is to extend the -fno-implicit-prelude flag so that in addition to not importing the Prelude, you can rebind fromInteger -- Applied to literal constants fromRational -- Ditto negate -- Invoked by the syntax (-x) the (-) used when desugaring n+k patterns After toying with other designs, I eventually settled on a simple, crude one: rather than adding a new flag, I just extended the semantics of -fno-implicit-prelude so that uses of fromInteger, fromRational and negate are all bound to "whatever is in scope" rather than "the fixed Prelude functions". So if you say {-# OPTIONS -fno-implicit-prelude #-} module M where import MyPrelude( fromInteger ) x = 3 the literal 3 will use whatever (unqualified) "fromInteger" is in scope, in this case the one gotten from MyPrelude. On the way, though, I studied how HsLit worked, and did a substantial tidy up, deleting quite a lot of code along the way. In particular: * HsBasic.lhs is renamed HsLit.lhs. It defines the HsLit type. * There are now two HsLit types, both defined in HsLit. HsLit for non-overloaded literals (like 'x') HsOverLit for overloaded literals (like 1 and 2.3) * HsOverLit completely replaces Inst.OverloadedLit, which disappears. An HsExpr can now be an HsOverLit as well as an HsLit. * HsOverLit carries the Name of the fromInteger/fromRational operation, so that the renamer can help with looking up the unqualified name when -fno-implicit-prelude is on. Ditto the HsExpr for negation. It's all very tidy now. * RdrHsSyn contains the stuff that handles -fno-implicit-prelude (see esp RdrHsSyn.prelQual). RdrHsSyn also contains all the "smart constructors" used by the parser when building HsSyn. See for example RdrHsSyn.mkNegApp (previously the renamer (!) did the business of turning (- 3#) into -3#). * I tidied up the handling of "special ids" in the parser. There's much less duplication now. * Move Sven's Horner stuff to the desugarer, where it belongs. There's now a nice function DsUtils.mkIntegerLit which brings together related code from no fewer than three separate places into one single place. Nice! * A nice tidy-up in MatchLit.partitionEqnsByLit became possible. * Desugaring of HsLits is now much tidier (DsExpr.dsLit) * Some stuff to do with RdrNames is moved from ParseUtil.lhs to RdrHsSyn.lhs, which is where it really belongs. * I also removed many unnecessary imports from modules, and quite a bit of dead code in divers places.
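Expanding the MyPrelude example above into a fuller sketch (module contents are illustrative, not from the commit):

    -- MyPrelude.hs
    module MyPrelude (Nat, fromInteger) where
    import qualified Prelude as P

    newtype Nat = Nat P.Integer

    fromInteger :: P.Integer -> Nat
    fromInteger n = if n P.< 0 then P.error "Nat: negative literal" else Nat n

    -- M.hs, compiled with -fno-implicit-prelude:
    module M where
    import MyPrelude

    x :: Nat
    x = 3        -- now means   MyPrelude.fromInteger 3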
-
- 07 Sep, 2000 1 commit
-
-
simonpj authored
* Make the desugarer use string equality for string literal patterns longer than 1 character. And put a specialised eqString into PrelBase, with a suitable specialisation rule. This makes a huge difference to the size of the code generated by deriving(Read), notably in Time.lhs.
-
- 07 Aug, 2000 1 commit
-
-
qrczak authored
Now Char, Char#, StgChar have 31 bits (physically 32). "foo"# is still an array of bytes. CharRep represents 32 bits (on a 64-bit arch too). There is also Int8Rep, used in those places where bytes were originally meant. readCharArray, indexCharOffAddr etc. still use bytes. Storable and {I,M}Array use wide Chars. In future perhaps all sized integers should be primitive types. Then some usages of indexing primops scattered through the code could be changed to then-available Int8 ones, and then Char variants of primops could be made wide (other usages that handle text should use conversion that will be provided later). I/O and _ccall_ arguments assume ISO-8859-1. UTF-8 is internally used for string literals (only). Z-encoding is ready for Unicode identifiers. Ranges of intlike and charlike closures are more easily configurable. I've probably broken nativeGen/MachCode.lhs:chrCode for Alpha but I don't know the Alpha assembler to fix it (what is zapnot?). Generally I'm not sure if I've done the NCG changes right. This commit breaks the binary compatibility (of course). TODO: * is* and to{Lower,Upper} in Char (in progress). * Libraries for text conversion (in design / experiments), to be plugged to I/O and a higher level foreign library. * PackedString. * StringBuffer and accepting source in encodings other than ISO-8859-1.
-