\begin{code}
{-# OPTIONS -fno-warn-tabs #-}
-- The above warning suppression flag is a temporary kludge.
-- While working on this module you are encouraged to remove it and
-- detab the module (please do the detabbing in a separate patch). See
--     http://hackage.haskell.org/trac/ghc/wiki/Commentary/CodingStyle#TabsvsSpaces
-- for details

module TcCanonical(
    canonicalize, occurCheckExpand,
    emitWorkNC,
    StopOrContinue (..)
 ) where

#include "HsVersions.h"

import TcRnTypes
import TcType
import Type
import Kind
import TcEvidence
import Class
import TyCon
import TypeRep
import Var
import VarEnv
import Outputable
import Control.Monad    ( when )
import MonadUtils
import Control.Applicative ( (<|>) )
import TysWiredIn ( eqTyCon )
import VarSet
import TcSMonad
import FastString
import Util
import Maybes( catMaybes )
\end{code}


%************************************************************************
%*                                                                      *
%*                      The Canonicaliser                               *
%*                                                                      *
%************************************************************************

Note [Canonicalization]
~~~~~~~~~~~~~~~~~~~~~~~

Canonicalization converts a flat constraint to a canonical form. It is
unary (i.e. treats individual constraints one at a time), does not do
any zonking, but lives in the TcS monad because it needs to create fresh
variables (for flattening) and consult the inerts (for efficiency).

The execution plan for canonicalization is the following:

  1) Decomposition of equalities happens as necessary until we reach a
     variable or type family on one side. There is no decomposition step
     for other forms of constraints.

  2) If, when we decompose, we discover a variable at the head, then we
     look at inert_eqs from the current inert set for a substitution for
     this variable and continue decomposing. Hence we lazily apply the
     inert substitution if it is needed.

  3) If no more decomposition is possible, we deeply apply the
     substitution from the inert_eqs and continue with flattening.
  4) During flattening, we examine whether we have already flattened
     some function application by looking at all the CTyFunEqs with the
     same function in the inert set. The reason for deeply applying the
     inert substitution at step (3) is to maximise our chances of
     matching an already flattened family application in the inert set.

The net result is that a constraint coming out of the canonicalization
phase cannot be rewritten any further from the inerts (but maybe /it/
can rewrite an inert, or still interact with an inert in a further
phase of the simplifier).

\begin{code}

-- Informative results of canonicalization
data StopOrContinue
  = ContinueWith Ct   -- Either no canonicalization happened, or if some did
                      -- happen, it is still safe to just keep going with this
                      -- work item.
  | Stop              -- Some canonicalization happened, extra work is now in
                      -- the TcS WorkList.

instance Outputable StopOrContinue where
  ppr Stop             = ptext (sLit "Stop")
  ppr (ContinueWith w) = ptext (sLit "ContinueWith") <+> ppr w


continueWith :: Ct -> TcS StopOrContinue
continueWith = return . ContinueWith

andWhenContinue :: TcS StopOrContinue
                -> (Ct -> TcS StopOrContinue)
                -> TcS StopOrContinue
andWhenContinue tcs1 tcs2
  = do { r <- tcs1
       ; case r of
           Stop            -> return Stop
           ContinueWith ct -> tcs2 ct }
\end{code}

Note [Caching for canonicals]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Our plan with pre-canonicalization is to be able to solve a constraint
really fast from existing bindings in TcEvBinds. So one may think that
the condition (isCNonCanonical) is not necessary.  However consider
the following setup:

InertSet = { [W] d1 : Num t }
WorkList = { [W] d2 : Num t, [W] c : t ~ Int}

Now, we prioritize equalities, but in our concrete example
(should_run/mc17.hs) the first (d2) constraint is dealt with first,
because (t ~ Int) is an equality that only later appears in the
worklist, since it is pulled out from a nested implication constraint.
So, let's examine what happens:

 - We encounter work item (d2 : Num t)

 - Nothing is yet in EvBinds, so we reach the interaction with inerts
   and set:
        d2 := d1
   and we discard d2 from the worklist. The inert set remains
   unaffected.

 - Now the equation ([W] c : t ~ Int) is encountered and kicks-out
   (d1 : Num t) from the inerts.  Then that equation gets
   spontaneously solved, perhaps. We end up with:
      InertSet : { [G] c : t ~ Int }
      WorkList : { [W] d1 : Num t}

 - Now we examine (d1), we observe that there is a binding for (Num t)
   in the evidence binds and we set:
        d1 := d2
   and end up in a loop!

Now, the constraints that get kicked out from the inert set are always
Canonical, so by restricting the use of the pre-canonicalizer to
NonCanonical constraints we eliminate this danger. Moreover, for
canonical constraints we already have good caching mechanisms
(effectively the interaction solver) and we are interested in reducing
things like superclasses of the same non-canonical constraint being
generated, hence I don't expect us to lose a lot by introducing the
(isCNonCanonical) restriction.

A similar situation can arise in TcSimplify, at the end of the
solve_wanteds function, where constraints from the inert set are
returned as new work -- our substCt ensures however that if they are
not rewritten by subst, they remain canonical and hence we will not
attempt to solve them from the EvBinds. If on the other hand they did
get rewritten and are now non-canonical they will still not match the
EvBinds, so we are again good.
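The canonicalizer below threads its result through StopOrContinue,
chaining stages with `andWhenContinue` so that a Stop short-circuits
the rest of the pipeline. As a standalone illustration of that
early-exit pattern (a sketch only, not part of this module: the names
Pipeline, step1, stepDefer are hypothetical, and plain state-passing
stands in for the TcS monad):

```haskell
-- Illustrative sketch only, not part of this module.  A "work item"
-- is just a String, and the work list is threaded explicitly.
data StopOrGo = Go String | Halt deriving (Eq, Show)

type Pipeline = [String] -> (StopOrGo, [String])

-- Analogue of andWhenContinue: run the second stage only if the
-- first stage said Go; a Halt short-circuits the chain.
andThen :: Pipeline -> (String -> Pipeline) -> Pipeline
andThen p k wl = case p wl of
                   (Halt,  wl') -> (Halt, wl')
                   (Go it, wl') -> k it wl'

-- step1 rewrites the item and continues; stepDefer pushes the item
-- onto the work list and stops (like emitting work and returning Stop).
step1, stepDefer :: String -> Pipeline
step1     it wl = (Go (it ++ "-canon"), wl)
stepDefer it wl = (Halt, it : wl)

-- andThen (step1 "c1") stepDefer []  ==>  (Halt, ["c1-canon"])
```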
\begin{code}

-- Top-level canonicalization
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~

canonicalize :: Ct -> TcS StopOrContinue
canonicalize ct@(CNonCanonical { cc_ev = ev, cc_loc = d })
  = do { traceTcS "canonicalize (non-canonical)" (ppr ct)
       ; {-# SCC "canEvVar" #-}
         canEvNC d ev }

canonicalize (CDictCan { cc_loc  = d
                       , cc_ev = ev
                       , cc_class  = cls
                       , cc_tyargs = xis })
  = {-# SCC "canClass" #-}
    canClass d ev cls xis -- Do not add any superclasses
canonicalize (CTyEqCan { cc_loc  = d
                       , cc_ev = ev
                       , cc_tyvar  = tv
                       , cc_rhs    = xi })
  = {-# SCC "canEqLeafTyVarEq" #-}
    canEqLeafTyVarEq d ev tv xi
canonicalize (CFunEqCan { cc_loc = d
                        , cc_ev = ev
                        , cc_fun    = fn
                        , cc_tyargs = xis1
                        , cc_rhs    = xi2 })
  = {-# SCC "canEqLeafFunEq" #-}
    canEqLeafFunEq d ev fn xis1 xi2

canonicalize (CIrredEvCan { cc_ev = ev
                          , cc_loc = d })
  = canIrred d ev
canonicalize (CHoleCan { cc_ev = ev, cc_loc = d })
  = canHole d ev

canEvNC :: CtLoc -> CtEvidence -> TcS StopOrContinue
-- Called only for non-canonical EvVars
canEvNC d ev
  = case classifyPredType (ctEvPred ev) of
      ClassPred cls tys -> canClassNC d ev cls tys
      EqPred ty1 ty2    -> canEqNC    d ev ty1 ty2
      TuplePred tys     -> canTuple   d ev tys
      IrredPred {}      -> canIrred   d ev
\end{code}


%************************************************************************
%*                                                                      *
%*                      Tuple Canonicalization
%*                                                                      *
%************************************************************************

\begin{code}
canTuple :: CtLoc -> CtEvidence -> [PredType] -> TcS StopOrContinue
canTuple d ev tys
  = do { traceTcS "can_pred" (text "TuplePred!")
       ; let xcomp = EvTupleMk
             xdecomp x = zipWith (\_ i -> EvTupleSel x i) tys [0..]
       ; ctevs <- xCtFlavor ev tys (XEvTerm xcomp xdecomp)
       ; canEvVarsCreated d ctevs }
\end{code}


%************************************************************************
%*                                                                      *
%*                      Class Canonicalization
%*                                                                      *
%************************************************************************

\begin{code}
canClass, canClassNC :: CtLoc
                     -> CtEvidence
                     -> Class -> [Type] -> TcS StopOrContinue
-- Precondition: EvVar is class evidence
-- The canClassNC version is used on non-canonical constraints
-- and adds superclasses.  The plain canClass version is used
-- for already-canonical class constraints (but which might have
-- been substituted or something), and hence does not need superclasses

canClassNC d ev cls tys
  = canClass d ev cls tys
    `andWhenContinue` emitSuperclasses

canClass d ev cls tys
  = do { -- sctx <- getTcSContext
       ; (xis, cos) <- flattenMany d FMFullFlatten (ctEvFlavour ev) tys
       ; let co = mkTcTyConAppCo (classTyCon cls) cos
             xi = mkClassPred cls xis

       ; mb <- rewriteCtFlavor ev xi co

       ; case mb of
           Just new_ev ->
             let (ClassPred cls xis_for_dict) = classifyPredType (ctEvPred new_ev)
             in continueWith $
                CDictCan { cc_ev = new_ev, cc_loc = d
                         , cc_tyargs = xis_for_dict, cc_class = cls }
           Nothing -> return Stop }

emitSuperclasses :: Ct -> TcS StopOrContinue
emitSuperclasses ct@(CDictCan { cc_loc = d, cc_ev = ev
                              , cc_tyargs = xis_new, cc_class = cls })
            -- Add superclasses of this one here, see Note [Adding superclasses].
            -- But only if we are not simplifying the LHS of a rule.
 = do { newSCWorkFromFlavored d ev cls xis_new
        -- Arguably we should "seq" the coercions if they are derived,
        -- as we do below for emit_kind_constraint, to allow errors in
        -- superclasses to be executed if deferred to runtime!
      ; continueWith ct }
emitSuperclasses _ = panic "emit_superclasses of non-class!"
\end{code}

Note [Adding superclasses]
~~~~~~~~~~~~~~~~~~~~~~~~~~
Since dictionaries are canonicalized only once in their lifetime, the
place to add their superclasses is canonicalisation. (The alternative
would be to do it during constraint solving, but we'd have to be
extremely careful not to repeatedly introduce the same superclass in
our worklist.) Here is what we do:

For Givens:
       We add all their superclasses as Givens.

For Wanteds:
       Generally speaking we want to be able to add superclasses of
       wanteds for two reasons:

       (1) Opportunities for improvement. Example:
                  class (a ~ b) => C a b
           Wanted constraint is: C alpha beta
           We'd like to simply have C alpha alpha. Similar
           situations arise in relation to functional dependencies.

       (2) To have minimal constraints to quantify over:
           For instance, if our wanted constraint is (Eq a, Ord a)
           we'd only like to quantify over Ord a.

       To deal with (1) above we only add the superclasses of wanteds
       which may lead to improvement, that is: equality superclasses or
       superclasses with functional dependencies.

       We deal with (2) completely independently in TcSimplify. See
       Note [Minimize by SuperClasses] in TcSimplify.

       Moreover, in all cases the extra improvement constraints are
       Derived. Derived constraints have an identity (for now), but
       we don't do anything with their evidence. For instance they
       are never used to rewrite other constraints.

       See also [New Wanted Superclass Work] in TcInteract.

For Deriveds:
       We do nothing.

Here's an example that demonstrates why we chose to NOT add
superclasses during simplification: [Comes from ticket #4497]

   class Num (RealOf t) => Normed t
   type family RealOf x

Assume the generated wanted constraint is:
   RealOf e ~ e, Normed e

If we were to be adding the superclasses during simplification we'd get:
   Num uf, Normed e, RealOf e ~ e, RealOf e ~ uf
==>
   e ~ uf, Num uf, Normed e, RealOf e ~ e
==> [Spontaneous solve]
   Num uf, Normed uf, RealOf uf ~ uf

Which looks exactly like our original constraint.
If we add the superclass again we'd loop.  By adding superclasses
definitely only once, during canonicalisation, this situation can't
happen.

\begin{code}
newSCWorkFromFlavored :: CtLoc -- Depth
                      -> CtEvidence -> Class -> [Xi] -> TcS ()
-- Returns superclasses, see Note [Adding superclasses]
newSCWorkFromFlavored d flavor cls xis
  | isDerived flavor
  = return ()  -- Deriveds don't yield more superclasses because we will
               -- add them transitively in the case of wanteds.

  | isGiven flavor
  = do { let sc_theta = immSuperClasses cls xis
             xev_decomp x = zipWith (\_ i -> EvSuperClass x i) sc_theta [0..]
             xev = XEvTerm { ev_comp   = panic "Can't compose for given!"
                           , ev_decomp = xev_decomp }
       ; ctevs <- xCtFlavor flavor sc_theta xev
       ; emitWorkNC d ctevs }

  | isEmptyVarSet (tyVarsOfTypes xis)
  = return () -- Wanteds with no variables yield no deriveds.
              -- See Note [Improvement from Ground Wanteds]

  | otherwise -- Wanted case, just add those SC that can lead to improvement.
  = do { let sc_rec_theta = transSuperClasses cls xis
             impr_theta   = filter is_improvement_pty sc_rec_theta
       ; traceTcS "newSCWork/Derived" $ text "impr_theta =" <+> ppr impr_theta
       ; mb_der_evs <- mapM newDerived impr_theta
       ; emitWorkNC d (catMaybes mb_der_evs) }

is_improvement_pty :: PredType -> Bool
-- Either it's an equality, or has some functional dependency
is_improvement_pty ty = go (classifyPredType ty)
  where
    go (EqPred {})          = True
    go (ClassPred cls _tys) = not $ null fundeps
      where (_,fundeps) = classTvsFds cls
    go (TuplePred ts)       = any is_improvement_pty ts
    go (IrredPred {})       = True -- Might have equalities after reduction?
\end{code}


%************************************************************************
%*                                                                      *
%*                      Irreducibles canonicalization
%*                                                                      *
%************************************************************************

\begin{code}
canIrred :: CtLoc -> CtEvidence -> TcS StopOrContinue
-- Precondition: ty not a tuple and no other evidence form
canIrred d ev
  = do { let ty = ctEvPred ev
       ; traceTcS "can_pred" (text "IrredPred = " <+> ppr ty)
       ; (xi,co) <- flatten d FMFullFlatten (ctEvFlavour ev) ty -- co :: xi ~ ty
       ; let no_flattening = xi `eqType` ty
             -- We can't use isTcReflCo, because even if the coercion is
             -- Refl, the output type might have had a substitution
             -- applied to it.  For example  'a' might now be 'C b'
       ; if no_flattening then
           continueWith $ CIrredEvCan { cc_ev = ev, cc_loc = d }
         else do
       { mb <- rewriteCtFlavor ev xi co
       ; case mb of
           Just new_ev -> canEvNC d new_ev  -- Re-classify and try again
           Nothing     -> return Stop } }   -- Found a cached copy

canHole :: CtLoc -> CtEvidence -> TcS StopOrContinue
canHole d ev
  = do { let ty = ctEvPred ev
       ; (xi,co) <- flatten d FMFullFlatten (ctEvFlavour ev) ty -- co :: xi ~ ty
       ; mb <- rewriteCtFlavor ev xi co
       ; case mb of
           Just new_ev -> emitInsoluble (CHoleCan { cc_ev = new_ev, cc_loc = d})
           Nothing     -> return ()   -- Found a cached copy; won't happen
       ; return Stop }
\end{code}


%************************************************************************
%*                                                                      *
%*        Flattening (eliminating all function symbols)                 *
%*                                                                      *
%************************************************************************

Note [Flattening]
~~~~~~~~~~~~~~~~~
  flatten ty  ==>  (xi, cc)
    where
      xi has no type functions, unless they appear under ForAlls
      cc = Auxiliary given (equality) constraints constraining
           the fresh type variables in xi.  Evidence for these
           is always the identity coercion, because internally the
           fresh flattening skolem variables are actually identified
           with the types they have been generated to stand in for.
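As a standalone illustration of the specification above (a sketch over
a hypothetical mini type grammar, not GHC's real Type; the names Ty,
Eqn, flattenTy are invented for this example): flattening replaces
each type-function application, innermost first, with a fresh skolem
and records the defining equation.

```haskell
-- Illustrative sketch only, not part of this module.
data Ty = Con String [Ty]   -- ordinary constructor application
        | Fn  String [Ty]   -- type-function application
        | Var String
        deriving (Eq, Show)

type Eqn = (Ty, String)     -- (Fn args ~ alpha)

-- flattenTy n ty = (xi, n', eqns): xi is function-free, and every Fn
-- has been replaced by a fresh skolem alphaN with a recorded equation.
flattenTy :: Int -> Ty -> (Ty, Int, [Eqn])
flattenTy n (Var v)    = (Var v, n, [])
flattenTy n (Con c ts) = let (ts', n', es) = goArgs n ts
                         in (Con c ts', n', es)
flattenTy n (Fn f ts)  = let (ts', n', es) = goArgs n ts   -- arguments first
                             sk = "alpha" ++ show n'
                         in (Var sk, n' + 1, es ++ [(Fn f ts', sk)])

goArgs :: Int -> [Ty] -> ([Ty], Int, [Eqn])
goArgs n []     = ([], n, [])
goArgs n (t:ts) = let (t', n1, e1)  = flattenTy n t
                      (ts', n2, e2) = goArgs n1 ts
                  in (t' : ts', n2, e1 ++ e2)
```

For instance, flattening List (F (G Int)) in this grammar yields
List alpha1 together with the equations [G Int ~ alpha0,
F alpha0 ~ alpha1], matching the specification above.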
Note that it is flatten's job to flatten *every type function it sees*.
flatten is only called on *arguments* to type functions, by canEqGiven.

Recall that in comments we use alpha[flat = ty] to represent a
flattening skolem variable alpha which has been generated to stand in
for ty.

----- Example of flattening a constraint: ------
  flatten (List (F (G Int)))  ==>  (xi, cc)
    where
      xi = List alpha
      cc = { G Int ~ beta[flat = G Int],
             F beta ~ alpha[flat = F beta] }
Here
  * alpha and beta are 'flattening skolem variables'.
  * All the constraints in cc are 'given', and all their coercion terms
    are the identity.

NB: Flattening Skolems only occur in canonical constraints, which
are never zonked, so we don't need to worry about zonking doing
accidental unflattening.

Note that we prefer to leave type synonyms unexpanded when possible,
so when the flattener encounters one, it first asks whether its
transitive expansion contains any type function applications.  If so,
it expands the synonym and proceeds; if not, it simply returns the
unexpanded synonym.

\begin{code}
data FlattenMode = FMSubstOnly | FMFullFlatten

-- Flatten a bunch of types all at once.
flattenMany :: CtLoc -> FlattenMode
            -> CtFlavour -> [Type] -> TcS ([Xi], [TcCoercion])
-- Coercions :: Xi ~ Type
-- NB: The EvVar inside the 'ctxt :: CtEvidence' is unused,
-- we merely want (a) Given/Solved/Derived/Wanted info
--                (b) the GivenLoc/WantedLoc for when we create new evidence
flattenMany d f ctxt tys
  = -- pprTrace "flattenMany" empty $
    go tys
  where go []       = return ([],[])
        go (ty:tys) = do { (xi,co)   <- flatten d f ctxt ty
                         ; (xis,cos) <- go tys
                         ; return (xi:xis, co:cos) }

-- Flatten a type to get rid of type function applications, returning
-- the new type-function-free type, and a collection of new equality
-- constraints.  See Note [Flattening] for more detail.
flatten :: CtLoc -> FlattenMode
        -> CtFlavour -> TcType -> TcS (Xi, TcCoercion)
-- Postcondition: Coercion :: Xi ~ TcType
flatten loc f ctxt ty
  | Just ty' <- tcView ty
  = do { (xi, co) <- flatten loc f ctxt ty'
       ; if eqType xi ty then return (ty, co)
                         else return (xi, co) }
       -- Small tweak for better error messages

flatten _ _ _ xi@(LitTy {}) = return (xi, mkTcReflCo xi)

flatten loc f ctxt (TyVarTy tv)
  = flattenTyVar loc f ctxt tv

flatten loc f ctxt (AppTy ty1 ty2)
  = do { (xi1,co1) <- flatten loc f ctxt ty1
       ; (xi2,co2) <- flatten loc f ctxt ty2
       ; return (mkAppTy xi1 xi2, mkTcAppCo co1 co2) }

flatten loc f ctxt (FunTy ty1 ty2)
  = do { (xi1,co1) <- flatten loc f ctxt ty1
       ; (xi2,co2) <- flatten loc f ctxt ty2
       ; return (mkFunTy xi1 xi2, mkTcFunCo co1 co2) }

flatten loc f ctxt (TyConApp tc tys)
  -- For a normal type constructor or data family application, we just
  -- recursively flatten the arguments.
  | not (isSynFamilyTyCon tc)
  = do { (xis,cos) <- flattenMany loc f ctxt tys
       ; return (mkTyConApp tc xis, mkTcTyConAppCo tc cos) }

  -- Otherwise, it's a type function application, and we have to
  -- flatten it away as well, and generate a new given equality constraint
  -- between the application and a newly generated flattening skolem variable.
  | otherwise
  = ASSERT( tyConArity tc <= length tys )  -- Type functions are saturated
    do { (xis, cos) <- flattenMany loc f ctxt tys
       ; let (xi_args, xi_rest)   = splitAt (tyConArity tc) xis
             (cos_args, cos_rest) = splitAt (tyConArity tc) cos
             -- The type function might be *over* saturated
             -- in which case the remaining arguments should
             -- be dealt with by AppTys
             fam_ty = mkTyConApp tc xi_args

       ; (ret_co, rhs_xi) <-
           case f of
             FMSubstOnly ->
               return (mkTcReflCo fam_ty, fam_ty)
             FMFullFlatten ->
               do { mb_ct <- lookupFlatEqn fam_ty
                  ; case mb_ct of
                      Just ct
                        | let ctev = cc_ev ct
                              flav = ctEvFlavour ctev
                        , flav `canRewrite` ctxt
                        -> -- You may think that we can just return (cc_rhs ct) but not so.
                           --     return (mkTcCoVarCo (ctId ct), cc_rhs ct, [])
                           -- The cached constraint resides in the cache so we have to flatten
                           -- the rhs to make sure we have applied any inert substitution to it.
                           -- Alternatively we could be applying the inert substitution to the
                           -- cache as well when we interact an equality with the inert.
                           -- The design choice is: do we keep the flat cache rewritten or not?
                           -- For now I say we don't keep it fully rewritten.
                          do { traceTcS "flatten/flat-cache hit" $ ppr ct
                             ; let rhs_xi = cc_rhs ct
                             ; (flat_rhs_xi,co) <- flatten (cc_loc ct) f flav rhs_xi
                             ; let final_co = evTermCoercion (ctEvTerm ctev)
                                              `mkTcTransCo` mkTcSymCo co
                             ; return (final_co, flat_rhs_xi) }

                      _ | Given <- ctxt   -- Given: make new flatten skolem
                        -> do { traceTcS "flatten/flat-cache miss" $ empty
                              ; rhs_ty <- newFlattenSkolemTy fam_ty
                              ; let co = mkTcReflCo fam_ty
                                    new_ev = CtGiven { ctev_pred = mkTcEqPred fam_ty rhs_ty
                                                     , ctev_evtm = EvCoercion co }
                                    ct = CFunEqCan { cc_ev = new_ev
                                                   , cc_fun = tc
                                                   , cc_tyargs = xi_args
                                                   , cc_rhs = rhs_ty
                                                   , cc_loc = loc }
                              ; updWorkListTcS $ extendWorkListEq ct
                              ; return (co, rhs_ty) }

                        | otherwise  -- Wanted or Derived: make new unification variable
                        -> do { traceTcS "flatten/flat-cache miss" $ empty
                              ; rhs_xi_var <- newFlexiTcSTy (typeKind fam_ty)
                              ; let pred = mkTcEqPred fam_ty rhs_xi_var
                              ; mw <- newWantedEvVar pred
                              ; case mw of
                                  Fresh ctev ->
                                    do { let ct = CFunEqCan { cc_ev = ctev
                                                            , cc_fun = tc
                                                            , cc_tyargs = xi_args
                                                            , cc_rhs = rhs_xi_var
                                                            , cc_loc = loc }
                                       ; updWorkListTcS $ extendWorkListEq ct
                                       ; return (evTermCoercion (ctEvTerm ctev), rhs_xi_var) }
                                  Cached {} -> panic "flatten TyConApp, var must be fresh!"
                           }
                  }
         -- Emit the flat constraints
       ; return ( mkAppTys rhs_xi xi_rest -- NB mkAppTys: rhs_xi might not be a type variable
                                          -- cf Trac #5655
                , mkTcAppCos (mkTcSymCo ret_co `mkTcTransCo` mkTcTyConAppCo tc cos_args) $
                  cos_rest ) }

flatten loc _f ctxt ty@(ForAllTy {})
-- We allow for-alls when, but only when, no type function
-- applications inside the forall involve the bound type variables.
  = do { let (tvs, rho) = splitForAllTys ty
       ; (rho', co) <- flatten loc FMSubstOnly ctxt rho
                       -- Substitute only under a forall
                       -- See Note [Flattening under a forall]
       ; return (mkForAllTys tvs rho', foldr mkTcForAllCo co tvs) }
\end{code}

Note [Flattening under a forall]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Under a forall, we
  (a) MUST apply the inert substitution
  (b) MUST NOT flatten type family applications
Hence FMSubstOnly.

For (a) consider   c ~ a, a ~ T (forall b. (b, [c]))
If we don't apply the c~a substitution to the second constraint
we won't see the occurs-check error.

For (b) consider  (a ~ forall b. F a b), we don't want to flatten
to     (a ~ forall b. fsk, F a b ~ fsk)
because now the 'b' has escaped its scope.  We'd have to flatten to
       (a ~ forall b. fsk b, forall b. F a b ~ fsk b)
and we have not begun to think about how to make that work!

\begin{code}
flattenTyVar, flattenFinalTyVar
  :: CtLoc -> FlattenMode -> CtFlavour -> TcTyVar -> TcS (Xi, TcCoercion)
-- "Flattening" a type variable means to apply the substitution to it
-- The substitution is actually the union of the substitution in the TyBinds
-- for the unification variables that have been unified already with the inert
-- equalities, see Note [Spontaneously solved in TyBinds] in TcInteract.

flattenTyVar loc f ctxt tv
  | not (isTcTyVar tv)               -- Happens when flatten under a (forall a. ty)
  = flattenFinalTyVar loc f ctxt tv  -- So ty contains references to the non-TcTyVar a
  | otherwise
  = do { mb_ty <- isFilledMetaTyVar_maybe tv
       ; case mb_ty of {
           Just ty -> flatten loc f ctxt ty ;
           Nothing ->

    -- Try in ty_binds
    do { ty_binds <- getTcSTyBindsMap
       ; case lookupVarEnv ty_binds tv of {
           Just (_tv,ty) -> flatten loc f ctxt ty ;
               -- NB: ty_binds coercions are all ReflCo,
               -- so no need to transitively compose co' with another coercion,
               -- unlike in 'flatten_from_inerts'
           Nothing ->

    -- Try in the inert equalities
    do { ieqs <- getInertEqs
       ; let mco = tv_eq_subst ieqs tv  -- co : v ~ ty
       ; case mco of {
           Just (co,ty) ->
             do { (ty_final,co') <- flatten loc f ctxt ty
                ; return (ty_final, co' `mkTcTransCo` mkTcSymCo co) } ;
                -- NB recursive call.
                -- Why? Because the inert substitution is non-idempotent,
                -- see Note [Detailed InertCans Invariants]
                -- In fact, because of flavors, it couldn't possibly be idempotent,
                -- this is explained in Note [Non-idempotent inert substitution]
           Nothing -> flattenFinalTyVar loc f ctxt tv
    } } } } } }
  where
    tv_eq_subst subst tv
      | Just ct <- lookupVarEnv subst tv
      , let ctev = cc_ev ct
      , ctEvFlavour ctev `canRewrite` ctxt
      = Just (evTermCoercion (ctEvTerm ctev), cc_rhs ct)
            -- NB: even if ct is Derived we are not going to
            -- touch the actual coercion so we are fine.
      | otherwise = Nothing

flattenFinalTyVar loc f ctxt tv
  = -- Done, but make sure the kind is zonked
    do { let knd = tyVarKind tv
       ; (new_knd,_kind_co) <- flatten loc f ctxt knd
       ; let ty = mkTyVarTy (setVarType tv new_knd)
       ; return (ty, mkTcReflCo ty) }
\end{code}

Note [Non-idempotent inert substitution]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The inert substitution is not idempotent in the broad sense. It is
only idempotent in that it cannot rewrite the RHS of other inert
equalities any further. An example of such an inert substitution is:

 [G] g1 : ta8 ~ ta4
 [W] g2 : ta4 ~ a5Fj

Observe that the wanted cannot rewrite the solved goal, despite the
fact that ta4 appears on an RHS of an equality.
Now, imagine a constraint:

 [W] g3 : ta8 ~ Int

coming in. If we simply apply the inert substitution once, we will get:

 [W] g3_1 : ta4 ~ Int

and because potentially ta4 is untouchable we will try to insert g3_1
in the inert set, getting a panic since the inert set only allows ONE
equation per LHS type variable (as it should).

For this reason, when we reach to flatten a type variable, we flatten
it recursively, so that we can make sure that the inert substitution
/is/ fully applied.

Insufficient (non-recursive) rewriting was the reason for #5668.


%************************************************************************
%*                                                                      *
%*                             Equalities
%*                                                                      *
%************************************************************************

\begin{code}
canEvVarsCreated :: CtLoc -> [CtEvidence] -> TcS StopOrContinue
canEvVarsCreated _loc [] = return Stop
    -- Add all but one to the work list
    -- and return the first (if any) for further processing
canEvVarsCreated loc (ev : evs)
  = do { emitWorkNC loc evs; canEvNC loc ev }
          -- Note the "NC": these are fresh goals, not necessarily canonical

emitWorkNC :: CtLoc -> [CtEvidence] -> TcS ()
emitWorkNC loc evs
  | null evs  = return ()
  | otherwise = updWorkListTcS (extendWorkListCts (map mk_nc evs))
  where mk_nc ev = CNonCanonical { cc_ev = ev, cc_loc = loc }

-------------------------
canEqNC, canEq :: CtLoc -> CtEvidence -> Type -> Type -> TcS StopOrContinue

canEqNC loc ev ty1 ty2
  = canEq loc ev ty1 ty2
    `andWhenContinue` emitKindConstraint

canEq _loc ev ty1 ty2
  | eqType ty1 ty2      -- Dealing with equality here avoids
                        -- later spurious occurs checks for a~a
  = if isWanted ev then
      setEvBind (ctev_evar ev) (EvCoercion (mkTcReflCo ty1)) >> return Stop
    else
      return Stop

-- If one side is a variable, orient and flatten,
-- WITHOUT expanding type synonyms, so that we tend to
-- substitute a ~ Age rather than a ~ Int when @type Age = Int@
canEq loc ev ty1@(TyVarTy {}) ty2
  = canEqLeaf loc ev ty1 ty2
canEq loc ev ty1 ty2@(TyVarTy {})
  = canEqLeaf loc ev ty1 ty2

-- See Note [Naked given applications]
canEq loc ev ty1 ty2
  | Just ty1' <- tcView ty1 = canEq loc ev ty1' ty2
  | Just ty2' <- tcView ty2 = canEq loc ev ty1  ty2'

canEq loc ev ty1@(TyConApp fn tys) ty2
  | isSynFamilyTyCon fn, length tys == tyConArity fn
  = canEqLeaf loc ev ty1 ty2
canEq loc ev ty1 ty2@(TyConApp fn tys)
  | isSynFamilyTyCon fn, length tys == tyConArity fn
  = canEqLeaf loc ev ty1 ty2

canEq loc ev ty1 ty2
  | Just (tc1,tys1) <- tcSplitTyConApp_maybe ty1
  , Just (tc2,tys2) <- tcSplitTyConApp_maybe ty2
  , isDecomposableTyCon tc1 && isDecomposableTyCon tc2
  = canDecomposableTyConApp loc ev tc1 tys1 tc2 tys2

canEq loc ev s1@(ForAllTy {}) s2@(ForAllTy {})
 | tcIsForAllTy s1, tcIsForAllTy s2
 , CtWanted { ctev_evar = orig_ev } <- ev
 = do { let (tvs1,body1) = tcSplitForAllTys s1
            (tvs2,body2) = tcSplitForAllTys s2
      ; if not (equalLength tvs1 tvs2) then
          canEqFailure loc ev s1 s2
        else
          do { traceTcS "Creating implication for polytype equality" $ ppr ev
             ; deferTcSForAllEq (loc,orig_ev) (tvs1,body1) (tvs2,body2)
             ; return Stop } }
 | otherwise
 = do { traceTcS "Omitting decomposition of given polytype equality" $
        pprEq s1 s2    -- See Note [Do not decompose given polytype equalities]
      ; return Stop }

-- The last remaining source of success is an application
-- e.g.  F a b ~ Maybe c   where F has arity 1
-- See Note [Equality between type applications]
--     Note [Care with type applications] in TcUnify
canEq loc ev ty1 ty2
  = do { let flav = ctEvFlavour ev
       ; (s1, co1) <- flatten loc FMSubstOnly flav ty1
       ; (s2, co2) <- flatten loc FMSubstOnly flav ty2
       ; mb_ct <- rewriteCtFlavor ev (mkTcEqPred s1 s2) (mkHdEqPred s2 co1 co2)
       ; case mb_ct of
           Nothing     -> return Stop
           Just new_ev -> last_chance new_ev s1 s2 }
  where
    last_chance ev ty1 ty2
      | Just (tc1,tys1) <- tcSplitTyConApp_maybe ty1
      , Just (tc2,tys2) <- tcSplitTyConApp_maybe ty2
      , isDecomposableTyCon tc1 && isDecomposableTyCon tc2
      = canDecomposableTyConApp loc ev tc1 tys1 tc2 tys2

      | Just (s1,t1) <- tcSplitAppTy_maybe ty1
      , Just (s2,t2) <- tcSplitAppTy_maybe ty2
      = do { let xevcomp [x,y] = EvCoercion (mkTcAppCo (evTermCoercion x) (evTermCoercion y))
                 xevcomp _ = error "canEqAppTy: can't happen" -- Can't happen
                 xevdecomp x = let xco = evTermCoercion x
                               in [ EvCoercion (mkTcLRCo CLeft xco)
                                  , EvCoercion (mkTcLRCo CRight xco) ]
           ; ctevs <- xCtFlavor ev [mkTcEqPred s1 s2, mkTcEqPred t1 t2]
                                (XEvTerm xevcomp xevdecomp)
           ; canEvVarsCreated loc ctevs }

      | otherwise
      = do { emitInsoluble (CNonCanonical { cc_ev = ev, cc_loc = loc })
           ; return Stop }

------------------------
canDecomposableTyConApp :: CtLoc -> CtEvidence
                        -> TyCon -> [TcType]
                        -> TyCon -> [TcType]
                        -> TcS StopOrContinue
canDecomposableTyConApp loc ev tc1 tys1 tc2 tys2
  | tc1 /= tc2 || length tys1 /= length tys2
    -- Fail straight away for better error messages
  = canEqFailure loc ev (mkTyConApp tc1 tys1) (mkTyConApp tc2 tys2)
  | otherwise
  = do { let xcomp xs  = EvCoercion (mkTcTyConAppCo tc1 (map evTermCoercion xs))
             xdecomp x = zipWith (\_ i -> EvCoercion $ mkTcNthCo i (evTermCoercion x)) tys1 [0..]
             xev = XEvTerm xcomp xdecomp
       ; ctevs <- xCtFlavor ev (zipWith mkTcEqPred tys1 tys2) xev
       ; canEvVarsCreated loc ctevs }

canEqFailure :: CtLoc -> CtEvidence -> TcType -> TcType -> TcS StopOrContinue
-- See Note [Make sure that insolubles are fully rewritten]
canEqFailure loc ev ty1 ty2
  = do { let flav = ctEvFlavour ev
       ; (s1, co1) <- flatten loc FMSubstOnly flav ty1
       ; (s2, co2) <- flatten loc FMSubstOnly flav ty2
       ; mb_ct <- rewriteCtFlavor ev (mkTcEqPred s1 s2) (mkHdEqPred s2 co1 co2)
       ; case mb_ct of
           Just new_ev -> emitInsoluble (CNonCanonical { cc_ev = new_ev, cc_loc = loc })
           Nothing -> pprPanic "canEqFailure" (ppr ev $$ ppr ty1 $$ ppr ty2)
       ; return Stop }

------------------------
emitKindConstraint :: Ct -> TcS StopOrContinue
emitKindConstraint ct  -- By now ct is canonical
  = case ct of
      CTyEqCan { cc_loc = loc, cc_ev = ev, cc_tyvar = tv, cc_rhs = ty }
        -> emit_kind_constraint loc ev (mkTyVarTy tv) ty
      CFunEqCan { cc_loc = loc, cc_ev = ev
                , cc_fun = fn, cc_tyargs = xis1, cc_rhs = xi2 }
        -> emit_kind_constraint loc ev (mkTyConApp fn xis1) xi2
      _ -> continueWith ct
  where
    emit_kind_constraint loc _ev ty1 ty2
      | compatKind k1 k2   -- True when ty1,ty2 are themselves kinds,
      = continueWith ct    -- because then k1, k2 are BOX
      | otherwise
      = ASSERT( isKind k1 && isKind k2 )
        do { mw <- newDerived (mkEqPred k1 k2)
           ; case mw of
               Nothing  -> return ()
               Just kev -> emitWorkNC kind_co_loc [kev]
           ; continueWith ct }
      where
        k1 = typeKind ty1
        k2 = typeKind ty2

        -- Always create a Wanted kind equality even if
        -- you are decomposing a given constraint.
        -- NB: DV finds this reasonable for now. Maybe we have to revisit.
        kind_co_loc = setCtLocOrigin loc (KindEqOrigin ty1 ty2 (ctLocOrigin loc))
\end{code}

Note [Make sure that insolubles are fully rewritten]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When an equality fails, we still want to rewrite the equality
all the way down, so that it accurately reflects
 (a) the mutable reference substitution in force at start of solving
 (b) any ty-binds in force at this point in solving
See Note [Kick out insolubles] in TcInteract.
And if we don't do this there is a bad danger that
TcSimplify.applyTyVarDefaulting will find a variable
that has in fact been substituted.

Note [Do not decompose given polytype equalities]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Consider [G] (forall a. t1 ~ forall a. t2).  Can we decompose this?
No -- what would the evidence look like?  So instead we simply discard
this given evidence.

Note [Combining insoluble constraints]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
At this point we have an insoluble constraint, like Int~Bool.

 * If it is Wanted, delete it from the cache, so that subsequent
   Int~Bool constraints give rise to separate error messages

 * But if it is Derived, DO NOT delete from cache.  A class constraint
   may get kicked out of the inert set, and then have its functional
   dependency Derived constraints generated a second time. In that
   case we don't want to get two (or more) error messages by
   generating two (or more) insoluble fundep constraints from the same
   class constraint.

Note [Naked given applications]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Consider:
   data A a
   type T a = A a
and the given equality:
   [G] A a ~ T Int
We will reach the case canEq where we do a tcSplitAppTy_maybe, but if
we don't have the guards (Nothing <- tcView ty1) (Nothing <- tcView
ty2) then the given equation is going to fall through and get
completely forgotten!

What we want instead is this clause to apply only when there is no
immediate top-level synonym; if there is one it will later be
unfolded by the later stages of canEq.
Test-case is in typecheck/should_compile/GivenTypeSynonym.hs

Note [Equality between type applications]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If we see an equality of the form s1 t1 ~ s2 t2 we can always split
it up into s1 ~ s2 /\ t1 ~ t2, since s1 and s2 can't be type
functions (type functions use the TyConApp constructor, which never
shows up as the LHS of an AppTy).  Other than type functions, types
in Haskell are always

  (1) generative: a b ~ c d implies a ~ c, since different type
      constructors always generate distinct types

  (2) injective: a b ~ a d implies b ~ d; we never generate the same
      type from different type arguments.

Note [Canonical ordering for equality constraints]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Implemented as (<+=) below:

  - Type function applications always come before anything else.
  - Variables always come before non-variables (other than type
    function applications).

Note that we don't need to unfold type synonyms on the RHS to check
the ordering; that is, in the rules above it's OK to consider only
whether something is *syntactically* a type function application or
not.  To illustrate why this is OK, suppose we have an equality of the
form 'tv ~ S a b c', where S is a type synonym which expands to a
top-level application of the type function F, something like

  type S a b c = F d e

Then to canonicalize 'tv ~ S a b c' we flatten the RHS, and since S's
expansion contains type function applications the flattener will do
the expansion and then generate a skolem variable for the type
function application, so we end up with something like this:

  tv ~ x
  F d e ~ x

where x is the skolem variable.  This is one more equation than
absolutely necessary (we could have gotten away with just 'F d e ~ tv'
if we had noticed that S expanded to a top-level type function
application and flipped it around in the first place) but this way
keeps the code simpler.
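The flattening step just described can be sketched in isolation.  The
following is a toy model, not GHC's actual code: Ty, Eqn, and
flattenRhs are made-up illustrative names, and "x" stands for a fresh
skolem variable.

```haskell
-- Toy model of flattening: when the RHS of (tv ~ rhs) turns out to be
-- a type-function application (such as F d e, the expansion of
-- S a b c), replace it with a fresh skolem variable "x" and emit the
-- auxiliary equation (F d e ~ x).
data Ty = Var String
        | FunApp String [Ty]   -- a type-family application
        deriving (Eq, Show)

type Eqn = (Ty, Ty)

flattenRhs :: Ty -> (Ty, [Eqn])
flattenRhs t@(FunApp _ _) = (Var "x", [(t, Var "x")])
flattenRhs t              = (t, [])
```

So `tv ~ S a b c`, with `S a b c` expanding to `F d e`, yields the
pair `tv ~ x` and `F d e ~ x`, as in the note above.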
Unlike the OutsideIn(X) draft of May 7, 2010, we do not care about the
ordering of tv ~ tv constraints.  There are several reasons why we
might:

  (1) In order to be able to extract a substitution that doesn't
      mention untouchable variables after we are done solving, we
      might prefer to put touchable variables on the left.  However,
      in and of itself this isn't necessary; we can always re-orient
      equality constraints at the end if necessary when extracting a
      substitution.

  (2) To ensure termination we might think it necessary to put
      variables in lexicographic order.  However, this isn't actually
      necessary as outlined below.

While building up an inert set of canonical constraints, we maintain
the invariant that the equality constraints in the inert set form an
acyclic rewrite system when viewed as L-R rewrite rules.  Moreover,
the given constraints form an idempotent substitution (i.e. none of
the variables on the LHS occur in any of the RHS's, and type functions
never show up in the RHS at all), the wanted constraints also form an
idempotent substitution, and finally the LHS of a given constraint
never shows up on the RHS of a wanted constraint.  There may, however,
be a wanted LHS that shows up in a given RHS, since we do not rewrite
given constraints with wanted constraints.

Suppose we have an inert constraint set

  tg_1 ~ xig_1         -- givens
  tg_2 ~ xig_2
  ...
  tw_1 ~ xiw_1         -- wanteds
  tw_2 ~ xiw_2
  ...

where each t_i can be either a type variable or a type function
application.  Now suppose we take a new canonical equality constraint,
t' ~ xi' (note among other things this means t' does not occur in xi')
and try to react it with the existing inert set.  We show by induction
on the number of t_i which occur in t' ~ xi' that this process will
terminate.

There are several ways t' ~ xi' could react with an existing
constraint:

TODO: finish this proof.  The below was for the case where the entire
inert set is an idempotent substitution...

  (b) We could have t' = t_j for some j.
      Then we obtain the new equality xi_j ~ xi'; note that neither
      xi_j nor xi' contain t_j.  We now canonicalize the new equality,
      which may involve decomposing it into several canonical
      equalities, and recurse on these.  However, none of the new
      equalities will contain t_j, so they have fewer occurrences of
      the t_i than the original equation.

  (a) We could have t_j occurring in xi' for some j, with t' /= t_j.
      Then we substitute xi_j for t_j in xi' and continue.  However,
      since none of the t_i occur in xi_j, we have decreased the
      number of t_i that occur in xi', since we eliminated t_j and did
      not introduce any new ones.

\begin{code}
data TypeClassifier
  = VarCls TcTyVar      -- ^ Type variable
  | FunCls TyCon [Type] -- ^ Type function, exactly saturated
  | OtherCls TcType     -- ^ Neither of the above

classify :: TcType -> TypeClassifier

classify (TyVarTy tv)      = ASSERT2( isTcTyVar tv, ppr tv )
                             VarCls tv
classify (TyConApp tc tys) | isSynFamilyTyCon tc
                           , tyConArity tc == length tys
                           = FunCls tc tys
classify ty                | Just ty' <- tcView ty
                           = case classify ty' of
                               OtherCls {} -> OtherCls ty
                               var_or_fn   -> var_or_fn
                           | otherwise
                           = OtherCls ty

-- See note [Canonical ordering for equality constraints].
reOrient :: CtEvidence -> TypeClassifier -> TypeClassifier -> Bool
-- (t1 `reOrient` t2) responds True
--   iff we should flip to (t2~t1)
-- We try to say False if possible, to minimise evidence generation
--
-- Postcondition: After re-orienting, first arg is not OtherCls
reOrient _ev (OtherCls {}) cls2 = ASSERT( case cls2 of { OtherCls {} -> False; _ -> True } )
                                  True  -- One must be Var/Fun

reOrient _ev (FunCls {}) _      = False  -- Fun/Other on rhs
  -- But consider the following variation: isGiven ev && isMetaTyVar tv
  -- See Note [No touchables as FunEq RHS] in TcSMonad

reOrient _ev (VarCls {})  (FunCls {})   = True
reOrient _ev (VarCls {})  (OtherCls {}) = False
reOrient _ev (VarCls tv1) (VarCls tv2)
  | isMetaTyVar tv2 && not (isMetaTyVar tv1) = True
  | otherwise                                = False
  -- Just for efficiency, see CTyEqCan invariants

------------------

canEqLeaf :: CtLoc -> CtEvidence
          -> Type -> Type -> TcS StopOrContinue
-- Canonicalizing "leaf" equality constraints which cannot be
-- decomposed further (ie one of the types is a variable or
-- saturated type function application).
-- Preconditions:
--    * one of the two arguments is a variable
--      or an exactly-saturated family application
--    * the two types are not equal (looking through synonyms)
canEqLeaf loc ev s1 s2
  | cls1 `re_orient` cls2
  = do { traceTcS "canEqLeaf (reorienting)" $ ppr ev <+> dcolon <+> pprEq s1 s2
       ; let xcomp [x] = EvCoercion (mkTcSymCo (evTermCoercion x))
             xcomp _   = panic "canEqLeaf: can't happen"
             xdecomp x = [EvCoercion (mkTcSymCo (evTermCoercion x))]
             xev = XEvTerm xcomp xdecomp
       ; ctevs <- xCtFlavor ev [mkTcEqPred s2 s1] xev
       ; case ctevs of
           []     -> return Stop
           [ctev] -> canEqLeafOriented loc ctev cls2 s1
           _      -> panic "canEqLeaf" }

  | otherwise
  = do { traceTcS "canEqLeaf" $ ppr (mkTcEqPred s1 s2)
       ; canEqLeafOriented loc ev cls1 s2 }
  where
    re_orient = reOrient ev
    cls1 = classify s1
    cls2 = classify s2

canEqLeafOriented :: CtLoc -> CtEvidence
                  -> TypeClassifier -> TcType -> TcS StopOrContinue
-- By now s1 will either be a variable or a type family application
canEqLeafOriented loc ev (FunCls fn tys1) s2 = canEqLeafFunEq loc ev fn tys1 s2
canEqLeafOriented loc ev (VarCls tv)      s2 = canEqLeafTyVarEq loc ev tv s2
canEqLeafOriented _   ev (OtherCls {})    _  = pprPanic "canEqLeafOriented" (ppr (ctEvPred ev))

canEqLeafFunEq :: CtLoc -> CtEvidence
               -> TyCon -> [TcType] -> TcType -> TcS StopOrContinue
canEqLeafFunEq loc ev fn tys1 ty2  -- ev :: F tys1 ~ ty2
  = do { traceTcS "canEqLeafFunEq" $ pprEq (mkTyConApp fn tys1) ty2
       ; let flav = ctEvFlavour ev

            -- Flatten type function arguments
            -- cos1 :: xis1 ~ tys1
            -- co2  :: xi2 ~ ty2
       ; (xis1,cos1) <- flattenMany loc FMFullFlatten flav tys1
       ; (xi2, co2)  <- flatten loc FMFullFlatten flav ty2

       -- Fancy higher-dimensional coercion between equalities!
       -- SPJ asks why?  Why not just co : F xis1 ~ F tys1?
       ; let fam_head = mkTyConApp fn xis1
             xco = mkHdEqPred ty2 (mkTcTyConAppCo fn cos1) co2
             -- xco :: (F xis1 ~ xi2) ~ (F tys1 ~ ty2)

       ; mb <- rewriteCtFlavor ev (mkTcEqPred fam_head xi2) xco
       ; case mb of {
           Nothing -> return Stop ;
           Just new_ev
             | isTcReflCo xco -> continueWith new_ct
             | otherwise -> do { updWorkListTcS (extendWorkListEq new_ct)
                               ; return Stop }
             where
               new_ct = CFunEqCan { cc_ev = new_ev, cc_loc = loc
                                  , cc_fun = fn, cc_tyargs = xis1, cc_rhs = xi2 } } }

canEqLeafTyVarEq :: CtLoc -> CtEvidence
                 -> TcTyVar -> TcType -> TcS StopOrContinue
canEqLeafTyVarEq loc ev tv s2              -- ev :: tv ~ s2
  = do { traceTcS "canEqLeafTyVarEq" $ pprEq (mkTyVarTy tv) s2
       ; let flav = ctEvFlavour ev
       ; (xi1,co1) <- flattenTyVar loc FMFullFlatten flav tv -- co1 :: xi1 ~ tv
       ; (xi2,co2) <- flatten loc FMFullFlatten flav s2      -- co2 :: xi2 ~ s2
       ; let co = mkHdEqPred s2 co1 co2
             -- co :: (xi1 ~ xi2) ~ (tv ~ s2)

       ; traceTcS "canEqLeafTyVarEq2" $ empty
       ; case (getTyVar_maybe xi1, getTyVar_maybe xi2) of {
           (Nothing, _) ->  -- Rewriting the LHS did not yield a type variable
                            -- so go around again to canEq
             do { mb <- rewriteCtFlavor ev (mkTcEqPred xi1 xi2) co
                ; case mb of
                    Nothing     -> return Stop
                    Just new_ev -> canEq loc new_ev xi1 xi2 } ;

           (Just tv1', Just tv2') | tv1' == tv2'
             -> do { when (isWanted ev) $
                     setEvBind (ctev_evar ev) (mkEvCast (EvCoercion (mkTcReflCo xi1)) co)
                   ; return Stop } ;

           (Just tv1', _) ->
             -- LHS rewrote to a type variable, RHS to something else
             case occurCheckExpand tv1' xi2 of
               Nothing ->  -- Occurs check error
                 do { mb <- rewriteCtFlavor ev (mkTcEqPred xi1 xi2) co
                    ; case mb of
                        Nothing     -> return Stop
                        Just new_ev -> canEqFailure loc new_ev xi1 xi2 }

               Just xi2' ->
                 -- No occurs check, so we can continue; but make sure
                 -- that the new goal has enough type synonyms expanded
                 -- by the occurCheckExpand
                 do { mb <- rewriteCtFlavor ev (mkTcEqPred xi1 xi2') co
                    ; case mb of
                        Nothing     -> return Stop
                        Just new_ev -> continueWith $
                                       CTyEqCan { cc_ev = new_ev, cc_loc = loc
                                                , cc_tyvar = tv1', cc_rhs = xi2' } } } }
mkHdEqPred :: Type -> TcCoercion -> TcCoercion -> TcCoercion
-- Make a higher-dimensional equality
--    co1 :: s1~t1,  co2 :: s2~t2
-- Then (mkHdEqPred t2 co1 co2) :: (s1~s2) ~ (t1~t2)
mkHdEqPred t2 co1 co2 = mkTcTyConAppCo eqTyCon [mkTcReflCo (defaultKind (typeKind t2)), co1, co2]
   -- Why defaultKind?  Same reason as the comment on TcType/mkTcEqPred.
   -- I truly hate this (DV)
\end{code}

Note [Occurs check expansion]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@occurCheckExpand tv xi@ expands synonyms in xi just enough to get rid
of occurrences of tv outside type function arguments, if that is
possible; otherwise, it returns Nothing.

For example, suppose we have
  type F a b = [a]
Then
  occurCheckExpand b (F Int b) = Just [Int]
but
  occurCheckExpand a (F a Int) = Nothing

We don't promise to do the absolute minimum amount of expanding
necessary, but we try not to do expansions we don't need to.  We
prefer doing inner expansions first.  For example,
  type F a b = (a, Int, a, [a])
  type G b   = Char
We have
  occurCheckExpand b (F (G b)) = F Char
even though we could also expand F to get rid of b.

See also Note [Type synonyms and canonicalization].

\begin{code}
occurCheckExpand :: TcTyVar -> Type -> Maybe Type
-- Check whether the given variable occurs in the given type.  We may
-- have needed to do some type synonym unfolding in order to get rid
-- of the variable, so we also return the unfolded version of the
-- type, which is guaranteed to be syntactically free of the given
-- type variable.  If the type is already syntactically free of the
-- variable, then the same type is returned.
occurCheckExpand tv ty
  | not (tv `elemVarSet` tyVarsOfType ty) = Just ty
  | otherwise                             = go ty
  where
    go t@(TyVarTy tv') | tv == tv' = Nothing
                       | otherwise = Just t
    go ty@(LitTy {}) = return ty
    go (AppTy ty1 ty2) = do { ty1' <- go ty1
                            ; ty2' <- go ty2
                            ; return (mkAppTy ty1' ty2') }
    -- mkAppTy <$> go ty1 <*> go ty2
    go (FunTy ty1 ty2) = do { ty1' <- go ty1
                            ; ty2' <- go ty2
                            ; return (mkFunTy ty1' ty2') }
    -- mkFunTy <$> go ty1 <*> go ty2

    go ty@(ForAllTy {})
       | tv `elemVarSet` tyVarsOfTypes tvs_knds
       = Nothing  -- Can't expand away the kinds unless we create
                  -- fresh variables which we don't want to do at this point.
       | otherwise
       = do { rho' <- go rho
            ; return (mkForAllTys tvs rho') }
       where
         (tvs,rho) = splitForAllTys ty
         tvs_knds  = map tyVarKind tvs

    -- For a type constructor application, first try expanding away the
    -- offending variable from the arguments.  If that doesn't work, next
    -- see if the type constructor is a type synonym, and if so, expand
    -- it and try again.
    go ty@(TyConApp tc tys)
      | isSynFamilyTyCon tc  -- It's ok for tv to occur under a type family application
      = return ty            -- Eg.  (a ~ F a) is not an occur-check error
                             -- NB This case can't occur during canonicalisation,
                             -- because the arg is a Xi-type, but can occur in the
                             -- call from TcErrors
      | otherwise
      = (mkTyConApp tc <$> mapM go tys) <|> (tcView ty >>= go)
\end{code}

Note [Type synonyms and canonicalization]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We treat type synonym applications as xi types, that is, they do not
count as type function applications.  However, we do need to be a bit
careful with type synonyms: like type functions they may not be
generative or injective.  However, unlike type functions, they are
parametric, so there is no problem in expanding them whenever we see
them, since we do not need to know anything about their arguments in
order to expand them; this is what justifies not having to treat them
as specially as type function applications.
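Parametricity is what makes it safe for a decomposer to expand a
synonym on demand.  A minimal standalone sketch (toy names, with one
hard-wired synonym type F a = [Int]; not GHC's real representation):

```haskell
-- Toy types: a variable, Int, a list, or an application of the
-- synonym    type F a = [Int]
data Ty = Var String | IntTy | ListTy Ty | FSyn Ty
        deriving (Eq, Show)

-- Expanding the synonym ignores its (parametric) argument entirely.
expandSyn :: Ty -> Ty
expandSyn (FSyn _) = ListTy IntTy
expandSyn t        = t

-- Decompose an equality into variable/type pairs, expanding the
-- synonym only when the two sides cannot otherwise be matched up.
decomp :: Ty -> Ty -> Maybe [(String, Ty)]
decomp (Var v) t             = Just [(v, t)]
decomp t (Var v)             = Just [(v, t)]
decomp IntTy IntTy           = Just []
decomp (ListTy a) (ListTy b) = decomp a b
decomp t1@(FSyn _) t2        = decomp (expandSyn t1) t2
decomp t1 t2@(FSyn _)        = decomp t1 (expandSyn t2)
decomp _ _                   = Nothing
```

Here F a ~ [b] expands to [Int] ~ [b] and decomposes to b ~ Int,
while a ~ F b is matched by the variable clause before any expansion,
leaving the synonym application unexpanded for error messages.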
The thing that causes some subtleties is that we prefer to leave type
synonym applications *unexpanded* whenever possible, in order to
generate better error messages.  If we encounter an equality
constraint with type synonym applications on both sides, or a type
synonym application on one side and some sort of type application on
the other, we simply must expand out the type synonyms in order to
continue decomposing the equality constraint into primitive equality
constraints.  For example, suppose we have

  type F a = [Int]

and we encounter the equality

  F a ~ [b]

In order to continue we must expand F a into [Int], giving us the
equality

  [Int] ~ [b]

which we can then decompose into the more primitive equality
constraint

  Int ~ b

However, if we encounter an equality constraint with a type synonym
application on one side and a variable on the other side, we should
NOT (necessarily) expand the type synonym, since for the purpose of
good error messages we want to leave type synonyms unexpanded as much
as possible.

However, there is a subtle point with type synonyms and the occurs
check that takes place for equality constraints of the form tv ~ xi.
As an example, suppose we have

  type F a = Int

and we come across the equality constraint

  a ~ F a

This should not actually fail the occurs check, since expanding out
the type synonym results in the legitimate equality constraint
a ~ Int.  We must actually do this expansion, because unifying a with
F a will lead the type checker into infinite loops later.  Put another
way, canonical equality constraints should never *syntactically*
contain the LHS variable in the RHS type.  However, we don't always
need to expand type synonyms when doing an occurs check; for example,
the constraint a ~ F b is obviously fine no matter what F expands to.
And in this case we would rather unify a with F b (rather than F b's
expansion) in order to get better error messages later.
So, when doing an occurs check with a type synonym application on the
RHS, we use some heuristics to find an expansion of the RHS which does
not contain the variable from the LHS.  In particular, given

  a ~ F t1 ... tn

we first try expanding each of the ti to types which no longer contain
a.  If this turns out to be impossible, we next try expanding F
itself, and so on.
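This heuristic can be modelled end to end on a toy representation with
the single synonym type F a b = [a] from Note [Occurs check
expansion].  Everything below (Ty, occursIn, occurCheckExpandToy) is
an illustrative sketch, not GHC's code:

```haskell
-- Toy types with one synonym:   type F a b = [a]
data Ty = Var String | ListTy Ty | FSyn Ty Ty
        deriving (Eq, Show)

occursIn :: String -> Ty -> Bool
occursIn v (Var v')   = v == v'
occursIn v (ListTy t) = occursIn v t
occursIn v (FSyn a b) = occursIn v a || occursIn v b

-- Expand synonyms just enough to eliminate v from the type: first try
-- to expand v away inside the arguments of F; only if that fails,
-- unfold F itself and retry.
occurCheckExpandToy :: String -> Ty -> Maybe Ty
occurCheckExpandToy v ty
  | not (v `occursIn` ty) = Just ty
  | otherwise             = go ty
  where
    go (Var v') | v' == v   = Nothing
                | otherwise = Just (Var v')
    go (ListTy t) = ListTy <$> go t
    go (FSyn a b)
      | Just a' <- go a, Just b' <- go b = Just (FSyn a' b')  -- arguments first
      | otherwise                        = go (ListTy a)      -- unfold F a b = [a]
```

This reproduces the note's examples: for b ~ F Int b the arguments
cannot be fixed up, so F is unfolded, giving Just [Int]; for
a ~ F a Int even the unfolding still mentions a, so the result is
Nothing.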