# Make the design of defaulting explicit
GHC does some "defaulting" of kind and runtime-rep variables, but the design has never really been described. In !6851 (closed) and !6866 (merged) it became clear that we didn't really know what we were doing. This ticket sets out a design, and articulates a choice.
See also:

- #21434, which we could tackle once this design is settled
- #17201: levity polymorphism and defaulting
- GHC proposal #409 about named defaults
## Common ground, especially for inferring the type of a let-bound definition without a type signature

There are three places that defaulting can take place:

1. **GeneralisationDefaulting**: when generalising a user-written type or kind signature, or when inferring the type of a (possibly mutually-recursive) let-binding. Mainly done in `quantifyTyVars` today. This defaulting happens before any quantification takes place; it can thus affect which variables end up quantified in a variable's type.

2. **TypeClassDefaulting**: done at top level only (`simplifyTop`), used to default e.g. `Num a` by fixing `a := Integer`. Driven by `default` declarations.
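For instance, a `default` declaration can steer TypeClassDefaulting (an illustrative sketch; the declaration and names are not from the original):

```haskell
module Main where

-- Illustrative: with this declaration in force, ambiguous numeric
-- constraints are defaulted to Int rather than the standard
-- (Integer, Double).
default (Int)

-- Before TypeClassDefaulting the sum has the ambiguous type
-- 'forall a. (Show a, Num a) => a'; simplifyTop fixes a := Int.
defaulted :: String
defaulted = show (3 + 4)
```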
3. **FinalZonkDefaulting**: done by the final zonk. This is a catch-all defaulting step, looking for any variable not yet filled in. Poster-child example: `(length @alpha ([] @alpha)) :: Int`. It makes absolutely no difference what `alpha` is. Today, it works by:
   - Default `(alpha :: RuntimeRep) := LiftedRep`
   - Default `(alpha :: Levity) := Lifted`
   - Default `(alpha :: k) := Any @k`

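The poster-child example can be run directly; the element type of the empty list never matters (a minimal sketch):

```haskell
module Main where

-- Poster-child for FinalZonkDefaulting: the element type of the empty
-- list is entirely unconstrained, so the final zonk fills it in
-- (today, with 'Any'); the result cannot be affected by the choice.
emptyLen :: Int
emptyLen = length []
```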
## Generalisation
**Properties of generalisation.** Normally, we expect type inference to be order-independent: changing the order of definitions in a file (or e.g. the order of arguments in a tuple) should not affect type inference. However, this is hard to achieve, precisely because we are not trying to infer principal types (they're too general for most uses). So, we hope to be order-independent only when all of the following extensions are in effect. When one of these is not in effect, we offer no guarantees (though we want the algorithm to be convenient in practice):

- `PolyKinds`
- `PolyMultiplicities`
- `PolyRepresentations`
- `MonoLocalBinds`
- `NoMonomorphismRestriction`
**Generalisation.** When generalising a type, every local type variable free in the type being examined must be:

- (P) Promoted to the outer level (thereby becoming global), or
- (D) Defaulted to a closed type (GeneralisationDefaulting), or
- (Q) Quantified

**Definition: top-level.** Generalisation happens at top level when we are generalising a top-level type signature or inferring the type of a top-level variable definition.
Generalisation consists of these steps, each explicated below:

1. Determine the types we are going to work with.
   1. `taus` are the types to be generalised.
   2. `theta` are the constraints arising from looking at the RHS of the variable declarations (`simplifyInfer`) or from kind-checking the type signature (`kindGeneralize`). We further split up `theta` into singletons `sing_theta` and non-singleton constraints `non_sing_theta`. This ensures we quantify over non-singleton classes such as `IP`. See § Singleton constraints below.
   3. `cand_tys` is `taus ++ non_sing_theta`.

2. Collect the type variables we are working with. These are the deep free vars of `cand_tys`. (Here, "deep" means that we get kind variables in type variable occurrences.) We do not normalise `cand_tys`, even though doing so might eliminate some type variables via type family reductions. (Currently, we normalise the type later in `mkInferredPolyId`, in order to present simple types in e.g. GHCi. We could imagine doing it earlier, which would affect generalisation.)
   1. Only collect local variables: these are the type variables whose level number equals the ambient level. Variables with an outer (= lower) level number are called global variables and will certainly not be generalised.

   2. Carve up these local variables into disjoint subsets, as described here. The first bin to catch a type variable wins, if a tyvar could end up in multiple subsets. A variable in any of these subsets is called a mono tyvar and will not be generalised. Each must be promoted (P) or defaulted (D).
      1. Restricted tyvars. The free vars of the non-quantifiable constraints of `theta`. See "Non-quantifiable constraints" below. (This handles the MR.)
      2. Concrete tyvars. Any "concrete" tyvars (`ConcreteTv`). We never quantify over a concrete tyvar.
      3. Escaping-kind tyvars. The free vars of the kind of the type(s) to be generalised. We cannot quantify over these (#23051). See `Note [Inferred type with escaping kind]` in GHC.Tc.Gen.Bind.
      4. Component tyvars. Avoid unnecessary polymorphism (see "Avoiding unnecessary polymorphism"):
         - The free vars of the kind components of `tau` (with `-XNoPolyKinds`)
         - The free vars of the runtime-rep components of `tau` (with `-XNoPolyRepresentations`)
         - The free vars of the multiplicity components of `tau` (with `-XNoPolyMultiplicities`)
3. Identify and promote (P) the promotable tyvars, according to these rules:
   1. All restricted tyvars are promotable. (This is necessary to preserve backward compatibility, even though it allows spooky action at a distance.)
   2. If we are not at top level, the concrete tyvars and component tyvars are promotable.
   3. Any non-global variable determined, using `sing_theta`, by the promotable tyvars unioned with the global tyvars (see "Determined type variables" below) is promotable.

   All promotable tyvars are promoted (P). Promoted variables are now global.

   Why promote in nested definitions? Because this allows the unquantified variable to be determined by call sites, which are all, necessarily, nearby. Example (#17201): `a = I# (let x = undefined in x)`. Here we should be able to infer `x :: Int#`.
4. Identify and default (D) the defaultable tyvars, according to these rules:
   1. Any remaining (i.e. unpromoted) mono tyvar is defaultable.
   2. Any non-global (and thus also non-promoted) variable determined, using `sing_theta`, by the mono tyvars unioned with the global tyvars (see "Determined type variables" below) is defaultable.

   All defaultable tyvars are defaulted (D).

   Why default at top level? Because this ensures no "spooky action at a distance" where the type of a top-level function is fixed by its call sites. It ensures that top-level functions always get fully-settled types.

5. If any defaulting took place in step (4): re-simplify `theta`. Reason: the defaulting might make some constraints soluble by instances.

   In principle, solving some of these constraints could expose new constraints with functional dependencies, and therefore lead to new tyvars being determined (as in steps 3.3 and 4.2). However, this is rare. This is another way the infelicity documented in "Determined type variables" below might arise. We could imagine looping, repeating steps 2, 3, and 4 if any simplifications happen. (Why 2? Because simplification might lead to new non-quantifiable constraints.) We do not currently do this loop.

6. Identify and skolemise (Q) the quantifiable type variables.
   1. The remaining local type variables (those that have been neither promoted nor defaulted) are quantifiable.
   2. Any non-global variable obtained by growing the set of quantifiable type variables using `sing_theta` (using `growThetaTyVars`, not `closeWrtFunDeps`) is also quantifiable.
   3. The constraints over which we will quantify are the subset of `theta` that mentions the qtvs.

   All quantifiable (Q) type variables are skolemised. No defaulting here (unlike today).

Note carefully that we do not default all type variables of kind `RuntimeRep`, nor do we restrict quantification over them. Rather, we promote and default the runtime-rep components as defined above. This makes #17201 work as expected. E.g.

```haskell
f1 :: forall (p :: RuntimeRep -> Type) (r :: RuntimeRep). p r -> p r
f1 x = x

-- Inferred type g1 :: forall (p :: RuntimeRep -> Type) (r :: RuntimeRep). p r -> p r
g1 = f1
```

When inferring `g1`'s type we will quantify over `r :: RuntimeRep`; in this example there are no `RuntimeRep` components.
### Singleton constraints (from Step 1.2 above)

A singleton constraint is one for which we guarantee coherence. We should strive to generalise non-singleton constraints as much as possible, to allow the caller to determine the evidence, as opposed to making a unilateral decision at the definition site.

This is important in particular for implicit parameters, as we want to allow the following (test `tc219`):

```haskell
{-# LANGUAGE ImplicitParams, NoMonomorphismRestriction #-}
bar = show ?c
foo1 = let { ?c = 'x' } in bar
foo2 = let { ?c = True } in bar
```

This means we should NOT make use of non-singleton constraints when deciding which type variables are determined by other type variables in `closeWrtFunDeps`.
### Finding the free variables (from Step 2 above)

Question: Do we look for variables after synonym expansion or before? And what happens if we do some solving for type family reductions?

Example around type synonyms:

```haskell
type P a = Int

x :: P a
x = 5

f y = if y then x else 6
```

GHC 9.2 says `f :: forall {a}. Bool -> P a`.

Answer: It doesn't really matter. We might end up getting an extraneous free variable (by not expanding a type synonym or type family application), but that causes no harm. Recall that generalised type variables are inferred and thus are not subject to visible type instantiation. In practice we do not expand type synonyms, but we believe that doing so would have no effect on users (beyond, say, `:info`).
### Local and global type variables (from Step 2.1 above)

- A local type variable has a level number equal to the ambient level of the binding; equivalently, these are the variables that do not appear in the environment.
- A global type variable has a level number strictly less than the ambient level of the binding; equivalently, these are free in the environment.
- Local variables are candidates for quantification. Global variables are definitely not available for quantification.

Example of something that is (surprisingly, perhaps) not a local variable:

```
tau = alpha[0] -> Int
[W] C alpha[0] beta[1]
class C a b | a -> b
```

Here, `beta` is not a local variable, because it does not appear free in `tau`. The `[W] C alpha beta` constraint will end up in the residual implication. `beta` does not need to be promoted. Later, if we learn `alpha[0]`, when the solver looks at the residual implication, it will choose `beta` accordingly. `beta` is not a candidate for generalisation.
### Avoiding unnecessary polymorphism (from Step 2.2 above)

The kind components of a type `ty` are those kinds `ki` where:

- `ki` is the kind of a unification variable free in `ty`. (A skolem is certain to be global.)
- `ki` is the kind of a locally forall'd binder in `ty`, e.g. `(forall (a::ki). blah) -> Int`
- `ki` is the kind of a coercion in `ty`, e.g. `ty |> (co :: ki)`

The runtime-rep components of a type `ty` are a set of types (all of kind `RuntimeRep`) that appear as the argument `r` of `TYPE r`, where `TYPE r` is itself a kind component of `ty`.

The multiplicity components of a type `ty` are a set of types (all of kind `Multiplicity`) that appear as the multiplicity argument of `(->)` in `ty`.

All of these are a bit arbitrary. For example, a kind component might be `F (Proxy (TYPE r))`, where `F` removes the `Proxy`, but we do not consider `r` to be a runtime-rep component. We're just trying to get the vastly common cases right here; we won't get other cases right.
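A multiplicity component can be seen in an ordinary linear-types program (a sketch assuming `LinearTypes`, GHC 9.0 or later; `dup` is an illustrative name):

```haskell
{-# LANGUAGE LinearTypes #-}
module Main where

-- The 'Many' below is the multiplicity argument of (->), and hence a
-- multiplicity component of dup's type. With -XNoPolyMultiplicities a
-- multiplicity unification variable in this position would be
-- defaulted to Many rather than generalised.
dup :: a %Many -> (a, a)
dup x = (x, x)
```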
### Non-quantifiable constraints (from Step 2.2.1 above)

These are the non-quantifiable constraints:

- If the MR applies: all constraints.
- `HasCallStack` constraints.
- If `-XNoFlexibleContexts`: any constraint that does not look like `C a b (c Int) (d e)`. (Note that arguments can be applications of type variables, just not applications of tycons.) Actually, we seem not to do this. See #10608 (closed) and #10351 (closed).
- A nominal equality that does not have a type family at the head on one side or the other.
### Determined type variables (from Steps 3.3 and 4.2 above)

Definition. A type variable `tv` is determined by a set `gtvs` using `theta` iff, given specific instantiations for `gtvs`, no more than one instantiation for `tv` can possibly satisfy `theta`.

This happens via fundeps: with `class C a b | a -> b` and a constraint `C t1 t2` where `tyvars(t1)` are all in `gtvs`, then `injective-tyvars(t2)` are determined by `gtvs`. We must be careful to compute a fixpoint here in order to find all determined tyvars.

Recall that equality constraints `~` have a bidirectional fundep, as if we had `class a ~ b | a -> b, b -> a`.

We avoid quantifying over determined tyvars so as to avoid Henry Ford polymorphism ("any type `a` as long as it's `Int`"). Henry Ford polymorphism is not just unnecessary; it can be actively harmful, due to an infelicity in the solver. If we end up inferring a type quantified over, say, `C Int a` (where `C` has the fundep above and we have `instance C Int Bool`), then we will have a residual implication like `forall a. C Int a => C Int a`. The solver might see the `[W] C Int a`, notice the functional dependency, and then spit out `[W] Bool ~ a`, which is not soluble. This does not happen in practice, but is a potential problem with Henry Ford polymorphism. (In the olden days, we had Derived constraints, and the fundep would give rise to `[D] Bool ~ a`, which the solver would then discard if unsolved.)

Note that we cannot conclusively identify all determined variables, according to the declarative definition above, because further solving might uncover new functional dependencies that we do not currently know. So we might miss some determined variables, infer Henry Ford polymorphism, and then fall afoul of the problem in the paragraph just above. We propose not losing sleep over this remote possibility.
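The fundep mechanism can be seen with a small (hypothetical) class:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
module Main where

-- Hypothetical class: the fundep 'a -> b' says 'a' determines 'b'.
class C a b | a -> b where
  conv :: a -> b

instance C Int Bool where
  conv = (> 0)

-- The wanted constraint here is 'C Int b0'; fundep improvement against
-- the instance determines b0 := Bool, so GHC infers g :: Int -> Bool
-- rather than the Henry Ford type 'C Int b => Int -> b'.
g x = conv (x :: Int)
```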
### Defaulting during generalisation (from Step 4 above)

When defaulting a type variable `(alpha :: ki)` in Step 4 of generalisation, we use the kind:

- `(alpha :: Type) := Type`
- `(alpha :: RuntimeRep) := LiftedRep`
- `(alpha :: Levity) := Lifted`
- `(alpha :: Multiplicity) := Many`
If none of these apply, we:

- Report an error
- Set `alpha` to be a `SkolemTv`, that is, a new top-level constant

We don't want `alpha` to stay as a unification variable, because it appears free in the type of the function `f` we are generalising, and we don't want the calls of `f` to subsequently fix `alpha`. Making it a skolem solves that. We are going to report an error anyway. NB: skolems carry their own `SkolemInfo`, so an unbound skolem of this sort should not cause trouble.

NB: We do no type family simplification when figuring out the kinds of unification variables above. If we did, there could be subtle order dependencies among the defaulted variables, when defaulting one variable allows the kind of another variable to reduce to e.g. `Type`. But we just don't do this. (We do expand type synonyms, as usual.)
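For example, kind defaulting under `NoPolyKinds` is observable in an ordinary datatype declaration (a sketch; `T` and `unT` are illustrative names):

```haskell
{-# LANGUAGE NoPolyKinds #-}
module Main where

-- With PolyKinds off, the kind variable arising for 'f' is defaulted
-- during generalisation of T's kind, giving f :: Type -> Type
-- (with PolyKinds, GHC would instead generalise to k -> Type).
data T f a = MkT (f a)

unT :: T Maybe Int -> Maybe Int
unT (MkT x) = x
```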
Note. I nearly wrote "Report an error unless `alpha` is already mentioned in an insoluble constraint." Sam suggested:

```haskell
coerce :: forall r (a :: TYPE r) (b :: TYPE r). Coercible a b => a -> b

f :: forall rr (a :: TYPE rr). a -> a
f = coerce
```

We get an unsolved `alpha[conc] ~ rr[sk]`. But that's fine: we are not generalising (because this function has a type signature), so none of the stuff in this section applies.
A worry. This kind-driven defaulting is vulnerable to order-sensitivity. E.g. what if the kind is `F Int`?

Note. At one stage we said "We always default multiplicity variables (with NoPolyMultiplicities); we never promote them. Why? It is very hard to learn more about a multiplicity from usage sites." But:

- What does "multiplicity variables" mean? Perhaps "variables free in a multiplicity component and not free anywhere else"?
- It seems very ad hoc.

Maybe treating them uniformly is OK?
## Old stuff, now out of date

To default (D) we take two steps. For each kind component `ki`:

- Promote its free variables, just as in (P)
- Add a new wanted constraint, e.g. `rr ~ LiftedRep`. (RAE: We have agreed that this step is Just Wrong: it does not take into account that the runtime-rep component might be `F Int` where `F Int = UnliftedRep`, for example.)

Notes:

- If `rr` is a local unification variable, a very common case, we can just unify it: `rr := LiftedRep`. This is just an efficiency shortcut. But if we can't unify it, we still emit the equality constraint, which ensures that even if `ki` is `F blah`, which doesn't yet reduce, we'll check that it eventually reduces to `LiftedRep`, the default.

Similarly for:

- each runtime-rep component `rr` (constraint: `rr ~ LiftedRep`);
- each multiplicity component `m` (constraint: `m ~ Many`);
- each kind component (with NoPolyKinds) (constraint: `Concrete# ki`).

We do not do this for `Concrete#` constraints, which can be controlled by the class-based `default` mechanism.
## Alternative, less preferred defaulting strategy

An alternative approach to defaulting is to use kind-directed defaulting during quantification. This is what is done today, where `quantifyTyVars` takes variables it cannot quantify and tries to default them:

- `(alpha :: Type) := Type` (with NoPolyKinds)
- `(alpha :: RuntimeRep) := LiftedRep`
- `(alpha :: Levity) := Lifted`
- `(alpha :: Multiplicity) := Many`

The problem with this approach is that it requires looking at the kind during quantification. If that kind is `F beta`, where `beta` will be learned later, we might miss an opportunity to default, because we learn only later that, say, `F beta` really is `Multiplicity`. This alternative seems bound to be fragile (this can be observed in GHC today). Example:

```haskell
{-# LANGUAGE RankNTypes, StandaloneKindSignatures #-}
{-# LANGUAGE DataKinds, LinearTypes, TypeFamilies, PartialTypeSignatures #-}
module Bug where

import Data.Kind
import GHC.Exts
import GHC.Types
import Data.Proxy

type Multy :: () -> Type
type family Multy a where
  Multy '() = Multiplicity

type Fun :: forall (a :: ()) -> Multy a -> Type -> Type -> Type
type family Fun a m where
  Fun '() m = FUN m

myId1 :: Proxy _u -> Fun _u _ a a
myId1 (Proxy :: Proxy '()) x = x

myId2 :: Proxy '() -> Fun '() _ a a
myId2 (Proxy :: Proxy '()) x = x
```

GHC infers `myId1` to be multiplicity-polymorphic, because it doesn't learn that `_u := '()` until after kind-generalizing the type. On the other hand, `myId2` is multiplicity-monomorphic.
## The Choice: promoting vs defaulting

Finally we have a free design choice: should we promote, or default? That is, do we add the extra constraints used for defaulting? (We always promote; the only question is about those extra constraints.)

Simon prefers (D) Default. Reasons:

1. It makes reasoning more local. You can figure out what a type signature means without looking to see how it is used.

2. It eliminates the nasty possibility of an inferred type like `f :: forall a. Foo LiftedRep a => blah`, where we have an `instance Foo LiftedRep t`. This can happen if we infer `f :: forall a. Foo rr a => blah`, where `rr` is still a unification variable, ultimately set to `LiftedRep`.

On point (2), note that we have this possibility today:

```haskell
class C a b where
  meth :: a -> b -> ()

instance C a Integer where
  meth _ _ = ()

x = 5  -- MR

f z y = (meth z y, x + y)
```

GHC infers `f :: C a Integer => a -> Integer -> ((), Integer)`, exactly what this point worries about.
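The example above runs as-is; here is a minimal check of the behaviour (the class and instance repeated for self-containment):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
module Main where

class C a b where
  meth :: a -> b -> ()

instance C a Integer where
  meth _ _ = ()

x = 5  -- MR: not generalised; defaulted to Integer

-- Inferred: f :: C a Integer => a -> Integer -> ((), Integer),
-- because the monomorphic 'x' fixes y's type to Integer.
f z y = (meth z y, x + y)
```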
Richard prefers (P) Promote. Reasons:

1. It allows GHC to accept more programs. Example:

   ```haskell
   f x = x
   g () = f 3#
   ```

   We see from the usage site that `f` should have type `forall (a :: TYPE IntRep). a -> a`. Another, more elaborate example is below. Note that the representation-polymorphic `Num` is a fine idea. See the proposal at https://github.com/ghc-proposals/ghc-proposals/pull/30 and ticket #12708.

   ```haskell
   class Num (a :: TYPE rep) where ...  -- methods as today

   someFun :: Int# -> ...
   someFun z = ... let frob x y = x + (y * 2) in frob 3 4 + z ...
   ```

   We need to know that `frob` should work over unboxed numbers, but this knowledge comes only from usage sites.

2. (P) is more in keeping with the original spirit of Haskell. The monomorphism restriction (MR) imposes a "do not quantify" dictum like e.g. NoPolyRepresentations would. It currently uses strategy (P). But we could imagine (we are not doing this; thought experiment only!) changing the MR to use (D) instead. Then `x = 5` would always be `Integer`: this is more local and simpler. But it would be a real pain to users, and not just around legacy code. I (Richard) want a future where more people use unlifted and unboxed types. (Unlifted especially.) If we promote rather than default, it will be easier to write functions over unlifted and unboxed types.
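Today's behaviour requires such a signature to be written by hand; a sketch of the workaround (names illustrative) shows the type that strategy (P) would let GHC infer from the usage site:

```haskell
{-# LANGUAGE MagicHash, DataKinds, ExplicitForAll, KindSignatures #-}
module Main where

import GHC.Exts

-- Without this signature, f's representation variable is defaulted to
-- LiftedRep during generalisation, and 'f 3#' below is rejected.
f :: forall (a :: TYPE IntRep). a -> a
f x = x

g :: () -> Int#
g () = f 3#
```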
Compromise: Simon and Richard have both agreed to the following: Promote (P) when a declaration is local; Default (D) when a declaration is top-level. This means that decisions around defaulting can take into account an entire top-level definition, but no more.
## Other points

When kind-generalising an explicit user-written type signature: if we promote (P) we could allow

```haskell
f :: forall a. a -> a
f @(a :: UnliftedType) x = x
```

where the use-site in `f`'s definition fixes the monomorphic (promoted) representation variable in `f`'s type signature.

Or we could default more aggressively, to make the "complete" type signature more complete.
## Implementation

Turning to implementation, here is a plan for `GHC.Tc.Solver.decideQuantification`:

1. Identify all defaultable "components", namely:
   - (a) kind components (with NoPolyKinds)
   - (b) runtime-rep components (with NoPolyRepresentations)
   - (c) multiplicity components (with NoPolyMultiplicities)
   - (d) the argument of a `Concrete#` constraint
   - (e) constraints that fall under the Monomorphism Restriction, e.g. `Num a`

2. Promotion and defaulting. For each such component:
   - Promote all its free variables, and
   - If we are doing defaulting (D), then for (a-c), emit a constraint, thus:
     - (a) Emit `Concrete# ki` for a kind component `ki`; this ensures that `ki` will eventually be concrete.
     - (b) Emit `rr ~ LiftedRep` for a runtime-rep component `rr`.
     - (c) Emit `mu ~ Many` for a multiplicity component `mu`.

   Question: what do we do with the evidence for these equalities? Simply discard it: the equalities are not required in order to ensure the program is well-typed, according to our basic typing rules. (Here, "basic" means our typing rules without the not-quite-compositional levity/representation monomorphism restrictions.) So there's no place we will need the evidence. The equality is there only to force defaulting.

   Note: we say "emit a constraint", but we really mean "take a shortcut if possible, and emit a constraint if not". Example: rather than emit `rr ~ LiftedRep`, do `unifyDerived rr LiftedRep`. That will try to unify `rr` (which can be an arbitrary type, remember) with `LiftedRep`. This will often succeed, but perhaps not always; e.g. maybe `rr` is `F alpha` and `alpha` gets unified later; in this latter (rare) case we do emit a constraint.

   Note: we need a shortcut for emitting a `Concrete#` constraint. For this shortcut it might be helpful to default before promoting, because for local unification variables we know that no other constraints will affect them, whereas for global ones we might learn more. Also, if shortcut unification with `LiftedRep` succeeds, there was no point in promoting (efficiency).

   Note: we need no special handling of variables of type/kind `RuntimeRep`. Instead, we now treat runtime-rep components specially, not variables.

3. If we emitted any constraints, simplify the constraints again, in the hope of unlocking further simplification.

   Question: when wanteds can simplify wanteds, maybe that means we won't get to `(Eq a, Ord a)` and can get rid of `mkMinimalBySCs`?

4. Find and promote any tyvars that are functionally dependent (via a constraint) on a promoted (global, non-local) type variable. This step follows the promotion in step (2), which promotes some variables and might thereby transitively make others determined.

5. Find which type variables to quantify: all local (non-promoted) tyvars free in `tau_tvs`, or transitively connected to those by any constraint. (See the current `decideQuantifiedTyVars`.) The "transitively connected" bit is so that we quantify over `b` in `C a b => a -> a`, but not over `a` in `C a => b -> b`.

6. Find which constraints to quantify: the ones that mention any quantified tyvar. Simple.

We should remove all the defaulting from `quantifyTyVars`, and use a simplified version of (1-5) for the other calls to `quantifyTyVars`.
## Examples

We lay out a number of examples here and say how we would like them to behave. As we continue to refine the proposal, we will use these examples as a test suite.

```haskell
f1 x = x
```

We must infer `f1 :: forall (a :: TYPE LiftedRep). a -> a`.

Jun 2022: We get `x :: alpha` with `alpha :: TYPE rho` for a `ConcreteTv` `rho`. We will not quantify `rho`. This is top-level, so we default `rho := LiftedRep` because `rho :: RuntimeRep`. Good.
```haskell
f2 :: a -> a
f2 x = x
```

We must understand that type signature to be `f2 :: forall (a :: TYPE LiftedRep). a -> a`.

Jun 2022: When kind-checking `f2`'s signature, we get `a :: TYPE rho`. `rho` is free in a runtime-rep component and will not be generalised. This is top-level, so we default `rho := LiftedRep` because `rho :: RuntimeRep`. Good.
```haskell
f3 :: a -> a
f3 = f3
```

We still want to understand that type signature to be `f3 :: forall (a :: TYPE LiftedRep). a -> a`, even though the more general `f3 :: forall r (a :: TYPE r). a -> a` is possible. (RAE: What goes wrong if we infer the more general type here? Nothing, but there seems to be no way to do this while retaining the principle that a type signature is authoritative. A design that allowed generalisation of `f3`'s type would seem to also do the same for `f2`, which would not work.)

Jun 2022: Same as `f2`, because the action is all in the type signature.
```haskell
f3b :: a -> a
f3b (x :: (b :: TYPE IntRep)) = x
```

It would be possible to allow the type signature to promote the unknown kind of `a`, learning what it is in the definition. But we choose not to do this: it seems more complicated for little gain. Let's reject. This also retains the principle that a type signature stands alone: it tells you the full type of the variable, without any wiggle room.

Jun 2022: We indeed reject.
```haskell
f4 = (f4 :: a -> a)
```

The expression signature `:: a -> a` means `forall (a :: TYPE r0). a -> a`. It seems that inferring either `f4 :: forall r (a :: TYPE r). a -> a` or `f4 :: forall (a :: TYPE LiftedRep). a -> a` would be acceptable.

Jun 2022: When kind-checking that signature, we get `a :: TYPE rho`. `rho` is free in a runtime-rep component, and is thus not generalised in the signature (we do normally do kind-generalisation in expression type signatures). We promote `rho`. We then instantiate `f4 :: forall a. a -> a` to have type `alpha -> alpha` (for `alpha :: TYPE rho`). There are no constraints, so we now must infer the type of `f4`. `rho` is free in a runtime-rep component of `alpha -> alpha` and so is not generalised. The definition of `f4` is top-level, and so we default `rho := LiftedRep` because `rho :: RuntimeRep`. Done. (We could imagine not looking in runtime-rep components in `simplifyInfer`, and thus inferring a more general type here, but there seems to be little incentive for this wrinkle. Let's not.)
```haskell
f4b :: forall (a :: TYPE IntRep). a -> a
f4b = (f4b :: b -> b)
```

The behaviour of this example depends on our defaulting-vs-promoting strategy for expression type signatures. Either choice seems palatable here. If we default in expression type signatures, this will be rejected. If we promote, this will be accepted.

Jun 2022: The signature `b -> b` becomes `forall b. b -> b` (by the renamer). Then we infer `b :: TYPE rho`. `rho` is free in a runtime-rep component and so is not generalised. The expression signature is not top-level, and so we promote (no defaulting yet). We soon unify `rho := IntRep` and all is well.
```haskell
f5 :: forall (a :: TYPE (F Int)). a -> a
f5 = f5
```

We want to accept, with `f5` having the type given. The user is asking for something unusual, but let's not deny them. Note that `f5 x = x` (or any other definition that binds the argument) would be rejected, as the type is insufficiently concrete.

```haskell
f6 :: forall x (a :: TYPE (F x)). a -> a
f6 = f6  -- cannot bind the argument here
```

This, too, should be accepted.

```haskell
f7 :: forall r (a :: TYPE r). a -> a
f7 = f7  -- cannot bind the argument here
```

This must be accepted in order to support user-written levity polymorphism.
```haskell
f8 = (undefined :: forall (a :: TYPE (F Int)). a -> a)
```

It's not clear what to do here. The type given might just be too exotic to infer, especially if there is no equation for `F Int`. Rejection is OK. It should probably be accepted if `F Int = LiftedRep`. Maybe even if `F Int` reduces to some other concrete `RuntimeRep`.

```haskell
f9 = (undefined :: forall r (a :: TYPE r). a -> a)
```

This should be accepted, but with `f9 :: forall (a :: TYPE LiftedRep). a -> a` inferred. The RHS will just be instantiated appropriately.

```haskell
f10 = (f10 :: forall r (a :: TYPE r). a -> a)
```

This should be rejected, as `f10`'s type is not as general as its expression type signature.

```haskell
f11 = (undefined :: forall x (a :: TYPE (F x)). a -> Proxy x -> a)
```

This is unclear. We should probably treat it like `f8`: maybe accept and maybe not, but `f11` is probably more specific than the type given would suggest.
```haskell
type Phant x = LiftedRep

f12 = (f12 :: forall x (a :: TYPE (Phant x)). a -> Proxy x -> a)
```

This should be accepted. We should do our analysis after expanding type synonyms. If we did it beforehand, then we might think we need to promote or default `x`, which would make `f12` more specific than its expression type signature.

```haskell
type family TF1 a where
  TF1 _ = LiftedRep

f13 = (f13 :: forall x (a :: TYPE (TF1 x)). a -> Proxy x -> a)
```

This is unclear. Do we normalise type families before determining free variables? If we do, then we're liable to be affected by givens (or the lack thereof) in our analysis.
```haskell
f14 :: forall (r :: RuntimeRep) (a :: TYPE r). a -> a
f14 = g
  where
    g :: b -> b
    g = g
```

This is unclear. We know that the variable `b` in the type of `g` should not be representation-polymorphic. But could its representation be the `r` in scope? Put another way, if we're not quantifying a runtime-rep component, does that mean that the runtime-rep component is necessarily concrete?
## Properties

If the above examples are unit tests against our ideas, then here are property-based tests.

Order independence. The type inferred for a variable definition should not depend on whether variables have been unified eagerly or are delayed. Similarly, it should not depend on whether type families have been reduced or not.