```haskell
map :: forall (v1 :: Levity) (v2 :: Levity) (v3 :: Levity) (v4 :: Levity)
              (a :: TYPE ('BoxedRep v1)) (b :: TYPE ('BoxedRep v2)).
       (a -> b) -> List v3 a -> List v4 b
```

Here, `v1` and `v2` are the levities of `a` and `b`, respectively, and `v3` and `v4` are the levities of the argument and result lists, respectively. As previously articulated, we can't have proper levity-polymorphic functions; each combination of levities must instead be specialized. With two choices for each of the four levity variables, that means **sixteen** `map`s. That's quite a high number, but I (Richard) think all of the variants make sense. I'd love for there to be some optimization to prevent the need to export 16 definitions here, but I don't see one. In any case, I do think this would work swimmingly for users. Is it worth the cost? That's up for debate.
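
To make the combinatorics concrete, here is a sketch of two of the sixteen specialisations, written against today's `UnliftedDatatypes` extension (GHC 9.2+) rather than the hypothetical levity-polymorphic `List`. The names (`ListL`, `ListU`, `mapLL`, `mapUU`) are illustrative only:

```haskell
{-# LANGUAGE UnliftedDatatypes #-}
{-# LANGUAGE StandaloneKindSignatures #-}
module SpecialisedMaps where

import Data.Kind (Type)
import GHC.Exts (UnliftedType)

-- Spine levity 'Lifted': the ordinary lazy list.
data ListL a = NilL | ConsL a (ListL a)

-- Spine levity 'Unlifted': the spine is always evaluated.
type ListU :: Type -> UnliftedType
data ListU a = NilU | ConsU a (ListU a)

-- Two of the sixteen variants (both with lifted elements); the other
-- fourteen vary the element levities and mix the two spine levities.
mapLL :: (a -> b) -> ListL a -> ListL b
mapLL _ NilL         = NilL
mapLL f (ConsL x xs) = ConsL (f x) (mapLL f xs)

mapUU :: (a -> b) -> ListU a -> ListU b
mapUU _ NilU         = NilU
mapUU f (ConsU x xs) = ConsU (f x) (mapUU f xs)
```

Note that `mapUU` is necessarily eager in the spine: each recursive call is evaluated before `ConsU` can be built, which is exactly the operational difference the specialisations exist to capture.
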
SG: IIUC, the difference in operational semantics mainly arises when we return levity-polymorphic expressions from a function (in which case we might or might not need to `seq` them) and when we store them in constructor fields. To reduce the specialisation obligation for the latter, I could imagine an approach like `class LevityPolymorphic (v :: Levity) where mseq :: a -> b -> b` (sketched below), using `mseq` instead of `seq` in constructor wrappers (well, basically anywhere we need to convert a levity-polymorphic thing into an unlifted thing). Maybe we would actually need to make this a binary relation (thinking of cases where we need to convert between two different levity-polymorphic things, which probably subsumes all of the cases above), similar to `Coercible` but not actually zero-cost, I'm afraid. The different variants of `map` above then arise as specialisations of that. For returning levity-polymorphic expressions, I could imagine a worker/wrapper approach that exposes the `mseq`, so that the wrapper always inlines and specialises and the runtime overhead at monomorphic call sites should be negligible.

[Sixten](https://github.com/ollef/sixten)'s *type representation polymorphism* is an implicit implementation of the same feature: basically, pass a dictionary containing representation details (such as size) to every polymorphic function. (This is only really interesting for representation polymorphism, which is an addendum to levity polymorphism. It does mean that the *calling convention* depends on these dictionary arguments, in that the caller doesn't statically know the type and size of its arguments on the stack/registers, kind of like the format-string parameter to C's variadic `printf`. I imagine that rules out arity expansion for these functions, but that's fine; that's the price we have to pay for *representation* polymorphism. Specialisation to e.g. `BoxedRep Lifted` is always possible.)
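
A rough sketch of what that class might look like. The class is hypothetical and not part of GHC; the name and `mseq` come from the comment above, and the two instances are my reading of its intent:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE ExplicitForAll #-}
module MSeq where

import GHC.Exts (Levity (..), RuntimeRep (..), TYPE)

-- One instance per levity, so a 'LevityPolymorphic v' constraint
-- carries exactly the specialisation information 'mseq' needs.
class LevityPolymorphic (v :: Levity) where
  mseq :: forall (a :: TYPE ('BoxedRep v)) b. a -> b -> b

instance LevityPolymorphic 'Lifted where
  mseq = seq     -- lifted values may be thunks, so force them

instance LevityPolymorphic 'Unlifted where
  mseq _ b = b   -- unlifted values are already evaluated: a no-op
```

A constructor wrapper for an unlifted field would then take a `LevityPolymorphic v` constraint and call `mseq` before storing the value, so only the dictionary varies with the levity rather than the whole function.
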
**Conjecture:** If we take this idea to its logical extreme -- making all functions levity-polymorphic in all of the datatypes they mention -- then RAE believes that using `!` to control kinds in datatype definitions would not break any code.
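
As an illustration of what the conjecture claims (my example; the alternative declaration in the comment uses the hypothetical `!`-controls-kinds reading discussed earlier on this page, not anything GHC accepts today):

```haskell
-- Today: '!' means MkSP's worker forces both fields, but the field
-- types still have lifted kind.
data StrictPair a b = MkSP !a !b

-- Under the '!'-controls-kinds reading, the same declaration would
-- instead give the fields unlifted kind, roughly:
--
--   data StrictPair (a :: TYPE ('BoxedRep 'Unlifted))
--                   (b :: TYPE ('BoxedRep 'Unlifted)) = MkSP a b
--
-- The conjecture: if every function is levity-polymorphic in the
-- datatypes it mentions, no caller can observe which reading is in
-- force, because the value in the field is evaluated either way.
```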