pattern (Find_in env) val <- _v | Just val <- lookup env _v
I assume that still makes it a valid "loop"?
Well done, yes that's the expected behaviour for this variety of loopiness. IOW once you switch on UndecidableInstances, it's up to you to make sure your instance constraints don't form a loop. In general there might be a long chain of one constraint leading to another instance, to another, to another ... The best GHC can do is tell you what's on the top of the stack when it overflows.
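To make that concrete, here's a minimal sketch of my own (not the ticket's example) of an instance whose constraint leads straight back to itself:

```haskell
{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

class C a where
  witness :: a -> a

-- A well-behaved instance, for contrast:
instance C Bool where
  witness = id

-- The context is no smaller than the head -- exactly what
-- UndecidableInstances permits, and why solving C [x] never terminates:
instance C [x] => C [x] where
  witness = witness

main :: IO ()
main = print (witness True)   -- fine: uses the C Bool instance
-- But demanding the list instance, e.g.
--   oops = witness "hello"
-- sends GHC reducing C [Char] to C [Char] to ... until the reduction
-- stack overflows; the error reports the constraint on top of the stack.
```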
Thanks @Jade, the offending text is still in the docos and is still wrong. It's tricky cross-referencing the old to new User Guide structure, so it might have been 'moved' but not '_re_moved'.
Would you be able to try one thing please: does @simonpj's extra hack as at GHC 9.6 still cause loopiness?
If so, this ticket still stands. By all means amend the example in the current docos, if you're able.
The example in the User's Guide of an explicitly bidir Pattern Syn seems tantalisingly close to being written as implicitly bidir:
pattern HeadC x <- x:xs where HeadC x = [x]
True that the top line alone can't be turned into implicit bidir -- "it wouldn't specify a value for the ⟨xs⟩ on the right-hand side.", as the Guide says. But that `xs` is just clutter, we're never going to use it, so a wildcard works just as well:
pattern HeadC x <- x:_ where ...
What's more, a lazy pattern also works just as well:
pattern HeadC x <- x: ~[] where ...
> (\(HeadC x1) -> x1) "hello" -- ===> 'h'
The PattSyn just isn't interested in the tail of the list.
So if only I could go bidirectional:
pattern HeadC x = x: ~[]
With the rule that when converting the PattSyn to a builder, drop the lazy `~`.
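Putting the pieces together: the lazy-tail version does work today as an explicitly bidirectional synonym. A small sketch; the builder is exactly what the hoped-for implicit form would derive by dropping the `~`:

```haskell
{-# LANGUAGE PatternSynonyms #-}

-- Matcher ignores the tail via the lazy pattern; builder wraps a singleton.
pattern HeadC :: a -> [a]
pattern HeadC x <- x : ~[] where
  HeadC x = [x]

headOf :: [a] -> a
headOf (HeadC x) = x    -- never forces the tail

main :: IO ()
main = do
  print (headOf "hello")   -- 'h'
  print (HeadC 'a')        -- "a"
```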
Another request here in the same ball park. That is, not to put constructors on the LHS, but to allow multiple unidirectional `<-` equations, with different data constructors on the RHS.
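To illustrate the request (my own sketch, not taken from the other ticket): the multi-equation syntax below is hypothetical and not accepted by GHC; the nearest legal encoding today routes the multiple-constructor logic through a view pattern:

```haskell
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}

data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Hypothetical multi-equation unidirectional synonym (NOT valid today):
--   pattern Leftmost x <- Leaf x
--   pattern Leftmost x <- Node (Leftmost x) _

-- Today's workaround: one equation, dispatching via an ordinary function.
leftmost :: Tree a -> a
leftmost (Leaf x)   = x
leftmost (Node l _) = leftmost l

pattern Leftmost :: a -> Tree a
pattern Leftmost x <- (leftmost -> x)

main :: IO ()
main = print ((\(Leftmost x) -> x) (Node (Leaf (1 :: Int)) (Leaf 2)))  -- 1
```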
(Thanks Simon. Heads up that I'll be travelling from mid-October on. I'll be able to respond to an occasional message; but not put in concentrated feedback on any proposal. I'm satisfied the prototype for my proposal is proving reasonably robust over the use-cases I wanted to cover. And is rejecting the gnarly examples I wanted to reject.)
Thanks, hmm. I concocted this example apropos of a discussion elsewhere about cognitive overload; also ref the proposal to do away with punning on data constructors/type constructors. This variant gives the same confusion:
otherT2 MkT{ fooT, barT = noname } = MkT{ barT, .. }
So you might not like the design, but the documentation seems clear enough, doesn't it?
I was in two minds whether I should report this as a fault, or as a doco weakness. I'm still in two minds. My (overly-naïve as it turns out) prior expectation was:
- `barT` is a shorthand for `barT = barT`; and
- `..` is a shorthand for `, fooT = fooT`, for all fields not appearing so far; and

Part of 15.5.5's wording is almost identical to what you quote for 15.5.4; but isn't accurate -- it's limited by a later point:
The expansion is purely syntactic, so the record wildcard expression refers to the nearest enclosing variables that are spelled the same as the omitted field names.
This in 15.5.5 is just not true:
These rules restrict record wildcards to the situations in which the user could have written the expanded version.
I could have written (I did!) `... = MkT{ barT, .. }` or `... = MkT{ barT = barT, .. }`.
It's only further down we see 15.5.5's 'cannot refer to a top-level binding' -- which seems sensible. Then 15.5.4's absence of such a restriction (and therefore non-sensible behaviour) is a wot? that I think is not pointed out sufficiently strongly. 15.5.5 is explicit:
omitting `c` since the variable `c` is not in scope (apart from the binding of the record selector `c`, of course)
15.5.4 could be more explicit, using the same example as 15.5.5:
f a b = R{ a, c }
is probably not what you meant; you'll get a rejection that `c` is ill-typed, and it's not equivalent to `f a b = R{ .. }`.
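For concreteness, a sketch of the Guide's `R` example as I read it (the rejected line kept as a comment):

```haskell
{-# LANGUAGE NamedFieldPuns, RecordWildCards #-}

data R = R { a, b, c :: Int }

-- RecordWildCards: c is not in scope as a variable, so it's simply omitted
-- (GHC warns with -Wmissing-fields); equivalent to R{ a = a, b = b }.
f :: Int -> Int -> R
f a b = R{ .. }

-- NamedFieldPuns: the pun expands c to c = c, where the only c in scope is
-- the record selector c :: R -> Int -- ill-typed in an Int field, so rejected:
-- g a b = R{ a, c }

main :: IO ()
main = print (a (f 1 2) + b (f 1 2))  -- 3; field c never forced
```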
(I suspect this is either a bug or a mis-documentation.)
GHC Users' Guide 15.5.4 re NamedFieldPuns and 15.5.5 re RecordWildCards, 'More details', fourth bullet.
{-# LANGUAGE NamedFieldPuns, RecordWildCards #-}
module ConfusingRecSyntax where
data T a = MkT{ fooT :: Int, barT :: a }
someT = MkT{ fooT = 5, barT = (+ 1) }
otherT MkT{ fooT } = MkT{ barT, .. } -- barT not bound on lhs
-- infer otherT :: T a1 -> T (T a2 -> a2)
gnaa = barT (otherT someT) someT 5 -- yields 6
The Users' Guide says:
- When record wildcards are used in record construction, a field `f` is initialised only if `f` is in scope, and is not imported or bound at top level. For example, `f` can be bound by an enclosing pattern match or let/where-binding.
In that equation for `otherT`, there's no pattern match or let/where binding for `barT`. So GHC falls back to the top-level binding (field selector) in the decl `data T`.
Strictly speaking, it's the NamedFieldPun that's using this top-level binding; but § 15.5.4 is unspecific about where it gets its binding from.

§ 15.5.4 should use the same wording as 15.5.5 about the binding; both should be more explicit that "not ... bound at top level" doesn't apply for record-selector top-level functions.
Or GHC's behaviour is wrong: that use of `barT` should be rejected/warned:

warning: [-Wmissing-fields]
* Fields of `MkT' not initialised: barT
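To spell out what the 'purely syntactic' expansion does to the example above (a sketch; `otherT'` is my name for the hand-expanded version):

```haskell
data T a = MkT{ fooT :: Int, barT :: a }

-- Hand-expansion of: otherT MkT{ fooT } = MkT{ barT, .. }
-- The barT on the right can only be the top-level selector barT :: T a -> a,
-- which is why GHC infers T a1 -> T (T a2 -> a2).
otherT' :: T a1 -> T (T a2 -> a2)
otherT' MkT{ fooT = fooT } = MkT{ barT = barT, fooT = fooT }

someT' :: T (Int -> Int)
someT' = MkT{ fooT = 5, barT = (+ 1) }

main :: IO ()
main = print (barT (otherT' someT') someT' 5)  -- 6, matching gnaa above
```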
I think we've pretty much talked this to a standstill.
keep the fundep interactions between constraints
So we haven't lost confluence.
If you want improvement to be `hmmph`-like, subject to explicit signatures, you don't have a principal type. In the absence of signatures, the `hmmph` did improve two usages to the same type. Is this losing principal types?
Does this give any insight for SetField? Not much:

- `SetField` instances are non-overlapping, so we don't have to worry about non-Full FunDeps.
- _s_ource and _t_arget record types with explicit constructors in the instance head.

Yes, those truncated (as I see it) FunDeps Adam advocates are problematic for coverage. I experimented with just nixing the Instance Coverage Condition altogether:

- `SetField` examples were accepted; and improvement was effective -- including for changing a phantom type.
- I put FunDep `x s b -> t` (include field type `b`, with non-wiggly arrow); and used `instance SetField "foo" (Tph a) (Tph _b) Char` (non-covered -- where `Tph` is the type with the phantom).
- An `x t -> s` FunDep is needed for composing updates (equivalent to `show . read`). And it's necessarily uncovered, as the field's type is a parameter to the datatype. (Contrast: the original Lenses-style approach would be `SetField x s t a b`, where `a` is the to-be-updated field's type; then we get a covered FunDep `x t a -> s`.)
- Adam's 'hmm' example for a phantom-typed datatype wouldn't improve to two different typings for the phantom -- again, putting FunDep `x s b -> t` didn't make this worse. This definition works, again with either form of the FunDep:
hmmph v r = (setField "foo" v r :: Tph b1, setField "foo" v r :: Tph b2)
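For reference, a GHC-checkable sketch of the `x s b -> t` (non-wiggly) flavour, with my own names throughout (`Rec`, `fooF`; the `setField` signature is an assumption). The instance only passes GHC's coverage condition because the phantom `p` is preserved; a phantom-changing head with result `Rec q b` would leave `q` uncovered, which is exactly the case under discussion:

```haskell
{-# LANGUAGE AllowAmbiguousTypes, DataKinds, FlexibleInstances,
             FunctionalDependencies, KindSignatures, TypeApplications #-}

import GHC.TypeLits (Symbol)

-- p is a phantom; a is the (only) field's type
data Rec p a = MkRec { fooF :: a }

class SetField (x :: Symbol) s t b | x s b -> t where
  setField :: b -> s -> t

-- Covered: every variable of t = Rec p b appears in {x, s, b}.
instance SetField "foo" (Rec p a) (Rec p b) b where
  setField v MkRec{} = MkRec v

main :: IO ()
main = print (fooF (setField @"foo" True (MkRec (3 :: Int) :: Rec Char Int)))
-- prints True: the field changes type Int -> Bool, the phantom Char stays put
```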
So in short, I remain unconvinced about introducing the drama of a whole 'nuvva style of arrow -- as opposed to merely marking the instance as `UNCOVERED`.
I don't see any motivation except for the specific case of `SetField`'s disappointed[**] FunDep `x t ~> s` -- for which I advocate introducing `s`'s to-be-updated field type as `a`.
[**] "disappointed" is a literary reference: a pier as a disappointed bridge, James Joyce.
Thank you to @simonpj for more insight as to GHC's "two entirely separate steps"; and it turns out Hugs behaves similarly (although it's much more restrictive as to what instances are accepted in the first place). I've modified Hugs per my proposal. So it's accepting key example 4, without any special form of FunDep arrow:
class TypeEq a b (res :: Bool) | a b -> res -- not wiggly
instance {-# OVERLAPPING #-} TypeEq a a True
instance {-# OVERLAPPABLE #-} TypeEq a b False
(mutatis mutandis: `DataKinds`.) I also implemented (a kludged form of) Instance Guards -- just because I could, having immersed myself in that part of the compiler. So this is accepted:
class AddNat (x :: Nat) (y :: Nat) (z :: Nat) | x y -> z
, x z -> y
, y z -> x
instance AddNat Z y y
instance {-# OVERLAPPABLE #-} (AddNat x' y z') => AddNat (S x') y (S z') | y /~ (S z')
The kludge is to factor guards into the overlap testing: if two instance heads unify but neither is a substitution instance of the other, and one has a guard, treat the guarded one as overlappable (less specific).
Modified Hugs effectively implements what Adam's calling 'Single Choice Inference':

Step 1. Improvement of a Wanted.

Step 2. Discharging a Wanted using an instance. Instance selection I haven't changed.

Not doing what GHC is currently doing: improving a Wanted via a FunDep from some (in effect) random instance head ...
I'm puzzled why GHC is doing this: AFAIAA, GHC has never observed the SICC; so improving a Wanted from an instance has always been unreliable(?)
In contrast, Hugs has always enforced the SICC, so improving from an instance head would be legitimate. I tried an experiment: suppress Hugs' instance consistency check [**], and see what happens:
import TypeCast
data TFalse = TFalse deriving (Eq, Show)
data TTrue = TTrue deriving (Eq, Show)
class TypeEq a b e | a b -> e where
tEq :: a -> b -> e
instance TypeEq a a TTrue where tEq _ _ = TTrue -- Strong evidence
-- instance (TypeCast TTrue e) => TypeEq a a e where -- Mild evidence
-- tEq _ _ = typeCast TTrue
-- instance TypeEq a b TFalse -- no overlap (by current Hugs rules)
-- ===> trying tEq True False always rejected
-- 'Constraints are not consistent with FunDeps'
instance (TypeCast TFalse e) => TypeEq a b e where -- strictly overlappable [***]
tEq _ _ = typeCast TFalse
-- tests
-- tEq True False ===> :: TTrue
-- tEq True () ===> :: TFalse
-- > :t tEq True False :: TFalse ===> rejected Constraints are not consistent ... -- Good!
-- > :t tEq True () :: TTrue ===> rejected can't TypeCast TTrue
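The `import TypeCast` at the top presumably supplies the classic Kiselyov/HList-style class; a minimal transcription (my assumption as to its contents) so the example is self-contained:

```haskell
{-# LANGUAGE FlexibleInstances, FunctionalDependencies, UndecidableInstances #-}

-- TypeCast a b asserts a and b are the same type, witnessed by a coercion.
-- The two auxiliary classes delay unifying a with b until after instance
-- selection -- that indirection is the whole trick.
class TypeCast   a b   | a -> b, b -> a     where typeCast   :: a -> b
class TypeCast'  t a b | t a -> b, t b -> a where typeCast'  :: t -> a -> b
class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t -> a -> b
instance TypeCast'  () a b => TypeCast  a b    where typeCast   = typeCast' ()
instance TypeCast'' t a b  => TypeCast' t a b  where typeCast'  = typeCast''
instance TypeCast'' () a a where typeCast'' _ = id
```

Usage is only well-typed when the two arguments can be the same type, e.g. `typeCast 'x' :: Char`.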
- The instance marked `Mild evidence` is the same as what works best in GHC. That is, `tEq True False :: TFalse` is rejected by both GHC and Hugs: cannot cast to `TTrue`. (It's evidence I didn't mess up, because it's not consistent with the `TFalse` instance, by SICC.)
- `Strong evidence` is what GHC gets wrong: accepting `tEq True False :: TFalse` and readily returning that `True` is not the same type as `False`. Hugs, though, again rejects: `Constraints are not consistent with functional dependency` -- that is, with `TypeEq`-generated constraints -- Good!
- Is Hugs improving a Wanted from the `TTrue` instance? The compiler code is too dense for me to be sure. It is at least not rejecting that instance even though the Wanted has `TFalse`; and keeping that instance alive/recognising the FunDep should improve to `TTrue`.
[**] You'll chiefly have to trust I didn't foul anything up. I commented-out the consistency test and error reporting.
[***] I didn't change Hugs' overlap rules, which compare whole instance heads, and require they be in strict substitution ordering.
Addit: (Should have tried this first.) With consistency checking suppressed, Hugs accepts the Trac #10675 O.P. example; and infers `f10675 :: Maybe a -> [Maybe a]` -- just the same as GHC. "Bizarre. Where did that `Maybe` come from?"
My `Chaos` linked from Key Example # 3: Hugs accepts the instance decls, but all the `OVERLAPPABLE` ones are unreachable, so rejects any program that's trying to use them. Again I'm thinking the key factor here is that #10675's FunDep is non-Full.
(see the `hmm` example on the wiki page)

Yeah, sorry: I still don't understand why you want to set the same value into the same record at the same field, but get two different resulting record types.
- Could the change of phantom type be applied by a different function that wraps applying the `setField`?
data FooRec p a = MkFooRec { foo :: a } -- so `p` is a phantom
hmmish (v :: b) (r :: FooRec p1 _a) = ((setField @"foo" v r), changePhantom @t (setField @"foo" v r))
changePhantom :: forall p2. FooRec _p1 a -> FooRec p2 a
changePhantom x = x -- whichever of these does the job
changePhantom (MkFooRec x') = (MkFooRec x') -- }
changePhantom (MkFooRec {foo = x'}) = MkFooRec {foo = x'} -- }
Assuming `t` is in the type environment. Then the 'wrapper' `changePhantom` merely pushes `t` into the `setField`'s result type. `setField` itself doesn't know or understand what's going on.
Thanks @adamgundry, welcome back. Yes I had quite a bit of discussion with @simonpj: drafting a proposal seemed a better way forward than lengthy comments here. The proposal is specifically for how to make FunDeps in general more principled; it doesn't directly tackle Wiggly arrows.
I suggest the quickest way to catch up is reading through my questions wrt Key Example 3. At least this started off being questions, then turned into more of a motivation for not doing what GHC is currently doing.
Improving a Wanted via a FunDep from some (in effect) random instance head is alarmingly unpredictable when GHC doesn't enforce the SICC -- nor even the LICC. Is that improvement what you mean by "wanted-wanted interactions"? Then I propose not improving anything (whatever flavour of arrow) until definitely matching a single instance. (That'll need different rules for selecting instances in presence of FunDeps, see my proposal.)
Then Dysfunctional/liberaller-than-liberal coverage can be a relaxation at the level of instances only; no need for a new style of arrow.
I've also been adding comments/questions on most pages of the wiki. They should all be marked AntC (or check the edit history). If they make sense, by all means merge them in.