.. _ghc-language-features:
Language extensions
===================
.. index::
single: language, GHC extensions
.. toctree::
:maxdepth: 1
exts/intro
exts/syntax
exts/import_export
exts/types
exts/records
exts/deriving
exts/patterns
exts/typeclasses
exts/literals
exts/constraints
exts/type_signatures
exts/bindings
exts/template_haskell
exts/strict
exts/parallel
exts/primitives
exts/ffi
exts/safe_haskell
exts/misc
exts/pragmas
.. _ambiguity:
Ambiguous types and the ambiguity check
---------------------------------------
.. extension:: AllowAmbiguousTypes
:shortdesc: Allow the user to write ambiguous types, and
the type inference engine to infer them.
:since: 7.8.1
Allow type signatures which would otherwise be rejected as ambiguous,
i.e. which appear to result in an unusable binding.
Each user-written type signature is subjected to an *ambiguity check*.
The ambiguity check rejects functions that can never be called; for
example: ::
f :: C a => Int
The idea is that there can be no legal calls to ``f`` because every call will
give rise to an ambiguous constraint. Indeed, the *only* purpose of the
ambiguity check is to report functions that cannot possibly be called.
We could soundly omit the ambiguity check on type signatures entirely,
at the expense of delaying ambiguity errors to call sites. Indeed, the
language extension :extension:`AllowAmbiguousTypes` switches off the ambiguity
check.
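
As a minimal sketch (the class ``C`` and its method are hypothetical, not taken
from the example above), the following compiles only when
:extension:`AllowAmbiguousTypes` is enabled, because ``f``\'s signature fails
the ambiguity check: ::

    {-# LANGUAGE AllowAmbiguousTypes #-}

    -- 'a' appears only in the constraint, so no call site can ever
    -- determine it: the signature is rejected by the ambiguity check
    -- unless AllowAmbiguousTypes is on.
    class C a where
      cMethod :: a -> Int

    f :: C a => Int
    f = 42

Calls to ``f`` still need some way to fix ``a``, for example a type
application together with a suitable instance (see below).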
Ambiguity can be subtle. Consider this example which uses functional
dependencies: ::
class D a b | a -> b where ..
h :: D Int b => Int
The ``Int`` may well fix ``b`` at the call site, so that signature
should not be rejected. Moreover, the dependencies might be hidden.
Consider ::
class X a b where ...
class D a b | a -> b where ...
instance D a b => X [a] b where...
h :: X a b => a -> a
Here ``h``\'s type looks ambiguous in ``b``, but here's a legal call: ::
...(h [True])...
That gives rise to a ``(X [Bool] beta)`` constraint, and using the
instance means we need ``(D Bool beta)`` and that fixes ``beta`` via
``D``\'s fundep!
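
Here is a compilable sketch of that situation (the method names and the
``D Bool Bool`` instance are made up for illustration); note that ``h``\'s
signature itself still needs :extension:`AllowAmbiguousTypes`, because the
ambiguity check cannot see the dependency hidden behind the instance: ::

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
                 FlexibleInstances, AllowAmbiguousTypes #-}

    class X a b where
      xOp :: a -> b -> a

    class D a b | a -> b where
      dOp :: a -> b

    instance D a b => X [a] b where
      xOp xs _ = xs

    instance D Bool Bool where
      dOp = not

    h :: X a b => a -> a
    h x = x

    -- (X [Bool] beta) is solved by the X instance, which needs
    -- (D Bool beta); D's fundep and the D Bool Bool instance then
    -- fix beta to Bool.
    legal :: [Bool]
    legal = h [True]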
Behind all these special cases there is a simple guiding principle.
Consider ::
f :: type
f = ...blah...
g :: type
g = f
You would think that the definition of ``g`` would surely typecheck!
After all ``f`` has exactly the same type, and ``g=f``. But in fact
``f``\'s type is instantiated and the instantiated constraints are solved
against the constraints bound by ``g``\'s signature. So, in the case of an
ambiguous type, solving will fail. For example, consider the earlier
definition ``f :: C a => Int``: ::
f :: C a => Int
f = ...blah...
g :: C a => Int
g = f
In ``g``\'s definition, we'll instantiate to ``(C alpha)`` and try to
deduce ``(C alpha)`` from ``(C a)``, and fail.
So in fact we use this as our *definition* of ambiguity: a type ``ty``
is ambiguous if and only if ``((undefined :: ty) :: ty)`` would fail to
typecheck. We use a very similar test for *inferred* types, to ensure
that they too are unambiguous.
*Switching off the ambiguity check.* Even if a function has an
ambiguous type according to the "guiding principle", it is possible that
the function is callable. For example: ::
class D a b where ...
instance D Bool b where ...
strange :: D a b => a -> a
strange = ...blah...
foo = strange True
Here ``strange``\'s type is ambiguous, but the call in ``foo`` is OK
because it gives rise to a constraint ``(D Bool beta)``, which is
soluble by the ``(D Bool b)`` instance.
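
Spelled out as a compilable sketch (the method ``dOp`` is hypothetical),
``strange``\'s signature needs :extension:`AllowAmbiguousTypes`, but the call
in ``foo`` typechecks without any further annotation: ::

    {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances,
                 AllowAmbiguousTypes #-}

    class D a b where
      dOp :: a -> b -> a

    -- This instance matches (D Bool beta) for *any* beta.
    instance D Bool b where
      dOp x _ = x

    strange :: D a b => a -> a
    strange x = x

    foo :: Bool
    foo = strange True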
Another way of getting rid of the ambiguity at the call site is to use
the :extension:`TypeApplications` extension to specify the types. For example: ::
class D a b where
h :: b
instance D Int Int where ...
main = print (h @Int @Int)
Here ``a`` is ambiguous in the definition of ``D`` but later specified
to be ``Int`` using type applications.
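
Filled out as a complete module (the instance body and ``main`` are
illustrative), the class declaration needs :extension:`AllowAmbiguousTypes`
because ``a`` does not occur in ``h``\'s type, and :extension:`TypeApplications`
then fixes ``a`` at the use site: ::

    {-# LANGUAGE MultiParamTypeClasses, AllowAmbiguousTypes,
                 TypeApplications #-}

    class D a b where
      h :: b

    instance D Int Int where
      h = 42

    -- h :: forall a b. D a b => b, so the first application fixes 'a'
    -- and the second fixes 'b'.
    main :: IO ()
    main = print (h @Int @Int)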
:extension:`AllowAmbiguousTypes` allows you to switch off the ambiguity check.
However, even with ambiguity checking switched off, GHC will complain about a
function that can *never* be called, such as this one: ::
f :: (Int ~ Bool) => a -> a
Sometimes :extension:`AllowAmbiguousTypes` does not mix well with :extension:`RankNTypes`.
For example: ::
foo :: forall r. (forall i. (KnownNat i) => r) -> r
foo f = f @1
boo :: forall j. (KnownNat j) => Int
boo = ....
h :: Int
h = foo boo
This program will be rejected as ambiguous because GHC will not unify
the type variables ``j`` and ``i``.
Unlike the previous examples, it is not currently possible
to resolve the ambiguity manually by using :extension:`TypeApplications`.
.. note::
*A historical note.* GHC used to impose some more restrictive and less
principled conditions on type signatures. For type
``forall tv1..tvn (c1, ..., cn) => type`` GHC used to require
a. that each universally quantified type variable ``tvi`` must be "reachable"
from ``type``, and
b. that every constraint ``ci`` mentions at least one of the universally
quantified type variables ``tvi``. These ad-hoc restrictions are
completely subsumed by the new ambiguity check.
.. _applicative-do:
Applicative do-notation
-----------------------
.. index::
single: Applicative do-notation
single: do-notation; Applicative
.. extension:: ApplicativeDo
:shortdesc: Enable Applicative do-notation desugaring
:since: 8.0.1
Allow use of ``Applicative`` ``do`` notation.
The language option :extension:`ApplicativeDo` enables an alternative translation for
the do-notation, which uses the operators ``<$>``, ``<*>``, along with ``join``
as far as possible. There are two main reasons for wanting to do this:
- We can use do-notation with types that are an instance of ``Applicative`` and
``Functor``, but not ``Monad``
- In some monads, using the applicative operators is more efficient than monadic
bind. For example, it may enable more parallelism.
Applicative do-notation desugaring preserves the original semantics, provided
that the ``Applicative`` instance satisfies ``<*> = ap`` and ``pure = return``
(these are true of all the common monadic types). Thus, you can normally turn on
:extension:`ApplicativeDo` without fear of breaking your program. There is one pitfall
to watch out for; see :ref:`applicative-do-pitfall`.
There are no syntactic changes with :extension:`ApplicativeDo`. The only way it shows
up at the source level is that you can have a ``do`` expression that doesn't
require a ``Monad`` constraint. For example, in GHCi: ::
Prelude> :set -XApplicativeDo
Prelude> :t \m -> do { x <- m; return (not x) }
\m -> do { x <- m; return (not x) }
:: Functor f => f Bool -> f Bool
This example only requires ``Functor``, because it is translated into ``(\x ->
not x) <$> m``. A more complex example requires ``Applicative``, ::
Prelude> :t \m -> do { x <- m 'a'; y <- m 'b'; return (x || y) }
\m -> do { x <- m 'a'; y <- m 'b'; return (x || y) }
:: Applicative f => (Char -> f Bool) -> f Bool
Here GHC has translated the expression into ::
(\x y -> x || y) <$> m 'a' <*> m 'b'
It is possible to see the actual translation by using :ghc-flag:`-ddump-ds`, but be
warned, the output is quite verbose.
Note that if the expression can't be translated into uses of ``<$>``, ``<*>``
only, then it will incur a ``Monad`` constraint as usual. This happens when
there is a dependency on a value produced by an earlier statement in the
``do``-block: ::
Prelude> :t \m -> do { x <- m True; y <- m x; return (x || y) }
\m -> do { x <- m True; y <- m x; return (x || y) }
:: Monad m => (Bool -> m Bool) -> m Bool
Here, ``m x`` depends on the value of ``x`` produced by the first statement, so
the expression cannot be translated using ``<*>``.
In general, the rule for when a ``do`` statement incurs a ``Monad`` constraint
is as follows. If the do-expression has the following form: ::
do p1 <- E1; ...; pn <- En; return E
where none of the variables defined by ``p1...pn`` are mentioned in ``E1...En``,
and ``p1...pn`` are all variables or lazy patterns,
then the expression will only require ``Applicative``. Otherwise, the expression
will require ``Monad``. The block may return a pure expression ``E`` depending
upon the results ``p1...pn`` with either ``return`` or ``pure``.
Note: the final statement must match one of these patterns exactly:
- ``return E``
- ``return $ E``
- ``pure E``
- ``pure $ E``
otherwise GHC cannot recognise it as a ``return`` statement, and the
transformation to use ``<$>`` that we saw above does not apply. In
particular, slight variations such as ``return . Just $ x`` or ``let x
= e in return x`` would not be recognised.
If the final statement is not of one of these forms, GHC falls back to
standard ``do`` desugaring, and the expression will require a
``Monad`` constraint.
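
For example, the following block satisfies the rule, so under
:extension:`ApplicativeDo` it typechecks with only an ``Applicative``
constraint (a minimal sketch; ``Form`` and ``validateForm`` are made-up
names): ::

    {-# LANGUAGE ApplicativeDo #-}

    data Form = Form String Int

    -- Neither statement mentions a variable bound by the other, the
    -- patterns are plain variables, and the block ends in 'pure E',
    -- so the desugaring is: Form <$> getName <*> getAge
    validateForm :: Applicative f => f String -> f Int -> f Form
    validateForm getName getAge = do
      name <- getName
      age  <- getAge
      pure (Form name age)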
When the statements of a ``do`` expression have dependencies between
them, and ``ApplicativeDo`` cannot infer an ``Applicative`` type, it
uses a heuristic algorithm to try to use ``<*>`` as much as possible.
This algorithm usually finds the best solution, but in rare complex
cases it might miss an opportunity. There is an algorithm that finds
the optimal solution, provided as an option:
.. ghc-flag:: -foptimal-applicative-do
:shortdesc: Use a slower but better algorithm for ApplicativeDo
:type: dynamic
:reverse: -fno-optimal-applicative-do
:category: optimization
:since: 8.0.1
Enables an alternative algorithm for choosing where to use ``<*>``
in conjunction with the ``ApplicativeDo`` language extension.
This algorithm always finds the optimal solution, but it is
expensive: ``O(n^3)``, so this option can lead to long compile
times when there are very large ``do`` expressions (over 100
statements). The default ``ApplicativeDo`` algorithm is ``O(n^2)``.
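
Being a dynamic flag, it can be enabled for a single module with an
``OPTIONS_GHC`` pragma (the module name here is hypothetical): ::

    {-# LANGUAGE ApplicativeDo #-}
    {-# OPTIONS_GHC -foptimal-applicative-do #-}

    -- Only this module pays the O(n^3) cost; other modules keep the
    -- default algorithm.
    module BigDoBlocks where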
.. _applicative-do-strict:
Strict patterns
~~~~~~~~~~~~~~~
A strict pattern match in a bind statement prevents
``ApplicativeDo`` from transforming that statement to use
``Applicative``. This is because the transformation would change the
semantics by making the expression lazier.
For example, this code will require a ``Monad`` constraint::
> :t \m -> do { (x:xs) <- m; return x }
\m -> do { (x:xs) <- m; return x } :: Monad m => m [b] -> m b
but making the pattern match lazy allows it to have a ``Functor`` constraint::
> :t \m -> do { ~(x:xs) <- m; return x }
\m -> do { ~(x:xs) <- m; return x } :: Functor f => f [b] -> f b
A "strict pattern match" is any pattern match that can fail. For
example, ``()``, ``(x:xs)``, ``!z``, and ``C x`` are strict patterns,
but ``x`` and ``~(1,2)`` are not. For the purposes of
``ApplicativeDo``, a pattern match against a ``newtype`` constructor
is considered strict.
When there's a strict pattern match in a sequence of statements,
``ApplicativeDo`` places a ``>>=`` between that statement and the one
that follows it. The sequence may be transformed to use ``<*>``
elsewhere, but the strict pattern match and the following statement
will always be connected with ``>>=``, to retain the same strictness
semantics as the standard do-notation. If you don't want this, simply
put a ``~`` on the pattern match to make it lazy.
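
The GHCi examples above can be written as top-level bindings (the names are
made up; a tuple pattern counts as strict here even though the match cannot
fail): ::

    {-# LANGUAGE ApplicativeDo #-}

    -- The strict tuple match keeps a (>>=) between the two statements,
    -- so a Monad constraint is required.
    pairStrict :: Monad m => m (Int, Int) -> m Int -> m Int
    pairStrict mp my = do
      (a, _) <- mp
      y      <- my
      return (a + y)

    -- Making the match lazy lets ApplicativeDo use (<*>), so
    -- Applicative is enough.
    pairLazy :: Applicative f => f (Int, Int) -> f Int -> f Int
    pairLazy fp fy = do
      ~(a, _) <- fp
      y       <- fy
      pure (a + y)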
.. _applicative-do-pitfall:
Things to watch out for
~~~~~~~~~~~~~~~~~~~~~~~
Your code should just work as before when :extension:`ApplicativeDo` is enabled,
provided you use conventional ``Applicative`` instances. However, if you define
a ``Functor`` or ``Applicative`` instance using do-notation, then it will likely
get turned into an infinite loop by GHC. For example, if you do this: ::
instance Functor MyType where
fmap f m = do x <- m; return (f x)
Then applicative desugaring will turn it into ::
instance Functor MyType where
fmap f m = fmap (\x -> f x) m
And the program will loop at runtime. Similarly, an ``Applicative`` instance
like this ::
instance Applicative MyType where
pure = return
x <*> y = do f <- x; a <- y; return (f a)
will result in an infinite loop when ``<*>`` is called.
Just as you wouldn't define a ``Monad`` instance using the do-notation, you
shouldn't define ``Functor`` or ``Applicative`` instances using do-notation (when
using ``ApplicativeDo``) either. The correct way to define these instances in
terms of ``Monad`` is to use the ``Monad`` operations directly, e.g. ::
instance Functor MyType where
fmap f m = m >>= return . f
instance Applicative MyType where
pure = return
(<*>) = ap
.. _arrow-notation:
Arrow notation
==============
.. extension:: Arrows
:shortdesc: Enable arrow notation extension
:since: 6.8.1
Enable arrow notation.
Arrows are a generalisation of monads, introduced by John Hughes. For
more details, see
- “Generalising Monads to Arrows”, John Hughes, in Science of Computer
Programming 37, pp. 67–111, May 2000. The paper that introduced arrows:
a friendly introduction, motivated with programming examples.
- “\ `A New Notation for
Arrows <http://www.soi.city.ac.uk/~ross/papers/notation.html>`__\ ”,
Ross Paterson, in ICFP, Sep 2001. Introduced the notation described
here.
- “\ `Arrows and
Computation <http://www.soi.city.ac.uk/~ross/papers/fop.html>`__\ ”,
Ross Paterson, in The Fun of Programming, Palgrave, 2003.
- “\ `Programming with
Arrows <http://www.cse.chalmers.se/~rjmh/afp-arrows.pdf>`__\ ”, John
Hughes, in 5th International Summer School on Advanced Functional
Programming, Lecture Notes in Computer Science vol. 3622, Springer,
2004. This paper includes another introduction to the notation, with
practical examples.
- “\ `Type and Translation Rules for Arrow Notation in
GHC <http://www.haskell.org/ghc/docs/papers/arrow-rules.pdf>`__\ ”,
Ross Paterson and Simon Peyton Jones, September 16, 2004. A terse
enumeration of the formal rules used (extracted from comments in the
source code).
- The arrows web page at
`http://www.haskell.org/arrows/ <http://www.haskell.org/arrows/>`__.
With the :extension:`Arrows` extension, GHC supports the arrow notation described in
the second of these papers, translating it using combinators from the
:base-ref:`Control.Arrow.` module.
What follows is a brief introduction to the notation; it won't make much
sense unless you've read Hughes's paper.
The extension adds a new kind of expression for defining arrows:
.. code-block:: none
exp10 ::= ...
| proc apat -> cmd
where ``proc`` is a new keyword. The variables of the pattern are bound
in the body of the ``proc``-expression, which is a new sort of thing
called a command. The syntax of commands is as follows:
.. code-block:: none
cmd ::= exp10 -< exp
| exp10 -<< exp
| cmd0
with ⟨cmd⟩\ :sup:`0` up to ⟨cmd⟩\ :sup:`9` defined using infix operators
as for expressions, and
.. code-block:: none
cmd10 ::= \ apat ... apat -> cmd
| let decls in cmd
| if exp then cmd else cmd
| case exp of { calts }
| do { cstmt ; ... cstmt ; cmd }
| fcmd
fcmd ::= fcmd aexp
| ( cmd )
| (| aexp cmd ... cmd |)
cstmt ::= let decls
| pat <- cmd
| rec { cstmt ; ... cstmt [;] }
| cmd
where ⟨calts⟩ are like ⟨alts⟩ except that the bodies are commands
instead of expressions.
Commands produce values, but (like monadic computations) may yield more
than one value, or none, and may do other things as well. For the most
part, familiarity with monadic notation is a good guide to using
commands. However the values of expressions, even monadic ones, are
determined by the values of the variables they contain; this is not
necessarily the case for commands.
A simple example of the new notation is the expression ::
proc x -> f -< x+1
We call this a procedure or arrow abstraction. As with a lambda
expression, the variable ``x`` is a new variable bound within the
``proc``-expression. It refers to the input to the arrow. In the above
example, ``-<`` is not an identifier but a new reserved symbol used for
building commands from an expression of arrow type and an expression to
be fed as input to that arrow. (The weird look will make more sense
later.) It may be read as an analogue of application for arrows. The above
example is equivalent to the Haskell expression ::
arr (\ x -> x+1) >>> f
That would make no sense if the expression to the left of ``-<``
involves the bound variable ``x``. More generally, the expression to the
left of ``-<`` may not involve any local variable, i.e. a variable bound
in the current arrow abstraction. For such a situation there is a
variant ``-<<``, as in ::
proc x -> f x -<< x+1
which is equivalent to ::
arr (\ x -> (f x, x+1)) >>> app
so in this case the arrow must belong to the ``ArrowApply`` class. Such
an arrow is equivalent to a monad, so if you're using this form you may
find a monadic formulation more convenient.
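
Since the function arrow ``(->)`` is itself an instance of ``Arrow``, the
first example can be run directly; here is a minimal sketch, with ``addOne``
standing in for the arrow ``f``: ::

    {-# LANGUAGE Arrows #-}

    import Control.Arrow

    addOne :: Int -> Int
    addOne = (+ 1)

    -- Equivalent to: arr (\ x -> x+1) >>> addOne
    example :: Int -> Int
    example = proc x -> addOne -< x + 1

    main :: IO ()
    main = print (example 3)   -- prints 5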
do-notation for commands
------------------------
Another form of command is a form of ``do``-notation. For example, you
can write ::
proc x -> do
y <- f -< x+1
g -< 2*y
let z = x+y
t <- h -< x*z
returnA -< t+z
You can read this much like ordinary ``do``-notation, but with commands
in place of monadic expressions. The first line sends the value of
``x+1`` as an input to the arrow ``f``, and matches its output against
``y``. In the next line, the output is discarded. The arrow ``returnA``
is defined in the :base-ref:`Control.Arrow.` module as ``arr id``. The above
example is treated as an abbreviation for ::
arr (\ x -> (x, x)) >>>
first (arr (\ x -> x+1) >>> f) >>>
arr (\ (y, x) -> (y, (x, y))) >>>
first (arr (\ y -> 2*y) >>> g) >>>
arr snd >>>
arr (\ (x, y) -> let z = x+y in ((x, z), z)) >>>
first (arr (\ (x, z) -> x*z) >>> h) >>>
arr (\ (t, z) -> t+z) >>>
returnA
Note that variables not used later in the composition are projected out.
After simplification using rewrite rules (see :ref:`rewrite-rules`)
defined in the :base-ref:`Control.Arrow.` module, this reduces to ::
arr (\ x -> (x+1, x)) >>>
first f >>>
arr (\ (y, x) -> (2*y, (x, y))) >>>
first g >>>
arr (\ (_, (x, y)) -> let z = x+y in (x*z, z)) >>>
first h >>>
arr (\ (t, z) -> t+z)
which is what you might have written by hand. With arrow notation, GHC
keeps track of all those tuples of variables for you.
Note that although the above translation suggests that ``let``-bound
variables like ``z`` must be monomorphic, the actual translation
produces Core, so polymorphic variables are allowed.
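
The ``do``-command at the start of this section can be run with the function
arrow by choosing concrete functions for ``f``, ``g`` and ``h`` (all names
here are illustrative): ::

    {-# LANGUAGE Arrows #-}

    import Control.Arrow

    example :: Int -> Int
    example = proc x -> do
      y <- (* 2) -< x + 1        -- plays the role of 'f'
      (+ 1)      -< 2 * y        -- 'g'; its output is discarded
      let z = x + y
      t <- (* 3)  -< x * z       -- 'h'
      returnA -< t + z

    main :: IO ()
    main = print (example 1)    -- prints 20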
It's also possible to have mutually recursive bindings, using the new
``rec`` keyword, as in the following example: ::
counter :: ArrowCircuit a => a Bool Int
counter = proc reset -> do
rec output <- returnA -< if reset then 0 else next
next <- delay 0 -< output+1
returnA -< output
The translation of such forms uses the ``loop`` combinator, so the arrow
concerned must belong to the ``ArrowLoop`` class.
Conditional commands
--------------------
In the previous example, we used a conditional expression to construct
the input for an arrow. Sometimes we want to conditionally execute
different commands, as in ::
proc (x,y) ->
if f x y
then g -< x+1
else h -< y+2
which is translated to ::
arr (\ (x,y) -> if f x y then Left x else Right y) >>>
(arr (\x -> x+1) >>> g) ||| (arr (\y -> y+2) >>> h)
Since the translation uses ``|||``, the arrow concerned must belong to
the ``ArrowChoice`` class.
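
The function arrow is an instance of ``ArrowChoice``, so a conditional
command can be tried out directly (a minimal sketch; ``pick`` is a made-up
name): ::

    {-# LANGUAGE Arrows #-}

    import Control.Arrow

    pick :: (Int, Int) -> Int
    pick = proc (x, y) ->
      if x > y
        then returnA -< x + 1
        else returnA -< y + 2

    -- pick (5, 2) == 6
    -- pick (1, 9) == 11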
There are also ``case`` commands, like ::
case input of
[] -> f -< ()
[x] -> g -< x+1
x1:x2:xs -> do
y <- h -< (x1, x2)
ys <- k -< xs
returnA -< y:ys
The syntax is the same as for ``case`` expressions, except that the
bodies of the alternatives are commands rather than expressions. The
translation is similar to that of ``if`` commands.
Defining your own control structures
------------------------------------
As we've seen, arrow notation provides constructs, modelled on those for
expressions, for sequencing, value recursion and conditionals. But
suitable combinators, which you can define in ordinary Haskell, may also
be used to build new commands out of existing ones. The basic idea is
that a command defines an arrow from environments to values. These
environments assign values to the free local variables of the command.
Thus combinators that produce arrows from arrows may also be used to
build commands from commands. For example, the ``ArrowPlus`` class
includes a combinator ::
ArrowPlus a => (<+>) :: a b c -> a b c -> a b c
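
For instance (a sketch that is not drawn from the original text),
``Kleisli Maybe`` is an ``ArrowPlus`` instance, so ``<+>`` can combine two
commands, trying the second when the first produces ``Nothing``: ::

    {-# LANGUAGE Arrows #-}

    import Control.Arrow
    import Text.Read (readMaybe)

    -- runKleisli readIntOrZero "42"  == Just 42
    -- runKleisli readIntOrZero "foo" == Just 0
    readIntOrZero :: Kleisli Maybe String Int
    readIntOrZero = proc s ->
          (Kleisli readMaybe -< s)
      <+> (returnA -< 0)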