Glasgow Haskell Compiler / GHC / Issues / #16082

Closed · Created Dec 21, 2018 by Richard Eisenberg (@rae)

Sort out treatment of underscores in types

I can count 4 different meanings for underscores in Haskell programs:

  1. A pattern for which we don't care to write a name. (This dates back to antiquity.)

     Example:

     const x _ = x

  2. An expression for which we want GHC to tell us what its expected type should be. (Relatively new: these are typed holes.)

     Example:

     plus x y = x + _

  3. A type which we want GHC to infer, by looking at the expression. (Relatively new: these are wild cards in partial type signatures.)

     Example:

     plus :: forall a. Num a => a -> a -> _
     plus x y = x + y

  4. A type which we want GHC to infer, by looking at the underscore's context. (Relatively new: these are wild cards in type applications.)

     Example:

     x = const @_ @Bool 'x'    -- the _ is inferred to mean Char

Problems arise with the advent of visible kind application (#12045 (closed)): In type signatures, 3 of these meanings make sense (2, 3, and 4). In type/data family patterns, 3 of these meanings make sense (1, 2, and 4). Ideally, the user should have the opportunity to choose which meaning they want. In contrast, right now we use heuristics: in visible type/kind applications, we always use (4); otherwise, we use (1) (in patterns) or (3) (in types).
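
To make the current heuristics concrete, here is a minimal sketch (the module and function names are illustrative, not from this ticket); it assumes TypeFamilies, DataKinds, TypeApplications, and PartialTypeSignatures:

{-# LANGUAGE TypeFamilies, DataKinds, TypeApplications, PartialTypeSignatures #-}
{-# OPTIONS_GHC -Wno-partial-type-signatures #-}
module UnderscoreHeuristics where

-- In a term-level pattern, _ is a wildcard (meaning 1).
isNothing :: Maybe a -> Bool
isNothing Nothing  = True
isNothing (Just _) = False

-- In an ordinary type signature, _ is a partial-type-signature wild card
-- (meaning 3); GHC infers Bool for it.
toggle :: Bool -> _
toggle = not

-- In a type-family pattern, _ is treated as a pattern wildcard (meaning 1).
type family IsMaybe a :: Bool where
  IsMaybe (Maybe _) = 'True
  IsMaybe _         = 'False

-- In a visible type application, _ is inferred from context (meaning 4).
c :: Bool -> Char
c = const @_ @Bool 'x'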

This is a mess, for at least three reasons:

A. Users might conceivably want different behavior than what we provide. For example, perhaps a user is writing an intricate pattern (at either term or type level) and wants to know the type (resp. kind) of the next bit of pattern. Or maybe the user wants to do this in a visible type application. Right now, there's just no way to do this.
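
For instance (a hedged illustration with made-up names), in the pattern below the _ can today only be a wildcard; there is no way to ask GHC to report its type the way a typed hole would in an expression:

fromEither :: Either Int Int -> Int
fromEither (Left _)  = 0   -- this _ is only ever meaning (1); there is no
fromEither (Right n) = n   -- hole-like way to ask for its type (here, Int)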

B. It causes trouble for pretty-printing. Aside from term-level patterns, all the uses of underscores above are stored identically in the AST. This means that they are printed identically. But that's strange. For example, in uses (3) and (4), different underscores might stand for different variables. Should we number the underscores? But that would be silly for usage (1). It's all a bit muddy.
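
For example (a small illustration, assuming PartialTypeSignatures), the two wild cards in this partial signature stand for two different types, yet both would be printed back simply as _:

pair :: _ -> _ -> (Int, Bool)   -- the first _ is Int, the second _ is Bool
pair x y = (x, y)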

C. This causes awkwardness in the implementation. #12045 (closed) has to twiddle DynFlags to get its desired behavior, and that's sad.

This ticket is to track resolutions to these problems.

Trac metadata

Trac field              Value
Version                 8.7
Type                    Bug
TypeOfFailure           OtherFailure
Priority                normal
Resolution              Unresolved
Component               Compiler
Test case
Differential revisions
BlockedBy
Related
Blocking
CC
Operating system
Architecture