.. _profiling:

Profiling
=========

.. index::
   single: profiling
   single: cost-centre profiling
   single: -p; RTS option

GHC comes with a time and space profiling system, so that you can answer
questions like "why is my program so slow?", or "why is my program using
so much memory?".

Profiling a program is a three-step process:

1. Re-compile your program for profiling with the :ghc-flag:`-prof` option, and
   probably one of the options for adding automatic annotations:
   :ghc-flag:`-fprof-auto` is the most common [1]_.

   If you are using external packages with :command:`cabal`, you may need to
   reinstall these packages with profiling support; typically this is
   done with ``cabal install -p package --reinstall``.

2. Having compiled the program for profiling, you now need to run it to
   generate the profile. For example, a simple time profile can be
   generated by running the program with ``+RTS -p`` (see :rts-flag:`-p`), which
   generates a file named :file:`{prog}.prof` where ⟨prog⟩ is the name of your
   program (without the ``.exe`` extension, if you are on Windows).

   There are many different kinds of profile that can be generated,
   selected by different RTS options. We will be describing the various
   kinds of profile throughout the rest of this chapter. Some profiles
   require further processing using additional tools after running the
   program.

3. Examine the generated profiling information, use the information to
   optimise your program, and repeat as necessary.

.. _cost-centres:

Cost centres and cost-centre stacks
-----------------------------------

GHC's profiling system assigns costs to cost centres. A cost is simply
the time or space (memory) required to evaluate an expression. Cost
centres are program annotations around expressions; all costs incurred
by the annotated expression are assigned to the enclosing cost centre.
Furthermore, GHC will remember the stack of enclosing cost centres for
any given expression at run-time and generate a call-tree of cost
attributions.

Let's take a look at an example: ::

    main = print (fib 30)
    fib n = if n < 2 then 1 else fib (n-1) + fib (n-2)

Compile and run this program as follows:

.. code-block:: none

    $ ghc -prof -fprof-auto -rtsopts Main.hs
    $ ./Main +RTS -p
    121393
    $

When a GHC-compiled program is run with the :rts-flag:`-p` RTS option, it
generates a file called :file:`prog.prof`. In this case, the file will contain
something like this:

.. code-block:: none

            Wed Oct 12 16:14 2011 Time and Allocation Profiling Report  (Final)

               Main +RTS -p -RTS

            total time  =        0.68 secs   (34 ticks @ 20 ms)
            total alloc = 204,677,844 bytes  (excludes profiling overheads)

    COST CENTRE MODULE  %time %alloc

    fib         Main    100.0  100.0


                                                          individual     inherited
    COST CENTRE MODULE                  no.     entries  %time %alloc   %time %alloc

    MAIN        MAIN                    102           0    0.0    0.0   100.0  100.0
     CAF        GHC.IO.Handle.FD        128           0    0.0    0.0     0.0    0.0
     CAF        GHC.IO.Encoding.Iconv   120           0    0.0    0.0     0.0    0.0
     CAF        GHC.Conc.Signal         110           0    0.0    0.0     0.0    0.0
     CAF        Main                    108           0    0.0    0.0   100.0  100.0
      main      Main                    204           1    0.0    0.0   100.0  100.0
       fib      Main                    205     2692537  100.0  100.0   100.0  100.0

The first part of the file gives the program name and options, and the
total time and total memory allocation measured during the run of the
program (note that the total memory allocation figure isn't the same as
the amount of *live* memory needed by the program at any one time; the
latter can be determined using heap profiling, which we will describe
later in :ref:`prof-heap`).

The second part of the file is a break-down by cost centre of the most
costly functions in the program. In this case, there was only one
significant function in the program, namely ``fib``, and it was
responsible for 100% of both the time and allocation costs of the
program.

The third and final section of the file gives a profile break-down by
cost-centre stack. This is roughly a call-tree profile of the program.
In the example above, it is clear that the costly call to ``fib`` came
from ``main``.

The time and allocation incurred by a given part of the program are
displayed in two ways: “individual”, which are the costs incurred by the
code covered by this cost centre stack alone, and “inherited”, which
includes the costs incurred by all the children of this node.

The usefulness of cost-centre stacks is better demonstrated by modifying
the example slightly: ::

    main = print (f 30 + g 30)
      where
        f n  = fib n
        g n  = fib (n `div` 2)

    fib n = if n < 2 then 1 else fib (n-1) + fib (n-2)

Compile and run this program as before, and take a look at the new
profiling results:

.. code-block:: none

    COST CENTRE MODULE                  no.     entries  %time %alloc   %time %alloc

    MAIN        MAIN                    102           0    0.0    0.0   100.0  100.0
     CAF        GHC.IO.Handle.FD        128           0    0.0    0.0     0.0    0.0
     CAF        GHC.IO.Encoding.Iconv   120           0    0.0    0.0     0.0    0.0
     CAF        GHC.Conc.Signal         110           0    0.0    0.0     0.0    0.0
     CAF        Main                    108           0    0.0    0.0   100.0  100.0
      main      Main                    204           1    0.0    0.0   100.0  100.0
       main.g   Main                    207           1    0.0    0.0     0.0    0.1
        fib     Main                    208        1973    0.0    0.1     0.0    0.1
       main.f   Main                    205           1    0.0    0.0   100.0   99.9
        fib     Main                    206     2692537  100.0   99.9   100.0   99.9

Now although we had two calls to ``fib`` in the program, it is
immediately clear that it was the call from ``f`` which took all the
time. The functions ``f`` and ``g`` which are defined in the ``where``
clause in ``main`` are given their own cost centres, ``main.f`` and
``main.g`` respectively.

The meanings of the various columns in the output are:

``entries``
    The number of times this particular point in the call tree was
    entered.

``individual %time``
    The percentage of the total run time of the program spent at this
    point in the call tree.

``individual %alloc``
    The percentage of the total memory allocations (excluding profiling
    overheads) of the program made by this call.

``inherited %time``
    The percentage of the total run time of the program spent below this
    point in the call tree.

``inherited %alloc``
    The percentage of the total memory allocations (excluding profiling
    overheads) of the program made by this call and all of its
    sub-calls.

In addition, you can use the :rts-flag:`-P` RTS option to get the
following additional information:

``ticks``
    The raw number of time “ticks” which were attributed to this
    cost-centre; from this, we get the ``%time`` figure mentioned above.

``bytes``
    Number of bytes allocated in the heap while in this cost-centre;
    again, this is the raw number from which we get the ``%alloc``
    figure mentioned above.

What about recursive functions, and mutually recursive groups of
functions? Where are their costs attributed? Although GHC does keep
information about which groups of functions called each other
recursively, this information isn't displayed in the basic time and
allocation profile. Instead, the call-graph is flattened into a tree as
follows: a call to a function that occurs elsewhere on the current stack
does not push another entry on the stack; instead, the costs for this
call are aggregated into the caller [2]_.
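
For instance, given a mutually recursive pair such as the following (a
hypothetical fragment of ours, not taken from the example above), once
``isEven`` and ``isOdd`` are both on the current cost-centre stack, further
recursive calls between them do not push additional entries; their costs
are aggregated into the entries already on the stack: ::

    isEven, isOdd :: Int -> Bool
    isEven 0 = True
    isEven n = isOdd (n - 1)

    isOdd 0 = False
    isOdd n = isEven (n - 1)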

.. _scc-pragma:

Inserting cost centres by hand
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Cost centres are just program annotations. When you say ``-fprof-auto``
to the compiler, it automatically inserts a cost centre annotation
around every binding not marked INLINE in your program, but you are
entirely free to add cost centre annotations yourself.

The syntax of a cost centre annotation for expressions is ::

    {-# SCC "name" #-} <expression>

where ``"name"`` is an arbitrary string that will become the name of
your cost centre as it appears in the profiling output, and
``<expression>`` is any Haskell expression. An ``SCC`` annotation
extends as far to the right as possible when parsing. (SCC stands for
"Set Cost Centre"). The double quotes can be omitted if ``name`` is a
Haskell identifier, for example: ::

    {-# SCC id #-} <expression>

Cost centre annotations can also appear at the top level or in a
declaration context. In that case the annotation is given the name of a
function defined in the same module or scope. Example: ::

    f x y = ...
      where
        g z = ...
        {-# SCC g #-}

    {-# SCC f #-}

If you want to give a cost centre a different name than the function name,
you can pass a string to the annotation ::

    f x y = ...
    {-# SCC f "cost_centre_name" #-}

Here is an example of a program with a couple of SCCs: ::

    main :: IO ()
    main = do let xs = [1..1000000]
              let ys = [1..2000000]
              print $ {-# SCC last_xs #-} last xs
              print $ {-# SCC last_init_xs #-} last $ init xs
              print $ {-# SCC last_ys #-} last ys
              print $ {-# SCC last_init_ys #-} last $ init ys

which gives this profile when run:

.. code-block:: none

    COST CENTRE     MODULE                  no.     entries  %time %alloc   %time %alloc

    MAIN            MAIN                    102           0    0.0    0.0   100.0  100.0
     CAF            GHC.IO.Handle.FD        130           0    0.0    0.0     0.0    0.0
     CAF            GHC.IO.Encoding.Iconv   122           0    0.0    0.0     0.0    0.0
     CAF            GHC.Conc.Signal         111           0    0.0    0.0     0.0    0.0
     CAF            Main                    108           0    0.0    0.0   100.0  100.0
      main          Main                    204           1    0.0    0.0   100.0  100.0
       last_init_ys Main                    210           1   25.0   27.4    25.0   27.4
       main.ys      Main                    209           1   25.0   39.2    25.0   39.2
       last_ys      Main                    208           1   12.5    0.0    12.5    0.0
       last_init_xs Main                    207           1   12.5   13.7    12.5   13.7
       main.xs      Main                    206           1   18.8   19.6    18.8   19.6
       last_xs      Main                    205           1    6.2    0.0     6.2    0.0

.. _prof-rules:

Rules for attributing costs
~~~~~~~~~~~~~~~~~~~~~~~~~~~

While running a program with profiling turned on, GHC maintains a
cost-centre stack behind the scenes, and attributes any costs (memory
allocation and time) to whatever the current cost-centre stack is at the
time the cost is incurred.

The mechanism is simple: whenever the program evaluates an expression
with an SCC annotation, ``{-# SCC c #-} E``, the cost centre ``c`` is
pushed on the current stack, and the entry count for this stack is
incremented by one. The stack also sometimes has to be saved and
restored; in particular when the program creates a thunk (a lazy
suspension), the current cost-centre stack is stored in the thunk, and
restored when the thunk is evaluated. In this way, the cost-centre stack
is independent of the actual evaluation order used by GHC at runtime.
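
As a small sketch of ours (not part of the example programs above), the
thunk bound to ``ys`` below captures the cost-centre stack in force when it
is created, so the allocation performed when ``last ys`` finally demands the
list is attributed to the ``expensive`` cost centre, not to the point in the
program that happens to force the thunk: ::

    main :: IO ()
    main = do
      let ys = {-# SCC expensive #-} map (* 2) [1 .. 1000000 :: Int]
      print (last ys)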

At a function call, GHC takes the stack stored in the function being
called (which for a top-level function will be empty), and *appends* it
to the current stack, ignoring any prefix that is identical to a prefix
of the current stack.

We mentioned earlier that lazy computations, i.e. thunks, capture the
current stack when they are created, and restore this stack when they
are evaluated. What about top-level thunks? They are "created" when the
program is compiled, so what stack should we give them? The technical
name for a top-level thunk is a CAF ("Constant Applicative Form"). GHC
assigns every CAF in a module a stack consisting of the single cost
centre ``M.CAF``, where ``M`` is the name of the module. It is also
possible to give each CAF a different stack, using the option
:ghc-flag:`-fprof-cafs`. This is especially useful when
compiling with :ghc-flag:`-ffull-laziness` (as is default with :ghc-flag:`-O`
and higher), as constants in function bodies will be lifted to the top-level
and become CAFs. You will probably need to consult the Core
(:ghc-flag:`-ddump-simpl`) in order to determine what these CAFs correspond to.
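
For example (a hypothetical fragment of ours), with full laziness the
constant subexpression in ``f`` below may be floated out to a top-level
binding and thus become a CAF; its costs would then appear under the
module's single ``CAF`` cost centre unless :ghc-flag:`-fprof-cafs` is used
to give it a cost centre of its own: ::

    f :: Int -> Int
    f x = x + sum [1 .. 10000]  -- the sum does not depend on x, so it may
                                -- be lifted to the top level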

.. index::
   single: -fprof-cafs

.. _prof-compiler-options:

Compiler options for profiling
------------------------------

.. index::
   single: profiling; options
   single: options; for profiling

.. ghc-flag:: -prof

    To make use of the profiling system *all* modules must be compiled
    and linked with the :ghc-flag:`-prof` option. Any ``SCC`` annotations you've
    put in your source will spring to life.

    Without a :ghc-flag:`-prof` option, your ``SCC``\ s are ignored; so you can
    compile ``SCC``-laden code without changing it.

There are a few other profiling-related compilation options. Use them
*in addition to* :ghc-flag:`-prof`. These do not have to be used consistently
for all modules in a program.

.. ghc-flag:: -fprof-auto

    *All* bindings not marked INLINE, whether exported or not, top level
    or nested, will be given automatic ``SCC`` annotations. Functions
    marked INLINE must be given a cost centre manually.

.. ghc-flag:: -fprof-auto-top

    .. index::
       single: cost centres; automatically inserting

    GHC will automatically add ``SCC`` annotations for all top-level
    bindings not marked INLINE. If you want a cost centre on an INLINE
    function, you have to add it manually.

.. ghc-flag:: -fprof-auto-exported

    .. index::
       single: cost centres; automatically inserting

    GHC will automatically add ``SCC`` annotations for all exported
    functions not marked INLINE. If you want a cost centre on an INLINE
    function, you have to add it manually.

.. ghc-flag:: -fprof-auto-calls

    .. index::
       single: -fprof-auto-calls

    Adds an automatic ``SCC`` annotation to all *call sites*. This is
    particularly useful when using profiling for the purposes of
    generating stack traces; see the function :base-ref:`traceStack <Debug-Trace.html#traceShow>` in the
    module ``Debug.Trace``, or the :rts-flag:`-xc` RTS flag
    (:ref:`rts-options-debugging`) for more details.

.. ghc-flag:: -fprof-cafs

    The costs of all CAFs in a module are usually attributed to one
    "big" CAF cost-centre. With this option, all CAFs get their own
    cost-centre. An “if all else fails” option…

.. ghc-flag:: -fno-prof-auto

    Disables any previous :ghc-flag:`-fprof-auto`, :ghc-flag:`-fprof-auto-top`, or
    :ghc-flag:`-fprof-auto-exported` options.

.. ghc-flag:: -fno-prof-cafs

    Disables any previous :ghc-flag:`-fprof-cafs` option.

.. ghc-flag:: -fno-prof-count-entries

    .. index::
       single: -fno-prof-count-entries

    Tells GHC not to collect information about how often functions are
    entered at runtime (the "entries" column of the time profile), for
    this module. This tends to make the profiled code run faster, and
    hence closer to the speed of the unprofiled code, because GHC is
    able to optimise more aggressively if it doesn't have to maintain
    correct entry counts. This option can be useful if you aren't
    interested in the entry counts (for example, if you only intend to
    do heap profiling).

.. _prof-time-options:

Time and allocation profiling
-----------------------------

To generate a time and allocation profile, give one of the following RTS
options to the compiled program when you run it (RTS options should be
enclosed between ``+RTS ... -RTS`` as usual):

.. rts-flag:: -p
              -P
              -pa

    .. index::
       single: time profile

    The :rts-flag:`-p` option produces a standard *time profile* report. It is
    written into the file :file:`program.prof`.

    The :rts-flag:`-P` option produces a more detailed report containing the
    actual time and allocation data as well. (Not used much.)

    The :rts-flag:`-pa` option produces the most detailed report containing all
    cost centres in addition to the actual time and allocation data.

.. rts-flag:: -pj

    The :rts-flag:`-pj` option produces a time/allocation profile report in JSON
    format written into the file :file:`<program>.prof`.

.. rts-flag:: -V <secs>

    Sets the interval that the RTS clock ticks at, which is also the
    sampling interval of the time and allocation profile. The default is
    0.02 seconds.

.. rts-flag:: -xc

    This option causes the runtime to print out the current cost-centre
    stack whenever an exception is raised. This can be particularly
    useful for debugging the location of exceptions, such as the
    notorious ``Prelude.head: empty list`` error. See
    :ref:`rts-options-debugging`.

.. _prof-heap:

Profiling memory usage
----------------------

In addition to profiling the time and allocation behaviour of your
program, you can also generate a graph of its memory usage over time.
This is useful for detecting the causes of space leaks, when your
program holds on to more memory at run-time than it needs to. Space
leaks lead to slower execution due to heavy garbage collector activity,
and may even cause the program to run out of memory altogether.

To generate a heap profile from your program:

1. Compile the program for profiling (:ref:`prof-compiler-options`).

2. Run it with one of the heap profiling options described below (eg.
   :rts-flag:`-h` for a basic producer profile). This generates the file
   :file:`{prog}.hp`.

   If the :ref:`event log <rts-eventlog>` is enabled (with the :rts-flag:`-l`
   runtime system flag) heap samples will additionally be emitted to the GHC
   event log (see :ref:`heap-profiler-events` for details about event format).

3. Run :command:`hp2ps` to produce a Postscript file, :file:`{prog}.ps`. The
   :command:`hp2ps` utility is described in detail in :ref:`hp2ps`.

4. Display the heap profile using a postscript viewer such as Ghostview,
   or print it out on a Postscript-capable printer.
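
Putting these steps together, a session might look like this (a sketch,
reusing the ``Main.hs`` program from the time-profiling example earlier in
this chapter):

.. code-block:: sh

    $ ghc -prof -fprof-auto -rtsopts Main.hs
    $ ./Main +RTS -hc -RTS     # writes Main.hp
    $ hp2ps -c Main.hp         # writes Main.ps, ready for a PostScript viewer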

For example, here is a heap profile produced for the ``sphere`` program
from GHC's ``nofib`` benchmark suite,

.. image:: images/prof_scc.*

You might also want to take a look at
`hp2any <http://www.haskell.org/haskellwiki/Hp2any>`__, a more advanced
suite of tools (not distributed with GHC) for displaying heap profiles.

.. _rts-options-heap-prof:

RTS options for heap profiling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are several different kinds of heap profile that can be generated.
All the different profile types yield a graph of live heap against time,
but they differ in how the live heap is broken down into bands. The
following RTS options select which break-down to use:

.. rts-flag:: -hc
              -h

    (can be shortened to :rts-flag:`-h`). Breaks down the graph by the
    cost-centre stack which produced the data.

.. rts-flag:: -hm

    Break down the live heap by the module containing the code which
    produced the data.

.. rts-flag:: -hd

    Breaks down the graph by closure description. For actual data, the
    description is just the constructor name, for other closures it is a
    compiler-generated string identifying the closure.

.. rts-flag:: -hy

    Breaks down the graph by type. For closures which have function type
    or unknown/polymorphic type, the string will represent an
    approximation to the actual type.

.. rts-flag:: -hr

    Break down the graph by retainer set. Retainer profiling is
    described in more detail below (:ref:`retainer-prof`).

.. rts-flag:: -hb

    Break down the graph by biography. Biographical profiling is
    described in more detail below (:ref:`biography-prof`).

.. rts-flag:: -l

    :noindex:

    .. index::
       single: eventlog; and heap profiling

    Emit profile samples to the :ref:`GHC event log <rts-eventlog>`.
    This format is both more expressive than the old ``.hp`` format
    and can be correlated with other events over the program's runtime.
    See :ref:`heap-profiler-events` for details on the produced event structure.

In addition, the profile can be restricted to heap data which satisfies
certain criteria - for example, you might want to display a profile by
type but only for data produced by a certain module, or a profile by
retainer for a certain type of data. Restrictions are specified as
follows:

.. comment

    The flags below are marked with ``:noindex:`` to avoid duplicate
    ID warnings from Sphinx.

.. rts-flag:: -hc <name>
    :noindex:

    Restrict the profile to closures produced by cost-centre stacks with
    one of the specified cost centres at the top.

.. rts-flag:: -hC <name>
    :noindex:

    Restrict the profile to closures produced by cost-centre stacks with
    one of the specified cost centres anywhere in the stack.

.. rts-flag:: -hm <module>
    :noindex:

    Restrict the profile to closures produced by the specified modules.

.. rts-flag:: -hd <desc>
    :noindex:

    Restrict the profile to closures with the specified description
    strings.

.. rts-flag:: -hy <type>
    :noindex:

    Restrict the profile to closures with the specified types.

.. rts-flag:: -hr <cc>
    :noindex:

    Restrict the profile to closures with retainer sets containing
    cost-centre stacks with one of the specified cost centres at the
    top.

.. rts-flag:: -hb <bio>
    :noindex:

    Restrict the profile to closures with one of the specified
    biographies, where ⟨bio⟩ is one of ``lag``, ``drag``, ``void``, or
    ``use``.

For example, the following options will generate a retainer profile
restricted to ``Branch`` and ``Leaf`` constructors:

.. code-block:: none

    prog +RTS -hr -hdBranch,Leaf

There can only be one "break-down" option (eg. :rts-flag:`-hr` in the example
above), but there is no limit on the number of further restrictions that
may be applied. All the options may be combined, with one exception: GHC
doesn't currently support mixing the :rts-flag:`-hr` and :rts-flag:`-hb` options.

There are three more options which relate to heap profiling:

.. rts-flag:: -i <secs>

    Set the profiling (sampling) interval to ⟨secs⟩ seconds (the default
    is 0.1 second). Fractions are allowed: for example ``-i0.2`` will
    get 5 samples per second. This only affects heap profiling; time
    profiles are always sampled with the frequency of the RTS clock. See
    :ref:`prof-time-options` for changing that.

.. rts-flag:: -xt

    Include the memory occupied by threads in a heap profile. Each
    thread takes up a small area for its thread state in addition to the
    space allocated for its stack (stacks normally start small and then
    grow as necessary).

    This includes the main thread, so using :rts-flag:`-xt` is a good way to see
    how much stack space the program is using.

    Memory occupied by threads and their stacks is labelled as “TSO” and
    “STACK” respectively when displaying the profile by closure
    description or type description.

.. rts-flag:: -L <num>

    Sets the maximum length of a cost-centre stack name in a heap
    profile. Defaults to 25.

.. _retainer-prof:

Retainer Profiling
~~~~~~~~~~~~~~~~~~

Retainer profiling is designed to help answer questions like “why is
this data being retained?”. We start by defining what we mean by a
retainer:

    A retainer is either the system stack, an unevaluated closure
    (thunk), or an explicitly mutable object.

In particular, constructors are *not* retainers.

An object ``B`` retains object ``A`` if (i) ``B`` is a retainer object and (ii)
object ``A`` can be reached by recursively following pointers starting from
object ``B``, but not meeting any other retainer objects on the way. Each
live object is retained by one or more retainer objects, collectively
called its retainer set, or its retainers.

When retainer profiling is requested by giving the program the ``-hr``
option, a graph is generated which is broken down by retainer set. A
retainer set is displayed as a set of cost-centre stacks; because this
is usually too large to fit on the profile graph, each retainer set is
numbered and shown abbreviated on the graph along with its number, and
the full list of retainer sets is dumped into the file ``prog.prof``.

Retainer profiling requires multiple passes over the live heap in order
to discover the full retainer set for each object, which can be quite
slow. So we set a limit on the maximum size of a retainer set, where all
retainer sets larger than the maximum retainer set size are replaced by
the special set ``MANY``. The maximum set size defaults to 8 and can be
altered with the :rts-flag:`-R` RTS option:

.. rts-flag:: -R <size>

    Restrict the number of elements in a retainer set to ⟨size⟩ (default
    8).

Hints for using retainer profiling
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The definition of retainers is designed to reflect a common cause of
space leaks: a large structure is retained by an unevaluated
computation, and will be released once the computation is forced. A good
example is looking up a value in a finite map, where unless the lookup
is forced in a timely manner the unevaluated lookup will cause the whole
mapping to be retained. These kinds of space leaks can often be
eliminated by forcing the relevant computations to be performed eagerly,
using ``seq`` or strictness annotations on data constructor fields.
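
A minimal sketch of this pattern (our own illustration, using ``Data.Map``
from the ``containers`` package): storing the unevaluated lookup in a lazy
field retains the whole map until the field is demanded, whereas forcing
the lookup first (with ``seq``, or by making the field strict with a ``!``
annotation) allows the map to be released once the result has been built: ::

    import qualified Data.Map as Map

    data Result = Result (Maybe Int)          -- a lazy field

    leaky :: Map.Map String Int -> Result
    leaky m = Result (Map.lookup "key" m)     -- field thunk retains all of m

    eager :: Map.Map String Int -> Result
    eager m = let r = Map.lookup "key" m
              in r `seq` Result r             -- lookup performed up front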

Often a particular data structure is being retained by a chain of
unevaluated closures, only the nearest of which will be reported by
retainer profiling - for example ``A`` retains ``B``, ``B`` retains ``C``, and
``C`` retains a large structure. There might be a large number of ``B``\s but
only a single ``A``, so ``A`` is really the one we're interested in eliminating.
However, retainer profiling will in this case report ``B`` as the retainer of
the large structure. To move further up the chain of retainers, we can ask for
another retainer profile but this time restrict the profile to ``B`` objects, so
we get a profile of the retainers of ``B``:

.. code-block:: none

    prog +RTS -hr -hcB

This trick isn't foolproof, because there might be other ``B`` closures in
the heap which aren't the retainers we are interested in, but we've
the heap which aren't the retainers we are interested in, but we've
found this to be a useful technique in most cases.

.. _biography-prof:

Biographical Profiling
~~~~~~~~~~~~~~~~~~~~~~

A typical heap object may be in one of the following four states at each
point in its lifetime:

-  The lag stage, which is the time between creation and the first use
   of the object.

-  The use stage, which lasts from the first use until the last use of
   the object.

-  The drag stage, which lasts from the final use until the last
   reference to the object is dropped.

-  The void state: an object which is never used is said to be in the
   void state for its whole lifetime.

A biographical heap profile displays the portion of the live heap in
each of the four states listed above. Usually the most interesting
states are the void and drag states: live heap in these states is more
likely to be wasted space than heap in the lag or use states.

It is also possible to break down the heap in one or more of these
states by a different criterion, by restricting a profile by biography.
For example, to show the portion of the heap in the drag or void state
by producer:

.. code-block:: none

    prog +RTS -hc -hbdrag,void

Once you know the producer or the type of the heap in the drag or void
states, the next step is usually to find the retainer(s):

.. code-block:: none

    prog +RTS -hr -hccc...

.. note::
    This two stage process is required because GHC cannot currently
    profile using both biographical and retainer information simultaneously.

.. _mem-residency:

Actual memory residency
~~~~~~~~~~~~~~~~~~~~~~~

How does the heap residency reported by the heap profiler relate to the
actual memory residency of your program when you run it? You might see a
large discrepancy between the residency reported by the heap profiler,
and the residency reported by tools on your system (eg. ``ps`` or
``top`` on Unix, or the Task Manager on Windows). There are several
reasons for this:

-  There is an overhead of profiling itself, which is subtracted from
   the residency figures by the profiler. This overhead goes away when
   compiling without profiling support, of course. The space overhead is
   currently 2 extra words per heap object, which probably results in
   about a 30% overhead.

-  Garbage collection requires more memory than the actual residency.
   The factor depends on the kind of garbage collection algorithm in
   use: a major GC in the standard generation copying collector will
   usually require 3L bytes of memory, where L is the amount of live
   data. This is because by default (see the RTS :rts-flag:`-F` option) we
   allow the old generation to grow to twice its size (2L) before
   collecting it, and we require additionally L bytes to copy the live
   data into. When using compacting collection (see the :rts-flag:`-c`
   option), this is reduced to 2L, and can further be reduced by
   tweaking the :rts-flag:`-F` option. Also add the size of the allocation area
   (see :rts-flag:`-A`).

-  The stack isn't counted in the heap profile by default. See the
   RTS :rts-flag:`-xt` option.

-  The program text itself, the C stack, any non-heap data (e.g. data
   allocated by foreign libraries, and data allocated by the RTS), and
   ``mmap()``\'d memory are not counted in the heap profile.

.. _hp2ps:

``hp2ps`` -- Rendering heap profiles to PostScript
--------------------------------------------------

.. index::
   single: hp2ps
   single: heap profiles
   single: postscript, from heap profiles
   single: -h⟨break-down⟩

Usage:

.. code-block:: none

    hp2ps [flags] [<file>[.hp]]

The :command:`hp2ps` program converts a ``.hp`` file produced
by the ``-h<break-down>`` runtime option into a PostScript graph of the
heap profile. By convention, the file to be processed by :command:`hp2ps` has a
``.hp`` extension. The PostScript output is written to :file:`{file}@.ps`.
If ``<file>`` is omitted entirely, then the program behaves as a filter.

:command:`hp2ps` is distributed in :file:`ghc/utils/hp2ps` in a GHC source
distribution. It was originally developed by Dave Wakeling as part of
the HBC/LML heap profiler.

The flags are:

.. program:: hp2ps

.. option:: -d

    In order to make graphs more readable, ``hp2ps`` sorts the shaded
    bands for each identifier. The default sort ordering is for the
    bands with the largest area to be stacked on top of the smaller
    ones. The ``-d`` option causes rougher bands (those representing
    series of values with the largest standard deviations) to be stacked
    on top of smoother ones.

.. option:: -b

    Normally, ``hp2ps`` puts the title of the graph in a small box at
    the top of the page. However, if the JOB string is too long to fit
    in a small box (more than 35 characters), then ``hp2ps`` will choose
    to use a big box instead. The ``-b`` option forces ``hp2ps`` to use
    a big box.

.. option:: -e<float>[in|mm|pt]

    Generate encapsulated PostScript suitable for inclusion in LaTeX
    documents. Usually, the PostScript graph is drawn in landscape mode
    in an area 9 inches wide by 6 inches high, and ``hp2ps`` arranges
    for this area to be approximately centred on a sheet of A4 paper.
    This format is convenient for studying the graph in detail, but it is
    unsuitable for inclusion in LaTeX documents. The ``-e`` option
    causes the graph to be drawn in portrait mode, with float specifying
    the width in inches, millimetres or points (the default). The
    resulting PostScript file conforms to the Encapsulated PostScript
    (EPS) convention, and it can be included in a LaTeX document using
    Rokicki's dvi-to-PostScript converter ``dvips``.

.. option:: -g

    Create output suitable for the ``gs`` PostScript previewer (or
    similar). In this case the graph is printed in portrait mode without
    scaling. The output is unsuitable for a laser printer.

.. option:: -l

    Normally a profile is limited to 20 bands with additional
    identifiers being grouped into an ``OTHER`` band. The ``-l`` flag
    removes this 20 band limit, producing as many bands as
    necessary. No key is produced as it won't fit! It is useful for
    creation time profiles with many bands.

.. option:: -m<int>

    Normally a profile is limited to 20 bands with additional
    identifiers being grouped into an ``OTHER`` band. The ``-m`` flag
    specifies an alternative band limit (the maximum is 20).

    ``-m0`` requests the band limit to be removed. As many bands as
    necessary are produced. However no key is produced as it won't fit!
    It is useful for displaying creation time profiles with many bands.

.. option:: -p

    Use previous parameters. By default, the PostScript graph is
    automatically scaled both horizontally and vertically so that it
    fills the page. However, when preparing a series of graphs for use
    in a presentation, it is often useful to draw a new graph using the
    same scale, shading and ordering as a previous one. The ``-p`` flag
    causes the graph to be drawn using the parameters determined by a
    previous run of ``hp2ps`` on ``file``. These are extracted from
    ``file@.aux``.

.. option:: -s

    Use a small box for the title.

.. option:: -t<float>

    Normally trace elements which sum to a total of less than 1% of the
    profile are removed from the profile. The ``-t`` option allows this
    percentage to be modified (maximum 5%).

    ``-t0`` requests no trace elements to be removed from the profile,
    ensuring that all the data will be displayed.

.. option:: -c

    Generate colour output.

.. option:: -y

    Ignore marks.

.. option:: -?

    Print out usage information.

.. _manipulating-hp:

Manipulating the hp file
~~~~~~~~~~~~~~~~~~~~~~~~

(Notes kindly offered by Jan-Willem Maessen.)

The ``FOO.hp`` file produced when you ask for the heap profile of a
program ``FOO`` is a text file with a particularly simple structure.
Here's a representative example, with much of the actual data omitted:

.. code-block:: none

    JOB "FOO -hC"
    DATE "Thu Dec 26 18:17 2002"
    SAMPLE_UNIT "seconds"
    VALUE_UNIT "bytes"
    BEGIN_SAMPLE 0.00
    END_SAMPLE 0.00
    BEGIN_SAMPLE 15.07
      ... sample data ...
    END_SAMPLE 15.07
    BEGIN_SAMPLE 30.23
      ... sample data ...
    END_SAMPLE 30.23
    ... etc.
    BEGIN_SAMPLE 11695.47
    END_SAMPLE 11695.47

The first four lines (``JOB``, ``DATE``, ``SAMPLE_UNIT``,
``VALUE_UNIT``) form a header. Each block of lines starting with
``BEGIN_SAMPLE`` and ending with ``END_SAMPLE`` forms a single sample
(you can think of this as a vertical slice of your heap profile). The
hp2ps utility should accept any input with a properly-formatted header
followed by a series of *complete* samples.

Zooming in on regions of your profile
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can look at particular regions of your profile simply by loading a
copy of the ``.hp`` file into a text editor and deleting the unwanted
samples. The resulting ``.hp`` file can be run through ``hp2ps`` and
viewed or printed.

Viewing the heap profile of a running program
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``.hp`` file is generated incrementally as your program runs. In
principle, running :command:`hp2ps` on the incomplete file should produce a
snapshot of your program's heap usage. However, the last sample in the
file may be incomplete, causing :command:`hp2ps` to fail. If you are using a
machine with UNIX utilities installed, it's not too hard to work around
this problem (though the resulting command line looks rather Byzantine):

.. code-block:: sh

    head -`fgrep -n END_SAMPLE FOO.hp | tail -1 | cut -d : -f 1` FOO.hp \
        | hp2ps > FOO.ps

The command ``fgrep -n END_SAMPLE FOO.hp`` finds the end of every
complete sample in ``FOO.hp``, and labels each sample with its ending
line number. We then select the line number of the last complete sample
using :command:`tail` and :command:`cut`. This is used as a parameter to :command:`head`; the
result is as if we deleted the final incomplete sample from :file:`FOO.hp`.
This results in a properly-formatted .hp file which we feed directly to
:command:`hp2ps`.

Viewing a heap profile in real time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The :command:`gv` and :command:`ghostview` programs have a "watch file" option
that can be used to view an up-to-date heap profile of your program as it runs.
Simply generate an incremental heap profile as described in the previous
section. Run :command:`gv` on your profile:

.. code-block:: sh

      gv -watch -orientation=seascape FOO.ps

If you forget the ``-watch`` flag you can still select "Watch file" from
the "State" menu. Now each time you generate a new profile ``FOO.ps``
the view will update automatically.

This can all be encapsulated in a little script:

.. code-block:: sh

      #!/bin/sh
      head -`fgrep -n END_SAMPLE FOO.hp | tail -1 | cut -d : -f 1` FOO.hp \
        | hp2ps > FOO.ps
      gv -watch -orientation=seascape FOO.ps &
      while [ 1 ] ; do
        sleep 10 # We generate a new profile every 10 seconds.
        head -`fgrep -n END_SAMPLE FOO.hp | tail -1 | cut -d : -f 1` FOO.hp \
          | hp2ps > FOO.ps
      done

Occasionally :command:`gv` will choke as it tries to read an incomplete copy of
:file:`FOO.ps` (because :command:`hp2ps` is still running as an update occurs). A
slightly more complicated script works around this problem, by using the
fact that sending a SIGHUP to gv will cause it to re-read its input
file:

.. code-block:: sh

      #!/bin/sh
      head -`fgrep -n END_SAMPLE FOO.hp | tail -1 | cut -d : -f 1` FOO.hp \
        | hp2ps > FOO.ps
      gv FOO.ps &
      gvpsnum=$!
      while [ 1 ] ; do
        sleep 10
        head -`fgrep -n END_SAMPLE FOO.hp | tail -1 | cut -d : -f 1` FOO.hp \
          | hp2ps > FOO.ps
        kill -HUP $gvpsnum
      done

.. _prof-threaded:

Profiling Parallel and Concurrent Programs
------------------------------------------

Combining :ghc-flag:`-threaded` and :ghc-flag:`-prof` is perfectly fine, and
indeed it is possible to profile a program running on multiple processors with
the RTS :rts-flag:`-N` option. [3]_

Some caveats apply, however. In the current implementation, a profiled
program is likely to scale much less well than the unprofiled program,
because the profiling implementation uses some shared data structures
which require locking in the runtime system. Furthermore, the memory
allocation statistics collected by the profiled program are stored in
shared memory but *not* locked (for speed), which means that these
figures might be inaccurate for parallel programs.

We strongly recommend that you use :ghc-flag:`-fno-prof-count-entries` when
compiling a program to be profiled on multiple cores, because the entry
counts are also stored in shared memory, and continuously updating them
on multiple cores is extremely slow.
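
A typical build and run for profiling on multiple cores might therefore
look something like this (a sketch; the module name and core count are
ours):

.. code-block:: sh

    $ ghc -threaded -rtsopts -prof -fprof-auto -fno-prof-count-entries Main.hs
    $ ./Main +RTS -N4 -p -RTS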

We also recommend using
`ThreadScope <http://www.haskell.org/haskellwiki/ThreadScope>`__ for
profiling parallel programs; it offers a GUI for visualising parallel
execution, and is complementary to the time and space profiling features
provided with GHC.

.. _hpc:

Observing Code Coverage
-----------------------

.. index::
   single: code coverage
   single: Haskell Program Coverage
   single: hpc

Code coverage tools allow a programmer to determine what parts of their
code have been actually executed, and which parts have never actually
been invoked. GHC has an option for generating instrumented code that
records code coverage as part of the Haskell Program Coverage (HPC)
toolkit, which is included with GHC. HPC tools can be used to render the
generated code coverage information into human understandable format.

Correctly instrumented code provides coverage information of two kinds:
source coverage and boolean-control coverage. Source coverage is the
extent to which every part of the program was used, measured at three
different levels: declarations (both top-level and local), alternatives
(among several equations or case branches) and expressions (at every
level). Boolean coverage is the extent to which each of the values True
and False is obtained in every syntactic boolean context (ie. guard,
condition, qualifier).

HPC displays both kinds of information in two primary ways: textual
reports with summary statistics (``hpc report``) and sources with color
mark-up (``hpc markup``). For boolean coverage, there are four possible
outcomes for each guard, condition or qualifier: both True and False
values occur; only True; only False; never evaluated. In hpc-markup
output, highlighting with a yellow background indicates a part of the
program that was never evaluated; a green background indicates an
always-True expression and a red background indicates an always-False
one.

A small example: Reciprocation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For an example we have a program, called :file:`Recip.hs`, which computes
exact decimal representations of reciprocals, with recurring parts
indicated in brackets. ::

    reciprocal :: Int -> (String, Int)
    reciprocal n | n > 1 = ('0' : '.' : digits, recur)
                 | otherwise = error
                  "attempting to compute reciprocal of number <= 1"
      where
      (digits, recur) = divide n 1 []
    divide :: Int -> Int -> [Int] -> (String, Int)
    divide n c cs | c `elem` cs = ([], position c cs)
                  | r == 0      = (show q, 0)
                  | r /= 0      = (show q ++ digits, recur)
      where
      (q, r) = (c*10) `quotRem` n
      (digits, recur) = divide n r (c:cs)

    position :: Int -> [Int] -> Int
    position n (x:xs) | n==x      = 1
                      | otherwise = 1 + position n xs

    showRecip :: Int -> String
    showRecip n =
      "1/" ++ show n ++ " = " ++
      if r==0 then d else take p d ++ "(" ++ drop p d ++ ")"
      where
      p = length d - r
      (d, r) = reciprocal n

    main = do
      number <- readLn
      putStrLn (showRecip number)
      main

HPC instrumentation is enabled with the :ghc-flag:`-fhpc` flag:

.. code-block:: sh

    $ ghc -fhpc Recip.hs

GHC creates a subdirectory ``.hpc`` in the current directory, and puts
HPC index (``.mix``) files in there, one for each module compiled. You
don't need to worry about these files: they contain information needed
by the ``hpc`` tool to generate the coverage data for compiled modules
after the program is run.

.. code-block:: sh

    $ ./Recip
    1/3
    = 0.(3)

Running the program generates a file with the ``.tix`` suffix, in this
case :file:`Recip.tix`, which contains the coverage data for this run of the
program. The program may be run multiple times (e.g. with different test
data), and the coverage data from the separate runs is accumulated in
the ``.tix`` file. To reset the coverage data and start again, just
remove the ``.tix`` file.

Having run the program, we can generate a textual summary of coverage:

.. code-block:: none

    $ hpc report Recip
     80% expressions used (81/101)
     12% boolean coverage (1/8)
          14% guards (1/7), 3 always True,
                            1 always False,
                            2 unevaluated
           0% 'if' conditions (0/1), 1 always False
         100% qualifiers (0/0)
     55% alternatives used (5/9)
    100% local declarations used (9/9)
    100% top-level declarations used (5/5)

We can also generate a marked-up version of the source.

.. code-block:: none

    $ hpc markup Recip
    writing Recip.hs.html

This generates one file per Haskell module, and four index files:
:file:`hpc_index.html`, :file:`hpc_index_alt.html`, :file:`hpc_index_exp.html`,
and :file:`hpc_index_fun.html`.

Options for instrumenting code for coverage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. program:: hpc

.. ghc-flag:: -fhpc

    Enable code coverage for the current module or modules being
    compiled.

    Modules compiled with this option can be freely mixed with modules
    compiled without it; indeed, most libraries will typically be
    compiled without :ghc-flag:`-fhpc`. When the program is run, coverage data
    will only be generated for those modules that were compiled with
    :ghc-flag:`-fhpc`, and the :command:`hpc` tool will only show information about
    those modules.

The hpc toolkit
~~~~~~~~~~~~~~~

The hpc command has several sub-commands:

.. code-block:: none

    $ hpc
    Usage: hpc COMMAND ...

    Commands:
      help        Display help for hpc or a single command
    Reporting Coverage:
      report      Output textual report about program coverage
      markup      Markup Haskell source with program coverage
    Processing Coverage files:
      sum         Sum multiple .tix files in a single .tix file
      combine     Combine two .tix files in a single .tix file
      map         Map a function over a single .tix file
    Coverage Overlays:
      overlay     Generate a .tix file from an overlay file
      draft       Generate draft overlay that provides 100% coverage
    Others:
      show        Show .tix file in readable, verbose format
      version     Display version for hpc

In general, these options act on a ``.tix`` file after an instrumented
binary has generated it.

The hpc tool assumes you are in the top-level directory of the location
where you built your application, and the ``.tix`` file is in the same
top-level directory. You can use the flag ``--srcdir`` to use ``hpc``
for any other directory, and use ``--srcdir`` multiple times to analyse
programs compiled from different locations, as is typical for packages.

We now explain the major modes of hpc in more detail.

hpc report
^^^^^^^^^^

``hpc report`` gives a textual report of coverage. By default, all
modules and packages are considered in generating the report, unless include
or exclude are used. The report is a summary unless the ``--per-module``
flag is used. The ``--xml-output`` option allows for tools to use hpc to
glean coverage.

.. code-block:: none

    $ hpc help report
    Usage: hpc report [OPTION] .. <TIX_FILE> [<MODULE> [<MODULE> ..]]

    Options:

        --per-module                  show module level detail
        --decl-list                   show unused decls
        --exclude=[PACKAGE:][MODULE]  exclude MODULE and/or PACKAGE
        --include=[PACKAGE:][MODULE]  include MODULE and/or PACKAGE
        --srcdir=DIR                  path to source directory of .hs files
                                      multi-use of srcdir possible
        --hpcdir=DIR                  append sub-directory that contains .mix files
                                      default .hpc [rarely used]
        --reset-hpcdirs               empty the list of hpcdir's
                                      [rarely used]
        --xml-output                  show output in XML

hpc markup
^^^^^^^^^^

``hpc markup`` marks up source files into colored html.

.. code-block:: none

    $ hpc help markup
    Usage: hpc markup [OPTION] .. <TIX_FILE> [<MODULE> [<MODULE> ..]]

    Options:

        --exclude=[PACKAGE:][MODULE]  exclude MODULE and/or PACKAGE
        --include=[PACKAGE:][MODULE]  include MODULE and/or PACKAGE
        --srcdir=DIR                  path to source directory of .hs files
                                      multi-use of srcdir possible
        --hpcdir=DIR                  append sub-directory that contains .mix files
                                      default .hpc [rarely used]
        --reset-hpcdirs               empty the list of hpcdir's
                                      [rarely used]
        --fun-entry-count             show top-level function entry counts
        --highlight-covered           highlight covered code, rather that code gaps
        --destdir=DIR                 path to write output to

hpc sum
^^^^^^^

``hpc sum`` adds together any number of ``.tix`` files into a single
``.tix`` file. ``hpc sum`` does not change the original ``.tix`` file;
it generates a new ``.tix`` file.
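
For example, coverage data from two separate runs of an instrumented binary,
saved as :file:`run1.tix` and :file:`run2.tix` (hypothetical file names),
could be merged into a new file with:

.. code-block:: none

    hpc sum --output=total.tix run1.tix run2.tix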

.. code-block:: none

    $ hpc help sum
    Usage: hpc sum [OPTION] .. <TIX_FILE> [<TIX_FILE> [<TIX_FILE> ..]]
    Sum multiple .tix files in a single .tix file

    Options:

        --exclude=[PACKAGE:][MODULE]  exclude MODULE and/or PACKAGE
        --include=[PACKAGE:][MODULE]  include MODULE and/or PACKAGE
        --output=FILE                 output FILE
        --union                       use the union of the module namespace (default is intersection)

hpc combine
^^^^^^^^^^^

``hpc combine`` is the swiss army knife of ``hpc``. It can be used to
take the difference between ``.tix`` files, to subtract one ``.tix``
file from another, or to add two ``.tix`` files. hpc combine does not
change the original ``.tix`` file; it generates a new ``.tix`` file.

.. code-block:: none

    $ hpc help combine
    Usage: hpc combine [OPTION] .. <TIX_FILE> <TIX_FILE>
    Combine two .tix files in a single .tix file

    Options:

        --exclude=[PACKAGE:][MODULE]  exclude MODULE and/or PACKAGE
        --include=[PACKAGE:][MODULE]  include MODULE and/or PACKAGE
        --output=FILE                 output FILE
        --function=FUNCTION           combine .tix files with join function, default = ADD
                                      FUNCTION = ADD | DIFF | SUB
        --union                       use the union of the module namespace (default is intersection)

hpc map
^^^^^^^

hpc map inverts or zeros a ``.tix`` file. hpc map does not change the
original ``.tix`` file; it generates a new ``.tix`` file.

.. code-block:: none

    $ hpc help map
    Usage: hpc map [OPTION] .. <TIX_FILE>
    Map a function over a single .tix file

    Options:

        --exclude=[PACKAGE:][MODULE]  exclude MODULE and/or PACKAGE
        --include=[PACKAGE:][MODULE]  include MODULE and/or PACKAGE
        --output=FILE                 output FILE
        --function=FUNCTION           apply function to .tix files, default = ID
                                      FUNCTION = ID | INV | ZERO
        --union                       use the union of the module namespace (default is intersection)

hpc overlay and hpc draft
^^^^^^^^^^^^^^^^^^^^^^^^^

Overlays are an experimental feature of HPC, a textual description of
coverage. hpc draft is used to generate a draft overlay from a .tix
file, and hpc overlay generates a .tix file from an overlay.

.. code-block:: none

    % hpc help overlay
    Usage: hpc overlay [OPTION] .. <OVERLAY_FILE> [<OVERLAY_FILE> [...]]

    Options:

        --srcdir=DIR   path to source directory of .hs files
                       multi-use of srcdir possible
        --hpcdir=DIR                  append sub-directory that contains .mix files
                                      default .hpc [rarely used]
        --reset-hpcdirs               empty the list of hpcdir's
                                      [rarely used]
        --output=FILE  output FILE
    % hpc help draft
    Usage: hpc draft [OPTION] .. <TIX_FILE>

    Options:

        --exclude=[PACKAGE:][MODULE]  exclude MODULE and/or PACKAGE
        --include=[PACKAGE:][MODULE]  include MODULE and/or PACKAGE
        --srcdir=DIR                  path to source directory of .hs files
                                      multi-use of srcdir possible
        --hpcdir=DIR                  append sub-directory that contains .mix files
                                      default .hpc [rarely used]
        --reset-hpcdirs               empty the list of hpcdir's
                                      [rarely used]
        --output=FILE                 output FILE

Caveats and Shortcomings of Haskell Program Coverage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

HPC does not attempt to lock the ``.tix`` file, so multiple concurrently
running binaries in the same directory will exhibit a race condition.
There is no way to change the name of the ``.tix`` file generated, apart
from renaming the binary. HPC does not work with GHCi.

.. _ticky-ticky:

Using “ticky-ticky” profiling (for implementors)
------------------------------------------------

.. index::
   single: ticky-ticky profiling

Because ticky-ticky profiling requires a certain familiarity with GHC
internals, we have moved the documentation to the GHC developers wiki.
Take a look at its
:ghc-wiki:`overview of the profiling options <Commentary/Profiling>`,
which includes a link to the ticky-ticky profiling page.

.. [1]
   :ghc-flag:`-fprof-auto` was known as ``-auto-all`` prior to
   GHC 7.4.1.

.. [2]
   Note that this policy has changed slightly in GHC 7.4.1 relative to
   earlier versions, and may yet change further; feedback is welcome.

.. [3]
   This feature was added in GHC 7.4.1.