......@@ -109,6 +109,7 @@ spectral/constraints/constraints
spectral/cryptarithm1/cryptarithm1
spectral/cryptarithm2/cryptarithm2
spectral/cse/cse
spectral/dom-lt/dom-lt
spectral/eliza/eliza
spectral/exact-reals/exact-reals
spectral/expert/expert
......
......@@ -3,6 +3,8 @@ variables:
validate:
image: "registry.gitlab.haskell.org/ghc/ci-images/x86_64-linux-deb9:$DOCKER_REV"
tags:
- x86_64-linux
before_script:
- git clean -xdf
- sudo apt install -y time
......
......@@ -12,16 +12,20 @@ pick `$(which ghc)` or whatever the `HC` environment variable is set to.
Additional information can also be found on
[NoFib's wiki page](https://ghc.haskell.org/trac/ghc/wiki/Building/RunningNoFib).
There's also an `easy.sh` helper script which, as the name implies, is an
automated and easy way to run `nofib`.
See the section at the end of this README for its usage.
## Using
<details>
<summary>Git symlink support for Windows machines</summary>
NoFib uses a few symlinks here and there to share code between benchmarks.
Git for Windows has had symlink support for some time now, but
[it may not be enabled by default](https://stackoverflow.com/a/42137273/388010).
You will notice strange `make boot` failures if it is not enabled for you.
Make sure you follow the instructions in the link to enable symlink support;
it may be as simple as running `git config core.symlinks true` or cloning with
`git clone -c core.symlinks=true <URL>`.
......@@ -49,6 +53,9 @@ $ make boot
$ make EXTRA_HC_OPTS="-fllvm"
```
**Note:** to get all the results, you have to `clean` and `boot` between
separate `nofib` runs.
To compare the results of multiple runs, save the output in a logfile
and use the program in `./nofib-analyse/nofib-analyse`, for example:
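A minimal sketch of such a workflow, mirroring what the `easy.sh` script described below automates (the compiler paths and log-file names here are placeholders):

```
# Sketch of a two-run comparison; compiler paths and log names are placeholders.
make clean && make boot HC=/path/to/old/ghc
make HC=/path/to/old/ghc 2>&1 | tee nofib-log-old

make clean && make boot HC=/path/to/new/ghc
make HC=/path/to/new/ghc 2>&1 | tee nofib-log-new

./nofib-analyse/nofib-analyse nofib-log-old nofib-log-new
```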
......@@ -150,11 +157,51 @@ If you add a benchmark try to set the problem sizes for
fast/normal/slow reasonably. [Modes](#modes) lists the recommended brackets for
each mode.
### Benchmark Categories
So you have a benchmark to submit but don't know in which subfolder to put it? Here's some
advice on the intended semantics of each category.
#### Single threaded benchmarks
These are run when you just type `make`. Their semantics are explained in
[the Nofib paper](https://link.springer.com/chapter/10.1007%2F978-1-4471-3215-8_17)
(you can find a .ps online, thanks to @bgamari; alternatively, grep for
'Spectral' in docs/paper/paper.verb).
- `imaginary`: Mostly toy benchmarks, solving puzzles like n-queens.
- `spectral`: Algorithmic kernels, like FFT. If you want to add a benchmark of a
library, this is most certainly the place to put it.
- `real`: Actual applications, with a command-line interface and all. Because of
the large dependency footprint of today's applications, these have become
rather aged.
- `shootout`: Benchmarks from
[the benchmarks game](https://benchmarksgame-team.pages.debian.net/benchmarksgame/),
formerly known as "language shootout".
Most of the benchmarks are quite old and aren't really written the way one would
write high-performance Haskell code today (e.g., use of `String`, lists,
hand-rolled list combinators that don't take part in list fusion, and rare use of
strictness annotations or unboxed data), so new benchmarks, in particular for the
`real` and `spectral` brackets, are always welcome!
#### Other categories
Besides the default single-threaded categories above, there are the
following (SG: I'm guessing here, I have never run them):
- `gc`: Run by `make -C gc` (though you'll probably have to edit the Makefile for
your specific config). A selection of benchmarks from `spectral` and `real`, plus a
few more (careful: these have not been touched by #15999/!5, see the next
subsection). Test-drives different GC configs, apparently.
- `smp`: Microbenchmarks for the `-threaded` runtime, measuring scheduler
performance on concurrent and STM-heavy code.
### Stability wrt. GC parameterisations
Additionally, pay attention that your benchmarks are stable wrt. different
GC parameterisations, so that small changes in allocation don't lead to big,
inexplicable jumps in performance. See Trac #15999 for details. Also make sure
inexplicable jumps in performance. See #15999 for details. Also make sure
that you run the benchmark with the default GC settings, as enlarging Gen 0 or
Gen 1 heaps just amplifies the problem.
......@@ -164,6 +211,23 @@ working set grows and shrinks (e.g. is approximately constant) over the whole
run of the benchmark. You can ensure this by iterating your main logic n times
(how often depends on your program, but in the ballpark of 100-1000).
You can test stability by plotting productivity curves for your `fast` settings
with the `prod.py` script attached to Trac #15999.
with the `prod.py` script attached to #15999.
If in doubt, ask Sebastian Graf for help.
## easy.sh
```
./easy.sh - easy nofib
Usage: ./easy.sh [ -m mode ] /path/to/baseline/ghc /path/to/new/ghc
GHC paths can point to the root of the GHC repository,
if it was built with Hadrian.
Available options:
-m MODE nofib mode: fast norm slow
This script caches the results using the sha256 of the ghc executable.
Remove the cached result files if you want to rerun the benchmark.
```
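For example, a hypothetical invocation comparing a baseline and a patched Hadrian checkout in `fast` mode (the paths are placeholders):

```
# Compare a baseline and a patched GHC checkout, both built with Hadrian, in fast mode.
./easy.sh -m fast ~/ghc-baseline ~/ghc-patched

# Results are cached as result-<sha256-of-ghc>-fast.txt; delete those files to
# force a re-run. The comparison is written to report.txt and opened with less.
```

The script itself, as added by this MR, follows.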
#!/bin/sh
echo '\033]0;NOFIB: starting...\007'
# Settings
#######################################################################
mode=norm
# "Library" part
#######################################################################
show_usage () {
cat <<EOF
./easy.sh - easy nofib
Usage: ./easy.sh [ -m mode ] /path/to/baseline/ghc /path/to/new/ghc
GHC paths can point to the root of the GHC repository,
if it was built with Hadrian.
Available options:
-m MODE nofib mode: fast norm slow
This script caches the results using the sha256 of the ghc executable.
Remove the cached result files if you want to rerun the benchmark.
EOF
}
hashoffile () {
shasum -a 256 "$1" | awk '{ print $1 }'
}
# getopt
#######################################################################
while getopts 'm:' flag; do
case $flag in
m)
case $OPTARG in
slow)
mode=$OPTARG
;;
norm)
mode=$OPTARG
;;
fast)
mode=$OPTARG
;;
*)
echo "Unknown mode: $OPTARG"
show_usage
exit 1
;;
esac
;;
?) show_usage
;;
esac
done
shift $((OPTIND - 1))
if [ $# -ne 2 ]; then
echo "Expected two arguments: ghc executables or roots of source repositories"
show_usage
exit 1
fi
OLD_HC=$1
NEW_HC=$2
# Set up
#######################################################################
# Arguments can point to GHC repository roots
if [ -d "$OLD_HC" ] && [ -f "$OLD_HC/_build/stage1/bin/ghc" ]; then
OLD_HC="$OLD_HC/_build/stage1/bin/ghc"
fi
if [ -d "$NEW_HC" ] && [ -f "$NEW_HC/_build/stage1/bin/ghc" ]; then
NEW_HC="$NEW_HC/_build/stage1/bin/ghc"
fi
# Check we have executables
if [ ! -f "$OLD_HC" ] || [ ! -x "$OLD_HC" ]; then
echo "$OLD_HC is not an executable"
exit 1
fi
if [ ! -f "$NEW_HC" ] || [ ! -x "$NEW_HC" ]; then
echo "$NEW_HC is not an executable"
exit 1
fi
# Info before we get going
#######################################################################
echo "Running nofib (mode=$mode) with $OLD_HC and $NEW_HC"
echo "Running nofib (mode=$mode) with $OLD_HC and $NEW_HC" | sed 's/./-/g'
sleep 2
# Run nofib
#######################################################################
# Run with old ghc
echo '\033]0;NOFIB: old\007'
OLD_HASH=$(hashoffile $OLD_HC)
OLD_OUTPUT=result-$OLD_HASH-$mode.txt
if [ -f $OLD_OUTPUT ]; then
echo "$OLD_OUTPUT exists; not re-running."
else
echo '\033]0;NOFIB: old, cleaning...\007'
make clean
echo '\033]0;NOFIB: old, booting...\007'
make boot mode=$mode HC=$OLD_HC
echo '\033]0;NOFIB: old, benchmarking...\007'
make mode=$mode HC=$OLD_HC 2>&1 | tee $OLD_OUTPUT
fi
# Run with new ghc
echo '\033]0;NOFIB: new\007'
NEW_HASH=$(hashoffile $NEW_HC)
NEW_OUTPUT=result-$NEW_HASH-$mode.txt
if [ -f $NEW_OUTPUT ]; then
echo "$NEW_OUTPUT exists; not re-running."
else
echo '\033]0;NOFIB: new, cleaning...\007'
make clean
echo '\033]0;NOFIB: new, booting...\007'
make boot mode=$mode HC=$NEW_HC
echo '\033]0;NOFIB: new, benchmarking...\007'
make mode=$mode HC=$NEW_HC 2>&1 | tee $NEW_OUTPUT
fi
# Done
#######################################################################
echo '\033]0;NOFIB: done\007'
# Analyse
./nofib-analyse/nofib-analyse $OLD_OUTPUT $NEW_OUTPUT > report.txt
# Show report
less report.txt
......@@ -75,7 +75,7 @@ the roots of two binomial trees and makes the larger a child of the
smaller (thus bumping its degree by one). It is essential that this
only be called on binomial trees of equal degree.
>link (a @ (Node x as)) (b @ (Node y bs)) =
>link (a@(Node x as)) (b@(Node y bs)) =
> if x <= y then Node x (b:as) else Node y (a:bs)
It will also be useful to extract the minimum element from a tree.
......
......@@ -4,7 +4,7 @@ module Parser ( parseModule, parseStmt, parseIdentifier, parseType,
#include "HsVersions.h"
import HsSyn
import GHC.Hs
import RdrHsSyn
import HscTypes ( IsBootInterface, DeprecTxt )
import Lexer
......
......@@ -16,7 +16,7 @@ module Parser ( parseModule, parseStmt, parseIdentifier, parseType,
#include "HsVersions.h"
import HsSyn
import GHC.Hs
import RdrHsSyn
import HscTypes ( IsBootInterface, DeprecTxt )
import Lexer
......
......@@ -295,16 +295,16 @@ avBelowEQrep (Rep2 lf1 mf1 hfs1) (Rep2 lf2 mf2 hfs2)
--
(\/) :: Route -> Route -> Route
p@ Zero \/ q = q
p@ One \/ q = p
p@Zero \/ q = q
p@One \/ q = p
p@ Stop1 \/ q = q
p@Stop1 \/ q = q
p@(Up1 rs1) \/ Stop1 = p
p@(Up1 rs1) \/ Up1 rs2 = Up1 (myZipWith2 (\/) rs1 rs2)
p@ Stop2 \/ q = q
p@ Up2 \/ Stop2 = p
p@ Up2 \/ q = q
p@Stop2 \/ q = q
p@Up2 \/ Stop2 = p
p@Up2 \/ q = q
p@(UpUp2 rs1) \/ UpUp2 rs2 = UpUp2 (myZipWith2 (\/) rs1 rs2)
p@(UpUp2 rs1) \/ q = p
......@@ -361,16 +361,16 @@ avLUBmax0frontier f0a f0b
--
(/\) :: Route -> Route -> Route
p@ Zero /\ q = p
p@ One /\ q = q
p@Zero /\ q = p
p@One /\ q = q
p@ Stop1 /\ q = p
p@Stop1 /\ q = p
p@(Up1 rs1) /\ (Up1 rs2) = Up1 (myZipWith2 (/\) rs1 rs2)
p@(Up1 rs1) /\ q = q
p@ Stop2 /\ q = p
p@ Up2 /\ q@ Stop2 = q
p@ Up2 /\ q = p
p@Stop2 /\ q = p
p@Up2 /\ q@Stop2 = q
p@Up2 /\ q = p
p@(UpUp2 rs1) /\ q@(UpUp2 rs2) = UpUp2 (myZipWith2 (/\) rs1 rs2)
p@(UpUp2 rs1) /\ q = q
......
......@@ -23,10 +23,10 @@ infix 9 %%
bmNorm :: Domain -> Route -> Route
bmNorm Two r = r
bmNorm (Lift1 ds) r@ Stop1 = r
bmNorm (Lift1 ds) r@Stop1 = r
bmNorm (Lift1 ds) (Up1 rs) = Up1 (myZipWith2 bmNorm ds rs)
bmNorm (Lift2 ds) r@ Stop2 = r
bmNorm (Lift2 ds) r@ Up2 = r
bmNorm (Lift2 ds) r@Stop2 = r
bmNorm (Lift2 ds) r@Up2 = r
bmNorm (Lift2 ds) (UpUp2 rs) = UpUp2 (myZipWith2 bmNorm ds rs)
bmNorm d (Rep rep) = Rep (bmNorm_rep d rep)
......
......@@ -17,7 +17,7 @@ p`onto`l | vertical l = proj(p)==proj(s(l))
-- v `into` ls means that proj(v) is inside (including the border of) proj(ls).
into :: Vector -> Plate -> Bool
v`into`p @ (Plt _ ls)
v`into`p@(Plt _ ls)
| vertical p = or [v`onto`l |l<-ls]
| otherwise = and [a>=0| a<-zs] || and [a<=0| a<-zs]
where zs = [z ( (v-s(l)) * h(l) )| l<-ls]
......@@ -332,7 +332,7 @@ show_comment tr@(TreeSt t _ _)
edit_comment tr @ (TreeSt t tl gst)
edit_comment tr@(TreeSt t tl gst)
= x_form True [InComment "Edit Comment", InMultiText "Comment:" com] /./
exp
where
......
......@@ -383,7 +383,7 @@ constructor (SG sg) i j k
{- Datatype elimination -}
recurse tmL (TM (tm @ (Binder Pi (Symbol_dec tm1 _) _ _ _)) _ sg)
recurse tmL (TM (tm@(Binder Pi (Symbol_dec tm1 _) _ _ _)) _ sg)
= if forall ok (zip tmL tyL)
then
TM (Recurse (map fst tmL) tm [] []) tm sg
......
......@@ -148,7 +148,7 @@ select_trm tm iL
sel_trm (i:iL) (Binary' _ tm1 tm2 _ _) dcL
= sel_trm iL ([tm1,tm2]!!i) dcL
sel_trm (i:iL) (Cond (dc @ (Axiom_dec tm inf)) tm1 tm2 _ _) dcL
sel_trm (i:iL) (Cond (dc@(Axiom_dec tm inf)) tm1 tm2 _ _) dcL
| i==0 = sel_dec iL dc dcL
| i/=0 = sel_trm iL ([tm1,tm2]!!(i-1)) (dc1:dcL)
where
......
......@@ -289,13 +289,13 @@ extract_dc i dc
where
extract :: Int -> IDec -> [IDec] -> IDec
extract 0 (dc @ (Symbol_dec _ _)) _ = dc
extract 0 (dc@(Symbol_dec _ _)) _ = dc
extract 0 (dc @ (Axiom_dec _ _)) _ = dc
extract 0 (dc@(Axiom_dec _ _)) _ = dc
extract 0 (dc @ (Def _ _ _)) _ = dc
extract 0 (dc@(Def _ _ _)) _ = dc
extract 0 (dc @ (Data _ _ _)) _ = dc
extract 0 (dc@(Data _ _ _)) _ = dc
-- extract 0 _ _ = error "BadIndex" -- ** exn
......@@ -379,7 +379,7 @@ typ_of_dec (Symbol_dec tm _) = tm
typ_of_dec (Axiom_dec tm _) = tm
typ_of_dec (dc @ (Decpair dc1 dc2 _))
typ_of_dec (dc@(Decpair dc1 dc2 _))
= if is_sym_dec dc
then Binder Sigma dc1 (typ_of_dec dc2) [] []
else if is_axm_dec dc
......@@ -410,7 +410,7 @@ mk_fnspace tm1 tm2
mk_sms (Symbol_dec _ _) i j
= (Sym i j [] [] , j+1)
mk_sms (dc @ (Decpair dc1 dc2 _)) i j
mk_sms (dc@(Decpair dc1 dc2 _)) i j
= (Pair sms1 sms2 (typ_of_dec dc) [] [] , j2)
where
(sms1, j1) = mk_sms dc1 i (j+1)
......
......@@ -152,7 +152,7 @@ eta_match dc tm i = error "VTS_ERROR" -- ** exn
make_rec fntype clause_ty []
= clause_ty
make_rec (fntype @ ( Binder Pi dc tm _ _)) clause_ty (ty:tyL)
make_rec (fntype@( Binder Pi dc tm _ _)) clause_ty (ty:tyL)
= Binder Pi (Symbol_dec ty2 []) ty1 [] []
where
ty1 = make_rec (shift_trm [] 1 fntype) (shift_trm [] 1 clause_ty) tyL
......@@ -163,10 +163,10 @@ make_rec (fntype @ ( Binder Pi dc tm _ _)) clause_ty (ty:tyL)
gen_type i (fntype @(Binder Pi dc tm _ _)) rectypeL const []
gen_type i (fntype@(Binder Pi dc tm _ _)) rectypeL const []
= make_rec fntype (subst_trm dc tm const) rectypeL
gen_type i (fntype @(Binder Pi dc tm _ _)) rectypeL const (ty : tyL)
gen_type i (fntype@(Binder Pi dc tm _ _)) rectypeL const (ty : tyL)
= Binder Pi (Symbol_dec (shift_trm [] i ty) []) ty1 [] []
where
const1 = App (shift_trm [] 1 const) (Sym 0 0 [] []) [] []
......
......@@ -106,7 +106,7 @@ lift_tactic_valid _ _ t = t
lift_ordtactic_valid vf gst t @(Tree g tl NONE vf' u)
lift_ordtactic_valid vf gst t@(Tree g tl NONE vf' u)
= Tree g tl' dn vf' u
-- handle _ => t
where
......
......@@ -423,7 +423,7 @@ sub process_stats_file {
$ResidencySamples = $2;
}
if ( /^\s+([0-9]+)\s+M[Bb] total memory/ ) {
if ( /^\s+([0-9]+)\s+Mi?[Bb] total memory/ ) {
$TotMem = $1;
}
......
......@@ -59,7 +59,7 @@ endif
# Create output to validate against
revcomp-c : revcomp-c.o
gcc $< -o $@ -pthread
$(CC) $< -o $@ -pthread
reverse-complement.faststdout : revcomp-c $(INPUT_FILE)
./revcomp-c < $(INPUT_FILE) | tr -d '\r' > $@
......
......@@ -3,7 +3,7 @@ include $(TOP)/mk/boilerplate.mk
# TODO(michalt): Re-enable `secretary` (requires `random`)
SUBDIRS = ansi atom awards banner boyer boyer2 calendar cichelli circsim \
clausify constraints cryptarithm1 cryptarithm2 cse eliza expert \
clausify constraints cryptarithm1 cryptarithm2 cse dom-lt eliza expert \
exact-reals fft2 fibheaps fish gcd hartel integer knights lambda \
last-piece lcss life mandel mandel2 mate minimax multiplier para \
power pretty primetest puzzle rewrite scc simple sorting sphere \
......
{-# OPTIONS_GHC -fno-full-laziness #-}
module Main where
import System.Environment
import Data.Bifunctor
import Dom
import Control.Monad
import Data.Traversable
-- Take a filename as input.
-- Each line is expected to be a graph of the form (root, [(vertex, [successors])])
-- Compute the dominators for each line.
main :: IO ()
main = do
  [inputFile,repetitions] <- getArgs
  let repetitions' = read repetitions
  sgraphs <- map (second fromAdj . read) . lines <$> readFile inputFile :: IO [Rooted]
  let s = flip map [0..repetitions'] $ (\i -> sum (map (\g -> i + doGraph g) sgraphs)) :: [Int]
  print $ sum s
doGraph :: Rooted -> Int
doGraph g = length . idom $ g
TOP = ../..
include $(TOP)/mk/boilerplate.mk
include $(TOP)/mk/target.mk
# The core (Dom.hs) is taken from the dom-lt package.
# Compute dominators for a graph.
# The graphs are real examples of control flow graphs produced by GHC.
# First parameter controls input file, second parameter controls repetitions.
# We do more iterations here for stability over different GC parameterisations
# and to shift compute time away from read/graph creation.
FAST_OPTS = ghc-examples.in 50
NORM_OPTS = ghc-examples.in 400
SLOW_OPTS = ghc-examples.in 1800
# We require containers.
HC_OPTS += -package containers
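Assuming the suite has already been booted with the compiler under test, a standalone run of just this benchmark might look roughly like the sketch below; `mode`, `NoFibRuns` and `HC` are the usual nofib make variables, but treat the exact invocation as an assumption rather than a guaranteed interface.

```
# Hypothetical standalone run of dom-lt after a top-level `make boot HC=/path/to/ghc`.
cd spectral/dom-lt
make mode=fast NoFibRuns=10 HC=/path/to/ghc 2>&1 | tee dom-lt.log
```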
......@@ -81,7 +81,7 @@ the roots of two binomial trees and makes the larger a child of the
smaller (thus bumping its degree by one). It is essential that this
only be called on binomial trees of equal degree.
>link (a @ (Node x as)) (b @ (Node y bs)) =
>link (a@(Node x as)) (b@(Node y bs)) =
> if x <= y then Node x (b:as) else Node y (a:bs)
It will also be useful to extract the minimum element from a tree.
......