...
 
Commits (51)
......@@ -109,6 +109,7 @@ spectral/constraints/constraints
spectral/cryptarithm1/cryptarithm1
spectral/cryptarithm2/cryptarithm2
spectral/cse/cse
spectral/dom-lt/dom-lt
spectral/eliza/eliza
spectral/exact-reals/exact-reals
spectral/expert/expert
......
......@@ -3,6 +3,8 @@ variables:
validate:
image: "registry.gitlab.haskell.org/ghc/ci-images/x86_64-linux-deb9:$DOCKER_REV"
tags:
- x86_64-linux
before_script:
- git clean -xdf
- sudo apt install -y time
......
......@@ -12,6 +12,10 @@ pick `$(which ghc)` or whatever the `HC` environment variable is set to.
Additional information can also be found on
[NoFib's wiki page](https://ghc.haskell.org/trac/ghc/wiki/Building/RunningNoFib).
There's also an `easy.sh` helper script which, as the name implies, is an
automated and easy way to run `nofib`.
See the section at the end of this README for its usage.
## Using
<details>
......@@ -49,6 +53,9 @@ $ make boot
$ make EXTRA_HC_OPTS="-fllvm"
```
**Note:** to get all the results, you have to `clean` and `boot` between
separate `nofib` runs.
To compare the results of multiple runs, save the output in a logfile
and use the program in `./nofib-analyse/nofib-analyse`, for example:
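A typical comparison run, sketched with placeholder compiler names (`ghc-baseline`, `ghc-new`) and log file names:

```shell
# Run the suite with the baseline compiler, saving the log
make clean
make boot HC=ghc-baseline
make HC=ghc-baseline 2>&1 | tee log-baseline

# Clean and boot again, then repeat with the compiler under test
make clean
make boot HC=ghc-new
make HC=ghc-new 2>&1 | tee log-new

# Compare the two logs side by side
./nofib-analyse/nofib-analyse log-baseline log-new
```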
......@@ -142,7 +149,17 @@ Optionally combine this with `mode=fast`, see [Modes](#modes).
Some benchmarks aren't run by default and require that extra packages be
installed for the GHC compiler being tested. These packages include:
* `old-time`: for `gc` benchmarks
* `stm`: for `smp` benchmarks
* `parallel`: for parallel benchmarks
* `random`: for various benchmarks
These can be installed with:
```
cabal v1-install --allow-newer -w $HC random parallel old-time
```
## Adding benchmarks
......@@ -150,20 +167,87 @@ If you add a benchmark try to set the problem sizes for
fast/normal/slow reasonably. [Modes](#modes) lists the recommended brackets for
each mode.
### Benchmark Categories
So you have a benchmark to submit but don't know in which subfolder to put it? Here's some
advice on the intended semantics of each category.
#### Single threaded benchmarks
These are run when you just type `make`. Their semantics is explained in
[the Nofib paper](https://link.springer.com/chapter/10.1007%2F978-1-4471-3215-8_17)
(You can find a .ps online, thanks to @bgamari. Alternatively grep for
'Spectral' in docs/paper/paper.verb).
- `imaginary`: Mostly toy benchmarks, solving puzzles like n-queens.
- `spectral`: Algorithmic kernels, like FFT. If you want to add a benchmark of a
library, this is most certainly the place to put it.
- `real`: Actual applications, with a command-line interface and all. Because of
the large dependency footprint of today's applications, these have become
rather aged.
- `shootout`: Benchmarks from
[the benchmarks game](https://benchmarksgame-team.pages.debian.net/benchmarksgame/),
formerly known as "language shootout".
Most of the benchmarks are quite old and aren't really written the way one would
write high-performance Haskell code today (e.g., use of `String` and lists,
redefining their own list combinators that don't take part in list fusion, rare
use of strictness annotations or unboxed data), so new benchmarks for the `real`
and `spectral` brackets in particular are always welcome!
#### Other categories
Other than the default single-threaded categories above, there are the
following (SG: I'm guessing here, have never run them):
- `gc`: Run by `make -C gc` (though you'll probably have to edit the Makefile for
your specific config). Selects benchmarks from `spectral` and `real`, plus a
few more (careful: these have not been touched by #15999/!5, see the next
subsection). Test-drives different GC configurations, apparently.
- `smp`: Microbenchmarks for the `-threaded` runtime, measuring scheduler
performance on concurrent and STM-heavy code.
### Stability wrt. GC parameterisations

Additionally, pay attention that your benchmarks are stable wrt. different
GC parameterisations, so that small changes in allocation don't lead to big,
inexplicable jumps in performance. See #15999 for details. Also make sure
that you run the benchmark with the default GC settings, as enlarging the Gen 0
or Gen 1 heaps just amplifies the problem.
As a rule of thumb on how to ensure this: make sure that your benchmark doesn't
just build up one big data structure and consume it in a final step, but rather
that the working set grows and shrinks (i.e. stays approximately constant on
average) over the whole run of the benchmark. You can ensure this by iterating
your main logic `$n` times (how often depends on your program, but in the
ballpark of 100-1000).
You can test stability by plotting productivity curves for your `fast` settings
with the `prod.py` script attached to #15999.
If in doubt, ask Sebastian Graf for help.
## Important notes
Note that some of these tests (e.g. `spectral/fish`) tend to be very sensitive
to branch predictor effectiveness. This means that changes in the compiler
can easily be masked by "random" fluctuations in the code layout produced by
particular compiler runs. Recent GHC versions provide the `-fproc-alignment`
flag to pad procedures, ensuring slightly better stability across runs. If you
are seeing an unexpected change in performance, try adding `-fproc-alignment=64`
to the compiler flags of both your baseline and test tree.
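For example (a sketch; `EXTRA_HC_OPTS` is the same variable used earlier to pass extra compiler flags):

```shell
# Rebuild and run with padded procedure alignment; do this for
# both the baseline and the test tree before comparing logs
make clean
make boot
make EXTRA_HC_OPTS="-fproc-alignment=64" 2>&1 | tee log-aligned
```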
## easy.sh
```
./easy.sh - easy nofib
Usage: ./easy.sh [ -m mode ] /path/to/baseline/ghc /path/to/new/ghc
GHC paths can point to the root of the GHC repository,
if it's built with Hadrian.
Available options:
-m MODE  nofib mode: fast norm slow
This script caches the results using the sha256 of the ghc executable.
Remove these files if you want to rerun the benchmark.
```
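The caching scheme described above can be illustrated with a small, self-contained sketch; `sha256sum` stands in for the `shasum -a 256` call the script uses, and a temporary file stands in for a real ghc binary:

```shell
#!/bin/sh
# Sketch of easy.sh's result caching: the log file name is keyed by the
# sha256 of the compiler binary, so re-running with an unchanged ghc
# finds the existing file and can skip the benchmark run.
hashoffile () {
    sha256sum "$1" | awk '{ print $1 }'
}

mode=norm
ghc=$(mktemp)              # stand-in for a real ghc executable
printf 'dummy ghc' > "$ghc"

output="result-$(hashoffile "$ghc")-$mode.txt"
echo "$output"             # result-<64 hex chars>-norm.txt

rm -f "$ghc"
```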
#!/bin/sh
echo '\033]0;NOFIB: starting...\007'
# Settings
#######################################################################
mode=norm
# "Library" part
#######################################################################
show_usage () {
cat <<EOF
./easy.sh - easy nofib
Usage: ./easy.sh [ -m mode ] /path/to/baseline/ghc /path/to/new/ghc
GHC paths can point to the root of the GHC repository,
if it's built with Hadrian.
Available options:
-m MODE  nofib mode: fast norm slow
This script caches the results using the sha256 of the ghc executable.
Remove these files if you want to rerun the benchmark.
EOF
}
hashoffile () {
shasum -a 256 "$1" | awk '{ print $1 }'
}
# getopt
#######################################################################
while getopts 'm:' flag; do
case $flag in
m)
case $OPTARG in
slow|norm|fast)
mode=$OPTARG
;;
*)
echo "Unknown mode: $OPTARG"
show_usage
exit 1
;;
esac
;;
?) show_usage
;;
esac
done
shift $((OPTIND - 1))
if [ $# -ne 2 ]; then
echo "Expected two arguments: ghc executables or roots of source repositories"
show_usage
exit 1
fi
OLD_HC=$1
NEW_HC=$2
# Set up
#######################################################################
# Arguments can point to GHC repository roots
if [ -d "$OLD_HC" ] && [ -f "$OLD_HC/_build/stage1/bin/ghc" ]; then
OLD_HC="$OLD_HC/_build/stage1/bin/ghc"
fi
if [ -d "$NEW_HC" ] && [ -f "$NEW_HC/_build/stage1/bin/ghc" ]; then
NEW_HC="$NEW_HC/_build/stage1/bin/ghc"
fi
# Check we have executables
if [ ! -f "$OLD_HC" ] || [ ! -x "$OLD_HC" ]; then
echo "$OLD_HC is not an executable"
exit 1
fi
if [ ! -f "$NEW_HC" ] || [ ! -x "$NEW_HC" ]; then
echo "$NEW_HC is not an executable"
exit 1
fi
# Info before we get going
#######################################################################
echo "Running nofib (mode=$mode) with $OLD_HC and $NEW_HC"
echo "Running nofib (mode=$mode) with $OLD_HC and $NEW_HC" | sed 's/./-/g'
sleep 2
# Run nofib
#######################################################################
# Run with old ghc
echo '\033]0;NOFIB: old\007'
OLD_HASH=$(hashoffile $OLD_HC)
OLD_OUTPUT=result-$OLD_HASH-$mode.txt
if [ -f $OLD_OUTPUT ]; then
echo "$OLD_OUTPUT exists; not re-running."
else
echo '\033]0;NOFIB: old, cleaning...\007'
make clean
echo '\033]0;NOFIB: old, booting...\007'
make boot mode=$mode HC=$OLD_HC
echo '\033]0;NOFIB: old, benchmarking...\007'
make mode=$mode HC=$OLD_HC 2>&1 | tee $OLD_OUTPUT
fi
# Run with new ghc
echo '\033]0;NOFIB: new\007'
NEW_HASH=$(hashoffile $NEW_HC)
NEW_OUTPUT=result-$NEW_HASH-$mode.txt
if [ -f $NEW_OUTPUT ]; then
echo "$NEW_OUTPUT exists; not re-running."
else
echo '\033]0;NOFIB: new, cleaning...\007'
make clean
echo '\033]0;NOFIB: new, booting...\007'
make boot mode=$mode HC=$NEW_HC
echo '\033]0;NOFIB: new, benchmarking...\007'
make mode=$mode HC=$NEW_HC 2>&1 | tee $NEW_OUTPUT
fi
# Done
#######################################################################
echo '\033]0;NOFIB: done\007'
# Analyse
./nofib-analyse/nofib-analyse $OLD_OUTPUT $NEW_OUTPUT > report.txt
# Show report
less report.txt
......@@ -2,10 +2,10 @@ TOP = ../..
include $(TOP)/mk/boilerplate.mk
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H16m -RTS
PROG_ARGS += +RTS -H16m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H10m -RTS
PROG_ARGS += +RTS -H10m -RTS
endif
include $(TOP)/mk/target.mk
......
......@@ -7,10 +7,10 @@ NORM_OPTS = 8 3000
SLOW_OPTS = 8 5000
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H256m -RTS
PROG_ARGS += +RTS -H256m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H30m -RTS
PROG_ARGS += +RTS -H30m -RTS
endif
include $(TOP)/mk/target.mk
......@@ -7,10 +7,10 @@ NORM_OPTS = 10
SLOW_OPTS = 11
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H330m -RTS
PROG_ARGS += +RTS -H330m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H10m -RTS
PROG_ARGS += +RTS -H10m -RTS
endif
include $(TOP)/mk/target.mk
......@@ -75,7 +75,7 @@ the roots of two binomial trees and makes the larger a child of the
smaller (thus bumping its degree by one). It is essential that this
only be called on binomial trees of equal degree.
>link (a @ (Node x as)) (b @ (Node y bs)) =
>link (a@(Node x as)) (b@(Node y bs)) =
> if x <= y then Node x (b:as) else Node y (a:bs)
It will also be useful to extract the minimum element from a tree.
......
......@@ -3,14 +3,14 @@ include $(TOP)/mk/boilerplate.mk
NORM_OPTS = 300000
SRC_HC_OPTS += -package array
SRC_RUNTEST_OPTS += +RTS -K64m -RTS
SRC_DEPS = array
PROG_ARGS += +RTS -K64m -RTS
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H128m -RTS
PROG_ARGS += +RTS -H128m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H10m -RTS
PROG_ARGS += +RTS -H10m -RTS
endif
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -cpp
# Bah.hs is a test file, which we don't want in SRCS
EXCLUDED_SRCS = Bah.hs
......@@ -11,10 +9,10 @@ NORM_OPTS = 9
SLOW_OPTS = 9
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H160m -RTS
PROG_ARGS += +RTS -H160m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H10m -RTS
PROG_ARGS += +RTS -H10m -RTS
endif
SRC_RUNTEST_OPTS += -stdout-binary
......
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -cpp -package old-time
SRC_DEPS = old-time
# kLongLivedTreeDepth = 17 :: Int
# kArraySize = 500000 :: Int
......@@ -13,7 +13,7 @@ NORM_OPTS = 18 500000 4 19
SLOW_OPTS = 19 500000 5 22
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H180m -RTS
PROG_ARGS += +RTS -H180m -RTS
endif
include $(TOP)/mk/target.mk
......
> {-# LANGUAGE CPP #-}
-----------------------------------------------------------------------------
Abstract syntax for grammar files.
......
> {-# LANGUAGE CPP #-}
-----------------------------------------------------------------------------
The Grammar data type.
......
> {-# LANGUAGE CPP #-}
-----------------------------------------------------------------------------
The lexer.
......
> {-# LANGUAGE CPP #-}
-----------------------------------------------------------------------------
The main driver.
......
......@@ -3,14 +3,15 @@ TOP = ../..
include $(TOP)/mk/boilerplate.mk
NORM_OPTS = TestInput.y
SRC_HC_OPTS += -cpp -package containers
SRC_HC_OPTS += -cpp
SRC_DEPS = containers
EXCLUDED_SRCS += TestInput.hs
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H128m -RTS
PROG_ARGS += +RTS -H128m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H10m -RTS
PROG_ARGS += +RTS -H10m -RTS
endif
include $(TOP)/mk/target.mk
......@@ -30,4 +30,6 @@ The parser monad.
> m >>= k = P $ \s l -> case runP m s l of
> OkP a -> runP (k a) s l
> FailP s -> FailP s
> instance MonadFail P where
> fail s = P $ \ _ _ -> FailP s
{-# LANGUAGE CPP #-}
module Set (
Set, null, member, empty, singleton,
union, difference, filter, fold,
......
......@@ -4,7 +4,7 @@ module Parser ( parseModule, parseStmt, parseIdentifier, parseType,
#include "HsVersions.h"
import HsSyn
import GHC.Hs
import RdrHsSyn
import HscTypes ( IsBootInterface, DeprecTxt )
import Lexer
......
......@@ -16,7 +16,7 @@ module Parser ( parseModule, parseStmt, parseIdentifier, parseType,
#include "HsVersions.h"
import HsSyn
import GHC.Hs
import RdrHsSyn
import HscTypes ( IsBootInterface, DeprecTxt )
import Lexer
......
......@@ -6,7 +6,7 @@ NORM_OPTS = 5000000
SLOW_OPTS = 100000000
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H430m -RTS
PROG_ARGS += +RTS -H430m -RTS
endif
include $(TOP)/mk/target.mk
......@@ -6,7 +6,7 @@ NORM_OPTS = 1 2 2000 1000 1001 4000
SLOW_OPTS = 1 2 4000 1000 1001 4000
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H256m -RTS
PROG_ARGS += +RTS -H256m -RTS
endif
include $(TOP)/mk/target.mk
......@@ -5,10 +5,10 @@ include $(TOP)/mk/boilerplate.mk
NORM_OPTS = 14
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H8m -RTS
PROG_ARGS += +RTS -H8m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H10m -RTS
PROG_ARGS += +RTS -H10m -RTS
endif
include $(TOP)/mk/target.mk
......@@ -8,8 +8,8 @@ NORM_OPTS = 80
SLOW_OPTS = 90
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H16m -RTS
PROG_ARGS += +RTS -H16m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H10m -RTS
PROG_ARGS += +RTS -H10m -RTS
endif
......@@ -7,14 +7,14 @@ all boot :: input
input : words
cat words words words words words words words words words words >$@
SRC_HC_OPTS += -package containers
SRC_DEPS = containers
NORM_OPTS = words input
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H32m -RTS
PROG_ARGS += +RTS -H32m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H10m -RTS
PROG_ARGS += +RTS -H10m -RTS
endif
include $(TOP)/mk/target.mk
......@@ -5,10 +5,10 @@ include $(TOP)/mk/boilerplate.mk
PROG_ARGS = 27000.1 27000.2
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H32m -RTS
PROG_ARGS += +RTS -H32m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H24m -RTS
PROG_ARGS += +RTS -H24m -RTS
endif
include $(TOP)/mk/target.mk
......@@ -6,4 +6,4 @@ FAST_OPTS = 150000
NORM_OPTS = 1500000
SLOW_OPTS = 7500000
SRC_HC_OPTS += -package array
SRC_DEPS = array
......@@ -3,7 +3,7 @@ include $(TOP)/mk/boilerplate.mk
-include opts.mk
# Seems to be a real memory hog, this one
SRC_RUNTEST_OPTS += +RTS -M300m -RTS
PROG_ARGS += +RTS -M300m -RTS
include $(TOP)/mk/target.mk
......
......@@ -59,7 +59,7 @@ endif
# All the standard gluing together, as in the comment right at the front
HC_OPTS = $(BOOTSTRAPPING_PACKAGE_CONF_HC_OPTS) $(SRC_HC_OPTS) $(WAY$(_way)_HC_OPTS) $($*_HC_OPTS) $(EXTRA_HC_OPTS)
HC_OPTS = $(BOOTSTRAPPING_PACKAGE_CONF_HC_OPTS) $(SRC_HC_OPTS) $(WAY$(_way)_HC_OPTS) $($*_HC_OPTS) $(EXTRA_HC_OPTS) $(addprefix -package, $(SRC_DEPS))
ifeq "$(HC_VERSION_GE_6_13)" "YES"
HC_OPTS += -rtsopts
endif
......
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_RUNTEST_OPTS += 8400
PROG_ARGS += 8400
include $(TOP)/mk/target.mk
......@@ -4,7 +4,7 @@ include $(TOP)/mk/boilerplate.mk
# Override default SRCS; the default is all source files
SRCS=parfact.hs
SRC_RUNTEST_OPTS += 8000000 1000
PROG_ARGS += 8000000 1000
SRC_HC_OPTS += -package concurrent
include $(TOP)/mk/target.mk
......
......@@ -3,7 +3,7 @@ include $(TOP)/mk/boilerplate.mk
# Override default SRCS; the default is all source files
SRCS=Main.hs
SRC_RUNTEST_OPTS += 20
PROG_ARGS += 20
SRC_HC_OPTS += -cpp -package concurrent
include $(TOP)/mk/target.mk
......
......@@ -2,7 +2,7 @@ TOP = ../..
include $(TOP)/mk/boilerplate.mk
PROG_ARGS = 10000 15000000
SRC_HC_OPTS += -package parallel
SRC_DEPS += parallel
include $(TOP)/mk/target.mk
......@@ -2,11 +2,11 @@ TOP = ../..
include $(TOP)/mk/boilerplate.mk
# This version just counts the results, and runs in constant space:
# SRC_RUNTEST_OPTS += 7 1163
# PROG_ARGS += 7 1163
# This version builds a list of the results, and needs a lot of memory:
SRC_RUNTEST_OPTS += 3 873
PROG_ARGS += 3 873
SRC_HC_OPTS += -package parallel
SRC_DEPS = parallel
include $(TOP)/mk/target.mk
......@@ -13,6 +13,7 @@
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -package parallel -package parsec -fvia-C -fexcess-precision
SRC_HC_OPTS = -fexcess-precision
SRC_DEPS = parallel parsec
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -cpp -DSTRATEGIES -package random -package parallel
SRC_HC_OPTS += -cpp -DSTRATEGIES
SRC_DEPS = random parallel
# 28 = version
# 83 = input
PROG_ARGS = 28 83
PROG_ARGS += 28 83
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_RUNTEST_OPTS += -2.0 -2.0 2.0 2.0 1024 1024 256
PROG_ARGS += -2.0 -2.0 2.0 2.0 1024 1024 256
SRC_RUNTEST_OPTS += -stdout-binary
SRC_HC_OPTS += -package parallel
SRC_DEPS = parallel
include $(TOP)/mk/target.mk
......@@ -9,7 +9,7 @@ include $(TOP)/mk/boilerplate.mk
FAST_OPTS = 100 1 10
NORM_OPTS = 600 1 10
SLOW_OPTS = 1000 1 10
SRC_HC_OPTS += -package parallel
SRC_DEPS = parallel
# FAST_OPTS =
# NORM_OPTS =
......
......@@ -7,9 +7,9 @@ SRCS = Board.hs \
Tree.hs \
Wins.hs \
Main.hs
HC_OPTS += -package parallel -package random
SRC_DEPS = parallel random
PROG_ARGS = 4 6
PROG_ARGS += 4 6
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
PROG_ARGS = 3000
SRC_HC_OPTS += -package parallel
PROG_ARGS += 3000
SRC_DEPS = parallel
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_RUNTEST_OPTS += 43 11
SRC_HC_OPTS += -package parallel
PROG_ARGS += 43 11
SRC_DEPS = parallel
include $(TOP)/mk/target.mk
......@@ -5,6 +5,6 @@ include $(TOP)/mk/boilerplate.mk
FAST_OPTS = 34 15 8
NORM_OPTS = 36 17 8
SLOW_OPTS = 34 18 8
SRC_HC_OPTS += -package parallel
SRC_DEPS = parallel
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
PROG_ARGS = 300 100
SRC_HC_OPTS += -package parallel
PROG_ARGS += 300 100
SRC_DEPS = parallel
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
PROG_ARGS = 500000
SRC_HC_OPTS += -package parallel
PROG_ARGS += 500000
SRC_DEPS = parallel
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_RUNTEST_OPTS += 13
SRC_HC_OPTS += -package parallel
PROG_ARGS += 13
SRC_DEPS = parallel
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
PROG_ARGS = 1000
SRC_HC_OPTS += -package parallel
PROG_ARGS += 1000
SRC_DEPS = parallel
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
PROG_ARGS = 38 8000 100
SRC_HC_OPTS += -package parallel
PROG_ARGS += 38 8000 100
SRC_DEPS = parallel
# FAST_OPTS =
# NORM_OPTS =
......
{-# OPTIONS_GHC -XFlexibleInstances -XBangPatterns #-}
{-# LANGUAGE CPP, FlexibleInstances, BangPatterns #-}
-- Time-stamp: <2010-11-03 12:01:15 simonmar>
--
-- Test wrapper for (parallel) transitive closure computation.
......
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -cpp -DSTRATEGIES -DTRANSCL_NESTED -package random -package parallel -package containers
SRC_HC_OPTS += -DSTRATEGIES -DTRANSCL_NESTED
SRC_DEPS = random parallel containers
# XXX: only speeds up without optimisation. This is bad. Could be
# due to the nfib delay mucking up load-balancing.
......@@ -13,7 +14,7 @@ SRC_HC_OPTS += -O0
# 10 is about optimal for 7.1, greater degrades perf (less so for local-gc)
# dummy = 999 (always)
# delay = larger for
PROG_ARGS = 4 10 10 999 24
PROG_ARGS += 4 10 10 999 24
include $(TOP)/mk/target.mk
......@@ -13,7 +13,7 @@
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_RUNTEST_OPTS += 400 400
SRC_HC_OPTS += -package parallel
PROG_ARGS += 400 400
SRC_DEPS = parallel
include $(TOP)/mk/target.mk
......@@ -295,16 +295,16 @@ avBelowEQrep (Rep2 lf1 mf1 hfs1) (Rep2 lf2 mf2 hfs2)
--
(\/) :: Route -> Route -> Route
p@ Zero \/ q = q
p@ One \/ q = p
p@Zero \/ q = q
p@One \/ q = p
p@ Stop1 \/ q = q
p@Stop1 \/ q = q
p@(Up1 rs1) \/ Stop1 = p
p@(Up1 rs1) \/ Up1 rs2 = Up1 (myZipWith2 (\/) rs1 rs2)
p@ Stop2 \/ q = q
p@ Up2 \/ Stop2 = p
p@ Up2 \/ q = q
p@Stop2 \/ q = q
p@Up2 \/ Stop2 = p
p@Up2 \/ q = q
p@(UpUp2 rs1) \/ UpUp2 rs2 = UpUp2 (myZipWith2 (\/) rs1 rs2)
p@(UpUp2 rs1) \/ q = p
......@@ -361,16 +361,16 @@ avLUBmax0frontier f0a f0b
--
(/\) :: Route -> Route -> Route
p@ Zero /\ q = p
p@ One /\ q = q
p@Zero /\ q = p
p@One /\ q = q
p@ Stop1 /\ q = p
p@Stop1 /\ q = p
p@(Up1 rs1) /\ (Up1 rs2) = Up1 (myZipWith2 (/\) rs1 rs2)
p@(Up1 rs1) /\ q = q
p@ Stop2 /\ q = p
p@ Up2 /\ q@ Stop2 = q
p@ Up2 /\ q = p
p@Stop2 /\ q = p
p@Up2 /\ q@Stop2 = q
p@Up2 /\ q = p
p@(UpUp2 rs1) /\ q@(UpUp2 rs2) = UpUp2 (myZipWith2 (/\) rs1 rs2)
p@(UpUp2 rs1) /\ q = q
......
......@@ -23,10 +23,10 @@ infix 9 %%
bmNorm :: Domain -> Route -> Route
bmNorm Two r = r
bmNorm (Lift1 ds) r@ Stop1 = r
bmNorm (Lift1 ds) r@Stop1 = r
bmNorm (Lift1 ds) (Up1 rs) = Up1 (myZipWith2 bmNorm ds rs)
bmNorm (Lift2 ds) r@ Stop2 = r
bmNorm (Lift2 ds) r@ Up2 = r
bmNorm (Lift2 ds) r@Stop2 = r
bmNorm (Lift2 ds) r@Up2 = r
bmNorm (Lift2 ds) (UpUp2 rs) = UpUp2 (myZipWith2 bmNorm ds rs)
bmNorm d (Rep rep) = Rep (bmNorm_rep d rep)
......
{-# LANGUAGE CPP #-}
module Main (main){-export list added by partain-} where {
-- partain: with "ghc -cpp -DSLEAZY_UNBOXING", you get (guess what)?
......
......@@ -3,8 +3,6 @@ include $(TOP)/mk/boilerplate.mk
SRCS = BinConv.hs BinTest.hs Decode.hs Defaults.hs Encode.hs Main.hs PTTrees.hs Uncompress.hs
Lzw_HC_OPTS = -cpp
# Input files taken from http://mattmahoney.net/dc/enwik8.zip
#
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -fglasgow-exts
# Input files taken from http://mattmahoney.net/dc/enwik8.zip
include $(TOP)/mk/target.mk
{-# LANGUAGE MagicHash #-}
module WriteRoutines (outputCodes)
where
......
TOP = ../../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -fglasgow-exts -package transformers
include $(TOP)/mk/target.mk
......@@ -7,9 +7,6 @@
module EffBench where
import qualified Control.Monad.State.Strict as S
times :: Monad m => Int -> m a -> m ()
times n ma = go n where
go 0 = pure ()
......
TOP = ../../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -fglasgow-exts
include $(TOP)/mk/target.mk
TOP = ../../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -fglasgow-exts
include $(TOP)/mk/target.mk
TOP = ../../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -fglasgow-exts -package transformers -package mtl
SRC_DEPS = transformers mtl
include $(TOP)/mk/target.mk
TOP = ../../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -fglasgow-exts -package transformers
SRC_DEPS = transformers
include $(TOP)/mk/target.mk
TOP = ../../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -fglasgow-exts -package transformers -package mtl
SRC_DEPS = mtl
include $(TOP)/mk/target.mk
TOP = ../../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -fglasgow-exts -package transformers -package mtl
SRC_DEPS = mtl
include $(TOP)/mk/target.mk
TOP = ../..
include $(TOP)/mk/boilerplate.mk
SRC_HC_OPTS += -cpp
# Bah.hs is a test file, which we don't want in SRCS
EXCLUDED_SRCS = Bah.hs
......