Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (156)
Showing with 520 additions and 13257 deletions
@@ -8,6 +8,13 @@ cachegrind.out.*
cachegrind.out.summary
perf.data
perf.data.*
dist-newstyle/
.ghc.environment.*
_make
tags
# Common for hackage head workarounds
cabal.project.local
# Specific generated files
nofib-analyse/nofib-analyse
@@ -107,6 +114,7 @@ spectral/constraints/constraints
spectral/cryptarithm1/cryptarithm1
spectral/cryptarithm2/cryptarithm2
spectral/cse/cse
spectral/dom-lt/dom-lt
spectral/eliza/eliza
spectral/exact-reals/exact-reals
spectral/expert/expert
......
validate:
image: ghcci/x86_64-linux-deb9:0.2
# Commit taken from https://gitlab.haskell.org/ghc/ci-images
variables:
DOCKER_REV: f2d12519f45a13a61fcca03a949f927ceead6492
# Always start with a fresh clone to avoid non-hermetic builds
GIT_STRATEGY: clone
.validate-hadrian:
image: "registry.gitlab.haskell.org/ghc/ci-images/x86_64-linux-deb10:$DOCKER_REV"
tags:
- x86_64-linux
before_script:
- sudo chown ghc:ghc -R .
- git clean -xdf
- sudo apt install -y time
- $GHC --version
- cabal --version
script:
- make clean
- |
cabal update
cabal install regex-compat html
- make boot mode=fast
- "make mode=fast NoFibRuns=1 2>&1 | tee log"
- "nofib-analyse/nofib-analyse log"
- git submodule update --init --recursive
- $GHC --version
- cabal update
- cabal new-run -w $GHC nofib-run -- -o out -w "$GHC" $EXTRA_ARGS
- mkdir -p results
- $GHC --info > results/compiler-info
- cp _make/out/*.results.tsv results
artifacts:
paths:
- results
validate-hadrian-normal:
extends:
- .validate-hadrian
variables:
EXTRA_ARGS: "--speed=Norm"
validate-hadrian-fast:
extends:
- .validate-hadrian
variables:
EXTRA_ARGS: "--speed=Fast"
# Syntax: https://docs.gitlab.com/ee/user/project/code_owners.html
* @sgraf812 @bgamari
\ No newline at end of file
# NoFib: Legacy make build system
This describes NoFib's legacy `make`-based build system. Note that you are
strongly encouraged to use the newer Shake-based build system, described in
[`shake/README.mkd`](shake/README.mkd).
When nofib is being used to test a compiler built from source, the `nofib`
directory should be at the same level in the tree as `compiler` and `libraries`.
This makes sure that NoFib picks up the stage 2 compiler from the surrounding
GHC source tree. However, you can also clone this repository in isolation, in
which case it will pick up `$(which ghc)` or whatever the `HC` environment
variable is set to.
There's also an `easy.sh` helper script which, as the name implies, is an
automated and easy way to run `nofib`. See the section at the end of this
document for its usage.
## Usage
<details>
<summary>Git symlink support for Windows machines</summary>
NoFib uses a few symlinks here and there to share code between benchmarks.
Git for Windows has had symlink support for some time now, but
[it may not be enabled by default](https://stackoverflow.com/a/42137273/388010).
You will notice strange `make boot` failures if it's not enabled for you.
Make sure you follow the instructions in the link to enable symlink support,
possibly as simple as through `git config core.symlinks true` or cloning with
`git clone -c core.symlinks=true <URL>`.
</details>
Install [`cabal-install-2.4`](https://www.haskell.org/cabal/download.html) or later.
Then, to run the tests, execute:
```bash
$ make clean # or git clean -fxd, it's faster
$ # Generates input files for the benchmarks and builds compilation
$ # dependencies for make (ghc -M)
$ make boot
$ # Builds the benchmarks and runs them $NoFibRuns (default: 5) times
$ make
```
This will put the results in the file `nofib-log`. You can pass extra
options to a nofib run using the `EXTRA_HC_OPTS` variable like this:
```bash
$ make clean
$ make boot
$ make EXTRA_HC_OPTS="-fllvm"
```
Likewise, you can pass additional arguments (e.g. RTS flags) to the command
itself by using the `EXTRA_RUNTEST_OPTS` variable like this:
    make EXTRA_RUNTEST_OPTS="-- +RTS -A2M -RTS"
The `--` here ensures that `runtest` doesn't attempt to interpret any of the
given flags as its own.
**Note:** to get all the results, you have to `clean` and `boot` between
separate `nofib` runs.
To compare the results of multiple runs, save the output in a logfile
and use the program in `./nofib-analyse/nofib-analyse`, for example:
```bash
...
$ make 2>&1 | tee nofib-log-6.4.2
...
$ make 2>&1 | tee nofib-log-6.6
$ nofib-analyse nofib-log-6.4.2 nofib-log-6.6 | less
```
to generate a comparison of the runs captured in `nofib-log-6.4.2`
and `nofib-log-6.6`. When making comparisons, be careful to ensure
that the things that changed between the builds are only the things
that you _wanted_ to change. There are lots of variables: machine,
GHC version, GCC version, C libraries, static vs. dynamic GMP library,
build options, run options, and probably lots more. To be on the safe
side, make both runs on the same unloaded machine.
## Modes
Each benchmark is runnable in three different time `mode`s:
- `fast`: 0.1-0.2s
- `norm`: 1-2s
- `slow`: 5-10s
You can control which mode to run by setting an additional `mode` variable for
`make`. The default is `mode=norm`. Example for `mode=fast`:
```bash
$ make clean
$ make boot mode=fast
$ make mode=fast
```
Note that the `mode`s set in `make boot` and `make` need to agree. Otherwise you
will get output errors, because `make boot` will generate input files for a
different `mode`. A more DRY way to control the `mode` would be
```bash
$ make clean
$ export mode=fast
$ make boot
$ make
```
As CPU architectures advance, the above running times may drift and,
occasionally, all benchmarks will need adjustments.
Be aware that `nofib-analyse` will ignore results that fall below 0.2s.
This is the default of its `-i` option, which is of course incompatible with
`mode=fast`. In that case, you should set `-i` as appropriate, or even
deactivate it with `-i 0`.
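For example, to compare two `mode=fast` logs without the 0.2s cut-off (the log file names are just placeholders):

```bash
$ nofib-analyse/nofib-analyse -i 0 nofib-log-fast-old nofib-log-fast-new | less
```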
## Boot vs. benchmarked GHC
The `nofib-analyse` utility is compiled with the `BOOT_HC` compiler,
which may be different from the GHC being benchmarked.
You can control which GHC you benchmark with the `HC` variable:
```bash
$ make clean
$ make boot HC=ghc-head
$ make HC=ghc-head 2>&1 | tee nofib-log-ghc-head
```
## Configuration
There are some options you might want to tweak; search for nofib in
`../mk/config.mk`, and override settings in `../mk/build.mk` as usual.
## Extra Metrics: Valgrind
To get instruction counts, memory reads/writes, and "cache misses",
you'll need to get hold of Cachegrind, which is part of
[Valgrind](http://valgrind.org).
You can then pass `-cachegrind` as `EXTRA_RUNTEST_OPTS`. Counting
instructions slows down execution by a factor of ~30. But it's
a deterministic metric, so you can combine it with `NoFibRuns=1`:
```bash
$ (make EXTRA_RUNTEST_OPTS="-cachegrind" NoFibRuns=1) 2>&1 | tee nofib-log
```
Optionally combine this with `mode=fast`, see [Modes](#modes).
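Putting the pieces together, a quick single-run Cachegrind sweep could look like this (the log file name is just a placeholder):

```bash
$ make clean
$ make boot mode=fast
$ make mode=fast EXTRA_RUNTEST_OPTS="-cachegrind" NoFibRuns=1 2>&1 | tee nofib-log-cachegrind
```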
## Extra Packages
Some benchmarks aren't run by default and require that extra packages be
installed for the GHC compiler being tested. These packages include:
* `old-time`: for `gc` benchmarks
* `stm`: for smp benchmarks
* `parallel`: for parallel benchmarks
* `random`: for various benchmarks
These can be installed with
```bash
cabal v1-install --allow-newer -w $HC random parallel old-time
```
## easy.sh
```
./easy.sh - easy nofib
Usage: ./easy.sh [ -m mode ] /path/to/baseline/ghc /path/to/new/ghc
GHC paths can point to the root of the GHC repository,
if it was built with Hadrian.
Available options:
-m MODE nofib mode: fast norm slow
This script caches the results using the sha256 of the ghc executable.
Remove these files if you want to rerun the benchmark.
```
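The cached results are written next to where you ran `easy.sh`, named `result-<sha256>-<mode>.txt`, so forcing a rerun of, say, the `norm` results is just a matter of deleting them:

```bash
$ rm -f result-*-norm.txt
```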
# NoFib: Haskell Benchmark Suite
This is the root directory of the "NoFib Haskell benchmark suite". It
should be part of a GHC source tree, that is the 'nofib' directory
should be at the same level in the tree as 'compiler' and 'libraries'.
This makes sure that NoFib picks up the stage 2 compiler from the
surrounding GHC source tree.
This is the root directory of the "NoFib Haskell benchmark suite".
There are currently two means of running the `nofib` benchmarks:
You can also clone this repository in isolation, in which case it will
pick `$(which ghc)` or whatever the `HC` environment variable is set to.
* [the `shake`-based build system](shake/README.mkd)
* [the legacy `make`-based build system](README.make.mkd)
Additional information can also be found on
[NoFib's wiki page](https://ghc.haskell.org/trac/ghc/wiki/Building/RunningNoFib).
## Package Dependencies
Please make sure you have the following packages installed for your
system GHC:
* html
* regex-compat (will install: mtl, regex-base, regex-posix)
## Using
Then, to run the tests, execute:
```
$ make clean # or git clean -fxd, it's faster
$ # Generates input files for the benchmarks and builds compilation
$ # dependencies for make (ghc -M)
$ make boot
$ # Builds the benchmarks and runs them $NoFibRuns (default: 5) times
$ make
```
This will put the results in the file `nofib-log`. You can pass extra
options to a nofib run using the `EXTRA_HC_OPTS` variable like this:
Users are generally encouraged to use the former when possible. See the linked
READMEs for usage instructions.
```
$ make clean
$ make boot
$ make EXTRA_HC_OPTS="-fllvm"
```
Additional information can also be found on
[NoFib's wiki page](https://gitlab.haskell.org/ghc/ghc/-/wikis/building/running-nofib).
To compare the results of multiple runs, save the output in a logfile
and use the program in `../utils/nofib-analyse`, for example:
```
...
$ make 2>&1 | tee nofib-log-6.4.2
...
$ make 2>&1 | tee nofib-log-6.6
$ nofib-analyse nofib-log-6.4.2 nofib-log-6.6 | less
```
## Adding benchmarks
to generate a comparison of the runs captured in `nofib-log-6.4.2`
and `nofib-log-6.6`. When making comparisons, be careful to ensure
that the things that changed between the builds are only the things
that you _wanted_ to change. There are lots of variables: machine,
GHC version, GCC version, C libraries, static vs. dynamic GMP library,
build options, run options, and probably lots more. To be on the safe
side, make both runs on the same unloaded machine.
If you add a benchmark, try to set the problem sizes for
fast/normal/slow reasonably. [Modes](#modes) lists the recommended brackets for
each mode.
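To sanity-check where a new benchmark lands, a per-directory run is usually enough. A sketch (assuming a hypothetical `spectral/my-bench` directory, and that per-directory invocations work like the `make -C gc` example mentioned further below):

```
$ make -C spectral/my-bench boot mode=fast
$ make -C spectral/my-bench mode=fast NoFibRuns=1
```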
## Modes
### Benchmark runtimes
Each benchmark is runnable in three different time `mode`s:
Benchmarks should ideally support running in three different modes:
- `fast`: 0.1-0.2s
- `norm`: 1-2s
- `slow`: 5-10s
You can control which mode to run by setting an additional `mode` variable for
`make`. The default is `mode=norm`. Example for `mode=fast`:
```
$ make clean
$ make boot mode=fast
$ make mode=fast
```
You can look at existing benchmarks for how this is usually achieved.
### Benchmark Categories
Note that the `mode`s set in `make boot` and `make` need to agree. Otherwise you
will get output errors, because `make boot` will generate input files for a
different `mode`. A more DRY way to control the `mode` would be
So you have a benchmark to submit but don't know in which subfolder to put it? Here's some
advice on the intended semantics of each category.
```
$ make clean
$ export mode=fast
$ make boot
$ make
```
#### Single threaded benchmarks
As CPU architectures advance, the above running times may drift and,
occasionally, all benchmarks will need adjustments.
These are run when you just type `make`. Their semantics is explained in
[the Nofib paper](https://link.springer.com/chapter/10.1007%2F978-1-4471-3215-8_17)
(You can find a .ps online, thanks to @bgamari. Alternatively grep for
'Spectral' in docs/paper/paper.verb).
Be aware that `nofib-analyse` will ignore results that fall below 0.2s.
This is the default of its `-i` option, which is of course incompatible with
`mode=fast`. In that case, you should set `-i` as appropriate, or even
deactivate it with `-i 0`.
- `imaginary`: Mostly toy benchmarks, solving puzzles like n-queens.
- `spectral`: Algorithmic kernels, like FFT. If you want to add a benchmark of a
library, this is most certainly the place to put it.
- `real`: Actual applications, with a command-line interface and all. Because of
the large dependency footprint of today's applications, these have become
rather aged.
- `shootout`: Benchmarks from
[the benchmarks game](https://benchmarksgame-team.pages.debian.net/benchmarksgame/),
formerly known as "language shootout".
## Configuration
Most of the benchmarks are quite old and aren't really written the way one would
write high-performance Haskell code today (e.g., use of `String`, lists,
redefining their own list combinators that don't take part in list fusion, rare
use of strictness annotations or unboxed data), so new benchmarks for the `real`
and `spectral` brackets in particular are always welcome!
There are some options you might want to tweak; search for nofib in
`../mk/config.mk`, and override settings in `../mk/build.mk` as usual.
#### Other categories
## Extra Metrics: Valgrind
Other than the default single-threaded categories above, there are the
following (SG: I'm guessing here, have never run them):
To get instruction counts, memory reads/writes, and "cache misses",
you'll need to get hold of Cachegrind, which is part of
[Valgrind](http://valgrind.org).
You can then pass `-cachegrind` as `EXTRA_RUNTEST_OPTS`. Counting
instructions slows down execution by a factor of ~30. But it's
a deterministic metric, so you can combine it with `NoFibRuns=1`:
```
$ (make EXTRA_RUNTEST_OPTS="-cachegrind" NoFibRuns=1) 2>&1 | tee nofib-log
```
Optionally combine this with `mode=fast`, see [Modes](#modes).
## Extra Packages
Some benchmarks aren't run by default and require that extra packages be
installed for the GHC compiler being tested. These packages include:
* stm - for smp benchmarks
## Adding benchmarks
If you add a benchmark, try to set the problem sizes for
fast/normal/slow reasonably. [Modes](#modes) lists the recommended brackets for
each mode.
- `gc`: Run by `make -C gc` (though you'll probably have to edit the Makefile to
fit your specific config). Selected benchmarks from `spectral` and `real`, plus a
few more (careful, these have not been touched by #15999/!5, see the next
subsection). Test-drives different GC configs, apparently.
- `smp`: Microbenchmarks for the `-threaded` runtime, measuring scheduler
performance on concurrent and STM-heavy code.
### Stability wrt. GC parameterisations
Additionally, pay attention that your benchmarks are stable wrt. different
GC parameterisations, so that small changes in allocation don't lead to big,
inexplicable jumps in performance. See #15999 for details. Also make sure
that you run the benchmark with the default GC settings, as enlarging Gen 0 or
Gen 1 heaps just amplifies the problem.
As a rule of thumb on how to ensure this: Make sure that your benchmark doesn't
just build up one big data structure and consume it in a final step, but rather
that the working set grows and shrinks (i.e. is approximately constant) over the
whole run of the benchmark. You can ensure this by iterating your main logic `$n`
times (how often depends on your program, but in the ballpark of 100-1000).
You can test stability by plotting productivity curves for your `fast` settings
with the `prod.py` script attached to #15999.
If in doubt, ask Sebastian Graf for help.
## Important notes
Note that some of these tests (e.g. `spectral/fish`) tend to be very sensitive
to branch predictor effectiveness. This means that changes in the compiler
can easily be masked by "random" fluctuations in the code layout produced by
particular compiler runs. Recent GHC versions provide the `-fproc-alignment`
flag to pad procedures, ensuring slightly better stability across runs. If you
are seeing an unexpected change in performance, try adding `-fproc-alignment=64`
to the compiler flags of both your baseline and test tree.
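With the legacy make-based build system, one way to do that is via `EXTRA_HC_OPTS` (the log file name below is just a placeholder):

```
$ make clean && make boot
$ make EXTRA_HC_OPTS="-fproc-alignment=64" 2>&1 | tee nofib-log-aligned
```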
@@ -423,6 +423,11 @@ sorting
~~~~~~~
Same issue with GHC.IO.Encoding.UTF8 as treejoin
life
~~~~
The call to zipWith3 in row is quite close to the inlining threshold.
Makes a difference of about 2% last time it flipped.
---------------------------------------
Real suite
......
packages: shake
-- A cabal.project file to be used with hackage head.
-- Use by running `cabal --project-file nofib.head <usual command>`
packages: shake
repository head.hackage.ghc.haskell.org
url: https://ghc.gitlab.haskell.org/head.hackage/
secure: True
key-threshold: 3
root-keys:
f76d08be13e9a61a377a85e2fb63f4c5435d40f8feb3e12eb05905edb8cdea89
26021a13b401500c8eb2761ca95c61f2d625bfef951b939a8124ed12ecf07329
7541f32a4ccca4f97aea3b22f5e593ba2c0267546016b992dfadcd2fe944e55d
@@ -10,7 +10,31 @@ import System.Environment (getArgs)
-- | A very simple hash function so that we don't have to store and compare
-- huge output files.
hash :: String -> Int
hash = foldl' (\acc c -> ord c + acc*31) 0
hash str = foldl' (\acc c -> ord c + acc*31) 0 str
{-# INLINE hash #-}
{-
Note: Originally, `hash` was eta-reduced and not explicitly inlined, and as a
result the `foldl'` call here was not saturated. Thus the definition of `hash`
was simple enough to inline into the call site, where the provided string
allowed `foldl'` to inline into `foldr`, which then enabled fusion, avoiding
potentially non-trivial allocation overhead.
In !5259, we're reducing the arity of `foldl'`, so that it can inline with just
two arguments. This yields performance improvements in common idiomatic code,
see !5259 for details.
That reduced arity makes `foldl'` inline into even the eta-reduced `hash`,
which then (for lack of an INLINE here) also inlined `foldr` and the fusion
opportunity was lost.
Quoting Simon, <https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5259#note_341945>
* let's add that INLINE to hash in NofibUtils. We don't usually mess with
nofib, but we don't want to perpetuate reliance on a fluke.
In addition, we eta-expand `hash`. It will now saturate both the original
and the post-!5259 `foldl'`.
-}
-- | Using @salt xs@ on a loop-invariant @xs@ inside a loop prevents the
-- compiler from floating out the input parameter.
@@ -27,4 +51,4 @@ salt xs = do
-- executable, otherwise this isn't really 'pure'
-- anymore.
pure (take (max (maxBound - 1) s) xs)
#endif
\ No newline at end of file
#endif
#!/bin/sh
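# The \033]0;...\007 escape sequences used below set the terminal title, so progress is visible in the window title.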
echo '\033]0;NOFIB: starting...\007'
# Settings
#######################################################################
mode=norm
# "Library" part
#######################################################################
show_usage () {
cat <<EOF
./easy.sh - easy nofib
Usage: ./easy.sh [ -m mode ] /path/to/baseline/ghc /path/to/new/ghc
GHC paths can point to the root of the GHC repository,
if it was built with Hadrian.
Available options:
-m MODE nofib mode: fast norm slow
This script caches the results using the sha256 of the ghc executable.
Remove these files if you want to rerun the benchmark.
EOF
}
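# Print the sha256 of the file given as $1; used to key the cached result files.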
hashoffile () {
    shasum -a 256 "$1" | awk '{ print $1 }'
}
# getopt
#######################################################################
while getopts 'm:' flag; do
    case $flag in
        m)
            case $OPTARG in
                slow)
                    mode=$OPTARG
                    ;;
                norm)
                    mode=$OPTARG
                    ;;
                fast)
                    mode=$OPTARG
                    ;;
                *)
                    echo "Unknown mode: $OPTARG"
                    show_usage
                    exit 1
                    ;;
            esac
            ;;
        ?) show_usage
            ;;
    esac
done
shift $((OPTIND - 1))
if [ $# -ne 2 ]; then
echo "Expected two arguments: ghc executables or roots of source repositories"
show_usage
exit 1
fi
OLD_HC=$1
NEW_HC=$2
# Set up
#######################################################################
# Arguments can point to GHC repository roots
if [ -d $OLD_HC -a -f "$OLD_HC/_build/stage1/bin/ghc" ]; then
    OLD_HC="$OLD_HC/_build/stage1/bin/ghc"
fi
if [ -d $NEW_HC -a -f "$NEW_HC/_build/stage1/bin/ghc" ]; then
    NEW_HC="$NEW_HC/_build/stage1/bin/ghc"
fi
# Check we have executables
if [ ! -f "$OLD_HC" ] || [ ! -x "$OLD_HC" ]; then
    echo "$OLD_HC is not an executable"
    exit 1
fi
if [ ! -f "$NEW_HC" ] || [ ! -x "$NEW_HC" ]; then
    echo "$NEW_HC is not an executable"
    exit 1
fi
# Info before we get going
#######################################################################
echo "Running nofib (mode=$mode) with $OLD_HC and $NEW_HC"
echo "Running nofib (mode=$mode) with $OLD_HC and $NEW_HC" | sed 's/./-/g'
sleep 2
# Run nofib
#######################################################################
# Run with old ghc
echo '\033]0;NOFIB: old\007'
OLD_HASH=$(hashoffile $OLD_HC)
OLD_OUTPUT=result-$OLD_HASH-$mode.txt
if [ -f $OLD_OUTPUT ]; then
    echo "$OLD_OUTPUT exists; not re-running."
else
    echo '\033]0;NOFIB: old, cleaning...\007'
    make clean
    echo '\033]0;NOFIB: old, booting...\007'
    make boot mode=$mode HC=$OLD_HC
    echo '\033]0;NOFIB: old, benchmarking...\007'
    make mode=$mode HC=$OLD_HC 2>&1 | tee $OLD_OUTPUT
fi
# Run with new ghc
echo '\033]0;NOFIB: new\007'
NEW_HASH=$(hashoffile $NEW_HC)
NEW_OUTPUT=result-$NEW_HASH-$mode.txt
if [ -f $NEW_OUTPUT ]; then
    echo "$NEW_OUTPUT exists; not re-running."
else
    echo '\033]0;NOFIB: new, cleaning...\007'
    make clean
    echo '\033]0;NOFIB: new, booting...\007'
    make boot mode=$mode HC=$NEW_HC
    echo '\033]0;NOFIB: new, benchmarking...\007'
    make mode=$mode HC=$NEW_HC 2>&1 | tee $NEW_OUTPUT
fi
# Done
#######################################################################
echo '\033]0;NOFIB: done\007'
# Analyse
./nofib-analyse/nofib-analyse $OLD_OUTPUT $NEW_OUTPUT > report.txt
# Show report
less report.txt
@@ -31,6 +31,7 @@
module Main ( main ) where
import Data.Char
import Prelude hiding (null, length, or, foldr, maximum, concat, concatMap, foldl, foldr1, foldl1, sum, all, any, and, elem, notElem)
import Data.List
import System.IO
import System.Environment
......
@@ -2,10 +2,10 @@ TOP = ../..
include $(TOP)/mk/boilerplate.mk
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H16m -RTS
PROG_ARGS += +RTS -H16m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H10m -RTS
PROG_ARGS += +RTS -H10m -RTS
endif
include $(TOP)/mk/target.mk
......
This diff is collapsed.
@@ -33,7 +33,9 @@ David J. King & John O'Donnell
January, 1998
> import System.Environment
> import Control.Monad (forM_)
> import Data.List
> import Prelude hiding (length, or, foldr, maximum, concat, foldl)
> data BinTree a b = Cell a
> | Node b (BinTree a b) (BinTree a b)
@@ -656,8 +658,10 @@ To run (with ghc) for a (8 bit register) circuit over 1000 cycles
% circ_sim 8 1000
> main :: IO ()
> main = getArgs >>= \[num_bits, num_cycles] ->
>          print (run (read num_bits) (read num_cycles))
> main = forM_ [1..97] $ const $ do
>   (num_bits:num_cycles:_) <- getArgs
>   -- We save ourselves some trouble and don't produce output in the gc variant
>   return $! (length . show $ (run (read num_bits) (read num_cycles))) `seq` ()
> run :: Int -> Int -> [[Boolean]]
......
@@ -2,15 +2,15 @@ TOP = ../..
include $(TOP)/mk/boilerplate.mk
FAST_OPTS = 8 100
NORM_OPTS = 8 3000
SLOW_OPTS = 8 5000
FAST_OPTS = 8 4
NORM_OPTS = 8 40
SLOW_OPTS = 8 200
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H256m -RTS
PROG_ARGS += +RTS -H256m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H30m -RTS
PROG_ARGS += +RTS -H30m -RTS
endif
include $(TOP)/mk/target.mk
[[F,F,F,F,F,F,F,F],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T],[T,T,T,T,T,T,T,T]]
This diff is collapsed.
This diff is collapsed.
@@ -3,20 +3,21 @@
See Proceedings of WAAAPL '99
-}
import Prelude hiding (Maybe(Just,Nothing))
import Data.List
import Prelude hiding (Maybe(Just,Nothing), null, length, or, foldr, maximum, concat, foldl, foldr1, foldl1, sum, all, elem, notElem)
import System.Environment
import Control.Monad (forM_)
-----------------------------
-- The main program
-----------------------------
main = do
  [arg] <- getArgs
  let
    n = read arg :: Int
    try algorithm = print (length (search algorithm (queens n)))
  sequence_ (map try [bt, bm, bjbt, bjbt', fc])
main = forM_ [1..240] $ const $ do
  [arg] <- getArgs
  let
    n = read arg :: Int
    try algorithm = print (length (search algorithm (queens n)))
  sequence_ (map try [bt, bm, bjbt, bjbt', fc])
-----------------------------
-- Figure 1. CSPs in Haskell.
......
TOP = ../..
include $(TOP)/mk/boilerplate.mk
FAST_OPTS = 7
# NORM_OPTS should probably be 8 or 9
NORM_OPTS = 10
SLOW_OPTS = 11
FAST_OPTS = 6
NORM_OPTS = 7
SLOW_OPTS = 8
ifeq "$(HEAP)" "LARGE"
SRC_RUNTEST_OPTS += +RTS -H330m -RTS
PROG_ARGS += +RTS -H330m -RTS
endif
ifeq "$(HEAP)" "OLD"
SRC_RUNTEST_OPTS += +RTS -H10m -RTS
PROG_ARGS += +RTS -H10m -RTS
endif
include $(TOP)/mk/target.mk