- Nov 09, 2019
Ben Gamari authored
This utilizes GitLab CI's `parallel` [1] field to divide the build into several jobs, each handling a subset of the built packages. The job count is a bit of a trade-off between build re-use and parallelism; I'm trying 5 for now.

[1] https://docs.gitlab.com/ee/ci/yaml/#parallel
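As a sketch, the relevant bit of `.gitlab-ci.yml` might look like the following. The job name and the `run-ci` script with its `--index`/`--total` flags are hypothetical; `CI_NODE_INDEX` and `CI_NODE_TOTAL` are the real variables GitLab exposes to each parallel job:

```yaml
build-packages:
  parallel: 5   # split the package set across 5 jobs
  script:
    # GitLab sets CI_NODE_INDEX (1-based) and CI_NODE_TOTAL for each
    # parallel job; the driver can use these to pick its subset of packages
    - ./run-ci --index "$CI_NODE_INDEX" --total "$CI_NODE_TOTAL"
```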
- Nov 08, 2019
Ben Gamari authored
Previously we constrained the CI install plans to use only patched versions of any packages for which we had patches. This was an attempt to decouple ourselves from the evolution of Hackage a bit: specifically, I was worried that if we didn't constrain the install plan to the patched versions then we would end up having spurious build failures every time a new minor package version is released (since cabal-install would prefer the new release over our patched version). Admittedly this is just a hack to work around the fact that version bounds are generally quite loose, but it seemed like a reasonable trade-off at the time.

However, this ended up making it hard to support older major versions of a package simultaneously with the newest version if the latter doesn't require a patch. Let's try doing away with the constraints.
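For context, the kind of constraint block being dropped would have looked roughly like this in the generated `cabal.project` (package names and versions here are purely illustrative, not the actual pinned set):

```
-- hypothetical fragment of the generated cabal.project
constraints:
  aeson ==1.4.5.0,      -- pin to the patched version
  primitive ==0.7.0.0   -- ditto
```

With the constraints gone, the solver is free to pick a newer unpatched release when one satisfies the dependency bounds.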
- Nov 07, 2019
Ben Gamari authored
- Nov 04, 2019
Ben Gamari authored
Ben Gamari authored
Ben Gamari authored
Ben Gamari authored
Ben Gamari authored
This adds a `--expect-broken` option to the CI driver executable along with a simple shell script containing the list of packages which we expect to fail for a given GHC version.
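A minimal sketch of what such a list script might look like (the function name, the package names, and the version pattern are hypothetical, not the repository's actual contents):

```shell
# expect_broken: print the packages expected to fail for a given GHC version.
# Purely illustrative; the real list lives alongside the CI driver.
expect_broken() {
  case "$1" in
    8.9.*)
      # hypothetical examples of packages known to break on this GHC
      printf '%s\n' vector singletons
      ;;
    *)
      : # nothing expected to break by default
      ;;
  esac
}

expect_broken 8.9.1
```

The CI driver can then treat a failure of a listed package as expected rather than as an error.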
- Oct 31, 2019
Ben Gamari authored
- Oct 18, 2019
Ben Gamari authored
Ben Gamari authored
The Problem
-----------

When I started looking at the problem of providing CI for `head.hackage` I considered two possible designs:

1. Build upon `cabal-install`
2. Build upon Nix's Haskell infrastructure

While I preferred (1), I found that integrating with `cabal-install` was quite difficult:

 * it [does not produce logs](https://github.com/haskell/cabal/issues/5901) for local packages, which was the obvious way to incorporate patched packages into the build plan
 * it is difficult to reconstruct why a package build failed (e.g. due to a planning failure, a dependency failing to build, or an error in the package itself)

For these reasons it so happened that (2) ended up being a tad easier to implement. However, it suffers from a number of problems:

 * Nix's Haskell infrastructure doesn't handle multiple versions of a single package at all, yet we now have patches for multiple package versions in `head.hackage`
 * Nix's Haskell infrastructure doesn't handle flags, which can complicate building some packages
 * the Nix expressions ended up being rather difficult to maintain

The Solution
------------

This MR moves the CI infrastructure back in the direction of (1), facilitated by workarounds that I found for the two issues described above. The infrastructure revolves around the `head-hackage-ci` executable, which provides a `test-patches` mode that `gitlab-ci.yml` invokes thusly:

```
head-hackage-ci test-patches --patches=./patches --with-compiler=$GHC
```

This mode does several things:

1. Build a local package repository (using the same script used to build `https://ghc.gitlab.haskell.org/head.hackage/`). N.B. by pulling patched packages from a proper repository instead of using local packages we side-step the fact that `cabal-install` doesn't produce logs for local packages.
2. Generate a `cabal.project` file containing:
    * a `remote-repository` stanza referring to this repository
    * constraints to ensure that we only choose patched package versions
    * some additional `package` stanzas to ensure that `Cabal` can find native library dependencies (these are defined in `ci/build-deps.nix`)
3. Run `cabal new-update` (as well as perform a dummy build of the `acme-box` package to ensure that the package index cache is built; otherwise parallel builds can randomly fail).
4. For each patched package, create a new working directory containing:
    * the previously generated `cabal.project` file
    * a `test-$PKGNAME.cabal` file defining a dummy package depending upon the library

   and perform the build.
5. Use some heuristics, depending upon:
    * the `plan.json` file
    * which log files exist
    * the contents of said log files

   to sort out what happened.
6. After all the packages have been built, produce a final report of the results.

While this is admittedly pretty hacky, in truth it's no worse than the somersaults which we had to perform in the Nix infrastructure. Reliably introspecting on failed builds seems to be messy business no matter which build system you use.
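The dummy package from step 4 can be sketched like so, using `aeson` as a stand-in for the patched package under test (the exact fields the driver emits may differ):

```
-- hypothetical test-aeson.cabal generated by the driver
cabal-version: 2.4
name:          test-aeson
version:       0

library
  build-depends:    base, aeson
  default-language: Haskell2010
```

Building this package forces `cabal-install` to solve for and build the patched library via the `remote-repository`, producing per-dependency logs that the heuristics in step 5 can inspect.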