1. 01 Jun, 2020 1 commit
  2. 15 Feb, 2020 1 commit
  3. 13 Jan, 2020 1 commit
  4. 30 Dec, 2019 1 commit
    • Ben Gamari's avatar
      perf_notes: Add --zero-y argument · 7fad387d
      Ben Gamari authored
      This makes it easier to see the true magnitude of fluctuations.
      Also do some house-keeping in the argument parsing department.
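A boolean flag like --zero-y is typically wired up via argparse; a minimal illustrative sketch (only the option name comes from the commit, the surrounding parser is hypothetical, not the actual perf_notes.py code):

```python
import argparse

def make_parser() -> argparse.ArgumentParser:
    # Illustrative parser; the real perf_notes.py has many more options.
    parser = argparse.ArgumentParser(description="Compare performance metrics")
    parser.add_argument("--zero-y", action="store_true",
                        help="Start the y axis at zero so fluctuations are "
                             "shown at their true magnitude")
    return parser

# argparse maps the flag --zero-y to the attribute zero_y
args = make_parser().parse_args(["--zero-y"])
print(args.zero_y)
```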
  5. 12 Dec, 2019 1 commit
    • Ben Gamari's avatar
      testsuite: Simplify and clarify performance test baseline search · e6e1ec08
      Ben Gamari authored
      The previous implementation was extremely complicated, seemingly to
      allow the local and CI namespaces to be searched incrementally. However,
      it's quite unclear why this is needed and moreover the implementation
      seems to have had quadratic runtime cost in the search depth(!).
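The simplified search can be pictured as a single linear walk over ancestor commits, stopping at the first one with a recorded metric. This is a hedged sketch with hypothetical inputs (the real driver recovers metrics from git notes):

```python
from typing import Dict, List, Optional

def find_baseline(ancestors: List[str],
                  recorded: Dict[str, float],
                  max_depth: int = 100) -> Optional[float]:
    """Walk ancestors newest-first and return the first recorded metric.

    `ancestors` stands in for the ancestor list of HEAD and `recorded`
    for metrics recovered from git notes; both are illustrative.
    """
    for commit in ancestors[:max_depth]:  # O(depth), not O(depth^2)
        if commit in recorded:
            return recorded[commit]
    return None  # no baseline within the search window

print(find_baseline(["c3", "b2", "a1"], {"b2": 165890392.0}))
```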
  6. 05 Dec, 2019 2 commits
  7. 28 Nov, 2019 1 commit
  8. 22 Oct, 2019 1 commit
  9. 27 Jun, 2019 1 commit
  10. 26 Jun, 2019 1 commit
  11. 25 Jun, 2019 1 commit
    • Ben Gamari's avatar
      testsuite: A major revamp of the driver · c346585b
      Ben Gamari authored
      This tries to put the testsuite driver into a slightly more maintainable
      condition:
      * Add type annotations where easily done
      * Use pathlib.Path instead of str paths
      * Make it pass the mypy typechecker
  12. 04 Jun, 2019 1 commit
  13. 20 May, 2019 1 commit
  14. 05 Mar, 2019 1 commit
  15. 01 Mar, 2019 1 commit
  16. 21 Feb, 2019 1 commit
  17. 16 Feb, 2019 1 commit
  18. 01 Feb, 2019 1 commit
  19. 30 Jan, 2019 3 commits
  20. 11 Dec, 2018 1 commit
  21. 01 Dec, 2018 1 commit
  22. 30 Nov, 2018 1 commit
  23. 07 Nov, 2018 1 commit
    • davide's avatar
      testsuite: Save performance metrics in git notes. · 932cd41d
      davide authored
      This patch makes the following improvements:
        - Automatically records test metrics (per test environment) so that
          the programmer need not supply nor update expected values in *.T
          files.
          - On expected metric changes, the programmer need only indicate the
            direction of change in the git commit message.
        - Provides a simple python tool "perf_notes.py" to compare metrics
          over time.
        - Using just the previous commit allows performance to drift with each
          commit.
          - Currently we allow drift as we have a preference for minimizing
            false positives.
          - Some possible alternatives include:
            - Use metrics from a fixed commit per test: the last commit that
              allowed a change in performance (else the oldest metric)
            - Or use some sort of aggregate since the last commit that allowed
              a change in performance (else all available metrics)
            - These alternatives may result in a performance issue (with the
              test driver) having to heavily search git commits/notes.
        - Run locally, performance tests will trivially pass unless the tests
          were also run locally on the previous commit, which is often not the
          case, e.g. after pulling recent changes.
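The drift concern noted above is easy to quantify: if every commit is allowed to regress by the full per-commit tolerance against its immediate predecessor, the tolerated regression compounds geometrically. A back-of-the-envelope sketch (not driver code):

```python
def max_drift(tolerance_pct: float, n_commits: int) -> float:
    """Worst-case multiplicative drift after n commits, each one
    regressing by the full per-commit tolerance."""
    return (1 + tolerance_pct / 100) ** n_commits

# With a 5% tolerance, 20 consecutive borderline regressions can more
# than double a metric without any single commit failing the test.
print(round(max_drift(5, 20), 2))  # → 2.65
```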
      Previously, *.T files contained statements such as:
      stats_num_field('peak_megabytes_allocated', (2, 1))
      compiler_stats_num_field('bytes allocated',
                               [(wordsize(64), 165890392, 10)])
      This required the programmer to give the expected values and a tolerance
      deviation (percentage). With this patch, the above statements are
      replaced with:
      collect_stats('peak_megabytes_allocated', 5)
      collect_compiler_stats('bytes allocated', 10)
      The programmer need only specify which metrics to test and a tolerance
      deviation; no expected value is required. CircleCI will then run the
      tests per test environment and record the metrics to a git note for that
      commit and push them to the git.haskell.org ghc repo. Metrics will be
      compared to the previous commit. If they are different by the tolerance
      deviation from the *.T file, then the corresponding test will fail. By
      adding to the git commit message e.g.
       # Metric (In|De)crease <metric(s)> <options>: <tests>
      Metric Increase ['bytes allocated', 'peak_megabytes_allocated'] \
               (test_env='linux_x86', way='default'):
          Test012, Test345
      Metric Decrease 'bytes allocated':
      Metric Increase:
          Test711
      This will allow the noted changes (letting the test pass). Note that by
      omitting metrics or options, the change will apply to all possible
      metrics/options (i.e. in the above, an increase for all metrics in all
      test environments is allowed for Test711)
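A directive header of that shape can be split into its parts with a small sketch like the following (a hypothetical parser, not the driver's actual implementation; the grammar is taken from the template above):

```python
import re
from typing import Optional, Tuple

def parse_metric_directive(line: str) -> Optional[Tuple[str, str, str]]:
    """Split 'Metric (In|De)crease <metric(s)> <options>:' into its parts.

    Returns (direction, metrics, options); an empty string means
    'all metrics' / 'all options', matching the semantics described above.
    """
    m = re.match(r"Metric\s+(Increase|Decrease)\s*([^(:]*)(\([^)]*\))?\s*:",
                 line.strip())
    if m is None:
        return None  # not a metric directive
    direction, metrics, options = m.groups()
    return direction, metrics.strip(), (options or "").strip("()")

print(parse_metric_directive("Metric Increase:"))
print(parse_metric_directive("Metric Decrease 'bytes allocated':"))
```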
      Phabricator will use the message in the description.
      Reviewers: bgamari, hvr
      Reviewed By: bgamari
      Subscribers: rwbarton, carter
      GHC Trac Issues: #12758
      Differential Revision: https://phabricator.haskell.org/D5059