Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (2)

@@ -141,6 +141,7 @@ data BuildConfig
     , tablesNextToCode :: Bool
     , threadSanitiser :: Bool
     , noSplitSections :: Bool
+    , testsuiteUsePerf :: Bool
     }

 -- Extra arguments to pass to ./configure due to the BuildConfig
@@ -188,6 +189,7 @@ vanilla = BuildConfig
     , tablesNextToCode = True
     , threadSanitiser = False
     , noSplitSections = False
+    , testsuiteUsePerf = False
     }

 splitSectionsBroken :: BuildConfig -> BuildConfig
@@ -663,6 +665,7 @@ job arch opsys buildConfig = NamedJob { name = jobName, jobInfo = Job {..} }
         Emulator s -> "CROSS_EMULATOR" =: s
         NoEmulatorNeeded -> mempty
     , if withNuma buildConfig then "ENABLE_NUMA" =: "1" else mempty
+    , if testsuiteUsePerf buildConfig then "RUNTEST_ARGS" =: "--config perf_path=perf" else mempty
     ]

 jobArtifacts = Artifacts
...
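
Note on the CI hunk above: setting RUNTEST_ARGS to `--config perf_path=perf` asks the testsuite driver to fill in the new TestConfig.perf_path field introduced further down in this diff. A minimal sketch of the presumed effect on the driver's config object (attribute name taken from the diff):

    # Presumed effect of passing '--config perf_path=perf' to the test driver:
    config.perf_path = 'perf'   # enables runCmdPerf to wrap compilations in `perf stat`
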
@@ -123,10 +123,37 @@ AllowedPerfChange = NamedTuple('AllowedPerfChange',
                                ('opts', Dict[str, str])
                                ])

-MetricBaselineOracle = Callable[[WayName, GitHash], Baseline]
-MetricDeviationOracle = Callable[[WayName, GitHash], Optional[float]]
-MetricOracles = NamedTuple("MetricOracles", [("baseline", MetricBaselineOracle),
-                                             ("deviation", MetricDeviationOracle)])
+class MetricAcceptanceWindow:
+    """
+    A strategy for computing an acceptance window for a metric measurement
+    given a baseline value.
+    """
+    def get_bounds(self, baseline: float) -> Tuple[float, float]:
+        raise NotImplementedError
+
+    def describe(self) -> str:
+        raise NotImplementedError
+
+class AlwaysAccept(MetricAcceptanceWindow):
+    def get_bounds(self, baseline: float) -> Tuple[float, float]:
+        return (float('-inf'), float('inf'))
+
+    def describe(self) -> str:
+        raise NotImplementedError
+
+class RelativeMetricAcceptanceWindow(MetricAcceptanceWindow):
+    """
+    A MetricAcceptanceWindow which accepts measurements within tol-percent of
+    the baseline.
+    """
+    def __init__(self, tol: float):
+        """ Accept any metric within tol-percent of the baseline """
+        self.tol = tol / 100
+
+    def get_bounds(self, baseline: float) -> Tuple[float, float]:
+        return (baseline * (1-self.tol), baseline * (1+self.tol))
+
+    def describe(self) -> str:
+        return '+/- %1.1f%%' % (100*self.tol)

 def parse_perf_stat(stat_str: str) -> PerfStat:
     field_vals = stat_str.strip('\t').split('\t')
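
To illustrate the two acceptance windows introduced above (a sketch using only the definitions from this hunk; the numbers are made up):

    rel = RelativeMetricAcceptanceWindow(10)   # tolerance is given in percent
    lo, hi = rel.get_bounds(1000.0)            # approximately (900.0, 1100.0)
    print(rel.describe())                      # '+/- 10.0%'

    always = AlwaysAccept()
    print(always.get_bounds(1000.0))           # (-inf, inf): no value is ever rejected
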
@@ -558,32 +585,38 @@ def get_commit_metric(gitNoteRef,
     _commit_metric_cache[cacheKeyA] = baseline_by_cache_key_b
     return baseline_by_cache_key_b.get(cacheKeyB)

-# Check test stats. This prints the results for the user.
-# actual: the PerfStat with actual value.
-# baseline: the expected Baseline value (this should generally be derived from baseline_metric())
-# tolerance_dev: allowed deviation of the actual value from the expected value.
-# allowed_perf_changes: allowed changes in stats. This is a dictionary as returned by get_allowed_perf_changes().
-# force_print: Print stats even if the test stat was in the tolerance range.
-# Returns a (MetricChange, pass/fail object) tuple. Passes if the stats are within the expected value ranges.
 def check_stats_change(actual: PerfStat,
                        baseline: Baseline,
-                       tolerance_dev,
+                       acceptance_window: MetricAcceptanceWindow,
                        allowed_perf_changes: Dict[TestName, List[AllowedPerfChange]] = {},
                        force_print = False
                        ) -> Tuple[MetricChange, Any]:
+    """
+    Check test stats. This prints the results for the user.
+
+    Parameters:
+    actual: the PerfStat with actual value
+    baseline: the expected Baseline value (this should generally be derived
+        from baseline_metric())
+    acceptance_window: allowed deviation of the actual value from the expected
+        value.
+    allowed_perf_changes: allowed changes in stats. This is a dictionary as
+        returned by get_allowed_perf_changes().
+    force_print: Print stats even if the test stat was in the tolerance range.
+
+    Returns a (MetricChange, pass/fail object) tuple. Passes if the stats are
+    within the expected value ranges.
+    """
     expected_val = baseline.perfStat.value
     full_name = actual.test + ' (' + actual.way + ')'

-    lowerBound = trunc( int(expected_val) * ((100 - float(tolerance_dev))/100))
-    upperBound = trunc(0.5 + ceil(int(expected_val) * ((100 + float(tolerance_dev))/100)))
-    actual_dev = round(((float(actual.value) * 100)/ int(expected_val)) - 100, 1)
+    lower_bound, upper_bound = acceptance_window.get_bounds(expected_val)
+    actual_dev = round(((float(actual.value) * 100)/ expected_val) - 100, 1)

     # Find the direction of change.
     change = MetricChange.NoChange
-    if actual.value < lowerBound:
+    if actual.value < lower_bound:
         change = MetricChange.Decrease
-    elif actual.value > upperBound:
+    elif actual.value > upper_bound:
         change = MetricChange.Increase

     # Is the change allowed?
@@ -608,14 +641,14 @@ def check_stats_change(actual: PerfStat,
         result = failBecause('stat ' + error, tag='stat')

     if not change_allowed or force_print:
-        length = max(len(str(x)) for x in [expected_val, lowerBound, upperBound, actual.value])
+        length = max(len(str(x)) for x in [expected_val, lower_bound, upper_bound, actual.value])

         def display(descr, val, extra):
             print(descr, str(val).rjust(length), extra)

-        display(' Expected    ' + full_name + ' ' + actual.metric + ':', expected_val, '+/-' + str(tolerance_dev) + '%')
-        display(' Lower bound ' + full_name + ' ' + actual.metric + ':', lowerBound, '')
-        display(' Upper bound ' + full_name + ' ' + actual.metric + ':', upperBound, '')
+        display(' Expected    ' + full_name + ' ' + actual.metric + ':', expected_val, acceptance_window.describe())
+        display(' Lower bound ' + full_name + ' ' + actual.metric + ':', lower_bound, '')
+        display(' Upper bound ' + full_name + ' ' + actual.metric + ':', upper_bound, '')
         display(' Actual      ' + full_name + ' ' + actual.metric + ':', actual.value, '')
         if actual.value != expected_val:
             display(' Deviation   ' + full_name + ' ' + actual.metric + ':', actual_dev, '%')
...
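
To make the classification above concrete (a sketch mirroring the logic of check_stats_change, with made-up numbers): with a baseline of 1000 and a 2% window, the bounds are roughly (980, 1020), so a measurement of 1100 is classified as an Increase with a deviation of +10.0%:

    expected_val = 1000.0
    lower_bound, upper_bound = RelativeMetricAcceptanceWindow(2).get_bounds(expected_val)
    actual_value = 1100
    actual_dev = round(((float(actual_value) * 100) / expected_val) - 100, 1)  # 10.0
    change = ('Decrease' if actual_value < lower_bound else
              'Increase' if actual_value > upper_bound else
              'NoChange')                                                      # 'Increase'
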
@@ -4,7 +4,7 @@
 from my_typing import *

 from pathlib import Path
-from perf_notes import MetricChange, PerfStat, Baseline, MetricOracles, GitRef
+from perf_notes import MetricChange, PerfStat, Baseline, MetricAcceptanceWindow, GitRef
 from datetime import datetime

 # -----------------------------------------------------------------------------
@@ -49,6 +49,9 @@ class TestConfig:
         # Path to Ghostscript
         self.gs = None # type: Optional[Path]

+        # Path to Linux `perf` tool
+        self.perf_path = None # type: Optional[Path]
+
         # Run tests requiring Haddock
         self.haddock = False
@@ -377,7 +380,7 @@ class TestOptions:
        #           , 10) }
        # This means no baseline is available for way1. For way 2, allow a 10%
        # deviation from 9300000000.
-       self.stats_range_fields = {} # type: Dict[MetricName, MetricOracles]
+       self.stats_range_fields = {} # type: Dict[MetricName, MetricAcceptanceWindow]

        # Is the test testing performance?
        self.is_stats_test = False
@@ -449,6 +452,9 @@ class TestOptions:
        # The extra hadrian dependencies we need for this particular test
        self.hadrian_deps = set(["test:ghc"]) # type: Set[str]

+       # Record these `perf-events` counters when compiling this test, if `perf` is available
+       self.compiler_perf_counters = [] # type: List[str]
+
 # The default set of options
 global default_testopts
 default_testopts = TestOptions()
...
@@ -3,10 +3,12 @@
 # (c) Simon Marlow 2002
 #

+import csv
 import io
 import shutil
 import os
 import re
+import tempfile
 import traceback
 import time
 import datetime
@@ -28,7 +30,7 @@ from term_color import Color, colored
 import testutil
 from cpu_features import have_cpu_feature
 import perf_notes as Perf
-from perf_notes import MetricChange, PerfStat, MetricOracles
+from perf_notes import MetricChange, PerfStat, MetricAcceptanceWindow
 extra_src_files = {'T4198': ['exitminus1.c']} # TODO: See #12223

 from my_typing import *
@@ -477,6 +479,17 @@ def _run_timeout_multiplier( name, opts, v ):

 # -----

+def collect_compiler_perf_counters( counters: List[str] ):
+    """
+    Record the given event counters using `perf stat` when available.
+    """
+    def f(name, opts):
+        opts.compiler_perf_counters += counters
+    return f
+
+# -----
+
 def extra_run_opts( val ):
     return lambda name, opts, v=val: _extra_run_opts(name, opts, v);
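
A test opts in to counter collection via its .T file; the T1969 hunk at the end of this diff does exactly that. A trimmed sketch (the trailing compile arguments here are illustrative, not part of this diff):

    test('T1969',
         [collect_compiler_residency(20),
          collect_compiler_perf_counters(['instructions'])],
         compile, [''])
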
@@ -522,10 +535,10 @@ def _extra_files(name, opts, files):
 # are about the performance of the runtime code generated by the compiler.
 def collect_compiler_stats(metric='all',deviation=20):
     setTestOpts(no_lint)
-    return lambda name, opts, m=metric, d=deviation: _collect_stats(name, opts, m,d, True)
+    return lambda name, opts: _collect_rts_stats(name, opts, metric, deviation, True)

 def collect_stats(metric='all', deviation=20):
-    return lambda name, opts, m=metric, d=deviation: _collect_stats(name, opts, m, d)
+    return lambda name, opts: _collect_rts_stats(name, opts, metric, deviation, False)

 # This is an internal function that is used only in the implementation.
 # 'is_compiler_stats_test' is somewhat of an unfortunate name.
@@ -533,7 +546,11 @@ def collect_stats(metric='all', deviation=20):
 # measures the performance numbers of the compiler.
 # As this is a fairly rare case in the testsuite, it defaults to false to
 # indicate that it is a 'normal' performance test.
-def _collect_stats(name: TestName, opts, metrics, deviation, is_compiler_stats_test=False):
+def _collect_rts_stats(
+        name: TestName,
+        opts, metrics: List[MetricName],
+        deviation: float,
+        is_compiler_stats_test: bool):
     if not re.match('^[0-9]*[a-zA-Z][a-zA-Z0-9._-]*$', name):
         failBecause('This test has an invalid name.')
@@ -573,8 +590,7 @@ def _collect_stats(name: TestName, opts, metrics, deviation, is_compiler_stats_test=False):
                 target_commit, name, config.test_env, metric, way, \
                 config.baseline_commit )

-        opts.stats_range_fields[metric] = MetricOracles(baseline=baselineByWay,
-                                                        deviation=deviation)
+        opts.stats_range_fields[metric] = Perf.RelativeMetricAcceptanceWindow(deviation)

 # -----
@@ -1458,7 +1474,7 @@ def do_compile(name: TestName,
         return result
     extra_hc_opts = result.hc_opts

-    result = simple_build(name, way, extra_hc_opts, should_fail, top_mod, units, False, True, **kwargs)
+    result = simple_build(name, way, extra_hc_opts, should_fail, top_mod, units, False, True, compiler_perf_counters = getTestOpts().compiler_perf_counters, **kwargs)
     if badResult(result):
         return result
@@ -1580,7 +1596,7 @@ def compile_and_run__(name: TestName,
     if way.startswith('ghci'): # interpreted...
         return interpreter_run(name, way, extra_hc_opts, top_mod)
     else: # compiled...
-        result = simple_build(name, way, extra_hc_opts, False, top_mod, [], True, True, backpack = backpack)
+        result = simple_build(name, way, extra_hc_opts, False, top_mod, [], True, True, backpack = backpack, compiler_perf_counters = getTestOpts().compiler_perf_counters)
         if badResult(result):
             return result
@@ -1615,6 +1631,49 @@ def metric_dict(name, way, metric, value) -> PerfStat:
                    value = value)

 # -----------------------------------------------------------------------------

+def check_stat(
+        name: TestName,
+        way: WayName,
+        metric: MetricName,
+        acceptance_window: MetricAcceptanceWindow,
+        value: float) -> PassFail:
+    if not Perf.inside_git_repo():
+        return passed()
+
+    head_commit = Perf.commit_hash(GitRef('HEAD')) if Perf.inside_git_repo() else None
+    if head_commit is None:
+        return passed()
+
+    # Store the metric so it can later be stored in a git note.
+    perf_stat = metric_dict(name, way, metric, value)
+
+    # Find the baseline; if this is the first time running the benchmark, then pass.
+    baseline = Perf.baseline_metric(head_commit, name, config.test_env, metric, way, config.baseline_commit)
+    if baseline is None:
+        metric_result = passed()
+        perf_change = MetricChange.NewMetric
+    else:
+        (perf_change, metric_result) = Perf.check_stats_change(
+            perf_stat,
+            baseline,
+            acceptance_window,
+            config.allowed_perf_changes,
+            config.verbose >= 4)
+
+    t.metrics.append(PerfMetric(change=perf_change, stat=perf_stat, baseline=baseline))
+
+    # If any metric fails then the test fails.
+    # Note, the remaining metrics are still run so that
+    # a complete list of changes can be presented to the user.
+    if not metric_result.passed:
+        if config.ignore_perf_increases and perf_change == MetricChange.Increase:
+            metric_result = passed()
+        elif config.ignore_perf_decreases and perf_change == MetricChange.Decrease:
+            metric_result = passed()
+
+    return metric_result
+
 # Check test stats. This prints the results for the user.
 # name: name of the test.
 # way: the way.
@@ -1622,14 +1681,11 @@ def metric_dict(name, way, metric, value) -> PerfStat:
 # range_fields: see TestOptions.stats_range_fields
 # Returns a pass/fail object. Passes if the stats are within the expected value ranges.
 # This prints the results for the user.
-def check_stats(name: TestName,
+def check_rts_stats(name: TestName,
                 way: WayName,
                 stats_file: Path,
-                range_fields: Dict[MetricName, MetricOracles]
+                range_fields: Dict[MetricName, MetricAcceptanceWindow]
                 ) -> PassFail:
-    head_commit = Perf.commit_hash(GitRef('HEAD')) if Perf.inside_git_repo() else None
-    if head_commit is None:
-        return passed()

     result = passed()
     if range_fields:
@@ -1638,7 +1694,7 @@ def check_rts_stats(name: TestName,
         except IOError as e:
             return failBecause(str(e))

-        for (metric, baseline_and_dev) in range_fields.items():
+        for (metric, acceptance_window) in range_fields.items():
             # Remove any metric prefix e.g. "runtime/" and "compile_time/"
             stat_file_metric = metric.split("/")[-1]
             perf_change = None
@@ -1651,46 +1707,27 @@ def check_rts_stats(name: TestName,
             val = field_match.group(1)
             assert val is not None
             actual_val = int(val)

-            # Store the metric so it can later be stored in a git note.
-            perf_stat = metric_dict(name, way, metric, actual_val)
-
-            # If this is the first time running the benchmark, then pass.
-            baseline = baseline_and_dev.baseline(way, head_commit) \
-                if Perf.inside_git_repo() else None
-            if baseline is None:
-                metric_result = passed()
-                perf_change = MetricChange.NewMetric
-            else:
-                tolerance_dev = baseline_and_dev.deviation
-                (perf_change, metric_result) = Perf.check_stats_change(
-                    perf_stat,
-                    baseline,
-                    tolerance_dev,
-                    config.allowed_perf_changes,
-                    config.verbose >= 4)
-
-            t.metrics.append(PerfMetric(change=perf_change, stat=perf_stat, baseline=baseline))
-
-            # If any metric fails then the test fails.
-            # Note, the remaining metrics are still run so that
-            # a complete list of changes can be presented to the user.
-            if not metric_result.passed:
-                if config.ignore_perf_increases and perf_change == MetricChange.Increase:
-                    metric_result = passed()
-                elif config.ignore_perf_decreases and perf_change == MetricChange.Decrease:
-                    metric_result = passed()
-
-            result = metric_result
+            r = check_stat(name, way, metric, acceptance_window, actual_val)
+            if badResult(r):
+                result = r

     return result

 # -----------------------------------------------------------------------------
 # Build a single-module program

-def extras_build( way, extra_mods, extra_hc_opts ):
+def extras_build(
+        way: WayName,
+        extra_mods: List[str],
+        extra_hc_opts: str
+) -> PassFail:
     for mod, opts in extra_mods:
-        result = simple_build(mod, way, opts + ' ' + extra_hc_opts, False, None, [], False, False)
+        result = simple_build(mod, way, opts + ' ' + extra_hc_opts,
+                              should_fail = False,
+                              top_mod = None,
+                              units = [],
+                              link = False,
+                              addsuf = False)
         if not (mod.endswith('.hs') or mod.endswith('.lhs')):
             extra_hc_opts += ' %s' % Path(mod).with_suffix('.o')
         if badResult(result):
@@ -1708,7 +1745,8 @@ def simple_build(name: Union[TestName, str],
                  addsuf: bool,
                  backpack: bool = False,
                  suppress_stdout: bool = False,
-                 filter_with: str = '') -> Any:
+                 filter_with: str = '',
+                 compiler_perf_counters: List[str] = []) -> Any:
     opts = getTestOpts()

     # Redirect stdout and stderr to the same file
@@ -1763,14 +1801,19 @@ def simple_build(name: Union[TestName, str],

     flags = ' '.join(get_compiler_flags() + config.way_flags[way])

-    cmd = ('cd "{opts.testdir}" && {cmd_prefix} '
+    cmd = ('{cmd_prefix} '
            '{{compiler}} {to_do} {srcname} {flags} {extra_hc_opts}'
            ).format(**locals())

     if filter_with != '':
         cmd = cmd + ' | ' + filter_with

-    exit_code = runCmd(cmd, None, stdout, stderr, opts.compile_timeout_multiplier)
+    (exit_code, perf_counts) = runCmdPerf(
+        compiler_perf_counters,
+        cmd,
+        stdin=None, stdout=stdout, stderr=stderr,
+        working_dir=opts.testdir,
+        timeout_multiplier=opts.compile_timeout_multiplier)

     actual_stderr_path = in_testdir(name, 'comp.stderr')
@@ -1791,10 +1834,15 @@ def simple_build(name: Union[TestName, str],
         return failBecause('exit code non-0', stderr=stderr_contents)

     if isCompilerStatsTest():
-        statsResult = check_stats(TestName(name), way, in_testdir(stats_file), opts.stats_range_fields)
+        statsResult = check_rts_stats(TestName(name), way, in_testdir(stats_file), opts.stats_range_fields)
         if badResult(statsResult):
             return statsResult

+    for k,v in perf_counts.items():
+        r = check_stat(TestName(name), way, MetricName('compile_time/perf/%s' % k), Perf.AlwaysAccept(), v)
+        if badResult(r):
+            return r
+
     return passed()

 # -----------------------------------------------------------------------------
@@ -1841,10 +1889,10 @@ def simple_run(name: TestName, way: WayName, prog: str, extra_run_opts: str) ->
     if opts.cmd_wrapper is not None:
         cmd = opts.cmd_wrapper(cmd)

-    cmd = 'cd "{opts.testdir}" && {cmd}'.format(**locals())
-
     # run the command
-    exit_code = runCmd(cmd, stdin_arg, stdout_arg, stderr_arg, opts.run_timeout_multiplier)
+    exit_code = runCmd(cmd, stdin_arg, stdout_arg, stderr_arg,
+                       timeout_multiplier=opts.run_timeout_multiplier,
+                       working_dir=opts.testdir)

     # check the exit code
     if exit_code != opts.exit_code:
@@ -1875,7 +1923,7 @@ def simple_run(name: TestName, way: WayName, prog: str, extra_run_opts: str) ->
     # Check runtime stats if desired.
     if stats_file is not None:
-        return check_stats(name, way, in_testdir(stats_file), opts.stats_range_fields)
+        return check_rts_stats(name, way, in_testdir(stats_file), opts.stats_range_fields)
     else:
         return passed()
@@ -1935,9 +1983,9 @@ def interpreter_run(name: TestName,
     if opts.cmd_wrapper is not None:
         cmd = opts.cmd_wrapper(cmd);

-    cmd = 'cd "{opts.testdir}" && {cmd}'.format(**locals())
-
-    exit_code = runCmd(cmd, script, stdout, stderr, opts.run_timeout_multiplier)
+    exit_code = runCmd(cmd, script, stdout, stderr,
+                       timeout_multiplier=opts.run_timeout_multiplier,
+                       working_dir=opts.testdir)

     # split the stdout into compilation/program output
     split_file(stdout, delimiter,
@@ -2095,9 +2143,9 @@ def check_hp_ok(name: TestName) -> bool:
     opts = getTestOpts()

     # do not qualify for hp2ps because we should be in the right directory
-    hp2psCmd = 'cd "{opts.testdir}" && {{hp2ps}} {name}'.format(**locals())
-
-    hp2psResult = runCmd(hp2psCmd, print_output=True)
+    hp2psCmd = '{{hp2ps}} {name}'.format(**locals())
+
+    hp2psResult = runCmd(hp2psCmd, print_output=True, working_dir=opts.testdir)

     actual_ps_path = in_testdir(name, 'ps')
@@ -2532,12 +2580,45 @@ def dump_file(f: Path):
     except Exception:
         print('')

+def runCmdPerf(
+        perf_counters: List[str],
+        cmd: str,
+        **kwargs) -> Tuple[int, Dict[str,float]]:
+    """
+    Run a command under `perf stat`, collecting the given counters.
+
+    Returns the exit code and a dictionary of the collected counter values.
+    """
+    FIELDS = ['value','unit','event','runtime','percent']
+    if len(perf_counters) == 0 or config.perf_path is None:
+        return (runCmd(cmd, **kwargs), {})
+
+    with tempfile.NamedTemporaryFile('rt') as perf_out:
+        args = [config.perf_path, 'stat', '-x,', '-o', perf_out.name, '-e', ','.join(perf_counters), cmd]
+        exit_code = runCmd(' '.join(args), **kwargs)
+
+        perf_out.readline() # drop initial comment line
+        perf_metrics = {}
+        for line in perf_out:
+            line = line.strip()
+            if line == '' or line.startswith('#'):
+                continue
+            fields = { k: v for k,v in zip(FIELDS, line.split(',')) }
+            perf_metrics[fields['event']] = float(fields['value'])
+
+    return (exit_code, perf_metrics)
+
 def runCmd(cmd: str,
            stdin: Union[None, Path]=None,
            stdout: Union[None, Path]=None,
            stderr: Union[None, int, Path]=None,
+           working_dir: Optional[Path]=None,
            timeout_multiplier=1.0,
-           print_output=False) -> int:
+           print_output=False,
+           ) -> int:
+    """
+    Run a command enforcing a timeout and returning the exit code.
+    """
     timeout_prog = strip_quotes(config.timeout_prog)
     timeout = str(int(ceil(config.timeout * timeout_multiplier)))
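
For reference, `perf stat -x,` emits one CSV record per event, in the field order assumed by FIELDS above. A sketch of the parsing with a hypothetical output line:

    FIELDS = ['value','unit','event','runtime','percent']
    line = '8513021456,,instructions,2150000000,100.00'   # hypothetical perf output
    fields = { k: v for k,v in zip(FIELDS, line.split(',')) }
    print(fields['event'], float(fields['value']))        # instructions 8513021456.0
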
@@ -2563,7 +2644,8 @@ def runCmd(cmd: str,
                              stdin=stdin_file,
                              stdout=subprocess.PIPE,
                              stderr=hStdErr,
-                             env=ghc_env)
+                             env=ghc_env,
+                             cwd=working_dir)

         stdout_buffer, stderr_buffer = r.communicate()
     finally:
...
@@ -5,6 +5,7 @@ setTestOpts(no_lint)
 test('T1969',
      [# expect_broken(12437),
       collect_compiler_residency(20),
+      collect_compiler_perf_counters(['instructions']),
       extra_run_opts('+RTS -A64k -RTS'),
       # The default RESIDENCY_OPTS is 256k and we need higher sampling
       # frequency. Incurs a slow-down by about 2.
...
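
Putting the pieces together: with this registration, each compilation of T1969 runs under `perf stat` and the instruction count is recorded as a new metric. Since simple_build pairs these counters with Perf.AlwaysAccept(), the value is tracked in git notes but can never fail the test. A sketch of the resulting bookkeeping (metric name taken from the simple_build hunk above; the value is hypothetical):

    perf_counts = {'instructions': 8513021456.0}           # hypothetical runCmdPerf result
    for k, v in perf_counts.items():
        metric = MetricName('compile_time/perf/%s' % k)    # 'compile_time/perf/instructions'
        check_stat(TestName('T1969'), way, metric, Perf.AlwaysAccept(), v)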