Widen acceptance threshold for low-stakes microbenchmarks
Take T10421a, for example: a regression test designed to check for an exponential blowup. A few of our MRs have been held up by ~1% regressions on it, yet it only allocates about 90 MB at present.
Incidentally, I now notice that T10421a is actually not allocating that much overall, hence the visibility of this small change. Frankly, I wonder whether these tiny testcases are really worth the hassle: measuring changes in compiler allocations when the compiler is allocating less than 100 MB of heap in total seems quite questionable. Perhaps tracking them is worthwhile, but the acceptance threshold should be pretty wide.
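Concretely, one could widen the window where the test is declared. A sketch, assuming T10421a is registered in its `all.T` with `collect_compiler_stats`, whose second argument is the allowed percentage deviation (the setup function and options shown here are illustrative, not the test's actual declaration):

```python
# all.T (illustrative): widen the acceptance window to 10%, since a ~1%
# swing on a ~90 MB allocation total is mostly noise.
test('T10421a',
     [collect_compiler_stats('bytes allocated', 10)],
     compile,
     [''])
```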
T10421a is probably not the only test like this, however; we should do a systematic review to find the others.
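A first pass at such a review could be mechanical: take each perf test's baseline allocation figure and flag those below some cutoff as candidates for a wider window. A minimal sketch, with made-up figures (a real pass would read the baselines from the testsuite's recorded performance metrics):

```python
# Flag perf tests whose total allocation is small enough that tight
# percentage thresholds mostly measure noise.
CUTOFF_BYTES = 100 * 1024 * 1024  # 100 MB

def small_allocation_tests(baselines, cutoff=CUTOFF_BYTES):
    """Return test names whose baseline allocation is below the cutoff."""
    return sorted(name for name, alloc in baselines.items() if alloc < cutoff)

# Hypothetical baselines for illustration only.
baseline = {
    'T10421a': 90 * 1024 * 1024,      # ~90 MB, as observed above
    'TBig':    2_000 * 1024 * 1024,   # a comfortably large test
}

print(small_allocation_tests(baseline))  # → ['T10421a']
```

Anything the pass flags would then be reviewed by hand before its threshold is widened.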