Commit 661aa07e authored Feb 22, 2016 by thomie
Testsuite: failing profiling tests (#10037)
These tests fail not only for WAY=prof, but also for WAY=profllvm.
parent 176be87c
Showing 1 changed file (testsuite/tests/profiling/should_run/all.T) with 8 additions and 5 deletions.
--- a/testsuite/tests/profiling/should_run/all.T
+++ b/testsuite/tests/profiling/should_run/all.T
@@ -19,6 +19,10 @@ setTestOpts(keep_prof_callstacks)
 extra_prof_ways = ['prof', 'prof_hc_hb', 'prof_hb', 'prof_hd', 'prof_hy', 'prof_hr']
 
+expect_broken_for_10037 = expect_broken_for(10037,
+  [w for w in prof_ways
+   if w not in opt_ways]) # e.g. prof and profllvm
+
 test('heapprof001',
      [when(have_profiling(), extra_ways(extra_prof_ways)),
       extra_run_opts('7')],
      compile_and_run, [''])
@@ -35,7 +39,7 @@ test('T3001-2', [only_ways(['prof_hb']), extra_ways(['prof_hb'])],
 # As with ioprof001, the unoptimised profile is different but
 # not badly wrong (CAF attribution is different).
-test('scc001', [expect_broken_for(10037, ['prof'])], compile_and_run,
+test('scc001', [expect_broken_for_10037], compile_and_run,
      ['-fno-state-hack -fno-full-laziness']) # Note [consistent stacks]
 test('scc002', [], compile_and_run, [''])
@@ -56,15 +60,14 @@ test('T5314', [extra_ways(extra_prof_ways)], compile_and_run, [''])
 test('T680', [], compile_and_run, ['-fno-full-laziness']) # Note [consistent stacks]
-test('T2552', [expect_broken_for(10037, opt_ways)], compile_and_run, [''])
+test('T2552', [expect_broken_for_10037], compile_and_run, [''])
 test('T949', [extra_ways(extra_prof_ways)], compile_and_run, [''])
 # The results for 'prof' are fine, but the ordering changes.
 # We care more about getting the optimised results right, so ignoring
 # this for now.
-test('ioprof', [expect_broken_for(10037, ['prof']),
-                exit_code(1)],
+test('ioprof', [expect_broken_for_10037, exit_code(1)],
                compile_and_run,
      ['-fno-full-laziness -fno-state-hack']) # Note [consistent stacks]
 # These two examples are from the User's Guide:
@@ -84,7 +87,7 @@ test('T5559', [], compile_and_run, [''])
 # -fno-state-hack
 # -fno-full-laziness
-test('callstack001', [expect_broken_for(10037, ['prof'])],
+test('callstack001', [expect_broken_for_10037],
      # unoptimised results are different w.r.t. CAF attribution
      compile_and_run,
      ['-fprof-auto-calls -fno-full-laziness -fno-state-hack'])
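The heart of the change is the shared `expect_broken_for_10037` annotation, which marks as broken every profiling way that is not also an optimised way (so both `prof` and `profllvm`, per the commit message). A minimal sketch of just the way-filtering comprehension, using hypothetical stand-in values for the testsuite driver's `prof_ways` and `opt_ways` globals (the real values come from the GHC testsuite configuration):

```python
# Hypothetical stand-ins for the driver-supplied globals; the real
# lists are built by the GHC testsuite driver at configuration time.
prof_ways = ['prof', 'profllvm', 'prof_hc_hb', 'prof_hb']
opt_ways = ['prof_hc_hb', 'prof_hb']

# The comprehension the commit factors into expect_broken_for_10037:
# every profiling way that is not an optimised way is expected broken.
broken_ways = [w for w in prof_ways if w not in opt_ways]
print(broken_ways)  # -> ['prof', 'profllvm']
```

Defining the filter once and reusing the resulting annotation across `scc001`, `T2552`, `ioprof`, and `callstack001` keeps the broken-way set consistent as the testsuite's list of ways evolves.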