Commit 5dc6bb98 authored by Michal Terepeta's avatar Michal Terepeta

Remove private directory (ticket #6)

It contained reviews from ICFP and POPL, which are not really useful
(at least not now) and probably shouldn't have been made public in the
first place.
parent 96b439a5
We agree with the reviewers on almost all points. However, we draw
a more positive conclusion from these points than the reviewers imply:
* We agree with reviewer #1 that our paper is about solid
engineering, a subject that is often valued at ICFP.
As engineers, we take reviewer #2's comment 'appears
straightforward' as a high compliment. A simple design can be
surprisingly difficult to achieve; it is often obvious only in
retrospect. We believe that the interfaces described in the paper
are substantially simpler than the interfaces we described in
2005, and unlike the interfaces in the 2005 paper, they are
reusable.
* The impression that our library is conceived just to serve GHC
reflects a failure of our presentation, which has misled the reviewer. We
specifically made our library *polymorphic* so that it can be used
with *any* intermediate language, provided only that the language
distinguishes control transfers and provides a means to follow
control-flow edges. We hope soon to use the library with
representations of machine instructions.
* We hope Reviewer #3 may reconsider whether our library is fit for
general purposes. We make no claim to theoretical novelty and no
claim to enable analyses that cannot be implemented using other
methods. Our claim is that using our library, it is much *easier*
to implement dataflow analyses and transformations than it is
using other compiler infrastructures (e.g., SUIF or vpo, to name
two with which we are familiar). In substantiating this claim, we
chose examples of analyses and optimizations that are already
known, so that readers can compare our code with code written for
their favorite compilers.
To be sure the examples are right, we chose *only* examples that
have been implemented and tested in GHC. This choice may have
influenced the reviewer's impression that the library is not fit
for general purposes.
There are many faults in the paper. If the paper is accepted,
1. We will make sure all missing definitions and explanations are
accounted for, so that readers can understand the code easily.
2. We will rewrite Section 7 completely. (We have already done so,
but since it would be unfair to ask the reviewers to base a
decision on work done since the deadline, we say no more here.)
3. We will provide a suitable scholarly account of the relationship
with Schmidt's work and with abstract interpretation more generally.
To reviewer #1: if register pressure could be approximated by,
e.g., the number of live variables, then it would be a
straightforward matter to write a dataflow pass that removes
redundant reloads when the number of live variables is small.
It would require a forward pass to mark the redundant reloads and a
backward pass to remove them.
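
For concreteness, here is a minimal, self-contained sketch of that idea
over straight-line code (all names are hypothetical; this is not the
paper's or GHC's code, and a real pass would run over the control-flow
graph through the library). A backward pass computes live variables, a
forward pass marks a reload as redundant when the variable is already in
a register and pressure is low, and a final sweep removes the marked
reloads.

module ReloadSketch where

import qualified Data.Set as Set

type Var = String

data Instr
  = Assign Var [Var]   -- x := f(vars read)
  | Call               -- a call clobbers every register
  | Spill  Var         -- store x to its stack slot
  | Reload Var         -- load x from its stack slot
  deriving (Show, Eq)

uses :: Instr -> Set.Set Var
uses (Assign _ rs) = Set.fromList rs
uses Call          = Set.empty
uses (Spill v)     = Set.singleton v
uses (Reload _)    = Set.empty

defs :: Instr -> Set.Set Var
defs (Assign x _) = Set.singleton x
defs Call         = Set.empty
defs (Spill _)    = Set.empty
defs (Reload v)   = Set.singleton v

-- Backward pass: the set of variables live before each instruction.
liveIn :: [Instr] -> [Set.Set Var]
liveIn = init . scanr step Set.empty
  where step i liveOut = (liveOut `Set.difference` defs i) `Set.union` uses i

-- Forward pass: mark a reload as redundant when the variable is already
-- in a register and fewer than 'threshold' variables are live.
markRedundant :: Int -> [Instr] -> [(Instr, Bool)]
markRedundant threshold instrs = go Set.empty (zip instrs (liveIn instrs))
  where
    go _ [] = []
    go loaded ((i, live) : rest) = case i of
      Reload v
        | v `Set.member` loaded && Set.size live < threshold ->
            (i, True)  : go loaded rest                       -- redundant
        | otherwise ->
            (i, False) : go (Set.insert v loaded) rest
      Assign x _ -> (i, False) : go (Set.insert x loaded) rest
      Call       -> (i, False) : go Set.empty rest            -- clobbered
      Spill _    -> (i, False) : go loaded rest

-- Final sweep: drop the marked reloads.
removeRedundantReloads :: Int -> [Instr] -> [Instr]
removeRedundantReloads threshold =
  map fst . filter (not . snd) . markRedundant threshold
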
Dear Prof. Norman Ramsey:
Final versions of the reviews for your submission
Dataflow Optimization Made Simple
are at the bottom of this mail. I hope you will find them
useful.
Please note that in some cases these reviews have been
updated since the author response period. Also, additional
reviews may have been received after the response period
ended; if this was the case, the committee took note of the
fact that you did not have the opportunity to respond to
them.
Again, if you have any additional questions, please feel free
to contact me.
Sincerely,
Andrew Tolmach
PC Chair
============================================================================
ICFP 2009 Reviews for Submission #8
============================================================================
Title: Dataflow Optimization Made Simple
Authors: Norman Ramsey, Joao Dias and Simon Peyton Jones
============================================================================
REVIEWER #1
============================================================================
---------------------------------------------------------------------------
Comments
---------------------------------------------------------------------------
The authors present "a Haskell library that makes it easy for compiler writers
to implement program transformations based on dataflow analyses".
Although the main goal of the paper is interesting, I think that the authors'
proposal is not as general as claimed in the abstract (and as would be
expected of a general Haskell library).
From a theoretical point of view, it does not introduce anything really new
(the theoretical basis comes from a POPL'02 paper by Lerner, Grove, and
Chambers).
From a practical point of view, I find the following drawbacks:
1.- Although a general-purpose library is claimed to be given,
the development focuses on a few (already known) program analyses
and optimizations, which are defined and described for C--,
a particular compiler-target intermediate language
which (as far as I know) is not widely used.
Actually, at the end of the abstract it is said that
this library is the "workhorse of a new backend for GHC".
2.- Many types, functions, and data structures are not defined in the paper.
For instance, the types VarSet or Middle (together with some data constructors
like MidAssign, CmmLocal, CmmLoad, etc., and defined functions like aTx, noTx,
etc.) in Figures 3 to 7 seem to be part of the types used
in the current implementation of GHC (this is explicit for Middle, as one can
read in the penultimate paragraph of the second column on page 5) or in a full
description of the library which is not available anywhere (at least I found
no link to any additional information about them). This makes the code in
Figures 3 to 7 more difficult to read and understand and, again, more difficult
to generalize to compilers of programming languages other than C--.
3.- It is unclear to me how the optimization of the target code is achieved
by using the proposed functions. The focus is on the dataflow graph but
nothing is said about how this graph is obtained from the unoptimized code
and how an optimized target code is obtained from the transformed/rewritten
graph.
Overall, the paper is very well written. I also find the ideas
sketched in the Conclusions interesting, and have no doubt that FP can export
many ideas and techniques for optimizing compilers of more 'traditional'
programming languages. However, I failed to see how the proposed
library helps to achieve this since, in my opinion, it
seems to be conceived just to serve the GHC compiler.
============================================================================
REVIEWER #2
============================================================================
---------------------------------------------------------------------------
Comments
---------------------------------------------------------------------------
This paper presents a Haskell library for implementing general
dataflow analyses and optimizations, illustrates it through examples,
and describes its implementation.
The introductory part, sections 1 - 6, contains a nice introduction
to dataflow optimization and a convincing argument for the ease of
developing a variety of dataflow optimizers using the presented
library. However, the presented library appears to be a
straightforward port of Ramsey and Dias's dataflow infrastructure,
which uses an applicative representation of control-flow graphs based
on Huet's zipper. The data representation and the structure of the
optimization engine appear to be quite similar, and it is unclear
what the new technical contributions are. In addition, the description
of the dataflow engine in Section 7 is not very clear due to a lack
of proper explanation. For example, readers would have some difficulty
in understanding the importance and relevance of data types such
as "Block", "ZTail" and "ZHead" unless they were already familiar with
the underlying zipper-style representation of control-flow graphs and
the corresponding definitions (i.e., "block", "head" and "tail") given
in Ramsey and Dias's original article.
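
For readers unfamiliar with the zipper representation the review refers
to, a rough sketch (illustrative names only, not the paper's actual
definitions):

-- A block has a first node, zero or more middle nodes, and a last node;
-- the zipper splits a block at a point of focus.
data Block f m l = Block f [m] l

data ZHead f m = ZFirst f              -- focus sits just after the first node
               | ZHead (ZHead f m) m   -- ... or just after a later middle node

data ZTail m l = ZLast l               -- only the last node remains
               | ZTail m (ZTail m l)   -- a middle node, then the rest

-- A focused block: everything before the focus and everything after it.
data ZBlock f m l = ZBlock (ZHead f m) (ZTail m l)

-- Move the focus forward past one middle node, if any remain.
advance :: ZBlock f m l -> Maybe (ZBlock f m l)
advance (ZBlock h (ZTail m t)) = Just (ZBlock (ZHead h m) t)
advance (ZBlock _ (ZLast _))   = Nothing
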
I understand that republishing a workshop paper at ICFP can sometimes
be appropriate, especially when the workshop has limited visibility,
but this may not be such a case. The audience of the ML workshop
significantly overlaps with that of ICFP, and the proceedings of ML
2005 were published in Elsevier's Electronic Notes in Theoretical
Computer Science, which is a widely available, archival publication.
As such, I doubt that this paper contains enough original contribution
to be appropriate for publication at ICFP 2009.
============================================================================
REVIEWER #3
============================================================================
---------------------------------------------------------------------------
Comments
---------------------------------------------------------------------------
This paper describes the design and Haskell implementation of a generic library
for dataflow analysis and code transformations. Examples of classic dataflow
analyses expressed using the library are shown and appear remarkably compact.
A distinguishing feature of this library is that it supports combined analysis
and transformation, in which the result of the analysis anticipates the effect
of the code transformation. This follows the approach proposed by Lerner et al.
in 2002. It is pointed out that this enables combining several
analysis/transformation passes in one, but no example is given. The only
example given where analyze&transform gives better results than
analyze-then-transform is dead code elimination by liveness analysis, but as
noted in footnote 1 it was already known how to achieve the same result using a
standard analyze-only dataflow solver.
80% of the paper describes the library and its use. The remaining 20% reads
more like a lecture on dataflow analysis, presented as inference of (certain
classes of) logical assertions. I wasn't completely convinced by this presentation,
which I found less penetrating than D. Schmidt's famous "Data flow analysis is
model checking of abstract interpretations". More generally, it is surprising
that abstract interpretation is mentioned nowhere in this paper.
All in all, this is a well-written paper describing a very solid piece of
compiler engineering, but I didn't see a breakthrough.
Minor comments:
The example in section 4 is a bit mysterious. Once the set of variables live
across a function call has been computed (by a standard liveness analysis), it
is a simple syntactic transformation to insert a spill after each definition
(of one of those variables) and a reload before each use; no further dataflow
analyses are needed. Yes, this can give less precise results than the proposed
approach if there are multiple definitions of a variable, some live across a
call and some not (can this happen in GHC's intermediate code?). But, still,
the hard part about spilling and reloading is to eliminate multiple reloads in
areas of code where register pressure is low, e.g.
x = ...
spill(x)
...
reload(x)
y = ... x ...
// do not reload(x) again if register pressure is low
z = ... x ...
I didn't see whether / how the proposed approach could do this.
===========================================================================
POPL 2010 Review #28A
Updated Saturday 25 Jul 2009 8:03:25am PDT
---------------------------------------------------------------------------
Paper #28: Hoopl: Dataflow Optimization Made Simple
---------------------------------------------------------------------------
Overall merit: B. OK paper, but I will not champion it.
Reviewer expertise: Z. I am an informed outsider of the area.
===== Paper summary =====
The paper presents an approach to specifying and combining data flow analyses. The authors do program analysis by solving equations, they relate assertions via weakest liberal preconditions and strongest postconditions, and they combine analyses and transformations using the POPL 2002 paper by Lerner, Grove, and Chambers. The entire framework is written in a functional, nonimperative style that uses zippers and a dataflow monad, and is polymorphic in the underlying representations.
===== Comments for author =====
The paper reads more like a pearl than a research paper. The paper combines "everything we know" into an elegant system for program analysis and optimization.
The examples of analyses and transformations are admirably short, and the paper gives several examples of how they apply to programs.
The main difficulty the authors have overcome is designing the system in a way that puts together many ideas neatly and seamlessly.
Question: do you have experimental results that show that the quality of the produced code can compete with that of state-of-the-art compilers?
===========================================================================
POPL 2010 Review #28B
Updated Saturday 1 Aug 2009 9:46:50am PDT
---------------------------------------------------------------------------
Paper #28: Hoopl: Dataflow Optimization Made Simple
---------------------------------------------------------------------------
Overall merit: C. Weak paper, though I will not fight strongly against it.
Reviewer expertise: Y. I am knowledgeable in the area, though not an expert.
===== Paper summary =====
The paper presents the interface of a generic Haskell library for dataflow analysis combined with code transformations, in the style of Lerner, Grove, and Chambers (POPL 2002).
The approach is illustrated by very compact implementations of two classic analyses (available variables and liveness) and a less common (and harder to follow) analysis+transformation for the insertion of spill and reload instructions.
===== Comments for author =====
All in all, this is a very solid piece of compiler engineering, and the paper is well written. But there are essentially no new principles in this paper. The only really novel aspect of this work ("analyze and transform" instead of "analyze then transform") is taken from Lerner et al. The use of an applicative "zipper" to represent the CFG scores some points for originality, but it was already published by the first two authors in a workshop paper (ENTCS 148(2)).
The authors also claim as an achievement the simplicity of their API, but I'm not convinced: for dataflow analysis at least, simpler interfaces could be obtained by throwing away the distinction between "first", "middle" and "last" nodes and working on a CFG of individual instructions [1]. See for example the presentations of Kildall's dataflow equation solver by Klein and Nipkow [2] and by Coupet-Grimal and Delobel [3], both of which were also mechanically proved correct.
I was excited, at first, by the extended example on insertion of reload and spill instructions, because this is an issue that is not well treated in compiler textbooks. In the end, I was a bit disappointed: I had the feeling that the proposed approach doesn't work significantly better than the trivial approach of inserting a spill after every definition and a reload before every use for each variable that couldn't be allocated to a register. Isn't the proposed approach overengineered?
Minor remarks:
Page 3, col 2, "the analysis shows nothing about x, which we notate x = bottom". This explanation of bottom sounds wrong. Thinking in terms of abstract interpretation, k denotes the singleton set of values {k}, top the set of all values, and bot the empty set of values. Knowing x = bottom at the end of the analysis really means something very strong about x, namely that all its definitions are unreachable.
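
To make the reviewer's reading concrete, a minimal sketch of the
constant-propagation lattice under the abstract-interpretation view
(hypothetical names, not the paper's definitions):

-- A fact for one variable, read as a set of possible values: Bot is the
-- empty set (the point is unreachable), Const k is {k}, and Top is the
-- set of all values ("nothing known").
data ConstFact = Bot | Const Integer | Top
  deriving (Eq, Show)

-- Join (least upper bound), applied where control-flow paths meet.
joinFact :: ConstFact -> ConstFact -> ConstFact
joinFact Bot f = f
joinFact f Bot = f
joinFact (Const a) (Const b)
  | a == b    = Const a
  | otherwise = Top
joinFact _ _ = Top
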
References:
[1] Jens Knoop, Dirk Koschützki and Bernhard Steffen.
"Basic-Block Graphs: Living Dinosaurs?".
Proc. Compiler Construction '98, LNCS 1383, 1998.
[2] Gerwin Klein and Tobias Nipkow.
"Verified bytecode verifiers".
Theor. Comp. Sci. 298, 2003.
[3] Solange Coupet-Grimal and William Delobel.
"A Uniform and Certified Approach for Two Static Analyses".
Types for Proofs and Programs, International Workshop, TYPES 2004.
LNCS 3839, 2006.
===========================================================================
POPL 2010 Review #28C
Updated Friday 11 Sep 2009 6:15:44pm PDT
---------------------------------------------------------------------------
Paper #28: Hoopl: Dataflow Optimization Made Simple
---------------------------------------------------------------------------
Overall merit: C. Weak paper, though I will not fight strongly against it.
Reviewer expertise: Y. I am knowledgeable in the area, though not an expert.
===== Paper summary =====
The paper presents a data flow analysis and program transformation framework. The framework, Hoopl, is implemented as a Haskell library that compiler writers can use to implement optimizations. The paper presents examples of actual analyses and transformations in the context of the Glasgow Haskell compiler, and outlines the implementation of the dataflow engine, which is the main part of Hoopl.
===== Comments for author =====
It is hard to pinpoint exactly the technical contribution of this paper. On one hand, it appears to be a beautifully engineered implementation of a data flow analysis framework but there is little comparison with other similar frameworks and at this point little evidence that this is the "right" design with the right compromise between expressiveness and generality. The paper is also an improvement over Ramsey and Dias's work but the improvements are scattered here and there. Finally the paper, I feel, is hard to appreciate without some familiarity with the GHC backend. Some of the code has to be elided and some of the code presented uses the actual GHC datatypes (which is good in some sense but also adds some unneeded complexity to the examples).
A couple of specific comments/questions:
- can you explain in more detail the reasons for implementing the rewriting in two steps: first a speculative step and then a step that commits it. Is that because the intermediate results of the analysis are unsound and that soundness is only achieved when the analysis reaches a fixed point?
- can you provide more detail on how the optimization fuel is used for debugging
===========================================================================
POPL 2010 Review #28D
Updated Tuesday 15 Sep 2009 5:36:29am PDT
---------------------------------------------------------------------------
Paper #28: Hoopl: Dataflow Optimization Made Simple
---------------------------------------------------------------------------
Overall merit: B. OK paper, but I will not champion it.
Reviewer expertise: Y. I am knowledgeable in the area, though not an expert.
===== Paper summary =====
This paper describes Hoopl, a dataflow optimization tool. The paper
first analyzes general properties and principles underlying various
dataflow analyses for low-level code languages through examples, and
identifies the major components of a general dataflow optimizer: (1) a
dataflow fact lattice, (2) a transfer function that computes the
dataflow fact at a program point from the neighboring dataflow facts
(preceding or following, depending on the direction of the analysis),
and (3) a rewrite function that replaces a node in a control-flow graph
based on dataflow facts. Based on this analysis, the paper introduces
Hoopl as a generic dataflow optimizer through the type signatures of
Hoopl functions, and describes their functionality. Hoopl takes a
dataflow fact lattice (i.e., types and associated operations), a
transfer function, and a rewrite function, and performs the iterative
process of analyzing the graph using the transfer function and
transforming the graph using the rewrite function until it obtains the
least fixed point. The paper then describes some aspects of its
implementation, including its two-phase architecture consisting of a
speculative iterator and an actualizer, and describes the implemented
forward iterator and forward actualizer in some detail.
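
To make the three client-supplied components concrete, a rough sketch
of what a client might provide (hypothetical type names; not necessarily
Hoopl's actual interface):

-- A client supplies a lattice of facts, a transfer function, and a
-- rewrite function; a generic engine iterates analysis and rewriting
-- over the graph until it reaches the least fixed point.
data Lattice fact = Lattice
  { bottom :: fact                  -- least element of the lattice
  , join   :: fact -> fact -> fact  -- least upper bound
  }

type Transfer node fact = node -> fact -> fact         -- propagate a fact
type Rewrite  node fact = node -> fact -> Maybe [node] -- maybe replace a node

data FwdPass node fact = FwdPass
  { lattice  :: Lattice fact
  , transfer :: Transfer node fact
  , rewrite  :: Rewrite node fact
  }
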
===== Comments for author =====
From the presentation, it seems that Hoopl is an easy-to-use,
generic tool that automates dataflow optimization for low-level code
languages. It is well engineered so that compiler writers can readily
use it for implementing various optimizations in their optimizing
compilers. The paper is also very well written. Hoopl's description
through examples can serve as a nice tutorial on a unified view of
dataflow optimization.
However, I am not completely sure that this paper makes a significantly
new contribution to POPL 2010. Although it is a well-engineered tool,
Hoopl appears to be based on combinations of known results.
The overall structure of the representation and implementation is based
on some of the authors' earlier work on zipper-style control-flow graph
representation and optimization. There are some improvements in the
representations and interfaces: graphs are classified into "open" and
"closed" ones, and the interfaces of the graph splicing functions are
improved. The overall structure of interleaved analysis and
transformation is due to other existing work. Hoopl also incorporates a
debugging facility, which is based on existing work. The description
of its implementation is too sketchy to be useful in implementing new
optimization engines.
===========================================================================
POPL 2010 Review #28E
Updated Wednesday 16 Sep 2009 9:22:46am PDT
---------------------------------------------------------------------------
Paper #28: Hoopl: Dataflow Optimization Made Simple
---------------------------------------------------------------------------
Overall merit: C. Weak paper, though I will not fight strongly against it.
Reviewer expertise: X. I am an expert in the subject area of this paper.
===== Paper summary =====
This paper presents an analysis and transformation engine implemented
in Haskell. To use the engine, the programmer provides a description
of the lattice, transfer functions, and rewrite functions. The engine
then takes care of computing the dataflow fixed point and applying the
rewrites. The paper describes the interface to the engine, shows
examples of several client analyses and optimizations and describes an
implementation of the engine.
===== Comments for author =====
The interesting part of this paper is that it shows how to effectively
combine several previously known techniques/ideas into a single
engine. These techniques/ideas are: the fixed-point formulation of
dataflow analyses; the rewrite-rule formulation of transformation
rules; the composition technique of Lerner, Grove, and Chambers; and the
fuel-based abstraction of Whalley for quickly narrowing down the
source of compiler bugs.
However, it's hard to tease out what exactly the contribution
is. Dataflow analysis engines based on lattices, transfer functions,
and rewrite functions are very common (Weimer and Necula's CIL has
one, Hendren et al.'s Soot has one, Lattner and Adve's LLVM has one,
IBM's WALA engine has one). It would be interesting to better
understand how the proposed framework distinguishes itself from these
existing frameworks.
Presumably, one difference is that the proposed framework incorporates
additional techniques (e.g., the composition framework and the
fuel-based abstraction). However, these two techniques were previously
published, and they also seem quite orthogonal to each other (which
means the integration of the two techniques would not pose too many
additional challenges -- if it does, the paper should focus on this).
The paper does point out how Haskell helps with many of the
implementation tasks, and the use of Haskell is indeed a difference
from other frameworks. However, the paper doesn't really develop this
point, and it's also not clear how much of this type checking also
exists in other frameworks (e.g., CIL uses OCaml, so it may have some
nice static type-checking guarantees). The paper would be stronger if
it had a direct comparison (maybe a table?) of what kinds of
properties are statically checked using types in the proposed
framework vs. CIL, Soot, LLVM, and other frameworks.
The paper could also be improved by reporting on experience in using
the framework. For example: what was it used for? What are some
statistics about the framework (number of analyses implemented, how
many lines of code, bugs found using types, etc.)? How does experience
with the proposed framework compare with other frameworks such as LLVM
(e.g., for conciseness, ease of use, etc.)?
Finally, the paper doesn't seem to address interprocedural analyses
and optimizations (although that's understandable to some extent --
one has to nail down the intra-procedural case first, but it would be
nice to get an idea of how the authors see this framework panning out
in the interprocedural case)
===========================================================================
Author's Response by Norman Ramsey <nr@cs.tufts.edu>
Paper #28: Hoopl: Dataflow Optimization Made Simple
---------------------------------------------------------------------------
The referees' reports are clear, and we haven't identified any
significant misunderstandings. Several referees suggest that the
paper reads more like a pearl than a research contribution, and we are
happy to have it evaluated as such. Below we answer referees'
questions. (Having received such nice detailed reviews, we don't want
to leave referees' questions hanging unanswered, but it is probably
not necessary to read the answers below in order to make a decision
about the paper.)
Referee A asks if we have experimental results which show that the
quality of generated code can compete with state-of-the-art compilers.
Yes, we have experimental results with the Glasgow Haskell Compiler
which show that the new back end produces code at least as good as the
old back end. But although GHC's front end contains some very
sophisticated optimizations, by the time the code gets to the level
shown in the paper, the back-end optimizations are limited, and so
GHC's bar is actually set low.
Referee B, citing Knoop, Koschützki, and Steffen, points out that the
API might be simpler if we eliminated the static type distinction
between 'first', 'last', and 'middle' nodes. Ironically, we were very
inspired by the 'living dinosaur' paper and used it as the starting
point for our representation of control-flow graphs. But giving all
nodes the same type led to a great deal of run-time checking, and to
preserve our sanity we were forced to distinguish at compile time
between first, middle, and last nodes, which of course means that we
reinvented basic blocks. Perhaps one way to think about the design
issues here is that although the split into three static types makes
the API wider, client code is simpler because each of the three static
types of node obeys a stronger invariant (constraining the numbers of
predecessors or successors). In any case, we have experience with
both representations, and our experience is that the wider API leads
to a simpler compiler overall---although we don't know how to make
that case compellingly in a conference submission.
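
As an illustration of the static split described above (hypothetical
constructors, not GHC's actual datatypes): first nodes begin blocks,
last nodes end them, and only last nodes carry successors, so each kind
of node obeys its invariant by construction.

newtype Label = Label Int

data First  = LabelNode Label            -- begins a block; many predecessors
data Middle = Assign String String       -- straight-line code; one successor
data Last   = Branch Label               -- ends a block; zero or more successors
            | CondBranch String Label Label
            | Return

-- A basic block is forced by the types to have one entry and one exit.
data BasicBlock = BasicBlock First [Middle] Last

-- Only last nodes can have successors, so the check is static.
successors :: Last -> [Label]
successors (Branch l)         = [l]
successors (CondBranch _ t f) = [t, f]
successors Return             = []
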
Referee C asks why we rewrite in two steps. The referee has the
answer exactly: during the first step of the analysis, speculative
rewriting produces intermediate results which are not guaranteed to be
sound until a fixed point is reached.
Referee C asks for more detail on how the optimization fuel is used
for debugging. Suppose we are regression-testing the compiler and a
test fails. We re-run the same test with no fuel. If the test then
succeeds, the optimizer is at fault. We ask the compiler how much
fuel was used on the original run, and we use that as the starting
point for a binary search on the fuel supply. This binary search
identifies a single graph-node rewrite which transforms a working test
case into a failed test case. At this point there's no more
automation; the compiler writer takes over and diagnoses whether the
transformation is unjustified or the underlying analysis is faulty.
To summarize, optimization fuel is used to find, in logarithmically
many runs of the compiler, the transformation, analysis, node, and
rewrite that cause a fault. We should add that although this process
is completely automated in the 'Quick C--' compiler written by the
first two authors, it is not yet automated in the Glasgow Haskell
Compiler.
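
The search described above fits in a few lines. The sketch below assumes
a hypothetical oracle, compilesAndPasses, that rebuilds the test with a
given amount of fuel and reports whether it passes; it is not the code
in either compiler.

-- Binary search on the fuel supply. Precondition: the test passes with
-- 0 fuel and fails with 'fuelUsed' fuel (the fuel used on the original
-- run). The result is the smallest fuel amount at which the test fails,
-- i.e. it identifies the single rewrite that breaks the program.
findBadRewrite :: Monad m => (Int -> m Bool) -> Int -> m Int
findBadRewrite compilesAndPasses fuelUsed = go 0 fuelUsed
  where
    -- Invariant: the test passes with 'lo' fuel and fails with 'hi' fuel.
    go lo hi
      | lo + 1 == hi = return hi
      | otherwise    = do
          let mid = (lo + hi) `div` 2
          ok <- compilesAndPasses mid
          if ok then go mid hi else go lo mid
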
Referee E observes that CIL uses OCaml so it may have some nice static
type-checking guarantees. We wrote a predecessor of Hoopl in OCaml
and the static typing was not bad, but having the 'open' and 'closed'
graph properties checked statically is a significant upgrade---we
eliminated a number of dynamic checks, some of which had been sources
of bugs. It is possible that a creative encoding into OCaml would
make it possible to check the same properties statically using only
OCaml's Hindley-Milner system, but GHC's extension of generalized
algebraic data types makes it very easy to provide the extra static
checking.
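
For readers unfamiliar with the encoding, a sketch of what statically
checked 'open' and 'closed' shapes can look like with GADTs (the
constructors are illustrative, not Hoopl's actual node type):

{-# LANGUAGE GADTs, EmptyDataDecls #-}

-- Shape indices: O means open (control can fall through), C means closed.
data O
data C

newtype Label = Label Int

-- A node is indexed by its shape on entry (e) and exit (x): a label is
-- closed on entry and open on exit; a branch is open on entry and
-- closed on exit.
data Node e x where
  LabelN  :: Label -> Node C O
  AssignN :: String -> String -> Node O O
  BranchN :: Label -> Node O C

-- Graphs can only be spliced when the shapes line up, so "falling off
-- the end of a block" is a type error rather than a run-time check.
data Graph e x where
  Single :: Node e x -> Graph e x
  Cat    :: Graph e O -> Graph O x -> Graph e x
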
Referee E also suggests we should compare Hoopl with other engines for
dataflow analysis. We are all wearing our stupid hats and whacking
ourselves in the head for not thinking of this. If it should happen
that the paper is accepted, we'll do a proper job.