[Haskell] Monadification as a refactoring [was: Mixing monadic
and non-monadic functions]
frederik at a5.repetae.net
Sat Sep 10 19:25:23 EDT 2005
On Sat, Sep 10, 2005 at 12:55:15AM +0100, Claus Reinke wrote:
> life is funny, isn't it? so many people so eagerly
lazily, in my case
> discussing conversion between non-monadic and monadic code,
I'm trying to discuss a new syntax, not code transformations. I agree
that the two are related. I'm interested in the latter, but I don't
understand it very well. I think of refactoring as an operation that
takes source code to source code, i.e. unlike most operations done on
source code, refactoring produces output which is meant to be edited
by humans. Is this correct? But if it is, doesn't it mean that one
would like refactorizations to have some (admittedly ill-defined)
"reversibility" property: a refactorization should have an inverse
which commutes with simple subsequent edits?
For instance, if I (a) rename a variable, and then (b) introduce a new
reference to the renamed variable somewhere, I can later decide to
change the name back, reverting (a), without losing the work I did in
the meantime in (b). I can do this by applying another rename
operation, which will also affect the new reference.
Or, if I (a) take a bit of code and remove it to a separate function,
and then (b) modify the body of that function, I can later decide to
inline the function back into the one place which calls it, thus
reverting (a), without losing the modification done in (b).
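To make that concrete, here is a small Haskell sketch (the function
names and numbers are my own invention) of extract-then-inline
commuting with an intermediate edit:

```haskell
-- Step (a): extract the cap logic from 'price' into 'discount'.
price :: Int -> Int
price q = total - discount total
  where total = q * 10

-- Step (b): later, edit the extracted function (cap at 20):
discount :: Int -> Int
discount total = min total 20

-- Inlining 'discount' reverts (a) while preserving the (b) edit:
price' :: Int -> Int
price' q = total - min total 20
  where total = q * 10
```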
Yet, I don't see how the "monadification" operations you propose could
have this property. They are certainly code transformations! But they
seem irreversible - once I (a) apply your transformations and (b) edit
the output, I can't revert (a) without losing the work done in (b).
Changes to the code become tightly coupled, and the design becomes
less flexible.
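A minimal sketch of what I mean (the names are my own choosing):
monadifying a pure function is mechanical, but once the monadic
version has been edited by hand, no equally mechanical step recovers
the pure original:

```haskell
-- the pure original:
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)

-- (a) a monadification into Maybe; (b) a later hand-edit adds the
-- empty-list check, which has no counterpart in the pure version,
-- so reverting (a) mechanically would discard it:
averageM :: [Double] -> Maybe Double
averageM [] = Nothing
averageM xs = Just (average xs)
```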
> yet when we asked for your opinions and suggestions on this very
> topic only a short while ago, we got a total of 4 (four) replies -
> all quite useful, mind you, so we were grateful, but still one
> wonders.. we might have assumed that not many people cared after all.
It might have been more useful to ask for survey replies to be sent to
the list. Often the various opinions of a large number of people can
be compressed to a few representative positions. But if respondents
can't see what opinions have been expressed so far, then this
time-saving compression becomes impossible. That is just my opinion.
> shall I assume that all participants in this discussion have joined
> the Haskell parade since then, and have proceeded rapidly to the
> problems of monadic lifting?-) in which case I'd invite you to have
> a look at that survey and the papers mentioned.
I should do that, yes!
It's just that I was a bit late, having misplaced my trumpet.
> > > I thought the easy answer would be to inject non-monadic values into the
> > > monad (assuming one already rejiggered things to do automatic lifting).
> I'd phrase it slightly differently: what (I think) one wants are implicit coercions
> between monadic and non-monadic types of expressions, where the coercions
> lift non-monadic values into the monad in question, while embedding monadic
> computations in the current monad to get a non-monadic result if only that is
> needed (although one might think of the latter as partially lifting the operation
> that needs the non-monadic result).
> only I wouldn't want those implicit coercions to be introduced unless
> programmers explicitly ask for that (one usually only converts code from
> non-monadic to monadic once, and while the details of that step might
> be tiresome and in need of tool-support, the step itself should be explicit
> - see my comment on (2) below).
> > Note that in (a), "pure" values are never used where monads are asked
> > for, only the other way around.
> that is probably where some would beg to differ - if you lift operations,
> why not lift values as well?
Oh, one should do both, I was just giving a case where value-lifting
didn't happen, as a counterexample to Aaron's viewpoint.
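Both liftings are a one-liner each in today's Haskell; for instance
(the function names are mine):

```haskell
import Control.Monad (liftM2)

-- lifting a value into the monad:
liftVal :: Monad m => a -> m a
liftVal = return

-- lifting a binary operation over monadic arguments:
addM :: Monad m => m Int -> m Int -> m Int
addM = liftM2 (+)

-- so a mixed expression such as  3 + mx  could elaborate to:
mixed :: Monad m => m Int -> m Int
mixed mx = addM (liftVal 3) mx
```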
> > I think that supporting syntax (a) for semantics (b) should be a
> > feature because: (1) it is (usually) obvious what (a) means; (2) it
> > eliminates the single-use variable 'v' - single-use variables like
> > this occur a lot in monadic Haskell code, and I think they make it
> > harder to read and write; (3) it would support the math-like syntax
> > that I presented in my original message.
> (1) "(usually) obvious" is tech-speak for "(perhaps) possible to
> figure out, though probably not uniquely determined"?-)
> when mathematicians abuse notation in the "obvious" way,
> there is usually an assumed context in which the intended
> abuses are clearly defined (if not, there is another context
> in which the "obvious" things will go unexpectedly awry).
> (2) the nice thing about Haskell is that it *distinguishes* between
> monadic and non-monadic computations, and between evaluation
> and execution of monadic computations. if you want everything
> mixed into one soup, ML might be your language of choice
> (http://portal.acm.org/citation.cfm?id=178047 , if I recall correctly?
> see the paper discussed in
> for one application that demonstrates the power/danger of such
> implicit monads).
> (3) using math-like syntax for lifted expressions is common practice
> in some families of Haskell-DSELs, e.g. Conal Elliott's Fran. As John
> pointed out, the predefined class-hierarchy is not really helpful
> for such endeavours, but if one isn't picky, one may ignore classes
> not used.. the "trick" is to lift even constants, so when you get
> to applications, all components are already lifted, and lifting most
> arithmetic works out fine (Boolean operations are another matter).
> note, however, that the resulting language, while looking
> mathematically pure and permitting concise expression of complex
> circumstances, may not have the reasoning properties you expect..
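The "lift even constants" trick Claus describes can be sketched in a
few lines, using plain functions as a stand-in for Fran's Behavior
type (this is an illustration, not Fran's actual code):

```haskell
-- Every Num method is lifted pointwise; crucially, fromInteger lifts
-- numeric literals, so constants need no explicit lifting at all.
instance Num b => Num (a -> b) where
  fromInteger = const . fromInteger
  f + g       = \t -> f t + g t
  f - g       = \t -> f t - g t
  f * g       = \t -> f t * g t
  negate f    = negate . f
  abs f       = abs . f
  signum f    = signum . f

-- math-like syntax over "time-varying" values:
wave :: Double -> Double
wave = 3 * id + 1          -- i.e. \t -> 3*t + 1
```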
At this point I think we have to look at more examples. I'm not
convinced that my position is right, but I'm not convinced that it is
wrong either. I just think it's promising, based on my
experience. If I were a better person, and had more free time, I would
work on producing examples and working things out myself, and perhaps
write a paper or something.
Of course, experienced people are disagreeing with me, so maybe I
should just accept their good judgment! In any case, I'm afraid I
don't have much more to contribute, beyond the idea itself.
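For the record, the kind of contrast I have in mind for point (2),
written here with names of my own choosing:

```haskell
import Data.Char (toUpper)
import Control.Monad (liftM)

-- today's style, with the single-use variable 'v':
shoutDo :: Monad m => m String -> m String
shoutDo act = do v <- act
                 return (map toUpper v)

-- the same computation with 'v' eliminated by lifting:
shoutLifted :: Monad m => m String -> m String
shoutLifted = liftM (map toUpper)
```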
> > It might be hard to modify the type checker to get it to work, but I
> > think it is possible, and I see no reason not to be as general as
> > possible.
> here I'd agree, although in contrast to you, I'd be talking about a
> complex refactoring under programmer control, not about an implicitly
> invoked collection of coercions. I played with that idea after Martin
> Erwig visited our refactoring project in March, and got to a prototype
> type-coercion inference system for a very simple functional language,
> because I found the situation with various existing and, as Erwig/Ren
> pointed out, apparently unrelated monadification algorithms confusing.
> apart from the various styles of monadification, which we'd like
> to permit, and have the programmer select, e.g., by type annotations,
> there is the slight problem that there are an unbounded number of
> different monadifications (more if one wants to keep annotations to
> a minimum), so one needs a sensible bound (one that does not
> exclude any of the alternatives one might want). one also might
> want to be able to choose between the alternatives (or tune the
> system so that taking the first choice works out ok most of the
> time). oh, and it shouldn't be too inefficient, and it is really a pain
> to re-implement a type-system just to add a few coercion rules to
> it (which is why I haven't extended my mini fpl to Haskell yet..).
Well, I've said a little about why I don't like irreversible
refactoring. If you want a reference, I've been reading "Notes on the
Synthesis of Form" by Christopher Alexander, which I think has helped
me understand the process of design in more abstract terms.
I think the source code of a program should be as close to its initial
specification as possible. The goal should be to make it as easy to
read and modify as it was to write. However, if the design is spread
out over write/test/debug cycles, as is often the case in interpreted
languages such as Perl; or refactor cycles, as you seem to be
proposing; then a lot of the decisions and rationales which determine
that design will not be visible in the final version of the code -
rather, they will be stored in the succession of modifications which
have been applied to create this final version. But these
modifications are, as it were, difficult to go back and modify. The
more a program is produced through a process of accretion or
evolution, the more tightly coupled various aspects of its design will
be, and the more difficult it will be to change any one of them.
Even if most of the design decisions are good ones, eventually the
number of unfixable bad decisions will grow until continued
development of a particular component becomes untenable. This might
happen at a function level or a module level or a program level - but
in any case I think the development style in question should be
avoided where possible, and made avoidable where feasible.