[Haskell-cafe] Investing in languages (Was: What is your favourite Haskell "aha" moment?)

Joachim Durchholz jo at durchholz.org
Sun Jul 15 17:08:34 UTC 2018

On 15.07.2018 at 18:06, Paul wrote:
>   * But it is not lazy - one. Remember, laziness is our requirement
>     here. Whatever you propose _must _ work in a context of laziness.
> Does it mean that because Haskell is lazy (Clean is not), linear types are 
> impossible in Haskell?

Laziness and linear types are orthogonal.

> If they are possible, why do we need monads?

"Monadic" as a term is at the same level as "associative":
a very simple concept that shows up everywhere, and if you can arrange 
your computations in a monadic manner you get a certain level of 
sanity. And lo and behold, you can even write useful libraries based on 
nothing but the assumption that you're dealing with monadic structures; 
that's what monad transformers are (so monads are more interesting than 
associativity in that respect).

So monads are interesting and useful (read: important) regardless of 
whether you have laziness or linear types.
Again: orthogonal.
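As a minimal sketch of that reuse (the function names are mine, not from the thread): the same monadic code runs unchanged in any monad, which is exactly the kind of generality monad-based libraries exploit.

```haskell
import Control.Monad (foldM)

-- The same monadic fold works in any monad; here it runs in Maybe,
-- where a single failing element aborts the whole computation.
sumChecked :: Monad m => (Int -> m Int) -> [Int] -> m Int
sumChecked check = foldM (\acc x -> fmap (acc +) (check x)) 0

positive :: Int -> Maybe Int
positive x = if x > 0 then Just x else Nothing

main :: IO ()
main = do
  print (sumChecked positive [1, 2, 3])   -- Just 6
  print (sumChecked positive [1, -2, 3])  -- Nothing
```

The same `sumChecked` could just as well be run in the list monad or in IO; nothing in it mentions a particular effect.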

> Haskell “tracks” effects, obviously. But I showed an example with the 
> State monad already. As I see it, nobody understands that the State monad 
> does not solve the problem of spaghetti-code-style manipulation of global state.

Actually that's pretty well-known.
Not just for State but for anything that hides state out of plain sight, 
i.e. somewhere other than in function parameters: either some struct 
type, or a returned partially applied function that captures that data.
People get bitten by those things, and they learn to avoid these 
patterns except where it's safe, just as with spaghetti code, which 
people stopped writing years ago (nowadays it's more spaghetti 
data, but at least that's analyzable).
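A sketch of that hiding (using the transformers package that ships with GHC; the function names are illustrative): State threads its data behind the scenes, while the explicit version exposes the same data flow in its signature.

```haskell
import Control.Monad.Trans.State (State, get, put, runState)

-- State threads a counter invisibly: the type says `State Int`, but the
-- call sites don't show which actions read or write it.
labelHidden :: String -> State Int String
labelHidden s = do
  n <- get
  put (n + 1)
  pure (show n ++ ": " ++ s)

-- The explicit version makes the same data flow visible in its signature.
labelExplicit :: Int -> String -> (String, Int)
labelExplicit n s = (show n ++ ": " ++ s, n + 1)

main :: IO ()
main = print (runState (labelHidden "start") 0)  -- ("0: start",1)
```

Both compute the same thing; the difference is purely in how visible the state is to a reader of the call sites.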

So if you don't see anybody explicitly mentioning spaghetti issues with 
State, I think that's because for some people it's just hiding in plain 
sight: they either aren't consciously aware of it, or find that area so 
self-explanatory that they don't think it needs explaining.

Or you simply misunderstood what people are saying.

> But it was solved in OOP when all changes of state happen in *one
> place* under FSM control
Sorry, but that's not what OO is about.
Also, I do not think that you're using general FSMs, else you'd be 
having transition spaghetti.

> (with explicit rules of denied transitions: instead of change you
> have *a request to change*/a message, which can fail if transition is
> denied).
Which does nothing about keeping transitions under control.
Let me repeat: what you call a "message" is just a standard synchronous 
function call. The one difference is that the caller allows the target 
type to influence which function actually gets called, and while that's 
powerful, it's quite far from what people assume when you throw the 
"message" terminology around.
This conflation of terminology has been misleading people since the 
invention of Smalltalk. I wish people would finally stop using that 
terminology, and highlight those things where Smalltalk really deviates 
from other OO languages (#doesNotUnderstand, clean 
everything-is-an-object concepts, Metaclasses Done Right). This message 
send terminology is just a distraction.

> Haskell HAS mutable structures, side-effects and allows
> spaghetti-code.
Haskell functions can model these, even to the point that the Haskell 
code is still spaghetti.
But that's not the point. The point is that Haskell makes it easy to 
write non-spaghetti code.

BTW you make similar claims about FSMs. Ordinarily they are spaghetti 
incarnate, but you say they work quite beautifully if done right.
(I'm staying sceptical because your arguments in that direction didn't 
make sense to me, but that might be because I'm lacking background 
information, and filling in these gaps is really too far off-topic to be 
of interest.)

> But magical word
> “monad” allows to forget about problem and the real solution and to lie 
> that no such problem at whole (it automatically solved due to magical 
> safety of Haskell). Sure, you can do it in Haskell too, but Haskell does 
> not force you, but Smalltalk, for example, forces you.

WTF? You can do spaghetti in Smalltalk. Easily actually, there are 
plenty of antipatterns for that language.

> We often repeat this: “side-effects”, “tracks”, “safe”. But what does it 
> actually mean? Can I have side-effects in Haskell? Yes. Can I mix 
> side-effects? Yes. But in more difficult way than in ML or F#, for 
> example. What is the benefit?

That it is difficult to accidentally introduce side effects.
Or, rather, the problems of side effects. Formally, no Haskell program 
can have a side effect (unless it uses unsafePerformIO or the FFI, but 
that's not what we're talking about here).
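A sketch of what that means in the types (names are mine): effects live in IO, and the compiler rejects any attempt to smuggle one into code whose type promises purity.

```haskell
-- A pure function's type promises there is no effect to observe.
double :: Int -> Int
double x = x * 2

-- An effectful computation carries IO in its type; the compiler would
-- reject any attempt to use it where a plain Int is promised:
--   bad :: Int -> Int
--   bad x = x + readLn        -- type error: IO Int is not Int
askAndDouble :: IO Int
askAndDouble = fmap double readLn

main :: IO ()
main = print (double 21)  -- 42
```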

> Actually no any benefit,

Really. You *should* listen more. If the overwhelming majority of 
Haskell programmers who're using it in practice tell you that there are 
benefits, you should question your analysis, not their experience. You 
should ask rather than make bold statements that run contrary to 
practical experience.
That way, everybody is going to learn: You about your misjudgements, and 
(maybe) Haskell programmers about the limits of the approach.

The way you're approaching this is just going to give you an antibody 
reaction: Everybody is homing in on you, with the sole intent of 
neutralizing you. (Been there, done that, on both sides of the fence.)

> it’s easy
> understandable with simple experiment: if I have a big D program and I 
> remove all “pure” keywords, will it become automatically buggy? No. If I 
> stop to use “pure” totally, will it become buggy? No.

Sure. It will still be pure.

> If I add “print”
> for debug purpose in some subroutines, will they become buggy? No.

Yes they will. Some tests will fail if they expect specific output. If 
the program has a text-based user interface, it will become unusable.
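In Haskell this shows up in the types: adding a debug print to a pure function changes its type, so every caller is forced to notice (a sketch, my names):

```haskell
-- Pure version: the result is all there is.
step :: Int -> Int
step x = x + 1

-- With a debug print, the effect becomes part of the type; callers can
-- no longer treat this as a plain function and must run it in IO.
stepTraced :: Int -> IO Int
stepTraced x = do
  putStrLn ("step " ++ show x)
  pure (x + 1)

main :: IO ()
main = do
  print (step 1)       -- 2
  r <- stepTraced 1    -- also prints "step 1"
  print r              -- 2
```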

> If I
> mix read/write effects in my subroutine, will it make it buggy? No.

Yes, it will become buggy. You'll get aliasing issues, and these are 
the nastiest things to debug because they hit you if and only if the 
program is so large that you don't know all the data flows anymore, and 
your assumptions about what might be an alias start to fall down. Or not 
you, but maybe the new coworker who doesn't yet know all the parts of 
the codebase.
That's exactly why data flow is being pushed towards being explicit.
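Even in Haskell you can reproduce the aliasing surprise once you reach for mutable cells, which is the kind of bug meant here (a sketch using Data.IORef from base):

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)

-- Two names, one mutable cell: a write through the alias is observed
-- through the original name. That invisible link is what makes aliasing
-- bugs so hard to track in large programs.
aliasDemo :: IO Int
aliasDemo = do
  ref <- newIORef (0 :: Int)
  let alias = ref
  writeIORef alias 42
  readIORef ref        -- 42, even though `ref` was never written "directly"

main :: IO ()
main = aliasDemo >>= print  -- 42
```

With ordinary immutable Haskell values no such link can exist, which is why the default is so much easier to reason about.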

> But it’s really very philosophical question, I think that monads are 
> over-hyped actually. I stopped seeing the value of monads by themselves.

Yeah, a lot of people think that monads are somehow state.
It's just that state usually is pretty monadic. Or, rather, the 
functions that are built for computing a "next state" are by nature 
monadic, so that was the first big application area of monads.
But monads are really much more general than handling state. It's 
like assuming that associativity is only for arithmetic, when there's a 
whole lot of other associative operators in the world, some of them even 
useful (such as string concatenation).
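For instance, the list monad has nothing to do with state, yet it is driven by the same (>>=) and do-notation (a sketch):

```haskell
-- Nondeterminism via the list monad: no state anywhere, same do-notation.
pairsSummingTo :: Int -> [(Int, Int)]
pairsSummingTo n = do
  a <- [1 .. n]
  b <- [1 .. n]
  if a + b == n then pure (a, b) else []

main :: IO ()
main = print (pairsSummingTo 3)  -- [(1,2),(2,1)]
```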

>   * Third, AFAIK CLR restrictions do not allow implementing things like
>     Functor, Monad, etc. in F# directly because they can't support HKT.
>     So they workaround the problem.
> https://fsprojects.github.io/FSharpPlus/abstractions.html (btw, you can 
> see that monad is monoid here 😉)

Nope, monoid is a special case of monad (the case where all input and 
output types are the same).
(BTW monoid is associativity + neutral element. Not 100% sure whether 
monad's "return" qualifies as a neutral element, and my 
monoid-equals-monotyped-monad claim above may fall down if it is not. 
Also, different definitions of monad may add different operators so the 
subconcept relationship may not be straightforward.)
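One way to make the "return as neutral element" question concrete (a sketch, not a proof): for Kleisli composition (>=>) from Control.Monad, `pure` is a two-sided identity and composition is associative; the monad laws state exactly these monoid-style laws.

```haskell
import Control.Monad ((>=>))

-- Kleisli arrows a -> m b compose monoid-style: (>=>) is associative
-- and pure is its identity. The monad laws say precisely this.
half :: Int -> Maybe Int
half x = if even x then Just (x `div` 2) else Nothing

quarter :: Int -> Maybe Int
quarter = half >=> half

main :: IO ()
main = do
  print (quarter 8)                     -- Just 2
  print (quarter 6)                     -- Nothing
  print ((pure >=> half) 8 == half 8)   -- True (left identity)
```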

(I'm running out of time and interest so I'll leave the remaining points 
unaddressed.)


More information about the Haskell-Cafe mailing list