[Haskell-cafe] Wow Monads!
Joachim Durchholz
jo at durchholz.org
Wed Apr 19 12:42:42 UTC 2017
On 19.04.2017 at 11:42, Richard A. O'Keefe wrote:
>
>> The general finding, however, was that both were very, very heavy on enabling all kinds of cool and nifty features, but also very, very weak on making it easy to understand existing code.
>
> With respect to Common Lisp, I'm half-way inclined to give you an argument.
:-)
As I said, I don't really know any details anymore.
> Macros, used *carefully*, can give massive improvements in readability.
Oh, definitely!
Actually this applies even to C-style macros, though they're pretty
limited in what they can do before you start to hit the more obscure
preprocessor features.
> Let's face it, one bit of Lisp (APL, SML, OCaml, Haskell) code looks pretty
> much like another. The way I've seen macros used -- and the way I've used
> them myself -- makes code MORE understandable, not less.
>
> All it takes is documentation.
Good documentation. That's where the Smalltalk system I experimented
with (Squeak, I think) was a bit weak: it documented everything, but
it was pretty thin on preconditions, so you had to experiment, and since
parameters were passed through layers and layers of code it was really
hard to determine which part of the system was supposed to do what.
> In Scheme, however, I flatly deny that macros in any way make it harder to
> understand existing code than functions do.
Fair enough.
>> Essentially, each ecosystem with its set of macros, multiple-dispatch conventions, hook systems and whatnow, to the point that it is its own language that you have to learn.
>
> And how is this different from each ecosystem having its own set of operators (with the same
> names but different precedence and semantics) or its own functions (with the same name but
> different arities and semantics)?
Different functions with different arities are actually that: different.
Just consider the arity a part of the name.
Different semantics under the same name (in the sense of divergent
behaviour) - now that would be a major API design problem. It's the kind
of stuff you see in PHP, but not very much elsewhere.
(Operators are just functions with a funny syntax.)
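To make the "operators are just funny syntax" point concrete in Haskell
terms (a toy sketch, with made-up names), an operator is an ordinary
function whose name happens to be symbolic, and the two notations are
interchangeable:

    -- An ordinary function with an alphabetic name:
    add :: Int -> Int -> Int
    add x y = x + y

    -- The same thing as an operator with a symbolic name:
    (|+|) :: Int -> Int -> Int
    x |+| y = x + y

    -- Both can be used prefix or infix:
    --   add 2 3, 2 `add` 3, (|+|) 2 3, and 2 |+| 3 all evaluate to 5.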
However, there's a real difference: if the same name is dispatched at
runtime, you're in trouble unless you can tell that the interesting
parts of the semantics are always the same.
Languages with a notion of subtype, or in fact any kind of semantic
hierarchy between functions, let you reason about the minimum
guaranteed semantics and check that the caller respects it. Any
language with subtyping, or with a way to associate an abstract data type
with an interface, can do this; I didn't see anything like that in
Smalltalk (where subclasses tend to sort-of be subtypes, but with no
guarantees), nor in any Lisp variant that I ever investigated, so there
this kind of reasoning is hard.
Now there's still a huge difference between just type guarantees (C++,
Java, OCaml), design by contract (Eiffel), and provable design by
contract (I know of no language that does this, though you can
approximate that with a sufficiently strong type system).
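To illustrate the difference, here is a minimal Haskell sketch (the Stack
class and its names are hypothetical, purely for illustration): the types
are the machine-checked part of the interface, while the laws in the
comments are the contract part that only Eiffel-style design by contract,
or a proof, would additionally enforce.

    -- A hypothetical interface: the types are checked by the compiler,
    -- the laws in the comments are only a documented contract.
    class Stack s where
      empty :: s a
      push  :: a -> s a -> s a
      pop   :: s a -> Maybe (a, s a)
      -- Contract (not checked by the compiler):
      --   pop (push x s) == Just (x, s)
      --   pop empty      == Nothing

    -- Callers can rely on the guaranteed types for any instance:
    top :: Stack s => s a -> Maybe a
    top s = fst <$> pop s

    -- One possible instance, based on lists:
    newtype ListStack a = ListStack [a]

    instance Stack ListStack where
      empty                    = ListStack []
      push x (ListStack xs)    = ListStack (x : xs)
      pop (ListStack [])       = Nothing
      pop (ListStack (x : xs)) = Just (x, ListStack xs)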
> Yes, different subsystems can have different vocabularies of macros,
> just as different modules in Haskell can have different vocabularies
> of types, operators, and functions.
> They don't interfere, thanks to the package or module system.
Ah. I had been thinking that macros were applied globally.
>> That's essentially why I lost interest: None of what I was
>> learning would enable me to actually work in any project, I'd have
>> to add more time to even understand the libraries, let alone
>> contribute.
>
> That *really* doesn't sound one tiny bit different from trying to work on
> someone else's Java or Haskell code.
It *is* different: Understanding the libraries isn't hard in Java.
That's partly because the language is pretty "stupid", though the
addition of generics, annotations, and higher-order functions has
started changing that. (Unfortunately these things are too important and
useful to just leave them out.)
> My own experience with Lisp was writing tens of thousands of lines to fit into
> hundreds of thousands, and my experience was very different from yours.
> Macros were *defined* sparingly, *documented* thoroughly, and *used* freely.
> Result: clarity.
Yes, I've been assuming that that's what was happening.
I still reserve some scepticism about the "documented thoroughly" bit,
because you're so far beyond any learning curve that I suspect that your
chances of spotting any deficits in macro documentation are pretty slim.
(I may be wrong, but I see no way to really validate the thoroughness of
macro documentation.)
>> The other realization was that these extension mechanisms could make code non-interoperable,
>
> I fail to see how define-syntax encapsulated in modules could possibly
> make Scheme code non-interoperable.
Yeah, I was assuming that macros are global.
That's very old Lisp experience, from the days when Common Lisp and Scheme
were new fads and Lisp machines were still a thing - and had to be
rebooted on a daily basis to keep them running.
What can make code non-interoperable even today are those
multiple-dispatch mechanisms (which may not exist in Scheme, only in
Common Lisp). Multiple dispatch cannot be made both modular and consistent
("non-surprising"), and the MD mechanism that I studied went the other
route: if the semantics is a problem, throw more mechanisms at it until
people can make it work as intended, to the point that you could
dynamically hook into the dispatch process itself.
It made my toenails curl.
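For contrast, here is a rough Haskell analogue of dispatching on more than
one argument (hypothetical names, and only an approximation of CLOS-style
multiple dispatch): a multi-parameter type class chooses the implementation
from the types of both arguments, but the choice is fixed at compile time
and scoped by what you import, so nothing can hook into the dispatch at
runtime.

    {-# LANGUAGE MultiParamTypeClasses #-}

    -- Dispatch on the types of *both* arguments:
    class Collide a b where
      collide :: a -> b -> String

    data Asteroid = Asteroid
    data Ship     = Ship

    instance Collide Asteroid Ship where
      collide _ _ = "asteroid hits ship"

    instance Collide Ship Asteroid where
      collide _ _ = "ship hits asteroid"

    instance Collide Ship Ship where
      collide _ _ = "ships bounce off each other"

    -- The instance is chosen statically from the argument types;
    -- there is no runtime hook into the dispatch process.
    main :: IO ()
    main = putStrLn (collide Asteroid Ship)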
>> Java does a lot of things badly, but it got this one right. I can
>> easily integrate whatever library I want, and it "just works": there
>> are no linker errors, incompatible memory layouts, or whatever else
>> makes reusing external C++ libraries so hard.
>
> Fell off chair laughing hysterically. Or was that screaming with remembered
> pain? I am sick to death of Java code *NOT* "just working".
> I am particularly unthrilled about Java code that *used* to work ceasing to work.
> I am also sick of working code that gets thousands of deprecation warnings.
Sorry, but those are all hallmarks of bad Java code.
> I am particularly tired of having to grovel through thousands of pages of bad
> documentation, to the point where it's often less effort to write my own code.
Yeah, that used to be a thing.
It isn't usually a problem anymore (even Hibernate may have grown up; I
hear that the 4.x codebase is far better than the really crummy 3.6 one
that I have come to disrespect).
> I am *ALSO* sick of trying to match up Java libraries that only build with Ant
> and Java libraries that only build with Maven. (Yes, I know about just putting
> .jar files in the right place. I've also heard of the Easter Bunny.)
Using Ant means you're doing it in a pretty complicated, fragile, and
outdated way.
Putting .jar files in the right place is the most fragile way ever, and
leads straight into stone-age nightmares; don't ever follow that kind of
advice unless you *want* to fail in mysterious ways.
Maven would be the way to go, but only if your project is so large that
you have a team of build engineers anyway, i.e. an overall workforce
of 30+ people.
Smaller teams should stick with Gradle, which uses the same dependency
management as Maven but isn't into the kind of bondage & discipline that
Maven is.
Sadly, there are still shops that don't use Maven or Gradle.
For legacy projects I can understand that, but many do it because they
don't know better, i.e. there's no competent build engineer on the team.
Those teams are doomed to repeat old mistakes, just like people who
still think that Lisp macros are global are doomed to misjudge them :-D
>> Nowadays, the first step in any new project is to see what libraries we need. Integration is usually just a configuration line in the build tool (you don't have build tools in Lisp so that's a downside,
>
> Wrong. There are build tools for Common Lisp and have been for a long time.
> I didn't actually know that myself until I saw a student -- who was using
> Common Lisp without any of us having ever mentioned it to him -- using one.
Ah ok, I didn't know that.
> Look, it's really simple.
> If programmers *WANT* to write readable code
> and are *ALLOWED TIME* to write readable code, they will.
> Whatever the language.
Unless they want to show off how smart they are, and think that writing
code that only they can understand is a testament to that.
This kind of thinking is frowned upon nowadays, but it used to be pretty
widespread not too long ago.
> If they have other priorities or constraints, they won't.
> Whatever the language.
Definitely. Even if programmers would and could do better, external
constraints can prevent them from doing so.