[Haskell-cafe] Wow Monads!

David McClain dbm at refined-audiometrics.com
Tue Apr 18 16:12:38 UTC 2017


… step back about 40 years, and realize that people then understood the frailty of real machines. 

When I first interviewed with IBM back in 1982, before going with them on another project, one project group explained to me that they were working on error correction codes for large disk drive cache memories. Back in those days, the ceramic IC packages had trace amounts of radioactive isotopes, and every once in a while an alpha particle would smash into a DRAM cell, splattering the charge away and dropping or setting bits.

Now on a small memory system (a few KB in those days) the probability was low. But IBM was working on an 8 MB backing cache memory to speed up the disk I/O, and the likelihood of an errant bit worked out to about 1 per hour. That would be unacceptable. So IBM, like nearly all other memory manufacturers at the time, built memory systems with ECC.
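
For scale, here is a back-of-envelope Haskell sketch (the per-bit rate below is my own assumption, chosen only to reproduce that roughly one-per-hour figure; IBM's actual numbers would have differed):

    -- Assumed per-bit soft-error rate, purely for illustration.
    perBitUpsetsPerHour :: Double
    perBitUpsetsPerHour = 1.5e-8

    -- Expected bit flips per hour for a memory of the given size in MB.
    upsetsPerHour :: Double -> Double
    upsetsPerHour megabytes = megabytes * 2^20 * 8 * perBitUpsetsPerHour

    -- upsetsPerHour 8     ~ 1.0     -- the 8 MB cache: about one flip per hour
    -- upsetsPerHour 0.004 ~ 0.0005  -- a ~4 KB memory: one flip every few months

The point is that the expected error rate scales linearly with the number of bits, so a memory a few thousand times larger crosses from rare nuisance to once an hour.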

Fast forward to today, and we have cheap Chinese production machines. No memory ECC at all. And memory density is even higher than before. Oh yes, they ultimately found improved manufacturing processes so that the alpha particles aren’t anywhere near the problem today that they were in 1982. But higher bit density today, larger memories, solar flares, cosmic rays, and plenty of code bloat that depends on perfectly held memory…

Who knows why an FPL module gives a machine fault? But they sometimes really do.

- DM


> On Apr 18, 2017, at 08:39, David McClain <dbm at refined-audiometrics.com> wrote:
> 
> Well, as I stated, I think failures come in several categories:
> 
> 1. Machine Fault failures on successfully compiled code. That’s the big nasty one. Provably correct code can’t fail, right? But it sometimes does. And I think you can trace that to the need to interface with the real world. The underlying OS isn’t written in OCaml. Neither are many of the C-libs on which OCaml depends. 
> 
> In an ideal world, where everything in sight has gone through type checking, you might do better. But that will never be the case, even when the current FPL fad reaches its zenith. About that time, new architectures will appear underneath you and new fad cycles will be in their infancy and chomping at the bit to become the next wave… Reality is a constantly moving target. We always gear up to fight the last war, not the unseen unknowns headed our way.
> 
> 2. Experimental languages are constantly moving tools with ever-changing syntax and semantics. It becomes a huge chore to keep your code base up to date, and sooner or later you will stop trying. I have been through that cycle so many times before, and not just in OCaml: also RSI/IDL, C++ compilers ever since 1985, even C compilers.
> 
> The one constant, believe it or not, has been my Common Lisp. I’m still running code today, as part of my system environment, that I wrote back in 1990 and have never touched since then. It just continues to work. I don’t have one other example in another language where I can state that.
> 
> 3. Even if the underlying language were fixed, the OS never changing, all libraries fully debugged and cast in concrete, the language that you use will likely have soft edges somewhere. For pattern-based languages with full type decorations (e.g., row-type fields), attempting to match composite patterns over several tree layers becomes an exercise in write-only coding.
> 
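> For instance (a made-up Haskell sketch with hypothetical AST types, not code from any real project), even collapsing id(id(x)) to x means spelling out every intermediate constructor, and the reader has to rebuild the whole tree shape in their head:
> 
>     -- Hypothetical AST, for illustration only.
>     data Expr = App Fn [Arg] | Lit Int
>     data Fn   = Fn String Expr                -- function name and body
>     data Arg  = Pos Expr | Named String Expr
> 
>     simplify :: Expr -> Expr
>     simplify (App (Fn "id" _) [Pos (App (Fn "id" _) [Pos inner])]) = simplify inner
>     simplify e                                                     = e
> 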
> The lack of a good macro facility in current FPLs is a hindrance. Yes, you can do some of it functionally, but that implies a performance hit. Sometimes the FPL compilers will allow you to see the initial AST parse trees, and you might be able to implement a macro facility / syntax bending at that point. But then some wise guy back at language HQ decides that the AST tool is not really needed by anyone, and you get stung for having depended on it. The manual effort to recode what had been machine generated becomes too much to bear.
> 
> 4. I will fault any programming language system that doesn’t give you an ecosystem to live inside of, allowing for incremental extension, testing, recoding, etc. Edit / compile / debug cycles are awful. FPLs generally let you minimize the debug cycle by having you live longer at the edit / compile stage.
> 
> But see some of the more recent work of Alan Kay and his now-defunct project. They had entire GUI systems programmed in a meta-language that compiled on the fly, JIT upon JIT. They made the claim that compilers are tools from another era, which they are, and that we should not be dealing with such things today.
> 
> Heck, even the 1976 Forth system, crummy as it was, offered a live-in ecosystem for programming.
> 
> So, that’s my short take on problems to be found in nearly all languages. There is no single perfect language, only languages best suited to some piece of your problem space. For me, Lisp offers a full toolbox and allows me to decide its shape in the moment. It doesn’t treat me like an idiot, and it doesn’t hold me to rigid world views. Is it perfect? Not by a long shot. But I haven’t found anything better yet…
> 
> - DM
> 
>> On Apr 18, 2017, at 08:18, Joachim Durchholz <jo at durchholz.org> wrote:
>> 
>> Am 18.04.2017 um 16:57 schrieb David McClain:
>>> 
>>> Not sure what you are asking here? Do you mean, are they still
>>> extant? or do you want to know the failure mechanisms?
>> 
>> What failed for you when you were using OCaml.
> 
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.



More information about the Haskell-Cafe mailing list