[Haskell-cafe] Haskell-Cafe Digest, Vol 158, Issue 29

Joachim Durchholz jo at durchholz.org
Tue Nov 1 21:28:51 UTC 2016


On 01.11.2016 at 01:37, Richard A. O'Keefe wrote:
>
>
> On 1/11/16 9:54 AM, Joachim Durchholz wrote:
>>
>> And you need to control memory coherence, i.e. you need to define what
>> data goes together with what processes.
>
> At this point I'm puzzled.  Languages like Occam, ZPL, and Co-Array
> Fortran basically say NO! to memory coherence.

Sure, but unrelated.

> Of course you say
> which data goes with what process.

The hope with FPLs was that you would not need to specify that 
explicitly anymore, because the compiler could manage it.

Or maybe the considerably weaker scenario: the programmer still 
explicitly defines which computations and which data form a unit, but 
the boundaries are easy to move.
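
In GHC terms, that weaker scenario looks roughly like par/pseq 
annotations: the programmer marks the units, and moving a boundary is 
just moving an annotation. A minimal sketch (assuming the parallel 
package and -threaded; sumTo and the split point are made up for 
illustration):

  import Control.Parallel (par, pseq)

  -- Stands in for some real chunk of work.
  sumTo :: Int -> Int -> Int
  sumTo lo hi = sum [lo .. hi]

  total :: Int
  total = left `par` (right `pseq` left + right)
    where
      -- Moving this split point moves the unit-of-work boundary
      -- without touching any communication code.
      left  = sumTo 1      500000
      right = sumTo 500001 1000000

  main :: IO ()
  main = print total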

> If the data need to be available
> in some other process, there is some sort of fairly explicit
> communication.

Which means that you no longer have a simple function call, but an 
extra API layer.
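
To make that concrete: here is a computation that is a plain call 
within one process, moved behind a thread boundary with an explicit 
request/reply protocol (a sketch using Control.Concurrent.Chan; square 
and the worker setup are made up for illustration):

  import Control.Concurrent (forkIO)
  import Control.Concurrent.Chan (newChan, readChan, writeChan)

  -- Within one process this is just:  square 7
  square :: Int -> Int
  square x = x * x

  -- Across a process boundary, the same call needs an explicit
  -- request/reply protocol: the extra API layer.
  main :: IO ()
  main = do
    requests <- newChan
    replies  <- newChan
    _ <- forkIO $ do            -- worker standing in for a remote process
      x <- readChan requests
      writeChan replies (square x)
    writeChan requests 7        -- the "call"
    result <- readChan replies  -- the "return"
    print result                -- 49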

>> In an ideal world, the compiler would be smart enough to do that for you.
>> I have been reading fantasies that FPLs with their immutable data
>> structures are better suited for this kind of automation;
>
> Memory coherence exists as an issue when data are replicated and
> one of the copies gets mutated, so that the copies are now inconsistent.
> With immutable data this cannot be a problem.

This still does not tell you where to draw the boundaries inside your 
system.
If anything, it gets harder with non-strict languages, because it is 
harder to predict which computation will run at what time.
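
A small illustration of why: with a lazy value, the work happens 
wherever the thunk is first forced, which need not be the thread that 
built it. A sketch (expensive is made up; evaluate is from 
Control.Exception):

  import Control.Concurrent (forkIO)
  import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
  import Control.Exception (evaluate)

  expensive :: Int -> Int
  expensive n = sum [1 .. n]          -- stands in for real work

  main :: IO ()
  main = do
    done <- newEmptyMVar
    let result = expensive 10000000   -- a thunk: built here, computed later
    _ <- forkIO $ do
      -- The thunk is forced here, so the work runs on this thread,
      -- not on the thread that constructed it.
      r <- evaluate result
      putMVar done r
    print =<< takeMVar done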

> For what it's worth, the "Clean" programming language used to be
> called "Concurrent Clean" because it was set up to run on a cluster
> of Macintoshes.

Clean is strict ;-)

>> Without that, you'd code explicit multithreading, which means that
>> communication does not look like memory access at all.
>
> I believe my argument was that it *shouldn't* look like memory access.

It's something that some people want(ed), Yours Truly being one of 
them, actually. It's just that I have become sceptical about the 
trade-offs. Plus, the more I read about various forms of partitioning 
computations (not just NUMA but also IPC and networking), the more it 
seems that hardware is moving towards making the barriers higher, not 
lower (the reason being that this helps make computations within each 
barrier more efficient).

If that's a general trend, it's bad news for network, IPC, or NUMA 
transparency, which is going to make programming for these harder, not 
easier :-(

