[Haskell-cafe] Haskell-Cafe Digest, Vol 158, Issue 29

Rik Howard rik at dcs.bbk.ac.uk
Tue Nov 1 09:41:51 UTC 2016


As usual, you give me much to ponder.  For some reason it pleases me that
the world is not too concerned with what we happen to like.



> But it's not true to what is *there*, and if you program for that model,
> you're going to get terrible performance.


I heard recently of a type system that captures the complexity of functions
in their signatures.  With that information available to the machine,
perhaps execution could be planned so that performance is optimised.
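
To make that concrete, here is a rough, invented sketch of what such
signatures might look like in GHC with DataKinds; the names Complexity and
Costed are hypothetical, nothing here checks the claimed costs, and a real
system would need much more (recurrences, composition rules and so on):

{-# LANGUAGE DataKinds, KindSignatures #-}
module CostedSketch where

-- Promoted kind of asymptotic cost classes (purely illustrative).
data Complexity = Constant | Logarithmic | Linear | Quadratic

-- A function wrapper carrying its claimed complexity as a phantom index.
-- Nothing verifies the claim; it is merely visible to tooling (or a
-- hypothetical scheduler) through the type.
newtype Costed (c :: Complexity) a b = Costed { runCosted :: a -> b }

safeHead :: Costed 'Constant [a] (Maybe a)
safeHead = Costed $ \xs -> case xs of
  []      -> Nothing
  (x : _) -> Just x

costedLength :: Costed 'Linear [a] Int
costedLength = Costed length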

Your day with the HPC system sounds fascinating.  Do you think that an
Ada/Occam-like approach to partitioned distribution could tame the sort of
address space that you encountered on the day?



> Any programming model that relies on large flat shared address spaces is
> out; message passing that copies stuff is going to be much easier to manage
> than passing a pointer to memory that might be powered off when you need it.


But will there still be some call for shared memory?  Or perhaps only for
persistent stores?
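
For my own understanding I tried to write the contrast down as a toy,
single-process sketch, using GHC's ordinary Chan and MVar as stand-ins for a
real distributed transport (within one process nothing is physically copied,
so this only illustrates the two programming models, not the machinery you
describe):

import Control.Concurrent      (forkIO, threadDelay)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Concurrent.MVar (newMVar, modifyMVar_, readMVar)

main :: IO ()
main = do
  -- Message-passing style: the worker only ever sees values it was sent,
  -- so the same shape would survive a copying, distributed transport.
  requests <- newChan
  replies  <- newChan
  _ <- forkIO $ do
    xs <- readChan requests
    writeChan replies (sum (xs :: [Int]))
  writeChan requests [1 .. 100]
  print =<< readChan replies

  -- Shared-memory style: both threads touch the same mutable cell, which
  -- presumes a shared address space.
  cell <- newMVar (0 :: Int)
  _ <- forkIO $ modifyMVar_ cell (pure . (+ 100))
  threadDelay 10000  -- crude synchronisation, good enough for a sketch
  print =<< readMVar cell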



> One of the presenters was working with a million lines of Fortran, almost
> all of it written by other people.  How do we make that safe?


Ultimately only proof can verify safety.  (I'm trying to address something
like that in my rewrite which, given the high quality of feedback from this
list, I hope to post soon.)





On 31 October 2016 at 04:07, Richard A. O'Keefe <ok at cs.otago.ac.nz> wrote:

>
>
> On 31/10/16 5:44 AM, Rik Howard wrote:
>
>> thanks for the reply.  Conceptually I like the idea of a single address
>> space, it can then be a matter of configuration as to whether what
>> you're addressing is another local process, processor or something more
>> remote.
>>
>
> The world doesn't care what you or I happen to like.
> I completely agree in *liking* a single address space.
> But it's not true to what is *there*, and if you program for that
> model, you're going to get terrible performance.
>
> I've just been attending a 1-day introduction to our national HPC
> system.  There are two clusters.  One has about 3,300 cores and
> the other over 6,000.  One is POWER+AIX, the other Intel+Linux.
> One has Xeon Phis (amongst other things), the other does not.
> Hint: neither of them has a single address space, and while we
> know about software distributed memory (indeed, one of the people
> here has published innovative research in that area), it is *not*
> like a single address space and is not notably easy to use.
>
> It's possible to "tame" single address space.  When you start to
> learn Ada, you *think* you're dealing with a single address space
> language, until you learn about partitioning programs for
> distributed execution.  For that matter, Occam has the same
> property (which is one of the reasons why Occam didn't have
> pointers, so that it would be *logically* a matter of indifference
> whether two concurrent processors were on the same chip or not).
>
> But even when communication is disguised as memory accessing,
> it's still communication, it still *costs* like communication,
> and if you want high performance, you had better *measure* it
> as communication.
>
> One of the presenters was working with a million lines of Fortran,
> almost all of it written by other people.  How do we make that safe?
>
>