[Haskell-cafe] Haskell-Cafe Digest, Vol 158, Issue 29

Rik Howard rik at dcs.bbk.ac.uk
Sun Oct 30 17:25:50 UTC 2016


All

Thank you for the feedback and the bandwidth.  It has been invaluable and
is appreciated.

Regards
Rik




On 30 October 2016 at 16:44, Rik Howard <rik at dcs.bbk.ac.uk> wrote:

> Thanks for the reply.  Conceptually I like the idea of a single address
> space; it can then be a matter of configuration whether what you're
> addressing is another local process, another processor, or something more
> remote.  Some assumptions about what can be expected from local resources
> would need to be dropped, but I believe the idea works in other settings.
> Your point about not wanting to rewrite when the underlying platform
> evolves seems relevant.  Perhaps that suggests that a language, while
> needing to be aware of its environment, oughtn't to shape itself entirely
> around that environment.  While we're on the subject of rewrites, that is
> the fate of the WIP.  I was wrong.
>
>
> On 28 October 2016 at 01:38, Richard A. O'Keefe <ok at cs.otago.ac.nz> wrote:
>
>>
>>
>> On 28/10/16 8:41 AM, Rik Howard wrote:
>>
>>> Any novelty in the note would only ever be in the way that the mix is
>>> provided.  You raise salient points about the sort of challenges that
>>> languages will need to confront, although a search has left me still
>>> unsure about PGPUs.  Can I ask you to say a bit more about programming
>>> styles: what Java can't do, what others can do, and how that scales?
>>>
>>
>> The fundamental issue is that Java is very much an imperative language
>> (although books on concurrent programming in Java tend to strongly
>> recommend immutable data structures whenever practical, because they
>> are safer to share).
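>>
>> As a small Haskell sketch of that point (the table and the thread
>> count below are made up for illustration): several threads can read
>> one structure with no locking at all, because nothing can change it
>> underneath them.
>>
>>   import Control.Concurrent (forkIO)
>>   import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
>>   import qualified Data.Map.Strict as Map
>>
>>   main :: IO ()
>>   main = do
>>     -- One immutable structure, built once and then only read.
>>     let table = Map.fromList [(k, k * k) | k <- [1 .. 1000 :: Int]]
>>     dones <- mapM (const newEmptyMVar) [1 .. 4 :: Int]
>>     -- Four threads read 'table' concurrently; no locks are needed
>>     -- around it, because nothing can mutate it.
>>     mapM_ (\(i, done) -> forkIO $ do
>>              print (Map.lookup (i * 100) table)
>>              putMVar done ())
>>           (zip [1 ..] dones)
>>     mapM_ takeMVar dones   -- the MVars only signal completion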
>>
>> The basic computational model of (even concurrent) imperative languages
>> is the RAM: a set of threads lives in a single address space where all
>> memory is equally and easily accessible to all threads.
>>
>> Already that's not true.  One of the machines sitting on my desk is a
>> Parallella: 2 ARM cores and 16 RISC cores.  There is a single address
>> space shared by the RISC cores, but each of them "owns" a chunk of it
>> and access is not uniform.  Getting information between the ARM cores
>> and the RISC cores is not trivial.  Indeed, one programming model for
>> the Parallella is OpenCL 1.1, although as they note,
>> "Creating an API for architectures not even considered during the
>> creation of a standard is challenging.  This can be seen in the case of
>> Epiphany, which possesses an architecture very different from a GPU, and
>> which supports functionality not yet supported by a GPU. OpenCL as an API
>> for Epiphany is good, but not perfect."  The thing is that the
>> Epiphany chip is more *like* a GPU than it is like anything that, say,
>> Java might want to run on.
>>
>> For that matter, there is the IBM "Cell" processor, basically a Power
>> core and a bunch of RISCish cores, not entirely unlike the Epiphany.
>> As the Wikipedia page on the Cell notes, "Cell is widely regarded as a
>> challenging environment for software development".
>>
>> Again, Java wants a (1) large (2) flat (3) shared address space, and
>> that's *not* what Cell delivers.  The memory space available to each
>> "SPE" in a Cell is effectively what would have been L1 cache on a more
>> conventional machine, and transfers between that and main memory are
>> non-trivial.  So Cell memory is (1) small (2) heterogeneous and (3)
>> partitioned.
>>
>> The Science Data Processor for the Square Kilometre Array is still
>> being designed.  As far as I know, they haven't committed to a CPU
>> architecture yet, and they probably want to leave that pretty late.
>> Cell might be a candidate, but I suspect they'll not want to spend
>> much of their software development budget on a "challenging"
>> architecture.
>>
>> Hmm.  Scaling.
>>
>> Here's the issue.  It looks as though the future of scaling is
>> *lots* of processors, running *slower* than typical desktops,
>> with things turned down or off as much as possible, so you won't
>> be able to pull the Parallella/Epiphany trick of always being able
>> to access another chip's local memory.  Any programming model
>> that relies on large flat shared address spaces is out; message
>> passing that copies stuff is going to be much easier to manage
>> than passing a pointer to memory that might be powered off when
>> you need it; anything that creates tight coupling between the
>> execution orders of separate processors is going to be a nightmare.
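>>
>> To make the contrast concrete, here is a small Haskell sketch of the
>> message-passing style.  GHC's in-process channels stand in for
>> whatever transport such a machine would really provide (within one
>> GHC process the "copy" is a shared pointer to an immutable value, but
>> the programming model is the same): only self-contained values cross
>> the boundary, never pointers into another core's memory.
>>
>>   import Control.Concurrent (forkIO)
>>   import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
>>
>>   -- A message is a self-contained value: sending one carries
>>   -- everything the receiver needs.
>>   data Msg = Work Int | Done
>>
>>   worker :: Chan Msg -> Chan Int -> IO ()
>>   worker req rep = do
>>     m <- readChan req
>>     case m of
>>       Done   -> pure ()
>>       Work n -> do
>>         writeChan rep (n * n)   -- the reply is again a plain value
>>         worker req rep
>>
>>   main :: IO ()
>>   main = do
>>     requests <- newChan
>>     replies  <- newChan
>>     _ <- forkIO (worker requests replies)
>>     mapM_ (writeChan requests . Work) [1 .. 5]
>>     writeChan requests Done
>>     -- The two sides share no mutable state beyond the channels.
>>     mapM_ (\_ -> readChan replies >>= print) [1 .. 5 :: Int]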
>>
>> We're also looking at more things moving into special-purpose
>> hardware, in order to reduce power costs.  It would be nice to be
>> able to do this without a complete rewrite...
>>
>> Coarray Fortran (in the current standard) is an attempt to deal with
>> the kinds of machines I'm talking about.  Whether it's a good attempt
>> I couldn't say; I'm still trying to get my head around it.  (More
>> precisely, I think I understand what it's about, but I haven't a
>> clue about how to *use* the feature effectively.)  There are people
>> at Rice who think it could be better.
>>
>> Reverting to the subject of declarative/procedural, I recently came
>> across Lee Naish's "Pawns" language.  Still very much a prototype,
>> and he is interested in the semantics, not the syntax.
>> https://github.com/lee-naish/Pawns
>> http://people.eng.unimelb.edu.au/lee/papers/pawns/

