[Haskell-cafe] Re: DDC compiler and effects; better than Haskell?

Marcin Kosiba marcin.kosiba at gmail.com
Sun Aug 16 04:46:39 EDT 2009

On Sunday 16 August 2009, Artem V. Andreev wrote:
> "John A. De Goes" <john at n-brain.net> writes:
> > On Aug 15, 2009, at 6:36 AM, Jason Dusek wrote:
> >> 2009/08/14 John A. De Goes <john at n-brain.net>:
> >>> Hmmm, my point (perhaps I wasn't clear), is that different
> >>> effects have different commutability properties. In the case
> >>> of a file system, you can commute two sequential reads from
> >>> two different files.
> >>
> >>  I think this is a bad example -- it's not something that's
> >>  safe in general and discredits your idea. How would the
> >>  compiler even know that two files are not actually the same
> >>  file?
> >
> > I don't think the file system is the best example. However, I do think 
> > it's a reasonable one.
> >
> > Let's say the type of the function getFilesInDir is annotated in such  a
> > way as to tell the effect system that every file in the returned  array
> > is unique. Further, let's say the type of the function  makeNewTempFile
> > is annotated in such a way as to tell the effect  system that the
> > function will succeed in creating a new temp file with  a name unique
> > from any other existing file.
> Sorry, but this example is ridiculous. While file *names* in this case
> might be reasonably assumed to be unique, the *files* themselves may not.
> Any modern filesystem does support file aliasing, and usually several forms
> thereof. And what does makeNewTempFile function do? Does it create a new
> file like POSIX mktemp() and return its name, or does it rather behave as
> POSIX mkstemp()? The first case is a well known security hole, and the
> second case does not, as it seems to me, fit well into the rest of your
> reasoning.
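
For reference, GHC's base library already exposes the safe, mkstemp()-style behaviour: System.IO.openTempFile creates and opens a fresh file in a single step, returning both the generated name and a handle, so there is no window in which another process can claim the name. A minimal sketch:

```haskell
import System.IO (openTempFile, hPutStrLn, hClose)

main :: IO ()
main = do
  -- openTempFile creates and opens the file atomically, returning
  -- both the fresh name and a handle -- the mkstemp()-style behaviour,
  -- avoiding the mktemp() name-then-open race.
  (path, h) <- openTempFile "/tmp" "example.txt"
  hPutStrLn h "scratch data"
  hClose h
  putStrLn ("created " ++ path)
```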

	IMHO, provided with a flexible effect system, the decision on how to perform 
read/write operations on files is a matter for libraries.
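
To make that concrete, here is a purely hypothetical sketch (all names invented for illustration, not any existing library's API) of how a library could tag IO actions with type-level effect labels, leaving it to the library's combinators, rather than the compiler core, to decide what may commute:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- Hypothetical sketch: a newtype over IO carrying a phantom list of
-- effect labels. The labels have no runtime cost; they exist so a
-- library could inspect them at the type level to justify reordering.

data Effect = FileRead | FileWrite

newtype EffIO (es :: [Effect]) a = EffIO { runEffIO :: IO a }

-- A read-only action advertises its effect in its type.
readFileE :: FilePath -> EffIO '[ 'FileRead ] String
readFileE = EffIO . readFile

-- Sequencing two tagged actions; a smarter version could compute the
-- combined effect list and expose commuting combinators for
-- non-conflicting tags.
thenE :: EffIO es a -> (a -> EffIO fs b) -> EffIO gs b
thenE (EffIO m) k = EffIO (m >>= runEffIO . k)
```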
	But I'd like to ask another question: is the effect system you're discussing 
now really capable of providing significant performance improvements in the 
case of file I/O? Even if we assume some consistency model and transform one 
correct program into another -- how do you estimate efficiency without 
knowledge of the physical media's characteristics? I can see how this could be 
used to optimize access to different kinds of media (reordering socket/file 
operations or even running some of them in parallel), but I don't see how 
we can benefit from reordering writes to the same media.
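
The parallel case can be sketched with nothing beyond base. The sketch below reads two files concurrently; note that this is only a safe "commuted" form of two sequential reads if the paths really denote distinct, unaliased files, which is exactly the property an effect system would have to verify:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Read two files concurrently and return both contents.
-- Safe only under the assumption that fa and fb are distinct,
-- unaliased files (no hard links, symlinks, etc. to the same inode).
readBothConcurrently :: FilePath -> FilePath -> IO (String, String)
readBothConcurrently fa fb = do
  va <- newEmptyMVar
  vb <- newEmptyMVar
  -- Force each lazy read inside its worker thread (length walks the
  -- whole string), so the actual I/O really happens in parallel.
  _ <- forkIO (do s <- readFile fa; putMVar va $! (length s `seq` s))
  _ <- forkIO (do s <- readFile fb; putMVar vb $! (length s `seq` s))
  a <- takeMVar va
  b <- takeMVar vb
  return (a, b)
```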
	Another thing is that not every kind of r/w operation requires the same 
consistency model -- for example, when I'm writing a backup for later use I only 
care that my writes land in the same order. I imagine that such an effect system 
could help write software for CC-NUMA architectures or shared-memory 
distributed systems.
	Marcin Kosiba
