Ryan Ingram ryani.spam at gmail.com
Fri Jan 21 20:01:36 CET 2011

Interesting little paper, Tyson.

You bring up other programming languages and 'ad-hoc systems for
resolving ambiguities'; I agree with you that these systems generally
have no strong theoretical basis, but I'm not sure that's a terribly
serious failing in practice.

I think what a programmer actually wants from ambiguity resolution is
something *predictable*; C++'s system is definitely stretching the
boundaries of predictability, but any case where I have to break out a
calculator to decide whether the compiler is going to choose
specification A or specification B for my program seems like a
failure.  I'd much rather the solution wasn't always 'the most
probable' but at least was easy for me to figure out without thinking
too hard.
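As a concrete illustration of the kind of thing I mean (my own sketch,
not an example from the paper): in Haskell today, nothing pins down the
intermediate type below, so the programmer has to annotate, and at
least it's easy to see *why*:

```haskell
-- 'read' produces some type a and 'show' consumes it, but nothing
-- determines a, so GHC rejects the unannotated version:
--   roundTrip s = show (read s)     -- error: ambiguous type variable
roundTrip :: String -> String
roundTrip s = show (read s :: Int)  -- the annotation resolves the ambiguity
```

Predictability here means I can tell at a glance that the annotation is
required; that's the property I'd want any automatic resolver to keep.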

The goal is to easily know when I have to manually specify ambiguity
resolution and when I can trust the compiler to do it for me.  I
didn't completely follow the math in your paper, so maybe it would turn
out to be simple if it were implemented, but it wasn't clear to me.  At the
least, I think you should add examples of the types of ambiguity
resolution you'd like the compiler to figure out and what your
probability measure chooses as the correct answer in each case.
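For instance (my sketch, using the [String] example from Conor's
message quoted below), these are the two resolutions your measure would
have to choose between:

```haskell
-- (++) acting on lists of strings as pure values:
pureCat :: [String] -> [String] -> [String]
pureCat xs ys = xs ++ ys

-- (++) acting on strings as values inside []-computations:
liftedCat :: [String] -> [String] -> [String]
liftedCat xs ys = (++) <$> xs <*> ys
```

Here pureCat ["a"] ["b"] gives ["a","b"] while liftedCat ["a"] ["b"]
gives ["ab"]; I'd want the paper to show which one the complexity
measure picks and why.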

Anyways, thanks for the interesting read.  I'm excited to see work on
making a better type *inference* system, since much of the work lately
seems to be on making a better *type* system at the cost of more often
manually specifying types.

I work in a traditional programming industry, and most of the people
from work that I talk to about Haskell are frustrated that they can't
just write putStrLn (readLn + (5 :: Int)) and have the compiler figure
out where the lifts and joins go.  After all, that just works in C[1]!
What's the point of having the most powerful type system in the
universe if the compiler can't use it to make your life easier?
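Written out by hand today, the lifts and conversions they're asking the
compiler to insert look something like this ('program' and 'addFive'
are just names I made up for the sketch):

```haskell
-- The pure core of the computation:
addFive :: Int -> Int
addFive n = n + 5

-- The same program with the IO lifting made explicit by hand:
program :: IO ()
program = do
  n <- readLn                  -- readLn :: IO Int, inferred from addFive
  putStrLn (show (addFive n))  -- 'show' converts the Int to a String
```

Every line of that plumbing is something the C version gets for free.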

-- ryan

[1] sample program:
#include <stdio.h>

int readLn(void) { int x; scanf("%d", &x); return x; }  /* reads an int from stdin */
void putStrLn(int x) { printf("%d\n", x); }             /* prints an int to stdout */

int main(void) { putStrLn(readLn() + 5); return 0; }

On Fri, Jan 21, 2011 at 8:43 AM, Tyson Whitehead <twhitehead at gmail.com> wrote:
> On January 19, 2011 15:28:33 Conor McBride wrote:
>> In each case, the former has (++) acting on lists of strings as pure
>> values,
>> while the latter has (++) acting on strings as values given in
>> []-computations.
>>
>> The type [String] determines a domain, it does not decompose uniquely
>> to a
>> notion of computation and a notion of value. We currently resolve this
>> ambiguity by using one syntax for pure computations with [String] values
>> and a different syntax for [] computations with String values.
>>
>> Just as we use newtypes to put a different spin on types which are
>> denotationally the same, it might be worth considering a clearer (but
>> renegotiable) separation of the computation and value aspects of types,
>> in order to allow a syntax in which functions are typed as if they act
>> on
>> *values*, but lifted to whatever notion of computation is ambient.
>
> Yes.  That makes sense.  Thank you both for the clarification.  The idea of
> explicitly separating the two aspects of types is an interesting one.
>
> The automated approach I had been thinking of was to always take the simplest
> context possible.  (i.e., for the above, list of strings as pure values).
>
> To this end I've been working on a measure for the complexity of the
> application operator.  I've got a draft at
>
>
> I'm still working on my thinking on polymorphic types though, so everything
> from section 2.2 onwards is subject to change (especially 2.3 and the
> conclusion).
>
> Cheers!  -Tyson
>