[Haskell-cafe] Proving correctness

Gábor Lehel illissius at gmail.com
Mon Feb 14 15:07:01 CET 2011


On Mon, Feb 14, 2011 at 12:06 PM, Pedro Vasconcelos <pbv at dcc.fc.up.pt> wrote:
> On Sat, 12 Feb 2011 19:38:31 +0530
> C K Kashyap <ckkashyap at gmail.com> wrote:
>
>> Anyway, how can one go about explaining to an imperative programmer
>> with no FP exposure - what aspect of Haskell makes it easy to
>> refactor?
>
>
> Like other people have said, the static type system is a major factor.
> It's true that Haskell's type system is more advanced than those of most
> imperative languages, but it's also often overlooked that static
> typing gives a stronger check in a functional language than in an
> imperative one, even in simple cases. This is because all input and
> output data flow is type checked at a function application, whereas
> imperative side effects might escape checking.
>
> For example, the type signature for a variable swapping procedure in C:
>
>        void swap(int *a, int *b)
>
> This will still type check even if it modifies only one of the argument
> references. However, if written functionally, it must return a pair:
>
>        swap :: (Int,Int) -> (Int,Int)
>
> Now the type checker will reject any implementation that fails to
> return a pair of results in every case. Of course for a trivial example
> like swap this is easy to ensure in any imperative language, but for
> more complex programs it is actually quite common to forget to update
> some component of the state.
>
> BTW, I found this and other interesting reflections on the advantages of
> FP vs. imperative programming in Martin Odersky's book "Programming in Scala".

I'm not completely sure, but I suspect another part of it (or maybe
I'm just rephrasing what you said?) has to do with the fact that in
Haskell, basically everything is an expression. In imperative
languages, the code is segregated into statements (each containing
expressions) which are evaluated individually for their side effects.
Type checking happens mainly within statements (in expressions), while
between them it is minimal to nonexistent. In Haskell, a function is
one big expression -- even imperative code is represented by monadic
expressions. If you have a complicated function that transforms lists
in some way and you change something, the change has to satisfy the
types not just of its immediate environment, or of the complicated
function you're writing, but of everything else in between, with the
consequences of the change (and the consequences of the
consequences...) being propagated automatically by the type
inferencer. (The 'boundaries', so to speak, between expressions are
defined by where you put explicit type signatures -- calling a
function without an explicit signature looks much the same to the
typechecker as having its body inlined at the call site. (Modulo
things like the monomorphism restriction, it should be equivalent?))
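
For instance (a contrived pipeline of my own, just to illustrate the point):

        -- Each stage is an expression, and the whole composition is
        -- type checked end to end.
        process :: [Int] -> [String]
        process = map show . filter even . map (*2)

        -- Changing one stage, say replacing map (*2) with
        -- map fromIntegral (producing [Double]), makes filter even
        -- ill-typed (Double has no Integral instance), and the type
        -- checker reports the mismatch at compile time instead of
        -- letting it surface somewhere downstream.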

Does this sound roughly correct?

>
> Best regards,
>
> Pedro Vasconcelos
>



-- 
Work is punishment for failing to procrastinate effectively.


