From bhalchin@hotmail.com Thu Feb 1 05:29:44 2001 Date: Thu, 01 Feb 2001 05:29:44 From: Bill Halchin bhalchin@hotmail.com Subject: Source tar ball for Simon Marlow's Haskell Web Server??
Hello,

   I looked on www.haskell.org for Simon Marlow's web server,
but couldn't find it. Did I overlook it?

Regards,

Bill Halchin




From igloo@earth.li Fri Feb 2 01:04:07 2001 Date: Fri, 2 Feb 2001 01:04:07 +0000 From: Ian Lynagh igloo@earth.li Subject: Various software and a question
Hi all

First a brief question - is there a nicer way to do something like

    #ifdef __GLASGOW_HASKELL__
    #include "GHCCode.hs"
    #else

    > import HugsCode

    #endif

than that (i.e. for code that needs to differ depending on whether you
are using GHC or Hugs)?


Secondly, I don't know if this sort of thing is of interest to anyone,
but inspired by the number of people who looked at the MD5 stuff I
thought I might as well mention it. I've put all the Haskell stuff I've
written at http://c93.keble.ox.ac.uk/~ian/haskell/ (although I'm new
at this game so it may not be the best code in the world).

At the moment this consists of a (very nearly complete) clone of GNU ls,
an MD5 module and test program, and the same for SHA1 and DES. The ls
clone needs a patch to GHC for things like isLink (incidentally, would
it be sensible to try and get this included with GHC? It is basically a
simple set of changes to the PosixFiles module, but needs __USE_BSD
defined (which I guess is the reason it is not in there, but it could
have its own file?)).


Have fun
Ian, wondering how this message got to be so long



From koen@cs.chalmers.se Fri Feb 2 09:15:17 2001 Date: Fri, 2 Feb 2001 10:15:17 +0100 (MET) From: Koen Claessen koen@cs.chalmers.se Subject: Various software and a question
Ian Lynagh wondered:

 | is there a nicer way to do something like
 |
 |     #ifdef __GLASGOW_HASKELL__
 |     #include "GHCCode.hs"
 |     #else
 |     > import HugsCode
 |     #endif

I usually make two directories:

  Hugs/
  Ghc/

That contain files with the same names but different
compiler-dependent implementations. Then it is just a
question of setting the PATHs right.

I hate using C preprocessor stuff for this. I think the
directory solution is nice because it forces you to
concentrate all the compiler-dependent stuff into a few
modules, which are distinct from the rest of the
implementation.
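As a sketch of that layout (the module and directory names here are
invented for illustration):

```
project/
  Main.hs           -- imports Compat; all compiler-independent code
  Ghc/Compat.hs     -- GHC-specific implementation of module Compat
  Hugs/Compat.hs    -- Hugs-specific implementation of module Compat
```

Building with `ghc -iGhc Main.hs` picks up the GHC version, while
pointing the Hugs module search path at `Hugs/` picks up the other; the
rest of the program never mentions the compiler.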

/Koen.

--
Koen Claessen         http://www.cs.chalmers.se/~koen
phone:+46-31-772 5424      mailto:koen@cs.chalmers.se
-----------------------------------------------------
Chalmers University of Technology, Gothenburg, Sweden



From Tom.Pledger@peace.com Sat Feb 3 04:13:04 2001 Date: Sat, 3 Feb 2001 17:13:04 +1300 From: Tom Pledger Tom.Pledger@peace.com Subject: Fundeps and quantified constructors
nubie nubie writes:
 | So I want to have a polymorphic Collection type, just because.
 | 
 | > class Collection c e | c -> e where
 | >   empty :: c
 | >   put   :: c -> e -> c
 | 
 | > data SomeCollection e = forall c . Collection c e => MakeSomeCollection c
 | 
 | Hugs (February 2000) doesn't like it. It says
 |   Variable "e" in constraint is not locally bound
 | 
 | I feel that e *is* bound, sort of, because c is bound and there's a
 | fundep c->e.

That line of reasoning establishes that e is constrained on the right
hand side of the "=".  However, it's still bound (by an implicit
"forall e") on the left hand side of the "=".  The problem is that e
can leak details about c to parts of the program outside the "forall
c".  It's still a problem if you remove the "| c -> e" fundep.

A more common use of a "Collection c e | c -> e" class is:

    data SomeCollection e = --some data structure involving e

    instance SomeContext e => Collection (SomeCollection e) e where
        --implementations of empty and put, for the aforementioned
        --data structure, and entitled to assume SomeContext

Is that collection type polymorphic enough for your purposes?
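To make that concrete, here is a minimal sketch of the instance pattern
above (the `ListColl` wrapper is invented for illustration, and the
multi-parameter class needs the usual extensions):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class Collection c e | c -> e where
  empty :: c
  put   :: c -> e -> c

-- A concrete collection: a bare list wrapper.
newtype ListColl e = ListColl [e] deriving Show

instance Collection (ListColl e) e where
  empty               = ListColl []
  put (ListColl xs) x = ListColl (x : xs)

main :: IO ()
main = print (put (put (empty :: ListColl Int) 1) 2)
```

Here e is fixed by the instance head rather than hidden behind an
existential quantifier, so nothing tunnels through a local forall.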

 :
 | The following things work as expected:
 | 
 | > data IntCollection = forall c . Collection c Int => MakeIntCollection c
 | > data AnyCollection = forall c e . Collection c e => MakeAnyCollection c

Neither of them has a type variable tunnelling through the local
quantifier.

HTH.
Tom


From Claudius.Heitz@web.de Sat Feb 3 14:07:29 2001 Date: Sat, 3 Feb 2001 15:07:29 +0100 From: Claudius Heitz Claudius.Heitz@web.de Subject: Provider / Haskell-CGI's
Hello!

Does anybody know a provider where I can run Haskell CGIs? I mean, in a
standard webhosting package?

And are there any in Germany?

TIA!

Claudius



From christoph@cm-arts.de Fri Feb 2 14:32:01 2001 Date: Fri, 2 Feb 2001 15:32:01 +0100 From: Christoph M. christoph@cm-arts.de Subject: Knight
Hi !

Yes, I meant the Knight's Tour problem. Could anybody post me a solution
to this problem in Haskell?
Thanks a lot,

Christoph



From ahey@iee.org Mon Feb 5 00:43:51 2001 Date: Mon, 5 Feb 2001 00:43:51 +0000 (GMT) From: Adrian Hey ahey@iee.org Subject: Knight
On Fri 02 Feb, Christoph M. wrote:
> Yes, I meant the Knight's Tour problem. Could anybody post me a solution
> to this problem in Haskell?
> Thanks a lot,

Sorry I've never done this in Haskell. The second computer program I ever
wrote was to solve this (in BASIC). That was many years ago:-)

IIRC, it can be solved by following a very simple rule. For each square of the
chess board keep track of the number of squares which can be reached in a single
move from that square. The next knight move is to whichever square has the
lowest number (unused squares and legal moves only, of course). If you have
2 or more equally good moves, just make a random choice.

After the move you decrement the counts for each square which can be reached
in a single move from the new square. 

This is more of a heuristic than an algorithm, in that I couldn't prove that
it will always work, nor could I prove that not obeying this rule will result
in failure. (That's why I wrote the program.) It does seem to work. But it's
not hard to see why this is a reasonable strategy. (The best next move is the
one which minimises the number of possible future moves which get blocked as a
result.)

As far as a Haskell solution is concerned, the only difficult decision you
have to make is what data structure to use to represent the chess board
squares and counts. An array seems the obvious choice, but maybe somebody
can suggest something else better suited to a functional solution.
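That rule (often known as Warnsdorff's heuristic) is easy to sketch in
Haskell. This version recomputes the counts from the path so far instead
of decrementing an array, and breaks ties by list order rather than
randomly; all names here are my own:

```haskell
import Data.List (sortOn)

type Square = (Int, Int)

-- All legal knight moves from a square on an 8x8 board.
moves :: Square -> [Square]
moves (r, c) =
  [ s | (dr, dc) <- [ (1,2),(2,1),(2,-1),(1,-2)
                    , (-1,-2),(-2,-1),(-2,1),(-1,2) ]
      , let s = (r + dr, c + dc), onBoard s ]
  where onBoard (r', c') = r' >= 0 && r' < 8 && c' >= 0 && c' < 8

-- Warnsdorff's rule: always step to the unvisited square with the
-- fewest onward moves.  A heuristic, not guaranteed to complete a
-- full tour, though it usually does on the 8x8 board.
tour :: Square -> [Square]
tour start = go [start]
  where
    go path@(here : _) =
      case sortOn onwardCount (unvisited here) of
        []         -> reverse path
        (next : _) -> go (next : path)
      where
        unvisited s   = filter (`notElem` path) (moves s)
        onwardCount s = length (unvisited s)

main :: IO ()
main = print (length (tour (0, 0)))
```

An array indexed by square would make the count updates O(1), as the
post suggests; the list-based version trades that for brevity.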

Regards
-- 
Adrian Hey



From timd@macquarie.com.au Mon Feb 5 22:16:02 2001 Date: Tue, 6 Feb 2001 09:16:02 +1100 (EST) From: Timothy Docker timd@macquarie.com.au Subject: Haskell Implementors Meeting
 >         We agreed that it would be a Jolly Good Thing if GHC could
 >         be persuaded to produce GHC-independent Core output,
 >         ready to feed into some other compiler.  For example,
 >         Karl-Filip might be able to use it. 
 >         ANDREW will write a specification, and implement it.

A quick question. What is meant by "Core output"? Subsequent posts
seem to suggest this is some "reduced Haskell" in which full Haskell
98 can be expressed. Am I completely off beam here?

Tim Docker


From malcolm-haskell@cs.york.ac.uk Tue Feb 6 11:00:22 2001 Date: Tue, 6 Feb 2001 11:00:22 +0000 From: malcolm-haskell@cs.york.ac.uk malcolm-haskell@cs.york.ac.uk Subject: binary files in haskell
John Meacham wrote:
> I wrote up a proposal for a binary file IO mechanism to be added
> as a 'blessed addendum' to the standard at best, and as a commonly
> implemented extension (in hslibs) at least.

I have looked at your proposal.  If you would like it to be widely
available, you will need to write an implementation of the library,
or find someone who can write it for you.  Type signatures are great
as documentation, but they are not directly executable.  :-)

Regards,
    Malcolm


From simonmar@microsoft.com Tue Feb 6 12:50:25 2001 Date: Tue, 6 Feb 2001 04:50:25 -0800 From: Simon Marlow simonmar@microsoft.com Subject: binary files in haskell
> > How about this slightly more general interface, which works with the new
> > FFI libraries, and is trivial to implement on top of the primitives in
> > GHC's IOExts:
> > 
> >         hPut :: Storable a => Handle -> a -> IO ()
> >         hGet :: Storable a => Handle -> IO a
> 
> What about endianness? In which format are Floats or even just Bools
> stored? For a file which will probably be read from different machines
> this is not clear at all.

The behaviour is defined by the Storable instances for each type.  The
endianness for writing, say, an Int32 would be the same as the host
architecture, for instance.  If you want to work with just bytes, you
can always use hPut and hGet at type Word8.

Overloading with Storable gives you more flexibility, since if you have
a way to serialise an object in memory for passing to a foreign
function, you also have a way to store it in binary format in a file
(modulo problems with pointers, of course).

In the long term, we'll want to be able to serialise more than just
Storable objects (c.f. the other overloaded binary I/O libraries out
there), and possibly make the output endian-independent.  But after all,
there's no requirement that Haskell's Int has the same size on all
implementations, so there's no guarantee that binary files written on
one machine will be readable on another, unless they only use explicitly
sized types or Integer.

Perhaps these should be called hPutStorable and hGetStorable so as not
to prematurely steal the best names.
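A minimal sketch of those two operations in terms of the marshalling
primitives (written here against the modern Foreign and System.IO APIs,
with hPutBuf/hGetBuf standing in for the IOExts primitives mentioned
above):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Foreign (Storable, alloca, peek, sizeOf, with)
import System.IO

-- Marshal the value into a temporary buffer, then write the raw bytes.
hPutStorable :: Storable a => Handle -> a -> IO ()
hPutStorable h x = with x $ \p -> hPutBuf h p (sizeOf x)

-- Read sizeOf bytes into a temporary buffer and unmarshal.
-- Host byte order, exactly as discussed above.
hGetStorable :: forall a. Storable a => Handle -> IO a
hGetStorable h = alloca $ \p -> do
  _ <- hGetBuf h p (sizeOf (undefined :: a))
  peek p

main :: IO ()
main = do
  withBinaryFile "roundtrip.bin" WriteMode $ \h ->
    hPutStorable h (0x01020304 :: Int)
  x <- withBinaryFile "roundtrip.bin" ReadMode hGetStorable
  print (x == (0x01020304 :: Int))
```

The round trip works on a single machine precisely because both sides
use the same Storable instance; cross-machine portability is the
separate problem discussed above.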

> I think John is right that there needs to be a primitive interface for
> just writing bytes. You can then build anything more complicated on top
> (probably different high-level ones for different purposes).
> 
> I just see one problem with John's proposal: the type Byte. It is
> completely useless if you don't have operations that go with it;
> bit-operations and conversions to and from Int. The FFI already defines
> such a type: Word8. So I suggest that the binary IO library explicitly
> reads and writes Word8's.

yup, that's what I had in mind.

Cheers,
	Simon


From chak@cse.unsw.edu.au Wed Feb 7 03:06:02 2001 Date: Wed, 07 Feb 2001 14:06:02 +1100 From: Manuel M. T. Chakravarty chak@cse.unsw.edu.au Subject: binary files in haskell
Simon Marlow <simonmar@microsoft.com> wrote,

> Olaf wrote,
> > Simon Marlow wrote,
> > > How about this slightly more general interface, which works with the new
> > > FFI libraries, and is trivial to implement on top of the primitives in
> > > GHC's IOExts:
> > > 
> > >         hPut :: Storable a => Handle -> a -> IO ()
> > >         hGet :: Storable a => Handle -> IO a
> > 
> > What about endianness? In which format are Floats or even just Bools
> > stored? For a file which will probably be read from different machines
> > this is not clear at all.

Like in any other language. If you are writing binary
data, you get all the problems of writing binary data. I
agree that on top of that it would be nice to have some
really nice serialisation routines, but that should be a
second step.

> Overloading with Storable gives you more flexibility, since if you have
> a way to serialise an object in memory for passing to a foreign
> function, you also have a way to store it in binary format in a file
> (modulo problems with pointers, of course).

Yep, good idea.

Cheers,
Manuel


From patrikj@cs.chalmers.se Wed Feb 7 07:35:20 2001 Date: Wed, 7 Feb 2001 08:35:20 +0100 (MET) From: Patrik Jansson patrikj@cs.chalmers.se Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
{I'm diverting this discussion to haskell-cafe.}
[I am not sure a more mathematically correct numeric class system is
suitable for inclusion in the language specification of Haskell (a
library would certainly be useful though). But this is not my topic in
this letter.]

On Wed, 7 Feb 2001, Brian Boutel wrote:
> * Haskell equality is a defined operation, not a primitive, and may not
> be decidable. It does not always define equivalence classes, because
> a==a may be Bottom, so what's the problem? It would be a problem,
> though, to have to explain to a beginner why they can't print the result
> of a computation.

The fact that equality can be trivially defined as bottom does not imply
that it should be a superclass of Num, it only explains that there is an
ugly way of working around the problem. Neither is the argument that the
beginner should be able to print the result of a computation a good
argument for having Show as a superclass.

A Num class without Eq, Show as superclasses would only mean that the
implementor is not _forced_ to implement Eq and Show for all Num
instances.  Certainly most instances of Num will still be in both Show and
Eq, so that they can be printed and compared, and one can easily make sure
that all Num instances a beginner encounters are such.

As far as I remember from the earlier discussion, the only really visible
reason for Show, Eq to be superclasses of Num is that class contexts are
simpler when (as is often the case) numeric operations, equality and show
are used in some context.

f :: Num a => a -> String  -- currently
f a = show (a+a==2*a)

If Show, Eq, Num were uncoupled this would be

f :: (Show a, Eq a, Num a) => a -> String

But I think I could live with that. (In fact, I rather like it.)

Another unfortunate result of having Show, Eq as superclasses of Num is
that in those cases where "trivial" instances (of Eq and Show) are defined
just to satisfy the current class system, the users have no way of
supplying their own instances. Due to the Haskell rule that instances are
always exported, if the Num instance is visible, so are the useless Eq
and Show instances.

In the uncoupled case the users have the choice to define Eq and Show
instances that make sense to them. A library designer could provide the Eq
and Show instances in two separate modules to give the users maximum
flexibility.

/Patrik Jansson





From herrmann@infosun.fmi.uni-passau.de Wed Feb 7 09:12:10 2001 Date: Wed, 7 Feb 2001 10:12:10 +0100 (MET) From: Ch. A. Herrmann herrmann@infosun.fmi.uni-passau.de Subject: Revamping the numeric classes
moved to haskell-cafe

    Ketil> E.g. way back, I wrote a simple differential equation solver.
    Ketil> Now, the same function *could* have been applied to vector
    Ketil> functions, except that I'd have to decide on how to implement
    Ketil> all the "Num" stuff that really didn't fit well.  Ideally, a
    Ketil> nice class design would infer, or at least allow me to
    Ketil> specify, the mathematical constraints inherent in an
    Ketil> algorithm, and let my implementation work with any data
    Ketil> satisfying those constraints.

the problem is that the --majority, I suppose?-- of mathematicians
tend to overload operators. They use "*" for matrix-matrix
multiplication as well as for matrix-vector multiplication etc.

Therefore, a quick solution that implements groups, monoids, Abelian
groups, rings, Euclidean rings, fields, etc. will not be sufficient.

I don't think that it is acceptable for a language like Haskell
to permit the user to overload predefined operators, like "*".

A cheap solution could be to define a type MathObject and operators like 
   (:*:) :: MathObject -> MathObject -> MathObject
Then, the user can implement:

a :*: b = case (a,b) of
             (Matrix x, Matrix y) -> foo
             (Matrix x, Vector y) -> bar
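A runnable version of that sketch (the constructors and function bodies
here are my own inventions; note that in Haskell an operator starting
with a colon is reserved for data constructors, so the function form
needs a name like |*|):

```haskell
import Data.List (transpose)

data MathObject = Scalar Double
                | Vector [Double]
                | Matrix [[Double]]
  deriving Show

infixl 7 |*|

-- One operator, dispatching on the runtime representation.
(|*|) :: MathObject -> MathObject -> MathObject
Scalar x |*| Scalar y = Scalar (x * y)
Matrix m |*| Vector v = Vector [sum (zipWith (*) row v) | row <- m]
Matrix m |*| Matrix n = Matrix [ [ sum (zipWith (*) row col)
                                 | col <- transpose n ] | row <- m ]
a        |*| b        = error ("no product for " ++ show a
                               ++ " and " ++ show b)

main :: IO ()
main = print (Matrix [[1,0],[0,2]] |*| Vector [3,4])
```

The price of this "cheap" solution is visible in the last equation: the
type checker no longer rules out ill-matched products, so they fail at
run time instead.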
-- 
 Christoph Herrmann
 E-mail:  herrmann@fmi.uni-passau.de
 WWW:     http://brahms.fmi.uni-passau.de/cl/staff/herrmann.html


From ketil@ii.uib.no Wed Feb 7 10:47:11 2001 Date: 07 Feb 2001 11:47:11 +0100 From: Ketil Malde ketil@ii.uib.no Subject: Revamping the numeric classes
"Ch. A. Herrmann" <herrmann@infosun.fmi.uni-passau.de> writes:

> moved to haskell-cafe

No, but *now* it is. (Does haskell@ strip Reply-To? Bad list!  Bad!)

> the problem is that the --majority, I suppose?-- of mathematicians
> tend to overload operators. They use "*" for matrix-matrix
> multiplication as well as for matrix-vector multiplication etc.

Yes, obviously.  On the other hand, I think you could get far by
defining (+) as an operator in a Group, (*) in a Ring, and so forth.

Another problem is that the mathematical constructs include properties
not easily encoded in Haskell, like commutativity, associativity, etc.

> I don't think that it is acceptable for a language like Haskell
> to permit the user to overload predefined operators, like "*".

Depends on your definition of overloading.  Is there a difference
between overloading and instantiating a class? :-)

> A cheap solution could be to define a type MathObject and operators like 
>    :*: MathObject -> MathObject -> MathObject
> Then, the user can implement:

> a :*: b = case (a,b) of
>              (Matrix x, Matrix y) -> foo
>              (Matrix x, Vector y) -> bar

Yes.  If it is useful to have a fine granularity of classes, you can
imagine doing:

        class Multiplicative a b c where
                (*) :: a -> b -> c

now I can do

        instance Multiplicative (Vector a) (Vector a) (Vector a) where
                x * y = ...

but also scalar multiplication

        instance Multiplicative a  (Vector a) (Vector a) where
                a * x = ....


Also, I think I can define Group a to be

        class Additive a a a => Group a where
                -- inherits plus from "Additive"
                zero :: a

        instance Group Int where
                (+) = built_in_int_addition
                zero = 0::Int

Long qualifier lists might be countered by having Classes -- Num, say
-- that just serve to include other classes in reasonable collections.
Funny mathematical names would - at least to some extent - be avoided
by having simple names for the classes actually defining the
operators, so that errors will warn you about missing "Multiplicative"
rather than Field or Ring or what have you.
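Here is a version of that sketch that actually compiles under the
multi-parameter type class extension; the functional dependency
(| a b -> c) is one way to tame the ambiguity such general signatures
otherwise create, and Vec is an invented example type:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

import Prelude hiding ((*))
import qualified Prelude as P

class Multiplicative a b c | a b -> c where
  (*) :: a -> b -> c

newtype Vec = Vec [Double] deriving Show

-- Scalar multiplication: a Double times a Vec yields a Vec.
instance Multiplicative Double Vec Vec where
  s * Vec xs = Vec (map (s P.*) xs)

-- Ordinary numeric multiplication recovered as an instance.
instance Multiplicative Double Double Double where
  x * y = x P.* y

main :: IO ()
main = print ((2 :: Double) * Vec [1, 2, 3])
```

Without the dependency, an expression like a*b*c leaves the result type
of the inner product undetermined, which is exactly the ambiguity
objection raised later in this thread.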

From experience, I guess there are probably issues that haven't
crossed my mind.   :-)

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants


From karczma@info.unicaen.fr Wed Feb 7 15:47:17 2001 Date: Wed, 07 Feb 2001 15:47:17 +0000 From: Jerzy Karczmarczuk karczma@info.unicaen.fr Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
Patrik Jansson wrote:

> [I am not sure a more mathematically correct numeric class system is
> suitable for inclusion in the language specification of Haskell (a
> library would certainly be useful though)....]

I think it should be done at the language level.

Previously Brian Boutel wrote:

...
> Haskell was intended for use by programmers who may not be
> mathematicians, as a general purpose language. Changes to keep
> mathematicians happy tend to make it less understandable and attractive
> to everyone else.
> 
> Specifically:
> 
> * most usage of (+), (-), (*) is on numbers which support all of them.
> 
> * Haskell equality is a defined operation, not a primitive, and may not
> be decidable. It does not always define equivalence classes, because
> a==a may be Bottom, so what's the problem? It would be a problem,
> though, to have to explain to a beginner why they can't print the result
> of a computation.


====
Some people here might recall that I cried loudly and in despair (OK,
I am exaggerating a bit...) about the inadequacy of the Num hierarchy 
much before Sergey Mechveliani's proposal. Finally I implemented my
own home-brewed hierarchy of Rings, AdditiveGroups, Modules, etc. in
order to play with differential structures and graphical objects.
And arithmetic on functions.

I AM NOT A MATHEMATICIAN, and still, I see very strongly the need for
a sane math layer in Haskell on behalf of 'general purpose' programming.
Trying to explain to comp. sci students (who, at least here, don't like
formal mathematics too much...) WHY the Haskell Num hierarchy is as
it is, is simply hopeless, because some historical accidents were never
very rational. 

* I don't care about "most usage of (+), (-), (*) is on numbers which
  support all of them" if this produces a chaos if you want to use
  Haskell for geometry, or graphics, needing vectors.

From this point of view the slightly simpler (in this context) type
system of Clean seems to be better. And I appreciate also the possibility
to define arithmetic operations on *functions*, which is impossible in
Haskell because of these Eq/Show superclass constraints.

> In the uncoupled case the users have the choice to define Eq and Show
> instances that make sense to them. A library designer could provide the Eq
> and Show instances in two separate modules to give the users maximum
> flexibility.
> 
> /Patrik Jansson


Yes.

I don't want to be too acrimonious nor sarcastic, but those people who
claim that Haskell as a "universal" language should not follow too
closely a decent mathematical discipline, serve the devil. When math
is taught at school at the elementary level, with full knowledge of the
fact that almost nobody will follow the mathematical career afterwards,
the rational, logical side of all constructions is methodologically
essential. 10-year-old pupils learn that you can add two dollars to
7 dollars, but multiplying dollars has not too much sense (a priori),
and adding dollars to watermelons is dubious. Numbers are delicate
abstractions, and treating them in a cavalier manner in a supposedly
"universal" language harms not only mathematicians. As you see, treating
(*) together with (+) is silly not only for vector spaces, but also
for dimensional quantities, useful outside math (if only for debugging).

"Ch. A. Herrmann" wrote:

> the problem is that the --majority, I suppose?-- of mathematicians
> tend to overload operators. They use "*" for matrix-matrix
> multiplication as well as for matrix-vector multiplication etc.
> 
> Therefore, a quick solution that implements groups, monoids, Abelian
> groups, rings, Euclidean rings, fields, etc. will not be sufficient.
> 
> I don't think that it is acceptable for a language like Haskell
> to permit the user to overload predefined operators, like "*".

What do you mean "predefined" operators? Predefined where? Forbid what?
Using the standard notation even to multiply rationals or complexes?

And leave this possibility open to C++ programmers who can overload
anything without respecting mathematical congruity?
Why?

A serious mathematician who sees the signature (*) :: a -> a -> a
won't try to use it for multiplying a matrix by a vector. But using
it as a basic operator within a monoid is perfectly respectable.
No need to "lift" or "promote" scalars into vectors/matrices, etc.

For "scaling" I personally use an operation (*>) defined within
the Module constructor class, but I am unhappy, because

   (*>) :: a -> (t a) -> (t a), declared in a Module instance of
   the constructor t, prevents it from being used in the case where
   (t a) in reality is a. (By default (*>) maps (x*) through the
   elements of (t ...), and kinds "*" are not constructors...)
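A minimal sketch of such a Module constructor class (my reconstruction
of the idea; the Prelude's (*>) is hidden to avoid the clash with
today's Applicative operator of the same name):

```haskell
import Prelude hiding ((*>))

-- A module over a Num scalar type: scaling maps multiplication
-- through the container's elements.
class Module t where
  (*>) :: Num a => a -> t a -> t a

instance Module [] where
  x *> xs = map (x *) xs

main :: IO ()
main = print (3 *> [1, 2, 3 :: Int])
```

Because t must have kind * -> *, there is indeed no instance that lets a
plain scalar play the role of (t a), which is the complaint above.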


Jerzy Karczmarczuk
Caen, France


From herrmann@infosun.fmi.uni-passau.de Wed Feb 7 15:40:04 2001 Date: Wed, 7 Feb 2001 16:40:04 +0100 (MET) From: Ch. A. Herrmann herrmann@infosun.fmi.uni-passau.de Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
Hi Haskellers,

>>>>> "Jerzy" == Jerzy Karczmarczuk <karczma@info.unicaen.fr> writes:
    Jerzy> "Ch. A. Herrmann" wrote:

    >> the problem is that the --majority, I suppose?-- of
    >> mathematicians tend to overload operators. They use "*" for
    >> matrix-matrix multiplication as well as for matrix-vector
    >> multiplication etc.
    >> 
    >> Therefore, a quick solution that implements groups, monoids,
    >> Abelian groups, rings, Euclidean rings, fields, etc. will not be
    >> sufficient.
    >> 
    >> I don't think that it is acceptable for a language like Haskell
    >> to permit the user to overload predefined operators, like "*".

    Jerzy> What do you mean "predefined" operators? Predefined where? 

In hugs, ":t (*)" tells you:
   (*) :: Num a => a -> a -> a
which is an intended property of Haskell, I suppose.

Jerzy> Forbid what?
A definition like (a trivial example, instead of matrix/vector)
   class NewClass a where
     (*) :: a->[a]->a
leads to an error since (*) is already defined at the top level, e.g.
    Repeated definition for member function "*"
in hugs, although I didn't specify that I wanted to use (*) in
the context of the Num class.
However, such things work in local definitions:
   Prelude> let (*) a b = a++(show b) in "Number " * 5      
   "Number 5"
but you certainly don't want to use (*) only locally.

    Jerzy> Using the standard notation even to multiply
    Jerzy> rationals or complexes?
No, that's OK since they belong to the Num class. But as
soon as you want to multiply a rational with a complex
you'll get a type error. Personally, I've nothing against
this strong typing discipline, since it'll catch some
errors.

    Jerzy> And leave this possibility open to C++ programmers who can
    Jerzy> overload anything without respecting mathematical congruity?
    Jerzy> Why?
If mathematics is to be respected, we really have to discuss a lot of
things, e.g., whether it is legal to define comparison for floating point
numbers, but that won't help much. Also, the programming language should
not prescribe that "standard" mathematics is the right mathematics
and the only one the user is allowed to deal with. If the user likes to
multiply two strings, like "ten" * "six" (= "sixty"), and he/she has a
semantics for that, why not?

    Jerzy> A serious mathematician who sees the signature 
    Jerzy> (*) :: a -> a -> a 
    Jerzy> won't try to use it for multiplying a matrix by a
    Jerzy> vector.
A good thing would be to allow the signature 
   (*) :: a -> b -> c
as well as multi-parameter type classes (a, b and c)
and static overloading, as Joe Waldmann suggested. 

    Jerzy> No need to "lift" or "promote"
    Jerzy> scalars into vectors/matrices, etc.
You're right, there is no "need". We can live with
    a :*: b
for matrix multiplication, and with
    a <*> b
for matrix/vector multiplication, etc. It's a matter of style.

If anyone has experience with defining operators in Unicode
and editing them without problems, please tell me. Unicode
will provide enough characters for a distinction, I suppose.

Bye 
-- 
 Christoph Herrmann
 E-mail:  herrmann@fmi.uni-passau.de
 WWW:     http://brahms.fmi.uni-passau.de/cl/staff/herrmann.html
 


From karczma@info.unicaen.fr Wed Feb 7 17:12:24 2001 Date: Wed, 07 Feb 2001 17:12:24 +0000 From: Jerzy Karczmarczuk karczma@info.unicaen.fr Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
"Ch. A. Herrmann" answers my questions:

>     Jerzy> What do you mean "predefined" operators? Predefined where?
> 
> In hugs, ":t (*)" tells you:
>    (*) :: Num a => a -> a -> a
> which is an intended property of Haskell, I suppose.

Aha. But I would never call this a DEFINITION of this operator.
This is just the type, isn't it?
A misunderstanding, I presume.

> Jerzy> Forbid what?
> A definition like (a trivial example, instead of matrix/vector)
>    class NewClass a where
>      (*) :: a->[a]->a
> leads to an error 

OK, OK. Actually my only point was to suggest that the type for (*)
as above should be constrained only by an *appropriate class*, not
by this horrible Num which contains additive operators as well. So
this is not the answer I expected concerning the "overloading of
a predefined operator".


BTW.

In Clean, (*) constitutes a class by itself; that is the simplicity
I appreciate, although I am far from saying that they have an ideal
type system for a working mathemaniac.

> ... Also, the programming language should
> not prescribe that the "standard" mathematics is the right mathematics
> and the only the user is allowed to deal with. If the user likes to
> multiply two strings, like "ten" * "six" (= "sixty"), and he/she has a
> semantics for that, why not?

Aaa, here we might, although need not, disagree. I would like to see some
rational constraints, preventing the user from inventing a completely
insane semantics for this multiplication, mainly to discourage the
writing of programs impossible to understand.



Jerzy Karczmarczuk
Caen, France


From dpt@haskell.org Wed Feb 7 16:37:33 2001 Date: Wed, 07 Feb 2001 11:37:33 -0500 From: Dylan Thurston dpt@haskell.org Subject: (no subject)

From qrczak@knm.org.pl Wed Feb 7 18:35:11 2001 Date: 7 Feb 2001 18:35:11 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Revamping the numeric classes
07 Feb 2001 11:47:11 +0100, Ketil Malde <ketil@ii.uib.no> pisze:

> If it is useful to have a fine granularity of classes, you can
> imagine doing:
> 
>         class Multiplicative a b c where
>                 (*) :: a -> b -> c

Then a*b*c is ambiguous no matter what the types of a, b, c and the
result are. Sorry, this does not work. Too general is too bad; it's
impossible to have everything at once.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From dpt@math.harvard.edu Wed Feb 7 18:57:41 2001 Date: Wed, 7 Feb 2001 13:57:41 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Revamping the numeric classes
Other people have been making great points for me.  (I particularly
liked the example of Dollars as a type with addition but not
multiplication.)  One point that has not been made: given a class
setup like

class Additive a where
  (+) :: a -> a -> a
  (-) :: a -> a -> a
  negate :: a -> a
  zero :: a

class Multiplicative a where
  (*) :: a -> a -> a
  one :: a

class (Additive a, Multiplicative a) => Num a where
  fromInteger :: Integer -> a

then naive users can continue to use (Num a) in contexts, and the same
programs will continue to work.[1]
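Under that hierarchy, the earlier Dollars example becomes concrete
(Dollars is a hypothetical newtype; the Prelude names are hidden so the
class's own operators are in scope):

```haskell
import Prelude hiding ((+), (-), negate)
import qualified Prelude as P

class Additive a where
  (+), (-) :: a -> a -> a
  negate   :: a -> a
  zero     :: a

-- An additive-only type: sums of money.  Deliberately no
-- Multiplicative instance, so dollars cannot be multiplied.
newtype Dollars = Dollars Int deriving (Eq, Show)

instance Additive Dollars where
  Dollars a + Dollars b = Dollars (a P.+ b)
  Dollars a - Dollars b = Dollars (a P.- b)
  negate (Dollars a)    = Dollars (P.negate a)
  zero                  = Dollars 0

main :: IO ()
main = print (Dollars 2 + Dollars 7)
```

An attempt to write `Dollars 2 * Dollars 7` simply fails to type check,
which is exactly the point of splitting the classes.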

(A question in the above context is whether the literal '0' should be
interpreted as 'fromInteger (0::Integer)' or as 'zero'.  Opinions?)

On Wed, Feb 07, 2001 at 06:27:02PM +1300, Brian Boutel wrote:
> * Haskell equality is a defined operation, not a primitive, and may not
> be decidable. It does not always define equivalence classes, because
> a==a may be Bottom, so what's the problem? It would be a problem,
> though, to have to explain to a beginner why they can't print the result
> of a computation.

Why doesn't your argument show that all types should be instances of
Eq and Show?  Why are numeric types special?

Best,
	Dylan Thurston

Footnotes: 
[1]  Except for the lack of abs and signum, which should be in some
other class.  I have to think about their semantics before I can say
where they belong.




From andrew@andrewcooke.free-online.co.uk Wed Feb 7 22:08:26 2001 Date: Wed, 7 Feb 2001 22:08:26 +0000 From: andrew@andrewcooke.free-online.co.uk andrew@andrewcooke.free-online.co.uk Subject: Revamping the numeric classes
On Wed, Feb 07, 2001 at 11:47:11AM +0100, Ketil Malde wrote:
> "Ch. A. Herrmann" <herrmann@infosun.fmi.uni-passau.de> writes:
[...]
> > the problem is that the --majority, I suppose?-- of mathematicians
> > tend to overload operators. They use "*" for matrix-matrix
> > multiplication as well as for matrix-vector multiplication etc.
> Yes, obviously.  On the other hand, I think you could get far by
> defining (+) as an operator in a Group, (*) in a Ring, and so forth.

As a complete newbie can I add a few points?  They may be misguided,
but they may also help identify what appears obvious only through
use...

- understanding the hierarchy of classes (i.e. constantly referring to
Fig 5 in the report) takes a fair amount of effort.  It would have
been much clearer for me to have classes that simply listed the
required super classes (as suggested in an earlier post).

- even for me, no great mathematician, I found the forced inclusion of
certain classes irritating (in my case - effectively implementing
arithmetic on tuples - Enum made little sense and ordering is hacked
in order to be total; why do I need to define either to overload "+"?)

- what's the deal with fmap and map?

> Another problem is that the mathematical constructs include properties
> not easily encoded in Haskell, like commutativity, associativity, etc.
> 
> > I don't think that it is acceptable for a language like Haskell
> > to permit the user to overload predefined operators, like "*".

Do you mean that the numeric classes should be dropped or are you
talking about some other overloading procedure?

Isn't one popular use of Haskell to define/extend it to support small
domain-specific languages?  In those cases, overloading operators via
the class mechanism is very useful - you can give the user concise,
but still understandable, syntax for the problem domain.

I can see that overloading operators is not good in general purpose
libraries, unless carefully controlled, but that doesn't mean it is
always bad, or should always be strictly controlled.  Maybe the
programmer could decide what is appropriate, faced with a particular
problem, rather than a language designer, from more general
considerations?  Balance, as ever, is the key :-)

[...]
> From experience, I guess there are probably issues that haven't
> crossed my mind.   :-)

This is certainly true in my case - I presumed there was some deep
reason for the complex hierarchy that exists at the moment.  It was a
surprise to see it questioned here.

Sorry if I've used the wrong terminology anywhere.  Hope the above
makes some sense.

Andrew

-- 
http://www.andrewcooke.free-online.co.uk/index.html


----- End forwarded message -----

-- 
http://www.andrewcooke.free-online.co.uk/index.html


From peterd@availant.com Wed Feb 7 21:17:38 2001 Date: Wed, 7 Feb 2001 16:17:38 -0500 From: Peter Douglass peterd@availant.com Subject: Revamping the numeric classes
 I have some questions about how Haskell's numeric classes might be
revamped.

 Is it possible in Haskell to circumscribe the availability of certain
"unsafe" numeric operations such as div, /, mod?  If this is not possible
already, could a compiler flag "-noUnsafeDivide" perhaps be added to
make such a restriction?

 What I have in mind is to remove division by zero as an untypable
expression.  The idea is to require div, /, mod to take NonZeroNumeric
values in their second argument.  NonZeroNumeric values could be created by
functions of type: 
  Number a => a -> Maybe NonZeroNumeric
or something similar.
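A sketch of how such a wrapper might look (the names NonZero, mkNonZero and safeDiv are mine, not part of the proposal):

```haskell
-- The only way to obtain a NonZero value is through mkNonZero, which
-- forces the zero test to happen before any division.
newtype NonZero a = NonZero a

mkNonZero :: (Eq a, Num a) => a -> Maybe (NonZero a)
mkNonZero 0 = Nothing
mkNonZero x = Just (NonZero x)

safeDiv :: Integral a => a -> NonZero a -> a
safeDiv n (NonZero d) = n `div` d

main :: IO ()
main = do
  print (fmap (safeDiv 10) (mkNonZero 2))  -- a successful division
  print (fmap (safeDiv 10) (mkNonZero 0))  -- the zero case, caught early
```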

  Has this been tried and failed?  I'm curious as to what problems there
might be with such an approach.

--PeterD  


From dpt@math.harvard.edu Wed Feb 7 21:54:50 2001 Date: Wed, 7 Feb 2001 16:54:50 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Revamping the numeric classes
On Wed, Feb 07, 2001 at 10:08:26PM +0000, andrew@andrewcooke.free-online.co.uk wrote:
> - even for me, no great mathematician, I found the forced inclusion of
> certain classes irritating (in my case - effectively implementing
> arithmetic on tuples - Enum made little sense and ordering is hacked
> in order to be total; why do I need to define either to overload "+"?)

Presumably you mean "quot" and "rem", since Enum is a superclass of
Integral, not Num.  toInteger must have been even worse, right?

> - what's the deal with fmap and map?

I think this one is historical:  map already existed before Haskell
was powerful enough to type fmap, and the decision was not to affect
existing programs too much.  Presumably Haskell 2 will have them
merged.
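The relationship is easy to see in a couple of lines: on lists the two agree, but fmap also works at other Functor instances:

```haskell
main :: IO ()
main = do
  print (map  (+1) [1, 2, 3])   -- map is list-only
  print (fmap (+1) [1, 2, 3])   -- on lists, fmap coincides with map
  print (fmap (+1) (Just 2))    -- but fmap also works on Maybe, etc.
```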

Best,
	Dylan Thurston


From dpt@math.harvard.edu Thu Feb 8 00:06:54 2001 Date: Wed, 7 Feb 2001 19:06:54 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Revamping the numeric classes
On Wed, Feb 07, 2001 at 01:57:41PM -0500, Dylan Thurston wrote:
> ... One point that has not been made: given a class
> setup like
>  <deleted>
> then naive users can continue to use (Num a) in contexts, and the same
> programs will continue to work.

I take that back.  Instance declarations would change, so this isn't
a very conservative change.  (Users would have to make instance
declarations for Additive, Multiplicative, and Num where before they
just made a declaration for Num.  Of course, they don't have to write
any more code.)

Best,
	Dylan Thurston


From brian@boutel.co.nz Thu Feb 8 05:37:04 2001 Date: Thu, 08 Feb 2001 18:37:04 +1300 From: Brian Boutel brian@boutel.co.nz Subject: Revamping the numeric classes
Dylan Thurston wrote:
> 
> 
> Why doesn't your argument show that all types should be instances of
> Eq and Show?  Why are numeric types special?
> 

Why do you think it does? I certainly don't think so.

The point about Eq was that an objection was raised to Num being a
subclass of Eq because, for some numeric types, equality is undecidable.
I suggested that Haskell equality could be undecidable, so (==) on those
types could reflect the real situation. One would expect that it could
do so in a natural way, producing a value of True or False when
possible, and diverging otherwise. Thus no convincing argument has been
given for removing Eq as a superclass of Num.


In general, if you fine-grain the class hierarchy too much, the picture
gets very complicated. If you need to define separate subclasses of Num
for those types which have both Eq and Show, those that only have Eq,
those that only have Show and those that have neither, not to mention
those that have Ord as well as Eq and those that don't, and then for all
the other distinctions that will be suggested, my guess is that Haskell
will become the preserve of a few mathematicians and everyone else will
give up in disgust. Then the likely result is that no-one will be
interested in maintaining and developing Haskell and it will die.



--brian


From qrczak@knm.org.pl Thu Feb 8 04:53:35 2001 Date: 8 Feb 2001 04:53:35 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Revamping the numeric classes
Wed, 7 Feb 2001 16:17:38 -0500, Peter Douglass <peterd@availant.com> writes:

>  What I have in mind is to remove division by zero as an untypable
> expression.  The idea is to require div, /, mod to take NonZeroNumeric
> values in their second argument.  NonZeroNumeric values could be created by
> functions of type: 
>   Number a => a -> Maybe NonZeroNumeric
> or something similar.

IMHO it would be impractical.

Often I know that the value is non-zero, but it is not
statically determined, so it would just require uglification by
doing that conversion and then coercing Maybe NonZeroNumeric to
NonZeroNumeric. It's bottom anyway when the value is 0, but bottom
would come from Maybe coercion instead of from quot, so it only gives
a worse error message.

It's so easy to define partial functions that making it explicit
outside quot would not buy much.

Haskell does not have subtypes so a coercion from NonZeroNumeric to
plain Numbers would have to be explicit as well, even if logically
it's just an injection. Everybody assumes that quot has a symmetric
type as in all other languages, but in your proposal quot's arguments
come from completely disjoint worlds.

Moreover, 1/0 is defined on IEEE Doubles (e.g. in ghc): infinity.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From ketil@ii.uib.no Thu Feb 8 07:48:54 2001 Date: 08 Feb 2001 08:48:54 +0100 From: Ketil Malde ketil@ii.uib.no Subject: Revamping the numeric classes
Dylan Thurston <dpt@math.harvard.edu> writes:

> On Wed, Feb 07, 2001 at 01:57:41PM -0500, Dylan Thurston wrote:
> > ... One point that has not been made: given a class
> > setup like
> >  <deleted>
> > then naive users can continue to use (Num a) in contexts, and the same
> > programs will continue to work.

> I take that back.  Instance declarations would change, so this isn't
> a very conservative change.

Would it be a terribly grave change to the language to allow leaf
class instance declarations to include the necessary definitions for
dependent classes?  E.g.

        class foo a where
                f :: ...

        class (foo a) => bar a where
                b :: ...

        instance bar T where
                f = ...
                b = ...
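For comparison, here is what current Haskell requires (class names capitalized so the sketch compiles): each class in the hierarchy needs its own instance declaration, which is exactly the duplication the proposal would remove.

```haskell
class Foo a where
  f :: a -> Int

class Foo a => Bar a where
  b :: a -> Int

data T = T

-- Today the methods cannot be merged into one declaration:
instance Foo T where
  f _ = 1

instance Bar T where
  b _ = 42

main :: IO ()
main = print (f T + b T)
```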

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants


From k19990158@192.168.1.4 Thu Feb 8 13:49:11 2001 Date: Thu, 08 Feb 2001 13:49:11 From: FAIZAN RAZA k19990158@192.168.1.4 Subject: Please help me
Hello


Please help me to solve this questions


Question

Cartesian Product of three sets, written as X x Y x Z is defined as the set
of all ordered triples such that the first element is a member of X, the
second is a member of Y, and the third a member of set Z. Write a Haskell
function cartesianProduct which when given three lists  (to represent three
sets) of integers returns a list of lists of ordered triples.

For example,  cartesianProduct [1,3][2,4][5,6] returns
[[1,2,5],[1,2,6],[1,4,5],[1,4,6],[3,2,5],[3,2,6],[3,4,5],[3,4,6]]



Please send me reply as soon as possible

Ok

I wish you all the best of luck




From Tom.Pledger@peace.com Thu Feb 8 08:00:58 2001 Date: Thu, 8 Feb 2001 21:00:58 +1300 (NZDT) From: Tom Pledger Tom.Pledger@peace.com Subject: Revamping the numeric classes
Dylan Thurston writes:
 :
 | (A question in the above context is whether the literal '0' should
 | be interpreted as 'fromInteger (0::Integer)' or as 'zero'.
 | Opinions?)

Opinions?  Be careful what you wish for.  ;-)

In a similar discussion last year, I was making wistful noises about
subtyping, and one of Marcin's questions

    http://www.mail-archive.com/haskell-cafe@haskell.org/msg00125.html

was whether the numeric literal 10 should have type Int8 (2's
complement octet) or Word8 (unsigned octet).  At the time I couldn't
give a wholly satisfactory answer.  Since then I've read the oft-cited
paper "On Understanding Types, Data Abstraction, and Polymorphism"
(Cardelli & Wegner, ACM Computing Surveys, Dec 1985), which suggests a
nice answer: give the numeric literal 10 the range type 10..10, which
is defined implicitly and is a subtype of both -128..127 (Int8) and
0..255 (Word8).

The differences in arithmetic on certain important range types could
be represented by multiple primitive functions (or perhaps foreign
functions, through the FFI):

    primAdd   :: Integer -> Integer -> Integer    -- arbitrary precision
    primAdd8s :: Int8    -> Int8    -> Int8       -- overflow at -129, 128
    primAdd8u :: Word8   -> Word8   -> Word8      -- overflow at -1, 256
    -- etc.

    instance Additive Integer where
        zero = 0
        (+)  = primAdd

...with similar instances for the integer subrange types which may
overflow.  These other instances would belong outside the standard
Prelude, so that the ambiguity questions don't trouble people (such as
beginners) who don't care about the space and time advantages of fixed
precision integers.

Subtyping offers an alternative approach to handling arithmetic
overflows:
  - Use only arbitrary precision arithmetic.
  - When a calculated result *really* needs to be packed into a fixed
    precision format, project it (or treat it down, etc., whatever's
    your preferred name), so that overflows are represented as
    Nothing.
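The projection step might look like this (a sketch; toInt8 is an illustrative name of mine, and the bounds are those of Int8):

```haskell
import Data.Int (Int8)

-- Compute in arbitrary precision, then project into the fixed-precision
-- type only at the end; an out-of-range result becomes Nothing.
toInt8 :: Integer -> Maybe Int8
toInt8 n
  | n < -128 || n > 127 = Nothing
  | otherwise           = Just (fromInteger n)

main :: IO ()
main = do
  print (toInt8 (100 + 27))   -- in range
  print (toInt8 (100 + 28))   -- overflow, represented as Nothing
```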

For references to other uses of  class Subtype  see:

    http://www.mail-archive.com/haskell@haskell.org/msg07303.html

For a reference to some unification-driven rewrites, see:

    http://www.mail-archive.com/haskell@haskell.org/msg07327.html

Marcin 'Qrczak' Kowalczyk writes:
 :
 | Assuming that Ints can be implicitly converted to Doubles, is the
 | function
 |     f :: Int -> Int -> Double -> Double
 |     f x y z = x + y + z
 | ambiguous? Because there are two interpretations:
 |     f x y z = realToFrac x + realToFrac y + z
 |     f x y z = realToFrac (x + y) + z
 | 
 | Making this and similar case ambiguous means inserting lots of explicit
 | type signatures to disambiguate subexpressions.
 | 
 | Again, arbitrarily choosing one of the alternatives basing on some
 | set of weighting rules is dangerous,

I don't think the following disambiguation is too arbitrary:

        x + y + z                 -- as above

    --> (x + y) + z               -- left-associativity of (+)

    --> realToFrac (x + y) + z    -- injection (or treating up) done
                                  -- conservatively, i.e. only where needed

Regards,
Tom


From ashley@semantic.org Thu Feb 8 10:04:45 2001 Date: Thu, 8 Feb 2001 02:04:45 -0800 From: Ashley Yakeley ashley@semantic.org Subject: Please help me
At 2001-02-08 13:49, FAIZAN RAZA wrote:

>write a Haskell
>function cartesianProduct which when given three lists  (to represent three
>sets) of integers returns a list of lists of ordered triples.

That's easy. Just define 'product' as a function that finds the cartesian 
product of any number of lists, and then once you've done that you can 
apply it to make the special case of three items like this:

cartesianProduct a b c = product [a,b,c]

At least, that's how I would do it.
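A sketch of that approach, with the generic function named cartProd here so it does not clash with the Prelude's numeric product:

```haskell
-- Cartesian product of any number of lists, via a list comprehension.
cartProd :: [[a]] -> [[a]]
cartProd []       = [[]]
cartProd (xs:xss) = [ x : rest | x <- xs, rest <- cartProd xss ]

-- The three-list special case asked about:
cartesianProduct :: [a] -> [a] -> [a] -> [[a]]
cartesianProduct a b c = cartProd [a, b, c]

main :: IO ()
main = print (cartesianProduct [1,3] [2,4] [5,6])
-- [[1,2,5],[1,2,6],[1,4,5],[1,4,6],[3,2,5],[3,2,6],[3,4,5],[3,4,6]]
```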

-- 
Ashley Yakeley, Seattle WA



From mk167280@students.mimuw.edu.pl Thu Feb 8 10:09:35 2001 Date: Thu, 8 Feb 2001 11:09:35 +0100 (CET) From: Marcin 'Qrczak' Kowalczyk mk167280@students.mimuw.edu.pl Subject: Revamping the numeric classes
On Thu, 8 Feb 2001, Tom Pledger wrote:

> nice answer: give the numeric literal 10 the range type 10..10, which
> is defined implicitly and is a subtype of both -128..127 (Int8) and
> 0..255 (Word8).

What are the inferred types for
    f = map (\x -> x+10)
    g l = l ++ f l
? I hope I can use them as [Int] -> [Int].

>         x + y + z                 -- as above
> 
>     --> (x + y) + z               -- left-associativity of (+)
> 
>     --> realToFrac (x + y) + z    -- injection (or treating up) done
>                                   -- conservatively, i.e. only where needed

What does "where needed" mean? Type inference does not proceed
inside-out. What about this?
    h f = f (1::Int) == (2::Int)
Can I apply h to a function of type Int->Double? If no, then it's a
pity, because I could inline it (the comparison would be done on Doubles).
If yes, then what is the inferred type for h? Note that Int->Double is not
a subtype of Int->Int, so if h :: (Int->Int)->Bool, then I can't imagine
how h can be applied to something :: Int->Double.

-- 
Marcin 'Qrczak' Kowalczyk



From ashley@semantic.org Thu Feb 8 10:11:30 2001 Date: Thu, 8 Feb 2001 02:11:30 -0800 From: Ashley Yakeley ashley@semantic.org Subject: Please help me
At 2001-02-08 02:04, Ashley Yakeley wrote:

>That's easy. Just define 'product' as a function that finds the cartesian 
>product of any number of lists, and then once you've done that you can 
>apply it to make the special case of three items like this:
>
>cartesianProduct a b c = product [a,b,c]
>
>At least, that's how I would do it.

eesh, 'product' is something else in the Prelude. Better call it 
'cartprod' or something.

-- 
Ashley Yakeley, Seattle WA



From karczma@info.unicaen.fr Thu Feb 8 11:24:49 2001 Date: Thu, 08 Feb 2001 11:24:49 +0000 From: Jerzy Karczmarczuk karczma@info.unicaen.fr Subject: Revamping the numeric classes
First, a general remark which has nothing to do with Num.

PLEASE WATCH YOUR DESTINATION ADDRESSES
People send regularly their postings to haskell-cafe with
several private receiver addresses, which is a bit annoying
when you click "reply all"...


Brian Boutel after Dylan Thurston:

> > Why doesn't your argument show that all types should be instances of
> > Eq and Show?  Why are numeric types special?
> 
> Why do you think it does? I certainly don't think so.
> 
> The point about Eq was that an objection was raised to Num being a
> subclass of Eq because, for some numeric types, equality is undecidable.
> I suggested that Haskell equality could be undecidable, so (==) on those
> types could reflect the real situation. One would expect that it could
> do so in a natural way, producing a value of True or False when
> possible, and diverging otherwise. Thus no convincing argument has been
> given for removing Eq as a superclass of Num.
> 
> In general, if you fine-grain the class hierarchy too much, the picture
> gets very complicated. If you need to define separate subclasses of Num
> for those types which have both Eq and Show, those that only have Eq,
> those that only have Show and those that have neither, not to mention
> those that have Ord as well as Eq and those that don't, and then for all
> the other distinctions that will be suggested, my guess is that Haskell
> will become the preserve of a few mathematicians and everyone else will
> give up in disgust. Then the likely result is that no-one will be
> interested in maintaining and developing Haskell and it will die.

Strange, but from the objectives mentioned in the last part of this 
posting (even if a little demagogic [insert smiley here if you wish])
I draw opposite conclusions.

The fact that the number of cases is quite large suggests that Eq, Show
and arithmetic should be treated as *orthogonal* issues, and treated
independently. 

If somebody needs Show for his favourite data type, he is free to
arrange
this himself. I repeat what I have already said: I work with functional
objects as mathematical entities. I want to add parametric surfaces, to
rotate trajectories. Also, to handle gracefully and legibly for those
simpletons who call themselves 'theoretical physicists', the arithmetic
of un-truncated lazy streams representing power series, or infinitely
dimensional differential algebra elements. Perhaps those are not 
convincing arguments for Brian Boutel. They are certainly so for me.

Num, with this forced marriage of (+) and (*) violates the principle
of orthogonality. Eq and Show constraints make it worse.

===

And, last, but very high on my check-list:

The implicit coercion of numeric constants: 3.14 -=->> (fromDouble 3.14)
etc. is sick. (Or was; I still haven't installed the latest version of GHC,
and with Hugs it is bad.) The decision is taken by the compiler internally,
and it doesn't care at all about the fact that in my prelude
I have eliminated the Num class and redefined fromDouble, fromInt, etc.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Dylan Thurston terminates his previous posting about Num with:

> Footnotes:
> [1]  Except for the lack of abs and signum, which should be in some
> other class.  I have to think about their semantics before I can say
> where they belong.

Now, signum and abs seem to be quite distinct beasts. Signum seems to
require Ord (and a generic zero...).

Abs from the mathematical point of view constitutes a *norm*. Now,
frankly, I haven't the slightest idea how to cast this concept into
Haskell class hierarchy in a sufficiently general way...

I'll tell you anyway that if you try to "sanitize" the numeric
classes, if you separate additive structures and the multiplication,
if you finally define abstract Vectors over some field of scalars,
and if you demand the existence of a generic normalization for your
vectors, then *most probably* you will need multiparametric classes
with dependencies. 


Jerzy Karczmarczuk
Caen, France


From CAngus@Armature.com Thu Feb 8 10:29:29 2001 Date: Thu, 8 Feb 2001 10:29:29 -0000 From: Chris Angus CAngus@Armature.com Subject: Please help me
Faizan,

A clue is to use list comprehensions (which are very like ZF set notation)

First think how you would define a cartesian product in set notation

X x Y x Z = {(x,y,z) | ...}

and then think how this is written in list comprehension notation

Chris

> -----Original Message-----
> From: FAIZAN RAZA [mailto:k19990158@192.168.1.4]
> Sent: 08 February 2001 13:49
> To: haskell-cafe@haskell.org
> Subject: Please help me
> 
> 
> Hello
> 
> 
> Please help me to solve this questions
> 
> 
> Question
> 
> Cartesian Product of three sets, written as X x Y x Z is 
> defined as the set
> of all ordered triples such that the first element is a 
> member of X, the
> second is a member of Y, and the third a member of set Z. Write a Haskell
> function cartesianProduct which when given three lists  (to 
> represent three
> sets) of integers returns a list of lists of ordered triples.
> 
> For example,  cartesianProduct [1,3][2,4][5,6] returns
> [[1,2,5],[1,2,6],[1,4,5],[1,4,6],[3,2,5],[3,2,6],[3,4,5],[3,4,6]]
> 
> 
> 
> Please send me reply as soon as possible
> 
> Ok
> 
> I wish you all the best of luck
> 
> 
> 
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
> 


From fjh@cs.mu.oz.au Thu Feb 8 10:41:56 2001 Date: Thu, 8 Feb 2001 21:41:56 +1100 From: Fergus Henderson fjh@cs.mu.oz.au Subject: Revamping the numeric classes
On 08-Feb-2001, Ketil Malde <ketil@ii.uib.no> wrote:
> Would it be a terribly grave change to the language to allow leaf
> class instance declarations to include the necessary definitions for
> dependent classes?  E.g.
> 
>         class foo a where
>                 f :: ...
> 
>         class (foo a) => bar a where
>                 b :: ...
> 
>         instance bar T where
>                 f = ...
>                 b = ...

I think that proposal is a good idea.

It means that the user of a class which inherits from some complicated
class hierarchy doesn't need to know (or to write code which depends on)
any of the details of that class hierarchy.  Instead, they can just
give instance declarations for the classes that they want to use,
and provide definitions for all of the relevant members.

It means that the developer of a class can split that class into two
or more sub-classes without breaking (source level) backwards compatibility.


One point that needs to be  resolved is the interaction with default methods.

Consider

        class foo a where
                f :: ...
		f = ...
                f2 :: ...
		f2 = ...

        class (foo a) => bar a where
                b :: ...
 
        instance bar T where
		-- no definitions for f or f2
		b = 42

Should this define an instance for `foo T'?
(I think not.)

How about if the instance declaration is changed to

        instance bar T where
		f = 41
		-- no definition for f2
		b = 42

?
(In that case, I think it should.)

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.


From elke.kasimir@catmint.de Thu Feb 8 14:11:21 2001 Date: Thu, 08 Feb 2001 15:11:21 +0100 (CET) From: Elke Kasimir elke.kasimir@catmint.de Subject: Show, Eq not necessary for Num [Was: Revamping the numeric c
On 07-Feb-2001 Patrik Jansson wrote:

(interesting stuff deleted)

> As far as I remember from the earlier discussion, the only really visible
> reason for Show, Eq to be superclasses of Num is that class contexts are
> simpler when (as is often the case) numeric operations, equality and show
> are used in some context.
> 
> f :: Num a => a -> String  -- currently
> f a = show (a+a==2*a)
> 
> If Show, Eq, Num were uncoupled this would be
> 
> f :: (Show a, Eq a, Num a) => a -> String
> 
> But I think I could live with that. (In fact, I rather like it.)

Basically, I am too.

However, what is missing for me is something like:

type Comfortable a = (Show a, Eq a, Num a) => a

or

class (Show a, Read a, Eq a) => Comfortable a
instance (Show a, Read a, Eq a) => Comfortable a 

I think here is a point where a general flaw of class hierarchies as a means
of software design becomes obvious: the programmer is forced
to arbitrarily prefer a few generalizations over all others in a global,
context-independent design decision.

The oo community (being the source of all the evil...) usually relies on the 
rather problematic ontological assumption that, at least from a certain point
of view (problem domain, design, implementation), the relevant concepts form in
a natural way a kind of generalization hierarchy, and that this generalization
provides a natural way to design the software (in our case, determine the
type system in some a priori fashion).

Considering the fact that, for a concept for which (given a certain point of
view) n elementary predicates hold a priori, n! possible generalizations exist
a priori, this assumption can be questioned.

Contrary to the given assumption, my experience is that, when
trying to classify concepts, even a slight shift in the situation
under consideration can lead to a severe change in what appears to be the
"natural" classification.

Besides this, as is apparent in Show a => Num a, it is not always a priori
generalizations that are really needed. Instead, things must be fitted
into the current point of view with a bit of force, thus changing
concepts or even inventing new ones.

(For example, the oo community, which likes (or is forced?) to "ontologize"
relationships into "objects", has invented "factories" for different things,
ranging from GUI border frames to database connection handles.
Behind such an, at first glance, totally arbitrary conceptualization might stand
a more rational concept, for example applying a certain library design principle
called "factory" to different types of things. However, one can't always wait
until the rationale behind a certain solution is clearly recognized.)
 
In my experience, both class membership and generalization relationships are 
often needed locally and post hoc, and they sometimes even express empirical
(a posteriori) relations between concepts instead of true analytical (a priori)
generalization relationships.

As a consequence, in my opinion, programming languages should make it
possible and easy to employ post-hoc and local class membership declarations and
post-hoc and local class hierarchy declarations (or even re-organizations).

There will of course be situations where a global a priori declaration of
generalization nevertheless still makes complete sense.

For Haskell, I could imagine (without having thought much about it), in
addition to the things mentioned in the beginning, several things that would
support the "locally, fast and easy" approach, including a means to define
classes with implied membership, for example declarations saying that "Foo is
the class of all types in scope for which somefoo :: ... is defined", or
declarations saying that "class Num is locally restricted to all instances of
global Num which also belong to Eq".

Elke.

---
Elke Kasimir
Skalitzer Str. 79
10997 Berlin (Germany)
fon:  +49 (030) 612 852 16
mail: elke.kasimir@catmint.de
see: <http://www.catmint.de/elke>

for pgp public key see:
<http://www.catmint.de/elke/pgp_signature.html>


From peterd@availant.com Thu Feb 8 15:51:58 2001 Date: Thu, 8 Feb 2001 10:51:58 -0500 From: Peter Douglass peterd@availant.com Subject: Revamping the numeric classes
Marcin Kowalczyk wrote:
> Wed, 7 Feb 2001 16:17:38 -0500, Peter Douglass 
> <peterd@availant.com> writes:
> 
> >  What I have in mind is to remove division by zero as an untypable
> > expression.  The idea is to require div, /, mod to take 
> NonZeroNumeric
> > values in their second argument.  NonZeroNumeric values 
> could be created by
> > functions of type: 
> >   Number a => a -> Maybe NonZeroNumeric
> > or something similar.
> 
> IMHO it would be impractical.
> 

The first part of my question (not contained in your reply) is whether it is
feasible to disable a developer's access to the "unsafe" numerical
operations.  Whether or not an individual developer chooses to do so is
another matter.  

> Often I know that the value is non-zero, but it is not
> statically determined,

If you "know" the value is non-zero before run-time, then that is statically
determined.  Otherwise, you don't "know" that.

> so it would just require uglification by
> doing that conversion and then coercing Maybe NonZeroNumeric to
> NonZeroNumeric.

  Ugliness is in the eye of the beholder I suppose.  For some applications,
every division should be preceded by an explicit test for zero, or the
denominator must be "known" to be non-zero by the way in which it was
created.  Forcing a developer to extract a NonZeroNumeric value from a Maybe
NonZeroNumeric value seems equivalent to me.

> It's bottom anyway when the value is 0, but bottom
> would come from Maybe coercion instead of from quot, so it only gives
> a worse error message.
> 

 It is possible that the developer writes a function which returns a
nonZeroNumeric value which actually has a value of zero.  However, the value
of requiring division to have a nonZeroNumeric denominator is to catch at
compile time the "error" of failing to scrutinize (correctly or incorrectly)
for zero. 
 
  For most commercial software, the quality of run-time error messages is
far less important than their absence.    

> It's so easy to define partial functions that making it explicit
> outside quot would not buy much.
> 
> Haskell does not have subtypes so a coercion from NonZeroNumeric to
> plain Numbers would have to be explicit as well, even if logically
> it's just an injection. 

If one is aiming to write code which cannot fail at run-time, then extra
work must be done anyway.  The only question is whether the language will
support such a discipline.

> Everybody assumes that quot has a symmetric
> type as in all other languages, but in your proposal quot's arguments
> come from completely disjoint worlds.

If it is optional but not required that a developer may disable unsafe
division, then developers who expect arithmetic to work in the usual way
will not be disappointed.
 
> Moreover, 1/0 is defined on IEEE Doubles (e.g. in ghc): infinity.

This solution doesn't always help with code safety.

Thanks for the response.
--PeterD


From dpt@math.harvard.edu Thu Feb 8 17:43:08 2001 Date: Thu, 8 Feb 2001 12:43:08 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Revamping the numeric classes
On Thu, Feb 08, 2001 at 11:24:49AM +0000, Jerzy Karczmarczuk wrote:
> First, a general remark which has nothing to do with Num.
> 
> PLEASE WATCH YOUR DESTINATION ADDRESSES
> People send regularly their postings to haskell-cafe with
> several private receiver addresses, which is a bit annoying
> when you click "reply all"...

Yes, apologies.  The way the lists do the headers makes it very easy to
reply to individuals, and hard to reply to the list.

> And, last, but very high on my check-list:
> 
> The implicit coercion of numeric constants: 3.14 -=->> (fromDouble 3.14)
> etc. is sick. (Or was; I still haven't installed the latest version of GHC,
> and with Hugs it is bad.) The decision is taken by the compiler internally,
> and it doesn't care at all about the fact that in my prelude
> I have eliminated the Num class and redefined fromDouble, fromInt, etc.

Can't you just put "default ()" at the top of each module?

I suppose you still have the problem that a numeric literal "5" means
"Prelude.fromInteger 5".  Can't you define your types to be instances
of Prelude.Num, with no operations defined except Prelude.fromInteger?
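That suggestion can be sketched as follows (MyNum is a placeholder type of mine; every method except fromInteger is stubbed out with an error):

```haskell
newtype MyNum = MyNum Integer deriving (Eq, Show)

-- Only fromInteger does real work, so numeric literals are usable
-- while the arithmetic operations fail loudly if ever called.
instance Num MyNum where
  fromInteger = MyNum
  (+)    = error "MyNum: (+) not supported"
  (*)    = error "MyNum: (*) not supported"
  negate = error "MyNum: negate not supported"
  abs    = error "MyNum: abs not supported"
  signum = error "MyNum: signum not supported"

main :: IO ()
main = print (5 :: MyNum)   -- the literal means fromInteger 5
```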

> Dylan Thurston terminates his previous posting about Num with:
> 
> > Footnotes:
> > [1]  Except for the lack of abs and signum, which should be in some
> > other class.  I have to think about their semantics before I can say
> > where they belong.
> 
> Now, signum and abs seem to be quite distincts beasts. Signum seem to
> require Ord (and a generic zero...).
> 
> Abs from the mathematical point of view constitutes a *norm*. Now,
> frankly, I haven't the slightest idea how to cast this concept into
> Haskell class hierarchy in a sufficiently general way...

This was one thing I liked with the Haskell hierarchy: the observation
that "signum" of real numbers is very much like "argument" of complex
numbers.  abs and signum in Haskell satisfy an implicit law:
   abs x * signum x = x      [1]
So signum can be defined anywhere you can define abs (except that it's
not a continuous function, so is not terribly well-defined).  A
default definition for signum x might read
   signum x = let a = abs x in if a == 0 then 0 else x / a
(Possibly signum is the wrong name.  What is the standard name for
this operation for, e.g., matrices?)  [Er, on second thoughts, it's
not as well-defined as I thought.  Abs x needs to be in a field for
the definition above to work.]
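For concreteness, here is one way the abs/signum pairing might be
packaged as a class of its own (the class and method names are invented;
the Fractional constraint reflects the caveat above that abs x must live
in a field for the default to work):

```haskell
-- Hypothetical sketch of a class carrying the implicit law
--     abs' x * signum' x == x
-- with the default definition discussed above.  The default divides
-- by abs' x, hence the Fractional constraint, and guards against zero.
class (Fractional a, Eq a) => Directed a where
    abs'    :: a -> a
    signum' :: a -> a
    signum' x = let a = abs' x in if a == 0 then 0 else x / a

instance Directed Double where
    abs' = abs            -- reuse the Prelude norm for Double
    -- signum' comes from the default definition
```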

> I'll tell you anyway that if you try to "sanitize" the numeric
> classes, if you separate additive structures and the multiplication,
> if you finally define abstract Vectors over some field of scalars,
> and if you demand the existence of a generic normalization for your
> vectors, than *most probably* you will need multiparametric classes
> with dependencies. 

Multiparametric classes, certainly (for Vectors, at least).
Fortunately, they will be in Haskell 2 with high probability.  I'm not
convinced about dependencies yet.

> Jerzy Karczmarczuk
> Caen, France

Best,
	Dylan Thurston

Footnotes: 
[1]  I'm not sure what I mean by "=" there, since I do not believe
these should be forced to be instances of Eq.  For clearer cases,
consider the various Monad laws, e.g.,
   join . join = join . map join
(Hope I got that right.)  What does "=" mean there?  Some sort of
denotational equality, I suppose.




From dpt@math.harvard.edu Thu Feb 8 19:55:14 2001 Date: Thu, 8 Feb 2001 14:55:14 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Instances of multiple classes at once
(Superficially irrelevant digression:)

Simon Peyton-Jones came here today and talked about his combinator
library for financial applications, as in his paper "Composing
Contracts".  One of the points he made was that a well-designed
combinator library for financial traders should have combinators that
work on a high level; then, when they want to start writing their own
contracts, they can learn about a somewhat smaller set of building
blocks inside that; then eventually they might learn about the
fundamental building blocks.  (Examples of different levels from the
paper: "european"; "zcb"; "give"; "anytime".)

One theory is that a well-designed class library has the same
property.  But standard Haskell doesn't allow this; that is why I like
the proposal to allow a single instance declaration to simultaneously declare
instances of superclasses.  One problem is how to present the
information on the type hierarchy to users.  (This is a problem in
Haskell anyway; I find myself referring to the source of the Prelude
while writing programs, which seems like a Bad Thing when extrapolated
to larger modules.)

On Thu, Feb 08, 2001 at 09:41:56PM +1100, Fergus Henderson wrote:
> One point that needs to be  resolved is the interaction with default methods.
> 
> Consider
> 
>         class foo a where
>                 f :: ...
>                 f = ...
>                 f2 :: ...
>                 f2 = ...
> 
>         class (foo a) => bar a where
>                 b :: ...
> 
>         instance bar T where
>                 -- no definitions for f or f2
>                 b = 42
> 
> Should this define an instance for `foo T'?
> (I think not.)

Whyever not?  Because there is no textual mention of class Foo in the
instance for Bar?  Think about the case of a superclass with no methods;
wouldn't you want to allow automatic instances in this case?


One might even go further and allow a class to declare default methods
for a superclass:

class Foo a where
   f :: ...

class (Foo a) => Bar a where
   b :: ...
   b = ...
   f = ...

Best,
	Dylan Thurston


From qrczak@knm.org.pl Thu Feb 8 20:51:57 2001 Date: 8 Feb 2001 20:51:57 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Show, Eq not necessary for Num [Was: Revamping the numeric c
Thu, 08 Feb 2001 15:11:21 +0100 (CET), Elke Kasimir <elke.kasimir@catmint.de> pisze:

> However, what is missing for me is something like:
> 
> type Comfortable a = (Show a, Eq a, Num a) => a
> 
> or
> 
> class (Show a, Read a, Eq a) => Comfortable a
> instance (Show a, Read a, Eq a) => Comfortable a 

I agree and think it should be easy to add.

The latter syntax is nice: obvious what it means, not legal today.
This instance of course conflicts with any other instance of that
class, so it can be recognized and treated specially as a "class
synonym".

> For Haskell, I could imagine (without having having much thought
> about) in addition to the things mentioned in the beginning,
> several things making supporting the  "locally, fast and easy",
> including a mean to define classes with implied memberships, for
> example declarations saying that "Foo is the class of all types in
> scope for which somefoo :: ... is defined", or declarations saying
> that "class Num is locally restricted to all instances of global
> Num which also belong to Eq".

Here I would be more careful. Don't know if local instances or local
classes can be defined to make sense, nor if they could be useful
enough...

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Thu Feb 8 20:45:16 2001 Date: 8 Feb 2001 20:45:16 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Revamping the numeric classes
Thu, 8 Feb 2001 10:51:58 -0500, Peter Douglass <peterd@availant.com> pisze:

> The first part of my question (not contained in your reply) is
> whether it is feasible to disable a developer's access to the
> "unsafe" numerical operations.

import Prelude hiding (quot, rem, (/) {- etc. -})
import YourPrelude -- which defines substitutes

You can "disable" it now. You cannot disable them entirely - anyone can
define present functions in terms of your functions if he really wants.
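A substitute module along those lines might export something like the
following (the names safeDiv and safeQuot are hypothetical):

```haskell
-- Sketch of the substitutes such a YourPrelude might define:
-- division returns Nothing instead of raising a runtime error on zero,
-- so callers are forced by the types to handle that case.
safeDiv :: (Eq a, Fractional a) => a -> a -> Maybe a
safeDiv _ 0 = Nothing
safeDiv x y = Just (x / y)

safeQuot :: Integral a => a -> a -> Maybe a
safeQuot _ 0 = Nothing
safeQuot x y = Just (x `quot` y)
```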

> Whether or not an individual developer chooses to do so is another
> matter.

Why only quot? There are many other ways to write bottom:
    head []
    (\(x:xs) -> (x,xs)) []
    let x = x in x
    log (-1)
    asin 2
    error "foo"

> If you "know" the value is non-zero before run-time, then that is
> statically determined.

I know but the compiler does not know, and I have no way to convince it.

> It is possible that the developer writes a function which returns a
> nonZeroNumeric value which actually has a value of zero.  However,
> the value of requiring division to have a nonZeroNumeric denominator
> is to catch at compile time the "error" of failing to scrutinize
> (correctly or incorrectly) for zero.

IMHO it would be more painful than useful.

> For most commercial software, the quality of run-time error messages
> is far less important than their absence.

It would not avoid them if the interface does not give a place to
report the error:
    average xs = sum xs / case checkZero (length xs) of
        Just notZero -> notZero
        Nothing      -> error "This should never happen"
is not any more safe than
    average xs = sum xs / fromIntegral (length xs)

and I can report bad input without trouble now:
    average xs = case length xs of
        0 -> Nothing
        l -> Just (sum xs / fromIntegral l)

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Thu Feb 8 20:28:13 2001 Date: 8 Feb 2001 20:28:13 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Revamping the numeric classes
Thu, 8 Feb 2001 21:41:56 +1100, Fergus Henderson <fjh@cs.mu.oz.au> pisze:

> Should this define an instance for `foo T'?
> (I think not.)
> 
> How about if the instance declaration is changed to
> 
>         instance bar T where
>                 f = 41
>                 -- no definition for f2
>                 b = 42
> 
> ?
> (In that case, I think it should.)

I don't like the idea of treating the case "no explicit definitions
were given because all have default definitions which are OK"
differently than "some explicit definitions were given".

When there is a superclass, it must have an instance defined, so if
we permit such thing at all, I would let it implicitly define all
superclass instances not defined explicitly, or something like that.
At least when all methods have default definitions. Yes, I know that
they can be mutually recursive and thus all will be bottoms...

So maybe there should be a way to specify that default definitions
are cyclic and some of them must be defined? It is usually written
in comments anyway, because it is not immediately visible in the
definitions. If not formally in the language (now any method definition
can be omitted even if it has no default!), then perhaps the compiler
could detect most cases when methods are defined in terms of one
another and give a warning.

Generally the compiler could warn if the programmer has written bottom
in an unusual way. For example
    f x = g some_expression
    g x = f some_expression
is almost certainly a programmer error.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Thu Feb 8 20:30:31 2001 Date: 8 Feb 2001 20:30:31 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Revamping the numeric classes
Thu, 08 Feb 2001 11:24:49 +0000, Jerzy Karczmarczuk <karczma@info.unicaen.fr> pisze:

> The implicit coercion of numeric constants: 3.14 -=->>  (fromDouble
> 3.14) etc. is sick.

What do you propose instead?

(BTW, it's fromRational, to keep arbitrarily large precision.)

> Now, signum and abs seem to be quite distincts beasts. Signum seem
> to require Ord (and a generic zero...).

Signum doesn't require Ord.
    signum z = z / abs z
for complex numbers.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From brian@boutel.co.nz Thu Feb 8 21:37:46 2001 Date: Fri, 09 Feb 2001 10:37:46 +1300 From: Brian Boutel brian@boutel.co.nz Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
Patrik Jansson wrote:
>
> On Wed, 7 Feb 2001, Brian Boutel wrote:
> > * Haskell equality is a defined operation, not a primitive, and may not
> > be decidable. It does not always define equivalence classes, because
> > a==a may be Bottom, so what's the problem? It would be a problem,
> > though, to have to explain to a beginner why they can't print the result
> > of a computation.
> 
> The fact that equality can be trivially defined as bottom does not imply
> that it should be a superclass of Num, it only explains that there is an
> ugly way of working around the problem. Neither is the argument that the
> beginner should be able to print the result of a computation a good
> argument for having Show as a superclass.
> 

There is nothing trivial or ugly about a definition that reflects
reality and bottoms only where equality is undefined.

Of course, if you do not need to apply equality to your "numeric" type
then having to define it is a waste of time, but consider this:

- Having a class hierarchy at all (or making any design decision)
implies compromise.
- The current hierarchy (and its predecessors) represent a reasonable
compromise that meets most needs.
- Users have a choice: either work within the class hierarchy and accept
the pain of having to define things you don't need in order to get the
things that come for free, or omit the instance declarations and work
outside the hierarchy. In that case you will not be able to use the
overloaded operator symbols of the class, but that is just a matter of
concrete syntax, and ultimately unimportant.

--brian


From wli@holomorphy.com Fri Feb 9 01:37:31 2001 Date: Thu, 8 Feb 2001 17:37:31 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Revamping the numeric classes
On Thu, Feb 08, 2001 at 08:30:31PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> Signum doesn't require Ord.
>     signum z = z / abs z
> for complex numbers.

I'd be careful here.

\begin{code}
	signum 0 = 0
	signum z = z / abs z
\end{code}

This is, perhaps, neither precise nor general enough.

The signum/abs pair seem to represent direction and magnitude.
According to the line of reasoning in some of the earlier posts in this
flamewar, the following constraints:

	(1) z = signum z <*> abs z where <*> is appropriately defined
	(2) abs $ signum z = 1

should be enforced, if possible, by the type system. This suggests
that for any type having a vector space structure over Fractional
(or whatever the hierarchy you're brewing up uses for rings with
a division partial function on them) that the result type of signum
lives in a more restricted universe, perhaps even one with a different
structure (operations defined on it, set of elements) than the argument
type, and it seems more than possible to parametrize it on the argument
type. The abs is in fact a norm, and the signum projects V^n -> V^n / V.
Attempts to define these things on Gaussian integers, p-adic numbers,
polynomial rings, and rational points on elliptic curves will quickly
reveal limitations of the stock class hierarchy.
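A rough sketch of that parametrization, using a multiparameter class
with a functional dependency (class and method names are invented for
illustration; this is well outside Haskell 98):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
import Data.Complex (Complex((:+)), magnitude)

-- Hypothetical sketch: abs is a norm whose result type s is determined
-- by (parametrized on) the argument type v, and signum's result stays
-- in v but lives on the unit sphere.
class Norm v s | v -> s where
    norm      :: v -> s   -- plays the role of abs
    direction :: v -> v   -- plays the role of signum

instance Norm (Complex Double) Double where
    norm = magnitude
    direction z = let m = magnitude z
                  in if m == 0 then 0 else z / (m :+ 0)
```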

Now, whether it's actually desirable to scare newcomers to the language
into math phobia, wetting their pants, and running screaming with
subtleties like this suggests perhaps that one or more "alternative
Preludes" may be desirable to have. There is a standard Prelude, why not
a nonstandard one or two? We have the source. The needs of the geek do
not outweigh the needs of the many. Hence, we can cook up a few Preludes
or so on our own, and certainly if we can tinker enough to spam the list
with counterexamples and suggestions of what we'd like the Prelude to
have, we can compile up a Prelude for ourselves with our "suggested
changes" included and perhaps one day knock together something which can
actually be used and has been tested, no?

The Standard Prelude serves its purpose well and accommodates the
largest cross-section of users. Perhaps a Geek Prelude could
accommodate the few of us who do need these sorts of schenanigans.


Cheers,
Bill
-- 
<j0][nD33R:#math> Excel/Spreadsheet Q: What is the formula for finding
	out the time passed between two dates and or two times in the same day?
<MatroiDN:#math> excel/spreadsheet? Hmm, this is math? Is there a GTM on
	excel or maybe an article in annals about spreadsheets or maybe
	there's a link from wolfram to doing your own computer work, eh?
<danprime:#math> jeeem, haven't you seen "Introduction to Algebraic Excel"?
<danprime:#math> or "Spreadsheet Space Embeddings in 2-Manifolds"
<brouwer:#math> i got my phd in spreadsheet theory
<brouwer:#math> i did my thesis on the spreadsheet conjecture


From t-atolm@microsoft.com Tue Feb 6 10:53:55 2001 Date: Tue, 6 Feb 2001 02:53:55 -0800 From: Andrew Tolmach t-atolm@microsoft.com Subject: GHC Core output
Timothy Docker [mailto:timd@macquarie.com.au] writes:
> 
>  >         We agreed that it would be a Jolly Good Thing if GHC could
>  >         be persuaded to produce GHC-independent Core output,
>  >         ready to feed into some other compiler.  For example,
>  >         Karl-Filip might be able to use it. 
>  >         ANDREW will write a specification, and implement it.
> 
> A quick question. What is meant by  "Core output"? Subsequent posts
> seem to suggest this is some "reduced Haskell", in which full Haskell
> 98 can be expressed. Am I completely off beam here?
> 
Not at all.
"Core" is an intermediate language used internally by the GHC compiler.
It does indeed resemble a reduced Haskell (but with explicit higher-order
polymorphic types) and GHC translates full Haskell 98 into it.
Currently Core has no rigorously defined external representation, although 
by setting certain compiler flags, one can get a (rather ad-hoc) textual
representation to be printed at various points in the compilation process.
(This is usually done to help debug the compiler).

What we hope to do is:

- provide a formal definition of Core's external syntax; 

- give a precise definition of its semantics (both static and dynamic);

- modify GHC to produce external Core files, if so requested, at one or more
useful points in the compilation sequence -- e.g., just before optimization,
or just after.

- modify GHC to accept external Core files in place of Haskell 
source files, again at one or more useful points.

The first three facilities will let one couple GHC's front-end (parser,
type-checker, etc.), and optionally its optimizer, with new back-end tools.
Adding the last facility will let one implement new Core-to-Core
transformations in an external tool and integrate them into GHC. It will
also allow new front-ends to generate Core that can be fed into GHC's
optimizer or back end; however, because there are many (undocumented)
idiosyncrasies in the way GHC produces Core from source Haskell, it will
be hard for an external tool to produce Core that can be integrated with
GHC-produced Core (e.g., for the Prelude), and we don't aim to support
this.





From erik@meijcrosoft.com Fri Feb 9 04:26:06 2001 Date: Thu, 8 Feb 2001 20:26:06 -0800 From: Erik Meijer erik@meijcrosoft.com Subject: GHC Core output
I would *really* love to see GHC componentized (TM); it would be even
better if it became easier to use the pieces. I would like to do experiments
on smaller bits of the compiler using Hugs (ideally the whole thing!). When
I was working on the Java/.NET backend I had to rebuild the whole compiler
just to test a few hundred lines of code that translated Core to Java, which
is a major pain in the butt; I don't get a kick out of dealing with
installing Cygnus, recursive multi-staged makefiles, cpp, etc.

Erik "do you get a kick out of running the marathon with a ball and chain
at your feet?" Meijer




From t-atolm@microsoft.com Wed Feb 7 09:34:36 2001 Date: Wed, 7 Feb 2001 01:34:36 -0800 From: Andrew Tolmach t-atolm@microsoft.com Subject: GHC Core Language
[moving to haskell-cafe]

> From: matt hellige [mailto:matt@immute.net]
> a quick question re: ghc's Core language... is it still very similar
> to the abstract syntax given in, for example, santos' "compilation by
> transformation..." (i think it was his dissertation?) and 
> elsewhere, or
> has it changed significantly in the last couple of years? i only ask
> because i know the language used in that paper is somewhat 
> different from
> the Core language given in peyton jones and lester's 
> "implementing functional 
> languages" from 92, and includes type annotations and so on.
> 
> m
> 
The current Core language is still quite similar to what is described in
Santos'
work; see

SL Peyton Jones and A Santos,
"A transformation-based optimiser for Haskell,"
Science of Computer Programming 32(1-3), pp3-47, September 1998.
http://research.microsoft.com/Users/simonpj/papers/comp-by-trans-scp.ps.gz

But there have been some noticeable changes; for example, 
function arguments are no longer required to be atomic.
A more recent version of Core is partially described (omitting types) in 

SL Peyton Jones & S Marlow, 
"Secrets of the Glasgow Haskell Compiler Inliner,"
IDL'99.
http://research.microsoft.com/Users/simonpj/papers/inline.ps.gz

 


From Tom.Pledger@peace.com Fri Feb 9 04:29:09 2001 Date: Fri, 9 Feb 2001 17:29:09 +1300 From: Tom Pledger Tom.Pledger@peace.com Subject: Revamping the numeric classes
Marcin 'Qrczak' Kowalczyk writes:
 | On Thu, 8 Feb 2001, Tom Pledger wrote:
 | 
 | > nice answer: give the numeric literal 10 the range type 10..10, which
 | > is defined implicitly and is a subtype of both -128..127 (Int8) and
 | > 0..255 (Word8).
 | 
 | What are the inferred types for
 |     f = map (\x -> x+10)
 |     g l = l ++ f l
 | ? I hope I can use them as [Int] -> [Int].

f, g :: (Subtype a b, Subtype 10..10 b, Num b) => [a] -> [b]
Yes, because of the substitution {Int/a, Int/b}.

 | >         x + y + z                 -- as above
 | > 
 | >     --> (x + y) + z               -- left-associativity of (+)
 | > 
 | >     --> realToFrac (x + y) + z    -- injection (or treating up) done
 | >                                   -- conservatively, i.e. only where needed
 | 
 | What does it mean "where needed"? Type inference does not proceed
 | inside-out.

In the expression

    (x + y) + z

we know from the explicit type signature (in your question that I was
responding to) that x,y::Int and z::Double.  Type inference does not
need to treat x or y up, because it can take the first (+) to be Int
addition.  However, it must treat the result (x + y) up to the most
specific supertype which can be added to a Double.

 | What about this?
 |     h f = f (1::Int) == (2::Int)
 | Can I apply f

h?

 | to a function of type Int->Double?

Yes.

 | If no, then it's a pity, because I could inline it (the comparison
 | would be done on Doubles).  If yes, then what is the inferred type
 | for h? Note that Int->Double is not a subtype of Int->Int, so if h
 | :: (Int->Int)->Bool, then I can't imagine how h can be applied to
 | something :: Int->Double.

There's no explicit type signature for the result of applying f to
(1::Int), so...

h :: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool

That can be inferred by following the structure of the term.  Function
terms do seem prone to an accumulation of deferred subtype
constraints.

Regards,
Tom


From brian@boutel.co.nz Fri Feb 9 05:45:16 2001 Date: Fri, 09 Feb 2001 18:45:16 +1300 From: Brian Boutel brian@boutel.co.nz Subject: Revamping the numeric classes
William Lee Irwin III wrote:
> 
> 
> The Standard Prelude serves its purpose well and accommodates the
> largest cross-section of users. Perhaps a Geek Prelude could
> accommodate the few of us who do need these sorts of schenanigans.
> 
>

Amen.

--brian


From simonpj@microsoft.com Thu Feb 8 02:32:18 2001 Date: Wed, 7 Feb 2001 18:32:18 -0800 From: Simon Peyton-Jones simonpj@microsoft.com Subject: Haskell Implemetors Meeting
GHC transforms Haskell into "Core", which is roughly 
the second-order lambda calculus,
augmented with let(rec), case, and constructors.  This is a
small explicitly-typed intermediate language, in contrast
to Haskell, which is a very large, implicitly typed language.
Getting from Haskell to Core is a lot of work, and it might
be useful to be able to re-use that work.

Andrew's proposal (which he'll post to the Haskell list)
will define exactly what "Core" is.

Simon

| -----Original Message-----
| From: Timothy Docker [mailto:timd@macquarie.com.au]
| Sent: 05 February 2001 22:16
| To: haskell-cafe@haskell.org
| Subject: Haskell Implemetors Meeting
| 
| 
| 
|  >         We agreed that it would be a Jolly Good Thing if GHC could
|  >         be persuaded to produce GHC-independent Core output,
|  >         ready to feed into some other compiler.  For example,
|  >         Karl-Filip might be able to use it. 
|  >         ANDREW will write a specification, and implement it.
| 
| A quick question. What is meant by  "Core output"? Subsequent posts
| seem to suggest this is some "reduced Haskell", in which full Haskell
| 98 can be expressed. Am I completely off beam here?
| 
| Tim Docker
| 
| _______________________________________________
| Haskell-Cafe mailing list
| Haskell-Cafe@haskell.org
| http://www.haskell.org/mailman/listinfo/haskell-cafe
| 


From ketil@ii.uib.no Fri Feb 9 08:14:53 2001 Date: 09 Feb 2001 09:14:53 +0100 From: Ketil Malde ketil@ii.uib.no Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
Brian Boutel <brian@boutel.co.nz> writes:

>> The fact that equality can be trivially defined as bottom does not imply
>> that it should be a superclass of Num, it only explains that there is an
>> ugly way of working around the problem.

> There is nothing trivial or ugly about a definition that reflects
> reality and bottoms only where equality is undefined.

I think there is.  If I make a type an instance of Num with (==)
defined as bottom, I am allowed to apply functions requiring a Num
argument to it, but I have no guarantee they will work.

The implementor of such a function can change its internals (to use
(==)), and suddenly my previously working program is non-terminating.
If I defined (==) to give a run time error, it'd be a bit better, but
I'd much prefer the compiler to tell me about this in advance.

> Of course, if you do not need to apply equality to your "numeric" type
> then having to define it is a waste of time, but consider this:

It's not about "needing to apply", but about finding a reasonable
definition. 

> - Having a class hierarchy at all (or making any design decision)
> implies compromise.

I think the argument is that we should move Eq and Show *out* of the
Num hierarchy.  Less hierarchy - less compromise.
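For what it's worth, a Num-like class stripped of Eq and Show is easy
to sketch, and functions make a natural instance of it that could never
honestly support (==) or show (all names here are invented):

```haskell
{-# LANGUAGE FlexibleInstances #-}
-- Hypothetical sketch of a numeric class with no Eq or Show
-- superclasses.  Functions form a perfectly good additive and
-- multiplicative structure, yet have no decidable equality and
-- no sensible Show instance.
class Ring a where
    add, mul :: a -> a -> a
    fromInt  :: Integer -> a

instance Ring (Double -> Double) where
    add f g   = \x -> f x + g x
    mul f g   = \x -> f x * g x
    fromInt n = const (fromInteger n)
```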

> - The current hierarchy (and its predecessors) represent a reasonable
> compromise that meets most needs.

Obviously a lot of people seem to think we could find compromises that
are more reasonable.

> - Users have a choice: either work within the class hierarchy and
> accept the pain of having to define things you don't need in order
> to get the things that come for free,

Isn't it a good idea to reduce the amount of pain?

> or omit the instance declarations and work outside the hierarchy. In
> that case you will not be able to use the overloaded operator
> symbols of the class, but that is just a matter of concrete syntax,
> and ultimately unimportant.

I don't think syntax is unimportant.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants


From karczma@info.unicaen.fr Fri Feb 9 10:52:39 2001 Date: Fri, 09 Feb 2001 10:52:39 +0000 From: Jerzy Karczmarczuk karczma@info.unicaen.fr Subject: In hoc signo vinces (Was: Revamping the numeric classes)
Marcin 'Qrczak' Kowalczyk wrote:


> JK> Now, signum and abs seem to be quite distincts beasts. Signum seem
> JK> to require Ord (and a generic zero...).
> 
> Signum doesn't require Ord.
>     signum z = z / abs z
> for complex numbers.

Thank you, I know. And I ignore it. Calling "signum" the result of
a vector normalization (on the gauss plane in this case) is something
I don't really appreciate, and I wonder why this definition infiltrated
the prelude. Just because it conforms to the "normal" definition of
signum for reals?

Again, a violation of the orthogonality principle: needing division
just to define signum, and of course a completely different approach
to define the signum of integers. Or of polynomials...


Jerzy Karczmarczuk


From karczma@info.unicaen.fr Fri Feb 9 11:26:39 2001 Date: Fri, 09 Feb 2001 11:26:39 +0000 From: Jerzy Karczmarczuk karczma@info.unicaen.fr Subject: Revamping the numeric HUMAN ATTITUDE
Brian Boutel wrote:
> 
> William Lee Irwin III wrote:
> >
> >
> > The Standard Prelude serves its purpose well and accommodates the
> > largest cross-section of users. Perhaps a Geek Prelude could
> > accommodate the few of us who do need these sorts of schenanigans.
> >
> >
> 
> Amen.


Aha.
And we will have The Prole, normal users who can live with incomplete,
sometimes contradictory math, and The Inner Party of those who know
The Truth?

Would you agree to your children being taught dubious material at
primary school because "they won't need the real stuff"?

I would agree to having a minimal standard Prelude which is incomplete,
but it should be sane: it should avoid confusion of categories and
useless/harmful dependencies.

Methodologically and pedagogically it seems a bit risky.
Technically it may be awkward. It will require the compiler and
the standard libraries to be almost completely independent of each other.
This is not the case now.

BTW, what is a schenanigan? Is it by definition something consumed
by Geeks? Is the usage of Vector Spaces restricted to those few
Geeks who can't live without schenanigans?

Jerzy Karczmarczuk

PS.

For some time I follow the discussion on some newsgroups dealing with
computer graphics, imagery, game programming, etc. I noticed a curious,
strong influence of people who shout loudly:

 "Math?! You don't need it really. Don't waste your time on it!
  Don't waste your time on cute algorithms, they will be slow as
  hell. Learn assembler, "C", MMX instructions, learn DirectX APIs,
  forget this silly geometric speculations. Behave *normally*, as
  a *normal* computer user, not as a speculative mathematician!"

And I noticed that REGULARLY, 1 - 4 times a week some freshmen ask
over and over again such questions:
1. How to rotate a vector in 3D?
2. How to zoom an image?
3. What is a quaternion, and why some people hate them so much?
4. How to compute a trajectory if I know the force acting on the
   object.

To summarize: people who don't use and don't need math always feel
entitled to discourage others from giving it adequate importance.
It is not they who will suffer from a badly constructed math layer
in a language, or from badly taught math concepts, so they don't
care too much.


From dpt@math.harvard.edu Fri Feb 9 16:48:33 2001 Date: Fri, 9 Feb 2001 11:48:33 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Show, Eq not necessary for Num [Was: Revamping the numeric c
On Thu, Feb 08, 2001 at 08:51:57PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> > ...
> > class (Show a, Read a, Eq a) => Comfortable a
> > instance (Show a, Read a, Eq a) => Comfortable a 
> ... 
> The latter syntax is nice: obvious what it means, not legal today.
> This instance of course conflicts with any other instance of that
> class, so it can be recognized and treated specially as a "class
> synonym".

Why isn't it legal?  I just tried it, and Hugs accepted it, with or
without extensions.  "where" clauses are optional, right?

> .... Don't know if local instances or local classes can be defined
> to make sense, nor if they could be useful enough...

Well, let's see.  Local classes already exist: just don't export
them.  Local instances would not be hard to add with special syntax,
though really they should be part of a more general mechanism for
dealing with instances explicitly.

Agreed that they might not be useful enough.

Best,
	Dylan Thurston


From dpt@math.harvard.edu Fri Feb 9 17:05:09 2001 Date: Fri, 9 Feb 2001 12:05:09 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: 'Convertible' class?
You make some good arguments.  Thanks.  Let me ask about a few of them.

On Thu, Feb 08, 2001 at 04:06:24AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> Wed, 7 Feb 2001 15:43:59 -0500, Dylan Thurston <dpt@math.harvard.edu> pisze:
> 
> > class Convertible a b where
> >     convert :: a -> b
> > 
> > So, e.g., fromInteger and fromRational could be replaced with
> > convert.  (But if you did the same thing with toInteger and toRational
> > as well, you run into problems of overlapping instances.
> 
> ...
> And convert cannot be a substitute for fromIntegral/realToFrac,
> because it needs a definition for every pair of types.

Right.  Those could still be defined as appropriately typed versions
of 'convert . convert'.
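A minimal sketch of that arrangement (`Convertible` is the class proposed above; the instances and the name `intToDouble` are my own illustrations):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
module Main where

-- The proposed two-parameter conversion class.
class Convertible a b where
  convert :: a -> b

instance Convertible Int Integer where
  convert = toInteger

instance Convertible Integer Double where
  convert = fromInteger

-- A fromIntegral-style conversion built from two converts,
-- going through Integer as the midpoint type.
intToDouble :: Int -> Double
intToDouble = (convert :: Integer -> Double) . (convert :: Int -> Integer)

main :: IO ()
main = print (intToDouble 3)
```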

> You can put Num a in some instance's context, but you can't
> put Convertible Integer a. It's because instance contexts must
> constrain only type variables, which ensures that context reduction
> terminates (but is sometimes overly restrictive). There is ghc's
> flag -fallow-undecidable-instances which relaxes this restriction,
> at the cost of undecidability.

Ah!  Thanks for reminding me; I've been using Hugs, which allows these
instances.  Is there no way to relax this restriction while
maintaining decidability?

Best,
	Dylan Thurston


From wli@holomorphy.com Fri Feb 9 19:19:05 2001 Date: Fri, 9 Feb 2001 11:19:05 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Revamping the numeric HUMAN ATTITUDE
William Lee Irwin III wrote:
>>> The Standard Prelude serves its purpose well and accommodates the
>>> largest cross-section of users. Perhaps a Geek Prelude could
>>> accommodate the few of us who do need these sorts of schenanigans.

I, of course, intend to use the Geek Prelude(s) myself. =)

On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote:
> Aha.
> And we will have The Prole, normal users who can live with incomplete,
> sometimes contradictory math, and The Inner Party of those who know
> The Truth?
> Would you agree that your children be taught at primary school some
> dubious matter because "they won't need the real stuff".

This is, perhaps, the best argument against my pseudo-proposal. I'm not
against resolving things that are outright inconsistent or otherwise
demonstrably bad, but the simplifications made to prevent the (rather
large) mathphobic segment of the population from wetting their pants
probably shouldn't be done away with to add more generality for the
advanced users. We can write our own preludes anyway.

On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote:
> I would agree having a minimal standard Prelude which is incomplete.
> But it should be sane, should avoid confusion of categories and
> useless/harmful dependencies.

At the risk of turning this into "me too", I'm in agreement here.

On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote:
> Methodologically and pedagogically it seems a bit risky.
> Technically it may be awkward. It will require the compiler and
> the standard libraries almost completely independent of each other. 
> This is not the case now.

I'm seeing a bit of this now, and the error messages GHC spits out
are hilarious! e.g.

    My brain just exploded.
    I can't handle pattern bindings for existentially-quantified constructors.

and

    Couldn't match `Bool' against `Bool'
        Expected type: Bool
        Inferred type: Bool

They're not quite Easter eggs, but they're quite a bit of fun. I might
have to look into seeing what sort of things I might have to alter in GHC
in order to resolve nasty situations like these.

I can't speak to the methodological and pedagogical aspects of it. I
just have a vague idea that explaining why something isn't an instance
of GradedAlgebra or DifferentialRing to freshmen or the otherwise
mathematically disinclined isn't a task compiler and/or library
implementors care to deal with.

On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote:
> BTW. what is a schenanigan? Is it by definition something consumed
> by Geeks? Is the usage of Vector Spaces restricted to those few
> Geeks who can't live without schenanigans?

Yes! And I can't live without them. I had a few schenanigans at the
math bar last night while I was trying to pick up a free module, but
she wanted a normed ring before getting down to a basis. I guess that's
what I get for going to an algebra bar. I should really have gone to a
topology bar instead if I was looking for something kinkier. =)

Perhaps "Geek Prelude" isn't a good name for it. Feel free to suggest
alternatives. Of course, there's nothing to prevent the non-geek among
us from using them if they care to. If I by some miracle produce
something which actually works, I'll leave it untitled.

And yes, I agree everyone needs VectorSpace.

On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote:
> For some time I follow the discussion on some newsgroups dealing with
> computer graphics, imagery, game programming, etc. I noticed a curious,
> strong influence of people who shout loudly:
> 
>  "Math?! You don't need it really. Don't waste your time on it!
>   Don't waste your time on cute algorithms, they will be slow as
>   hell. Learn assembler, "C", MMX instructions, learn DirectX APIs,
>   forget this silly geometric speculations. Behave *normally*, as
>   a *normal* computer user, not as a speculative mathematician!"
> 
> And I noticed that REGULARLY, 1 - 4 times a week some freshmen ask
> over and over again such questions:
> 1. How to rotate a vector in 3D?
> 2. How to zoom an image?
> 3. What is a quaternion, and why some people hate them so much?
> 4. How to compute a trajectory if I know the force acting on the
>    object.

To date I've been highly unsuccessful in convincing anyone in this
(the predominant) camp otherwise. People do need math, they just
refuse to believe it regardless of how strong the evidence is. I
spent my undergrad preaching the gospel of "CS is math" and nobody
listened. I don't know how they get anything done.

On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote:
> To summarize: people who don't use and don't need math always feel
> entitled to discourage others from giving it adequate importance.
> It is not they who will suffer from a badly constructed math layer
> in a language, or from badly taught math concepts, so they don't
> care too much.

How can I counter-summarize? It's true. I suppose I'm saying that the
design goals of a Standard Prelude run outright against making it so
general that it can represent as many mathematical structures as possible.
Of course, as it stands, it's not beyond reproach.


Cheers,
Bill
-- 
A mathematician is a system for turning coffee into theorems.
-- Paul Erdös
A comathematician is a system for turning theorems into coffee.
-- Tim Poston


From qrczak@knm.org.pl Fri Feb 9 19:40:18 2001 Date: 9 Feb 2001 19:40:18 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Revamping the numeric classes
Fri, 9 Feb 2001 17:29:09 +1300, Tom Pledger <Tom.Pledger@peace.com> pisze:

>     (x + y) + z
> 
> we know from the explicit type signature (in your question that I was
> responding to) that x,y::Int and z::Double.  Type inference does not
> need to treat x or y up, because it can take the first (+) to be Int
> addition.  However, it must treat the result (x + y) up to the most
> specific supertype which can be added to a Double.

Approach it differently. z is Double, (x+y) is added to it, so (x+y)
must have type Double. This means that x and y must have type Double.
This is OK, because they are Ints now, which can be converted to Double.

Why is your approach better than mine?

>  |     h f = f (1::Int) == (2::Int)
>  | Can I apply f
> 
> h?

Sure, sorry.

> h:: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool

This type is ambiguous: the type variable b is needed in the context
but not present in the type itself, so it can never be determined
from the usage of h.

> That can be inferred by following the structure of the term.
> Function terms do seem prone to an accumulation of deferred subtype
> constraints.

When function application generates a constraint, the language gets
ambiguous as hell. Applications are found everywhere through the
program! Very often the type of the argument or result of an internal
application does not appear in the type of the whole function being
defined, which makes it ambiguous.

Not to mention that there would be *LOTS* of these constraints.
Application is used everywhere. It's important to have its typing
rule simple and cheap. Generating a constraint for every application
is not an option.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Fri Feb 9 19:31:08 2001 Date: 9 Feb 2001 19:31:08 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Show, Eq not necessary for Num [Was: Revamping the numeric c
Fri, 9 Feb 2001 11:48:33 -0500, Dylan Thurston <dpt@math.harvard.edu> pisze:

> > > class (Show a, Read a, Eq a) => Comfortable a
> > > instance (Show a, Read a, Eq a) => Comfortable a 

> Why isn't it legal?

Because in Haskell 98 an instance head must be of the form of a type
constructor applied to type variables. Here it's a bare type variable.

> I just tried it, and Hugs accepted it, with or without extensions.

My Hugs does not accept it without extensions.

ghc does not accept it by default. ghc -fglasgow-exts accepts an
instance's head which is a type constructor applied to some other
types than just type variables (e.g. instance Foo [Char]), and
-fallow-undecidable-instances lets it accept the above too.

I forgot that it can make context reduction infinite unless the
compiler does extra checking to prevent this. I guess that making it
legal keeps the type system decidable, only compilers would have to
introduce some extra checks.

Try the following module:

------------------------------------------------------------------------
module Test where

class Foo a where foo :: a
class Bar a where bar :: a
class Baz a where baz :: a

instance Foo a => Bar a where bar = foo
instance Bar a => Baz a where baz = bar
instance Baz a => Foo a where foo = baz

f = foo
------------------------------------------------------------------------

Both hugs -98 and ghc -fglasgow-exts -fallow-undecidable-instances
reach their limits of context reduction steps.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Fri Feb 9 19:19:21 2001 Date: 9 Feb 2001 19:19:21 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: In hoc signo vinces (Was: Revamping the numeric classes)
Fri, 09 Feb 2001 10:52:39 +0000, Jerzy Karczmarczuk <karczma@info.unicaen.fr> pisze:

> Again, a violation of the orthogonality principle. Needing division
> just to define signum. And of course a completely different approach
> to define the signum of integers. Or of polynomials...

So what? That's why it's a class method and not a plain function with
a single definition.

Multiplication of matrices is implemented differently than
multiplication of integers. Why don't you call it a violation of the
orthogonality principle (whatever it is)?

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From dpt@math.harvard.edu Fri Feb 9 20:49:45 2001 Date: Fri, 9 Feb 2001 15:49:45 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: 'Convertible' class?
On Fri, Feb 09, 2001 at 12:05:09PM -0500, Dylan Thurston wrote:
> On Thu, Feb 08, 2001 at 04:06:24AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> > You can put Num a in some instance's context, but you can't
> > put Convertible Integer a. It's because instance contexts must
> > constrain only type variables, which ensures that context reduction
> > terminates (but is sometimes overly restrictive). There is ghc's
> > flag -fallow-undecidable-instances which relaxes this restriction,
> > at the cost of undecidability.
> 
> Ah!  Thanks for reminding me; I've been using Hugs, which allows these
> instances.  Is there no way to relax this restriction while
> maintaining decidability?

After looking up the Jones-Jones-Meijer paper and thinking about it
briefly, it seems to me that the troublesome cases (when "reducing" a
context gives a more complicated context) can only happen with type
constructors, and not with simple types.  Would this work?  I.e.,
if every element of an instance context is required to be of the form
  C a_1 ... a_n,
with each a_i either a type variable or a simple type, is type
checking decidable?  (Probably I'm missing something.)

If this isn't allowed, one could still work around the problem:
  class (Convertible Integer a) => ConvertibleFromInteger a
at the cost of sticking in nuisance instance declarations.
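Spelled out as code (all class and instance names here are illustrative, not from any library), the workaround looks like this:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleContexts #-}
module Main where

class Convertible a b where
  convert :: a -> b

-- The wrapper class: its superclass context mentions the concrete
-- type Integer, which a plain instance context may not.
class Convertible Integer a => ConvertibleFromInteger a

instance Convertible Integer Double where
  convert = fromInteger

-- The "nuisance" declaration, one per type:
instance ConvertibleFromInteger Double

-- Downstream code uses the single-parameter constraint.
halve :: ConvertibleFromInteger a => Integer -> a
halve n = convert (n `div` 2)

main :: IO ()
main = print (halve 10 :: Double)
```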

Note that this problem arises a lot.  E.g., suppose I have
  class (Field k, Additive v) => VectorSpace k v ...
and then I want to talk about vector spaces over Float.

Best,
	Dylan Thurston


From wli@holomorphy.com Fri Feb 9 20:55:12 2001 Date: Fri, 9 Feb 2001 12:55:12 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: In hoc signo vinces (Was: Revamping the numeric classes)
Fri, 09 Feb 2001 10:52:39 +0000, Jerzy Karczmarczuk pisze:
>> Again, a violation of the orthogonality principle. Needing division
>> just to define signum. And of course a completely different approach
>> to define the signum of integers. Or of polynomials...

On Fri, Feb 09, 2001 at 07:19:21PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> So what? That's why it's a class method and not a plain function with
> a single definition.
> 
> Multiplication of matrices is implemented differently than
> multiplication of integers. Why don't you call it a violation of the
> orthogonality principle (whatever it is)?

Matrix rings actually manage to expose very well the inappropriateness
of the definitions of signum and abs, and their relationship to Num:

class  (Eq a, Show a) => Num a  where
    (+), (-), (*)   :: a -> a -> a
    negate          :: a -> a
    abs, signum     :: a -> a
    fromInteger     :: Integer -> a
    fromInt         :: Int -> a -- partain: Glasgow extension

Pure arithmetic ((+), (-), (*), negate) works just fine.

But there are no good injections to use for fromInteger or fromInt,
the type of abs is wrong if it's going to be a norm, and it's not
clear that signum makes much sense.

So we have two totally inappropriate operations (fromInteger and
fromInt), one operation which has the wrong type (abs), and an operation
which doesn't have a well-defined meaning (signum) on matrices. If
we want people doing graphics or linear algebraic computations to
be able to go about their business with their code looking like
ordinary arithmetic, this is, perhaps, a real concern.

I believe that these applications are widespread enough to be concerned
about how the library design affects their aesthetics.
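To make the complaint concrete, here is a toy Num instance for 2x2 integer matrices (`M2` is a made-up type for illustration): the ring operations fit, fromInteger has at best a debatable reading as a multiple of the identity, and abs and signum are left as errors for want of any sensible meaning:

```haskell
module Main where

-- A toy 2x2 integer matrix, stored row-major.
data M2 = M2 Integer Integer Integer Integer deriving (Eq, Show)

instance Num M2 where
  M2 a b c d + M2 e f g h = M2 (a+e) (b+f) (c+g) (d+h)
  M2 a b c d * M2 e f g h =
    M2 (a*e + b*g) (a*f + b*h) (c*e + d*g) (c*f + d*h)
  negate (M2 a b c d) = M2 (-a) (-b) (-c) (-d)
  -- One defensible choice: n times the identity matrix.
  fromInteger n = M2 n 0 0 n
  -- No canonical meaning on matrices:
  abs    = error "abs: no canonical norm on matrices"
  signum = error "signum: not well-defined on matrices"

main :: IO ()
main = print (M2 1 2 3 4 * fromInteger 2)
```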


Cheers,
Bill
-- 
<craving> Weak coffee is only fit for lemmas.
--


From dpt@math.harvard.edu Fri Feb 9 21:49:09 2001 Date: Fri, 9 Feb 2001 16:49:09 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: In hoc signo vinces (Was: Revamping the numeric classes)
On Fri, Feb 09, 2001 at 12:55:12PM -0800, William Lee Irwin III wrote:
> class  (Eq a, Show a) => Num a  where
>     (+), (-), (*)   :: a -> a -> a
>     negate          :: a -> a
>     abs, signum     :: a -> a
>     fromInteger     :: Integer -> a
>     fromInt         :: Int -> a -- partain: Glasgow extension
>
> ...  So we have two totally inappropriate operations (fromInteger and
> fromInt), ...

I beg to differ on this point.  One could provide a default
implementation for fromInt(eger) as follows, assuming a 'zero' and
'one', which do obviously fit (they are the additive and
multiplicative units):

  fromInteger n | n < 0 = negate (fromInteger (-n))
  fromInteger n = foldl (+) zero (genericReplicate n one)  -- genericReplicate from Data.List

(Of course, one could use the repeated-doubling algorithm from integer
exponentiation to make this efficient.)
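That doubling scheme can be sketched against a hypothetical minimal ring class (the names `Ring`, `zero`, `one`, `add`, `neg` are assumptions for illustration, not Prelude names):

```haskell
module Main where

-- A hypothetical minimal ring class, enough to build fromInteger.
class Ring a where
  zero, one :: a
  add       :: a -> a -> a
  neg       :: a -> a

-- O(log n) embedding of Integer by repeated doubling.
fromIntegerR :: Ring a => Integer -> a
fromIntegerR n
  | n < 0     = neg (fromIntegerR (negate n))
  | n == 0    = zero
  | even n    = double (fromIntegerR (n `div` 2))
  | otherwise = add one (double (fromIntegerR (n `div` 2)))
  where double x = add x x

-- Sanity check: Integer is itself a ring.
instance Ring Integer where
  zero = 0; one = 1; add = (+); neg = negate

main :: IO ()
main = print (map fromIntegerR [-5, 0, 7, 100] :: [Integer])
```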

Best,
	Dylan Thurston




From brian@boutel.co.nz Sat Feb 10 01:09:59 2001 Date: Sat, 10 Feb 2001 14:09:59 +1300 From: Brian Boutel brian@boutel.co.nz Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
Ketil Malde wrote:
> 
> Brian Boutel <brian@boutel.co.nz> writes:
> 
> > - Having a class hierarchy at all (or making any design decision)
> > implies compromise.
> 
> I think the argument is that we should move Eq and Show *out* of the
> Num hierarchy.  Less hierarchy - less compromise.


Can you demonstrate a revised hierarchy without Eq? What would happen to
Ord, and the numeric classes that require Eq because they need signum? 


> 
> > - The current hierarchy (and its predecessors) represent a reasonable
> > compromise that meets most needs.
> 
> Obviously a lot of people seem to think we could find compromises that
> are more reasonable.

I would put this differently. "A particular group of people want to
change the language to make it more convenient for their special
interests."

> 
> > - Users have a choice: either work within the class hierarchy and
> > accept the pain of having to define things you don't need in order
> > to get the things that come for free,
> 
> Isn't it a good idea to reduce the amount of pain?

Not always.

> 
> > or omit the instance declarations and work outside the hierarchy. In
> > that case you will not be able to use the overloaded operator
> > symbols of the class, but that is just a matter of concrete syntax,
> > and ultimately unimportant.
> 
> I don't think syntax is unimportant.
>

I wrote that *concrete* syntax is ultimately unimportant, not *syntax*.
There is a big difference. In particular, *lexical syntax*, the choice
of marks on paper used to represent a language element, is not
important, although it does give rise to arguments, as do all matters
of taste and style.

There are not enough usable operator symbols to go round, so they get
overloaded. Mathematicians have overloaded common symbols like (+) and
(*) for concepts that may have some affinity with addition and
multiplication in arithmetic, but which are actually quite different.
That's fine, because, in context, expert human readers can distinguish
what is meant. From a software engineering point of view, though, such
free overloading is dangerous, because readers may assume, incorrectly,
that an operator has properties that are typically associated with
operators using that symbol. This may not matter in a private world
where the program writer is the only person who will see and use the
code, and no mission-critical decisions depend on the results, but it
should not be the fate of Haskell to be confined to such use.

Haskell could have allowed free ad hoc overloading, but one of the first
major decisions made by the Haskell Committee in 1988 was not to do so.
Instead, it adopted John Hughes' proposal to introduce type classes to
control overloading. A symbol could only be overloaded if the whole of a
group of related symbols (the Class) was overloaded with it, and the
class hierarchy provided an even stronger constraint by restricting
overloading of the class operators to cases where other classes,
intended to be closely related, were also overloaded. This tended to
ensure that the new type at which the classes were overloaded had strong
resemblances to the standard types. Simplifying the hierarchy weakens
these constraints and so should be approached with extreme caution. Of
course, the details of the classes and the hierarchy have changed over
the years - there is, always has been and always will be pressure to
make changes to meet particular needs - but the essence is still there,
and the essence is of a general-purpose language, not a domain-specific
language for some branches of mathematics.

A consequence of this is that certain uses of overloaded symbols are
inconvenient, because they are too far from the mainstream intended
meaning. If you have such a use, and you want to write in Haskell, you
have to choose other lexical symbols to represent your operators. You
make your choice.

--brian


From john@foo.net Sat Feb 10 01:58:34 2001 Date: Fri, 9 Feb 2001 17:58:34 -0800 From: John Meacham john@foo.net Subject: Haskell Implemetors Meeting
Another Haskell -> Haskell transformation tool which I always thought
would be useful (and perhaps exists?) would be a Haskell de-moduleizer.
Basically it would take a Haskell program and follow its imports and
spit out a single monolithic Haskell module. My first thought is that
this could be done by prepending the module name to every
symbol (making sure the up/lowercases come out right, of course) in each
module and then concatenating them.

Why would I want this? Curiosity, mainly; performance, perhaps. There is
much more opportunity to optimize if separate compilation need not be
taken into account. It would be interesting to see what could be done
when not worrying about it. It would allow experimentation with
non-separate compilation compilers by allowing them to compile more
stuff 'out-of-the-box'. Also it may be that performance is so important
that one may want separate compilation while developing, but when the
final product is produced it might be worth the day it takes to compile
to get a crazy-optimized product. This could also be done
incrementally, unchanging subsystems (like GUI libraries) could be combined
this way for speed while your app code is linked normally for
development reasons.... 

	John


-- 
--------------------------------------------------------------
John Meacham   http://www.ugcs.caltech.edu/~john/
California Institute of Technology, Alum.  john@foo.net
--------------------------------------------------------------


From fjh@cs.mu.oz.au Sat Feb 10 05:48:30 2001 Date: Sat, 10 Feb 2001 16:48:30 +1100 From: Fergus Henderson fjh@cs.mu.oz.au Subject: Instances of multiple classes at once
On 08-Feb-2001, Dylan Thurston <dpt@math.harvard.edu> wrote:
> On Thu, Feb 08, 2001 at 09:41:56PM +1100, Fergus Henderson wrote:
> > One point that needs to be  resolved is the interaction with default methods.
> > Consider
> > 
> >         class foo a where
> >                 f :: ...
> > 		f = ...
> >                 f2 :: ...
> > 		f2 = ...
> > 
> >         class (foo a) => bar a where
> >                 b :: ...
> >  
> >         instance bar T where
> > 		-- no definitions for f or f2
> > 		b = 42
> > 
> > Should this define an instance for `foo T'?
> > (I think not.)
> 
> Whyever not?

Because too much Haskell code uses classes where the methods are
defined in terms of each other:

	class Foo a where
		-- you should define either f or f2
		f :: ...
		f = ... f2 ...
		f2 :: ...
		f2 = ... f ...
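The Prelude's own Eq is the canonical example of this pattern; here is a standalone copy (under the made-up name `MyEq`) showing how an instance that defines neither method typechecks yet diverges at run time, while defining one grounds the other:

```haskell
module Main where

-- Mutually recursive defaults, exactly as in the Prelude's Eq:
-- an instance should define at least one of eq/neq.
class MyEq a where
  eq, neq :: a -> a -> Bool
  eq  x y = not (neq x y)
  neq x y = not (eq x y)

data Color = Red | Green deriving Show

instance MyEq Color where
  eq Red   Red   = True
  eq Green Green = True
  eq _     _     = False
  -- neq falls through to its default, which now terminates.

main :: IO ()
main = print (neq Red Green)
```

An empty `instance MyEq Color` would be accepted by the compiler but loop forever on the first call, which is the danger being discussed.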

> Because there is no textual mention of class Foo in the
> instance for Bar?

Right, and because allowing the compiler to automatically generate
instances for class Foo without the programmer having considered
whether those instances are OK is too dangerous.

> Think about the case of a superclass with no methods;
> wouldn't you want to allow automatic instances in this case?

Yes.

I think Marcin has a better idea: 

| So maybe there should be a way to specify that default definitions
| are cyclic and some of them must be defined?

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.


From fjh@cs.mu.oz.au Sat Feb 10 05:52:39 2001 Date: Sat, 10 Feb 2001 16:52:39 +1100 From: Fergus Henderson fjh@cs.mu.oz.au Subject: Revamping the numeric classes
On 08-Feb-2001, Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> wrote:
> 
> I don't like the idea of treating the case "no explicit definitions
> were given because all have default definitions which are OK"
> differently than "some explicit definitions were given".

I don't really like it that much either, but...

> When there is a superclass, it must have an instance defined, so if
> we permit such thing at all, I would let it implicitly define all
> superclass instances not defined explicitly, or something like that.
> At least when all methods have default definitions. Yes, I know that
> they can be mutually recursive and thus all will be bottoms...

... that is the problem I was trying to solve.

> So maybe there should be a way to specify that default definitions
> are cyclic and some of them must be defined?

I agree 100%.

> It is usually written in comments anyway, because it is not immediately
> visible in the definitions.

Yes.  Much better to make it part of the language, so that the compiler
can check it.

> (now any method definition
> can be omitted even if it has no default!),

Yeah, that one really sucks.

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.


From fjh@cs.mu.oz.au Sat Feb 10 05:55:18 2001 Date: Sat, 10 Feb 2001 16:55:18 +1100 From: Fergus Henderson fjh@cs.mu.oz.au Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
On 09-Feb-2001, Brian Boutel <brian@boutel.co.nz> wrote:
> Patrik Jansson wrote:
> >
> > The fact that equality can be trivially defined as bottom does not imply
> > that it should be a superclass of Num, it only explains that there is an
> > ugly way of working around the problem.
...
> 
> There is nothing trivial or ugly about a definition that reflects
> reality and bottoms only where equality is undefined.

I disagree.  Haskell is a statically typed language, and having errors
which could easily be detected at compile time instead being deferred to
run time is ugly in a statically typed language.

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.


From qrczak@knm.org.pl Sat Feb 10 07:17:57 2001 Date: 10 Feb 2001 07:17:57 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
Sat, 10 Feb 2001 14:09:59 +1300, Brian Boutel <brian@boutel.co.nz> pisze:

> Can you demonstrate a revised hierarchy without Eq? What would happen to
> Ord, and the numeric classes that require Eq because they need signum? 

signum doesn't require Eq. You can use signum without having Eq, and
you can sometimes define signum without having Eq (e.g. on functions).
Sometimes you do require (==) to define signum, but it has nothing to
do with superclasses.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From bhalchin@hotmail.com Sat Feb 10 08:44:38 2001 Date: Sat, 10 Feb 2001 08:44:38 From: Bill Halchin bhalchin@hotmail.com Subject: Mondrian question
Hello,

    Is this the right place to ask Mondrian questions?

    I will assume so. Is Mondrian only meant to work with .NET?
    If so, what good is it as an Internet scripting language?
    I.e. what good is it as a language if it only runs in
    Microsoft's .NET environment? I tried to download it but
    found I would have to have Win2000 installed. I want to
    run it on Linux.

Regards,
Bill Halchin

_________________________________________________________________
Get your FREE download of MSN Explorer at http://explorer.msn.com



From dpt@math.harvard.edu Sat Feb 10 16:25:46 2001 Date: Sat, 10 Feb 2001 11:25:46 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Semantics of signum
On Sat, Feb 10, 2001 at 07:17:57AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> Sat, 10 Feb 2001 14:09:59 +1300, Brian Boutel <brian@boutel.co.nz> pisze:
> 
> > Can you demonstrate a revised hierarchy without Eq? What would happen to
> > Ord, and the numeric classes that require Eq because they need signum? 
> 
> signum doesn't require Eq. You can use signum without having Eq, and
> you can sometimes define signum without having Eq (e.g. on functions).
> Sometimes you do require (==) to define signum, but it has nothing to
> do with superclasses.

Can you elaborate?  What do you mean by signum for functions?  The 
pointwise signum?  Then abs would be the pointwise abs as well, right?
That might work, but I'm nervous because I don't know the semantics
for signum/abs in such generality.  What identities should they
satisfy?  (The current Haskell report says nothing about the meaning
of these operations, in the same way it says nothing about the meaning
of (+), (-), and (*).  Compare this to the situation for the Monad class,
where the fundamental identities are given.  Oddly, there are identities
listed for 'quot', 'rem', 'div', and 'mod'.  For +, -, and * I can guess
what identities they should satisfy, but not for signum and abs.)

(Note that pointwise abs of functions yields a positive function, which
are not ordered but do have a sensible notion of max and min.)

Best,
	Dylan Thurston


From qrczak@knm.org.pl Sat Feb 10 17:55:32 2001 Date: 10 Feb 2001 17:55:32 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Semantics of signum
Sat, 10 Feb 2001 11:25:46 -0500, Dylan Thurston <dpt@math.harvard.edu> pisze:

> Can you elaborate?  What do you mean by signum for functions?
> The pointwise signum?

Yes.

> Then abs would be the pointwise abs as well, right?

Yes.

> That might work, but I'm nervous because I don't know the semantics
> for signum/abs in such generality.

For example signum x * abs x == x, where (==) is not Haskell's
equality but equivalence. Similarly to (x + y) + z == x + (y + z).

If (+) can be implicitly lifted to functions, then why not signum?

Note that I would lift neither signum nor (+). I don't feel the need.
It can't be uniformly applied to e.g. (<) whose result is Bool and
not some lifted Bool, so better be consistent and lift explicitly.
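For reference, the pointwise lifting being discussed would look like this (a sketch, not a proposal; under the Haskell 98 hierarchy the Eq and Show superclass stubs are obligatory, which itself illustrates the thread's complaint — in later versions of the language Num has no such superclasses and the stubs are merely illustrative):

```haskell
module Main where

-- Stub superclass instances, needed only because Num demands them.
instance Show (a -> b) where
  show _ = "<function>"
instance Eq (a -> b) where
  _ == _ = error "no equality on functions"

-- Pointwise lifting of Num to functions.
instance Num b => Num (a -> b) where
  f + g         = \x -> f x + g x
  f - g         = \x -> f x - g x
  f * g         = \x -> f x * g x
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
  fromInteger   = const . fromInteger

main :: IO ()
main = do
  -- signum f * abs f == f, pointwise:
  let f = signum * abs :: Integer -> Integer
  print (map f [-3, 0, 5])
```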

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From wli@holomorphy.com Sat Feb 10 21:22:32 2001 Date: Sat, 10 Feb 2001 13:22:32 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Semantics of signum
On Sat, Feb 10, 2001 at 11:25:46AM -0500, Dylan Thurston wrote:
> Can you elaborate?  What do you mean by signum for functions?  The 
> pointwise signum?  Then abs would be the pointwise abs as well, right?
> That might work, but I'm nervous because I don't know the semantics
> for signum/abs in such generality.  What identities should they
> satisfy?  (The current Haskell report says nothing about the meaning
> of these operations, in the same way it says nothing about the meaning
> of (+), (-), and (*).  Compare this to the situation for the Monad class,
> where the fundamental identities are given.  Oddly, there are identities
> listed for 'quot', 'rem', 'div', and 'mod'.  For +, -, and * I can guess
> what identities they should satisfy, but not for signum and abs.)

Pointwise signum and abs are common in analysis. The identity is:

	signum f * abs f = f

I've already done the pointwise case. As I've pointed out before,
abs has the wrong type for doing anything with vector spaces; then
again, perhaps abs is a distinct notion from norm.
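As a small self-contained check of the pointwise case (signumF and absF
are local sketch definitions, not functions from any library):

```haskell
-- Pointwise signum and abs for functions, as a sketch.
signumF, absF :: (Double -> Double) -> (Double -> Double)
signumF f = signum . f
absF    f = abs . f

-- The identity  signum f * abs f == f,  checked pointwise at one argument.
holdsAt :: (Double -> Double) -> Double -> Bool
holdsAt f x = signumF f x * absF f x == f x
```

For example, `all (holdsAt (\x -> x*x - 2)) [-3, 0, 5]` holds, since for any
Double y we have signum y * abs y == y exactly (including y == 0).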

On Sat, Feb 10, 2001 at 11:25:46AM -0500, Dylan Thurston wrote:
> (Note that pointwise abs of functions yields a positive function;
> positive functions are not ordered, but they do have a sensible notion
> of max and min.)

The ordering you're looking for needs a norm. If you really want a
notion of size on functions, you'll have to do it with something like
one of the L^p norms for continua and the \ell^p norms for discrete
spaces which are instances of Enum. There is a slightly problematic
aspect with this in that the domain of the function does not entirely
determine the norm, and furthermore adequately dealing with the
different notions of measure on these spaces with the type system is
probably also intractable. The sorts of issues raised by trying to
define norms on functions probably rather quickly relegate it to
something the user should explicitly define, as opposed to something
that should appear in a Prelude standard or otherwise. That said,
one could do something like

instance Enum a => Enum (MyTree a) where
	... -- it's tricky, but possible; you figure it out

instance (Enum a, RealFloat b) => NormedSpace (MyTree a -> b) where
	norm f = approxsum $ zipWith (*) (map f . enumFrom $ toEnum 0) weights
		where
			factorial n = product [1..n]
			weights = map (\x -> 1 / factorial x) [0..]
			approxsum [] = 0
			approxsum (x:xs) | x < 1.0e-6 = 0
					 | otherwise  = x + approxsum xs

and then do the usual junk where

instance NormedSpace a => Ord a where
	f < g = norm f < norm g
	...


Cheers,
Bill


From brian@boutel.co.nz Sun Feb 11 00:37:28 2001 Date: Sun, 11 Feb 2001 13:37:28 +1300 From: Brian Boutel brian@boutel.co.nz Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
Marcin 'Qrczak' Kowalczyk wrote:
> 
> Sat, 10 Feb 2001 14:09:59 +1300, Brian Boutel <brian@boutel.co.nz> writes:
> 
> > Can you demonstrate a revised hierarchy without Eq? What would happen to
> > Ord, and the numeric classes that require Eq because they need signum?
> 
> signum doesn't require Eq. You can use signum without having Eq, and
> you can sometimes define signum without having Eq (e.g. on functions).
> Sometimes you do require (==) to define signum, but it has nothing to
> do with superclasses.
> 

Let me restate my question more carefully:

Can you demonstrate a revised hierarchy without Eq? What would happen to
Ord and the numeric classes with default class method definitions that
use (==) either explicitly or in pattern matching against numeric
literals? Both Integral and RealFrac do this to compare or test the
value of signum.

In an instance declaration, if a method requires operations of another
class which is not a superclass of the class being instanced, it is
sufficient to place the requirement in the context, but for default
class method definitions, all class methods used must belong to the
class being defined or its superclasses.
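To illustrate the point with a minimal, purely hypothetical class: the
default definition below is legal only because Eq appears as a superclass
constraint; remove Eq from the context and the default no longer typechecks.

```haskell
-- Hypothetical class, not from any proposal: the default for isUnit
-- uses (==), which is in scope inside a default method only because
-- Eq is a superclass of HasUnit.
class (Eq a, Num a) => HasUnit a where
  isUnit :: a -> Bool
  isUnit x = x == 1      -- (==) comes from the superclass Eq

instance HasUnit Integer   -- relies entirely on the default
```

An instance context, by contrast, could mention Eq freely even if it were
not a superclass.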


--brian


From brian@boutel.co.nz Sun Feb 11 01:27:35 2001 Date: Sun, 11 Feb 2001 14:27:35 +1300 From: Brian Boutel brian@boutel.co.nz Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
Fergus Henderson wrote:
> 
> On 09-Feb-2001, Brian Boutel <brian@boutel.co.nz> wrote:
> > Patrik Jansson wrote:
> > >
> > > The fact that equality can be trivially defined as bottom does not imply
> > > that it should be a superclass of Num, it only explains that there is an
> > > ugly way of working around the problem.
> ...
> >
> > There is nothing trivial or ugly about a definition that reflects
> > reality and bottoms only where equality is undefined.
> 
> I disagree.  Haskell is a statically typed language, and having errors
> which could easily be detected at compile instead being deferred to
> run time is ugly in a statically typed language.

There may be some misunderstanding here. If you are talking about type
for which equality is always undefined, then I agree with you, but that
is not what I was talking about. I was thinking about types where
equality is defined for some pairs of argument values and undefined for
others - I think the original example was some kind of arbitrary
precision reals. My remark about "a definition that reflects reality and
bottoms only where equality is undefined" was referring to this
situation.

Returning to the basic issue, I understood the desire to remove Eq as a
superclass of Num was so that people were not required to implement
equality if they did not need it, not that there were significant
numbers of useful numeric types for which equality was not meaningful. 

Whichever of these was meant, I feel strongly that accommodating this
and other similar changes by weakening the constraints on what Num in
Haskell implies goes too far. It devalues the class structure in
Haskell to the point where its purpose, namely to control ad hoc
polymorphism in a way that ensures that operators are overloaded only
on closely related types, is lost, and one might as well abandon
classes and allow arbitrary overloading.

--brian


From dpt@math.harvard.edu Sun Feb 11 02:00:38 2001 Date: Sat, 10 Feb 2001 21:00:38 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Show, Eq not necessary for Num
On Sun, Feb 11, 2001 at 01:37:28PM +1300, Brian Boutel wrote:
> Let me restate my question more carefully:
> 
> Can you demonstrate a revised hierarchy without Eq? What would happen to
> Ord and the numeric classes with default class method definitions that
> use (==) either explicitly or in pattern matching against numeric
> literals? Both Integral and RealFrac do this to compare or test the
> value of signum.

I've been working on writing up my preferred hierarchy, but the short
answer is that classes that are currently derived from Ord often do
require Eq as superclasses.

In the specific cases: I think possibly divMod and quotRem should be
split into separate classes.  It seems to me that divMod is the
more fundamental pair: it satisfies the identity
  mod (a+b) b === mod a b
  div (a+b) b === 1 + div a b
in addition to
  (div a b)*b + mod a b === a.
These identities are not enough to specify divMod completely; another
reasonable choice for Integers would be to round to the nearest
integer.  But they are enough to make it useful for many applications.
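For Haskell 98's actual divMod on Int (which rounds toward negative
infinity), these identities can be checked directly; a small test sketch,
not part of the proposal:

```haskell
-- Check Dylan's three identities at a single pair of operands:
--   mod (a+b) b == mod a b
--   div (a+b) b == 1 + div a b
--   (div a b)*b + mod a b == a
divModLaws :: Int -> Int -> Bool
divModLaws a b = mod (a + b) b == mod a b
              && div (a + b) b == 1 + div a b
              && div a b * b + mod a b == a
```

For instance, `and [divModLaws a b | a <- [-20..20], b <- [-20..20], b /= 0]`
holds, including for negative divisors.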
quotRem is also useful (although it only satisfies the second of
these), and does require the ordering (and ==) to define sensibly, so
I would make it a method of a subclass of Ord (and hence Eq).  So I
would tend to put these into two separate classes:

class (Ord a, Num a) => Real a

class (Num a) => Integral a where
  div, mod  :: a -> a -> a
  divMod :: a -> a -> (a,a)

class (Integral a, Real a) => RealIntegral a where
  quot, rem :: a -> a -> a
  quotRem :: a -> a -> (a,a)

I haven't thought about the operations in RealFrac and their semantics
enough to say much sensible, but probably they will again require Ord
as a superclass.

In general, I think a good approach is to think carefully about the
semantics of a class and its operations, and to declare exactly the
superclasses that are necessary to define the semantics.

Note that sometimes there are no additional operations.  For instance,
declaring a type to be an instance of Real should mean that the
ordering (from Ord) and the numeric structure (from Num) are
compatible.

Note also that we cannot require Eq to state laws (the '===' above);
consider the laws required for the Monad class to convince yourself.

Best,
	Dylan Thurston



From fjh@cs.mu.oz.au Sun Feb 11 07:24:33 2001 Date: Sun, 11 Feb 2001 18:24:33 +1100 From: Fergus Henderson fjh@cs.mu.oz.au Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
On 11-Feb-2001, Brian Boutel <brian@boutel.co.nz> wrote:
> Fergus Henderson wrote:
> > 
> > On 09-Feb-2001, Brian Boutel <brian@boutel.co.nz> wrote:
> > > Patrik Jansson wrote:
> > > >
> > > > The fact that equality can be trivially defined as bottom does not imply
> > > > that it should be a superclass of Num, it only explains that there is an
> > > > ugly way of working around the problem.
> > ...
> > >
> > > There is nothing trivial or ugly about a definition that reflects
> > > reality and bottoms only where equality is undefined.
> > 
> > I disagree.  Haskell is a statically typed language, and having errors
> > which could easily be detected at compile instead being deferred to
> > run time is ugly in a statically typed language.
> 
> There may be some misunderstanding here. If you are talking about type
> for which equality is always undefined, then I agree with you, but that
> is not what I was talking about. I was thinking about types where
> equality is defined for some pairs of argument values and undefined for
> others - I think the original example was some kind of arbitrary
> precision reals.

The original example was treating functions as a numeric type.  In the
case of functions, computing equality is almost always infeasible.
But you can easily define addition etc. pointwise:
	
	f + g = (\ x -> f x + g x)

> Returning to the basic issue, I understood the desire to remove Eq as a
> superclass of Num was so that people were not required to implement
> equality if they did not need it, not that there were significant
> numbers of useful numeric types for which equality was not meaningful. 

The argument is the latter, with functions as the canonical example.

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.


From qrczak@knm.org.pl Sun Feb 11 07:59:38 2001 Date: 11 Feb 2001 07:59:38 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
Sun, 11 Feb 2001 13:37:28 +1300, Brian Boutel <brian@boutel.co.nz> writes:

> Can you demonstrate a revised hierarchy without Eq? What would
> happen to Ord and the numeric classes with default class method
> definitions that use (==) either explicitly or in pattern matching
> against numeric literals?

OK, then you can't write these default method definitions.

I'm against removing Eq from the numeric hierarchy, against making Num
instances for functions, but I would probably remove Show. I haven't
seen a sensible proposal of a replacement of the whole hierarchy.

> In an instance declaration, if a method requires operations of
> another class which is not a superclass of the class being instanced,
> it is sufficient to place the requirement in the context,

Better: it is sufficient if the right instance is defined somewhere.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From wli@holomorphy.com Sun Feb 11 10:01:02 2001 Date: Sun, 11 Feb 2001 02:01:02 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
Sun, 11 Feb 2001 13:37:28 +1300, Brian Boutel <brian@boutel.co.nz> writes:
>> Can you demonstrate a revised hierarchy without Eq? What would
>> happen to Ord and the numeric classes with default class method
>> definitions that use (==) either explicitly or in pattern matching
>> against numeric literals?

I anticipate that some restructuring of the numeric classes must be
done in order to accomplish this. I am, of course, attempting to
contrive such a beast for my own personal use.

On Sun, Feb 11, 2001 at 07:59:38AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> OK, then you can't write these default method definitions.
> I'm against removing Eq from the numeric hierarchy, against making Num
> instances for functions, but I would probably remove Show. I haven't
> seen a sensible proposal of a replacement of the whole hierarchy.

Well, there are a couple of problems with someone like myself trying
to make such a proposal. First, I'm a bit too marginalized and/or
committed to a radical alternative. Second, I don't have the right
associations or perhaps other resources.

Removing Eq sounds like a good idea to me, in all honesty, though I
think numeric instances for functions (at least by default) aren't
great ideas. More details follow:

Regarding Eq, there are other types besides functions which might
not be good ideas to define equality on, either because they're not
efficiently implementable or are still inappropriate. Matrix types
aren't good candidates for defining equality, for one. Another type
on which you might not want to define equality is formal power series
represented by infinite lists, since equality tests will never
terminate. A third counterexample comes, of course, from graphics,
where one might want to conveniently scale and translate solids.
Testing meshes and surface representations for equality is once
again not a great idea. Perhaps these counterexamples are a little
contrived, but perhaps other people can come up with better ones.
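To make the power-series example concrete (Series, geometric, and eqUpTo
are illustrative names, not from any library): a structural (==) would
have to compare infinitely many coefficients, so the best total operation
is comparison up to a chosen degree.

```haskell
-- Formal power series as an infinite list of coefficients.
newtype Series = Series [Integer]

-- 1/(1-x) = 1 + x + x^2 + ...
geometric :: Series
geometric = Series (repeat 1)

-- A structural equality on Series would never return for equal
-- arguments; comparing only the first n coefficients terminates.
eqUpTo :: Int -> Series -> Series -> Bool
eqUpTo n (Series xs) (Series ys) = take n xs == take n ys
```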

As far as the function instances of numeric types, there are some
nasty properties that they have that probably make it a bad idea.
In particular, I discovered that numeric literals' fromInteger
property creates the possibility that something which is supposed
to be a scalar or some other numeric result might accidentally be
applied. For instance, given an expression with an intermediate
numeric result like:

	f u v . g x y $ h z

which is expected to produce a number, one could accidentally apply
a numeric literal or something bound to one to some arguments, creating
a bug. So I am in at least partial agreement, though I think such
instances should be available in controlled circumstances. Local module
imports and/or scoped instances might help here, or perhaps separating
the code that relies on them into a module where the instance is in
scope, since they probably need control that tight.
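The pitfall can be demonstrated concretely. With a pointwise Num instance
for functions in scope (a sketch; modern GHC accepts this instance head
directly), a numeric literal itself becomes a function and can be applied
to arguments without any type error:

```haskell
-- Pointwise numeric operations on functions; fromInteger lifts a
-- scalar to a constant function.
instance Num b => Num (a -> b) where
  f + g       = \x -> f x + g x
  f - g       = \x -> f x - g x
  f * g       = \x -> f x * g x
  negate f    = negate . f
  abs f       = abs . f
  signum f    = signum . f
  fromInteger = const . fromInteger

-- This typechecks: the literal 3 elaborates to const 3, which then
-- swallows the (presumably unintended) argument.
oops :: Integer
oops = 3 "an argument that is silently ignored"
```

Exactly the kind of bug Bill describes: a value meant to be a scalar is
accidentally applied, and the type checker cannot object.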

Sun, 11 Feb 2001 13:37:28 +1300, Brian Boutel <brian@boutel.co.nz> writes:
>> In an instance declaration, if a method requires operations of
>> another class which is not a superclass of the class being instanced,
>> it is sufficient to place the requirement in the context,

On Sun, Feb 11, 2001 at 07:59:38AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> Better: it is sufficient if the right instance is defined somewhere.

Again, I'd be careful with this idea. It's poor design to unnecessarily
restrict the generality of code. Of course, it's also poor design not
to enforce necessary conditions in the type system, which is why
library design is nontrivial. And keeping the result simple enough for
use by the general populace (or whatever semblance thereof exists
within the Haskell community) might well conflict with the desires of
people like myself, who could easily fall prey to the accusation of
trying to turn Haskell into a computer algebra system; that adds yet
another constraint, making the library design even tougher.


Cheers,
Bill


From brian@boutel.co.nz Sun Feb 11 10:14:44 2001 Date: Sun, 11 Feb 2001 23:14:44 +1300 From: Brian Boutel brian@boutel.co.nz Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
Marcin 'Qrczak' Kowalczyk wrote:


> I'm against removing Eq from the numeric hierarchy, against making Num
> instances for functions, but I would probably remove Show. I haven't
> seen a sensible proposal of a replacement of the whole hierarchy.
> 

Then we probably are in agreement. 

--brian


From wli@holomorphy.com Sun Feb 11 13:07:21 2001 Date: Sun, 11 Feb 2001 05:07:21 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
On 11-Feb-2001, Brian Boutel <brian@boutel.co.nz> wrote:
>> There may be some misunderstanding here. If you are talking about type
>> for which equality is always undefined, then I agree with you, but that
>> is not what I was talking about. I was thinking about types where
>> equality is defined for some pairs of argument values and undefined for
>> others - I think the original example was some kind of arbitrary
>> precision reals.

On Sun, Feb 11, 2001 at 06:24:33PM +1100, Fergus Henderson wrote:
> The original example was treating functions as a numeric type.  In the
> case of functions, computing equality is almost always infeasible.
> But you can easily define addition etc. pointwise:
> 	
> 	f + g = (\ x -> f x + g x)

I have a fairly complete implementation of this with dummy instances of
Eq and Show for those who want to see the consequences of this. I found,
interestingly enough, that any type constructor f with the following
three properties could have an instance of Num defined upon f a:

	(1) it has a unary constructor to lift scalars 
	(2) it has a Functor instance
	(3) it has an analogue of zip which can be defined upon it

or, more precisely:

\begin{code}
instance (Eq (f a), Show (f a), Num a, Functor f,
			Zippable f, HasUnaryCon f) => Num (f a)
	where
		f + g = fmap (uncurry (+)) $ fzip f g
		f * g = fmap (uncurry (*)) $ fzip f g
		f - g = fmap (uncurry (-)) $ fzip f g
		negate = fmap negate
		abs = fmap abs
		signum = fmap signum
		fromInteger = unaryCon . fromInteger

class Zippable f where
	fzip :: f a -> f b -> f (a,b)

class HasUnaryCon f where
	unaryCon :: a -> f a

instance Functor ((->) a) where
	fmap = (.)

instance Zippable ((->) a) where
	fzip f g = \x -> (f x, g x)

instance HasUnaryCon ((->) a) where
	unaryCon = const
\end{code}

and this generalizes nicely to other data types:

\begin{code}
instance Zippable Maybe where
	fzip (Just x) (Just y) = Just (x,y)
	fzip _ Nothing = Nothing
	fzip Nothing _ = Nothing

instance HasUnaryCon Maybe where
	unaryCon = Just

instance Zippable [ ] where
	fzip = zip

instance HasUnaryCon [ ] where
	unaryCon = cycle . (:[])
\end{code}
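As a usage sketch of the machinery above (the two classes and the Maybe
instances are reproduced minimally here so the fragment stands alone;
addF mirrors the shape of the generic (+) without declaring an orphan
Num instance):

```haskell
class Zippable f where
  fzip :: f a -> f b -> f (a, b)

class HasUnaryCon f where
  unaryCon :: a -> f a

instance Zippable Maybe where
  fzip (Just x) (Just y) = Just (x, y)
  fzip _        _        = Nothing

instance HasUnaryCon Maybe where
  unaryCon = Just

-- The shape of the generic (+): zip the structures, add pointwise.
addF :: (Functor f, Zippable f, Num a) => f a -> f a -> f a
addF f g = fmap (uncurry (+)) (fzip f g)
```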

On 11-Feb-2001, Brian Boutel <brian@boutel.co.nz> wrote:
>> Returning to the basic issue, I understood the desire to remove Eq as a
>> superclass of Num was so that people were not required to implement
>> equality if they did not need it, not that there were significant
>> numbers of useful numeric types for which equality was not meaningful. 

On Sun, Feb 11, 2001 at 06:24:33PM +1100, Fergus Henderson wrote:
> The argument is the latter, with functions as the canonical example.

Well, usually equality as a mathematical concept is meaningful, but
either not effectively or efficiently computable. Given an enumerable
and bounded domain, equality may be defined (perhaps inefficiently)
on functions by

\begin{code}
instance (Enum a, Bounded a, Eq b) => Eq (a->b) where
	f == g = all (uncurry (==))
			$ map (\x -> (f x, g x)) [minBound..maxBound]
\end{code}
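Unfolded slightly, the same idea reads (eqFn is a local sketch name):

```haskell
-- Decidable extensional equality for functions over a finite,
-- enumerable domain: compare the two graphs point by point.
eqFn :: (Enum a, Bounded a, Eq b) => (a -> b) -> (a -> b) -> Bool
eqFn f g = all (\x -> f x == g x) [minBound .. maxBound]
```

For Bool -> Bool this checks just two points, e.g. `eqFn not (\b -> not b)`
holds while `eqFn not id` does not; for larger domains the cost grows with
the size of the domain, which is the inefficiency mentioned above.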

and as I've said in another post, it is also not good to force equality
instances on data structures expected to be infinite or very large, on
types where the semantics of equality make it difficult to compute, or
perhaps even in cases where it's just not useful.


Cheers,
Bill


From Tom.Pledger@peace.com Sun Feb 11 21:58:40 2001 Date: Mon, 12 Feb 2001 10:58:40 +1300 From: Tom Pledger Tom.Pledger@peace.com Subject: Revamping the numeric classes
Marcin 'Qrczak' Kowalczyk writes:
 | Fri, 9 Feb 2001 17:29:09 +1300, Tom Pledger <Tom.Pledger@peace.com> writes:
 | 
 | >     (x + y) + z
 | > 
 | > we know from the explicit type signature (in your question that I was
 | > responding to) that x,y::Int and z::Double.  Type inference does not
 | > need to treat x or y up, because it can take the first (+) to be Int
 | > addition.  However, it must treat the result (x + y) up to the most
 | > specific supertype which can be added to a Double.
 | 
 | Approach it differently. z is Double, (x+y) is added to it, so
 | (x+y) must have type Double.

That's a restriction I'd like to avoid.  Instead: ...so the most
specific common supertype of Double and (x+y)'s type must support
addition.

 | This means that x and y must have type Double.  This is OK, because
 | they are Ints now, which can be converted to Double.
 | 
 | Why is your approach better than mine?

It used a definition of (+) which was a closer fit for the types of x
and y.

 :
 | > h:: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool
 | 
 | This type is ambiguous: the type variable b is needed in the
 | context but not present in the type itself, so it can never be
 | determined from the usage of h.

Yes, I rashly glossed over the importance of having well-defined most
specific common supertype (MSCS) and least specific common subtype
(LSCS) operators in a subtype lattice.  Here's a more respectable
version:

    h :: Eq (MSCS a Int) => (Int -> a) -> Bool

 | > That can be inferred by following the structure of the term.
 | > Function terms do seem prone to an accumulation of deferred
 | > subtype constraints.
 | 
 | When function application generates a constraint, the language gets
 | ambiguous as hell. Applications are found everywhere through the
 | program! Very often the type of the argument or result of an
 | internal application does not appear in the type of the whole
 | function being defined, which makes it ambiguous.
 | 
 | Not to mention that there would be *LOTS* of these constraints.
 | Application is used everywhere. It's important to have its typing
 | rule simple and cheap. Generating a constraint for every
 | application is not an option.

These constraints tend to get discharged whenever the result of an
application is not another function.  The hellish ambiguities can be
substantially tamed by insisting on a properly constructed subtype
lattice.

Anyway, since neither of us is about to have a change of mind, and
nobody else is showing an interest in this branch of the discussion,
it appears that the most constructive thing for me to do is return to
try-to-keep-quiet-about-subtyping-until-I've-done-it-in-THIH mode.

Regards,
Tom


From dpt@math.harvard.edu Sun Feb 11 22:42:15 2001 Date: Sun, 11 Feb 2001 17:42:15 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: A sample revised prelude for numeric classes

I've started writing up a more concrete proposal for what I'd like the
Prelude to look like in terms of numeric classes.  Please find it
attached below.  It's still a draft and rather incomplete, but please
let me know any comments, questions, or suggestions.

Best,
	Dylan Thurston

[Attachment: NumPrelude.lhs]

Revisiting the Numeric Classes
------------------------------
The Prelude for Haskell 98 offers a well-considered set of numeric
classes which cover the standard numeric types (Integer, Int,
Rational, Float, Double, Complex) quite well.  But they offer limited
extensibility and have a few other flaws.  In this proposal we will
revisit these classes, addressing the following concerns:

(1) The current Prelude defines no semantics for the fundamental
    operations.  For instance, presumably addition should be
    associative (or come as close as feasible), but this is not
    mentioned anywhere.

(2) There are some superfluous superclasses.  For instance, Eq and
    Show are superclasses of Num.  Consider the data type

> data IntegerFunction a = IF (a -> Integer)

    One can reasonably define all the methods of Num for
    IntegerFunction a (satisfying good semantics), but it is
    impossible to define non-bottom instances of Eq and Show.

    In general, a superclass relationship should indicate some semantic
    connection between the two classes.

(3) In a few cases, there is a mix of semantic operations and
    representation-specific operations.  toInteger, toRational, and
    the various operations in RealFloating (decodeFloat, ...) are the
    main examples.

(4) In some cases, the hierarchy is not finely-grained enough:
    operations that are often defined independently are lumped
    together.  For instance, in a financial application one might want
    a type "Dollar", or in a graphics application one might want a
    type "Vector".  It is reasonable to add two Vectors or Dollars,
    but not, in general, reasonable to multiply them.  But the
    programmer is currently forced to define a method for (*) when she
    defines a method for (+).

In specifying the semantics of type classes, I will state laws as
follows:
  (a + b) + c === a + (b + c)
The intended meaning is extensional equality: the rest of the program
should behave in the same way if one side is replaced with the
other.  Unfortunately, the laws are frequently violated by standard
instances; the law above, for instance, fails for Float:

  (100000000000000000000.0 + (-100000000000000000000.0)) + 1.0 = 1.0
  100000000000000000000.0 + ((-100000000000000000000.0) + 1.0) = 0.0

Thus these laws should be interpreted as guidelines rather than
absolute rules.  In particular, the compiler is not allowed to use
them.  Unless stated otherwise, default definitions should also be
taken as laws.
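The Float example above can be reproduced directly (GHC's Float; the
exact results follow from IEEE single-precision rounding):

```haskell
big :: Float
big = 1.0e20

-- (big + (-big)) + 1.0: the big terms cancel exactly, leaving 1.0.
assocLeft :: Float
assocLeft = (big + negate big) + 1.0

-- big + ((-big) + 1.0): 1.0 is far below one ulp of big, so it is
-- absorbed by the rounding, and the sum collapses to 0.0.
assocRight :: Float
assocRight = big + (negate big + 1.0)
```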

This version is fairly conservative.  I have retained the names for
classes with similar functions as far as possible, I have not made
some distinctions that could reasonably be made, and I have tried to
opt for simplicity over generality.  The main non-conservative change
is the Powerful class, which allows a unification of the Haskell 98
operators (^), (^^), and (**).  There are some problems with it, but I
left it in because it might be of interest.  It is very easy to change
back to the Haskell 98 situation.

I sometimes use Simon Peyton Jones' pattern guards in writing
functions.  This can (as always) be transformed into Haskell 98
syntax.

> module NumPrelude where
> import qualified Prelude as P
> -- Import some standard Prelude types verbatim
> import Prelude(Bool(..),Maybe(..),Eq(..),Either(..),Ordering(..),
> 	         Ord(..),Show(..),Read(..),id)
>
> infixr 8  ^
> infixl 7  *
> infixl 7 /, `quot`, `rem`, `div`, `mod`
> infixl 6  +, -
>
> class Additive a where
>     (+), (-) :: a -> a -> a
>     negate   :: a -> a
>     zero     :: a
>
>      -- Minimal definition: (+), zero, and (negate or (-))
>     negate a = zero - a
>     a - b    = a + (negate b)

Additive a encapsulates the notion of a commutative group, specified
by the following laws:

          a + b === b + a
    (a + b) + c === a + (b + c)
       zero + a === a
 a + (negate a) === zero

Typical examples include integers, dollars, and vectors.
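A sketch of an Additive-only type in this spirit (Dollar is the
hypothetical example from the introduction; the class is reproduced here
with renamed members so the fragment stands alone next to the standard
Prelude):

```haskell
-- The Additive interface, with (+) renamed to (.+.) so this sketch
-- does not clash with the Prelude's (+).
class Additive' a where
  (.+.) :: a -> a -> a
  zero' :: a
  neg'  :: a -> a

-- Dollars can be added and negated, but multiplying two Dollars makes
-- no sense, so no Num-style (*) is ever demanded of this type.
newtype Dollar = Dollar Integer deriving (Eq, Show)

instance Additive' Dollar where
  Dollar a .+. Dollar b = Dollar (a + b)
  zero' = Dollar 0
  neg' (Dollar a) = Dollar (negate a)
```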

> class (Additive a) => Num a where
>     (*)         :: a -> a -> a
>     one	  :: a
>     fromInteger :: Integer -> a
>
>       -- Minimal definition: (*), one
>     fromInteger 0         = zero
>     fromInteger n | n < 0 = negate (fromInteger (-n))
>     fromInteger n | n > 0 = reduceRepeat (+) one n

Num encapsulates the mathematical structure of a (not necessarily
commutative) ring, with the laws

  a * (b * c) === (a * b) * c
      one * a === a
      a * one === a
  a * (b + c) === a * b + a * c

Typical examples include integers, matrices, and quaternions.

"reduceRepeat op a n" is an auxiliary function that, for an
associative operation "op", computes the same value as

  reduceRepeat op a n = foldr1 op (genericReplicate n a)

but applies "op" O(log n) times.  A sample implementation is below.
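In the meantime, one possible O(log n) implementation, as a sketch
(binary, russian-peasant style; not necessarily the sample referred to):

```haskell
-- reduceRepeat op a n combines n copies of a with the associative
-- operation op, using only O(log n) applications of op.
reduceRepeat :: (a -> a -> a) -> a -> Integer -> a
reduceRepeat op a n
  | n <= 0    = error "reduceRepeat: count must be positive"
  | n == 1    = a
  | even n    = reduceRepeat op (op a a) (n `div` 2)
  | otherwise = op a (reduceRepeat op (op a a) (n `div` 2))
```

For example, `reduceRepeat (*) 2 10` computes 2^10 with four
multiplications instead of nine.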

> class (Num a) => Integral a where
>     div, mod :: a -> a -> a
>     divMod :: a -> a -> (a,a)
>     gcd, lcm :: a -> a -> a
>     extendedGCD :: a -> a -> (a,a,a)
>
>      -- Minimal definition: divMod or (div and mod)
>      --   and extendedGCD, if the provided definition does not work
>     div a b | (d,_) <- divMod a b = d
>     mod a b | (_,m) <- divMod a b = m
>     divMod a b = (div a b, mod a b)
>     gcd a b | (_,_,g) <- extendedGCD a b = g
>     extendedGCD a b = ... -- insert Euclid's algorithm here
>     lcm a b = (a `div` gcd a b) * b

Integral has the mathematical structure of a unique factorization
domain, satisfying the laws

                      a * b === b * a
  (div a b) * b + (mod a b) === a
              mod (a+k*b) b === mod a b
            a `mod` gcd a b === zero
                    gcd a b === gcd b a
            gcd (a + k*b) b === gcd a b
                  a*c + b*d === g where (c, d, g) = extendedGCD a b

TODO: quot, rem partially defined.  Explain.
The default definition of extendedGCD above should not be taken as
canonical (unlike most default definitions); for some Integral
instances, the algorithm could diverge, might not satisfy the laws
above, etc.

Typical examples of Integral include integers and polynomials over a
field.

Note that, unlike in Haskell 98, gcd and lcm are member functions of
Integral.  extendedGCD is new.

> class (Num a) => Fractional a where
>     (/)          :: a -> a -> a
>     recip        :: a -> a
>     fromRational :: Rational -> a
>
>      -- Minimal definition: recip or (/)
>     recip a = one / a
>     a / b = a * (recip b)
>     fromRational r = fromInteger (numerator r) / fromInteger (denominator r)


Fractional encapsulates the mathematical structure of a field,
satisfying the laws

           a * b === b * a
   a * (recip a) === one

TODO: (/) is only partially defined.  How to specify?  Add a member
      isInvertible :: a -> Bool?
Typical examples include rationals, the real numbers, and rational
functions (ratios of polynomials).

> class (Num a, Additive b) => Powerful a b where
>     (^) :: a -> b -> a
> instance (Num a) => Powerful a (Positive Integer) where
>     a ^ 0 = one
>     a ^ n = reduceRepeat (*) a n
> instance (Fractional a) => Powerful a Integer where
>     a ^ n | n < 0 = recip (a ^ (negate n))
>     a ^ n         = a ^ (positive n)

Powerful is the class of pairs of numbers which can be exponentiated,
with the following laws:

   (a ^ b) * (a ^ c) === a ^ (b + c)
             a ^ one === a

I don't know of interesting examples of this structure besides the
instances defined above and the Floating class below.
"Positive" is a type constructor that asserts that its argument is >=
0; "positive" makes this assertion.  I am not sure how this will
interact with defaulting arguments so that one can write

  x ^ 5

without constraining x to be of Fractional type.

> -- Note: I think "Analytic" would be a better name than "Floating".
> class (Fractional a, Powerful a a) => Floating a where
>     pi                  :: a
>     exp, log, sqrt      :: a -> a
>     logBase             :: a -> a -> a
>     sin, cos, tan       :: a -> a
>     asin, acos, atan    :: a -> a
>     sinh, cosh, tanh    :: a -> a
>     asinh, acosh, atanh :: a -> a
> 
>         -- Minimal complete definition:
>         --      pi, exp, log, sin, cos, sinh, cosh
>         --      asinh, acosh, atanh
>     x ^ y            =  exp (log x * y)
>     logBase x y      =  log y / log x
>     sqrt x           =  x ^ 0.5
>     tan  x           =  sin  x / cos  x
>     tanh x           =  sinh x / cosh x

Floating is the type of numbers supporting various analytic
functions.  Examples include real numbers, complex numbers, and
computable reals represented as a lazy list of rational
approximations.

Note the default declaration for a superclass.  See the comments
below, under "Instance declarations for superclasses".

The semantics of these operations are rather ill-defined because of
branch cuts, etc.

> class (Num a, Ord a) => Real a where
>     abs    :: a -> a
>     signum :: a -> a
>
>       -- Minimal definition: nothing
>     abs x    = max x (negate x)
>     signum x = case compare x zero of GT -> one
>				        EQ -> zero
>				        LT -> negate one

This is the type of an ordered ring, satisfying the laws

             a * b === b * a
     a + (max b c) === max (a+b) (a+c)
  negate (max b c) === min (negate b) (negate c)
     a * (max b c) === max (a*b) (a*c) where a >= 0

Note that abs is in a rather different place than it is in the Haskell
98 Prelude.  In particular,

  abs :: Complex -> Complex

is not defined.  To me, this seems to have the wrong type anyway;
Complex.magnitude has the correct type.

> class (Real a, Floating a) => RealFrac a where
> -- lifted directly from Haskell 98 Prelude
>     properFraction   :: (Integral b) => a -> (b,a)
>     truncate, round  :: (Integral b) => a -> b
>     ceiling, floor   :: (Integral b) => a -> b
> 
>         -- Minimal complete definition:
>         --      properFraction
>     truncate x       =  m  where (m,_) = properFraction x
>     
>     round x          =  let (n,r) = properFraction x
>                             m     = if r < 0 then n - 1 else n + 1
>                           in case signum (abs r - 0.5) of
>                                 -1 -> n
>                                 0  -> if even n then n else m
>                                 1  -> m
>     
>     ceiling x        =  if r > 0 then n + 1 else n
>                         where (n,r) = properFraction x
>     
>     floor x          =  if r < 0 then n - 1 else n
>                         where (n,r) = properFraction x

As an aside, let me note the similarities between "properFraction x"
and "x `divMod` 1" (if that were defined).
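
As a sanity check on the default definitions above, the Haskell 98
Prelude's versions (which use the same defaults) behave like this:

```haskell
-- Illustrating truncate/round/ceiling/floor as defined by the
-- defaults above, using the standard Prelude's versions.
rounding :: [Integer]
rounding =
  [ truncate (3.7  :: Double)   -- drops the fractional part
  , round    (2.5  :: Double)   -- ties go to the even neighbour
  , round    (3.5  :: Double)   -- ties go to the even neighbour
  , ceiling  (3.2  :: Double)
  , floor    (-3.2 :: Double)
  ]
-- rounding == [3, 2, 4, 4, -4]
```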

> class (RealFrac a, Floating a) => RealFloat a where
>     atan2            :: a -> a -> a
>     atan2 y x
>       | x>0           =  atan (y/x)
>       | x==0 && y>0   =  pi/2
>       | x<0  && y>0   =  pi + atan (y/x) 
>       |(x<=0 && y<0)  ||
>        (x<0 && isNegativeZero y) ||
>        (isNegativeZero x && isNegativeZero y)
>                       = -atan2 (-y) x
>       | y==0 && (x<0 || isNegativeZero x)
>                       =  pi    -- must be after the previous test on zero y
>       | x==0 && y==0  =  y     -- must be after the other double zero tests
>       | otherwise     =  x + y -- x or y is a NaN, return a NaN (via +)
>
> class (Real a, Integral a) => RealIntegral a where
>     quot, rem        :: a -> a -> a   
>     quotRem          :: a -> a -> (a,a)
>
>       -- Minimal definition: toInteger
>     -- insert quot, rem, quotRem definition here
>
> --- Numerical functions
> subtract         :: (Additive a) => a -> a -> a
> subtract         =  flip (-)
>
> even, odd        :: (Integral a) => a -> Bool
> even n           =  n `mod` 2 == 0
> odd              =  not . even


Additional standard libraries would include IEEEFloat (including the
bulk of the functions in Haskell 98's RealFloat class), VectorSpace,
Ratio, and Lattice.  Let me explain that last one.

-----

> module Lattice where
> class Lattice a where
>     meet, join :: a -> a -> a

Mathematically, a lattice (more properly, a semilattice) is a space
with operations "meet" and "join" which are idempotent, commutative,
associative, and (usually) distribute over each other.  Examples
include real-valued functions with (pointwise) max and min, and sets
with union and intersection.  It would be reasonable to make Ord a
subclass of this, but it would probably complicate the class hierarchy
too much for the gain.  The advantage of Lattice over Ord is that it
is better defined.  Thus we can define a class

> class (Lattice a, Num a) => NumLattice a where
>     abs :: a -> a -> a
>     abs x = meet x (negate x)

and real-valued functions and computable reals can both be declared as
instances of this class.
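
As a quick sketch of the pointwise-functions example (hypothetical
instances, written against the draft's Lattice class rather than
anything in the actual Prelude; note the orientation meet = max is
chosen to match the draft's "abs x = meet x (negate x)"):

```haskell
class Lattice a where
    meet, join :: a -> a -> a

-- Reals under max/min (meet = max so that abs comes out right).
instance Lattice Double where
    meet = max
    join = min

-- Real-valued functions form a lattice pointwise, as the text says.
instance Lattice b => Lattice (a -> b) where
    meet f g = \x -> f x `meet` g x
    join f g = \x -> f x `join` g x

-- The NumLattice-style absolute value, specialised to Double.
absL :: Double -> Double
absL x = meet x (negate x)
```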



From ashley@semantic.org Mon Feb 12 00:03:37 2001 Date: Sun, 11 Feb 2001 16:03:37 -0800 From: Ashley Yakeley ashley@semantic.org Subject: A sample revised prelude for numeric classes
At 2001-02-11 14:42, Dylan Thurston wrote:

>I've started writing up a more concrete proposal for what I'd like the
>Prelude to look like in terms of numeric classes.  Please find it
>attached below.  It's still a draft and rather incomplete, but please
>let me know any comments, questions, or suggestions.

Apologies if this has been discussed and I missed it. When it comes to 
writing a 'geek' prelude, what was wrong with the Basic Algebra Proposal 
found in <ftp://ftp.botik.ru/pub/local/Mechveliani/basAlgPropos/> ? 
Perhaps it could benefit from multi-parameter classes?

-- 
Ashley Yakeley, Seattle WA



From qrczak@knm.org.pl Mon Feb 12 00:26:35 2001 Date: 12 Feb 2001 00:26:35 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: A sample revised prelude for numeric classes
Sun, 11 Feb 2001 17:42:15 -0500, Dylan Thurston <dpt@math.harvard.edu> pisze:

> I've started writing up a more concrete proposal for what I'd like
> the Prelude to look like in terms of numeric classes.  Please find
> it attached below.  It's still a draft and rather incomplete,
> but please let me know any comments, questions, or suggestions.

I must say I like it. It has a good balance between generality and
usefulness / convenience.

Modulo a few details, see below.

> > class (Num a, Additive b) => Powerful a b where
> >     (^) :: a -> b -> a
> > instance (Num a) => Powerful a (Positive Integer) where
> >     a ^ 0 = one
> >     a ^ n = reduceRepeated (*) a n
> > instance (Fractional a) => Powerful a Integer where
> >     a ^ n | n < 0 = recip (a ^ (negate n))
> >     a ^ n         = a ^ (positive n)

I don't like the fact that there is no Powerful Integer Integer.
Since the definition on negative exponents really depends on the first
type but can be polymorphic wrt. any Integral exponent, I would make
other instances instead:

instance RealIntegral b          => Powerful Int       b
instance RealIntegral b          => Powerful Integer   b
instance (Num a, RealIntegral b) => Powerful (Ratio a) b
instance                            Powerful Float     Int
instance                            Powerful Float     Integer
instance                            Powerful Float     Float
instance                            Powerful Double    Int
instance                            Powerful Double    Integer
instance                            Powerful Double    Double

This requires more instances for other types, but I don't see how to
make it better with (^), (^^) and (**) unified. It's a bit irregular:
Int can be raised to custom integral types without extra instances,
but Double not.

It's simpler to unify only (^) and (^^), leaving
    (**) :: Floating a => a -> a -> a
with the default definition of \a b -> exp (b * log a).
I guess that we always know which one we mean, although in math the
notation is the same.

Then the second argument of (^) is always arbitrary RealIntegral,
so we can have a single-parameter class with a default definition:

class (Num a) => Powerful a where
    (^) :: RealIntegral b => a -> b -> a
    a ^ 0 = one
    a ^ n = reduceRepeated (*) a n

instance Powerful Int
instance Powerful Integer
instance (Num a) => Powerful (Ratio a) where
    -- Here unfortunately we must write the definition explicitly,
    -- including the positive exponent case: we don't have access to
    -- whatever the default definition would give if it was not
    -- replaced here. We should probably provide the default definition
    -- for such cases as a global function:
    --     fracPower :: (Fractional a, RealIntegral b) => a -> b -> a
    -- (under a better name).
instance Powerful Float
    -- Ditto here.
instance Powerful Double
    -- And here.
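
A sketch of the global default Marcin suggests (the name fracPower is
his own placeholder; this is one possible definition, using repeated
squaring for the non-negative case):

```haskell
-- fracPower a n: a^n for any integral n, going through recip for
-- negative exponents.  A sketch, not part of the proposal itself.
fracPower :: (Fractional a, Integral b) => a -> b -> a
fracPower a n
  | n < 0     = recip (posPower a (negate n))
  | otherwise = posPower a n
  where
    posPower _ 0 = 1
    posPower x k
      | even k    = let h = posPower x (k `div` 2) in h * h
      | otherwise = x * posPower x (k - 1)
```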

> > class (Real a, Floating a) => RealFrac a where
> > -- lifted directly from Haskell 98 Prelude
> >     properFraction   :: (Integral b) => a -> (b,a)
> >     truncate, round  :: (Integral b) => a -> b
> >     ceiling, floor   :: (Integral b) => a -> b

Should be RealIntegral instead of Integral.

Perhaps RealIntegral should be called Integral, and your Integral
should be called somewhat differently.

> > class (Real a, Integral a) => RealIntegral a where
> >     quot, rem        :: a -> a -> a   
> >     quotRem          :: a -> a -> (a,a)
> >
> >       -- Minimal definition: toInteger

You forgot toInteger.

> > class (Lattice a, Num a) => NumLattice a where
> >     abs :: a -> a -> a
> >     abs x = meet x (negate x)

Should be:
        abs :: a -> a

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From wli@holomorphy.com Mon Feb 12 02:48:42 2001 Date: Sun, 11 Feb 2001 18:48:42 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: A sample revised prelude for numeric classes
On Sun, Feb 11, 2001 at 05:42:15PM -0500, Dylan Thurston wrote:
> I've started writing up a more concrete proposal for what I'd like the
> Prelude to look like in terms of numeric classes.  Please find it
> attached below.  It's still a draft and rather incomplete, but please
> let me know any comments, questions, or suggestions.

This is great, it gets something concrete out there to comment on, which
is probably quite a bit of what needs to happen.

For brevity's sake, I'll have to chop up your message a bit.

> (1) The current Prelude defines no semantics for the fundamental
>     operations.  For instance, presumably addition should be
>     associative (or come as close as feasible), but this is not
>     mentioned anywhere.

This is something serious, as I sort of took for granted the various
properties of operations etc. I'm glad you pointed it out.

> (2) There are some superfluous superclasses.  For instance, Eq and
>     Show are superclasses of Num.  Consider the data type
> 
> > data IntegerFunction a = IF (a -> Integer)
> 
>     One can reasonably define all the methods of Num for
>     IntegerFunction a (satisfying good semantics), but it is
>     impossible to define non-bottom instances of Eq and Show.
> 
>     In general, superclass relationship should indicate some semantic
>     connection between the two classes.

It's possible to define non-bottom instances, for instance:

instance Eq (a->b) where
	_ == _ = False

instance Show (a->b) where
	show = const "<<function>>"

I suspect you're aware of this and had in mind the constraint that
they should also respect the invariants and laws of the classes.

> > class (Additive a) => Num a where
> >     (*)         :: a -> a -> a
> >     one	  :: a
> >     fromInteger :: Integer -> a

> Num encapsulates the mathematical structure of a (not necessarily
> commutative) ring, with the laws
> 
>   a * (b * c) === (a * b) * c
>       one * a === a
>       a * one === a
>   a * (b + c) === a * b + a * c
> 
> Typical examples include integers, matrices, and quaternions.

There is an additional property of zero being neglected here, namely
that it is an annihilator. That is,

	zero * x === zero
	x * zero === zero

Again, it's probably a reasonable compromise not to accommodate
nonassociative algebras, though an important application of them
lies within graphics, namely 3-vectors with the cross product.

> "reduceRepeat op a n" is an auxiliary function that, for an
> associative operation "op", computes the same value as
> 
>   reduceRepeat op a n = foldr1 op (replicate n a)
> 
> but applies "op" O(log n) times.  A sample implementation is below.

This is a terrific idea, and I'm glad someone has at last proposed
using it.
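
The sample implementation the draft refers to is not reproduced in
this excerpt; a repeated-squaring version might look like this (a
sketch, not necessarily the draft's own):

```haskell
-- Apply an associative op only O(log n) times, by repeated squaring.
-- Requires n >= 1.
reduceRepeated :: (a -> a -> a) -> a -> Integer -> a
reduceRepeated op a n
  | n <= 1    = a
  | even n    = square
  | otherwise = a `op` square
  where square = let h = reduceRepeated op a (n `div` 2) in h `op` h
```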

> > class (Num a) => Integral a where
> >     div, mod :: a -> a -> a
> >     divMod :: a -> a -> (a,a)
> >     gcd, lcm :: a -> a -> a
> >     extendedGCD :: a -> a -> (a,a,a)

While I'm wholeheartedly in favor of the Euclidean algorithm idea, I
suspect that more structure (i.e. separating it out to another class)
could be useful, for instance, formal power series over Z are integral
domains, but are not a Euclidean domain because their residue classes
aren't computable by a finite process. Various esoteric rings like
Z[sqrt(k)] for various positive and negative integer k can also make
this dependence explode, though they're probably too rare to matter.

> TODO: quot, rem partially defined.  Explain.
> The default definition of extendedGCD above should not be taken as
> canonical (unlike most default definitions); for some Integral
> instances, the algorithm could diverge, might not satisfy the laws
> above, etc.
> TODO: (/) is only partially defined.  How to specify?  Add a member
>       isInvertible :: a -> Bool?
> Typical examples include rationals, the real numbers, and rational
> functions (ratios of polynomials).

It's too easy to make it a partial function to really consider this,
but if you wanted to go over the top (and you don't) you want the
multiplicative group of units to be the type of the argument (and
hence result) of recip.

> > class (Num a, Additive b) => Powerful a b where
> > ...
> I don't know interesting examples of this structure besides the
> instances defined above and the Floating class below.
> "Positive" is a type constructor that asserts that its argument is >=
> 0; "positive" makes this assertion.  I am not sure how this will
> interact with defaulting arguments so that one can write
> 
>   x ^ 5
> 
> without constraining x to be of Fractional type.

What you're really trying to capture here is the (right?) Z-module-like
structure of the multiplicative monoid in a commutative ring. There are
some weird things going on here I'm not sure about, namely:

	(1) in an arbitrary commutative ring (or multiplicative semigroup),
		the function can (at best) be defined as
		(^) :: ring -> NaturalNumbers -> ring
		That is, only the natural numbers can act on ring to produce
		an exponentiation-like operation.
	(2) if you have at least a division ring (or multiplicative group),
		you can extend it to
		(^) :: ring -> Integer -> ring
		so that all of Z acts on ring to produce an exponentiation
		operation.
	(3) Under some condition I don't seem to be able to formulate
		offhand, one can do
		(^) :: ring -> ring -> ring
		Now the ring (or perhaps more generally some related ring)
		acts on ring to produce an exponentiation operation like what
		is typically thought of for real numbers. Anyone with good
		ideas as to what the appropriate conditions are here, please
		speak up.
		(Be careful, w ^ z = exp (z * log w) behaves badly for w < 0
			on the reals.)

> > -- Note: I think "Analytic" would be a better name than "Floating".
> > class (Fractional a, Powerful a a) => Floating a where
> > ...
> The semantics of these operations are rather ill-defined because of
> branch cuts, etc.

A useful semantics can be recovered by assuming that the library-defined
functions are all the Cauchy principal values. Even now:
Complex> (0 :+ 1)**(0 :+ 1)
0.20788 :+ 0.0

> > class (Num a, Ord a) => Real a where
> >     abs    :: x -> x
> >     signum :: x -> x

I'm not convinced that Real is a great name for this, or that this
is really the right type for all this stuff. I'd still like to see
abs and signum generalized to vector spaces.

> > module Lattice where
> > class Lattice a where
> >     meet, join :: a -> a -> a
> 
> Mathematically, a lattice (more properly, a semilattice) is a space
> with operations "meet" and "join" which are idempotent, commutative,
> associative, and (usually) distribute over each other.  Examples
> include real-valued functions with (pointwise) max and min, and sets
> with union and intersection.  It would be reasonable to make Ord a
> subclass of this, but it would probably complicate the class hierarchy
> too much for the gain.  The advantage of Lattice over Ord is that it
> is better defined.  Thus we can define a class
> 
> > class (Lattice a, Num a) => NumLattice a where
> >     abs :: a -> a -> a
> >     abs x = meet x (negate x)
> 
> and real-valued functions and computable reals can both be declared as
> instances of this class.

I'd be careful here: meet (join) semilattices are partial orders in
which finite meets (joins) exist, and they only distribute over each
other in distributive lattices. Boolean lattices also have
complementation (e.g. not on type Bool) and Heyting lattices have
implications (x <= (y ==> z) iff x `meet` y <= z). My suggestion
(for simplicity) is:

class Ord a => MeetSemiLattice a where
	meet :: a -> a -> a

class MeetSemiLattice a => CompleteMeetSemiLattice a where
	bottom :: a

class Ord a => JoinSemiLattice a where
	join :: a -> a -> a

class JoinSemiLattice a => CompleteJoinSemiLattice a where
	top :: a

and Ord defines a partial order (and hence induces Eq) on a type.

(e.g.
instance Ord a => Eq a where
	x == y = x <= y && y <= x
)

I don't really think bottoms and tops really get bundled in with
the strict mathematical definition, e.g. natural numbers have all
finite joins but no top, Integer has no bottom or top but all finite
joins and meets, etc. Again, your design seems to incorporate the
kind of simplicity that language implementors might want for a
Standard Prelude, so your judgment on how much generality is
appropriate here would probably be good.


Cheers,
Bill


From jenglish@flightlab.com Mon Feb 12 03:11:25 2001 Date: Sun, 11 Feb 2001 19:11:25 -0800 From: Joe English jenglish@flightlab.com Subject: A sample revised prelude for numeric classes
Dylan Thurston wrote:
>
> I've started writing up a more concrete proposal for what I'd like the
> Prelude to look like in terms of numeric classes.

I like this proposal a lot.  The organization is closer to
traditional mathematical structures than the current
Prelude, but not as intimidating as Mechveliani's
Basic Algebra Proposal.  A very nice balance, IMO.

A couple of requests:

> > module Lattice where
> > class Lattice a where
> >     meet, join :: a -> a -> a

Could this be split into

    class SemiLattice a where
	join :: a -> a -> a

and

    class (SemiLattice a) => Lattice a where
	meet :: a -> a -> a

I run across a lot of structures which could usefully
be modeled as semilattices, but lack a 'meet' operation.

> It would be reasonable to make Ord a
> subclass of this, but it would probably complicate the class hierarchy
> too much for the gain.

In a similar vein, I'd really like to see the Ord class
split up:

    class PartialOrder a where
    	(<), (>)   :: a -> a -> Bool

    class (Eq a, PartialOrder a) => Ord a where
	compare    :: a -> a -> Ordering
	(<=), (>=) :: a -> a -> Bool
	max, min   :: a -> a -> a

Perhaps it would make sense for PartialOrder to be a
superclass of Lattice?


--Joe English

  jenglish@flightlab.com


From wli@holomorphy.com Mon Feb 12 03:13:26 2001 Date: Sun, 11 Feb 2001 19:13:26 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: A sample revised prelude for numeric classes
At 2001-02-11 14:42, Dylan Thurston wrote:
> >I've started writing up a more concrete proposal for what I'd like the
> >Prelude to look like in terms of numeric classes.  Please find it
> >attached below.  It's still a draft and rather incomplete, but please
> >let me know any comments, questions, or suggestions.

On Sun, Feb 11, 2001 at 04:03:37PM -0800, Ashley Yakeley wrote:
> Apologies if this has been discussed and I missed it. When it comes to 
> writing a 'geek' prelude, what was wrong with the Basic Algebra Proposal 
> found in <ftp://ftp.botik.ru/pub/local/Mechveliani/basAlgPropos/> ? 
> Perhaps it could benefit from multi-parameter classes?

I'm not sure if there is anything concrete wrong with it, in fact, I'd
like to see it made into a Prelude, but there are several reasons why
I don't think it's being discussed here in the context of an alternative
for a Prelude.

	(1) It's widely considered too complex and/or too mathematically
		involved for the general populace (or whatever semblance thereof
		exists within the Haskell community).
	(2) As a "Geek Prelude", it's considered to have some aesthetic
		and/or usability issues.
	(3) For persons as insane as myself, it's actually not radical enough.

My commentary on it thus far is that I see it as high-quality software
that could not only already serve as a "Geek Prelude" for many users, but
upon which could also be based implementations and designs of future
"Geek Preludes". The fact that no one has discussed it is probably due
to a desire not to return to previous flamewars, but it should almost
definitely be discussed as a reference point.

I've actually been hoping that Mechveliani would chime in and comment on
the various ideas, since he's actually already been through the motions
of implementing an alternative Prelude and seen what sort of
difficulties arise from actually trying to do these things various ways.


Cheers,
Bill


From dpt@math.harvard.edu Mon Feb 12 03:27:53 2001 Date: Sun, 11 Feb 2001 22:27:53 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: A sample revised prelude for numeric classes
Thanks for the comments!

On Mon, Feb 12, 2001 at 12:26:35AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> I don't like the fact that there is no Powerful Integer Integer.

Reading this, it occurred to me that you could explicitly declare an
instance of Powerful Integer Integer and have everything else work.

> Then the second argument of (^) is always arbitrary RealIntegral,

Nit: the second argument should be an Integer, not an arbitrary
RealIntegral.

> > > class (Real a, Floating a) => RealFrac a where
> > > -- lifted directly from Haskell 98 Prelude
> > >     properFraction   :: (Integral b) => a -> (b,a)
> > >     truncate, round  :: (Integral b) => a -> b
> > >     ceiling, floor   :: (Integral b) => a -> b
> 
> Should be RealIntegral instead of Integral.

Yes.  I'd actually like to make it Integer, and let the user compose
with fromInteger herself.

> Perhaps RealIntegral should be called Integral, and your Integral
> should be called somewhat differently.

Perhaps.  Do you have suggestions for names?  RealIntegral is what
naive users probably want, but Integral is what mathematicians would
use (and call something like an integral domain).

> > > class (Real a, Integral a) => RealIntegral a where
> > >     quot, rem        :: a -> a -> a   
> > >     quotRem          :: a -> a -> (a,a)
> > >
> > >       -- Minimal definition: toInteger
> 
> You forgot toInteger.

Oh, right.  I actually had it and then deleted it.  On the one hand,
it feels very implementation-specific to me, comparable to the
decodeFloat routines (which are useful, but not generally
applicable).  On the other hand, I couldn't think of many examples
where I really wouldn't want that operation (other than monadic
numbers that, say, count the number of operations), and I couldn't
think of a better place to put it.

You'll notice that toRational was similarly missing.

My preferred solution might still be the Convertible class I mentioned
earlier.  Recall it was
  class Convertible a b where
      convert :: a -> b
maybe with another class like
  class (Convertible a Integer) => ConvertibleToInteger a where
      toInteger :: a -> Integer
      toInteger = convert
if the restrictions on instance contexts remain.  Convertible a b
should indicate that a can safely be converted to b without losing any
information and maintaining relevant structure; from this point of 
view, its use would be strictly limited.  (But what's relevant?)

I'm still undecided here.

Best,
	Dylan Thurston


From fjh@cs.mu.oz.au Mon Feb 12 03:35:55 2001 Date: Mon, 12 Feb 2001 14:35:55 +1100 From: Fergus Henderson fjh@cs.mu.oz.au Subject: A sample revised prelude for numeric classes
On 11-Feb-2001, Dylan Thurston <dpt@math.harvard.edu> wrote:
> > class (Num a) => Integral a where
> >     div, mod :: a -> a -> a
> >     divMod :: a -> a -> (a,a)
> >     gcd, lcm :: a -> a -> a
> >     extendedGCD :: a -> a -> (a,a,a)
> >
> >      -- Minimal definition: divMod or (div and mod)
> >      --   and extendedGCD, if the provided definition does not work
> >     div a b | (d,_) <- divMod a b = d
> >     mod a b | (_,m) <- divMod a b = m
> >     divMod a b = (div a b, mod a b)
> >     gcd a b | (_,_,g) <- extendedGCD a b = g
> >     extendedGCD a b = ... -- insert Euclid's algorithm here
> >     lcm a b = (a `div` gcd a b) * b
> 
> Integral has the mathematical structure of a unique factorization
> domain, satisfying the laws
> 
>                       a * b === b * a
>   (div a b) * b + (mod a b) === a
>               mod (a+k*b) b === mod a b
>             a `mod` gcd a b === zero
>                     gcd a b === gcd b a
>             gcd (a + k*b) b === gcd a b
>                   a*c + b*d === g where (c, d, g) = extendedGCD a b
> 
> TODO: quot, rem partially defined.  Explain.
> The default definition of extendedGCD above should not be taken as
> canonical (unlike most default definitions); for some Integral
> instances, the algorithm could diverge, might not satisfy the laws
> above, etc.

In that case, I think it might be better to not provide it as a
default, and instead to provide a function called say
`euclid_extendedGCD'; someone defining an instance can then

        extendedGCD = euclid_extendedGCD

if that is appropriate.  It's so much easier to find bugs in code that you
did write rather than bugs which are caused by what you *didn't* write.

Of course this is not so effective if we keep the awful Haskell 98
rule that instance methods always default to bottom if not defined;
but even if that rule is not changed, compilers can at least warn
about that case.
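
A sketch of the euclid_extendedGCD Fergus proposes, specialised to
Integer (negative arguments left aside); it satisfies the law
a*c + b*d === g for (c, d, g) = euclid_extendedGCD a b:

```haskell
-- Extended Euclidean algorithm: returns (c, d, g) with
-- a*c + b*d == g and g == gcd a b.  A sketch, not the draft's default.
euclid_extendedGCD :: Integer -> Integer -> (Integer, Integer, Integer)
euclid_extendedGCD a 0 = (1, 0, a)
euclid_extendedGCD a b = (d, c - q * d, g)
  where
    (q, r)    = a `divMod` b
    (c, d, g) = euclid_extendedGCD b r
```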

> > class (Num a, Additive b) => Powerful a b where
> >     (^) :: a -> b -> a

I don't like the name.  Plain `Pow' would be better, IMHO.

Apart from those two points, I quite like this proposal,
at least at first glance.

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.


From dpt@math.harvard.edu Mon Feb 12 03:56:29 2001 Date: Sun, 11 Feb 2001 22:56:29 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: A sample revised prelude for numeric classes
On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote:
> There is an additional property of zero being neglected here, namely
> that it is an annihilator. That is,
> 
> 	zero * x === zero
> 	x * zero === zero

It follows:

  zero * x === (one - one) * x === one * x - one * x === x - x === zero

> Again, it's probably a reasonable compromise not to accommodate
> nonassociative algebras, though an important application of them
> lies within graphics, namely 3-vectors with the cross product.

Agreed that non-associative algebras are useful, but I feel that they
should have a different symbol.

> > > class (Num a) => Integral a where
> > >     div, mod :: a -> a -> a
> > >     divMod :: a -> a -> (a,a)
> > >     gcd, lcm :: a -> a -> a
> > >     extendedGCD :: a -> a -> (a,a,a)
> 
> While I'm wholeheartedly in favor of the Euclidean algorithm idea, I
> suspect that more structure (i.e. separating it out to another class)
> could be useful, for instance, formal power series over Z are integral
> domains, but are not a Euclidean domain because their residue classes
> aren't computable by a finite process. Various esoteric rings like
> Z[sqrt(k)] for various positive and negative integer k can also make
> this dependence explode, though they're probably too rare to matter.

<technical math>
I tried to write the definitions in a way that could be defined for
any unique factorization domain, not necessarily Euclidean: just take
the two numbers, write them as a unit times prime factors in canonical
form, and take the product of the common factors and call that the
GCD.  On reflection, extendedGCD probably isn't easy to write in
general.

What operations would you propose to encapsulate an integral domain
(rather than a UFD)?

Formal power series over Z are an interesting example; I'll think
about it.  On first blush, it seems like if you represented them as
lazy lists you might be able to compute the remainder term by term.
</technical math>

> > TODO: quot, rem partially defined.  Explain.
> > The default definition of extendedGCD above should not be taken as
> > canonical (unlike most default definitions); for some Integral
> > instances, the algorithm could diverge, might not satisfy the laws
> > above, etc.
> > TODO: (/) is only partially defined.  How to specify?  Add a member
> >       isInvertible :: a -> Bool?
> > Typical examples include rationals, the real numbers, and rational
> > functions (ratios of polynomials).
> 
> It's too easy to make it a partial function to really consider this,
> but if you wanted to go over the top (and you don't) you want the
> multiplicative group of units to be the type of the argument (and
> hence result) of recip.

Yes.  I considered and rejected that.  But it would be nice to let
callers check whether the division will blow up, and that's not
possible for classes that aren't members of Eq.

But I suppose that's the whole point.  For computable reals, the way I
would compute 1/(very small number) would be to look at (very small
number) more and more closely to figure out on which side of 0 it lay;
if it actually were zero, the program would loop.  I think programs
that want to avoid this have to take type-specific steps (in this
case, cutting off the evaluation at a certain point.)

> What you're really trying to capture here is the (right?) Z-module-like
> structure of the multiplicative monoid in a commutative ring. There are
> some weird things going on here I'm not sure about, namely:

Right.

> 	(3) Under some condition I don't seem to be able to formulate
> 		offhand, one can do
> 		(^) :: ring -> ring -> ring
> 		Now the ring (or perhaps more generally some related ring)
> 		acts on ring to produce an exponentiation operation like what
> 		is typically thought of for real numbers. Anyone with good
> 		ideas as to what the appropriate conditions are here, please
> 		speak up.
> 		(Be careful, w ^ z = exp (z * log w) behaves badly for w < 0
> 			on the reals.)

For complex numbers as well, this operation has problems because of
branch cuts.  It does satisfy that identity I mentioned, but is not
continuous in the first argument.

It is more common to see functions like exp be well defined (for more
general additive groups) than to see the full (^) be defined.

> > > class (Num a, Ord a) => Real a where
> > >     abs    :: x -> x
> > >     signum :: x -> x
> 
> I'm not convinced that Real is a great name for this, or that this
> is really the right type for all this stuff. I'd still like to see
> abs and signum generalized to vector spaces.

After thinking about this, I decided that I would be happy calling the
comparable operation on vector spaces "norm":
a) it's compatible with mathematical usage
b) it keeps the Prelude itself simple.
It's unfortunate that the operation for complex numbers can't be
called "abs", but I think it's reasonable.
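
For a toy 2-vector type, the "norm" convention Dylan describes might
look like this (the type V2 is hypothetical, not part of the proposal):

```haskell
-- Euclidean norm on a toy 2-vector type, named "norm" rather than
-- "abs", following the mathematical usage suggested above.
data V2 = V2 Double Double

norm :: V2 -> Double
norm (V2 x y) = sqrt (x * x + y * y)
```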

> <good stuff on lattices deleted> ...and Ord defines a partial order
> (and hence induces Eq) on a type.

I think that "Ord" should define a total ordering; it's certainly what
naive users would expect.  I would define another class "Poset" with a
partial ordering.

> (e.g.
> instance Ord a => Eq a where
> 	x == y = x <= y && y <= x
> )

But to define <= in terms of meet and join you already need Eq!

  x <= y === meet x y == y

Best,
	Dylan Thurston


From brian@boutel.co.nz Mon Feb 12 04:24:37 2001 Date: Mon, 12 Feb 2001 17:24:37 +1300 From: Brian Boutel brian@boutel.co.nz Subject: A sample revised prelude for numeric classes
Dylan Thurston wrote:
> 
> I've started writing up a more concrete proposal for what I'd like the
> Prelude to look like in terms of numeric classes.  Please find it
> attached below.  It's still a draft and rather incomplete, but please
> let me know any comments, questions, or suggestions.
> 
>

This is a good basis for discussion, and it helps to see something
concrete. 

Here are a few comments:

> Thus these laws should be interpreted as guidelines rather than
> absolute rules.  In particular, the compiler is not allowed to use
> them.  Unless stated otherwise, default definitions should also be
> taken as laws.

Including laws was discussed very early in the development of the
language, but was rejected. IIRC Miranda had them. The argument against
laws was that their presence might mislead users into the assumption
that they did hold, yet if they were not enforceable then they might not
hold and that could have serious consequences. Also, some laws do not
hold in domains with bottom, e.g. a + (negate a) === 0 is only true if a
is not bottom. 


> class (Additive a) => Num a where
>     (*)         :: a -> a -> a
>     one         :: a
>     fromInteger :: Integer -> a
>
>       -- Minimal definition: (*), one
>     fromInteger 0         = zero
>     fromInteger n | n < 0 = negate (fromInteger (-n))
>     fromInteger n | n > 0 = reduceRepeat (+) one n

This definition requires both Eq and Ord!!!



As does this one:

> class (Num a, Additive b) => Powerful a b where
>     (^) :: a -> b -> a
> instance (Num a) => Powerful a (Positive Integer) where
>     a ^ 0 = one
>     a ^ n = reduceRepeated (*) a n
> instance (Fractional a) => Powerful a Integer where
>     a ^ n | n < 0 = recip (a ^ (negate n))
>     a ^ n         = a ^ (positive n)


and several others further down. 


> (4) In some cases, the hierarchy is not finely-grained enough:
>     operations that are often defined independently are lumped
>     together.  For instance, in a financial application one might want
>     a type "Dollar", or in a graphics application one might want a
>     type "Vector".  It is reasonable to add two Vectors or Dollars,
>     but not, in general, reasonable to multiply them.  But the
>     programmer is currently forced to define a method for (*) when she
>     defines a method for (+).

Why do you stop at allowing addition on Dollars and not include
multiplication by a scalar? Division is also readily defined on Dollar
values, with a scalar result, but this, too, is not available in the
proposal. 

Having Units as types, with the idea of preventing adding Apples to
Oranges, or Dollars to Roubles, is a venerable idea, but is not in
widespread use in actual programming languages. Why not?

Vectors, too, can be multiplied, producing both scalar- and
vector-products.

It seems that you are content with going as far as the proposal permits,
though you cannot define, even within the revised Class system, all the
common and useful operations on these types. This is the same situation
as with Haskell as it stands. The question is whether the (IMHO)
marginal increase in flexibility is worth the cost.

This is not an argument for not separating Additive from Num, but it
does weaken the argument for doing it.

--brian


From wli@holomorphy.com Mon Feb 12 05:17:53 2001 Date: Sun, 11 Feb 2001 21:17:53 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: A sample revised prelude for numeric classes
On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
> It follows:
>   zero * x === (one - one) * x === one * x - one * x === x - x === zero

Heh, you've caught me sleeping. =)

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
> I tried to write the definitions in a way that could be defined for
> any unique factorization domain, not necessarily Euclidean: just take
> the two numbers, write them as a unit times prime factors in canonical
> form, and take the product of the common factors and call that the
> GCD.  On reflection, extendedGCD probably isn't easy to write in
> general.

Well, factorizing things in various UFD's doesn't sound easy to me, but
at this point I'm already having to do some reaching for counterexamples
of practical programs where this matters. It could end up being a useless
class method in some instances, so I'm wary of it.

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
> What operations would you propose to encapsulate an integral domain
> (rather than a UFD)?

I'm not necessarily proposing a different set of operations to
encapsulate them, but rather that gcd and cousins be split off into
another subclass. Your design decisions in general appear to be
striking a good chord, so I'll just bring up the idea and let you
decide whether it should be done that way and so on.

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
> Formal power series over Z are an interesting example; I'll think
> about it.  On first blush, it seems like if you represented them as
> lazy lists you might be able to compute the remainder term by term.

Consider taking the residue of a truly infinite member of Z[[x]]
mod an ideal generated by a polynomial, e.g. 1/(1-x) mod (1+x^2).
You can take the residue of each term of 1/(1-x), so x^(2n) -> (-1)^n
and x^(2n+1) -> (-1)^n x, but you end up with an infinite number of
(nonzero!) residues to add up and hence encounter the troubles with
processes not being finite that I mentioned.

On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote:
>> 	(3) Under some condition I don't seem to be able to formulate
>> 		offhand, one can do
>> 		(^) :: ring -> ring -> ring
>> 		Now the ring (or perhaps more generally some related ring)
>> 		acts on ring to produce an exponentiation operation like what
>> 		is typically thought of for real numbers. Anyone with good
>> 		ideas as to what the appropriate conditions are here, please
>> 		speak up.
>> 		(Be careful, w ^ z = exp (z * log w) behaves badly for w < 0
>> 			on the reals.)

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
> For complex numbers as well, this operation has problems because of
> branch cuts.  It does satisfy that identity I mentioned, but is not
> continuous in the first argument.
> It is more common to see functions like exp be well defined (for more
> general additive groups) than to see the full (^) be defined.

I think it's nice to have the Cauchy principal value versions of things
floating around.  I know at least that I've had call for using the CPV
of exponentiation (and it's not hard to contrive an implementation),
but I'm almost definitely an atypical user. (Note, (**) does this today.)
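A tiny sketch of the principal-value power mentioned above, and of the caveat in the quoted text: since `log` of a negative Double is NaN, this definition misbehaves for negative bases.

```haskell
-- The Cauchy-principal-value power, as computed by (**) on Doubles.
pvPow :: Double -> Double -> Double
pvPow w z = exp (z * log w)

-- For w < 0 this returns NaN: pvPow (-8) (1/3) is NaN rather than
-- the real cube root -2, which is the "behaves badly" warning above.
```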

On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote:
>> I'm not convinced that Real is a great name for this, or that this
>> is really the right type for all this stuff. I'd still like to see
>> abs and signum generalized to vector spaces.

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
> After thinking about this, I decided that I would be happy calling the
> comparable operation on vector spaces "norm":
> a) it's compatible with mathematical usage
> b) it keeps the Prelude itself simple.
> It's unfortunate that the operation for complex numbers can't be
> called "abs", but I think it's reasonable.

I'm not entirely sure, but I think part of the reason this hasn't been
done already is because it's perhaps painful to statically type
dimensionality in vector spaces. On the other hand, assuming that the
user has perhaps contrived a representation satisfactory to him or her,
defining a class on the necessary type constructor shouldn't be tough
at all.

In a side note, it seems conventional to use abs and signum on complex
numbers (and functions), and also perhaps the same symbol as abs for
the norm on vectors and vector functions. It seems the distinction
drawn is that abs is definitely pointwise and the norm more often does
some sort of shenanigan like L^p norms etc. How much of this convention
should be preserved seems like a design decision, but perhaps one that
should be made explicit.

On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote:
>> <good stuff on lattices deleted> ...and Ord defines a partial order
>> (and hence induces Eq) on a type.

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
> I think that "Ord" should define a total ordering; it's certainly what
> naive users would expect.  I would define another class "Poset" with a
> partial ordering.

I neglected here to add the assumption that (<=) was a total relation;
I had in mind antisymmetry of (<=) in posets, so that element isomorphism
implies equality. Introducing a Poset class where elements may be
incomparable appears to butt against some of the bits where Bool is
hardwired into the language, at least where one might attempt to use a
trinary logical type in place of Bool to denote the result of an
attempted comparison.
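A hypothetical sketch of that last point (all names are mine): a Poset class can sidestep the hardwired Bool by returning a richer comparison result than Ordering.

```haskell
-- A four-valued comparison result for partial orders, in place of the
-- Bool (or three-valued Ordering) hardwired into comparisons today.
data POrdering = PLT | PEQ | PGT | PIncomparable
  deriving (Eq, Show)

class Poset a where
  pcompare :: a -> a -> POrdering

-- Pairs under the componentwise (product) order are a standard example
-- of a poset with genuinely incomparable elements.
instance (Ord a, Ord b) => Poset (a, b) where
  pcompare (a, b) (c, d)
    | a == c && b == d = PEQ
    | a <= c && b <= d = PLT
    | a >= c && b >= d = PGT
    | otherwise        = PIncomparable
```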

On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote:
>> (e.g.
>> instance Ord a => Eq a where
>> 	x == y = x <= y && y <= x
>> )

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
> But to define <= in terms of meet and join you already need Eq!
> 
>   x <= y === meet x y == x

I don't usually see this definition of (<=), and it doesn't seem like
the natural way to go about defining it on most machines. The notion
of the partial (possibly total) ordering (<=) seems to be logically
prior to that of the meet to me. The containment usually goes:

reflexive + transitive partial relation (preorder)
	=>
antisymmetric (partial order)
	[lattices possible here with additional structure,
		also equality decidable in terms of <= independently
		of the notion of lattices, for arbitrary partial orders]
	=>
total relation (total order)

Whether this matters for library design is fairly unclear.

Good work!


Cheers,
Bill


From Tom.Pledger@peace.com Mon Feb 12 05:18:19 2001 Date: Mon, 12 Feb 2001 18:18:19 +1300 From: Tom Pledger Tom.Pledger@peace.com Subject: A sample revised prelude for numeric classes
Brian Boutel writes:
 :
 | Having Units as types, with the idea of preventing adding Apples to
 | Oranges, or Dollars to Roubles, is a venerable idea, but is not in
 | widespread use in actual programming languages. Why not?

There was a pointer to some good papers on this in a previous
discussion of units and dimensions:

    http://www.mail-archive.com/haskell@haskell.org/msg04490.html

The main complication is that the type system needs to deal with
integer exponents of dimensions, if it's to do the job well.

For example, it should be OK to divide an acceleration (length *
time^-2) by a density (mass * length^-3).  Such things may well occur
as subexpressions of something more intuitive, and it's undesirable to
spell out all the anticipated dimension types in a program (a Haskell
98 program, for example) because:

  - Only an arbitrary finite number would be covered, and

  - The declarations would contain enough un-abstracted clichés to
    bring a tear to the eye.
        instance Mul Double         (Dim_L Double)     (Dim_L Double)
        instance Mul (Dim_L Double) (Dim_per_T Double) (Dim_L_per_T Double)
        etc.

Regards,
Tom


From wli@holomorphy.com Mon Feb 12 05:57:03 2001 Date: Sun, 11 Feb 2001 21:57:03 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: A sample revised prelude for numeric classes
On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> Including laws was discussed very early in the development of the
> language, but was rejected. IIRC Miranda had them. The argument against
> laws was that their presence might mislead users into the assumption
> that they did hold, yet if they were not enforceable then they might not
> hold and that could have serious consequences. Also, some laws do not
> hold in domains with bottom, e.g. a + (negate a) === 0 is only true if a
> is not bottom. 

I actually think it would be useful to have them and optionally
dynamically enforce them, or at least whichever ones are computable, as
a compile-time option. This would be _extremely_ useful for debugging
purposes, and I, at the very least, would use it. I think Eiffel does
something like this, can anyone else comment?

This, of course, is a language extension, and so probably belongs in
a different discussion from the rest of all this.

Dylan Thurston wrote:
>> class (Additive a) => Num a where
>>     (*)         :: a -> a -> a
>>     one         :: a
>>     fromInteger :: Integer -> a
>>       -- Minimal definition: (*), one
>>     fromInteger 0         = zero
>>     fromInteger n | n < 0 = negate (fromInteger (-n))
>>     fromInteger n | n > 0 = reduceRepeat (+) one n

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> This definition requires both Eq and Ord!!!

Only on Integer, not on a.

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> As does this one:

Dylan Thurston wrote:
>> class (Num a, Additive b) => Powerful a b where
>>     (^) :: a -> b -> a
>> instance (Num a) => Powerful a (Positive Integer) where
>>     a ^ 0 = one
>>     a ^ n = reduceRepeated (*) a n
>> instance (Fractional a) => Powerful a Integer where
>>     a ^ n | n < 0 = recip (a ^ (negate n))
>>     a ^ n         = a ^ (positive n)

I should note that both of these definitions require Eq and Ord
only on Integer.
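A self-contained sketch of this point (class names simplified with primes to avoid Prelude clashes, and reduceRepeated's semantics assumed to be repeated doubling): the guards in the draft's default compare Integers only, so no Eq a or Ord a constraint is needed.

```haskell
-- Simplified stand-ins for the draft's Additive and Num classes.
class Additive' a where
  zero'   :: a
  plus'   :: a -> a -> a
  negate' :: a -> a

class Additive' a => Num' a where
  one'         :: a
  fromInteger' :: Integer -> a
  -- The comparisons below are on the Integer argument n, never on a.
  fromInteger' 0 = zero'
  fromInteger' n
    | n < 0     = negate' (fromInteger' (negate n))
    | otherwise = reduceRepeated plus' one' n

-- O(log n) combination by repeated doubling; n is assumed positive.
reduceRepeated :: (a -> a -> a) -> a -> Integer -> a
reduceRepeated op x n
  | n == 1    = x
  | even n    = reduceRepeated op (op x x) (n `div` 2)
  | otherwise = op x (reduceRepeated op (op x x) (n `div` 2))

instance Additive' Int where
  zero'   = 0
  plus'   = (+)
  negate' = negate

instance Num' Int where
  one' = 1
```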

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> and several others further down. 

I'm not sure which ones you hit on, though I'm sure we'd all be more
than happy to counter-comment on them or repair the inadequacies.

Dylan Thurston wrote:
>> (4) In some cases, the hierarchy is not finely-grained enough:
>>     operations that are often defined independently are lumped
>>     together.  For instance, in a financial application one might want
>>     a type "Dollar", or in a graphics application one might want a
>>     type "Vector".  It is reasonable to add two Vectors or Dollars,
>>     but not, in general, reasonable to multiply them.  But the
>>     programmer is currently forced to define a method for (*) when she
>>     defines a method for (+).

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> Why do you stop at allowing addition on Dollars and not include
> multiplication by a scalar? Division is also readily defined on Dollar
> values, with a scalar result, but this, too, is not available in the
> proposal. 

I can comment a little on this, though I can't speak for someone else's
design decisions. In general, the results of division and multiplication
on units have a different result type than the arguments. Defining them
by type-class overloading therefore either requires existential wrappers
or is otherwise difficult or impossible.
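A sketch of the "different result type" problem (all names hypothetical): a three-parameter class, with a functional dependency pinning down the result type. This relies on multi-parameter classes and functional dependencies, which are extensions beyond Haskell 98.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

newtype Scalar  = Scalar  Double deriving (Eq, Show)
newtype Dollars = Dollars Double deriving (Eq, Show)

-- The result type c is determined by the argument types a and b.
class Mul a b c | a b -> c where
  mul :: a -> b -> c

-- Scaling money by a dimensionless factor yields money...
instance Mul Scalar Dollars Dollars where
  mul (Scalar k) (Dollars d) = Dollars (k * d)

-- ...while dividing money by money yields a dimensionless ratio.
class Div a b c | a b -> c where
  divU :: a -> b -> c

instance Div Dollars Dollars Scalar where
  divU (Dollars a) (Dollars b) = Scalar (a / b)
```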

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> Having Units as types, with the idea of preventing adding Apples to
> Oranges, or Dollars to Roubles, is a venerable idea, but is not in
> widespread use in actual programming languages. Why not?

I'm probably even less qualified to comment on this, but I'll conjecture
that the typing disciplines of most languages make it impractical. I
suspect it could be possible in Haskell.

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> Vectors, too, can be multiplied, producing both scalar- and
> vector-products.

Exterior and inner products both encounter much the same troubles as
defining arithmetic on types with units attached, with the additional
complication that statically typing dimensionality is nontrivial.

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> It seems that you are content with going as far as the proposal permits,
> though you cannot define, even within the revised Class system, all the
> common and useful operations on these types. This is the same situation
> as with Haskell as it stands. The question is whether the (IMHO)
> marginal increase in flexibility is worth the cost.
> This is not an argument for not separating Additive from Num, but it
> does weaken the argument for doing it.

I'm not convinced of this, though I _am_ convinced that a general
framework for units would probably be useful to have in either a
standard or add-on library distributed with Haskell, or perhaps to
attempt to address units even within the standard Prelude if it's
simple enough. Are you up to perhaps taking a stab at this? Perhaps
if you tried it within the framework Thurston has laid out, some of
the inadequacies could be revealed.


Cheers,
Bill


From ashley@semantic.org Mon Feb 12 06:16:02 2001 Date: Sun, 11 Feb 2001 22:16:02 -0800 From: Ashley Yakeley ashley@semantic.org Subject: A sample revised prelude for numeric classes
At 2001-02-11 21:18, Tom Pledger wrote:

>The main complication is that the type system needs to deal with
>integer exponents of dimensions, if it's to do the job well.

Very occasionally non-integer or 'fractal' exponents of dimensions are 
useful. For instance, geographic coastlines can be measured in km ^ n, 
where 1 <= n < 2. This doesn't stop the CIA world factbook listing all 
coastline lengths in straight kilometres, however.

More unit weirdness occurs with logarithms. For instance, if y and x are 
distances, log (y/x) = log y - log x. Note that 'log x' is some number + 
log (metre). Strange, huh?

Interestingly, in C++ you can parameterise types by values. For instance:

--
// Mass, Length and Time
template <long M,long L,long T>
class Unit
     {
     public:
     double mValue;

     inline explicit Unit(double value)
          {
          mValue = value;
          }
     };

template <long M,long L,long T>
Unit<M,L,T> operator + (Unit<M,L,T> a,Unit<M,L,T> b)
     {
     return Unit<M,L,T>(a.mValue + b.mValue);
     }

template <long Ma,long La,long Ta,long Mb,long Lb,long Tb>
Unit<Ma+Mb,La+Lb,Ta+Tb> operator * (Unit<Ma,La,Ta> a,Unit<Mb,Lb,Tb> b)
     {
     return Unit<Ma+Mb,La+Lb,Ta+Tb>(a.mValue * b.mValue);
     }

// etc.

int main()
     {
     Unit<0,1,0> oneMetre(1);
     Unit<0,1,0> twoMetres = oneMetre + oneMetre;
     Unit<0,2,0> oneSquareMetre = oneMetre * oneMetre;
     }
--

Can you do this sort of thing in Haskell?


-- 
Ashley Yakeley, Seattle WA



From wli@holomorphy.com Mon Feb 12 06:46:15 2001 Date: Sun, 11 Feb 2001 22:46:15 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: A sample revised prelude for numeric classes
At 2001-02-11 21:18, Tom Pledger wrote:
>>The main complication is that the type system needs to deal with
>>integer exponents of dimensions, if it's to do the job well.

On Sun, Feb 11, 2001 at 10:16:02PM -0800, Ashley Yakeley wrote:
> Very occasionally non-integer or 'fractal' exponents of dimensions are 
> useful. For instance, geographic coastlines can be measured in km ^ n, 
> where 1 <= n < 2. This doesn't stop the CIA world factbook listing all 
> coastline lengths in straight kilometres, however.

This is pretty rare, and it's also fairly tough to represent points in
spaces of fractional dimension. I'll bet the sorts of complications
necessary to do so would immediately exclude it from consideration in
the design of a standard library, but nevertheless would be interesting
to hear about. Can you comment further on this?

On Sun, Feb 11, 2001 at 10:16:02PM -0800, Ashley Yakeley wrote:
> More unit weirdness occurs with logarithms. For instance, if y and x are 
> distances, log (y/x) = log y - log x. Note that 'log x' is some number + 
> log (metre). Strange, huh?

If you (or anyone else) could comment on what sorts of units would be
appropriate for the result type of a logarithm operation, I'd be glad to
hear it. I don't know what the result type of this example is supposed
to be if the units of a number are encoded in the type.

On Sun, Feb 11, 2001 at 10:16:02PM -0800, Ashley Yakeley wrote:
> Interestingly, in C++ you can parameterise types by values. For instance:
	[interesting C++ example elided]
> Can you do this sort of thing in Haskell?

No, in general I find it necessary to construct some sort of set of
types parallel to the actual data type, define some sort of existential
data type encompassing the set of all types which can represent one of
those appropriate values, and "lift" things to that type by means of
sample arguments. I usually like ensuring that the types representing
things like integers never actually have any sort of data manifest,
i.e. the sample arguments are always undefined. This is a bit awkward.
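A sketch of the "parallel set of types" technique described above (all names are mine): empty types serve as phantom dimension tags, so adding a length to an area is a type error, with no dimension data ever manifest at runtime.

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- Phantom tags for the exponent of the length dimension.
data One   -- length^1
data Two   -- length^2

-- d never appears on the right-hand side: it exists only in the types.
newtype Length d = Length Double deriving (Eq, Show)

oneMetre :: Length One
oneMetre = Length 1

-- Addition demands matching dimensions; Length One + Length Two is
-- rejected by the type checker.
addL :: Length d -> Length d -> Length d
addL (Length a) (Length b) = Length (a + b)

-- Multiplication changes the dimension, spelled out per case here;
-- computing the exponent at the type level needs more machinery.
areaOf :: Length One -> Length One -> Length Two
areaOf (Length a) (Length b) = Length (a * b)
```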

I think Okasaki's work on square matrices and perhaps some other ideas
should be exploited for this sort of thing, as there is quite a bit of
opposition to the usage of sample arguments. I'd like to see a library
for vector spaces based on similar ideas. I seem to be caught up in
other issues caused by mucking with fundamental data types' definitions,
my working knowledge of techniques like Okasaki's is insufficient for
the task, and my design concepts are probably too radical for general
usage, so I'm probably not the man for the job, though I will very
likely take a stab at such a beast for my own edification.


Cheers,
Bill


From qrczak@knm.org.pl Mon Feb 12 07:34:15 2001 Date: 12 Feb 2001 07:34:15 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: A sample revised prelude for numeric classes
Mon, 12 Feb 2001 17:24:37 +1300, Brian Boutel <brian@boutel.co.nz> pisze:

> > class (Additive a) => Num a where
> >     (*)         :: a -> a -> a
> >     one         :: a
> >     fromInteger :: Integer -> a
> >
> >       -- Minimal definition: (*), one
> >     fromInteger 0         = zero
> >     fromInteger n | n < 0 = negate (fromInteger (-n))
> >     fromInteger n | n > 0 = reduceRepeat (+) one n
> 
> This definition requires both Eq and Ord!!!

Only Eq Integer and Ord Integer, which are always there.

> Why do you stop at allowing addition on Dollars and not include
> multiplication by a scalar?

Perhaps because there is no good universal type for (*).
Sorry, it would have to have a different symbol.

> Having Units as types, with the idea of preventing adding Apples to
> Oranges, or Dollars to Roubles, is a venerable idea, but is not in
> widespread use in actual programming languages. Why not?

It does not scale to more general cases. (m/s) / (s) = (m/s^2),
so (/) would have to have the type (...) => a -> b -> c, which is not
generally usable because of ambiguities. Haskell's classes are not
powerful enough to define full algebra of units.

> It seems that you are content with going as far as the proposal permits,
> though you cannot define, even within the revised Class system, all the
> common and useful operations on these types. This is the same situation
> as with Haskell as it stands. The question is whether the (IMHO)
> marginal increase in flexibility is worth the cost.

The Prelude class system requires a compromise. There is no single
design which accommodates all needs, because Haskell's classes are
not powerful enough to unify all levels of generality in a single
class operation. And even if it were possible, it would be awkward
to use in simpler cases.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Mon Feb 12 07:04:30 2001 Date: 12 Feb 2001 07:04:30 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: A sample revised prelude for numeric classes
Sun, 11 Feb 2001 16:03:37 -0800, Ashley Yakeley <ashley@semantic.org> pisze:

> Apologies if this has been discussed and I missed it. When it comes to 
> writing a 'geek' prelude, what was wrong with the Basic Algebra Proposal 
> found in <ftp://ftp.botik.ru/pub/local/Mechveliani/basAlgPropos/> ? 
> Perhaps it could benefit from multi-parameter classes?

Let me quote myself why I don't like this proposal:

- It's too complicated.

- Relies on controversial type system features, like undecidable
  instances and overlapping instances.

- Relies on type system features that are not implemented and it's
  not clear if they can be correctly designed or implemented at all,
  like "domain conversions".

- Has many instances that should not exist because the relevant type
  does not have the class property; they return Nothing or fail,
  instead of failing to compile.

- Properties like commutativity cannot be specified in Haskell.
  The compiler won't be able to automatically perform any optimizations
  based on commutativity.

- belongs is strange. IMHO it should always return True for valid
  arguments, and invalid arguments should be impossible to construct
  if the validity can be checked at all.

- Tries to turn a compiled language into an interpreted language.
  FuncExpr, too much parsing (with arbitrary rules hardwired into
  the language), too much runtime checks.

- It's too complicated.

- It's not true that it's "not necessary to dig into mathematics".
  I studied mathematics and did not have that much algebra.

- I prefer minBound to looking at the element under Just under Just
  under a tuple of osetBounds.

- Uses ugly character and string arguments that tune the behavior,
  e.g. in syzygyGens, divRem, canFr. I like Haskell98's divMod+quotRem
  better.

- Uses unneeded sample arguments, e.g. in toEnum, zero, primes, read.

- Have I said that it's too complicated?

There were lengthy discussions about it...

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Mon Feb 12 07:11:36 2001 Date: 12 Feb 2001 07:11:36 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: A sample revised prelude for numeric classes
Sun, 11 Feb 2001 18:48:42 -0800, William Lee Irwin III <wli@holomorphy.com> pisze:

> class Ord a => MeetSemiLattice a where
> 	meet :: a -> a -> a
> 
> class MeetSemiLattice a => CompleteMeetSemiLattice a where
> 	bottom :: a
> 
> class Ord a => JoinSemiLattice a where
> 	join :: a -> a -> a
> 
> class JoinSemiLattice a => CompleteJoinSemiLattice a where
> 	top :: a

Please: ok, but not for Prelude!

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Mon Feb 12 07:24:31 2001 Date: 12 Feb 2001 07:24:31 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: A sample revised prelude for numeric classes
Sun, 11 Feb 2001 22:27:53 -0500, Dylan Thurston <dpt@math.harvard.edu> pisze:

> Reading this, it occurred to me that you could explicitly declare an
> instance of Powerful Integer Integer and have everything else work.

No, because it overlaps with Powerful a Integer (the constraint on a
doesn't matter for determining if it overlaps).

> > Then the second argument of (^) is always arbitrary RealIntegral,
> 
> Nit: the second argument should be an Integer, not an arbitrary
> RealIntegral.

Of course not. (2 :: Integer) ^ (i :: Int) makes perfect sense.

> > You forgot toInteger.
> 
> Oh, right.  I actually had it and then deleted it.  On the one hand,
> it feels very implementation-specific to me, comparable to the
> decodeFloat routines

It is needed for conversions (fromIntegral in particular).

>   class Convertible a b where
>       convert :: a -> b
> maybe with another class like
>   class (Convertible a Integer) => ConvertibleToInteger a where
>       toInteger :: a -> Integer
>       toInteger = convert

This requires writing a Convertible instance in addition to
ConvertibleToInteger, where currently a mere toInteger in Integral
suffices.

Since Convertible must be defined separately for each pair of types
(otherwise instances would easily overlap), it's not very useful for
numeric conversions. Remember that there are a lot of numeric types
in the FFI: Int8, Word16, CLong, CSize. It does not provide anything
in this area so should not be required to define instances there.

After a proposal is developed, please check how many instances one
has to define to make a type as powerful as Int, and whether one is
required to define methods irrelevant to non-mathematical needs.
basAlgPropos fails badly by this criterion.

> Convertible a b should indicate that a can safely be converted to
> b without losing any information and maintaining relevant structure;

So fromInteger does not require Convertible, which is inconsistent
with toInteger. Sorry, I am against Convertible in the Prelude - it
tries to be too general, which makes it inappropriate.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From karczma@info.unicaen.fr Mon Feb 12 09:33:03 2001 Date: Mon, 12 Feb 2001 09:33:03 +0000 From: Jerzy Karczmarczuk karczma@info.unicaen.fr Subject: In hoc signo vinces (Was: Revamping the numeric classes)
Marcin Kowalczyk pretends not to understand:

> JK:
> 
> > Again, a violation of the orthogonality principle. Needing division
> > just to define signum. And of course a completely different approach
> > do define the signum of integers. Or of polynomials...
 
> So what? That's why it's a class method and not a plain function with
> a single definition.
> 
> Multiplication of matrices is implemented differently than
> multiplication of integers. Why don't you call it a violation of the
> orthogonality principle (whatever it is)?


1. The orthogonality principle has - in principle - nothing to do with
   the implementation.
   Separating a complicated structure into independent, or "orthogonal",
   concepts is a basic invention of the human mind, spanning from
   Montesquieu's principle of the independence of the three political
   powers down to syntactic issues in the design of a programming
   language.

   If you eliminate as far as possible the "interfacing" between
   concepts, the integration of the whole is easier. Spurious
   dependencies are always harmful.

2. This has been a major driving force in the construction of
   mathematical entities for centuries. What do you really NEED for
   your proof? What is the mathematical category where a given concept
   can be defined, where a theorem holds, etc.?

3. The example of matrices is inadequate (to say it mildly). The monoid
   rules, e.g. associativity, hold in both cases. So, I might call
   both operations "multiplication", although one is commutative and
   the other is not.

==

In a later posting you say:

> If (+) can be implicitly lifted to functions, then why not signum?
> Note that I would lift neither signum nor (+). I don't feel the need.
 ...

I not only feel the need, but I feel that it is important that the
additive structure in the codomain is inherited by functions. In a more
specific context: the fact that linear functionals over a vector space
also form a vector space is simply *fundamental* for quantum
mechanics, for crystallography, etc. You don't need to be a Royal
Abstractor to see this.



Jerzy Karczmarczuk
Caen, France


From wli@holomorphy.com Mon Feb 12 08:43:57 2001 Date: Mon, 12 Feb 2001 00:43:57 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Primitive types and Prelude shenanigans
It seems to me that some additional primitive types would be useful,
most of all a natural number type corresponding to an arbitrary-
precision unsigned integer. Apparently Integer is defined in terms
of something else in the GHC Prelude, what else might be needed to
define it?

Some of my other natural thoughts along these lines are positive
reals and rationals, and nonzero integers. I have an idea that
rationals might be handled similarly to natural numbers and nonzero
integers, though the nonzero and positive reals pose some nasty
problems (try underflow).

Would such machinations be useful to anyone else?

Further down this line, I've gone off and toyed with Bool, and
discovered GHC doesn't like it much. Is there a particular place within
GHC I should look to see how the primitive Boolean type, and perhaps
other types are handled? I'd also like to see where some of the magic
behind the typing of various other built-in constructs happens, like
list comprehensions, tuples, and derived classes.


Cheers,
Bill


From lisper@it.kth.se Mon Feb 12 09:08:02 2001 Date: Mon, 12 Feb 2001 10:08:02 +0100 (MET) From: Bjorn Lisper lisper@it.kth.se Subject: A sample revised prelude for numeric classes
Tom Pledger:
>Brian Boutel writes:
> :
> | Having Units as types, with the idea of preventing adding Apples to
> | Oranges, or Dollars to Roubles, is a venerable idea, but is not in
> | widespread use in actual programming languages. Why not?

>There was a pointer to some good papers on this in a previous
>discussion of units and dimensions:

>    http://www.mail-archive.com/haskell@haskell.org/msg04490.html

>The main complication is that the type system needs to deal with
>integer exponents of dimensions, if it's to do the job well.

Andrew Kennedy has basically solved this for higher-order languages with
HM type inference. He made an extension of the ML type system with
dimensional analysis a couple of years back. Sorry, I don't have the
references at hand, but he had a paper in ESOP, I think.
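A much weaker but immediately usable approximation of dimension checking can be had with phantom types; the sketch below is illustrative only (Kennedy's system goes much further, inferring integer exponents of dimensions so that e.g. (m/s)/s comes out as m/s^2).

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- A toy phantom-type sketch of dimension checking. The dimension
-- parameter never appears on the right-hand side; it exists only
-- to make the type checker separate metres from seconds.
newtype Quantity dim = Q Double deriving (Eq, Show)

data Metre
data Second

metres :: Double -> Quantity Metre
metres = Q

seconds :: Double -> Quantity Second
seconds = Q

-- Addition within one dimension type-checks; adding metres to
-- seconds is rejected at compile time.
addQ :: Quantity d -> Quantity d -> Quantity d
addQ (Q a) (Q b) = Q (a + b)

main :: IO ()
main = print (addQ (metres 2) (metres 3))  -- prints Q 5.0
```

This gives the "Apples vs Oranges" protection but no algebra of units: there is no general (*) or (/) on quantities here, which is precisely the part that needs Kennedy-style exponent inference.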

I think the real place for dimension and unit inference is in modelling
languages, where you can specify physical systems through differential
equations and simulate them numerically. Such languages are being
increasingly used in the "real world" now. 

It would be quite interesting to have a version of Haskell that would allow
the specification of differential equations, so one could make use of all
the good features of Haskell for this. This would allow the unified
specification of systems that consist both of physical and computational
components. This niche is now being filled by a mix of special-purpose
modeling languages like Modelica and Matlab/Simulink for the physical part,
and SDL and UML for control parts. The result is likely to be a mess, in
particular when these specifications are to be combined into full system
descriptions.

Björn Lisper


From karczma@info.unicaen.fr Mon Feb 12 10:56:55 2001 Date: Mon, 12 Feb 2001 10:56:55 +0000 From: Jerzy Karczmarczuk karczma@info.unicaen.fr Subject: Dimensions of the World (was: A sample revised prelude)
Ashley Yakeley after Tom Pledger:
> 
> >The main complication is that the type system needs to deal with
> >integer exponents of dimensions, if it's to do the job well.
> 
> Very occasionally non-integer or 'fractal' exponents of dimensions are
> useful. For instance, geographic coastlines can be measured in km ^ n,
> where 1 <= n < 2. This doesn't stop the CIA world factbook listing all
> coastline lengths in straight kilometres, however.
> 
> More unit weirdness occurs with logarithms. For instance, if y and x are
> distances, log (y/x) = log y - log x. Note that 'log x' is some number +
> log (metre). Strange, huh?

When, a week ago, I mentioned those dollars that are difficult to
multiply (although some people spend their lives doing it...), and some
dimensional quantities that should have focused people's attention on
the differences between (*) and (+), I never thought the discussion
would go so far.

Dimensional quantities *are* a can of worms.
From the practical point of view they are very useful for avoiding
silly programming errors; I have applied them several times while
coding some computer algebra expressions.
The dimensions were "just symbols", but with "reasonable" mathematical
properties (concerning (*) and (/)), so factorizing this symbolic part
was an easy way to check that I hadn't produced some illegal
combinations.

Sometimes they are really "dimensionless" scaling factors! In
TeX/MetaFont the units such as mm, cm, in, etc. exist and function very
nicely as conversion factors.

W.L.I.III asks:

> If you (or anyone else) could comment on what sorts of units would be
> appropriate for the result type of a logarithm operation, I'd be glad to
> hear it. I don't know what the result type of this example is supposed
> to be if the units of a number are encoded in the type.

Actually, the logarithm example would be considered spurious by almost
all "practical" mathematicians (e.g., physicists). A formula is sane if
the argument of the logarithm is dimensionless (if in x/y both elements
share the same dimension). Then adding and subtracting the same 
log(GHmSmurf) is irrelevant.

==

But in general mathematical physics (and in geometry, which encompasses
the major part of the former) there are some delicate issues, which
sometimes involve fractality, and sometimes the necessity of "religious
acts", such as the renormalization schemes in Quantum Field Theory.
In this case we have the "dimensional transmutation" phenomenon: the
gluon coupling constant, which is dimensionless, acquires a dimension,
and conditions the hadronic mass scale, i.e. the masses of elementary
particles.
[[[Yes, I know, you serious comp. scientists won't bother about it, but
I will try anyway to tell you in two words why. A way of making a
singular theory finite is to put it on a discrete lattice which
represents the physical space. There is a dimensional object here: the
lattice constant. Then you go to zero with it, in order to retrieve the
physical space-time. When you reach this zero, you lose this constant,
and this is one of the reasons why the theory explodes. So, it must be
introduced elsewhere... In other words: a physical correlation length L
between objects is finite. If the lattice constant c is finite, L=N*c.
But if c goes to zero... Now, programming all this, Haskell or not, is
another issue.]]]

==

Fractals are seen not only in geography but everywhere, as Mandelbrot
and his followers duly recognized. You will need them doing computations
in colloid physics, in galaxy statistics, and in the metabolism of the
human body [[if you think that your energy expenditure is proportional
to your volume, you are dead wrong; most interesting processes take
place within membranes. You are much flatter than you think, folks,
ladies included.]]

Actually, ALL THIS was one of the major driving forces behind my
interest in functional programming. I found an approach to programming
which did not target "symbolic manipulations" but "normal computing", so
it could practically compete against Fortran etc. Yet it had the
potential to deal in a serious, formal manner with the mathematical
properties of the manipulated objects.

That's why I suffer seeing random, ad hoc numerics.

Björn Lisper mentions some approach to dimensions:

> Andrew Kennedy has basically solved this for higher order languages 
> with HM type inference. He made an extension of the ML type system 
> with dimensional analysis a couple of years back. Sorry I don't have 
> the references at hand but he had a paper in ESOP I think.
> 
> I think the real place for dimension and unit inference is in modelling
> languages, where you can specify physical systems through differential
> equations and simulate them numerically. Such languages are being
> increasingly used in the "real world" now. 

ESOP '94: Andrew Kennedy, "Dimension Types", pp. 348-362.
There are other articles:
Jean Goubault, "Inférence d'unités physiques en ML";
Mitchell Wand and Patrick O'Keefe, "Automatic dimensional inference";
and *hundreds* (literally) of papers within the Computer Algebra domain
about dimensionful computations.

I wouldn't say that the issue is "solved".

!!!!!!

There is MUCH MORE to modelling the physical (or biological or
financial) world than just the differential equations. There is plenty
of algebra involved, and *here* the dimensional reasoning may be
important. And such systems as Matlab/Simulink, etc. ignore the
dimensions, although they now have some OO layer permitting one to
define something like them.



Jerzy Karczmarczuk
Caen, France


From mk167280@students.mimuw.edu.pl Mon Feb 12 10:00:02 2001 Date: Mon, 12 Feb 2001 11:00:02 +0100 (CET) From: Marcin 'Qrczak' Kowalczyk mk167280@students.mimuw.edu.pl Subject: Primitive types and Prelude shenanigans
On Mon, 12 Feb 2001, William Lee Irwin III wrote:

> It seems to me that some additional primitive types would be useful,
> most of all a natural number type corresponding to an arbitrary-
> precision unsigned integer. Apparently Integer is defined in terms
> of something else in the GHC Prelude, what else might be needed to
> define it?

It depends on the implementation and IMHO it would be bad to require
a particular implementation for no reason. For example ghc uses the gmp
library and does not implement Integers in terms of Naturals; gmp handles
negative numbers natively.

> Some of my other natural thoughts along these lines are positive
> reals and rationals, and nonzero integers.

You can define it yourself by wrapping not-necessarily-positive types
if you feel the need. Most of the time there is no need: because Haskell
has no subtyping, they would be awkward to use together with the present
types, which include negative numbers.
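The wrapping described here can be as small as this (a minimal sketch; the names are illustrative), which also shows the awkwardness: every crossing between Natural and Integer must be explicit.

```haskell
-- A checked Natural over Integer: the constructor is meant to be
-- kept abstract behind the smart constructor toNatural.
newtype Natural = Natural Integer deriving (Eq, Ord, Show)

toNatural :: Integer -> Natural
toNatural n
  | n < 0     = error "toNatural: negative argument"
  | otherwise = Natural n

fromNatural :: Natural -> Integer
fromNatural (Natural n) = n

main :: IO ()
main = print (fromNatural (toNatural 42))  -- prints 42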

> Further down this line, I've gone off and toyed with Bool, and
> discovered GHC doesn't like it much. Is there a particular place within
> GHC I should look to see how the primitive Boolean type, and perhaps
> other types are handled?

Modules with names beginning with Prel define approximately everything
that the Prelude includes, plus everything with magic support in the
compiler.

PrelGHC defines primops that are hardwired in the compiler, and PrelBase
is a basic module from which most things begin. In particular Bool is
defined there as a regular algebraic type
    data Bool = False | True

Types like Int or Double are defined in terms of primitive unboxed types
called Int# and Double#. They are always evaluated and are not like other
Haskell types: they don't have a kind of the form * or k1->k2 but a
special unboxed kind, and their values don't include bottom. You can't
have [Int#] or use Prelude.id :: Int# -> Int#. They can be present in
data definitions and in function arguments and results.

There is more primitive stuff: primops like (+#) :: Int# -> Int# -> Int#,
primitive array types, unboxed tuples, and unboxed constants. There is a
paper about them but I don't have the URL here. They are also described
in the GHC User's Guide.

They are not portable at all. Other Haskell implementations may use very
different implementation techniques.

In ghc they exist primarily to make it easy to express optimizations -
these types occur all the time during internal transformations of the
module being optimized - laziness is optimized away when possible. They
are often present in .hi files when a function has been split into a
worker and a wrapper, so that code using the module can refer to the
worker using primitive types directly instead of allocating every number
on the heap.

They are also exposed to the programmer (who imports GlaExts) who really
wants to hack with them manually. They don't have the nice Haskell
properties of other types (fully polymorphic operations don't work on
them), so I would not expect such things to appear officially in the
Haskell definition.
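Hand-unboxed code looks like this (a minimal sketch; in modern GHC the import is GHC.Exts rather than the GlaExts of this era):

```haskell
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int(..), (+#))

-- Unbox both arguments, add the raw machine ints with the (+#)
-- primop, and rebox the result in the I# constructor.
unboxedAdd :: Int -> Int -> Int
unboxedAdd (I# x) (I# y) = I# (x +# y)

main :: IO ()
main = print (unboxedAdd 2 3)  -- prints 5
```

This is exactly the shape of code the compiler itself produces when it unboxes a worker, which is why writing it by hand is rarely necessary.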

> I'd also like to see where some of the magic behind the typing of
> various other built-in constructs happens, like list comprehensions,
> tuples, and derived classes.

Inside the compiler, not in libraries.

-- 
Marcin 'Qrczak' Kowalczyk



From john@foo.net Mon Feb 12 10:21:08 2001 Date: Mon, 12 Feb 2001 02:21:08 -0800 From: John Meacham john@foo.net Subject: A sample revised prelude for numeric classes
I quadruple the vote that the basic algebra proposal is too complicated.
However, I don't see how one could write even moderately complex
programs and not wish for a partial ordering class or the ability to use
standard terms for groups and whatnot. The current proposal is much more
to my liking.

An important thing is that in Haskell it is easy to build up
functionality with fine-grained control, but difficult or impossible to
tear it down. You can't take a complicated class and split it up into
smaller independent pieces (not easily, at least), but you can take the
functionality of several smaller classes and build up a 'bigger' class.
Because of this, one should always err on the side of simplicity and
smaller classes when writing re-usable code.

I guess what I'm trying to say is that we don't need a Prelude which
will provide all of the mathematical structure everyone will need or
want, but rather one which doesn't inhibit the ability to build what is
needed upon it in a reasonable fashion. (I don't consider un-importing
the prelude reasonable for re-usable code and libraries meant to be
shared.)

In short, three cheers for the new proposal. My one request is that, if
at all possible, some sort of partial ordering class be made part of the
changes; they are just way too useful in all types of programs not to
have a standard abstraction.

	John

-- 
--------------------------------------------------------------
John Meacham   http://www.ugcs.caltech.edu/~john/
California Institute of Technology, Alum.  john@foo.net
--------------------------------------------------------------


From ketil@ii.uib.no Mon Feb 12 10:31:00 2001 Date: 12 Feb 2001 11:31:00 +0100 From: Ketil Malde ketil@ii.uib.no Subject: A sample revised prelude for numeric classes
qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) writes:

>> Why do you stop at allowing addition on Dollars and not include
>> multiplication by a scalar?

> Perhaps because there is no good universal type for (*).
> Sorry, it would have to have a different symbol.

Is this ubiquitous enough that we should have a *standardized*
different symbol?   Any candidates?

>> Having Units as types, with the idea of preventing adding Apples to
>> Oranges, or Dollars to Roubles, is a venerable idea, but is not in
>> widespread use in actual programming languages. Why not?

> It does not scale to more general cases. (m/s) / (s) = (m/s^2),
> so (/) would have to have the type (...) => a -> b -> c, which is not
> generally usable because of ambiguities. Haskell's classes are not
> powerful enough to define full algebra of units.

While it may not be in the language, nothing's stopping you from - and
some will probably encourage you to - implementing e.g. financial
libraries with different data types for different currencies. 

Which I think is a better way to handle it, since when you want m to
be divisible by s is rather application-dependent.
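The per-currency approach can be sketched in a few lines (names and the Rational representation are illustrative): addition within a currency type-checks, while Dollars-plus-Roubles is a compile-time error.

```haskell
newtype Dollars = Dollars Rational deriving (Eq, Ord, Show)
newtype Roubles = Roubles Rational deriving (Eq, Ord, Show)

-- Addition only within one currency.
addD :: Dollars -> Dollars -> Dollars
addD (Dollars a) (Dollars b) = Dollars (a + b)

-- Scaling by a bare number is fine; multiplying Dollars by Dollars
-- is deliberately not provided.
scaleD :: Rational -> Dollars -> Dollars
scaleD k (Dollars a) = Dollars (k * a)

main :: IO ()
main = print (scaleD 2 (addD (Dollars 3) (Dollars 4)))
-- prints Dollars (14 % 1); addD (Dollars 1) (Roubles 1) is rejected
```

This buys the Apples/Oranges protection without any type-system extension, at the cost of writing the boilerplate per currency.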

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants


From ashley@semantic.org Mon Feb 12 10:49:02 2001 Date: Mon, 12 Feb 2001 02:49:02 -0800 From: Ashley Yakeley ashley@semantic.org Subject: Scalable and Continuous
A brief idea:

something like...

--
class (Additive a) => Scalable a where
     scale :: Real -> a -> a -- equivalent to * (not sure of name for the Real type)

class (Scalable b) => Continuous a b | a -> b where
     add :: b -> a -> a
     difference :: a -> a -> b
--

Vectors, for instance, are Scalable. You can multiply them by any real 
number to get another vector. Num would also be Scalable.

An example of Continuous would be time, e.g. "Continuous Time Interval". 
There's no zero time, although there is a zero interval. Space too: 
"Continuous Position Displacement", since there's no "zero position".
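The sketch above can be made compilable as follows; Double stands in for the undecided Real type, the Additive superclass is omitted for self-containedness, and the Time/Interval pair is an illustrative stand-in for "Continuous Time Interval".

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

class Scalable a where
  scale :: Double -> a -> a

class Scalable b => Continuous a b | a -> b where
  add        :: b -> a -> a
  difference :: a -> a -> b

-- A toy model: there is no zero Time, but a zero Interval exists.
newtype Time     = Time Double     deriving (Eq, Show)
newtype Interval = Interval Double deriving (Eq, Show)

instance Scalable Interval where
  scale k (Interval d) = Interval (k * d)

instance Continuous Time Interval where
  add (Interval d) (Time t)    = Time (t + d)
  difference (Time a) (Time b) = Interval (a - b)

main :: IO ()
main = print (add (scale 2 (difference (Time 5) (Time 1))) (Time 0))
-- prints Time 8.0
```

The fundep `a -> b` says each continuous type determines its displacement type, which is what lets `difference` on Times return Intervals without annotations.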

-- 
Ashley Yakeley, Seattle WA



From jf15@hermes.cam.ac.uk Mon Feb 12 10:58:10 2001 Date: Mon, 12 Feb 2001 10:58:10 +0000 (GMT) From: Jon Fairbairn jf15@hermes.cam.ac.uk Subject: A sample revised prelude for numeric classes
On 12 Feb 2001, Ketil Malde wrote:

> qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) writes:
> 
> >> Why do you stop at allowing addition on Dollars and not include
> >> multiplication by a scalar?
> 
> > Perhaps because there is no good universal type for (*).
> > Sorry, it would have to have a different symbol.
> 
> Is this ubiquitous enough that we should have a *standardized*
> different symbol?

I'd think so.

> Any candidates?

.* *. [and .*.] ?

where the "." is on the side of the scalar

-- 
Jón Fairbairn                                 Jon.Fairbairn@cl.cam.ac.uk
31  Chalmers Road                                        jf@cl.cam.ac.uk
Cambridge CB1 3SZ                      +44 1223 570179 (pm only, please)



From mk167280@students.mimuw.edu.pl Mon Feb 12 11:04:39 2001 Date: Mon, 12 Feb 2001 12:04:39 +0100 (CET) From: Marcin 'Qrczak' Kowalczyk mk167280@students.mimuw.edu.pl Subject: A sample revised prelude for numeric classes
On Mon, 12 Feb 2001, John Meacham wrote:

> My one request is that if at all possible, make some sort of partial
> ordering class part of the changes, they are just way too useful in all
> types of programs to not have a standard abstraction.

I like the idea of having e.g. (<) and (>) not necessarily total, with
only compare total. It doesn't require introducing new operations, just
splitting an existing class into two.

Only I'm not sure why (<) and (>) should be partial, with (<=) and (>=)
total, and not for example opposite. Or perhaps all four partial, with
compare, min, max - total.

For partial ordering it's often easier to define (<=) or (>=) than (<) or
(>). They are related by (==) and not by negation, so it's not exactly the
same.

I would have PartialOrd with (<), (>), (<=), (>=), and Ord with the rest.
Or perhaps with names Ord and TotalOrd respectively?

There are several choices of default definitions for these four
operators. First of all, they can be related either by (==) or by
negation. The first works for a partial order; the second is more
efficient in the cases where it works (a total order).

We can have (<=) and (>=) defined in terms of each other, with (<) and
(>) defined in terms of (<=) and (>=) - either way around. Or vice
versa; but if the definition is in terms of (==), then, as I said, it's
better to let programmers define (<=) or (>=) and derive (<), (>) from
them. If they are defined by negation, then we get more efficient total
orders, but for truly partial orders we must explicitly define both one
of (<=), (>=) and one of (<), (>), or the results will be wrong.

Perhaps it's safer to have inefficient (<), (>) for total orders than
wrong ones for partial orders, even if it means that for optimal
performance of total orders one has to define (<=), (<) and (>):

    class Eq a => PartialOrd a where -- or Ord
        (<=), (>=), (<), (>) :: a -> a -> Bool
        -- Minimal definition: (<=) or (>=)
        a <= b = b >= a
        a >= b = b <= a
        a < b  = a <= b && a /= b
        a > b  = a >= b && a /= b

We could also require defining one of (<=), (>=) and one of (<), (>),
for both partial and total orders. Everybody must then think about
whether he defines (<) as the negation of (>=) or not, and it's simpler
for the common case of total orders - two definitions are needed. The
structure of the default definitions is more uniform:

    class Eq a => PartialOrd a where -- or Ord
        (<), (>), (<=), (>=) :: a -> a -> Bool
        -- Minimal definition: (<) or (>), (<=) or (>=)
        a < b  = b > a
        a > b  = b < a
        a <= b = b >= a
        a >= b = b <= a

This is my bet.

-- 
Marcin 'Qrczak' Kowalczyk




From mk167280@students.mimuw.edu.pl Mon Feb 12 11:17:02 2001 Date: Mon, 12 Feb 2001 12:17:02 +0100 (CET) From: Marcin 'Qrczak' Kowalczyk mk167280@students.mimuw.edu.pl Subject: Scalable and Continuous
On Mon, 12 Feb 2001, Ashley Yakeley wrote:

> class (Additive a) => Scalable a
>      scale :: Real -> a -> a -- equivalent to * (not sure of name for Real type)

Or times, which would require multiparameter classes.
    5 `times` "--" == "----------"
    5 `times` (\x -> x+1) === (\x -> x+5)
But this would suggest separating out Monoid from Additive - ugh. It makes
sense to have zero and (+) for lists and functions a->a, but not negation.
There is a class Monoid for ghc's nonstandard MonadWriter class. We would
have (++) unified with (+) and concat unified with sum.
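One possible multiparameter-class rendering of `times` (the class and instances below are illustrative, not a proposal):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class Times s a where
  times :: s -> a -> a

-- 5 `times` "--" == "----------": repetition of the list monoid.
instance Times Int [b] where
  times n xs = concat (replicate n xs)

-- 5 `times` (\x -> x+1) behaves as (\x -> x+5): n-fold composition
-- of an endofunction.
instance Times Int (b -> b) where
  times n f = foldr (.) id (replicate n f)

main :: IO ()
main = print (times (5 :: Int) "--")  -- prints "----------"
```

Each instance is really `times` over a different monoid, which is why this pulls toward separating Monoid from Additive, as noted above.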

I'm afraid of making too many small classes. But it would perhaps not
be so bad if one could define superclass methods in subclasses, so that
one could forget about the exact structure of the classes and treat a
bunch of classes as a single class if he wished. It would have to be
combined with compiler-inferred warnings about mutual definitions giving
bottoms.

-- 
Marcin 'Qrczak' Kowalczyk



From wli@holomorphy.com Mon Feb 12 11:24:08 2001 Date: Mon, 12 Feb 2001 03:24:08 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: In hoc signo vinces (Was: Revamping the numeric classes)
In a later posting Marcin Kowalczyk says:
>> If (+) can be implicitly lifted to functions, then why not signum?
>> Note that I would lift neither signum nor (+). I don't feel the need.
>>  ...

On Mon, Feb 12, 2001 at 09:33:03AM +0000, Jerzy Karczmarczuk wrote:
> I not only feel the need, but I feel that this is important that the
> additive structure in the codomain is inherited by functions. In a more
> specific context: the fact that linear functionals over a vector space
> form also a vector space, is simply *fundamental* for the quantum 
> mechanics, for the cristallography, etc. You don't need to be a Royal
> Abstractor to see this. 

I see this in a somewhat different light, though I'm in general agreement.

What I'd like to do is to be able to effectively model module structures
in the type system, and furthermore be able to simultaneously impose
distinct module structures on a particular type. For instance, complex
n-vectors are simultaneously C-modules and R-modules, and an arbitrary
commutative ring R is at once a Z-module and an R-module. Linear
functionals, which seem like common beasts (try a partially applied
inner product) live in the mathematical structure Hom_R(M,R) which is once
again an R-module, and perhaps, by inheriting structure on R, an
R'-module for various R'. So how does this affect Prelude design?
Examining a small bit of code could be helpful:

-- The group must be Abelian. I suppose anyone could think of this.
class (AdditiveGroup g, Ring r) => LeftModule g r where
	(&) :: r -> g -> g

instance AdditiveGroup g => LeftModule g Integer where
	n & x	| n == 0 = zero            -- the additive identity, not one
		| n < 0  = -((-n) & x)     -- recurse on the positive count
		| n > 0  = x + ((n-1) & x)

... and we naturally acquire the sort of structure we're looking for.
But this only shows a possible outcome, and doesn't motivate the
implementation. What _will_ motivate the implementation is the sort
of impact this has on various sorts of code:

(1) The fact that R is an AdditiveGroup immediately makes it a
	Z-module, so we have mixed-mode arithmetic by a different
	means from the usual implicit coercion.

(2) This sort of business handles vectors quite handily.

(3) The following tidbit of code immediately handles curried innerprods:

instance (AdditiveGroup group, Ring ring) => LeftModule (group->ring) ring
	where
		r & g = \g' -> r & g g'

(4) Why would we want to curry innerprods? I envision:

type SurfaceAPoles foo = SomeGraph (SomeVector foo)

and then

	surface :: SurfaceAPoles bar
	innerprod v `fmap` normalsOf faces where faces = facesOf surface

(5) Why would we want to do arithmetic on these beasts now that
	we think we might need them at all?

If we're doing things like determining the light reflected off of the
various surfaces we will want to scale and add together the various
beasties. Deferring the innerprod operation so we can do this is inelegant
and perhaps inflexible compared to:

	lightSources :: [(SomeVector foo -> Intensity foo, Position)]
	lightSources = getLightSources boundingSomething
	reflection = sum $ map (\(f,p) -> getSourceWeight p * f) lightSources
	reflection `fmap` normalsOf faces where faces = facesOf surface

and now in the lightSources perhaps ambient light can be represented
very conveniently, or at least the function type serves to abstract out
the manner in which the orientation of a surface determines the amount
of light reflected off it.

(My apologies for whatever inaccuracies are happening with the optics
here, it's quite far removed from my direct experience.)
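The class and Z-module instance sketched above can be made into a compilable example as follows; to keep it self-contained, the Ring superclass is dropped and the group operations (zero, (+.), negG) are invented names, with a concrete 2-vector instance standing in for "SomeVector".

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class AdditiveGroup g where
  zero :: g
  (+.) :: g -> g -> g
  negG :: g -> g

class AdditiveGroup g => LeftModule g r where
  (&) :: r -> g -> g

data V2 = V2 Double Double deriving (Eq, Show)

instance AdditiveGroup V2 where
  zero             = V2 0 0
  V2 a b +. V2 c d = V2 (a + c) (b + d)
  negG (V2 a b)    = V2 (negate a) (negate b)

-- Every additive group is a Z-module, by repeated addition.
instance AdditiveGroup g => LeftModule g Integer where
  n & x | n == 0    = zero
        | n < 0     = negG ((-n) & x)
        | otherwise = x +. ((n - 1) & x)

main :: IO ()
main = print ((3 :: Integer) & V2 1 2)  -- prints V2 3.0 6.0
```

The Integer annotation in main is needed because nothing else pins down the scalar type, which is one practical cost of this encoding of mixed-mode arithmetic.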

Furthermore, within things like small interpreters, it is perhaps
convenient to represent the semantic values of various expressions by
function types. If one should care to define arithmetic on vectors and
vector functions in the interpreted language, support in the source
language allows a more direct approach. This would arise within solid
modelling and graphics once again, as little languages are often used
to describe objects, images, and the like.

How can we anticipate all the possible usages of pretty-looking vector
and matrix algebra? I suspect graphics isn't the only place where
linear algebra could arise. All sorts of differential equation models
of physical phenomena, Markov models of state transition systems, even
economic models at some point require linear algebra in their
computational methods.  It's something I at least regard as a fairly
fundamental and important aspect of computation. And to me, that means
that the full power of the language should be applied toward beautifying,
simplifying, and otherwise enabling linear algebraic computations.


Cheers,
Bill
P.S.:	Please forgive the harangue-like nature of the post, it's the best
	I could do at 3AM.


From mk167280@students.mimuw.edu.pl Mon Feb 12 11:36:50 2001 Date: Mon, 12 Feb 2001 12:36:50 +0100 (CET) From: Marcin 'Qrczak' Kowalczyk mk167280@students.mimuw.edu.pl Subject: In hoc signo vinces (Was: Revamping the numeric classes)
On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:

> I not only feel the need, but I feel that this is important that the
> additive structure in the codomain is inherited by functions.

It could support only the basic arithmetic. It would not automatically
lift an expression which uses (>) and if. It would be inconsistent to
provide a shortcut for a specific case, where generally it must be
explicitly lifted anyway. Note that it does make sense to lift (>) and if,
only the type system does not permit it implicitly because a type is fixed
to Bool.

Lifting is so easy to do manually that I would definitely not constrain
the whole Prelude class system only to have convenient lifting of basic
arithmetic. When it happens that an instance of an otherwise sane class
for functions makes sense, then OK, but nothing more.

-- 
Marcin 'Qrczak' Kowalczyk



From karczma@info.unicaen.fr Mon Feb 12 13:10:22 2001 Date: Mon, 12 Feb 2001 13:10:22 +0000 From: Jerzy Karczmarczuk karczma@info.unicaen.fr Subject: In hoc signo vinces (Was: Revamping the numeric classes)
Marcin Kowalczyk wrote:
> 
> Jerzy Karczmarczuk wrote:
> 
> > I not only feel the need, but I feel that this is important that the
> > additive structure in the codomain is inherited by functions.
> 
> It could support only the basic arithmetic. It would not automatically
> lift an expression which uses (>) and if. It would be inconsistent to
> provide a shortcut for a specific case, where generally it must be
> explicitly lifted anyway. Note that it does make sense to lift (>) and if,
> only the type system does not permit it implicitly because a type is fixed
> to Bool.
> 
> Lifting is so easy to do manually that I would definitely not constrain
> the whole Prelude class system only to have convenient lifting of basic
> arithmetic. When it happens that an instance of an otherwise sane class
> for functions makes sense, then OK, but nothing more.

Sorry for quoting in extenso the full posting just to say:

I haven't the slightest idea what you are talking about.

-- but I want to avoid partial quotations and the misunderstandings
resulting therefrom. I don't want any automatic lifting, nor to
*constrain* the Prelude classes. I want to be *able* to define
mathematical operations upon objects which by their intrinsic nature
permit it!

My goodness, I really suspect that despite the plenty of opinions you
express every day on this list, you have never really tried to program
something in Haskell    IN A MATHEMATICALLY NON-TRIVIAL CONTEXT.

I have defined a hundred times some special functions to add lists or
records, or to multiply a tree by a scalar (btw.: Jón Fairbairn proposes
(.*); I have in principle nothing against it, but these operators are
used elsewhere, in other languages, CAML and Matlab; I use (*>)).

I am fed up with ad hoc solutions, knowing that correct mathematical
hierarchies permit one to inherit plenty of subsumptions, e.g. the fact
that x+x exists implies 2*x.

Thank you for reminding me that manual lifting is easy. 
In fact, everything is easy. Type-checking as well. Let's go back to
assembler.

Jerzy Karczmarczuk


From mk167280@students.mimuw.edu.pl Mon Feb 12 12:34:55 2001 Date: Mon, 12 Feb 2001 13:34:55 +0100 (CET) From: Marcin 'Qrczak' Kowalczyk mk167280@students.mimuw.edu.pl Subject: In hoc signo vinces (Was: Revamping the numeric classes)
On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:

> I want to be *able* to define mathematical operations upon objects
> which by their intrinsic nature permit so!

You can't do it in Haskell as it stands now, no matter what the Prelude
would be.

For example I would say that with the definition
    abs x = if x >= 0 then x else -x
it's obvious how to obtain abs :: ([Int]->Int) -> ([Int]->Int): apply the
definition pointwise.

But it will never work in Haskell, unless we changed the type rules for if
and the type of the result of (>=).

You are asking for letting
    abs x = max x (-x)
work on functions. OK, in this particular case it can be made to work by
making appropriate instances, but that is because this is a special case
where all intermediate types are appropriately polymorphic.

This technique cannot work in general, as the previous example shows. So
IMHO it's better not to pretend that functions can be implicitly lifted.
Better to provide as convenient a way as possible of manually lifting
arbitrary functions, so it doesn't matter whether they have a fixed
Integer in the result or not.
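The "convenient manual lifting" suggested here can be packaged as a pair of combinators (lift1/lift2 are illustrative names), which handle arbitrary functions regardless of fixed result types:

```haskell
-- Lift unary and binary operations pointwise over a shared argument.
lift1 :: (b -> c) -> (a -> b) -> (a -> c)
lift1 = (.)

lift2 :: (b -> c -> d) -> (a -> b) -> (a -> c) -> (a -> d)
lift2 op f g = \x -> f x `op` g x

-- abs on functions, written by hand via max as in the example above:
absF :: (Int -> Int) -> (Int -> Int)
absF f = lift2 max f (lift1 negate f)

main :: IO ()
main = print (absF (subtract 10) 3)  -- prints 7
```

Since the lifting is explicit, it works just as well for (>) or any function with a fixed Bool or Integer in its type, which is exactly what implicit lifting cannot do.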

You are asking for an impossible thing.

> I defined hundred times some special functions to add lists or
> records, to multiply a tree by a scalar (btw.: Jón Fairbairn proposes
> (.*), I have in principle nothing against, but these operators is used
> elsewhere, in other languages, CAML and Matlab; I use (*>) ).

Please show a concrete proposal how Prelude classes could be improved.

-- 
Marcin 'Qrczak' Kowalczyk



From dlb@wash.averstar.com Mon Feb 12 12:53:42 2001 Date: Mon, 12 Feb 2001 07:53:42 -0500 From: David Barton dlb@wash.averstar.com Subject: A sample revised prelude for numeric classes
   This is pretty rare, and it's also fairly tough to represent points in
   spaces of fractional dimension. I'll bet the sorts of complications
   necessary to do so would immediately exclude it from consideration in
   the design of a standard library, but nevertheless would be interesting
   to hear about. Can you comment further on this?

Even without fractals, there are cases where weird dimensions come up
(I ran across this in my old MHDL (microwave) days).  Square root
volts is the example that was constantly thrown in my face.  It
doesn't really mess up the model that much; you just have to use
rational dimensions rather than integer dimensions.  Everything else
works out.  I have *not* come across a case where real dimensions are
necessary, so equality still works.

					Dave Barton <*>
					dlb@averstar.com )0(
					http://www.averstar.com/~dlb


From qrczak@knm.org.pl Mon Feb 12 14:12:02 2001 Date: 12 Feb 2001 14:12:02 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: A sample revised prelude for numeric classes
Mon, 12 Feb 2001 12:04:39 +0100 (CET), Marcin 'Qrczak' Kowalczyk <mk167280@zodiac.mimuw.edu.pl> pisze:

> This is my bet.

I changed my mind:

    class Eq a => PartialOrd a where -- or Ord
        (<), (>), (<=), (>=) :: a -> a -> Bool
        -- Minimal definition: (<) or (<=).
        -- For partial order (<=) is required.
        -- For total order (<) is recommended for efficiency.
        a < b  = a <= b && a /= b
        a > b  = b < a
        a <= b = not (b < a)
        a >= b = b <= a
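This revised class can be exercised with a genuinely partial order; the subset-inclusion instance below is illustrative (the Sub type is mine), and Prelude's Ord is hidden so the class's own operators are in scope:

```haskell
import Prelude hiding (Ord(..))

class Eq a => PartialOrd a where
    (<), (>), (<=), (>=) :: a -> a -> Bool
    -- Minimal definition: (<) or (<=); for a partial order, (<=).
    a < b  = a <= b && a /= b
    a > b  = b < a
    a <= b = not (b < a)
    a >= b = b <= a

-- Subset inclusion on small Int sets: a partial, not total, order.
newtype Sub = Sub [Int] deriving (Eq, Show)

instance PartialOrd Sub where
    Sub xs <= Sub ys = all (`elem` ys) xs

main :: IO ()
main = print (Sub [1] <= Sub [1,2], Sub [1,3] <= Sub [1,2])
-- prints (True,False); Sub [1,3] and Sub [1,2] are incomparable
```

Note that the instance supplies (<=), so the (==)-based default for (<) gives correct partial-order behaviour, while the negation-based default for (<=) is simply never used here.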

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From karczma@info.unicaen.fr Mon Feb 12 16:40:06 2001 Date: Mon, 12 Feb 2001 16:40:06 +0000 From: Jerzy Karczmarczuk karczma@info.unicaen.fr Subject: In hoc signo vinces (Was: Revamping the numeric classes)
Marcin Kowalczyk continues:

> On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:
> 
> > I want to be *able* to define mathematical operations upon objects
> > which by their intrinsic nature permit so!
> 
> You can't do it in Haskell as it stands now, no matter what the Prelude
> would be.
> 
> For example I would say that with the definition
>     abs x = if x >= 0 then x else -x
> it's obvious how to obtain abs :: ([Int]->Int) -> ([Int]->Int): apply the
> definition pointwise.
> 
> But it will never work in Haskell, unless we changed the type rules for if
> and the type of the result of (>=).
> 
> You are asking for letting
>     abs x = max x (-x)
> work on functions. OK, in this particular case it can be made to work 
 ....
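(For reference, the pointwise lifting under discussion can be spelled
out in GHC today, since Num there no longer carries the Eq and Show
superclasses that Haskell 98 would also demand. A sketch, not anyone's
proposal:)

```haskell
-- Pointwise Num instance for functions: every operation is applied
-- argumentwise. abs here is the pointwise abs; the (>=)-based
-- definition still cannot be lifted, because (>=) returns a flat Bool.
instance Num b => Num (a -> b) where
    f + g       = \x -> f x + g x
    f - g       = \x -> f x - g x
    f * g       = \x -> f x * g x
    negate f    = negate . f
    abs f       = abs . f
    signum f    = signum . f
    fromInteger = const . fromInteger
```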

Why don't you try from time to time to attempt to understand what
other people want? And wait, say 2 hours, before responding? 

I DON'T WANT max TO WORK ON FUNCTIONS. I never did. I will soon need
that (because I am writing a graphical package where max serves to
intersect implicit graphical objects), but only for very specific
functions which represent textures, NOT in general.

I repeat for the last time, that I want to have those operations which
are *implied* by the mathematical properties. And anyway, if you replace
x>=0 by x>=zero with an appropriate zero, this should work as well.
I want only that Prelude avoids spurious dependencies.

This is the way I program in Clean, where there is no Num, and (+), (*),
zero, abs, etc. constitute classes by themselves. So, when you say:

> You are asking for an impossible thing.

My impression is that what is impossible is your way of interpreting/
understanding the statements (and/or desiderata) of other people. 

> > I defined hundred times some special functions to add lists or
> > records, to multiply a tree by a scalar (btw.: Jón Fairbairn proposes
> > (.*), I have in principle nothing against, but this operator is used
> > elsewhere, in other languages, CAML and Matlab; I use (*>) ).
> 
> Please show a concrete proposal how Prelude classes could be improved.

(Why do you precede your query by this citation? What do you have to say
here about the syntax proposed by Jón Fairbairn, or whatever??)

I am a Haskell USER. I have no ambition to save the world. The "proposal"
has been presented in 1995 in Nijmegen (FP in education). Actually, it
hasn't, I concentrated on lazy power series etc., and the math oriented
prelude has been mentioned casually. Jeroen Fokker presented similar
ideas, implemented differently. 
If you have nothing else to do (but only in this case!) you may find 
the modified prelude called math.hs for Hugs (which needs a modified 
prelude.hs exporting primitives) in 

http://users.info.unicaen.fr/~karczma/humat/

This is NOT a "public proposal" and I *don't want* your public comments
on it. If you want to be nice, show me some of *your* Haskell programs.

Jerzy Karczmarczuk
Caen, France


From dpt@math.harvard.edu Mon Feb 12 16:59:04 2001 Date: Mon, 12 Feb 2001 11:59:04 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: A sample revised prelude for numeric classes
On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> > Thus these laws should be interpreted as guidelines rather than
> > absolute rules.  In particular, the compiler is not allowed to use
> > them.  Unless stated otherwise, default definitions should also be
> > taken as laws.
> 
> Including laws was discussed very early in the development of the
> language, but was rejected. IIRC Miranda had them. The argument against
> laws was that their presence might mislead users into the assumption
> that they did hold, yet if they were not enforcable then they might not
> hold and that could have serious consequences. Also, some laws do not
> hold in domains with bottom, e.g. a + (negate a) === 0 is only true if a
> is not bottom. 

These are good points, but I still feel that laws can be helpful as
guidelines, as long as they are not interpreted as anything more.  For
instance, the Haskell Report does give laws for Monads and quotRem,
although they, too, are not satisfied in the presence of bottom, etc.
(Is that right?)

Writing out the laws lets me say, for instance, whether users of Num
and Fractional should expect multiplication to be commutative.  (No
and yes, respectively.  I require Fractional to be commutative mainly
because common usage does not use either '/' or 'reciprocal' to
indicate inverse in a non-commutative ring.)

> > class (Additive a) => Num a where
> >     (*)         :: a -> a -> a
> >     one         :: a
> >     fromInteger :: Integer -> a
> >
> >       -- Minimal definition: (*), one
> >     fromInteger 0         = zero
> >     fromInteger n | n < 0 = negate (fromInteger (-n))
> >     fromInteger n | n > 0 = reduceRepeated (+) one n
> 
> This definition requires both Eq and Ord!!!

Ah, but only Eq and Ord for Integer, which (as a built-in type) has Eq
and Ord instances.  The type signature for reduceRepeated is

  reduceRepeated :: (a -> a -> a) -> a -> Integer -> a
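One plausible implementation (my guess, not quoted from the proposal)
combines n copies of its argument by the usual repeated-doubling
trick, so it needs only O(log n) applications of the operation:

```haskell
-- Combine n copies of x with the associative operation op.
-- Only associativity of op is assumed; n must be positive.
reduceRepeated :: (a -> a -> a) -> a -> Integer -> a
reduceRepeated op x n
  | n <= 0    = error "reduceRepeated: positive count required"
  | n == 1    = x
  | even n    = reduceRepeated op (x `op` x) (n `div` 2)
  | otherwise = x `op` reduceRepeated op (x `op` x) (n `div` 2)
```

For example, reduceRepeated (*) a n is a^n, and reduceRepeated (+) one n
is the fromInteger default above.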

> As does this one:
> > class (Num a, Additive b) => Powerful a b where
> >     (^) :: a -> b -> a
> > instance (Num a) => Powerful a (Positive Integer) where
> >     a ^ 0 = one
> >     a ^ n = reduceRepeated (*) a n
> > instance (Fractional a) => Powerful a Integer where
> >     a ^ n | n < 0 = recip (a ^ (negate n))
> >     a ^ n         = a ^ (positive n)

Likewise here.

> and several others further down. 

I tried to be careful not to use Eq and Ord for generic types when not
necessary, but I may have missed some.  Please let me know.

(Oh, I just realised that Euclid's algorithm requires Eq.  Oops.
That's what I get for not writing it out explicitly.  I'll have to
revisit the Integral part of the hierarchy.)

> > (4) In some cases, the hierarchy is not finely-grained enough:
> >     operations that are often defined independently are lumped
> >     together.  For instance, in a financial application one might want
> >     a type "Dollar", or in a graphics application one might want a
> >     type "Vector".  It is reasonable to add two Vectors or Dollars,
> >     but not, in general, reasonable to multiply them.  But the
> >     programmer is currently forced to define a method for (*) when she
> >     defines a method for (+).
> 
> Why do you stop at allowing addition on Dollars and not include
> multiplication by a scalar? Division is also readily defined on Dollar
> values, with a scalar result, but this, too, is not available in the
> proposal. 

I will allow multiplication by a scalar; it's just not in the classes
I've written down so far.  (And may not be in the Prelude.)

Thanks for reminding me about division.  I had forgotten about that.
It bears some thought.

> Having Units as types, with the idea of preventing adding Apples to
> Oranges, or Dollars to Roubles, is a venerable idea, but is not in
> widespread use in actual programming languages. Why not?

That's a good question.  I don't know.  One cheeky answer would be for
lack of a powerful enough type system (allowing you to, e.g., work on
generic units when you want to), but I don't know if that is actually
true.

Don't modern HP calculators use units consistently?

> Vectors, too, can be multiplied, producing both scalar- and
> vector-products.

Yes, but these are really different operations and should be
represented with different symbols.  Neither one is associative, for
instance.

> It seems that you are content with going as far as the proposal permits,
> though you cannot define, even within the revised Class system, all the
> common and useful operations on these types. This is the same situation
> as with Haskell as it stands. The question is whether the (IMHO)
> marginal increase in flexibility is worth the cost.

I believe that with this structure as base, the other common and
useful operations can easily be added on top.

But I should go ahead and do it.

Best,
	Dylan Thurston


From qrczak@knm.org.pl Mon Feb 12 17:20:43 2001 Date: 12 Feb 2001 17:20:43 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Revamping the numeric classes
Mon, 12 Feb 2001 10:58:40 +1300, Tom Pledger <Tom.Pledger@peace.com> writes:

>  | Approach it differently. z is Double, (x+y) is added to it, so
>  | (x+y) must have type Double.
> 
> That's a restriction I'd like to avoid.  Instead: ...so the most
> specific common supertype of Double and (x+y)'s type must support
> addition.

In general there is no such thing as (x+y)'s type considered separately
from this usage. The use of (x+y) as one of arguments of this addition
influences the type determined for it. Suppose x and y are lambda-bound
variables: then you don't know their types yet.

Currently this addition determines their types: it must be the same
as the type of z.

With your rules the type of
    \x y -> x + y
is not
    (some context) => a -> a -> a
but
    (some context) => a -> b -> c

It leads to horrible ambiguities unless the context is able to
determine some types exactly (which is currently true only for
fundeps).

>  | Why is your approach better than mine?
> 
> It used a definition of (+) which was a closer fit for the types of x
> and y.

But used a worse definition of the outer (+): mine was
    Double -> Double -> Double
and yours was
    Int -> Double -> Double
with the implicit conversion of Int to Double.

> Yes, I rashly glossed over the importance of having well-defined most
> specific common supertype (MSCS) and least specific common subtype
> (LSCS) operators in a subtype lattice.

They are not always defined. Suppose the following holds:
    Word32 `Subtype` Double
    Word32 `Subtype` Integer
    Int32  `Subtype` Double
    Int32  `Subtype` Integer
What is the MSCS of Word32 and Int32? What is the LSCS of Double
and Integer?

> Anyway, since neither of us is about to have a change of mind, and
> nobody else is showing an interest in this branch of the discussion,
> it appears that the most constructive thing for me to do is return to
> try-to-keep-quiet-about-subtyping-until-I've-done-it-in-THIH mode.

IMHO it's impossible to do.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From dpt@math.harvard.edu Mon Feb 12 17:24:53 2001 Date: Mon, 12 Feb 2001 12:24:53 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Clean numeric system?
On Mon, Feb 12, 2001 at 04:40:06PM +0000, Jerzy Karczmarczuk wrote:
> This is the way I program in Clean, where there is no Num, and (+), (*),
> zero, abs, etc. constitute classes by themselves. ...

I've heard Clean mentioned before in this context, but I haven't found
the Clean numeric class system described yet.  Can you send me a
pointer to their class system, or just give me a description?

Does each operation really have its own class?  That seems slightly
silly.  Are the (/) and 'recip' equivalents independent, and
independent of (*) as well?

Best,
	Dylan Thurston


From dpt@math.harvard.edu Mon Feb 12 18:15:14 2001 Date: Mon, 12 Feb 2001 13:15:14 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: A sample revised prelude for numeric classes
On Mon, Feb 12, 2001 at 07:24:31AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> Sun, 11 Feb 2001 22:27:53 -0500, Dylan Thurston <dpt@math.harvard.edu> writes:
> > Reading this, it occurred to me that you could explictly declare an
> > instance of Powerful Integer Integer and have everything else work.
> No, because it overlaps with Powerful a Integer (the constraint on a
> doesn't matter for determining if it overlaps).

Point.  Thanks.  Slightly annoying.

> > > Then the second argument of (^) is always arbitrary RealIntegral,
> > 
> > Nit: the second argument should be an Integer, not an arbitrary
> > RealIntegral.
> 
> Of course not. (2 :: Integer) ^ (i :: Int) makes perfect sense.

But for arbitrary RealIntegrals it need not make sense.

Please do not assume that
  toInteger :: RealIntegral a => a -> Integer
  toInteger n | n < 0 = toInteger (negate n)
  toInteger 0         = 0
  toInteger n | n > 0 = 1 + toInteger (n-1)
(or the more efficient version using 'even') terminates (in principle)
for all RealIntegrals, at least with the definition as it stands in my
proposal.  Possibly toInteger should be added; then (^) could have the
type you suggest.  For usability issues, I suppose it should.  (E.g.,
users will want to use Int ^ Int.)
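The "more efficient version using 'even'" might look like the sketch
below. I write it against the standard Integral class, since
RealIntegral exists only in the proposal:

```haskell
-- Binary-decomposition version: O(log n) recursive steps instead of
-- O(n). Uses only comparison, even, div, and subtraction on the
-- source type.
toInteger' :: Integral a => a -> Integer
toInteger' n
  | n < 0     = negate (toInteger' (negate n))
  | n == 0    = 0
  | even n    = 2 * toInteger' (n `div` 2)
  | otherwise = 1 + toInteger' (n - 1)
```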

OK, I'm convinced of the necessity of toInteger (or an equivalent).
I'll fit it in.

Best,
	Dylan Thurston


From dpt@math.harvard.edu Mon Feb 12 18:23:53 2001 Date: Mon, 12 Feb 2001 13:23:53 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: A sample revised prelude for numeric classes
On Sun, Feb 11, 2001 at 09:17:53PM -0800, William Lee Irwin III wrote:
> Consider taking of the residue of a truly infinite member of Z[[x]]
> mod an ideal generated by a polynomial, e.g. 1/(1-x) mod (1+x^2).
> You can take the residue of each term of 1/(1-x), so x^(2n) -> (-1)^n
> and x^(2n+1) -> (-1)^n x, but you end up with an infinite number of
> (nonzero!) residues to add up and hence encounter the troubles with
> processes not being finite that I mentioned.

Sorry, isn't (1+x^2) invertible in Z[[x]]?

> I think it's nice to have the Cauchy principal value versions of things
> floating around.  I know at least that I've had call for using the CPV
> of exponentiation (and it's not hard to contrive an implementation),
> but I'm almost definitely an atypical user. (Note, (**) does this today.)

Does Cauchy Principal Value have a specific definition I should know?
The Haskell report refers to the APL language report; do you mean that
definition?

For the Complex class, that should be the choice.

> I neglected here to add in the assumption that (<=) was a total relation,
> I had in mind antisymmetry of (<=) in posets so that element isomorphism
> implies equality. Introducing a Poset class where elements may be
> incomparable appears to butt against some of the bits where Bool is
> hardwired into the language, at least where one might attempt to use a
> trinary logical type in place of Bool to denote the result of an
> attempted comparison.

I'm still agnostic on the Poset issue, but as an aside, let me mention
that "Maybe Bool" works very well as a trinary logical type.  "liftM2
(&&)" does the correct trinary and, for instance.
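Concretely (a sketch; note that liftM2 (&&) gives the
unknown-propagating truth table, where any Nothing operand makes the
result Nothing):

```haskell
import Control.Monad (liftM2)

-- Trinary logic encoded as Maybe Bool, with Nothing = unknown.
tAnd :: Maybe Bool -> Maybe Bool -> Maybe Bool
tAnd = liftM2 (&&)

tNot :: Maybe Bool -> Maybe Bool
tNot = fmap not
```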

> On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
> > But to define <= in terms of meet and join you already need Eq!
> > 
> >   x <= y === meet x y == y
> 
> I don't usually see this definition of (<=), and it doesn't seem like
> the natural way to go about defining it on most machines. The notion
> of the partial (possibly total) ordering (<=) seems to be logically
> prior to that of the meet to me. The containment usually goes:

It may be logically prior, but computationally it's not...  Note that
the axioms for lattices can be stated either in terms of the partial
ordering, or in terms of meet and join.

(In a completely fine-grained ordering hierarchy, I would have the
equation I gave above as a default definition for <=, with the
expectation that most users would want to override it.  Compare my
fromInteger default definition.)

Best,
	Dylan Thurston


From dpt@math.harvard.edu Mon Feb 12 18:51:54 2001 Date: Mon, 12 Feb 2001 13:51:54 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Typing units correctly
On Mon, Feb 12, 2001 at 10:08:02AM +0100, Bjorn Lisper wrote:
> >The main complication is that the type system needs to deal with
> >integer exponents of dimensions, if it's to do the job well.
> Andrew Kennedy has basically solved this for higher order languages with HM
> type inference. He made an extension of the ML type system with dimensional
> analysis a couple of years back. Sorry I don't have the references at hand
> but he had a paper in ESOP I think.

The papers I could find (e.g.,
http://citeseer.nj.nec.com/kennedy94dimension.html, "Dimension Types")
mention extensions to ML.  I wonder if it is possible to work within
the Haskell type system, which is richer than ML's type system.

The main problem I see is that the dimensions should commute:
  Length * Time = Time * Length.
I can't think of how to represent Length, Time, and * as types,
type constructors, or whatnot so that that would be true.  You could
put in functions to explicitly do the conversion, but that obviously
gets impractical.
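The difficulty shows up already in a small sketch (all the types here
are invented for illustration): with phantom types, Mul Length Time
and Mul Time Length are simply different types, so the desired
commutativity fails.

```haskell
-- Phantom-type units: dimension checking works, but the product
-- dimension is not commutative at the type level.
data Length = Length
data Time   = Time
data Mul a b = Mul

newtype Quantity d = Quantity Double deriving (Eq, Show)

times :: Quantity a -> Quantity b -> Quantity (Mul a b)
times (Quantity x) (Quantity y) = Quantity (x * y)

distance :: Quantity Length
distance = Quantity 6.0

duration :: Quantity Time
duration = Quantity 2.0

-- distance `times` duration :: Quantity (Mul Length Time)
-- duration `times` distance :: Quantity (Mul Time Length)
-- Comparing the two with (==) is a type error, even though both
-- should denote the same physical quantity.
```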

Any such system would probably not be able to type (^), since the
output type depends on the exponent.  I think that is acceptable.

I think you would also need a finer-grained hierarchy in the Prelude
(even finer than in my proposal) to get this to work.

> It would be quite interesting to have a version of Haskell that would allow
> the specification of differential equations, so one could make use of all
> the good features of Haskell for this. This would allow the unified
> specification of systems that consist both of physical and computational
> components. This niche is now being filled by a mix of special-purpose
> modeling languages like Modelica and Matlab/Simulink for the physical part,
> and SDL and UML for control parts. The result is likely to be a mess, in
> particular when these specifications are to be combined into full system
> descriptions.

My hope is that you wouldn't need a special version of Haskell.

Best,
	Dylan Thurston


From Tom.Pledger@peace.com Mon Feb 12 20:59:37 2001 Date: Tue, 13 Feb 2001 09:59:37 +1300 From: Tom Pledger Tom.Pledger@peace.com Subject: Typing units correctly
Dylan Thurston writes:
 | Any such system would probably not be able to type (^), since the
 | output type depends on the exponent.  I think that is acceptable.

In other words, the first argument to (^) would have to be
dimensionless?  I agree.  So would the arguments to trig functions,
etc.


Ashley Yakeley writes:
 | Very occasionally non-integer or 'fractal' exponents of dimensions
 | are useful. For instance, geographic coastlines can be measured in
 | km ^ n, where 1 <= n < 2. This doesn't stop the CIA world factbook
 | listing all coastline lengths in straight kilometres, however.

David Barton writes:
 | Even without fractals, there are cases where weird dimensions come
 | up (I ran across this in my old MHDL (microwave) days).  Square
 | root volts is the example that was constantly thrown in my face.

In both of those cases, the apparent non-integer dimension is
accompanied by a particular unit (km, V).  So, could they equally well
be handled by stripping away the units and exponentiating a
dimensionless number?  For example:

    (x / 1V) ^ y


Regards,
Tom


From jhf@lanl.gov Mon Feb 12 21:13:38 2001 Date: Mon, 12 Feb 2001 14:13:38 -0700 (MST) From: Joe Fasel jhf@lanl.gov Subject: In hoc signo vinces (Was: Revamping the numeric classes)
On 09-Feb-2001 William Lee Irwin III wrote:
| Matrix rings actually manage to expose the inappropriateness of signum
| and abs' definitions and relationships to Num very well:
| 
| class  (Eq a, Show a) => Num a  where
|     (+), (-), (*)   :: a -> a -> a
|     negate          :: a -> a
|     abs, signum     :: a -> a
|     fromInteger     :: Integer -> a
|     fromInt         :: Int -> a -- partain: Glasgow extension
| 
| Pure arithmetic ((+), (-), (*), negate) works just fine.
| 
| But there are no good injections to use for fromInteger or fromInt,
| the type of abs is wrong if it's going to be a norm, and it's not
| clear that signum makes much sense.

For fromInteger, fromInt, and abs, the result should be a scalar matrix.
For the two coercions, I don't think there would be much controversy about this.
I agree that it would be nice if abs could return a scalar, but this requires
multiparameter classes, so we have to make do with a scalar matrix.

We already have this problem with complex numbers:  It might be nice
if the result of abs were real.

signum does make sense.  You want abs and signum to obey these laws:

        x == abs x * signum x
        abs (signum x) == (if abs x == 0 then 0 else 1)

Thus, having fixed an appropriate matrix norm, signum is a normalization
function, just as with reals and complexes.
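For ordinary numbers, where signum really is the normalization
x / |x| (and 0 at 0), both laws can be checked directly:

```haskell
-- The two laws above, stated for Double.
lawsHold :: Double -> Bool
lawsHold x = x == abs x * signum x
          && abs (signum x) == (if abs x == 0 then 0 else 1)
```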

If we make the leap to multiparameter classes, I think this is
the signature we want:

        class (Eq a, Show a) => Num a b | a -> b where
            (+), (-), (*)       :: a -> a -> a
            negate              :: a -> a
            abs                 :: a -> b
            signum              :: a -> a
            scale               :: b -> a -> a
            fromInteger         :: Integer -> a
            fromInt             :: Int -> a

Here, b is the type of norms of a.  Instead of the first law above, we have

        x == scale (abs x) (signum x)

All this, of course, is independent of whether we want a more proper
algebraic class hierarchy, with (+) introduced by Monoid, negate and (-)
by Group, etc.
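A cut-down rendering of that signature in GHC (class and method names
invented here to avoid clashing with the real Num), instantiated at
complex numbers with Double as the norm type b:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances #-}
import Data.Complex (Complex((:+)), magnitude)

-- b is the type of norms of a, determined by a.
class Normed a b | a -> b where
    nAbs    :: a -> b        -- plays the role of abs
    nSignum :: a -> a        -- normalization
    nScale  :: b -> a -> a   -- plays the role of scale

instance Normed (Complex Double) Double where
    nAbs = magnitude
    nSignum z
      | magnitude z == 0 = 0
      | otherwise        = z / (magnitude z :+ 0)
    nScale k z = (k :+ 0) * z

-- law: z == nScale (nAbs z) (nSignum z), up to rounding
```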

Cheers,
--Joe

Joseph H. Fasel, Ph.D.              email: jhf@lanl.gov
Technology Modeling and Analysis    phone: +1 505 667 7158
University of California            fax:   +1 505 667 2960
Los Alamos National Laboratory      post:  TSA-7 MS F609; Los Alamos, NM 87545


From wli@holomorphy.com Mon Feb 12 21:31:29 2001 Date: Mon, 12 Feb 2001 13:31:29 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: In hoc signo vinces (Was: Revamping the numeric classes)
On Mon, Feb 12, 2001 at 02:13:38PM -0700, Joe Fasel wrote:
> For fromInteger, fromInt, and abs, the result should be a scalar matrix.
> For the two coercions, I don't think there would be much controversy
> about this. I agree that it would be nice if abs could return a
> scalar, but this requires multiparameter classes, so we have to make
> do with a scalar matrix.

I'm not a big fan of this approach. I'd like to see at least some
attempt to statically type dimensionality going on, and that flies in
the face of it. Worse yet, coercing integers to matrices is likely to
be a programmer error.

On Mon, Feb 12, 2001 at 02:13:38PM -0700, Joe Fasel wrote:
> signum does make sense.  You want abs and signum to obey these laws:
> 
>         x == abs x * signum x
>         abs (signum x) == (if abs x == 0 then 0 else 1)
> 
> Thus, having fixed an appropriate matrix norm, signum is a normalization
> function, just as with reals and complexes.

This works fine for matrices of reals, but for matrices of integers and
polynomials over integers and the like, it breaks down quite quickly.
It's unclear that in domains like that, the norm would be meaningful
(in the sense of something we might want to compute) or that it would
have a type that meshes well with a class hierarchy we might want to
design. Matrices over Z/nZ for various n and Galois fields, and perhaps
various other unordered algebraically incomplete rings explode this
further still.

On Mon, Feb 12, 2001 at 02:13:38PM -0700, Joe Fasel wrote:
> If we make the leap to multiparameter classes, I think this is
> the signature we want:

Well, nothing is going to satisfy everyone. It's pretty reasonable,
though.

Cheers,
Bill


From jhf@lanl.gov Mon Feb 12 21:51:52 2001 Date: Mon, 12 Feb 2001 14:51:52 -0700 (MST) From: Joe Fasel jhf@lanl.gov Subject: In hoc signo vinces (Was: Revamping the numeric classes)
On 12-Feb-2001 William Lee Irwin III wrote:
| On Mon, Feb 12, 2001 at 02:13:38PM -0700, Joe Fasel wrote:
|> signum does make sense.  You want abs and signum to obey these laws:
|> 
|>         x == abs x * signum x
|>         abs (signum x) == (if abs x == 0 then 0 else 1)
|> 
|> Thus, having fixed an appropriate matrix norm, signum is a normalization
|> function, just as with reals and complexes.
| 
| This works fine for matrices of reals, but for matrices of integers and
| polynomials over integers and the like, it breaks down quite quickly.
| It's unclear that in domains like that, the norm would be meaningful
| (in the sense of something we might want to compute) or that it would
| have a type that meshes well with a class hierarchy we might want to
| design. Matrices over Z/nZ for various n and Galois fields, and perhaps
| various other unordered algebraically incomplete rings explode this
| further still.

Fair enough.  So, the real question is not whether signum makes sense,
but whether abs does.  I guess the answer is that it does for matrix rings
over division rings.

Cheers,
--Joe

Joseph H. Fasel, Ph.D.              email: jhf@lanl.gov
Technology Modeling and Analysis    phone: +1 505 667 7158
University of California            fax:   +1 505 667 2960
Los Alamos National Laboratory      post:  TSA-7 MS F609; Los Alamos, NM 87545


From wli@holomorphy.com Mon Feb 12 22:10:20 2001 Date: Mon, 12 Feb 2001 14:10:20 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: A sample revised prelude for numeric classes
On Sun, Feb 11, 2001 at 09:17:53PM -0800, William Lee Irwin III wrote:
>> mod an ideal generated by a polynomial, e.g. 1/(1-x) mod (1+x^2).

On Mon, Feb 12, 2001 at 01:23:53PM -0500, Dylan Thurston wrote:
> Sorry, isn't (1+x^2) invertible in Z[[x]]?

You've caught me asleep at the wheel again. Try 1/(1-x) mod (2+x^2). Then
	x^(2n)   -> (-2)^n
	x^(2n+1) -> (-2)^n x
so our process isn't finite again, and as 2 is not a unit in Z,
2+x^2 is not a unit in Z[[x]].

On Sun, Feb 11, 2001 at 09:17:53PM -0800, William Lee Irwin III wrote:
>> I think it's nice to have the Cauchy principal value versions of things
>> floating around.  I know at least that I've had call for using the CPV
>> of exponentiation (and it's not hard to contrive an implementation),
>> but I'm almost definitely an atypical user. (Note, (**) does this today.)

On Mon, Feb 12, 2001 at 01:23:53PM -0500, Dylan Thurston wrote:
> Does Cauchy Principal Value have a specific definition I should know?
> The Haskell report refers to the APL language report; do you mean that
> definition?

The Cauchy principal value of an integral seems fairly common in complex
analysis, and so what I mean by the CPV of exponentiation is using the
principal value of the logarithm in the definition w^z = exp (z * log w).
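In those terms, principal-value exponentiation is just the following
(this is in fact how (**) on Complex behaves today):

```haskell
import Data.Complex (Complex((:+)), magnitude)

-- Principal-value exponentiation: w^z = exp (z * log w), where log
-- is the principal branch (imaginary part in (-pi, pi]).
cpvPow :: Complex Double -> Complex Double -> Complex Double
cpvPow w z = exp (z * log w)
```

For example, cpvPow ((-1) :+ 0) (0.5 :+ 0) gives i up to rounding,
the principal square root of -1.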

Essentially, given an integral from one point to another in the complex
plane (where the points can be e^(i*\gamma)*\infty) the Cauchy principal
value specifies precisely which contour to use, for if the function has
a singularity, connecting the endpoints by a countour that loops about
those singularities a number of times will affect the value of the
integral. This is fairly standard complex analysis, are you sure you
can't dig it up somewhere? It basically says to connect the endpoints
of integration by a straight line unless singularities occur along that
line, and in that case, to shrink a semicircle about the singularities,
and the limit is the Cauchy principal value. More precise definitions
are lengthier.

On Mon, Feb 12, 2001 at 01:23:53PM -0500, Dylan Thurston wrote:
> I'm still agnostic on the Poset issue, but as an aside, let me mention
> that "Maybe Bool" works very well as a trinary logical type.  "liftM2
> &&" does the correct trinary and, for instance.

I can only argue against this on aesthetic grounds. (<=) and cousins
are not usually typed so as to return Maybe Bool.

On Mon, Feb 12, 2001 at 01:23:53PM -0500, Dylan Thurston wrote:
> It may be logically prior, but computationally it's not...  Note that
> the axioms for lattices can be stated either in terms of the partial
> ordering, or in terms of meet and join.

I was under the impression the distinction between lattices and partial
orders was the existence of the meet and join operations.

Actually, I think my argument centers about the use of the antisymmetry
of the relation (<=) being used to define computational equality in
some instances. Can I think of any good examples? Well, a contrived one
would be that on types, if there is a substitution S such that S t = t'
(structurally), where we might say that t' <= t, and also a
substitution S' such that S' t' = t (again, structurally) where we
might say that t <= t', so we have then t == t' (semantically). Yes,
I realize this is not a great way to go about this.

Another (perhaps contrived) example would be ordering expression trees
by the flat CPO bottom <= _ on constants of a signature, and the
natural business where if the trees differ in structure, they're
incomparable, except where bottom would be compared with something
non-bottom, in which case (<=) holds. In this case, we might want
equality to be that two expression trees t, t' are equal iff there are
sequences of reductions r, r' such that r t = r' t' (again, structurally).

You might argue that the notion of structural equality underlying these
is some sort of grounds for the dependency, and I think that hits on
the gray area where design decisions come in. What I'm hoping the
examples demonstrate is the mathematical equality and ordering (in some
metalanguage) underlie both of the computational notions, and that the
computational notions may very well reverse or break the dependency

	class Eq t => Ord t where ...

especially when the structure of the data does not reflect the
equivalence relation we'd like (==) to denote.

Cheers,
Bill


From wli@holomorphy.com Mon Feb 12 22:38:25 2001 Date: Mon, 12 Feb 2001 14:38:25 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Primitive types and Prelude shenanigans
On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote:
> It depends on the implementation and IMHO it would be bad to require
> a particular implementation for no reason. For example ghc uses the gmp
> library and does not implement Integers in terms of Naturals; gmp handles
> negative numbers natively.

I'm aware natural numbers are not a primitive data type within Haskell;
I had the idea in mind that for my own experimentation I might add them.

On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote:
> You can define it yourself by wrapping not-necessarily-positive types
> if you feel the need. Most of the time there is no need because Haskell
> has no subtyping - they would be awkward to use together with present
> types which include negative numbers.

Perhaps I should clarify my intentions:
The various symbols for integer literals all uniformly denote the values
fromInteger #n where #n is some monotype or other. What I had in mind
was (again, for my own wicked purposes) treating specially the symbols
0 and 1 so that the implicit coercions going on are for the type
classes where additive and multiplicative identities exist, then
overloading the positive symbols so that the implicit coercion is
instead fromNatural, and then leaving the negative symbols (largely) as
they are.

This is obviously too radical for me to propose it as anything, I intend
to only do it as an experiment or perhaps for my own usage (though if
others find it useful, they can have it).

On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote:
> Modules with names beginning with Prel define approximately everything
> what Prelude includes and everything with magic support in the compiler.

I've not only already found these, but in attempting to substantially
alter them I've run into the trouble below:

On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote:
> PrelGHC defines primops that are hardwired in the compiler, and PrelBase
> is a basic module from which most things begin. In particular Bool is
> defined there as a regular algebraic type
>     data Bool = False | True

The magic part I don't seem to get is that moving the definition of Bool
around, and also changing the types of various things assumed to be
Bool, causes things to break. The question seems to be figuring out what
depends on it being where, and how to either make it more flexible or
accommodate it.

On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote:
[useful info not needing a response snipped]

On Mon, 12 Feb 2001, William Lee Irwin III wrote:
>> I'd also like to see where some of the magic behind the typing of
>> various other built-in constructs happens, like list comprehensions,
>> tuples, and derived classes.

On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote:
> Inside the compiler, not in libraries.

I had in mind looking within the compiler, actually. Where in the
compiler? It's a big program, it might take me a while to do an
uninformed search. I've peeked around a little bit and not gotten
anywhere.


Cheers,
Bill


From laszlo@ropas.kaist.ac.kr Tue Feb 13 03:06:01 2001 Date: Tue, 13 Feb 2001 12:06:01 +0900 (KST) From: Laszlo Nemeth laszlo@ropas.kaist.ac.kr Subject: Deja vu: Re: In hoc signo vinces (Was: Revamping the numeric classes)
[incomprehensible (not necessarily wrong!) stuff about polynomials,
 rings, modules over Z and complaints about the current prelude nuked]

--- Marcin 'Qrczak' Kowalczyk writes ---

> Please show a concrete proposal how Prelude classes could be improved.

--- Jerzy Karczmarczuk replies ---

> I am Haskell USER. I have no ambition to save the world. The "proposal"
> has been presented in 1995 in Nijmegen (FP in education). Actually, it
> hasn't, I concentrated on lazy power series etc., and the math oriented
> prelude has been mentioned casually. Jeroen Fokker presented similar
> ideas, implemented differently. 

I'm afraid all this discussion reminds me of the one we had a year or
two ago. At that time the mathematically inclined side was led by
Sergei, who to his credit developed the Basic Algebra Proposal, which
I don't understand, but many people seemed to be happy about at that
time. And then of course nothing happened, because no haskell
implementor has bitten the bullet and implemented the proposal. This
is understandable, as supporting Sergei's proposal seems to be a lot
of work, most of which would be incompatible with current
implementations. And no one wants to maintain *two* haskell compilers
within one.

Even if this discussion continues and another brave soul develops
another algebra proposal, I am prepared to bet both of you one year's
supply of Ben and Jerry's (not Jerzy :)!) ice cream that nothing
will continue to happen on the implementors' side. It is simply too
much work for an *untested* (in practice, for teaching, etc.)
alternative prelude.

So instead of wasting time, why don't you guys ask the implementors to
provide a flag '-IDontWantYourStinkingPrelude' which would give you a
bare metal compiler with no predefined types, functions, classes, no
derived instances, no fancy stuff and build and test your proposals
with it?

I guess the RULES pragma (in GHC) could be abused to allow access to
the primitive operations (on Ints), but you are still likely to lose
much of the elegance, conciseness and perhaps even some efficiency of
Haskell (e.g. list comprehensions). Still, this should allow us to gain
experience in what sort of support is essential for providing
alternative prelude(s). Once we learnt how to decouple the prelude
from the compiler, and gained experience with alternative preludes,
implementors would have no excuse not to provide the possibility
(unless it turns out to be completely impossible or impractical, in
which case we learnt something genuinely useful).

So, Marcin (as you are one of the GHC implementors), how much work
would it be to disable the disputed Prelude stuff within the compiler,
and what would be lost?

Laszlo

[Disclaimer: Just my 10 wons. This message is not in disagreement or
             agreement with any of the previous messages]


From simonpj@microsoft.com Tue Feb 13 01:16:02 2001 Date: Mon, 12 Feb 2001 17:16:02 -0800 From: Simon Peyton-Jones simonpj@microsoft.com Subject: Revamping the numeric HUMAN ATTITUDE
| I'm seeing a bit of this now, and the error messages GHC spits out
| are hilarious! e.g.
| 
|     My brain just exploded.
|     I can't handle pattern bindings for 
| existentially-quantified constructors.
| 
| and
| 
|     Couldn't match `Bool' against `Bool'
|         Expected type: Bool
|         Inferred type: Bool
| 

The first of these is defensible, I think.  It's not at all clear 
(to me anyway) what pattern bindings for existentially-quantified 
constructors should mean.  

The second is plain bogus.  GHC should never give a message like
that.  Which version of the compiler are you using?   If you
can send a small example I'll try it on the latest compiler.

Simon



From dlb@wash.averstar.com Tue Feb 13 11:43:06 2001 Date: Tue, 13 Feb 2001 06:43:06 -0500 From: David Barton dlb@wash.averstar.com Subject: Typing units correctly
Tom Pledger writes:

   In both of those cases, the apparent non-integer dimension is
   accompanied by a particular unit (km, V).  So, could they equally
   well be handled by stripping away the units and exponentiating a
   dimensionless number?  For example:

       (x / 1V) ^ y


I think not.  The "Dimension Types" paper really is excellent, and
makes the distinction between the necessity of exponents on the
dimensions and the exponents on the numbers very clear; I commend it
to everyone in this discussion.  The two things (a number of "square
root volts" and a number of "volts to an exponent") are different
things, unless you are simply trying to represent a ground number as
an expression!

					Dave Barton <*>
					dlb@averstar.com )0(
					http://www.averstar.com/~dlb


From dpt@math.harvard.edu Tue Feb 13 19:01:25 2001 Date: Tue, 13 Feb 2001 14:01:25 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: A sample revised prelude for numeric classes
On Mon, Feb 12, 2001 at 12:26:35AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> I must say I like it. It has a good balance between generality and
> usefulness / convenience.
> 
> Modulo a few details, see below.
> 
> > > class (Num a, Additive b) => Powerful a b where
> > >     (^) :: a -> b -> a
> > > instance (Num a) => Powerful a (Positive Integer) where
> > >     a ^ 0 = one
> > >     a ^ n = reduceRepeated (*) a n
> > > instance (Fractional a) => Powerful a Integer where
> > >     a ^ n | n < 0 = recip (a ^ (negate n))
> > >     a ^ n         = a ^ (positive n)
> 
> I don't like the fact that there is no Powerful Integer Integer.
> Since the definition on negative exponents really depends on the first
> type but can be polymorphic wrt. any Integral exponent, I would make
> other instances instead:
> 
> instance RealIntegral b          => Powerful Int       b
> instance RealIntegral b          => Powerful Integer   b
> instance (Num a, RealIntegral b) => Powerful (Ratio a) b
> instance                            Powerful Float     Int
> instance                            Powerful Float     Integer
> instance                            Powerful Float     Float
> instance                            Powerful Double    Int
> instance                            Powerful Double    Integer
> instance                            Powerful Double    Double

OK, I'm slow.  I finally understand your point here.  I might leave
off a few cases, and simplify this to

instance Powerful Int Int
instance Powerful Integer Integer
instance (Num a, SmallIntegral b) => Powerful (Ratio a) b
instance Powerful Float Float
instance Powerful Double Double
instance Powerful Complex Complex

(where "SmallIntegral" is a class that contains toInteger; "small" in
the sense that it fits inside an Integer.)  All of these call one of 3
functions:
  positivePow :: (Num a, SmallIntegral b) => a -> b -> a
  integerPow :: (Fractional a, SmallIntegral b) => a -> b -> a
  analyticPow :: (Floating a) => a -> a -> a
(These 3 functions might be in a separate module from the Prelude.)
Consequences: you cannot, e.g., raise a Double to an Integer power
without an explicit conversion or calling a different function (or
declaring your own instance).  Is this acceptable?  I think it might
be: after all, you can't multiply a Double by an Integer either...
You then have one instance declaration per type, just as for the other
classes.
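One possible shape for these three helpers, sketched with the standard Prelude's Integral standing in for SmallIntegral (the bodies are illustrative, not the actual proposed definitions):

```haskell
-- Repeated squaring: O(log n) multiplications for a non-negative exponent.
positivePow :: (Num a, Integral b) => a -> b -> a
positivePow a n
  | n < 0     = error "positivePow: negative exponent"
  | otherwise = go 1 a n
  where
    go acc _ 0 = acc                               -- done: accumulated product
    go acc x k
      | even k    = go acc     (x*x) (k `div` 2)   -- square the base
      | otherwise = go (acc*x) (x*x) (k `div` 2)   -- fold in an odd factor

-- Negative exponents handled via recip, hence the Fractional constraint.
integerPow :: (Fractional a, Integral b) => a -> b -> a
integerPow a n
  | n < 0     = recip (positivePow a (negate n))
  | otherwise = positivePow a n

-- The analytic power, defined only where exp/log make sense.
analyticPow :: (Floating a) => a -> a -> a
analyticPow = (**)
```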

Opinions?  I'm still not very happy.

Best,
	Dylan Thurston



From qrczak@knm.org.pl Tue Feb 13 19:47:09 2001 Date: 13 Feb 2001 19:47:09 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: A sample revised prelude for numeric classes
Tue, 13 Feb 2001 14:01:25 -0500, Dylan Thurston <dpt@math.harvard.edu> writes:

> Consequences: you cannot, e.g., raise a Double to an Integer power
> without an explicit conversion or calling a different function (or
> declaring your own instance).  Is this acceptable?

I don't like it: (-3::Double)^2 should be 9, and generally x^(2::Integer)
should be x*x for all types of x where it makes sense. Same for Int.

(**) does not work for negative base. Neither of (^) and (**) is
a generalization of the other: the knowledge that an exponent is
restricted to integers widens the domain of the base.
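The asymmetry can be seen directly with the standard Prelude: (^) handles a negative base because the exponent is known to be an integer, while the class default for (**), exp (log x * y), fails there:

```haskell
-- (^) with an integral exponent works for a negative base:
intPow :: Double
intPow = (-3) ^ (2 :: Integer)      -- 9.0

-- The class default for (**) goes through log, so a negative base
-- produces NaN (written out here rather than relying on any
-- particular instance's implementation of (**)):
defaultPow :: Double -> Double -> Double
defaultPow x y = exp (log x * y)    -- NaN when x < 0
```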

x^2 = x*x cannot actually work for any x in Num, or whatever the class
of (*) is called, if (^) is not defined inside the same class. This
is because (^) is unified with (^^): the unified (^) should use recip
if available, but be partially defined without it if it's not available.

So I propose to put (^) together with (*). With a default definition
of course. It means "apply (*) the specified number of times", and
for fractional types has a meaning extended to negative exponents.
(^) is related to (*) as discussed times or scale is related to (+).

(**):: a -> a -> a, together with other analytic functions. Sorry,
the fact that they are written the same in conventional math is not
enough to force their unification against technical reasons. It's
not bad: we succeeded in unification of (^) and (^^).

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From dpt@math.harvard.edu Tue Feb 13 23:32:21 2001 Date: Tue, 13 Feb 2001 18:32:21 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Revised numerical prelude, version 0.02
--dDRMvlgZJXvWKvBx
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Here's a revision of the numerical prelude.  Many thanks to all who
helped.  Changes include:

* Removed "Powerful", replacing it with (^) in Num and (**) in Real.
* Fixed numerous typos
* Removed gcd and co. from Integral
* Added shortcomings & limitation of scope
* Added SmallIntegral, SmallReal
* wrote skeleton VectorSpace, PowerSeries
* Added framework to make it run under hugs.  There are some usability issues.

Any comments welcome!

Best,
	Dylan Thurston

--dDRMvlgZJXvWKvBx
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="NumPrelude.lhs"

Revisiting the Numeric Classes
------------------------------
The Prelude for Haskell 98 offers a well-considered set of numeric
classes which cover the standard numeric types (Integer, Int,
Rational, Float, Double, Complex) quite well.  But they offer limited
extensibility and have a few other flaws.  In this proposal we will
revisit these classes, addressing the following concerns:

(1) The current Prelude defines no semantics for the fundamental
    operations.  For instance, presumably addition should be
    associative (or come as close as feasible), but this is not
    mentioned anywhere.

(2) There are some superfluous superclasses.  For instance, Eq and
    Show are superclasses of Num.  Consider the data type

  data IntegerFunction a = IF (a -> Integer)

    One can reasonably define all the methods of Num for
    IntegerFunction a (satisfying good semantics), but it is
    impossible to define non-bottom instances of Eq and Show.

    In general, superclass relationship should indicate some semantic
    connection between the two classes.

(3) In a few cases, there is a mix of semantic operations and
    representation-specific operations.  toInteger, toRational, and
    the various operations in RealFloating (decodeFloat, ...) are the
    main examples.

(4) In some cases, the hierarchy is not finely-grained enough:
    operations that are often defined independently are lumped
    together.  For instance, in a financial application one might want
    a type "Dollar", or in a graphics application one might want a
    type "Vector".  It is reasonable to add two Vectors or Dollars,
    but not, in general, reasonable to multiply them.  But the
    programmer is currently forced to define a method for (*) when she
    defines a method for (+).

In specifying the semantics of type classes, I will state laws as
follows:
  (a + b) + c === a + (b + c)
The intended meaning is extensional equality: the rest of the program
should behave in the same way if one side is replaced with the
other.  Unfortunately, the laws are frequently violated by standard
instances; the law above, for instance, fails for Float:

  (100000000000000000000.0 + (-100000000000000000000.0)) + 1.0 = 1.0
  100000000000000000000.0 + ((-100000000000000000000.0) + 1.0) = 0.0

Thus these laws should be interpreted as guidelines rather than
absolute rules.  In particular, the compiler is not allowed to use
them.  Unless stated otherwise, default definitions should also be
taken as laws.
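The failure above can be reproduced directly (here at Double precision, where the same absorption of the 1.0 happens):

```haskell
-- The two groupings of the same sum from the text:
leftAssoc, rightAssoc :: Double
leftAssoc  = (1e20 + (-1e20)) + 1.0   -- the big terms cancel first: 1.0
rightAssoc = 1e20 + ((-1e20) + 1.0)   -- 1.0 is absorbed into -1e20: 0.0
```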

This version is fairly conservative.  I have retained the names for
classes with similar functions as far as possible, I have not made
some distinctions that could reasonably be made, and I have tried to
opt for simplicity over generality.

Thanks to Brian Boutel, Joe English, William Lee Irwin III, Marcin
Kowalczyk, and Ken Shan for helpful comments.

Scope & Limitations/TODO:
* It might be desirable to split Ord up into Poset and Ord (a total
  ordering).  This is not addressed here.

* In some cases, this hierarchy is not fine-grained enough.  For
  instance, time spans ("5 minutes") can be added to times ("12:34"),
  but two times are not addable.  ("12:34 + 8:23"??)  As it stands,
  users have to use a different operator for adding time spans to times
  than for adding two time spans.  Similar issues arise for vector space
  et al.  This is a consciously-made tradeoff, but might be changed.

  This becomes most serious when dealing with quantities with units
  like length/distance^2, for which (*) as defined here is useless,
  but Haskell's type system doesn't seem to be strong enough to deal
  with those in any convenient way.

  [One way to see the issue: should
    f x y = iterate (x *) y
  have principal type
    (Num a) => a -> a -> [a]
  or something like
    (Num a, Module a b) => a -> b -> [b]
  ?]

* I stuck with the Haskell 98 names.  In some cases I find them
  lacking.  Given free rein and not worrying about backwards
  compatibility, I might rename the classes as follows:
    Num           --> Ring
    Floating      --> Analytic
    RealFloat     --> RealAnalytic

* I'm not happy with Haskell's current treatment of numeric literals.
  I'm particularly unhappy with their use in pattern matching.  I feel
  like it should be a special case of some more general construction.
  I'd like to make it easier to use a non-standard Prelude, but
  there's a little too much magic.  For instance, the definition of
  round in the Haskell 98 Prelude is

    round x          =  let (n,r) = properFraction x
                             m     = if r < 0 then n - 1 else n + 1
                        in case signum (abs r - 0.5) of
                              -1 -> n
                              0  -> if even n then n else m
                              1  -> m

  I'd like to copy this over to this revised library.  But the numeric
  constants have to be wrapped in explicit calls to fromInteger.
  Worse, the case statement must be rewritten!

> module NumPrelude where
> import qualified Prelude as P
> -- Import some standard Prelude types verbatim verbandum
> import Prelude hiding (
>        Int, Integer, Float, Double, Rational, Num(..), Real(..),
>        Integral(..), Fractional(..), Floating(..), RealFrac(..),
>        RealFloat(..), subtract, even, odd,
>        gcd, lcm, (^), (^^))
>		 
>
> infixr 8  ^, **
> infixl 7  *
> infixl 7 /, `quot`, `rem`, `div`, `mod`
> infixl 6  +, -
>
> class Additive a where
>     (+), (-) :: a -> a -> a
>     negate   :: a -> a
>     zero     :: a
>
>      -- Minimal definition: (+), zero, and (negate or (-))
>     negate a = zero - a
>     a - b    = a + (negate b)

Additive a encapsulates the notion of a commutative group, specified
by the following laws:

           a + b === b + a
     (a + b) + c === a + (b + c)
        zero + a === a
  a + (negate a) === zero

Typical examples include integers, dollars, and vectors.

> class (Additive a) => Num a where
>     (*)         :: a -> a -> a
>     one	  :: a
>     fromInteger :: Integer -> a
>     (^)         :: (SmallIntegral b) => a -> b -> a
>
>       -- Minimal definition: (*), (one or fromInteger)
>     fromInteger n | n < 0 = negate (fromInteger (-n))
>     fromInteger n | n >= 0 = reduceRepeated (+) zero one n
>     a ^ n | n < zero = error "Illegal negative exponent"
>           | True  = reduceRepeated (*) one a (toInteger n)
>     one = fromInteger 1

Num encapsulates the mathematical structure of a (not necessarily
commutative) ring, with the laws

  a * (b * c) === (a * b) * c
      one * a === a
      a * one === a
  a * (b + c) === a * b + a * c

Typical examples include integers, matrices, and quaternions.

"reduceRepeated op a0 a n" is an auxiliary function that, for an
associative operation "op", computes the same value as

  reduceRepeated op a0 a n = foldr op a0 (replicate (fromInteger n) a)

but applies "op" O(log n) times and works for large n.  A sample
implementation is below:

> reduceRepeated :: (a -> a -> a) -> a -> a -> Integer -> a
> reduceRepeated op a0 a n
>                   | n == 0 = a0
>                   | even n = reduceRepeated op a0 (op a a) (div n 2)
>                   | True   = reduceRepeated op (op a0 a) (op a a) (div n 2)
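As a quick check, the same definition runs unchanged over the standard Prelude (using its even and div), computing powers with O(log n) multiplications and products by repeated addition:

```haskell
-- Binary reduction of an associative operation, as in the text:
reduceRepeated :: (a -> a -> a) -> a -> a -> Integer -> a
reduceRepeated op a0 a n
  | n == 0    = a0
  | even n    = reduceRepeated op a0        (op a a) (n `div` 2)
  | otherwise = reduceRepeated op (op a0 a) (op a a) (n `div` 2)
```

For example, reduceRepeated (*) 1 2 10 is 2^10 = 1024, and reduceRepeated (+) 0 5 7 is 5*7 = 35.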

> class (Num a) => Integral a where
>     div, mod :: a -> a -> a
>     divMod :: a -> a -> (a,a)
>
>      -- Minimal definition: divMod or (div and mod)
>     div a b = let (d,_) = divMod a b in d
>     mod a b = let (_,m) = divMod a b in m
>     divMod a b = (div a b, mod a b)

Integral corresponds to a commutative ring, where "a mod b" picks a
canonical element of the equivalence class of "a" in the ideal
generated by "b".  Div and mod satisfy the laws

                        a * b === b * a
(a `div` b) * b + (a `mod` b) === a
              (a+k*b) `mod` b === a `mod` b
                    0 `mod` b === 0
                    a `mod` 0 === a

Typical examples of Integral include integers and polynomials over a
field.  Note that for a field, there is a canonical instance defined
by the above rules; e.g.,

  instance Integral Rational where
      divMod a 0 = (a,undefined)
      divMod a b = (0,a/b)

> class (Num a) => Fractional a where
>     (/)          :: a -> a -> a
>     recip        :: a -> a
>     fromRational :: Rational -> a
>
>      -- Minimal definition: recip or (/)
>     recip a = one / a
>     a / b = a * (recip b)
>     fromRational r = fromInteger (numerator r) / fromInteger (denominator r)
>     -- I'd like this next definition to be legal.
>     -- It would only apply if there were an implicit instance for Num a
>     -- through Fractional a.
>  -- a ^ n | n < 0 = reduceRepeated (^) one (recip a) (negate (toInteger n))
>  --       | True  = reduceRepeated (^) one a (toInteger n)

Fractional again corresponds to a commutative ring.  Division is
partially defined and satisfies

           a * b === b * a
   a * (recip a) === one

when it is defined.  To safely call division, the program must take
type-specific action; e.g., the following is appropriate in many
cases:

safeRecip :: (Integral a, Eq a, Fractional a) => a -> Maybe a
safeRecip a = let (q,r) = one `divMod` a in
    if (r == zero) then Just q else Nothing

Typical examples include rationals, the real numbers, and rational
functions (ratios of polynomials).  An instance should not typically
be declared unless most elements are invertible.

> -- Note: I think "Analytic" would be a better name than "Floating".
> class (Fractional a) => Floating a where
>     pi                  :: a
>     exp, log, sqrt      :: a -> a
>     logBase, (**)       :: a -> a -> a
>     sin, cos, tan       :: a -> a
>     asin, acos, atan    :: a -> a
>     sinh, cosh, tanh    :: a -> a
>     asinh, acosh, atanh :: a -> a
> 
>         -- Minimal complete definition:
>         --      pi, exp, log, sin, cos, sinh, cosh
>         --      asinh, acosh, atanh
>     x ** y           =  exp (log x * y)
>     logBase x y      =  log y / log x
>     sqrt x           =  x ** (fromRational 0.5)
>     tan  x           =  sin  x / cos  x
>     tanh x           =  sinh x / cosh x

Floating is the type of numbers supporting various analytic
functions.  Examples include real numbers, complex numbers, and
computable reals represented as a lazy list of rational
approximations.

Note the default declaration for a superclass.  See the comments
below, under "Instance declarations for superclasses".

The semantics of these operations are rather ill-defined because of
branch cuts, etc.

> class (Num a, Ord a) => Real a where
>     abs    :: a -> a
>     signum :: a -> a
>
>       -- Minimal definition: nothing
>     abs x    = max x (negate x)
>     signum x = case compare x zero of GT -> one
>				        EQ -> zero
>				        LT -> negate one

This is the type of an ordered ring, satisfying the laws

             a * b === b * a
     a + (max b c) === max (a+b) (a+c)
  negate (max b c) === min (negate b) (negate c)
     a * (max b c) === max (a*b) (a*c) where a >= 0

Note that abs is in a rather different place than it is in the Haskell
98 Prelude.  In particular,

  abs :: Complex -> Complex

is not defined.  To me, this seems to have the wrong type anyway;
Complex.magnitude has the correct type.

> class (Real a, Floating a) => RealFrac a where
> -- lifted directly from Haskell 98 Prelude
>     properFraction   :: (Integral b) => a -> (b,a)
>     truncate, round  :: (Integral b) => a -> b
>     ceiling, floor   :: (Integral b) => a -> b
> 
>         -- Minimal complete definition:
>         --      properFraction
>     truncate x   =  m  where (m,_) = properFraction x
>     
>     round x      =  fromInteger (
>                     let (n,r) = properFraction x
>                         m     = if r < zero then n - one else n + one
>                       in case compare (abs r - (fromRational 0.5)) zero of
>                             LT -> n
>                             EQ  -> if even n then n else m
>                             GT  -> m
>                     )
>     
>     ceiling x      =  fromInteger (if r > zero then n + one else n)
>                       where (n,r) = properFraction x
>     
>     floor x        =  fromInteger (if r < zero then n - one else n)
>                       where (n,r) = properFraction x

As an aside, let me note the similarities between "properFraction x"
and "x divMod 1" (if that were defined).  In particular, it might make
sense to unify the rounding modes somehow.
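The analogy can be spot-checked with the standard Prelude's Rational, where everything is exact (for positive arguments properFraction's truncation and divMod's flooring agree):

```haskell
import Data.Ratio ((%))

-- properFraction on 15/4 behaves like 15 `divMod` 4 scaled by 1:
pf :: (Integer, Rational)
pf = properFraction (15 % 4)   -- (3, 3 % 4)

dm :: (Integer, Integer)
dm = 15 `divMod` 4             -- (3, 3)
```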

> class (RealFrac a, Floating a) => RealFloat a where
>     atan2            :: a -> a -> a
> {- This needs lots of fromIntegral wrapping.
>     atan2 y x
>       | x>0           =  atan (y/x)
>       | x==0 && y>0   =  pi/2
>       | x<0  && y>0   =  pi + atan (y/x) 
>       |(x<=0 && y<0)  = -atan2 (-y) x
>       | y==0 && x<0   =  pi    -- must be after the previous test on zero y
>       | x==0 && y==0  =  y     -- must be after the other double zero tests
> -}

(Note that I removed the IEEEFloat-specific calls here, so probably
nobody will actually use this default definition.)

> class (Real a, Integral a) => RealIntegral a where
>     quot, rem        :: a -> a -> a   
>     quotRem          :: a -> a -> (a,a)
>
>       -- Minimal definition: nothing required
>     quot a b = let (q,_) = quotRem a b in q
>     rem a b  = let (_,r) = quotRem a b in r
>     quotRem a b = let (d,m) = divMod a b in
>                    if (signum d < (fromInteger 0)) then
>	                   (d+(fromInteger 1),m-b) else (d,m)

Remember that divMod does not specify exactly what a `quot` b should
be, mainly because there is no sensible way to define it in general.
For an instance of RealIntegral a, it is expected that a `div` b will
round towards minus infinity and a `quot` b will round towards 0.

> class (Real a) => SmallReal a where
>     toRational :: a -> Rational
> class (SmallReal a, RealIntegral a) => SmallIntegral a where
>     toInteger :: a -> Integer

These two classes exist to allow convenient conversions, primarily
between the built-in types.  These classes are "small" in the sense
that they can be converted to integers (resp. rationals) without loss
of information.  They should satisfy

    fromInteger . toInteger === id
  fromRational . toRational === id
     toRational . toInteger === toRational
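These laws can be spot-checked with the Haskell 98 Prelude's versions of the conversion functions, standing in for the class methods above (toRational on Double is exact, so the round trip loses nothing):

```haskell
-- fromInteger . toInteger === id, at type Int:
lawInteger :: Int -> Bool
lawInteger n = fromInteger (toInteger n) == n

-- fromRational . toRational === id, at type Double:
lawRational :: Double -> Bool
lawRational x = fromRational (toRational x) == x
</```haskell>
```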

> --- Numerical functions
> subtract         :: (Additive a) => a -> a -> a
> subtract         =  flip (-)
>
> even, odd        :: (Eq a, Integral a) => a -> Bool
> even n           =  n `mod` (one + one) == zero
> odd              =  not . even

Additional standard libraries might include IEEEFloat (including the
bulk of the functions in Haskell 98's RealFloat class), VectorSpace,
Ratio, and Lattice.

> -- Support functions so that this whole thing can be tested on top
> -- of a standard prelude.
> -- Alternative: use "newtype".
> type Integer = P.Integer
> type Int = P.Int
> type Float = P.Float
> type Double = P.Double
> type Rational = P.Rational -- This one is lame.

> instance Additive P.Integer where
>     (+)    = (P.+)
>     zero   = 0
>     negate = P.negate
> instance Num P.Integer where
>     (*)    = (P.*)
>     one    = 1
> instance Integral P.Integer where
>     divMod = P.divMod
> instance Real P.Integer
> instance RealIntegral P.Integer
> instance SmallReal P.Integer where
>     toRational = P.toRational
> instance SmallIntegral P.Integer where
>     toInteger = id

> data T a = T a
--dDRMvlgZJXvWKvBx
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="VectorSpace.lhs"

> module VectorSpace where
> import NumPrelude
> import qualified Prelude
>
> -- Is this right?
> infixl 7 *>, <*
>
> class (Num a, Additive b) => Module a b where
>     (*>) :: a -> b -> b

A module over a ring satisfies:

   a *> (b + c) === a *> b + a *> c
   (a * b) *> c === a *> (b *> c)
   (a + b) *> c === a *> c + b *> c

For instance, the following function can be used to define any
Additive as a module over Integer:

> integerMultiply :: (SmallIntegral a, Additive b) => a -> b -> b
> integerMultiply a b = reduceRepeated (+) zero b (toInteger a)

There are no instance declarations by default, since they would
overlap with too many other instances and would be slower than
desired.

> class (Num a, Additive b) => RightModule a b where
>     (<*) :: b -> a -> b

> class (Fractional a, Additive b) => VectorSpace a b

> class (VectorSpace a b) => DivisibleSpace a b where
>     (</>) :: b -> b -> a

DivisibleSpace is used for free one-dimensional vector spaces.  It
satisfies

  (a </> b) *> b === a

Examples include dollars and kilometers.
--dDRMvlgZJXvWKvBx
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="PowerSeries.lhs"

> module PowerSeries where
> import NumPrelude
> import qualified Prelude as P
> import VectorSpace
> import Prelude hiding (
>        Int, Integer, Float, Double, Rational, Num(..), Real(..),
>        Integral(..), Fractional(..), Floating(..), RealFrac(..),
>        RealFloat(..), subtract, even, odd,
>        gcd, lcm, (^), (^^))

Power series, either finite or unbounded.  (zipWith does exactly the
right thing to make it work almost transparently.)
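The multiplication rule used below can be sketched standalone over plain lists with the standard Prelude (addPS pads the shorter series, unlike zipWith, so finite series keep their trailing coefficients):

```haskell
-- Plain-list power series, coefficients in increasing degree.
addPS :: Num a => [a] -> [a] -> [a]
addPS (x:xs) (y:ys) = x + y : addPS xs ys
addPS xs     []     = xs               -- keep the longer tail
addPS []     ys     = ys

-- (a + x*as) * bs = a*bs + x*(as*bs), written out on the head/tail:
mulPS :: Num a => [a] -> [a] -> [a]
mulPS (a:as) bs@(b:bbs) = a*b : addPS (map (a *) bbs) (mulPS as bs)
mulPS _      _          = []
```

For instance, mulPS [1,1] [1,1] is [1,2,1], i.e. (1+x)^2 = 1 + 2x + x^2.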

> newtype PowerSeries a = PS [a] deriving (Eq, Ord, Show)
> stripPS (PS l) = l
> truncatePS :: Int -> PowerSeries a -> PowerSeries a
> truncatePS n (PS a) = PS (take n a)

Note that the derived instances only make sense for finite series.

> instance (Additive a) => Additive (PowerSeries a) where
>     negate (PS l) = PS (map negate l)
>     (PS a) + (PS b) = PS (zipWith (+) a b)
>     zero = PS (repeat zero)
>
> instance (Num a) => Num (PowerSeries a) where
>     one = PS (one:repeat zero)
>     fromInteger n = PS (fromInteger n : repeat zero)
>     PS (a:as) * PS (b:bs) = PS ((a*b):stripPS (a *> PS bs + PS as*PS (b:bs)))
>     PS _ * PS _ = PS []
>
> instance (Num a) => Module a (PowerSeries a) where
>     a *> (PS bs) = PS (map (a *) bs)

It would be nice to also provide:

  instance (Module a b) => Module a (PowerSeries b) where
      a *> (PS bs) = PS (map (a *>) bs)

maybe with

  instance (Num a) => Module a a where
      (*>) = (*)

> instance (Integral a) => Integral (PowerSeries a) where
>     divMod a b = (\(x,y)-> (PS x, PS y)) (unzip (aux a b))
>        where aux (PS (a:as)) (PS (b:bs)) =
>                 let (d,m) = divMod a b in
>                 (d,m):aux (PS as - d *> (PS bs)) (PS (b:bs))
>              aux _ _ = []


--dDRMvlgZJXvWKvBx--


From wli@holomorphy.com Wed Feb 14 00:20:01 2001 Date: Tue, 13 Feb 2001 16:20:01 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Primitive types and Prelude shenanigans
On Mon, 12 Feb 2001, William Lee Irwin III wrote:
>>> I'd also like to see where some of the magic behind the typing of
>>> various other built-in constructs happens, like list comprehensions,
>>> tuples, and derived classes.

On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote:
>> Inside the compiler, not in libraries.

On Mon, Feb 12, 2001 at 02:38:25PM -0800, William Lee Irwin III wrote:
> I had in mind looking within the compiler, actually. Where in the
> compiler? It's a big program, it might take me a while to do an
> uninformed search. I've peeked around a little bit and not gotten
> anywhere.

If anyone else is pursuing thoughts along the same lines as I am (and I
have suspicions), TysWiredIn.lhs appears quite relevant to the set of
primitive data types, though there is no obvious connection to the
module issue (PrelBase.Bool vs. Foo.Bool). PrelMods.lhs appears to shed
more light on that issue in particular. $TOP/ghc/compiler/prelude/ was
the gold mine I encountered.

In DsExpr.lhs, I found:
] \subsection[DsExpr-literals]{Literals}
] ...
] We give int/float literals type @Integer@ and @Rational@, respectively.
] The typechecker will (presumably) have put \tr{from{Integer,Rational}s}
] around them.

and following this pointer, I found TcExpr.lhs (lines 213ff) had more
material of interest.

While I can't say I know how to act on these "discoveries" (esp. since
I don't really understand OverloadedIntegral and OverloadedFractional's
treatment(s) yet), perhaps this might be useful to others interested in
ideas along the same lines as mine.

Happy hacking,
Bill


From qrczak@knm.org.pl Wed Feb 14 05:08:16 2001 Date: 14 Feb 2001 05:08:16 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Revised numerical prelude, version 0.02
Tue, 13 Feb 2001 18:32:21 -0500, Dylan Thurston <dpt@math.harvard.edu> writes:

>   I'd like to copy this over to this revised library.  But the numeric
>   constants have to be wrapped in explicit calls to fromInteger.

ghc's docs (the CVS version) say that -fno-implicit-prelude causes
numeric literals to use whatever fromInteger is in scope. AFAIR it
worked at some time, but it does not work anymore!

BTW, why not let 'import MyPrelude as Prelude' work that way?

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From andrew@andrewcooke.free-online.co.uk Wed Feb 14 17:02:24 2001 Date: Wed, 14 Feb 2001 17:02:24 +0000 From: andrew@andrewcooke.free-online.co.uk andrew@andrewcooke.free-online.co.uk Subject: Typing units correctly
Hi,

I don't know if this is useful, but in response to a link to that
article that I posted on Lambda, someone posted a link arguing that
such an approach (at least in Ada) was impractical.  To be honest, I
don't find it very convincing, but I haven't been following this
discussion in detail.  It might raise some problems you have not
considered.

Anyway, if you are interested, it's all at
http://lambda.weblogs.com/discuss/msgReader$818

Apologies if it's irrelevant or you've already seen it,
Andrew

On Mon, Feb 12, 2001 at 01:51:54PM -0500, Dylan Thurston wrote:
[...]
> The papers I could find (e.g.,
> http://citeseer.nj.nec.com/kennedy94dimension.html, "Dimension Types")
> mention extensions to ML.  I wonder if it is possible to work within
> the Haskell type system, which is richer than ML's type system.
[...]

-- 
http://www.andrewcooke.free-online.co.uk/index.html


From akenn@microsoft.com Wed Feb 14 16:10:39 2001 Date: Wed, 14 Feb 2001 08:10:39 -0800 From: Andrew Kennedy akenn@microsoft.com Subject: Typing units correctly
To be frank, the poster that you cite doesn't know what he's talking
about. He makes two elementary mistakes:

(a) attempting to encode dimension/unit checking in an existing type
system;
(b) not appreciating the need for parametric polymorphism over
dimensions/units.

As others have pointed out, (a) doesn't work because the algebra of units of
measure is not free - units form an Abelian group (if integer exponents are
used) or a vector space over the rationals (if rational exponents are used),
and so it's not possible to do unit-checking by equality-on-syntax or
unit-inference by ordinary syntactic unification. Furthermore, parametric
polymorphism is essential for code reuse - one can't even write a generic
squaring function (say) without it.

Best to ignore the poster and instead read the papers that contributors to
this
thread have cited :-)

To turn to the original question, I did once give a moment's thought to the
combination of type classes and types for units-of-measure. I don't think
there's any particular problem: units (or dimensions) are a new "sort" or
"kind", just as "row" is in various proposals for record polymorphism in
Haskell. As long as this is tracked through the type system, everything
should work out fine. Of course, I may have missed something, in which case
I'd be very interested to know about it.

- Andrew Kennedy.

> -----Original Message-----
> From: andrew@andrewcooke.free-online.co.uk
> [mailto:andrew@andrewcooke.free-online.co.uk]
> Sent: Wednesday, February 14, 2001 5:02 PM
> To: haskell-cafe@haskell.org
> Subject: Re: Typing units correctly
> 
> 
> 
> Hi,
> 
> I don't know if this is useful, but in response to a link to that
> article that I posted on Lambda, someone posted a link arguing that
> such an approach (at least in Ada) was impractical.  To be honest, I
> don't find it very convincing, but I haven't been following this
> discussion in detail.  It might raise some problems you have not
> considered.
> 
> Anyway, if you are interested, it's all at
> http://lambda.weblogs.com/discuss/msgReader$818
> 
> Apologies if it's irrelevant or you've already seen it,
> Andrew
> 
> On Mon, Feb 12, 2001 at 01:51:54PM -0500, Dylan Thurston wrote:
> [...]
> > The papers I could find (e.g.,
> > http://citeseer.nj.nec.com/kennedy94dimension.html, 
> "Dimension Types")
> > mention extensions to ML.  I wonder if it is possible to work within
> > the Haskell type system, which is richer than ML's type system.
> [...]
> 
> -- 
> http://www.andrewcooke.free-online.co.uk/index.html
> 


From dpt@math.harvard.edu Wed Feb 14 19:14:36 2001 Date: Wed, 14 Feb 2001 14:14:36 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Typing units correctly
--opJtzjQTFsWo+cga
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Wed, Feb 14, 2001 at 08:10:39AM -0800, Andrew Kennedy wrote:
> To be frank, the poster that you cite doesn't know what he's talking
> about. He makes two elementary mistakes:

Quite right, I didn't know what I was talking about.  I still don't.
But I do hope to learn.

> (a) attempting to encode dimension/unit checking in an existing type
> system;

We're probably thinking about different contexts, but please see the
attached file (below) for a partial solution.  I used Hugs' dependent
types to get type inference. This makes me uneasy, because I know that
Hugs' instance checking is, in general, not decidable; I don't know if
the fragment I use is decidable.  You can remove the dependent types,
but then you need to type all the results, etc., explicitly.  This
version doesn't handle negative exponents; perhaps what you say here:

> As others have pointed out, (a) doesn't work because the algebra of
> units of measure is not free - units form an Abelian group (if
> integer exponents are used) or a vector space over the rationals (if
> rational exponents are used) and so it's not possible to do
> unit-checking by equality-on-syntax or unit-inference by ordinary
> syntactic unification. ...

is that I won't be able to do it?

Note that I didn't write it out, but this version can accommodate
multiple units of measure.

> (b) not appreciating the need for parametric polymorphism over
> dimensions/units.
> ...  Furthermore, parametric polymorphism is
> essential for code reuse - one can't even write a generic squaring
> function (say) without it.

I'm not sure what you're getting at here; I can easily write a
squaring function in the version I wrote.  It uses ad-hoc polymorphism
rather than parametric polymorphism.  It also gives much uglier
types; e.g., the example from your paper 
  f (x,y,z) = x*x + y*y*y + z*z*z*z*z
gets some horribly ugly context:
f :: (Additive a, Mul b c d, Mul c c e, Mul e c b, Mul d c a, Mul f f a, Mul g h a, Mul h h g) => (f,h,c) -> a

Not that I recommend this solution, mind you.  I think language
support would be much better.  But specific language support for units
rubs me the wrong way: I'd much rather see a general notion of types
with integer parameters, which you're allowed to add.  This would be
useful in any number of places.  Is this what you're suggesting below?

> To turn to the original question, I did once give a moment's thought
> to the combination of type classes and types for units-of-measure. I
> don't think there's any particular problem: units (or dimensions)
> are a new "sort" or "kind", just as "row" is in various proposals
> for record polymorphism in Haskell. As long as this is tracked
> through the type system, everything should work out fine. Of course,
> I may have missed something, in which case I'd be very interested to
> know about it.

Incidentally, I went and read your paper just now.  Very interesting.
You mentioned one problem came up that sounds interesting: to give a
nice member of the equivalence class of the principal type.  This
boils down to picking a nice basis for a free Abelian group with a few
distinguished elements.  Has any progress been made on that?

Best,
	Dylan Thurston

--opJtzjQTFsWo+cga
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="dim3.hs"

module Dim3 where
default (Double)
infixl 7 ***
infixl 6 +++

data Zero = Zero
data Succ x = Succ x

class Peano a where
  value :: a -> Int
  element :: a
instance Peano Zero where
  value Zero = 0 ; element = Zero
instance (Peano a) => Peano (Succ a) where
  value (Succ x) = value x + 1 ; element = Succ element

class (Peano a, Peano b, Peano c) => PeanoAdd a b c | a b -> c
instance (Peano a) => PeanoAdd Zero a a
instance (PeanoAdd a b c) => PeanoAdd (Succ a) b (Succ c)

data (Peano a) => Dim a b = Dim a b deriving (Eq)

class Mul a b c | a b -> c where (***) :: a -> b -> c
instance Mul Double Double Double where (***) = (*)
instance (Mul a b c, PeanoAdd d e f) => Mul (Dim d a) (Dim e b) (Dim f c) where
  (Dim _ a) *** (Dim _ b) = Dim element (a *** b)
instance (Show a, Peano b) => Show (Dim b a) where
  show (Dim b a) = show a ++ " d^" ++ show (value b)

class Additive a where
  (+++) :: a -> a -> a
  zero :: a
instance Additive Double where
  (+++) = (+) ; zero = 0
instance (Peano a, Additive b) => Additive (Dim a b) where
  Dim a b +++ Dim c d = Dim a (b+++d)
  zero = Dim element zero

scalar :: Double -> Dim Zero Double
scalar x = Dim Zero x
unit = scalar 1.0
d = Dim (Succ Zero) 1.0

f (x,y,z) = x***x +++ y***y***y +++ z***z***z***z***z

--opJtzjQTFsWo+cga--
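A minimal, self-contained cut of the attachment, showing the squaring
function discussed above (the names sqr and d1 are mine; modern GHC wants
the extension pragmas shown, where 2001-era Hugs used its -98 mode):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, FlexibleContexts, UndecidableInstances #-}
-- Type-level Peano numerals carrying the unit exponent
data Zero = Zero
data Succ x = Succ x

class Peano a where
  value   :: a -> Int
  element :: a
instance Peano Zero where
  value _ = 0
  element = Zero
instance Peano a => Peano (Succ a) where
  value (Succ x) = value x + 1
  element = Succ element

-- Type-level addition of exponents, via a functional dependency
class PeanoAdd a b c | a b -> c
instance PeanoAdd Zero a a
instance PeanoAdd a b c => PeanoAdd (Succ a) b (Succ c)

data Dim a b = Dim a b

class Mul a b c | a b -> c where (***) :: a -> b -> c
instance Mul Double Double Double where (***) = (*)
instance (Mul a b c, PeanoAdd d e f, Peano f) =>
         Mul (Dim d a) (Dim e b) (Dim f c) where
  Dim _ a *** Dim _ b = Dim element (a *** b)

-- The squaring function: ad-hoc polymorphism via the Mul constraint
sqr :: Mul a a b => a -> b
sqr x = x *** x

d1 :: Dim (Succ Zero) Double   -- 2.0 with unit exponent 1
d1 = Dim (Succ Zero) 2.0
```

Squaring d1 yields a value of type Dim (Succ (Succ Zero)) Double, i.e. the
exponent has been added at the type level.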


From qrczak@knm.org.pl Wed Feb 14 21:53:16 2001 Date: 14 Feb 2001 21:53:16 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Revised numerical prelude, version 0.02
Tue, 13 Feb 2001 18:32:21 -0500, Dylan Thurston <dpt@math.harvard.edu> pisze:

> Here's a revision of the numerical prelude.

I like it!

> > class (Real a, Floating a) => RealFrac a where
> > -- lifted directly from Haskell 98 Prelude
> >     properFraction   :: (Integral b) => a -> (b,a)
> >     truncate, round  :: (Integral b) => a -> b
> >     ceiling, floor   :: (Integral b) => a -> b

These should be SmallIntegral.

> For an instance of RealIntegral a, it is expected that a `quot` b
> will round towards minus infinity and a `div` b will round towards 0.

The opposite.

> > class (Real a) => SmallReal a where
> >     toRational :: a -> Rational
> > class (SmallReal a, RealIntegral a) => SmallIntegral a where
> >     toInteger :: a -> Integer
> 
> These two classes exist to allow convenient conversions, primarily
> between the built-in types.  These classes are "small" in the sense
> that they can be converted to integers (resp. rationals) without loss
> of information.

I find names of these classes unclear: Integer is not small integral,
it's big integral (as opposed to Int)! :-)

Perhaps these classes should be called Real and Integral, with
different names for current Real and Integral. But I don't have
a concrete proposal.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From dpt@math.harvard.edu Wed Feb 14 22:20:11 2001 Date: Wed, 14 Feb 2001 17:20:11 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Revised numerical prelude, version 0.02
On Wed, Feb 14, 2001 at 09:53:16PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> Tue, 13 Feb 2001 18:32:21 -0500, Dylan Thurston <dpt@math.harvard.edu> pisze:
> > Here's a revision of the numerical prelude.
> I like it!

I'd like to start using something like this in my programs.  What are
the chances that the usability issues will be addressed?  (The main
one is all the fromInteger's, I think.)

> > > class (Real a, Floating a) => RealFrac a where
> > > -- lifted directly from Haskell 98 Prelude
> > >     properFraction   :: (Integral b) => a -> (b,a)
> > >     truncate, round  :: (Integral b) => a -> b
> > >     ceiling, floor   :: (Integral b) => a -> b
> These should be SmallIntegral.

It could be either one, since they produce the type on output (it
calls fromInteger).  I changed it, on the theory that it might be less
confusing.  But it should inherit from SmallReal.  (Oh, except then
RealFloat inherits from SmallReal, which it shouldn't have to.  Gah.)

> > For an instance of RealIntegral a, it is expected that a `quot` b
> > will round towards minus infinity and a `div` b will round towards 0.
> The opposite.

Thanks.

> > > class (Real a) => SmallReal a where
> > >     toRational :: a -> Rational
> > > class (SmallReal a, RealIntegral a) => SmallIntegral a where
> > >     toInteger :: a -> Integer
> ...
> I find names of these classes unclear: Integer is not small integral,
> it's big integral (as opposed to Int)! :-)

I agree, but I couldn't think of anything better.  I think this end of
the hierarchy (that inherits from Real) could use some more work.

RealIntegral and SmallIntegral could possibly be merged, except that
it violates the principle of not combining semantically disparate
operations in a single class.

Best,
	Dylan Thurston


From simonpj@microsoft.com Wed Feb 14 22:19:39 2001 Date: Wed, 14 Feb 2001 14:19:39 -0800 From: Simon Peyton-Jones simonpj@microsoft.com Subject: Primitive types and Prelude shenanigans
| On Mon, Feb 12, 2001 at 02:38:25PM -0800, William Lee Irwin III wrote:
| > I had in mind looking within the compiler, actually. Where in the
| > compiler? It's a big program, it might take me a while to do an
| > uninformed search. I've peeked around a little bit and not gotten
| > anywhere.
| 
| If anyone else is pursuing thoughts along the same lines as I 
| am (and I
| have suspicions), TysWiredIn.lhs appears quite relevant to the set of
| primitive data types, though there is no obvious connection to the
| module issue (PrelBase.Bool vs. Foo.Bool). PrelMods.lhs 
| appears to shed
| more light on that issue in particular. $TOP/ghc/compiler/prelude/ was
| the gold mine I encountered.

Perhaps I should add something here.

I'm very sympathetic to the idea of making it possible to do entirely
without the standard Prelude, and to substitute a Prelude of one's own.

The most immediate and painful stumbling block in Haskell 98 is that numeric
literals, like 3, turn into (Prelude.fromInt 3), where "Prelude.fromInt"
really means "the fromInt from the standard Prelude", regardless of whether
the standard Prelude is in scope.

Some while ago I modified GHC to have an extra runtime flag to let you
change this behaviour.  The effect was that 3 turns into simply (fromInt 3),
and the "fromInt" means "whatever fromInt is in scope".  The same thing
happens for
	- numeric patterns
	- n+k patterns (the subtraction is whatever is in scope)
	- negation (you get whatever "negate" is in scope, not Prelude.negate)

(Of course, this is not Haskell 98 behaviour.)  I think I managed to forget
to tell anyone of this flag.  And to my surprise I can't find it any more!
But several changes I made to make it easy are still there, so I'll
reinstate it shortly.  That should make it easy to define a new numeric
class structure.
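To make the translation concrete: under such a flag a literal would
elaborate to a call to whatever conversion function is in scope, which can
be mimicked by hand today (the Nat type and all names below are mine,
purely illustrative):

```haskell
import Prelude hiding (fromInteger)

-- A hand-written picture of the desugaring: with the flag, the literal
-- 3 :: Nat would mean (fromInteger 3) for the fromInteger in scope here.
newtype Nat = Nat Integer deriving (Eq, Show)

fromInteger :: Integer -> Nat
fromInteger n
  | n >= 0    = Nat n
  | otherwise = error "Nat literal must be non-negative"

three :: Nat
three = fromInteger 3   -- what the flag would produce for  3 :: Nat
```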


So much for numerics.  It's much less obvious what to do about booleans.
Of course, you can always define your own Bool type.  But we're going to
have to change the type that if-then-else uses, and presumably guards too.
Take if-then-else.  Currently it desugars to 
	case e of
	  True -> then-expr
	  False -> else-expr
but your new boolean might not have two constructors.  So maybe we should 
simply assume a function 	
	if :: Bool -> a -> a -> a
and use that for both if-then-else and guards....  I wonder what else?
For example, can we assume that
	f x | otherwise = e
is equivalent to
	f x = e
That is, "otherwise" is a guard that is equivalent to the boolean "true"
value.
("otherwise" might be bound to something else if you import a non-std
Prelude.)
If we don't assume this, we may generate rather bizarre code:
	f x y | x==y = e1
		| otherwise = e2

===>
	f x y = if (x==y) e1 (if otherwise e2 (error "non-exhaustive patterns for f"))

And we'll get warnings from the pattern-match compiler.  So perhaps we should
guarantee that (if otherwise e1 e2) = e1.

You may say that's obvious, but the point is that we have to specify what
can be assumed about an alien Prelude.
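A sketch of what that contract might look like for a replacement boolean
(all names below are hypothetical; if' stands in for the built-in syntax,
since "if" is not a definable identifier):

```haskell
import Prelude hiding (otherwise)

-- A replacement "boolean" need not have two constructors; the alien
-- Prelude would just have to supply an if-function and an otherwise
-- value satisfying  if' otherwise e1 e2 = e1.
data Tri = Yes | No | Unknown deriving (Eq, Show)

if' :: Tri -> a -> a -> a   -- plays the role of  if :: Bool -> a -> a -> a
if' Yes t _ = t
if' _   _ e = e

otherwise :: Tri
otherwise = Yes             -- makes the guarantee  if' otherwise e1 e2 = e1  hold
```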




Matters get even more tricky if you want to define your own lists.  
There's quite a lot of built-in syntax for lists, and type checking that
goes with it.  Last time I thought about it, it made my head hurt.
Tuples are even worse, because they constitute an infinite family.

The bottom line is this.
  a) It's desirable to be able to substitute a new prelude
  b) It's not obvious exactly what that should mean
  c) And it may not be straightforward to implement

It's always hard to know how to deploy finite design-and-implementation
resources.  Is this stuff important to a lot of people?  
If you guys can come up with a precise specification for (b), I'll
think hard about how hard (c) really is.  

Simon


From qrczak@knm.org.pl Thu Feb 15 00:01:23 2001 Date: 15 Feb 2001 00:01:23 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Primitive types and Prelude shenanigans
Wed, 14 Feb 2001 14:19:39 -0800, Simon Peyton-Jones <simonpj@microsoft.com> pisze:

> Some while ago I modified GHC to have an extra runtime flag to let
> you change this behaviour.  The effect was that 3 turns into simply
> (fromInt 3), and the "fromInt" means "whatever fromInt is in scope".

Wasn't that still fromInteger?

> I think I managed to forget to tell anyone of this flag.

I remember that it has been advertised.

> And to my surprise I can't find it any more!

Me neither. But it's still documented. It must have been lost during
some branch merging, I guess.

May I propose an alternative way of specifying an alternative Prelude?
Instead of having a command line switch, let's say that 3 always means
Prelude.fromInteger 3 - for any *module Prelude* which is in scope!

That is, one could say:
    import Prelude ()
    import MyPrelude as Prelude
IMHO it's very intuitive, unlike the -fno-implicit-prelude flag.

I see only one problem with that: inside the module MyPrelude it is
not visible as Prelude yet. But it's easy to fix. Just allow a module
to import itself!
    module MyPrelude where
    import Prelude as P
    import MyPrelude as Prelude
Now names qualified with Prelude refer to entities defined in this
very module, including implicit Prelude.fromInteger.

I don't know if such self-import should hide MyPrelude qualification or
not. I guess it should, similarly as explicit import of Prelude hides
its implicit import. That is, each module implicitly imports itself,
unless it imports itself explicitly (possibly under a different name)
- same as for Prelude.

> So much for numerics.  It's much less obvious what to do about booleans.

IMHO a natural generalization (not necessarily useful) is to follow
the definition of the 'if' syntactic sugar literally. 'if' expands
to the appropriate 'case'. So Prelude.True and Prelude.False must be
defined, and they must have the same type (otherwise we get a type
error each time we use 'if'). This would allow even
    data FancyBool a = True | False | DontKnow a

The main problem is probably the current implementation: syntactic
sugar like 'if' is typechecked prior to desugaring. The same problem
is with the 'do' notation. But I don't see conceptual dilemmas.

> For example, can we assume that
> 	f x | otherwise = e
> is equivalent to
> 	f x = e

We should not need this information except for performance and
warnings. Semantically otherwise is just a normal variable. So it
does not matter much.

A non-standard 'otherwise' is the same as the following would be today:
    foo :: Bool
    foo = True

The compiler could be improved by examining the unfolded definition to
decide whether to generate warnings, instead of relying on special
treatment of the particular qualified name.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From matth@ninenet.com Thu Feb 15 05:27:55 2001 Date: Wed, 14 Feb 2001 23:27:55 -0600 From: Matt Harden matth@ninenet.com Subject: Scalable and Continuous
Marcin 'Qrczak' Kowalczyk wrote:
> 
> I'm afraid of making too many small classes. But it would perhaps be not
> so bad if one could define superclass' methods in subclasses, so that one
> can forget about exact structure of classes and treat a bunch of classes
> as a single class if he wishes. It would have to be combined with
> compiler-inferred warnings about mutual definitions giving bottoms.

I totally agree with this.  We should be able to split up Num into many
superclasses, while still retaining the traditional Num, and not
inconveniencing anybody currently using Num.  We could even put the
superclasses into Library modules, so as not to "pollute" the standard
Prelude's namespace.  The Prelude could import those modules, then
define Num and Num's instances, and only export the Num stuff.

We shouldn't have to be afraid of making too many classes, if that more
precisely reflects reality.  It is only the current language definition
that makes us afraid of this.  We should be able to work with a class,
subclass it, and define instances of it, without needing to know about
all of its superclasses.  This is certainly true in OOP, although I
realize of course that OOP classes are not Haskell classes.

I also wonder: should one be allowed to create new superclasses of an
existing class without updating the original class's definition?  Also,
should the subclass be able to create new default definitions for
functions in the superclasses?  I think it should; such defaults would
only be legal if the superclass did not define a default for the same
function.

What do you mean by mutual definitions?

Matt Harden


From qrczak@knm.org.pl Thu Feb 15 07:44:40 2001 Date: 15 Feb 2001 07:44:40 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Scalable and Continuous
Wed, 14 Feb 2001 23:27:55 -0600, Matt Harden <matth@ninenet.com> pisze:

> I also wonder: should one be allowed to create new superclasses of an
> existing class without updating the original class's definition?

It would not buy anything. You could not make use of the superclass
in default definitions anyway (because they are already written).
And what would happen to types which are instances of the subclass
but not of the new superclass?

> Also, should the subclass be able to create new default definitions
> for functions in the superclasses?

I hope the system can be designed such that it can.

> such defaults would only be legal if the superclass did not define
> a default for the same function.

Not necessarily. For example, (^) in Num (of the revised Prelude) has a
default definition, but Fractional gives the opportunity to define a
better (^) in terms of other methods. When a type is an instance of
Fractional, in practice it should always get Fractional's (^); when not,
Num's (^) is always appropriate.
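The situation can be sketched with two standalone functions (the names
powNum and powFrac are mine; in the revised Prelude these would be the
class default and the subclass's preferred definition):

```haskell
-- powNum: the generic default, valid for any Num, but only for n >= 0
powNum :: Num a => a -> Int -> a
powNum x n
  | n >= 0    = product (replicate n x)
  | otherwise = error "powNum: negative exponent"

-- powFrac: the better definition Fractional makes possible, extending
-- the same operation to negative exponents via recip
powFrac :: Fractional a => a -> Int -> a
powFrac x n
  | n >= 0    = powNum x n
  | otherwise = recip (powNum x (negate n))
```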

I had many cases like this when trying to design a container class
system. It's typical that a more specialized class has something
generic as a superclass, and that a more generic function can easily
be expressed in terms of specialized functions (but not vice versa).
It follows that many kinds of types have the same written definition
for a method, which cannot be put in the default definition in the
class because it needs a more specialized context.

It would be very convenient to be able to do that, but it can't be a very
clean design: it relies on the absence of an instance, a negative
constraint. Hopefully it will be OK, since it's determined once for a
type - it's not a systematic way of parametrizing code over negatively
constrained types, which would break the principle that additional
instances are harmless to old code.

This design does have some problems. For example, what if there are two
subclasses which define the default method in incompatible ways?
We should design the system such that adding a non-conflicting instance
does not break previously written code. It must be resolved once per
module, probably complaining about the ambiguity (ugh!), but once
the instance is generated, it's cast in stone for this type.

> What do you mean by mutual definitions?

Definitions of methods in terms of each other. Suppose there is a
class having only (-) and negate, with default definitions:
    a - b = a + negate b
    negate b = zero - b
When we make an instance of its subclass but don't make an explicit
instance of this class and don't write (-) or negate explicitly,
it would be dangerous if the compiler silently included definitions
generated by the above, because both are functions which always
return bottoms.
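Concretely (class and method names invented for this sketch; a Num
superclass is assumed to supply (+), negate and the literal 0):

```haskell
-- Two defaults defined in terms of each other: fine if an instance
-- overrides at least one of them, a silent infinite loop if it
-- overrides neither.
class Num a => Subtract a where
  minus :: a -> a -> a
  neg   :: a -> a
  minus a b = a + neg b     -- default via neg
  neg   b   = 0 `minus` b   -- default via minus

instance Subtract Double where
  neg = negate              -- overriding one method breaks the cycle

instance Subtract Int       -- overrides neither: its minus and neg are bottom
```

The Int instance compiles without complaint, which is exactly the danger
described above; a warning would have to come from the compiler noticing
that both methods were instantiated from the mutually recursive defaults.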

The best solution I can think of is to let the compiler deduce that
these default definitions lead to a useless instance, and give a
warning when both are instantiated from the default. It cannot be an
error because there is no formal way we can distinguish bad mutual
recursion from good mutual recursion. The validity of the code cannot
depend on heuristics, but warnings can. There are already warnings
when a method without default is not defined explicitly (although
people say it should be an error; it is diagnosable).

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From koen@cs.chalmers.se Thu Feb 15 09:07:41 2001 Date: Thu, 15 Feb 2001 10:07:41 +0100 (MET) From: Koen Claessen koen@cs.chalmers.se Subject: Primitive types and Prelude shenanigans
Simon Peyton-Jones wrote:

 | I'm very sympathetic to the idea of making it possible
 | to do entirely without the standard Prelude, and to
 | substitute a Prelude of one's own.

I think this is a very good idea.

 | Some while ago I modified GHC to have an extra runtime
 | flag to let you change this behaviour.  The effect was
 | that 3 turns into simply (fromInt 3), and the
 | "fromInt" means "whatever fromInt is in scope".

Hmmm... so how about:

  foo fromInt = 3

Would this translate to:

  foo f = f 3

? How about alpha renaming?

 | [...] guarantee that (if otherwise e1 e2) = e1.

I do not understand this. "otherwise" is simply a function
name, that can be used, redefined or hidden, by anyone. It
is not used in any desugaring. Why change that behaviour?

 | It's always hard to know how to deploy finite
 | design-and-implementation resources. Is this stuff
 | important to a lot of people?

I think it is important to define a minimalistic Prelude, so
that people at least know what is standard and what is not.
Try to put everything else in modules.

/Koen.

--
Koen Claessen         http://www.cs.chalmers.se/~koen
phone:+46-31-772 5424      mailto:koen@cs.chalmers.se
-----------------------------------------------------
Chalmers University of Technology, Gothenburg, Sweden



From Malcolm.Wallace@cs.york.ac.uk Thu Feb 15 10:50:01 2001 Date: Thu, 15 Feb 2001 10:50:01 +0000 From: Malcolm Wallace Malcolm.Wallace@cs.york.ac.uk Subject: Revised numerical prelude, version 0.02
Dylan Thurston writes:

> I'd like to start using something like this in my programs.  What are
> the chances that the usability issues will be addressed?  (The main
> one is all the fromInteger's, I think.)

Have you tried using your alternative Prelude with nhc98?  Offhand,
I couldn't be certain it would work, but I think nhc98 probably makes
fewer assumptions about the Prelude than ghc.

You will need something like

    import qualified Prelude as NotUsed
    import Dylan'sPrelude as Prelude

in any module that wants to use your prelude.  IIRC, nhc98 treats
'fromInteger' exactly as the qualified name 'Prelude.fromInteger',
so in theory it should simply pick up your replacement definitions.
(In practice, it might actually do the resolution of module 'as'
renamings a little too early or late, but the easiest way to find
out for certain is to try it.)

Regards,
    Malcolm


From simonmar@microsoft.com Thu Feb 15 10:54:24 2001 Date: Thu, 15 Feb 2001 10:54:24 -0000 From: Simon Marlow simonmar@microsoft.com Subject: Primitive types and Prelude shenanigans
> (Of course, this is not Haskell 98 behaviour.)   I think I managed to forget
> to tell anyone of this flag.  And to my surprise I can't find it any more!

It's lumped in with -fno-implicit-prelude, but the extra functionality
isn't supported in 4.08.2 (but hopefully will be in 5.00).

Cheers,
	Simon


From akenn@microsoft.com Thu Feb 15 15:18:14 2001 Date: Thu, 15 Feb 2001 07:18:14 -0800 From: Andrew Kennedy akenn@microsoft.com Subject: Typing units correctly
First, I think there's been a misunderstanding. I was referring to 
the poster ("Christoph Grein") of
    http://www.adapower.com/lang/dimension.html
when I said that "he doesn't know what he's talking about". I've 
not been following the haskell cafe thread very closely, but from 
what I've seen your (Dylan's) posts are well-informed. Sorry if 
there was any confusion.

As you suspect, negative exponents are necessary. How else would you 
give a polymorphic type to
  \ x -> 1.0/x
?

However, because of the equivalence on type schemes that's not just 
alpha-conversion, many types can be rewritten to avoid negative 
exponents, though I don't think that this is particularly desirable.
For example the type of division can be written

  / :: Real (u.v) -> Real u -> Real v

or

  / :: Real u -> Real v -> Real (u.v^-1)

where u and v are "unit" variables.

In fact, I have since solved the simplification problem mentioned 
in my ESOP paper, and it would assign the second of these two 
(equivalent) types, as it works from left to right in the type. I
guess it does boil down to choosing a nice basis; more precisely
it corresponds to the Hermite Normal Form from the theory of 
integer matrices (more generally: modules over commutative rings).

For more detail see my thesis, available from

  http://research.microsoft.com/users/akenn/papers/index.html

By the way, type system pathologists might be interested to know
that the algorithm described in ESOP'94 doesn't actually work
without an additional step in the rule for let (he says shamefacedly). 
Again all this is described in my thesis - but for a clearer explanation
of this issue you might want to take a look at my technical report 
"Type Inference and Equational Theories".

Which brings me to your last point: some more general system that 
subsumes the rather specific dimension/unit types system. There's been
some nice work by Martin Sulzmann et al on constraint based systems 
which can express dimensions. See 

  http://www.cs.mu.oz.au/~sulzmann/

for more details. To my taste, though, unless you want to express all
sorts of other stuff in the type system, the equational-unification-based 
approach that I described in ESOP is simpler, even with the fix for let.

I've been promising for years that I'd write up a journal-quality (and 
correct!) version of my ESOP paper including all the relevant material
from my thesis. As I have now gone so far as to promise my boss that I'll
do such a thing, perhaps it will happen :-)

- Andrew.



> -----Original Message-----
> From: Dylan Thurston [mailto:dpt@math.harvard.edu]
> Sent: Wednesday, February 14, 2001 7:15 PM
> To: Andrew Kennedy; haskell-cafe@haskell.org
> Subject: Re: Typing units correctly
> 
> 
> On Wed, Feb 14, 2001 at 08:10:39AM -0800, Andrew Kennedy wrote:
> > To be frank, the poster that you cite doesn't know what he's talking
> > about. He makes two elementary mistakes:
> 
> Quite right, I didn't know what I was talking about.  I still don't.
> But I do hope to learn.
> 
> > (a) attempting to encode dimension/unit checking in an existing type
> > system;
> 
> We're probably thinking about different contexts, but please see the
> attached file (below) for a partial solution.  I used Hugs' dependent
> types to get type inference. This makes me uneasy, because I know that
> Hugs' instance checking is, in general, not decidable; I don't know if
> the fragment I use is decidable.  You can remove the dependent types,
> but then you need to type all the results, etc., explicitly.  This
> version doesn't handle negative exponents; perhaps what you say here:
> 
> > As others have pointed out, (a) doesn't work because the algebra of
> > units of measure is not free - units form an Abelian group (if
> > integer exponents are used) or a vector space over the rationals (if
> > rational exponents are used) and so it's not possible to do
> > unit-checking by equality-on-syntax or unit-inference by ordinary
> > syntactic unification. ...
> 
> is that I won't be able to do it?
> 
> Note that I didn't write it out, but this version can accommodate
> multiple units of measure.
> 
> > (b) not appreciating the need for parametric polymorphism over
> > dimensions/units.
> > ...  Furthermore, parametric polymorphism is
> > essential for code reuse - one can't even write a generic squaring
> > function (say) without it.
> 
> I'm not sure what you're getting at here; I can easily write a
> squaring function in the version I wrote.  It uses ad-hoc polymorphism
> rather than parametric polymorphism.  It also gives much uglier
> types; e.g., the example from your paper 
>   f (x,y,z) = x*x + y*y*y + z*z*z*z*z
> gets some horribly ugly context:
> f :: (Additive a, Mul b c d, Mul c c e, Mul e c b, Mul d c a, 
> Mul f f a, Mul g h a, Mul h h g) => (f,h,c) -> a
> 
> Not that I recommend this solution, mind you.  I think language
> support would be much better.  But specific language support for units
> rubs me the wrong way: I'd much rather see a general notion of types
> with integer parameters, which you're allowed to add.  This would be
> useful in any number of places.  Is this what you're suggesting below?
> 
> > To turn to the original question, I did once give a moment's thought
> > to the combination of type classes and types for units-of-measure. I
> > don't think there's any particular problem: units (or dimensions)
> > are a new "sort" or "kind", just as "row" is in various proposals
> > for record polymorphism in Haskell. As long as this is tracked
> > through the type system, everything should work out fine. Of course,
> > I may have missed something, in which case I'd be very interested to
> > know about it.
> 
> Incidentally, I went and read your paper just now.  Very interesting.
> You mentioned one problem came up that sounds interesting: to give a
> nice member of the equivalence class of the principal type.  This
> boils down to picking a nice basis for a free Abelian group with a few
> distinguished elements.  Has any progress been made on that?
> 
> Best,
> 	Dylan Thurston
> 
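[The kind of ad-hoc Mul-class encoding discussed above can be sketched as follows. This is illustrative only, not Dylan's attached file: it assumes multi-parameter type classes with a functional dependency, which Hugs and GHC support as extensions, and all the unit names are made up.]

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- Phantom-typed quantities: the unit lives only in the type.
newtype Qty u = Qty Double deriving (Eq, Show)

-- Illustrative unit types.
data Metre
data Sec
data MetrePerSec

-- "Mul a b c" means multiplying an a-quantity by a b-quantity
-- yields a c-quantity; the functional dependency drives inference.
class Mul a b c | a b -> c

instance Mul MetrePerSec Sec Metre

qmul :: Mul a b c => Qty a -> Qty b -> Qty c
qmul (Qty x) (Qty y) = Qty (x * y)

speed :: Qty MetrePerSec
speed = Qty 3.0

time :: Qty Sec
time = Qty 2.0

-- Inference picks Qty Metre for us; a wrong unit annotation is a type error.
dist :: Qty Metre
dist = qmul speed time
```

Note that this encoding still checks units by syntactic instance selection, which is exactly the limitation Andrew points out: the free algebra of phantom types does not know that units form an Abelian group.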


From kort@wins.uva.nl Thu Feb 15 16:16:07 2001 Date: Thu, 15 Feb 2001 17:16:07 +0100 From: Jan Kort kort@wins.uva.nl Subject: framework for composing monads?
Andy Gill's Monad Template Library is good for that, but the link
from the Haskell library page is broken:

  http://www.cse.ogi.edu/~andy/monads/doc.htm

  Jan


From zulf_jafferi@hotmail.com Thu Feb 15 20:55:51 2001 Date: Thu, 15 Feb 2001 20:55:51 -0000 From: zulf jafferi zulf_jafferi@hotmail.com Subject: Downloading Hugs
hi,
     I tried to download Hugs 98. After downloading it, when I click on
the Hugs icon it gives me an error saying COULD NOT LOAD PRELUDE. I am
using Windows 2000.
I would be much obliged if you could help me solve the problem.

cheers!!

_________________________________________________________________________
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.



From konsu@microsoft.com Fri Feb 16 01:11:20 2001 Date: Thu, 15 Feb 2001 17:11:20 -0800 From: Konst Sushenko konsu@microsoft.com Subject: need help w/ monad comprehension syntax

hello,
 
i am having trouble getting my program below to work.
i think i implemented the monad methods correctly, but
the function 'g' does not type as i would expect. Hugs
thinks that it is just a list (if i remove the explicit
typing). i want it to be functionally identical to the
function 'h'.
 
what am i missing?
 
thanks
konst
 
 
> newtype State s a = ST (s -> (a,s))
>
> unST (ST m) = m
>
> instance Functor (State s) where
>     fmap f m = ST (\s -> let (a,s') = unST m s in (f a, s'))
>
> instance Monad (State s) where
>     return a = ST (\s -> (a,s))
>     m >>= f  = ST (\s -> let (a,s') = unST m s in unST (f a) s')
>
> --g :: State String Char
> g = [ x | x <- return 'a' ]
>
> h :: State String Char
> h = return 'a'
 



From Tom.Pledger@peace.com Fri Feb 16 01:22:28 2001 Date: Fri, 16 Feb 2001 14:22:28 +1300 From: Tom Pledger Tom.Pledger@peace.com Subject: need help w/ monad comprehension syntax
Konst Sushenko writes:
 | what am i missing?
 :
 | > --g :: State String Char
 | > g = [ x | x <- return 'a' ]

Hi.

The comprehension syntax used to be for monads in general (in Haskell
1.4-ish), but is now (Haskell 98) back to being specific to lists.

Does it help if you use do-notation instead?
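[Spelled out against Konst's definitions, the do-notation version looks like this. A self-contained sketch: the Applicative instance is added so it compiles under current GHC, which requires it as a superclass of Monad; it was not needed in 2001 Hugs.]

```haskell
newtype State s a = ST (s -> (a, s))

unST :: State s a -> s -> (a, s)
unST (ST m) = m

instance Functor (State s) where
    fmap f m = ST (\s -> let (a, s') = unST m s in (f a, s'))

instance Applicative (State s) where
    pure a    = ST (\s -> (a, s))
    mf <*> mx = ST (\s -> let (f, s')  = unST mf s
                              (x, s'') = unST mx s'
                          in (f x, s''))

instance Monad (State s) where
    m >>= f = ST (\s -> let (a, s') = unST m s in unST (f a) s')

-- The comprehension [ x | x <- return 'a' ] rewritten with do-notation;
-- do desugars through (>>=) and return, so it stays in the State monad:
g :: State String Char
g = do x <- return 'a'
       return x
```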

Regards,
Tom


From konsu@microsoft.com Fri Feb 16 01:43:04 2001 Date: Thu, 15 Feb 2001 17:43:04 -0800 From: Konst Sushenko konsu@microsoft.com Subject: need help w/ monad comprehension syntax
thanks, did not know that. the articles that i read are
outdated in that respect...

using the "do" notation is just fine. with the list notation
not working i thought that i misunderstand something about
monads. ;-)

konst


-----Original Message-----
From: Tom Pledger [mailto:Tom.Pledger@peace.com]
Sent: Thursday, February 15, 2001 5:22 PM
To: haskell-cafe@haskell.org
Subject: need help w/ monad comprehension syntax


Konst Sushenko writes:
 | what am i missing?
 :
 | > --g :: State String Char
 | > g = [ x | x <- return 'a' ]

Hi.

The comprehension syntax used to be for monads in general (in Haskell
1.4-ish), but is now (Haskell 98) back to being specific to lists.

Does it help if you use do-notation instead?

Regards,
Tom

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


From wli@holomorphy.com Fri Feb 16 04:56:20 2001 Date: Thu, 15 Feb 2001 20:56:20 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Primitive types and Prelude shenanigans
On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
> The most immediate and painful stumbling block in Haskell 98 is that
> numeric literals, like 3, turn into (Prelude.fromInt 3), where
> "Prelude.fromInt" really means "the fromInt from the standard Prelude"
> regardless of whether the standard Prelude is imported scope.

> Some while ago I modified GHC to have an extra runtime flag to let you
> change this behaviour.  The effect was that 3 turns into simply
> (fromInt 3), and the "fromInt" means "whatever fromInt is in scope".
> The same thing happens for
> 	- numeric patterns
> 	- n+k patterns (the subtraction is whatever is in scope)
> 	- negation (you get whatever "negate" is in scope, not Prelude.negate)

For the idea for numeric literals I had in mind (which is so radical I
don't intend to seek much, if any, help in implementing it other than
general information), even this is insufficient. Some analysis of the
value of the literal would need to be incorporated so that something
like the following happens:

	literal "0" gets mapped to zero :: AdditiveMonoid t => t
	literal "1" gets mapped to one :: MultiplicativeMonoid t => t
	literal "5" gets mapped to (fromPositiveInteger 5)
	literal "-9" gets mapped to (fromNonZeroInteger -9)
	literal "5.0" gets mapped to (fromPositiveReal 5.0)
	literal "-2.0" gets mapped to (fromNonZeroReal -2.0)
	literal "0.0" gets mapped to (fromReal 0.0)

etc. A single fromInteger or fromIntegral won't suffice here. The
motivation behind this is so that some fairly typical mathematical
objects (multiplicative monoid of nonzero integers, etc.) can be
directly represented by numerical literals (and primitive types).

I don't for a minute think this is suitable for general use, but
I regard it as an interesting (to me) experiment.
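As a concrete reading of the mapping above, the classes it presupposes might look like this (a sketch; every name here is hypothetical and none of it is standard Prelude):

```haskell
-- Hypothetical classes backing the literal mapping above.
class AdditiveMonoid t where
    zero :: t
    add  :: t -> t -> t

class MultiplicativeMonoid t where
    one :: t
    mul :: t -> t -> t

-- The nonzero integers: a multiplicative monoid with no additive
-- structure, so the literal 0 would simply be ill-typed at this type.
newtype NonZero = NZ Integer deriving (Eq, Show)

instance MultiplicativeMonoid NonZero where
    one = NZ 1
    mul (NZ a) (NZ b) = NZ (a * b)
```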

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
> (Of course, this is not Haskell 98 behaviour.)   I think I managed to
> forget to tell anyone of this flag.  And to my surprise I can't find
> it any more! But several changes I made to make it easy are still
> there, so I'll reinstate it shortly.  That should make it easy to
> define a new numeric class structure.

It certainly can't hurt; even if the code doesn't help directly with
my dastardly plans, examining how the handling of overloaded literals
differs will help me understand what's going on.

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
> So much for numerics.  It's much less obvious what to do about booleans.
> Of course, you can always define your own Bool type.  But we're going to
> have to change the type that if-then-else uses, and presumably guards too.
> Take if-then-else.  Currently it desugars to 
> 	case e of
> 	  True -> then-expr
> 	  False -> else-expr
> but your new boolean might not have two constructors.  So maybe we should 
> simply assume a function 	
> 	if :: Bool -> a -> a -> a
> and use that for both if-then-else and guards....  I wonder what else?

I had in mind that there might be a class of suitable logical values
corresponding to the set of all types suitable for use as such. As
far as I know, the only real restriction on subobject classifiers
for logical values is that it be a pointed set where the point
represents truth. Even if it's not the most general condition, it's
unlikely much can be done computationally without that much. So
since we must be able to compare logical values to see if they're
that distinguished truth value:

\begin{pseudocode}
class Eq lv => LogicalValue lv where
    definitelyTrue :: lv
\end{pseudocode}

From here, ifThenElse might be something like:

\begin{morepseudocode}
ifThenElse :: LogicalValue lv => lv -> a -> a -> a
ifThenElse isTrue thenValue elseValue =
	case isTrue == definitelyTrue of
		BooleanTrue -> thenValue
		_           -> elseValue
\end{morepseudocode}

or something on that order. The if/then/else syntax is really just
a combinator like this with a mixfix syntax, and case is the primitive,
so quite a bit of flexibility is possible given either some "hook" the
mixfix operator will use or perhaps even means for defining arbitrary
mixfix operators. (Of course, a hook is far easier.)

The gains from something like this are questionable, but it's not
about gaining anything for certain, is it? Handling weird logics
could be fun.
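The pseudocode above can be made concrete; here is a runnable sketch with a three-valued logic standing in for a non-Bool logical type (all names illustrative):

```haskell
class Eq lv => LogicalValue lv where
    definitelyTrue :: lv

-- A three-valued logic: only Yes counts as the distinguished truth value.
data Tri = Yes | No | Unknown deriving Eq

instance LogicalValue Tri where
    definitelyTrue = Yes

-- Generic if-then-else over any pointed set of logical values:
ifThenElse :: LogicalValue lv => lv -> a -> a -> a
ifThenElse c t e
    | c == definitelyTrue = t
    | otherwise           = e
```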

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
[interesting example using otherwise in a pattern guard elided]
> And we'll get warnings from the pattern-match compiler.  So perhaps we
> should guarantee that (if otherwise e1 e2) = e1.  

I'm with you on this, things would probably be too weird otherwise.

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
> You may say that's obvious, but the point is that we have to specify
> what can be assumed about an alien Prelude.

There is probably a certain amount of generality that would be desirable
to handle, say, Dylan Thurston's prelude vs. the standard prelude. I'm
willing to accept compiler hacking as part of ideas as radical as mine.

Some reasonable assumptions:
	(1) lists are largely untouchable
	(2) numeric monotypes present in the std. prelude will also be present
	(3) tuples probably won't change
	(4) I/O libs will probably not be toyed with much (monads are good!)
	(5) logical values will either be a monotype or a pointed set class
		(may be too much to support more than a monotype)
	(6) relations (==), (<), etc. will get instances on primitive monotypes
	(7) Read and Show probably won't change much
	(8) Aside from perhaps Arrows, monads probably won't change much
		(Arrows should be able to provide monad compatibility)
	(9) probably no one will try to alter application syntax to operate
		on things like instances of class Applicable
	(10) the vast majority of the prelude changes desirable to support
		will have to do with the numeric hierarchy

These are perhaps not a terribly useful set of assumptions.

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
> Matters get even more tricky if you want to define your own lists.  
> There's quite a lot of built-in syntax for lists, and type checking
> that goes with it.  Last time I thought about it, it made my head
> hurt. Tuples are even worse, because they constitute an infinite family.

The only ideas I have about lists are maybe to reinstate monad
comprehensions. As far as tuples go, perhaps a derived or automagically
defined Functor (yes, I know it isn't derivable now) instance and other
useful instances (e.g. AdditiveMonoid, PointedSet, other instances where
distinguished elements etc. cannot be written for the infinite number of
instances required) would have interesting consequences if enough were
cooked up to bootstrap tuples in a manner polymorphic in the dimension
(fillTuple :: Tuple t => (Natural -> a) -> t a ?, existential tuples?)
Without polytypism or some other mechanism for defining instances on
these infinite families of types, achieving the same effect(s) would be
difficult outside of doing it magically in the compiler. Neither looks
easy to pull off in any case, so I'm wary of these ideas.

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
> The bottom line is this.
>   a) It's desirable to be able to substitute a new prelude
>   b) It's not obvious exactly what that should mean
>   c) And it may not be straightforward to implement

> It's always hard to know how to deploy finite design-and-implementation
> resources.  Is this stuff important to a lot of people?  
> If you guys can come up with a precise specification for (b), I'll
> think hard about how hard (c) really is.  

I think Dylan Thurston's proposal is probably the best starting point
for something that should really get support. If other alternatives in
the same vein start going around, I'd think supporting them would also
be good, but much of what I have in mind is probably beyond reasonable
expectations, and will probably not get broadly used.


Cheers,
Bill


From fjh@cs.mu.oz.au Fri Feb 16 06:14:14 2001 Date: Fri, 16 Feb 2001 17:14:14 +1100 From: Fergus Henderson fjh@cs.mu.oz.au Subject: Primitive types and Prelude shenanigans
On 15-Feb-2001, William Lee Irwin III <wli@holomorphy.com> wrote:
> Some reasonable assumptions:

I disagree about the reasonableness of many of your assumptions ;-)

> 	(1) lists are largely untouchable

I want to be able to write a Prelude that has lists as a strict data
type, rather than a lazy data type.

> 	(4) I/O libs will probably not be toyed with much (monads are good!)
> 	(5) logical values will either be a monotype or a pointed set class
> 		(may be too much to support more than a monotype)

I think that replacing the I/O libs is likely to be a much more
useful and realistic proposition than replacing the boolean type.

> 	(9) probably no one will try to alter application syntax to operate
> 		on things like instances of class Applicable

That's a separate issue; you're talking here about a language
extension, not just a new Prelude.

> 	(10) the vast majority of the prelude changes desirable to support
> 		will have to do with the numeric hierarchy

s/numeric hierarchy/class hierarchy/

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.


From ketil@ii.uib.no Fri Feb 16 06:21:46 2001 Date: 16 Feb 2001 07:21:46 +0100 From: Ketil Malde ketil@ii.uib.no Subject: Primitive types and Prelude shenanigans
William Lee Irwin III <wli@holomorphy.com> writes:

> Some analysis of the value of the literal would need to be
> incorporated so that something like the following happens:

> 	literal "0" gets mapped to zero :: AdditiveMonoid t => t
> 	literal "1" gets mapped to one :: MultiplicativeMonoid t => t

Indeed.  Is it a reasonable assumption that all values of literal "0"
are intended to be the additive identity element?  How about "1",
might we not have it as a successor element in a group, where
multiplication isn't defined?

I guess other behaviour (e.g. using implicit fromInteger) assumes even
more about the classes that can be represented by literal numbers, so
it appears this would be an improvement, if it is at all workable.

> 	literal "5" gets mapped to (fromPositiveInteger 5)

Is something like

        (fromInteger 5) *> one

possible (where *> is Module scalar multiplication from the left)?
Could we avoid having lots and lots of from* functions?

I guess having to declare any datatype as Module is as bad as the
explicit conversion functions...

Anyway, I like it, especially the zero and one case.

> I think Dylan Thurston's proposal is probably the best starting point
> for something that should really get support.

Indeed.  How far are we from being able to import it, e.g. with
"import DTlude as Prelude" or whatever mechanism, so that we can
start to play with it, and see how it works out in practice?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants


From qrczak@knm.org.pl Fri Feb 16 08:09:58 2001 Date: 16 Feb 2001 08:09:58 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Primitive types and Prelude shenanigans
Thu, 15 Feb 2001 20:56:20 -0800, William Lee Irwin III <wli@holomorphy.com> pisze:

> 	literal "0" gets mapped to zero :: AdditiveMonoid t => t
> 	literal "1" gets mapped to one :: MultiplicativeMonoid t => t
> 	literal "5" gets mapped to (fromPositiveInteger 5)
> 	literal "-9" gets mapped to (fromNonZeroInteger -9)

Actually -9 gets mapped to negate (fromInteger 9). At least in theory,
because in ghc it's fromInteger (-9) AFAIK.

> The motivation behind this is so that some fairly typical
> mathematical objects (multiplicative monoid of nonzero integers,
> etc.) can be directly represented by numerical literals (and
> primitive types).

I am definitely against it, especially the zero and one case.
When one can write 1, he should be able to write 2 too obtaining the
same type. It's not hard to write zero and one.

What next: 0 for nullPtr and []?

Moreover, the situation where each integer literal means applied
fromInteger is simple to understand, remember and use. I don't want to
define a bunch of operations for the same thing. Please keep Prelude's
rules simple.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From wli@holomorphy.com Fri Feb 16 08:26:05 2001 Date: Fri, 16 Feb 2001 00:26:05 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Primitive types and Prelude shenanigans
On Fri, Feb 16, 2001 at 05:14:14PM +1100, Fergus Henderson wrote:
> I disagree about the reasonableness of many of your assumptions ;-)

Great! =)

On 15-Feb-2001, William Lee Irwin III <wli@holomorphy.com> wrote:
>> 	(1) lists are largely untouchable

On Fri, Feb 16, 2001 at 05:14:14PM +1100, Fergus Henderson wrote:
> I want to be able to write a Prelude that has lists as a strict data
> type, rather than a lazy data type.

Hmm, sounds like infinite lists might have trouble there, but I hereby
cast out that assumption.

On 15-Feb-2001, William Lee Irwin III <wli@holomorphy.com> wrote:
>> 	(4) I/O libs will probably not be toyed with much (monads are good!)
>> 	(5) logical values will either be a monotype or a pointed set class
>> 		(may be too much to support more than a monotype)

On Fri, Feb 16, 2001 at 05:14:14PM +1100, Fergus Henderson wrote:
> I think that replacing the I/O libs is likely to be a much more
> useful and realistic proposition than replacing the boolean type.

I won't pretend for an instant that replacing the Boolean type will
be remotely useful to more than a handful of people.

On 15-Feb-2001, William Lee Irwin III <wli@holomorphy.com> wrote:
>> 	(9) probably no one will try to alter application syntax to operate
>> 		on things like instances of class Applicable

On Fri, Feb 16, 2001 at 05:14:14PM +1100, Fergus Henderson wrote:
> That's a separate issue; you're talking here about a language
> extension, not just a new Prelude.

I'm not sure one would have to go that far (though I'm willing to be
convinced), but either way, we need not concern ourselves.

On 15-Feb-2001, William Lee Irwin III <wli@holomorphy.com> wrote:
>> 	(10) the vast majority of the prelude changes desirable to support
>> 		will have to do with the numeric hierarchy

On Fri, Feb 16, 2001 at 05:14:14PM +1100, Fergus Henderson wrote:
> s/numeric hierarchy/class hierarchy/

I suppose I was trying to narrow it down as far as possible, but if
people really are touching every place in the class hierarchy, then
I can't do better than that.


Cheers,
Bill


From wli@holomorphy.com Fri Feb 16 09:17:38 2001 Date: Fri, 16 Feb 2001 01:17:38 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Primitive types and Prelude shenanigans
William Lee Irwin III <wli@holomorphy.com> pisze:
>> 	literal "0" gets mapped to zero :: AdditiveMonoid t => t
>> 	literal "1" gets mapped to one :: MultiplicativeMonoid t => t
>> 	literal "5" gets mapped to (fromPositiveInteger 5)
>> 	literal "-9" gets mapped to (fromNonZeroInteger -9)

On Fri, Feb 16, 2001 at 08:09:58AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> Actually -9 gets mapped to negate (fromInteger 9). At least in theory,
> because in ghc it's fromInteger (-9) AFAIK.

Sorry I was unclear about this: in the scheme I was going to implement,
the sign of the literal value would be discerned, and negative literals
carried to fromNonZeroInteger (-9), etc.

William Lee Irwin III <wli@holomorphy.com> pisze:
>> The motivation behind this is so that some fairly typical
>> mathematical objects (multiplicative monoid of nonzero integers,
>> etc.) can be directly represented by numerical literals (and
>> primitive types).

On Fri, Feb 16, 2001 at 08:09:58AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> I am definitely against it, especially the zero and one case.
> When one can write 1, he should be able to write 2 too obtaining the
> same type. It's not hard to write zero and one.

The real hope here is to get the distinct zero and one for things that
are already traditionally written that way, like the multiplicative
monoid of nonzero integers or the additive monoid of natural numbers.
Another implication I view as beneficial is that the 0 (and 1) symbols
can be used in vector (and perhaps matrix) contexts without the
possibility that other integer literals might be used inadvertently.

On Fri, Feb 16, 2001 at 08:09:58AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> What next: 0 for nullPtr and []?

It's probably good to point out that this scheme is "permissive" enough,
or more specifically fine-grained enough, to let the symbol be
overloaded both for address types on which arithmetic is permitted and
for lists under their natural monoid structure. I agree that the latter
is aesthetically displeasing at the very least, and probably undesirable
to allow by default.

On Fri, Feb 16, 2001 at 08:09:58AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> Moreover, the situation where each integer literal means applied
> fromInteger is simple to understand, remember and use. I don't want to
> define a bunch of operations for the same thing. Please keep Prelude's
> rules simple.

I don't think this sort of scheme is appropriate for a standard Prelude
either, though I do think it's interesting to me, and perhaps others. I
don't mean to give the impression that I'm proposing this for inclusion
in any sort of standard Prelude. It's a more radical point in the design
space that I am personally interested in exploring both to discover its
implications for programming (what's really awkward, what things become
convenient, etc.), and to acquaint myself with the aspects of the
compiler pertinent to the handling of primitive types.


Cheers,
Bill


From karczma@info.unicaen.fr Fri Feb 16 14:24:50 2001 Date: Fri, 16 Feb 2001 14:24:50 +0000 From: Jerzy Karczmarczuk karczma@info.unicaen.fr Subject: Just for your fun and horror
Perhaps I mentioned that I use Haskell to teach compilation,
since I think that functional structures are good not only
for parsers, but for a legible semantics for virtual machines,
for the code generators, etc. The main assignment was to write
a syntactic converter from a Haskell-like language to Scheme,
and the exam included such exercises as

Find the type of fm in

fm _ z [] = return z
fm g z (a:aq) = g z a >>= \y->fm g y aq

When I started correcting the exam, I thought I would jump
out of the window. First 30 copies: The type of fm is

ff -> b -> [c] -> b

(with an appropriate constraint for the functional type ff).
The result had for them the same type as the type of z.
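[For reference, the type Haskell actually infers: fm is essentially the
standard foldM, so the result is monadic, not bare.]

```haskell
-- The exam function with its inferred type made explicit:
fm :: Monad m => (b -> a -> m b) -> b -> [a] -> m b
fm _ z []     = return z
fm g z (a:aq) = g z a >>= \y -> fm g y aq
```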

My inquiry proved beyond any doubt that my students are so
conditioned by "C", that despite the fact that we worked with
monads for several weeks, they *cannot imagine* that
"return z"
may mean something different than the value of "z".

Any suggestions?

[Yes, I have one! Stop teaching, find a more appropriate job,
e.g., cultivate genetically modified, mutant tomatoes.]


Jerzy Karczmarczuk
"C"-aen, Fran-"C"-e.


From jans@numeric-quest.com Fri Feb 16 09:17:36 2001 Date: Fri, 16 Feb 2001 04:17:36 -0500 (EST) From: Jan Skibinski jans@numeric-quest.com Subject: Just for your fun and horror
On Fri, 16 Feb 2001, Jerzy Karczmarczuk wrote:

> My inquiry proved beyond any doubt that my students are so
> conditioned by "C", that despite the fact that we worked with
> monads for several weeks, they *cannot imagine* that
> "return z"
> may mean something different than the value of "z".
> 
> Any suggestions?

	Perhaps the name "return" in the monadic definitions
	could be replaced by something more suggestive of
	an action? How about running a little experiment
	next time, with a new name, to see whether this would
	remove this unfortunate association with C-like
	"return" in the minds of your students? 

	Jan
 



From gruenbacher-lists@geoinfo.tuwien.ac.at Fri Feb 16 14:57:34 2001 Date: Fri, 16 Feb 2001 15:57:34 +0100 (CET) From: Andreas Gruenbacher gruenbacher-lists@geoinfo.tuwien.ac.at Subject: Just for your fun and horror
On Fri, 16 Feb 2001, Jerzy Karczmarczuk wrote:

> [..]
>
> fm _ z [] = return z
> fm g z (a:aq) = g z a >>= \y->fm g y aq
>
> When I started correcting the exam, I thought I would jump
> out of the window. First 30 copies: The type of fm is
>
> ff -> b -> [c] -> b
>
> (with an appropriate constraint for the functional type ff).
> The result had for them the same type as the type of z.
>
> My inquiry proved beyond any doubt that my students are so
> conditioned by "C", that despite the fact that we worked with
> monads for several weeks, they *cannot imagine* that
> "return z"
> may mean something different than the value of "z".
>
> Any suggestions?

Not that it would help you much, but I also think that return is a rather
confusing name for what might otherwise be called liftM0.


Regards,
Andreas.

------------------------------------------------------------------------
 Andreas Gruenbacher                  gruenbacher@geoinfo.tuwien.ac.at
 Research Assistant                       Phone      +43(1)58801-12723
 Institute for Geoinformation             Fax        +43(1)58801-12799
 Technical University of Vienna           Cell phone   +43(664)4064789



From simonpj@microsoft.com Fri Feb 16 12:14:24 2001 Date: Fri, 16 Feb 2001 04:14:24 -0800 From: Simon Peyton-Jones simonpj@microsoft.com Subject: Primitive types and Prelude shenanigans
|  | Some while ago I modified GHC to have an extra runtime
|  | flag to let you change this behaviour.  The effect was
|  | that 3 turns into simply (fromInt 3), and the
|  | "fromInt" means "whatever fromInt is in scope".
| 
| Hmmm... so how about:
| 
|   foo fromInt = 3
| 
| Would this translate to:
| 
|   foo f = f 3

This exactly what will happen.  But you are right to say that
it is perhaps not what you want.

Another alternative would be: "3" turns into "Prelude.fromInt 3",
where "Prelude.fromInt" means "whatever Prelude.fromInt is in scope".
So then you'd have to say
	import Prelude ()
	import MyPrelude as Prelude
(as Malcolm and Marcin suggested).  Maybe that's a good plan; it's a
little more heavyweight.  [Incidentally, if this is nhc's behaviour,
it's not H98.
The Report (tries to) stress that you get the "fromInt from the actual
standard Prelude" regardless of what is in scope.  That's why I'm not
going to make it the default behaviour.]

Yet another possibility would to be say you get "the unqualified 
fromInt that's in scope at top level".  But that seems worse.

Re Bools, Koen and Marcin write (respectively)

|  | [...] guarantee that (if otherwise e1 e2) = e1.
| 
| I do not understand this. "otherwise" is simply a function
| name, that can be used, redefined or hidden, by anyone. It
| is not used in any desugaring. Why change that behaviour?

| > So much for numerics.  It's much less obvious what to do 
| about booleans.
| 
| IMHO a natural generalization (not necessarily useful) is to follow
| the definition of the 'if' syntactic sugar literally. 'if' expands
| to the appropriate 'case'. So Prelude.True and Prelude.False must be
| defined, and they must have the same type (otherwise we get a type
| error each time we use 'if'). This would allow even
|     data FancyBool a = True | False | DontKnow a

The point is that there must be a *defined* desugaring.  The 
desugaring in the report defines the behaviour, but the compiler is
free to do differently.  If one is to be free to rebind types, the
desugaring must be fully defined.  Marcin suggests that 'if' is just
syntactic sugar.  But that would be a disaster if the new Bool type
didn't have constructors True and False.  For example, maybe Bool becomes
a function:
	type Bool = forall b. b -> b -> b
No constructor 'True'!  Here I think the right thing is to say that
desugaring for boolean constructs uses a function 'if' assumed to have
type 	
	if :: forall b. Bool -> b -> b -> b
Now the programmer can define both Bool and if, and the compiler will
be happy.  
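Simon's constructor-free Bool can be written down directly as a Church encoding. A sketch (`if'` stands in for the keyword `if`; all names invented):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Bool with no constructors at all, as in Simon's example:
-- a Church-encoded boolean is just a two-way selector.
type CBool = forall b. b -> b -> b

true, false :: CBool
true  t _ = t          -- always picks the "then" branch
false _ f = f          -- always picks the "else" branch

-- The 'if' function the desugaring would use:
if' :: CBool -> b -> b -> b
if' c t f = c t f
```

With this type there is no `True` constructor to desugar to, which is why the desugaring has to go through a function rather than a case expression.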


My point is this: there is some *design* to do here.  It's not obvious
what the design should be.  But if anyone feels inclined to do the design
(in consultation with the community of course) then I'd be inclined to 
implement it in GHC.  (Though I'm not writing a blank cheque!)  
Decoupling the prelude is a desirable goal.

Simon


From matthias@rice.edu Fri Feb 16 15:51:35 2001 Date: Fri, 16 Feb 2001 09:51:35 -0600 (CST) From: Matthias Felleisen matthias@rice.edu Subject: Just for your fun and horror
The problem is Haskell, not your student. 

Haskell undermines the meaning of 'return', which has the same meaning in
C, C++, Java, and who knows what else.  These languages use 'return' to
refer to one part of the denotation of a function return (value) and
Haskell uses 'return' to refer to two parts (value, store). These languages
have been around forever; Haskell came late. These languages are
imperative; Haskell is a wanna-be imperative language. 

The students know C'ish stuff (and I take it some Scheme); you teach
Haskell to introduce them to functional and denotational thinking.  That's
laudable. It's great. Just don't expect your students to change deeply
ingrained habits such as the 'return habit' in a few weeks. Instead, teach
explicit store-passing style and do it again and again and again until they
ask "isn't this a pattern that we should abstract out". Then show monads
and apologize profusely for the abuse of the return syntax in Haskell. If
they don't ask, chew them out near the end of the semester for being bad
programmers who can't see a pattern when it bites their b...d. Not worth
the money. Fired. 

:-)
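Matthias's store-passing exercise can be made concrete. A minimal sketch (all names invented; `yield` and `andThen` are the combinators students would later recognise as return and >>=):

```haskell
-- Explicit store-passing, the style Matthias suggests drilling
-- before naming the pattern.  'Store' is a stand-in state type.
type Store = Int
type StateFn a = Store -> (a, Store)

-- the two combinators students end up re-deriving by hand:
yield :: a -> StateFn a        -- later revealed as monadic return
yield x = \s -> (x, s)

andThen :: StateFn a -> (a -> StateFn b) -> StateFn b
andThen m k = \s -> let (x, s') = m s in k x s'

-- a tiny "imperative" program: read the store twice, bumping it
tick :: StateFn Int
tick = \s -> (s, s + 1)

twoTicks :: StateFn (Int, Int)
twoTicks = tick `andThen` \a -> tick `andThen` \b -> yield (a, b)
</```>
```

After threading `s` by hand a few dozen times, abstracting `yield`/`andThen` out is the step students are meant to ask for themselves.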

-- Matthias


From Dominic.J.Steinitz@BritishAirways.com Fri Feb 16 15:59:33 2001 Date: 16 Feb 2001 15:59:33 Z From: Steinitz, Dominic J Dominic.J.Steinitz@BritishAirways.com Subject: Just for your fun and horror
I always liked unit rather than return.

Dominic.

-------------------------------------------------------------------------------------------------
21st century air travel     http://www.britishairways.com


From C.Reinke@ukc.ac.uk Fri Feb 16 16:46:53 2001 Date: Fri, 16 Feb 2001 16:46:53 +0000 From: C.Reinke C.Reinke@ukc.ac.uk Subject: Just for your fun and horror
> `return' in Haskell vs `return' in C,...

Unless you're one of Asimov's technicians of eternity, it is a bit
difficult to change the history of programming languages, and assuming
that the students pay for the opportunity to learn, you can't really
fire them either... but I agree with Matthias's suggestion to go from the
specific to the general. 

Before anyone complains that abstract and generalised concepts are so
much more important and powerful than specific and simplified instances
- if you believe this, you will also agree that giving students a
chance to learn the general process of abstraction for themselves is more
important and empowering than teaching them some specific abstractions.

(I'm not sure whether it is even possible to reach all students in
 a course, but I will certainly not recommend giving up trying;-)

One way to look at the problem is that some of your students have 
concrete experience with `return' in different contexts, and that
Haskell tries to make different things look similar here. You say
"we worked with monads for several weeks" but, you being yourself,
this was probably at a fairly abstract and general level, right?

My suggestion is to give your students some concrete experience to
counter the one they bring into your course, by introducing the
abstract monads via an intermediate step of concrete representations.

As you're teaching programming language implementation anyway, why not
have an algebraic datatype with return and bind *constructors*,
together with some explicit *interpreters* (plural) for the language of
structures built from those constructors (even as student exercises)?
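Claus's "return and bind as constructors, interpreters give them meaning" idea can be sketched with a GADT (a feature that postdates this thread; the names and the single `Ask` effect are invented for illustration):

```haskell
{-# LANGUAGE GADTs #-}

-- Monadic programs as plain data: Return and Bind are
-- *constructors*, and interpreters (plural) give them meaning.
data Prog e a where
  Return :: a -> Prog e a
  Bind   :: Prog e a -> (a -> Prog e b) -> Prog e b
  Ask    :: Prog e e            -- one effect: read an environment

-- Interpreter 1: run against a fixed environment.
runWith :: e -> Prog e a -> a
runWith _   (Return x) = x
runWith env (Bind m k) = runWith env (k (runWith env m))
runWith env Ask        = env

-- Interpreter 2: interpret the same syntax into a function.
toFunc :: Prog e a -> (e -> a)
toFunc (Return x) = \_ -> x
toFunc (Bind m k) = \e -> toFunc (k (toFunc m e)) e
toFunc Ask        = \e -> e
```

The point of the exercise: students can pattern-match on `Return x` and see concretely that it is a data structure, not "the value of x", and different interpreters are free to treat it differently.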

Perhaps we can gain a better understanding of the student perspective
if we compare the situation with lists or other data structures:  do we
start teaching their folds and the fold-representation of data
structures right away, or do we start with concrete intermediate
structures, and move on to folds and deforestation later?

Of course, with a concrete representation of monads, it is difficult to
uphold the monad laws, so after this intermediate step (in which students get
their hands on `return' et al., and in which interpreters can interpret
`return a' in any way they please), one can move on to an abstract data
type of monads. After all, that's what abstract data types are there for.

Hth,
Claus



From qrczak@knm.org.pl Fri Feb 16 17:13:10 2001 Date: 16 Feb 2001 17:13:10 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Primitive types and Prelude shenanigans
Fri, 16 Feb 2001 04:14:24 -0800, Simon Peyton-Jones <simonpj@microsoft.com> pisze:

> [Incidentally, if this is nhc's behaviour, it's not H98.
> The Report (tries to) stress that you get the "fromInt from the actual
> standard Prelude" regardless of what is in scope.  That's why I'm not
> going to make it the default behaviour.]

But is mere -fglasgow-exts enough to enable it?

BTW: fromInt is not H98. However when a compiler uses fromInt instead
of fromInteger where the number fits, with a suitable default method
for fromInt which is not exported from Prelude, then no program can
tell the difference, so it's OK. Unfortunately integer literals cannot
expand to Prelude.fromInt, because Prelude does not export fromInt!

Currently ghc extension flags can have no effect on module imports,
so if fromInt is not visible in standard mode, it will not be visible
in extended mode either. In that case these two extensions (Prelude
substitution and using fromInt for integer literals) are incompatible.

> Marcin suggests that 'if' is just syntactic sugar.  But that would
> be a disaster if the new Bool type didn't have constructors True
> and False.

Correction: it would be a disaster when there are no Prelude.True
and Prelude.False constructors of the same type. It need not be
called Bool if the desugaring rule does not say so.

> Here I think the right thing is to say that desugaring for boolean
> constructs uses a function 'if' assumed to have type
>         if :: forall b. Bool -> b -> b -> b

What if somebody wants to make 'if' overloaded on more types than
some constant type called Bool?

    class Condition a where
        if :: a -> b -> b -> b

Generally I don't feel the need to allow replacing if, Bool and
everything else with custom definitions, especially when there is no
single obvious way.
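Marcin's overloaded-condition class can be written in ordinary Haskell if the method gets a non-keyword name (`cond` stands in for `if`; `Fuzzy` is an invented example type):

```haskell
-- A class of condition-like types, per Marcin's sketch.
class Condition c where
  cond :: c -> b -> b -> b

instance Condition Bool where
  cond True  t _ = t
  cond False _ f = f

-- A three-valued example showing why the class is more general
-- than a single Bool type:
data Fuzzy = Yes | No | Unknown

instance Condition Fuzzy where
  cond Yes     t _ = t
  cond No      _ f = f
  cond Unknown _ f = f    -- an arbitrary choice for the third value
```

This is exactly the design question at issue: whether `if` should desugar through one fixed type or through a class like this.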

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Fri Feb 16 17:42:17 2001 Date: 16 Feb 2001 17:42:17 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Primitive types and Prelude shenanigans
Thu, 15 Feb 2001 20:56:20 -0800, William Lee Irwin III <wli@holomorphy.com> pisze:

> 	literal "5" gets mapped to (fromPositiveInteger 5)
> 	literal "-9" gets mapped to (fromNonZeroInteger -9)

Note that once the generic Prelude replacement framework under
discussion is done, and ghc's rules are changed to expand -9 to
negate (fromInteger 9) instead of fromInteger (-9), you won't
need to uglify the fromInteger function to be able to define
types with only nonnegative numeric values. Just define your negate
in an appropriate class, different from fromInteger's class.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From wli@holomorphy.com Fri Feb 16 19:47:10 2001 Date: Fri, 16 Feb 2001 11:47:10 -0800 From: William Lee Irwin III wli@holomorphy.com Subject: Primitive types and Prelude shenanigans
William Lee Irwin III <wli@holomorphy.com> pisze:
>> 	literal "5" gets mapped to (fromPositiveInteger 5)
>> 	literal "-9" gets mapped to (fromNonZeroInteger -9)

On Fri, Feb 16, 2001 at 05:42:17PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> Note that when a discussed generic Prelude replacement
> framework is done, and ghc's rules are changed to expand -9 to
> negate (fromInteger 9) instead of fromInteger (-9), then you don't
> need uglification of the fromInteger function to be able to define
> types with only nonnegative numeric values. Just define your negate
> in an appropriate class, different from the fromInteger's class.

Good point, the canonical injection from the positive integers into
the various supersets (with structure) thereof handles it nicely.

I foresee:
fromPositiveInteger :: ContainsPositiveIntegers t => PositiveInteger -> t
instance ContainsPositiveIntegers Integer where ...
instance AdditiveGroup Integer where ...
negate :: AdditiveGroup t => t -> t {- this seems natural, but see below -}

fromPositiveInteger 5 :: ContainsPositiveIntegers t => t

negate $ fromPositiveInteger 5
	:: (AdditiveGroup t, ContainsPositiveIntegers t) => t

which is not exactly what I want (and could probably use some aesthetic
tweaking); I had in mind that negative integers would somehow imply a
ContainsNonZeroIntegers or ContainsAllIntegers instance or the like.
The solution actually imposes a rather natural instance (though one
which could cause overlaps):

instance (AdditiveGroup t, ContainsPositiveIntegers t)
			=> ContainsAllIntegers t where ...

I suppose one big wrinkle comes in when I try to discuss negation in
the multiplicative monoid of nonzero integers. That question already
exists without the Prelude's altered handling of negative literals.
negate . fromInteger $ n just brings it immediately to the surface.

0 and 1 will still take some work, but I don't expect help with them.

Thanks for the simplification!

Cheers,
Bill


From erik@meijcrosoft.com Fri Feb 16 20:26:00 2001 Date: Fri, 16 Feb 2001 12:26:00 -0800 From: Erik Meijer erik@meijcrosoft.com Subject: Just for your fun and horror
Why should we change and not C?

Erik

----- Original Message ----- 
From: "Jan Skibinski" <jans@numeric-quest.com>
To: "Jerzy Karczmarczuk" <karczma@info.unicaen.fr>
Cc: <haskell-cafe@haskell.org>; <plt-scheme@fast.cs.utah.edu>
Sent: Friday, February 16, 2001 1:17 AM
Subject: Re: Just for your fun and horror


> 
> 
> On Fri, 16 Feb 2001, Jerzy Karczmarczuk wrote:
> 
> > My inquiry proved beyond any doubt that my students are so
> > conditioned by "C", that despite the fact that we worked with
> > monads for several weeks, they *cannot imagine* that
> > "return z"
> > may mean something different than the value of "z".
> > 
> > Any suggestions?
> 
> Perhaps the name "return" in the monadic definitions
> could be replaced by something more suggestive of
> an action? How about running a little experiment
> next time, with a new name, to see whether this would
> remove this unfortunate association with C-like
> "return" in the minds of your students? 
> 
> Jan
>  
> 
> 
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe



From matthias@rice.edu Fri Feb 16 21:26:36 2001 Date: Fri, 16 Feb 2001 15:26:36 -0600 (CST) From: Matthias Felleisen matthias@rice.edu Subject: Just for your fun and horror
Because C was first and you don't have the power to change them. 
-- Matthias


From jhf@lanl.gov Fri Feb 16 23:10:29 2001 Date: Fri, 16 Feb 2001 16:10:29 -0700 (MST) From: Joe Fasel jhf@lanl.gov Subject: Just for your fun and horror
On 16-Feb-2001 Matthias Felleisen wrote:
| 
| The problem is Haskell, not your student. 
| 
| Haskell undermines the meaning of 'return', which has the same meaning in
| C, C++, Java, and who knows whatelse.  These languages use 'return' to
| refer to one part of the denotation of a function return (value) and
| Haskell uses 'return' to refer to two parts (value, store). These languages
| have been around forever; Haskell came late. These languages are
| imperative; Haskell is a wanna-be imperative language. 

The denotation of a return command in a typical imperative language supplies
a value and a store to a calling continuation, so why is the name not entirely
appropriate?

Joseph H. Fasel, Ph.D.              email: jhf@lanl.gov
Technology Modeling and Analysis    phone: +1 505 667 7158
University of California            fax:   +1 505 667 2960
Los Alamos National Laboratory      post:  TSA-7 MS F609; Los Alamos, NM 87545


From jhf@lanl.gov Fri Feb 16 23:53:13 2001 Date: Fri, 16 Feb 2001 16:53:13 -0700 (MST) From: jhf@lanl.gov jhf@lanl.gov Subject: Just for your fun and horror
On 16-Feb-2001 Matthias Felleisen wrote:
> 
> Because imperative languages have named one half of the denotation (the
> value return) and not all of it for a long long long time. It's too late 
> for Haskell to change that. -- Matthias

Well now, if I am to understand what a return statement in C does,
I must realize not only that it may return a value to a calling
routine, but also that it preserves the store.  If it allowed
the store to vanish, it wouldn't be very useful, would it?
So I don't see how it's reasonable to assert that "return"
means only one of these two things to a C programmer.

Cheers,
--Joe

Joseph H. Fasel, Ph.D.              email: jhf@lanl.gov
Technology Modeling and Analysis    phone: +1 505 667 7158
University of California            fax:   +1 505 667 2960
Los Alamos National Laboratory      post:  TSA-7 MS F609; Los Alamos, NM 87545


From matthias@rice.edu Fri Feb 16 23:57:41 2001 Date: Fri, 16 Feb 2001 17:57:41 -0600 (CST) From: Matthias Felleisen matthias@rice.edu Subject: Just for your fun and horror
   From: jhf@lanl.gov
   X-Priority: 3 (Normal)
   Content-Type: text/plain; charset=us-ascii
   Date: Fri, 16 Feb 2001 16:53:13 -0700 (MST)
   Organization: Los Alamos National Laboratory
   Cc: karczma@info.unicaen.fr, haskell-cafe@haskell.org


   On 16-Feb-2001 Matthias Felleisen wrote:
   > 
   > Because imperative languages have named one half of the denotation (the
   > value return) and not all of it for a long long long time. It's too late 
   > for Haskell to change that. -- Matthias

   Well now, if I am to understand what a return statement in C does,
   I must realize not only that it may return a value to a calling
   routine, but also that it preserves the store.  If it allowed
   the store to vanish, it wouldn't be very useful, would it?
   So I don't see how it's reasonable to assert that "return"
   means only one of these two things to a C programmer.

   Cheers,
   --Joe


Let me spell it out in detail. When a C programmer thinks about the
'return' type of a C function, he thinks about the value-return half 
of a return statement's denotation. The other half, the modified store, 
remains entirely implicit as far as types are concerned. This is what 
Jerzy's exam question was all about. 

-- Matthias



From jhf@lanl.gov Sat Feb 17 00:19:18 2001 Date: Fri, 16 Feb 2001 17:19:18 -0700 (MST) From: jhf@lanl.gov jhf@lanl.gov Subject: Just for your fun and horror
Matthias,

My apologies for being deliberately obtuse.  Of course, I understood
what you were saying, but my point is this:  The name of the monadic
"return" combinator is perfectly sensible to anyone who understands
the continuation semantics of imperative languages.  While it shouldn't
be necessary to be a denotational semanticist to program in Haskell,
I think it is essential to appreciate the philosophical difference
between the _being_ of functional programming and the _doing_ of
imperative programming, if you're going to play with something like
the I/O monad in Haskell.  If you don't grasp that when you construct
a monad, you're creating a value that represents an action, or in other
words have a basic understanding of the functional denotation of
an imperative command, you don't really understand what you're "doing"
with monads, and your program is likely not to compute what you intend.
In this sense, maybe it's better not to change the (initially) confusing
"return" name, but to regard it as a pons asinorum that the student
must cross.

Cheers,
--Joe


On 16-Feb-2001 Matthias Felleisen wrote:
> 
>    From: jhf@lanl.gov
>    X-Priority: 3 (Normal)
>    Content-Type: text/plain; charset=us-ascii
>    Date: Fri, 16 Feb 2001 16:53:13 -0700 (MST)
>    Organization: Los Alamos National Laboratory
>    Cc: karczma@info.unicaen.fr, haskell-cafe@haskell.org
> 
> 
>    On 16-Feb-2001 Matthias Felleisen wrote:
>    > 
>    > Because imperative languages have named one half of the denotation (the
>    > value return) and not all of it for a long long long time. It's too late
>    > for Haskell to change that. -- Matthias
> 
>    Well now, if I am to understand what a return statement in C does,
>    I must realize not only that it may return a value to a calling
>    routine, but also that it preserves the store.  If it allowed
>    the store to vanish, it wouldn't be very useful, would it?
>    So I don't see how it's reasonable to assert that "return"
>    means only one of these two things to a C programmer.
> 
>    Cheers,
>    --Joe
> 
> 
> Let me spell it out in detail. When a C programmer thinks about the
> 'return' type of a C function, he thinks about the value-return half 
> of a return statement's denotation. The other half, the modified store, 
> remains entirely implicit as far as types are concerned. This is what 
> Jerzy's exam question was all about. 
> 
> -- Matthias
> 

Joseph H. Fasel, Ph.D.              email: jhf@lanl.gov
Technology Modeling and Analysis    phone: +1 505 667 7158
University of California            fax:   +1 505 667 2960
Los Alamos National Laboratory      post:  TSA-7 MS F609; Los Alamos, NM 87545


From p.turner@computer.org Sat Feb 17 01:00:50 2001 Date: Fri, 16 Feb 2001 20:00:50 -0500 From: Scott Turner p.turner@computer.org Subject: Just for your fun and horror
Matthias Felleisen wrote:
>When a C programmer thinks about the
>'return' type of a C function, he thinks about the value-return half 
>of a return statement's denotation. The other half, the modified store, 
>remains entirely implicit as far as types are concerned. 

Just because the type system of C keeps store implicit, it doesn't
change the match between the meaning of 'return' in the two languages.
The IO monad provides a refined way of typing imperative-style 
functions, including return statements.

If you want to use a return statement in Haskell, you can, and it's called
'return'.

(A reasonable alternative would be for 'return' to have second-class
status, as syntactic sugar for 'unit', analogous to otherwise=True).

--
Scott Turner
p.turner@computer.org       http://www.billygoat.org/pkturner


From matthias@rice.edu Sat Feb 17 03:24:50 2001 Date: Fri, 16 Feb 2001 21:24:50 -0600 (CST) From: Matthias Felleisen matthias@rice.edu Subject: Just for your fun and horror
Yes, students must cross the bridge. But the name 'return' may
make it more difficult than necessary to cross the bridge. I 
conjecture that the students of our French friend are just the 
tip of the iceberg. 

All functional programmers have problems selling our ware to 
such people. Haskell could have benefited from using a word such 
as 

 produce 10 

to say that a function produces a 10 and a store or whatever. It 
could have driven the point home. Pretending to be C or Java 
is confusing and may create a backlash. Just admit you're different 
-- and better. 

We Schemers have different problems. 

-- Matthias



From matth@ninenet.com Sat Feb 17 04:21:57 2001 Date: Fri, 16 Feb 2001 22:21:57 -0600 From: Matt Harden matth@ninenet.com Subject: Scalable and Continuous
Marcin 'Qrczak' Kowalczyk wrote:
> 
> Wed, 14 Feb 2001 23:27:55 -0600, Matt Harden <matth@ninenet.com> pisze:
> 
> > I also wonder: should one be allowed to create new superclasses of an
> > existing class without updating the original class's definition?
> 
> It would not buy anything. You could not make use of the superclass
> in default definitions anyway (because they are already written).

But that's not the point.  The point is you could create objects that
were only instances of the new superclass and not of the subclass.  It
allows us to have hidden superclasses of Num that wouldn't even have to
be referenced in the standard Prelude, for instance.  It allows users to
define (+) for a type without defining (*), by creating an appropriate
superclass of Num.  We could keep the current Prelude while allowing
numerous "Geek Preludes" that could coexist with the std one (at least
with regard to this particular issue).

> And what would happen to types which are instances of the subclass
> but not of the new superclass?

They would automatically be instances of the new superclass.  Why not? 
They already have all the appropriate functions defined.  Again, I
wouldn't allow default definitions for the same function in multiple
classes, and this is one of the reasons.  It would introduce ambiguity
when a type that is an instance of a subclass, and didn't override the
default, was considered as an instance of the superclass.

> > Also, should the subclass be able to create new default definitions
> > for functions in the superclasses?
> 
> I hope the system can be designed such that it can.

Me too :).

> > such defaults would only be legal if the superclass did not define
> > a default for the same function.
> 
> Not necessarily. For example (^) in Num (of the revised Prelude)
> has a default definition, but Fractional gives the opportunity to
> have better (^) defined in terms of other methods. When a type is an
> instance of Fractional, it should always have the Fractional's (^)
> in practice. When not, Num's (^) is always appropriate.
> 
> I had many cases like this when trying to design a container class
> system. It's typical that a more specialized class has something
> generic as a superclass, and that a more generic function can easily
> be expressed in terms of specialized functions (but not vice versa).
> It follows that many kinds of types have the same written definition
> for a method, which cannot be put in the default definition in the
> class because it needs a more specialized context.
> 
> It would be very convenient to be able to do that, but it cannot be
> very clear design. It relies on the absence of an instance, a negative
> constraint. Hopefully it will be OK, since it's determined once for a
> type - it's not a systematic way of parametrizing code over negative
> constrained types, which would break the principle that additional
> instances are harmless to old code.

What happens if classes A and B are superclasses of C, all three
define a default for function foo, and we have a type that's an instance
of A and B, but not C, which doesn't override foo?  Which default do we
use?  It's not only a problem for the compiler to figure out, it also
quickly becomes confusing to the programmer.  I'd rather just make the
simple rule of a single default per function.  If multiple "standard"
definitions for a function make sense, then be explicit about which one
you want for each type; i.e.:

   instance Fractional MyFraction where
      (^) = fractionalPow
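Matt's "be explicit about which standard definition you want" pattern works in ordinary Haskell by exporting a named default that instances opt into. A sketch of the hypothetical `fractionalPow` from his example (the body is essentially the Prelude's (^^)):

```haskell
-- A named "standard definition" for (^) on Fractional types,
-- which an instance can select explicitly instead of relying
-- on a class default.
fractionalPow :: (Fractional a, Integral b) => a -> b -> a
fractionalPow x n
  | n >= 0    = x ^ n
  | otherwise = recip (x ^ negate n)    -- negative exponents via recip
```

Each instance then says `(^) = fractionalPow` (or not), so no compiler rule about competing defaults is needed.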

> This design does have some problems. For example what if there are two
> subclasses which define the default method in an incompatible ways.
> We should design the system such that adding a non-conflicting instance
> does not break previously written code. It must be resolved once per
> module, probably complaining about the ambiguity (ugh!), but once
> the instance is generated, it's cast in stone for this type.

Yeah, ugh.  I hate having opportunities for ambiguity.  Simple rules and
obvious results are far better, IMHO.

> > What do you mean by mutual definitions?
(snipped explanation of mutual definitions)

OK, that's what I thought :).  I didn't really think this was
particularly important to allowing a superclass's instances to be
defined in subclasses, but now I think I see why you said that.  It
would be easy to forget to define one of the functions if the defaults
are way up the hierarchy in one of the superclasses.

Btw, I'm one of those who agrees that omitting a definition of a class
function in an instance should be an error.  If you really intend to
omit the implementation of a function without a default, define it as
(error "Intentionally omitted")!

Matt Harden


From jf15@hermes.cam.ac.uk Sat Feb 17 11:09:52 2001 Date: Sat, 17 Feb 2001 11:09:52 +0000 (GMT) From: Jon Fairbairn jf15@hermes.cam.ac.uk Subject: Just for your fun and horror
On Fri, 16 Feb 2001, Scott Turner wrote:
> Just because the type system of C keeps store implicit, it doesn't
> change the match between the meaning of 'return' in the two languages.

Or to put it another way, _all_ types in C are IO
something.  I think from a didactic point of view making
this observation could be very valuable.

-- 
Jón Fairbairn                                 Jon.Fairbairn@cl.cam.ac.uk
31  Chalmers Road                                        jf@cl.cam.ac.uk
Cambridge CB1 3SZ                      +44 1223 570179 (pm only, please)



From elke.kasimir@catmint.de Sat Feb 17 11:24:03 2001 Date: Sat, 17 Feb 2001 12:24:03 +0100 (CET) From: Elke Kasimir elke.kasimir@catmint.de Subject: Just for your fun and horror
Another good exam question (Hmm!):

What does last (last (map return [1..])) lastly return given that
last (return (not True))?

I also would prefer "unit". "return" makes sense for me as syntactic 
sugar in the context of a "do"-expression (and then please like a 
unary prefix operator with low binding power...).

An alternative sugaring would be "compute": when a monad represents
a computation, "compute" returns a computation with a result, not 
just the result:

foo x = if x > 0 then compute (x*x) else compute (-(x*x))

By the way, an alternative for "do" would be "seq" (as in occam) to
indicate that operations are sequenced:

getLine = seq
            c <- readChar
            if c == '\n'
              then compute ""
              else seq
                     l <- getLine
                     compute (c:l)

But such a discussion has probably already taken place some years
ago. It would be interesting for me to know the arguments that led to
the choice of "return" (and "do").

Elke.

---
"If you have nothing to say, don't do it here..."

Elke Kasimir
Skalitzer Str. 79
10997 Berlin (Germany)
fon:  +49 (030) 612 852 16
mail: elke.kasimir@catmint.de
see: <http://www.catmint.de/elke>

for pgp public key see:
<http://www.catmint.de/elke/pgp_signature.html>


From p.turner@computer.org Sat Feb 17 20:27:31 2001 Date: Sat, 17 Feb 2001 15:27:31 -0500 From: Scott Turner p.turner@computer.org Subject: [newbie] Lazy >>= ?!
Andrew Cooke wrote:
>1.  After digesting what you wrote I managed to make a lazy list of IO
>monads containing random numbers, but couldn't make an IO monad that
>contained a lazy list of random numbers.  Is this intentional, me
>being stupid, or just chance?

I had wondered what kind of thing you were doing with the IO monad.  Random
numbers are an odd fit.  Pseudorandom numbers can be generated in a lazy
list easily; you don't need a connection with the IO monad to do it.  Using
the Random module of the Hugs distribution, it's for example
         randoms (mkStdGen 1) :: [Int]

The IO monad can be brought into this picture easily.
         return (randoms (mkStdGen 1)) :: IO [Int]

But it sounds as if you're looking for something more sophisticated.  You
want to use randomIO perhaps because it better matches your notion of how
random numbers should be generated.  Using randomIO places more
restrictions on how you operate, because it forces the random numbers to be
created in a particular sequence, in relation to any other IO which the
program performs.  Every random number that is ever accessed must be
produced at a particular point in the sequence.  An unbounded list of such
numbers cannot be returned!  That is, you are looking for
           randomsIO :: IO [a]
which yields a lazy list, by means of repeated calls to randomIO.  All such
calls would have to occur _before_ randomsIO returns, and before _any_ use
of the random numbers could be made.  The program hangs in the process of
making an infinite number of calls to randomIO.

But, you may say, those infinite effects are invisible unless part of the
list is referenced later in the program, so a truly lazy implementation
should be able to skip past that stuff in no time.  Well, that's
conceivable, but (1) that's making some assumptions about the implementation
of randomIO, and (2) lazy things with no side effects can and should be
handled outside of the IO monad.
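The loophole Scott sets aside — delaying each randomIO call until its list cell is demanded — is what unsafeInterleaveIO provides. A sketch using modern module names (in 2001 this lived in IOExts and Random):

```haskell
import System.Random (randomIO)
import System.IO.Unsafe (unsafeInterleaveIO)

-- An "infinite" list of IO-generated randoms that does not hang:
-- unsafeInterleaveIO defers each randomIO call until the
-- corresponding cons cell is forced.  Use with care: effects now
-- happen on demand, in whatever order evaluation dictates.
randomsIO :: IO [Int]
randomsIO = unsafeInterleaveIO $ do
  x  <- randomIO
  xs <- randomsIO
  return (x : xs)
```

This is precisely why Scott's points (1) and (2) matter: the trick works only by stepping outside the ordinary sequencing guarantees of the IO monad.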

>Also, should I be worried about having more than one IO monad - it
>seems odd encapsulating the "outside world" more than once.

No.  Consider the expression 
    sequence_ [print "1", print "two", print "III"]
Try executing it from the Hugs command line, and figure out the type of the
list.  An expression in the IO monad, such as 'print 1' makes contact with
the "outside world" when it executes, but does not take over the entire
outside world, even for the period of time that it's active.

I moved this to haskell-cafe mailing list, because it's getting a little
extended.

--
Scott Turner
p.turner@computer.org       http://www.billygoat.org/pkturner


From p.turner@computer.org Sat Feb 17 20:44:32 2001 Date: Sat, 17 Feb 2001 15:44:32 -0500 From: Scott Turner p.turner@computer.org Subject: [newbie] Lazy >>= ?!
Andrew Cooke wrote:
>2.  Why does the following break finite lists?  Wouldn't they just
>become lazy lists that evaluate to finite lists once map or length or
>whatever is applied?
>
>> Now, if this were changed to 
>>     ~(x:xs) >>= f = f x ++ (xs >>= f)
>> (a lazy pattern match) then your listList2 would work, but finite
>> lists would stop working.

They wouldn't just become lazy lists.  A "lazy" pattern match isn't about
removing unnecessary strictness.  It removes strictness that's necessary
for the program to function normally.  A normal pattern match involves
selecting among various patterns to find the one which matches; so it
evaluates the expression far enough to match patterns.  In the case of
         (x:xs)
it must evaluate the list sufficiently to know that it is not an empty
list.  A lazy pattern match gives up the ability to select which pattern
matches.  For the sake of less evaluation, it opens up the possibility of a
runtime error, when a reference to a named variable won't have anything to
bind to.

The list monad is most often used with complete finite lists, not just
their initial portions.  The lazy pattern match shown above breaks this
because as it operates on the list, it assumes that the list is non-empty,
which is not the case when the end of the list is reached.  A runtime error
is inevitable. 
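
The breakage can be reproduced with a standalone bind (a hypothetical `lazyBind`, not the Prelude's definition, which behaves like `concatMap`):

```haskell
-- A list-monad bind with a lazy pattern, as discussed above.
lazyBind :: [a] -> (a -> [b]) -> [b]
lazyBind ~(x:xs) f = f x ++ lazyBind xs f

-- Infinite lists now work with this bind:
sample :: [Int]
sample = take 5 (lazyBind [1 ..] (\x -> [x, x]))   -- [1,1,2,2,3]

-- But finite lists break: lazyBind [1,2] (\x -> [x]) eventually
-- matches the lazy pattern against [], and forcing x at that point
-- is a runtime error -- exactly the inevitability described above.
```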

--
Scott Turner
p.turner@computer.org       http://www.billygoat.org/pkturner


From dpt@math.harvard.edu Sat Feb 17 22:58:55 2001 Date: Sat, 17 Feb 2001 17:58:55 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Scalable and Continuous
On Fri, Feb 16, 2001 at 10:21:57PM -0600, Matt Harden wrote:
> Marcin 'Qrczak' Kowalczyk wrote:
> > Wed, 14 Feb 2001 23:27:55 -0600, Matt Harden <matth@ninenet.com> pisze:
> > > such defaults would only be legal if the superclass did not define
> > > a default for the same function.
> > 
> > Not necessarily. For example (^) in Num (of the revised Prelude)
> > has a default definition, but Fractional gives the opportunity to
> > have better (^) defined in terms of other methods. When a type is an
> > instance of Fractional, it should always have the Fractional's (^)
> > in practice. When not, Num's (^) is always appropriate.
> What happens if classes A and B are superclasses of C, all three
> define a default for function foo, and we have a type that's an instance
> of A and B, but not C, which doesn't override foo?  Which default do we
> use?  It's not only a problem for the compiler to figure out, it also
> quickly becomes confusing to the programmer.  

(Presumably you mean that A and B are subclasses of C, which contains
foo.)  I would make this an error, easily found by the compiler.
But I need to think more to come up with well-defined and uniform
semantics.

> .. I'd rather just make the
> simple rule of a single default per function.  If multiple "standard"
> definitions for a function make sense, then be explicit about which one
> you want for each type; i.e.:
> 
>    instance Fractional MyFraction where
>       (^) = fractionalPow

This is another option.  It has the advantage of being explicit and
allowing you to choose easily in cases of ambiguity.  It is more
conservative, but possibly less convenient.

Best,
	Dylan Thurston


From ham@cs.utexas.edu Sat Feb 17 22:29:56 2001 Date: Sat, 17 Feb 2001 16:29:56 -0600 From: Hamilton Richards ham@cs.utexas.edu Subject: Just for your fun and horror
At 21:24 -0600 2001-02-16, Matthias Felleisen wrote:
> ... Haskell could have benefited from using a word such
>as
>
> produce 10
>
>to say that a function produces a 10 and a store or whatever.

In my classes, I use the term "deliver". This is the first semester I've
gone as deeply into monads, so it's a bit early to say how well this
terminology works.

--HR



------------------------------------------------------------------
Hamilton Richards, PhD           Department of Computer Sciences
Senior Lecturer                  Mail Code C0500
512-471-9525                     The University of Texas at Austin
Taylor Hall 5.138                Austin, Texas 78712-1188
ham@cs.utexas.edu
------------------------------------------------------------------




From chak@cse.unsw.edu.au Sun Feb 18 03:50:16 2001 Date: Sun, 18 Feb 2001 14:50:16 +1100 From: Manuel M. T. Chakravarty chak@cse.unsw.edu.au Subject: Just for your fun and horror
Jon Fairbairn <jf15@hermes.cam.ac.uk> wrote,

> On Fri, 16 Feb 2001, Scott Turner wrote:
> > Just because the type system of C keeps store implicit, it doesn't
> > change the match between the meaning of 'return' in the two languages.
> 
> Or to put it another way, _all_ types in C are IO
> something.  I think from a didactic point of view making
> this observation could be very valuable.

I absolutely agree.  The Haskell

  foo :: IO Int
  foo  = return 42

and C

  int foo ()
  {
    return 42;
  }

are exactly the same.  It is

  bar = 42

for which C has no corresponding phrase.  So, it is a new
concept, which for the students - not surprisingly - is an
intellectual challenge.

In fact, I think, there is a second lesson in the whole
story, too: Syntax is just...well...syntax.  Students
knowing only one or possibly two related languages, often
cannot distinguish between syntax and semantics.  Breaking
their current, misguided model of programming languages is a
first step for them towards gaining a deeper understanding.

So, `return' is a feature, not a bug.  I guess, the remedy
for the course would be to provoke a discussion of the issue
of C's return versus Haskell's return before the exam.

Cheers,
Manuel


From ashley@semantic.org Sun Feb 18 03:59:32 2001 Date: Sat, 17 Feb 2001 19:59:32 -0800 From: Ashley Yakeley ashley@semantic.org Subject: Just for your fun and horror
At 2001-02-17 19:50, Manuel M. T. Chakravarty wrote:

>It is
>
>  bar = 42
>
>for which C has no corresponding phrase.

Hmm...

#define bar 42

...although I would always do

const int bar = 42


-- 
Ashley Yakeley, Seattle WA



From matth@ninenet.com Sun Feb 18 04:28:39 2001 Date: Sat, 17 Feb 2001 22:28:39 -0600 From: Matt Harden matth@ninenet.com Subject: Scalable and Continuous
Dylan Thurston wrote:
> 
> On Fri, Feb 16, 2001 at 10:21:57PM -0600, Matt Harden wrote:
> > Marcin 'Qrczak' Kowalczyk wrote:
> > > Wed, 14 Feb 2001 23:27:55 -0600, Matt Harden <matth@ninenet.com> pisze:
> > > > such defaults would only be legal if the superclass did not define
> > > > a default for the same function.
> > >
> > > Not necessarily. For example (^) in Num (of the revised Prelude)
> > > has a default definition, but Fractional gives the opportunity to
> > > have better (^) defined in terms of other methods. When a type is an
> > > instance of Fractional, it should always have the Fractional's (^)
> > > in practice. When not, Num's (^) is always appropriate.
> > What happens if classes A and B are superclasses of C, all three
> > define a default for function foo, and we have a type that's an instance
> > of A and B, but not C, which doesn't override foo?  Which default do we
> > use?  It's not only a problem for the compiler to figure out, it also
> > quickly becomes confusing to the programmer.
> 
> (Presumably you mean that A and B are subclasses of C, which contains
> foo.)  I would make this an error, easily found by the compiler.
> But I need to think more to come up with well-defined and uniform
> semantics.

No, I meant superclasses.  I was referring to the possible feature we
(Marcin and I) were discussing, which was the ability to create new
superclasses of existing classes.  If you are allowed to create
superclasses which are not referenced in the definition of the subclass,
then presumably you could create two classes A and B that contained foo
from C.  You would then have to be able to create a new subclass of both
of those classes, since C is already a subclass of both.  Then the
question becomes, if they both have a default for foo, who wins?

My contention was that the compiler should not allow a default for foo
in the superclass and the subclass because that would introduce
ambiguities.  I would now like to change my stance on that, and say that
defaults in the superclasses could be allowed, and in a class AB
subclassing both A and B, there would be no default for foo unless it
was defined in AB itself.  Also C would not inherit any default from A
or B, since it does not mention A or B in its definition.

If this feature of creating new superclasses were adopted, I would also
want a way to refer explicitly to default functions in a particular
class definition, so that one could say that foo in AB = foo from A.

BTW, I'm not saying this stuff is necessarily a good idea, just
exploring the possibility.

Matt Harden


From sebc@posse42.net Sun Feb 18 05:17:01 2001 Date: Sun, 18 Feb 2001 05:17:01 +0000 From: Sebastien Carlier sebc@posse42.net Subject: Just for your fun and horror
Manuel M. T. Chakravarty wrote:
> It is
> 
>   bar = 42
> 
> for which C has no corresponding phrase.

But it has:

  #define bar 42

Although then you get call by name, while Haskell provides call by need.
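
The sharing difference can be observed in Haskell itself (a sketch using GHC's Debug.Trace; the trace message fires only on the first demand for `bar`):

```haskell
import Debug.Trace (trace)

-- Call by need: `bar` is a shared thunk, evaluated at most once no
-- matter how many times it is used, so in GHC the trace message
-- appears a single time. A C macro, by contrast, would re-evaluate
-- its body at every use site.
bar :: Int
bar = trace "bar evaluated" 42

twice :: Int
twice = bar + bar
```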

Cheers,
Sebastien


From tom-haskell@moertel.com Sun Feb 18 06:53:41 2001 Date: Sun, 18 Feb 2001 01:53:41 -0500 From: Tom Moertel tom-haskell@moertel.com Subject: Literate Programming in Haskell?
In the Haskell community is there a generally accepted best way to
approach Literate Programming?  The language has support for literate
comments, but it seems that many common LP tools don't respect it.

For example, in order to convert some .lhs code into LaTeX via the noweb
LP tools, I had to write a preprocessor to convert the ">" code blocks
into something that noweb would respect.  (The preprocessor actually
does a bit more and, in conjunction with noweb, gives pretty good
results for little effort.  For an example, see:

    http://www.ellium.com/~thor/hangman/cheating-hangman.lhs
    http://www.ellium.com/~thor/hangman/cheating-hangman.pdf
)
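
Such a preprocessor is small enough to sketch (hypothetical code, not Tom's actual script; the chunk name `<<*>>=` is an assumption about the desired noweb output):

```haskell
-- Wrap runs of bird-track lines ("> ...") in noweb-style code
-- chunks; everything else passes through as documentation.
birdToNoweb :: String -> String
birdToNoweb = unlines . go . lines
  where
    go [] = []
    go ls@(l : _)
      | isCode l =
          let (code, rest) = span isCode ls
          in ("<<*>>=" : map (drop 2) code ++ ["@"]) ++ go rest
    go (l : ls) = l : go ls

    isCode ('>' : ' ' : _) = True
    isCode _               = False
```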

Yet somehow, I don't think that my homebrew approach is optimal.  Can
anybody recommend a particularly elegant LP setup for Haskell
programming?  Or if you have an approach that works well for you, would
you mind sharing it?

Cheers,
Tom


From chak@cse.unsw.edu.au Sun Feb 18 08:54:57 2001 Date: Sun, 18 Feb 2001 19:54:57 +1100 From: Manuel M. T. Chakravarty chak@cse.unsw.edu.au Subject: Just for your fun and horror
Ashley Yakeley <ashley@semantic.org> wrote,

> At 2001-02-17 19:50, Manuel M. T. Chakravarty wrote:
> 
> >It is
> >
> >  bar = 42
> >
> >for which C has no corresponding phrase.
> 
> Hmm...
> 
> #define bar 42

No - this doesn't work as 

  #define bar (printf ("Evil side effect"), 42)

is perfectly legal.  So, we have an implicit IO monad here,
too.  What is interesting, however, is that C does not
require `return' in all contexts itself.  Or in other words,
C's comma notation has an implicit `return' in the last
expression.

> ...although I would always do
> 
> const int bar = 42

That's a good one, however.  It in effect rules out side
effects by C's definition of constant expressions.

So, I guess, I have to extend my example to

  bar x = x + 42

Cheers,
Manuel


From gruenbacher-lists@geoinfo.tuwien.ac.at Sun Feb 18 10:00:31 2001 Date: Sun, 18 Feb 2001 11:00:31 +0100 (CET) From: Andreas Gruenbacher gruenbacher-lists@geoinfo.tuwien.ac.at Subject: Literate Programming in Haskell?
On Sun, 18 Feb 2001, Tom Moertel wrote:

> In the Haskell community is there a generally accepted best way to
> approach Literate Programming?  The language has support for literate
> comments, but it seems that many common LP tools don't respect it.

I'm also very interested in this, but ideally I would want the output to
be in some proportional font, with symbols like =>, ->, <- replaced with
arrows, etc. Also, it would be very nice to have the code automatically
column aligned (using heuristics).

I saw something that looks like this in Mark P. Jones's paper `Typing
Haskell in Haskell', but don't know how he did it.


Cheers,
Andreas.

------------------------------------------------------------------------
 Andreas Gruenbacher                  gruenbacher@geoinfo.tuwien.ac.at
 Research Assistant                       Phone      +43(1)58801-12723
 Institute for Geoinformation             Fax        +43(1)58801-12799
 Technical University of Vienna           Cell phone   +43(664)4064789



From andrew@andrewcooke.free-online.co.uk Sun Feb 18 11:54:42 2001 Date: Sun, 18 Feb 2001 11:54:42 +0000 From: andrew@andrewcooke.free-online.co.uk andrew@andrewcooke.free-online.co.uk Subject: [newbie] Lazy >>= ?!
Thanks to everyone for replying.  Things make more sense now (I've
re-read a chunk of the Haskell Companion that really hammers home the
difference between actions and monads).  Also, thanks for the pointer
to random numbers without IO - I'd actually written my own equivalent,
but will now drop it and use that.

Cheers,
Andrew

On Sat, Feb 17, 2001 at 03:27:31PM -0500, Scott Turner wrote:
> Andrew Cooke wrote:
> >1.  After digesting what you wrote I managed to make a lazy list of IO
> >monads containing random numbers, but couldn't make an IO monad that
> >contained a lazy list of random numbers.  Is this intentional, me
> >being stupid, or just chance?
> 
> I had wondered what kind of thing you were doing with the IO monad.  Random
> numbers are an odd fit.  Pseudorandom numbers can be generated in a lazy
> list easily; you don't need a connection with the IO monad to do it.  Using
> the Random module of the Hugs distribution, it's for example
>          randoms (mkStdGen 1) :: [Int]
> 
> The IO monad can be brought into this picture easily.
>          return (randoms (mkStdGen 1)) :: IO [Int]
> 
> But it sounds as if you're looking for something more sophisticated.  You
> want to use randomIO perhaps because it better matches your notion of how
> random numbers should be generated.  Using randomIO places more
> restrictions on how you operate, because it forces the random numbers to be
> created in a particular sequence, in relation to any other IO which the
> program performs.  Every random number that is ever accessed must be
> produced at a particular point in the sequence.  An unbounded list of such
> numbers cannot be returned!  That is, you are looking for
>            randomsIO :: IO [a]
> which yields a lazy list, by means of repeated calls to randomIO.  All such
> calls would have to occur _before_ randomsIO returns, and before _any_ use
> of the random numbers could be made.  The program hangs in the process of
> making an infinite number of calls to randomIO.
> 
> But, you may say, those infinite effects are invisible unless part of the
> list is referenced later in the program, so a truly lazy implementation
> should be able to skip past that stuff in no time.  Well, that's
> conceivable, but (1) that's making some assumptions about the implementation
> of randomIO, and (2) lazy things with no side effects can and should be
> handled outside of the IO monad.
> 
> >Also, should I be worried about having more than one IO monad - it
> >seems odd encapsulating the "outside world" more than once.
> 
> No.  Consider the expression 
>     sequence_ [print "1", print "two", print "III"]
> Try executing it from the Hugs command line, and figure out the type of the
> list.  An expression in the IO monad, such as 'print 1' makes contact with
> the "outside world" when it executes, but does not take over the entire
> outside world, even for the period of time that it's active.
> 
> I moved this to haskell-cafe mailing list, because it's getting a little
> extended.
> 
> --
> Scott Turner
> p.turner@computer.org       http://www.billygoat.org/pkturner
> 

-- 
http://www.andrewcooke.free-online.co.uk/index.html


From elke.kasimir@catmint.de Sun Feb 18 11:59:57 2001 Date: Sun, 18 Feb 2001 12:59:57 +0100 (CET) From: Elke Kasimir elke.kasimir@catmint.de Subject: framework for composing monads?
(Moving to haskell cafe...)

On 18-Feb-2001 Manuel M. T. Chakravarty wrote:
>> It is even acceptable for me to manage the state in C -
>> independent of the API design - but then some time there 
>> will be the question: Why do I always say that that Haskell 
>> is the better programming language, when I'm
>> really doing all the tricky stuff in C?...
> 
> Sure - therefore, I proposed to use `IORef's rather than C
> routines. 

Thanks for the hint! 

I took a look at them and now have some questions:

a) It is clear that I need some C link to access the cli/odbc lib.
Up to now I planned to use Haskell Direct for this. Apart from this, I want
to stick to Haskell 98 and aim for maximal portability.

Practically, this raises the question of whether nhc and hbc support hslibs,
or whether I can provide a substitute for IORefs for these compilers.

Can someone give me a hint?

b) What I finally need is "hidden state". My first attempt to get one 
using IORefs is:

> import IOExts
>
> state :: IORef Int
> state = unsafePerformIO $ newIORef 0
>
> main = seq state $ do
>                   writeIORef state 1
>                   currstate <- readIORef state
>                   putStr (show currstate)

Is this the right way?

Cheers,
Elke


---
"If you have nothing to say, don't do it here..."

Elke Kasimir
Skalitzer Str. 79
10997 Berlin (Germany)
fon:  +49 (030) 612 852 16
mail: elke.kasimir@catmint.de>  
see: <http://www.catmint.de/elke>

for pgp public key see:
<http://www.catmint.de/elke/pgp_signature.html>


From chak@cse.unsw.edu.au Mon Feb 19 02:58:09 2001 Date: Mon, 19 Feb 2001 13:58:09 +1100 From: Manuel M. T. Chakravarty chak@cse.unsw.edu.au Subject: framework for composing monads?
Elke Kasimir <elke.kasimir@catmint.de> wrote,

> (Moving to haskell cafe...)
> 
> On 18-Feb-2001 Manuel M. T. Chakravarty wrote:
> >> It is even acceptable for me to manage the state in C -
> >> independent of the API design - but then some time there 
> >> will be the question: Why do I always say that that Haskell 
> >> is the better programming language, when I'm
> >> really doing all the tricky stuff in C?...
> > 
> > Sure - therefore, I proposed to use `IORef's rather than C
> > routines. 
> 
> Thanks for the hint! 
> 
> I took a look at them and now have some questions:
> 
> a) It is clear that I need some C-link to access the cli/odbc lib.
> Up to now I planned to use Haskell Direct for this. Apart from this, I want
> to stick to Haskell 98 and aim for maximal portability.

I am all for portable code, too.

> Practically, this raises the question of whether nhc and hbc support hslibs,
> or whether I can provide a substitute for IORefs for these compilers.

nhc does support `IORef's (they come in the module
IOExtras).  I am not sure whether H/Direct works with nhc,
though.  Sigbjorn should be able to answer this.

> b) What I finally need is "hidden state". My first attempt to get one 
> using IORefs is:
> 
> > import IOExts
> >
> > state :: IORef Int
> > state = unsafePerformIO $ newIORef 0
> >
> > main = seq state $ do
> >                   writeIORef state 1
> >                   currstate <- readIORef state
> >                   putStr (show currstate)
> 
> Is this the right way?

Yes, except that you want to have

  {-# NOINLINE state #-}

too.  It wouldn't be nice if GHC chose to inline
`state', would it? ;-)
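
Putting the fragment and the pragma together (a sketch using today's module names Data.IORef and System.IO.Unsafe; the IOExts module of the time exported the same operations):

```haskell
import Data.IORef (IORef, newIORef, readIORef, writeIORef)
import System.IO.Unsafe (unsafePerformIO)

-- Hidden top-level state: the NOINLINE pragma keeps the compiler
-- from inlining (and thereby duplicating) the unsafePerformIO call,
-- which would silently create several independent IORefs.
{-# NOINLINE state #-}
state :: IORef Int
state = unsafePerformIO (newIORef 0)

demo :: IO Int
demo = do
  writeIORef state 1
  readIORef state
```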

Cheers,
Manuel


From karczma@info.unicaen.fr Mon Feb 19 11:08:28 2001 Date: Mon, 19 Feb 2001 11:08:28 +0000 From: Jerzy Karczmarczuk karczma@info.unicaen.fr Subject: Just for your fun and horror
Dear all, at haskell-café & plt-scheme.

1. THANK YOU VERY MUCH for enlightening comments about the
   terminology, student psychology, etc. I will get back to it
   in a second, for the moment I ask you very politely:

   survey the addressee list if you "reply-all". For people
   subscribing to both the Haskell and Scheme forums it means 4 copies
   of your message, if you also send the message simultaneously
   to the private address of the previous author...!
   I thought that a cross-posting might have some merits, and
   I see now the nuisance. My deep apologies.
  

2. People suggest that the word return has been badly chosen.
   I have no strong opinion, I begin to agree... we had unit,
   result, people propose liftM, compute, deliver, etc. 
   I wonder why return stuck?
   Just because it exists elsewhere? I believe not, it has some
   appeal, as Joe Fasel acknowledges.

C. Reinke writes:

> One way to look at the problem is that some of your students have 
> concrete experience with `return' in different contexts, and that
> Haskell tries to make different things look similar here. You say
> "we worked with monads for several weeks" but, you being yourself,
> this was probably at a fairly abstract and general level, right?

No, not exactly. Being myself, just the opposite.
There is *NO* more abstraction in my course than in Wadler's
"Essence...".
* I begin with a silly functional evaluator of a tree representing
  an arithmetic expression. 
* We recognize together with the students that a program may fail,
  and we introduce Maybe. They see thus a simple monadic
  generalisation and the first non-trivial instance of return. We try
  to implement (in a sketchy way) a tracing generalisation as well.

* They have in parallel a course on Prolog, so we play con mucho
  gusto with a few "non-deterministic" algorithms, such as standard
  combinatorial exercises: the generation of permutations, of the
  powerset, etc. On average the students seem to understand the
  idea and the implementation, and *mind you*: while writing their
  exercises <<a vista>> they duly corrected themselves when they
  were tempted to write "z" instead of "return z". ([] =-> [[]]).

* We worked for a reasonable period with monadic parsers. The 
  comment above is valid. Semantically they accepted the difference 
  between "z" and "return z". I couldn't foresee any surprises.

* They had to write a serious program in Haskell, so I gave them
  an introduction to Haskell I/O. They couldn't escape from
  *practical* Monads (although some of my students "perverted" 
  [with my approval] the idea of writing a *syntactic* converter 
  to Scheme, realizing it not in Haskell but in Scheme...)

I spoke of course about types, but not simultaneously. We took
advantage of the type inference, and the *type* of return has not
been discussed explicitly sufficiently early. This is - I believe -
my main, fundamental pedagogical fault! Yes Joe, I think this
has been my own <<pons asinorum>>.
If my compilation course survives all this affair (not obvious)
I will try to remember Jon Fairbairn's suggestion (repeated by
Manuel Chakravarty), and to discuss thoroughly the status of 
"C" imperative concepts, in order to prevent misunderstandings.

C. Reinke again:
> Unless you're one of Asimov's technicians of eternity, it is a bit
> difficult to change the history of programming languages, and assuming
> that the students pay for the opportunity to learn, you can't really
> fire them either.. 

Hm.
We are all Technicians able to change the past, but since
we do not live outside the System, we do it usually in the 
Orwellian way: we change the INTERPRETATION of the past. Things
which were good (structural top-down programming) become bad
(inadapted to object approach). Strong typing? A straitjacket for
some, a salvation for the other. Scheme'ists add OO layers in
order to facilitate the code reusing, and this smuggles in some
typing. Dynamic typing in static languages became a folkloric,
never-ending issue... The history of languages is full of
"second thoughts". Who will first write a paper with a Wadlerian
style [[but taken from earlier literature]] title:

"Monads considered harmful"
"Return should NOT return its argument"

etc.?

And in France students don't pay for the opportunity to learn.

Regards.
Jerzy Karczmarczuk
Caen, France


From simonpj@microsoft.com Mon Feb 19 09:52:45 2001 Date: Mon, 19 Feb 2001 01:52:45 -0800 From: Simon Peyton-Jones simonpj@microsoft.com Subject: FW: Announcing haskelldoc
> In the Haskell community is there a generally accepted best way to
> approach Literate Programming?  The language has support for literate
> comments, but it seems that many common LP tools don't respect it.

I don't know whether you'd regard this as literate programming,
but there's a move afoot to get a widely used Haskell documentation
tool (enclosed).

Simon

-----Original Message-----
From: Henrik Nilsson [mailto:nilsson@cs.yale.edu]
Sent: 05 February 2001 22:14
To: haskell@haskell.org
Subject: Announcing haskelldoc


Dear Haskellers,

At the recent Haskell Implementors' meeting in Cambridge, UK,
it was decided that it would be useful to have a standard for
embedded Haskell documentation. Such standards, and associated
tools for extracting and formatting the documentation in various ways,
exist for other languages like Java and Eiffel and have proven to
be very useful. Some such tools also exist (and are being actively
developed) for Haskell, but there is as yet no generally agreed upon
standard for the format of the embedded documentation as such.

To address this, a mailing list has been started with the aim of
defining a standard for embedded Haskell documentation, and
possibly also related standards which would facilitate the development
of various tools making use of such documentation (formatters, source
code browsers, search tools, etc.).

We feel that it is important to involve all who might be interested
in this work at an early stage, so that as many aspects as possible
can be taken into consideration, and so that the proposal for a
standard which hopefully will emerge has a reasonable chance of
gaining widespread support.

Thus, you are hereby cordially invited to join haskelldoc@haskell.org.
To join, just go to http://www.haskell.org/mailman/listinfo/haskelldoc.

Best regards,

Armin Groesslinger 	<groessli@fmi.uni-passau.de>
Simon Marlow		<simonmar@microsoft.com>
Henrik Nilsson		<nilsson@cs.yale.edu>
Jan Skibinski		<jans@numeric-quest.com>
Malcolm Wallace		<malcolm@abbess.demon.co.uk>

_______________________________________________
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


From bostjan.slivnik@fri.uni-lj.si Mon Feb 19 13:33:44 2001 Date: Mon, 19 Feb 2001 14:33:44 +0100 From: Bostjan Slivnik bostjan.slivnik@fri.uni-lj.si Subject: Literate Programming in Haskell?
> > In the Haskell community is there a generally accepted best way to
> > approach Literate Programming?  The language has support for literate
> > comments, but it seems that many common LP tools don't respect it.
> 
> I'm also very interested in this, but ideally I would want the output to
> be in some proportional font, with symbols like =>, ->, <- replaced with
> arrows, etc. Also, it would be very nice to have the code automatically
> column aligned (using heuristics).

So am I.  Is anybody willing to cooperate on the design of such a tool?

The solution based on the package `listings' is really nice
(especially because of its simplicity).  However, if different
proportional fonts are used for different lexical categories and the
indentation is preserved (as it should be in Haskell), the package
does not produce the best results.

> I saw something that looks like this in Mark P. Jones's paper `Typing
> Haskell in Haskell', but don't know how he did it.

Perhaps he used ``Haskell Style for LaTeX2e'' (written by Manuel
Chakravarty); just a guess.  Or did it manually.

Bo"stjan Slivnik


From patrikj@cs.chalmers.se Mon Feb 19 14:07:41 2001 Date: Mon, 19 Feb 2001 15:07:41 +0100 (MET) From: Patrik Jansson patrikj@cs.chalmers.se Subject: Literate Programming in Haskell?
On Mon, 19 Feb 2001, Bostjan Slivnik wrote:
>
> > I'm also very interested in this, but ideally I would want the output to
> > be in some proportional font, with symbols like =>, ->, <- replaced with
> > arrows, etc. Also, it would be very nice to have the code automatically
> > column aligned (using heuristics).
>
> So am I.  Is anybody willing to cooperate on the desing of such tool?

A tool I am using is Ralf Hinze's lhs2tex

  http://www.informatik.uni-bonn.de/~ralf/Literate.tar.gz

  http://www.informatik.uni-bonn.de/~ralf/Guide.ps.gz

It transforms .lhs files (with some formatting commands in LaTeX-style
comments) to LaTeX. Development based on this idea is something I would be
willing to participate in as I already have a fair amount of Haskell
code/articles (read: my PhD thesis;-) in this format.

Maybe Ralf can say something about his views on further development of
lhs2tex (copyright etc.) by other people (us?).

/Patrik Jansson

PS. I have made some small improvements to lhs2tex locally and I seem to
    remember that one or two of those were actually needed to get it to
    run with my ghc version.



From Malcolm.Wallace@cs.york.ac.uk Mon Feb 19 14:29:47 2001 Date: Mon, 19 Feb 2001 14:29:47 +0000 From: Malcolm Wallace Malcolm.Wallace@cs.york.ac.uk Subject: framework for composing monads?
Elke Kasimir writes:
> Practically, this raises the question of whether nhc and hbc support hslibs,
> or whether I can provide a substitute for IORefs for these compilers.

As Manuel reported, nhc98 has IORefs identical to ghc and Hugs, except
in module IOExtras.

For hbc, you have an equivalent interface in:

    module IOMutVar where
    data MutableVar a
    newVar   :: a -> IO (MutableVar a)
    readVar  :: MutableVar a -> IO a
    writeVar :: MutableVar a -> a -> IO a
    sameVar  :: MutableVar a -> MutableVar a -> Bool

Regards,
    Malcolm


From dpt@math.harvard.edu Mon Feb 19 21:05:01 2001 Date: Mon, 19 Feb 2001 16:05:01 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Typing units correctly
On Thu, Feb 15, 2001 at 07:18:14AM -0800, Andrew Kennedy wrote:
> First, I think there's been a misunderstanding. I was referring to 
> the poster ("Christoph Grein") ... but from 
> what I've seen your (Dylan's) posts are well-informed. Sorry if 
> there was any confusion.

It was easy to get confused, since I was quite clueless in the post in
question.  No big deal.

> As you suspect, negative exponents are necessary.

On a recent plane ride, I convinced myself that it is possible to
provide negative exponents along the same lines, although it's not very
elegant: addition seems to require 13 separate cases, depending on the
sign of each term, with the representation I picked.

There are other representations.  There is a binary representation,
similar to Chris Okasaki's in the square matrices paper.

> In fact, I have since solved the simplification problem mentioned 
> in my ESOP paper, and it would assign the second of these two 
> (equivalent) types, as it works from left to right in the type. I
> guess it does boil down to choosing a nice basis; more precisely
> it corresponds to the Hermite Normal Form from the theory of 
> integer matrices (more generally: modules over commutative rings).

Great.  I'll look it up.  I had run across similar problems in an
unrelated context recently.

> Which brings me to your last point: some more general system that
> subsumes the rather specific dimension/unit types system. There's
> been some nice work by Martin Sulzmann et al on constraint based
> systems which can express dimensions. ... To my taste, though,
> unless you want to express all sorts of other stuff in the type
> system, the equational-unification-based approach that I described
> in ESOP is simpler, even with the fix for let.

One point of view is that anything you can do inconveniently by hand,
as with the Peano integers example I posted, you ought to be able to
do conveniently with good language support.  I think you can do a lot
of these constraint-based systems using PeanoAdd; I may try
programming some.  Language support does have advantages here: type
signatures can often be simplified considerably, and can often be
shown to be inconsistent.

For instance,
   a <= b, a <= b+1
can be simplified to
  a <= b
while
  (PeanoLessEqual a b, PeanoLessEqual a (Succ b))
which means more or less the same thing, cannot be simplified to
  (PeanoLessEqual a b)
though probably a function could be written that converts between the
two; but I don't see how to make it polymorphic enough.
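
The encoding Dylan alludes to can be sketched with multi-parameter type classes (illustrative names following his post, not his actual code):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- Type-level Peano naturals and a "less than or equal" relation,
-- checked entirely by instance resolution.
data Zero
data Succ n

class PeanoLessEqual a b
instance PeanoLessEqual Zero b
instance PeanoLessEqual a b => PeanoLessEqual (Succ a) (Succ b)

-- A context like (PeanoLessEqual a b, PeanoLessEqual a (Succ b))
-- must be discharged constraint by constraint; the compiler has no
-- rule for noticing that the second follows from the first, which is
-- the missing simplification discussed above.
leq :: PeanoLessEqual a b => a -> b -> Bool
leq _ _ = True
```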

Your dimension types and Boolean algebra do add something really new
that cannot be simulated like this: type inference and principal
types.  I wonder how they can be incorporated into Haskell in some
reasonable and general way.  Is a single kind of "dimensions" the
right thing?  What if, e.g., I care about the distinction between
rational and integral exponents, or I want Z/2 torsion?  How do I
create a new dimension?  Is there some function that creates a
dimension from a string or some such?  What is its type?  Can I
prevent dimensions from unrelated parts of the program from
interfering?

Best,
	Dylan Thurston



From chak@cse.unsw.edu.au Mon Feb 19 13:26:08 2001 Date: Tue, 20 Feb 2001 00:26:08 +1100 From: Manuel M. T. Chakravarty chak@cse.unsw.edu.au Subject: Just for your fun and horror
Jon Cast <jcast@ou.edu> wrote,

> Manuel M. T. Chakravarty writes:
> > So, I guess, I have to extend my example to
> >
> >   bar x = x + 42
> >
> 
> I don't know if this counts, but gcc allows:
> 
> int bar(int x) __attribute__((const))
> {
> 	return(x + 42);
> }
> 
> which is the exact C analogue of the Haskell syntax.  

Sorry, but I would say that it doesn't count, as it is a
compiler-specific extension :-)  Nevertheless, a good
point. 

> The majority of `C
> functions', I believe, (and especially in well-written code) are intended to
> be true functions, not IO monads.  They modify the state for
> efficiency/ignorance reasons, not because of a conscious decision.

Yes and no.  I agree that they are often intended to be true
functions.  However, it is not only efficiency and ignorance
which forces side effects on the C programmer.  Restrictions
of the language like the lack of call-by-reference arguments
and (true) multi-valued returns force the use of pointers
upon the programmer.

Anyway, I don't want to do C bashing here - although, on
this list, I might get away with it ;-)

Cheers,
Manuel


From konsu@microsoft.com Tue Feb 20 02:07:17 2001 Date: Mon, 19 Feb 2001 18:07:17 -0800 From: Konst Sushenko konsu@microsoft.com Subject: newbie: running a state transformer in context of a state reader
hello,
 
i have a parser which is a state transformer monad, and i need to implement
a lookahead function, which applies a given parser but does not change the
parser state. so i wrote a function which reads the state, applies the
parser and restores the state (the State monad is derived from the paper
"Monadic parser combinators" by Hutton/Meijer):
 
 
type Parser a = State String Maybe a

lookahead  :: Parser a -> Parser a
lookahead p = do { s <- fetch
                 ; x <- p
                 ; set s
                 ; return x
                 }

now i am curious if it is possible to run the given parser (state
transformer) in a context of a state reader somehow, so as the state gets
preserved automatically. something that would let me omit the calls to fetch
and set methods.
 
i would appreciate any advice
 
thanks
konst



From erik@meijcrosoft.com Tue Feb 20 06:00:47 2001 Date: Mon, 19 Feb 2001 22:00:47 -0800 From: Erik Meijer erik@meijcrosoft.com Subject: Literate Programming in Haskell?
You also might take a look at Maarten Fokkinga's mira.sty
http://www.cse.ogi.edu/~mbs/src/textools/ and Mark Shields' abbrev.sty which
was derived from that http://www.cse.ogi.edu/~mbs/src/textools/.

Erik
----- Original Message -----
From: "Patrik Jansson" <patrikj@cs.chalmers.se>
To: <haskell-cafe@haskell.org>
Cc: <ralf@cs.uu.nl>
Sent: Monday, February 19, 2001 6:07 AM
Subject: Re: Literate Programming in Haskell?


> On Mon, 19 Feb 2001, Bostjan Slivnik wrote:
> >
> > > I'm also very interested in this, but ideally I would want the
> > > output to be in some proportional font, with symbols like =>, ->,
> > > <- replaced with arrows, etc. Also, it would be very nice to have
> > > the code automatically column aligned (using heuristics).
> >
> > So am I.  Is anybody willing to cooperate on the design of such a tool?
>
> A tool I am using is Ralf Hinze's lhs2tex
>
>   http://www.informatik.uni-bonn.de/~ralf/Literate.tar.gz
>
>   http://www.informatik.uni-bonn.de/~ralf/Guide.ps.gz
>
> It transforms .lhs files (with some formatting commands in LaTeX-style
> comments) to LaTeX. Development based on this idea is something I would be
> willing to participate in as I already have a fair amount of Haskell
> code/articles (read: my PhD thesis;-) in this format.
>
> Maybe Ralf can say something about his views on further development of
> lhs2tex (copyright etc.) by other people (us?).
>
> /Patrik Jansson
>
> PS. I have made some small improvements to lhs2tex locally and I seem to
>     remember that one or two of those were actually needed to get it to
>     run with my ghc version.
>
>
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe



From simonpj@microsoft.com Tue Feb 20 16:33:46 2001 Date: Tue, 20 Feb 2001 08:33:46 -0800 From: Simon Peyton-Jones simonpj@microsoft.com Subject: Primitive types and Prelude shenanigans
I don't mind doing this, but can someone first give a brief justification
about why it's a good idea, independent of the discussion that
has taken place on this list?  I'd like to add such an explanation
to the code.

Simon

| -----Original Message-----
| From: qrczak@knm.org.pl [mailto:qrczak@knm.org.pl]
| Sent: 16 February 2001 17:42
| To: haskell-cafe@haskell.org
| Subject: Re: Primitive types and Prelude shenanigans
| 
| 
| Thu, 15 Feb 2001 20:56:20 -0800, William Lee Irwin III 
| <wli@holomorphy.com> pisze:
| 
| > 	literal "5" gets mapped to (fromPositiveInteger 5)
| > 	literal "-9" gets mapped to (fromNonZeroInteger -9)
| 
| Note that when a discussed generic Prelude replacement
| framework is done, and ghc's rules are changed to expand -9 to
| negate (fromInteger 9) instead of fromInteger (-9), then you don't
| need uglification of the fromInteger function to be able to define
| types with only nonnegative numeric values. Just define your negate
| in an appropriate class, different from the fromInteger's class.
| 
| -- 
|  __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
|  \__/
|   ^^                      SYGNATURA ZASTEPCZA
| QRCZAK
| 
| 
| _______________________________________________
| Haskell-Cafe mailing list
| Haskell-Cafe@haskell.org
| http://www.haskell.org/mailman/listinfo/haskell-cafe
| 


From qrczak@knm.org.pl Tue Feb 20 17:07:28 2001 Date: 20 Feb 2001 17:07:28 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Primitive types and Prelude shenanigans
Tue, 20 Feb 2001 08:33:46 -0800, Simon Peyton-Jones <simonpj@microsoft.com> pisze:

> I don't mind doing this, but can someone first give a brief
> justification about why it's a good idea, independent of the
> discussion that has taken place on this list?

Suppose we build an alternative Prelude with a different numeric class
hierarchy, and decide that types for natural numbers should not have
'negate' defined, as it's obviously meaningless for them. We can put
'fromInteger' in some class and 'negate' in its subclass, and make
natural numbers an instance of only the former.

So -9 :: Natural should be a compile error. Negation is already
an error for all expressions other than literals when negate has
a wrong type for them; literals should not be an exception.

Negated literals are still treated in a special way in patterns,
but -9 in a pattern should expand to testing equality with
negate (fromInteger 9), not fromInteger (-9), to catch types
which intentionally don't have negate defined.
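[A minimal sketch of the class split described above, with hypothetical class and type names standing in for fromInteger/negate; this is not the actual Prelude replacement:]

```haskell
class FromLit a where
  fromLit :: Integer -> a       -- plays the role of fromInteger

class FromLit a => Neg a where
  neg :: a -> a                 -- plays the role of negate

newtype Natural = Natural Integer

instance FromLit Natural where
  fromLit n | n >= 0    = Natural n
            | otherwise = error "fromLit: negative Natural"

-- With -9 expanded to neg (fromLit 9), the expression
--   -9 :: Natural
-- is rejected at compile time, because Natural has no Neg instance.
```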

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Tue Feb 20 18:17:07 2001 Date: 20 Feb 2001 18:17:07 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: newbie: running a state transformer in context of a state reader
Mon, 19 Feb 2001 18:07:17 -0800, Konst Sushenko <konsu@microsoft.com> pisze:

> now i am curious if it is possible to run the given parser (state
> transformer) in a context of a state reader somehow, so as the state
> gets preserved automatically. something that would let me omit the
> calls to fetch and set methods.

It should be possible to do something like this:

lookahead:: Parser a -> Parser a
lookahead p = do { s <- fetch
                 ; lift (evalState p s)
                 }

where evalState :: Monad m => State s m a -> s -> m a
      lift      :: Monad m => m a -> State s m a
are functions which should be available or implementable in a monad
transformer framework. I don't have the Hutton/Meijer paper at hand,
so I don't know whether they provided them, or under which names. Such
functions are provided e.g. in the monad framework shipped with ghc (by
Andy Gill, inspired by Mark P Jones' paper "Functional Programming
with Overloading and Higher-Order Polymorphism").

This definition of lookahead uses a separate state transformer thread
instead of making changes in place and undoing them later. I don't
think that it could make sense to convert a state transformer to
a state reader by replacing its internals, because p does want to
transform the state locally; a value of type Parser a represents
a state transformation. The changes must be isolated from the main
parser, but they must happen in some context.
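[For reference, a minimal Haskell 98-style sketch of the pieces involved, assuming the ST representation used elsewhere in this thread; the names in the Hutton/Meijer paper may differ, and a modern compiler would also want Functor and Applicative instances:]

```haskell
newtype State s m a = ST { unST :: s -> m (a, s) }

instance Monad m => Monad (State s m) where
  return a = ST (\s -> return (a, s))
  m >>= k  = ST (\s -> unST m s >>= \(a, s') -> unST (k a) s')

fetch :: Monad m => State s m s
fetch = ST (\s -> return (s, s))

set :: Monad m => s -> State s m ()
set s' = ST (\_ -> return ((), s'))

-- Run a transformer on a given state, discarding the final state:
evalState :: Monad m => State s m a -> s -> m a
evalState m s = unST m s >>= \(a, _) -> return a

-- Promote a computation in the underlying monad:
lift :: Monad m => m a -> State s m a
lift m = ST (\s -> m >>= \a -> return (a, s))

lookahead :: Monad m => State s m a -> State s m a
lookahead p = do s <- fetch
                 lift (evalState p s)  -- p runs in its own state thread
```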

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From jhf@lanl.gov Tue Feb 20 18:40:24 2001 Date: Tue, 20 Feb 2001 11:40:24 -0700 (MST) From: Joe Fasel jhf@lanl.gov Subject: Just for your fun and horror
Despite my arguments (and Jon's and others') about the appropriateness
of "return", I must confess that Ham's "deliver" is excellent
terminology.

--Joe

Joseph H. Fasel, Ph.D.              email: jhf@lanl.gov
Technology Modeling and Analysis    phone: +1 505 667 7158
University of California            fax:   +1 505 667 2960
Los Alamos National Laboratory      post:  TSA-7 MS F609; Los Alamos, NM 87545


From theo@engr.mun.ca Tue Feb 20 21:19:54 2001 Date: Tue, 20 Feb 2001 17:49:54 -0330 From: Theodore Norvell theo@engr.mun.ca Subject: Just for your fun and horror
Some comments in this discussion have said that "return"
is a good name for "return" since it is analogous
to "return" in C (and C++ and fortran and java etc.).
At first I thought "good point", but after
thinking about it a bit, I'd like to argue the contrary.

"return" in Haskell is not analogous to "return" in C.

Obviously the analogy, if there is one, is strongest in
Monads that model imperative effects, like a state monad
or the IO monad, so in the following, I'll assume the
monads involved are of this nature.

First consider this C subroutine:
	int f() {
		const int i = g() ;
		return h(i) ;  // where h may have side effects.
	}
The "return" here serves to indicate the value returned by the
subroutine. In Haskell's do syntax, there is no need for
this sort of "return" because the value delivered by a "do" is the
value delivered by its syntactically last computation, i.e. "return"
in the C sense is entirely implicit.
Haskell:
        f = do i <- g()
               h(i)          -- no return
If "return" in Haskell were analogous to "return" in C, we
could write
        f = do i <- g()
               return h(i)
Indeed you can write that (and I sometimes do!), but the
meaning is different; in C executing the "return" executes the
side effects of h(i); in Haskell "executing" the "return" returns
h(i) with its side effects unexecuted.

Now consider Haskell's "return".  In Haskell, for the relevant monads,
"return" promotes a "pure" expression to an "impure" expression
with a side effect; of course the side effect is trivial, but it
is still there.

In C, as others have pointed out, there is, from a language definition
point of view, no distinction between pure and impure expressions,
so there is no need for such a promotion operator; and C does not
have one.  Consider Haskell:
        m = do j <- return e
               n(j)

C:
	void m() {
		const int j = e ;   /* No return */
		n(j) ; }
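[The triviality of the side effect introduced by Haskell's "return" is exactly what the monad left-identity law states: return e >>= k equals k e. So the binding through "return" above can always be eliminated, as in this small IO instance of the same pattern:]

```haskell
-- m1 and m2 are equal by the left-identity law:
m1, m2 :: IO ()
m1 = do j <- return 42
        print j
m2 = print 42
```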

In C, but not Haskell, "return" has important implications
for flow of control.  Consider C:
        int w() {
            while(a) {
                if( b ) return c ;
                d ;
            }
        }
In Haskell there is no close equivalent (which I say is a good thing).  The closest
analogue is to throw an exception, which is in a sense the opposite of a "return".
If you wrote a denotational semantics of C, you'd find that the denotation
of "return" is very similar to the denotation of "throw".  The implementation
of return is easier, since it is statically nested in its "handler", but this
distinction probably won't show up in the semantics.

In Haskell "return" has no implications for flow of control. Consider Haskell
        x = do y
               return e
               z
In C we have:
        void x() {
            y() ;
            e ;           // semicolon, no return
            z() ; }
The above code is silly, but the point is that if C's "return" were
analogous, we could write
        int x() {
            y() ;
            return e ;
            z() ; }
which is not analogous to the Haskell. This example also shows
another type distinction, since in the Haskell version the type
of e can be anything, yet in the second C version the type of e must
be the same as the return type of x().

In short, "return" in C introduces an important side effect
(returning from the function) whereas "return" in any Haskell
monad should introduce only a trivial (identity) side effect.

It could be argued that there is a loose analogy in that
"return" in Haskell converts an expression to a command,
just as "return" in C converts an expression to a command. But
in C, putting a semicolon after an expression also converts
an expression to a command, and as the last example shows,
this is a better analogue since, unlike "return", there are
no additional nontrivial effects introduced.

In summary: (0) There is no analogue, in C, to Haskell's return
because there is no analogue to Haskell's type distinction
between expressions without side effects (pure expressions)
and expressions with side effects. (1) The main point of
"return" in C is to introduce a nontrivial side effect,
and the rule in Haskell is that "return" introduces the
trivial side effect.  The analogue of C's "return" can instead
be built on top of an exception model.
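[As a sketch of that last point: C-style early return can be recovered from an exception-like monad, here built on Either. The names are hypothetical and this is an illustration only, written in Haskell 98 style; a modern compiler would also want Functor and Applicative instances:]

```haskell
newtype Early r a = Early (Either r a)

instance Monad (Early r) where
  return                = Early . Right
  Early (Left r)  >>= _ = Early (Left r)  -- an exit skips the rest, like throw
  Early (Right a) >>= k = k a

exit :: r -> Early r a         -- the analogue of C's return
exit = Early . Left

runEarly :: Early r r -> r     -- the statically enclosing "handler"
runEarly (Early (Left r))  = r
runEarly (Early (Right r)) = r

-- Analogue of the C loop w above: stop at the first element satisfying b.
-- e.g. w (> 10) id 0 [1, 2, 50, 3]  evaluates to 50
w :: (a -> Bool) -> (a -> r) -> r -> [a] -> r
w b c z xs = runEarly (mapM_ step xs >> return z)
  where step x | b x       = exit (c x)
               | otherwise = return ()
```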

I'm not saying that future Haskells shouldn't call "return"
"return", or that "return" is not a good name for "return",
just that the analogy does not hold up.

Cheers,
Theo Norvell

----------------------------
Dr. Theodore Norvell                                    theo@engr.mun.ca
Electrical and Computer Engineering         http://www.engr.mun.ca/~theo
Engineering and Applied Science                    Phone: (709) 737-8962
Memorial University of Newfoundland                  Fax: (709) 737-4042
St. John's, NF, Canada, A1B 3X5


From konsu@microsoft.com Wed Feb 21 01:52:33 2001 Date: Tue, 20 Feb 2001 17:52:33 -0800 From: Konst Sushenko konsu@microsoft.com Subject: newbie: running a state transformer in context of a state rea der
Marcin,

thanks for your help.

to implement the lift functionality i added these well
known definitions:


class (Monad m, Monad (t m)) => TransMonad t m where
    lift               :: m a -> t m a

instance (Monad m, Monad (State s m)) => TransMonad (State s) m where
    lift m              = ST (\s -> m >>= (\a -> return (a,s)))



but my lookahead function

lookahead p = do { s <- fetch
                 ; lift (evalState p s)
                 }

is typed as

lookahead :: State MyState Maybe a -> State MyState Maybe (a,MyState)

but i need

lookahead :: State MyState Maybe a -> State MyState Maybe a

apparently, the (>>=) and return used in the definition of lift above are
for the monad (State s m), and not monad m...

everything works if i do not use the TransMonad class, but define lift
manually as:

lift :: Parser a -> Parser a
lift m = ST (\s -> unST m s >>= (\(a,_) -> return (a,s)))

but this looks like a special case of the lift above, except that the
right-hand side of 'bind' is executed in the right context.

i am still missing something

konst


-----Original Message-----
From: Marcin 'Qrczak' Kowalczyk [mailto:qrczak@knm.org.pl]
Sent: Tuesday, February 20, 2001 10:17 AM
To: haskell-cafe@haskell.org
Subject: Re: newbie: running a state transformer in context of a state
reader


Mon, 19 Feb 2001 18:07:17 -0800, Konst Sushenko <konsu@microsoft.com> pisze:

> now i am curious if it is possible to run the given parser (state
> transformer) in a context of a state reader somehow, so as the state
> gets preserved automatically. something that would let me omit the
> calls to fetch and set methods.

It should be possible to do something like this:

lookahead:: Parser a -> Parser a
lookahead p = do { s <- fetch
                 ; lift (evalState p s)
                 }

where evalState :: Monad m => State s m a -> s -> m a
      lift      :: Monad m => m a -> State s m a
are functions which should be available or implementable in a monad
transformer framework. I don't have the Hutton/Meijer's paper at hand
so I don't know if they provided them and under which names. Such
functions are provided e.g. in the framework provided with ghc (by
Andy Gill, inspired by Mark P Jones' paper "Functional Programming
with Overloading and Higher-Order Polymorphism").

This definition of lookahead uses a separate state transformer thread
instead of making changes in place and undoing them later. I don't
think that it could make sense to convert a state transformer to
a state reader by replacing its internals, because p does want to
transform the state locally; a value of type Parser a represents
a state transformation. The changes must be isolated from the main
parser, but they must happen in some context.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTEPCZA
QRCZAK


_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


From fjh@cs.mu.oz.au Wed Feb 21 01:55:37 2001 Date: Wed, 21 Feb 2001 12:55:37 +1100 From: Fergus Henderson fjh@cs.mu.oz.au Subject: Primitive types and Prelude shenanigans
On 20-Feb-2001, Simon Peyton-Jones <simonpj@microsoft.com> wrote:
> I don't mind doing this, but can someone first give a brief justification
> about why it's a good idea, independent of the discussion that
> has taken place on this list?  I'd like to add such an explanation
> to the code.

How about "Because the Haskell 98 Report says so"? ;-)

It's a pity there's no Haskell 98 Rationale, like the Ada 95
Rationale... if there was, then the documentation in the ghc code
could just point at it.

----------

There is however one issue with this change that concerns me.
I'm wondering about what happens with the most negative Int.
E.g. assuming 32-bit Int (as in Hugs and ghc), what happens with
the following code?

	minint :: Int
	minint = -2147483648

I think the rules in the Haskell report mean that you need to write
that example as e.g.

	minint :: Int
	minint = -2147483647 - 1

ghc currently allows the original version, since it treats negative
literals directly, rather than in the manner specified in the Haskell report.
ghc also allows `(negate (fromInteger 2147483648)) :: Int', apparently
because ghc's `fromInteger' for Int just extracts the bottom bits (?),
so changing ghc to respect the Haskell report's treatment of negative
literals won't affect this code.

But the code does not work in Hugs, because Hugs follows the Haskell
report's treatment of negative literals, and the `fromInteger' in Hugs
does bounds checking -- Hugs throws an exception from `fromInteger'.

The documentation in the Haskell report does not say what
`fromInteger' should do for `Int', but the Hugs behaviour definitely
seems preferable, IMHO.  However, this leads to the unfortunate
complication described above when writing a literal for the most
negative Int.

Of course using `minBound' is a much nicer way of finding out the
minimum integer, at least in hand-written code.  But this issue might
be a potential pitfall for programs that automatically generate
Haskell code.

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.


From qrczak@knm.org.pl Wed Feb 21 07:04:02 2001 Date: 21 Feb 2001 07:04:02 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Primitive types and Prelude shenanigans
Wed, 21 Feb 2001 12:55:37 +1100, Fergus Henderson <fjh@cs.mu.oz.au> pisze:

> The documentation in the Haskell report does not say what
> `fromInteger' should do for `Int', but the Hugs behaviour definitely
> seems preferable, IMHO.

Sometimes yes. But for playing with Word8, Int8, CChar etc. it's
sometimes needed to just cast bits without overflow checking, to
convert between "signed bytes" and "unsigned bytes".

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Wed Feb 21 07:00:39 2001 Date: 21 Feb 2001 07:00:39 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: newbie: running a state transformer in context of a state rea der
Tue, 20 Feb 2001 17:52:33 -0800, Konst Sushenko <konsu@microsoft.com> pisze:

> lookahead p = do { s <- fetch
>                  ; lift (evalState p s)
>                  }
> 
> is typed as
> 
> lookahead:: State MyState Maybe a -> State MyState Maybe (a,MyState)
> 
> but i need
> 
> lookahead:: State MyState Maybe a -> State MyState Maybe a

myEvalState = liftM fst yourEvalState

Andy Gill's monadic modules provide evalState as a wrapper for runState,
which throws away the state component returned.
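[Spelled out against the ST representation used earlier in this thread (a sketch; names as in the previous messages):]

```haskell
newtype State s m a = ST { unST :: s -> m (a, s) }

-- runState returns both the result and the final state...
runState :: State s m a -> s -> m (a, s)
runState = unST

-- ...while evalState keeps only the result component:
evalState :: Monad m => State s m a -> s -> m a
evalState m s = runState m s >>= return . fst
```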

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From fjh@cs.mu.oz.au Wed Feb 21 12:05:41 2001 Date: Wed, 21 Feb 2001 23:05:41 +1100 From: Fergus Henderson fjh@cs.mu.oz.au Subject: Primitive types and Prelude shenanigans
On 21-Feb-2001, Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> wrote:
> Wed, 21 Feb 2001 12:55:37 +1100, Fergus Henderson <fjh@cs.mu.oz.au> pisze:
> 
> > The documentation in the Haskell report does not say what
> > `fromInteger' should do for `Int', but the Hugs behaviour definitely
> > seems preferable, IMHO.
> 
> Sometimes yes. But for playing with Word8, Int8, CChar etc. it's
> sometimes needed to just cast bits without overflow checking, to
> convert between "signed bytes" and "unsigned bytes".

Both are desirable in different situations.  But if you want to ignore
overflow, you should have to say so explicitly.  `fromInteger' is
implicitly applied to literals, and implicit truncation is dangerous,
so `fromInteger' should not truncate.

There should be a different function for conversions that silently
truncate.  You can implement such a function yourself, of course,
e.g. as follows:

	trunc :: (Bounded a, Integral a) => Integer -> a
	trunc x = res
	   where min, max, size, modulus, result :: Integer
		 min = toInteger (minBound `asTypeOf` res)
		 max = toInteger (maxBound `asTypeOf` res)
		 size = max - min + 1
		 modulus = x `mod` size
		 result = if modulus > max then modulus - size else modulus
		 res = fromInteger result

But it is probably worth including something like this in the standard
library, perhaps as a type class method.

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
                                    |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.


From tweed@compsci.bristol.ac.uk Wed Feb 21 16:29:32 2001 Date: Wed, 21 Feb 2001 16:29:32 +0000 (GMT) From: D. Tweed tweed@compsci.bristol.ac.uk Subject: Inferring from context declarations
George Russell wrote:
> 
> (3) Simon Peyton Jones' comments about dictionary passing are a red herring,
>     since they assume a particular form of compiler.  Various (MLj, MLton)
>     ML compilers already inline out all polymorphism. Some C++ compilers/linkers
>     do it in a rather crude way as well, for templates.  If you can do it,
>     you can forget about dictionary passing.

[Standard disclaimer: I write prototype code that's never `finished' to
ever-changing specs in a university environment; other people probably
view things differently.]

I'm not sure I'd agree about this. Note that there's two levels, inlining
polymorphic functions at the call site and `instantiating polymorphic
functions at each usage type' without doing the inlining. C++ compilers
have to at least do the second because of the prevailing philosophy of
what templates are (i.e., that they're safer function-macros). Some of the
time this is what's wanted, but sometimes it imposes annoying compilation
issues (the source code of the polymorphic function has to be available
every time you want to use the function on a new class, even if it's not
time critical, which isn't the case for Haskell). I also often
write/generate very large polymorphic functions that in an ideal world
(where compilers can do _serious, serious_ magic) I'd prefer to work
using something similar to a dictionary passing implementation. I'd argue
that keeping flexibility about polymorphic function implementation (which
assumes some default but can be overridden by the programmer) in Haskell
compilers is a Good Thing.

Given that, unless computing hardware really revolutionises, the
`speed/memory' profile of today's desktop PC is going to recur in wearable
computers/PDAs/etc I believe that in 20 years time we'll still be figuring
out the same trade-offs, and so need to keep flexibility.

___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law:  however many computers
email: tweed@cs.bris.ac.uk     |    you have, half your time is spent
work tel: (0117) 954-5250      |    waiting for compilations to finish.




From ger@tzi.de Wed Feb 21 16:56:01 2001 Date: Wed, 21 Feb 2001 17:56:01 +0100 From: George Russell ger@tzi.de Subject: Inferring from context declarations
Hmm, this throwaway comment is getting interesting.  But please cc any replies to
me as I don't normally subscribe to haskell-cafe . . .

"D. Tweed" wrote:
[snip]
> Some of the
> time this is what's wanted, but sometimes it imposes annoying compilation
> issues (the source code of the polymorphic function has to be available
> everytime you want to use the function on a new class, even if its not
> time critical, which isn't the case for Haskell). 
You don't need the original source code, but some pickled form of it,
like that GHC already outputs to .hi files when you ask it to inline
functions.
> I also often
> write/generate very large polymorphic functions that in an ideal world
> (where compilers are can do _serious, serious_ magic) I'd prefer to work
> using something similar to a dictionary passing implementation.
Why then?  If it's memory size, consider that the really important thing
is not how much you need in virtual memory, but how much you need in the
various caches.  Inlining will only use more cache if you are using two
different applications of the same large polymorphic function at approximately
the same time.  Certainly possible, and like all changes you will be able to
construct examples where inlining polymorphism will result in slower execution
time, but after my experience with MLj I find it hard to believe that it is
not a good idea in general.
> I'd argue
> that keeping flexibility about polymorphic function implementation (which
> assumes some default but can be overridden by the programmer) in Haskell
> compilers is a Good Thing.
I'm certainly not in favour of decreeing that Haskell compilers MUST inline
polymorphism.
> 
> Given that, unless computing hardware really revolutionises, the
> `speed/memory' profile of today's desktop PC is going to recur in wearable
> computers/PDAs/etc I believe that in 20 years time we'll still be figuring
> out the same trade-offs, and so need to keep flexibility.
Extrapolating from the last few decades I predict that
(1) memory will get much much bigger.
(2) CPU times will get faster.
(3) memory access times will get faster, but the ratio of memory access time/CPU processing time
    will continue to increase.
The consequence of the last point is that parallelism and pipelining are going to become
more and more important.  Already the amount of logic required by a Pentium to try to
execute several operations at once is simply incredible, but it only works if you have
comparatively long stretches of code where the processor can guess what is going to happen.
You are basically stuffed if every three instructions the code executes a jump to a location
the processor can't foresee.  Thus if you compile Haskell like you do today, the processor
will be spending about 10% of its time actually processing, and the other 90% waiting on
memory.  If Haskell compilers are to take much advantage of processor speeds, I don't see
any solution but to inline more and more.


From Tom.Pledger@peace.com Wed Feb 21 20:23:03 2001 Date: Thu, 22 Feb 2001 09:23:03 +1300 From: Tom Pledger Tom.Pledger@peace.com Subject: making a Set
(moved to haskell-cafe)

G Murali writes:
 | hi there,
 | 
 | I'm trying to get my concepts right here.. can you please help in
 | defining a function like
 | 
 | makeSet :: (a->Bool)->Set a
 | 
 | I understand that we need a new type Set like
 | data Set a = Set (a->Bool).  What puzzles me is how to apply the function
 | to all elements belonging to type a.

What other operations do you need to implement for "Set a"?  Is there
anything that can't be expressed in terms of those set membership
functions you already have?
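For illustration (this sketch is not from the original thread; the operation names are invented here), the membership-function representation supports the usual set algebra directly, since every operation can be expressed via membership:

```haskell
-- A set represented by its characteristic (membership) function.
-- A hedged sketch; names other than makeSet are not from the thread.
newtype Set a = Set (a -> Bool)

makeSet :: (a -> Bool) -> Set a
makeSet = Set

member :: Set a -> a -> Bool
member (Set p) = p

union :: Set a -> Set a -> Set a
union (Set p) (Set q) = Set (\x -> p x || q x)

intersection :: Set a -> Set a -> Set a
intersection (Set p) (Set q) = Set (\x -> p x && q x)
```

Note that enumerating the elements is exactly what this representation cannot offer, which is the point of the question above: everything must go through the membership function.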


From ketil@ii.uib.no Thu Feb 22 10:15:39 2001 Date: 22 Feb 2001 11:15:39 +0100 From: Ketil Malde ketil@ii.uib.no Subject: Primitive types and Prelude shenanigans
qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) writes:

> 'negate' defined, as it's obviously meaningless for them. We can put
> 'fromInteger' in some class and 'negate' in its subclass, and make
> only the former instance for natural numbers.

Nitpick: not necessarily its subclass, either.  We can probably
imagine types where negate makes sense, but fromInteger does not, as
well as vice versa.
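A hedged sketch of such an uncoupling (all class, method and type names here are invented for illustration; this is not any actual Prelude proposal):

```haskell
-- Hypothetical split: neither class is a superclass of the other.
class HasFromInteger a where
  fromInteger' :: Integer -> a

class HasNegate a where
  negate' :: a -> a

-- Natural numbers: fromInteger' makes sense, negate' does not.
newtype Nat = Nat Integer deriving (Eq, Show)

instance HasFromInteger Nat where
  fromInteger' n
    | n >= 0    = Nat n
    | otherwise = error "Nat: negative literal"

-- And vice versa: a 2D vector has a sensible negate' but no
-- sensible fromInteger'.
newtype V2 = V2 (Double, Double) deriving (Eq, Show)

instance HasNegate V2 where
  negate' (V2 (x, y)) = V2 (negate x, negate y)
```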

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants


From tweed@compsci.bristol.ac.uk Thu Feb 22 13:28:50 2001 Date: Thu, 22 Feb 2001 13:28:50 +0000 (GMT) From: D. Tweed tweed@compsci.bristol.ac.uk Subject: Inferring from context declarations
On Wed, 21 Feb 2001, George Russell wrote:

> Hmm, this throwaway comment is getting interesting.  But please cc any replies to
> me as I don't normally subscribe to haskell-cafe . . .

To be honest, I suspect I was talking complete & unadulterated
rubbish. (Not that that's unusual.)
___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law:  however many computers
email: tweed@cs.bris.ac.uk     |    you have, half your time is spent
work tel: (0117) 954-5250      |    waiting for compilations to finish.



From lars@prover.com Fri Feb 23 09:19:23 2001 Date: Fri, 23 Feb 2001 10:19:23 +0100 (CET) From: Lars Lundgren lars@prover.com Subject: unliftM
On 23 Feb 2001, Julian Assange wrote:

> 
> Is there a standard construct for something of this ilk:
> 
> unliftM :: Monad m => m a -> a
> 

I do not know if it is a standard, but every monad usually has a
"runMonad" function. For ST you have runST, for IO you have
unsafePerformIO and for your own monad you need to define it.

Note that if you use unsafePerformIO, the action must not have
(important) side effects. The burden of proof is on YOU.

/Lars L



From promocionesdosmiluno@yahoo.es Sun Feb 25 18:23:46 2001 Date: Sun, 25 Feb 2001 13:23:46 -0500 (EST) From: promocionesdosmiluno@yahoo.es promocionesdosmiluno@yahoo.es Subject: Lo mejor de Internet aquí
Visit this website:

http://MundoEspanya.redireccion.com


From lars@prover.com Mon Feb 26 15:40:50 2001 Date: Mon, 26 Feb 2001 16:40:50 +0100 (CET) From: Lars Lundgren lars@prover.com Subject: Tree handling
On Mon, 26 Feb 2001, Martin Gustafsson wrote:

> Hello
> 
> I'm a haskell newbie who is trying to create a tree with an arbitrary number of children.
> I created the data structure but I can't do anything with it. Can someone please help
> me with a small function that sums the values of the leaves, so I don't lose my hair
> so fast.
> 
> The datastructure looks like this, and a binary tree built with it would look like this:
> 
> 
> data GeneralTree = Nil | Node (Integer,[GeneralTree])
> 

As you said you were a newbie I will ask a few questions about your
datastructure.

Do you know that there is no need to tuple the elements in the Node if you
do not want to. You can write:

data GeneralTree = Nil | Node Integer [GeneralTree]


What is the intended difference between (Node 5 []) and (Node 5 [Nil]) ?


> 
> tree =
>   (20,
>    [
>     (-20,[(30,[Nil]),(20,[Nil])]),
>     (40,[(65,[Nil]),(-40,[Nil])])
>    ]
>   )

This is not of type GeneralTree! (And its layout is messed up)

Hint: write the type of every expression you write, and debugging will be
much easier.

tree :: GeneralTree

ERROR tree.hs:8 - Type error in explicitly typed binding
*** Term           : tree
*** Type           : (a,[(b,[(c,[GeneralTree])])])
*** Does not match : GeneralTree

This is an expression with type GeneralTree:

tree :: GeneralTree
tree = Node 20 [Node (-20) [Node 30 [Nil], Node 20 [Nil]],
                Node 40    [Node 65 [Nil], Node (-40) [Nil]]]

Now it should be very easy to write a function to sum the nodes in a tree:

sumTree :: GeneralTree -> Integer
sumTree Nil = 0
sumTree (Node n ts) = ... write this yourself

Hint: sum and map are very useful functions (defined in the prelude),
as is recursion.
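For reference, one way to complete the exercise above (a spoiler for the "write this yourself" part, following exactly the hint about sum, map and recursion):

```haskell
data GeneralTree = Nil | Node Integer [GeneralTree]

-- Sum a node's value with the sums of all its subtrees.
sumTree :: GeneralTree -> Integer
sumTree Nil         = 0
sumTree (Node n ts) = n + sum (map sumTree ts)
```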

Good luck!

/Lars L





From p.turner@computer.org Mon Feb 26 14:38:36 2001 Date: Mon, 26 Feb 2001 09:38:36 -0500 From: Scott Turner p.turner@computer.org Subject: stack overflow
At 01:26 2001-02-26 -0800, Simon Peyton-Jones wrote:
>And so on.  So we build up a giant chain of thunks.
>Finally we evaluate the giant chain, and that builds up
>a giant stack.
> ...
>If GHC were to inline foldl more vigorously, this would [not] happen.

I'd hate to have my programs rely on implementation-dependent optimizations.

BTW, I've wondered why the Prelude provides foldl, which commonly leads to
this trap, and does not provide the strict variant foldl', which is useful
enough that it's defined internal to the Hugs prelude.  Simple prejudice
against strictness?
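For reference, the strict variant in question can be defined as follows (a sketch in the style of the definition internal to the Hugs prelude; the point is that the accumulator is forced at every step, so no chain of thunks builds up):

```haskell
import Prelude hiding (foldl')

-- Strict left fold: force the new accumulator before recursing.
foldl' :: (a -> b -> a) -> a -> [b] -> a
foldl' _ z []     = z
foldl' f z (x:xs) = let z' = f z x in z' `seq` foldl' f z' xs
```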

--
Scott Turner
p.turner@computer.org       http://www.billygoat.org/pkturner


From konsu@microsoft.com Mon Feb 26 21:07:51 2001 Date: Mon, 26 Feb 2001 13:07:51 -0800 From: Konst Sushenko konsu@microsoft.com Subject: examples using built-in state monad
hello,
 
in my program i used my own parameterised state transformer monad, which is
well described in literature:
 
newtype State s m a     = ST (s -> m (a,s))

ghc and hugs contain built in implementation of state monad ST.
 
is it the same thing? the documentation is not clear on that.
 
if it is the same, is it faster?
 
also, could someone please recommend any samples that use the built in ST
monad?
 
thanks
konst
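For comparison, the parameterised state transformer quoted above can be fleshed out roughly as follows (a sketch in modern style, where Functor and Applicative instances are also required; the field name runStateT is invented here for convenience, and none of this is GHC's built-in ST):

```haskell
import Control.Monad (ap, liftM)

-- The state transformer from the message: a state-passing
-- computation layered over an inner monad m.
newtype StateT s m a = StateT { runStateT :: s -> m (a, s) }

instance Monad m => Functor (StateT s m) where
  fmap = liftM

instance Monad m => Applicative (StateT s m) where
  pure a = StateT (\s -> return (a, s))
  (<*>)  = ap

instance Monad m => Monad (StateT s m) where
  StateT m >>= k = StateT $ \s -> do
    (a, s') <- m s            -- run the first computation
    runStateT (k a) s'        -- thread the new state into the next
```

For example, with Maybe as the inner monad, a "tick" action returns the current counter and increments the state.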



From dpt@math.harvard.edu Tue Feb 27 18:00:26 2001 Date: Tue, 27 Feb 2001 13:00:26 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Primitive types and Prelude shenanigans
On Fri, Feb 16, 2001 at 05:13:10PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> Fri, 16 Feb 2001 04:14:24 -0800, Simon Peyton-Jones <simonpj@microsoft.com> pisze:
> > Here I think the right thing is to say that desugaring for boolean
> > constructs uses a function 'if' assumed to have type
> >         if :: forall b. Bool -> b -> b -> b
> 
> What if somebody wants to make 'if' overloaded on more types than
> some constant type called Bool?
> 
>     class Condition a where
>         if :: a -> b -> b -> b

(Note that Hawk does almost exactly this.)

> Generally I don't feel the need of allowing to replace if, Bool and
> everything else with custom definitions, especially when there is no
> single obvious way.

Why not just let

  if x then y else z

be syntactic sugar for

  Prelude.ifThenElse x y z

when some flag is given?  That allows a Prelude hacker to do whatever
she wants, from the standard

  ifThenElse :: Bool -> x -> x -> x
  ifThenElse True  x _ = x
  ifThenElse False _ y = y

to something like

  class (Boolean a) => Condition a b where
     ifThenElse :: a -> b -> b -> b

("if" is a keyword, so cannot be used as a function name.  Hawk uses
"mux" for this operation.)

Compilers are good enough to inline the standard definition (and
compile it away when appropriate), right?

Pattern guards can be turned into "ifThenElse" as specified in section
3.17.3 of the Haskell Report.  Or maybe there should be a separate
function "evalGuard", which is ordinarily of type
  evalGuard :: [(Bool, a)] -> a -> a
(taking the list of guards and RHS, together with the default case).
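A sketch of what such an evalGuard might look like (hypothetical; only the name and type come from the message above — try each guard in turn, falling through to the default):

```haskell
-- Evaluate a list of (guard, right-hand side) pairs against a
-- default; the first true guard wins.
evalGuard :: [(Bool, a)] -> a -> a
evalGuard [] def = def
evalGuard ((cond, rhs) : rest) def
  | cond      = rhs
  | otherwise = evalGuard rest def
```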

It's less clear that compilers would be able to produce good code in
this case.

But this would have to be changed:

  An alternative of the form

    pat -> exp where decls

  is treated as shorthand for:

    pat | True -> exp where decls

Best,
	Dylan Thurston


From bhalchin@hotmail.com Wed Feb 28 07:44:42 2001 Date: Wed, 28 Feb 2001 07:44:42 From: Bill Halchin bhalchin@hotmail.com Subject: Literate Programming in Haskell?
Hello Haskell Community,

    Probably somebody else has brought this issue up already.
Why can't we have some kind of integrated literate programming model
where I can have hyperlinks in comments to documents represented
in XML?  In other words, a kind of seamless literate programming
environment in Haskell with XML, i.e. Haskell and XML are seamless?
E.g. here is a step in the right direction by writing in Literate
Haskell in HTML!:

http://www.numeric-quest.com/haskell/

The stuff at this URL is pretty cool, i.e. "Haskell" scripts written
in HTML. I want to also see hyperlinks to XML docs in Literate
Haskell comments or maybe even to Haskell code!

Regards,

Bill Halchin


>From: "Erik Meijer" <erik@meijcrosoft.com>
>To: "Patrik Jansson" <patrikj@cs.chalmers.se>, <haskell-cafe@haskell.org>
>CC: <ralf@cs.uu.nl>
>Subject: Re: Literate Programming in Haskell?
>Date: Mon, 19 Feb 2001 22:00:47 -0800
>
>You also might take a look at Maarten Fokkinga's mira.sty
>http://www.cse.ogi.edu/~mbs/src/textools/ and Mark Shields' abbrev.sty 
>which
>was derived from that http://www.cse.ogi.edu/~mbs/src/textools/.
>
>Erik
>----- Original Message -----
>From: "Patrik Jansson" <patrikj@cs.chalmers.se>
>To: <haskell-cafe@haskell.org>
>Cc: <ralf@cs.uu.nl>
>Sent: Monday, February 19, 2001 6:07 AM
>Subject: Re: Literate Programming in Haskell?
>
>
> > On Mon, 19 Feb 2001, Bostjan Slivnik wrote:
> > >
> > > > I'm also very interested in this, but ideally I would want the 
>output
>to
> > > > be in some proportional font, with symbols like =>, ->, <- replaced
>with
> > > > arrows, etc. Also, it would be very nice to have the code
>automatically
> > > > column aligned (using heuristics).
> > >
> > > So am I.  Is anybody willing to cooperate on the design of such a tool?
> >
> > A tool I am using is Ralf Hinze's lhs2tex
> >
> >   http://www.informatik.uni-bonn.de/~ralf/Literate.tar.gz
> >
> >   http://www.informatik.uni-bonn.de/~ralf/Guide.ps.gz
> >
> > It transforms .lhs files (with some formatting commands in LaTeX-style
> > comments) to LaTeX. Development based on this idea is something I would 
>be
> > willing to participate in as I already have a fair amount of Haskell
> > code/articles (read: my PhD thesis;-) in this format.
> >
> > Maybe Ralf can say something about his views on further development of
> > lhs2tex (copyright etc.) by other people (us?).
> >
> > /Patrik Jansson
> >
> > PS. I have made some small improvements to lhs2tex locally and I seem to
> >     remember that one or two of those were actually needed to get it to
> >     run with my ghc version.
> >
> >
> > _______________________________________________
> > Haskell-Cafe mailing list
> > Haskell-Cafe@haskell.org
> > http://www.haskell.org/mailman/listinfo/haskell-cafe
>
>
>_______________________________________________
>Haskell-Cafe mailing list
>Haskell-Cafe@haskell.org
>http://www.haskell.org/mailman/listinfo/haskell-cafe

_________________________________________________________________
Get your FREE download of MSN Explorer at http://explorer.msn.com



From simonpj@microsoft.com Wed Feb 28 10:05:27 2001 Date: Wed, 28 Feb 2001 02:05:27 -0800 From: Simon Peyton-Jones simonpj@microsoft.com Subject: Primitive types and Prelude shenanigans
| Why not just let
| 
|   if x then y else z
| 
| be syntactic sugar for
| 
|   Prelude.ifThenElse x y z

The burden of my original message was that
a) this is reasonable, but
b) it would have to become the *defined behaviour*

As you say, the "defined behaviour" would have to cover
guards as well, and I'm not absolutely certain what else.

The way GHC is set up now, it's relatively easy to make such
changes (this wasn't true before).  But it takes some design work.  

If someone cares enough
to do the design work, and actively wants the result, I'll see how
hard it is to implement.

Simon



From qrczak@knm.org.pl Wed Feb 28 15:17:02 2001 Date: 28 Feb 2001 15:17:02 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: Primitive types and Prelude shenanigans
Wed, 28 Feb 2001 02:05:27 -0800, Simon Peyton-Jones <simonpj@microsoft.com> pisze:

> If someone cares enough to do the design work, and actively wants
> the result, I'll see how hard it is to implement.

IMHO it should not be done only because it's possible. If a part
of Prelude is to be replaceable, there should be a chance that it's
useful for something.

You can't replace the whole Prelude anyway; e.g. (->) and Integer
don't look as if replacing them would be possible.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From qrczak@knm.org.pl Wed Feb 28 15:28:29 2001 Date: 28 Feb 2001 15:28:29 GMT From: Marcin 'Qrczak' Kowalczyk qrczak@knm.org.pl Subject: examples using built-in state monad
Mon, 26 Feb 2001 13:07:51 -0800, Konst Sushenko <konsu@microsoft.com> pisze:

> newtype State s m a     = ST (s -> m (a,s))
> 
> ghc and hugs contain built in implementation of state monad ST.
>  
> is it the same thing?

No. GHC's and Hugs' ST allows dynamic creation of an arbitrary number
of mutable variables of arbitrary types, using the operations
    newSTRef   :: a -> ST s (STRef s a)
    readSTRef  :: STRef s a -> ST s a
    writeSTRef :: STRef s a -> a -> ST s ()

The type variable 's' is used in a very tricky way, to ensure
safety when
    runST :: (forall s. ST s a) -> a
is used to wrap the ST-monadic computation in a purely functional
interface. It does not correspond to the type of data being
manipulated.
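For example (a small usage sketch; the module names here are those of the modern hierarchical libraries — older GHC and Hugs spelled them simply ST and STRef):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef)

-- Sum a list with a mutable accumulator, then escape ST purely via
-- runST.  The 's' parameter never appears in the result type, which
-- is what makes the escape safe.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef acc (+ x)) xs
  readSTRef acc
```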

GHC >= 4.06 also contains a monad like yours, in module MonadState,
available when the -package lang option is passed to the compiler.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK



From dpt@math.harvard.edu Wed Feb 28 20:51:54 2001 Date: Wed, 28 Feb 2001 15:51:54 -0500 From: Dylan Thurston dpt@math.harvard.edu Subject: Primitive types and Prelude shenanigans
On Wed, Feb 28, 2001 at 02:05:27AM -0800, Simon Peyton-Jones wrote:
> If someone cares enough
> to do the design work, and actively wants the result, I'll see how
> hard it is to implement.

I've been thinking some about the design, and I'd be happy to finish
it, but I can't honestly say I would use it much (other than for the
numeric types) in the near future.

Best,
	Dylan Thurston


From bhalchin@hotmail.com Thu Feb 1 05:29:44 2001 From: bhalchin@hotmail.com (Bill Halchin) Date: Thu, 01 Feb 2001 05:29:44 Subject: Source tar ball for Simon Marlow's Haskell Web Server?? Message-ID: Hello, I looked on www.haskell.org for the Simon Marlow's Web Server, but couldn't find. Did I overlook it? Regards, Bill Halchin _________________________________________________________________ Get your FREE download of MSN Explorer at http://explorer.msn.com From igloo@earth.li Fri Feb 2 01:04:07 2001 From: igloo@earth.li (Ian Lynagh) Date: Fri, 2 Feb 2001 01:04:07 +0000 Subject: Various software and a question Message-ID: <20010202010407.A18897@stu163.keble.ox.ac.uk> Hi all First a brief question - is there a nicer way to do something like #ifdef __GLASGOW_HASKELL__ #include "GHCCode.hs" #else > import HugsCode #endif than that (i.e. code that needs to be different depending on if you are using GHC or HUGS)? Secondly, I don't know if this sort of thing is of interest to anyone, but inspired by the number of people who looked at the MD5 stuff I thought I might as well mention it. I've put all the Haskell stuff I've written at http://c93.keble.ox.ac.uk/~ian/haskell/ (although I'm new at this game so it may not be the best code in the world). At the moment this consists of a (very nearly complete) clone of GNU ls, and MD5 module and test program and the smae for SHA1 and DES. The ls clone needs a ptch to GHC for things like isLink (incidentally, would it be sensible to try and get this included with GHC? It is basically a simple set of changes to the PosixFiles module, but needs __USE_BSD defined (which I guess is the reason it is not in there, but it could have it's own file?)). 
Have fun Ian, wondering how this message got to be so long From koen@cs.chalmers.se Fri Feb 2 09:15:17 2001 From: koen@cs.chalmers.se (Koen Claessen) Date: Fri, 2 Feb 2001 10:15:17 +0100 (MET) Subject: Various software and a question In-Reply-To: <20010202010407.A18897@stu163.keble.ox.ac.uk> Message-ID: Ian Lynagh wondered: | is there a nicer way to do something like | | #ifdef __GLASGOW_HASKELL__ | #include "GHCCode.hs" | #else | > import HugsCode | #endif I usually make two directories: Hugs/ Ghc/ That contain files with the same names but different compiler-dependent implementations. Then it is just a question of setting the PATHs right. I hate using C preprocessor stuff for this. I think the directory solution is nice because it forces you to concentrate all the compiler-dependent stuff into a few modules, which are distinct from the rest of the implementation. /Koen. -- Koen Claessen http://www.cs.chalmers.se/~koen phone:+46-31-772 5424 mailto:koen@cs.chalmers.se ----------------------------------------------------- Chalmers University of Technology, Gothenburg, Sweden From Tom.Pledger@peace.com Sat Feb 3 04:13:04 2001 From: Tom.Pledger@peace.com (Tom Pledger) Date: Sat, 3 Feb 2001 17:13:04 +1300 Subject: Fundeps and quantified constructors In-Reply-To: <20010203000629.3141.qmail@web1502.mail.yahoo.com> References: <20010203000629.3141.qmail@web1502.mail.yahoo.com> Message-ID: <14971.34128.320819.582032@waytogo.peace.co.nz> nubie nubie writes: | So I want to have a polymorphic Collection type, just because. | | > class Collection c e | c -> e where | > empty :: c | > put :: c -> e -> c | | > data SomeCollection e = forall c . Collection c e => MakeSomeCollection c | | Hugs (February 2000) doesn't like it. It says | Variable "e" in constraint is not locally bound | | I feel that e *is* bound, sort of, because c is bound and there's a | fundep c->e. That line of reasoning establishes that e is constrained on the right hand side of the "=". 
However, it's still bound (by an implicit "forall e") on the left hand side of the "=". The problem is that e can leak details about c to parts of the program outside the "forall c". It's still a problem if you remove the "| c -> e" fundep. A more common use of a "Collection c e | c -> e" class is: data SomeCollection e = --some data structure involving e instance SomeContext e => Collection (SomeCollection e) e where --implementations of empty and put, for the aforementioned --data structure, and entitled to assume SomeContext Is that collection type polymorphic enough for your purposes? : | The following things work as expected: | | > data IntCollection = forall c . Collection c Int => MakeIntCollection c | > data AnyCollection = forall c e . Collection c e => MakeAnyCollection c Neither of them has a type variable tunnelling through the local quantifier. HTH. Tom From Claudius.Heitz@web.de Sat Feb 3 14:07:29 2001 From: Claudius.Heitz@web.de (Claudius Heitz) Date: Sat, 3 Feb 2001 15:07:29 +0100 Subject: Provider / Haskell-CGI's Message-ID: <200102031407.f13E7To19675@mailgate3.cinetic.de> Hello! Does anybody know a provider, where I can run Haskell-CGI's=3F I mean, in st= andard webhosting package=3F And are there any in Germany=3F TIA! Claudius =5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F=5F= =5F=5F=5F=5F=5F Alles unter einem Dach: Informationen, Fun, E-Mails. Bei WEB.DE: http://we= b.de Die gro=DFe Welt der Kommunikation: E-Mail, Fax, SMS, WAP: http://freemail.w= eb.de From christoph@cm-arts.de Fri Feb 2 14:32:01 2001 From: christoph@cm-arts.de (Christoph M.) Date: Fri, 2 Feb 2001 15:32:01 +0100 Subject: Knight Message-ID: Hi ! Yes, I meant the Knight's Tour problem. Could anybody post me a solve of this problem in haskell ? 
Thanks a lot, Christoph From ahey@iee.org Mon Feb 5 00:43:51 2001 From: ahey@iee.org (Adrian Hey) Date: Mon, 5 Feb 2001 00:43:51 +0000 (GMT) Subject: Knight In-Reply-To: Message-ID: On Fri 02 Feb, Christoph M. wrote: > Yes, I meant the Knight's Tour problem. Could anybody post me a solve of > this problem in haskell ? > Thanks a lot, Sorry I've never done this in Haskell. The second computer program I ever wrote was to solve this (in BASIC). That was many years ago:-) IIRC, it can be solved following a very simple rule. For each square of the chess board keep track of the No. of squares which can be reached in a single move from that square. The next Knight move is to which ever square has the lowest No. (Unused squares and legal legal moves only of course). If you have 2 or more equally good moves just make a random choice. After the move you decrement the counts for each square which can be reached in a single move from the new square. This is more of a heuristic than an algorithm, in that I couldn't prove that it will always work, nor could I prove that not obeying this rule will result in failure. (That's why I wrote the program). It does seem to work. But it's not hard to see why this is reasonable strategy. (The best next move is the one which minimises the No. of possible future moves which get blocked as a result). As far as Haskell solution is concerned, the only difficult decision you have to make is what data structure to use to represent the chess board squares and counts. An array seems the obvious choice, but maybe somebody can suggest something else better suited to a functional solution. 
Regards -- Adrian Hey From timd@macquarie.com.au Mon Feb 5 22:16:02 2001 From: timd@macquarie.com.au (Timothy Docker) Date: Tue, 6 Feb 2001 09:16:02 +1100 (EST) Subject: Haskell Implemetors Meeting Message-ID: <14975.9642.190720.118469@tcc2> > We agreed that it would be a Jolly Good Thing if GHC could > be persuaded to produce GHC-independent Core output, > ready to feed into some other compiler. For example, > Karl-Filip might be able to use it. > ANDREW will write a specification, and implement it. A quick question. What is meant by "Core output"? Subsequent posts seem to suggest this is some "reduced Haskell", in which full Haskell 98 can be expressed. Am I completely off beam here? Tim Docker From malcolm-haskell@cs.york.ac.uk Tue Feb 6 11:00:22 2001 From: malcolm-haskell@cs.york.ac.uk (malcolm-haskell@cs.york.ac.uk) Date: Tue, 6 Feb 2001 11:00:22 +0000 Subject: binary files in haskell In-Reply-To: <20010205161014.C15959@mark.ugcs.caltech.edu> Message-ID: John Meacham wrote: > I wrote up a proposal for a binary file IO mechanism to be added > as a 'blessed addendum' to the standard at best and as a commonly > implmented extension (in hslibs) at least.. I have looked at your proposal. If you would like it to be widely available, you will need to write an implementation of the library, or find someone who can write it for you. Type signatures are great as documentation, but they are not directly executable. 
:-) Regards, Malcolm From simonmar@microsoft.com Tue Feb 6 12:50:25 2001 From: simonmar@microsoft.com (Simon Marlow) Date: Tue, 6 Feb 2001 04:50:25 -0800 Subject: binary files in haskell Message-ID: <9584A4A864BD8548932F2F88EB30D1C61157C6@TVP-MSG-01.europe.corp.microsoft.com> > > How about this slightly more general interface, which works > with the new > > FFI libraries, and is trivial to implement on top of the > primitives in > > GHC's IOExts: > > > > hPut :: Storable a => Handle -> a -> IO () > > hGet :: Storable a => Handle -> IO a > > What about endianess? In which format are Floats or even just Bools > stored? For a file which probably shall be read from > different machines > this is not clear at all. The behaviour is defined by the Storable instances for each type. The endianness for writing say an Int32 would be the same as the host architecture, for instance. If you want to work with just bytes, you can always just use hPut and hGet at type Word8. Overloading with Storable gives you more flexibility, since if you have a way to serialise an object in memory for passing to a foreign function, you also have a way to store it in binary format in a file (modulo problems with pointers, of course). In the long term, we'll want to be able to serialise more than just Storable objects (c.f. the other overloaded binary I/O libraries out there), and possibly make the output endian-independent - but after all there's no requirement that Haskell's Int has the same size on all implementations, so there's no guarantee that binary files written on one machine will be readable on another, unless they only use explicitly sized types or Integer. Perhaps these should be called hPutStorable and hGetStorable so as not to prematurely steal the best names. > I think John is right that there needs to be a primitive interface for > just writing bytes. You can then build anything more > complicated on top > (probably different high-level ones for different purposes). 
> > I just see one problem with John's proposal: the type Byte. It is > completely useless if you don't have operations that go with it; > bit-operations and conversions to and from Int. The FFI > already defines > such a type: Word8. So I suggest that the binary IO library > explicitely > reads and writes Word8's. yup, that's what I had in mind. Cheers, Simon From chak@cse.unsw.edu.au Wed Feb 7 03:06:02 2001 From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty) Date: Wed, 07 Feb 2001 14:06:02 +1100 Subject: binary files in haskell In-Reply-To: <9584A4A864BD8548932F2F88EB30D1C61157C6@TVP-MSG-01.europe.corp.microsoft.com> References: <9584A4A864BD8548932F2F88EB30D1C61157C6@TVP-MSG-01.europe.corp.microsoft.com> Message-ID: <20010207140602C.chak@cse.unsw.edu.au> Simon Marlow wrote, > Olaf wrote, > > Simon Marlow wrote, > > > How about this slightly more general interface, which works > > with the new > > > FFI libraries, and is trivial to implement on top of the > > primitives in > > > GHC's IOExts: > > > > > > hPut :: Storable a => Handle -> a -> IO () > > > hGet :: Storable a => Handle -> IO a > > > > What about endianess? In which format are Floats or even just Bools > > stored? For a file which probably shall be read from > > different machines > > this is not clear at all. Like in any other language, too. If you are writing binary data, you get all the problems of writing binary data. I agree that on top of that it would be nice to have some really nice serilisation routines, but that should be a second step. > Overloading with Storable gives you more flexibility, since if you have > a way to serialise an object in memory for passing to a foreign > function, you also have a way to store it in binary format in a file > (modulo problems with pointers, of course). Yep, good idea. 
Cheers, Manuel From patrikj@cs.chalmers.se Wed Feb 7 07:35:20 2001 From: patrikj@cs.chalmers.se (Patrik Jansson) Date: Wed, 7 Feb 2001 08:35:20 +0100 (MET) Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] In-Reply-To: <3A80DCA6.7796D64F@boutel.co.nz> Message-ID: {I'm diverting this discussion to haskell-cafe.} [I am not sure a more mathematically correct numeric class system is suitable for inclusion in the language specification of Haskell (a library would certainly be useful though). But this is not my topic in this letter.] On Wed, 7 Feb 2001, Brian Boutel wrote: > * Haskell equality is a defined operation, not a primitive, and may not > be decidable. It does not always define equivalence classes, because > a==a may be Bottom, so what's the problem? It would be a problem, > though, to have to explain to a beginner why they can't print the result > of a computation. The fact that equality can be trivially defined as bottom does not imply that it should be a superclass of Num, it only explains that there is an ugly way of working around the problem. Neither is the argument that the beginner should be able to print the result of a computation a good argument for having Show as a superclass. A Num class without Eq, Show as superclasses would only mean that the implementor is not _forced_ to implement Eq and Show for all Num instances. Certainly most instances of Num will still be in both Show and Eq, so that they can be printed and shown, and one can easily make sure that all Num instances a beginner encounters would be such. As far as I remember from the earlier discussion, the only really visible reason for Show, Eq to be superclasses of Num is that class contexts are simpler when (as is often the case) numeric operations, equality and show are used in some context. 
f :: Num a => a -> String -- currently f a = show (a+a==2*a) If Show, Eq, Num were uncoupled this would be f :: (Show a, Eq a, Num a) => a -> String But I think I could live with that. (In fact, I rather like it.) Another unfortunate result of having Show, Eq as superclasses to Num is that for those cases when "trivial" instances (of Eq and Show) are defined just to satisfy the current class systems, the users have no way of supplying their own instances. Due to the Haskell rules of always exporting instances we have that if the Num instance is visible, so are the useless Eq and Show instances. In the uncoupled case the users have the choice to define Eq and Show instances that make sense to them. A library designer could provide the Eq and Show instances in two separate modules to give the users maximum flexibility. /Patrik Jansson From herrmann@infosun.fmi.uni-passau.de Wed Feb 7 09:12:10 2001 From: herrmann@infosun.fmi.uni-passau.de (Ch. A. Herrmann) Date: Wed, 7 Feb 2001 10:12:10 +0100 (MET) Subject: Revamping the numeric classes In-Reply-To: References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> Message-ID: <14977.4458.349723.883119@reger.fmi.uni-passau.de> moved to haskell-cafe Ketil> E.g. way back, I wrote a simple differential equation solver. Ketil> Now, the same function *could* have been applied to vector Ketil> functions, except that I'd have to decide on how to implement Ketil> all the "Num" stuff that really didn't fit well. Ideally, a Ketil> nice class design would infer, or at least allow me to Ketil> specify, the mathematical constraints inherent in an Ketil> algorithm, and let my implementation work with any data Ketil> satisfying those constraints. the problem is that the --majority, I suppose?-- of mathematicians tend to overload operators. They use "*" for matrix-matrix multiplication as well as for matrix-vector multiplication etc. 
Therefore, a quick solution that implements groups, monoids, Abelian groups, rings, Euclidean rings, fields, etc. will not be sufficient.

I don't think that it is acceptable for a language like Haskell to permit the user to overload predefined operators, like "*".

A cheap solution could be to define a type MathObject and operators like

    (:*:) :: MathObject -> MathObject -> MathObject

Then, the user can implement:

    a :*: b = case (a,b) of
                (Matrix x, Matrix y) -> foo
                (Matrix x, Vector y) -> bar

--
Christoph Herrmann
E-mail: herrmann@fmi.uni-passau.de
WWW: http://brahms.fmi.uni-passau.de/cl/staff/herrmann.html

From ketil@ii.uib.no Wed Feb 7 10:47:11 2001 From: ketil@ii.uib.no (Ketil Malde) Date: 07 Feb 2001 11:47:11 +0100 Subject: Revamping the numeric classes In-Reply-To: "Ch. A. Herrmann"'s message of "Wed, 7 Feb 2001 10:12:10 +0100 (MET)" References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <14977.4458.349723.883119@reger.fmi.uni-passau.de> Message-ID:

"Ch. A. Herrmann" writes:

> moved to haskell-cafe

No, but *now* it is. (Does haskell@ strip Reply-To? Bad list! Bad!)

> the problem is that the --majority, I suppose?-- of mathematicians
> tend to overload operators. They use "*" for matrix-matrix
> multiplication as well as for matrix-vector multiplication etc.

Yes, obviously. On the other hand, I think you could get far by defining (+) as an operator in a Group, (*) in a Ring, and so forth. Another problem is that the mathematical constructs include properties not easily encoded in Haskell, like commutativity, associativity, etc.

> I don't think that it is acceptable for a language like Haskell
> to permit the user to overload predefined operators, like "*".

Depends on your definition of overloading. Is there a difference between overloading and instantiating a class?
:-)

> A cheap solution could be to define a type MathObject and operators like
>     (:*:) :: MathObject -> MathObject -> MathObject
> Then, the user can implement:
>     a :*: b = case (a,b) of
>                 (Matrix x, Matrix y) -> foo
>                 (Matrix x, Vector y) -> bar

Yes. If it is useful to have a fine granularity of classes, you can imagine doing:

    class Multiplicative a b c where
        (*) :: a -> b -> c

now I can do

    instance Multiplicative (Vector a) (Vector a) (Vector a) where
        x * y = ...

but also scalar multiplication

    instance Multiplicative a (Vector a) (Vector a) where
        a * x = ...

Also, I think I can define Group a to be

    class Additive a a a => Group a where
        -- inherits plus from "Additive"
        zero :: a

    instance Group Int where
        (+)  = built_in_int_addition
        zero = 0::Int

Long qualifier lists might be countered by having classes -- Num, say -- that just serve to include other classes in reasonable collections. Funny mathematical names would - at least to some extent - be avoided by having simple names for the classes actually defining the operators, so that errors will warn you about missing "Multiplicative" rather than Field or Ring or what have you.

From experience, I guess there are probably issues that haven't crossed my mind. :-)

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

From karczma@info.unicaen.fr Wed Feb 7 15:47:17 2001 From: karczma@info.unicaen.fr (Jerzy Karczmarczuk) Date: Wed, 07 Feb 2001 15:47:17 +0000 Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] References: Message-ID: <3A816E05.2D2E77C7@info.unicaen.fr>

Patrik Jansson wrote:

> [I am not sure a more mathematically correct numeric class system is
> suitable for inclusion in the language specification of Haskell (a
> library would certainly be useful though)....]

I think it should be done at the language level.

Previously Brian Boutel wrote: ...

> Haskell was intended for use by programmers who may not be
> mathematicians, as a general purpose language.
> Changes to keep mathematicians happy tend to make it less understandable
> and attractive to everyone else.
>
> Specifically:
>
> * most usage of (+), (-), (*) is on numbers which support all of them.
>
> * Haskell equality is a defined operation, not a primitive, and may not
> be decidable. It does not always define equivalence classes, because
> a==a may be Bottom, so what's the problem? It would be a problem,
> though, to have to explain to a beginner why they can't print the result
> of a computation.

====

Some people here might recall that I cried loudly and in despair (OK, I am exaggerating a bit...) about the inadequacy of the Num hierarchy much before Sergey Mechveliani's proposal. Finally I implemented my own home-brewed hierarchy of Rings, AdditiveGroups, Modules, etc. in order to play with differential structures and graphical objects. And arithmetic on functions.

I AM NOT A MATHEMATICIAN, and still, I see very strongly the need for a sane math layer in Haskell on behalf of 'general purpose' programming. Trying to explain to comp. sci students (who, at least here, don't like formal mathematics too much...) WHY the Haskell Num hierarchy is as it is, is simply hopeless, because some historical accidents were never very rational.

* I don't care about "most usage of (+), (-), (*) is on numbers which support all of them" if this produces chaos if you want to use Haskell for geometry, or graphics, needing vectors. From this point of view the slightly simpler (in this context) type system of Clean seems to be better. And I appreciate also the possibility to define arithmetic operations on *functions*, which is impossible in Haskell because of these Eq/Show superclass constraints.

> In the uncoupled case the users have the choice to define Eq and Show
> instances that make sense to them. A library designer could provide the Eq
> and Show instances in two separate modules to give the users maximum
> flexibility.
>
> /Patrik Jansson

Yes.
I don't want to be too acrimonious or sarcastic, but those people who claim that Haskell as a "universal" language should not follow too closely a decent mathematical discipline serve the devil. When math is taught at school at the elementary level, with full knowledge of the fact that almost nobody will follow a mathematical career afterwards, the rational, logical side of all constructions is methodologically essential. 10-year-old pupils learn that you can add two dollars to 7 dollars, but multiplying dollars has not too much sense (a priori), and adding dollars to watermelons is dubious. Numbers are delicate abstractions, and treating them in a cavalière manner in a supposedly "universal" language harms not only mathematicians.

As you see, treating (*) together with (+) is silly not only for vector spaces, but also for dimensional quantities, useful outside math (if only for debugging).

"Ch. A. Herrmann" wrote:

> the problem is that the --majority, I suppose?-- of mathematicians
> tend to overload operators. They use "*" for matrix-matrix
> multiplication as well as for matrix-vector multiplication etc.
>
> Therefore, a quick solution that implements groups, monoids, Abelian
> groups, rings, Euclidean rings, fields, etc. will not be sufficient.
>
> I don't think that it is acceptable for a language like Haskell
> to permit the user to overload predefined operators, like "*".

What do you mean "predefined" operators? Predefined where? Forbid what? Using the standard notation even to multiply rationals or complexes? And leave this possibility open to C++ programmers who can overload anything without respecting mathematical congruity? Why?

A serious mathematician who sees the signature

    (*) :: a -> a -> a

won't try to use it for multiplying a matrix by a vector. But using it as a basic operator within a monoid is perfectly respectable. No need to "lift" or "promote" scalars into vectors/matrices, etc.
For "scaling" I personally use an operation (*>) defined within the Module constructor class, but I am unhappy, because

    (*>) :: a -> (t a) -> (t a)

declared in a Module instance of the constructor t prevents me from using it in the case where (t a) in reality is a. (By default (*>) maps (x*) through the elements of (t ...), and kinds "*" are not constructors...)

Jerzy Karczmarczuk
Caen, France

From herrmann@infosun.fmi.uni-passau.de Wed Feb 7 15:40:04 2001 From: herrmann@infosun.fmi.uni-passau.de (Ch. A. Herrmann) Date: Wed, 7 Feb 2001 16:40:04 +0100 (MET) Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] In-Reply-To: <3A816E05.2D2E77C7@info.unicaen.fr> References: <3A816E05.2D2E77C7@info.unicaen.fr> Message-ID: <14977.27732.84361.265529@reger.fmi.uni-passau.de>

Hi Haskellers,

>>>>> "Jerzy" == Jerzy Karczmarczuk writes:

Jerzy> "Ch. A. Herrmann" wrote:
>> the problem is that the --majority, I suppose?-- of
>> mathematicians tend to overload operators. They use "*" for
>> matrix-matrix multiplication as well as for matrix-vector
>> multiplication etc.
>>
>> Therefore, a quick solution that implements groups, monoids,
>> Abelian groups, rings, Euclidean rings, fields, etc. will not be
>> sufficient.
>>
>> I don't think that it is acceptable for a language like Haskell
>> to permit the user to overload predefined operators, like "*".

Jerzy> What do you mean "predefined" operators? Predefined where?

In hugs, ":t (*)" tells you:

    (*) :: Num a => a -> a -> a

which is an intended property of Haskell, I suppose.

Jerzy> Forbid what?

A definition like (a trivial example, instead of matrix/vector)

    class NewClass a where
        (*) :: a->[a]->a

leads to an error since (*) is already defined at top level, e.g.

    Repeated definition for member function "*"

in hugs, although I didn't specify that I wanted to use (*) in the context of the Num class.
However, such things work in local definitions:

    Prelude> let (*) a b = a++(show b) in "Number " * 5
    "Number 5"

but you certainly don't want to use (*) only locally.

Jerzy> Using the standard notation even to multiply
Jerzy> rationals or complexes?

No, that's OK since they belong to the Num class. But as soon as you want to multiply a rational with a complex you'll get a type error. Personally, I've nothing against this strong typing discipline, since it'll catch some errors.

Jerzy> And leave this possibility open to C++ programmers who can
Jerzy> overload anything without respecting mathematical congruity?
Jerzy> Why?

If mathematics is to be respected, we really have to discuss a lot of things, e.g., whether it is legal to define comparison for floating point numbers, but that won't help much. Also, the programming language should not prescribe that the "standard" mathematics is the right mathematics and the only one the user is allowed to deal with. If the user likes to multiply two strings, like "ten" * "six" (= "sixty"), and he/she has a semantics for that, why not?

Jerzy> A serious mathematician who sees the signature
Jerzy> (*) :: a -> a -> a
Jerzy> won't try to use it for multiplying a matrix by a
Jerzy> vector.

A good thing would be to allow the signature (*) :: a -> b -> c as well as multi-parameter type classes (a, b and c) and static overloading, as Joe Waldmann suggested.

Jerzy> No need to "lift" or "promote"
Jerzy> scalars into vectors/matrices, etc.

You're right, there is no "need". We can live with a :*: b for matrix multiplication, and with a <*> b for matrix/vector multiplication, etc. It's a matter of style.

If anyone has experience with defining operators in Unicode and editing them without problems, please tell me. Unicode will provide enough characters for a distinction, I suppose.
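The signature (*) :: a -> b -> c asked for here can already be approximated with GHC's multi-parameter type classes. The sketch below is an editorial illustration, not code from any message in this thread; in particular the functional dependency `a b -> c` is an addition, without which (as Marcin points out later in the thread) expressions like a*b*c are ambiguous. All type and instance names are hypothetical.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
module Mult where

import Prelude hiding ((*))
import qualified Prelude as P

newtype Vector = Vector [Double] deriving (Eq, Show)

-- The result type c is determined by the argument types a and b;
-- this fundep is what keeps a*b*c from being ambiguous.
class Multiplicative a b c | a b -> c where
  (*) :: a -> b -> c

-- componentwise vector-vector product
instance Multiplicative Vector Vector Vector where
  Vector xs * Vector ys = Vector (zipWith (P.*) xs ys)

-- scalar-vector product, without lifting the scalar to a vector
instance Multiplicative Double Vector Vector where
  k * Vector xs = Vector (map (k P.*) xs)

demoScale :: Vector
demoScale = (2.0 :: Double) * Vector [1, 2, 3]

demoPointwise :: Vector
demoPointwise = Vector [1, 2] * Vector [3, 4]
```

Note that the scalar must carry an explicit Double annotation: with the instance head open in its first parameter, a bare literal would leave the instance choice undetermined.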
Bye
--
Christoph Herrmann
E-mail: herrmann@fmi.uni-passau.de
WWW: http://brahms.fmi.uni-passau.de/cl/staff/herrmann.html

From karczma@info.unicaen.fr Wed Feb 7 17:12:24 2001 From: karczma@info.unicaen.fr (Jerzy Karczmarczuk) Date: Wed, 07 Feb 2001 17:12:24 +0000 Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] References: <3A816E05.2D2E77C7@info.unicaen.fr> <14977.27732.84361.265529@reger.fmi.uni-passau.de> Message-ID: <3A8181F8.BB1673E9@info.unicaen.fr>

"Ch. A. Herrmann" answers my questions:

> Jerzy> What do you mean "predefined" operators? Predefined where?
>
> In hugs, ":t (*)" tells you:
> (*) :: Num a => a -> a -> a
> which is an intended property of Haskell, I suppose.

Aha. But I would never call this a DEFINITION of this operator. This is just the type, isn't it? A misunderstanding, I presume.

> Jerzy> Forbid what?
> A definition like (a trivial example, instead of matrix/vector)
> class NewClass a where
> (*) :: a->[a]->a
> leads to an error

OK, OK. Actually my only point was to suggest that the type for (*) as above should be constrained only by an *appropriate class*, not by this horrible Num which contains additive operators as well. So this is not the answer I expected, concerning the "overloading of a predefined operator".

BTW. In Clean (*) constitutes a class by itself, and it is this simplicity that I appreciate, although I am far from saying that they have an ideal type system for a working mathemaniac.

> ... Also, the programming language should
> not prescribe that the "standard" mathematics is the right mathematics
> and the only one the user is allowed to deal with. If the user likes to
> multiply two strings, like "ten" * "six" (= "sixty"), and he/she has a
> semantics for that, why not?

Aaa, here we might, although need not, disagree.
I would like to see some rational constraints, preventing the user from inventing a completely insane semantics for this multiplication, mainly to discourage writing of programs impossible to understand. Jerzy Karczmarczuk Caen, France From dpt@haskell.org Wed Feb 7 16:37:33 2001 From: dpt@haskell.org (Dylan Thurston) Date: Wed, 07 Feb 2001 11:37:33 -0500 Subject: (no subject) Message-ID: From qrczak@knm.org.pl Wed Feb 7 18:35:11 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 7 Feb 2001 18:35:11 GMT Subject: Revamping the numeric classes References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <14977.4458.349723.883119@reger.fmi.uni-passau.de> Message-ID: 07 Feb 2001 11:47:11 +0100, Ketil Malde pisze: > If it is useful to have a fine granularity of classes, you can > imagine doing: > > class Multiplicative a b c where > (*) :: a -> b -> c Then a*b*c is ambiguous no matter what are types of a,b,c and the result. Sorry, this does not work. Too general is too bad, it's impossible to have everything at once. -- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From dpt@math.harvard.edu Wed Feb 7 18:57:41 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Wed, 7 Feb 2001 13:57:41 -0500 Subject: Revamping the numeric classes In-Reply-To: <3A80DCA6.7796D64F@boutel.co.nz>; from brian@boutel.co.nz on Wed, Feb 07, 2001 at 06:27:02PM +1300 References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> Message-ID: <20010207135741.A23527@math.harvard.edu> Other people have been making great points for me. (I particularly liked the example of Dollars as a type with addition but not multiplication.) 
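The Dollars example just mentioned can be made concrete under a split hierarchy of roughly the kind being discussed in this thread: a type that supports (+) and (-) but simply has no Multiplicative instance, so dollars-times-dollars fails to typecheck. This is only an editorial sketch with hypothetical names; it hides the Prelude's own Num to avoid clashes.

```haskell
module Split where

import Prelude hiding (Num, (+), (-), (*), negate, fromInteger)
import qualified Prelude as P

class Additive a where
  (+), (-) :: a -> a -> a
  negate   :: a -> a
  zero     :: a

class Multiplicative a where
  (*) :: a -> a -> a
  one :: a

-- Num just bundles the two, for users who want a single context.
class (Additive a, Multiplicative a) => Num a where
  fromInteger :: P.Integer -> a

-- Dollars can be added and subtracted, but there is deliberately
-- no Multiplicative Dollars instance: multiplying dollars is nonsense.
newtype Dollars = Dollars P.Integer deriving (P.Eq, P.Show)

instance Additive Dollars where
  Dollars x + Dollars y = Dollars (x P.+ y)
  Dollars x - Dollars y = Dollars (x P.- y)
  negate (Dollars x)    = Dollars (P.negate x)
  zero                  = Dollars 0

payday :: Dollars
payday = Dollars 7 + Dollars 2
```

With this layout, `Dollars 2 * Dollars 3` is rejected at compile time, while existing code written against the bundled Num context keeps working for types that instantiate both halves.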
One point that has not been made: given a class setup like

    class Additive a where
        (+)    :: a -> a -> a
        (-)    :: a -> a -> a
        negate :: a -> a
        zero   :: a

    class Multiplicative a where
        (*) :: a -> a -> a
        one :: a

    class (Additive a, Multiplicative a) => Num a where
        fromInteger :: Integer -> a

then naive users can continue to use (Num a) in contexts, and the same programs will continue to work.[1]

(A question in the above context is whether the literal '0' should be interpreted as 'fromInteger (0::Integer)' or as 'zero'. Opinions?)

On Wed, Feb 07, 2001 at 06:27:02PM +1300, Brian Boutel wrote:

> * Haskell equality is a defined operation, not a primitive, and may not
> be decidable. It does not always define equivalence classes, because
> a==a may be Bottom, so what's the problem? It would be a problem,
> though, to have to explain to a beginner why they can't print the result
> of a computation.

Why doesn't your argument show that all types should be instances of Eq and Show? Why are numeric types special?

Best,
Dylan Thurston

Footnotes: [1] Except for the lack of abs and signum, which should be in some other class. I have to think about their semantics before I can say where they belong.

From andrew@andrewcooke.free-online.co.uk Wed Feb 7 22:08:26 2001 From: andrew@andrewcooke.free-online.co.uk (andrew@andrewcooke.free-online.co.uk) Date: Wed, 7 Feb 2001 22:08:26 +0000 Subject: Revamping the numeric classes Message-ID: <20010207220826.B1541@liron>

On Wed, Feb 07, 2001 at 11:47:11AM +0100, Ketil Malde wrote:

> "Ch. A. Herrmann" writes:
[...]
> > the problem is that the --majority, I suppose?-- of mathematicians
> > tend to overload operators. They use "*" for matrix-matrix
> > multiplication as well as for matrix-vector multiplication etc.

> Yes, obviously. On the other hand, I think you could get far by
> defining (+) as an operator in a Group, (*) in a Ring, and so forth.

As a complete newbie can I add a few points?
They may be misguided, but they may also help identify what appears obvious only through use...

- understanding the hierarchy of classes (ie constantly referring to Fig 5 in the report) takes a fair amount of effort. It would have been much clearer for me to have classes that simply listed the required super classes (as suggested in an earlier post).

- even for me, no great mathematician, I found the forced inclusion of certain classes irritating (in my case - effectively implementing arithmetic on tuples - Enum made little sense and ordering is hacked in order to be total; why do I need to define either to overload "+"?)

- what's the deal with fmap and map?

> Another problem is that the mathematical constructs include properties
> not easily encoded in Haskell, like commutativity, associativity, etc.
>
> > I don't think that it is acceptable for a language like Haskell
> > to permit the user to overload predefined operators, like "*".

Do you mean that the numeric classes should be dropped or are you talking about some other overloading procedure?

Isn't one popular use of Haskell to define/extend it to support small domain-specific languages? In those cases, overloading operators via the class mechanism is very useful - you can give the user concise, but still understandable, syntax for the problem domain. I can see that overloading operators is not good in general purpose libraries, unless carefully controlled, but that doesn't mean it is always bad, or should always be strictly controlled. Maybe the programmer could decide what is appropriate, faced with a particular problem, rather than a language designer, from more general considerations? Balance, as ever, is the key :-)

[...]

> >From experience, I guess there are probably issues that haven't
> crossed my mind. :-)

This is certainly true in my case - I presumed there was some deep reason for the complex hierarchy that exists at the moment. It was a surprise to see it questioned here.
Sorry if I've used the wrong terminology anywhere. Hope the above makes some sense.

Andrew

-- http://www.andrewcooke.free-online.co.uk/index.html

----- End forwarded message -----

-- http://www.andrewcooke.free-online.co.uk/index.html

From peterd@availant.com Wed Feb 7 21:17:38 2001 From: peterd@availant.com (Peter Douglass) Date: Wed, 7 Feb 2001 16:17:38 -0500 Subject: Revamping the numeric classes Message-ID: <8BDAB3CD0E67D411B02400D0B79EA49A5F6DC1@smail01.clam.com>

I have some questions about how Haskell's numeric classes might be revamped.

Is it possible in Haskell to circumscribe the availability of certain "unsafe" numeric operations such as div, /, mod? If this is not possible already, could a compiler flag "-noUnsafeDivide" perhaps be added to make such a restriction?

What I have in mind is to remove division by zero as an untypable expression. The idea is to require div, /, mod to take NonZeroNumeric values in their second argument. NonZeroNumeric values could be created by functions of type:

    Number a => a -> Maybe NonZeroNumeric

or something similar. Has this been tried and failed? I'm curious as to what problems there might be with such an approach.

--PeterD

From dpt@math.harvard.edu Wed Feb 7 21:54:50 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Wed, 7 Feb 2001 16:54:50 -0500 Subject: Revamping the numeric classes In-Reply-To: <20010207220826.B1541@liron>; from andrew@andrewcooke.free-online.co.uk on Wed, Feb 07, 2001 at 10:08:26PM +0000 References: <20010207220826.B1541@liron> Message-ID: <20010207165450.B32215@math.harvard.edu>

On Wed, Feb 07, 2001 at 10:08:26PM +0000, andrew@andrewcooke.free-online.co.uk wrote:

> - even for me, no great mathematician, I found the forced inclusion of
> certain classes irritating (in my case - effectively implementing
> arithmetic on tuples - Enum made little sense and ordering is hacked
> in order to be total; why do I need to define either to overload "+"?)
Presumably you mean "quot" and "rem", since Enum is a superclass of Integral, not Num. toInteger must have been even worse, right?

> - what's the deal with fmap and map?

I think this one is historical: map already existed before Haskell was powerful enough to type fmap, and the decision was not to affect existing programs too much. Presumably Haskell 2 will have them merged.

Best,
Dylan Thurston

From dpt@math.harvard.edu Thu Feb 8 00:06:54 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Wed, 7 Feb 2001 19:06:54 -0500 Subject: Revamping the numeric classes In-Reply-To: <20010207135741.A23527@math.harvard.edu>; from dpt@math.harvard.edu on Wed, Feb 07, 2001 at 01:57:41PM -0500 References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> Message-ID: <20010207190654.A981@math.harvard.edu>

On Wed, Feb 07, 2001 at 01:57:41PM -0500, Dylan Thurston wrote:

> ... One point that has not been made: given a class
> setup like
>
> then naive users can continue to use (Num a) in contexts, and the same
> programs will continue to work.

I take that back. Instance declarations would change, so this isn't a very conservative change. (Users would have to make instance declarations for Additive, Multiplicative, and Num where before they just made a declaration for Num. Of course, they don't have to write any more code.)

Best,
Dylan Thurston

From brian@boutel.co.nz Thu Feb 8 05:37:04 2001 From: brian@boutel.co.nz (Brian Boutel) Date: Thu, 08 Feb 2001 18:37:04 +1300 Subject: Revamping the numeric classes References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> Message-ID: <3A823080.9811C0F6@boutel.co.nz>

Dylan Thurston wrote:
>
>
> Why doesn't your argument show that all types should be instances of
> Eq and Show?
Why are numeric types special?

Why do you think it does? I certainly don't think so.

The point about Eq was that an objection was raised to Num being a subclass of Eq because, for some numeric types, equality is undecidable. I suggested that Haskell equality could be undecidable, so (==) on those types could reflect the real situation. One would expect that it could do so in a natural way, producing a value of True or False when possible, and diverging otherwise. Thus no convincing argument has been given for removing Eq as a superclass of Num.

In general, if you fine-grain the class hierarchy too much, the picture gets very complicated. If you need to define separate subclasses of Num for those types which have both Eq and Show, those that only have Eq, those that only have Show and those that have neither, not to mention those that have Ord as well as Eq and those that don't, and then for all the other distinctions that will be suggested, my guess is that Haskell will become the preserve of a few mathematicians and everyone else will give up in disgust. Then the likely result is that no-one will be interested in maintaining and developing Haskell and it will die.

--brian

From qrczak@knm.org.pl Thu Feb 8 04:53:35 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 8 Feb 2001 04:53:35 GMT Subject: Revamping the numeric classes References: <8BDAB3CD0E67D411B02400D0B79EA49A5F6DC1@smail01.clam.com> Message-ID:

Wed, 7 Feb 2001 16:17:38 -0500, Peter Douglass pisze:

> What I have in mind is to remove division by zero as an untypable
> expression. The idea is to require div, /, mod to take NonZeroNumeric
> values in their second argument. NonZeroNumeric values could be created by
> functions of type:
> Number a => a -> Maybe NonZeroNumeric
> or something similar.

IMHO it would be impractical.
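For concreteness, Peter's NonZeroNumeric idea, quoted just above, can at least be expressed in today's Haskell as a smart constructor; whether the extra plumbing is worth it is exactly what is being disputed here. This sketch and all its names are an editorial illustration, not code from either message.

```haskell
-- The constructor is kept out of the export list, so the only way to
-- obtain a NonZero value is through the checking function 'nonZero'.
module NonZero (NonZero, nonZero, safeDiv, example) where

newtype NonZero a = NonZero a

nonZero :: (Eq a, Num a) => a -> Maybe (NonZero a)
nonZero 0 = Nothing
nonZero x = Just (NonZero x)

-- Division now demands a witness of non-zero-ness in its type.
safeDiv :: Integral a => a -> NonZero a -> a
safeDiv x (NonZero y) = x `div` y

-- A typical use site; note the Maybe plumbing being objected to.
example :: Maybe Int
example = fmap (safeDiv 10) (nonZero 5)
```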
Often I know that the value is non-zero, but it is not statically determined, so it would just require uglification by doing that conversion and then coercing Maybe NonZeroNumeric to NonZeroNumeric. It's bottom anyway when the value is 0, but bottom would come from Maybe coercion instead of from quot, so it only gives a worse error message. It's so easy to define partial functions that it would not buy much for making it explicit outside quot. Haskell does not have subtypes so a coercion from NonZeroNumeric to plain Numbers would have to be explicit as well, even if logically it's just an injection. Everybody assumes that quot has a symmetric type as in all other languages, but in your proposal quot's arguments come from completely disjoint worlds. Moreover, 1/0 is defined on IEEE Doubles (e.g. in ghc): infinity. -- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From ketil@ii.uib.no Thu Feb 8 07:48:54 2001 From: ketil@ii.uib.no (Ketil Malde) Date: 08 Feb 2001 08:48:54 +0100 Subject: Revamping the numeric classes In-Reply-To: Dylan Thurston's message of "Wed, 7 Feb 2001 19:06:54 -0500" References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <20010207190654.A981@math.harvard.edu> Message-ID: Dylan Thurston writes: > On Wed, Feb 07, 2001 at 01:57:41PM -0500, Dylan Thurston wrote: > > ... One point that has not been made: given a class > > setup like > > > > then naive users can continue to use (Num a) in contexts, and the same > > programs will continue to work. > I take that back. Instance declarations would change, so this isn't > a very conservative change. Would it be a terribly grave change to the language to allow leaf class instance declarations to include the necessary definitions for dependent classes? E.g. class foo a where f :: ... class (foo a) => bar a where b :: ... 
    instance bar T where
        f = ...
        b = ...

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

From k19990158@192.168.1.4 Thu Feb 8 13:49:11 2001 From: k19990158@192.168.1.4 (FAIZAN RAZA) Date: Thu, 08 Feb 2001 13:49:11 Subject: Please help me Message-ID: <3.0.5.32.20010208134911.00901760@192.168.1.4>

Hello

Please help me to solve this question.

Question: The Cartesian product of three sets, written as X x Y x Z, is defined as the set of all ordered triples such that the first element is a member of X, the second is a member of Y, and the third is a member of set Z. Write a Haskell function cartesianProduct which, when given three lists (to represent three sets) of integers, returns a list of lists of ordered triples.

For example, cartesianProduct [1,3] [2,4] [5,6] returns

    [[1,2,5],[1,2,6],[1,4,5],[1,4,6],[3,2,5],[3,2,6],[3,4,5],[3,4,6]]

Please send me a reply as soon as possible.

Ok I wish you all the best of luck

From Tom.Pledger@peace.com Thu Feb 8 08:00:58 2001 From: Tom.Pledger@peace.com (Tom Pledger) Date: Thu, 8 Feb 2001 21:00:58 +1300 (NZDT) Subject: Revamping the numeric classes In-Reply-To: <20010207135741.A23527@math.harvard.edu> References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> Message-ID: <200102080800.VAA36591@waytogo.peace.co.nz>
Since then I've read the oft-cited paper "On Understanding Types, Data Abstraction, and Polymorphism" (Cardelli & Wegner, ACM Computing Surveys, Dec 1985), which suggests a nice answer: give the numeric literal 10 the range type 10..10, which is defined implicitly and is a subtype of both -128..127 (Int8) and 0..255 (Word8). The differences in arithmetic on certain important range types could be represented by multiple primitive functions (or perhaps foreign functions, through the FFI): primAdd :: Integer -> Integer -> Integer -- arbitrary precision primAdd8s :: Int8 -> Int8 -> Int8 -- overflow at -129, 128 primAdd8u :: Word8 -> Word8 -> Word8 -- overflow at -1, 256 -- etc. instance Additive Integer where zero = 0 (+) = primAdd ...with similar instances for the integer subrange types which may overflow. These other instances would belong outside the standard Prelude, so that the ambiguity questions don't trouble people (such as beginners) who don't care about the space and time advantages of fixed precision integers. Subtyping offers an alternative approach to handling arithmetic overflows: - Use only arbitrary precision arithmetic. - When calculated result *really* needs to be packed into a fixed precision format, project it (or treat it down, etc., whatever's your preferred name), so that overflows are represented as Nothing. For references to other uses of class Subtype see: http://www.mail-archive.com/haskell@haskell.org/msg07303.html For a reference to some unification-driven rewrites, see: http://www.mail-archive.com/haskell@haskell.org/msg07327.html Marcin 'Qrczak' Kowalczyk writes: : | Assuming that Ints can be implicitly converted to Doubles, is the | function | f :: Int -> Int -> Double -> Double | f x y z = x + y + z | ambiguous? 
Because there are two interpretations: | f x y z = realToFrac x + realToFrac y + z | f x y z = realToFrac (x + y) + z | | Making this and similar case ambiguous means inserting lots of explicit | type signatures to disambiguate subexpressions. | | Again, arbitrarily choosing one of the alternatives basing on some | set of weighting rules is dangerous, I don't think the following disambiguation is too arbitrary: x + y + z -- as above --> (x + y) + z -- left-associativity of (+) --> realToFrac (x + y) + z -- injection (or treating up) done -- conservatively, i.e. only where needed Regards, Tom From ashley@semantic.org Thu Feb 8 10:04:45 2001 From: ashley@semantic.org (Ashley Yakeley) Date: Thu, 8 Feb 2001 02:04:45 -0800 Subject: Please help me Message-ID: <200102081004.CAA25935@mail4.halcyon.com> At 2001-02-08 13:49, FAIZAN RAZA wrote: >write a Haskell >function cartesianProduct which when given three lists (to represent three >sets) of integers returns a list of lists of ordered triples. That's easy. Just define 'product' as a function that finds the cartesian product of any number of lists, and then once you've done that you can apply it to make the special case of three items like this: cartesianProduct a b c = product [a,b,c] At least, that's how I would do it. -- Ashley Yakeley, Seattle WA From mk167280@students.mimuw.edu.pl Thu Feb 8 10:09:35 2001 From: mk167280@students.mimuw.edu.pl (Marcin 'Qrczak' Kowalczyk) Date: Thu, 8 Feb 2001 11:09:35 +0100 (CET) Subject: Revamping the numeric classes In-Reply-To: <200102080800.VAA36591@waytogo.peace.co.nz> Message-ID: On Thu, 8 Feb 2001, Tom Pledger wrote: > nice answer: give the numeric literal 10 the range type 10..10, which > is defined implicitly and is a subtype of both -128..127 (Int8) and > 0..255 (Word8). What are the inferred types for f = map (\x -> x+10) g l = l ++ f l ? I hope I can use them as [Int] -> [Int]. 
> x + y + z                   -- as above
>
> --> (x + y) + z             -- left-associativity of (+)
>
> --> realToFrac (x + y) + z  -- injection (or treating up) done
>                             -- conservatively, i.e. only where needed

What does "where needed" mean? Type inference does not proceed
inside-out. What about this?

    h f = f (1::Int) == (2::Int)

Can I apply h to a function of type Int->Double? If no, then it's a
pity, because I could inline it (the comparison would be done on
Doubles). If yes, then what is the inferred type for h? Note that
Int->Double is not a subtype of Int->Int, so if h :: (Int->Int)->Bool,
then I can't imagine how h can be applied to something :: Int->Double.

-- 
Marcin 'Qrczak' Kowalczyk

From ashley@semantic.org Thu Feb 8 10:11:30 2001 From: ashley@semantic.org (Ashley Yakeley) Date: Thu, 8 Feb 2001 02:11:30 -0800 Subject: Please help me Message-ID: <200102081011.CAA26277@mail4.halcyon.com>

At 2001-02-08 02:04, Ashley Yakeley wrote:

>That's easy. Just define 'product' as a function that finds the cartesian
>product of any number of lists, and then once you've done that you can
>apply it to make the special case of three items like this:
>
>cartesianProduct a b c = product [a,b,c]
>
>At least, that's how I would do it.

eesh, 'product' is something else in the Prelude. Better call it
'cartprod' or something.

-- 
Ashley Yakeley, Seattle WA

From karczma@info.unicaen.fr Thu Feb 8 11:24:49 2001 From: karczma@info.unicaen.fr (Jerzy Karczmarczuk) Date: Thu, 08 Feb 2001 11:24:49 +0000 Subject: Revamping the numeric classes References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> Message-ID: <3A828201.8533014B@info.unicaen.fr>

First, a general remark which has nothing to do with Num.
PLEASE WATCH YOUR DESTINATION ADDRESSES

People regularly send their postings to haskell-cafe with several
private receiver addresses, which is a bit annoying when you click
"reply all"...

Brian Boutel after Dylan Thurston:

> > Why doesn't your argument show that all types should be instances of
> > Eq and Show? Why are numeric types special?
>
> Why do you think it does? I certainly don't think so.
>
> The point about Eq was that an objection was raised to Num being a
> subclass of Eq because, for some numeric types, equality is
> undecidable. I suggested that Haskell equality could be undecidable,
> so (==) on those types could reflect the real situation. One would
> expect that it could do so in a natural way, producing a value of True
> or False when possible, and diverging otherwise. Thus no convincing
> argument has been given for removing Eq as a superclass of Num.
>
> In general, if you fine-grain the class hierarchy too much, the
> picture gets very complicated. If you need to define separate
> subclasses of Num for those types which have both Eq and Show, those
> that only have Eq, those that only have Show and those that have
> neither, not to mention those that have Ord as well as Eq and those
> that don't, and then for all the other distinctions that will be
> suggested, my guess is that Haskell will become the preserve of a few
> mathematicians and everyone else will give up in disgust. Then the
> likely result is that no-one will be interested in maintaining and
> developing Haskell and it will die.

Strange, but from the objectives mentioned in the last part of this
posting (even if a little demagogic [insert smiley here if you wish])
I draw the opposite conclusions.

The fact that the number of cases is quite large suggests that Eq, Show
and arithmetic should be treated as *orthogonal* issues, and treated
independently. If somebody needs Show for his favourite data type, he
is free to arrange this himself.
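The orthogonal split argued for here can be sketched with a hypothetical
`Additive` class that carries neither Eq nor Show as a superclass; the
un-truncated power-series arithmetic Jerzy mentions below then needs no
equality or printing at all. (The class, operator, and type names are
illustrative, not from any standard library.)

```haskell
-- A hypothetical additive class: no Eq or Show superclass.
class Additive a where
    zero :: a
    (+.) :: a -> a -> a

-- Coefficients of a lazy (conceptually infinite) power series.
newtype Series = Series [Integer]

instance Additive Series where
    zero                   = Series (repeat 0)
    Series xs +. Series ys = Series (zipWith (+) xs ys)

-- Inspect finitely many coefficients; no Show instance required.
coeffs :: Int -> Series -> [Integer]
coeffs n (Series xs) = take n xs
```

For instance, `coeffs 3 (Series [1,2,3] +. Series (repeat 1))` yields
`[2,3,4]`; nothing here ever forces an Eq or Show constraint.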
I repeat what I have already said: I work with functional objects as
mathematical entities. I want to add parametric surfaces, to rotate
trajectories. Also, to handle gracefully and legibly for those
simpletons who call themselves 'theoretical physicists', the arithmetic
of un-truncated lazy streams representing power series, or infinitely
dimensional differential algebra elements.

Perhaps those are not convincing arguments for Brian Boutel. They are
certainly so for me. Num, with this forced marriage of (+) and (*),
violates the principle of orthogonality. The Eq and Show constraints
make it worse.

===

And, last, but very high on my check-list:

The implicit coercion of numeric constants: 3.14 -=->> (fromDouble 3.14)
etc. is sick. (Or was; I still didn't install the last version of GHC,
and with Hugs it is bad). The decision is taken by the compiler
internally, and it doesn't care at all about the fact that in my prelude
I have eliminated the Num class and redefined fromDouble, fromInt, etc.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Dylan Thurston terminates his previous posting about Num with:

> Footnotes:
> [1] Except for the lack of abs and signum, which should be in some
> other class. I have to think about their semantics before I can say
> where they belong.

Now, signum and abs seem to be quite distinct beasts. Signum seems to
require Ord (and a generic zero...).

Abs, from the mathematical point of view, constitutes a *norm*. Now,
frankly, I haven't the slightest idea how to cast this concept into the
Haskell class hierarchy in a sufficiently general way...

I'll tell you anyway that if you try to "sanitize" the numeric classes,
if you separate additive structures and the multiplication, if you
finally define abstract Vectors over some field of scalars, and if you
demand the existence of a generic normalization for your vectors, then
*most probably* you will need multiparametric classes with dependencies.
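With the multi-parameter type classes and functional dependencies that
later became available as GHC extensions, such a vector class with a
generic norm might be sketched as follows (class and method names are
illustrative only):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances #-}

-- A sketch of vectors over a scalar field: the dependency v -> s says
-- each vector type determines its scalar type.
class Vector s v | v -> s where
    scale :: s -> v -> v
    norm  :: v -> s

instance Vector Double (Double, Double) where
    scale k (x, y) = (k * x, k * y)
    norm (x, y)    = sqrt (x * x + y * y)

-- Generic normalization, available for every instance.
normalise :: (Fractional s, Vector s v) => v -> v
normalise v = scale (recip (norm v)) v
```

For example, `normalise (3, 4)` comes out (up to rounding) as
`(0.6, 0.8)`; without the functional dependency the call would be
ambiguous, which is exactly the point made above.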
Jerzy Karczmarczuk
Caen, France

From CAngus@Armature.com Thu Feb 8 10:29:29 2001 From: CAngus@Armature.com (Chris Angus) Date: Thu, 8 Feb 2001 10:29:29 -0000 Subject: Please help me Message-ID: <753866CAB183D211883F0090271F46C204AB8AC6@COW>

Faizan,

A clue is to use list comprehensions (which are very like ZF set
notation).

First think how you would define a cartesian product in set notation

    X x Y x Z = {(x,y,z) | ...}

and then think how this is written in list comprehension notation.

Chris

> -----Original Message-----
> From: FAIZAN RAZA [mailto:k19990158@192.168.1.4]
> Sent: 08 February 2001 13:49
> To: haskell-cafe@haskell.org
> Subject: Please help me
>
>
> Hello
>
>
> Please help me to solve this question
>
>
> Question
>
> The Cartesian Product of three sets, written as X x Y x Z, is defined
> as the set of all ordered triples such that the first element is a
> member of X, the second is a member of Y, and the third a member of
> set Z. Write a Haskell function cartesianProduct which, when given
> three lists (to represent three sets) of integers, returns a list of
> lists of ordered triples.
> For example, cartesianProduct [1,3] [2,4] [5,6] returns
> [[1,2,5],[1,2,6],[1,4,5],[1,4,6],[3,2,5],[3,2,6],[3,4,5],[3,4,6]]
>
> Please send me a reply as soon as possible
>
> Ok
>
> I wish you all the best of luck
>
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe

From fjh@cs.mu.oz.au Thu Feb 8 10:41:56 2001 From: fjh@cs.mu.oz.au (Fergus Henderson) Date: Thu, 8 Feb 2001 21:41:56 +1100 Subject: Revamping the numeric classes In-Reply-To: ; from ketil@ii.uib.no on Thu, Feb 08, 2001 at 08:48:54AM +0100 References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <20010207190654.A981@math.harvard.edu> Message-ID: <20010208214156.B4303@venus.cs.mu.oz.au>

On 08-Feb-2001, Ketil Malde wrote:

> Would it be a terribly grave change to the language to allow leaf
> class instance declarations to include the necessary definitions for
> dependent classes? E.g.
>
>     class foo a where
>         f :: ...
>
>     class (foo a) => bar a where
>         b :: ...
>
>     instance bar T where
>         f = ...
>         b = ...

I think that proposal is a good idea. It means that the user of a class
which inherits from some complicated class hierarchy doesn't need to
know (or to write code which depends on) any of the details of that
class hierarchy. Instead, they can just give instance declarations for
the classes that they want to use, and provide definitions for all of
the relevant members.

It means that the developer of a class can split that class into two or
more sub-classes without breaking (source level) backwards
compatibility.

One point that needs to be resolved is the interaction with default
methods. Consider

    class foo a where
        f :: ...
        f = ...
        f2 :: ...
        f2 = ...

    class (foo a) => bar a where
        b :: ...
    instance bar T where
        -- no definitions for f or f2
        b = 42

Should this define an instance for `foo T'? (I think not.)

How about if the instance declaration is changed to

    instance bar T where
        f = 41
        -- no definition for f2
        b = 42

? (In that case, I think it should.)

-- 
Fergus Henderson  | "I have always known that the pursuit
                  |  of excellence is a lethal habit"
WWW:              |     -- the last words of T. S. Garp.

From elke.kasimir@catmint.de Thu Feb 8 14:11:21 2001 From: elke.kasimir@catmint.de (Elke Kasimir) Date: Thu, 08 Feb 2001 15:11:21 +0100 (CET) Subject: Show, Eq not necessary for Num [Was: Revamping the numeric c In-Reply-To: Message-ID:

On 07-Feb-2001 Patrik Jansson wrote:

(interesting stuff deleted)

> As far as I remember from the earlier discussion, the only really
> visible reason for Show, Eq to be superclasses of Num is that class
> contexts are simpler when (as is often the case) numeric operations,
> equality and show are used in some context.
>
>     f :: Num a => a -> String -- currently
>     f a = show (a+a==2*a)
>
> If Show, Eq, Num were uncoupled this would be
>
>     f :: (Show a, Eq a, Num a) => a -> String
>
> But I think I could live with that. (In fact, I rather like it.)

Basically I am too. However, what is missing for me is something like:

    type Comfortable a = (Show a, Eq a, Num a) => a

or

    class (Show a, Read a, Eq a) => Comfortable a
    instance (Show a, Read a, Eq a) => Comfortable a

I think here is a point where a general flaw of class hierarchies as a
means of software design becomes obvious: the programmer is forced to
arbitrarily prefer a few generalizations over all others in a global,
context-independent design decision.

The oo community (being the source of all the evil...)
usually relies on the rather problematic ontological assumption that,
at least from a certain point of view (problem domain, design,
implementation), the relevant concepts form in a natural way a kind of
generalization hierarchy, and that this generalization provides a
natural way to design the software (in our case, determine the type
system in some a priori fashion).

Considering the fact that for a concept for which (given a certain
point of view) n elementary predicates hold a priori, n! possible
generalizations exist a priori, this assumption can be questioned.

Contrary to the given assumption, I have made the experience that, when
trying to classify concepts, even a light shift in the situation under
consideration can lead to a severe change in what appears to be the
"natural" classification.

Besides this, as is apparent in Show a => Num a, it is not always
a priori generalizations that are really needed. Instead, things must
be fitted into the current point of view with a bit of force, thus
changing concepts or even inventing new ones. (For example, the oo
community, which likes (or is forced?) to "ontologize" relationships
into "objects", has invented "factories" for different things, ranging
from GUI border frames to database connection handles. Behind such an
at first glance totally arbitrary conceptualization might stand a more
rational concept, for example applying a certain library design
principle called "factory" to different types of things. However, one
can't always wait until the rationale behind a certain solution is
clearly recognized.)

In my experience, both class membership and generalization
relationships are often needed locally and post hoc, and they sometimes
even express empirical (a posteriori) relations between concepts
instead of true analytical (a priori) generalization relationships.
As a consequence, in my opinion, programming languages should make it
possible and easy to employ post-hoc and local class membership
declarations and post-hoc and local class hierarchy declarations (or
even re-organizations).

There will of course be situations where a global a priori declaration
of generalization nevertheless still makes complete sense.

For Haskell, I could imagine (without having thought much about it), in
addition to the things mentioned in the beginning, several things
supporting the "local, fast and easy", including a means to define
classes with implied memberships, for example declarations saying that
"Foo is the class of all types in scope for which somefoo :: ... is
defined", or declarations saying that "class Num is locally restricted
to all instances of global Num which also belong to Eq".

Elke.

---
Elke Kasimir
Skalitzer Str. 79
10997 Berlin (Germany)
fon: +49 (030) 612 852 16
mail: elke.kasimir@catmint.de
for pgp public key see:

From peterd@availant.com Thu Feb 8 15:51:58 2001 From: peterd@availant.com (Peter Douglass) Date: Thu, 8 Feb 2001 10:51:58 -0500 Subject: Revamping the numeric classes Message-ID: <8BDAB3CD0E67D411B02400D0B79EA49A5F6DC5@smail01.clam.com>

Marcin Kowalczyk wrote:

> Wed, 7 Feb 2001 16:17:38 -0500, Peter Douglass
> pisze:
>
> > What I have in mind is to remove division by zero as an untypable
> > expression. The idea is to require div, /, mod to take NonZeroNumeric
> > values in their second argument. NonZeroNumeric values could be
> > created by functions of type:
> > Number a => a -> Maybe NonZeroNumeric
> > or something similar.
>
> IMHO it would be impractical.

The first part of my question (not contained in your reply) is whether
it is feasible to disable a developer's access to the "unsafe"
numerical operations. Whether or not an individual developer chooses to
do so is another matter.
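The NonZeroNumeric idea quoted above can be sketched with a smart
constructor; all names here are hypothetical, and in a real module the
`NonZero` data constructor would be kept unexported so that the check
for zero cannot be bypassed:

```haskell
-- Hypothetical safe-division sketch. Only mkNonZero can build a
-- NonZero, so the scrutiny for zero is forced before any division.
newtype NonZero a = NonZero a

mkNonZero :: (Eq a, Num a) => a -> Maybe (NonZero a)
mkNonZero x
    | x == 0    = Nothing
    | otherwise = Just (NonZero x)

safeDiv :: Fractional a => a -> NonZero a -> a
safeDiv x (NonZero y) = x / y
```

With this, `safeDiv` itself is total; the partiality is pushed into the
`Maybe` returned by `mkNonZero`, which is exactly the trade-off the
thread is debating.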
> Often I know that the value is non-zero, but it is not
> statically determined,

If you "know" the value is non-zero before run-time, then that is
statically determined. Otherwise, you don't "know" that.

> so it would just require uglification by
> doing that conversion and then coercing Maybe NonZeroNumeric to
> NonZeroNumeric.

Ugliness is in the eye of the beholder I suppose. For some
applications, every division should be preceded by an explicit test for
zero, or the denominator must be "known" to be non-zero by the way in
which it was created. Forcing a developer to extract a NonZeroNumeric
value from a Maybe NonZeroNumeric value seems equivalent to me.

> It's bottom anyway when the value is 0, but bottom
> would come from Maybe coercion instead of from quot, so it only gives
> a worse error message.

It is possible that the developer writes a function which returns a
nonZeroNumeric value which actually has a value of zero. However, the
value of requiring division to have a nonZeroNumeric denominator is to
catch at compile time the "error" of failing to scrutinize (correctly
or incorrectly) for zero. For most commercial software, the quality of
run-time error messages is far less important than their absence.

> It's so easy to define partial functions that it would not buy much
> for making it explicit outside quot.
>
> Haskell does not have subtypes so a coercion from NonZeroNumeric to
> plain Numbers would have to be explicit as well, even if logically
> it's just an injection.

If one is aiming to write code which cannot fail at run-time, then
extra work must be done anyway. The only question is whether the
language will support such a discipline.

> Everybody assumes that quot has a symmetric
> type as in all other languages, but in your proposal quot's arguments
> come from completely disjoint worlds.
If it is optional but not required that a developer may disable unsafe
division, then developers who expect arithmetic to work in the usual
way will not be disappointed.

> Moreover, 1/0 is defined on IEEE Doubles (e.g. in ghc): infinity.

This solution doesn't always help with code safety.

Thanks for the response.
--PeterD

From dpt@math.harvard.edu Thu Feb 8 17:43:08 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Thu, 8 Feb 2001 12:43:08 -0500 Subject: Revamping the numeric classes In-Reply-To: <3A828201.8533014B@info.unicaen.fr>; from karczma@info.unicaen.fr on Thu, Feb 08, 2001 at 11:24:49AM +0000 References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> Message-ID: <20010208124308.B2959@math.harvard.edu>

On Thu, Feb 08, 2001 at 11:24:49AM +0000, Jerzy Karczmarczuk wrote:

> First, a general remark which has nothing to do with Num.
>
> PLEASE WATCH YOUR DESTINATION ADDRESSES
> People send regularly their postings to haskell-cafe with
> several private receiver addresses, which is a bit annoying
> when you click "reply all"...

Yes, apologies. The way the lists do the headers makes it very easy to
reply to individuals, and hard to reply to the list.

> And, last, but very high on my check-list:
>
> The implicit coercion of numeric constants: 3.14 -=->> (fromDouble
> 3.14)
> etc. is sick. (Or was; I still didn't install the last version of GHC,
> and with Hugs it is bad). The decision is taken by the compiler
> internally,
> and it doesn't care at all about the fact that in my prelude
> I have eliminated the Num class and redefined fromDouble, fromInt, etc.

Can't you just put "default ()" at the top of each module? I suppose
you still have the problem that a numeric literal "5" means
"Prelude.fromInteger 5".
Can't you define your types to be instances of Prelude.Num, with no
operations defined except Prelude.fromInteger?

> Dylan Thurston terminates his previous posting about Num with:
>
> > Footnotes:
> > [1] Except for the lack of abs and signum, which should be in some
> > other class. I have to think about their semantics before I can say
> > where they belong.
>
> Now, signum and abs seem to be quite distinct beasts. Signum seems to
> require Ord (and a generic zero...).
>
> Abs from the mathematical point of view constitutes a *norm*. Now,
> frankly, I haven't the slightest idea how to cast this concept into
> the Haskell class hierarchy in a sufficiently general way...

This was one thing I liked about the Haskell hierarchy: the observation
that "signum" of real numbers is very much like "argument" of complex
numbers. abs and signum in Haskell satisfy an implicit law:

    abs x * signum x = x  [1]

So signum can be defined anywhere you can define abs (except that it's
not a continuous function, so is not terribly well-defined). A default
definition for signum x might read

    signum x = let a = abs x in if (a == 0) then 0 else x / abs x

(Possibly signum is the wrong name. What is the standard name for this
operation for, e.g., matrices?)

[Er, on second thoughts, it's not as well-defined as I thought. Abs x
needs to be in a field for the definition above to work.]

> I'll tell you anyway that if you try to "sanitize" the numeric
> classes, if you separate additive structures and the multiplication,
> if you finally define abstract Vectors over some field of scalars,
> and if you demand the existence of a generic normalization for your
> vectors, then *most probably* you will need multiparametric classes
> with dependencies.

Multiparametric classes, certainly (for Vectors, at least).
Fortunately, they will be in Haskell 2 with high probability. I'm not
convinced about dependencies yet.
> Jerzy Karczmarczuk
> Caen, France

Best,
Dylan Thurston

Footnotes:
[1] I'm not sure what I mean by "=" there, since I do not believe these
should be forced to be instances of Eq. For clearer cases, consider the
various Monad laws, e.g.,

    join . join = join . map join

(Hope I got that right.) What does "=" mean there? Some sort of
denotational equality, I suppose.

From dpt@math.harvard.edu Thu Feb 8 19:55:14 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Thu, 8 Feb 2001 14:55:14 -0500 Subject: Instances of multiple classes at once In-Reply-To: <20010208214156.B4303@venus.cs.mu.oz.au>; from fjh@cs.mu.oz.au on Thu, Feb 08, 2001 at 09:41:56PM +1100 References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <20010207190654.A981@math.harvard.edu> <20010208214156.B4303@venus.cs.mu.oz.au> Message-ID: <20010208145514.C2959@math.harvard.edu>

(Superficially irrelevant digression:)

Simon Peyton-Jones came here today and talked about his combinator
library for financial applications, as in his paper "Composing
Contracts". One of the points he made was that a well-designed
combinator library for financial traders should have combinators that
work on a high level; then, when they want to start writing their own
contracts, they can learn about a somewhat smaller set of building
blocks inside that; then eventually they might learn about the
fundamental building blocks. (Examples of different levels from the
paper: "european"; "zcb"; "give"; "anytime".)

One theory is that a well-designed class library has the same property.
But standard Haskell doesn't allow this; that is why I like the
proposal to allow a single instance to simultaneously declare instances
of superclasses.

One problem is how to present the information on the type hierarchy to
users.
(This is a problem in Haskell anyway; I find myself referring to the
source of the Prelude while writing programs, which seems like a Bad
Thing when extrapolated to larger modules.)

On Thu, Feb 08, 2001 at 09:41:56PM +1100, Fergus Henderson wrote:

> One point that needs to be resolved is the interaction with default
> methods.
>
> Consider
>
>     class foo a where
>         f :: ...
>         f = ...
>         f2 :: ...
>         f2 = ...
>
>     class (foo a) => bar a where
>         b :: ...
>
>     instance bar T where
>         -- no definitions for f or f2
>         b = 42
>
> Should this define an instance for `foo T'?
> (I think not.)

Whyever not? Because there is no textual mention of class Foo in the
instance for Bar? Think about the case of a superclass with no methods;
wouldn't you want to allow automatic instances in this case?

One might even go further and allow a class to declare default methods
for a superclass:

    class Foo a where
        f :: ...

    class (Foo a) => Bar a where
        b :: ...
        b = ...
        f = ...

Best,
Dylan Thurston

From qrczak@knm.org.pl Thu Feb 8 20:51:57 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 8 Feb 2001 20:51:57 GMT Subject: Show, Eq not necessary for Num [Was: Revamping the numeric c References: Message-ID:

Thu, 08 Feb 2001 15:11:21 +0100 (CET), Elke Kasimir pisze:

> However, what is missing for me is something like:
>
>     type Comfortable a = (Show a, Eq a, Num a) => a
>
> or
>
>     class (Show a, Eq a, Num a) => Comfortable a
>     instance (Show a, Eq a, Num a) => Comfortable a

I agree and think it should be easy to add. The latter syntax is nice:
obvious what it means, not legal today. This instance of course
conflicts with any other instance of that class, so it can be
recognized and treated specially as a "class synonym".
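As it happens, the "class synonym" pattern described here does compile
in later GHCs, given a couple of extensions; this is a sketch of the
idea, not Haskell 98:

```haskell
{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

-- A "class synonym": Comfortable has no methods of its own and one
-- universal instance, so it merely abbreviates the three constraints.
class    (Show a, Eq a, Num a) => Comfortable a
instance (Show a, Eq a, Num a) => Comfortable a

-- Patrik Jansson's example, written with the single constraint.
f :: Comfortable a => a -> String
f a = show (a + a == 2 * a)
```

For example, `f (1 :: Int)` is `"True"`; the superclass constraints of
`Comfortable` supply Show, Eq, and Num inside `f`.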
> For Haskell, I could imagine (without having thought much about it),
> in addition to the things mentioned in the beginning, several things
> supporting the "local, fast and easy", including a means to define
> classes with implied memberships, for example declarations saying
> that "Foo is the class of all types in scope for which
> somefoo :: ... is defined", or declarations saying that "class Num is
> locally restricted to all instances of global Num which also belong
> to Eq".

Here I would be more careful. I don't know whether local instances or
local classes can be defined to make sense, nor whether they could be
useful enough...

-- 
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/  ^^ SYGNATURA ZASTĘPCZA QRCZAK

From qrczak@knm.org.pl Thu Feb 8 20:45:16 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 8 Feb 2001 20:45:16 GMT Subject: Revamping the numeric classes References: <8BDAB3CD0E67D411B02400D0B79EA49A5F6DC5@smail01.clam.com> Message-ID:

Thu, 8 Feb 2001 10:51:58 -0500, Peter Douglass pisze:

> The first part of my question (not contained in your reply) is
> whether it is feasible to disable a developer's access to the
> "unsafe" numerical operations.

    import Prelude hiding (quot, rem, (/) {- etc. -})
    import YourPrelude -- which defines substitutes

You can "disable" them now. You cannot disable them entirely - anyone
can define the present functions in terms of your functions if he
really wants.

> Whether or not an individual developer chooses to do so is another
> matter.

Why only quot? There are many other ways to write bottom:

    head []
    (\(x:xs) -> (x,xs)) []
    let x = x in x
    log (-1)
    asin 2
    error "foo"

> If you "know" the value is non-zero before run-time, then that is
> statically determined.

I know, but the compiler does not know, and I have no way to convince
it.

> It is possible that the developer writes a function which returns a
> nonZeroNumeric value which actually has a value of zero.
> However, the value of requiring division to have a nonZeroNumeric
> denominator is to catch at compile time the "error" of failing to
> scrutinize (correctly or incorrectly) for zero.

IMHO it would be more painful than useful.

> For most commercial software, the quality of run-time error messages
> is far less important than their absence.

It would not avoid them if the interface does not give a place to
report the error:

    average xs = sum xs / case checkZero (length xs) of
        Just notZero -> notZero
        Nothing      -> error "This should never happen"

is not any more safe than

    average xs = sum xs / length xs

and I can report bad input without trouble now:

    average xs = case length xs of
        0 -> Nothing
        l -> Just (sum xs / l)

-- 
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/  ^^ SYGNATURA ZASTĘPCZA QRCZAK

From qrczak@knm.org.pl Thu Feb 8 20:28:13 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 8 Feb 2001 20:28:13 GMT Subject: Revamping the numeric classes References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <20010207190654.A981@math.harvard.edu> <20010208214156.B4303@venus.cs.mu.oz.au> Message-ID:

Thu, 8 Feb 2001 21:41:56 +1100, Fergus Henderson pisze:

> Should this define an instance for `foo T'?
> (I think not.)
>
> How about if the instance declaration is changed to
>
>     instance bar T where
>         f = 41
>         -- no definition for f2
>         b = 42
>
> ?
> (In that case, I think it should.)

I don't like the idea of treating the case "no explicit definitions
were given because all have default definitions which are OK"
differently from "some explicit definitions were given".

When there is a superclass, it must have an instance defined, so if we
permit such a thing at all, I would let it implicitly define all
superclass instances not defined explicitly, or something like that.
At least when all methods have default definitions. Yes, I know that
they can be mutually recursive and thus all will be bottoms...

So maybe there should be a way to specify that default definitions are
cyclic and some of them must be defined? It is usually written in
comments anyway, because it is not immediately visible in the
definitions. If not formally in the language (now any method definition
can be omitted even if it has no default!), then perhaps the compiler
could detect most cases when methods are defined in terms of one
another and give a warning.

Generally the compiler could warn if the programmer has written bottom
in an unusual way. For example

    f x = g some_expression
    g x = f some_expression

is almost certainly a programmer error.

-- 
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/  ^^ SYGNATURA ZASTĘPCZA QRCZAK

From qrczak@knm.org.pl Thu Feb 8 20:30:31 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 8 Feb 2001 20:30:31 GMT Subject: Revamping the numeric classes References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> Message-ID:

Thu, 08 Feb 2001 11:24:49 +0000, Jerzy Karczmarczuk pisze:

> The implicit coercion of numeric constants: 3.14 -=->> (fromDouble
> 3.14) etc. is sick.

What do you propose instead? (BTW, it's fromRational, to keep
arbitrarily large precision.)

> Now, signum and abs seem to be quite distinct beasts. Signum seems
> to require Ord (and a generic zero...).

Signum doesn't require Ord:

    signum z = z / abs z

for complex numbers.
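This definition can be checked on Complex quickly; the sketch below
adds a zero case (to avoid dividing by zero) and the `signum'` name is
just to avoid clashing with the Prelude:

```haskell
import Data.Complex

-- Signum via abs, as suggested; needs only Eq and Fractional, not Ord.
-- For Complex, abs is the magnitude (as a complex number with zero
-- imaginary part), so z / abs z is the unit-length direction of z.
signum' :: (Eq a, Fractional a) => a -> a
signum' z
    | z == 0    = 0
    | otherwise = z / abs z
```

For instance, `signum' (3 :+ 4)` comes out (up to rounding) as
`0.6 :+ 0.8`, a number of magnitude 1, matching the law
`abs x * signum x = x` discussed earlier in the thread.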
-- 
__("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
\__/  ^^ SYGNATURA ZASTĘPCZA QRCZAK

From brian@boutel.co.nz Thu Feb 8 21:37:46 2001 From: brian@boutel.co.nz (Brian Boutel) Date: Fri, 09 Feb 2001 10:37:46 +1300 Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] References: Message-ID: <3A8311AA.E095770A@boutel.co.nz>

Patrik Jansson wrote:

> On Wed, 7 Feb 2001, Brian Boutel wrote:
> > * Haskell equality is a defined operation, not a primitive, and may not
> > be decidable. It does not always define equivalence classes, because
> > a==a may be Bottom, so what's the problem? It would be a problem,
> > though, to have to explain to a beginner why they can't print the result
> > of a computation.
>
> The fact that equality can be trivially defined as bottom does not imply
> that it should be a superclass of Num, it only explains that there is an
> ugly way of working around the problem. Neither is the argument that the
> beginner should be able to print the result of a computation a good
> argument for having Show as a superclass.

There is nothing trivial or ugly about a definition that reflects
reality and bottoms only where equality is undefined.

Of course, if you do not need to apply equality to your "numeric" type
then having to define it is a waste of time, but consider this:

- Having a class hierarchy at all (or making any design decision)
  implies compromise.

- The current hierarchy (and its predecessors) represent a reasonable
  compromise that meets most needs.

- Users have a choice: either work within the class hierarchy and
  accept the pain of having to define things you don't need in order to
  get the things that come for free, or omit the instance declarations
  and work outside the hierarchy. In that case you will not be able to
  use the overloaded operator symbols of the class, but that is just a
  matter of concrete syntax, and ultimately unimportant.
--brian

From wli@holomorphy.com Fri Feb 9 01:37:31 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Thu, 8 Feb 2001 17:37:31 -0800 Subject: Revamping the numeric classes In-Reply-To: ; from qrczak@knm.org.pl on Thu, Feb 08, 2001 at 08:30:31PM +0000 References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> Message-ID: <20010208173731.B960@holomorphy.com>

On Thu, Feb 08, 2001 at 08:30:31PM +0000, Marcin 'Qrczak' Kowalczyk wrote:

> Signum doesn't require Ord.
> signum z = z / abs z
> for complex numbers.

I'd be careful here.

\begin{code}
signum 0 = 0
signum z = z / abs z
\end{code}

This is, perhaps, neither precise nor general enough. The signum/abs
pair seem to represent direction and magnitude. According to the line
of reasoning in some of the earlier posts in this flamewar, the
following constraints:

    (1) z = signum z <*> abs z   where <*> is appropriately defined
    (2) abs $ signum z = 1

should be enforced, if possible, by the type system. This suggests that
for any type having a vector space structure over Fractional (or
whatever the hierarchy you're brewing up uses for rings with a division
partial function on them), the result type of signum lives in a more
restricted universe, perhaps even one with a different structure
(operations defined on it, set of elements) than the argument type, and
it seems more than possible to parametrize it on the argument type. The
abs is in fact a norm, and the signum projects V^n -> V^n / V. Attempts
to define these things on Gaussian integers, p-adic numbers, polynomial
rings, and rational points on elliptic curves will quickly reveal
limitations of the stock class hierarchy.
Now, whether it's actually desirable to scare newcomers to the language into math phobia, wetting their pants, and running screaming with subtleties like this suggests perhaps that one or more "alternative Preludes" may be desirable to have. There is a standard Prelude, why not a nonstandard one or two? We have the source. The needs of the geek do not outweigh the needs of the many. Hence, we can cook up a few Preludes or so on our own, and certainly if we can tinker enough to spam the list with counterexamples and suggestions of what we'd like the Prelude to have, we can compile up a Prelude for ourselves with our "suggested changes" included and perhaps one day knock together something which can actually be used and has been tested, no? The Standard Prelude serves its purpose well and accommodates the largest cross-section of users. Perhaps a Geek Prelude could accommodate the few of us who do need these sorts of schenanigans. Cheers, Bill -- Excel/Spreadsheet Q: What is the formula for finding out the time passed between two dates and or two times in the same day? excel/spreadsheet? Hmm, this is math? Is there a GTM on excel or maybe an article in annals about spreadsheets or maybe there's a link from wolfram to doing your own computer work, eh? jeeem, haven't you seen "Introduction to Algebraic Excel"? or "Spreadsheet Space Embeddings in 2-Manifolds" i got my phd in spreadsheet theory i did my thesis on the spreadsheet conjecture From t-atolm@microsoft.com Tue Feb 6 10:53:55 2001 From: t-atolm@microsoft.com (Andrew Tolmach) Date: Tue, 6 Feb 2001 02:53:55 -0800 Subject: GHC Core output Message-ID: <8D7D23D65C1CEB44ABEE89F137A5D32815E6DA@red-msg-09.redmond.corp.microsoft.com> Timothy Docker [mailto:timd@macquarie.com.au] writes: > > > We agreed that it would be a Jolly Good Thing if GHC could > > be persuaded to produce GHC-independent Core output, > > ready to feed into some other compiler. For example, > > Karl-Filip might be able to use it. 
> > ANDREW will write a specification, and implement it. > > A quick question. What is meant by "Core output"? Subsequent posts > seem to suggest this is some "reduced Haskell", in which full Haskell > 98 can be expressed. Am I completely off beam here? > Not at all. "Core" is an intermediate language used internally by the GHC compiler. It does indeed resemble a reduced Haskell (but with explicit higher-order polymorphic types) and GHC translates full Haskell 98 into it. Currently Core has no rigorously defined external representation, although by setting certain compiler flags, one can get a (rather ad-hoc) textual representation to be printed at various points in the compilation process. (This is usually done to help debug the compiler). What we hope to do is:

- provide a formal definition of Core's external syntax;
- give a precise definition of its semantics (both static and dynamic);
- modify GHC to produce external Core files, if so requested, at one or more useful points in the compilation sequence -- e.g., just before optimization, or just after.
- modify GHC to accept external Core files in place of Haskell source files, again at one or more useful points.

The first three facilities will let one couple GHC's front-end (parser, type-checker, etc.), and optionally its optimizer, with new back-end tools. Adding the last facility will let one implement new Core-to-Core transformations in an external tool and integrate them into GHC. It will also allow new front-ends to generate Core that can be fed into GHC's optimizer or back end; however, because there are many (undocumented) idiosyncrasies in the way GHC produces Core from source Haskell, it will be hard for an external tool to produce Core that can be integrated with GHC-produced core (e.g., for the Prelude), and we don't aim to support this.
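To convey the flavor of the "reduced Haskell" described above, here is a toy AST in the same spirit: explicit lambdas, lets, and case, with every binder carrying a type. All of these datatype and constructor names are invented for illustration; they are not GHC's actual Core definitions.

```haskell
-- A miniature, hypothetical Core-like language (names are mine, not GHC's).
data Ty   = TyVar String | TyCon String | FunTy Ty Ty  deriving (Eq, Show)
data Bind = Bind String Ty                             deriving (Eq, Show)

data Expr
  = Var String
  | Lit Integer
  | App Expr Expr
  | Lam Bind Expr
  | Let Bind Expr Expr
  | Case Expr [(String, [Bind], Expr)]   -- constructor alternatives
  deriving (Eq, Show)

-- \(x :: Int) -> x, written in the toy language: every binder is typed.
identity :: Expr
identity = Lam (Bind "x" (TyCon "Int")) (Var "x")

-- Node count, the kind of simple traversal such a small language invites.
size :: Expr -> Int
size e = case e of
  Var _     -> 1
  Lit _     -> 1
  App f a   -> 1 + size f + size a
  Lam _ b   -> 1 + size b
  Let _ r b -> 1 + size r + size b
  Case s as -> 1 + size s + sum [size b | (_, _, b) <- as]
```

A language this small is easy to print, parse, and transform, which is exactly the point of giving Core a defined external representation.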
From erik@meijcrosoft.com Fri Feb 9 04:26:06 2001 From: erik@meijcrosoft.com (Erik Meijer) Date: Thu, 8 Feb 2001 20:26:06 -0800 Subject: GHC Core output References: <8D7D23D65C1CEB44ABEE89F137A5D32815E6DA@red-msg-09.redmond.corp.microsoft.com> Message-ID: <004601c09250$6caba0c0$0100a8c0@mshome.net> I would *really* love to see GHC componentized (TM); it would even be better if it would become easier to use the pieces. I would like to do experiments on smaller bits of the compiler using Hugs (ideally the whole thing!). When I was working on the Java/.NET backend I had to rebuild the whole compiler just to test a few hundred lines of code that translated Core to Java which is a major pain in the butt; I don't get a kick out of dealing with installing Cygnus, recursive multi-staged makefiles, cpp, etc. Erik "do you get a kick out of running the marathon with a ball and chain at your feet?" Meijer ----- Original Message ----- From: "Andrew Tolmach" To: "'Timothy Docker'" ; Sent: Tuesday, February 06, 2001 2:53 AM Subject: RE: GHC Core output > Timothy Docker [mailto:timd@macquarie.com.au] writes: > > > > > We agreed that it would be a Jolly Good Thing if GHC could > > > be persuaded to produce GHC-independent Core output, > > > ready to feed into some other compiler. For example, > > > Karl-Filip might be able to use it. > > > ANDREW will write a specification, and implement it. > > > > A quick question. What is meant by "Core output"? Subsequent posts > > seem to suggest this is some "reduced Haskell", in which full Haskell > > 98 can be expressed. Am I completely off beam here? > > > Not at all. > "Core" is an intermediate language used internally by the GHC compiler. > It does indeed resemble a reduced Haskell (but with explicit higher-order > polymorphic types) and GHC translates full Haskell 98 into it.
> Currently Core has no rigorously defined external representation, although > by setting certain compiler flags, one can get a (rather ad-hoc) textual > representation to be printed at various points in the compilation process. > (This is usually done to help debug the compiler). > > What we hope to do is: > > - provide a formal definition of Core's external syntax; > > - give a precise definition of its semantics (both static and dynamic); > > - modify GHC to produce external Core files, if so requested, at one or more > useful points in the compilation sequence -- e.g., just before optimization, > or just after. > > - modify GHC to accept external Core files in place of Haskell > source files, again at one or more useful points. > > The first three facilities will let one couple GHC's front-end (parser, > type-checker, etc.), and optionally its optimizer, with new back-end tools. > Adding the last facility will let one implement new Core-to-Core > transformations in an external tool and integrate them into GHC. It will > also > allow new front-ends to generate Core that can be fed into GHC's optimizer > or > back end; however, because there are many (undocumented) > idiosyncrasies in the way GHC produces Core from source Haskell, it will be > hard > for an external tool to produce Core that can be integrated with > GHC-produced core > (e.g., for the Prelude), and we don't aim to support this. > > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe@haskell.org > http://www.haskell.org/mailman/listinfo/haskell-cafe From t-atolm@microsoft.com Wed Feb 7 09:34:36 2001 From: t-atolm@microsoft.com (Andrew Tolmach) Date: Wed, 7 Feb 2001 01:34:36 -0800 Subject: GHC Core Language Message-ID: <8D7D23D65C1CEB44ABEE89F137A5D32815E6DE@red-msg-09.redmond.corp.microsoft.com> [moving to haskell-cafe] > From: matt hellige [mailto:matt@immute.net] > a quick question re: ghc's Core language...
is it still very similar > to the abstract syntax given in, for example, santos' "compilation by > transformation..." (i think it was his dissertation?) and > elsewhere, or > has it changed significantly in the last couple of years? i only ask > because i know the language used in that paper is somewhat > different from > the Core language given in peyton jones and lester's > "implementing functional > languages" from 92, and includes type annotations and so on. > > m > The current Core language is still quite similar to what is described in Santos' work; see SL Peyton Jones and A Santos, "A transformation-based optimiser for Haskell," Science of Computer Programming 32(1-3), pp3-47, September 1998. http://research.microsoft.com/Users/simonpj/papers/comp-by-trans-scp.ps.gz But there have been some noticeable changes; for example, function arguments are no longer required to be atomic. A more recent version of Core is partially described (omitting types) in SL Peyton Jones & S Marlow, "Secrets of the Glasgow Haskell Compiler Inliner," IDL'99. http://research.microsoft.com/Users/simonpj/papers/inline.ps.gz From Tom.Pledger@peace.com Fri Feb 9 04:29:09 2001 From: Tom.Pledger@peace.com (Tom Pledger) Date: Fri, 9 Feb 2001 17:29:09 +1300 Subject: Revamping the numeric classes In-Reply-To: References: <200102080800.VAA36591@waytogo.peace.co.nz> Message-ID: <14979.29205.362496.555808@waytogo.peace.co.nz> Marcin 'Qrczak' Kowalczyk writes: | On Thu, 8 Feb 2001, Tom Pledger wrote: | | > nice answer: give the numeric literal 10 the range type 10..10, which | > is defined implicitly and is a subtype of both -128..127 (Int8) and | > 0..255 (Word8). | | What are the inferred types for | f = map (\x -> x+10) | g l = l ++ f l | ? I hope I can use them as [Int] -> [Int]. f, g :: (Subtype a b, Subtype 10..10 b, Num b) => [a] -> [b] Yes, because of the substitution {Int/a, Int/b}.
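The Subtype constraint in these inferred types has no direct Haskell 98 counterpart, but the injection it stands for (what the thread calls "treating up") can be sketched as a multi-parameter class. The class, its instance, and the names up and addUp are all invented here for illustration, not part of any actual proposal.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- A hypothetical encoding of a Subtype constraint: the single method
-- performs the injection ("treating up") from the sub- to the supertype.
class Subtype a b where
  up :: a -> b

instance Subtype Int Double where
  up = fromIntegral

-- (x + y) + z with x, y :: Int and z :: Double: the Int sum is computed
-- with Int (+), and only its result is treated up to Double.
addUp :: Int -> Int -> Double -> Double
addUp x y z = up (x + y) + z
```

In a real subtyping system the compiler would insert the up coercions itself; here they are explicit, which makes the "only where needed" policy under discussion visible in the code.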
| > x + y + z -- as above | > | > --> (x + y) + z -- left-associativity of (+) | > | > --> realToFrac (x + y) + z -- injection (or treating up) done | > -- conservatively, i.e. only where needed | | What does it mean "where needed"? Type inference does not proceed | inside-out. In the expression (x + y) + z we know from the explicit type signature (in your question that I was responding to) that x,y::Int and z::Double. Type inference does not need to treat x or y up, because it can take the first (+) to be Int addition. However, it must treat the result (x + y) up to the most specific supertype which can be added to a Double. | What about this? | h f = f (1::Int) == (2::Int) | Can I apply f h? | to a function of type Int->Double? Yes. | If no, then it's a pity, because I could inline it (the comparison | would be done on Doubles). If yes, then what is the inferred type | for h? Note that Int->Double is not a subtype of Int->Int, so if h | :: (Int->Int)->Bool, then I can't imagine how h can be applied to | something :: Int->Double. There's no explicit type signature for the result of applying f to (1::Int), so... h :: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool That can be inferred by following the structure of the term. Function terms do seem prone to an accumulation of deferred subtype constraints. Regards, Tom From brian@boutel.co.nz Fri Feb 9 05:45:16 2001 From: brian@boutel.co.nz (Brian Boutel) Date: Fri, 09 Feb 2001 18:45:16 +1300 Subject: Revamping the numeric classes References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> <20010208173731.B960@holomorphy.com> Message-ID: <3A8383EC.B8D49FF4@boutel.co.nz> William Lee Irwin III wrote: > > > The Standard Prelude serves its purpose well and accommodates the > largest cross-section of users. 
Perhaps a Geek Prelude could > accommodate the few of us who do need these sorts of schenanigans. > > Amen. --brian From simonpj@microsoft.com Thu Feb 8 02:32:18 2001 From: simonpj@microsoft.com (Simon Peyton-Jones) Date: Wed, 7 Feb 2001 18:32:18 -0800 Subject: Haskell Implemetors Meeting Message-ID: <37DA476A2BC9F64C95379BF66BA269023D90BE@red-msg-09.redmond.corp.microsoft.com> GHC transforms Haskell into "Core", which is roughly the second-order lambda calculus, augmented with let(rec), case, and constructors. This is a small explicitly-typed intermediate language, in contrast to Haskell which is a very large, implicitly typed language. Getting from Haskell to Core is a lot of work, and it might be useful to be able to re-use that work. Andrew's proposal (which he'll post to the Haskell list) will define exactly what "Core" is. Simon | -----Original Message----- | From: Timothy Docker [mailto:timd@macquarie.com.au] | Sent: 05 February 2001 22:16 | To: haskell-cafe@haskell.org | Subject: Haskell Implemetors Meeting | | | | > We agreed that it would be a Jolly Good Thing if GHC could | > be persuaded to produce GHC-independent Core output, | > ready to feed into some other compiler. For example, | > Karl-Filip might be able to use it. | > ANDREW will write a specification, and implement it. | | A quick question. What is meant by "Core output"? Subsequent posts | seem to suggest this is some "reduced Haskell", in which full Haskell | 98 can be expressed. Am I completely off beam here?
| | Tim Docker | | _______________________________________________ | Haskell-Cafe mailing list | Haskell-Cafe@haskell.org | http://www.haskell.org/mailman/listinfo/haskell-cafe | From ketil@ii.uib.no Fri Feb 9 08:14:53 2001 From: ketil@ii.uib.no (Ketil Malde) Date: 09 Feb 2001 09:14:53 +0100 Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] In-Reply-To: Brian Boutel's message of "Fri, 09 Feb 2001 10:37:46 +1300" References: <3A8311AA.E095770A@boutel.co.nz> Message-ID: Brian Boutel writes: >> The fact that equality can be trivially defined as bottom does not imply >> that it should be a superclass of Num, it only explains that there is an >> ugly way of working around the problem. > There is nothing trivial or ugly about a definition that reflects > reality and bottoms only where equality is undefined. I think there is. If I design a class and derive it from Num with (==) defined as bottom, I am allowed to apply to it functions requiring a Num argument, but I have no guarantee it will work. The implementor of that function can change its internals (to use (==)), and suddenly my previously working program is non-terminating. If I defined (==) to give a run time error, it'd be a bit better, but I'd much prefer the compiler to tell me about this in advance. > Of course, if you do not need to apply equality to your "numeric" type > then having to define it is a waste of time, but consider this: It's not about "needing to apply", but about finding a reasonable definition. > - Having a class hierarchy at all (or making any design decision) > implies compromise. I think the argument is that we should move Eq and Show *out* of the Num hierarchy. Less hierarchy - less compromise. > - The current hierarchy (and its predecessors) represent a reasonable > compromise that meets most needs. Obviously a lot of people seem to think we could find compromises that are more reasonable.
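The hazard Ketil describes can be made concrete. Below, a type of functions under pointwise arithmetic (all names are mine, chosen for illustration) fills in the Num interface honestly, while its Eq instance is bottom: everything typechecks, but any Num-polymorphic code that quietly starts comparing values crashes at run time instead of failing to compile.

```haskell
-- Functions Double -> Double, made numeric pointwise.
newtype Fun = Fun (Double -> Double)

-- Equality of functions is undecidable, so (==) here is bottom:
-- the superclass obligation is met only formally.
instance Eq Fun where
  _ == _ = error "Eq Fun: function equality is undefined"

instance Show Fun where
  show _ = "<function>"

instance Num Fun where
  Fun f + Fun g  = Fun (\x -> f x + g x)
  Fun f * Fun g  = Fun (\x -> f x * g x)
  negate (Fun f) = Fun (negate . f)
  abs    (Fun f) = Fun (abs . f)
  signum (Fun f) = Fun (signum . f)
  fromInteger n  = Fun (const (fromInteger n))

apply :: Fun -> Double -> Double
apply (Fun f) = f
```

Arithmetic on Fun works fine, e.g. apply (Fun (*2) + 1) 3 gives 7.0; the program only blows up the day some Num-constrained helper starts using (==) internally, which is exactly the silent contract change Ketil objects to.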
> - Users have a choice: either work within the class hierarchy and > accept the pain of having to define things you don't need in order > to get the things that come for free, Isn't it a good idea to reduce the amount of pain? > or omit the instance declarations and work outside the hierarchy. In > that case you will not be able to use the overloaded operator > symbols of the class, but that is just a matter of concrete syntax, > and ultimately unimportant. I don't think syntax is unimportant. -kzm -- If I haven't seen further, it is by standing in the footprints of giants From karczma@info.unicaen.fr Fri Feb 9 10:52:39 2001 From: karczma@info.unicaen.fr (Jerzy Karczmarczuk) Date: Fri, 09 Feb 2001 10:52:39 +0000 Subject: In hoc signo vinces (Was: Revamping the numeric classes) References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> Message-ID: <3A83CBF7.6C635E94@info.unicaen.fr> Marcin 'Qrczak' Kowalczyk wrote: > JK> Now, signum and abs seem to be quite distincts beasts. Signum seem > JK> to require Ord (and a generic zero...). > > Signum doesn't require Ord. > signum z = z / abs z > for complex numbers. Thank you, I know. And I ignore it. Calling "signum" the result of a vector normalization (on the gauss plane in this case) is something I don't really appreciate, and I wonder why this definition infiltrated the prelude. Just because it conforms to the "normal" definition of signum for reals? Again, a violation of the orthogonality principle. Needing division just to define signum. And of course a completely different approach to define the signum of integers. Or of polynomials...
Jerzy Karczmarczuk From karczma@info.unicaen.fr Fri Feb 9 11:26:39 2001 From: karczma@info.unicaen.fr (Jerzy Karczmarczuk) Date: Fri, 09 Feb 2001 11:26:39 +0000 Subject: Revamping the numeric HUMAN ATTITUDE References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> <20010208173731.B960@holomorphy.com> <3A8383EC.B8D49FF4@boutel.co.nz> Message-ID: <3A83D3EF.3CA1DE70@info.unicaen.fr> Brian Boutel wrote: > > William Lee Irwin III wrote: > > > > > > The Standard Prelude serves its purpose well and accommodates the > > largest cross-section of users. Perhaps a Geek Prelude could > > accommodate the few of us who do need these sorts of schenanigans. > > > > > > Amen. Aha. And we will have The Prole, normal users who can live with incomplete, sometimes contradictory math, and The Inner Party of those who know The Truth? Would you agree that your children be taught at primary school some dubious matter because "they won't need the real stuff". I would agree having a minimal standard Prelude which is incomplete. But it should be sane, should avoid confusion of categories and useless/harmful dependencies. Methodologically and pedagogically it seems a bit risky. Technically it may be awkward. It will require the compiler and the standard libraries almost completely independent of each other. This is not the case now. BTW. what is a schenanigan? Is it by definition something consumed by Geeks? Is the usage of Vector Spaces restricted to those few Geeks who can't live without schenanigans? Jerzy Karczmarczuk PS. For some time I follow the discussion on some newsgroups dealing with computer graphics, imagery, game programming, etc. I noticed a curious, strong influence of people who shout loudly: "Math?! You don't need it really. Don't waste your time on it!
Don't waste your time on cute algorithms, they will be slow as hell. Learn assembler, "C", MMX instructions, learn DirectX APIs, forget this silly geometric speculations. Behave *normally*, as a *normal* computer user, not as a speculative mathematician!" And I noticed that REGULARLY, 1 - 4 times a week some freshmen ask over and over again such questions: 1. How to rotate a vector in 3D? 2. How to zoom an image? 3. What is a quaternion, and why some people hate them so much? 4. How to compute a trajectory if I know the force acting on the object. To summarize: people who don't use and don't need math always feel right to discourage others to give to it an adequate importance. It is not they who will suffer from badly constructed math layer of a language, or from badly taught math concepts, so they don't care too much. From dpt@math.harvard.edu Fri Feb 9 16:48:33 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Fri, 9 Feb 2001 11:48:33 -0500 Subject: Show, Eq not necessary for Num [Was: Revamping the numeric c In-Reply-To: ; from qrczak@knm.org.pl on Thu, Feb 08, 2001 at 08:51:57PM +0000 References: Message-ID: <20010209114833.A26885@math.harvard.edu> On Thu, Feb 08, 2001 at 08:51:57PM +0000, Marcin 'Qrczak' Kowalczyk wrote: > > ... > > class (Show a, Read a, Eq a) => Comfortable a > > instance (Show a, Read a, Eq a) => Comfortable a > ... > The latter syntax is nice: obvious what it means, not legal today. > This instance of course conflicts with any other instance of that > class, so it can be recognized and treated specially as a "class > synonym". Why isn't it legal? I just tried it, and Hugs accepted it, with or without extensions. "where" clauses are optional, right? > .... Don't know if local instances or local classes can be defined > to make sense, nor if they could be useful enough... Well, let's see. Local classes already exist: just don't export them. 
Local instances would not be hard to add with special syntax, though really they should be part of a more general mechanism for dealing with instances explicitly. Agreed that they might not be useful enough. Best, Dylan Thurston From dpt@math.harvard.edu Fri Feb 9 17:05:09 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Fri, 9 Feb 2001 12:05:09 -0500 Subject: 'Convertible' class? In-Reply-To: ; from qrczak@knm.org.pl on Thu, Feb 08, 2001 at 04:06:24AM +0000 References: <20010207154359.A32026@math.harvard.edu> Message-ID: <20010209120509.A27008@math.harvard.edu> You make some good arguments. Thanks. Let me ask about a few of them. On Thu, Feb 08, 2001 at 04:06:24AM +0000, Marcin 'Qrczak' Kowalczyk wrote: > Wed, 7 Feb 2001 15:43:59 -0500, Dylan Thurston writes: > > > class Convertible a b where > > convert :: a -> b > > > > So, e.g., fromInteger and fromRational could be replaced with > > convert. (But if you did the same thing with toInteger and toRational > > as well, you run into problems of overlapping instances. > > ... > And convert cannot be a substitute for fromIntegral/realToFrac, > because it needs a definition for every pair of types. Right. Those could still be defined as appropriately typed versions of 'convert . convert'. > You can put Num a in some instance's context, but you can't > put Convertible Integer a. It's because instance contexts must > constrain only type variables, which ensures that context reduction > terminates (but is sometimes overly restrictive). There is ghc's > flag -fallow-undecidable-instances which relaxes this restriction, > at the cost of undecidability. Ah! Thanks for reminding me; I've been using Hugs, which allows these instances. Is there no way to relax this restriction while maintaining decidability?
Best, Dylan Thurston From wli@holomorphy.com Fri Feb 9 19:19:05 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Fri, 9 Feb 2001 11:19:05 -0800 Subject: Revamping the numeric HUMAN ATTITUDE In-Reply-To: <3A83D3EF.3CA1DE70@info.unicaen.fr>; from karczma@info.unicaen.fr on Fri, Feb 09, 2001 at 11:26:39AM +0000 References: <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> <20010208173731.B960@holomorphy.com> <3A8383EC.B8D49FF4@boutel.co.nz> <3A83D3EF.3CA1DE70@info.unicaen.fr> Message-ID: <20010209111905.C960@holomorphy.com> William Lee Irwin III wrote: >>> The Standard Prelude serves its purpose well and accommodates the >>> largest cross-section of users. Perhaps a Geek Prelude could >>> accommodate the few of us who do need these sorts of schenanigans. I, of course, intend to use the Geek Prelude(s) myself. =) On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote: > Aha. > And we will have The Prole, normal users who can live with incomplete, > sometimes contradictory math, and The Inner Party of those who know > The Truth? > Would you agree that your children be taught at primary school some > dubious matter because "they won't need the real stuff". This is, perhaps, the best argument against my pseudo-proposal. I'm not against resolving things that are outright inconsistent or otherwise demonstrably bad, but the simplifications made to prevent the (rather large) mathphobic segment of the population from wetting their pants probably shouldn't be done away with to add more generality for the advanced users. We can write our own preludes anyway. On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote: > I would agree having a minimal standard Prelude which is incomplete. > But it should be sane, should avoid confusion of categories and > useless/harmful dependencies. 
At the risk of turning this into "me too", I'm in agreement here. On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote: > Methodologically and pedagogically it seems a bit risky. > Technically it may be awkward. It will require the compiler and > the standard libraries almost completely independent of each other. > This is not the case now. I'm seeing a bit of this now, and the error messages GHC spits out are hilarious! e.g.

    My brain just exploded.
    I can't handle pattern bindings for existentially-quantified constructors.

and

    Couldn't match `Bool' against `Bool'
        Expected type: Bool
        Inferred type: Bool

They're not quite Easter eggs, but they're quite a bit of fun. I might have to look into seeing what sort of things I might have to alter in GHC in order to resolve nasty situations like these. I can't speak to the methodological and pedagogical aspects of it. I just have a vague idea that explaining why something isn't an instance of GradedAlgebra or DifferentialRing to freshmen or the otherwise mathematically disinclined isn't a task compiler and/or library implementors care to deal with. On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote: > BTW. what is a schenanigan? Is it by definition something consumed > by Geeks? Is the usage of Vector Spaces restricted to those few > Geeks who can't live without schenanigans? Yes! And I can't live without them. I had a few schenanigans at the math bar last night while I was trying to pick up a free module, but she wanted a normed ring before getting down to a basis. I guess that's what I get for going to an algebra bar. I should really have gone to a topology bar instead if I was looking for something kinkier. =) Perhaps "Geek Prelude" isn't a good name for it. Feel free to suggest alternatives. Of course, there's nothing to prevent the non-geek among us from using them if they care to. If I by some miracle produce something which actually works, I'll leave it untitled.
And yes, I agree everyone needs VectorSpace. On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote: > For some time I follow the discussion on some newsgroups dealing with > computer graphics, imagery, game programming, etc. I noticed a curious, > strong influence of people who shout loudly: > > "Math?! You don't need it really. Don't waste your time on it! > Don't waste your time on cute algorithms, they will be slow as > hell. Learn assembler, "C", MMX instructions, learn DirectX APIs, > forget this silly geometric speculations. Behave *normally*, as > a *normal* computer user, not as a speculative mathematician!" > > And I noticed that REGULARLY, 1 - 4 times a week some freshmen ask > over and over again such questions: > 1. How to rotate a vector in 3D? > 2. How to zoom an image? > 3. What is a quaternion, and why some people hate them so much? > 4. How to compute a trajectory if I know the force acting on the > object. To date I've been highly unsuccessful in convincing anyone in this (the predominant) camp otherwise. People do need math, they just refuse to believe it regardless of how strong the evidence is. I spent my undergrad preaching the gospel of "CS is math" and nobody listened. I don't know how they get anything done. On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote: > To summarize: people who don't use and don't need math always feel > right to discourage others to give to it an adequate importance. > It is not they who will suffer from badly constructed math layer > of a language, or from badly taught math concepts, so they don't > care too much. How can I counter-summarize? It's true. I suppose I'm saying that the design goals of a Standard Prelude are outright against being so general it's capable of representing as many mathematical structures as possible. Of course, as it stands, it's not beyond reproach. Cheers, Bill -- A mathematician is a system for turning coffee into theorems. 
-- Paul Erdős A comathematician is a system for turning theorems into coffee. -- Tim Poston From qrczak@knm.org.pl Fri Feb 9 19:40:18 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 9 Feb 2001 19:40:18 GMT Subject: Revamping the numeric classes References: <200102080800.VAA36591@waytogo.peace.co.nz> <14979.29205.362496.555808@waytogo.peace.co.nz> Message-ID: Fri, 9 Feb 2001 17:29:09 +1300, Tom Pledger writes: > (x + y) + z > > we know from the explicit type signature (in your question that I was > responding to) that x,y::Int and z::Double. Type inference does not > need to treat x or y up, because it can take the first (+) to be Int > addition. However, it must treat the result (x + y) up to the most > specific supertype which can be added to a Double. Approach it differently. z is Double, (x+y) is added to it, so (x+y) must have type Double. This means that x and y must have type Double. This is OK, because they are Ints now, which can be converted to Double. Why is your approach better than mine? > | h f = f (1::Int) == (2::Int) > | Can I apply f > > h? Sure, sorry. > h:: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool This type is ambiguous: the type variable b is needed in the context but not present in the type itself, so it can never be determined from the usage of h. > That can be inferred by following the structure of the term. > Function terms do seem prone to an accumulation of deferred subtype > constraints. When function application generates a constraint, the language gets ambiguous as hell. Applications are found everywhere through the program! Very often the type of the argument or result of an internal application does not appear in the type of the whole function being defined, which makes it ambiguous. Not to mention that there would be *LOTS* of these constraints. Application is used everywhere. It's important to have its typing rule simple and cheap. Generating a constraint for every application is not an option.
-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From qrczak@knm.org.pl Fri Feb 9 19:31:08 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 9 Feb 2001 19:31:08 GMT Subject: Show, Eq not necessary for Num [Was: Revamping the numeric c References: <20010209114833.A26885@math.harvard.edu> Message-ID: Fri, 9 Feb 2001 11:48:33 -0500, Dylan Thurston writes: > > > class (Show a, Read a, Eq a) => Comfortable a > > > instance (Show a, Read a, Eq a) => Comfortable a > Why isn't it legal? Because in Haskell 98 instance's head must be of the form of a type constructor applied to type variables. Here it's a type variable. > I just tried it, and Hugs accepted it, with or without extensions. My Hugs does not accept it without extensions. ghc does not accept it by default. ghc -fglasgow-exts accepts an instance's head which is a type constructor applied to some other types than just type variables (e.g. instance Foo [Char]), and -fallow-undecidable-instances lets it accept the above too. I forgot that it can make context reduction infinite unless the compiler does extra checking to prevent this. I guess that making it legal keeps the type system decidable, only compilers would have to introduce some extra checks. Try the following module:

------------------------------------------------------------------------
module Test where

class Foo a where foo :: a
class Bar a where bar :: a
class Baz a where baz :: a

instance Foo a => Bar a where bar = foo
instance Bar a => Baz a where baz = bar
instance Baz a => Foo a where foo = baz

f = foo
------------------------------------------------------------------------

Both hugs -98 and ghc -fglasgow-exts -fallow-undecidable-instances reach their limits of context reduction steps.
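For reference, the "class synonym" itself (unlike the circular module above) is accepted once the restrictions Marcin names are lifted; on a present-day GHC the flags are spelled as LANGUAGE pragmas. The comfy function is my own illustration of using the synonym as a single constraint; the superclass arrow makes Show and Eq available through Comfortable.

```haskell
{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

-- One constraint standing for three: the "class synonym" trick.
-- The instance head is a bare type variable, hence the two extensions.
class    (Show a, Read a, Eq a) => Comfortable a
instance (Show a, Read a, Eq a) => Comfortable a

-- Show and Eq are reachable via Comfortable's superclass constraints.
comfy :: Comfortable a => a -> String
comfy x = show x ++ (if x == x then "" else " (?)")
```

Because the catch-all instance overlaps with any other Comfortable instance, it only works as Marcin suggests: recognized as a synonym, never mixed with ordinary instances of the same class.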
-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From qrczak@knm.org.pl Fri Feb 9 19:19:21 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 9 Feb 2001 19:19:21 GMT Subject: In hoc signo vinces (Was: Revamping the numeric classes) References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> <3A83CBF7.6C635E94@info.unicaen.fr> Message-ID: Fri, 09 Feb 2001 10:52:39 +0000, Jerzy Karczmarczuk pisze: > Again, a violation of the orthogonality principle. Needing division > just to define signum. And of course a completely different approach > do define the signum of integers. Or of polynomials... So what? That's why it's a class method and not a plain function with a single definition. Multiplication of matrices is implemented differently than multiplication of integers. Why don't you call it a violation of the orthogonality principle (whatever it is)? -- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From dpt@math.harvard.edu Fri Feb 9 20:49:45 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Fri, 9 Feb 2001 15:49:45 -0500 Subject: 'Convertible' class? In-Reply-To: <20010209120509.A27008@math.harvard.edu>; from dpt@math.harvard.edu on Fri, Feb 09, 2001 at 12:05:09PM -0500 References: <20010207154359.A32026@math.harvard.edu> <20010209120509.A27008@math.harvard.edu> Message-ID: <20010209154945.A27526@math.harvard.edu> On Fri, Feb 09, 2001 at 12:05:09PM -0500, Dylan Thurston wrote: > On Thu, Feb 08, 2001 at 04:06:24AM +0000, Marcin 'Qrczak' Kowalczyk wrote: > > You can put Num a in some instance's context, but you can't > > put Convertible Integer a. 
It's because instance contexts must
 > > constrain only type variables, which ensures that context reduction
 > > terminates (but is sometimes overly restrictive). There is ghc's
 > > flag -fallow-undecidable-instances which relaxes this restriction,
 > > at the cost of undecidability.
 >
 > Ah! Thanks for reminding me; I've been using Hugs, which allows these
 > instances. Is there no way to relax this restriction while
 > maintaining decidability?

After looking up the Jones-Jones-Meijer paper and thinking about it briefly, it seems to me that the troublesome cases (when "reducing" a context gives a more complicated context) can only happen with type constructors, and not with simple types. Would this work? I.e., if every element of an instance context is required to be of the form C a_1 ... a_n, with each a_i either a type variable or a simple type, is type checking decidable? (Probably I'm missing something.)

If this isn't allowed, one could still work around the problem:

    class (Convertible Integer a) => ConvertibleFromInteger a

at the cost of sticking in nuisance instance declarations. Note that this problem arises a lot. E.g., suppose I have

    class (Field k, Additive v) => VectorSpace k v

... and then I want to talk about vector spaces over Float.
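[Editorial note: Dylan's workaround can be fleshed out as follows. `Convertible`, `convert`, and `twice` are hypothetical names invented here, and the superclass context needs ghc extensions beyond Haskell 98 (given below in modern pragma spelling).]

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleContexts, FlexibleInstances #-}

class Convertible a b where
    convert :: a -> b

instance Convertible Integer Double where
    convert = fromInteger

-- The "nuisance" wrapper: a class with no methods whose only job is to
-- carry the Convertible Integer a constraint in its superclass, so that
-- user code never needs "Convertible Integer a" in an instance context.
class Convertible Integer a => ConvertibleFromInteger a
instance ConvertibleFromInteger Double

-- User code can now use the wrapper in an ordinary-looking context:
twice :: ConvertibleFromInteger a => Integer -> a
twice n = convert (2 * n)
```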
Best, Dylan Thurston

From wli@holomorphy.com Fri Feb 9 20:55:12 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Fri, 9 Feb 2001 12:55:12 -0800 Subject: In hoc signo vinces (Was: Revamping the numeric classes) In-Reply-To: ; from qrczak@knm.org.pl on Fri, Feb 09, 2001 at 07:19:21PM +0000 References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> <3A83CBF7.6C635E94@info.unicaen.fr> Message-ID: <20010209125512.D960@holomorphy.com>

Fri, 09 Feb 2001 10:52:39 +0000, Jerzy Karczmarczuk pisze:
 >> Again, a violation of the orthogonality principle. Needing division
 >> just to define signum. And of course a completely different approach
 >> do define the signum of integers. Or of polynomials...

On Fri, Feb 09, 2001 at 07:19:21PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
 > So what? That's why it's a class method and not a plain function with
 > a single definition.
 >
 > Multiplication of matrices is implemented differently than
 > multiplication of integers. Why don't you call it a violation of the
 > orthogonality principle (whatever it is)?

Matrix rings actually manage to expose the inappropriateness of signum and abs' definitions and relationships to Num very well:

    class (Eq a, Show a) => Num a where
        (+), (-), (*) :: a -> a -> a
        negate :: a -> a
        abs, signum :: a -> a
        fromInteger :: Integer -> a
        fromInt :: Int -> a -- partain: Glasgow extension

Pure arithmetic ((+), (-), (*), negate) works just fine. But there are no good injections to use for fromInteger or fromInt, the type of abs is wrong if it's going to be a norm, and it's not clear that signum makes much sense. So we have two totally inappropriate operations (fromInteger and fromInt), one operation which has the wrong type (abs), and an operation which doesn't have well-defined meaning (signum) on matrices.
If we want people doing graphics or linear algebraic computations to be able to go about their business with their code looking like ordinary arithmetic, this is, perhaps, a real concern. I believe that these applications are widespread enough to be concerned about how the library design affects their aesthetics.

Cheers, Bill

-- Weak coffee is only fit for lemmas. --

From dpt@math.harvard.edu Fri Feb 9 21:49:09 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Fri, 9 Feb 2001 16:49:09 -0500 Subject: In hoc signo vinces (Was: Revamping the numeric classes) In-Reply-To: <20010209125512.D960@holomorphy.com>; from wli@holomorphy.com on Fri, Feb 09, 2001 at 12:55:12PM -0800 References: <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> <3A83CBF7.6C635E94@info.unicaen.fr> <20010209125512.D960@holomorphy.com> Message-ID: <20010209164909.B27683@math.harvard.edu>

On Fri, Feb 09, 2001 at 12:55:12PM -0800, William Lee Irwin III wrote:
 > class (Eq a, Show a) => Num a where
 >     (+), (-), (*) :: a -> a -> a
 >     negate :: a -> a
 >     abs, signum :: a -> a
 >     fromInteger :: Integer -> a
 >     fromInt :: Int -> a -- partain: Glasgow extension
 >
 > ... So we have two totally inappropriate operations (fromInteger and
 > fromInt), ...

I beg to differ on this point. One could provide a default implementation for fromInt(eger) as follows, assuming a 'zero' and 'one', which do obviously fit (they are the additive and multiplicative units):

    fromInteger n | n < 0 = negate (fromInteger (-n))
    fromInteger n = foldl (+) zero (genericReplicate n one)

(Of course, one could use the repeated-doubling algorithm from integer exponentiation to make this efficient.)
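[Editorial note: the efficient version Dylan alludes to can be sketched by repeated doubling, the same trick as fast exponentiation, so only O(log n) additions are needed. `zero` and `one` stand in for the hypothetical class methods; they are fixed at Double here only to make the sketch runnable.]

```haskell
zero, one :: Double   -- stand-ins for the proposed class methods
zero = 0
one  = 1

-- fromInteger via repeated doubling, using only (+), negate, zero, one
-- on the target type (no Eq or Ord on it is required).
fromIntegerByDoubling :: Integer -> Double
fromIntegerByDoubling n
  | n < 0     = negate (fromIntegerByDoubling (negate n))
  | n == 0    = zero
  | even n    = half + half
  | otherwise = one + half + half
  where half = fromIntegerByDoubling (n `div` 2)
```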
Best, Dylan Thurston

From brian@boutel.co.nz Sat Feb 10 01:09:59 2001 From: brian@boutel.co.nz (Brian Boutel) Date: Sat, 10 Feb 2001 14:09:59 +1300 Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] References: <3A8311AA.E095770A@boutel.co.nz> Message-ID: <3A8494E7.6CF62EA5@boutel.co.nz>

Ketil Malde wrote:
 > Brian Boutel writes:
 >
 > > - Having a class hierarchy at all (or making any design decision)
 > > implies compromise.
 >
 > I think the argument is that we should move Eq and Show *out* of the
 > Num hierarchy. Less hierarchy - less compromise.

Can you demonstrate a revised hierarchy without Eq? What would happen to Ord, and the numeric classes that require Eq because they need signum?

 > > - The current hierarchy (and its predecessors) represent a reasonable
 > > compromise that meets most needs.
 >
 > Obviously a lot of people seem to think we could find compromises that
 > are more reasonable.

I would put this differently. "A particular group of people want to change the language to make it more convenient for their special interests."

 > > - Users have a choice: either work within the class hierarchy and
 > > accept the pain of having to define things you don't need in order
 > > to get the things that come for free,
 >
 > Isn't it a good idea to reduce the amount of pain?

Not always.

 > > or omit the instance declarations and work outside the hierarchy. In
 > > that case you will not be able to use the overloaded operator
 > > symbols of the class, but that is just a matter of concrete syntax,
 > > and ultimately unimportant.
 >
 > I don't think syntax is unimportant.

I wrote that *concrete* syntax is ultimately unimportant, not *syntax*. There is a big difference. In particular, *lexical syntax*, the choice of marks on paper used to represent a language element, is not important, although it does give rise to arguments, as do all matters of taste and style. There are not enough usable operator symbols to go round, so they get overloaded.
Mathematicians have overloaded common symbols like (+) and (*) for concepts that may have some affinity with addition and multiplication in arithmetic, but which are actually quite different. That's fine, because, in context, expert human readers can distinguish what is meant. From a software engineering point of view, though, such free overloading is dangerous, because readers may assume, incorrectly, that an operator has properties that are typically associated with operators using that symbol. This may not matter in a private world where the program writer is the only person who will see and use the code, and no mission-critical decisions depend on the results, but it should not be the fate of Haskell to be confined to such use.

Haskell could have allowed free ad hoc overloading, but one of the first major decisions made by the Haskell Committee in 1988 was not to do so. Instead, it adopted John Hughes' proposal to introduce type classes to control overloading. A symbol could only be overloaded if the whole of a group of related symbols (the Class) was overloaded with it, and the class hierarchy provided an even stronger constraint by restricting overloading of the class operators to cases where other classes, intended to be closely related, were also overloaded. This tended to ensure that the new type at which the classes were overloaded had strong resemblances to the standard types. Simplifying the hierarchy weakens these constraints and so should be approached with extreme caution.

Of course, the details of the classes and the hierarchy have changed over the years - there is, always has been and always will be pressure to make changes to meet particular needs - but the essence is still there, and the essence is of a general-purpose language, not a domain-specific language for some branches of mathematics. A consequence of this is that certain uses of overloaded symbols are inconvenient, because they are too far from the mainstream intended meaning.
If you have such a use, and you want to write in Haskell, you have to choose other lexical symbols to represent your operators. You make your choice.

--brian

From john@foo.net Sat Feb 10 01:58:34 2001 From: john@foo.net (John Meacham) Date: Fri, 9 Feb 2001 17:58:34 -0800 Subject: Haskell Implementors Meeting In-Reply-To: <37DA476A2BC9F64C95379BF66BA269023D90BE@red-msg-09.redmond.corp.microsoft.com>; from simonpj@microsoft.com on Wed, Feb 07, 2001 at 06:32:18PM -0800 References: <37DA476A2BC9F64C95379BF66BA269023D90BE@red-msg-09.redmond.corp.microsoft.com> Message-ID: <20010209175833.C4613@mark.ugcs.caltech.edu>

Another Haskell -> Haskell transformation tool which I always thought would be useful (and perhaps exists?) would be a Haskell de-moduleizer. Basically it would take a Haskell program and follow its imports and spit out a single monolithic Haskell module. My first thought is that this should be able to be done by prepending the module name to every symbol (making sure the up/lowercases come out right of course) in each module and then appending them to one another.

Why would I want this? Curiosity mainly. Performance perhaps. There is much more opportunity to optimize if separate compilation need not be taken into account. It would be interesting to see what could be done when not worrying about it. It would allow experimentation with non-separate compilation compilers by allowing them to compile more stuff 'out-of-the-box'. Also it may be that performance is so important that one may want separate compilation while developing, but when the final product is produced it might be worth the day it takes to compile to get a crazy-optimized product. This could also be done incrementally: unchanging subsystems (like GUI libraries) could be combined this way for speed while your app code is linked normally for development reasons....
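[Editorial note: the renaming step John sketches can be tried in miniature. `mangle` and its naming scheme are invented here; it prepends a flattened module name while keeping the case of the first character legal - values must start lower-case, constructors and types upper-case.]

```haskell
import Data.Char (isUpper, toLower)

-- "Data.Maybe" + "fromJust"  ->  "data_Maybe_fromJust"
-- "Data.Maybe" + "Just"      ->  "Data_Maybe_Just"
mangle :: String -> String -> String
mangle modname sym
  | isUpper (head sym) = flatMod ++ "_" ++ sym
  | otherwise          = toLower (head flatMod) : tail flatMod ++ "_" ++ sym
  where flatMod = [if c == '.' then '_' else c | c <- modname]
```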
John

-- -------------------------------------------------------------- John Meacham http://www.ugcs.caltech.edu/~john/ California Institute of Technology, Alum. john@foo.net --------------------------------------------------------------

From fjh@cs.mu.oz.au Sat Feb 10 05:48:30 2001 From: fjh@cs.mu.oz.au (Fergus Henderson) Date: Sat, 10 Feb 2001 16:48:30 +1100 Subject: Instances of multiple classes at once In-Reply-To: <20010208145514.C2959@math.harvard.edu> References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <20010207190654.A981@math.harvard.edu> <20010208214156.B4303@venus.cs.mu.oz.au> <20010208145514.C2959@math.harvard.edu> Message-ID: <20010210164830.A28558@hg.cs.mu.oz.au>

On 08-Feb-2001, Dylan Thurston wrote:
 > On Thu, Feb 08, 2001 at 09:41:56PM +1100, Fergus Henderson wrote:
 > > One point that needs to be resolved is the interaction with default methods.
 > > Consider
 > >
 > >     class foo a where
 > >         f :: ...
 > >         f = ...
 > >         f2 :: ...
 > >         f2 = ...
 > >
 > >     class (foo a) => bar a where
 > >         b :: ...
 > >
 > >     instance bar T where
 > >         -- no definitions for f or f2
 > >         b = 42
 > >
 > > Should this define an instance for `foo T'?
 > > (I think not.)
 >
 > Whyever not?

Because too much Haskell code uses classes where the methods are defined in terms of each other:

    class Foo a where
        -- you should define either f or f2
        f :: ...
        f = ... f2 ...
        f2 :: ...
        f2 = ... f ...

 > Because there is no textual mention of class Foo in the
 > instance for Bar?

Right, and because allowing the compiler to automatically generate instances for class Foo without the programmer having considered whether those instances are OK is too dangerous.

 > Think about the case of a superclass with no methods;
 > wouldn't you want to allow automatic instances in this case?

Yes.
I think Marcin has a better idea: | So maybe there should be a way to specify that default definitions | are cyclic and some of them must be defined? -- Fergus Henderson | "I have always known that the pursuit | of excellence is a lethal habit" WWW: | -- the last words of T. S. Garp. From fjh@cs.mu.oz.au Sat Feb 10 05:52:39 2001 From: fjh@cs.mu.oz.au (Fergus Henderson) Date: Sat, 10 Feb 2001 16:52:39 +1100 Subject: Revamping the numeric classes In-Reply-To: References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <20010207190654.A981@math.harvard.edu> <20010208214156.B4303@venus.cs.mu.oz.au> Message-ID: <20010210165239.B28558@hg.cs.mu.oz.au> On 08-Feb-2001, Marcin 'Qrczak' Kowalczyk wrote: > > I don't like the idea of treating the case "no explicit definitions > were given because all have default definitions which are OK" > differently than "some explicit definitions were given". I don't really like it that much either, but... > When there is a superclass, it must have an instance defined, so if > we permit such thing at all, I would let it implicitly define all > superclass instances not defined explicitly, or something like that. > At least when all methods have default definitions. Yes, I know that > they can be mutually recursive and thus all will be bottoms... ... that is the problem I was trying to solve. > So maybe there should be a way to specify that default definitions > are cyclic and some of them must be defined? I agree 100%. > It is usually written in comments anyway, because it is not immediately > visible in the definitions. Yes. Much better to make it part of the language, so that the compiler can check it. > (now any method definition > can be omitted even if it has no default!), Yeah, that one really sucks. 
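[Editorial note: the hazard Fergus and Marcin are discussing shows up already with the Prelude's Eq, where (==) and (/=) are each other's defaults. A minimal illustration, with invented names:]

```haskell
-- Both defaults are defined via the other, just like (==)/(/=) in Eq.
class MyEq a where
    eq, neq :: a -> a -> Bool
    eq  x y = not (neq x y)
    neq x y = not (eq x y)

data T = T
instance MyEq T   -- accepted by the compiler, but eq T T loops forever

newtype U = U Int
instance MyEq U where
    eq (U a) (U b) = a == b   -- cycle broken; neq's default now works
```

A compiler-checked "at least one of eq, neq must be defined" declaration, as Marcin suggests, would reject the instance for T statically instead of looping at run time.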
-- Fergus Henderson | "I have always known that the pursuit | of excellence is a lethal habit" WWW: | -- the last words of T. S. Garp.

From fjh@cs.mu.oz.au Sat Feb 10 05:55:18 2001 From: fjh@cs.mu.oz.au (Fergus Henderson) Date: Sat, 10 Feb 2001 16:55:18 +1100 Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] In-Reply-To: <3A8311AA.E095770A@boutel.co.nz> References: <3A8311AA.E095770A@boutel.co.nz> Message-ID: <20010210165518.C28558@hg.cs.mu.oz.au>

On 09-Feb-2001, Brian Boutel wrote:
 > Patrik Jansson wrote:
 > >
 > > The fact that equality can be trivially defined as bottom does not imply
 > > that it should be a superclass of Num, it only explains that there is an
 > > ugly way of working around the problem. ...
 >
 > There is nothing trivial or ugly about a definition that reflects
 > reality and bottoms only where equality is undefined.

I disagree. Haskell is a statically typed language, and having errors which could easily be detected at compile time instead of being deferred to run time is ugly in a statically typed language.

-- Fergus Henderson | "I have always known that the pursuit | of excellence is a lethal habit" WWW: | -- the last words of T. S. Garp.

From qrczak@knm.org.pl Sat Feb 10 07:17:57 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 10 Feb 2001 07:17:57 GMT Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] References: <3A8311AA.E095770A@boutel.co.nz> <3A8494E7.6CF62EA5@boutel.co.nz> Message-ID:

Sat, 10 Feb 2001 14:09:59 +1300, Brian Boutel pisze:
 > Can you demonstrate a revised hierarchy without Eq? What would happen to
 > Ord, and the numeric classes that require Eq because they need signum?

signum doesn't require Eq. You can use signum without having Eq, and you can sometimes define signum without having Eq (e.g. on functions). Sometimes you do require (==) to define signum, but it has nothing to do with superclasses.
-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From bhalchin@hotmail.com Sat Feb 10 08:44:38 2001 From: bhalchin@hotmail.com (Bill Halchin) Date: Sat, 10 Feb 2001 08:44:38 Subject: Mondrian question Message-ID: Hello, Is this the right place to ask Mondrian questions? I will assume so. Is Mondrian only meant to work with .NET?? If so, what good is it as an Internet scripting language? I.e. what good is it as a language if it only runs in Microsoft's .NET environment?? I tried to download but I found I would have to have Win2000 installed. I want to run on Linux. Regards, Bill Halchin _________________________________________________________________ Get your FREE download of MSN Explorer at http://explorer.msn.com From dpt@math.harvard.edu Sat Feb 10 16:25:46 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Sat, 10 Feb 2001 11:25:46 -0500 Subject: Semantics of signum In-Reply-To: ; from qrczak@knm.org.pl on Sat, Feb 10, 2001 at 07:17:57AM +0000 References: <3A8311AA.E095770A@boutel.co.nz> <3A8494E7.6CF62EA5@boutel.co.nz> Message-ID: <20010210112546.A30205@math.harvard.edu> On Sat, Feb 10, 2001 at 07:17:57AM +0000, Marcin 'Qrczak' Kowalczyk wrote: > Sat, 10 Feb 2001 14:09:59 +1300, Brian Boutel pisze: > > > Can you demonstrate a revised hierarchy without Eq? What would happen to > > Ord, and the numeric classes that require Eq because they need signum? > > signum doesn't require Eq. You can use signum without having Eq, and > you can sometimes define signum without having Eq (e.g. on functions). > Sometimes you do require (==) to define signum, but it has nothing to > do with superclasses. Can you elaborate? What do you mean by signum for functions? The pointwise signum? Then abs would be the pointwise abs as well, right? That might work, but I'm nervous because I don't know the semantics for signum/abs in such generality. What identities should they satisfy? 
(The current Haskell report says nothing about the meaning of these operations, in the same way it says nothing about the meaning of (+), (-), and (*). Compare this to the situation for the Monad class, where the fundamental identities are given. Oddly, there are identities listed for 'quot', 'rem', 'div', and 'mod'. For +, -, and * I can guess what identities they should satisfy, but not for signum and abs.) (Note that pointwise abs of functions yields a positive function, which are not ordered but do have a sensible notion of max and min.) Best, Dylan Thurston From qrczak@knm.org.pl Sat Feb 10 17:55:32 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 10 Feb 2001 17:55:32 GMT Subject: Semantics of signum References: <3A8311AA.E095770A@boutel.co.nz> <3A8494E7.6CF62EA5@boutel.co.nz> <20010210112546.A30205@math.harvard.edu> Message-ID: Sat, 10 Feb 2001 11:25:46 -0500, Dylan Thurston pisze: > Can you elaborate? What do you mean by signum for functions? > The pointwise signum? Yes. > Then abs would be the pointwise abs as well, right? Yes. > That might work, but I'm nervous because I don't know the semantics > for signum/abs in such generality. For example signum x * abs x == x, where (==) is not Haskell's equality but equivalence. Similarly to (x + y) + z == x + (y + z). If (+) can be implicitly lifted to functions, then why not signum? Note that I would lift neither signum nor (+). I don't feel the need. It can't be uniformly applied to e.g. (<) whose result is Bool and not some lifted Bool, so better be consistent and lift explicitly. 
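[Editorial note: the pointwise lifting under discussion can be written down directly. Under the current hierarchy the Num instance also forces Eq and Show instances, which can only be stubs - which is exactly the point of contention in this thread.]

```haskell
instance Eq (a -> b) where
    _ == _ = error "(==) is undefined on functions"
instance Show (a -> b) where
    show _ = "<function>"

-- Pointwise lifting of the numeric operations to functions.
instance Num b => Num (a -> b) where
    f + g       = \x -> f x + g x
    f - g       = \x -> f x - g x
    f * g       = \x -> f x * g x
    negate f    = negate . f
    abs f       = abs . f
    signum f    = signum . f
    fromInteger = const . fromInteger
```

With this instance, Marcin's identity holds pointwise: (signum f * abs f) x equals f x at every x, even though the lifted (==) is bottom.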
-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From wli@holomorphy.com Sat Feb 10 21:22:32 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Sat, 10 Feb 2001 13:22:32 -0800 Subject: Semantics of signum In-Reply-To: <20010210112546.A30205@math.harvard.edu>; from dpt@math.harvard.edu on Sat, Feb 10, 2001 at 11:25:46AM -0500 References: <3A8311AA.E095770A@boutel.co.nz> <3A8494E7.6CF62EA5@boutel.co.nz> <20010210112546.A30205@math.harvard.edu> Message-ID: <20010210132232.A641@holomorphy.com> On Sat, Feb 10, 2001 at 11:25:46AM -0500, Dylan Thurston wrote: > Can you elaborate? What do you mean by signum for functions? The > pointwise signum? Then abs would be the pointwise abs as well, right? > That might work, but I'm nervous because I don't know the semantics > for signum/abs in such generality. What identities should they > satisfy? (The current Haskell report says nothing about the meaning > of these operations, in the same way it says nothing about the meaning > of (+), (-), and (*). Compare this to the situation for the Monad class, > where the fundamental identities are given. Oddly, there are identities > listed for 'quot', 'rem', 'div', and 'mod'. For +, -, and * I can guess > what identities they should satisfy, but not for signum and abs.) Pointwise signum and abs are common in analysis. The identity is: signum f * abs f = f I've already done the pointwise case. As I've pointed out before, abs has the wrong type for doing anything with vector spaces, though, perhaps, abs is a distinct notion from norm. On Sat, Feb 10, 2001 at 11:25:46AM -0500, Dylan Thurston wrote: > (Note that pointwise abs of functions yields a positive function, which > are not ordered but do have a sensible notion of max and min.) The ordering you're looking for needs a norm. 
If you really want a notion of size on functions, you'll have to do it with something like one of the L^p norms for continua and the \ell^p norms for discrete spaces which are instances of Enum. There is a slightly problematic aspect with this in that the domain of the function does not entirely determine the norm, and furthermore adequately dealing with the different notions of measure on these spaces with the type system is probably also intractable. The sorts of issues raised by trying to define norms on functions probably rather quickly relegate it to something the user should explicitly define, as opposed to something that should appear in a Prelude standard or otherwise. That said, one could do something like

    instance Enum a => Enum (MyTree a) where
        ... -- it's tricky, but possible, you figure it out

    instance (Enum a, RealFloat b) => NormedSpace (MyTree a -> b) where
        norm f = approxsum $ zipWith (*) (map f . enumFrom $ toEnum 0) weights
          where weights = map (\x -> 1/factorial x) [0..]
                approxsum [] = 0
                approxsum (x:xs) | x < 1.0e-6 = 0
                                 | otherwise  = x + approxsum xs

and then do the usual junk where

    instance NormedSpace a => Ord a where
        f < g = norm f < norm g
        ...

Cheers, Bill

From brian@boutel.co.nz Sun Feb 11 00:37:28 2001 From: brian@boutel.co.nz (Brian Boutel) Date: Sun, 11 Feb 2001 13:37:28 +1300 Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] References: <3A8311AA.E095770A@boutel.co.nz> <3A8494E7.6CF62EA5@boutel.co.nz> Message-ID: <3A85DEC8.25FAF744@boutel.co.nz>

Marcin 'Qrczak' Kowalczyk wrote:
 >
 > Sat, 10 Feb 2001 14:09:59 +1300, Brian Boutel pisze:
 >
 > > Can you demonstrate a revised hierarchy without Eq? What would happen to
 > > Ord, and the numeric classes that require Eq because they need signum?
 >
 > signum doesn't require Eq. You can use signum without having Eq, and
 > you can sometimes define signum without having Eq (e.g. on functions).
> Sometimes you do require (==) to define signum, but it has nothing to > do with superclasses. > Let me restate my question more carefully: Can you demonstrate a revised hierarchy without Eq? What would happen to Ord and the numeric classes with default class method definitions that use (==) either explicitly or in pattern matching against numeric literals? Both Integral and RealFrac do this to compare or test the value of signum. In an instance declaration, if a method requires operations of another class which is not a superclass of the class being instanced, it is sufficient to place the requirement in the context, but for default class method definitions, all class methods used must belong to the class being defined or its superclasses. --brian From brian@boutel.co.nz Sun Feb 11 01:27:35 2001 From: brian@boutel.co.nz (Brian Boutel) Date: Sun, 11 Feb 2001 14:27:35 +1300 Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] References: <3A8311AA.E095770A@boutel.co.nz> <20010210165518.C28558@hg.cs.mu.oz.au> Message-ID: <3A85EA87.384E2D3@boutel.co.nz> Fergus Henderson wrote: > > On 09-Feb-2001, Brian Boutel wrote: > > Patrik Jansson wrote: > > > > > > The fact that equality can be trivially defined as bottom does not imply > > > that it should be a superclass of Num, it only explains that there is an > > > ugly way of working around the problem. > ... > > > > There is nothing trivial or ugly about a definition that reflects > > reality and bottoms only where equality is undefined. > > I disagree. Haskell is a statically typed language, and having errors > which could easily be detected at compile instead being deferred to > run time is ugly in a statically typed language. There may be some misunderstanding here. If you are talking about type for which equality is always undefined, then I agree with you, but that is not what I was talking about. 
I was thinking about types where equality is defined for some pairs of argument values and undefined for others - I think the original example was some kind of arbitrary precision reals. My remark about "a definition that reflects reality and bottoms only where equality is undefined" was referring to this situation.

Returning to the basic issue, I understood the desire to remove Eq as a superclass of Num was so that people were not required to implement equality if they did not need it, not that there were significant numbers of useful numeric types for which equality was not meaningful. Whichever of these was meant, I feel strongly that accommodating this and other similar changes by weakening the constraints on what Num in Haskell implies is going too far. It devalues the Class structure in Haskell to the point where its purpose, to control ad hoc polymorphism in a way that ensures that operators are overloaded only on closely related types, is lost, and one might as well abandon Classes and allow arbitrary overloading.

--brian

From dpt@math.harvard.edu Sun Feb 11 02:00:38 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Sat, 10 Feb 2001 21:00:38 -0500 Subject: Show, Eq not necessary for Num In-Reply-To: <3A85DEC8.25FAF744@boutel.co.nz>; from brian@boutel.co.nz on Sun, Feb 11, 2001 at 01:37:28PM +1300 References: <3A8311AA.E095770A@boutel.co.nz> <3A8494E7.6CF62EA5@boutel.co.nz> <3A85DEC8.25FAF744@boutel.co.nz> Message-ID: <20010210210038.A31746@math.harvard.edu>

On Sun, Feb 11, 2001 at 01:37:28PM +1300, Brian Boutel wrote:
 > Let me restate my question more carefully:
 >
 > Can you demonstrate a revised hierarchy without Eq? What would happen to
 > Ord and the numeric classes with default class method definitions that
 > use (==) either explicitly or in pattern matching against numeric
 > literals? Both Integral and RealFrac do this to compare or test the
 > value of signum.
I've been working on writing up my preferred hierarchy, but the short answer is that classes that are currently derived from Ord often do require Eq as superclasses. In the specific cases: I think possibly divMod and quotRem should be split into separate classes. It seems to me that divMod is the more fundamental pair: it satisfies the identities

    mod (a+b) b === mod a b
    div (a+b) b === 1 + div a b

in addition to (div a b)*b + mod a b === a. These identities are not enough to specify divMod completely; another reasonable choice for Integers would be to round to the nearest integer. But this is enough to make it useful for many applications. quotRem is also useful (although it only satisfies the second of these), and does require the ordering (and ==) to define sensibly, so I would make it a method of a subclass of Ord (and hence Eq). So I would tend to put these into two separate classes:

    class (Ord a, Num a) => Real a

    class (Num a) => Integral a where
        div, mod :: a -> a -> a
        divMod :: a -> a -> (a,a)

    class (Integral a, Real a) => RealIntegral a where
        quot, rem :: a -> a -> a
        quotRem :: a -> a -> (a,a)

I haven't thought about the operations in RealFrac and their semantics enough to say much sensible, but probably they will again require Ord as a superclass.

In general, I think a good approach is to think carefully about the semantics of a class and its operations, and to declare exactly the superclasses that are necessary to define the semantics. Note that sometimes there are no additional operations. For instance, declaring a type to be an instance of Real should mean that the ordering (from Ord) and the numeric structure (from Num) are compatible. Note also that we cannot require Eq to state laws (the '===' above); consider the laws required for the Monad class to convince yourself.
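[Editorial note: for the Prelude's Integer instance, the stated identities can be checked directly; `divModLawsHold` is a name invented here.]

```haskell
-- Checks, for b /= 0:
--   mod (a+b) b == mod a b
--   div (a+b) b == 1 + div a b
--   div a b * b + mod a b == a
divModLawsHold :: Integer -> Integer -> Bool
divModLawsHold a b =
       mod (a + b) b == mod a b
    && div (a + b) b == 1 + div a b
    && div a b * b + mod a b == a
```

They hold for negative b as well, since the Prelude's div rounds toward negative infinity.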
Best, Dylan Thurston From fjh@cs.mu.oz.au Sun Feb 11 07:24:33 2001 From: fjh@cs.mu.oz.au (Fergus Henderson) Date: Sun, 11 Feb 2001 18:24:33 +1100 Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes] In-Reply-To: <3A85EA87.384E2D3@boutel.co.nz> References: <3A8311AA.E095770A@boutel.co.nz> <20010210165518.C28558@hg.cs.mu.oz.au> <3A85EA87.384E2D3@boutel.co.nz> Message-ID: <20010211182432.A5545@hg.cs.mu.oz.au> On 11-Feb-2001, Brian Boutel wrote: > Fergus Henderson wrote: > > > > On 09-Feb-2001, Brian Boutel wrote: > > > Patrik Jansson wrote: > > > > > > > > The fact that equality can be trivially defined as bottom does not imply > > > > that it should be a superclass of Num, it only explains that there is an > > > > ugly way of working around the problem. > > ... > > > > > > There is nothing trivial or ugly about a definition that reflects > > > reality and bottoms only where equality is undefined. > > > > I disagree. Haskell is a statically typed language, and having errors > > which could easily be detected at compile instead being deferred to > > run time is ugly in a statically typed language. > > There may be some misunderstanding here. If you are talking about type > for which equality is always undefined, then I agree with you, but that > is not what I was talking about. I was thinking about types where > equality is defined for some pairs of argument values and undefined for > others - I think the original example was some kind of arbitrary > precision reals. The original example was treating functions as a numeric type. In the case of functions, computing equality is almost always infeasible. But you can easily define addition etc. 
pointwise:

    f + g = (\ x -> f x + g x)

> Returning to the basic issue, I understood the desire to remove Eq as a
> superclass of Num was so that people were not required to implement
> equality if they did not need it, not that there were significant
> numbers of useful numeric types for which equality was not meaningful.

The argument is the latter, with functions as the canonical example.

-- 
Fergus Henderson       |  "I have always known that the pursuit
                       |  of excellence is a lethal habit"
WWW:                   |     -- the last words of T. S. Garp.

From qrczak@knm.org.pl  Sun Feb 11 07:59:38 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 11 Feb 2001 07:59:38 GMT
Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
References: <3A8311AA.E095770A@boutel.co.nz> <3A8494E7.6CF62EA5@boutel.co.nz> <3A85DEC8.25FAF744@boutel.co.nz>
Message-ID: 

Sun, 11 Feb 2001 13:37:28 +1300, Brian Boutel writes:

> Can you demonstrate a revised hierarchy without Eq? What would
> happen to Ord and the numeric classes with default class method
> definitions that use (==) either explicitly or in pattern matching
> against numeric literals?

OK, then you can't write these default method definitions.

I'm against removing Eq from the numeric hierarchy, against making Num
instances for functions, but I would probably remove Show. I haven't
seen a sensible proposal of a replacement of the whole hierarchy.

> In an instance declaration, if a method requires operations of
> another class which is not a superclass of the class being instanced,
> it is sufficient to place the requirement in the context,

Better: it is sufficient if the right instance is defined somewhere.
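(Editorial note: Fergus's pointwise definition quoted above can be made
concrete as a small sketch under the Haskell 98 Prelude. The dummy Eq
and Show instances exist only to satisfy Num's superclass constraints
and are an assumption of this sketch, not part of anyone's proposal.)

```haskell
-- Dummy superclass instances, required only because Haskell 98 makes
-- Eq and Show superclasses of Num -- the very point under discussion.
instance Eq (a -> b)   where _ == _ = error "no equality on functions"
instance Show (a -> b) where show _ = "<function>"

-- The pointwise lifting: every operation is applied argument-by-argument.
instance Num b => Num (a -> b) where
    f + g         = \x -> f x + g x
    f - g         = \x -> f x - g x
    f * g         = \x -> f x * g x
    negate f      = negate . f
    abs f         = abs . f
    signum f      = signum . f
    fromInteger n = const (fromInteger n)

main :: IO ()
main = print (((+ 1) + (* 2)) 10 :: Integer)  -- (10+1) + (10*2) = 31
```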
-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK

From wli@holomorphy.com  Sun Feb 11 10:01:02 2001
From: wli@holomorphy.com (William Lee Irwin III)
Date: Sun, 11 Feb 2001 02:01:02 -0800
Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
In-Reply-To: ; from qrczak@knm.org.pl on Sun, Feb 11, 2001 at 07:59:38AM +0000
References: <3A8311AA.E095770A@boutel.co.nz> <3A8494E7.6CF62EA5@boutel.co.nz> <3A85DEC8.25FAF744@boutel.co.nz>
Message-ID: <20010211020102.C641@holomorphy.com>

Sun, 11 Feb 2001 13:37:28 +1300, Brian Boutel writes:
>> Can you demonstrate a revised hierarchy without Eq? What would
>> happen to Ord and the numeric classes with default class method
>> definitions that use (==) either explicitly or in pattern matching
>> against numeric literals?

I anticipate that some restructuring of the numeric classes must be
done in order to accomplish this. I am, of course, attempting to
contrive such a beast for my own personal use.

On Sun, Feb 11, 2001 at 07:59:38AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> OK, then you can't write these default method definitions.
> I'm against removing Eq from the numeric hierarchy, against making Num
> instances for functions, but I would probably remove Show. I haven't
> seen a sensible proposal of a replacement of the whole hierarchy.

Well, there are a couple of problems with someone like myself trying
to make such a proposal. First, I'm a bit too marginalized and/or
committed to a radical alternative. Second, I don't have the right
associations or perhaps other resources. Removing Eq sounds like a
good idea to me, in all honesty, though I think numeric instances for
functions (at least by default) aren't great ideas. More details
follow.

Regarding Eq, there are other types besides functions on which it
might not be a good idea to define equality, either because equality
is not efficiently implementable or because it is otherwise
inappropriate.
Matrix types aren't good candidates for defining equality, for one.
Another example where you might not want to define equality is formal
power series represented by infinite lists, since equality tests will
never terminate. A third counterexample comes, of course, from
graphics, where one might want to conveniently scale and translate
solids. Testing meshes and surface representations for equality is
once again not a great idea. Perhaps these counterexamples are a
little contrived, but perhaps other people can come up with better
ones.

As for the function instances of numeric types, they have some nasty
properties that probably make them a bad idea. In particular, I
discovered that the treatment of numeric literals via fromInteger
creates the possibility that something which is supposed to be a
scalar or some other numeric result might accidentally be applied.
For instance, given an expression with an intermediate numeric result
like:

    f u v . g x y $ h z

which is expected to produce a number, one could accidentally apply a
numeric literal (or something bound to one) to some arguments,
creating a bug. So this is at least partial agreement, though I think
the instances should be available in controlled circumstances. Local
module importation and/or scoped instances might help here, or
perhaps separating out the code that relies upon them into a module
where the instance is in scope, as this probably needs control which
is that tight.

Sun, 11 Feb 2001 13:37:28 +1300, Brian Boutel writes:
>> In an instance declaration, if a method requires operations of
>> another class which is not a superclass of the class being instanced,
>> it is sufficient to place the requirement in the context,

On Sun, Feb 11, 2001 at 07:59:38AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> Better: it is sufficient if the right instance is defined somewhere.

Again, I'd be careful with this idea. It's poor design to
unnecessarily restrict the generality of code.
Of course, it's also poor design not to try to enforce necessary
conditions in the type system, which is why library design is
nontrivial. And, of course, keeping it simple enough for use by the
general populace (or whatever semblance thereof exists within the
Haskell community) might well conflict with the desires of persons
like myself, who could easily fall prey to the accusation that they're
trying to turn Haskell into a computer algebra system; that adds yet
another constraint to the library design, making it even tougher.

Cheers,
Bill

From brian@boutel.co.nz  Sun Feb 11 10:14:44 2001
From: brian@boutel.co.nz (Brian Boutel)
Date: Sun, 11 Feb 2001 23:14:44 +1300
Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
References: <3A8311AA.E095770A@boutel.co.nz> <3A8494E7.6CF62EA5@boutel.co.nz> <3A85DEC8.25FAF744@boutel.co.nz>
Message-ID: <3A866614.35A251A8@boutel.co.nz>

Marcin 'Qrczak' Kowalczyk wrote:
> I'm against removing Eq from the numeric hierarchy, against making Num
> instances for functions, but I would probably remove Show. I haven't
> seen a sensible proposal of a replacement of the whole hierarchy.

Then we probably are in agreement.

--brian

From wli@holomorphy.com  Sun Feb 11 13:07:21 2001
From: wli@holomorphy.com (William Lee Irwin III)
Date: Sun, 11 Feb 2001 05:07:21 -0800
Subject: Show, Eq not necessary for Num [Was: Revamping the numeric classes]
In-Reply-To: <20010211182432.A5545@hg.cs.mu.oz.au>; from fjh@cs.mu.oz.au on Sun, Feb 11, 2001 at 06:24:33PM +1100
References: <3A8311AA.E095770A@boutel.co.nz> <20010210165518.C28558@hg.cs.mu.oz.au> <3A85EA87.384E2D3@boutel.co.nz> <20010211182432.A5545@hg.cs.mu.oz.au>
Message-ID: <20010211050721.D641@holomorphy.com>

On 11-Feb-2001, Brian Boutel wrote:
>> There may be some misunderstanding here. If you are talking about a type
>> for which equality is always undefined, then I agree with you, but that
>> is not what I was talking about.
>> I was thinking about types where
>> equality is defined for some pairs of argument values and undefined for
>> others - I think the original example was some kind of arbitrary
>> precision reals.

On Sun, Feb 11, 2001 at 06:24:33PM +1100, Fergus Henderson wrote:
> The original example was treating functions as a numeric type.  In the
> case of functions, computing equality is almost always infeasible.
> But you can easily define addition etc. pointwise:
>
>     f + g = (\ x -> f x + g x)

I have a fairly complete implementation of this, with dummy instances
of Eq and Show for those who want to see the consequences. I found,
interestingly enough, that an instance of Num can be defined on f a
for any type constructor f with the following three properties:

 (1) it has a unary constructor to lift scalars
 (2) it has a Functor instance
 (3) an analogue of zip can be defined on it

or, more precisely:

\begin{code}
instance (Eq (f a), Show (f a), Num a,
          Functor f, Zippable f, HasUnaryCon f) => Num (f a) where
    f + g = fmap (uncurry (+)) $ fzip f g
    f * g = fmap (uncurry (*)) $ fzip f g
    f - g = fmap (uncurry (-)) $ fzip f g
    negate = fmap negate
    abs    = fmap abs
    signum = fmap signum
    fromInteger = unaryCon . fromInteger

class Zippable f where
    fzip :: f a -> f b -> f (a,b)

class HasUnaryCon f where
    unaryCon :: a -> f a

instance Functor ((->) a) where
    fmap = (.)

instance Zippable ((->) a) where
    fzip f g = \x -> (f x, g x)

instance HasUnaryCon ((->) a) where
    unaryCon = const
\end{code}

and this generalizes nicely to other data types:

\begin{code}
instance Zippable Maybe where
    fzip (Just x) (Just y) = Just (x,y)
    fzip _ Nothing = Nothing
    fzip Nothing _ = Nothing

instance HasUnaryCon Maybe where
    unaryCon = Just

instance Zippable [ ] where
    fzip = zip

instance HasUnaryCon [ ] where
    unaryCon = cycle .
(:[])
\end{code}

On 11-Feb-2001, Brian Boutel wrote:
>> Returning to the basic issue, I understood the desire to remove Eq as a
>> superclass of Num was so that people were not required to implement
>> equality if they did not need it, not that there were significant
>> numbers of useful numeric types for which equality was not meaningful.

On Sun, Feb 11, 2001 at 06:24:33PM +1100, Fergus Henderson wrote:
> The argument is the latter, with functions as the canonical example.

Well, usually equality is meaningful as a mathematical concept, but is
either not effectively or not efficiently computable. Given an
enumerable and bounded domain, equality on functions may be defined
(perhaps inefficiently) by

\begin{code}
instance (Enum a, Bounded a, Eq b) => Eq (a->b) where
    f == g = all (uncurry (==)) $
                 map (\x -> (f x, g x)) [minBound..maxBound]
\end{code}

and, as I've said in another post, equality instances on data
structures expected to be infinite or very large, cases where the
semantics of equality make it difficult to compute, or perhaps even
cases where it's just not useful, are also not good to force.

Cheers,
Bill

From Tom.Pledger@peace.com  Sun Feb 11 21:58:40 2001
From: Tom.Pledger@peace.com (Tom Pledger)
Date: Mon, 12 Feb 2001 10:58:40 +1300
Subject: Revamping the numeric classes
In-Reply-To: 
References: <200102080800.VAA36591@waytogo.peace.co.nz> <14979.29205.362496.555808@waytogo.peace.co.nz>
Message-ID: <14983.2832.508757.673931@waytogo.peace.co.nz>

Marcin 'Qrczak' Kowalczyk writes:
 | Fri, 9 Feb 2001 17:29:09 +1300, Tom Pledger writes:
 |
 | > (x + y) + z
 | >
 | > we know from the explicit type signature (in your question that I was
 | > responding to) that x,y::Int and z::Double. Type inference does not
 | > need to treat x or y up, because it can take the first (+) to be Int
 | > addition. However, it must treat the result (x + y) up to the most
 | > specific supertype which can be added to a Double.
 |
 | Approach it differently.
 | z is Double, (x+y) is added to it, so
 | (x+y) must have type Double.

That's a restriction I'd like to avoid. Instead: ...so the most
specific common supertype of Double and (x+y)'s type must support
addition.

 | This means that x and y must have type Double. This is OK, because
 | they are Ints now, which can be converted to Double.
 |
 | Why is your approach better than mine?

It used a definition of (+) which was a closer fit for the types of x
and y.

 :
 | > h:: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool
 |
 | This type is ambiguous: the type variable b is needed in the
 | context but not present in the type itself, so it can never be
 | determined from the usage of h.

Yes. I rashly glossed over the importance of having well-defined most
specific common supertype (MSCS) and least specific common subtype
(LSCS) operators in a subtype lattice. Here's a more respectable
version:

    h :: Eq (MSCS a Int) => (Int -> a) -> Bool

 | > That can be inferred by following the structure of the term.
 | > Function terms do seem prone to an accumulation of deferred
 | > subtype constraints.
 |
 | When function application generates a constraint, the language gets
 | ambiguous as hell. Applications are found everywhere through the
 | program! Very often the type of the argument or result of an
 | internal application does not appear in the type of the whole
 | function being defined, which makes it ambiguous.
 |
 | Not to mention that there would be *LOTS* of these constraints.
 | Application is used everywhere. It's important to have its typing
 | rule simple and cheap. Generating a constraint for every
 | application is not an option.

These constraints tend to get discharged whenever the result of an
application is not another function. The hellish ambiguities can be
substantially tamed by insisting on a properly constructed subtype
lattice.
Anyway, since neither of us is about to have a change of mind, and
nobody else is showing an interest in this branch of the discussion,
it appears that the most constructive thing for me to do is return to
try-to-keep-quiet-about-subtyping-until-I've-done-it-in-THIH mode.

Regards,
Tom

From dpt@math.harvard.edu  Sun Feb 11 22:42:15 2001
From: dpt@math.harvard.edu (Dylan Thurston)
Date: Sun, 11 Feb 2001 17:42:15 -0500
Subject: A sample revised prelude for numeric classes
Message-ID: <20010211174215.A2033@math.harvard.edu>

I've started writing up a more concrete proposal for what I'd like the
Prelude to look like in terms of numeric classes. Please find it
attached below. It's still a draft and rather incomplete, but please
let me know any comments, questions, or suggestions.

Best,
	Dylan Thurston

[Attachment: NumPrelude.lhs]

Revisiting the Numeric Classes
------------------------------
The Prelude for Haskell 98 offers a well-considered set of numeric
classes which cover the standard numeric types (Integer, Int,
Rational, Float, Double, Complex) quite well. But they offer limited
extensibility and have a few other flaws. In this proposal we will
revisit these classes, addressing the following concerns:

(1) The current Prelude defines no semantics for the fundamental
    operations. For instance, presumably addition should be
    associative (or come as close as feasible), but this is not
    mentioned anywhere.

(2) There are some superfluous superclasses. For instance, Eq and
    Show are superclasses of Num. Consider the data type

> data IntegerFunction a = IF (a -> Integer)

    One can reasonably define all the methods of Num for
    IntegerFunction a (satisfying good semantics), but it is
    impossible to define non-bottom instances of Eq and Show.
    In general, a superclass relationship should indicate some
    semantic connection between the two classes.

(3) In a few cases, there is a mix of semantic operations and
    representation-specific operations. toInteger, toRational, and
    the various operations in RealFloat (decodeFloat, ...) are the
    main examples.

(4) In some cases, the hierarchy is not finely-grained enough:
    operations that are often defined independently are lumped
    together. For instance, in a financial application one might want
    a type "Dollar", or in a graphics application one might want a
    type "Vector". It is reasonable to add two Vectors or Dollars,
    but not, in general, reasonable to multiply them. But the
    programmer is currently forced to define a method for (*) when
    she defines a method for (+).

In specifying the semantics of type classes, I will state laws as
follows:

    (a + b) + c === a + (b + c)

The intended meaning is extensional equality: the rest of the program
should behave in the same way if one side is replaced with the other.
Unfortunately, the laws are frequently violated by standard
instances; the law above, for instance, fails for Float:

    (100000000000000000000.0 + (-100000000000000000000.0)) + 1.0 = 1.0
    100000000000000000000.0 + ((-100000000000000000000.0) + 1.0) = 0.0

Thus these laws should be interpreted as guidelines rather than
absolute rules. In particular, the compiler is not allowed to use
them. Unless stated otherwise, default definitions should also be
taken as laws.

This version is fairly conservative. I have retained the names for
classes with similar functions as far as possible, I have not made
some distinctions that could reasonably be made, and I have tried to
opt for simplicity over generality. The main non-conservative change
is the Powerful class, which allows a unification of the Haskell 98
operators (^), (^^), and (**). There are some problems with it, but I
left it in because it might be of interest. It is very easy to change
back to the Haskell 98 situation.
I sometimes use Simon Peyton Jones' pattern guards in writing
functions. This can (as always) be transformed into Haskell 98 syntax.

> module NumPrelude where
> import qualified Prelude as P
> -- Import some standard Prelude types verbatim verbandum
> import Prelude(Bool(..),Maybe(..),Eq(..),Either(..),Ordering(..),
>                Ord(..),Show(..),Read(..),id)
>
> infixr 8  ^
> infixl 7  *
> infixl 7  /, `quot`, `rem`, `div`, `mod`
> infixl 6  +, -
>
> class Additive a where
>     (+), (-) :: a -> a -> a
>     negate   :: a -> a
>     zero     :: a
>
>     -- Minimal definition: (+), zero, and (negate or (-))
>     negate a = zero - a
>     a - b    = a + (negate b)

Additive a encapsulates the notion of a commutative group, specified
by the following laws:

    a + b          === b + a
    (a + b) + c    === a + (b + c)
    zero + a       === a
    a + (negate a) === zero

Typical examples include integers, dollars, and vectors.

> class (Additive a) => Num a where
>     (*)         :: a -> a -> a
>     one         :: a
>     fromInteger :: Integer -> a
>
>     -- Minimal definition: (*), one
>     fromInteger 0         = zero
>     fromInteger n | n < 0 = negate (fromInteger (-n))
>     fromInteger n | n > 0 = reduceRepeated (+) one n

Num encapsulates the mathematical structure of a (not necessarily
commutative) ring, with the laws

    a * (b * c) === (a * b) * c
    one * a     === a
    a * one     === a
    a * (b + c) === a * b + a * c

Typical examples include integers, matrices, and quaternions.

"reduceRepeated op a n" is an auxiliary function that, for an
associative operation "op", computes the same value as

    reduceRepeated op a n = foldr1 op (replicate n a)

but applies "op" only O(log n) times. A sample implementation is
below.
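(Editorial note: the "sample implementation below" is not included in
this chunk of the archive. The following is one standard way to write
the auxiliary function — spelled both "reduceRepeat" and
"reduceRepeated" in the draft — by repeated squaring; it is an
assumption of this sketch, not Dylan's actual code.)

```haskell
-- reduceRepeated op a n computes foldr1 op (replicate n a) with only
-- O(log n) applications of op, assuming op is associative and n >= 1.
reduceRepeated :: (a -> a -> a) -> a -> Integer -> a
reduceRepeated _  a 1 = a
reduceRepeated op a n
    | even n    = reduceRepeated op (a `op` a) (n `div` 2)
    | otherwise = a `op` reduceRepeated op (a `op` a) (n `div` 2)

main :: IO ()
main = do
    print (reduceRepeated (+) 1 100 :: Integer)  -- 100: one added 100 times
    print (reduceRepeated (*) 2 10  :: Integer)  -- 1024: 2^10
```

Note how fromInteger's default then costs O(log n) additions, and the
Powerful instances below reuse the same function for fast powering.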
> class (Num a) => Integral a where
>     div, mod    :: a -> a -> a
>     divMod      :: a -> a -> (a,a)
>     gcd, lcm    :: a -> a -> a
>     extendedGCD :: a -> a -> (a,a,a)
>
>     -- Minimal definition: divMod or (div and mod),
>     -- and extendedGCD, if the provided definition does not work
>     div a b | (d,_) <- divMod a b = d
>     mod a b | (_,m) <- divMod a b = m
>     divMod a b = (div a b, mod a b)
>     gcd a b | (_,_,g) <- extendedGCD a b = g
>     extendedGCD a b = ... -- insert Euclid's algorithm here
>     lcm a b = (a `div` gcd a b) * b

Integral has the mathematical structure of a unique factorization
domain, satisfying the laws

    a * b                     === b * a
    (div a b) * b + (mod a b) === a
    mod (a+k*b) b             === mod a b
    a `mod` gcd a b           === zero
    gcd a b                   === gcd b a
    gcd (a + k*b) b           === gcd a b
    a*c + b*d                 === g  where (c, d, g) = extendedGCD a b

TODO: quot, rem partially defined. Explain.

The default definition of extendedGCD above should not be taken as
canonical (unlike most default definitions); for some Integral
instances, the algorithm could diverge, might not satisfy the laws
above, etc.

Typical examples of Integral include integers and polynomials over a
field. Note that, unlike in Haskell 98, gcd and lcm are member
functions of Integral. extendedGCD is new.

> class (Num a) => Fractional a where
>     (/)          :: a -> a -> a
>     recip        :: a -> a
>     fromRational :: Rational -> a
>
>     -- Minimal definition: recip or (/)
>     recip a        = one / a
>     a / b          = a * (recip b)
>     fromRational r = fromInteger (numerator r) / fromInteger (denominator r)

Fractional encapsulates the mathematical structure of a field,
satisfying the laws

    a * b         === b * a
    a * (recip a) === one

TODO: (/) is only partially defined. How to specify? Add a member
isInvertible :: a -> Bool?

Typical examples include rationals, the real numbers, and rational
functions (ratios of polynomials).
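(Editorial note: the elided "insert Euclid's algorithm here" default
for extendedGCD might look like the following sketch, written over
plain Integer with the standard Prelude rather than the proposed
classes.)

```haskell
-- extendedGCD a b returns (c, d, g) with a*c + b*d == g and g == gcd a b,
-- via the extended Euclidean algorithm: recurse on (b, a `mod` b) and
-- back-substitute the quotient.
extendedGCD :: Integer -> Integer -> (Integer, Integer, Integer)
extendedGCD a 0 = (1, 0, a)
extendedGCD a b = (d, c - q*d, g)
  where
    (q, r)    = a `divMod` b
    (c, d, g) = extendedGCD b r

main :: IO ()
main = do
    print (extendedGCD 12 18)  -- (-1,1,6): 12*(-1) + 18*1 == 6
    let (c, d, g) = extendedGCD 240 46
    print (240*c + 46*d == g && g == gcd 240 46)  -- True
```

For the class's general instances the base case and `divMod` would
come from Integral itself; this is exactly where the draft's caveat
applies, since `mod` may fail to shrink a "remainder" in rings that
are not Euclidean domains.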
> class (Num a, Additive b) => Powerful a b where
>     (^) :: a -> b -> a
> instance (Num a) => Powerful a (Positive Integer) where
>     a ^ 0 = one
>     a ^ n = reduceRepeated (*) a n
> instance (Fractional a) => Powerful a Integer where
>     a ^ n | n < 0 = recip (a ^ (negate n))
>     a ^ n         = a ^ (positive n)

Powerful is the class of pairs of numbers which can be exponentiated,
with the following laws:

    (a ^ b) * (a ^ c) === a ^ (b + c)
    a ^ one           === a

I don't know interesting examples of this structure besides the
instances defined above and the Floating class below. "Positive" is a
type constructor that asserts that its argument is >= 0; "positive"
makes this assertion. I am not sure how this will interact with
defaulting arguments so that one can write

    x ^ 5

without constraining x to be of Fractional type.

> -- Note: I think "Analytic" would be a better name than "Floating".
> class (Fractional a, Powerful a a) => Floating a where
>     pi                  :: a
>     exp, log, sqrt      :: a -> a
>     logBase             :: a -> a -> a
>     sin, cos, tan       :: a -> a
>     asin, acos, atan    :: a -> a
>     sinh, cosh, tanh    :: a -> a
>     asinh, acosh, atanh :: a -> a
>
>     -- Minimal complete definition:
>     --      pi, exp, log, sin, cos, sinh, cosh,
>     --      asinh, acosh, atanh
>     x ^ y       = exp (log x * y)
>     logBase x y = log y / log x
>     sqrt x      = x ^ 0.5
>     tan x       = sin x / cos x
>     tanh x      = sinh x / cosh x

Floating is the type of numbers supporting various analytic
functions. Examples include real numbers, complex numbers, and
computable reals represented as a lazy list of rational
approximations.

Note the default declaration for a superclass method ((^) from
Powerful). See the comments below, under "Instance declarations for
superclasses".

The semantics of these operations are rather ill-defined because of
branch cuts, etc.
> class (Num a, Ord a) => Real a where
>     abs    :: x -> x
>     signum :: x -> x
>
>     -- Minimal definition: nothing
>     abs x    = max x (negate x)
>     signum x = case compare x zero of
>                  GT -> one
>                  EQ -> zero
>                  LT -> negate one

This is the type of an ordered ring, satisfying the laws

    a * b            === b * a
    a + (max b c)    === max (a+b) (a+c)
    negate (max b c) === min (negate b) (negate c)
    a * (max b c)    === max (a*b) (a*c)  where a >= 0

Note that abs is in a rather different place than it is in the
Haskell 98 Prelude. In particular, abs :: Complex -> Complex is not
defined. To me, this seems to have the wrong type anyway;
Complex.magnitude has the correct type.

> class (Real a, Floating a) => RealFrac a where
>     -- lifted directly from the Haskell 98 Prelude
>     properFraction  :: (Integral b) => a -> (b,a)
>     truncate, round :: (Integral b) => a -> b
>     ceiling, floor  :: (Integral b) => a -> b
>
>     -- Minimal complete definition: properFraction
>     truncate x = m  where (m,_) = properFraction x
>
>     round x = let (n,r) = properFraction x
>                   m     = if r < 0 then n - 1 else n + 1
>               in case signum (abs r - 0.5) of
>                    -1 -> n
>                    0  -> if even n then n else m
>                    1  -> m
>
>     ceiling x = if r > 0 then n + 1 else n
>                 where (n,r) = properFraction x
>
>     floor x = if r < 0 then n - 1 else n
>               where (n,r) = properFraction x

As an aside, let me note the similarities between "properFraction x"
and "x `divMod` 1" (if that were defined).
> class (RealFrac a, Floating a) => RealFloat a where
>     atan2 :: a -> a -> a
>     atan2 y x
>       | x > 0            = atan (y/x)
>       | x == 0 && y > 0  = pi/2
>       | x < 0  && y > 0  = pi + atan (y/x)
>       | (x <= 0 && y < 0) ||
>         (x < 0 && isNegativeZero y) ||
>         (isNegativeZero x && isNegativeZero y)
>                          = -atan2 (-y) x
>       | y == 0 && (x < 0 || isNegativeZero x)
>                          = pi  -- must be after the previous test on zero y
>       | x == 0 && y == 0 = y   -- must be after the other double zero tests
>       | otherwise        = x + y  -- x or y is a NaN, return a NaN (via +)

> class (Real a, Integral a) => RealIntegral a where
>     quot, rem :: a -> a -> a
>     quotRem   :: a -> a -> (a,a)
>
>     -- Minimal definition: toInteger
>     -- insert quot, rem, quotRem definition here

--- Numerical functions

> subtract :: (Additive a) => a -> a -> a
> subtract = flip (-)
>
> even, odd :: (Integral a) => a -> Bool
> even n = n `mod` 2 == zero
> odd    = not . even

Additional standard libraries would include IEEEFloat (including the
bulk of the functions in Haskell 98's RealFloat class), VectorSpace,
Ratio, and Lattice. Let me explain that last one.

-----

> module Lattice where
> class Lattice a where
>     meet, join :: a -> a -> a

Mathematically, a lattice (more properly, a semilattice) is a space
with operations "meet" and "join" which are idempotent, commutative,
associative, and (usually) distribute over each other. Examples
include real-valued functions with (pointwise) max and min, and sets
with union and intersection. It would be reasonable to make Ord a
subclass of this, but it would probably complicate the class
hierarchy too much for the gain. The advantage of Lattice over Ord is
that it is better defined. Thus we can define a class

> class (Lattice a, Num a) => NumLattice a where
>     abs :: a -> a -> a
>     abs x = meet x (negate x)

and real-valued functions and computable reals can both be declared
as instances of this class.
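(Editorial note: a minimal sketch of how the proposed Lattice class
instantiates, using Booleans and the pointwise-function example from
the text; the instance bodies here are assumptions of this sketch, not
part of the proposal.)

```haskell
-- The proposed two-method class, stated standalone.
class Lattice a where
    meet, join :: a -> a -> a

-- Booleans form a lattice under conjunction and disjunction.
instance Lattice Bool where
    meet = (&&)
    join = (||)

-- Totally ordered types form lattices under min and max; Integer
-- stands in here for the real-valued example in the text.
instance Lattice Integer where
    meet = min
    join = max

-- Functions into a lattice form a lattice pointwise, covering the
-- real-valued-functions-with-max-and-min example.
instance Lattice b => Lattice (a -> b) where
    meet f g = \x -> f x `meet` g x
    join f g = \x -> f x `join` g x

main :: IO ()
main = do
    print (meet True False)                                 -- False
    print ((join (+ 1) (* 2) :: Integer -> Integer) 10)     -- max 11 20 = 20
```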
From ashley@semantic.org  Mon Feb 12 00:03:37 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Sun, 11 Feb 2001 16:03:37 -0800
Subject: A sample revised prelude for numeric classes
Message-ID: <200102120003.QAA05197@mail4.halcyon.com>

At 2001-02-11 14:42, Dylan Thurston wrote:

>I've started writing up a more concrete proposal for what I'd like the
>Prelude to look like in terms of numeric classes. Please find it
>attached below. It's still a draft and rather incomplete, but please
>let me know any comments, questions, or suggestions.

Apologies if this has been discussed and I missed it. When it comes to
writing a 'geek' prelude, what was wrong with the Basic Algebra
Proposal found in ? Perhaps it could benefit from multi-parameter
classes?

-- 
Ashley Yakeley, Seattle WA

From qrczak@knm.org.pl  Mon Feb 12 00:26:35 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 12 Feb 2001 00:26:35 GMT
Subject: A sample revised prelude for numeric classes
References: <20010211174215.A2033@math.harvard.edu>
Message-ID: 

Sun, 11 Feb 2001 17:42:15 -0500, Dylan Thurston writes:

> I've started writing up a more concrete proposal for what I'd like
> the Prelude to look like in terms of numeric classes. Please find
> it attached below. It's still a draft and rather incomplete,
> but please let me know any comments, questions, or suggestions.

I must say I like it. It has a good balance between generality and
usefulness / convenience. Modulo a few details, see below.

> > class (Num a, Additive b) => Powerful a b where
> >     (^) :: a -> b -> a
> > instance (Num a) => Powerful a (Positive Integer) where
> >     a ^ 0 = one
> >     a ^ n = reduceRepeated (*) a n
> > instance (Fractional a) => Powerful a Integer where
> >     a ^ n | n < 0 = recip (a ^ (negate n))
> >     a ^ n         = a ^ (positive n)

I don't like the fact that there is no Powerful Integer Integer.
Since the definition on negative exponents really depends on the
first type, but can be polymorphic wrt.
any Integral exponent, I would make other instances instead:

    instance RealIntegral b => Powerful Int b
    instance RealIntegral b => Powerful Integer b
    instance (Num a, RealIntegral b) => Powerful (Ratio a) b
    instance Powerful Float Int
    instance Powerful Float Integer
    instance Powerful Float Float
    instance Powerful Double Int
    instance Powerful Double Integer
    instance Powerful Double Double

This requires more instances for other types, but I don't see how to
make it better with (^), (^^) and (**) unified. It's a bit irregular:
Int can be raised to custom integral types without extra instances,
but Double cannot.

It's simpler to unify only (^) and (^^), leaving

    (**) :: Floating a => a -> a -> a

with the default definition of \a b -> exp (b * log a). I guess that
we always know which one we mean, although in math the notation is
the same.

Then the second argument of (^) is always an arbitrary RealIntegral,
so we can have a single-parameter class with a default definition:

    class (Num a) => Powerful a where
        (^) :: RealIntegral b => a -> b -> a
        a ^ 0 = one
        a ^ n = reduceRepeated (*) a n

    instance Powerful Int
    instance Powerful Integer
    instance (Num a) => Powerful (Ratio a) where
        -- Here unfortunately we must write the definition explicitly,
        -- including the positive exponent case: we don't have access
        -- to whatever the default definition would give if it was not
        -- replaced here. We should probably provide the default
        -- definition for such cases as a global function:
        --     fracPower :: (Fractional a, RealIntegral b) => a -> b -> a
        -- (under a better name).
    instance Powerful Float
        -- Ditto here.
    instance Powerful Double
        -- And here.

> > class (Real a, Floating a) => RealFrac a where
> >     -- lifted directly from Haskell 98 Prelude
> >     properFraction  :: (Integral b) => a -> (b,a)
> >     truncate, round :: (Integral b) => a -> b
> >     ceiling, floor  :: (Integral b) => a -> b

Should be RealIntegral instead of Integral.
Perhaps RealIntegral should be called Integral, and your Integral
should be called somewhat differently.

> > class (Real a, Integral a) => RealIntegral a where
> >     quot, rem :: a -> a -> a
> >     quotRem   :: a -> a -> (a,a)
> >
> >     -- Minimal definition: toInteger

You forgot toInteger.

> > class (Lattice a, Num a) => NumLattice a where
> >     abs :: a -> a -> a
> >     abs x = meet x (negate x)

Should be:

    abs :: a -> a

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK

From wli@holomorphy.com  Mon Feb 12 02:48:42 2001
From: wli@holomorphy.com (William Lee Irwin III)
Date: Sun, 11 Feb 2001 18:48:42 -0800
Subject: A sample revised prelude for numeric classes
In-Reply-To: <20010211174215.A2033@math.harvard.edu>; from dpt@math.harvard.edu on Sun, Feb 11, 2001 at 05:42:15PM -0500
References: <20010211174215.A2033@math.harvard.edu>
Message-ID: <20010211184842.F641@holomorphy.com>

On Sun, Feb 11, 2001 at 05:42:15PM -0500, Dylan Thurston wrote:
> I've started writing up a more concrete proposal for what I'd like the
> Prelude to look like in terms of numeric classes. Please find it
> attached below. It's still a draft and rather incomplete, but please
> let me know any comments, questions, or suggestions.

This is great, it gets something concrete out there to comment on,
which is probably quite a bit of what needs to happen. For brevity's
sake, I'll have to chop up your message a bit.

> (1) The current Prelude defines no semantics for the fundamental
>     operations. For instance, presumably addition should be
>     associative (or come as close as feasible), but this is not
>     mentioned anywhere.

This is something serious, as I had sort of taken the various
properties of the operations for granted. I'm glad you pointed it out.

> (2) There are some superfluous superclasses. For instance, Eq and
>     Show are superclasses of Num.
>     Consider the data type
>
> > data IntegerFunction a = IF (a -> Integer)
>
>     One can reasonably define all the methods of Num for
>     IntegerFunction a (satisfying good semantics), but it is
>     impossible to define non-bottom instances of Eq and Show.
>
>     In general, a superclass relationship should indicate some
>     semantic connection between the two classes.

It's possible to define non-bottom instances, for instance:

    instance Eq (a->b) where
        _ == _ = False

    instance Show (a->b) where
        show = const "<>"

I suspect you're aware of this and had in mind the constraint that
they should also respect the invariants and laws of the classes.

> > class (Additive a) => Num a where
> >     (*)         :: a -> a -> a
> >     one         :: a
> >     fromInteger :: Integer -> a

> Num encapsulates the mathematical structure of a (not necessarily
> commutative) ring, with the laws
>
>     a * (b * c) === (a * b) * c
>     one * a     === a
>     a * one     === a
>     a * (b + c) === a * b + a * c
>
> Typical examples include integers, matrices, and quaternions.

There is an additional property of zero being neglected here, namely
that it is an annihilator. That is,

    zero * x === zero
    x * zero === zero

Again, it's probably a reasonable compromise not to accommodate
nonassociative algebras, though an important application of them lies
within graphics, namely 3-vectors with the cross product.

> "reduceRepeated op a n" is an auxiliary function that, for an
> associative operation "op", computes the same value as
>
>     reduceRepeated op a n = foldr1 op (replicate n a)
>
> but applies "op" only O(log n) times. A sample implementation is
> below.

This is a terrific idea, and I'm glad someone has at last proposed
using it.

> > class (Num a) => Integral a where
> >     div, mod    :: a -> a -> a
> >     divMod      :: a -> a -> (a,a)
> >     gcd, lcm    :: a -> a -> a
> >     extendedGCD :: a -> a -> (a,a,a)

While I'm wholeheartedly in favor of the Euclidean algorithm idea, I
suspect that more structure (i.e.
separating it out to another class) could be useful, for instance, formal power series over Z are integral domains, but are not Euclidean domains because their residue classes aren't computable by a finite process. Various esoteric rings like Z[sqrt(k)] for various positive and negative integer k can also make this dependence explode, though they're probably too rare to matter. > TODO: quot, rem partially defined. Explain. > The default definition of extendedGCD above should not be taken as > canonical (unlike most default definitions); for some Integral > instances, the algorithm could diverge, might not satisfy the laws > above, etc. > TODO: (/) is only partially defined. How to specify? Add a member > isInvertible :: a -> Bool? > Typical examples include rationals, the real numbers, and rational > functions (ratios of polynomials). It's too easy to make it a partial function to really consider this, but if you wanted to go over the top (and you don't) you want the multiplicative group of units to be the type of the argument (and hence result) of recip. > > class (Num a, Additive b) => Powerful a b where > > ... > I don't know interesting examples of this structure besides the > instances defined above and the Floating class below. > "Positive" is a type constructor that asserts that its argument is >= > 0; "positive" makes this assertion. I am not sure how this will > interact with defaulting arguments so that one can write > > x ^ 5 > > without constraining x to be of Fractional type. What you're really trying to capture here is the (right?) Z-module-like structure of the multiplicative monoid in a commutative ring. There are some weird things going on here I'm not sure about, namely: (1) in an arbitrary commutative ring (or multiplicative semigroup), the function can (at best) be defined as (^) :: ring -> NaturalNumbers -> ring That is, only the natural numbers can act on ring to produce an exponentiation-like operation.
(2) if you have at least a division ring (or multiplicative group), you can extend it to (^) :: ring -> Integer -> ring so that all of Z acts on ring to produce an exponentiation operation. (3) Under some condition I don't seem to be able to formulate offhand, one can do (^) :: ring -> ring -> ring Now the ring (or perhaps more generally some related ring) acts on ring to produce an exponentiation operation like what is typically thought of for real numbers. Anyone with good ideas as to what the appropriate conditions are here, please speak up. (Be careful, w ^ z = exp (z * log w) behaves badly for w < 0 on the reals.) > > -- Note: I think "Analytic" would be a better name than "Floating". > > class (Fractional a, Powerful a a) => Floating a where > > ... > The semantics of these operations are rather ill-defined because of > branch cuts, etc. A useful semantics can be recovered by assuming that the library-defined functions are all the Cauchy principal values. Even now: Complex> (0 :+ 1)**(0 :+ 1) 0.20788 :+ 0.0 > > class (Num a, Ord a) => Real a where > > abs :: x -> x > > signum :: x -> x I'm not convinced that Real is a great name for this, or that this is really the right type for all this stuff. I'd still like to see abs and signum generalized to vector spaces. > > module Lattice where > > class Lattice a where > > meet, join :: a -> a -> a > > Mathematically, a lattice (more properly, a semilattice) is a space > with operations "meet" and "join" which are idempotent, commutative, > associative, and (usually) distribute over each other. Examples > include real-valued functions with (pointwise) max and min and sets > with union and intersection. It would be reasonable to make Ord a > subclass of this, but it would probably complicate the class hierarchy > too much for the gain. The advantage of Lattice over Ord is that it > is better defined.
Thus we can define a class > > > class (Lattice a, Num a) => NumLattice a where > > abs :: a -> a -> a > > abs x = meet x (negate x) > > and real-valued functions and computable reals can both be declared as > instances of this class. I'd be careful here: meet (join) semilattices are partial orders in which finite meets (joins) exist, and they only distribute over each other in distributive lattices. Boolean lattices also have complementation (e.g. not on type Bool) and Heyting lattices have implications (x <= (y ==> z) iff x `meet` y <= z). My suggestion (for simplicity) is: class Ord a => MeetSemiLattice a where meet :: a -> a -> a class MeetSemiLattice a => CompleteMeetSemiLattice a where bottom :: a class Ord a => JoinSemiLattice a where join :: a -> a -> a class JoinSemiLattice a => CompleteJoinSemiLattice a where top :: a and Ord defines a partial order (and hence induces Eq) on a type. (e.g. instance Ord a => Eq a where x == y = x <= y && y <= x ) I don't think bottoms and tops really get bundled in with the strict mathematical definition, e.g. natural numbers have all finite joins but no top, Integer has no bottom or top but all finite joins and meets, etc. Again, your design seems to incorporate the kind of simplicity that language implementors might want for a Standard Prelude, so your judgment on how much generality is appropriate here would probably be good. Cheers, Bill From jenglish@flightlab.com Mon Feb 12 03:11:25 2001 From: jenglish@flightlab.com (Joe English) Date: Sun, 11 Feb 2001 19:11:25 -0800 Subject: A sample revised prelude for numeric classes In-Reply-To: <20010211174215.A2033@math.harvard.edu> References: <20010211174215.A2033@math.harvard.edu> Message-ID: <200102120311.TAA00936@dragon.flightlab.com> Dylan Thurston wrote: > > I've started writing up a more concrete proposal for what I'd like the > Prelude to look like in terms of numeric classes. I like this proposal a lot.
The organization is closer to traditional mathematical structures than the current Prelude, but not as intimidating as Mechveliani's Basic Algebra Proposal. A very nice balance, IMO. A couple of requests: > > module Lattice where > > class Lattice a where > > meet, join :: a -> a -> a Could this be split into class SemiLattice a where join :: a -> a -> a and class (SemiLattice a) => Lattice a where meet :: a -> a -> a I run across a lot of structures which could usefully be modeled as semilattices, but lack a 'meet' operation. > It would be reasonable to make Ord a > subclass of this, but it would probably complicate the class hierarchy > too much for the gain. In a similar vein, I'd really like to see the Ord class split up: class PartialOrder a where (<), (>) :: a -> a -> Bool class (Eq a, PartialOrder a) => Ord a where compare :: a -> a -> Ordering (<=), (>=) :: a -> a -> Bool max, min :: a -> a -> a Perhaps it would make sense for PartialOrder to be a superclass of Lattice? --Joe English jenglish@flightlab.com From wli@holomorphy.com Mon Feb 12 03:13:26 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Sun, 11 Feb 2001 19:13:26 -0800 Subject: A sample revised prelude for numeric classes In-Reply-To: <200102120003.QAA05197@mail4.halcyon.com>; from ashley@semantic.org on Sun, Feb 11, 2001 at 04:03:37PM -0800 References: <200102120003.QAA05197@mail4.halcyon.com> Message-ID: <20010211191326.G641@holomorphy.com> At 2001-02-11 14:42, Dylan Thurston wrote: > >I've started writing up a more concrete proposal for what I'd like the > >Prelude to look like in terms of numeric classes. Please find it > >attached below. It's still a draft and rather incomplete, but please > >let me know any comments, questions, or suggestions. On Sun, Feb 11, 2001 at 04:03:37PM -0800, Ashley Yakeley wrote: > Apologies if this has been discussed and I missed it. When it comes to > writing a 'geek' prelude, what was wrong with the Basic Algebra Proposal > found in ?
> Perhaps it could benefit from multi-parameter classes? I'm not sure if there is anything concrete wrong with it; in fact, I'd like to see it made into a Prelude, but there are several reasons why I don't think it's being discussed here in the context of an alternative for a Prelude. (1) It's widely considered too complex and/or too mathematically involved for the general populace (or whatever semblance thereof exists within the Haskell community). (2) As a "Geek Prelude", it's considered to have some aesthetic and/or usability issues. (3) For persons as insane as myself, it's actually not radical enough. My commentary on it thus far is that I see it as high-quality software that could not only already serve as a "Geek Prelude" for many users, but upon which could also be based implementations and designs of future "Geek Preludes". The fact that no one has discussed it is probably due to a desire not to return to previous flamewars, but it should almost definitely be discussed as a reference point. I've actually been hoping that Mechveliani would chime in and comment on the various ideas, since he's actually already been through the motions of implementing an alternative Prelude and seen what sort of difficulties arise from actually trying to do these things various ways. Cheers, Bill From dpt@math.harvard.edu Mon Feb 12 03:27:53 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Sun, 11 Feb 2001 22:27:53 -0500 Subject: A sample revised prelude for numeric classes In-Reply-To: ; from qrczak@knm.org.pl on Mon, Feb 12, 2001 at 12:26:35AM +0000 References: <20010211174215.A2033@math.harvard.edu> Message-ID: <20010211222753.A2561@math.harvard.edu> Thanks for the comments! On Mon, Feb 12, 2001 at 12:26:35AM +0000, Marcin 'Qrczak' Kowalczyk wrote: > I don't like the fact that there is no Powerful Integer Integer. Reading this, it occurred to me that you could explicitly declare an instance of Powerful Integer Integer and have everything else work.
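For illustration, such an explicit instance might look like the following sketch. The operator name (^^^) is hypothetical (chosen to avoid clashing with the standard (^)), and the class shape is illustrative rather than the draft's actual text; it assumes multi-parameter type classes, a GHC/Hugs extension at the time:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- Illustrative two-parameter power class, not the draft's exact text.
class Powerful a b where
  (^^^) :: a -> b -> a

-- An explicit Powerful Integer Integer instance, using repeated
-- squaring so that (*) is applied O(log n) times (negative
-- exponents omitted for brevity).
instance Powerful Integer Integer where
  _ ^^^ 0 = 1
  a ^^^ n = go a n
    where
      go b 1 = b
      go b m
        | even m    = go (b * b) (m `div` 2)
        | otherwise = b * go (b * b) (m `div` 2)
```

With such an instance in scope, an expression like 2 ^^^ 10 resolves at type Integer without any defaulting tricks.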
> Then the second argument of (^) is always arbitrary RealIntegral, Nit: the second argument should be an Integer, not an arbitrary RealIntegral. > > > class (Real a, Floating a) => RealFrac a where > > > -- lifted directly from Haskell 98 Prelude > > > properFraction :: (Integral b) => a -> (b,a) > > > truncate, round :: (Integral b) => a -> b > > > ceiling, floor :: (Integral b) => a -> b > > Should be RealIntegral instead of Integral. Yes. I'd actually like to make it Integer, and let the user compose with fromInteger herself. > Perhaps RealIntegral should be called Integral, and your Integral > should be called somewhat differently. Perhaps. Do you have suggestions for names? RealIntegral is what naive users probably want, but Integral is what mathematicians would use (and call something like an integral domain). > > > class (Real a, Integral a) => RealIntegral a where > > > quot, rem :: a -> a -> a > > > quotRem :: a -> a -> (a,a) > > > > > > -- Minimal definition: toInteger > > You forgot toInteger. Oh, right. I actually had it and then deleted it. On the one hand, it feels very implementation-specific to me, comparable to the decodeFloat routines (which are useful, but not generally applicable). On the other hand, I couldn't think of many examples where I really wouldn't want that operation (other than monadic numbers, that, say, count the number of operations), and I couldn't think of a better place to put it. You'll notice that toRational was similarly missing. My preferred solution might still be the Convertible class I mentioned earlier. Recall it was class Convertible a b where convert :: a -> b maybe with another class like class (Convertible a Integer) => ConvertibleToInteger a where toInteger :: a -> Integer toInteger = convert if the restrictions on instance contexts remain. 
Convertible a b should indicate that a can safely be converted to b without losing any information, while maintaining relevant structure; from this point of view, its use would be strictly limited. (But what's relevant?) I'm still undecided here. Best, Dylan Thurston From fjh@cs.mu.oz.au Mon Feb 12 03:35:55 2001 From: fjh@cs.mu.oz.au (Fergus Henderson) Date: Mon, 12 Feb 2001 14:35:55 +1100 Subject: A sample revised prelude for numeric classes In-Reply-To: <20010211174215.A2033@math.harvard.edu> References: <20010211174215.A2033@math.harvard.edu> Message-ID: <20010212143555.A14678@hg.cs.mu.oz.au> On 11-Feb-2001, Dylan Thurston wrote: > > class (Num a) => Integral a where > > div, mod :: a -> a -> a > > divMod :: a -> a -> (a,a) > > gcd, lcm :: a -> a -> a > > extendedGCD :: a -> a -> (a,a,a) > > > > -- Minimal definition: divMod or (div and mod) > > -- and extendedGCD, if the provided definition does not work > > div a b | (d,_) <- divMod a b = d > > mod a b | (_,m) <- divMod a b = m > > divMod a b = (div a b, mod a b) > > gcd a b | (_,_,g) <- extendedGCD a b = g > > extendedGCD a b = ... -- insert Euclid's algorithm here > > lcm a b = (a `div` gcd a b) * b > > Integral has the mathematical structure of a unique factorization > domain, satisfying the laws > > a * b === b * a > (div a b) * b + (mod a b) === a > mod (a+k*b) b === mod a b > a `mod` gcd a b === zero > gcd a b === gcd b a > gcd (a + k*b) b === gcd a b > a*c + b*d === g where (c, d, g) = extendedGCD a b > > TODO: quot, rem partially defined. Explain. > The default definition of extendedGCD above should not be taken as > canonical (unlike most default definitions); for some Integral > instances, the algorithm could diverge, might not satisfy the laws > above, etc. In that case, I think it might be better to not provide it as a default, and instead to provide a function called say `euclid_extendedGCD'; someone defining an instance can then extendedGCD = euclid_extendedGCD if that is appropriate.
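A sketch of what such a named function could look like, written here against Haskell 98's Integral class so that it is self-contained (the name euclidExtendedGCD and its helper are illustrative, not from the draft); it returns (c, d, g) with a*c + b*d == g, matching the law quoted above:

```haskell
-- Iterative extended Euclidean algorithm. An instance author could
-- opt in with: extendedGCD = euclidExtendedGCD
euclidExtendedGCD :: Integral a => a -> a -> (a, a, a)
euclidExtendedGCD a b = go a 1 0 b 0 1
  where
    -- Invariants: a*c0 + b*d0 == r0 and a*c1 + b*d1 == r1.
    go r0 c0 d0 r1 c1 d1
      | r1 == 0   = (c0, d0, r0)
      | otherwise = go r1 c1 d1 (r0 - q * r1) (c0 - q * c1) (d0 - q * d1)
      where
        q = r0 `div` r1
```

For example, euclidExtendedGCD 240 46 yields coefficients whose combination 240*c + 46*d equals gcd 240 46.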
It's so much easier to find bugs in code that you did write than bugs which are caused by what you *didn't* write. Of course this is not so effective if we keep the awful Haskell 98 rule that instance methods always default to bottom if not defined; but even if that rule is not changed, compilers can at least warn about that case. > > class (Num a, Additive b) => Powerful a b where > > (^) :: a -> b -> a I don't like the name. Plain `Pow' would be better, IMHO. Apart from those two points, I quite like this proposal, at least at first glance. -- Fergus Henderson | "I have always known that the pursuit | of excellence is a lethal habit" WWW: | -- the last words of T. S. Garp. From dpt@math.harvard.edu Mon Feb 12 03:56:29 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Sun, 11 Feb 2001 22:56:29 -0500 Subject: A sample revised prelude for numeric classes In-Reply-To: <20010211184842.F641@holomorphy.com>; from wli@holomorphy.com on Sun, Feb 11, 2001 at 06:48:42PM -0800 References: <20010211174215.A2033@math.harvard.edu> <20010211184842.F641@holomorphy.com> Message-ID: <20010211225629.B2561@math.harvard.edu> On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote: > There is an additional property of zero being neglected here, namely > that it is an annihilator. That is, > > zero * x === zero > x * zero === zero It follows: zero * x === (one - one) * x === one * x - one * x === x - x === zero > Again, it's probably a reasonable compromise not to accommodate > nonassociative algebras, though an important application of them > lies within graphics, namely 3-vectors with the cross product. Agreed that non-associative algebras are useful, but I feel that they should have a different symbol.
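To make the cross-product point concrete, here is a sketch (the V3 type and function names are illustrative, not part of the proposal) that gives the non-associative product its own name rather than overloading (*):

```haskell
-- 3-vectors under the cross product: anticommutative and
-- non-associative, so the Num law a * (b * c) === (a * b) * c
-- would fail if (*) were used for it.
data V3 = V3 Double Double Double deriving (Eq, Show)

cross :: V3 -> V3 -> V3
cross (V3 a1 a2 a3) (V3 b1 b2 b3) =
  V3 (a2 * b3 - a3 * b2) (a3 * b1 - a1 * b3) (a1 * b2 - a2 * b1)
```

For the standard basis vectors, (i `cross` j) `cross` j is -i while i `cross` (j `cross` j) is the zero vector, so a separate symbol avoids advertising an associativity law that does not hold.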
> > > class (Num a) => Integral a where > > > div, mod :: a -> a -> a > > > divMod :: a -> a -> (a,a) > > > gcd, lcm :: a -> a -> a > > > extendedGCD :: a -> a -> (a,a,a) > > While I'm wholeheartedly in favor of the Euclidean algorithm idea, I > suspect that more structure (i.e. separating it out to another class) > could be useful, for instance, formal power series over Z are integral > domains, but are not Euclidean domains because their residue classes > aren't computable by a finite process. Various esoteric rings like > Z[sqrt(k)] for various positive and negative integer k can also make > this dependence explode, though they're probably too rare to matter. I tried to write the definitions in a way that could be defined for any unique factorization domain, not necessarily Euclidean: just take the two numbers, write them as a unit times prime factors in canonical form, and take the product of the common factors and call that the GCD. On reflection, extendedGCD probably isn't easy to write in general. What operations would you propose to encapsulate an integral domain (rather than a UFD)? Formal power series over Z are an interesting example; I'll think about it. On first blush, it seems like if you represented them as lazy lists you might be able to compute the remainder term by term. > > TODO: quot, rem partially defined. Explain. > > The default definition of extendedGCD above should not be taken as > > canonical (unlike most default definitions); for some Integral > > instances, the algorithm could diverge, might not satisfy the laws > > above, etc. > > TODO: (/) is only partially defined. How to specify? Add a member > > isInvertible :: a -> Bool? > > Typical examples include rationals, the real numbers, and rational > > functions (ratios of polynomials).
> > It's too easy to make it a partial function to really consider this, > but if you wanted to go over the top (and you don't) you want the > multiplicative group of units to be the type of the argument (and > hence result) of recip. Yes. I considered and rejected that. But it would be nice to let callers check whether the division will blow up, and that's not possible for classes that aren't members of Eq. But I suppose that's the whole point. For computable reals, the way I would compute 1/(very small number) would be to look at (very small number) more and more closely to figure out on which side of 0 it lay; if it actually were zero, the program would loop. I think programs that want to avoid this have to take type-specific steps (in this case, cutting off the evaluation at a certain point.) > What you're really trying to capture here is the (right?) Z-module-like > structure of the multiplicative monoid in a commutative ring. There are > some weird things going on here I'm not sure about, namely: Right. > (3) Under some condition I don't seem to be able to formulate > offhand, one can do > (^) :: ring -> ring -> ring > Now the ring (or perhaps more generally some related ring) > acts on ring to produce an exponentiation operation like what > is typically thought of for real numbers. Anyone with good > ideas as to what the appropriate conditions are here, please > speak up. > (Be careful, w ^ z = exp (z * log w) behaves badly for w < 0 > on the reals.) For complex numbers as well, this operation has problems because of branch cuts. It does satisfy that identity I mentioned, but is not continuous in the first argument. It is more common to see functions like exp be well defined (for more general additive groups) than to see the full (^) be defined. > > > class (Num a, Ord a) => Real a where > > > abs :: x -> x > > > signum :: x -> x > > I'm not convinced that Real is a great name for this, or that this > is really the right type for all this stuff.
I'd still like to see > abs and signum generalized to vector spaces. After thinking about this, I decided that I would be happy calling the comparable operation on vector spaces "norm": a) it's compatible with mathematical usage b) it keeps the Prelude itself simple. It's unfortunate that the operation for complex numbers can't be called "abs", but I think it's reasonable. > ...and Ord defines a partial order > (and hence induces Eq) on a type. I think that "Ord" should define a total ordering; it's certainly what naive users would expect. I would define another class "Poset" with a partial ordering. > (e.g. > instance Ord a => Eq a where > x == y = x <= y && y <= x > ) But to define <= in terms of meet and join you already need Eq! x <= y === meet x y == y Best, Dylan Thurston From brian@boutel.co.nz Mon Feb 12 04:24:37 2001 From: brian@boutel.co.nz (Brian Boutel) Date: Mon, 12 Feb 2001 17:24:37 +1300 Subject: A sample revised prelude for numeric classes References: <20010211174215.A2033@math.harvard.edu> Message-ID: <3A876585.6D4B2B8A@boutel.co.nz> Dylan Thurston wrote: > > I've started writing up a more concrete proposal for what I'd like the > Prelude to look like in terms of numeric classes. Please find it > attached below. It's still a draft and rather incomplete, but please > let me know any comments, questions, or suggestions. > > This is a good basis for discussion, and it helps to see something concrete. Here are a few comments: > Thus these laws should be interpreted as guidelines rather than > absolute rules. In particular, the compiler is not allowed to use > them. Unless stated otherwise, default definitions should also be > taken as laws. Including laws was discussed very early in the development of the language, but was rejected. IIRC Miranda had them. 
The argument against laws was that their presence might mislead users into the assumption that they did hold, yet if they were not enforceable then they might not hold and that could have serious consequences. Also, some laws do not hold in domains with bottom, e.g. a + (negate a) === 0 is only true if a is not bottom. > class (Additive a) => Num a where > (*) :: a -> a -> a > one :: a > fromInteger :: Integer -> a > > -- Minimal definition: (*), one > fromInteger 0 = zero > fromInteger n | n < 0 = negate (fromInteger (-n)) > fromInteger n | n > 0 = reduceRepeat (+) one n This definition requires both Eq and Ord!!! As does this one: > class (Num a, Additive b) => Powerful a b where > (^) :: a -> b -> a > instance (Num a) => Powerful a (Positive Integer) where > a ^ 0 = one > a ^ n = reduceRepeated (*) a n > instance (Fractional a) => Powerful a Integer where > a ^ n | n < 0 = recip (a ^ (negate n)) > a ^ n = a ^ (positive n) and several others further down. > (4) In some cases, the hierarchy is not finely-grained enough: > operations that are often defined independently are lumped > together. For instance, in a financial application one might want > a type "Dollar", or in a graphics application one might want a > type "Vector". It is reasonable to add two Vectors or Dollars, > but not, in general, reasonable to multiply them. But the > programmer is currently forced to define a method for (*) when she > defines a method for (+). Why do you stop at allowing addition on Dollars and not include multiplication by a scalar? Division is also readily defined on Dollar values, with a scalar result, but this, too, is not available in the proposal. Having Units as types, with the idea of preventing adding Apples to Oranges, or Dollars to Roubles, is a venerable idea, but is not in widespread use in actual programming languages. Why not? Vectors, too, can be multiplied, producing both scalar- and vector-products.
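One way to sketch the Dollar example (all names invented for illustration; the draft itself only separates Additive from Num) is to keep addition within the type and give scaling and ratios their own, differently-typed functions, which a single-parameter Num class cannot express:

```haskell
-- Amounts support addition; multiplying two Dollars makes no sense,
-- but scaling by a dimensionless factor and taking the ratio of two
-- amounts (a scalar result) are both meaningful.
newtype Dollar = Dollar Rational deriving (Eq, Ord, Show)

addDollar :: Dollar -> Dollar -> Dollar
addDollar (Dollar x) (Dollar y) = Dollar (x + y)

scale :: Rational -> Dollar -> Dollar      -- scalar * amount
scale k (Dollar x) = Dollar (k * x)

ratio :: Dollar -> Dollar -> Rational      -- amount / amount = scalar
ratio (Dollar x) (Dollar y) = x / y
```

Note that scale and ratio each mention two different types, so they cannot be methods of a one-parameter class like Num; they would need either multi-parameter classes or standalone functions like these.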
It seems that you are content with going as far as the proposal permits, though you cannot define, even within the revised Class system, all the common and useful operations on these types. This is the same situation as with Haskell as it stands. The question is whether the (IMHO) marginal increase in flexibility is worth the cost. This is not an argument for not separating Additive from Num, but it does weaken the argument for doing it. --brian From wli@holomorphy.com Mon Feb 12 05:17:53 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Sun, 11 Feb 2001 21:17:53 -0800 Subject: A sample revised prelude for numeric classes In-Reply-To: <20010211225629.B2561@math.harvard.edu>; from dpt@math.harvard.edu on Sun, Feb 11, 2001 at 10:56:29PM -0500 References: <20010211174215.A2033@math.harvard.edu> <20010211184842.F641@holomorphy.com> <20010211225629.B2561@math.harvard.edu> Message-ID: <20010211211753.H641@holomorphy.com> On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote: > It follows: > zero * x === (one - one) * x === one * x - one * x === x - x === zero Heh, you've caught me sleeping. =) On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote: > I tried to write the definitions in a way that could be defined for > any unique factorization domain, not necessarily Euclidean: just take > the two numbers, write them as a unit times prime factors in canonical > form, and take the product of the common factors and call that the > GCD. On reflection, extendedGCD probably isn't easy to write in > general. Well, factorizing things in various UFD's doesn't sound easy to me, but at this point I'm already having to do some reaching for counterexamples of practical programs where this matters. It could end up being a useless class method in some instances, so I'm wary of it. On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote: > What operations would you propose to encapsulate an integral domain > (rather than a UFD)? 
I'm not necessarily proposing a different set of operations to encapsulate them, but rather that gcd and cousins be split off into another subclass. Your design decisions in general appear to be striking a good chord, so I'll just bring up the idea and let you decide whether it should be done that way and so on. On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote: > Formal power series over Z are an interesting example; I'll think > about it. On first blush, it seems like if you represented them as > lazy lists you might be able to compute the remainder term by term. Consider taking the residue of a truly infinite member of Z[[x]] mod an ideal generated by a polynomial, e.g. 1/(1-x) mod (1+x^2). You can take the residue of each term of 1/(1-x), so x^(2n) -> (-1)^n and x^(2n+1) -> (-1)^n x, but you end up with an infinite number of (nonzero!) residues to add up and hence encounter the troubles with processes not being finite that I mentioned. On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote: >> (3) Under some condition I don't seem to be able to formulate >> offhand, one can do >> (^) :: ring -> ring -> ring >> Now the ring (or perhaps more generally some related ring) >> acts on ring to produce an exponentiation operation like what >> is typically thought of for real numbers. Anyone with good >> ideas as to what the appropriate conditions are here, please >> speak up. >> (Be careful, w ^ z = exp (z * log w) behaves badly for w < 0 >> on the reals.) On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote: > For complex numbers as well, this operation has problems because of > branch cuts. It does satisfy that identity I mentioned, but is not > continuous in the first argument. > It is more common to see functions like exp be well defined (for more > general additive groups) than to see the full (^) be defined. I think it's nice to have the Cauchy principal value versions of things floating around.
I know at least that I've had call for using the CPV of exponentiation (and it's not hard to contrive an implementation), but I'm almost definitely an atypical user. (Note, (**) does this today.) On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote: >> I'm not convinced that Real is a great name for this, or that this >> is really the right type for all this stuff. I'd still like to see >> abs and signum generalized to vector spaces. On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote: > After thinking about this, I decided that I would be happy calling the > comparable operation on vector spaces "norm": > a) it's compatible with mathematical usage > b) it keeps the Prelude itself simple. > It's unfortunate that the operation for complex numbers can't be > called "abs", but I think it's reasonable. I'm not entirely sure, but I think part of the reason this hasn't been done already is because it's perhaps painful to statically type dimensionality in vector spaces. On the other hand, assuming that the user has perhaps contrived a representation satisfactory to him or her, defining a class on the necessary type constructor shouldn't be tough at all. In a side note, it seems conventional to use abs and signum on complex numbers (and functions), and also perhaps the same symbol as abs for the norm on vectors and vector functions. It seems the distinction drawn is that abs is definitely pointwise and the norm more often does some sort of shenanigan like L^p norms etc. How much of this convention should be preserved seems like a design decision, but perhaps one that should be made explicit. On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote: >> ...and Ord defines a partial order >> (and hence induces Eq) on a type. On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote: > I think that "Ord" should define a total ordering; it's certainly what > naive users would expect. 
I would define another class "Poset" with a > partial ordering. I neglected here to add in the assumption that (<=) was a total relation; I had in mind antisymmetry of (<=) in posets so that element isomorphism implies equality. Introducing a Poset class where elements may be incomparable appears to butt against some of the bits where Bool is hardwired into the language, at least where one might attempt to use a trinary logical type in place of Bool to denote the result of an attempted comparison. On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote: >> (e.g. >> instance Ord a => Eq a where >> x == y = x <= y && y <= x >> ) On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote: > But to define <= in terms of meet and join you already need Eq! > > x <= y === meet x y == y I don't usually see this definition of (<=), and it doesn't seem like the natural way to go about defining it on most machines. The notion of the partial (possibly total) ordering (<=) seems to be logically prior to that of the meet to me. The containment usually goes: reflexive + transitive partial relation (preorder) => antisymmetric (partial order) [lattices possible here with additional structure, also equality decidable in terms of <= independently of the notion of lattices, for arbitrary partial orders] => total relation (total order) Whether this matters for library design is fairly unclear. Good work!
Cheers, Bill From Tom.Pledger@peace.com Mon Feb 12 05:18:19 2001 From: Tom.Pledger@peace.com (Tom Pledger) Date: Mon, 12 Feb 2001 18:18:19 +1300 Subject: A sample revised prelude for numeric classes In-Reply-To: <3A876585.6D4B2B8A@boutel.co.nz> References: <20010211174215.A2033@math.harvard.edu> <3A876585.6D4B2B8A@boutel.co.nz> Message-ID: <14983.29211.220380.337502@waytogo.peace.co.nz> Brian Boutel writes: : | Having Units as types, with the idea of preventing adding Apples to | Oranges, or Dollars to Roubles, is a venerable idea, but is not in | widespread use in actual programming languages. Why not? There was a pointer to some good papers on this in a previous discussion of units and dimensions: http://www.mail-archive.com/haskell@haskell.org/msg04490.html The main complication is that the type system needs to deal with integer exponents of dimensions, if it's to do the job well. For example, it should be OK to divide an acceleration (length * time^-2) by a density (mass * length^-3). Such things may well occur as subexpressions of something more intuitive, and it's undesirable to spell out all the anticipated dimension types in a program (a Haskell 98 program, for example) because: - Only an arbitrary finite number would be covered, and - The declarations would contain enough un-abstracted clichés to bring a tear to the eye. instance Mul Double (Dim_L Double) (Dim_L Double) instance Mul (Dim_L Double) (Dim_per_T Double) (Dim_L_per_T Double) etc.
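Spelled out, those Mul instances fit a three-parameter class in which the result type is determined by the argument types (a sketch: functional dependencies are assumed, and the instance bodies are invented here; the Dim_* names are the illustrative ones from the instances above):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- The dependency a b -> c says the result dimension is a function
-- of the argument dimensions.
class Mul a b c | a b -> c where
  mul :: a -> b -> c

newtype Dim_L a       = Dim_L a       deriving (Eq, Show)  -- length
newtype Dim_per_T a   = Dim_per_T a   deriving (Eq, Show)  -- time^-1
newtype Dim_L_per_T a = Dim_L_per_T a deriving (Eq, Show)  -- length/time

instance Mul Double (Dim_L Double) (Dim_L Double) where
  mul k (Dim_L x) = Dim_L (k * x)

instance Mul (Dim_L Double) (Dim_per_T Double) (Dim_L_per_T Double) where
  mul (Dim_L x) (Dim_per_T y) = Dim_L_per_T (x * y)
```

Each combination of dimensions still needs its own instance, which is exactly the explosion of un-abstracted declarations being complained about; doing better requires type-level arithmetic on the exponents.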
Regards, Tom From wli@holomorphy.com Mon Feb 12 05:57:03 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Sun, 11 Feb 2001 21:57:03 -0800 Subject: A sample revised prelude for numeric classes In-Reply-To: <3A876585.6D4B2B8A@boutel.co.nz>; from brian@boutel.co.nz on Mon, Feb 12, 2001 at 05:24:37PM +1300 References: <20010211174215.A2033@math.harvard.edu> <3A876585.6D4B2B8A@boutel.co.nz> Message-ID: <20010211215703.I641@holomorphy.com> On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote: > Including laws was discussed very early in the development of the > language, but was rejected. IIRC Miranda had them. The argument against > laws was that their presence might mislead users into the assumption > that they did hold, yet if they were not enforcable then they might not > hold and that could have serious consequences. Also, some laws do not > hold in domains with bottom, e.g. a + (negate a) === 0 is only true if a > is not bottom. I actually think it would be useful to have them and optionally dynamically enforce them, or at least whichever ones are computable, as a compile-time option. This would be _extremely_ useful for debugging purposes, and I, at the very least, would use it. I think Eiffel does something like this, can anyone else comment? This, of course, is a language extension, and so probably belongs in a different discussion from the rest of all this. Dylan Thurston wrote: >> class (Additive a) => Num a where >> (*) :: a -> a -> a >> one :: a >> fromInteger :: Integer -> a >> -- Minimal definition: (*), one >> fromInteger 0 = zero >> fromInteger n | n < 0 = negate (fromInteger (-n)) >> fromInteger n | n > 0 = reduceRepeat (+) one n On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote: > This definition requires both Eq and Ord!!! Only on Integer, not on a. 
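The point can be seen by writing the default out in a self-contained sketch (Additive, reduceRepeated, and fromInteger' are stand-ins for the draft's names, with `one` passed as an explicit argument): every comparison lands on the Integer argument, so the target type needs no Eq or Ord — it can even be a function type:

```haskell
{-# LANGUAGE FlexibleInstances #-}

class Additive a where
  zero :: a
  add  :: a -> a -> a
  neg  :: a -> a

instance Additive Integer where
  zero = 0
  add  = (+)
  neg  = negate

-- Functions into an additive type are additive, pointwise; note
-- that function types have no Eq or Ord instance.
instance Additive b => Additive (a -> b) where
  zero    = const zero
  add f g = \x -> f x `add` g x
  neg f   = neg . f

-- Repeated doubling: O(log n) uses of `op` for n >= 1.
reduceRepeated :: (a -> a -> a) -> a -> Integer -> a
reduceRepeated op a n
  | n == 1    = a
  | even n    = reduceRepeated op (a `op` a) (n `div` 2)
  | otherwise = a `op` reduceRepeated op (a `op` a) (n `div` 2)

-- The shape of the draft's fromInteger default: the comparisons
-- (== 0, < 0) all fall on the Integer argument n, never on a.
fromInteger' :: Additive a => a -> Integer -> a
fromInteger' one n
  | n == 0    = zero
  | n < 0     = neg (fromInteger' one (negate n))
  | otherwise = reduceRepeated add one n
```

Here fromInteger' works at type Integer -> Integer, a type with no Eq or Ord, which is the sense in which the default places no constraints on a.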
On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> As does this one:

Dylan Thurston wrote:
>> class (Num a, Additive b) => Powerful a b where
>>     (^) :: a -> b -> a
>> instance (Num a) => Powerful a (Positive Integer) where
>>     a ^ 0 = one
>>     a ^ n = reduceRepeated (*) a n
>> instance (Fractional a) => Powerful a Integer where
>>     a ^ n | n < 0 = recip (a ^ (negate n))
>>     a ^ n         = a ^ (positive n)

I should note that both of these definitions which require Eq and Ord
only require it on Integer.

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> and several others further down.

I'm not sure which ones you hit on, though I'm sure we'd all be more
than happy to counter-comment on them or repair the inadequacies.

Dylan Thurston wrote:
>> (4) In some cases, the hierarchy is not finely-grained enough:
>>     operations that are often defined independently are lumped
>>     together. For instance, in a financial application one might want
>>     a type "Dollar", or in a graphics application one might want a
>>     type "Vector". It is reasonable to add two Vectors or Dollars,
>>     but not, in general, reasonable to multiply them. But the
>>     programmer is currently forced to define a method for (*) when she
>>     defines a method for (+).

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
> Why do you stop at allowing addition on Dollars and not include
> multiplication by a scalar? Division is also readily defined on Dollar
> values, with a scalar result, but this, too, is not available in the
> proposal.

I can comment a little on this, though I can't speak for someone
else's design decisions. In general, the results of division and
multiplication on units have a different result type than that of the
arguments. This makes defining them by type-class overloading either
require existential wrappers or be otherwise difficult or impossible.
On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote: > Having Units as types, with the idea of preventing adding Apples to > Oranges, or Dollars to Roubles, is a venerable idea, but is not in > widespread use in actual programming languages. Why not? I'm probably even less qualified to comment on this, but I'll conjecture that the typing disciplines of most languages make it impractical. I suspect it could be possible in Haskell. On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote: > Vectors, too, can be multiplied, producing both scalar- and > vector-products. Exterior and inner products both encounter much the same troubles as defining arithmetic on types with units attached, with the additional complication that statically typing dimensionality is nontrivial. On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote: > It seems that you are content with going as far as the proposal permits, > though you cannot define, even within the revised Class system, all the > common and useful operations on these types. This is the same situation > as with Haskell as it stands. The question is whether the (IMHO) > marginal increase in flexibility is worth the cost. > This is not an argument for not separating Additive from Num, but it > does weaken the argument for doing it. I'm not convinced of this, though I _am_ convinced that a general framework for units would probably be useful to have in either a standard or add-on library distributed with Haskell, or perhaps to attempt to address units even within the standard Prelude if it's simple enough. Are you up to perhaps taking a stab at this? Perhaps if you tried it within the framework Thurston has laid out, some of the inadequacies could be revealed. 
Cheers, Bill

From ashley@semantic.org Mon Feb 12 06:16:02 2001
From: ashley@semantic.org (Ashley Yakeley)
Date: Sun, 11 Feb 2001 22:16:02 -0800
Subject: A sample revised prelude for numeric classes
Message-ID: <200102120616.WAA08490@mail4.halcyon.com>

At 2001-02-11 21:18, Tom Pledger wrote:

>The main complication is that the type system needs to deal with
>integer exponents of dimensions, if it's to do the job well.

Very occasionally non-integer or 'fractal' exponents of dimensions are
useful. For instance, geographic coastlines can be measured in km ^ n,
where 1 <= n < 2. This doesn't stop the CIA world factbook listing all
coastline lengths in straight kilometres, however.

More unit weirdness occurs with logarithms. For instance, if y and x
are distances, log (y/x) = log y - log x. Note that 'log x' is some
number + log (metre). Strange, huh?

Interestingly, in C++ you can parameterise types by values. For
instance:

--
// Mass, Length and Time
template <int M, int L, int T> class Unit
{
    public:
    double mValue;

    inline explicit Unit(double value)
    {
        mValue = value;
    }
};

template <int M, int L, int T>
Unit<M,L,T> operator + (Unit<M,L,T> a, Unit<M,L,T> b)
{
    return Unit<M,L,T>(a.mValue + b.mValue);
}

template <int M1, int L1, int T1, int M2, int L2, int T2>
Unit<M1+M2,L1+L2,T1+T2> operator * (Unit<M1,L1,T1> a, Unit<M2,L2,T2> b)
{
    return Unit<M1+M2,L1+L2,T1+T2>(a.mValue * b.mValue);
}

// etc.

int main()
{
    Unit<0,1,0> oneMetre(1);
    Unit<0,1,0> twoMetres = oneMetre + oneMetre;
    Unit<0,2,0> oneSquareMetre = oneMetre * oneMetre;
}
--

Can you do this sort of thing in Haskell?
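In a modern GHC, well after this thread's time, the C++ example translates almost literally using type-level natural numbers. A sketch, with invented names, assuming GHC.TypeLits (note one limitation: the Nat kind has no negative exponents, which a full units system would need):

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}

import GHC.TypeLits

-- Mass, Length and Time exponents carried in the type.
newtype Unit (m :: Nat) (l :: Nat) (t :: Nat) = Unit Double
  deriving (Eq, Show)

-- Addition combines only quantities of identical dimension.
addU :: Unit m l t -> Unit m l t -> Unit m l t
addU (Unit a) (Unit b) = Unit (a + b)

-- Multiplication adds the exponents at the type level.
mulU :: Unit m1 l1 t1 -> Unit m2 l2 t2 -> Unit (m1 + m2) (l1 + l2) (t1 + t2)
mulU (Unit a) (Unit b) = Unit (a * b)

oneMetre :: Unit 0 1 0
oneMetre = Unit 1
```

Here `mulU oneMetre oneMetre` has type `Unit 0 2 0`, just as `oneMetre * oneMetre` has type `Unit<0,2,0>` in the C++ version.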
-- Ashley Yakeley, Seattle WA From wli@holomorphy.com Mon Feb 12 06:46:15 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Sun, 11 Feb 2001 22:46:15 -0800 Subject: A sample revised prelude for numeric classes In-Reply-To: <200102120616.WAA08490@mail4.halcyon.com>; from ashley@semantic.org on Sun, Feb 11, 2001 at 10:16:02PM -0800 References: <200102120616.WAA08490@mail4.halcyon.com> Message-ID: <20010211224615.J641@holomorphy.com> At 2001-02-11 21:18, Tom Pledger wrote: >>The main complication is that the type system needs to deal with >>integer exponents of dimensions, if it's to do the job well. On Sun, Feb 11, 2001 at 10:16:02PM -0800, Ashley Yakeley wrote: > Very occasionally non-integer or 'fractal' exponents of dimensions are > useful. For instance, geographic coastlines can be measured in km ^ n, > where 1 <= n < 2. This doesn't stop the CIA world factbook listing all > coastline lengths in straight kilometres, however. This is pretty rare, and it's also fairly tough to represent points in spaces of fractional dimension. I'll bet the sorts of complications necessary to do so would immediately exclude it from consideration in the design of a standard library, but nevertheless would be interesting to hear about. Can you comment further on this? On Sun, Feb 11, 2001 at 10:16:02PM -0800, Ashley Yakeley wrote: > More unit weirdness occurs with logarithms. For instance, if y and x are > distances, log (y/x) = log y - log x. Note that 'log x' is some number + > log (metre). Strange, huh? If you (or anyone else) could comment on what sorts of units would be appropriate for the result type of a logarithm operation, I'd be glad to hear it. I don't know what the result type of this example is supposed to be if the units of a number are encoded in the type. On Sun, Feb 11, 2001 at 10:16:02PM -0800, Ashley Yakeley wrote: > Interestingly, in C++ you can parameterise types by values. 
For instance:
[interesting C++ example elided]

> Can you do this sort of thing in Haskell?

No, in general I find it necessary to construct some sort of set of
types parallel to the actual data type, define some sort of
existential data type encompassing the set of all types which can
represent one of those appropriate values, and "lift" things to that
type by means of sample arguments. I usually like ensuring that the
types representing things like integers never actually have any sort
of data manifest, i.e. the sample arguments are always undefined. This
is a bit awkward.

I think Okasaki's work on square matrices and perhaps some other ideas
should be exploited for this sort of thing, as there is quite a bit of
opposition to the usage of sample arguments. I'd like to see a library
for vector spaces based on similar ideas. I seem to be caught up in
other issues caused by mucking with fundamental data types'
definitions, my working knowledge of techniques like Okasaki's is
insufficient for the task, and my design concepts are probably too
radical for general usage, so I'm probably not the man for the job,
though I will very likely take a stab at such a beast for my own
edification.

Cheers, Bill

From qrczak@knm.org.pl Mon Feb 12 07:34:15 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 12 Feb 2001 07:34:15 GMT
Subject: A sample revised prelude for numeric classes
References: <20010211174215.A2033@math.harvard.edu> <3A876585.6D4B2B8A@boutel.co.nz>
Message-ID: 

Mon, 12 Feb 2001 17:24:37 +1300, Brian Boutel pisze:

> > class (Additive a) => Num a where
> >     (*)         :: a -> a -> a
> >     one         :: a
> >     fromInteger :: Integer -> a
> >
> >     -- Minimal definition: (*), one
> >     fromInteger 0         = zero
> >     fromInteger n | n < 0 = negate (fromInteger (-n))
> >     fromInteger n | n > 0 = reduceRepeat (+) one n
>
> This definition requires both Eq and Ord!!!

Only Eq Integer and Ord Integer, which are always there.
> Why do you stop at allowing addition on Dollars and not include > multiplication by a scalar? Perhaps because there is no good universal type for (*). Sorry, it would have to have a different symbol. > Having Units as types, with the idea of preventing adding Apples to > Oranges, or Dollars to Roubles, is a venerable idea, but is not in > widespread use in actual programming languages. Why not? It does not scale to more general cases. (m/s) / (s) = (m/s^2), so (/) would have to have the type (...) => a -> b -> c, which is not generally usable because of ambiguities. Haskell's classes are not powerful enough to define full algebra of units. > It seems that you are content with going as far as the proposal permits, > though you cannot define, even within the revised Class system, all the > common and useful operations on these types. This is the same situation > as with Haskell as it stands. The question is whether the (IMHO) > marginal increase in flexibility is worth the cost. The Prelude class system requires a compromise. There is no single design which accommodates all needs because Haskell's classes are not powerful enough to unify all levels of generality in a single class operation. And even if it was possible, it would be awkward to use in simpler cases. -- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From qrczak@knm.org.pl Mon Feb 12 07:04:30 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 12 Feb 2001 07:04:30 GMT Subject: A sample revised prelude for numeric classes References: <200102120003.QAA05197@mail4.halcyon.com> Message-ID: Sun, 11 Feb 2001 16:03:37 -0800, Ashley Yakeley pisze: > Apologies if this has been discussed and I missed it. When it comes to > writing a 'geek' prelude, what was wrong with the Basic Algebra Proposal > found in ? > Perhaps it could benefit from multi-parameter classes? Let me quote myself why I don't like this proposal: - It's too complicated. 
- Relies on controversial type system features, like undecidable
  instances and overlapping instances.

- Relies on type system features that are not implemented, and it's
  not clear whether they can be correctly designed or implemented at
  all, like "domain conversions".

- Has many instances that should not exist because the relevant type
  does not have the class property; they return Nothing or fail,
  instead of failing to compile.

- Properties like commutativity cannot be specified in Haskell. The
  compiler won't be able to automatically perform any optimizations
  based on commutativity.

- belongs is strange. IMHO it should always return True for valid
  arguments, and invalid arguments should be impossible to construct
  if the validity can be checked at all.

- Tries to turn a compiled language into an interpreted language.
  FuncExpr, too much parsing (with arbitrary rules hardwired into the
  language), too many runtime checks.

- It's too complicated.

- It's not true that it's "not necessary to dig into mathematics". I
  studied mathematics and did not have that much algebra.

- I prefer minBound to looking at the element under Just under Just
  under a tuple of osetBounds.

- Uses ugly character and string arguments that tune the behavior,
  e.g. in syzygyGens, divRem, canFr. I like Haskell98's divMod+quotRem
  better.

- Uses unneeded sample arguments, e.g. in toEnum, zero, primes, read.

- Have I said that it's too complicated?

There were lengthy discussions about it...
--
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK

From qrczak@knm.org.pl Mon Feb 12 07:11:36 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 12 Feb 2001 07:11:36 GMT
Subject: A sample revised prelude for numeric classes
References: <20010211174215.A2033@math.harvard.edu> <20010211184842.F641@holomorphy.com>
Message-ID: 

Sun, 11 Feb 2001 18:48:42 -0800, William Lee Irwin III pisze:

> class Ord a => MeetSemiLattice a where
>     meet :: a -> a -> a
>
> class MeetSemiLattice a => CompleteMeetSemiLattice a where
>     bottom :: a
>
> class Ord a => JoinSemiLattice a where
>     join :: a -> a -> a
>
> class JoinSemiLattice a => CompleteJoinSemiLattice a where
>     top :: a

Please: ok, but not for Prelude!

--
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK

From qrczak@knm.org.pl Mon Feb 12 07:24:31 2001
From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk)
Date: 12 Feb 2001 07:24:31 GMT
Subject: A sample revised prelude for numeric classes
References: <20010211174215.A2033@math.harvard.edu> <20010211222753.A2561@math.harvard.edu>
Message-ID: 

Sun, 11 Feb 2001 22:27:53 -0500, Dylan Thurston pisze:

> Reading this, it occurred to me that you could explicitly declare an
> instance of Powerful Integer Integer and have everything else work.

No, because it overlaps with Powerful a Integer (the constraint on a
doesn't matter for determining if it overlaps).

> > Then the second argument of (^) is always arbitrary RealIntegral,
>
> Nit: the second argument should be an Integer, not an arbitrary
> RealIntegral.

Of course not. (2 :: Integer) ^ (i :: Int) makes perfect sense.

> > You forgot toInteger.
>
> Oh, right. I actually had it and then deleted it. On the one hand,
> it feels very implementation-specific to me, comparable to the
> decodeFloat routines

It is needed for conversions (fromIntegral in particular).
> class Convertible a b where
>     convert :: a -> b
>
> maybe with another class like
>
> class (Convertible a Integer) => ConvertibleToInteger a where
>     toInteger :: a -> Integer
>     toInteger = convert

This requires writing a Convertible instance in addition to
ConvertibleToInteger, where currently mere toInteger in Integral
suffices. Since Convertible must be defined separately for each pair
of types (otherwise instances would easily overlap), it's not very
useful for numeric conversions.

Remember that there are a lot of numeric types in the FFI: Int8,
Word16, CLong, CSize. It does not provide anything in this area, so
one should not be required to define instances there. After a proposal
is developed, please check how many instances one has to define to
make a type as powerful as Int, and whether it is required to define
methods irrelevant to non-mathematical needs. basAlgPropos fails badly
by this criterion.

> Convertible a b should indicate that a can safely be converted to
> b without losing any information and maintaining relevant structure;

So fromInteger does not require Convertible, which is inconsistent
with toInteger. Sorry, I am against Convertible in the Prelude - it
tries to be too general, which makes it inappropriate.

--
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                      SYGNATURA ZASTĘPCZA
QRCZAK

From karczma@info.unicaen.fr Mon Feb 12 09:33:03 2001
From: karczma@info.unicaen.fr (Jerzy Karczmarczuk)
Date: Mon, 12 Feb 2001 09:33:03 +0000
Subject: In hoc signo vinces (Was: Revamping the numeric classes)
References: <200102061320.OAA27993@isun11.informatik.uni-leipzig.de> <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> <3A83CBF7.6C635E94@info.unicaen.fr>
Message-ID: <3A87ADCF.D939A500@info.unicaen.fr>

Marcin Kowalczyk pretends not to understand:

> JK:
> > > Again, a violation of the orthogonality principle.
> > Needing division just to define signum. And of course a completely
> > different approach to define the signum of integers. Or of
> > polynomials...

> So what? That's why it's a class method and not a plain function with
> a single definition.
>
> Multiplication of matrices is implemented differently than
> multiplication of integers. Why don't you call it a violation of the
> orthogonality principle (whatever it is)?

1. The orthogonality principle has - in principle - nothing to do with
implementation. Separating a complicated structure into independent,
or "orthogonal", concepts is a basic invention of the human mind,
spanning from Montesquieu's principle of the independence of the three
political powers down to syntactic issues in the design of a
programming language. If you eliminate as far as possible the
"interfacing" between concepts, the integration of the whole is
easier. Spurious dependencies are always harmful.

2. This has been a major driving force in the construction of
mathematical entities for centuries. What do you really NEED for your
proof? What is the math. category where a given concept can be
defined, where a theorem holds, etc.?

3. The example of matrices is inadequate (to say it mildly). The
monoid rules, e.g. associativity, hold in both cases. So I might call
both operations "multiplication", although one is commutative and the
other is not.

==

In a later posting you say:

> If (+) can be implicitly lifted to functions, then why not signum?
> Note that I would lift neither signum nor (+). I don't feel the need.

...

I not only feel the need, but I feel that it is important that the
additive structure of the codomain is inherited by functions. In a
more specific context: the fact that linear functionals over a vector
space themselves form a vector space is simply *fundamental* for
quantum mechanics, for crystallography, etc. You don't need to be a
Royal Abstractor to see this.
Jerzy Karczmarczuk
Caen, France

From wli@holomorphy.com Mon Feb 12 08:43:57 2001
From: wli@holomorphy.com (William Lee Irwin III)
Date: Mon, 12 Feb 2001 00:43:57 -0800
Subject: Primitive types and Prelude shenanigans
Message-ID: <20010212004357.K641@holomorphy.com>

It seems to me that some additional primitive types would be useful,
most of all a natural number type corresponding to an
arbitrary-precision unsigned integer. Apparently Integer is defined in
terms of something else in the GHC Prelude; what else might be needed
to define it?

Some of my other natural thoughts along these lines are positive reals
and rationals, and nonzero integers. I have an idea that rationals
might be handled much like natural numbers and nonzero integers,
though the nonzero and positive reals pose some nasty problems
(consider underflow). Would such machinations be useful to anyone
else?

Further down this line, I've gone off and toyed with Bool, and
discovered GHC doesn't like it much. Is there a particular place
within GHC I should look to see how the primitive Boolean type, and
perhaps other types, are handled? I'd also like to see where some of
the magic behind the typing of various other built-in constructs
happens, like list comprehensions, tuples, and derived classes.
Cheers, Bill From lisper@it.kth.se Mon Feb 12 09:08:02 2001 From: lisper@it.kth.se (Bjorn Lisper) Date: Mon, 12 Feb 2001 10:08:02 +0100 (MET) Subject: A sample revised prelude for numeric classes In-Reply-To: <14983.29211.220380.337502@waytogo.peace.co.nz> (message from Tom Pledger on Mon, 12 Feb 2001 18:18:19 +1300) References: <20010211174215.A2033@math.harvard.edu> <3A876585.6D4B2B8A@boutel.co.nz> <14983.29211.220380.337502@waytogo.peace.co.nz> Message-ID: <200102120908.KAA01548@cuchulain.it.kth.se> Tom Pledger: >Brian Boutel writes: > : > | Having Units as types, with the idea of preventing adding Apples to > | Oranges, or Dollars to Roubles, is a venerable idea, but is not in > | widespread use in actual programming languages. Why not? >There was a pointer to some good papers on this in a previous >discussion of units and dimensions: > http://www.mail-archive.com/haskell@haskell.org/msg04490.html >The main complication is that the type system needs to deal with >integer exponents of dimensions, if it's to do the job well. Andrew Kennedy has basically solved this for higher order languages with HM type inference. He made an extension of the ML type system with dimensional analysis a couple of years back. Sorry I don't have the references at hand but he had a paper in ESOP I think. I think the real place for dimension and unit inference is in modelling languages, where you can specify physical systems through differential equations and simulate them numerically. Such languages are being increasingly used in the "real world" now. It would be quite interesting to have a version of Haskell that would allow the specification of differential equations, so one could make use of all the good features of Haskell for this. This would allow the unified specification of systems that consist both of physical and computational components. 
This niche is now being filled by a mix of special-purpose modeling languages like Modelica and Matlab/Simulink for the physical part, and SDL and UML for control parts. The result is likely to be a mess, in particular when these specifications are to be combined into full system descriptions. Björn Lisper From karczma@info.unicaen.fr Mon Feb 12 10:56:55 2001 From: karczma@info.unicaen.fr (Jerzy Karczmarczuk) Date: Mon, 12 Feb 2001 10:56:55 +0000 Subject: Dimensions of the World (was: A sample revised prelude) References: <200102120616.WAA08490@mail4.halcyon.com> Message-ID: <3A87C177.3E8F1EBB@info.unicaen.fr> Ashley Yakeley after Tom Pledger: > > >The main complication is that the type system needs to deal with > >integer exponents of dimensions, if it's to do the job well. > > Very occasionally non-integer or 'fractal' exponents of dimensions are > useful. For instance, geographic coastlines can be measured in km ^ n, > where 1 <= n < 2. This doesn't stop the CIA world factbook listing all > coastline lengths in straight kilometres, however. > > More unit weirdness occurs with logarithms. For instance, if y and x are > distances, log (y/x) = log y - log x. Note that 'log x' is some number + > log (metre). Strange, huh? When a week ago I mentioned those dollars difficult to multiply (although some people spend their lives doing it...), and some dimensional quantities which should have focalised some people attention on the differences between (*) and (+), I never thought the discussion would go so far. Dimensional quantities *are* a can of worms. From the practical point of view they are very useful in order to avoid making silly programming errors, I have applied them several times while coding some computer algebra expressions. Dimensions were "just symbols", but with "reasonable" mathematical properties (concerning (*) and (/)), so factorizing this symbolic part was an easy way to see whether I didn't produce some illegal combinations. 
Sometimes they are really "dimensionless" scaling factors! In
TeX/MetaFont the units such as mm, cm, in etc. exist and function very
nicely as conversion factors.

W.L.I.III asks:

> If you (or anyone else) could comment on what sorts of units would be
> appropriate for the result type of a logarithm operation, I'd be glad to
> hear it. I don't know what the result type of this example is supposed
> to be if the units of a number are encoded in the type.

Actually, the logarithm example would be considered spurious by almost
all "practical" mathematicians (e.g., physicists). A formula is sane
if the argument of the logarithm is dimensionless (if in x/y both
elements share the same dimension). Then adding and subtracting the
same log(GHmSmurf) is irrelevant.

==

But in general mathematical physics (and in geometry, which
encompasses the major part of the former) there are some delicate
issues, which sometimes involve fractality, and sometimes the
necessity of "religious acts", such as the renormalization schemes in
Quantum Field Theory. In this case we have the "dimensional
transmutation" phenomenon: the gluon coupling constant, which is
dimensionless, acquires a dimension, and conditions the hadronic mass
scale, i.e. the masses of elementary particles.

[[[Yes, I know, you serious comp. scists won't bother about it, but I
will try anyway to tell you in two words why. A way of making a
singular theory finite is to put it on a discrete lattice which
represents the phys. space. There is a dimensional object here: the
lattice constant. Then you go to zero with it, in order to retrieve
the physical space-time. When you reach this zero, you lose this
constant, and this is one of the reasons why the theory explodes. So,
it must be introduced elsewhere... In other words: a physical
correlation length L between objects is finite. If the lattice
constant c is finite, L=N*c. But if c goes to zero...
Now, programming all this, Haskell or not, is another issue.]]]

==

Fractals are seen not only in geography, but everywhere, as Mandelbrot
and his followers duly recognized. You will need them doing
computations in colloid physics, in galaxy statistics, and in the
metabolism of the human body [[if you think that your energy
expenditure is proportional to your volume, you are dead wrong; most
interesting processes take place within membranes. You are much
flatter than you think, folks, ladies included.]].

Actually, ALL THIS was one of the major driving forces behind my
interest in functional programming. I found an approach to programming
which did not target "symbolic manipulations", but "normal computing",
so it could be practically competing against Fortran etc. Yet, it had
the potential to deal in a serious, formal manner with the
mathematical properties of the manipulated objects. That's why I
suffer seeing random, ad hoc numerics.

Björn Lisper mentions some approach to dimensions:

> Andrew Kennedy has basically solved this for higher order languages
> with HM type inference. He made an extension of the ML type system
> with dimensional analysis a couple of years back. Sorry I don't have
> the references at hand but he had a paper in ESOP I think.
>
> I think the real place for dimension and unit inference is in modelling
> languages, where you can specify physical systems through differential
> equations and simulate them numerically. Such languages are being
> increasingly used in the "real world" now.

ESOP '94. Andrew Kennedy: Dimension Types. 348-362. There are other
articles: Jean Goubault, "Inférence d'unités physiques en ML";
Mitchell Wand and Patrick O'Keefe, "Automatic dimensional inference";
and *hundreds* (literally) of papers within the Computer Algebra
domain about dimensionful computations. I wouldn't say that the issue
is "solved"!

There is MUCH MORE in modelling the physical (or biological or
financial) world than just the differential equations.
There is plenty of algebra involved, and *here* the dimensional
reasoning may be important. And such systems as Matlab/Simulink, etc.
ignore the dimensions, although they now have an OO layer permitting
one to define something like them.

Jerzy Karczmarczuk
Caen, France

From mk167280@students.mimuw.edu.pl Mon Feb 12 10:00:02 2001
From: mk167280@students.mimuw.edu.pl (Marcin 'Qrczak' Kowalczyk)
Date: Mon, 12 Feb 2001 11:00:02 +0100 (CET)
Subject: Primitive types and Prelude shenanigans
In-Reply-To: <20010212004357.K641@holomorphy.com>
Message-ID: 

On Mon, 12 Feb 2001, William Lee Irwin III wrote:

> It seems to me that some additional primitive types would be useful,
> most of all a natural number type corresponding to an arbitrary-
> precision unsigned integer. Apparently Integer is defined in terms
> of something else in the GHC Prelude, what else might be needed to
> define it?

It depends on the implementation, and IMHO it would be bad to require
a particular implementation for no reason. For example ghc uses the
gmp library and does not implement Integers in terms of Naturals; gmp
handles negative numbers natively.

> Some of my other natural thoughts along these lines are positive
> reals and rationals, and nonzero integers.

You can define them yourself by wrapping not-necessarily-positive
types if you feel the need. Most of the time there is no need, because
Haskell has no subtyping - they would be awkward to use together with
present types which include negative numbers.

> Further down this line, I've gone off and toyed with Bool, and
> discovered GHC doesn't like it much. Is there a particular place within
> GHC I should look to see how the primitive Boolean type, and perhaps
> other types are handled?

Modules with names beginning with Prel define approximately everything
that the Prelude includes, and everything with magic support in the
compiler. PrelGHC defines primops that are hardwired in the compiler,
and PrelBase is a basic module from which most things begin.
In particular Bool is defined there as a regular algebraic type:

    data Bool = False | True

Types like Int or Double are defined in terms of primitive unboxed
types called Int# and Double#. They are always evaluated and are not
like other Haskell types: they don't have a kind of the form * or
k1->k2 but a special unboxed kind, and their values don't include
bottom. You can't have [Int#] or use Prelude.id :: Int# -> Int#. They
can be present in data definitions and function arguments and results.

There is more primitive stuff: primops like +# :: Int# -> Int# -> Int#,
primitive array types, unboxed tuples, unboxed constants. There is a
paper about it but I don't have the URL here. They are also described
in the GHC User's Guide.

They are not portable at all. Other Haskell implementations may use
very different implementation techniques. In ghc they exist primarily
to make it easy to express optimizations - these types occur all the
time during internal transformations of the module being optimized -
laziness is optimized away when possible. They are often present in
.hi files when a function has been split into worker and wrapper, so
that code using the module can refer to the worker using primitive
types directly, instead of allocating every number on the heap. They
are also exposed to the programmer (who imports GlaExts) who really
wants to hack with them manually. They don't have the nice Haskell
properties of other types (fully polymorphic operations don't work on
them), so I would not expect such things to appear officially in the
Haskell definition.

> I'd also like to see where some of the magic behind the typing of
> various other built-in constructs happens, like list comprehensions,
> tuples, and derived classes.

Inside the compiler, not in libraries.
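For the curious: in a current GHC the same kind of manual hacking is enabled by the MagicHash extension, with the primitives exported from GHC.Exts (the successor of GlaExts). A small sketch:

```haskell
{-# LANGUAGE MagicHash #-}

import GHC.Exts (Int (I#), (+#))

-- Unbox an Int, add one with the primitive (+#), and rebox the result.
plusOne :: Int -> Int
plusOne (I# i) = I# (i +# 1#)
```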
--
Marcin 'Qrczak' Kowalczyk

From john@foo.net Mon Feb 12 10:21:08 2001
From: john@foo.net (John Meacham)
Date: Mon, 12 Feb 2001 02:21:08 -0800
Subject: A sample revised prelude for numeric classes
In-Reply-To: <200102120003.QAA05197@mail4.halcyon.com>; from ashley@semantic.org on Sun, Feb 11, 2001 at 04:03:37PM -0800
References: <200102120003.QAA05197@mail4.halcyon.com>
Message-ID: <20010212022108.B3655@mark.ugcs.caltech.edu>

I quadruple the vote that the basic algebra proposal is too
complicated. However I don't see how one could write even moderately
complex programs and not wish for a partial ordering class or the
ability to use standard terms for groups and whatnot. The current
proposal is much more to my liking.

An important thing is that in Haskell it is easy to build up
functionality with fine-grained control, but difficult or impossible
to tear it down. You can't take a complicated class and split it up
into smaller independent pieces (not easily, at least), but you can
take the functionality of several smaller classes and build up a
'bigger' class. Because of this feature one should always err on the
side of simplicity and smaller classes when writing re-usable code.

I guess what I'm trying to say is that we don't need a Prelude which
will provide all of the mathematical structure everyone will need or
want, but rather one which doesn't inhibit the ability to build what
is needed upon it in a reasonable fashion. (I don't consider
un-importing the Prelude reasonable for re-usable code and libraries
meant to be shared.)

In short, three cheers for the new proposal. My one request is that if
at all possible, make some sort of partial ordering class part of the
changes, they are just way too useful in all types of programs to not
have a standard abstraction.

John
--
--------------------------------------------------------------
John Meacham http://www.ugcs.caltech.edu/~john/
California Institute of Technology, Alum.
john@foo.net --------------------------------------------------------------

From ketil@ii.uib.no Mon Feb 12 10:31:00 2001 From: ketil@ii.uib.no (Ketil Malde) Date: 12 Feb 2001 11:31:00 +0100 Subject: A sample revised prelude for numeric classes In-Reply-To: qrczak@knm.org.pl's message of "12 Feb 2001 07:34:15 GMT" References: <20010211174215.A2033@math.harvard.edu> <3A876585.6D4B2B8A@boutel.co.nz> Message-ID: 

qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) writes:

>> Why do you stop at allowing addition on Dollars and not include
>> multiplication by a scalar?

> Perhaps because there is no good universal type for (*).
> Sorry, it would have to have a different symbol.

Is this ubiquitous enough that we should have a *standardized* different symbol? Any candidates?

>> Having Units as types, with the idea of preventing adding Apples to
>> Oranges, or Dollars to Roubles, is a venerable idea, but is not in
>> widespread use in actual programming languages. Why not?

> It does not scale to more general cases. (m/s) / (s) = (m/s^2),
> so (/) would have to have the type (...) => a -> b -> c, which is not
> generally usable because of ambiguities. Haskell's classes are not
> powerful enough to define full algebra of units.

While it may not be in the language, nothing's stopping you from - and some will probably encourage you to - implementing e.g. financial libraries with different data types for different currencies. Which I think is a better way to handle it, since whether you want m to be divisible by s is rather application-dependent.

-kzm

-- If I haven't seen further, it is by standing in the footprints of giants

From ashley@semantic.org Mon Feb 12 10:49:02 2001 From: ashley@semantic.org (Ashley Yakeley) Date: Mon, 12 Feb 2001 02:49:02 -0800 Subject: Scalable and Continuous Message-ID: <200102121049.CAA24012@mail4.halcyon.com> 

A brief idea: something like... 
--

    class (Additive a) => Scalable a
        scale :: Real -> a -> a -- equivalent to * (not sure of name for Real type)

    class (Scalable b) => Continuous a b | a -> b
        add :: b -> a -> a
        difference :: a -> a -> b

--

Vectors, for instance, are Scalable. You can multiply them by any real number to get another vector. Num would also be Scalable. An example of Continuous would be time, e.g. "Continuous Time Interval". There's no zero time, although there is a zero interval. Space too: "Continuous Position Displacement", since there's no "zero position".

-- Ashley Yakeley, Seattle WA

From jf15@hermes.cam.ac.uk Mon Feb 12 10:58:10 2001 From: jf15@hermes.cam.ac.uk (Jon Fairbairn) Date: Mon, 12 Feb 2001 10:58:10 +0000 (GMT) Subject: A sample revised prelude for numeric classes In-Reply-To: Message-ID: 

On 12 Feb 2001, Ketil Malde wrote:

> qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) writes:
>
> >> Why do you stop at allowing addition on Dollars and not include
> >> multiplication by a scalar?
>
> > Perhaps because there is no good universal type for (*).
> > Sorry, it would have to have a different symbol.
>
> Is this ubiquitous enough that we should have a *standardized*
> different symbol?

I'd think so.

> Any candidates?

.* *. [and .*.] ? where the "." is on the side of the scalar

--
Jón Fairbairn                          Jon.Fairbairn@cl.cam.ac.uk
31 Chalmers Road                       jf@cl.cam.ac.uk
Cambridge CB1 3SZ                      +44 1223 570179 (pm only, please)

From mk167280@students.mimuw.edu.pl Mon Feb 12 11:04:39 2001 From: mk167280@students.mimuw.edu.pl (Marcin 'Qrczak' Kowalczyk) Date: Mon, 12 Feb 2001 12:04:39 +0100 (CET) Subject: A sample revised prelude for numeric classes In-Reply-To: <20010212022108.B3655@mark.ugcs.caltech.edu> Message-ID: 

On Mon, 12 Feb 2001, John Meacham wrote:

> My one request is that if at all possible, make some sort of partial
> ordering class part of the changes; they are just way too useful in all
> types of programs not to have a standard abstraction.
I like the idea of having e.g. (<) and (>) not necessarily total, and only a total compare. It doesn't need to introduce new operations, just split an existing class into two.

Only I'm not sure why (<) and (>) should be partial, with (<=) and (>=) total, and not for example the opposite. Or perhaps all four partial, with compare, min, max total.

For a partial ordering it's often easier to define (<=) or (>=) than (<) or (>). They are related by (==) and not by negation, so it's not exactly the same. I would have PartialOrd with (<), (>), (<=), (>=), and Ord with the rest. Or perhaps with the names Ord and TotalOrd respectively?

There are several choices of default definitions of these four operators. First of all they can be related either by (==) or by negation. The first works for a partial order, the second is more efficient in the case it works (a total order). We can have (<=) and (>=) defined in terms of each other, with (<) and (>) defined in terms of (<=) and (>=) - in either way. Or vice versa, but if the definition is in terms of (==), then as I said it's better to let programmers define (<=) or (>=) and derive (<), (>) from them. If they are defined by negation, then we get more efficient total orders, but we must explicitly define both one of (<=), (>=) and one of (<), (>) for truly partial orders, or the results will be wrong.

Perhaps it's safer to have inefficient (<), (>) for total orders than wrong ones for partial orders, even if it means that for optimal performance of total orders one has to define (<=), (<) and (>):

    class Eq a => PartialOrd a where -- or Ord
        (<=), (>=), (<), (>) :: a -> a -> Bool
        -- Minimal definition: (<=) or (>=)
        a <= b = b >= a
        a >= b = b <= a
        a < b  = a <= b && a /= b
        a > b  = a >= b && a /= b

We could also require to define one of (<=), (>=), and one of (<), (>), for both partial and total orders. 
Everybody must think about whether he defines (<) as the negation of (>=) or not, and it's simpler for the common case of total orders - two definitions are needed. The structure of default definitions is more uniform:

    class Eq a => PartialOrd a where -- or Ord
        (<), (>), (<=), (>=) :: a -> a -> Bool
        -- Minimal definition: (<) or (>), (<=) or (>=)
        a < b  = b > a
        a > b  = b < a
        a <= b = b >= a
        a >= b = b <= a

This is my bet.

-- Marcin 'Qrczak' Kowalczyk

From mk167280@students.mimuw.edu.pl Mon Feb 12 11:17:02 2001 From: mk167280@students.mimuw.edu.pl (Marcin 'Qrczak' Kowalczyk) Date: Mon, 12 Feb 2001 12:17:02 +0100 (CET) Subject: Scalable and Continuous In-Reply-To: <200102121049.CAA24012@mail4.halcyon.com> Message-ID: 

On Mon, 12 Feb 2001, Ashley Yakeley wrote:

> class (Additive a) => Scalable a
>     scale :: Real -> a -> a -- equivalent to * (not sure of name for Real type)

Or times, which would require multiparameter classes.

    5 `times` "--" == "----------"
    5 `times` (\x -> x+1) === (\x -> x+5)

But this would suggest separating out Monoid from Additive - ugh. It makes sense to have zero and (+) for lists and functions a->a, but not negation. There is a class Monoid for ghc's nonstandard MonadWriter class. We would have (++) unified with (+) and concat unified with sum.

I'm afraid of making too many small classes. But it would perhaps be not so bad if one could define superclass' methods in subclasses, so that one can forget about the exact structure of classes and treat a bunch of classes as a single class if he wishes. It would have to be combined with compiler-inferred warnings about mutual definitions giving bottoms. 
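For illustration, such a `times` can be realized over a small monoid-style class. This is only a sketch; the class and operator names (Monoid', unit, <+>, Endo') are hypothetical, and the function example needs a newtype wrapper since a bare instance for a -> a is not Haskell 98:

```haskell
-- Hypothetical minimal monoid class; names are illustrative only.
class Monoid' a where
    unit  :: a
    (<+>) :: a -> a -> a

-- Lists form a monoid under concatenation (this is the (++)/(+) unification).
instance Monoid' [b] where
    unit  = []
    (<+>) = (++)

-- Functions a -> a form a monoid under composition, via a newtype.
newtype Endo' a = Endo' { runEndo' :: a -> a }

instance Monoid' (Endo' a) where
    unit                = Endo' id
    Endo' f <+> Endo' g = Endo' (f . g)

-- n `times` x combines n copies of x (n >= 0).
times :: Monoid' a => Int -> a -> a
times n x = foldr (<+>) unit (replicate n x)
```

With this, 5 `times` "--" yields "----------", and 5 `times` Endo' (\x -> x+1) behaves as (\x -> x+5) when unwrapped with runEndo'.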
-- Marcin 'Qrczak' Kowalczyk

From wli@holomorphy.com Mon Feb 12 11:24:08 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Mon, 12 Feb 2001 03:24:08 -0800 Subject: In hoc signo vinces (Was: Revamping the numeric classes) In-Reply-To: <3A87ADCF.D939A500@info.unicaen.fr>; from karczma@info.unicaen.fr on Mon, Feb 12, 2001 at 09:33:03AM +0000 References: <20010206152330.C20441@math.harvard.edu> <3A80DCA6.7796D64F@boutel.co.nz> <20010207135741.A23527@math.harvard.edu> <3A823080.9811C0F6@boutel.co.nz> <3A828201.8533014B@info.unicaen.fr> <3A83CBF7.6C635E94@info.unicaen.fr> <3A87ADCF.D939A500@info.unicaen.fr> Message-ID: <20010212032408.L641@holomorphy.com> 

In a later posting Marcin Kowalczyk says:

>> If (+) can be implicitly lifted to functions, then why not signum?
>> Note that I would lift neither signum nor (+). I don't feel the need.
>> ...

On Mon, Feb 12, 2001 at 09:33:03AM +0000, Jerzy Karczmarczuk wrote:

> I not only feel the need, but I feel that this is important that the
> additive structure in the codomain is inherited by functions. In a more
> specific context: the fact that linear functionals over a vector space
> form also a vector space, is simply *fundamental* for the quantum
> mechanics, for the crystallography, etc. You don't need to be a Royal
> Abstractor to see this.

I see this in a somewhat different light, though I'm in general agreement. What I'd like to do is to be able to effectively model module structures in the type system, and furthermore be able to simultaneously impose distinct module structures on a particular type. For instance, complex n-vectors are simultaneously C-modules and R-modules, and an arbitrary commutative ring R is at once a Z-module and an R-module. Linear functionals, which seem like common beasts (try a partially applied inner product), live in the mathematical structure Hom_R(M,R), which is once again an R-module, and, perhaps, by inheriting structure on R, an R'-module for various R'. 
So how does this affect Prelude design? Examining a small bit of code could be helpful:

    -- The group must be Abelian. I suppose anyone could think of this.
    class (AdditiveGroup g, Ring r) => LeftModule g r where
        (&) :: r -> g -> g

    instance AdditiveGroup g => LeftModule g Integer where
        n & x | n == 0 = zero
              | n < 0  = -((-n) & x)
              | n > 0  = x + (n-1) & x

... and we naturally acquire the sort of structure we're looking for. But this only shows a possible outcome, and doesn't motivate the implementation. What _will_ motivate the implementation is the sort of impact this has on various sorts of code:

(1) The fact that R is an AdditiveGroup immediately makes it a Z-module, so we have mixed-mode arithmetic by a different means from the usual implicit coercion.

(2) This sort of business handles vectors quite handily.

(3) The following tidbit of code immediately handles curried innerprods:

    instance (AdditiveGroup group, Ring ring)
            => LeftModule (group -> ring) ring where
        r & g = \g' -> r & g g'

(4) Why would we want to curry innerprods? I envision:

    type SurfaceAPoles foo = SomeGraph (SomeVector foo)

and then

    surface :: SurfaceAPoles bar
    innerprod v `fmap` normalsOf faces where faces = facesOf surface

(5) Why would we want to do arithmetic on these beasts now that we think we might need them at all? If we're doing things like determining the light reflected off of the various surfaces we will want to scale and add together the various beasties. 
Deferring the innerprod operation so we can do this is inelegant and perhaps inflexible compared to:

    lightSources :: [(SomeVector foo -> Intensity foo, Position)]
    lightSources = getLightSources boundingSomething

    reflection = sum $ map (\(f,p) -> getSourceWeight p * f) lightSources
    reflection `fmap` normalsOf faces where faces = facesOf surface

and now in the lightSources perhaps ambient light can be represented very conveniently, or at least the function type serves to abstract out the manner in which the orientation of a surface determines the amount of light reflected off it. (My apologies for whatever inaccuracies are happening with the optics here, it's quite far removed from my direct experience.)

Furthermore, within things like small interpreters, it is perhaps convenient to represent the semantic values of various expressions by function types. If one should care to define arithmetic on vectors and vector functions in the interpreted language, support in the source language allows a more direct approach. This would arise within solid modelling and graphics once again, as little languages are often used to describe objects, images, and the like.

How can we anticipate all the possible usages of pretty-looking vector and matrix algebra? I suspect graphics isn't the only place where linear algebra could arise. All sorts of differential equation models of physical phenomena, Markov models of state transition systems, even economic models at some point require linear algebra in their computational methods. It's something I at least regard as a fairly fundamental and important aspect of computation. And to me, that means that the full power of the language should be applied toward beautifying, simplifying, and otherwise enabling linear algebraic computations.

Cheers, Bill

P.S.: Please forgive the harangue-like nature of the post, it's the best I could do at 3AM. 
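As a runnable illustration of point (1) - every additive group is automatically a Z-module - here is a simplified single-parameter variant of the idea (the multiparameter version above needs extensions; the class, operator, and type names here are only a sketch, not from any proposal):

```haskell
-- Sketch: an additive group with a derived Integer action.
-- n & x adds together n copies of x, negating for n < 0.
class AdditiveGroup g where
    zero  :: g
    (^+^) :: g -> g -> g
    neg   :: g -> g

    (&) :: Integer -> g -> g
    n & x
        | n == 0    = zero
        | n < 0     = neg ((-n) & x)
        | otherwise = x ^+^ ((n - 1) & x)

-- A toy 2-vector type to exercise the default definition.
data V2 = V2 Integer Integer deriving (Eq, Show)

instance AdditiveGroup V2 where
    zero              = V2 0 0
    V2 a b ^+^ V2 c d = V2 (a + c) (b + d)
    neg (V2 a b)      = V2 (-a) (-b)
```

Usage: 3 & V2 1 2 evaluates to V2 3 6, i.e. integer scaling comes for free from the group structure alone.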
From mk167280@students.mimuw.edu.pl Mon Feb 12 11:36:50 2001 From: mk167280@students.mimuw.edu.pl (Marcin 'Qrczak' Kowalczyk) Date: Mon, 12 Feb 2001 12:36:50 +0100 (CET) Subject: In hoc signo vinces (Was: Revamping the numeric classes) In-Reply-To: <3A87ADCF.D939A500@info.unicaen.fr> Message-ID: On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote: > I not only feel the need, but I feel that this is important that the > additive structure in the codomain is inherited by functions. It could support only the basic arithmetic. It would not automatically lift an expression which uses (>) and if. It would be inconsistent to provide a shortcut for a specific case, where generally it must be explicitly lifted anyway. Note that it does make sense to lift (>) and if, only the type system does not permit it implicitly because a type is fixed to Bool. Lifting is so easy to do manually that I would definitely not constrain the whole Prelude class system only to have convenient lifting of basic arithmetic. When it happens that an instance of an otherwise sane class for functions makes sense, then OK, but nothing more. -- Marcin 'Qrczak' Kowalczyk From karczma@info.unicaen.fr Mon Feb 12 13:10:22 2001 From: karczma@info.unicaen.fr (Jerzy Karczmarczuk) Date: Mon, 12 Feb 2001 13:10:22 +0000 Subject: In hoc signo vinces (Was: Revamping the numeric classes) References: Message-ID: <3A87E0BE.5310B18B@info.unicaen.fr> Marcin Kowalczyk wrote: > > Jerzy Karczmarczuk wrote: > > > I not only feel the need, but I feel that this is important that the > > additive structure in the codomain is inherited by functions. > > It could support only the basic arithmetic. It would not automatically > lift an expression which uses (>) and if. It would be inconsistent to > provide a shortcut for a specific case, where generally it must be > explicitly lifted anyway. Note that it does make sense to lift (>) and if, > only the type system does not permit it implicitly because a type is fixed > to Bool. 
>
> Lifting is so easy to do manually that I would definitely not constrain
> the whole Prelude class system only to have convenient lifting of basic
> arithmetic. When it happens that an instance of an otherwise sane class
> for functions makes sense, then OK, but nothing more.

Sorry for quoting in extenso the full posting just to say: I haven't the slightest idea what you are talking about. -- but I want to avoid partial quotations and the misunderstandings resulting therefrom.

I don't want any automatic lifting, nor to *constrain* the Prelude class system. I want to be *able* to define mathematical operations upon objects which by their intrinsic nature permit so!

My goodness, I suspect really that despite plenty of opinions you express every day on this list you didn't really try to program something in Haskell IN A MATHEMATICALLY NON-TRIVIAL CONTEXT. I defined hundred times some special functions to add lists or records, to multiply a tree by a scalar (btw.: Jón Fairbairn proposes (.*), I have in principle nothing against, but this operator is used elsewhere, in other languages, CAML and Matlab; I use (*>) ). I am fed up with ad hoc solutions, knowing that correct mathematical hierarchies permit one to inherit plenty of subsumptions, e.g. the fact that x+x exists implies 2*x.

Thank you for reminding me that manual lifting is easy. In fact, everything is easy. Type-checking as well. Let's go back to assembler.

Jerzy Karczmarczuk

From mk167280@students.mimuw.edu.pl Mon Feb 12 12:34:55 2001 From: mk167280@students.mimuw.edu.pl (Marcin 'Qrczak' Kowalczyk) Date: Mon, 12 Feb 2001 13:34:55 +0100 (CET) Subject: In hoc signo vinces (Was: Revamping the numeric classes) In-Reply-To: <3A87E0BE.5310B18B@info.unicaen.fr> Message-ID: 

On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:

> I want to be *able* to define mathematical operations upon objects
> which by their intrinsic nature permit so!

You can't do it in Haskell as it stands now, no matter what the Prelude would be. 
For example I would say that with the definition

    abs x = if x >= 0 then x else -x

it's obvious how to obtain abs :: ([Int]->Int) -> ([Int]->Int): apply the definition pointwise. But it will never work in Haskell, unless we changed the type rules for if and the type of the result of (>=).

You are asking for letting

    abs x = max x (-x)

work on functions. OK, in this particular case it can be made to work by making appropriate instances, but it's because this is a special case where all intermediate types are appropriately polymorphic. This technique cannot work in general, as the previous example shows. So IMHO it's better not to try to pretend that functions can be implicitly lifted. Better to provide as convenient a way as possible of manually lifting arbitrary functions, so it doesn't matter whether they have a fixed Integer in the result or not.

You are asking for an impossible thing.

> I defined hundred times some special functions to add lists or
> records, to multiply a tree by a scalar (btw.: Jón Fairbairn proposes
> (.*), I have in principle nothing against, but this operator is used
> elsewhere, in other languages, CAML and Matlab; I use (*>) ).

Please show a concrete proposal how the Prelude classes could be improved.

-- Marcin 'Qrczak' Kowalczyk

From dlb@wash.averstar.com Mon Feb 12 12:53:42 2001 From: dlb@wash.averstar.com (David Barton) Date: Mon, 12 Feb 2001 07:53:42 -0500 Subject: A sample revised prelude for numeric classes In-Reply-To: <20010211224615.J641@holomorphy.com> (message from William Lee Irwin III on Sun, 11 Feb 2001 22:46:15 -0800) References: <200102120616.WAA08490@mail4.halcyon.com> <20010211224615.J641@holomorphy.com> Message-ID: <200102121253.HAA01778@hudson.wash.averstar.com> 

This is pretty rare, and it's also fairly tough to represent points in spaces of fractional dimension. 
I'll bet the sorts of complications necessary to do so would immediately exclude it from consideration in the design of a standard library, but nevertheless would be interesting to hear about. Can you comment further on this?

Even without fractals, there are cases where weird dimensions come up (I ran across this in my old MHDL (microwave) days). Square root volts is the example that was constantly thrown in my face. It doesn't really mess up the model that much; you just have to use rational dimensions rather than integer dimensions. Everything else works out. I have *not* come across a case where real dimensions are necessary, so equality still works.

Dave Barton <*> dlb@averstar.com )0( http://www.averstar.com/~dlb

From qrczak@knm.org.pl Mon Feb 12 14:12:02 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 12 Feb 2001 14:12:02 GMT Subject: A sample revised prelude for numeric classes References: Message-ID: 

Mon, 12 Feb 2001 12:04:39 +0100 (CET), Marcin 'Qrczak' Kowalczyk pisze:

> This is my bet.

I changed my mind:

    class Eq a => PartialOrd a where -- or Ord
        (<), (>), (<=), (>=) :: a -> a -> Bool
        -- Minimal definition: (<) or (<=).
        -- For partial order (<=) is required.
        -- For total order (<) is recommended for efficiency.
        a < b  = a <= b && a /= b
        a > b  = b < a
        a <= b = not (b < a)
        a >= b = b <= a

-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK

From karczma@info.unicaen.fr Mon Feb 12 16:40:06 2001 From: karczma@info.unicaen.fr (Jerzy Karczmarczuk) Date: Mon, 12 Feb 2001 16:40:06 +0000 Subject: In hoc signo vinces (Was: Revamping the numeric classes) References: Message-ID: <3A8811E6.B4DE31@info.unicaen.fr> 

Marcin Kowalczyk continues:

> On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:
>
> > I want to be *able* to define mathematical operations upon objects
> > which by their intrinsic nature permit so!
>
> You can't do it in Haskell as it stands now, no matter what the Prelude
> would be.
>
> For example I would say that with the definition
>     abs x = if x >= 0 then x else -x
> it's obvious how to obtain abs :: ([Int]->Int) -> ([Int]->Int): apply the
> definition pointwise.
>
> But it will never work in Haskell, unless we changed the type rules for if
> and the type of the result of (>=).
>
> You are asking for letting
>     abs x = max x (-x)
> work on functions. OK, in this particular case it can be made to work ....

Why don't you try from time to time to attempt to understand what other people want? And wait, say 2 hours, before responding?

I DON'T WANT max TO WORK ON FUNCTIONS. I never did. I will soon (because I am writing a graphical package where max serves to intersect implicit graphical objects) need that, but for very specific functions which represent textures, but NOT in general.

I repeat for the last time, that I want to have those operations which are *implied* by the mathematical properties. And anyway, if you replace x>=0 by x>=zero with an appropriate zero, this should work as well. I want only that the Prelude avoids spurious dependencies. This is the way I program in Clean, where there is no Num, and (+), (*), zero, abs, etc. constitute classes by themselves.

So, when you say:

> You are asking for an impossible thing.

My impression is that what is impossible is your way of interpreting/understanding the statements (and/or desiderata) of other people.

> > I defined hundred times some special functions to add lists or
> > records, to multiply a tree by a scalar (btw.: Jón Fairbairn proposes
> > (.*), I have in principle nothing against, but this operator is used
> > elsewhere, in other languages, CAML and Matlab; I use (*>) ).
>
> Please show a concrete proposal how Prelude classes could be improved.

(Why do you precede your query by this citation? What do you have to say here about the syntax proposed by Jón Fairbairn, or whatever??) I am a Haskell USER. I have no ambition to save the world. 
The "proposal" has been presented in 1995 in Nijmegen (FP in education). Actually, it hasn't, I concentrated on lazy power series etc., and the math oriented prelude has been mentioned casually. Jeroen Fokker presented similar ideas, implemented differently. If you have nothing else to do (but only in this case!) you may find the modified prelude called math.hs for Hugs (which needs a modified prelude.hs exporting primitives) in http://users.info.unicaen.fr/~karczma/humat/ This is NOT a "public proposal" and I *don't want* your public comments on it. If you want to be nice, show me some of *your* Haskell programs. Jerzy Karczmarczuk Caen, France From dpt@math.harvard.edu Mon Feb 12 16:59:04 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Mon, 12 Feb 2001 11:59:04 -0500 Subject: A sample revised prelude for numeric classes In-Reply-To: <3A876585.6D4B2B8A@boutel.co.nz>; from brian@boutel.co.nz on Mon, Feb 12, 2001 at 05:24:37PM +1300 References: <20010211174215.A2033@math.harvard.edu> <3A876585.6D4B2B8A@boutel.co.nz> Message-ID: <20010212115904.A4259@math.harvard.edu> On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote: > > Thus these laws should be interpreted as guidelines rather than > > absolute rules. In particular, the compiler is not allowed to use > > them. Unless stated otherwise, default definitions should also be > > taken as laws. > > Including laws was discussed very early in the development of the > language, but was rejected. IIRC Miranda had them. The argument against > laws was that their presence might mislead users into the assumption > that they did hold, yet if they were not enforcable then they might not > hold and that could have serious consequences. Also, some laws do not > hold in domains with bottom, e.g. a + (negate a) === 0 is only true if a > is not bottom. These are good points, but I still feel that laws can be helpful as guidelines, as long as they are not interpreted as anything more. 
For instance, the Haskell Report does give laws for Monads and quotRem, although they, too, are not satisfied in the presence of bottom, etc. (Is that right?) Writing out the laws lets me say, for instance, whether users of Num and Fractional should expect multiplication to be commutative. (No and yes, respectively. I require Fractional to be commutative mainly because common usage does not use either '/' or 'reciprocal' to indicate inverse in a non-commutative ring.)

> > class (Additive a) => Num a where
> >     (*) :: a -> a -> a
> >     one :: a
> >     fromInteger :: Integer -> a
> >
> >     -- Minimal definition: (*), one
> >     fromInteger 0 = zero
> >     fromInteger n | n < 0 = negate (fromInteger (-n))
> >     fromInteger n | n > 0 = reduceRepeated (+) one n
>
> This definition requires both Eq and Ord!!!

Ah, but only Eq and Ord for Integer, which (as a built-in type) has Eq and Ord instances. The type signature for reduceRepeated is

    reduceRepeated :: (a -> a -> a) -> a -> Integer -> a

> As does this one:
>
> > class (Num a, Additive b) => Powerful a b where
> >     (^) :: a -> b -> a
> > instance (Num a) => Powerful a (Positive Integer) where
> >     a ^ 0 = one
> >     a ^ n = reduceRepeated (*) a n
> > instance (Fractional a) => Powerful a Integer where
> >     a ^ n | n < 0 = recip (a ^ (negate n))
> >     a ^ n = a ^ (positive n)

Likewise here.

> and several others further down.

I tried to be careful not to use Eq and Ord for generic types when not necessary, but I may have missed some. Please let me know. (Oh, I just realised that Euclid's algorithm requires Eq. Oops. That's what I get for not writing it out explicitly. I'll have to revisit the Integral part of the hierarchy.)

> > (4) In some cases, the hierarchy is not finely-grained enough:
> >     operations that are often defined independently are lumped
> >     together. For instance, in a financial application one might want
> >     a type "Dollar", or in a graphics application one might want a
> >     type "Vector". 
> >     It is reasonable to add two Vectors or Dollars,
> >     but not, in general, reasonable to multiply them. But the
> >     programmer is currently forced to define a method for (*) when she
> >     defines a method for (+).
>
> Why do you stop at allowing addition on Dollars and not include
> multiplication by a scalar? Division is also readily defined on Dollar
> values, with a scalar result, but this, too, is not available in the
> proposal.

I will allow multiplication by a scalar; it's just not in the classes I've written down so far. (And may not be in the Prelude.) Thanks for reminding me about division. I had forgotten about that. It bears some thought.

> Having Units as types, with the idea of preventing adding Apples to
> Oranges, or Dollars to Roubles, is a venerable idea, but is not in
> widespread use in actual programming languages. Why not?

That's a good question. I don't know. One cheeky answer would be for lack of a powerful enough type system (allowing you to, e.g., work on generic units when you want to), but I don't know if that is actually true. Don't modern HP calculators use units consistently?

> Vectors, too, can be multiplied, producing both scalar- and
> vector-products.

Yes, but these are really different operations and should be represented with different symbols. Neither one is associative, for instance.

> It seems that you are content with going as far as the proposal permits,
> though you cannot define, even within the revised Class system, all the
> common and useful operations on these types. This is the same situation
> as with Haskell as it stands. The question is whether the (IMHO)
> marginal increase in flexibility is worth the cost.

I believe that with this structure as base, the other common and useful operations can easily be added on top. But I should go ahead and do it. 
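The proposal leaves reduceRepeated unspecified; here is one plausible sketch matching the signature given above, using repeated squaring so that only a logarithmic number of combining steps is needed (it assumes the operation is associative and n >= 1):

```haskell
-- Combine n copies of x with an associative operation f, by squaring.
-- E.g. reduceRepeated (+) one n is fromInteger n, and
--      reduceRepeated (*) a n is a^n.
reduceRepeated :: (a -> a -> a) -> a -> Integer -> a
reduceRepeated f x n
    | n == 1    = x
    | even n    = reduceRepeated f (f x x) (n `div` 2)
    | otherwise = f x (reduceRepeated f (f x x) (n `div` 2))
```

With this shape, reduceRepeated (*) a 1000 performs far fewer multiplications than the naive n-1 of them, which matters for types where (*) is expensive.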
Best, Dylan Thurston

From qrczak@knm.org.pl Mon Feb 12 17:20:43 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 12 Feb 2001 17:20:43 GMT Subject: Revamping the numeric classes References: <200102080800.VAA36591@waytogo.peace.co.nz> <14979.29205.362496.555808@waytogo.peace.co.nz> <14983.2832.508757.673931@waytogo.peace.co.nz> Message-ID: 

Mon, 12 Feb 2001 10:58:40 +1300, Tom Pledger pisze:

> | Approach it differently. z is Double, (x+y) is added to it, so
> | (x+y) must have type Double.
>
> That's a restriction I'd like to avoid. Instead: ...so the most
> specific common supertype of Double and (x+y)'s type must support
> addition.

In general there is no such thing as (x+y)'s type considered separately from this usage. The use of (x+y) as one of the arguments of this addition influences the type determined for it.

Suppose x and y are lambda-bound variables: then you don't know their types yet. Currently this addition determines their types: they must be the same as the type of z. With your rules the type of \x y -> x + y is not

    (some context) => a -> a -> a

but

    (some context) => a -> b -> c

It leads to horrible ambiguities unless the context is able to determine some types exactly (which is currently true only for fundeps).

> | Why is your approach better than mine?
>
> It used a definition of (+) which was a closer fit for the types of x
> and y.

But it used a worse definition of the outer (+): mine was Double -> Double -> Double and yours was Int -> Double -> Double with the implicit conversion of Int to Double.

> Yes, I rashly glossed over the importance of having well-defined most
> specific common supertype (MSCS) and least specific common subtype
> (LSCS) operators in a subtype lattice.

They are not always defined. Suppose the following holds:

    Word32 `Subtype` Double
    Word32 `Subtype` Integer
    Int32  `Subtype` Double
    Int32  `Subtype` Integer

What is the MSCS of Word32 and Int32? What is the LSCS of Double and Integer? 
> Anyway, since neither of us is about to have a change of mind, and > nobody else is showing an interest in this branch of the discussion, > it appears that the most constructive thing for me to do is return to > try-to-keep-quiet-about-subtyping-until-I've-done-it-in-THIH mode. IMHO it's impossible to do. -- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From dpt@math.harvard.edu Mon Feb 12 17:24:53 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Mon, 12 Feb 2001 12:24:53 -0500 Subject: Clean numeric system? In-Reply-To: <3A8811E6.B4DE31@info.unicaen.fr>; from karczma@info.unicaen.fr on Mon, Feb 12, 2001 at 04:40:06PM +0000 References: <3A8811E6.B4DE31@info.unicaen.fr> Message-ID: <20010212122453.D4259@math.harvard.edu> On Mon, Feb 12, 2001 at 04:40:06PM +0000, Jerzy Karczmarczuk wrote: > This is the way I program in Clean, where there is no Num, and (+), (*), > zero, abs, etc. constitute classes by themselves. ... I've heard Clean mentioned before in this context, but I haven't found the Clean numeric class system described yet. Can you send me a pointer to their class system, or just give me a description? Does each operation really have its own class? That seems slightly silly. Are the (/) and 'recip' equivalents independent, and independent of (*) as well? 
Best, Dylan Thurston

From dpt@math.harvard.edu Mon Feb 12 18:15:14 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Mon, 12 Feb 2001 13:15:14 -0500 Subject: A sample revised prelude for numeric classes In-Reply-To: ; from qrczak@knm.org.pl on Mon, Feb 12, 2001 at 07:24:31AM +0000 References: <20010211174215.A2033@math.harvard.edu> <20010211222753.A2561@math.harvard.edu> Message-ID: <20010212131514.E4259@math.harvard.edu> 

On Mon, Feb 12, 2001 at 07:24:31AM +0000, Marcin 'Qrczak' Kowalczyk wrote:

> Sun, 11 Feb 2001 22:27:53 -0500, Dylan Thurston pisze:
> > Reading this, it occurred to me that you could explicitly declare an
> > instance of Powerful Integer Integer and have everything else work.
> No, because it overlaps with Powerful a Integer (the constraint on a
> doesn't matter for determining if it overlaps).

Point. Thanks. Slightly annoying.

> > > Then the second argument of (^) is always arbitrary RealIntegral,
> >
> > Nit: the second argument should be an Integer, not an arbitrary
> > RealIntegral.
>
> Of course not. (2 :: Integer) ^ (i :: Int) makes perfect sense.

But for arbitrary RealIntegrals it need not make sense. Please do not assume that

    toInteger :: RealIntegral a => a -> Integer
    toInteger n | n < 0 = negate (toInteger (negate n))
    toInteger 0 = 0
    toInteger n | n > 0 = 1 + toInteger (n-1)

(or the more efficient version using 'even') terminates (in principle) for all RealIntegrals, at least with the definition as it stands in my proposal. Possibly toInteger should be added; then (^) could have the type you suggest. For usability issues, I suppose it should. (E.g., users will want to use Int ^ Int.)

OK, I'm convinced of the necessity of toInteger (or an equivalent). I'll fit it in. 
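Dylan only alludes to "the more efficient version using 'even'"; a plausible sketch is the following, written here against the standard Integral class so it runs with today's Prelude (the proposal's version would use RealIntegral instead; the primed name just avoids clashing with the Prelude's toInteger):

```haskell
-- Halving via even/div instead of counting down by one, so the number
-- of recursive steps is logarithmic in n rather than linear.
toInteger' :: Integral a => a -> Integer
toInteger' n
    | n < 0     = negate (toInteger' (negate n))
    | n == 0    = 0
    | even n    = 2 * toInteger' (n `div` 2)
    | otherwise = 1 + toInteger' (n - 1)
```

The termination worry in the message still applies: this only terminates if repeated halving reaches zero, which a pathological RealIntegral instance need not guarantee.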
Best, Dylan Thurston From dpt@math.harvard.edu Mon Feb 12 18:23:53 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Mon, 12 Feb 2001 13:23:53 -0500 Subject: A sample revised prelude for numeric classes In-Reply-To: <20010211211753.H641@holomorphy.com>; from wli@holomorphy.com on Sun, Feb 11, 2001 at 09:17:53PM -0800 References: <20010211174215.A2033@math.harvard.edu> <20010211184842.F641@holomorphy.com> <20010211225629.B2561@math.harvard.edu> <20010211211753.H641@holomorphy.com> Message-ID: <20010212132353.F4259@math.harvard.edu> On Sun, Feb 11, 2001 at 09:17:53PM -0800, William Lee Irwin III wrote: > Consider taking of the residue of a truly infinite member of Z[[x]] > mod an ideal generated by a polynomial, e.g. 1/(1-x) mod (1+x^2). > You can take the residue of each term of 1/(1-x), so x^(2n) -> (-1)^n > and x^(2n+1) -> (-1)^n x, but you end up with an infinite number of > (nonzero!) residues to add up and hence encounter the troubles with > processes not being finite that I mentioned. Sorry, isn't (1+x^2) invertible in Z[[x]]? > I think it's nice to have the Cauchy principal value versions of things > floating around. I know at least that I've had call for using the CPV > of exponentiation (and it's not hard to contrive an implementation), > but I'm almost definitely an atypical user. (Note, (**) does this today.) Does Cauchy Principal Value have a specific definition I should know? The Haskell report refers to the APL language report; do you mean that definition? For the Complex class, that should be the choice. > I neglected here to add in the assumption that (<=) was a total relation, > I had in mind antisymmetry of (<=) in posets so that element isomorphism > implies equality. 
Introducing a Poset class where elements may be
> incomparable appears to butt against some of the bits where Bool is
> hardwired into the language, at least where one might attempt to use a
> trinary logical type in place of Bool to denote the result of an
> attempted comparison.

I'm still agnostic on the Poset issue, but as an aside, let me mention that "Maybe Bool" works very well as a trinary logical type. "liftM2 (&&)" does the correct trinary and, for instance.

> On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
> > But to define <= in terms of meet and join you already need Eq!
> >
> > x <= y === meet x y == y
>
> I don't usually see this definition of (<=), and it doesn't seem like
> the natural way to go about defining it on most machines. The notion
> of the partial (possibly total) ordering (<=) seems to be logically
> prior to that of the meet to me. The containment usually goes:

It may be logically prior, but computationally it's not... Note that the axioms for lattices can be stated either in terms of the partial ordering, or in terms of meet and join.

(In a completely fine-grained ordering hierarchy, I would have the equation I gave above as a default definition for <=, with the expectation that most users would want to override it. Compare my fromInteger default definition.)
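[Editor's note: the "Maybe Bool" trinary logic mentioned above can be made concrete as follows; the name and3 is illustrative. One caveat worth recording: monadic lifting makes Nothing ("unknown") absorbing in both arguments, so `and3 Nothing (Just False)` is Nothing, whereas Kleene's strong three-valued logic would give Just False.]

```haskell
import Control.Monad (liftM2)

-- Three-valued conjunction over Maybe Bool, reading Nothing as "unknown".
-- Plain monadic lifting: any unknown argument makes the result unknown
-- (this differs from Kleene logic, where unknown AND false = false).
and3 :: Maybe Bool -> Maybe Bool -> Maybe Bool
and3 = liftM2 (&&)

main :: IO ()
main = mapM_ print
  [ and3 (Just True) (Just False)  -- Just False
  , and3 (Just True) Nothing       -- Nothing
  , and3 Nothing (Just False)      -- Nothing, not Just False
  ]
```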
Best, Dylan Thurston

From dpt@math.harvard.edu Mon Feb 12 18:51:54 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Mon, 12 Feb 2001 13:51:54 -0500 Subject: Typing units correctly In-Reply-To: <200102120908.KAA01548@cuchulain.it.kth.se>; from lisper@it.kth.se on Mon, Feb 12, 2001 at 10:08:02AM +0100 References: <20010211174215.A2033@math.harvard.edu> <3A876585.6D4B2B8A@boutel.co.nz> <14983.29211.220380.337502@waytogo.peace.co.nz> <200102120908.KAA01548@cuchulain.it.kth.se> Message-ID: <20010212135154.G4259@math.harvard.edu>

On Mon, Feb 12, 2001 at 10:08:02AM +0100, Bjorn Lisper wrote:
> >The main complication is that the type system needs to deal with
> >integer exponents of dimensions, if it's to do the job well.
> Andrew Kennedy has basically solved this for higher order languages with HM
> type inference. He made an extension of the ML type system with dimensional
> analysis a couple of years back. Sorry I don't have the references at hand
> but he had a paper in ESOP I think.

The papers I could find (e.g., http://citeseer.nj.nec.com/kennedy94dimension.html, "Dimension Types") mention extensions to ML. I wonder if it is possible to work within the Haskell type system, which is richer than ML's type system. The main problem I see is that the dimensions should commute: Length * Time = Time * Length. I can't think of how to represent Length, Time, and * as types, type constructors, or whatnot so that that would be true. You could put in functions to explicitly do the conversion, but that obviously gets impractical.

Any such system would probably not be able to type (^), since the output type depends on the exponent. I think that is acceptable. I think you would also need a finer-grained hierarchy in the Prelude (finer than in my proposal) to get this to work.

> It would be quite interesting to have a version of Haskell that would allow
> the specification of differential equations, so one could make use of all
> the good features of Haskell for this.
This would allow the unified > specification of systems that consist both of physical and computational > components. This niche is now being filled by a mix of special-purpose > modeling languages like Modelica and Matlab/Simulink for the physical part, > and SDL and UML for control parts. The result is likely to be a mess, in > particular when these specifications are to be combined into full system > descriptions. My hope is that you wouldn't need a special version of Haskell. Best, Dylan Thurston From Tom.Pledger@peace.com Mon Feb 12 20:59:37 2001 From: Tom.Pledger@peace.com (Tom Pledger) Date: Tue, 13 Feb 2001 09:59:37 +1300 Subject: Typing units correctly In-Reply-To: <20010212135154.G4259@math.harvard.edu> References: <20010211174215.A2033@math.harvard.edu> <3A876585.6D4B2B8A@boutel.co.nz> <14983.29211.220380.337502@waytogo.peace.co.nz> <200102120908.KAA01548@cuchulain.it.kth.se> <20010212135154.G4259@math.harvard.edu> Message-ID: <14984.20153.229218.184472@waytogo.peace.co.nz> Dylan Thurston writes: | Any such system would probably not be able to type (^), since the | output type depends on the exponent. I think that is acceptable. In other words, the first argument to (^) would have to be dimensionless? I agree. So would the arguments to trig functions, etc. Ashley Yakeley writes: | Very occasionally non-integer or 'fractal' exponents of dimensions | are useful. For instance, geographic coastlines can be measured in | km ^ n, where 1 <= n < 2. This doesn't stop the CIA world factbook | listing all coastline lengths in straight kilometres, however. David Barton writes: | Even without fractals, there are cases where weird dimensions come | up (I ran across this in my old MHDL (microwave) days). Square | root volts is the example that was constantly thrown in my face. In both of those cases, the apparent non-integer dimension is accompanied by a particular unit (km, V). 
So, could they equally well be handled by stripping away the units and exponentiating a dimensionless number? For example: (x / 1V) ^ y

Regards, Tom

From jhf@lanl.gov Mon Feb 12 21:13:38 2001 From: jhf@lanl.gov (Joe Fasel) Date: Mon, 12 Feb 2001 14:13:38 -0700 (MST) Subject: In hoc signo vinces (Was: Revamping the numeric classes) In-Reply-To: <20010209125512.D960@holomorphy.com> Message-ID:

On 09-Feb-2001 William Lee Irwin III wrote:
| Matrix rings actually manage to expose the inappropriateness of signum
| and abs' definitions and relationships to Num very well:
|
| class (Eq a, Show a) => Num a where
|     (+), (-), (*) :: a -> a -> a
|     negate        :: a -> a
|     abs, signum   :: a -> a
|     fromInteger   :: Integer -> a
|     fromInt       :: Int -> a  -- partain: Glasgow extension
|
| Pure arithmetic ((+), (-), (*), negate) works just fine.
|
| But there are no good injections to use for fromInteger or fromInt,
| the type of abs is wrong if it's going to be a norm, and it's not
| clear that signum makes much sense.

For fromInteger, fromInt, and abs, the result should be a scalar matrix. For the two coercions, I don't think there would be much controversy about this. I agree that it would be nice if abs could return a scalar, but this requires multiparameter classes, so we have to make do with a scalar matrix. We already have this problem with complex numbers: it might be nice if the result of abs were real.

signum does make sense. You want abs and signum to obey these laws:

    x == abs x * signum x
    abs (signum x) == (if abs x == 0 then 0 else 1)

Thus, having fixed an appropriate matrix norm, signum is a normalization function, just as with reals and complexes. If we make the leap to multiparameter classes, I think this is the signature we want:

    class (Eq a, Show a) => Num a b | a -> b where
        (+), (-), (*) :: a -> a -> a
        negate        :: a -> a
        abs           :: a -> b
        signum        :: a -> a
        scale         :: b -> a -> a
        fromInteger   :: Integer -> a
        fromInt       :: Int -> a

Here, b is the type of norms of a.
Instead of the first law above, we have

    x == scale (abs x) (signum x)

All this, of course, is independent of whether we want a more proper algebraic class hierarchy, with (+) introduced by Monoid, negate and (-) by Group, etc.

Cheers, --Joe

Joseph H. Fasel, Ph.D. email: jhf@lanl.gov Technology Modeling and Analysis phone: +1 505 667 7158 University of California fax: +1 505 667 2960 Los Alamos National Laboratory post: TSA-7 MS F609; Los Alamos, NM 87545

From wli@holomorphy.com Mon Feb 12 21:31:29 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Mon, 12 Feb 2001 13:31:29 -0800 Subject: In hoc signo vinces (Was: Revamping the numeric classes) In-Reply-To: ; from jhf@lanl.gov on Mon, Feb 12, 2001 at 02:13:38PM -0700 References: <20010209125512.D960@holomorphy.com> Message-ID: <20010212133129.M641@holomorphy.com>

On Mon, Feb 12, 2001 at 02:13:38PM -0700, Joe Fasel wrote:
> For fromInteger, fromInt, and abs, the result should be a scalar matrix.
> For the two coercions, I don't think there would be much controversy
> about this. I agree that it would be nice if abs could return a
> scalar, but this requires multiparameter classes, so we have to make
> do with a scalar matrix.

I'm not a big fan of this approach. I'd like to see at least some attempt to statically type dimensionality going on, and that flies in the face of it. Worse yet, coercing integers to matrices is likely to be a programmer error.

On Mon, Feb 12, 2001 at 02:13:38PM -0700, Joe Fasel wrote:
> signum does make sense. You want abs and signum to obey these laws:
>
> x == abs x * signum x
> abs (signum x) == (if abs x == 0 then 0 else 1)
>
> Thus, having fixed an appropriate matrix norm, signum is a normalization
> function, just as with reals and complexes.

This works fine for matrices of reals; for matrices of integers and polynomials over integers and the like, it breaks down quite quickly.
It's unclear that in domains like that, the norm would be meaningful (in the sense of something we might want to compute) or that it would have a type that meshes well with a class hierarchy we might want to design. Matrices over Z/nZ for various n and Galois fields, and perhaps various other unordered algebraically incomplete rings explode this further still. On Mon, Feb 12, 2001 at 02:13:38PM -0700, Joe Fasel wrote: > If we make the leap to multiparameter classes, I think this is > the signature we want: Well, nothing is going to satisfy everyone. It's pretty reasonable, though. Cheers, Bill From jhf@lanl.gov Mon Feb 12 21:51:52 2001 From: jhf@lanl.gov (Joe Fasel) Date: Mon, 12 Feb 2001 14:51:52 -0700 (MST) Subject: In hoc signo vinces (Was: Revamping the numeric classes) In-Reply-To: <20010212133129.M641@holomorphy.com> Message-ID: On 12-Feb-2001 William Lee Irwin III wrote: | On Mon, Feb 12, 2001 at 02:13:38PM -0700, Joe Fasel wrote: |> signum does make sense. You want abs and signum to obey these laws: |> |> x == abs x * signum x |> abs (signum x) == (if abs x == 0 then 0 else 1) |> |> Thus, having fixed an appropriate matrix norm, signum is a normalization |> function, just as with reals and complexes. | | This works fine for matrices of reals, for matrices of integers and | polynomials over integers and the like, it breaks down quite quickly. | It's unclear that in domains like that, the norm would be meaningful | (in the sense of something we might want to compute) or that it would | have a type that meshes well with a class hierarchy we might want to | design. Matrices over Z/nZ for various n and Galois fields, and perhaps | various other unordered algebraically incomplete rings explode this | further still. Fair enough. So, the real question is not whether signum makes sense, but whether abs does. I guess the answer is that it does for matrix rings over division rings. Cheers, --Joe Joseph H. Fasel, Ph.D. 
email: jhf@lanl.gov Technology Modeling and Analysis phone: +1 505 667 7158 University of California fax: +1 505 667 2960 Los Alamos National Laboratory post: TSA-7 MS F609; Los Alamos, NM 87545

From wli@holomorphy.com Mon Feb 12 22:10:20 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Mon, 12 Feb 2001 14:10:20 -0800 Subject: A sample revised prelude for numeric classes In-Reply-To: <20010212132353.F4259@math.harvard.edu>; from dpt@math.harvard.edu on Mon, Feb 12, 2001 at 01:23:53PM -0500 References: <20010211174215.A2033@math.harvard.edu> <20010211184842.F641@holomorphy.com> <20010211225629.B2561@math.harvard.edu> <20010211211753.H641@holomorphy.com> <20010212132353.F4259@math.harvard.edu> Message-ID: <20010212141020.N641@holomorphy.com>

On Sun, Feb 11, 2001 at 09:17:53PM -0800, William Lee Irwin III wrote:
>> mod an ideal generated by a polynomial, e.g. 1/(1-x) mod (1+x^2).

On Mon, Feb 12, 2001 at 01:23:53PM -0500, Dylan Thurston wrote:
> Sorry, isn't (1+x^2) invertible in Z[[x]]?

You've caught me asleep at the wheel again. Try 1/(1-x) mod 2+x^2. Then

    x^(2n)   -> (-2)^n
    x^(2n+1) -> (-2)^n x

so our process isn't finite again, and as 2 is not a unit in Z, 2+x^2 is not a unit in Z[[x]].

On Sun, Feb 11, 2001 at 09:17:53PM -0800, William Lee Irwin III wrote:
>> I think it's nice to have the Cauchy principal value versions of things
>> floating around. I know at least that I've had call for using the CPV
>> of exponentiation (and it's not hard to contrive an implementation),
>> but I'm almost definitely an atypical user. (Note, (**) does this today.)

On Mon, Feb 12, 2001 at 01:23:53PM -0500, Dylan Thurston wrote:
> Does Cauchy Principal Value have a specific definition I should know?
> The Haskell report refers to the APL language report; do you mean that
> definition?
The Cauchy principal value of an integral seems fairly common in complex analysis, and so what I mean by the CPV of exponentiation is using the principal value of the logarithm in the definition w^z = exp (z * log w). Essentially, given an integral from one point to another in the complex plane (where the points can be e^(i*\gamma)*\infty), the Cauchy principal value specifies precisely which contour to use, for if the function has a singularity, connecting the endpoints by a contour that loops about those singularities a number of times will affect the value of the integral. This is fairly standard complex analysis; are you sure you can't dig it up somewhere? It basically says to connect the endpoints of integration by a straight line unless singularities occur along that line, and in that case, to shrink a semicircle about the singularities; the limit is the Cauchy principal value. More precise definitions are lengthier.

On Mon, Feb 12, 2001 at 01:23:53PM -0500, Dylan Thurston wrote:
> I'm still agnostic on the Poset issue, but as an aside, let me mention
> that "Maybe Bool" works very well as a trinary logical type. "liftM2
> &&" does the correct trinary and, for instance.

I can only argue against this on aesthetic grounds. (<=) and cousins are not usually typed so as to return Maybe Bool.

On Mon, Feb 12, 2001 at 01:23:53PM -0500, Dylan Thurston wrote:
> It may be logically prior, but computationally it's not... Note that
> the axioms for lattices can be stated either in terms of the partial
> ordering, or in terms of meet and join.

I was under the impression the distinction between lattices and partial orders was the existence of the meet and join operations. Actually, I think my argument centers on the antisymmetry of the relation (<=) being used to define computational equality in some instances. Can I think of any good examples?
Well, a contrived one would be that on types, if there is a substitution S such that S t = t' (structurally), where we might say that t' <= t, and also a substitution S' such that S' t' = t (again, structurally), where we might say that t <= t', then we have t == t' (semantically). Yes, I realize this is not a great way to go about this.

Another (perhaps contrived) example would be ordering expression trees by the flat CPO bottom <= _ on constants of a signature, and the natural business where if the trees differ in structure, they're incomparable, except where bottom would be compared with something non-bottom, in which case (<=) holds. In this case, we might want equality to be that two expression trees t, t' are equal iff there are sequences of reductions r, r' such that r t = r' t' (again, structurally).

You might argue that the notion of structural equality underlying these is some sort of grounds for the dependency, and I think that hits on the gray area where design decisions come in. What I'm hoping the examples demonstrate is that the mathematical equality and ordering (in some metalanguage) underlie both of the computational notions, and that the computational notions may very well reverse or break the dependency

    class Eq t => Ord t where ...

especially when the structure of the data does not reflect the equivalence relation we'd like (==) to denote.

Cheers, Bill

From wli@holomorphy.com Mon Feb 12 22:38:25 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Mon, 12 Feb 2001 14:38:25 -0800 Subject: Primitive types and Prelude shenanigans In-Reply-To: ; from mk167280@students.mimuw.edu.pl on Mon, Feb 12, 2001 at 11:00:02AM +0100 References: <20010212004357.K641@holomorphy.com> Message-ID: <20010212143825.O641@holomorphy.com>

On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote:
> It depends on the implementation and IMHO it would be bad to require
> a particular implementation for no reason.
For example ghc uses the gmp > library and does not implement Integers in terms of Naturals; gmp handles > negative numbers natively. I'm aware natural numbers are not a primitive data type within Haskell; I had the idea in mind that for my own experimentation I might add them. On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote: > You can define it yourself by wrapping not-necessarily-positive types > if you feel the need. Most of the time there is no need because Haskell > has no subtyping - they would be awkward to use together with present > types which include negative numbers. Perhaps I should clarify my intentions: The various symbols for integer literals all uniformly denote the values fromInteger #n where #n is some monotype or other. What I had in mind was (again, for my own wicked purposes) treating specially the symbols 0 and 1 so that the implicit coercions going on are for the type classes where additive and multiplicative identities exist, then overloading the positive symbols so that the implicit coercion is instead fromNatural, and then leaving the negative symbols (largely) as they are. This is obviously too radical for me to propose it as anything, I intend to only do it as an experiment or perhaps for my own usage (though if others find it useful, they can have it). On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote: > Modules with names beginning with Prel define approximately everything > what Prelude includes and everything with magic support in the compiler. I've not only already found these, but in attempting to substantially alter them I've run into the trouble below: On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote: > PrelGHC defines primops that are hardwired in the compiler, and PrelBase > is a basic module from which most things begin. 
In particular Bool is > defined there as a regular algebraic type > data Bool = False | True The magic part I don't seem to get is that moving the definition of Bool around and also changing the types of various things assumed to be Bool causes things to break. The question seems to be of figuring out what depends on it being where and how to either make it more flexible or accommodate it. On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote: [useful info not needing a response snipped] On Mon, 12 Feb 2001, William Lee Irwin III wrote: >> I'd also like to see where some of the magic behind the typing of >> various other built-in constructs happens, like list comprehensions, >> tuples, and derived classes. On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote: > Inside the compiler, not in libraries. I had in mind looking within the compiler, actually. Where in the compiler? It's a big program, it might take me a while to do an uninformed search. I've peeked around a little bit and not gotten anywhere. Cheers, Bill From laszlo@ropas.kaist.ac.kr Tue Feb 13 03:06:01 2001 From: laszlo@ropas.kaist.ac.kr (Laszlo Nemeth) Date: Tue, 13 Feb 2001 12:06:01 +0900 (KST) Subject: Deja vu: Re: In hoc signo vinces (Was: Revamping the numeric classes) In-Reply-To: <3A8811E6.B4DE31@info.unicaen.fr> (message from Jerzy Karczmarczuk on Mon, 12 Feb 2001 16:40:06 +0000) References: <3A8811E6.B4DE31@info.unicaen.fr> Message-ID: <200102130306.MAA27712@ropas.kaist.ac.kr> [incomprehensible (not necessarily wrong!) stuff about polynomials, rings, modules over Z and complaints about the current prelude nuked] --- Marcin 'Qrczak' Kowalczyk pisze --- > Please show a concrete proposal how Prelude classes could be improved. --- Jerzy Karczmarczuk repondre --- > I am Haskell USER. I have no ambition to save the world. The "proposal" > has been presented in 1995 in Nijmegen (FP in education). 
Actually, it
> hasn't, I concentrated on lazy power series etc., and the math oriented
> prelude has been mentioned casually. Jeroen Fokker presented similar
> ideas, implemented differently.

I'm afraid all this discussion reminds me of the one we had a year or two ago. At that time the mathematically inclined side was led by Sergei, who to his credit developed the Basic Algebra Proposal, which I don't understand, but many people seemed to be happy about at that time. And then of course nothing happened, because no Haskell implementor has bitten the bullet and implemented the proposal. This is understandable, as supporting Sergei's proposal seems to be a lot of work, most of which would be incompatible with current implementations. And no one wants to maintain *two* Haskell compilers within one.

Even if this discussion continues and another brave soul develops another algebra proposal, I am prepared to bet with both of you one year's supply of Ben and Jerry's (not Jerzy :)!) ice cream that nothing will continue to happen on the implementors' side. It is simply too much work for an *untested* (in practice, for teaching etc.) alternative prelude.

So instead of wasting time, why don't you guys ask the implementors to provide a flag '-IDontWantYourStinkingPrelude' which would give you a bare metal compiler with no predefined types, functions, classes, no derived instances, no fancy stuff, and build and test your proposals with it? I guess the RULES pragma (in GHC) could be abused to allow access to the primitive operations (on Ints), but you are still likely to lose much of the elegance, conciseness and perhaps even some efficiency of Haskell (e.g. list comprehensions), but this should allow us to gain experience in what sort of support is essential for providing alternative prelude(s).
Once we learnt how to decouple the prelude from the compiler, and gained experience with alternative preludes, implementors would have no excuse not to provide the possibility (unless it turns out to be completely impossible or impractical, in which case we learnt something genuinely useful). So, Marcin (as you are one of the GHC implementors), how much work would it be to disable the disputed Prelude stuff within the compiler, and what would be lost?

Laszlo

[Disclaimer: Just my 10 wons. This message is not in disagreement or agreement with any of the previous messages]

From simonpj@microsoft.com Tue Feb 13 01:16:02 2001 From: simonpj@microsoft.com (Simon Peyton-Jones) Date: Mon, 12 Feb 2001 17:16:02 -0800 Subject: Revamping the numeric HUMAN ATTITUDE Message-ID: <37DA476A2BC9F64C95379BF66BA269023D931C@red-msg-09.redmond.corp.microsoft.com>

| I'm seeing a bit of this now, and the error messages GHC spits out
| are hilarious! e.g.
|
| My brain just exploded.
| I can't handle pattern bindings for
| existentially-quantified constructors.
|
| and
|
| Couldn't match `Bool' against `Bool'
| Expected type: Bool
| Inferred type: Bool

The first of these is defensible, I think. It's not at all clear (to me anyway) what pattern bindings for existentially-quantified constructors should mean. The second is plain bogus. GHC should never give a message like that. Which version of the compiler are you using? If you can send a small example I'll try it on the latest compiler.
Simon From dlb@wash.averstar.com Tue Feb 13 11:43:06 2001 From: dlb@wash.averstar.com (David Barton) Date: Tue, 13 Feb 2001 06:43:06 -0500 Subject: Typing units correctly In-Reply-To: <14984.20153.229218.184472@waytogo.peace.co.nz> (message from Tom Pledger on Tue, 13 Feb 2001 09:59:37 +1300) References: <20010211174215.A2033@math.harvard.edu> <3A876585.6D4B2B8A@boutel.co.nz> <14983.29211.220380.337502@waytogo.peace.co.nz> <200102120908.KAA01548@cuchulain.it.kth.se> <20010212135154.G4259@math.harvard.edu> <14984.20153.229218.184472@waytogo.peace.co.nz> Message-ID: <200102131143.GAA01181@hudson.wash.averstar.com> Tom Pledger writes: In both of those cases, the apparent non-integer dimension is accompanied by a particular unit (km, V). So, could they equally well be handled by stripping away the units and exponentiating a dimensionless number? For example: (x / 1V) ^ y I think not. The "Dimension Types" paper really is excellent, and makes the distinction between the necessity of exponents on the dimensions and the exponents on the numbers very clear; I commend it to everyone in this discussion. The two things (a number of "square root volts" and a "number of volts to an exponent" are different things, unless you are simply trying to represent a ground number as an expression! Dave Barton <*> dlb@averstar.com )0( http://www.averstar.com/~dlb From dpt@math.harvard.edu Tue Feb 13 19:01:25 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Tue, 13 Feb 2001 14:01:25 -0500 Subject: A sample revised prelude for numeric classes In-Reply-To: ; from qrczak@knm.org.pl on Mon, Feb 12, 2001 at 12:26:35AM +0000 References: <20010211174215.A2033@math.harvard.edu> Message-ID: <20010213140125.A8202@math.harvard.edu> On Mon, Feb 12, 2001 at 12:26:35AM +0000, Marcin 'Qrczak' Kowalczyk wrote: > I must say I like it. It has a good balance between generality and > usefulness / convenience. > > Modulo a few details, see below. 
> > > class (Num a, Additive b) => Powerful a b where
> > >   (^) :: a -> b -> a
> > > instance (Num a) => Powerful a (Positive Integer) where
> > >   a ^ 0 = one
> > >   a ^ n = reduceRepeated (*) a n
> > > instance (Fractional a) => Powerful a Integer where
> > >   a ^ n | n < 0 = recip (a ^ (negate n))
> > >   a ^ n = a ^ (positive n)
>
> I don't like the fact that there is no Powerful Integer Integer.
> Since the definition on negative exponents really depends on the first
> type but can be polymorphic wrt. any Integral exponent, I would make
> other instances instead:
>
> instance RealIntegral b => Powerful Int b
> instance RealIntegral b => Powerful Integer b
> instance (Num a, RealIntegral b) => Powerful (Ratio a) b
> instance Powerful Float Int
> instance Powerful Float Integer
> instance Powerful Float Float
> instance Powerful Double Int
> instance Powerful Double Integer
> instance Powerful Double Double

OK, I'm slow. I finally understand your point here. I might leave off a few cases, and simplify this to

    instance Powerful Int Int
    instance Powerful Integer Integer
    instance (Num a, SmallIntegral b) => Powerful (Ratio a) b
    instance Powerful Float Float
    instance Powerful Double Double
    instance Powerful Complex Complex

(where "SmallIntegral" is a class that contains toInteger; "small" in the sense that it fits inside an Integer.) All of these call one of 3 functions:

    positivePow :: (Num a, SmallIntegral b) => a -> b -> a
    integerPow  :: (Fractional a, SmallIntegral b) => a -> b -> a
    analyticPow :: (Floating a) => a -> a -> a

(These 3 functions might be in a separate module from the Prelude.)

Consequences: you cannot, e.g., raise a Double to an Integer power without an explicit conversion or calling a different function (or declaring your own instance). Is this acceptable? I think it might be: after all, you can't multiply a Double by an Integer either... You then have one instance declaration per type, just as for the other classes. Opinions? I'm still not very happy.
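[Editor's note: the three helper functions named above are only given type signatures in the message. A minimal sketch of what their bodies might look like, with the exponent specialized to Integer for simplicity since the SmallIntegral class is hypothetical:]

```haskell
-- Sketch only: exponent fixed at Integer rather than the thread's
-- hypothetical SmallIntegral class.

-- Repeated squaring; only defined for strictly positive exponents.
positivePow :: Num a => a -> Integer -> a
positivePow x n
  | n <= 0    = error "positivePow: exponent must be positive"
  | n == 1    = x
  | even n    = let y = positivePow x (n `div` 2) in y * y
  | otherwise = x * positivePow x (n - 1)

-- Extends positivePow to all integer exponents via recip.
integerPow :: Fractional a => a -> Integer -> a
integerPow _ 0 = 1
integerPow x n
  | n < 0     = recip (positivePow x (negate n))
  | otherwise = positivePow x n

-- Principal-branch exponentiation, as (**) does today.
analyticPow :: Floating a => a -> a -> a
analyticPow x y = exp (y * log x)

main :: IO ()
main = do
  print (positivePow 2 10 :: Integer)  -- 1024
  print (integerPow 2 (-2) :: Double)  -- 0.25
```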
Best, Dylan Thurston From qrczak@knm.org.pl Tue Feb 13 19:47:09 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 13 Feb 2001 19:47:09 GMT Subject: A sample revised prelude for numeric classes References: <20010211174215.A2033@math.harvard.edu> <20010213140125.A8202@math.harvard.edu> Message-ID: Tue, 13 Feb 2001 14:01:25 -0500, Dylan Thurston pisze: > Consequences: you cannot, e.g., raise a Double to an Integer power > without an explicit conversion or calling a different function (or > declaring your own instance). Is this acceptable? I don't like it: (-3::Double)^2 should be 9, and generally x^(2::Integer) should be x*x for all types of x where it makes sense. Same for Int. (**) does not work for negative base. Neither of (^) and (**) is a generalization of the other: the knowledge that an exponent is restricted to integers widens the domain of the base. x^2 = x*x cannot actually work for any x in Num, or whatever the class of (*) is called, if (^) is not defined inside the same class. This is because (^) is unified with (^^): the unified (^) should use recip if available, but be partially defined without it if it's not available. So I propose to put (^) together with (*). With a default definition of course. It means "apply (*) the specified number of times", and for fractional types has a meaning extended to negative exponents. (^) is related to (*) as discussed times or scale is related to (+). (**):: a -> a -> a, together with other analytic functions. Sorry, the fact that they are written the same in conventional math is not enough to force their unification against technical reasons. It's not bad: we succeeded in unification of (^) and (^^). 
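[Editor's note: a minimal sketch of the proposal above — that (^) live in the same class as (*), with a default definition meaning "apply (*) the specified number of times", partially defined on negative exponents when no recip is available. The class and operator names here are illustrative, not from the thread.]

```haskell
-- Sketch only: "Ring", "one", (*.) and (^.) are illustrative names.
class Ring a where
  one  :: a
  (*.) :: a -> a -> a
  (^.) :: a -> Integer -> a
  -- Default: repeated multiplication by repeated squaring. A subclass
  -- with recip could override this to handle negative exponents, as the
  -- message suggests for fractional types.
  x ^. n
    | n < 0     = error "(^.): negative exponent without recip"
    | n == 0    = one
    | even n    = let y = x ^. (n `div` 2) in y *. y
    | otherwise = x *. (x ^. (n - 1))

instance Ring Integer where
  one  = 1
  (*.) = (*)

main :: IO ()
main = print ((3 :: Integer) ^. 4)  -- 81
```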
-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From dpt@math.harvard.edu Tue Feb 13 23:32:21 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Tue, 13 Feb 2001 18:32:21 -0500 Subject: Revised numerical prelude, version 0.02 Message-ID: <20010213183221.B973@math.harvard.edu> --dDRMvlgZJXvWKvBx Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Here's a revision of the numerical prelude. Many thanks to all who helped. Changes include: * Removed "Powerful", replacing it with (^) in Num and (**) in Real. * Fixed numerous typos * Removed gcd and co. from Integral * Added shortcomings & limitation of scope * Added SmallIntegral, SmallReal * wrote skeleton VectorSpace, PowerSeries * Added framework to make it run under hugs. There are some usability issues. Any comments welcome! Best, Dylan Thurston --dDRMvlgZJXvWKvBx Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="NumPrelude.lhs" Revisiting the Numeric Classes ------------------------------ The Prelude for Haskell 98 offers a well-considered set of numeric classes which cover the standard numeric types (Integer, Int, Rational, Float, Double, Complex) quite well. But they offer limited extensibility and have a few other flaws. In this proposal we will revisit these classes, addressing the following concerns: (1) The current Prelude defines no semantics for the fundamental operations. For instance, presumably addition should be associative (or come as close as feasible), but this is not mentioned anywhere. (2) There are some superfluous superclasses. For instance, Eq and Show are superclasses of Num. Consider the data type data IntegerFunction a = IF (a -> Integer) One can reasonably define all the methods of Num for IntegerFunction a (satisfying good semantics), but it is impossible to define non-bottom instances of Eq and Show. 
In general, a superclass relationship should indicate some semantic connection between the two classes.

(3) In a few cases, there is a mix of semantic operations and representation-specific operations. toInteger, toRational, and the various operations in RealFloat (decodeFloat, ...) are the main examples.

(4) In some cases, the hierarchy is not finely-grained enough: operations that are often defined independently are lumped together. For instance, in a financial application one might want a type "Dollar", or in a graphics application one might want a type "Vector". It is reasonable to add two Vectors or Dollars, but not, in general, reasonable to multiply them. But the programmer is currently forced to define a method for (*) when she defines a method for (+).

In specifying the semantics of type classes, I will state laws as follows:

    (a + b) + c === a + (b + c)

The intended meaning is extensional equality: the rest of the program should behave in the same way if one side is replaced with the other. Unfortunately, the laws are frequently violated by standard instances; the law above, for instance, fails for Float:

    (100000000000000000000.0 + (-100000000000000000000.0)) + 1.0 = 1.0
    100000000000000000000.0 + ((-100000000000000000000.0) + 1.0) = 0.0

Thus these laws should be interpreted as guidelines rather than absolute rules. In particular, the compiler is not allowed to use them. Unless stated otherwise, default definitions should also be taken as laws.

This version is fairly conservative. I have retained the names for classes with similar functions as far as possible, I have not made some distinctions that could reasonably be made, and I have tried to opt for simplicity over generality. Thanks to Brian Boutel, Joe English, William Lee Irwin III, Marcin Kowalczyk, and Ken Shan for helpful comments.

Scope & Limitations/TODO:

* It might be desirable to split Ord up into Poset and Ord (a total ordering). This is not addressed here.
* In some cases, this hierarchy is not fine-grained enough. For instance, time spans ("5 minutes") can be added to times ("12:34"), but two times are not addable. ("12:34 + 8:23"??) As it stands, users have to use a different operator for adding time spans to times than for adding two time spans. Similar issues arise for vector spaces et al. This is a consciously-made tradeoff, but might be changed. This becomes most serious when dealing with quantities with units like length/distance^2, for which (*) as defined here is useless, but Haskell's type system doesn't seem to be strong enough to deal with those in any convenient way.

  [One way to see the issue: should

      f x y = iterate (x *) y

  have principal type

      (Num a) => a -> a -> [a]

  or something like

      (Num a, Module a b) => a -> b -> [b]

  ?]

* I stuck with the Haskell 98 names. In some cases I find them lacking. Given free rein and not worrying about backwards compatibility, I might rename the classes as follows:

      Num --> Ring
      Floating --> Analytic
      RealFloat --> RealAnalytic

* I'm not happy with Haskell's current treatment of numeric literals. I'm particularly unhappy with their use in pattern matching. I feel like it should be a special case of some more general construction. I'd like to make it easier to use a non-standard Prelude, but there's a little too much magic. For instance, the definition of round in the Haskell 98 Prelude is

      round x = let (n,r) = properFraction x
                    m = if r < 0 then n - 1 else n + 1
                in case signum (abs r - 0.5) of
                     -1 -> n
                     0  -> if even n then n else m
                     1  -> m

  I'd like to copy this over to this revised library. But the numeric constants have to be wrapped in explicit calls to fromInteger. Worse, the case statement must be rewritten!
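The pattern-matching magic complained about above can be made explicit with a small sketch (plain Haskell 98; the name fDesugared is mine). A numeric literal pattern is really an equality test against an overloaded fromInteger call, which is part of what makes swapping Preludes delicate:

```haskell
-- A literal pattern ...
f :: Integer -> String
f 0 = "zero"
f _ = "nonzero"

-- ... means, roughly, this (per the Haskell 98 desugaring of
-- literal patterns into (==) against an overloaded literal):
fDesugared :: Integer -> String
fDesugared x
  | x == fromInteger 0 = "zero"
  | otherwise          = "nonzero"
```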
> module NumPrelude where
> import qualified Prelude as P
> -- Import some standard Prelude types verbatim
> import Prelude hiding (
>     Int, Integer, Float, Double, Rational, Num(..), Real(..),
>     Integral(..), Fractional(..), Floating(..), RealFrac(..),
>     RealFloat(..), subtract, even, odd,
>     gcd, lcm, (^), (^^))
>
>
> infixr 8 ^, **
> infixl 7 *
> infixl 7 /, `quot`, `rem`, `div`, `mod`
> infixl 6 +, -
>
> class Additive a where
>     (+), (-) :: a -> a -> a
>     negate :: a -> a
>     zero :: a
>
>     -- Minimal definition: (+), zero, and (negate or (-))
>     negate a = zero - a
>     a - b = a + (negate b)

Additive a encapsulates the notion of a commutative group, specified by the following laws:

    a + b === b + a
    (a + b) + c === a + (b + c)
    zero + a === a
    a + (negate a) === zero

Typical examples include integers, dollars, and vectors.

> class (Additive a) => Num a where
>     (*) :: a -> a -> a
>     one :: a
>     fromInteger :: Integer -> a
>     (^) :: (SmallIntegral b) => a -> b -> a
>
>     -- Minimal definition: (*), (one or fromInteger)
>     fromInteger n | n < 0 = negate (fromInteger (-n))
>     fromInteger n | n >= 0 = reduceRepeated (+) zero one n
>     a ^ n | n < zero = error "Illegal negative exponent"
>           | True = reduceRepeated (*) one a (toInteger n)
>     one = fromInteger 1

Num encapsulates the mathematical structure of a (not necessarily commutative) ring, with the laws

    a * (b * c) === (a * b) * c
    one * a === a
    a * one === a
    a * (b + c) === a * b + a * c

Typical examples include integers, matrices, and quaternions.

"reduceRepeated op a0 a n" is an auxiliary function that, for an associative operation "op", computes the same value as

    reduceRepeated op a0 a n = foldr op a0 (replicate (fromInteger n) a)

but applies "op" O(log n) times and works for large n.
A sample implementation is below:

> reduceRepeated :: (a -> a -> a) -> a -> a -> Integer -> a
> reduceRepeated op a0 a n
>     | n == 0 = a0
>     | even n = reduceRepeated op a0 (op a a) (div n 2)
>     | True = reduceRepeated op (op a0 a) (op a a) (div n 2)

> class (Num a) => Integral a where
>     div, mod :: a -> a -> a
>     divMod :: a -> a -> (a,a)
>
>     -- Minimal definition: divMod or (div and mod)
>     div a b = let (d,_) = divMod a b in d
>     mod a b = let (_,m) = divMod a b in m
>     divMod a b = (div a b, mod a b)

Integral corresponds to a commutative ring, where "a mod b" picks a canonical element of the equivalence class of "a" in the ideal generated by "b". div and mod satisfy the laws

    a * b === b * a
    (a `div` b) * b + (a `mod` b) === a
    (a+k*b) `mod` b === a `mod` b
    0 `mod` b === 0
    a `mod` 0 === a

Typical examples of Integral include integers and polynomials over a field. Note that for a field, there is a canonical instance defined by the above rules; e.g.,

    instance Integral Rational where
        divMod a 0 = (undefined, a)
        divMod a b = (a/b, 0)

> class (Num a) => Fractional a where
>     (/) :: a -> a -> a
>     recip :: a -> a
>     fromRational :: Rational -> a
>
>     -- Minimal definition: recip or (/)
>     recip a = one / a
>     a / b = a * (recip b)
>     fromRational r = fromInteger (numerator r) / fromInteger (denominator r)
>     -- I'd like this next definition to be legal.
>     -- It would only apply if there were an implicit instance for Num a
>     -- through Fractional a.
>     -- a ^ n | n < 0 = reduceRepeated (*) one (recip a) (negate (toInteger n))
>     --       | True = reduceRepeated (*) one a (toInteger n)

Fractional again corresponds to a commutative ring. Division is partially defined and satisfies

    a * b === b * a
    a * (recip a) === one

when it is defined.
To safely call division, the program must take type-specific action; e.g., the following is appropriate in many cases:

    safeRecip :: (Integral a, Eq a, Fractional a) => a -> Maybe a
    safeRecip a = let (q,r) = one `divMod` a
                  in if (r == 0) then Just q else Nothing

Typical examples include rationals, the real numbers, and rational functions (ratios of polynomials). An instance should not typically be declared unless most elements are invertible.

> -- Note: I think "Analytic" would be a better name than "Floating".
> class (Fractional a) => Floating a where
>     pi :: a
>     exp, log, sqrt :: a -> a
>     logBase, (**) :: a -> a -> a
>     sin, cos, tan :: a -> a
>     asin, acos, atan :: a -> a
>     sinh, cosh, tanh :: a -> a
>     asinh, acosh, atanh :: a -> a
>
>     -- Minimal complete definition:
>     --      pi, exp, log, sin, cos, sinh, cosh
>     --      asinh, acosh, atanh
>     x ** y = exp (log x * y)
>     logBase x y = log y / log x
>     sqrt x = x ** (fromRational 0.5)
>     tan x = sin x / cos x
>     tanh x = sinh x / cosh x

Floating is the type of numbers supporting various analytic functions. Examples include real numbers, complex numbers, and computable reals represented as a lazy list of rational approximations.

Note the default declaration for a superclass. See the comments below, under "Instance declarations for superclasses".

The semantics of these operations are rather ill-defined because of branch cuts, etc.

> class (Num a, Ord a) => Real a where
>     abs :: a -> a
>     signum :: a -> a
>
>     -- Minimal definition: nothing
>     abs x = max x (negate x)
>     signum x = case compare x zero of
>                  GT -> one
>                  EQ -> zero
>                  LT -> negate one

This is the type of an ordered ring, satisfying the laws

    a * b === b * a
    a + (max b c) === max (a+b) (a+c)
    negate (max b c) === min (negate b) (negate c)
    a * (max b c) === max (a*b) (a*c) where a >= 0

Note that abs is in a rather different place than it is in the Haskell 98 Prelude. In particular, abs :: Complex -> Complex is not defined.
To me, this seems to have the wrong type anyway; Complex.magnitude has the correct type.

> class (Real a, Floating a) => RealFrac a where
>     -- lifted directly from Haskell 98 Prelude
>     properFraction :: (Integral b) => a -> (b,a)
>     truncate, round :: (Integral b) => a -> b
>     ceiling, floor :: (Integral b) => a -> b
>
>     -- Minimal complete definition:
>     --      properFraction
>     truncate x = m where (m,_) = properFraction x
>
>     round x = fromInteger (
>         let (n,r) = properFraction x
>             m = if r < zero then n - one else n + one
>         in case compare (abs r - (fromRational 0.5)) zero of
>              LT -> n
>              EQ -> if even n then n else m
>              GT -> m
>         )
>
>     ceiling x = fromInteger (if r > zero then n + one else n)
>         where (n,r) = properFraction x
>
>     floor x = fromInteger (if r < zero then n - one else n)
>         where (n,r) = properFraction x

As an aside, let me note the similarities between "properFraction x" and "x divMod 1" (if that were defined). In particular, it might make sense to unify the rounding modes somehow.

> class (RealFrac a, Floating a) => RealFloat a where
>     atan2 :: a -> a -> a
>     {- This needs lots of fromIntegral wrapping.
>     atan2 y x
>       | x>0 = atan (y/x)
>       | x==0 && y>0 = pi/2
>       | x<0 && y>0 = pi + atan (y/x)
>       |(x<=0 && y<0) = -atan2 (-y) x
>       | y==0 && x<0 = pi -- must be after the previous test on zero y
>       | x==0 && y==0 = y -- must be after the other double zero tests
>     -}

(Note that I removed the IEEEFloat-specific calls here, so probably nobody will actually use this default definition.)

> class (Real a, Integral a) => RealIntegral a where
>     quot, rem :: a -> a -> a
>     quotRem :: a -> a -> (a,a)
>
>     -- Minimal definition: nothing required
>     quot a b = let (q,_) = quotRem a b in q
>     rem a b = let (_,r) = quotRem a b in r
>     quotRem a b = let (d,m) = divMod a b in
>                   if (signum d < (fromInteger 0)) then
>                       (d+(fromInteger 1),m-b) else (d,m)

Remember that divMod does not specify exactly what a `quot` b should be, mainly because there is no sensible way to define it in general.
For an instance of RealIntegral a, it is expected that a `quot` b will round towards minus infinity and a `div` b will round towards 0. > class (Real a) => SmallReal a where > toRational :: a -> Rational > class (SmallReal a, RealIntegral a) => SmallIntegral a where > toInteger :: a -> Integer These two classes exist to allow convenient conversions, primarily between the built-in types. These classes are "small" in the sense that they can be converted to integers (resp. rationals) without loss of information. They should satisfy fromInteger . toInteger === id fromRational . toRational === id toRational . toInteger === toRational > --- Numerical functions > subtract :: (Additive a) => a -> a -> a > subtract = flip (-) > > even, odd :: (Eq a, Integral a) => a -> Bool > even n = n `mod` (one + one) == fromInteger zero > odd = not . even Additional standard libraries might include IEEEFloat (including the bulk of the functions in Haskell 98's RealFloat class), VectorSpace, Ratio, and Lattice. > -- Support functions so that this whole thing can be tested on top > -- of a standard prelude. > -- Alternative: use "newtype". > type Integer = P.Integer > type Int = P.Int > type Float = P.Float > type Double = P.Double > type Rational = P.Rational -- This one is lame. > instance Additive P.Integer where > (+) = (P.+) > zero = 0 > negate = P.negate > instance Num P.Integer where > (*) = (P.*) > one = 1 > instance Integral P.Integer where > divMod = P.divMod > instance Real P.Integer > instance RealIntegral P.Integer > instance SmallReal P.Integer where > toRational = P.toRational > instance SmallIntegral P.Integer where > toInteger = id > data T a = T a --dDRMvlgZJXvWKvBx Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="VectorSpace.lhs" > module VectorSpace where > import NumPrelude > import qualified Prelude > > -- Is this right? 
> infixl 7 *>, <*
>
> class (Num a, Additive b) => Module a b where
>     (*>) :: a -> b -> b

A module over a ring satisfies:

    a *> (b + c) === a *> b + a *> c
    (a * b) *> c === a *> (b *> c)
    (a + b) *> c === a *> c + b *> c

For instance, the following function can be used to define any Additive as a module over Integer:

> integerMultiply :: (SmallIntegral a, Additive b) => a -> b -> b
> integerMultiply a b = reduceRepeated (+) zero b (toInteger a)

There are no instance declarations by default, since they would overlap with too many other instances and would be slower than desired.

> class (Num a, Additive b) => RightModule a b where
>     (<*) :: b -> a -> b

> class (Fractional a, Additive b) => VectorSpace a b

> class (VectorSpace a b) => DivisibleSpace a b where
>     (</>) :: b -> b -> a

DivisibleSpace is used for free one-dimensional vector spaces. It satisfies

    (a </> b) *> b = a

Examples include dollars and kilometers.

--dDRMvlgZJXvWKvBx Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="PowerSeries.lhs"

> module PowerSeries where
> import NumPrelude
> import qualified Prelude as P
> import VectorSpace
> import Prelude hiding (
>     Int, Integer, Float, Double, Rational, Num(..), Real(..),
>     Integral(..), Fractional(..), Floating(..), RealFrac(..),
>     RealFloat(..), subtract, even, odd,
>     gcd, lcm, (^), (^^))

Power series, either finite or unbounded. (zipWith does exactly the right thing to make it work almost transparently.)

> newtype PowerSeries a = PS [a] deriving (Eq, Ord, Show)
> stripPS (PS l) = l

> truncatePS :: Int -> PowerSeries a -> PowerSeries a
> truncatePS n (PS a) = PS (take n a)

Note that the derived instances only make sense for finite series.
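The zipWith remark can be illustrated with a small standalone sketch (plain Haskell lists and the ordinary Prelude, not the PowerSeries type itself; the name mulPS is mine). Multiplication recurses on the head coefficient, and zipWith is what lets finite and infinite coefficient lists mix:

```haskell
-- (a + x*as) * (b + x*bs)  =  a*b + x*(a*bs + as*(b + x*bs)),
-- written directly on coefficient lists.
mulPS :: [Integer] -> [Integer] -> [Integer]
mulPS (a:as) bbs@(b:bs) = a*b : zipWith (+) (map (a *) bs) (mulPS as bbs)
mulPS _      _          = []

-- Squaring the geometric series 1/(1-x) = 1 + x + x^2 + ... gives
-- the coefficients of 1/(1-x)^2:
main :: IO ()
main = print (take 5 (mulPS (repeat 1) (repeat 1)))  -- prints [1,2,3,4,5]
```

On finite lists zipWith silently truncates, so trailing coefficients are dropped; this is the same reason the derived instances above only make sense for finite series.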
> instance (Additive a) => Additive (PowerSeries a) where > negate (PS l) = PS (map negate l) > (PS a) + (PS b) = PS (zipWith (+) a b) > zero = PS (repeat zero) > > instance (Num a) => Num (PowerSeries a) where > one = PS (one:repeat zero) > fromInteger n = PS (fromInteger n : repeat zero) > PS (a:as) * PS (b:bs) = PS ((a*b):stripPS (a *> PS bs + PS as*PS (b:bs))) > PS _ * PS _ = PS [] > > instance (Num a) => Module a (PowerSeries a) where > a *> (PS bs) = PS (map (a *) bs) It would be nice to also provide: instance (Module a b) => Module a (PowerSeries b) where a *> (PS bs) = PS (map (a *>) bs) maybe with instance (Num a) => Module a a where (*>) = (*) > instance (Integral a) => Integral (PowerSeries a) where > divMod a b = (\(x,y)-> (PS x, PS y)) (unzip (aux a b)) > where aux (PS (a:as)) (PS (b:bs)) = > let (d,m) = divMod a b in > (d,m):aux (PS as - d *> (PS bs)) (PS (b:bs)) > aux _ _ = [] --dDRMvlgZJXvWKvBx-- From wli@holomorphy.com Wed Feb 14 00:20:01 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Tue, 13 Feb 2001 16:20:01 -0800 Subject: Primitive types and Prelude shenanigans In-Reply-To: <20010212143825.O641@holomorphy.com>; from wli@holomorphy.com on Mon, Feb 12, 2001 at 02:38:25PM -0800 References: <20010212004357.K641@holomorphy.com> <20010212143825.O641@holomorphy.com> Message-ID: <20010213162001.Z641@holomorphy.com> On Mon, 12 Feb 2001, William Lee Irwin III wrote: >>> I'd also like to see where some of the magic behind the typing of >>> various other built-in constructs happens, like list comprehensions, >>> tuples, and derived classes. On Mon, Feb 12, 2001 at 11:00:02AM +0100, Marcin 'Qrczak' Kowalczyk wrote: >> Inside the compiler, not in libraries. On Mon, Feb 12, 2001 at 02:38:25PM -0800, William Lee Irwin III wrote: > I had in mind looking within the compiler, actually. Where in the > compiler? It's a big program, it might take me a while to do an > uninformed search. I've peeked around a little bit and not gotten > anywhere. 
If anyone else is pursuing thoughts along the same lines as I am (and I have suspicions), TysWiredIn.lhs appears quite relevant to the set of primitive data types, though there is no obvious connection to the module issue (PrelBase.Bool vs. Foo.Bool). PrelMods.lhs appears to shed more light on that issue in particular. $TOP/ghc/compiler/prelude/ was the gold mine I encountered. In DsExpr.lhs, I found: ] \subsection[DsExpr-literals]{Literals} ] ... ] We give int/float literals type @Integer@ and @Rational@, respectively. ] The typechecker will (presumably) have put \tr{from{Integer,Rational}s} ] around them. and following this pointer, I found TcExpr.lhs (lines 213ff) had more material of interest. While I can't say I know how to act on these "discoveries" (esp. since I don't really understand OverloadedIntegral and OverloadedFractional's treatment(s) yet), perhaps this might be useful to others interested in ideas along the same lines as mine. Happy hacking, Bill From qrczak@knm.org.pl Wed Feb 14 05:08:16 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 14 Feb 2001 05:08:16 GMT Subject: Revised numerical prelude, version 0.02 References: <20010213183221.B973@math.harvard.edu> Message-ID: Tue, 13 Feb 2001 18:32:21 -0500, Dylan Thurston pisze: > I'd like to copy this over to this revised library. But the numeric > constants have to be wrapped in explicit calls to fromInteger. ghc's docs (the CVS version) say that -fno-implicit-prelude causes numeric literals to use whatever fromInteger is in scope. AFAIR it worked at some time, but it does not work anymore! BTW, why not let 'import MyPrelude as Prelude' work that way? 
-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From andrew@andrewcooke.free-online.co.uk Wed Feb 14 17:02:24 2001 From: andrew@andrewcooke.free-online.co.uk (andrew@andrewcooke.free-online.co.uk) Date: Wed, 14 Feb 2001 17:02:24 +0000 Subject: Typing units correctly In-Reply-To: <20010212135154.G4259@math.harvard.edu>; from dpt@math.harvard.edu on Mon, Feb 12, 2001 at 01:51:54PM -0500 References: <20010211174215.A2033@math.harvard.edu> <3A876585.6D4B2B8A@boutel.co.nz> <14983.29211.220380.337502@waytogo.peace.co.nz> <200102120908.KAA01548@cuchulain.it.kth.se> <20010212135154.G4259@math.harvard.edu> Message-ID: <20010214170224.A1800@liron> Hi, I don't know if this is useful, but in response to a link to that article that I posted on Lambda, someone posted a link arguing that such an approach (at least in Ada) was impractical. To be honest, I don't find it very convincing, but I haven't been following this discussion in detail. It might raise some problems you have not considered. Anyway, if you are interested, it's all at http://lambda.weblogs.com/discuss/msgReader$818 Apologies if it's irrelevant or you've already seen it, Andrew On Mon, Feb 12, 2001 at 01:51:54PM -0500, Dylan Thurston wrote: [...] > The papers I could find (e.g., > http://citeseer.nj.nec.com/kennedy94dimension.html, "Dimension Types") > mention extensions to ML. I wonder if it is possible to work within > the Haskell type system, which is richer than ML's type system. [...] -- http://www.andrewcooke.free-online.co.uk/index.html From akenn@microsoft.com Wed Feb 14 16:10:39 2001 From: akenn@microsoft.com (Andrew Kennedy) Date: Wed, 14 Feb 2001 08:10:39 -0800 Subject: Typing units correctly Message-ID: <0C682B70CE37BC4EADED9D375809768A56CD03@red-msg-04.redmond.corp.microsoft.com> To be frank, the poster that you cite doesn't know what he's talking about. 
He makes two elementary mistakes: (a) attempting to encode dimension/unit checking in an existing type system; (b) not appreciating the need for parametric polymorphism over dimensions/units. As others have pointed out, (a) doesn't work because the algebra of units of measure is not free - units form an Abelian group (if integer exponents are used) or a vector space over the rationals (if rational exponents are used) and so it's not possible to do unit-checking by equality-on-syntax or unit-inference by ordinary syntactic unification. Furthermore, parametric polymorphism is essential for code reuse - one can't even write a generic squaring function (say) without it. Best to ignore the poster and instead read the papers that contributors to this thread have cited :-) To turn to the original question, I did once give a moment's thought to the combination of type classes and types for units-of-measure. I don't think there's any particular problem: units (or dimensions) are a new "sort" or "kind", just as "row" is in various proposals for record polymorphism in Haskell. As long as this is tracked through the type system, everything should work out fine. Of course, I may have missed something, in which case I'd be very interested to know about it. - Andrew Kennedy. > -----Original Message----- > From: andrew@andrewcooke.free-online.co.uk > [mailto:andrew@andrewcooke.free-online.co.uk] > Sent: Wednesday, February 14, 2001 5:02 PM > To: haskell-cafe@haskell.org > Subject: Re: Typing units correctly > > > > Hi, > > I don't know if this is useful, but in response to a link to that > article that I posted on Lambda, someone posted a link arguing that > such an approach (at least in Ada) was impractical. To be honest, I > don't find it very convincing, but I haven't been following this > discussion in detail. It might raise some problems you have not > considered. 
> > Anyway, if you are interested, it's all at > http://lambda.weblogs.com/discuss/msgReader$818 > > Apologies if it's irrelevant or you've already seen it, > Andrew > > On Mon, Feb 12, 2001 at 01:51:54PM -0500, Dylan Thurston wrote: > [...] > > The papers I could find (e.g., > > http://citeseer.nj.nec.com/kennedy94dimension.html, > "Dimension Types") > > mention extensions to ML. I wonder if it is possible to work within > > the Haskell type system, which is richer than ML's type system. > [...] > > -- > http://www.andrewcooke.free-online.co.uk/index.html > From dpt@math.harvard.edu Wed Feb 14 19:14:36 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Wed, 14 Feb 2001 14:14:36 -0500 Subject: Typing units correctly In-Reply-To: <0C682B70CE37BC4EADED9D375809768A56CD03@red-msg-04.redmond.corp.microsoft.com>; from akenn@microsoft.com on Wed, Feb 14, 2001 at 08:10:39AM -0800 References: <0C682B70CE37BC4EADED9D375809768A56CD03@red-msg-04.redmond.corp.microsoft.com> Message-ID: <20010214141436.A4782@math.harvard.edu> --opJtzjQTFsWo+cga Content-Type: text/plain; charset=us-ascii Content-Disposition: inline On Wed, Feb 14, 2001 at 08:10:39AM -0800, Andrew Kennedy wrote: > To be frank, the poster that you cite doesn't know what he's talking > about. He makes two elementary mistakes: Quite right, I didn't know what I was talking about. I still don't. But I do hope to learn. > (a) attempting to encode dimension/unit checking in an existing type > system; We're probably thinking about different contexts, but please see the attached file (below) for a partial solution. I used Hugs' dependent types to get type inference. This makes me uneasy, because I know that Hugs' instance checking is, in general, not decidable; I don't know if the fragment I use is decidable. You can remove the dependent types, but then you need to type all the results, etc., explicitly. 
This version doesn't handle negative exponents; perhaps what you say here:

> As others have pointed out, (a) doesn't work because the algebra of
> units of measure is not free - units form an Abelian group (if
> integer exponents are used) or a vector space over the rationals (if
> rational exponents are used) and so it's not possible to do
> unit-checking by equality-on-syntax or unit-inference by ordinary
> syntactic unification.

... is that I won't be able to do it? Note that I didn't write it out, but this version can accommodate multiple units of measure.

> (b) not appreciating the need for parametric polymorphism over
> dimensions/units.
> ... Furthermore, parametric polymorphism is
> essential for code reuse - one can't even write a generic squaring
> function (say) without it.

I'm not sure what you're getting at here; I can easily write a squaring function in the version I wrote. It uses ad-hoc polymorphism rather than parametric polymorphism. It also gives much uglier types; e.g., the example from your paper

    f (x,y,z) = x*x + y*y*y + z*z*z*z*z

gets some horribly ugly context:

    f :: (Additive a, Mul b c d, Mul c c e, Mul e c b, Mul d c a,
          Mul f f a, Mul g h a, Mul h h g) => (f,h,c) -> a

Not that I recommend this solution, mind you. I think language support would be much better. But specific language support for units rubs me the wrong way: I'd much rather see a general notion of types with integer parameters, which you're allowed to add. This would be useful in any number of places. Is this what you're suggesting below?

> To turn to the original question, I did once give a moment's thought
> to the combination of type classes and types for units-of-measure. I
> don't think there's any particular problem: units (or dimensions)
> are a new "sort" or "kind", just as "row" is in various proposals
> for record polymorphism in Haskell. As long as this is tracked
> through the type system, everything should work out fine.
Of course, > I may have missed something, in which case I'd be very interested to > know about it. Incidentally, I went and read your paper just now. Very interesting. You mentioned one problem came up that sounds interesting: to give a nice member of the equivalence class of the principal type. This boils down to picking a nice basis for a free Abelian group with a few distinguished elements. Has any progress been made on that? Best, Dylan Thurston --opJtzjQTFsWo+cga Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="dim3.hs" module Dim3 where default (Double) infixl 7 *** infixl 6 +++ data Zero = Zero data Succ x = Succ x class Peano a where value :: a -> Int element :: a instance Peano Zero where value Zero = 0 ; element = Zero instance (Peano a) => Peano (Succ a) where value (Succ x) = value x + 1 ; element = Succ element class (Peano a, Peano b, Peano c) => PeanoAdd a b c | a b -> c instance (Peano a) => PeanoAdd Zero a a instance (PeanoAdd a b c) => PeanoAdd (Succ a) b (Succ c) data (Peano a) => Dim a b = Dim a b deriving (Eq) class Mul a b c | a b -> c where (***) :: a -> b -> c instance Mul Double Double Double where (***) = (*) instance (Mul a b c, PeanoAdd d e f) => Mul (Dim d a) (Dim e b) (Dim f c) where (Dim _ a) *** (Dim _ b) = Dim element (a *** b) instance (Show a, Peano b) => Show (Dim b a) where show (Dim b a) = show a ++ " d^" ++ show (value b) class Additive a where (+++) :: a -> a -> a zero :: a instance Additive Double where (+++) = (+) ; zero = 0 instance (Peano a, Additive b) => Additive (Dim a b) where Dim a b +++ Dim c d = Dim a (b+++d) zero = Dim element zero scalar :: Double -> Dim Zero Double scalar x = Dim Zero x unit = scalar 1.0 d = Dim (Succ Zero) 1.0 f (x,y,z) = x***x +++ y***y***y +++ z***z***z***z***z --opJtzjQTFsWo+cga-- From qrczak@knm.org.pl Wed Feb 14 21:53:16 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 14 Feb 2001 21:53:16 GMT Subject: Revised numerical prelude, version 
0.02 References: <20010213183221.B973@math.harvard.edu> Message-ID: Tue, 13 Feb 2001 18:32:21 -0500, Dylan Thurston pisze: > Here's a revision of the numerical prelude. I like it! > > class (Real a, Floating a) => RealFrac a where > > -- lifted directly from Haskell 98 Prelude > > properFraction :: (Integral b) => a -> (b,a) > > truncate, round :: (Integral b) => a -> b > > ceiling, floor :: (Integral b) => a -> b These should be SmallIntegral. > For an instance of RealIntegral a, it is expected that a `quot` b > will round towards minus infinity and a `div` b will round towards 0. The opposite. > > class (Real a) => SmallReal a where > > toRational :: a -> Rational > > class (SmallReal a, RealIntegral a) => SmallIntegral a where > > toInteger :: a -> Integer > > These two classes exist to allow convenient conversions, primarily > between the built-in types. These classes are "small" in the sense > that they can be converted to integers (resp. rationals) without loss > of information. I find names of these classes unclear: Integer is not small integral, it's big integral (as opposed to Int)! :-) Perhaps these classes should be called Real and Integral, with different names for current Real and Integral. But I don't have a concrete proposal. -- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From dpt@math.harvard.edu Wed Feb 14 22:20:11 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Wed, 14 Feb 2001 17:20:11 -0500 Subject: Revised numerical prelude, version 0.02 In-Reply-To: ; from qrczak@knm.org.pl on Wed, Feb 14, 2001 at 09:53:16PM +0000 References: <20010213183221.B973@math.harvard.edu> Message-ID: <20010214172011.A1356@math.harvard.edu> On Wed, Feb 14, 2001 at 09:53:16PM +0000, Marcin 'Qrczak' Kowalczyk wrote: > Tue, 13 Feb 2001 18:32:21 -0500, Dylan Thurston pisze: > > Here's a revision of the numerical prelude. > I like it! I'd like to start using something like this in my programs. 
What are the chances that the usability issues will be addressed? (The main one is all the fromInteger's, I think.)

> > > class (Real a, Floating a) => RealFrac a where
> > >     -- lifted directly from Haskell 98 Prelude
> > >     properFraction :: (Integral b) => a -> (b,a)
> > >     truncate, round :: (Integral b) => a -> b
> > >     ceiling, floor :: (Integral b) => a -> b
> These should be SmallIntegral.

It could be either one, since they produce the type on output (it calls fromInteger). I changed it, on the theory that it might be less confusing. But it should inherit from SmallReal. (Oh, except then RealFloat inherits from SmallReal, which it shouldn't have to. Gah.)

> > For an instance of RealIntegral a, it is expected that a `quot` b
> > will round towards minus infinity and a `div` b will round towards 0.
> The opposite.

Thanks.

> > > class (Real a) => SmallReal a where
> > >     toRational :: a -> Rational
> > > class (SmallReal a, RealIntegral a) => SmallIntegral a where
> > >     toInteger :: a -> Integer
> ...
> I find names of these classes unclear: Integer is not small integral,
> it's big integral (as opposed to Int)! :-)

I agree, but I couldn't think of anything better. I think this end of the hierarchy (that inherits from Real) could use some more work. RealIntegral and SmallIntegral could possibly be merged, except that it violates the principle of not combining semantically disparate operations in a single class.

Best,
Dylan Thurston

From simonpj@microsoft.com Wed Feb 14 22:19:39 2001 From: simonpj@microsoft.com (Simon Peyton-Jones) Date: Wed, 14 Feb 2001 14:19:39 -0800 Subject: Primitive types and Prelude shenanigans Message-ID: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com>

| On Mon, Feb 12, 2001 at 02:38:25PM -0800, William Lee Irwin III wrote:
| > I had in mind looking within the compiler, actually. Where in the
| > compiler? It's a big program, it might take me a while to do an
| > uninformed search.
| > I've peeked around a little bit and not gotten
| > anywhere.
|
| If anyone else is pursuing thoughts along the same lines as I
| am (and I
| have suspicions), TysWiredIn.lhs appears quite relevant to the set of
| primitive data types, though there is no obvious connection to the
| module issue (PrelBase.Bool vs. Foo.Bool). PrelMods.lhs
| appears to shed
| more light on that issue in particular. $TOP/ghc/compiler/prelude/ was
| the gold mine I encountered.

Perhaps I should add something here. I'm very sympathetic to the idea of making it possible to do entirely without the standard Prelude, and to substitute a Prelude of one's own.

The most immediate and painful stumbling block in Haskell 98 is that numeric literals, like 3, turn into (Prelude.fromInt 3), where "Prelude.fromInt" really means "the fromInt from the standard Prelude" regardless of whether the standard Prelude is in scope.

Some while ago I modified GHC to have an extra runtime flag to let you change this behaviour. The effect was that 3 turns into simply (fromInt 3), and the "fromInt" means "whatever fromInt is in scope". The same thing happens for
- numeric patterns
- n+k patterns (the subtraction is whatever is in scope)
- negation (you get whatever "negate" is in scope, not Prelude.negate)
(Of course, this is not Haskell 98 behaviour.)

I think I managed to forget to tell anyone of this flag. And to my surprise I can't find it any more! But several changes I made to make it easy are still there, so I'll reinstate it shortly. That should make it easy to define a new numeric class structure.

So much for numerics. It's much less obvious what to do about booleans. Of course, you can always define your own Bool type. But we're going to have to change the type that if-then-else uses, and presumably guards too. Take if-then-else. Currently it desugars to

    case e of
      True  -> then-expr
      False -> else-expr

but your new boolean might not have two constructors.
So maybe we should simply assume a function

    if :: Bool -> a -> a -> a

and use that for both if-then-else and guards.... I wonder what else? For example, can we assume that

    f x | otherwise = e

is equivalent to

    f x = e

That is, "otherwise" is a guard that is equivalent to the boolean "true" value. ("otherwise" might be bound to something else if you import a non-std Prelude.) If we don't assume this, we may generate rather bizarre code:

    f x y | x==y      = e1
          | otherwise = e2
    ===>
    f x y = if (x==y) e1
               (if otherwise e2 (error "non-exhaustive patterns for f"))

And we'll get warnings from the pattern-match compiler. So perhaps we should guarantee that (if otherwise e1 e2) = e1. You may say that's obvious, but the point is that we have to specify what can be assumed about an alien Prelude.

Matters get even more tricky if you want to define your own lists. There's quite a lot of built-in syntax for lists, and type checking that goes with it. Last time I thought about it, it made my head hurt. Tuples are even worse, because they constitute an infinite family.

The bottom line is this.

  a) It's desirable to be able to substitute a new prelude
  b) It's not obvious exactly what that should mean
  c) And it may not be straightforward to implement

It's always hard to know how to deploy finite design-and-implementation resources. Is this stuff important to a lot of people? If you guys can come up with a precise specification for (b), I'll think hard about how hard (c) really is.

Simon

From qrczak@knm.org.pl Thu Feb 15 00:01:23 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 15 Feb 2001 00:01:23 GMT Subject: Primitive types and Prelude shenanigans References: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com> Message-ID:

Wed, 14 Feb 2001 14:19:39 -0800, Simon Peyton-Jones pisze:

> Some while ago I modified GHC to have an extra runtime flag to let
> you change this behaviour.
> The effect was that 3 turns into simply (fromInt 3), and the
> "fromInt" means "whatever fromInt is in scope".

Wasn't that still fromInteger?

> I think I managed to forget to tell anyone of this flag.

I remember that it has been advertised.

> And to my surprise I can't find it any more!

Me neither. But it's still documented. It must have been lost during some branch merging, I guess.

May I propose an alternative way of specifying an alternative Prelude? Instead of having a command line switch, let's say that 3 always means Prelude.fromInteger 3 - for any *module Prelude* which is in scope! That is, one could say:

    import Prelude ()
    import MyPrelude as Prelude

IMHO it's very intuitive, contrary to the -fno-implicit-prelude flag.

I see only one problem with that: inside the module MyPrelude it is not visible as Prelude yet. But it's easy to fix. Just allow a module to import itself!

    module MyPrelude where
    import Prelude as P
    import MyPrelude as Prelude

Now names qualified with Prelude refer to entities defined in this very module, including the implicit Prelude.fromInteger. I don't know if such a self-import should hide the MyPrelude qualification or not. I guess it should, similarly as an explicit import of Prelude hides its implicit import. That is, each module implicitly imports itself, unless it imports itself explicitly (possibly under a different name) - same as for Prelude.

> So much for numerics. It's much less obvious what to do about booleans.

IMHO a natural generalization (not necessarily useful) is to follow the definition of the 'if' syntactic sugar literally: 'if' expands to the appropriate 'case'. So Prelude.True and Prelude.False must be defined, and they must have the same type (otherwise we get a type error each time we use 'if'). This would allow even

    data FancyBool a = True | False | DontKnow a

The main problem is probably the current implementation: syntactic sugar like 'if' is typechecked prior to desugaring. The same problem is with the 'do' notation.
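[Marcin's literal expansion can be tried out by hand. A minimal sketch, not from the thread: the ifThenElse name and the fall-through for DontKnow are my own choices, writing out explicitly the 'case' that `if c then t else e` would expand to.]

```haskell
-- A hand-written expansion of the idea that 'if' becomes a 'case'
-- over whatever True and False are in scope -- here FancyBool's.
import Prelude hiding (Bool(..))

data FancyBool a = True | False | DontKnow a

-- What "if c then t else e" would expand to under the proposal.
-- The case is not exhaustive without a DontKnow branch; sending
-- DontKnow values to the else-branch is an arbitrary choice here.
ifThenElse :: FancyBool a -> b -> b -> b
ifThenElse c t e = case c of
  True       -> t
  False      -> e
  DontKnow _ -> e
```

As Marcin notes, this typechecks only because all three constructors share one type; the open question is what the desugarer may assume about it.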
But I don't see conceptual dilemmas. > For example, can we assume that > f x | otherwise = e > is equivalent to > f x = e We should not need this information except for performance and warnings. Semantically otherwise is just a normal variable. So it does not matter much. Non-standard 'otherwise' is the same as currently would be foo :: Bool foo = True The compiler could be improved by examining the unfolded definition for checking whether to generate warnings, instead of relying on special treatment of the particular qualified name. -- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From matth@ninenet.com Thu Feb 15 05:27:55 2001 From: matth@ninenet.com (Matt Harden) Date: Wed, 14 Feb 2001 23:27:55 -0600 Subject: Scalable and Continuous References: Message-ID: <3A8B68DB.F52BF668@ninenet.com> Marcin 'Qrczak' Kowalczyk wrote: > > I'm afraid of making too many small classes. But it would perhaps be not > so bad if one could define superclass' methods in subclasses, so that one > can forget about exact structure of classes and treat a bunch of classes > as a single class if he wishes. It would have to be combined with > compiler-inferred warnings about mutual definitions giving bottoms. I totally agree with this. We should be able to split up Num into many superclasses, while still retaining the traditional Num, and not inconveniencing anybody currently using Num. We could even put the superclasses into Library modules, so as not to "pollute" the standard Prelude's namespace. The Prelude could import those modules, then define Num and Num's instances, and only export the Num stuff. We shouldn't have to be afraid of making too many classes, if that more precisely reflects reality. It is only the current language definition that makes us afraid of this. We should be able to work with a class, subclass it, and define instances of it, without needing to know about all of its superclasses. 
This is certainly true in OOP, although I realize of course that OOP classes are not Haskell classes. I also wonder: should one be allowed to create new superclasses of an existing class without updating the original class's definition? Also, should the subclass be able to create new default definitions for functions in the superclasses? I think it should; such defaults would only be legal if the superclass did not define a default for the same function. What do you mean by mutual definitions? Matt Harden From qrczak@knm.org.pl Thu Feb 15 07:44:40 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 15 Feb 2001 07:44:40 GMT Subject: Scalable and Continuous References: <3A8B68DB.F52BF668@ninenet.com> Message-ID: Wed, 14 Feb 2001 23:27:55 -0600, Matt Harden pisze: > I also wonder: should one be allowed to create new superclasses of an > existing class without updating the original class's definition? It would not buy anything. You could not make use of the superclass in default definitions anyway (because they are already written). And what would happen to types which are instances of the subclass but not of the new superclass? > Also, should the subclass be able to create new default definitions > for functions in the superclasses? I hope the system can be designed such that it can. > such defaults would only be legal if the superclass did not define > a default for the same function. Not necessarily. For example (^) in Num (of the revised Prelude) has a default definition, but Fractional gives the opportunity to have better (^) defined in terms of other methods. When a type is an instance of Fractional, it should always have the Fractional's (^) in practice. When not, Num's (^) is always appropriate. I had many cases like this when trying to design a container class system. 
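[Marcin's (^) scenario can be made concrete in hypothetical syntax; the `default` placement below is invented for illustration and is not accepted by any compiler: a subclass ships a better default for a method it inherits from its superclass.]

```haskell
-- HYPOTHETICAL syntax for subclass-supplied defaults; not valid Haskell.
class Num a where
  (^) :: a -> Integer -> a
  x ^ n = powerBySquaring x n       -- generic default, n >= 0 only

class Num a => Fractional a where
  recip :: a -> a
  -- invented: replaces Num's default (^) in any Fractional instance
  -- that does not define (^) itself, extending it to negative exponents
  default x ^ n
    | n < 0     = recip (x ^ negate n)
    | otherwise = powerBySquaring x n
```

In today's Haskell the nearest workaround is repeating the improved definition in every Fractional instance by hand.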
It's typical that a more specialized class has something generic as a superclass, and that a more generic function can easily be expressed in terms of specialized functions (but not vice versa). It follows that many kinds of types have the same written definition for a method, which cannot be put in the default definition in the class because it needs a more specialized context. It would be very convenient to be able to do that, but it cannot be a very clean design: it relies on the absence of an instance, a negative constraint. Hopefully it will be OK, since it's determined once for a type - it's not a systematic way of parametrizing code over negatively constrained types, which would break the principle that additional instances are harmless to old code.

This design does have some problems. For example, what if there are two subclasses which define the default method in incompatible ways? We should design the system such that adding a non-conflicting instance does not break previously written code. It must be resolved once per module, probably complaining about the ambiguity (ugh!), but once the instance is generated, it's cast in stone for this type.

> What do you mean by mutual definitions?

Definitions of methods in terms of each other. Suppose there is a class having only (-) and negate, with default definitions:

    a - b    = a + negate b
    negate b = zero - b

When we make an instance of its subclass but don't make an explicit instance of this class, and don't write (-) or negate explicitly, it would be dangerous if the compiler silently included the definitions generated by the above, because both are functions which always return bottoms.

The best solution I can think of is to let the compiler deduce that these default definitions lead to a useless instance, and give a warning when both are instantiated from the default. It cannot be an error because there is no formal way we can distinguish bad mutual recursion from good mutual recursion.
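[The hazard Marcin describes is reproducible with today's default methods. A minimal sketch, with all names invented: a class whose two methods default to each other, and an instance that overrides neither, which typechecks silently.]

```haskell
-- Both defaults below are individually reasonable, but an instance
-- that overrides neither gets a pair of mutually recursive bottoms.
class MyNum a where
  zero  :: a
  plus  :: a -> a -> a
  minus :: a -> a -> a
  neg   :: a -> a
  minus a b = plus a (neg b)   -- default: a - b = a + negate b
  neg b     = minus zero b     -- default: negate b = zero - b

data U = U

instance MyNum U where
  zero = U
  plus U U = U
  -- neither minus nor neg is given: both defaults are used,
  -- and evaluating (neg U) now recurses forever
```

Calling `neg U` or `minus U U` loops; defining either method in the instance breaks the cycle, which is exactly the property Marcin wants the compiler to check for and warn about.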
The validity of the code cannot depend on heuristics, but warnings can. There are already warnings when a method without default is not defined explicitly (although people say it should be an error; it is diagnosable). -- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From koen@cs.chalmers.se Thu Feb 15 09:07:41 2001 From: koen@cs.chalmers.se (Koen Claessen) Date: Thu, 15 Feb 2001 10:07:41 +0100 (MET) Subject: Primitive types and Prelude shenanigans In-Reply-To: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com> Message-ID: Simon Peyton-Jones wrote: | I'm very sympathetic to the idea of making it possible | to do entirely without the standard Prelude, and to | substitute a Prelude of one's own. I think this is a very good idea. | Some while ago I modified GHC to have an extra runtime | flag to let you change this behaviour. The effect was | that 3 turns into simply (fromInt 3), and the | "fromInt" means "whatever fromInt is in scope". Hmmm... so how about: foo fromInt = 3 Would this translate to: foo f = f 3 ? How about alpha renaming? | [...] guarantee that (if otherwise e1 e2) = e1. I do not understand this. "otherwise" is simply a function name, that can be used, redefined or hidden, by anyone. It is not used in any desugaring. Why change that behaviour? | It's always hard to know how to deploy finite | design-and-implementation resources. Is this stuff | important to a lot of people? I think it is important to define a minimalistic Prelude, so that people at least know what is standard and what is not. Try to put everything else in modules. /Koen. 
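[The scoped-resolution behaviour under discussion eventually shipped in GHC as the RebindableSyntax extension, which resolves fromInteger by ordinary scoping rules. A minimal sketch against that extension; the Tagged type is invented for illustration. Note that no literal of any other numeric type can appear in the module, since every integer literal now goes through this fromInteger.]

```haskell
{-# LANGUAGE RebindableSyntax #-}
-- With RebindableSyntax, the literal 42 below elaborates to
-- (fromInteger 42) using the fromInteger defined in this module,
-- not the one from the standard Prelude.
import Prelude hiding (fromInteger)

data Tagged = Tagged Integer deriving Show

fromInteger :: Integer -> Tagged
fromInteger = Tagged

sample :: Tagged
sample = 42   -- means: fromInteger 42, i.e. Tagged 42
```

The same extension also rebinds negation, do-notation, and if-then-else to whatever names are in scope, which matches the list of cases Simon gives above for his flag.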
-- Koen Claessen http://www.cs.chalmers.se/~koen phone:+46-31-772 5424 mailto:koen@cs.chalmers.se ----------------------------------------------------- Chalmers University of Technology, Gothenburg, Sweden From Malcolm.Wallace@cs.york.ac.uk Thu Feb 15 10:50:01 2001 From: Malcolm.Wallace@cs.york.ac.uk (Malcolm Wallace) Date: Thu, 15 Feb 2001 10:50:01 +0000 Subject: Revised numerical prelude, version 0.02 In-Reply-To: <20010214172011.A1356@math.harvard.edu> Message-ID: Dylan Thurston writes: > I'd like to start using something like this in my programs. What are > the chances that the usability issues will be addressed? (The main > one is all the fromInteger's, I think.) Have you tried using your alternative Prelude with nhc98? Offhand, I couldn't be certain it would work, but I think nhc98 probably makes fewer assumptions about the Prelude than ghc. You will need something like import qualified Prelude as NotUsed import Dylan'sPrelude as Prelude in any module that wants to use your prelude. IIRC, nhc98 treats 'fromInteger' exactly as the qualified name 'Prelude.fromInteger', so in theory it should simply pick up your replacement definitions. (In practice, it might actually do the resolution of module 'as' renamings a little too early or late, but the easiest way to find out for certain is to try it.) Regards, Malcolm From simonmar@microsoft.com Thu Feb 15 10:54:24 2001 From: simonmar@microsoft.com (Simon Marlow) Date: Thu, 15 Feb 2001 10:54:24 -0000 Subject: Primitive types and Prelude shenanigans Message-ID: <9584A4A864BD8548932F2F88EB30D1C61157DF@TVP-MSG-01.europe.corp.microsoft.com> > (Of course, this is not Haskell 98 behaviour.) I think I=20 > managed to forget > to tell anyone of this flag. And to my surprise I can't find=20 > it any more! It's lumped in with -fno-implicit-prelude, but the extra functionality isn't supported in 4.08.2 (but hopefully will be in 5.00). 
Cheers, Simon From akenn@microsoft.com Thu Feb 15 15:18:14 2001 From: akenn@microsoft.com (Andrew Kennedy) Date: Thu, 15 Feb 2001 07:18:14 -0800 Subject: Typing units correctly Message-ID: <0C682B70CE37BC4EADED9D375809768A56CD04@red-msg-04.redmond.corp.microsoft.com> First, I think there's been a misunderstanding. I was referring to the poster ("Christoph Grein") of http://www.adapower.com/lang/dimension.html when I said that "he doesn't know what he's talking about". I've not been following the haskell cafe thread very closely, but from what I've seen your (Dylan's) posts are well-informed. Sorry if there was any confusion. As you suspect, negative exponents are necessary. How else would you give a polymorphic type to \ x -> 1.0/x ? However, because of the equivalence on type schemes that's not just alpha-conversion, many types can be rewritten to avoid negative exponents, though I don't think that this is particularly desirable. For example the type of division can be written / :: Real (u.v) -> Real u -> Real v or / :: Real u -> Real v -> Real (u.v^-1) where u and v are "unit" variables. In fact, I have since solved the simplification problem mentioned in my ESOP paper, and it would assign the second of these two (equivalent) types, as it works from left to right in the type. I guess it does boil down to choosing a nice basis; more precisely it corresponds to the Hermite Normal Form from the theory of integer matrices (more generally: modules over commutative rings). For more detail see my thesis, available from http://research.microsoft.com/users/akenn/papers/index.html By the way, type system pathologists might be interested to know that the algorithm described in ESOP'94 doesn't actually work without an additional step in the rule for let (he says shamefacedly). Again all this is described in my thesis - but for a clearer explanation of this issue you might want to take a look at my technical report "Type Inference and Equational Theories". 
Which brings me to your last point: some more general system that subsumes the rather specific dimension/unit types system. There's been some nice work by Martin Sulzmann et al on constraint based systems which can express dimensions. See http://www.cs.mu.oz.au/~sulzmann/ for more details. To my taste, though, unless you want to express all sorts of other stuff in the type system, the equational-unification-based approach that I described in ESOP is simpler, even with the fix for let. I've been promising for years that I'd write up a journal-quality (and correct!) version of my ESOP paper including all the relevant material from my thesis. As I have now gone so far as to promise my boss that I'll do such a thing, perhaps it will happen :-) - Andrew. > -----Original Message----- > From: Dylan Thurston [mailto:dpt@math.harvard.edu] > Sent: Wednesday, February 14, 2001 7:15 PM > To: Andrew Kennedy; haskell-cafe@haskell.org > Subject: Re: Typing units correctly > > > On Wed, Feb 14, 2001 at 08:10:39AM -0800, Andrew Kennedy wrote: > > To be frank, the poster that you cite doesn't know what he's talking > > about. He makes two elementary mistakes: > > Quite right, I didn't know what I was talking about. I still don't. > But I do hope to learn. > > > (a) attempting to encode dimension/unit checking in an existing type > > system; > > We're probably thinking about different contexts, but please see the > attached file (below) for a partial solution. I used Hugs' dependent > types to get type inference. This makes me uneasy, because I know that > Hugs' instance checking is, in general, not decidable; I don't know if > the fragment I use is decidable. You can remove the dependent types, > but then you need to type all the results, etc., explicitly. 
This > version doesn't handle negative exponents; perhaps what you say here: > > > As others have pointed out, (a) doesn't work because the algebra of > > units of measure is not free - units form an Abelian group (if > > integer exponents are used) or a vector space over the rationals (if > > rational exponents are used) and so it's not possible to do > > unit-checking by equality-on-syntax or unit-inference by ordinary > > syntactic unification. ... > > is that I won't be able to do it? > > Note that I didn't write it out, but this version can accomodate > multiple units of measure. > > > (b) not appreciating the need for parametric polymorphism over > > dimensions/units. > > ... Furthermore, parametric polymorphism is > > essential for code reuse - one can't even write a generic squaring > > function (say) without it. > > I'm not sure what you're getting at here; I can easily write a > squaring function in the version I wrote. It uses ad-hoc polymorphism > rather than parametric polymorphism. It also gives much uglier > types; e.g., the example from your paper > f (x,y,z) = x*x + y*y*y + z*z*z*z*z > gets some horribly ugly context: > f :: (Additive a, Mul b c d, Mul c c e, Mul e c b, Mul d c a, > Mul f f a, Mul g h a, Mul h h g) => (f,h,c) -> a > > Not that I recommend this solution, mind you. I think language > support would be much better. But specific language support for units > rubs me the wrong way: I'd much rather see a general notion of types > with integer parameters, which you're allowed to add. This would be > useful in any number of places. Is this what you're suggesting below? > > > To turn to the original question, I did once give a moment's thought > > to the combination of type classes and types for units-of-measure. I > > don't think there's any particular problem: units (or dimensions) > > are a new "sort" or "kind", just as "row" is in various proposals > > for record polymorphism in Haskell. 
> > As long as this is tracked through the type system, everything
> > should work out fine. Of course, I may have missed something, in
> > which case I'd be very interested to know about it.
>
> Incidentally, I went and read your paper just now. Very interesting.
> You mentioned one problem came up that sounds interesting: to give a
> nice member of the equivalence class of the principal type. This
> boils down to picking a nice basis for a free Abelian group with a
> few distinguished elements. Has any progress been made on that?
>
> Best,
> Dylan Thurston

From kort@wins.uva.nl Thu Feb 15 16:16:07 2001 From: kort@wins.uva.nl (Jan Kort) Date: Thu, 15 Feb 2001 17:16:07 +0100 Subject: framework for composing monads? References: Message-ID: <3A8C00C7.21F22C61@wins.uva.nl>

Andy Gill's Monad Template Library is good for that, but the link from the Haskell library page is broken: http://www.cse.ogi.edu/~andy/monads/doc.htm

Jan

From zulf_jafferi@hotmail.com Thu Feb 15 20:55:51 2001 From: zulf_jafferi@hotmail.com (zulf jafferi) Date: Thu, 15 Feb 2001 20:55:51 -0000 Subject: Downloading Hugs Message-ID:

hi, I tried to download Hugs 98. After downloading it, when I try to click on the Hugs icon it gives me an error saying COULD NOT LOAD PRELUDE. I am using Windows 2000. I would be much obliged if you could help me solve the problem.

cheers!!

_________________________________________________________________________
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.

From konsu@microsoft.com Fri Feb 16 01:11:20 2001 From: konsu@microsoft.com (Konst Sushenko) Date: Thu, 15 Feb 2001 17:11:20 -0800 Subject: need help w/ monad comprehension syntax Message-ID: <1E27BBCDDE50914C99517B4D7EC5D5251A33FF@RED-MSG-13.redmond.corp.microsoft.com>
hello,

i am having trouble getting my program below to work. i think i implemented the monad methods correctly, but the function 'g' does not type as i would expect. Hugs thinks that it is just a list (if i remove the explicit typing). i want it to be functionally identical to the function 'h'.

what am i missing?

thanks
konst

> newtype State s a = ST (s -> (a,s))
>
> unST (ST m) = m
>
> instance Functor (State s) where
>     fmap f m = ST (\s -> let (a,s') = unST m s in (f a, s'))
>
> instance Monad (State s) where
>     return a = ST (\s -> (a,s))
>     m >>= f  = ST (\s -> let (a,s') = unST m s in unST (f a) s')
>
> --g :: State String Char
> g = [ x | x <- return 'a' ]
>
> h :: State String Char
> h = return 'a'
From Tom.Pledger@peace.com Fri Feb 16 01:22:28 2001 From: Tom.Pledger@peace.com (Tom Pledger) Date: Fri, 16 Feb 2001 14:22:28 +1300 Subject: need help w/ monad comprehension syntax In-Reply-To: <1E27BBCDDE50914C99517B4D7EC5D5251A33FF@RED-MSG-13.redmond.corp.microsoft.com> References: <1E27BBCDDE50914C99517B4D7EC5D5251A33FF@RED-MSG-13.redmond.corp.microsoft.com> Message-ID: <14988.32980.809202.370032@waytogo.peace.co.nz>

Konst Sushenko writes:
 | what am i missing?
 :
 | > --g :: State String Char
 | > g = [ x | x <- return 'a' ]

Hi. The comprehension syntax used to be for monads in general (in Haskell 1.4-ish), but is now (Haskell 98) back to being specific to lists. Does it help if you use do-notation instead?

Regards, Tom

From konsu@microsoft.com Fri Feb 16 01:43:04 2001 From: konsu@microsoft.com (Konst Sushenko) Date: Thu, 15 Feb 2001 17:43:04 -0800 Subject: need help w/ monad comprehension syntax Message-ID: <1E27BBCDDE50914C99517B4D7EC5D52516AC4F@RED-MSG-13.redmond.corp.microsoft.com>

thanks, did not know that. the articles that i read are outdated in that respect... using the "do" notation is just fine. with the list notation not working i thought that i misunderstood something about monads. ;-)

konst

-----Original Message----- From: Tom Pledger [mailto:Tom.Pledger@peace.com] Sent: Thursday, February 15, 2001 5:22 PM To: haskell-cafe@haskell.org Subject: need help w/ monad comprehension syntax

Konst Sushenko writes:
 | what am i missing?
 :
 | > --g :: State String Char
 | > g = [ x | x <- return 'a' ]

Hi. The comprehension syntax used to be for monads in general (in Haskell 1.4-ish), but is now (Haskell 98) back to being specific to lists. Does it help if you use do-notation instead?
Regards, Tom _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe From wli@holomorphy.com Fri Feb 16 04:56:20 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Thu, 15 Feb 2001 20:56:20 -0800 Subject: Primitive types and Prelude shenanigans In-Reply-To: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com>; from simonpj@microsoft.com on Wed, Feb 14, 2001 at 02:19:39PM -0800 References: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com> Message-ID: <20010215205620.D641@holomorphy.com> On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote: > The most immediate and painful stumbling block in Haskell 98 is that > numeric literals, like 3, turn into (Prelude.fromInt 3), where > "Prelude.fromInt" really means "the fromInt from the standard Prelude" > regardless of whether the standard Prelude is imported scope. > Some while ago I modified GHC to have an extra runtime flag to let you > change this behaviour. The effect was that 3 turns into simply > (fromInt 3), and the "fromInt" means "whatever fromInt is in scope". > The same thing happens for > - numeric patterns > - n+k patterns (the subtraction is whatever is in scope) > - negation (you get whatever "negate" is in scope, not Prelude.negate) For the idea for numeric literals I had in mind (which is so radical I don't intend to seek much, if any help in implementing it other than general information), even this is insufficient. 
Some analysis of the value of the literal would need to be incorporated so that something like the following happens:

    literal "0"    gets mapped to zero :: AdditiveMonoid t => t
    literal "1"    gets mapped to one  :: MultiplicativeMonoid t => t
    literal "5"    gets mapped to (fromPositiveInteger 5)
    literal "-9"   gets mapped to (fromNonZeroInteger -9)
    literal "5.0"  gets mapped to (fromPositiveReal 5.0)
    literal "-2.0" gets mapped to (fromNonZeroReal -2.0)
    literal "0.0"  gets mapped to (fromReal 0.0)

etc. A single fromInteger or fromIntegral won't suffice here. The motivation behind this is so that some fairly typical mathematical objects (the multiplicative monoid of nonzero integers, etc.) can be directly represented by numerical literals (and primitive types). I don't for a minute think this is suitable for general use, but I regard it as an interesting (to me) experiment.

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
> (Of course, this is not Haskell 98 behaviour.) I think I managed to
> forget to tell anyone of this flag. And to my surprise I can't find
> it any more! But several changes I made to make it easy are still
> there, so I'll reinstate it shortly. That should make it easy to
> define a new numeric class structure.

It certainly can't hurt; even if the code doesn't help directly with my dastardly plans, examining how the handling of overloaded literals differs will help me understand what's going on.

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
> So much for numerics. It's much less obvious what to do about booleans.
> Of course, you can always define your own Bool type. But we're going to
> have to change the type that if-then-else uses, and presumably guards too.
> Take if-then-else. Currently it desugars to
>     case e of
>       True  -> then-expr
>       False -> else-expr
> but your new boolean might not have two constructors.
So maybe we should > simply assume a function > if :: Bool -> a -> a -> a > and use that for both if-then-else and guards.... I wonder what else? I had in mind that there might be a class of suitable logical values corresponding to the set of all types suitable for use as such. As far as I know, the only real restriction on subobject classifiers for logical values is that it be a pointed set where the point represents truth. Even if it's not the most general condition, it's unlikely much can be done computationally without that much. So since we must be able to compare logical values to see if they're that distinguished truth value: \begin{pseudocode} class Eq lv => LogicalValue lv where definitelyTrue :: lv \end{pseudocode} From here, ifThenElse might be something like: \begin{morepseudocode} ifThenElse :: LogicalValue lv => lv -> a -> a -> a ifThenElse isTrue thenValue elseValue = case isTrue == definitelyTrue of BooleanTrue -> thenValue _ -> elseValue \end{morepseudocode} or something on that order. The if/then/else syntax is really just a combinator like this with a mixfix syntax, and case is the primitive, so quite a bit of flexibility is possible given either some "hook" the mixfix operator will use or perhaps even means for defining arbitrary mixfix operators. (Of course, a hook is far easier.) The gains from something like this are questionable, but it's not about gaining anything for certain, is it? Handling weird logics could be fun. On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote: [interesting example using otherwise in a pattern guard elided] > And we'll get warnings from the pattern-match compiler. So perhaps we > should guarantee that (if otherwise e1 e2) = e1. I'm with you on this, things would probably be too weird otherwise. On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote: > You may say that's obvious, but the point is that we have to specify > what can be assumed about an alien Prelude. 
There is probably a certain amount of generality that would be desirable to handle, say, Dylan Thurston's prelude vs. the standard prelude. I'm willing to accept compiler hacking as part of ideas as radical as mine. Some reasonable assumptions:

 (1) lists are largely untouchable
 (2) numeric monotypes present in the std. prelude will also be present
 (3) tuples probably won't change
 (4) I/O libs will probably not be toyed with much (monads are good!)
 (5) logical values will either be a monotype or a pointed set class
     (may be too much to support more than a monotype)
 (6) relations (==), (<), etc. will get instances on primitive monotypes
 (7) Read and Show probably won't change much
 (8) Aside from perhaps Arrows, monads probably won't change much
     (Arrows should be able to provide monad compatibility)
 (9) probably no one will try to alter application syntax to operate
     on things like instances of class Applicable
 (10) the vast majority of the prelude changes desirable to support
      will have to do with the numeric hierarchy

These are perhaps not a terribly useful set of assumptions.

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
> Matters get even more tricky if you want to define your own lists.
> There's quite a lot of built-in syntax for lists, and type checking
> that goes with it. Last time I thought about it, it made my head
> hurt. Tuples are even worse, because they constitute an infinite family.

The only ideas I have about lists are maybe to reinstate monad comprehensions. As far as tuples go, perhaps a derived or automagically defined Functor (yes, I know it isn't derivable now) instance and other useful instances (e.g. AdditiveMonoid, PointedSet, other instances where distinguished elements etc.
cannot be written for the infinite number of instances required) would have interesting consequences if enough were cooked up to bootstrap tuples in a manner polymorphic in the dimension (fillTuple :: Tuple t => (Natural -> a) -> t a ?, existential tuples?) Without polytypism or some other mechanism for defining instances on these infinite families of types, achieving the same effect(s) would be difficult outside of doing it magically in the compiler. Neither looks easy to pull off in any case, so I'm wary of these ideas. On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote: > The bottom line is this. > a) It's desirable to be able to substitute a new prelude > b) It's not obvious exactly what that should mean > c) And it may not be straightforward to implement > It's always hard to know how to deploy finite design-and-implementation > resources. Is this stuff important to a lot of people? > If you guys can come up with a precise specification for (b), I'll > think hard about how hard (c) really is. I think Dylan Thurston's proposal is probably the best starting point for something that should really get support. If other alternatives in the same vein start going around, I'd think supporting them would also be good, but much of what I have in mind is probably beyond reasonable expectations, and will probably not get broadly used. 
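[One instance of the per-arity problem Bill raises can be written out concretely; the Pair wrapper below is my own illustration, not from the thread. What can be written today is a Functor over both components of a pair; what cannot be written, without polytypism or compiler magic, is this instance once for every tuple arity.]

```haskell
-- One member of the "infinite family": a Functor that maps over
-- both components of a pair. Each tuple arity would need its own
-- such wrapper and instance; nothing here is polymorphic in the
-- dimension of the tuple.
newtype Pair a = Pair (a, a) deriving Show

instance Functor Pair where
  fmap f (Pair (x, y)) = Pair (f x, f y)
```

(The standard `Functor ((,) a)` instance maps over only the second component, which is why a wrapper is needed even for pairs.)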
Cheers, Bill From fjh@cs.mu.oz.au Fri Feb 16 06:14:14 2001 From: fjh@cs.mu.oz.au (Fergus Henderson) Date: Fri, 16 Feb 2001 17:14:14 +1100 Subject: Primitive types and Prelude shenanigans In-Reply-To: <20010215205620.D641@holomorphy.com> References: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com> <20010215205620.D641@holomorphy.com> Message-ID: <20010216171414.A10199@hg.cs.mu.oz.au> On 15-Feb-2001, William Lee Irwin III wrote: > Some reasonable assumptions: I disagree about the reasonableness of many of your assumptions ;-) > (1) lists are largely untouchable I want to be able to write a Prelude that has lists as a strict data type, rather than a lazy data type. > (4) I/O libs will probably not be toyed with much (monads are good!) > (5) logical values will either be a monotype or a pointed set class > (may be too much to support more than a monotype) I think that replacing the I/O libs is likely to be a much more useful and realistic proposition than replacing the boolean type. > (9) probably no one will try to alter application syntax to operate > on things like instances of class Applicable That's a separate issue; you're talking here about a language extension, not just a new Prelude. > (10) the vast majority of the prelude changes desirable to support > will have to do with the numeric hierarchy s/numeric hierarchy/class hierarchy/ -- Fergus Henderson | "I have always known that the pursuit | of excellence is a lethal habit" WWW: | -- the last words of T. S. Garp. 
From ketil@ii.uib.no Fri Feb 16 06:21:46 2001 From: ketil@ii.uib.no (Ketil Malde) Date: 16 Feb 2001 07:21:46 +0100 Subject: Primitive types and Prelude shenanigans In-Reply-To: William Lee Irwin III's message of "Thu, 15 Feb 2001 20:56:20 -0800" References: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com> <20010215205620.D641@holomorphy.com> Message-ID: William Lee Irwin III writes: > Some analysis of the value of the literal would need to be > incorporated so that something like the following happens: > literal "0" gets mapped to zero :: AdditiveMonoid t => t > literal "1" gets mapped to one :: MultiplicativeMonoid t => t Indeed. Is it a reasonable assumption that all values of literal "0" are intended to be the additive identity element? How about "1", might we not have it as a successor element in a group, where multiplication isn't defined? I guess other behaviour (e.g. using implicit fromInteger) assumes even more about the classes that can be represented by literal numbers, so it appears this would be an improvement, if it is at all workable. > literal "5" gets mapped to (fromPositiveInteger 5) Is something like (fromInteger 5) *> one (where *> is Module scalar multiplication from the left)? Could we avoid having lots and lots of from* functions? I guess having to declare any datatype as Module is as bad as the explicit conversion functions... Anyway, I like it, especially the zero and one case. > I think Dylan Thurston's proposal is probably the best starting point > for something that should really get support. Indeed. How far are we from being able to import it, e.g. with "import DTlude as Prelude" or whatever mechanism, so that we can start to play with it, and see how it works out in practice? 
-kzm -- If I haven't seen further, it is by standing in the footprints of giants From qrczak@knm.org.pl Fri Feb 16 08:09:58 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 16 Feb 2001 08:09:58 GMT Subject: Primitive types and Prelude shenanigans References: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com> <20010215205620.D641@holomorphy.com> Message-ID: Thu, 15 Feb 2001 20:56:20 -0800, William Lee Irwin III pisze: > literal "0" gets mapped to zero :: AdditiveMonoid t => t > literal "1" gets mapped to one :: MultiplicativeMonoid t => t > literal "5" gets mapped to (fromPositiveInteger 5) > literal "-9" gets mapped to (fromNonZeroInteger -9) Actually -9 gets mapped to negate (fromInteger 9). At least in theory, because in ghc it's fromInteger (-9) AFAIK. > The motivation behind this is so that some fairly typical > mathematical objects (multiplicative monoid of nonzero integers, > etc.) can be directly represented by numerical literals (and > primitive types). I am definitely against it, especially the zero and one case. When one can write 1, he should be able to write 2 too obtaining the same type. It's not hard to write zero and one. What next: 0 for nullPtr and []? Moreover, the situation where each integer literal means applied fromInteger is simple to understand, remember and use. I don't want to define a bunch of operations for the same thing. Please keep Prelude's rules simple. 
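Marcin's point, that every integer literal already means an applied fromInteger, can be seen in Haskell 98 itself. The Mod7 type below is a made-up example: once fromInteger is given in its Num instance, every literal at that type goes through it.

```haskell
-- Integers modulo 7; a literal like 10 at this type elaborates to
-- (fromInteger 10), i.e. Mod7 3.
newtype Mod7 = Mod7 Integer deriving (Eq, Show)

instance Num Mod7 where
  Mod7 a + Mod7 b = Mod7 ((a + b) `mod` 7)
  Mod7 a * Mod7 b = Mod7 ((a * b) `mod` 7)
  negate (Mod7 a) = Mod7 (negate a `mod` 7)
  abs             = id
  signum (Mod7 a) = Mod7 (signum a)
  fromInteger n   = Mod7 (n `mod` 7)

-- Both literals below are fromInteger applications at type Mod7:
example :: Mod7
example = 10 + 4   -- Mod7 0
```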
-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From wli@holomorphy.com Fri Feb 16 08:26:05 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Fri, 16 Feb 2001 00:26:05 -0800 Subject: Primitive types and Prelude shenanigans In-Reply-To: <20010216171414.A10199@hg.cs.mu.oz.au>; from fjh@cs.mu.oz.au on Fri, Feb 16, 2001 at 05:14:14PM +1100 References: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com> <20010215205620.D641@holomorphy.com> <20010216171414.A10199@hg.cs.mu.oz.au> Message-ID: <20010216002605.E641@holomorphy.com> On Fri, Feb 16, 2001 at 05:14:14PM +1100, Fergus Henderson wrote: > I disagree about the reasonableness of many of your assumptions ;-) Great! =) On 15-Feb-2001, William Lee Irwin III wrote: >> (1) lists are largely untouchable On Fri, Feb 16, 2001 at 05:14:14PM +1100, Fergus Henderson wrote: > I want to be able to write a Prelude that has lists as a strict data > type, rather than a lazy data type. Hmm, sounds like infinite lists might have trouble there, but I hereby cast out that assumption. On 15-Feb-2001, William Lee Irwin III wrote: >> (4) I/O libs will probably not be toyed with much (monads are good!) >> (5) logical values will either be a monotype or a pointed set class >> (may be too much to support more than a monotype) On Fri, Feb 16, 2001 at 05:14:14PM +1100, Fergus Henderson wrote: > I think that replacing the I/O libs is likely to be a much more > useful and realistic proposition than replacing the boolean type. I won't pretend for an instant that replacing the Boolean type will be remotely useful to more than a handful of people. 
On 15-Feb-2001, William Lee Irwin III wrote: >> (9) probably no one will try to alter application syntax to operate >> on things like instances of class Applicable On Fri, Feb 16, 2001 at 05:14:14PM +1100, Fergus Henderson wrote: > That's a separate issue; you're talking here about a language > extension, not just a new Prelude. I'm not sure one would have to go that far (though I'm willing to be convinced), but either way, we need not concern ourselves. On 15-Feb-2001, William Lee Irwin III wrote: >> (10) the vast majority of the prelude changes desirable to support >> will have to do with the numeric hierarchy On Fri, Feb 16, 2001 at 05:14:14PM +1100, Fergus Henderson wrote: > s/numeric hierarchy/class hierarchy/ I suppose I was trying to narrow it down as far as possible, but if people really are touching every place in the class hierarchy, then I can't do better than that. Cheers, Bill From wli@holomorphy.com Fri Feb 16 09:17:38 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Fri, 16 Feb 2001 01:17:38 -0800 Subject: Primitive types and Prelude shenanigans In-Reply-To: ; from qrczak@knm.org.pl on Fri, Feb 16, 2001 at 08:09:58AM +0000 References: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com> <20010215205620.D641@holomorphy.com> Message-ID: <20010216011738.F641@holomorphy.com> William Lee Irwin III pisze: >> literal "0" gets mapped to zero :: AdditiveMonoid t => t >> literal "1" gets mapped to one :: MultiplicativeMonoid t => t >> literal "5" gets mapped to (fromPositiveInteger 5) >> literal "-9" gets mapped to (fromNonZeroInteger -9) On Fri, Feb 16, 2001 at 08:09:58AM +0000, Marcin 'Qrczak' Kowalczyk wrote: > Actually -9 gets mapped to negate (fromInteger 9). At least in theory, > because in ghc it's fromInteger (-9) AFAIK. 
Sorry I was unclear about this, I had in mind that in the scheme I was going to implement, the sign of the literal value would be discerned and negative literals carried to fromNonZeroInteger (-9) etc. William Lee Irwin III pisze: >> The motivation behind this is so that some fairly typical >> mathematical objects (multiplicative monoid of nonzero integers, >> etc.) can be directly represented by numerical literals (and >> primitive types). On Fri, Feb 16, 2001 at 08:09:58AM +0000, Marcin 'Qrczak' Kowalczyk wrote: > I am definitely against it, especially the zero and one case. > When one can write 1, he should be able to write 2 too obtaining the > same type. It's not hard to write zero and one. The real hope here is to get the distinct zero and one for things that are already traditionally written that way, like the multiplicative monoid of nonzero integers or the additive monoid of natural numbers. Another implication I view as beneficial is that the 0 (and 1) symbols can be used in vector (and perhaps matrix) contexts without the possibility that other integer literals might be used inadvertently. On Fri, Feb 16, 2001 at 08:09:58AM +0000, Marcin 'Qrczak' Kowalczyk wrote: > What next: 0 for nullPtr and []? It's probably good to point out that this scheme is "permissive" enough, or more specifically, allows enough fine-grained expressiveness to allow the symbol to be overloaded for address types on which arithmetic is permitted, and lists under their natural monoid structure, which I agree is aesthetically displeasing at the very least, and probably undesirable to allow by default. On Fri, Feb 16, 2001 at 08:09:58AM +0000, Marcin 'Qrczak' Kowalczyk wrote: > Moreover, the situation where each integer literal means applied > fromInteger is simple to understand, remember and use. I don't want to > define a bunch of operations for the same thing. Please keep Prelude's > rules simple. 
I don't think this sort of scheme is appropriate for a standard Prelude either, though I do think it's interesting to me, and perhaps others. I don't mean to give the impression that I'm proposing this for inclusion in any sort of standard Prelude. It's a more radical point in the design space that I am personally interested in exploring both to discover its implications for programming (what's really awkward, what things become convenient, etc.), and to acquaint myself with the aspects of the compiler pertinent to the handling of primitive types. Cheers, Bill From karczma@info.unicaen.fr Fri Feb 16 14:24:50 2001 From: karczma@info.unicaen.fr (Jerzy Karczmarczuk) Date: Fri, 16 Feb 2001 14:24:50 +0000 Subject: Just for your fun and horror References: <20010213183221.B973@math.harvard.edu> Message-ID: <3A8D3832.B5045915@info.unicaen.fr> Perhaps I mentioned that I use Haskell to teach compilation, since I think that functional structures are good not only for parsers, but for a legible semantics for virtual machines, for the code generators, etc. The main assignment was to write a syntactic converter from a Haskell-like language to Scheme, and the exam included such exercises as Find the type of fm in fm _ z [] = return z fm g z (a:aq) = g z a >>= \y->fm g y aq When I started correcting the exam, I thought I would jump out of the window. First 30 copies: The type of fm is ff -> b -> [c] -> b (with an appropriate constraint for the functional type ff). The result had for them the same type as the type of z. My inquiry proved beyond any doubt that my students are so conditioned by "C", that despite the fact that we worked with monads for several weeks, they *cannot imagine* that "return z" may mean something different than the value of "z". Any suggestions? [Yes, I have one! Stop teaching, find a more appropriate job, e.g., cultivate genetically modified, mutant tomatoes.] Jerzy Karczmarczuk "C"-aen, Fran-"C"-e. 
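For readers working the exercise themselves, a sketch of the intended answer: 'return z' is a computation of type m b, not a bare b, so fm's result is monadic. In fact fm coincides with the standard library's foldM.

```haskell
import Control.Monad (foldM)

-- The exam's fm, with its most general type spelled out:
fm :: Monad m => (b -> a -> m b) -> b -> [a] -> m b
fm _ z []     = return z
fm g z (a:aq) = g z a >>= \y -> fm g y aq

-- 'return z' wraps z in the ambient monad, so the result type is m b,
-- not b: the point the first 30 copies missed.
sumMaybe :: Maybe Int
sumMaybe = fm (\acc x -> Just (acc + x)) 0 [1, 2, 3]   -- Just 6
```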
From jans@numeric-quest.com Fri Feb 16 09:17:36 2001 From: jans@numeric-quest.com (Jan Skibinski) Date: Fri, 16 Feb 2001 04:17:36 -0500 (EST) Subject: Just for your fun and horror In-Reply-To: <3A8D3832.B5045915@info.unicaen.fr> Message-ID: On Fri, 16 Feb 2001, Jerzy Karczmarczuk wrote: > My inquiry proved beyond any doubt that my students are so > conditioned by "C", that despite the fact that we worked with > monads for several weeks, they *cannot imagine* that > "return z" > may mean something different than the value of "z". > > Any suggestions? Perhaps the name "return" in the monadic definitions could be replaced by something more suggestive of an action? How about running a little experiment next time, with a new name, to see whether this would remove this unfortunate association with C-like "return" in the minds of your students? Jan From gruenbacher-lists@geoinfo.tuwien.ac.at Fri Feb 16 14:57:34 2001 From: gruenbacher-lists@geoinfo.tuwien.ac.at (Andreas Gruenbacher) Date: Fri, 16 Feb 2001 15:57:34 +0100 (CET) Subject: Just for your fun and horror In-Reply-To: <3A8D3832.B5045915@info.unicaen.fr> Message-ID: On Fri, 16 Feb 2001, Jerzy Karczmarczuk wrote: > [..] > > fm _ z [] = return z > fm g z (a:aq) = g z a >>= \y->fm g y aq > > When I started correcting the exam, I thought I would jump > out of the window. First 30 copies: The type of fm is > > ff -> b -> [c] -> b > > (with an appropriate constraint for the functional type ff). > The result had for them the same type as the type of z. > > My inquiry proved beyond any doubt that my students are so > conditioned by "C", that despite the fact that we worked with > monads for several weeks, they *cannot imagine* that > "return z" > may mean something different than the value of "z". > > Any suggestions? Not that it would help you much, but I also think that return is a rather confusing name for what might otherwise be called liftM0. Regards, Andreas. 
------------------------------------------------------------------------ Andreas Gruenbacher gruenbacher@geoinfo.tuwien.ac.at Research Assistant Phone +43(1)58801-12723 Institute for Geoinformation Fax +43(1)58801-12799 Technical University of Vienna Cell phone +43(664)4064789 From simonpj@microsoft.com Fri Feb 16 12:14:24 2001 From: simonpj@microsoft.com (Simon Peyton-Jones) Date: Fri, 16 Feb 2001 04:14:24 -0800 Subject: Primitive types and Prelude shenanigans Message-ID: <37DA476A2BC9F64C95379BF66BA269025EAE39@red-msg-09.redmond.corp.microsoft.com> | | Some while ago I modified GHC to have an extra runtime | | flag to let you change this behaviour. The effect was | | that 3 turns into simply (fromInt 3), and the | | "fromInt" means "whatever fromInt is in scope". | | Hmmm... so how about: | | foo fromInt = 3 | | Would this translate to: | | foo f = f 3 This is exactly what will happen. But you are right to say that it is perhaps not what you want. Another alternative would be: "3" turns into "Prelude.fromInt 3", where "Prelude.fromInt" means "whatever Prelude.fromInt is in scope". So then you'd have to say import Prelude () import MyPrelude as Prelude (as Malcolm and Marcin suggested). Maybe that's a good plan; it's a little more heavyweight. [Incidentally, if this is nhc's behaviour, it's not H98. The Report (tries to) stress that you get the "fromInt from the actual standard Prelude" regardless of what is in scope. That's why I'm not going to make it the default behaviour.] Yet another possibility would be to say you get "the unqualified fromInt that's in scope at top level". But that seems worse. Re Bools, Koen and Marcin write (respectively) | | [...] guarantee that (if otherwise e1 e2) = e1. | | I do not understand this. "otherwise" is simply a function | name, that can be used, redefined or hidden, by anyone. It | is not used in any desugaring. Why change that behaviour? | > So much for numerics. It's much less obvious what to do | about booleans. 
| | IMHO a natural generalization (not necessarily useful) is to follow | the definition of the 'if' syntactic sugar literally. 'if' expands | to the appropriate 'case'. So Prelude.True and Prelude.False must be | defined, and they must have the same type (otherwise we get a type | error each time we use 'if'). This would allow even | data FancyBool a = True | False | DontKnow a The point is that there must be a *defined* desugaring. The desugaring in the report defines the behaviour, but the compiler is free to do differently. If one is to be free to rebind types, the desugaring must be fully defined. Marcin suggests that 'if' is just syntactic sugar. But that would be a disaster if the new Bool type didn't have constructors True and False. For example, maybe Bool becomes a function: type Bool = forall b. b -> b -> b No constructor 'True'! Here I think the right thing is to say that desugaring for boolean constructs uses a function 'if' assumed to have type if :: forall b. Bool -> b -> b -> b Now the programmer can define both Bool and if, and the compiler will be happy. My point is this: there is some *design* to do here. It's not obvious what the design should be. But if anyone feels inclined to do the design (in consultation with the community of course) then I'd be inclined to implement it in GHC. (Though I'm not writing a blank cheque!) Decoupling the prelude is a desirable goal. Simon From matthias@rice.edu Fri Feb 16 15:51:35 2001 From: matthias@rice.edu (Matthias Felleisen) Date: Fri, 16 Feb 2001 09:51:35 -0600 (CST) Subject: Just for your fun and horror In-Reply-To: <3A8D3832.B5045915@info.unicaen.fr> (message from Jerzy Karczmarczuk on Fri, 16 Feb 2001 14:24:50 +0000) References: <20010213183221.B973@math.harvard.edu> <3A8D3832.B5045915@info.unicaen.fr> Message-ID: <200102161551.JAA00603@africa.cs.rice.edu> The problem is Haskell, not your student. 
Haskell undermines the meaning of 'return', which has the same meaning in C, C++, Java, and who knows what else. These languages use 'return' to refer to one part of the denotation of a function return (value) and Haskell uses 'return' to refer to two parts (value, store). These languages have been around forever; Haskell came late. These languages are imperative; Haskell is a wanna-be imperative language. The students know C'ish stuff (and I take it some Scheme); you teach Haskell to introduce them to functional and denotational thinking. That's laudable. It's great. Just don't expect your students to change deeply ingrained habits such as the 'return habit' in a few weeks. Instead, teach explicit store-passing style and do it again and again and again until they ask "isn't this a pattern that we should abstract out". Then show monads and apologize profusely for the abuse of the return syntax in Haskell. If they don't ask, chew them out near the end of the semester for being bad programmers who can't see a pattern when it bites their b...d. Not worth the money. Fired. :-) -- Matthias From Dominic.J.Steinitz@BritishAirways.com Fri Feb 16 15:59:33 2001 From: Dominic.J.Steinitz@BritishAirways.com (Steinitz, Dominic J) Date: 16 Feb 2001 15:59:33 Z Subject: Just for your fun and horror Message-ID: <"0596F3A8D4E65002*/c=GB/admd=ATTMAIL/prmd=BA/o=British Airways PLC/ou=CORPLN1/s=Steinitz/g=Dominic/i=J/"@MHS> I always liked unit rather than return. Dominic. ------------------------------------------------------------------------------------------------- 21st century air travel http://www.britishairways.com From C.Reinke@ukc.ac.uk Fri Feb 16 16:46:53 2001 From: C.Reinke@ukc.ac.uk (C.Reinke) Date: Fri, 16 Feb 2001 16:46:53 +0000 Subject: Just for your fun and horror Message-ID: > `return' in Haskell vs `return' in C,... 
Unless you're one of Asimov's technicians of eternity, it is a bit difficult to change the history of programming languages, and assuming that the students pay for the opportunity to learn, you can't really fire them either.. but I agree with Matthias's suggestion to go from the specific to the general. Before anyone complains that abstract and generalised concepts are so much more important and powerful than specific and simplified instances - if you believe this, you will also agree that giving students a chance to learn the general process of abstraction for themselves is more important and empowering than teaching them some specific abstractions. (I'm not sure whether it is even possible to reach all students in a course, but I will certainly not recommend giving up trying;-) One way to look at the problem is that some of your students have concrete experience with `return' in different contexts, and that Haskell tries to make different things look similar here. You say "we worked with monads for several weeks" but, you being yourself, this was probably at a fairly abstract and general level, right? My suggestion is to give your students some concrete experience to counter the one they bring into your course, by introducing the abstract monads via an intermediate step of concrete representations. As you're teaching programming language implementation anyway, why not have an algebraic datatype with return and bind *constructors*, together with some explicit *interpreters* (plural) for the language of structures built from those constructors (even as student exercises)? Perhaps we can gain a better understanding of the student perspective if we compare the situation with lists or other data structures: do we start teaching their folds and the fold-representation of data structures right away, or do we start with concrete intermediate structures, and move on to folds and deforestation later? 
Of course, with a concrete representation of monads, it is difficult to hold up the law, so after this intermediate step (in which students get their hands on `return' et al., and in which interpreters can interpret `return a' in any way they please), one can move on to an abstract data type of monads. After all, that's what abstract data types are there for. Hth, Claus From qrczak@knm.org.pl Fri Feb 16 17:13:10 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 16 Feb 2001 17:13:10 GMT Subject: Primitive types and Prelude shenanigans References: <37DA476A2BC9F64C95379BF66BA269025EAE39@red-msg-09.redmond.corp.microsoft.com> Message-ID: Fri, 16 Feb 2001 04:14:24 -0800, Simon Peyton-Jones pisze: > [Incidentally, if this is nhc's behaviour, it's not H98. > The Report (tries to) stress that you get the "fromInt from the actual > standard Prelude" regardless of what is in scope. That's why I'm not > going to make it the default behaviour.] But is mere -fglasgow-exts enough to enable it? BTW: fromInt is not H98. However when a compiler uses fromInt instead of fromInteger where the number fits, with a suitable default method for fromInt which is not exported from Prelude, then no program can tell the difference, so it's OK. Unfortunately integer literals cannot expand to Prelude.fromInt, because Prelude does not export fromInt! Currently ghc extension flags can have no effect on module imports, so if fromInt is not visible in standard mode, it will not be visible in extended mode either. In that case these two extensions (Prelude substitution and using fromInt for integer literals) are incompatible. > Marcin suggests that 'if' is just syntactic sugar. But that would > be a disaster if the new Bool type didn't have constructors True > and False. Correction: it would be a disaster when there are no Prelude.True and Prelude.False constructors of the same type. It need not be called Bool if the desugaring rule does not say so. 
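The FancyBool example mentioned earlier in this thread can be sketched concretely. Since real 'if' syntax cannot be rebound in standard Haskell, an ordinary function if' stands in for the desugared conditional, and the constructors are renamed FTrue/FFalse to avoid clashing with the Prelude's:

```haskell
-- A Bool-like type with an extra constructor, in the spirit of
-- "data FancyBool a = True | False | DontKnow a".
data FancyBool a = FTrue | FFalse | DontKnow a deriving (Eq, Show)

-- if' plays the role the desugared 'if' would play if the compiler
-- looked up the boolean constructors in the current scope.
if' :: FancyBool a -> b -> b -> b
if' FTrue        t _ = t
if' FFalse       _ e = e
if' (DontKnow _) _ e = e   -- arbitrary choice: unknown falls to the else branch
```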
> Here I think the right thing is to say that desugaring for boolean > constructs uses a function 'if' assumed to have type > if :: forall b. Bool -> b -> b -> b What if somebody wants to make 'if' overloaded on more types than some constant type called Bool? class Condition a where if :: a -> b -> b -> b Generally I don't feel the need of allowing to replace if, Bool and everything else with custom definitions, especially when there is no single obvious way. -- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From qrczak@knm.org.pl Fri Feb 16 17:42:17 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 16 Feb 2001 17:42:17 GMT Subject: Primitive types and Prelude shenanigans References: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com> <20010215205620.D641@holomorphy.com> Message-ID: Thu, 15 Feb 2001 20:56:20 -0800, William Lee Irwin III pisze: > literal "5" gets mapped to (fromPositiveInteger 5) > literal "-9" gets mapped to (fromNonZeroInteger -9) Note that when a discussed generic Prelude replacement framework is done, and ghc's rules are changed to expand -9 to negate (fromInteger 9) instead of fromInteger (-9), then you don't need uglification of the fromInteger function to be able to define types with only nonnegative numeric values. Just define your negate in an appropriate class, different from the fromInteger's class. 
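Marcin's suggestion about negate can be sketched with hypothetical class names: put negation in a class separate from the one providing literal conversion, so a naturals-only type simply lacks the negation instance and -9 (as negate applied to a literal) is rejected for it by the type checker.

```haskell
-- Hypothetical split: literals go through FromNatural, negation lives
-- in a separate class.  These names are illustrative, not any real
-- Prelude's.
class FromNatural a where
  fromNatural :: Integer -> a   -- assumed to be applied to n >= 0 only

class FromNatural a => Negatable a where
  neg :: a -> a

newtype Nat = Nat Integer deriving (Eq, Show)

instance FromNatural Nat where
  fromNatural = Nat
-- No Negatable Nat instance: 'neg (fromNatural 9) :: Nat' won't typecheck.

instance FromNatural Integer where
  fromNatural = id
instance Negatable Integer where
  neg = negate

-- Under the proposed desugaring, -9 would mean neg (fromNatural 9):
minusNine :: Integer
minusNine = neg (fromNatural 9)
```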
-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From wli@holomorphy.com Fri Feb 16 19:47:10 2001 From: wli@holomorphy.com (William Lee Irwin III) Date: Fri, 16 Feb 2001 11:47:10 -0800 Subject: Primitive types and Prelude shenanigans In-Reply-To: ; from qrczak@knm.org.pl on Fri, Feb 16, 2001 at 05:42:17PM +0000 References: <37DA476A2BC9F64C95379BF66BA269023D9535@red-msg-09.redmond.corp.microsoft.com> <20010215205620.D641@holomorphy.com> Message-ID: <20010216114710.G641@holomorphy.com> William Lee Irwin III pisze: >> literal "5" gets mapped to (fromPositiveInteger 5) >> literal "-9" gets mapped to (fromNonZeroInteger -9) On Fri, Feb 16, 2001 at 05:42:17PM +0000, Marcin 'Qrczak' Kowalczyk wrote: > Note that when a discussed generic Prelude replacement > framework is done, and ghc's rules are changed to expand -9 to > negate (fromInteger 9) instead of fromInteger (-9), then you don't > need uglification of the fromInteger function to be able to define > types with only nonnegative numeric values. Just define your negate > in an appropriate class, different from the fromInteger's class. Good point, the canonical injection from the positive integers into the various supersets (with structure) thereof handles it nicely. I foresee: fromPositiveInteger :: ContainsPositiveIntegers t => PositiveInteger -> t instance ContainsPositiveIntegers Integer where ... instance AdditiveGroup Integer where ... negate :: AdditiveGroup t => t -> t {- this seems natural, but see below -} fromPositiveInteger 5 :: ContainsPositiveIntegers t => t negate $ fromPositiveInteger 5 :: (AdditiveGroup t, ContainsPositiveIntegers t) => t which is not exactly what I want (and could probably use some aesthetic tweaking); I had in mind that negative integers would somehow imply a ContainsNonZeroIntegers or ContainsAllIntegers instance or the like. 
The solution actually imposes a rather natural instance (though one which could cause overlaps): instance (AdditiveGroup t, ContainsPositiveIntegers t) => ContainsAllIntegers t where ... I suppose one big wrinkle comes in when I try to discuss negation in the multiplicative monoid of nonzero integers. That question already exists without the Prelude's altered handling of negative literals. negate . fromInteger $ n just brings it immediately to the surface. 0 and 1 will still take some work, but I don't expect help with them. Thanks for the simplification! Cheers, Bill From erik@meijcrosoft.com Fri Feb 16 20:26:00 2001 From: erik@meijcrosoft.com (Erik Meijer) Date: Fri, 16 Feb 2001 12:26:00 -0800 Subject: Just for your fun and horror References: Message-ID: <006401c09856$aeb76d80$5d0c1cac@redmond.corp.microsoft.com> Why should we change and not C? Erik ----- Original Message ----- From: "Jan Skibinski" To: "Jerzy Karczmarczuk" Cc: ; Sent: Friday, February 16, 2001 1:17 AM Subject: Re: Just for your fun and horror > > > On Fri, 16 Feb 2001, Jerzy Karczmarczuk wrote: > > > My inquiry proved beyond any doubt that my students are so > > conditioned by "C", that despite the fact that we worked with > > monads for several weeks, they *cannot imagine* that > > "return z" > > may mean something different than the value of "z". > > > > Any suggestions? > > Perhaps the name "return" in the monadic definitions > could be replaced by something more suggestive of > an action? How about running a little experiment > next time, with a new name, to see whether this would > remove this unfortunate association with C-like > "return" in the minds of your students? 
> > Jan > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe@haskell.org > http://www.haskell.org/mailman/listinfo/haskell-cafe From matthias@rice.edu Fri Feb 16 21:26:36 2001 From: matthias@rice.edu (Matthias Felleisen) Date: Fri, 16 Feb 2001 15:26:36 -0600 (CST) Subject: Just for your fun and horror In-Reply-To: <006401c09856$aeb76d80$5d0c1cac@redmond.corp.microsoft.com> (erik@meijcrosoft.com) References: <006401c09856$aeb76d80$5d0c1cac@redmond.corp.microsoft.com> Message-ID: <200102162126.PAA00819@africa.cs.rice.edu> Because C was first and you don't have the power to change them. -- Matthias From jhf@lanl.gov Fri Feb 16 23:10:29 2001 From: jhf@lanl.gov (Joe Fasel) Date: Fri, 16 Feb 2001 16:10:29 -0700 (MST) Subject: Just for your fun and horror In-Reply-To: <200102161551.JAA00603@africa.cs.rice.edu> Message-ID: On 16-Feb-2001 Matthias Felleisen wrote: | | The problem is Haskell, not your student. | | Haskell undermines the meaning of 'return', which has the same meaning in | C, C++, Java, and who knows whatelse. These languages use 'return' to | refer to one part of the denotation of a function return (value) and | Haskell uses 'return' to refer to two parts (value, store). These languages | have been around forever; Haskell came late. These languages are | imperative; Haskell is a wanna-be imperative language. The denotation of a return command in a typical imperative language supplies a value and a store to a calling continuation, so why is the name not entirely appropriate? Joseph H. Fasel, Ph.D. 
email: jhf@lanl.gov Technology Modeling and Analysis phone: +1 505 667 7158 University of California fax: +1 505 667 2960 Los Alamos National Laboratory post: TSA-7 MS F609; Los Alamos, NM 87545 From jhf@lanl.gov Fri Feb 16 23:53:13 2001 From: jhf@lanl.gov (jhf@lanl.gov) Date: Fri, 16 Feb 2001 16:53:13 -0700 (MST) Subject: Just for your fun and horror In-Reply-To: <200102162310.RAA00910@africa.cs.rice.edu> Message-ID: <200102162353.SAA04338@blount.mail.mindspring.net> On 16-Feb-2001 Matthias Felleisen wrote: > > Because imperative languages have named one half of the denotation (the > value return) and not all of it for a long long long time. It's too late > for Haskell to change that. -- Matthias Well now, if I am to understand what a return statement in C does, I must realize not only that it may return a value to a calling routine, but also that it preserves the store. If it allowed the store to vanish, it wouldn't be very useful, would it? So I don't see how it's reasonable to assert that "return" means only one of these two things to a C programmer. Cheers, --Joe Joseph H. Fasel, Ph.D. 
email: jhf@lanl.gov Technology Modeling and Analysis phone: +1 505 667 7158 University of California fax: +1 505 667 2960 Los Alamos National Laboratory post: TSA-7 MS F609; Los Alamos, NM 87545 From matthias@rice.edu Fri Feb 16 23:57:41 2001 From: matthias@rice.edu (Matthias Felleisen) Date: Fri, 16 Feb 2001 17:57:41 -0600 (CST) Subject: Just for your fun and horror In-Reply-To: <200102162353.SAA04338@blount.mail.mindspring.net> (jhf@lanl.gov) References: <200102162353.SAA04338@blount.mail.mindspring.net> Message-ID: <200102162357.RAA00940@africa.cs.rice.edu> From: jhf@lanl.gov X-Priority: 3 (Normal) Content-Type: text/plain; charset=us-ascii Date: Fri, 16 Feb 2001 16:53:13 -0700 (MST) Organization: Los Alamos National Laboratory Cc: karczma@info.unicaen.fr, haskell-cafe@haskell.org On 16-Feb-2001 Matthias Felleisen wrote: > > Because imperative languages have named one half of the denotation (the > value return) and not all of it for a long long long time. It's too late > for Haskell to change that. -- Matthias Well now, if I am to understand what a return statement in C does, I must realize not only that it may return a value to a calling routine, but also that it preserves the store. If it allowed the store to vanish, it wouldn't be very useful, would it? So I don't see how it's reasonable to assert that "return" means only one of these two things to a C programmer. Cheers, --Joe Let me spell it out in detail. When a C programmer thinks about the 'return' type of a C function, he thinks about the value-return half of a return statement's denotation. The other half, the modified store, remains entirely implicit as far as types are concerned. This is what Jerzy's exam question was all about. 
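The "explicit store-passing style" Matthias recommends teaching can be made concrete. The single-Int store below is a made-up minimal case of the pattern that the State monad later abstracts:

```haskell
-- Explicit store-passing: every function takes the store and returns
-- a (value, new store) pair.  Here the store is just an Int counter.
type Store = Int

tick :: Store -> (Int, Store)
tick s = (s, s + 1)

-- Threading the store by hand: the boilerplate students are meant to
-- notice, and then ask to abstract out (into a monad).
twoTicks :: Store -> ((Int, Int), Store)
twoTicks s0 =
  let (a, s1) = tick s0
      (b, s2) = tick s1
  in ((a, b), s2)
```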
-- Matthias From jhf@lanl.gov Sat Feb 17 00:19:18 2001 From: jhf@lanl.gov (jhf@lanl.gov) Date: Fri, 16 Feb 2001 17:19:18 -0700 (MST) Subject: Just for your fun and horror In-Reply-To: <200102162357.RAA00940@africa.cs.rice.edu> Message-ID: <200102170019.TAA31896@tisch.mail.mindspring.net> Matthias, My apologies for being deliberately obtuse. Of course, I understood what you were saying, but my point is this: The name of the monadic "return" combinator is perfectly sensible to anyone who understands the continuation semantics of imperative languages. While it shouldn't be necessary to be a denotational semanticist to program in Haskell, I think it is essential to appreciate the philosophical difference between the _being_ of functional programming and the _doing_ of imperative programming, if you're going to play with something like the I/O monad in Haskell. If you don't grasp that when you construct a monad, you're creating a value that represents an action, or in other words have a basic understanding of the functional denotation of an imperative command, you don't really understand what you're "doing" with monads, and your program is likely not to compute what you intend. In this sense, maybe it's better not to change the (initially) confusing "return" name, but to regard it as a pons asinorum that the student must cross. Cheers, --Joe On 16-Feb-2001 Matthias Felleisen wrote: > > From: jhf@lanl.gov > X-Priority: 3 (Normal) > Content-Type: text/plain; charset=us-ascii > Date: Fri, 16 Feb 2001 16:53:13 -0700 (MST) > Organization: Los Alamos National Laboratory > Cc: karczma@info.unicaen.fr, haskell-cafe@haskell.org > > > On 16-Feb-2001 Matthias Felleisen wrote: > > > > Because imperative languages have named one half of the denotation (the > > value return) and not all of it for a long long long time. It's too late > > for Haskell to change that. 
-- Matthias > > Well now, if I am to understand what a return statement in C does, > I must realize not only that it may return a value to a calling > routine, but also that it preserves the store. If it allowed > the store to vanish, it wouldn't be very useful, would it? > So I don't see how it's reasonable to assert that "return" > means only one of these two things to a C programmer. > > Cheers, > --Joe > > > Let me spell it out in detail. When a C programmer thinks about the > 'return' type of a C function, he thinks about the value-return half > of a return statement's denotation. The other half, the modified store, > remains entirely implicit as far as types are concerned. This is what > Jerzy's exam question was all about. > > -- Matthias > Joseph H. Fasel, Ph.D. email: jhf@lanl.gov Technology Modeling and Analysis phone: +1 505 667 7158 University of California fax: +1 505 667 2960 Los Alamos National Laboratory post: TSA-7 MS F609; Los Alamos, NM 87545 From p.turner@computer.org Sat Feb 17 01:00:50 2001 From: p.turner@computer.org (Scott Turner) Date: Fri, 16 Feb 2001 20:00:50 -0500 Subject: Just for your fun and horror Message-ID: <3.0.5.32.20010216200050.00c098d0@billygoat.org> Matthias Felleisen wrote: >When a C programmer thinks about the >'return' type of a C function, he thinks about the value-return half >of a return statement's denotation. The other half, the modified store, >remains entirely implicit as far as types are concerned. Just because the type system of C keeps store implicit, it doesn't change the match between the meaning of 'return' in the two languages. The IO monad provides a refined way of typing imperative-style functions, including return statements. If you want to use a return statement in Haskell, you can, and it's called 'return'. (A reasonable alternative would be for 'return' to have second-class status, as syntactic sugar for 'unit', analogous to otherwise=True). 
-- Scott Turner p.turner@computer.org http://www.billygoat.org/pkturner From matthias@rice.edu Sat Feb 17 03:24:50 2001 From: matthias@rice.edu (Matthias Felleisen) Date: Fri, 16 Feb 2001 21:24:50 -0600 (CST) Subject: Just for your fun and horror In-Reply-To: <200102170019.TAA31896@tisch.mail.mindspring.net> (jhf@lanl.gov) References: <200102170019.TAA31896@tisch.mail.mindspring.net> Message-ID: <200102170324.VAA00987@africa.cs.rice.edu> Yes, students must cross the bridge. But the name 'return' may make it more difficult than necessary to cross the bridge. I conjecture that the students of our French friend are just the tip of the iceberg. All functional programmers have problems selling our ware to such people. Haskell could have benefited from using a word such as produce 10 to say that a function produces a 10 and a store or whatever. It could have driven the point home. Faking to be C or Java is confusing and may create a backlash. Just admit you're different -- and better. We Schemers have different problems. -- Matthias From matth@ninenet.com Sat Feb 17 04:21:57 2001 From: matth@ninenet.com (Matt Harden) Date: Fri, 16 Feb 2001 22:21:57 -0600 Subject: Scalable and Continuous References: <3A8B68DB.F52BF668@ninenet.com> Message-ID: <3A8DFC65.4F59ED27@ninenet.com> Marcin 'Qrczak' Kowalczyk wrote: > > Wed, 14 Feb 2001 23:27:55 -0600, Matt Harden pisze: > > > I also wonder: should one be allowed to create new superclasses of an > > existing class without updating the original class's definition? > > It would not buy anything. You could not make use of the superclass > in default definitions anyway (because they are already written). But that's not the point. The point is you could create objects that were only instances of the new superclass and not of the subclass. It allows us to have hidden superclasses of Num that wouldn't even have to be referenced in the standard Prelude, for instance. 
It allows users to define (+) for a type without defining (*), by creating an appropriate superclass of Num. We could keep the current Prelude while allowing numerous "Geek Preludes" that could coexist with the std one (at least with regard to this particular issue). > And what would happen to types which are instances of the subclass > but not of the new superclass? They would automatically be instances of the new superclass. Why not? They already have all the appropriate functions defined. Again, I wouldn't allow default definitions for the same function in multiple classes, and this is one of the reasons. It would introduce ambiguity when a type that is an instance of a subclass, and didn't override the default, was considered as an instance of the superclass. > > Also, should the subclass be able to create new default definitions > > for functions in the superclasses? > > I hope the system can be designed such that it can. Me too :). > > such defaults would only be legal if the superclass did not define > > a default for the same function. > > Not necessarily. For example (^) in Num (of the revised Prelude) > has a default definition, but Fractional gives the opportunity to > have better (^) defined in terms of other methods. When a type is an > instance of Fractional, it should always have the Fractional's (^) > in practice. When not, Num's (^) is always appropriate. > > I had many cases like this when trying to design a container class > system. It's typical that a more specialized class has something > generic as a superclass, and that a more generic function can easily > be expressed in terms of specialized functions (but not vice versa). > It follows that many kinds of types have the same written definition > for a method, which cannot be put in the default definition in the > class because it needs a more specialized context. > > It would be very convenient to be able to do that, but it cannot be > very clear design. 
It relies on the absence of an instance, a negative > constraint. Hopefully it will be OK, since it's determined once for a > type - it's not a systematic way of parametrizing code over negatively > constrained types, which would break the principle that additional > instances are harmless to old code. What happens if classes A and B are superclasses of C, all three define a default for function foo, and we have a type that's an instance of A and B, but not C, which doesn't override foo? Which default do we use? It's not only a problem for the compiler to figure out, it also quickly becomes confusing to the programmer. I'd rather just make the simple rule of a single default per function. If multiple "standard" definitions for a function make sense, then be explicit about which one you want for each type; i.e.: instance Fractional MyFraction where (^) = fractionalPow > This design does have some problems. For example what if there are two > subclasses which define the default method in incompatible ways. > We should design the system such that adding a non-conflicting instance > does not break previously written code. It must be resolved once per > module, probably complaining about the ambiguity (ugh!), but once > the instance is generated, it's cast in stone for this type. Yeah, ugh. I hate having opportunities for ambiguity. Simple rules and obvious results are far better, IMHO. > > What do you mean by mutual definitions? (snipped explanation of mutual definitions) OK, that's what I thought :). I didn't really think this was of particular importance with allowing the definition of superclass's instances in subclasses, but now I think I see why you said that. It would be easy to forget to define one of the functions if the defaults are way up the hierarchy in one of the superclasses. Btw, I'm one of those who agrees that omitting a definition of a class function in an instance should be an error. 
If you really intend to omit the implementation of a function without a default, define it as (error "Intentionally omitted")! Matt Harden From jf15@hermes.cam.ac.uk Sat Feb 17 11:09:52 2001 From: jf15@hermes.cam.ac.uk (Jon Fairbairn) Date: Sat, 17 Feb 2001 11:09:52 +0000 (GMT) Subject: Just for your fun and horror In-Reply-To: <3.0.5.32.20010216200050.00c098d0@billygoat.org> Message-ID: On Fri, 16 Feb 2001, Scott Turner wrote: > Just because the type system of C keeps store implicit, it doesn't > change the match between the meaning of 'return' in the two languages. Or to put it another way, _all_ types in C are IO something. I think from a didactic point of view making this observation could be very valuable. -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk 31 Chalmers Road jf@cl.cam.ac.uk Cambridge CB1 3SZ +44 1223 570179 (pm only, please) From elke.kasimir@catmint.de Sat Feb 17 11:24:03 2001 From: elke.kasimir@catmint.de (Elke Kasimir) Date: Sat, 17 Feb 2001 12:24:03 +0100 (CET) Subject: Just for your fun and horror In-Reply-To: <3A8D3832.B5045915@info.unicaen.fr> Message-ID: Another good exam question (Hmm!): What does last (last (map return [1..])) lastly return given that last (return (not True))? I also would prefer "unit". "return" makes sense for me as syntactic sugar in the context of a "do"-expression (and then please like a unary prefix-operator with low binding power...). An alternative sugaring would be "compute": When a monad represents a computation, "compute" returns a computation with a result, not just the result: foo x = if x > 0 then compute x*x else compute -x*x By the way, an alternative for "do" would be "seq" (as in occam) to indicate that operations are sequenced: getLine = seq c <- readChar if c == '\n' then compute "" else seq l <- getLine compute c:l But such a discussion has probably already taken place some years ago. It would be interesting for me to know the arguments that led to the choice of "return" (and "do"). Elke. 
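Elke's hypothetical `seq`/`compute` syntax corresponds directly to Haskell 98 do-notation: `do` plays the role of her `seq`, and `return` the role of her `compute`. A sketch of her getLine example in real Haskell follows; the name `getLine'` is primed only to avoid the Prelude's getLine, and the standard `getChar` stands in for her `readChar`.

```haskell
-- Elke's sketch rewritten in standard do-notation.
getLine' :: IO String
getLine' = do
  c <- getChar                 -- her: seq c <- readChar
  if c == '\n'
    then return ""             -- her: compute ""
    else do                    -- her: seq l <- getLine ...
      l <- getLine'
      return (c : l)           -- her: compute c:l

main :: IO ()
main = getLine' >>= putStrLn   -- echoes one line of input
```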
--- "If you have nothing to say, don't do it here..." Elke Kasimir Skalitzer Str. 79 10997 Berlin (Germany) fon: +49 (030) 612 852 16 mail: elke.kasimir@catmint.de> see: for pgp public key see: From p.turner@computer.org Sat Feb 17 20:27:31 2001 From: p.turner@computer.org (Scott Turner) Date: Sat, 17 Feb 2001 15:27:31 -0500 Subject: [newbie] Lazy >>= ?! In-Reply-To: <20010217132530.B2091@liron> References: <3.0.5.32.20010216175108.00c08a70@billygoat.org> <20010216221926.A2091@liron> <3.0.5.32.20010216175108.00c08a70@billygoat.org> Message-ID: <3.0.5.32.20010217152731.00c01860@billygoat.org> Andrew Cooke wrote: >1. After digesting what you wrote I managed to make a lazy list of IO >monads containing random numbers, but couldn't make an IO monad that >contained a lazy list of random numbers. Is this intentional, me >being stupid, or just chance? I had wondered what kind of thing you were doing with the IO monad. Random numbers are an odd fit. Pseudorandom numbers can be generated in a lazy list easily; you don't need a connection with the IO monad to do it. Using the Random module of the Hugs distribution, it's for example randoms (mkStdGen 1) :: [Int] The IO monad can be brought into this picture easily. return (randoms (mkStdGen 1)) :: IO [Int] But it sounds as if you're looking for something more sophisticated. You want to use randomIO perhaps because it better matches your notion of how random numbers should be generated. Using randomIO places more restrictions on how you operate, because it forces the random numbers to be created in a particular sequence, in relation to any other IO which the program performs. Every random number that is ever accessed must be produced at a particular point in the sequence. An unbounded list of such numbers cannot be returned! That is, you are looking for randomsIO :: IO [a] which yields a lazy list, by means of repeated calls to randomIO. 
All such calls would have to occur _before_ randomsIO returns, and before _any_ use of the random numbers could be made. The program hangs in the process of making an infinite number of calls to randomIO. But, you may say, those infinite effects are invisible unless part of the list is referenced later in the program, so a truly lazy implementation should be able to skip past that stuff in no time. Well, that's conceivable, but (1) that's making some assumptions about the implementation of randomIO, and (2) lazy things with no side effects can and should be handled outside of the IO monad. >Also, should I be worried about having more than one IO monad - it >seems odd encapsulating the "outside world" more than once. No. Consider the expression sequence_ [print "1", print "two", print "III"] Try executing it from the Hugs command line, and figure out the type of the list. An expression in the IO monad, such as 'print 1' makes contact with the "outside world" when it executes, but does not take over the entire outside world, even for the period of time that it's active. I moved this to the haskell-cafe mailing list, because it's getting a little extended. -- Scott Turner p.turner@computer.org http://www.billygoat.org/pkturner From p.turner@computer.org Sat Feb 17 20:44:32 2001 From: p.turner@computer.org (Scott Turner) Date: Sat, 17 Feb 2001 15:44:32 -0500 Subject: [newbie] Lazy >>= ?! In-Reply-To: <20010217132530.B2091@liron> References: <3.0.5.32.20010216175108.00c08a70@billygoat.org> <20010216221926.A2091@liron> <3.0.5.32.20010216175108.00c08a70@billygoat.org> Message-ID: <3.0.5.32.20010217154432.00c01790@billygoat.org> Andrew Cooke wrote: >2. Why does the following break finite lists? Wouldn't they just >become lazy lists that evaluate to finite lists once map or length or >whatever is applied? > >> Now, if this were changed to >> ~(x:xs) >>= f = f x ++ (xs >>= f) >> (a lazy pattern match) then your listList2 would work, but finite >> lists would stop working. 
They wouldn't just become lazy lists. A "lazy" pattern match isn't about removing unnecessary strictness. It removes strictness that's necessary for the program to function normally. A normal pattern match involves selecting among various patterns to find the one which matches; so it evaluates the expression far enough to match patterns. In the case of (x:xs) it must evaluate the list sufficiently to know that it is not an empty list. A lazy pattern match gives up the ability to select which pattern matches. For the sake of less evaluation, it opens up the possibility of a runtime error, when a reference to a named variable won't have anything to bind to. The list monad is most often used with complete finite lists, not just their initial portions. The lazy pattern match shown above breaks this because as it operates on the list, it assumes that the list is non-empty, which is not the case when the end of the list is reached. A runtime error is inevitable. -- Scott Turner p.turner@computer.org http://www.billygoat.org/pkturner From dpt@math.harvard.edu Sat Feb 17 22:58:55 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Sat, 17 Feb 2001 17:58:55 -0500 Subject: Scalable and Continuous In-Reply-To: <3A8DFC65.4F59ED27@ninenet.com>; from matth@ninenet.com on Fri, Feb 16, 2001 at 10:21:57PM -0600 References: <3A8B68DB.F52BF668@ninenet.com> <3A8DFC65.4F59ED27@ninenet.com> Message-ID: <20010217175855.B4446@math.harvard.edu> On Fri, Feb 16, 2001 at 10:21:57PM -0600, Matt Harden wrote: > Marcin 'Qrczak' Kowalczyk wrote: > > Wed, 14 Feb 2001 23:27:55 -0600, Matt Harden pisze: > > > such defaults would only be legal if the superclass did not define > > > a default for the same function. > > > > Not necessarily. For example (^) in Num (of the revised Prelude) > > has a default definition, but Fractional gives the opportunity to > > have better (^) defined in terms of other methods. 
When a type is an > > instance of Fractional, it should always have the Fractional's (^) > > in practice. When not, Num's (^) is always appropriate. > What happens if classes A and B are superclasses of C, all three > define a default for function foo, and we have a type that's an instance > of A and B, but not C, which doesn't override foo? Which default do we > use? It's not only a problem for the compiler to figure out, it also > quickly becomes confusing to the programmer. (Presumably you mean that A and B are subclasses of C, which contains foo.) I would make this an error, easily found by the compiler. But I need to think more to come up with well-defined and uniform semantics. > .. I'd rather just make the > simple rule of a single default per function. If multiple "standard" > definitions for a function make sense, then be explicit about which one > you want for each type; i.e.: > > instance Fractional MyFraction where > (^) = fractionalPow This is another option. It has the advantage of being explicit and allowing you to choose easily in cases of ambiguity. It is more conservative, but possibly less convenient. Best, Dylan Thurston From ham@cs.utexas.edu Sat Feb 17 22:29:56 2001 From: ham@cs.utexas.edu (Hamilton Richards) Date: Sat, 17 Feb 2001 16:29:56 -0600 Subject: Just for your fun and horror In-Reply-To: <200102170324.VAA00987@africa.cs.rice.edu> References: <200102170019.TAA31896@tisch.mail.mindspring.net> (jhf@lanl.gov) <200102170019.TAA31896@tisch.mail.mindspring.net> Message-ID: At 21:24 -0600 2001-02-16, Matthias Felleisen wrote: > ... Haskell could have benefited from using a word such >as > > produce 10 > >to say that a function produces a 10 and a store or whatever. In my classes, I use the term "deliver". This is the first semester I've gone as deeply into monads, so it's a bit early to say how well this terminology works. 
--HR ------------------------------------------------------------------ Hamilton Richards, PhD Department of Computer Sciences Senior Lecturer Mail Code C0500 512-471-9525 The University of Texas at Austin Taylor Hall 5.138 Austin, Texas 78712-1188 ham@cs.utexas.edu ------------------------------------------------------------------ From chak@cse.unsw.edu.au Sun Feb 18 03:50:16 2001 From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty) Date: Sun, 18 Feb 2001 14:50:16 +1100 Subject: Just for your fun and horror In-Reply-To: References: <3.0.5.32.20010216200050.00c098d0@billygoat.org> Message-ID: <20010218145016X.chak@cse.unsw.edu.au> Jon Fairbairn wrote, > On Fri, 16 Feb 2001, Scott Turner wrote: > > Just because the type system of C keeps store implicit, it doesn't > > change the match between the meaning of 'return' in the two languages. > > Or to put it another way, _all_ types in C are IO > something. I think from a didactic point of view making > this observation could be very valuable. I absolutely agree. The Haskell foo :: IO Int foo = return 42 and C int foo () { return 42; } are exactly the same. It is bar = 42 for which C has no corresponding phrase. So, it is a new concept, which for the students - not surprisingly - is an intellectual challenge. In fact, I think, there is a second lesson in the whole story, too: Syntax is just...well...syntax. Students knowing only one or possibly two related languages, often cannot distinguish between syntax and semantics. Breaking their current, misguided model of programming languages is a first step for them towards gaining a deeper understanding. So, `return' is a feature, not a bug. I guess, the remedy for the course would be to provoke a discussion of the issue of C's return versus Haskell's return before the exam. 
Cheers, Manuel From ashley@semantic.org Sun Feb 18 03:59:32 2001 From: ashley@semantic.org (Ashley Yakeley) Date: Sat, 17 Feb 2001 19:59:32 -0800 Subject: Just for your fun and horror Message-ID: <200102180359.TAA12250@mail4.halcyon.com> At 2001-02-17 19:50, Manuel M. T. Chakravarty wrote: >It is > > bar = 42 > >for which C has no corresponding phrase. Hmm... #define bar 42 ...although I would always do const int bar = 42 -- Ashley Yakeley, Seattle WA From matth@ninenet.com Sun Feb 18 04:28:39 2001 From: matth@ninenet.com (Matt Harden) Date: Sat, 17 Feb 2001 22:28:39 -0600 Subject: Scalable and Continuous References: <3A8B68DB.F52BF668@ninenet.com> <3A8DFC65.4F59ED27@ninenet.com> <20010217175855.B4446@math.harvard.edu> Message-ID: <3A8F4F77.2E292451@ninenet.com> Dylan Thurston wrote: > > On Fri, Feb 16, 2001 at 10:21:57PM -0600, Matt Harden wrote: > > Marcin 'Qrczak' Kowalczyk wrote: > > > Wed, 14 Feb 2001 23:27:55 -0600, Matt Harden pisze: > > > > such defaults would only be legal if the superclass did not define > > > > a default for the same function. > > > > > > Not necessarily. For example (^) in Num (of the revised Prelude) > > > has a default definition, but Fractional gives the opportunity to > > > have better (^) defined in terms of other methods. When a type is an > > > instance of Fractional, it should always have the Fractional's (^) > > > in practice. When not, Num's (^) is always appropriate. > > What happens if classes A and B are superclasses of C, all three > > define a default for function foo, and we have a type that's an instance > > of A and B, but not C, which doesn't override foo? Which default do we > > use? It's not only a problem for the compiler to figure out, it also > > quickly becomes confusing to the programmer. > > (Presumably you mean that A and B are subclasses of C, which contains > foo.) I would make this an error, easily found by the compiler. > But I need to think more to come up with well-defined and uniform > semantics. 
No, I meant superclasses. I was referring to the possible feature we (Marcin and I) were discussing, which was the ability to create new superclasses of existing classes. If you are allowed to create superclasses which are not referenced in the definition of the subclass, then presumably you could create two classes A and B that contained foo from C. You would have to then be able to create a new subclass of both of those classes, since C is already a subclass of both. Then the question becomes, if they both have a default for foo, who wins? My contention was that the compiler should not allow a default for foo in the superclass and the subclass because that would introduce ambiguities. I would now like to change my stance on that, and say that defaults in the superclasses could be allowed, and in a class AB subclassing both A and B, there would be no default for foo unless it was defined in AB itself. Also C would not inherit any default from A or B, since it does not mention A or B in its definition. If this feature of creating new superclasses were adopted, I would also want a way to refer explicitly to default functions in a particular class definition, so that one could say that foo in AB = foo from A. BTW, I'm not saying this stuff is necessarily a good idea, just exploring the possibility. Matt Harden From sebc@posse42.net Sun Feb 18 05:17:01 2001 From: sebc@posse42.net (Sebastien Carlier) Date: Sun, 18 Feb 2001 05:17:01 +0000 Subject: Just for your fun and horror In-Reply-To: <20010218145016X.chak@cse.unsw.edu.au>; from chak@cse.unsw.edu.au on Sun, Feb 18, 2001 at 02:50:16PM +1100 References: <3.0.5.32.20010216200050.00c098d0@billygoat.org> <20010218145016X.chak@cse.unsw.edu.au> Message-ID: <20010218051701.A534@posse42.net> Manuel M. T. Chakravarty wrote: > It is > > bar = 42 > > for which C has no corresponding phrase. But it has: #define bar 42 Although then you get call by name, while Haskell provides call by need. 
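Sebastien's call-by-name versus call-by-need distinction is easy to make observable. A C macro's body is re-evaluated at every use site, whereas a Haskell top-level binding is a CAF: evaluated at most once and then shared. A small sketch using GHC's standard `Debug.Trace.trace` to watch when evaluation happens:

```haskell
import Debug.Trace (trace)

-- A top-level binding (a CAF) is evaluated at most once, then shared.
bar :: Int
bar = trace "evaluating bar" 42

main :: IO ()
main = print (bar + bar)
-- Under call by need, "evaluating bar" is emitted once (on stderr)
-- and then 84 is printed; a C macro `#define bar (f())' would call
-- f twice, once per use.
```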
Cheers, Sebastien From tom-haskell@moertel.com Sun Feb 18 06:53:41 2001 From: tom-haskell@moertel.com (Tom Moertel) Date: Sun, 18 Feb 2001 01:53:41 -0500 Subject: Literate Programming in Haskell? Message-ID: <3A8F7175.72A3E1C5@moertel.com> In the Haskell community is there a generally accepted best way to approach Literate Programming? The language has support for literate comments, but it seems that many common LP tools don't respect it. For example, in order to convert some .lhs code into LaTeX via the noweb LP tools, I had to write a preprocessor to convert the ">" code blocks into something that noweb would respect. (The preprocessor actually does a bit more and, in conjunction with noweb, gives pretty good results for little effort. For an example, see: http://www.ellium.com/~thor/hangman/cheating-hangman.lhs http://www.ellium.com/~thor/hangman/cheating-hangman.pdf ) Yet somehow, I don't think that my homebrew approach is optimal. Can anybody recommend a particularly elegant LP setup for Haskell programming? Or if you have an approach that works well for you, would you mind sharing it? Cheers, Tom From chak@cse.unsw.edu.au Sun Feb 18 08:54:57 2001 From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty) Date: Sun, 18 Feb 2001 19:54:57 +1100 Subject: Just for your fun and horror In-Reply-To: <200102180359.TAA12250@mail4.halcyon.com> References: <200102180359.TAA12250@mail4.halcyon.com> Message-ID: <20010218195457B.chak@cse.unsw.edu.au> Ashley Yakeley wrote, > At 2001-02-17 19:50, Manuel M. T. Chakravarty wrote: > > >It is > > > > bar = 42 > > > >for which C has no corresponding phrase. > > Hmm... > > #define bar 42 No - this doesn't work as #define bar (printf ("Evil side effect"), 42) is perfectly legal. So, we have an implicit IO monad here, too. What is interesting, however, is that C does not require `return' in all contexts itself. Or in other words, C's comma notation has an implicit `return' in the last expression. 
> ...although I would always do > > const int bar = 42 That's a good one, however. It in effect rules out side effects by C's definition of constant expressions. So, I guess, I have to extend my example to bar x = x + 42 Cheers, Manuel From gruenbacher-lists@geoinfo.tuwien.ac.at Sun Feb 18 10:00:31 2001 From: gruenbacher-lists@geoinfo.tuwien.ac.at (Andreas Gruenbacher) Date: Sun, 18 Feb 2001 11:00:31 +0100 (CET) Subject: Literate Programming in Haskell? In-Reply-To: <3A8F7175.72A3E1C5@moertel.com> Message-ID: On Sun, 18 Feb 2001, Tom Moertel wrote: > In the Haskell community is there a generally accepted best way to > approach Literate Programming? The language has support for literate > comments, but it seems that many common LP tools don't respect it. I'm also very interested in this, but ideally I would want the output to be in some proportional font, with symbols like =>, ->, <- replaced with arrows, etc. Also, it would be very nice to have the code automatically column aligned (using heuristics). I saw something that looks like this in Mark P. Jones's paper `Typing Haskell in Haskell', but don't know how he did it. Cheers, Andreas. ------------------------------------------------------------------------ Andreas Gruenbacher gruenbacher@geoinfo.tuwien.ac.at Research Assistant Phone +43(1)58801-12723 Institute for Geoinformation Fax +43(1)58801-12799 Technical University of Vienna Cell phone +43(664)4064789 From andrew@andrewcooke.free-online.co.uk Sun Feb 18 11:54:42 2001 From: andrew@andrewcooke.free-online.co.uk (andrew@andrewcooke.free-online.co.uk) Date: Sun, 18 Feb 2001 11:54:42 +0000 Subject: [newbie] Lazy >>= ?! 
In-Reply-To: <3.0.5.32.20010217152731.00c01860@billygoat.org>; from p.turner@computer.org on Sat, Feb 17, 2001 at 03:27:31PM -0500 References: <3.0.5.32.20010216175108.00c08a70@billygoat.org> <20010216221926.A2091@liron> <3.0.5.32.20010216175108.00c08a70@billygoat.org> <20010217132530.B2091@liron> <3.0.5.32.20010217152731.00c01860@billygoat.org> Message-ID: <20010218115442.C9670@liron> Thanks to everyone for replying. Things make more sense now (I've re-read a chunk of the Haskell Companion that really hammers home the difference between actions and monads). Also, thanks for the pointer to random numbers without IO - I'd actually written my own equivalent, but will now drop it and use that. Cheers, Andrew On Sat, Feb 17, 2001 at 03:27:31PM -0500, Scott Turner wrote: > Andrew Cooke wrote: > >1. After digesting what you wrote I managed to make a lazy list of IO > >monads containing random numbers, but couldn't make an IO monad that > >contained a lazy list of random numbers. Is this intentional, me > >being stupid, or just chance? > > I had wondered what kind of thing you were doing with the IO monad. Random > numbers are an odd fit. Pseudorandom numbers can be generated in a lazy > list easily; you don't need a connection with the IO monad to do it. Using > the Random module of the Hugs distribution, it's for example > randoms (mkStdGen 1) :: [Int] > > The IO monad can be brought into this picture easily. > return (randoms (mkStdGen 1)) :: IO [Int] > > But it sounds as if you're looking for something more sophisticated. You > want to use randomIO perhaps because it better matches your notion of how > random numbers should be generated. Using randomIO places more > restrictions on how you operate, because it forces the random numbers to be > created in a particular sequence, in relation to any other IO which the > program performs. Every random number that is ever accessed must be > produced at a particular point in the sequence. 
An unbounded list of such > numbers cannot be returned! That is, you are looking for > randomsIO :: IO [a] > which yields a lazy list, by means of repeated calls to randomIO. All such > calls would have to occur _before_ randomsIO returns, and before _any_ use > of the random numbers could be made. The program hangs in the process of > making an infinite number of calls to randomIO. > > But, you may say, those infinite effects are invisible unless part of the > list is referenced later in the program, so a truly lazy implementation > should be able to skip past that stuff in no time. Well, that's > conceivable, but (1) that's making some assumptions about the implementation > of randomIO, and (2) lazy things with no side effects can and should be > handled outside of the IO monad. > > >Also, should I be worried about having more than one IO monad - it > >seems odd encapsulating the "outside world" more than once. > > No. Consider the expression > sequence_ [print "1", print "two", print "III"] > Try executing it from the Hugs command line, and figure out the type of the > list. An expression in the IO monad, such as 'print 1' makes contact with > the "outside world" when it executes, but does not take over the entire > outside world, even for the period of time that it's active. > > I moved this to the haskell-cafe mailing list, because it's getting a little > extended. > > -- > Scott Turner > p.turner@computer.org http://www.billygoat.org/pkturner > -- http://www.andrewcooke.free-online.co.uk/index.html From elke.kasimir@catmint.de Sun Feb 18 11:59:57 2001 From: elke.kasimir@catmint.de (Elke Kasimir) Date: Sun, 18 Feb 2001 12:59:57 +0100 (CET) Subject: framework for composing monads? In-Reply-To: <20010218150118D.chak@cse.unsw.edu.au> Message-ID: (Moving to haskell cafe...) On 18-Feb-2001 Manuel M. T. 
Chakravarty wrote: >> It is even acceptable for me to manage the state in C - >> independent of the API design - but then some time there >> will be the question: Why do I always say that that Haskell >> is the better programming language, when I'm >> really doing all the tricky stuff in C?... > > Sure - therefore, I proposed to use `IORef's rather than C > routines.

Thanks for the hint! I took a look at them and now have some questions:

a) It is clear that I need some C-link to access the cli/odbc lib. Up to now I planned to use Haskell Direct for this. Apart from this, I want to stick to Haskell 98 and seek maximal portability. Practically, this raises the question of whether nhc and hbc support hslibs, or else whether I can provide a substitute for IORef's for these compilers. Can someone give me a hint?

b) What I finally need is "hidden state". My first attempt to get one using IORefs is:

> import IOExts
>
> state :: IORef Int
> state = unsafePerformIO $ newIORef 0
>
> main = seq state $ do
>   writeIORef state 1
>   currstate <- readIORef state
>   putStr (show currstate)

Is this the right way?

Cheers, Elke --- "If you have nothing to say, don't do it here..." Elke Kasimir Skalitzer Str. 79 10997 Berlin (Germany) fon: +49 (030) 612 852 16 mail: elke.kasimir@catmint.de> see: for pgp public key see: From chak@cse.unsw.edu.au Mon Feb 19 02:58:09 2001 From: chak@cse.unsw.edu.au (Manuel M. T. Chakravarty) Date: Mon, 19 Feb 2001 13:58:09 +1100 Subject: framework for composing monads? In-Reply-To: References: <20010218150118D.chak@cse.unsw.edu.au> Message-ID: <20010219135809P.chak@cse.unsw.edu.au> Elke Kasimir wrote, > (Moving to haskell cafe...) > > On 18-Feb-2001 Manuel M. T. Chakravarty wrote: > >> It is even acceptable for me to manage the state in C - > >> independent of the API design - but then some time there > >> will be the question: Why do I always say that that Haskell > >> is the better programming language, when I'm > >> really doing all the tricky stuff in C?...
> > > > Sure - therefore, I proposed to use `IORef's rather than C > > routines. > > Thanks for the hint! > > I took a look at them and now have some questions: > > a) It is clear that I need some C-link to access the cli/odbc lib. > Up to now I planned to use Haskell Direct for this. Except of this, I want > to stick to Haskell 98 and seek for maximal portability.

I am all for portable code, too.

> Practically, this raises the question of wether nhc and hbc support hslibs > or else I can provide a substitute for IORef's for these compilers.

nhc does support `IORef's (they come in the module IOExtras). I am not sure whether H/Direct works with nhc, though. Sigbjorn should be able to answer this.

> b) What I finally need is "hidden state". My first attempt to get one > using IORefs is:
>
> > import IOExts
> >
> > state :: IORef Int
> > state = unsafePerformIO $ newIORef 0
> >
> > main = seq state $ do
> >   writeIORef state 1
> >   currstate <- readIORef state
> >   putStr (show currstate)
>
> Is this the right way?

Yes, except that you want to have {-# NOINLINE state #-} too. It wouldn't be nice if ghc were to choose to inline `state', would it? ;-)

Cheers, Manuel

From karczma@info.unicaen.fr Mon Feb 19 11:08:28 2001 From: karczma@info.unicaen.fr (Jerzy Karczmarczuk) Date: Mon, 19 Feb 2001 11:08:28 +0000 Subject: Just for your fun and horror References: <"0596F3A8D4E65002*/c=GB/admd=ATTMAIL/prmd=BA/o=British Airways PLC/ou=CORPLN1/s=Steinitz/g=Dominic/i=J/"@MHS> Message-ID: <3A90FEAC.DB69A1F6@info.unicaen.fr> Dear all, at haskell-café & plt-scheme. 1. THANK YOU VERY MUCH for enlightening comments about the terminology, student psychology, etc. I will get back to it in a second; for the moment I ask you very politely: survey the addressee list if you "reply-all". For people who subscribe to both the Haskell and Scheme forums, it means 4 copies of your message if you also send it simultaneously to the private address of the previous author...!
I thought that a cross posting might have some merits, and I see now the nuisance. My deep apologies.

2. People suggest that the word return has been badly chosen. I have no strong opinion, I begin to agree... we had unit, result, people propose liftM, compute, deliver, etc. I wonder why return stuck? Just because it exists elsewhere? I believe not, it has some appeal, as Joe Fasel acknowledges.

C. Reinke writes: > One way to look at the problem is that some of your students have > concrete experience with `return' in different contexts, and that > Haskell tries to make different things look similar here. You say > "we worked with monads for several weeks" but, you being yourself, > this was probably at a fairly abstract and general level, right?

No, not exactly. Being myself, just the opposite. There is *NO* more abstraction in my course than in Wadler's "Essence...".

* I begin with a silly functional evaluator of a tree representing an arithmetic expression.

* We recognize together with the students that a program may fail, and we introduce Maybe. They see thus a simple monadic generalisation and the first non-trivial instance of return. We try to implement (in a sketchy way) a tracing generalisation as well.

* They have, in parallel, a course on Prolog, so we play con mucho gusto with a few "non-deterministic" algorithms, such as standard combinatoric exercises: the generation of permutations, of the powerset, etc. On average the students seem to understand the idea and the implementation, and *mind you*: while writing their exercises <> they duly corrected themselves when they were tempted to write "z" instead of "return z". ([] =-> [[]]).

* We worked for a reasonable period with monadic parsers. The comment above is valid. Semantically they accepted the difference between "z" and "return z". I couldn't foresee any surprises.

* They had to write a serious program in Haskell, so I gave them an introduction to Haskell I/O.
They couldn't escape from *practical* Monads (although some of my students "perverted" [with my approval] the idea of writing a *syntactic* converter to Scheme, realizing it not in Haskell but in Scheme...) I spoke of course about types, but not simultaneously. We took advantage of the type inference, and the *type* of return has not been discussed explicitly sufficiently early. This is - I believe - my main, fundamental pedagogical fault! Yes Joe, I think this has been my own <>. If my compilation course survives all this affair (not obvious) I will try to remember Jón Fairbarn's suggestion (repeated by Manuel Chakravarty), and to discuss thoroughly the status of "C" imperative concepts, in order to prevent misunderstandings. C. Reinke again: > Unless you're one of Asimov's technicians of eternity, it is a bit > difficult to change the history of programming languages, and assuming > that the students pay for the opportunity to learn, you can't really > fire them either.. Hm. We are all Technicians able to change the past, but since we do not live outside the System, we do it usually in the Orwellian way: we change the INTERPRETATION of the past. Things which were good (structural top-down programming) become bad (inadapted to object approach). Strong typing? A straitjacket for some, a salvation for the other. Scheme'ists add OO layers in order to facilitate the code reusing, and this smuggles in some typing. Dynamic typing in static languages became a folkloric, never-ending issue... The history of languages is full of "second thoughts". Who will first write a paper with a Wadlerian style [[but taken from earlier literature]] title: "Monads considered harmful" "Return should NOT return its argument" etc.? And in France students don't pay for the opportunity to learn. Regards. 
Jerzy Karczmarczuk Caen, France From simonpj@microsoft.com Mon Feb 19 09:52:45 2001 From: simonpj@microsoft.com (Simon Peyton-Jones) Date: Mon, 19 Feb 2001 01:52:45 -0800 Subject: FW: Announcing haskelldoc Message-ID: <37DA476A2BC9F64C95379BF66BA2690260DB34@red-msg-09.redmond.corp.microsoft.com> > In the Haskell community is there a generally accepted best way to > approach Literate Programming? The language has support for literate > comments, but it seems that many common LP tools don't respect it. I don't know whether you'd regard this as literate programming, but there's a move afoot to get a widely used Haskell documentation tool (enclosed). Simon -----Original Message----- From: Henrik Nilsson [mailto:nilsson@cs.yale.edu] Sent: 05 February 2001 22:14 To: haskell@haskell.org Subject: Announcing haskelldoc Dear Haskellers, At the recent Haskell Implementors' meeting in Cambridge, UK, it was decided that it would be useful to have a standard for embedded Haskell documentation. Such standards, and associated tools for extracting and formatting the documentation in various ways, exist for other languages like Java and Eiffel and have proven to be very useful. Some such tools also exist (and are being actively developed) for Haskell, but there is as yet no generally agreed upon standard for the format of the embedded documentation as such. To address this, a mailing list has been started with the aim of defining a standard for embedded Haskell documentation, and possibly also related standards which would facilitate the development of various tools making use of such documentation (formatters, source code browsers, search tools, etc.). We feel that it is important to involve all who might be interested in this work at an early stage, so that as many aspects as possible can be taken into consideration, and so that the proposal for a standard which hopefully will emerge has a reasonable chance of gaining widespread support.
Thus, you are hereby cordially invited to join haskelldoc@haskell.org. To join, just go to http://www.haskell.org/mailman/listinfo/haskelldoc. Best regards, Armin Groesslinger Simon Marlow Henrik Nilsson Jan Skibinski Malcolm Wallace _______________________________________________ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell From bostjan.slivnik@fri.uni-lj.si Mon Feb 19 13:33:44 2001 From: bostjan.slivnik@fri.uni-lj.si (Bostjan Slivnik) Date: Mon, 19 Feb 2001 14:33:44 +0100 Subject: Literate Programming in Haskell? In-Reply-To: (message from Andreas Gruenbacher on Sun, 18 Feb 2001 11:00:31 +0100 (CET)) References: Message-ID: <200102191333.f1JDXiU03160@sliva.fri.uni-lj.si> > > In the Haskell community is there a generally accepted best way to > > approach Literate Programming? The language has support for literate > > comments, but it seems that many common LP tools don't respect it. > > I'm also very interested in this, but ideally I would want the output to > be in some proportional font, with symbols like =>, ->, <- replaced with > arrows, etc. Also, it would be very nice to have the code automatically > column aligned (using heuristics). So am I. Is anybody willing to cooperate on the design of such a tool? The solution based on the package `listings' is really nice (especially because of its simplicity). However, if different proportional fonts are used for different lexical categories and the indentation is preserved (as it should be in Haskell), the package does not produce the best results. > I saw something that looks like this in Mark P. Jones's paper `Typing > Haskell in Haskell', but don't know how he did it. Perhaps he used ``Haskell Style for LaTeX2e'' (written by Manuel Chakravarty); just a guess. Or did it manually.
Bo"stjan Slivnik From patrikj@cs.chalmers.se Mon Feb 19 14:07:41 2001 From: patrikj@cs.chalmers.se (Patrik Jansson) Date: Mon, 19 Feb 2001 15:07:41 +0100 (MET) Subject: Literate Programming in Haskell? In-Reply-To: <200102191333.f1JDXiU03160@sliva.fri.uni-lj.si> Message-ID: On Mon, 19 Feb 2001, Bostjan Slivnik wrote: > > > I'm also very interested in this, but ideally I would want the output to > > be in some proportional font, with symbols like =>, ->, <- replaced with > > arrows, etc. Also, it would be very nice to have the code automatically > > column aligned (using heuristics). > > So am I. Is anybody willing to cooperate on the desing of such tool? A tool I am using is Ralf Hinze's lhs2tex http://www.informatik.uni-bonn.de/~ralf/Literate.tar.gz http://www.informatik.uni-bonn.de/~ralf/Guide.ps.gz It transforms .lhs files (with some formatting commands in LaTeX-style comments) to LaTeX. Development based on this idea is something I would be willing to participate in as I already have a fair amount of Haskell code/articles (read: my PhD thesis;-) in this format. Maybe Ralf can say something about his views on further development of lhs2tex (copyright etc.) by other people (us?). /Patrik Jansson PS. I have made some small improvements to lhs2tex locally and I seem to remember that one or two of those were actually needed to get it to run with my ghc version. From Malcolm.Wallace@cs.york.ac.uk Mon Feb 19 14:29:47 2001 From: Malcolm.Wallace@cs.york.ac.uk (Malcolm Wallace) Date: Mon, 19 Feb 2001 14:29:47 +0000 Subject: framework for composing monads? In-Reply-To: Message-ID: Elke Kasimir writes: > Practically, this raises the question of wether nhc and hbc support hslibs > or else I can provide a substitute for IORef's for these compilers. As Manuel reported, nhc98 has IORefs identical to ghc and Hugs, except in module IOExtras. 
For hbc, you have an equivalent interface in:

    module IOMutVar where
      data MutableVar a
      newVar   :: a -> IO (MutableVar a)
      readVar  :: MutableVar a -> IO a
      writeVar :: MutableVar a -> a -> IO a
      sameVar  :: MutableVar a -> MutableVar a -> Bool

Regards, Malcolm From dpt@math.harvard.edu Mon Feb 19 21:05:01 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Mon, 19 Feb 2001 16:05:01 -0500 Subject: Typing units correctly In-Reply-To: <0C682B70CE37BC4EADED9D375809768A56CD04@red-msg-04.redmond.corp.microsoft.com>; from akenn@microsoft.com on Thu, Feb 15, 2001 at 07:18:14AM -0800 References: <0C682B70CE37BC4EADED9D375809768A56CD04@red-msg-04.redmond.corp.microsoft.com> Message-ID: <20010219160501.A9640@math.harvard.edu> On Thu, Feb 15, 2001 at 07:18:14AM -0800, Andrew Kennedy wrote: > First, I think there's been a misunderstanding. I was referring to > the poster ("Christoph Grein") ... but from > what I've seen your (Dylan's) posts are well-informed. Sorry if > there was any confusion. It was easy to get confused, since I was quite clueless in the post in question. No big deal. > As you suspect, negative exponents are necessary. On a recent plane ride, I convinced myself that negative exponents are possible to provide along the same lines, although it's not very elegant: addition seems to require 13 separate cases, depending on the sign of each term, with the representation I picked. There are other representations. There is a binary representation, similar to Chris Okasaki's in the square matrices paper. > In fact, I have since solved the simplification problem mentioned > in my ESOP paper, and it would assign the second of these two > (equivalent) types, as it works from left to right in the type. I > guess it does boil down to choosing a nice basis; more precisely > it corresponds to the Hermite Normal Form from the theory of > integer matrices (more generally: modules over commutative rings). Great. I'll look it up.
I had run across similar problems in an unrelated context recently. > Which brings me to your last point: some more general system that > subsumes the rather specific dimension/unit types system. There's > been some nice work by Martin Sulzmann et al on constraint based > systems which can express dimensions. ... To my taste, though, > unless you want to express all sorts of other stuff in the type > system, the equational-unification-based approach that I described > in ESOP is simpler, even with the fix for let. One point of view is that anything you can do inconveniently by hand, as with the Peano integers example I posted, you ought to be able to do conveniently with good language support. I think you can do a lot of these constraint-based systems using PeanoAdd; I may try programming some. Language support does have advantages here: type signatures can often be simplified considerably, and can often be shown to be inconsistent. For instance, a <= b, a <= b+1 can be simplified to a <= b while (PeanoLessEqual a b, PeanoLessEqual a (Succ b)) which means more or less the same thing, cannot be simplified to (PeanoLessEqual a b) though probably a function could be written that converts between the two; but I don't see how to make it polymorphic enough. Your dimension types and Boolean algebra do add something really new that cannot be simulated like this: type inference and principal types. I wonder how they can be incorporated into Haskell in some reasonable and general way. Is a single kind of "dimensions" the right thing? What if, e.g., I care about the distinction between rational and integral exponents, or I want Z/2 torsion? How do I create a new dimension? Is there some function that creates a dimension from a string or some such? What is its type? Can I prevent dimensions from unrelated parts of the program from interfering? Best, Dylan Thurston From chak@cse.unsw.edu.au Mon Feb 19 13:26:08 2001 From: chak@cse.unsw.edu.au (Manuel M. T. 
Chakravarty) Date: Tue, 20 Feb 2001 00:26:08 +1100 Subject: Just for your fun and horror In-Reply-To: References: <20010218195457B.chak@cse.unsw.edu.au> Message-ID: <20010220002608C.chak@cse.unsw.edu.au> Jon Cast wrote, > Manuel M. T. Chakravarty writes: > > So, I guess, I have to extend my example to > > > > bar x = x + 42 > > > > I don't know if this counts, but gcc allows: > > int bar(int x)__attribute__(const) > { > return(x + 42); > } > > which is the exact C analogue of the Haskell syntax. Sorry, but I would say that it doesn't count as it is a compiler specific extension :-) Nevertheless, a good point. > The majority of `C > functions', I believe, (and especially in well-written code) are intended to > be true functions, not IO monads. They modify the state for > efficiency/ignorance reasons, not because of a conscious decision. Yes and no. I agree that they are often intended to be true functions. However, it is not only efficiency and ignorance which forces side effects on the C programmer. Restrictions of the language like the lack of call-by-reference arguments and (true) multi-valued returns force the use of pointers upon the programmer. Anyway, I don't want to do C bashing here - although, on this list, I might get away with it ;-) Cheers, Manuel From konsu@microsoft.com Tue Feb 20 02:07:17 2001 From: konsu@microsoft.com (Konst Sushenko) Date: Mon, 19 Feb 2001 18:07:17 -0800 Subject: newbie: running a state transformer in context of a state reader Message-ID: <1E27BBCDDE50914C99517B4D7EC5D5251A3401@RED-MSG-13.redmond.corp.microsoft.com> This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible. ------_=_NextPart_001_01C09AE1.D954882E Content-Type: text/plain; charset="iso-8859-1" hello, i have a parser which is a state transformer monad, and i need to implement a lookahead function, which applies a given parser but does not change the parser state. 
so i wrote a function which reads the state, applies the parser and restores the state (the State monad is derived from the paper "Monadic parser combinators" by Hutton/Meijer):

    type Parser a = State String Maybe a

    lookahead :: Parser a -> Parser a
    lookahead p = do { s <- fetch
                     ; x <- p
                     ; set s
                     ; return x
                     }

now i am curious if it is possible to run the given parser (state transformer) in a context of a state reader somehow, so that the state gets preserved automatically. something that would let me omit the calls to fetch and set methods.

i would appreciate any advice

thanks konst
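[For readers who want to run the code above, here is a minimal sketch of the Hutton/Meijer-style state transformer it assumes. The `ST`/`unST`, `fetch`, and `set` names follow the thread; the instances and the example parser `item` are assumptions, not Konst's actual module.]

```haskell
import Control.Monad (ap, liftM)

-- A sketch of a Hutton/Meijer-style state transformer over an inner monad m.
newtype State s m a = ST { unST :: s -> m (a, s) }

instance Monad m => Functor (State s m) where
  fmap = liftM

instance Monad m => Applicative (State s m) where
  pure a = ST (\s -> return (a, s))
  (<*>)  = ap

instance Monad m => Monad (State s m) where
  m >>= k = ST (\s -> unST m s >>= \(a, s') -> unST (k a) s')

type Parser a = State String Maybe a

fetch :: Monad m => State s m s          -- read the current state
fetch = ST (\s -> return (s, s))

set :: Monad m => s -> State s m ()      -- overwrite the current state
set s = ST (\_ -> return ((), s))

item :: Parser Char                      -- assumed example: consume one char
item = ST (\s -> case s of { [] -> Nothing; (c:cs) -> Just (c, cs) })

-- Konst's lookahead: save the state, run p, restore the state.
lookahead :: Parser a -> Parser a
lookahead p = do { s <- fetch; x <- p; set s; return x }

main :: IO ()
main = print (unST (do { c <- lookahead item; d <- item; return [c, d] }) "ab")
-- prints: Just ("aa","b")
```

The `lookahead item` call consumes 'a' but then restores the input, so the following `item` reads the same character again.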
From erik@meijcrosoft.com Tue Feb 20 06:00:47 2001 From: erik@meijcrosoft.com (Erik Meijer) Date: Mon, 19 Feb 2001 22:00:47 -0800 Subject: Literate Programming in Haskell? References: Message-ID: <002d01c09b02$79bf6480$0100a8c0@mshome.net> You also might take a look at Maarten Fokkinga's mira.sty http://www.cse.ogi.edu/~mbs/src/textools/ and Mark Shields' abbrev.sty which was derived from that http://www.cse.ogi.edu/~mbs/src/textools/. Erik ----- Original Message ----- From: "Patrik Jansson" To: Cc: Sent: Monday, February 19, 2001 6:07 AM Subject: Re: Literate Programming in Haskell? > On Mon, 19 Feb 2001, Bostjan Slivnik wrote: > > > > > I'm also very interested in this, but ideally I would want the output to > > > be in some proportional font, with symbols like =>, ->, <- replaced with > > > arrows, etc. Also, it would be very nice to have the code automatically > > > column aligned (using heuristics). > > > > So am I. Is anybody willing to cooperate on the desing of such tool? > > A tool I am using is Ralf Hinze's lhs2tex > > http://www.informatik.uni-bonn.de/~ralf/Literate.tar.gz > > http://www.informatik.uni-bonn.de/~ralf/Guide.ps.gz > > It transforms .lhs files (with some formatting commands in LaTeX-style > comments) to LaTeX. Development based on this idea is something I would be > willing to participate in as I already have a fair amount of Haskell > code/articles (read: my PhD thesis;-) in this format. > > Maybe Ralf can say something about his views on further development of > lhs2tex (copyright etc.) by other people (us?). > > /Patrik Jansson > > PS. I have made some small improvements to lhs2tex locally and I seem to > remember that one or two of those were actually needed to get it to > run with my ghc version.
> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe@haskell.org > http://www.haskell.org/mailman/listinfo/haskell-cafe From simonpj@microsoft.com Tue Feb 20 16:33:46 2001 From: simonpj@microsoft.com (Simon Peyton-Jones) Date: Tue, 20 Feb 2001 08:33:46 -0800 Subject: Primitive types and Prelude shenanigans Message-ID: <37DA476A2BC9F64C95379BF66BA2690260DB57@red-msg-09.redmond.corp.microsoft.com> I don't mind doing this, but can someone first give a brief justification about why it's a good idea, independent of the discussion that has taken place on this list? I'd like to add such an explanation to the code. Simon | -----Original Message----- | From: qrczak@knm.org.pl [mailto:qrczak@knm.org.pl] | Sent: 16 February 2001 17:42 | To: haskell-cafe@haskell.org | Subject: Re: Primitive types and Prelude shenanigans | | | Thu, 15 Feb 2001 20:56:20 -0800, William Lee Irwin III | pisze: | | > literal "5" gets mapped to (fromPositiveInteger 5) | > literal "-9" gets mapped to (fromNonZeroInteger -9) | | Note that when a discussed generic Prelude replacement | framework is done, and ghc's rules are changed to expand -9 to | negate (fromInteger 9) instead of fromInteger (-9), then you don't | need uglification of the fromInteger function to be able to define | types with only nonnegative numeric values. Just define your negate | in an appropriate class, different from the fromInteger's class. 
| | -- | __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ | \__/ | ^^ SYGNATURA ZASTEPCZA | QRCZAK | | | _______________________________________________ | Haskell-Cafe mailing list | Haskell-Cafe@haskell.org | http://www.haskell.org/mailman/listinfo/haskell-cafe | From qrczak@knm.org.pl Tue Feb 20 17:07:28 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 20 Feb 2001 17:07:28 GMT Subject: Primitive types and Prelude shenanigans References: <37DA476A2BC9F64C95379BF66BA2690260DB57@red-msg-09.redmond.corp.microsoft.com> Message-ID: Tue, 20 Feb 2001 08:33:46 -0800, Simon Peyton-Jones pisze: > I don't mind doing this, but can someone first give a brief > justification about why it's a good idea, independent of the > discussion that has taken place on this list? Suppose we build an alternative Prelude with different numeric class hierarchy, and decide that types for natural numbers should not have 'negate' defined, as it's obviously meaningless for them. We can put 'fromInteger' in some class and 'negate' in its subclass, and make only the former instance for natural numbers. So -9 :: Natural should be a compile error. Negation is already an error for all expressions other than literals when negate has a wrong type for them; literals should not be an exception. Negated literals are still treated in a special way in patterns, but -9 in a pattern should expand to testing equality with negate (fromInteger 9), not fromInteger (-9), to catch types which intentionally don't have negate defined. 
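[A sketch of the split class hierarchy Marcin describes, using renamed stand-in classes rather than an actual Prelude replacement — `FromInteger`, `Neg`, and `Natural` are all hypothetical names. Under the proposed desugaring, `-9 :: Natural` would require a `Neg` instance and so would be rejected at compile time, while nonnegative literals still work.]

```haskell
-- Hypothetical stand-ins for a split numeric-literal class hierarchy;
-- the primes avoid clashing with the real Prelude.
class FromInteger a where
  fromInteger' :: Integer -> a

class FromInteger a => Neg a where
  negate' :: a -> a

newtype Natural = Natural Integer deriving Show

-- Naturals support literals...
instance FromInteger Natural where
  fromInteger' n
    | n >= 0    = Natural n
    | otherwise = error "Natural: negative literal"

-- ...but deliberately no `Neg Natural` instance: if -9 expanded to
-- negate' (fromInteger' 9), the literal -9 at type Natural would fail
-- to typecheck instead of failing (or silently succeeding) at run time.

main :: IO ()
main = print (fromInteger' 9 :: Natural)
-- prints: Natural 9
```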
-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From qrczak@knm.org.pl Tue Feb 20 18:17:07 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 20 Feb 2001 18:17:07 GMT Subject: newbie: running a state transformer in context of a state reader References: <1E27BBCDDE50914C99517B4D7EC5D5251A3401@RED-MSG-13.redmond.corp.microsoft.com> Message-ID: Mon, 19 Feb 2001 18:07:17 -0800, Konst Sushenko pisze: > now i am curious if it is possible to run the given parser (state > transformer) in a context of a state reader somehow, so as the state > gets preserved automatically. something that would let me omit the > calls to fetch and set methods. It should be possible to do something like this: lookahead:: Parser a -> Parser a lookahead p = do { s <- fetch ; lift (evalState p s) } where evalState :: Monad m => State s m a -> s -> m a lift :: Monad m => m a -> State s m a are functions which should be available or implementable in a monad transformer framework. I don't have the Hutton/Meijer's paper at hand so I don't know if they provided them and under which names. Such functions are provided e.g. in the framework provided with ghc (by Andy Gill, inspired by Mark P Jones' paper "Functional Programming with Overloading and Higher-Order Polymorphism"). This definition of lookahead uses a separate state transformer thread instead of making changes in place and undoing them later. I don't think that it could make sense to convert a state transformer to a state reader by replacing its internals, because p does want to transform the state locally; a value of type Parser a represents a state transformation. The changes must be isolated from the main parser, but they must happen in some context. 
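[A sketch of the two helpers Marcin names, for a Hutton/Meijer-style transformer; the `ST`/`unST` names follow Konst's code, and the instances are assumed boilerplate. The key point is that the `>>=` and `return` inside `evalState` and `lift` must be those of the inner monad `m`, not of `State s m`.]

```haskell
import Control.Monad (ap, liftM)

newtype State s m a = ST { unST :: s -> m (a, s) }

instance Monad m => Functor (State s m) where fmap = liftM
instance Monad m => Applicative (State s m) where
  pure a = ST (\s -> return (a, s))
  (<*>)  = ap
instance Monad m => Monad (State s m) where
  m >>= k = ST (\s -> unST m s >>= \(a, s') -> unST (k a) s')

fetch :: Monad m => State s m s
fetch = ST (\s -> return (s, s))

-- Run a transformer on a given state, keeping only the result;
-- the >>= here is the inner monad's.
evalState :: Monad m => State s m a -> s -> m a
evalState m s = unST m s >>= \(a, _) -> return a

-- Promote an inner-monad action; again the inner monad's >>=.
lift :: Monad m => m a -> State s m a
lift m = ST (\s -> m >>= \a -> return (a, s))

type Parser a = State String Maybe a

item :: Parser Char    -- assumed example parser: consume one character
item = ST (\s -> case s of { [] -> Nothing; (c:cs) -> Just (c, cs) })

-- Marcin's lookahead: run p in a separate state thread, keep its result.
lookahead :: Parser a -> Parser a
lookahead p = do { s <- fetch; lift (evalState p s) }

main :: IO ()
main = print (unST (do { c <- lookahead item; d <- item; return [c, d] }) "ab")
-- prints: Just ("aa","b")
```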
-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK From jhf@lanl.gov Tue Feb 20 18:40:24 2001 From: jhf@lanl.gov (Joe Fasel) Date: Tue, 20 Feb 2001 11:40:24 -0700 (MST) Subject: Just for your fun and horror In-Reply-To: Message-ID: Despite my arguments (and Jon's and others') about the appropriateness of "return", I must confess that Ham's "deliver" is excellent terminology. --Joe Joseph H. Fasel, Ph.D. email: jhf@lanl.gov Technology Modeling and Analysis phone: +1 505 667 7158 University of California fax: +1 505 667 2960 Los Alamos National Laboratory post: TSA-7 MS F609; Los Alamos, NM 87545 From theo@engr.mun.ca Tue Feb 20 21:19:54 2001 From: theo@engr.mun.ca (Theodore Norvell) Date: Tue, 20 Feb 2001 17:49:54 -0330 Subject: Just for your fun and horror References: <20010218195457B.chak@cse.unsw.edu.au> <20010220002608C.chak@cse.unsw.edu.au> Message-ID: <3A92DF79.34BDF1B6@engr.mun.ca> Some comments in this discussion have said that "return" is a good name for "return" since it is analogous to "return" in C (and C++ and Fortran and Java etc.). At first I thought "good point", but after thinking about it a bit, I'd like to argue the contrary. "return" in Haskell is not analogous to "return" in C. Obviously the analogy, if there is one, is strongest in monads that model imperative effects, like a state monad or the IO monad, so in the following I'll assume the monads involved are of this nature.

First consider this C subroutine:

    int f() {
        const int i = g() ;
        return h(i) ;  // where h may have side effects.
    }

The "return" here serves to indicate the value returned by the subroutine. In Haskell's do syntax, there is no need for this sort of "return" because the value delivered by a "do" is the value delivered by its syntactically last computation, i.e. "return" in the C sense is entirely implicit.
Haskell: f = do i <- g() h(i) // No return If "return" in Haskell were analogous to "return" in C, we could write f = do i <- g() return h(i) Indeed you can write that (and I sometimes do!), but the meaning is different; in C executing the "return" executes the side effects of h(i); in Haskell "executing" the "return" returns h(i) with its side effects unexecuted. Now consider Haskell's "return". In Haskell, for the relevant monads, "return" promotes a "pure" expression to an "impure" expression with a side effect; of course the side effect is trivial, but it is still there. In C, as others have pointed out, there is, from a language definition point of view, no distinction between pure and impure expressions, so there is no need for such a promotion operator; and C does not have one. Consider Haskell: m = do j <- return e n(j) C: void m() { const int j = e ; /* No return */ n(j) ; } In C, but not Haskell, "return" has important implications for flow of control. Consider C: void w() { while(a) { if( b ) return c ; d ; } In Haskell there is no close equivalent (which I say is a good thing). The closest analogue is to throw an exception, which is in a sense the opposite of a "return". If you wrote a denotational semantics of C, you'd find that the denotation of "return" is very similar to the denotation of "throw". The implementation of return is easier, since it is statically nested in its "handler", but this distinction probably won't show up in the semantics. In Haskell "return" has no implications for flow of control. Consider Haskell x = do y return e z In we have C: void x() { y() ; e ; // semicolon, no return z() ; } The above code is silly, but the point is that if C's "return" were analogous, we could write int x() { y() ; return e ; z() ; } which is not analogous to the Haskell. 
This example also shows another type distinction, since in the Haskell version the type of e can be anything, yet in the second C version the type of e must be the same as return type of x(). In short, "return" in C introduces an important side effect (returning from the function) whereas "return" in any Haskell monad should introduce only a trivial (identity) side effect. It could be argued that there is a loose analogy in that "return" in Haskell converts an expression to a command, just as "return" in C converts an expression to a command. But in C, putting a semicolon after an expression also converts an expression to a command, and as the last example shows, this is a better analogue since, unlike "return" there are no additional nontrivial effects introduced. In summary: (0) There is no analogue, in C, to Haskell's return because the is no analogue to Haskell's type distinction between expressions without side effects (pure expressions) and expressions with side effects. (1) The main point of "return" in C is to introduce a nontrivial side effect, and the rule in Haskell is that "return" introduces the trivial side effect. The analogue of C's "return" can instead be built on top of an exception model. I'm not saying that future Haskells shouldn't call "return" "return", or that "return" is not a good name for "return", just that the analogy does not hold up. Cheers, Theo Norvell ---------------------------- Dr. Theodore Norvell theo@engr.mun.ca Electrical and Computer Engineering http://www.engr.mun.ca/~theo Engineering and Applied Science Phone: (709) 737-8962 Memorial University of Newfoundland Fax: (709) 737-4042 St. 
John's, NF, Canada, A1B 3X5

From konsu@microsoft.com Wed Feb 21 01:52:33 2001 From: konsu@microsoft.com (Konst Sushenko) Date: Tue, 20 Feb 2001 17:52:33 -0800 Subject: newbie: running a state transformer in context of a state reader Message-ID: <1E27BBCDDE50914C99517B4D7EC5D5251A3402@RED-MSG-13.redmond.corp.microsoft.com>

Marcin, thanks for your help.

to implement the lift functionality i added these well known definitions:

    class (Monad m, Monad (t m)) => TransMonad t m where
        lift :: m a -> t m a

    instance (Monad m, Monad (State s m)) => TransMonad (State s) m where
        lift m = ST (\s -> m >>= (\a -> return (a,s)))

but my lookahead function

    lookahead p = do { s <- fetch
                     ; lift (evalState p s)
                     }

is typed as

    lookahead :: State MyState Maybe a -> State MyState Maybe (a,MyState)

but i need

    lookahead :: State MyState Maybe a -> State MyState Maybe a

apparently, the (>>=) and return used in the definition of lift above are for the monad (State s m), and not monad m...

everything works if i do not use the TransMonad class, but define lift manually as:

    lift :: Parser a -> Parser a
    lift m = ST (\s -> unST m s >>= (\(a,_) -> return (a,s)))

but this looks like a special case of the lift above, except the right hand side of 'bind' is executed in the right context. i am still missing something

konst

-----Original Message-----
From: Marcin 'Qrczak' Kowalczyk [mailto:qrczak@knm.org.pl]
Sent: Tuesday, February 20, 2001 10:17 AM
To: haskell-cafe@haskell.org
Subject: Re: newbie: running a state transformer in context of a state reader

Mon, 19 Feb 2001 18:07:17 -0800, Konst Sushenko pisze:

> now i am curious if it is possible to run the given parser (state
> transformer) in a context of a state reader somehow, so as the state
> gets preserved automatically. something that would let me omit the
> calls to fetch and set methods.
It should be possible to do something like this:

    lookahead :: Parser a -> Parser a
    lookahead p = do { s <- fetch
                     ; lift (evalState p s)
                     }

where

    evalState :: Monad m => State s m a -> s -> m a
    lift      :: Monad m => m a -> State s m a

are functions which should be available or implementable in a monad transformer framework. I don't have the Hutton/Meijer paper at hand, so I don't know if they provided them and under which names. Such functions are provided e.g. in the framework shipped with ghc (by Andy Gill, inspired by Mark P Jones' paper "Functional Programming with Overloading and Higher-Order Polymorphism").

This definition of lookahead uses a separate state transformer thread instead of making changes in place and undoing them later. I don't think that it could make sense to convert a state transformer to a state reader by replacing its internals, because p does want to transform the state locally; a value of type Parser a represents a state transformation. The changes must be isolated from the main parser, but they must happen in some context.

-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTEPCZA QRCZAK

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

From fjh@cs.mu.oz.au Wed Feb 21 01:55:37 2001 From: fjh@cs.mu.oz.au (Fergus Henderson) Date: Wed, 21 Feb 2001 12:55:37 +1100 Subject: Primitive types and Prelude shenanigans In-Reply-To: <37DA476A2BC9F64C95379BF66BA2690260DB57@red-msg-09.redmond.corp.microsoft.com> References: <37DA476A2BC9F64C95379BF66BA2690260DB57@red-msg-09.redmond.corp.microsoft.com> Message-ID: <20010221125537.A11757@hg.cs.mu.oz.au>

On 20-Feb-2001, Simon Peyton-Jones wrote:

> I don't mind doing this, but can someone first give a brief justification
> about why it's a good idea, independent of the discussion that
> has taken place on this list? I'd like to add such an explanation
> to the code.
How about "Because the Haskell 98 Report says so"? ;-)

It's a pity there's no Haskell 98 Rationale, like the Ada 95 Rationale... if there were, then the documentation in the ghc code could just point at it.

----------

There is however one issue with this change that concerns me. I'm wondering about what happens with the most negative Int. E.g. assuming 32-bit Int (as in Hugs and ghc), what happens with the following code?

    minint :: Int
    minint = -2147483648

I think the rules in the Haskell report mean that you need to write that example as e.g.

    minint :: Int
    minint = -2147483647 - 1

ghc currently allows the original version, since it treats negative literals directly, rather than in the manner specified in the Haskell report. ghc also allows `(negate (fromInteger 2147483648)) :: Int', apparently because ghc's `fromInteger' for Int just extracts the bottom bits (?), so changing ghc to respect the Haskell report's treatment of negative literals won't affect this code.

But the code does not work in Hugs, because Hugs follows the Haskell report's treatment of negative literals, and the `fromInteger' in Hugs does bounds checking -- Hugs throws an exception from `fromInteger'.

The documentation in the Haskell report does not say what `fromInteger' should do for `Int', but the Hugs behaviour definitely seems preferable, IMHO. However, this leads to the unfortunate complication described above when writing a literal for the most negative Int.

Of course using `minBound' is a much nicer way of finding out the minimum integer, at least in hand-written code. But this issue might be a potential pitfall for programs that automatically generate Haskell code.

-- Fergus Henderson | "I have always known that the pursuit | of excellence is a lethal habit" WWW: | -- the last words of T. S. Garp.
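Fergus's point can be tried out today with the explicitly 32-bit Int32 type from Data.Int, used here as a stand-in for a 32-bit Int so the example does not depend on the native Int width (the names are mine, not from the message):

```haskell
import Data.Int (Int32)

-- The portable spelling: no literal is involved at all.
minint :: Int32
minint = minBound

-- The spelling forced by the report's reading of negative literals:
-- -2147483648 parses as negate (fromInteger 2147483648), and 2147483648
-- itself is out of range for a bounds-checked 32-bit fromInteger.
minint' :: Int32
minint' = -2147483647 - 1
```

Both definitions denote the same value, -2147483648, but only the second survives a `fromInteger` that does bounds checking.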
From qrczak@knm.org.pl Wed Feb 21 07:04:02 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 21 Feb 2001 07:04:02 GMT Subject: Primitive types and Prelude shenanigans References: <37DA476A2BC9F64C95379BF66BA2690260DB57@red-msg-09.redmond.corp.microsoft.com> <20010221125537.A11757@hg.cs.mu.oz.au> Message-ID:

Wed, 21 Feb 2001 12:55:37 +1100, Fergus Henderson pisze:

> The documentation in the Haskell report does not say what
> `fromInteger' should do for `Int', but the Hugs behaviour definitely
> seems preferable, IMHO.

Sometimes yes. But for playing with Word8, Int8, CChar etc. it's sometimes needed to just cast bits without overflow checking, to convert between "signed bytes" and "unsigned bytes".

-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK

From qrczak@knm.org.pl Wed Feb 21 07:00:39 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 21 Feb 2001 07:00:39 GMT Subject: newbie: running a state transformer in context of a state reader References: <1E27BBCDDE50914C99517B4D7EC5D5251A3402@RED-MSG-13.redmond.corp.microsoft.com> Message-ID:

Tue, 20 Feb 2001 17:52:33 -0800, Konst Sushenko pisze:

> lookahead p = do { s <- fetch
>                  ; lift (evalState p s)
>                  }
>
> is typed as
>
>     lookahead :: State MyState Maybe a -> State MyState Maybe (a,MyState)
>
> but i need
>
>     lookahead :: State MyState Maybe a -> State MyState Maybe a

    myEvalState = liftM fst yourEvalState

Andy Gill's monadic modules provide evalState as a wrapper for runState, which throws away the state component returned.
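The pieces discussed in this thread can be assembled into a small self-contained sketch: a Hutton/Meijer-style parameterised state transformer with `fetch`, `lift`, an `evalState` that throws away the final state, and the `lookahead` built from them. The definitions are my own reconstruction, not the library versions, and include the Functor/Applicative instances modern GHC requires:

```haskell
import Control.Monad (ap, liftM)

newtype State s m a = ST { unST :: s -> m (a, s) }

instance Monad m => Functor (State s m) where
  fmap = liftM

instance Monad m => Applicative (State s m) where
  pure a = ST (\s -> return (a, s))
  (<*>)  = ap

instance Monad m => Monad (State s m) where
  ST f >>= k = ST (\s -> f s >>= \(a, s') -> unST (k a) s')

fetch :: Monad m => State s m s
fetch = ST (\s -> return (s, s))

-- evalState as a wrapper that discards the returned state component
evalState :: Monad m => State s m a -> s -> m a
evalState p s = fmap fst (unST p s)

lift :: Monad m => m a -> State s m a
lift m = ST (\s -> m >>= \a -> return (a, s))

-- lookahead runs p in a private state thread; the main state is untouched
lookahead :: Monad m => State s m a -> State s m a
lookahead p = do { s <- fetch
                 ; lift (evalState p s)
                 }

-- A toy parser to exercise it:
type Parser a = State String Maybe a

item :: Parser Char
item = ST (\s -> case s of
                   []     -> Nothing
                   (c:cs) -> Just (c, cs))
```

With these definitions, `unST (lookahead item >> item) "ab"` yields `Just ('a', "b")`: the lookahead sees 'a' without consuming it, so the subsequent `item` consumes it.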
-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK

From fjh@cs.mu.oz.au Wed Feb 21 12:05:41 2001 From: fjh@cs.mu.oz.au (Fergus Henderson) Date: Wed, 21 Feb 2001 23:05:41 +1100 Subject: Primitive types and Prelude shenanigans In-Reply-To: References: <37DA476A2BC9F64C95379BF66BA2690260DB57@red-msg-09.redmond.corp.microsoft.com> <20010221125537.A11757@hg.cs.mu.oz.au> Message-ID: <20010221230541.A14563@hg.cs.mu.oz.au>

On 21-Feb-2001, Marcin 'Qrczak' Kowalczyk wrote:

> Wed, 21 Feb 2001 12:55:37 +1100, Fergus Henderson pisze:
>
> > The documentation in the Haskell report does not say what
> > `fromInteger' should do for `Int', but the Hugs behaviour definitely
> > seems preferable, IMHO.
>
> Sometimes yes. But for playing with Word8, Int8, CChar etc. it's
> sometimes needed to just cast bits without overflow checking, to
> convert between "signed bytes" and "unsigned bytes".

Both are desirable in different situations. But if you want to ignore overflow, you should have to say so explicitly. `fromInteger' is implicitly applied to literals, and implicit truncation is dangerous, so `fromInteger' should not truncate. There should be a different function for conversions that silently truncate. You can implement such a function yourself, of course, e.g. as follows:

    trunc :: (Bounded a, Integral a) => Integer -> a
    trunc x = res
      where
        min, max, size, modulus, result :: Integer
        min     = toInteger (minBound `asTypeOf` res)
        max     = toInteger (maxBound `asTypeOf` res)
        size    = max - min + 1
        modulus = x `mod` size
        result  = if modulus > max then modulus - size else modulus
        res     = fromInteger result

But it is probably worth including something like this in the standard library, perhaps as a type class method.

-- Fergus Henderson | "I have always known that the pursuit | of excellence is a lethal habit" WWW: | -- the last words of T. S. Garp.

From tweed@compsci.bristol.ac.uk Wed Feb 21 16:29:32 2001 From: tweed@compsci.bristol.ac.uk (D.
Tweed) Date: Wed, 21 Feb 2001 16:29:32 +0000 (GMT) Subject: Inferring from context declarations In-Reply-To: <3A93E714.AC4D391D@ps.uni-sb.de> Message-ID:

George Russell wrote:

> (3) Simon Peyton Jones' comments about dictionary passing are a red herring,
> since they assume a particular form of compiler. Various (MLj, MLton)
> ML compilers already inline out all polymorphism. Some C++ compilers/linkers
> do it in a rather crude way as well, for templates. If you can do it,
> you can forget about dictionary passing.

[Standard disclaimer: I write prototype code that's never `finished' to ever-changing specs in a university environment; other people probably view things differently.]

I'm not sure I'd agree about this. Note that there are two levels: inlining polymorphic functions at the call site, and `instantiating polymorphic functions at each usage type' without doing the inlining. C++ compilers have to do at least the second because of the prevailing philosophy of what templates are (i.e., that they're safer function-macros). Some of the time this is what's wanted, but sometimes it imposes annoying compilation issues (the source code of the polymorphic function has to be available every time you want to use the function on a new class, even if it's not time critical, which isn't the case for Haskell). I also often write/generate very large polymorphic functions that, in an ideal world (where compilers can do _serious, serious_ magic), I'd prefer to work using something similar to a dictionary-passing implementation. I'd argue that keeping flexibility about polymorphic function implementation (which assumes some default but can be overridden by the programmer) in Haskell compilers is a Good Thing.

Given that, unless computing hardware really revolutionises, the `speed/memory' profile of today's desktop PC is going to recur in wearable computers/PDAs/etc., I believe that in 20 years' time we'll still be figuring out the same trade-offs, and so need to keep flexibility.
___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law: however many computers
email: tweed@cs.bris.ac.uk     |  you have, half your time is spent
work tel: (0117) 954-5250      |  waiting for compilations to finish.

From ger@tzi.de Wed Feb 21 16:56:01 2001 From: ger@tzi.de (George Russell) Date: Wed, 21 Feb 2001 17:56:01 +0100 Subject: Inferring from context declarations References: Message-ID: <3A93F321.5F38DF8E@tzi.de>

Hmm, this throwaway comment is getting interesting. But please cc any replies to me as I don't normally subscribe to haskell-cafe . . .

"D. Tweed" wrote:
[snip]

> Some of the
> time this is what's wanted, but sometimes it imposes annoying compilation
> issues (the source code of the polymorphic function has to be available
> everytime you want to use the function on a new class, even if its not
> time critical, which isn't the case for Haskell).

You don't need the original source code, but some pickled form of it, like that which GHC already outputs to .hi files when you ask it to inline functions.

> I also often
> write/generate very large polymorphic functions that in an ideal world
> (where compilers are can do _serious, serious_ magic) I'd prefer to work
> using something similar to a dictionary passing implementation.

Why then? If it's memory size, consider that the really important thing is not how much you need in virtual memory, but how much you need in the various caches. Inlining will only use more cache if you are using two different applications of the same large polymorphic function at approximately the same time. That is certainly possible, and, like all changes, you will be able to construct examples where inlining polymorphism results in slower execution time, but after my experience with MLj I find it hard to believe that it is not a good idea in general.
> I'd argue
> that keeping flexibility about polymorphic function implementation (which
> assumes some default but can be overridden by the programmer) in Haskell
> compilers is a Good Thing.

I'm certainly not in favour of decreeing that Haskell compilers MUST inline polymorphism.

> Given that, unless computing hardware really revolutionises, the
> `speed/memory' profile of todays desktop PC is going to recurr in wearable
> computers/PDAs/etc I believe that in 20 years time we'll still be figuring
> out the same trade-offs, and so need to keep flexibility.

Extrapolating from the last few decades I predict that

(1) memory will get much, much bigger.
(2) CPUs will get faster.
(3) memory access times will get faster, but the ratio of memory access time to CPU processing time will continue to increase.

The consequence of the last point is that parallelism and pipelining are going to become more and more important. Already the amount of logic required by a Pentium to try to execute several operations at once is simply incredible, but it only works if you have comparatively long stretches of code where the processor can guess what is going to happen. You are basically stuffed if every three instructions the code executes a jump to a location the processor can't foresee. Thus if you compile Haskell as it is compiled today, the processor will spend about 10% of its time actually processing and the other 90% waiting on memory. If Haskell compilers are to take much advantage of processor speeds, I don't see any solution but to inline more and more.

From Tom.Pledger@peace.com Wed Feb 21 20:23:03 2001 From: Tom.Pledger@peace.com (Tom Pledger) Date: Thu, 22 Feb 2001 09:23:03 +1300 Subject: making a Set In-Reply-To: References: Message-ID: <14996.9127.998963.43464@waytogo.peace.co.nz>

(moved to haskell-cafe)

G Murali writes:

 | hi there,
 |
 | I'm trying to get my concepts right here..
 | can you please help in defining a function like
 |
 |     makeSet :: (a -> Bool) -> Set a
 |
 | I understand that we need a new type Set like
 |
 |     data Set a = Set (a -> Bool)
 |
 | what puzzles me is how to apply the function to all elements
 | belonging to type a.

What other operations do you need to implement for "Set a"? Is there anything that can't be expressed in terms of those set membership functions you already have?

From ketil@ii.uib.no Thu Feb 22 10:15:39 2001 From: ketil@ii.uib.no (Ketil Malde) Date: 22 Feb 2001 11:15:39 +0100 Subject: Primitive types and Prelude shenanigans In-Reply-To: qrczak@knm.org.pl's message of "20 Feb 2001 17:07:28 GMT" References: <37DA476A2BC9F64C95379BF66BA2690260DB57@red-msg-09.redmond.corp.microsoft.com> Message-ID:

qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) writes:

> 'negate' defined, as it's obviously meaningless for them. We can put
> 'fromInteger' in some class and 'negate' in its subclass, and make
> only the former instance for natural numbers.

Nitpick: not necessarily its subclass, either. We can probably imagine types where negate makes sense but fromInteger does not, as well as vice versa.

-kzm
-- If I haven't seen further, it is by standing in the footprints of giants

From tweed@compsci.bristol.ac.uk Thu Feb 22 13:28:50 2001 From: tweed@compsci.bristol.ac.uk (D. Tweed) Date: Thu, 22 Feb 2001 13:28:50 +0000 (GMT) Subject: Inferring from context declarations In-Reply-To: <3A93F321.5F38DF8E@tzi.de> Message-ID:

On Wed, 21 Feb 2001, George Russell wrote:

> Hmm, this throwaway comment is getting interesting. But please cc any replies to
> me as I don't normally subscribe to haskell-cafe . . .

To be honest, I suspect I was talking complete & unadulterated rubbish. (Not that that's unusual.)
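Returning to Tom Pledger's question a few messages up, about which operations a characteristic-function Set supports: a minimal sketch (the operation names are my own, not from the thread) shows what does and does not work with this representation.

```haskell
newtype Set a = Set (a -> Bool)

makeSet :: (a -> Bool) -> Set a
makeSet = Set

member :: Set a -> a -> Bool
member (Set f) = f

union, intersection :: Set a -> Set a -> Set a
union        (Set f) (Set g) = Set (\x -> f x || g x)
intersection (Set f) (Set g) = Set (\x -> f x && g x)

-- What you cannot write is a function "applying f to all elements of
-- type a": a characteristic function can only be queried, not traversed.
```

For example, `member (union (makeSet even) (makeSet (> 10))) 11` is True, even though neither set alone contains 11's evenness or both conditions.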
___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law: however many computers
email: tweed@cs.bris.ac.uk     |  you have, half your time is spent
work tel: (0117) 954-5250      |  waiting for compilations to finish.

From lars@prover.com Fri Feb 23 09:19:23 2001 From: lars@prover.com (Lars Lundgren) Date: Fri, 23 Feb 2001 10:19:23 +0100 (CET) Subject: unliftM In-Reply-To: Message-ID:

On 23 Feb 2001, Julian Assange wrote:

> Is there a standard construct for something of this ilk:
>
>     unliftM :: Monad m a -> a

I do not know if it is a standard, but every monad usually has a "runMonad" function. For ST you have runST, for IO you have unsafePerformIO, and for your own monad you need to define it. Note that if you use unsafePerformIO, the action must not have (important) side effects. The burden of proof is upon YOU.

/Lars L

From lars@prover.com Mon Feb 26 15:40:50 2001 From: lars@prover.com (Lars Lundgren) Date: Mon, 26 Feb 2001 16:40:50 +0100 (CET) Subject: Tree handling In-Reply-To: <000901c09ffd$39847c20$ec260ac1@martin> Message-ID:

On Mon, 26 Feb 2001, Martin Gustafsson wrote:

> Hello
>
> I'm a haskell newbie that tries to create a tree with arbitrary numbers
> of children. I create the data structure but i can't do anything on it;
> can someone please help me with a small function that sums the values
> of the leafs, so I don't lose my hair
> so fast.
>
> The data structure looks like this, and a binary tree built with it
> would look like this:
>
>     data GeneralTree = Nil | Node (Integer,[GeneralTree])

As you said you were a newbie, I will ask a few questions about your data structure. Do you know that there is no need to tuple the elements in the Node if you do not want to? You can write:

    data GeneralTree = Nil | Node Integer [GeneralTree]

What is the intended difference between (Node 5 []) and (Node 5 [Nil])?

>     tree =
>       (20,
>        [
>         (-20,[(30,[Nil]),(20,[Nil])]),
>         (40,[(65,[Nil]),(-40,[Nil])])
>        ]
>       )

This is not of type GeneralTree! (And its layout is messed up.) Hint: write the type of every expression you write, and debugging will be much easier.

    tree :: GeneralTree

    ERROR tree.hs:8 - Type error in explicitly typed binding
    *** Term           : tree
    *** Type           : (a,[(b,[(c,[GeneralTree])])])
    *** Does not match : GeneralTree

This is an expression with type GeneralTree:

    tree :: GeneralTree
    tree = Node 20 [Node (-20) [Node 30 [Nil], Node 20 [Nil]],
                    Node 40 [Node 65 [Nil], Node (-40) [Nil]]]

Now it should be very easy to write a function to sum the nodes in a tree:

    sumTree :: GeneralTree -> Integer
    sumTree Nil = 0
    sumTree (Node n ts) = ... write this yourself

hint - sum and map are very useful functions (defined in the prelude), as is recursion.

Good luck!
/Lars L

From p.turner@computer.org Mon Feb 26 14:38:36 2001 From: p.turner@computer.org (Scott Turner) Date: Mon, 26 Feb 2001 09:38:36 -0500 Subject: stack overflow In-Reply-To: <37DA476A2BC9F64C95379BF66BA2690260D8E2@red-msg-09.redmond. corp.microsoft.com> Message-ID: <3.0.5.32.20010226093836.00bed100@billygoat.org>

At 01:26 2001-02-26 -0800, Simon Peyton-Jones wrote:

>And so on. So we build up a giant chain of thunks.
>Finally we evaluate the giant chain, and that builds up
>a giant stack.
> ...
>If GHC were to inline foldl more vigorously, this would [not] happen.
I'd hate to have my programs rely on implementation-dependent optimizations.

BTW, I've wondered why the Prelude provides foldl, which commonly leads to this trap, and does not provide the strict variant foldl', which is useful enough that it's defined internal to the Hugs prelude. Simple prejudice against strictness?

-- Scott Turner p.turner@computer.org http://www.billygoat.org/pkturner

From konsu@microsoft.com Mon Feb 26 21:07:51 2001 From: konsu@microsoft.com (Konst Sushenko) Date: Mon, 26 Feb 2001 13:07:51 -0800 Subject: examples using built-in state monad Message-ID: <1E27BBCDDE50914C99517B4D7EC5D5251A3406@RED-MSG-13.redmond.corp.microsoft.com>

hello,

in my program i used my own parameterised state transformer monad, which is well described in literature:

    newtype State s m a = ST (s -> m (a,s))

ghc and hugs contain a built-in implementation of the state monad ST.

is it the same thing? the documentation is not clear on that.

if it is the same, is it faster?

also, could someone please recommend any samples that use the built-in ST monad?

thanks
konst
From dpt@math.harvard.edu Tue Feb 27 18:00:26 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Tue, 27 Feb 2001 13:00:26 -0500 Subject: Primitive types and Prelude shenanigans In-Reply-To: ; from qrczak@knm.org.pl on Fri, Feb 16, 2001 at 05:13:10PM +0000 References: <37DA476A2BC9F64C95379BF66BA269025EAE39@red-msg-09.redmond.corp.microsoft.com> Message-ID: <20010227130026.A18316@math.harvard.edu>

On Fri, Feb 16, 2001 at 05:13:10PM +0000, Marcin 'Qrczak' Kowalczyk wrote:

> Fri, 16 Feb 2001 04:14:24 -0800, Simon Peyton-Jones pisze:
> > Here I think the right thing is to say that desugaring for boolean
> > constructs uses a function 'if' assumed to have type
> >
> >     if :: forall b. Bool -> b -> b -> b
>
> What if somebody wants to make 'if' overloaded on more types than
> some constant type called Bool?
>
>     class Condition a where
>         if :: a -> b -> b -> b

(Note that Hawk does almost exactly this.)

> Generally I don't feel the need of allowing to replace if, Bool and
> everything else with custom definitions, especially when there is no
> single obvious way.

Why not just let

    if x then y else z

be syntactic sugar for

    Prelude.ifThenElse x y z

when some flag is given? That allows a Prelude hacker to do whatever she wants, from the standard

    ifThenElse :: Bool -> x -> x -> x
    ifThenElse True  x _ = x
    ifThenElse False _ y = y

to something like

    class (Boolean a) => Condition a b where
        ifThenElse :: a -> b -> b -> b

("if" is a keyword, so cannot be used as a function name. Hawk uses "mux" for this operation.) Compilers are good enough to inline the standard definition (and compile it away when appropriate), right?

Pattern guards can be turned into "ifThenElse" as specified in section 3.17.3 of the Haskell Report. Or maybe there should be a separate function "evalGuard", which is ordinarily of type

    evalGuard :: [(Bool, a)] -> a -> a

(taking the list of guards and RHSs, together with the default case).
It's less clear that compilers would be able to produce good code in this case. But this would have to be changed: an alternative of the form

    pat -> exp where decls

is treated as shorthand for:

    pat | True -> exp where decls

Best,
Dylan Thurston

From bhalchin@hotmail.com Wed Feb 28 07:44:42 2001 From: bhalchin@hotmail.com (Bill Halchin) Date: Wed, 28 Feb 2001 07:44:42 Subject: Literate Programming in Haskell? Message-ID:

Hello Haskell Community,

Probably somebody else has already brought this issue up. Why can't we have some kind of integrated literate programming model, where I can have hyperlinks in comments to documents represented in XML? In other words, a kind of seamless literate programming environment in Haskell with XML, i.e. Haskell and XML are seamless. E.g. here is a step in the right direction: writing Literate Haskell in HTML!

http://www.numeric-quest.com/haskell/

The stuff at this URL is pretty cool, i.e. "Haskell" scripts written in HTML. I want to also see hyperlinks to XML docs in Literate Haskell comments, or maybe even to Haskell code!

Regards,
Bill Halchin

>From: "Erik Meijer"
>To: "Patrik Jansson" ,
>CC:
>Subject: Re: Literate Programming in Haskell?
>Date: Mon, 19 Feb 2001 22:00:47 -0800
>
>You also might take a look at Maarten Fokkinga's mira.sty
>http://www.cse.ogi.edu/~mbs/src/textools/ and Mark Shields' abbrev.sty
>which was derived from that http://www.cse.ogi.edu/~mbs/src/textools/.
>
>Erik
>----- Original Message -----
>From: "Patrik Jansson"
>To:
>Cc:
>Sent: Monday, February 19, 2001 6:07 AM
>Subject: Re: Literate Programming in Haskell?
>
> > On Mon, 19 Feb 2001, Bostjan Slivnik wrote:
> >
> > > > I'm also very interested in this, but ideally I would want the output
> > > > to be in some proportional font, with symbols like =>, ->, <- replaced
> > > > with arrows, etc. Also, it would be very nice to have the code
> > > > automatically column aligned (using heuristics).
> > >
> > > So am I.
Is anybody willing to cooperate on the design of such a tool?
> >
> > A tool I am using is Ralf Hinze's lhs2tex
> >
> >     http://www.informatik.uni-bonn.de/~ralf/Literate.tar.gz
> >     http://www.informatik.uni-bonn.de/~ralf/Guide.ps.gz
> >
> > It transforms .lhs files (with some formatting commands in LaTeX-style
> > comments) to LaTeX. Development based on this idea is something I would
> > be willing to participate in, as I already have a fair amount of Haskell
> > code/articles (read: my PhD thesis ;-) in this format.
> >
> > Maybe Ralf can say something about his views on further development of
> > lhs2tex (copyright etc.) by other people (us?).
> >
> > /Patrik Jansson
> >
> > PS. I have made some small improvements to lhs2tex locally and I seem to
> > remember that one or two of those were actually needed to get it to
> > run with my ghc version.

From simonpj@microsoft.com Wed Feb 28 10:05:27 2001 From: simonpj@microsoft.com (Simon Peyton-Jones) Date: Wed, 28 Feb 2001 02:05:27 -0800 Subject: Primitive types and Prelude shenanigans Message-ID: <37DA476A2BC9F64C95379BF66BA2690260DB9B@red-msg-09.redmond.corp.microsoft.com>

| Why not just let
|
|     if x then y else z
|
| be syntactic sugar for
|
|     Prelude.ifThenElse x y z

The burden of my original message was that

 a) this is reasonable, but
 b) it would have to become the *defined behaviour*

As you say, the "defined behaviour" would have to cover guards as well, and I'm not absolutely certain what else.
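The desugaring targets discussed in this thread can be written out as ordinary Haskell. This is only a sketch of the proposal (since "if" is a keyword, the overloaded method is named `cond` here, in the spirit of Hawk's "mux"; all names are illustrative, not a defined interface):

```haskell
-- The standard desugaring target for if/then/else:
ifThenElse :: Bool -> a -> a -> a
ifThenElse True  x _ = x
ifThenElse False _ y = y

-- The overloaded variant, a Condition-style class:
class Condition c where
  cond :: c -> a -> a -> a

instance Condition Bool where
  cond = ifThenElse

-- evalGuard as proposed: a list of (guard, rhs) pairs plus a default case
evalGuard :: [(Bool, a)] -> a -> a
evalGuard []               def = def
evalGuard ((True,  x) : _) _   = x
evalGuard ((False, _) : gs) def = evalGuard gs def
```

Under this reading, `x | g1 -> e1 | g2 -> e2` would desugar to `evalGuard [(g1, e1), (g2, e2)] fallThrough`, with the fall-through to the next alternative as the default.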
The way GHC is set up now, it's relatively easy to make such changes (this wasn't true before). But it takes some design work. If someone cares enough to do the design work, and actively wants the result, I'll see how hard it is to implement.

Simon

From qrczak@knm.org.pl Wed Feb 28 15:17:02 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 28 Feb 2001 15:17:02 GMT Subject: Primitive types and Prelude shenanigans References: <37DA476A2BC9F64C95379BF66BA2690260DB9B@red-msg-09.redmond.corp.microsoft.com> Message-ID:

Wed, 28 Feb 2001 02:05:27 -0800, Simon Peyton-Jones pisze:

> If someone cares enough to do the design work, and actively wants
> the result, I'll see how hard it is to implement.

IMHO it should not be done only because it's possible. If a part of the Prelude is to be replaceable, there should be a chance that it's useful for something. You can't replace the whole Prelude anyway; e.g. (->) and Integer don't look as if it were possible for them.

-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK

From qrczak@knm.org.pl Wed Feb 28 15:28:29 2001 From: qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) Date: 28 Feb 2001 15:28:29 GMT Subject: examples using built-in state monad References: <1E27BBCDDE50914C99517B4D7EC5D5251A3406@RED-MSG-13.redmond.corp.microsoft.com> Message-ID:

Mon, 26 Feb 2001 13:07:51 -0800, Konst Sushenko pisze:

> newtype State s m a = ST (s -> m (a,s))
>
> ghc and hugs contain built in implementation of state monad ST.
>
> is it the same thing?

No. GHC's and Hugs' ST allows dynamic creation of an arbitrary number of mutable variables of arbitrary types, using operations

    newSTRef   :: a -> ST s (STRef s a)
    readSTRef  :: STRef s a -> ST s a
    writeSTRef :: STRef s a -> a -> ST s ()

The type variable 's' is used in a very tricky way, to ensure safety when

    runST :: (forall s. ST s a) -> a

is used to wrap the ST-monadic computation in a purely functional interface.
It does not correspond to the type of data being manipulated.

GHC >= 4.06 also contains a monad like yours, in module MonadState, available when the -package lang option is passed to the compiler.

-- __("< Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/ \__/ ^^ SYGNATURA ZASTĘPCZA QRCZAK

From dpt@math.harvard.edu Wed Feb 28 20:51:54 2001 From: dpt@math.harvard.edu (Dylan Thurston) Date: Wed, 28 Feb 2001 15:51:54 -0500 Subject: Primitive types and Prelude shenanigans In-Reply-To: <37DA476A2BC9F64C95379BF66BA2690260DB9B@red-msg-09.redmond.corp.microsoft.com>; from simonpj@microsoft.com on Wed, Feb 28, 2001 at 02:05:27AM -0800 References: <37DA476A2BC9F64C95379BF66BA2690260DB9B@red-msg-09.redmond.corp.microsoft.com> Message-ID: <20010228155154.D21767@math.harvard.edu>

On Wed, Feb 28, 2001 at 02:05:27AM -0800, Simon Peyton-Jones wrote:

> If someone cares enough
> to do the design work, and actively wants the result, I'll see how
> hard it is to implement.

I've been thinking some about the design, and I'd be happy to finish it, but I can't honestly say I would use it much (other than for the numeric types) in the near future.

Best,
Dylan Thurston