[Haskell-cafe] opengl type confusion
L Corbijn
aspergesoepje at gmail.com
Mon Jun 17 00:22:57 CEST 2013
I seem to be making a mess of it: first accidentally posting an empty message,
and then forgetting to reply to the list. Thirdly, I forgot to mention that
my message only describes the 'GHCi magic'.
Lars
P.S. Conclusion: I shouldn't write complicated emails this late in the
evening.
---------- Forwarded message ----------
From: L Corbijn <aspergesoepje at gmail.com>
Date: Mon, Jun 17, 2013 at 12:07 AM
Subject: Re: [Haskell-cafe] opengl type confusion
To: briand at aracnet.com
On Sun, Jun 16, 2013 at 11:10 PM, L Corbijn <aspergesoepje at gmail.com> wrote:
>
>
>
> On Sun, Jun 16, 2013 at 10:42 PM, <briand at aracnet.com> wrote:
>
>> On Sun, 16 Jun 2013 16:15:25 -0400
>> Brandon Allbery <allbery.b at gmail.com> wrote:
>>
>> > On Sun, Jun 16, 2013 at 4:03 PM, <briand at aracnet.com> wrote:
>> >
>> > > Changing the declaration to GLdouble -> GLdouble -> GLdouble -> IO()
>> and
>> > > using
>> > > (0.0::GLdouble) fixes it, and I'm not clear on why it's not automagic.
>> > > There are many times I see the
>> >
>> >
>> > Haskell never "automagic"s types in that context; if it expects
>> GLdouble,
>> > it expects GLdouble. Pretending it's Double will not work. It "would" in
>> > the specific case that GLdouble were actually a type synonym for Double;
>> > however, for performance reasons it is not. Haskell Double is not
>> directly
>> > usable from the C-based API used by OpenGL, so GLdouble is a type
>> synonym
>> > for CDouble which is.
>> >
>> > > compiler doing type conversion on numerical arguments although sometimes
>> > > the occasional fracSomethingIntegralorOther is required.
>> > >
>> >
>> > I presume the reason the type specification for numeric literals is
>> because
>> > there is no defaulting (and probably can't be without introducing other
>> > strange type issues) for GLdouble.
>> >
>>
>> What I was thinking about, using a very poor choice of words, was this :
>>
>>
>> *Main> let a = 1
>> *Main> :t a
>> a :: Integer
>> *Main> let a = 1::Double
>> *Main> a
>> 1.0
>> *Main> :t a
>> a :: Double
>> *Main>
>>
>> so normally 1 would be interpreted as an int, but if I declare 'a' a
>> Double then it gets "promoted" to a Double without me having to call a
>> conversion routine explicitly.
>>
>> That seems automagic to me.
>>
>> (0.0::GLdouble) works to make the compiler happy. So it appears to be
>> taking care of the conversion automagically.
>>
>> So maybe a better question, I hope, is:
>>
>> How can I simply declare 0.0 to be (0.0::GLdouble) and have the
>> functional call work. Doesn't a conversion have to be happening, i.e.
>> shouldn't I really have to do (realToFrac 0.0) ?
>>
>> Brian
>>
>>
>> _______________________________________________
>> Haskell-Cafe mailing list
>> Haskell-Cafe at haskell.org
>> http://www.haskell.org/mailman/listinfo/haskell-cafe
>>
>
>
Oops, sorry for the empty reply, I accidentally hit the send button.
What you are seeing is defaulting (see
http://www.haskell.org/onlinereport/haskell2010/haskellch4.html#x10-790004.3.4),
which roughly means that when a specific numeric type is needed, GHC first
tries Integer, then Double, and fails if neither fits.
Prelude> :t 1
1 :: Num a => a
Prelude> :t 1.0
1.0 :: Fractional a => a
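Those polymorphic types come from the way Haskell desugars literals: an integer literal is sugar for an application of 'fromInteger', and a decimal literal for 'fromRational'. A minimal sketch (the helper names here are mine, not from the thread):

```haskell
import Data.Ratio ((%))

-- What the literal 0.5 :: Double elaborates to under the hood.
asDouble :: Double
asDouble = fromRational (1 % 2)

-- What the literal 1 :: Integer elaborates to.
asInteger :: Integer
asInteger = fromInteger 1

main :: IO ()
main = print (asDouble == 0.5, asInteger == 1)
```

So a literal isn't "an Int that gets promoted"; it is polymorphic from the start and only becomes a concrete type when the context (or defaulting) pins it down.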
So normally a whole-number literal can be any instance of the Num class, and
a literal with a decimal point can be any Fractional instance. With let
bindings, however, things change.
The need for defaulting is caused by the monomorphism restriction (
http://www.haskell.org/haskellwiki/Monomorphism_restriction), which states
that let bindings should be monomorphic: roughly speaking, their types may
contain no type variables (unless, of course, you provide a type signature).
Prelude> let b = 1
Prelude> :t b
b :: Integer
Prelude> let c = 1.0
Prelude> :t c
c :: Double
So here you see the result of the combination. The monomorphism restriction
doesn't allow 'Num a => a' as the type of 'b', so defaulting kicks in and
finds that its first guess, Integer, fits; therefore 'b' gets type Integer.
For 'c' the guess Integer fails, as Integer isn't a Fractional instance. The
second guess, Double, is, so 'c' gets type Double.
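As an aside, the default list itself can be changed per module with a 'default' declaration (Haskell 2010, section 4.3.4). A small sketch, with names made up for illustration:

```haskell
-- Putting Double first in the default list makes an unannotated
-- numeric binding default to Double instead of Integer.
default (Double, Integer)

b = 1     -- monomorphism restriction forces a monotype; defaulting now picks Double
c = 1.0   -- Double, as before

main :: IO ()
main = print (b, c)
```

Here 'b' ends up as Double (printed as 1.0), because Double is tried first and satisfies the Num constraint.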
You can see that the monomorphism restriction is to blame by disabling it:
Prelude> :set -XNoMonomorphismRestriction
Prelude> let b = 1
Prelude> :t b
b :: Num a => a
But you shouldn't normally need to do this, as you can provide an explicit
type signature instead.
(in a fresh GHCi session)
Prelude> let {b :: Num a => a; b = 1}
Prelude> :t b
b :: Num a => a
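To come back to the original question: you rarely need 'realToFrac' for a literal, because the literal is already polymorphic; giving the surrounding function a GLdouble signature is enough. A sketch that models GLdouble as CDouble (which it is a synonym for in the OpenGL bindings), with made-up function names rather than the real OpenGL API:

```haskell
import Foreign.C.Types (CDouble)

-- Assumption: mirrors the real 'type GLdouble = CDouble' from the bindings.
type GLdouble = CDouble

-- With a GLdouble signature on the wrapper, literal arguments are
-- inferred as GLdouble; no per-literal annotation is needed.
moveTo :: GLdouble -> GLdouble -> GLdouble -> IO ()
moveTo x y z = print (x, y, z)

main :: IO ()
main = do
  moveTo 0.0 1.5 2.0          -- literals: no conversion function required
  let d = 0.25 :: Double
  moveTo (realToFrac d) 0 0   -- only a value that is already a Double needs realToFrac
```

So '(0.0 :: GLdouble)' isn't converting a Double at all; it is telling the literal which 'fromRational' to use. An explicit 'realToFrac' is only needed when you start from a value that has already been fixed at type Double.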