[Haskell-cafe] Re: Re: instance Eq (a -> b)
ekmett at gmail.com
Tue Apr 20 19:44:44 EDT 2010
I don't mind the 0.0 == -0.0 case; it's the NaN /= NaN one that gets me. ;)
The former just says that the equivalence relation you are using isn't
structural. The latter breaks the notion that you have an equivalence
relation by breaking reflexivity.
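Both behaviours are easy to check from GHCi; a small sketch:

```haskell
nan :: Double
nan = 0 / 0            -- IEEE NaN

main :: IO ()
main = do
  print (nan == nan)                        -- False: (==) is not reflexive at NaN
  print ((0.0 :: Double) == (-0.0))         -- True: equal, yet not structurally
  print (isNegativeZero (-0.0 :: Double))   -- True: the bit patterns differ
```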
Eq doesn't state anywhere that the instances should be structural, though in
general where possible it is a good idea, since you don't have to worry
about whether or not functions respect your choice of setoid.
Ultimately, I find myself having to play a bit unsafe with lazy bytestrings
more often than I'd like to admit. Any use of toChunks should be careful to
work safely regardless of the size of the chunks exposed, though it can also
rely on the extra-logical fact, enforced by the bytestring internals, that
each such chunk is non-empty.
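As a sketch of such a chunk-oblivious use (lazyLength is a hypothetical name; the real library provides its own length):

```haskell
import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L

-- A chunk-oblivious lifting of the strict 'length': the answer depends
-- only on the logical contents, never on where chunk boundaries fall.
lazyLength :: L.ByteString -> Int
lazyLength = sum . map S.length . L.toChunks
```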
It greatly facilitates 'lifting' algorithms that work over strict
bytestrings to work over their lazy kin, and its omission would deal a
terrible blow to the practical usability and efficiency of the bytestring
library. I frankly would be forced to reimplement them from scratch in
several packages were it gone.
Ultimately, almost any library relies on a contract that extends beyond
the level of the type system to ensure it is used correctly. A
malformed 'Ord' instance can wreak havoc with Set, a non-associative
'Monoid' can leak structural information out of a FingerTree.
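For instance, a deliberately lawless Ord (Weird is a made-up type for illustration) is enough to confuse Set, which trusts compare rather than (==):

```haskell
import qualified Data.Set as Set

newtype Weird = Weird Int deriving (Eq, Show)

-- Broken on purpose: 'compare' only looks at parity, so it disagrees
-- with the derived (==).
instance Ord Weird where
  compare (Weird a) (Weird b) = compare (a `mod` 2) (b `mod` 2)

-- Set conflates Weird 1 and Weird 3 because compare says EQ, even
-- though (==) says they are different values.
havoc :: Bool
havoc = Set.size (Set.fromList [Weird 1, Weird 3]) == 1
```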
Similarly, the pleasant fiction that x == y ==> f x == f y only holds if
the Eq instance is structural, and toChunks can only 'safely' be used in a
manner that is oblivious to the structural partitioning of the lazy
bytestring.
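The Float/Double case makes that fiction concrete: 0.0 and -0.0 are (==)-equal, yet recip tells them apart.

```haskell
x, y :: Double
x = 0.0
y = -0.0

main :: IO ()
main = do
  print (x == y)              -- True
  print (recip x, recip y)    -- (Infinity,-Infinity)
  print (recip x == recip y)  -- False: f x /= f y despite x == y
```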
On Mon, Apr 19, 2010 at 6:02 PM, Ashley Yakeley <ashley at semantic.org> wrote:
> Why is a function that gets a bunch of strict ByteStrings out of a lazy
> one exposed?
> In any case, it sounds like a similar situation to (==) on Float and
> Double. There's a mismatch between the "Haskellish" desire for a law on
> (==), and the "convenient" desire for -0.0 == 0.0, or for exposing toChunks.
> Which one you prefer depends on your attitude. My point is not so much to
> advocate for the Haskellish viewpoint as to recognise the tension in the
> design. Float and Double are pretty ugly anyway from a Haskell point of
> view, since they break a bunch of other desirable properties for (+), (-)
> and so on.
> The theoretical reason for using floating point rather than fixed point is
> when one needs relative precision over a range of scales: for other needs
> one should use fixed point or rationals. I added a Fixed type to base, but
> it doesn't implement the functions in the Floating class and I doubt it's as
> fast as Double for common arithmetic functions.
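A minimal sketch of that Fixed type in use (via Data.Fixed from base), contrasting its exact decimal arithmetic with Double's binary rounding:

```haskell
import Data.Fixed (Centi)  -- Fixed E2: a resolution of two decimal places

-- Fixed-point arithmetic is exact at its resolution:
exactSum :: Centi
exactSum = 0.10 + 0.20     -- exactly 0.30

-- ...where Double accumulates binary rounding error:
doubleSum :: Double
doubleSum = 0.1 + 0.2      -- not equal to 0.3
```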
> It would be possible to represent the IEEE types in a Haskellish way,
> properly revealing all their ugliness. This would be gratifying for us
> purists, but would annoy those just trying to get some numeric calculations
> done.
> Ashley Yakeley
> On Mon, 2010-04-19 at 15:32 -0400, Edward Kmett wrote:
> Because it is the most utilitarian way to get a bunch of strict ByteStrings
> out of a lazy one.
> Yes it exposes an implementation detail, but the alternatives involve an
> unnatural amount of copying.
> -Edward Kmett
> On Sat, Apr 17, 2010 at 6:37 PM, Ashley Yakeley <ashley at semantic.org> wrote:
> Ketil Malde wrote:
> Do we also want to modify equality for lazy bytestrings, where equality
> is currently independent of chunk segmentation? (I.e.
> toChunks s1 == toChunks s2 ==> s1 == s2
> but not vice versa.)
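The "not vice versa" direction is easy to witness: two lazy bytestrings with the same contents but different segmentation compare equal, while their chunk lists do not.

```haskell
import qualified Data.ByteString.Char8 as S
import qualified Data.ByteString.Lazy as L

s1, s2 :: L.ByteString
s1 = L.fromChunks [S.pack "ab", S.pack "c"]
s2 = L.fromChunks [S.pack "a", S.pack "bc"]

-- s1 == s2                       => True  (equality ignores segmentation)
-- L.toChunks s1 == L.toChunks s2 => False (the chunk lists differ)
```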
> Why is toChunks exposed?
> Ashley Yakeley
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org