A general question about the use of classes in defining interfaces

S. Doaitse Swierstra doaitse at swierstra.net
Wed Oct 8 09:13:08 EDT 2008


Ok,

then I will carefully complete and refactor my new combinators (and
rewrite part of the manual ;-}) and come up with a proposal,

  Doaitse




On 8 okt 2008, at 14:20, Ross Paterson wrote:

> On Wed, Oct 08, 2008 at 12:36:02PM +0200, S. Doaitse Swierstra wrote:
>> Stimulated by remarks made during the discussion on the future of
>> Haskell at the last Haskell Symposium, I have started to convert my
>> new parsing library (constructed for the Lernet summer school in
>> Uruguay) into Cabalised form. In this library I have, amongst others,
>> the class:
>>
>> class  Applicative p where
>>  (<*>)     ::   p (b -> a)  -> p b   ->   p a
>>  (<|>)     ::   p a         -> p a   ->   p a
>>  (<$>)     ::   (b -> a)    -> p b   ->   p a
>>  pReturn   ::   a                    ->   p a
>>  pFail     ::                             p a
>>  f <$> p   =  pReturn f <*> p
>>
>> which extends/deviates from the standard class Applicative, since I
>> think these functions more or less belong together. I am happy to
>> factor out <|> into a separate class.
>
> This corresponds to Alternative, a subclass of Applicative (except
> for <$> being a function instead of a method). They certainly belong
> together for parsers, but there are applicative functors that don't
> have the extra monoidal structure.
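
For comparison, the relevant declarations in Control.Applicative look
roughly as follows (a sketch of the standard classes; note that <$> is
exported as a plain synonym for fmap rather than as a class method):

    class Functor f => Applicative f where
      pure  :: a -> f a
      (<*>) :: f (a -> b) -> f a -> f b

    class Applicative f => Alternative f where
      empty :: f a
      (<|>) :: f a -> f a -> f a

    (<$>) :: Functor f => (a -> b) -> f a -> f b
    (<$>) = fmap

so pReturn and pFail above correspond to pure and empty, and <|> is
already a method of Alternative.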
>
>> The problem which arises now is when I want to use the class
>> Applicative as it is now defined in Control.Applicative. Functions
>> like <$>, <$, <* and many have standard implementations in terms of
>> the basic functions pure and <*>. Although this looks fine at first
>> sight, it is not so fine if we want to give more specialised
>> (optimised, checking) implementations, as I am doing in my library.
>> An example of this is e.g. in many, where I want to check that the
>> parameter parser does not recognise the empty sequence, since this is
>> nonsense, etc. Of course we can describe <* by
>>
>> p <* q = pure const <*> p <*> q
>>
>> but this is also rather inefficient; why first build a result if you
>> are going to throw it away anyway?
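
As an illustration of the kind of check meant here, a minimal hedged
sketch: the class DescribesEpsilon and its method acceptsEpsilon below
are invented for this example only, standing in for whatever mechanism
a library uses to detect parsers that succeed without consuming input.

    import Control.Applicative

    -- Hypothetical helper: 'DescribesEpsilon' and 'acceptsEpsilon' are
    -- assumptions made for this sketch, not part of any existing library.
    class Alternative p => DescribesEpsilon p where
      acceptsEpsilon :: p a -> Bool

    -- A 'many'-like combinator that rejects nonsensical arguments up front:
    pMany :: DescribesEpsilon p => p a -> p [a]
    pMany p
      | acceptsEpsilon p = error "pMany: argument parser accepts the empty sequence"
      | otherwise        = let ps = (:) <$> p <*> ps <|> pure [] in ps

A top-level many defined once for all Alternative instances cannot
perform such a check; a class method with a default could be overridden
to do so.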
>
> The current definition isn't quite as bad as that:
>
> 	p <* q = const <$> p <*> q
> 	f <$> a = fmap f a
>
> but the general point stands.
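
To make the efficiency point concrete, here is a small sketch with an
assumed list-of-successes parser type (a toy for illustration, not the
library under discussion). The generic route still applies const to
every pair of results; a hand-written version never inspects the second
result at all.

    import Control.Applicative

    -- Toy list-of-successes parser, assumed for illustration only.
    newtype P s a = P { runP :: [s] -> [(a, [s])] }

    instance Functor (P s) where
      fmap f (P p) = P (\inp -> [ (f a, rest) | (a, rest) <- p inp ])

    instance Applicative (P s) where
      pure a        = P (\inp -> [(a, inp)])
      P pf <*> P px = P (\inp -> [ (f x, rest') | (f, rest)  <- pf inp
                                                , (x, rest') <- px rest ])

    -- The generic definition: builds 'const a b' for every pair of results.
    pLeftDefault :: P s a -> P s b -> P s a
    pLeftDefault p q = const <$> p <*> q

    -- A specialised version: keeps the first result, discards the second unseen.
    pLeft :: P s a -> P s b -> P s a
    pLeft (P pa) (P pb) = P (\inp -> [ (a, rest') | (a, rest)  <- pa inp
                                                  , (_, rest') <- pb rest ])

Only if (<*) is a class method with a default can an instance substitute
something like pLeft for it; as a top-level function it is fixed once
and for all.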
>
>> More generally, I think it is preferable to place common patterns in
>> classes with a default implementation, so that they can be redefined,
>> instead of defining them at top level.
>>
>> 1) Does everyone agree with this observation, and if not, what am I
>> missing?
>> 2) Can we change the module Applicative to reflect my view? I think
>> it can be done without requiring heavy changes to other modules.
>
> This seems reasonable to me, as long as there aren't too many of them;
> we do this for lots of other classes.  Would you like to list all the
> functions that you would like to have as redefinable methods?
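
One possible shape for such a restructuring, sketched with default
methods that individual instances may override (an illustration of the
idea only, not a concrete proposal from the thread; the names shadow the
standard ones on purpose, hence the restricted Prelude import):

    {-# LANGUAGE NoImplicitPrelude #-}
    module ApplicativeSketch where

    import Prelude (Functor (..), const, flip)

    infixl 4 <$>, <*>, <*, *>
    infixl 3 <|>

    -- <$> stays a plain function, as in Control.Applicative today.
    (<$>) :: Functor f => (a -> b) -> f a -> f b
    (<$>) = fmap

    class Functor f => Applicative f where
      pure  :: a -> f a
      (<*>) :: f (a -> b) -> f a -> f b
      (<*)  :: f a -> f b -> f a
      p <* q = const <$> p <*> q          -- default; a parser instance may do better
      (*>)  :: f a -> f b -> f b
      p *> q = flip const <$> p <*> q     -- default

    class Applicative f => Alternative f where
      empty :: f a
      (<|>) :: f a -> f a -> f a
      many  :: f a -> f [a]
      many p = some p <|> pure []         -- default; an instance may add sanity checks
      some  :: f a -> f [a]
      some p = (:) <$> p <*> many p       -- default

Candidates mentioned in the thread are <$>, <$, <* and many; how far to
go is exactly the question asked above.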