In hoc signo vinces (Was: Revamping the numeric classes)

William Lee Irwin III wli@holomorphy.com
Mon, 12 Feb 2001 03:24:08 -0800


In a later posting Marcin Kowalczyk says:
>> If (+) can be implicitly lifted to functions, then why not signum?
>> Note that I would lift neither signum nor (+). I don't feel the need.
>>  ...

On Mon, Feb 12, 2001 at 09:33:03AM +0000, Jerzy Karczmarczuk wrote:
> I not only feel the need; I feel it is important that the additive
> structure in the codomain is inherited by functions. In a more
> specific context: the fact that linear functionals over a vector space
> also form a vector space is simply *fundamental* for quantum
> mechanics, for crystallography, etc. You don't need to be a Royal
> Abstractor to see this.

I see this in a somewhat different light, though I'm in general agreement.

What I'd like to do is to be able to effectively model module structures
in the type system, and furthermore be able to simultaneously impose
distinct module structures on a particular type. For instance, complex
n-vectors are simultaneously C-modules and R-modules, and an arbitrary
commutative ring R is at once a Z-module and an R-module. Linear
functionals, which seem like common beasts (try a partially applied
inner product), live in the mathematical structure Hom_R(M,R), which is
once again an R-module and, perhaps, by structure inherited from R, an
R'-module for various R'. So how does this affect Prelude design? Examining
a small bit of code could be helpful:

-- The group must be Abelian. I suppose anyone could think of this.
class (AdditiveGroup g, Ring r) => LeftModule g r where
	(&) :: r -> g -> g

instance AdditiveGroup g => LeftModule g Integer where
	n & x	| n == 0 = zero          -- 0 & x must be the additive identity
		| n < 0  = (-n) & (-x)   -- flip both signs so the recursion terminates
		| n > 0  = x + (n-1) & x

... and we naturally acquire the sort of structure we're looking for.
But this only shows a possible outcome, and doesn't motivate the
implementation. What _will_ motivate the implementation is the sort
of impact this has on various sorts of code:

(1) The fact that R is an AdditiveGroup immediately makes it a
	Z-module, so we have mixed-mode arithmetic by a different
	means from the usual implicit coercion.
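
To make (1) concrete, here is a minimal sketch. V2 is a type invented
for illustration, written against the envisioned classes (assuming
AdditiveGroup supplies zero, negate, and (+); under today's Prelude one
would first have to hide the Num versions):

data V2 = V2 Double Double

instance AdditiveGroup V2 where
	zero            = V2 0 0
	negate (V2 a b) = V2 (-a) (-b)
	V2 a b + V2 c d = V2 (a + c) (b + d)

Now (3 :: Integer) & V2 1 2 unfolds by repeated addition to V2 3 6,
with no coercion of the scalar into a vector type.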

(2) This sort of business handles vectors quite handily.
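
For instance, continuing the V2 sketch from (1), and assuming the
envisioned hierarchy gives Ring Double, ordinary scalar multiplication
is just another instance, so the Integer structure from (1) and the
Double structure below coexist on a single type:

instance LeftModule V2 Double where
	r & V2 a b = V2 (r * a) (r * b)

Both (3 :: Integer) & v and 2.5 & v typecheck, the scalar's type
selecting the structure.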

(3) The following tidbit of code immediately handles curried innerprods:

-- Assumes the pointwise AdditiveGroup lifting to functions, and that
-- a ring is a left module over itself (the right-hand side needs it):
instance (AdditiveGroup group, LeftModule ring ring)
	=> LeftModule (group -> ring) ring where
		r & g = \g' -> r & g g'
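
A sketch of it in action, using the hypothetical V2 again; dot and
towardLight are names of my own, and the ring-over-itself instance is
spelled out because the context above asks for it:

-- A ring is a left module over itself:
instance LeftModule Double Double where
	r & x = r * x

-- A partially applied inner product is a linear functional V2 -> Double:
dot :: V2 -> V2 -> Double
dot (V2 a b) (V2 c d) = a * c + b * d

towardLight :: V2 -> Double
towardLight = dot (V2 0 1)

halfLight :: V2 -> Double
halfLight = 0.5 & towardLight   -- scaled pointwise, still a functional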

(4) Why would we want to curry innerprods? I envision:

type SurfaceAPoles foo = SomeGraph (SomeVector foo)

and then

	surface :: SurfaceAPoles bar
	innerprods = innerprod v `fmap` normalsOf faces where faces = facesOf surface

(5) Why would we want to do arithmetic on these beasts now that
	we think we might need them at all?

If we're doing things like determining the light reflected off the
various surfaces, we will want to scale and add together the various
beasties. Deferring the innerprod operation so we can do this is inelegant
and perhaps inflexible compared to:

	lightSources :: [(SomeVector foo -> Intensity foo, Position)]
	lightSources = getLightSources boundingSomething
	-- (&) scales each functional pointwise via the instance from (3);
	-- sum then adds them in the lifted additive structure on functions.
	reflection  = sum $ map (\(f,p) -> getSourceWeight p & f) lightSources
	intensities = reflection `fmap` normalsOf faces where faces = facesOf surface

and now, within lightSources, perhaps ambient light can be represented
very conveniently, or at least the function type serves to abstract out
the manner in which the orientation of a surface determines the amount
of light reflected off it.
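
As a crude sketch of that convenience (ambientIntensity and
ambientOrigin are invented for the occasion), an ambient source simply
ignores the normal, yet it scales and sums like any other source:

ambient :: (SomeVector foo -> Intensity foo, Position)
ambient = (\_normal -> ambientIntensity, ambientOrigin)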

(My apologies for whatever inaccuracies are happening with the optics
here, it's quite far removed from my direct experience.)

Furthermore, within things like small interpreters, it is perhaps
convenient to represent the semantic values of various expressions by
function types. If one should care to define arithmetic on vectors and
vector functions in the interpreted language, support in the host
language allows a more direct approach. This would arise within solid
modelling and graphics once again, as little languages are often used
to describe objects, images, and the like.

How can we anticipate all the possible usages of pretty-looking vector
and matrix algebra? I suspect graphics isn't the only place where
linear algebra could arise. All sorts of differential equation models
of physical phenomena, Markov models of state transition systems, even
economic models at some point require linear algebra in their
computational methods.  It's something I at least regard as a fairly
fundamental and important aspect of computation. And to me, that means
that the full power of the language should be applied toward beautifying,
simplifying, and otherwise enabling linear algebraic computations.


Cheers,
Bill
P.S.:	Please forgive the harangue-like nature of the post, it's the best
	I could do at 3AM.