Library hierarchy, contd.

Jan Skibinski jans@numeric-quest.com
Fri, 25 May 2001 08:24:41 -0400 (EDT)


	Sorry for the lengthy discourse, but I was unable
	to cut it down even after I re-read it twice :-).

On Fri, 25 May 2001, Dylan Thurston wrote:

> Here's a concrete proposal for a Mathematics subtree.  Following Jan
> Skibinski's comment about linear algebra and some thought, I convinced
> myself that there is no natural division between
> numerical/computational libraries and more "pure" mathematics.

	Good!

	I'll start with some critical comments first, taking
	one of the items from your list as an example.

> Description:
> Mathematics
>   Libraries for mathematical structures or algorithms.  
> 
> ------
> Mathematics
>     Prelude		-- The so-called "numeric" hierarchy from
> 			-- the H98 Prelude
>     Complex		-- Standard H98
>     Ratio		-- Standard H98
>     DSP
>         FastFourierTransform
		
	Why here? Unless you are referring to one specific
	FFT implementation (Donatio?), the FFT can be looked upon
	as just one of many tools of linear algebra.

	Why FFT? What's so sanctimonious about it? Nothing; it
	is popular because of its speed, and it is versatile
	because of its popularity (I admit, I exaggerate quite a bit
	here to make my point :-)). In fact, the DFT is just one of
	many possible vector transformations - in this case, a
	unitary one - and the FFT is merely its fast implementation.
	And you can implement it on many levels: using streams
	of bits, blocks of complex numbers (vectors, matrices), etc.
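
	For concreteness, here is a throwaway O(n^2) DFT on plain
	lists - toy code, not taken from any particular library -
	just to show that the DFT is nothing but one linear,
	norm-preserving map among many, and an FFT is merely a
	fast way of applying it:

	    import Data.Complex (Complex ((:+)), cis)

	    -- Unitary DFT: the 1/sqrt n factor makes it norm-preserving.
	    dft :: [Complex Double] -> [Complex Double]
	    dft xs = [ scale * sum [ x * cis (-2 * pi * fromIntegral (j * k) / n)
	                           | (x, j) <- zip xs [0 ..] ]
	             | k <- [0 .. length xs - 1] ]
	      where
	        n     = fromIntegral (length xs)
	        scale = (1 / sqrt n) :+ 0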

	But other unitary transformations have their uses too.
   	
	Have you ever heard about the Walsh transformation, with
	its (orthogonal) basis vectors made of (+1, -1) components,
	such as [1,1,1,1,1,1,1,1], [1,1,1,1,-1,-1,-1,-1], etc.?
	Geophysicists used to employ it (or still do?) for the
	analysis of their seismic data. Incidentally, there is also
	the Walsh-Hadamard transform, a basic mixing transformation
	for quantum computations.
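
	In the same toy spirit, a fast Walsh-Hadamard transform
	fits in a few lines. Assumptions: the input length is a
	power of two, the result is unnormalized, and the rows come
	out in the natural Hadamard ordering rather than the
	ordering of the examples above:

	    -- Butterfly recursion: transform the sums and the
	    -- differences of the two halves, then concatenate.
	    walshHadamard :: Num a => [a] -> [a]
	    walshHadamard [x] = [x]
	    walshHadamard xs  = walshHadamard (zipWith (+) lo hi)
	                     ++ walshHadamard (zipWith (-) lo hi)
	      where (lo, hi) = splitAt (length xs `div` 2) xs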

	How about the Hilbert transformation? You can use it to
	upgrade real signals to their complex equivalents, such as
	sin k*n -> exp i*k*n. Imagine a plot of your original
	real signal in X-T coordinates, suddenly augmented by a Y-T
	"wing", as if by some miracle. Very useful, believe me!

	Sorry for ruining the FFT altar, but I think we should keep
	certain things in a proper perspective. I believe there will
	be many implementations of FFT in Haskell libraries - all
	depending on the context. DSP, images, generic linear
	algebra, etc. would all want their own versions of FFT,
	because of the specific representations they use.
	
	And that brings me to the second point I want to make.
	If I could describe an algorithm, such as FFT, in generic
	terms, then I would have an example of a fine generic
	algorithm. Unfortunately, more often than not I must be
	datatype-specific. And that means some choice of Matrix and
	Vector - a very controversial issue. How do you design these
	two to satisfy everyone?

	I do not think it's possible at all. For a naive user of
	graphics those are 2D or 3D entities (or 4D for
	implementation reasons). And real too! Represented by
	arrows, so to speak.
  
	Some other users would like to have them n-dimensional
	but Double, some need Complex Double, some would only
	be happy with a generic "a". Then there is the question
	of underlying implementations. Arrays? If yes, then
	what kind of arrays? Would lists be satisfactory,
	and if so, with what size limitations? Or maybe I
	can forget about lists and arrays and choose something
	else altogether (as I do in the QuantumVector module)?
	Facing all those possible choices, should I be arrogant
	enough to claim the generic names Vector and Matrix for
	something that I choose to implement my own way?

	[We once sidestepped at least this controversial naming
	issue by calling the things BasicVector and BasicMatrix,
	respectively. For the same reason I introduced the
	QuantumVector, not just THE Vector.]
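
	To make the clash concrete, here is a hypothetical sketch
	(names invented for illustration) of two datatypes with an
	equal claim to the name Vector; neither should own the bare
	name in a shared hierarchy, though both can share one
	interface:

	    newtype DenseVector  = DenseVector [Double]
	    -- (index, value) pairs, sorted by index
	    newtype SparseVector = SparseVector [(Int, Double)]

	    class InnerProduct v where
	        dot :: v -> v -> Double

	    instance InnerProduct DenseVector where
	        dot (DenseVector xs) (DenseVector ys) =
	            sum (zipWith (*) xs ys)

	    instance InnerProduct SparseVector where
	        dot (SparseVector xs) (SparseVector ys) = go xs ys
	          where
	            go [] _ = 0
	            go _ [] = 0
	            go a@((i, x) : as) b@((j, y) : bs)
	                | i == j    = x * y + go as bs
	                | i < j     = go as b
	                | otherwise = go a bs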

	------------------------------------------------------

	But all those remarks were somewhat pessimistic and destructive. 
	So here are some constructive ideas:

	+ Keep a low profile regarding the Matrix and Vector
	  concepts. Do not build anything around Matrix and Vector;
	  rather, think algorithmically instead. If you must use the
	  two, treat them more or less as your private stuff.

	  For example, my module Eigenproblem casts several
	  algorithms in terms of abstract Hilbert operators and
	  Hilbert spaces. This is as far as I was able to go
	  abstraction-wise for the moment, and I still might
	  improve on it one day.

	  But that module relies on the lower level implementation
	  module LinearAlgorithms, which implements everything as
	  lists. Why lists? Because I still do not like Haskell
	  Arrays as they are, and because I am lazy and lists are
	  easy to work with. But that's not the point. What is
	  important are the ideas implemented in those modules,
	  which can easily be re-worked for any data structure,
	  Haskell arrays included - see the sketch below.
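
	  For flavour, here is a toy in that style - hypothetical,
	  not the actual interface of Eigenproblem - the power
	  method written against a tiny class, so that lists,
	  arrays, or anything else can be plugged in underneath:

	    class HilbertSpace v where
	        scale :: Double -> v -> v
	        dot   :: v -> v -> Double

	    -- Dominant eigenvalue/eigenvector estimate for a linear
	    -- operator given only as a function on the space.
	    power :: HilbertSpace v => (v -> v) -> v -> Int -> (Double, v)
	    power op v 0 = (dot v (op v) / dot v v, v)  -- Rayleigh quotient
	    power op v n = power op (normalize (op v)) (n - 1)
	      where normalize w = scale (1 / sqrt (dot w w)) w

	    -- One concrete representation among many:
	    newtype ListVec = ListVec [Double]

	    instance HilbertSpace ListVec where
	        scale c (ListVec xs) = ListVec (map (c *) xs)
	        dot (ListVec xs) (ListVec ys) = sum (zipWith (*) xs ys)

	    -- e.g. the dominant eigenvalue of [[2,1],[1,2]] (which is 3):
	    -- fst (power (\(ListVec [x,y]) -> ListVec [2*x + y, x + 2*y])
	    --            (ListVec [1, 0]) 50)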

	+ Expect that each proposed linear algebra library covers a
	  significant chunk of such "territory", because only then
	  can it be proven useful and consistent. If a library just
	  defines a Matrix, it is useless, by virtue of the arguments
	  presented before. If it goes only as far as LUDecomposition,
	  it is naive and simple-minded - good only for solving
	  linear equations. If it inverts the equations and also
	  finds eigenvalues of symmetric matrices, it is a step in
	  the right direction. If it handles eigenproblems
	  (eigenvalues and eigenvectors - all or some) of Hermitian
	  operators, nonsymmetric real operators, up to any kind of
	  operator for small and medium size problems, it is useful.
	  If it handles big eigenproblems (n > 1000 or so) - no
	  matter what technology it uses ("sparse" matrices,
	  perhaps) - it is very good. If it provides Singular Value
	  Decomposition on top of all of the above, then it should
	  be considered excellent.

	+ Of course, any good linear algebra library would provide
	  many tools for handling the tasks which I outlined above.
	  But those are not goals in themselves; they are just the
	  tools. What I have in mind here are all those Choleskys
	  and LUDecompositions, and triangularization and
	  tridiagonalization procedures, and Hessenberg matrices,
	  and QR or QL decompositions, etc. (a toy example follows
	  below). Do not measure the value of a library by the
	  number of such tools appearing in it. Value its
	  consistency and its ability to achieve the final solution.
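
	  As a tiny specimen of such a tool, here is a toy Cholesky
	  factorization on lists of rows. Assumptions: no pivoting,
	  and the input is symmetric positive definite; it leans on
	  laziness to define the result in terms of itself:

	    -- Lower triangular l with a == l * transpose l,
	    -- e.g. cholesky [[4,2],[2,3]] == [[2,0],[1,sqrt 2]].
	    cholesky :: [[Double]] -> [[Double]]
	    cholesky a = l
	      where
	        n = length a
	        l = [ [ entry i j | j <- [0 .. n - 1] ]
	            | i <- [0 .. n - 1] ]
	        entry i j
	            | j > i     = 0
	            | j == i    = sqrt (a !! i !! i
	                                - sum [ (l !! i !! k) ^ 2
	                                      | k <- [0 .. j - 1] ])
	            | otherwise = (a !! i !! j
	                           - sum [ l !! i !! k * l !! j !! k
	                                 | k <- [0 .. j - 1] ])
	                          / l !! j !! j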

	----------------------------------------------------------
   
	Summarizing: LinearAlgebra appearing in the hierarchy
	(after Dylan)

	Mathematics
	    ...
	    LinearAlgebra
	    NonlinearAlgebra (perhaps)
	    ....

	is just one element, which should be clearly marked as very
	important. But I'd suggest not marking any of its specifics,
	because those are very much design and implementation
	choices. What we could specify is what should be expected
	from a good linear algebra library, as I tried to describe
	above.

	Jan