arrays and lambda terms
Wed, 5 Feb 2003 09:34:52 +0200
> The formula involves the addition of two numbers -- which is,
> generally, an O(log(n)) operation. Granted, if we operate in the
> restricted domain of mod 2^32 integers, addition can be considered
> O(1). If we stay in this domain however, the size of all our arrays is
> limited to M (which is 2^32, often less). If we implement our arrays
> as linked lists, the cost of accessing one element will not exceed
> M*2*cost(dereference), which is constant, which is O(1). Thus
> pedantically speaking, within the restricted integral domain, any
> terminating array access method is O(1).
Yes, but then the complexity is unstable, whereas "array access" is
usually taken to mean that the worst-, best-, and average-case
complexities are all equal (i.e., one can write a program for a hard
real-time system). With a linked-list implementation you will never have
that.
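To make the instability concrete, here is a minimal sketch (Python, with hypothetical names) contrasting the two: walking a cons-list to element i costs i dereferences, so the best and worst cases diverge even though both are bounded by the list length M.

```python
def list_access(head, i):
    """Walk a cons-list (tuples of (value, next)) to element i,
    counting pointer dereferences along the way."""
    steps = 0
    node = head
    while i > 0:
        node = node[1]   # one dereference per element skipped
        steps += 1
        i -= 1
    return node[0], steps

# Build the list [0, 1, ..., 9] as nested cons cells.
head = None
for v in reversed(range(10)):
    head = (v, head)

# list_access(head, 0) touches 0 cells; list_access(head, 9) touches 9:
# the cost depends on the index, unlike a genuine array lookup.
```

Both costs are bounded by the constant M, so each is "O(1)" in the pedantic sense quoted above, but a hard real-time system must budget for the worst case, not the average.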
> The choice of "hardware" may greatly affect the apparent
> complexity. For example, associative memory access is difficult on
> conventional architectures and fast for holographic memory.
> > Or is it possible in any rewriting system?
> If all your arrays are limited in size to 2^32, and if your re-writing
> system offers projection operators pi_0, pi_1, ... pi_4294967295
> (i.e., the right "hardware"), well, then it is possible.
The projection operators should all take an equal number of reduction
steps. So, does that mean it is not possible to do this in the lambda
calculus if the hardware only supports, for example, alpha and beta
reduction (and without cheating behind the scenes for the projection
functions)?
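The worry can be sketched concretely (Python standing in for lambda terms; the names are mine). A Church-style n-tuple is \s. s x1 ... xn, and the projection pi_i applies the selector \x1 ... xn. xi. Every pi_i then performs the same n applications for a fixed n, but that cost grows with the tuple size, so without native projection "hardware" the access is not O(1) in n.

```python
def church_tuple(*xs):
    """Church n-tuple \\s. s x1 ... xn: feed the components to a
    selector one application at a time (n applications total)."""
    def t(selector):
        f = selector
        for x in xs:          # one application per component
            f = f(x)
        return f
    return t

def pi(i, n):
    """Curried selector \\x1 ... xn. xi for an n-tuple."""
    def selector(*collected):
        if len(collected) == n:
            return collected[i]
        return lambda x: selector(*collected, x)
    return selector()

# Selecting any component of a 3-tuple costs the same 3 applications,
# whether we project the first or the last:
triple = church_tuple('a', 'b', 'c')
```

Under this encoding the projections are uniform in i but linear in n, which is exactly why a rewriting system would need the 2^32 projection operators built in as primitives to get constant-time access.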