[Haskell-cafe] Re: Suggested algorithm to control upper bound of space "leaks"
Shelby Moore
shelby at coolpage.com
Mon Nov 2 17:11:33 EST 2009
Simon Marlow replied:
http://www.haskell.org/pipermail/cvs-ghc/2009-November/050946.html
=============
Also I received privately a very much appreciated and stimulating reply
about the convolution of the competition for shared resources in the
external state machine. My quick (not deeply thought out) response is
that the algorithm's target is not fixed (not a hard-coded upper bound);
rather, the algorithm should be a gradient search for the local minimum
(which will always be a moving target, but that is okay). The problem is
when it gets stuck in some local minimum that is far from the global
minimum (e.g. some cycling resonance with the external competition), so to
the degree that is common, we occasionally need to test some random
position in the solution space (simulated annealing).
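To make the idea concrete, here is a rough Haskell sketch of my own (not
code from any actual runtime): a gradient-style search over a heap-size
target, with an occasional random jump in the spirit of simulated
annealing, and a jump probability that cools over the run. The cost
function pagingCost and all the constants are hypothetical stand-ins for
measurements the runtime would actually take.

  import System.Random (randomRIO)

  -- Hypothetical cost of a candidate heap target (MB): a stand-in for
  -- the paging/GC overhead the runtime would actually measure.  The sine
  -- term gives it local minima to get stuck in.
  pagingCost :: Double -> Double
  pagingCost h = (h - 600) ^ (2 :: Int) / 1000 + 50 * sin (h / 40)

  -- One step of the search: with probability jumpProb, test a random
  -- position in the solution space (the annealing escape hatch);
  -- otherwise take a small downhill step along the local gradient.
  step :: Double -> Double -> IO Double
  step jumpProb h = do
    r <- randomRIO (0, 1 :: Double)
    if r < jumpProb
      then randomRIO (100, 1200)      -- random jump within the search space
      else do
        let eps  = 1
            grad = (pagingCost (h + eps) - pagingCost (h - eps)) / (2 * eps)
        return (h - 10 * grad)        -- small gradient step

  -- Cool the jump probability slowly so the walk settles: slower cooling,
  -- fewer "cracks" (bad local minima), but slower convergence.
  anneal :: Int -> Double -> Double -> IO Double
  anneal 0 _        h = return h
  anneal n jumpProb h = step jumpProb h >>= anneal (n - 1) (jumpProb * 0.99)

  main :: IO ()
  main = anneal 500 0.3 300 >>= \h ->
           putStrLn ("settled heap target (MB): " ++ show h)
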
That is analogous to how autonomous actors optimize problems in the real
world, e.g. humans in their interaction with a reality shared with other
actors: they bite off some goal and go for it, then they scrap the goal
if it turns out to be a useless local minimum.
In short, we trade speed of convergence for determinism of convergence,
i.e. with annealing (very slow cooling), the ice has fewer cracks. (I
learned this stuff 20+ years ago from reading about one model of how
neural networks can learn and converge.)
So yes, I agree that the nondeterminism can still creep back in as
aliasing error in the algorithm's sampling for a global solution. But
according to the fundamental theorems of the universe (
http://www.haskell.org/pipermail/haskell-cafe/2009-November/068432.html ),
nothing is ever finally optimized (except that "faith exists") to a
globally closed solution (even if you have a closed-solution mathematical
model which says it is, i.e. space-time did not finalize the later
inconsistency of quantum theory), i.e. there is no global solution in
life, only a relative one (there is always some more complex external
state, i.e. the universe trends to maximum disorder).
But the practical motivation is that to the degree the gradient search is
reasonably more deterministic in common usage (see my "KEY POINT" in the
response to Simon Marlow for why I think that is likely achievable), the
space nondeterminism should in theory scale more continuously, more often.
Regarding the private point about overbooking, since the CPU is much
faster than the hard disk, it should be true that even if the paging load
is 100% of the CPU allocation, the CPU is not overbooked. And if the
paging load is so overbooked due to competing actors, you might as well
just turn off your machine :). I also agree with the private point that
the gradient search algorithm should incorporate a gradual reduction in
its velocity towards the target as it approaches the target minimum, i.e.
systems do not perform reliably if resource allocation strategies are
edge-discontinuous. I disagree that the programmer/user needs to be
burdened with this, as is the case now. I am arguing it should be
automatic, unless the programmer wants to do space profile optimization
work.
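To illustrate that last point with another standalone sketch of mine
(again with made-up numbers, nothing from the GHC RTS): let each
adjustment close a fixed fraction of the remaining gap to the target, so
the velocity falls off smoothly as the minimum is approached instead of
the strategy switching behaviour at a hard edge.

  -- Damped (proportional) approach to a target heap size: each tick
  -- closes a fixed fraction of the remaining gap, so the adjustment
  -- velocity shrinks smoothly near the target.  The gain and the 1 MB
  -- stopping band are illustrative only.
  approach :: Double      -- damping gain, between 0 and 1
           -> Double      -- target heap size (MB)
           -> Double      -- current heap size (MB)
           -> [Double]    -- trajectory of heap sizes
  approach gain target = go
    where
      go h | abs (target - h) < 1 = [h]
           | otherwise            = h : go (h + gain * (target - h))

  -- e.g. approach 0.25 400 1000 gives 1000, 850, 737.5, ... converging
  -- smoothly on 400.
  main :: IO ()
  main = mapM_ print (approach 0.25 400 1000)
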
Maybe I am very wrong. It is research after all.