[Haskell-cafe] Real-time garbage collection for Haskell

Neil Davies semanticphilosopher at googlemail.com
Mon Mar 1 16:34:03 EST 2010


I don't know that hanging your hat on the deterministic coat hook is  
the right thing to do.

The way I've always looked at this is more probabilistic - you want
the result to arrive within a certain time frame, for a given
operation, with high probability.  There is always some probability
that the h/w will fail anyway - you could even reason that the
software taking too long is just a transient fault that clears.
Random (non-correlated, preferably Bernoulli) failures are OK;
non-deterministic ones aren't.

This probabilistic approach - accepting only a low probability of
ending up in the tail of the timing distribution - would give a lot
more flexibility to any form of (say, incremental) GC: you may not be
able to bound the incremental steps absolutely, but a strong
probabilistic bound might well be achievable.
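
To make that concrete, here's a rough sketch of what checking such a
probabilistic bound could look like: sample the latency of a frame's
worth of work many times and compare a tail quantile against a budget.
The workload, sample count, quantile and 20ms budget below are all
made up for illustration, as are the helper names timeOnce and
quantile - this isn't measuring any particular GC.

import Control.Exception (evaluate)
import Control.Monad (forM)
import Data.List (sort)
import Data.Time.Clock (diffUTCTime, getCurrentTime)

-- Wall-clock time of one run of an action, in seconds.
timeOnce :: IO a -> IO Double
timeOnce act = do
  t0 <- getCurrentTime
  _  <- act
  t1 <- getCurrentTime
  return (realToFrac (diffUTCTime t1 t0))

-- Empirical quantile (q in [0,1]) of a non-empty sample.
quantile :: Double -> [Double] -> Double
quantile q xs = sort xs !! min (n - 1) (floor (q * fromIntegral n))
  where n = length xs

main :: IO ()
main = do
  -- Parameterise the workload by the sample index so one evaluated
  -- thunk isn't silently shared across all the runs.
  samples <- forM [1 .. 1000 :: Integer] $ \i ->
               timeOnce (evaluate (sum [i .. i + 100000]))
  let p999   = quantile 0.999 samples
      budget = 0.020                 -- 20ms frame budget, made up
  putStrLn $ "99.9th percentile: " ++ show p999 ++ "s"
  putStrLn $ if p999 <= budget then "within budget"
                               else "tail exceeds budget"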

Neil


On 1 Mar 2010, at 21:06, Job Vranish wrote:

>
>
> On Mon, Mar 1, 2010 at 2:37 PM, Thomas Schilling <nominolo at googlemail.com> wrote:
> On 1 March 2010 16:27, Job Vranish <job.vranish at gmail.com> wrote:
> > My current area of work is on realtime embedded software
> > programming for avionics systems. We do most of our coding in Ada
> > but I've been dreaming of using Haskell instead.
>
> A possible workaround would be to sprinkle lots of 'rnf's around your
> code to make sure you don't build up a thunk or two that will delay
> you later.  And if you do this, aren't you essentially programming in
> a strict functional language (like SML or O'Caml)?  By careful
> profiling and auditing you can probably rule out most of the
> potential bad cases, so it can be acceptable for a soft real-time
> system (Galois did something like this, I believe).  But for avionics
> systems you probably want more assurances than that, don't you?
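
For reference, the kind of thing Thomas is describing might look
roughly like this - a minimal sketch using Control.DeepSeq from the
deepseq package, where FrameState, stepFrame and runFrame are
illustrative names rather than anything from a real system:

import Control.DeepSeq (NFData(..), deepseq)

-- Illustrative frame-to-frame state.
data FrameState = FrameState
  { altitude  :: !Double
  , heading   :: !Double
  , waypoints :: [String]
  }

instance NFData FrameState where
  rnf (FrameState a h ws) = rnf a `seq` rnf h `seq` rnf ws

-- Illustrative per-frame update.
stepFrame :: Double -> FrameState -> FrameState
stepFrame dt st = st { altitude = altitude st + 5.0 * dt }

-- Force the new state to normal form before handing it to the next
-- frame, so no unevaluated thunks accumulate across frames.
runFrame :: Double -> FrameState -> FrameState
runFrame dt st = let st' = stepFrame dt st in st' `deepseq` st'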
>
> Yes and no.
> It's true that lazy evaluation makes reasoning about timings a bit
> more difficult (and might not be usable in very time-critical
> scenarios), but it still has well-defined, deterministic behavior.
>
> It's the referential transparency that saves us here. If you run a
> lazy function with the same objects (in the same evaluation state),
> it should _theoretically_ take the same amount of time to run. All
> of our top-level inputs will be strict, and if we keep our
> frame-to-frame state strict, our variance in runtimes, given the
> same inputs, should be quite low, modulo the GC.
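
The frame loop shape that suggests might look something like the
sketch below - readInputs, frame and writeOutputs are stand-ins for
whatever the real system does; only the forcing discipline between
frames is the point:

{-# LANGUAGE BangPatterns #-}

import Control.DeepSeq (NFData, deepseq)

-- Run one frame per iteration: read (already strict) inputs, apply a
-- pure frame function, and force both the outputs and the new state
-- to normal form before looping, so that - for the same inputs and
-- state - the remaining frame-to-frame variance comes from the GC.
frameLoop :: (NFData st, NFData out)
          => IO inp                    -- read this frame's inputs
          -> (inp -> st -> (out, st))  -- pure frame function
          -> (out -> IO ())            -- publish the outputs
          -> st                        -- initial state
          -> IO ()
frameLoop readInputs frame writeOutputs = go
  where
    go !st = do
      inp <- readInputs
      let (out, st') = frame inp st
      st' `deepseq` out `deepseq` writeOutputs out
      go st'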
>
> Even our current code can take significantly different amounts of
> time to compute things depending on what you're doing. Some
> waypoints take longer to look up from the database than others.
> Predicting the time to arrival can take significantly longer or
> shorter depending on seemingly trivial parameters, etc.
>
> It matters less that the code always takes the same amount of time
> to run (though it always needs to be less than the frame time), and
> more that it always takes the same amount of time to run given the
> same initial conditions.
>
> - Job
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe

