[Haskell-cafe] Memory efficiency questions for real-time graphics
T Willingham
t.r.willingham at gmail.com
Sat Nov 1 14:57:54 EDT 2008
On Tue, Oct 28, 2008 at 3:24 PM, Sebastian Sylvan
<sebastian.sylvan at gmail.com> wrote:
> 2008/10/28 T Willingham <t.r.willingham at gmail.com>
>>
>> To give a context for all of this, I am applying a non-linear
>> transformation to an object on every frame. (Note: non-linear, so a
>> matrix transform will not suffice.)
>
> Any reason why you can't do this in the vertex shader? You really should
> avoid trying to touch the vertices with the CPU if at all possible.
The per-vertex computation is a fairly complex time-dependent function
applied to the given domain on each update. Yet even if it were
simple, I would still implement the formulas in Haskell first and
leave the optimization for later, if at all. The current C++
implementation, which uses userland-memory vertex arrays, already
performs very well.
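
For concreteness, the computation has roughly the following shape. The
Vertex type and the radial ripple formula are invented for illustration
(the real function is considerably more involved), but like the real one
it is non-linear in the coordinates, so no 4x4 matrix can express it:

-- Illustration only: a time-dependent, non-linear warp (a radial sine
-- ripple). The displacement depends non-linearly on x and y, so it
-- cannot be folded into a linear (matrix) transform.
data Vertex = Vertex { vx, vy, vz :: !Double }

warp :: Double -> Vertex -> Vertex
warp t (Vertex x y z) =
  let r  = sqrt (x*x + y*y)          -- distance from the z-axis
      dz = 0.1 * sin (6 * r - 2 * t) -- ripple travelling outward over time t
  in Vertex x y (z + dz)

-- Applied to the whole domain on every update:
updateFrame :: Double -> [Vertex] -> [Vertex]
updateFrame t = map (warp t)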
On Sat, Nov 1, 2008 at 3:15 AM, Neal Alexander <wqeqweuqy at hotmail.com> wrote:
> Even when generating one or more copies of "world" per frame the performance
> stays fine and allocations are minimal.
Who says? That may be your particular experience from your particular
tests. In my case, any copy of the "world" on each frame would have a
catastrophic effect on the framerate, for any such definition of
"world".
> From what I've seen, the OpenGL calls are what's going to bottleneck.
Yes, that is very nearly a tautology. The problem is sporadic lag and
jitter, caused either by occasional large allocations or by garbage
collection triggered by frequent small allocations.
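
One way to keep per-frame allocation near zero, sketched below, is to
destructively update a flat vertex buffer with Data.Vector.Storable.Mutable
rather than rebuild any structure per frame. The x,y,z-triple layout and
the formula are assumptions for illustration, not my actual code:

import qualified Data.Vector.Storable.Mutable as VM
import Control.Monad (forM_)

-- Overwrite a flat buffer of x,y,z triples in place each frame. Nothing
-- new is allocated in the steady state, so there is nothing fresh for
-- the GC to trace or copy. (Layout and formula are assumed, as above.)
updateInPlace :: Double -> VM.IOVector Double -> IO ()
updateInPlace t buf =
  forM_ [0, 3 .. VM.length buf - 3] $ \i -> do
    x <- VM.read buf i
    y <- VM.read buf (i + 1)
    z <- VM.read buf (i + 2)
    let r  = sqrt (x*x + y*y)
        dz = 0.1 * sin (6 * r - 2 * t)
    VM.write buf (i + 2) (z + dz)

A storable buffer also exposes its raw pointer via unsafeWith, which is
exactly what a userland-memory vertex array wants on the OpenGL side.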
It's an awkward situation in which even the best profiling cannot
pinpoint what is blatantly obvious to the human eye. The profiler may
register the pauses as 0.01% of the run, but the actual effect is
glitchy behavior that comes off as unprofessional.
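
What does make the hitches visible is timing each frame against a wall
clock and logging the outliers that an aggregate profile averages away.
A minimal sketch, with an arbitrary 20 ms budget:

import Data.Time.Clock (getCurrentTime, diffUTCTime)
import Control.Monad (when)

-- Run one frame's work and report any frame over budget. A single 50 ms
-- GC pause disappears into an aggregate profile but shows up here as an
-- individual slow frame.
timedFrame :: IO () -> IO ()
timedFrame render = do
  t0 <- getCurrentTime
  render
  t1 <- getCurrentTime
  let ms = realToFrac (diffUTCTime t1 t0) * 1000 :: Double
  when (ms > 20) $                       -- 20 ms ~ one 50 fps frame
    putStrLn ("slow frame: " ++ show ms ++ " ms")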