[Haskell-cafe] OpenGL performance issue on OSX

Jason Dagit dagitj at gmail.com
Tue May 20 18:46:10 UTC 2014

On Tue, May 20, 2014 at 8:36 AM, Michael Baker <michaeltbaker at gmail.com>wrote:

> On Mon, May 19, 2014 at 11:43 PM, Anthony Cowley <acowley at gmail.com>wrote:
>> > On May 20, 2014, at 12:20 AM, Michael Baker <michaeltbaker at gmail.com>
>> wrote:
>> >
>> > I'm using OpenGLRaw and I'm getting 3 frames per second trying to draw
>> 8000 triangles. When I profile the application, I see that almost all of
>> the time is taken by a call to CGLFlushDrawable, which is apparently OSX's
>> function for swapping the back buffer to the front buffer.
>> Can you put the code somewhere so we can take a look? OpenGL offers 8000
>> ways to draw 8000 triangles.
>> Anthony
>> >
>> > Has anyone else run into a problem like this? I seem to recall a thread
>> a while ago about a Haskell specific OpenGL performance issue, but I can't
>> find it now.
>> > _______________________________________________
>> > Haskell-Cafe mailing list
>> > Haskell-Cafe at haskell.org
>> > http://www.haskell.org/mailman/listinfo/haskell-cafe
> Here's the code https://gist.github.com/MichaelBaker/4429c93f2aca04bc79bb.
> I think I have everything important in there. Line 53 is the one that
> causes the slow down. I guess some things to note are that "Triangle" is
> Storable and the vector I'm creating on line 14 and writing to on line 47
> is a mutable storable vector.

I think I know what is causing the slowdown you're seeing, but first I want
to point out some ways that you can get better help in the future. Perhaps
you already know these things and were simply in a hurry; in that case, I
still want to point out these tips for the benefit of others :)

Thank you for posting the code, but please understand that I can't find
withBasicWindow on Hoogle or Google, and there isn't a single import in that
code. How does it work? What is triangles? It's hard for me to help people
if I don't have the complete code or know which libraries they're using.
I'm sure others are in a similar situation.

OpenGL is a library/API for specifying lighting, transformations,
geometry, colors, raster effects, shaders, and that sort of thing. OpenGL
says nothing about making a window, putting pixels in the
window, or interfacing with the OS. On the other hand, performance
problems don't discriminate and can happen at any level. In other words,
you'll get much better help if you provide runnable examples. Yes that
means more work for you, but I promise that being in the habit of making
minimal/reproducible test cases will hugely improve your (or anyone's)
skills as a software engineer. Furthermore, because I'm not testing with
your code, anything I suggest should be considered speculation.

In terms of graphics performance, most people are accustomed to talking
about FPS, but FPS is hard to work with. Think about it: if you can render
one effect at 60 FPS, and a second effect on its own renders at 30 FPS,
what should the combined FPS be? Instead, set a budget for rendering time
by taking 1/(desired FPS) (60 FPS = 16.67 ms per frame) and focus on how
long each operation takes to render. At least then you can simply add the
costs and figure out how much time you have left. Additionally, when you
measure the FPS of rendering something there is usually a frame rate
limit, say 60 FPS from vsync, so you might think your effect takes 16.7 ms
when it really takes 2 ms.
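To make that arithmetic concrete, here's a tiny sketch (frameBudgetMs is a
made-up helper name, just illustrating the conversion) that turns a target
frame rate into a per-frame time budget:

```haskell
-- Convert a target frame rate into a per-frame time budget in
-- milliseconds: 1/(desired FPS), scaled to ms. At 60 FPS the budget
-- is roughly 16.67 ms; at 30 FPS, roughly 33.33 ms.
frameBudgetMs :: Double -> Double
frameBudgetMs fps = 1000 / fps

main :: IO ()
main = do
  print (frameBudgetMs 60)  -- budget at 60 FPS, in ms
  print (frameBudgetMs 30)  -- budget at 30 FPS, in ms
```

Once each operation is measured in milliseconds, the costs add up in a way
that FPS numbers never do.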

With that out of the way, I see that your `tris` list has 80 elements. If
you're rendering 8000 triangles, does that mean you call glDrawArrays 100
times per frame? glDrawArrays has significant per-call overhead and is
designed to be handed a lot of data per call.
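The batching idea can be sketched in pure Haskell (no GL calls here, and
this Triangle type is made up since I don't have your definition): flatten
every triangle into one contiguous array of floats, upload that once, and
let a single glDrawArrays call consume the whole thing instead of 100 small
calls.

```haskell
-- Hypothetical vertex layout: a triangle is three 2D points.
data Triangle = Triangle (Float, Float) (Float, Float) (Float, Float)

-- Flatten every triangle into one contiguous list of floats, suitable
-- for filling a single vertex buffer that one glDrawArrays call can
-- draw in its entirety.
flattenTriangles :: [Triangle] -> [Float]
flattenTriangles = concatMap flat
  where
    flat (Triangle (x0, y0) (x1, y1) (x2, y2)) =
      [x0, y0, x1, y1, x2, y2]

main :: IO ()
main = print (length (flattenTriangles (replicate 8000 t)))
  where
    -- 8000 triangles * 3 vertices * 2 floats = 48000 floats total
    t = Triangle (0, 0) (1, 0) (0, 1)
```

In your real code the result would land in the mutable storable vector you
already have, so the per-frame work is one buffer write and one draw call.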

I haven't tried criterion with OpenGL before, but that's what I would do
next. I'd use it to figure out the per-call time of your glDrawArrays. Then
you can get a sense of how many glDrawArrays calls fit within your
rendering budget.
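Until you wire up criterion, here's a crude timing sketch using only base
(criterion accounts for GC, clock resolution, and outliers far more
carefully, so treat this as a placeholder; the return () stands in for a
draw call since I can't run your GL code):

```haskell
import System.CPUTime (getCPUTime)
import Control.Monad (replicateM_)

-- Crude wall-of-CPU timing for an IO action, in milliseconds.
-- getCPUTime reports picoseconds, hence the 1e9 divisor.
timeMs :: IO a -> IO Double
timeMs action = do
  start <- getCPUTime
  _ <- action
  end <- getCPUTime
  return (fromIntegral (end - start) / 1e9)

main :: IO ()
main = do
  -- Stand-in for 100 draw calls; substitute your real glDrawArrays loop.
  ms <- timeMs (replicateM_ 100 (return ()))
  print ms
```

Divide the measured time by the call count to estimate per-call cost, then
compare against your 16.67 ms budget.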

I'd also look up the tricks that people use to reduce the number of
individual glDrawArrays calls. Look up 'texture atlas'. I've never
actually used one, so don't trust my explanation :) Roughly, I think you
are restricted to one texture per glDrawArrays call, so you 'cheat' by
copying all your textures into one big texture, noting the boundary points
of each texture, and providing those as the texture coordinates for your
triangles as appropriate. You could probably make this nicer to work with
by using Data.Map to map between your original texture id and its
coordinates in the atlas.
I hope that helps,
