[Haskell-cafe] OpenGL performance issue on OSX
svenpanne at gmail.com
Fri May 23 07:20:04 UTC 2014
2014-05-22 22:29 GMT+02:00 Michael Baker <michaeltbaker at gmail.com>:
> [...] Is there some benchmark or tool I could have used to figure that out? Something
> that would show me the time spent filling fragments vs the time spent
> processing triangles vs time spent uploading data to the graphics card?
I don't think there are general cross-platform tools for this, but
e.g. if you have NVIDIA hardware, your platform is supported and you
go through the initial pain of installing/learning the tool, NVIDIA
Nsight or PerfKit can quickly answer such questions. No idea if
something similar exists for AMD or Intel GPUs, but it's likely.
Apart from that, there are a few rules of thumb and techniques to
determine the bottleneck in your rendering pipeline: Vary the size of
the window you're drawing to and see if performance changes. If it
does, you are probably limited by the fill rate of your GPU. Another
test: keep the window size constant, but vary the complexity of the
geometry. If performance changes, the bottleneck could be the
calculation of the geometry on the CPU or the transformation of the
geometry on the GPU (depending on how you do things). You could even
calculate and vary the geometry but not actually send it for
rendering, to isolate the cost on the CPU side. Finally, you can play
some OpenGL tricks to measure/visualize the amount of overdraw,
etc. etc.
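The comparisons above can be sketched in plain Haskell. This is only a
toy classifier, not a real profiling API: the function names, the three
measurement scenarios, and the 10% threshold are all my own invention,
and real frame times are noisy enough that you'd want to average many
frames before comparing.

```haskell
-- Classify the likely bottleneck from three frame-time measurements
-- (all in milliseconds): a baseline, one taken with a much smaller
-- window, and one taken with much simpler geometry. The 10% threshold
-- is an arbitrary choice for this sketch.
data Bottleneck = FillRate | GeometryBound | Elsewhere
  deriving (Eq, Show)

classify :: Double  -- ^ baseline frame time
         -> Double  -- ^ frame time with a much smaller window
         -> Double  -- ^ frame time with much simpler geometry
         -> Bottleneck
classify baseline smallWin simpleGeo
  | speedup smallWin  > 0.1 = FillRate       -- shrinking the window helped
  | speedup simpleGeo > 0.1 = GeometryBound  -- simpler geometry helped
  | otherwise               = Elsewhere      -- e.g. CPU work, sync, uploads
  where
    speedup t = (baseline - t) / baseline

main :: IO ()
main = do
  print (classify 16.0 8.0 15.5)   -- smaller window halves the frame time
  print (classify 16.0 15.8 9.0)   -- simpler geometry helps instead
  print (classify 16.0 15.9 15.7)  -- neither experiment helps
```

The same idea extends to the other experiments: each one toggles a
single stage of the pipeline and checks whether the frame time moves.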