From claude at mathr.co.uk Mon Jan 19 19:58:06 2015
From: claude at mathr.co.uk (Claude Heiland-Allen)
Date: Mon, 19 Jan 2015 19:58:06 +0000
Subject: [HOpenGL] two bugs in ShaderObjects.hs
Message-ID: <54BD61CE.4000705@mathr.co.uk>

Hi all,

I don't have a github.com account, so reporting here. Hope that's ok.

I found these bugs after Cale posted a link to some problematic behaviour
(which probably failed initially due to lack of an OpenGL context):
http://lpaste.net/118704

The first bug is a lack of error checking in createShader:

---8<---
createShader :: ShaderType -> IO Shader
createShader = fmap Shader . glCreateShader . marshalShaderType
---8<---
http://hackage.haskell.org/package/OpenGL-2.10.0.0/docs/src/Graphics-Rendering-OpenGL-GL-Shaders-ShaderObjects.html#createShader

glCreateShader returns 0 on error, and using that "not a shader" id for
other calls is bound to cause chaos.
https://www.opengl.org/sdk/docs/man/html/glCreateShader.xhtml

The second bug is much more serious: a lack of error checking in shaderVar:

---8<---
shaderVar :: (GLint -> a) -> GetShaderPName -> Shader -> GettableStateVar a
shaderVar f p shader =
   makeGettableStateVar $
      alloca $ \buf -> do
         glGetShaderiv (shaderID shader) (marshalGetShaderPName p) buf
         peek1 f buf
---8<---
http://hackage.haskell.org/package/OpenGL-2.10.0.0/docs/src/Graphics-Rendering-OpenGL-GL-Shaders-ShaderObjects.html#shaderVar

glGetShaderiv doesn't modify the contents of buf on error. This means
uninitialized memory is read by peek1 and then presumably used by f, which
can cause a crash (in the best case) or wrong/undefined behaviour (in the
worst case).
https://www.opengl.org/sdk/docs/man/html/glGetShader.xhtml

Thanks for reading,

Claude

--
http://mathr.co.uk

From svenpanne at gmail.com Tue Jan 20 08:26:21 2015
From: svenpanne at gmail.com (Sven Panne)
Date: Tue, 20 Jan 2015 09:26:21 +0100
Subject: [HOpenGL] two bugs in ShaderObjects.hs
In-Reply-To: <54BD61CE.4000705@mathr.co.uk>
References: <54BD61CE.4000705@mathr.co.uk>
Message-ID:

2015-01-19 20:58 GMT+01:00 Claude Heiland-Allen :
> I don't have a github.com account, so reporting here. Hope that's ok.

Yep.

> I found these bugs after Cale posted a link to some problematic behaviour
> (which probably failed initially due to lack of an OpenGL context):
> http://lpaste.net/118704

Not probably, definitely! :-) Without a context, OpenGL can't even know,
e.g., whether shaders are supported at all.

> The first bug is lack of error checking in createShader:
>
> ---8<---
> createShader :: ShaderType -> IO Shader
> createShader = fmap Shader . glCreateShader . marshalShaderType
> ---8<---
> http://hackage.haskell.org/package/OpenGL-2.10.0.0/docs/src/Graphics-Rendering-OpenGL-GL-Shaders-ShaderObjects.html#createShader
>
> glCreateShader returns 0 on error, and using that "not a shader" id for
> other calls is bound to cause chaos.
> https://www.opengl.org/sdk/docs/man/html/glCreateShader.xhtml

This is not a bug, this is intentional. "Shader" is an instance of
"ObjectName", so you can check via "isObjectName" whether you actually
got a shader back. Another possibility is to retrieve the value of the
"errors" state variable. In general, it's a good idea during development
to check "errors" from time to time at a few strategic places (or use
"reportErrors" from the GLUT package). Almost all API calls can fail in
one way or another, and doing error checking in the binding layer would
lead to horrible performance. This is totally in the "OpenGL spirit",
where basically no API call returns a success/failure indicator (apart
from the global error state).
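[Editorial aside: the application-level check Sven describes can be sketched in a few lines. This is a minimal, base-only sketch, not the binding's API: glCreateShader is stubbed out to simulate the no-context failure returning 0, and the name createShaderChecked is hypothetical.]

```haskell
import Foreign.C.Types (CUInt)

type GLuint = CUInt

newtype Shader = Shader GLuint deriving (Eq, Show)

-- Stub standing in for glCreateShader: without a current OpenGL
-- context the real call fails and returns 0 ("not a shader").
glCreateShaderStub :: IO GLuint
glCreateShaderStub = return 0

-- Application-level check: treat the error id 0 as a failure instead
-- of blindly wrapping it in a Shader and carrying on.
createShaderChecked :: IO (Maybe Shader)
createShaderChecked = do
  sid <- glCreateShaderStub
  return (if sid == 0 then Nothing else Just (Shader sid))
```

The same idea applies to the real binding: query "errors" (or use "isObjectName") right after creating the object, before using the id in further calls.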
> The second bug is much more serious: a lack of error checking in shaderVar:
>
> ---8<---
> shaderVar :: (GLint -> a) -> GetShaderPName -> Shader -> GettableStateVar a
> shaderVar f p shader =
>    makeGettableStateVar $
>       alloca $ \buf -> do
>          glGetShaderiv (shaderID shader) (marshalGetShaderPName p) buf
>          peek1 f buf
> ---8<---
> http://hackage.haskell.org/package/OpenGL-2.10.0.0/docs/src/Graphics-Rendering-OpenGL-GL-Shaders-ShaderObjects.html#shaderVar
>
> glGetShaderiv doesn't modify the contents of buf on error. This means
> uninitialized memory is read by peek1 and then presumably used by f, which
> can cause a crash (in the best case) or wrong/undefined behaviour (in the
> worst case).
> https://www.opengl.org/sdk/docs/man/html/glGetShader.xhtml

Again, you should make sure at the application level that the "Shader"
you pass into the shader queries is actually a shader name. Otherwise you
get nonsensical values for the shader state you queried, or even a
Haskell "error" in the case of "shaderType".

Just out of curiosity: what was your expectation regarding error
handling/detection? Using exceptions would be horrible, because as an
application programmer you would have a very hard time dealing with them
without causing resource leaks and/or inconsistent program state. Another
option would be wrapping almost everything in the OpenGL binding into
Maybe/Either, but I don't think anybody would welcome that.

From claude at mathr.co.uk Tue Jan 20 10:53:01 2015
From: claude at mathr.co.uk (Claude Heiland-Allen)
Date: Tue, 20 Jan 2015 10:53:01 +0000
Subject: [HOpenGL] two bugs in ShaderObjects.hs
In-Reply-To:
References: <54BD61CE.4000705@mathr.co.uk>
Message-ID: <54BE338D.3000605@mathr.co.uk>

On 20/01/15 08:26, Sven Panne wrote:
>> glCreateShader returns 0 on error, and using that "not a shader" id for
>> other calls is bound to cause chaos.
>> https://www.opengl.org/sdk/docs/man/html/glCreateShader.xhtml
> This is not a bug, this is intentional.
> "Shader" is an instance of "ObjectName", so you can check via
> "isObjectName" if you actually got a shader back. Another possibility
> is to retrieve the value of the "errors" state variable. In general,
> it's a good idea during development to check "errors" from time to time
> at a few strategic places (or use "reportErrors" from the GLUT
> package). Almost all API calls can fail in one way or the other, and
> doing error checking in the binding layer would lead to horrible
> performance. This is totally in the "OpenGL spirit" where basically no
> API call returns a success/failure indicator (apart from the global
> error state).

Ok, that makes sense. But for the second bug, the only way to tell
whether the glGetShader call failed and left memory unchanged is by
checking the GL errors within the binding. I doubt shaderVar will be
used in an inner loop, so the performance hit should be acceptable.

>> The second bug is much more serious, a lack of error checking in shaderVar:
>>
>> ---8<---
>> shaderVar :: (GLint -> a) -> GetShaderPName -> Shader -> GettableStateVar a
>> shaderVar f p shader =
>>    makeGettableStateVar $
>>       alloca $ \buf -> do

repeatedly call glGetError here until it returns 0, to reset GL flags
https://www.opengl.org/sdk/docs/man/html/glGetError.xhtml

>>          glGetShaderiv (shaderID shader) (marshalGetShaderPName p) buf

call glGetError here; if it isn't 0, then throw an exception to avoid
reading uninitialized memory

>>          peek1 f buf
>> ---8<---
>> http://hackage.haskell.org/package/OpenGL-2.10.0.0/docs/src/Graphics-Rendering-OpenGL-GL-Shaders-ShaderObjects.html#shaderVar
>>
>> glGetShaderiv doesn't modify the contents of buf on error. This means
>> uninitialized memory is read by peek1 and then presumably used by f, which
>> can cause a crash (in the best case) or wrong/undefined behaviour (in the
>> worst case).
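[Editorial aside: Claude's annotated fix could look roughly like the following base-only sketch, shown without a real GL context: an IORef error flag stands in for the glGetError state, fakeGetShaderiv models glGetShaderiv leaving the buffer untouched on failure, and all names are illustrative, not the binding's.]

```haskell
import Control.Monad (when)
import Data.IORef
import Foreign.Marshal.Alloc (alloca)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peek, poke)

type GLenum = Int  -- simplified stand-ins for the real GL types
type GLint  = Int

-- Models glGetShaderiv: on failure the real call records a GL error
-- and leaves the buffer contents untouched.
fakeGetShaderiv :: IORef GLenum -> Bool -> Ptr GLint -> IO ()
fakeGetShaderiv errState ok buf
  | ok        = poke buf 35633             -- e.g. GL_VERTEX_SHADER
  | otherwise = writeIORef errState 1282   -- GL_INVALID_OPERATION

-- shaderVar with the proposed checks: clear stale errors first, make
-- the call, then throw instead of reading uninitialized memory.
checkedShaderVar :: IORef GLenum -> Bool -> IO GLint
checkedShaderVar errState ok =
  alloca $ \buf -> do
    writeIORef errState 0            -- "drain glGetError until 0"
    fakeGetShaderiv errState ok buf
    err <- readIORef errState        -- the post-call glGetError
    when (err /= 0) $
      ioError (userError ("GL error " ++ show err))
    peek buf
```

On the success path the checks cost one extra glGetError call; on the failure path the IO exception replaces the read of uninitialized memory.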
>> https://www.opengl.org/sdk/docs/man/html/glGetShader.xhtml
> Again, you should make sure at the application level that the
> "Shader" you pass into the shader queries is actually a shader name.
> Otherwise you get nonsensical values for the shader state you queried,
> or even a Haskell "error" in the case of "shaderType".
>
> Just out of curiosity: what was your expectation regarding error
> handling/detection? Using exceptions would be horrible, because as an
> application programmer you would have a very hard time dealing with
> them without causing resource leaks and/or inconsistent program state.
> Another option would be wrapping almost everything in the OpenGL
> binding into Maybe/Either, but I don't think anybody would welcome
> that.

I think throwing catchable IO exceptions on the very rare (exceptional!)
occasions where the alternative is undefined behaviour (including
crashes) would be much nicer than wrapping all the results.

Claude

From svenpanne at gmail.com Tue Jan 20 12:14:33 2015
From: svenpanne at gmail.com (Sven Panne)
Date: Tue, 20 Jan 2015 13:14:33 +0100
Subject: [HOpenGL] two bugs in ShaderObjects.hs
In-Reply-To: <54BE338D.3000605@mathr.co.uk>
References: <54BD61CE.4000705@mathr.co.uk> <54BE338D.3000605@mathr.co.uk>
Message-ID:

2015-01-20 11:53 GMT+01:00 Claude Heiland-Allen :
> [...] But for the second bug, the only way to tell if the
> glGetShader call failed and left memory unchanged is by checking the GL
> errors within the binding. I doubt shaderVar will be used in an inner loop
> so the performance hit should be acceptable.

Again, there is no need for the binding to check anything; the general
contract is: if there are no registered errors before any call, nothing
bad will happen. Randomly inserting some checks in some arbitrary places
in the binding to check for application programmer errors is probably not
the way to go.
There are more than 80 other places in the binding where something
similar might happen; why should shader queries be special? And as I've
previously mentioned, exceptions generate more problems than they solve,
IMHO, at least in conjunction with resources and/or mutable state.

In a nutshell: if you want to debug your application, regularly check the
OpenGL error state; don't expect any kind of error checking to be done
for you.

From claude at mathr.co.uk Tue Jan 20 14:28:09 2015
From: claude at mathr.co.uk (Claude Heiland-Allen)
Date: Tue, 20 Jan 2015 14:28:09 +0000
Subject: [HOpenGL] two bugs in ShaderObjects.hs
In-Reply-To:
References: <54BD61CE.4000705@mathr.co.uk> <54BE338D.3000605@mathr.co.uk>
Message-ID: <54BE65F9.7000603@mathr.co.uk>

On 20/01/15 12:14, Sven Panne wrote:
> 2015-01-20 11:53 GMT+01:00 Claude Heiland-Allen :
>> [...] But for the second bug, the only way to tell if the
>> glGetShader call failed and left memory unchanged is by checking the GL
>> errors within the binding. I doubt shaderVar will be used in an inner loop
>> so the performance hit should be acceptable.
>
> Again, there is no need for the binding to check anything, the general
> contract is: If before any call there are no registered errors,
> nothing bad will happen. Randomly inserting some checks in some
> arbitrary places in the binding to check for application programmer
> errors is probably not the way to go. There are more than 80 other
> places in the binding where something similar might happen, why should
> shader queries be special? And as I've previously mentioned,
> exceptions generate more problems than they solve IMHO, at least in
> conjunction with resources and/or mutable state.
>
> In a nutshell: If you want to debug your application, regularly check
> the OpenGL error state, don't expect any kind of error checking being
> done for you.
> And more often than not, even disable those error state
> checks in the final product for performance reasons.

Ok, I do understand this philosophy, but one last suggestion: replace
'alloca' with 'with 0', to at least avoid reading uninitialized memory
when the application programmer makes a mistake (0 should be a sensible
default in most cases, I imagine). If git patches for this replacement
would be accepted, I would be willing to submit them for as many places
in the binding as I can find.

Claude

From svenpanne at gmail.com Tue Jan 20 15:24:48 2015
From: svenpanne at gmail.com (Sven Panne)
Date: Tue, 20 Jan 2015 16:24:48 +0100
Subject: [HOpenGL] two bugs in ShaderObjects.hs
In-Reply-To: <54BE65F9.7000603@mathr.co.uk>
References: <54BD61CE.4000705@mathr.co.uk> <54BE338D.3000605@mathr.co.uk> <54BE65F9.7000603@mathr.co.uk>
Message-ID:

2015-01-20 15:28 GMT+01:00 Claude Heiland-Allen :
> Ok, I do understand this philosophy,

Just to be clear: this is basically not *my* philosophy, but the one used
in OpenGL itself. It is partly explained in the OpenGL spec itself, and
partly by the fact that almost no OpenGL call has a return value to
indicate success/failure (unlike e.g. the *nix system calls). This is
probably an artifact from the time of the buffer-less fixed rendering
pipeline, when you had tons of OpenGL calls per frame and any kind of
error checking would have been prohibitively costly. If OpenGL were
designed today, things would look different, I guess. Anyway, in my
experience, fighting against any kind of design philosophy in your
program is a lost battle in the end, and that's why the OpenGL binding is
the way it is.

> but one last suggestion: replace 'alloca' with 'with 0' to at least
> avoid reading uninitialized memory when
> the application programmer makes a mistake (0 should be a sensible default
> in most cases I imagine). If git patches for this replacement would be
> accepted, I would be willing to submit them for as many places in the
> binding as I can find.
I think it might make sense for the places where the value is actually
unmarshaled, so the unmarshal function might call "error" for unknown
values. For basic types like GLfloat/GLint/... it doesn't really make
sense, but it doesn't hurt either: the value is just undefined, and 0 is
not really better. In our concrete case of "shaderVar", poking 0 into
"buf" first would be OK then.

If you like, you can prepare a GitHub pull request for this and other
sensible places. That's easier than sending patches around by hand and
commenting on them. Create a GitHub account, fork the OpenGL repo, upload
a fix and send it as a pull request; it's only a few clicks... :-)
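[Editorial aside: the 'with 0' change discussed above amounts to a one-word fix per call site. The following base-only sketch shows why it helps; fakeGetIv is a hypothetical stand-in for a failing glGetShaderiv that writes nothing into the buffer.]

```haskell
import Foreign.Marshal.Utils (with)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peek)

type GLint = Int  -- simplified stand-in for the real GLint

-- Models a failing glGetShaderiv: on error the real call writes
-- nothing into the buffer at all.
fakeGetIv :: Ptr GLint -> IO ()
fakeGetIv _ = return ()

-- With 'alloca' the peek below would read uninitialized memory after
-- a failed call; 'with 0' guarantees a defined default instead.
getWithDefault :: IO GLint
getWithDefault =
  with 0 $ \buf -> do
    fakeGetIv buf
    peek buf
```

As Sven notes, the defined 0 is still an arbitrary value for plain GLint/GLfloat queries, but it keeps unmarshaling functions from tripping over garbage.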