July 11, 2011 / elandar1

Finally… an update!

It has been a while since we last posted here. There has been a lot going on for both Clinton and me, and we have had very little time. We have gotten some programming done on The Sphere (if you ask me, not as much as I would like). We managed to get the audio programmed despite our busy summers and hope to have more done in the near future.

We chose to use the SDL music library and OpenAL for a couple of different reasons: they are both multi-platform, easy to use, and have diverse functionality. I programmed a wrapper dynamic library that we can use with The Sphere or other projects, and I tried to make it as functional as we will need. There are the standard play, stop, and pause functions, plus fade in and fade out (which are native to SDL's music API). I have also programmed a playlist: when one track finishes, playback moves to the next, and the most recently played track is put at the bottom of the list. The user can skip over the current selection to the next one and, naturally, change the volume of the music.

All of this music and sound functionality lets us add a nice dimension to The Sphere, and gives us a chance to make an often overlooked part of games something players actually notice.

May 11, 2011 / racoonacoon

Prepping for cross-platform development

So about a week ago I started thinking. Being that we are going to remake The Sphere and Claudius with OpenGL, why not take it a step further and provide cross-platform support to all those *nix systems out there? I mean, it seems like a pretty cool idea and it would allow us to gain a lot of experience, scars, and (hopefully) customers. Plus it would be cool to show some love to the undervalued Linux community. With this idea stuck in my head, CJ and I went on a search for all the different applications and code stuffs that we will need to undertake this task and work cross-platform effectively. I also started reading a bit into OpenGL and all the stuff that it includes (or rather lacks.)

When coming from DirectX, OpenGL can seem a bit…bare I suppose. OpenGL lacks texture loading capabilities, sound playback abilities, input capabilities, and, what gets me the most, it lacks any sort of vector math library. It just seems inane to me to have an API of this sort without a decent math library to help you actually get the graphics on the screen. Still, I suppose this is necessary for OpenGL to be as cross-platform as possible, however painful that may be. Overall though, I love what I have seen of the actual library so far; it feels much more natural and streamlined than DirectX does.

Thankfully, CJ and I were able to put together a pretty good list of all our cross-platform necessities. The first item on the list was the IDE(s) (the application we use to program and compile our programs in) that we are going to be using. I really wish Microsoft Visual Studio were cross-platform, because that is what we are both familiar with (I have been using it since the 2003 edition!) and we both just really like it. Instead, we begrudgingly decided to use the next best thing: Code::Blocks (which really is worlds away from the holy grail that is Visual Studio, but we don't have much of a choice.) I'm also looking into using MonoDevelop in case I get a C# fancy while running my newly installed Ubuntu partition.

With that out of the way, we had to pick some software libraries that could fill some of the gaps of OpenGL. CJ suggested I look into SDL for the window management stuff, as it handles all the nasty, extremely gritty, low-level stuff required to create a window. Better yet, it does it all in a cross-platform way, so if you are careful about things you can get a window up and running across multiple platforms. Which, *surprise surprise*, I have already done!

#define TANGENT

I am trying my hardest to make Claudius as smart as possible when it comes to this low-level stuff. If I don't set everything up correctly now, it will be nearly impossible to do so once a game has been built on top of it. So far I have it to where Claudius can return a list of supported resolutions and automatically clamp your resolution request to the bounds of those supported resolutions. It worked fine under Win7, but it failed initially under Linux: apparently the backbuffer I was requesting had too high a bit depth for the Linux OpenGL driver. My next step, then, is to make sure the backbuffer parameters are queried and set to something legal if the requested one doesn't exist. After that, the next significant thing that needs to be set up is an event system so that buffers can be notified when the device is lost or reset (I assume this can happen under OpenGL) and rebuild themselves without user intervention.

#undef TANGENT

For sound we plan to use OpenAL, which seems like it could be a nice replacement for XAudio2. I believe CJ wanted to take advantage of SDL's media features as well. For texture loading there exist both DevIL and FreeImage, but their licenses aren't all that clear, so this is still up in the air. For general stuff we plan to take advantage of the Boost C++ libraries, which should prove extremely useful, especially in the threaded areas of the engine. The math library is something I'm not so sure about at the moment. There is glm, which looks great, but I might end up writing my own library so I can fool around a bit with lady SIMD. For those who don't know, SIMD is the ability of an x86 processor (an Intel or Intel clone (AMD)) to execute a single instruction on multiple pieces of data simultaneously. This can be used to make 3D math execute crazy fast. I think it will be a good learning experience.

This was a very long-winded post, but I had a lot of stuffs to cover. Hope to see you back next time!

May 7, 2011 / racoonacoon

The Sphere. Finished?

Just a few days ago CJ and I presented The Sphere to our graphics class. For some reason I wasn’t as nervous as I tend to be when standing in front of a group of peers and strangers and saying words. Perhaps it was the fact that I got little sleep the night before and was in a strange daze, or perhaps the gum I was rapidly chewing actually did help diffuse some of my anxiety.

Our project turned out pretty much how we envisioned it, which is awesome! We were wise with our scope, and, as a result, got a nice, simple little platformer with decent gameplay and good graphics. There were some things that didn't go so well too, but first, a video.

What went right

CJ and I were able to stay pretty tight throughout the development of The Sphere, and, because of this, we managed to get quite a bit accomplished. CJ designed the levels and did a lot of the supporting programming, while I did a majority of the engine work and messed around with gameplay mechanics.

We had a good idea and stuck with it. We would have never gotten anything done if we would have tried to do something much more complicated.

We used some tools, but we didn't overdo it. I assert that any good 3D game simply cannot be created without some form of visual tools. CJ and I realized this from the beginning and utilized Blender 2.5 and its flexible scripting engine to get a level from Blender into Claudius in as short a time as possible. On the other hand, we didn't overdo it and attempt to create a level builder from the ground up, which would have been too much of a timesink for us to actually get something done.

What went wrong

Timecrunch. CJ and I were literally working up until the last minute on The Sphere. This resulted in a couple of unexpected bugs when we presented to the class. They weren't too major, but they were disappointing nonetheless.

Object Batching. Remember this post in which I was working on object batching to improve performance? It was a mistake. I should have been looking into instancing instead (which does exactly the same thing except it is much easier to pull off, is supported directly by the hardware, and executes much faster.) For some reason the fact that object batching is virtually the same as instancing totally escaped me, and as a result I spent a lot of time working on custom technology that I shouldn't have.

The .claud format. Near the end of the project I was working on a custom exporter within Blender that would export the scene into a format Claudius could more easily understand. The format was, in my eyes, superior to .obj and would have been a great asset had I gotten it working in time. Unfortunately I had to drop it in order to get the main aspects of the game completed for the presentation. If I were to do things over, I would build the .claud format from the get-go.

Lack of different collision types. The Sphere only supports sphere-to-axis-aligned-bounding-box collision detection. That means we could have no collisions with rotated boxes or angled surfaces, causing the gameplay to get stale quickly. The .claud format would have provided the information necessary to compute arbitrarily rotated box collisions, but, as I never got the format finished, this remained impossible.

The Future

So there we go! The Sphere is a 100% completed project that served its purpose of getting us both an A in graphics class, right? I’m glad to say that no, it isn’t. CJ and I decided that it would be great to convert Claudius over to OpenGL and do a total rewrite of the Sphere so that it is more stable and cross platform. Besides that, it is somewhat necessary to do this conversion if we want to sell the game, else the school could potentially claim ownership due to the fact that it was created specifically for a class. Rewriting it on our own time, however, will guarantee that we fully own the project.

There are some problems we still have to iron out, though. On the technical side of things, neither CJ nor I have worked much in OpenGL or spent much time on cross-platform projects, so there is going to be a bit of a learning curve for both of us. We also need to get some sort of SVN server set up so we can store our code in a central repository we can both access. The biggest issue I see is that CJ doesn't have access to the internet at his home in Boondock Hills, Ohio, so that could complicate things a bit if he can't find a way to reach our SVN server. That problem aside, I'm excited to get things moving along. I can't wait to see what this game will look like when fully realized!

Anyway, I’ll make sure to keep this blog updated with our progress, so don’t forget to come around every now and then!

April 25, 2011 / elandar1

Neat effects

Water: We have now implemented water alongside deferred shading; it looks nice and runs fairly well. At first, when we combined deferred shading and water, the water would appear over the other objects in the scene. We fixed this by sampling the depth buffer to see where the objects should appear. We hope to add reflections once we can effectively render the scene a second time without taking a serious hit to our frame rates.

Particle Systems (trail): We also wanted the sphere to leave a trail as it rolls around. The way we had things set up, it was a little difficult to access the sphere's position. So we made it possible to name objects in the object registry, then look the sphere up later by name, get its position, and have the trail object update and render like all the other objects. Simply creating an instance of the trail particle system would not give the desired effect, because the skybox would render over the trail; the only way the player could see the trail was if an object that renders before it was already there.

Transitions: We are using render targets to help with our transitions. At the moment, rendering a transition from one scene to another is very costly and rather difficult. We hope to add a scene class that will hold all the information for each level; when it is called, the scene will load up and render everything in it, letting us move smoothly from one scene to another. The great thing is that our scenes have now been implemented, so this is feasible =).

3D sound: I have finished implementing 3D audio on top of XAudio2. It has stereo, distance attenuation, and (hopefully; untested at the moment) 5.1 and 7.1 surround sound. We will also be implementing reverb effects for all the sounds, so that the atmosphere of the game seems more realistic, with sound echoing off the surroundings to give a more lifelike feel to the environment. It was a very long process setting up the emitter, the listener, and the 3D audio engine's parameters, but well worth it when finished.

April 22, 2011 / racoonacoon

Setting up a Scene class…and breaking everything else

In my last post I mentioned how we got the basics of deferred rendering working. Since then I have made a bit of an improvement to the frame rate by making use of the stencil buffer. I would normally delve into an explanation of the stencil buffer and show examples of what I am talking about, but since Claudius is currently in a broken state I am unable to do so. Wait, Claudius is broken? Yea, well, one of the things CJ and I really want to do before the class is over is have a bunch of levels that you can progress through. You know, you start at level 1, and, once you beat it, go to the next level and so on and so forth in an endlessly blissful progression. Regrettably, Claudius didn't really have the facilities to handle this well. This was perhaps an oversight on my part, but given that the core of Claudius was quickly built to help me catch up on long-overdue graphics labs a semester back (and to force myself to learn C++ correctly), I'm not really going to be all that hard on myself. Basically, the internal design of the system went something like this:

To put it simply, there were far too many interdependencies. The Updater and Renderer relied on Claudius, which in turn relied on the Updater and Renderer, which in turn used Claudius to get a reference to the single Register. While I enjoy the concept of having a separate class that acts like a database to store all of our objects (the object register), it wasn't integrated all that well with the rest of the system. This also made level progression difficult and error-prone. Instead of being able to create a logical grouping of the items you want in a level, you have to manually add and remove all the items from the register. If you left something there you didn't want, it would erroneously appear in the new level. We had to do something about this…

As always in computer programming, anything can be solved with another layer of indirection:

The solution to removing all the nasty interdependencies was to create a Scene class that has its own personal copy of the Register, Updater, and Renderer. The Updater and Renderer now require a Scene pointer to be passed to them when they are created, binding each of them to that one scene. This allows the updater and renderer to easily ask the scene for a pointer to the register so they can get the items they need. Claudius now only has to worry about keeping tabs on a single Scene object. The beautiful thing about this design is that the Scene in Claudius can easily be swapped out for a different scene, allowing multiple levels to be created very easily. It's literally as easy as this:

Scene* scene1 = new Scene();

/*Add all the objects you want into this scene*/


Once it is time to go to the next level, it's a breeze.

Scene* scene2 = new Scene();

/*Add all the objects you want in the new scene*/


Another thing I am working on is making level loading asynchronous. In other words, we won't freeze up the entire system while a level loads; instead, the game loop keeps running while a scene loads. This will allow us to display another scene (such as an interactive loading screen) while the scene we want loads in the background. I plan to use the OpenMP cross-platform threading library for this aspect, as I'm not a masochist who likes working directly with Windows' native threads.

So yeah, as you can probably expect, moving from diagram 1 to diagram 2 results in many things being broken along the way. Mostly internal things, but a few external code bits also relied on Claudius for a reference to the ObjectRegister. Since the ObjectRegister is one of the most important parts of Claudius (it allows components to talk to one another), I'll have to make sure objects can still grab a copy of the register if they need it. Anyway, I better get back to fixing everything so we have something more to show than just a broken engine! Over and out!

April 18, 2011 / elandar1

Sphere, caught on tape!

Your eyes will water and your bladder will run as you gaze upon the first ever Sphere video! But really, we hope you enjoy it =)

April 6, 2011 / racoonacoon

The Deferred Renderer

Oh yea! The first version of the deferred renderer is finally complete! Although I must admit that some parts of it are hacked and the performance isn't exactly what I want it to be yet. Still, I'm happy with how relatively easy the thing was to implement, minus the tedious and slow work of refactoring some classes. Here are a couple of comparison screens for our rabid fans to digest:

As you might notice, the differences are few, and that is a good thing! Ideally, deferred shading should look exactly like the forward rendering model, but that isn't always possible due to performance (and memory) constraints. To understand why this is so (and why anyone would even use deferred rendering in the first place if it is supposed to look the same as forward rendering), we first have to understand the basics of, and differences between, the forward and deferred shading models.

Forward Shading

With forward shading you light each polygon as it comes through the shader. That is pretty much all there is to it. Well, except for the disadvantages. One problem is less-than-optimal performance. If many polygons overlap one another, say 1000, and the polygons are rendered in back-to-front order, then the lighting calculations will be run 1000 times for that same pixel. Not so great. Another problem is shader complexity. With forward rendering, the drawing of the vertex and the lighting of the vertex are closely bound to one another. This forces all your shaders to incorporate a large blob of lighting code, which is both nasty and complex.

Deferred Shading

Deferred Shading attempts to reduce some of the disadvantages of Forward Rendering by decoupling the rendering of the vertex and the lighting of the vertex into two separate phases, i.e. it defers the lighting phase. The basic concept is this: instead of actually rendering out the scene, we render out information about the scene. For example, we render out the diffuse color of the image, the normal at that pixel location, and the world position at that point.

Essentially what we are doing is combining the individual data of all the objects in our scene into a single buffer (known as the GBuffer.) Once our scene is rendered into this GBuffer, we run a separate full-screen pass that combines all this information and applies our lighting equations. The advantage is that we don't have to worry about a pixel going through its lighting equations multiple times when geometry overlaps, as the lighting equations are run only once, on the pixels that are guaranteed to be visible. Another advantage of this method is that it greatly simplifies the shaders that render out our geometry. No more do we have to worry about lighting in our mesh shaders. We defer all that complex stuff into a single shader that executes once everything has drawn.

However, there is a cost associated with each additional render target we store data to, so there is a strong incentive to bind the smallest number of render targets possible. I was binding four render targets and using the last one to store the specular color (the color of the 'highlights' of the texture.) The performance drop was just too much to be justifiable, though, so I opted to remove the fourth render target and use the diffuse color (texture color) of the image as the specular color. That is the difference between the images above. You will notice that the forward rendering model has more white highlight on the square pillars, while the deferred rendering model's highlights are less noticeable. This is because the deferred model has less flexibility in the parameters we can use to light the objects.

This isn't really what we want, though. It would be nice to have the exact same (or at least very nearly the same) tweakable parameters that we do in forward shading. Is there a way to get around this? I believe the answer is yes. One of the ideas that has been floating around in my head (probably from this pdf) is to store a material id at each pixel in one of our render targets. We could then push all our material information into an array in our DeferredRender effects file and look up the exact lighting parameters for each pixel. This would return to us all the flexibility we had before (or at least most of it), but there are a few things to worry about. One is the number of materials in the scene: what happens if the material count exceeds the maximum our shader can handle? We also have to worry about the bandwidth cost of pushing over all the material data each frame. I think this will probably work fine for our game as it is, but I'm not positive it will work as a long-term solution.


One of the things I am not too happy about yet is the performance of our current deferred renderer implementation. It is actually slower (by about 20 frames) than our forward shading model. This is probably due to the fact that deferred rendering requires extra bandwidth to store and load the texture data, and that I am not yet using the stencil buffer to cull away pixels not being lit by a light. As it currently stands, each pixel manually loops through each light in the scene and accumulates the influence of the light into that pixel's color. So if we have 16 lights and a 1280×720 buffer, we are performing 1280×720×16 = 14,745,600 light calculations per frame. Keep in mind that this is still likely fewer than in the forward shading model, which is likely to perform the lighting equations more than once per pixel.

Another thing that could be done to reduce the number of render targets is to make heavier use of data packing. I have found out (thanks to this KillZone presentation) that it is possible to store only the X and Y components of the normal and compute Z using the formula normal.z = sqrt(1.0 - normal.x^2 - normal.y^2). It is also possible to store only the depth value at a location and recompute its world x and y (thanks to Wolfgang's blog for this insight.) If I use these methods I may be able to reduce the number of render targets to 2, which could help boost performance a bit.

So there you go, a huge post on deferred shading! My goal now is to try to improve the performance of our deferred renderer so that we actually gain an advantage from using it. Besides that, I really want to start working on some more gameplay elements. The class we are building the game for will be over in roughly a month, so we don't have too much more time to construct a game out of this. I would like to get in a couple more collision types, asynchronous loading, events, multiple scenes, bezier curves, a menu system, and a bunch more levels. I guess we'll just have to wait and see how all that turns out 🙂