Christmas for me always signifies a bunch of free time to really catch up on my learning and programming.

As an aspiring programmer, especially one with a desire to pursue games development, every Christmas since 2007 (and every Easter, and the odd random time too!) I’ve read Masters of Doom by David Kushner (Buy on Amazon UK or Buy on Amazon US), the story of the rise of Id Software, starting from the childhoods of the two Id superstars, Carmack and Romero, all the way up to the Quake games and their falling out.

It’s an absolutely awesome, inspiring read. I’ve read this book four times now and it never fails to leave me unable to sleep, and it always inspires me straight out of bed and down to the computer to get on with stuff.

I highly recommend reading the book if you’re even remotely interested in games development, or even just programming in general, as the talk of Carmack’s programming prowess in the book never fails to impress!

There are also a few other works that come to mind which have been inspiring to me:

Does anybody else know of any inspiring books or videos?

Hope everybody has a good time. Things have been a little slow at this end; I’ve mainly just been recovering from a very tough last weekend.

Stereoscopic Imaging

I went to see Avatar in “Real D 3D”, which was really good, and obviously I had to have a quick read around the topic of stereoscopic imaging, the various types of polarized glasses, and a little about how the projectors work and whatnot.

While I’m no expert, it quickly became apparent that without some decent technology, I wouldn’t be writing my own 3D demos any time soon.

What I could find out was that stereoscopy refers to methods of recording and presenting three-dimensional images to create the illusion of depth. As you’ll see, in most 3D films there appear to be two images superimposed on one another, and for the most part, that’s exactly what it is.

The images that would have been shot from the left and right cameras are superimposed into the one image, and as a result we get depth. That’s the easy bit, though; the hard bit comes when you need to separate the two images so that each eye receives the appropriate one.

The versions at the cinema use special projectors that project each image with a particular polarization, and when combined with the special polarized glasses, the light is blocked appropriately so that each eye receives only the correct image; from there on, we perceive depth much like normal.

My initial interest while watching the movie, before knowing anything about it, was to do a graphics demo of some kind, which got me reading into Nvidia’s 3D Vision technology. It sounded a lot like some old Wicked3D eyeSCREAM glasses I had some years back, whereby the game presents the left and right images in an alternating fashion (effectively halving your perceived frame rate) and the special glasses shutter the left and right eyes appropriately.

A graphics demo is out, then, but the old red/cyan glasses might be worth a laugh to write something for.
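If I do go down that route, the composite itself is simple enough. Here’s a minimal sketch of a red/cyan anaglyph pixel shader, assuming the left and right eye renders are already available as two textures (the names here are just placeholders of mine):

Texture2D    gLeftEye  : register(t0);
Texture2D    gRightEye : register(t1);
SamplerState gSampler  : register(s0);

float4 AnaglyphPS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 left  = gLeftEye.Sample(gSampler, uv).rgb;
    float3 right = gRightEye.Sample(gSampler, uv).rgb;

    // Red channel from the left eye, green and blue from the right; the
    // red/cyan glasses then filter each eye back to (roughly) its own image.
    return float4(left.r, right.g, right.b, 1.0);
}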

Indirect Lighting (Photon Mapping, Radiosity, etc)

The idea of indirect lighting has always fascinated me, and for me it’s a must-have.

I read into some tutorials and papers on Radiosity and I’m still in the process of mulling over that particular problem.

I’m not really after real-time performance, but just to be able to light my own geometry to create my own assets easily would be fantastic.

With regards to Radiosity, while there’s some great information on the actual algorithms, there’s not much on divvying up the geometry into the so-called patches.
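Once the geometry is divided into patches, though, the part the tutorials do cover well boils down to computing form factors between pairs of patches. Here’s a rough sketch of the usual point-to-point approximation, ignoring visibility (the Patch layout is just an assumption of mine):

struct Patch
{
    float3 centre;
    float3 normal;
    float  area;
};

// F(a -> b) ~= (cos(thetaA) * cos(thetaB) * areaB) / (pi * r^2)
float FormFactor(Patch a, Patch b)
{
    float3 d     = b.centre - a.centre;
    float  dist2 = dot(d, d);
    float3 dir   = d * rsqrt(dist2);

    float cosA = saturate(dot(a.normal,  dir));
    float cosB = saturate(dot(b.normal, -dir));

    return (cosA * cosB * b.area) / (3.14159265 * dist2 + 1e-4);
}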

Photon Mapping looked interesting as an alternative.

I’ve also tried to find out how the early Unreal engine (Unreal, Unreal Tournament), Quake II and Quake III managed to create their lightmaps with their editors, to almost no avail.

Hopefully by the end of Christmas, I’ll be more “enlightened”.. Wow.. That was almost as bad as BBC’s humour!

Things have been a little slow recently. I’ve been reading about other aspects of graphics programming, as well as generally being busy, and now I’ve come down with something: blocked ear, runny nose, bad headaches, dizziness and more.

My best excuse for not getting as much programming done has to be that I’ve put coffee on a bit of a hiatus for a while, just to get some of the benefits of caffeine back. Good for the health though! … 😛

I’ve still managed to get the majority of the ray tracer working in a DirectX 11 Compute Shader (running on DirectX 10 hardware using CS_4_0), although there are some small issues I can’t track down before I can release a screenshot that looks identical to what I’ve already put up, albeit at 100 times the performance 🙂

Unfortunately there are no recursive functions allowed in the Compute Shaders, which means that I’m going to need to rethink stuff like the reflection traces and whatnot.
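For my own notes, the reflection pass will probably end up flattened into a bounded loop, something roughly like the sketch below. The Ray/Hit structures and the single hard-coded sphere are purely illustrative placeholders so the loop has something to hit, not the tracer’s actual code:

struct Ray { float3 origin; float3 dir; };

struct Hit
{
    bool   valid;
    float3 position;
    float3 normal;
    float3 colour;
    float  reflectivity;
};

// Stand-in for the tracer's real nearest-intersection routine.
Hit TraceNearest(Ray ray)
{
    Hit hit = (Hit)0;

    float3 centre = float3(0.0, 0.0, 5.0);
    float  radius = 1.0;

    float3 oc   = ray.origin - centre;
    float  b    = dot(oc, ray.dir);
    float  c    = dot(oc, oc) - radius * radius;
    float  disc = b * b - c;
    if (disc < 0.0)
        return hit;

    float t = -b - sqrt(disc);
    if (t <= 0.0)
        return hit;

    hit.valid        = true;
    hit.position     = ray.origin + t * ray.dir;
    hit.normal       = normalize(hit.position - centre);
    hit.colour       = float3(0.8, 0.3, 0.2);
    hit.reflectivity = 0.5;
    return hit;
}

// Reflection handled as a bounded loop rather than recursion.
float3 Shade(Ray ray, uint maxBounces)
{
    float3 result = float3(0.0, 0.0, 0.0);
    float3 weight = float3(1.0, 1.0, 1.0);

    [loop]
    for (uint bounce = 0; bounce < maxBounces; ++bounce)
    {
        Hit hit = TraceNearest(ray);
        if (!hit.valid)
            break;

        result += weight * hit.colour;   // accumulate this bounce's shading
        weight *= hit.reflectivity;      // attenuate later bounces

        // Spawn the reflected ray and carry on instead of recursing.
        ray.origin = hit.position + hit.normal * 1e-3;
        ray.dir    = reflect(ray.dir, hit.normal);
    }
    return result;
}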

DirectX 11 (and DirectX in general) is still weird to work with, but slowly, things are coming together.

On a side note, I spent some of today looking at the Quake source and thinking about how cool a DirectX 11 port might be.

I came across my first significant bug in the DirectX 11 Compute Shaders (Using CS_4_0 for compatibility with my DirectX 10 card) although it’s my fault for using old drivers 🙂

When filling my RWStructuredBuffer with something easy to debug such as:

float4(1.1, 2.2, 3.3, 4.4);

What actually occurred, when checking on both the CPU and GPU (via pixel shader), was the following:

float4(1.1, 1.1, 1.1, 1.1);

The X (or R) component was overwriting all of the other components (Y, Z and W, aka G, B and A).
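For context, the write in question was essentially of this shape; the buffer name and thread layout below are my own reconstruction, not the exact shader:

RWStructuredBuffer<float4> gOutput : register(u0);

[numthreads(64, 1, 1)]
void FillBuffer(uint3 id : SV_DispatchThreadID)
{
    // With the 191.xx drivers, only the .x value (1.1) came back in all four
    // components when the buffer was read on the CPU or sampled via a pixel shader.
    gOutput[id.x] = float4(1.1, 2.2, 3.3, 4.4);
}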

It just required a simple driver update, from the 191.xx drivers that Windows 7’s Windows Update currently provides up to 195.62.

Problem fixed.

I’ve just started dabbling with DirectX 10 and have learnt enough to convert the basic Microsoft triangle demo over to DirectX 11, which is my primary goal given the nifty compute shaders. It’s quite a savage humbling to be taken back to only being capable of rendering a single primitive to the screen, I can tell you 🙂

I’ve decided to finally learn DirectX after all these years of using OpenGL, plus a portion of time spent using XNA, which has been a decent intermediary for giving me a bit of background on Direct3D (the change from right-handed to left-handed matrices, changing from GLSL to HLSL, etc.). My first DirectX 11 project will be biting off more than I can chew: using the compute shaders in DirectX 11 to accelerate ray tracing.
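To give a flavour of the HLSL side of that switch, here’s a minimal vertex shader of the sort the triangle demo boils down to; in GLSL I’d have written gl_Position = mvp * pos, whereas with the usual Direct3D row-vector conventions the multiplication goes the other way round (the names here are just placeholders of mine):

cbuffer PerObject : register(b0)
{
    row_major float4x4 gWorldViewProj;   // row_major to match a row-vector layout on the CPU side
};

struct VSInput  { float3 pos : POSITION;    float4 colour : COLOR0; };
struct VSOutput { float4 pos : SV_Position; float4 colour : COLOR0; };

VSOutput TriangleVS(VSInput input)
{
    VSOutput output;
    output.pos    = mul(float4(input.pos, 1.0), gWorldViewProj);   // row vector * matrix
    output.colour = input.colour;
    return output;
}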

It’ll be refreshing to be working on something graphics-programming related that is current, rather than catching up on something that happened anywhere from 5 to 20 years ago, which has been the case right up until now. At least as far as DirectX 11 goes, anyway. Ray tracing, on the other hand, is like a plethora of graphics programming techniques in that the algorithms and concepts have been around for years, but only now are they becoming a possibility in real-time rendering.

As far as the compute shaders go, they are quite a different experience and will take some getting used to. I hope to post some progress over the next week or two.