# Reconstructing Linear World-Space Z Values from the Depth Buffer

or, What? That ain't my Z, fool.

I've been playing around a bit more with my SSAO implementation. It's still rather naive, but I wanted to make sure that the underpinnings were correct, and the linearised z values just didn't seem right. They were 'pretty much linear', but they were not the values I was expecting, and certainly not in a range that made much sense.

At first I just went with some approaches that people had used on the internet, but none of them seemed right, or I wasn't happy with the proofs. So I went back to first principles, wrote some debugging code in my shader so I could inspect the range and mapping of the calculated values, and grabbed a notepad (and Excel for some number crunching). It seems the differences I found were idiosyncrasies of using GL rather than DX: GL goes from a right-handed coordinate system pre-projection to a left-handed coordinate system post-projection, and this involves a number of strategically placed minus signs.

Some of the code snippets I'd found on the internet were either missing the minuses, or just weren't expecting the change of coordinate system in the first place (anything in DX is left->left).
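To make those strategically placed minus signs concrete, here's a quick Python sketch (mine, not from the post) of the z row of the classic GL perspective matrix, as built by glFrustum/gluPerspective. The key GL-ism is that clip-space w is the *negated* view-space z, which is what flips the handedness:

```python
import math

n, f = 1.0, 100.0           # arbitrary near/far planes for illustration
A = -(f + n) / (f - n)      # projection matrix element m[2][2]
B = -2.0 * f * n / (f - n)  # projection matrix element m[2][3]

def gl_ndc_z(z_view):
    """NDC z for a view-space z (negative in front of a GL camera)."""
    z_clip = A * z_view + B
    w_clip = -z_view        # the GL minus sign; in DX this is +z_view
    return z_clip / w_clip

# The near plane maps to -1 and the far plane to +1 (before the
# glDepthRange scale/bias squashes NDC z into the 0..1 buffer range).
assert math.isclose(gl_ndc_z(-n), -1.0)
assert math.isclose(gl_ndc_z(-f), 1.0)
```

Drop those two minus signs (as a DX-derived snippet effectively does) and the recovered values come out mirrored or shifted, which matches the 'pretty much linear but wrong' symptom above.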

So, pictures: the left-hand side of the image shows the actual z coordinates written to a texture, and the right-hand side shows the linear z values reconstructed from the z buffer (GL_DEPTH_ATTACHMENT). The image is of a piece of terrain; I've set the near and far planes so both ends are visibly clipped. The debugging code overlays the reconstructed z values on the actual render, colouring bands around certain values (pink at 0, orange at 1, then other coloured bands at 0.125, 0.25, 0.5, 0.75, and 0.875). And here is the GLSL code - d is just a value sampled straight from the depth buffer (the real depth buffer, obtained by binding GL_DEPTH_ATTACHMENT):

```glsl
float LineariseDepth(float d)
{
    float f = g_fFarClip;
    float n = g_fNearClip;
    float A = -(f + n) / (f - n);
    float B = -2.0 * f * n / (f - n);

    // Scale/Bias 0..1 -> -1..1
    d = d * 2.0 - 1.0;

    // Linearise - value will now be in the range (n..f)
    d = B / (A + d);

    return d;
}
```
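As a sanity check, here's the same maths round-tripped in Python (my own sketch, not part of the shader): forward-project a view-space z the way GL would, squash it into the 0..1 depth-buffer range, then run it back through a port of LineariseDepth. The recovered value should be the positive view-space distance, in (n..f):

```python
n, f = 0.5, 500.0           # stand-ins for g_fNearClip / g_fFarClip
A = -(f + n) / (f - n)
B = -2.0 * f * n / (f - n)

def depth_buffer_value(z_view):
    """What ends up in the depth buffer for a view-space z (z_view < 0)."""
    ndc = (A * z_view + B) / (-z_view)  # GL projection: w_clip = -z_view
    return ndc * 0.5 + 0.5              # default glDepthRange(0, 1)

def linearise_depth(d):
    """Python port of the GLSL LineariseDepth."""
    d = d * 2.0 - 1.0   # scale/bias 0..1 -> -1..1
    return B / (A + d)  # positive view-space distance, in (n..f)

# Round trip: recover the (positive) distance for a spread of depths.
for z in (-n, -1.0, -250.0, -f):
    d = depth_buffer_value(z)
    assert abs(linearise_depth(d) - (-z)) < 1e-6 * (-z)
```

Note that the recovered value is -z_view, i.e. a positive distance; view-space z itself is negative in front of a GL camera, so negate it again if you need the actual coordinate.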

Oh, and as an extra, here's the DebugBoundaries function I used for debugging. It's not pretty, but it's certainly useful. I also often use `vec4(mod(fVal, 100)/100)`, which renders a value as repeating gradient bands of a particular width - it helps work out what scale of numbers you're working with when you get values you can't fathom.

```glsl
vec4 DebugBoundaries(float fVal)
{
    // Bands around 0 (pink shades)
    if (abs(fVal) < 0.01)
    {
        return vec4(1, 0, 1, 1);
    }
    else if (abs(fVal) < 0.02)
    {
        return vec4(1, 0.5, 1, 1);
    }
    // Bands at the intermediate values
    else if (abs(fVal - 0.125) < 0.005)
    {
        return vec4(0.1, 0.5, 0.0, 1);
    }
    else if (abs(fVal - 0.25) < 0.005)
    {
        return vec4(0.3, 1, 0.0, 1);
    }
    else if (abs(fVal - 0.5) < 0.005)
    {
        return vec4(0.5, 0.2, 0.0, 1);
    }
    else if (abs(fVal - 0.75) < 0.005)
    {
        return vec4(0.0, 1, 0.4, 1);
    }
    else if (abs(fVal - 0.875) < 0.005)
    {
        return vec4(0.0, 0.5, 0.2, 1);
    }
    else if (abs(fVal + 0.5) < 0.01)
    {
        return vec4(0, 0.2, 0.5, 1);
    }
    // Bands around 1 (orange shades)
    else if (abs(fVal - 1.0) < 0.01)
    {
        return vec4(1, 0.4, 0.0, 1);
    }
    else if (abs(fVal - 1.0) < 0.02)
    {
        return vec4(1, 0.6, 0.0, 1);
    }
    // Band around -1
    else if (abs(fVal + 1.0) < 0.01)
    {
        return vec4(0, 0.4, 1, 1);
    }

    // No band hit - just output the value as a greyscale colour
    return vec4(fVal);
}
```

Update:

Of course, the point of all this is to recreate the view-space position from the depth buffer. To test that the position is correct, I coloured each fragment based upon its distance in view space from a fixed point relative to the camera - ideally this should colour any pixels within a set sphere of influence. And it seems my maths was spot on :D Next job: improve SSAO by using an oriented hemispherical kernel instead of a non-rotated sphere, to get nicer results with a smaller number of samples.
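For completeness, here's one way the full view-space position can be recreated from a depth sample, sketched in Python under assumptions of my own (a symmetric frustum with a made-up field of view and aspect ratio - none of these names come from the original shader). The idea is to linearise z as above, then scale the NDC x/y by the frustum's half-extents at that depth:

```python
import math

n, f = 0.5, 500.0            # assumed near/far planes
fov_y = math.radians(60.0)   # assumed vertical field of view
aspect = 16.0 / 9.0          # assumed aspect ratio
A = -(f + n) / (f - n)
B = -2.0 * f * n / (f - n)
tan_half = math.tan(fov_y * 0.5)

def view_pos_from_depth(u, v, d):
    """(u, v) are texture coords in 0..1; d is the raw depth sample."""
    z_dist = B / (A + (d * 2.0 - 1.0))  # positive distance, as before
    ndc_x = u * 2.0 - 1.0
    ndc_y = v * 2.0 - 1.0
    # At distance z_dist the frustum is z_dist * tan_half high (half-extent),
    # and aspect times that wide, so NDC x/y scale straight back up.
    x = ndc_x * z_dist * tan_half * aspect
    y = ndc_y * z_dist * tan_half
    return (x, y, -z_dist)              # GL view space looks down -z
```

With the position back in view space, the distance-from-a-fixed-point test from the update is just a length() of the difference, and the same position feeds straight into the SSAO kernel.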

Posted By 1 at 15:22:49 on 2013-06-18.