# Quantitative Analysis of Z-Buffer Precision

High Z-buffer precision is something that we take for granted these days. Since the introduction of reversed floating-point Z-buffering (and assuming you use the standard tricks, such as camera-relative transforms), most depth precision issues are a thing of the past. You ask the rasterizer to throw triangles at the screen and, almost magically, they appear in the right place and in the right order.

There are many existing articles concerning Z-buffer precision (1, 2, 3, 4, 5, 6, 7, the last one being my favorite), as well as sections in the Real-Time Rendering and the Foundations of Game Engine Development books. So, why bother writing another one? While there is nothing wrong with the intuition and the results presented there, I find the numerical analysis part (specifically, its presentation) somewhat lacking. It is clear that "the quasi-logarithmic distribution of floating-point values somewhat cancels the $$1/z$$ nonlinearity", but what does that mean exactly? What is happening at the binary level? Is the resulting distribution of depth values linear? Logarithmic? Or is it something else entirely? Even after reading all these articles, I still had nagging questions that made me feel that I had failed to achieve what Jim Blinn would call the ultimate understanding of the concept.

In short, we know that reversed Z-buffering is good, and we know why it is good; what we would like to know is what good really means. This blog post is not meant to replace the existing articles but, rather, to complement them.
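
As a first taste of the binary-level question, here is a minimal sketch (my own illustration, assuming a reversed-Z mapping $$d = n/z$$ with an infinite far plane) that counts how many distinct float32 depth values land in each decade of view-space depth:

```python
import struct

def f32_bits(x):
    # Bit pattern of x rounded to float32; monotonically increasing
    # for positive finite floats, so differences count representable values.
    return struct.unpack('<I', struct.pack('<f', x))[0]

def representable_depths(d_lo, d_hi):
    # Number of distinct float32 values in [d_lo, d_hi), with 0 < d_lo <= d_hi.
    return f32_bits(d_hi) - f32_bits(d_lo)

near = 0.1
# Reversed-Z with an infinite far plane maps view-space depth z to
# d = near / z, so z = near gives d = 1 and z -> inf gives d -> 0.
for z0, z1 in [(0.1, 1.0), (1.0, 10.0), (10.0, 100.0), (100.0, 1000.0)]:
    count = representable_depths(near / z1, near / z0)
    print(f"z in [{z0}, {z1}): {count} distinct depth values")
```

Each decade of depth receives a nearly identical number of distinct values, which is precisely the quasi-logarithmic behavior in question.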

# Sampling Analytic Participating Media

Rendering of participating media is an important aspect of every modern renderer. When I say participating media, I am not just talking about fog, fire, and smoke. All matter is composed of atoms, which can be sparsely (e.g. in a gas) or densely (e.g. in a solid) distributed in space. Whether we consider the particle or the wave nature of light, it penetrates all matter (even metals) to a certain degree and interacts with its atoms along the way. The nature and the degree of "participation" depend on the material in question.
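
To make "participation" concrete, the simplest sampling problem in this space is choosing the distance a photon travels before interacting with the medium. A minimal sketch of the homogeneous case only (helper name is mine; analytic, spatially varying media require more work):

```python
import math
import random

def sample_free_path(sigma_t, u):
    # Invert the free-flight CDF in a homogeneous medium with extinction
    # coefficient sigma_t: P(t) = 1 - exp(-sigma_t * t), hence
    # t = -ln(1 - u) / sigma_t for a uniform u in [0, 1).
    return -math.log(1.0 - u) / sigma_t

# Monte Carlo sanity check: the mean free path should equal 1 / sigma_t.
rng = random.Random(42)
sigma_t = 2.0
n = 100_000
mean = sum(sample_free_path(sigma_t, rng.random()) for _ in range(n)) / n
```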

# Sampling Burley's Normalized Diffusion Profiles

A couple of years ago, I worked on an implementation of Burley's Normalized Diffusion (a.k.a. Disney SSS). The original paper claims that the CDF is not analytically invertible. I have great respect for both authors, Brent Burley and Per Christensen, so I didn't question their claim for a second. It turns out that "Question Everything" is probably a better mindset.

I was recently alerted by @stirners_ghost on Twitter (thank you!) that the CDF is, in fact, analytically invertible. Moreover, the inversion process is almost trivial, as I will demonstrate below.
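
In brief, one route goes as follows (a sketch assuming the CDF from the original paper, $$\mathit{cdf}(r) = 1 - e^{-sr}/4 - 3e^{-sr/3}/4$$): the substitution $$x = e^{-sr/3}$$ turns $$1 - \mathit{cdf}(r) = u$$ into the depressed cubic $$x^3 + 3x - 4u = 0$$, which has exactly one real root (the discriminant term $$4u^2 + 1$$ is always positive), and Cardano's formula gives that root in closed form:

```python
import math

def _cbrt(v):
    # Real cube root for either sign (math.cbrt requires Python 3.11+).
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def burley_inverse_cdf(u, s):
    # Returns the radius r with 1 - cdf(r) = u, where
    #   cdf(r) = 1 - exp(-s*r)/4 - 3*exp(-s*r/3)/4.
    # Substituting x = exp(-s*r/3) gives x^3 + 3*x - 4*u = 0, whose single
    # real root Cardano's formula provides directly.
    # Assumes u in (0, 1]: u = 1 maps to r = 0, and u -> 0 to r -> inf.
    d = math.sqrt(4.0 * u * u + 1.0)
    x = _cbrt(2.0 * u + d) + _cbrt(2.0 * u - d)
    return -3.0 * math.log(x) / s
```

Since $$1 - \mathit{cdf}(r)$$ is uniformly distributed whenever $$\mathit{cdf}(r)$$ is, a plain uniform random number can be used for $$u$$ directly.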

# Normal Mapping Using the Surface Gradient

Realistic rendering at high frame rates remains at the core of real-time computer graphics. High performance and high fidelity are often at odds, requiring clever tricks and approximations to reach the desired quality bar.

One of the oldest tricks in the book is bump mapping. Introduced in 1978 by Jim Blinn, it is a simple way to add mesoscopic detail without increasing the geometric complexity of the scene. Most modern real-time renderers support a variation of this technique called normal mapping. While it's fast and easy to use, certain operations, such as blending, are not as simple as they seem. This is where the so-called "surface gradient framework" comes into play.
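
As a teaser for the framework's central idea: convert each perturbed normal into a surface gradient, blend the gradients, and resolve the sum back into a normal. A minimal sketch in plain Python (helper names are mine); the surface gradient g of a perturbed unit normal m relative to the base normal n satisfies dot(n, g) = 0 and normalize(n - g) = m:

```python
import math

def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def add(a, b):   return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def normalize(a):
    l = math.sqrt(dot(a, a))
    return tuple(x / l for x in a)

def surface_gradient(n, m):
    # Surface gradient of the perturbed unit normal m relative to the base
    # normal n: the result lies in the tangent plane (dot with n is zero),
    # and resolve() below reproduces m from it. Assumes dot(n, m) > 0.
    return sub(n, scale(m, 1.0 / dot(n, m)))

def resolve(n, g):
    # Turn a (possibly blended) surface gradient back into a unit normal.
    return normalize(sub(n, g))

# Blending two normal maps: sum their surface gradients, resolve once.
# (Averaging the normals directly would not compose the slopes correctly.)
n  = (0.0, 0.0, 1.0)
m1 = normalize((0.3, 0.0, 1.0))
m2 = normalize((0.0, 0.2, 1.0))
blended = resolve(n, add(surface_gradient(n, m1), surface_gradient(n, m2)))
```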

# Alternative Take on the Split Sum Approximation for Cubemap Pre-filtering

Pre-filtered cubemaps remain an important source of indirect illumination for those of us who still haven't purchased a Turing graphics card.

To my knowledge, most implementations use the split sum approximation originally introduced by Brian Karis. It is a simple technique that generally works well, given its inherent view-independence limitation.

But why does it work so well, and is there a better way to pre-filter? Let's find out.
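
As a warm-up, the core assumption can be shown in one dimension: the split sum replaces an estimate of E[L · W] (lighting times BRDF weight) with the product of separate estimates E[L] · E[W], which is exact for constant lighting and a good approximation for smooth lighting. A toy sketch, using uniform sampling and a hypothetical Gaussian lobe standing in for GGX (the real technique importance-samples the NDF over a cubemap):

```python
import math
import random

rng = random.Random(1)

def lobe(x):
    # Hypothetical smooth stand-in for the BRDF * cos term (not real GGX),
    # centered on the reflection direction x = 0.
    return math.exp(-8.0 * x * x)

def split_sum(L, n=10_000):
    # Uniformly sample directions x in [-1, 1] (pdf = 1/2, hence the * 2.0).
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    full     = sum(L(x) * lobe(x) * 2.0 for x in xs) / n  # estimates E[L * W]
    lighting = sum(L(x) for x in xs) / n                  # lighting-only term
    brdf     = sum(lobe(x) * 2.0 for x in xs) / n         # BRDF-only term
    return full, lighting * brdf

# Exact for constant lighting; close for smooth lighting.
full_c, split_c = split_sum(lambda x: 0.7)
full_s, split_s = split_sum(lambda x: 1.0 + 0.5 * x)
```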

# Deep Compositing and Reprojection

Most graphics programmers are familiar with the concept of alpha. It has two interpretations: geometric and optical. The former corresponds to coverage, while the latter refers to opacity.

Regular compositing assumes non-overlapping objects. Typically, the over operator is used: it assumes that one object is in front of the other, and attenuates the contribution of the object behind it.
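
To ground the discussion, here is a minimal sketch of the over operator for premultiplied-alpha pixels (plain Python, helper name is mine):

```python
def over(fg, bg):
    # Porter-Duff "over" for premultiplied-alpha pixels (r, g, b, a):
    # the foreground is added in full, while the background is attenuated
    # by the foreground's transparency (1 - a_fg). The same formula covers
    # the alpha channel itself.
    return tuple(f + (1.0 - fg[3]) * b for f, b in zip(fg, bg))

# A half-opaque red over an opaque green background:
result = over((0.5, 0.0, 0.0, 0.5), (0.0, 0.4, 0.0, 1.0))
```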

This blog post will cover deep compositing.