Three.js From Zero · Article s5-03

TAA & Denoising — reusing yesterday's pixels

Forward-render at 1 sample per pixel, accumulate over time. When the camera moves, reproject. When a disocclusion happens, reject. TAA is the reason modern games look clean.

1. The idea

You render frame N. The pixel at (120, 240) had some color. Next frame the camera moves slightly, and the surface that was at (120, 240) now lands at (124, 241). Motion vectors record that screen-space offset per pixel, so when shading (124, 241) in frame N+1 you subtract its motion vector, land back on (120, 240) in the old frame, and fetch the color that surface already had.

Blend 5-10% of the new render with 90-95% of the reprojected history → smooth, stable, AA'd image. That's TAA.
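That blend is an exponential moving average: each frame's contribution decays by the history factor every subsequent frame. A quick plain-JavaScript sketch (the numbers are illustrative) shows why a 0.95 history weight behaves like averaging roughly 1 / 0.05 = 20 samples:

```javascript
// blended = 0.05 * current + 0.95 * history, applied every frame.
// After n frames, frame k (0 = oldest) contributes 0.05 * 0.95^(n-1-k).
function weights(alpha, n) {
  const w = [];
  for (let k = 0; k < n; k++) w.push((1 - alpha) * Math.pow(alpha, n - 1 - k));
  return w;
}

const w = weights(0.95, 60);
const total = w.reduce((a, b) => a + b, 0);
console.log(total.toFixed(3)); // → "0.954" (= 1 - 0.95^60; approaches 1)
console.log(w[59] > w[0]);     // → true: the newest frame weighs the most
```

The tail of old frames never fully disappears, which is exactly why stale history must be clamped or rejected (sections below).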

2. What you need

  • History buffer: previous frame's final color.
  • Motion vectors: per pixel, how far did this surface move in screen space since last frame? Requires keeping the previous frame's view-projection matrix around and rendering a motion-vector pass with both matrices.
  • Jitter: sub-pixel camera offset each frame (Halton 2,3 sequence is standard) so averaged samples hit different sub-pixel positions → free anti-aliasing.
  • Neighborhood clamping: constrain the reprojected history to the min/max of current-frame neighbors → hides ghosting.
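The motion-vector math itself is just two projections and a subtraction. A minimal sketch in plain JavaScript, with a hand-rolled column-major 4×4 point transform (`mulPoint` and `motionVector` are illustrative names, not a library API):

```javascript
// Apply a column-major 4x4 matrix to [x, y, z, 1], perspective-divide,
// and keep the screen-relevant xy of the result (NDC space).
function mulPoint(m, p) {
  const [x, y, z] = p;
  const w = m[3] * x + m[7] * y + m[11] * z + m[15];
  return [
    (m[0] * x + m[4] * y + m[8]  * z + m[12]) / w,
    (m[1] * x + m[5] * y + m[9]  * z + m[13]) / w,
  ];
}

// NDC position this frame minus NDC position last frame.
function motionVector(currVP, prevVP, worldPos) {
  const c = mulPoint(currVP, worldPos);
  const p = mulPoint(prevVP, worldPos);
  return [c[0] - p[0], c[1] - p[1]]; // NDC units; halve to get UV units
}
```

In a real pass you evaluate this per pixel in a shader and write the result into an RG texture; static geometry under a static camera yields a zero vector.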

3. The core equation

// Per-pixel resolve pass, run each frame (GLSL):
vec2 prevUv = vUv - texture(motionTex, vUv).xy;
vec3 history = texture(historyTex, prevUv).rgb;

// Neighborhood bounds (3x3 around the current pixel)
vec3 cMin = vec3(1e9), cMax = vec3(-1e9);
for (int dy = -1; dy <= 1; dy++) {
  for (int dx = -1; dx <= 1; dx++) {
    vec3 n = texture(currentTex, vUv + vec2(float(dx), float(dy)) * texelSize).rgb;
    cMin = min(cMin, n);
    cMax = max(cMax, n);
  }
}
history = clamp(history, cMin, cMax);  // anti-ghost

vec3 current = texture(currentTex, vUv).rgb;
vec3 blended = mix(current, history, 0.95);  // 5% new, 95% history
gl_FragColor = vec4(blended, 1.0);           // becomes next frame's history
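The clamp line is where the anti-ghosting happens. A toy scalar version in plain JavaScript (one channel instead of vec3) makes the behavior concrete:

```javascript
// If the reprojected history falls outside the range of the current 3x3
// neighborhood, it is almost certainly stale (the surface changed or the
// reprojection missed) and gets pulled back to the nearest plausible value.
function clampToNeighborhood(history, neighbors) {
  const lo = Math.min(...neighbors);
  const hi = Math.max(...neighbors);
  return Math.min(Math.max(history, lo), hi);
}

const neighbors = [0.40, 0.42, 0.45, 0.41, 0.43, 0.44, 0.42, 0.40, 0.45];
console.log(clampToNeighborhood(0.43, neighbors)); // → 0.43 (in range, kept)
console.log(clampToNeighborhood(0.90, neighbors)); // → 0.45 (stale, clamped)
```

The cost of the clamp is some flicker under fast motion; fancier variants clamp in YCoCg space or clip toward the neighborhood mean instead.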

4. Jitter sequence (Halton 2,3)

// JavaScript side, once per frame:
function halton(i, base) {
  let f = 1, r = 0;
  while (i > 0) { f /= base; r += f * (i % base); i = Math.floor(i / base); }
  return r;
}
// ±0.5-pixel offset, converted to NDC (hence the factor of 2 / resolution).
// Use frame + 1 so the sequence never emits the degenerate sample 0.
const jx = (halton(frame + 1, 2) - 0.5) * 2 / width;
const jy = (halton(frame + 1, 3) - 0.5) * 2 / height;
camera.projectionMatrix.elements[8] += jx;  // column-major: third column, x
camera.projectionMatrix.elements[9] += jy;
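The first few values show why bases 2 and 3 work well: each new sample splits the largest remaining gap, so the accumulated sub-pixel positions fill the pixel evenly instead of clustering. A standalone check you can paste into a console (re-declaring halton so it runs on its own):

```javascript
// Radical-inverse (Halton) sequence, same as the jitter code above.
function halton(i, base) {
  let f = 1, r = 0;
  while (i > 0) { f /= base; r += f * (i % base); i = Math.floor(i / base); }
  return r;
}

console.log([1, 2, 3, 4, 5].map(i => halton(i, 2)));
// → [0.5, 0.25, 0.75, 0.125, 0.625] — progressively fills [0, 1)
console.log([1, 2, 3].map(i => halton(i, 3)));
// → ≈ [0.333, 0.667, 0.111]
```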

5. Live demo — jagged vs. TAA

Left half: raw 1-spp render. Right half: 1-spp with TAA accumulation. Rotate the camera to see how TAA stabilizes the image.

6. Disocclusion

Player walks forward. New pixels come into view that were hidden behind something last frame. No history → must use current.

Detection: compare depths. If abs(historyDepth - currentDepth) > threshold, reject the history and fall back to the current frame, or scale the history weight by a confidence value so the transition isn't a hard pop.
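A minimal sketch of that confidence-weighted blend, in plain JavaScript. The function names and the 0.01 depth threshold are illustrative choices, not a fixed recipe:

```javascript
// Compare the depth stored with the history sample against the current depth.
// A large mismatch means this surface was hidden last frame (disocclusion).
function historyConfidence(historyDepth, currentDepth, threshold = 0.01) {
  const diff = Math.abs(historyDepth - currentDepth);
  // 1 = full trust in history, 0 = freshly disoccluded, use current only.
  return Math.max(0, 1 - diff / threshold);
}

function resolve(current, history, confidence, alpha = 0.95) {
  const a = alpha * confidence; // shrink the history weight on mismatch
  return current * (1 - a) + history * a;
}

console.log(resolve(1.0, 0.0, historyConfidence(0.500, 0.500))); // mostly history
console.log(resolve(1.0, 0.0, historyConfidence(0.500, 0.700))); // → 1, history rejected
```

Disoccluded pixels therefore show one frame of raw 1-spp noise and then re-converge as history re-accumulates.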

7. Denoising (SVGF)

Spatiotemporal Variance-Guided Filter. Two parts:

  1. Temporal accumulation (TAA-like, but HDR, with variance).
  2. À-trous wavelet spatial filter: five passes with doubling tap spacing, each tap weighted by normal, depth, and luminance similarity (edge-stopping functions).

Input: 1-spp noisy path-traced frame. Output: clean. That's how real-time PT (Cyberpunk RT Overdrive) works.
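One à-trous pass in one dimension, stripped of the edge-stopping weights, shows the mechanics: the same 5-tap kernel every pass, but the tap spacing (the "holes") doubles each iteration, so five cheap passes cover a very wide footprint. This is a hedged sketch; real SVGF also multiplies each tap by normal/depth/luminance weights and is guided by the accumulated variance.

```javascript
// One 1D à-trous pass: 5-tap B-spline kernel, taps `step` pixels apart.
// Pass 0 uses step 1, pass 1 step 2, pass 2 step 4, ... ("with holes").
const KERNEL = [1 / 16, 1 / 4, 3 / 8, 1 / 4, 1 / 16];

function atrousPass(signal, step) {
  return signal.map((_, i) => {
    let sum = 0;
    for (let k = -2; k <= 2; k++) {
      const j = Math.min(Math.max(i + k * step, 0), signal.length - 1); // clamp edges
      sum += KERNEL[k + 2] * signal[j];
    }
    return sum;
  });
}

let img = [0, 0, 0, 0, 8, 0, 0, 0, 0]; // a single noisy spike
for (let p = 0; p < 5; p++) img = atrousPass(img, 1 << p);
```

The kernel sums to 1, so a flat wall passes through unchanged while the spike spreads out and shrinks, which is exactly the noise-vs-detail trade the edge-stopping weights then protect.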

8. ML denoisers

OptiX, Intel OIDN, NVIDIA DLSS's denoiser. Small convolutional nets trained on millions of noisy/clean pairs. Input: noisy color + albedo + normal. Output: clean. Quality jumps, but it costs roughly a millisecond of dedicated GPU time per frame.

9. DLSS / FSR / XeSS

Upscaling via temporal accumulation + a small neural network (DLSS) or hand-tuned shader (FSR 2). Render at 540p, upscale to 1080p with history buffer assistance. Quality close to native, half the GPU cost. Not in WebGPU yet, but the TAA half of the recipe is.
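The temporal half of these upscalers can be sketched in one dimension: render at half resolution each frame, alternate a half-pixel jitter, and scatter each low-res sample into the high-res slot it actually covered. Toy JavaScript, no neural network, no motion (all names are illustrative):

```javascript
// Frame A samples even high-res positions; frame B, jittered half a pixel,
// samples the odd ones. Two 4-sample frames reconstruct all 8 values.
function renderLowRes(groundTruth, jitter) {
  const out = [];
  for (let i = 0; i < groundTruth.length; i += 2) out.push(groundTruth[i + jitter]);
  return out;
}

function accumulate(highRes, lowRes, jitter) {
  lowRes.forEach((v, i) => { highRes[2 * i + jitter] = v; });
  return highRes;
}

const truth = [1, 2, 3, 4, 5, 6, 7, 8];
let recon = new Array(8).fill(0);
recon = accumulate(recon, renderLowRes(truth, 0), 0); // frame N
recon = accumulate(recon, renderLowRes(truth, 1), 1); // frame N+1, jittered
console.log(recon); // → [1, 2, 3, 4, 5, 6, 7, 8] — full resolution recovered
```

The hard part the real products solve is everything this toy skips: motion between frames, disocclusions, and deciding when a scattered old sample is still trustworthy.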

10. Takeaways

  • TAA = reproject last frame using motion vectors, blend with current. Free AA.
  • Jitter the camera sub-pixel → accumulated samples hit different positions within each pixel → edges smooth out.
  • Neighborhood clamp fights ghosting. Depth check fights disocclusion.
  • SVGF + AI denoisers are how path tracing becomes real time.