Learn Creative Coding (#43) - Post-Processing Effects

We've been rendering scenes directly to the screen for the entire shader arc. Raymarched 3D worlds with Phong lighting and soft shadows (episodes 38-40), fractals with smooth coloring and orbit traps (episodes 41-42). The output goes straight from the fragment shader to the display. What you compute is what you see.
But every film, every game, every polished visual piece does something after the initial render. The raw 3D render or the raw fractal output is just the starting point. Then you add bloom to make bright areas glow. Chromatic aberration to simulate lens imperfections. Vignette to darken the edges. Film grain for texture. Color grading to set the mood. These are post-processing effects -- they don't change what's in the scene, they change how the scene looks on screen.
The concept is straightforward. Instead of rendering your scene directly to the screen, you render it to an off-screen buffer (a texture). Then you draw a full-screen quad with a second shader that reads that texture and applies effects to it. The second shader sees the entire rendered image as input and can manipulate every pixel based on the pixels around it. That's the post-processing pipeline.
In a full WebGL/OpenGL setup you'd use framebuffer objects (FBOs) to do this properly. But for learning the effects themselves, we can simulate the pipeline in a single fragment shader. We compute the "scene" color at each pixel, then immediately apply post-processing to it before outputting. It's not a true multi-pass pipeline, but it teaches the same math. And some effects -- vignette, color grading, film grain -- don't need neighboring pixel data at all, so they work perfectly in a single pass.
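That single-pass pattern -- compute the scene color, then reshape it -- is the skeleton every example in this episode follows. As a minimal sketch (the scene and post functions here are just placeholders):

```glsl
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

// the "render pass": any function that maps a UV to a color
vec3 scene(vec2 uv) {
  return vec3(uv, 0.5 + 0.5 * sin(u_time));
}

// the "post pass": takes the rendered color and reshapes it
vec3 post(vec3 color, vec2 uv) {
  // effects go here (vignette, grain, grading, ...)
  return color;
}

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  gl_FragColor = vec4(post(scene(uv), uv), 1.0);
}
```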
The simplest effect: vignette
Start with something you can add to any shader in three lines. A vignette darkens the edges of the image, drawing the viewer's eye toward the center. It mimics the natural light falloff in camera lenses.
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  vec2 centered = uv - 0.5;

  // some base scene color (gradient for demo)
  vec3 color = vec3(0.4 + 0.3 * sin(uv.x * 6.28 + u_time),
                    0.3 + 0.2 * cos(uv.y * 4.0 - u_time * 0.5),
                    0.5);

  // vignette
  float dist = length(centered);
  float vignette = smoothstep(0.7, 0.3, dist);
  color *= vignette;

  gl_FragColor = vec4(color, 1.0);
}
The length(centered) gives the distance from the pixel to the center of the screen. smoothstep(0.7, 0.3, dist) creates a smooth falloff: pixels near the center (dist close to 0) get a multiplier near 1.0. Pixels near the edges (dist approaching 0.7) get a multiplier near 0.0. The transition between them is the smooth S-curve that smoothstep provides.
The two parameters to smoothstep control the vignette shape. The first value (0.7) is the distance at which the image goes fully dark. The second value (0.3) is the radius inside which the image stays fully bright. The darkening ramps smoothly between them. (Strictly speaking, the GLSL spec leaves smoothstep undefined when the first edge is larger than the second; it works on real implementations, but 1.0 - smoothstep(0.3, 0.7, dist) is the portable equivalent.) Swap the values if you want to invert it. Make them closer together for a sharper edge. Further apart for a gradual fade.
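A few variations on the same line, as drop-in replacements for the vignette computation (the parameter values are just starting points, not canonical settings):

```glsl
// standard: bright center, dark edges
float vig = smoothstep(0.7, 0.3, dist);

// inverted: dark center, bright edges
float inverted = smoothstep(0.3, 0.7, dist);

// harder edge: narrower transition band
float hard = smoothstep(0.55, 0.45, dist);

// gentler: almost the whole screen sits inside the falloff
float soft = smoothstep(0.9, 0.1, dist);
```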
You can also change the vignette's shape relative to the display by scaling the centered UV before computing the distance:
vec2 centered = (uv - 0.5) * vec2(u_resolution.x / u_resolution.y, 1.0);
Multiplying the x component by the aspect ratio makes the distance correspond to physical screen distance, so the falloff becomes a true circle on screen. Without the correction, the falloff is circular in UV space, which stretches into an ellipse that follows the screen shape on a widescreen monitor. Try both: the screen-space circle darkens the left and right edges aggressively (you may need to widen the smoothstep range to compensate), while the stretched version often looks more natural on wide displays.
Film grain: instant texture
Film grain adds random noise to the image, simulating the physical grain structure of photographic film. A little grain makes digital renders feel organic and cinematic. Too much grain makes them look like found footage horror :-) Either way it's useful.
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

// simple hash for noise
float hash(vec2 p) {
  return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453123);
}

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;

  // base scene
  vec3 color = vec3(0.35, 0.25, 0.4);

  // film grain
  float grain = hash(gl_FragCoord.xy + fract(u_time) * 100.0);
  grain = (grain - 0.5) * 0.15; // center around 0, control intensity
  color += grain;

  // vignette on top
  vec2 centered = uv - 0.5;
  float vig = smoothstep(0.7, 0.3, length(centered));
  color *= vig;

  gl_FragColor = vec4(color, 1.0);
}
The hash function takes a 2D position and returns a pseudo-random number between 0 and 1. By adding fract(u_time) * 100.0 to the position, the noise pattern changes every frame -- the grain is animated, just like real film grain. Without the time component you'd get a static noise pattern, which looks wrong.
The (grain - 0.5) * 0.15 centers the noise around zero (so it brightens and darkens equally) and scales the intensity. At 0.15, the grain is subtle -- barely noticeable unless you're looking for it. At 0.3 it's obvious. At 0.5 it's aggressive. For most creative work, 0.08 to 0.15 is the sweet spot.
Notice I stacked the vignette AFTER the grain. Order matters in post-processing. If you add grain after the vignette, the edges get grainy at the same intensity as the center, which looks weird because the dark edges shouldn't have visible grain. By graining first and then darkening, the grain gets suppressed in the dark areas naturally.
Chromatic aberration: splitting the spectrum
Chromatic aberration happens in real lenses when different wavelengths of light refract at slightly different angles. The red, green, and blue channels don't perfectly overlap at the edges of the image, creating colorful fringing. In photography it's usually considered a defect. In creative coding it's an aesthetic choice.
The implementation: sample each color channel at a slightly different UV position.
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  vec2 centered = uv - 0.5;

  // chromatic aberration: offset per channel
  // (the epsilon keeps normalize from producing NaN at the exact center)
  float aberration = 0.005;
  vec2 dir = normalize(centered + 0.0001) * aberration * length(centered);

  // sample each channel at offset position
  // (using a simple gradient as the "scene texture")
  vec2 uvR = uv + dir;
  vec2 uvG = uv;
  vec2 uvB = uv - dir;

  // scene color function (imagine this reads from a texture)
  float r = 0.4 + 0.4 * sin(uvR.x * 12.0 + u_time);
  float g = 0.3 + 0.4 * sin(uvG.x * 12.0 + u_time + 2.0);
  float b = 0.5 + 0.4 * sin(uvB.x * 12.0 + u_time + 4.0);
  vec3 color = vec3(r, g, b);

  // vignette
  float vig = smoothstep(0.7, 0.3, length(centered));
  color *= vig;

  gl_FragColor = vec4(color, 1.0);
}
The dir vector points from the center of the screen outward. Red gets pushed outward, blue gets pulled inward, green stays put. The length(centered) multiplier means the aberration is strongest at the edges (where it would be strongest in a real lens) and zero at the center. The aberration constant controls the magnitude -- 0.005 is subtle, 0.02 is dramatic.
In a real multi-pass setup, you'd read from the scene texture at three different UV positions to get the three channel values. In our single-pass simulation, we evaluate the "scene function" three times with different UVs. Same math, just without the texture indirection.
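For reference, here's roughly what the real multi-pass version of this effect looks like: a post shader that reads from the scene texture three times. This is a sketch, not part of the episode's runnable setup -- the u_scene sampler name is an assumption, standing in for whatever texture your framework binds the first pass's output to:

```glsl
precision mediump float;
uniform vec2 u_resolution;
uniform sampler2D u_scene; // assumed: the off-screen render from the first pass

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  vec2 centered = uv - 0.5;
  vec2 dir = normalize(centered + 0.0001) * 0.005 * length(centered);

  // one texture read per channel, each at a slightly different UV
  float r = texture2D(u_scene, uv + dir).r;
  float g = texture2D(u_scene, uv).g;
  float b = texture2D(u_scene, uv - dir).b;
  gl_FragColor = vec4(r, g, b, 1.0);
}
```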
The effect is most visible on high-contrast edges. A bright object on a dark background gets a red fringe on one side and a blue fringe on the other. On low-contrast areas you barely see it. That's exactly how real chromatic aberration works -- it's an edge phenomenon.
Gaussian blur: the foundation of bloom
Before we can do bloom, we need blur. Gaussian blur averages each pixel with its neighbors, weighted by a bell curve. Pixels close to the center get high weight, pixels far away get low weight. The result is a smooth, natural-looking blur.
A proper Gaussian blur samples a 2D kernel -- for a radius of, say, 5 pixels, that's an 11x11 grid of samples (121 texture reads per pixel). That's expensive. The trick is separable blur: do two 1D passes instead. First blur horizontally (11 samples), then blur vertically (11 samples). 22 samples instead of 121, same result. This requires two render passes though -- horizontal blur to a temp buffer, then vertical blur from the temp buffer to the output.
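In that two-pass setup, both passes can share a single shader, parameterized by a direction uniform: the host code binds the scene texture, draws with u_direction set to (1, 0), then binds the result and draws again with (0, 1). A sketch, again assuming a u_scene sampler and u_direction uniform supplied by the host code:

```glsl
precision mediump float;
uniform vec2 u_resolution;
uniform sampler2D u_scene;  // assumed: output of the previous pass
uniform vec2 u_direction;   // (1,0) = horizontal pass, (0,1) = vertical pass

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  vec2 texel = u_direction / u_resolution;

  // 9-tap 1D Gaussian (weights sum to ~1.0 counting both sides)
  float w[5];
  w[0] = 0.227; w[1] = 0.194; w[2] = 0.122; w[3] = 0.054; w[4] = 0.016;

  vec3 sum = texture2D(u_scene, uv).rgb * w[0];
  for (int i = 1; i < 5; i++) {
    sum += texture2D(u_scene, uv + texel * float(i)).rgb * w[i];
    sum += texture2D(u_scene, uv - texel * float(i)).rgb * w[i];
  }
  gl_FragColor = vec4(sum, 1.0);
}
```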
In a single fragment shader we can't do real multi-pass. But we CAN compute the blur inline by evaluating our scene function at multiple offsets. It's expensive (evaluating the scene N times per pixel) but it works for learning:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

vec3 scene(vec2 uv) {
  // animated pattern as our "scene"
  float d = length(uv - 0.5);
  float ring = smoothstep(0.2, 0.19, abs(d - 0.3));
  float ring2 = smoothstep(0.15, 0.14, abs(d - 0.15));
  vec3 col = vec3(ring * 0.9, ring * 0.4 + ring2 * 0.5, ring2 * 0.8);
  col += vec3(0.05);
  return col;
}

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  vec2 pixel = 1.0 / u_resolution;

  // 1D horizontal blur (9 taps, Gaussian-ish weights)
  float weights[5];
  weights[0] = 0.227;
  weights[1] = 0.194;
  weights[2] = 0.122;
  weights[3] = 0.054;
  weights[4] = 0.016;

  vec3 blurred = scene(uv) * weights[0];
  for (int i = 1; i < 5; i++) {
    float offset = float(i) * pixel.x * 2.0;
    blurred += scene(uv + vec2(offset, 0.0)) * weights[i];
    blurred += scene(uv - vec2(offset, 0.0)) * weights[i];
  }

  // a second, vertical pass would now blur THIS result -- but that
  // needs the horizontally-blurred image as input, which is why
  // real separable blur takes two render passes
  gl_FragColor = vec4(blurred, 1.0);
}
The weights array holds the Gaussian distribution coefficients. They sum to approximately 1.0 counting both sides of the symmetric kernel: 0.227 + 2 * (0.194 + 0.122 + 0.054 + 0.016) ≈ 0.999. The center tap gets the highest weight (0.227), and it falls off toward the edges. The * 2.0 on the offset spreads the samples further apart, effectively doubling the blur radius without adding more taps -- at the cost of skipping every other pixel, which can shimmer on very fine detail.
This is only a horizontal blur. For a true 2D Gaussian blur you need to also blur vertically, which means you need the horizontally-blurred result as input. That's the two-pass approach. In a single shader you can approximate it by doing both directions simultaneously, but it's not mathematically identical to the separable approach. For creative purposes, it looks close enough.
Bloom: making bright things glow
Bloom is the signature post-processing effect. Bright areas of the image bleed light into surrounding pixels, creating a soft glow. Think of looking at a streetlight at night -- the light doesn't just stop at the edge of the bulb, it halos outward. That's bloom.
The algorithm:
- Extract bright pixels (anything above a luminance threshold)
- Blur the bright pixels (Gaussian blur)
- Add the blurred result back to the original image
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

vec3 scene(vec2 uv) {
  vec2 centered = (uv - 0.5) * vec2(u_resolution.x / u_resolution.y, 1.0);
  // some bright spots on a dark background
  float d1 = length(centered - vec2(0.2 * sin(u_time), 0.1 * cos(u_time * 0.7)));
  float d2 = length(centered - vec2(-0.15, -0.1 + 0.05 * sin(u_time * 1.3)));
  float d3 = length(centered + vec2(0.2, -0.05));
  vec3 col = vec3(0.02, 0.02, 0.04);
  col += vec3(1.0, 0.6, 0.2) * (0.01 / (d1 * d1 + 0.001));
  col += vec3(0.3, 0.7, 1.0) * (0.008 / (d2 * d2 + 0.001));
  col += vec3(0.8, 0.2, 0.9) * (0.006 / (d3 * d3 + 0.001));
  return col;
}

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  vec2 pixel = 1.0 / u_resolution;

  // original scene
  vec3 original = scene(uv);

  // extract bright parts and blur them
  vec3 bloom = vec3(0.0);
  float bloomRadius = 4.0;
  for (int x = -4; x <= 4; x++) {
    for (int y = -4; y <= 4; y++) {
      vec2 offset = vec2(float(x), float(y)) * pixel * bloomRadius;
      vec3 sampled = scene(uv + offset);
      // threshold: only keep bright parts
      vec3 bright = max(sampled - vec3(1.0), vec3(0.0));
      // gaussian-ish weight based on distance
      float w = exp(-0.5 * float(x * x + y * y) / 4.0);
      bloom += bright * w;
    }
  }
  bloom /= 16.0; // scale (the weights sum to ~24; a smaller divisor boosts the glow)

  // add bloom to original
  vec3 color = original + bloom;

  // tone mapping (compress the bright values back to displayable range)
  color = color / (color + vec3(1.0));

  // vignette
  vec2 centered = uv - 0.5;
  color *= smoothstep(0.75, 0.3, length(centered));

  gl_FragColor = vec4(color, 1.0);
}
OK so this is doing something wild -- 81 scene evaluations per pixel (9x9 grid). That's really expensive. In a real engine you'd blur a downscaled texture, which is way cheaper. But for understanding what bloom IS, this works. And the GPU handles it fine for simple scenes.
The bright pixel extraction is max(sampled - vec3(1.0), vec3(0.0)). Anything below 1.0 becomes zero (below threshold, no bloom contribution). Anything above 1.0 keeps the excess as bloom intensity. A value of 2.0 contributes 1.0 to the bloom. This means only genuinely bright pixels produce glow.
The 1.0 / (d * d + 0.001) in the scene function creates inverse-square-falloff point lights -- they're physically bright (values well above 1.0 near the center) which makes the bloom threshold work naturally. If your scene only has values between 0 and 1, you'd need to lower the threshold.
The tone mapping line color / (color + 1.0) is Reinhard tone mapping. It compresses HDR values (stuff above 1.0) back into the displayable 0-1 range without clipping. Bright values get compressed more than dim values, so you get a natural-looking result where blown-out highlights are soft white rather than hard-clipped. Without tone mapping, the bloom would push values past 1.0 and they'd clip to white unnaturally.
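If plain Reinhard muddies your brightest highlights too much, a common variation (sometimes called extended Reinhard) adds a white point: the input luminance that should map exactly to 1.0. A sketch -- the whitePoint value is a tuning knob, not a standard constant:

```glsl
// plain Reinhard: an HDR value of 1.0 maps to 0.5 on screen
vec3 reinhard(vec3 c) {
  return c / (c + vec3(1.0));
}

// extended Reinhard: inputs at whitePoint map exactly to 1.0,
// so a known "brightest value" in your scene reaches full white
vec3 reinhardExtended(vec3 c, float whitePoint) {
  return c * (1.0 + c / (whitePoint * whitePoint)) / (1.0 + c);
}
```

With whitePoint = 4.0, for example, a pixel of intensity 4.0 lands exactly on white instead of being compressed to 0.8 as plain Reinhard would.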
CRT scanlines: retro display simulation
Scanline effects mimic old CRT monitors and TVs. Every other horizontal line is slightly darker, simulating the visible raster lines on a cathode ray tube. Add some RGB subpixel offsetting and barrel distortion and you've got a convincing retro display.
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;

  // barrel distortion (curved screen)
  vec2 centered = uv * 2.0 - 1.0;
  float barrel = dot(centered, centered) * 0.15;
  vec2 distorted = uv + centered * barrel;

  // check if distorted UV is out of bounds (black border)
  if (distorted.x < 0.0 || distorted.x > 1.0 ||
      distorted.y < 0.0 || distorted.y > 1.0) {
    gl_FragColor = vec4(vec3(0.0), 1.0);
    return;
  }

  // base scene (retro-style gradient)
  vec3 color = vec3(
    0.3 + 0.3 * sin(distorted.x * 10.0 + u_time),
    0.4 + 0.2 * cos(distorted.y * 8.0 - u_time * 0.7),
    0.5 + 0.3 * sin(distorted.x * 6.0 + distorted.y * 6.0 + u_time * 0.3)
  );

  // scanlines
  float scanline = sin(gl_FragCoord.y * 3.14159) * 0.5 + 0.5;
  scanline = pow(scanline, 0.8);
  color *= 0.7 + 0.3 * scanline;

  // RGB subpixel simulation
  float subpixel = mod(gl_FragCoord.x, 3.0);
  if (subpixel < 1.0) {
    color.gb *= 0.85;
  } else if (subpixel < 2.0) {
    color.rb *= 0.85;
  } else {
    color.rg *= 0.85;
  }

  // brightness boost to compensate for darkening
  color *= 1.3;

  // slight green tint (old phosphor monitors)
  color *= vec3(0.95, 1.05, 0.95);

  // vignette (CRT screens are dimmer at edges)
  vec2 vig = uv * 2.0 - 1.0;
  color *= 1.0 - dot(vig, vig) * 0.25;

  gl_FragColor = vec4(color, 1.0);
}
The barrel distortion maps the UV coordinates through a quadratic function that curves them outward from the center. This simulates the curved glass of a CRT screen. Points near the center barely move. Points near the edges shift outward significantly. When the distorted UV falls outside the 0-1 range, we output black -- that's the rounded corners of the CRT screen where the curved glass bends past the phosphor coating.
The scanlines use sin(gl_FragCoord.y * PI), a sine wave whose period is exactly two pixel rows -- one full bright-to-dark cycle every two rows, so every other row gets darkened. The pow(scanline, 0.8) adjusts the shape of the wave: powers below 1.0 make the bright lines wider, powers above 1.0 make them narrower. At 0.8, the bright lines are slightly wider than the dark gaps, which looks more accurate to actual CRT phosphor behavior.
The subpixel simulation is a fun detail. Real CRT pixels (and modern LCD pixels) are made of three colored sub-elements -- red, green, blue -- arranged in columns. We simulate this by slightly dimming two of the three channels based on the horizontal pixel position modulo 3. At normal viewing distance you can't see it, but if you zoom in (or screenshot and enlarge) you'll see the RGB stripe pattern. It adds that authentic low-res glow.
Glitch effects: controlled chaos
Digital glitch effects simulate data corruption, signal interference, or decoding errors. They break the image in controlled, deliberate ways. Horizontal line offsets, color channel swaps, block artifacts -- the vocabulary of corrupted video.
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

float hash(float n) {
  return fract(sin(n) * 43758.5453123);
}

float hash2(vec2 p) {
  return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;

  // glitch timing: trigger every few seconds, last for a short burst
  float glitchTime = floor(u_time * 2.0);
  float glitchActive = step(0.7, hash(glitchTime));

  if (glitchActive > 0.5) {
    // horizontal line displacement -- applied BEFORE the scene is
    // sampled, so all three channels of a displaced row shift together
    float lineHash = hash(floor(uv.y * 30.0) + glitchTime * 7.0);
    if (lineHash > 0.6) {
      float offset = (hash(floor(uv.y * 30.0) + glitchTime * 13.0) - 0.5) * 0.1;
      uv.x += offset;
    }
  }

  // base scene
  vec3 color = vec3(
    0.3 + 0.3 * sin(uv.x * 8.0 + u_time * 0.5),
    0.4 + 0.3 * cos(uv.y * 6.0 + u_time * 0.3),
    0.35 + 0.25 * sin((uv.x + uv.y) * 5.0 - u_time)
  );

  if (glitchActive > 0.5) {
    // color channel shift (re-evaluates red and blue at offset UVs)
    float channelShift = hash(glitchTime * 3.0) * 0.03;
    color.r = 0.3 + 0.3 * sin((uv.x + channelShift) * 8.0 + u_time * 0.5);
    color.b = 0.35 + 0.25 * sin(((uv.x - channelShift) + uv.y) * 5.0 - u_time);

    // block corruption
    vec2 blockUV = floor(uv * vec2(20.0, 15.0));
    float blockHash = hash2(blockUV + glitchTime);
    if (blockHash > 0.92) {
      color = vec3(hash2(blockUV + 1.0), hash2(blockUV + 2.0), hash2(blockUV + 3.0));
    }
  }

  gl_FragColor = vec4(color, 1.0);
}
The key to convincing glitch effects is intermittency. Real glitches don't happen continuously -- they appear in brief bursts then vanish. The glitchActive flag uses a hash of the time to randomly trigger glitch frames. At any given half-second interval, there's roughly a 30% chance the glitch fires. The rest of the time the image is clean.
When the glitch IS active, three things happen. Horizontal line displacement shifts random rows left or right by a small random amount. This simulates analog signal interference where the horizontal sync goes wrong for a few scanlines. Color channel shifting moves the red and blue channels horizontally by a small offset, creating the classic RGB-split glitch look. And block corruption replaces random rectangular blocks with pure random colors, simulating MPEG decoding errors where a macroblock gets corrupted.
Each of these uses floor() to quantize -- lines snap to discrete rows (floor(uv.y * 30.0) gives 30 possible line regions), blocks snap to a grid (floor(uv * vec2(20, 15)) gives a 20x15 block grid). Without the quantization the corruption would be per-pixel and look like static noise. The quantization gives it that blocky, digital-artifact character.
Pixelation: the mosaic look
Pixelation reduces the effective resolution by snapping UV coordinates to a grid. Each "big pixel" gets a single color sampled from its center. Instant retro aesthetic.
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;

  // grid resolution (lower = fewer, bigger pixels)
  float pixelSize = 64.0 + 48.0 * sin(u_time * 0.5);

  // snap UVs to pixel grid
  vec2 pixelUV = floor(uv * pixelSize) / pixelSize;

  // scene at snapped position
  vec3 color = vec3(
    0.4 + 0.4 * sin(pixelUV.x * 10.0 + u_time),
    0.3 + 0.4 * cos(pixelUV.y * 8.0 - u_time * 0.6),
    0.5 + 0.3 * sin((pixelUV.x + pixelUV.y) * 7.0 + u_time * 0.4)
  );

  // optional: add pixel grid lines
  vec2 grid = fract(uv * pixelSize);
  float border = step(0.05, grid.x) * step(0.05, grid.y);
  color *= 0.85 + 0.15 * border;

  gl_FragColor = vec4(color, 1.0);
}
The entire effect is floor(uv * pixelSize) / pixelSize. That's it. The floor() snaps the continuous UV to discrete grid positions. All pixels within the same grid cell get the same UV, so they get the same color. Bigger pixelSize = more cells = higher effective resolution. Smaller = fewer cells = bigger blocky pixels.
The grid lines overlay adds a subtle darkening at the borders of each big pixel, making the grid structure visible. The fract(uv * pixelSize) gives the position within each cell (0 to 1), and step(0.05, grid.x) * step(0.05, grid.y) darkens a thin strip along the left and bottom edges of each cell.
Animating pixelSize with a sine wave smoothly transitions between high and low resolution. At peak pixelation (pixelSize bottoming out at 16) the image is a mosaic of just 16 blocks across. At minimum pixelation it looks nearly normal. The transition itself is visually interesting -- watching detail emerge from abstraction.
Edge detection: Sobel filter
Edge detection finds boundaries between different colors/brightnesses in the image. The Sobel filter computes the gradient (rate of change) in X and Y directions by comparing neighboring pixels. High gradient magnitude = edge.
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

float scene_lum(vec2 uv) {
  vec3 col = vec3(
    0.4 + 0.4 * sin(uv.x * 10.0 + u_time),
    0.3 + 0.3 * cos(uv.y * 8.0 + u_time * 0.7),
    0.5 + 0.3 * sin((uv.x + uv.y) * 7.0 - u_time * 0.3)
  );
  // luminance
  return dot(col, vec3(0.299, 0.587, 0.114));
}

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  vec2 px = 1.0 / u_resolution;

  // Sobel kernel: sample 3x3 neighborhood
  float tl = scene_lum(uv + vec2(-px.x, px.y));
  float tc = scene_lum(uv + vec2(0.0, px.y));
  float tr = scene_lum(uv + vec2(px.x, px.y));
  float ml = scene_lum(uv + vec2(-px.x, 0.0));
  float mr = scene_lum(uv + vec2(px.x, 0.0));
  float bl = scene_lum(uv + vec2(-px.x, -px.y));
  float bc = scene_lum(uv + vec2(0.0, -px.y));
  float br = scene_lum(uv + vec2(px.x, -px.y));

  // Sobel gradients
  float gx = -tl - 2.0 * ml - bl + tr + 2.0 * mr + br;
  float gy = -tl - 2.0 * tc - tr + bl + 2.0 * bc + br;
  float edge = sqrt(gx * gx + gy * gy);

  // white edges on dark background
  vec3 color = vec3(edge * 3.0);

  // or: overlay edges on the original (needs a full-color scene function)
  // vec3 original = sceneColor(uv);
  // vec3 color = mix(original, vec3(1.0), edge * 2.0);

  gl_FragColor = vec4(color, 1.0);
}
The Sobel filter uses a 3x3 grid of samples. The horizontal gradient gx subtracts the left column from the right column (with double weight on the middle row). The vertical gradient gy subtracts the top row from the bottom row (with double weight on the middle column). These are the standard Sobel kernels -- they're designed to respond to edges while being somewhat resistant to noise.
The edge magnitude is sqrt(gx*gx + gy*gy) -- the combined gradient magnitude across both directions. High values mean the brightness changes rapidly at that point -- an edge. Low values mean a smooth region.
The scene_lum function converts the color to a single luminance value using the standard human-perception weights (0.299 red, 0.587 green, 0.114 blue). Edge detection works on single-channel data. You COULD detect edges per channel and combine them, but luminance edges catch most of what matters visually.
The * 3.0 boost makes the edges more visible. Without it, subtle edges would be nearly invisible. Adjust to taste -- higher values catch more subtle edges but also amplify noise.
Color grading: setting the mood
Color grading is the final artistic pass. It adjusts the overall color balance, contrast, and tonal distribution of the image. It's how a sunny outdoor scene can be made to feel cold and bleak, or how a dark interior can feel warm and inviting. Same content, completely different emotional read.
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;

  // base scene
  vec3 color = vec3(
    0.5 + 0.3 * sin(uv.x * 8.0 + u_time),
    0.4 + 0.3 * cos(uv.y * 6.0 - u_time * 0.5),
    0.45 + 0.25 * sin((uv.x + uv.y) * 5.0 + u_time * 0.3)
  );

  // --- color grading stack ---

  // 1. exposure (overall brightness)
  float exposure = 1.2;
  color *= exposure;

  // 2. contrast (push darks darker, brights brighter)
  float contrast = 1.3;
  color = (color - 0.5) * contrast + 0.5;

  // 3. saturation
  float luma = dot(color, vec3(0.299, 0.587, 0.114));
  float saturation = 1.2;
  color = mix(vec3(luma), color, saturation);

  // 4. color balance: shift shadows toward blue, highlights toward warm
  vec3 shadowTint = vec3(0.9, 0.9, 1.15);
  vec3 highlightTint = vec3(1.1, 1.05, 0.9);
  color *= mix(shadowTint, highlightTint, luma);

  // 5. gamma curve (per channel for fine control)
  // contrast can push values below 0, and pow() with a negative
  // base is undefined -- clamp the low end first
  color = max(color, vec3(0.0));
  color = pow(color, vec3(0.9, 0.95, 1.05));

  // clamp to valid range
  color = clamp(color, 0.0, 1.0);

  gl_FragColor = vec4(color, 1.0);
}
Five operations, each doing something different:
Exposure is a simple multiply. Values above 1.0 brighten the image, below 1.0 darken it. Like opening the aperture on a camera.
Contrast scales values relative to middle gray (0.5). Multiplying by a value above 1.0 pushes brights further from 0.5 (brighter) and darks further from 0.5 (darker). The midpoint stays the same. Below 1.0 flattens the image toward uniform gray.
Saturation is controlled by mixing between the original color and the grayscale (luminance) value. A saturation of 1.0 is unchanged. Above 1.0 pushes colors further from gray (more vivid). Below 1.0 pulls colors toward gray. At 0.0 the image is fully desaturated -- pure grayscale.
Color balance tints shadows and highlights differently. We use the luminance as a blend factor -- dark pixels (luma near 0) get multiplied by the shadow tint, bright pixels (luma near 1) get multiplied by the highlight tint. The cold-shadow, warm-highlight combo is a classic cinematic look. Flip it (warm shadows, cold highlights) for a different feel entirely.
Gamma applies a power curve per channel. Values below 1.0 brighten that channel, above 1.0 darken it. The vec3(0.9, 0.95, 1.05) very subtly warms the image -- red and green channels are slightly brightened, blue slightly darkened. Per-channel gamma is incredibly powerful for fine-tuning the overall color temperature.
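Wrapping the five steps into one function makes it easy to treat a "look" as a parameter set. A sketch -- the preset values in the comments are hypothetical starting points, not calibrated grades:

```glsl
vec3 grade(vec3 c, float exposure, float contrast, float saturation,
           vec3 shadowTint, vec3 highlightTint, vec3 gamma) {
  c *= exposure;                                   // 1. exposure
  c = (c - 0.5) * contrast + 0.5;                  // 2. contrast
  float luma = dot(c, vec3(0.299, 0.587, 0.114));
  c = mix(vec3(luma), c, saturation);              // 3. saturation
  c *= mix(shadowTint, highlightTint, luma);       // 4. color balance
  c = pow(max(c, vec3(0.0)), gamma);               // 5. per-channel gamma
  return clamp(c, 0.0, 1.0);
}

// cool-shadow / warm-highlight cinematic look:
// color = grade(color, 1.2, 1.3, 1.2,
//               vec3(0.9, 0.9, 1.15), vec3(1.1, 1.05, 0.9),
//               vec3(0.9, 0.95, 1.05));
// flat, desaturated look:
// color = grade(color, 1.0, 0.9, 0.6, vec3(1.0), vec3(1.0), vec3(1.0));
```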
Stacking it all: a full post-processing chain
The real power comes from combining effects. Each one is a few lines, and they chain naturally. Here's a complete post-processing stack applied to a simple animated scene:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

float hash(vec2 p) {
  return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

vec3 scene(vec2 uv) {
  vec2 centered = (uv - 0.5) * vec2(u_resolution.x / u_resolution.y, 1.0);
  // animated orbs
  float d1 = length(centered - 0.25 * vec2(sin(u_time * 0.8), cos(u_time * 0.6)));
  float d2 = length(centered + 0.2 * vec2(cos(u_time * 0.5), sin(u_time * 0.9)));
  vec3 col = vec3(0.03, 0.03, 0.06);
  col += vec3(1.2, 0.5, 0.2) * 0.01 / (d1 * d1 + 0.0005);
  col += vec3(0.3, 0.6, 1.2) * 0.008 / (d2 * d2 + 0.0005);
  return col;
}

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  vec2 px = 1.0 / u_resolution;
  vec2 centered = uv - 0.5;

  // 1. chromatic aberration
  float abr = 0.004 * length(centered);
  vec2 dir = normalize(centered + 0.0001);
  vec3 color;
  color.r = scene(uv + dir * abr).r;
  color.g = scene(uv).g;
  color.b = scene(uv - dir * abr).b;

  // 2. simple bloom (reduced sample count for perf)
  vec3 bloom = vec3(0.0);
  for (int x = -3; x <= 3; x++) {
    for (int y = -3; y <= 3; y++) {
      vec2 off = vec2(float(x), float(y)) * px * 3.0;
      vec3 s = scene(uv + off);
      vec3 bright = max(s - vec3(0.8), vec3(0.0));
      float w = exp(-0.5 * float(x * x + y * y) / 3.0);
      bloom += bright * w;
    }
  }
  bloom /= 12.0;
  color += bloom;

  // 3. tone mapping (Reinhard)
  color = color / (color + vec3(1.0));

  // 4. color grading
  float exposure = 1.4;
  color *= exposure;
  float contrast = 1.15;
  color = (color - 0.5) * contrast + 0.5;
  float luma = dot(color, vec3(0.299, 0.587, 0.114));
  color = mix(vec3(luma), color, 1.1);

  // 5. film grain
  float grain = (hash(gl_FragCoord.xy + fract(u_time) * 137.0) - 0.5) * 0.08;
  color += grain;

  // 6. vignette
  float vig = smoothstep(0.75, 0.3, length(centered));
  color *= vig;

  // 7. gamma
  color = pow(max(color, vec3(0.0)), vec3(0.85));

  gl_FragColor = vec4(clamp(color, 0.0, 1.0), 1.0);
}
Seven effects in sequence, each one building on the previous output. The order matters:
- Chromatic aberration first, because it samples the raw scene
- Bloom next, also from the raw scene (before any color adjustments)
- Tone mapping converts the HDR bloom result to displayable range
- Color grading adjusts the mood (exposure, contrast, saturation)
- Film grain adds texture (applied after color grading so the grain takes on the scene's color temperature)
- Vignette darkens edges (applied late so the grain also gets darkened at edges)
- Gamma as the very last step, adjusting the final brightness curve
If you swapped the grain and vignette order, the grain would be uniformly visible across the whole image, including the dark edges. That might be what you want -- but usually it looks better when the grain fades with the vignette.
The max(color, vec3(0.0)) before the gamma pow() prevents NaN values. pow() with a negative base and fractional exponent produces NaN in GLSL, and one NaN pixel can cause weird rendering artifacts. Always clamp before raising to a power.
The total cost of this shader is high -- the bloom loop evaluates the scene 49 times (7x7 grid), plus the 3 chromatic aberration samples. That's 52 scene evaluations per pixel. For our simple orb scene that's fine. For a complex raymarched scene with 100+ steps per evaluation... you'd want real multi-pass with textures. But the math is identical -- only the efficiency changes.
Creative exercise: post-process a fractal
Take the Julia set shader from episode 42 and add a post-processing stack. Render the fractal as the "scene", then apply bloom (to make the bright escape-speed regions glow), chromatic aberration (for lens character), vignette, grain, and a cold-blue color grade:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

vec2 cmul(vec2 a, vec2 b) {
  return vec2(a.x * b.x - a.y * b.y, a.x * b.y + a.y * b.x);
}

vec3 palette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
  return a + b * cos(6.28318 * (c * t + d));
}

vec3 fractalScene(vec2 uv) {
  vec2 z = uv * 2.5;
  float angle = u_time * 0.1;
  vec2 c = vec2(0.38 * cos(angle) - 0.25, 0.38 * sin(angle));
  const int maxIter = 200;
  int escaped = maxIter;
  for (int i = 0; i < maxIter; i++) {
    z = cmul(z, z) + c;
    if (dot(z, z) > 16.0) { escaped = i; break; }
  }
  vec3 col = vec3(0.01, 0.01, 0.03);
  if (escaped < maxIter) {
    float sv = float(escaped) + 1.0 - log2(log2(length(z)));
    float t = sv * 0.025;
    col = palette(t,
                  vec3(0.5), vec3(0.5),
                  vec3(1.0, 0.7, 0.4), vec3(0.0, 0.15, 0.2));
    // boost brightness for bloom to catch
    col *= 1.5;
  }
  return col;
}

float hash(vec2 p) {
  return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

void main() {
  vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
  vec2 screenUV = gl_FragCoord.xy / u_resolution;
  vec2 centered = screenUV - 0.5;

  // chromatic aberration
  float abr = 0.003 * length(centered);
  vec2 dir = normalize(centered + 0.0001);
  vec3 color;
  color.r = fractalScene(uv + dir * abr).r;
  color.g = fractalScene(uv).g;
  color.b = fractalScene(uv - dir * abr).b;

  // simple bloom
  vec3 bloom = vec3(0.0);
  for (int x = -3; x <= 3; x++) {
    for (int y = -3; y <= 3; y++) {
      vec2 off = vec2(float(x), float(y)) / u_resolution.y * 3.0;
      vec3 s = fractalScene(uv + off);
      vec3 bright = max(s - vec3(0.6), vec3(0.0));
      float w = exp(-0.5 * float(x * x + y * y) / 3.0);
      bloom += bright * w;
    }
  }
  color += bloom / 10.0;

  // tone map
  color = color / (color + vec3(1.0));

  // cold color grade
  color *= vec3(0.85, 0.95, 1.15);
  color = (color - 0.5) * 1.2 + 0.5;

  // grain
  float grain = (hash(gl_FragCoord.xy + fract(u_time) * 200.0) - 0.5) * 0.06;
  color += grain;

  // vignette
  color *= smoothstep(0.72, 0.3, length(centered));

  // gamma
  color = pow(max(color, vec3(0.0)), vec3(0.9));

  gl_FragColor = vec4(clamp(color, 0.0, 1.0), 1.0);
}
The fractal itself hasn't changed. Same Julia set iteration, same cosine palette. But the bloom makes the bright regions bleed light into the dark interior. The chromatic aberration adds subtle rainbow fringing at the edges of the fractal boundary. The cold color grade (blue-boosted, red-suppressed) gives it an icy, ethereal feel. The grain adds organic texture to the mathematical precision.
Take this and tweak the color grade. Try warm instead of cold (vec3(1.15, 1.0, 0.85)). Try high contrast (1.5 instead of 1.2). Try heavy grain (0.15) for a grungy look. Try no vignette for a flat, clinical feel. Each combination produces a completely different emotional response from the same fractal data. That's the power of post-processing -- it separates the content from the presentation.
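Those tweaks drop straight into the grading section of main() above. A sketch of the swaps -- the values mirror the suggestions, so treat them as starting points, not gospel:

```glsl
// Warm grade instead of cold: boost red, suppress blue.
color *= vec3(1.15, 1.0, 0.85);

// Higher contrast: widen the spread around mid-gray.
color = (color - 0.5) * 1.5 + 0.5;

// Heavy grain for a grungy look (up from 0.06).
float grain = (hash(gl_FragCoord.xy + fract(u_time) * 200.0) - 0.5) * 0.15;

// No vignette: just delete the smoothstep multiply entirely.
```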
When to use what
Not every effect belongs in every piece. Some quick guidelines:
Vignette -- almost always. It's subtle, it's cheap, and it focuses attention. The only time to skip it is when you specifically want a flat, uniform-brightness look.
Film grain -- for anything that should feel organic, cinematic, or analog. Skip it for clean, digital, geometric work where precision is the aesthetic.
Bloom -- for scenes with bright light sources or highlights. Makes point lights and hot spots feel luminous. Useless if your scene is uniformly dim with no bright peaks.
Chromatic aberration -- for lens-simulation aesthetics. Works great with bloom. Can feel forced if overused. Keep it subtle (0.003-0.005) unless you're going for a deliberately broken look.
CRT/scanlines -- for retro aesthetics specifically. Doesn't fit modern or clean looks. Commit to the whole CRT package (barrel distortion + scanlines + phosphor glow) or don't bother.
Glitch -- for tension, energy, or digital-chaos aesthetics. Best when intermittent. Permanent glitch is just noise. The contrast between clean and corrupted is what makes it effective.
Color grading -- always. Even if it's just a gamma adjustment and a slight warmth shift. Raw renders look flat. Five minutes of color grading makes them look intentional.
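For the two "always on" effects, a minimal finishing pass you can bolt onto almost any shader might look something like this -- a sketch, with arbitrary example values for the warmth shift and vignette radii:

```glsl
// Minimal finishing pass: slight color grade + vignette.
// screenUV is 0-1 across the screen (gl_FragCoord.xy / u_resolution).
vec3 finishColor(vec3 color, vec2 screenUV) {
  color *= vec3(1.05, 1.0, 0.97);                            // slight warmth shift
  color = pow(max(color, vec3(0.0)), vec3(0.9));             // gentle gamma lift
  color *= smoothstep(0.75, 0.35, length(screenUV - 0.5));   // vignette
  return color;
}
```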
Next time we'll work with textures in shaders -- reading image data, using textures for displacement, lookup tables for complex coloring, and procedural texture generation. The post-processing techniques from today will stack naturally with texture-based rendering.
What it comes down to...
- Post-processing effects transform the rendered scene without changing the scene itself. The pipeline: render scene to buffer, apply effects to the buffer, output to screen
- Vignette: `smoothstep(outer, inner, distance_from_center)` darkens edges. Three lines, always useful
- Film grain: hash-based noise per pixel per frame. `(hash(position + time) - 0.5) * intensity`. Subtle values (0.08-0.15) for cinematic feel
- Chromatic aberration: sample R, G, B channels at slightly different UV offsets. Strongest at screen edges, zero at center. Simulates lens imperfections
- Gaussian blur: weighted average of neighboring pixels. Separable (horizontal then vertical) is way cheaper than a 2D kernel. Foundation for bloom
- Bloom: extract bright pixels (threshold), blur them, add back to original. Reinhard tone mapping (`color / (color + 1)`) prevents clipping
- CRT scanlines: `sin(y * PI)` darkens alternating rows. Barrel distortion curves the UV space. Subpixel RGB simulation for authenticity
- Glitch: horizontal line displacement, color channel shifts, block corruption. Intermittent triggering is key -- constant glitch is just noise
- Pixelation: `floor(uv * size) / size` snaps pixels to a grid. Animating the grid size creates resolution transitions
- Edge detection: Sobel filter samples a 3x3 neighborhood, computes horizontal and vertical gradients, outputs the gradient magnitude as brightness
- Color grading: exposure (multiply), contrast (scale from 0.5), saturation (mix with luminance), color balance (tint shadows/highlights), gamma (power curve per channel)
- Effect order matters: chromatic aberration and bloom first (they sample the scene), then tone mapping, color grading, grain, vignette, gamma last
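That ordering, as a skeleton -- the function names here are placeholders, each standing in for one of the passes built in this episode:

```glsl
vec3 col = sceneWithAberration(uv);         // scene + chromatic aberration (samples the scene)
col += bloomPass(uv);                       // bloom (also samples the scene)
col = col / (col + vec3(1.0));              // Reinhard tone map
col = gradePass(col);                       // color grading
col += grainPass(gl_FragCoord.xy, u_time);  // film grain
col *= vignettePass(uv);                    // vignette
col = pow(col, vec3(1.0 / 2.2));            // gamma last
```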
Cheers! Thanks for reading.