Learn Creative Coding (#32) - Shader Uniforms and Time


Back in episode 21 we had our first taste of shaders -- compiled a fragment shader, drew a gradient, made some concentric rings chase the mouse around. We set up the boilerplate, learned the GLSL basics, and got a feel for per-pixel thinking. But that was a single episode wedged into the middle of Phase 3. A taste. Now we're going deep.

This is the start of the shader arc. Over the next bunch of episodes we're going to build a serious understanding of GPU-side creative coding -- signed distance functions, noise on the GPU, feedback loops, raymarching, fractals. Territory that Canvas can't touch. But all of that depends on one foundational concept we barely explained last time: uniforms.

Uniforms are the bridge between your JavaScript world and the shader world. They're how you tell the GPU what time it is, how big the canvas is, where the mouse is, what color you want things to be -- any value that stays constant for all pixels in a single frame but changes between frames. Without uniforms, your shader is frozen. Static. Just a function from coordinate to color with no external input. With uniforms, it becomes a living, breathing, interactive thing.

So let's build that bridge properly. We'll set up a reusable template, understand each uniform type, and build a real shader that breathes and responds to you.

The three essential uniforms

Almost every creative coding shader uses these three:

  • u_resolution -- the canvas size in pixels (vec2). You need this to normalize pixel coordinates into the 0-to-1 range. Without it, your shader has no idea how big the canvas is.
  • u_time -- elapsed time in seconds (float). This is what makes things move. Every animated shader depends on time.
  • u_mouse -- the cursor position in pixels (vec2). This is what makes things interactive.

That's it. Three values and you can build almost anything. More complex shaders add custom uniforms (color, intensity, number of iterations, whatever) but these three are the foundation.

Let me show you what each one does, one at a time.

u_resolution: knowing your canvas

We touched on this in episode 21 but it deserves a proper explanation. gl_FragCoord.xy gives you the absolute pixel position of the current fragment -- like (342, 187) on a 600x400 canvas. But raw pixel coordinates are useless for portable shader code. A circle at position 300 looks centered on a 600px canvas but off to the left on a 1200px canvas.

The fix: normalize. Divide by the resolution to get coordinates in the 0-to-1 range:

precision mediump float;
uniform vec2 u_resolution;

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  // uv.x goes from 0 (left) to 1 (right)
  // uv.y goes from 0 (bottom) to 1 (top)
  gl_FragColor = vec4(uv.x, uv.y, 0.5, 1.0);
}

That's the classic gradient from episode 21. Red increases left to right, green bottom to top. But there's a subtlety. On a non-square canvas (say 800x400), uv.x goes from 0 to 1 across 800 pixels while uv.y goes from 0 to 1 across 400 pixels. That means a circle based on length(uv - 0.5) would look like an ellipse -- squished horizontally because the X range covers more physical pixels than Y.

The standard fix for aspect-correct coordinates:

vec2 uv = gl_FragCoord.xy / u_resolution;
uv.x *= u_resolution.x / u_resolution.y;  // correct aspect ratio

Or the shorter version we used in episode 21 -- normalize by height only:

vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;

This centers the origin at the middle of the canvas and normalizes by height. On an 800x400 canvas, uv.y ranges from -0.5 to 0.5, and uv.x ranges from -1.0 to 1.0. Circles stay circular. Aspect ratio preserved. I use this second version in almost every shader now because having (0,0) at the center is just more natural for creative work.

Passing u_resolution from JavaScript

On the JavaScript side, you look up the uniform location and send the value:

let resLoc = gl.getUniformLocation(program, 'u_resolution');
gl.uniform2f(resLoc, canvas.width, canvas.height);

getUniformLocation finds where in the compiled shader program the u_resolution variable lives. uniform2f sends two float values (width and height). The 2f suffix means "two floats" -- WebGL has a whole family of these: uniform1f for a single float, uniform2f for a vec2, uniform3f for a vec3, uniform4f for a vec4, uniform1i for an integer. Match the suffix to the GLSL type or you'll get silent failures. No error message. Just wrong values. Ask me how many hours I lost to that :-)
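To keep the suffixes straight, I sometimes use a little lookup table. This is just a convenience sketch -- `UNIFORM_SETTER` and `setterFor` are my own names, not part of WebGL -- but the method names on the right are the real `gl.uniform*` family:

```javascript
// Map GLSL uniform types to the matching WebGL setter method name.
// The map and setterFor are hypothetical helpers; the method names
// themselves (uniform1f etc.) are the real WebGL 1 API.
var UNIFORM_SETTER = {
  float: 'uniform1f',  // one float
  vec2:  'uniform2f',  // two floats
  vec3:  'uniform3f',  // three floats
  vec4:  'uniform4f',  // four floats
  int:   'uniform1i'   // one integer
};

function setterFor(glslType) {
  var name = UNIFORM_SETTER[glslType];
  if (!name) throw new Error('no setter mapped for GLSL type: ' + glslType);
  return name;
}
```

Then `gl[setterFor('vec2')](resLoc, canvas.width, canvas.height)` can never mismatch the suffix. (Matrix types like mat4 use the separate `uniformMatrix*fv` family, which this sketch doesn't cover.)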

You only need to send u_resolution once (or whenever the canvas resizes). It doesn't change between frames, so there's no reason to set it inside the animation loop. Small optimization, but good habit. (The template later in this episode re-sends it every frame anyway, for simplicity -- the cost is negligible either way.)

u_time: making things breathe

Time is what separates a painting from an animation. In Canvas code we tracked frame count or used performance.now(). In shaders, we pass elapsed seconds as a uniform:

let timeLoc = gl.getUniformLocation(program, 'u_time');
let startTime = performance.now();

function render(now) {
  let seconds = (now - startTime) / 1000.0;
  gl.uniform1f(timeLoc, seconds);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
  requestAnimationFrame(render);
}

requestAnimationFrame(render);

Why seconds and not frame count? Frame-independent animation. A frame count of 60 means different things at 30fps versus 60fps versus 144fps. But 1.0 seconds is always 1.0 seconds regardless of frame rate. Your animation looks the same on a gaming monitor at 144Hz and a laptop chugging at 30fps. The speed is locked to wall-clock time, not frame count. We talked about this same principle back in episode 16 with lerp and easing -- time-based animation is always more reliable than frame-based.

Now let's use u_time for something visual. Pulsing rings that radiate outward from the center:

precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

void main() {
  vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
  float d = length(uv);

  // rings that expand outward over time
  float rings = sin(d * 40.0 - u_time * 4.0) * 0.5 + 0.5;

  // fade with distance
  float fade = 1.0 / (1.0 + d * 8.0);

  gl_FragColor = vec4(vec3(rings * fade), 1.0);
}

The math: sin(d * 40.0 - u_time * 4.0) creates a sine wave pattern based on distance from center. The d * 40.0 part sets the ring frequency -- higher number means more rings, packed tighter. The - u_time * 4.0 part is what makes them move outward. Every second, the phase shifts by 4.0 radians, pushing the rings away from the center. Same trig we used in episode 13 for oscillation, now applied per-pixel across the entire canvas.

The * 0.5 + 0.5 maps sin's -1..1 range to 0..1 (valid brightness range). The fade term is inverse distance -- bright at center, dim at edges. Without it the rings would extend to the canvas edges at full brightness, which looks harsh.
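If you want to convince yourself the numbers behave, here's a CPU mirror of the same formula in plain JavaScript. The real thing runs per-pixel in GLSL, of course -- this is just for checking the math:

```javascript
// CPU mirror of the ring shader math, for sanity-checking.
function ringBrightness(d, time) {
  // sine wave over distance, phase pushed outward by time
  var rings = Math.sin(d * 40.0 - time * 4.0) * 0.5 + 0.5; // mapped to 0..1
  var fade = 1.0 / (1.0 + d * 8.0);                        // inverse-distance fade
  return rings * fade;
}
```

At the center (d = 0, t = 0) this gives exactly 0.5 -- sin(0) is 0, mapped to mid-gray, at full fade.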

The palette trick

White rings on black are fine for testing, but let's add color. A common shader technique: use sin with offsets to create RGB color cycling:

precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;

void main() {
  vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
  float d = length(uv);

  float rings = sin(d * 30.0 - u_time * 3.0);

  // palette: offset sin waves per channel
  vec3 color;
  color.r = sin(rings * 3.14 + u_time * 0.5) * 0.5 + 0.5;
  color.g = sin(rings * 3.14 + u_time * 0.5 + 2.094) * 0.5 + 0.5;
  color.b = sin(rings * 3.14 + u_time * 0.5 + 4.189) * 0.5 + 0.5;

  float fade = 1.0 / (1.0 + d * 6.0);
  color *= fade;

  gl_FragColor = vec4(color, 1.0);
}

Those magic numbers -- 2.094 and 4.189 -- are 2π/3 and 4π/3, rounded. They space the three color channels evenly around the sine wave, creating a smooth rainbow cycle. As u_time advances, the colors shift. Combined with the ring pattern and distance fade, you get this hypnotic pulsing mandala of shifting color. Twelve lines of shader code. Try doing that with Canvas in twelve lines.
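There's a neat property hiding in those offsets: three sines spaced 2π/3 apart always sum to zero, so after the * 0.5 + 0.5 mapping the channels sum to a constant 1.5. The hue cycles, but total brightness never pumps. A quick JavaScript check (`palette` is my name for a CPU mirror of the three channel lines):

```javascript
// Three sine channels offset by 2π/3 -- the same trick as the shader.
// Because sin(x) + sin(x + 2π/3) + sin(x + 4π/3) = 0 for every x,
// the mapped channels always sum to 1.5: constant total brightness.
function palette(phase) {
  var r = Math.sin(phase) * 0.5 + 0.5;
  var g = Math.sin(phase + (2 * Math.PI) / 3) * 0.5 + 0.5;
  var b = Math.sin(phase + (4 * Math.PI) / 3) * 0.5 + 0.5;
  return [r, g, b];
}
```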

u_mouse: interaction on the GPU

Passing mouse position works exactly like the other uniforms:

let mouseLoc = gl.getUniformLocation(program, 'u_mouse');

canvas.addEventListener('mousemove', function(e) {
  let rect = canvas.getBoundingClientRect();
  let mx = e.clientX - rect.left;
  let my = canvas.height - (e.clientY - rect.top);
  gl.uniform2f(mouseLoc, mx, my);
});

The Y-flip is critical. Browser coordinates have Y=0 at the top, increasing downward. WebGL has Y=0 at the bottom, increasing upward. If you forget to flip, your mouse effect appears mirrored vertically and you'll spend twenty minutes wondering why. I mentioned this in episode 21 but it bears repeating because I still forget it sometimes.

Now in the shader, normalize the mouse position the same way you normalize fragment coordinates:

precision mediump float;
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;

void main() {
  vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
  vec2 mouse = (u_mouse - u_resolution * 0.5) / u_resolution.y;

  float d = length(uv - mouse);

  // glow around cursor
  float glow = 0.015 / d;
  glow = min(glow, 1.0);  // clamp so center doesn't blow out

  vec3 color = vec3(0.3, 0.6, 1.0) * glow;

  gl_FragColor = vec4(color, 1.0);
}

A soft blue glow follows your mouse. 0.015 / d is an inverse-distance falloff -- not quite physically correct (real light falls off with distance squared), but inverse distance looks better on screen. The min(glow, 1.0) clamp prevents the very center pixel from going to infinity. Without it, you'd get a white-hot pixel at the exact mouse position, which can look glitchy.

Combining time and mouse

The real magic happens when you combine all three uniforms. Here's a shader where rings emanate from wherever the mouse is, with time controlling the expansion and color:

precision mediump float;
uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;

void main() {
  vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
  vec2 mouse = (u_mouse - u_resolution * 0.5) / u_resolution.y;

  // distance from mouse, not from center
  float d = length(uv - mouse);

  // rings from mouse position
  float rings = sin(d * 50.0 - u_time * 5.0) * 0.5 + 0.5;

  // color shifts with time
  float hueShift = u_time * 0.3;
  vec3 color;
  color.r = sin(hueShift) * 0.5 + 0.5;
  color.g = sin(hueShift + 2.094) * 0.5 + 0.5;
  color.b = sin(hueShift + 4.189) * 0.5 + 0.5;

  // apply ring pattern and distance fade
  float fade = 1.0 / (1.0 + d * 4.0);
  color *= rings * fade;

  // subtle background so it's not pure black
  color += vec3(0.02, 0.02, 0.04);

  gl_FragColor = vec4(color, 1.0);
}

Move your mouse -- the rings follow. The colors shift slowly over time. The background is almost-black with a faint blue tint so the edges aren't harsh. This is interactive, animated, colorful generative art running entirely on the GPU, and the fragment shader is 20 lines. The per-pixel computation -- distance, trig, color mapping -- is exactly the kind of work the GPU was designed for.

The reusable template

You should have a template you can copy for every new shader experiment. Here's the one I use -- it sets up a full-screen quad with all three standard uniforms and handles resize:

<!DOCTYPE html>
<html>
<head>
<style>
  body { margin: 0; overflow: hidden; background: #000; }
  canvas { display: block; width: 100vw; height: 100vh; }
</style>
</head>
<body>
<canvas id="c"></canvas>
<script>
var canvas = document.getElementById('c');
var gl = canvas.getContext('webgl');

function resize() {
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;
  gl.viewport(0, 0, canvas.width, canvas.height);
}
resize();
window.addEventListener('resize', resize);

var vertSrc = 'attribute vec2 p;void main(){gl_Position=vec4(p,0,1);}';

var fragSrc = [
  'precision mediump float;',
  'uniform vec2 u_resolution;',
  'uniform float u_time;',
  'uniform vec2 u_mouse;',
  '',
  'void main() {',
  '  vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;',
  '  vec2 m = (u_mouse - u_resolution * 0.5) / u_resolution.y;',
  '',
  '  // --- your shader code here ---',
  '  float d = length(uv);',
  '  gl_FragColor = vec4(vec3(d), 1.0);',
  '}'
].join('\n');

function makeShader(type, src) {
  var s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  if (!gl.getShaderParameter(s, gl.COMPILE_STATUS)) {
    console.error(gl.getShaderInfoLog(s));
    return null;
  }
  return s;
}

var prog = gl.createProgram();
gl.attachShader(prog, makeShader(gl.VERTEX_SHADER, vertSrc));
gl.attachShader(prog, makeShader(gl.FRAGMENT_SHADER, fragSrc));
gl.linkProgram(prog);
gl.useProgram(prog);

var buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
  -1,-1, 1,-1, -1,1, -1,1, 1,-1, 1,1
]), gl.STATIC_DRAW);

var pLoc = gl.getAttribLocation(prog, 'p');
gl.enableVertexAttribArray(pLoc);
gl.vertexAttribPointer(pLoc, 2, gl.FLOAT, false, 0, 0);

var uRes = gl.getUniformLocation(prog, 'u_resolution');
var uTime = gl.getUniformLocation(prog, 'u_time');
var uMouse = gl.getUniformLocation(prog, 'u_mouse');

var mouseX = 0, mouseY = 0;
canvas.addEventListener('mousemove', function(e) {
  // canvas fills the window at (0,0), so no getBoundingClientRect offset needed
  mouseX = e.clientX;
  mouseY = canvas.height - e.clientY;  // flip Y for WebGL
});

var t0 = performance.now();

function frame(now) {
  gl.uniform2f(uRes, canvas.width, canvas.height);
  gl.uniform1f(uTime, (now - t0) / 1000.0);
  gl.uniform2f(uMouse, mouseX, mouseY);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
</script>
</body>
</html>

Copy this. Save it as shader-template.html. When you want to try a new shader, duplicate the file and replace the fragment shader string. The vertex shader, buffer setup, uniform wiring, animation loop -- all handled. You just write the main() function in GLSL.

The vertex shader is compressed onto one line because you'll literally never change it for 2D work. It just passes the position through. The fragment shader is broken into an array of strings joined with newlines so you can read it more easily in the JavaScript source.
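The array-join style works everywhere, but if you don't need ancient-browser support, a backtick template literal (ES2015) is easier on the eyes -- a drop-in replacement for the fragSrc above:

```javascript
// Alternative to the array-of-strings approach: a template literal
// holds the GLSL as one readable multi-line string. Same result as
// .join('\n'), just without the per-line quoting.
var fragSrc = `
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform vec2 u_mouse;

void main() {
  vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
  vec2 m = (u_mouse - u_resolution * 0.5) / u_resolution.y;

  // --- your shader code here ---
  float d = length(uv);
  gl_FragColor = vec4(vec3(d), 1.0);
}
`;
```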

Precision qualifiers

You've seen precision mediump float; at the top of every fragment shader. What does it actually mean?

GPUs have different precision levels for floating-point numbers:

  • lowp -- low precision (roughly 8 bits of fraction). Fastest, but only useful for very simple stuff. Color values, basically.
  • mediump -- medium precision (roughly a 16-bit half float). Good enough for most 2D creative coding. This is what we use.
  • highp -- high precision (32-bit, the same float a C program uses). Needed for very large coordinates, very small deltas, or anything involving accumulated error over many iterations.

The precision mediump float; line sets the default for ALL float variables in the shader. Without it, some mobile GPUs won't compile your shader at all -- they require an explicit precision declaration.

When does precision actually matter? Honestly, for most of what we're doing, mediump is fine. You'd need highp if you were:

  • Doing fractal zooms where you need very precise coordinate calculation deep in the zoom
  • Working with very large time values (after hours of runtime, mediump loses precision)
  • Computing cumulative effects where tiny floating-point errors build up

For now, use mediump everywhere and switch to highp if you see visual artifacts like banding, jittering, or patterns breaking at certain positions. On desktop GPUs mediump and highp are usually identical anyway -- the distinction mainly matters on mobile.

// you can also declare precision per-variable
highp float preciseTime = u_time;
mediump vec2 approxPos = gl_FragCoord.xy;

But this is rarely necessary. Just set the default at the top and forget about it until something looks wrong.

Custom uniforms

Beyond the big three, you can pass any value you want as a uniform. Want to control the ring frequency with a slider? Add a custom uniform:

precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform float u_freq;      // custom: ring frequency
uniform vec3 u_baseColor;  // custom: base color

void main() {
  vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
  float d = length(uv);

  float rings = sin(d * u_freq - u_time * 3.0) * 0.5 + 0.5;
  float fade = 1.0 / (1.0 + d * 5.0);

  vec3 color = u_baseColor * rings * fade;
  gl_FragColor = vec4(color, 1.0);
}

JavaScript side:

let freqLoc = gl.getUniformLocation(prog, 'u_freq');
let colorLoc = gl.getUniformLocation(prog, 'u_baseColor');

// in the animation loop
gl.uniform1f(freqLoc, parseFloat(slider.value));
gl.uniform3f(colorLoc, 0.2, 0.6, 1.0);

Wire up an HTML range input to slider.value, and you can tweak the ring density in real time. Add a color picker, read its RGB values, pass them as u_baseColor. Suddenly your shader is a tool with knobs. Same fragment shader code, infinitely tunable from the outside.

This is how creative coding tools like Shadertoy work under the hood. They have a text editor for the GLSL, a set of built-in uniforms (time, resolution, mouse), and channels for textures and audio. The magic isn't in the tool -- it's in the uniform system that bridges JavaScript and GLSL.

A "breathing" background

Let's put it all together into something you could actually use as a website background or a screensaver. A slowly shifting color field that responds to the mouse with a soft spotlight:

precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform vec2 u_mouse;

void main() {
  vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
  vec2 mouse = (u_mouse - u_resolution * 0.5) / u_resolution.y;

  // slow breathing color based on position and time
  float t = u_time * 0.2;
  vec3 bg;
  bg.r = sin(uv.x * 2.0 + t) * 0.15 + 0.08;
  bg.g = sin(uv.y * 2.0 + t * 1.3 + 1.0) * 0.12 + 0.06;
  bg.b = sin((uv.x + uv.y) * 1.5 + t * 0.7 + 2.0) * 0.18 + 0.12;

  // mouse spotlight
  float md = length(uv - mouse);
  float spotlight = 0.06 / (md + 0.06);

  // spotlight reveals a brighter, warmer version
  vec3 warm = vec3(0.25, 0.15, 0.08);
  vec3 color = bg + warm * spotlight;

  gl_FragColor = vec4(color, 1.0);
}

The background breathes -- dark blues and purples slowly shifting, never static, never bright enough to be distracting. The mouse creates a warm spotlight that reveals brighter tones underneath. Move the mouse and it's like shining a flashlight across a dark canvas. Leave it still and the background keeps breathing on its own.

The 0.06 / (md + 0.06) is a softer version of the inverse distance glow. Adding 0.06 to the denominator prevents the infinity spike at zero distance and controls how wide the spotlight is. Larger value = wider, softer spotlight. Smaller value = tighter, more focused. I find 0.04 to 0.08 works well for this kind of ambient effect.
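A CPU mirror makes the constant's double duty obvious (`spotlight` and `softness` are my names for it, nothing from the shader itself):

```javascript
// softness / (dist + softness): peaks at exactly 1.0 when dist is 0,
// and halves when dist equals softness -- so the same constant both
// caps the brightness and sets the spotlight's radius.
function spotlight(dist, softness) {
  return softness / (dist + softness);
}
```

So 0.06 really means "half brightness at 0.06 normalized units from the cursor."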

Notice how restrained the colors are. The rgb values never go above maybe 0.3. This is intentional -- a breathing background should be subtle, atmospheric, not screaming for attention. The spotlight adds maybe another 0.25 at most. Even at peak brightness you're at half intensity. Subtlety is what makes this feel sophisticated rather than garish.

frame-independent animation: why u_time beats frame count

I touched on this earlier but it's worth a dedicated section because I see beginners get this wrong all the time.

If you use frame count for animation:

// BAD -- depends on frame rate
float animation = sin(float(frameCount) * 0.05);

On a 60fps machine, this oscillates at about 0.48 Hz. On a 30fps machine, it oscillates at about 0.24 Hz -- half as fast. On a 144fps machine, it's 1.15 Hz. Your animation looks completely different on every device.

With u_time in seconds:

// GOOD -- same speed everywhere
float animation = sin(u_time * 3.0);

This oscillates at 3.0 radians per second regardless of whether you're getting 30fps, 60fps, or 144fps. The GPU renders as many frames as it can, but the animation advances at the same real-world speed.

There's one caveat: u_time is a float, and floats lose precision as they get large. At mediump (16-bit) precision this bites surprisingly fast -- within minutes, the float can no longer represent frame-sized time increments and the animation starts to jitter. Even at highp (32-bit) you get there after a day or two of continuous runtime. The fix: use modular time. Instead of raw seconds, pass seconds % 1000.0 -- the time wraps every 1000 seconds, which is plenty of range for smooth animation while keeping the value small enough to stay precise. (Pick a wrap period your slowest cycle divides evenly, or you'll see a pop at the reset.) For a 15-minute creative coding sketch, raw seconds are fine. For an installation running 24/7, you'd want the modular approach.

// for long-running installations
let seconds = ((now - startTime) / 1000.0) % 1000.0;
gl.uniform1f(timeLoc, seconds);
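You can watch the 32-bit version of this precision loss from Node, using Math.fround to round values to float32 (roughly what highp gives you). A sketch of the failure mode, not a GPU test:

```javascript
// Math.fround rounds a number to 32-bit float precision.
// Near t = 0, float32 spacing is tiny, so a 5 ms step survives.
// Near t = 259200 (three days of seconds) the spacing is ~0.0156 s,
// so the same 5 ms step rounds away -- time freezes between coarser
// ticks. Wrapping with % keeps the value small and precise again.
var threeDays = 259200;

var stepSurvivesNearZero =
  Math.fround(1 + 0.005) !== Math.fround(1);                 // true

var stepSurvivesAtThreeDays =
  Math.fround(threeDays + 0.005) !== Math.fround(threeDays); // false: lost

var stepSurvivesWrapped =
  Math.fround((threeDays % 1000) + 0.005) !==
  Math.fround(threeDays % 1000);                             // true again
```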

Debugging uniforms

What do you do when your shader compiles but nothing moves? Or the mouse does nothing? Nine times out of ten, it's a uniform problem. Here's my debugging checklist:

1. Check the uniform location isn't null.

let loc = gl.getUniformLocation(program, 'u_time');
console.log('u_time location:', loc);
// null means the uniform doesn't exist in the shader
// (or the compiler optimized it away because you never used it)

If you declare a uniform in GLSL but never use it in the actual computation, the compiler removes it. getUniformLocation returns null. Then gl.uniform1f(null, value) silently does nothing. Check that every uniform is actually referenced in the shader code.

2. Verify your values are reaching the shader.

Temporarily replace your shader's output with the uniform value as a color:

// debug: is u_time arriving?
gl_FragColor = vec4(fract(u_time), 0.0, 0.0, 1.0);
// should pulse red every second

fract(u_time) gives the fractional part of time -- it ramps from 0 to 1 every second, then resets. If the screen pulses red, the uniform is arriving. If it stays black, the uniform isn't being set correctly on the JavaScript side.

3. Check the uniform function signature.

gl.uniform1f for float, gl.uniform2f for vec2, gl.uniform3f for vec3. If your shader declares uniform vec2 u_mouse but you call gl.uniform1f(mouseLoc, mouseX), it won't work. The function must match the type. No error message. Just silence and confusion.

Texture uniforms (a preview)

Uniforms aren't limited to numbers. You can also pass textures -- images, canvases, video frames, the previous frame's output. The uniform type in GLSL is sampler2D:

uniform sampler2D u_texture;

void main() {
  vec2 uv = gl_FragCoord.xy / u_resolution;
  vec4 texColor = texture2D(u_texture, uv);
  gl_FragColor = texColor;
}

Setting up texture uniforms involves more WebGL boilerplate (create texture, bind it, set filtering modes, assign to a texture unit). We'll cover this properly when we get to feedback loops and post-processing in a later episode. For now, just know that it exists -- the uniform system can carry images, not just numbers. That's how shader-based effects like blur, distortion, and feedback work: the previous frame is passed back into the shader as a texture uniform, and the shader reads and transforms it.

A creative exercise

Here's a challenge for you. Take the template from earlier and build a "color field" shader with these rules:

  1. The base color slowly shifts over time (using u_time and sin)
  2. The mouse position creates a ripple effect -- concentric rings that fade outward
  3. The ripple color is the complement of whatever the base color currently is
  4. The whole thing should feel calm and meditative, not flashy

The techniques you need are all in this episode: time-based animation for the color shift, distance and sin for the ripples, inverse distance for the fade. For the complement color, if your base color is vec3(r, g, b), the complement is vec3(1.0 - r, 1.0 - g, 1.0 - b). That's it. Try it before reading on.

Here's my version:

precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform vec2 u_mouse;

void main() {
  vec2 uv = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
  vec2 m = (u_mouse - u_resolution * 0.5) / u_resolution.y;

  // slow base color
  float t = u_time * 0.15;
  vec3 base;
  base.r = sin(t) * 0.3 + 0.35;
  base.g = sin(t + 2.094) * 0.25 + 0.25;
  base.b = sin(t + 4.189) * 0.3 + 0.4;

  // ripple from mouse
  float md = length(uv - m);
  float ripple = sin(md * 25.0 - u_time * 2.0);
  ripple = ripple * 0.5 + 0.5;
  ripple *= smoothstep(0.6, 0.0, md);  // fade with distance

  // complement color in the ripple
  vec3 comp = vec3(1.0) - base;
  vec3 color = mix(base, comp, ripple * 0.4);

  gl_FragColor = vec4(color, 1.0);
}

The smoothstep(0.6, 0.0, md) creates a smooth falloff that's 1.0 at the mouse and 0.0 at distance 0.6. Reversed arguments because we want the falloff to go from strong (near) to weak (far). The mix(base, comp, ripple * 0.4) blends between the base and complement colors, but only 40% at most -- keeping it subtle. Calm and meditative, like the brief said.

The 0.15 multiplier on time makes the base color shift very slowly. It takes about 40 seconds for a full cycle. That's the "meditative" part -- you barely notice the change unless you look away and come back. Fast color cycling is exciting but exhausting. Slow shifts are hypnotic.
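If smoothstep's reversed arguments feel slippery, here's a JavaScript mirror of the GLSL built-ins used above -- my re-implementations for checking values, following the standard GLSL definitions:

```javascript
// GLSL-style helpers re-implemented in JS for sanity-checking.
function clamp(x, lo, hi) { return Math.min(Math.max(x, lo), hi); }

// Hermite interpolation from 0 to 1 as x moves from edge0 to edge1.
// Passing edge0 > edge1 reverses the ramp -- exactly the trick above.
function smoothstep(edge0, edge1, x) {
  var t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0);
  return t * t * (3.0 - 2.0 * t);
}

// Linear blend: a at t = 0, b at t = 1.
function mix(a, b, t) { return a + (b - a) * t; }
```

So smoothstep(0.6, 0.0, 0.0) is 1.0 (full ripple at the mouse) and smoothstep(0.6, 0.0, 0.6) is 0.0 (gone at distance 0.6), with a smooth S-curve in between.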

Alright, so what do we know now?

  • Uniforms are the bridge between JavaScript and GLSL -- they carry values from your code to the GPU
  • The three essential uniforms: u_resolution (canvas size), u_time (elapsed seconds), u_mouse (cursor position)
  • Normalize pixel coordinates by resolution for portable, aspect-correct shaders
  • Use u_time in seconds (not frame count) for frame-rate-independent animation
  • WebGL uniform functions must match the GLSL type: uniform1f for float, uniform2f for vec2, uniform3f for vec3
  • Custom uniforms let you add sliders, color pickers, or any external control to your shader
  • Debug by outputting uniform values as colors -- if the screen changes, the value is arriving
  • Precision qualifiers (lowp, mediump, highp) control float accuracy -- mediump is fine for most 2D work
  • Texture uniforms (sampler2D) can carry images and frames -- the foundation for feedback effects
  • For long-running shaders, use modular time to prevent floating-point precision loss

This is the foundation everything in the shader arc builds on. Every shader we write from here will use this template and these three uniforms. Next up we're tackling signed distance functions -- the technique for drawing circles, boxes, triangles, and any shape you can imagine using pure math. Once you can describe shapes as distance fields, you unlock boolean operations, smooth blending, and the entire raymarching paradigm. It's where shaders stop being "fancy gradients" and start being a whole new medium :-)

Cheers! Thanks for reading.

X

@femdev


