
Learn Creative Coding (#36) - Shader Feedback Loops and Trails

Every shader we've written so far has been stateless. Each frame starts fresh. The fragment shader runs, every pixel gets a color based on the current time and coordinates, and then the whole thing is thrown away. Next frame, same process, zero memory of what came before.
That changes today.
Feedback loops give a shader memory. The idea is straightforward: render a frame, save it as a texture, then feed that texture back into the shader as input for the next frame. The output becomes the input. Whatever was on screen one frame ago is now data you can read, distort, blend, layer on top of. Things accumulate. Trails form. Patterns evolve.
This is how analog video artists like Nam June Paik worked in the 1960s and '70s -- point a camera at a monitor showing the camera's own feed. The image copies itself, shifts, decays, and generates these beautiful spiraling patterns from nothing but the loop. We're doing the same thing, but digitally, with total control over every step.
The catch is that WebGL doesn't let you read from and write to the same texture in a single pass. You can't say "read pixel (x,y) from the screen, modify it, and write it back." The GPU doesn't work that way. So we need a trick: ping-pong buffers.
Framebuffer objects: off-screen rendering
Before we can do feedback, we need to understand framebuffer objects (FBOs). Normally, your shader renders to the screen -- the default framebuffer. An FBO lets you render to a texture instead. The shader runs exactly the same, but the pixels go into a texture sitting in GPU memory rather than onto the screen.
Here's the setup code. It's a lot of boilerplate, but you only write it once:
function createFBO(gl, width, height) {
let texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
let fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
gl.TEXTURE_2D, texture, 0);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.bindTexture(gl.TEXTURE_2D, null);
return { fbo: fbo, texture: texture };
}
createTexture() makes a texture object, and texImage2D() with null as the last argument allocates empty storage for it -- memory reserved, no image data uploaded. createFramebuffer() makes the FBO. framebufferTexture2D() attaches the texture to the FBO so rendering output goes there.
The CLAMP_TO_EDGE wrapping is important. Without it, sampling at the texture edges might wrap around and pull in pixels from the opposite side, which creates weird border artifacts. LINEAR filtering gives smooth interpolation when we sample between pixels -- critical for smooth feedback effects.
Ping-pong buffers
Here's the core idea. You create TWO FBOs. Call them A and B. On frame N, the shader reads from A's texture and writes to B's framebuffer. On frame N+1, the shader reads from B's texture and writes to A's framebuffer. They alternate roles every frame. A becomes B, B becomes A. Ping-pong.
let fboA = createFBO(gl, canvas.width, canvas.height);
let fboB = createFBO(gl, canvas.width, canvas.height);
let current = fboA;
let previous = fboB;
function frame(now) {
// bind the "current" FBO as render target
gl.bindFramebuffer(gl.FRAMEBUFFER, current.fbo);
// bind the "previous" texture as input
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, previous.texture);
gl.uniform1i(prevFrameLoc, 0);
// set uniforms, draw the quad
gl.uniform2f(uRes, canvas.width, canvas.height);
gl.uniform1f(uTime, (now - t0) / 1000.0);
gl.drawArrays(gl.TRIANGLES, 0, 6);
// now copy current to screen
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
// (draw again with a simple passthrough shader, or blit)
// swap
let temp = current;
current = previous;
previous = temp;
requestAnimationFrame(frame);
}
The swap at the end is the ping-pong. After rendering, what was "current" (the output) becomes "previous" (the input for next frame). And what was "previous" (already read from) becomes "current" (the next output target). No memory allocation per frame. No copying. Just swapping two pointers. Super cheap.
Why can't we just use one FBO and read/write to it simultaneously? Because GPUs pipeline operations. If the shader is reading from a texture while also writing to it, you'd get race conditions -- some pixels read old data, some read data written by the current frame. The result is undefined. Garbage. Two buffers eliminate the problem entirely because reading and writing always happen on different textures.
The simplest feedback effect: trails
Let's start with the most basic feedback loop. Blend the current frame with the previous frame at reduced opacity. Moving objects leave fading trails:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_prevFrame;
void main() {
vec2 uv = gl_FragCoord.xy / u_resolution;
vec2 centered = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
// read previous frame
vec4 prev = texture2D(u_prevFrame, uv);
// draw something new: a moving circle
vec2 pos = vec2(sin(u_time * 0.7) * 0.3, cos(u_time * 0.5) * 0.2);
float d = length(centered - pos) - 0.04;
float circle = smoothstep(0.005, 0.0, d);
vec3 circleColor = vec3(0.9, 0.4, 0.3);
// blend: previous frame fades, new content added on top
vec3 color = prev.rgb * 0.97 + circleColor * circle;
gl_FragColor = vec4(color, 1.0);
}
That prev.rgb * 0.97 is everything. Each frame, every pixel's brightness drops to 97% of what it was. After one frame, a pixel at 1.0 becomes 0.97. After two frames, 0.94. After ten frames, 0.74. After 100 frames, 0.048 -- nearly black. The trail fades exponentially.
The + circleColor * circle adds the new drawing on top. Where the circle currently is, the pixel gets full brightness. Where it was one frame ago, 97%. Two frames ago, 94%. And so on. A smooth glowing trail behind the moving circle.
Try changing the 0.97 to different values. 0.99 gives long, slow-fading trails. 0.90 gives short, snappy trails. 0.999 makes trails that hang around almost forever -- the canvas slowly fills up with ghost images. 0.5 gives basically no trail at all, just a faint echo. The decay factor is your primary creative control.
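You can put numbers on that control knob. Here's a quick sketch in plain JavaScript (outside the shader -- the helper name is mine) that computes how many frames a full-brightness pixel needs before it drops below one 8-bit level, i.e. becomes indistinguishable from black:

```javascript
// How many frames until a pixel at brightness 1.0, multiplied by
// `decay` every frame, falls below `floor` (1/255 = one 8-bit level)?
// Solve decay^n < floor  =>  n > log(floor) / log(decay).
function framesToBlack(decay, floor) {
  return Math.ceil(Math.log(floor) / Math.log(decay));
}

console.log(framesToBlack(0.90, 1 / 255)); // 53  -- short, snappy trails
console.log(framesToBlack(0.97, 1 / 255)); // 182 -- the value used above
console.log(framesToBlack(0.99, 1 / 255)); // 552 -- long, slow fades
```

At 60 fps, 0.97 trails take about three seconds to vanish completely, while 0.99 hangs on for over nine.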
A complete working example
Let me show the full JavaScript setup so you can actually run this. It's the template from episode 32 but extended with ping-pong FBOs:
<!DOCTYPE html>
<html>
<head>
<style>
body { margin: 0; overflow: hidden; background: #000; }
canvas { display: block; width: 100vw; height: 100vh; }
</style>
</head>
<body>
<canvas id="c"></canvas>
<script>
var canvas = document.getElementById('c');
var gl = canvas.getContext('webgl');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
gl.viewport(0, 0, canvas.width, canvas.height);
var vertSrc = 'attribute vec2 p;void main(){gl_Position=vec4(p,0,1);}';
var feedbackFrag = [
'precision mediump float;',
'uniform vec2 u_resolution;',
'uniform float u_time;',
'uniform sampler2D u_prevFrame;',
'',
'void main() {',
' vec2 uv = gl_FragCoord.xy / u_resolution;',
' vec2 c = (gl_FragCoord.xy - u_resolution*0.5) / u_resolution.y;',
' vec4 prev = texture2D(u_prevFrame, uv);',
' vec2 pos = vec2(sin(u_time*0.7)*0.3, cos(u_time*0.5)*0.2);',
' float d = length(c - pos) - 0.04;',
' float s = smoothstep(0.005, 0.0, d);',
' vec3 col = prev.rgb * 0.97 + vec3(0.9,0.4,0.3) * s;',
' gl_FragColor = vec4(col, 1.0);',
'}'
].join('\n');
var copyFrag = [
'precision mediump float;',
'uniform sampler2D u_tex;',
'uniform vec2 u_resolution;',
'void main() {',
' gl_FragColor = texture2D(u_tex, gl_FragCoord.xy/u_resolution);',
'}'
].join('\n');
function makeShader(type, src) {
var s = gl.createShader(type);
gl.shaderSource(s, src);
gl.compileShader(s);
return s;
}
function makeProg(frag) {
var p = gl.createProgram();
gl.attachShader(p, makeShader(gl.VERTEX_SHADER, vertSrc));
gl.attachShader(p, makeShader(gl.FRAGMENT_SHADER, frag));
gl.linkProgram(p);
return p;
}
var feedbackProg = makeProg(feedbackFrag);
var copyProg = makeProg(copyFrag);
var buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
-1,-1, 1,-1, -1,1, -1,1, 1,-1, 1,1
]), gl.STATIC_DRAW);
function createFBO(w, h) {
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0,
gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
gl.TEXTURE_2D, tex, 0);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
return { fbo: fb, texture: tex };
}
var fboA = createFBO(canvas.width, canvas.height);
var fboB = createFBO(canvas.width, canvas.height);
var current = fboA, previous = fboB;
var t0 = performance.now();
function frame(now) {
var t = (now - t0) / 1000.0;
// render feedback pass into current FBO
gl.bindFramebuffer(gl.FRAMEBUFFER, current.fbo);
gl.useProgram(feedbackProg);
var pLoc = gl.getAttribLocation(feedbackProg, 'p');
gl.enableVertexAttribArray(pLoc);
gl.vertexAttribPointer(pLoc, 2, gl.FLOAT, false, 0, 0);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, previous.texture);
gl.uniform1i(gl.getUniformLocation(feedbackProg, 'u_prevFrame'), 0);
gl.uniform2f(gl.getUniformLocation(feedbackProg, 'u_resolution'),
canvas.width, canvas.height);
gl.uniform1f(gl.getUniformLocation(feedbackProg, 'u_time'), t);
gl.drawArrays(gl.TRIANGLES, 0, 6);
// copy to screen
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.useProgram(copyProg);
pLoc = gl.getAttribLocation(copyProg, 'p');
gl.enableVertexAttribArray(pLoc);
gl.vertexAttribPointer(pLoc, 2, gl.FLOAT, false, 0, 0);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, current.texture);
gl.uniform1i(gl.getUniformLocation(copyProg, 'u_tex'), 0);
gl.uniform2f(gl.getUniformLocation(copyProg, 'u_resolution'),
canvas.width, canvas.height);
gl.drawArrays(gl.TRIANGLES, 0, 6);
// swap
var tmp = current;
current = previous;
previous = tmp;
requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
</script>
</body>
</html>
That's a lot of JavaScript but most of it is WebGL setup you never touch again. The interesting parts are the feedback shader (13 lines of GLSL) and the swap at the bottom. Save this as your feedback template and reuse it for everything else in this episode.
Smearing: displacing the UV
Plain trails fade in place. The circle moves and leaves a ghost behind, and the ghost just sits there and dims. That's nice, but we can make it wilder. What if the ghost didn't stay put? What if we shifted the UV coordinates slightly when sampling the previous frame, so the old pixels drift?
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_prevFrame;
void main() {
vec2 uv = gl_FragCoord.xy / u_resolution;
vec2 centered = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
// displace the sampling position
vec2 offset = vec2(0.002, 0.001);
vec4 prev = texture2D(u_prevFrame, uv + offset);
// draw a circle
vec2 pos = vec2(sin(u_time * 0.8) * 0.25, cos(u_time * 0.6) * 0.2);
float d = length(centered - pos) - 0.03;
float circle = smoothstep(0.004, 0.0, d);
vec3 color = prev.rgb * 0.96 + vec3(0.3, 0.8, 0.6) * circle;
gl_FragColor = vec4(color, 1.0);
}
That uv + offset is the key change. Instead of reading the previous pixel at the same position, we read it from slightly above and to the right. Every frame, every pixel shifts its history by that offset. The trail smears in one direction -- like dragging wet paint across a canvas.
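It's worth remembering the offset is in UV space (0 to 1 across the texture) and gets applied once per frame, so it's really a drift velocity. A tiny sketch (assuming 60 fps and a 1920-pixel-wide canvas for the example numbers):

```javascript
// An offset in UV space, applied once per frame, is a constant drift.
// Pixels per second = offset fraction * frames per second * width.
function smearSpeedPx(offsetUV, fps, widthPx) {
  return offsetUV * fps * widthPx;
}

console.log(smearSpeedPx(0.002, 60, 1920)); // ~230 px/s horizontally
```

So even that innocent-looking 0.002 drags the trail across a tenth of the screen every second.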
A constant offset gives a constant smear direction. But what if the offset varies per pixel? Use noise:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_prevFrame;
float hash(vec2 p) {
return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}
void main() {
vec2 uv = gl_FragCoord.xy / u_resolution;
vec2 centered = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
// noise-based displacement
float nx = hash(uv * 100.0 + u_time * 0.1) * 2.0 - 1.0;
float ny = hash(uv * 100.0 + u_time * 0.1 + vec2(37.0, 91.0)) * 2.0 - 1.0;
vec2 offset = vec2(nx, ny) * 0.003;
vec4 prev = texture2D(u_prevFrame, uv + offset);
// draw
vec2 pos = vec2(sin(u_time * 0.6) * 0.3, cos(u_time * 0.4) * 0.25);
float d = length(centered - pos) - 0.05;
float circle = smoothstep(0.005, 0.0, d);
vec3 color = prev.rgb * 0.97 + vec3(0.9, 0.5, 0.2) * circle;
gl_FragColor = vec4(color, 1.0);
}
Now each pixel gets pushed in a random-ish direction every frame. The trail dissolves into a smeared, painterly mess. It looks like watercolor running on wet paper. The hash function from episode 35 is back -- same fract(sin(dot(...))) trick, just used for displacement instead of color.
The * 0.003 controls the smear intensity. Bigger values give more violent displacement. Smaller values give subtle drifting. Try 0.001 for a gentle haze, 0.01 for aggressive paint-smear. Above 0.02 it starts looking chaotic and the image dissolves too fast to see anything meaningful.
Kaleidoscopic feedback: rotate and scale
This is where feedback gets truly psychedelic. Instead of just fading the previous frame, apply a slight rotation and scale before blending:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_prevFrame;
void main() {
vec2 uv = gl_FragCoord.xy / u_resolution;
vec2 centered = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
// rotate and scale the UV before sampling previous frame
float angle = 0.01;
float scale = 0.995;
float c = cos(angle);
float s = sin(angle);
vec2 rotUV = centered;
rotUV = vec2(rotUV.x * c - rotUV.y * s, rotUV.x * s + rotUV.y * c);
rotUV *= scale;
// convert back to 0-1 UV space
vec2 sampleUV = (rotUV * u_resolution.y + u_resolution * 0.5) / u_resolution;
vec4 prev = texture2D(u_prevFrame, sampleUV);
// draw an orbiting dot
float t = u_time * 0.5;
vec2 pos = vec2(cos(t) * 0.15, sin(t) * 0.15);
float d = length(centered - pos) - 0.02;
float spot = smoothstep(0.003, 0.0, d);
vec3 hue = vec3(
sin(u_time * 0.3) * 0.4 + 0.6,
sin(u_time * 0.3 + 2.094) * 0.3 + 0.5,
sin(u_time * 0.3 + 4.189) * 0.4 + 0.6
);
vec3 color = prev.rgb * 0.98 + hue * spot;
gl_FragColor = vec4(color, 1.0);
}
Every frame, the previous image is rotated by 0.01 radians (~0.6 degrees) and scaled down to 99.5% size. The dot orbits in a circle, leaving a trail that spirals inward toward the center. After a few seconds you get this infinite tunnel effect -- copies of copies of copies, each slightly rotated and smaller, receding into the middle of the screen like looking down a fractal corridor.
The scale factor is crucial. Below 1.0 (like our 0.995), the feedback shrinks toward the center -- you get an inward spiral. A value above 1.0 expands outward instead, but that quickly fills the screen with amplified noise and blows up. Exactly 1.0 (no scaling, just rotation) means the copies don't shrink or grow, they just rotate -- you get concentric rings. Each value produces a completely different visual effect from the exact same code structure.
The rotation amount matters too. 0.01 radians gives a tight spiral. 0.05 gives a loose one. Negative rotation spirals the other direction. You can even make the rotation vary with time:
float angle = sin(u_time * 0.1) * 0.02;
Now the spiral direction oscillates slowly. The tunnel breathes, opening and closing. It's mesmerizing. I've lost twenty minutes staring at this kind of thing more than once :-)
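If you want to reason about how fast the tunnel recedes, the per-frame scale compounds geometrically. A small sketch (helper name is mine) computing the half-life of a feature under repeated scaling:

```javascript
// With scale s applied every frame, a feature's size follows s^n.
// Half-life in frames: solve s^n = 0.5  =>  n = log(0.5) / log(s).
function halfLifeFrames(scale) {
  return Math.log(0.5) / Math.log(scale);
}

var frames = halfLifeFrames(0.995);
console.log(frames);      // ~138 frames to shrink to half size
console.log(frames / 60); // ~2.3 seconds at 60 fps
```

That matches what you see on screen: each "ring" of the tunnel takes a couple of seconds to fall halfway to the center.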
Edge detection feedback
Here's a more advanced technique. Apply a Sobel edge detection filter to the previous frame before blending. Lines grow and evolve over time as the edge detector feeds back into itself:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_prevFrame;
void main() {
vec2 uv = gl_FragCoord.xy / u_resolution;
vec2 centered = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
vec2 px = 1.0 / u_resolution;
// sobel edge detection on previous frame
float tl = length(texture2D(u_prevFrame, uv + vec2(-px.x, px.y)).rgb);
float t = length(texture2D(u_prevFrame, uv + vec2(0.0, px.y)).rgb);
float tr = length(texture2D(u_prevFrame, uv + vec2(px.x, px.y)).rgb);
float l = length(texture2D(u_prevFrame, uv + vec2(-px.x, 0.0)).rgb);
float r = length(texture2D(u_prevFrame, uv + vec2(px.x, 0.0)).rgb);
float bl = length(texture2D(u_prevFrame, uv + vec2(-px.x, -px.y)).rgb);
float b = length(texture2D(u_prevFrame, uv + vec2(0.0, -px.y)).rgb);
float br = length(texture2D(u_prevFrame, uv + vec2(px.x, -px.y)).rgb);
float gx = -tl - 2.0*l - bl + tr + 2.0*r + br;
float gy = -tl - 2.0*t - tr + bl + 2.0*b + br;
float edge = sqrt(gx * gx + gy * gy);
// read previous frame normally
vec4 prev = texture2D(u_prevFrame, uv);
// draw something
vec2 pos = vec2(sin(u_time * 0.4) * 0.2, cos(u_time * 0.3) * 0.15);
float d = length(centered - pos) - 0.06;
float shape = smoothstep(0.005, 0.0, d);
// blend: previous fades, edges accumulate, new shape added
vec3 color = prev.rgb * 0.94;
color += vec3(0.4, 0.6, 0.9) * edge * 0.3;
color += vec3(0.9, 0.3, 0.2) * shape;
gl_FragColor = vec4(color, 1.0);
}
The Sobel operator samples a 3x3 neighborhood around each pixel from the previous frame. gx detects horizontal edges, gy detects vertical edges. The magnitude sqrt(gx*gx + gy*gy) gives edge strength. We add this as a blue tint to the current frame.
What happens over time: the circle moves, leaving a red trail. The edge detector finds the boundary of that trail and adds a blue outline. Next frame, that blue outline IS part of the image, so the edge detector finds ITS boundary and adds another outline. Outlines breed outlines. After a dozen frames, the moving circle has spawned a web of fine blue lines radiating outward from everywhere it's been. It looks like nerve fibers growing, or ice crystals forming.
The * 0.94 decay is aggressive here because the edge detection amplifies features -- without strong decay, the whole screen would saturate to white within seconds. You need to balance the feedback gain (how much the edges contribute) against the decay rate. Too much gain + too little decay = white noise. Too little gain + too much decay = you barely see any effect. Finding the sweet spot is the art.
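The kernel itself is easy to sanity-check on the CPU. This sketch applies the same gx/gy weights as the shader to the center of a 3x3 brightness patch (rows listed top to bottom):

```javascript
// Sobel edge magnitude for the center of a 3x3 brightness patch.
// patch[0] is the top row [tl, t, tr]; patch[2] is the bottom row.
// Same weights as the shader's gx and gy.
function sobel(patch) {
  var tl = patch[0][0], t = patch[0][1], tr = patch[0][2];
  var l  = patch[1][0],                  r  = patch[1][2];
  var bl = patch[2][0], b = patch[2][1], br = patch[2][2];
  var gx = -tl - 2 * l - bl + tr + 2 * r + br;
  var gy = -tl - 2 * t - tr + bl + 2 * b + br;
  return Math.sqrt(gx * gx + gy * gy);
}

// A vertical step edge (dark left, bright right) lights up strongly...
console.log(sobel([[0, 1, 1], [0, 1, 1], [0, 1, 1]])); // 4
// ...while a flat region produces no edge at all.
console.log(sobel([[1, 1, 1], [1, 1, 1], [1, 1, 1]])); // 0
```

Flat areas contribute nothing, boundaries contribute a lot -- which is exactly why the feedback keeps redrawing the outlines of its own outlines.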
A reaction-diffusion preview
Feedback loops are the foundation of reaction-diffusion systems. The basic idea: each pixel's next state depends on its neighbors' current state. Here's a simplified two-chemical reaction-diffusion step using the Gray-Scott model:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_prevFrame;
void main() {
vec2 uv = gl_FragCoord.xy / u_resolution;
vec2 px = 1.0 / u_resolution;
// sample center and neighbors
vec4 c = texture2D(u_prevFrame, uv);
vec4 n = texture2D(u_prevFrame, uv + vec2(0.0, px.y));
vec4 s = texture2D(u_prevFrame, uv - vec2(0.0, px.y));
vec4 e = texture2D(u_prevFrame, uv + vec2(px.x, 0.0));
vec4 w = texture2D(u_prevFrame, uv - vec2(px.x, 0.0));
// laplacian (how different this pixel is from its neighbors)
vec4 lap = (n + s + e + w) - 4.0 * c;
// treat r channel as chemical A, g channel as chemical B
float a = c.r;
float b = c.g;
// Gray-Scott parameters
float feed = 0.037;
float kill = 0.06;
float Da = 1.0;
float Db = 0.5;
float dt = 1.0;
// reaction-diffusion equations
float reaction = a * b * b;
float newA = a + (Da * lap.r - reaction + feed * (1.0 - a)) * dt;
float newB = b + (Db * lap.g + reaction - (kill + feed) * b) * dt;
// seed: add some chemical B near center on first few frames
vec2 centered = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
if (u_time < 0.5) {
float seed = smoothstep(0.1, 0.0, length(centered));
newA = mix(newA, 1.0, seed * 0.5);
newB = mix(newB, 1.0, seed * 0.3);
}
gl_FragColor = vec4(newA, newB, 0.0, 1.0);
}
This won't look like much on its own because we're storing raw chemical concentrations as color channels. You'd need a separate pass that reads the concentrations and maps them to actual colors. We'll build a full reaction-diffusion system in a future episode -- this is just showing how the feedback structure enables it.
The key insight: each pixel reads its four neighbors from the previous frame (the laplacian measures how different the center is from its surroundings), applies the reaction equations, and writes the new concentrations. That's feedback. The output of frame N becomes the input of frame N+1, and the system evolves. Spots form. Fingers branch. Patterns emerge from uniform initial conditions. All because of the loop.
Mouse-driven feedback painting
Let's build something interactive. A canvas where your mouse strokes leave marks that slowly dissolve, smear, and evolve through feedback. A living sketchpad:
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
uniform vec2 u_mouse;
uniform sampler2D u_prevFrame;
float hash(vec2 p) {
return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}
void main() {
vec2 uv = gl_FragCoord.xy / u_resolution;
vec2 centered = (gl_FragCoord.xy - u_resolution * 0.5) / u_resolution.y;
vec2 mouse = (u_mouse - u_resolution * 0.5) / u_resolution.y;
// slight noise displacement for organic smearing
float nx = (hash(uv * 50.0 + u_time) - 0.5) * 0.002;
float ny = (hash(uv * 50.0 + u_time + vec2(43.0, 71.0)) - 0.5) * 0.002;
vec4 prev = texture2D(u_prevFrame, uv + vec2(nx, ny));
// mouse brush
float d = length(centered - mouse);
float brush = smoothstep(0.04, 0.0, d);
// color cycles with time
vec3 brushColor = vec3(
sin(u_time * 0.5) * 0.4 + 0.6,
sin(u_time * 0.5 + 2.094) * 0.35 + 0.5,
sin(u_time * 0.5 + 4.189) * 0.4 + 0.6
);
vec3 color = prev.rgb * 0.985 + brushColor * brush;
gl_FragColor = vec4(color, 1.0);
}
Move your mouse and you paint glowing color onto the canvas. The noise displacement makes your strokes slowly dissolve into organic smears rather than fading uniformly. The color shifts over time, so strokes made a minute ago are a different hue than strokes made now. Leave the canvas alone and the whole image slowly evaporates -- or just keeps drifting slightly as the noise pushes pixels around.
This is genuinely fun to play with. I've used variations of this as visual instruments at small live coding events -- project it on a wall, hand someone a mouse, and let them paint with light that slowly melts. The feedback does the interesting work. The human just provides the initial impulse.
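One wiring detail the shader glosses over: it expects u_mouse in the same pixel coordinates as gl_FragCoord, whose origin is the bottom-left, while mouse events report clientY from the top. A small helper (the name and wiring are mine, not part of the episode's template) handles the flip:

```javascript
// gl_FragCoord's origin is the BOTTOM-left of the canvas, but mouse
// events measure clientY from the TOP. Flip Y before filling u_mouse.
function mouseToShaderCoords(clientX, clientY, width, height) {
  return [clientX, height - clientY];
}

// In the page you would wire it up roughly like this:
//   canvas.addEventListener('mousemove', function (e) {
//     var m = mouseToShaderCoords(e.clientX, e.clientY,
//                                 canvas.width, canvas.height);
//     mouseX = m[0]; mouseY = m[1];
//   });
// and pass mouseX/mouseY to u_mouse with gl.uniform2f each frame.

console.log(mouseToShaderCoords(100, 50, 800, 600)); // [ 100, 550 ]
```

Forget the flip and the brush paints mirrored vertically from where your cursor actually is -- a classic WebGL gotcha.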
Performance notes
Feedback loops are surprisingly cheap on the GPU. The FBO switch (binding a different framebuffer) is a well-optimized operation -- GPUs do this constantly for shadow maps, reflections, post-processing. It's the bread and butter of rendering pipelines.
The expensive part is texture reads. Each texture2D() call is a memory fetch. The Sobel filter example does 9 texture reads per pixel -- that's 9 memory lookups for every one of the two million pixels on a 1080p screen. On a decent GPU, that's still 60fps. On a phone, it might drop to 30. On integrated laptop graphics, somewhere in between.
The biggest performance trap with feedback is gl.readPixels(). If you try to read the framebuffer contents back to JavaScript (to save an image, or to use pixel data in CPU code), that forces the GPU to finish all pending work, copy the framebuffer contents across the GPU-to-CPU bus, and wait. It's agonizingly slow -- easily 10-50 milliseconds for a single frame. Never do it inside the animation loop. If you need to save a frame, do it once on a keypress, not every frame. Keep the feedback loop entirely on the GPU.
Another thing to watch: precision accumulation. Each frame multiplies by the decay factor. After thousands of frames, very dark pixels might not reach true zero -- they hover at some tiny value due to 8-bit texture precision (UNSIGNED_BYTE textures have 256 levels per channel). This usually doesn't matter visually, but if you notice a faint haze that never quite disappears, that's why. You can fix it by adding a tiny max(color - 0.004, 0.0) threshold, but honestly, a slight persistent haze often looks good. Adds atmosphere.
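You can even predict where that haze settles. Assuming the GPU rounds to the nearest 8-bit level when writing the texture (a simplification -- exact quantization behavior varies by hardware), repeated multiplication by the decay factor gets stuck at the first integer level n where round(n * decay) equals n:

```javascript
// Simulate 8-bit quantized decay: each frame the stored level is
// multiplied by `decay` and rounded back to an integer 0..255.
// Assumes round-to-nearest quantization; hardware may differ.
function decayFloor(decay) {
  var level = 255;
  for (var i = 0; i < 1000; i++) {
    level = Math.round(level * decay);
  }
  return level;
}

console.log(decayFloor(0.97)); // stalls at level 16 (~0.063 brightness)
console.log(decayFloor(0.90)); // even strong decay stalls, at level 5
```

Note that neither ever reaches zero by multiplication alone -- which is why the max(color - 0.004, 0.0) trick works: subtracting roughly one 8-bit level per frame pushes the value through the stall.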
Video feedback art history
What we're doing has deep roots in analog video art. Nam June Paik, the father of video art, discovered video feedback in the 1960s -- point a camera at a monitor displaying the camera's feed. The round-trip delay (camera captures, signal processes, monitor displays, camera captures again) creates spiraling, fractal patterns. No computer involved. Just electricity and glass.
The Sandin Image Processor (1973) was one of the first tools built specifically for real-time video manipulation. Dan Sandin at the University of Illinois designed analog circuitry that could color-shift, distort, and feedback video signals. Artists like Tom DeFanti and Phil Morton used it to create some of the earliest electronic art installations.
What's changed since then is control. Paik's feedback was analog -- beautiful but unpredictable. We can set the rotation to exactly 0.01 radians, the decay to exactly 0.97, the displacement to exactly 0.003 pixels. We get the same emergent complexity but with reproducible parameters. You can save a set of values and get the exact same feedback pattern tomorrow. That's the digital advantage.
Where feedback leads
Feedback is the foundation for a huge chunk of advanced shader work. Post-processing effects (blur, bloom, glow) work by rendering a scene to an FBO and then processing that texture. Multi-pass shaders (compute something in one pass, use it in the next) use the same FBO ping-pong. Particle systems on the GPU can store particle positions in a texture and update them via feedback each frame.
The reaction-diffusion preview we looked at earlier -- in a future episode we'll build that into a full system with proper coloring, interactive seeding, and parameter control. It's essentially a specialized feedback loop where the "shader" is a physics simulation. Turing patterns, coral growth, animal spots -- all from the same structure of "read neighbors, compute rules, write output, feed back."
And color manipulation in shaders -- how to bend, shift, and transform colors in ways that would be impossible pixel-by-pixel on the CPU -- will open up even more possibilities when combined with feedback. Imagine feedback trails that shift hue as they fade, or edge detection where the edges change color based on their angle. The techniques stack.
For now, the important thing is that you understand the infrastructure: FBOs, ping-pong buffers, the sampler2D uniform for the previous frame. That's the machinery. What you do with it is up to you. Paint with it. Spiral it. Edge-detect it. Smear it. The loop will do the rest :-)
What it comes down to...
- Feedback loops give shaders memory -- the output of one frame becomes the input for the next
- WebGL can't read and write to the same texture, so we use ping-pong buffers: two FBOs alternating roles each frame
- Simplest effect: multiply previous frame by a decay factor (0.97 = long trails, 0.90 = short trails)
- Smearing: displace the UV when sampling the previous frame. Constant offset = directional smear, noise offset = organic dissolve
- Kaleidoscopic feedback: rotate and scale the previous frame before blending. Scale < 1.0 = inward spiral, rotation = tunnel effect
- Edge detection + feedback: Sobel filter on the previous frame causes outlines to breed outlines, generating network-like structures
- Reaction-diffusion is a specialized feedback loop -- each pixel's next state depends on its neighbors' current state
- Keep feedback entirely on the GPU -- never use gl.readPixels() inside the animation loop
- The CLAMP_TO_EDGE texture wrapping prevents artifacts at screen borders
- Analog video artists (Nam June Paik, Dan Sandin) discovered these same patterns in the 1960s-70s using cameras pointed at monitors
Cheers! Thanks for reading.