
Learn Creative Coding (#30) - Mini-Project: Generative NFT Collection

This is it. The big one. Everything from Phase 4 -- seeds, composition, typography, texture, color palettes, blockchain output -- comes together in this episode. We're building a complete generative art collection from scratch. Not a single-output sketch. Not a tech demo. A real collection where every seed produces a cohesive, interesting, distinct piece of art.
I'm calling it "Drift." Flow field curves with layered texture, palette variation, and compositional traits. The concept is simple -- abstract landscapes of flowing curves driven by seeded noise. But the output space is huge. Same algorithm, infinite variations. That's the whole point of generative collections, right?
If you've followed along since episode 23 where we talked about what makes art generative, through seeds in 24, composition in 25, typography in 26, texture in 27, color systems in 28, and blockchain output formats in 29 -- you already have every building block. This episode is the assembly step: taking those individual tools and wiring them into one coherent system. It's the most satisfying part of any project, honestly. When the pieces click together and suddenly there's a real thing :-)
What Drift actually does
Each piece varies across five independent traits:
- Palette -- warm sunset, cool ocean, natural forest, neon, monochrome, earth tones. Six options with different rarity weights.
- Density -- how many curves fill the canvas. Sparse pieces have 80-150 curves, dense ones go up to 900.
- Flow style -- smooth (gentle, sweeping), turbulent (tight, chaotic), or layered (each curve gets its own noise offset, creating a multi-layer depth effect).
- Texture -- clean, subtle grain, or heavy grain applied to the final output.
- Composition -- full field (curves everywhere), circular mask (curves concentrated in a circle), or split (one half dense, other half sparse).
Five traits with multiple values each. That's a big combinatorial space from a very focused visual concept. And that's exactly what makes a good collection -- one clear idea (flowing curves) with enough variation (trait combinations) that each piece feels unique without feeling random.
The name "Drift" works on two levels. The curves drift across the canvas following the flow field. And each piece drifts slightly from the same core system -- same DNA, different expression.
The PRNG
We built seeded randomness in episode 24. Here's the full sfc32 implementation for Drift, with utility functions we'll need throughout:
function sfc32(a, b, c, d) {
  return function() {
    a >>>= 0; b >>>= 0; c >>>= 0; d >>>= 0;
    let t = (a + b) | 0;
    a = b ^ b >>> 9;
    b = c + (c << 3) | 0;
    c = (c << 21 | c >>> 11);
    d = d + 1 | 0;
    t = t + d | 0;
    c = c + t | 0;
    return (t >>> 0) / 4294967296;
  };
}

function createRNG(seed) {
  let rng = sfc32(seed, seed ^ 0xDEADBEEF, seed ^ 0xCAFEBABE, seed ^ 0x12345678);
  // warm up -- discard first 15 values
  for (let i = 0; i < 15; i++) rng();
  return {
    random: function(min, max) {
      if (min === undefined) return rng();
      if (max === undefined) { max = min; min = 0; }
      return min + rng() * (max - min);
    },
    randomInt: function(min, max) {
      return Math.floor(min + rng() * (max - min));
    },
    pick: function(arr) {
      return arr[Math.floor(rng() * arr.length)];
    },
    weighted: function(opts) {
      let total = opts.reduce((s, o) => s + o.w, 0);
      let r = rng() * total;
      for (let o of opts) { r -= o.w; if (r <= 0) return o.v; }
      return opts[opts.length - 1].v;
    },
    boolean: function(prob) {
      return rng() < (prob || 0.5);
    }
  };
}
The three XOR constants (0xDEADBEEF, 0xCAFEBABE, 0x12345678) derive the other three register values from the raw seed, spreading the initial state across sfc32's four internal registers. The 15-round warmup discards the early output where sfc32 hasn't fully mixed yet. After that, the output is statistically solid -- it passes the standard randomness tests. We talked about why this matters in episode 24 (uniform distribution, no visible patterns, no correlation between consecutive values).
The utility functions wrap the raw rng() into practical helpers. random(min, max) for ranges. randomInt for integers. pick grabs a random array element. weighted does weighted selection (same pattern from episode 28's palette system). boolean for coin flips with adjustable probability.
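A quick way to convince yourself the wrapper behaves: build two RNGs from the same seed and check they agree. A standalone sketch, restating a trimmed-down sfc32 + createRNG from above (only random and pick) so it runs on its own:

```javascript
// Trimmed-down restatement of sfc32 + createRNG from above, for a standalone demo.
function sfc32(a, b, c, d) {
  return function() {
    a >>>= 0; b >>>= 0; c >>>= 0; d >>>= 0;
    let t = (a + b) | 0;
    a = b ^ b >>> 9;
    b = c + (c << 3) | 0;
    c = (c << 21 | c >>> 11);
    d = d + 1 | 0;
    t = t + d | 0;
    c = c + t | 0;
    return (t >>> 0) / 4294967296;
  };
}
function createRNG(seed) {
  let rng = sfc32(seed, seed ^ 0xDEADBEEF, seed ^ 0xCAFEBABE, seed ^ 0x12345678);
  for (let i = 0; i < 15; i++) rng(); // warm-up
  return {
    random: function(min, max) {
      if (min === undefined) return rng();
      if (max === undefined) { max = min; min = 0; }
      return min + rng() * (max - min);
    },
    pick: function(arr) { return arr[Math.floor(rng() * arr.length)]; }
  };
}

// Same seed, same sequence -- the property the whole collection depends on.
let A = createRNG(42), B = createRNG(42);
console.log(A.random() === B.random());                           // true
console.log(A.pick(['x', 'y', 'z']) === B.pick(['x', 'y', 'z'])); // true
```

Two independent RNG objects, same seed, identical output streams -- exactly what the determinism check later in this episode relies on.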
Seeded Perlin noise
Same noise implementation from episode 12, but seeded through our RNG instead of using Math.random():
function createNoise(R) {
  let perm = [];
  for (let i = 0; i < 256; i++) perm[i] = i;
  for (let i = 255; i > 0; i--) {
    let j = Math.floor(R.random() * (i + 1));
    [perm[i], perm[j]] = [perm[j], perm[i]];
  }
  for (let i = 0; i < 256; i++) perm[256 + i] = perm[i];

  function fade(t) { return t * t * t * (t * (t * 6 - 15) + 10); }

  function grad(h, x, y) {
    let a = h & 3;
    let u = a < 2 ? x : y;
    let v = a < 2 ? y : x;
    return ((a & 1) ? -u : u) + ((a & 2) ? -v : v);
  }

  return function(x, y) {
    let xi = Math.floor(x) & 255, yi = Math.floor(y) & 255;
    let xf = x - Math.floor(x), yf = y - Math.floor(y);
    let u = fade(xf), v = fade(yf);
    let aa = perm[perm[xi] + yi], ab = perm[perm[xi] + yi + 1];
    let ba = perm[perm[xi + 1] + yi], bb = perm[perm[xi + 1] + yi + 1];
    let x1 = grad(aa, xf, yf) + (grad(ba, xf - 1, yf) - grad(aa, xf, yf)) * u;
    let x2 = grad(ab, xf, yf - 1) + (grad(bb, xf - 1, yf - 1) - grad(ab, xf, yf - 1)) * u;
    return x1 + (x2 - x1) * v;
  };
}
The critical difference from a standard Perlin noise implementation: the permutation table is shuffled using our seeded RNG, not Math.random(). That means the same seed produces the same noise field every time. Deterministic noise is non-negotiable for blockchain art -- we covered why in episode 29 (same input, same output, every browser, every machine, forever).
The fade function is Ken Perlin's improved smoothstep (6t^5 - 15t^4 + 10t^3). The grad function maps hash values to gradient vectors. If you want to understand the math more deeply, episode 12 walks through it step by step.
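The fade curve is easy to sanity-check in isolation. It maps [0,1] to [0,1] with zero first and second derivatives at both endpoints, which is what keeps the noise free of visible grid artifacts:

```javascript
// Perlin's improved fade curve: 6t^5 - 15t^4 + 10t^3.
function fade(t) { return t * t * t * (t * (t * 6 - 15) + 10); }

console.log(fade(0));   // 0 -- pinned at the ends...
console.log(fade(1));   // 1
console.log(fade(0.5)); // 0.5 -- ...and symmetric around the midpoint
```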
Trait generation
This is where the collection's character is defined. Trait weights control rarity -- lower weight means rarer, more collectable:
function generateTraits(R) {
  let traits = {};
  traits.palette = R.weighted([
    { v: 'sunset', w: 3 },
    { v: 'ocean', w: 3 },
    { v: 'forest', w: 2 },
    { v: 'neon', w: 1 },
    { v: 'monochrome', w: 2 },
    { v: 'earth', w: 2 },
  ]);
  traits.density = R.weighted([
    { v: 'sparse', w: 2 },
    { v: 'medium', w: 5 },
    { v: 'dense', w: 3 },
  ]);
  traits.flow = R.weighted([
    { v: 'smooth', w: 4 },
    { v: 'turbulent', w: 3 },
    { v: 'layered', w: 2 },
  ]);
  traits.texture = R.weighted([
    { v: 'clean', w: 3 },
    { v: 'grain', w: 4 },
    { v: 'heavyGrain', w: 2 },
  ]);
  traits.composition = R.weighted([
    { v: 'full', w: 4 },
    { v: 'circular', w: 3 },
    { v: 'split', w: 2 },
  ]);
  return traits;
}
Look at the weights. Neon palette has weight 1 out of a total of 13 -- roughly 7.7% of outputs. That makes it the rarest palette. Medium density has weight 5 out of 10 -- 50% of outputs. Common, expected, the baseline. Designing these distributions is a creative decision as much as a technical one. We talked about this exact pattern back in episode 28 when building the palette system -- weighted selection is the core mechanism for controlling variety in generative systems.
One important detail: traits are generated BEFORE anything is drawn. The trait values drive the rendering. This means the first few calls to the RNG are always consumed by trait generation, and the remaining calls always start from the same internal state for a given trait combination. That's how you get consistency -- same traits, same art, regardless of what the traits happen to be.
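Since rarity follows directly from the weights, you can compute the theoretical percentages without rendering anything. A small sketch with a hypothetical rarityTable helper, fed the palette weights from generateTraits above:

```javascript
// Hypothetical helper: turn a weight table into expected rarity percentages.
function rarityTable(opts) {
  const total = opts.reduce((s, o) => s + o.w, 0);
  const out = {};
  for (const o of opts) out[o.v] = +(100 * o.w / total).toFixed(1);
  return out;
}

// Same palette weights as generateTraits (total = 13).
const palettes = rarityTable([
  { v: 'sunset', w: 3 }, { v: 'ocean', w: 3 }, { v: 'forest', w: 2 },
  { v: 'neon', w: 1 }, { v: 'monochrome', w: 2 }, { v: 'earth', w: 2 },
]);
console.log(palettes.neon);   // 7.7 -- rarest palette
console.log(palettes.sunset); // 23.1
```

Useful for documenting intended rarities before you mint, and for comparing against the sampled distribution checker later in this episode.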
Palette definitions
Six palettes, each with five colors. The first color is always the background:
const PALETTES = {
  sunset: ['#1a1a2e', '#e94560', '#f5a623', '#ffd460', '#16213e'],
  ocean: ['#0b132b', '#1c2541', '#3a506b', '#5bc0be', '#6fffe9'],
  forest: ['#1b2d1b', '#2d5a27', '#4a7c3f', '#8cb369', '#f4e285'],
  neon: ['#0a0a0a', '#ff006e', '#8338ec', '#3a86ff', '#ffbe0b'],
  monochrome: null, // generated from seed
  earth: ['#1c1107', '#582f0e', '#7f4f24', '#936639', '#c2956e'],
};

function getPalette(name, R) {
  if (name === 'monochrome') {
    let hue = R.random(0, 360);
    let sat = R.random(15, 40);
    return [
      hsl(hue, sat, 5),
      hsl(hue, sat, 20),
      hsl(hue, sat, 40),
      hsl(hue, sat, 60),
      hsl(hue, sat, 80),
    ];
  }
  return [...PALETTES[name]];
}

function hsl(h, s, l) {
  return `hsl(${h}, ${s}%, ${l}%)`;
}
The monochrome palette is special -- it's generated dynamically from the seed. A random hue and low saturation (15-40%) produces muted tonal variations. Could be a dusty blue, a warm gray, a faded green. The seed decides. Every other palette is hardcoded because those specific color combinations have been tested and they work. Remember from episode 28 -- curated palettes are often more reliable than purely algorithmic ones.
The spread operator ([...PALETTES[name]]) creates a copy so modifications downstream don't corrupt the original. Defensive programming. Small thing, saves you from a really annoying bug where piece #47 looks wrong because piece #46 mutated the shared palette array.
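Here's that bug in miniature -- a sketch with hypothetical getPaletteUnsafe/getPaletteSafe helpers and a trimmed-down palette table:

```javascript
// A trimmed-down palette table for the demo.
const PALETTES = { earth: ['#1c1107', '#582f0e'] };

function getPaletteUnsafe(name) { return PALETTES[name]; }      // returns the shared array
function getPaletteSafe(name)   { return [...PALETTES[name]]; } // returns a copy

let p1 = getPaletteUnsafe('earth');
p1[0] = '#ff0000';              // one render tweaks its background...
console.log(PALETTES.earth[0]); // '#ff0000' -- the shared original is corrupted

PALETTES.earth[0] = '#1c1107';  // reset for the safe version
let p2 = getPaletteSafe('earth');
p2[0] = '#ff0000';
console.log(PALETTES.earth[0]); // '#1c1107' -- original untouched
```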
The rendering engine
This is the core. Everything flows (pun intended) from the noise field:
function render(canvas, seed) {
  let ctx = canvas.getContext('2d');
  let W = canvas.width;
  let H = canvas.height;
  let S = Math.min(W, H);
  let R = createRNG(seed);
  let noise = createNoise(R);
  let traits = generateTraits(R);
  let palette = getPalette(traits.palette, R);

  // background
  ctx.fillStyle = palette[0];
  ctx.fillRect(0, 0, W, H);

  // flow parameters from traits
  let numCurves, noiseScale, curveLength, stepSize;
  switch (traits.density) {
    case 'sparse': numCurves = R.randomInt(80, 150); break;
    case 'medium': numCurves = R.randomInt(200, 400); break;
    case 'dense': numCurves = R.randomInt(500, 900); break;
  }
  switch (traits.flow) {
    case 'smooth':
      noiseScale = R.random(0.001, 0.004);
      curveLength = R.randomInt(80, 150);
      stepSize = R.random(2, 4);
      break;
    case 'turbulent':
      noiseScale = R.random(0.005, 0.012);
      curveLength = R.randomInt(40, 80);
      stepSize = R.random(1.5, 3);
      break;
    case 'layered':
      noiseScale = R.random(0.002, 0.006);
      curveLength = R.randomInt(60, 120);
      stepSize = R.random(2, 3.5);
      break;
  }

  // composition mask
  let mask = null;
  if (traits.composition === 'circular') {
    mask = { type: 'circle', cx: W / 2, cy: H / 2, r: S * 0.4 };
  } else if (traits.composition === 'split') {
    let splitAngle = R.random(0, Math.PI);
    mask = { type: 'split', angle: splitAngle, cx: W / 2, cy: H / 2 };
  }

  // draw curves
  let drawColors = palette.slice(1); // skip background color
  for (let i = 0; i < numCurves; i++) {
    let x = R.random(0, W);
    let y = R.random(0, H);
    // check composition mask
    if (mask) {
      if (mask.type === 'circle') {
        let dx = x - mask.cx;
        let dy = y - mask.cy;
        if (dx * dx + dy * dy > mask.r * mask.r) {
          if (R.boolean(0.85)) continue;
        }
      } else if (mask.type === 'split') {
        let side = Math.cos(mask.angle) * (x - mask.cx) +
                   Math.sin(mask.angle) * (y - mask.cy);
        if (side < 0 && R.boolean(0.7)) continue;
      }
    }
    let color = R.pick(drawColors);
    let alpha = R.random(0.05, 0.35);
    let lineW = R.random(0.5, 2.5) * (S / 800);
    ctx.beginPath();
    ctx.moveTo(x, y);
    let noiseOffset = traits.flow === 'layered' ? R.random(0, 100) : 0;
    for (let s = 0; s < curveLength; s++) {
      let angle = noise(x * noiseScale + noiseOffset, y * noiseScale + noiseOffset);
      angle *= Math.PI * (traits.flow === 'turbulent' ? 6 : 4);
      x += Math.cos(angle) * stepSize;
      y += Math.sin(angle) * stepSize;
      ctx.lineTo(x, y);
      if (x < -10 || x > W + 10 || y < -10 || y > H + 10) break;
    }
    ctx.strokeStyle = color;
    ctx.globalAlpha = alpha;
    ctx.lineWidth = lineW;
    ctx.lineCap = 'round';
    ctx.stroke();
  }
  ctx.globalAlpha = 1;

  // texture overlay
  if (traits.texture !== 'clean') {
    let intensity = traits.texture === 'heavyGrain' ? 35 : 18;
    let imageData = ctx.getImageData(0, 0, W, H);
    let data = imageData.data;
    for (let i = 0; i < data.length; i += 4) {
      let grain = (R.random() - 0.5) * intensity;
      data[i] += grain;
      data[i + 1] += grain;
      data[i + 2] += grain;
    }
    ctx.putImageData(imageData, 0, 0);
  }

  // subtle vignette
  let vignette = ctx.createRadialGradient(
    W / 2, H / 2, S * 0.25,
    W / 2, H / 2, S * 0.75
  );
  vignette.addColorStop(0, 'rgba(0,0,0,0)');
  vignette.addColorStop(1, 'rgba(0,0,0,0.25)');
  ctx.fillStyle = vignette;
  ctx.fillRect(0, 0, W, H);

  return traits;
}
That's a lot of code but it breaks down into clear stages. Background fill. Parameter selection from traits. Composition masking. Curve drawing loop. Texture. Vignette. Let me walk through the interesting bits.
The mask system. The circular mask uses a basic distance check -- if a starting point is outside the circle radius, skip it 85% of the time. Not 100%, because a few curves leaking past the boundary looks organic. The split mask uses dot-product math (the same trig from episode 13) to determine which side of a line each point is on. Points on the "off" side get skipped 70% of the time. Again, not a hard cutoff -- soft edges look more natural than sharp ones.
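The side-of-line test is easier to see stripped out of the render loop. A standalone sketch of the same dot-product math (sideOfSplit is a hypothetical name):

```javascript
// Sign of the dot product of (point - center) with the split direction
// tells you which side of the line a point falls on.
function sideOfSplit(x, y, angle, cx, cy) {
  return Math.cos(angle) * (x - cx) + Math.sin(angle) * (y - cy);
}

// Vertical split line (angle 0) through the center of an 800px canvas:
console.log(sideOfSplit(500, 400, 0, 400, 400) > 0); // true -- right of the line
console.log(sideOfSplit(300, 400, 0, 400, 400) < 0); // true -- left of the line
```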
The flow field. Each curve starts at a random position, then steps forward repeatedly. At each step, the noise function gives an angle, and we move in that direction by stepSize pixels. The noiseScale parameter controls how quickly the flow direction changes across space. Low noiseScale (0.001) = gentle, sweeping curves. High noiseScale (0.012) = chaotic, turbulent paths. This is the same flow field principle we used in episode 12, but now it's driving the entire visual output.
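The stepping loop itself doesn't care where the angle comes from. Here's the same march in isolation, with a smooth trig function standing in for the Perlin noise so the sketch runs without the noise code (field and tracePath are hypothetical names):

```javascript
// Hypothetical stand-in field: smooth, range roughly [-1, 1] like the noise.
function field(x, y, noiseScale) {
  return Math.sin(x * noiseScale) * Math.cos(y * noiseScale);
}

// The same march as the render loop: sample an angle, step, repeat.
function tracePath(x, y, noiseScale, stepSize, steps) {
  const pts = [[x, y]];
  for (let s = 0; s < steps; s++) {
    const angle = field(x, y, noiseScale) * Math.PI * 4; // 'smooth' flow scaling
    x += Math.cos(angle) * stepSize;
    y += Math.sin(angle) * stepSize;
    pts.push([x, y]);
  }
  return pts;
}

const path = tracePath(100, 100, 0.002, 3, 50);
console.log(path.length); // 51 -- start point plus 50 steps
```

Each step moves exactly stepSize pixels in the sampled direction, which is why low noiseScale values (direction barely changing between steps) produce long sweeping arcs.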
The layered trick. When traits.flow is 'layered', each curve gets its own noiseOffset -- a random value added to the noise coordinates. This means each curve samples from a different region of the noise field, so they flow in different directions even when they're close together. It creates a multi-layer depth effect where curves seem to exist on different planes. Simple trick, big visual impact.
Line width scaling. R.random(0.5, 2.5) * (S / 800) scales line width relative to canvas size. On an 800px canvas, lines are 0.5-2.5px. On a 1600px canvas, they're 1-5px. Resolution independence, as we discussed in episode 29. Everything scales proportionally.
The alpha range. 0.05 to 0.35 means every curve is semi-transparent. Where curves overlap, colors accumulate. Dense areas get deeper, richer color. Sparse areas stay light and airy. This is basically the same layering principle as the watercolor technique from episode 27 -- many translucent layers building up density. The overlap IS the visual interest.
Grain. Same pixel manipulation approach from episodes 10 and 27. Loop through every pixel, add random brightness variation. Intensity 18 for subtle grain (barely visible, just breaks up the digital cleanness), intensity 35 for heavy grain (visible texture, moody film stock vibe). Notice the grain uses the seeded RNG, not Math.random(). Deterministic grain. Same seed, same grain pattern, every time.
The complete HTML file
Everything goes in one file. No imports, no CDN, no fetch calls. Self-contained, as episode 29 requires:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  html, body { margin: 0; padding: 0; overflow: hidden; background: #000; }
  canvas { display: block; position: absolute; top: 50%; left: 50%;
           transform: translate(-50%, -50%); }
</style>
</head>
<body>
<canvas id="c"></canvas>
<script>
// === paste sfc32 + createRNG here ===
// === paste createNoise here ===
// === paste generateTraits here ===
// === paste PALETTES + getPalette + hsl here ===
// === paste render here ===

var canvas = document.getElementById('c');

// get seed from fxhash or URL parameter
var seed;
if (typeof fxhash !== 'undefined') {
  seed = Array.from(fxhash).reduce(function(a, c) {
    return ((a << 5) - a + c.charCodeAt(0)) | 0;
  }, 0);
  seed = Math.abs(seed);
} else {
  var params = new URLSearchParams(window.location.search);
  seed = parseInt(params.get('seed')) || Math.floor(Math.random() * 999999);
}

function resize() {
  var size = Math.min(window.innerWidth, window.innerHeight);
  canvas.width = size;
  canvas.height = size;
}

function go() {
  resize();
  var traits = render(canvas, seed);
  if (typeof fxhash !== 'undefined') {
    window.$fxhashFeatures = {
      'Palette': traits.palette,
      'Density': traits.density,
      'Flow': traits.flow,
      'Texture': traits.texture,
      'Composition': traits.composition
    };
  }
  if (typeof fxpreview === 'function') fxpreview();
  console.log('Seed:', seed, 'Traits:', JSON.stringify(traits));
}

go();
window.addEventListener('resize', go);

// press 's' to save PNG
document.addEventListener('keydown', function(e) {
  if (e.key === 's') {
    var link = document.createElement('a');
    link.download = 'drift-' + seed + '.png';
    link.href = canvas.toDataURL('image/png');
    link.click();
  }
});
</script>
</body>
</html>
The seed comes from fxhash if we're on the platform, or from a URL parameter for local testing, or falls back to a random seed. The go() function handles resize + re-render -- same seed, same PRNG sequence, same art at any resolution. The fxpreview() call tells fxhash to capture the preview thumbnail. And pressing 's' downloads the current render as a PNG.
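The fxhash-string-to-integer hash from the script can be pulled out and tested on its own (hashToSeed is a hypothetical name; the input string is just an example):

```javascript
// Classic shift-and-subtract string hash, folded to a 32-bit int, then abs().
function hashToSeed(str) {
  let a = 0;
  for (const ch of str) a = ((a << 5) - a + ch.charCodeAt(0)) | 0;
  return Math.abs(a);
}

const s = hashToSeed('ooExampleHash123');
console.log(Number.isInteger(s) && s >= 0);        // true -- always a non-negative integer
console.log(hashToSeed('ooExampleHash123') === s); // true -- deterministic
```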
The comments say "paste X here" because in the actual file, you'd inline all the code from above. No separate files, no modules, no imports. One giant script block. It looks messy in a code editor but that's what the platform requires. Self-contained means self-contained.
Testing with a gallery
Before minting anything, you want to see many outputs at once. A gallery of 30+ seeds, rendered as thumbnails. This is your quality assurance step:
<body style="background:#111; display:flex; flex-wrap:wrap; gap:8px; padding:8px;">
<script>
// include all the Drift code...
for (let s = 0; s < 30; s++) {
  let c = document.createElement('canvas');
  c.width = 300;
  c.height = 300;
  c.style.cursor = 'pointer';
  c.title = 'Seed: ' + s;
  c.addEventListener('click', function() {
    window.open('index.html?seed=' + s, '_blank');
  });
  document.body.appendChild(c);
  render(c, s);
}
</script>
</body>
30 thumbnails. Click any one to open it full-size. Scan them. Are there duds? Pieces where the curves all bunch up in one corner? Seeds where the color combination is ugly? Compositions that feel empty or overcrowded?
This is the curation step from episode 23 -- generate many, select the good ones. But for a collection, you can't curate individual outputs. You need the SYSTEM to produce good results consistently. If seed 14 is a dud, don't just skip it -- figure out WHY and adjust the parameters until all 30 seeds look interesting. Then test 50. Then 100.
Verifying determinism
Before trusting your collection, verify that the same seed always produces the same output. We talked about determinism traps in episode 29 (object iteration order, float accumulation, font rendering). Here's a quick sanity check:
function verifyDeterminism(seed) {
  let c1 = document.createElement('canvas');
  let c2 = document.createElement('canvas');
  c1.width = c2.width = 200;
  c1.height = c2.height = 200;
  render(c1, seed);
  render(c2, seed);
  let d1 = c1.getContext('2d').getImageData(0, 0, 200, 200).data;
  let d2 = c2.getContext('2d').getImageData(0, 0, 200, 200).data;
  for (let i = 0; i < d1.length; i++) {
    if (d1[i] !== d2[i]) {
      console.log('MISMATCH at pixel', Math.floor(i / 4), 'channel', i % 4);
      return false;
    }
  }
  console.log('Seed', seed, '- deterministic');
  return true;
}

// run across multiple seeds
for (let s = 0; s < 20; s++) verifyDeterminism(s);
Render the same seed twice to separate canvases, compare every pixel. If any pixel differs, you have a non-determinism bug somewhere. Run this for 20 seeds to be sure. This catches the subtle cases -- maybe seed 7 triggers a code path that uses Date.now() or Math.random() somewhere you forgot about.
Displaying trait metadata
For local testing, it helps to show the traits visually. Overlay them on the canvas so you can see what the system picked:
function showTraits(ctx, traits, W, H) {
  ctx.globalAlpha = 1;
  ctx.font = (W * 0.018) + 'px monospace';
  ctx.fillStyle = '#fff';
  ctx.shadowColor = '#000';
  ctx.shadowBlur = 4;
  let lines = Object.keys(traits).map(function(k) {
    return k + ': ' + traits[k];
  });
  for (let i = 0; i < lines.length; i++) {
    ctx.fillText(lines[i], W * 0.03, H * 0.05 + i * W * 0.025);
  }
  ctx.shadowBlur = 0;
}
Call this after render() during development. Remove it before minting -- collectors don't want debug overlay on their art. The font size scales with canvas width (same resolution-independence principle), and the text shadow ensures readability against any background color.
Refining the parameters
After my first gallery test, I noticed a few things:
Sparse pieces with smooth flow sometimes looked too empty -- barely any visual content. The fix: even in sparse mode, guarantee at least 80 curves. And increase the curve length so each one contributes more visual weight.
Turbulent flow with the neon palette was overwhelming -- too much visual noise in bright colors. The fix: reduce the alpha range for turbulent flow. More transparent curves, less visual density.
Heavy grain at small canvas sizes (300px thumbnails) looked like compression artifacts. The fix: scale grain intensity with canvas size. intensity * (S / 800) so grain is proportional.
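That grain fix can be sketched as a small helper (grainIntensity is a hypothetical name; the base intensities match the render code above):

```javascript
// Scale grain intensity with canvas size so 300px thumbnails and
// 1600px renders get visually similar grain.
function grainIntensity(texture, S) {
  const base = texture === 'heavyGrain' ? 35 : 18;
  return base * (S / 800);
}

console.log(grainIntensity('grain', 800));       // 18 -- unchanged at the reference size
console.log(grainIntensity('heavyGrain', 1600)); // 70
console.log(grainIntensity('grain', 300));       // 6.75
```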
You can also add a trait distribution checker to make sure your weights produce the variety you expect across a large sample:
function checkDistribution(numSamples) {
  let counts = {};
  for (let s = 0; s < numSamples; s++) {
    let R = createRNG(s);
    let traits = generateTraits(R);
    for (let key in traits) {
      if (!counts[key]) counts[key] = {};
      let val = traits[key];
      counts[key][val] = (counts[key][val] || 0) + 1;
    }
  }
  for (let key in counts) {
    console.log('\n' + key + ':');
    for (let val in counts[key]) {
      let pct = (counts[key][val] / numSamples * 100).toFixed(1);
      console.log('  ' + val + ': ' + counts[key][val] + ' (' + pct + '%)');
    }
  }
}

checkDistribution(1000);
Run this with 1000 samples and verify the percentages match your intended weights. If neon is supposed to be ~7.7% but comes out at 12%, you've got a bug in your weighted selection. The larger the sample, the closer the percentages should be to the theoretical weights. I usually run 1000 -- that's enough to see clear trends without waiting too long.
These are the kinds of tweaks you only discover by rendering dozens of outputs. You can't predict them from reading code. The gallery is your feedback loop. Render, evaluate, adjust, render again. This iterative refinement is why I said in episode 23 that generative art is as much about curation as creation.
What makes a good collection
After building this (and after looking at a lot of collections on fxhash and Art Blocks), some observations:
Every output must be interesting. If even 5% of seeds produce boring results, collectors will notice. They'll wonder if they got one of the boring ones. Adjust weights and parameters until duds are eliminated. This is harder than it sounds -- eliminating the bottom 5% without making everything too samey is a real design challenge.
Coherence matters. All pieces should clearly belong to the same family. If someone sees three different outputs, they should immediately know they're from the same collection. A consistent visual language -- in Drift's case, flowing curves on dark backgrounds -- is the thread that ties everything together.
Traits should be visually distinct. A "sparse" piece should look obviously different from a "dense" one. If you can't tell which trait a piece has by looking at it, that trait is meaningless from a collector perspective. Every trait should create a visible, recognizable difference.
The best collections have one clear idea. Drift is about flowing curves. That's it. Not flowing curves AND geometric shapes AND particle effects AND text. One concept, explored deeply. Focus beats scope every time. Tyler Hobbs didn't put grids and spirals and flow fields into Fidenza -- he put flow fields, and he explored them fully. One idea, infinite variation.
Rarity should be meaningful. The neon palette is rare (weight 1 out of 13) because it's the most visually striking. If the rarest trait was also the most boring, that's bad design. Rare should mean special, not just uncommon.
A note on the approach
You might be wondering why we built all of this from scratch instead of using libraries. p5.js has noise(). There are palette libraries. Flow field examples exist everywhere. Three reasons:
First, self-contained code. No external dependencies means no broken CDN links in 2035. Your art renders forever. We talked about this in episode 29 and it's not a theoretical concern -- I've seen NFTs that broke because they depended on a jQuery CDN that went down.
Second, understanding. When you wrote the Perlin noise function by hand in episode 12, you understood what every parameter does. When you built the palette system in episode 28, you could tweak the exact rarity distributions. You can't optimize what you don't understand. And optimization is exactly what the gallery-test-refine loop demands.
Third, file size. A generative art piece built from our toolkit is maybe 15-20KB of JavaScript. Import p5.js and you're adding 800KB before your first line of art code. Some platforms have file size limits. And even without limits, smaller is faster. Collectors on slow connections appreciate a piece that loads in 200ms instead of 3 seconds.
What it comes down to...
- A complete generative collection needs: seeded PRNG, noise, palette system, trait generation, rendering engine, and output format
- Traits create both visual variety AND collectable rarity through weighted selection
- Composition masks (circular, split) add visual diversity without changing the core algorithm
- Texture (grain, vignette) transforms "computer output" into something that feels crafted -- same techniques from episode 27
- The layered flow style creates multi-plane depth by offsetting each curve's noise sampling
- Test with a gallery of 30+ seeds, then refine parameters until every output is interesting
- Everything in one HTML file, zero external dependencies -- self-contained means forever
- One clear visual idea explored deeply beats a collection trying to do everything
- The gallery-test-refine loop is where good collections become great ones
Phase 4 is done. We went from understanding what generative art IS (episode 23) to building a mint-ready collection. Seeds, composition, typography, texture, color, blockchain output, and now assembly. That's a real toolkit. Whether you actually mint anything is up to you -- the techniques work just as well for print, exhibition, or just making cool stuff on your own screen.
Next up we're wrapping the original Creative Coding series with some thoughts on practice, creative habits, and where to go from here. And after that... well, there's a whole world of shader programming waiting. The fragment shader taste we got in episode 21 barely scratched the surface. Signed distance functions, noise on the GPU, raymarching -- some seriously wild territory ahead :-)
Cheers! Thanks for reading.