Neural Petri Dish

Controls
Mouse: Draw / Erase / Noise
1 / 2 / 3: Switch brush mode
+ / - or Scroll: Brush size
Space: Pause / Resume
N: Single step (while paused)
R: Reset
C: Center seed
X: Random seed
S: Toggle stats
H: Toggle help
V: Toggle hidden-channel view

What is this?

This is a Neural Cellular Automaton (NCA) — a tiny neural network that runs independently in every cell of a grid, reading only its immediate neighbors. Despite having no global coordination, the cells collectively grow and maintain complex patterns from a simple seed.

How it works

Each cell holds 16 channels: 3 visible (RGB) and 13 hidden "thought" channels. Every step, each cell:

  1. Perceives its neighborhood using learned or Sobel-based filters, including dilated receptive fields for some models
  2. Decides a small update via a multi-layer neural network (2–3 layers, 64–128 hidden units)
  3. Updates stochastically — some models use learned gated firing, others use random ~50% masking
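The three steps above can be sketched in NumPy. This is a minimal illustration, not the page's shader code: it assumes the Sobel-based perception variant, a 2-layer MLP, and the random ~50% masking variant; all names and shapes are illustrative.

```python
import numpy as np

def nca_step(state, w1, b1, w2, b2, fire_rate=0.5, rng=np.random):
    """One NCA step on an (H, W, 16) state grid.
    Perception here is identity + Sobel-x + Sobel-y per channel,
    giving 3 * 16 = 48 features per cell (assumed layout)."""
    H, W, C = state.shape

    # 1. Perceive: fixed Sobel filters applied to every channel.
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32) / 8.0
    sy = sx.T

    def conv(img, k):
        out = np.zeros_like(img)
        p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="wrap")
        for i in range(3):
            for j in range(3):
                out += k[i, j] * p[i:i + H, j:j + W]
        return out

    percept = np.concatenate([state, conv(state, sx), conv(state, sy)], axis=-1)

    # 2. Decide: a tiny MLP shared by every cell (same weights everywhere).
    hidden = np.maximum(percept @ w1 + b1, 0.0)   # ReLU
    delta = hidden @ w2 + b2                      # per-cell update, (H, W, 16)

    # 3. Update stochastically: each cell fires with probability ~fire_rate.
    mask = (rng.random((H, W, 1)) < fire_rate).astype(np.float32)
    return state + delta * mask
```

With zero weights the delta vanishes and the state is unchanged, which is a quick sanity check that the masking only gates the residual update.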

Each model's update rule was found by automated architecture search (Optuna) and trained in PyTorch to regenerate target patterns from damage. This page runs the trained weights in WebGL2 fragment shaders at 60fps, with the same math as the Python original.
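"Regenerate from damage" means training samples are perturbed by erasing a patch of the grid and the network is penalized unless it regrows the target. A minimal NumPy sketch of that damage operation (the radius and center conventions are assumptions):

```python
import numpy as np

def damage(state, cy, cx, radius):
    """Zero out a circular patch of an (H, W, C) state grid.
    During training, the model must regrow the erased region."""
    H, W, _ = state.shape
    yy, xx = np.mgrid[0:H, 0:W]
    hole = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    out = state.copy()
    out[hole] = 0.0
    return out
```

The interactive erase brush on this page is the same idea applied live: it punches a hole in the state and lets the trained update rule heal it.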

Try it

Architecture

16 NCA channels are stored as 4 RGBA float16 textures on the GPU, double-buffered for ping-pong rendering. MLP weights are packed into an RGBA32F texture and read via texelFetch() for a single-pass update (no multi-pass slicing). The update shader is dynamically generated from each model's architecture parameters and recompiled on model switch. Models range from ~5K to ~31K learned parameters depending on hidden size and depth.
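The channel-to-texture and weight-packing layout can be illustrated in NumPy. The exact layout here is an assumption (texture t holding channels 4t..4t+3, weights flattened row-major, 4 floats per texel); the shader's texelFetch() indexing must match whatever packing the loader actually uses.

```python
import numpy as np

def pack_channels(state):
    """Split an (H, W, 16) state into 4 (H, W, 4) RGBA planes:
    texture t holds channels 4t..4t+3 (assumed layout)."""
    return [state[..., 4 * t: 4 * t + 4] for t in range(4)]

def pack_weights(*mats):
    """Flatten MLP weight matrices into float32 RGBA texels
    (4 floats per texel), as uploaded to an RGBA32F texture."""
    flat = np.concatenate([m.astype(np.float32).ravel() for m in mats])
    pad = (-len(flat)) % 4                       # pad to a whole texel
    flat = np.concatenate([flat, np.zeros(pad, np.float32)])
    return flat.reshape(-1, 4)                   # one RGBA texel per row
```

Reading weights with texelFetch() rather than sampling avoids filtering and lets one fragment-shader pass walk the whole MLP, which is why no multi-pass slicing is needed.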