Feature: Brain Visualizer (WebGPU)
Date: 2026-03-03 Status: Approved
1. Context & Motivation
The Fractal Brain is Morphee's core intelligence engine — 26 files, 233 tests, ~10,000 lines of Rust. It learns from experience through substrat clustering, neuron tree recall, method neuron maturation, and 9-phase dream consolidation. During AIMO Kaggle benchmarking, we collect rich telemetry (brain_events, brain_snapshots, dream_events) but can only view it as 2D Recharts line/bar charts and tables.
The brain deserves to be seen. We want a GPU-accelerated, interactive 3D visualization that lets you:
- See the entire brain topology as a living cosmos (substrats as nebulae, trees as particles, synapses as light trails)
- Watch signal flow during problem solving (recognition → recall → execution)
- Observe dream consolidation (temperature decay, merges, prunes, method births)
- Replay completed bench runs problem-by-problem, and eventually stream live runs
This visualization serves both development insight (understanding why the brain succeeds/fails) and demonstration value (showing the brain's beauty to the world).
2. Options Investigated
Option A: WebGPU + WGSL (Selected)
- Description: Raw WebGPU API with custom WGSL compute + fragment shaders. Force-directed layout runs entirely on GPU via compute shaders. No framework dependency.
- Pros: Compute shaders for physics ON GPU, 100K+ particles at 60fps, future-proof (replacing WebGL), no library overhead, maximum creative freedom
- Cons: Firefox behind flag (nightly only), more boilerplate, fewer examples than Three.js
- Effort: L (but highest ceiling)
Option B: React Three Fiber (Three.js)
- Description: @react-three/fiber + drei helpers. WebGL2-based. React JSX for 3D scenes.
- Pros: Great DX, huge ecosystem, lots of examples, instanced meshes
- Cons: WebGL2 only (no compute shaders), physics on CPU, library overhead, rendering 10K+ nodes requires careful optimization
- Effort: M
Option C: Sigma.js / Force-Graph
- Description: Purpose-built graph visualization. WebGL-accelerated node/edge rendering.
- Pros: Built for graphs, ForceAtlas2 layout, handles 50K+ nodes, quick to set up
- Cons: Less creative freedom, can't do neuroscience-style effects, 2D focus
- Effort: S
Option D: Rust + wgpu (Native Window)
- Description: Pure Rust GPU rendering via wgpu + winit + egui/bevy. Separate native window.
- Pros: Maximum performance, direct morphee-core struct access, compute shaders
- Cons: Not in web dashboard (separate window), can't share URL, different deployment, Bevy compile times
- Effort: XL
3. Decision
Chosen approach: Option A — WebGPU + WGSL
Reasoning:
- The brain is a living system with 100K+ potential data points (neurons, synapses, signals). Only GPU compute shaders can simulate physics at this scale while maintaining 60fps.
- WebGPU is the future of web graphics. Chrome, Edge, and Safari ship it. Firefox is catching up. Building on WebGPU means we won't need to migrate later.
- No framework dependency keeps bundle size minimal and gives maximum creative control for custom shaders (Gaussian nebulae, particle systems, light trails).
- The bench dashboard already runs in Chromium-based browsers (dev environment), so Firefox support is not a blocker.
Trade-offs accepted:
- More boilerplate than Three.js — mitigated by building a small abstraction layer for common operations
- Firefox users see a WebGL2 fallback (degraded but functional) or a "use Chrome" message
- Learning curve for WGSL — but this is a valuable investment
Visualization approach: All three views, built in phases:
- Phase 1: Living Brain Overview — Foundation. Substrats as glowing nebulae, trees as particles, synapses as light trails. GPU force-directed layout. Zoom from galaxy → cluster → tree → neuron.
- Phase 2: Neural Signal Flow — Replay problem solving. Animated particles along recognition → recall → execution paths. Color-coded by modality and execution path.
- Phase 3: Dream Consolidation Theater — Watch dream cycles. Temperature decay as cooling colors, merging clusters colliding, pruned neurons dissolving, method neurons born with flash. Timeline scrubber.
Data mode: Both replay (stored data from PostgreSQL) and live streaming (WebSocket from bench runner). Start with replay.
Backend: Extend bench-cli's existing Rust axum dashboard with new viz API endpoints. Direct access to morphee-core structs for maximum fidelity.
4. Architecture
4.1 Data Flow
morphee-core (brain structs)
↓ serialize
bench-cli axum dashboard (/api/viz/*)
↓ JSON over HTTP (replay) or WebSocket (live)
Dashboard frontend (React)
↓ parse into GPU buffers
WebGPU compute shaders (physics simulation)
↓ render
WebGPU fragment shaders (visual output)
↓
<canvas> in Brain page
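The "parse into GPU buffers" step above can be sketched in TypeScript. The shapes below are illustrative placeholders, not the real Rust serialization; only the flat-`Float32Array` packing pattern is the point, since WebGPU buffers are written from typed arrays.

```typescript
// Hypothetical payload shapes for the viz API (field names are
// assumptions; the real ones come from the Rust serialization).
interface SubstratViz {
  id: number;
  centroid: [number, number, number]; // 384-dim embedding reduced to 3D
  temperature: number;                // 0 (frozen) .. 1 (hot)
  confidence: number;
  exemplarCount: number;
}

interface BrainStateViz {
  substrats: SubstratViz[];
}

// Pack substrat positions into a flat Float32Array ready to upload to a
// GPU buffer (xyz per substrat, tightly interleaved).
function toPositionBuffer(state: BrainStateViz): Float32Array {
  const out = new Float32Array(state.substrats.length * 3);
  state.substrats.forEach((s, i) => out.set(s.centroid, i * 3));
  return out;
}
```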
4.2 Backend: Viz API (extend bench-cli dashboard)
New routes in bench/cli/src/commands/dashboard.rs:
GET /api/viz/brain-state/:runId
→ Full brain topology: substrats, trees (positions only), edges, method neurons
→ Used for: Living Brain Overview initial load
GET /api/viz/substrats/:runId
→ SubstratIndex data: centroids (reduced to 3D via PCA/t-SNE), scopes, confidence, temperature, exemplar counts
→ Used for: Nebula rendering
GET /api/viz/trees/:runId
→ NeuronTree list: id, root neuron embedding (3D reduced), substrat assignments, strength
→ Used for: Particle positions within substrats
GET /api/viz/tree/:treeId
→ Full NeuronTree: all neurons, synapses, recursive children
→ Used for: Zoomed-in tree view
GET /api/viz/timeline/:runId
→ Ordered brain_events with recognition/recall/execution data per problem
→ Used for: Signal Flow replay, timeline scrubber
GET /api/viz/dreams/:runId
→ Dream events with before/after snapshots
→ Used for: Dream Theater
WS /api/viz/stream
→ Real-time brain events during active bench run (Phase 2+)
→ Used for: Live streaming mode
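The route list above could be addressed from the frontend with a small typed helper; this is a sketch, and the `/api/viz` base path plus the `vizUrl`/`fetchViz` names are assumptions, not existing code.

```typescript
// Hypothetical route helper mirroring the endpoint list above.
const VIZ_BASE = "/api/viz"; // assumed base path

type VizRoute =
  | { kind: "brain-state"; runId: string }
  | { kind: "substrats"; runId: string }
  | { kind: "trees"; runId: string }
  | { kind: "tree"; treeId: string }
  | { kind: "timeline"; runId: string }
  | { kind: "dreams"; runId: string };

function vizUrl(route: VizRoute): string {
  switch (route.kind) {
    case "tree":
      // The tree endpoint is keyed by treeId rather than runId
      return `${VIZ_BASE}/tree/${route.treeId}`;
    default:
      return `${VIZ_BASE}/${route.kind}/${route.runId}`;
  }
}

// Fetch one endpoint and parse its JSON body.
async function fetchViz<T>(route: VizRoute): Promise<T> {
  const res = await fetch(vizUrl(route));
  if (!res.ok) throw new Error(`viz API ${res.status}: ${vizUrl(route)}`);
  return res.json() as Promise<T>;
}
```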
Dimensionality reduction: 384-dim embeddings → 3D positions. Options:
- PCA (fast, deterministic) — compute in Rust, cache per run
- t-SNE/UMAP (better cluster separation) — compute once, store with run
- Recommend: PCA for initial load (fast), optional UMAP toggle
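As a reference for the client-side PCA fallback mentioned later (data/reducer.ts), here is a minimal power-iteration sketch with deflation. It is O(iters · n · d) per component, fine for a few thousand 384-dim embeddings but not 100K; the function name and iteration count are placeholders.

```typescript
// Minimal client-side PCA: project n points of dimension d onto their
// top-3 principal components via power iteration with deflation.
function pca3(points: number[][], iters = 50): number[][] {
  const n = points.length, d = points[0].length;
  // Center the data
  const mean = new Array(d).fill(0);
  for (const p of points) for (let j = 0; j < d; j++) mean[j] += p[j] / n;
  const X = points.map(p => p.map((v, j) => v - mean[j]));

  const components: number[][] = [];
  for (let c = 0; c < 3; c++) {
    let v = Array.from({ length: d }, () => Math.random());
    for (let it = 0; it < iters; it++) {
      // w = X^T (X v): one power-iteration step on the covariance matrix
      const Xv = X.map(row => row.reduce((s, x, j) => s + x * v[j], 0));
      const w = new Array(d).fill(0);
      X.forEach((row, i) => row.forEach((x, j) => { w[j] += x * Xv[i]; }));
      // Deflate: remove projections onto previously found components
      for (const u of components) {
        const dot = w.reduce((s, x, j) => s + x * u[j], 0);
        for (let j = 0; j < d; j++) w[j] -= dot * u[j];
      }
      const norm = Math.hypot(...w);
      if (norm < 1e-12) break; // degenerate direction, data has lower rank
      v = w.map(x => x / norm);
    }
    components.push(v);
  }
  // Project every centered point onto the 3 components
  return X.map(row =>
    components.map(u => row.reduce((s, x, j) => s + x * u[j], 0)));
}
```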
4.3 Frontend: WebGPU Renderer
New directory: bench/dashboard/src/viz/
viz/
├── gpu/
│ ├── context.ts — WebGPU device/adapter init, fallback detection
│ ├── buffers.ts — GPU buffer management (nodes, edges, uniforms)
│ ├── camera.ts — Orbit camera (zoom, pan, rotate)
│ └── pipeline.ts — Render pipeline factory
├── shaders/
│ ├── force-layout.wgsl — Compute: force-directed graph layout
│ ├── particle.wgsl — Compute: particle physics (attraction/repulsion)
│ ├── nebula.vert.wgsl — Vertex: substrat nebula rendering
│ ├── nebula.frag.wgsl — Fragment: Gaussian glow, temperature color
│ ├── node.vert.wgsl — Vertex: instanced neuron spheres
│ ├── node.frag.wgsl — Fragment: neuron coloring (stage, confidence)
│ ├── edge.vert.wgsl — Vertex: synapse lines/trails
│ ├── edge.frag.wgsl — Fragment: edge coloring (type, weight)
│ ├── signal.wgsl — Compute: signal particle movement
│ └── post.wgsl — Fragment: bloom, tone mapping
├── scenes/
│ ├── BrainCosmos.tsx — Living Brain Overview (Phase 1)
│ ├── SignalFlow.tsx — Neural Signal Flow (Phase 2)
│ └── DreamTheater.tsx — Dream Consolidation (Phase 3)
├── components/
│ ├── VizCanvas.tsx — Main <canvas> with WebGPU init
│ ├── Timeline.tsx — Problem-by-problem scrubber
│ ├── Controls.tsx — Zoom, rotation, layer toggles
│ ├── InfoPanel.tsx — Selected node/edge details
│ └── Legend.tsx — Color/size legend
├── data/
│ ├── loader.ts — Fetch from /api/viz/*, parse into typed arrays
│ ├── reducer.ts — Dimensionality reduction (client-side PCA fallback)
│ └── types.ts — TypeScript types matching Rust serialization
└── hooks/
├── useWebGPU.ts — Device init + capability detection
├── useBrainState.ts — Fetch + cache brain topology
└── useTimeline.ts — Timeline playback state
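One sketch of the orbit-camera math behind gpu/camera.ts: the camera state is a target point plus spherical coordinates, and the eye position falls out of the usual spherical-to-cartesian conversion. The names, y-up convention, and clamp values are assumptions.

```typescript
interface OrbitCamera {
  target: [number, number, number];
  radius: number;  // zoom: distance from target
  theta: number;   // azimuth around the y axis, radians
  phi: number;     // polar angle from the y axis, radians
}

// Spherical-to-cartesian: where the camera eye sits relative to the target.
function eyePosition(cam: OrbitCamera): [number, number, number] {
  const { target, radius, theta, phi } = cam;
  return [
    target[0] + radius * Math.sin(phi) * Math.sin(theta),
    target[1] + radius * Math.cos(phi),
    target[2] + radius * Math.sin(phi) * Math.cos(theta),
  ];
}

// Zoom by scaling the radius, clamped so the eye never crosses the target.
function zoom(cam: OrbitCamera, factor: number): OrbitCamera {
  return { ...cam, radius: Math.max(0.01, cam.radius * factor) };
}
```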
4.4 Visual Design
Color Palette:
| Element | Color | Meaning |
|---|---|---|
| Substrat nebula | Blue → Purple gradient | Scope/size |
| Temperature | Red (hot) → Blue (cold) → Gray (frozen) | Recency |
| Confidence | Bright/saturated → Dim/desaturated | Accuracy |
| Hippocampus neurons | Green glow | Fresh, plastic |
| Neocortex neurons | Blue glow | Consolidated |
| Cerebellum neurons | Gold glow | Automated, mastered |
| Exact recall | White trail | Perfect match |
| Variation recall | Cyan trail | Parameter substitution |
| Novel | Red pulse | Unknown territory |
| Synapse: Temporal | Thin white line | Sequential |
| Synapse: Associative | Dotted blue | Learned co-activation |
| Synapse: CrossSubstrat | Thick purple | Bridges domains |
| Dream merge | Particles flowing together | Consolidation |
| Dream prune | Particles dissolving | Cleanup |
| Method birth | Bright flash + expanding ring | New capability |
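The temperature ramp in the table (red hot, blue cold, gray frozen) could be implemented as a simple two-stop blend with a frozen cutoff. The thresholds and exact RGB stops below are assumptions to be tuned against real telemetry.

```typescript
type RGB = [number, number, number];

function lerp(a: RGB, b: RGB, t: number): RGB {
  return [
    a[0] + (b[0] - a[0]) * t,
    a[1] + (b[1] - a[1]) * t,
    a[2] + (b[2] - a[2]) * t,
  ];
}

const HOT: RGB = [1.0, 0.2, 0.1];    // red
const COLD: RGB = [0.2, 0.3, 1.0];   // blue
const FROZEN: RGB = [0.5, 0.5, 0.5]; // gray

// temperature in [0, 1]; below `frozenBelow` the element renders frozen.
function temperatureColor(t: number, frozenBelow = 0.1): RGB {
  if (t <= frozenBelow) return FROZEN;
  // Remap (frozenBelow, 1] to [0, 1] and blend cold -> hot
  const s = (t - frozenBelow) / (1 - frozenBelow);
  return lerp(COLD, HOT, s);
}
```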
Zoom Levels:
- Galaxy — All substrats as nebulae, edges as faint lines. Bird's eye.
- Cluster — Single substrat. Trees visible as individual particles. Method neuron as central core.
- Tree — Single NeuronTree. Fractal recursive structure. Neurons as spheres, synapses as lines.
- Neuron — Single neuron. Fingerprint visualization. Activation vector heatmap.
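The four levels above map naturally to camera-distance bands, which also drive the LOD decision (which layers to draw). A minimal sketch, with placeholder distance thresholds and layer names:

```typescript
type ZoomLevel = "galaxy" | "cluster" | "tree" | "neuron";

// Pick the zoom level from the camera's distance to its focus point.
// Thresholds are placeholders to be tuned against the real scene scale.
function zoomLevel(distance: number): ZoomLevel {
  if (distance > 500) return "galaxy";
  if (distance > 50) return "cluster";
  if (distance > 5) return "tree";
  return "neuron";
}

// Which render layers are visible per level (hypothetical layer names).
const VISIBLE_LAYERS: Record<ZoomLevel, string[]> = {
  galaxy: ["nebulae", "substrat-edges"],
  cluster: ["nebulae", "tree-particles", "method-core"],
  tree: ["neurons", "synapses"],
  neuron: ["fingerprint", "activation-heatmap"],
};
```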
5. Implementation Plan
| Step | Description | Effort | Dependencies |
|---|---|---|---|
| Phase 1: Living Brain Overview | | | |
| 1.1 | WebGPU boilerplate: device init, canvas, camera, render loop | M | None |
| 1.2 | Viz API endpoints in bench-cli dashboard (brain-state, substrats, trees) | M | None |
| 1.3 | Dimensionality reduction (PCA in Rust, 384→3D, cached per run) | S | 1.2 |
| 1.4 | Substrat nebula shader (Gaussian glow, temperature→color, scope→size) | M | 1.1 |
| 1.5 | Neuron tree particles (instanced rendering, position from reduced embeddings) | M | 1.1, 1.3 |
| 1.6 | Force-directed layout compute shader (substrat repulsion + tree attraction) | L | 1.4, 1.5 |
| 1.7 | Synapse edge rendering (line shader, weight→thickness, kind→color) | S | 1.5 |
| 1.8 | Zoom levels (galaxy → cluster → tree → neuron) with smooth transitions | M | 1.4–1.7 |
| 1.9 | Info panel + legend + controls (React overlay on canvas) | S | 1.8 |
| 1.10 | Post-processing: bloom, tone mapping | S | 1.4 |
| Phase 2: Neural Signal Flow | | | |
| 2.1 | Timeline API endpoint + timeline scrubber component | M | 1.2 |
| 2.2 | Signal particle compute shader (movement along edges, modality→color) | M | 1.6, 2.1 |
| 2.3 | Recognition → Recall → Execution path visualization | M | 2.2 |
| 2.4 | Execution path speed mapping (cerebellum=instant, raw_llm=slow spiral) | S | 2.3 |
| 2.5 | Problem replay mode (step through, auto-play, speed control) | M | 2.1–2.4 |
| Phase 3: Dream Consolidation Theater | | | |
| 3.1 | Dreams API endpoint + dream timeline | S | 1.2 |
| 3.2 | Temperature decay animation (cooling colors over time) | S | 1.4, 3.1 |
| 3.3 | Merge animation (substrat particles flowing together) | M | 3.2 |
| 3.4 | Prune animation (neurons dissolving/fading) | S | 3.2 |
| 3.5 | Method neuron birth animation (flash + expanding ring) | S | 3.2 |
| 3.6 | Promotion animation (stage color transition with effect) | S | 3.2 |
| 3.7 | Dream cycle playback (timeline of all 9 phases with before/after) | M | 3.2–3.6 |
| Phase 4: Live Streaming | | | |
| 4.1 | WebSocket endpoint in bench-cli (brain event stream) | M | 1.2 |
| 4.2 | Runner → Hub event forwarding | M | 4.1 |
| 4.3 | Frontend WebSocket consumer + buffer | S | 4.1, 1.1 |
| 4.4 | Real-time visualization (auto-update graph + signal flow) | M | 4.3, Phase 2 |
Total estimated effort: 26 steps (1 L, 14 M, 11 S)
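As a CPU reference for step 1.6's force-layout compute shader, one integration step of pairwise repulsion plus spring attraction along edges might look like the sketch below. The WGSL version would run one thread per node over GPU storage buffers; the constants here are placeholders, and the O(n²) repulsion loop is the part a grid or Barnes-Hut scheme would replace at scale.

```typescript
interface LayoutState {
  pos: Float32Array;   // xyz per node
  edges: Uint32Array;  // pairs of node indices
}

function forceStep(s: LayoutState, repulsion = 1.0, spring = 0.05, dt = 0.1): void {
  const n = s.pos.length / 3;
  const force = new Float32Array(s.pos.length);
  // Pairwise repulsion, inverse-square falloff (brute force, O(n^2))
  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) {
      let dx = s.pos[3 * i] - s.pos[3 * j];
      let dy = s.pos[3 * i + 1] - s.pos[3 * j + 1];
      let dz = s.pos[3 * i + 2] - s.pos[3 * j + 2];
      const d2 = dx * dx + dy * dy + dz * dz + 1e-6; // epsilon avoids div by 0
      const f = repulsion / d2;
      const d = Math.sqrt(d2);
      dx /= d; dy /= d; dz /= d;
      force[3 * i] += f * dx; force[3 * i + 1] += f * dy; force[3 * i + 2] += f * dz;
      force[3 * j] -= f * dx; force[3 * j + 1] -= f * dy; force[3 * j + 2] -= f * dz;
    }
  }
  // Spring attraction along edges (Hooke's law toward the neighbor)
  for (let e = 0; e < s.edges.length; e += 2) {
    const a = s.edges[e], b = s.edges[e + 1];
    for (let k = 0; k < 3; k++) {
      const delta = s.pos[3 * b + k] - s.pos[3 * a + k];
      force[3 * a + k] += spring * delta;
      force[3 * b + k] -= spring * delta;
    }
  }
  // Euler integration
  for (let k = 0; k < s.pos.length; k++) s.pos[k] += dt * force[k];
}
```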
6. Questions & Answers
| Question | Answer |
|---|---|
| Which GPU API? | WebGPU + WGSL. WebGL2 fallback (degraded) for Firefox. |
| Which visualizations? | All three (Living Brain, Signal Flow, Dream Theater) — phased. |
| Live or replay? | Both. Start with replay (stored data), add live streaming in Phase 4. |
| Where does the API live? | Extend bench-cli's existing axum dashboard. Direct morphee-core access. |
| Dimensionality reduction? | PCA in Rust (fast, deterministic), cached per run. Optional UMAP toggle later. |
| How to handle 100K+ neurons? | GPU instanced rendering + compute shaders for layout. LOD: galaxy view shows substrats only, zoom reveals trees, then neurons. |
| Firefox support? | WebGL2 fallback with reduced effects, or "use Chrome" banner. Dashboard is a dev tool — Chrome is fine. |
| New DB tables? | Yes — need brain_trees and brain_neurons tables to store tree/neuron data per run (currently only aggregates are stored). |
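For the instanced-rendering answer above, the per-instance vertex buffer layout is the part worth pinning down early, since the Rust packing and the WGSL attributes must agree on it. A sketch of one possible layout (the WebGPU `GPUVertexBufferLayout` shape; shader locations and field choices are assumptions):

```typescript
// Per-instance data for one neuron sphere: position + color + size,
// padded to a 32-byte stride.
function neuronInstanceLayout() {
  return {
    arrayStride: 8 * 4, // vec3 position + vec3 color + f32 size + f32 pad
    stepMode: "instance" as const,
    attributes: [
      { shaderLocation: 1, offset: 0,  format: "float32x3" as const }, // position
      { shaderLocation: 2, offset: 12, format: "float32x3" as const }, // color
      { shaderLocation: 3, offset: 24, format: "float32" as const },   // size
    ],
  };
}
```

The layout would be passed in the `buffers` array of the render pipeline's vertex state, alongside the sphere mesh's own per-vertex buffer.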
7. Open Items
- New telemetry tables: Current brain_events/snapshots don't store individual neuron positions or tree structures. Need to add brain snapshot export during bench runs (full SubstratIndex + tree list as JSON blob or new tables). This is a prerequisite for Phase 1.
- UMAP in Rust: Consider adding the `umap-rs` crate for better cluster separation. PCA first, UMAP as enhancement.
- WebGPU fallback: Decide exact degradation path. Options: (a) WebGL2 with simplified shaders, (b) static 2D SVG fallback, (c) "upgrade browser" message.
- Performance budget: Target 60fps with 10K nodes on M1 MacBook. Benchmark during Phase 1.6.
- Accessibility: The visualization is supplementary (data tables remain). Add screen reader descriptions for key metrics.
8. References
- WebGPU Spec — W3C specification
- WGSL Spec — WebGPU Shading Language
- WebGPU Samples — Official examples (compute boids, instanced rendering)
- Force-Directed GPU Layout — GPU compute shader force layout reference
- Brain Visualization Inspiration: BrainNet Viewer — Neuroscience-grade brain network viz
- Morphee Fractal Brain Architecture
- Morphee Brain Telemetry
- Current Bench Dashboard