
Feature: Brain Visualizer (WebGPU)

Date: 2026-03-03
Status: Approved


1. Context & Motivation

The Fractal Brain is Morphee's core intelligence engine — 26 files, 233 tests, ~10,000 lines of Rust. It learns from experience through substrat clustering, neuron tree recall, method neuron maturation, and 9-phase dream consolidation. During AIMO Kaggle benchmarking, we collect rich telemetry (brain_events, brain_snapshots, dream_events) but can only view it as 2D Recharts line/bar charts and tables.

The brain deserves to be seen. We want a GPU-accelerated, interactive 3D visualization that lets you:

  • See the entire brain topology as a living cosmos (substrats as nebulae, trees as particles, synapses as light trails)
  • Watch signal flow during problem solving (recognition → recall → execution)
  • Observe dream consolidation (temperature decay, merges, prunes, method births)
  • Replay completed bench runs problem-by-problem, and eventually stream live runs

This visualization serves both development insight (understanding why the brain succeeds/fails) and demonstration value (showing the brain's beauty to the world).


2. Options Investigated

Option A: WebGPU + WGSL (Selected)

  • Description: Raw WebGPU API with custom WGSL compute + fragment shaders. Force-directed layout runs entirely on GPU via compute shaders. No framework dependency.
  • Pros: Compute shaders run physics on the GPU, 100K+ particles at 60fps, future-proof (WebGPU is replacing WebGL), no library overhead, maximum creative freedom
  • Cons: Firefox behind flag (nightly only), more boilerplate, fewer examples than Three.js
  • Effort: L (but highest ceiling)

Option B: React Three Fiber (Three.js)

  • Description: @react-three/fiber + drei helpers. WebGL2-based. React JSX for 3D scenes.
  • Pros: Great DX, huge ecosystem, lots of examples, instanced meshes
  • Cons: WebGL2 only (no compute shaders), physics on CPU, library overhead, scenes beyond 10K nodes need optimization
  • Effort: M

Option C: Sigma.js / Force-Graph

  • Description: Purpose-built graph visualization. WebGL-accelerated node/edge rendering.
  • Pros: Built for graphs, ForceAtlas2 layout, handles 50K+ nodes, quick to set up
  • Cons: Less creative freedom, can't do neuroscience-style effects, 2D focus
  • Effort: S

Option D: Rust + wgpu (Native Window)

  • Description: Pure Rust GPU rendering via wgpu + winit + egui/bevy. Separate native window.
  • Pros: Maximum performance, direct morphee-core struct access, compute shaders
  • Cons: Not in web dashboard (separate window), can't share URL, different deployment, Bevy compile times
  • Effort: XL

3. Decision

Chosen approach: Option A — WebGPU + WGSL

Reasoning:

  • The brain is a living system with 100K+ potential data points (neurons, synapses, signals). Only GPU compute shaders can simulate physics at this scale while maintaining 60fps.
  • WebGPU is the future of web graphics. Chrome, Edge, and Safari ship it. Firefox is catching up. Building on WebGPU means we won't need to migrate later.
  • No framework dependency keeps bundle size minimal and gives maximum creative control for custom shaders (Gaussian nebulae, particle systems, light trails).
  • The bench dashboard already runs in Chromium-based browsers (dev environment), so Firefox support is not a blocker.

Trade-offs accepted:

  • More boilerplate than Three.js — mitigated by building a small abstraction layer for common operations
  • Firefox users see a WebGL2 fallback (degraded but functional) or a "use Chrome" message
  • Learning curve for WGSL — but this is a valuable investment

Visualization approach: All three views, built in phases:

  1. Phase 1: Living Brain Overview — Foundation. Substrats as glowing nebulae, trees as particles, synapses as light trails. GPU force-directed layout. Zoom from galaxy → cluster → tree → neuron.
  2. Phase 2: Neural Signal Flow — Replay problem solving. Animated particles along recognition → recall → execution paths. Color-coded by modality and execution path.
  3. Phase 3: Dream Consolidation Theater — Watch dream cycles. Temperature decay as cooling colors, merging clusters colliding, pruned neurons dissolving, method neurons born with flash. Timeline scrubber.

Data mode: Both replay (stored data from PostgreSQL) and live streaming (WebSocket from bench runner). Start with replay.

Backend: Extend bench-cli's existing Rust axum dashboard with new viz API endpoints. Direct access to morphee-core structs for maximum fidelity.


4. Architecture

4.1 Data Flow

morphee-core (brain structs)
↓ serialize
bench-cli axum dashboard (/api/viz/*)
↓ JSON over HTTP (replay) or WebSocket (live)
Dashboard frontend (React)
↓ parse into GPU buffers
WebGPU compute shaders (physics simulation)
↓ render
WebGPU fragment shaders (visual output)
↓ present
<canvas> in Brain page

4.2 Backend: Viz API (extend bench-cli dashboard)

New routes in bench/cli/src/commands/dashboard.rs:

GET  /api/viz/brain-state/:runId
→ Full brain topology: substrats, trees (positions only), edges, method neurons
→ Used for: Living Brain Overview initial load

GET /api/viz/substrats/:runId
→ SubstratIndex data: centroids (reduced to 3D via PCA/t-SNE), scopes, confidence, temperature, exemplar counts
→ Used for: Nebula rendering

GET /api/viz/trees/:runId
→ NeuronTree list: id, root neuron embedding (3D reduced), substrat assignments, strength
→ Used for: Particle positions within substrats

GET /api/viz/tree/:treeId
→ Full NeuronTree: all neurons, synapses, recursive children
→ Used for: Zoomed-in tree view

GET /api/viz/timeline/:runId
→ Ordered brain_events with recognition/recall/execution data per problem
→ Used for: Signal Flow replay, timeline scrubber

GET /api/viz/dreams/:runId
→ Dream events with before/after snapshots
→ Used for: Dream Theater

WS /api/viz/stream
→ Real-time brain events during active bench run (Phase 2+)
→ Used for: Live streaming mode
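A sketch of what `data/loader.ts` might do for the first route. The DTO field names (`position`, `temperature`, `confidence`) are illustrative assumptions, not the final Rust serialization schema:

```typescript
// data/loader.ts sketch — fetch brain topology and pack it into a flat
// Float32Array ready for GPU upload. Field names are assumed, not final.
interface SubstratDto {
  id: number;
  position: [number, number, number]; // 3D-reduced centroid
  temperature: number;
  confidence: number;
}

interface BrainStateDto {
  substrats: SubstratDto[];
}

// Pack substrats as [x, y, z, temperature, confidence] per node.
export function packSubstrats(state: BrainStateDto): Float32Array {
  const stride = 5;
  const out = new Float32Array(state.substrats.length * stride);
  state.substrats.forEach((s, i) => {
    out.set([...s.position, s.temperature, s.confidence], i * stride);
  });
  return out;
}

export async function loadBrainState(runId: string): Promise<Float32Array> {
  const res = await fetch(`/api/viz/brain-state/${runId}`);
  if (!res.ok) throw new Error(`viz API error: ${res.status}`);
  return packSubstrats((await res.json()) as BrainStateDto);
}
```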

Dimensionality reduction: 384-dim embeddings → 3D positions. Options:

  • PCA (fast, deterministic) — compute in Rust, cache per run
  • t-SNE/UMAP (better cluster separation) — compute once, store with run
  • Recommend: PCA for initial load (fast), optional UMAP toggle
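The client-side PCA fallback (`reducer.ts`) can be sketched with power iteration plus deflation. This is a small reference implementation, not tuned for thousands of 384-dim embeddings:

```typescript
// Client-side PCA fallback sketch: project n embeddings of dimension d
// down to 3D via power iteration on the covariance matrix.

function meanCenter(data: number[][]): number[][] {
  const d = data[0].length;
  const mean = new Array(d).fill(0);
  for (const row of data) row.forEach((v, j) => (mean[j] += v / data.length));
  return data.map((row) => row.map((v, j) => v - mean[j]));
}

// One principal direction via power iteration on X^T X.
function powerIteration(X: number[][], iters = 100): number[] {
  const d = X[0].length;
  let v = Array.from({ length: d }, () => Math.random() + 0.1);
  for (let it = 0; it < iters; it++) {
    // w = X^T (X v): covariance-times-vector without forming the d×d matrix.
    const Xv = X.map((row) => row.reduce((s, x, j) => s + x * v[j], 0));
    const w = new Array(d).fill(0);
    X.forEach((row, i) => row.forEach((x, j) => (w[j] += x * Xv[i])));
    const norm = Math.hypot(...w);
    if (norm < 1e-12) break; // degenerate direction: no variance left
    v = w.map((x) => x / norm);
  }
  return v;
}

export function pca3d(embeddings: number[][]): number[][] {
  const centered = meanCenter(embeddings);
  let X = centered.map((row) => [...row]);
  const components: number[][] = [];
  for (let k = 0; k < 3; k++) {
    const c = powerIteration(X);
    components.push(c);
    // Deflation: subtract each row's projection onto c before the next pass.
    X = X.map((row) => {
      const proj = row.reduce((s, x, j) => s + x * c[j], 0);
      return row.map((x, j) => x - proj * c[j]);
    });
  }
  // Project the centered data onto the three components.
  return centered.map((row) =>
    components.map((c) => row.reduce((s, x, j) => s + x * c[j], 0))
  );
}
```

The Rust version would follow the same structure; caching per run means this cost is paid once, not per frame.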

4.3 Frontend: WebGPU Renderer

New directory: bench/dashboard/src/viz/

viz/
├── gpu/
│   ├── context.ts — WebGPU device/adapter init, fallback detection
│   ├── buffers.ts — GPU buffer management (nodes, edges, uniforms)
│   ├── camera.ts — Orbit camera (zoom, pan, rotate)
│   └── pipeline.ts — Render pipeline factory
├── shaders/
│   ├── force-layout.wgsl — Compute: force-directed graph layout
│   ├── particle.wgsl — Compute: particle physics (attraction/repulsion)
│   ├── nebula.vert.wgsl — Vertex: substrat nebula rendering
│   ├── nebula.frag.wgsl — Fragment: Gaussian glow, temperature color
│   ├── node.vert.wgsl — Vertex: instanced neuron spheres
│   ├── node.frag.wgsl — Fragment: neuron coloring (stage, confidence)
│   ├── edge.vert.wgsl — Vertex: synapse lines/trails
│   ├── edge.frag.wgsl — Fragment: edge coloring (type, weight)
│   ├── signal.wgsl — Compute: signal particle movement
│   └── post.wgsl — Fragment: bloom, tone mapping
├── scenes/
│   ├── BrainCosmos.tsx — Living Brain Overview (Phase 1)
│   ├── SignalFlow.tsx — Neural Signal Flow (Phase 2)
│   └── DreamTheater.tsx — Dream Consolidation (Phase 3)
├── components/
│   ├── VizCanvas.tsx — Main <canvas> with WebGPU init
│   ├── Timeline.tsx — Problem-by-problem scrubber
│   ├── Controls.tsx — Zoom, rotation, layer toggles
│   ├── InfoPanel.tsx — Selected node/edge details
│   └── Legend.tsx — Color/size legend
├── data/
│   ├── loader.ts — Fetch from /api/viz/*, parse into typed arrays
│   ├── reducer.ts — Dimensionality reduction (client-side PCA fallback)
│   └── types.ts — TypeScript types matching Rust serialization
└── hooks/
    ├── useWebGPU.ts — Device init + capability detection
    ├── useBrainState.ts — Fetch + cache brain topology
    └── useTimeline.ts — Timeline playback state
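As a CPU reference for what force-layout.wgsl computes: one simulation step of pairwise repulsion plus spring attraction along edges. The real version runs per-node in a WGSL compute shader; the constants here are illustrative:

```typescript
// CPU reference for the force-directed layout step. The WGSL compute shader
// parallelizes this per node; this sketch is for clarity and testing only.
type Vec3 = [number, number, number];

export function forceStep(
  positions: Vec3[],
  edges: Array<[number, number]>,
  repulsion = 1.0,
  spring = 0.05,
  dt = 0.1
): Vec3[] {
  const forces: Vec3[] = positions.map(() => [0, 0, 0] as Vec3);
  // Repulsion: every pair of nodes pushes apart with inverse-square falloff.
  for (let i = 0; i < positions.length; i++) {
    for (let j = i + 1; j < positions.length; j++) {
      const d = [0, 1, 2].map((k) => positions[i][k] - positions[j][k]);
      const dist2 = d[0] * d[0] + d[1] * d[1] + d[2] * d[2] + 1e-6;
      const mag = repulsion / dist2 / Math.sqrt(dist2);
      for (const k of [0, 1, 2]) {
        forces[i][k] += mag * d[k];
        forces[j][k] -= mag * d[k];
      }
    }
  }
  // Attraction: each edge acts as a spring pulling its endpoints together.
  for (const [a, b] of edges) {
    for (const k of [0, 1, 2]) {
      const pull = spring * (positions[b][k] - positions[a][k]);
      forces[a][k] += pull;
      forces[b][k] -= pull;
    }
  }
  // Euler integration step.
  return positions.map((p, i) => p.map((x, k) => x + dt * forces[i][k]) as Vec3);
}
```

The O(n²) repulsion loop is the part that motivates GPU compute: on the GPU each node's force accumulation is one invocation, which is what makes 100K nodes tractable.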

4.4 Visual Design

Color Palette:

| Element | Color | Meaning |
| --- | --- | --- |
| Substrat nebula | Blue → Purple gradient | Scope/size |
| Temperature | Red (hot) → Blue (cold) → Gray (frozen) | Recency |
| Confidence | Bright/saturated → Dim/desaturated | Accuracy |
| Hippocampus neurons | Green glow | Fresh, plastic |
| Neocortex neurons | Blue glow | Consolidated |
| Cerebellum neurons | Gold glow | Automated, mastered |
| Exact recall | White trail | Perfect match |
| Variation recall | Cyan trail | Parameter substitution |
| Novel | Red pulse | Unknown territory |
| Synapse: Temporal | Thin white line | Sequential |
| Synapse: Associative | Dotted blue | Learned co-activation |
| Synapse: CrossSubstrat | Thick purple | Bridges domains |
| Dream merge | Particles flowing together | Consolidation |
| Dream prune | Particles dissolving | Cleanup |
| Method birth | Bright flash + expanding ring | New capability |
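The temperature mapping can be sketched as a small color helper. The frozen threshold (0.1) and the linear red-blue blend are illustrative choices, not the final shader math:

```typescript
// Temperature → RGB sketch: hot neurons render red, cooling ones fade
// toward blue, frozen ones desaturate to gray. Threshold is illustrative.
export function temperatureColor(t: number): [number, number, number] {
  const clamped = Math.min(1, Math.max(0, t));
  if (clamped < 0.1) return [0.5, 0.5, 0.5]; // frozen: gray
  // Linear blend from blue (cold) to red (hot).
  return [clamped, 0.0, 1 - clamped];
}
```

The same function, ported to WGSL, would live in nebula.frag.wgsl so replay and live modes share one mapping.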

Zoom Levels:

  1. Galaxy — All substrats as nebulae, edges as faint lines. Bird's eye.
  2. Cluster — Single substrat. Trees visible as individual particles. Method neuron as central core.
  3. Tree — Single NeuronTree. Fractal recursive structure. Neurons as spheres, synapses as lines.
  4. Neuron — Single neuron. Fingerprint visualization. Activation vector heatmap.
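A minimal sketch of how the four levels might be selected from camera distance; the thresholds are placeholders to be tuned during the zoom-transition work in step 1.8:

```typescript
// Zoom level selection sketch: map orbit-camera distance to one of the
// four LOD levels. Threshold values are illustrative placeholders.
export type ZoomLevel = "galaxy" | "cluster" | "tree" | "neuron";

export function zoomLevel(cameraDistance: number): ZoomLevel {
  if (cameraDistance > 100) return "galaxy";
  if (cameraDistance > 20) return "cluster";
  if (cameraDistance > 3) return "tree";
  return "neuron";
}
```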

5. Implementation Plan

| Step | Description | Effort | Dependencies |
| --- | --- | --- | --- |
| Phase 1: Living Brain Overview | | | |
| 1.1 | WebGPU boilerplate: device init, canvas, camera, render loop | M | None |
| 1.2 | Viz API endpoints in bench-cli dashboard (brain-state, substrats, trees) | M | None |
| 1.3 | Dimensionality reduction (PCA in Rust, 384→3D, cached per run) | S | 1.2 |
| 1.4 | Substrat nebula shader (Gaussian glow, temperature→color, scope→size) | M | 1.1 |
| 1.5 | Neuron tree particles (instanced rendering, position from reduced embeddings) | M | 1.1, 1.3 |
| 1.6 | Force-directed layout compute shader (substrat repulsion + tree attraction) | L | 1.4, 1.5 |
| 1.7 | Synapse edge rendering (line shader, weight→thickness, kind→color) | S | 1.5 |
| 1.8 | Zoom levels (galaxy → cluster → tree → neuron) with smooth transitions | M | 1.4–1.7 |
| 1.9 | Info panel + legend + controls (React overlay on canvas) | S | 1.8 |
| 1.10 | Post-processing: bloom, tone mapping | S | 1.4 |
| Phase 2: Neural Signal Flow | | | |
| 2.1 | Timeline API endpoint + timeline scrubber component | M | 1.2 |
| 2.2 | Signal particle compute shader (movement along edges, modality→color) | M | 1.6, 2.1 |
| 2.3 | Recognition → Recall → Execution path visualization | M | 2.2 |
| 2.4 | Execution path speed mapping (cerebellum=instant, raw_llm=slow spiral) | S | 2.3 |
| 2.5 | Problem replay mode (step through, auto-play, speed control) | M | 2.1–2.4 |
| Phase 3: Dream Consolidation Theater | | | |
| 3.1 | Dreams API endpoint + dream timeline | S | 1.2 |
| 3.2 | Temperature decay animation (cooling colors over time) | S | 1.4, 3.1 |
| 3.3 | Merge animation (substrat particles flowing together) | M | 3.2 |
| 3.4 | Prune animation (neurons dissolving/fading) | S | 3.2 |
| 3.5 | Method neuron birth animation (flash + expanding ring) | S | 3.2 |
| 3.6 | Promotion animation (stage color transition with effect) | S | 3.2 |
| 3.7 | Dream cycle playback (timeline of all 9 phases with before/after) | M | 3.2–3.6 |
| Phase 4: Live Streaming | | | |
| 4.1 | WebSocket endpoint in bench-cli (brain event stream) | M | 1.2 |
| 4.2 | Runner → Hub event forwarding | M | 4.1 |
| 4.3 | Frontend WebSocket consumer + buffer | S | 4.1, 1.1 |
| 4.4 | Real-time visualization (auto-update graph + signal flow) | M | 4.3, Phase 2 |

Total estimated effort: 26 steps (1 L, 14 M, 11 S)
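Steps 2.2 and 2.4 can be sketched on the CPU as a particle advancing along one edge at a path-dependent speed. The speed constants are illustrative, and the raw_llm spiral effect is omitted:

```typescript
// Signal particle advancement sketch: linear interpolation along an edge,
// with speed chosen by execution path. Speeds are illustrative assumptions
// (cerebellum near-instant, raw_llm slow).
type ExecutionPath = "cerebellum" | "recall" | "raw_llm";

const SPEED: Record<ExecutionPath, number> = {
  cerebellum: 10.0,
  recall: 2.0,
  raw_llm: 0.3,
};

export interface SignalParticle {
  from: [number, number, number];
  to: [number, number, number];
  t: number; // 0..1 progress along the edge
  path: ExecutionPath;
}

// Advance progress (clamped at the edge's end) and return the position.
export function advance(p: SignalParticle, dt: number): [number, number, number] {
  p.t = Math.min(1, p.t + dt * SPEED[p.path]);
  return [0, 1, 2].map(
    (k) => p.from[k] + p.t * (p.to[k] - p.from[k])
  ) as [number, number, number];
}
```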


6. Questions & Answers

| Question | Answer |
| --- | --- |
| Which GPU API? | WebGPU + WGSL. WebGL2 fallback (degraded) for Firefox. |
| Which visualizations? | All three (Living Brain, Signal Flow, Dream Theater) — phased. |
| Live or replay? | Both. Start with replay (stored data), add live streaming in Phase 4. |
| Where does the API live? | Extend bench-cli's existing axum dashboard. Direct morphee-core access. |
| Dimensionality reduction? | PCA in Rust (fast, deterministic), cached per run. Optional UMAP toggle later. |
| How to handle 100K+ neurons? | GPU instanced rendering + compute shaders for layout. LOD: galaxy view shows substrats only, zoom reveals trees, then neurons. |
| Firefox support? | WebGL2 fallback with reduced effects, or "use Chrome" banner. Dashboard is a dev tool — Chrome is fine. |
| New DB tables? | Yes — need brain_trees and brain_neurons tables to store tree/neuron data per run (currently only aggregates are stored). |

7. Open Items

  • New telemetry tables: Current brain_events/snapshots don't store individual neuron positions or tree structures. Need to add brain snapshot export during bench runs (full SubstratIndex + tree list as JSON blob or new tables). This is a prerequisite for Phase 1.
  • UMAP in Rust: Consider adding umap-rs crate for better cluster separation. PCA first, UMAP as enhancement.
  • WebGPU fallback: Decide exact degradation path. Options: (a) WebGL2 with simplified shaders, (b) static 2D SVG fallback, (c) "upgrade browser" message.
  • Performance budget: Target 60fps with 10K nodes on M1 MacBook. Benchmark during Phase 1.6.
  • Accessibility: The visualization is supplementary (data tables remain). Add screen reader descriptions for key metrics.

8. References