
Digital Brain — Nature-Inspired Intelligence

Status: Design approved, implementation planned
Date: March 2, 2026
Prerequisite: fractal-brain.md (current implementation)
Core value: AI inspired by nature — not metaphor, mechanism


Table of Contents

  1. The Problem
  2. What We Built and What We Learned
  3. Critical Analysis
  4. The Nature-Inspired Solution
  5. Neuron Growth and Maturation
  6. The Biological Learning Loop
  7. Brain Regions as Neuron Stages
  8. Dream Consolidation — Neurogenesis
  9. Prediction and Surprise
  10. Recursive Decomposition — The Prefrontal Cortex
  11. Attention and Salience
  12. Myelination — The Compilation Chain
  13. What Changes in Code
  14. Implementation Plan
  15. Future Ideas
  16. Design Decisions

The Problem

What we set out to do

Build an AI brain that learns from experience. Not a chatbot that calls an LLM every time — an actual learning system that gets smarter the more it's used, saves LLM calls for known problems, and eventually runs autonomously.

What morphee-core had before the brain

A Pipeline with 26 traits: Embedder, Router, Strategy, Scorer, Inferencer, Executor, etc. Every query goes through the pipeline, every query hits the LLM. No memory of past successes. No learning. Flat 384-dim embedding vectors for similarity search — "have I seen something that looks like this?" with no understanding of structure.

The specific failure of flat embeddings

"Find the GCD of 48 and 18" and "Find the GCD of 360 and 240" appear "similar" in embedding space. But flat cosine similarity can't tell you that the operation matches and only the arguments differ. Every near-miss requires a full LLM call even when parameter substitution would suffice.

The deeper problem

Even if recall works perfectly, the system is still just a cache. It remembers what happened but doesn't understand why. It can't:

  • Plan multi-step solutions to novel problems
  • Compose known sub-procedures into new procedures
  • Abstract from specific examples to general rules
  • Predict outcomes before acting
  • Know what it knows and what it doesn't

What We Built and What We Learned

Phase 1-3: Fractal Brain — Neuron-based Recall

What it does: Treats embedding vectors as neurons. Per-token BERT hidden states are recursively segmented by trajectory direction changes into a NeuronTree. Three recall modes:

Mode      | Condition                                      | Action                        | LLM Calls
Exact     | All neurons match (root >0.95, children >0.90) | Replay stored solution        | 0
Variation | Operation neurons match, leaf neurons differ   | Substitute parameters         | 0
Novel     | No structural match                            | Full LLM call, store new tree | 1+

What we learned: This works. bench-cli validated ~50% LLM savings on math problems. The structural matching is genuinely better than flat cosine. SparseFingerprint gives O(1) candidate lookup. The 3-mode recall is the foundation.
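The three-mode decision can be sketched as a small classifier over per-neuron similarity scores. The thresholds come from the table above; the type names and shape are illustrative, not morphee-core's actual API:

```rust
/// Illustrative sketch of the 3-mode recall decision.
#[derive(Debug, PartialEq)]
pub enum RecallMode {
    Exact,     // replay stored solution, 0 LLM calls
    Variation, // substitute parameters, 0 LLM calls
    Novel,     // full LLM call, store new tree
}

/// Similarity scores for one candidate NeuronTree match
/// (hypothetical fields, for the sketch only).
pub struct TreeMatch {
    pub root_similarity: f32,
    pub child_similarities: Vec<f32>, // leaf/argument neurons
    pub operation_matches: bool,      // do the operation neurons line up?
}

pub fn classify_recall(m: &TreeMatch) -> RecallMode {
    // Exact: root > 0.95 and every child > 0.90 (thresholds from the table).
    let children_ok = m.child_similarities.iter().all(|&s| s > 0.90);
    if m.root_similarity > 0.95 && children_ok {
        RecallMode::Exact
    } else if m.operation_matches {
        // Operation matches, only the arguments differ: substitute parameters.
        RecallMode::Variation
    } else {
        RecallMode::Novel
    }
}
```

This is exactly what flat cosine similarity cannot express: the Variation arm depends on *which* neurons differ, not on an overall score.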

Biological Learning Loop (bench-cli)

What it does: Four improvements to the recall system, implemented in bench-cli:

  1. Code-first default — store executable procedures, not text answers. "The cerebellum stores how to throw, not where the ball landed."
  2. Self-verification — re-run stored code twice before trusting it. Cost: 0.5s, 0 LLM calls.
  3. Dual-path verification — for uncertain trees (confidence 0.35-0.8), run fresh LLM in parallel and compare. Max 3 verifications per tree lifetime.
  4. Dream replay — re-execute stored code during dream cycle. Boost working code, prune broken code.

What we learned: This is real intelligence. The system gets more confident over time, broken procedures get pruned, robust ones get strengthened. The insight "store procedures not facts" is foundational.
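The self-verification step above can be sketched as: run the stored procedure twice and only trust it if both runs agree, so flaky or non-deterministic code fails verification. The function shape is an assumption; `run` stands in for whatever executes stored_code (Python subprocess, WASM, ...):

```rust
/// Minimal sketch of self-verification: two runs must succeed and agree.
pub fn self_verify<F>(mut run: F) -> Option<String>
where
    F: FnMut() -> Result<String, String>,
{
    let first = run().ok()?;  // execution error -> don't trust
    let second = run().ok()?;
    // Non-deterministic output -> don't trust the stored procedure.
    if first == second { Some(first) } else { None }
}
```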

Problem: This logic lives in bench/cli/src/strategies/neuron_recall.rs. It should be the core of morphee-core's brain.

Organism Architecture (morphee-core)

What it does: Universal Organism trait (receive/learn), 6 scales (Neuron to Network), SignalGraphExecutor for signal propagation, SpaceOrganism wrapping Pipeline, LlmOrganism/WasmOrganism wrappers, Edge system with 5 EdgeKinds and AdaptiveFilter, Grammar/Substrat abstractions for multi-modal signals, gRPC proto definitions (7 RPCs), SpaceOrganismRegistry for multi-space management. 21 files, ~7,500 lines, 167 tests.

What we learned: This is infrastructure for intelligence, not intelligence itself. The organism architecture is a deployment model — it describes where intelligence runs, not how it thinks. It was built before we knew what the intelligence actually needs.


Critical Analysis

What's genuinely valuable

  • NeuronTree structure — structural matching is better than flat cosine
  • 3-mode recall (Exact/Variation/Novel) — validated, measurable LLM savings
  • SparseFingerprint — O(1) lookup, fast candidate retrieval
  • NeuronStore (3 implementations) — solid persistence layer
  • Code-first + verification — real learning loop
  • Dream consolidation concept — background self-improvement
  • Feature gating — clean separation, no regressions
  • Confidence tracking (reward.rs) — per-neuron quality signals

What's overengineered (the stacking problem)

The call chain for a simple query goes through too many layers:

Grammar.tokenize()
→ SubstratEncoder.encode()
→ FingerprintIndex.find_similar()
→ compare_trees()
→ NeuronMemory.recall()
→ SpaceOrganism.receive()
→ SignalGraphExecutor.propagate()
→ SpaceOrganismRegistry.send_signal()

Each layer adds abstraction but not intelligence. Many of these are premature abstractions for capabilities that don't exist yet (multi-modal signals, cross-organism communication, audio substrats, gRPC federation).

What's fundamentally missing

1. No compression. The system accumulates forever. After 1000 math problems, you have 1000 stored trees. Intelligence is compression — 1000 examples into 1 rule.

2. No environment model. The system stores what happened but not how the world works. It can't predict, plan, or explain.

3. No compositional reasoning. Novel problems composed of known sub-parts still fail because the system matches holistically. It can't decompose a problem, recognize the pieces, and combine known solutions.

4. Hebbian learning is too simple. weight += learning_rate * reward learns associations but not causation, abstraction, or composition. It's 1949 neuroscience.

The core issue

The system is a sophisticated cache with a biology metaphor painted on top. The biological naming (neurons, synapses, organisms, dream cycles) doesn't produce biological intelligence. The architecture describes the API of intelligence without implementing intelligence.


The Nature-Inspired Solution

Core principle: AI inspired by nature — not metaphor, mechanism

Real biological brains don't do similarity search. They don't route signals through typed graphs. They do something simpler and more powerful:

Predict constantly. Learn from being wrong.

The solution takes real neuroscience mechanisms and implements them faithfully:

Brain region      | Biological function                                                 | Morphee implementation
Hippocampus       | Short-term memory. Stores recent experiences. Replays during sleep. | Experience neurons (fresh, fragile, in NeuronStore)
Neocortex         | Long-term knowledge. Generalized patterns from hippocampal replay.  | Method/category neurons (born from dream consolidation)
Cerebellum        | Procedures. Fast, automatic, unconscious.                           | Verified stored_code, compiled to WASM
Prefrontal cortex | Planning. Breaks complex goals into steps. Working memory.          | Recursive decomposition ("How do I...?")
Amygdala          | Emotional tagging. Marks what's important/dangerous/rewarding.      | Reward signal + confidence
Thalamus          | Routing. Decides what gets attention and where signals go.          | Salience-weighted recall matching

These are not separate systems — they are stages of neuron maturation and roles within the same NeuronStore.


Neuron Growth and Maturation

The key insight: a neuron IS a "how do I"

Every neuron represents a piece of knowledge. Some neurons store specific experiences ("GCD of 48 and 18 = 6"). Others store generalized methods ("the Euclidean algorithm"). Others store categories ("number theory operations"). All are the same Neuron struct — the difference is their content and their stage.

Three kinds of neurons (same struct, different content)

Kind       | What it stores          | Born from                             | Example
Experience | A specific instance     | A query + LLM response                | "GCD(48,18) = 6"
Method     | A generalized procedure | Dream compressing similar experiences | "Euclidean algorithm"
Category   | A routing decision      | Dream compressing similar methods     | "Number theory"

All three are Neuron. The difference is what's in stored_code and where the synapses point:

  • Experience neurons are leaves
  • Method neurons are internal nodes with a verified procedure
  • Category neurons are roots that route to the right method
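Since all three kinds share one struct, the kind can be inferred from content rather than stored as a type. A minimal sketch with illustrative field names (the real struct lives in neuron.rs and is richer than this):

```rust
/// Stripped-down stand-in for the real Neuron struct.
pub struct Neuron {
    pub stored_code: Option<String>,
    pub child_synapses: Vec<u64>, // ids of child neurons, for the sketch
}

#[derive(Debug, PartialEq)]
pub enum NeuronKind {
    Experience, // leaf: one specific instance
    Method,     // internal node with a generalized procedure
    Category,   // root that routes, no procedure of its own
}

/// Kind is a function of content: leaves are experiences, internal
/// nodes with code are methods, internal nodes without code route.
pub fn kind_of(n: &Neuron) -> NeuronKind {
    if n.child_synapses.is_empty() {
        NeuronKind::Experience
    } else if n.stored_code.is_some() {
        NeuronKind::Method
    } else {
        NeuronKind::Category
    }
}
```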

Neurons grow upward through the dream cycle

Day 1:    50 experience neurons (individual GCD problems)
Dream 1:  1 method neuron born ("Euclidean algorithm")
          50 experiences become its children via synapses

Day 5:    Method neurons for GCD, LCM, factoring, prime testing
Dream 5:  1 category neuron born ("number theory")
          Methods become its children

Day 20:   Categories for number theory, algebra, geometry
Dream 20: 1 domain neuron born ("mathematics")

The brain grows a hierarchy. Not designed — emerged from experience and compression. Each level is more abstract, more general, more confident.


The Biological Learning Loop

The universal process

One question, recursively applied:

"How do I X?"
→ Do I know? (confident) → Do it
→ Don't know? → Break X into sub-questions
    → "How do I A?" → I know this! → Do it
    → "How do I B?" → Don't know → Break B into...
        → "How do I B1?" → I know this! → Do it
        → "How do I B2?" → I know this! → Do it
    → Combine A + B → Try it → Did it work?
        → Yes → Now I KNOW how to do X (new neuron, increase confidence)
        → No → Try different decomposition

The recursion stops when it hits grounded confidence — something the brain has actually tried and verified. Everything above that is a plan built from verified pieces.
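The loop above can be sketched as one recursive function. `KnownSkills` plays the role of the neuron store, `decompositions` stands in for the LLM decomposition step, and the 0.7 confidence cutoff is an assumed "grounded enough" threshold — all names here are hypothetical:

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for the neuron store.
pub struct KnownSkills {
    /// task -> grounded confidence (verified by actually doing it)
    pub confidence: HashMap<String, f32>,
    /// task -> sub-tasks (in the real system, produced by the LLM)
    pub decompositions: HashMap<String, Vec<String>>,
}

/// "How do I `task`?" — true if it can be done from verified pieces.
pub fn how_do_i(skills: &KnownSkills, task: &str, depth: usize) -> bool {
    // Grounded confidence: stop recursing, just do it.
    if skills.confidence.get(task).copied().unwrap_or(0.0) > 0.7 {
        return true;
    }
    if depth == 0 {
        return false; // recursion budget exhausted -> LLM fallback
    }
    // Otherwise decompose and recursively check every sub-task.
    match skills.decompositions.get(task) {
        Some(subs) => subs.iter().all(|s| how_do_i(skills, s, depth - 1)),
        None => false, // no known decomposition -> LLM fallback
    }
}
```

The real version would also execute and combine the sub-procedures; this sketch only shows the confidence-grounded recursion.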

Why this is better than pattern matching

Situation                      | Pattern matching (current) | Recursive "How do I...?"
Seen exact problem             | Works (Exact recall)       | Works (confident, just do it)
Seen similar problem           | Works-ish (Variation)      | Works (reuse sub-procedures)
Novel problem, known sub-parts | Fails (no holistic match)  | Works (decompose, reuse pieces)
Completely novel               | LLM fallback               | Decompose, try, learn for next time

The third row is the breakthrough. The current system can't handle "novel problem composed of known parts" because it matches holistically. The recursive approach decomposes first, then recognizes the pieces.

This IS the Organism trait, simplified

receive(signal) = "How do I handle this signal?"
    → known with confidence → execute stored procedure
    → unknown → decompose → recursively receive sub-signals → combine → learn

learn(reward) = update confidence on the procedure used
    → high reward → strengthen (increase confidence, strengthen synapses)
    → low reward → weaken (decrease confidence, try different decomposition next time)

The Organism trait doesn't need Signal/Edge/Grammar/Substrat/Executor layers. It's one recursive function that uses neurons directly.


Brain Regions as Neuron Stages

Not separate systems — stages of maturation

A neuron doesn't live in a "region." It matures through regions over its lifetime:

New neuron (experience)

│ confidence < 0.3 — HIPPOCAMPUS
│ Fragile, might be wrong, detailed, specific

▼ dream consolidation (replayed, tested, compared)

│ confidence 0.3–0.7 — NEOCORTEX (maturing)
│ Verified a few times, starting to generalize
│ Method neurons born here from experience clusters

▼ continued verification + dream compression

│ confidence > 0.7 — NEOCORTEX (stable)
│ Reliable method, generalized across many experiences
│ Category neurons born here from method clusters

▼ extensive verification + code compilation

│ confidence > 0.95 + verified code — CEREBELLUM
│ Automatic procedure, compiled to WASM
│ Fast, unconscious, doesn't need LLM

This is myelination — the biological process where frequently-used neural pathways get wrapped in myelin sheath, making them faster. In Morphee: verified procedures get compiled to WASM.

The stage field

pub enum NeuronStage {
    /// Fresh experience, fragile, might be wrong
    Hippocampus,
    /// Verified, starting to generalize (or generalized method)
    Neocortex,
    /// Automatic procedure, compiled to WASM, fast
    Cerebellum,
}

This makes the previously implicit maturity distinction explicit. A neuron's stage is determined by its confidence, verification history, and whether it has compiled code.
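A minimal sketch of that determination, using the thresholds from the maturation diagram above (< 0.3 hippocampus, > 0.95 plus compiled code cerebellum, neocortex in between); the function name is illustrative:

```rust
#[derive(Debug, PartialEq)]
pub enum NeuronStage {
    Hippocampus,
    Neocortex,
    Cerebellum,
}

/// Stage is derived, not stored independently of the evidence.
pub fn stage_for(confidence: f32, has_compiled_code: bool) -> NeuronStage {
    if confidence > 0.95 && has_compiled_code {
        NeuronStage::Cerebellum // automatic, WASM-speed
    } else if confidence >= 0.3 {
        NeuronStage::Neocortex // verified, generalizing
    } else {
        NeuronStage::Hippocampus // fresh, fragile
    }
}
```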


Dream Consolidation — Neurogenesis

The dream cycle is hippocampus-to-neocortex transfer

In biology, the hippocampus replays recent experiences to the neocortex during sleep. The neocortex gradually extracts patterns and stores them as generalized knowledge. Individual episodic memories fade; general knowledge remains.

Three kinds of neuron births

1. Merge birth (generalization)

50 similar experience neurons → 1 method neuron:

Before dream:
    Neuron: "GCD(48,18) = 6"     confidence: 0.8
    Neuron: "GCD(360,240) = 120" confidence: 0.7
    Neuron: "GCD(17,5) = 1"      confidence: 0.9
    ... (47 more)

After dream:
    NEW Neuron: "Euclidean algorithm" confidence: 0.95
        → stored_code: "def gcd(a, b): return a if b == 0 else gcd(b, a % b)"
        → children: [synapses to all 50 experience neurons]
        → activation: centroid of children's activations
        → fingerprint: broader than any individual (matches more queries)
The method neuron replaces the experience neurons for recall purposes. It matches more broadly (any GCD query, not just ones similar to specific past queries) and has higher confidence (verified across 50 instances).
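The "activation = centroid of children's activations" step is simple vector averaging. A sketch using plain `Vec<f32>` (morphee-core's real activations are 384-dim embeddings):

```rust
/// Activation of a newborn method neuron: the centroid of its
/// children's activation vectors (merge birth, see above).
pub fn centroid(children: &[Vec<f32>]) -> Vec<f32> {
    assert!(!children.is_empty(), "a merge birth needs at least one child");
    let n = children.len() as f32;
    let mut out = vec![0.0_f32; children[0].len()];
    for child in children {
        // Accumulate each child's contribution, scaled by 1/n.
        for (acc, v) in out.iter_mut().zip(child) {
            *acc += v / n;
        }
    }
    out
}
```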

2. Split birth (mitosis)

1 neuron that covers too many different cases → specialized children:

Before dream:
    Neuron: "math operations" — handles GCD, factoring, AND sorting
        → coherence dropping (too diverse)

After dream:
    Neuron: "number theory" (GCD, factoring, primes)
    Neuron: "algorithms" (sorting, searching)
        → each more coherent, more accurate in its domain

The existing MitosisDetector already handles this — it monitors coherence and triggers splits.

3. Connection birth (association)

Two unrelated neurons that co-activate → new synapse:

Observation: "recipe" neurons and "shopping list" neurons
frequently activate in the same conversation

Birth: new synapse connecting recipe → shopping list
→ next time a recipe is discussed, shopping list neurons
are pre-activated (prediction)

Pruning — unused neurons die

Biological brains prune ~50% of adolescent synapses. "Use it or lose it."

  • Neurons not recalled in 30+ days with low confidence → die
  • Synapses with weight < 0.05 → pruned
  • Experience neurons that a method neuron covers → can be pruned (the method IS the compressed knowledge)

The brain gets leaner and more accurate over time, not just bigger.
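The pruning rules above can be sketched directly. The 30-day and 0.05 thresholds come from the text; the 0.3 "low confidence" cutoff is an assumption (matching the hippocampus boundary), and the struct is illustrative:

```rust
/// Hypothetical per-neuron metadata consulted by the pruning pass.
pub struct PruneCandidate {
    pub days_since_recall: u32,
    pub confidence: f32,
    pub covered_by_method: bool, // a method neuron generalizes it
}

/// Neuron death: stale + low confidence, or compressed into a method.
pub fn should_prune(n: &PruneCandidate) -> bool {
    (n.days_since_recall >= 30 && n.confidence < 0.3) || n.covered_by_method
}

/// Synapse death: weights below 0.05 are dropped.
pub fn prune_synapses(weights: &mut Vec<f32>) {
    weights.retain(|&w| w >= 0.05);
}
```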


Prediction and Surprise

The brain is a prediction machine

This is the dominant theory of how biological brains work (Karl Friston's Free Energy Principle, Andy Clark's Predictive Processing). The brain doesn't react to inputs — it predicts what inputs will arrive and learns from the mismatch.

How prediction works in Morphee

1. PERCEIVE: embed query, understand context
2. PREDICT:  based on current neurons + context, predict what kind of
             answer is needed and what the outcome should be
3. ACT:      execute the chosen procedure (or decompose if novel)
4. OBSERVE:  get the actual result
5. COMPARE:  prediction vs reality = surprise signal
6. LEARN:    if surprised → update neurons (stronger learning signal)
             if not surprised → confirm (small confidence boost)

Why prediction error beats binary reward

Current learning: "this was correct" (+1) or "this was wrong" (-1). Binary.

Prediction-based learning: "I predicted the answer would be 6 but it was 12." This tells you:

  • How you were wrong (direction and magnitude of error)
  • Where in the procedure the error occurred
  • What you need to update (the specific neurons that made bad predictions)

A prediction error is a much richer learning signal than a binary reward.

Practical implementation

Before executing a recalled procedure, the brain predicts:

  • What answer type to expect (number? text? code?)
  • Approximate confidence in the result
  • Which sub-procedures will be needed

After execution, compare. Large surprise → large learning update. Small surprise → small confirmation.

This can start simple: predict confidence (how sure am I this will work?) and compare to actual success/failure. Even this basic prediction loop gives the brain self-awareness — "I predicted I was 80% sure but I was wrong 50% of the time in geometry, so my geometry confidence is miscalibrated."
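The simple version described here — predict a confidence, observe success or failure, treat the gap as surprise — fits in a few lines. Function names are illustrative; averaging surprise per domain is one assumed way to detect the miscalibration in the geometry example:

```rust
/// Surprise: distance between predicted confidence and what happened.
pub fn surprise(predicted_confidence: f32, succeeded: bool) -> f32 {
    let actual: f32 = if succeeded { 1.0 } else { 0.0 };
    (predicted_confidence - actual).abs()
}

/// Mean surprise over a domain's recent (prediction, outcome) pairs.
/// A high value means that domain's confidence is miscalibrated.
pub fn calibration_error(history: &[(f32, bool)]) -> f32 {
    if history.is_empty() {
        return 0.0;
    }
    let total: f32 = history.iter().map(|&(p, ok)| surprise(p, ok)).sum();
    total / history.len() as f32
}
```

Note how this is already richer than binary reward: a failure at predicted confidence 0.9 produces a much larger learning signal than a failure at 0.4.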


Recursive Decomposition — The Prefrontal Cortex

Planning as recursive "How do I...?"

The prefrontal cortex breaks complex goals into sequences of simpler actions, each of which the cerebellum/motor cortex knows how to execute.

In Morphee, this is the decompose() step for Novel queries. When no neuron matches with confidence:

  1. Ask the LLM to break the problem into sub-questions
  2. For each sub-question, recursively check: "Do I know this?"
  3. Known sub-parts → execute directly (0 LLM calls for those parts)
  4. Unknown sub-parts → recurse deeper or LLM fallback
  5. Combine results → verify → store as new neuron

Example: "Help me plan a birthday party for a 7-year-old"

"How do I plan a birthday party for a 7-year-old?"
→ Decompose:
    "How do I pick age-appropriate activities?" → I know this (confidence 0.8)
    "How do I create a guest list?" → I know this (confidence 0.9)
    "How do I plan a menu?" → Decompose further:
        "How do I find kid-friendly recipes?" → I know this (confidence 0.7)
        "How do I estimate quantities?" → I know this (confidence 0.6)
    "How do I send invitations?" → I know this (confidence 0.85)
→ Combine sub-answers into a plan
→ Present to user
→ If user confirms it worked → new neuron: "birthday party planning" (confidence: 0.4, first attempt)
Next time someone asks about party planning, the brain has a neuron. After 3 successful party plans, the neuron matures to neocortex stage with a reliable procedure.

The decomposer is initially the LLM, eventually learned

For now: the LLM does the decomposition. Each Novel query costs 1 LLM call for decomposition.

Over time: the brain learns meta-rules about decomposition:

  • "Math problems decompose into: identify operation → apply formula → verify"
  • "Planning problems decompose into: gather requirements → generate options → evaluate → decide"

These meta-rules are themselves neurons — "How do I decompose a math problem?" becomes a confident method neuron.


Attention and Salience

The thalamus filters what reaches consciousness

Without attention filtering, the brain would search every neuron equally for every query. This is wasteful and can produce false matches.

When a query arrives, neurons are weighted by:

Factor     | Weight | Rationale
Context    | High   | Neurons in the current space/domain are more relevant
Recency    | Medium | Recently used neurons are more likely to be relevant again
Confidence | Medium | Method neurons (confident) should match before experience neurons (fragile)
Surprise   | High   | Neurons with recent prediction errors need attention (active learning)
Method and category neurons naturally rank higher because they have broader fingerprints and higher confidence. The thalamus just makes this explicit.
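One way to make that weighting concrete is a linear score over the four factors, scaling the raw fingerprint match. Mapping High to 1.0 and Medium to 0.5, and the linear form itself, are assumptions for illustration, not morphee-core code:

```rust
/// Hypothetical per-neuron inputs to the thalamus filter, each in 0..1.
pub struct SalienceFactors {
    pub context_match: f32, // neuron is in the current space/domain
    pub recency: f32,       // decays with time since last recall
    pub confidence: f32,    // the neuron's own confidence
    pub surprise: f32,      // recent prediction error (active learning)
}

/// Salience-weighted match score: High factors at 1.0, Medium at 0.5.
pub fn salience(fingerprint_score: f32, f: &SalienceFactors) -> f32 {
    fingerprint_score
        * (1.0 * f.context_match + 0.5 * f.recency + 0.5 * f.confidence + 1.0 * f.surprise)
}
```

With equal fingerprint scores, a confident method neuron outranks a fragile experience neuron, and a recently-surprising neuron jumps the queue.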

Active learning via surprise

When a neuron has high surprise (prediction error), it becomes more salient. The brain actively seeks opportunities to test and improve that neuron. This is biological — novel or unexpected stimuli grab attention.

In practice: if the brain was recently wrong about geometry, geometry-related neurons become more salient. The next geometry query gets extra attention, possibly triggering dual-path verification even if confidence would normally be high enough to trust.


Myelination — The Compilation Chain

Frequently-used pathways get faster

In biology, myelination wraps axons in insulating sheath, dramatically increasing signal speed. Frequently-used pathways get myelinated first.

In Morphee: verified procedures get compiled to increasingly efficient forms.

Experience neuron (hippocampus):
    stored_code: Python string, interpreted
    Execution:   ~500ms (Python subprocess)

Method neuron (neocortex):
    stored_code: verified Python, deterministic
    Execution:   ~200ms (cached subprocess)

Cerebellum neuron:
    stored_code: compiled to WASM
    Execution:   ~1ms (native WASM runtime)

This connects to the existing compilation chain: LlmRaw → Skill → Wasm. The difference is that the dream cycle drives it automatically based on neuron maturation:

  1. Neuron reaches confidence > 0.95
  2. Neuron has stored_code that has been verified 100+ times
  3. Dream cycle compiles code to WASM
  4. Neuron stage → Cerebellum
  5. Future recalls execute at WASM speed, not LLM speed

The brain literally grows new WASM extensions as it learns. The extension ecosystem becomes the brain's cerebellum.
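The dream-cycle trigger for steps 1-2 above is a simple predicate; the actual compilation would then go through the existing LlmRaw → Skill → Wasm chain. A sketch, with an illustrative function name:

```rust
/// Myelination check run by the dream cycle, per the criteria above:
/// confidence > 0.95 and stored code verified 100+ times.
pub fn ready_for_myelination(confidence: f32, verified_runs: u32, has_stored_code: bool) -> bool {
    has_stored_code && confidence > 0.95 && verified_runs >= 100
}
```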


What Changes in Code

Structural changes (small)

1. Add stage to Neuron (~5 lines in neuron.rs)

pub enum NeuronStage {
    Hippocampus, // fresh, fragile
    Neocortex,   // verified, generalized
    Cerebellum,  // compiled WASM, automatic
}

Add stage: NeuronStage field to Neuron or NeuronTree.

2. Promote Biological Learning Loop from bench-cli to morphee-core (~200 lines)

Move code-first verification, dual-path, and dream replay from bench/cli/src/strategies/neuron_recall.rs into crates/morphee-core/src/brain/recall.rs. This makes the learning loop available to morphee-server, not just benchmarks.

3. Add decompose() for Novel queries (~50 lines in recall.rs)

When recall returns Novel, ask the LLM to decompose into sub-queries. Recursively recall each sub-query. Combine results.

4. Dream consolidation produces births (~80 lines in dream.rs)

Change dream consolidate() from "merge similar neurons into one" to "create a parent method neuron with children." The method neuron has the generalized procedure; children are the specific experiences.

5. Salience-weighted search (~30 lines in recall.rs)

Weight fingerprint matches by context, recency, confidence, and surprise. Method neurons rank higher than experience neurons.

6. Prediction tracking (~40 lines, new or in reward.rs)

Before execution, record predicted confidence. After execution, compute surprise (prediction vs reality). Use surprise as learning signal alongside reward.

What stays exactly the same

  • Neuron, Synapse, NeuronTree structs
  • NeuronStore trait + 3 implementations (InMemory, File, SQLite)
  • SparseFingerprint, FingerprintIndex
  • TrajectorySegmenter (still useful for perceptual matching)
  • Exact/Variation/Novel recall modes
  • Feature gating (fractal-brain)

What gets paused (not deleted)

  • executor.rs — signal graph propagation (premature until neurons do real work)
  • space_registry.rs — multi-space management (premature until single-space works)
  • llm_organism.rs, wasm_organism.rs — wrappers that add no intelligence
  • proto_convert.rs, organism.proto — gRPC federation (premature)
  • grammar/, substrat.rs — multi-modal abstractions (premature)
  • dream_scheduler.rs — background timer (can wait until dream logic is worth running)

The Organism trait stays as a long-term interface. The paused files stay feature-gated. When the intelligence is working and needs multi-space deployment, the infrastructure is there.


Implementation Plan

Week 1: Promote the learning loop

  • Move code-first + verification + dual-path from bench-cli to morphee-core NeuronMemory
  • Add NeuronStage to Neuron
  • Confidence-gated recall: trust / verify / decompose / anti-recall thresholds
  • This immediately makes the brain useful for morphee-server, not just benchmarks

Week 2: Neuron births in dream cycle

  • Dream consolidation creates parent method neurons from experience clusters
  • Method neurons have generalized procedures (code or templates)
  • Experience neurons become children of method neurons
  • Pruning: low-confidence, unrecalled neurons die

Week 3: Recursive decomposition + prediction

  • decompose() for Novel queries (LLM breaks problem into sub-questions)
  • Recursive how_do_i() — check each sub-question against known neurons
  • Prediction: before executing, predict confidence. After, compute surprise.
  • Surprise-weighted learning (richer signal than binary reward)

Week 4: Compilation chain + salience

  • Myelination: neurons reaching cerebellum stage get compiled to WASM
  • Salience-weighted search: context, recency, confidence, surprise
  • Wire into morphee-server (the brain serves real users)

Verification at each step

# Week 1
cargo test -p morphee-core --features fractal-brain # all brain tests pass
cargo test -p bench-cli # bench-cli still works

# Week 2
# New tests: dream birth, method neuron creation, experience pruning

# Week 3
# New tests: decomposition, recursive recall, prediction tracking

# Week 4
cargo test -p morphee-core --features fractal-brain,wasm-cranelift # WASM compilation
cd frontend && npm run build # frontend unchanged

Future Ideas

Meta-learning: learning how to decompose

The brain currently relies on the LLM for decomposition. Over time, decomposition patterns become neurons themselves:

  • "How do I decompose a math problem?" → method neuron with decomposition template
  • "How do I decompose a planning task?" → method neuron with planning template

The brain learns to decompose faster and more accurately without LLM calls.

Multi-space brain topology

Once single-space intelligence works, SpaceOrganisms can have cross-space edges:

  • A "Math" space has a strong synapse to a "Calculator" space
  • Novel math queries propagate to Calculator via the edge
  • Each space learns independently, edges learn routing

The existing SpaceOrganismRegistry and SignalGraphExecutor infrastructure supports this — it was built early but will become relevant at this stage.

Emotional valence beyond reward

The amygdala doesn't just say "good/bad." It tags memories with emotional nuance:

  • Urgency: "this was time-sensitive" → prioritize in future similar contexts
  • Social: "this involved other people" → consider group dynamics
  • Novelty: "this was completely new" → pay extra attention during consolidation
  • Frustration: "the user had to ask 3 times" → this procedure needs improvement

These could be additional dimensions on the reward signal.

Dreaming with narrative

Biological dreams don't just replay — they create novel combinations. "What if I combined the recipe method with the scheduling method?" This creative recombination during consolidation could discover new procedures that were never explicitly requested.

Federated brain (specialist neurons)

The existing knowledge network infrastructure (specialist neurons, trust roots, federated sync) becomes a network-scale brain:

  • Specialist servers = domain expert neurons
  • Trust roots = credential verification for knowledge sharing
  • Federated sync = brains learning from each other

Each instance's brain shares its compressed method neurons (not raw experiences) with the network. The network brain is made of other brains.

The brain reads its own code

Morphee already has a feature idea for "Self-Aware AI Development" — the AI reads and improves its own codebase. With the brain, this becomes: the brain's cerebellum (WASM extensions) can be inspected, tested, and improved by the brain's prefrontal cortex. The brain literally debugs its own procedures.


Design Decisions

1. Nature as mechanism, not metaphor

Every biological term maps to a concrete implementation. "Hippocampus" = neuron with confidence < 0.3, not a separate module. "Myelination" = compilation to WASM, not a vague concept. If a biological term doesn't map to code, we don't use it.

2. Same Neuron struct everywhere

Experience neurons, method neurons, and category neurons are the same type. The kind is determined by content (has children? has code? is abstract?) and stage (hippocampus/neocortex/cerebellum). No type proliferation.

3. Intelligence before infrastructure

Build the learning loop, then the deployment model. Don't build signal graphs, multi-space registries, and gRPC services before a single space can learn effectively.

4. Compression IS intelligence

Storing 1000 examples is memory. Compressing 1000 examples into 1 rule is intelligence. The dream cycle's primary job is compression, not deduplication.

5. Prediction error over binary reward

"I predicted X, got Y, the error is Z" is a richer learning signal than "correct/incorrect." Prediction error tells you how, where, and why you were wrong.

6. Grounded confidence

The brain doesn't trust similarity. It trusts verified experience. A neuron is confident because its procedure has been executed and verified, not because its embedding is close to a query.

7. The brain grows its own extensions

The WASM compilation chain is the brain's cerebellum. Learned procedures that reach maximum maturation get compiled to WASM automatically. The extension ecosystem IS the brain's long-term procedural memory.

8. Feature-gated incremental evolution

All changes are behind fractal-brain feature gate. The Pipeline continues to work. The brain is opt-in. If it breaks, turn it off. If it works, turn it on.