Morphee Architecture
Overview
Morphee is a conversational AI agent platform for groups — families, classrooms, teams. The architecture is built on one principle: everything is an Integration.
The LLM, the memory system, the frontend, and external services (Gmail, Calendar, etc.) are all Integrations with the same interface. The Agent Orchestrator is the only special component — it's the loop that uses the LLM Integration to think and other Integrations to act.
┌──────────────────────────────────────────────────────────────────────┐
│ User (Chat Input) │
└──────────────────────────────┬───────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────┐
│ Frontend (React + Tauri v2) │
│ │
│ Chat UI ←──── Dynamic Components (AI-controlled) │
│ │ ▲ │
│ │ │ render commands │
│ ▼ │ │
│ WebSocket / SSE ───── ──┤ │
│ │ │ │
│ │ HTTP │ IPC (invoke) │
│ ▼ │ ▼ │
│ ┌───────────┐ ┌────────────────────────────────────────────┐ │
│ │ Python │ │ Tauri Rust Backend (local) │ │
│ │ Backend │ │ │ │
│ │ (remote) │ │ ┌─────────┐ ┌───────┐ ┌──────┐ ┌──────┐ │ │
│ │ │ │ │ LanceDB │ │ Git │ │ ONNX │ │ GGUF │ │ │
│ │ │ │ │ (native)│ │(git2) │ │(ort) │ │(candle)│ │ │
│ │ │ │ └─────────┘ └───────┘ └──────┘ └──────┘ │ │
│ └─────┬─────┘ └────────────────────────────────────────────┘ │
└────────┼────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────┐
│ Python Backend (api.morphee.app) │
│ │
│ ┌───────────────────────────────────────────────────────────────┐ │
│ │ Agent Orchestrator │ │
│ │ │ │
│ │ Message → LLM Integration (think) → Tool Calls │ │
│ │ │ │ │
│ │ ┌──────────────────────────┤ │ │
│ │ ▼ ▼ ▼ │ │
│ │ Memory Int. Frontend Int. Service Int. │ │
│ │ (recall) (render UI) (Gmail, Cal...) │ │
│ └───────────────────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────┐ ┌──────────┐ ┌──────────────────────────┐ │
│ │ Task System │ │ Events │ │ Auth & Groups │ │
│ │ (execution │ │ (Redis │ │ (Supabase JWT) │ │
│ │ tracking) │ │ pub/sub)│ │ │ │
│ └──────┬───────┘ └────┬─────┘ └──────────────────────────┘ │
│ │ │ │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ VaultProvider (credential storage — NOT an Integration) │ │
│ │ EnvVault │ Keychain │ 1Password │ Mobile │ Cloud KMS │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │
└──────────┼───────────────┼────────────────────────────────────────────┘
│ │
┌──────▼──────┐ ┌─────▼────────┐
│ PostgreSQL │ │ Redis │
│ (data + │ │ (events) │
│ index) │ │ │
└─────────────┘ └──────────────┘
The frontend talks to two backends:
- Python backend (HTTP) — auth, groups, cloud LLM, external integrations, orchestration
- Tauri Rust backend (IPC) — local memory (LanceDB + Git), local ML (ONNX/GGUF), audio/video
The Integration/Interface abstraction means the orchestrator doesn't know or care which backend handles a given action. A "Cloud Claude" Interface calls the Python backend; a "Local Llama" Interface calls Tauri Rust. Same contract.
Core Concepts
Group
A collection of people sharing access to Morphee. Supports families, classrooms, teams, clubs.
Space
The central organizing concept in Morphee. An isolated context where conversations, tasks, memory, and Interfaces live together.
Spaces can be nested — a Space can contain sub-Spaces, forming a hierarchy. Sub-Spaces inherit their parent's Interfaces and memory access but can override with their own. This scales from a simple family to complex multi-client work environments.
Every user has an implicit personal Space. Shared Spaces are created for group contexts. The AI manages Space routing naturally through conversation.
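The inheritance-with-override behavior of nested Spaces could be sketched like this — a minimal illustration, assuming hypothetical `Space` and `resolve_interface` names (not Morphee's actual API):

```python
# Hypothetical sketch: Interface lookup with parent-Space inheritance.
# A sub-Space uses its own Interface when configured, else walks up the hierarchy.

class Space:
    def __init__(self, name, parent=None, interfaces=None):
        self.name = name
        self.parent = parent                 # None for a root Space
        self.interfaces = interfaces or {}   # integration name -> Interface config

    def resolve_interface(self, integration: str):
        """Return this Space's Interface for `integration`, inheriting from ancestors."""
        space = self
        while space is not None:
            if integration in space.interfaces:
                return space.interfaces[integration]
            space = space.parent
        raise KeyError(f"no Interface for {integration!r} in this Space hierarchy")

family = Space("Family", interfaces={"llm": "Claude", "calendar": "Family GCal"})
meal_planning = Space("Meal Planning", parent=family, interfaces={"llm": "Haiku"})

print(meal_planning.resolve_interface("llm"))       # overridden locally -> Haiku
print(meal_planning.resolve_interface("calendar"))  # inherited from Family
```

The same walk applies to memory access: a sub-Space sees its parent's context unless it overrides it.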
Everything is an Integration
| Integration | Role | Actions |
|---|---|---|
| LLM | The brain — thinks, decides, generates | chat, complete, embed, summarize |
| Memory | The knowledge — remembers, recalls, searches | search, store, recall, forget |
| Frontend | The face — renders UI dynamically (10 actions) | show_card, show_list, show_form, show_choices, show_actions, show_progress, show_table, render, update, dismiss |
| Tasks | Task management | list, create, update_status |
| Spaces | Space management | list, get_current |
| Cron | Scheduling | schedule, list, get, cancel |
| Notifications | Alerts & reminders | send, list, mark_read |
| Google Calendar | Calendar events | list_events, create_event, update_event, delete_event, check_availability |
| Gmail | Email service | list_emails, read_email, send_email, draft_email, reply_email |
| Filesystem | Local file management | list_files, read_file, write_file, delete_file, search_files |
| Skills | Dynamic workflow management | create, list, get, update, delete |
| Conversations | Conversation management | list_conversations, get_conversation, update_conversation, delete_conversation |
| Settings | Conversational settings (V0.9) | get_profile, update_profile, get_notification_preferences, update_notification_preferences, get_privacy_settings, update_privacy_settings, get_appearance_settings, update_appearance_settings |
| Slack | Team messaging | send_message, list_channels, read_messages |
| JIRA | Issue tracking | list_issues, get_issue, create_issue, update_issue, transition_issue, add_comment, list_projects |
| Echo | Testing | echo, delay_echo |
| Webhook | HTTP | receive, send |
| Onboarding | New user setup | create_group, create_spaces, complete |
Each Integration can have multiple Interfaces (configured instances). Example: Two LLM Interfaces — "Claude" for reasoning, "Haiku" for summarization.
Interface = Integration + Configuration + Vault
An Interface adds credentials and settings to make an Integration operational:
- LLM Interface: API key (vault) + model + temperature
- Gmail Interface: OAuth tokens (vault) + account
- Memory Interface: pgvector (via PostgreSQL) + Git repo path
- Frontend Interface: WebSocket connection
Secret values (API keys, OAuth tokens) are never stored in the database. The Interface config stores vault:// references that resolve through the active VaultProvider at execution time. See interfaces.md — Configuration Security & Vault for details.
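As a rough sketch of that resolution step — with `EnvVault` and `resolve_config` as illustrative stand-ins for the real VaultProvider contract:

```python
# Illustrative sketch of resolving vault:// references at execution time.
# Public config values pass through; only vault:// references hit the vault.
import os

class EnvVault:
    """Toy VaultProvider: vault://llm/api_key -> env var LLM_API_KEY."""
    def get(self, ref: str) -> str:
        key = ref.removeprefix("vault://").replace("/", "_").upper()
        value = os.environ.get(key)
        if value is None:
            raise KeyError(f"secret not found for {ref}")
        return value

def resolve_config(config: dict, vault: EnvVault) -> dict:
    """Replace vault:// references with real secrets at execution time."""
    return {
        k: vault.get(v) if isinstance(v, str) and v.startswith("vault://") else v
        for k, v in config.items()
    }

os.environ["LLM_API_KEY"] = "sk-example"  # demo only
config = {"model": "claude-sonnet", "api_key": "vault://llm/api_key"}
resolved = resolve_config(config, EnvVault())
print(resolved["api_key"])  # sk-example — the database only ever saw the vault:// reference
```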
Interfaces are scoped per-Space and per-Group.
Knowledge Pipeline & Runtime Hierarchy
Morphee's architecture follows a single principle: knowledge enters as conversation and exits as portable, shareable intelligence.
The Compilation Chain
Knowledge exists at different optimization levels, each backed by a runtime:
Level 0: Raw knowledge (conversations, memories) → LLMRuntime (expensive, flexible)
Level 1: Structured skills (YAML step sequences) → PythonRuntime (cheap, structured)
Level 2: Canvas components (React/JS) → JSRuntime (fast, visual)
Level 3: Compiled extensions (.wasm binaries) → WasmRuntime (fastest, portable)
BaseMorphRuntime is the abstract contract for all runtimes:
class BaseMorphRuntime(ABC):
    async def load(self, source) -> LoadedModule
    async def describe(self, module) -> InterfaceDefinition
    async def execute(self, module, action, params, context) -> ActionResult
    async def teardown(self, module) -> None
Request Flow
User message
↓
VectorRouter (memory lookup — ~10ms, free, handles ~65% of requests)
↓ fan-out across IndexerIntegrations (see features/indexer-integration.md)
↓ miss
PythonRuntime → built-in integrations (14 hardcoded, fast)
WasmRuntime → installed .wasm extensions (portable, sandboxed)
JSRuntime → canvas components (frontend, interactive)
LLMRuntime → AI reasoning (powerful, expensive — last resort)
The LLM is the runtime of last resort. Vector search, structured skills, and compiled extensions handle most requests before the LLM is ever invoked.
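The tiered dispatch above can be sketched as a simple fall-through chain — stubbed layers and thresholds here are illustrative, not the real router:

```python
# Minimal sketch of tiered request dispatch: cheap layers first, LLM last.
from __future__ import annotations

def vector_router(message: str) -> str | None:
    """Pretend memory lookup: answer directly on a high-similarity hit."""
    cache = {"what's my wifi password": "It's in the family Space: hunter2"}
    return cache.get(message.lower())

def builtin_integrations(message: str) -> str | None:
    """Pretend PythonRuntime: structured handling for known intents."""
    if message.lower().startswith("echo "):
        return message[5:]
    return None

def llm_runtime(message: str) -> str:
    """Last resort — expensive, flexible reasoning (stubbed here)."""
    return f"[LLM] thinking about: {message}"

def handle(message: str) -> str:
    for layer in (vector_router, builtin_integrations):
        result = layer(message)
        if result is not None:
            return result          # handled without ever invoking the LLM
    return llm_runtime(message)

print(handle("echo hello"))             # handled by the PythonRuntime tier
print(handle("plan a birthday party"))  # falls through to the LLM tier
```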
Space → Integration
Any Space can become a shareable Integration through the compilation chain:
Space (accumulated knowledge)
→ AI strips PII, extracts reusable patterns
→ Packages as .morph/ bundle
→ Compiles: text → YAML skills → JS components → WASM
→ Published to marketplace (OCI registry or git clone)
→ Another Space installs it as an Integration
This means non-developers create "apps" by using Morphee. Their accumulated expertise IS the source code.
System Components
1. Agent Orchestrator (Built)
The only "special" component. It runs the conversation loop:
1. User sends message
2. Load context: Space, Group, conversation history
3. Build prompt: system prompt + context + message
4. Call LLM Integration: generate response (with tool definitions)
5. If tool calls: check ai_access — EXECUTE runs immediately, PROPOSE pauses for user approval
6. Execute approved tools via Interface Manager, feed results back to LLM
7. Loop until LLM returns end_turn or max_turns reached
8. Stream response to frontend via SSE (token, tool_use, tool_result, approval_request, done events)
Memory context is injected automatically via the RAG pipeline (Phase 2) — relevant memories are appended to the system prompt before each LLM call. After conversations reach 10+ messages, facts/preferences/events are auto-extracted and stored.
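The ai_access gate in step 5 could look like the following — the enum values and tool table are illustrative, not the orchestrator's actual code:

```python
# Hedged sketch of the EXECUTE/PROPOSE gate: EXECUTE tool calls run
# immediately; PROPOSE tool calls pause for user approval.
from enum import Enum

class AIAccess(Enum):
    EXECUTE = "execute"
    PROPOSE = "propose"

# Access level per tool (hypothetical subset)
TOOL_ACCESS = {
    "memory__search": AIAccess.EXECUTE,
    "memory__forget": AIAccess.PROPOSE,
    "gmail__send_email": AIAccess.PROPOSE,
}

def gate_tool_calls(tool_calls):
    """Split the LLM's tool calls into immediate executions and pending approvals."""
    run_now, needs_approval = [], []
    for call in tool_calls:
        if TOOL_ACCESS.get(call["name"], AIAccess.PROPOSE) is AIAccess.EXECUTE:
            run_now.append(call)
        else:
            needs_approval.append(call)  # surfaced as an approval_request SSE event
    return run_now, needs_approval

run_now, pending = gate_tool_calls(
    [{"name": "memory__search", "input": {"query": "milk"}},
     {"name": "gmail__send_email", "input": {"to": "a@b.c"}}]
)
print([c["name"] for c in run_now])   # memory search runs immediately
print([c["name"] for c in pending])   # email send waits for approval
```

Defaulting unknown tools to PROPOSE (as sketched) is the conservative choice: nothing runs unattended unless explicitly marked safe.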
Location: backend/chat/orchestrator.py
2. Interface Manager (Built)
Location: backend/interfaces/
Manages Integration registration, action discovery, and execution. Supports stateless Integrations (Echo, Webhook) and configured Integrations with credentials via VaultProvider.
The Tool Bridge (backend/chat/tools.py) converts registered Interface actions into Anthropic tool definitions using interface__action double-underscore naming (e.g., echo__echo). The Agent Orchestrator uses this to give the LLM access to all registered tools.
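The naming convention can be sketched with two small helpers — illustrative only; the real logic lives in backend/chat/tools.py:

```python
# Sketch of the interface__action double-underscore naming convention.

def to_tool_name(interface: str, action: str) -> str:
    """Build the tool name exposed to the LLM, e.g. ('echo', 'echo') -> 'echo__echo'."""
    return f"{interface}__{action}"

def parse_tool_name(tool_name: str) -> tuple[str, str]:
    """Split a tool name back into (interface, action) for dispatch."""
    interface, _, action = tool_name.partition("__")
    return interface, action

print(to_tool_name("echo", "echo"))          # echo__echo
print(parse_tool_name("gmail__send_email"))  # ('gmail', 'send_email')
```

Splitting on the first `__` means action names may themselves contain single underscores (`send_email`) without ambiguity.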
class BaseInterface(ABC):
    name: str
    description: str
    config_schema: dict  # What configuration this Integration needs (with security levels)

    def __init__(self, config: dict, vault: VaultProvider):
        self.config = config  # Public values + vault:// references for secrets
        self.vault = vault

    async def resolve_config(self) -> dict  # Resolve vault:// references at execution time
    async def get_actions(self) -> List[ActionDefinition]
    async def execute_action(self, action, params) -> dict
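A minimal concrete Integration against that contract might look like this — modeled on the Echo Integration, but simplified and synchronous for illustration:

```python
# Simplified, self-contained stand-in for a stateless Integration.
# The real Echo Integration is async and registered via the Interface Manager.

class EchoInterface:
    name = "echo"
    description = "Testing integration that returns its input"
    config_schema = {}  # stateless: no credentials, no vault:// references

    def get_actions(self):
        return [
            {"name": "echo", "params": {"text": "string"}},
            {"name": "delay_echo", "params": {"text": "string", "seconds": "float"}},
        ]

    def execute_action(self, action, params):
        if action == "echo":
            return {"echoed": params["text"]}
        if action == "delay_echo":
            # a real implementation would await asyncio.sleep(params["seconds"])
            return {"echoed": params["text"], "delayed": params["seconds"]}
        raise ValueError(f"unknown action: {action}")

echo = EchoInterface()
print(echo.execute_action("echo", {"text": "hi"}))  # {'echoed': 'hi'}
```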
2b. VaultProvider (Built)
Location: backend/vault/
A lower-level system service (not an Integration) that securely stores and retrieves credentials. Interfaces depend on it to resolve their vault:// references at execution time.
The VaultProvider is per-device — on a Mac it may use 1Password or OS Keychain, on the server it uses environment variables, on a phone it uses the OS keystore. The vault:// references in the Interface config are portable across all backends.
| Backend | Platform | Phase |
|---|---|---|
| EnvVaultProvider | Server / Docker | 1b |
| KeychainVaultProvider | Desktop (Tauri) — OS Keychain | 2b |
| OnePasswordVaultProvider | Desktop (power users) — 1Password CLI/SDK | 3 |
| MobileKeystoreVaultProvider | iOS Keychain (via keyring apple-native) / Android SQLite vault | 3d (M2) |
| CloudKmsVaultProvider | AWS/Azure/GCP secret managers | 5 |
See interfaces.md — Configuration Security & Vault for the full design.
3. Tauri Rust Backend (Built — Phase 2b)
Location: frontend/src-tauri/src/
The Tauri v2 app includes a Rust backend that runs locally on the user's machine. It communicates with the frontend via IPC (invoke()). This is where local, privacy-preserving compute lives.
Built (Phase 2b + 3a):
src-tauri/src/
├── main.rs
├── lib.rs # Tauri builder + setup + 25 command registration
├── state.rs # AppState (Mutex/AsyncMutex for subsystems)
├── error.rs # MorpheeError (thiserror + Serialize for IPC)
├── embeddings.rs # EmbeddingProvider — fastembed/ONNX (desktop), candle/BERT (mobile via mobile-ml feature)
├── vector_store.rs # VectorStore — LanceDB (desktop), SQLite/rusqlite (mobile via mobile-ml feature)
├── git_store.rs # GitStore (git2/libgit2, Markdown + YAML frontmatter)
├── vault.rs # VaultProvider trait — keyring (macOS/Win/Linux/iOS), SQLite (Android via mobile-ml)
├── file_store.rs # FileStore (sandboxed fs per group, path traversal prevention)
├── tokenizer.rs # Pure-Rust BERT WordPiece tokenizer (mobile — HF tokenizers crate doesn't cross-compile)
└── commands/
    ├── mod.rs
    ├── embedding_commands.rs # embed_text, embed_batch, get_embedding_info
    ├── memory_commands.rs # memory_insert, memory_search, memory_delete, memory_get, memory_count
    ├── git_commands.rs # git_init_repo, git_save_conversation, git_save_memory, git_delete_memory, git_sync
    ├── vault_commands.rs # vault_get, vault_set, vault_delete, vault_exists
    ├── health_commands.rs # health_check (all subsystem status)
    ├── fs_commands.rs # fs_list_files, fs_read_file, fs_write_file, fs_delete_file, fs_search_files
    └── session_commands.rs # set_session, clear_session (group_id auth for IPC)
Key crates: fastembed (ONNX embeddings, desktop), lancedb (embedded vector DB, desktop), candle (BERT embeddings, mobile), rusqlite (SQLite vector store + Android vault, mobile), git2 (native Git with vendored libgit2), keyring (OS keychain). 57 Rust tests (48 desktop + 9 mobile-ml).
Phase 3 (Local AI):
- candle crate — GGUF model inference for local LLM (quantized Llama, Phi, Mistral)
- Whisper STT via ONNX or whisper.cpp bindings
- Audio/video processing
Phase 4 (Offline):
- Full offline operation — local LLM + local memory + queued actions
- Sync with server when online (Git push/pull, LanceDB sync)
Key design principle: Rust functions are exposed as Tauri commands. The frontend calls them the same way it calls the Python backend — through the Integration/Interface abstraction. An Interface can target either backend. The memoryClient and fsClient route calls to Rust IPC or Python HTTP based on isTauri() runtime detection.
4. LLM Integration (Built)
The LLM is an Integration like any other. Currently implemented with Anthropic SDK (streaming + tool calling):
class LLMIntegration(BaseInterface):
    name = "llm"
    config_schema = {
        "provider": "anthropic|openai|google",
        "api_key": "string",
        "model": "string",
        "temperature": "float",
        "max_tokens": "integer"
    }
actions:
- chat(messages, tools?) → streaming response
- complete(prompt) → text
- embed(text) → vector
- summarize(text) → summary
Uses Anthropic SDK directly. The Integration abstraction means swapping to another provider later is just a new class implementing the same contract. Streaming with tool calling is fully implemented via stream_chat_with_tools().
Location: backend/chat/llm.py
4b. Memory Integration (Built — Phase 2)
Memory is an Integration the AI calls through tool use, like any other Integration:
class MemoryIntegration(BaseInterface):
    name = "memory"
actions:
- search(query, scope?, memory_type?, limit?) → ranked memories (ai_access: EXECUTE)
- store(content, memory_type, scope?) → stored (ai_access: EXECUTE)
- recall(topic, scope?) → semantic search by topic (ai_access: EXECUTE)
- forget(memory_id, reason?) → removed (ai_access: PROPOSE — requires approval)
Storage layers:
- pgvector: Vector embeddings in PostgreSQL for semantic search (cosine similarity, HNSW index)
- Git-backed Markdown: Per-space git repos (one repo = one space, groups as organizations) with Markdown files + YAML frontmatter, async subprocess git
- PostgreSQL: memory_vectors table (content, embedding, scope, type, group_id, space_id, user_id)
Embedding abstraction: Configurable providers — OpenAIEmbeddingProvider (text-embedding-3-small, 1536 dims) or FastEmbedProvider (BAAI/bge-small-en-v1.5, 384 dims, local). Consistency validation at startup.
RAG Pipeline (memory/rag.py): Before each LLM call, searches memory across scopes (group + space + user), deduplicates against conversation history, and injects relevant context into the system prompt (~2000 token budget).
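The dedup-and-budget step could be sketched as follows — the ~4 chars/token estimate and helper names are illustrative, not the real pipeline:

```python
# Sketch of RAG context assembly: rank-ordered memories are deduplicated
# against the conversation history and trimmed to a token budget.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic: ~4 characters per token

def build_memory_context(memories, history, budget_tokens=2000):
    """Keep relevant memories not already present in history, within budget."""
    history_text = " ".join(history).lower()
    selected, used = [], 0
    for memory in memories:            # assumed pre-ranked by relevance
        if memory.lower() in history_text:
            continue                   # already in the conversation — skip
        cost = estimate_tokens(memory)
        if used + cost > budget_tokens:
            break                      # budget exhausted; drop lower-ranked memories
        selected.append(memory)
        used += cost
    return "\n".join(selected)

context = build_memory_context(
    memories=["Alice is allergic to peanuts", "The family dog is named Rex"],
    history=["remember that the family dog is named Rex"],
)
print(context)  # only the memory not already in the conversation survives
```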
Auto-summarization (memory/summarizer.py): After conversations reach 10+ messages, the LLM extracts facts/preferences/events and stores them as structured memories.
Location: backend/memory/, backend/interfaces/integrations/memory.py
5. Task System (Built)
Location: backend/tasks/
Background execution tracking. Tasks have a lifecycle:
pending → running → completed / failed
                  → paused → resumed
                  → cancelled
Tasks exist to track what the AI does — the user doesn't create tasks manually.
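The lifecycle above can be expressed as an explicit transition table — the state names match the diagram, but the `Task` class itself is illustrative:

```python
# Sketch of the task lifecycle as a state machine with validated transitions.

TRANSITIONS = {
    "pending":   {"running", "cancelled"},
    "running":   {"completed", "failed", "paused", "cancelled"},
    "paused":    {"running", "cancelled"},  # resume = paused -> running
    "completed": set(),                     # terminal
    "failed":    set(),                     # terminal
    "cancelled": set(),                     # terminal
}

class Task:
    def __init__(self):
        self.status = "pending"

    def transition(self, new_status: str):
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status

task = Task()
task.transition("running")
task.transition("paused")
task.transition("running")    # resumed
task.transition("completed")
print(task.status)            # completed
```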
6. Event Bus (Built)
Location: backend/tasks/events.py
Redis pub/sub for real-time event distribution. WebSocket handler pushes events to connected clients.
7. Frontend (Built)
Location: frontend/src/
Current structure (will evolve with Chat UI):
frontend/src/
├── pages/ # Route pages (Chat, Login, Onboarding, Dashboard, Tasks, Spaces, Settings, AuthCallback)
├── components/
│ ├── ui/ # shadcn/ui components (23 installed)
│ ├── auth/ # AuthForm, SSOIcons
│ ├── chat/ # ChatBubble, ConversationList, ToolCallCard, ApprovalCard, ComponentRegistry, renderers/ (10 Tier 1 renderers)
│ ├── layout/ # Sidebar, Header, BottomNav, Breadcrumbs, MobileSidebar, NotificationBell, FeatureTour
│ ├── search/ # SearchDialog (Cmd+K)
│ ├── settings/ # GoogleConnect, InterfaceConfigCard
│ ├── tasks/ # TaskList, TaskDetail, CreateTaskDialog, ConnectionStatus
│ └── spaces/ # SpaceList, SpaceDetail, SpaceCard, CreateSpaceDialog
├── store/ # Zustand stores (authStore, taskStore, spaceStore, chatStore, notificationStore, componentStore)
├── hooks/ # useAuth, useTasks, useSpaces, useChat, useOnboarding, useNotifications, useDocumentTitle, useDateLocale, use-toast
├── lib/ # api.ts, auth.ts, sse.ts, websocket.ts, runtime.ts, tauri.ts, memory-client.ts, fs-client.ts, component-events.ts, push-notifications.ts, utils.ts
├── types/ # TypeScript types
└── styles/ # CSS variables (light/dark theme)
Chat page is the primary view. Existing pages remain as monitoring/management views. SSE client (lib/sse.ts) handles streaming events (token, tool_use, tool_result, approval_request, done, error, title).
8. Auth, Groups & Onboarding (Built + Planned)
Location: backend/auth/, backend/groups/
- Supabase Auth (GoTrue) for JWT-based authentication
- Group management with Row Level Security (RLS) policies
- All data scoped to the user's Group
- Personal Space created automatically for each user
- Onboarding (Phase 1b): AI-guided conversational setup — persona detection, Group creation, default Spaces and Interfaces based on user type. See interfaces.md for details.
8b. Memory System Enhancement (Planned — V1.0)
Location: backend/memory/, frontend/src-tauri/src/git_store.rs
The git-backed memory system (Phase 2) will be extended with branching and searchable commit history:
Git Branching:
- GitStore.create_branch(group_id, space_id, branch_name, from_commit?) — create a memory timeline
- GitStore.switch_branch(group_id, space_id, branch_name) — change the active branch
- GitStore.merge_branch(group_id, space_id, source, target, strategy) — merge timelines
- GitStore.list_branches(group_id, space_id) — enumerate branches
- Branch-aware RAG: RAGPipeline.search(query, branch?) limits context to a specific timeline
Commit Search:
- git_commit_metadata table: commit_hash, message, timestamp, branch, group_id, space_id, embedding (384-dim vector)
- Commits embedded during MemorySyncScheduler background sync (per-space)
- MemoryIntegration.search_history(query, time_range?, branch?) — semantic search over commits
- Temporal navigation: "Show me what I knew about vacation planning last month"
Frontend:
- TimelineRenderer component (Tier 2): visual git log with branch graph
- MemoryTimeline page: commit detail, diff view, branch selector
- Historical context badge: "Viewing: 2 weeks ago"
8c. Settings as Integration (COMPLETE — V0.9)
Location: backend/interfaces/integrations/settings.py
All app settings exposed via conversational interface:
class SettingsIntegration(BaseInterface):
    name = "settings"
    # Wraps: AuthService, GroupService, NotificationService, InterfaceConfigService
    # 10 actions: get/update for profile, group, notifications, privacy, appearance, integrations
    # Permission-aware: parent-only for group settings, user-only for profile
Setting categories:
- profile (name, email, avatar, language)
- group (name, timezone, default_space)
- notifications (types, quiet_hours, sound)
- privacy (analytics, crash_reports)
- appearance (theme, text_size, animations)
- integrations (interface configs)
Use case: User says "Turn off email notifications at night" → settings__update_notification_preferences(type="email", quiet_hours={start: "22:00", end: "07:00"})
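The quiet-hours check that such a preference implies could look like this — the payload shape mirrors the example above, while the function name is hypothetical:

```python
# Sketch of a quiet-hours check, including the overnight wrap-around case
# (22:00 -> 07:00 crosses midnight).
from datetime import time

def in_quiet_hours(now: time, quiet: dict) -> bool:
    """True if `now` falls inside quiet hours."""
    start = time.fromisoformat(quiet["start"])
    end = time.fromisoformat(quiet["end"])
    if start <= end:                      # same-day range, e.g. 13:00-15:00
        return start <= now < end
    return now >= start or now < end      # overnight wrap, e.g. 22:00-07:00

quiet = {"start": "22:00", "end": "07:00"}
print(in_quiet_hours(time(23, 30), quiet))  # True  — suppress the notification
print(in_quiet_hours(time(12, 0), quiet))   # False — deliver normally
```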
8d. Interactive UI & Haptic Feedback (Planned — V1.3)
Location: frontend/src-tauri/src/haptics.rs, frontend/src/components/chat/renderers/
Editable AI components with haptic feedback:
ComponentSpec extensions:
{
  editable: boolean,            // enables inline editing
  edit_permissions: "user" | "parent" | "ai_turn",
  haptic_feedback: "pulse" | "vibrate" | "flash" | "none",
  auto_save: boolean,
  edit_schema: object           // validation for edits
}
Haptic implementation:
- Desktop: macOS NSHapticFeedbackManager, Windows Haptics API, Linux evdev
- Mobile: iOS UIImpactFeedbackGenerator, Android Vibrator
- Web: navigator.vibrate() (limited support)
Editable components: card, list, table, calendar, kanban
Flow: AI renders → user edits inline → haptic pulse → save → optionally triggers new AI turn
8e. Monitoring & Analytics (Planned — V2.0)
Location: backend/analytics/, backend/utils/metrics.py
PostHog (product analytics):
- Event tracking (chat messages, tool calls, approvals, settings changes)
- Feature flags for A/B testing
- Session replay for debugging
- User properties (persona, group_size, active_integrations)
- Privacy: opt-in only, PII scrubbing (never log message content)
Prometheus + Grafana (infrastructure metrics):
- API metrics (request rate, latency, errors per endpoint)
- LLM metrics (tokens/sec, streaming latency, tool call count)
- Memory metrics (RAG latency, vector search, git sync success)
- WebSocket metrics (active connections, event throughput)
- 5 dashboards: API Health, LLM Performance, Memory System, User Activity, Errors & Alerts
8f. Extension Ecosystem — WASM Platform (Planned — Layer 4, V1.2)
Location: backend/extensions/, frontend/src-tauri/src/extensions/
Third-party extensions via WebAssembly. Same .wasm binary runs on Python backend AND Rust frontend (zero duplication). Extensions are backed by BaseMorphRuntime:
- WasmRuntime: Executes .wasm extensions — wasmtime-py (Python), wasmer 5.0 (Tauri Rust; Cranelift JIT on desktop/Android, Wasmi interpreter on iOS)
- JSRuntime: Executes canvas/UI components in browser/Tauri webview
- PythonRuntime (future): Dynamic Python scripts at runtime
WASM Extension Contract — mirrors BaseInterface:
- describe() → json — returns name, version, actions[], events[], config_schema
- execute(action, params, context) → result — runs an action
Distribution: OCI registry at rg.morphee.ai (GHCR public, Harbor private)
Security: 10 granular install-time permissions, code signing (RSA-PSS + SHA-256), resource limits
OpenMorph: Extensions stored in .morph/extensions/*.wasm — portable with Space
See WASM Extension System for full design.
Browser Extension & SDK (V2.0+): Chrome/Firefox extension, @morphee/sdk npm package, partner framework with revenue sharing.
8g. Self-Aware AI Development (Planned — V2.5+)
The revolutionary meta-feature: Morphee's codebase IS her memory database. Through the Memory Integration, Morphee can read, understand, and explain her own source code.
MorpheeSelfIntegration:
New integration that enables Morphee to:
- Search her own codebase (Python, TypeScript, Rust, Markdown)
- Explain how features are implemented with code references
- Review community-contributed branches with recommendations
- Propose improvements to her own code (requires human approval)
Self-Referential Memory:
GitStore supports a special morphee-self group that points to the running codebase (/morphee-beta):
# backend/memory/git_store.py
def _repo_path(self, group_id: UUID, space_id: UUID) -> Path:
    if str(group_id) == "morphee-self" and config.ENABLE_SELF_AWARENESS:
        return Path(config.MORPHEE_CODEBASE_PATH)  # /morphee-beta
    return self.base_path / str(group_id) / str(space_id)
Actions:
# Read-only (EXECUTE)
search_code(query, file_types, max_results) -> results with line numbers + context
explain_implementation(feature, detail_level) -> structured explanation with code snippets
get_architecture_diagram(component) -> parsed docs/architecture.md with code refs
# Write (PROPOSE - requires human approval)
review_community_branch(branch_name) -> analysis + recommendation (approve/request changes/reject)
suggest_improvement(component, issue, solution) -> creates feature branch with drafted changes
Governance:
- Benevolent Dictator: Sebastien Mathieu has sole merge authority to main branch
- ACL Roles: viewer (read code), contributor (create branches), reviewer (comment PRs), maintainer (merge dev), dictator (merge main)
- Collaborative Workflow: Community creates branches → Morphee reviews → Human approves → Merge
Database:
morphee_self_config -- feature flags, codebase path, allowed file types
code_review_history -- Morphee's reviews of PRs (analysis, recommendation, human decision)
self_improvement_proposals -- Morphee's suggested code improvements
self_awareness_audit_log -- all self-awareness actions for security audit
Frontend:
- CodeExplorer: Search bar with syntax-highlighted results, file links, line numbers
- CommunityPage: PR list, review details, branch management, contributor recognition
Safety:
- Read-only by default (search/explain = EXECUTE, write = PROPOSE)
- File access restrictions: only allowed extensions (.py, .ts, .rs, .md), block secrets (.env, .key)
- No code execution: all changes go through branch → PR → CI → human review → merge
- Audit trail: logs all self-awareness actions (user_id, action, details, approved)
The Meta-Recursive Loop:
User asks → Orchestrator → Memory Integration → search("auth")
→ GitStore(morphee-self) → git grep in /morphee-beta
→ Returns backend/auth/client.py:42-89
→ RAG injects code → LLM explains with line numbers
Philosophical Impact:
First AI agent that:
- Reads her own implementation (not training data, but actual running code)
- Explains her decisions with source code citations
- Participates in her own development (proposes improvements)
- Collaborates with humans on her own codebase
Use cases:
- Developer: "How do you handle WebSocket auth?" → Morphee cites backend/api/websocket.py:56
- Contributor: Creates Slack integration branch → Morphee reviews architecture, tests, docs → "APPROVE"
- Morphee: Identifies inefficient scheduler → proposes priority queue optimization → awaits Sebastien's review
9. Encryption at Rest (Built)
Location: backend/crypto/
Application-level encryption for data at rest using Fernet symmetric encryption (cryptography.fernet). All personal data is encrypted before storage and decrypted on read:
- Chat messages: ChatService encrypts content before INSERT, decrypts on SELECT
- Memory vectors: VectorStore encrypts content before INSERT, decrypts in search/get results
- Git files: GitStore encrypts entire Markdown file content before writing to disk
- Summarizer: Decrypts existing memories before LLM comparison
- Data export: Decrypts content for GDPR data export
Encrypted data uses an ENC: prefix for identification, enabling gradual migration — old plaintext data is returned as-is. When ENCRYPTION_KEY is not set (development), encryption is disabled and data passes through as plaintext.
Key management: Single Fernet key in ENCRYPTION_KEY env var. Losing this key = all encrypted data permanently unrecoverable.
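The ENC: prefix migration logic can be sketched with the cipher injected, so the example stays stdlib-only — base64 below is a stand-in for Fernet, NOT real encryption:

```python
# Sketch of the ENC: prefix gradual-migration logic. In Morphee this uses
# Fernet from the cryptography package; base64 here is a toy stand-in.
import base64

def toy_encrypt(plaintext: str) -> str:
    return base64.b64encode(plaintext.encode()).decode()

def toy_decrypt(token: str) -> str:
    return base64.b64decode(token.encode()).decode()

PREFIX = "ENC:"

def store_value(plaintext: str, encrypt=toy_encrypt) -> str:
    """Encrypt on write and tag with the ENC: prefix."""
    return PREFIX + encrypt(plaintext)

def read_value(stored: str, decrypt=toy_decrypt) -> str:
    """Decrypt tagged values; pass legacy plaintext rows through unchanged."""
    if stored.startswith(PREFIX):
        return decrypt(stored[len(PREFIX):])
    return stored  # pre-encryption row: gradual migration, no big-bang rewrite

new_row = store_value("buy milk")
print(read_value(new_row))          # buy milk
print(read_value("old plaintext"))  # old plaintext — legacy row, returned as-is
```

The prefix is what makes migration gradual: old rows keep working on read, and rewrite to encrypted form whenever they are next saved.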
10. Database (Built)
PostgreSQL via direct asyncpg connection pool (db/client.py). PostgREST has been removed — simpler stack, fewer moving parts.
Data Flow
Chat Message
1. User types message in Chat UI
2. Frontend → POST /api/chat {content, conversation_id?}
3. Agent Orchestrator:
a. Build system prompt with user/group context
b. Call LLM Integration: chat(messages + tools)
c. LLM decides to call tools → check ai_access (EXECUTE runs, PROPOSE pauses for approval)
d. Execute tools via Interface Manager (Tool Bridge), feed results back to LLM
e. Loop until end_turn or max_turns
f. Stream response to frontend via SSE (token, tool_use, tool_result, approval_request, done)
4. Save assistant message with tool call metadata
5. Frontend renders text + ToolCallCards + ApprovalCards inline
Steps 3a-3b include RAG context injection (search memory, append to system prompt) and step 4 includes auto-summarization after 10+ messages.
Spaces (automatic routing)
User: "Remind me to buy milk"
→ Agent: No Space specified → personal Space
→ Memory: Store in personal Space
User: "In meal planning, add chicken to the shopping list"
→ Agent: Detects "meal planning" → routes to that Space
→ Memory: Store in "Meal Planning" Space
User: "Create a space for the book club"
→ Agent: Creates new Space, invites group members
→ Memory: Initialize Space memory
Infrastructure
Development
Docker Compose:
- backend (FastAPI + asyncpg, port 8000)
- postgres (PostgreSQL, port 54322)
- redis (Redis, port 6379)
- supabase-auth (GoTrue, port 9999)
Frontend runs locally:
- npm run dev (Vite, port 5173)
- npm run tauri dev (Tauri desktop)
Production
- Backend: Docker container behind reverse proxy
- Frontend: Tauri desktop app (macOS, Windows, Linux)
- Tauri Rust: embedded LanceDB, Git, ONNX/GGUF models (local on user machine)
- Database: Managed PostgreSQL
- Redis: Managed Redis
- LanceDB: embedded in Tauri (local-first), server copy for sync
- Git: local repos in Tauri for memory, push/pull to server for backup
Design Decisions
| Decision | Rationale |
|---|---|
| Everything is an Integration | Unified abstraction. LLM, Memory, Frontend, Gmail — same contract |
| Chat-first UI | Non-technical users just talk |
| Group/Space naming | Universal: families, classrooms, teams. Not corporate jargon |
| Personal Space | Users don't need to create a Space to start using Morphee |
| Direct asyncpg | Simpler than PostgREST, fewer moving parts |
| LanceDB | Lightweight embedded vector DB, native Rust — runs in Tauri, no server needed |
| Git-backed memory | Human-readable, versioned, auditable. 1 repo per space, groups as organizations. libgit2 via git2 crate in Tauri |
| Conversations as memory | Not a separate DB table — they're stored in Git + LanceDB |
| Single LLM provider first | Start simple (Anthropic SDK), swap later via Integration abstraction |
| ONNX via ort | Local embeddings, Whisper STT. Hardware-accelerated (Metal, CUDA) |
| GGUF via candle | Local LLM inference. Pure Rust, no C++ build chain. Hugging Face maintained |
| Hybrid Python + Rust | Python backend for cloud/orchestration, Tauri Rust for local compute/privacy |
| Offline-first | App works without internet. Server is a sync hub, not a hard dependency |
| Redis pub/sub | Real-time events without polling |
| 100% async | Concurrent AI + tool calls + streaming |
| VaultProvider for secrets | Pluggable vault backends (env, OS Keychain, 1Password, mobile keystore, cloud KMS). Secrets never in DB. Offline-compatible |
Related Documentation
- ROADMAP.md — Development roadmap and vision
- interfaces.md — Integration/Interface system guide
- api.md — Backend API reference
- status.md — Implementation status
- testing.md — Testing guide
- deployment.md — Deployment guide
Last Updated: February 20, 2026