Morphee Roadmap

Vision

Morphee is a conversational AI agent for groups of people — families, classrooms, teams. You talk to it. It understands, takes action, and responds.

The interface is minimal — almost a blank screen with a chat input. UI components appear dynamically when the AI needs to show structured information (a calendar, a task list, a shopping list, a form). The user never needs to learn a dashboard. Grandma just types. Kids just ask. Teachers just instruct.

Morphee is self-hosted, privacy-first, and extensible. It connects to the services your group already uses through a unified Integration system where everything — including the LLM itself, the memory system, and the frontend — is an Integration.

The app is built on Tauri v2 — a React frontend with a Rust backend embedded in the desktop and mobile apps. This Rust layer is key to Morphee's long-term vision: local ML inference (ONNX/GGUF), embedded LanceDB and Git for memory, audio/video processing, and eventually full offline operation on all platforms. The Python backend (api.morphee.app) handles cloud services, auth, and orchestration today, but over time the app itself becomes capable of running autonomously.


Core Concepts

Group

A collection of people who share access to Morphee. A group can be a family, a classroom, a team, a department — any set of people who collaborate.

Examples: "The Dupont Family", "Math 101 Class", "Engineering Team", "Book Club".

Space

A Space is the central organizing concept in Morphee. It's an isolated context where conversations, tasks, memory, and Interfaces live together. What the AI knows in one Space doesn't leak into another.

Every user has a personal Space by default. You don't need to create a Space to use Morphee — just talk and it goes to your personal Space.

Spaces can be nested. A Space can contain sub-Spaces, forming a hierarchy. Each sub-Space inherits its parent's Interfaces and memory access but can override with its own. This is how Morphee scales from a simple family to complex multi-client work environments:

Simple (family):
🏠 The Martins
├── 🍽️ Meal Planning
├── 🛒 Shopping
├── 📚 Homework
└── 🎉 Emma's Birthday Party

Complex (freelancer):
🏢 Seb Dev
├── 💼 TechCorp (JIRA + Slack connected)
│ ├── 🚀 API Refactor
│ └── 🚀 Mobile App
├── 💼 StartupXYZ (Linear + Discord connected)
│ └── 🚀 MVP
└── 📋 Admin (invoicing, contracts)

Interface inheritance: TechCorp's JIRA and Slack are automatically available in all its sub-Spaces. A sub-Space can add or override Interfaces if needed.
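The inheritance rule above can be sketched as a parent-chain walk where a child's own Interfaces shadow inherited ones. This is a minimal illustration, not Morphee's actual data model; the `Space` class and its fields are invented for the example.

```python
# Hypothetical sketch of Interface inheritance: a Space resolves its
# Interfaces by walking up the parent chain, letting children override.
from dataclasses import dataclass, field


@dataclass
class Space:
    name: str
    parent: "Space | None" = None
    interfaces: dict[str, str] = field(default_factory=dict)  # name -> config label

    def resolve_interfaces(self) -> dict[str, str]:
        # Parent entries first, so the child's own entries override them.
        inherited = self.parent.resolve_interfaces() if self.parent else {}
        return {**inherited, **self.interfaces}


techcorp = Space("TechCorp", interfaces={"jira": "TechCorp JIRA", "slack": "TechCorp Slack"})
api_refactor = Space("API Refactor", parent=techcorp)

print(api_refactor.resolve_interfaces())
# {'jira': 'TechCorp JIRA', 'slack': 'TechCorp Slack'}
```

The merge order is the whole design: inherited entries come first, so a sub-Space that re-declares `jira` silently wins at its level.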

The AI manages Spaces naturally through conversation:

You: "Remind me to buy milk"
→ Personal Space (default)

You: "In the meal planning space, add pasta to this week's menu"
→ "Meal Planning" Space

You: "Create a space for the science fair"
→ New shared Space, other group members can join

You: "Open TechCorp API Refactor"
→ Loads TechCorp context, JIRA board, Slack channels, project memory

Integration

The definition of a service or capability. An Integration is code — a stateless, shared protocol that declares what it can do: available actions, configuration schema, parameter types, and .morph/ directory namespace.

Everything is an Integration — external services, the LLM, the memory system, the frontend:

| Integration | Purpose | Example Actions |
|---|---|---|
| LLM | The AI's brain | chat, complete, embed, summarize |
| Memory | Knowledge storage | search, store, recall, forget |
| Frontend | Dynamic UI rendering | show_card, show_form, show_list |
| Gmail | Email service | send, read, draft |
| Calendar | Event management | list_events, create_event |
| Cron | Scheduling | schedule, list_schedules |

Interface

A configured instance of an Integration, ready to use. An Interface takes an Integration definition and adds credentials, ACL rules, and configuration. Interfaces are bound to a Space and/or Group.

Interfaces are scoped per-Space and per-Group. Sub-Spaces inherit their parent's Interfaces — a project Space inside a client Space automatically gets the client's JIRA and Slack. A sub-Space can add or override Interfaces at its level.

| Integration | Interface (configured instance) |
|---|---|
| LLM | "Cloud Claude" (Anthropic API key, model: claude-sonnet-4-20250514, temperature: 0.7) |
| LLM | "Local Llama" (GGUF model path, runs in Tauri Rust via candle) |
| Memory | "Cloud Memory" (PostgreSQL + LanceDB on server) |
| Memory | "Local Memory" (embedded LanceDB + libgit2 in Tauri Rust) |
| Gmail | "Mom's Gmail" (OAuth tokens + account) |
| Calendar | "Family Calendar" (API key + calendar ID) |
| Frontend | "Desktop App" (WebSocket connection) |

This means swapping Claude for a local model = creating a different Interface for the same LLM Integration. The orchestrator doesn't know or care where compute happens. It could use Cloud Claude for complex reasoning and Local Llama for quick tasks — same contract, different runtimes.
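As a sketch of the split, two Interfaces of the same LLM Integration differ only in configuration. The `Interface` class, config keys, and routing policy below are illustrative assumptions, not Morphee's actual API.

```python
# Hypothetical sketch: two Interfaces (configured instances) of the same
# LLM Integration. The orchestrator sees the same contract either way.
from dataclasses import dataclass


@dataclass
class Interface:
    integration: str  # which Integration definition this configures
    name: str         # human-readable instance name
    config: dict      # credentials, model, tuning


cloud_claude = Interface(
    integration="llm",
    name="Cloud Claude",
    config={"api_key": "vault://anthropic-key", "model": "claude-sonnet-4-20250514", "temperature": 0.7},
)

local_llama = Interface(
    integration="llm",
    name="Local Llama",
    config={"model_path": "/models/llama.gguf", "runtime": "tauri-rust"},
)


def pick(task_complexity: str) -> Interface:
    # Illustrative routing policy: cheap local model for quick tasks,
    # cloud model for complex reasoning. Same contract, different runtimes.
    return cloud_claude if task_complexity == "complex" else local_llama
```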

Two-Way Communication

Interfaces communicate in both directions:

  • Outbound (Morphee → external): execute() calls an Interface action (e.g. send an email, create a calendar event, write to .morph/). The orchestrator initiates these.
  • Inbound (external → Morphee): get_events() declares what events an Integration can emit (e.g. calendar.event_created, gmail.email_received). The EventBus subscribes to these and routes them to the agent loop.

Every Integration declares both sides of its contract — what it can do when called, and what it can tell Morphee when something happens.
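The two-sided contract can be sketched as an abstract base class. The method names `execute()` and `get_events()` come from the text above; the class shape and the Calendar example are illustrative, not the real `BaseInterface` implementation.

```python
# Minimal sketch of the two-way Integration contract: execute() for
# outbound actions, get_events() declaring inbound events for the EventBus.
from abc import ABC, abstractmethod


class BaseIntegration(ABC):
    @abstractmethod
    async def execute(self, action: str, params: dict) -> dict:
        """Outbound: the orchestrator calls an action on this Integration."""

    @abstractmethod
    def get_events(self) -> list[str]:
        """Inbound: event names this Integration can emit to the EventBus."""


class CalendarIntegration(BaseIntegration):
    async def execute(self, action: str, params: dict) -> dict:
        if action == "create_event":
            # ... call the real calendar API here ...
            return {"status": "created", "title": params["title"]}
        raise ValueError(f"unknown action: {action}")

    def get_events(self) -> list[str]:
        return ["calendar.event_created", "calendar.event_updated"]
```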

Skill

A composable capability the AI uses. Skills combine multiple Interface calls into higher-level actions, and can be built-in or generated dynamically by the AI at runtime.

Example: A "Schedule Appointment" Skill uses the Calendar Interface to create an event, the Memory Interface to store the context, and the Frontend Interface to show a confirmation card.
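The "Schedule Appointment" example can be sketched as a step sequence over three Interfaces. The function shape, step names, and parameters are invented for illustration; only the three-Interface composition comes from the text.

```python
# Hedged sketch of a Skill as a step sequence over Interfaces.
# Each interface object is assumed to expose an async execute(action, params).
async def schedule_appointment(interfaces: dict, title: str, when: str) -> dict:
    # Step 1: create the event through the Calendar Interface.
    event = await interfaces["calendar"].execute(
        "create_event", {"title": title, "when": when}
    )
    # Step 2: store the context through the Memory Interface.
    await interfaces["memory"].execute(
        "store", {"type": "event", "content": f"{title} at {when}"}
    )
    # Step 3: show a confirmation card through the Frontend Interface.
    await interfaces["frontend"].execute(
        "show_card", {"title": "Scheduled", "body": f"{title}, {when}"}
    )
    return event
```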

Memory

Memory is an Integration — the AI interacts with it the same way it interacts with Gmail or Calendar, through tool calls.

Conversations are memory. They're stored in Git (Markdown, searchable, versioned) and embedded in LanceDB (semantic search). A minimal PostgreSQL index tracks which conversations exist for quick listing.

Three scopes:

  • Group memory: Shared rules, preferences, schedules
  • Space memory: Context-specific notes, decisions, documents
  • User memory: Individual preferences, habits, communication style

The Knowledge Pipeline: From Usage to Shareable Intelligence

Morphee's architecture is built on a single flow: knowledge enters as conversation and exits as portable, shareable intelligence — without anyone writing code.

Space → Integration

Every Space accumulates knowledge: conversations become memories, patterns become skills, layouts become canvas states. Over time, a Space develops expertise. That expertise can be extracted and shared as an Integration.

A teacher's "Math Tutoring" Space, after months of use, contains grading skills, lesson plan templates, a student dashboard canvas, and homework reminder schedules. She clicks "Share as Integration" — the AI strips personal data (student names, grades), extracts the reusable patterns, and packages it. Other teachers install it into their own Spaces.

No one wrote code. The teacher's usage IS the source code.

Sophie's "Math Tutoring" Space (6 months of use)
→ 12 skills (grading, quizzes, lesson plans)
→ Canvas layout (student dashboard)
→ Memory patterns (teaching strategies)
→ Schedules (homework reminders)

─── "Share as Integration" ───►

AI strips PII, extracts reusable patterns
Packages as .morph/ bundle → compiles to optimal level

Result: edu.sophie.math-tutoring
→ Any teacher installs it, gets Sophie's methods

The Compilation Chain (BaseMorphRuntime)

Knowledge exists at different levels of optimization. Raw conversation is flexible but expensive to interpret. A compiled WASM binary is fast but fixed. The system progressively compiles knowledge down as it stabilizes:

| Level | Format | Runtime | Speed | Flexibility | Who creates it |
|---|---|---|---|---|---|
| 0 | Raw knowledge (memories, conversations) | LLMRuntime | Slow | Maximum | Anyone, by talking |
| 1 | Structured skills (YAML + step sequences) | PythonRuntime | Medium | High | AI extracts from usage |
| 2 | Canvas components (React/JS) | JSRuntime | Fast | Medium | AI places on canvas |
| 3 | Compiled binary (.wasm) | WasmRuntime | Fastest | Fixed | Automated compilation or developers |

Compilation is lazy and incremental. A skill used twice a month stays at Level 1. A skill used 1000 times? Worth compiling to WASM. The system promotes knowledge through levels as usage demands it.
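Usage-driven promotion can be sketched as a threshold check per level. The thresholds below are invented for the example; the text only commits to promotion happening "as usage demands it".

```python
# Illustrative sketch of lazy, usage-driven compilation: knowledge is
# promoted one level at a time once its call count crosses a threshold.
# Threshold values are assumptions, not Morphee's actual tuning.
PROMOTION_THRESHOLDS = {0: 10, 1: 100, 2: 1000}  # calls needed to leave each level


def next_level(level: int, usage_count: int) -> int:
    threshold = PROMOTION_THRESHOLDS.get(level)
    if threshold is not None and usage_count >= threshold:
        return level + 1
    return level


assert next_level(1, 2) == 1       # a twice-a-month skill stays at Level 1
assert next_level(1, 1000) == 2    # heavily used: promote toward WASM
assert next_level(3, 10_000) == 3  # Level 3 is terminal
```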

BaseMorphRuntime Hierarchy

Every execution path in Morphee flows through a runtime. BaseMorphRuntime is the abstract contract that unifies them all:

Request arrives

VectorRouter (memory lookup — ~10ms, free)
↓ cache miss
BaseMorphRuntime hierarchy:
PythonRuntime → built-in integrations (hardcoded, fast)
WasmRuntime → installed extensions (portable, sandboxed)
JSRuntime → frontend canvas/UI (live, interactive)
LLMRuntime → AI reasoning (powerful, expensive — last resort)

The LLM is the runtime of last resort, not the brain that controls everything. Vector search handles ~65% of requests without touching the LLM. Structured skills handle another chunk via PythonRuntime. WASM extensions handle professional integrations. The LLMRuntime is invoked only when nothing cheaper can handle the request.

This maps directly to cost: Level 0 knowledge (raw text) needs the expensive LLMRuntime. Level 3 knowledge (compiled WASM) runs on WasmRuntime at near-native speed. Compilation = cost optimization.
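The "last resort" ordering amounts to a first-match dispatch down the chain. The runtime names follow the text; the handler protocol and sample handlers below are illustrative stand-ins.

```python
# Sketch of the "LLM as last resort" dispatch order: try each runtime
# in cost order, and the first one that can answer wins.
from typing import Callable, Optional

Handler = Callable[[str], Optional[str]]


def dispatch(request: str, chain: list[tuple[str, Handler]]) -> tuple[str, str]:
    """Try each runtime in order; the first non-None answer wins."""
    for name, handler in chain:
        answer = handler(request)
        if answer is not None:
            return name, answer
    raise RuntimeError("no runtime could handle the request")


chain = [
    ("VectorRouter", lambda r: "14:00" if "when" in r else None),  # ~10ms, free
    ("PythonRuntime", lambda r: None),       # built-in integrations
    ("WasmRuntime", lambda r: None),         # installed extensions
    ("LLMRuntime", lambda r: "LLM answer"),  # expensive, always answers
]

assert dispatch("when is the meeting?", chain) == ("VectorRouter", "14:00")
assert dispatch("plan my week", chain) == ("LLMRuntime", "LLM answer")
```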

The Full Pipeline

USE (Space)
↓ conversations accumulate knowledge
EXTRACT (Vector-First + Auto-Summarization)
↓ AI identifies reusable patterns, strips PII
COMPILE (BaseMorphRuntime chain)
↓ text → YAML skills → JS components → WASM binary (progressive)
SHARE (OpenMorph + Marketplace)
↓ .morph/ bundle on OCI registry, or just git clone
INSTALL (another Space)
↓ InterfaceManager registers it as an Integration
USE (repeat — knowledge flows back)

Every major feature contributes to this pipeline:

  • OpenMorph (V1.0) — the .morph/ directory IS the portable packaging format
  • Vector-First — the intelligence layer that identifies what's novel and reusable
  • IndexerIntegrations — unified vector indexing via event subscription + tool pre-filtering (see IndexerIntegration design)
  • Canvas — the presentation layer where knowledge becomes visible and interactive
  • WASM Extensions (V1.2) — the compilation target for maximum portability and performance
  • Marketplace — the distribution layer where knowledge becomes shareable
  • Local AI (V1.5) — LLMRuntime runs locally, making the entire pipeline offline-capable

The Marketplace as Knowledge Exchange

The marketplace is not an App Store for developers. It's a knowledge exchange — a teacher's teaching methods, a family's meal planning system, a contractor's project workflow. You're paying for someone's accumulated expertise, not their code.

This is why MorphCoin (Layer 8) makes sense: the crypto marketplace values knowledge, not software. A dataset, a trained skill set, a proven workflow — all are tradeable assets in the Morphee economy.


What's Built (Foundation)

Backend

  • Async FastAPI with Supabase Auth (JWT), group management, RLS policies
  • Task system: full CRUD, state machine, group-based isolation
  • Space system: full CRUD, group-scoped queries
  • WebSocket: JWT auth, group-based event filtering, real-time updates
  • MCP-style Interface system: BaseInterface, InterfaceManager, action definitions
  • Built-in Integrations: Echo (testing), Webhook (HTTP)
  • ACL system, age verification, parental consent, encryption at rest, i18n (en/fr)
  • Redis pub/sub event bus, Docker Compose dev environment
  • 1,925 tests, 84% coverage

Frontend

  • Vite 6 + React 19 + TypeScript + Tauri 2.0 + Tailwind CSS + shadcn/ui
  • Auth, app shell, dark mode, tasks, spaces, dashboard, onboarding
  • WebSocket live updates, 24 shadcn/ui components, Zustand stores
  • 2,186 unit tests + 60 E2E tests

Infrastructure

  • PostgreSQL 15 + Redis 7 + direct asyncpg connection pool
  • Supabase Auth (GoTrue), Docker Compose, Tauri 2.0
  • Tauri Rust backend: fastembed (ONNX embeddings, desktop), LanceDB (vector store, desktop), candle (BERT embeddings, mobile), rusqlite (SQLite vector store + Android vault, mobile), git2 (Git storage, all platforms), keyring (OS keychain), file_store, llm (candle/GGUF), tts, whisper, extensions (WASM) — 58 IPC commands, 110 tests (90 desktop + 9 mobile-ml + 11 skipped)
  • Mobile: Tauri iOS + Android scaffolding, conditional compilation for desktop-only crates, BottomNav, safe areas
  • Institutional website: Astro 5 + Tailwind CSS v4 static site (www/) — landing page, docs index, privacy, terms, about, contact. Deploys to static hosting (Netlify/Vercel) or served via Nginx. See docs/deployment.md for build instructions.

Completed Architectural Changes

  • Rename Family → Group, Project → Space across codebase — DONE
  • Remove PostgREST, replace with direct asyncpg — DONE
  • Add configuration support to BaseInterface (Integration/Interface split) — DONE
  • Implement VaultProvider abstraction — DONE (EnvVaultProvider)

Phase 1a: Minimal Chat Loop

Status: COMPLETE

A working conversation loop: user types, AI responds with token-by-token SSE streaming. No tools, no memory, no dynamic UI — just chat.

Chat UI (Frontend)

  • Chat page as the default route — near-blank screen, centered input
  • Message history display (user messages + AI responses)
  • Streaming AI responses (token-by-token display via SSE)
  • Simple conversation list in sidebar (replaces current navigation)
  • Personal Space is implicit (no Space selection needed initially)

LLM Integration (Backend)

  • LLM Integration class with chat and complete actions
  • Single provider to start (Anthropic SDK directly), clean abstraction so providers can be swapped
  • System prompt with group and user context
  • Conversation history management (message list, context windowing)
  • Streaming response via SSE endpoint
  • Configuration stored as Interface: API key (vault reference), model, temperature

Chat API

  • POST /api/chat — Send message, receive streaming response
  • GET /api/conversations — List conversations for current user/Space
  • GET /api/conversations/{id}/messages — Get conversation history
  • Conversation metadata in PostgreSQL, full messages in Git + LanceDB (Phase 2)
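The token-by-token SSE framing used by the streaming endpoint can be sketched as a pure-Python generator. The `{"token": ...}` payload shape and `[DONE]` marker are assumptions for illustration; only the `data:` line format is fixed by the SSE standard.

```python
# Sketch of token-by-token SSE framing for a streaming chat response:
# each token becomes a "data:" event, followed by a done marker.
import json
from typing import Iterable, Iterator


def sse_stream(tokens: Iterable[str]) -> Iterator[str]:
    for token in tokens:
        yield f"data: {json.dumps({'token': token})}\n\n"
    yield "data: [DONE]\n\n"


frames = list(sse_stream(["Hel", "lo"]))
assert frames[0] == 'data: {"token": "Hel"}\n\n'
assert frames[-1] == "data: [DONE]\n\n"
```

In FastAPI this generator would typically be wrapped in a `StreamingResponse` with `media_type="text/event-stream"`.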

Phase 1b: Tool Calling, Agent Loop & Onboarding

Status: COMPLETE

AI can take actions — call Integrations, create tasks, render UI components. New users welcomed via conversational onboarding.

Agent Orchestrator

  • Message → LLM → Tool Calls → Results → Response loop
  • LLM Integration provides the "thinking", other Integrations provide the "doing"
  • Multi-step: LLM can call multiple tools in sequence
  • Error handling: graceful fallbacks, user-friendly messages

Tool Integration

  • Interface Manager actions exposed as LLM tools
  • AI can call any registered Integration action
  • Approval workflow for sensitive actions (ai_access: propose)
  • Action metadata: side_effect (read/write/delete), returns (output schema), idempotent
  • Event system: Integrations declare events they emit via get_events()
  • Permission model: PermissionPolicy per-Interface, ExecutionContext for per-call checks
  • Task creation: agent creates tasks behind the scenes when appropriate

VaultProvider (Secure Credential Storage)

  • VaultProvider abstraction: async interface for get/set/delete/exists
  • EnvVaultProvider as default backend (reads secrets from environment variables)
  • vault:// references in Interface config — secrets never stored in DB
  • config_schema security levels: public (plain), private (vault, masked), secret (vault, hidden)
  • resolve_config() on BaseInterface — lazy-resolves vault references at execution time
  • See interfaces.md — Configuration Security & Vault for full design
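The vault pattern can be sketched as a flat dict walk that swaps `vault://` references for secrets at execution time. `VaultProvider` and `resolve_config` names mirror the text; the logic below is an illustrative simplification of the real design.

```python
# Hedged sketch of vault:// reference resolution at execution time.
# Secrets never live in the stored config, only opaque references do.
import os


class EnvVaultProvider:
    """Default backend: secrets come from environment variables."""

    async def get(self, key: str) -> str:
        value = os.environ.get(key)
        if value is None:
            raise KeyError(f"secret not found: {key}")
        return value


async def resolve_config(config: dict, vault: EnvVaultProvider) -> dict:
    resolved = {}
    for k, v in config.items():
        if isinstance(v, str) and v.startswith("vault://"):
            resolved[k] = await vault.get(v.removeprefix("vault://"))
        else:
            resolved[k] = v
    return resolved
```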

Conversational Onboarding

  • First login → AI-guided setup conversation (not a form wizard)
  • Persona detection: parent, grandparent, kid, teacher, manager, employee, etc.
  • Group creation via conversation (name, type)
  • Default Spaces created based on persona
  • Core Interface configuration (LLM API key)
  • Suggested Integrations based on persona and needs
  • Default workflows activated (morning briefing, reminders, etc.)
  • See interfaces.md for full onboarding details and persona defaults

Frontend as Integration (basic)

  • AI can send simple render commands (show a card, show a list)
  • Component registry maps action names to React components
  • Bidirectional: UI events flow back to the AI

Phase 2: Memory & Knowledge

Status: COMPLETE. Python-side memory (pgvector, subprocess git, cloud embeddings) and Tauri Rust layer (LanceDB, git2, fastembed, OS keychain) both implemented.

The AI now has persistent semantic memory — it remembers facts, preferences, and events across conversations via embeddings + pgvector, git-backed Markdown, RAG context injection, and auto-summarization.

Memory Integration (Built)

  • Memory as an Integration with 4 actions: search, store, recall, forget
  • LLM calls Memory the same way it calls Gmail — through tool calls
  • Three scopes: group (shared), space (context), user (personal)
  • Memory types: fact, preference, event, conversation_summary, note
  • forget requires user approval (ai_access: PROPOSE)

Vector Storage — pgvector (Built)

  • PostgreSQL extension for vector similarity search (cosine distance, HNSW index)
  • memory_vectors table: content, embedding, scope, type, group_id, space_id, user_id
  • Embedding abstraction: OpenAIEmbeddingProvider (1536 dims) or FastEmbedProvider (384 dims, local)
  • EmbeddingManager singleton with consistency validation at startup
  • embedding_config table tracks active provider/model/dimensions

Git-Backed Markdown (Built)

  • Per-group git repos with Markdown files + YAML frontmatter
  • Directory structure: conversations/, facts/, preferences/, events/, notes/, spaces/
  • Async git operations via asyncio.create_subprocess_exec("git", ...)
  • MemorySyncScheduler — hourly background sync (git push/pull when remote configured)
  • Three scopes: group / space / user
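The async git pattern named above can be sketched as a thin wrapper over `asyncio.create_subprocess_exec`. The `git()` helper and `commit_memory()` are illustrative; only the subprocess mechanism comes from the text.

```python
# Sketch of non-blocking git operations via asyncio.create_subprocess_exec,
# as used for the git-backed Markdown memory store.
import asyncio


async def git(repo: str, *args: str) -> str:
    proc = await asyncio.create_subprocess_exec(
        "git", "-C", repo, *args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, err = await proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(err.decode().strip())
    return out.decode().strip()


async def commit_memory(repo: str, path: str, message: str) -> None:
    # Hypothetical helper: stage one Markdown file and commit it.
    await git(repo, "add", path)
    await git(repo, "commit", "-m", message)
```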

RAG Pipeline (Built)

  • Before each LLM call, searches memory across scopes (group + space + user)
  • Deduplication: by content hash + against conversation history
  • Token budget: ~2000 tokens (~8000 chars) for injected context
  • Graceful degradation: RAG failures don't block chat

Auto-Summarization (Built)

  • After conversations reach 10+ messages, LLM extracts facts/preferences/events
  • Extracted items: embed → pgvector + git (per item)
  • Conversation summary (2-3 sentences) stored as conversation_summary type
  • Triggered as background task after assistant response

Phase 2b: Tauri Rust Layer (COMPLETE)

Memory operations implemented in native Rust for local-first, offline-capable desktop operation:

  • fastembed crate — ONNX embeddings (AllMiniLML6V2, 384 dims, auto-download)
  • lancedb crate — embedded vector DB (cosine search, Arrow RecordBatch schema)
  • git2 crate — native Git operations (libgit2, vendored, same Markdown format as Python)
  • keyring crate — OS keychain for secrets (macOS Keychain, Windows Credential Manager, Linux Secret Service)
  • IPC bridge: isTauri() runtime detection, typed invoke wrappers, unified memoryClient routes to Rust or Python
  • 58 Tauri commands (embed, memory, git, vault, health, fs, session, llm, tts, whisper, extensions, morph, vector, queue), 110 Rust tests (90 desktop + 9 mobile-ml + 11 skipped)

Layer 1: Core Integrations & Mobile Foundation

Core Integrations (COMPLETE)

  • Google Calendar: Events, reminders, availability
  • Gmail: Read/send/draft
  • Cron: Scheduled tasks, recurring reminders
  • Notifications: Desktop (Tauri), push
  • Filesystem: Local files, group documents
  • Google OAuth: Full OAuth2 flow with token refresh

Dynamic Skill Generation (COMPLETE)

  • SkillEngine: validate + execute step sequences with template interpolation
  • DynamicSkillInterface: runtime-generated interface per skill (single run action)
  • SkillsIntegration: AI-facing tool for creating/managing skills (5 actions)
  • REST API for skill CRUD, startup loading from DB, cron scheduling support
  • 83 new tests

SSO Login (COMPLETE)

Social/SSO login via GoTrue's built-in OAuth providers. See features/archive/sso-login.md for full design.

Tier 1 (must have): Google, Apple, Microsoft/Azure AD
Tier 2 (should have): GitHub, Discord, Slack
Tier 3 (later): SAML 2.0 for enterprise

Implemented:

  • GoTrue provider configuration (env vars in docker-compose)
  • Backend SSO endpoints (/api/auth/providers, /api/auth/sso/{provider}, /api/auth/sso/callback)
  • Frontend social login buttons + /auth/callback redirect handler + AuthCallback page
  • "Find or create" user flow on SSO callback (auto-creates morphee_user if new)
  • Provider discovery via GoTrue /settings endpoint
  • 21 backend tests + 17 frontend tests

Mobile (iOS & Android)

See features/2026-02-12-mobile-ios-android.md for full investigation and design.

Tauri v2 has stable iOS and Android support (since Oct 2024). The codebase now has full mobile scaffolding with conditional compilation for desktop-only crates.

Key challenge (solved in M1): fastembed (ONNX Runtime C++ binaries) and lancedb (SIMD detection failure on Android — lance#2411) don't compile for mobile targets. Solved with #[cfg(not(any(target_os = "ios", target_os = "android")))] conditional compilation — mobile runs online-only via Python backend.

Sub-Phase M1 — Minimal Mobile App — COMPLETE

Online-first mobile: the app connects to the Python backend and uses only the local Rust features that compile for mobile (git2 everywhere, keyring on iOS). Implemented:

  • Tauri mobile project scaffolding (tauri ios init, tauri android init)
  • Conditional compilation: #[cfg(not(any(target_os = "ios", target_os = "android")))] to gate fastembed + lancedb
  • Desktop/mobile module split: shared types + cfg-gated implementations
  • Android vault stub, platform-aware health check
  • Frontend isMobile(), isDesktop(), hasLocalML() detection
  • Memory client routes to Python HTTP on mobile via hasLocalML()
  • Mobile UI: BottomNav (5-tab), safe area CSS, 44px touch targets
  • CI/CD: GitHub Actions for iOS + Android builds enabled
  • Still pending: physical device testing, App Store + Google Play submission

Sub-Phase M2 — Native Mobile Features — COMPLETE

Replaced desktop-only crates with pure-Rust alternatives that compile everywhere. Implemented:

  • candle-based BERT embeddings on mobile (pure Rust, Metal on iOS, CPU on Android)
  • Pure-Rust BERT WordPiece tokenizer (HF tokenizers crate doesn't cross-compile)
  • SQLite vector store via rusqlite (brute-force cosine search, mobile-scale data)
  • Android vault via SQLite-backed encrypted storage (replacing the M1 stub)
  • Push notifications: APNs (iOS) + FCM (Android), backend push service, device token management
  • Deep linking for SSO callbacks (morphee://auth/callback scheme)
  • Frontend hasLocalML() updated: isTauri() (was isDesktop()) — mobile now routes to Rust IPC
  • 9 new Rust tests (mobile-ml feature), 5 new frontend tests, 6 new backend tests

Housekeeping (Completed Feb 13, 2026)

  • OAuth redirect migration: Google OAuth migrated from popup to full-page redirect (HTMLResponse+postMessage → RedirectResponse+query params, FRONTEND_URL config added)
  • Package upgrades: Full dependency audit completed — Python (fastapi 0.115.8, pydantic 2.10.6, httpx 0.28.1, cryptography 44.0.2, supabase-auth 2.28.0), Docker (Postgres 15.8.1.060), npm update, cargo update
  • GoTrue upgrade: v2.99.0 → v2.185.0, Python gotrue → supabase_auth package migration

Frontend as Integration (advanced) — COMPLETE

Full bidirectional component protocol: the AI composes interactive UI from atomic building blocks, users interact, events flow back to the AI.

  • ComponentSpec protocol: structured JSON (id, type, props, children, events, layout)
  • 10 FrontendIntegration actions: 7 semantic + 1 generic render + 2 lifecycle (update/dismiss)
  • Three-tier event system: LOCAL (frontend-only), AI_TURN (triggers new AI chat turn)
  • 10 Tier 1 component renderers: card, list, form, choices, actions, progress, table, confirm, grid, markdown
  • Recursive ComponentRenderer with self-registering pattern
  • componentStore (Zustand) for active component lifecycle
  • POST /api/chat/component-event endpoint for bidirectional communication
  • WebSocket component.* event subscription for server-push updates
  • Backward compatible: _spec (new) alongside _component (legacy)
  • Tier 2 planned: tabs, accordion, calendar, chart, image, timeline
  • Tier 3 planned: kanban, timer, map, carousel, editable table, tree view
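A ComponentSpec payload, per the fields listed above (id, type, props, children, events, layout), can be illustrated as plain data. All values below are invented; only the field names come from the text.

```python
# Illustrative ComponentSpec payload for the bidirectional UI protocol.
shopping_list_spec = {
    "id": "shopping-list-1",
    "type": "list",
    "props": {"title": "Shopping"},
    "children": [
        {"id": "item-milk", "type": "markdown", "props": {"text": "Milk"}},
        {"id": "item-pasta", "type": "markdown", "props": {"text": "Pasta"}},
    ],
    "events": {"on_item_check": "AI_TURN"},  # checking an item triggers a new AI turn
    "layout": {"width": "sm"},
}


def collect_ids(spec: dict) -> list[str]:
    # Recursive walk over children, mirroring the recursive ComponentRenderer.
    return [spec["id"]] + [i for c in spec.get("children", []) for i in collect_ids(c)]


assert collect_ids(shopping_list_spec) == ["shopping-list-1", "item-milk", "item-pasta"]
```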

Core Design Principle: Vector-First Architecture

Every user message hits the vector database before it reaches the LLM.

The vector DB (pgvector server-side, LanceDB on desktop, SQLite-vec on mobile) is the first brain. Claude is the last resort — called only when the vector layer cannot confidently handle the request. This reduces Claude API calls by 50–65% for typical usage.

Decision Tree (runs in ~10ms, locally, free)

user message
→ embed with local model (fastembed desktop / candle mobile, free, ~5ms)
→ VectorRouter checks in parallel:
├── similarity ≥ 0.92 to stored fact/preference → DIRECT_MEMORY (no LLM, ~10ms total)
├── similarity ≥ 0.88 to skill, no required params → SKILL_EXECUTE (no LLM, ~10ms total)
├── similarity ≥ 0.83 to skill, has required params → SKILL_HINT (LLM, but guided)
└── else → LLM_REQUIRED (full Claude call)
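The decision tree above transcribes directly into a routing function. The thresholds (0.92 / 0.88 / 0.83) come from the text; the flat similarity arguments are simplified stand-ins for the real match objects.

```python
# Transcription of the VectorRouter decision tree into a routing function.
def route(best_fact_sim: float, best_skill_sim: float, skill_needs_params: bool) -> str:
    if best_fact_sim >= 0.92:
        return "DIRECT_MEMORY"   # answer straight from memory, no LLM
    if best_skill_sim >= 0.88 and not skill_needs_params:
        return "SKILL_EXECUTE"   # run the skill directly, no LLM
    if best_skill_sim >= 0.83 and skill_needs_params:
        return "SKILL_HINT"      # call the LLM, but guided by the matched skill
    return "LLM_REQUIRED"        # full model call


assert route(0.95, 0.10, False) == "DIRECT_MEMORY"
assert route(0.50, 0.90, False) == "SKILL_EXECUTE"
assert route(0.50, 0.85, True) == "SKILL_HINT"
assert route(0.50, 0.50, False) == "LLM_REQUIRED"
```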

What Gets Vectorized

| Entity | memory_type | Matched by |
|---|---|---|
| Conversation facts | fact | Direct answers — "When is Emma's birthday?" |
| User preferences | preference | Direct answers — "What's Sophie's bedtime?" |
| Skills | skill_index | Trigger routing — "Brief me" → daily-briefing skill |
| Canvas components | canvas_component | Update routing — "Cross off milk" → shopping list component |
| Navigation intents | nav_index | Navigation — "Show me tasks" → Tasks canvas |

Token Savings Estimate

| Message type | ~% traffic | Before | After |
|---|---|---|---|
| Simple recall ("when is X?") | ~20% | 1 Claude call | 0 Claude calls |
| Skill triggers ("brief me") | ~15% | 1 Claude call + tool loop | 0 Claude calls |
| Canvas updates ("cross off X") | ~20% | 1 Claude call + tool | 0 Claude calls |
| Navigation ("go to tasks") | ~10% | 1 Claude call | 0 Claude calls |
| Complex reasoning | ~35% | 1 Claude call | 1 Claude call (unavoidable) |

~65% reduction in Claude API calls for typical family usage patterns.

Memory Deduplication

Before storing any extracted memory, store_if_novel() checks for near-duplicates (cosine similarity ≥ 0.95). Near-identical facts are silently dropped. This keeps the vector store clean, RAG context tight, and avoids injecting redundant tokens into every future prompt.
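The dedup check can be sketched with a pure-Python cosine similarity and the 0.95 threshold from the text. The `store_if_novel` signature below is a simplified stand-in for the real `VectorStore` method.

```python
# Sketch of store_if_novel(): drop near-duplicates at cosine similarity
# >= 0.95 before inserting. Pure-Python cosine for clarity, not speed.
import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))


def store_if_novel(embedding: list[float], store: list[list[float]], threshold: float = 0.95) -> bool:
    """Insert the embedding unless a near-duplicate already exists."""
    if any(cosine(embedding, existing) >= threshold for existing in store):
        return False  # silently dropped
    store.append(embedding)
    return True


vectors: list[list[float]] = []
assert store_if_novel([1.0, 0.0], vectors) is True    # novel: stored
assert store_if_novel([1.0, 0.01], vectors) is False  # near-duplicate: dropped
assert store_if_novel([0.0, 1.0], vectors) is True    # orthogonal: stored
```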

Implementation Status

  • ✅ VectorRouter (backend/chat/vector_router.py) — DIRECT_MEMORY + SKILL_EXECUTE + SKILL_HINT routes
  • ✅ VectorStore.store_if_novel() — deduplication before insert
  • ✅ VectorStore.delete_by_content() — skill index cleanup
  • ✅ Skill vectorization — SkillService._index_skill() on create/update/delete
  • ✅ SkillService.reindex_all_skills() — rebuilds index at startup
  • ✅ ConversationSummarizer uses store_if_novel() — no more duplicate memories
  • ✅ Wired into _stream_orchestrator_response — runs in parallel with RAG + user context
  • 🔲 Canvas component indexing — after canvas redesign (V1.0)
  • 🔲 Navigation intent indexing — after Space/routing redesign (V1.0)
  • 🔲 LanceDB equivalents for all router checks (desktop offline path, V1.5)

Layer 2: Product Completeness & UX Polish (V0.9) ✅

Status: RELEASE CANDIDATE (V0.9.0-rc.1 — February 19, 2026)
Audit Completion: 252/274 items fixed (92.0%)

  • Legal: 98.0% complete (GDPR-compliant multi-group DSAR)
  • Security: 95.7% complete (SSRF prevention, rate limits, security headers, OAuth allowlist)
  • Accessibility: 96.6% complete (WCAG 2.1 AA — contrast ratios, ARIA labels)
  • Product: Professional integrations deferred to V1.2 WASM Extensions

Remaining items (deferred):

  • Branding M-DESIGN-003 (logo/favicon — post-V0.9)
  • Code quality: 38 items (technical debt for V1.0)
  • Documentation: 9 items (refresh with V1.0)
  • i18n: 12 items (full locale support in V1.0)

The backend is feature-rich (15 integrations, 50+ actions, agentic loop, memory/RAG, dynamic UI). Layer 2 fixed all critical gaps: mobile UX, discoverability, search, profiles, group invites, message editing, error handling, notifications, data export, keyboard shortcuts, file upload, conversation context control, Settings as Integration.

Sub-Phase 3e.1 — Critical UX & Discoverability — COMPLETE

Fixed the most urgent blockers that prevent basic product usage:

  • Mobile conversation access — Sheet-based left drawer with ConversationList
  • Suggested prompts in empty chat — 6 clickable capability chips
  • Markdown rendering in chat — ReactMarkdown + remark-gfm
  • Theme persistence — localStorage morphee-theme key
  • Stop/cancel button for AI streaming — Square button replaces Send during streaming
  • Chat input streaming indicator — "Morphee is thinking..." placeholder
  • Typing indicator — bouncing dots animation
  • Standardize page headers — text-2xl font-bold across all pages
  • Chat icon consistency — MessageCircle in both Sidebar and BottomNav

Sub-Phase 3e.2 — Core Product Completeness — COMPLETE

Filled fundamental product gaps:

  • User profile management — Profile tab in Settings (name, email, avatar, password change)
  • Conversation rename + organization — DropdownMenu with rename/pin/archive/delete
  • Global search (Cmd+K) — SearchDialog with cmdk, backend GET /api/search/
  • Humanized tool call displays — humanizeToolCall() maps 14 integrations to readable text
  • Confirmation dialogs — AlertDialog before deleting conversations
  • Message editing + regeneration — ChatBubble edit/regenerate/copy buttons
  • Retry mechanism for failed chat messages — Retry button with RotateCcw icon
  • Conversation pagination — offset param, "Load earlier messages" button
  • Task sorting options — Sort by newest/oldest/priority/name
  • Skeleton loaders for chat messages — 5 alternating skeleton bubbles

Sub-Phase 3e.3 — Collaborative Foundation — COMPLETE

Core multi-user features, onboarding polish, and UX improvements:

  • Group member invites + management — email invites with token-based flow, member list, role management, removal (C-FEAT-004) ✓
  • Conversation-space association — space_id filter on conversations list, space-scoped chat (H-FEAT-003) ✓
  • Post-onboarding feature tour — guided walkthrough overlay (H-UX-007) ✓
  • Onboarding value explanation — welcome card with feature bullets (H-UX-001) ✓
  • Onboarding space summary — "Here's what I set up" after AI creates spaces (M-UX-006) ✓
  • Login page improvement — 4 feature bullets with icons (H-UX-002) ✓
  • Task subtask support — parent_task_id, subtask creation and display ✓
  • Task inline editing — description, priority, space assignment in TaskDetail (H-FEAT-007) ✓
  • Branded loading spinner — pulsing "M" circle during auth check (H-UX-003) ✓
  • Friendly error messages — network error wrapping, Dashboard error banner (H-UX-004) ✓

Sub-Phase 3e.4 — Engagement & Polish — PARTIAL (1 item remaining)

Create daily engagement hooks and polish the overall experience:

  • Daily briefing skill — built-in skill summarizing tasks, events, notifications (L-IDEA-004)
  • Notification preferences + controls — per-type settings, quiet hours, DND (H-FEAT-004)
  • Settings expansion — profile tab, persistent theme, language, privacy controls (H-FEAT-005)
  • Data export + backup — export conversations, tasks, memories as JSON/Markdown (C-FEAT-006)
  • Notification sound + desktop alerts — browser Notification API, sound effects (M-FEAT-009)
  • Dashboard evolution — show conversations, events, notifications alongside tasks (M-UX-001)
  • Empty state illustrations — unified EmptyState component with CTAs across all pages (M-DESIGN-004)
  • Branding + logo — app icon, favicon, visual identity (M-DESIGN-003)
  • Approval card timeout — countdown indicator for 60s approval expiry (M-FEAT-005)
  • WebSocket reconnect button — actionable UI when connection drops (M-UX-005)
  • Empty state for Spaces page (M-UX-004)
  • Breadcrumbs / back navigation for deep-linked pages (M-FEAT-007)
  • "What can I do?" command — /help listing AI capabilities (L-IDEA-003)

Sub-Phase 3e.5 — Conversation & Context — COMPLETE

Enhance the chat experience with power-user features:

  • Conversation context control — pin messages, control AI memory window (H-FEAT-009)
  • Keyboard shortcuts — Cmd+K (search), Cmd+N (new conversation), navigation (H-FEAT-006)
  • File upload in chat — images, documents, attachments (M-FEAT-001)
  • Quick Actions floating button — FAB for new task/reminder without typing (L-IDEA-001)
  • Onboarding re-engagement email — welcome email series, return notifications (M-FEAT-013)
  • Integration setup wizards — guided setup for non-technical users (M-FEAT-012)

Settings as Integration ✅ COMPLETE (February 18, 2026)

Make all app settings accessible through natural language conversation.

Settings management is now available as an Integration. Users can configure Morphee by talking to it, while the Settings page UI remains available as a visual reference.

11 actions: get_setting, update_setting, list_categories, get_profile, update_profile, get_group_settings, update_group_settings, get_notification_preferences, update_notification_preferences, get_interface_configs, configure_interface

Setting Categories:

| Category | Example Keys | AI Access |
| --- | --- | --- |
| profile | name, email, avatar_url, language | PROPOSE |
| group | name, timezone, default_space_id | PROPOSE (parent only) |
| notifications | types, enabled, quiet_hours, sound | PROPOSE |
| privacy | analytics_enabled, crash_reports | PROPOSE |
| appearance | theme, text_size, animations | EXECUTE |
| integrations | interface configs (LLM, Calendar, Gmail) | PROPOSE |

Conversational examples:

User: "Change my name to Jane"
AI: → settings__update_profile(name="Jane")

User: "Turn off email notifications between 10pm and 7am"
AI: → settings__update_notification_preferences(type="email", quiet_hours={start: "22:00", end: "07:00"})

User: "What's our group timezone?"
AI: → settings__get_group_settings()
→ "Your group 'The Martins' is set to America/New_York."

User: "Connect my Google Calendar"
AI: → settings__configure_interface(interface_name="google_calendar", config={...})
→ Shows GoogleConnect OAuth flow inline in chat

Implementation:

  1. New SettingsIntegration wrapping existing services (AuthService, GroupService, NotificationService, InterfaceConfigService)
  2. Unified setting schema: {category, key, value, value_type, security_level, description}
  3. Settings stored in new user_settings table + existing tables (users, groups, interface_configs)
  4. System prompt guidance: AI learns when to route to settings vs task/space tools
  5. Frontend: SettingsCard renderer (Tier 2) for inline OAuth flows, confirmation dialogs
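The unified setting schema and the EXECUTE/PROPOSE split can be sketched as follows. This is an illustrative minimal model, not the shipped SettingsIntegration — class names, the in-memory store, and return values are assumptions:

```python
# Hypothetical sketch of the unified setting schema and AI-access routing.
# Real settings live in user_settings + existing tables, not in memory.
from dataclasses import dataclass
from typing import Any

@dataclass
class Setting:
    category: str        # e.g. "profile", "appearance"
    key: str             # e.g. "name", "theme"
    value: Any
    value_type: str      # "string", "bool", "json", ...
    security_level: str  # "EXECUTE" (AI applies directly) or "PROPOSE" (needs user approval)
    description: str = ""

class SettingsIntegration:
    """Routes get/update calls through a flat (category, key) registry."""
    def __init__(self) -> None:
        self._store: dict[tuple[str, str], Setting] = {}

    def register(self, setting: Setting) -> None:
        self._store[(setting.category, setting.key)] = setting

    def get_setting(self, category: str, key: str) -> Any:
        return self._store[(category, key)].value

    def update_setting(self, category: str, key: str, value: Any) -> str:
        s = self._store[(category, key)]
        if s.security_level == "EXECUTE":
            s.value = value          # low-risk change: apply immediately
            return "applied"
        return "proposed"            # surfaced to the user as an approval card
```

In this sketch, "Change my theme to dark" applies immediately (appearance is EXECUTE), while "Change my name to Jane" comes back as a proposal for the user to confirm.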

Benefits:

  • Grandparents can configure Morphee by asking, not navigating menus
  • Settings changes are auditable (stored in conversation history)
  • AI can proactively suggest settings changes ("Would you like quiet hours for dinner time?")
  • Unified settings API for CLI/voice/mobile/desktop interfaces

Effort: Medium (1-2 weeks) — Backend M + Frontend M + Tests M


Layer 3: Canvas-First UX + OpenMorph Revolution + Offline-First (V1.0) 🚀

Status: IN PROGRESS — ~8 weeks — Three work streams that cross-feature heavily. OpenMorph's git-native architecture makes offline data and sync a natural consequence — both land together in V1.0.

Work Stream A: Canvas-First UX Redesign

Morphee is not a chat app. The chat thread is one integration among many. The primary interface is a persistent spatial canvas that fills up as you talk.

The shift:

Before (V0.9):                     After (V1.0):
┌──────────────────────────┐       ┌──────────────────────────────────┐
│ Chat thread              │       │                                  │
│ User: add milk           │       │  [Shopping List]  [Calendar]     │
│ AI: Added milk ✓         │       │                                  │
│ [ShoppingList card]      │       │  [Task: dentist]                 │
│ User: dentist Tuesday    │       │  [Progress]                      │
│ AI: Created event        │       │                                  │
│ [CalendarCard]           │       │  "I added the dentist appt…" ↑   │
│ [input]                  │       │  [___ Type here ___] [💬 Chat]   │
└──────────────────────────┘       └──────────────────────────────────┘

Key design decisions (locked Feb 20, 2026):

  • Spatial free-form — user can drag and reposition any component
  • Persistent across sessions — canvas survives close/reopen; AI archives stale items to memory
  • Space-scoped — each Space has its own canvas state (canvas.yaml in .morph/)
  • Discussion is an Integration — the chat thread is a collapsible drawer, not the page

AI canvas actions:

  • place_component(type, props, position_hint) — put a new component on the canvas
  • update_component(id, changes) — update an existing component in-place
  • dismiss_component(id) — archive content to memory_vectors, remove from canvas
  • minimize_component(id) — collapse to icon, click to restore
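The four canvas actions above can be sketched against a dict-backed canvas store. The ComponentSpec shape, UUID component ids, and the archive hook standing in for memory_vectors are all illustrative assumptions:

```python
# Minimal sketch of the AI canvas actions. Dismissed components are appended
# to an `archived` list here; the real system indexes them into memory_vectors.
import uuid

class Canvas:
    def __init__(self) -> None:
        self.components: dict[str, dict] = {}
        self.archived: list[dict] = []   # stand-in for memory archival

    def place_component(self, type: str, props: dict, position_hint: str = "auto") -> str:
        cid = str(uuid.uuid4())
        self.components[cid] = {"type": type, "props": props,
                                "position": position_hint, "minimized": False}
        return cid

    def update_component(self, cid: str, changes: dict) -> None:
        self.components[cid]["props"].update(changes)   # in-place prop update

    def minimize_component(self, cid: str) -> None:
        self.components[cid]["minimized"] = True        # collapse to icon

    def dismiss_component(self, cid: str) -> None:
        self.archived.append(self.components.pop(cid))  # archive, then remove
```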

Implementation (3 weeks):

| Phase | What | When |
| --- | --- | --- |
| A — Layout swap | CanvasPage replaces ChatPage, chat → right Sheet drawer, last-message preview strip, input always floating | Week 1 |
| B — AI spatial placement | ComponentSpec canvas fields, position hints, dismiss → memory archive, componentStore → localStorage, system prompt teaches AI to prefer canvas | Week 2 |
| C — Canvas persistence | Drag-and-drop (@dnd-kit), minimize, full per-Space persistence, canvas_component VectorRouter indexing | Week 3 |

Cross-feature with OpenMorph: Canvas state goes into .morph/canvas.yaml — Git-native, temporally navigable, portable across devices.

Cross-feature with BaseMorphRuntime: Every canvas component routes through JSRuntime — the frontend execution runtime that keeps UI logic sandboxed and portable. Canvas components from installed WASM extensions flow through WasmRuntime → JSRuntime → canvas, maintaining the compilation chain.


Work Stream B: OpenMorph — Git-Native Architecture (V1.0) 🚀

Status: IN PROGRESS — Phase 1 complete (Feb 20, 2026) — Transform Morphee into a Git-native architecture with the OpenMorph protocol. ALL Space data lives in Git, making Spaces fully portable and versionable.

Vision: Just like .git/ makes directories version-controllable, .morph/ makes directories AI-augmentable. Like Android's per-app storage model: each Integration owns a sandboxed namespace inside .morph/, with the root as shared space and the parent folder as "the outside world" (ACL-gated).

Reference: OpenMorph Specification | Original design doc

Core Concept: .morph/ Directories

Any directory containing a .morph/ subdirectory becomes a "Morph Space":

any-directory/                       # "outside world" — ACL-gated access
├── .morph/                          # ← Presence = Morph Space (git repo root)
│   ├── config.yaml                  # Space metadata + sync mode
│   ├── acl.yaml                     # Access control (which interface accesses what)
│   ├── vault/                       # Encrypted secrets (vault.enc per interface)
│   ├── core.tasks/                  # Built-in: tasks as Markdown (YAML frontmatter)
│   │   ├── manifest.yaml            # Integration manifest (auto-bootstrapped)
│   │   └── task-{uuid}.md
│   ├── core.skills/                 # Built-in: dynamic skills as YAML
│   │   ├── manifest.yaml
│   │   └── {slug}.yaml
│   ├── core.memory/                 # Built-in: facts, preferences, events
│   ├── core.conversations/          # Built-in: AI conversations as Markdown
│   ├── core.scheduler/              # Built-in: cron schedules
│   ├── core.canvas/                 # Built-in: canvas state
│   ├── net.atlassian.jira/          # Third-party (reverse-domain namespace)
│   │   ├── manifest.yaml
│   │   └── issues/PROJ-123.yaml
│   ├── com.google.calendar/         # Third-party
│   └── .git/                        # Version control
├── src/                             # User's actual files
└── README.md

Namespace conventions:

  • core.* — Morphee built-ins (protocol authority: morphee.ai)
  • com.* / net.* / org.* — Third-party integrations (reverse-domain)
  • io.github.* — Community integrations

manifest.yaml is required in every integration directory. Auto-bootstrapped on first write via BaseInterface.morph_write().

linked: field in every entity's YAML/frontmatter enables graph-oriented memory: any .morph/ entity can link to any other (e.g. a task can link to a JIRA issue, a calendar event, a conversation).
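A task file with a linked: field might be serialized as below. This is an illustrative sketch of the frontmatter shape, not the canonical serializer — field names beyond id, title, and linked: are assumptions; see the OpenMorph spec for the real format:

```python
# Hypothetical serializer for a core.tasks entry: YAML frontmatter with a
# `linked:` graph field, followed by a Markdown body.
def task_to_markdown(task: dict) -> str:
    lines = ["---"]
    lines.append(f"id: {task['id']}")
    lines.append(f"title: {task['title']}")
    lines.append("linked:")
    for ref in task.get("linked", []):   # e.g. "net.atlassian.jira/issues/PROJ-123"
        lines.append(f"  - {ref}")
    lines.append("---")
    lines.append("")
    lines.append(task.get("body", ""))
    return "\n".join(lines)
```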

Three Sync Modes

  • local-only — Never leaves device, fully private
  • morphee-hosted — Synced to api.morphee.app (default)
  • git-remote — User's own GitLab/GitHub/Gitea instance

Git Branching & Temporal Navigation

  • Branches — Create experimental timelines ("vacation planning", "project ideas")
  • Temporal search — "Show me what we planned 2 weeks ago"
  • Commit history — Full audit trail of all Space changes
  • Branch merging — AI-mediated conflict resolution

Implementation Phases

Phase 0 — OpenMorph Specification ✅ DONE (Feb 20, 2026)

  • docs/features/OPENMORPH_SPECIFICATION.md — canonical spec with directory layout, namespace conventions, manifest.yaml format, task/skill serialization, linked: graph field

Phase 1 — Generic .morph/ Protocol + Dual-Write ✅ DONE (Feb 20, 2026)

  • GitStore generic entity API — save_entity, load_entity, delete_entity, list_entities
  • BaseInterface .morph/ protocol — morph_directory, morph_capabilities, morph_write/read/list/delete, get_manifest()
  • All 11 integrations declare morph_directory + morph_capabilities
  • TaskService dual-writes to core.tasks/task-{id}.md (with linked: graph field)
  • SkillService dual-writes to core.skills/{slug}.yaml
  • 24 tests: generic entity API + serialization roundtrips
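The generic entity API (save_entity, load_entity, delete_entity, list_entities) can be approximated with a toy file-backed store. This sketch omits what makes GitStore git-native — each write would also be committed to the Space's .git history — and the JSON-on-disk layout is an assumption:

```python
# Toy file-backed version of the GitStore entity API. Real entities are
# Markdown/YAML per the OpenMorph spec; JSON is used here for brevity.
import json
import pathlib

class EntityStore:
    def __init__(self, root: str) -> None:
        self.root = pathlib.Path(root)

    def _path(self, namespace: str, entity_id: str) -> pathlib.Path:
        d = self.root / ".morph" / namespace      # per-integration sandbox
        d.mkdir(parents=True, exist_ok=True)
        return d / f"{entity_id}.json"

    def save_entity(self, namespace: str, entity_id: str, data: dict) -> None:
        self._path(namespace, entity_id).write_text(json.dumps(data))

    def load_entity(self, namespace: str, entity_id: str) -> dict:
        return json.loads(self._path(namespace, entity_id).read_text())

    def delete_entity(self, namespace: str, entity_id: str) -> None:
        self._path(namespace, entity_id).unlink()

    def list_entities(self, namespace: str) -> list[str]:
        d = self.root / ".morph" / namespace
        return sorted(p.stem for p in d.glob("*.json")) if d.exists() else []
```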

Phase 2 — Complete Dual-Write (Week 2)

  • ScheduleService dual-writes to core.scheduler/{slug}.yaml
  • config.yaml + acl.yaml root writes via morph_root_access = True
  • ConversationService dual-writes to core.conversations/

Phase 3 — Full Migration + Git as Source of Truth (Weeks 3–5) — BREAKING CHANGE

  • ALL Space data migrates to Git (tasks, conversations, skills, schedules, config)
  • Sync layer: Git = source of truth, PostgreSQL = read cache
  • Migration script: convert all existing Spaces → .morph/ format
  • Extend GitStore with branch operations (create, switch, merge, list)
  • git_commit_metadata table for semantic commit search

Phase 4 — Discovery + Sync (Weeks 6–7)

  • Tauri: morph_discover command (scan filesystem for .morph/)
  • Tauri: morph_init command (transform any directory into Morph Space)
  • Backend: MorphDiscoveryService, register/sync endpoints
  • Frontend: SpaceDiscovery page, MorphInitDialog
  • Three sync modes implemented

Phase 5 — Timeline UI (Week 8)

  • TimelineRenderer component (visual git log)
  • MemoryTimeline page
  • Branch selector dropdown
  • Conflict resolution dialog
  • Commit embedding pipeline (LanceDB semantic search)

Effort: L-XL (8 weeks total) — Phases 0–1 complete, Phases 2–5 remaining


Work Stream C: Offline-First Data + Sync (V1.0) 🚀

Status: PLANNED — M (2 weeks, parallel with OpenMorph) — Offline-first data access and sync for Tauri apps (desktop + mobile). This is the natural consequence of OpenMorph: if all Space data lives in .morph/ git repos, offline is free.

Web limitation: The web client (app.morphee.app) runs in a browser with no Tauri layer — no git2, no LanceDB, no local filesystem. Offline for web means Service Workers + IndexedDB cache (already implemented as M-FEAT-010). There is no path to full offline on web without WASM ports of the local stack, which is not planned. Morphee's full offline vision is Tauri-exclusive.

What Moves to V1.0 (from V1.5)

| Feature | Why it's free with OpenMorph | Complexity |
| --- | --- | --- |
| Offline data access (tasks, conversations, memory) | Already in .morph/ git repos on device | Free |
| Offline auth (JWT cache + queued refresh) | Token cached in OS keychain (already done) | Small |
| Offline UI / PWA shell (Tauri) | App already loads locally | Free |
| Git sync (push/pull when reconnected) | OpenMorph Phase 3i.3 already implements this | Free |
| Action queue (send message, create task offline → replay) | Queue in SQLite, drain on reconnect | Medium |

What Stays in V1.5

| Feature | Why it waits | Complexity |
| --- | --- | --- |
| Local LLM inference (GGUF/candle) | Model management, quality gap vs Claude, mobile RAM constraints | L-XL |
| Local audio/video (Whisper, TTS) | Depends on local LLM infrastructure | L |
| Conflict resolution (complex merge) | Advanced git merge semantics, edge cases | M |

V1.0 Offline Story

Tauri app (desktop + mobile):
✅ Data works offline — tasks, conversations, memory in .morph/ git
✅ Auth works offline — JWT cached in OS keychain
✅ Canvas works offline — components persist in localStorage
✅ Sync on reconnect — git push/pull, action queue replayed
⚡ AI responses — need cloud (Claude API) in V1.0
→ Full AI offline in V1.5 (local LLM via candle/GGUF)

Web client (app.morphee.app):
✅ Offline shell — Service Worker (M-FEAT-010, already done)
✅ Cached reads — IndexedDB (last seen tasks/conversations)
❌ Offline writes — no git, no local DB
❌ Local LLM — not possible without Tauri
→ Web remains cloud-dependent by design

Implementation (2 weeks, parallel with OpenMorph Phase 3i.3)

  • ActionQueue — SQLite table of pending actions (send_message, create_task, etc.)
  • Tauri queue_action / drain_queue commands — invoked on reconnect
  • Network status detector — tauri-plugin-network or periodic health check
  • Offline indicator in Header — "Working offline" badge when disconnected
  • Auth: extend JWT expiry grace period in Tauri (tolerate expired token for cached data reads)
  • Tests: offline scenario E2E tests (disconnect → act → reconnect → verify sync)
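The queue_action/drain_queue pair can be sketched over SQLite as below. The table name, columns, and replay callback are assumptions — in the real app, replay would re-issue the API call (POST a message, create a task) once the network is back:

```python
# Sketch of the ActionQueue: pending actions persisted in SQLite, drained
# in insertion order on reconnect.
import json
import sqlite3

class ActionQueue:
    def __init__(self, db_path: str = ":memory:") -> None:
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS action_queue ("
            "id INTEGER PRIMARY KEY AUTOINCREMENT, kind TEXT, payload TEXT)")

    def queue_action(self, kind: str, payload: dict) -> None:
        self.db.execute("INSERT INTO action_queue (kind, payload) VALUES (?, ?)",
                        (kind, json.dumps(payload)))
        self.db.commit()

    def drain_queue(self, replay) -> int:
        """Replay pending actions in order; returns the number replayed."""
        rows = self.db.execute(
            "SELECT id, kind, payload FROM action_queue ORDER BY id").fetchall()
        for row_id, kind, payload in rows:
            replay(kind, json.loads(payload))   # e.g. re-issue the API call
            self.db.execute("DELETE FROM action_queue WHERE id = ?", (row_id,))
        self.db.commit()
        return len(rows)
```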

Cross-feature with OpenMorph: The sync engine IS OpenMorph Phase 3i.3. No duplicate work.


Layer 4: Extension Ecosystem — WASM Platform (V1.2)

Status: SHIPPED (February 21, 2026) — XL (12 weeks, 6 phases) — Third-party extensions via WebAssembly, professional integrations rebuilt as WASM modules. See docs/status.md for detailed phase completion.

Research Complete: See Investigation Document, Full Research, and Quick Reference (Feb 2026)

Decision: Build WebAssembly extension system instead of hardcoding integrations. Same .wasm binary runs on Python backend AND Rust frontend (no duplication). Extensions distributed via OCI registry at rg.morphee.ai.

Strategic Shift: WASM-First

  • Freeze new hardcoded integrations (no more Python backend/interfaces/integrations/*.py)
  • Build JIRA/Notion/GitHub as WASM extensions
  • Validates extension system with real use cases
  • Same binary runs on Python backend + Tauri frontend (zero duplication)

WebAssembly Extension System (WASM Plugins)

Technology Stack:

  • Python Backend: wasmtime-py v41+ (88% native performance, full WASI 0.3 async)
  • Tauri Rust (all platforms): wasmer 5.0+ — unified API with pluggable backends:
    • Desktop/Android: Cranelift JIT (80-88% native)
    • iOS: Wasmi interpreter (50-60% native) — iOS App Store §2.5.2 prohibits JIT; wasmtime blocked
    • Future: wasm3 as ultra-minimal interpreter option (64KB, 65% native, no async)
  • BaseMorphRuntime: WasmRuntime (backend .wasm via wasmer), JSRuntime (frontend canvas/UI via webview JS), PythonRuntime (future dynamic .py)
  • Interface Standard: WebAssembly Component Model + WASI 0.3 (async support, rich types)
  • Distribution: Dual OCI registry at rg.morphee.ai
    • Public extensions: rg.morphee.ai/public/* → GitHub Container Registry (free, unlimited)
    • Private extensions: rg.morphee.ai/private/* → self-hosted Harbor (~$20/month, Phase 6+)
  • Security: Install-time granular permissions (like Google Play Store) + resource limits + code signing

Why WASM over JavaScript:

  • Portability: Same binary runs on Python backend AND Rust frontend (no dual implementation!)
  • Security: True sandboxing (memory-safe, isolated execution, impossible to escape without explicit imports)
  • Performance: 88% native (wasmtime-py Python), 80-88% native (wasmer Cranelift desktop/Android), 50-65% native (wasmer Wasmi / wasm3 on iOS), <1ms warm start
  • Industry Proven: VS Code, Figma, Shopify, Cloudflare all use WASM for extensions
  • OpenMorph Synergy: Extensions live in .morph/extensions/*.wasm alongside Space data (portable)

Permission Model (Install-Time Approval): User reviews and approves 10 granular permissions before extension installs:

  • http:read — GET requests only
  • http:write — POST/PUT/DELETE requests
  • vault:read — Read secrets
  • vault:write — Write secrets
  • event:emit — Emit events to EventBus
  • data:read — Read tasks/conversations/memory
  • data:write — Create/update tasks/conversations
  • task:create — Shortcut for data:write (tasks only)
  • space:read — Read Space metadata
  • notify:send — Send notifications
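Enforcement of these grants can be sketched as a host-side check on every call an extension makes. This is an illustrative model, not the WASM host implementation — class names and the exact shortcut semantics of task:create are assumptions:

```python
# Sketch of install-time permission enforcement: the set of permissions the
# user approved at install is checked before every host function runs.
class ExtensionPermissionDenied(Exception):
    pass

class ExtensionHost:
    def __init__(self, granted: set[str]) -> None:
        self.granted = granted   # approved by the user in the install dialog

    def require(self, permission: str) -> None:
        if permission not in self.granted:
            raise ExtensionPermissionDenied(permission)

    def http_get(self, url: str) -> str:
        self.require("http:read")
        return f"GET {url}"      # placeholder for a real sandboxed fetch

    def create_task(self, title: str) -> dict:
        # task:create is described above as a shortcut for data:write (tasks only)
        if not ({"task:create", "data:write"} & self.granted):
            raise ExtensionPermissionDenied("task:create")
        return {"title": title}
```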

Example Flow:

  1. User clicks "Install JIRA" in marketplace
  2. Frontend shows permission dialog: "JIRA Integration requests: http:read, http:write, vault:read, data:write"
  3. User clicks "Install" → extension downloads from rg.morphee.ai/public/jira:1.0.0
  4. Backend verifies signature (RSA-PSS + SHA-256), stores extension + permissions
  5. Extension ready — AI can now create JIRA issues via chat
  6. Same .wasm file works on Tauri desktop app (Rust runtime)

OpenMorph Integration (Layer 3):

  • Extensions stored in .morph/extensions/*.wasm (portable with Space)
  • File extension association system: GitStore maps .wasm → WASMIntegration when can_open() returns true
  • Multi-Integration support: .py can be opened by TextIntegration OR PythonIntegration
  • Discovery: GitStore scans .morph/extensions/ on Space load, registers available extensions

Implementation Phases (12 weeks):

| Phase | Duration | Deliverables |
| --- | --- | --- |
| 1. Foundation | 2 weeks | Core runtime, WIT interface, host functions, Rust SDK, echo.wasm example works |
| 2. Security | 2 weeks | Install-time permissions, resource limits, code signing, audit logging |
| 3. Distribution | 1 week | rg.morphee.ai registry (GHCR), manifest schema, morphee-ext CLI tool |
| 4. Developer Experience | 2 weeks | SDK docs (30+ pages), 3 example extensions, project templates, local dev tools |
| 5. Built-in Extensions | 3 weeks | JIRA, GitHub, Notion, Linear, Slack as WASM (5 professional integrations) |
| 6. Marketplace UI | 2 weeks | Catalog page, install flow, settings tab, auto-updates, analytics dashboard |

Success Metrics:

  • ✅ Zero duplication: Same binary runs on Python + Rust (portability goal achieved)
  • ✅ Community-ready: Third-party developer builds and publishes extension
  • ✅ OpenMorph synergy: Extension in .morph/extensions/*.wasm works offline
  • ✅ Security: No security incidents (escapes, data leaks, crashes)

Use Cases:

  • Teacher loads "Math Tutor" extension into Homework Space → interactive geometry tools
  • Developer creates data visualization extension → publishes to rg.morphee.ai/public/dataviz:1.0.0
  • Parent installs "Chore Tracker" extension with permission to create tasks
  • Freelancer stores JIRA extension in .morph/extensions/jira.wasm → portable with TechCorp project Space

Professional Integrations as WASM

Rebuild as WASM extensions (validates extension system):

  • JIRA integration — issues, boards, sprints, comments
  • GitHub integration — PRs, issues, commits, reviews
  • Notion integration — pages, databases, search
  • Linear integration — issues, projects, workflows
  • Slack integration (MVP exists in Python) — rebuild as first WASM extension

1Password Vault Backend (opt-in)

  • OnePasswordVaultProvider — integrates with 1Password CLI (op) or 1Password SDK
  • Power users can store Integration credentials in their existing 1Password vault
  • Resolves op://vault/item/field URIs transparently

Browser Extension & Embedding SDK (Future)

Browser Extension (Chrome/Firefox)

Floating Morphee widget available on any webpage, context-aware (can extract selected text, page URL, page title).

Architecture:

  • Content script (injected into all pages) → detects context, shows widget
  • Background service worker → maintains WebSocket, authenticates with API
  • Popup/sidebar (React) → chat interface, task quick-add, memory search

Features:

  • Chat widget with page context: "I'm on Amazon looking at this product — add to family wishlist"
  • Context menu: Right-click selected text → "Add to Morphee Tasks"
  • Keyboard shortcuts: Cmd+Shift+M opens chat
  • Works offline (IndexedDB cache, sync when online)

Embedding SDK (@morphee/sdk)

npm package for website owners to integrate Morphee chat/tasks/memory into their apps.

Installation: npm install @morphee/sdk

Usage:

import { MorpheeClient } from '@morphee/sdk';

const morphee = new MorpheeClient({
  apiKey: 'mk_partner_key', // Partner API key
  groupId: 'user_group_id',
});

// Chat widget
await morphee.chat.render({
  container: '#chat-widget',
  theme: 'light',
});

// Task quick-add
await morphee.tasks.create({
  title: 'Follow up on customer inquiry',
  metadata: { customerId: '12345' },
});

// Memory search (RAG)
const results = await morphee.memory.search({
  query: 'customer preferences for shipping',
  limit: 5,
});

White-label support:

  • Partners customize logo, colors, fonts, name
  • Embedding appears as partner's AI assistant

Partnership Framework (Future)

Partnership Tiers:

  • Developer (Free): 1k API req/month, extension store listing, 70/30 revenue share
  • Startup: 100k req/month, white-label SDK, co-marketing, 60/40 share
  • Enterprise: Unlimited, custom SLA, on-prem option, custom revenue split

Partner Portal (https://partners.morphee.app):

  • API key management (create, revoke, rotate)
  • Usage analytics (requests/day, errors, latency)
  • Revenue dashboard (extension sales, usage-based billing)
  • Documentation (API reference, SDK guides)

Revenue Sharing:

  • User pays $9.99/month for premium extension
  • Morphee takes 30% ($3.00), developer gets 70% ($6.99)
  • Monthly payouts via Stripe Connect
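The split above works out as integer arithmetic in cents (Stripe Connect fees are ignored in this sketch):

```python
# 70/30 revenue split in cents; rounding half-up on the platform share so a
# $9.99 sale yields $3.00 platform / $6.99 developer, as in the example above.
def split_revenue(price_cents: int, platform_pct: int = 30) -> tuple[int, int]:
    platform = (price_cents * platform_pct + 50) // 100   # round half up
    developer = price_cents - platform
    return platform, developer
```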

Effort: X-Large (12 weeks) — Extension system core + WASM migrations + Marketplace + SDK/browser extension (future phases)


Layer 5: Multimodal Interaction (V1.3)

Status: SHIPPED (February 22, 2026) — Merge of video/gesture recognition (old 3p) + interactive UI (old 3k) + multi-modal identity (old 3o). See docs/status.md for detailed sprint completion.

Foundation for family-friendly app with kids, elderly, accessibility users. Users are not just email addresses. Identity is multi-modal — authenticate through voice recognition, face recognition, GitHub OAuth, passkeys, and more. Some ACL actions require multiple forms of identification.

Multi-Modal Identity & Authentication (6-8 weeks)

Core capabilities:

  • Identity model — separate from authentication methods (one identity, many methods)
  • Multi-method authentication — email, OAuth (GitHub/Google/Apple), face recognition, voice recognition, passkeys (WebAuthn), 2FA (TOTP/SMS), hardware keys
  • Step-up authentication — sensitive actions require additional authentication methods
  • Biometric processing — face encoding (FaceNet/ArcFace), voice encoding (speaker verification) in Tauri Rust via candle
  • Parent-kid system — kids without email can use biometrics, parents approve sensitive actions
  • Accessibility-first — voice-only mode, face-only mode, no email required

Critical use case: Kids without email can authenticate via voice or face recognition. "Morphee, it's me" → authenticated. Parents approve sensitive actions via their own auth.

Technical implementation:

  • Database: identities, authentication_methods, authentication_sessions, step_up_challenges, parent_approval_requests
  • Backend: MultiModalAuthService, FaceRecognitionService, VoiceRecognitionService, StepUpAuthService
  • Tauri Rust: FaceEncoder (candle + FaceNet), VoiceEncoder (candle + speaker verification), liveness detection
  • Frontend: FaceRecognitionAuth, VoiceRecognitionAuth, StepUpAuthDialog, multi-method login screen
  • Security: biometric templates in vault (never database), liveness detection, GDPR-compliant deletion

Use cases:

  • Sophie (age 8, no email) says "Morphee, it's me" → voice recognized → authenticated (level 2)
  • David (adult) uses face recognition for quick login → level 2 → tries to delete group → step-up challenge → scans TOTP → level 3 → allowed
  • Maria (visually impaired) uses voice-only → no visual UI needed → full app access
  • Parent approval: kid tries to add friend → parent gets notification → parent authenticates via face/voice → approves
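The step-up logic in these use cases can be sketched as an assurance-level check: each method grants a level, and each action demands a minimum. The level numbers mirror the use cases above; the method-to-level and action-to-level mappings are illustrative, not the shipped policy:

```python
# Sketch of step-up authentication levels. Which methods grant which level,
# and which actions need which level, would really come from ACL config.
AUTH_LEVELS = {"voice": 2, "face": 2, "passkey": 2, "totp": 3, "hardware_key": 3}
ACTION_MIN_LEVEL = {"read_tasks": 1, "send_message": 2, "delete_group": 3}

def current_level(methods_used: list[str]) -> int:
    return max((AUTH_LEVELS.get(m, 1) for m in methods_used), default=0)

def authorize(action: str, methods_used: list[str]) -> str:
    needed = ACTION_MIN_LEVEL.get(action, 2)   # default: require level 2
    if current_level(methods_used) >= needed:
        return "allowed"
    return "step_up_required"   # client prompts for an additional method
```

This reproduces David's flow: face alone (level 2) cannot delete the group, but face + TOTP (level 3) can.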

Security & privacy:

  • Never store raw biometric data (face images, voice recordings) — only derived templates (512-dim vectors)
  • All templates stored in VaultProvider (AES-256 encrypted)
  • Liveness detection prevents spoofing (eye blinks, head movement, voice naturalness)
  • GDPR compliance: delete all biometric data on request
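Matching a probe against an enrolled 512-dim template is typically a cosine-similarity check. The pure-Python math and the 0.8 threshold below are illustrative — real systems tune thresholds per model and combine the score with liveness checks:

```python
# Sketch of biometric template matching via cosine similarity.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def matches(enrolled: list[float], probe: list[float], threshold: float = 0.8) -> bool:
    # Accept only if the probe embedding is close enough to the enrolled one.
    return cosine_similarity(enrolled, probe) >= threshold
```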

Effort: X-Large (6-8 weeks) — Multi-method login (2 weeks), Biometric enrollment (2 weeks), Step-up auth (1 week), Parent-kid features (2 weeks), Security & polish (1 week)

Video Interaction & Visual Recognition (3-4 weeks)

Enhance Morphee's interactivity through camera input, gesture recognition, and visual analysis.

Morphee is chat-first, but adding video capability opens new interaction modalities: gesture-based approvals, visual context sharing, handwriting recognition, and accessibility for non-vocal users.

Core Capabilities

Camera Input & Visual Capture

  • Device camera access (with user permission)
  • Picture/image upload + selection
  • Live video frame capture and analysis
  • Screen sharing (desktop via Tauri) for context
  • Gallery integration for existing photos/videos

Gesture Recognition

  • Hand gesture detection (thumbs up/down, wave, point, open palm) via pose.js or TensorFlow.js
  • Head movements (nod, shake, tilt)
  • Body language detection (sitting, standing, leaning)
  • Real-time gesture response: AI reacts to gestures as interactions

Visual Analysis

  • Claude's vision capabilities for image/video frame analysis
  • Handwriting recognition and transcription
  • Document scanning and OCR
  • Diagram/whiteboard interpretation
  • Object detection and scene understanding

Visual Content in Chat

  • Display pictures alongside messages
  • Image gallery views
  • Whiteboard/diagram rendering
  • Document preview + context discussion

Accessibility

  • Voice feedback for all visual features
  • Gesture alternatives to buttons (especially for physical accessibility)
  • Caption generation for video content
  • High-contrast visual indicators

Implementation Plan

Frontend

  • CameraCapture component — access device camera, preview, capture frames
  • ImageUploader component — drag-drop, gallery, file picker
  • GestureDetector — pose.js or TensorFlow.js for hand/body recognition
  • VisualRenderer — display pictures, diagrams, documents in chat
  • New FrontendIntegration actions: show_image, show_video, capture_gesture_response
  • Camera permission flow (browser + Tauri native permission dialogs)

Backend

  • New VisualIntegration(BaseInterface) with 4 actions:
    • analyze_image(image_data) → description + structured data
    • recognize_gesture(video_frame) → gesture_type + confidence
    • transcribe_handwriting(image) → text
    • interpret_diagram(image) → structured explanation
  • Vision API integration (Claude's vision model via Anthropic SDK)
  • Gesture interpretation rules (map raw poses to semantic gestures)
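The "gesture interpretation rules" step might look like the sketch below: a raw pose label plus detector confidence is mapped to a semantic gesture only above a confidence floor. The labels and the 0.7 cutoff are assumptions, not the shipped rule set:

```python
# Sketch of mapping raw pose labels to semantic gestures.
GESTURE_SEMANTICS = {
    "thumbs_up": "approve",
    "thumbs_down": "reject",
    "open_palm": "stop",
    "wave": "greet",
}

def interpret_gesture(pose_label: str, confidence: float, floor: float = 0.7):
    if confidence < floor:
        return None                       # too uncertain: ignore this frame
    return GESTURE_SEMANTICS.get(pose_label)
```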

Tauri Rust (Desktop)

  • ScreenCapture command — capture portion/full screen as context
  • Camera frame streaming to frontend
  • Local gesture detection pre-processing (optional GPU acceleration)

Database

  • visual_interactions table — gesture logs, image analysis cache
  • gesture_preferences — custom gesture bindings per user

Use Cases

| User | Interaction | Benefit |
| --- | --- | --- |
| Child | Shows drawing → Morphee analyzes → creates task/note | Visual-first interaction, no typing |
| Parent | Gives thumbs up → Morphee approves child request | Quick non-verbal approval |
| Teacher | Shows diagram on whiteboard → Morphee explains → generates questions | Interactive learning |
| Elderly | Points at screen → Morphee responds | Gesture-based accessibility |
| Developer | Shows error screenshot → Morphee analyzes → suggests fix | Visual debugging |
| Family | Shows photo → Morphee stores in memory + tags | Visual memory capture |

Privacy & Security Considerations

  • Camera permission: Explicit user consent, stored in ACL system
  • Image storage: Analyzed images NOT stored (only descriptions/metadata) unless explicitly saved to memory
  • Gesture tracking: Optional feature, disable via settings
  • GDPR: Visual data deletion on request, privacy policy update
  • Data minimization: Process frames locally where possible (gesture detection), send to Claude only when needed (visual analysis)

Effort: Large (3-4 weeks) — Camera/upload M + Vision integration M + Gesture detection M + UI rendering M + Tests M

Interactive/Editable UI Components (2-3 weeks)

The Frontend Integration (Layer 1) renders interactive components, but they're read-only after generation. This adds inline editing with haptic feedback.

New ComponentSpec fields:

{
  editable: boolean,              // enables inline editing
  edit_permissions: "user" | "parent" | "ai_turn",
  haptic_feedback: "pulse" | "vibrate" | "flash" | "none",
  auto_save: boolean,
  edit_schema: object             // validation for edited values
}

Haptic feedback API:

  • Desktop: macOS NSHapticFeedbackManager, Windows Haptics API, Linux evdev
  • Mobile: iOS UIImpactFeedbackGenerator, Android Vibrator
  • Web: navigator.vibrate(pattern) (limited support)

Editable component types:

| Component | Edit Actions | Haptic Feedback |
| --- | --- | --- |
| card | Edit title/body inline | Pulse on save |
| list | Reorder items (drag), add/remove | Vibrate on drag |
| table | Edit cell values inline | Flash row on update |
| calendar | Drag events, resize | Haptic on drop |
| kanban | Drag cards between columns | Vibrate on column change |

New FrontendIntegration actions:

  • enable_editing(component_id, fields?) → updated component
  • lock_editing(component_id) → locked
  • apply_edits(component_id, changes) → updated (triggers AI_TURN if edit_permissions="ai_turn")

Use cases:

  • Edit task description inline in chat without opening TaskDetail page
  • Drag calendar events in timeline view, AI updates memory
  • Reorder shopping list items by dragging, AI learns preferences

Effort: Medium (1-2 weeks) — Interactive UI M + Haptic API M + Tests M


Layer 6: Local AI — Full Offline Intelligence (V1.5)

Status: SHIPPED (February 22, 2026) — Local LLM inference (candle/GGUF), on-device audio (Whisper STT, TTS), smart cloud/local routing. See docs/status.md for V1.5 gap closure details.

Context: V1.0 shipped offline data + sync (git-native, action queue). V1.5 completes the picture by adding local LLM inference — AI that works without any cloud dependency. Data offline was V1.0. AI offline is V1.5.

Tauri-only. The web client remains cloud-dependent by design. Local AI requires the Tauri Rust layer (candle, ONNX), which doesn't exist in a browser context.

Local LLM via Tauri Rust (candle + GGUF)

  • candle (Hugging Face Rust ML framework) for GGUF model inference
  • Run small quantized models locally: Phi-4-mini, Llama 3.2 3B-Q4, Mistral 7B-Q4
  • Use cases: quick local tasks, offline chat, summarization, classification, drafting
  • Smart routing: local model for simple tasks, cloud Claude for complex reasoning (user-configurable threshold)
  • Metal acceleration on macOS, CUDA on Linux/Windows, ANE on Apple Silicon mobile
  • Same LLM Integration contract — "Local Phi" is just a different Interface from "Cloud Claude". No orchestrator changes.
  • Model management UI: download, switch, delete models; storage usage indicator
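The smart-routing rule can be sketched as a simple classifier over the request. The heuristic below (tool use or message length vs a user-configurable threshold) is illustrative — the real router could score task complexity however it likes, since both targets satisfy the same LLM Integration contract:

```python
# Sketch of local/cloud LLM routing.
def route_request(message: str, needs_tools: bool, threshold: int = 200) -> str:
    if needs_tools or len(message) > threshold:
        return "cloud"    # complex reasoning / tool orchestration → Claude
    return "local"        # simple task → e.g. quantized Phi-4-mini via candle
```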

Audio & Video Processing (Tauri Rust)

  • Speech-to-text: ONNX Whisper via ort — transcription runs entirely on-device
  • Text-to-speech: local TTS for voice responses (accessibility + offline voice UX)
  • Voice as a channel: speak → transcribe → local LLM → synthesize → play
  • Video: frame extraction, scene detection, thumbnail generation
  • All processing in Rust — zero cloud dependency for media

Mobile Local AI (M3)

  • Local LLM on mobile via candle (Phi-4-mini Q4 — fits in 2GB RAM)
  • ANE (Apple Neural Engine) acceleration on iOS, GPU on Android
  • Background inference scheduling — doesn't block UI thread
  • Progressive model download with resume support

What V1.5 does NOT need to re-implement

Since V1.0 already ships:

  • ✅ Offline data (git-native .morph/ dirs)
  • ✅ Offline sync (git push/pull on reconnect)
  • ✅ Action queue (offline writes replayed on reconnect)
  • ✅ Offline auth (JWT cached in OS keychain)

V1.5 only adds the intelligence layer — local model inference. The server becomes purely a sync hub and optional cloud compute provider, not a hard dependency for anything.


Layer 7: Production Platform & Rich Views (V2.0)

Monitoring & Analytics — PostHog + Grafana (2-3 weeks)

Production observability for product analytics (PostHog) and infrastructure metrics (Grafana).

PostHog Integration (Product Analytics)

  • Event tracking: user actions (chat messages, tool calls, approvals, settings changes)
  • Feature flags: A/B test new UI patterns, rollout features gradually
  • Session replay: debug user issues by replaying their session
  • Funnels: onboarding completion, integration setup, daily active users
  • User properties: persona, group size, active integrations, plan tier

Events to track: chat_message_sent, tool_call_executed, approval_requested, onboarding_completed, integration_connected, conversation_created, search_performed

Implementation:

  1. Backend: PostHogClient wrapper, async event capture
  2. New analytics/ module with event definitions
  3. Opt-in via privacy settings: update_setting("privacy", "analytics_enabled", true)
  4. Events are sent only if the user consents; PII scrubbing ensures message content is never sent, only metadata
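Consent gating and PII scrubbing can be enforced at event-construction time. A sketch under assumptions: `build_event`, `ALLOWED_EVENTS`, and the `safe_keys` whitelist are illustrative, not actual module contents.

```python
import time
from typing import Any, Optional

# Only the events named in the roadmap are ever emitted.
ALLOWED_EVENTS = {
    "chat_message_sent", "tool_call_executed", "approval_requested",
    "onboarding_completed", "integration_connected",
    "conversation_created", "search_performed",
}

def build_event(name: str, user_id: str, consented: bool,
                metadata: dict) -> Optional[dict]:
    """Return a PostHog-ready payload, or None if it must not be sent.

    PII scrubbing: only whitelisted metadata keys survive, so message
    content can never leak into analytics."""
    if not consented or name not in ALLOWED_EVENTS:
        return None
    safe_keys = {"persona", "group_size", "integration", "duration_ms"}
    return {
        "event": name,
        "distinct_id": user_id,
        "timestamp": time.time(),
        "properties": {k: v for k, v in metadata.items() if k in safe_keys},
    }
```

The whitelist approach fails closed: a new metadata field leaks nothing until someone deliberately adds it to `safe_keys`.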

Grafana + Prometheus (Infrastructure Metrics)

  • API metrics: request rate, latency, error rate per endpoint
  • LLM metrics: token usage, streaming latency, tool call count
  • Memory metrics: RAG latency, vector search performance, git sync success rate
  • Database metrics: query latency, connection pool usage
  • WebSocket metrics: active connections, event throughput, reconnect rate

Dashboards:

  1. API Health: request/error rates, p50/p95/p99 latency, status code distribution
  2. LLM Performance: tokens/sec, streaming chunks/sec, tool execution time
  3. Memory System: embedding generation time, vector search latency, git sync failures
  4. User Activity: active users (1h/24h/7d), conversations/day, messages/conversation
  5. Errors & Alerts: exception rate, failed tool calls, timeout rate

Implementation:

  1. Backend: prometheus_client library, /metrics endpoint
  2. Custom metrics in utils/metrics.py (counters, histograms, gauges)
  3. Docker Compose: add Prometheus + Grafana services
  4. Alert rules: Slack webhook on error rate spike, LLM timeout >10s
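The real implementation would use the prometheus_client library; as a dependency-free illustration of what the /metrics endpoint exposes, here is a minimal labeled counter that emits Prometheus text exposition format. The class and metric names are stand-ins, not actual code.

```python
from collections import defaultdict

class Counter:
    """Minimal stand-in for prometheus_client.Counter (labels + inc)."""
    def __init__(self, name: str, help_text: str):
        self.name, self.help_text = name, help_text
        self.values = defaultdict(float)

    def inc(self, amount: float = 1.0, **labels):
        # Sort labels so the same label set always maps to one series
        self.values[tuple(sorted(labels.items()))] += amount

    def expose(self) -> str:
        """Render in Prometheus text exposition format."""
        lines = [f"# HELP {self.name} {self.help_text}",
                 f"# TYPE {self.name} counter"]
        for labels, v in self.values.items():
            label_str = ",".join(f'{k}="{val}"' for k, val in labels)
            lines.append(f"{self.name}{{{label_str}}} {v}")
        return "\n".join(lines)

# One of the API Health metrics from the dashboard list above
http_requests = Counter("api_requests_total", "API requests per endpoint")
http_requests.inc(endpoint="/chat", status="200")
http_requests.inc(endpoint="/chat", status="200")
http_requests.inc(endpoint="/search", status="500")
```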

Effort: Large (2-3 weeks) — PostHog M + Grafana M + Tests M

Rich Content Views

Status: PARTIALLY DONE — Frontend views for existing backend integrations.

The backend already has Google Calendar and Gmail integrations, but users can only access them through chat commands. Visual views make these dramatically more useful.

  • Calendar view/calendar route with day view, navigation, Google OAuth check ✅
  • Email view/inbox route with email thread list, unread badges, Gmail OAuth check ✅
  • Voice input — Microphone button with Web Speech API, continuous recognition ✅
  • Offline mode — Service worker with network-first API caching, cache-first static assets ✅
  • Navigation consolidation — reduce 5-tab nav to 3-4 for chat-first focus (M-DESIGN-002)

Channel Adapters

  • WhatsApp Business API
  • Telegram Bot
  • Email (IMAP/SMTP)
  • Voice (speech-to-text → AI → text-to-speech) — local via Tauri Rust

All channels feed into the same Agent Orchestrator.
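The "same orchestrator" claim implies every channel normalizes into one message shape. A sketch, assuming a hypothetical `ChannelAdapter` protocol; the method names and the Telegram normalization are illustrative only.

```python
from typing import Protocol

class ChannelAdapter(Protocol):
    """Every channel (WhatsApp, Telegram, Email, Voice) normalizes its
    messages into one shape before handing off to the orchestrator."""
    def receive(self, raw: bytes) -> dict: ...
    def send(self, user_id: str, text: str) -> None: ...

class TelegramAdapter:
    def receive(self, raw: bytes) -> dict:
        # Hypothetical normalization; real payloads are Telegram Update JSON
        return {"channel": "telegram", "text": raw.decode("utf-8")}

    def send(self, user_id: str, text: str) -> None:
        print(f"[telegram -> {user_id}] {text}")

def handle_incoming(adapter: ChannelAdapter, raw: bytes) -> dict:
    """Single entry point: all channels feed the same orchestrator."""
    msg = adapter.receive(raw)
    # ... Agent Orchestrator dispatch would happen here ...
    return msg
```

Adding a new channel then means writing one adapter, never touching the orchestrator.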

Tauri Desktop Polish

  • System tray, notifications, keyboard shortcuts
  • Native builds (macOS, Windows, Linux), auto-update
  • Model management UI: download/update local ONNX and GGUF models

Layer 8: Future & Optional Features (V2.5+)

Self-Aware AI Development (3-4 weeks)

The revolutionary feature that makes Morphee the first self-aware, collaborative AI agent.

Morphee's codebase IS her memory database. Through the Memory Integration, Morphee can read, understand, and explain her own source code. The community contributes improvements via git branches, Morphee reviews changes, and Sebastien Mathieu maintains sole merge authority to the main branch.

Core capabilities:

  • Self-referential memory scope — Memory Integration points to Morphee's own git repository
  • Code search & explanation: search_code, explain_implementation, get_architecture_diagram actions
  • Community contribution review: review_community_branch analyzes PRs with recommendations
  • Self-improvement proposals: suggest_improvement allows Morphee to propose code changes
  • Benevolent dictator governance — Sebastien Mathieu has sole merge authority to main branch
  • Collaborative branching workflow — Community creates branches, Morphee reviews, human approves

What makes this revolutionary:

  • First truly self-documenting AI agent (reads her own implementation in real-time)
  • Open-source collaborative AI development (humans AND AI write code together)
  • Living documentation (docs are memories in the same Git repo)
  • Recursive self-improvement loop with human oversight

Technical implementation:

  • Extend GitStore with morphee-self group pointing to /morphee-beta
  • New MorpheeSelfIntegration with read-only and propose actions
  • Database tables: morphee_self_config, code_review_history, self_improvement_proposals
  • Frontend: CodeExplorer component, CommunityPage with PR management
  • ACL roles: viewer, contributor, reviewer, maintainer, dictator (Sebastien only)
  • Safety: read-only by default, all writes require human approval, audit trail
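The safety rules above ("read-only by default, all writes require human approval") reduce to a small authorization check. This is an illustrative sketch: the `authorize` function and the `merge_to_main` action name are hypothetical; only the action and role names come from the list above.

```python
READ_ACTIONS = {"search_code", "explain_implementation",
                "get_architecture_diagram", "review_community_branch"}
WRITE_ACTIONS = {"suggest_improvement"}

def authorize(action: str, role: str, human_approved: bool) -> bool:
    """Read-only by default; every write needs explicit human approval,
    and only the 'dictator' role can merge to the main branch."""
    if action in READ_ACTIONS:
        return True                    # safe: never mutates the repo
    if action in WRITE_ACTIONS:
        return human_approved          # proposal lands in a branch, never main
    if action == "merge_to_main":
        return role == "dictator" and human_approved
    return False                       # unknown actions are denied
```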

Use cases:

  • User asks: "How do you handle authentication?" → Morphee cites backend/auth/client.py:42-89
  • Community member proposes Slack integration → Morphee reviews code, checks tests, recommends approval
  • Morphee identifies inefficient code → proposes optimization → creates branch → awaits human approval
  • Developer explores codebase → CodeExplorer searches across Python/TypeScript/Rust files

Philosophical impact: This is the first AI agent that can read, understand, and improve her own implementation. Not just "aware" in the philosophical sense: she knows her own code and can participate in her own development. The future of AI is not closed-source proprietary models; it's open, collaborative, self-aware agents like Morphee.

Connection to Knowledge Pipeline: With OpenMorph, self-awareness is simple: Morphee's source code IS a Space (.morph/ at the repo root). A developer asking "how does task creation work?" is just a normal RAG query against that Space's memory. The codebase is just another Space — no special infrastructure needed.

Reference: MyHarbor.AI (user's previous project) provides inspiration for collaborative contribution model.

Effort: Large (3-4 weeks) — Phase 1: Read-only self-awareness (1 week), Phase 2: Community contributions (1 week), Phase 3: Self-improvement (1 week), Phase 4: Polish & governance (1 week)

Crypto Marketplace & Decentralized Economy

Status: DEPRIORITIZED — Optional module, can be built as WASM extension later.

Transform extension marketplace into decentralized economy with cryptocurrency payments, data exchange, smart contracts.

Reference project: MyHarbor.AI (user's previous work on transaction publishing) — study their model for publisher/reader payment flows.

Core features:

  • Crypto Payments: Accept ETH (L1 + L2s: Polygon, Arbitrum, Base), USDC/DAI (stablecoins), BTC + Lightning
  • Smart contracts: ExtensionMarketplace deployed on Ethereum + L2s
  • Web3 Wallet Integration: WalletConnect v2 — MetaMask, Coinbase Wallet, Rainbow, etc.
  • Data Marketplace: Users sell trained models, datasets, prompts, memory exports
  • NFT Extensions: Extensions as ERC-721 NFTs with automatic royalties
  • DAO Governance: MorphCoin (100M supply) governance token

Revenue model: Smart contracts auto-split payments (70% dev, 20% Morphee, 10% DAO treasury)
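The 70/20/10 split would live on-chain, but the arithmetic is worth pinning down: integer division loses remainder wei, so the contract must assign the dust somewhere. A sketch in Python (assigning dust to the developer is an assumption, not a stated design decision):

```python
def split_payment(amount_wei: int) -> dict:
    """70% developer / 20% Morphee / 10% DAO treasury, using the integer
    math a smart contract would use; rounding dust goes to the developer."""
    dev = amount_wei * 70 // 100
    morphee = amount_wei * 20 // 100
    dao = amount_wei * 10 // 100
    dev += amount_wei - (dev + morphee + dao)   # no wei is ever lost
    return {"developer": dev, "morphee": morphee, "dao_treasury": dao}
```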

Security: External audit CRITICAL (Trail of Bits or OpenZeppelin) before mainnet deployment

Effort: XX-Large (10-12 weeks + 4 weeks security audit)


Safety, Compliance & Accessibility

Status: COMPLETE — Required for family personas (Sophie, Emma, Jeanne) and regulatory compliance.

Completed:

  • ACL system — Generic resource-based access control (grant/revoke/check/list), user preferences, monitoring ✅
  • Age verification — Regional age thresholds (EU 16, US 13), minor detection ✅
  • Parental consent flow — Consent service + email verification + frontend pages ✅
  • Account deletion — GDPR Article 17 compliant deletion with confirmation ✅
  • Data export — JSON and Markdown export (GDPR Articles 15/20) ✅
  • Consent tracking — Per-purpose consent (LLM, memory, Google, push) ✅
  • i18n foundation — Backend i18n module (errors, WebSocket, integrations) + frontend en/fr locales ✅
  • Encryption at rest — Fernet encryption for chat messages, memory vectors, Git files ✅
  • Privacy policy + terms — visible in login, onboarding, settings ✅
  • Content filtering — age-appropriate content controls, parental controls tab ✅
  • Chat text size control — adjustable text size (Small/Medium/Large/XL) in Settings ✅
  • Accessibility (WCAG 2.1 AA) — ARIA landmarks, screen reader support, keyboard navigation, reduced-motion, landscape orientation, semantic headings, radiogroup patterns, undo for destructive actions. Audit: 10/10 items resolved, score 9.8/10. ✅
  • Persona-specific UI modes — Simplified mode with larger text, wider spacing ✅
  • i18n audit — 20/20 items resolved: string externalization, locale-aware formatting, 6-language catalog, RTL foundation ✅

Ecosystem & Multi-Tenancy (Future)

  • Integration marketplace (community-contributed)
  • Skill sharing
  • Multi-group support on single instance
  • Developer API and SDK
  • Advanced trust model and parental controls
  • CloudKmsVaultProvider + EncryptedDbVaultProvider — enterprise vault backends (AWS Secrets Manager, Azure Key Vault, GCP Secret Manager, encrypted DB columns for hosted multi-tenant)

Milestones

Milestone | Layer | Version | What Works
Alpha | Phase 1a + 1b | - | Chat works, AI responds and can take basic actions
Beta | Phase 2 | - | AI remembers context (pgvector + Git), RAG, auto-summarization
Foundation | Layer 1 | - | Calendar, email, skills, SSO, mobile apps, local ML
Product Ready | Layer 2 | V0.9.0-rc.1 | UX: mobile drawer, search, profiles, markdown, message editing, group invites, feature tour, data export, ACL, age verification, parental consent, encryption at rest, i18n (en/fr), keyboard shortcuts, file upload, conversation context, settings as integration — SHIPPED Feb 19, 2026
Git-Native + Offline Data | Layer 3 | V1.0 | OpenMorph: .morph/ directories, portable Spaces, git branching, temporal navigation, multi-sync modes + offline data/sync/auth on Tauri — SHIPPED Feb 21, 2026
Extensible | Layer 4 | V1.2 | WASM extensions (WasmRuntime), OCI registry, JIRA/Notion/GitHub as WASM, marketplace as knowledge exchange — SHIPPED Feb 21, 2026
Multimodal | Layer 5 | V1.3 | Voice/face auth, kids without email, video/gesture recognition, interactive/editable components, haptic feedback — SHIPPED Feb 22, 2026
Local AI | Layer 6 | V1.5 | Local LLM inference (candle/GGUF: Phi-4, Llama 3B, Mistral 7B), on-device audio/video (Whisper STT, TTS), smart cloud/local routing — SHIPPED Feb 22, 2026
Production | Layer 7 | V2.0 | Monitoring (PostHog, Grafana), channels (WhatsApp, Telegram), browser extension, partner framework, rich content views
Advanced | Layer 8 | V2.5+ | Self-aware AI (codebase as memory), crypto marketplace (optional)

Principles

  • Chat-first: If it can't be done through conversation, it's not ready
  • Non-technical users: Grandma, kids, teachers — no dashboards, no jargon (see stories/ for personas)
  • Everything is an Integration: LLM, Memory, Frontend, Gmail — same abstraction. Spaces themselves can become Integrations through the Knowledge Pipeline.
  • Knowledge flows up: Usage → memory → skills → compiled extensions → marketplace. No one needs to write code to create shareable intelligence.
  • LLM as last resort: Vector search first, structured skills second, compiled WASM third. The LLM is called only when nothing cheaper can handle the request (~65% token savings).
  • Privacy: Self-hosted, your data stays with your group. Secrets stored in vault (OS keychain, 1Password, etc.), never in plaintext in the database
  • Offline-first (Tauri): The Tauri app (desktop + mobile) works without internet from V1.0 — data in git, auth cached, actions queued. Cloud enhances, not gates. The web client is cloud-dependent by design (no Tauri layer). Local LLM inference (candle/GGUF) completes the picture in V1.5.
  • Local compute via Rust: ML inference, memory, audio/video run in Tauri's Rust backend — fast, private, no server dependency
  • Protocol-first (OpenMorph): .morph/ is an open protocol, not a proprietary format. Any app that reads .morph/ directories becomes compatible — like ActivityPub for social media or .git/ for version control. Morphee is the reference implementation.
  • Minimal viable first: Get the conversation working, then layer on capabilities

Technology Choices: Tauri Rust Layer

The Tauri v2 desktop app has a Rust backend accessible via IPC (invoke()). This is where local compute lives:

Crate | Format | Purpose | Layer
fastembed | ONNX | Local embeddings (wraps ort, auto-downloads models) | Phase 2b
lancedb | - | Embedded vector database (native Rust) | Phase 2b
git2 | - | Local Git operations for memory (libgit2 bindings) | Phase 2b
keyring | - | OS keychain for secrets (macOS/Windows/Linux) | Phase 2b
candle | GGUF | Local LLM inference (quantized Llama, Phi, Mistral); Tauri-only, not available on web | Layer 6

Why fastembed/ort + candle:

  • fastembed wraps ort (ONNX Runtime) — handles model download, tokenization, embedding in one crate
  • ort wraps ONNX Runtime — industry standard, hardware-accelerated (Metal, CUDA, DirectML)
  • candle is pure Rust (no C++ build chain), great GGUF support, maintained by Hugging Face
  • Both support Metal on macOS natively
  • They handle different model formats for different purposes — no conflict

Mobile crate strategy (Layer 1):

fastembed and lancedb don't compile for mobile targets (ONNX Runtime has no prebuilt mobile binaries; LanceDB has a known SIMD detection failure on Android). On mobile, these are replaced:

Desktop | Mobile | Why
fastembed (ONNX) | candle (pure Rust) | No C++ dependency, Metal on iOS, CPU on Android
lancedb (embedded) | SQLite + sqlite-vec | SQLite is universal, vector extension is lightweight
git2 (vendored) | git2 (same) | Vendored libgit2 cross-compiles via cc crate
keyring (apple-native) | keyring (iOS) / Android Keystore plugin (Android) | apple-native covers both macOS and iOS

Conditional compilation (#[cfg(not(any(target_os = "ios", target_os = "android")))]) keeps desktop and mobile paths separate. The Integration/Interface abstraction means the orchestrator doesn't care which crate runs underneath.

Hybrid architecture:

Frontend (React/TS)
├── HTTP → Python Backend (api.morphee.app)
│ ├── Auth, Groups, Cloud LLM, External Integrations
│ └── PostgreSQL, Redis

└── IPC (invoke) → Tauri Rust Backend (local)
├── LanceDB (vector search, native Rust)
├── Git (libgit2, memory storage)
├── ONNX (ort — embeddings, whisper, small models)
└── GGUF (candle — local LLM inference)

The frontend talks to both backends. The Integration/Interface abstraction means the orchestrator doesn't need to know which backend handles a given action — it just calls the Interface.


Last Updated: February 22, 2026