Morphee Integration & Interface System
Guide to the MCP-style system where everything is an Integration — external services, the LLM, the memory system, and the frontend all share the same contract.
Terminology
Integration
The definition of a service or capability. Declares available actions, parameter types, and configuration schema.
In Morphee, everything is an Integration:
| Integration | Type | Purpose |
|---|---|---|
| LLM | Core | The AI's brain — reasoning, generating, embedding |
| Memory | Core | Knowledge storage — search, store, recall |
| Frontend | Core | Dynamic UI — render components in chat |
| Tasks | Core | Task management — list, create, update status |
| Spaces | Core | Space management — list, get current |
| Notifications | Core | Alerts & reminders — send, list, mark read |
| Onboarding | Core | New user setup — create group, create spaces |
| Cron | Service | Scheduled tasks, recurring reminders |
| Google Calendar | External | Event management (OAuth) |
| Gmail | External | Email service (OAuth) |
| Filesystem | External | Local file management (sandboxed) |
| Skills | Core | Dynamic workflow management (create, execute) |
| Echo | Testing | Echo back messages (testing) |
| Webhook | Service | HTTP requests (send/receive) |
Interface
A configured instance of an Integration. Adds credentials, API keys, and settings to make an Integration operational. Scoped per-Space and per-Group.
| Integration | Interface | Configuration |
|---|---|---|
| LLM | "Claude Sonnet" | api_key, model: claude-sonnet-4-20250514, temperature: 0.7 |
| LLM | "GPT-4" | api_key, model: gpt-4-turbo, temperature: 0.5 |
| LLM | "Haiku (cheap)" | api_key, model: claude-haiku-4-5-20251001, temperature: 0.3 |
| Memory | "Group Memory" | pgvector (via PostgreSQL), git_repo_path |
| Gmail | "Mom's Gmail" | OAuth tokens, account email |
| Calendar | "Family Calendar" | API key, calendar ID |
| Frontend | "Desktop App" | WebSocket connection |
The Agent Orchestrator can use different Interfaces for different tasks — powerful LLM for reasoning, cheap LLM for summarization, etc.
Interface Inheritance (Sub-Spaces)
Spaces can be nested. Sub-Spaces inherit their parent's Interfaces and can add or override at their level:
🏢 Seb Dev (root Space)
│ Interfaces: Gmail, Calendar, LLM
│
├── 💼 TechCorp (client Space)
│ │ Interfaces: + JIRA (TechCorp), + Slack (TechCorp)
│ │
│ ├── 🚀 API Refactor (project Space)
│ │ Inherits: Gmail, Calendar, LLM, JIRA, Slack — no overrides
│ │
│ └── 🚀 Mobile App (project Space)
│ Inherits: same + overrides Slack channel to #mobile
│
├── 💼 StartupXYZ (client Space)
│ Interfaces: + Linear, + Discord (replaces Slack at this level)
Resolution order when executing an action:
- Check current Space for a matching Interface
- Walk up to parent Space
- Walk up to Group-level Interfaces
- Use core defaults (LLM, Memory, Frontend)
This means: configure JIRA once at the client level, and every project Space under that client gets it automatically.
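A minimal sketch of this walk, assuming a hypothetical Space shape with parent links (the names and shapes here are illustrative, not the actual Morphee API):

```python
from dataclasses import dataclass, field

@dataclass
class Space:
    name: str
    interfaces: dict = field(default_factory=dict)  # integration name -> Interface (config)
    parent: "Space | None" = None

CORE_DEFAULTS = {"llm": "core-llm", "memory": "core-memory", "frontend": "core-frontend"}

def resolve_interface(space, integration, group_interfaces):
    """Walk current Space -> parent Spaces -> Group -> core defaults."""
    node = space
    while node is not None:
        if integration in node.interfaces:       # steps 1-2: current Space, then parents
            return node.interfaces[integration]
        node = node.parent
    if integration in group_interfaces:          # step 3: Group-level Interfaces
        return group_interfaces[integration]
    return CORE_DEFAULTS.get(integration)        # step 4: core defaults

# Mirrors the tree above: JIRA configured once at the client level
root = Space("Seb Dev", {"gmail": "gmail-seb"})
client = Space("TechCorp", {"jira": "jira-techcorp"}, parent=root)
project = Space("API Refactor", parent=client)

assert resolve_interface(project, "jira", {}) == "jira-techcorp"
```

Because the walk checks the current Space first, a sub-Space that re-declares an Interface (like Mobile App's Slack channel) shadows the inherited one without touching the parent's configuration.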
Skill
A composable capability combining multiple Interface calls. Built-in or AI-generated dynamically. V0.9 COMPLETE — SkillEngine executes step sequences with template interpolation, DynamicSkillInterface registers each skill as a callable tool, SkillsIntegration provides AI-facing CRUD (create, list, get, update, delete). Skills can be scheduled via cron.
Example: "Schedule Appointment" Skill calls Calendar Interface (create event) + Memory Interface (store context) + Frontend Interface (show confirmation).
Space as Integration
Any Space can become a shareable Integration. As you use Morphee, your Space accumulates skills, memory patterns, canvas layouts, and schedules. This accumulated knowledge can be extracted, stripped of personal data, and shared as an installable Integration.
The compilation chain determines the format:
- Level 0: Raw knowledge (memories) → needs LLMRuntime to interpret
- Level 1: Structured skills (YAML) → PythonRuntime executes directly
- Level 2: Canvas components (JS) → JSRuntime renders in browser
- Level 3: Compiled WASM → WasmRuntime runs at near-native speed
All runtimes implement BaseMorphRuntime — the same load(), describe(), execute(), teardown() contract. InterfaceManager treats runtime-backed Integrations identically to built-in Python integrations.
InterfaceManager
├── Built-in Python Integrations (14 hardcoded)
│ ├── LLMIntegration, TasksIntegration, MemoryIntegration...
│
├── BaseMorphRuntime-backed Integrations (dynamic, installed)
│ ├── WasmIntegration("jira.wasm") ← WasmRuntime
│ ├── JsIntegration("dashboard.js") ← JSRuntime
│ └── PythonIntegration("custom.py") ← PythonRuntime [future]
│
└── Space-derived Integrations (compiled from usage)
└── "Math Tutoring" Space → extracted skills + memory + canvas
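The shared load(), describe(), execute(), teardown() contract can be sketched as an ABC. Only the method names come from the text above; the signatures and the trivial EchoRuntime are illustrative assumptions:

```python
import asyncio
from abc import ABC, abstractmethod

class BaseMorphRuntime(ABC):
    """Shared runtime contract (method names per the text; signatures assumed)."""
    @abstractmethod
    async def load(self, artifact_path: str) -> None: ...
    @abstractmethod
    def describe(self) -> dict: ...
    @abstractmethod
    async def execute(self, action: str, params: dict) -> dict: ...
    @abstractmethod
    async def teardown(self) -> None: ...

class EchoRuntime(BaseMorphRuntime):
    """Trivial concrete runtime, just to show the contract is satisfiable."""
    async def load(self, artifact_path):
        self.path = artifact_path
    def describe(self):
        return {"actions": ["echo"]}
    async def execute(self, action, params):
        return {"action": action, **params}
    async def teardown(self):
        pass

rt = EchoRuntime()
asyncio.run(rt.load("demo.wasm"))
result = asyncio.run(rt.execute("echo", {"msg": "hi"}))
```

Because the contract is uniform, InterfaceManager can route an action to a WASM module, a JS bundle, or a built-in Python class without knowing which it is.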
Architecture
┌──────────────────────────────────────────────────────────────┐
│ Integration System │
│ │
│ ┌───────────────┐ ┌─────────────────────┐ │
│ │ Integrations │───────▶│ Interface Manager │ │
│ │ │ │ - Registration │ │
│ │ Built-in: │ │ - Discovery │ │
│ │ LLM, Memory, │ │ - Routing │ │
│ │ Frontend, │ │ - Config mgmt │ │
│ │ Gmail, etc. │ └──────────┬──────────┘ │
│ │ │ │ │
│ │ Runtime- │ ┌──────────▼──────────┐ │
│ │ backed: │ │ Action Executor │ │
│ │ .wasm, .js, │ │ - Async exec │ │
│ │ .py (future) │ │ - Timeout handling │ │
│ └───────────────┘ │ - Event publishing │ │
│ └──────────┬──────────┘ │
│ ┌───────────────┐ │ │
│ │ BaseMorph- │ ┌──────────▼──────────┐ │
│ │ Runtime │ │ Event Bus (Redis) │ │
│ │ WasmRuntime │ └─────────────────────┘ │
│ │ JSRuntime │ │
│ │ PythonRuntime │ │
│ │ LLMRuntime │ │
│ └───────────────┘ │
└──────────────────────────────────────────────────────────────┘
Tool Bridge
The Tool Bridge (backend/chat/tools.py) converts registered Interface actions into Anthropic tool definitions for the LLM. Tool names use double-underscore format: interface__action (e.g., echo__echo, webhook__send).
- actions_to_anthropic_tools(manager) — iterates all registered Interfaces, builds tool definitions
- parse_tool_name(name) — splits "echo__echo" → ("echo", "echo") for routing
- Excludes the LLM Interface (can't call itself); always excludes ai_access: blocked actions
- Includes ai_access: propose actions — these trigger the approval workflow at runtime
- get_action_ai_access(manager, tool_name) — looks up the AIAccess level for a tool name
The Agent Orchestrator (backend/chat/orchestrator.py) uses the Tool Bridge to give the LLM access to all registered tools, then executes tool calls via the Interface Manager in a loop until the LLM produces a final response.
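A sketch of the naming convention (the real helpers live in backend/chat/tools.py; the bodies below are illustrative, and the tool-definition shape follows the Anthropic Messages API):

```python
def make_tool_name(interface: str, action: str) -> str:
    """Join with double underscore: ("echo", "echo") -> "echo__echo"."""
    return f"{interface}__{action}"

def parse_tool_name(name: str) -> tuple[str, str]:
    """Split "echo__echo" -> ("echo", "echo") for routing back to the Interface Manager."""
    interface, _, action = name.partition("__")
    return interface, action

def action_to_anthropic_tool(interface: str, action: str,
                             description: str, properties: dict) -> dict:
    """Anthropic tool definitions carry a name, a description, and a JSON Schema input."""
    return {
        "name": make_tool_name(interface, action),
        "description": description,
        "input_schema": {"type": "object", "properties": properties},
    }
```

The double underscore works as a separator because single underscores are common inside both interface and action names (mark_read, list_events), so a single-underscore split would be ambiguous.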
Core Integrations
LLM Integration
The AI's brain. The Agent Orchestrator calls this to think.
class LLMIntegration(BaseInterface):
    name = "llm"
    description = "Large Language Model for reasoning and generation"
    config_schema = {
        "model": {"type": "string", "required": False, "security": "public"},
        "api_key": {"type": "string", "required": True, "security": "private"},
        "max_tokens": {"type": "integer", "required": False, "security": "public", "default": 4096}
    }
    # Implemented actions:
    #   chat     - Send messages, receive streaming response (with tool calling)
    #   complete - Simple text completion
    # Planned actions (Phase 2+):
    #   embed     - Generate vector embeddings for text
    #   summarize - Summarize text into key points
Start with Anthropic SDK directly. The Integration abstraction means adding other providers later is just a new class implementing the same contract.
Local LLM option (V1.5): The LLM Integration can also target the Tauri Rust backend, where candle runs quantized GGUF models locally. Same contract, different runtime — "Cloud Claude" vs "Local Llama" are both LLM Interfaces.
Memory Integration (Built — Phase 2)
The AI's knowledge. Called through tool use like any other Integration.
class MemoryIntegration(BaseInterface):
    name = "memory"
    description = "Persistent memory — search, store, recall, and forget information"
    actions:
        search(query, scope?, memory_type?, limit?) → ranked memories (ai_access: EXECUTE, side_effect: READ)
        store(content, memory_type, scope?) → stored (ai_access: EXECUTE, side_effect: WRITE)
        recall(topic, scope?) → semantic results (ai_access: EXECUTE, side_effect: READ)
        forget(memory_id, reason?) → removed (ai_access: PROPOSE, side_effect: DELETE)
Storage layers (current Python implementation):
- pgvector: Vector embeddings in PostgreSQL for semantic search (cosine similarity, HNSW index)
- Git-backed Markdown: Per-group git repos with Markdown files + YAML frontmatter (async subprocess git)
- PostgreSQL: memory_vectors table (content, embedding, scope, type, group_id, space_id, user_id)
Embedding abstraction: Configurable providers — OpenAIEmbeddingProvider (text-embedding-3-small, 1536 dims) or FastEmbedProvider (BAAI/bge-small-en-v1.5, 384 dims, local). Consistency validation at startup.
RAG Pipeline (memory/rag.py): Before each LLM call, searches memory across scopes (group + space + user), deduplicates against conversation history, injects relevant context into system prompt (~2000 token budget).
Auto-summarization (memory/summarizer.py): After conversations reach 10+ messages, the LLM extracts facts/preferences/events as structured memories.
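The budget-and-dedup step can be sketched as follows, assuming ranked memory strings and a crude 4-characters-per-token estimate (both are assumptions; the real pipeline lives in memory/rag.py):

```python
def build_rag_context(memories, history_texts, token_budget=2000):
    """Greedily pack ranked memories into the prompt budget,
    skipping anything already present in the conversation history."""
    seen = set(history_texts)
    picked, used = [], 0
    for mem in memories:                 # memories arrive ranked by relevance
        if mem in seen:
            continue                     # deduplicate against history (and earlier picks)
        cost = max(1, len(mem) // 4)     # crude ~4 chars/token estimate (assumption)
        if used + cost > token_budget:
            break
        picked.append(mem)
        used += cost
        seen.add(mem)
    return "\n".join(picked)

ctx = build_rag_context(
    memories=["Mom prefers morning appointments",
              "Mom prefers morning appointments",
              "Dentist is Dr. Lee"],
    history_texts=["Dentist is Dr. Lee"],
)
```

Here the duplicate memory and the fact already visible in the conversation are both dropped, so the injected context contains only the one new, relevant item.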
Future — Local Memory (Tauri Rust): The Memory Integration can also target the Tauri Rust backend, where lancedb (native Rust) provides embedded vector storage and git2 (libgit2 bindings) manages local Git operations. This enables fully offline memory access.
Frontend Integration (V1.2 — Advanced)
The AI's face. Renders dynamic, interactive UI components in the chat. Full bidirectional component protocol: the AI composes UI from atomic building blocks, users interact, events flow back to the AI.
class FrontendIntegration(BaseInterface):
    name = "frontend"
    description = "Render interactive UI components in the chat"
    # Semantic shortcuts (7):
    #   show_card     - Info card with variant colors, action buttons, dismiss
    #   show_list     - Interactive list with select, check mode, status badges
    #   show_form     - Dynamic form with validation (text, number, email, textarea, select, checkbox, date)
    #   show_choices  - Pill-shaped chip selection, multi-select with confirm
    #   show_actions  - Button group with variant mapping, row/column layout
    #   show_progress - Progress bar with color changes by percentage
    #   show_table    - Data table with column alignment, row click
    # Generic + lifecycle (3):
    #   render  - Full ComponentSpec (nested compositions)
    #   update  - Patch props of existing component by ID
    #   dismiss - Remove component by ID
ComponentSpec protocol: Every component is described by a structured JSON spec with id, type, props, children (recursive), events, and layout. Actions return _spec in ActionResult, which the frontend's ToolCallCard detects and renders via the recursive ComponentRenderer.
Three-tier event system: Component interactions are classified by tier:
- LOCAL — frontend-only (dismiss, collapse), no network call
- AI_TURN — POST /api/chat/component-event → SSE stream with AI response
Self-registering renderers: Each renderer file calls registerComponent('type', Component) at import time. Adding a new component = 1 file.
10 Tier 1 renderers: card, list, form, choices, actions, progress, table, confirm, grid (recursive), markdown.
Backward compatible: _spec (new protocol) alongside _component (legacy).
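To make the recursive spec concrete, here is a hypothetical ComponentSpec (field names from the protocol above; the concrete values are invented) together with the kind of depth-first walk a recursive ComponentRenderer performs:

```python
# A card composing an action-button group — values are illustrative only
spec = {
    "id": "confirm-1",
    "type": "card",
    "props": {"title": "Dentist appointment", "variant": "info"},
    "children": [
        {
            "id": "confirm-1-actions",
            "type": "actions",
            "props": {"buttons": [{"label": "Confirm"}, {"label": "Cancel"}]},
            "children": [],
            "events": ["click"],
        },
    ],
    "events": ["dismiss"],
}

def walk(node):
    """Depth-first traversal over the recursive children field."""
    yield node["type"]
    for child in node.get("children", []):
        yield from walk(child)
```

Because children is just a list of specs, arbitrarily nested compositions (grid of cards of tables) render with the same single recursive function.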
Built-in Integrations
Echo Integration (Testing)
Location: backend/interfaces/integrations/echo.py
- echo — Echo back a message
- delay_echo — Echo with delay (timeout testing)
Webhook Integration (HTTP)
Location: backend/interfaces/integrations/webhook.py
- receive — Process incoming webhooks
- send — Send HTTP request (ai_access=propose, requires approval)
Cron Integration (Scheduler)
Location: backend/interfaces/integrations/cron.py
- schedule — Create a schedule (once, interval, or cron expression)
- list — List schedules for the current context
- get — Get schedule details by ID
- cancel — Cancel a schedule (ai_access=propose, requires approval)
Notifications Integration
Location: backend/interfaces/integrations/notifications.py
- send — Send a notification to a user
- list — List notifications (with unread filter)
- mark_read — Mark notifications as read
Google Calendar Integration
Location: backend/interfaces/integrations/google_calendar.py
- list_events — List upcoming calendar events
- create_event — Create a new calendar event
- update_event — Modify an existing event (ai_access=propose)
- delete_event — Remove a calendar event (ai_access=propose)
- check_availability — Check free/busy status for a time range
Gmail Integration
Location: backend/interfaces/integrations/gmail.py
- list_emails — Search/list emails (query, label, unread filters)
- read_email — Read full email content by message ID
- send_email — Send an email (ai_access=propose, requires approval)
- draft_email — Create a draft without sending
- reply_email — Reply to an email (ai_access=propose, requires approval)
Filesystem Integration
Location: backend/interfaces/integrations/filesystem.py + frontend/src-tauri/src/file_store.rs
- list_files — Browse files and folders
- read_file — Read file content (max 1MB)
- write_file — Save content to a file (ai_access=propose, .md/.txt/.json/.csv/.yaml only, max 512KB)
- delete_file — Remove a file (ai_access=propose)
- search_files — Search by filename and content
Dual implementation: Python (server-side) and Tauri Rust (desktop). Both enforce per-group sandboxing with path traversal prevention.
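Path traversal prevention can be sketched like this (a minimal illustration, not the shipped Python or Rust implementation):

```python
from pathlib import Path

def resolve_sandboxed(group_root: str, user_path: str) -> Path:
    """Resolve a user-supplied path inside the per-group sandbox,
    rejecting anything that escapes it (e.g. via "..")."""
    root = Path(group_root).resolve()
    candidate = (root / user_path).resolve()
    # The resolved path must be the root itself or live under it
    if candidate != root and root not in candidate.parents:
        raise PermissionError(f"path escapes sandbox: {user_path}")
    return candidate

safe = resolve_sandboxed("/tmp", "notes.md")
```

The key detail is resolving both paths before comparing: a naive string prefix check would accept "/tmp/../etc/passwd" because it starts with "/tmp".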
Memory Integration (Extended — V1.0 OpenMorph)
Location: backend/interfaces/integrations/memory.py
Phase 2 actions: search, store, recall, forget
V1.0 additions (git-native memory):
- create_branch(name, from_commit?) → branch created
- switch_branch(name) → switched to branch {name}
- search_history(query, time_range?, branch?) → commits with matching embeddings
- merge_branch(source, target, strategy?) → merge result (conflicts?, merged_count)
Branch-aware RAG:
The RAG pipeline (memory/rag.py) can limit context to a specific branch:
# System prompt injection
context = await rag_pipeline.search(
    query=user_message,
    scopes=["group", "space", "user"],
    branch="vacation-planning"  # NEW: limit to branch
)
Commit search:
Commits are embedded in LanceDB during MemorySyncScheduler background sync. Search queries match against:
- Commit message
- Changed memory content
- File paths
Use cases:
- "Show me when I last talked about vacation" → semantic search over commits
- "Create an experimental branch for meal planning" → new timeline without affecting main
- "Merge vacation ideas into main" → bring experimental memories into production
Settings Integration (V0.9 — SHIPPED)
Location: backend/interfaces/integrations/settings.py
Expose all app settings through conversational interface:
class SettingsIntegration(BaseInterface):
    name = "settings"
    description = "Manage user, group, and system settings conversationally"
    actions:
        get_setting(category, key?) → value(s) (ai_access: EXECUTE, side_effect: READ)
        update_setting(category, key, value) → updated (ai_access: PROPOSE, side_effect: WRITE)
        list_categories() → categories (ai_access: EXECUTE, side_effect: READ)
        get_profile() → user profile (ai_access: EXECUTE, side_effect: READ)
        update_profile(name?, avatar_url?) → updated (ai_access: PROPOSE, side_effect: WRITE)
        get_group_settings() → group config (ai_access: EXECUTE, side_effect: READ)
        update_group_settings(name?, timezone?) → updated (ai_access: PROPOSE, side_effect: WRITE, requires parent role)
        get_notification_preferences() → settings (ai_access: EXECUTE, side_effect: READ)
        update_notification_preferences(...) → updated (ai_access: PROPOSE, side_effect: WRITE)
        get_interface_configs() → list of integrations (ai_access: EXECUTE, side_effect: READ)
        configure_interface(name, config) → updated (ai_access: PROPOSE, side_effect: WRITE)
Setting categories:
| Category | Keys | Default | Security |
|---|---|---|---|
| profile | name, email, avatar_url, language | user-specific | public |
| group | name, timezone, default_space_id | group-shared | public |
| notifications | types, enabled, quiet_hours, sound | user-specific | public |
| privacy | analytics_enabled, crash_reports_enabled | user-specific | public |
| appearance | theme, text_size, animations_enabled | user-specific | public |
| integrations | interface configs (LLM, Calendar, etc.) | group-shared | mixed (secrets in vault) |
Permission model:
- update_profile — any user
- update_group_settings — parent role only
- configure_interface — parent role only (delegates to InterfaceConfigService)
Examples:
User: "Change my name to Jane"
AI: → settings__update_profile(name="Jane")
User: "Set our group timezone to Paris time"
AI: → settings__update_group_settings(timezone="Europe/Paris")
→ "Updated! Your group is now set to Europe/Paris."
User: "Turn off notifications after 10pm"
AI: → settings__update_notification_preferences(quiet_hours={start: "22:00", end: "07:00"})
Extensions Integration (V1.2 — WASM Extensions)
Location: backend/interfaces/integrations/extensions.py
Manage third-party JavaScript libraries (Space plugins) with ACL-controlled permissions:
class ExtensionsIntegration(BaseInterface):
    name = "extensions"
    description = "Install and manage third-party extensions in Spaces"
    actions:
        install_extension(extension_id, space_id) → installed (ai_access: PROPOSE, side_effect: WRITE, requires parent role)
        uninstall_extension(extension_id, space_id) → removed (ai_access: PROPOSE, side_effect: DELETE, requires parent role)
        list_installed(space_id) → extensions[] (ai_access: EXECUTE, side_effect: READ)
        grant_permission(extension_id, permission) → granted (ai_access: PROPOSE, side_effect: WRITE, requires parent role)
        revoke_permission(extension_id, permission) → revoked (ai_access: PROPOSE, side_effect: DELETE, requires parent role)
        execute_extension_action(extension_id, action, params) → result (ai_access: dynamic based on permission, side_effect: dynamic)
Permission model:
Extensions declare required permissions that map to Integration actions:
| Extension Permission | Integration | Action | Description |
|---|---|---|---|
frontend.render | FrontendIntegration | show_* | Render custom components |
memory.search | MemoryIntegration | search | Search group memory |
memory.store | MemoryIntegration | store | Store memories |
tasks.create | TasksIntegration | create_task | Create tasks |
tasks.list | TasksIntegration | list_tasks | List tasks |
llm.chat | LLMIntegration | chat | Call LLM (token limit enforced) |
network.fetch | NetworkIntegration | fetch | HTTP requests (whitelist enforced) |
ACL enforcement:
async def execute_extension_action(self, ctx: ExecutionContext, extension_id: str, action: str, params: dict):
    # Check if extension has permission
    perms = await self.get_extension_permissions(ctx.space_id, extension_id)
    required = f"{action.split('__')[0]}.{action.split('__')[1]}"
    if required not in perms:
        return ActionResult(success=False, error=f"Extension missing permission: {required}")
    # Check user's role allows this action
    user_role = await self.get_user_role(ctx.user_id, ctx.space_id)
    if not self.can_execute_action(user_role, action):
        return ActionResult(success=False, error="User lacks permission")
    # Execute via InterfaceManager
    return await self.interface_manager.execute_action(action, params, ctx)
Database tables:
extensions (id, name, version, manifest_url, verified)
space_extensions (space_id, extension_id, installed_by, enabled, config)
extension_permissions (space_extension_id, permission, granted_by)
extension_action_logs (extension_id, user_id, action, params, success, timestamp)
Frontend integration:
- Extension loader (frontend/src/lib/extensions/loader.ts) fetches manifest, verifies SRI hash, loads in sandboxed iframe/Worker
- Permission UI prompts: "Math Tutor wants to: Search memories, Render graphs — Approve?"
- Extension marketplace page for browsing/installing extensions
Use cases:
Teacher: "Install the Math Tutor extension in Homework space"
AI: → extensions__install_extension(extension_id="math-tutor", space_id="...")
→ "Math Tutor installed! Grant permissions to: Search memories, Render custom graphs?"
→ User approves → AI calls grant_permission for each
Student: "Plot the function y = x^2"
AI: → extensions__execute_extension_action(
extension_id="math-tutor",
action="plot_function",
params={expression: "x^2", range: {x: [-10, 10]}}
)
→ Extension renders interactive graph component
Security:
- All extension code sandboxed (iframe with CSP or Web Worker)
- Permissions checked at runtime (can't access Integration actions without grant)
- Full audit trail in extension_action_logs (who, what, when, success/failure)
- Verified extensions badge for Morphee-reviewed extensions
MorpheeSelf Integration (Planned — V2.5+)
Revolutionary self-awareness: Morphee can read, understand, and improve her own codebase.
The Memory Integration with group_id="morphee-self" points to Morphee's own git repository, enabling:
- Code search with context
- Implementation explanations with source citations
- Community contribution reviews
- Self-improvement proposals
Actions:
| Action | Description | AI Access | Side Effects |
|---|---|---|---|
search_code | Search through Morphee's source code | execute | read |
explain_implementation | Explain how a feature works with code snippets | execute | read |
get_architecture_diagram | Get architecture docs with code references | execute | read |
review_community_branch | Review a PR with recommendations | propose | read |
suggest_improvement | Propose code changes to improve Morphee | propose | write |
Parameters:
# search_code
query: str # Search term (supports regex)
file_types: list[str] | None # ["py", "ts", "rs", "md"]
max_results: int = 10 # Limit results
# explain_implementation
feature: str # Feature name or component
detail_level: str = "medium" # "brief", "medium", "detailed"
# get_architecture_diagram
component: str | None # "backend", "frontend", "rust", "full"
# review_community_branch
branch_name: str # Branch to review
compare_to: str = "main" # Base branch
# suggest_improvement
component: str # "backend.auth", "frontend.chat", etc.
issue: str # Problem description
proposed_solution: str # How to solve it
Example usage:
# User asks: "How do you handle authentication?"
result = await morphee_self.search_code(
    ctx=ctx,
    query="async def validate_token",
    file_types=["py"],
    max_results=5
)
# Returns:
# [
#   {
#     "file": "backend/auth/client.py",
#     "line": 42,
#     "match": "async def validate_token(token: str) -> dict:",
#     "context_before": ["", "class SupabaseAuthClient:"],
#     "context_after": [" # Validate JWT token", " try:"],
#     "url": "file:///morphee-beta/backend/auth/client.py#L42"
#   }
# ]
# Morphee reviews community PR
review = await morphee_self.review_community_branch(
    ctx=ctx,
    branch_name="community/slack-integration",
    compare_to="main"
)
# Returns:
# {
#   "branch": "community/slack-integration",
#   "author": "sarah-dev",
#   "commit_count": 12,
#   "files_changed": 8,
#   "analysis": {
#     "summary": "Adds Slack integration following BaseInterface pattern",
#     "architecture_impact": "Low - follows existing OAuth2 pattern",
#     "breaking_changes": [],
#     "security_concerns": []
#   },
#   "tests": {
#     "coverage_delta": "+3.2%",
#     "new_tests": 15,
#     "passing": true
#   },
#   "documentation": {
#     "updated": ["docs/interfaces.md", "docs/api.md"],
#     "missing": []
#   },
#   "recommendation": "approve"
# }
Governance:
- Benevolent Dictator: Sebastien Mathieu has sole merge authority to main branch
- ACL Roles: viewer, contributor, reviewer, maintainer, dictator
- Read-only by default: search_code, explain_implementation = EXECUTE
- Writes require approval: suggest_improvement = PROPOSE
Configuration:
config = {
    "enabled": True,                   # Feature flag
    "codebase_path": "/morphee-beta",  # Path to source
    "allowed_extensions": ["py", "ts", "tsx", "rs", "md", "sql"],
    "excluded_paths": ["node_modules", "__pycache__", "target", ".git"],
    "max_search_results": 100,
    "self_improvement_enabled": True   # Allow suggest_improvement
}
Database:
morphee_self_config (enabled, codebase_path, allowed_file_types)
code_review_history (branch, pr_number, morphee_review, morphee_recommendation, human_decision)
self_improvement_proposals (component, issue, solution, branch, status)
self_awareness_audit_log (action, user_id, details, approved, timestamp)
Use cases:
Developer: "How do you store credentials securely?"
AI: → morphee_self__search_code(query="VaultProvider", file_types=["py"])
→ "I use a VaultProvider abstraction to store secrets:
- backend/interfaces/vault.py:15 — VaultProvider ABC
- backend/interfaces/vault.py:42 — EnvVaultProvider (reads from env)
- frontend/src-tauri/src/vault.rs:28 — KeychainVaultProvider (OS keychain)
Would you like me to explain the implementation in detail?"
Community member creates PR #43 (Slack integration)
AI: → morphee_self__review_community_branch(branch="community/slack-integration")
→ "✅ Review complete:
- Architecture: Clean BaseInterface implementation
- Tests: 15 new tests, +3.2% coverage
- Docs: Updated interfaces.md and api.md
- Security: OAuth tokens properly vaulted
Recommendation: APPROVE
@sebastien — ready for your review!"
Morphee identifies optimization opportunity:
AI: → morphee_self__suggest_improvement(
component="backend.scheduler",
issue="Polling every 60s is inefficient",
solution="Use priority queue + sleep-until-next"
)
→ "I've drafted an improvement and created branch:
morphee-suggests/scheduler-priority-queue
Changes:
- backend/scheduler/runner.py: Add heapq priority queue
- backend/tests/test_scheduler.py: Add priority queue tests
Benefits: Lower CPU, more precise execution
Awaiting your approval, Sebastien!"
The Meta-Recursive Loop:
Morphee's Memory Integration reads her own git repository → understands her architecture → can explain her implementation with source citations → participates in her own development.
This is not just open-source development—it's collaborative AI self-improvement with human oversight.
BaseInterface Contract
class BaseInterface:
    name: str
    description: str
    config_schema: dict = {}  # What config this Integration needs

    def __init__(self, config: dict = None, vault: VaultProvider = None):
        self.config = config or {}
        self.vault = vault

    # ── Actions (what the Integration can DO) ──
    def get_actions(self) -> List[ActionDefinition]:
        """Declare available actions with parameters, ai_access, side_effect, etc."""
        return []

    async def execute(self, action_name: str, parameters: dict) -> ActionResult:
        """Execute an action by name."""
        raise NotImplementedError

    # ── Events (what the Integration can EMIT) ──
    def get_events(self) -> List[EventDefinition]:
        """Declare events this Integration emits. Override in subclasses."""
        return []

    # ── Configuration ──
    async def resolve_config(self) -> dict:
        """Resolve vault:// references to actual secret values. Called at execution time, not init."""
        resolved = {}
        for key, value in self.config.items():
            schema = self.config_schema.get(key, {})
            if schema.get("security") in ("private", "secret") and isinstance(value, str) and value.startswith("vault://"):
                resolved[key] = await self.vault.get(value.removeprefix("vault://"))
            else:
                resolved[key] = value
        return resolved
The config dict holds everything needed to make the Integration operational. Public values (model names, temperatures) are stored directly. Secret values (API keys, OAuth tokens, encryption keys) are stored as vault:// references that resolve through the active VaultProvider at execution time. This is what turns an Integration into a usable Interface.
Current state: Interfaces are registered in-memory at startup (main.py). Per-Group/Space configuration is implemented via the interface_configs database table and InterfaceConfigService — stored configs are loaded at startup and applied to in-memory interfaces.
-- Per-group/space interface configuration (007_interface_configs.sql)
interface_configs (id, group_id, space_id, integration_name, config JSONB, enabled, created_at, updated_at)
-- Unique: (group_id, integration_name) WHERE space_id IS NULL (group-wide)
-- Unique: (group_id, space_id, integration_name) WHERE space_id IS NOT NULL (space-specific)
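The two uniqueness constraints imply a simple precedence rule: for a given integration there is at most one space-specific row and one group-wide row, and the space-specific one wins. A sketch of that lookup, with plain dicts standing in for interface_configs rows:

```python
def effective_config(rows, group_id, space_id, integration):
    """Space-specific config overrides group-wide config,
    mirroring the two partial unique constraints above."""
    group_wide = None
    for row in rows:
        if (row["group_id"] != group_id
                or row["integration_name"] != integration
                or not row["enabled"]):
            continue
        if row["space_id"] == space_id:
            return row["config"]        # space-specific: highest precedence
        if row["space_id"] is None:
            group_wide = row["config"]  # group-wide fallback
    return group_wide

rows = [
    {"group_id": "g1", "space_id": None, "integration_name": "llm",
     "enabled": True, "config": {"model": "haiku"}},
    {"group_id": "g1", "space_id": "s1", "integration_name": "llm",
     "enabled": True, "config": {"model": "sonnet"}},
]
assert effective_config(rows, "g1", "s1", "llm") == {"model": "sonnet"}
```

A Space with no row of its own (s2, say) falls through to the group-wide config, which is exactly the inheritance behavior described for sub-Spaces.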
API endpoints (all require group auth):
- GET /api/interfaces/configs — list all configs for user's group
- GET /api/interfaces/{name}/config — get config (secrets redacted to "***")
- PUT /api/interfaces/{name}/config — create/update config (upsert, secrets stored in vault)
- DELETE /api/interfaces/{name}/config — remove config + vault entries
Frontend: The Settings page shows an InterfaceConfigCard for each integration with a non-empty config_schema. Secret fields use password inputs; public fields use text inputs.
Creating a New Integration
Step 1: Define the Integration
# backend/interfaces/integrations/google_calendar.py
class GoogleCalendarIntegration(BaseInterface):
    name = "google_calendar"
    description = "Google Calendar — manage events and reminders"
    config_schema = {
        "api_key": {"type": "string", "required": True, "security": "private"},
        "calendar_id": {"type": "string", "required": True, "security": "public"}
    }

    def get_actions(self):
        return [
            ActionDefinition(
                name="list_events",
                description="List upcoming calendar events",
                parameters=[...],
                ai_access=AIAccess.EXECUTE,
                side_effect=SideEffect.READ,
                idempotent=True,
            ),
            ActionDefinition(
                name="create_event",
                description="Create a new calendar event",
                parameters=[...],
                ai_access=AIAccess.PROPOSE,  # AI must get user confirmation
                side_effect=SideEffect.WRITE,
            )
        ]

    async def execute(self, action_name, parameters):
        config = await self.resolve_config()  # vault:// references resolved here
        if action_name == "list_events":
            return await self._list_events(config, parameters)
        elif action_name == "create_event":
            return await self._create_event(config, parameters)
Step 2: Register at Startup
# Registration happens at startup in main.py (async)
await interface_manager.register_interface(GoogleCalendarIntegration())
Current state: All interfaces are registered globally in-memory at startup. Per-Group/Space configuration is implemented — InterfaceConfigService persists configs to interface_configs table, with secrets stored via vault:// references. Stored configs are loaded at startup and applied to registered interfaces.
Named per-Space Interface instances (create_interface) are not yet implemented; the intended flow:
# Store the secret in vault, store a reference in the DB
await vault.set("google_calendar/family/api_key", "actual-api-key-value")

# Create a configured instance (Phase 2 — not yet implemented)
interface_manager.create_interface(
    integration="google_calendar",
    group_id="group-123",
    space_id="space-456",
    name="Family Calendar",
    config={
        "api_key": "vault://google_calendar/family/api_key",  # reference, not the value
        "calendar_id": "family@group.calendar.google.com"     # public, stored as-is
    }
)
Step 3: AI Uses It
User: "Add a dentist appointment next Tuesday at 2pm"
Agent → LLM Integration: chat(messages, tools=[google_calendar.create_event, ...])
LLM → tool_call: google_calendar.create_event(title="Dentist", start="2026-02-17T14:00")
Agent → Interface Manager: execute("google_calendar", "create_event", params)
→ ai_access=propose → Orchestrator pauses → User confirms → Event created
Agent → LLM Integration: chat(messages + tool_result)
LLM → "Done! Dentist appointment added for next Tuesday at 2pm."
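The loop described above can be sketched as follows. This is a synchronous toy with injected callables (the real orchestrator in backend/chat/orchestrator.py is async, streams, and handles the propose/approval pause); the scripted LLM below is purely illustrative:

```python
def agent_loop(llm_step, execute_action, messages, max_turns=8):
    """Call the LLM, execute any tool calls via the Interface Manager,
    feed results back, and stop at the first turn with no tool calls."""
    for _ in range(max_turns):
        reply = llm_step(messages)          # {"text": ..., "tool_calls": [...]}
        messages.append({"role": "assistant", "content": reply})
        if not reply["tool_calls"]:
            return reply["text"]            # final response: loop ends
        for call in reply["tool_calls"]:
            result = execute_action(call["name"], call["params"])
            messages.append({"role": "tool", "content": result})
    raise RuntimeError("tool loop did not converge")

# Tiny scripted LLM: one tool call, then a final answer.
script = iter([
    {"text": "", "tool_calls": [{"name": "google_calendar__create_event",
                                 "params": {"title": "Dentist"}}]},
    {"text": "Done! Dentist appointment added.", "tool_calls": []},
])
answer = agent_loop(lambda msgs: next(script),
                    lambda name, params: {"ok": True},
                    messages=[])
```

The max_turns bound matters in practice: without it, a model that keeps emitting tool calls would loop forever.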
API Endpoints
GET /api/interfaces/ # List all registered Integrations (InterfaceDefinition[])
GET /api/interfaces/actions # List all actions across all Integrations
GET /api/interfaces/{interface_name} # Get Integration details (InterfaceDefinition)
POST /api/interfaces/execute # Execute action (ActionExecution → ActionResult)
The execute endpoint takes a JSON body with interface_name, action_name, parameters, user_id, and group_id (see api.md for full schema).
Configuration Security & Vault
Security Levels
Every field in config_schema has a security level that determines how it's stored, displayed, and transmitted:
| Level | Storage | Display | Example |
|---|---|---|---|
| public | Plain text in DB | Visible | model, temperature, calendar_id |
| private | Vault reference in DB, actual value in Vault | Masked (last 4 chars) | api_key, webhook_url |
| secret | Vault reference in DB, actual value in Vault | Never displayed | oauth_refresh_token, encryption_key |
Config schema with security levels:
```python
config_schema = {
    "model": {
        "type": "string",
        "required": True,
        "security": "public",
        "description": "Model name",
        "default": "claude-sonnet-4-5-20250929",
    },
    "api_key": {
        "type": "string",
        "required": True,
        "security": "private",
        "description": "API key for the provider",
    },
    "temperature": {
        "type": "float",
        "required": False,
        "security": "public",
        "default": 0.7,
        "min": 0.0,
        "max": 2.0,
    },
}
```
The AI can read public fields (to know which model is configured) but never sees private or secret values. It can ask the user to configure them through the onboarding or settings, but the values go directly to the backend — never through the LLM context.
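The display rules from the table above can be sketched as a small masking helper. This is an illustrative function, not the actual Morphee implementation; the "last 4 chars" masking for private fields follows the table.

```python
# Illustrative sketch: mask config values for display according to their
# security level ("public" shown, "private" masked, "secret" hidden).
def mask_config_for_display(config: dict, config_schema: dict) -> dict:
    masked = {}
    for key, value in config.items():
        level = config_schema.get(key, {}).get("security", "public")
        if level == "public":
            masked[key] = value                     # shown as-is
        elif level == "private":
            masked[key] = "****" + str(value)[-4:]  # last 4 chars visible
        else:  # "secret"
            masked[key] = "<hidden>"                # never displayed
    return masked

schema = {
    "model": {"security": "public"},
    "api_key": {"security": "private"},
    "oauth_refresh_token": {"security": "secret"},
}
display = mask_config_for_display(
    {"model": "claude-sonnet-4-5-20250929",
     "api_key": "sk-abcd1234",
     "oauth_refresh_token": "tok"},
    schema,
)
print(display["api_key"])  # ****1234
```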
VaultProvider Abstraction
All private and secret config values are stored and retrieved through a VaultProvider — an async interface with swappable backends. The Interface's config dict in the database stores a vault reference (e.g., vault://llm/anthropic/api_key), never the actual secret. The VaultProvider resolves the reference at runtime.
```python
from abc import ABC, abstractmethod

class VaultProvider(ABC):
    """Abstract interface for secure credential storage."""

    @abstractmethod
    async def get(self, key: str) -> str | None:
        """Retrieve a secret by key. Returns None if not found."""

    @abstractmethod
    async def set(self, key: str, value: str) -> None:
        """Store a secret. Overwrites if key already exists."""

    @abstractmethod
    async def delete(self, key: str) -> None:
        """Remove a secret."""

    @abstractmethod
    async def exists(self, key: str) -> bool:
        """Check if a secret exists without retrieving it."""
```
Vault Backends
Different platforms and deployment targets use different vault backends. The vault:// references in the Interface config are portable — the same config works everywhere, only the backend differs.
| Backend | Platform | How It Works |
|---|---|---|
| EnvVaultProvider | Server / Docker | Reads from environment variables. Simplest backend. Default for Phase 1 |
| KeychainVaultProvider | Desktop (Tauri) | macOS Keychain, Windows Credential Manager, Linux Secret Service — via Tauri plugin or keyring crate |
| OnePasswordVaultProvider | Desktop (power users) | 1Password CLI (op) or 1Password SDK. Resolves op://vault/item/field URIs |
| EncryptedDbVaultProvider | Hosted / multi-tenant | AES-256 encrypted column in PostgreSQL, encryption key from env or KMS |
| MobileKeystoreVaultProvider | Mobile (Tauri) | iOS Keychain Services, Android Keystore — via Tauri mobile security plugin |
| CloudKmsVaultProvider | Enterprise | AWS Secrets Manager, Azure Key Vault, GCP Secret Manager |
Key design points:
- Vault is NOT an Integration — it's a lower-level system service. Integrations depend on it to retrieve their credentials, so it cannot be an Integration itself (circular dependency).
- VaultProvider is per-device, not per-user — on your Mac you use 1Password, the server uses env vars, a phone uses the OS keystore. The `vault://` references in the Interface config are the same everywhere.
- Secrets are lazy-resolved — the vault reference sits in config, and the actual secret is fetched only when needed at execution time. Secrets never sit in memory longer than necessary.
- Offline-compatible — OS Keychain, Tauri Stronghold, and 1Password desktop all work without internet. This aligns with the offline-first goal.
- Composable — a `ChainVaultProvider` can try multiple backends in order (e.g., 1Password → OS Keychain → env fallback).
How Vault References Work
When an Interface is configured, private and secret fields are stored as vault references:
```python
# What gets stored in the database (interface_configs table)
config = {
    "model": "claude-sonnet-4-5-20250929",       # public → stored as-is
    "temperature": 0.7,                          # public → stored as-is
    "api_key": "vault://llm/anthropic/api_key",  # private → vault reference
}
```
At runtime, when the Interface needs its credentials:
```python
# Resolution flow inside BaseInterface
async def resolve_config(self) -> dict:
    """Resolve vault references in config to actual values."""
    resolved = {}
    for key, value in self.config.items():
        schema = self.config_schema.get(key, {})
        if (
            schema.get("security") in ("private", "secret")
            and isinstance(value, str)
            and value.startswith("vault://")
        ):
            resolved[key] = await self.vault.get(value.removeprefix("vault://"))
        else:
            resolved[key] = value
    return resolved
```
Migration Path
| Phase | Vault Backend | Notes |
|---|---|---|
| 1a–1b | EnvVaultProvider | API keys from environment variables (current behavior, wrapped in abstraction) |
| 2 | + KeychainVaultProvider | Tauri desktop uses OS keychain for local secrets |
| 3 | + OnePasswordVaultProvider | Opt-in for power users, 1Password CLI/SDK integration |
| 4 (mobile) | + MobileKeystoreVaultProvider | iOS Keychain Services, Android Keystore via Tauri mobile |
| 5 (enterprise) | + CloudKmsVaultProvider, EncryptedDbVaultProvider | Cloud KMS, encrypted DB for hosted multi-tenant deployments |
Integration Types
Core Integrations (ship by default)
These are installed automatically when a Group is created. They require minimal or no configuration to start working:
| Integration | Auto-configured? | Notes |
|---|---|---|
| LLM | Yes (with default model) | Needs API key — set during onboarding or by admin |
| Memory | Yes | pgvector + Git-backed Markdown per Group (Tauri: LanceDB + git2 later) |
| Frontend | Yes | WebSocket connection, no config needed |
| Tasks | Yes | Built into the system, no external config |
| Spaces | Yes | Built into the system, no external config |
| Notifications | Yes | Desktop notifications via Tauri |
| Onboarding | Yes | AI-guided new user setup |
| Cron | Yes | Built-in scheduler, no external config |
Core integrations are always registered. The onboarding conversation helps configure the ones that need credentials (primarily LLM API key).
Installable Integrations (added by user/AI)
External services that a Group can connect:
| Integration | Config Required |
|---|---|
| Google Calendar | Google OAuth tokens, calendar ID |
| Gmail | Google OAuth tokens, account |
| Webhook | Endpoint URL |
| Filesystem | Allowed paths (sandboxed per group) |
These can be installed through conversation:
User: "Can you check my Gmail?"
AI: "I don't have Gmail connected yet. Want me to set it up?"
→ Guides user through OAuth flow
→ Creates Gmail Interface scoped to user/group
→ "Done! I can now read and send emails."
AI-Generated Integrations (dynamic, V1.5+)
When the AI encounters a need and no matching Integration exists, it can propose creating one:
User: "Track my water intake"
AI: (no water tracking Integration exists)
→ Creates a simple Skill that uses Memory to track daily water intake
→ Registers it as a lightweight Integration for the Group
→ Persists so it works across conversations
Onboarding Flow
First-time users go through a conversational onboarding — not a form wizard. The AI asks questions and configures Interfaces based on answers.
What the onboarding configures
- Group creation — name, type (family, classroom, team)
- Persona detection — who is the user? (parent, teacher, kid, grandparent, employee...)
- LLM Interface — API key (or use group default)
- Default Spaces — create Spaces based on persona (meal planning, homework, etc.)
- Suggested Integrations — based on persona, offer to connect services
Example: Parent onboarding
AI: "Welcome to Morphee! I'm here to help your family organize life.
Who's setting things up?"
User: "I'm the mom"
AI: "Nice to meet you! Let's get a few things ready.
What's your family name?"
User: "The Martins"
AI: "The Martins — got it! I've created your family group.
Most families find it useful to have spaces for meal planning,
shopping, and kids' activities. Want me to set those up?"
User: "Yes, and also one for homework"
AI: "Done! I've created:
- Meal Planning
- Shopping
- Kids' Activities
- Homework
Plus your personal space is always there.
One last thing — do you use Google Calendar?
I can sync events and send reminders."
Persona-based defaults
| Persona | Default Spaces | Suggested Integrations | Default Workflows |
|---|---|---|---|
| Parent | Meal Planning, Shopping, Kids' Activities | Calendar, Gmail, Notifications | Morning briefing, meal suggestions |
| Grandparent | Personal, Family Updates | Notifications | Simple reminders, family photo sharing |
| Kid | Homework, My Stuff | — | Homework reminders, chore tracking |
| Teacher | My Class, Assignments, Grades | Calendar, Gmail | Assignment reminders, parent updates |
| Manager | Team Space, Sprints | Calendar, JIRA, Slack | Standup summaries, deadline tracking |
| Employee | My Work, Team Space | Calendar, Gmail | Daily schedule, meeting prep |
Action Metadata
Every ActionDefinition carries rich metadata that drives the Tool Bridge, the approval workflow, and the permission system:
| Field | Type | Default | Purpose |
|---|---|---|---|
| ai_access | execute / propose / blocked | execute | Controls how the AI can use this action |
| side_effect | read / write / delete | read | Classifies what the action does — affects permission defaults |
| returns | JSON Schema dict | {} | Describes the output shape — used by the frontend for rendering |
| idempotent | bool | false | Whether the action is safe to retry on failure |
| timeout_seconds | int | 300 | Execution timeout |
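The metadata above could be sketched as a dataclass. The field names and defaults come from the table; the exact class shape and enum names in Morphee may differ.

```python
from dataclasses import dataclass, field
from enum import Enum

class AIAccess(str, Enum):
    EXECUTE = "execute"
    PROPOSE = "propose"
    BLOCKED = "blocked"

class SideEffect(str, Enum):
    READ = "read"
    WRITE = "write"
    DELETE = "delete"

@dataclass
class ActionDefinition:
    """Sketch of the metadata fields from the table above (shape may differ)."""
    name: str
    description: str = ""
    ai_access: AIAccess = AIAccess.EXECUTE
    side_effect: SideEffect = SideEffect.READ
    returns: dict = field(default_factory=dict)
    idempotent: bool = False
    timeout_seconds: int = 300

# A write action that requires human approval:
create_event = ActionDefinition(
    name="create_event",
    description="Create a calendar event",
    ai_access=AIAccess.PROPOSE,
    side_effect=SideEffect.WRITE,
)
print(create_event.timeout_seconds)  # 300
```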
AI Access Levels
| Level | Meaning | Example |
|---|---|---|
| execute | AI calls directly, no human in the loop | echo.echo, memory.search, calendar.list_events |
| propose | AI must pause and get human approval first | webhook.send, calendar.create_event, gmail.send |
| blocked | AI cannot use this action at all (human-only or system-only) | admin.delete_group, system.shutdown |
This is the default set on the Integration (in the ActionDefinition). Per-Interface overrides live in PermissionPolicy (see Permission Model below).
Side Effect Classification
| Level | Meaning | Default AI behavior |
|---|---|---|
| read | Only reads data, no mutations | Safe to auto-approve |
| write | Creates or updates data | May need confirmation depending on context |
| delete | Removes data | Should default to human approval |
Events
Integrations declare what events they can emit via get_events(). This is how the system becomes reactive — an event from one Integration triggers actions in another.
```python
class EventDefinition:
    name: str                  # "email_received"
    description: str           # "An email was received"
    payload_schema: dict = {}  # JSON Schema for event data
```
Example events by Integration
| Integration | Events |
|---|---|
| Gmail | email_received, email_sent, draft_created |
| Calendar | event_upcoming, event_created, event_cancelled |
| Tasks | task_completed, task_overdue, task_assigned |
| Memory | memory_updated, memory_searched |
| Frontend | user_interaction, form_submitted, button_clicked |
Event flow
Integration.emit("email_received", {from: "...", subject: "..."})
↓
EventBus (Redis pub/sub)
├─ → Other Integration handler (e.g., Notifications.on_email)
├─ → AI Orchestrator (wakes up to process)
└─ → Frontend (real-time UI update via WebSocket)
Events are defined on the Integration but delivered through the existing EventBus (tasks/events.py). The get_events() contract is a stub today — real event emission will be wired in a future phase.
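The fan-out pattern in the diagram can be sketched with an in-memory bus. The real system routes through the EventBus (Redis pub/sub, tasks/events.py); this toy class only illustrates how one Integration's event reaches multiple subscribers.

```python
from collections import defaultdict

class InMemoryEventBus:
    """Toy stand-in for the Redis-backed EventBus, for illustration only."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_name: str, handler) -> None:
        self.handlers[event_name].append(handler)

    def emit(self, event_name: str, payload: dict) -> None:
        # Fan out to every subscriber of this event
        for handler in self.handlers[event_name]:
            handler(payload)

bus = InMemoryEventBus()
received = []
# e.g., the Notifications Integration reacting to Gmail's event
bus.subscribe("email_received", lambda p: received.append(p["subject"]))
bus.emit("email_received", {"from": "mom@example.com", "subject": "Dinner?"})
print(received)  # ['Dinner?']
```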
Permission Model
The basic approval workflow is implemented (Phase 1b.2). Actions with ai_access: propose trigger a pause in the orchestrator, emit an approval_request SSE event, and wait for the user to approve or reject via POST /api/chat/approve/{approval_id}. The advanced permission system (per-Interface policies, multi-approver, role-based overrides) is defined as models but not yet wired into execution.
Core Concepts
The Integration (ActionDefinition) sets defaults via ai_access and side_effect. The Interface owner (admin, parent, teacher) can override per-Group/Space needs with PermissionPolicy entries.
Example: A family auto-approves calendar events, but a corporate team requires manager approval for the same action.
ExecutionContext
Passed to every action execution — tells the permission system WHO is doing WHAT and WHERE:
```python
from typing import Literal

class ExecutionContext:
    user_id: str
    group_id: str
    space_id: str | None
    caller: Literal["ai", "human", "system"]  # Key for permission decisions
    conversation_id: str | None
```
PermissionPolicy
Per-action permission rules, configured on an Interface:
```python
from typing import Literal

class PermissionPolicy:
    action: str                                        # Action name or "*" for all
    allowed_roles: list[str] = []                      # Roles that can invoke (empty = all)
    approval_mode: Literal["none", "single", "multi"]  # How many approvals needed
    approvers: list[str] = []                          # User IDs or roles who can approve
    ai_access_override: AIAccess | None = None         # Override the action's default ai_access
    conditions: dict = {}                              # Future: context-based rules
```
InterfacePermissions
Collection of policies for a configured Interface, stored alongside the Interface config:
```python
class InterfacePermissions:
    default_ai_access: AIAccess = AIAccess.EXECUTE
    policies: list[PermissionPolicy] = []
```
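Once these policies are wired into execution, resolving the effective access level might look like the following. The lookup order (exact action match beats the "*" wildcard) is an assumption for illustration, since the doc notes this part is not yet implemented.

```python
# Hypothetical sketch: resolve an action's effective ai_access by checking
# per-Interface PermissionPolicy overrides before falling back to the default.
def effective_ai_access(action_name: str, default_ai_access: str,
                        policies: list[dict]) -> str:
    # Exact action match wins over a wildcard "*" policy (assumed precedence).
    for pattern in (action_name, "*"):
        for policy in policies:
            if policy["action"] == pattern and policy.get("ai_access_override"):
                return policy["ai_access_override"]
    return default_ai_access

# Corporate team tightens calendar writes; reads keep their default.
policies = [{"action": "create_event", "ai_access_override": "propose"}]
print(effective_ai_access("create_event", "execute", policies))  # propose
print(effective_ai_access("list_events", "execute", policies))   # execute
```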
Approval Workflow (Implemented)
AI wants to call webhook.send (ai_access = "propose")
→ Orchestrator checks ai_access via get_action_ai_access()
→ ai_access is PROPOSE → generates approval_id, creates asyncio.Event
→ Orchestrator yields "approval_request" AgentEvent with {approval_id, tool, input}
→ Frontend shows ApprovalCard: "Action requires approval — Approve / Reject"
→ User clicks Approve → POST /api/chat/approve/{approval_id}
→ Orchestrator resumes → action executes → tool_result streams back to LLM
Current implementation (Phase 1b.2):
- Orchestrator (`chat/orchestrator.py`): When `ai_access == PROPOSE`, pauses via `asyncio.Event`, emits `approval_request`, and awaits with a 120s timeout (auto-rejects on timeout).
- Frontend: `ApprovalCard` component renders inline with Approve/Reject buttons. `ToolCallCard` renders tool call status (Running/Done/Failed).
- API (`POST /api/chat/approve/{approval_id}`): Submits approve/reject decisions.
- Storage: Module-level dicts (short-lived, tied to active SSE streams). DB persistence for an audit trail is planned.
- Permissions: Currently checks `ai_access` on the ActionDefinition only. Per-Interface `PermissionPolicy` lookups are defined but not yet wired.
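The pause/resume mechanism described above can be sketched with `asyncio.Event` and `asyncio.wait_for`. Function names and the module-level dict are illustrative, not the actual orchestrator code; only the 120s auto-reject timeout comes from the description.

```python
import asyncio

# Hypothetical sketch of the approval pause: the orchestrator awaits an
# asyncio.Event with a timeout and auto-rejects if nobody answers in time.
pending: dict[str, dict] = {}

async def wait_for_approval(approval_id: str, timeout: float = 120.0) -> bool:
    entry = {"event": asyncio.Event(), "approved": False}
    pending[approval_id] = entry
    try:
        # Orchestrator pauses here until the user decides (or the timeout fires)
        await asyncio.wait_for(entry["event"].wait(), timeout)
        return entry["approved"]
    except asyncio.TimeoutError:
        return False  # auto-reject on timeout
    finally:
        del pending[approval_id]

def submit_decision(approval_id: str, approved: bool) -> None:
    # Called by POST /api/chat/approve/{approval_id}
    entry = pending[approval_id]
    entry["approved"] = approved
    entry["event"].set()

async def demo() -> bool:
    task = asyncio.create_task(wait_for_approval("appr-1", timeout=5))
    await asyncio.sleep(0)           # let the waiter register itself
    submit_decision("appr-1", True)  # user clicks Approve
    return await task

print(asyncio.run(demo()))  # True
```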
Security
- VaultProvider abstraction: All credentials are stored and retrieved through a pluggable vault system — never in plaintext in the database. See Configuration Security & Vault above
- Vault references: The database stores `vault://` references for secret fields; actual values live only in the vault backend (OS Keychain, 1Password, env vars, etc.)
- Secrets never in memory longer than needed: `resolve_config()` fetches secrets at execution time, not at init
- AI access control: `ai_access` (execute/propose/blocked) controls what the AI can do. `PermissionPolicy` allows per-Interface overrides
- Approval workflow: Actions with `ai_access: propose` pause for user confirmation before executing
- Input validation: Parameter types and required fields are validated before execution
- Credential isolation: Each Group's Interface credentials are stored separately, scoped by vault key namespace
- Config security levels: `public` / `private` / `secret` — the AI never sees `private` or `secret` values
- Rate limiting: Per-user call frequency limits
- Scoping: Interfaces are scoped to Group + Space — a Gmail Interface in one Space can't be accessed from another
- Audit trail: All Integration actions are logged with who/what/when
- User consent: External service connections always require explicit user approval
- Offline vault: OS Keychain and Tauri Stronghold work without internet — secrets available offline
Related Documentation
- architecture.md — System architecture
- api.md — API reference
- ROADMAP.md — Development roadmap
- testing.md — Testing guide
Last Updated: February 13, 2026