Tauri IPC Commands Reference
Complete reference for all 58 IPC commands exposed by the Morphee Rust backend. Commands are invoked from the frontend via Tauri's invoke().
All commands that accept a group_id parameter validate it against the authenticated session (H-AUTHZ-006). Commands return Result<T, MorpheeError> where errors are serialized to the frontend as structured error objects.
Summary Table
| # | Command | Category | Return Type | Auth |
|---|---|---|---|---|
| 1 | embed_text | Embeddings | EmbeddingResult | No |
| 2 | embed_batch | Embeddings | Vec<EmbeddingResult> | No |
| 3 | get_embedding_info | Embeddings | EmbeddingInfo | No |
| 4 | download_embedding_model | Embeddings | bool | No |
| 5 | is_model_cached | Embeddings | bool | No |
| 6 | memory_insert | Memory (Vector Store) | String | Yes |
| 7 | memory_search | Memory (Vector Store) | Vec<SearchResult> | Yes |
| 8 | memory_delete | Memory (Vector Store) | bool | Yes |
| 9 | memory_get | Memory (Vector Store) | Option<MemoryRecord> | Yes |
| 10 | memory_count | Memory (Vector Store) | usize | Yes |
| 11 | git_init_repo | Git Storage | String | Yes |
| 12 | git_save_conversation | Git Storage | String | Yes |
| 13 | git_save_memory | Git Storage | String | Yes |
| 14 | git_delete_memory | Git Storage | bool | Yes |
| 15 | git_sync | Git Storage | bool | Yes |
| 16 | git_create_branch | Git Branching | String | Yes |
| 17 | git_switch_branch | Git Branching | String | Yes |
| 18 | git_merge_branch | Git Branching | String | Yes |
| 19 | git_list_branches | Git Branching | Vec<String> | Yes |
| 20 | git_get_current_branch | Git Branching | String | Yes |
| 21 | git_get_commit_history | Git Branching | Vec<CommitInfo> | Yes |
| 22 | vault_get | Vault | Option<String> | Yes |
| 23 | vault_set | Vault | () | Yes |
| 24 | vault_delete | Vault | () | Yes |
| 25 | vault_exists | Vault | bool | Yes |
| 26 | fs_list_files | Filesystem | Vec<FileInfo> | Yes |
| 27 | fs_read_file | Filesystem | FileContent | Yes |
| 28 | fs_write_file | Filesystem | WriteResult | Yes |
| 29 | fs_delete_file | Filesystem | DeleteResult | Yes |
| 30 | fs_search_files | Filesystem | Vec<SearchMatch> | Yes |
| 31 | set_session | Session | () | No |
| 32 | clear_session | Session | () | No |
| 33 | health_check | Health | HealthStatus | No |
| 34 | queue_action | Offline Queue | String | Yes |
| 35 | drain_queue | Offline Queue | Vec<QueuedAction> | Yes |
| 36 | get_queue_status | Offline Queue | QueueStatus | Yes |
| 37 | remove_queued_action | Offline Queue | bool | Yes |
| 38 | clear_queue | Offline Queue | usize | Yes |
| 39 | morph_discover | OpenMorph | Vec<MorphDirInfo> | No |
| 40 | morph_init | OpenMorph | MorphInitResult | No |
| 41 | extension_load | WASM Extensions | String | No |
| 42 | extension_execute | WASM Extensions | ExtensionExecutionResult | No |
| 43 | extension_unload | WASM Extensions | bool | No |
| 44 | extension_list | WASM Extensions | Vec<String> | No |
| 45 | llm_chat_stream | Local LLM | String | Yes |
| 46 | llm_cancel_stream | Local LLM | () | Yes |
| 47 | llm_get_info | Local LLM | Option<LlmModelInfo> | No |
| 48 | llm_load_model | Local LLM | LlmModelInfo | Yes |
| 49 | llm_unload_model | Local LLM | () | Yes |
| 50 | llm_list_models | Local LLM | Vec<LlmModelInfo> | No |
| 51 | llm_download_model | Local LLM | () | Yes |
| 52 | llm_delete_model | Local LLM | () | Yes |
| 53 | vector_route_message | VectorRouter | VectorRouteResult | Yes |
| 54 | tts_speak | Audio: TTS | () | Yes |
| 55 | tts_stop | Audio: TTS | () | Yes |
| 56 | tts_is_speaking | Audio: TTS | bool | No |
| 57 | tts_set_rate | Audio: TTS | () | Yes |
| 58 | whisper_transcribe | Audio: Whisper STT | TranscriptionResult | Yes |
1. Embeddings (5 commands)
Commands for generating text embeddings using ONNX (fastembed on desktop, candle on mobile).
embed_text
Generate an embedding vector for a single text string.
invoke('embed_text', { text: string }): Promise<EmbeddingResult>
| Parameter | Type | Required | Description |
|---|---|---|---|
text | String | Yes | Text to embed (max 8192 characters) |
Returns: EmbeddingResult — { vector: number[], model: string, dimensions: number }
CPU-intensive ONNX inference runs on spawn_blocking to avoid starving the async runtime.
embed_batch
Generate embedding vectors for multiple texts in a single call.
invoke('embed_batch', { texts: string[] }): Promise<EmbeddingResult[]>
| Parameter | Type | Required | Description |
|---|---|---|---|
texts | Vec<String> | Yes | Array of texts to embed (max 100 items, each max 8192 chars) |
Returns: Vec<EmbeddingResult> — one result per input text.
get_embedding_info
Get metadata about the loaded embedding model.
invoke('get_embedding_info'): Promise<EmbeddingInfo>
No parameters.
Returns: EmbeddingInfo — model name, dimensions (384 for AllMiniLML6V2), etc.
download_embedding_model
Download the embedding model for mobile (candle). No-op on desktop (fastembed auto-downloads).
invoke('download_embedding_model'): Promise<boolean>
No parameters.
Returns: bool — true if model is ready, false if not applicable (desktop).
is_model_cached
Check if the embedding model is cached locally.
invoke('is_model_cached'): Promise<boolean>
No parameters.
Returns: bool — always true on desktop (fastembed auto-downloads), checks cache on mobile.
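As a sketch, embed_batch can be combined with a similarity helper to compare two texts. The Invoke type and cosineSimilarity function are illustrative additions so the example runs outside Tauri; in the app you would pass invoke from @tauri-apps/api/core:

```typescript
// Injectable stand-in for Tauri's `invoke` so the sketch is testable outside Tauri.
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface EmbeddingResult {
  vector: number[];
  model: string;
  dimensions: number;
}

// Embed two texts in one IPC round-trip and compare them by cosine similarity.
async function compareTexts(invoke: Invoke, a: string, b: string): Promise<number> {
  const [ea, eb] = (await invoke('embed_batch', { texts: [a, b] })) as EmbeddingResult[];
  return cosineSimilarity(ea.vector, eb.vector);
}

// Plain cosine similarity over two equal-length vectors.
function cosineSimilarity(x: number[], y: number[]): number {
  let dot = 0, nx = 0, ny = 0;
  for (let i = 0; i < x.length; i++) {
    dot += x[i] * y[i];
    nx += x[i] * x[i];
    ny += y[i] * y[i];
  }
  return dot / (Math.sqrt(nx) * Math.sqrt(ny));
}
```

Batching the two texts into one embed_batch call avoids a second IPC round-trip and a second spawn_blocking dispatch.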
2. Memory / Vector Store (5 commands)
Vector memory with group-based isolation: desktop uses LanceDB, mobile uses SQLite.
memory_insert
Insert a memory record. Automatically embeds the content before storing.
invoke('memory_insert', {
content: string,
memory_type: string,
scope: string,
group_id: string,
space_id?: string,
user_id?: string,
source_conversation_id?: string,
metadata?: object
}): Promise<string>
| Parameter | Type | Required | Description |
|---|---|---|---|
content | String | Yes | Text content to store and embed |
memory_type | String | Yes | Type classification (e.g., "fact", "preference", "event", "skill_index") |
scope | String | Yes | Visibility scope |
group_id | String | Yes | Group ID (validated against session) |
space_id | Option<String> | No | Space ID for space-scoped memories |
user_id | Option<String> | No | User ID for user-scoped memories |
source_conversation_id | Option<String> | No | Originating conversation ID |
metadata | Option<Value> | No | Arbitrary JSON metadata |
Returns: String — the generated memory record ID.
memory_search
Semantic search across memory records using cosine similarity.
invoke('memory_search', {
query: string,
group_id: string,
scope?: string,
space_id?: string,
user_id?: string,
memory_type?: string,
limit?: number,
threshold?: number
}): Promise<SearchResult[]>
| Parameter | Type | Required | Description |
|---|---|---|---|
query | String | Yes | Natural language search query (embedded for similarity) |
group_id | String | Yes | Group ID (validated against session) |
scope | Option<String> | No | Filter by scope |
space_id | Option<String> | No | Filter by space |
user_id | Option<String> | No | Filter by user |
memory_type | Option<String> | No | Filter by memory type |
limit | Option<usize> | No | Max results (default: 5) |
threshold | Option<f32> | No | Min similarity score 0.0-1.0 (default: 0.3) |
Returns: Vec<SearchResult> — matching records with similarity scores.
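A minimal insert-then-search sketch. The Invoke stand-in, the SearchResult field names, and the scope value "group" are assumptions for illustration; pass Tauri's real invoke and your deployment's actual scope values:

```typescript
// Injectable stand-in for Tauri's `invoke`; in the app, pass the real one.
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface SearchResult {
  // Illustrative subset; the doc only guarantees matching records plus similarity scores.
  content: string;
  score: number;
}

// Store a fact, then retrieve it semantically. The scope value "group" is a
// hypothetical example, not a value enumerated by the reference above.
async function rememberAndRecall(invoke: Invoke, groupId: string): Promise<SearchResult[]> {
  await invoke('memory_insert', {
    content: 'The deploy key rotates every 90 days.',
    memory_type: 'fact',
    scope: 'group',
    group_id: groupId,
  });
  return (await invoke('memory_search', {
    query: 'how often does the deploy key rotate?',
    group_id: groupId,
    limit: 3,        // default would be 5
    threshold: 0.3,  // default minimum similarity
  })) as SearchResult[];
}
```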
memory_delete
Delete a memory record by ID. Verifies the record belongs to the specified group before deleting.
invoke('memory_delete', { memory_id: string, group_id: string }): Promise<boolean>
| Parameter | Type | Required | Description |
|---|---|---|---|
memory_id | String | Yes | ID of the memory record to delete |
group_id | String | Yes | Group ID (validated; record must belong to this group) |
Returns: bool — true if deleted.
memory_get
Retrieve a single memory record by ID. Enforces group-based isolation.
invoke('memory_get', { memory_id: string, group_id: string }): Promise<MemoryRecord | null>
| Parameter | Type | Required | Description |
|---|---|---|---|
memory_id | String | Yes | ID of the memory record |
group_id | String | Yes | Group ID (validated; record must belong to this group) |
Returns: Option<MemoryRecord> — the record, or null if not found.
memory_count
Count total memory records for a group.
invoke('memory_count', { group_id: string }): Promise<number>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID (validated against session) |
Returns: usize — total record count.
3. Git Storage (5 commands)
Git-backed persistent storage using libgit2. One repo per space. Stores conversations and memories as Markdown with YAML frontmatter.
git_init_repo
Initialize a git repository for a space.
invoke('git_init_repo', { group_id: string, space_id: string }): Promise<string>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
space_id | String | Yes | Space ID |
Returns: String — path to the initialized repository.
git_save_conversation
Save a conversation to the git store as a Markdown file with YAML frontmatter.
invoke('git_save_conversation', {
group_id: string,
space_id: string,
conversation_id: string,
messages: MessageData[],
title: string
}): Promise<string>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
space_id | String | Yes | Space ID |
conversation_id | String | Yes | Conversation UUID |
messages | Vec<MessageData> | Yes | Array of { role, content } messages |
title | String | Yes | Conversation title |
Returns: String — git commit hash.
git_save_memory
Save a memory entry to the git store.
invoke('git_save_memory', {
group_id: string,
space_id: string,
memory_id: string,
content: string,
memory_type: string,
metadata?: object
}): Promise<string>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
space_id | String | Yes | Space ID |
memory_id | String | Yes | Memory record UUID |
content | String | Yes | Memory content text |
memory_type | String | Yes | Type classification |
metadata | Option<Value> | No | Additional JSON metadata |
Returns: String — git commit hash.
git_delete_memory
Delete a memory entry from the git store.
invoke('git_delete_memory', {
group_id: string,
space_id: string,
memory_id: string,
memory_type: string
}): Promise<boolean>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
space_id | String | Yes | Space ID |
memory_id | String | Yes | Memory record UUID to delete |
memory_type | String | Yes | Type of memory (determines file path) |
Returns: bool — true if deleted.
git_sync
Sync a space's git repository with a remote URL (push/pull).
invoke('git_sync', {
group_id: string,
space_id: string,
remote_url: string
}): Promise<boolean>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
space_id | String | Yes | Space ID |
remote_url | String | Yes | Git remote URL to sync with |
Returns: bool — true if sync succeeded.
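A sketch of the save-then-sync flow, assuming an injectable Invoke stand-in for Tauri's invoke so the example is runnable outside the app:

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface MessageData { role: string; content: string; }

// Persist a conversation to the space's git repo, then push/pull against a remote.
// Returns the commit hash produced by git_save_conversation.
async function saveAndSync(
  invoke: Invoke,
  groupId: string,
  spaceId: string,
  conversationId: string,
  messages: MessageData[],
  remoteUrl: string,
): Promise<string> {
  const commit = (await invoke('git_save_conversation', {
    group_id: groupId,
    space_id: spaceId,
    conversation_id: conversationId,
    messages,
    title: 'Untitled conversation',
  })) as string;
  await invoke('git_sync', { group_id: groupId, space_id: spaceId, remote_url: remoteUrl });
  return commit;
}
```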
4. Git Branching (6 commands)
Phase 3i.1: Git branching operations for temporal navigation and parallel space states.
git_create_branch
Create a new branch in a space's repository.
invoke('git_create_branch', {
group_id: string,
space_id: string,
branch_name: string,
from_branch?: string
}): Promise<string>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
space_id | String | Yes | Space ID |
branch_name | String | Yes | Name for the new branch |
from_branch | Option<String> | No | Source branch (defaults to current branch) |
Returns: String — the created branch name.
git_switch_branch
Switch the space's repository to a different branch.
invoke('git_switch_branch', {
group_id: string,
space_id: string,
branch_name: string
}): Promise<string>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
space_id | String | Yes | Space ID |
branch_name | String | Yes | Branch to switch to |
Returns: String — the active branch name after switch.
git_merge_branch
Merge a source branch into a target branch.
invoke('git_merge_branch', {
group_id: string,
space_id: string,
source_branch: string,
target_branch?: string,
message?: string
}): Promise<string>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
space_id | String | Yes | Space ID |
source_branch | String | Yes | Branch to merge from |
target_branch | Option<String> | No | Branch to merge into (defaults to current) |
message | Option<String> | No | Custom merge commit message |
Returns: String — merge commit hash or result message.
git_list_branches
List all branches in a space's repository.
invoke('git_list_branches', {
group_id: string,
space_id: string
}): Promise<string[]>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
space_id | String | Yes | Space ID |
Returns: Vec<String> — list of branch names.
git_get_current_branch
Get the name of the currently checked-out branch.
invoke('git_get_current_branch', {
group_id: string,
space_id: string
}): Promise<string>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
space_id | String | Yes | Space ID |
Returns: String — current branch name (e.g., "main").
git_get_commit_history
Get the commit history for a branch.
invoke('git_get_commit_history', {
group_id: string,
space_id: string,
branch?: string,
limit: number
}): Promise<CommitInfo[]>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
space_id | String | Yes | Space ID |
branch | Option<String> | No | Branch to read history from (defaults to current) |
limit | usize | Yes | Max number of commits to return |
Returns: Vec<CommitInfo> — commit objects with hash, message, author, timestamp.
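The branching commands compose into a simple branch-and-merge workflow. This sketch assumes an injectable Invoke stand-in; the branch name experiment-1 is an arbitrary example:

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

// Branch off the current state, do work on the branch, then merge it back.
async function experimentOnBranch(invoke: Invoke, groupId: string, spaceId: string): Promise<string> {
  const base = { group_id: groupId, space_id: spaceId };
  const original = (await invoke('git_get_current_branch', base)) as string;
  await invoke('git_create_branch', { ...base, branch_name: 'experiment-1' });
  await invoke('git_switch_branch', { ...base, branch_name: 'experiment-1' });
  // ... make changes on the branch (git_save_memory, git_save_conversation, ...) ...
  await invoke('git_switch_branch', { ...base, branch_name: original });
  return (await invoke('git_merge_branch', {
    ...base,
    source_branch: 'experiment-1',
    message: 'Merge experiment-1',
  })) as string;
}
```

Switching back to the original branch before merging relies on git_merge_branch defaulting target_branch to the current branch.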
5. Vault (4 commands)
Secure credential storage via OS keychain (macOS Keychain, Windows Credential Manager, Linux Secret Service). Android uses SQLite vault.
vault_get
Retrieve a secret from the vault.
invoke('vault_get', { key: string, group_id: string }): Promise<string | null>
| Parameter | Type | Required | Description |
|---|---|---|---|
key | String | Yes | Secret key name |
group_id | String | Yes | Group ID (validated against session) |
Returns: Option<String> — the secret value, or null if not found.
vault_set
Store a secret in the vault.
invoke('vault_set', { key: string, value: string, group_id: string }): Promise<void>
| Parameter | Type | Required | Description |
|---|---|---|---|
key | String | Yes | Secret key name |
value | String | Yes | Secret value to store |
group_id | String | Yes | Group ID (validated against session) |
Returns: () (void)
vault_delete
Delete a secret from the vault.
invoke('vault_delete', { key: string, group_id: string }): Promise<void>
| Parameter | Type | Required | Description |
|---|---|---|---|
key | String | Yes | Secret key to delete |
group_id | String | Yes | Group ID (validated against session) |
Returns: () (void)
vault_exists
Check if a key exists in the vault.
invoke('vault_exists', { key: string, group_id: string }): Promise<boolean>
| Parameter | Type | Required | Description |
|---|---|---|---|
key | String | Yes | Secret key to check |
group_id | String | Yes | Group ID (validated against session) |
Returns: bool — true if the key exists.
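A common get-or-create pattern over vault_get and vault_set, sketched with an injectable Invoke stand-in for Tauri's invoke:

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

// Read a secret, creating it from `generate` on first use.
async function getOrCreateSecret(
  invoke: Invoke,
  groupId: string,
  key: string,
  generate: () => string,
): Promise<string> {
  const existing = (await invoke('vault_get', { key, group_id: groupId })) as string | null;
  if (existing !== null) return existing;
  const value = generate();
  await invoke('vault_set', { key, value, group_id: groupId });
  return value;
}
```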
6. Filesystem (5 commands)
Sandboxed file operations per group with path traversal prevention.
fs_list_files
List files and directories in a group's file storage area.
invoke('fs_list_files', { group_id: string, path?: string }): Promise<FileInfo[]>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
path | Option<String> | No | Relative subdirectory path (default: root) |
Returns: Vec<FileInfo> — file metadata objects (name, size, type, modified date).
fs_read_file
Read a file from the group's storage.
invoke('fs_read_file', { group_id: string, path: string }): Promise<FileContent>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
path | String | Yes | Relative file path |
Returns: FileContent — file content and metadata.
fs_write_file
Write content to a file in the group's storage.
invoke('fs_write_file', { group_id: string, path: string, content: string }): Promise<WriteResult>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
path | String | Yes | Relative file path |
content | String | Yes | File content to write |
Returns: WriteResult — write confirmation with path and bytes written.
fs_delete_file
Delete a file from the group's storage.
invoke('fs_delete_file', { group_id: string, path: string }): Promise<DeleteResult>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
path | String | Yes | Relative file path to delete |
Returns: DeleteResult — deletion confirmation.
fs_search_files
Search file contents within a group's storage.
invoke('fs_search_files', {
group_id: string,
query: string,
path?: string,
max_results?: number
}): Promise<SearchMatch[]>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
query | String | Yes | Search query string |
path | Option<String> | No | Subdirectory to search within (default: root) |
max_results | Option<usize> | No | Max results (default: 20) |
Returns: Vec<SearchMatch> — matching files with context.
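A write-then-search sketch with an injectable Invoke stand-in. The SearchMatch shape shown is an assumed subset; the reference above only specifies "matching files with context":

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface SearchMatch { path: string; } // assumed illustrative subset

// Write a note, then find it again by content. Paths are relative to the
// group's sandboxed storage root; traversal outside it is rejected.
async function writeAndFind(invoke: Invoke, groupId: string): Promise<SearchMatch[]> {
  await invoke('fs_write_file', {
    group_id: groupId,
    path: 'notes/todo.md',
    content: 'Ship the release notes',
  });
  return (await invoke('fs_search_files', {
    group_id: groupId,
    query: 'release notes',
    path: 'notes',
    max_results: 10,
  })) as SearchMatch[];
}
```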
7. Session (2 commands)
Session management for offline grace period support. The frontend calls these after authentication to persist the session across app restarts.
set_session
Set the authenticated session after login. Persists to vault for 72-hour offline grace period.
invoke('set_session', { group_id: string, expires_at?: number }): Promise<void>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Authenticated user's group ID (cannot be empty) |
expires_at | Option<i64> | No | Token expiry as Unix timestamp (seconds) |
Returns: () (void)
The session is stored both in-memory and in the vault (session/group_id, session/expires_at). On app restart, restore_session_from_vault (internal, not an IPC command) restores the session if within the 72-hour grace period.
clear_session
Clear the authenticated session on logout. Removes from both memory and vault.
invoke('clear_session'): Promise<void>
No parameters.
Returns: () (void)
8. Health (1 command)
health_check
Get the health status of all Tauri Rust subsystems.
invoke('health_check'): Promise<HealthStatus>
No parameters.
Returns: HealthStatus:
{
healthy: boolean; // Overall health (platform-aware criteria)
platform: string; // "macos" | "windows" | "linux" | "ios" | "android"
subsystems: SubsystemStatus[]; // Per-subsystem status
startup_errors: string[]; // Errors from initialization
}
Each SubsystemStatus has { name, ready, detail }. Checked subsystems:
- embeddings — ONNX/candle model
- vector_store — LanceDB/SQLite
- git_store — libgit2
- vault — OS keychain/SQLite
- action_queue — offline replay buffer
Health criteria by platform:
- Desktop: embeddings + git_store + vault must be ready
- Mobile with ML: embeddings + vector_store + git_store must be ready
- Mobile without ML: git_store must be ready
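A sketch that surfaces which subsystems failed to come up, using the HealthStatus shape above. Invoke is an injectable stand-in for Tauri's invoke:

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface SubsystemStatus { name: string; ready: boolean; detail: string; }
interface HealthStatus {
  healthy: boolean;
  platform: string;
  subsystems: SubsystemStatus[];
  startup_errors: string[];
}

// Return the names of unready subsystems, or [] when the backend is healthy.
async function unreadySubsystems(invoke: Invoke): Promise<string[]> {
  const status = (await invoke('health_check')) as HealthStatus;
  if (status.healthy) return [];
  return status.subsystems.filter((s) => !s.ready).map((s) => s.name);
}
```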
9. Offline Action Queue (5 commands)
Persistent queue for actions that fail while offline. Stored as JSON on disk. Actions are replayed when connectivity is restored.
queue_action
Add an action to the offline queue.
invoke('queue_action', {
group_id: string,
action_type: string,
payload: object
}): Promise<string>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
action_type | String | Yes | Action type identifier (e.g., "send_message", "create_task") |
payload | Value | Yes | Arbitrary JSON payload for replay |
Returns: String — the generated action ID.
drain_queue
Drain all pending actions for a group (oldest-first). Increments each action's attempts counter. Call remove_queued_action after successfully replaying each action.
invoke('drain_queue', { group_id: string }): Promise<QueuedAction[]>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
Returns: Vec<QueuedAction> — all pending actions for the group, ordered oldest-first.
get_queue_status
Get queue status without modifying the queue.
invoke('get_queue_status', { group_id: string }): Promise<QueueStatus>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
Returns: QueueStatus — pending count and oldest action timestamp.
remove_queued_action
Remove a successfully replayed action from the queue.
invoke('remove_queued_action', {
action_id: string,
group_id: string
}): Promise<boolean>
| Parameter | Type | Required | Description |
|---|---|---|---|
action_id | String | Yes | ID of the action to remove |
group_id | String | Yes | Group ID |
Returns: bool — true if the action was found and removed.
clear_queue
Discard all pending actions for a group.
invoke('clear_queue', { group_id: string }): Promise<number>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
Returns: usize — number of actions cleared.
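The drain/replay/remove contract above can be sketched as a replay loop. The QueuedAction field names (id, action_type, payload) are assumptions based on the queue_action parameters; Invoke is an injectable stand-in:

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface QueuedAction { id: string; action_type: string; payload: unknown; } // assumed shape

// drain_queue returns pending actions oldest-first (incrementing their attempt
// counters); remove each one only after a successful replay, so failures stay
// queued for the next drain.
async function replayQueue(
  invoke: Invoke,
  groupId: string,
  replay: (action: QueuedAction) => Promise<void>,
): Promise<number> {
  const actions = (await invoke('drain_queue', { group_id: groupId })) as QueuedAction[];
  let replayed = 0;
  for (const action of actions) {
    try {
      await replay(action);
      await invoke('remove_queued_action', { action_id: action.id, group_id: groupId });
      replayed++;
    } catch {
      // Leave the action queued; the next drain will return it again.
    }
  }
  return replayed;
}
```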
10. OpenMorph (2 commands)
V1.0: Local .morph/ directory discovery and initialization for git-native Spaces.
morph_discover
Scan a local directory tree for existing Morph Spaces (directories containing .morph/). This is a pre-authentication operation; no group_id is required.
invoke('morph_discover', {
root_path?: string,
max_depth?: number
}): Promise<MorphDirInfo[]>
| Parameter | Type | Required | Description |
|---|---|---|---|
root_path | Option<String> | No | Directory to start scanning (default: user home). Must be >2 chars, cannot be filesystem root. |
max_depth | Option<u32> | No | Max directory depth (default: 5, capped at 10) |
Returns: Vec<MorphDirInfo> — discovered .morph/ directories with metadata.
morph_init
Initialize a .morph/ Morph Space in an arbitrary local directory. Idempotent: returns existing info if .morph/ already exists.
Creates:
- .morph/config.yaml — space identity and sync configuration
- .morph/canvas.yaml — blank canvas state
- .morph/acl.yaml — default access control
- .morph/.git/ — git repository (non-fatal if git init fails)
invoke('morph_init', {
path: string,
space_name: string,
space_id: string,
sync_mode?: string,
remote_url?: string
}): Promise<MorphInitResult>
| Parameter | Type | Required | Description |
|---|---|---|---|
path | String | Yes | Absolute path to the directory |
space_name | String | Yes | Human-readable space name |
space_id | String | Yes | UUID assigned by the backend |
sync_mode | Option<String> | No | "local-only" (default), "morphee-hosted", or "git-remote" |
remote_url | Option<String> | No | Git remote URL for git-remote sync mode |
Returns: MorphInitResult — initialization result with space metadata.
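A discover-then-init sketch, assuming an injectable Invoke stand-in and an assumed path field on MorphDirInfo:

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface MorphDirInfo { path: string; } // assumed illustrative subset

// Scan for existing Spaces under a root; initialize a local-only Space when
// none is found. morph_init is idempotent, so re-running is safe.
async function discoverOrInit(invoke: Invoke, rootPath: string, spaceId: string): Promise<string> {
  const found = (await invoke('morph_discover', {
    root_path: rootPath,
    max_depth: 3, // default 5, capped at 10
  })) as MorphDirInfo[];
  if (found.length > 0) return found[0].path;
  await invoke('morph_init', {
    path: rootPath,
    space_name: 'My Space',
    space_id: spaceId,
    sync_mode: 'local-only',
  });
  return rootPath;
}
```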
11. WASM Extensions (4 commands)
V1.2: Load and execute WASM extensions in a sandboxed runtime.
extension_load
Load a WASM extension from raw bytes with a manifest.
invoke('extension_load', {
wasm_bytes: number[], // Uint8Array serialized
manifest_json: string // JSON string of ExtensionManifest
}): Promise<string>
| Parameter | Type | Required | Description |
|---|---|---|---|
wasm_bytes | Vec<u8> | Yes | Raw WASM binary bytes |
manifest_json | String | Yes | JSON-serialized ExtensionManifest |
Returns: String — the extension ID.
extension_execute
Execute an action on a loaded extension.
invoke('extension_execute', {
extension_id: string,
action: string,
params: object
}): Promise<ExtensionExecutionResult>
| Parameter | Type | Required | Description |
|---|---|---|---|
extension_id | String | Yes | ID of the loaded extension |
action | String | Yes | Action name to execute |
params | Value | Yes | JSON parameters to pass to the action |
Returns: ExtensionExecutionResult — execution result with output data.
extension_unload
Unload a WASM extension from the runtime.
invoke('extension_unload', { extension_id: string }): Promise<boolean>
| Parameter | Type | Required | Description |
|---|---|---|---|
extension_id | String | Yes | ID of the extension to unload |
Returns: bool — true if the extension was found and unloaded.
extension_list
List all currently loaded extension IDs.
invoke('extension_list'): Promise<string[]>
No parameters.
Returns: Vec<String> — list of loaded extension IDs.
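A load/execute/unload round trip, sketched with an injectable Invoke stand-in; the ExtensionExecutionResult shape shown is an assumed subset:

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface ExtensionExecutionResult { output?: unknown; } // assumed illustrative subset

// Load a WASM binary, run one action, and always unload afterwards.
// The reference specifies wasm_bytes as number[] (a serialized Uint8Array),
// hence the Array.from conversion before crossing the IPC boundary.
async function runExtensionOnce(
  invoke: Invoke,
  wasm: Uint8Array,
  manifest: object,
  action: string,
  params: object,
): Promise<ExtensionExecutionResult> {
  const extensionId = (await invoke('extension_load', {
    wasm_bytes: Array.from(wasm),
    manifest_json: JSON.stringify(manifest),
  })) as string;
  try {
    return (await invoke('extension_execute', {
      extension_id: extensionId,
      action,
      params,
    })) as ExtensionExecutionResult;
  } finally {
    await invoke('extension_unload', { extension_id: extensionId });
  }
}
```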
12. Local LLM (8 commands)
V1.5: Local GGUF model inference via candle. Supports Phi-4 Mini (3.8B) and Llama 3.2 3B. Uses Metal acceleration on macOS.
llm_chat_stream
Start streaming LLM token generation. Returns immediately with a generation_id. Tokens are emitted as Tauri events.
invoke('llm_chat_stream', {
request: {
generation_id: string,
messages: Array<{ role: string, content: string }>,
group_id: string,
config?: {
max_tokens: number, // default: 1024
temperature: number, // default: 0.7
top_p: number, // default: 0.9
repeat_penalty: number, // default: 1.1
repeat_last_n: number // default: 64
}
}
}): Promise<string>
| Parameter | Type | Required | Description |
|---|---|---|---|
request | ChatStreamRequest | Yes | Chat request with messages and config |
Returns: String — the generation_id (echoed back).
Emitted events:
- llm-token — { generation_id, token } for each generated token (real-time streaming)
- llm-done — { generation_id, token_count } when generation completes
- llm-error — { generation_id, message } on error
Example usage:
import { invoke } from '@tauri-apps/api/core';
import { listen } from '@tauri-apps/api/event';
const generationId = crypto.randomUUID();
const unlistenToken = await listen('llm-token', (event) => {
  if (event.payload.generation_id === generationId) {
    appendToUI(event.payload.token);
  }
});
const unlistenDone = await listen('llm-done', (event) => {
  if (event.payload.generation_id === generationId) {
    unlistenToken();
    unlistenDone();
  }
});
await invoke('llm_chat_stream', {
  request: {
    generation_id: generationId,
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Hello!' }
    ],
    group_id: currentGroupId
  }
});
llm_cancel_stream
Cancel an in-progress generation by setting the atomic cancel flag.
invoke('llm_cancel_stream', { group_id: string }): Promise<void>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID (validated against session) |
Returns: () (void). The running generation will stop at the next token boundary.
llm_get_info
Get info about the currently loaded model.
invoke('llm_get_info'): Promise<LlmModelInfo | null>
No parameters.
Returns: Option<LlmModelInfo>:
{
model_id: string; // e.g., "phi-4-mini-q4"
display_name: string; // e.g., "Phi-4 Mini (Q4)"
parameters: string; // e.g., "3.8B"
quantization: string; // e.g., "Q4_K_M"
context_length: number; // e.g., 4096
downloaded: boolean;
size_bytes: number | null;
}
Returns null if no model is loaded.
llm_load_model
Load a previously downloaded model into memory. Emits progress events during loading (5-30 seconds).
invoke('llm_load_model', {
model_id: string,
group_id: string
}): Promise<LlmModelInfo>
| Parameter | Type | Required | Description |
|---|---|---|---|
model_id | String | Yes | Model identifier: "phi-4-mini-q4" or "llama-3.2-3b-q4" |
group_id | String | Yes | Group ID |
Returns: LlmModelInfo — info about the loaded model.
Emitted events:
- llm-load-progress — { stage: "tokenizer" | "weights" | "ready", progress: 0.0-1.0 }
Errors if the model has not been downloaded first.
llm_unload_model
Unload the current model from memory to free resources.
invoke('llm_unload_model', { group_id: string }): Promise<void>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
Returns: () (void)
llm_list_models
List all available models in the catalog with their download status.
invoke('llm_list_models'): Promise<LlmModelInfo[]>
No parameters.
Returns: Vec<LlmModelInfo> — all models in the catalog.
Available models:
| Model ID | Display Name | Parameters | Context | HuggingFace Repo |
|---|---|---|---|---|
phi-4-mini-q4 | Phi-4 Mini (Q4) | 3.8B | 4096 | microsoft/Phi-4-mini-instruct-GGUF |
llama-3.2-3b-q4 | Llama 3.2 3B (Q4) | 3.2B | 4096 | bartowski/Llama-3.2-3B-Instruct-GGUF |
llm_download_model
Download a model from HuggingFace. Emits progress events during download. No-op if already downloaded.
invoke('llm_download_model', {
model_id: string,
group_id: string
}): Promise<void>
| Parameter | Type | Required | Description |
|---|---|---|---|
model_id | String | Yes | Model identifier to download |
group_id | String | Yes | Group ID |
Returns: () (void)
Emitted events:
- llm-download-progress — { model_id, bytes_downloaded, bytes_total }
llm_delete_model
Delete a downloaded model from disk. Automatically unloads the model if it is currently active.
invoke('llm_delete_model', {
model_id: string,
group_id: string
}): Promise<void>
| Parameter | Type | Required | Description |
|---|---|---|---|
model_id | String | Yes | Model identifier to delete |
group_id | String | Yes | Group ID |
Returns: () (void)
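The model commands compose into an "ensure ready" helper. This sketch assumes an injectable Invoke stand-in and uses only the model_id and downloaded fields of LlmModelInfo:

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface LlmModelInfo { model_id: string; downloaded: boolean; } // subset of the full shape

// Make sure a model is on disk and loaded in memory before streaming.
// llm_download_model is a no-op when already downloaded, while llm_load_model
// errors if the download never happened, so the order below matters.
async function ensureModelReady(invoke: Invoke, groupId: string, modelId: string): Promise<LlmModelInfo> {
  const models = (await invoke('llm_list_models')) as LlmModelInfo[];
  const target = models.find((m) => m.model_id === modelId);
  if (!target) throw new Error(`unknown model: ${modelId}`);
  if (!target.downloaded) {
    await invoke('llm_download_model', { model_id: modelId, group_id: groupId });
  }
  return (await invoke('llm_load_model', { model_id: modelId, group_id: groupId })) as LlmModelInfo;
}
```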
13. VectorRouter (1 command)
V1.5: Offline-first message routing. Checks local LanceDB memory and skill index before falling back to LLM.
vector_route_message
Route a user message using local vector memory and skill index. Determines whether the query can be answered directly from memory, routed to a skill, or requires the LLM.
invoke('vector_route_message', {
message: string,
group_id: string
}): Promise<VectorRouteResult>
| Parameter | Type | Required | Description |
|---|---|---|---|
message | String | Yes | User's message to route |
group_id | String | Yes | Group ID |
Returns: VectorRouteResult:
{
type: "DIRECT_MEMORY" | "SKILL_EXECUTE" | "SKILL_HINT" | "LLM_REQUIRED";
content?: string; // For DIRECT_MEMORY: the stored answer text
skill_id?: string; // For SKILL_*: the skill UUID
skill_name?: string; // For SKILL_*: human-readable skill name
score: number; // Similarity score (0.0-1.0)
}
Routing thresholds (from VectorRouter):
- DIRECT_MEMORY — score >= 0.92: use content directly as the AI response
- SKILL_EXECUTE — score >= 0.88: execute the matched skill (no required params)
- SKILL_HINT — score >= 0.83: pass the skill name as a hint in the LLM system prompt
- LLM_REQUIRED — no match above threshold; fall through to the LLM
Falls back to LLM_REQUIRED gracefully if embedding provider or vector store are not initialized.
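A dispatch sketch over the four result types. runSkill and askLlm are hypothetical app callbacks, not IPC commands, and Invoke is an injectable stand-in for Tauri's invoke:

```typescript
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

interface VectorRouteResult {
  type: 'DIRECT_MEMORY' | 'SKILL_EXECUTE' | 'SKILL_HINT' | 'LLM_REQUIRED';
  content?: string;
  skill_id?: string;
  skill_name?: string;
  score: number;
}

// Route first; fall back to the LLM only when nothing local matches.
async function answer(
  invoke: Invoke,
  groupId: string,
  message: string,
  runSkill: (skillId: string) => Promise<string>,      // hypothetical app callback
  askLlm: (message: string, hint?: string) => Promise<string>, // hypothetical app callback
): Promise<string> {
  const route = (await invoke('vector_route_message', {
    message,
    group_id: groupId,
  })) as VectorRouteResult;
  switch (route.type) {
    case 'DIRECT_MEMORY': return route.content!;                    // score >= 0.92
    case 'SKILL_EXECUTE': return runSkill(route.skill_id!);         // score >= 0.88
    case 'SKILL_HINT':    return askLlm(message, route.skill_name); // score >= 0.83
    case 'LLM_REQUIRED':  return askLlm(message);
  }
}
```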
14. Audio: TTS (4 commands)
V1.5: Text-to-speech via platform TTS engines (macOS AVSpeech, Windows SAPI5, Linux espeak/piper). Requires the --features audio Cargo flag.
tts_speak
Speak text using the platform TTS engine. Initializes the TTS provider lazily on the first call. Non-blocking: speech continues in the background.
invoke('tts_speak', { text: string, group_id: string }): Promise<void>
| Parameter | Type | Required | Description |
|---|---|---|---|
text | String | Yes | Text to speak (max 4096 characters) |
group_id | String | Yes | Group ID |
Returns: () (void)
tts_stop
Stop any currently-in-progress TTS speech.
invoke('tts_stop', { group_id: string }): Promise<void>
| Parameter | Type | Required | Description |
|---|---|---|---|
group_id | String | Yes | Group ID |
Returns: () (void). No-op if nothing is playing.
tts_is_speaking
Check if TTS is currently speaking.
invoke('tts_is_speaking'): Promise<boolean>
No parameters.
Returns: bool — true if audio is currently playing.
tts_set_rate
Set TTS speech rate.
invoke('tts_set_rate', { rate: number, group_id: string }): Promise<void>
| Parameter | Type | Required | Description |
|---|---|---|---|
rate | f32 | Yes | Speech rate: 0.5 = slow, 1.0 = normal, 2.0 = fast (clamped to 0.1-3.0) |
group_id | String | Yes | Group ID |
Returns: () (void)
15. Audio: Whisper STT (1 command)
V1.5: Local speech-to-text via Whisper-tiny ONNX model. Status: skeleton with API surface; ONNX inference pipeline not yet implemented.
whisper_transcribe
Transcribe PCM audio samples to text using the local Whisper model.
invoke('whisper_transcribe', {
audio_data: number[], // Float32Array serialized as Vec<f32>
sample_rate: number,
group_id: string
}): Promise<TranscriptionResult>
| Parameter | Type | Required | Description |
|---|---|---|---|
| audio_data | Vec<f32> | Yes | Mono PCM f32 samples (max 30s at 16kHz = 480,000 samples) |
| sample_rate | u32 | Yes | Audio sample rate (should be 16000 for Whisper) |
| group_id | String | Yes | Group ID |
Returns: TranscriptionResult:
{
text: string; // Transcribed text
confidence: number | null; // Confidence score [0.0, 1.0] if available
duration_ms: number; // Processing time in milliseconds
}
Returns an error if the Whisper model has not been downloaded; the frontend should fall back to the Web Speech API in that case.
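A minimal sketch of a calling pattern, under stated assumptions: prepareAudio and transcribeOrNull are hypothetical helpers (not part of the API), and the invoke parameter stands in for invoke from @tauri-apps/api/core, injected so the flow can be tested outside Tauri. The 16 kHz rate and 480,000-sample cap come from the parameter table above.

```typescript
// Signature of invoke from @tauri-apps/api/core (injected for testability).
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

const WHISPER_RATE = 16_000;
const MAX_SAMPLES = 30 * WHISPER_RATE; // 480,000: the documented 30 s cap

interface TranscriptionResult {
  text: string;
  confidence: number | null;
  duration_ms: number;
}

// Truncate to the 30 s cap and serialize Float32Array -> number[] for IPC.
function prepareAudio(samples: Float32Array): number[] {
  return Array.from(samples.subarray(0, MAX_SAMPLES));
}

// Returns the transcript, or null when Whisper is unavailable so the
// caller can switch to the Web Speech API instead.
async function transcribeOrNull(
  invoke: Invoke,
  samples: Float32Array,
  groupId: string,
): Promise<string | null> {
  try {
    const result = (await invoke('whisper_transcribe', {
      audio_data: prepareAudio(samples),
      sample_rate: WHISPER_RATE,
      group_id: groupId,
    })) as TranscriptionResult;
    return result.text;
  } catch {
    return null; // model not downloaded, or ONNX pipeline not built
  }
}
```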
Tauri Event Reference
Several commands emit events via app.emit() rather than returning data directly. Listen for these on the frontend using @tauri-apps/api/event.
| Event Name | Payload | Emitted By |
|---|---|---|
| llm-token | { generation_id: string, token: string } | llm_chat_stream |
| llm-done | { generation_id: string, token_count: number } | llm_chat_stream |
| llm-error | { generation_id: string, message: string } | llm_chat_stream |
| llm-download-progress | { model_id: string, bytes_downloaded: number, bytes_total: number } | llm_download_model |
| llm-load-progress | { stage: string, progress: number } | llm_load_model |
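Tokens from llm_chat_stream arrive one event at a time and are keyed by generation_id, so a consumer typically accumulates them per generation. A minimal sketch, where StreamAccumulator is an illustrative helper (not part of the API) whose methods you would wire to listen('llm-token', ...) and listen('llm-done', ...) from @tauri-apps/api/event:

```typescript
// Payload shapes from the event table above.
interface TokenPayload { generation_id: string; token: string; }
interface DonePayload { generation_id: string; token_count: number; }

// Accumulates streamed tokens per generation_id; wire onToken/onDone
// to the llm-token and llm-done event listeners.
class StreamAccumulator {
  private buffers = new Map<string, string>();

  onToken(p: TokenPayload): void {
    this.buffers.set(p.generation_id, (this.buffers.get(p.generation_id) ?? '') + p.token);
  }

  // Returns the full text for the finished generation and frees its buffer.
  onDone(p: DonePayload): string {
    const text = this.buffers.get(p.generation_id) ?? '';
    this.buffers.delete(p.generation_id);
    return text;
  }
}
```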
Error Handling
All commands return Result<T, MorpheeError>. The MorpheeError enum maps to specific subsystem errors:
| Variant | Description |
|---|---|
| MorpheeError::Embedding(String) | Embedding provider errors |
| MorpheeError::VectorStore(String) | LanceDB/SQLite vector store errors |
| MorpheeError::Git(String) | Git storage errors |
| MorpheeError::Vault(String) | Keychain/vault errors |
| MorpheeError::Filesystem(String) | File store errors |
| MorpheeError::Auth(String) | Session validation failures |
| MorpheeError::Extension(String) | WASM extension errors |
| MorpheeError::Llm(String) | Local LLM, TTS, and Whisper errors |
On the frontend, invoke errors are caught as rejected promises with the error message string.
Feature Flags
Some commands require specific Cargo features to be functional:
| Feature | Commands Affected | Description |
|---|---|---|
| local-llm | llm_chat_stream, llm_load_model, llm_cancel_stream | Enables candle GGUF inference (desktop) |
| mobile-ml | download_embedding_model, is_model_cached | Enables candle embeddings + SQLite vector store (mobile) |
| audio | tts_speak, tts_stop, tts_is_speaking, tts_set_rate | Enables platform TTS via tts crate |
Without the relevant feature flag, stub implementations return descriptive errors (e.g., "Build with --features local-llm").
Source Files
| File | Commands |
|---|---|
| frontend/src-tauri/src/commands/embedding_commands.rs | embed_text, embed_batch, get_embedding_info, download_embedding_model, is_model_cached |
| frontend/src-tauri/src/commands/memory_commands.rs | memory_insert, memory_search, memory_delete, memory_get, memory_count |
| frontend/src-tauri/src/commands/git_commands.rs | git_init_repo, git_save_conversation, git_save_memory, git_delete_memory, git_sync, git_create_branch, git_switch_branch, git_merge_branch, git_list_branches, git_get_current_branch, git_get_commit_history |
| frontend/src-tauri/src/commands/vault_commands.rs | vault_get, vault_set, vault_delete, vault_exists |
| frontend/src-tauri/src/commands/fs_commands.rs | fs_list_files, fs_read_file, fs_write_file, fs_delete_file, fs_search_files |
| frontend/src-tauri/src/commands/session_commands.rs | set_session, clear_session |
| frontend/src-tauri/src/commands/health_commands.rs | health_check |
| frontend/src-tauri/src/commands/queue_commands.rs | queue_action, drain_queue, get_queue_status, remove_queued_action, clear_queue |
| frontend/src-tauri/src/commands/morph_commands.rs | morph_discover, morph_init |
| frontend/src-tauri/src/commands/extension_commands.rs | extension_load, extension_execute, extension_unload, extension_list |
| frontend/src-tauri/src/commands/llm_commands.rs | llm_chat_stream, llm_cancel_stream, llm_get_info, llm_load_model, llm_unload_model, llm_list_models, llm_download_model, llm_delete_model |
| frontend/src-tauri/src/commands/vector_commands.rs | vector_route_message |
| frontend/src-tauri/src/commands/audio_commands.rs | tts_speak, tts_stop, tts_is_speaking, tts_set_rate, whisper_transcribe |
| frontend/src-tauri/src/lib.rs | Command registration (invoke_handler) |