
Tauri IPC Commands Reference

Complete reference for all 58 Tauri IPC commands exposed by the Morphee Rust backend. These commands are invoked from the frontend via Tauri's invoke().

All commands that accept a group_id parameter validate it against the authenticated session (H-AUTHZ-006). Commands return Result<T, MorpheeError> where errors are serialized to the frontend as structured error objects.


Summary Table

| # | Command | Category | Return Type | Auth |
|---|---------|----------|-------------|------|
| 1 | `embed_text` | Embeddings | `EmbeddingResult` | No |
| 2 | `embed_batch` | Embeddings | `Vec<EmbeddingResult>` | No |
| 3 | `get_embedding_info` | Embeddings | `EmbeddingInfo` | No |
| 4 | `download_embedding_model` | Embeddings | `bool` | No |
| 5 | `is_model_cached` | Embeddings | `bool` | No |
| 6 | `memory_insert` | Memory (Vector Store) | `String` | Yes |
| 7 | `memory_search` | Memory (Vector Store) | `Vec<SearchResult>` | Yes |
| 8 | `memory_delete` | Memory (Vector Store) | `bool` | Yes |
| 9 | `memory_get` | Memory (Vector Store) | `Option<MemoryRecord>` | Yes |
| 10 | `memory_count` | Memory (Vector Store) | `usize` | Yes |
| 11 | `git_init_repo` | Git Storage | `String` | Yes |
| 12 | `git_save_conversation` | Git Storage | `String` | Yes |
| 13 | `git_save_memory` | Git Storage | `String` | Yes |
| 14 | `git_delete_memory` | Git Storage | `bool` | Yes |
| 15 | `git_sync` | Git Storage | `bool` | Yes |
| 16 | `git_create_branch` | Git Branching | `String` | Yes |
| 17 | `git_switch_branch` | Git Branching | `String` | Yes |
| 18 | `git_merge_branch` | Git Branching | `String` | Yes |
| 19 | `git_list_branches` | Git Branching | `Vec<String>` | Yes |
| 20 | `git_get_current_branch` | Git Branching | `String` | Yes |
| 21 | `git_get_commit_history` | Git Branching | `Vec<CommitInfo>` | Yes |
| 22 | `vault_get` | Vault | `Option<String>` | Yes |
| 23 | `vault_set` | Vault | `()` | Yes |
| 24 | `vault_delete` | Vault | `()` | Yes |
| 25 | `vault_exists` | Vault | `bool` | Yes |
| 26 | `fs_list_files` | Filesystem | `Vec<FileInfo>` | Yes |
| 27 | `fs_read_file` | Filesystem | `FileContent` | Yes |
| 28 | `fs_write_file` | Filesystem | `WriteResult` | Yes |
| 29 | `fs_delete_file` | Filesystem | `DeleteResult` | Yes |
| 30 | `fs_search_files` | Filesystem | `Vec<SearchMatch>` | Yes |
| 31 | `set_session` | Session | `()` | No |
| 32 | `clear_session` | Session | `()` | No |
| 33 | `health_check` | Health | `HealthStatus` | No |
| 34 | `queue_action` | Offline Queue | `String` | Yes |
| 35 | `drain_queue` | Offline Queue | `Vec<QueuedAction>` | Yes |
| 36 | `get_queue_status` | Offline Queue | `QueueStatus` | Yes |
| 37 | `remove_queued_action` | Offline Queue | `bool` | Yes |
| 38 | `clear_queue` | Offline Queue | `usize` | Yes |
| 39 | `morph_discover` | OpenMorph | `Vec<MorphDirInfo>` | No |
| 40 | `morph_init` | OpenMorph | `MorphInitResult` | No |
| 41 | `extension_load` | WASM Extensions | `String` | No |
| 42 | `extension_execute` | WASM Extensions | `ExtensionExecutionResult` | No |
| 43 | `extension_unload` | WASM Extensions | `bool` | No |
| 44 | `extension_list` | WASM Extensions | `Vec<String>` | No |
| 45 | `llm_chat_stream` | Local LLM | `String` | Yes |
| 46 | `llm_cancel_stream` | Local LLM | `()` | Yes |
| 47 | `llm_get_info` | Local LLM | `Option<LlmModelInfo>` | No |
| 48 | `llm_load_model` | Local LLM | `LlmModelInfo` | Yes |
| 49 | `llm_unload_model` | Local LLM | `()` | Yes |
| 50 | `llm_list_models` | Local LLM | `Vec<LlmModelInfo>` | No |
| 51 | `llm_download_model` | Local LLM | `()` | Yes |
| 52 | `llm_delete_model` | Local LLM | `()` | Yes |
| 53 | `vector_route_message` | VectorRouter | `VectorRouteResult` | Yes |
| 54 | `tts_speak` | Audio: TTS | `()` | Yes |
| 55 | `tts_stop` | Audio: TTS | `()` | Yes |
| 56 | `tts_is_speaking` | Audio: TTS | `bool` | No |
| 57 | `tts_set_rate` | Audio: TTS | `()` | Yes |
| 58 | `whisper_transcribe` | Audio: Whisper STT | `TranscriptionResult` | Yes |

1. Embeddings (5 commands)

Commands for generating text embeddings using ONNX (fastembed on desktop, candle on mobile).

embed_text

Generate an embedding vector for a single text string.

```typescript
invoke('embed_text', { text: string }): Promise<EmbeddingResult>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| text | `String` | Yes | Text to embed (max 8192 characters) |

Returns: EmbeddingResult — { vector: number[], model: string, dimensions: number }

CPU-intensive ONNX inference runs on spawn_blocking to avoid starving the async runtime.


embed_batch

Generate embedding vectors for multiple texts in a single call.

```typescript
invoke('embed_batch', { texts: string[] }): Promise<EmbeddingResult[]>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| texts | `Vec<String>` | Yes | Array of texts to embed (max 100 items, each max 8192 chars) |

Returns: Vec<EmbeddingResult> — one result per input text.
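Because embed_batch caps input at 100 items, callers with larger collections must chunk. A minimal sketch (the chunkTexts helper and BATCH_LIMIT constant are illustrative, not part of the API; invoke is declared here for self-containment, but in the app you would import it from '@tauri-apps/api/core'):

```typescript
// The backend caps embed_batch at 100 texts per call, so split larger
// inputs into chunks and concatenate the results.
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

interface EmbeddingResult {
  vector: number[];
  model: string;
  dimensions: number;
}

const BATCH_LIMIT = 100; // documented per-call cap for embed_batch

export function chunkTexts(texts: string[], limit = BATCH_LIMIT): string[][] {
  const chunks: string[][] = [];
  for (let i = 0; i < texts.length; i += limit) {
    chunks.push(texts.slice(i, i + limit));
  }
  return chunks;
}

export async function embedAll(texts: string[]): Promise<EmbeddingResult[]> {
  const results: EmbeddingResult[] = [];
  for (const batch of chunkTexts(texts)) {
    results.push(...await invoke<EmbeddingResult[]>('embed_batch', { texts: batch }));
  }
  return results;
}
```

Chunking sequentially (rather than with Promise.all) keeps at most one CPU-heavy ONNX batch in flight at a time.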


get_embedding_info

Get metadata about the loaded embedding model.

invoke('get_embedding_info'): Promise<EmbeddingInfo>

No parameters.

Returns: EmbeddingInfo — model name, dimensions (384 for AllMiniLML6V2), etc.


download_embedding_model

Download the embedding model for mobile (candle). No-op on desktop (fastembed auto-downloads).

invoke('download_embedding_model'): Promise<boolean>

No parameters.

Returns: bool — true if the model is ready, false if not applicable (desktop).


is_model_cached

Check if the embedding model is cached locally.

invoke('is_model_cached'): Promise<boolean>

No parameters.

Returns: bool — always true on desktop (fastembed auto-downloads); on mobile, reflects whether the model is cached.


2. Memory / Vector Store (5 commands)

LanceDB-backed vector memory with group-based isolation. Desktop uses LanceDB, mobile uses SQLite.

memory_insert

Insert a memory record. Automatically embeds the content before storing.

```typescript
invoke('memory_insert', {
  content: string,
  memory_type: string,
  scope: string,
  group_id: string,
  space_id?: string,
  user_id?: string,
  source_conversation_id?: string,
  metadata?: object
}): Promise<string>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| content | `String` | Yes | Text content to store and embed |
| memory_type | `String` | Yes | Type classification (e.g., "fact", "preference", "event", "skill_index") |
| scope | `String` | Yes | Visibility scope |
| group_id | `String` | Yes | Group ID (validated against session) |
| space_id | `Option<String>` | No | Space ID for space-scoped memories |
| user_id | `Option<String>` | No | User ID for user-scoped memories |
| source_conversation_id | `Option<String>` | No | Originating conversation ID |
| metadata | `Option<Value>` | No | Arbitrary JSON metadata |

Returns: String — the generated memory record ID.


memory_search

Semantic search across memory records using cosine similarity.

```typescript
invoke('memory_search', {
  query: string,
  group_id: string,
  scope?: string,
  space_id?: string,
  user_id?: string,
  memory_type?: string,
  limit?: number,
  threshold?: number
}): Promise<SearchResult[]>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| query | `String` | Yes | Natural language search query (embedded for similarity) |
| group_id | `String` | Yes | Group ID (validated against session) |
| scope | `Option<String>` | No | Filter by scope |
| space_id | `Option<String>` | No | Filter by space |
| user_id | `Option<String>` | No | Filter by user |
| memory_type | `Option<String>` | No | Filter by memory type |
| limit | `Option<usize>` | No | Max results (default: 5) |
| threshold | `Option<f32>` | No | Min similarity score 0.0-1.0 (default: 0.3) |

Returns: Vec<SearchResult> — matching records with similarity scores.
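The insert/search pair is the core loop. A hedged sketch (the buildSearchArgs helper and the "group" scope value are illustrative; invoke is declared for self-containment instead of being imported from '@tauri-apps/api/core'):

```typescript
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

interface SearchArgs {
  query: string;
  group_id: string;
  limit: number;
  threshold: number;
}

// Make the documented defaults (limit: 5, threshold: 0.3) explicit so
// callers can see and tune them.
export function buildSearchArgs(
  query: string,
  groupId: string,
  opts: Partial<Pick<SearchArgs, 'limit' | 'threshold'>> = {}
): SearchArgs {
  return { query, group_id: groupId, limit: opts.limit ?? 5, threshold: opts.threshold ?? 0.3 };
}

// Store a fact, then recall it semantically. The scope string "group" is a
// placeholder; use whichever scope values the backend defines.
export async function rememberAndRecall(groupId: string) {
  await invoke<string>('memory_insert', {
    content: 'The user prefers dark mode.',
    memory_type: 'preference',
    scope: 'group',
    group_id: groupId,
  });
  return invoke('memory_search', {
    ...buildSearchArgs('what theme does the user like?', groupId),
  });
}
```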


memory_delete

Delete a memory record by ID. Verifies the record belongs to the specified group before deleting.

```typescript
invoke('memory_delete', { memory_id: string, group_id: string }): Promise<boolean>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| memory_id | `String` | Yes | ID of the memory record to delete |
| group_id | `String` | Yes | Group ID (validated; record must belong to this group) |

Returns: bool — true if deleted.


memory_get

Retrieve a single memory record by ID. Enforces group-based isolation.

```typescript
invoke('memory_get', { memory_id: string, group_id: string }): Promise<MemoryRecord | null>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| memory_id | `String` | Yes | ID of the memory record |
| group_id | `String` | Yes | Group ID (validated; record must belong to this group) |

Returns: Option<MemoryRecord> — the record, or null if not found.


memory_count

Count total memory records for a group.

```typescript
invoke('memory_count', { group_id: string }): Promise<number>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID (validated against session) |

Returns: usize — total record count.


3. Git Storage (5 commands)

Git-backed persistent storage using libgit2. One repo per space. Stores conversations and memories as Markdown with YAML frontmatter.

git_init_repo

Initialize a git repository for a space.

```typescript
invoke('git_init_repo', { group_id: string, space_id: string }): Promise<string>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| space_id | `String` | Yes | Space ID |

Returns: String — path to the initialized repository.


git_save_conversation

Save a conversation to the git store as a Markdown file with YAML frontmatter.

```typescript
invoke('git_save_conversation', {
  group_id: string,
  space_id: string,
  conversation_id: string,
  messages: MessageData[],
  title: string
}): Promise<string>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| space_id | `String` | Yes | Space ID |
| conversation_id | `String` | Yes | Conversation UUID |
| messages | `Vec<MessageData>` | Yes | Array of { role, content } messages |
| title | `String` | Yes | Conversation title |

Returns: String — git commit hash.
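Mapping UI chat state onto the { role, content } records the command expects is the only real work on the frontend. A sketch (the UiMessage shape is a hypothetical app-side type, and invoke is declared here for self-containment rather than imported from '@tauri-apps/api/core'):

```typescript
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

interface MessageData { role: string; content: string; }

// Hypothetical UI-side message shape; adapt to the app's real chat state.
interface UiMessage { author: 'user' | 'assistant' | 'system'; text: string; }

export function toMessageData(messages: UiMessage[]): MessageData[] {
  return messages.map((m) => ({ role: m.author, content: m.text }));
}

export async function persistConversation(
  groupId: string, spaceId: string, conversationId: string,
  title: string, messages: UiMessage[]
): Promise<string> {
  // Resolves to the git commit hash of the saved Markdown file.
  return invoke<string>('git_save_conversation', {
    group_id: groupId,
    space_id: spaceId,
    conversation_id: conversationId,
    title,
    messages: toMessageData(messages),
  });
}
```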


git_save_memory

Save a memory entry to the git store.

```typescript
invoke('git_save_memory', {
  group_id: string,
  space_id: string,
  memory_id: string,
  content: string,
  memory_type: string,
  metadata?: object
}): Promise<string>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| space_id | `String` | Yes | Space ID |
| memory_id | `String` | Yes | Memory record UUID |
| content | `String` | Yes | Memory content text |
| memory_type | `String` | Yes | Type classification |
| metadata | `Option<Value>` | No | Additional JSON metadata |

Returns: String — git commit hash.


git_delete_memory

Delete a memory entry from the git store.

```typescript
invoke('git_delete_memory', {
  group_id: string,
  space_id: string,
  memory_id: string,
  memory_type: string
}): Promise<boolean>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| space_id | `String` | Yes | Space ID |
| memory_id | `String` | Yes | Memory record UUID to delete |
| memory_type | `String` | Yes | Type of memory (determines file path) |

Returns: bool — true if deleted.


git_sync

Sync a space's git repository with a remote URL (push/pull).

```typescript
invoke('git_sync', {
  group_id: string,
  space_id: string,
  remote_url: string
}): Promise<boolean>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| space_id | `String` | Yes | Space ID |
| remote_url | `String` | Yes | Git remote URL to sync with |

Returns: bool — true if sync succeeded.


4. Git Branching (6 commands)

Phase 3i.1: Git branching operations for temporal navigation and parallel space states.

git_create_branch

Create a new branch in a space's repository.

```typescript
invoke('git_create_branch', {
  group_id: string,
  space_id: string,
  branch_name: string,
  from_branch?: string
}): Promise<string>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| space_id | `String` | Yes | Space ID |
| branch_name | `String` | Yes | Name for the new branch |
| from_branch | `Option<String>` | No | Source branch (defaults to current branch) |

Returns: String — the created branch name.


git_switch_branch

Switch the space's repository to a different branch.

```typescript
invoke('git_switch_branch', {
  group_id: string,
  space_id: string,
  branch_name: string
}): Promise<string>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| space_id | `String` | Yes | Space ID |
| branch_name | `String` | Yes | Branch to switch to |

Returns: String — the active branch name after switch.


git_merge_branch

Merge a source branch into a target branch.

```typescript
invoke('git_merge_branch', {
  group_id: string,
  space_id: string,
  source_branch: string,
  target_branch?: string,
  message?: string
}): Promise<string>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| space_id | `String` | Yes | Space ID |
| source_branch | `String` | Yes | Branch to merge from |
| target_branch | `Option<String>` | No | Branch to merge into (defaults to current) |
| message | `Option<String>` | No | Custom merge commit message |

Returns: String — merge commit hash or result message.


git_list_branches

List all branches in a space's repository.

```typescript
invoke('git_list_branches', {
  group_id: string,
  space_id: string
}): Promise<string[]>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| space_id | `String` | Yes | Space ID |

Returns: Vec<String> — list of branch names.


git_get_current_branch

Get the name of the currently checked-out branch.

```typescript
invoke('git_get_current_branch', {
  group_id: string,
  space_id: string
}): Promise<string>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| space_id | `String` | Yes | Space ID |

Returns: String — current branch name (e.g., "main").


git_get_commit_history

Get the commit history for a branch.

```typescript
invoke('git_get_commit_history', {
  group_id: string,
  space_id: string,
  branch?: string,
  limit: number
}): Promise<CommitInfo[]>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| space_id | `String` | Yes | Space ID |
| branch | `Option<String>` | No | Branch to read history from (defaults to current) |
| limit | `usize` | Yes | Max number of commits to return |

Returns: Vec<CommitInfo> — commit objects with hash, message, author, timestamp.


5. Vault (4 commands)

Secure credential storage via OS keychain (macOS Keychain, Windows Credential Manager, Linux Secret Service). Android uses SQLite vault.

vault_get

Retrieve a secret from the vault.

```typescript
invoke('vault_get', { key: string, group_id: string }): Promise<string | null>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| key | `String` | Yes | Secret key name |
| group_id | `String` | Yes | Group ID (validated against session) |

Returns: Option<String> — the secret value, or null if not found.


vault_set

Store a secret in the vault.

```typescript
invoke('vault_set', { key: string, value: string, group_id: string }): Promise<void>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| key | `String` | Yes | Secret key name |
| value | `String` | Yes | Secret value to store |
| group_id | `String` | Yes | Group ID (validated against session) |

Returns: () (void)


vault_delete

Delete a secret from the vault.

```typescript
invoke('vault_delete', { key: string, group_id: string }): Promise<void>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| key | `String` | Yes | Secret key to delete |
| group_id | `String` | Yes | Group ID (validated against session) |

Returns: () (void)


vault_exists

Check if a key exists in the vault.

```typescript
invoke('vault_exists', { key: string, group_id: string }): Promise<boolean>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| key | `String` | Yes | Secret key to check |
| group_id | `String` | Yes | Group ID (validated against session) |

Returns: bool — true if the key exists.


6. Filesystem (5 commands)

Sandboxed file operations per group with path traversal prevention.

fs_list_files

List files and directories in a group's file storage area.

```typescript
invoke('fs_list_files', { group_id: string, path?: string }): Promise<FileInfo[]>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| path | `Option<String>` | No | Relative subdirectory path (default: root) |

Returns: Vec<FileInfo> — file metadata objects (name, size, type, modified date).


fs_read_file

Read a file from the group's storage.

```typescript
invoke('fs_read_file', { group_id: string, path: string }): Promise<FileContent>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| path | `String` | Yes | Relative file path |

Returns: FileContent — file content and metadata.


fs_write_file

Write content to a file in the group's storage.

```typescript
invoke('fs_write_file', { group_id: string, path: string, content: string }): Promise<WriteResult>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| path | `String` | Yes | Relative file path |
| content | `String` | Yes | File content to write |

Returns: WriteResult — write confirmation with path and bytes written.


fs_delete_file

Delete a file from the group's storage.

```typescript
invoke('fs_delete_file', { group_id: string, path: string }): Promise<DeleteResult>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| path | `String` | Yes | Relative file path to delete |

Returns: DeleteResult — deletion confirmation.


fs_search_files

Search file contents within a group's storage.

```typescript
invoke('fs_search_files', {
  group_id: string,
  query: string,
  path?: string,
  max_results?: number
}): Promise<SearchMatch[]>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| query | `String` | Yes | Search query string |
| path | `Option<String>` | No | Subdirectory to search within (default: root) |
| max_results | `Option<usize>` | No | Max results (default: 20) |

Returns: Vec<SearchMatch> — matching files with context.
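The backend enforces the sandbox, but rejecting obviously bad relative paths on the frontend gives friendlier errors. A sketch (the isSafeRelativePath check is an illustrative convenience, not the backend's actual sandbox logic; invoke is declared for self-containment rather than imported from '@tauri-apps/api/core'):

```typescript
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

// Reject absolute paths, backslashes, and traversal segments before
// crossing the IPC boundary. The backend remains the authority.
export function isSafeRelativePath(path: string): boolean {
  if (path.startsWith('/') || path.includes('\\')) return false;
  return path.split('/').every((seg) => seg !== '' && seg !== '.' && seg !== '..');
}

export async function saveNote(groupId: string, path: string, content: string) {
  if (!isSafeRelativePath(path)) throw new Error(`unsafe relative path: ${path}`);
  await invoke('fs_write_file', { group_id: groupId, path, content });
  // Confirm the new content is now discoverable via content search.
  return invoke('fs_search_files', { group_id: groupId, query: content.slice(0, 32) });
}
```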


7. Session (2 commands)

Session management for offline grace period support. The frontend calls these after authentication to persist the session across app restarts.

set_session

Set the authenticated session after login. Persists to vault for 72-hour offline grace period.

```typescript
invoke('set_session', { group_id: string, expires_at?: number }): Promise<void>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Authenticated user's group ID (cannot be empty) |
| expires_at | `Option<i64>` | No | Token expiry as Unix timestamp (seconds) |

Returns: () (void)

The session is stored both in-memory and in the vault (session/group_id, session/expires_at). On app restart, restore_session_from_vault (internal, not an IPC command) restores the session if within the 72-hour grace period.
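A sketch of the login hook, plus an illustrative mirror of the grace-period rule for UI hints. The exact rule lives in restore_session_from_vault on the Rust side; treating the window as "token expiry plus 72 hours" is one plausible reading, labeled as an assumption here. invoke is declared for self-containment rather than imported from '@tauri-apps/api/core':

```typescript
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

const GRACE_PERIOD_SECS = 72 * 3600; // the documented 72-hour offline grace period

// Assumption: the session stays restorable until expiry + 72 h. Useful for
// showing an "offline until ..." hint; the backend is the authority.
export function withinGracePeriod(expiresAt: number, nowSecs: number): boolean {
  return nowSecs < expiresAt + GRACE_PERIOD_SECS;
}

export async function onLoginSuccess(groupId: string, tokenExpiresAt: number): Promise<void> {
  // Persist the session so an app restart within the grace period stays signed in.
  await invoke('set_session', { group_id: groupId, expires_at: tokenExpiresAt });
}
```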


clear_session

Clear the authenticated session on logout. Removes from both memory and vault.

invoke('clear_session'): Promise<void>

No parameters.

Returns: () (void)


8. Health (1 command)

health_check

Get the health status of all Tauri Rust subsystems.

invoke('health_check'): Promise<HealthStatus>

No parameters.

Returns: HealthStatus:

```typescript
{
  healthy: boolean;              // Overall health (platform-aware criteria)
  platform: string;              // "macos" | "windows" | "linux" | "ios" | "android"
  subsystems: SubsystemStatus[]; // Per-subsystem status
  startup_errors: string[];      // Errors from initialization
}
```

Each SubsystemStatus has { name, ready, detail }. Checked subsystems:

  • embeddings — ONNX/candle model
  • vector_store — LanceDB/SQLite
  • git_store — libgit2
  • vault — OS keychain/SQLite
  • action_queue — offline replay buffer

Health criteria by platform:

  • Desktop: embeddings + git_store + vault must be ready
  • Mobile with ML: embeddings + vector_store + git_store must be ready
  • Mobile without ML: git_store must be ready
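Because the overall healthy flag is platform-aware, feature gates are better driven by the specific subsystems a feature needs. A sketch (the canUseSemanticSearch helper is illustrative; invoke is declared for self-containment rather than imported from '@tauri-apps/api/core'):

```typescript
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

interface SubsystemStatus { name: string; ready: boolean; detail: string; }

interface HealthStatus {
  healthy: boolean;
  platform: string;
  subsystems: SubsystemStatus[];
  startup_errors: string[];
}

export function isSubsystemReady(status: HealthStatus, name: string): boolean {
  return status.subsystems.some((s) => s.name === name && s.ready);
}

// Gate semantic search on exactly the subsystems it uses, rather than the
// platform-aware `healthy` flag.
export async function canUseSemanticSearch(): Promise<boolean> {
  const status = await invoke<HealthStatus>('health_check');
  return isSubsystemReady(status, 'embeddings') && isSubsystemReady(status, 'vector_store');
}
```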

9. Offline Action Queue (5 commands)

Persistent queue for actions that fail while offline. Stored as JSON on disk. Actions are replayed when connectivity is restored.

queue_action

Add an action to the offline queue.

```typescript
invoke('queue_action', {
  group_id: string,
  action_type: string,
  payload: object
}): Promise<string>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |
| action_type | `String` | Yes | Action type identifier (e.g., "send_message", "create_task") |
| payload | `Value` | Yes | Arbitrary JSON payload for replay |

Returns: String — the generated action ID.


drain_queue

Drain all pending actions for a group (oldest-first). Increments each action's attempts counter. Call remove_queued_action after successfully replaying each action.

```typescript
invoke('drain_queue', { group_id: string }): Promise<QueuedAction[]>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |

Returns: Vec<QueuedAction> — all pending actions for the group, ordered oldest-first.
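The drain/replay/remove contract above can be sketched as a loop. Assumptions to note: the QueuedAction field names `id` and `attempts`, and the MAX_ATTEMPTS cap, are illustrative (the docs only guarantee an action ID and an attempts counter exist); invoke is declared for self-containment rather than imported from '@tauri-apps/api/core':

```typescript
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

// Field names `id` and `attempts` are assumed; check the real QueuedAction shape.
interface QueuedAction { id: string; action_type: string; payload: unknown; attempts: number; }

const MAX_ATTEMPTS = 3; // illustrative cap; drain_queue bumps `attempts` on each drain

export function shouldRetry(action: QueuedAction, maxAttempts = MAX_ATTEMPTS): boolean {
  return action.attempts <= maxAttempts;
}

// Drain, replay each action via the app's own dispatcher, and only remove an
// action once its replay succeeded — exactly the contract described above.
export async function replayQueue(
  groupId: string,
  replay: (a: QueuedAction) => Promise<void>
): Promise<void> {
  const actions = await invoke<QueuedAction[]>('drain_queue', { group_id: groupId });
  for (const action of actions) {
    if (!shouldRetry(action)) continue; // leave poison actions for manual cleanup
    try {
      await replay(action);
      await invoke('remove_queued_action', { action_id: action.id, group_id: groupId });
    } catch {
      // Leave it queued; the next drain increments `attempts` again.
    }
  }
}
```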


get_queue_status

Get queue status without modifying the queue.

```typescript
invoke('get_queue_status', { group_id: string }): Promise<QueueStatus>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |

Returns: QueueStatus — pending count and oldest action timestamp.


remove_queued_action

Remove a successfully replayed action from the queue.

```typescript
invoke('remove_queued_action', {
  action_id: string,
  group_id: string
}): Promise<boolean>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| action_id | `String` | Yes | ID of the action to remove |
| group_id | `String` | Yes | Group ID |

Returns: bool — true if the action was found and removed.


clear_queue

Discard all pending actions for a group.

```typescript
invoke('clear_queue', { group_id: string }): Promise<number>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |

Returns: usize — number of actions cleared.


10. OpenMorph (2 commands)

V1.0: Local .morph/ directory discovery and initialization for git-native Spaces.

morph_discover

Scan a local directory tree for existing Morph Spaces (directories containing .morph/). This is a pre-authentication operation; no group_id is required.

```typescript
invoke('morph_discover', {
  root_path?: string,
  max_depth?: number
}): Promise<MorphDirInfo[]>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| root_path | `Option<String>` | No | Directory to start scanning (default: user home). Must be >2 chars, cannot be filesystem root. |
| max_depth | `Option<u32>` | No | Max directory depth (default: 5, capped at 10) |

Returns: Vec<MorphDirInfo> — discovered .morph/ directories with metadata.


morph_init

Initialize a .morph/ Morph Space in an arbitrary local directory. Idempotent: returns existing info if .morph/ already exists.

Creates:

  • .morph/config.yaml — space identity and sync configuration
  • .morph/canvas.yaml — blank canvas state
  • .morph/acl.yaml — default access control
  • .morph/.git/ — git repository (non-fatal if git init fails)

```typescript
invoke('morph_init', {
  path: string,
  space_name: string,
  space_id: string,
  sync_mode?: string,
  remote_url?: string
}): Promise<MorphInitResult>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| path | `String` | Yes | Absolute path to the directory |
| space_name | `String` | Yes | Human-readable space name |
| space_id | `String` | Yes | UUID assigned by the backend |
| sync_mode | `Option<String>` | No | "local-only" (default), "morphee-hosted", or "git-remote" |
| remote_url | `Option<String>` | No | Git remote URL for git-remote sync mode |

Returns: MorphInitResult — initialization result with space metadata.


11. WASM Extensions (4 commands)

V1.2: Load and execute WASM extensions in a sandboxed runtime.

extension_load

Load a WASM extension from raw bytes with a manifest.

```typescript
invoke('extension_load', {
  wasm_bytes: number[],  // Uint8Array serialized
  manifest_json: string  // JSON string of ExtensionManifest
}): Promise<string>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| wasm_bytes | `Vec<u8>` | Yes | Raw WASM binary bytes |
| manifest_json | `String` | Yes | JSON-serialized ExtensionManifest |

Returns: String — the extension ID.
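Since wasm_bytes crosses IPC as a plain number array, the fetched Uint8Array needs converting first. A sketch (loadExtensionFromUrl is an illustrative helper; invoke is declared for self-containment rather than imported from '@tauri-apps/api/core'):

```typescript
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

// wasm_bytes is documented as "Uint8Array serialized", i.e. a plain number[],
// so convert before invoking.
export function toByteArray(bytes: Uint8Array): number[] {
  return Array.from(bytes);
}

export async function loadExtensionFromUrl(url: string, manifest: object): Promise<string> {
  const bytes = new Uint8Array(await (await fetch(url)).arrayBuffer());
  return invoke<string>('extension_load', {
    wasm_bytes: toByteArray(bytes),
    manifest_json: JSON.stringify(manifest),
  });
}
```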


extension_execute

Execute an action on a loaded extension.

```typescript
invoke('extension_execute', {
  extension_id: string,
  action: string,
  params: object
}): Promise<ExtensionExecutionResult>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| extension_id | `String` | Yes | ID of the loaded extension |
| action | `String` | Yes | Action name to execute |
| params | `Value` | Yes | JSON parameters to pass to the action |

Returns: ExtensionExecutionResult — execution result with output data.


extension_unload

Unload a WASM extension from the runtime.

```typescript
invoke('extension_unload', { extension_id: string }): Promise<boolean>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| extension_id | `String` | Yes | ID of the extension to unload |

Returns: bool — true if the extension was found and unloaded.


extension_list

List all currently loaded extension IDs.

invoke('extension_list'): Promise<string[]>

No parameters.

Returns: Vec<String> — list of loaded extension IDs.


12. Local LLM (8 commands)

V1.5: Local GGUF model inference via candle. Supports Phi-4 Mini (3.8B) and Llama 3.2 3B. Uses Metal acceleration on macOS.

llm_chat_stream

Start streaming LLM token generation. Returns immediately with a generation_id. Tokens are emitted as Tauri events.

```typescript
invoke('llm_chat_stream', {
  request: {
    generation_id: string,
    messages: Array<{ role: string, content: string }>,
    group_id: string,
    config?: {
      max_tokens: number,     // default: 1024
      temperature: number,    // default: 0.7
      top_p: number,          // default: 0.9
      repeat_penalty: number, // default: 1.1
      repeat_last_n: number   // default: 64
    }
  }
}): Promise<string>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| request | `ChatStreamRequest` | Yes | Chat request with messages and config |

Returns: String — the generation_id (echoed back).

Emitted events:

  • llm-token — { generation_id, token } for each generated token (real-time streaming)
  • llm-done — { generation_id, token_count } when generation completes
  • llm-error — { generation_id, message } on error

Example usage:

```typescript
import { invoke } from '@tauri-apps/api/core';
import { listen } from '@tauri-apps/api/event';

const unlisten = await listen('llm-token', (event) => {
  appendToUI(event.payload.token);
});

await invoke('llm_chat_stream', {
  request: {
    generation_id: crypto.randomUUID(),
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Hello!' }
    ],
    group_id: currentGroupId
  }
});
```

llm_cancel_stream

Cancel an in-progress generation by setting the atomic cancel flag.

```typescript
invoke('llm_cancel_stream', { group_id: string }): Promise<void>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID (validated against session) |

Returns: () (void). The running generation will stop at the next token boundary.


llm_get_info

Get info about the currently loaded model.

invoke('llm_get_info'): Promise<LlmModelInfo | null>

No parameters.

Returns: Option<LlmModelInfo>:

```typescript
{
  model_id: string;       // e.g., "phi-4-mini-q4"
  display_name: string;   // e.g., "Phi-4 Mini (Q4)"
  parameters: string;     // e.g., "3.8B"
  quantization: string;   // e.g., "Q4_K_M"
  context_length: number; // e.g., 4096
  downloaded: boolean;
  size_bytes: number | null;
}
```

Returns null if no model is loaded.


llm_load_model

Load a previously downloaded model into memory. Emits progress events during loading (5-30 seconds).

```typescript
invoke('llm_load_model', {
  model_id: string,
  group_id: string
}): Promise<LlmModelInfo>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| model_id | `String` | Yes | Model identifier: "phi-4-mini-q4" or "llama-3.2-3b-q4" |
| group_id | `String` | Yes | Group ID |

Returns: LlmModelInfo — info about the loaded model.

Emitted events:

  • llm-load-progress — { stage: "tokenizer" | "weights" | "ready", progress: 0.0-1.0 }

Errors if the model has not been downloaded first.


llm_unload_model

Unload the current model from memory to free resources.

```typescript
invoke('llm_unload_model', { group_id: string }): Promise<void>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |

Returns: () (void)


llm_list_models

List all available models in the catalog with their download status.

invoke('llm_list_models'): Promise<LlmModelInfo[]>

No parameters.

Returns: Vec<LlmModelInfo> — all models in the catalog.

Available models:

| Model ID | Display Name | Parameters | Context | HuggingFace Repo |
|----------|--------------|------------|---------|------------------|
| `phi-4-mini-q4` | Phi-4 Mini (Q4) | 3.8B | 4096 | microsoft/Phi-4-mini-instruct-GGUF |
| `llama-3.2-3b-q4` | Llama 3.2 3B (Q4) | 3.2B | 4096 | bartowski/Llama-3.2-3B-Instruct-GGUF |

llm_download_model

Download a model from HuggingFace. Emits progress events during download. No-op if already downloaded.

```typescript
invoke('llm_download_model', {
  model_id: string,
  group_id: string
}): Promise<void>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| model_id | `String` | Yes | Model identifier to download |
| group_id | `String` | Yes | Group ID |

Returns: () (void)

Emitted events:

  • llm-download-progress — { model_id, bytes_downloaded, bytes_total }
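A typical UI subscribes to the progress event before invoking, then unsubscribes when the download settles. A sketch (percentDone and downloadWithProgress are illustrative helpers; invoke and listen are declared for self-containment rather than imported from '@tauri-apps/api/core' and '@tauri-apps/api/event'):

```typescript
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;
declare function listen<T>(
  event: string,
  handler: (e: { payload: T }) => void
): Promise<() => void>;

interface DownloadProgress { model_id: string; bytes_downloaded: number; bytes_total: number; }

export function percentDone(p: Pick<DownloadProgress, 'bytes_downloaded' | 'bytes_total'>): number {
  if (p.bytes_total <= 0) return 0; // total may be unknown before headers arrive
  return Math.min(100, Math.round((p.bytes_downloaded / p.bytes_total) * 100));
}

// Subscribe first so no early progress events are missed; unsubscribe in
// finally so a failed download doesn't leak the listener.
export async function downloadWithProgress(
  modelId: string, groupId: string, onPercent: (pct: number) => void
): Promise<void> {
  const unlisten = await listen<DownloadProgress>('llm-download-progress', (e) => {
    if (e.payload.model_id === modelId) onPercent(percentDone(e.payload));
  });
  try {
    await invoke('llm_download_model', { model_id: modelId, group_id: groupId });
  } finally {
    unlisten();
  }
}
```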

llm_delete_model

Delete a downloaded model from disk. Automatically unloads the model if it is currently active.

```typescript
invoke('llm_delete_model', {
  model_id: string,
  group_id: string
}): Promise<void>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| model_id | `String` | Yes | Model identifier to delete |
| group_id | `String` | Yes | Group ID |

Returns: () (void)


13. VectorRouter (1 command)

V1.5: Offline-first message routing. Checks local LanceDB memory and skill index before falling back to LLM.

vector_route_message

Route a user message using local vector memory and skill index. Determines whether the query can be answered directly from memory, routed to a skill, or requires the LLM.

```typescript
invoke('vector_route_message', {
  message: string,
  group_id: string
}): Promise<VectorRouteResult>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| message | `String` | Yes | User's message to route |
| group_id | `String` | Yes | Group ID |

Returns: VectorRouteResult:

```typescript
{
  type: "DIRECT_MEMORY" | "SKILL_EXECUTE" | "SKILL_HINT" | "LLM_REQUIRED";
  content?: string;    // For DIRECT_MEMORY: the stored answer text
  skill_id?: string;   // For SKILL_*: the skill UUID
  skill_name?: string; // For SKILL_*: human-readable skill name
  score: number;       // Similarity score (0.0-1.0)
}
```

Routing thresholds (from VectorRouter):

  • DIRECT_MEMORY — score >= 0.92: use content directly as the AI response
  • SKILL_EXECUTE — score >= 0.88: execute the matched skill (no required params)
  • SKILL_HINT — score >= 0.83: pass the skill name as a hint in the LLM system prompt
  • LLM_REQUIRED — no match above any threshold; fall through to the LLM

Falls back to LLM_REQUIRED gracefully if the embedding provider or vector store is not initialized.
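The thresholds live in the backend, so the frontend only has to dispatch on the returned type. A sketch of that switch (the string labels nextStep returns are illustrative placeholders for the app's own handlers):

```typescript
interface VectorRouteResult {
  type: 'DIRECT_MEMORY' | 'SKILL_EXECUTE' | 'SKILL_HINT' | 'LLM_REQUIRED';
  content?: string;
  skill_id?: string;
  skill_name?: string;
  score: number;
}

// Decide what the chat loop should do next from the router's verdict.
export function nextStep(result: VectorRouteResult): string {
  switch (result.type) {
    case 'DIRECT_MEMORY': return `answer:${result.content ?? ''}`;        // show stored answer
    case 'SKILL_EXECUTE': return `run-skill:${result.skill_id ?? ''}`;    // execute matched skill
    case 'SKILL_HINT':    return `llm-with-hint:${result.skill_name ?? ''}`; // hint in system prompt
    case 'LLM_REQUIRED':  return 'llm';                                   // fall through to LLM
  }
}
```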


14. Audio: TTS (4 commands)

V1.5: Text-to-speech via platform TTS engines (macOS AVSpeech, Windows SAPI5, Linux espeak/piper). Requires the --features audio Cargo flag.

tts_speak

Speak text using the platform TTS engine. Initializes the TTS provider lazily on the first call. Non-blocking; speech continues in the background.

```typescript
invoke('tts_speak', { text: string, group_id: string }): Promise<void>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| text | `String` | Yes | Text to speak (max 4096 characters) |
| group_id | `String` | Yes | Group ID |

Returns: () (void)


tts_stop

Stop any currently-in-progress TTS speech.

```typescript
invoke('tts_stop', { group_id: string }): Promise<void>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| group_id | `String` | Yes | Group ID |

Returns: () (void). No-op if nothing is playing.


tts_is_speaking

Check if TTS is currently speaking.

invoke('tts_is_speaking'): Promise<boolean>

No parameters.

Returns: bool — true if audio is currently playing.


tts_set_rate

Set TTS speech rate.

```typescript
invoke('tts_set_rate', { rate: number, group_id: string }): Promise<void>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| rate | `f32` | Yes | Speech rate: 0.5 = slow, 1.0 = normal, 2.0 = fast (clamped to 0.1-3.0) |
| group_id | `String` | Yes | Group ID |

Returns: () (void)
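Since the backend clamps the rate to 0.1-3.0, mirroring the clamp on the frontend keeps a rate slider's displayed value honest. A sketch (clampRate and setSpeechRate are illustrative helpers using the documented bounds; invoke is declared for self-containment rather than imported from '@tauri-apps/api/core'):

```typescript
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

// Mirror the documented backend clamp of 0.1-3.0.
export function clampRate(rate: number): number {
  return Math.min(3.0, Math.max(0.1, rate));
}

export async function setSpeechRate(rate: number, groupId: string): Promise<number> {
  const clamped = clampRate(rate);
  await invoke('tts_set_rate', { rate: clamped, group_id: groupId });
  return clamped; // show the value the backend will actually use
}
```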


15. Audio: Whisper STT (1 command)

V1.5: Local speech-to-text via Whisper-tiny ONNX model. Status: skeleton with API surface; ONNX inference pipeline not yet implemented.

whisper_transcribe

Transcribe PCM audio samples to text using the local Whisper model.

```typescript
invoke('whisper_transcribe', {
  audio_data: number[], // Float32Array serialized as Vec<f32>
  sample_rate: number,
  group_id: string
}): Promise<TranscriptionResult>
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| audio_data | `Vec<f32>` | Yes | Mono PCM f32 samples (max 30s at 16kHz = 480,000 samples) |
| sample_rate | `u32` | Yes | Audio sample rate (should be 16000 for Whisper) |
| group_id | `String` | Yes | Group ID |

Returns: TranscriptionResult:

```typescript
{
  text: string;               // Transcribed text
  confidence: number | null;  // Confidence score [0.0, 1.0] if available
  duration_ms: number;        // Processing time in milliseconds
}
```

Returns an error if the Whisper model has not been downloaded; in that case the frontend should fall back to the Web Speech API.
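Guarding the documented limits before crossing the IPC boundary gives a cheaper, friendlier failure than a Rust-side error. A sketch (validateAudio and transcribe are illustrative helpers built from the limits in the parameter table; invoke is declared for self-containment rather than imported from '@tauri-apps/api/core'):

```typescript
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

const WHISPER_SAMPLE_RATE = 16_000;
const MAX_SAMPLES = 480_000; // 30 s at 16 kHz, per the parameter table

// Returns a human-readable problem, or null if the clip is acceptable.
export function validateAudio(samples: Float32Array, sampleRate: number): string | null {
  if (sampleRate !== WHISPER_SAMPLE_RATE) {
    return `expected ${WHISPER_SAMPLE_RATE} Hz, got ${sampleRate}`;
  }
  if (samples.length > MAX_SAMPLES) return `clip too long: ${samples.length} samples`;
  return null;
}

export async function transcribe(samples: Float32Array, groupId: string) {
  const problem = validateAudio(samples, WHISPER_SAMPLE_RATE);
  if (problem) throw new Error(problem);
  return invoke('whisper_transcribe', {
    audio_data: Array.from(samples), // Float32Array -> number[] for serialization
    sample_rate: WHISPER_SAMPLE_RATE,
    group_id: groupId,
  });
}
```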


Tauri Event Reference

Several commands emit events via app.emit() rather than returning data directly. Listen for these on the frontend using @tauri-apps/api/event.

| Event Name | Payload | Emitted By |
|------------|---------|------------|
| `llm-token` | `{ generation_id: string, token: string }` | `llm_chat_stream` |
| `llm-done` | `{ generation_id: string, token_count: number }` | `llm_chat_stream` |
| `llm-error` | `{ generation_id: string, message: string }` | `llm_chat_stream` |
| `llm-download-progress` | `{ model_id: string, bytes_downloaded: number, bytes_total: number }` | `llm_download_model` |
| `llm-load-progress` | `{ stage: string, progress: number }` | `llm_load_model` |

Error Handling

All commands return Result<T, MorpheeError>. The MorpheeError enum maps to specific subsystem errors:

| Variant | Description |
|---------|-------------|
| `MorpheeError::Embedding(String)` | Embedding provider errors |
| `MorpheeError::VectorStore(String)` | LanceDB/SQLite vector store errors |
| `MorpheeError::Git(String)` | Git storage errors |
| `MorpheeError::Vault(String)` | Keychain/vault errors |
| `MorpheeError::Filesystem(String)` | File store errors |
| `MorpheeError::Auth(String)` | Session validation failures |
| `MorpheeError::Extension(String)` | WASM extension errors |
| `MorpheeError::Llm(String)` | Local LLM, TTS, and Whisper errors |

On the frontend, invoke errors are caught as rejected promises with the error message string.
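Because rejections arrive as plain strings, a small wrapper that normalizes them into a discriminated union keeps call sites tidy. A sketch (toOutcome and safeCount are illustrative helpers; invoke is declared for self-containment rather than imported from '@tauri-apps/api/core'):

```typescript
declare function invoke<T>(cmd: string, args?: Record<string, unknown>): Promise<T>;

export type InvokeOutcome<T> = { ok: true; value: T } | { ok: false; error: string };

// invoke() rejections are the serialized MorpheeError message string, but
// normalize Error objects too, in case other code throws in between.
export async function toOutcome<T>(call: () => Promise<T>): Promise<InvokeOutcome<T>> {
  try {
    return { ok: true, value: await call() };
  } catch (err) {
    return { ok: false, error: typeof err === 'string' ? err : String(err) };
  }
}

// Usage against a real command:
export const safeCount = (groupId: string) =>
  toOutcome(() => invoke<number>('memory_count', { group_id: groupId }));
```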


Feature Flags

Some commands require specific Cargo features to be functional:

| Feature | Commands Affected | Description |
|---------|-------------------|-------------|
| `local-llm` | `llm_chat_stream`, `llm_load_model`, `llm_cancel_stream` | Enables candle GGUF inference (desktop) |
| `mobile-ml` | `download_embedding_model`, `is_model_cached` | Enables candle embeddings + SQLite vector store (mobile) |
| `audio` | `tts_speak`, `tts_stop`, `tts_is_speaking`, `tts_set_rate` | Enables platform TTS via tts crate |

Without the relevant feature flag, stub implementations return descriptive errors (e.g., "Build with --features local-llm").


Source Files

| File | Commands |
|------|----------|
| `frontend/src-tauri/src/commands/embedding_commands.rs` | `embed_text`, `embed_batch`, `get_embedding_info`, `download_embedding_model`, `is_model_cached` |
| `frontend/src-tauri/src/commands/memory_commands.rs` | `memory_insert`, `memory_search`, `memory_delete`, `memory_get`, `memory_count` |
| `frontend/src-tauri/src/commands/git_commands.rs` | `git_init_repo`, `git_save_conversation`, `git_save_memory`, `git_delete_memory`, `git_sync`, `git_create_branch`, `git_switch_branch`, `git_merge_branch`, `git_list_branches`, `git_get_current_branch`, `git_get_commit_history` |
| `frontend/src-tauri/src/commands/vault_commands.rs` | `vault_get`, `vault_set`, `vault_delete`, `vault_exists` |
| `frontend/src-tauri/src/commands/fs_commands.rs` | `fs_list_files`, `fs_read_file`, `fs_write_file`, `fs_delete_file`, `fs_search_files` |
| `frontend/src-tauri/src/commands/session_commands.rs` | `set_session`, `clear_session` |
| `frontend/src-tauri/src/commands/health_commands.rs` | `health_check` |
| `frontend/src-tauri/src/commands/queue_commands.rs` | `queue_action`, `drain_queue`, `get_queue_status`, `remove_queued_action`, `clear_queue` |
| `frontend/src-tauri/src/commands/morph_commands.rs` | `morph_discover`, `morph_init` |
| `frontend/src-tauri/src/commands/extension_commands.rs` | `extension_load`, `extension_execute`, `extension_unload`, `extension_list` |
| `frontend/src-tauri/src/commands/llm_commands.rs` | `llm_chat_stream`, `llm_cancel_stream`, `llm_get_info`, `llm_load_model`, `llm_unload_model`, `llm_list_models`, `llm_download_model`, `llm_delete_model` |
| `frontend/src-tauri/src/commands/vector_commands.rs` | `vector_route_message` |
| `frontend/src-tauri/src/commands/audio_commands.rs` | `tts_speak`, `tts_stop`, `tts_is_speaking`, `tts_set_rate`, `whisper_transcribe` |
| `frontend/src-tauri/src/lib.rs` | Command registration (invoke_handler) |