Feature: Local/Remote Task Execution Tracking & Credits System
Date: 2026-02-15 · Status: Approved (Option B) · Version: V1.1 (post-launch)
1. Context & Motivation
Problem
Currently, Morphee's task system:
- Only executes tasks in the Python backend (TaskProcessor)
- Has no concept of where a task runs (locally on Tauri vs remotely on server)
- Has no credits/usage tracking
- No abstraction layer for alternative storage backends
Opportunity
As Morphee evolves toward offline-first (Phase 3d M3), tasks should be:
- Offloadable to local Tauri layer (ONNX embeddings, local LLM inference, file ops)
- Tracked with location visibility (user sees "running locally" vs "running in cloud")
- Accountable with credits (per-user, per-space quotas for API calls, compute, etc.)
- Storage-agnostic (PostgreSQL now, Redis queue later, S3 archive, etc.)
Strategic Goals
- Hybrid execution: Route tasks intelligently (local if capable, remote if needed)
- Cost visibility: Users understand what resources they're using
- Offline operation: Tasks queued/cached locally, synced when online
- Extensibility: Easy to swap storage backend without code changes
2. Options Investigated
Option A: Minimal — Add Execution Location + Credits to Existing Task Table
Approach:
- Add columns to the `tasks` table: `execution_location` (enum: local/remote/hybrid), `credits_used` (numeric)
- Add columns to track execution times per location
- Create a simple `TaskCredits` model with user/space/interface quotas
- Update TaskProcessor to mark location based on which backend executes
- No storage abstraction; stay with PostgreSQL
Pros:
- Minimal changes to existing system
- Quick to implement (~2-3 weeks)
- Straightforward frontend display
- No new services or infrastructure
Cons:
- No storage abstraction (hard to migrate later)
- No credit enforcement mechanism (tracking only, no limits)
- Task routing still manual (no intelligent dispatcher)
- Scales poorly with many concurrent tasks (single TaskProcessor bottleneck)
Estimated Effort: M (2-3 weeks)
Option B: Task Router + Credits System + Storage Abstraction (Recommended)
Approach:
- Execution Location Model:
  - Add `execution_location` (enum: LOCAL, REMOTE, HYBRID, AUTO)
  - Add `executor_id` (which Tauri instance or Python worker)
  - Add `execution_metadata` (JSONB: latency, memory, duration, location details)
- Credits System:
  - Create `task_credits` table (task_id, space_id, interface_id, amount, reason, timestamp)
  - Create `space_quotas` table (space_id, credit_limit, credit_used, reset_period)
  - Quota enforcement: reject task if space quota exceeded
  - Audit trail for compliance
- Task Router (New Service):
  - TaskRouter evaluates task → decides execution location
  - Rules: if local capability + offline mode → LOCAL
  - Otherwise: if quota available → REMOTE
  - Otherwise: PENDING (queue for later)
  - Emit `task.routed` event
- Storage Abstraction:
  - Create `TaskStore` ABC (abstract base class)
  - Implement `PostgresTaskStore`, `RedisTaskStore` (queue), `S3TaskStore` (archive)
  - TaskService uses the TaskStore interface (not direct DB queries)
  - Config selects the storage backend
- Frontend Updates:
  - Task card shows: "⚙️ Running locally" / "☁️ Running in cloud" / "⏳ Queued"
  - Credits used per task
  - Quota usage indicator
  - Estimated credits before execution (preview)
Pros:
- Future-proof for offline operation
- Intelligent routing based on capability + quota
- Cost control (enforce limits, prevent runaway)
- Storage flexibility (swap backends without code changes)
- Audit trail (compliance-ready for GDPR/enterprise)
- Scales (distributed task queuing with Redis)
Cons:
- Larger scope (~6-8 weeks)
- Requires migration script for existing tasks
- More moving parts (router, quota service, multiple stores)
- Needs careful design of routing rules
Estimated Effort: L (6-8 weeks)
Option C: Event-Driven Task Execution with Worker Pool
Approach:
- Execution Model: Tasks → Event Bus → Worker Pool
- Workers: Python backend, Tauri instances, or external job servers
- Job Queue: Redis-backed queue (Celery-like) with priorities
- Location Tracking: Worker reports location after claiming task
- Credits: Per-worker billing (local cheaper, remote standard rate)
Pros:
- Highly scalable (multiple workers)
- Natural async/parallel execution
- Easy to add new worker types (GPU servers, etc.)
- Distributed tracing built-in
Cons:
- Complex (Celery learning curve)
- Operational overhead (worker health checks, dead letters, etc.)
- Overkill for Phase 3e (MVP doesn't need distributed workers)
- High effort (~10-12 weeks)
Estimated Effort: XL (10-12 weeks)
Option D: Lightweight — Storage Abstraction Only (No Router/Credits)
Approach:
- Add `execution_location` field to tasks
- Implement the TaskStore ABC with PostgreSQL + Redis queue support
- Frontend shows location
- No router, no credits (can add later)
Pros:
- Medium effort (~3-4 weeks)
- Unblocks future credit system
- Gives storage flexibility without complexity
Cons:
- Routing still manual
- No cost control
- Incomplete user-facing feature (location tracking without credits feels incomplete)
Estimated Effort: M (3-4 weeks)
3. Decision (APPROVED)
Chosen: Option B (Task Router + Credits System + Storage Abstraction)
Approved Design Details:
3.1 Execution Location Routing (Configurable, User-Driven)
- Task specifies `execution_preference`: "local" | "remote" | "auto" | "cost_optimized" | "performance_optimized"
- Router evaluates actual capability + preference + quota:
  - `local` → Force LOCAL if device is Tauri, error otherwise
  - `remote` → Always REMOTE (useful for high-security tasks)
  - `auto` → Default; router decides based on capability + efficiency
  - `cost_optimized` → Prefer LOCAL (free) over REMOTE (paid) if capable
  - `performance_optimized` → Prefer REMOTE (cloud resources) over LOCAL (device resources)
- Router returns: execution location + estimated cost
- User sees decision BEFORE task executes
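The routing rules above can be sketched as a pure decision function. This is a minimal illustration, not the actual Morphee TaskRouter; names like `RouteDecision` and the flag parameters are assumptions for the example.

```python
from dataclasses import dataclass
from enum import Enum


class Location(Enum):
    LOCAL = "local"
    REMOTE = "remote"
    PENDING = "pending"


@dataclass
class RouteDecision:
    location: Location
    estimated_credits: float


def route(preference: str, is_tauri: bool, local_capable: bool,
          quota_available: bool, base_credits: float) -> RouteDecision:
    """Apply the 3.1 rules: preference first, then capability, then quota."""
    if preference == "local":
        if not (is_tauri and local_capable):
            raise ValueError("local execution requested but device not capable")
        return RouteDecision(Location.LOCAL, 0.0)  # local runs are free
    if preference == "remote":
        if not quota_available:
            return RouteDecision(Location.PENDING, base_credits)
        return RouteDecision(Location.REMOTE, base_credits)
    # auto / cost_optimized: prefer LOCAL whenever the device can run the task
    if preference in ("auto", "cost_optimized") and is_tauri and local_capable:
        return RouteDecision(Location.LOCAL, 0.0)
    # performance_optimized (or no local capability): REMOTE if quota allows
    if quota_available:
        return RouteDecision(Location.REMOTE, base_credits)
    # quota exhausted: fall back to LOCAL if possible, else queue as PENDING
    if is_tauri and local_capable:
        return RouteDecision(Location.LOCAL, 0.0)
    return RouteDecision(Location.PENDING, base_credits)
```

Because the function returns both location and estimated cost, the frontend can show the decision and price before execution, as required above.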
3.2 Credit Model (Backend-Driven, Action + Token Multiplier)
- Base credit per action: defined in `IntegrationDefinition`:

  ```python
  # Example
  {
      "gmail": {"send": 1.0, "read": 0.5},
      "llm": {"chat": 0.01, "embed": 0.005},  # multiplied by token count
      "local_embedding": 0,  # local = free
  }
  ```

- LLM factor: `credits_used = base_credit * (input_tokens + output_tokens) / 1000`
  - E.g., a 1000-token chat with base=0.01 → 0.01 credits
- Execution location discount:
  - LOCAL: 0% surcharge (free if action supports it)
  - REMOTE: 1x base cost (standard)
  - REMOTE + peak hours: 1.2x base cost (future: time-based pricing)
- Backend decides cost via a config file per integration
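Putting the pieces of 3.2 together, the cost calculation could look like the sketch below. The cost table mirrors the `IntegrationDefinition` example; in practice these values come from per-integration config, not code, and the function name is illustrative.

```python
# Hypothetical cost table mirroring the IntegrationDefinition example;
# real values live in per-integration config files.
BASE_CREDITS = {
    ("gmail", "send"): 1.0,
    ("gmail", "read"): 0.5,
    ("llm", "chat"): 0.01,
    ("llm", "embed"): 0.005,
    ("local_embedding", "embed"): 0.0,
}

PEAK_SURCHARGE = 1.2  # future time-based pricing


def credits_used(integration: str, action: str, *, location: str,
                 input_tokens: int = 0, output_tokens: int = 0,
                 peak: bool = False) -> float:
    base = BASE_CREDITS[(integration, action)]
    if integration == "llm":
        # token multiplier: credits = base * total_tokens / 1000
        base = base * (input_tokens + output_tokens) / 1000
    if location == "local":
        return 0.0  # LOCAL execution is free
    return base * (PEAK_SURCHARGE if peak else 1.0)
```

For example, a 1000-token remote LLM chat costs 0.01 credits, and a Gmail send during peak hours costs 1.2 credits.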
3.3 Quota Enforcement (Two-Tier: Monthly Hard + Weekly Soft)
- Hard limit: Monthly quota cap (e.g., 1000 credits/month)
- Tasks blocked if monthly quota exceeded
- Reset every month on billing cycle start
- Soft limit: Weekly advisory quota (e.g., 250 credits/week)
- Tasks allowed to proceed but trigger warning
- User can override with one-click confirmation ("Use 50 extra credits?")
- Goal: help users pace themselves without hard brick wall
- Per-space quotas (not per-user)
- Admin can override quota for urgent tasks
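The two-tier check above reduces to a small amount of logic per task submission. A minimal sketch, assuming a `SpaceQuota` record shaped like the `space_quotas` table; the three-state return value and flag names are illustrative.

```python
from dataclasses import dataclass


@dataclass
class SpaceQuota:
    monthly_limit: float   # hard cap, e.g. 1000 credits/month
    weekly_limit: float    # advisory, e.g. 250 credits/week
    monthly_used: float
    weekly_used: float


def check_quota(q: SpaceQuota, cost: float, *, user_confirmed: bool = False,
                admin_override: bool = False) -> str:
    """Return 'allow', 'warn' (soft weekly limit), or 'block' (hard monthly limit)."""
    if admin_override:
        return "allow"  # admin bypass for urgent tasks
    if q.monthly_used + cost > q.monthly_limit:
        return "block"  # hard limit: task rejected
    if q.weekly_used + cost > q.weekly_limit and not user_confirmed:
        return "warn"   # soft limit: proceed only after one-click confirmation
    return "allow"
```

The "warn" state maps to the one-click override flow ("Use 50 extra credits?"), while "block" maps to the hard monthly rejection.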
3.4 Storage Abstraction (MVP: PostgreSQL, Future: Redis/S3)
- Design the `TaskStore` ABC now
- Implement `PostgresTaskStore` fully for MVP
- Future: RedisTaskStore (queue), S3TaskStore (archive) without code changes
- Each backend is swappable via config
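A sketch of what the `TaskStore` ABC could look like, using Python's `abc.ABC` as referenced in Section 7. The method set and the in-memory stand-in (in place of `PostgresTaskStore`) are assumptions for illustration, not the final interface.

```python
from abc import ABC, abstractmethod
from typing import Any, Optional


class TaskStore(ABC):
    """Storage-agnostic task persistence; concrete backend chosen via config."""

    @abstractmethod
    def save(self, task: dict[str, Any]) -> str:
        """Persist a task and return its id."""

    @abstractmethod
    def get(self, task_id: str) -> Optional[dict[str, Any]]:
        """Fetch a task by id, or None if absent."""

    @abstractmethod
    def list_pending(self) -> list[dict[str, Any]]:
        """Return tasks awaiting execution (the queue surface Redis would back)."""


class InMemoryTaskStore(TaskStore):
    """Toy stand-in showing the interface; PostgresTaskStore would implement
    the same methods over SQLAlchemy."""

    def __init__(self) -> None:
        self._tasks: dict[str, dict[str, Any]] = {}

    def save(self, task: dict[str, Any]) -> str:
        self._tasks[task["id"]] = task
        return task["id"]

    def get(self, task_id: str) -> Optional[dict[str, Any]]:
        return self._tasks.get(task_id)

    def list_pending(self) -> list[dict[str, Any]]:
        return [t for t in self._tasks.values() if t.get("status") == "pending"]
```

Because TaskService depends only on `TaskStore`, swapping PostgreSQL for Redis or S3 later is a config change plus a new subclass, with no changes to calling code.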
Reasoning:
- Aligns with Morphee's offline-first vision (Phase 3d M3)
- Provides user-visible value (quota control, cost transparency, execution choice)
- Future-proof (storage abstraction enables Redis queue, S3 archive, etc.)
- Moderate scope (6-8 weeks confirmed acceptable)
- Sets up for Phase 3m (crypto marketplace) where credits → real value
- Flexible: users can optimize for cost OR performance based on needs
Trade-offs we're accepting:
- Larger upfront effort vs. Option A (6-8 weeks vs. 2-3 weeks)
- Complexity added (router, quota service, multiple stores)
- Requires migration script for existing tasks
If we do Option B, when would we do Option C (distributed workers)? — Later (Phase 4+), when we need to scale beyond a single backend instance.
4. Implementation Plan (Option B)
| Step | Description | Effort | Dependencies |
|---|---|---|---|
| 4.1 | Design TaskStore ABC + PostgresTaskStore | S | None |
| 4.2 | Create DB migrations (execution_location, task_credits, space_quotas) | S | None |
| 4.3 | Implement TaskRouter service (decision logic) | M | 4.1 |
| 4.4 | Implement TaskCreditsService (track, enforce quotas) | M | 4.2 |
| 4.5 | Integrate router/credits into TaskProcessor | M | 4.3, 4.4 |
| 4.6 | Refactor TaskService to use TaskStore ABC | M | 4.1 |
| 4.7 | Add TaskStore-backed Redis queue support | M | 4.1, 4.6 |
| 4.8 | Frontend: TaskCard location display | S | 4.5 |
| 4.9 | Frontend: Credits/quota indicators | S | 4.4 |
| 4.10 | Migration script: backfill execution_location | S | 4.2 |
| 4.11 | Integration tests (router, credits, stores) | M | 4.3-4.7 |
| 4.12 | E2E tests (task execution, quota enforcement) | M | 4.8-4.9 |
| 4.13 | Documentation (architecture, quotas, examples) | S | All |
Total: ~8 weeks (7 M-sized + 6 S-sized steps, with some parallelization)
5. Key Questions & Decisions (APPROVED ANSWERS)
Q1: What determines "local" vs. "remote"?
- A (APPROVED): Task specifies `execution_preference` (local/remote/auto/cost_optimized/performance_optimized)
  - `local`: Force LOCAL if device is Tauri, error otherwise
  - `remote`: Always REMOTE
  - `auto` (default): Router decides based on capability + efficiency
  - `cost_optimized`: Prefer LOCAL (free) over REMOTE (paid) if capable
  - `performance_optimized`: Prefer REMOTE (cloud) over LOCAL (device) for speed
  - User sees cost estimate BEFORE execution
Q2: Credit model — how is it determined?
- A (APPROVED): Backend-driven, per-action with LLM token multiplier
- Base credit per action defined in IntegrationDefinition (config file)
- LLM: `credits = base * (input_tokens + output_tokens) / 1000` (e.g., 0.01 credits per 1000 tokens for chat)
- Local tasks: free (0 credits) to incentivize offline use
- REMOTE surcharge: 1x base (standard), 1.2x during peak hours (future)
- Example: Gmail send=1 credit, LLM chat=0.01 per 1000 tokens, local embedding=0
Q3: What if quota exceeded mid-task?
- A (APPROVED):
  - Hard monthly limit: Task BLOCKED with `blocked_reason = "monthly_quota_exceeded"`
  - Soft weekly limit: Task proceeds with warning, user can confirm override
  - No partial rollback (credits charged upfront based on estimate)
Q4: Do we enforce quotas or just track?
- A (APPROVED): ENFORCE both hard and soft limits
- Hard monthly: rejects execution
- Soft weekly: allows but warns
- Admin override available for urgent tasks
- Compliance-ready for B2B (teams, companies have budgets)
Q5: Storage abstraction — do we actually need RedisTaskStore now?
- A (APPROVED): Not now. Design ABC for it, implement PostgresTaskStore only (MVP)
- RedisTaskStore (queue) + S3TaskStore (archive) in Phase 4 without code changes to TaskService
Q6: How do we handle task retry and credits?
- A: Original task_id charged credits once; retries don't re-charge
- Failed → Pending → Retry = no additional credit charge (same task, same debit)
- Re-run (manual execution, new task_id) = new task, new credits charged
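The charge-once rule from Q6 amounts to making the debit idempotent per `task_id`. A minimal sketch; `CreditLedger` is an illustrative name, and the real implementation would write to the `task_credits` table rather than a dict.

```python
class CreditLedger:
    """Charge each task_id at most once; retries are free, re-runs
    (which get a new task_id) are charged again."""

    def __init__(self) -> None:
        self._charged: dict[str, float] = {}

    def charge(self, task_id: str, amount: float) -> float:
        """Return the amount actually debited (0.0 on a retry)."""
        if task_id in self._charged:
            return 0.0  # same task, same debit: no additional charge
        self._charged[task_id] = amount
        return amount
```

A failed task that moves through Pending back to Retry keeps its `task_id` and is not re-charged; a manual re-run creates a new `task_id` and pays again.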
Q7: Should we track credits per user or per space?
- A: Per SPACE (not per user)
- Rationale: spaces are the billing unit (family has 1 quota, school has 1 quota)
- Sub-spaces inherit parent quota by default but can override per sub-space
Q8: What about historical reports and predictions?
- A: Deferred to Phase 3e.5+ (UI for quota management, usage reports, predictions)
6. Open Items / Deferred
- Crypto integration (Phase 3m) — how do credits convert to MorphCoin/payments?
- User-facing quota UI — settings page, quota management (Phase 3e.5+)
- Quota reset schedules — monthly/weekly/custom
- Multi-tenant enterprise quotas — shared vs. per-seat
- Prediction API — estimate credits before executing task
- Alert system — notify when quota approaching 80%
- Historical reports — usage over time (audit/billing)
7. References
- Existing: `backend/tasks/models.py`, `backend/tasks/service.py`, `backend/tasks/processor.py`
- Storage Pattern: Similar to Python's `abc.ABC` + SQLAlchemy
- Credits Model: Inspired by OpenAI's token counting, Stripe's credit system
- Router Logic: Similar to Kubernetes scheduler (request-based dispatch)
- Phase 3d M3: docs/features/2026-02-12-mobile-ios-android.md (local execution baseline)
- Phase 3m: MEMORY.md (crypto marketplace reference)
8. Recommended Next Steps
- Get approval on Option B — Does the scope and timeline work?
- Clarify routing rules — Which tasks go local vs. remote by default?
- Define credit costs — Per-interface/action credit amounts
- Timeline alignment — Does 6-8 weeks fit the current sprint?
- Begin Step 4.1 — TaskStore ABC design (most critical dependency)
Last Updated: 2026-02-20