Morphee Deployment Guide
This guide covers deploying Morphee to production environments.
Status Legend:
- ✅ Implemented — Feature/infrastructure exists and is ready to use
- 📋 Planned — Documentation for future implementation (files/infrastructure not yet created)
Prerequisites
- Docker and Docker Compose
- PostgreSQL 15+
- Redis 7+
- SSL/TLS certificates (for HTTPS)
- Domain name (optional but recommended)
- At least 2GB RAM, 2 CPU cores
Docker Image Versions (as specified in Dockerfiles and docker-compose.yml):
- Node.js: 22-alpine (frontend build stage)
- Nginx: 1.27-alpine (frontend serving)
- Supabase GoTrue: v2.186.0 (authentication service)
Production Architecture
┌─────────────┐
│ Nginx │
│ (Reverse │
│ Proxy) │
└──────┬──────┘
│
┌─────────────────┼─────────────────┐
│ │
┌────▼─────┐ ┌─────▼──────┐
│ FastAPI │ │ Tauri │
│ Backend │ │ Desktop App│
│ (8000) │ │(Vite+React)│
└────┬─────┘ └────────────┘
│
┌────┼─────────────┐
│ │ │
┌─────▼──────┐ ┌───▼────────┐ ┌───────────┐
│ PostgreSQL │ │ Redis │ │ Backups │
│ (asyncpg) │ │ (Pub/Sub) │ │ │
└────────────┘ └────────────┘ └───────────┘
See also: Coolify Deployment Guide for deploying with Coolify, V1.0 Deployment Tasklist for release checklist, Runbook for operations manual.
Deployment Methods
Option 1: Docker Compose (Recommended for Small-Medium Scale)
1. Clone and Configure
# Clone repository
git clone https://github.com/your-org/morphee-beta.git
cd morphee-beta
# Create production environment file
cp .env.example .env.prod
2. Configure Production Environment
Edit .env.prod:
# Environment
NODE_ENV=production
# Database (use external managed database recommended)
DATABASE_URL=postgresql://morphee:STRONG_PASSWORD@postgres:5432/morphee
# Redis (use external managed Redis recommended)
REDIS_URL=redis://redis:6379/0
# Security
JWT_SECRET=your-very-strong-random-secret-key-here-min-32-chars
# CORS (set to your frontend domain)
CORS_ORIGINS=["https://yourdomain.com"]
# Logging
LOG_LEVEL=INFO
# Timeouts
TASK_TIMEOUT_SECONDS=300
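JWT_SECRET must be a strong random value of at least 32 characters. One way to generate it is a short stdlib-only Python sketch (the variable name matches the .env.prod key above; the byte count is just a sensible default):

```python
# Sketch: generate a strong random secret for .env.prod (stdlib only).
import secrets

def make_secret(nbytes: int = 48) -> str:
    """URL-safe random string; 48 bytes yields 64 characters,
    comfortably above the 32-character minimum."""
    return secrets.token_urlsafe(nbytes)

print(f"JWT_SECRET={make_secret()}")
```

Never reuse the development secret in production, and rotate it if it ever leaks.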
3. Create Production Docker Compose
⚠️ Status: Planned Template — The file docker-compose.prod.yml does not yet exist in the repository. This section provides planning documentation for future production deployment. For current deployment, use docker-compose.dev.yml as a starting point and adapt it for your production needs.
Create docker-compose.prod.yml:
version: '3.9'
services:
backend:
build:
context: ./backend
dockerfile: ../Dockerfile.backend
args:
ENVIRONMENT: production
container_name: morphee-backend
restart: unless-stopped
ports:
- "127.0.0.1:8000:8000"
env_file:
- .env.prod
environment:
- PYTHONUNBUFFERED=1
volumes:
- ./logs:/app/logs
depends_on:
- postgres
- redis
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
networks:
- morphee-network
postgres:
image: postgres:15-alpine
container_name: morphee-postgres
restart: unless-stopped
ports:
- "127.0.0.1:5432:5432"
environment:
POSTGRES_USER: morphee
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: morphee
POSTGRES_INITDB_ARGS: "-E UTF8"
volumes:
- postgres-data:/var/lib/postgresql/data
- ./supabase/migrations:/docker-entrypoint-initdb.d:ro  # applied only on first initialization of the data volume
healthcheck:
test: ["CMD-SHELL", "pg_isready -U morphee"]
interval: 10s
timeout: 5s
retries: 5
networks:
- morphee-network
redis:
image: redis:7-alpine
container_name: morphee-redis
restart: unless-stopped
ports:
- "127.0.0.1:6379:6379"
command: redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru
volumes:
- redis-data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
networks:
- morphee-network
nginx:
image: nginx:1.27-alpine
container_name: morphee-nginx
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
- nginx-logs:/var/log/nginx
depends_on:
- backend
networks:
- morphee-network
volumes:
postgres-data:
driver: local
redis-data:
driver: local
nginx-logs:
driver: local
networks:
morphee-network:
driver: bridge
4. Configure Nginx
Create nginx/nginx.conf:
events {
worker_connections 1024;
}
http {
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
# Logging
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Rate limiting
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_status 429;
# Backend API
upstream backend {
server backend:8000;
}
# HTTP to HTTPS redirect
server {
listen 80;
server_name yourdomain.com www.yourdomain.com;
return 301 https://$host$request_uri;
}
# HTTPS server
server {
listen 443 ssl;
http2 on;
server_name yourdomain.com www.yourdomain.com;
# SSL configuration
ssl_certificate /etc/nginx/ssl/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# API endpoints
location /api/ {
limit_req zone=api_limit burst=20 nodelay;
proxy_pass http://backend/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# WebSocket endpoint
location /ws {
proxy_pass http://backend/ws;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# WebSocket timeouts
proxy_connect_timeout 7d;
proxy_send_timeout 7d;
proxy_read_timeout 7d;
}
# Health check endpoint
location /health {
proxy_pass http://backend/health;
access_log off;
}
# Frontend (Tauri desktop app connects directly to API)
# Static web fallback if needed
location / {
root /usr/share/nginx/html;
index index.html;
try_files $uri $uri/ /index.html;
}
}
}
5. Deploy
# Build and start services
docker compose -f docker-compose.prod.yml build
docker compose -f docker-compose.prod.yml up -d
# Check services are running
docker compose -f docker-compose.prod.yml ps
# View logs
docker compose -f docker-compose.prod.yml logs -f
# Test endpoints
curl https://yourdomain.com/api/health
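Right after `up -d`, the backend may still be in its 40-second start period, so a single curl can fail spuriously. A small polling helper avoids that (a sketch; the URL is an example, and the fetch function is injectable so the retry logic can be exercised without a live server):

```python
# Sketch: poll the /health endpoint until the backend reports healthy.
import json
import time
import urllib.request

def wait_healthy(url, attempts=10, delay=2.0, fetch=None):
    """Return True once /health responds {"status": "healthy"}."""
    fetch = fetch or (lambda u: urllib.request.urlopen(u, timeout=5).read())
    for i in range(attempts):
        try:
            body = json.loads(fetch(url))
            if body.get("status") == "healthy":
                return True
        except Exception:
            pass  # connection refused / bad JSON while starting up
        if i < attempts - 1:
            time.sleep(delay)
    return False
```

Usage: `wait_healthy("https://yourdomain.com/api/health")` — suitable for a post-deploy CI step.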
Option 2: Kubernetes (Recommended for Large Scale)
⚠️ Status: Planned Infrastructure — The k8s/ directory and Kubernetes manifests referenced in this section do not yet exist in the repository. This is planning documentation for future Kubernetes deployment at scale. The manifests provided here serve as templates for when Kubernetes deployment is needed.
1. Create Kubernetes Manifests
Create k8s/namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: morphee
Create k8s/configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
name: morphee-config
namespace: morphee
data:
ENVIRONMENT: "production"
LOG_LEVEL: "INFO"
LOG_FORMAT: "json"
CORS_ORIGINS: '["https://yourdomain.com"]'
Create k8s/secrets.yaml:
apiVersion: v1
kind: Secret
metadata:
name: morphee-secrets
namespace: morphee
type: Opaque
stringData:
DATABASE_URL: "postgresql://user:password@postgres:5432/morphee"
REDIS_URL: "redis://redis:6379/0"
JWT_SECRET: "your-secret-key-here"
POSTGRES_PASSWORD: "your-postgres-password"
Create k8s/backend-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: morphee-backend
namespace: morphee
spec:
replicas: 3
selector:
matchLabels:
app: morphee-backend
template:
metadata:
labels:
app: morphee-backend
spec:
containers:
- name: backend
image: your-registry/morphee-backend:latest
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: morphee-config
- secretRef:
name: morphee-secrets
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "1Gi"
cpu: "500m"
livenessProbe:
httpGet:
path: /health
port: 8000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /health
port: 8000
initialDelaySeconds: 5
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: morphee-backend
namespace: morphee
spec:
selector:
app: morphee-backend
ports:
- protocol: TCP
port: 8000
targetPort: 8000
type: ClusterIP
Create k8s/ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: morphee-ingress
namespace: morphee
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/rate-limit: "100"
spec:
ingressClassName: nginx
tls:
- hosts:
- yourdomain.com
secretName: morphee-tls
rules:
- host: yourdomain.com
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: morphee-backend
port:
number: 8000
- path: /ws
pathType: Prefix
backend:
service:
name: morphee-backend
port:
number: 8000
2. Deploy to Kubernetes
# Apply manifests
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secrets.yaml
kubectl apply -f k8s/backend-deployment.yaml
kubectl apply -f k8s/ingress.yaml
# Check deployment
kubectl get pods -n morphee
kubectl get services -n morphee
kubectl get ingress -n morphee
# View logs
kubectl logs -f -n morphee -l app=morphee-backend
Institutional Website (Astro)
The institutional website (www/) is a static site built with Astro 5 and Tailwind CSS v4. It includes the landing page, documentation index, privacy policy, terms of service, about page, and contact page.
Build & Deploy
Prerequisites:
- Node.js 22+ and npm
- The www/ directory contains an Astro project with package.json and astro.config.mjs
1. Install Dependencies
cd www/
npm install
2. Build for Production
npm run build
This generates static files in www/dist/:
- /index.html — Landing page
- /privacy/index.html — Privacy Policy
- /terms/index.html — Terms of Service
- /about/index.html — About page
- /contact/index.html — Contact page
- /docs/index.html — Documentation index
- sitemap-index.xml — Auto-generated sitemap
3. Deploy Options
Option A: Serve via Nginx (same server as backend)
Add to your nginx config:
server {
listen 80;
server_name www.morphee.app;
root /var/www/morphee-www/dist;
index index.html;
location / {
try_files $uri $uri/ =404;
}
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
}
Deploy:
# On your server
cd /var/www/
git clone https://github.com/morphee-app/morphee-beta.git morphee-www
cd morphee-www/www
npm install
npm run build
# Reload nginx
sudo nginx -t && sudo nginx -s reload
Option B: Deploy to Static Hosting (Netlify, Vercel, Cloudflare Pages)
# Build command
cd www && npm run build
# Publish directory
www/dist
# Or use Netlify CLI
cd www/
npm install -g netlify-cli
netlify deploy --prod --dir=dist
Option C: Deploy to CDN (AWS S3 + CloudFront, Google Cloud Storage)
# Build
cd www/
npm run build
# AWS S3 example
aws s3 sync dist/ s3://morphee-www-bucket/ --delete
aws cloudfront create-invalidation --distribution-id YOUR_DIST_ID --paths "/*"
Development
cd www/
npm run dev
Visit http://localhost:4321 (Astro default port).
Project Structure
www/
├── astro.config.mjs # Astro config (Tailwind, sitemap)
├── package.json # Dependencies (Astro 5, Tailwind CSS 4)
├── tsconfig.json # TypeScript config
├── src/
│ ├── pages/ # Route pages (index.astro, privacy.astro, etc.)
│ ├── layouts/ # BaseLayout.astro, PageLayout.astro
│ ├── components/ # Header.astro, Footer.astro, etc.
│ ├── styles/ # global.css (Tailwind v4 imports)
│ └── content.config.ts # Content collections (blog, changelog)
├── public/ # Static assets (favicon, images)
└── dist/ # Build output (generated)
CI/CD Integration
GitHub Actions workflow example (.github/workflows/www-deploy.yml):
name: Deploy Institutional Website
on:
push:
branches: [main]
paths:
- 'www/**'
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: '22'
cache: 'npm'
cache-dependency-path: www/package-lock.json
- name: Install & Build
run: |
cd www/
npm ci
npm run build
- name: Deploy to Netlify
run: |
npm install -g netlify-cli
netlify deploy --prod --dir=www/dist
env:
NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
E2E Testing
For end-to-end testing of the institutional website:
cd www/
npm run build
npm run preview # Starts preview server on port 4322
# In another terminal, run E2E tests
# (Add Playwright or Cypress tests here)
Roadmap: add E2E smoke tests for the institutional website (all pages load, forms submit, links work).
Database Setup
Production Database Recommendations
1. Use Managed Database Service (Recommended)
- AWS RDS PostgreSQL
- Google Cloud SQL
- Azure Database for PostgreSQL
- Digital Ocean Managed Databases
2. Configuration
-- Set appropriate connection limits
ALTER SYSTEM SET max_connections = 100;
-- Enable query performance tracking
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
-- Set work memory
ALTER SYSTEM SET work_mem = '16MB';
3. Run Migrations
# Connect to database
psql $DATABASE_URL
# Run migration files in order
\i supabase/migrations/001_initial_schema.sql
\i supabase/migrations/002_memory_vectors.sql
\i supabase/migrations/003_schedules.sql
\i supabase/migrations/004_notifications.sql
\i supabase/migrations/005_oauth_connections.sql
\i supabase/migrations/006_skills.sql
\i supabase/migrations/007_interface_configs.sql
\i supabase/migrations/008_push_tokens.sql
\i supabase/migrations/009_phase3e2.sql
\i supabase/migrations/010_phase3e3.sql
\i supabase/migrations/011_gdpr_compliance.sql
\i supabase/migrations/012_performance_indexes.sql
\i supabase/migrations/013_user_language.sql
\i supabase/migrations/014_age_verification.sql
\i supabase/migrations/015_add_message_pinning_and_context_window.sql
\i supabase/migrations/016_generic_acl_system.sql
\i supabase/migrations/017_trigram_search_indexes.sql
4. Generate Encryption Key & Encrypt Existing Data
# Generate a Fernet key and add to .env.prod as ENCRYPTION_KEY
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
# Dry run — see what would be encrypted (no changes made)
docker compose exec backend python scripts/encrypt_existing_data.py --dry-run
# Encrypt existing plaintext data (messages + memory_vectors)
docker compose exec backend python scripts/encrypt_existing_data.py
# IMPORTANT: Back up the encryption key securely.
# Losing this key = ALL encrypted data is permanently unrecoverable.
5. Create Indexes
CREATE INDEX idx_tasks_status ON tasks(status);
CREATE INDEX idx_tasks_created_at ON tasks(created_at DESC);
CREATE INDEX idx_task_logs_task_id ON task_logs(task_id);
CREATE INDEX idx_task_logs_created_at ON task_logs(created_at DESC);
Environment Variables Reference
Complete list of environment variables for production deployment. See backend/config.py for authoritative source.
Naming note: Some variables have aliases across different services and compose files:
- SMTP Password: Backend uses SMTP_PASSWORD; GoTrue uses GOTRUE_SMTP_PASS (mapped in compose as SMTP_PASS)
- SMTP From Address: Backend uses FROM_EMAIL; GoTrue/Grafana use SMTP_FROM or GOTRUE_SMTP_ADMIN_EMAIL
- GoTrue Redirect URIs: Dev compose uses GOTRUE_URI_ALLOW_LIST; Coolify/prod compose uses GOTRUE_ALLOWED_REDIRECTS (both are valid; GoTrue accepts either)
The canonical names in backend/config.py are SMTP_PASSWORD and FROM_EMAIL. Compose files map these to service-specific names as needed.
Core Application
| Variable | Required | Default | Description |
|---|---|---|---|
NODE_ENV | No | production | Environment mode (development, production) |
DEBUG | No | false | Enable debug mode |
LOG_LEVEL | No | INFO | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
LOG_FORMAT | No | text | Log output format (text for development, json for production structured logging) |
AGENT_HOST | No | 0.0.0.0 | Server bind address |
AGENT_PORT | No | 8000 | Server port |
Database & Storage
| Variable | Required | Default | Description |
|---|---|---|---|
DATABASE_URL | Yes | - | PostgreSQL connection URL |
DB_POOL_MIN_SIZE | No | 5 | Minimum database connection pool size |
DB_POOL_MAX_SIZE | No | 20 | Maximum database connection pool size |
DB_POOL_TIMEOUT | No | 10.0 | Database connection pool timeout (seconds) |
KNOWLEDGE_BASE_PATH | No | /knowledge | Path for knowledge base storage |
UPLOAD_PATH | No | /uploads | Path for file uploads |
MEMORY_GIT_PATH | No | /data/memory | Path for Git-backed memory storage |
FILESYSTEM_PATH | No | /data/files | Path for sandboxed file operations |
Authentication & Security
| Variable | Required | Default | Description |
|---|---|---|---|
SUPABASE_AUTH_URL | Yes | - | Supabase Auth (GoTrue) URL |
SUPABASE_JWT_SECRET | Yes | - | Supabase JWT verification secret |
JWT_SECRET | Yes | - | JWT signing secret (min 32 characters) |
WEBHOOK_SECRET | No | - | Secret for webhook validation |
ENCRYPTION_KEY | Yes (prod) | - | Fernet encryption key for data at rest (chat messages, memory vectors, Git files). Generate: python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())". Losing this key = all encrypted data unrecoverable. |
SKIP_AUTH | No | false | Skip authentication (development only, never use in production) |
TRUSTED_PROXY_IPS | No | "" | Comma-separated IPs trusted for X-Forwarded-For |
LLM Configuration
| Variable | Required | Default | Description |
|---|---|---|---|
DEFAULT_LLM_PROVIDER | No | anthropic | LLM provider (openai, anthropic, litellm) |
DEFAULT_LLM_MODEL | No | claude-sonnet-4-20250514 | Default LLM model identifier |
OPENAI_API_KEY | Conditional | - | OpenAI API key (required if provider=openai) |
ANTHROPIC_API_KEY | Conditional | - | Anthropic API key (required if provider=anthropic) |
LITELLM_API_BASE | Conditional | - | LiteLLM proxy base URL (required if provider=litellm) |
EMBEDDING_PROVIDER | No | fastembed | Embedding provider (fastembed, openai) |
EMBEDDING_MODEL | No | - | Specific embedding model (provider-dependent) |
Redis & Event Bus
| Variable | Required | Default | Description |
|---|---|---|---|
REDIS_URL | Conditional | redis://localhost:6379/0 | Redis connection URL |
USE_REDIS | No | false | Enable Redis pub/sub for events |
REQUIRE_REDIS | No | false | Fail startup if Redis unavailable |
Task Processing
| Variable | Required | Default | Description |
|---|---|---|---|
MAX_CONCURRENT_TASKS | No | 10 | Maximum concurrent background tasks |
TASK_RETRY_MAX | No | 3 | Maximum task retry attempts |
TASK_TIMEOUT_SECONDS | No | 300 | Task execution timeout (seconds) |
Agent Orchestrator
| Variable | Required | Default | Description |
|---|---|---|---|
ORCHESTRATOR_MAX_TURNS | No | 5 | Maximum agent loop iterations (reduced from 10 per M-DOS-001 to limit LLM cost) |
APPROVAL_TIMEOUT_SECONDS | No | 120 | Tool approval timeout (seconds) |
DEFAULT_MONTHLY_TOKEN_LIMIT | No | 2000000 | Per-group monthly LLM token quota (~$6 of Claude Sonnet at current pricing) |
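The "~$6" figure for the default 2,000,000-token quota is a back-of-envelope calculation assuming roughly $3 per million input tokens for Claude Sonnet (an assumed rate — actual cost depends on the model and the input/output mix):

```python
# Back-of-envelope: per-group monthly cost ceiling implied by the token quota.
# The $3/M figure is an assumed input-token price, not an authoritative rate.
def monthly_cost_usd(token_limit: int, usd_per_million: float = 3.0) -> float:
    return token_limit / 1_000_000 * usd_per_million

print(monthly_cost_usd(2_000_000))  # 6.0
```

Re-run the estimate with your provider's current pricing before sizing quotas.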
Scheduler
| Variable | Required | Default | Description |
|---|---|---|---|
SCHEDULER_POLL_INTERVAL | No | 15 | Cron schedule polling interval (seconds) |
Email / SMTP
| Variable | Required | Default | Description |
|---|---|---|---|
SMTP_HOST | No | localhost | SMTP server hostname |
SMTP_PORT | No | 587 | SMTP server port |
SMTP_USER | No | - | SMTP authentication username |
SMTP_PASSWORD | No | - | SMTP authentication password |
FROM_EMAIL | No | noreply@morphee.app | Sender email address |
FROM_NAME | No | Morphee | Sender display name |
Google OAuth & APIs
| Variable | Required | Default | Description |
|---|---|---|---|
GOOGLE_CLIENT_ID | Conditional | - | Google OAuth client ID (for both Google API integrations and SSO) |
GOOGLE_CLIENT_SECRET | Conditional | - | Google OAuth client secret |
GOOGLE_REDIRECT_URI | No | http://localhost:8000/api/oauth/google/callback | OAuth redirect URI for Google API integrations |
GOOGLE_SSO_ENABLED | No | false | Enable Google SSO via GoTrue (set to true to activate) |
SSO Providers
| Variable | Required | Default | Description |
|---|---|---|---|
APPLE_SSO_ENABLED | No | false | Enable Apple Sign-In via GoTrue |
APPLE_CLIENT_ID | Conditional | - | Apple Services ID (required if Apple SSO enabled) |
APPLE_CLIENT_SECRET | Conditional | - | Apple JWT secret key (required if Apple SSO enabled) |
AZURE_SSO_ENABLED | No | false | Enable Microsoft/Azure AD SSO via GoTrue |
AZURE_CLIENT_ID | Conditional | - | Azure AD application (client) ID (required if Azure SSO enabled) |
AZURE_CLIENT_SECRET | Conditional | - | Azure AD client secret (required if Azure SSO enabled) |
Push Notifications
| Variable | Required | Default | Description |
|---|---|---|---|
APNS_KEY_ID | Conditional | - | Apple Push Notification Service key ID (iOS) |
APNS_TEAM_ID | Conditional | - | Apple Developer Team ID (iOS) |
APNS_KEY_PATH | Conditional | - | Path to APNs .p8 key file (iOS) |
APNS_BUNDLE_ID | No | app.morphee.mobile | iOS app bundle identifier |
FCM_PROJECT_ID | Conditional | - | Firebase Cloud Messaging project ID (Android) |
FCM_SERVICE_ACCOUNT_PATH | Conditional | - | Path to FCM service account JSON (Android) |
Frontend & CORS
| Variable | Required | Default | Description |
|---|---|---|---|
FRONTEND_URL | No | http://localhost:5173 | Frontend URL for OAuth redirects |
CORS_ORIGINS | No | http://localhost:3000,http://localhost:5173,tauri://localhost,http://tauri.localhost | Comma-separated allowed CORS origins |
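Note that CORS_ORIGINS appears in this guide in two shapes: a JSON list in .env.prod and the Kubernetes ConfigMap ('["https://yourdomain.com"]'), and a comma-separated string in this table. A tolerant parser could accept both — this is only an illustrative sketch; backend/config.py is the authoritative parser:

```python
# Sketch: normalize CORS_ORIGINS from either a JSON list or a
# comma-separated string. Illustrative only — not the backend's parser.
import json

def parse_cors_origins(raw: str) -> list[str]:
    raw = raw.strip()
    if raw.startswith("["):
        return [str(o) for o in json.loads(raw)]
    return [o.strip() for o in raw.split(",") if o.strip()]

print(parse_cors_origins('["https://yourdomain.com"]'))
print(parse_cors_origins("https://a.com, https://b.com"))
```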
WebSocket
| Variable | Required | Default | Description |
|---|---|---|---|
WEBSOCKET_HEARTBEAT_INTERVAL | No | 30 | WebSocket ping interval (seconds) |
Optional Observability
| Variable | Required | Default | Description |
|---|---|---|---|
LANGFUSE_PUBLIC_KEY | No | - | Langfuse public key (LLM cost tracking) |
LANGFUSE_SECRET_KEY | No | - | Langfuse secret key |
LANGFUSE_HOST | No | https://cloud.langfuse.com | Langfuse API endpoint |
Supabase GoTrue / SSO Configuration
These variables configure the GoTrue auth service in docker-compose files.
| Variable | Required | Default | Description |
|---|---|---|---|
GOTRUE_API_HOST | No | 0.0.0.0 | GoTrue listen address |
GOTRUE_API_PORT | No | 9999 | GoTrue listen port |
GOTRUE_DB_DRIVER | Yes | postgres | Database driver |
GOTRUE_DB_DATABASE_URL | Yes | - | PostgreSQL connection for GoTrue (auth schema) |
GOTRUE_SITE_URL | Yes | - | Frontend URL for email links and redirects |
GOTRUE_URI_ALLOW_LIST | No | - | Comma-separated list of allowed redirect URIs |
GOTRUE_JWT_SECRET | Yes | - | Must match SUPABASE_JWT_SECRET |
GOTRUE_JWT_EXP | No | 86400 | JWT expiration in seconds |
GOTRUE_JWT_DEFAULT_GROUP_NAME | No | authenticated | Default JWT group claim |
GOTRUE_MAILER_AUTOCONFIRM | No | false | Skip email confirmation (dev only) |
GOTRUE_DISABLE_SIGNUP | No | false | Disable new user registration |
GOTRUE_EXTERNAL_GOOGLE_ENABLED | No | false | Enable Google SSO (set via GOOGLE_SSO_ENABLED in compose) |
GOTRUE_EXTERNAL_APPLE_ENABLED | No | false | Enable Apple SSO (set via APPLE_SSO_ENABLED in compose) |
GOTRUE_EXTERNAL_AZURE_ENABLED | No | false | Enable Azure AD SSO (set via AZURE_SSO_ENABLED in compose) |
GOTRUE_SMTP_HOST | No | - | SMTP host for GoTrue emails |
GOTRUE_SMTP_PORT | No | - | SMTP port |
GOTRUE_SMTP_USER | No | - | SMTP username |
GOTRUE_SMTP_PASS | No | - | SMTP password |
GOTRUE_SMTP_ADMIN_EMAIL | No | - | Sender email for GoTrue |
Frontend / Vite Build Variables
These are set at build time for the frontend container.
| Variable | Required | Default | Description |
|---|---|---|---|
VITE_API_TARGET | Yes (dev) | http://backend:8000 | Backend API URL (dev proxy target) |
VITE_WS_TARGET | Yes (dev) | ws://backend:8000 | WebSocket URL (dev proxy target) |
VITE_SUPABASE_URL | Yes | - | Supabase Auth URL for frontend |
VITE_SUPABASE_ANON_KEY | Yes | - | Supabase anonymous key for frontend |
VITE_CSP | No | - | Custom Content-Security-Policy header for the frontend |
VITE_CSP_CONNECT_SRC | No | - | Additional CSP connect-src origins (e.g., analytics, API domains) |
Database & Redis Credentials
These configure PostgreSQL and Redis in docker-compose.
| Variable | Required | Default | Description |
|---|---|---|---|
POSTGRES_USER | Yes | morphee | PostgreSQL username |
POSTGRES_PASSWORD | Yes | - | PostgreSQL password |
POSTGRES_DB | Yes | morphee | PostgreSQL database name |
REDIS_PASSWORD | Yes (prod) | - | Redis password for production. Required — REDIS_URL includes :${REDIS_PASSWORD}@ in prod compose. |
Monitoring Stack (docker-compose.monitoring.yml)
Optional — for Grafana + Prometheus monitoring.
| Variable | Required | Default | Description |
|---|---|---|---|
GRAFANA_ADMIN_USER | No | admin | Grafana admin username |
GRAFANA_PASSWORD | Yes | - | Grafana admin password |
GRAFANA_SECRET_KEY | Yes | - | Grafana cookie/session secret |
GRAFANA_DOMAIN | No | monitoring.morphee.app | Grafana public domain |
SMTP_ENABLED | No | false | Enable Grafana email alerts |
PostHog Analytics (docker-compose.posthog.yml)
Optional — self-hosted product analytics. Resource-intensive (4GB+ RAM for ClickHouse).
| Variable | Required | Default | Description |
|---|---|---|---|
POSTHOG_SECRET_KEY | Yes | - | Django secret key for PostHog |
POSTHOG_DOMAIN | No | analytics.morphee.app | PostHog public domain |
CLICKHOUSE_PASSWORD | Yes | - | ClickHouse password |
Note: PostHog also reuses POSTGRES_PASSWORD, SMTP_HOST, SMTP_PORT, SMTP_USER, and SMTP_PASS from the core config. Consider using the PostHog Cloud free tier instead of self-hosting.
Planned / Not Yet Implemented
These variables are documented for future use but not currently implemented in the backend code:
| Variable | Description | Status |
|---|---|---|
RATE_LIMIT_ENABLED | Enable API rate limiting | Planned — rate limiting is currently handled by Nginx/reverse proxy |
RATE_LIMIT_REQUESTS_PER_MINUTE | Max requests per minute per IP | Planned — rate limiting is currently handled by Nginx/reverse proxy |
API_TIMEOUT | Global API request timeout | Planned — timeouts are currently per-endpoint |
JWT_ALGORITHM | JWT signing algorithm | Not needed — GoTrue handles JWT signing, backend only verifies |
JWT_EXPIRATION_MINUTES | JWT token expiration time | Not needed — controlled by GoTrue via GOTRUE_JWT_EXP |
Note: If you need these features now, use Nginx rate limiting (see nginx.conf example above) and configure GoTrue's JWT settings directly.
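For context, an application-level RATE_LIMIT_* implementation would typically be a token bucket like the one Nginx's limit_req emulates. A minimal sketch (names and defaults are hypothetical — this is not the planned implementation; use the Nginx config shown earlier for now):

```python
# Sketch: token-bucket limiter illustrating what a future app-level
# RATE_LIMIT_* feature could look like. The clock is injectable for testing.
import time

class TokenBucket:
    def __init__(self, rate_per_min: float, burst: int, now=time.monotonic):
        self.rate = rate_per_min / 60.0   # refill rate in tokens/second
        self.capacity = float(burst)
        self.tokens = float(burst)        # start full: allow an initial burst
        self.now = now
        self.last = now()

    def allow(self) -> bool:
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller would respond 429
```

One bucket per client key (e.g. per IP) reproduces Nginx's `rate=10r/s burst=20` semantics at the application layer.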
Security Checklist
- Use strong, randomly generated JWT_SECRET (min 32 characters)
- Enable HTTPS with valid SSL certificates
- Configure CORS to only allow your frontend domain
- Use environment variables for all secrets
- Enable rate limiting
- Keep all dependencies updated
- Application-level encryption at rest (Fernet) for chat messages, memory vectors, and Git files
- Use managed databases with encryption at rest (disk-level)
- Enable database connection encryption (SSL/TLS)
- Set up firewall rules to restrict database access
- Implement logging and monitoring
- Regular security audits and penetration testing
- Set up automated backups
- Implement DDoS protection
- Use container image scanning
Monitoring & Logging
✅ Health Checks: Implemented
📋 Centralized Logging & Metrics: Planned templates
Health Checks
# Backend health
curl https://yourdomain.com/api/health
# Expected response:
# {"status": "healthy", "version": "2.0.0"}
Logging
Configure centralized logging:
# docker-compose.prod.yml - add logging driver
services:
backend:
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
Metrics (Optional - Prometheus)
Add Prometheus metrics endpoint to backend:
# backend/main.py
from prometheus_client import make_asgi_app
# Mount metrics endpoint
metrics_app = make_asgi_app()
app.mount("/metrics", metrics_app)
Backup Strategy
✅ Status: Implemented — Backup script at scripts/backup-morphee.sh.
What Gets Backed Up
| Data | Method | Location | Priority |
|---|---|---|---|
| PostgreSQL database | pg_dump (compressed) | morphee_db_TIMESTAMP.sql.gz | Critical |
| Git memory repos | tar archive | morphee_memory_TIMESTAMP.tar.gz | High |
| File storage | tar archive | morphee_files_TIMESTAMP.tar.gz | Medium |
| Redis snapshot | BGSAVE + copy | morphee_redis_TIMESTAMP.rdb | Low (ephemeral) |
Running Backups
# Manual backup
DATABASE_URL=postgresql://morphee:password@localhost:5432/morphee \
./scripts/backup-morphee.sh
# With S3 offsite upload
DATABASE_URL=postgresql://morphee:password@localhost:5432/morphee \
./scripts/backup-morphee.sh --s3-bucket my-backup-bucket
# Custom retention (default: 30 days)
./scripts/backup-morphee.sh --retention-days 90
Automated Daily Backups (cron)
# Add to crontab (runs daily at 2 AM). Note: a crontab entry must be a single line — backslash continuations are not supported.
0 2 * * * DATABASE_URL=postgresql://morphee:password@localhost:5432/morphee /opt/morphee/scripts/backup-morphee.sh >> /var/log/morphee-backup.log 2>&1
Restoring from Backup
# 1. Restore PostgreSQL database
gunzip -c morphee_db_20260214_020000.sql.gz | \
psql postgresql://morphee:password@localhost:5432/morphee
# 2. Restore Git memory repos
mkdir -p /data/coolify/morphee/memory
tar -xzf morphee_memory_20260214_020000.tar.gz -C /data/coolify/morphee/memory/
# 3. Restore file storage
mkdir -p /data/coolify/morphee/files
tar -xzf morphee_files_20260214_020000.tar.gz -C /data/coolify/morphee/files/
# 4. Restore Redis (optional — it regenerates from PostgreSQL)
docker cp morphee_redis_20260214_020000.rdb morphee-redis:/data/dump.rdb
docker restart morphee-redis
Backup Verification
Test restores periodically to ensure backups are valid:
# Verify database backup is valid SQL
gunzip -c morphee_db_*.sql.gz | head -20
# Verify archive integrity
tar -tzf morphee_memory_*.tar.gz > /dev/null && echo "OK"
# Check backup sizes (sudden drops may indicate issues)
ls -lh /data/backups/morphee/
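The same checks can be scripted for unattended verification — a stdlib-only sketch (paths are examples) that validates a gzipped dump by actually decompressing a chunk and a tar archive via tarfile:

```python
# Sketch: programmatic backup sanity checks (stdlib only).
import gzip
import tarfile

def gzip_ok(path: str) -> bool:
    """Decompress the first chunk; corruption raises an error."""
    try:
        with gzip.open(path, "rb") as f:
            f.read(1024)
        return True
    except (OSError, EOFError):
        return False

def tar_ok(path: str) -> bool:
    """tarfile.is_tarfile handles gz-compressed archives transparently."""
    return tarfile.is_tarfile(path)
```

Wire these into a nightly job that alerts when any backup file fails validation or shrinks unexpectedly.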
Offsite Storage
For production, configure S3-compatible offsite storage:
# Environment variables for S3 access
export AWS_ACCESS_KEY_ID=your-key
export AWS_SECRET_ACCESS_KEY=your-secret
export AWS_DEFAULT_REGION=eu-west-1
# Run with S3 upload
./scripts/backup-morphee.sh --s3-bucket morphee-backups
Backups are uploaded with STANDARD_IA storage class (infrequent access, lower cost). Files are organized by date: s3://bucket/morphee/YYYY-MM-DD/.
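Restore tooling can reconstruct the dated object keys rather than listing the bucket (a sketch; the prefix and filename are examples matching the layout above):

```python
# Sketch: build the dated S3 object key used by the backup layout
# s3://bucket/morphee/YYYY-MM-DD/<file>.
from datetime import date

def backup_key(filename: str, day: date, prefix: str = "morphee") -> str:
    return f"{prefix}/{day.isoformat()}/{filename}"

print(backup_key("morphee_db_20260214_020000.sql.gz", date(2026, 2, 14)))
# → morphee/2026-02-14/morphee_db_20260214_020000.sql.gz
```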
Scaling Considerations
📋 Status: Planned — Scaling strategies documented here are guidance for future growth. Current deployment uses docker-compose.dev.yml.
Horizontal Scaling
1. Backend API
- Add more backend replicas
- Use load balancer (Nginx, AWS ALB, GCP Load Balancer)
- Ensure stateless design (all state in Redis/PostgreSQL)
2. Database
- Use read replicas for read-heavy workloads
- Consider connection pooling (PgBouncer)
- Implement caching layer (Redis)
3. Redis
- Use Redis Cluster for high availability
- Consider Redis Sentinel for automatic failover
Vertical Scaling
- Increase CPU/memory based on monitoring
- Optimize database queries
- Add database indexes
- Tune PostgreSQL configuration
Maintenance
Updates
# Pull latest changes
git pull origin main
# Rebuild and restart
docker compose -f docker-compose.prod.yml build
docker compose -f docker-compose.prod.yml up -d
# Run new migrations if any (the compose file mounts ./supabase/migrations at /docker-entrypoint-initdb.d)
docker compose -f docker-compose.prod.yml exec postgres psql -U morphee -d morphee -f /docker-entrypoint-initdb.d/new_migration.sql
Zero-Downtime Deployment
1. Using Docker Compose
# Build new image
docker compose -f docker-compose.prod.yml build backend
# Rolling update
docker compose -f docker-compose.prod.yml up -d --no-deps --scale backend=2 backend
docker compose -f docker-compose.prod.yml up -d --no-deps --scale backend=1 backend
2. Using Kubernetes
# Update image
kubectl set image deployment/morphee-backend backend=your-registry/morphee-backend:new-version -n morphee
# Monitor rollout
kubectl rollout status deployment/morphee-backend -n morphee
Troubleshooting Production Issues
See troubleshooting.md for detailed troubleshooting guide.
Quick checks:
# Check all services
docker compose -f docker-compose.prod.yml ps
# View recent logs
docker compose -f docker-compose.prod.yml logs --tail=100 backend
# Check resource usage
docker stats
# Test database connection
docker compose -f docker-compose.prod.yml exec backend python -c "
import asyncio
from db.client import get_db
async def check():
db = get_db()
await db.initialize()
print('Database connection:', await db.health_check())
asyncio.run(check())
"
Cost Optimization
- Use managed services: Often cheaper than self-managing
- Right-size resources: Monitor and adjust based on actual usage
- Use spot/preemptible instances: For non-critical workloads
- Implement caching: Reduce database queries
- Archive old data: Move historical data to cheaper storage
Last Updated: February 14, 2026
For more information, see:
- README.md - Project overview
- architecture.md - System architecture
- troubleshooting.md - Troubleshooting guide
- testing.md - Testing guide