Morphee Deployment Guide

This guide covers deploying Morphee to production environments.

Status Legend:

  • ✅ Implemented — Feature/infrastructure exists and is ready to use
  • 📋 Planned — Documentation for future implementation (files/infrastructure not yet created)

Prerequisites

  • Docker and Docker Compose
  • PostgreSQL 15+
  • Redis 7+
  • SSL/TLS certificates (for HTTPS)
  • Domain name (optional but recommended)
  • At least 2GB RAM, 2 CPU cores

Docker Image Versions (as specified in Dockerfiles and docker-compose.yml):

  • Node.js: 22-alpine (frontend build stage)
  • Nginx: 1.27-alpine (frontend serving)
  • Supabase GoTrue: v2.186.0 (authentication service)

Production Architecture

              ┌─────────────┐
              │    Nginx    │
              │  (Reverse   │
              │   Proxy)    │
              └──────┬──────┘
                     │
   ┌─────────────────┴─────────────────┐
   │                                   │
┌──▼───────┐                    ┌──────▼─────┐
│ FastAPI  │                    │   Tauri    │
│ Backend  │                    │ Desktop App│
│  (8000)  │                    │(Vite+React)│
└──┬───────┘                    └────────────┘
   │
   ├──────────────┬─────────────┐
   │              │             │
┌──▼─────────┐ ┌──▼─────────┐ ┌──▼────────┐
│ PostgreSQL │ │   Redis    │ │  Backups  │
│ (asyncpg)  │ │ (Pub/Sub)  │ │           │
└────────────┘ └────────────┘ └───────────┘

See also: the Coolify Deployment Guide for deploying with Coolify, the V1.0 Deployment Tasklist for the release checklist, and the Runbook for the operations manual.

Deployment Methods

Docker Compose Deployment

1. Clone and Configure

# Clone repository
git clone https://github.com/your-org/morphee-beta.git
cd morphee-beta

# Create production environment file
cp .env.example .env.prod

2. Configure Production Environment

Edit .env.prod:

# Environment
NODE_ENV=production

# Database (external managed database recommended)
DATABASE_URL=postgresql://morphee:STRONG_PASSWORD@postgres:5432/morphee

# Redis (external managed Redis recommended)
REDIS_URL=redis://redis:6379/0

# Security
JWT_SECRET=your-very-strong-random-secret-key-here-min-32-chars

# CORS (set to your frontend domain)
CORS_ORIGINS=["https://yourdomain.com"]

# Logging
LOG_LEVEL=INFO

# Timeouts
TASK_TIMEOUT_SECONDS=300
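The JWT_SECRET above must be a strong random value of at least 32 characters. A minimal sketch for generating one with the Python standard library (the 48-byte length is an illustrative choice, not a project requirement):

```python
import secrets

def generate_jwt_secret(n_bytes: int = 48) -> str:
    # token_urlsafe(48) yields a 64-character URL-safe string,
    # comfortably above the 32-character minimum for JWT_SECRET.
    return secrets.token_urlsafe(n_bytes)

if __name__ == "__main__":
    print(f"JWT_SECRET={generate_jwt_secret()}")
```

Paste the printed value into .env.prod; never commit it to the repository.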

3. Create Production Docker Compose

⚠️ Status: Planned Template — The file docker-compose.prod.yml does not yet exist in the repository. This section provides planning documentation for future production deployment. For current deployment, use docker-compose.dev.yml as a starting point and adapt it for your production needs.

Create docker-compose.prod.yml:

version: '3.9'

services:
  backend:
    build:
      context: ./backend
      dockerfile: ../Dockerfile.backend
      args:
        ENVIRONMENT: production
    container_name: morphee-backend
    restart: unless-stopped
    ports:
      - "127.0.0.1:8000:8000"
    env_file:
      - .env.prod
    environment:
      - PYTHONUNBUFFERED=1
    volumes:
      - ./logs:/app/logs
    depends_on:
      - postgres
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    networks:
      - morphee-network

  postgres:
    image: postgres:15-alpine
    container_name: morphee-postgres
    restart: unless-stopped
    ports:
      - "127.0.0.1:5432:5432"
    environment:
      POSTGRES_USER: morphee
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: morphee
      POSTGRES_INITDB_ARGS: "-E UTF8"
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./supabase/migrations:/docker-entrypoint-initdb.d:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U morphee"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - morphee-network

  redis:
    image: redis:7-alpine
    container_name: morphee-redis
    restart: unless-stopped
    ports:
      - "127.0.0.1:6379:6379"
    command: redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3
    networks:
      - morphee-network

  nginx:
    image: nginx:alpine
    container_name: morphee-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
      - nginx-logs:/var/log/nginx
    depends_on:
      - backend
    networks:
      - morphee-network

volumes:
  postgres-data:
    driver: local
  redis-data:
    driver: local
  nginx-logs:
    driver: local

networks:
  morphee-network:
    driver: bridge

4. Configure Nginx

Create nginx/nginx.conf:

events {
    worker_connections 1024;
}

http {
    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;

    # Logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
    limit_req_status 429;

    # Backend API
    upstream backend {
        server backend:8000;
    }

    # HTTP to HTTPS redirect
    server {
        listen 80;
        server_name yourdomain.com www.yourdomain.com;
        return 301 https://$server_name$request_uri;
    }

    # HTTPS server
    server {
        listen 443 ssl http2;
        server_name yourdomain.com www.yourdomain.com;

        # SSL configuration
        ssl_certificate /etc/nginx/ssl/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/privkey.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # API endpoints
        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;

            proxy_pass http://backend/;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Timeouts
            proxy_connect_timeout 60s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
        }

        # WebSocket endpoint
        location /ws {
            proxy_pass http://backend/ws;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # WebSocket timeouts
            proxy_connect_timeout 7d;
            proxy_send_timeout 7d;
            proxy_read_timeout 7d;
        }

        # Health check endpoint
        location /health {
            proxy_pass http://backend/health;
            access_log off;
        }

        # Frontend (Tauri desktop app connects directly to API)
        # Static web fallback if needed
        location / {
            root /usr/share/nginx/html;
            index index.html;
            try_files $uri $uri/ /index.html;
        }
    }
}
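The limit_req settings above allow a sustained 10 requests/second with a burst of 20; with nodelay, excess requests up to the burst are served immediately and anything beyond gets a 429. A rough Python model of that behavior (an illustration of the semantics, not nginx's actual millisecond-based leaky-bucket accounting):

```python
def simulate_limit_req(timestamps, rate=10.0, burst=20):
    """Rough model of `limit_req zone=... burst=20 nodelay`.

    Each request adds 1 to `excess`; elapsed time drains it at
    `rate` per second. A request that would push excess past the
    burst allowance is rejected with 429. Illustration only.
    """
    excess, last, results = 0.0, None, []
    for t in timestamps:
        if last is not None:
            excess = max(0.0, excess - (t - last) * rate)
        if excess + 1 > burst + 1:
            results.append(429)  # over the burst allowance
        else:
            excess += 1
            results.append(200)
        last = t
    return results
```

For example, 25 simultaneous requests admit roughly rate-slot + burst of them at once and reject the rest, which is why API clients should back off on 429.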

5. Deploy

# Build and start services
docker compose -f docker-compose.prod.yml build
docker compose -f docker-compose.prod.yml up -d

# Check services are running
docker compose -f docker-compose.prod.yml ps

# View logs
docker compose -f docker-compose.prod.yml logs -f

# Test endpoints
curl https://yourdomain.com/api/health

Kubernetes Deployment

⚠️ Status: Planned Infrastructure — The k8s/ directory and Kubernetes manifests referenced in this section do not yet exist in the repository. This is planning documentation for future Kubernetes deployment at scale. The manifests provided here serve as templates for when Kubernetes deployment is needed.

1. Create Kubernetes Manifests

Create k8s/namespace.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: morphee

Create k8s/configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: morphee-config
  namespace: morphee
data:
  ENVIRONMENT: "production"
  LOG_LEVEL: "INFO"
  LOG_FORMAT: "json"
  CORS_ORIGINS: '["https://yourdomain.com"]'

Create k8s/secrets.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: morphee-secrets
  namespace: morphee
type: Opaque
stringData:
  DATABASE_URL: "postgresql://user:password@postgres:5432/morphee"
  REDIS_URL: "redis://redis:6379/0"
  JWT_SECRET: "your-secret-key-here"
  POSTGRES_PASSWORD: "your-postgres-password"

Create k8s/backend-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: morphee-backend
  namespace: morphee
spec:
  replicas: 3
  selector:
    matchLabels:
      app: morphee-backend
  template:
    metadata:
      labels:
        app: morphee-backend
    spec:
      containers:
        - name: backend
          image: your-registry/morphee-backend:latest
          ports:
            - containerPort: 8000
          envFrom:
            - configMapRef:
                name: morphee-config
            - secretRef:
                name: morphee-secrets
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: morphee-backend
  namespace: morphee
spec:
  selector:
    app: morphee-backend
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
  type: ClusterIP

Create k8s/ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: morphee-ingress
  namespace: morphee
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rate-limit: "100"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - yourdomain.com
      secretName: morphee-tls
  rules:
    - host: yourdomain.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: morphee-backend
                port:
                  number: 8000
          - path: /ws
            pathType: Prefix
            backend:
              service:
                name: morphee-backend
                port:
                  number: 8000

2. Deploy to Kubernetes

# Apply manifests
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secrets.yaml
kubectl apply -f k8s/backend-deployment.yaml
kubectl apply -f k8s/ingress.yaml

# Check deployment
kubectl get pods -n morphee
kubectl get services -n morphee
kubectl get ingress -n morphee

# View logs
kubectl logs -f -n morphee -l app=morphee-backend

Institutional Website (Astro)

The institutional website (www/) is a static site built with Astro 5 and Tailwind CSS v4. It includes the landing page, documentation index, privacy policy, terms of service, about page, and contact page.

Build & Deploy

Prerequisites:

  • Node.js 22+ and npm
  • The www/ directory contains an Astro project with package.json and astro.config.mjs

1. Install Dependencies

cd www/
npm install

2. Build for Production

npm run build

This generates static files in www/dist/:

  • /index.html — Landing page
  • /privacy/index.html — Privacy Policy
  • /terms/index.html — Terms of Service
  • /about/index.html — About page
  • /contact/index.html — Contact page
  • /docs/index.html — Documentation index
  • sitemap-index.xml — Auto-generated sitemap

3. Deploy Options

Option A: Serve via Nginx (same server as backend)

Add to your nginx config:

server {
    listen 80;
    server_name www.morphee.app;

    root /var/www/morphee-www/dist;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
}

Deploy:

# On your server
cd /var/www/
git clone https://github.com/morphee-app/morphee-beta.git morphee-www
cd morphee-www/www
npm install
npm run build

# Reload nginx
sudo nginx -t && sudo nginx -s reload

Option B: Deploy to Static Hosting (Netlify, Vercel, Cloudflare Pages)

# Build command
cd www && npm run build

# Publish directory
www/dist

# Or use Netlify CLI
cd www/
npm install -g netlify-cli
netlify deploy --prod --dir=dist

Option C: Deploy to CDN (AWS S3 + CloudFront, Google Cloud Storage)

# Build
cd www/
npm run build

# AWS S3 example
aws s3 sync dist/ s3://morphee-www-bucket/ --delete
aws cloudfront create-invalidation --distribution-id YOUR_DIST_ID --paths "/*"

Development

cd www/
npm run dev

Visit http://localhost:4321 (Astro default port).

Project Structure

www/
├── astro.config.mjs # Astro config (Tailwind, sitemap)
├── package.json # Dependencies (Astro 5, Tailwind CSS 4)
├── tsconfig.json # TypeScript config
├── src/
│ ├── pages/ # Route pages (index.astro, privacy.astro, etc.)
│ ├── layouts/ # BaseLayout.astro, PageLayout.astro
│ ├── components/ # Header.astro, Footer.astro, etc.
│ ├── styles/ # global.css (Tailwind v4 imports)
│ └── content.config.ts # Content collections (blog, changelog)
├── public/ # Static assets (favicon, images)
└── dist/ # Build output (generated)

CI/CD Integration

GitHub Actions workflow example (.github/workflows/www-deploy.yml):

name: Deploy Institutional Website

on:
  push:
    branches: [main]
    paths:
      - 'www/**'

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '22'
          cache: 'npm'
          cache-dependency-path: www/package-lock.json

      - name: Install & Build
        run: |
          cd www/
          npm ci
          npm run build

      - name: Deploy to Netlify
        run: |
          npm install -g netlify-cli
          netlify deploy --prod --dir=www/dist
        env:
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}

E2E Testing

For end-to-end testing of the institutional website:

cd www/
npm run build
npm run preview # Starts preview server on port 4322

# In another terminal, run E2E tests
# (Add Playwright or Cypress tests here)

Add to roadmap: E2E testing for institutional website pages (smoke tests that all pages load, forms submit, and links work).


Database Setup

Production Database Recommendations

  1. Use Managed Database Service (Recommended)

    • AWS RDS PostgreSQL
    • Google Cloud SQL
    • Azure Database for PostgreSQL
    • Digital Ocean Managed Databases
  2. Configuration

    -- Set appropriate connection limits
    ALTER SYSTEM SET max_connections = 100;

    -- Enable query performance tracking
    ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';

    -- Set work memory
    ALTER SYSTEM SET work_mem = '16MB';
  3. Run Migrations

    # Connect to database
    psql $DATABASE_URL

    # Run migration files in order
    \i supabase/migrations/001_initial_schema.sql
    \i supabase/migrations/002_memory_vectors.sql
    \i supabase/migrations/003_schedules.sql
    \i supabase/migrations/004_notifications.sql
    \i supabase/migrations/005_oauth_connections.sql
    \i supabase/migrations/006_skills.sql
    \i supabase/migrations/007_interface_configs.sql
    \i supabase/migrations/008_push_tokens.sql
    \i supabase/migrations/009_phase3e2.sql
    \i supabase/migrations/010_phase3e3.sql
    \i supabase/migrations/011_gdpr_compliance.sql
    \i supabase/migrations/012_performance_indexes.sql
    \i supabase/migrations/013_user_language.sql
    \i supabase/migrations/014_age_verification.sql
    \i supabase/migrations/015_add_message_pinning_and_context_window.sql
    \i supabase/migrations/016_generic_acl_system.sql
    \i supabase/migrations/017_trigram_search_indexes.sql
  4. Generate Encryption Key & Encrypt Existing Data

    # Generate a Fernet key and add to .env.prod as ENCRYPTION_KEY
    python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"

    # Dry run — see what would be encrypted (no changes made)
    docker compose exec backend python scripts/encrypt_existing_data.py --dry-run

    # Encrypt existing plaintext data (messages + memory_vectors)
    docker compose exec backend python scripts/encrypt_existing_data.py

    # IMPORTANT: Back up the encryption key securely.
    # Losing this key = ALL encrypted data is permanently unrecoverable.
  5. Create Indexes

    CREATE INDEX idx_tasks_status ON tasks(status);
    CREATE INDEX idx_tasks_created_at ON tasks(created_at DESC);
    CREATE INDEX idx_task_logs_task_id ON task_logs(task_id);
    CREATE INDEX idx_task_logs_created_at ON task_logs(created_at DESC);
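Migrations must be applied in numeric-prefix order, as in step 3 above. A small sketch of a helper that sorts migration filenames by their prefix before applying them (the `order_migrations` name is illustrative, not part of the repository):

```python
import re

def order_migrations(filenames):
    """Sort migration files by numeric prefix (001_, 002_, ...) so
    they can be applied in sequence, e.g. via \\i in psql."""
    def key(name):
        m = re.match(r"(\d+)_", name)
        # Files without a numeric prefix sort last.
        return int(m.group(1)) if m else float("inf")
    return sorted(filenames, key=key)
```

This avoids the lexicographic trap where, with unpadded numbers, 10 would sort before 2.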

Environment Variables Reference

Complete list of environment variables for production deployment. See backend/config.py for authoritative source.

Naming note: Some variables have aliases across different services and compose files:

  • SMTP Password: Backend uses SMTP_PASSWORD, GoTrue uses GOTRUE_SMTP_PASS (mapped in compose as SMTP_PASS)
  • SMTP From Address: Backend uses FROM_EMAIL, GoTrue/Grafana use SMTP_FROM or GOTRUE_SMTP_ADMIN_EMAIL
  • GoTrue Redirect URIs: Dev compose uses GOTRUE_URI_ALLOW_LIST, Coolify/prod compose uses GOTRUE_ALLOWED_REDIRECTS (both are valid, GoTrue accepts either)

The canonical names in backend/config.py are: SMTP_PASSWORD, FROM_EMAIL. Compose files map these to service-specific names as needed.
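The alias rules above can be captured in a small lookup table. This is a hypothetical helper for illustration only (the mapping values come from this guide; the function does not exist in the codebase):

```python
# Canonical backend names -> service-specific aliases, per the
# naming note above. Unlisted services use the canonical name.
ALIASES = {
    "gotrue": {
        "SMTP_PASSWORD": "GOTRUE_SMTP_PASS",
        "FROM_EMAIL": "GOTRUE_SMTP_ADMIN_EMAIL",
    },
    "grafana": {
        "SMTP_PASSWORD": "SMTP_PASS",
        "FROM_EMAIL": "SMTP_FROM",
    },
}

def alias_for(service: str, canonical: str) -> str:
    """Return the service-specific variable name for a canonical one."""
    return ALIASES.get(service, {}).get(canonical, canonical)
```

For example, a compose template can resolve `alias_for("gotrue", "SMTP_PASSWORD")` when wiring the backend's SMTP credentials into the GoTrue container.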

Core Application

| Variable | Required | Default | Description |
|---|---|---|---|
| NODE_ENV | No | production | Environment mode (development, production) |
| DEBUG | No | false | Enable debug mode |
| LOG_LEVEL | No | INFO | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
| LOG_FORMAT | No | text | Log output format (text for development, json for production structured logging) |
| AGENT_HOST | No | 0.0.0.0 | Server bind address |
| AGENT_PORT | No | 8000 | Server port |

Database & Storage

| Variable | Required | Default | Description |
|---|---|---|---|
| DATABASE_URL | Yes | - | PostgreSQL connection URL |
| DB_POOL_MIN_SIZE | No | 5 | Minimum database connection pool size |
| DB_POOL_MAX_SIZE | No | 20 | Maximum database connection pool size |
| DB_POOL_TIMEOUT | No | 10.0 | Database connection pool timeout (seconds) |
| KNOWLEDGE_BASE_PATH | No | /knowledge | Path for knowledge base storage |
| UPLOAD_PATH | No | /uploads | Path for file uploads |
| MEMORY_GIT_PATH | No | /data/memory | Path for Git-backed memory storage |
| FILESYSTEM_PATH | No | /data/files | Path for sandboxed file operations |

Authentication & Security

| Variable | Required | Default | Description |
|---|---|---|---|
| SUPABASE_AUTH_URL | Yes | - | Supabase Auth (GoTrue) URL |
| SUPABASE_JWT_SECRET | Yes | - | Supabase JWT verification secret |
| JWT_SECRET | Yes | - | JWT signing secret (min 32 characters) |
| WEBHOOK_SECRET | No | - | Secret for webhook validation |
| ENCRYPTION_KEY | Yes (prod) | - | Fernet encryption key for data at rest (chat messages, memory vectors, Git files). Generate: `python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"`. Losing this key = all encrypted data unrecoverable. |
| SKIP_AUTH | No | false | Skip authentication (development only, never use in production) |
| TRUSTED_PROXY_IPS | No | "" | Comma-separated IPs trusted for X-Forwarded-For |
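TRUSTED_PROXY_IPS controls whose X-Forwarded-For header is believed when resolving the client IP. A minimal sketch of the usual algorithm, assuming this behavior; the backend's actual implementation may differ:

```python
def client_ip(xff_header: str, peer_ip: str, trusted_proxies: set[str]) -> str:
    """Walk X-Forwarded-For right-to-left, skipping trusted proxies.

    Only honours the header when the direct peer is itself trusted;
    otherwise the header could be spoofed by the client.
    """
    if peer_ip not in trusted_proxies:
        return peer_ip  # direct connection: header is untrusted
    hops = [h.strip() for h in xff_header.split(",") if h.strip()]
    for ip in reversed(hops):
        if ip not in trusted_proxies:
            return ip  # first untrusted hop is the real client
    return peer_ip
```

This is why leaving TRUSTED_PROXY_IPS empty behind a reverse proxy makes every request appear to come from the proxy's address.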

LLM Configuration

| Variable | Required | Default | Description |
|---|---|---|---|
| DEFAULT_LLM_PROVIDER | No | anthropic | LLM provider (openai, anthropic, litellm) |
| DEFAULT_LLM_MODEL | No | claude-sonnet-4-20250514 | Default LLM model identifier |
| OPENAI_API_KEY | Conditional | - | OpenAI API key (required if provider=openai) |
| ANTHROPIC_API_KEY | Conditional | - | Anthropic API key (required if provider=anthropic) |
| LITELLM_API_BASE | Conditional | - | LiteLLM proxy base URL (required if provider=litellm) |
| EMBEDDING_PROVIDER | No | fastembed | Embedding provider (fastembed, openai) |
| EMBEDDING_MODEL | No | - | Specific embedding model (provider-dependent) |

Redis & Event Bus

| Variable | Required | Default | Description |
|---|---|---|---|
| REDIS_URL | Conditional | redis://localhost:6379/0 | Redis connection URL |
| USE_REDIS | No | false | Enable Redis pub/sub for events |
| REQUIRE_REDIS | No | false | Fail startup if Redis unavailable |

Task Processing

| Variable | Required | Default | Description |
|---|---|---|---|
| MAX_CONCURRENT_TASKS | No | 10 | Maximum concurrent background tasks |
| TASK_RETRY_MAX | No | 3 | Maximum task retry attempts |
| TASK_TIMEOUT_SECONDS | No | 300 | Task execution timeout (seconds) |
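TASK_TIMEOUT_SECONDS bounds how long one background task may run. A sketch of how such a bound is typically enforced with asyncio (an illustration of the setting's effect; the backend's actual task runner may differ):

```python
import asyncio

async def run_with_timeout(coro, timeout=300):
    """Cancel the task if it exceeds the timeout, the way
    TASK_TIMEOUT_SECONDS is meant to bound task execution."""
    try:
        return await asyncio.wait_for(coro, timeout=timeout)
    except asyncio.TimeoutError:
        return "timeout"

async def _slow():
    await asyncio.sleep(10)  # stands in for a long-running task
    return "done"

# A tiny timeout forces the cancellation path.
result = asyncio.run(run_with_timeout(_slow(), timeout=0.01))
```

Tasks that exceed the limit should surface a clear timeout status rather than hang indefinitely.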

Agent Orchestrator

| Variable | Required | Default | Description |
|---|---|---|---|
| ORCHESTRATOR_MAX_TURNS | No | 5 | Maximum agent loop iterations (reduced from 10 per M-DOS-001 to limit LLM cost) |
| APPROVAL_TIMEOUT_SECONDS | No | 120 | Tool approval timeout (seconds) |
| DEFAULT_MONTHLY_TOKEN_LIMIT | No | 2000000 | Per-group monthly LLM token quota (~$6 of Claude Sonnet at current pricing) |
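The ~$6 figure for the default quota follows from simple arithmetic. A sketch, assuming a blended rate of roughly $3 per million tokens (Claude Sonnet input pricing at the time of writing; actual spend depends on the input/output mix):

```python
def monthly_quota_cost(token_limit: int, usd_per_million: float = 3.0) -> float:
    """Rough cost ceiling for a group's monthly token quota."""
    return token_limit / 1_000_000 * usd_per_million

# DEFAULT_MONTHLY_TOKEN_LIMIT = 2_000_000 -> about $6/month per group
```

Adjust DEFAULT_MONTHLY_TOKEN_LIMIT if your provider pricing or budget per group differs.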

Scheduler

| Variable | Required | Default | Description |
|---|---|---|---|
| SCHEDULER_POLL_INTERVAL | No | 15 | Cron schedule polling interval (seconds) |

Email / SMTP

| Variable | Required | Default | Description |
|---|---|---|---|
| SMTP_HOST | No | localhost | SMTP server hostname |
| SMTP_PORT | No | 587 | SMTP server port |
| SMTP_USER | No | - | SMTP authentication username |
| SMTP_PASSWORD | No | - | SMTP authentication password |
| FROM_EMAIL | No | noreply@morphee.app | Sender email address |
| FROM_NAME | No | Morphee | Sender display name |

Google OAuth & APIs

| Variable | Required | Default | Description |
|---|---|---|---|
| GOOGLE_CLIENT_ID | Conditional | - | Google OAuth client ID (for both Google API integrations and SSO) |
| GOOGLE_CLIENT_SECRET | Conditional | - | Google OAuth client secret |
| GOOGLE_REDIRECT_URI | No | http://localhost:8000/api/oauth/google/callback | OAuth redirect URI for Google API integrations |
| GOOGLE_SSO_ENABLED | No | false | Enable Google SSO via GoTrue (set to true to activate) |

SSO Providers

| Variable | Required | Default | Description |
|---|---|---|---|
| APPLE_SSO_ENABLED | No | false | Enable Apple Sign-In via GoTrue |
| APPLE_CLIENT_ID | Conditional | - | Apple Services ID (required if Apple SSO enabled) |
| APPLE_CLIENT_SECRET | Conditional | - | Apple JWT secret key (required if Apple SSO enabled) |
| AZURE_SSO_ENABLED | No | false | Enable Microsoft/Azure AD SSO via GoTrue |
| AZURE_CLIENT_ID | Conditional | - | Azure AD application (client) ID (required if Azure SSO enabled) |
| AZURE_CLIENT_SECRET | Conditional | - | Azure AD client secret (required if Azure SSO enabled) |

Push Notifications

| Variable | Required | Default | Description |
|---|---|---|---|
| APNS_KEY_ID | Conditional | - | Apple Push Notification Service key ID (iOS) |
| APNS_TEAM_ID | Conditional | - | Apple Developer Team ID (iOS) |
| APNS_KEY_PATH | Conditional | - | Path to APNs .p8 key file (iOS) |
| APNS_BUNDLE_ID | No | app.morphee.mobile | iOS app bundle identifier |
| FCM_PROJECT_ID | Conditional | - | Firebase Cloud Messaging project ID (Android) |
| FCM_SERVICE_ACCOUNT_PATH | Conditional | - | Path to FCM service account JSON (Android) |

Frontend & CORS

| Variable | Required | Default | Description |
|---|---|---|---|
| FRONTEND_URL | No | http://localhost:5173 | Frontend URL for OAuth redirects |
| CORS_ORIGINS | No | http://localhost:3000,http://localhost:5173,tauri://localhost,http://tauri.localhost | Comma-separated allowed CORS origins |
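A quick sketch of how a comma-separated CORS_ORIGINS value resolves into an allow-list check (illustration only; the backend may also accept a JSON list, as shown in the .env.prod example earlier):

```python
def parse_cors_origins(value: str) -> list[str]:
    """Split a comma-separated CORS_ORIGINS value into origins."""
    return [o.strip() for o in value.split(",") if o.strip()]

def origin_allowed(origin: str, allowed: list[str]) -> bool:
    # CORS origins are matched exactly (scheme + host + optional port).
    return origin in allowed

DEFAULT = "http://localhost:3000,http://localhost:5173,tauri://localhost,http://tauri.localhost"
```

In production, replace the default with only your real frontend origin(s); a wildcard or stale localhost entry weakens the CORS protection.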

WebSocket

| Variable | Required | Default | Description |
|---|---|---|---|
| WEBSOCKET_HEARTBEAT_INTERVAL | No | 30 | WebSocket ping interval (seconds) |

Optional Observability

| Variable | Required | Default | Description |
|---|---|---|---|
| LANGFUSE_PUBLIC_KEY | No | - | Langfuse public key (LLM cost tracking) |
| LANGFUSE_SECRET_KEY | No | - | Langfuse secret key |
| LANGFUSE_HOST | No | https://cloud.langfuse.com | Langfuse API endpoint |

Supabase GoTrue / SSO Configuration

These variables configure the GoTrue auth service in docker-compose files.

| Variable | Required | Default | Description |
|---|---|---|---|
| GOTRUE_API_HOST | No | 0.0.0.0 | GoTrue listen address |
| GOTRUE_API_PORT | No | 9999 | GoTrue listen port |
| GOTRUE_DB_DRIVER | Yes | postgres | Database driver |
| GOTRUE_DB_DATABASE_URL | Yes | - | PostgreSQL connection for GoTrue (auth schema) |
| GOTRUE_SITE_URL | Yes | - | Frontend URL for email links and redirects |
| GOTRUE_URI_ALLOW_LIST | No | - | Comma-separated list of allowed redirect URIs |
| GOTRUE_JWT_SECRET | Yes | - | Must match SUPABASE_JWT_SECRET |
| GOTRUE_JWT_EXP | No | 86400 | JWT expiration in seconds |
| GOTRUE_JWT_DEFAULT_GROUP_NAME | No | authenticated | Default JWT group claim |
| GOTRUE_MAILER_AUTOCONFIRM | No | false | Skip email confirmation (dev only) |
| GOTRUE_DISABLE_SIGNUP | No | false | Disable new user registration |
| GOTRUE_EXTERNAL_GOOGLE_ENABLED | No | false | Enable Google SSO (set via GOOGLE_SSO_ENABLED in compose) |
| GOTRUE_EXTERNAL_APPLE_ENABLED | No | false | Enable Apple SSO (set via APPLE_SSO_ENABLED in compose) |
| GOTRUE_EXTERNAL_AZURE_ENABLED | No | false | Enable Azure AD SSO (set via AZURE_SSO_ENABLED in compose) |
| GOTRUE_SMTP_HOST | No | - | SMTP host for GoTrue emails |
| GOTRUE_SMTP_PORT | No | - | SMTP port |
| GOTRUE_SMTP_USER | No | - | SMTP username |
| GOTRUE_SMTP_PASS | No | - | SMTP password |
| GOTRUE_SMTP_ADMIN_EMAIL | No | - | Sender email for GoTrue |

Frontend / Vite Build Variables

These are set at build time for the frontend container.

| Variable | Required | Default | Description |
|---|---|---|---|
| VITE_API_TARGET | Yes (dev) | http://backend:8000 | Backend API URL (dev proxy target) |
| VITE_WS_TARGET | Yes (dev) | ws://backend:8000 | WebSocket URL (dev proxy target) |
| VITE_SUPABASE_URL | Yes | - | Supabase Auth URL for frontend |
| VITE_SUPABASE_ANON_KEY | Yes | - | Supabase anonymous key for frontend |
| VITE_CSP | No | - | Custom Content-Security-Policy header for the frontend |
| VITE_CSP_CONNECT_SRC | No | - | Additional CSP connect-src origins (e.g., analytics, API domains) |

Database & Redis Credentials

These configure PostgreSQL and Redis in docker-compose.

| Variable | Required | Default | Description |
|---|---|---|---|
| POSTGRES_USER | Yes | morphee | PostgreSQL username |
| POSTGRES_PASSWORD | Yes | - | PostgreSQL password |
| POSTGRES_DB | Yes | morphee | PostgreSQL database name |
| REDIS_PASSWORD | Yes (prod) | - | Redis password for production. Required — REDIS_URL includes :${REDIS_PASSWORD}@ in prod compose. |

Monitoring Stack (docker-compose.monitoring.yml)

Optional — for Grafana + Prometheus monitoring.

| Variable | Required | Default | Description |
|---|---|---|---|
| GRAFANA_ADMIN_USER | No | admin | Grafana admin username |
| GRAFANA_PASSWORD | Yes | - | Grafana admin password |
| GRAFANA_SECRET_KEY | Yes | - | Grafana cookie/session secret |
| GRAFANA_DOMAIN | No | monitoring.morphee.app | Grafana public domain |
| SMTP_ENABLED | No | false | Enable Grafana email alerts |

PostHog Analytics (docker-compose.posthog.yml)

Optional — self-hosted product analytics. Resource-intensive (4GB+ RAM for ClickHouse).

| Variable | Required | Default | Description |
|---|---|---|---|
| POSTHOG_SECRET_KEY | Yes | - | Django secret key for PostHog |
| POSTHOG_DOMAIN | No | analytics.morphee.app | PostHog public domain |
| CLICKHOUSE_PASSWORD | Yes | - | ClickHouse password |

Note: PostHog also reuses POSTGRES_PASSWORD, SMTP_HOST, SMTP_PORT, SMTP_USER, and SMTP_PASS from the core config. Consider the PostHog Cloud free tier instead of self-hosting.

Planned / Not Yet Implemented

These variables are documented for future use but not currently implemented in the backend code:

| Variable | Description | Status |
|---|---|---|
| RATE_LIMIT_ENABLED | Enable API rate limiting | Planned — rate limiting is currently handled by Nginx/reverse proxy |
| RATE_LIMIT_REQUESTS_PER_MINUTE | Max requests per minute per IP | Planned — rate limiting is currently handled by Nginx/reverse proxy |
| API_TIMEOUT | Global API request timeout | Planned — timeouts are currently per-endpoint |
| JWT_ALGORITHM | JWT signing algorithm | Not needed — GoTrue handles JWT signing, backend only verifies |
| JWT_EXPIRATION_MINUTES | JWT token expiration time | Not needed — controlled by GoTrue via GOTRUE_JWT_EXP |

Note: If you need these features now, use Nginx rate limiting (see nginx.conf example above) and configure GoTrue's JWT settings directly.


Security Checklist

  • Use strong, randomly generated JWT_SECRET (min 32 characters)
  • Enable HTTPS with valid SSL certificates
  • Configure CORS to only allow your frontend domain
  • Use environment variables for all secrets
  • Enable rate limiting
  • Keep all dependencies updated
  • Application-level encryption at rest (Fernet) for chat messages, memory vectors, and Git files
  • Use managed databases with encryption at rest (disk-level)
  • Enable database connection encryption (SSL/TLS)
  • Set up firewall rules to restrict database access
  • Implement logging and monitoring
  • Regular security audits and penetration testing
  • Set up automated backups
  • Implement DDoS protection
  • Use container image scanning

Monitoring & Logging

Status: Health checks are implemented; centralized logging and metrics are 📋 planned templates.

Health Checks

# Backend health
curl https://yourdomain.com/api/health

# Expected response:
# {"status": "healthy", "version": "2.0.0"}
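A monitoring probe should validate the payload shape, not just the HTTP status. A minimal sketch of such a check, assuming the response format shown above (the `check_health` helper is illustrative, not part of the backend):

```python
import json

def check_health(body: str) -> bool:
    """Return True if the /health response body reports a healthy
    service; tolerate malformed bodies instead of raising."""
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        return False
    return data.get("status") == "healthy"
```

Wire this into your uptime monitor so that a 200 response with a degraded payload still triggers an alert.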

Logging

Configure centralized logging:

# docker-compose.prod.yml - add logging driver
services:
  backend:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

Metrics (Optional - Prometheus)

Add Prometheus metrics endpoint to backend:

# backend/main.py
from prometheus_client import make_asgi_app

# Mount metrics endpoint
metrics_app = make_asgi_app()
app.mount("/metrics", metrics_app)

Backup Strategy

Status: Implemented — Backup script at scripts/backup-morphee.sh.

What Gets Backed Up

| Data | Method | Location | Priority |
|---|---|---|---|
| PostgreSQL database | pg_dump (compressed) | morphee_db_TIMESTAMP.sql.gz | Critical |
| Git memory repos | tar archive | morphee_memory_TIMESTAMP.tar.gz | High |
| File storage | tar archive | morphee_files_TIMESTAMP.tar.gz | Medium |
| Redis snapshot | BGSAVE + copy | morphee_redis_TIMESTAMP.rdb | Low (ephemeral) |

Running Backups

# Manual backup
DATABASE_URL=postgresql://morphee:password@localhost:5432/morphee \
./scripts/backup-morphee.sh

# With S3 offsite upload
DATABASE_URL=postgresql://morphee:password@localhost:5432/morphee \
./scripts/backup-morphee.sh --s3-bucket my-backup-bucket

# Custom retention (default: 30 days)
./scripts/backup-morphee.sh --retention-days 90
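The retention policy simply drops files older than the window. A sketch of that selection logic in Python (illustration only; the real pruning lives in scripts/backup-morphee.sh):

```python
from datetime import datetime, timedelta

def expired_backups(files, now, retention_days=30):
    """Given {filename: mtime}, return files older than the
    retention window, i.e. the ones --retention-days would prune."""
    cutoff = now - timedelta(days=retention_days)
    return sorted(name for name, mtime in files.items() if mtime < cutoff)
```

Run a dry listing of the expired set before deleting anything, especially after changing the retention value.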

Automated Daily Backups (cron)

# Add to crontab (runs daily at 2 AM)
0 2 * * * DATABASE_URL=postgresql://morphee:password@localhost:5432/morphee \
/opt/morphee/scripts/backup-morphee.sh >> /var/log/morphee-backup.log 2>&1

Restoring from Backup

# 1. Restore PostgreSQL database
gunzip -c morphee_db_20260214_020000.sql.gz | \
psql postgresql://morphee:password@localhost:5432/morphee

# 2. Restore Git memory repos
mkdir -p /data/coolify/morphee/memory
tar -xzf morphee_memory_20260214_020000.tar.gz -C /data/coolify/morphee/memory/

# 3. Restore file storage
mkdir -p /data/coolify/morphee/files
tar -xzf morphee_files_20260214_020000.tar.gz -C /data/coolify/morphee/files/

# 4. Restore Redis (optional — it regenerates from PostgreSQL)
docker cp morphee_redis_20260214_020000.rdb morphee-redis:/data/dump.rdb
docker restart morphee-redis

Backup Verification

Test restores periodically to ensure backups are valid:

# Verify database backup is valid SQL
gunzip -c morphee_db_*.sql.gz | head -20

# Verify archive integrity
tar -tzf morphee_memory_*.tar.gz > /dev/null && echo "OK"

# Check backup sizes (sudden drops may indicate issues)
ls -lh /data/backups/morphee/
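The "sudden drops" check can be automated with a crude heuristic: flag any backup whose size fell below half of the previous run. A sketch (the threshold and helper name are illustrative choices, not part of the backup script):

```python
def size_drop_alert(sizes, threshold=0.5):
    """Given successive backup sizes in bytes, return the sizes that
    dropped below `threshold` of the previous run."""
    alerts = []
    for prev, cur in zip(sizes, sizes[1:]):
        if prev > 0 and cur < prev * threshold:
            alerts.append(cur)
    return alerts
```

Feed it the byte counts from the backup directory listing; a flagged run warrants a manual restore test.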

Offsite Storage

For production, configure S3-compatible offsite storage:

# Environment variables for S3 access
export AWS_ACCESS_KEY_ID=your-key
export AWS_SECRET_ACCESS_KEY=your-secret
export AWS_DEFAULT_REGION=eu-west-1

# Run with S3 upload
./scripts/backup-morphee.sh --s3-bucket morphee-backups

Backups are uploaded with STANDARD_IA storage class (infrequent access, lower cost). Files are organized by date: s3://bucket/morphee/YYYY-MM-DD/.
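The dated key layout above is easy to reproduce when writing your own upload tooling. A small sketch of the key construction (the helper is illustrative; the backup script builds its own keys):

```python
from datetime import date

def s3_key(filename: str, day: date, prefix: str = "morphee") -> str:
    """Build the dated object key: <prefix>/YYYY-MM-DD/<filename>."""
    return f"{prefix}/{day.isoformat()}/{filename}"
```

Grouping by date keeps lifecycle rules simple, e.g. expiring whole day-prefixes after the retention window.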


Scaling Considerations

📋 Status: Planned — Scaling strategies documented here are guidance for future growth. Current deployment uses docker-compose.dev.yml.

Horizontal Scaling

  1. Backend API

    • Add more backend replicas
    • Use load balancer (Nginx, AWS ALB, GCP Load Balancer)
    • Ensure stateless design (all state in Redis/PostgreSQL)
  2. Database

    • Use read replicas for read-heavy workloads
    • Consider connection pooling (PgBouncer)
    • Implement caching layer (Redis)
  3. Redis

    • Use Redis Cluster for high availability
    • Consider Redis Sentinel for automatic failover

Vertical Scaling

  • Increase CPU/memory based on monitoring
  • Optimize database queries
  • Add database indexes
  • Tune PostgreSQL configuration

Maintenance

Updates

# Pull latest changes
git pull origin main

# Rebuild and restart
docker compose -f docker-compose.prod.yml build
docker compose -f docker-compose.prod.yml up -d

# Run new migrations if any
docker compose -f docker-compose.prod.yml exec postgres psql -U morphee -d morphee -f /migrations/new_migration.sql

Zero-Downtime Deployment

  1. Using Docker Compose

    # Build new image
    docker compose -f docker-compose.prod.yml build backend

    # Rolling update
    docker compose -f docker-compose.prod.yml up -d --no-deps --scale backend=2 backend
    docker compose -f docker-compose.prod.yml up -d --no-deps --scale backend=1 backend
  2. Using Kubernetes

    # Update image
    kubectl set image deployment/morphee-backend backend=your-registry/morphee-backend:new-version -n morphee

    # Monitor rollout
    kubectl rollout status deployment/morphee-backend -n morphee

Troubleshooting Production Issues

See troubleshooting.md for detailed troubleshooting guide.

Quick checks:

# Check all services
docker compose -f docker-compose.prod.yml ps

# View recent logs
docker compose -f docker-compose.prod.yml logs --tail=100 backend

# Check resource usage
docker stats

# Test database connection
docker compose -f docker-compose.prod.yml exec backend python -c "
import asyncio
from db.client import get_db

async def check():
    db = get_db()
    await db.initialize()
    print('Database connection:', await db.health_check())

asyncio.run(check())
"

Cost Optimization

  1. Use managed services: Often cheaper than self-managing
  2. Right-size resources: Monitor and adjust based on actual usage
  3. Use spot/preemptible instances: For non-critical workloads
  4. Implement caching: Reduce database queries
  5. Archive old data: Move historical data to cheaper storage

Last Updated: February 14, 2026

For more information, see: