High Availability & Scaling — Morphee Infrastructure
Last updated: 2026-02-20 Status: 📋 Reference architecture — Phase 1 ready for implementation
Overview
Morphee's data architecture revolves around git-backed Spaces (OpenMorph .morph/ directories). Every Space is a git repository storing tasks, skills, configs, memory, and canvas state as version-controlled files. This document covers how to make that storage highly available, and how to scale the entire platform from a single Coolify server to a multi-region AWS deployment.
Current state: Single Hetzner VPS running Coolify (~$17/mo). All services (backend, frontend, Redis, PostgreSQL, GoTrue) run as Docker containers on one server with local volumes.
Target: A clear, incremental migration path with cost-aware thresholds for when to move to each phase.
Architecture Phases
| Phase | Name | When | Monthly Cost | Users |
|---|---|---|---|---|
| 0 | Single VPS (current) | Now | ~$17 | 1-50 |
| 1 | Forgejo on Coolify | V1.0 launch | ~$17 (same server) | 1-200 |
| 2 | Managed services + replicas | >200 users | ~$80-120 | 200-2,000 |
| 3 | AWS full stack | >2,000 users | ~$200-400 | 2,000-50,000 |
| 4 | Multi-region | >50,000 users | ~$800+ | 50,000+ |
Phase 0 — Current Architecture (Single VPS)
┌──────────────────── Hetzner CPX31 (4 vCPU, 8 GB) ────────────────────┐
│ │
│ Coolify (Traefik reverse proxy + Let's Encrypt) │
│ │
│ ┌─────────┐ ┌──────────┐ ┌───────┐ ┌────────┐ ┌─────────────┐ │
│ │Frontend │ │ Backend │ │ Redis │ │PostgreSQL│ │ GoTrue │ │
│ │ (Nginx) │ │(FastAPI) │ │ 7 │ │ 15 │ │ (Supabase) │ │
│ └─────────┘ └──────────┘ └───────┘ └────────┘ └─────────────┘ │
│ │ │ │
│ /data/memory (git repos) /data/files │
│ (local Docker volume) (local Docker volume) │
└───────────────────────────────────────────────────────────────────────┘
Risks:
- Single point of failure — server dies, everything is gone
- Git repos on local disk — no replication
- No automated backups (script exists but manual)
- No horizontal scaling
Phase 1 — Forgejo on Coolify ($0 extra)
Goal: Give git repos a proper home with an API, webhooks, and backup-friendly storage. Zero additional cost — runs on the same VPS.
Why Forgejo
Forgejo is a community fork of Gitea — a lightweight, self-hosted Git server. It's the right fit because:
- Tiny footprint: ~80 MB RAM idle, ~200 MB under load (vs GitLab's 4 GB+)
- Gitea-compatible API: Full REST API at `/api/v1/` for org/repo CRUD, webhooks, and file access
- Shares PostgreSQL: Uses our existing database — no new stateful service
- Docker image: `codeberg.org/forgejo/forgejo:13` (~100 MB)
- Registration disabled: We manage repos via API only — no user-facing Git UI needed
Architecture
┌──────────────────── Hetzner CPX31 ────────────────────────────────────┐
│ │
│ Coolify (Traefik) │
│ │
│ ┌─────────┐ ┌──────────┐ ┌──────────┐ ┌───────┐ ┌──────────┐ │
│ │Frontend │ │ Backend │ │ Forgejo │ │ Redis │ │PostgreSQL│ │
│ │ (Nginx) │ │(FastAPI) │ │(Git srvr)│ │ 7 │ │ 15 │ │
│ └─────────┘ └────┬─────┘ └────┬─────┘ └───────┘ └─────┬────┘ │
│ │ │ │ │
│ │ HTTP API │ git push/pull │ │
│ └─────────────┘ │ │
│ │ │ │
│ forgejo-data:/data shared DB │
│ (repos on disk) │
└───────────────────────────────────────────────────────────────────────┘
Docker Compose Service
Add this to your Coolify deployment (or create a separate Docker Compose resource):
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:13
    container_name: forgejo
    restart: unless-stopped
    environment:
      # --- Database (shared PostgreSQL) ---
      FORGEJO__database__DB_TYPE: postgres
      FORGEJO__database__HOST: ${FORGEJO_DB_HOST:-supabase-db}:5432
      FORGEJO__database__NAME: ${FORGEJO_DB_NAME:-forgejo}
      FORGEJO__database__USER: ${FORGEJO_DB_USER:-forgejo}
      FORGEJO__database__PASSWD: ${FORGEJO_DB_PASSWORD}
      FORGEJO__database__SSL_MODE: ${FORGEJO_DB_SSL:-disable}
      # --- Server ---
      FORGEJO__server__DOMAIN: ${FORGEJO_DOMAIN:-git.morphee.app}
      FORGEJO__server__ROOT_URL: https://${FORGEJO_DOMAIN:-git.morphee.app}/
      FORGEJO__server__SSH_DOMAIN: ${FORGEJO_DOMAIN:-git.morphee.app}
      FORGEJO__server__HTTP_PORT: "3000"
      FORGEJO__server__LFS_START_SERVER: "true"
      # --- Access control ---
      FORGEJO__service__DISABLE_REGISTRATION: "true"
      FORGEJO__service__REQUIRE_SIGNIN_VIEW: "true"
      # --- Repository defaults ---
      FORGEJO__repository__DEFAULT_BRANCH: main
      FORGEJO__repository__DEFAULT_PRIVATE: private
    volumes:
      - forgejo-data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"   # optional; Traefik can reach the container over the internal network
      - "222:22"      # git-over-SSH
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "0.5"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/api/v1/version"]
      interval: 30s
      timeout: 5s
      retries: 3

volumes:
  forgejo-data:
Coolify Deployment Steps
1. Create a `forgejo` database in your existing PostgreSQL:

   CREATE DATABASE forgejo;
   CREATE USER forgejo WITH PASSWORD 'your-secure-password';
   GRANT ALL PRIVILEGES ON DATABASE forgejo TO forgejo;
   ALTER DATABASE forgejo OWNER TO forgejo;  -- PostgreSQL 15 restricts CREATE on the public schema to the owner

2. Add Forgejo as a Docker Compose resource in Coolify:
   - New Resource → Docker Compose → paste the YAML above
   - Set environment variables in the Coolify UI
3. Configure the domain: set `git.morphee.app` in Coolify's proxy settings. Coolify auto-provisions the SSL certificate via Traefik.
4. Persistent storage: in Coolify's Persistent Storage panel, map the volume `forgejo-data` → `/data`.
5. First boot: Forgejo auto-initializes the database schema and creates an admin user.
6. Create an API token: log into the Forgejo web UI once to create an admin API token, then store it as `FORGEJO_TOKEN` in the backend's environment.
Forgejo API Integration
The Morphee backend uses the Forgejo API to manage git repos programmatically. Key mapping:
| Morphee Concept | Forgejo Concept | API Endpoint |
|---|---|---|
| Group | Organization | POST /api/v1/orgs |
| Space | Repository | POST /api/v1/orgs/{org}/repos |
| Memory sync | Git push/pull | Standard git protocol |
| Webhook (change notification) | Repo webhook | POST /api/v1/repos/{owner}/{repo}/hooks |
Create a Group (Organization):
curl -X POST https://git.morphee.app/api/v1/orgs \
-H "Authorization: token ${FORGEJO_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"username": "group-abc123",
"full_name": "Family Smith",
"visibility": "private"
}'
Create a Space (Repository):
curl -X POST https://git.morphee.app/api/v1/orgs/group-abc123/repos \
-H "Authorization: token ${FORGEJO_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"name": "space-xyz789",
"description": "Space: Family Tasks",
"private": true,
"auto_init": true,
"default_branch": "main"
}'
Register a webhook (push notifications):
curl -X POST https://git.morphee.app/api/v1/repos/group-abc123/space-xyz789/hooks \
-H "Authorization: token ${FORGEJO_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"type": "forgejo",
"active": true,
"config": {
"url": "https://api.morphee.app/hooks/forgejo",
"content_type": "json",
"secret": "webhook-hmac-secret"
},
"events": ["push"]
}'
Backend Integration Service
The `MorphDiscoveryService` (or `GitStoreService`) wraps these API calls:

# backend/git/forgejo_client.py (future)
class ForgejoClient:
    """HTTP client for Forgejo API — manages orgs/repos for Morphee groups/spaces."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.headers = {
            "Authorization": f"token {token}",
            "Content-Type": "application/json",
        }

    async def create_org(self, group_id: str, display_name: str) -> dict:
        """Create Forgejo org for a Morphee group."""
        ...

    async def create_repo(self, group_id: str, space_id: str, name: str) -> dict:
        """Create Forgejo repo for a Morphee space."""
        ...

    async def delete_repo(self, group_id: str, space_id: str) -> None:
        """Delete repo when a space is deleted."""
        ...

    async def get_clone_url(self, group_id: str, space_id: str) -> str:
        """Get the git clone URL for a space's repo."""
        ...
Backup Strategy (Phase 1)
With Forgejo, backups become simple:
#!/bin/bash
# backup-forgejo.sh — run daily via cron
BACKUP_DIR="/data/backups/forgejo"
DATE=$(date +%Y%m%d_%H%M%S)
# 1. Dump Forgejo database
pg_dump -U forgejo -h localhost -d forgejo | gzip > "${BACKUP_DIR}/forgejo_db_${DATE}.sql.gz"
# 2. Tar the git repos (Forgejo stores them in /data/forgejo/repositories/)
tar -czf "${BACKUP_DIR}/forgejo_repos_${DATE}.tar.gz" -C /data/forgejo repositories/
# 3. Upload to S3 / R2 (optional — recommended)
# aws s3 cp "${BACKUP_DIR}/forgejo_repos_${DATE}.tar.gz" s3://morphee-backups/forgejo/
# Or with rclone for Cloudflare R2:
# rclone copy "${BACKUP_DIR}/" r2:morphee-backups/forgejo/ --max-age 1h
# 4. Retention: keep last 7 daily backups locally
find "${BACKUP_DIR}" -name "*.gz" -mtime +7 -delete
Cloudflare R2 is recommended for offsite backups:
- Free egress (no data transfer fees)
- $0.015/GB/month storage
- S3-compatible API (works with aws CLI and rclone)
- 10 GB free tier
Phase 2 — Managed Services + Replicas (~$80-120/mo)
When to move: >200 users, or when you need zero-downtime deploys and automatic failover.
What changes:
- Move PostgreSQL to a managed service (Supabase Pro or Hetzner Managed PostgreSQL)
- Add a second backend instance behind Coolify's load balancer
- Move backups to Cloudflare R2 with automated daily schedule
- Optional: move Forgejo to its own small VPS
Architecture
┌──────────── Cloudflare ─────────────┐
│ DNS: morphee.app │
│ R2: morphee-backups (offsite) │
└──────────────┬──────────────────────┘
│
┌─────────────────────────┼───────────────────────┐
│ Coolify (Traefik LB) │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────────┐ │
│ │Backend 1 │ │Backend 2 │ │ Forgejo │ │
│ │(FastAPI) │ │(FastAPI) │ │ (Git server) │ │
│ └─────┬────┘ └─────┬────┘ └──────┬───────┘ │
│ │ │ │ │
│ └──────┬──────┘ │ │
│ │ │ │
│ ┌──────┴──────┐ │ │
│ │ Redis 7 │ │ │
│ │ (Sentinel) │ │ │
│ └─────────────┘ │ │
└───────────────────┬───────────────────┘ │
│ │
┌────────┴────────┐ ┌────────┴────────┐
│ Managed PG │ │ Persistent Vol │
│ (Supabase Pro │ │ (forgejo-data) │
│ or Hetzner) │ └─────────────────┘
└─────────────────┘
Managed PostgreSQL Options
| Provider | Plan | Monthly Cost | Features |
|---|---|---|---|
| Supabase Pro | Pro | $25 | Auth included, dashboard, point-in-time recovery, 8 GB storage |
| Hetzner Cloud DB | CPX11 | ~$15 | Managed PG, auto backups, EU data residency |
| Neon | Scale | $19 | Serverless PG, branching, auto-suspend |
Recommendation: Supabase Pro — already integrated (GoTrue auth), includes connection pooling, and the dashboard is useful for debugging.
Multiple Backend Replicas
Coolify supports scaling services to multiple replicas. The Morphee backend is stateless (session state in Redis, data in PostgreSQL) so it scales horizontally:
- In Coolify, set the backend service replica count to 2
- Coolify's Traefik automatically round-robins between replicas
- Established WebSocket connections stay pinned to the replica that accepted them; enable Traefik sticky sessions (cookie affinity) if plain HTTP requests also need to hit the same replica
Requirements for stateless backend:
- Redis for pub/sub and rate limiting (already done)
- No local file writes for user data (git repos moved to Forgejo)
- JWT auth is stateless (already done)
Phase 3 — AWS Full Stack (~$200-400/mo)
When to move: >2,000 users, need geographic presence, compliance requirements (SOC2, HIPAA), or the Coolify server is hitting resource limits.
Architecture
┌──────────── CloudFront CDN ──────────┐
│ app.morphee.app (frontend) │
│ Static assets cached at edge │
└──────────────┬───────────────────────┘
│
┌──────────────┴───────────────────────┐
│ AWS VPC │
│ │
│ ┌──────── ALB ────────┐ │
│ │ api.morphee.app │ │
│ │ git.morphee.app │ │
│ └────┬──────────┬─────┘ │
│ │ │ │
│ ┌────┴───┐ ┌────┴───┐ │
│ │ECS │ │ECS │ │
│ │Backend │ │Forgejo │ │
│ │Fargate │ │Fargate │ │
│ │(2 tasks│ │(1 task)│ │
│ └────┬───┘ └────┬───┘ │
│ │ │ │
│ ┌────┴──────────┴───┐ │
│ │ Amazon EFS │ │
│ │ (git repo data) │ │
│ │ Multi-AZ, IA │ │
│ └──────────────────┘ │
│ │
│ ┌─────────────┐ ┌──────────────┐ │
│ │Aurora Srvls │ │ElastiCache │ │
│ │v2 (PG 15) │ │Redis (t4g) │ │
│ │Writer + Rdr│ │1 node │ │
│ └─────────────┘ └──────────────┘ │
└──────────────────────────────────────┘
AWS Service Selection & Pricing
Compute — ECS Fargate
| Service | Config | Monthly (ARM/Graviton) |
|---|---|---|
| Morphee Backend (2 tasks) | 0.5 vCPU, 1 GB each | ~$53 |
| Forgejo (1 task) | 0.5 vCPU, 1 GB | ~$26 |
| Subtotal | 3 tasks | ~$79 |
ARM/Graviton tasks are roughly 20% cheaper than x86 Fargate, and both Python and Forgejo (Go) run natively on ARM.
Storage — Amazon EFS
EFS is the key enabler: a managed NFS filesystem that multiple containers can mount simultaneously. Git repos are mostly small text files, a good fit for this kind of storage.
| Storage Class | Price/GB-month | Data Access (read) | Data Access (write) |
|---|---|---|---|
| Standard | $0.30 | $0.03/GB | $0.06/GB |
| Infrequent Access (IA) | $0.016 | $0.03/GB | $0.06/GB |
| Archive | $0.008 | $0.03/GB | $0.06/GB |
Lifecycle policy: Move files untouched for 30 days to IA, 90 days to Archive. Most git repos are cold data (written once, read occasionally).
| Scenario | Storage | Monthly Cost |
|---|---|---|
| 100 groups, 5 spaces, ~50 MB/space | ~25 GB | ~$7.50 (Standard) |
| Same with IA lifecycle | ~5 GB Std + 20 GB IA | ~$1.82 |
| 1,000 groups at scale | ~250 GB with IA | ~$18 |
| 10,000 groups | ~2.5 TB with IA/Archive | ~$60 |
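The arithmetic behind the first two rows can be checked directly against the per-GB-month prices from the storage-class table:

```python
# Sanity-check the EFS figures above ($0.30 Standard, $0.016 IA, $0.008
# Archive per GB-month); per-GB data-access charges are excluded, so real
# bills run slightly higher.
STANDARD, IA, ARCHIVE = 0.30, 0.016, 0.008

def efs_monthly_cost(std_gb: float = 0, ia_gb: float = 0, archive_gb: float = 0) -> float:
    """Storage-only monthly cost in USD."""
    return std_gb * STANDARD + ia_gb * IA + archive_gb * ARCHIVE

# 100 groups x 5 spaces x ~50 MB ≈ 25 GB, all in Standard:
print(round(efs_monthly_cost(std_gb=25), 2))           # 7.5
# Same footprint once the 30-day lifecycle policy has tiered most of it to IA:
print(round(efs_monthly_cost(std_gb=5, ia_gb=20), 2))  # 1.82
```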
Why EFS over EBS:
- EBS is single-AZ, single-attach — can't share between Fargate tasks
- EFS is multi-AZ, multi-attach — multiple Forgejo/backend tasks read the same repos
- EFS Elastic Throughput auto-scales (no provisioning)
Database — Aurora Serverless v2
| Config | Min ACU | Monthly Minimum | Typical Monthly |
|---|---|---|---|
| Writer only (0.5 min ACU) | 0.5 | $43.80 | ~$50-70 |
| Writer + 1 Reader (0.5 each) | 1.0 | $87.60 | ~$100-150 |
| Storage (10-50 GB) | — | $1-5 | $1-5 |
Important: Aurora Serverless v2 cannot scale to zero. The minimum is 0.5 ACU ($43.80/mo). For dev/staging, consider a regular db.t4g.micro at ~$13/mo instead.
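The $43.80 floor falls out of the ACU math, assuming the us-east-1 Serverless v2 rate of roughly $0.12 per ACU-hour (rates vary by region; check current AWS pricing):

```python
# Minimum ACUs x hours in a month x assumed rate (~$0.12/ACU-hour).
ACU_RATE_USD_PER_HOUR = 0.12
HOURS_PER_MONTH = 730

def aurora_monthly_floor(min_acu: float) -> float:
    return min_acu * ACU_RATE_USD_PER_HOUR * HOURS_PER_MONTH

print(round(aurora_monthly_floor(0.5), 2))  # 43.8  (writer only)
print(round(aurora_monthly_floor(1.0), 2))  # 87.6  (writer + reader at 0.5 each)
```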
Alternative — RDS PostgreSQL (cheaper for predictable workloads):
| Instance | vCPU | RAM | Monthly | Multi-AZ |
|---|---|---|---|---|
| db.t4g.micro | 2 | 1 GB | ~$13 | ~$26 |
| db.t4g.small | 2 | 2 GB | ~$26 | ~$52 |
| db.t4g.medium | 2 | 4 GB | ~$55 | ~$110 |
Recommendation: Start with db.t4g.small (~$26/mo) with automated backups. Move to Aurora when you need auto-scaling or read replicas.
Cache — ElastiCache Redis
| Instance | Monthly |
|---|---|
| cache.t4g.micro (0.5 GB) | ~$12 |
| cache.t4g.small (1.5 GB) | ~$24 |
Load Balancer — ALB
| Component | Monthly |
|---|---|
| ALB fixed cost | ~$16 |
| LCU (per 1M requests) | ~$5-10 |
CDN — CloudFront
| Component | Monthly |
|---|---|
| First 1 TB/mo | Free |
| Additional data transfer | $0.085/GB |
| HTTPS requests (10M) | ~$10 |
Total AWS Cost Estimate
| Component | Monthly (Minimum) | Monthly (Typical) |
|---|---|---|
| ECS Fargate (3 tasks, ARM) | $79 | $79 |
| RDS PostgreSQL (t4g.small) | $26 | $26 |
| EFS (Standard + IA) | $2 | $18 |
| ElastiCache Redis | $12 | $12 |
| ALB | $16 | $22 |
| CloudFront | $0 | $10 |
| R2 Backups | $1 | $3 |
| Total | ~$136 | ~$170 |
With Aurora Serverless v2 instead of RDS: add ~$20-40/mo.
Migration from Coolify to AWS
1. Set up VPC with 2 public + 2 private subnets (Multi-AZ)
2. Create RDS PostgreSQL — restore from the Coolify backup
3. Create EFS filesystem — mount it in the Forgejo ECS task
4. Create ECR repositories — push Docker images from CI/CD
5. Create ECS cluster (Fargate) — deploy backend + Forgejo task definitions
6. Create ALB — route `api.morphee.app` → backend and `git.morphee.app` → Forgejo
7. Create CloudFront distribution — serve the frontend from S3
8. Update DNS — point domains to ALB/CloudFront
9. Migrate git repos — `tar` from the Coolify volume, upload to EFS
10. Smoke test — verify all flows work
11. Cut over — update DNS TTL to 60s, switch, monitor
Zero-downtime approach: Run both Coolify and AWS in parallel for 48 hours. Use DNS weighted routing (50/50 → 100% AWS).
Phase 4 — Multi-Region (>50,000 users)
This is future planning — document the architecture now, implement when needed.
┌─────── CloudFront Global ──────┐
│ │
┌────────┴────────┐ ┌────────┴────────┐
│ US-EAST-1 │ │ EU-WEST-1 │
│ │ │ │
│ ECS + ALB │ │ ECS + ALB │
│ Aurora Writer │──replicate──▶│ Aurora Reader │
│ EFS │ │ EFS (sync) │
│ ElastiCache │ │ ElastiCache │
└─────────────────┘ └─────────────────┘
Key decisions for multi-region:
- Aurora Global Database for cross-region PostgreSQL replication (~1 second lag)
- EFS replication via AWS DataSync or application-level git mirroring
- Route 53 latency-based routing
- Write-primary region with read replicas globally
- Estimated additional cost: $400-600/mo per region
Alternative: PostgreSQL Object Database (Long-term)
An elegant alternative to Forgejo is storing git objects directly in PostgreSQL via a custom libgit2 ODB (Object DataBase) backend. This eliminates the need for a separate git server entirely.
How It Works
┌─────────────┐ ┌──────────────┐ ┌──────────────────┐
│ Tauri/Rust │ │ Python │ │ PostgreSQL │
│ (git2 crate) │────▶│ (pygit2) │────▶│ git_objects table │
│ │ │ │ │ (oid, type, data) │
└─────────────┘ └──────────────┘ └──────────────────┘
- Git objects (blobs, trees, commits, tags) stored as `bytea` in a `git_objects` table
- References stored in a `git_refs` table
- libgit2/git2 crate supports custom ODB backends — swap filesystem for PostgreSQL
- All git data lives in one database — no filesystem dependency, no EFS
Schema
CREATE TABLE git_objects (
    space_id UUID     NOT NULL REFERENCES spaces(id) ON DELETE CASCADE,
    oid      TEXT     NOT NULL,  -- SHA-1 hex (40 chars)
    type     SMALLINT NOT NULL,  -- 1=commit, 2=tree, 3=blob, 4=tag
    data     BYTEA    NOT NULL,  -- raw git object (zlib compressed)
    size     INTEGER  NOT NULL,  -- uncompressed size
    PRIMARY KEY (space_id, oid)
);

CREATE TABLE git_refs (
    space_id UUID NOT NULL REFERENCES spaces(id) ON DELETE CASCADE,
    name     TEXT NOT NULL,  -- e.g., "refs/heads/main"
    target   TEXT NOT NULL,  -- commit OID
    PRIMARY KEY (space_id, name)
);

-- Index for listing objects by type (garbage collection)
CREATE INDEX idx_git_objects_type ON git_objects (space_id, type);
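For intuition about the `oid` and `data` columns: a git object id is the SHA-1 of a `"<type> <size>\0"` header plus the raw content, and loose objects are stored as zlib of that same header-plus-content; the sketch below mirrors this for blobs (helper names are illustrative).

```python
# Compute a blob's object id and its storage bytes the way git does; this is
# what the oid and data columns of git_objects would hold for a blob.
import hashlib
import zlib

def git_blob_oid(content: bytes) -> str:
    """Same result as `git hash-object` for a blob."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

def blob_storage_bytes(content: bytes) -> bytes:
    """What would be written to git_objects.data for this blob."""
    header = f"blob {len(content)}\0".encode()
    return zlib.compress(header + content)

print(git_blob_oid(b"hello world\n"))  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```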
Pros and Cons
| Aspect | PostgreSQL ODB | Forgejo |
|---|---|---|
| Simplicity | One database for everything | Separate service to manage |
| Backup | Single pg_dump backs up all data | Separate DB + repo backups |
| Scaling | Scales with PostgreSQL (RDS, Aurora) | Needs EFS or shared storage |
| Implementation | Custom libgit2 backend (complex) | Standard git server (simple) |
| Performance | Slower for large repos (DB round-trips) | Native filesystem (fast) |
| Ecosystem | No git web UI, no webhooks | Full git features |
| Tauri offline | Works — git2 custom ODB compiles on all platforms | N/A — Tauri uses local git |
Recommendation: Start with Forgejo (Phase 1) for simplicity. Consider PostgreSQL ODB for V2.0+ when you want to eliminate the Forgejo dependency and have all data in one database.
Scaling Thresholds — When to Migrate
| Signal | Current Value | Threshold | Action |
|---|---|---|---|
| Users (total) | <50 | >200 | Phase 2 (managed DB) |
| Concurrent WebSockets | <20 | >100 | Add backend replica |
| Git repos (total) | <50 | >500 | Move to Forgejo |
| Git storage (GB) | <1 | >50 | EFS or R2 for backups |
| API latency p95 | <200ms | >500ms | Add backend replica |
| Database connections | <20 | >80 | Connection pooler (PgBouncer) |
| CPU utilization | <30% | >70% sustained | Upgrade server or Phase 3 |
| Memory utilization | <50% | >80% | Upgrade server or Phase 3 |
| Revenue (MRR) | $0 | >$500 | Phase 3 justified |
Monitoring Checklist
Set up alerts for these metrics (Coolify dashboard or Prometheus/Grafana):
- CPU > 80% for 5 minutes
- Memory > 85% for 5 minutes
- Disk > 90%
- API error rate > 5%
- API latency p95 > 1 second
- WebSocket disconnection rate > 10%
- Database connection pool exhaustion
- Redis memory > 200 MB
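If you take the Prometheus/Grafana route, the first checklist items translate to alerting rules along these lines. This is a sketch: `node_cpu_seconds_total` assumes node_exporter is running, and `http_requests_total` is a conventional counter name the backend would need to expose.

```yaml
groups:
  - name: morphee-alerts
    rules:
      - alert: HighCpu
        expr: 100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "CPU above 80% for 5 minutes"
      - alert: HighApiErrorRate
        expr: sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m])) > 0.05
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "API 5xx error rate above 5%"
```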
Data Durability Strategy
Backup Schedule
| Data | Frequency | Retention | Destination |
|---|---|---|---|
| PostgreSQL | Daily (automated) | 30 days | Cloudflare R2 |
| Git repos (Forgejo) | Daily (tar + upload) | 14 days | Cloudflare R2 |
| Redis | None (ephemeral) | — | — |
| Uploaded files | Daily (rsync) | 14 days | Cloudflare R2 |
Recovery Time Objectives
| Scenario | RTO | RPO | Recovery Method |
|---|---|---|---|
| Backend crash | 30 seconds | 0 | Docker auto-restart |
| Database corruption | 1 hour | 24 hours | Restore from R2 backup |
| Server failure | 2 hours | 24 hours | New server + restore |
| Datacenter outage | 4 hours | 24 hours | New provider + restore |
| With AWS Phase 3 | 5 minutes | ~0 | Multi-AZ automatic failover |
Cloudflare R2 Backup Cost
| Storage | Monthly |
|---|---|
| 10 GB (first year) | Free |
| 10-100 GB | $0.015/GB = $0.15-1.50 |
| Egress | Always free |
| Operations (1M writes/mo) | $4.50 |
Security Considerations
Network Isolation
- Phase 1 (Coolify): Forgejo listens only on the Docker internal network. The backend reaches it at `http://forgejo:3000`. Only Traefik exposes port 443.
- Phase 3 (AWS): Forgejo in a private subnet, ALB in a public subnet; security groups restrict access.
Credential Management
| Secret | Phase 1 (Coolify) | Phase 3 (AWS) |
|---|---|---|
| Forgejo API token | Coolify env vars | AWS Secrets Manager |
| Database password | Coolify env vars | RDS IAM auth |
| Webhook HMAC secret | Coolify env vars | Secrets Manager |
| Backup encryption key | Coolify env vars | KMS |
Git Repo Access Control
- Forgejo registration is disabled — no external users can create accounts
- All repo operations go through the Morphee backend via API token
- Each group's repos are in a private Forgejo organization
- The backend validates `group_id` ownership before any git operation
- Webhook secrets use HMAC-SHA256 for payload verification
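A minimal verification helper for the HMAC-SHA256 check, assuming the Gitea-style `X-Gitea-Signature` header carrying a hex-encoded HMAC-SHA256 of the raw request body (confirm the exact header name against the Forgejo webhook docs):

```python
# Recompute the payload digest and compare in constant time.
import hashlib
import hmac

def verify_webhook_signature(secret: str, raw_body: bytes, signature_hex: str) -> bool:
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# In a FastAPI handler this would run on the raw body, before JSON parsing:
#   sig = request.headers.get("X-Gitea-Signature", "")
#   if not verify_webhook_signature(WEBHOOK_SECRET, await request.body(), sig):
#       raise HTTPException(status_code=401)
```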
Implementation Roadmap
V1.0 (Now — Week 1-2)
- Add Forgejo Docker service to Coolify
- Create `ForgejoClient` in the backend (`backend/git/forgejo_client.py`)
- Wire into `GitStoreService` — create org/repo on group/space creation
- Set up daily backup to R2
- Update `docker-compose.yml` with the Forgejo service
V1.0 (Week 3-4)
- Migrate existing git repos from the local volume to Forgejo
- Add webhook handler for push events
- Integration tests for Forgejo API operations
- Update `docs/deployment.md` and `docs/COOLIFY_DEPLOYMENT.md`
V1.2+
- Evaluate Phase 2 readiness based on user growth
- Prototype PostgreSQL ODB backend (if eliminating Forgejo is desired)
- Set up monitoring dashboards (Grafana or Coolify built-in)
V2.0+
- AWS infrastructure-as-code (Terraform or CDK)
- ECS task definitions for backend + Forgejo
- Aurora Serverless v2 setup
- CloudFront distribution for frontend
- CI/CD pipeline for ECS deployments
Related Documentation
- Deployment Guide — Current single-server deployment
- Coolify Deployment Guide — Step-by-step Coolify setup
- Architecture Overview — System architecture diagram
- OpenMorph Specification — Git-native `.morph/` protocol
- Status & Progress — Current development status