High Availability & Scaling — Morphee Infrastructure

Last updated: 2026-02-20 Status: 📋 Reference architecture — Phase 1 ready for implementation


Overview

Morphee's data architecture revolves around git-backed Spaces (OpenMorph .morph/ directories). Every Space is a git repository storing tasks, skills, configs, memory, and canvas state as version-controlled files. This document covers how to make that storage highly available, and how to scale the entire platform from a single Coolify server to a multi-region AWS deployment.

Current state: Single Hetzner VPS running Coolify (~$17/mo). All services (backend, frontend, Redis, PostgreSQL, GoTrue) run as Docker containers on one server with local volumes.

Target: A clear, incremental migration path with cost-aware thresholds for when to move to each phase.


Architecture Phases

| Phase | Name | When | Monthly Cost | Users |
|-------|------|------|--------------|-------|
| 0 | Single VPS (current) | Now | ~$17 | 1-50 |
| 1 | Forgejo on Coolify | V1.0 launch | ~$17 (same server) | 1-200 |
| 2 | Managed services + replicas | >200 users | ~$80-120 | 200-2,000 |
| 3 | AWS full stack | >2,000 users | ~$200-400 | 2,000-50,000 |
| 4 | Multi-region | >50,000 users | ~$800+ | 50,000+ |

Phase 0 — Current Architecture (Single VPS)

┌──────────────────── Hetzner CPX31 (4 vCPU, 8 GB) ────────────────────┐
│                                                                      │
│  Coolify (Traefik reverse proxy + Let's Encrypt)                     │
│                                                                      │
│  ┌─────────┐ ┌──────────┐ ┌───────┐ ┌──────────┐ ┌─────────────┐     │
│  │Frontend │ │ Backend  │ │ Redis │ │PostgreSQL│ │   GoTrue    │     │
│  │ (Nginx) │ │(FastAPI) │ │   7   │ │    15    │ │ (Supabase)  │     │
│  └─────────┘ └──────────┘ └───────┘ └──────────┘ └─────────────┘     │
│                                                                      │
│  /data/memory (git repos)            /data/files                     │
│  (local Docker volume)               (local Docker volume)           │
└──────────────────────────────────────────────────────────────────────┘

Risks:

  • Single point of failure — server dies, everything is gone
  • Git repos on local disk — no replication
  • No automated backups (script exists but manual)
  • No horizontal scaling

Phase 1 — Forgejo on Coolify ($0 extra)

Goal: Give git repos a proper home with an API, webhooks, and backup-friendly storage. Zero additional cost — runs on the same VPS.

Why Forgejo

Forgejo is a community fork of Gitea — a lightweight, self-hosted Git server. It's the right fit because:

  • Tiny footprint: ~80 MB RAM idle, ~200 MB under load (vs GitLab's 4 GB+)
  • Gitea-compatible API: Full REST API at /api/v1/ for org/repo CRUD, webhooks, file access
  • Shares PostgreSQL: Uses our existing database — no new stateful service
  • Docker image: codeberg.org/forgejo/forgejo:13 (~100 MB)
  • Registration disabled: We manage repos via API only — no user-facing Git UI needed

Architecture

┌──────────────────── Hetzner CPX31 ───────────────────────────────────┐
│                                                                      │
│  Coolify (Traefik)                                                   │
│                                                                      │
│  ┌─────────┐ ┌──────────┐ ┌──────────┐ ┌───────┐ ┌──────────┐        │
│  │Frontend │ │ Backend  │ │ Forgejo  │ │ Redis │ │PostgreSQL│        │
│  │ (Nginx) │ │(FastAPI) │ │(Git srvr)│ │   7   │ │    15    │        │
│  └─────────┘ └────┬─────┘ └────┬─────┘ └───────┘ └─────┬────┘        │
│                   │ HTTP API   │                       │             │
│                   └────────────┤    git push/pull      │             │
│                                │                       │             │
│                     forgejo-data:/data             shared DB         │
│                     (repos on disk)                                  │
└──────────────────────────────────────────────────────────────────────┘

Docker Compose Service

Add this to your Coolify deployment (or create a separate Docker Compose resource):

services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:13
    container_name: forgejo
    restart: unless-stopped
    environment:
      # --- Database (shared PostgreSQL) ---
      FORGEJO__database__DB_TYPE: postgres
      FORGEJO__database__HOST: ${FORGEJO_DB_HOST:-supabase-db}:5432
      FORGEJO__database__NAME: ${FORGEJO_DB_NAME:-forgejo}
      FORGEJO__database__USER: ${FORGEJO_DB_USER:-forgejo}
      FORGEJO__database__PASSWD: ${FORGEJO_DB_PASSWORD}
      FORGEJO__database__SSL_MODE: ${FORGEJO_DB_SSL:-disable}

      # --- Server ---
      FORGEJO__server__DOMAIN: ${FORGEJO_DOMAIN:-git.morphee.app}
      FORGEJO__server__ROOT_URL: https://${FORGEJO_DOMAIN:-git.morphee.app}/
      FORGEJO__server__SSH_DOMAIN: ${FORGEJO_DOMAIN:-git.morphee.app}
      FORGEJO__server__HTTP_PORT: "3000"
      FORGEJO__server__LFS_START_SERVER: "true"

      # --- Access control ---
      FORGEJO__service__DISABLE_REGISTRATION: "true"
      FORGEJO__service__REQUIRE_SIGNIN_VIEW: "true"

      # --- Repository defaults ---
      FORGEJO__repository__DEFAULT_BRANCH: main
      FORGEJO__repository__DEFAULT_PRIVATE: private

    volumes:
      - forgejo-data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22"
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "0.5"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/api/v1/version"]
      interval: 30s
      timeout: 5s
      retries: 3

volumes:
  forgejo-data:

Coolify Deployment Steps

  1. Create a forgejo database in your existing PostgreSQL:

    CREATE DATABASE forgejo;
    CREATE USER forgejo WITH PASSWORD 'your-secure-password';
    GRANT ALL PRIVILEGES ON DATABASE forgejo TO forgejo;
  2. Add Forgejo as a Docker Compose resource in Coolify:

    • New Resource → Docker Compose → paste the YAML above
    • Set environment variables in Coolify UI
  3. Configure domain: Set git.morphee.app in Coolify proxy settings. Coolify auto-provisions the SSL cert via Traefik.

  4. Persistent Storage: In Coolify's Persistent Storage panel, add the forgejo-data volume mounted at /data

  5. First boot: Forgejo auto-initializes the database schema and creates an admin user

  6. Create API token: Log into the Forgejo web UI once to create an admin API token, then store it as FORGEJO_TOKEN in your backend's environment

Forgejo API Integration

The Morphee backend uses the Forgejo API to manage git repos programmatically. Key mapping:

| Morphee Concept | Forgejo Concept | API Endpoint |
|-----------------|-----------------|--------------|
| Group | Organization | POST /api/v1/orgs |
| Space | Repository | POST /api/v1/orgs/{org}/repos |
| Memory sync | Git push/pull | Standard git protocol |
| Webhook (change notification) | Repo webhook | POST /api/v1/repos/{owner}/{repo}/hooks |

Create a Group (Organization):

curl -X POST https://git.morphee.app/api/v1/orgs \
  -H "Authorization: token ${FORGEJO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "username": "group-abc123",
    "full_name": "Family Smith",
    "visibility": "private"
  }'

Create a Space (Repository):

curl -X POST https://git.morphee.app/api/v1/orgs/group-abc123/repos \
  -H "Authorization: token ${FORGEJO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "space-xyz789",
    "description": "Space: Family Tasks",
    "private": true,
    "auto_init": true,
    "default_branch": "main"
  }'

Register a webhook (push notifications):

curl -X POST https://git.morphee.app/api/v1/repos/group-abc123/space-xyz789/hooks \
  -H "Authorization: token ${FORGEJO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "forgejo",
    "active": true,
    "config": {
      "url": "https://api.morphee.app/hooks/forgejo",
      "content_type": "json",
      "secret": "webhook-hmac-secret"
    },
    "events": ["push"]
  }'

Backend Integration Service

The MorphDiscoveryService (or GitStoreService) wraps these API calls:

# backend/git/forgejo_client.py (future)

class ForgejoClient:
    """HTTP client for Forgejo API — manages orgs/repos for Morphee groups/spaces."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.headers = {
            "Authorization": f"token {token}",
            "Content-Type": "application/json",
        }

    async def create_org(self, group_id: str, display_name: str) -> dict:
        """Create Forgejo org for a Morphee group."""
        ...

    async def create_repo(self, group_id: str, space_id: str, name: str) -> dict:
        """Create Forgejo repo for a Morphee space."""
        ...

    async def delete_repo(self, group_id: str, space_id: str) -> None:
        """Delete repo when a space is deleted."""
        ...

    async def get_clone_url(self, group_id: str, space_id: str) -> str:
        """Get the git clone URL for a space's repo."""
        ...
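As an illustration of how one of these stubs could be filled in, here is a minimal synchronous sketch using only the standard library (a real implementation would likely use httpx or aiohttp for async; the payload mirrors the curl example above, and error handling is omitted):

```python
import json
import urllib.request


class ForgejoClient:
    """Sketch: create a private Forgejo org for a Morphee group."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.headers = {
            "Authorization": f"token {token}",
            "Content-Type": "application/json",
        }

    def org_payload(self, group_id: str, display_name: str) -> dict:
        # One private organization per Morphee group, as in the curl example
        return {
            "username": group_id,
            "full_name": display_name,
            "visibility": "private",
        }

    def create_org(self, group_id: str, display_name: str) -> dict:
        req = urllib.request.Request(
            f"{self.base_url}/api/v1/orgs",
            data=json.dumps(self.org_payload(group_id, display_name)).encode(),
            headers=self.headers,
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:  # network call to Forgejo
            return json.load(resp)
```

The payload builder is kept separate from the request so it can be unit-tested without a running Forgejo instance.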

Backup Strategy (Phase 1)

With Forgejo, backups become simple:

#!/bin/bash
# backup-forgejo.sh — run daily via cron

BACKUP_DIR="/data/backups/forgejo"
DATE=$(date +%Y%m%d_%H%M%S)

# 1. Dump Forgejo database
pg_dump -U forgejo -h localhost -d forgejo | gzip > "${BACKUP_DIR}/forgejo_db_${DATE}.sql.gz"

# 2. Tar the git repos (Forgejo stores them in /data/forgejo/repositories/)
tar -czf "${BACKUP_DIR}/forgejo_repos_${DATE}.tar.gz" -C /data/forgejo repositories/

# 3. Upload to S3 / R2 (optional — recommended)
# aws s3 cp "${BACKUP_DIR}/forgejo_repos_${DATE}.tar.gz" s3://morphee-backups/forgejo/
# Or with rclone for Cloudflare R2:
# rclone copy "${BACKUP_DIR}/" r2:morphee-backups/forgejo/ --max-age 1h

# 4. Retention: keep last 7 daily backups locally
find "${BACKUP_DIR}" -name "*.gz" -mtime +7 -delete

Cloudflare R2 is recommended for offsite backups:

  • Free egress (no data transfer fees)
  • $0.015/GB/month storage
  • S3-compatible API (works with aws CLI and rclone)
  • 10 GB free tier

Phase 2 — Managed Services + Replicas (~$80-120/mo)

When to move: >200 users, or when you need zero-downtime deploys and automatic failover.

What changes:

  • Move PostgreSQL to a managed service (Supabase Pro or Hetzner Managed PostgreSQL)
  • Add a second backend instance behind Coolify's load balancer
  • Move backups to Cloudflare R2 with automated daily schedule
  • Optional: move Forgejo to its own small VPS

Architecture

              ┌──────────── Cloudflare ─────────────┐
              │  DNS: morphee.app                   │
              │  R2: morphee-backups (offsite)      │
              └──────────────┬──────────────────────┘
                             │
          ┌──────────────────┼────────────────────────┐
          │        Coolify (Traefik LB)               │
          │                                           │
          │  ┌──────────┐ ┌──────────┐ ┌────────────┐ │
          │  │Backend 1 │ │Backend 2 │ │  Forgejo   │ │
          │  │(FastAPI) │ │(FastAPI) │ │(Git server)│ │
          │  └─────┬────┘ └─────┬────┘ └─────┬──────┘ │
          │        └─────┬──────┘            │        │
          │       ┌──────┴──────┐            │        │
          │       │   Redis 7   │            │        │
          │       │ (Sentinel)  │            │        │
          │       └─────────────┘            │        │
          └──────────────┬───────────────────┼────────┘
                         │                   │
                ┌────────┴────────┐ ┌────────┴────────┐
                │   Managed PG    │ │ Persistent Vol  │
                │  (Supabase Pro  │ │ (forgejo-data)  │
                │   or Hetzner)   │ └─────────────────┘
                └─────────────────┘

Managed PostgreSQL Options

| Provider | Plan | Monthly Cost | Features |
|----------|------|--------------|----------|
| Supabase | Pro | $25 | Auth included, dashboard, point-in-time recovery, 8 GB storage |
| Hetzner Cloud DB | CPX11 | ~$15 | Managed PG, auto backups, EU data residency |
| Neon | Scale | $19 | Serverless PG, branching, auto-suspend |

Recommendation: Supabase Pro — already integrated (GoTrue auth), includes connection pooling, and the dashboard is useful for debugging.

Multiple Backend Replicas

Coolify supports scaling services to multiple replicas. The Morphee backend is stateless (session state in Redis, data in PostgreSQL) so it scales horizontally:

  1. In Coolify, set the backend service replica count to 2
  2. Coolify's Traefik automatically round-robins between replicas
  3. WebSocket connections are sticky by default (cookie affinity)

Requirements for stateless backend:

  • Redis for pub/sub and rate limiting (already done)
  • No local file writes for user data (git repos moved to Forgejo)
  • JWT auth is stateless (already done)
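Expressed as plain Docker Compose, the replica setting corresponds to something like the sketch below (Coolify manages this through its UI; the image name and env var here are placeholders):

```yaml
services:
  backend:
    image: morphee-backend:latest   # placeholder image name
    deploy:
      replicas: 2                   # two identical stateless instances
    environment:
      # shared state lives in Redis and PostgreSQL, not in the container
      REDIS_URL: redis://redis:6379/0
```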

Phase 3 — AWS Full Stack (~$200-400/mo)

When to move: >2,000 users, need geographic presence, compliance requirements (SOC2, HIPAA), or the Coolify server is hitting resource limits.

Architecture

              ┌──────────── CloudFront CDN ──────────┐
              │  app.morphee.app (frontend)          │
              │  Static assets cached at edge        │
              └──────────────┬───────────────────────┘
                             │
              ┌──────────────┴───────────────────────┐
              │              AWS VPC                 │
              │                                      │
              │        ┌──────── ALB ────────┐       │
              │        │  api.morphee.app    │       │
              │        │  git.morphee.app    │       │
              │        └────┬──────────┬─────┘       │
              │             │          │             │
              │        ┌────┴───┐ ┌────┴───┐         │
              │        │ECS     │ │ECS     │         │
              │        │Backend │ │Forgejo │         │
              │        │Fargate │ │Fargate │         │
              │        │(2 tasks│ │(1 task)│         │
              │        └────┬───┘ └────┬───┘         │
              │             │          │             │
              │        ┌────┴──────────┴───┐         │
              │        │    Amazon EFS     │         │
              │        │  (git repo data)  │         │
              │        │   Multi-AZ, IA    │         │
              │        └───────────────────┘         │
              │                                      │
              │  ┌─────────────┐  ┌──────────────┐   │
              │  │Aurora Srvls │  │ ElastiCache  │   │
              │  │ v2 (PG 15)  │  │ Redis (t4g)  │   │
              │  │Writer + Rdr │  │   1 node     │   │
              │  └─────────────┘  └──────────────┘   │
              └──────────────────────────────────────┘

AWS Service Selection & Pricing

Compute — ECS Fargate

| Service | Config | Monthly (ARM/Graviton) |
|---------|--------|------------------------|
| Morphee Backend (2 tasks) | 0.5 vCPU, 1 GB each | ~$53 |
| Forgejo (1 task) | 0.5 vCPU, 1 GB | ~$26 |
| Subtotal | 3 tasks | ~$79 |

ARM/Graviton Fargate is roughly 20% cheaper than x86, and both the Python backend and Forgejo (written in Go) run natively on ARM.

Storage — Amazon EFS

EFS is the key enabler — it's a managed NFS filesystem that multiple containers can mount simultaneously. Git repos are small text files that fit perfectly.

| Storage Class | Price/GB-month | Data Access (read) | Data Access (write) |
|---------------|----------------|--------------------|---------------------|
| Standard | $0.30 | $0.03/GB | $0.06/GB |
| Infrequent Access (IA) | $0.016 | $0.03/GB | $0.06/GB |
| Archive | $0.008 | $0.03/GB | $0.06/GB |

Lifecycle policy: Move files untouched for 30 days to IA, 90 days to Archive. Most git repos are cold data (written once, read occasionally).

| Scenario | Storage | Monthly Cost |
|----------|---------|--------------|
| 100 groups, 5 spaces, ~50 MB/space | ~25 GB | ~$7.50 (Standard) |
| Same with IA lifecycle | ~5 GB Std + 20 GB IA | ~$1.82 |
| 1,000 groups at scale | ~250 GB with IA | ~$18 |
| 10,000 groups | ~2.5 TB with IA/Archive | ~$60 |
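The scenario estimates above follow directly from the per-GB prices; a quick back-of-the-envelope sketch to reproduce the first two rows (access charges ignored):

```python
# EFS storage-only monthly cost, using the $/GB-month prices from the table
STANDARD = 0.30
IA = 0.016


def efs_monthly_cost(std_gb: float, ia_gb: float = 0.0) -> float:
    """Storage cost only; read/write access charges not included."""
    return round(std_gb * STANDARD + ia_gb * IA, 2)


print(efs_monthly_cost(25))     # all-Standard scenario: 7.5
print(efs_monthly_cost(5, 20))  # with IA lifecycle:     1.82
```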

Why EFS over EBS:

  • EBS is single-AZ, single-attach — can't share between Fargate tasks
  • EFS is multi-AZ, multi-attach — multiple Forgejo/backend tasks read the same repos
  • EFS Elastic Throughput auto-scales (no provisioning)

Database — Aurora Serverless v2

| Config | Min ACU | Monthly Minimum | Typical Monthly |
|--------|---------|-----------------|-----------------|
| Writer only (0.5 min ACU) | 0.5 | $43.80 | ~$50-70 |
| Writer + 1 Reader (0.5 each) | 1.0 | $87.60 | ~$100-150 |
| Storage (10-50 GB) | — | $1-5 | $1-5 |

Important: Aurora Serverless v2 cannot scale to zero. The minimum is 0.5 ACU ($43.80/mo). For dev/staging, consider a regular db.t4g.micro at ~$13/mo instead.

Alternative — RDS PostgreSQL (cheaper for predictable workloads):

| Instance | vCPU | RAM | Monthly | Multi-AZ |
|----------|------|-----|---------|----------|
| db.t4g.micro | 2 | 1 GB | ~$13 | ~$26 |
| db.t4g.small | 2 | 2 GB | ~$26 | ~$52 |
| db.t4g.medium | 2 | 4 GB | ~$55 | ~$110 |

Recommendation: Start with db.t4g.small (~$26/mo) with automated backups. Move to Aurora when you need auto-scaling or read replicas.

Cache — ElastiCache Redis

| Instance | Monthly |
|----------|---------|
| cache.t4g.micro (0.5 GB) | ~$12 |
| cache.t4g.small (1.5 GB) | ~$24 |

Load Balancer — ALB

| Component | Monthly |
|-----------|---------|
| ALB fixed cost | ~$16 |
| LCU (per 1M requests) | ~$5-10 |

CDN — CloudFront

| Component | Monthly |
|-----------|---------|
| First 1 TB/mo | Free |
| Additional data transfer | $0.085/GB |
| HTTPS requests (10M) | ~$10 |

Total AWS Cost Estimate

| Component | Monthly (Minimum) | Monthly (Typical) |
|-----------|-------------------|-------------------|
| ECS Fargate (3 tasks, ARM) | $79 | $79 |
| RDS PostgreSQL (t4g.small) | $26 | $26 |
| EFS (Standard + IA) | $2 | $18 |
| ElastiCache Redis | $12 | $12 |
| ALB | $16 | $22 |
| CloudFront | $0 | $10 |
| R2 Backups | $1 | $3 |
| Total | ~$136 | ~$170 |

With Aurora Serverless v2 instead of RDS: add ~$20-40/mo.

Migration from Coolify to AWS

  1. Set up VPC with 2 public + 2 private subnets (Multi-AZ)
  2. Create RDS PostgreSQL — restore from Coolify backup
  3. Create EFS filesystem — mount in Forgejo ECS task
  4. Create ECR repositories — push Docker images from CI/CD
  5. Create ECS cluster (Fargate) — deploy backend + Forgejo task definitions
  6. Create ALB — route api.morphee.app → backend, git.morphee.app → Forgejo
  7. Create CloudFront distribution — serve frontend from S3
  8. Update DNS — point domains to ALB/CloudFront
  9. Migrate git repos — tar from the Coolify volume, upload to EFS
  10. Smoke test — verify all flows work
  11. Cut over — update DNS TTL to 60s, switch, monitor

Zero-downtime approach: Run both Coolify and AWS in parallel for 48 hours. Use DNS weighted routing (50/50 → 100% AWS).


Phase 4 — Multi-Region (>50,000 users)

This is future planning — document the architecture now, implement when needed.

                  ┌──────── CloudFront Global ────────┐
                  │                                   │
         ┌────────┴────────┐               ┌─────────┴───────┐
         │    US-EAST-1    │               │    EU-WEST-1    │
         │                 │               │                 │
         │  ECS + ALB      │               │  ECS + ALB      │
         │  Aurora Writer  │──replicate──▶ │  Aurora Reader  │
         │  EFS            │               │  EFS (sync)     │
         │  ElastiCache    │               │  ElastiCache    │
         └─────────────────┘               └─────────────────┘

Key decisions for multi-region:

  • Aurora Global Database for cross-region PostgreSQL replication (~1 second lag)
  • EFS replication via AWS DataSync or application-level git mirroring
  • Route 53 latency-based routing
  • Write-primary region with read replicas globally
  • Estimated additional cost: $400-600/mo per region

Alternative: PostgreSQL Object Database (Long-term)

An elegant alternative to Forgejo is storing git objects directly in PostgreSQL via a custom libgit2 ODB (Object DataBase) backend. This eliminates the need for a separate git server entirely.

How It Works

┌──────────────┐     ┌──────────────┐     ┌───────────────────┐
│  Tauri/Rust  │     │    Python    │     │    PostgreSQL     │
│ (git2 crate) │────▶│   (pygit2)   │────▶│ git_objects table │
│              │     │              │     │ (oid, type, data) │
└──────────────┘     └──────────────┘     └───────────────────┘

  • Git objects (blobs, trees, commits, tags) stored as bytea in a git_objects table
  • References stored in a git_refs table
  • libgit2/git2 crate supports custom ODB backends — swap filesystem for PostgreSQL
  • All git data lives in one database — no filesystem dependency, no EFS

Schema

CREATE TABLE git_objects (
    space_id UUID NOT NULL REFERENCES spaces(id) ON DELETE CASCADE,
    oid      TEXT NOT NULL,      -- SHA-1 hex (40 chars)
    type     SMALLINT NOT NULL,  -- 1=commit, 2=tree, 3=blob, 4=tag
    data     BYTEA NOT NULL,     -- raw git object (zlib compressed)
    size     INTEGER NOT NULL,   -- uncompressed size
    PRIMARY KEY (space_id, oid)
);

CREATE TABLE git_refs (
    space_id UUID NOT NULL REFERENCES spaces(id) ON DELETE CASCADE,
    name     TEXT NOT NULL,      -- e.g., "refs/heads/main"
    target   TEXT NOT NULL,      -- commit OID
    PRIMARY KEY (space_id, name)
);

-- Index for listing objects by type (garbage collection)
CREATE INDEX idx_git_objects_type ON git_objects (space_id, type);
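For intuition, the oid column is just the SHA-1 of the object's canonical form (a small header plus the content), so a row for this table is easy to construct by hand. A sketch for a blob object (real code would go through libgit2/pygit2 rather than hashing manually):

```python
import hashlib
import zlib


def git_blob_row(content: bytes) -> dict:
    """Build a git_objects-style row for a blob, the way git itself computes it."""
    store = b"blob %d\x00" % len(content) + content  # header + content
    return {
        "oid": hashlib.sha1(store).hexdigest(),  # same value as `git hash-object`
        "type": 3,                               # 3 = blob, per the schema comment
        "data": zlib.compress(store),            # raw git object, zlib compressed
        "size": len(content),                    # uncompressed size
    }


row = git_blob_row(b"hello world\n")
print(row["oid"])  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```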

Pros and Cons

| Aspect | PostgreSQL ODB | Forgejo |
|--------|----------------|---------|
| Simplicity | One database for everything | Separate service to manage |
| Backup | Single pg_dump backs up all data | Separate DB + repo backups |
| Scaling | Scales with PostgreSQL (RDS, Aurora) | Needs EFS or shared storage |
| Implementation | Custom libgit2 backend (complex) | Standard git server (simple) |
| Performance | Slower for large repos (DB round-trips) | Native filesystem (fast) |
| Ecosystem | No git web UI, no webhooks | Full git features |
| Tauri offline | Works — git2 custom ODB compiles on all platforms | N/A — Tauri uses local git |

Recommendation: Start with Forgejo (Phase 1) for simplicity. Consider PostgreSQL ODB for V2.0+ when you want to eliminate the Forgejo dependency and have all data in one database.


Scaling Thresholds — When to Migrate

| Signal | Current Value | Threshold | Action |
|--------|---------------|-----------|--------|
| Users (total) | <50 | >200 | Phase 2 (managed DB) |
| Concurrent WebSockets | <20 | >100 | Add backend replica |
| Git repos (total) | <50 | >500 | Move to Forgejo |
| Git storage (GB) | <1 | >50 | EFS or R2 for backups |
| API latency p95 | <200ms | >500ms | Add backend replica |
| Database connections | <20 | >80 | Connection pooler (PgBouncer) |
| CPU utilization | <30% | >70% sustained | Upgrade server or Phase 3 |
| Memory utilization | <50% | >80% | Upgrade server or Phase 3 |
| Revenue (MRR) | $0 | >$500 | Phase 3 justified |

Monitoring Checklist

Set up alerts for these metrics (Coolify dashboard or Prometheus/Grafana):

  • CPU > 80% for 5 minutes
  • Memory > 85% for 5 minutes
  • Disk > 90%
  • API error rate > 5%
  • API latency p95 > 1 second
  • WebSocket disconnection rate > 10%
  • Database connection pool exhaustion
  • Redis memory > 200 MB

Data Durability Strategy

Backup Schedule

| Data | Frequency | Retention | Destination |
|------|-----------|-----------|-------------|
| PostgreSQL | Daily (automated) | 30 days | Cloudflare R2 |
| Git repos (Forgejo) | Daily (tar + upload) | 14 days | Cloudflare R2 |
| Redis | None (ephemeral) | — | — |
| Uploaded files | Daily (rsync) | 14 days | Cloudflare R2 |

Recovery Time Objectives

| Scenario | RTO | RPO | Recovery Method |
|----------|-----|-----|-----------------|
| Backend crash | 30 seconds | 0 | Docker auto-restart |
| Database corruption | 1 hour | 24 hours | Restore from R2 backup |
| Server failure | 2 hours | 24 hours | New server + restore |
| Datacenter outage | 4 hours | 24 hours | New provider + restore |
| With AWS Phase 3 | 5 minutes | ~0 | Multi-AZ automatic failover |

Cloudflare R2 Backup Cost

| Storage | Monthly |
|---------|---------|
| 10 GB (first year) | Free |
| 10-100 GB | $0.015/GB = $0.15-1.50 |
| Egress | Always free |
| Operations (1M writes/mo) | $4.50 |

Security Considerations

Network Isolation

  • Phase 1 (Coolify): Forgejo listens only on Docker internal network. Backend accesses it via http://forgejo:3000. Only Traefik exposes port 443.
  • Phase 3 (AWS): Forgejo in private subnet. ALB in public subnet. Security groups restrict access.

Credential Management

| Secret | Phase 1 (Coolify) | Phase 3 (AWS) |
|--------|-------------------|---------------|
| Forgejo API token | Coolify env vars | AWS Secrets Manager |
| Database password | Coolify env vars | RDS IAM auth |
| Webhook HMAC secret | Coolify env vars | Secrets Manager |
| Backup encryption key | Coolify env vars | KMS |

Git Repo Access Control

  • Forgejo registration is disabled — no external users can create accounts
  • All repo operations go through the Morphee backend via API token
  • Each group's repos are in a private Forgejo organization
  • The backend validates group_id ownership before any git operation
  • Webhook secrets use HMAC-SHA256 for payload verification
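The HMAC check on the webhook endpoint can be sketched as follows (the header name and function name are assumptions based on Gitea/Forgejo conventions, where the hex HMAC-SHA256 of the raw body is sent in an X-Forgejo-Signature / X-Gitea-Signature header; verify against your Forgejo version):

```python
import hashlib
import hmac


def verify_forgejo_signature(secret: str, body: bytes, signature_hex: str) -> bool:
    """Compare the hex HMAC-SHA256 of the raw request body against the
    signature header sent by Forgejo. Hash the raw bytes before any JSON
    parsing, and use compare_digest to avoid timing side channels."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```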

Implementation Roadmap

V1.0 (Now — Week 1-2)

  • Add Forgejo Docker service to Coolify
  • Create ForgejoClient in backend (backend/git/forgejo_client.py)
  • Wire into GitStoreService — create org/repo on group/space creation
  • Set up daily backup to R2
  • Update docker-compose.yml with Forgejo service

V1.0 (Week 3-4)

  • Migrate existing git repos from local volume to Forgejo
  • Add webhook handler for push events
  • Integration tests for Forgejo API operations
  • Update docs/deployment.md and docs/COOLIFY_DEPLOYMENT.md

V1.2+

  • Evaluate Phase 2 readiness based on user growth
  • Prototype PostgreSQL ODB backend (if eliminating Forgejo is desired)
  • Set up monitoring dashboards (Grafana or Coolify built-in)

V2.0+

  • AWS infrastructure-as-code (Terraform or CDK)
  • ECS task definitions for backend + Forgejo
  • Aurora Serverless v2 setup
  • CloudFront distribution for frontend
  • CI/CD pipeline for ECS deployments