

Docker Compose is the quickest way to get a self-hosted Titan instance running on a single server or local development machine. The stack includes all six services Titan needs — API server, Runner, Media worker, PostgreSQL, Redis, and RabbitMQ — wired together and ready to start with a single command.

Prerequisites

  • Docker Engine 24+ and Docker Compose v2 (invoked as docker compose, not the legacy docker-compose)
  • A Titan license key — contact Titan if you do not have one yet
  • Outbound internet access from the server (for WhatsApp connectivity)
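Before going further, it is worth confirming the tooling is in place. A quick check along these lines (a sketch; adapt to your shell):

```shell
# Verify the required tools are on PATH; Compose v2 ships as the
# "docker compose" subcommand, not the legacy docker-compose binary.
missing=0
for bin in docker curl; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
  else
    echo "$bin: missing"
    missing=$((missing + 1))
  fi
done
docker compose version 2>/dev/null || echo "docker compose v2 not available"
```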

Setup

1. Download the Compose file

Download the official docker-compose.selfhost.yml file and rename it to docker-compose.yml:
curl -o docker-compose.yml \
  https://raw.githubusercontent.com/titan-api/titan/main/docker-compose.selfhost.yml
The file defines six services:
Service        Description                        Default port
titan-api      Titan API server                   8080 (API), 9090 (metrics)
titan-runner   WhatsApp session manager           — (internal only)
titan-media    Media download and S3 upload       8082
postgres       PostgreSQL 16                      5432
redis          Redis 7                            6379
rabbitmq       RabbitMQ 3.13 with management UI   5672, 15672
2. Create your .env file

Copy the example file and fill in your values:
curl -o .env \
  https://raw.githubusercontent.com/titan-api/titan/main/.env.selfhost.example
Edit .env and set the following required variables:
.env
# --- Required ---
LICENSE_KEY=your-titan-license-key

# Master key for the Admin API — use a strong random value
MASTER_KEY=changeme

# Secret used to sign API keys — generate with: openssl rand -hex 32
API_KEY_SECRET=changeme_generate_a_random_secret

# Database credentials
POSTGRES_USER=titan
POSTGRES_PASSWORD=changeme

# RabbitMQ credentials
RABBITMQ_USER=titan
RABBITMQ_PASSWORD=changeme

# --- S3 / Media Storage (required if you want media persistence) ---
S3_ENDPOINT=https://s3.amazonaws.com
S3_BUCKET=my-titan-media
S3_REGION=us-east-1
S3_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE
S3_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
MEDIA_AUTO_PERSIST=false

# --- Optional tuning ---
LOG_LEVEL=info
MAX_SESSIONS_PER_POD=50
ENV=production
API_KEY_SECRET and MASTER_KEY are separate credentials. API_KEY_SECRET is used internally to sign user-facing API keys. MASTER_KEY is the bearer token you use to authenticate against the Admin API. Both must be set before starting the stack.
Never commit .env to version control. Add it to .gitignore before your first commit.
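To replace the changeme placeholders, one option is to generate values with openssl, as the file's own comment suggests for API_KEY_SECRET. The variable names below match the example .env:

```shell
# openssl rand -hex N emits 2*N hex characters of cryptographically
# strong randomness; paste the output into .env.
MASTER_KEY="$(openssl rand -hex 32)"
API_KEY_SECRET="$(openssl rand -hex 32)"
POSTGRES_PASSWORD="$(openssl rand -hex 16)"
RABBITMQ_PASSWORD="$(openssl rand -hex 16)"
echo "MASTER_KEY=$MASTER_KEY"
echo "API_KEY_SECRET=$API_KEY_SECRET"
```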
3. Start the stack

Pull images and start all services in detached mode:
docker compose up -d
Docker Compose starts the infrastructure services first and waits for their health checks to pass before starting the Titan services. On a fresh machine, the first start takes a minute or two while images are pulled.

Watch the logs to confirm everything is running:
docker compose logs -f titan-api
You should see a line similar to:
{"level":"info","msg":"server started","port":8080}
4. Verify the health check

Confirm that the API, database, Redis, and RabbitMQ are all reachable:
curl http://localhost:8080/health
A healthy instance returns:
{
  "status": "ok",
  "database": "ok",
  "redis": "ok",
  "rabbitmq": "ok"
}
If any dependency returns "error", check docker compose logs <service-name> for details.
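In deployment scripts, it can be handy to block until the health check passes before proceeding. A sketch built on the endpoint shown above; the retry count and interval are arbitrary choices:

```shell
# Poll /health until "status":"ok" appears or attempts run out.
# Usage: wait_for_health http://localhost:8080/health 30
wait_for_health() {
  url="$1"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS "$url" 2>/dev/null | grep -q '"status": *"ok"'; then
      echo "healthy after $((i + 1)) attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "gave up after $attempts attempts" >&2
  return 1
}
```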

Environment variable reference

The following table covers every variable that changes runtime behavior. Variables with a default value are optional; all others are required.
Variable               Description                                                        Default
LICENSE_KEY            Your Titan license key                                             —
MASTER_KEY             Bearer token for the Admin API                                     master_changeme
API_KEY_SECRET         Secret for signing API keys; generate with openssl rand -hex 32   changeme_generate_a_random_secret
POSTGRES_USER          PostgreSQL username                                                titan
POSTGRES_PASSWORD      PostgreSQL password                                                titan
RABBITMQ_USER          RabbitMQ username                                                  guest
RABBITMQ_PASSWORD      RabbitMQ password                                                  guest
S3_ENDPOINT            S3-compatible storage endpoint URL                                 —
S3_BUCKET              Storage bucket name                                                —
S3_REGION              Storage bucket region                                              —
S3_ACCESS_KEY          Storage access key ID                                              —
S3_SECRET_KEY          Storage secret access key                                          —
MEDIA_AUTO_PERSIST     Automatically save all incoming media to S3                        false
LOG_LEVEL              Log verbosity: debug, info, warn, or error                         info
MAX_SESSIONS_PER_POD   Max concurrent WhatsApp sessions per Runner pod                    50
API_PORT               Host port for the API server                                       8080
MEDIA_PORT             Host port for the media worker                                     8082
METRICS_PORT           Host port for Prometheus metrics                                   9090
ENV                    Runtime environment (production or development)                    production
To use MinIO or Cloudflare R2 instead of AWS S3, set S3_ENDPOINT to your MinIO or R2 endpoint URL and use the corresponding access credentials. The API is S3-compatible.
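For instance, a MinIO-backed configuration could look like the fragment below (the endpoint, bucket, and credentials are illustrative placeholders, not real values):

```shell
# .env fragment for a self-hosted MinIO instance (placeholder values)
S3_ENDPOINT=https://minio.internal.example.com:9000
S3_BUCKET=titan-media
S3_REGION=us-east-1
S3_ACCESS_KEY=minio-access-key
S3_SECRET_KEY=minio-secret-key
```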

Network and security

The Compose file exposes service ports on localhost by default. Before putting this instance in front of real traffic:
  • Place the API server behind a reverse proxy (nginx, Caddy, or a cloud load balancer) with TLS termination
  • Do not expose the PostgreSQL (5432), Redis (6379), or RabbitMQ (5672) ports to the public internet — they are for internal service communication only
  • The RabbitMQ management UI (15672) should be firewalled or tunnelled rather than exposed publicly
  • Set RABBITMQ_USER and RABBITMQ_PASSWORD to non-default values — Titan logs a warning at startup if the default guest/guest credentials are in use
The default credentials in .env.selfhost.example are placeholders. Replace every changeme value before running in any environment that handles real data.
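As one concrete shape for the reverse-proxy layer, an nginx server block terminating TLS in front of the API might look like this (the domain and certificate paths are placeholders for your own setup):

```nginx
server {
    listen 443 ssl;
    server_name titan.example.com;

    ssl_certificate     /etc/letsencrypt/live/titan.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/titan.example.com/privkey.pem;

    location / {
        # Forward to the API server on its default localhost port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```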

Scaling

The API server is fully stateless and can run as multiple replicas behind a load balancer. To scale the API tier:
docker compose up -d --scale titan-api=3
Runner pods manage session state and scale independently. Each Runner pod handles up to MAX_SESSIONS_PER_POD concurrent sessions (default: 50). If a Runner pod stops unexpectedly, the API detects the stale heartbeat within 15 seconds and reassigns its sessions to healthy pods. For production workloads with more than a few hundred sessions, consider moving to Kubernetes, which adds horizontal pod autoscaling and pod disruption budgets.
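As a back-of-envelope sizing check, the Runner replica count for a target session load is a ceiling division by MAX_SESSIONS_PER_POD:

```shell
# Pods needed for SESSIONS concurrent sessions at the default
# MAX_SESSIONS_PER_POD of 50, using integer ceiling division.
SESSIONS=400
PER_POD=50
PODS=$(( (SESSIONS + PER_POD - 1) / PER_POD ))
echo "need $PODS runner pod(s)"   # need 8 runner pod(s)
```

With Compose, that count would feed into docker compose up -d --scale titan-runner=8, assuming the runner service tolerates multiple replicas the same way the API does.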

Updating

To update to a new Titan version, pull the latest images and restart:
docker compose pull
docker compose up -d
Database migrations run automatically on API startup.
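Because migrations run automatically on startup, a database snapshot before pulling a new version is cheap insurance. A sketch, assuming the postgres service name and the titan user/database from the example .env:

```shell
# Dump the database to a timestamped file before upgrading.
# -T disables TTY allocation so the output redirects cleanly.
BACKUP_FILE="titan-backup-$(date +%Y%m%d-%H%M%S).sql"
docker compose exec -T postgres pg_dump -U titan titan > "$BACKUP_FILE" \
  || echo "backup failed - is the stack running?" >&2
```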

Useful commands

# View logs for a specific service
docker compose logs -f titan-api

# Restart a single service
docker compose restart titan-api

# Stop the stack without removing volumes
docker compose stop

# Remove the stack and all data volumes
docker compose down -v