Documentation Index
Fetch the complete documentation index at: https://docs.usetitan.app/llms.txt
Use this file to discover all available pages before exploring further.
Titan’s official Helm chart deploys the full three-service stack — API server, Runner, and Media worker — on any Kubernetes cluster. For teams on AWS, a one-click CloudFormation stack provisions the entire cluster and managed backing services automatically.
Prerequisites
- A Kubernetes cluster (v1.27+)
- Helm 3 installed locally
- kubectl configured to point at your cluster
- A Titan license key — contact Titan if you do not have one yet
- A PostgreSQL database, Redis instance, and RabbitMQ broker (bring your own, or use the chart dependencies)
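A quick way to sanity-check the tooling prerequisites before installing (assumes kubectl and Helm are already on your PATH):

```shell
# Verify local tooling and cluster reachability
kubectl version          # server version should report v1.27 or newer
helm version --short     # should report a v3.x release
kubectl cluster-info     # confirms kubectl is pointed at the intended cluster
```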
If you are deploying to AWS and want the fastest path to production, the CloudFormation stack provisions everything for you:
- Amazon EKS cluster with managed node groups
- Amazon RDS (PostgreSQL) with automated backups
- Amazon ElastiCache (Redis) in cluster mode
- Amazon MQ (RabbitMQ) with high availability
The stack is production-ready in approximately 15 minutes. Launch it from your AWS console or via the AWS CLI:
aws cloudformation create-stack \
  --stack-name titan \
  --template-url https://titan-cloudformation.s3.amazonaws.com/titan-eks.yaml \
  --parameters \
    ParameterKey=LicenseKey,ParameterValue=your-license-key \
    ParameterKey=MasterKey,ParameterValue=your-master-key \
    ParameterKey=ApiKeySecret,ParameterValue=your-api-key-secret \
  --capabilities CAPABILITY_IAM
Once the stack reaches CREATE_COMPLETE, the Titan Helm chart is already deployed and the load balancer endpoint is available in the stack outputs.
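To read those outputs from the CLI rather than the console, you can query the stack directly (requires AWS credentials; the exact output key names depend on the template, so check the listing rather than assuming them):

```shell
# List all outputs of the titan stack, including the load balancer endpoint
aws cloudformation describe-stacks \
  --stack-name titan \
  --query "Stacks[0].Outputs" \
  --output table
```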
Install with Helm
If you are bringing your own cluster and backing services, install the Helm chart directly.
Add the Titan Helm repository
helm repo add titan https://charts.titanapi.dev
helm repo update
Install the chart
Pass your configuration as --set flags or in a values.yaml file:
helm install titan titan/titan \
  --namespace titan \
  --create-namespace \
  --set api.env.LICENSE_KEY=your-license-key \
  --set api.env.MASTER_KEY=your-master-key \
  --set api.env.API_KEY_SECRET=your-api-key-secret \
  --set database.url="postgres://titan:password@your-db-host:5432/titan" \
  --set redis.url="redis://your-redis-host:6379/0" \
  --set rabbitmq.url="amqp://titan:password@your-rabbitmq-host:5672/"
Alternatively, create a values.yaml file:
api:
  env:
    LICENSE_KEY: "your-license-key"
    MASTER_KEY: "your-master-key"
    API_KEY_SECRET: "your-api-key-secret"
    LOG_LEVEL: "info"
    MAX_SESSIONS_PER_POD: "50"
database:
  url: "postgres://titan:password@your-db-host:5432/titan"
redis:
  url: "redis://your-redis-host:6379/0"
rabbitmq:
  url: "amqp://titan:password@your-rabbitmq-host:5672/"
ingress:
  enabled: true
  host: "titan.example.com"
  tls: true
Then install:
helm install titan titan/titan \
  --namespace titan \
  --create-namespace \
  -f values.yaml
The full list of configurable values is documented in the chart’s values.yaml. Run helm show values titan/titan to see all available options and their defaults.
Verify the deployment
Check that all pods are running:
kubectl get pods -n titan
Expected output:
NAME                           READY   STATUS    RESTARTS   AGE
titan-api-7d9f8b6c4-xk2qp      1/1     Running   0          2m
titan-runner-5c6d9f7b8-p4rjt   1/1     Running   0          2m
titan-media-6b8e7c9d5-m3nws    1/1     Running   0          2m
Confirm the API is healthy:
kubectl port-forward -n titan svc/titan-api 8080:8080
curl http://localhost:8080/health
{
"status": "ok",
"database": "ok",
"redis": "ok",
"rabbitmq": "ok"
}
Kubernetes features
Horizontal Pod Autoscaler
The chart configures HPA for the Runner and Media worker services. Sessions are the primary scaling signal for the Runner; CPU and memory drive Media worker scaling. You can tune the thresholds in values.yaml:
runner:
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
media:
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 80
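After installing, you can watch the autoscalers react to load. The HPA name titan-runner below is an assumption; use whatever names appear in the listing:

```shell
# Show current vs. target utilization and replica counts for all autoscalers
kubectl get hpa -n titan

# Detailed scaling events for the Runner autoscaler
# (titan-runner is an assumed name; substitute the name from the listing above)
kubectl describe hpa titan-runner -n titan
```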
Pod Disruption Budgets
The chart creates Pod Disruption Budgets for all three services to prevent simultaneous pod evictions during node maintenance or rolling upgrades. By default, at least one pod of each service must remain available.
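You can confirm the budgets exist and see how many disruptions are currently allowed:

```shell
# List the Pod Disruption Budgets created by the chart, with
# MIN AVAILABLE and ALLOWED DISRUPTIONS columns per service
kubectl get pdb -n titan
```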
Kustomize overlays
If you manage configuration with Kustomize, the chart supports a base + environment-overlay pattern. Typical structure:
k8s/
  base/
    kustomization.yaml
  overlays/
    dev/
      kustomization.yaml
    staging/
      kustomization.yaml
    prod/
      kustomization.yaml
Each overlay can patch replica counts, resource limits, ingress hostnames, and environment variables independently without duplicating the full configuration.
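As a sketch, a prod overlay that raises the Runner's replica count might look like this. The Deployment name titan-runner is an assumption; match it to the names in your rendered manifests:

```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: titan-runner   # assumed name; check your rendered manifests
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
```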
Ingress
The chart creates a separate Ingress resource per environment. Configure your ingress class and TLS settings in values.yaml:
ingress:
  enabled: true
  className: nginx
  host: titan.example.com
  tls: true
  tlsSecretName: titan-tls
RBAC and service accounts
The chart creates a ServiceAccount for each service and binds only the permissions each service needs. No service runs with cluster-admin privileges.
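You can spot-check the granted permissions with kubectl impersonation. The ServiceAccount name titan-api below is an assumption; list the actual names first:

```shell
# List the ServiceAccounts created by the chart
kubectl get serviceaccounts -n titan

# Check what a given service is (and is not) allowed to do
kubectl auth can-i list pods -n titan \
  --as=system:serviceaccount:titan:titan-api
kubectl auth can-i '*' '*' -n titan \
  --as=system:serviceaccount:titan:titan-api   # should print "no"
```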
Updating
To upgrade to a new Titan version:
helm repo update
helm upgrade titan titan/titan \
  --namespace titan \
  --set image.tag=v2.x.x \
  -f values.yaml
The upgrade performs a rolling restart. The API and Runner pods have a configurable drain period (default: 15 seconds) so in-flight requests and active sessions complete before pods terminate.
Pin image.tag in your values.yaml so upgrades are explicit and intentional rather than happening automatically on helm upgrade.
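For example, pin the tag in values.yaml (the version shown is a placeholder):

```yaml
image:
  tag: "v2.x.x"   # replace with the exact release you have validated
```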
Uninstall
helm uninstall titan --namespace titan
kubectl delete namespace titan
This removes all Kubernetes resources created by the chart. It does not delete data in external managed services (RDS, ElastiCache, etc.).