Deployment
This guide covers everything needed to take an Intuware project from a local development environment to a production deployment.
Docker
Every project created with `intu init` includes a multi-stage Dockerfile:
```dockerfile
# Build stage
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npx intu build

# Runtime stage
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app .
EXPOSE 3000 3001
CMD ["npx", "intu", "serve"]
```

Build and run the image:

```sh
docker build -t my-engine .
docker run -p 3000:3000 -p 3001:3001 my-engine
```

Docker Compose
For a full-stack local environment with PostgreSQL, Redis, and Kafka:
```yaml
services:
  intu:
    build: .
    ports:
      - "3000:3000"
      - "3001:3001"
    environment:
      - DATABASE_URL=postgres://intu:intu@postgres:5432/intu
      - REDIS_URL=redis://redis:6379
      - KAFKA_BROKERS=kafka:9092
    depends_on:
      - postgres
      - redis
      - kafka

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: intu
      POSTGRES_PASSWORD: intu
      POSTGRES_DB: intu
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

  kafka:
    image: bitnami/kafka:3.7
    environment:
      KAFKA_CFG_NODE_ID: 0
      KAFKA_CFG_PROCESS_ROLES: controller,broker
      KAFKA_CFG_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
      KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: 0@kafka:9093
      KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER

volumes:
  pgdata:
```

PostgreSQL
Enable persistent message storage by configuring PostgreSQL in `intu.yaml`:
```yaml
message_storage:
  driver: postgres
  connection_string: ${DATABASE_URL}
```

Intuware runs schema migrations automatically on startup — no manual SQL required.
| Property | Description |
|---|---|
| `message_storage.driver` | Storage backend (`memory`, `postgres`) |
| `message_storage.connection_string` | PostgreSQL connection URI |
Kafka

Configure a shared Kafka connection at the root level of `intu.yaml`. Individual channels reference this connection by name in their `listener` or `destination` blocks.
```yaml
kafka:
  brokers:
    - kafka-1.example.com:9092
    - kafka-2.example.com:9092
    - kafka-3.example.com:9092
  auth:
    mechanism: SASL_SSL
    username: ${KAFKA_USERNAME}
    password: ${KAFKA_PASSWORD}
  tls:
    enabled: true
    ca_file: /etc/ssl/certs/kafka-ca.pem
```

| Property | Description |
|---|---|
| `kafka.brokers` | List of bootstrap broker addresses |
| `kafka.auth.mechanism` | Authentication mechanism (`PLAINTEXT`, `SASL_SSL`, `SASL_PLAINTEXT`) |
| `kafka.tls.enabled` | Enable TLS for broker connections |
| `kafka.tls.ca_file` | Path to CA certificate |
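As an illustration of how a channel might reference the shared connection, the sketch below shows a hypothetical Kafka listener; the channel name, the `topic` field, and the exact shape of the `channels` block are assumptions, not confirmed by this guide:

```yaml
# Illustrative sketch — field names are assumed, not documented here
channels:
  - name: orders-inbound
    listener:
      type: kafka
      topic: orders       # consumes from this topic using the root-level kafka connection
```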
Redis Clustering
Run multiple Intuware instances behind a load balancer with Redis-backed coordination:
```yaml
runtime:
  mode: cluster

redis:
  address: redis://redis.example.com:6379
```

Cluster mode provides:
- Channel partitioning — each channel is owned by exactly one instance at a time
- Message deduplication — prevents duplicate processing when instances overlap during rolling deploys
- Health checks — instances publish heartbeats; unhealthy instances have their channels reassigned
Listener TLS
Terminate TLS on inbound listeners:
```yaml
listeners:
  - type: http
    port: 8443
    tls:
      cert_file: /etc/ssl/certs/listener.crt
      key_file: /etc/ssl/private/listener.key
```

Destination TLS
Enable TLS for outbound connections:
```yaml
destinations:
  - type: http
    url: https://downstream.example.com/api
    tls:
      cert_file: /etc/ssl/certs/client.crt
      key_file: /etc/ssl/private/client.key
```

Authentication
The dashboard supports OIDC, LDAP, and basic authentication. For OIDC:

```yaml
dashboard:
  auth:
    provider: oidc
    issuer: https://auth.example.com
    client_id: ${OIDC_CLIENT_ID}
    client_secret: ${OIDC_CLIENT_SECRET}
```

For LDAP:

```yaml
dashboard:
  auth:
    provider: ldap
    url: ldap://ldap.example.com:389
    bind_dn: cn=readonly,dc=example,dc=com
    search_base: ou=users,dc=example,dc=com
    user_filter: "(uid={{username}})"
```

Basic Auth
```yaml
dashboard:
  auth:
    provider: basic
    users:
      - username: admin
        password: ${ADMIN_PASSWORD}
        role: admin
```

Three built-in roles control access to the dashboard and REST API:
| Role | Description |
|---|---|
| admin | Full access — manage channels, messages, and settings |
| operator | Operational access — deploy/undeploy channels, reprocess messages |
| viewer | Read-only access — view channels and messages |
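Extending the basic-auth example above, a sketch of a users list that covers all three roles — the `oncall` and `auditor` usernames and their environment variable names are illustrative, not prescribed:

```yaml
dashboard:
  auth:
    provider: basic
    users:
      - username: admin            # full access
        password: ${ADMIN_PASSWORD}
        role: admin
      - username: oncall           # deploy/undeploy and reprocess only
        password: ${ONCALL_PASSWORD}
        role: operator
      - username: auditor          # read-only
        password: ${AUDITOR_PASSWORD}
        role: viewer
```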
Audit Logging
Track security-relevant events with the audit log:
```yaml
audit:
  enabled: true
  destination: file  # file | stdout | syslog
  path: /var/log/intu/audit.log
```

Audited events include:
| Event | Description |
|---|---|
| `channel.deploy` | Channel deployed or restarted |
| `channel.undeploy` | Channel stopped |
| `message.reprocess` | Message reprocessed |
| `auth.login` | User authenticated |
| `auth.logout` | User session ended |
| `settings.change` | Configuration changed via dashboard |
Secrets Management
Avoid hardcoding secrets in `intu.yaml` by using a secrets provider:
```yaml
secrets:
  provider: vault  # env | vault | aws_secrets_manager | gcp_secret_manager
  vault:
    address: https://vault.example.com
    token: ${VAULT_TOKEN}
    path: secret/data/intu
```

| Provider | Description |
|---|---|
| `env` | Read secrets from environment variables (default) |
| `vault` | HashiCorp Vault KV v2 |
| `aws_secrets_manager` | AWS Secrets Manager |
| `gcp_secret_manager` | Google Cloud Secret Manager |
Reference secrets in any configuration value with `${SECRET_NAME}` syntax. The provider resolves them at startup.
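Putting the two pieces together, a sketch of the default `env` provider resolving the Kafka credentials shown earlier — here `${KAFKA_USERNAME}` and `${KAFKA_PASSWORD}` would be read from identically named environment variables at startup:

```yaml
secrets:
  provider: env  # the default; no further configuration required

kafka:
  auth:
    mechanism: SASL_SSL
    username: ${KAFKA_USERNAME}  # resolved from the environment at startup
    password: ${KAFKA_PASSWORD}
```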
Observability
OpenTelemetry
Intuware exports traces and metrics via OpenTelemetry:
```yaml
telemetry:
  traces:
    exporter: otlp
    endpoint: http://otel-collector:4318
  metrics:
    exporter: otlp
    endpoint: http://otel-collector:4318
```

Prometheus
A Prometheus-compatible metrics endpoint is available at `/metrics` on the dashboard port:
```yaml
telemetry:
  metrics:
    exporter: prometheus
```

Scrape target: `http://intu-host:3001/metrics`
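On the Prometheus side, a minimal scrape job for this target could look like the following (`intu-host` is a placeholder hostname; `metrics_path` defaults to `/metrics`, so it is omitted):

```yaml
# prometheus.yml — minimal scrape job for the Intuware metrics endpoint
scrape_configs:
  - job_name: intu
    static_configs:
      - targets: ["intu-host:3001"]
```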
Log Transports
Ship structured logs to external systems:
```yaml
logging:
  transports:
    - type: cloudwatch
      log_group: /intu/production
    - type: datadog
      api_key: ${DD_API_KEY}
    - type: elasticsearch
      url: https://es.example.com:9200
      index: intu-logs
    - type: sumo_logic
      url: ${SUMO_ENDPOINT}
    - type: file
      path: /var/log/intu/engine.log
```

Production Checklist
Use this checklist before going live:
- `message_storage.driver` set to `postgres`
- Dashboard authentication enabled
- TLS configured on all public listeners
- Secrets stored in a provider (not plain text)
- Audit logging enabled
- OpenTelemetry or Prometheus metrics configured
- Log transport shipping to a central aggregator
- `intu validate` passing in CI
- Health check (`/api/health`) integrated with load balancer
- Redis configured if running multiple instances
- Kafka TLS and auth configured if using Kafka listeners/destinations
- Backup strategy in place for PostgreSQL
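To wire the health check into the Docker Compose setup above, one sketch — assuming `/api/health` is served on the dashboard port 3001, and relying on the BusyBox `wget` that ships in `node:20-alpine`:

```yaml
# Illustrative healthcheck for the intu service in docker-compose
services:
  intu:
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3001/api/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s
```

The `start_period` gives the engine time to run its startup migrations before failed probes count against the retry limit.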