
Deployment

This guide covers everything needed to take an Intuware project from a local development environment to a production deployment.

Every project created with intu init includes a multi-stage Dockerfile:

# Build stage
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npx intu build

# Runtime stage
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app .
EXPOSE 3000 3001
CMD ["npx", "intu", "serve"]

Build and run the image:

docker build -t my-engine .
docker run -p 3000:3000 -p 3001:3001 my-engine

For a full-stack local environment with PostgreSQL, Redis, and Kafka:

services:
  intu:
    build: .
    ports:
      - "3000:3000"
      - "3001:3001"
    environment:
      - DATABASE_URL=postgres://intu:intu@postgres:5432/intu
      - REDIS_URL=redis://redis:6379
      - KAFKA_BROKERS=kafka:9092
    depends_on:
      - postgres
      - redis
      - kafka
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: intu
      POSTGRES_PASSWORD: intu
      POSTGRES_DB: intu
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
  kafka:
    image: bitnami/kafka:3.7
    environment:
      KAFKA_CFG_NODE_ID: 0
      KAFKA_CFG_PROCESS_ROLES: controller,broker
      KAFKA_CFG_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
      KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: 0@kafka:9093
      KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER

volumes:
  pgdata:

Enable persistent message storage by configuring PostgreSQL in intu.yaml:

message_storage:
  driver: postgres
  connection_string: ${DATABASE_URL}

Intuware runs schema migrations automatically on startup — no manual SQL required.

  • message_storage.driver — Storage backend (memory, postgres)
  • message_storage.connection_string — PostgreSQL connection URI

Configure a shared Kafka connection at the root level of intu.yaml. Individual channels reference this connection by name in their listener or destination blocks.

kafka:
  brokers:
    - kafka-1.example.com:9092
    - kafka-2.example.com:9092
    - kafka-3.example.com:9092
  auth:
    mechanism: SASL_SSL
    username: ${KAFKA_USERNAME}
    password: ${KAFKA_PASSWORD}
  tls:
    enabled: true
    ca_file: /etc/ssl/certs/kafka-ca.pem

  • kafka.brokers — List of bootstrap broker addresses
  • kafka.auth.mechanism — Authentication mechanism (PLAINTEXT, SASL_SSL, SASL_PLAINTEXT)
  • kafka.tls.enabled — Enable TLS for broker connections
  • kafka.tls.ca_file — Path to CA certificate
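As a sketch of how a channel might then point at this shared connection — the channel name, topic, and `connection` key below are illustrative assumptions, not confirmed Intuware syntax:

```yaml
# Hypothetical channel whose listener references the root-level kafka block by name.
# The `connection` key and topic name are assumptions for illustration.
channels:
  - name: orders-in
    listener:
      type: kafka
      connection: kafka   # the shared connection defined at the root of intu.yaml
      topic: orders
```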

Run multiple Intuware instances behind a load balancer with Redis-backed coordination:

runtime:
  mode: cluster
  redis:
    address: redis://redis.example.com:6379

Cluster mode provides:

  • Channel partitioning — each channel is owned by exactly one instance at a time
  • Message deduplication — prevents duplicate processing when instances overlap during rolling deploys
  • Health checks — instances publish heartbeats; unhealthy instances have their channels reassigned
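With the cluster block in place, additional instances can simply be started against the same Redis. A minimal Docker Compose sketch (service names are placeholders; `deploy.replicas` is honored by recent `docker compose up` versions, and published ports are omitted because a load balancer would sit in front):

```yaml
# Sketch: three Intuware replicas coordinating through one shared Redis.
services:
  intu:
    build: .
    environment:
      - REDIS_URL=redis://redis:6379
    deploy:
      replicas: 3
  redis:
    image: redis:7-alpine
```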

Terminate TLS on inbound listeners:

listeners:
  - type: http
    port: 8443
    tls:
      cert_file: /etc/ssl/certs/listener.crt
      key_file: /etc/ssl/private/listener.key

Enable TLS for outbound connections:

destinations:
  - type: http
    url: https://downstream.example.com/api
    tls:
      cert_file: /etc/ssl/certs/client.crt
      key_file: /etc/ssl/private/client.key

Protect the dashboard with OIDC:

dashboard:
  auth:
    provider: oidc
    issuer: https://auth.example.com
    client_id: ${OIDC_CLIENT_ID}
    client_secret: ${OIDC_CLIENT_SECRET}

With LDAP:

dashboard:
  auth:
    provider: ldap
    url: ldap://ldap.example.com:389
    bind_dn: cn=readonly,dc=example,dc=com
    search_base: ou=users,dc=example,dc=com
    user_filter: "(uid={{username}})"

Or with basic authentication:

dashboard:
  auth:
    provider: basic
    users:
      - username: admin
        password: ${ADMIN_PASSWORD}
        role: admin

Three built-in roles control access to the dashboard and REST API:

  • admin — Full access: manage channels, messages, and settings
  • operator — Operational access: deploy/undeploy channels, reprocess messages
  • viewer — Read-only access: view channels and messages
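These roles are assigned per user. Extending the basic-auth example above with one user per role (usernames and password variables are placeholders):

```yaml
dashboard:
  auth:
    provider: basic
    users:
      - username: admin
        password: ${ADMIN_PASSWORD}
        role: admin
      - username: ops
        password: ${OPS_PASSWORD}
        role: operator
      - username: audit
        password: ${AUDIT_PASSWORD}
        role: viewer
```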

Track security-relevant events with the audit log:

audit:
  enabled: true
  destination: file # file | stdout | syslog
  path: /var/log/intu/audit.log

Audited events include:

  • channel.deploy — Channel deployed or restarted
  • channel.undeploy — Channel stopped
  • message.reprocess — Message reprocessed
  • auth.login — User authenticated
  • auth.logout — User session ended
  • settings.change — Configuration changed via dashboard

Avoid hardcoding secrets in intu.yaml by using a secrets provider:

secrets:
  provider: vault # env | vault | aws_secrets_manager | gcp_secret_manager
  vault:
    address: https://vault.example.com
    token: ${VAULT_TOKEN}
    path: secret/data/intu

  • env — Read secrets from environment variables (default)
  • vault — HashiCorp Vault KV v2
  • aws_secrets_manager — AWS Secrets Manager
  • gcp_secret_manager — Google Cloud Secret Manager

Reference secrets in any configuration value with ${SECRET_NAME} syntax. The provider resolves them at startup.
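For example, assuming keys named KAFKA_USERNAME and KAFKA_PASSWORD exist in the configured Vault path, they can be referenced wherever a value is expected:

```yaml
kafka:
  auth:
    mechanism: SASL_SSL
    username: ${KAFKA_USERNAME}   # resolved by the secrets provider at startup
    password: ${KAFKA_PASSWORD}
```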

Intuware exports traces and metrics via OpenTelemetry:

telemetry:
  traces:
    exporter: otlp
    endpoint: http://otel-collector:4318
  metrics:
    exporter: otlp
    endpoint: http://otel-collector:4318

A Prometheus-compatible metrics endpoint is available at /metrics on the dashboard port:

telemetry:
  metrics:
    exporter: prometheus

Scrape target: http://intu-host:3001/metrics
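A matching Prometheus scrape job might look like this (the job name and host are placeholders):

```yaml
# prometheus.yml fragment scraping the Intuware dashboard port
scrape_configs:
  - job_name: intu
    metrics_path: /metrics
    static_configs:
      - targets: ["intu-host:3001"]
```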

Ship structured logs to external systems:

logging:
  transports:
    - type: cloudwatch
      log_group: /intu/production
    - type: datadog
      api_key: ${DD_API_KEY}
    - type: elasticsearch
      url: https://es.example.com:9200
      index: intu-logs
    - type: sumo_logic
      url: ${SUMO_ENDPOINT}
    - type: file
      path: /var/log/intu/engine.log

Use this checklist before going live:

  • message_storage.driver set to postgres
  • Dashboard authentication enabled
  • TLS configured on all public listeners
  • Secrets stored in a provider (not plain text)
  • Audit logging enabled
  • OpenTelemetry or Prometheus metrics configured
  • Log transport shipping to a central aggregator
  • intu validate passing in CI
  • Health check (/api/health) integrated with load balancer
  • Redis configured if running multiple instances
  • Kafka TLS and auth configured if using Kafka listeners/destinations
  • Backup strategy in place for PostgreSQL
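The `intu validate` checklist item can be wired into CI. A minimal GitHub Actions workflow sketch (the workflow layout is an assumption about your CI system, not part of Intuware):

```yaml
# .github/workflows/validate.yml
name: validate
on: [push, pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx intu validate
```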