
# Configuration

Every intu project is configured through YAML files at the project root and inside channel directories. The root `intu.yaml` defines runtime behaviour, destinations, storage, logging, and metrics. Each channel has its own `channel.yaml` that describes a single integration pipeline.

```
my-project/
├── intu.yaml        # root config (base profile)
├── intu.dev.yaml    # dev overrides
├── intu.prod.yaml   # production overrides
└── channels/
    └── adt-to-fhir/
        └── channel.yaml   # channel config
```

## Profiles

intu supports profile-based configuration layering. At startup the engine loads files in order and deep-merges them:

1. `intu.yaml` — base configuration (always loaded)
2. `intu.<profile>.yaml` — profile overlay (e.g. `intu.dev.yaml`, `intu.prod.yaml`)

The active profile is set with the --profile flag or the INTU_PROFILE environment variable:

```sh
intu run --profile prod
# or
INTU_PROFILE=prod intu run
```

Keys in the profile file override the same keys in the base file. Nested maps are merged recursively; arrays are replaced entirely.
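The merge semantics can be sketched in a few lines of TypeScript. This is an illustration of the rules above, not intu's actual implementation: nested maps merge key by key, while arrays and scalars from the overlay replace the base value wholesale.

```typescript
// Sketch of profile deep-merging: maps merge recursively, arrays are replaced.
type Cfg = { [key: string]: unknown };

function isMap(v: unknown): v is Cfg {
  return typeof v === "object" && v !== null && !Array.isArray(v);
}

function deepMerge(base: Cfg, overlay: Cfg): Cfg {
  const out: Cfg = { ...base };
  for (const [key, value] of Object.entries(overlay)) {
    // Only merge when both sides are maps; otherwise the overlay wins.
    out[key] = isMap(out[key]) && isMap(value)
      ? deepMerge(out[key] as Cfg, value)
      : value;
  }
  return out;
}

const merged = deepMerge(
  { runtime: { mode: "standalone", workers: 4 }, scopes: ["system/*.read"] },
  { runtime: { mode: "cluster" }, scopes: ["system/*.write"] },
);
// merged.runtime → { mode: "cluster", workers: 4 } (map merged)
// merged.scopes  → ["system/*.write"]              (array replaced)
```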

## Environment variables

Any value in YAML can reference an environment variable with `${VAR}` syntax. An optional default is supported with `${VAR:-default}`:

```yaml
destinations:
  epic:
    type: http
    url: ${EPIC_BASE_URL:-https://sandbox.epic.com/api}
    auth:
      client_secret: ${EPIC_CLIENT_SECRET}
```

Variables are resolved at startup. A missing variable with no default causes a fatal error.
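The resolution rules can be sketched as a simple substitution pass. This is an assumption-laden illustration (intu's real resolver may work on the parsed document rather than raw text), but it captures the three behaviours described above: substitute, fall back to the default, or fail fatally.

```typescript
// Sketch of ${VAR} / ${VAR:-default} resolution over raw YAML text.
// Not intu's actual code — illustrates the documented behaviour only.
function resolveEnv(text: string, env: Record<string, string>): string {
  return text.replace(/\$\{([A-Za-z0-9_]+)(?::-([^}]*))?\}/g, (_, name, def) => {
    const value = env[name] ?? def;
    if (value === undefined) {
      // Missing variable with no default: fatal at startup.
      throw new Error(`missing environment variable: ${name}`);
    }
    return value;
  });
}

resolveEnv("url: ${EPIC_BASE_URL:-https://sandbox.epic.com/api}", {});
// → "url: https://sandbox.epic.com/api"
```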

## Runtime

```yaml
runtime:
  mode: standalone   # standalone | cluster
  workers: 4         # concurrent pipeline workers
  hot_reload: true   # watch for config/code changes
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `mode` | `standalone` \| `cluster` | `standalone` | Single-process or distributed mode |
| `workers` | integer | CPU count | Number of concurrent workers per channel |
| `hot_reload` | boolean | `true` | Restart affected channels on file change |

## Destinations

Destinations are declared at the root level and referenced by name in channel configs:

```yaml
destinations:
  epic_fhir:
    type: http
    url: https://fhir.epic.com/R4
    auth:
      type: oauth2_client_credentials
      token_url: https://fhir.epic.com/oauth2/token
      client_id: ${EPIC_CLIENT_ID}
      client_secret: ${EPIC_CLIENT_SECRET}
      scopes:
        - system/*.read
        - system/*.write
  warehouse:
    type: postgres
    host: ${PG_HOST:-localhost}
    port: 5432
    database: integration_db
    username: ${PG_USER}
    password: ${PG_PASSWORD}
    ssl_mode: require
  archive:
    type: s3
    bucket: ${S3_BUCKET}
    region: us-east-1
    prefix: hl7/archive/
```

Each destination has a type and type-specific settings. Supported types include http, tcp, kafka, postgres, s3, file, smtp, sftp, dicom, and database.

## Storage

```yaml
storage:
  driver: postgres   # memory | postgres | s3
  mode: full         # none | status | full
  connection:
    host: ${PG_HOST:-localhost}
    port: 5432
    database: intu_messages
    username: ${PG_USER}
    password: ${PG_PASSWORD}
```

| Mode | Stored Data |
| --- | --- |
| `none` | Nothing — fire-and-forget |
| `status` | Message ID, status, timestamps, error info |
| `full` | Everything in `status` plus the complete message body |
## Logging

```yaml
logging:
  level: info   # debug | info | warn | error
  format: json  # json | text
  transports:
    - type: console
    - type: file
      path: ./logs/intu.log
      max_size_mb: 100
      max_backups: 5
    - type: cloudwatch
      log_group: /intu/prod
      region: us-east-1
    - type: datadog
      api_key: ${DD_API_KEY}
      site: datadoghq.com
    - type: sumologic
      endpoint: ${SUMO_ENDPOINT}
    - type: elasticsearch
      url: ${ES_URL}
      index: intu-logs
```

Multiple transports can run simultaneously. Each transport inherits the root `level` unless overridden.

## Metrics

```yaml
metrics:
  prometheus:
    enabled: true
    port: 9090
    path: /metrics
  opentelemetry:
    enabled: true
    endpoint: ${OTEL_ENDPOINT}
    protocol: grpc   # grpc | http
    service_name: intu
```

## Channel configuration

Each channel directory contains a `channel.yaml` that defines a single pipeline:

```yaml
id: adt-to-fhir
enabled: true
group: admissions
listener:
  type: tcp
  port: 6661
  content_type: hl7v2
transformer: transform.ts
validator: validate.ts
destinations:
  - epic_fhir
  - warehouse
retry:
  max_attempts: 5
  backoff: exponential
  initial_delay_ms: 500
  max_delay_ms: 30000
  jitter: true
tags:
  department: admissions
  priority: high
```

| Key | Required | Description |
| --- | --- | --- |
| `id` | yes | Unique channel identifier |
| `enabled` | no | `true` (default) or `false` to disable without deleting |
| `listener` | yes | Inbound connector (`type`, `port`, `content_type`, etc.) |
| `transformer` | no | Path to a TypeScript transformer |
| `validator` | no | Path to a TypeScript validator |
| `destinations` | yes | List of destination names declared in `intu.yaml` |
| `retry` | no | Retry policy for failed deliveries |
| `tags` | no | Arbitrary key-value metadata for filtering and grouping |
| `group` | no | Logical channel group for batch operations |
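The exact transformer interface is not shown in this section, so the following is an illustration only: it assumes a transformer module default-exports a pure function from the inbound message to the outbound payload, with a hypothetical `InboundMessage` shape. The real intu API may differ.

```typescript
// transform.ts — hypothetical transformer shape (real intu signature may differ).
// Maps an inbound HL7v2 ADT message to a minimal FHIR Patient resource.
interface InboundMessage {
  raw: string; // e.g. the raw HL7v2 message text
}

export default function transform(msg: InboundMessage) {
  // HL7v2 segments are CR-separated; fields within a segment are pipe-delimited.
  // PID-5 carries the patient name as family^given.
  const pid = msg.raw.split("\r").find((seg) => seg.startsWith("PID"));
  const name = pid?.split("|")[5] ?? "";
  const [family, given] = name.split("^");
  return {
    resourceType: "Patient",
    name: [{ family, given: given ? [given] : [] }],
  };
}
```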
## Example

The base `intu.yaml`:

```yaml
runtime:
  mode: standalone
  workers: 4
  hot_reload: true
storage:
  driver: memory
  mode: status
logging:
  level: info
  format: json
  transports:
    - type: console
metrics:
  prometheus:
    enabled: true
    port: 9090
    path: /metrics
destinations:
  epic_fhir:
    type: http
    url: ${EPIC_FHIR_URL:-https://sandbox.epic.com/R4}
    auth:
      type: oauth2_client_credentials
      token_url: ${EPIC_TOKEN_URL}
      client_id: ${EPIC_CLIENT_ID}
      client_secret: ${EPIC_CLIENT_SECRET}
  local_file:
    type: file
    path: ./output/
    naming: "{{.ChannelID}}_{{.Timestamp}}.json"
```
The production overlay, `intu.prod.yaml`:

```yaml
runtime:
  mode: cluster
  workers: 8
  hot_reload: false
storage:
  driver: postgres
  mode: full
  connection:
    host: ${PG_HOST}
    port: 5432
    database: intu_prod
    username: ${PG_USER}
    password: ${PG_PASSWORD}
    ssl_mode: require
logging:
  level: warn
  format: json
  transports:
    - type: console
    - type: datadog
      api_key: ${DD_API_KEY}
      site: datadoghq.com
metrics:
  prometheus:
    enabled: true
    port: 9090
    path: /metrics
  opentelemetry:
    enabled: true
    endpoint: ${OTEL_ENDPOINT}
    protocol: grpc
    service_name: intu-prod
destinations:
  epic_fhir:
    type: http
    url: ${EPIC_FHIR_URL}
    auth:
      type: oauth2_client_credentials
      token_url: ${EPIC_TOKEN_URL}
      client_id: ${EPIC_CLIENT_ID}
      client_secret: ${EPIC_CLIENT_SECRET}
      scopes:
        - system/*.read
        - system/*.write
```

Given the two files above and `--profile prod`, the merged result at startup is:

| Key | Base (`intu.yaml`) | Prod Override | Effective Value |
| --- | --- | --- | --- |
| `runtime.mode` | `standalone` | `cluster` | `cluster` |
| `runtime.workers` | `4` | `8` | `8` |
| `runtime.hot_reload` | `true` | `false` | `false` |
| `storage.driver` | `memory` | `postgres` | `postgres` |
| `storage.mode` | `status` | `full` | `full` |
| `logging.level` | `info` | `warn` | `warn` |
| `metrics.prometheus` | enabled | enabled | enabled (merged) |
| `metrics.opentelemetry` | — | enabled | enabled (added) |