# ADR-001: Go + Node.js Runtime
## Status

Accepted — March 2025
## Context

Healthcare integration engines must handle high message throughput with low latency. Hospitals, labs, and payers exchange millions of HL7v2, FHIR, X12, and CDA messages per day. The runtime that processes these messages must be fast, lightweight, and operationally simple.
At the same time, transformer logic — the code that reshapes, validates, and routes messages — must support a rich ecosystem of npm libraries. Packages like node-hl7-client, @types/fhir, xml2js, csv-parse, and dozens of healthcare-specific modules live in the npm ecosystem. Any credible integration engine needs access to them.
Legacy engines like Mirth Connect use the JVM with an embedded JavaScript runtime (Mozilla Rhino). This architecture has well-known problems:
- Rhino supports only ES5 — no `async/await`, no arrow functions, no template literals, no modern JavaScript.
- The JVM is heavy (hundreds of megabytes of RAM at idle), slow to start (seconds), and complex to tune.
- No access to npm — transformers are isolated scripts with no package management.
- Debugging is painful — stack traces cross the Java/JavaScript boundary unpredictably.
We needed a runtime that is:
- Fast to start — sub-second cold start for CLI commands and channel deployment.
- Lightweight — small binary, low memory footprint, no JVM.
- Easy to deploy — ideally a single binary, minimal dependencies.
- npm-compatible — full access to the Node.js ecosystem for transformer code.
- Type-safe — TypeScript support for transformer authoring with compile-time checks.
## Decision

We use a two-runtime architecture:
- Go for the CLI, HTTP/TCP/MLLP/Kafka listeners, pipeline orchestration, storage, the web dashboard, clustering, and all I/O-bound work.
- Node.js worker pool for executing TypeScript transformers, validators, filters, and pre/post processors.
The architecture works as follows:
- At startup, Go spawns a configurable pool of Node.js worker processes.
- Go communicates with workers via stdio using newline-delimited JSON — simple, fast, and debuggable.
- TypeScript transformer source files are compiled to JavaScript at startup and cached, so V8 executes pre-compiled JS with no per-message compilation overhead.
- When a message arrives, Go’s pipeline engine selects an available worker, sends the message payload, and receives the transformed result — all in sub-millisecond time for typical healthcare messages.
- Workers are stateless and recyclable. If a worker crashes or exceeds memory limits, Go restarts it transparently.
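The worker side of this stdio protocol can be sketched in TypeScript. The `{ id, payload }` request shape, the `Response` type, and the `handleLine` helper are illustrative assumptions for this sketch, not intu's actual wire format:

```typescript
// Sketch of a worker handling the newline-delimited JSON protocol.
// The { id, payload } message shape is an assumption, not the real wire format.

type Request = { id: string; payload: string };
type Response = { id: string; result?: string; error?: string };

// Stand-in for a transformer compiled from TypeScript at startup.
const transform = (msg: string): string => msg.trim().toUpperCase();

// Turn one request line into one response line.
export function handleLine(line: string): string {
  const req: Request = JSON.parse(line);
  try {
    const res: Response = { id: req.id, result: transform(req.payload) };
    return JSON.stringify(res);
  } catch (err) {
    // A failing transform is reported per message; the worker keeps running.
    const res: Response = { id: req.id, error: String(err) };
    return JSON.stringify(res);
  }
}

// In the worker entrypoint, the Go parent's stdin/stdout become the transport:
//   import * as readline from "node:readline";
//   readline.createInterface({ input: process.stdin })
//     .on("line", (l) => process.stdout.write(handleLine(l) + "\n"));
```

One response object per line keeps the protocol trivially debuggable: a worker can be exercised by hand by piping JSON lines into it.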
## Consequences

### Positive

- Small footprint — the Go binary is ~20 MB. Combined with Node.js, total install is under 100 MB, compared to 500 MB+ for JVM-based engines.
- Fast startup — Go starts in under 100 ms. The worker pool is warm within 1–2 seconds. CLI commands execute instantly.
- Full npm ecosystem — transformers can `import` any npm package. HL7 parsing, FHIR validation, XML/JSON manipulation, CSV processing — all available.
- Worker isolation and parallelism — each worker is an independent OS process. A crashing transformer cannot bring down the engine. Multiple workers process messages in parallel.
- Sub-millisecond transform latency — after the initial warmup, V8 executes cached JavaScript with JIT compilation. Typical HL7v2 transforms complete in 0.1–0.5 ms.
- No JVM dependency — eliminates the single largest operational burden of legacy engines.
- Simple deployment — one Go binary + Node.js. Many healthcare organizations already have Node.js installed for other tools, and those that don’t can install it in minutes.
### Negative

- Two-runtime complexity — the build pipeline must produce a Go binary and manage Node.js worker code. CI/CD must test both runtimes together.
- Node.js process lifecycle — Go must manage spawning, health-checking, and restarting Node.js workers. This adds code and failure modes that a single-runtime architecture wouldn’t have.
- Hard dependency on Node.js — unlike a pure Go solution, intu requires Node.js (v20+) to be installed on the host. This is an additional dependency for air-gapped or minimal environments.
- Cross-process overhead — stdio/JSON communication adds serialization cost compared to in-process function calls. However, this overhead is negligible for healthcare messages (typically 1–50 KB), adding only microseconds per message.
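To put a rough number on that serialization cost, a micro-benchmark of JSON round-tripping a payload of typical message size can be sketched as follows. It measures only stringify/parse, not pipe writes or process scheduling, so it is a lower bound, and the function name is illustrative:

```typescript
// Rough micro-benchmark of per-message JSON framing cost. Illustrative only:
// it times stringify + parse, not the pipe I/O or scheduler hops.
export function framingCostMicros(messageBytes: number, iterations: number): number {
  const msg = { id: "1", payload: "X".repeat(messageBytes) };
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) {
    JSON.parse(JSON.stringify(msg));
  }
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return elapsedNs / iterations / 1000; // microseconds per round-trip
}

// e.g. framingCostMicros(10_000, 10_000) for a ~10 KB payload; on commodity
// hardware this lands well under a millisecond per message.
```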
## Alternatives Considered

| Alternative | Why we rejected it |
|---|---|
| Pure Go with embedded V8 (via cgo) | Cross-compilation is fragile with cgo. V8 bindings are poorly maintained in the Go ecosystem. No npm support without significant custom tooling. |
| Pure Node.js | JavaScript execution is single-threaded; concurrency comes only from the event loop and worker threads. No equivalent of goroutines for cheap concurrent listener management. GC pauses become problematic at high throughput. Not ideal for long-running system daemons. |
| JVM (Kotlin or Java) | Heavy runtime, slow startup, complex GC tuning — exactly the problems we are trying to eliminate by replacing Mirth Connect. |
| Rust | Excellent performance characteristics, but a much smaller ecosystem for healthcare-specific libraries. Steeper learning curve reduces contributor pool. npm interop would require embedding V8 or Deno, adding similar complexity. |
| Deno or Bun | Promising runtimes but less mature. Healthcare organizations are conservative — Node.js has the install base and long-term support track record they require. |