HL7 MLLP server that validates before routing.
Receives HL7 v2 messages over MLLP, validates them with CEL expressions, routes to downstream systems, and persists everything to an embedded outbox. One binary. No JVM. No message broker. Deploys to Kubernetes or a single VM.
- name: ehr-ingest
  type: http
  url: https://ehr.example.com/hl7
  filter: msh.msg_type == "ADT"
- name: lab-archive
  type: postgres
  filter: msh.msg_type == "ORU"
validation:
  rules:
    - name: require-patient-id
      expression: pid.id != ""
What the server handles for you
So your integration code only deals with business logic.
Compiled expression rules
Write validation rules once in CEL. They compile to bytecode at startup—no interpreter overhead at message time. Field access, list iteration, optional chaining.
- <100 µs per message
- Cost-limited expressions (cost cap of 1000)
- MSH, PID, PV1, OBX, obx_list variables
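As an illustration of list iteration, a rule over obx_list might look like the sketch below. Only msh.msg_type, pid.id, and the variable names listed above appear in the documentation; the value field on OBX entries is an assumption for illustration.

```yaml
validation:
  rules:
    - name: require-patient-id
      expression: pid.id != ""
    # Illustrative only: assumes each obx_list entry exposes a `value` field.
    - name: obx-values-present
      expression: obx_list.all(o, o.value != "")
```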
Mutual TLS, no restarts
Configure TLS 1.2+ with optional client certificate verification. Rotate certificates by sending SIGHUP; the server reloads without dropping existing connections.
- TLS_CERT_FILE, TLS_KEY_FILE, TLS_CLIENT_CA
- Hot-reload on SIGHUP
- TLS 1.2 minimum, 1.3 opt-in
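Putting the variables together, a rotation workflow might look like this sketch (the certificate paths are illustrative, not defaults):

```shell
TLS_CERT_FILE=/etc/tls/server.crt \
TLS_KEY_FILE=/etc/tls/server.key \
TLS_CLIENT_CA=/etc/tls/clients-ca.crt \
./mllp-server &

# Replace the certificate files on disk, then trigger a live reload:
kill -HUP "$(pidof mllp-server)"
```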
CEL-routed pluggable outputs
Each connector declares a CEL filter expression. Matching messages are forwarded; non-matching messages are silently skipped. Failed deliveries are retried with exponential backoff; once retries are exhausted, the message lands in a dead-letter queue.
- Per-connector CEL filter
- Retry with exponential backoff
- Dead-letter queue (DLQ)
OpenTelemetry out of the box
Metrics exported via OTLP. Ring buffer captures recent log lines for in-process inspection. Every message produces an audit record: who sent what, when, and with what result.
- OTLP metrics (counters, histograms)
- Ring buffer log capture
- Immutable audit trail
Embedded outbox, atomic fanout
bbolt stores every accepted message before any connector sees it. Fanout to multiple connectors is atomic—either all see the message or none do. FIFO delivery is guaranteed per connector.
- bbolt embedded key-value store
- Transactional outbox pattern
- Guaranteed FIFO per connector
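The outbox behavior above can be sketched with a toy in-memory stand-in for the bbolt store (the real server persists to disk; this sketch only shows the ordering guarantees):

```go
package main

import (
	"fmt"
	"sync"
)

// Outbox is a toy stand-in for the bbolt-backed outbox: the message is
// persisted first, then fanned out to every connector queue under one
// lock, so either all connectors see it or none do, and each queue
// preserves FIFO order.
type Outbox struct {
	mu     sync.Mutex
	store  [][]byte            // stands in for the bbolt bucket
	queues map[string][][]byte // per-connector FIFO delivery queues
}

func NewOutbox() *Outbox {
	return &Outbox{queues: make(map[string][][]byte)}
}

func (o *Outbox) Accept(msg []byte, connectors []string) {
	o.mu.Lock()
	defer o.mu.Unlock()
	o.store = append(o.store, msg) // persist before any fanout
	for _, c := range connectors {
		o.queues[c] = append(o.queues[c], msg)
	}
}

func main() {
	ob := NewOutbox()
	ob.Accept([]byte("msg-1"), []string{"ehr-ingest", "lab-archive"})
	ob.Accept([]byte("msg-2"), []string{"ehr-ingest"})
	fmt.Println(len(ob.queues["ehr-ingest"]), len(ob.queues["lab-archive"])) // prints 2 1
}
```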
Single binary, env-driven
No JVM, no runtime, no config file required. Every option is an environment variable with a sensible default. Drop it anywhere and it runs.
LISTEN_ADDR=:2575
IDLE_TIMEOUT=30s
MAX_CONNECTIONS=0
Up in three steps.
From zero to receiving HL7 messages in under a minute.
Download
tar -xzf mllp-server_1.0.1_linux_amd64.tar.gz
# Verify the checksum
sha256sum --check checksums.txt
Run
./mllp-server
# Override any option via env
LISTEN_ADDR=:3000 ./mllp-server
Send a message
printf '\x0bMSH|^~\&|A|B|C|D|20240101||ORU^R01|123|P|2.3\r\x1c\x0d' \
| nc localhost 2575
Message pipeline
Every inbound message follows the same deterministic path.
On validation failure
Returns AR — Application Reject. Connection stays open.
On internal error
Returns AE — Application Error. Message lands in DLQ.
On success
Returns AA — Application Accept. Persisted before ACK.
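The AA/AE/AR code a client sees back lives in field MSA-1 of the ACK message. A small Go sketch of extracting it (the sample ACK string is hand-written for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// AckCode extracts MSA-1 (AA, AE, or AR) from an HL7 v2 ACK message,
// which is enough to map a reply to the three outcomes listed above.
func AckCode(ack string) string {
	for _, seg := range strings.Split(ack, "\r") {
		if strings.HasPrefix(seg, "MSA|") {
			if fields := strings.Split(seg, "|"); len(fields) > 1 {
				return fields[1]
			}
		}
	}
	return "" // no MSA segment found
}

func main() {
	ack := "MSH|^~\\&|D|C|B|A|20240101||ACK^R01|456|P|2.3\rMSA|AA|123"
	fmt.Println(AckCode(ack)) // prints AA
}
```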
Ships with Kustomize overlays for dev, staging, and prod.
Each overlay patches resource limits, replica counts, and TLS secrets, while the base manifest stays canonical.
- Graceful shutdown with configurable drain timeout (default 30 s)
- SIGTERM-aware: drains in-flight messages before exit
- TLS secrets mounted from Kubernetes Secrets; SIGHUP for live rotation
- OpenTelemetry collector sidecar pattern supported
kustomize build deploy/overlays/prod \
| kubectl apply -f -
# Or with native kubectl kustomize
kubectl apply -k deploy/overlays/prod
# Available overlays:
# deploy/overlays/dev
# deploy/overlays/staging
# deploy/overlays/prod
# deploy/overlays/persistent
Ready to deploy?
Read the getting-started guide or browse the full configuration reference.