Choosing an HL7 integration engine in 2026
The HL7 integration landscape shifted when Mirth Connect went closed-source. Here are the options — commercial, open source, cloud, and lightweight — and what each is good at.
For years, the default answer to “which HL7 integration engine?” was Mirth Connect. Open source, Java-based, large community. That changed in March 2025 when NextGen closed the source with version 4.6. Two community forks emerged within months. Commercial vendors sharpened their positioning. Cloud providers added HL7 v2 adapters.
If you’re evaluating options today, the landscape looks different than it did two years ago. This article maps what’s available, what each option is good at, and where each falls short.
Full integration engines
These are complete platforms: message routing, transformation, channel management, monitoring, and MLLP support built in.
Mirth Connect (NextGen)
Closed-source since v4.6 (March 2025). Java. On-prem or cloud.
Mirth has the largest installed base of any HL7 integration engine. Decades of community knowledge, forum posts, and channel templates. If you search for how to do something with HL7, the top results are probably Mirth-related.
The closed-source shift means new deployments require a commercial license from NextGen. Pricing isn’t public. Existing open-source installations on v4.5.2 or earlier continue to work but won’t receive security patches from NextGen.
Good at: breadth. Mirth handles HL7 v2, FHIR, X12, DICOM, and custom formats. JavaScript transformers give you arbitrary logic. The channel model is flexible enough for simple routing and complex transformation workflows.
Watch out for: JVM resource consumption. A Mirth instance typically runs with 1-2 GB of heap. Channel configuration lives in a database, not in files, which makes version control harder. The GUI is required for most configuration tasks.
Rhapsody
Commercial. Java-based internals. SaaS, on-prem, or private cloud.
Rhapsody has long been the leading commercial alternative to Mirth. It has won Best in KLAS in the Integration Engine category for 16 consecutive years. The SaaS model (hosted on AWS and Azure) removes the operational burden of running the engine yourself.
Good at: managed operations. If your organization doesn’t want to run infrastructure, Rhapsody’s SaaS model handles upgrades, scaling, and monitoring. Built-in AI features (Axon) assist with message mapping. Throughput is documented at 30,000 messages per minute.
Watch out for: cost. Rhapsody is significantly more expensive than self-hosted alternatives. Vendor lock-in is real. Migrating channel configurations out of Rhapsody requires rebuilding them.
Cloverleaf (Infor)
Commercial. Tcl, Java, Python, and JavaScript. Docker, AWS, on-prem.
Cloverleaf is the veteran. It handles millions of daily transactions at large health systems. If you need to process high volumes of HL7, X12, and custom formats across hundreds of interfaces, Cloverleaf is built for that scale.
Good at: high-volume production workloads. Mature operational tooling. Strong in organizations that already use Infor’s healthcare suite.
Watch out for: steep learning curve. Tcl is the traditional scripting language (Java, Python, and JavaScript were added later). The platform requires extensive training across its entire toolset.
InterSystems Health Connect
Commercial. ObjectScript (primary), with Java, .NET, and Python via external gateways. On-prem or cloud.
Health Connect is the integration engine within the InterSystems HealthShare product family, built on InterSystems IRIS for Health. InterSystems' relationship with Epic goes deeper than integration: Epic's EHR runs on IRIS as its database engine, a partnership spanning more than 40 years. InterSystems also integrates with Oracle Health (Cerner), Veradigm, and hundreds of other systems.
Good at: deep EHR integration. Organizations running Epic are already running IRIS. Health Connect provides the integration layer on the same platform. The HealthShare Unified Care Record aggregates clinical data from multiple sources into a single patient-centric view, so integration and storage are the same system.
Watch out for: proprietary stack. ObjectScript is InterSystems’ own language. Java, .NET, and Python are supported through external gateways, not as first-class development languages. The talent pool is small and specialized. Cost is high. This is an enterprise platform sold to large health systems, not a tool you download and run.
Iguana (iNTERFACEWARE)
Commercial. Lua scripting. On-prem or cloud.
Iguana positions itself around developer productivity: faster interface builds through a simpler scripting model and visual mapping tools.
Good at: speed of development. Lua is a simpler scripting language than JavaScript or Tcl. The translator tool provides a visual mapping interface. Strong support for HL7 v2.x and FHIR.
Watch out for: smaller community than Mirth or Rhapsody. Pricing is not public. Contact sales.
Mirth forks
When Mirth went closed-source, the community forked the last open-source release (v4.5.2). Two projects emerged.
Open Integration Engine (OIE)
Open source. Java. Community-governed.
OIE is a direct fork with a focus on remaining vendor-neutral. It includes a free TLS plugin that was previously a commercial add-on in Mirth.
Good at: continuity. If you’re running Mirth channels today, OIE is the closest migration path. Same architecture, same channel model, same database schema.
Watch out for: governance is still forming. The project is young (April 2025). Long-term maintenance, security response times, and release cadence are unproven.
BridgeLink (Innovar Healthcare)
Open source. Java. AWS-native with on-prem option.
BridgeLink is another Mirth fork, focused on AWS deployment. It’s available on the AWS Marketplace with a commercial support option.
Good at: AWS-native deployment. If your infrastructure is on AWS, the Marketplace listing simplifies procurement and deployment.
Watch out for: AWS-centric. Smaller community than OIE. Also a young project (March 2025).
Cloud services
Cloud providers offer HL7 v2 support as part of their healthcare data platforms. These are not integration engines. They’re managed services that handle specific parts of the pipeline.
Azure Health Data Services
Commercial. Cloud (Azure).
Microsoft’s healthcare data platform includes a FHIR service and an HL7 v2 connector for Logic Apps. The HL7 connector decodes and encodes v2 messages and supports MLLP through a hybrid adapter (currently in private preview).
Good at: organizations already invested in Azure. Native FHIR service. Logic Apps provides a low-code integration layer.
Watch out for: MLLP support is preview-only as of early 2026. The older Azure API for FHIR is being retired in September 2026. Not a standalone integration engine. It’s a building block within the Azure ecosystem.
Google Cloud Healthcare API
Commercial. Cloud (GCP).
Google’s Healthcare API includes an HL7 v2 store with a REST API. For MLLP, Google provides an open-source adapter that runs on GKE and bridges MLLP connections to the Healthcare API.
Good at: data pipeline use cases. The HL7 v2 store integrates with BigQuery and Dataflow for analytics. The MLLP adapter source code is on GitHub.
Watch out for: the MLLP adapter is a separate component you deploy and manage on GKE. It adds operational complexity. The Healthcare API is a data store, not a routing/transformation engine.
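To make the "data store, not engine" distinction concrete: ingestion into the HL7 v2 store is a REST call that takes the raw message as base64-encoded bytes. Below is a sketch of constructing that request; the project, location, dataset, and store names are placeholders, and authentication (an OAuth bearer token) and the actual HTTP POST are omitted.

```java
import java.util.Base64;

// Sketch of preparing a messages.create request body for the Healthcare API's
// HL7 v2 store. The message travels as base64-encoded raw bytes inside JSON.
// Resource names below are placeholders; auth and the HTTP call are omitted.
public class Hl7v2Ingest {
    public static void main(String[] args) {
        String hl7 = "MSH|^~\\&|LAB|HOSP|EMR|HOSP|202601010830||ADT^A01|MSG001|P|2.5\r"
                   + "PID|1||12345^^^HOSP^MR||DOE^JANE";
        String data = Base64.getEncoder().encodeToString(hl7.getBytes());

        String url = "https://healthcare.googleapis.com/v1/projects/my-project"
                   + "/locations/us-central1/datasets/my-dataset"
                   + "/hl7V2Stores/my-store/messages";
        String body = "{\"message\": {\"data\": \"" + data + "\"}}";

        System.out.println("POST " + url);
        System.out.println(body.substring(0, 40) + "...");
    }
}
```

Note what is missing compared with an engine: no routing decision, no transformation, no destination. The store accepts the message and makes it queryable; everything else is your pipeline's job.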
Libraries
HAPI HL7v2
Open source (MPL 1.1). Java.
HAPI is not an integration engine. It’s a Java library for parsing, creating, and validating HL7 v2 messages. It’s the foundation that many integration engines (including Mirth) use internally.
Good at: parsing. HAPI handles every HL7 v2 version and message type. If you’re building your own HL7 processing in Java, HAPI is the de facto standard library. It includes basic MLLP transport support (SimpleServer) for building custom receivers.
Watch out for: it’s a library, not a product. You build your own routing, monitoring, error handling, and operational tooling around it. No GUI, no channel management, no configuration framework.
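To see what HAPI is saving you from, here is a minimal stdlib sketch of HL7 v2's delimiter structure: segments split on carriage returns, fields on pipes, components on carets. This deliberately ignores escaping, field repetition, and the MSH field-numbering quirk, all of which HAPI handles with full version awareness.

```java
// Minimal illustration of HL7 v2's pipe-delimited structure -- the kind of
// parsing HAPI does robustly (escaping, repetition, version-specific models
// are all omitted here).
public class Hl7Sketch {
    public static void main(String[] args) {
        // Segments are separated by carriage returns; fields by '|'.
        String msg = "MSH|^~\\&|LAB|HOSP|EMR|HOSP|202601010830||ADT^A01|MSG001|P|2.5\r"
                   + "PID|1||12345^^^HOSP^MR||DOE^JANE";

        for (String segment : msg.split("\r")) {
            String[] fields = segment.split("\\|");
            if (fields[0].equals("PID")) {
                // PID-5 is patient name; components are separated by '^',
                // with family name first.
                String[] name = fields[5].split("\\^");
                System.out.println(name[1] + " " + name[0]);
            }
        }
    }
}
```

A sketch like this breaks the moment a message uses escape sequences or repeating fields, which is exactly why production code reaches for HAPI's parsers instead.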
Lightweight servers
MLLP Server (Aktagon)
Open source (FSL-1.1-ALv2). Go. Single binary.
A different approach: instead of a full integration engine, a single-purpose MLLP server. Receives HL7 v2 messages over MLLP, validates with CEL expressions, routes to downstream connectors, persists to an embedded database. Configuration is environment variables and a YAML file. No GUI. No JVM.
Good at: operational simplicity. One binary, no runtime dependencies. Fits the Kubernetes operational model: env vars, structured JSON logs, TCP health probes, graceful shutdown. Resource footprint is 128 MB, not 2 GB. CEL validation rules are version-controllable configuration, not GUI-managed channel logic.
Watch out for: scope. It receives and routes HL7 v2 over MLLP. It doesn’t transform messages, convert between formats, or provide a visual channel designer. If you need to map HL7 v2 fields to FHIR resources or transform message content, you need additional tooling downstream.
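The wire format all of these MLLP speakers share is simple: per the MLLP specification, each HL7 payload is wrapped in a start-block byte (0x0B) and a trailing end-block pair (0x1C 0x0D). A minimal framing sketch, using only the Java standard library:

```java
import java.nio.charset.StandardCharsets;

// MLLP framing: a start-block byte (0x0B) before the HL7 payload and an
// end-block pair (0x1C 0x0D) after it. Sending a message is writing these
// bytes to a TCP socket and waiting for a framed ACK in return.
public class MllpFrame {
    static final byte START = 0x0B, END = 0x1C, CR = 0x0D;

    static byte[] frame(String hl7) {
        byte[] payload = hl7.getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[payload.length + 3];
        out[0] = START;
        System.arraycopy(payload, 0, out, 1, payload.length);
        out[out.length - 2] = END;
        out[out.length - 1] = CR;
        return out;
    }

    public static void main(String[] args) {
        byte[] framed = frame("MSH|^~\\&|LAB|HOSP|EMR|HOSP|202601010830||ADT^A01|MSG001|P|2.5");
        System.out.printf("first=0x%02X last=0x%02X 0x%02X%n",
                framed[0], framed[framed.length - 2], framed[framed.length - 1]);
    }
}
```

The framing is the easy part; what distinguishes a server from this sketch is everything around it: connection handling, acknowledgments, validation, routing, and persistence.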
How to choose
The decision depends on what you already have and what you need.
You have Mirth channels and they work. Stay on your current version or migrate to OIE/BridgeLink. The cost of switching to a different platform is high and the benefit is unclear unless you’re hitting specific limitations.
You need a full integration platform with transformation and format conversion. Rhapsody, Iguana, or Cloverleaf. The commercial engines earn their cost when you have dozens of interfaces with complex transformation requirements.
You need to receive HL7 v2 and route it, but not transform it. A lightweight server handles this without the overhead of a full integration engine. Validation, routing, and persistence cover most inbound MLLP use cases.
You’re building on a cloud platform. Use your provider’s native HL7 service (Azure Health Data Services or Google Healthcare API) for the data layer, with a lightweight MLLP adapter for legacy connections.
You’re building custom processing in Java. HAPI gives you the parsing layer. Build the transport and routing yourself.
The honest answer for most organizations: you’ll end up running more than one of these. A lightweight MLLP server at the edge for inbound messages, a cloud service for storage and analytics, and possibly a full integration engine for complex transformation workflows. The tools serve different layers of the stack.
For background on the MLLP protocol that most of these tools speak, see What is MLLP and how does it work. For the difference between MLLP and HTTP transports, see MLLP vs HTTP for HL7 messaging.