Picture a small clinic using OpenMRS. At first, it runs on a single computer or server, and everything feels manageable. But as the clinic grows – or when OpenMRS is used to support dozens of facilities across a region – the system starts to struggle.
Searches slow down. Updates mean downtime. And because everything depends on a single server, one hardware failure can take the entire system offline.
This has been a real challenge for many OpenMRS Implementers. Scaling often meant setting up dozens of servers by hand, managing them one by one, and wrestling with complicated cloud tools. It worked, but it was time-consuming and fragile.
That’s why the improvements coming in OpenMRS Platform 2.8 are such a big step forward.
Why scaling matters
- Reliability: With multiple servers working together, the system can stay online even if one of them fails. Updates can be done without downtime.
- Performance: As more patients and clinics are added, OpenMRS can spread the load across several servers. Searches and data access remain fast.
- Less burden on teams: Instead of reinventing deployment setups, Implementers now have clear, standardized ways to run OpenMRS in the cloud or on-premises.
What’s new in Platform 2.8
The new release adds several features that make clustering and cloud hosting practical:
| Feature | What it does | Why it matters |
| --- | --- | --- |
| Infinispan (distributed cache) | Replaces Ehcache with a clustered caching layer for Spring/Hibernate | Ensures consistent caching across app instances, improving performance and reliability. |
| Elasticsearch cluster | Replaces in-memory Lucene search with Elasticsearch/OpenSearch | Enables fast, reliable full-text search across multiple replicas and pods. |
| StorageService | Unified interface for file storage (disk, S3, or plugin extensions) | Decouples file storage from the application, so it no longer relies on local disk, making it safe for clusters and resilient across nodes thanks to replication and automated backups/versioning. |
| Horizontal scaling support | Dozens of smaller features (liveness check for Kubernetes, setup and runtime configuration via environment/system properties in addition to files, DB replica support) | Teams can deploy multiple O3, DB, and Elasticsearch instances behind load balancers, or stick with single-instance setups as needed. |
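To make the horizontal-scaling row above more concrete, here is a minimal sketch of how a Kubernetes Deployment could combine several of these pieces: multiple replicas, configuration via environment variables, and a liveness check. The image tag, probe path, and `OMRS_*` variable names are placeholders of ours, not the actual OpenMRS settings; consult the official Platform 2.8 documentation for the real values.

```yaml
# Sketch only: names, paths, and variables below are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openmrs
spec:
  replicas: 3                  # horizontal scaling: several identical pods
  selector:
    matchLabels:
      app: openmrs
  template:
    metadata:
      labels:
        app: openmrs
    spec:
      containers:
        - name: openmrs
          image: openmrs/openmrs-core:2.8       # hypothetical image tag
          ports:
            - containerPort: 8080
          env:
            # Runtime configuration via environment variables instead of
            # editing a properties file on disk (variable name assumed)
            - name: OMRS_DB_HOSTNAME
              value: mysql-primary
          livenessProbe:                        # lets Kubernetes restart a hung pod
            httpGet:
              path: /openmrs/health             # placeholder endpoint
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 15
```

A load balancer (a Kubernetes Service) in front of these pods would then spread requests across all three instances, which is what makes zero-downtime updates and failover possible.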
Together, these changes mean OpenMRS can finally run as a reliable, cloud-ready system – on AWS, Azure, Google Cloud, private data centers, or on-premises Kubernetes clusters.
Our experience testing it
At SolDevelo, we had the chance to test the new setup. Using Terraform, Helm, and AWS, we deployed a cloud-ready OpenMRS environment – and the process went smoothly. The documentation was clear, and because the tools are standard and widely used, everything felt familiar and reproducible.
We especially appreciated the short demo video showing OpenMRS running on Kubernetes with just a single command. In under 10 minutes, it demonstrates what used to take days of manual setup.
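For readers curious what a "single command" deployment of this kind typically looks like, the general shape is sketched below. The repository URL and chart/release names here are our placeholders, not the official OpenMRS Helm chart coordinates – check the OpenMRS documentation for the real ones.

```shell
# Sketch only: repository URL and chart names are assumptions.
helm repo add openmrs https://example.org/openmrs-charts
helm repo update

# One command stands up the whole stack (app, DB, search) on the cluster
helm install openmrs-demo openmrs/openmrs \
  --namespace openmrs --create-namespace

# Watch the pods come up
kubectl get pods -n openmrs --watch
```

Because Helm charts and Terraform modules are declarative, the same commands reproduce an identical environment on any cluster, which is exactly what made the setup feel familiar and repeatable in our testing.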
Why this matters for health programs
With these improvements, OpenMRS is no longer limited by a single server or complex manual setups. Health systems can:
- Keep clinics online with no single point of failure
- Serve larger patient populations without performance issues
- Deploy consistently across different cloud providers or private data centers
- Future-proof their systems with support for clusters and multi-tenancy
This is a big step toward making OpenMRS a reliable backbone not just for individual clinics, but for entire national health systems.
A community effort
These advancements are the result of hard work by the OpenMRS community – engineers, reviewers, and implementers who shared their real-world needs. At SolDevelo, we’re proud to support this effort and excited to see what comes next.
Ready to scale OpenMRS?
The new scalability features in Platform 2.8 open the door to more reliable and future-proof deployments of OpenMRS.
Contact us if you’d like to learn more or need support in implementing OpenMRS for your health program.
You can also try out our AWS-hosted AMIs (Amazon Machine Images) – preconfigured OpenMRS environments that you can deploy instantly, without the hassle of manual setup:
OpenMRS 2.5.9 on Ubuntu 22.04 →
OpenMRS 2.5.9 on Hardened Ubuntu 22.04 →
OpenMRS 3 on Ubuntu 22.04 →