The OpenMRS FHIR2 module acts as a bridge, exposing OpenMRS data through the Fast Healthcare Interoperability Resources (FHIR) standard. This standard is used worldwide to make healthcare systems communicate with each other — ensuring that patient records, lab results, and other critical medical information can be exchanged quickly and securely.
After improving the Get Location endpoint, our team focused on the Get Lab Result and related endpoints in the FHIR2 module. By using Spring Cache and query optimization, we achieved up to 79% faster responses — even under cloud-like latency. This kind of speed-up can make the difference between a doctor seeing lab results instantly during a consultation versus having to wait and risk losing valuable time.
Here’s the full story, the approach, and the results.
The Challenge: Sluggish Endpoints
Some FHIR2 endpoints, especially those converting Concept objects to FHIR CodeableConcept resources, were slow. In technical terms, the system was making too many repetitive database calls and doing unnecessary data transformations on every request.
But beyond the code, here’s what that means in real life:
- Clinicians waiting for data — When medical staff must pause to retrieve patient records, even for seconds, it adds up during busy clinic days.
- Integration delays — In interconnected health systems, slow endpoints can stall data exchange between hospitals, labs, and national registries.
- User frustration — Public health workers and researchers relying on OpenMRS need a smooth experience to stay productive.
Without intervention, these inefficiencies would not only hurt integration responsiveness but also risk lowering user trust in the system.
Our Approach: A Two-Pronged Strategy for Performance Enhancement
To tackle these performance challenges, we adopted a two-pronged strategy:
- Caching with Spring Cache: By caching the results of frequently called methods, we reduced the number of database interactions and the computational cost of data transformations.
- Database query optimization: We identified and refactored sections of code that were making unnecessary or repetitive database calls, consolidating them into more efficient, single-query operations.
Deep Dive into Implementation: Caching with Spring Cache
One of the most impactful improvements came from integrating Spring Cache into the FHIR2 module. Spring Cache provides a powerful abstraction for adding caching capabilities to Spring applications, allowing us to easily cache method results.
Basic Setup of Spring Cache
To start working with Spring Cache, you need to:
- Add the appropriate dependencies to your application: spring-context and spring-context-support.
- Enable caching, for example by registering a CacheManager bean.
- And that’s it! You can now write your code with cache support.
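As a minimal sketch of those steps (the configuration class and cache names here are illustrative, and this requires a Spring application context to take effect):

```java
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch only: enables Spring's caching support and registers a simple
// in-memory CacheManager for the two cache regions used in this post.
@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        return new ConcurrentMapCacheManager(
                "fhir2ConceptToCodeableConcept",
                "fhir2GetFhirConceptSources");
    }
}
```

In production setups the ConcurrentMapCacheManager would typically be swapped for a provider with eviction and TTL support; in OpenMRS, Core supplies the cache manager, as described below.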
The @Cacheable Annotation
At the heart of our caching solution is the @Cacheable annotation, part of the Spring Framework. When applied to a method, it tells Spring to cache that method’s results, keyed by its arguments.
Here’s how we used it:
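The original snippet is not reproduced here; the following is a minimal sketch of the pattern, with the class and method names assumed rather than taken from the module’s exact code:

```java
import org.hl7.fhir.r4.model.CodeableConcept;
import org.openmrs.Concept;
import org.springframework.cache.annotation.Cacheable;

public class ConceptTranslator {

    // Sketch only: Spring keys the cache entry on the Concept argument.
    @Cacheable(value = "fhir2ConceptToCodeableConcept")
    public CodeableConcept toFhirResource(Concept concept) {
        // The expensive mapping work runs only on a cache miss; on a hit,
        // Spring returns the stored CodeableConcept without executing this body.
        CodeableConcept codeableConcept = new CodeableConcept();
        codeableConcept.setText(concept.getDisplayString());
        return codeableConcept;
    }
}
```

Note that the annotation only takes effect when the method is invoked through a Spring-managed proxy; self-invocation from within the same class bypasses the cache.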


@Cacheable(value = "cacheName"): the value attribute specifies the name of the cache where the method’s results will be stored. For instance, fhir2ConceptToCodeableConcept and fhir2GetFhirConceptSources are distinct cache regions.
When a method annotated with @Cacheable is called, Spring first checks whether an entry for the given method arguments exists in the specified cache. If it does, the cached value is returned immediately, bypassing the actual method execution. If not, the method is executed and its return value is stored in the cache for future requests.
Cache Invalidation
An important aspect of caching is clearing outdated data. For example, if a new medical concept source is added, the system must refresh the list so clinicians see the most up-to-date options. We used the @CacheEvict annotation to ensure that changes in the database automatically trigger a cache refresh:
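The eviction code itself is not shown above; a hedged sketch of the pattern (the DAO field and the exact signature are assumptions) could look like this:

```java
import org.springframework.cache.annotation.CacheEvict;

public class FhirConceptSourceServiceImpl {

    // Hypothetical persistence field; the real service's DAO may differ.
    private FhirConceptSourceDao dao;

    // Saving a concept source evicts the entire fhir2GetFhirConceptSources
    // cache, so the next read goes to the database for fresh data.
    @CacheEvict(value = "fhir2GetFhirConceptSources", allEntries = true)
    public FhirConceptSource saveFhirConceptSource(FhirConceptSource conceptSource) {
        return dao.saveFhirConceptSource(conceptSource);
    }
}
```

Setting allEntries = true clears the whole cache region rather than a single key, which is the simplest safe choice when any saved source can affect the cached list.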

The next time saveFhirConceptSource is triggered, it automatically clears the fhir2GetFhirConceptSources cache, so the next fetch of concept sources hits the database for the latest data, which is then cached again for future requests.
OpenMRS Cache Configuration
To apply the cache mechanism in any OpenMRS module and control the behavior of our caches, we have to define cache properties in dedicated configuration files. OpenMRS Core handles all the basic caching machinery, so we don’t have to worry about it; we just need to supply the configuration in our specific module.
Depending on the OpenMRS Core version you use in your environment, you need to add one of the configuration files: apiCacheConfig.properties or cache-api.yaml. In our solution, we added both of them to support all core versions:
apiCacheConfig.properties:
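The file contents are not reproduced above. Purely as an illustration of the shape such a file can take (the key names below are assumptions in the style of Ehcache per-cache settings; consult your OpenMRS Core version for the supported keys):

```properties
# Illustrative only: key names are assumptions, not a verified schema.
fhir2ConceptToCodeableConcept.maxElementsInMemory=500
fhir2ConceptToCodeableConcept.timeToLiveSeconds=300
fhir2GetFhirConceptSources.maxElementsInMemory=100
fhir2GetFhirConceptSources.timeToLiveSeconds=300
```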

cache-api.yaml:
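Again purely illustrative; the keys below are assumptions, and the actual schema is defined by the cache configuration of your OpenMRS Core version:

```yaml
# Illustrative only: verify key names against your OpenMRS Core version.
caches:
  - name: fhir2ConceptToCodeableConcept
    maximumSize: 500
    expireAfterWrite: 300s
  - name: fhir2GetFhirConceptSources
    maximumSize: 100
    expireAfterWrite: 300s
```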

These configurations define how long data stays in the cache, how much memory it can use, and what happens when the cache is full.
Optimizing Database Queries: Reducing Redundancy
Beyond caching, we identified several areas where the number of database queries could be significantly reduced. A prime example was in the processing of ConceptMap objects.
Before Optimization:
In the previous implementation, iterating through concept.getConceptMappings() often led to repetitive database calls within the loop:

Here, the underlying logic in conceptSourceService.getUrlForConceptSource(crt.getConceptSource()) could hit the database once per concept mapping, leading to performance issues for concepts with many mappings.
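Since the original snippet is not reproduced above, here is a runnable, self-contained sketch of the pre-optimization pattern. The OpenMRS types are replaced with minimal stand-ins, and dbCalls counts simulated database round trips:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BeforeOptimization {

    static class ConceptSource {
        final String name;
        ConceptSource(String name) { this.name = name; }
    }

    static class ConceptMap {
        final ConceptSource source;
        ConceptMap(ConceptSource source) { this.source = source; }
        ConceptSource getConceptSource() { return source; }
    }

    static class ConceptSourceService {
        int dbCalls = 0; // counts simulated database round trips
        private final Map<String, String> urlsByName = new HashMap<>();

        ConceptSourceService() {
            urlsByName.put("LOINC", "http://loinc.org");
            urlsByName.put("SNOMED CT", "http://snomed.info/sct");
        }

        // Each call stands in for one database query.
        String getUrlForConceptSource(ConceptSource source) {
            dbCalls++;
            return urlsByName.get(source.name);
        }
    }

    public static void main(String[] args) {
        ConceptSourceService service = new ConceptSourceService();
        List<ConceptMap> mappings = new ArrayList<>();
        mappings.add(new ConceptMap(new ConceptSource("LOINC")));
        mappings.add(new ConceptMap(new ConceptSource("SNOMED CT")));
        mappings.add(new ConceptMap(new ConceptSource("LOINC")));

        List<String> urls = new ArrayList<>();
        for (ConceptMap mapping : mappings) {
            // One lookup, and potentially one query, per mapping.
            urls.add(service.getUrlForConceptSource(mapping.getConceptSource()));
        }
        System.out.println("db calls: " + service.dbCalls); // prints "db calls: 3"
    }
}
```

The round-trip count grows linearly with the number of mappings, which is exactly the behavior the refactoring removes.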
After Optimization:
We refactored this to fetch all necessary concept sources in a single database call before the loop, and then access the results within the loop:

This simple yet effective change significantly reduced the number of database round trips, resulting in a substantial performance gain.
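A runnable, self-contained sketch of the consolidated approach, again with simplified stand-in types; getUrlsBySourceName is a hypothetical name for the single-query lookup:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AfterOptimization {

    static class ConceptSource {
        final String name;
        ConceptSource(String name) { this.name = name; }
    }

    static class ConceptMap {
        final ConceptSource source;
        ConceptMap(ConceptSource source) { this.source = source; }
        ConceptSource getConceptSource() { return source; }
    }

    static class ConceptSourceService {
        int dbCalls = 0; // counts simulated database round trips

        // One query returning every source URL, keyed by source name.
        Map<String, String> getUrlsBySourceName() {
            dbCalls++;
            Map<String, String> urls = new HashMap<>();
            urls.put("LOINC", "http://loinc.org");
            urls.put("SNOMED CT", "http://snomed.info/sct");
            return urls;
        }
    }

    public static void main(String[] args) {
        ConceptSourceService service = new ConceptSourceService();
        List<ConceptMap> mappings = new ArrayList<>();
        mappings.add(new ConceptMap(new ConceptSource("LOINC")));
        mappings.add(new ConceptMap(new ConceptSource("SNOMED CT")));
        mappings.add(new ConceptMap(new ConceptSource("LOINC")));

        // Single round trip, regardless of how many mappings follow.
        Map<String, String> urlsBySourceName = service.getUrlsBySourceName();

        List<String> urls = new ArrayList<>();
        for (ConceptMap mapping : mappings) {
            urls.add(urlsBySourceName.get(mapping.getConceptSource().name));
        }
        System.out.println("db calls: " + service.dbCalls); // prints "db calls: 1"
    }
}
```

The trade-off is fetching all sources up front, which is cheap here because the set of concept sources is small and changes rarely.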
Verifying Our Improvements: Performance Testing and Monitoring
To test and verify the impact of our changes, we leveraged the OpenMRS Performance Test Tool. It allowed us to simulate various load conditions and measure the response times of the improved endpoints.

For monitoring and inspecting the cached results in real time, JConsole proved invaluable. This graphical monitoring tool, bundled with the Java Development Kit (JDK), let us connect to our running OpenMRS instance and observe cache statistics, including the number of hits and misses and the contents of each cache. That visibility allowed us to measure improvements and confirm that our caching strategy was working as intended.

Performance Gains
The optimizations resulted in the following improvements:
|        | Local – 95th pct [ms] | Local – Mean [ms] | Delay – 95th pct [ms] | Delay – Mean [ms] |
|--------|-----------------------|-------------------|------------------------|-------------------|
| Before | 666                   | 334               | 22762                  | 10907             |
| After  | 145                   | 73                | 4883                   | 2471              |
| Gain   | +78.2%                | +78.1%            | +78.5%                 | +77.3%            |
To evaluate the real-world impact of our optimizations, we tested the Get Lab Result endpoint in two environments:
- Local (No Delay) – running on a single machine, to establish a performance baseline.
- Simulated Cloud (5ms Delay) – emulating slower, real-world cloud conditions.
We measured both mean response times and the 95th percentile (meaning 95% of requests were faster than this value). Across both environments, our changes cut response times by roughly 77–79%!
For a healthcare worker, that means:
- Getting lab results in less than half the time.
- Reduced waiting during consultations, improving the patient experience.
- Faster data flow between different healthcare systems.
The Power of Collaboration: OpenMRS Community Engagement
It’s important to highlight that this initiative was not undertaken in isolation. Throughout the development and optimization process, we actively engaged with the OpenMRS community.
The most experienced developers within the community provided invaluable guidance, suggesting optimal approaches, reviewing our code, and offering constructive feedback. This collaborative spirit was instrumental in refining our solutions and ultimately led to the successful introduction of our changes into the FHIR2 module. The community-driven approach underscores the strength of open-source development. Our efforts were also appreciated by the OpenMRS community:

Fix what’s slowing you down
If your application is struggling with performance, a structured deep dive into its bottlenecks may unlock major improvements.


















