Not every optimization takes months: We’ve boosted key OpenMRS endpoint performance by 70-85% in just one week

We recently partnered with the OpenMRS community to optimize one of the platform’s most frequently used API endpoints – Get Location. Through targeted improvements, we reduced response time by 70–85%, significantly enhancing system performance for both users and integrated services.


Why focus on this endpoint? OpenMRS is an open-source health IT platform used worldwide to manage patient data and support clinical workflows – especially in low-resource settings. The Get Location endpoint is part of the platform’s FHIR API and plays a central role in many routine operations. It’s heavily used by the OpenMRS UI and external systems, making it a high-impact target for optimization. 

It also struck a rare balance: some endpoints are slower, and others are used more often, but Get Location was both common and slow. That made it an ideal candidate for optimization – improving it would have a widespread impact on overall system performance.

In this case study, we’ll walk through how we identified the performance issues, the optimizations we applied, and the impact of our work.

Identifying the problem

To assess the performance of the FHIR Get Location endpoint, we leveraged the existing OpenMRS performance testing tools, specifically the Gatling-based test suite developed by the community. Using these tests, we gathered baseline performance metrics, giving us a clear starting point for optimization.
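For readers who haven't used Gatling, the sketch below shows the general shape of such a load test, written with Gatling's Java DSL against a local OpenMRS demo instance. It is a simplified illustration rather than the community test suite itself – the base URL, credentials, and load profile are placeholder values.

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

// Minimal baseline load test for the FHIR Get Location endpoint (illustrative only).
public class GetLocationSimulation extends Simulation {

    HttpProtocolBuilder httpProtocol = http
            .baseUrl("http://localhost:8080/openmrs/ws/fhir2/R4") // local OpenMRS demo instance
            .basicAuth("admin", "Admin123")                       // default demo credentials
            .acceptHeader("application/fhir+json");

    ScenarioBuilder getLocations = scenario("Get Location baseline")
            .exec(http("GET /Location")
                    .get("/Location")
                    .check(status().is(200)));

    {
        // Placeholder load profile: 5 requests per second for one minute.
        setUp(getLocations.injectOpen(constantUsersPerSec(5).during(60)))
                .protocols(httpProtocol);
    }
}
```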

One enhancement we introduced was running the tests in an environment with added network latency. By simulating real-world cloud deployment conditions, we could better understand how database queries and API response times behave under less-than-ideal circumstances.
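There are several ways to inject such a delay; one convenient option (not necessarily the setup used here) is to route the application's database connection through a TCP proxy such as Toxiproxy and add a fixed latency there. The sketch below uses the toxiproxy-java client; the proxy name, ports, and direction are illustrative.

```java
import eu.rekawek.toxiproxy.Proxy;
import eu.rekawek.toxiproxy.ToxiproxyClient;
import eu.rekawek.toxiproxy.model.ToxicDirection;

// One possible way to emulate a cloud-like network: route the application's
// database traffic through a Toxiproxy instance that adds a fixed latency.
public class SimulatedCloudSetup {
    public static void main(String[] args) throws Exception {
        ToxiproxyClient client = new ToxiproxyClient("localhost", 8474);

        // The application connects to localhost:13306, which forwards to the real database.
        Proxy dbProxy = client.createProxy("openmrs-db", "localhost:13306", "localhost:3306");

        // Add ~5 ms of latency to data flowing from the database back to the application.
        dbProxy.toxics().latency("cloud-latency", ToxicDirection.DOWNSTREAM, 5);
    }
}
```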

We followed a measure-optimize-measure approach to ensure our work delivered real impact:

  • First, we measured baseline performance to identify bottlenecks.
  • Then, we applied targeted optimizations.
  • Finally, we measured again to verify improvements and quantify the gains.

This structured method ensured that every change was purposeful – and that the performance boost was both real and measurable.

Investigation & optimization

Our profiling identified excessive database queries as the primary culprit behind the slow API response times. Although the source code architecture was clean, readable, and easy to maintain, the existing logic was making multiple redundant queries, significantly increasing response times, especially under network latency conditions.
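As a generic illustration of how this kind of problem can be confirmed in a Hibernate-based application such as OpenMRS (this is a minimal sketch, not the actual profiling setup), Hibernate's Statistics API can count the SQL statements issued while a single request is served:

```java
import org.hibernate.SessionFactory;
import org.hibernate.stat.Statistics;

// Minimal sketch: count the SQL statements issued while one request is handled.
// Requires hibernate.generate_statistics=true (or enabling statistics at runtime).
public class QueryCounter {

    private final SessionFactory sessionFactory;

    public QueryCounter(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public long countStatements(Runnable request) {
        Statistics stats = sessionFactory.getStatistics();
        stats.setStatisticsEnabled(true);
        long before = stats.getPrepareStatementCount();
        request.run(); // e.g. invoke the Get Location handler once
        return stats.getPrepareStatementCount() - before;
    }
}
```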

Following this investigation, the key optimization was reducing the number of database queries. By refactoring the Get Location endpoint logic, including the internal logic that transforms OpenMRS domain objects into FHIR-compliant resources, we were able to eliminate unnecessary queries, leading to substantial gains in efficiency.
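To make the pattern concrete, here is a deliberately simplified, hypothetical sketch. The class and method names are made up and do not correspond to the actual OpenMRS or FHIR2 module code; the shape of the change is what matters: replace one query per location with a single batched query, then translate from data already in memory.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical illustration (not OpenMRS code) of removing an N+1 query pattern.
public class NPlusOneSketch {

    record Location(int id, String name) {}
    record LocationTag(int locationId, String tag) {}

    interface LocationDao {
        List<LocationTag> getTagsForLocation(int locationId);                   // before: called N times
        Map<Integer, List<LocationTag>> getTagsForLocations(List<Integer> ids); // after: called once
    }

    // Before (hypothetical): N + 1 round trips to the database.
    static List<String> translateSlow(List<Location> locations, LocationDao dao) {
        List<String> fhirResources = new ArrayList<>();
        for (Location location : locations) {
            List<LocationTag> tags = dao.getTagsForLocation(location.id()); // one query per location
            fhirResources.add(toFhir(location, tags));
        }
        return fhirResources;
    }

    // After (hypothetical): a single batched query, then translation from in-memory data.
    static List<String> translateFast(List<Location> locations, LocationDao dao) {
        List<Integer> ids = locations.stream().map(Location::id).toList();
        Map<Integer, List<LocationTag>> tagsByLocation = dao.getTagsForLocations(ids); // one query total
        List<String> fhirResources = new ArrayList<>();
        for (Location location : locations) {
            fhirResources.add(toFhir(location, tagsByLocation.getOrDefault(location.id(), List.of())));
        }
        return fhirResources;
    }

    static String toFhir(Location location, List<LocationTag> tags) {
        return location.name() + " " + tags; // stand-in for the real domain-to-FHIR translation
    }
}
```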

We didn’t just propose an idea – we delivered working code. A tested proof-of-concept (PoC) showing major performance improvements was submitted to the community as a pull request.

Performance gains

The optimizations resulted in the following improvements:

Metric                       | Local (No Delay) | Simulated Cloud (5 ms Delay)
95th Percentile (Before)     | 271 ms           | 3762 ms
95th Percentile (After)      | 58 ms            | 1361 ms
Mean Response Time (Before)  | 200 ms           | 3555 ms
Mean Response Time (After)   | 35 ms            | 955 ms
Performance Gain             | +78% to +82%     | +63% to +73%

To evaluate the real-world impact of our optimizations, we tested the Get Location endpoint in two environments:

  • Local (No Delay) – running on a single machine, to establish a performance baseline.
  • Simulated Cloud (5ms Delay) – emulating slower, real-world cloud conditions.

We measured both average response times and the 95th percentile (meaning 95% of requests were faster than this value). Across both environments, our changes led to dramatic improvements:

  • Local tests showed up to 82% faster responses.
  • Simulated Cloud tests showed up to 73% faster responses, greatly improving performance even in less-than-ideal conditions.
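As a side note on how these figures are read: Gatling reports these statistics directly, but the sketch below shows one common way to compute a mean and a 95th percentile from raw response times (nearest-rank method; the sample values are made up).

```java
import java.util.Arrays;

// Illustrative only: computing mean and 95th-percentile response times from sample data.
public class Percentiles {
    public static void main(String[] args) {
        long[] responseTimesMs = {31, 35, 28, 40, 33, 37, 30, 52, 36, 58}; // made-up sample
        Arrays.sort(responseTimesMs);

        double mean = Arrays.stream(responseTimesMs).average().orElse(0);
        // Nearest-rank index of the 95th percentile.
        int p95Index = (int) Math.ceil(0.95 * responseTimesMs.length) - 1;
        long p95 = responseTimesMs[p95Index];

        System.out.printf("mean = %.1f ms, p95 = %d ms%n", mean, p95);
    }
}
```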

Results & takeaways

After applying these optimizations, the FHIR Get Location endpoint saw a 70-85% improvement in response times. This optimization significantly reduced database query overhead, leading to faster API responses, especially in cloud environments where network delays amplify inefficiencies.

This change demonstrated how targeted performance tuning can yield substantial gains without major architectural overhauls. There’s still room for further refinement, such as expanding these optimizations to other parts of OpenMRS and its FHIR API.

Fix what’s slowing you down

If your application is struggling with performance, a structured deep dive into its bottlenecks may unlock major improvements.

Technologies used

  • Java
  • Spring
  • Hibernate
  • Gatling
  • OpenMRS
