Can you significantly speed up an application in just a week? Absolutely! Often, all it takes is a fresh perspective, the right diagnosis, and a few precise adjustments to achieve a noticeable performance gain.
In one of our projects, users reported a major issue – a process was sometimes taking over 30 minutes to complete. Strangely, our tests on UAT and demo environments showed no such delays, meaning the problem remained hidden during standard validation.
Our first challenge was to reproduce the issue in real-world conditions. Once we successfully replicated the long response times, we conducted a thorough performance analysis to identify bottlenecks.
After just a few days of targeted optimizations, we managed to reduce the worst-case execution time by 56%, from 32 minutes to 14.
This proves that not every optimization takes months to implement. Sometimes, a focused effort in just a week can deliver substantial improvements by eliminating critical bottlenecks and fine-tuning the configuration.
Optimizing system performance – Eliminating bottlenecks in OpenLMIS approvals
OpenLMIS is a national-level logistics management information system, ensuring the timely distribution of medical supplies and essential goods. One of its most critical operations is Requisition Approval – the final step where an authorized requisition is approved, triggering order creation and stock updates.
This requisition process runs once per month, typically at the beginning of the month, when supply chain teams finalize their requisitions. However, the approval step was painfully slow, sometimes taking over 30 minutes to complete. This wasn’t just an inconvenience – it had real operational consequences:
- 840 users were affected, forced to wait extended periods to complete a crucial task.
- Multiple approvals slowed each other down, causing additional delays.
- Orders and stock updates couldn’t proceed until approvals were finalized, slowing down the entire supply chain.
By optimizing the application, we reduced the execution time by 56% – from 32 minutes to 14. While not yet instant, this improvement means:
- Users spend significantly less time waiting, improving their workflow.
- Bulk approvals at the start of the month are more manageable.
- Orders and stock updates move through the system faster, enhancing overall efficiency.
In large-scale logistics, every delay adds up. By eliminating this bottleneck, we’ve saved thousands of user-hours annually and made OpenLMIS more efficient for those who rely on it.
Under the hood – How we optimized performance
When tackling performance issues, the key to success is understanding the real-world conditions that cause slowdowns. In this case, we needed to accurately reproduce the client’s issue, analyze bottlenecks, and apply targeted optimizations within a short timeframe. Here’s how we did it.
Step 1: Reproducing the issue with real data
To diagnose the problem effectively, we needed a realistic dataset. Instead of using synthetic test data, we:
- Pulled an anonymized snapshot of the production database using AWS RDS snapshots (a far faster alternative to PostgreSQL dumps).
- Used a database state from two months prior, ensuring we had the exact data required to replicate the client’s scenario.
- Automated the approval process by scripting JSON payloads for API calls instead of manually filling out forms in OpenLMIS’s Web UI (a minimal sketch follows this list).
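For illustration, here is a minimal sketch of such a scripted call using Java’s built-in HTTP client – the endpoint path, authentication scheme, and payload below are hypothetical placeholders rather than the actual OpenLMIS API:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApprovalScript {
  public static void main(String[] args) throws Exception {
    final String requisitionId = args[0];  // taken from the restored snapshot
    final HttpClient client = HttpClient.newHttpClient();
    // Hypothetical endpoint and payload, for illustration only
    final HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://test-env.example.org/api/requisitions/" + requisitionId + "/approve"))
        .header("Authorization", "Bearer " + System.getenv("ACCESS_TOKEN"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString("{}"))
        .build();
    final HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}

Running such calls in a loop made it easy to trigger many approvals repeatably – something manual form-filling in the Web UI could never match.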
Step 2: Profiling and identifying the bottleneck
With a proper test environment in place, we enabled profiler logging within the application:
- Added org.slf4j.profiler.Profiler to generate structured performance logs (see the sketch after this list).
- Kept the approach lightweight by avoiding additional monitoring tools.
- Successfully reproduced the issue under real-world conditions – confirming that approval times were just as slow as users reported.
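As an illustration, this is roughly how the slf4j-ext Profiler wraps the suspected steps – the profiler and step names here are illustrative, not the project’s actual instrumentation points:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.profiler.Profiler;

class ApprovalProfiling {
  private static final Logger LOGGER = LoggerFactory.getLogger(ApprovalProfiling.class);

  void approveWithProfiling() {
    final Profiler profiler = new Profiler("REQUISITION_APPROVAL");  // illustrative name
    profiler.setLogger(LOGGER);

    profiler.start("approve-requisition");  // starts a named stopwatch
    // ... approval logic ...

    profiler.start("update-stock-cards");   // stops the previous stopwatch, starts the next
    // ... stock update logic ...

    profiler.stop().log();  // writes a structured timing breakdown to the log
  }
}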
The profiler logs pointed us to the Stock Management microservice – specifically, an endpoint responsible for updating stock records associated with affected products.
This was the slowest part of the process, so we focused our efforts on optimizing this specific operation.
Step 3: Targeted optimizations
1. Optimizing JPA and Hibernate configuration
We made several database and Hibernate-related optimizations (a sample configuration follows this list), including:
- Reduced unnecessary data fetching by converting eager-loading relations to lazy-loading.
- Enabled batching for database operations by adjusting Hibernate properties:
  - hibernate.jdbc.batch_size – Enabled statement batching.
  - hibernate.order_inserts & hibernate.order_updates – Optimized execution order for batching.
  - hibernate.query.in_clause_parameter_padding – Reduced query plan cache fragmentation.
  - hibernate.query.plan_cache_max_size – Increased query plan cache size.
- Enabled reWriteBatchedInserts in PostgreSQL for further batch optimizations.
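The property names above are standard Hibernate and PostgreSQL JDBC driver settings; the concrete values below are illustrative rather than the project’s exact tuning. Assuming a Spring Boot style application.properties, the configuration looks roughly like this:

# Batch JDBC statements instead of sending them one by one
spring.jpa.properties.hibernate.jdbc.batch_size=50
# Group inserts and updates by entity type so statements can actually be batched
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
# Pad IN-clause parameter lists to powers of two to reuse cached query plans
spring.jpa.properties.hibernate.query.in_clause_parameter_padding=true
# Enlarge the query plan cache (illustrative value)
spring.jpa.properties.hibernate.query.plan_cache_max_size=4096
# Let the PostgreSQL driver rewrite batched inserts into multi-row INSERT statements
spring.datasource.url=jdbc:postgresql://localhost:5432/openlmis?reWriteBatchedInserts=true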
2. Improving data processing logic
Originally, the system iterated over all requisition items (StockCard) twice – first applying partial changes, then looping again for additional updates. This kept every touched entity in memory for the whole operation, leading to unnecessary memory usage.
We restructured the logic so that each StockCard was fully processed before moving to the next one, preventing performance degradation over time.
We used:
- Explicit EntityManager.flush() to move changes to the database incrementally.
- Explicit EntityManager.clear() to prevent the persistence context from growing too large.
This ensured that each iteration maintained consistent performance, eliminating slowdowns in later loops caused by increasing persistence context size.
// Before changes, simplified version
final List<StockCardEvent> newEvents = new ArrayList<>();
for (RequisitionLine line : lines) {
  final StockCard lineCard = findCard(line);
  final StockCardEvent newEvent = updateStockCard(lineCard, line);
  newEvents.add(newEvent);
}
// Second pass: every event is kept in memory until this point
recalculateStocks(newEvents);
// After changes, simplified version
for (RequisitionLine line : lines) {
  final StockCard lineCard = findCard(line);
  final StockCardEvent newEvent = updateStockCard(lineCard, line);
  recalculateStock(newEvent);
  entityManager.flush();  // write pending changes to the database incrementally
  entityManager.clear();  // detach entities so the persistence context stays small
}
3. Fixing inefficient entity mappings
We discovered that an Adjustment entity had a problematic mapping:
- It stored only three fields (quantity, reason, and ID) but was referenced by three other entities, each with a foreign key.
- This caused Hibernate to insert an Adjustment first, then run an update to set the foreign key, doubling the number of writes.
- By defining the necessary mappings on the Adjustment entity, we eliminated the extra update, ensuring a single INSERT per record.
// Before changes, simplified version: the association is mapped only on
// the referencing entity, so Hibernate INSERTs the Adjustment first and
// then UPDATEs the foreign key in a second statement
@Entity
class Adjustment {
  @Id UUID id;
  @Column Integer quantity;
  @Column String reason;
}

@Entity
class Other {
  // ...
  @OneToMany
  @JoinColumn(name = "stockLineid")
  List<Adjustment> adjustments;
}
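A minimal sketch of the corrected mapping, assuming a bidirectional association and showing only one of the three referencing entities (field and column names as in the simplified example above). With the @ManyToOne side owning the foreign key, Hibernate writes it as part of the initial INSERT:

// After changes, simplified version
@Entity
class Adjustment {
  @Id UUID id;
  @Column Integer quantity;
  @Column String reason;

  // Owning side of the association: the foreign key is set in the INSERT itself
  @ManyToOne
  @JoinColumn(name = "stockLineid")
  Other other;
}

@Entity
class Other {
  // ...
  @OneToMany(mappedBy = "other")
  List<Adjustment> adjustments;
}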
4. Adding a missing index
One of the tables involved in the operation lacked an index on a frequently queried column. Adding an index significantly reduced lookup times, especially for large datasets.
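As a sketch of the kind of change involved – the table, column, and index names below are hypothetical, and in a production system the index would normally be added through a database migration – such an index can also be declared on the JPA entity:

@Entity
@Table(name = "stock_card_line_items",  // hypothetical table name
    indexes = @Index(name = "idx_line_items_stock_card_id",
        columnList = "stockcardid"))    // hypothetical frequently queried column
class StockCardLineItem {
  // ...
}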
Results & takeaways
After applying these optimizations, the approval process dropped from 32 minutes to 14 minutes in the worst-case scenario – a 56% improvement. While there’s still room for further enhancements, these changes provided an immediate boost without requiring major architectural shifts.
Even within a tight one-week optimization window, strategic profiling and targeted fixes can yield significant performance gains. If your application is slowing down under real-world usage, a focused deep dive may be all it takes to unlock better performance.
Technologies used
- Java
- Hibernate (JPA)
- PostgreSQL
- AWS RDS
- SLF4J (profiler)