OpenLMIS, a vital tool in global health logistics, was facing a growing performance issue – especially with login times ballooning to an average of 37 seconds. The culprit? Inefficient use of browser Local Storage. By rethinking how data was saved and retrieved, we achieved faster logins and significantly improved responsiveness across the system.
Here’s how we diagnosed the problem, implemented a scalable fix, and brought login time down to just 8.5 seconds – a 77% improvement.
Why performance matters in health logistics

Picture this: You’re a health worker in a rural clinic. The Internet is unreliable, but you still need to log into OpenLMIS to manage stock, submit orders, and keep your supply chain moving. You open your laptop… and wait. 10 seconds. 20. 30. Sound familiar?
When systems lag, health workers lose valuable time – and in supply chains, every second counts.
OpenLMIS is built to work offline for exactly this reason. It relies on storing key data locally in the browser, using Local Storage, to ensure users can stay productive no matter the connection. But as more facilities, products, and users joined the system, the amount of data ballooned – and the system’s original design started to buckle under the load.
The result: slow logins, unresponsive screens, and frustrated users.
The Challenge: Scaling pains hit performance hard
In its early days, OpenLMIS performed well. However, as more health facilities adopted the system and the amount of data grew (more users, products, facilities, configurations), users began experiencing significant slowdowns.
The most obvious pain point was logging in. What used to be quick became a frustrating wait, averaging 37 seconds. This wasn’t a flaw in the original concept but a scaling problem: the way the application managed its growing local data, particularly for offline use, simply wasn’t designed for the load it now faced.
This login delay became a major hurdle, signaling deeper performance issues that emerged as the system scaled. Users also reported sluggishness in other areas related to data handling, confirming the problem was tied to managing the expanding local dataset.
The Investigation: Finding the bottleneck in Local Storage
To understand why performance degraded at scale, we used browser profiling tools during the login process on systems with realistic data loads. The results were stark: JavaScript activity was consuming nearly 97% of the browser’s CPU power, effectively freezing the application.
Digging deeper, we traced this intense activity to how the application interacted with the browser’s Local Storage.
Finding 1: The “Item-by-Item” Trap
OpenLMIS was saving each data item individually into Local Storage – thousands of localStorage.setItem() calls, one for every little record.
While manageable with small amounts of data, this approach became incredibly inefficient as the dataset grew. The sheer number of separate save operations overwhelmed the browser.
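As a simplified sketch (illustrative, not the actual OpenLMIS code), the pattern looked like this: one storage call, and one serialization, per record.
/**
 * Sketch of the item-by-item pattern (illustrative, not actual OpenLMIS code).
 */
function saveItemsIndividually(items) {
    items.forEach(function (item) {
        // One blocking localStorage call, and one JSON.stringify, per item.
        localStorage.setItem('item_' + item.id, JSON.stringify(item));
    });
}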
Finding 2: The High Cost of Repeated Conversions
Each time data is saved to Local Storage, it must be converted (serialized) into a text format (JSON). When read back, it must be converted back (parsed) into a usable format. Doing this conversion thousands of times in rapid succession – once for each individual item being saved or loaded – consumed enormous CPU resources. This serialization overhead, minor at first, became a major performance killer as the number of items increased.
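A short sketch of the cost difference (hypothetical data, not OpenLMIS code): serializing a thousand records one by one means a thousand JSON.stringify calls and a thousand writes, while serializing them as one array means one of each.
// Illustrative comparison of serialization overhead (hypothetical data).
var records = [];
for (var i = 0; i < 1000; i++) {
    records.push({ id: i, name: 'Record ' + i });
}
// Per-item: 1000 JSON.stringify calls and 1000 blocking writes.
records.forEach(function (record) {
    localStorage.setItem('record_' + record.id, JSON.stringify(record));
});
// Batched: one JSON.stringify call and one blocking write.
localStorage.setItem('records', JSON.stringify(records));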
Finding 3: How Synchronous Calls Freeze the Browser
Local Storage operations are synchronous – meaning each one blocks the browser until it finishes, effectively freezing everything – including updating the screen or responding to clicks. When thousands of these blocking calls happened one after another, the total waiting time added up significantly, causing the long freezes users experienced during login. The login process, which prepares the application state including the ever-larger offline data cache, suffered most acutely from this scaling failure.
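One way to see this blocking directly (a minimal sketch using the standard performance.now() API, not part of the OpenLMIS code) is to time a burst of synchronous writes; nothing else, including rendering, can run during the measured interval.
// Sketch: timing how long a burst of synchronous writes blocks the main thread.
function measureBlockingTime(writeFn) {
    var start = performance.now();
    writeFn(); // The browser can do nothing else while this runs.
    return performance.now() - start;
}

var blockedMs = measureBlockingTime(function () {
    for (var i = 0; i < 1000; i++) {
        localStorage.setItem('key_' + i, JSON.stringify({ id: i }));
    }
});
console.log('Main thread blocked for ' + blockedMs.toFixed(1) + ' ms');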
The Solution: Batching Data for Scalable Performance
Knowing the bottleneck was the item-by-item approach failing at scale, we focused on a more efficient and scalable strategy for using Local Storage.
Core Strategy: Grouping Data Before Saving
The key change was to stop saving data piece by piece. Instead, we modified the application to gather related data items into larger groups (batches) first. Then, each complete batch was saved to Local Storage with a single localStorage.setItem() call.
This approach scales beautifully: even if a batch contains thousands of items, it still only requires one save operation and one data conversion (serialization). This dramatically reduced the number of blocking calls and the overall CPU load, even with large datasets.
/**
 * BEFORE Optimization: Caches source data item by item.
 */
function cacheSources_inefficient(sources, facilityId) {
    sources.forEach(function (source) {
        source.facilityId = facilityId;
        // Inefficient: Calls storage 'put' individually for each source inside the loop.
        offlineSources.put(source);
    });
}

/**
 * AFTER Optimization: Caches source data in a single batch operation.
 */
function cacheSources_optimized(sources, facilityId) {
    var sourcesToStore = sources.map(function (source) {
        source.facilityId = facilityId;
        return source;
    });
    // Efficient: Calls a batch storage operation 'putAll' once with all sources.
    offlineSources.putAll(sourcesToStore);
}
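Here offlineSources is OpenLMIS's storage wrapper. As a rough sketch of what a batch write can look like underneath (our simplification, not the actual OpenLMIS storage layer), putAll can serialize the whole collection once and issue a single setItem:
/**
 * Sketch of a batch store built on Local Storage (our simplification,
 * not the actual OpenLMIS storage layer).
 */
function createBatchStore(storageKey) {
    return {
        putAll: function (items) {
            // One serialization and one blocking write for the entire batch.
            localStorage.setItem(storageKey, JSON.stringify(items));
        }
    };
}
var offlineSources = createBatchStore('offlineSources');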
Streamlining Data Reads
Reading data was optimized similarly. The application now fetches an entire batch with one localStorage.getItem() call and performs the data conversion (parsing) just once. Accessing individual items within the batch is then quick and efficient using the resulting in-memory data.
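A minimal sketch of that read path (assuming the single-key batch layout from the sketch above): one getItem, one JSON.parse, then fast in-memory lookups.
// Sketch: read the whole batch once, then index it in memory.
function loadBatch(storageKey) {
    var raw = localStorage.getItem(storageKey); // one blocking read
    var items = raw ? JSON.parse(raw) : [];     // one parse
    var byId = {};
    items.forEach(function (item) {
        byId[item.id] = item;
    });
    return byId; // individual lookups are now in-memory and cheap
}
var sourcesById = loadBatch('offlineSources');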
Supporting Best Practices
- Data Diet: We reviewed the data being stored locally, removing anything non-essential to keep batches smaller and conversions faster.
- Smart Loading: We encouraged loading data batches only when needed rather than loading everything upfront; this becomes especially important as data volumes grow (see the sketch below).
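A sketch of the lazy-loading idea (hypothetical helper, not OpenLMIS code): read a batch from Local Storage only on first access, then serve it from an in-memory cache.
// Sketch: lazy-load a batch on first access (hypothetical helper).
var batchCache = {};
function getBatchLazily(storageKey) {
    if (!(storageKey in batchCache)) {
        var raw = localStorage.getItem(storageKey);
        batchCache[storageKey] = raw ? JSON.parse(raw) : [];
    }
    return batchCache[storageKey];
}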
By replacing the granular, unscalable method with a consolidated batching strategy, we directly addressed the performance limitations revealed by our investigation.
The Results: Faster Logins and Smoother Sailing
Implementing the scalable batching strategy yielded immediate and substantial improvements:
Login Time Restored
Average login time plummeted from 37 seconds back down to 8.5 seconds. This ≈77% reduction brought relief to users and proved the system could handle larger datasets efficiently. Profiling confirmed the CPU overload during login was gone.
[Profiler screenshots: before and after the optimization]
System-Wide Responsiveness
The benefits rippled throughout the application, ensuring better performance regardless of data volume:
- Modules loaded faster as cached data could be read efficiently in batches.
- The user interface felt much snappier, with less freezing, as main thread blocking was drastically reduced.
- Handling offline data and synchronizing became smoother and more reliable.
- Overall browser stability improved due to more efficient resource use.
Conclusion: Scale Demands Smart Design
The performance journey of OpenLMIS underscores the need for scalable design choices, especially for fundamental operations like client-side data storage for offline use. What started as a small performance hiccup became a major usability issue. But by identifying a core bottleneck – the inefficient use of Local Storage – and replacing it with a scalable batching strategy, we restored responsiveness and ensured OpenLMIS could keep pace with its growing global footprint.
This optimization didn’t just lead to faster logins – it made the entire app feel snappier, more stable, and ready for the future.
This case study clearly shows that carefully optimizing how web applications interact with core browser features is vital not just for immediate speed, but for long-term usability and success as systems grow.
Need help optimizing performance in your system?
We specialize in scalable solutions for complex applications. Let’s talk about your challenges.