At SolDevelo, we know that system optimizations don’t have to take months to deliver game-changing results. In our latest optimization success story, we proved that targeted improvements can drastically enhance a system, boosting OpenLMIS performance by 56%!
Following this success, we worked to improve another system – openIMIS, a powerful open-source solution for managing social protection programs. There, we encountered a major bottleneck: processing large-scale beneficiary data was significantly slower than expected. The system struggled with large uploads, hindering scalability and efficiency for social protection programs that rely on rapid processing.
So, we rolled up our sleeves and got to work. The result? A massive speed boost, making large-scale data uploads smooth and fast.
What is openIMIS?

openIMIS is an open-source information management system designed with low- and middle-income countries in mind. Its main mission is to support governments and other institutions in establishing effective social protection processes and ensuring improved insurance coverage. openIMIS reduces administrative burdens, enhances transparency, and ensures that vital resources are allocated to those in need.
It is especially impactful in regions where access to social services is limited or inefficiently managed. By automating complex processes, openIMIS helps improve access to healthcare and social benefits, ensuring vulnerable populations are reached effectively.
The Challenge
One of openIMIS’s critical functions is managing large-scale beneficiary data. The Social Protection Manager needed to upload over 220,000 beneficiaries into the system to enroll them in a cash transfer program. However, the process quickly hit a wall:
- Uploading even 20,000 beneficiaries took 7.5 minutes.
- Errors occurred during processing, further slowing things down.
- Performance bottlenecks risked delays in delivering financial aid to those in need.
Taking all of the above into account, successfully uploading the file with 220,000 beneficiaries seemed impossible.
Our goal? Pinpoint and eliminate inefficiencies to ensure smooth, high-speed data uploads – without overhauling the entire system.
Our timeline? Just a few days!
How did we do that in such a short time?
The initial challenge was to accurately replicate the issue under real-world conditions to ensure meaningful diagnostics. After successfully reproducing the prolonged response times, we performed in-depth performance profiling to pinpoint the critical bottlenecks.
Through targeted optimizations over the course of a few days, we achieved a worst-case execution time reduction of 97.29%, improving from 7.5263 minutes to 12.233 seconds for uploading 20,000 beneficiaries. Moreover, after the fix, uploading the large file containing 220,000 beneficiaries became possible.
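For reference, the reduction follows directly from the measured times: (451.578 s − 12.233 s) / 451.578 s ≈ 0.9729, i.e. a 97.29% reduction, which corresponds to roughly a 37× speed-up (451.578 / 12.233 ≈ 36.9).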
Step 1: Collecting real-time data used in production
Collecting real-time data in this scenario went smoothly, thanks to the support of the open-source community. Community members provided valuable insights and shared relevant data that helped us analyze performance bottlenecks effectively.
For future reference and for evaluating other functionalities, we can leverage frequently updated information about specific implementations: the openIMIS documentation provides a comprehensive resource for monitoring and improving system performance.
Step 2: Setting up a local server for openIMIS testing
The next step involved deploying a local instance of openIMIS to accurately test performance improvements in a controlled environment. We set up a dedicated server running the latest version of openIMIS, ensuring it mirrored the production environment as closely as possible.
To facilitate real-world testing, we developed and executed automated scripts to populate the database with the real-time data collected in Step 1. These scripts ensured that the system operated under realistic conditions, allowing us to analyze performance metrics effectively.
This setup enabled us to isolate and evaluate specific optimizations before implementing them in a live environment, reducing potential risks and ensuring a smooth transition for production use.
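As an illustration, a population script of the kind described above can be a small Django routine that reads the collected records and bulk-inserts them in batches. The Beneficiary model and CSV columns in this sketch are hypothetical placeholders, not the actual openIMIS schema or the exact scripts we used:

import csv
from django.db import transaction
# Hypothetical model and app, used here only for illustration.
from myapp.models import Beneficiary

def load_beneficiaries(csv_path, batch_size=5000):
    """Read a CSV export and insert the rows in batches inside one transaction."""
    batch = []
    with open(csv_path, newline='') as f, transaction.atomic():
        for row in csv.DictReader(f):
            batch.append(Beneficiary(
                first_name=row['first_name'],        # illustrative column names
                last_name=row['last_name'],
                date_of_birth=row['date_of_birth'],
            ))
            if len(batch) >= batch_size:
                Beneficiary.objects.bulk_create(batch)
                batch = []
        if batch:
            Beneficiary.objects.bulk_create(batch)

Loading the data in batches keeps memory usage predictable even for very large files.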
Step 3: Profiling openIMIS with Django Silk

To gain deeper insights into openIMIS’s performance and identify potential bottlenecks, we integrated Django Silk, a powerful profiling and inspection tool for Django-based applications.

Installation & Configuration
We installed Django Silk within our local testing environment to monitor HTTP requests and database queries in real time. The setup involved:
- Installing the package via pip:
pip install django-silk
- Registering Silk in the INSTALLED_APPS and MIDDLEWARE settings of settings.py:
INSTALLED_APPS = [
    ...
    'silk',
]

MIDDLEWARE = [
    ...
    'silk.middleware.SilkyMiddleware',
]
- Running migrations to initialize the Silk database tables:
python manage.py migrate silk
- Configuring access control so that profiling data is only accessible to authorized users (e.g., SuperAdmin accounts).
- Reviewing the django-silk documentation for other settings that are helpful when setting it up on a server; a minimal example of the access-control options is sketched after this list.
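The exact values depend on the deployment, but a minimal sketch could look as follows (the SILKY_* settings and the URL entry come from django-silk itself; the chosen values are just an example):

# settings.py
SILKY_AUTHENTICATION = True       # require a login before the Silk UI is shown
SILKY_AUTHORISATION = True        # additionally check SILKY_PERMISSIONS
SILKY_PERMISSIONS = lambda user: user.is_superuser   # e.g. SuperAdmin accounts only
SILKY_PYTHON_PROFILER = True      # collect cProfile data for silk_profile blocks

# urls.py
from django.urls import include, path

urlpatterns = [
    ...
    path('silk/', include('silk.urls', namespace='silk')),
]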
As the final step, we added the silk_profile decorator to register profiling for specific services where potential bottlenecks might occur. This is how the decorator is applied:
from rest_framework.decorators import api_view, permission_classes
from silk.profiling.profiler import silk_profile
# openIMIS-specific imports (check_user_rights, IndividualConfig) are omitted here
...

@api_view(["POST"])
@permission_classes([check_user_rights(IndividualConfig.gql_individual_create_perms, )])
@silk_profile(name='Import beneficiaries')
def import_beneficiaries(request):
    # <IMPLEMENTATION>
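Besides decorating a whole view, silk_profile can also be used as a context manager to profile a narrower block of code, which is handy when drilling into one part of a larger service. A small illustrative example, with a hypothetical helper function:

from silk.profiling.profiler import silk_profile

def process_upload(rows):
    # Profile only the database-heavy part of the service.
    with silk_profile(name='Validate beneficiary rows'):
        validate_rows(rows)   # hypothetical helper, for illustration only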
Usage & best practices
Django Silk provides a user-friendly UI for analyzing request execution times, query performance, and resource usage. This allowed us to:
- Track slow database queries and optimize them accordingly.
- Analyze API response times to detect potential performance bottlenecks.
- Monitor request processing details to refine system efficiency.
Since profiling tools can introduce overhead, Silk was used strictly in development mode and on a dedicated test server containing a copy of production data with appropriate security measures. Direct connection to production was avoided to maintain system integrity.
By leveraging Django Silk, we were able to pinpoint inefficiencies, prioritize optimizations, and enhance openIMIS’s overall responsiveness.

Step 4: Poor performance investigation
After completing Step 3, we took a closer look at the Silk dashboard to check the details of the low-performing modules. In this case, we looked deeper into the beneficiary import:

Based on the SQL analysis, one of the SQL UPDATE statements turned out to be particularly “expensive”:

Here we can see that this UPDATE contains thousands of branches in a single CASE expression, one branch per updated row.

According to the stack trace, the problem originates from this line:
IndividualDataSource.objects.bulk_update(data_sources_to_update, ['validations'])
Step 5: Optimization process
In the pull requests listed below, we removed the bulk_update call that had been identified as the bottleneck. We replaced this ORM bulk operation with multiple UPDATE statements joined together and executed within a single transaction. Additionally, the code now handles string formatting properly using psycopg2.sql.SQL and sql.Literal, which helps prevent potential formatting errors.
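A simplified sketch of this pattern is shown below; the table and column names are illustrative placeholders, and the actual change lives in the pull requests linked underneath:

import json
from django.db import connection, transaction
from psycopg2 import sql

def update_validations(data_sources_to_update):
    # Build one small UPDATE per row, composed safely with psycopg2's sql module.
    # The table and column names here are illustrative, not the real openIMIS schema.
    if not data_sources_to_update:
        return
    statements = [
        sql.SQL(
            'UPDATE "individual_data_source" SET "validations" = {validations} '
            'WHERE "id" = {row_id}'
        ).format(
            validations=sql.Literal(json.dumps(ds.validations)),
            row_id=sql.Literal(str(ds.id)),
        )
        for ds in data_sources_to_update
    ]
    # Join the statements and send them together, inside a single transaction,
    # instead of the one huge UPDATE ... CASE WHEN ... generated by bulk_update.
    with transaction.atomic(), connection.cursor() as cursor:
        cursor.execute(sql.SQL('; ').join(statements))

Each statement stays small and easy for the database to plan, while the single transaction keeps the whole import atomic.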
Analysis: Comparison with bulk_update ORM (Django)
The original bulk_update method in Django can be very efficient for updating multiple rows in a single query. However, it has limitations when dealing with complex field values or when more control over the SQL query is needed. In this case, using bulk_update would involve passing a list of model instances to be updated, which could result in inefficiency when custom formatting is required, especially for non-trivial fields like JSON or UUIDs.
In contrast, this new approach manually constructs the UPDATE statements and ensures the correct handling of string formatting. The use of psycopg2.sql.SQL and sql.Literal ensures that the values are properly sanitized and formatted, preventing issues such as SQL injection or errors in formatting. This approach allows more flexibility for custom SQL logic, particularly when working with more complex fields or needing to customize queries.
Pull requests:
- https://github.com/openimis/openimis-be-individual_py/pull/150
- https://github.com/openimis/openimis-be-social_protection_py/pull/108
Step 6: Results verification
After successfully deploying the optimized version of openIMIS in our local testing environment, we conducted a detailed performance verification to quantify the improvements.
By running the same dataset through the system, we observed a significant reduction in execution time when importing 20,000 beneficiaries. To ensure the validity of these results, we performed multiple test iterations; a representative before/after comparison looks as follows:

Metric | Before optimization | After optimization
Overall execution time | 451,578 ms | 12,233 ms
Time spent on queries | 422,588 ms | 685 ms
Number of queries | 44 | 43
These results confirm that the optimizations effectively eliminated key bottlenecks, drastically improving system efficiency while maintaining data integrity and stability.
The overall results
After implementing these optimizations, the function’s execution time dropped to just 12.233 seconds – a staggering 37x performance improvement! In the worst-case scenario we measured, that amounts to a 97.29% reduction in execution time, achieved without changing any architectural aspects of the project.
The work that we did is available on openIMIS GitHub pull requests:
- https://github.com/openimis/openimis-be-individual_py/pull/150
- https://github.com/openimis/openimis-be-social_protection_py/pull/108
After the fix, the upload of 228,264 individuals took 2.82 minutes, which is an acceptable time for such a large upload. Previously, uploading a file containing 220,000 individuals was simply impossible: the requests timed out, and the system did not allow users to import files that large.
Business impact
This optimization significantly improves openIMIS’s ability to handle large beneficiary enrollments, reducing processing delays and ensuring that social protection programs can reach those in need faster.
Similar cases, where large numbers of beneficiaries need to be uploaded, will keep appearing, for example in Malawi, where the number of beneficiaries could be even larger than in other openIMIS implementations. This is not limited to one specific implementation; it applies to other current and potential cases relevant to the social protection business flow.
By making the system more efficient, we enhance user experience, reduce operational overhead, and increase scalability for future growth.
At SolDevelo, we are committed to leveraging cutting-edge tools and best practices to optimize system performance. This achievement with openIMIS is just one example of how targeted engineering efforts can lead to impactful results.
Want to try openIMIS in real time?
The latest released version of openIMIS is available at demo.openimis.org.
Credentials:
User role | Language | Username | Password
Admin | English | Admin | admin123
Admin | French | Admin_Fr | admin123
Enrollment officer | English | E00005 | E00005E00005
Claim administrator | English | RHOS001 | RHOS0011RHOS0011