In the dynamic realm of healthcare information systems, the OpenIMIS project stands as a shining example of innovative solutions that bridge the gap between technology and healthcare. OpenIMIS, short for “Open Insurance Management Information System,” is a groundbreaking initiative designed to transform how healthcare insurance is managed and delivered, especially in underserved and resource-limited regions. For it to reach its full potential, effective performance testing is required.
Welcome aboard our voyage through the intriguing world of performance auditing, as we explore the nuances of OpenIMIS, a project that perfectly captures the dynamic nature of healthcare information management. This blog post is your helpful manual for understanding and succeeding in the field of performance testing.
What is Performance Testing?
Performance testing is a crucial stage in the creation and implementation of software, particularly for systems where scalability, speed, and responsiveness are key. It is a collection of tests designed to assess a software application’s overall performance, stability, and responsiveness under various scenarios.
Why is Performance Testing important?
Performance testing is crucial for several reasons:
- Enhancing User Experience: In the current digital era, users anticipate quick and easy interactions. By ensuring that apps can live up to these expectations, performance testing improves user retention and satisfaction.
- Finding bottlenecks: It assists in locating performance bottlenecks, such as resource limitations or sluggish response times, allowing teams to proactively fix them.
- Evaluation of Scalability: Performance testing is essential for expanding organizations since it reveals how well an application can adapt to growing loads.
- Stability and Reliability: It guarantees that an application maintains stability even in the face of high usage or stress, lowering the possibility of crashes or outages.
Types of tests
Performance testing includes various types of tests, each serving a specific purpose:
- Load Testing: Evaluates how a system performs under expected load conditions, such as the number of concurrent users or transactions.
- Stress Testing: Pushes the system beyond its expected load limits to determine when and how it will fail.
- Scalability Testing: Assesses an application’s ability to scale up or down in response to changes in load.
- Endurance Testing: Focuses on how the system performs over an extended period to identify potential issues like memory leaks or performance degradation.
- Volume Testing: Checks how well the system handles large volumes of data, ensuring data storage and retrieval efficiency.
- Spike Testing: Evaluates how a system copes with sudden and extreme spikes in traffic, such as during promotions or major events.
- Failover Testing: Tests the system’s ability to switch to a backup or redundant system in case of a failure, ensuring continuous service availability.
Software applications must meet performance standards and provide a seamless user experience. Performance testing is a critical component of the quality assurance process.
Performance testing is essential to the OpenIMIS project because it guarantees the healthcare information management system’s dependability and efficiency, even in settings with limited resources. The importance of thorough performance testing is clearly illustrated, for example, by the Nepali OpenIMIS instance, which handles 25,000 claims daily.
Available tools and why JMeter
There are many different tools on the market for performance testing, each with unique advantages and features. The tool you select can greatly affect the efficiency of your testing efforts. This section will examine a few of the available tools and explain why Apache JMeter is the recommended option for the OpenIMIS project’s performance audit.
Available Performance Testing Tools
Apache JMeter is an open-source, Java-based tool designed for load testing, performance testing, and functional testing. It is versatile, extensible, and well-known for its ability to simulate various scenarios and test a wide range of applications.
LoadRunner is a performance testing tool from Micro Focus. It provides a comprehensive suite of testing capabilities for web and mobile applications, client-server applications, and more.
Gatling is an open-source, highly efficient load testing tool built on the Akka framework. It’s popular for its scripting in Scala and can easily simulate thousands of users.
Locust is an open-source load testing tool designed for simplicity and flexibility. It allows testers to write test scenarios using Python, making it a preferred choice for those comfortable with scripting.
Why JMeter for OpenIMIS?
JMeter’s open-source nature is a major factor in our decision to select it as the best tool for our performance audit of OpenIMIS. Here is why:
Open Source Accessibility: JMeter is open-source and completely free to use. This accessibility removes financial barriers and puts it within reach of a wide spectrum of users.
Global Usage: JMeter’s open-source philosophy has facilitated its broad acceptance in a variety of industries. Because of its vibrant user community, which has produced a wealth of documentation, plugins, and support materials, testers, developers, and businesses of all sizes can make good use of its features.
Democratizing Performance Testing: We are democratizing performance testing for OpenIMIS by choosing JMeter. Because JMeter is open source, anyone can access and run our test scripts regardless of resources, which adds to the project’s dependability and transparency.
Versatile and Adaptable: JMeter is an excellent choice for a project like OpenIMIS due to its versatility and adaptability, which extend beyond its benefits as an open-source tool. It can accommodate the complexity of healthcare information systems by simulating a range of scenarios, protocols, and technologies.
Easy-to-use Scripting: JMeter meets the needs of both experienced and novice performance testers. Experienced testers can refine scenarios with its scripting features, while those who prefer a simpler method can record test scripts instead of writing them by hand.
Customizability: JMeter’s extensibility through custom plugins and functions makes it a powerful tool for creating tests tailored to individual project requirements.
Cross-Platform Compatibility: JMeter can run on a variety of operating systems and is not tied to any one platform. This cross-platform compatibility gives flexibility in choosing the test execution environment.
JMeter proves to be the best option in the OpenIMIS project setting, where cost-effectiveness, openness, and accessibility are crucial. Anyone can take part in our performance tests because it is open-source, which promotes cooperation and highlights the project’s dedication to transparency.
We will explore the practical aspects of using JMeter to perform thorough performance testing on OpenIMIS in the following sections.
Writing a Script in JMeter
General information about writing a JMeter script
1. Test Plan:
A JMeter Test Plan guides your performance testing efforts. It covers the whole testing process and includes components such as thread groups, controllers, listeners, and timers. By following the test plan, which describes the structure and execution flow, you can simulate different situations and interactions with the target system.
2. Thread Group:
The Thread Group, which specifies the number of users (threads) and the duration of the test, is the foundation of a JMeter test plan. Parameters such as ramp-up time and loop count dictate how users are introduced to the test over time. Thread groups imitate concurrent users accessing the system, making it possible to simulate real user loads and evaluate how well the system performs under various stress conditions.
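As a rough sketch, a Thread Group in a saved .jmx file looks something like the simplified fragment below. The numbers are illustrative, not values from the OpenIMIS tests, and a real .jmx file carries additional attributes:

```xml
<!-- Simplified, illustrative JMeter Thread Group fragment:
     50 concurrent users started over a 30-second ramp-up,
     each executing the plan once. -->
<ThreadGroup testname="OpenIMIS users">
  <stringProp name="ThreadGroup.num_threads">50</stringProp>
  <stringProp name="ThreadGroup.ramp_time">30</stringProp>
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
    <stringProp name="LoopController.loops">1</stringProp>
  </elementProp>
</ThreadGroup>
```

In practice these values are set through the JMeter GUI rather than edited by hand.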
3. Samplers:
In a JMeter test plan, samplers are the main actors that create requests to the target server. Samplers mimic a variety of user actions, including HTTP requests, FTP transfers, and JDBC database queries. Because JMeter comes with a wide range of built-in samplers, it can be used to test various applications and protocols. Furthermore, bespoke samplers can be included to fulfill certain testing needs.
4. Listeners:
In JMeter, listeners are essential to the display and interpretation of test data. These parts gather and present the data produced when running tests. The Summary Report, which offers a tabular summary of important metrics, and the Results Tree, which offers a tree-like display of sample results, are examples of common listeners. Listeners provide valuable insights into the functioning of the system being tested, assisting in the identification of errors, bottlenecks, and general responsiveness.
OpenIMIS Test Plan
Careful preparation is the most crucial element in making sure that the performance testing procedure is successful. This is the stage where you create the framework for your test script and define the goals you want to accomplish. Below is a preview of our QA team’s proposal for the OpenIMIS project. By carefully organizing and specifying the parameters of our tests, we can identify possible problems and enhance the healthcare information system, increasing its dependability and effectiveness.
‘Get paginated’ claims query
- Description: Test that checks submitting a claim review
- Prerequisites: The claim is in “checked” status and “selected for review”; the user is on the claim review overview page
- Steps: Run the query, or the user loads the claims list on the frontend
- Data Inputs: –
- Expected Outcomes: User is taken back to the claim overview page and the review status of that claim is ‘delivered’
- Metrics to Monitor: Response time and resource utilization
- Exit Criteria: Response time is below 5 sec and resource utilization is below 90%
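To make the exit criteria concrete, here is a small illustrative sketch in plain Python (not part of any OpenIMIS tooling) of how such thresholds can be evaluated against sampled measurements:

```python
# Illustrative sketch: evaluating the plan's exit criteria against
# sampled measurements. The thresholds mirror the test plan above:
# response time below 5 seconds, resource utilization below 90%.

RESPONSE_TIME_LIMIT_S = 5.0
UTILIZATION_LIMIT = 0.90

def exit_criteria_met(response_times_s, peak_utilization):
    """Return True when every sampled response time and the peak
    resource utilization stay within the plan's thresholds."""
    return (max(response_times_s) < RESPONSE_TIME_LIMIT_S
            and peak_utilization < UTILIZATION_LIMIT)

print(exit_criteria_met([1.2, 0.8, 3.4], 0.72))  # all samples within limits
print(exit_criteria_met([1.2, 5.2], 0.72))       # one sample too slow
```

In JMeter itself, the response-time half of this check is expressed through assertions and timers, as shown later in the script walkthrough.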
Search for an insuree
Adding insuree to existing family
Search for family
Add new family
In the picture below you can see this very plan, smoothly converted into a JMeter script. This conversion serves as a vital link between creating an organized testing strategy and actually carrying out performance tests. We want to show how the complexity of the testing scenarios used in the OpenIMIS project – specifically, the “Claim Review” process – is represented in the JMeter script. The translation not only makes the testing strategy a reality, but also gives testers and developers the ability to replicate real-world situations, evaluate system responsiveness, and precisely estimate resource consumption. It is a visual depiction of how planning becomes a series of actionable steps, opening the door to a thorough and perceptive performance testing effort within the OpenIMIS project.
Claim Review – A Closer Look
“Claim Review” is a crucial stage in the healthcare claims lifecycle within the OpenIMIS initiative. It entails processing and verifying claims and ensuring their accuracy. The claim review process can be complicated and resource-intensive due to the complexity of healthcare systems, which makes it an interesting subject for performance testing.
A testing plan must serve as the foundation for an organized process when writing a JMeter script for the “Claim Review” procedure. To direct our scripting efforts, we will take the following key elements from the strategy and apply them to the “Claim Review” process:
- Claim Review (Simple Controller):
Its only purpose is to clarify that the following script is specific to a defined test case, enhancing readability and understanding.
- Get claim ready to be reviewed (GraphQL HTTP Request):
To facilitate the delivery of a review, it’s crucial to obtain the UUID of a claim whose status allows for the review to be finalized. To achieve this, we construct a tailored HTTP request, incorporating the appropriate filtering options. By sending this request, we aim to retrieve a claim that meets the specified criteria. The result of this request is expected to yield a claim UUID that is eligible for the review delivery process.
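A hedged sketch of what such a request can look like follows; the field and filter names are illustrative and may not match the exact OpenIMIS GraphQL schema:

```graphql
# Illustrative query: fetch one claim whose status allows a review
# to be delivered. Filter and field names are hypothetical.
query {
  claims(status: "checked", reviewStatus: "selected", first: 1) {
    edges {
      node {
        uuid
      }
    }
  }
}
```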
- JSON Extractor (JSON Extractor):
To capture and utilize the UUID of the fetched claim from the HTTP request response, we employ a JSON Extractor. This component is instrumental in parsing the response and extracting the specific data we require. By configuring the JSON Extractor appropriately, we can isolate and store the extracted UUID in a designated variable. This variable then becomes accessible for subsequent steps in the testing script, facilitating seamless integration of the fetched claim’s UUID into our performance testing workflow.
The response body of the previous HTTP request has the following pattern:
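The exact payload depends on the OpenIMIS GraphQL schema; an illustrative response with a relay-style shape might look like this, with `<claim-uuid>` standing in for the actual value:

```json
{
  "data": {
    "claims": {
      "edges": [
        { "node": { "uuid": "<claim-uuid>" } }
      ]
    }
  }
}
```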
And here is the expression that gets us a desired UUID:
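Assuming a relay-style response in which claims are nested under `data.claims.edges`, the JSON Extractor’s JSON Path expression would look something like the following (the exact path depends on the actual response shape):

```
$.data.claims.edges[0].node.uuid
```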
- If uuidForReview (If Controller):
To account for the edge case where there might be no claim in the system eligible for a review, we implement an if statement. This conditional check ensures that if there is no UUID retrieved, we refrain from proceeding with the subsequent tests to prevent false “deliver review” responses. This strategic use of an if statement serves as a safeguard, allowing us to gracefully handle scenarios where the necessary data is not available and avoiding potential misinterpretations in the testing outcomes.
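In JMeter, such a guard is typically expressed as an If Controller condition on the extracted variable. A sketch, using the `uuidForReview` variable name from the controller’s label and JMeter’s built-in `__jexl3` function:

```
${__jexl3("${uuidForReview}" != "")}
```

The subsequent samplers run only when the extractor actually produced a UUID.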
- Deliver review for Claim (GraphQL HTTP Request):
Once we obtain the desired UUID, we can proceed to send a request to finalize the review for the corresponding claim. This involves submitting an HTTP request specifically crafted to trigger the delivery of the review.
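A hedged sketch of such a request follows; the real OpenIMIS mutation name and arguments may differ, so treat the shape below as purely illustrative:

```graphql
# Hypothetical mutation shape — the name and arguments are illustrative,
# not the exact OpenIMIS API. ${uuidForReview} is the extracted variable.
mutation {
  deliverClaimReview(uuid: "${uuidForReview}") {
    clientMutationId
  }
}
```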
- While Controller (While Controller):
Given OpenIMIS’s reliance on asynchronous communication with the server, directly assessing the exit criteria specified in the test plan, such as response time and resource utilization, becomes challenging. However, a practical approach is to inquire about the status of the claim with a specific UUID by continuously sending an HTTP request. This allows us to gauge the completion of the claim review process by checking if the claim status has transitioned to “review delivered.” Although we cannot directly monitor response time and resource utilization in this asynchronous setting, this method provides a viable means to track the progress and outcome of the claim review process, aligning with the dynamic nature of OpenIMIS’s server communication.
- Counter (Counter):
To monitor the progression of interactions, we implement a counter mechanism. As the counter increments with each interaction, we utilize a while-loop with a conditional statement that evaluates whether the counter value has reached a specified threshold, e.g. 10. Once the counter hits this threshold, the while-loop condition becomes false, enabling precise control over the number of iterations. This counter, in conjunction with a timer that we will elucidate further in the subsequent explanation, provides us with a robust mechanism for managing and regulating the execution frequency of the while loop.
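Putting the counter together with the status check, the While Controller’s condition can be sketched as follows. The variable names are illustrative: `reviewStatus` would come from the JSON Extractor described below, and `counter` from the Counter element:

```
${__jexl3("${reviewStatus}" != "DELIVERED" && ${counter} < 10)}
```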
- Get reviewed claim (GraphQL HTTP Request):
The sole purpose of this request is to check the status of a claim.
- JSON Extractor (JSON Extractor):
Similarly to the previous extractor, this section extracts an expression from the response body and stores it in a variable. The purpose here is to evaluate whether the status is “DELIVERED.” If the condition holds true, the while loop concludes, providing a controlled and efficient means to halt the loop once the desired status is achieved.
- Constant Timer (Constant Timer):
It introduces a delay of half a second during each iteration. Coupled with a counter set with a maximum value of 10, this setup provides assurance that the entire while loop will take a maximum of 5 seconds to complete.
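Expressed in plain Python (an illustrative stand-in, not JMeter code), the combined While Controller, Counter, and Constant Timer logic amounts to a bounded polling loop; `fetch_review_status` here is a hypothetical stand-in for the “Get reviewed claim” request:

```python
import time

# Illustrative sketch of the While Controller + Counter + Constant Timer
# logic: poll the claim status until it becomes DELIVERED, or give up
# after 10 iterations of 0.5 s each (a 5-second budget in total).

MAX_ITERATIONS = 10   # Counter threshold
DELAY_S = 0.5         # Constant Timer: half a second per iteration

def wait_for_review_delivered(fetch_review_status, sleep=time.sleep):
    """Poll the claim status until it is DELIVERED or the
    iteration budget (10 * 0.5 s = 5 s) is exhausted."""
    for _ in range(MAX_ITERATIONS):
        if fetch_review_status() == "DELIVERED":
            return True
        sleep(DELAY_S)
    return False
```

With the real half-second delay, ten failed checks consume the full five seconds before the loop gives up, which is exactly the exit criterion the test plan specifies.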
- Assert reviewStatus REVIEW DELIVERED (JSON Assertion):
This assertion checks whether the claim status change is reflected on the server within a 5-second timeframe. If none of the requests sent within this duration result in the expected status, the assertion fails. This failure indicates that the test did not meet the exit criteria, aligning with the specified conditions in the performance testing plan.
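In the script this is a JSON Assertion configured roughly as below; the JSON Path is illustrative and depends on the actual response shape:

```
Assert JSON Path exists : $.data.claim.reviewStatus   (path is illustrative)
Additionally assert value : checked
Expected Value : DELIVERED
```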
We’ll turn our attention to the actual implementation and analysis of performance testing scripts in our next blog article. This is a crucial step in guaranteeing the dependability and responsiveness of healthcare information systems like OpenIMIS.
As we plan and execute realistic scenarios, with a focus on load distribution and KPI monitoring, our objective is to thoroughly assess the system’s behavior under various conditions. In order to resolve any possible problems, we will examine raw data, evaluate performance metrics, validate against predetermined thresholds, and carry out root cause analysis throughout the analysis phase.
The iterative process of running and evaluating scripts is essential to maximizing system performance because it directs the creation of healthcare technology solutions that are robust, useful, and flexible enough to meet changing market demands.
Stay tuned for the next part, where we will unveil the results of our performance testing and conduct a comprehensive analysis. Join us as we dissect the performance metrics, identify key insights, and navigate the dynamic landscape of health technology through the lens of our testing endeavors.