
Ensuring Accuracy in Performance Testing

Applications may face performance issues in production even after rigorous performance validation. This is often caused by an improperly set up performance test environment or an inaccurate workload model. It is also a common issue across the industry that the load testing tool itself does not behave as expected during a test, so it is a best practice, and should be mandatory, to validate that the tool genuinely simulates the intended network traffic and that the test environment is accurate. Here is how a law from queueing theory can be applied to validate the accuracy of a performance test and to help ensure the application remains smoothly accessible in production without performance issues.

Little's Law

The long-term average number of customers in a stable system, L, is equal to the long-term average effective arrival rate, λ, multiplied by the average time a customer spends in the system, W. Expressed algebraically:

L = λ * W

Applying Little's Law in Performance Testing

The average number of (virtual) users N in the system (server) at any instant is equal to the product of the average throughput X and the average time a user spends in the system, i.e. the average response time Z plus the average think time R. Expressed algebraically:

N = X * (Z + R), where R = think time

Demonstrating Little's Law on Performance Test Results

From the results reported by the performance testing tool, we can use Little's Law to determine how many users were actually generated to test the application. A sample load test run against a sample application with 10 users produced the following results: average transactions/sec = 1.7, average transaction response time = 0.5 sec, average think time = 5 sec.

By Little's Law, the number of virtual users emulated by the performance testing tool is:

N = X * (Z + R) = 1.7 * (0.5 + 5) = 9.35 ≈ 10

So roughly 10 virtual users were emulated during the load test. If the Little's Law result matches the number of virtual users actually configured, then neither the tool nor the server experienced any problem. If the Little's Law result is less than the configured number of virtual users, the remaining users were idle throughout the test.
* The throughput figure above was taken from the testing tool, but it is preferable, and a best practice, to use throughput data measured on the server itself.
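The check described above can be sketched in a few lines of Python. The input figures (1.7 transactions/sec, 0.5 s response time, 5 s think time, 10 configured users) come from the sample test in the text; the function name is illustrative.

```python
def users_by_littles_law(throughput_tps, response_time_s, think_time_s):
    """N = X * (Z + R): average users implied by Little's Law."""
    return throughput_tps * (response_time_s + think_time_s)

configured_vusers = 10
n = users_by_littles_law(1.7, 0.5, 5.0)
print(f"Users implied by Little's Law: {n:.2f}")  # 9.35, roughly the 10 configured

# If the deduced figure is noticeably lower than the configured user
# count, some virtual users were likely idle during the test.
if round(n) < configured_vusers:
    print("Warning: some virtual users may have been idle.")
```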

Little's Law [Professor John D. C. Little] - Verifying Performance Test Results

Here is something I often use (and need to look up) when I'm analyzing performance test reports. Most of the time these reports are nebulous and I have no choice but to go back to the basics. Here's the simple procedure I use to verify the sanity of a test: run a step load test, then get the following data from your load testing tool, averaged per step:

1. Transactions per second
2. Hits per second
3. Think time
4. Response time
5. Number of virtual users being simulated
6. Throughput (bytes)

Here's where the math starts. Using Little's Law, you can deduce the following:

Actual number of users in the system = (Response time + Think time) * Transactions per second

You can also verify the page size:

Page size = Throughput / Hits per second

You can tell there is something wrong with your test bed if the actual number of users in the system given by Little's Law differs from the number of virtual users being pumped in by your load testing tool. For example, if the number of users simulated by the tool is greater than what Little's Law deduces, your system is probably just queuing those extra users. You will also notice the response time starting to 'knee' once this happens. The main thing is to perform this calculation on the averaged data points for each step change in the number of simulated virtual users.
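The per-step sanity check above could be sketched as follows. The function and the sample step figures are hypothetical, chosen only to illustrate the two formulas and the queuing comparison; a small tolerance of one user is assumed when flagging a mismatch.

```python
def verify_step(tps, response_time_s, think_time_s,
                simulated_vusers, throughput_bytes_per_s, hits_per_sec):
    """Apply both sanity checks from the text to one load-test step."""
    # Little's Law: users actually in the system during this step.
    deduced_users = tps * (response_time_s + think_time_s)
    # Page size check: bytes per second divided by pages per second.
    page_size = throughput_bytes_per_s / hits_per_sec
    # If the tool claims more users than Little's Law deduces,
    # the extras are probably just sitting in a queue.
    queuing_suspected = (simulated_vusers - deduced_users) > 1
    return deduced_users, page_size, queuing_suspected

# Hypothetical step: 50 simulated users, 8 tps, 1.25 s response time,
# 5 s think time, 400,000 bytes/s throughput, 20 hits/s.
users, page, queued = verify_step(8, 1.25, 5, 50, 400_000, 20)
print(f"deduced users={users:.1f}, page size={page:.0f} B, queuing={queued}")
```

Here the deduced user count (8 * 6.25 = 50) matches the simulated count, so no queuing is flagged; rerunning the check at each step of a step load test shows where the two figures start to diverge.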
