I have found contradicting information: one source says JMeter can produce much more load than LR can, the other says the opposite. From what I know (if we do not consider licensing), each Load Generator is only limited by hardware. But so is JMeter. The documentation did not help me much. Does anyone have experience with both of these tools and can compare? I am speaking about 2,000-4,000 users. Thanks
LoadRunner is known to handle very high-volume tests well out of the box.
JMeter can typically hit issues with high-throughput, highly threaded tests in the following scenarios:
The thing is, it's not that hard to solve JMeter's problems; it's simply a matter of following best practice.
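As an illustration of the most common of those practices, here is a minimal sketch (not a prescribed setup; it assumes JMeter is on the PATH and your plan is saved as plan.jmx): run load tests in non-GUI mode and give the JVM a larger heap than the default, since the GUI and an undersized heap are the usual culprits at high thread counts:

```sh
# Run the test headless (non-GUI); the GUI is for test development, not load generation.
# JVM_ARGS is honoured by the jmeter launch scripts; the heap sizes here are illustrative.
JVM_ARGS="-Xms1g -Xmx4g" jmeter -n -t plan.jmx -l results.jtl
```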
You should read those two documents for other best practices:
LoadRunner also has issues at high load - the Analysis and data collation phases can take hours (literally) and you can't get around this. If you have too much data to analyze you can also run into memory issues. JMeter is not as comprehensive at results analysis, but it is much quicker.
If you really need high-volume tests then I wrote a script that effectively gives you infinite scalability with JMeter - I've tested it up to 20,000 users making 8,000 hits a second running over 50 servers. It's 'infinite' because it works by running lots of isolated tests that do not talk to each other until the end of the test; that way there is no bottleneck in compiling results. But there's always another bottleneck somewhere...
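To give a feel for how the "compile results at the end" step can work, here is a minimal sketch (not the script mentioned above; it assumes each generator writes a CSV-format .jtl results file under its own gen*/ directory, and that JMeter 3.0+ is available for its report-only mode):

```sh
# Merge the isolated result files, keeping a single header row,
# then build one aggregate HTML dashboard from the combined file.
head -n 1 gen1/results.jtl > combined.jtl
for f in gen*/results.jtl; do
  tail -n +2 "$f" >> combined.jtl     # append data rows, skip each file's header
done
jmeter -g combined.jtl -o combined-report   # report-only mode (JMeter 3.0+)
```

Because the generators never talk to each other during the run, the only shared work is this post-processing step.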
Both tools have track records at the level you note, 2,000-4,000 users. Where the rubber meets the road is the labor required to deliver test X at quality Y, including detailed analysis. If you are evaluating both tools, you should consider a proof of concept (POC) on your app.
Document your script and your desired level of analysis independently of either tool, and then hire an expert in both to run a POC against your requirements. Time all of the tasks, even to the point of asking people to record the start and end time of each task in your documentation. Compare both the times and the output at the end of the POC.
You should be aware that when you go to market for an expert in either tool, the level of outright fraud in claimed skills in the performance testing marketplace is on the order of 97% (or higher). You want to hire someone with the strongest and longest track record with the tool under consideration, with many references; otherwise you are likely to get a horribly distorted view of the capabilities and efficiencies of one or both tools, which could lead to a poor decision on tool choice.
Expect to hire skills you may not have in house for either tool. Many believe that the performance testing tool represents 85-90% of the skills required for a performance testing job. The inverse is actually true: tool skills make up only 10-15% of the (critical) skills required to be successful.