Are there well-identified patterns for software scalability testing?

I've recently become quite interested in identifying patterns for software scalability testing. Due to the variable nature of different software solutions, it seems like there are as many good solutions to the problem of scalability testing software as there are to designing and implementing software. To me, that means we can probably distill some widely used patterns for this type of testing.

For the purposes of eliminating ambiguity, I'll say in advance that I'm using the Wikipedia definition of scalability testing.

I'm most interested in answers proposing specific pattern names with thorough descriptions.

asked May 29 '09 07:05 by Paul Morie


2 Answers

All the testing scenarios I am aware of use the same basic structure for the test, which involves generating a number of requests on one or more requesters targeted at the processing agent to be tested. Kurt's answer is an excellent example of this process. Generally you will run the tests to find some thresholds, and also run some alternative configurations (fewer nodes, different hardware, etc.) to build up accurate averaged data.

A requester can be a machine, network card, specific software or thread in software that generates the requests. All it does is generate a request that can be processed in some way.

A processing agent is the software, network card, machine that actually processes the request and returns a result.
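The requester/processing-agent structure described above can be sketched in a few lines. This is a minimal illustration, not a real tool: `processing_agent` is a hypothetical stand-in for whatever system is under test, and each requester is just a thread generating requests against it.

```python
import threading
import time

# Hypothetical processing agent: any callable that accepts a request
# and returns a result stands in for the real system under test.
def processing_agent(request):
    time.sleep(0.001)          # simulate a small amount of work
    return request * 2

def requester(n_requests, results):
    """One requester: generates requests and records the results."""
    for i in range(n_requests):
        results.append(processing_agent(i))

# Several requesters driving the same processing agent concurrently.
results = []
threads = [threading.Thread(target=requester, args=(50, results))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 4 requesters x 50 requests each = 200
```

In a real test the requesters would be separate machines or load-generation processes, but the shape is the same: generate requests, collect results, and then analyse those results according to the type of test below.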

However, what you do with the results determines the type of test you are doing. The main types are:

Load/Performance Testing: This is the most common one in use. The results are processed to see how much is handled at various load levels or in various configurations. Again, what Kurt is describing above is an example of this.
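As a rough sketch of "how much is processed at various levels", you can measure throughput while stepping up the number of concurrent requesters. The `processing_agent` here is again a hypothetical placeholder; in practice you would point this at the real system.

```python
import concurrent.futures
import time

def processing_agent(request):
    time.sleep(0.001)  # stand-in for real work (I/O, computation, ...)
    return request

def measure_throughput(n_requesters, n_requests=100):
    """Drive the agent with a pool of requesters; return requests/second."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_requesters) as pool:
        list(pool.map(processing_agent, range(n_requests)))
    elapsed = time.perf_counter() - start
    return n_requests / elapsed

# Step up the load level and watch where throughput stops improving.
for level in (1, 2, 4, 8):
    print(f"{level} requesters: {measure_throughput(level):.0f} req/s")
```

The point at which adding requesters no longer increases throughput is one of the thresholds you are trying to find.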

Balance Testing: A common practice in scaling is to use a load-balancing agent which directs requests to a processing agent. The setup is the same as for load testing, but the goal is to check the distribution of requests. In some scenarios you need to make sure that an even (or as close to even as is acceptable) balance of requests across processing agents is achieved, and in other scenarios you need to make sure that the processing agent that handled the first request from a specific requester handles all subsequent requests (web farms commonly need this; it is often called sticky sessions).
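Both balance checks can be sketched in a few lines. The round-robin balancer and the `sticky` routing function here are hypothetical simplifications; a real test would tally which backend actually served each request.

```python
import collections
import itertools

# Hypothetical round-robin balancer over three processing agents.
agents = ["agent-a", "agent-b", "agent-c"]
balancer = itertools.cycle(agents)

# Send 300 requests and count where each one landed.
hits = collections.Counter(next(balancer) for _ in range(300))
print(hits)

# Even-balance check: every agent got exactly its ideal share.
ideal = 300 // len(agents)
assert all(count == ideal for count in hits.values())

# Sticky-session check (the other scenario): requests from the same
# requester must always be routed to the same agent.
def sticky(requester_id):
    return agents[hash(requester_id) % len(agents)]

assert sticky("user-42") == sticky("user-42")
```

In a real run the balance is rarely perfectly even, so the assertion would instead check that no agent deviates from the ideal share by more than an acceptable tolerance.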

Data Safety: With this test the results are collected and the data is compared. What you are looking for here are locking issues (such as a SQL deadlock) that prevent writes, or confirmation that data changes are replicated to the various nodes or repositories you have in use within an acceptable time.
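A minimal data-safety check, under the assumption that the data store is represented here by a toy in-memory `Repository` class: hammer a shared record from several concurrent writers, then verify that no write was lost. Without the lock, two writers could read the same value and one update would silently disappear.

```python
import threading

# Hypothetical shared repository: a counter that several writers update
# concurrently, standing in for a row or record in a real data store.
class Repository:
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def increment(self):
        # Guarded read-modify-write; the lock prevents lost updates.
        with self.lock:
            self.value += 1

def writer(repo, n):
    for _ in range(n):
        repo.increment()

repo = Repository()
threads = [threading.Thread(target=writer, args=(repo, 10_000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Data-safety check: every one of the 40,000 writes survived.
print(repo.value)
```

Against a real database the same idea applies: run concurrent writers, then compare the collected results with what the store actually contains (and, for replication, with what each replica contains after the acceptable delay).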

Boundary Testing: This is similar to load testing, except the goal is not processing performance but how the amount of data stored affects performance. For example, if you have a database, how many rows/tables/columns can you have before I/O performance drops below acceptable levels?
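The database example can be sketched with an in-memory SQLite table: grow the table in steps and time the same lookup at each size. The sizes and schema here are arbitrary choices for illustration.

```python
import sqlite3
import time

# Boundary sketch: how does lookup time change as the table grows?
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")

timings = {}
for target in (1_000, 10_000, 100_000):
    # Top the table up to the target row count.
    rows = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
    conn.executemany("INSERT INTO items (payload) VALUES (?)",
                     (("x" * 50,) for _ in range(target - rows)))
    conn.commit()

    # Time a single indexed lookup at this table size.
    start = time.perf_counter()
    conn.execute("SELECT * FROM items WHERE id = ?", (target // 2,)).fetchone()
    timings[target] = time.perf_counter() - start

for size, t in timings.items():
    print(f"{size:>7} rows: {t * 1e6:.1f} us per indexed lookup")
```

In a real boundary test you would keep growing the data set until some measured latency crosses your acceptable threshold, and that size is your boundary.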

I would also recommend The Art of Capacity Planning as an excellent book on the subject.

answered Sep 28 '22 00:09 by Robert MacLean


I can add one more type of testing to Robert's list: soak testing. You pick a suitably heavy test load, and then run it for an extended period of time - if your performance tests usually last for an hour, run it overnight, all day, or all week. You monitor both correctness and performance. The idea is to detect any kind of problem which builds up slowly over time: things like memory leaks, packratting, occasional deadlocks, indices needing rebuilding, etc.
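A soak test can be sketched as a loop that keeps applying load until a deadline while periodically sampling a health metric. Everything here is a toy stand-in: `handle_request` deliberately leaks into a cache, the deadline is seconds rather than days, and `tracemalloc` samples stand in for real monitoring.

```python
import time
import tracemalloc

# Hypothetical component under soak: a cache with a slow leak,
# because entries are never evicted.
cache = {}

def handle_request(i):
    cache[i] = "payload" * 10   # leak: the cache grows without bound

tracemalloc.start()
samples = []                    # periodic memory measurements
deadline = time.monotonic() + 1.0   # a real soak runs hours or days
i = 0
while time.monotonic() < deadline:
    handle_request(i)
    i += 1
    if i % 10_000 == 0:
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)

# A steadily rising memory floor across samples signals a leak
# that a short performance test would never surface.
print(f"{len(samples)} samples, "
      f"{samples[0]} -> {samples[-1]} bytes traced")
```

The key property is the trend over time, not any single reading: a short load test and a soak test can apply the same load, but only the soak run is long enough for slow accumulation to become visible.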

This is a different kind of scalability, but it's important. When your system leaves the development shop and goes live, it doesn't just get bigger 'horizontally', by adding more load and more resources, but in the time dimension too: it's going to be running non-stop on the production machines for weeks, months or years, which it hasn't done in development.

answered Sep 28 '22 02:09 by Tom Anderson