
Performance testing with external dependencies

When performance testing in the microservices world (talking mainly about load testing), what is your approach regarding external dependencies (APIs) that your application relies on but that are not owned or controlled by your team? In my case the external dependencies are owned by teams within the same company. Would you point to the corresponding "real" integration (non-prod) endpoints, or would you create stubs and mimic their response times in order to match production as closely as possible?

  • First approach example: a back-end API owned by your team calls an external API to verify a customer. Your team doesn't have control over the customer API, but you still point to its integration-testing endpoint when running the load test.
  • Second approach example: a back-end API owned by your team calls a stub that sends a static response and mimics the response time of the external customer API (see the sketch after this list).
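For the second approach, here is a minimal sketch of such a stub in Python, assuming the external customer API is a simple JSON-over-HTTP GET endpoint with a median latency of roughly 250 ms; the port, path, payload, and latency figures are illustrative placeholders you would replace with values measured against the real dependency.

```python
# Minimal stub of an external customer-verification API (hypothetical
# endpoint and payload). It returns a static response while mimicking
# the latency observed against the real service.
import json
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

OBSERVED_P50_SECONDS = 0.250    # assumed median latency; measure the real API
OBSERVED_JITTER_SECONDS = 0.05  # assumed spread around the median

class CustomerApiStub(BaseHTTPRequestHandler):
    def do_GET(self):
        # Sleep around the observed median so the stub's latency profile
        # is not unrealistically flat.
        time.sleep(max(0.0, random.gauss(OBSERVED_P50_SECONDS, OBSERVED_JITTER_SECONDS)))
        body = json.dumps({"customerId": "12345", "verified": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging so the stub itself doesn't become a bottleneck.
        pass

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), CustomerApiStub).serve_forever()
```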

I realise there are pros and cons to the two approaches, and one may be favoured over the other depending on the goals of the testing. But what is your preferred one? It needn't be a choice between the two mentioned above; it can be a completely different approach.

asked Dec 15 '19 by martbon

People also ask

Do we need separate environment for performance testing?

Isolation of a testing environment: it is important to be sure that no other activity inside the testing environment can adversely impact performance testing, because every test run is unique in its results.

Which environment is best for performance testing?

Performance tests are best conducted in test environments that are as close to the production systems as possible. Isolate the performance test environment from the environment used for quality assurance testing.

Can performance testing be done in production environment?

Measuring the performance of applications tested only in the production environment can be risky. To avoid costly performance issues, we recommend testing in the QA environment as well. Also, teams need to be on hand to react immediately, depending on the impact the test has on the production environment.


1 Answer

It is important to identify the system (or application) under test. If you are measuring the performance of only your own microservice, then you can consider stubbing as an option.

However, performance testing is typically done to assess the performance of the system as a whole. The intent is usually to emulate the latency of actual usage, and the only way to model this with reasonable accuracy is to avoid stubs and use the "real" integration endpoints. This approach has the additional advantage of helping you identify potential system-wide performance bottlenecks, such as chained synchronous calls between your microservices (service A calls B, B calls C, C calls D, and so on). The tests can also be reused for load testing.

In short, you would need to do both to ensure:

  1. A single microservice is performing within its SLA.
  2. The various microservices are performing within the SLA as a whole.
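Once both setups exist, the same load test can be pointed at either one. Below is a minimal sketch using Locust, a common Python load-testing tool; the /verify-customer path, customer id, and host are hypothetical placeholders for your own service's API.

```python
# Minimal Locust load test for the back-end service under test
# (hypothetical /verify-customer endpoint).
from locust import HttpUser, task, between

class VerifyCustomerUser(HttpUser):
    # Simulated think time between requests from each virtual user.
    wait_time = between(1, 3)

    @task
    def verify_customer(self):
        # The service under test, in turn, calls either the real
        # integration endpoint or the stub, depending on the test goal.
        self.client.get("/verify-customer/12345")
```

Running it twice with an identical load profile, e.g. `locust -f loadtest.py --host http://your-service:8080`, once with the service configured against the stub and once against the real integration endpoint, lets you separate your own service's latency from that contributed by its dependencies.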
answered Oct 03 '22 by Seth