
Handling third-party API requests in End-to-End testing

I want to test my REST API with end-to-end tests. As I understand it, the difference from integration tests is that we don't configure the system in memory, but instead use a real test DB and real network requests.

But I can't understand how to handle third-party API requests (like the GitHub or Bitbucket API).

Is it normal practice to create a fake GitHub account with fake data that would be fetched by my tests?

And what should I do about access tokens? Not all services are public, and even public services can fail because of rate limits.

asked Oct 03 '18 by Enthusiastic Developer



2 Answers

Is it normal practice to create a fake GitHub account with fake data that would be fetched by my tests?

Yes. The purpose of an E2E test (vs. an integration test) is to verify that the full system works with all the real system components in place, both the ones you control and the ones you don't. This can be hard to set up and a pain to maintain, but many of those pain points expose real potential issues in your production service. How your service responds to that instability is itself a feature to be tested: does your system crash and burn, or does it gracefully present an error message and support good retry handling?

This also nets you a type of coverage that mocks cannot provide: if the third-party API you're using is naughty and introduces some sort of breaking change, your E2E tests will catch it. This is a decent reason to run your E2E suite continually, not just during deploys.

The next level of this sort of testing is chaos engineering where not only do you test your production systems, but you purposefully introduce faults (yes, into prod) in order to ensure that your service can really handle the pressure.

And what should I do about access tokens? Not all services are public, and even public services can fail because of rate limits.

Your staging environment should be configured with separate sandbox accounts for external services. I'm not sure what you mean by "not all services are public," but strive to keep your staging environment (or test users on prod) as close to a real prod user as possible. For services that don't support multiple access tokens, you can get creative and clearly delineate your test data within their system.
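As a minimal sketch of what that separation could look like, assuming environment-variable configuration (the variable names such as GITHUB_SANDBOX_TOKEN are illustrative, not something the answer prescribes):

```ruby
# Illustrative only: pick a sandbox credential outside of production so E2E
# runs never touch the real customer-facing account.
GITHUB_TOKEN =
  case ENV.fetch("APP_ENV", "development")
  when "production"
    ENV.fetch("GITHUB_TOKEN")            # real production account
  else
    ENV.fetch("GITHUB_SANDBOX_TOKEN")    # dedicated test/sandbox account
  end
```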

Rate limits can be annoying, but if you're getting close enough that your tests push you over the limit, then you should be pursuing a strategy to address that anyway (negotiating with the service, getting multiple accounts, ...).

answered Oct 20 '22 by George


Running your tests against 3rd party services can result in slow and flaky tests when the service is down or when network latency triggers testing timeouts. Not to mention that you run the risk of hitting API rate limits, depending on the 3rd party service you're calling. Your tests should ideally be deterministic: not failing randomly, and not needing conditional logic to handle errors within a particular test case. If you expect to need to handle errors, then there should be a specific test case covering those errors that runs in every build, rather than waiting for non-deterministic failures to come in from the 3rd party.
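For example, a dedicated error-path test might look like the following sketch. RSpec and WebMock are my assumptions here, as is the GithubClient class; the answer doesn't prescribe specific tools:

```ruby
require "webmock/rspec"

RSpec.describe "GitHub error handling" do
  it "degrades gracefully when GitHub returns a 500" do
    # Simulate the failure deterministically instead of waiting for a real,
    # flaky outage to surface it in CI.
    stub_request(:get, "https://api.github.com/repos/octocat/Hello-World")
      .to_return(status: 500)

    result = GithubClient.new.fetch_repo("octocat/Hello-World") # hypothetical client
    expect(result).to be_nil # or whatever "handled the error" means for your app
  end
end
```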

One argument people will make is that your tests should notify you if the 3rd party API breaks for one reason or another. Generally speaking, though, most major 3rd party APIs are extremely stable and unlikely to make breaking changes. Even if it does happen, this is an awkward and confusing way to find out that the API is broken, and in all likelihood your tests aren't going to be the first place you hear it from; more likely your customers and your production error tracker will notify you. If you want to track when these services change or go down, it makes more sense to have a regular production check of some sort to verify them.
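One way such a production check might look, sketched as a simple scheduled script; the endpoint and the alerting mechanism are placeholders, not anything the answer specifies:

```ruby
require "net/http"
require "uri"

# Run this on a schedule (cron, CI, etc.) against production configuration.
uri = URI("https://api.github.com/rate_limit") # any cheap endpoint of the 3rd party
response = Net::HTTP.get_response(uri)

unless response.is_a?(Net::HTTPSuccess)
  # Replace with your real alerting (pager, Slack, error tracker, ...).
  warn "Third-party check failed: #{response.code} from #{uri.host}"
  exit 1
end
```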

As for how to write tests around these situations, that's a little trickier. There are tools such as VCR in Ruby that intercept your language's network connections and let you stub out, record, and customize responses (there's a list of similar implementations in other languages further down in their readme). That doesn't work when your browser connects to those resources in automated end-to-end tests, though. There are tools that proxy your browser's web connection, such as Puffing Billy in Ruby, but setting one up is a pretty involved process, including managing security certificates. This seems pretty brittle and hard to debug when something isn't working quite right.
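As a rough illustration of the VCR approach mentioned above (the cassette name, the endpoint, and the GithubClient class are assumptions for the sake of the example):

```ruby
require "vcr"

VCR.configure do |config|
  config.cassette_library_dir = "spec/cassettes" # recorded responses live here
  config.hook_into :webmock                      # intercept HTTP at the library level
end

RSpec.describe "GitHub integration" do
  it "fetches a repository" do
    VCR.use_cassette("github_repo") do
      # The first run records the real response; later runs replay it deterministically.
      repo = GithubClient.new.fetch_repo("octocat/Hello-World")
      expect(repo["name"]).to eq("Hello-World")
    end
  end
end
```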

Your best bet for writing tests that are deterministic and maintainable may be to fake out the service in test mode. thoughtbot has a pretty decent video on this and here's a high-level article from CircleCI. Essentially, you swap in an adapter in test mode that stands in for your 3rd party service integration. Maybe what you can do on your local machine is make it possible to optionally use the real service or the adapter via an environment variable in order to verify that the tests run the same against both. You could also set up a daily build to run against the real thing so that it would verify that the tests still work alright without introducing a lot of flakiness to your more frequent builds. One issue I've run into, though, is that even if I set up a test account on that 3rd party service, the results will change over time as I add or modify information for the sake of testing new functionality, such as adding new repos, modifying issues, etc. It requires additional consideration for maintaining your test account as a set of fixtures for all of your tests.
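A sketch of that adapter idea, assuming an environment-variable switch; every name here (GithubAdapter, FakeGithubAdapter, USE_REAL_GITHUB) is hypothetical rather than taken from the video or article:

```ruby
require "net/http"
require "json"

class GithubAdapter
  # Talks to the real service.
  def fetch_repo(full_name)
    JSON.parse(Net::HTTP.get(URI("https://api.github.com/repos/#{full_name}")))
  end
end

class FakeGithubAdapter
  # Canned response that stands in for the real service in test mode.
  def fetch_repo(full_name)
    { "full_name" => full_name, "name" => full_name.split("/").last }
  end
end

# Flip between the real service and the fake locally to check that the tests
# behave the same against both implementations.
GITHUB = ENV["USE_REAL_GITHUB"] == "1" ? GithubAdapter.new : FakeGithubAdapter.new
```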

One additional tool I've come across that may be helpful is the likes of ngrok-tunnel (Ruby again). This is only relevant in cases where you need the 3rd party service to contact your app, since they can't send requests across the web to localhost:3000. If you've configured some sort of webhooks, services like this can make testing a lot more straightforward.
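For completeness, a rough sketch of how the ngrok-tunnel gem might be used for webhook testing; the port and webhook path are assumptions, so check the gem's README for the exact API:

```ruby
require "ngrok/tunnel"

# Open a public tunnel to the locally running app so the 3rd party can reach it.
public_url = Ngrok::Tunnel.start(port: 3000)
puts "Point the service's webhook at #{public_url}/webhooks/github"

# ... run the end-to-end scenario that should trigger the webhook ...

Ngrok::Tunnel.stop
```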

answered Oct 20 '22 by lobati