Should We Automate API Testing?

Automating functional tests that cover API endpoints only makes sense when the API is the product we deliver.

If the API is used only by a front-end application, and that application is the product of our work, then automating functional tests against the API endpoints can be an anti-pattern and a waste of time.

Let's see why.

 

Case #1: When API Is the Product

I call an API the product of our work when we (i.e., the development team) deliver an API that other systems (developed by other teams) consume.

Many times, I've seen teams avoid or abandon automation of their API endpoints. Sometimes the excuse is that there is no front-end system they own, and thus nothing to automate or worry about. Sometimes it gets as ridiculous as saying, "let the other teams worry about it."

When this happens, the quality of the API drops or is considered mediocre. Dependent teams have to deal with defects popping up in the API from time to time and keep informing the upstream team about these incidents. Eventually, the downstream team starts believing that the API developers are not qualified enough to produce good-quality software. And that's probably not far from the truth.

 

When we deliver an API as a product, we need to treat it as our most important asset, the one our customers use, just as we would treat a UI application. The downstream team is our customer and should not suffer because of our incompetence. If it weren't for organizational structure or political decisions keeping them with us, they might well abandon the integration altogether.

Once we realize this, the next step is to develop a functional test suite that covers the API endpoints exposed to the dependent systems. Our team is responsible for running these test cases every time a new feature is developed, or whenever regression testing is necessary, such as for fixed-date releases.

If we need to run the test cases manually, there are plenty of tools at our service; SoapUI and Postman are among those I prefer so far.

When the number of test cases increases and running them manually every time becomes painful, it's time to automate the tests. All major development languages and tools support calling all kinds of APIs right out of the box, so you won't even need to worry about tooling. Sometimes you may want to utilize a testing framework for your language and IDE.
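For illustration, here is a minimal sketch of such an automated endpoint test in Java, using JUnit 5 and the JDK's built-in HttpClient. The base URL, the /menu endpoint, and the "items" field are hypothetical placeholders, not part of any real API described in this article.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class MenuApiFunctionalTest {

    // Hypothetical base URL of the API under test.
    private static final String BASE_URL = "https://api.example.com";

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void menuEndpointReturnsSuccessfulResponse() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/menu"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Functional expectation: the endpoint responds successfully and
        // returns the payload that dependent systems rely on.
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("\"items\""),
                "Response should contain the menu items collection");
    }
}
```

A suite of such tests grows naturally alongside the endpoints, and each test documents one expectation a consumer may hold against the API.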

Automated test cases should run as part of the CI/CD process (if their speed allows it), or at least as part of a delayed, scheduled run every once in a while.

Regardless of the method, this will serve as instrumentation for receiving feedback about broken API functionality before our dependent teams notice it.

 

As a next step of improvement, we should ask our dependent teams to write and automate tests that cover their basic expectations (Consumer-Driven Contracts). I don't mean asking them to fully cover our API's functionality; that's our job and the focus of this article. Instead, they should automate the expectations whose violation would mean that we are about to break them.
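As a rough sketch of what such a consumer-side expectation might look like (teams often use a dedicated contract-testing tool such as Pact for this), the test below asserts only the status code and the two fields the consumer actually reads. The endpoint URL and the field names are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class MenuApiConsumerContractTest {

    // Hypothetical URL of the provider's endpoint, as seen by this consumer.
    private static final String MENU_URL = "https://api.example.com/menu";

    @Test
    void providerStillSatisfiesOurMinimalExpectations() throws Exception {
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder().uri(URI.create(MENU_URL)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // The consumer checks only what it depends on: a successful status
        // and the two fields it reads from each menu item.
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("\"name\""));
        assertTrue(response.body().contains("\"price\""));
    }
}
```

Note how deliberately narrow this is: if the provider adds new fields, the contract still passes; it fails only when an expectation the consumer relies on is violated.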

Once we have this test suite automated, let's make sure to append it to our own automated test suite and run them all together.

 

Case #2: When API Is a Backend of a Product (UI Is the Product)

While I've seen teams produce APIs and not test them, I've also seen teams produce a UI and test the APIs that stand only behind that UI.

I'm talking about functional tests specifically. It's an absolutely valid approach to have integration tests running against an API even if only our own UI calls it.

The first thing that surprises me is that functional test cases are written for such APIs as if we had business expectations for them. We really don't; this is just an outcome of confusion. Our users, who work with the UI, don't even know the API exists, let alone that it's called from the UI code. We can't possibly say what the user expects from the API. You might argue that we can derive those expectations from the ones set for the UI, but that's not going to be very accurate, and the example at the end of this article shows why.

 

But do automate tests for the front-end application that consumes the API as a backend. The UI is the product of your work, and just as for any other product's automated testing, all the approaches described in the first part of this article apply.

So far, my favorite tool in this area is Selenium WebDriver.
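A minimal WebDriver sketch in Java might look like the following; the page URL and the "menu" element locator are hypothetical and would come from your own application.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class MenuPageUiTest {

    @Test
    void menuPageDisplaysItems() {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical page of the front-end application under test.
            driver.get("https://app.example.com/menu");

            // The business expectation lives at the UI level:
            // the user should see the restaurant menu on the screen.
            String menuText = driver.findElement(By.id("menu")).getText();
            assertFalse(menuText.isEmpty(), "Menu should be visible to the user");
        } finally {
            driver.quit();
        }
    }
}
```

Because the test drives a real browser, it exercises the API indirectly, the same way the user does, which is exactly the point of testing at the UI level.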

 

Example

A software development team has created an API endpoint that is consumed by the UI application. Specifically, the new endpoint retrieves restaurant menu information, and the UI fetches this information and displays it on the screen.

A team member ("tester") dedicated to automating functional tests wrote a test case which adds test menu information into the admin API and then retrieves that information back using this new API endpoint. If the information matches, then the test case is marked as "green".

Another tester wrote a test case that does the same thing, but without sending requests to the API endpoint directly. Instead, the data is entered and verified through the UI screens.

Both test cases passed. Somebody raised the point that the two would always have the same status, and asked why both should be maintained. Since the practice so far had been to write both kinds of tests, the testers decided to keep both in their test suite.

At some point in the future, the test database was accidentally dropped (deleted), and both of these test cases started failing. Since the practice was to log one bug per failing test case, two bugs were created and assigned to two different developers. After investigating the root cause for a couple of hours, both developers reached the same conclusion: something was wrong with the database. At the daily standup the next morning, both developers gave similar updates, and the Scrum Master asked how they could avoid duplicating effort next time. The decision was made to eliminate the test case for the API endpoint, since it seemed to provide exactly the same feedback as the test case targeting the UI screen.

Ideally, the eliminated test case would have been turned into an integration test case and added to the integration test suite. However, this particular development team did not have integration tests yet.

While they had spent only a couple of hours on this specific issue, the team realized that in other cases they could have needlessly spent much more time. That was discussed at a Retrospective as a lesson learned.

 

About Author and Content

The author of the above content is Tengiz Tutisani.
