How to create an API automation framework for testing

As the need for API testing grows, an efficient test strategy is a major concern for testers. Learn about building a test framework and some alternative approaches.

Modern applications often rely on APIs, and organizations need a reliable API testing framework suited to their tech stack, workflows and testing philosophy.

Today's web and mobile software increasingly consists of front-end calls to middle-tier APIs. That makes the UI a thin wrapper, with the business logic inside the API, which calls the database. Testing directly at the API level can produce tests that cover more scenarios in less time, report fewer incorrect failures and need less maintenance over time.

The challenge is to make API tests easy to write, fast to run, available to people of different skill levels and broadly useful to many teams in larger organizations. To do that, organizations turn to frameworks designed to help them configure, set up and run sets of tests.

Learn the core components of an API testing framework and the basics of building one. Then, evaluate a few popular API testing tools' ability to interoperate in a homegrown API testing framework. Finally, consider alternative ways to look at and design an API test framework.

Components of an API test framework

Below are common components found in most API test frameworks; understanding them will guide organizations through implementation.

Test run

A test run consists of executing a test suite a single time, including all the associated test cases. The run produces output that includes a success or failure message, details on each test case and the checks it performed, pass/fail rates and runtimes. The framework will likely need to run from the command line or from the continuous integration server, which must be able to process that output.
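For example, output in the Test Anything Protocol (TAP) -- one open standard that continuous integration servers can parse -- might look like the following. This is an illustrative sample, not the output of any particular tool:

    1..3
    ok 1 - search returns 200 for a known product
    ok 2 - product detail includes the price field
    not ok 3 - checkout total matches the cart sum
    # failed 1 of 3 tests in 4.2 seconds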

Test environment


API tests run against at least one server, typically expressed by a URL. This determines where the test sends requests. That base might vary depending on the context or test environment. For example, test01.companyname.com and test02.companyname.com might represent different servers in different test environments. Server identifiers might also come as an IP address or port number instead of a full domain name. They could come from the command line or perhaps an environment variable.
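For instance, a Python-based framework might resolve the base URL in that order of precedence: command line first, then an environment variable, then a default. This is a minimal sketch; the flag and variable names are illustrative, not a standard:

    import argparse
    import os

    def resolve_base_url() -> str:
        # Command-line flag wins, then the environment, then a default.
        parser = argparse.ArgumentParser()
        parser.add_argument("--base-url",
                            help="server under test, e.g. https://test01.companyname.com")
        args, _ = parser.parse_known_args()
        return (args.base_url
                or os.environ.get("API_TEST_BASE_URL")
                or "https://test01.companyname.com")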

Test cases

For lack of a better term, an API test case is a set of commands, API calls and one or more expected results. These could be set up as a scenario, such as walking through the checkout process, or a set of calls to the same feature. For example, a test case might set up a single known data set of products and then have 50 different search requests with 50 different sets of expected results.
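A single data-driven case might then be no more than the call to make plus the result to expect. Here is one hypothetical representation as a Python dictionary; the field names are assumptions for illustration:

    # One search test case: the request to send and the result to expect.
    test_case = {
        "command": "GET",
        "path": "/api/products/search",
        "params": {"q": "blue widget"},
        "expected_status": 200,
        "expected_count": 3,  # count expected against the known seed data
    }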

Test suites

A test suite is a file that groups a collection of test cases. It's usually not necessary to run all tests, every time, across all teams; organizing cases into suites allows a programmer to quickly test just the changes to their own API.

Test execution engine

The test execution engine runs the test cases. This could be a part of the framework or an external script or tool that the framework triggers. Test cases might be in different formats for different types of API tests. For example, security tests and internationalization (i18n) might require specific formats, or perhaps teams prefer specific tools with particular formats. Allowing teams to "plug in" their preferred tools -- if the tool produces the standard output and runs on standard input -- can greatly increase the framework's adoption rate.
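A minimal sketch of that plug-in idea, assuming each team's tool can be launched as a command that reads a test file and signals pass or fail through its exit code. Newman is Postman's command-line runner; the security tool here is a placeholder:

    import subprocess

    # Map each test-case format to the external tool that executes it.
    EXECUTORS = {
        "postman": ["newman", "run"],              # Postman's CLI runner
        "security": ["security-scan", "--input"],  # placeholder tool
    }

    def run_case(fmt: str, test_file: str) -> bool:
        """Run one test file with its registered tool; True means pass."""
        result = subprocess.run(EXECUTORS[fmt] + [test_file])
        return result.returncode == 0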

Standard test data sets

Seed the database with known data to ensure predictable results in UI displays and search results. This setup typically includes creating users and accounts, along with password and secret management.
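A seed script can call the application's own APIs to load that known data before each run. A rough sketch, assuming hypothetical /api/users and /api/products endpoints and the third-party requests library:

    import os
    import requests

    BASE = os.environ.get("API_TEST_BASE_URL", "https://test01.companyname.com")

    def seed():
        # Create a known user; the password comes from the secret store
        # (here, an environment variable), never from the test files.
        requests.post(f"{BASE}/api/users", json={
            "name": "testuser01",
            "password": os.environ["TEST_USER_PASSWORD"],
        }).raise_for_status()
        # Load a fixed product catalog so search results are predictable.
        for product in [{"sku": "W-1", "name": "blue widget", "price": 9.99}]:
            requests.post(f"{BASE}/api/products", json=product).raise_for_status()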

Creating an API test framework

Below are some options for compiling these components into a complete framework.

API test cases look like computer code stored in a standard format. A test case might be a command, followed by parameters, followed by expected results. A test runner is a computer program that opens the file, loops through the test cases, executes each command with its parameters and compares the actual result to the expected result.

One way to store these is as tables in .csv files. Each row represents a test case, and columns represent inputs, expected outputs and data. The test runner reads the .csv file and executes each row as a test.
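A bare-bones runner for that layout might look like the following sketch. It assumes columns named method, path and expected_status, uses the third-party requests library and prints a TAP-style ok/not ok line per row:

    import csv
    import sys
    import requests

    def run_suite(base_url: str, csv_path: str) -> int:
        failures = 0
        with open(csv_path, newline="") as f:
            for i, row in enumerate(csv.DictReader(f), start=1):
                resp = requests.request(row["method"], base_url + row["path"])
                passed = resp.status_code == int(row["expected_status"])
                print(f"{'ok' if passed else 'not ok'} {i} - {row['method']} {row['path']}")
                failures += 0 if passed else 1
        return failures

    if __name__ == "__main__":
        # Usage: python runner.py https://test01.companyname.com tests.csv
        sys.exit(run_suite(sys.argv[1], sys.argv[2]))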

Another option is to store the tests as a table in a wiki -- an editable webpage -- along with the test suites. The framework accesses the wiki through an API, finds all the links and then calls the executor for each link on the suite page. FIT, the Framework for Integrated Test, is a dated but complete open source implementation of such an approach. However, FIT does not offer a great deal of API support. One way to add it is to extend FIT with API test fixtures written in Java or C# -- or to integrate it with existing API test tools.

Using existing tools can be a great way to speed up framework development. Those tools will need to store test cases as text files in version control, provide output the framework can convert into something standard and support the password/environment setup. Postman is a popular test tool for REST APIs and supports JavaScript test scripting. Exporting Postman tests to text and importing them into the framework adds an extra step, but the company offers a fee-based service that removes it. SoapUI is another popular, open source tool with broader support for more types of web APIs, while its commercial cousin, ReadyAPI, adds support for service virtualization.

Once the framework exists and the system runs under continuous integration, the next question is what tests to write. One place to start is with the APIs under active development, because the areas of the code with churn are the most likely to develop regression problems. The API tests can act as both a specification for how the software should work and a tripwire for unexpected changes.

Limitations of traditional API test frameworks

In the classic approach to API test frameworks discussed so far, each framework run executes a defined collection of independent tests against a blank test environment. By having a "proper" setup/teardown every time, traditional structured tests fail to find issues with the following:

  • Memory and extended run times. When tests always start fresh and run in isolation, soak testing -- where the system runs for an extended period to detect memory or performance issues -- generally isn't possible.
  • Multiple or different users. Traditional test frameworks generally focus on one user or session at a time, which does not facilitate load testing.
  • Repetitive operation. Stress and high-volume testing is difficult if tests only handle a select number of problems in isolation.
  • Randomized tests and model-based test automation. These are hard to execute when tests reset everything to a fixed state.

One classic example of these failures is an account signup that allows special characters while the login process strips them out. Both pieces of software work according to their specifications, yet no account created with those characters can log in. Edge cases like these are covered well by API testing but are unlikely to be caught in front-end GUI verification, which tests the happy path. Another example is hidden boundary conditions, which can only be discovered by using long strings, large numbers or precise numbers -- with many digits on both sides of the decimal point. These examples show why different testing strategies -- such as load, soak, stress/high volume and exploratory testing -- are important to include alongside standard tests in an API framework. They help find bugs that only appear when all the components work together over time.
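An API-level check for that first example is straightforward once the two features run in one flow. A sketch with hypothetical signup and login endpoints:

    import requests

    base = "https://test01.companyname.com"  # hypothetical test environment
    creds = {"user": "edge_case_01", "password": "p@$$w0rd!"}

    # Sign up with special characters, then log in with the same credentials.
    requests.post(f"{base}/api/signup", json=creds).raise_for_status()
    login = requests.post(f"{base}/api/login", json=creds)
    assert login.status_code == 200, "login rejected characters signup allowed"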

Alternative approaches to API frameworks

Which of these additional tests are covered is up to the quality team, but there are a few techniques to consider:

Skip setup and teardown between tests

It's possible to include a framework option to skip setup and teardown between tests or not to create a new user. A test suite in this situation is a long-running chain of events closer to real-world usage patterns.

One place to start is to group independent features -- ones that, in theory, should never touch each other -- into one extended test. In e-commerce, that could be running search, product details and checkout tests together. These features shouldn't interfere with one another, but chaining them more closely mimics a real-world user journey, as in the sketch below. Longer running sessions help surface memory issues and create load and stress testing opportunities.
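As a sketch, such a chained journey might reuse one session across features instead of resetting between them; the endpoints are hypothetical:

    import requests

    # One long-running session with no teardown between the three features.
    base = "https://test01.companyname.com"
    session = requests.Session()

    found = session.get(f"{base}/api/products/search", params={"q": "widget"}).json()
    detail = session.get(f"{base}/api/products/{found[0]['id']}").json()
    order = session.post(f"{base}/api/checkout", json={"sku": detail["sku"], "qty": 1})
    assert order.status_code == 200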

Introduce variables and randomized inputs

Variables and randomized inputs help expose edge cases and support randomized testing of complex conditions. Tracking what to expect could require variables because more dynamic tests don't work as well with hardcoded expectations. Once tests store and track variables, the framework can randomly vary inputs to those variables and calculate the expected results based on a formula.
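For example, a randomized check can generate the inputs and derive the expectation from the same formula rather than hardcoding it. A sketch with a hypothetical cart-total endpoint:

    import random
    import requests

    base = "https://test01.companyname.com"  # hypothetical test environment

    # Randomize the inputs, then compute the expected result by formula.
    qty = random.randint(1, 50)
    price = round(random.uniform(0.01, 9999.99), 2)
    expected_total = round(qty * price, 2)

    resp = requests.post(f"{base}/api/cart/total", json={"qty": qty, "price": price})
    assert resp.json()["total"] == expected_total, f"qty={qty} price={price}"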

Use model-based testing with random walks and backtracking

Model-based testing with backtracking helps find and diagnose bugs in complex conditions. Store the state of the application in a data structure, then take random walks -- random paths through the app -- comparing the API response to the expected state. A framework can automate many of those runs, logging each step and reporting the unexpected results. Then, rerun the model-based test backward from the last operation to find the shortest path to the error, effectively retracing the random walk's steps and making debugging easier.
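A compact sketch of the idea: keep a model of the expected state, take random steps against the API and log every step so a failing walk can be replayed. The cart operations and endpoints are illustrative:

    import random
    import requests

    base = "https://test01.companyname.com"  # hypothetical test environment
    model_count = 0  # what the model says the cart should hold
    log = []         # every step taken, so a failure can be retraced

    for step in range(100):
        op = random.choice(["add", "remove", "check"])
        log.append(op)
        if op == "add":
            requests.post(f"{base}/api/cart/items", json={"sku": "W-1"})
            model_count += 1
        elif op == "remove":
            if model_count > 0:
                requests.delete(f"{base}/api/cart/items/W-1")
                model_count -= 1
        else:  # compare the application's state to the model's
            actual = len(requests.get(f"{base}/api/cart/items").json())
            assert actual == model_count, f"step {step}, walk so far: {log}"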

Implementing an API framework

The simplest way to get started implementing the API test framework might be to look at existing tests and find a way to call them from the command line, with setup before and teardown after.

Put every test scenario or set of automated checks in version control as a separate file, then have a test suite file that points to what checks to run. Take the environment to run in -- probably a URL -- and the test suite to run from the command line. Make sure the output is something the continuous integration tool can digest -- TAP is one open source standard -- and create training and support for the team.
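Concretely, a suite file can be as simple as a list of test files in version control, with the run itself a single command the CI server invokes. The file layout and flags below are illustrative, not a standard:

    # checkout.suite -- one test file per line, stored in version control
    tests/search_basic.csv
    tests/product_detail.csv
    tests/checkout_flow.csv

    $ run_tests --base-url https://test01.companyname.com --suite checkout.suite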

Then, look at the extra problems the framework will not find and consider extending the framework to handle them.

Matt Heusser is managing director at Excelon Development, where he recruits, trains and conducts software testing and development.
