Integration Testing: Definitions, Types, and Best Practices

Brian
Jan 30, 2023 · 9 min read

What is integration testing?

Integration Testing is a type of software testing in which the units, modules, and components of a software are integrated and tested as a cohesive unit.

Essentially, integration testing operates at a higher level than unit testing. With unit testing, we only validate whether each software component functions well on its own. With integration testing, we want to see HOW those units work together and find out if there are code conflicts between them.

Integration Testing plays a crucial role in modern software testing as the development trend is gradually shifting towards cloud infrastructure, which calls for a higher volume of API testing to see if third-party integrations and APIs are interacting properly with the application itself.

Integration testing occurs after unit testing, and before system testing. The modules that have passed unit testing will be grouped into aggregates.

Why is integration testing important?

Software is often built from many individual software components or modules. These modules may pass unit testing and work perfectly fine individually, yet collapse when put together, for various reasons:

  • Inconsistent code logic: They are coded by different programmers whose logic and approach to development differ from each other, so when integrated, the modules cause functional or usability issues. Integration testing ensures that the code behind these components is aligned, resulting in a functioning application.
  • Shifting requirements: Clients change their requirements frequently. Modifying the code of 1 module to adapt to new requirements sometimes means changing its code logic, which affects the entire application. If unit testing can’t be repeated due to time constraints, integration testing is used to uncover the defects that would otherwise be missed.
  • Erroneous data: Data can change when transferred across developed modules. If not properly formatted when transferring, the data can’t be read and processed, resulting in bugs. Integration testing is required to pinpoint where the issue lies for troubleshooting.
  • Third-party services and API integrations: Since data can change when transferred, API and third-party services may receive false input and generate false responses. Integration testing ensures that these integrations can communicate well with each other.
  • Inadequate exception handling: Developers usually account for exceptions in their code, but sometimes they can’t fully see all of the exception scenarios until the modules are pieced together. Integration testing allows them to recognize those missing exception scenarios and make revisions.
  • External hardware interfaces: Bugs may arise when there is software-hardware incompatibility, which proper integration testing can readily uncover.
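The “erroneous data” failure mode above can be made concrete with a minimal Python sketch (the module and function names are invented for illustration): each module passes its own unit tests, but integration fails because one emits a date as a string while the other expects a date object.

```python
from datetime import date

# Module A: exports an order record. Its own unit tests pass.
def export_order(order_id):
    return {"id": order_id, "created": "2023-01-30"}  # date as a string

# Module B: computes order age in days. Its unit tests also pass,
# because they feed it a datetime.date directly.
def order_age_days(order, today=date(2023, 2, 10)):
    return (today - order["created"]).days

# Integration: combining the two modules raises TypeError,
# because subtracting a str from a date is undefined.
try:
    order_age_days(export_order(42))
    integrated_ok = True
except TypeError:
    integrated_ok = False

print(integrated_ok)  # False: the mismatch only surfaces at integration time
```

Each module is individually “correct” against its own assumptions; only an integration test that passes real output from one into the other exposes the format mismatch.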

There are numerous benefits to this practice:

  • Ensures that software modules and components work together in harmony
  • Catches system-level issues early in the development cycle
  • Improves confidence in the development cycle thanks to higher test reliability
  • Delivers higher code coverage and easier defect tracking

Types of integration testing

There are several strategies to use when performing integration testing, each of which has its own advantages and disadvantages, with the 2 most common approaches being:

Big Bang Approach: an approach in which all modules are integrated and tested at once, as a singular entity. The integration process is not carried out until all components have been successfully built and unit tested.

  • Advantage: suits smaller systems
  • Disadvantage: costly and time-consuming, as testers have to wait until all modules have been developed before testing can start

Incremental Approach: opposite to the Big Bang approach, the Incremental approach involves strategically selecting 2 or more modules with closely related logic to integrate and test. This process is repeated until all software modules have been integrated and tested. The advantage is that the application can be tested at an early stage.

Incremental Approach is further categorized into 2 different methods:

1. Top-down approach

Testers first integrate and test the modules from the top layer, then sequentially integrate and test the lower-level modules until the entire application is tested. Underdeveloped or unavailable modules are substituted by “stubs”.

  • Advantages: architectural defects can be detected early
  • Disadvantages: it is hard to simulate different test environment conditions in stubs

For example, consider the modules required to complete an eCommerce purchase:

  • Module L: Login action
  • Module O: Order action
  • Stub OS: Order Summary (yet to be developed)
  • Module P: Payment Action
  • Module PC: Payment By Cash option
  • Stub DP: Debit Card Payment (yet to be developed)
  • Stub EP: ePayment (yet to be developed)
  • Stub R: Report (yet to be developed)

If we use the Top-down approach, we need 3 test cases:

  • Test Case 1: Integrate and test Module L and Module O
  • Test Case 2: Integrate and test Module L, O, and P
  • Test Case 3: Integrate and test Module L, O, P, and R
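In code, a stub such as the payment stubs above is just a minimal stand-in that returns canned answers, so the upper layers can be exercised before the lower ones exist. A sketch in Python (the class and function names are invented for illustration, not taken from any real system):

```python
# The real ePayment module is not built yet, so the test substitutes
# a stub that returns a fixed, known-good response.
class EPaymentStub:
    def charge(self, amount):
        return {"status": "approved", "amount": amount}  # canned answer

# Upper-level module under test: the payment action depending on ePayment.
def process_payment(payment_service, amount):
    result = payment_service.charge(amount)
    return "confirmed" if result["status"] == "approved" else "rejected"

# Top-down integration test: real payment logic, stubbed ePayment layer.
outcome = process_payment(EPaymentStub(), 99.95)
print(outcome)  # confirmed
```

When the real ePayment module is ready, it replaces the stub and the same test exercises the genuine integration.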

2. Bottom-up approach

Testers integrate and test the modules from the bottom layer first, then sequentially integrate and test higher and higher-level modules until the entire application has been tested. Underdeveloped or unavailable modules are replaced with “drivers”.

  • Advantages: can be used for applications with bottom-up design models.
  • Disadvantages: drivers are harder to code than stubs, and there is no working application until you have built the last module.
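A driver is the mirror image of a stub: a small piece of test code that calls into a finished lower-level module because the real caller does not exist yet. A hedged Python sketch (the names are illustrative):

```python
# Lower-level module: already built and unit tested.
def calculate_tax(subtotal, rate=0.1):
    return round(subtotal * rate, 2)

# Driver: stands in for the not-yet-written checkout module by calling
# the lower-level module the way the real caller eventually will.
def checkout_driver(cart_items):
    subtotal = sum(cart_items)
    return subtotal + calculate_tax(subtotal)

total = checkout_driver([10.00, 5.00])
print(total)  # 16.5
```

The driver must reproduce the calling conventions of the future module, which is why drivers are generally harder to write than stubs.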

Both of these methods inherit the key advantage of the Incremental Approach: because modules are integrated one step at a time, interface errors can be located quickly and accurately.

There is also Hybrid integration testing or Sandwich integration testing, which is a combination of Top-down and Bottom-up approaches. The goal is to utilize both approaches, but this approach requires developers to develop both stubs and drivers.

Integration testing common mistakes

Integration testing plays an irreplaceable role in any test plan, but pitfalls are common and can be costly if fallen into. If QA teams fail to adopt integration testing best practices, their integration tests are likely to be slow and hard to maintain. Some common mistakes when performing integration testing include:

1. Testing negative scenarios in integration tests

Negative scenario testing is a crucial part of assessing any software product, as it validates how the system behaves when a user provides invalid or unexpected input. Because such input is rare, testers often ignore it entirely and focus on the “happy path”.

By testing negative scenarios, testers can determine the conditions in which applications can crash, increase test coverage, and ensure that there is sufficient error validation present in the software.

However, it is a huge mistake to include negative scenarios in integration testing. Negative test cases require a lot of setup, including complex data fixtures or application configuration. This greatly inflates the time of test creation.

In addition, to perform negative scenario testing, QA teams have to create custom stubs and drivers, which need to be maintained as code changes occur, resulting in test instability and flakiness.

Instead, use integration testing for positive scenarios, which have clear input conditions. Error handling should be tested using unit tests, which are much faster to run.
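The division of labor can be sketched as follows (the `parse_amount` function is invented for illustration): the cheap negative check stays in a unit test, while the clear-input positive path is what an integration test would later exercise together with other modules.

```python
def parse_amount(raw):
    # Error handling lives inside the unit: reject bad input early.
    cleaned = raw.strip()
    if not cleaned.replace(".", "", 1).isdigit():
        raise ValueError("not a number: %r" % raw)
    return float(cleaned)

# Negative scenario: no fixtures, no stubs, runs instantly as a unit test.
def test_rejects_garbage():
    try:
        parse_amount("twelve")
    except ValueError:
        return True
    return False

# Positive scenario with clear input: the kind of path the slower
# integration test would exercise end to end with other modules.
def test_accepts_decimal():
    return parse_amount(" 12.50 ") == 12.5

print(test_rejects_garbage(), test_accepts_decimal())  # True True
```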

2. Attempting to test all scenarios with integration tests

Although it may seem straightforward and intuitive that we want to include all possible scenarios in our integration testing project, it is actually not a good idea to follow this practice, for various reasons:

  • Integration testing tends to be slower and more complicated than unit testing as it requires more layers of access, including database access, file system access, or other third-party APIs.
  • Integration testing may also include more complex data fixtures or specific system configuration to run, which vastly increases the time and effort required.
  • Error handling and retry logic should instead be tested using unit or component testing.

3. Over-reliance on existing data

Some QA professionals make the mistake of assuming that the application’s database already contains the data needed for testing. However, data is susceptible to change, especially when developers and other testers have access to the test environment. This can make your test results unreliable.

Instead, it is highly recommended that testers should prepare all of the data they require for the test beforehand. After the test has been completed, they can remove the data so that it would not interfere with future results.
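One common way to follow this advice is a setup/teardown pair that seeds exactly the rows a test needs and deletes them afterwards. A minimal sketch using an in-memory SQLite database (the table and column names are hypothetical):

```python
import sqlite3

def setup_test_data(conn):
    # Seed exactly the rows this test needs; never assume they exist.
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'test-user')")

def teardown_test_data(conn):
    # Remove the seeded rows so later tests start from a clean slate.
    conn.execute("DELETE FROM users WHERE id = 1")

conn = sqlite3.connect(":memory:")
setup_test_data(conn)
row = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()
teardown_test_data(conn)
leftover = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(row[0], leftover)  # test-user 0
```

The same pattern applies to real databases: seed in setup, verify, and clean up in teardown so no run depends on what a previous run left behind.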

4. Not running integration tests as part of the CI/CD process

As release cycles grow shorter, it is unproductive to wait until a feature is finished to run its tests. Instead, execute integration tests as part of the continuous integration process, so that issues in the software surface immediately and can be addressed right away.

5. Not using automated testing for complicated test scenarios

Manual testing is tedious and prone to error, especially for highly repetitive or complicated test cases. Automated testing brings greater accuracy and saves tremendous time in those cases, allowing teams to focus on critical problems and debugging.

Make sure that you run your test automation in an environment specifically configured to closely emulate production, for true-to-life testing results.

Integration tests best practices

1. Design independent tests

The result of 1 test should not affect the outcome of another, and all of the data and resources needed to run a test (configuration files, databases, environment variables) should be set up by the test itself. This makes tests more reliable, as dependence on external resources can cause unexpected behavior when those resources change.
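One way to enforce this independence is to have every test build its own fresh fixture rather than share module-level mutable state. A sketch (the cart example is invented):

```python
def make_cart():
    # Each test builds its own cart: no shared mutable state between tests.
    return {"items": [], "config": {"currency": "USD"}}

def test_add_item():
    cart = make_cart()
    cart["items"].append("book")
    return len(cart["items"]) == 1

def test_empty_cart():
    # Would fail if it shared the cart mutated by test_add_item above.
    cart = make_cart()
    return len(cart["items"]) == 0

print(test_add_item(), test_empty_cart())  # True True
```

Because each test constructs its own state, the two can run in any order, or in parallel, with identical results.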

2. Determine an integration test strategy carefully before executing

If you decide to use the incremental integration approach for integration testing, it is crucial to study the system carefully to devise an integration strategy:

  • Understand the Architecture design of the application and identify the critical modules
  • Depending on whether you follow the top-down or bottom-up approach, you can segment high priority modules
  • Work with developers and relevant stakeholders to identify the requirements (what are the features to be tested, what modules are involved in the tests, what system requirements and test data are required to run the test, etc.)

3. Identify the “stubs” and “drivers” to prepare and maintain

Continuous integration testing requires QA teams to test features as soon as possible for fast feedback, but certain modules needed for testing may not always be available. In such cases, QA teams need to create “stubs” and “drivers”, which are essentially substitute modules standing in for unavailable ones.

By preparing such stubs and drivers in advance, testers are able to execute integration tests immediately. It is advisable to maintain these mock modules, especially ones with high reusability, for future testing.

4. Validate data integrity between systems

Data can be lost, changed in format, or compromised when transferred across modules and systems; data integrity is the assurance that none of this has happened in a way that could influence test results.

To achieve this, testers should establish a baseline of data for each system, recording the original values. After completing the integration tests, the new values can be compared with the baseline to identify discrepancies and launch an investigation. This process can be fully automated.
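The baseline comparison described above can be automated with a simple diff of per-record checksums taken before and after the test run. A sketch (the record layout is hypothetical):

```python
import hashlib
import json

def snapshot(records):
    # Hash each record so any field-level change is detected cheaply.
    return {r["id"]: hashlib.sha256(
        json.dumps(r, sort_keys=True).encode()).hexdigest() for r in records}

# Baseline taken before the integration tests run.
baseline = snapshot([{"id": 1, "total": 100}, {"id": 2, "total": 250}])

# After the test run, the same records are snapshotted again;
# record 2 was (unexpectedly) modified in transit.
after = snapshot([{"id": 1, "total": 100}, {"id": 2, "total": 249}])

changed = [rid for rid in baseline if baseline[rid] != after.get(rid)]
print(changed)  # [2]
```

Any id that appears in `changed` points directly at the record whose integrity was violated, narrowing the investigation.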

How To Perform Integration Testing With a Software Management Platform

It is common practice to leverage automation testing tools to automate testing activities and improve efficiency, and integration testing also benefits from such tools. However, integration testing involves several different types of testing, so a single tool often can’t meet every testing demand, forcing testers to constantly switch from 1 tool to another.

With a Software Management Platform, it is easier to design test scripts for various purposes. All of your web, mobile, desktop, API tests can be created and managed in 1 space.


Effortlessly execute tests and generate insights

After that, you can easily execute those tests on any browsers and devices you want. Thanks to its cloud testing capabilities, Katalon gives you access to a wide range of popular devices and platforms for conducting your tests, with no need to invest in physical devices. Analytics from your test runs are recorded in detail for future reference.

AI-powered test maintenance

When code changes occur, testers need to revise their test scripts. If code changes happen regularly, your automation scripts are no longer truly “automated”. Katalon offers self-healing capabilities that store locators across tests in an object repository; as changes occur, related locators and artifacts are already grouped together, making updates easy.


Brian

Testing evangelist. Automation advocate. Selenium committer. Experienced with open source, UFT, Katalon, and others.