Multi-cloud testing is the only way to ensure that a multi-cloud deployment achieves its goals under real-world conditions. This QA effort does not require many new or expensive tools or complex practices, but it does require a focus on certain aspects of the application development and deployment cycles.

Let’s take a look at how testing plays a role in a successful multi-cloud strategy.

Where multi-cloud complicates things

Four specific areas where development and deployment complexity can increase with multi-cloud are:

  • Implementation on multiple clouds. Although common features may be available in all clouds, their implementation usually varies at least slightly from cloud to cloud. Thus, different software versions will be required if developers need to prepare a single component for deployment in multiple clouds.
  • Widespread components. Application performance and quality of experience (QoE) vary depending on how the components of a workflow are distributed across clouds, making it difficult to monitor and assure performance.
  • Tuning deployments for each cloud. The same operating tools may not be available in all clouds. Similarly, developers may not use certain tools in the same way within each cloud. Therefore, team members may need to adjust deployments and redeployments differently in each cloud. Additionally, redeployment from one cloud to another may need to be validated for each possible cloud combination.
  • Ensuring connectivity. Connectivity between Internet users and different public clouds, and between public clouds and the on-premises data center, will vary. Accordingly, network addressing and traffic management practices will need to account for those differences.
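
The first complexity area can be made concrete with a small sketch. The point is that the same logical operation, "store an object," tends to need a slightly different implementation per cloud; the provider names are real, but every function, field, and endpoint below is a hypothetical stand-in rather than a real SDK call:

```python
def store_object_aws(bucket: str, key: str, data: bytes) -> dict:
    # Hypothetical AWS-style call: flat key namespace, region implied by bucket.
    return {"provider": "aws", "location": f"s3://{bucket}/{key}", "size": len(data)}

def store_object_azure(container: str, blob: str, data: bytes) -> dict:
    # Hypothetical Azure-style call: container + blob naming convention.
    return {"provider": "azure", "location": f"https://{container}.blob/{blob}", "size": len(data)}

# A dispatch table keeps the per-cloud variants separate but discoverable,
# which is exactly the "different software versions" problem in miniature.
STORE_BY_CLOUD = {"aws": store_object_aws, "azure": store_object_azure}

def store_object(cloud: str, *args) -> dict:
    return STORE_BY_CLOUD[cloud](*args)
```

Even in this toy form, each variant is a distinct piece of code that must be built, tested, and deployed on its own.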

Where to address multi-cloud risk

Any increase in complexity creates the risk of application performance or availability issues, security and compliance issues, and/or cloud cost overruns. These risks stem from two factors. The first is that multi-cloud may require the same logic to be deployed in more than one cloud, and developers may need to code that logic differently to accommodate cloud API differences, which multiplies the number of application pipelines. The second is the risk that the workflows between clouds in this configuration will vary in ways not intended or considered in application design. It is this variability that the multi-cloud testing strategy must address.

Team members can manage the risk of application proliferation during the development and unit test cycles. However, this type of multi-cloud testing requires a clear separation of software components that require customization per cloud. This separation ensures that code never moves into a cloud where services and features are not compatible. It also means that the development and testing process — right through to load testing — must treat each cloud-specific variant as a separate application or project, with its own development pipeline.
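The separation described above can be enforced mechanically. As a minimal sketch, assuming hypothetical component and service names, a promotion gate can refuse to move a build into any cloud that lacks a service the component depends on:

```python
# Which (hypothetical) cloud services each component variant requires.
REQUIRED_SERVICES = {
    "billing-core": set(),                    # cloud-neutral component
    "billing-aws-variant": {"aws-queue-v2"},  # depends on an AWS-only service
}

# Which services each cloud actually offers (illustrative values).
AVAILABLE_SERVICES = {
    "aws": {"aws-queue-v2", "object-store"},
    "gcp": {"object-store"},
}

def can_deploy(component: str, cloud: str) -> bool:
    """Allow promotion only if every required service exists in the target cloud."""
    return REQUIRED_SERVICES[component] <= AVAILABLE_SERVICES[cloud]
```

A gate like this is what keeps cloud-specific code from drifting into a cloud where its services and features are not compatible.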

With such a setup, the relationship between each cloud variant and the core component logic must be maintained for parallel maintenance to be effective. Some organizations will maintain a core set of code for each component and then branch it out to cloud-specific releases, to ensure that logic changes apply to all cloud releases in a coordinated fashion. When a logic change to a software component that previously did not require cloud-specific code creates a need for customization for one or more clouds, developers then fork the single base version for each cloud that needs the adaptation. Documentation of all of this is essential.
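
One lightweight way to keep that core-to-variant relationship visible is to track version lineage explicitly. This is a sketch under an assumed versioning scheme (core version plus a `+cloud.N` suffix); the component names are hypothetical:

```python
# Map each component's core version to its cloud-specific releases.
lineage = {
    "order-svc": {
        "core": "2.4.0",
        "variants": {"aws": "2.4.0+aws.1", "gcp": "2.4.0+gcp.1"},
    }
}

def stale_variants(component: str) -> list:
    """List clouds whose variant was not rebuilt from the current core version."""
    entry = lineage[component]
    prefix = entry["core"] + "+"
    return [cloud for cloud, version in entry["variants"].items()
            if not version.startswith(prefix)]
```

When a core logic change lands, any variant this check flags still needs its coordinated cloud-specific rebuild.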

Where workflows become a factor

If the development team partitions cloud-specific code using per-cloud pipelines, the development and testing practices are the same as for a single or hybrid cloud. However, multi-cloud load testing requires special practices and, in some cases, special tools. After all, load testing should simulate the conditions that create workflow variations between clouds. Workflow changes will expose new logic, which QA professionals need to test.

There are two reasons why workflows would vary. First, there may be features or capabilities that only a specific cloud provider supports, which means work must be routed to and from that provider whenever those features or capabilities are required. Second, an application can selectively use one cloud to back up another or to support scaling under load. In the first case, it is important to focus on test data entry that exercises this feature- or capability-specific logic. In the second, it is necessary to create the availability or workload conditions that change multi-cloud workflows and component placement, to ensure that the logic works and that QoE does not become unacceptable.
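
Both sources of workflow variation can be sketched in one routing decision. Everything here is illustrative: the feature name, the cloud assignments, and the 80% load threshold are assumptions, not a real application's policy:

```python
def route_work(item: dict, primary: str = "aws", backup: str = "gcp",
               primary_up: bool = True, load: float = 0.0) -> str:
    # Case 1: feature-specific routing. Work that needs a capability only one
    # cloud supports must be sent to that cloud regardless of other conditions.
    if item.get("needs_feature") == "video-transcode":
        return "azure"  # hypothetical: only the Azure variant has this feature
    # Case 2: backup / scale-out. One cloud covers for another on failure
    # or when load crosses an (assumed) threshold.
    if not primary_up or load > 0.8:
        return backup
    return primary
```

Test data and test conditions must deliberately drive all three branches: items that demand the cloud-specific feature, a simulated primary outage, and load above the scaling threshold.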

There is no need to use special test data generation tools for multi-cloud testing, but if QA teams choose to use them, it's important to use them correctly. Proper usage starts with injecting test data simultaneously across all clouds where user access is supported; such distributed access is virtually guaranteed when the multi-cloud decision was made to accommodate a distributed user base. Also ensure that the test data injection for each cloud is local to that cloud, i.e., generated in the same geographic neighborhood where users will connect.
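
The "simultaneous and local" requirement can be sketched with one injector thread per cloud. The regions, endpoints, and record payloads below are made up; a real injector would run as an agent in each listed region and send its records to the regional endpoint:

```python
import threading

INJECTION_POINTS = {
    "aws":   {"region": "us-east-1",  "endpoint": "https://app.example-aws.test"},
    "azure": {"region": "westeurope", "endpoint": "https://app.example-azure.test"},
}

results = {}

def inject(cloud: str, point: dict, records: list):
    # Stand-in for posting the records to point["endpoint"] from an agent
    # located in point["region"]; here we only record what would be sent.
    results[cloud] = {"region": point["region"], "sent": len(records)}

# Launch every cloud's injector at the same time rather than sequentially.
threads = [threading.Thread(target=inject, args=(cloud, point, ["r1", "r2", "r3"]))
           for cloud, point in INJECTION_POINTS.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Running the injectors concurrently, each tied to its own geography, is what makes the test resemble a distributed user base rather than a single traffic source.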

The other critical requirement is to emphasize monitoring focused on the workflow under test and, in turn, to extend that same workflow view to the issues the monitoring uncovers. You are looking for the situations that carry unique multi-cloud risk: the responses to specific workflows for scaling, failover, or when the application invokes cloud-specific logic. These situations will need to be triggered by test data and then measured across the full range of components and across all cloud boundaries.
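
One common way to get that workflow-centric view, sketched here with invented event fields, is to tag every test transaction with a correlation ID so monitoring can reassemble one workflow's path across cloud boundaries:

```python
# Illustrative monitoring events emitted by components in different clouds.
events = [
    {"corr_id": "t-17", "cloud": "aws", "component": "frontend", "ms": 12},
    {"corr_id": "t-17", "cloud": "gcp", "component": "scaler",   "ms": 480},
    {"corr_id": "t-18", "cloud": "aws", "component": "frontend", "ms": 11},
]

def trace(corr_id: str) -> dict:
    """Reassemble one workflow's path and total latency across all clouds."""
    hops = [e for e in events if e["corr_id"] == corr_id]
    return {
        "path": [(e["cloud"], e["component"]) for e in hops],
        "total_ms": sum(e["ms"] for e in hops),
        "crossed_clouds": len({e["cloud"] for e in hops}) > 1,
    }
```

A trace whose `crossed_clouds` flag is set, with most of its latency in the cross-cloud hop, is exactly the kind of multi-cloud symptom this monitoring is meant to surface.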

Random test data alone will not always activate the particular situations you are trying to test. Therefore, coordinated and synchronized test sequences across all clouds where users connect are essential. When you find something, follow the symptoms through the workflow to make sure you’ve fully resolved the issue.
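
The coordination itself can be sketched with a barrier: each cloud's injector does its own setup, then every sequence releases its triggering traffic at the same instant, so the cross-cloud condition actually occurs. The cloud list is illustrative:

```python
import threading

CLOUDS = ["aws", "azure", "gcp"]
barrier = threading.Barrier(len(CLOUDS))
fired = []
lock = threading.Lock()

def run_sequence(cloud: str):
    # Per-cloud setup (sessions, warm-up traffic) would happen here.
    barrier.wait()           # block until every cloud's injector is ready
    with lock:
        fired.append(cloud)  # stands in for sending the triggering transactions

threads = [threading.Thread(target=run_sequence, args=(cloud,)) for cloud in CLOUDS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the barrier, the injectors would fire whenever their setup happened to finish, and the simultaneous cross-cloud load the test depends on might never materialize.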

