There may be a planet with perfect software, but as Google’s Chris DiBona writes, this planet is not the one we live on. As such, developers are left with a trade-off: err on the side of caution and rigorously test your software to find issues before deployment, or test less and ship faster with greater bug tolerance in production. The former camp is filled with developers working in regulated industries like healthcare and finance; the latter is populated by followers of Werner Vogels’ famous “you build it, you run it” dictum.
This trade-off is one of the more nuanced debates about developer productivity.
Wherever developers fall on the testing spectrum, there is no one-size-fits-all solution for software testing, leaving developers to constantly search for the right mix of testing approaches to meet their constantly evolving needs. To complicate matters, for any testing approach to become a habit, it needs to strike a balance: it must solve a major problem without being too slow or complicated to use.
As I wrote recently, unit testing found that sweet spot. As a software testing practice, it allows teams to test small, isolated pieces of code, which not only ensures that software performs according to its intended specification, but also allows developers to reason about code written by other developers. Unit testing has been around for decades, but it only really took root recently, as automation has simplified the developer experience to the point where it’s actually usable.
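To make this concrete, here is a minimal, hypothetical illustration (not from the original article) of what a unit test looks like: one small function, tested in isolation with Python’s standard `unittest` module, with no external dependencies involved.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # 25% off 100.00 should be 75.00
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        # Out-of-range input is refused rather than silently mishandled
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

A test like this runs in milliseconds via `python -m unittest`, which is exactly why the practice can become a habit rather than a chore.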
Today there is another form of testing which, like unit testing, has been in the works for decades but is only now finding its sweet spot, both by solving a critical problem and by giving developers the right abstraction to greatly simplify the approach. I’m talking about integration testing.
A developer’s job is to glue things together
In conventional three-tier architecture systems, developers could have a database and maybe an API or two to interact with, and that was the scope of third-party components they touched.
Developers these days tend to break a solution down into several different components, most of which they have not written, for most of which they have never seen the source code, and many of which are written in a different programming language.
Developers write less logic and spend more time gluing things together. Today, the average production system has interactions with multiple databases, APIs, and other microservices and endpoints.
Whenever your software needs to communicate with different software, you can no longer make simple assumptions about the behavior of your system. Each database, message queue, cache, and framework has its own states, rules, and constraints that determine its behavior. Developers need a way to test these behaviors before deployment, and this class of testing is called integration testing.
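As a hypothetical sketch (not from the original article) of why these behaviors matter: even an embedded database like SQLite enforces its own rules and constraints, so an integration test runs the code against the real dependency instead of a mock that would accept anything.

```python
import sqlite3
import unittest

def add_user(conn: sqlite3.Connection, email: str) -> None:
    """Insert a user; the database itself enforces email uniqueness."""
    with conn:
        conn.execute("INSERT INTO users (email) VALUES (?)", (email,))

class AddUserIntegrationTest(unittest.TestCase):
    def setUp(self):
        # A real (in-memory) database stands in for the production dependency
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (email TEXT UNIQUE NOT NULL)")

    def test_duplicate_email_is_rejected_by_the_database(self):
        add_user(self.conn, "a@example.com")
        # A mock would happily accept the duplicate; the real database refuses
        with self.assertRaises(sqlite3.IntegrityError):
            add_user(self.conn, "a@example.com")
```

The failure here comes from the dependency’s own constraint logic, which is precisely the class of behavior that unit tests with mocked dependencies cannot exercise.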
“Integration testing determines whether independently developed software units work properly when connected to each other,” writes Martin Fowler, who first encountered integration testing in the 1980s.
Until recently, integration testing meant you needed a replica of your production environment. Manually creating this test environment was an extremely time-consuming process, with a high risk of making mistakes. There were penalties for having gaps between test and production, and there was the ongoing burden of having to make changes to your test environment every time you made a change to production. Integration testing was so difficult to set up and use that for many developers it remained an obscure and inaccessible software testing discipline.
That was then. This is now.
Testcontainers: improving integration testing
Richard North created Testcontainers in 2015 when he was chief engineer at Deloitte Digital. He observed that the hopelessly complicated setup of integration tests – from creating consistent local environments to configuring databases and dealing with countless other issues – was a constant source of pain for developer teams who needed a reliable way to test their code against real, production-like dependencies.
North built Testcontainers as an open source library that allows developers to “test with containers” against data stores, databases, or anything else that can run in a Docker container, including popular frameworks such as Apache Kafka. Testcontainers gives developers an ergonomic, code-based way to leverage containers for local and continuous integration testing, without requiring each developer to become an expert in the many nuances of containers.
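As a sketch of what that code-based approach looks like, the snippet below uses the community Python port of Testcontainers (it assumes the `testcontainers` and `sqlalchemy` packages are installed and a local Docker daemon is running, so it is illustrative rather than something the verifier of this document can execute):

```python
# Assumes: `pip install testcontainers sqlalchemy` and a running Docker daemon.
import sqlalchemy
from testcontainers.postgres import PostgresContainer

# The library pulls the image, starts the container, and waits for it to be
# ready; the `with` block guarantees the container is torn down afterwards.
with PostgresContainer("postgres:16-alpine") as postgres:
    engine = sqlalchemy.create_engine(postgres.get_connection_url())
    with engine.connect() as connection:
        row = connection.execute(sqlalchemy.text("SELECT 1")).fetchone()
        assert row[0] == 1  # the query ran against a real PostgreSQL server
```

The point of the abstraction is visible in what is absent: no hand-written `docker run` commands, no port bookkeeping, no cleanup scripts – the throwaway database lives exactly as long as the test does.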
Today, Testcontainers is the most popular Docker-based integration testing library, used by thousands of companies, including Spotify, Google, Snapshot, Oracle, and Zalando. Part of Testcontainers’ popularity is its library of pre-supported modules, which covers just about every well-known database and many popular technologies, often contributed to the Testcontainers project and maintained directly by database and technology vendors. Earlier this year, North and core Testcontainers maintainer Sergei Egorov secured $4 million in seed funding and launched AtomicJar to continue expanding the ecosystem of supported Testcontainers modules.
Failing faster is a winning model
There will always be heated debates about how best to balance speed against software quality. One reason for the great popularity of the Java compiler and similar technologies is their ability to help developers find failures closer to the point of development, so they can fix them quickly.
There will always be diabolical bugs that evade your testing, but with the increasing ease of unit testing and integration testing today, it’s becoming increasingly difficult to credibly argue against investing more cycles in testing your code and its integration surface before going live.