Software testing fixes bugs and vulnerabilities before they affect users, but the frantic race to the finish line (littered with obstacles such as limited budgets, incremental sprints and poor management decisions) can hinder the deployment of quality code.
Testing validates that software works as expected, identifying vulnerabilities and minimizing bugs before code is deployed to production. Automated testing tools such as Selenium, JUnit and Grinder run code-based scripts against the software, compare actual results with expected results and report the outcome. However, despite the widespread availability of these tools, most code arrives in production untested; contributing factors include a shortage of developers, a lack of skills among testing teams and poor business decisions, according to industry analysts.
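As a rough illustration of the kind of code-based script these tools execute, the following is a minimal JUnit 5 unit test. The PriceCalculator class is a hypothetical stand-in defined inside the test so the sketch is self-contained; the point is simply that the framework runs the check, compares the actual result with the expected one and reports any mismatch.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Minimal JUnit 5 example. PriceCalculator is a hypothetical stand-in class
// defined here so the snippet compiles on its own; real tests target
// production code.
class PriceCalculatorTest {

    // Hypothetical production class under test.
    static class PriceCalculator {
        double applyDiscount(double price, double rate) {
            return price * (1 - rate);
        }
    }

    @Test
    void appliesTenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();

        // The framework compares the actual result with the expected value
        // and reports a failure if they differ beyond the tolerance.
        assertEquals(90.0, calculator.applyDiscount(100.0, 0.10), 0.001);
    }
}
```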
“Software is not tested simply because testing [is costly]: it takes time and resources, and manual testing slows down the ongoing development process,” said Diego Lo Giudice, vice president and analyst at Forrester Research.
Developers unit test only 50% of their code, and subject matter expert (SME) testers automate only about 30% of their functional test cases; the gap has nothing to do with code complexity, Lo Giudice said.
“Skills, cost and time are the reasons,” he said.
The total cost of poor-quality software exceeds $2 trillion a year, including $1.56 trillion in operational failures and $260 billion in failed IT projects, according to the Consortium for Information and Software Quality (CISQ), a nonprofit organization that develops international software quality standards.
Most code that reaches production has not been tested.
But there’s more than money at stake. “For businesses that do e-commerce, when the system goes down, they lose money and then they lose reputation,” said Christian Brink Frederiksen, CEO of Leapwork, a no-code test automation platform. This can lead to customer retention issues, he said.
However, some CEOs wear blinders when it comes to software quality. “If you talk to a CEO about testing, you can see their eyes go, ‘Really, well, what is this?'” he said. “But if they experienced an outage on their e-commerce platform and saw what the consequences were, then that’s a different story.”
Complex software testing requires complex skills
Software testing poses skill challenges because people have to search for unknown vulnerabilities and try to predict where systems might fail, said Ronald Schmelzer, managing partner of Cognitive Project Management for AI certification at Cognilytica.
Yet there is a shortage of tech talent with the necessary testing skills. While employer demand for tech talent grows exponentially, the number of developers and programmers remains steady, creating intense competition among employers to hire qualified staff, according to Quickbase CEO Ed Jennings in an online interview in May.
In addition to a skill shortage, testing requires repetition of tasks to ensure coverage of all areas and to verify that previous bugs haven’t resurfaced after updates, Schmelzer said.
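One common way to make that repetition affordable is to pin each fixed defect with a permanent automated regression test that reruns on every build. The sketch below assumes a hypothetical OrderService and a previously reported shipping-charge defect; both are invented for illustration, with the production logic inlined so the snippet compiles on its own.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Regression test pinned to a previously fixed (hypothetical) defect.
// Rerunning it after every update verifies the old bug has not resurfaced.
class OrderTotalRegressionTest {

    // Stand-in for the production code under test.
    static class OrderService {
        // Orders under 50.00 get a flat 4.99 shipping charge.
        double totalWithShipping(double subtotal) {
            // The original defect applied shipping to all orders, not just
            // those under the free-shipping threshold.
            return subtotal < 50.00 ? subtotal + 4.99 : subtotal;
        }
    }

    @Test
    void ordersAtOrAboveThresholdShipFree() {
        OrderService service = new OrderService();

        // Hypothetical defect report: a 60.00 order was charged 64.99.
        assertEquals(60.00, service.totalWithShipping(60.00), 0.001);
    }
}
```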
Bug coverage and hunting become more of a challenge for system-wide violations of architectural or coding best practices, said Bill Curtis, senior vice president and chief scientist at Cast Software and executive director of CISQ. If a violation is contained within a single module or component, a fix can be tested relatively easily, but system-level violations, which involve faulty interactions between multiple components or modules, require fixes across multiple system components, Curtis said.
“Frequently, fixes are made to some but not all faulty components, only to find that operational issues persist and other faulty components need to be fixed in a future release,” he said.
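A hedged sketch of what guarding against such cross-component problems can look like: an integration-style test that exercises two components together rather than in isolation. CheckoutService, InventoryService and PaymentService are hypothetical names, simplified here so the snippet compiles alone, used only to illustrate a faulty interaction between modules.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.Test;

// Integration-style test exercising two components together, because a fix
// verified inside one module can still fail where modules interact.
class CheckoutIntegrationTest {

    static class InventoryService {
        boolean reserve(String sku) {
            return !"SKU-OUT-OF-STOCK".equals(sku); // pretend stock lookup
        }
    }

    static class PaymentService {
        int capturedCharges = 0;

        void capture(double amount) {
            capturedCharges++;
        }
    }

    static class CheckoutService {
        private final InventoryService inventory;
        private final PaymentService payments;

        CheckoutService(InventoryService inventory, PaymentService payments) {
            this.inventory = inventory;
            this.payments = payments;
        }

        boolean placeOrder(String sku, double amount) {
            // The interaction under test: payment must not be captured
            // when the inventory reservation fails.
            if (!inventory.reserve(sku)) {
                return false;
            }
            payments.capture(amount);
            return true;
        }
    }

    @Test
    void paymentIsNotCapturedWhenReservationFails() {
        PaymentService payments = new PaymentService();
        CheckoutService checkout =
                new CheckoutService(new InventoryService(), payments);

        boolean completed = checkout.placeOrder("SKU-OUT-OF-STOCK", 25.00);

        assertFalse(completed);
        assertEquals(0, payments.capturedCharges);
    }
}
```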
Commercial pressures and methodologies contribute
But while business pressures, such as maintaining a competitive advantage or delivering on time, contribute to the software testing problem, development methodologies are also partly to blame, Leapwork’s Frederiksen said.
“The core problem with software testing is that companies have probably optimized the whole software development approach with methods like Agile or CI/CD,” Frederiksen said. This gives the false impression that the code ready for deployment has also been optimized, he explained.
This does not mean that one methodology is worse or better than another. “With Waterfall, testing was no better, in terms of the amount of testing, before the software went into production, compared to testing done in Agile,” Forrester’s Lo Giudice said.
According to Holger Mueller, vice president and analyst at Constellation Research, Agile can compound testing gaps because people focus too much on time to deployment rather than quality.
Systems that are nearly impossible to repair after deployment, such as satellites or missile guidance software, require 99.999% testing, he said.
“Enterprise software and consumer software are often the sloppy ones, with MVPs [minimum viable products] often released under time pressure,” Mueller said, referring to the Lean principle of quickly developing a product with just enough functionality, with the expectation of future updates and bug fixes.
“Testing/QA is usually an afterthought and short-handed. That’s what keeps code from going live,” he said.
That doesn’t mean teams should throw their methodology out with the bathwater, Mueller noted, but effort is needed to ensure systems are tested as a whole.
“While you can create code incrementally, there are limits to incremental testing. At some point, software must be tested holistically…the ‘soup to nuts’ test,” said Mueller. Testers should install the full application, test that the code works, then uninstall and check for issues, such as ensuring personally identifiable information is removed.
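As a sketch of that holistic, end-to-end step, the following drives a fully deployed application through a basic user flow with Selenium WebDriver, one of the tools mentioned earlier. The URL, element IDs and flow are assumptions made for illustration; a real suite would extend this with teardown checks, such as confirming that personal data is removed after uninstall.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Holistic end-to-end smoke check against an installed application.
// The staging URL, element IDs and page flow are hypothetical.
public class EndToEndSmokeTest {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://staging.example.com/shop");

            // Walk a basic user journey: search, open a product, add to cart.
            driver.findElement(By.id("search-box")).sendKeys("coffee mug");
            driver.findElement(By.id("search-button")).click();
            driver.findElement(By.cssSelector(".product-card a")).click();
            driver.findElement(By.id("add-to-cart")).click();

            // Verify the whole stack handled the flow, not just one module.
            String cartCount = driver.findElement(By.id("cart-count")).getText();
            if (!"1".equals(cartCount)) {
                throw new AssertionError("Expected 1 item in cart, found " + cartCount);
            }
        } finally {
            driver.quit();
        }
    }
}
```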
“Basically, quality assurance should follow the entire customer lifecycle in the product,” he said.