AI-based testing has the potential to help solve software quality issues, but it faces significant hurdles on the path to widespread adoption.

Automated testing uses software tools to take over the manual testing process. Testers can rely on traditional rule- or script-based approaches, or on AI, which builds, launches and executes tests without human intervention.
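To make the distinction concrete, here is a minimal sketch of a traditional script-based test in Python. The `apply_discount` function and its test are hypothetical illustrations, not any vendor's tooling; the point is that a scripted test hard-codes fixed inputs and expected outputs, which AI-based tools aim to generate and maintain automatically instead.

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production code under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # A scripted test encodes fixed rules: exact inputs, exact expected outputs.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(9.99, 0) == 9.99
    # Invalid input should be rejected, not silently computed.
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")

test_apply_discount()
```

Every such script must be written and updated by hand whenever the code under test changes, which is the maintenance burden AI-based tools promise to reduce.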

AI-powered tools such as the Selenium IDE-compatible Katalon Studio, mabl and Functionize can free developers from repetitive, mundane tasks and monitor complex systems for vulnerabilities. However, distrust of the still-maturing technology is hampering adoption, according to industry experts.

The tools are also not yet a viable solution to the software testing crisis, said Holger Mueller, vice president and analyst at Constellation Research.


“This is just the beginning; AI has just arrived in IDEs,” he said.

Indeed, most AI-based testing is in the early stages of industrial use, said Bill Curtis, senior vice president and chief scientist at Cast Software Inc. Curtis is also executive director of the Consortium for Information and Software Quality (CISQ), a nonprofit organization that develops international software quality standards.

A CISQ report released in 2020 and scheduled to be updated later this year identified AI as a key trend in software development for this decade. However, it could not explicitly identify AI as a trend in software quality testing.

“There was not enough data reported to reinforce the presence of AI-based testing in the report,” Curtis said.

But AI-based testing could soon make an appearance in the report. “So-called ‘TestOps’ that use AI systems and automation approaches have taken off in recent years,” Schmelzer said.

An AI test interface is human readable, but its inner workings are difficult to understand.

The battle heats up

TestOps involves using both AI and non-AI automated testing to scale testing resources automatically.

AI and non-AI software testing tools offer similar benefits. “They address issues such as providing more consistent results for repetitive tests, speeding up the testing process, simplifying testing tools and operations, making automated testing more adaptive and resilient, and predicting potential areas of test failure,” Schmelzer said.

Eliminating repetitive and mundane tasks frees up the developer and speeds up testing, said Christian Brink Frederiksen, CEO of Leapwork, a no-code automation platform.

“You can reduce the testing time in your release cycles from weeks to days,” he said.

Reducing test time can speed time to market, allowing companies to adapt quickly to changing market conditions, said Gareth Smith, general manager of software automation at Keysight Technologies, a design, development and validation company. He gave the following example: “Right now in the U.K. it’s a heat wave, and I want to do a ‘heat-wave burger’: buy one burger, get one free. You can create a campaign and then roll it out next weekend,” he said.

But where AI-based testing excels is in bug hunting, Frederiksen said.

“AI-based testing has been shown to reduce the risk of buggy software, which can cause corporate crises like Volkswagen’s,” Frederiksen said, referring to last month’s ouster of Volkswagen CEO Herbert Diess, apparently over software quality issues.

Poor software quality has been blamed for a myriad of other headline-grabbing failures, including postal accounting errors in the United Kingdom, Zoom outages and 4G network downtime.

But buggy software isn’t just about headlines; it’s a pervasive problem, and companies need to consider a different approach to testing, Frederiksen said.

“Companies are between a rock and a hard place as they struggle to scale their largely manual testing solutions in the face of increasingly complex software and markets that demand product releases by a certain date,” he said.

A vote of no confidence

AI faces an uphill battle for acceptance in the software testing arena. The technology’s inner workings are nearly impossible for humans to understand, leading to a lack of confidence in AI capabilities, Smith said. He offered an analogy to show how the opacity of AI can not only erode trust, but also force people to cede control to the unknown.

“We’ve developed a self-driving car. It’s got a seat with just a speaker and a microphone, and there’s no seatbelt – it doesn’t need one. You sit in the seat and you say, ‘Take me,’ and wherever you go, it goes 100 miles an hour. Who would want to buy this car? They say, ‘I’m not going to be the first customer in this car,’” Smith said.

This lack of track record explains why widespread adoption of AI-based testing systems faces an uphill battle, Smith said.

“It’s the future, but we’re not there yet,” he said.
