Christian Brink Frederiksen, co-founder and CEO of Leapwork, explains how trust can be restored in software testing in 2022

Confidence in software testing has declined over the past year, following a series of notable outages.

2021 brought more high-profile software failures, continuing a trend of recent years that has cost companies both reputation and revenue. Businesses are increasingly going digital, with more software being created and customized than ever before. But when the world runs on software, a single mistake can be catastrophic.

In June, Fastly “broke the internet” when a valid configuration change made by one of its customers triggered an undiscovered bug introduced during a software deployment in May. In October, Meta saw Facebook, WhatsApp, and Instagram go down for seven hours. The outage cost Meta around $100 million in lost online ad sales, and its shares fell 5%, wiping around $40 billion off its market value.

These are only the outages that make the headlines. Many more happen on a regular basis, and a common thread runs through their causes: human error. More importantly, it’s not just brand damage and the bottom line that are affected; people’s lives can be put in danger. When the UK’s National Health Service experienced a computer outage, GPs were unable to access critical blood and X-ray results, and medical appointments could not take place, creating a backlog of care.

These incidents are not inevitable. Instead, they highlight the critical need for companies to properly and thoroughly test their software to identify issues before deployment. Downtime reveals fundamental flaws in the way the majority of companies approach testing.

Millions of organizations rely on manual processes to check the quality of their software applications, despite a completely manual approach presenting a litany of issues. First, with over 70% of failures caused by human error, manual software testing leaves businesses at high risk of exactly these problems. Second, it is exceptionally resource-intensive and requires specialized skills. And with the world in the grip of an acute digital talent shortage, many companies simply lack the staff to dedicate to manual testing.

Added to this challenge is the intrinsic link between software development and business success. With businesses under more pressure than ever to release faster and more frequently, the sheer volume of software to be tested has exploded, placing additional strain on already depleted resources. Businesses should be testing their software applications 24/7, but the resource-intensive nature of manual testing makes this impossible. Repetitive tasks are also demotivating, and it is often this that leads to critical mistakes in the first place. As a result, companies are forced to choose between cutting back on testing and accepting a slower time-to-market that erodes their competitive edge. QA teams face an impossible task, and it is not one that can be solved simply by adding more people to the team.

Instead of relying on manual processes, organizations can leverage automation to transform their testing efforts. Automation increases efficiency, allowing teams to test larger volumes of software while simultaneously eliminating the risk of human error; this can reduce application errors by up to 90%. Automated testing can also cut the time spent on test data preparation by approximately 80%, while feedback cycles are accelerated. This fosters a culture of continuous integration, with automation built into CI/CD pipelines, ultimately making it far easier to test and release software.
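As a rough illustration (not taken from the article), the kind of automated check that runs on every commit once wired into a CI/CD pipeline might look like the minimal pytest sketch below; the URL, endpoint, and response fields are all hypothetical:

```python
# A minimal automated smoke test using pytest and the requests library.
# The base URL and the /health endpoint are placeholders for illustration.
import requests

BASE_URL = "https://staging.example.com"  # hypothetical staging environment


def test_health_endpoint_returns_ok():
    # When run as a CI/CD pipeline step (e.g. `pytest` on every commit),
    # this surfaces regressions before they reach production.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"
```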

These are significant advantages, allowing companies to improve time-to-market as much as tenfold, a clear edge in the digital economy. However, not all automation platforms are created equal. Some are code-based or low-code, which means that while they create efficiencies over manual processes, they still require technical skills and an understanding of code to operate. With 64% of companies experiencing a shortage of software engineers, finding people who can run such platforms is exceptionally difficult. But when it comes to test automation, many organizations already have the testing and QA talent in-house who, with the right tool, could use their experience to build a scalable test automation strategy. It’s the wrong, code-dependent tools that are the problem, not the people.
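To illustrate that skills barrier (a hypothetical sketch, not an example from the article): even a trivial login check on a code-based platform such as Selenium assumes familiarity with locators, waits, and the WebDriver API. The page URL and element IDs below are invented:

```python
# A typical code-based UI test: even this simple login check requires
# programming knowledge. The URL and element IDs are illustrative only.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://app.example.com/login")  # hypothetical application
    driver.find_element(By.ID, "username").send_keys("test.user")
    driver.find_element(By.ID, "password").send_keys("s3cret")
    driver.find_element(By.ID, "submit").click()
    # Wait for the dashboard heading as evidence of a successful login
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "h1.dashboard"))
    )
finally:
    driver.quit()
```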

This landscape underlines the importance of no-code solutions. Although “low-code” and “no-code” are often used interchangeably, they are not the same. Low-code still requires developer skills, creating scalability issues and straining resources. In contrast, no-code solutions can democratize automation, because testers can create test logic based on real business processes. This means the test experts already working within the company can easily automate workflows in a scalable and sustainable way.

Rather than forcing companies to search for outside talent, no-code allows them to leverage their existing capabilities and build an automated flow in minutes. Technical resources are then free to concentrate on high-value tasks. This not only helps accelerate innovation and digital transformation strategies, unlocking productivity gains of 97%, but also allows teams to do more fulfilling work.

Importantly, by adopting a no-code solution, organizations can achieve broader testing coverage without having to compromise existing software or systems. As a result, they can ensure that their applications are of the highest quality, minimizing the risk of damaging outages while accelerating business growth. Essentially, this approach creates confidence in the testing process and gives control back to the business.

25% of total IT spend goes on quality assurance, yet 85% of all testing is still done manually. As long as manual testing remains prevalent in businesses, we will continue to see high-profile failures like the ones that dominated the headlines in 2021. Businesses must now recognize this as a critical need: by solving test automation, they are also ensuring business continuity.

Written by Christian Brink Frederiksen, co-founder and CEO of Leapwork
