
2025 State of Digital Quality Report From Applause Reveals Sharp Increase in AI-Powered Functional Testing, but Human Involvement Remains Critical

To offset risks, a growing number of organizations leverage crowdtesting and most integrate multiple QA methods throughout the SDLC

Applause, the world leader in digital quality and crowdsourced testing, released The State of Digital Quality in Functional Testing 2025, its fourth annual industry report designed to help organizations deliver higher quality apps, websites and other digital experiences. The report shows a significant increase in AI usage for functional software testing, which has more than doubled in the past year – though organizations remain firm that keeping humans in the loop (HITL) is essential. Crowdtesting is an effective approach leveraged by a third of organizations to help ensure comprehensive digital quality.

Users remain in the driver’s seat when it comes to defining and measuring the goals of software development and QA departments. Customer satisfaction and customer sentiment/feedback are the top metrics to assess software quality, and user experience (UX) testing continues to be the most popular testing type. However, familiar challenges persist, including aggressive timelines and a lack of resources and stability across internal teams. The report’s findings are based on a recent survey of more than 2,100 software development and testing professionals around the world.

Key findings:

AI is becoming more deeply integrated into testing, but human oversight is paramount.

  • 60% of survey respondents reported that their organization uses AI in the testing process. In 2024, our AI survey revealed that only 30% were using the technology to build test cases monthly, weekly or daily – and just under 32% were using it for test reporting.
  • Organizations leverage AI to develop test cases (70%), automate test scripts (55%), and analyze test outcomes and recommend improvements (48%). Other use cases include test case prioritization, autonomous test execution and adaptation, identification of gaps in test coverage and self-healing test automation.
  • AI and automation alone cannot provide the comprehensive, end-to-end test coverage that enterprises demand. One-third of survey respondents (33%) leverage crowdtesting, an effective approach to mitigating risk through HITL test coverage, particularly in the age of agentic AI.

Significant challenges in pre-release testing persist, despite AI efficiencies.

  • With the swift rise in adoption, 80% of respondents are challenged by a lack of in-house AI testing expertise.
  • Keeping up with rapidly changing requirements was the most prevalent testing challenge at 92%. Nearly a third of respondents lean on a testing partner to bridge this gap.
  • Additional obstacles to AI quality include inconsistent/unstable environments (87%) and lack of time for sufficient testing (85%).

Organizations are embracing a blended, shift-left approach to quality assurance (QA).

  • A significant shift is underway in the software development lifecycle (SDLC): While a previous survey found 42% of respondents only test at a single stage of the SDLC, this year just 15% limit testing to a single stage.
  • Over half of organizations are now addressing QA during the planning (54%), development (59%), design (52%) and maintenance (57%) phases of the SDLC. 91% of respondents reported that their team conducts multiple types of functional tests, including performance testing, user experience (UX) testing, accessibility testing, payment testing and more.
  • Of the 83% of organizations using multiple metrics to monitor digital quality, 67% use test case reporting and metrics to analyze trends and identify areas for improvement. 58% use the combined data to guide future development.

“Software quality assurance has always been a moving target,” said Rob Mason, Chief Technology Officer, Applause. “And, as our report reveals, development organizations are leaning more on generative and agentic AI solutions to drive QA efforts. To meet increasing user expectations while managing AI risks, it’s critical to assess and evaluate the tools, processes and capabilities we’re using for QA on an ongoing basis – before even thinking about testing the apps and websites themselves. ‘Are we meeting demands in terms of performance? Accuracy? Safety?’ Humans must be kept in the loop to answer these questions effectively.”

Additional findings:

Digital quality is customer-driven – UX, usability and user acceptance testing and metrics are preferred.

  • Customer satisfaction and customer sentiment/feedback are the top metrics for assessing software quality.
  • User experience (UX) testing is the most popular testing type at 68%. This type of testing leverages qualitative research to ensure digital experiences are intuitive, compelling and engaging.
  • Usability testing (59%), which measures ease of use, and user acceptance testing, or UAT (54%), are also popular.

“Internal QA structure and consistency” was rated highly by respondents, though teams lack comprehensive documentation.

  • 69% of respondents rated their organizations’ structure and consistency around digital quality as falling into the “Excellence” and “Expansion” framework categories.
  • Yet, only 33% reported having comprehensive documentation for test cases and plans.
  • 84% of respondents find it challenging to reproduce defects with available test data – reproducing bugs is crucial to understanding, analyzing and fixing issues.

“The fact is, what we’ve long predicted has become our reality – machines can develop and validate software, to a degree,” continued Mason. “But, even agentic AI – especially agentic AI – requires human intervention to avoid quality issues that have the potential to do serious harm, given the speed and scale at which agents operate. The trick is to embed human influence and safeguards early and throughout development without slowing down the process, and we know this is achievable given the results of our survey and our own experiences working with global enterprises that have been at the forefront of AI integration.”

Applause’s State of Digital Quality content series provides insight into the latest software testing and QA practices and trends, including preferred methods and tools, as well as common challenges faced by software development and testing professionals worldwide.

About Applause

Applause is the world leader in digital quality – built by innovators, powered by people and dedicated to the comprehensive digital testing and feedback needs of our global enterprise customers. Our fully managed solutions harness a powerful combination of community-based testing and advanced technology to ensure organizations can move quickly to release apps, devices and experiences that are consistently functional, intuitive and inclusive in any market. Our experts steward customers through the entire testing process, from strategy through execution, at every stage of the software development lifecycle. And, we seamlessly supplement existing resources, providing actionable, real-time insights that drive customer retention and revenue. With specialties including accessibility, AI and payment testing, we’re proud to be an essential partner to the most innovative names in the digital economy, as we work together to ensure technology works for everyone, everywhere.


