What's the role of manual functional testing when we're an automation-first team?
Test automation is booming. And as top-end QA specialists, we help world-class software businesses with high levels of automation competency, like Meta and Microsoft, drive better productivity across their QA.
Except for one small thing: Global App Testing offers manual software testing. We’re the inventors of “crowdtesting”, a technology which links you to a real professional software tester on your chosen environment mix anywhere in the world. We think it’s an amazing technology, but it’s manual testing. And some would argue that manual testing is slow and expensive.
So why is it that businesses like Microsoft are choosing to use Global App Testing? And how is it that they use manual testing to drive improved productivity when there are more automation options available than ever? That’s what we want to answer below.
A look at the assumptions about automated testing
First, if you want to really drive productivity through automated testing, it’s useful to look at the assumptions engineers make when they think about tests.
Here’s our top three:
- “We’re doing manual tests for now, but we’ll hit a higher % of automated software tests next year, so we can plan to continually reduce our QA spend.”
- “Automated testing (AT) requires a one-time investment, and the tests can be reused indefinitely, so it’s cheaper and more efficient.”
- “Manual testing (MT) is boring, unfulfilling work. It should be automated.”
We think that there’s some truth in all of these, but when you look closer, the picture becomes more complicated.
A. Teams are generally optimistic about their automation timelines
First, forward planning. Research by TestRail in 2022 showed that businesses systematically overestimate how much they will automate each year.
In TestRail’s annual survey, respondents estimated they would automate 30% more of their test suite the following year. But when the same businesses were surveyed a year later, the proportion hadn’t moved: they weren’t automating more. We’d call this a positivity bias in test automation planning – teams plan as if there were no obstacles.
B. Teams generally lowball the costs of test automation
Second, the economics of test automation. According to our own polling, teams find that flaky tests and maintenance costs are worse than they expected, and name them as the primary reason they default to manual tests. In other words, teams lowball the cost of building automated tests and overestimate their lifespan.
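To make those economics concrete, here is a minimal sketch of the break-even maths for automating a single test. All figures here are illustrative assumptions, not GAT data: 8 hours to build the test, 15 minutes (0.25 hours) to run it manually, and an amortised maintenance cost per automated run.

```python
def break_even_runs(build_cost, manual_cost_per_run, auto_cost_per_run=0.0):
    """Number of runs after which automating a test pays for itself.

    All costs are in hours. Returns None if automation never pays off
    (i.e. each automated run costs as much as a manual one or more).
    """
    saving_per_run = manual_cost_per_run - auto_cost_per_run
    if saving_per_run <= 0:
        return None
    return build_cost / saving_per_run

# Naive view: no maintenance cost at all.
runs = break_even_runs(build_cost=8.0, manual_cost_per_run=0.25)
print(runs)  # 32.0 runs before the automation pays for itself

# Add upkeep: 1 hour of maintenance every 20 runs shrinks the saving
# per run and pushes break-even further out.
maintenance_per_run = 1.0 / 20
runs_with_upkeep = break_even_runs(
    build_cost=8.0,
    manual_cost_per_run=0.25,
    auto_cost_per_run=maintenance_per_run,
)
print(round(runs_with_upkeep))  # 40
```

The point of the sketch is simply that a non-zero maintenance term always moves break-even out, and a flaky test whose upkeep exceeds the manual cost never breaks even at all.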
C. Some teams dismiss manual testing as boring, unfulfilling, or low status
And finally, we believe that MT is worthwhile work. Engineers are fortunate: they are well paid and get to work on something they love. Crowdtesting via Global App Testing offers our testers work which is flexible, which doesn’t require a degree, and which shares the wealth from California further afield. It’s no wonder we have 90,000 testers keen to deliver manual testing, specialised in delivering results across a wide variety of devices.
A + B + C = motivated reasoning
In other words, a bit of motivated reasoning creeps into the MT / AT planning logic. Managers, who want to save costs, and engineers, who don’t want to test, are both motivated to believe that AT will be better than MT and that they can set it up fully next year. In our experience, that belief doesn’t always survive an encounter with reality. It would be better for both parties to take a blended test approach, both to ringfence the gains made by automation and to continue the journey to ever-better quality in applications.
The first step is looking at manual testing blended with automation.
So is automated testing bad?
Absolutely not. The reasons for wanting to automate tests in the first place are still valid.
In fact, when we’ve been able to raise the productivity of different teams the most, it’s because they’ve been automation-first or partially automated in their approach. That’s where we can leverage the right distribution of manual / automated testing to improve the effectiveness of both.
How can the right manual / automated test distribution improve the effectiveness of both?
1. Use it to keep the right people focused on test automation
Generally, a great manual tester and a great test engineer are slightly different people with different mindsets. In particular, test engineers are likely to want to spend their time automating; it’s the best thing for their professional development. But even great companies under-resource, and quality engineers (QEs) often come into post only to realise that, in the meantime, they’re expected to do both.
That’s how GAT has previously saved companies money: by giving them the bandwidth to automate more tests. For example, payments application Flip automated an additional 20% of their test suite after their engagement with GAT. Series A airportr automated an additional 15% of theirs. And at Booking.com, we saved the lead QA 70% of their time.
2. Getting the moment of test automation right
The second important way to avoid wasting time on test automation is to get the moment of automation right. Earlier we referred to flaky tests; the primary reason tests flake is product volatility. If code is changing a great deal, an automated test is likely to have a very short lifespan.
With human testers, it’s different. Even when reusing test cases after a UI change, testers can apply the “spirit of the test” to deliver fewer false positives. (GAT’s true positive rate is the highest in the industry.) Our advice is to introduce a delay period between shipping a new feature and automating its tests – there’s always another change required later than you’d think.
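As a rough heuristic for that delay period – our own illustrative sketch, not a GAT tool, with made-up thresholds – you could gate automation on how often a feature’s code has changed recently:

```python
from datetime import date, timedelta

def ready_to_automate(change_dates, quiet_days=30, max_recent_changes=2, today=None):
    """Return True when a feature looks stable enough to automate.

    The feature qualifies if it has had at most `max_recent_changes`
    code changes in the last `quiet_days` days (illustrative thresholds).
    """
    today = today or date.today()
    window_start = today - timedelta(days=quiet_days)
    recent = [d for d in change_dates if d >= window_start]
    return len(recent) <= max_recent_changes

# A feature edited heavily this month is still volatile:
busy = [date(2024, 5, d) for d in (2, 9, 16, 23)]
print(ready_to_automate(busy, today=date(2024, 5, 30)))  # False

# One that settled down over a month ago is a better automation candidate:
settled = [date(2024, 3, 1), date(2024, 3, 15), date(2024, 4, 2)]
print(ready_to_automate(settled, today=date(2024, 5, 30)))  # True
```

In practice the change dates would come from your version control history; the point is that the gate is cheap to compute and keeps automation effort off code that is still moving.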
3. Retain flexible manual test resource during a volatile demand period
There are two ways to think about volatile manual test demand.
The first is the natural ebb and flow of new features on your application. Above, we described a process whereby new features are manually tested until they are stable enough to automate. We might like to think we’re working consistently hard every hour with an even output – that’s rarely a business reality, and demand for test resources is just as volatile.
The other way in which your demand for manual test supply will be volatile is in protecting your QEs when tests flake. As you can imagine, for our biggest clients the automated test suite is hugely complex. That means businesses build a reliance on automated processes which they don’t have the manual testing capacity to backfill when those processes flake or break.
This is frustrating. Even a small proportion of flaky tests can double your triage burden. In other words, the same virtuous cycle which makes automated tests so attractive becomes a vicious cycle when run backwards (see below). Having “burstable” QA capacity can help even out the workload and avoid frustrating delays.
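To illustrate with made-up numbers, even a modest flake rate on a large suite generates a heavy manual triage load on every single run:

```python
def triage_hours_per_run(suite_size, flake_rate, triage_minutes_per_failure):
    """Hours of human triage one CI run generates from flaky failures alone."""
    flaky_failures = suite_size * flake_rate
    return flaky_failures * triage_minutes_per_failure / 60

# Assumptions (illustrative): a 2,000-test suite, 5% of tests flaking on a
# given run, and 15 minutes to investigate each spurious failure.
hours = triage_hours_per_run(suite_size=2000, flake_rate=0.05,
                             triage_minutes_per_failure=15)
print(hours)  # 25.0 hours of triage per run
```

Twenty-five person-hours per run is more than three working days of investigation that produces no new information about the product, which is why burstable manual capacity matters when a suite starts flaking.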
4. Broadening your environment for more global test coverage
While automated tests are great for ensuring consistency across your primary use cases and target environments, they can often fall short when it comes to comprehensive testing across the full range of user environments and scenarios.
If you have a domestic-first programme, one common approach is to run automated domestic test scenarios and then expand to global testing at key moments. We can give you access to a mix of real-world device/OS/browser combinations used by your customer base around the world. This allows you to validate not just core functionality, but the actual real-user experience across a vastly wider matrix of environments.
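To see how quickly that matrix grows, here is a small sketch; the device, OS, browser, and locale lists are illustrative, not a GAT coverage list:

```python
from itertools import product

devices = ["iPhone 14", "Pixel 7", "Galaxy S23", "iPad Air", "ThinkPad X1"]
os_versions = ["iOS 17", "Android 13", "Android 14", "Windows 11"]
browsers = ["Safari", "Chrome", "Firefox", "Edge"]
locales = ["en-US", "de-DE", "ja-JP", "pt-BR"]

# Not every combination is valid in reality (no Safari on Android), but
# even the raw cross-product shows why one in-house device lab can't
# cover the matrix a global user base actually runs.
combinations = list(product(devices, os_versions, browsers, locales))
print(len(combinations))  # 320 raw combinations from four short lists
```

Each extra dimension (network conditions, payment methods, accessibility settings) multiplies the count again, which is the argument for sampling the matrix with real-world testers rather than enumerating it in automation.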
Today, localization is seamlessly integrated into the software development process, with translators working alongside developers to ensure that content is localized as soon as it's created. However, localization involves more than just translating text. It also takes into account cultural nuances and preferences, such as currency, units of measurement, and societal norms.
5. Identifying quality targets like compliance and accessibility
In addition to functional and user experience testing, there are other critical quality factors that automated tests struggle with, such as compliance, accessibility, and payments. Each of these takes testing beyond what automated checks can easily verify.
By blending automated regression testing with skilled human exploratory testing, you can achieve well-rounded coverage that checks all the boxes. Leverage machines for what they're good at (rapid functional verification) while employing professionals for the nuanced facets that machines cannot easily evaluate.
The next stage: from quality to value
As digital products and services increasingly become key value drivers for businesses, the role of testing must evolve beyond just preventing defects. Leading organizations are now using testing as a competitive advantage – leveraging quality insights to directly optimize value, innovation and market responsiveness.
By capturing real-world feedback across your user base through manual testing, you gain a direct line of insight into what customers actually value and struggle with when using your products. You can analyze usage patterns, UX pain points, common workarounds and more. This customer-driven data becomes an invaluable asset for prioritizing improvements, sparking innovation, and staying tightly aligned with evolving market needs.
Smart testing leaders are finding creative ways to feed manual test insights upstream into the full product lifecycle – informing not just remediation of defects, but driving continual optimization of value delivered to customers. Testing transforms from a box-checking function into a critical value delivery mechanism for the business.
Want to drive your global growth in local markets?
We can help you drive global growth, better accessibility and better product quality at every level.