The leadership team has decided to outsource a portion of the QA testing effort in order to improve application testing coverage. Good plan. Now how do you decide exactly what testing to outsource? As a leadership team member, do you know precisely what QA testing is currently testing? If not, it’s time to gather a small team made up of developers, QA testers, and the Development and/or QA manager to find out. You’ll need to understand what testing is currently executed, and the scope, or depth, of the testing effort, in order to determine what testing value you’ll get from the outsourced testing team.
Gather the team and determine which testing types are currently executed and which are missing or rarely executed, if at all. Generally, the most critical testing types are smoke, functional regression, and new feature testing. Internal QA teams tend to focus on functional testing of new features during development cycles. Smoke and regression testing are scheduled either prior to the release as a separate development sprint or as part of a regression testing cycle, or they happen continuously as time allows.
It’s rare to see QA testing teams have the time to execute full regression test suites, let alone performance, load, integration, usability, accessibility, or security testing. These testing types may be touched on within the general regression suite, but coverage in these areas is largely superficial.
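A smoke test suite can be as lightweight as a handful of named checks that verify the application’s critical paths still respond after a build. The sketch below is a minimal illustration; `check_login` and `check_search` are hypothetical placeholders standing in for whatever your application’s critical paths are.

```python
# Minimal smoke-test runner: each check is a named callable that
# raises AssertionError on failure. The checks themselves are
# hypothetical placeholders for real critical-path verifications.

def check_login():
    # In a real suite this would exercise the login flow.
    assert 1 + 1 == 2

def check_search():
    # In a real suite this would hit the search endpoint.
    assert "qa" in "qa testing"

def run_smoke_tests(checks):
    """Run each named check, collecting pass/fail results."""
    results = {}
    for name, check in checks.items():
        try:
            check()
            results[name] = "pass"
        except AssertionError:
            results[name] = "fail"
    return results

results = run_smoke_tests({"login": check_login, "search": check_search})
print(results)  # {'login': 'pass', 'search': 'pass'}
```

In practice a team would typically use a framework such as pytest rather than a hand-rolled runner; the point is only that a smoke suite stays small, fast, and focused on the paths a customer cannot live without.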
Suppose, in this instance, the outsourced testing team is assigned load, performance, and security testing. If these types of testing haven’t been executed frequently, then you’ve already significantly increased your testing coverage. However, in the likely event that you want to increase testing coverage even further, start by:
- Reviewing existing test case coverage (manual and automated), including test type and depth.
- Reviewing unit test coverage by the development team.
- Determining which testing benefits the customer or end user the most, and expanding from there.
- Reviewing the application areas with the most reported defects, and determining whether a certain type of testing covers each area.
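One way to act on the last two bullets is to cross-reference reported defects against the areas your test suites actually cover. The sketch below assumes a simple list of defect records tagged by application area; the area names and counts are illustrative, not from the source.

```python
from collections import Counter

# Illustrative defect records tagged by application area.
defects = [
    {"id": 101, "area": "checkout"},
    {"id": 102, "area": "checkout"},
    {"id": 103, "area": "search"},
    {"id": 104, "area": "checkout"},
]

# Areas that already have a dedicated test suite (hypothetical).
covered_areas = {"search", "login"}

# Count defects per area, then flag areas that are both defect-prone
# and untested -- prime candidates for new coverage.
defect_counts = Counter(d["area"] for d in defects)
gaps = {area: n for area, n in defect_counts.most_common()
        if area not in covered_areas}
print(gaps)  # {'checkout': 3} -- three defects, no test suite
```

For the unit-coverage bullet, a tool such as coverage.py can produce the per-module numbers that feed a review like this one.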
The success of the application depends on keeping customers using it. Start by determining what testing provides the most positive outcome for your customer, then add to it as the schedule allows. You may start by improving the customer experience and then move to testing that improves the organization’s bottom line. If you expand the breadth of the mobile devices you test, can you sell more product? Likewise, if mobile application performance improves noticeably across more device types, can you improve both customer retention and business income?
Another consideration is which testing allows the outsourced testing team to become productive fastest. What type of testing makes sense to start with from a training perspective? Once the team gains experience with the application, the testing types and schedule can expand.
Improving testing coverage by executing a larger variety of testing types improves application quality simply by exercising the application with more patterns. Security, usability, accessibility, and performance testing, along with varied platforms, all benefit the end user experience but are typically ignored due to time and resource constraints.
You may consider measuring defect count data and customer service escalations: see where you are currently having issues and measure the improvement with each release. Better testing coverage means fewer customer issues and complaint escalations.
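To see whether coverage improvements are paying off, track defects (or escalations) per release and compare release over release. A minimal sketch, with illustrative version numbers and counts:

```python
# Defect counts per release (illustrative data).
releases = {"1.0": 42, "1.1": 35, "1.2": 21}

def release_trend(counts):
    """Percent change in defect count between consecutive releases."""
    versions = list(counts)
    return {
        f"{a} -> {b}": round(100 * (counts[b] - counts[a]) / counts[a], 1)
        for a, b in zip(versions, versions[1:])
    }

print(release_trend(releases))
# {'1.0 -> 1.1': -16.7, '1.1 -> 1.2': -40.0}
```

A steadily negative trend after the outsourced testing begins is the kind of concrete evidence that justifies the investment to the leadership team.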
In the end, the more testing executed, the better the application functions and performs, and the more you can expand your customer base.