As testers we don't set hard boundaries for testing; that is why organisations push for test automation, CI/CD and so on. The expectation is close to 100% test or feature coverage for each new build.
But if you are in an organisation that doesn't have test automation, tight test-completion deadlines can feel challenging. In such cases, a few of the things I follow are:
- Sanity testing
This decides the entry criteria. Here we check that the basics work: you navigate to different pages and confirm that things look testable and functional.
- Smoke testing
Test the most critical features.
- Using critical thinking and common sense
If you are really time-limited, use critical thinking to choose the components that would be affected. For instance, if a header element is updated, then ideally you should check every page that uses it. But when time is short, prioritise the pages that have many components and could develop alignment issues if any component changes size.
- Never stop testing
The final rule is to test more and keep testing, even after you approve the candidate. Never stop testing.
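The impact analysis described above (a shared header changes, so which pages do you check first?) can be sketched in a few lines. The page and component names here are hypothetical, purely for illustration:

```python
# Hypothetical map of which shared components each page uses.
PAGE_COMPONENTS = {
    "home":     ["header", "hero", "footer"],
    "search":   ["header", "filters", "results-grid", "footer"],
    "checkout": ["header", "cart-summary", "payment-form", "footer"],
}

def affected_pages(changed_component, page_components):
    """Return the pages that use the changed component, most complex first.

    Pages with many components are listed first because, as noted above,
    they are the most likely to develop alignment issues.
    """
    hits = [page for page, comps in page_components.items()
            if changed_component in comps]
    return sorted(hits, key=lambda page: len(page_components[page]),
                  reverse=True)

print(affected_pages("header", PAGE_COMPONENTS))
```

With this map, a header change points you at `search` and `checkout` (four components each) before `home`, which is exactly the time-limited prioritisation the bullet describes.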
I believe it is important to look at test coverage as something that
Needs to be adjusted as features grow and change (proactive)
as opposed to
Ensuring that things that happened in the past are prevented (or at least detected) by adding tests for them (reactive)
The problem with the second approach (particularly common in traditional waterfall and command & control environments) is that it leads to an ever-growing, ever-slower set of tests that, over time, deliver less value as more issues arise in using, managing and maintaining them - work done by costly humans. The number of features and bugs times the number of devices and versions is always more than can be tested anyway, so all testing already requires some form of scoping down. At worst you end up using KPIs for number of bugs and number of tests to measure quality effectiveness. 7000 UI tests should not be a point of pride in most organizations, but I have seen them be in some.
Also, when discovering and fixing a bug, or indeed making any change in functionality, the order of questions should be:
- Will unit tests be enough to cover this new condition?
- If not, will integrated tests cover it?
- If not, will E2E non-UI tests cover it?
- If not, will an automated UI test cover it?
- If not, will manual testing be practical to cover it?
For example, if the change is in a back-end routine that prepares data seen in the front end, you want to avoid testing back-end code changes through front-end tests whenever possible. Instead you use unit tests to test the differences. Often the hard part here is communicating this to the business. That takes care, tact and excellent communication skills, such as listening first.
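As a minimal sketch of that principle, here is a hypothetical back-end routine that prepares display data for the front end, pinned down by plain unit tests so no browser or UI test run is needed (the function and its behaviour are invented for illustration):

```python
def format_price(amount_cents, currency="USD"):
    """Back-end routine: prepare a price string shown in the front end."""
    symbol = {"USD": "$", "EUR": "€"}.get(currency, currency + " ")
    return f"{symbol}{amount_cents / 100:.2f}"

# Unit tests cover the new condition directly at the source of the
# change - the cheapest rung of the ladder above.
def test_format_price():
    assert format_price(1999) == "$19.99"
    assert format_price(500, "EUR") == "€5.00"
    assert format_price(100, "GBP") == "GBP 1.00"  # unknown currency fallback

test_format_price()
print("all unit tests passed")
```

If a bug is found in how such a routine handles, say, an unknown currency, the fix gets a new assertion here rather than a new UI scenario.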
Software testing companies typically combine test automation and manual functional testing to handle these kinds of situations and get as close to full feature coverage as possible.
However, if we don't have automated scripts, then in those cases we can perform smoke, sanity and regression testing:
Smoke testing: In this, we decide whether the QA build is stable and all critical areas of the application are working as expected.
Sanity testing: In this testing, the QA team checks the basic functionality and workflow of the application and verifies that it works as expected.
Moreover, test all the components that you think are impacted by the changes done by the developer. Execute all the test cases of those areas if time permits.
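"Execute all the test cases of those areas if time permits" can be sketched as a simple selection: pick the cases tagged with the affected areas, most critical first, until the time budget runs out. The case names, areas and estimates below are hypothetical:

```python
TEST_CASES = [
    # (name, area, priority: 1 = most critical, estimated minutes)
    ("login happy path",    "auth",     1, 5),
    ("password reset",      "auth",     2, 10),
    ("cart totals",         "checkout", 1, 8),
    ("coupon edge cases",   "checkout", 3, 20),
    ("profile page layout", "profile",  2, 6),
]

def plan_session(affected_areas, budget_minutes, cases):
    """Return the cases to run within the budget, critical ones first."""
    relevant = [c for c in cases if c[1] in affected_areas]
    relevant.sort(key=lambda c: c[2])  # priority 1 first; sort is stable
    plan, used = [], 0
    for name, area, priority, minutes in relevant:
        if used + minutes <= budget_minutes:
            plan.append(name)
            used += minutes
    return plan

print(plan_session({"auth", "checkout"}, budget_minutes=25, cases=TEST_CASES))
```

With a 25-minute budget and changes touching auth and checkout, both priority-1 cases fit, the priority-2 case still fits, and the 20-minute edge-case run is deferred, mirroring the "if time permits" rule above.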
Do not stop testing the build until the code goes to the live environment, so that you find the issues introduced by pushing refactored code into the system.
The dev team can point out the areas where code was refactored by providing a root cause analysis; the QA team can handle the rest by performing testing around those areas.