Creating a test strategy is a common task in software delivery projects, and the question of where to begin can often be a challenge. There are many open source resources with information about test strategy, and yet there is no one-size-fits-all solution for how to approach it to meet your project needs. This is because every project has its own unique requirements, selected tech stack, business goals, and list of constraints. All these factors directly impact how the test strategy is defined. And if success is the ultimate goal, then it is essential to tailor the test strategy to the specific project context.
Here are two hints on how to start: ask the 4 Ws and apply the Four-corner framework.
Let’s start with the 4 Ws: Why? What? Where? hoW?
Why? — Why is the testing necessary? Goals and objectives should be specific, measurable, achievable, relevant, and time-bound (SMART). Examples: 100% of in-scope business requirements are covered by test cases before pre-production deployment; test scripts run in CI/CD as part of stage deployment; the release cycle is shortened by introducing test automation and parallel test execution (test execution time is decreased by 50%); the system’s performance metrics under expected and peak load conditions are captured and discussed with stakeholders for further tuning of application settings; etc.
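As an illustration, a SMART goal like “test execution time is decreased by 50%” is easy to verify automatically. The sketch below (the function name and the example numbers are hypothetical) shows how such a check could be scripted and attached to a CI/CD metrics step:

```python
# Hypothetical check for a SMART test-automation goal:
# "test execution time is decreased by 50%".
def meets_reduction_target(baseline_minutes: float,
                           current_minutes: float,
                           target_reduction: float = 0.50) -> bool:
    """Return True when the measured reduction meets or beats the target."""
    if baseline_minutes <= 0:
        raise ValueError("baseline must be positive")
    reduction = (baseline_minutes - current_minutes) / baseline_minutes
    return reduction >= target_reduction
```

A CI job could call this with the pipeline’s recorded baseline and current suite durations and fail the metrics gate when the target is not met.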
What? — What needs to be done? Requirements and the scope of the testing should be clearly defined and aligned with the project objectives. In-scope and out-of-scope features, the critical business flows that will be available, and non-functional requirements should be listed and reviewed with stakeholders. It is also very important to describe, at a high level, the processes the team will be performing: requirement analysis; test scenario identification; test case design and development with defined priorities; test data management (synthetic or production-like); test case execution and results reporting (test metrics to be captured, a test summary report template, the cadence of metrics review, definitions of defect priority and severity so that everyone is on the same page about how fast a fix needs to be delivered, etc.); defect triaging and bug fix retesting; regression execution; retrospectives on future improvements; and so on. A well-defined “What” should be an input for future estimation of the testing effort.
Where? — Where will the testing be performed? Requirements around infrastructure, tools, resources required for testing, and the environment in which testing will be done can drive certain decisions on how testing will be organized. This will help to establish an effective and efficient testing process. Here are some key considerations to include in the test bed description:
- Test Environment — specify the environment in which the testing will be conducted. This includes the hardware, software, and network configurations necessary for testing (machines, OS, browsers, DB etc.). If testing will be conducted in an environment that closely resembles the production environment, mention the similarities and differences between the testing and production setups;
- Test Servers — if testing involves server-based applications or systems, specify the location of the test servers (on-premises, cloud-based, or hosted in a specific data center);
- Remote Testing — if testing is distributed across multiple locations, mention the geographical locations involved. Describe how remote testing will be coordinated, including communication channels, collaboration tools, and any infrastructure or network considerations required for remote access;
- Virtualized Environments — if virtualization technologies such as virtual machines or containers are utilized for testing, describe the virtualization platform and the setup details.
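To make the test bed description concrete and reviewable, some teams keep it as structured data alongside the code. A minimal sketch in Python; every field name and value here is an illustrative assumption, not a prescribed schema:

```python
from dataclasses import dataclass


@dataclass
class TestBed:
    """Minimal sketch of a test-bed description (fields are illustrative)."""
    environment: str        # e.g. "staging", "prod-like"
    os_versions: list       # operating systems under test
    browsers: list          # browsers under test
    server_location: str    # "on-premises", "cloud", or a data-center name
    virtualized: bool = False
    notes: str = ""

    def summary(self) -> str:
        """One-line summary suitable for a test strategy document."""
        return (f"{self.environment}: {len(self.os_versions)} OS x "
                f"{len(self.browsers)} browsers @ {self.server_location}")
```

Keeping the test bed as data lets the team diff it in code review whenever the environment changes.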
hoW? — How will the application be tested? Testing techniques and testing types should cover all aspects of the application: functional, non-functional, and incomplete requirements. Yes, you read that right — your testing strategy should not only focus on known things but also assume that there could be incomplete requirements; uncovering them as early as possible is a big benefit from a cost perspective and for end-user satisfaction with the delivered application. And yet more challenging questions: how to approach the selection of all the necessary testing activities? What will create confidence in quality? How to introduce new testing phases? The existence of numerous types and methods presents a plethora of options, and choosing the right one at the right time for your strategy directly influences the quality of the application you will ultimately produce. This is where the Four-corner framework will help you.
The main idea of this framework is to get the right tests executed at the right project phase. The testing types are split into four quadrants and grouped by two main objectives:
Objective #1: Test — ensure whether the product is built right:
- right bottom quadrant (TT — Technology Test) — types that test the selected technology: tech stack, environment, architecture, points of integration, defined contracts, etc.;
- right top quadrant (BT — Business Test) — types that test the required business needs: requirements, functional and data flows, partner integration, co-existence with legacy systems, regression areas, etc.
Objective #2: Exam — ensure whether the right product is built:
- left bottom quadrant (TE — Technology Exam) — types that examine the selected technology: security, performance, how resilient the application is under stress conditions, compatibility with OSes, browsers, screen resolutions, devices, etc.;
- left top quadrant (BE — Business Exam) — types that examine the required business needs: use case coverage, usability from the user perspective, domain context, current market needs, etc.
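The quadrant grouping above can be encoded as a small lookup table, which makes it easy to tag test suites by quadrant and objective. The mapping below is a sketch; the example testing types per quadrant are drawn from the descriptions above:

```python
# Illustrative encoding of the Four-corner framework quadrants.
# Each quadrant pairs a facing (Technology/Business) with a mode (Test/Exam).
QUADRANTS = {
    "TT": {"axis": ("Technology", "Test"),
           "examples": ["unit", "contract", "component", "integration"]},
    "BT": {"axis": ("Business", "Test"),
           "examples": ["functional", "end-to-end", "regression"]},
    "TE": {"axis": ("Technology", "Exam"),
           "examples": ["performance", "security", "compatibility"]},
    "BE": {"axis": ("Business", "Exam"),
           "examples": ["user acceptance", "usability", "exploratory"]},
}


def objective(quadrant: str) -> str:
    """'Test' quadrants check the product is built right;
    'Exam' quadrants check the right product is built."""
    facing, mode = QUADRANTS[quadrant]["axis"]
    return ("product is built right" if mode == "Test"
            else "right product is built")
```

Tagging each suite with its quadrant key makes it straightforward to report, per project phase, which objective the current testing effort serves.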
So when building your testing strategy, align your objectives, starting with the “Why”, to the quadrants that are relevant during the current phase of your project, because introducing a new type of testing at the right time is very important. Introducing it too early or too late in the project can have certain consequences:
- Introducing a new type of testing too early, when the application is not mature enough, may lead to incorrect test results and give a wrong impression of the quality level. This means that if a new type of testing is introduced before the application has reached a sufficient level of stability or functionality, the testing results may not accurately reflect the actual quality of the software. Such results can mislead and may give a false sense of confidence in the software’s readiness.
- Introducing a new type of testing too late in the project can lead to increased defect-fixing costs and impact delivery dates, potentially harming the company’s image. When a new type of testing is introduced late in the project, it may uncover a higher number of defects or issues that were not previously detected. This can result in a longer defect-fixing phase, delaying the project’s delivery and potentially damaging the company’s reputation if quality issues arise after the product has been released.
In both cases, it is crucial to carefully consider the timing and readiness of introducing new types of testing. So, just as your application continues to grow incrementally, it is necessary to gradually expand (I call it increment) the types of testing to be performed and align them with the maturity of the application, ensuring that the necessary foundation is in place for accurate testing results and minimizing the impact on project timelines and the company’s image. Such increments of testing types will be directed by the way the application is implemented and by when stakeholders are available to provide feedback.
At the onset of a project, the majority of the testing efforts will be directed towards technology-facing testing (TT), aimed at establishing a solid foundation for addressing the business requirements. At this stage, most probably, no complete features have been developed, but small functional subsets are available. It’s important to note that addressing defects during the early coding phase is significantly less costly. Therefore, test as early as possible. This means you should consider TT (Technology Test) quadrant types of testing such as unit, contract, and component testing; apply various tools to perform static code scans (syntax, logic, vulnerabilities); establish environment health checks and smoke tests (these should be kept up to date with every new increment of the system functionality); make sure to test integration as soon as possible; and start with mocks when applicable.
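An environment health check that gates the rest of the suite can be as simple as a map of named probes. A minimal sketch, assuming each probe is a zero-argument callable returning True or False; real probes would ping a URL, a database connection, a message queue, and so on:

```python
# Hedged sketch of an environment health check that gates further testing.
# `checks` maps a component name to a zero-argument probe; the probes shown
# in any real setup (HTTP ping, DB connect, queue depth) are up to the team.
def environment_healthy(checks: dict) -> tuple:
    """Run all probes; return (all_passed, list_of_failed_component_names)."""
    failures = [name for name, probe in checks.items() if not probe()]
    return (len(failures) == 0, failures)
```

Running this as the first step of every test cycle means a misconfigured environment fails fast with a named component, instead of producing a wall of misleading functional failures.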
The next testing “increment” will be defined based on the project needs and context. Here are a couple of possible ways to increment:
B (Business) to E (Exam) increment — maybe the most popular way to approach testing. As new features are developed, the testing goals gradually shift from “Test” towards critiquing the business aspects of the application while still supporting the technology side of it. After performing functional and end-to-end testing, ensuring partners are aligned with changes and that you can successfully integrate in real time, and as the stability of the application grows, business validations start and, in parallel, performance and non-functional metrics are captured. This allows you to shorten the test cycle by using the same timeline to get end-user feedback earlier.
T (Technology) to B (Business) increment — this approach assumes that after the MVP (minimum viable product) is available and basic technology validations are done, testing happens in parallel across functional (BT) and non-functional (TE) types. Only when both sets of test results are captured, the system settings are tuned based on the recommendation report, and a code freeze is done, will business users perform the final round of validations on the final code base. This approach is highly recommended for systems where non-functional requirements are a priority (e.g., low latency, high-volume transactions, resilience, fast recovery, etc.). It means examining and critically inspecting the application’s performance, load, stress, and volume metrics, examining it for any security vulnerabilities, and examining how compatible the application is with different browsers, OSes, monitors, devices, etc. before business users start validations. As a result, the tests will produce a list of potential improvements to tune the application. It needs to be taken into account that technology is continually advancing, introducing new kinds of software architectures and enabling the implementation of innovative solutions. Obviously, non-functional requirements can be tested at the early stages of software development, and this is highly advisable; however, a holistic view of the overall application is only achievable when the end-to-end solution is constructed and metrics can be taken from the start to the end of a business flow. Business users can’t judge the quality if non-functional requirements are not fulfilled and thoroughly tested.
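Capturing the performance metrics mentioned above does not have to wait for a dedicated tool. A rough sketch of collecting min/p95/max latency for an end-to-end operation; the function name, run count, and percentile choice are illustrative:

```python
import time


def measure_latencies(operation, runs: int = 50) -> dict:
    """Capture simple latency metrics (in seconds) for a repeated call.

    `operation` is any zero-argument callable representing one end-to-end
    business flow; real harnesses would add warm-up runs and concurrency.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "min": samples[0],
        "p95": samples[int(0.95 * (len(samples) - 1))],
        "max": samples[-1],
    }
```

Even a crude harness like this gives the stakeholder discussion concrete numbers to compare against expected and peak-load targets, long before a full performance-testing phase.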
T (Test) to E (Exam) increment — this approach may take place when the system has good enough functional flows developed and non-functional requirements are not yet defined or not a priority. In this case, when basic technology validations are done (the TT phase is completed), the next testing increment shifts to Business Test. The application goes through all the “Test” types and only then starts the “Exam” types.
Whichever way to increment you choose, the final sign-off will come from a selective business user group (beta testing). That’s why every example of a testing increment shown above says “Finish testing here”, pointing to BE (Business Exam) and emphasizing that business validation is very important in order to get feedback before your system hits a large audience. Perfect examples of applicable testing types are user acceptance, usability and accessibility, and exploratory testing. All this is really about answering the question: is the right product built? There is always a possibility of deviation between the initial business goals and the market needs when the project is about to go live. It could be a result of:
- the project took much more time than expected and the business goals have changed;
- customer preferences, industry trends, and market demands can change over the course of a project’s development. As a result, the initial business request may no longer align with the current market needs by the time the project is ready for launch;
- between start and end of the project new more competitive solution become available to the same targeted users, this can drive businesses to adapt and adjust their strategies to stay competitive;
- stakeholder feedback and insights that prompt adjustments to the project’s direction;
- simple human error, when the requirements given at the beginning were interpreted incorrectly.
It is crucial for project teams to remain adaptable and responsive to these potential deviations, engaging in ongoing communication with stakeholders and conducting market research to align the final product with the current market needs.
To summarize, developing a robust testing strategy is equally challenging and crucial. However, with the proper approach, it can be accomplished effectively. By addressing the factors below, a well-defined testing strategy can be formulated to validate the application's functionality, meet business goals, and align with the current market demands:
- always have up-to-date answers to Why? What? Where? hoW?; don’t hesitate to revise your answers and strategy along the way of project development: the more information you learn about the system, the more precise your answers become;
- the ultimate goal — go beyond expectations in quality, think outside of the box and leave room for innovations;
- apply various testing types based on project current needs, if something becomes outdated in your strategy — call it out;
- engage business and vendors/partners as early as possible incorporating feedback loops, accommodate requested changes as soon as possible.