“Quality is not an act, it is a habit.” — Aristotle.
And the first step to developing this habit is to set the quality standards. If SMART (Specific, Measurable, Achievable, Relevant, Time-based) quality gates are defined in the test strategy from the beginning, the best outcome will be achieved at the end. Test orchestration provides a centralized platform for managing the entire testing process. It helps ensure that different types of tests (e.g., unit, integration, functional, performance) are executed in the correct order and according to a predefined schedule. Here is a one-slide strategy that will help you organize the process of coordinating and managing the various testing activities within a software development cycle:
Quality gates are a matter of mutual agreement within the team and help to ensure high-quality software. These checkpoints must be part of the deployment pipeline, triggered automatically every time, and must be met before a software project can proceed to the next phase of development or deployment. This can be achieved by integrating with version control systems, allowing teams to trigger tests automatically whenever code changes are pushed to a repository (local -> feature -> develop -> master -> release). This helps maintain consistency between code changes and test execution.
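As an illustration, the branch-to-gate mapping can be codified so the pipeline knows which suites to trigger on each push. The branch and suite names below are assumptions based on the flow described above, not a fixed standard:

```python
# Hypothetical mapping of git branches to the test suites that gate a merge,
# following the local -> feature -> develop -> master -> release flow.
# Adjust branch and suite names to your own pipeline.
BRANCH_GATES = {
    "local":   ["unit"],
    "feature": ["unit", "component"],
    "develop": ["unit", "component", "integration"],
    "master":  ["unit", "component", "integration", "e2e", "performance"],
    "release": ["smoke"],
}

def gates_for(branch: str) -> list[str]:
    """Return the ordered list of test suites that must pass for this branch."""
    try:
        return BRANCH_GATES[branch]
    except KeyError:
        raise ValueError(f"Unknown branch: {branch}")
```

A CI trigger can then look up `gates_for(pushed_branch)` and schedule exactly those suites, keeping the agreement between team and pipeline in one place.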
Quality gates that can be considered as part of the strategy:
- Unit level checks
Quality Gate: the minimum unit test coverage threshold is achieved, code analysis passes, technical debt is within acceptable limits, automated security scans and vulnerability checks pass, etc.
Purpose: ensures that critical parts of the code are developed per best practices (e.g. using a TDD approach) and adequately tested, reducing the risk of defects. The most popular tools are SonarQube, Codacy, and Semmle. Unit level checks must be executed at every stage, starting with the local code run by developers. An example of unit level quality gates:
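As a minimal sketch, a unit-level gate can be expressed as a small check that the build script runs after tests and scans complete. The threshold values here are illustrative assumptions, not recommendations:

```python
# Minimal sketch of a unit-level quality gate: fail the build when coverage,
# static-analysis results, or security findings miss the agreed thresholds.
from dataclasses import dataclass

@dataclass
class UnitGate:
    min_coverage: float = 80.0    # minimum line coverage, percent (illustrative)
    max_critical_issues: int = 0  # e.g. blocker issues from static analysis
    max_vulnerabilities: int = 0  # findings from the security scan

    def evaluate(self, coverage: float, critical_issues: int,
                 vulnerabilities: int) -> list[str]:
        """Return a list of gate violations; an empty list means the gate passed."""
        failures = []
        if coverage < self.min_coverage:
            failures.append(f"coverage {coverage:.1f}% < {self.min_coverage}%")
        if critical_issues > self.max_critical_issues:
            failures.append(f"{critical_issues} critical issues found")
        if vulnerabilities > self.max_vulnerabilities:
            failures.append(f"{vulnerabilities} vulnerabilities found")
        return failures
```

In practice the inputs would come from the coverage report and the analysis tool's API (e.g. SonarQube exposes its own built-in quality gate), but the pass/fail contract is the same.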
- Code review approval
Quality Gate: code must pass a thorough peer code review and receive approval from senior developers before being merged into the main codebase.
Purpose: ensures that code is well-written, follows coding standards, and meets architectural guidelines.
At the moment local code is pushed to the remote “feature” branch, the pull request review becomes a valuable part of the process: it allows the team to stay up to date with all changes made across the codebase. Usually the feature branch is deployed to the Dev environment so that developers can continue improving the code based on testing results before it is merged into the “develop” branch.
- X as Code
Quality Gate: software development, configuration, infrastructure, compliance and operations are represented as code.
Purpose: helps to establish standards that are applied consistently and to scale as application or environments grow.
Code quality is not the only thing to pay attention to, especially when we talk about cloud native applications. Infrastructure testing becomes equally important when code is moved from the local to a remote environment. The “X as Code” approach will help to keep your strategy neat:
- Requirements as code: described in a human and machine readable language. This can be codified as part of BDD or ATDD.
- Pipeline as code: the release workflow (i.e. build, test, deploy, etc.) codified as a script, enabling complex workflows and reusability.
- Infrastructure as code: manage and provision servers, databases (schemas, tables, triggers, keys, etc.), networks (e.g. DNS settings), and their dependencies through machine readable definition files. This approach allows for consistent and repeatable infrastructure setups.
- Config as code: goes hand in hand with infrastructure as code; software configurations for applications, services, and systems are coded in scripts. It ensures that configurations are versioned, documented, and easily reproducible.
- Security as code: focuses on embedding security practices and controls directly into the development process through code and automation. It includes vulnerability scanning, code analysis, and security testing.
- Compliance as code: refers to using code to define and enforce compliance rules and policies (e.g. data privacy regulations such as GDPR or HIPAA, data access, data center location, license compliance, audit records, etc.). It ensures that software and systems adhere to industry regulations and organizational standards.
An environment health check can be initiated to ensure that all configurations and settings are up to date and all required components of the environment are provisioned. A good approach: after code is refactored and configurations are promoted, run a selective regression suite to verify code quality and compatibility with the existing code. At this stage it also makes sense to start running component tests.
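A minimal health-check sketch, assuming each required component exposes some probe (an HTTP ping, a DB connection, a config-version lookup); the probe callables here are stand-ins for real checks:

```python
# Sketch of an environment health check: run a probe for every required
# component and report per-component health before promoting the build.
from typing import Callable

def health_check(components: dict[str, Callable[[], bool]]) -> dict[str, bool]:
    """Run every probe without letting one failure abort the rest."""
    results = {}
    for name, probe in components.items():
        try:
            results[name] = bool(probe())
        except Exception:
            results[name] = False  # a raising probe counts as unhealthy
    return results

def environment_ready(results: dict[str, bool]) -> bool:
    """The environment is ready only when every component is healthy."""
    return all(results.values())
```

Running this as a pipeline step before the regression suite means test failures caused by a missing component are caught as environment problems, not misreported as code defects.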
- Component Testing Success
Quality Gate: all component tests must pass successfully before the code is considered for deployment.
Purpose: verifies that the software components work together as expected and that integrations are stable. Before code is merged into the “develop” branch, it is required to prove that the newly introduced changes are compatible with the working build.
Now, after enough testing is done on the “feature” branch and all quality gates defined so far have passed, the code is ready to be merged into the “develop” branch. At this stage the most intensive testing happens, as we are getting close to the production build, and system integration tests are a must-have on the quality gate list. Even end to end and performance tests can be executed at this stage, if your QA environment has all the integration points to perform E2E flows and its configuration is production-like enough to emulate the required loads. The diagram below shows the must-haves, but testing is not limited to the mentioned types; the list can always be extended with other kinds of tests if the environment allows. The earlier testing is done, the lower the cost of addressing the defects.
- System integration test passed
Quality Gate: functional requirements specified in the user stories or requirements document must be met and verified through testing.
Purpose: ensures that the software delivers the intended functionality to users.
When the develop branch is thoroughly tested, the code is ready to be merged into the “master” branch. This round of pre-production testing must include all possible tests, especially E2E and performance, as the stage environment is usually the most integrated and the closest to production. It is recommended to run a complete regression after the build to make sure all areas are covered and to eliminate misbehavior in the existing code.
- E2E and Performance Testing Acceptance
Quality Gate: E2E flows work as expected and deliver the business needs. Performance tests must meet predefined benchmarks (e.g., response time, throughput) before the application can proceed to production.
Purpose: validates that the software performs well under expected load and user activity.
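One way to codify the performance part of this gate is to compare measured response times against a predefined percentile budget. The sketch below uses the simple nearest-rank percentile method; the budget value is an assumption to be agreed with stakeholders:

```python
# Sketch of a performance quality gate: pass only when the 95th-percentile
# response time stays within the agreed budget.
def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a non-empty sample list."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def perf_gate(response_times_ms: list[float], p95_budget_ms: float) -> bool:
    """True when the measured p95 latency meets the predefined benchmark."""
    return percentile(response_times_ms, 95) <= p95_budget_ms
```

The same shape extends to throughput or error-rate benchmarks: each metric gets a budget, and the gate passes only when every budget is met.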
- Regression Testing Success
Quality Gate: regression tests must pass, indicating that new changes did not introduce unintended side effects.
Purpose: ensures that new code changes do not negatively impact existing functionality.
Regression must be executed at every stage. The best approach is to get the regression suite automated and run via CI/CD. The most common approach is to run it overnight to avoid overlap with other testing activities in the same environment. In the morning, test results can be analyzed and defects raised if any are found. It is also important to plan regression suites of different sizes and purposes, to minimize duplicate work while still finding regression issues as early as possible. For example, in the Dev environment it would be sufficient to run selective regression tests; in the QA environment we have to take care of a progressive regression suite; and whenever the application is ready for the stage version, it makes sense to execute a complete regression. Of course, this is just one example of a regression distribution strategy; you can plan your regression differently based on project needs. To read more about regression and the different types of approaches, please refer to Regression strategy in one go.
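The regression distribution described above can be codified so the pipeline picks the right suite per environment automatically. The environment and suite names mirror the example and are assumptions, not a prescription:

```python
# One possible codification of a regression distribution strategy:
# selective on Dev, progressive on QA, complete on Stage.
REGRESSION_PLAN = {
    "dev":   "selective",    # critical-priority areas only
    "qa":    "progressive",  # selective scope plus areas adjacent to recent changes
    "stage": "complete",     # full regression, e.g. run nightly
}

def pick_regression_suite(environment: str) -> str:
    """Return the suite for this environment, defaulting to the cheapest one."""
    return REGRESSION_PLAN.get(environment, "selective")
```

Keeping the plan in one mapping makes the strategy reviewable in a pull request like any other code, in the spirit of the "X as Code" section above.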
If all quality gates have passed and the test results are accepted by stakeholders, the code version can be tagged on the “release” branch and deployed to production on demand or on a schedule. It is important to include full release notes so it will be easier to track the changes made over time.
Of course this model can be customized to suit the project’s needs. Here’s how the Trunk-based development approach could look:
Local code version:
- Make small, frequent commits that undergo unit level checks.
- Once the code is prepared for integration into the short-lived branch, initiate selective regression testing. This suite might be minimal but should encompass critical priority areas. Utilize a runtime environment with limited resources to cover the test scope effectively.
- Assess short-lived branches against the quality gates shown in the diagram (see attached). Make sure to adhere to the pull request process as a mandatory step.
- Perform progressive regression testing as a post-deployment measure after the short-lived branch has been deployed.
- For the main branch, subject it to comprehensive testing against all designated quality gates. Upon success, the code can be tagged as a “feature.”
- A complete regression test can be scheduled, e.g. on a nightly basis.
A few additional points closely relate to the test orchestration strategy.
- Continuous Integration and Continuous Delivery (CI/CD)
Test orchestration is a fundamental component of CI/CD pipelines. Automated tests are integrated into the CI/CD process, ensuring that every code change is thoroughly tested before being deployed to production, which helps to catch bugs early and deliver more reliable software releases:
Regularly assess your development processes, tools, and workflows to identify areas for improvement. Actively seek feedback from your team and stakeholders.
Experiment with new tools and techniques that can enhance software quality and productivity.
Encourage innovative thinking within the team.
- Adaptability and Maintenance
Test orchestration should be adaptable to changing requirements, new testing tools, and evolving project needs. It should allow teams to easily modify test workflows, add new tests, and integrate with emerging technologies.
As projects grow and more tests are added, manual test management becomes challenging. Test orchestration provides scalability by managing a large number of tests, environments, and configurations.
- Parallel and Distributed Execution
Test orchestration enables tests to be executed in parallel across multiple environments or devices. This speeds up testing and helps identify issues that might arise under specific conditions.
Distributing tests across multiple machines or containers allows for efficient resource utilization and faster test cycles.
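A minimal sketch of parallel execution using the Python standard library; a real orchestrator would add retries, sharding, and routing to specific devices or containers on top of this:

```python
# Sketch of parallel test execution: each test callable runs in its own
# worker thread, and results are gathered once all workers finish.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def run_in_parallel(tests: dict[str, Callable[[], None]],
                    max_workers: int = 4) -> dict[str, bool]:
    """Run all tests concurrently; a test passes if it returns without raising."""
    def run_one(test: Callable[[], None]) -> bool:
        try:
            test()
            return True
        except Exception:
            return False

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {name: pool.submit(run_one, t) for name, t in tests.items()}
        return {name: future.result() for name, future in futures.items()}
```

For CPU-bound or environment-isolated suites, the same shape works with processes or containers instead of threads; the orchestration contract (submit all, collect pass/fail) stays the same.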
- Monitoring and Reporting
Test orchestration tools provide monitoring capabilities to track the progress of test execution. They generate detailed reports and dashboards that show test results, pass/fail status, and trends over time.
Reporting helps stakeholders make informed decisions about the quality of the software and whether it meets the required criteria for release.
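The reporting step can be as simple as aggregating raw results into the pass/fail summary that a dashboard or a release decision would consume; this is a generic sketch, not any specific tool's report format:

```python
# Sketch of test-result aggregation: turn raw per-test outcomes into the
# summary numbers stakeholders see on a dashboard.
def summarize(results: dict[str, bool]) -> dict[str, object]:
    """Aggregate per-test pass/fail results into a report-friendly summary."""
    total = len(results)
    passed = sum(results.values())
    return {
        "total": total,
        "passed": passed,
        "failed": total - passed,
        "pass_rate": round(100 * passed / total, 1) if total else 0.0,
        "failed_tests": sorted(name for name, ok in results.items() if not ok),
    }
```

Storing such summaries per build is what makes the trends over time mentioned above possible: comparing pass rates across builds shows whether quality is improving or drifting.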
In summary, test orchestration plays a critical role in managing and streamlining the testing process for software development. It promotes efficiency, automation, and quality assurance by coordinating the execution of tests, managing test environments, and providing insights into the software’s health.