
Quality Assurance & Testing Overview

First published by Atif Alam

Quality assurance (QA) and testing here mean how you design and run verification before and alongside production: unit through end-to-end testing, pipeline quality gates, and the metrics that tell you whether that investment is paying off.

This section is about intent and practice for product and pipeline quality—not a duplicate of CI/CD configuration tables (those live on CI/CD for applications).

  • QA and Testing Strategies — Layers of testing (unit through exploratory), shift-left ownership, environments, security and other quality dimensions, and pointers to specialized testing elsewhere in the library.
  • Measuring QA and Testing Success — Outcome and process metrics, delivery signals, production quality gates (error budgets and SLOs), and how to avoid gaming the numbers.
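As a concrete illustration of the delivery signals the measuring-success page covers, here is a minimal sketch of computing change failure rate from deployment history. The record shape and field names are hypothetical, not an API from this library:

```python
# Hypothetical sketch: change failure rate as a DORA-style delivery signal.
# Each deployment record notes whether it required remediation
# (rollback, hotfix, or incident). Field names are illustrative.

def change_failure_rate(deployments):
    """Fraction of deployments that caused a failure needing remediation."""
    if not deployments:
        return 0.0
    failed = sum(1 for d in deployments if d["caused_failure"])
    return failed / len(deployments)

history = [
    {"id": "rel-101", "caused_failure": False},
    {"id": "rel-102", "caused_failure": True},   # rolled back after deploy
    {"id": "rel-103", "caused_failure": False},
    {"id": "rel-104", "caused_failure": False},
]
print(f"{change_failure_rate(history):.0%}")  # → 25%
```

A metric like this is easy to game (for example, by batching changes), which is exactly the failure mode the measuring-success page warns about: pair it with outcome metrics rather than optimizing it alone.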

How This Differs From Reliability Testing (Chaos)


The Chaos engineering section uses reliability testing to mean chaos experiments, game-days, and synthetic checks in production—validating that the running system handles failures and traffic the way you expect.

This quality assurance section focuses on building and shipping: tests you run in development and CI, manual exploration, and metrics that describe whether your testing and QA process is effective.

Both matter. Pipeline tests reduce the chance of bad code reaching prod; chaos and synthetics validate behavior once it is live. For ongoing production validation with artificial traffic, see Synthetic testing and load replay.

  • CI/CD for applications — Pipeline stages and quality gates (tests, coverage, scans, approvals) are where much of your automated strategy runs.
  • Load and stress testing — Pre-production performance and capacity validation complements functional testing.
  • Synthetic testing and load replay — Production-side checks and replay; not a substitute for shift-left tests, but essential for regression detection in real environments.
  • Reliability metrics — Delivery and incident signals (for example change failure rate) overlap with “is our quality practice working?”
  • Error budgets — Production error budgets and SLOs are the runtime complement to pre-ship testing: they tell you whether users are still experiencing acceptable quality after release.
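To make the error-budget link concrete, here is a minimal sketch of deriving a budget from an availability SLO and checking how much of it recent releases have spent. The SLO value and request counts are illustrative assumptions, not figures from this library:

```python
# Hypothetical sketch: error budget derived from an availability SLO.
# A 99.9% SLO over N requests allows 0.1% of them to fail before
# users are no longer experiencing "acceptable quality".

def error_budget(slo: float, total_requests: int) -> float:
    """Allowed failed requests for the period under the given SLO."""
    return (1.0 - slo) * total_requests

def budget_remaining(slo: float, total_requests: int,
                     failed_requests: int) -> float:
    """Budget left after observed failures; negative means it is blown."""
    return error_budget(slo, total_requests) - failed_requests

# 99.9% SLO over 1,000,000 requests gives a budget of ~1,000 failures.
print(round(error_budget(0.999, 1_000_000)))          # 1000
print(round(budget_remaining(0.999, 1_000_000, 400))) # 600 left
```

A team might use the remaining budget as a release gate: plenty left, ship normally; nearly spent, slow down and invest in testing. That policy choice is the "runtime complement to pre-ship testing" described above.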

Observability makes both pipeline feedback and production signals meaningful: without SLIs, you cannot tie test gaps to user impact as clearly. That relationship is developed further on the measuring success page.