Verification & Validation (V&V) – Fit‑for‑Use Assurance

This topic is part of the SG Systems Global regulatory & operations glossary.

Updated October 2025 • Validation Lifecycle & Evidence • QA, Engineering, Manufacturing, IT

Verification & Validation (V&V) is the structured, risk‑based discipline that proves a system, process, method, or product is built right (verification: specs met) and is the right thing for its intended use (validation: user needs met). In regulated operations this spans the full lifecycle—capturing URS, planning in a VMP, qualifying equipment (IQ/OQ/PQ), validating processes (PV/PPQ), methods (TMV), and software (CSV), with ongoing control via CPV. The outcome is defensible, traceable evidence that supports release, inspection readiness, and patient/customer safety.

“Verification shows you met the spec. Validation proves the spec was the right one.”

TL;DR: Define user needs in a governed URS and plan V&V in the VMP. Verify designs/configuration, then execute domain‑appropriate validation—IQ/OQ/PQ, PPQ, TMV, CSV—capturing raw data, deviations, and CAPA in auditable systems. Maintain traceability from URS→tests→evidence; monitor performance with CPV/SPC. Revalidate on controlled changes via MOC. Govern e‑records under Part 11/Annex 11 and Data Integrity expectations.

1) What V&V Covers—and What It Does Not

Covers: requirements definition, risk assessment, design/specification review, test protocol design and execution, equipment qualification, process and method validation, software validation, reporting, and continued verification (e.g., CPV). It ensures fitness for intended use under defined operating ranges.

Does not cover: ad‑hoc experimentation, undocumented configuration “tweaks,” or retrospective approvals without raw data. Routine in‑process checks or QC testing are not substitutes for a validated state; they are evidence used within it.

2) Legal, System & Data Integrity Anchors

Operate under GMP/ICH Q10, the device QMSR (the revised 21 CFR 820), drug CGMPs in 21 CFR 211, and software expectations in Part 11/Annex 11 and GAMP 5. Control documents in Document Control; manage changes under MOC; capture attributable, contemporaneous evidence with audit trails per Data Integrity.

3) The V&V Evidence Pack

Maintain: approved URS and risk register (QRM, PFMEA), VMP and plan, design/functional specs, protocol(s) (IQ/OQ/PQ, TMV, UAT, PPQ), raw data and objective results, deviations with impact assessments and CAPA, summary reports with conclusions, a live traceability matrix linking URS→tests→evidence, configuration snapshots, and training records. Store under governed retention.

4) From Requirement to Release—A Standard Path

1) URS & risk. Capture needs, users, environments, and risks.
2) Plan. Build the VMP, define protocols, acceptance criteria, and traceability.
3) Verify design. Review specs/config (design review, FAT/SAT as applicable).
4) Validate. Execute IQ/OQ/PQ, TMV, UAT, PPQ with controlled deviations.
5) Conclude. Issue summary reports, approve release, and enter CPV monitoring with Control Plan rules.
6) Sustain. Revalidate on change via MOC.
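
Where this path is administered in software, the stage gating can be expressed as a small state machine. A minimal sketch in Python; the stage names and transition rules are illustrative assumptions, not a prescribed lifecycle:

```python
# Minimal sketch of the V&V lifecycle as a state machine.
# Stage names and transition rules are illustrative, not prescriptive.
ALLOWED_TRANSITIONS = {
    "urs_approved":         {"plan_approved"},
    "plan_approved":        {"design_verified"},
    "design_verified":      {"validation_executed"},
    "validation_executed":  {"released"},
    "released":             {"cpv_monitoring"},
    "cpv_monitoring":       {"change_assessment"},   # controlled change via MOC
    "change_assessment":    {"validation_executed",  # revalidate affected scope
                             "cpv_monitoring"},      # or no impact: resume monitoring
}

def advance(current: str, target: str) -> str:
    """Move to the next lifecycle stage, rejecting uncontrolled jumps."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Blocked: {current} -> {target} is not a controlled transition")
    return target

# Example: a recipe change during monitoring forces impact assessment first.
stage = "cpv_monitoring"
stage = advance(stage, "change_assessment")
stage = advance(stage, "validation_executed")  # risk-based regression testing
```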

5) Verification vs Validation—Practical Demarcation

Verification asks “did we build/configure it to spec?” (e.g., recipe parameters load correctly; label template fields map). Validation asks “does it consistently meet intended use?” (e.g., process yields meet quality attributes across lots; users can perform critical tasks without error under real conditions). Both are required for defensible fitness‑for‑use.

6) V&V by Domain

Equipment/Facilities: IQ/OQ/PQ, environmental readiness, and utilities (UQ).

Process/Product: the PV lifecycle—process design, then PPQ, then ongoing CPV/SPC.

Laboratory/Methods: TMV, MSA, and data integrity of chromatograms/ELN records.

Software/IT: CSV per GAMP 5, including UAT, security, and audit trails.

Packaging/Labeling: variable‑data checks and Label Verification under effective‑dated artwork control.

7) Traceability Matrix—Proving Coverage

Maintain a living matrix linking URS requirements to protocols, test cases, objective evidence, deviations/CAPA, and final conclusions. For devices, connect to DHF; for manufacturing, link to MBR/MMR and the executed eBMR; for devices in production, evidence ultimately rolls into the DHR.
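
In data terms, the matrix is a set of links that must stay complete as protocols execute. A minimal Python sketch, with hypothetical record fields and status values, that flags URS items lacking an executed, passing, evidenced test:

```python
from dataclasses import dataclass, field

# Illustrative records only; real systems would also link deviations,
# CAPA, protocol versions, and configuration baselines.

@dataclass
class TestResult:
    test_id: str
    status: str             # "pass", "fail", or "not_executed"
    evidence_ref: str = ""  # pointer to raw data / objective evidence

@dataclass
class Requirement:
    urs_id: str
    text: str
    tests: list[TestResult] = field(default_factory=list)

def coverage_gaps(matrix: list[Requirement]) -> list[str]:
    """Return URS IDs without at least one passing, evidenced test."""
    gaps = []
    for req in matrix:
        ok = any(t.status == "pass" and t.evidence_ref for t in req.tests)
        if not ok:
            gaps.append(req.urs_id)
    return gaps

matrix = [
    Requirement("URS-001", "Operator scans lot barcode at dispensing",
                [TestResult("OQ-014", "pass", "run-2025-06-12/oq014.pdf")]),
    Requirement("URS-002", "System rejects expired material",
                [TestResult("OQ-015", "not_executed")]),
]
print(coverage_gaps(matrix))   # -> ['URS-002']
```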

8) Risk‑Based Testing & Sampling

Focus testing where risk is highest using QRM tools (PFMEA, hazard analysis). Set acceptance criteria and sampling (e.g., AQL) proportionate to severity and detectability. For automated systems, include negative/edge cases and security/privilege tests.
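
The trade-off behind any attributes sampling plan can be made explicit through its operating characteristic, which follows directly from the binomial distribution. A minimal sketch, assuming a simple single-sampling plan (sample size n, acceptance number c) rather than a full ANSI/ASQ Z1.4 scheme:

```python
from math import comb

def p_accept(n: int, c: int, defect_rate: float) -> float:
    """Probability a lot passes: at most c defectives in a sample of n."""
    return sum(
        comb(n, k) * defect_rate**k * (1 - defect_rate)**(n - k)
        for k in range(c + 1)
    )

# Example: an n=80, c=1 plan evaluated at two incoming quality levels.
print(f"{p_accept(80, 1, 0.005):.3f}")  # good lot (0.5% defective): ~0.94 accepted
print(f"{p_accept(80, 1, 0.05):.3f}")   # bad lot (5% defective):   ~0.09 accepted
```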

9) Statistics that Make Validation Stick

Use appropriate sample sizes, confidence/reliability goals, and SPC to demonstrate control. For PPQ, show capability against critical attributes; then continue monitoring via CPV with defined rules for signals and control limits.
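
Two workhorse calculations here are the capability index Cpk against specification limits and the zero-failure "success-run" sample size n = ln(1−C)/ln(R), rounded up, for a confidence/reliability claim. A minimal sketch with made-up fill-weight data:

```python
import math
from statistics import mean, stdev

def cpk(data: list[float], lsl: float, usl: float) -> float:
    """Process capability index against lower/upper spec limits."""
    mu, sigma = mean(data), stdev(data)
    return min(usl - mu, mu - lsl) / (3 * sigma)

def success_run_n(confidence: float, reliability: float) -> int:
    """Zero-failure sample size to claim `reliability` at `confidence`."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

fill_weights = [100.2, 99.8, 100.1, 100.0, 99.9, 100.3, 99.7, 100.1]
print(f"Cpk = {cpk(fill_weights, lsl=99.0, usl=101.0):.2f}")
print(success_run_n(0.95, 0.95))   # -> 59 units for a 95/95 claim
```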

10) Change Control & Revalidation

Route changes through MOC, assess impact/risk, define regression testing, and update the traceability matrix. Revalidate when the validated state could be affected—software upgrades, recipe changes, equipment moves, labeling template edits, or supplier/material shifts.
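
One way to keep regression scope risk-based rather than ad hoc is to derive it from the traceability links: collect the requirements the changed item touches, then re-execute the tests that cover them. A minimal sketch with hypothetical IDs and mappings:

```python
# Hypothetical impact map: changed item -> requirements it touches.
IMPACTS = {
    "label_template_v7": {"URS-031", "URS-032"},
    "mixer_plc_fw_2.4":  {"URS-010"},
}

# Traceability links: requirement -> test cases that verify it.
TRACE = {
    "URS-010": {"OQ-101", "PQ-014"},
    "URS-031": {"OQ-210"},
    "URS-032": {"OQ-211", "UAT-050"},
}

def regression_suite(changed_items: set[str]) -> set[str]:
    """Union of test cases covering every requirement the change touches."""
    impacted = set()
    for item in changed_items:
        impacted |= IMPACTS.get(item, set())
    tests = set()
    for req in impacted:
        tests |= TRACE.get(req, set())
    return tests

print(sorted(regression_suite({"label_template_v7"})))
# -> ['OQ-210', 'OQ-211', 'UAT-050']
```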

11) Outsourcing, Suppliers & CMOs

Leverage vendor testing where justified, but perform independent verification of critical requirements. Define responsibilities in the Quality Agreement, review supplier validation packages, and align release criteria and data formats up front.

12) Data Integrity—Evidence that Stands Up

Capture evidence in validated systems with unique user attribution, audit trails, and compliant e‑signatures (Part 11/Annex 11). Keep raw data (e.g., HPLC files), not just summaries; ensure results are attributable, legible, contemporaneous, original, accurate—ALCOA(+).
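
To see what tamper-evident, attributable, contemporaneous evidence means in data terms, the toy below chains each audit entry to the previous one with a hash, so any retroactive edit breaks verification. It illustrates the principle only; it is not a Part 11-compliant audit trail:

```python
import hashlib, json
from datetime import datetime, timezone

def append_entry(trail: list[dict], user: str, action: str, record: str) -> None:
    """Append an attributable, time-stamped, hash-chained audit entry."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    entry = {
        "user": user,                                    # attributable
        "utc": datetime.now(timezone.utc).isoformat(),   # contemporaneous
        "action": action,
        "record": record,
        "prev": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(entry)

def verify(trail: list[dict]) -> bool:
    """Recompute the chain; any retroactive edit changes a hash."""
    prev = "0" * 64
    for e in trail:
        body = {k: v for k, v in e.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

trail: list[dict] = []
append_entry(trail, "jdoe", "result_entered", "HPLC-2025-0142")
append_entry(trail, "asmith", "result_reviewed", "HPLC-2025-0142")
print(verify(trail))          # True
trail[0]["user"] = "ghost"    # retroactive edit...
print(verify(trail))          # False: the chain no longer verifies
```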

13) Metrics That Demonstrate Control

  • Right‑first‑time protocol execution and deviation rate.
  • Traceability coverage (URS items with executed, passing tests).
  • Time‑to‑validation (plan→approved report) and aging of open actions.
  • CPV signal rate and capability trend post‑PPQ.
  • Change‑to‑revalidation cycle time and % changes with documented impact assessment.

These KPIs show a predictable, risk‑based V&V system that sustains a validated state over time.
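
As a sketch of how a few of these KPIs fall out of counts a QMS already holds (the count names are assumptions for illustration):

```python
def vv_kpis(counts: dict[str, int]) -> dict[str, float]:
    """Derive headline V&V KPIs from raw event counts."""
    executed = counts["protocols_executed"]
    return {
        "right_first_time_pct":
            100 * (executed - counts["protocols_with_deviation"]) / executed,
        "traceability_coverage_pct":
            100 * counts["urs_with_passing_tests"] / counts["urs_total"],
        "impact_assessed_changes_pct":
            100 * counts["changes_with_impact_assessment"] / counts["changes_total"],
    }

print(vv_kpis({
    "protocols_executed": 40, "protocols_with_deviation": 3,
    "urs_with_passing_tests": 118, "urs_total": 120,
    "changes_with_impact_assessment": 27, "changes_total": 27,
}))
# -> RFT 92.5%, traceability coverage ~98.3%, impact-assessed changes 100%
```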

14) Common Pitfalls & How to Avoid Them

  • Weak or generic URS. Make needs testable and context‑rich; avoid copying vendor brochures.
  • Paper validation. Require objective evidence from the system itself (screens, files, signals).
  • Uncontrolled changes mid‑execution. Freeze config during testing; track any drift via MOC.
  • Ignoring negative/edge cases. Include failure paths, permissions, and error handling.
  • Lost raw data. Store native files with links; don’t rely on pasted screenshots.
  • No revalidation. Tie releases and updates to explicit impact assessments and regression testing.

15) What Belongs in the V&V Record

Scope and rationale; URS and risks; VMP/plan; design/functional specs; protocol versions; executed results with raw data; deviations/CAPA; configuration baselines; training and released SOPs; summary report with conclusion; traceability matrix; and post‑release monitoring plan (CPV/control plan). All items should be searchable and immutable under Document Control.

16) How This Fits with V5 by SG Systems Global

Requirements & traceability built‑in. The V5 platform captures URS as governed records, maps them to test cases and acceptance criteria, and maintains a live traceability matrix that updates as protocols execute.

Protocol designer for every domain. V5 provides validated templates and e‑execution for IQ/OQ/PQ, TMV, PPQ, and UAT, with step interlocks, device integrations, and automatic capture of audit trails and screenshots.

CSV and security by design. Aligned to GAMP 5, V5 enforces UAM, e‑signatures, and Part 11 controls; it blocks release if requirements lack passing evidence.

Change‑driven revalidation. When recipes, methods, or software versions change, V5 triggers MOC, proposes risk‑based regression suites, and requires approval before moving to production.

Lifecycle visibility. Post‑release, V5 streams CPV metrics and SPC signals into dashboards so validation remains a managed state—not a one‑time event.

Bottom line: V5 turns V&V into a governed, end‑to‑end workflow—from URS to CPV—with real‑time traceability, interlocks, and inspection‑ready evidence.

17) FAQ

Q1. How many PPQ batches are required?
It’s risk‑based. Three is common, but justify the number using process knowledge, prior data, and criticality; continued performance is demonstrated through CPV.

Q2. What’s the difference between IQ/OQ/PQ and CSV?
IQ/OQ/PQ qualify physical equipment and facilities; CSV validates computerized systems (software + infrastructure) used in GxP processes.

Q3. When is revalidation needed?
After changes that could affect the validated state—software upgrades, recipe/spec changes, equipment relocation, new materials, or labeling template edits—assessed via MOC.

Q4. Can we rely on vendor test results?
Yes with qualification: review and gap‑assess vendor documentation, then perform risk‑based verification of critical requirements in your environment.

Q5. Is UAT alone sufficient for software validation?
No. UAT is one element; a compliant CSV package includes risk assessment, requirements, design review, installation/config verification, testing, security, data integrity, and a final report.

Q6. How do we handle deviations during protocol execution?
Document immediately, assess impact, define corrective actions or retesting, and include the deviation and CAPA in the final validation report.


Related Reading
• Foundations & Governance: VMP | ICH Q10 | GAMP 5 | 21 CFR Part 11 | Annex 11 | Document Control | MOC
• Planning & Requirements: URS | UAT | PFMEA | Control Plan
• Execution & Records: IQ/OQ/PQ | Process Validation | PPQ | CPV | TMV | DHR | eBMR
• Systems & Data: CSV | MES | LIMS | ELN | Audit Trail | Data Integrity


