User Acceptance Testing (UAT) – Fit-for-Use Verification
This topic is part of the SG Systems Global regulatory & operations glossary.
Updated October 2025 • Computer System Validation & Go‑Live Readiness • Quality, IT, Manufacturing, Laboratory, Supply Chain
User Acceptance Testing (UAT) is the regulated user’s final, business‑led verification that a configured system is fit for intended use. Unlike vendor demos or technical qualification, UAT proves that real users—operating under approved procedures—can execute end‑to‑end workflows, make correct decisions, and produce compliant records with trustworthy data. In a CSV lifecycle aligned to GAMP 5, it follows build/configuration and OQ, precedes release, and anchors regulator expectations for Part 11/Annex 11 compliance, Data Integrity, and operational control.
“If qualification shows the software works, UAT proves you can work with it—safely, compliantly, every day.”
1) What UAT Covers—and What It Does Not
Covers: acceptance of configured business processes (e.g., electronic batch execution in MES, sample login and review in LIMS, warehouse picking in WMS), roles and permissions, labels and reports, interfaces (e.g., EDI/EPCIS), and the generation of compliant records (e.g., eBMR) under real SOPs.
Does not cover: UAT is not equipment qualification (IQ/OQ/PQ), not vendor unit testing, and not an informal demo. It does not replace security, performance, or backup/restore tests performed elsewhere in the CSV plan—though high‑risk nonfunctional checks can be echoed in UAT to prove operational readiness.
2) Regulatory & System Anchors
UAT derives from the CSV principle that the regulated company must prove intended use under its quality system. Testers operate documented procedures under Document Control, record results with attributable e‑signatures (Part 11, Annex 11), and capture audit trails. The plan and risk basis align to GAMP 5; residual risks route to QRM and changes are governed by MOC. Evidence is retained under Record Retention.
3) The UAT Evidence Pack
A complete pack includes the UAT plan (scope, roles, entry/exit criteria), risk rationale, scenario scripts with acceptance criteria, pre‑approved test data, executed results with screenshots/attachments, audit‑trail extracts, report samples and label print proofs, defect logs and Deviation/NC handling, re‑test evidence, traceability matrices linking scenarios to requirements and risks, tester training records (Training Matrix), environment and configuration identifiers, and formal acceptance under Approval Workflow.
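As a simple illustration, the completeness of such a pack can be checked mechanically before acceptance review. The sketch below is a minimal Python example; the artifact names and the `pack` structure are assumptions made for illustration, not a prescribed format.

```python
# Minimal sketch of a UAT evidence-pack completeness check.
# Artifact names and the `pack` contents are hypothetical examples.

REQUIRED_ARTIFACTS = {
    "uat_plan", "risk_rationale", "scenario_scripts", "test_data_catalog",
    "executed_results", "audit_trail_extracts", "defect_log",
    "retest_evidence", "traceability_matrix", "tester_training_records",
    "environment_config_ids", "formal_acceptance",
}

def missing_artifacts(pack: dict) -> set[str]:
    """Return required artifacts that are absent or empty in the pack."""
    return {name for name in REQUIRED_ARTIFACTS if not pack.get(name)}

pack = {"uat_plan": "UAT-PLAN-001 v2.0", "scenario_scripts": ["UAT-SC-001", "UAT-SC-002"]}
print(sorted(missing_artifacts(pack)))  # everything not yet attached
```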
4) From Planning to Acceptance—A Standard Path
- Plan: define scope, risks, scenarios, data, roles, and criteria.
- Readiness: confirm OQ completion, configuration baseline, stable environment, seeded data, and trained testers.
- Execute: run scenarios as written; collect evidence contemporaneously.
- Triage: log defects, classify impact, decide fix vs. workaround.
- Re‑test & regression: verify fixes and guard against breakage.
- Acceptance: confirm exit criteria, finalize residual risks, and sign off.
- Release: proceed to controlled go‑live; archive the pack under Document Control.
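To illustrate the gating between these phases, the following minimal Python sketch models the path as an ordered sequence in which a phase may begin only once every preceding phase has been approved. The phase names mirror the list above; the approval set is a hypothetical stand-in for the formal sign-offs captured under the Approval Workflow.

```python
# Illustrative sketch of the UAT path as ordered, gated phases.

PHASES = ["plan", "readiness", "execute", "triage",
          "retest_regression", "acceptance", "release"]

def can_start(phase: str, approved: set[str]) -> bool:
    """A phase may begin only when every earlier phase has been formally approved."""
    idx = PHASES.index(phase)
    return all(p in approved for p in PHASES[:idx])

approved = {"plan", "readiness", "execute"}
print(can_start("triage", approved))      # True: all prior phases approved
print(can_start("acceptance", approved))  # False: triage and re-test not yet approved
```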
5) Entry & Exit Criteria That Prevent Surprises
Entry: OQ passed, configuration frozen, defects below threshold, environment validated and stable, test data prepared, testers trained and authorized, UAT plan approved. Exit: all critical scenarios pass; no open severity‑1 defects; severity‑2 mitigated with documented risk and plan; traceability complete; reports/labels verified; audit trails/e‑signatures demonstrated; approvals captured; go/no‑go recorded with rationale.
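A small sketch can make the exit decision concrete. The field names and record shapes below are illustrative assumptions, not a prescribed schema; the point is that the go/no-go gate can be evaluated directly from the scenario results and defect log.

```python
# Minimal sketch of an exit-criteria gate evaluated from scenario results
# and the defect log. Record shapes are illustrative assumptions.

def exit_criteria_met(scenarios: list[dict], defects: list[dict]) -> tuple[bool, list[str]]:
    """Return (go, reasons blocking go) for the UAT exit decision."""
    blockers = []
    if any(s["critical"] and s["status"] != "pass" for s in scenarios):
        blockers.append("critical scenario not passing")
    if any(d["severity"] == 1 and d["status"] == "open" for d in defects):
        blockers.append("open severity-1 defect")
    if any(d["severity"] == 2 and d["status"] == "open" and not d.get("mitigation")
           for d in defects):
        blockers.append("severity-2 defect without documented mitigation")
    if any(not s.get("requirement_ids") for s in scenarios):
        blockers.append("scenario without traceability to a requirement")
    return (not blockers, blockers)

go, reasons = exit_criteria_met(
    scenarios=[{"id": "UAT-SC-010", "critical": True, "status": "pass",
                "requirement_ids": ["URS-042"]}],
    defects=[{"id": "DEF-7", "severity": 2, "status": "open", "mitigation": "RA-112"}],
)
print(go, reasons)
```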
6) Risk‑Based UAT
Apply QRM to prioritize: batch release, label printing, data migration, master data changes, security and approvals, inventory status changes, and integrations that affect product identity or disposition. Use failure‑mode thinking (what could go wrong?) to design negative and boundary tests—e.g., reject a label with the wrong GTIN, block a pick of quarantined stock, or challenge e‑signature attempts without required privileges.
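The negative checks above translate naturally into scripted tests. The following pytest-style sketch is illustrative only: `pick_stock`, `print_label`, and the exceptions are hypothetical stand-ins for whatever transaction the system under test actually exposes (UI, API, or interface message).

```python
# Hedged sketch of two negative UAT checks expressed as pytest-style tests.

import pytest

class DispositionError(Exception): ...
class LabelContentError(Exception): ...

def pick_stock(lot_status: str, qty: int) -> str:
    """Stand-in for the WMS picking transaction."""
    if lot_status != "RELEASED":
        raise DispositionError(f"pick blocked: lot status is {lot_status}")
    return f"picked {qty}"

def print_label(expected_gtin: str, label_gtin: str) -> str:
    """Stand-in for label generation with content verification."""
    if label_gtin != expected_gtin:
        raise LabelContentError("GTIN on label does not match item master")
    return "label released"

def test_pick_of_quarantined_stock_is_blocked():
    with pytest.raises(DispositionError):
        pick_stock(lot_status="QUARANTINE", qty=10)

def test_label_with_wrong_gtin_is_rejected():
    with pytest.raises(LabelContentError):
        print_label(expected_gtin="00312345678906", label_gtin="00312345678913")
```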
7) Test Data That Looks Like Real Life
Seed representative items, lots, expiry dates, strengths/sizes, customers/suppliers, and realistic order volumes. Include edge cases: leap‑year dates, daylight‑saving time changes, near‑limit quantities, multilingual labels, and unusual workflows (returns, rework). Anonymize any sensitive data. Keep a definitive catalog of datasets in the UAT pack so re‑tests are reproducible and consistent across cycles.
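A dataset catalog can be as simple as a versioned, ID-keyed list. The fragment below is an invented example covering a few of the edge cases named above; field names and values are assumptions, and real catalogs would live in the governed UAT pack and be anonymized where the data is sensitive.

```python
# Illustrative fragment of a version-controlled UAT dataset catalog.

from datetime import date

DATASETS = [
    {"id": "DS-001", "purpose": "nominal batch", "lot": "L240101",
     "expiry": date(2027, 6, 30), "qty": 1000},
    {"id": "DS-002", "purpose": "leap-year expiry", "lot": "L240229",
     "expiry": date(2028, 2, 29), "qty": 500},
    {"id": "DS-003", "purpose": "near-limit quantity", "lot": "L240315",
     "expiry": date(2026, 12, 31), "qty": 999999},
    {"id": "DS-004", "purpose": "multilingual label", "lot": "L240401",
     "expiry": date(2026, 3, 31), "qty": 250, "label_langs": ["en", "fr", "de"]},
]

# Re-tests reference datasets by ID so cycles stay reproducible.
by_id = {d["id"]: d for d in DATASETS}
print(by_id["DS-002"]["expiry"])  # 2028-02-29
```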
8) Roles, Permissions & Audit Trails
Prove that each role can do what it should—and cannot do what it shouldn’t. Attempt restricted transactions, verify read‑only views, and confirm segregation of duties for critical steps (e.g., maker/checker for release). Demonstrate the behavior of audit trails (who/what/when/why), password/e‑signature prompts, session timeouts, and account lockouts. Capture extracts as evidence in the UAT record.
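For illustration, a role/permission challenge and an audit-trail field check can be sketched as follows. The `authorize` function and the audit-entry layout are hypothetical; the real evidence remains the screenshots and extracts captured from the system itself.

```python
# Sketch of a role/permission challenge and an audit-trail field check.

ROLE_PERMISSIONS = {
    "operator":   {"execute_step"},
    "reviewer":   {"execute_step", "review_record"},
    "qa_release": {"review_record", "release_batch"},
}

def authorize(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

# Segregation of duties: the person who executed must not also release.
assert authorize("operator", "execute_step")
assert not authorize("operator", "release_batch")    # restricted transaction denied
assert not authorize("qa_release", "execute_step")   # maker/checker kept separate

# Audit-trail entries should answer who/what/when/why.
entry = {"user": "jdoe", "action": "release_batch",
         "timestamp": "2025-10-06T14:02:11Z",
         "reason": "batch meets specification"}
assert all(entry.get(k) for k in ("user", "action", "timestamp", "reason"))
print("permission and audit-trail checks passed")
```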
9) Interfaces, Devices & Labels
Exercise integrations end‑to‑end: scanners and scales to MES/WMS, LIMS to instruments, ERP to EDI, packaging to label printers, and outbound traceability via EPCIS. Validate that labels meet format and barcode rules (Label Verification) and that transactions propagate correctly across systems without data loss, duplication, or mis‑mapping.
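Label scenarios often include an automated sanity check of barcode content alongside print grading. The sketch below implements the standard GS1 mod-10 check-digit calculation in Python; it is a supplementary check for a UAT script, not a replacement for formal Label Verification.

```python
# Minimal sketch of a GS1 check-digit verification a label UAT scenario
# might automate alongside visual and grade checks.

def gs1_check_digit_ok(gtin: str) -> bool:
    """Validate the trailing check digit of a GTIN-8/12/13/14."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    body, check = gtin[:-1], int(gtin[-1])
    # Weight digits 3,1,3,1,... starting from the digit nearest the check digit.
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check

print(gs1_check_digit_ok("00312345678906"))  # True  (valid check digit)
print(gs1_check_digit_ok("00312345678905"))  # False (corrupted check digit)
```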
10) Reports, Records & Retention
Verify all regulated outputs: eBMRs, pick/pack tickets, COA printouts, audit logs, management reports, and data exports. Check that formats, filters, time zones, and decimal precision match procedures; PDFs are immutable; and retention policies route outputs to governed repositories (Record Retention). Confirm that search and retrieval work as described in SOPs.
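Two of these formatting rules, decimal precision and time-zone rendering, can be spot-checked programmatically against an export. In the sketch below, the expected precision and the site time zone are assumptions chosen for illustration.

```python
# Illustrative checks of report formatting rules UAT might verify against an export:
# decimal precision and time-zone handling.

from decimal import Decimal
from datetime import datetime
from zoneinfo import ZoneInfo

SITE_TZ = ZoneInfo("Europe/London")  # assumed site time zone

def precision_ok(value: str, places: int) -> bool:
    """True if the exported value carries exactly the procedural decimal places."""
    return Decimal(value).as_tuple().exponent == -places

def to_site_time(utc_iso: str) -> str:
    """Render a UTC timestamp in the site's local zone, as the report should."""
    return datetime.fromisoformat(utc_iso).astimezone(SITE_TZ).isoformat()

print(precision_ok("12.50", 2))                   # True
print(precision_ok("12.5", 2))                    # False: precision lost in export
print(to_site_time("2025-07-01T13:00:00+00:00"))  # 2025-07-01T14:00:00+01:00 (BST)
```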
11) Nonfunctional Readiness You Should Prove
While performance and resilience may be addressed elsewhere, UAT should attest that the solution is usable under real workload: screens render quickly enough on shop‑floor devices, label printing throughput matches line speed, behavior under offline or intermittent Wi‑Fi conditions is acceptable, and backup/restore and failover have been demonstrated at least once for the acceptance environment with evidence preserved.
12) Metrics That Demonstrate Control
- Risk coverage: proportion of high/medium risks with executed scenarios and passing results (see the calculation sketch below).
- Defect profile: severity mix, cycle time to close, and re‑open rate.
- Acceptance stability: percentage of scenarios passing on first execution; regression pass rate after fixes.
- Evidence completeness: scenarios with full screenshots, logs, labels, and audit‑trail extracts attached.
- Tester readiness: share of testers with current training per the Training Matrix.
These indicators make go/no‑go decisions defensible and reveal whether acceptance is thorough or merely ceremonial.
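As a worked example of the first metric, risk coverage can be computed directly from the risk register and scenario results. The record shapes below are hypothetical; real figures would come from the UAT tooling.

```python
# Small calculation sketch for the risk-coverage metric.

def risk_coverage(risks: list[dict], results: dict[str, str]) -> float:
    """Share of high/medium risks whose linked scenarios all passed."""
    in_scope = [r for r in risks if r["rating"] in ("high", "medium")]
    covered = [r for r in in_scope
               if r["scenarios"] and all(results.get(s) == "pass" for s in r["scenarios"])]
    return len(covered) / len(in_scope) if in_scope else 1.0

risks = [
    {"id": "R-01", "rating": "high",   "scenarios": ["UAT-SC-001", "UAT-SC-002"]},
    {"id": "R-02", "rating": "medium", "scenarios": ["UAT-SC-003"]},
    {"id": "R-03", "rating": "low",    "scenarios": []},
]
results = {"UAT-SC-001": "pass", "UAT-SC-002": "pass", "UAT-SC-003": "fail"}
print(f"risk coverage: {risk_coverage(risks, results):.0%}")  # 50%
```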
13) Common Pitfalls & How to Avoid Them
- “Happy‑path” testing only. Design negative and boundary scenarios from risk analysis.
- Unrealistic or ad‑hoc data. Pre‑approve datasets; keep them version‑controlled.
- IT‑only testers. Require business process owners to lead execution and sign‑off.
- Moving configuration during UAT. Freeze builds; any change routes through MOC with impact assessment.
- Thin evidence. Capture screenshots, reports, and audit‑trail extracts contemporaneously.
- No traceability. Maintain a matrix from requirement/risk to scenario to result and defect (see the sketch after this list).
- Interface blind spots. Prove both sides of every integration, including error handling and retries.
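A traceability matrix need not be elaborate to be useful. The sketch below links requirement, risk, scenario, result, and defect using invented identifiers, and flags requirements that have no scenario yet.

```python
# Hedged sketch of a traceability record linking requirement -> risk -> scenario
# -> result -> defect, plus a check for untraced requirements.

TRACE = [
    {"requirement": "URS-042", "risk": "R-01", "scenario": "UAT-SC-001",
     "result": "pass", "defects": []},
    {"requirement": "URS-043", "risk": "R-02", "scenario": "UAT-SC-003",
     "result": "fail", "defects": ["DEF-7"]},
]
REQUIREMENTS = {"URS-042", "URS-043", "URS-044"}

untraced = REQUIREMENTS - {row["requirement"] for row in TRACE}
failing = [row for row in TRACE if row["result"] != "pass"]
print("untraced requirements:", sorted(untraced))          # URS-044 has no scenario yet
print("failing scenarios:", [r["scenario"] for r in failing])
```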
14) What Belongs in the UAT Record
Identify the system and version; environment identifiers; approved UAT plan; tester list and training evidence; risk basis; scenario scripts; data catalogs; executed results with attachments; defect/NC and CAPA linkage; re‑test records; traceability matrix; acceptance statement and go/no‑go minutes; and references to governing SOPs and approvals. Archive under controlled access with retention rules.
15) Position vs. IQ/OQ/PQ & Go‑Live
IQ/OQ establish that the system is installed and functions to specification; UAT establishes that your organization, using your procedures and data, can operate it compliantly. Some industries pair UAT with PQ or treat portions of UAT as operational qualification—what matters is that acceptance is user‑owned, risk‑based, and complete before production use.
16) How This Fits with V5 by SG Systems Global
UAT Workspace & Governance. The V5 platform provides a governed UAT workspace inside the QMS: plans and scripts are authored and approved under Document Control, then routed through e‑signature‑based Approval Workflow. Role‑based access keeps builders and testers separated to preserve independence.
Execution with Automatic Evidence. V5 guides testers step‑by‑step and auto‑captures evidence—screenshots, label images, audit‑trail extracts, and report PDFs—storing them with immutable metadata to satisfy ALCOA(+). Where UAT spans modules (e.g., MES to WMS to LIMS), cross‑system evidence is stitched into a single dossier.
Defects to CAPA. V5 turns UAT defects into quality records: critical findings can escalate directly to CAPA, while minor items enter backlog with risk visibility. Re‑tests are linked so closure is auditable.
Risk‑Based Coverage & Traceability. UAT scenarios link to the risk register (QRM), requirements, and configured objects. V5 generates live traceability matrices and coverage dashboards to show which high‑risk functions have passing evidence.
Label & Integration Harnesses. For barcode/label content, V5 integrates with label verification so print samples and grades are automatically attached. Interface stubs validate EDI/EPCIS payloads and shop‑floor device events without standing up every external dependency on day one.
Release & Retention. When acceptance completes, V5 issues a go/no‑go package with all approvals, then locks and archives the dossier under Record Retention. Subsequent changes must pass through MOC and can trigger targeted re‑UAT by risk.
Bottom line: V5 makes UAT repeatable, risk‑focused, and inspection‑ready—turning acceptance from a scramble of screenshots into a governed, end‑to‑end story of fitness for use.
17) FAQ
Q1. Who should lead UAT?
Business process owners under QA oversight. IT supports environment and tools, but acceptance must be user‑led to satisfy intended‑use expectations.
Q2. How is UAT different from OQ?
OQ is technical verification against functional specs, often vendor‑ or IT‑led. UAT is business verification of end‑to‑end processes under SOPs with real‑world data and decisions.
Q3. Do we need negative testing?
Yes—especially for high‑risk controls (e.g., status blocks, e‑signature checks, label mismatches). Negative tests prove the system prevents unsafe or noncompliant actions.
Q4. Can we change configuration mid‑UAT?
Only via controlled MOC with impact assessment and re‑test. Silent changes undermine evidence credibility.
Q5. What evidence is essential for Part 11/Annex 11?
Demonstrations of unique user accounts, e‑signature binding, time‑stamped audit trails, record integrity, and appropriate security/retention—captured as artifacts in the UAT pack.
Q6. How much UAT is enough?
Enough to cover all high/medium risks with passing evidence, all critical workflows and interfaces, and all regulated outputs, with residual risks documented and accepted by the right approvers.
Related Reading
• Validation & Governance: CSV | GAMP 5 | Document Control | Approval Workflow | QRM
• Compliance Foundations: 21 CFR Part 11 | Annex 11 | Audit Trail | Data Integrity | Record Retention
• Execution Contexts: MES | LIMS | WMS | eBMR | Label Verification | EPCIS