Tests – Laboratory Analyses & Review
This topic is part of the SG Systems Global regulatory & operations glossary.
Updated October 2025 • GMP/GLP Testing, Data Integrity & Release Evidence • QC Laboratory, QA, Manufacturing
Tests are the planned laboratory analyses—chemical, microbiological, and physical—that transform samples into evidence for quality decisions. In regulated industries, testing is not an isolated bench activity but a controlled process governed by validated methods, qualified instruments, justified sampling plans, and a second‑person technical review that ensures results are attributable, complete, and accurate. The output of testing feeds QC release, informs MRB and CAPA decisions, and ultimately underpins Lot Release / QA Disposition for finished goods.
“If it isn’t attributable, legible, contemporaneous, original, and accurate—it isn’t a result; it’s a rumor.”
1) What Testing Covers—and What It Does Not
Covers: identity, purity, potency, safety, and performance attributes for raw materials, in‑process controls, environmental and utilities surveillance, packaging verification, and finished‑product compliance. Testing includes method execution (e.g., HPLC, Karl Fischer, micro assays), calculations, and an independent technical review documented in LIMS/ELN.
Does not cover: “testing into compliance,” ignoring sample integrity, or bypassing method/system suitability. Laboratory testing cannot compensate for uncontrolled manufacturing or undocumented changes; results without data integrity are not usable evidence.
2) Legal, System, and Data Integrity Anchors
Laboratory records must meet 21 CFR Part 11 and Annex 11 requirements, with validated software under CSV, unique user attribution, and immutable audit trails. Instruments and systems are qualified (asset, software, and method) per IQ/OQ/PQ, and their status is maintained via calibration & maintenance controls. Methods are validated or verified, living under Document Control with versioned procedures, templates, and acceptance criteria.
3) The Evidence Pack for a Laboratory Result
An audit‑ready result ties together the sample registration (chain of custody, storage conditions), method ID and version, instrument ID and fitness‑for‑use, standards and reagents (lot/expiry), raw data (e.g., chromatograms, micro counts, gravimetric entries), calculations with units and rules for rounding/significant figures, system suitability, analyst attribution and training, second‑person review sign‑off, and the final result versus specification. Deviations, OOS or OOT links, and justifications for exceptions are attached so the complete decision trail is reconstructable.
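The completeness requirement above can be expressed as a simple check. A minimal sketch, assuming a hypothetical result record with illustrative field names (not from any specific LIMS schema):

```python
# Hypothetical evidence-pack fields; names are illustrative only,
# not from any specific LIMS schema.
REQUIRED_FIELDS = [
    "sample_id", "method_id", "method_version", "instrument_id",
    "standard_lots", "raw_data_ref", "suitability_passed",
    "analyst_id", "reviewer_id", "result", "specification",
]

def missing_evidence(record: dict) -> list[str]:
    """Return the required evidence elements absent from a result record."""
    return [f for f in REQUIRED_FIELDS if record.get(f) in (None, "", [])]

pack = {
    "sample_id": "S-1042", "method_id": "AM-017", "method_version": "4.0",
    "instrument_id": "HPLC-03", "standard_lots": ["STD-88"],
    "raw_data_ref": "chrom/2025/S-1042", "suitability_passed": True,
    "analyst_id": "jdoe", "result": 99.2, "specification": "98.0-102.0",
}
print(missing_evidence(pack))  # → ['reviewer_id'] — review sign-off still open
```

A gap in any element means the result is not yet audit-ready, regardless of whether the numeric value passes specification.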
4) From Sample to CoA—A Standard Path
1) Register & plan. Samples are registered in LIMS with justified tests per specification and sampling plan; chain of custody begins.
2) Pre‑checks. The analyst verifies method version, instrument status, calibration, controls, and environmental conditions; a hold is applied if any prerequisite is out of status.
3) Execute & document. Work is performed per method; data and observations are recorded contemporaneously in LIMS/ELN; suitability is assessed.
4) Review. A qualified reviewer performs second‑person checks of calculations, integration, audit trails, and anomalies, resolving queries with the analyst.
5) Release. Approved results populate the CoA; QA uses the evidence to decide Hold/Release and proceed to Lot Release.
If any gate fails—unsuitable system, integrity concerns, or anomalous results—the sample and related materials remain on Quarantine/Hold until investigation and remediation are complete.
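The path above is effectively a gated state machine: a sample advances only when each gate passes, and diverts to hold otherwise. A minimal sketch of that flow, with illustrative gate parameters in place of real instrument, calibration, and review queries:

```python
from enum import Enum, auto

class State(Enum):
    REGISTERED = auto()
    IN_TEST = auto()
    IN_REVIEW = auto()
    APPROVED = auto()
    ON_HOLD = auto()

def advance(state: State, prechecks_ok: bool = True, review_ok: bool = True) -> State:
    """Move a sample one step along register → test → review → approve,
    diverting to ON_HOLD whenever a gate fails."""
    if state is State.REGISTERED:
        return State.IN_TEST if prechecks_ok else State.ON_HOLD
    if state is State.IN_TEST:
        return State.IN_REVIEW
    if state is State.IN_REVIEW:
        return State.APPROVED if review_ok else State.ON_HOLD
    return state  # APPROVED and ON_HOLD are terminal until QA acts

s = State.REGISTERED
s = advance(s)                   # pre-checks pass → IN_TEST
s = advance(s)                   # execution documented → IN_REVIEW
s = advance(s, review_ok=False)  # audit-trail anomaly → ON_HOLD
print(s.name)                    # → ON_HOLD
```

The design point is that ON_HOLD is reachable from any gate but APPROVED is reachable only through review, mirroring the rule that no result bypasses second-person checks.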
5) Handling OOS/OOT & Atypical Results
An OOS triggers a structured investigation: immediate data integrity checks and method/system suitability review, hypothesis‑driven retest (not “test until pass”), and impact assessment on related lots. OOT signals from trending challenge process capability and may justify tightened controls or method review. Material disposition flows through MRB with corrective and preventive actions tracked in CAPA. Resampling is distinct from retesting and requires explicit, pre‑defined criteria to avoid bias.
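The distinction between OOS (outside specification) and OOT (inside specification but outside trend limits) can be captured in a few lines. A sketch with illustrative limits; real limits come from the registered specification and the lab's trending program:

```python
def classify(result: float, spec_lo: float, spec_hi: float,
             trend_lo: float, trend_hi: float) -> str:
    """Classify a result against specification (OOS) and trend limits (OOT)."""
    if not (spec_lo <= result <= spec_hi):
        return "OOS"   # triggers a formal laboratory investigation
    if not (trend_lo <= result <= trend_hi):
        return "OOT"   # meets spec, but challenges process capability
    return "PASS"

# Illustrative assay: spec 98.0-102.0 %, trend limits 99.0-101.0 %
print(classify(97.5, 98.0, 102.0, 99.0, 101.0))   # → OOS
print(classify(98.5, 98.0, 102.0, 99.0, 101.0))   # → OOT
print(classify(100.1, 98.0, 102.0, 99.0, 101.0))  # → PASS
```

Note the ordering: a result is checked against specification first, so an OOS is never masked as a mere trend signal.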
6) Methods: Validation, Verification & Transfer
Methods must be fit‑for‑purpose and controlled. Validation or verification demonstrates accuracy, precision, specificity, linearity, range, robustness, and detection/quantitation limits as relevant. Measurement variation is characterized via MSA, and method transfer documents how equivalence was demonstrated at the receiving lab. All procedures, templates, and acceptance criteria are versioned under Document Control; changes follow a formal Management of Change process tied to training.
7) Data Integrity—Proving the Proof
ALCOA(+) applies to every datum: attributable users, legible records, contemporaneous entries, original sources, and accurate calculations. Audit trails must capture who did what, when, and why—especially for reprocessing or manual integrations. E‑signatures bind identities per Part 11 and Annex 11, and laboratory software is validated under CSV. The goal is reconstructability: an independent reviewer can reproduce the reported result from the raw data and methods without contacting the analyst.
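The who/what/when/why requirement for audit trails can be illustrated with an append-only, hash-chained log, in which editing any earlier entry breaks the chain. This is a sketch of the tamper-evidence principle only, not a Part 11 implementation:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only audit trail sketch: each entry records who, what, when,
    and why, and is chained by hash so tampering is detectable."""
    def __init__(self):
        self._entries = []

    def append(self, user: str, action: str, reason: str) -> None:
        prev = self._entries[-1]["hash"] if self._entries else ""
        entry = {"user": user, "action": action, "reason": reason,
                 "when": datetime.now(timezone.utc).isoformat(), "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._entries.append(entry)

    def verify(self) -> bool:
        """Recompute every hash; any edit to an earlier entry breaks the chain."""
        for i, e in enumerate(self._entries):
            body = {k: e[k] for k in ("user", "action", "reason", "when", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            if e["prev"] != (self._entries[i - 1]["hash"] if i else ""):
                return False
        return True

trail = AuditTrail()
trail.append("jdoe", "manual integration of peak 3", "baseline drift")
trail.append("asmith", "reviewed integration", "second-person check")
print(trail.verify())  # → True
```

An independent reviewer can thus confirm not only what was done but that the record of it has not been altered after the fact.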
8) Instruments, Qualification & Status
Testing depends on qualified, in‑status instruments and facilities. Analytical assets undergo IQ/OQ/PQ and are maintained with periodic calibration and maintenance. Balance checks, volumetric verifications, chromatography suitability, and environmental controls (temperature, humidity) are documented and must be in tolerance prior to use. If any prerequisite fails, testing halts, and the failure is documented as a deviation with an impact assessment.
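The "in-status or no testing" rule is a natural hard interlock. A minimal sketch, assuming a hypothetical asset record; in a real system these fields would come from the calibration and maintenance module:

```python
from datetime import date

def fit_for_use(asset: dict, today: date) -> tuple[bool, str]:
    """Block testing unless the instrument is qualified, within its
    calibration due date, and free of open deviations."""
    if not asset.get("qualified"):
        return False, "IQ/OQ/PQ not complete"
    if asset["cal_due"] < today:
        return False, "calibration overdue"
    if asset.get("open_deviation"):
        return False, "open deviation on asset"
    return True, "in status"

# Illustrative asset record
hplc = {"id": "HPLC-03", "qualified": True,
        "cal_due": date(2025, 9, 30), "open_deviation": False}

ok, why = fit_for_use(hplc, date(2025, 10, 15))
print(ok, why)  # → False calibration overdue — testing halts, deviation raised
```

Because the check runs before execution rather than at review, out-of-status work is prevented instead of discovered.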
9) Sampling Strategy & Representativeness
Good results start with good samples. Sampling plans reflect material risk, variability, and regulatory expectations, often expressed as AQL or statistically justified approaches per GMP sampling plans. Procedures define where, when, and how to sample; container/closure integrity; composite versus individual sampling; and retention samples. For micro testing, aseptic technique and environmental conditions are critical determinants of result reliability.
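For container sampling of incoming materials, one commonly cited starting heuristic is the √N + 1 rule. This is a sketch of that heuristic only; actual plans must be risk-justified for the material and jurisdiction, and may instead use AQL tables or other statistical approaches:

```python
import math

def sqrt_n_plus_one(containers: int) -> int:
    """√N + 1 container-sampling heuristic (rounded up) — a common
    starting point for raw-material identity sampling, not a substitute
    for a risk-justified plan."""
    return math.ceil(math.sqrt(containers)) + 1

for n in (4, 10, 100):
    print(n, "->", sqrt_n_plus_one(n))  # → 4 -> 3, 10 -> 5, 100 -> 11
```

The rule grows sample counts sub-linearly with lot size, which is why high-risk materials typically demand a stricter, explicitly justified plan.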
10) Calculations, Statistics & Trending
Calculations must be unit‑aware and rule‑based for rounding and significant figures. Laboratories trend method and product data using statistical controls (see SPC control limits) and feed long‑term signals into CPV and PQR. Distinguish clearly between control limits (process stability) and specifications (fitness for use); a result can meet specs yet still warn of drift if it challenges control limits or becomes OOT.
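Rounding discipline is easy to get wrong with binary floats. A sketch of final-step rounding to the specification's decimal places using half-up rounding, a common convention; the method's own rounding rule always governs:

```python
from decimal import Decimal, ROUND_HALF_UP

def report(value: float, spec_places: int) -> Decimal:
    """Round a final result to the specification's decimal places with
    half-up rounding. Intermediates keep full precision; rounding happens
    once, at the reported figure."""
    quantum = Decimal(10) ** -spec_places
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)

raw = 99.849999  # full-precision intermediate from the calculation engine
print(report(raw, 1))      # → 99.8 — compare this, not raw, to 98.0-102.0
print(report(99.85, 1))    # → 99.9 — half-up at the boundary
```

Using `Decimal` avoids the float artifacts (e.g., `round(99.85, 1)` behaving unexpectedly in binary arithmetic) that make results hard to reproduce at review.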
11) Warehouse & Disposition Interfaces
Until testing and review are complete, inventory should remain on Quarantine/Hold in the WMS. Approved results trigger QA Disposition, flipping status in inventory and, for finished goods, enabling shipment. Sampling points and holds should be visible to operations to prevent premature use or release, and Goods Receipt should enforce sampling and testing requirements automatically.
12) Stability, Shelf‑Life & Labeling
Stability testing establishes expiry or retest dates under defined storage conditions, with results trending into shelf‑life governance (see Shelf‑Life/Expiry Control). Stability failures or trends can drive specification changes, label updates, or CAPA. CoAs should clearly state storage conditions and, where applicable, retest dates aligned to stability evidence.
13) Metrics That Demonstrate Control
- Lab Cycle Time: registration‑to‑approval days for critical tests and lots.
- Right‑First‑Time: percentage of results passing review without rework.
- OOS/OOT Rates: frequency, closure time, and recurrence by method/product.
- Audit‑Ready Completeness: results with full raw data, suitability, and signatures.
- Instrument Uptime & Status: percent of tests executed on in‑status assets.
- CoA Amendments: rate and cause of post‑issue corrections (target: near‑zero).
These indicators, trended over time, reveal method capability, review discipline, and data integrity maturity.
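Two of these indicators can be computed directly from result records. A sketch using hypothetical records with illustrative field names:

```python
from datetime import date
from statistics import mean

# Hypothetical result records; fields are illustrative.
results = [
    {"registered": date(2025, 9, 1), "approved": date(2025, 9, 4), "rework": False},
    {"registered": date(2025, 9, 2), "approved": date(2025, 9, 9), "rework": True},
    {"registered": date(2025, 9, 3), "approved": date(2025, 9, 5), "rework": False},
]

# Lab Cycle Time: mean registration-to-approval interval in days
cycle_days = mean((r["approved"] - r["registered"]).days for r in results)

# Right-First-Time: share of results passing review without rework
rft = 100 * sum(not r["rework"] for r in results) / len(results)

print(f"Lab cycle time: {cycle_days:.1f} days; Right-First-Time: {rft:.0f}%")
# → Lab cycle time: 4.0 days; Right-First-Time: 67%
```

Trended per method and per product rather than lab-wide, the same two numbers localize where review discipline or method capability is slipping.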
14) Common Pitfalls & How to Avoid Them
- Manual transcription errors. Integrate instruments to LIMS/ELN; use validated calculations.
- Uncontrolled reprocessing/integration. Enforce rules, document reasons, and review audit trails.
- Method drift. Routine suitability checks, MSA, and CPV trending catch early signals.
- Sample integrity gaps. Control chain of custody, storage conditions, and hold times.
- Out‑of‑status instruments. Hard interlocks via asset status; block testing if overdue.
- Weak review. Train reviewers; use checklists tied to Part 11/Annex 11 and SOPs.
15) What Goes on the CoA & in the Review Record
A robust CoA identifies the product, lot/batch, specification reference and revision, methods used, individual results with units and acceptance criteria, overall pass/fail, storage conditions/retest date, and authorized signatures with dates. The review record should cite raw data locations, confirm suitability and calculation checks, resolve any queries, and link any deviations, OOS/OOT, or CAPA that influenced the decision.
16) How This Fits with V5 by SG Systems Global
V5 LIMS & ELN. The V5 platform registers samples with barcodes and chain‑of‑custody, auto‑assigns tests from specifications, and enforces method versions under Document Control. Analysts execute guided steps in the ELN—reagent prep, dilutions, suitability checks—with contemporaneous entries tied to user identity. Instrument data can be captured directly to eliminate transcription, while a validated calculation engine applies unit conversions and rounding rules consistently across methods.
Status & Suitability Interlocks. V5 checks asset status and inventory of standards before work can start; if an instrument or standard is out of tolerance or expired, the system blocks execution and raises an exception. Suitability templates for HPLC and other methods ensure the method is performing before samples are analyzed.
Review by Exception & Data Integrity. Second‑person review uses dashboards that flag missing raw data, failed suitability, outliers, or unacknowledged audit‑trail events. Audit trails and e‑signatures are enforced at key steps, and the platform is validated under CSV to support Part 11/Annex 11 expectations. Method changes route through MOC and training, keeping analysts aligned with the effective version.
OOS/OOT, MRB & CAPA Integration. Suspect results can automatically open deviations/NCs, link to MRB for disposition, and escalate to CAPA when root causes are systemic. Investigations carry hypotheses, retest rules, and impact assessments, so QA can see the full decision tree in one place.
Release, Traceability & CPV. Once approved, results populate the CoA and flip inventory status through QA Disposition in the WMS. Results stream into CPV dashboards with SPC controls, allowing proactive detection of OOT trends that would otherwise surface late in stability or the field. End‑to‑end genealogy ensures every tested result is traceable to materials, processes, and shipments.
Bottom line: V5 makes laboratory testing defensible and fast by binding the method, the machine, the analyst, and the result into a single, version‑controlled narrative—ready for review, release, and inspection.
17) FAQ
Q1. What’s the difference between a retest and a resample?
A retest re‑analyzes the original sample; a resample takes new material from the lot. Resampling requires predefined criteria and is not a tool to “test into compliance.”
Q2. Can I delete failed system suitability runs?
No. All runs—pass or fail—remain part of the permanent record with audit‑trail context. Suitability failures halt testing and must be assessed for impact.
Q3. When can I average replicates?
Only when the method defines replicate rules and acceptance criteria. Report significant figures and rounding per the method; do not average away outliers without justification.
Q4. Is manual chromatographic integration allowed?
Yes, if the method permits it, reasons are documented, and a reviewer confirms compliance with defined integration rules. Every change must be traceable in the audit trail.
Q5. Who approves the CoA?
Typically QC issues the CoA after technical review; QA uses it with the broader evidence set to decide Hold/Release and Lot Release.
Q6. Can we release with a closed deviation affecting a test?
Possibly—if root cause and product impact are fully understood, data integrity is intact, and CAPA is effective. QA determines release based on the totality of evidence.
Related Reading
• Governance & Integrity: 21 CFR Part 11 | Annex 11 | CSV | Audit Trail | Data Integrity
• Lab Platforms & Methods: LIMS | ELN | HPLC | Karl Fischer | MSA
• Sampling & Trending: Sampling Plans | SPC Control Limits | CPV | PQR
• Decisions & Disposition: Quality Control (QC) | Quality Assurance (QA) | OOS | OOT | Lot Release
• Stability & Shelf‑Life: Shelf‑Life & Expiry Control