Annex 11 – Computerised Systems (EU GMP)
This topic is part of the SG Systems Global regulatory & operations glossary.
Updated October 2025 • EU GMP, Electronic Records & Signatures, Validation, Data Integrity • Pharma, Biologics, ATMP, Devices, Cosmetics, Food Supplements
EU GMP Annex 11 sets the bar for how life‑science manufacturers specify, validate, operate, and retire computerised systems that influence product quality and patient safety. If a system can change a batch outcome—or the decision to release it—Annex 11 is in scope. The expectation is simple and unforgiving: prove fitness for intended use, keep it under control, and preserve trustworthy records for as long as they matter. The annex leans on risk‑based thinking (QRM), lifecycle validation (CSV), and Data Integrity (ALCOA+) to ensure electronic records—including eBMR and DHR—are complete, consistent, and reviewable.
“If it affects quality, it’s Annex 11. Show your logic, your controls, and your evidence—or be ready to stop the line.”
1) Scope—What’s In and Why It Matters
Annex 11 reaches beyond the obvious MES/LIMS/QMS stack. It includes label systems (Label Verification, UDI), WMS, equipment interfaces (SCADA, balances, checkweighers), and spreadsheets used to make release decisions. If a calculation, scan, or signoff influences identity, strength, quality, purity, traceability, or release status, it’s Annex 11 territory. The scope aligns with 21 CFR Part 11 but is broader on lifecycle and operations—design for both if you supply to the EU and US.
2) Lifecycle & Validation—Risk‑Based, Evidence‑Driven
Start with a top‑level VMP and a defensible URS. Derive functional and design specifications, then test where risk demands it. Infrastructure (servers, cloud, networks) is qualified; application configurations (recipes, forms, label templates) are validated as you would code—because they are code. Use QRM to target testing on failure modes that threaten patient safety, product quality, or data integrity. Maintain a living trace matrix from requirement → risk → control → test → release note. Validate change and deployment processes themselves—not just the app—so that future updates don’t quietly break compliance.
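The living trace matrix described above can be modeled as simple linked records. This is an illustrative sketch only—names like `TraceItem` and the sample URS IDs are hypothetical, not from the annex:

```python
from dataclasses import dataclass, field

@dataclass
class TraceItem:
    """One row of a living trace matrix: requirement through risk and control to test evidence."""
    requirement: str
    risk: str            # failure mode this requirement mitigates
    control: str         # implemented control (configuration, SOP, interlock)
    tests: list = field(default_factory=list)   # (test case ID, passed?) pairs
    released: bool = False

def untested_high_risk(matrix):
    """Flag requirements that reached release without any passing test evidence."""
    return [t.requirement for t in matrix
            if t.released and not any(passed for _, passed in t.tests)]

matrix = [
    TraceItem("URS-014 operator cannot self-approve", "data integrity breach",
              "role model + workflow gate", tests=[("OQ-101", True)], released=True),
    TraceItem("URS-022 block release on open deviation", "bad batch released",
              "release interlock", tests=[], released=True),
]
# URS-022 was released with no passing test — exactly what the matrix should surface
print(untested_high_risk(matrix))
```

A query like this is what turns the matrix from a spreadsheet artifact into an operational control: gaps between risk and evidence become visible before an inspector finds them.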
Vendor documentation helps, but it doesn’t replace user validation. A polished brochure is not test evidence. Confirm your intended use in your environment with your data volumes, integrations (ISA‑95), and role model. For cloud and multi‑tenant SaaS, establish how the provider qualifies infrastructure updates, exposes release notes, and supports your re‑validation cadence without surprise downtime.
3) Roles, Responsibilities & Supplier Assurance
Every system needs a business owner (accountable for use and data), QA oversight, and IT operations support. Define who can approve changes, who reviews audit trails, who grants access, and who signs release. For external parties, execute quality agreements that pin down response times, incident routing, data access, and notification obligations for changes (NOC), breaches, and incidents. Audit suppliers proportionate to risk; if your batch release depends on their app, you need to understand their SDLC, testing, and security posture—not guess.
4) Data Integrity—ALCOA+ by Design, Not Slogan
Records must be attributable, legible, contemporaneous, original, and accurate (ALCOA) with completeness, consistency, durability, and availability layered on. Configure audit trails for every GxP object: create/modify/delete, identity of the actor, reason for change, timestamps, old vs. new values. Make them reviewable on a defined cadence; “audit trail enabled” is meaningless if nobody looks. Prohibit uncontrolled offline edits, export/re‑import “fixes,” and shadow spreadsheets. Electronic signatures must be uniquely linked to user identity, the exact content signed (not just a hash in a vacuum), and the meaning of the action (review/approve/verify).
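The audit‑trail requirements above—actor, timestamp, old vs. new values, mandatory reason—can be sketched as an append‑only log. This is a minimal illustration, not a reference implementation; the `AuditTrail` class and field names are assumptions for the example:

```python
import datetime

class AuditTrail:
    """Minimal append-only trail: who, when, what changed, and why (ALCOA+)."""
    def __init__(self):
        self._entries = []          # entries are only ever appended, never edited

    def record(self, actor, obj_id, field, old, new, reason):
        if not reason.strip():
            raise ValueError("change rejected: a reason for change is mandatory")
        self._entries.append({
            "actor": actor,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "object": obj_id, "field": field,
            "old": old, "new": new, "reason": reason,
        })

    def entries_for(self, obj_id):
        return [e for e in self._entries if e["object"] == obj_id]

trail = AuditTrail()
trail.record("jdoe", "BATCH-0042", "assay_pct", 98.1, 99.0, "transcription error corrected")
try:
    trail.record("jdoe", "BATCH-0042", "assay_pct", 99.0, 99.5, "")
except ValueError:
    pass  # a blank reason is refused, so the edit never lands in the record
```

The design point: the reason‑for‑change check sits in the capture path, so an uncontrolled edit is impossible rather than merely discouraged.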
Device data matters. If you pull weights from a balance or machine vision flags a defect, preserve the source ID, time, and verification state. Calibration status must be known at the point of capture; if an instrument is out of tolerance, your system should block or quarantine the affected lots automatically (Quarantine/Hold).
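The calibration interlock described above can be sketched as a gate at the point of capture. The instrument register and thresholds here are invented for illustration:

```python
from datetime import date

INSTRUMENTS = {  # hypothetical calibration register
    "BAL-07": {"status": "in_tolerance", "cal_due": date(2026, 3, 1)},
    "BAL-12": {"status": "out_of_tolerance", "cal_due": date(2025, 1, 15)},
}

def accept_weight(instrument_id, value, capture_date):
    """Accept device data only when calibration status is known-good at capture."""
    inst = INSTRUMENTS.get(instrument_id)
    if inst is None:
        return ("REJECT", "unknown instrument")
    if inst["status"] != "in_tolerance" or capture_date > inst["cal_due"]:
        return ("QUARANTINE", f"{instrument_id} calibration not valid at capture")
    return ("ACCEPT", value)

print(accept_weight("BAL-07", 12.504, date(2025, 10, 1)))   # accepted
print(accept_weight("BAL-12", 12.498, date(2025, 10, 1)))   # quarantined
```

Note that the out‑of‑tolerance path quarantines rather than discards: the data and its disposition both stay in the record.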
5) Access Control—Least Privilege or Bust
Implement role‑based UAM with unique users, MFA where practical, and time‑bound privilege elevation. Review access routinely and on personnel changes. Hard rule: nobody approves their own work. Segregate development, test, and production. If your vendor suggests “shared admin accounts,” push back. It’s 2025; shared credentials are indefensible in a GxP environment.
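The two hard rules above—role‑based privilege and no self‑approval—compose into a single authorization check. A minimal sketch, with a hypothetical two‑role model:

```python
ROLES = {"operator": {"execute"}, "reviewer": {"execute", "approve"}}  # assumed role model

def can_approve(user, user_role, record_author):
    """Least privilege plus segregation of duties: right role, and never your own work."""
    if "approve" not in ROLES.get(user_role, set()):
        return False
    return user != record_author          # hard rule: no self-approval

assert can_approve("alice", "reviewer", record_author="bob")
assert not can_approve("alice", "reviewer", record_author="alice")  # own work
assert not can_approve("carol", "operator", record_author="bob")    # no privilege
```

Because both conditions live in one function, there is no configuration path where a privileged user can slip past the self‑approval rule.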
6) Configuration, Content & Change Control
Treat configuration like code: recipes, label templates, SOP links, checklists, SPC limits, and workflow logic must be versioned, reviewed, tested, and released under Change Control. Document design intent and the risk you’re mitigating; include regression tests for critical paths. Perform periodic reviews to confirm that validated state, security posture, and incident learnings still hold. When in doubt, re‑validate with a focused impact assessment rather than rolling the dice on “probably fine.”
7) Interfaces & Automation—Where Errors Multiply
Annex 11 expects you to understand and control data flows end‑to‑end (ISA‑95): ERP ↔ MES ↔ LIMS ↔ QMS ↔ WMS ↔ label/serialization. Specify message formats (EDI, EPCIS), timing, retries, and error handling; validate at realistic volumes, not toy datasets. For automation—balances (gravimetric), micro‑dosing, vision, serialization—demonstrate accuracy, latency, and tamper‑evidence. Prove labeling logic with label verification checks in the line clearance workflow (Line Clearance).
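The retry and error‑handling expectations above can be sketched for a single interface message. The transport callables here are stand‑ins; real systems would persist the dead‑letter queue and alert on it:

```python
def deliver(message, send, max_retries=3):
    """Retry transient interface failures; park the message for review otherwise.
    `send` is a hypothetical transport callable that raises on failure."""
    for attempt in range(1, max_retries + 1):
        try:
            send(message)
            return ("DELIVERED", attempt)
        except ConnectionError:
            continue                 # transient fault: retry
        except ValueError:
            break                    # malformed payload: retrying won't help
    return ("DEAD_LETTER", message)  # never silently drop GxP data

calls = {"n": 0}
def flaky_send(msg):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("timeout")

def bad_send(msg):
    raise ValueError("schema violation")

print(deliver({"lot": "L123", "net_g": 499.8}, flaky_send))  # delivered on attempt 3
print(deliver({"lot": "L124"}, bad_send))                    # dead-lettered for review
```

The key distinction is between transient and structural failures: retrying a schema violation only delays the deviation, so it goes straight to review.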
8) Execution Records—eBMR/eDHR Without Gaps
Electronic batch and device history records must enforce steps, capture materials and lot genealogy, apply spec limits, and require signatures at the right points. Use dual verification for high‑risk actions. Disallow release if any required data, review, or signature is missing. Integrate environmental and utility data (EM, temperature mapping) and ensure the audit trail is part of the record package—not an optional export at inspection time. For controlled reprocessing, link to deviation and risk assessments explicitly.
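The release gate described above—no release with any gap in the record—can be sketched as a completeness check over the executed record. The record structure is invented for the example:

```python
def release_decision(record):
    """Hard gate: a batch cannot be released with any gap in its record."""
    gaps = []
    for step in record["steps"]:
        if step.get("result") is None:
            gaps.append(f"step {step['id']}: no result")
        if step.get("signature") is None:
            gaps.append(f"step {step['id']}: unsigned")
    if record.get("open_deviations"):
        gaps.append(f"open deviations: {record['open_deviations']}")
    return ("RELEASE", []) if not gaps else ("BLOCKED", gaps)

batch = {"steps": [{"id": "DISP-1", "result": 12.5, "signature": "op1"},
                   {"id": "BLEND-2", "result": "pass", "signature": None}],
         "open_deviations": ["DEV-031"]}
print(release_decision(batch))  # blocked: unsigned step and open deviation
```

Returning the full list of gaps, rather than just a boolean, is what makes the gate reviewable: QA sees exactly what is missing, and the same list becomes evidence once resolved.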
9) Incidents, Deviations, CAPA—Own the Root Cause
System incidents and data integrity concerns route into Deviation/NC. Investigate with RCA, fix the defect, and implement CAPA that prevents recurrence (not just “retrained user”). If impact touches released product, evaluate recall readiness and mass balance evidence. Uncomfortable truth: if the audit trail shows edits with weak reasons and no second‑person review, you’ll be writing commitments to inspectors. Build the habit before the audit.
10) Business Continuity—Backups You Can Actually Restore
Backups are meaningless until you prove restores. Validate that you can reconstruct entire records—data, metadata, signatures, related attachments, and the audit trail—inside a sandbox on a realistic timeline (RTO/RPO). For cloud, document provider responsibilities and your tests. Include DR failover scenarios and practice them. If your first full restore is during an inspection, you’ve already lost.
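A restore test worth the name verifies that the whole record—data, metadata, signatures, audit trail—came back byte‑for‑byte. A minimal sketch of that verification, using a content checksum over the serialized record (structure and names are assumptions):

```python
import hashlib
import json

def checksum(record):
    """Stable fingerprint of a record including metadata and audit trail."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def verify_restore(original, restored):
    """A restore test passes only if every section of the record survives intact."""
    missing = [k for k in original if k not in restored]
    if missing:
        return ("FAIL", f"missing sections: {missing}")
    if checksum(original) != checksum(restored):
        return ("FAIL", "content mismatch after restore")
    return ("PASS", None)

record = {"data": {"net_g": 499.8}, "signatures": ["qa1"],
          "audit_trail": [{"actor": "jdoe", "change": "net_g"}]}
restored_ok = json.loads(json.dumps(record))                       # faithful restore
restored_bad = {k: v for k, v in record.items() if k != "audit_trail"}  # trail lost

print(verify_restore(record, restored_ok))    # passes
print(verify_restore(record, restored_bad))   # fails: audit trail missing
```

The second case is the one that bites in practice: a backup that restores the data but drops the trail looks fine in a spot check and fails Annex 11 completely.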
11) Retention & Archival—Readable for the Whole Lifecycle
Retention follows product and regulatory clocks. Archived records must remain readable, retrievable, and trustworthy for years. Use controlled archival with integrity checks (checksums, signature verification) and tested migration plans when formats or platforms change. Don’t park eBMR PDFs in a file share and call it done; prove you can reproduce the full record with context and audit trail intact.
12) Spreadsheets & Local Tools—Validate or Replace
Critical spreadsheets are applications. Lock cells, document formulas, version under Document Control, track changes, and validate. If risk is high, retire them in favor of validated system functions. The worst inspection finding is “we discovered a hidden column that changes potency calculations.” Prevent that with transparent, validated logic and V&V.
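Migrating a spreadsheet formula into a validated system function might look like this: the logic becomes visible, versioned, and covered by tests that serve as validation evidence. The yield calculation here is a generic illustration:

```python
def percent_yield(actual_kg, theoretical_kg):
    """Transparent, versioned replacement for a hidden spreadsheet yield formula."""
    if theoretical_kg <= 0:
        raise ValueError("theoretical quantity must be positive")
    return round(100.0 * actual_kg / theoretical_kg, 1)

# unit tests captured as validation evidence, not buried in a worksheet
assert percent_yield(492.5, 500.0) == 98.5
assert percent_yield(500.0, 500.0) == 100.0
```

The point is not the arithmetic—it is that the formula, its guard conditions, and its test evidence all live in one reviewable, change‑controlled place.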
13) The Practical Annex 11 Package—What Good Looks Like
- System Inventory: GxP criticality, owners, data classification, interfaces, and hosting model.
- Requirements & Risks: URS with risk ranking mapped to controls and tests (risk register).
- Validation Evidence: IQ/OQ/PQ with realistic scenarios (PPQ analogies), data migration tests, and performance/volume results.
- Operational Controls: SOPs for access management, change, incident handling, audit‑trail review, periodic review, and backup/restore drills.
- Configurations Under Control: versioned recipes, label templates, limits, with approvals in Document Control.
- Release Readiness: signed release package with trace matrix, deviation closures, and training updates (Training Matrix).
- Metrics: coverage of audit‑trail reviews, stale access removals, restore test pass rate, and change defects per release.
14) How This Fits with V5 by SG Systems Global
Lifecycle & CSV. The V5 platform is implemented under a documented CSV approach with risk‑based test packs and signed releases. Requirements trace into test evidence and into the audit trail of the deployed version. Customer‑specific configurations—formulas, recipe versioning, pack/ship rules—are controlled the same way.
Execution Controls. V5 enforces hard‑gating (electronic pass/fail) for critical steps, checks calibration status before device data is accepted, and blocks release when signatures or data are missing. Label and serialization templates are validated objects with approval history.
Traceability. Every material movement is tied to lots and genealogy; FEFO/FIFO rules are enforced in WMS; Hold status prevents inadvertent shipping before QA disposition.
Data Integrity. Audit trails capture before/after values, reasons, users, and timestamps; reports pull from executed data, not re‑computed spreadsheets. Access is role‑based with periodic review and MFA options (UAM).
Continuity. Backup/restore is tested to recover full records and trails; archival plans preserve readability over time. Bottom line: V5 operationalizes Annex 11 so execution, data, and review stay aligned.
15) Labs, Methods & Data—Bridging Annex 11 with 17025
Where Annex 11 meets laboratories, expect cross‑over with ISO/IEC 17025 and TMV. LIMS and instrument interfaces must maintain chain of custody, secure audit trails, and traceable calibrations. Sampling plans (GMP sampling) and disposition decisions flow back to MES/QMS with full context. If your potency, ID, or micro data drives release, Annex 11 expects the electronic link to be robust and reviewable.
16) Warehousing & Logistics—Quality Doesn’t Stop at the Dock
Inventory status, pack & ship, and unit/case/pallet IDs are within scope when they influence release. Validate carrier labels, case algorithms (GS1‑128), and shipment documents (BOL). Confirm that the WMS cannot ship items in Hold and that recall readiness can trace orders rapidly with complete genealogy.
17) Performance, CPV & Ongoing Fitness
Post‑go‑live, keep proving fitness with CPV‑style monitoring: transaction rates, error rates, audit‑trail anomalies, security events, restore tests passed, and time‑to‑close deviations. For processes with statistical controls (SPC), track Cp/Cpk on critical data streams. When performance drifts or incident patterns emerge, treat that as a signal to tighten controls or re‑validate portions of the stack.
18) Common Pitfalls—And the Fix
- “Config isn’t code.” Wrong. Manage recipes, labels, limits, and workflows as validated, versioned objects under Document Control.
- Audit trail is on but unread. Schedule reviews, train reviewers, and sample high‑risk objects. No review = no control.
- Paper printouts as primary. Unless justified, electronic records are primary. Printing does not preserve metadata and context.
- Weak change impact. Use risk assessments tied to specific data flows and failure modes. Test what can actually hurt you.
- One‑time validation. Periodic review keeps you honest after updates, incidents, or supplier changes.
- Shadow spreadsheets. Replace with validated functions or bring them under full V&V.
- Unqualified instruments feeding MES. Block data from devices with unknown status.
- Missing segregation of duties. Enforce that creators cannot self‑approve; log and review any emergency overrides.
- Uncontrolled integrations. Treat interfaces as validated objects with versioning and regression tests.
19) Metrics That Convince Inspectors
- Validated State Coverage: % of GxP systems with current URS, risk assessment, and recent periodic review.
- Access Hygiene: stale accounts closed; privileged access time‑bound; MFA coverage.
- Audit‑Trail Health: % of high‑risk objects reviewed on schedule; anomalies resolved within SLA.
- Change Quality: changes with impact analysis and passed regression tests; escaped defects per release.
- Continuity Assurance: restore test pass rate; median time‑to‑restore vs. RTO; completeness of restored records.
- Data Integrity Incidents: trend down and closed with effective CAPA, not “retrained user” boilerplate.
20) Quick‑Start Checklist (Use This Tomorrow)
- List your top 10 GxP systems and mark which generate or host primary records.
- Confirm audit trails are enabled and reviewed for those records; document the last review.
- Run an access review; close stale accounts; enforce least privilege in UAM.
- Pick one system and perform a full restore test of a representative record set.
- Identify one shadow spreadsheet that drives a release decision; either validate it or replace it.
- Schedule periodic reviews with risk‑based depth; tie findings to CAPA.
- Document interface versions and set up regression tests before the next release window.
21) Clause‑by‑Clause Walkthrough (Practical View)
Risk Management. Use QRM to rank functions by impact; concentrate testing and controls where the harm is real. Tie risks to controls and to audit‑trail scope. Keep the register current when processes or suppliers change.
Personnel & Training. Map competencies to roles and systems via a maintained Training Matrix. Training completion does not equal competency—assess effectiveness where risk is high.
Suppliers & Service Providers. Perform proportionate qualification of software vendors, hosting, and managed services. Quality agreements must define incident SLAs, vulnerability disclosure, and data portability.
Validation. Validate intended use, not just features. Leverage vendor docs but generate your own risk‑based OQ/PQ with traceability. Re‑validate after significant changes to code, configuration, or infrastructure.
Data. Define data owners, classifications, and retention. Control master data (recipes, specs, label claims) under Document Control with effective dating.
Accuracy Checks. Build automated verification for critical calculations (potency adjustments, conversions), with dual verification where manual entry remains.
Data Storage. Ensure reliable storage with integrity checks; protect against silent corruption; test restores routinely.
Printouts. If you print, capture context and metadata or embed a verifiable record ID. The electronic record remains primary unless justified.
Audit Trails. Enable, scope, review, and secure them. Keep them inseparable from the primary record and protect from alteration.
Electronic Signatures. Unique, bound to meaning, and backed by identity management; prevent delegation without trace.
Batch/Release. Electronic sign‑off must reflect a complete, consistent record; block release on missing data or open deviations.
Business Continuity. Written, tested plans; documented outcomes; improvement actions tracked to closure.
Archiving. Preserve readability and trust; plan for format and platform change; test retrievals periodically.
22) Worked Examples (What Inspectors Love)
Example A—Potency Adjustment in Formulation. A validated rule increases API charge when assay is below target. Evidence includes: controlled formula version, TMV for the calculation pathway, dual verification of assay input, audit‑trail of the rule firing, and batch outcome within spec. Related links: Potency Adjustment, V&V.
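The adjustment rule in Example A can be made concrete. This sketch combines the dual‑verified assay entry with the potency correction; the tolerance and function names are illustrative assumptions, not a prescribed method:

```python
def adjusted_charge(target_g, assay_entry_1, assay_entry_2, tolerance=0.1):
    """Dual-verified assay input feeding a potency-corrected API charge."""
    if abs(assay_entry_1 - assay_entry_2) > tolerance:
        raise ValueError("second-person entry disagrees — resolve before charging")
    assay = (assay_entry_1 + assay_entry_2) / 2
    return round(target_g * 100.0 / assay, 2)

# assay below target (95.0%), so the charge scales up to hit label claim
print(adjusted_charge(100.0, 95.0, 95.0))  # 105.26
```

Firing this rule inside the validated workflow—rather than in an operator's calculator—is what produces the audit‑trail and TMV evidence the example calls for.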
Example B—Net Content Control with Guardbanding. Checkweigher integrates with MES; targets and SPC limits are controlled objects; sub‑TNE risk is monitored; label claims pull executed nets. Evidence: device IDs and status, tare tables under Document Control, and audit‑trail of any target change. Related links: TNE, Control Limits.
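The guardbanding in Example B can be sketched as a check that rejects before the legal limit is reached. The TNE and guardband values below are illustrative, not regulatory figures:

```python
def net_content_ok(net_g, label_claim_g, tne_g, guardband_g=0.5):
    """Guardbanded net-content check: act before the legal TNE limit is hit.
    `tne_g` is the tolerable negative error for this fill size (assumed input)."""
    lower_legal = label_claim_g - tne_g
    lower_guarded = lower_legal + guardband_g   # act early, inside the legal band
    return net_g >= lower_guarded

assert net_content_ok(499.0, 500.0, tne_g=7.5)      # comfortably inside the band
assert not net_content_ok(492.8, 500.0, tne_g=7.5)  # legal, but guard-banded out
```

The second case is the interesting one: the fill is still legal, but the guardband rejects it so that measurement uncertainty can never push a pack past the TNE limit.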
Example C—Electronic Label Approval. Label templates are versioned and validated; barcode symbology verified; UDI and GTIN rules enforced; release is blocked if template is not effective‑dated for the lot. Related links: Label Verification, GS1 GTIN.
23) Questions Inspectors Ask (Prepare Answers Now)
- Show me the trace matrix for this system—requirement to test to release. Where’s the risk assessment?
- Demonstrate an audit‑trail review on a critical object. What’s your review frequency and who signs it?
- Restore this batch record—including audit trail—into a sandbox. How long did it take and what’s your RTO?
- Walk me through change X—impact analysis, tests executed, and production verification.
- How do you ensure users can’t approve their own work? Show me the role model and last access review.
- What happens if this instrument is out of calibration? Prove the interlock and the disposition flow.
24) FAQ
Q1. Is Annex 11 the same as 21 CFR Part 11?
No. They overlap on electronic records/signatures, but Annex 11 is broader on lifecycle, risk management, and operations. If you sell in both EU and US, design for both from day one.
Q2. Do COTS systems still need validation?
Yes. Vendor documentation supports your effort, but you must validate your intended use, configuration, roles, volumes, and interfaces in your environment.
Q3. Are paper printouts acceptable as primary records?
Only with strong justification. Annex 11 generally treats electronic records as primary; printing does not preserve metadata, signatures, or audit trail context.
Q4. Do spreadsheets fall under Annex 11?
If they impact GxP decisions, yes—control and validate them or migrate the logic into a validated system function.
Q5. How often should we review access and audit trails?
On a defined, risk‑based frequency (often monthly/quarterly), and ad‑hoc after incidents or major changes. Reviews must be documented and sampled for effectiveness.
Q6. What’s the quickest way to show validated state?
Maintain a live trace matrix, current risk assessment, last periodic review, restore‑test evidence, and a signed release package—ready to hand to an inspector.
Related Reading
• Foundations: CSV | QRM | Predicate Rule
• Records & Integrity: Data Integrity | Audit Trail | Record Retention & Archival
• Execution & Release: eBMR | DHR | Label Verification
• Governance: Document Control | Change Control | UAM