Responsible Use of AI in V5 – ISO 42001, AI Trustworthiness, Risk Management and GMP Reality
Artificial Intelligence is arriving in MES, QMS and WMS platforms long before regulators have finished defining their expectations. Life-science, food, cosmetics and chemical manufacturers are stuck between pressure to “add AI” and the reality that GMP, data integrity and validation have not gone away. The core question is not, “Do you have AI?” It is, “Can you prove that AI does not undermine your control of the process?”
SG Systems Global has taken a deliberately conservative position with V5 Traceability. V5 is not a black-box AI platform. It is a hard-gated execution system that can incorporate intelligence in a way that remains transparent, reviewable, and auditable. Where AI is used, it is bounded by approval workflows, signatures and audit trails in line with 21 CFR Part 11, data integrity expectations and modern guidance on AI governance.
“Our philosophy is simple: AI can suggest, but it cannot silently decide. In V5, every AI-assisted action is traceable, reviewable and subject to the same controls as any other GMP-critical step.”
— Head of Quality, Regulated Manufacturer
The Standards Landscape for AI in Regulated Manufacturing
Several standards and guidance documents now shape expectations for responsible AI use. Three of the most relevant are:
- ISO/IEC 42001 – Artificial Intelligence Management System (AIMS). See glossary entry: ISO/IEC 42001 – AI Management System.
- ISO/IEC TR 24028 – AI Trustworthiness – security, robustness, transparency, bias, human oversight.
- ISO/IEC 23894 – AI Risk Management – risk assessment and control for AI across the lifecycle.
In regulated manufacturing, these emerging AI standards sit on top of familiar foundations:
- 21 CFR Part 11 – electronic records and signatures.
- GAMP 5 and CSV / CSA thinking.
- Quality risk management (ICH Q9, FMEA, HACCP, etc.).
- ALCOA+ data integrity and audit trails.
For customers, the practical question becomes: does the platform’s AI behavior align with these principles, and can that alignment be demonstrated during inspection, not just promised in a slide deck?
What “AI in V5” Actually Means
In V5, “AI” does not mean pushing uncontrolled large language models directly into batch records, labeling or release decisions. Instead, V5 focuses on:
- Deterministic rules and hard-gating of execution steps.
- Assisted actions where AI suggests content (e.g. training drafts, document summaries) but cannot bypass approval workflows.
- Strict separation between suggestion layers and GMP-critical records like eBMR, DHR and MMR.
- Minimal external APIs, with imported data routed through approval workflows and checks before use.
- Audit-ready logs for any AI-assisted event, tied back to users, assets and lots.
When V5 integrates with external AI (for example, to assist with training content), that integration runs in a controlled sandbox. Outputs are treated as drafts that must be reviewed and approved before they become part of the controlled quality system.
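To make that separation concrete, here is a minimal sketch, in illustrative Python rather than actual V5 code, of how a suggestion layer can be kept structurally apart from controlled records. All names are hypothetical; the point is that the only route out of the draft state goes through a human signature.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class RecordState(Enum):
    AI_DRAFT = "ai_draft"      # AI-generated, uncontrolled
    IN_REVIEW = "in_review"    # assigned to a qualified reviewer
    APPROVED = "approved"      # signed and promoted into the quality system


@dataclass
class SuggestionRecord:
    """An AI suggestion that can never mutate a controlled record directly."""
    content: str
    source: str                # e.g. "training-assistant" (hypothetical)
    state: RecordState = RecordState.AI_DRAFT
    audit_trail: list = field(default_factory=list)

    def approve(self, reviewer: str, signature: str) -> None:
        # Promotion requires an explicit human signature; there is no
        # code path from AI_DRAFT to APPROVED without one.
        if not signature:
            raise PermissionError("Approval requires an electronic signature")
        self.state = RecordState.APPROVED
        self.audit_trail.append(
            (datetime.now(timezone.utc), reviewer, "approved", signature)
        )
```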
ISO/IEC 42001 – AI Management System and the V5 Governance Model
ISO/IEC 42001 defines how an organization structures its AI Management System (AIMS): policies, roles, risk assessment, controls, documentation and improvement. It is the AI analogue of ISO 9001 or ISO 27001, but focused on governance of AI activities.
V5 is designed to sit comfortably inside such an AIMS. At a high level:
- Scope – customers can explicitly scope which V5 features use AI (if any) and lock others to deterministic rules only.
- Roles and responsibilities – AI-assisted features can be limited to defined roles and must follow documented SOPs.
- Risk assessment – each AI use case can be evaluated using the same quality risk management tools already used for process or equipment changes.
- Control measures – hard-gates, approvals and audit trails in V5 become the concrete “control” layer for AI suggestions or imported content.
- Monitoring and improvement – V5 provides event logs and metrics that can feed back into the organization’s AIMS review cycle.
The result is that AI never sits “above” the quality system. It becomes another input into a well-defined digital environment where every action is logged and every critical decision remains under human control.
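As an illustration of the scoping idea, the sketch below shows how per-site AI feature flags might look. The structure and names are assumptions for the example, not the actual V5 configuration schema.

```python
# Hypothetical per-site scoping of AI-assisted features. Anything not
# explicitly enabled here falls back to deterministic rules only.
AI_FEATURE_SCOPE = {
    "site": "PLANT-01",
    "features": {
        "training_draft_assistant": {"enabled": True, "roles": ["QA", "Training"]},
        "deviation_pattern_hints": {"enabled": True, "roles": ["QA"]},
        "label_content_generation": {"enabled": False, "roles": []},  # locked out
    },
}


def feature_allowed(feature: str, role: str) -> bool:
    """An AI feature is usable only if scoped on and granted to the role."""
    cfg = AI_FEATURE_SCOPE["features"].get(feature)
    return bool(cfg and cfg["enabled"] and role in cfg["roles"])


assert feature_allowed("training_draft_assistant", "QA")
assert not feature_allowed("label_content_generation", "QA")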
ISO/IEC TR 24028 – AI Trustworthiness Inside GMP Workflows
ISO/IEC TR 24028 focuses on making AI trustworthy: robust, secure, transparent and understandable to humans. In a GMP or GFSI environment, that translates into:
- No invisible changes to batch instructions, labels or inspection plans driven directly by AI.
- Explainable suggestions – for example, showing why a deviation classification or training update was proposed.
- Bounded autonomy – AI cannot commit a regulatory decision; it can only support the human who will sign.
- Security and privacy controls – data sent to external AI services is limited, anonymized where possible, and governed by supplier agreements.
In V5, trustworthiness is achieved not by making models more complex, but by shrinking the space in which they are allowed to act. The platform is built on deterministic control of weighing and dispensing, label control, genealogy and hold / release. Any AI assistance must plug into those existing guardrails.
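One way to picture "bounded autonomy" is a suggestion object that carries its rationale but exposes no way to act. The sketch below is illustrative only; the field names are assumptions, not the V5 data model.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AISuggestion:
    """A suggestion carries its reasoning, but no authority to act on it."""
    target_record: str    # e.g. a deviation ID such as "DEV-2024-0113"
    proposed_value: str   # e.g. "classification: minor"
    rationale: str        # plain-language explanation shown to the reviewer
    model_version: str    # pinned so investigations can reproduce behavior

# Deliberately absent: apply(), commit(), sign(). Acting on a suggestion
# is only possible through the normal approval workflow, under the
# credentials and signature of the human reviewer.
```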
ISO/IEC 23894 – AI Risk Management Aligned with Quality Risk Management
ISO/IEC 23894 applies the familiar logic of risk identification, analysis, evaluation and treatment to AI-specific risks. For manufacturers already using tools like FMEA, PFMEA or HAZOP, this is a natural extension.
Typical AI risk questions in a V5 environment include:
- Could an AI suggestion influence a deviation classification or CAPA in a way that hides a critical trend?
- Could external AI input corrupt specification data used for label verification or release decisions?
- Is any AI-assisted function used during process validation, and if so, how is that documented?
Because V5 already supports structured risk registers, change control and validation master plans, customers can extend those frameworks to AI without inventing a parallel system. AI is treated as just another potential risk factor that must be analyzed and controlled.
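For example, an AI use case can be scored with the same FMEA arithmetic already used for process and equipment risks. The entry below is a hypothetical illustration with invented scales and values.

```python
# A hypothetical FMEA-style register entry for an AI use case. Scales,
# values and thresholds are illustrative only.
ai_risk_entry = {
    "id": "RISK-AI-007",
    "use_case": "AI-suggested deviation classification",
    "failure_mode": "Suggestion biases reviewers toward 'minor', hiding a trend",
    "severity": 8,     # 1-10
    "occurrence": 3,   # 1-10
    "detection": 4,    # 1-10; improved by mandatory human review + audit trail
    "controls": ["human sign-off required", "trend review independent of AI"],
}

# Standard FMEA arithmetic: RPN = severity x occurrence x detection = 96.
ai_risk_entry["rpn"] = (
    ai_risk_entry["severity"]
    * ai_risk_entry["occurrence"]
    * ai_risk_entry["detection"]
)
```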
21 CFR Part 11, Data Integrity and AI-Generated Content
Once AI touches anything that becomes part of the permanent manufacturing record—batch instructions, specifications, labels, deviations, investigations, training—its outputs are subject to 21 CFR Part 11 and ALCOA+. V5 treats AI-generated drafts as uncontrolled until a qualified user:
- Reviews the content inside the platform.
- Modifies it where necessary.
- Approves and promotes it into a controlled document or record using standard workflows.
Every step leaves an audit trail: who reviewed, what changed, who signed, when it went into effect. The AI engine never writes directly into the released revision of an SOP, MMR or label artwork. That separation is critical when explaining the system to inspectors.
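A hypothetical audit event for an AI-assisted edit might capture the fields below, each mapping to a Part 11 / ALCOA+ expectation. The structure is illustrative, not the actual V5 log format.

```python
from datetime import datetime, timezone

# Hypothetical shape of one audit event from promoting an AI draft.
audit_event = {
    "record": "SOP-014 rev 7 (draft)",
    "action": "content_modified",
    "performed_by": "jsmith",                             # attributable
    "timestamp": datetime.now(timezone.utc).isoformat(),  # contemporaneous
    "before": "Verify scale calibration weekly.",
    "after": "Verify scale calibration daily.",           # what changed
    "ai_assisted": True,            # provenance of the draft is never hidden
    "signature_id": None,  # set only at approval, and never by the AI layer
}
```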
GAMP 5, CSA and Validating AI-Assisted Workflows
Under GAMP 5, the focus is on validating intended use. For AI-assisted features, V5 supports:
- Documented descriptions of what the AI feature is allowed to do (and what it is not allowed to do).
- Configuration records showing how the feature is turned on, limited or disabled per site.
- Test evidence that AI suggestions cannot bypass required approvals or alter controlled records directly.
- Ongoing monitoring records to demonstrate continued control over time.
The emerging CSV / CSA mindset is to focus validation effort where the risk is highest. In V5, that means dedicating the most attention to how AI-assisted features interface with:
- Batch records and eBMR.
- Deviation and CAPA workflows.
- Training and competency records.
Because V5 is already deployed as a validated platform, AI-assisted capabilities are positioned as configurable options rather than uncontrolled add-ons. Customers can enable, disable or restrict them per plant, per product family or per role as part of their own validation strategy.
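The kind of test evidence described above can be very simple. The sketch below uses a toy stub of a controlled-record store to show the shape of such a test; it is not V5 test code.

```python
import pytest


class ReleasedDocuments:
    """Toy stand-in for the controlled-record store: refuses direct writes."""

    def write(self, content: str) -> None:
        raise PermissionError(
            "Released records change only via the approval workflow"
        )


def test_ai_draft_cannot_bypass_approval():
    store = ReleasedDocuments()
    ai_draft = "AI-proposed wording for SOP-014"

    # Any direct write from the suggestion layer must be refused,
    # regardless of content.
    with pytest.raises(PermissionError):
        store.write(ai_draft)
```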
Good vs Bad AI Patterns on the Shop Floor
Good Patterns
- AI used to propose SOP text, training quizzes or investigation narratives, always followed by human review and approval in V5.
- AI used to highlight patterns in deviations, complaints or OOS results, feeding into structured root cause analysis.
- AI used to help classify records or route tasks, while the actual decisions and signatures remain with authorized users.
- All AI activity captured in audit trails, visible to QA and inspectors.
Bad Patterns
- Uncontrolled external chat tools used to design batch instructions, labels or validation protocols with no traceability.
- AI directly editing released documents in the DMS or altering critical specification values.
- Autonomous agents closing deviations or CAPAs, or changing release status, without human review.
- No clear record of what prompt led to what change, making investigations impossible.
V5 is architected to enable the first set of patterns and structurally prevent the second. Even as AI capabilities expand, the principle remains: AI may inform, but it does not silently act.
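Structurally preventing the bad patterns often comes down to permissions: the AI service account simply lacks every state-changing verb. A minimal sketch, with hypothetical permission names:

```python
# Hypothetical allow-list for the AI service account. Informing is
# permitted; every state-changing verb is simply absent.
AI_SERVICE_PERMISSIONS = {"create_draft", "annotate", "read_deviation_trends"}
STATE_CHANGING_ACTIONS = {
    "close_deviation", "close_capa", "release_lot",
    "edit_released_document", "sign_record",
}


def ai_authorized(action: str) -> bool:
    return action in AI_SERVICE_PERMISSIONS


# The two sets never overlap, so the bad patterns above are unreachable.
assert AI_SERVICE_PERMISSIONS.isdisjoint(STATE_CHANGING_ACTIONS)
assert ai_authorized("create_draft") and not ai_authorized("release_lot")
```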
Example: AI-Assisted Training and Document Control in V5
A practical example is training content and procedural documentation. In many plants, subject matter experts struggle to keep SOPs and training modules synchronized with process changes. AI can help here—if the implementation is controlled.
In a typical V5 scenario:
- A process owner updates a routing or formula in V5. That change is logged via change control.
- The system flags linked SOPs and training units that may need revision.
- An AI assistant can suggest updated wording or quiz questions based on the new process, but these remain drafts.
- QA and training staff review, adjust and approve the new content in the controlled DMS and training modules.
- Updated training is assigned via the training matrix, and V5 hard-gates access to GMP-critical screens until completion.
AI accelerates the drafting step but does not replace ownership, review or approval. All the usual controls—versioning, signatures, effective dates—remain intact.
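In pseudocode terms, the flow looks roughly like the sketch below. Object names and events are hypothetical; the real linkage between formulas, SOPs and training units is configured in the platform.

```python
# Hypothetical linkage between a formula and its dependent documents.
LINKED_CONTENT = {
    "FORMULA-210": ["SOP-014", "TRAIN-UNIT-031"],
}


def on_change_approved(change_id: str, changed_object: str) -> list[dict]:
    """Flag linked SOPs/training units and open draft tasks for human review."""
    return [
        {
            "change_id": change_id,
            "document": doc,
            "status": "draft_requested",  # AI may draft; humans must approve
            "gate": "training_required",  # GMP screens stay locked meanwhile
        }
        for doc in LINKED_CONTENT.get(changed_object, [])
    ]


tasks = on_change_approved("CC-2024-0042", "FORMULA-210")
assert {t["document"] for t in tasks} == {"SOP-014", "TRAIN-UNIT-031"}
```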
External References for AI Governance
For teams building their own AI governance framework around V5, the following external documents are useful reference points:
- ISO/IEC 42001 – Artificial Intelligence Management System (ISO)
- ISO/IEC TR 24028 – Trustworthiness in Artificial Intelligence (ISO)
- ISO/IEC 23894 – Artificial Intelligence Risk Management (ISO)
- FDA – Computer Software Assurance (CSA) Guidance
These external sources, combined with your internal quality system and the V5 control framework, provide the backbone for a defensible AI strategy in regulated manufacturing.
Related SG Systems Glossary and Articles
- ISO/IEC 42001 – AI Management System
- ISO/IEC TR 24028 – AI Trustworthiness
- ISO/IEC 23894 – AI Risk Management
- Data Integrity
- 21 CFR Part 11 – Electronic Records & Signatures
- GAMP 5 – Software Validation
- MES – Manufacturing Execution System
- QMS – Quality Management System
- WMS – Warehouse Management System
Bottom line: AI is only acceptable in regulated manufacturing when it lives inside a strong governance framework. V5 is built first as a compliance-grade execution system. AI is allowed into that environment on the same terms as any other critical function: documented, risk-assessed, validated and always under human control.