AI, Risk & Governance in Regulated Manufacturing — ISO 42001, 23894 and Trustworthy MES/QMS
This topic is part of the SG Systems Global regulatory & operations glossary.
Updated November 2025 • AI governance in manufacturing, ISO/IEC 42001 AI management system, ISO/IEC 23894 AI risk management, ISO/IEC 22989 AI concepts, ISO/IEC 23053 AI lifecycle, ISO/IEC 24027 bias, ISO/IEC 24030 AI use cases, ISO/IEC TR 24028 AI trustworthiness, ISO/IEC TR 24029-1 robustness of neural networks, ISO/IEC TR 24368 ethical & societal concerns, ISO/IEC 23893 AI risk vocabulary • MES, QMS, LIMS & WMS in GxP / GFSI environments
AI in regulated manufacturing is not primarily a question of model accuracy or clever prompts. It is a question of governance: who is responsible for AI decisions, how those decisions are constrained, how they are tested and monitored, and how you prove to regulators that AI did not quietly undermine GMP, food safety, data integrity or product quality.
New AI standards—ISO/IEC 42001 AI management system, ISO/IEC 23894 AI risk management, ISO/IEC 22989 AI concepts & terminology, ISO/IEC 23053 AI lifecycle framework, ISO/IEC 24027 bias, ISO/IEC 24030 AI use cases, ISO/IEC TR 24028 AI trustworthiness, ISO/IEC TR 24029-1 robustness of neural networks, ISO/IEC TR 24368 ethical & societal concerns, ISO/IEC 23893 AI risk vocabulary—give structure to that conversation.
“In GxP and food plants, the question is not ‘do you have AI?’ The question is ‘can you prove that AI did not quietly override your process controls, your quality system or your regulatory commitments?’”
- AI concepts & lifecycle: ISO/IEC 22989 AI concepts, ISO/IEC 23053 AI lifecycle.
- AI governance & risk: ISO/IEC 42001 AI management system, ISO/IEC 23894 AI risk management, ISO/IEC 23893 AI risk vocabulary, ISO/IEC 38507 governance of AI.
- Trustworthiness & robustness: TR 24028 AI trustworthiness, TR 24029-1 robustness of neural networks.
- Bias, ethics & societal concerns: ISO/IEC 24027 bias, TR 24368 ethical & societal concerns of AI.
- Linkage to GxP and quality: data integrity, GAMP 5, ISO 9001, ISO 13485, GMP/cGMP.
V5’s philosophy: AI can suggest, but it cannot silently decide; human- and rule-based controls remain the final gate for regulated actions.
1) Why AI in MES/QMS is a governance problem, not a toy
In social apps, AI hallucinations are annoying. In regulated manufacturing, they can be lethal: wrong setpoints, invalid recipe changes, mis-prioritised deviations, fabricated batch data, incorrect training content. You don’t get to shrug and say “the model messed up.”
Three uncomfortable truths for AI in GxP and food-grade environments:
- AI will be wrong sometimes. Models are probabilistic; they will misinterpret context, drift or fail under edge cases.
- AI can be wrong in ways that are hard to detect. Especially when it produces plausible text or smooth-looking numbers.
- Regulators don’t care that it was AI. They care whether your system met the requirements. Blaming the model doesn’t help in a recall or inspection.
That’s why the emerging AI standards are focused on management systems, risk and governance—not clever algorithms. The goal is to make sure AI is used in ways that are compatible with existing quality and regulatory frameworks, not in opposition to them.
2) The ISO AI standards stack — who does what
This glossary covers each of the key standards individually. Think of them as layers, not competing documents:
- ISO/IEC 22989 — AI concepts and terminology. Sets the common vocabulary so legal, quality, IT and data science teams are at least arguing with the same words.
- ISO/IEC 23053 — AI system lifecycle framework. Describes the lifecycle of AI systems: requirements, data, model development, deployment, operation and retirement.
- ISO/IEC 42001 — AI management system. The “ISO 9001 for AI”: a management-system standard for how organisations govern AI across policies, roles, risk and continuous improvement.
- ISO/IEC 23894 — AI risk management. Applies the ISO 31000 risk-management approach to AI; ISO/IEC 23893 supports it with vocabulary.
- ISO/IEC 38507 — governance of IT & AI. Addresses governance of IT and AI from a board / senior-leadership perspective.
- ISO/IEC 24027 — bias in AI systems. Drills into the identification and mitigation of bias in AI systems and AI-aided decision making.
- ISO/IEC 24030 — AI use cases. Catalogues representative AI use cases across industries and application domains.
- ISO/IEC TR 24028 — AI trustworthiness. Summarises the traits of trustworthy AI: reliability, robustness, transparency, accountability and safety.
- ISO/IEC TR 24029-1 — robustness of neural networks. Addresses robustness assessment specifically for neural networks.
- ISO/IEC TR 24368 — ethical and societal concerns of AI. Covers ethical, human-rights and societal-impact considerations.
For regulated manufacturing, these standards sit alongside—and must harmonise with—your existing frameworks: GMP/cGMP, ISO 9001, ISO 13485, ISO 14971, data integrity, GAMP 5 and friends.
3) AI use cases in MES/QMS — where risk actually shows up
In a MES/QMS/WMS/LIMS stack, AI can show up in many places. Some examples:
- Decision support. Suggesting root causes of deviations, recommending investigation paths, prioritising CAPA items, identifying batches at risk of failure based on patterns.
- Prediction. Predicting critical process parameter (CPP) drift, likely equipment failures (predictive maintenance), expected yield, microbiological risk, or on-time-in-full risk.
- Classification & extraction. Auto-classifying complaints, extracting data from documents (COAs, batch records), flagging anomalies in trends.
- Natural language & training. Generating draft SOPs, training content, operator guidance, or summarising investigations.
Each of these has a subtly different risk profile. For example:
- Decision support that suggests options is lower risk than an agent that triggers holds or releases automatically.
- Auto-extraction of lab data into LIMS might be high risk if misreads are not systematically detected.
- Generating SOP drafts might be fine if clearly labelled and reviewed, but dangerous if pushed into production without scrutiny.
AI governance in this context means being explicit about these use cases, assessing their risk and putting guardrails around them.
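As a concrete illustration, a use-case register can encode the advisory/operational distinction directly. The sketch below is a minimal, hypothetical Python example; the role names, risk tiers and classification rule are assumptions for illustration, not taken from any standard.

```python
from dataclasses import dataclass
from enum import Enum

class AIRole(Enum):
    ADVISORY = "advisory"        # suggests; a human decides
    OPERATIONAL = "operational"  # acts directly on records or equipment

class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass(frozen=True)
class AIUseCase:
    name: str
    role: AIRole
    touches_regulated_data: bool

def classify(uc: AIUseCase) -> RiskTier:
    """Toy rule: anything operational is escalated; advisory use that
    touches regulated data sits in the middle; the rest stays low."""
    if uc.role is AIRole.OPERATIONAL:
        return RiskTier.HIGH
    if uc.touches_regulated_data:
        return RiskTier.MEDIUM
    return RiskTier.LOW

register = [
    AIUseCase("deviation triage suggestions", AIRole.ADVISORY, False),
    AIUseCase("COA data extraction into LIMS", AIRole.ADVISORY, True),
    AIUseCase("automatic batch hold/release", AIRole.OPERATIONAL, True),
]

for uc in register:
    print(f"{uc.name}: {classify(uc).name}")
```

The point is not this specific rule but that the classification is explicit, reviewable and version-controlled like any other quality document.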
4) ISO/IEC 42001 and 23894 in a GxP context
Two of the most practically useful AI standards for regulated manufacturers are:
- ISO/IEC 42001 — AI management system. Comparable to ISO 9001, but for AI. It expects you to:
- Define scope and objectives for AI use.
- Assign roles and responsibilities (governance, risk owners, technical owners).
- Identify stakeholders and requirements (regulators, customers, internal quality, safety).
- Implement policies and processes for AI lifecycle, risk and monitoring.
- Continuously improve.
- ISO/IEC 23894 — AI risk management. This is your risk toolbox: identifying AI-specific risks, assessing likelihood/severity, and choosing controls. It should align with your existing quality risk management (QRM) and ISO 14971-style approaches.
In practice, implementing them doesn’t mean creating a separate “AI QMS”. It means extending your existing QMS/ISMS/IT governance processes with AI-specific controls and language, and making sure AI use cases are visible in your risk registers, validation plans and management reviews.
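To show how AI-specific risks can ride on existing QRM mechanics, here is a hedged sketch of a risk-register entry using a familiar likelihood × severity score. The fields, scales and threshold are invented for the example and are not prescribed by ISO/IEC 23894.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One row in an AI risk register (illustrative fields only)."""
    use_case: str
    hazard: str                      # e.g. model drift, biased training data
    likelihood: int                  # 1 (rare) .. 5 (frequent)
    severity: int                    # 1 (negligible) .. 5 (critical)
    controls: list[str] = field(default_factory=list)

    @property
    def risk_priority(self) -> int:
        # Classic QRM-style product; acceptance thresholds are site-specific.
        return self.likelihood * self.severity

entry = AIRiskEntry(
    use_case="predictive maintenance",
    hazard="model drift after sensor recalibration",
    likelihood=3,
    severity=4,
    controls=["monthly performance review", "fallback to scheduled PM"],
)
assert entry.risk_priority == 12   # above an assumed action threshold of 9
```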
5) Trustworthiness, robustness, bias & ethics — what they mean on the shop floor
Standards like TR 24028 AI trustworthiness, TR 24029-1 robustness, ISO/IEC 24027 bias and TR 24368 ethical & societal concerns can feel abstract. On a manufacturing floor, they translate into questions like:
- Robustness. Does a predictive maintenance model behave sensibly when sensors drift? Does a suggestion engine fail gracefully when it sees a never-before-seen batch type?
- Bias. Does an AI-assisted deviation triage tool consistently favour or neglect certain product families, equipment or sites based on biased training data?
- Trustworthiness. Are AI outputs clearly distinguishable from validated, deterministic system logic? Can operators tell which is which?
- Ethics & societal concerns. Could AI-driven optimisation inadvertently push behaviour (e.g., reducing cleaning frequency, compressing hold times) in ways that increase safety risks or undermine sustainability and workforce wellbeing?
These questions need to be part of your risk assessment and ongoing monitoring, not just a one-off policy statement.
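One way to make “fail gracefully” concrete is an input-envelope guard: the model declines to predict when a drifted sensor pushes inputs outside the range it was trained on, and the case is routed to a human instead. A minimal sketch, with all names and ranges invented:

```python
def guarded_predict(model, inputs: dict[str, float],
                    training_ranges: dict[str, tuple[float, float]]):
    """Refuse to predict when any input falls outside the envelope seen
    in training -- decline loudly instead of extrapolating silently."""
    for name, value in inputs.items():
        lo, hi = training_ranges[name]
        if not lo <= value <= hi:
            return None, f"out-of-range input: {name}={value} (trained on {lo}..{hi})"
    return model(inputs), None

prediction, warning = guarded_predict(
    model=lambda x: 0.93,                     # stand-in for a real model
    inputs={"temp_C": 92.0},                  # drifted sensor reading
    training_ranges={"temp_C": (40.0, 85.0)},
)
if warning:
    print(warning)   # route to a human review queue instead of acting on it
```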
6) AI & data integrity — avoiding “smart” ways to break GMP
AI does not exempt you from data integrity rules. If anything, it makes some failure modes easier to fall into:
- AI auto-filling fields. Models that “guess” missing batch data or IPC results are completely incompatible with ALCOA+. No model may invent regulated data.
- AI rewriting records. Tools that “tidy up” free-text entries, deviations or complaints after the fact risk altering the original meaning and violating audit-trail principles.
- AI-generated SOPs and content. Drafts may be helpful, but they must go through normal review, approval and validation processes under document control—not bypass them.
AI governance has to be consciously wired into your existing Part 11/Annex 11 and CSV expectations (GAMP 5):
- AI is not allowed to tamper with regulated records or audit trails.
- AI suggestions must be clearly labelled and attributable.
- Any automated action with regulatory impact must remain deterministic, validated and transparent.
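These rules can be enforced structurally rather than by policy alone. The sketch below shows one hypothetical guardrail: every write carries its source, and AI-originated values are rejected outright for regulated fields. The field names and error type are illustrative assumptions, not any product's actual API.

```python
REGULATED_FIELDS = {"ipc_result", "batch_yield", "assay_value"}   # illustrative

class DataIntegrityError(Exception):
    """Raised when a write would violate the AI data-integrity rules."""

def write_field(record: dict, field_name: str, value, source: str) -> None:
    """Every write carries provenance; AI may never originate regulated data."""
    if source == "ai" and field_name in REGULATED_FIELDS:
        raise DataIntegrityError(
            f"AI-originated value rejected for regulated field '{field_name}'"
        )
    record[field_name] = {"value": value, "source": source}   # attributable

batch = {}
write_field(batch, "investigation_draft", "possible seal wear", source="ai")  # allowed, labelled
try:
    write_field(batch, "ipc_result", 7.2, source="ai")        # blocked
except DataIntegrityError as exc:
    print(exc)   # the attempt itself would also be audit-trailed
```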
7) V5’s philosophy — AI can suggest, but it cannot silently decide
The safest way to integrate AI into regulated MES/QMS is to treat it as an assistant that suggests and prioritises, not as an invisible decision-maker. In V5 terms:
- AI as advisor. AI may rank deviations or CAPA items by estimated risk, highlight unusual batch patterns, propose groupings of complaints, or suggest potential root causes—but it does not close, approve or change records itself.
- AI under human & rule control. Final decisions about product release, recipe changes, risk acceptances or deviations always follow existing workflows, electronic signatures and approvals.
- AI clearly labelled. V5 surfaces which outputs are AI-assisted so users and auditors can see where model suggestions were used and where deterministic logic applies.
This “AI can suggest, but cannot silently decide” principle keeps AI inside a governance framework that regulators understand: human accountability, transparent rules and evidence-based decisions.
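A minimal sketch of what that boundary can look like in code, assuming a hypothetical suggestion/approval API (this is not V5's actual implementation): AI output is an immutable, labelled suggestion, and only a human with an electronic signature can convert it into an action on a record.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AISuggestion:
    """Immutable, clearly-labelled advice; never applied automatically."""
    target_record: str
    proposed_action: str
    model_id: str

def apply_action(suggestion: AISuggestion, approver: str, e_signature: str) -> dict:
    """Only a human with a valid e-signature turns advice into action."""
    if not e_signature:
        raise PermissionError("regulated action requires an electronic signature")
    return {
        "record": suggestion.target_record,
        "action": suggestion.proposed_action,
        "approved_by": approver,            # human accountability
        "ai_assisted": True,                # visible to users and auditors
        "model_id": suggestion.model_id,
        "signed_at": datetime.now(timezone.utc).isoformat(),
    }

s = AISuggestion("DEV-2025-0117", "escalate to major deviation", "triage-model-v3")
signed = apply_action(s, approver="j.smith", e_signature="sig-token")  # token is hypothetical
```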
8) How V5 Traceability aligns AI with MES/QMS/WMS/LIMS reality
V5 Traceability treats AI as a layer on top of a validated, deterministic execution and quality backbone:
- AI on top of clean data. V5’s structured batch records, genealogy, IPC and QC data provide a reliable foundation. AI operates on high-quality data, not ad-hoc exports.
- Configuration, not black-box behaviour. AI-related features in V5 are configured and documented like other system functions, and their role in workflows is visible to QA and IT.
- Risk classification of AI use cases. V5 encourages customers to classify use cases (e.g., advisory vs. operational) and apply AI risk frameworks (ISO/IEC 23894) accordingly.
- Integration with existing QMS. AI governance can be linked to SOPs, risk registers, deviations and CAPA in V5 QMS, making it part of the same continuous-improvement cycle as other quality topics.
The outcome: you can harness AI where it delivers value (faster investigation, better predictions, smarter triage) without undermining your existing GxP and food safety commitments.
FAQ — AI, Risk & Governance in Regulated Manufacturing
Q1. Do we need “AI certification” (e.g., ISO/IEC 42001) to use AI in a GMP or food plant?
Not necessarily. ISO/IEC 42001 certification may become attractive or expected over time, but for now the key is to show that you have a coherent governance approach: defined AI use cases, risk assessments (aligned with ISO/IEC 23894), roles and responsibilities, and documented controls integrated with your QMS. Certification can be a way of proving that maturity, but it is not the only way.
Q2. How is AI risk management different from our existing QRM / ISO 14971 processes?
AI risk management per ISO/IEC 23894 builds on familiar risk principles, but focuses on AI-specific hazards: model drift, data quality, bias, opacity, adversarial input, unexpected generalisation, etc. The idea is to integrate those concerns into your existing risk processes rather than bolt them on separately.
Q3. What AI use cases are “safer” in a regulated environment?
Generally, advisory and analytical use cases are lower risk: ranking tasks, suggesting likely root causes, summarising documents, highlighting anomalies. High-risk use cases include those that directly set CPPs, generate regulated data, change recipes or automatically approve/release product. The latter typically require very strong controls and validation, or are simply declared out of scope for AI in many organisations.
Q4. Can AI help with data integrity, or does it just create more risk?
Both are possible. AI can help detect anomalous patterns, incomplete records or unusual user behaviour that might signal data integrity issues. But if misused—for example, to auto-fill missing entries or rewrite history—it can severely damage data integrity. The difference is governance: clear rules on what AI may and may not touch.
Q5. How should we talk to regulators about our AI use?
Be concrete and honest. Explain:
– Where AI is used (and where it is not).
– What decisions it influences (and what remains human/rule-based).
– How you assessed risk (referencing ISO/IEC 23894, QRM, etc.).
– What monitoring, validation and override mechanisms you have.
Avoid hand-waving about “smart systems”—focus instead on risk controls and accountability.
Q6. What is the minimum viable AI governance approach for a MES/QMS user?
At minimum:
– An inventory of AI use cases in your MES/QMS/WMS/LIMS ecosystem.
– A basic risk classification for each (low/medium/high).
– Clear rules on where AI is allowed to suggest vs. decide.
– SOPs that describe how AI outputs are used, reviewed and overridden.
– Integration of AI-related risks into your existing QRM and management reviews.
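For illustration only, such an inventory can be kept as structured data that QA, IT and auditors can all read; every entry, system name and SOP reference below is invented:

```python
# Hypothetical inventory entries; none of these references are real documents.
ai_inventory = [
    {
        "use_case": "complaint auto-classification",
        "system": "QMS",
        "risk": "medium",
        "may_suggest": True,
        "may_decide": False,
        "sop": "SOP-AI-001",
    },
    {
        "use_case": "FEFO pick-path optimisation hints",
        "system": "WMS",
        "risk": "low",
        "may_suggest": True,
        "may_decide": False,
        "sop": "SOP-AI-002",
    },
]
```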
Related Reading (Glossary)
• AI Concepts & Lifecycle: ISO/IEC 22989 AI Concepts & Terminology | ISO/IEC 23053 AI System Lifecycle Framework
• Governance & Risk: ISO/IEC 42001 AI Management System | ISO/IEC 23894 AI Risk Management | ISO/IEC 23893 AI Risk Vocabulary | ISO/IEC 38507 Governance of AI
• Trustworthiness, Bias & Ethics: TR 24028 AI Trustworthiness | TR 24029-1 Robustness of Neural Networks | ISO/IEC 24027 Bias in AI Systems | TR 24368 Ethical & Societal Concerns of AI
• Quality & GxP Context: GMP/cGMP | ISO 9001 | ISO 13485 | Data Integrity | GAMP 5
• V5 Platform & AI: V5 Solution Overview | V5 MES | V5 QMS | V5 Connect API