ISO/IEC 38507 — Governance of AI
This topic is part of the SG Systems Global regulatory & operations glossary.
Updated November 2025 • ISO/IEC 42001, ISO/IEC 23894, ISO/IEC 23053, ISO/IEC TR 24028 • Governance, Quality, IT, Manufacturing, Compliance
ISO/IEC 38507 defines how governing bodies—boards, executive teams, senior leaders—should oversee artificial intelligence. Where most AI standards focus on lifecycle, technical controls or risk methodologies, ISO/IEC 38507 sits higher: it explains what organizational leadership must do to ensure AI is aligned with strategy, compliant with regulations, accountable, safe, ethical and operationally controlled. It is not written for engineers; it is written for directors, CEOs, compliance officers, executive sponsors and governance committees. In regulated industries such as pharmaceuticals, medical devices, food manufacturing and cosmetics, this standard creates the “tone from the top” that ensures AI is not developed in a vacuum, but in alignment with corporate duty of care.
“Governance determines whether AI is an asset or a liability. ISO/IEC 38507 defines accountability at the highest level.”
1) Purpose & Intent of ISO/IEC 38507
The purpose of ISO/IEC 38507 is to define governance responsibilities for AI at the organizational level. It explains what governing bodies must do—not how AI is technically built. It requires leadership to establish policies, assign responsibilities, allocate resources, understand risk exposure, ensure compliance, monitor performance, and respond to incidents. In regulated sectors, this governance aligns AI oversight with the same rigour applied to quality, safety and financial controls. The standard addresses strategic alignment: ensuring AI investments support mission, ethics, compliance and long-term value. It also addresses accountability: leadership must be able to explain the purpose, risks, and controls associated with any AI deployed in the organization.
2) Relationship to 42001, 23894, 23053 & 24028
ISO/IEC 38507 sits above the AI governance ecosystem. ISO/IEC 42001 defines the operational AI Management System (AIMS)—policies, procedures, metrics, roles, audits. ISO/IEC 38507 defines what the board must do to ensure that AIMS is effective, funded and strategically aligned. ISO/IEC 23894 describes AI risk management; 38507 ensures senior leadership approves risk appetite, reviews risk registers and holds owners accountable. ISO/IEC 23053 defines the lifecycle of AI systems; 38507 ensures the lifecycle is followed consistently. ISO/IEC TR 24028 defines trustworthiness; 38507 makes leadership responsible for ensuring trustworthiness is designed, tested and monitored. ISO/IEC 38507 is the governing shell around the entire AI standards family.
3) Governance Principles for Artificial Intelligence
The standard is built around governance principles consistent with the broader ISO/IEC 38500 series. These include responsibility, strategy, acquisition, performance, conformance and human behaviour. ISO/IEC 38507 extends these principles to AI-specific concerns: transparency, explainability, fairness, security, ethical alignment, human oversight and societal impact. Governing bodies must ensure AI is used responsibly, that intended and unintended effects are evaluated, that AI supports organizational goals, and that safeguards exist to prevent harm. In regulated manufacturing, these principles align naturally with GxP expectations: responsibility maps to QMS; conformance maps to regulatory requirements; human behaviour maps to training and oversight; performance maps to operational monitoring. This alignment gives organizations a unified language for governing AI alongside other critical systems.
4) Leadership Accountability & Roles
ISO/IEC 38507 elevates AI accountability to the executive level. Boards must assign clear roles for AI governance, including executive sponsors, risk owners, data-governance leads, and oversight bodies. Leadership must approve AI strategy, ensure risk assessments are performed, require adherence to AI standards, and review AI metrics. When issues occur—bias, drift, non-conformance, safety concerns—leadership is accountable for ensuring corrective actions are taken. This prevents the common failure mode where AI is treated as “IT’s responsibility” with minimal strategic oversight. In regulated industries, this aligns with expectations that executives sign off on QMS performance, risk controls, and data integrity; AI becomes part of that oversight structure rather than a separate, poorly governed island.
5) Strategic Alignment & AI Portfolio Governance
One of the strongest themes in ISO/IEC 38507 is that AI investments must support the organization’s vision, mission and core responsibilities. Governing bodies should maintain a high-level view of all AI systems in use: which departments own them, what risks they carry, how they support strategic goals, and how they impact compliance, quality, customers and operations. This is analogous to portfolio-level quality oversight in regulated manufacturing: leadership must understand how AI modifies business processes, what dependencies exist, and which systems are most critical. ISO/IEC 38507 therefore encourages organizations to maintain an AI portfolio register, risk tiers, periodic review cycles and escalation paths for high-risk AI use cases.
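The portfolio register described above can be sketched as a small data structure. This is a minimal illustration, not a format prescribed by ISO/IEC 38507; the field names, risk tiers, and example systems are assumptions chosen for the sketch.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AISystemEntry:
    """One row in a hypothetical AI portfolio register."""
    name: str
    owner_department: str
    executive_sponsor: str
    risk_tier: RiskTier
    review_interval_months: int

def systems_needing_escalation(register: list[AISystemEntry]) -> list[str]:
    """Return names of high-risk systems that follow the escalation path."""
    return [e.name for e in register if e.risk_tier is RiskTier.HIGH]

# Illustrative register entries (names are invented examples).
register = [
    AISystemEntry("batch-release-advisor", "QA", "VP Quality", RiskTier.HIGH, 3),
    AISystemEntry("supplier-scoring", "Procurement", "CFO", RiskTier.MEDIUM, 6),
]
```

Even a register this simple gives leadership the portfolio-level view the standard calls for: every system has a named owner, an executive sponsor, a risk tier, and a defined review cycle.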
6) Risk Management & Risk Appetite
Governing bodies must define and approve risk appetite for AI—including tolerance for bias, uncertainty, black-box behaviour, automation levels and safety implications. ISO/IEC 38507 requires leadership to ensure risk assessments based on ISO/IEC 23894 are conducted, reviewed and updated throughout the lifecycle. Leadership should understand the risks of each AI system: operational, ethical, regulatory, cybersecurity, privacy, reputational and societal. In manufacturing environments, leadership must understand risks related to batch release, product quality, training gaps, deviation handling and supplier scoring. ISO/IEC 38507 ensures these risks receive the same attention as financial, compliance or safety risks—requiring executive ownership and review.
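One way to make an approved risk appetite operational is to express it as explicit thresholds that assessments are checked against. The sketch below is an assumption-laden illustration: the dimensions, the 0–5 scale, and the threshold values are invented, not taken from the standard.

```python
# Leadership-approved maximum tolerated score per risk dimension
# (hypothetical 0-5 scale; values are illustrative only).
APPROVED_APPETITE = {"bias": 2, "safety": 1, "privacy": 2}

def exceeds_appetite(assessment: dict[str, int]) -> list[str]:
    """Return the risk dimensions where an assessed score is above appetite."""
    return sorted(
        dim for dim, score in assessment.items()
        if score > APPROVED_APPETITE.get(dim, 0)
    )

# An assessment exceeding the bias threshold is flagged for board review.
flags = exceeds_appetite({"bias": 3, "safety": 1, "privacy": 1})
```

The point of the sketch is that appetite becomes reviewable evidence rather than a sentiment: any dimension returned by the check is a documented trigger for executive review.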
7) Oversight of Lifecycle Execution (23053)
ISO/IEC 38507 does not dictate lifecycle tasks itself—it ensures leadership requires AI systems to follow the lifecycle pattern defined in ISO/IEC 23053. Governing bodies must verify that concept definition, data governance, model development, validation, deployment and monitoring are controlled and documented. They should demand evidence during internal audits and periodic reviews. This aligns directly with how executives oversee validated systems under GxP. The intention is clear: AI must be subjected to the same level of lifecycle rigour as any other controlled system in a regulated environment, with leadership ensuring consistency, resourcing and escalation paths.
8) Trustworthiness, Ethics & Societal Considerations
ISO/IEC 38507 requires leadership to ensure AI respects trustworthiness dimensions from ISO/IEC TR 24028 and fairness expectations from ISO/IEC 24027. Trustworthiness is not merely technical—it includes ethical behaviour, transparency, accountability, respect for human rights and alignment with societal norms. Governing bodies must ensure organizational policies reflect these requirements and that AI systems are tested and monitored accordingly. In manufacturing, this extends to fairness across suppliers, consistency across sites, avoidance of unintended bias in sampling or quality decisions, and transparency in safety-critical contexts. ISO/IEC 38507 ensures leadership acknowledges and controls ethical implications rather than delegating them to technical teams.
9) Human Oversight & Competence
Leadership must ensure that AI does not erode human responsibilities. ISO/IEC 38507 requires governing bodies to confirm that human-in-the-loop, human-on-the-loop or human-in-command oversight models—defined by ISO/IEC 22989—are appropriate for each AI system. They must ensure that staff are trained, competent and empowered to override or question AI recommendations. For regulated manufacturing, this parallels training and competency requirements in QMS frameworks: operators, QA reviewers and supervisors must understand AI limitations and escalation processes. Leadership must ensure training programs, SOPs and competency records reflect AI-augmented tasks, not just classical procedures.
10) Transparency, Explainability & Accountability
ISO/IEC 38507 requires leadership to ensure AI systems are transparent enough for stakeholders—regulators, customers, employees and partners—to understand how key decisions are influenced. This does not require complete model interpretability but requires clarity about purpose, capabilities, constraints, risks and oversight mechanisms. Leadership must ensure roles and accountability structures are documented: who owns model performance, who signs off on changes, who handles incidents, who reviews monitoring reports. In regulated manufacturing, accountability includes ensuring that any AI influencing regulated records—e.g., in eBR, eMMR, deviations or CAPA—has clear approval chains and traceability.
11) Resource Allocation & Investment
Effective AI governance requires resourcing. ISO/IEC 38507 expects governing bodies to allocate budget, personnel, infrastructure and training resources necessary for responsible AI deployment. This includes lifecycle tooling, monitoring infrastructure, data-governance platforms, quality oversight capacity and skills development. Leadership must ensure AI investments are sustainable, not under-resourced pilot projects that become operational liabilities. For manufacturers, this includes allocating resources for data-quality initiatives, validation effort, audit readiness, retraining cycles and API integrations. Governance is not merely policy—it requires sustained investment.
12) Incident Response, Escalation & Reporting
ISO/IEC 38507 requires leadership to ensure that incidents related to AI are handled through formal quality and operational processes. This includes deviations, non-conformances, CAPAs, cybersecurity incidents, bias findings, drift events or regulatory concerns. Leaders must demand timely investigation, root-cause analysis, corrective actions and management reports. They must ensure escalation paths exist for severe or high-risk incidents. In manufacturing this aligns with established quality processes: an AI-induced deviation must be treated with the same seriousness as any other process or system deviation. Leadership must review aggregated incident data during periodic reviews to determine whether governance or lifecycle improvements are required.
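The escalation-path requirement can be illustrated with a simple severity-to-recipient routing table. The severity tiers and recipient roles below are assumptions for the sketch, not terms defined by ISO/IEC 38507.

```python
# Hypothetical escalation routing: severity determines which governance
# level is notified. Tiers and role names are illustrative assumptions.
ESCALATION_PATHS = {
    "low": "process owner",
    "medium": "quality management",
    "high": "executive sponsor",
    "critical": "governing body",
}

def escalate(severity: str) -> str:
    """Return the governance level notified for a given incident severity."""
    try:
        return ESCALATION_PATHS[severity]
    except KeyError:
        # Unknown severities default upward, never downward.
        return "governing body"
```

The defensive default reflects the standard's intent: when an incident's severity is unclear, it should reach leadership rather than be silently absorbed at the operational level.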
13) Vendor Governance & Third-Party Oversight
ISO/IEC 38507 emphasizes that leadership is accountable for AI acquired externally. This includes due diligence of vendors, review of their lifecycle and governance practices, and contractual alignment with organizational requirements. Governing bodies must ensure third-party AI systems do not introduce unmanaged risks, lack transparency, or violate compliance constraints. Procurement should require documentation of vendor AI practices, risk assessments, testing, monitoring, data-handling policies and change-notification procedures. For regulated industries, this parallels supplier qualification and vendor-auditing processes already in use—now extended to AI suppliers as well.
14) Performance, KPIs & Governance Review
ISO/IEC 38507 requires leadership to monitor AI performance through KPIs, dashboards and periodic governance reviews. These may include numbers of AI systems by risk tier, incident counts, drift rates, override rates, fairness indicators, validation status, documentation completeness and vendor-model change notifications. Leadership must review these metrics regularly and take action when KPIs indicate weaknesses. This aligns with management review expectations under ISO/IEC 42001 and quality management frameworks such as ISO 9001 and ISO 13485. AI governance becomes part of routine executive oversight, ensuring that AI evolves safely and remains aligned with business and regulatory expectations over time. The goal is not to micromanage technical details but to ensure leadership visibility, accountability and intervention when metrics reveal performance or compliance concerns.
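A periodic governance review can gate on KPIs mechanically, flagging only the metrics that breach a leadership-approved threshold. The metric names and threshold values in this sketch are invented examples, not requirements of the standard.

```python
# Leadership-approved KPI thresholds for the governance review
# (all names and values are illustrative assumptions).
THRESHOLDS = {
    "drift_rate": 0.05,     # max fraction of monitored models drifting
    "override_rate": 0.10,  # max fraction of AI outputs overridden by staff
    "open_capas": 5,        # max AI-related CAPAs open past due date
}

def flag_for_review(metrics: dict[str, float]) -> list[str]:
    """Return KPIs that breach their threshold and need executive attention."""
    return sorted(
        name for name, value in metrics.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    )

breaches = flag_for_review(
    {"drift_rate": 0.08, "override_rate": 0.02, "open_capas": 3}
)
```

Anything the check returns becomes a standing agenda item for the next governance review, which is exactly the "visibility and intervention" loop the standard describes.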
15) FAQ
Q1. Does ISO/IEC 38507 apply only to large enterprises?
No. Any organization using AI—regardless of size—benefits from clear governance structures. Smaller organizations may scale responsibilities differently, but accountability, oversight and transparency remain essential.
Q2. Does ISO/IEC 38507 require technical expertise from board members?
Not directly. The standard requires leadership to ensure expertise is available—internally or externally—and that decisions are informed by competent advice. It does not require executives to become AI engineers.
Q3. How does ISO/IEC 38507 relate to ISO/IEC 42001?
ISO/IEC 42001 defines the AI Management System (AIMS). ISO/IEC 38507 defines how leadership must oversee AIMS, approve policies, review performance, evaluate risks and ensure strategic alignment. Together they ensure governance from top to bottom.
Q4. Does ISO/IEC 38507 require ethical review committees?
Not explicitly, but it requires leadership to ensure ethical oversight exists. This may take the form of ethics committees, AI governance boards, risk committees or other governance structures appropriate to the organization.
Q5. What is the first practical step to adopt ISO/IEC 38507?
Begin by identifying all AI systems across the organization, assigning executive ownership, and establishing a governance committee that reviews AI risks, performance and lifecycle compliance. From there, formalize policies and integrate AI into existing governance cycles.
Related Reading
• AI Governance & Risk: ISO/IEC 42001 | ISO/IEC 23894 | ISO/IEC 24027 | ISO/IEC 23053 | ISO/IEC 22989
• Quality & Systems: ISO 9001 | ISO 13485 | CSV | VMP
• Execution & Records: MES | eBR | eMMR | Deviation/NCR | CAPA