ISO/IEC 23053 — AI System Lifecycle Framework
This topic is part of the SG Systems Global regulatory & operations glossary.
Updated November 2025 • ISO/IEC 42001, ISO/IEC 23894, ISO/IEC TR 24028 • Governance, Quality, IT, Manufacturing, Compliance
ISO/IEC 23053 defines a reference lifecycle for artificial intelligence systems, describing how AI should be conceived, designed, implemented, verified, deployed, monitored and retired in a structured and auditable way. Where ISO/IEC 42001 sets expectations for an organisation’s AI Management System (AIMS), ISO/IEC 23053 focuses on the lifecycle of individual AI systems that live inside that governance shell. For regulated sectors—pharma, medical devices, food and beverage, cosmetics, chemicals and other GxP environments—this lifecycle view is essential. It ensures AI does not arrive as a one-off pilot or opaque “add-on” to MES, QMS, LIMS or WMS platforms, but as a controlled system subject to the same scrutiny as any other validated technology.
“AI is not just a model. It is a system that must be governed across its entire life—from idea, to deployment, to retirement.”
1) Purpose & Scope of ISO/IEC 23053
ISO/IEC 23053 exists to prevent AI from being treated as a one-time “project” and instead embed it as a lifecycle-managed system. Its scope covers the technical and procedural stages that an AI system should pass through, regardless of model type (machine learning, deep learning, rule-based hybrids, etc.) or deployment context (on-prem, cloud, embedded in equipment, or integrated into enterprise platforms). The framework is intentionally technology-agnostic: it does not prescribe specific algorithms or architectures, but it demands that every AI system has a documented purpose, a controlled design process, defined verification and validation activities, operational monitoring and a clear end-of-life strategy. In regulated manufacturing, this directly aligns with expectations from GxP, where lifecycle control, traceability and ongoing oversight are non-negotiable.
2) How ISO/IEC 23053 Relates to 42001, 23894 & 24028
ISO/IEC 23053 sits in the middle of the AI standards stack. At the top, ISO/IEC 42001 describes how an organisation governs AI overall: policies, roles, management review, metrics and continuous improvement. At the system level, ISO/IEC 23894 describes how to identify, analyse and treat risks associated with particular AI use cases, while ISO/IEC TR 24028 sets out properties such as robustness, explainability, security and fairness. ISO/IEC 23053 provides the lifecycle skeleton into which these other standards are plugged. During concept and requirements, 23894 helps classify risk and define controls; during model development and evaluation, 24028 ensures trustworthiness properties are addressed; during deployment and operations, 42001 ensures that the system remains under governance. Without ISO/IEC 23053, organisation-wide AI maturity risks remaining abstract; without 42001, 23894 and 24028, an AI lifecycle risks being technically competent but poorly governed.
3) Concept, Intended Purpose & Requirements
The lifecycle begins with a clear definition of why the AI system exists and what it must (and must not) do. ISO/IEC 23053 expects organisations to document the intended purpose, operational context, boundaries and assumptions of each AI system. This phase should answer questions such as: Which process or decision is being supported? What data sources will be used? How critical is the outcome for product quality, patient safety, regulatory compliance or financial integrity? What are the acceptable failure modes and what is non-negotiable? In a GMP environment, this phase closely resembles a URS (User Requirements Specification) but extended with AI-specific terminology from ISO/IEC 22989—for example, distinguishing clearly between “AI system”, “model”, “training data”, “input data” and “human-in-the-loop”. Risk management under ISO/IEC 23894 should start here, classifying the system’s criticality and determining the intensity of controls needed in later phases.
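As a sketch of how this phase can be made concrete, the hypothetical record below captures intended purpose, operational context, criticality and boundaries as structured data rather than free text, so later lifecycle stages can trace back to it. All field names and values are illustrative assumptions, not terms mandated by the standard.

```python
from dataclasses import dataclass

@dataclass
class AISystemConcept:
    """Hypothetical concept-phase record for one AI system."""
    system_name: str
    intended_purpose: str              # which process or decision is supported
    operational_context: str           # where and under which conditions it runs
    data_sources: list[str]
    criticality: str                   # e.g. "GMP-critical" vs "business-support"
    acceptable_failure_modes: list[str]
    non_negotiables: list[str]         # outcomes that must never occur
    human_oversight: str               # e.g. "human-in-the-loop" (ISO/IEC 22989)

concept = AISystemConcept(
    system_name="visual-inspection-assist",
    intended_purpose="Flag candidate cosmetic defects for operator review",
    operational_context="Packaging line 3, day and night shifts",
    data_sources=["line-3 camera feed", "historical defect library"],
    criticality="GMP-critical",
    acceptable_failure_modes=["false positives reviewed by operator"],
    non_negotiables=["no automated batch release without human confirmation"],
    human_oversight="human-in-the-loop",
)
```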
4) Data Acquisition, Preparation & Governance
Data is often the single largest determinant of AI performance, and ISO/IEC 23053 treats data governance as a lifecycle concern, not a one-time pre-processing task. The standard calls for documented data sources, justification for their selection, quality checks, lineage tracking, and clear separation of training, validation and test datasets. It expects analysis of representativeness and bias, including demographic, process, equipment and supplier dimensions where relevant. For regulated environments, these expectations intersect with data-integrity controls under 21 CFR Part 11 and Annex 11—for example, ensuring that data used for training is attributable, legible, contemporaneous, original and accurate (ALCOA; the ALCOA+ extension adds complete, consistent, enduring and available). At this stage, guidance from ISO/IEC 24027 on bias in AI systems can inform dataset selection, labelling strategies and the design of evaluation scenarios. The goal is not “perfect” data but data whose limitations are understood, documented and mitigated in a risk-based way.
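One practical way to make dataset separation and lineage auditable is to fix the split seed and record a manifest alongside the training data. The sketch below assumes a hypothetical source file and invented record IDs; the split proportions and field names are illustrative, not prescribed by ISO/IEC 23053.

```python
import hashlib, json, random

def file_sha256(path: str) -> str:
    """Content hash used as a simple lineage identifier for a dataset file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def split_record_ids(record_ids: list[str], seed: int = 42) -> dict[str, list[str]]:
    """Deterministic train/validation/test split so the separation is reproducible."""
    rng = random.Random(seed)
    ids = sorted(record_ids)
    rng.shuffle(ids)
    n = len(ids)
    return {
        "train": ids[: int(0.7 * n)],
        "validation": ids[int(0.7 * n): int(0.85 * n)],
        "test": ids[int(0.85 * n):],
    }

splits = split_record_ids([f"batch-{i:04d}" for i in range(1000)], seed=42)
manifest = {
    "source": "batch_history_2024.csv",   # hypothetical dataset file
    "split_seed": 42,
    "counts": {k: len(v) for k, v in splits.items()},
    # "sha256": file_sha256("batch_history_2024.csv") would record lineage
}
print(json.dumps(manifest, indent=2))
```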
5) Model Development, Training & Trustworthiness by Design
During model development, ISO/IEC 23053 expects structured processes rather than ad-hoc experimentation. That means version-controlled code and configuration, reproducible training procedures, and clear mappings between requirements, risk controls and model features. Model architectures, hyperparameters and training strategies should be documented with technical and business justification. Critically, trustworthiness properties from ISO/IEC TR 24028—such as robustness, interpretability, resilience and fairness—are not an afterthought; they must be designed into the model from the start. For example, if human operators in a MES need to understand why an AI flagged a deviation, the model and interface must support sufficient explainability. If the AI influences sampling or release decisions, robustness to plausible input variation must be quantified. These design choices should be traceable back to the risk analysis in 23894 and the terminology structure in 22989, creating a coherent chain from intent to implementation.
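A minimal sketch of a reproducible training record follows, assuming the training code lives in a git repository; the helper names and record fields are assumptions for illustration, and framework-specific seeding (numpy, torch, tf) would be added where those libraries are actually used.

```python
import json, random, subprocess, time

def current_code_version() -> str:
    """Git revision of the training code, if the run happens inside a repo."""
    try:
        out = subprocess.run(["git", "rev-parse", "HEAD"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return "unknown"

def record_training_run(hyperparams: dict, data_sha256: str, seed: int) -> dict:
    """Capture the minimum context needed to rerun a training job:
    code version, data lineage, seed and hyperparameters."""
    random.seed(seed)  # plus framework-specific seeds where those libraries are used
    return {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "code_version": current_code_version(),
        "training_data_sha256": data_sha256,
        "seed": seed,
        "hyperparameters": hyperparams,
    }

run_record = record_training_run(
    {"learning_rate": 1e-3, "epochs": 20, "batch_size": 64},
    data_sha256="e3b0c442...",  # from the dataset manifest in the previous stage
    seed=42,
)
print(json.dumps(run_record, indent=2))
```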
6) Verification, Validation & Acceptance Criteria
Verification and validation in ISO/IEC 23053 serve the same fundamental purpose as in traditional Computer System Validation (CSV), but with AI-specific nuances. Verification confirms that the system has been built correctly according to design specifications—covering aspects such as feature engineering logic, training code integrity, model versioning and interface behaviour. Validation evaluates whether the AI system is fit for its intended purpose in the real operational environment, using test data and scenarios that reflect real-world variability, including edge cases and failure conditions. Acceptance criteria must be predefined, objective and aligned with the system’s risk tier. For example, an AI-assisted anomaly-detection system in sterile manufacturing will require more conservative thresholds and more extensive challenge testing than a non-critical demand-forecasting model. ISO/IEC 23053 encourages organisations to embed AI validation into existing V-model or lifecycle frameworks, linking test documentation, deviations and residual risk decisions to the broader quality system and the Validation Master Plan (VMP).
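Acceptance criteria can be encoded so that pass/fail is computed, not argued after the fact. The thresholds below are invented examples to show the pattern; they are not values drawn from ISO/IEC 23053 or any regulation.

```python
# Illustrative predefined acceptance criteria; thresholds are invented
# examples, not values taken from ISO/IEC 23053 or any regulation.
ACCEPTANCE_CRITERIA = {
    "high_risk": {"recall_min": 0.99, "false_positive_rate_max": 0.05},
    "low_risk":  {"recall_min": 0.90, "false_positive_rate_max": 0.15},
}

def evaluate_acceptance(risk_tier: str, recall: float,
                        fpr: float) -> tuple[bool, list[str]]:
    """Compare measured validation metrics against pre-approved thresholds."""
    criteria = ACCEPTANCE_CRITERIA[risk_tier]
    failures = []
    if recall < criteria["recall_min"]:
        failures.append(f"recall {recall:.3f} below {criteria['recall_min']}")
    if fpr > criteria["false_positive_rate_max"]:
        failures.append(f"FPR {fpr:.3f} above {criteria['false_positive_rate_max']}")
    return (not failures, failures)

passed, issues = evaluate_acceptance("high_risk", recall=0.994, fpr=0.032)
print("PASS" if passed else f"FAIL: {issues}")
```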
7) Deployment, Integration & Human Oversight
Deployment in ISO/IEC 23053 is not simply a technical release to production; it is a controlled transition from development to ongoing governance. The standard expects documentation of how the AI system is integrated into existing workflows—whether in an eBR/eBMR, eMMR, LIMS or WMS. This includes interface specifications, triggering conditions, fail-safe mechanisms, and how human operators interact with AI outputs. Human oversight concepts from ISO/IEC 22989 (human-in-the-loop, human-on-the-loop, human-in-command) are used to make oversight design explicit rather than implied. For high-risk decisions, the AI system may only provide recommendations that require human confirmation; for lower-risk optimisation tasks, human oversight may be more supervisory. Deployment also includes configuring audit trails, electronic signatures and access controls consistent with 21 CFR Part 11 and Annex 11, ensuring that AI-related actions are fully traceable and attributable.
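The oversight design can be expressed as explicit routing logic at the integration layer, so it is documented in code as well as in procedure. The sketch below is one assumed way to encode the oversight modes; the risk tiers and confidence threshold are illustrative.

```python
from enum import Enum

class Decision(Enum):
    AUTO_APPLY = "auto_apply"          # low-risk optimisation, human-on-the-loop
    REQUIRE_CONFIRMATION = "confirm"   # high-risk decision, human-in-the-loop
    BLOCK = "block"                    # outside the validated envelope, fail safe

def route_recommendation(risk_tier: str, model_confidence: float) -> Decision:
    """Route an AI output according to the oversight design agreed at deployment."""
    if risk_tier == "high":
        # High-risk decisions are never applied without an operator signature.
        return Decision.REQUIRE_CONFIRMATION
    if model_confidence < 0.80:        # illustrative threshold
        return Decision.BLOCK          # degrade gracefully and escalate to a human
    return Decision.AUTO_APPLY

print(route_recommendation("high", 0.97))   # Decision.REQUIRE_CONFIRMATION
print(route_recommendation("low", 0.62))    # Decision.BLOCK
```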
8) Monitoring, Drift Management & Lifecycle Control
Once in operation, AI systems change behaviour over time as data distributions shift, processes evolve, equipment is replaced or upstream systems are modified. ISO/IEC 23053 therefore mandates continuous or periodic monitoring of both performance and context. Typical metrics include prediction error, false positive/negative rates, override rates, model-drift indicators, latency, and the frequency and severity of incidents linked to AI behaviour. Monitoring outcomes must feed directly into quality processes such as Deviation/NCR management and CAPA, ensuring that anomalies are investigated and corrective actions implemented. When monitoring indicates that the AI system is no longer operating within its validated envelope—due to drift, new use cases, or regulatory changes—change control must be triggered. Retraining, re-validation or decommissioning decisions must follow documented processes consistent with the organisation’s VMP and with governance structures under ISO/IEC 42001. The core message is simple: AI cannot be left unattended after go-live.
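Drift indicators can be computed with standard statistical techniques. The sketch below uses the population stability index (PSI), a common choice for comparing a training-time reference distribution against live inputs; the synthetic data and the 0.2 alert threshold are illustrative conventions, not requirements of the standard.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference (training-time) distribution and live inputs.
    A common rule of thumb treats PSI > 0.2 as significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions so log() and division stay defined for empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)   # stand-in for training-time inputs
live = rng.normal(0.4, 1.1, 5000)        # stand-in for drifted production inputs
psi = population_stability_index(reference, live)
if psi > 0.2:
    print(f"PSI={psi:.3f}: open a deviation and reassess the validated envelope")
```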
9) Security, Integrity & Operational Resilience
Security in ISO/IEC 23053 extends beyond conventional IT controls. AI systems can introduce new attack surfaces—for example, through data-poisoning, adversarial inputs or model-stealing. The standard expects organisations to consider these risks when designing the lifecycle, integrating them into overall information-security and risk-management frameworks. That means securing data pipelines, protecting training and inference environments, limiting access to model artefacts, and detecting abnormal usage patterns. In manufacturing and quality operations, resilience is equally important: the AI system must fail safely. If upstream data becomes unavailable or corrupt, if a model no longer meets performance thresholds, or if monitoring detects unstable behaviour, the system should degrade gracefully—falling back to validated non-AI logic, suspending certain functions, or increasing human oversight. These behaviours should be documented and tested, not left to assumption. Security and resilience considerations also feed into supplier management, especially when third-party AI components or external APIs are involved.
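Graceful degradation can be implemented as a thin wrapper around inference that routes to validated non-AI logic on failure or low confidence. The model API, threshold and function names below are assumptions for illustration, not part of any standard.

```python
def classify_with_fallback(sample, model, rule_based_check, min_confidence=0.85):
    """Fail-safe inference wrapper: fall back to validated non-AI logic
    whenever the AI path cannot be trusted. All names are illustrative."""
    try:
        label, confidence = model.predict(sample)   # assumed model interface
    except Exception:
        # Model or pipeline failure: degrade gracefully to the rule-based path.
        return rule_based_check(sample), "fallback: model error"
    if confidence < min_confidence:
        return rule_based_check(sample), "fallback: low confidence"
    return label, "ai"
```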
10) Documentation, Evidence & Traceability
ISO/IEC 23053 treats documentation as a first-class lifecycle artefact, not administrative overhead. For each AI system, organisations are expected to maintain a dossier that links concept, requirements, design decisions, data governance, training runs, verification results, validation reports, deployment configurations, monitoring logs, incidents and retirement records. This dossier should provide a coherent narrative: what the AI system is for, how it was built, how it was tested, what risks remain, and how those risks are managed operationally. In regulated environments, much of this documentation is subject to record-keeping and inspection expectations similar to those applied to validated computer systems under CSV. Good documentation habits also reduce practical risk: when team members change or vendors exit, the organisation retains the knowledge required to maintain, update or retire the AI system safely. Traceability across lifecycle stages is particularly important when AI behaviour is questioned after an incident or regulatory finding.
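Traceability across the dossier also lends itself to automated checks. A minimal sketch, assuming requirement IDs are linked to test evidence in the dossier index, flags any requirement with no supporting test record; all identifiers are invented.

```python
# Minimal traceability check, assuming requirement IDs (e.g. "URS-12") are
# linked to test evidence in the dossier index; all identifiers are invented.
requirements = {
    "URS-12": "Flag deviations above 3-sigma for operator review",
    "URS-13": "Provide an explanation with every flagged deviation",
}
test_evidence = {
    "URS-12": ["VAL-041-TC-07", "VAL-041-TC-08"],
    # URS-13 has no linked test case yet, i.e. a traceability gap
}

gaps = [req_id for req_id in requirements if not test_evidence.get(req_id)]
if gaps:
    print(f"Traceability gaps, dossier incomplete: {gaps}")
```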
11) Retirement, Decommissioning & Lifecycle Closure
Every AI system eventually reaches end of life. ISO/IEC 23053 explicitly includes retirement and decommissioning as lifecycle stages, not as informal clean-up. Retirement planning should define criteria for when an AI system is no longer appropriate—because of changes in process, regulation, technology, risk appetite or performance. Decommissioning activities may involve disabling the AI components, reverting to non-AI logic, archiving models and datasets, revoking access credentials, updating documentation, and communicating changes to users and stakeholders. For systems that affect regulated records or product quality decisions, retirement must also ensure that historical decision-trails remain interpretable: auditors may still ask how past decisions were made and which AI models were in use at a given time. Capturing these details at retirement prevents knowledge gaps years later. In many organisations, retirement and replacement are also opportunities to re-evaluate risk assumptions, improve design patterns and feed lessons learned back into governance under ISO/IEC 42001.
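A retirement record can capture, at decommissioning time, the facts an auditor may ask about years later: which models were live when, where artefacts are archived, and which procedure replaced the AI. The structure below is a hypothetical example; its field names are not prescribed by ISO/IEC 23053.

```python
import json, time

# Hypothetical retirement record; field names are illustrative assumptions,
# not a structure prescribed by ISO/IEC 23053.
retirement_record = {
    "system": "visual-inspection-assist",
    "retired_on": time.strftime("%Y-%m-%d"),
    "reason": "process change: line 3 re-qualified with new camera hardware",
    "models_in_use": [
        {"version": "1.3", "active_from": "2024-02-01", "active_to": "2025-11-10"},
    ],
    "archived_artefacts": ["model_v1.3.onnx", "dataset_manifest.json",
                           "validation_report_VAL-041.pdf"],
    "access_revoked": True,
    "fallback_procedure": "SOP-214 manual visual inspection",
}
print(json.dumps(retirement_record, indent=2))
```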
12) Metrics & KPIs for AI Lifecycle Performance
ISO/IEC 23053 does not mandate a specific KPI set, but AI lifecycle performance must be measurable if it is to be controlled. Typical system-level metrics include: number of AI systems in each lifecycle stage; time from concept to deployment; number of validation iterations; percentage of AI systems with up-to-date risk assessments; incidence of model-drift events and their time-to-detection; override rates for AI recommendations; number and severity of deviations or complaints involving AI; and time-to-closure for AI-related CAPAs. At governance level, these metrics roll up into management reviews under 42001, providing evidence that AI is managed systematically rather than opportunistically. Over time, organisations can use these KPIs to compare AI projects, refine lifecycle controls, prioritise remediation work, and identify where training, documentation or tooling gaps are causing friction or risk.
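Two of these KPIs, drift time-to-detection and override rate, are straightforward to compute from monitoring records, as the sketch below shows with invented data.

```python
from datetime import datetime

# Illustrative KPI calculations over invented monitoring records.
drift_events = [
    {"occurred": datetime(2025, 3, 1), "detected": datetime(2025, 3, 4)},
    {"occurred": datetime(2025, 6, 10), "detected": datetime(2025, 6, 11)},
]
recommendations_total = 1240
recommendations_overridden = 87

mean_days_to_detect = sum(
    (e["detected"] - e["occurred"]).days for e in drift_events
) / len(drift_events)
override_rate = recommendations_overridden / recommendations_total

print(f"Mean drift time-to-detection: {mean_days_to_detect:.1f} days")
print(f"Override rate: {override_rate:.1%}")
```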
13) Embedding ISO/IEC 23053 in Regulated Operations
For regulated manufacturers, adopting ISO/IEC 23053 is less about inventing a new lifecycle and more about extending existing lifecycle thinking to AI. Many organisations already operate V-models, stage-gate processes or validation lifecycles for MES, LIMS and enterprise applications. ISO/IEC 23053 can be mapped onto those existing structures: concept aligns with business case and URS; data governance aligns with specification and design; model development fits inside implementation; verification and validation sync with testing; deployment integrates with go-live and change-control; monitoring ties into periodic review; retirement fits within system decommissioning procedures. The main difference is the additional emphasis on data, trustworthiness and drift management. In practice, organisations often start by piloting the lifecycle on one or two AI systems, refining templates and work instructions, and then rolling out common patterns across multiple use cases. Over time, ISO/IEC 23053 becomes “the way we do AI” inside the quality system rather than a separate, parallel process.
14) Cross-Functional Responsibilities & Ownership
AI lifecycle control cannot be delegated to a single team. ISO/IEC 23053 assumes a cross-functional model similar to traditional validated systems but with additional stakeholders. Governance bodies, often defined under ISO/IEC 42001, approve high-level AI use cases, risk classifications and lifecycle policies. Quality and compliance teams ensure that lifecycle documentation, validation, deviations and CAPAs meet regulatory expectations and are integrated into the QMS. IT and data teams own infrastructure, security controls, data pipelines and technical implementation. Process owners and operations teams define the intended use, assess fitness-for-purpose, monitor day-to-day behaviour and escalate when AI outputs conflict with process knowledge. Vendors and service providers may contribute models, tools or platforms, but responsibility for lifecycle control remains with the regulated organisation. Clear role descriptions, RACI matrices and training plans help prevent gaps—such as a model being changed without quality review or a monitoring signal being ignored because no one owns it.
15) FAQ
Q1. Is ISO/IEC 23053 mandatory for AI in regulated manufacturing?
No single AI standard is universally mandated at the time of writing, but regulators increasingly expect AI to be managed with the same discipline as other computerised systems. ISO/IEC 23053 provides a lifecycle model that fits naturally into existing CSV and QMS frameworks, making it a pragmatic reference for audit-ready AI deployments.
Q2. How does ISO/IEC 23053 differ from ISO/IEC 42001?
ISO/IEC 42001 defines the AI Management System at organisational level: governance structures, policies, metrics and continual improvement. ISO/IEC 23053 focuses on the lifecycle of each AI system: concept, data, development, verification, validation, deployment, monitoring and retirement. 42001 asks, “How do we govern AI as a whole?”; 23053 asks, “How do we manage this specific AI system throughout its life?”
Q3. Can we apply ISO/IEC 23053 to non-AI analytical models?
Yes. While written for AI, the lifecycle thinking in ISO/IEC 23053 also applies to advanced analytics, predictive models and algorithmic decision-support tools. Many organisations choose to use a unified lifecycle pattern for both AI and non-AI models to reduce complexity and improve consistency.
Q4. How does ISO/IEC 23053 support validation and inspection readiness?
By demanding traceability between requirements, data, design, testing, deployment and monitoring, ISO/IEC 23053 aligns well with CSV principles. The lifecycle dossier created under 23053 often becomes the primary evidence set presented to inspectors when explaining how an AI system is controlled, validated and periodically reviewed.
Q5. What is a practical first step to adopt ISO/IEC 23053?
A pragmatic starting point is to inventory current and planned AI use cases, pick one high-value but manageable system, and map its existing steps against ISO/IEC 23053. The gaps—often around data documentation, monitoring, and formal retirement criteria—then inform updates to templates, SOPs and governance processes before scaling the lifecycle pattern across other AI projects.
Related Reading
• AI Governance & Risk: ISO/IEC 42001 | ISO/IEC 23894 | ISO/IEC TR 24028 | ISO/IEC 22989 | GxP
• Quality & Systems: ISO 9001 | ISO 13485 | CSV | VMP
• Execution & Records: MES | eBR | eMMR | Deviation/NCR | CAPA