ISO/IEC TR 24368 — Ethical and Societal Concerns of AI
This topic is part of the SG Systems Global regulatory & operations glossary.
Updated November 2025 • ISO/IEC 38507, ISO/IEC 23894, ISO/IEC TR 24028, ISO/IEC 42001 • Ethics, Governance, Trust, AI Oversight
ISO/IEC TR 24368 examines the ethical, societal and human-impact concerns associated with artificial intelligence. While many AI standards focus on lifecycle, robustness, risk or governance mechanics, TR 24368 addresses the broader question: What does AI mean for individuals, society, fairness, autonomy, accountability and human welfare? The technical report does not define how to build ethical AI; instead, it outlines the ethical dimensions and societal risks that organizations should take into account when deploying AI systems. In regulated sectors—pharmaceuticals, medical devices, food, cosmetics and chemicals—these concerns intersect with safety, compliance, data integrity, workforce dynamics and public trust.
“AI decisions do not exist in a vacuum. Ethical and societal considerations determine whether those decisions are acceptable, sustainable and worthy of trust.”
1) Scope & Purpose of ISO/IEC TR 24368
ISO/IEC TR 24368 focuses on the ethical and societal implications of AI systems. It explores questions that may not appear in traditional risk analyses but become critical when AI decisions affect human welfare, fairness, opportunity, safety or agency. The standard discusses fairness, discrimination, autonomy, transparency, accountability, dignity, public welfare, sustainability and societal stability. Its purpose is to help organizations identify and consider these broader impacts during design, deployment and governance. Ethical concerns are not treated as philosophical abstractions but as practical risks to be evaluated and controlled, especially in regulated or safety-critical environments.
2) Relationship to Governance, Risk & Trust Standards
ISO/IEC TR 24368 connects closely with the AI-governance ecosystem. ISO/IEC 38507 sets out the governance duties of leadership; TR 24368 explains the ethical content that leadership must oversee. ISO/IEC 42001 requires ethical considerations to be integrated into the AI Management System. ISO/IEC 23894 guides organizations to treat ethical and societal harms as risk impacts. ISO/IEC TR 24028 includes ethics as a key dimension of trustworthiness. TR 24368 provides the conceptual grounding for these requirements, giving organizations a shared understanding of what “ethical impacts” and “societal concerns” actually mean in practice.
3) Ethical Foundations & Human-Centric Principles
The standard identifies ethical principles that should guide AI design and governance. These include respect for human autonomy, prevention of harm, fairness, justice, explicability, accountability and informed consent. Ethical foundations in TR 24368 overlap with global AI ethics frameworks, but the ISO perspective focuses on practical application: ensuring that AI respects human rights, supports human welfare, avoids discrimination, and promotes transparency. Organizations are encouraged to embed these principles into policies, requirements, validation strategies, training materials and governance processes. The ethical foundations become part of the baseline expectations for responsible AI deployment, particularly where AI influences regulated records, safety or public-facing decisions.
4) Fairness, Equity & Non-Discrimination
TR 24368 highlights fairness and non-discrimination as central concerns. Neural networks and other AI models can unintentionally amplify existing inequalities or create new ones. The standard discusses systemic bias, historic inequities, under-representation, disproportionate impacts and unequal access. It encourages organizations to analyse how AI outputs might advantage or disadvantage groups, suppliers, regions, operational units or user types. This aligns with the bias-specific guidance in ISO/IEC 24027. In regulated manufacturing, fairness considerations influence supplier scoring, audit prioritisation, deviation triage and training verification. TR 24368 recognises that fairness is not a purely statistical property—it is a socio-technical and organizational one.
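As a concrete illustration, the sketch below computes a simple disparate-impact style ratio for a hypothetical AI-assisted audit-prioritization score across supplier regions. TR 24368 does not prescribe any particular metric; the field names, the example data and the 0.8 review threshold (the familiar four-fifths rule of thumb) are assumptions made only for illustration.

```python
from collections import defaultdict

# Hypothetical AI outputs: each record holds a supplier's region and whether
# the model flagged that supplier for priority audit. The metric and the 0.8
# threshold below are illustrative assumptions, not TR 24368 requirements.
decisions = [
    {"region": "EU", "flagged": True},
    {"region": "EU", "flagged": False},
    {"region": "APAC", "flagged": True},
    {"region": "APAC", "flagged": True},
    {"region": "APAC", "flagged": True},
    {"region": "LATAM", "flagged": False},
    {"region": "LATAM", "flagged": False},
]

def flag_rates(records):
    """Return the share of suppliers flagged for audit, per region."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["region"]] += 1
        flagged[r["region"]] += int(r["flagged"])
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group flag rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values()) if max(rates.values()) else 1.0

rates = flag_rates(decisions)
ratio = disparate_impact(rates)
print(rates)
if ratio < 0.8:  # illustrative review trigger, not a TR 24368 requirement
    print(f"Disparate impact ratio {ratio:.2f} below 0.8 - route to fairness review")
```

A check like this is deliberately coarse: it flags a disparity for human review rather than declaring the system unfair, which keeps the fairness judgement with the organization instead of with a single statistic.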
5) Transparency, Explainability & Comprehensibility
The standard emphasizes the need for transparency and explainability, but with nuance: not all AI must be fully interpretable, yet stakeholders must be able to understand the AI system’s purpose, limitations, expected behaviour and reasoning patterns at their level of responsibility. Operators need clear guidance on how to use and override AI outputs; auditors need visibility into decision logic; leadership needs comprehensible summaries; and impacted parties need clarity about how AI affects them. TR 24368 aligns with the explainability expectations of TR 24028. In regulated contexts, explainability is also a quality and compliance requirement—stakeholders must be able to justify AI decisions during audits or investigations.
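One way to make stakeholder-level explainability tangible is to record, alongside every AI output, the information each audience needs: purpose and limitations for operators, contributing factors for reviewers, and an identifier and timestamp for auditors. The dataclass below is a minimal sketch of such a record; the field names, the deviation-triage use case and the example identifiers are illustrative assumptions, not structures defined by TR 24368.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExplainedRecommendation:
    """A hypothetical audit-friendly wrapper around one AI output.

    Captures what different stakeholders need at their level of
    responsibility: purpose and limits for operators, contributing
    factors for reviewers, and an identifier plus timestamp for auditors.
    """
    use_case: str                    # what the system is for
    recommendation: str              # the AI output shown to the user
    contributing_factors: list[str]  # human-readable reasons for the output
    known_limitations: str           # where the model should not be trusted
    model_version: str
    record_id: str
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical example: an AI triage suggestion recorded for later audit.
rec = ExplainedRecommendation(
    use_case="deviation triage support (final decision remains with QA)",
    recommendation="Classify deviation DEV-1042 as 'minor'",
    contributing_factors=["no product impact recorded", "similar to 14 prior minor deviations"],
    known_limitations="not validated for sterile manufacturing deviations",
    model_version="triage-model 0.3.1",
    record_id="AIREC-2025-00087",
)
print(rec.recommendation, "|", rec.known_limitations)
```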
6) Autonomy, Agency & Human Oversight
Human autonomy is a major theme in TR 24368. The standard warns against over-automation, AI decision dominance, erosion of human judgement and reductions in operator or reviewer authority. AI should support—not replace—human decision-making in most regulated contexts, consistent with human-oversight models from ISO/IEC 22989. The standard stresses that AI must be designed so users can question, override or escalate concerns. In manufacturing, this aligns with expectations that QA retains final decision authority over batch disposition, deviations, CAPA and quality events. TR 24368 positions autonomy and oversight as essential ethical safeguards.
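A minimal way to encode that safeguard is to treat every AI recommendation as pending until a named, qualified reviewer approves, overrides or escalates it, with overrides always justified in writing. The sketch below assumes a simple rule that QA holds final authority; the role name, the batch example and the outcomes are placeholders rather than anything mandated by the technical report.

```python
from enum import Enum

class ReviewOutcome(Enum):
    APPROVED = "approved"
    OVERRIDDEN = "overridden"
    ESCALATED = "escalated"

def apply_human_review(ai_recommendation: str, reviewer_role: str,
                       outcome: ReviewOutcome, justification: str) -> dict:
    """Record a human decision over an AI recommendation.

    The AI output is never the final decision: it only takes effect once a
    qualified reviewer approves it, and any override or escalation must
    carry a documented justification.
    """
    if reviewer_role != "QA":
        raise PermissionError("Only QA may disposition AI-assisted quality decisions")
    if outcome in (ReviewOutcome.OVERRIDDEN, ReviewOutcome.ESCALATED) and not justification:
        raise ValueError("Override or escalation requires a documented justification")
    return {
        "ai_recommendation": ai_recommendation,
        "final_decision_by": reviewer_role,
        "outcome": outcome.value,
        "justification": justification,
    }

# Example: QA overrides an AI batch-disposition suggestion.
record = apply_human_review(
    ai_recommendation="Release batch B-2219",
    reviewer_role="QA",
    outcome=ReviewOutcome.OVERRIDDEN,
    justification="Open deviation DEV-1042 not yet closed",
)
print(record["outcome"], "-", record["justification"])
```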
7) Accountability, Responsibility & Liability
ISO/IEC TR 24368 stresses the need for clear accountability: AI does not reduce an organization’s responsibility for its outcomes. Governing bodies, system owners and operators must remain accountable for AI-influenced decisions and harms. The technical report calls for clarity about who is responsible for design, training, deployment, monitoring, review and incident response. It aligns with the leadership duties described in ISO/IEC 38507 and reinforces the principle that accountability for AI remains human, not algorithmic. This is especially critical in environments where AI influences regulated outcomes, safety, compliance or quality decisions.
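A lightweight way to make that clarity auditable is to keep a machine-checkable map from each lifecycle activity to a named accountable owner and to flag any activity left unowned before go-live. The sketch below assumes a plain dictionary representation; the role titles are placeholders.

```python
# Hypothetical accountability map: every lifecycle activity discussed above
# should have exactly one named, accountable owner.
ACCOUNTABILITY = {
    "design": "Head of Digital Systems",
    "training": "Data Science Lead",
    "deployment": "IT/OT Operations Manager",
    "monitoring": "AI System Owner",
    "review": "Quality Assurance",
    "incident_response": "",  # deliberately left blank to show the check firing
}

REQUIRED_ACTIVITIES = {"design", "training", "deployment",
                       "monitoring", "review", "incident_response"}

def unowned_activities(mapping: dict) -> list:
    """Return lifecycle activities with no accountable human owner."""
    missing = REQUIRED_ACTIVITIES - set(mapping)
    blank = {activity for activity, owner in mapping.items() if not owner.strip()}
    return sorted(missing | blank)

gaps = unowned_activities(ACCOUNTABILITY)
if gaps:
    print("Accountability gaps to resolve before go-live:", ", ".join(gaps))
```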
8) Societal Risks, Public Welfare & Long-Term Impact
TR 24368 extends beyond organizational impacts to societal concerns. It discusses systemic risks—economic displacement, erosion of trust, unequal access to AI benefits, safety concerns, misinformation, regulatory instability, and impacts on public health or welfare. Organizations should consider the long-term societal implications of AI deployment, especially where AI influences public-facing systems, critical infrastructure, healthcare decisions or safety-critical manufacturing. In regulated manufacturing, societal risk includes product-quality failures, supply-chain inequities, environmental harm and failures that propagate downstream into consumer or patient outcomes. TR 24368 encourages organizations to widen their risk lens to cover human and societal impact, not just corporate risk.
9) Ethical Data Considerations
The standard highlights ethical concerns related to data—privacy, consent, ownership, stewardship, security, bias and representativeness. Data is the foundation of AI, and ethical misuse of data can create systemic harm. TR 24368 aligns data-ethics considerations with the risk methodology of ISO/IEC 23894 and the trustworthiness expectations of ISO/IEC TR 24028. For regulated industries, ethical data considerations overlap with data-integrity frameworks such as 21 CFR Part 11 and EU GMP Annex 11. Organizations must ensure that data used for training, validation and monitoring respects confidentiality, ownership and regulatory constraints.
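In practice, these concerns can be turned into simple admission checks run before a dataset is used for training or monitoring: is there a recorded lawful basis or consent, a named data owner, a retention policy, and evidence that key groups are represented? The sketch below is illustrative only; the metadata fields and the 5% representativeness floor are assumptions rather than requirements of TR 24368 or any data-integrity regulation.

```python
# Hypothetical metadata accompanying a training dataset.
dataset_meta = {
    "name": "supplier-performance-2020-2024",
    "data_owner": "Supply Chain Quality",
    "lawful_basis_or_consent": "contractual supplier data, no personal data",
    "retention_policy": "7 years per quality records procedure",
    "group_counts": {"EU": 412, "APAC": 388, "LATAM": 12},
}

def data_admission_issues(meta: dict, min_group_share: float = 0.05) -> list:
    """Return ethical/data-integrity issues that block use of this dataset."""
    issues = []
    for required in ("data_owner", "lawful_basis_or_consent", "retention_policy"):
        if not meta.get(required):
            issues.append(f"missing {required}")
    counts = meta.get("group_counts", {})
    total = sum(counts.values())
    for group, n in counts.items():
        if total and n / total < min_group_share:
            issues.append(f"group '{group}' under-represented ({n}/{total} records)")
    return issues

issues = data_admission_issues(dataset_meta)
print(issues or "dataset admitted for training")
```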
10) Socio-Technical Systems & Organizational Culture
TR 24368 emphasizes that AI is not just a technical system—it is a socio-technical system embedded in organizational culture, workflows, incentives and human behaviour. Ethical concerns arise not only from AI models but also from how organizations adopt them, communicate them, monitor them and rely on them. Culture affects how AI outputs are interpreted, challenged or overridden. Ethical AI requires a culture where concerns may be raised, where transparency is valued and where safety and fairness override convenience. In regulated manufacturing, culture is already shaped by QMS expectations; TR 24368 encourages extending that culture to AI-related decisions, behaviours and oversight.
11) Workforce, Skills & Human Impact
AI systems may alter job roles, skill requirements and workforce expectations. TR 24368 addresses concerns related to job displacement, reskilling, role clarity, decision-authority erosion and worker stress caused by opaque or inconsistent AI outputs. Ethical AI deployment requires organizations to consider workforce transition, training and empowerment. In regulated manufacturing, this includes updating competency matrices, training plans and SOPs so operators are equipped to use and interpret AI responsibly. TR 24368 positions human capability and dignity as essential components of ethical AI systems, ensuring that automation enhances rather than diminishes the workforce.
12) Long-Term Governance & Ethical Review
ISO/IEC TR 24368 highlights the need for ongoing ethical oversight—not a one-time review. Ethical risks evolve as data, context, technology and society change. Organizations should embed ethical considerations into governance review cycles under ISO/IEC 42001, conduct periodic ethical assessments, and revisit policies as AI use expands. Ethical review may be conducted by dedicated ethics committees, risk boards or governance councils. The standard encourages creating mechanisms for whistleblowing, concern-raising, external feedback and stakeholder involvement. Ethical governance becomes a continuous process that parallels technical lifecycle management.
13) Integration with Risk Management & Organizational Controls
TR 24368 integrates ethical and societal concerns into risk management (ISO/IEC 23894), governance (ISO/IEC 42001, ISO/IEC 38507), lifecycle controls (ISO/IEC 23053) and trustworthiness evaluations (ISO/IEC TR 24028). Ethical concerns become risk sources and harm categories within structured risk registers. This integration ensures ethical risk is not isolated as a separate discussion but embedded in the processes used to evaluate all AI systems. Organizations should treat ethical risks with the same discipline applied to safety, compliance or operational risk—documenting hazards, probability, impact, controls and residual risk. In regulated environments, this creates a clear, auditable story for regulators: ethical impacts were considered, analysed, controlled and monitored using formal processes.
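Concretely, that can mean ethical harms enter the same risk-register schema used for safety and compliance risks, with the same probability and impact scoring and residual-risk tracking after controls. The entry below is a minimal sketch; the 1-to-5 scales, the scoring by simple multiplication and the example harm are illustrative assumptions, not prescribed by TR 24368 or ISO/IEC 23894.

```python
from dataclasses import dataclass

@dataclass
class EthicalRiskEntry:
    """One ethical/societal risk treated like any other risk-register entry."""
    risk_id: str
    description: str
    harm_category: str        # e.g. fairness, autonomy, societal impact
    probability: int          # 1 (rare) .. 5 (almost certain), illustrative scale
    impact: int               # 1 (negligible) .. 5 (severe), illustrative scale
    controls: list[str]
    residual_probability: int
    residual_impact: int

    @property
    def initial_score(self) -> int:
        return self.probability * self.impact

    @property
    def residual_score(self) -> int:
        return self.residual_probability * self.residual_impact

# Hypothetical register entry for a fairness harm.
entry = EthicalRiskEntry(
    risk_id="ETH-007",
    description="Audit-prioritization model systematically deprioritizes small suppliers",
    harm_category="fairness / unequal treatment",
    probability=3, impact=4,
    controls=["quarterly disparate-impact check", "QA override of all prioritization lists"],
    residual_probability=2, residual_impact=3,
)
print(f"{entry.risk_id}: initial score {entry.initial_score}, residual score {entry.residual_score}")
```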
14) Benefits for Regulated Manufacturers & Public-Facing Organizations
TR 24368 provides significant value to regulated manufacturers and organizations operating under public oversight. Ethical concerns relate directly to product safety, quality, trust, patient welfare and consumer protection. AI systems that influence deviations, supplier scoring, sampling, QC decisions, training verification or material release must operate ethically and consistently across all contexts. Failure to consider ethical concerns can lead to process inequities, unsafe outcomes, legal risk or reputational damage. TR 24368 strengthens organizational resilience by ensuring the human and societal impacts of AI are part of core governance—not external or optional considerations.
15) FAQ
Q1. Does ISO/IEC TR 24368 mandate specific ethical rules?
No. It identifies ethical and societal concerns that organizations should consider, but it does not define prescriptive rules or moral codes.
Q2. Does ISO/IEC TR 24368 require explainability for all AI systems?
No. It encourages organizations to consider the explainability needs of each stakeholder. Some high-risk or regulated use cases demand stronger explainability; others may rely on transparency about purpose, limitations and oversight instead of full interpretability.
Q3. How does TR 24368 relate to ISO/IEC 38507?
TR 24368 defines ethical and societal concerns; ISO/IEC 38507 defines leadership responsibilities for governing those concerns. Together, they ensure ethics is part of top-level oversight.
Q4. Does TR 24368 apply to internal-facing AI?
Yes. Even internal AI systems can create fairness, workforce, autonomy or societal issues. Ethical considerations apply to all AI that affects humans directly or indirectly.
Q5. What is a practical first step?
Start by adding an “Ethical & Societal Impact” section to AI risk assessments, using TR 24368 categories. Then integrate these concerns into governance reviews and lifecycle documentation.
Related Reading
• Ethics, Governance & Trust: ISO/IEC 38507 | ISO/IEC 42001 | ISO/IEC TR 24028
• AI Lifecycle & Risk: ISO/IEC 23053 | ISO/IEC 23894 | ISO/IEC 24027
• Core Standards & Compliance: ISO/IEC 22989 | CSV | ISO 9001 | ISO 13485