ISO standards for AI: what to use and how to comply
Summary
AI brings clear opportunity and real risk. ISO standards for AI help you build systems that are safe, fair and secure while remaining compliant and auditable. This guide highlights the core ISO/IEC portfolio you should know, explains how ISO/IEC 42001 fits with ISO/IEC 27001, and gives you a pragmatic path to adopt the right controls across the AI lifecycle.
The core ISO standards for AI you should know
ISO and IEC publish a coherent set of standards that cover AI concepts, lifecycle engineering, risk management, governance and quality. Start with the items below to structure your program and your audits.
| Standard | Purpose in your AI program |
|---|---|
| ISO/IEC 42001:2023 (AIMS – Requirements) | Sets certifiable requirements for an Artificial Intelligence Management System governing AI policies, roles, risk, human oversight and continuous improvement. |
| ISO/IEC 23894:2023 (AI – Risk management) | Guidance to identify, analyze, treat and monitor AI risks across the lifecycle, aligning with enterprise risk practices. |
| ISO/IEC 22989:2022 (AI – Concepts and terminology) | Common vocabulary and taxonomy so teams, vendors and auditors use the same definitions for AI capabilities and artifacts. |
| ISO/IEC 5338:2023 (AI system lifecycle processes) | Process guidance for engineering and managing AI systems from concept to retirement, including roles, activities and work products. |
| ISO/IEC TR 24028:2020 (Trustworthiness in AI) | Overview of trustworthiness characteristics – reliability, safety, security, privacy, resilience – and methods to assess them. |
| ISO/IEC 25059:2023 (Quality model for AI systems) | Quality attributes and measures for AI and AI-aided systems to drive requirements, verification and acceptance. |
Together, these ISO standards for AI give you a practical roadmap: define what “good” means, engineer accordingly, manage risk, and demonstrate control through a certifiable management system. Organizations deploying AI in business should consider building on existing frameworks and standards.
Why ISO compliance matters
The International Organization for Standardization (ISO) is one of the world’s oldest non-governmental organizations, bringing global experts together to establish the best way of doing things—from making a product to managing a process.
ISO has promoted safer, more secure, and more profitable global trade and cooperation since it began operations in 1947, publishing standards designed to make lives “easier, safer, and better.”
ISO compliance holds significant value because its standards are widely respected within the global business community.
Compliance with these standards, particularly in AI, will foster the adoption of best practices. This, in turn, can be key in achieving improved performance, regulatory adherence, and operational efficiency—all of which contribute to building a stronger, more trusted brand.
ISO/IEC 42001 - Artificial Intelligence Management System (AIMS)
ISO/IEC 42001 is the certifiable management system for AI. It helps you set scope and policy for AI use, define roles and accountability, implement risk and impact assessments, manage data and model lifecycle, govern third parties, monitor in production, and drive continual improvement. If you already run ISO/IEC 27001, you will recognize the PDCA cycle and integration points. Certification demonstrates that you operationalize responsible AI across the organization, not only in individual models.
ISO/IEC 23894 - AI risk management
This standard provides a practical risk process tailored to AI. It guides you to map system context and stakeholders, identify harms such as bias, security, privacy and safety issues, evaluate likelihood and impact, select and implement controls, and monitor residual risk through the lifecycle. Use it to underpin 42001 requirements, inform go-live decisions, and align with external frameworks like NIST AI RMF while keeping an ISO-conform approach. For a practical comparison, see Building on existing AI governance frameworks.
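The risk steps above (identify, evaluate, treat, monitor) can be sketched as a simple likelihood-impact evaluation. This is an illustrative sketch only: the scales, thresholds, and treatment labels below are assumptions made for the example, not values prescribed by ISO/IEC 23894.

```python
from dataclasses import dataclass

# Illustrative 1-5 likelihood/impact scales; the thresholds below are
# assumptions for this sketch, not values from ISO/IEC 23894.
ACCEPT_THRESHOLD = 6
TREAT_THRESHOLD = 15

@dataclass
class AIRisk:
    name: str          # e.g. "unwanted bias in loan scoring"
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)

    def score(self) -> int:
        return self.likelihood * self.impact

    def treatment(self) -> str:
        """Map a risk score to a coarse treatment decision."""
        s = self.score()
        if s <= ACCEPT_THRESHOLD:
            return "accept and monitor"
        if s <= TREAT_THRESHOLD:
            return "mitigate with controls"
        return "avoid or redesign before go-live"

risks = [
    AIRisk("training-data privacy leak", likelihood=2, impact=5),
    AIRisk("unwanted bias in outcomes", likelihood=4, impact=4),
]
for r in risks:
    print(f"{r.name}: score={r.score()}, decision={r.treatment()}")
```

In practice the register would also record owners, selected controls, and residual risk, so the same structure can feed go-live decisions and 42001 audit evidence.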
ISO/IEC 22989 - AI concepts and terminology
ISO/IEC 22989 defines the vocabulary for AI, including concepts such as training data, model, inference, explainability and human-in-the-loop. Earlier drafts were known as ISO/IEC DIS 22989. Adopting 22989 avoids ambiguity in policies, control descriptions, and supplier contracts, which is essential when you audit or certify against 42001.
ISO/IEC 5338 - AI system lifecycle processes
ISO/IEC 5338 standardizes lifecycle processes for AI systems, from requirements and data readiness to verification, validation, deployment, monitoring and retirement. It complements software engineering practices and clarifies checkpoints such as data provenance, model testing, drift monitoring and human oversight. Software Improvement Group contributed to the development of ISO/IEC 5338 and applies it in assessments and coaching. For a deeper dive, see ISO 5338: the global standard on AI systems. To understand our contributions, read Our role in ISO/CEN/NEN AI standardization.
ISO/IEC 23053 - Framework for AI systems using machine learning
ISO/IEC 23053 describes how ML components fit into AI system architectures. It helps teams design modular, testable solutions by distinguishing datasets, training pipelines, models, serving layers and feedback loops. Use it to structure documentation and align engineering with governance expectations in 42001 and 23894.
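To illustrate the separation of concerns described above, here is a minimal sketch of a component inventory that keeps datasets, training pipelines, and serving layers distinct, plus one traceability check. All class names and fields are hypothetical, chosen for the example rather than taken from ISO/IEC 23053.

```python
from dataclasses import dataclass

# Hypothetical inventory separating the components an ML-based AI system
# architecture distinguishes; names and fields are illustrative assumptions.
@dataclass
class Dataset:
    name: str
    provenance: str          # where the data came from

@dataclass
class TrainingPipeline:
    input_datasets: list     # names of Dataset entries consumed
    produces_model: str      # identifier of the trained model

@dataclass
class ServingLayer:
    serves_model: str        # identifier of the deployed model
    feedback_loop: bool      # is production feedback routed back to training?

def serving_is_traceable(training: TrainingPipeline, serving: ServingLayer) -> bool:
    """Check that the deployed model is the one the documented pipeline produced."""
    return serving.serves_model == training.produces_model

data = Dataset("loans-2024", provenance="core banking export, anonymized")
training = TrainingPipeline(input_datasets=[data.name], produces_model="risk-v3")
serving = ServingLayer(serves_model="risk-v3", feedback_loop=True)
print("traceable:", serving_is_traceable(training, serving))
```

Keeping these artifacts distinct makes the documentation expectations of 42001 and 23894 easier to satisfy: each component can carry its own evidence.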
ISO/IEC 31700 - Privacy by design for consumer goods and services
ISO/IEC 31700 defines high-level requirements for privacy by design, ensuring that privacy is safeguarded throughout the entire lifecycle of a consumer product, including the data it processes.
The core principle of ISO/IEC 31700 is privacy by design: a set of methodologies for developing products, processes, systems, software, and services that prioritize consumer privacy throughout the design and development phases, considering the entire lifecycle of the product.
Business benefits of ISO/IEC 31700 adoption
Organizations implementing ISO/IEC 31700 can improve regulatory compliance, enhance innovation and business agility, and reduce risks related to privacy and data breaches.
ISO/IEC 42001 vs ISO/IEC 27001: how they differ and fit together
ISO/IEC 27001 is an ISMS focused on information security for the entire organization. ISO/IEC 42001 is an AIMS focused on governing AI systems and their lifecycle risks. Both follow Plan-Do-Check-Act and both are auditable, but their scopes and control objectives differ:
- Scope – 27001 protects information assets across the business. 42001 governs AI use cases, data, models and operations.
- Controls – 27001 Annex A targets security controls. 42001 requires AI-specific controls such as data governance for training, model risk assessment, validation, human-in-the-loop and monitoring of model drift.
- Risk – 27001 prioritizes confidentiality, integrity, availability. 42001 extends risk to bias, explainability, safety, compliance and societal impacts, supported by ISO/IEC 23894.
In practice you align them: integrate AI assets into the ISMS, apply AI-specific governance via the AIMS, and reuse shared processes like incident management and supplier oversight. For regulatory alignment, see our EU AI Act summary.
Applying ISO/IEC 5338 in practice
ISO/IEC 5338’s processes can be applied within an organization or project when developing or acquiring AI systems. The standard emphasizes considerations unique to AI at every stage of the lifecycle, including:
- The need to protect sensitive training data used by engineers to create models, whereas conventional software engineering typically works with anonymized test data.
- Addressing new risk factors such as transparency, unwanted bias, and purpose-binding.
- Understanding that AI projects can be unpredictable during experimental stages, which is important for project managers.
- Recognizing the requirement for different skill sets in HR for AI projects.
- Continuously validating the performance of models in production to detect issues and prevent them from becoming ‘stale’.
For traditional software or system elements within an AI system, the software life cycle processes in ISO/IEC/IEEE 12207 and the system life cycle processes in ISO/IEC/IEEE 15288 can also be used.
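The “stale model” concern above is commonly operationalized with a drift metric. Below is a minimal sketch using the population stability index (PSI), a widely used choice; note that ISO/IEC 5338 does not prescribe a specific metric, and the thresholds in the comments are a common rule of thumb, not part of the standard.

```python
import math

def psi(expected: list, observed: list) -> float:
    """Population stability index between two binned distributions.

    Inputs are per-bin proportions that each sum to 1; a small epsilon
    avoids taking the log of zero for empty bins.
    """
    eps = 1e-6
    total = 0.0
    for e, o in zip(expected, observed):
        e, o = max(e, eps), max(o, eps)
        total += (o - e) * math.log(o / e)
    return total

# Rule of thumb (an assumption, not from the standard):
# PSI < 0.1 stable, 0.1-0.25 watch closely, > 0.25 consider retraining.
baseline = [0.25, 0.25, 0.25, 0.25]    # feature distribution at training time
production = [0.10, 0.20, 0.30, 0.40]  # distribution observed in production
print(f"PSI = {psi(baseline, production):.3f}")
```

Running such a check per feature on a schedule, with alerting thresholds, is one concrete way to satisfy the continuous-validation checkpoint above.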
A practical path to adoption
- Map AI use cases and risks – Inventory models, data flows and third-party services. Classify impact using ISO/IEC 23894.
- Establish governance – Define your AIMS scope, roles and policies per ISO/IEC 42001. Embed human oversight and accountability in line with governance, risk and compliance (GRC) best practices, supported by Managing AI in business: GRC.
- Engineer the lifecycle – Use ISO/IEC 5338 to structure activities and artifacts. Set quality targets with ISO/IEC 25059 and trust goals with ISO/IEC TR 24028.
- Operationalize controls – Implement data governance, model validation, monitoring for drift and bias, change management, and robust operational playbooks. Trace decisions to evidence.
- Measure and improve – Define KPIs, audit readiness and corrective actions. Align with your ISMS where relevant.
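As a concrete starting point for the first two steps above, a use-case inventory with a coarse impact classification can be sketched as follows; the field names and tier logic are illustrative assumptions, not requirements taken from ISO/IEC 23894 or 42001.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    uses_personal_data: bool
    affects_individuals: bool   # decisions with material effect on people
    third_party_model: bool     # supplied or hosted by an external vendor

    def impact_tier(self) -> str:
        """Coarse classification feeding a 23894-style risk assessment.

        The tiering rules here are an illustrative assumption.
        """
        if self.affects_individuals:
            return "high"
        if self.uses_personal_data or self.third_party_model:
            return "medium"
        return "low"

inventory = [
    AIUseCase("credit scoring", True, True, False),
    AIUseCase("log anomaly detection", False, False, True),
]
for uc in inventory:
    print(uc.name, "->", uc.impact_tier())
```

High-tier use cases would then get the full risk and impact assessment, human-oversight design, and monitoring controls the AIMS requires, while low-tier ones follow a lighter path.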
Software Improvement Group (SIG) supports this journey end to end. Our Sigrid platform analyzes software quality and security to give you actionable baselines for AI-enabled systems, and our AI Readiness Guide provides step-by-step guidance to translate standards into governance and processes. SIG also contributed to the development of ISO/IEC 5338, bringing deep lifecycle expertise to your implementation, and to broader ISO, CEN and NEN AI standardization efforts.
FAQs
Is there an ISO standard for AI?
Yes. The portfolio includes ISO/IEC 42001 (a certifiable management system for AI), ISO/IEC 23894 (risk management), ISO/IEC 22989 (concepts and terminology) and more. Use 42001 to govern and audit your AI program, and the other standards to shape lifecycle, risk and quality.
Which ISO covers AI?
AI standards are developed by ISO/IEC JTC 1/SC 42. For most organizations, start with ISO/IEC 42001, ISO/IEC 23894, ISO/IEC 22989, ISO/IEC 5338 and ISO/IEC 25059. See the table above for what each standard adds to your controls and evidence.
What is ISO/IEC 22989 (and ISO/IEC DIS 22989)?
ISO/IEC 22989:2022 defines AI concepts and terminology so teams and auditors share the same vocabulary. ISO/IEC DIS 22989 was the draft stage name before formal publication. Citing 22989 avoids confusion across projects, suppliers and audits.
Want to accelerate adoption with proven practices and measurable quality? Explore the resources above and see how Sigrid can baseline and monitor your AI-enabled software portfolio.
ISO/IEC 42001 offers crucial guidance in the rapidly evolving field of AI, addressing unique challenges such as ethical considerations, transparency, and continuous machine learning. It gives organizations a structured approach to managing AI-related risks and opportunities, balancing innovation with effective governance.
Moreover, ISO/IEC 42001 provides an integrated, future-proof approach to managing AI projects, from risk assessment to mitigation, which is essential as AI technology continues to evolve rapidly.
Take the next step
Accelerate responsible AI with confidence. See how Sigrid® can automate quality and governance checks for AI-enabled software, and talk to SIG experts about implementing ISO-aligned AI management across your portfolio.
Conclusion
As the technological breakthrough of Artificial Intelligence continues to take the world by storm, businesses seek to benefit. Promises of improved turnover, ROI, efficiency, productivity, and product performance invite leadership across the global spectrum of industries to consider adopting AI systems—if not develop their own.
Yet, AI implementation in business also carries its fair share of risks. Poor-quality AI applications, a limited understanding of what AI is and how it operates, and, at present, sparse regulation of this new technology all contribute to a risk-filled technological environment.
Whilst regulatory bodies around the world move to shape the future legal framework of AI, organizations and leaders currently using or planning to use AI to optimize their operations should adopt ISO standards for AI now rather than wait.
Learn more about AI in business with the Software Improvement Group blog.
With the rise of AI, adopting ISO standards is crucial for secure, responsible implementation. Our AI readiness guide, authored by Rob van der Veer, offers 19 actionable steps for board members, GRC leaders, and IT professionals to align AI adoption with ISO/IEC standards. The guide ensures your organization leverages AI effectively while staying compliant in a fast-evolving regulatory landscape.
Align your AI strategy with global standards. Download our AI readiness guide and discover how ISO compliance can safeguard your AI initiatives (last updated August 2025).