New standard guides companies in adopting responsible AI practices, focusing on risk, transparency, and regulatory compliance.
*By Darci de Borba*
Why is ISO/IEC 42001 important now?
The rapid advancement of artificial intelligence (AI) has driven profound changes in how organizations develop products, provide services, and make strategic decisions. However, the increasing technical complexity and the ethical, legal, and social implications associated with the use of AI are making the adoption of robust governance practices indispensable.
In this context, the publication of the ISO/IEC 42001:2023 standard represents an unprecedented international milestone by offering a formal model for managing artificial intelligence systems. This standard establishes clear criteria for auditing, certifying, and building responsible AI systems, enabling organizations of different sizes to align their processes with growing expectations of transparency, security, and accountability. More than a technical reference, ISO/IEC 42001 is a concrete response to the global demand for trust and legitimacy in the adoption of algorithmic technologies (1).
Initial studies on the organizational impacts of the standard indicate that its implementation could generate significant transformations in governance structure, quality management systems, and compliance programs. ISO/IEC 42001 requires organizations to integrate specific algorithmic risk and explainability requirements into their decision-making processes, and its alignment with frameworks such as ISO 9001 contributes to a more cohesive systemic approach (2,3).
Furthermore, the standard has the potential to act as a link between established cybersecurity practices, such as COBIT, and new regulatory requirements regarding the commercialization of large-scale language models (4). Thus, ISO/IEC 42001 emerges not only as a voluntary certification, but as a strategic instrument for organizations seeking to mitigate risks, create sustainable value, and strengthen trust with customers, investors, and regulatory bodies.
What does the standard establish?
ISO/IEC 42001:2023 establishes requirements and guidelines for creating, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). Unlike other standards that are limited to aspects of quality or information security, ISO/IEC 42001 combines specific technical, organizational, and ethical dimensions of the AI lifecycle. Its main objective is to ensure that AI systems are managed responsibly, transparently, and in line with legal requirements and social expectations (1).
The standard is structured around a process model based on the PDCA cycle (Plan, Do, Check, Act). This approach facilitates integration with already established management systems, such as ISO 9001 and ISO/IEC 27001, allowing organizations to create more cohesive governance (5). Among the key elements, the following stand out:
AI Scope and Policy: The organization must clearly define the scope of application of the standard, establish an AI policy approved by senior management, and ensure that its principles are communicated and understood internally (ISO/IEC 42001:2023, cl. 4–5). This policy must include a commitment to responsible use, continuous improvement, and compliance with legal and ethical requirements.
Risk and Impact Assessment: One of the cornerstones of the standard is the requirement to identify, assess, and mitigate AI-specific risks, including technical risks (e.g., unexpected performance), ethical risks (such as algorithmic bias), and security and privacy risks (6). This process should be documented and reviewed periodically.
Transparency and Explainability Requirements: The standard emphasizes the need for AI systems to be understandable by users and relevant stakeholders. This includes documentation on objectives, operation, limitations, and automated decisions (2). The degree of explainability should be proportional to the associated risk.
Skills and Awareness: The organization must ensure that the people involved in the AI lifecycle are competent and capable of performing their roles appropriately. This involves training, risk awareness, and clarity of responsibilities (ISO/IEC 42001:2023, cl. 7).
Data and Documentation Management: The standard requires the establishment of robust processes to control documented information, from training data to audit records and decisions (4).
Monitoring, Auditing and Continuous Improvement: Like other management systems, ISO/IEC 42001 requires systematic monitoring of AI system processes and results, internal audits, and corrective actions when necessary. This framework aims to ensure the adaptability of the AI management system to technological, regulatory, or organizational changes.
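The risk-assessment, documentation, and monitoring elements above lend themselves to a simple, auditable record structure. The sketch below is a minimal, hypothetical risk register in Python: the class names, the severity-by-likelihood scoring, and the quarterly review interval are illustrative assumptions for this article, not requirements prescribed by ISO/IEC 42001.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum

# Risk categories drawn from the types the standard's risk clause covers:
# technical (unexpected performance), ethical (bias), security/privacy.
class RiskCategory(Enum):
    TECHNICAL = "technical"
    ETHICAL = "ethical"
    SECURITY_PRIVACY = "security_privacy"

@dataclass
class RiskEntry:
    description: str
    category: RiskCategory
    severity: int        # 1 (low) .. 5 (critical) -- illustrative scale
    likelihood: int      # 1 (rare) .. 5 (frequent) -- illustrative scale
    mitigation: str
    last_reviewed: date

    @property
    def score(self) -> int:
        # Simple severity x likelihood scoring, for illustration only.
        return self.severity * self.likelihood

@dataclass
class RiskRegister:
    review_interval_days: int = 90   # assumed quarterly review cycle
    entries: list = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def overdue(self, today: date) -> list:
        # Entries past their review date: the "Check" step of the PDCA cycle.
        cutoff = timedelta(days=self.review_interval_days)
        return [e for e in self.entries if today - e.last_reviewed > cutoff]

    def top_risks(self, n: int = 3) -> list:
        # Highest-scoring risks first, to prioritize corrective action ("Act").
        return sorted(self.entries, key=lambda e: e.score, reverse=True)[:n]

register = RiskRegister()
register.add(RiskEntry("Model accuracy degrades on out-of-distribution inputs",
                       RiskCategory.TECHNICAL, severity=4, likelihood=3,
                       mitigation="Monitor drift metrics; retrain quarterly",
                       last_reviewed=date(2025, 1, 10)))
register.add(RiskEntry("Scoring model may disadvantage a protected group",
                       RiskCategory.ETHICAL, severity=5, likelihood=2,
                       mitigation="Fairness audit before each release",
                       last_reviewed=date(2025, 6, 1)))

# Periodic check: list entries whose review is overdue as of a given date.
print([e.description for e in register.overdue(date(2025, 7, 1))])
```

Even a minimal register like this makes the standard's documentation and review requirements concrete: each risk carries its category, mitigation, and review date, so an internal audit can mechanically flag what is overdue.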
By aligning AI management practices with well-known frameworks – such as COBIT for IT governance (4) – and incorporating specific ethical and explainability requirements, ISO/IEC 42001 proposes a comprehensive and integrative model. As such, it becomes an essential reference for organizations that wish not only to mitigate risks and meet regulations, but also to consolidate a culture of trust and responsible innovation.
Benefits and challenges of adoption by organizations
Adopting ISO/IEC 42001:2023 offers significant strategic and operational benefits for organizations that develop or use artificial intelligence systems, but it also poses challenges that require planning and commitment from senior management.
Among the main benefits, the strengthening of organizational trust and corporate reputation stands out. Certification, or even voluntary adherence to the standard, signals to the market, investors, and regulators that the company adopts consistent practices of risk management, ethics, and transparency in AI (1). This stance can be decisive in consolidating competitive advantages in increasingly rigorous regulatory environments subject to algorithmic accountability audits. Another important benefit is the reduction of legal and reputational risks, since the standard provides systematic methods to identify and mitigate problems such as discriminatory biases, security breaches, and misuse of sensitive data (6). Furthermore, integration with existing management systems, such as ISO 9001 and ISO/IEC 27001, can generate operational synergies, making compliance and audit processes more agile and less fragmented (5).
However, adopting ISO/IEC 42001 involves significant challenges. One of the main ones is the technical and organizational complexity required to map AI risks, establish coherent policies, and document processes in a clear and auditable way (2). Smaller organizations, or those with limited maturity in quality management, may face difficulties adapting their practices and forming multidisciplinary teams with legal, technical, and ethical knowledge. Another recurring obstacle is cultural alignment and internal awareness, since the success of the system depends on the engagement of areas that traditionally do not interact, such as algorithm development, compliance, legal, and human resources management (4). Furthermore, since the standard requires continuous monitoring and improvement, companies that lack structured governance and audit processes may underestimate the effort required to maintain compliance over time. Table 1 summarizes the main benefits and challenges associated with adopting the standard.
Table 1. Benefits and challenges in adopting ISO/IEC 42001:2023

| Dimension | Potential Benefits | Risks / Challenges |
| --- | --- | --- |
| Governance | Strengthens the culture of responsibility and accountability; defines clear roles and responsibilities | Cultural resistance and low internal adherence; difficulty aligning technical and strategic areas |
| Technical | Improves the traceability and explainability of AI systems; reduces the risk of undetected errors or biases | Complexity of technical documentation; need for continuous updating of controls |
| Organizational | Integrates AI management into existing ISO systems (ISO 9001, ISO/IEC 27001); facilitates audits and compliance | Demands multidisciplinary training; possible operational overload at the start |
| Legal / Ethical | Reinforces compliance with privacy, data protection, and human rights regulations; minimizes legal liabilities | Need for constant monitoring of new legislation; difficulty in assessing complex ethical impacts |
| Market / Image | Increases the confidence of customers, partners, and investors; competitive differentiation and access to new markets | Risk of negative perception if the standard is adopted only superficially (facade compliance) |
Thus, while the potential benefits of ISO/IEC 42001 are clear and aligned with global AI regulatory trends, its successful implementation requires committed leadership, investment in capacity building, and careful integration with other existing management systems. When addressed in a planned manner, these challenges can be transformed into opportunities for differentiation and the creation of sustainable value.
ISO/IEC 42001:2023 inaugurates a new phase in the relationship between companies and Artificial Intelligence, in which trust and responsibility become competitive differentiators. By adopting robust management practices, organizations can align technological innovation with ethical and regulatory principles, anticipating market transformations and social expectations.
The time to act is now: companies that lead this movement will be better prepared to seize opportunities and mitigate risks inherent in the use of AI.
References
- Benraouane SA. AI Management System Certification According to the ISO/IEC 42001 Standard: How to Audit, Certify, and Build Responsible AI Systems. Taylor and Francis; 2024. DOI: 10.4324/9781003463979
- Biroğul S, Şahin Ö, Əsgərli H. Exploring the Impact of ISO/IEC 42001:2023 AI Management Standard on Organizational Practices. Adv Artif Intell Res. 2025;5(1):14–22.
- Gueorguiev T. An approach to integrating Artificial Intelligence in ISO 9001-based quality management systems. Measurement: Sensors. 2025;38. DOI: 10.1016/j.measen.2024.101787
- McIntosh TR, Susnjak T, Liu T, Watters P, Xu D, Liu D, et al. From COBIT to ISO 42001: Evaluating cybersecurity frameworks for opportunities, risks, and regulatory compliance in commercializing large language models. Computers and Security. 2024;144. DOI: 10.1016/j.cose.2024.103964
- Gueorguiev T. The Process Approach in Artificial Intelligence Management Systems. In: 2024 9th International Conference on Energy Efficiency and Agricultural Engineering (EE&AE) – Proceedings. IEEE; 2024. DOI: 10.1109/EEAE60309.2024.10600591
- Ricciardi Celsi L, Zomaya AY. Perspectives on Managing AI Ethics in the Digital Age. Information. 2025;16. DOI: 10.3390/info16040318
*Darci de Borba is a researcher at ABES Think Tank, planning and research technician at Ipea, PhD in Administration from the University of Vale do Rio dos Sinos (UNISINOS) and Master in Administration from the Pontifical Catholic University of Rio Grande do Sul (PUCRS).*
Notice: The opinion presented in this article is the responsibility of its author and not of ABES - Brazilian Association of Software Companies
Article originally published on the IT Forum website: https://itforum.com.br/colunas/isoiec-420012023-adocao/