Trust in AI systems begins with responsible governance — and those who anticipate regulations gain a competitive advantage.

* By Thomaz Côrte Real

The development of artificial intelligence systems is already a reality for many companies in Brazil and worldwide. These solutions are present in various sectors of the economy and directly impact our lives, whether in the provision of services, profile analysis, automated decision-making, or support for internal processes. Given this scenario, there is a growing need for the companies that develop these systems to adopt good governance practices from the conception of their solutions, ensuring that innovation goes hand in hand with responsibility.

AI governance comprises a set of guidelines, frameworks, and responsibilities designed to ensure that the development, deployment, and use of artificial intelligence systems are aligned with ethical, legal, and social principles. For companies developing AI systems, this represents a new paradigm, as in addition to designing efficient algorithms, it is necessary to make them understandable, auditable, secure, and fair.  

One of the first pillars of this governance is transparency, which must be addressed from two perspectives: usage and operation. Transparency regarding usage requires informing the user that they are dealing with an AI system and what its level of autonomy is. Transparency regarding operation deals with internal aspects of the system, such as how it works, what data it uses, and what its decision criteria are. Although full explainability is not always possible in more complex models (such as those based on deep learning), companies should document design decisions, select more interpretable models whenever possible, and adopt practices such as decision logs, bias testing, and external validation, always preserving trade secrets and intellectual property.

Another essential principle is accountability: governance requires identifying who is responsible for the system's behavior and what measures have been taken to mitigate risks. This implies structuring multidisciplinary teams, reviewing training data for structural biases, and providing channels for humans to contest automated decisions. These measures should be considered from the initial stages of development.

To support this journey, it is recommended to structure governance in well-defined phases:

* Planning: defining objectives and system scope, and identifying ethical and legal risks;
* Responsible design: choosing models and data aligned with privacy and explainability criteria;
* Responsible technical implementation: documenting decisions, evaluating biases, and testing robustness;
* Continuous monitoring: internal audits, whistleblowing channels, impact measurement, and model updates;
* Engagement and communication: dialogue with stakeholders, users, and regulatory bodies, ensuring trust and clarity.

It is important to recognize that the intensity of governance measures should be proportional to the risk that the AI system presents. Systems that directly impact fundamental rights, such as those used in health, credit, or public safety, will naturally require a higher degree of control, transparency, and oversight. On the other hand, low-impact solutions, such as internal administrative support tools, can adopt a lighter governance structure, provided that they preserve basic principles of accountability and security. This risk-based approach avoids regulatory overreach and allows innovation to flourish in a balanced way, respecting the relevant ethical and legal boundaries.

Brazil is moving forward with Bill 2338/2023, which proposes regulating the use of artificial intelligence systems, especially those classified as high-risk. The bill introduces principles such as non-discrimination, human oversight, security, and transparency, establishing obligations that directly impact the activities of developing companies. In parallel, technical standards such as ISO/IEC 22989:2022, which establishes definitions, classifications, and fundamental principles for AI systems, offer conceptual and terminological support for ethical, transparent, and secure development.  

Therefore, it is crucial that companies developing AI systems do not wait for the regulation's final approval to adopt good governance practices. Now is the time to anticipate risks, incorporate ethical criteria into system design, and understand that the trust of the market and of society will be the new competitive differentiator.

AI governance is not a barrier to innovation; it is a path for it to happen in a sustainable, inclusive, and secure way, with companies developing AI systems playing a central role in this process. 

*Thomaz Côrte Real is a legal consultant for the Brazilian Association of Software Companies (ABES).

Notice: The opinion presented in this article is the responsibility of its author, not of ABES (Brazilian Association of Software Companies).

Article originally published on the IT Forum website: https://itforum.com.br/colunas/governanca-etica-desenvolvimento-ia/
