*By Natalia Marroni Borges
Since the popularization of ChatGPT and other generative AI tools, a curious phenomenon has been intensifying. Numerous reports from different sources show impressive usage percentages in business, educational, and public contexts—for example, McKinsey pointed out in 2024 that 65% of organizations already use generative AI regularly, almost double the number in 2023.
What is rarely discussed with the necessary depth is the nature of this use, because, to a large extent, it is predominantly personal rather than institutional. This argument is reinforced by a Microsoft survey finding that 75% of knowledge workers already use AI and that, of these, 78% bring their own tools ("BYOAI": Bring Your Own AI), outside the corporate umbrella. Before that, in 2023, Salesforce had already indicated that more than half of AI users employed it without formal employer approval, a clear sign of Shadow AI.
The analogy with Shadow IT is straightforward and inevitable. Previously, discussions revolved around macros, apps, and parallel spreadsheets; now we are talking about prompts, models, and connectors that can often access sensitive data without the policies, audit trails, or controls that, in other situations, would be considered minimal. The risks, of course, are proportional: Gartner, for example, projects that by 2027 more than 40% of AI-related data breaches will stem from the misuse of GenAI, exacerbated by regulatory requirements that cross borders.
At this point, there is also what can be understood as a maturity gap. Although many publications argue that the adoption of generative AI is widespread in terms of volume of use, McKinsey itself (2025) observed that only 1% of executives in developed markets describe their GenAI rollouts as "mature." This can be read as a great deal of experimentation at the edges and very little end-to-end institutionalization.
This accounting, which lumps personal and organizational use into the same figure, inflates optimistic charts of "efficiency gains" but masks an important reality: use becomes fragmented and ungoverned. That leads to the point this text aims to reach: individual use and corporate use are different movements.
At the individual level: speed, flexibility, low cost, and rapid learning. These gains are real, and they explain why so many people "jump the queue" of institutional processes to use AI in the shadows.
At the organizational level: policies, architecture, data, compliance, security, budget, change management, and value measurement. Without these, leaders admit they have no clear plan for moving from individual impact to operational and financial impact.
The confusion between these modes of adopting generative AI creates a certain illusion of maturity: it is not the company that has become smarter, but its employees, often "improvising" with powerful tools now at their fingertips. There is room for many pros and cons regarding this so-called "shadow AI," but the key point is not to criticize this type of use; it is to genuinely differentiate shadow use from institutional use.
Thinking in terms of institutionalization, this is when the difference between experimentation and organizational capability becomes clear. Adopting AI is not the same as integrating it into the company's operations. Institutionalization is the moment when individual curiosity becomes collective competence; the moment when its use ceases to be episodic and becomes part of a planned and coherent operational model.
Organizations moving in this direction are taking different paths, but at their core they are dealing with the same challenge: transforming fragmented use into structure. Some choose to develop solutions internally (build); others acquire ready-made technologies on the market (buy); some establish strategic partnerships with consultancies and providers (partner); and others internalize skills through the acquisition of startups or the creation of laboratories (acqui-hire). All of these approaches are valid, but none is sufficient in isolation. What differentiates them from superficial attempts is the presence of real governance, capable of connecting strategy, technology, data, and people around a clear purpose.
The institutionalization of AI, therefore, is not merely a technical move. It is an organizational architecture decision—redefining roles, decision flows, and boundaries of responsibility. It means creating mechanisms that support the continued use of AI with security, value measurement, and ethical alignment. It means understanding that without structured data, usage policies, prioritization criteria, and a mature digital culture, AI will continue to orbit on the periphery of operations, fueled by individual initiatives and illusory metrics.
The turning point lies in the shift from enthusiasm to methodology. Institutionalizing means designing the space where AI can generate measurable impact, while also recognizing where it should not enter. It means establishing criteria for productivity, quality, and experience; defining data and security guardrails; creating governance and compliance processes; investing in the training of leaders and users; and, finally, connecting all of this to an operational and scalable strategy.
When this happens, the use of AI ceases to be a set of isolated experiments and becomes an organizational asset, with purpose, coherence, and accountability. Until then, adoption statistics will remain inflated while real capabilities remain underdeveloped. Because the challenge is not to use AI. The challenge is to institutionalize it.
*Natália Marroni Borges is a researcher at the ABES Think Tank; a research member of the IEA Future Lab group (linked to the Federal University of Rio Grande do Sul – UFRGS); a postdoctoral researcher in Artificial Intelligence and Foresight; and a professor at UFRGS.
Notice: The opinions expressed in this article are the responsibility of the author and not of ABES (Brazilian Association of Software Companies).
Article originally published on the Mobile Time website: https://www.mobiletime.com.br/colunistas/18/11/2025/natalia-borges-ia-institucionalizada/