Open source infrastructure, governance, and interoperability emerge as decisive factors for achieving financial results with artificial intelligence.

The adoption of artificial intelligence entered a new phase in 2026. After years of promises of productivity gains, cost reduction, and new revenue streams, the model based on one-off tests and proof-of-concept projects without a plan for large-scale application is beginning to lose momentum. Pressed for concrete results, companies have stopped questioning what is possible with AI and are now focusing on what is profitable to achieve with the technology.
The discrepancy between successful demonstrations and scalable applications makes it clear that a good prototype is not equivalent to a solution ready to operate safely, with governance and continuity. Studies from MIT show that only 5% of integrated AI pilot projects are delivering value, while the vast majority (95%) remain stagnant, with no measurable impact on the balance sheet.
The reality is that, while AI models are becoming commodities, the underlying infrastructure represents a significant bottleneck in achieving a return on investment. “For AI to move beyond the experimental phase and become profitable, companies need to stop treating it as a monolithic block and start applying the same rigorous governance, compliance, and privacy standards they require for any other mission-critical application,” explains Thiago Araki, Senior Technology Director for Latin America at Red Hat. This means creating the technical, operational, and organizational conditions to take AI from isolated experiments to production environments, with real business impact.
Maturity, strategy, and infrastructure
Moving in this direction requires, first and foremost, a change in mindset. Leaders need to recognize that it's not enough to invest in AI; they need to be prepared for it with people, processes, and, above all, a suitable technological ecosystem. Today, chip architectures, software layers, inference tools, protection mechanisms, and the ability to operate consistently across data centers and clouds weigh more heavily in calculating return on investment than the choice of a specific language model.
A recent Gartner report reinforces this view, projecting that global investments in AI will reach $2.52 trillion this year. Of this total, $401 billion is expected to be directed towards infrastructure, indicating that companies are beginning to recognize that the success of AI depends less on the model itself and more on the foundations that allow it to operate at scale and securely. “There is no single enterprise AI approach that works for all organizations, which makes choice and flexibility central factors in this journey,” emphasizes Alejandro Raffaele, Senior Director of Enterprise Sales for Latin America at Red Hat.
According to the executive, open-source platforms and standards help preserve freedom of choice by ensuring interoperability between different environments. "Companies that prioritize this path tend to advance more safely and quickly, reducing the risk of excessive dependence on proprietary suppliers or technologies," he points out.
A practical example comes from ARSAT, an Argentinian state-owned telecommunications company. Facing operational bottlenecks, high costs, and slow response times, the company undertook an infrastructure overhaul based on open source that became a clear case of innovation and technological transformation. By structuring its AI strategy on an open, standardized platform prepared for hybrid environments, the company was able to advance from the experimental stage to applications aligned with its operational needs and the regulatory requirements of the public sector.
Without a foundation, there is no digital future
The importance of a solid technological foundation is highlighted by Harvard Business Review, which identified the lack of integration between innovation and operations as one of the main obstacles to capturing value with AI. "Artificial intelligence is not the foundation of the technology; it's the finishing touch," says Gilson Magalhães, vice president and general manager for Latin America at Red Hat.
To illustrate the point, the executive uses a metaphor from the construction industry: “No building can stand without a foundation, adequate materials, and rigorous processes. The same is true with technology. Without a solid infrastructure base, using AI is like trying to operate a skyscraper built on quicksand.”
In this context, Red Hat Enterprise Linux positions itself as the foundation upon which modern applications, whether AI-based or not, can scale consistently, maintaining operational control and reducing risks. “An efficient AI strategy has no room for fragmented environments or solutions; it demands consistency, speed, and security. A robust operating system, prepared to meet these demands, is a fundamental part of a successful, scalable, and sustainable long-term implementation,” reinforces Sandra Vaz, Country Manager Brazil at Red Hat.
Underscoring the operating system's central role in profitable AI, Red Hat expanded its collaboration with NVIDIA to align enterprise open source technologies with the evolution of AI at scale. Red Hat Enterprise Linux for NVIDIA, an edition optimized for the NVIDIA Rubin platform, is designed to power future production operations, creating environments prepared for different models, architectures, and clouds, without compromising governance or efficiency.
From the basics to AI at scale
Red Hat Enterprise Linux also integrates Red Hat AI, a platform designed to accelerate the development, deployment, and operation of artificial intelligence solutions in hybrid cloud environments. Based on open source technologies, the solution allows companies to advance from initial AI experiments to complete enterprise architectures, with the flexibility to operate different models, on different hardware accelerators, and in different environments.
The value proposition of Red Hat AI includes fast and efficient inference, better utilization of computing resources, a simplified experience for connecting models and data, and accelerated development and delivery of agentic AI-based applications. At the same time, it offers operational consistency for scaling AI workloads across the hybrid cloud, with control, security, and predictability.
In this new phase, it's clear that enterprise AI fails not due to a lack of models, but to a lack of fundamentals. And investing in the technological ecosystem is no longer a strategic choice, but a minimum requirement for competitiveness.