
*By Américo Alonso

The increasingly frequent cases of fraud involving Artificial Intelligence (AI) — especially GenAI — raise concerns about the security and reliability of this technology in companies and public institutions. Despite the uncertainties, digital transformation is irreversible. The challenge for leaders is to balance innovation and protection, adopting AI in a safe and strategic way.

Executives worldwide recognize the impact of AI on their businesses. A Cisco study of 2,503 CEOs across five continents revealed that 58% believe a lack of knowledge on the subject can compromise organizational growth, while 61% intend to invest in training. The message is clear: it's not enough to adopt the technology; it's necessary to understand it and apply it ethically, safely, and in line with the company's purpose.

In this context, GenAI Security emerges as a critical front. Protecting the chains of generative models, strengthening the governance of data used in training, and mitigating hallucinations or inappropriate responses become essential to prevent AI from becoming a new avenue of attack. Security and privacy must be inherent in the design of systems (security-by-design / privacy-by-design), anchored in responsible-use policies, privacy controls, and continuous auditing.

According to Fortinet's Threat Landscape 2025, cybercriminals performed 36,000 malicious scans per second in 2024, an annual growth of 16.74%, driven by automation and AI. This scenario is likely to become even more complex with the arrival of the quantum era, which threatens current cryptographic methods. Global reference bodies (NIST, CSA, ANSI, BSI, NSA) are already warning about the so-called Y2Q (Years to Quantum) around 2030. The concept of Quantum-Safe — solutions resistant to quantum computing, initially supported by Post-Quantum Cryptography (PQC) — becomes a priority for protecting critical information and digital infrastructures.

Cybersecurity is also entering a new cycle. Traditional detection and response, based on signatures and fixed rules, is giving way to dynamic systems supported by Agentic AI, capable of acting autonomously, contextually, and collaboratively. It is prescriptive security succeeding predictive security, just as predictive security replaced reactive security, combining intelligent automation and threat modeling (including quantum scenarios) to anticipate risks before they become incidents.

But it's not just about tools. Organizations need to invest in culture and governance. Digital security must be treated as a strategic pillar, not just a technical one. The responsible adoption of AI requires continuous education, ethics, transparency, and leadership capable of connecting people, processes, and technology.

The changes brought about by AI and quantum computing are both technological and cultural. They require leaders to rethink structures, roles, and responsibilities to ensure that innovation goes hand in hand with protection. The future of AI will be built not only on the power of algorithms, but on the maturity with which companies are able to govern them.

In short, the AI paradox remains: the more powerful it becomes, the more responsibility it demands. It is up to leaders to transform this challenge into a competitive advantage, guiding their organizations along a path where innovation and security coexist—even in quantum times.

*Américo Alonso is Director of Digital Security at Atos South America.

Notice: The opinion presented in this article is the responsibility of its author and not of ABES - Brazilian Association of Software Companies
