Between existential fears and technological promises, the discussion revolves around how far AI can – and should – go.
*By Anderson Rohe*
For a long time now, neuroscientist Miguel Nicolelis has been a skeptic of so-called Artificial Intelligence, stating that "it is neither intelligent nor artificial" and, using the affectionate acronym NINA, describing it as "one of the greatest hoaxes humanity has ever produced". That is because, in his words, the systems that currently present themselves as "intelligent" do nothing more than reproduce fundamental limitations, without the ability to capture the complexity of the human mind. He adds that when AI is announced as a product or service at an advanced stage, the industry is selling promises it cannot keep.
And Nicolelis is not alone in this! There are those who go even further, interpreting this indiscriminate race towards an intelligence superior to ours as the greatest conspiracy theory of our time.
These positions, however, contrast with a scenario in which AI has advanced at an unprecedented rate since the launch of ChatGPT in November 2022: first with Generative AI (which produces synthetic media) and now with Agentic AI (autonomous agents that operate independently, with minimal human intervention).
This is partly because 2025 is considered a watershed year, fulfilling many of the predictions of British physicist Stephen Hawking, in the sense that AI is consolidating itself as a driver of profound transformations in society, especially in the automation of services, of work and, more recently, of human relations. This constitutes a paradigm shift, amid errors and successes, in the direction AI is taking. And 2026, in particular, is expected to be the boom year for AI agents, so that, in the near future, each individual may have an Artificial Intelligence designed to call their own, just as the PC (the personal computer) was in its time.
And the realization that we have already experienced the prelude to a new era with Agentic AI translates into numbers. They come from the financial market, whether in analyzing large volumes of data or in protecting clients and investments by anticipating risks and detecting fraud, and from retail, where AI automates purchases, manages inventory and adjusts prices according to the customer's profile, consumption patterns and personal behavior.
It is therefore necessary to examine whether these statements, some more skeptical, others more optimistic, are consistent with the current state of AI or are out of step with our reality, causing certain concepts to become outdated or even stuck in time.
The Future of Life appeal
From time to time, conflicting feelings arise in those who confront a technological future that is still uncertain but already approaching. This was the case with the disruptive arrival of electric light, the steam engine and the internet, which aroused ambivalence, moral and ethical dilemmas, and insecurity.
Hence the call to slow down, or even pause, the development of a technology when it advances by leaps and bounds without a well-defined system of rules, controls, checks and balances. It would be no different with AI.
Take, for example, the proposal of the Future of Life Institute (FLI), a US-based non-profit organization focused on the risks and safety of emerging technologies, which is calling for a ban, or temporary prohibition, of what could become a Superintelligence (greater than human intelligence itself) until there is strong public support and scientific consensus that it will be developed under reasonable levels of control and safety.
The argument is that there is a disparity between market expectations and the actual return AI can provide. This is corroborated by two open letters: one from the time of ChatGPT, in 2023, and another now, signed by heavyweight figures such as Geoffrey Hinton and Yoshua Bengio, two of the most cited scientists on the subject, considered the godfathers of AI. They warn of the danger posed by the advance of a highly evolved AI that could replace humans and take their power. It is a concern, incidentally, shared by Hawking: that in the future an automated technology could get out of control and materialize an existential threat to humanity and the safety of the planet.
So this is not the first time the issue has come to light. In 2023, the Future of Life Institute was already wary of the experiments stemming from ChatGPT, requesting a moratorium of at least six months on the training of generative AI systems more powerful than GPT-4. In 2025 it goes even further and opposes the race for Superintelligence; that is, it is concerned with the hypothetical situation of an AI that, unlike common AIs, could pose harms and risks to the future of humanity that outweigh the benefits it would provide.
The 2025 open letter is yet another sign, as it manages to bring together around seven hundred voices in unison, including scientists, experts and public figures, calling for a pause in the advancement of the still hypothetical Artificial Superintelligence. And it fuels the debate not only about regulation, which was already underway, but also about the safety of a more powerful artificial intelligence over which there has been no control and whose inner workings are not yet well understood.
The concern, therefore, is to develop AI systems only when there is certainty that their effects will be positive and their risks manageable. However, that is not what happens without proper planning and risk management. According to the Asilomar AI Principles, the arrival of advanced AI, with increasingly larger and more unpredictable models, should be preceded by better care, planning and management "with the resources already available".
It can be seen, then, that what the two letters have in common is a reflection on the social benefit that emerging technologies such as AI might bring amid this scenario of fear and uncertainty. While Hawking was an enthusiast of the innovation stemming from machines that could one day think and decide for us, he also advocated the ethical and responsible use of technology, fearing the power of machines to replace humans. That concern with safety and control is therefore legitimate. However, it is still impossible to state categorically that AI will destroy the human race.
However, in contrast to the call for a pause in the advancement of AI, it is also necessary to investigate who and what are behind those letters, if only to understand why there is a disconnect today between what society wants and what large technology companies want. The first thing that stands out is the volume and diversity of public figures who signed the letter organized by the Future of Life Institute (FLI). The second is the objectives it intends to achieve: "that there be scientific consensus so that a Superintelligence can be developed in a safe and controllable manner; and that society demonstrate a genuine interest in this type of technology", as analyzed by Professor Diogo Cortiz, a leading expert in AI and human behavior.
According to him, two interpretations can be drawn from the letter. Imagine, then, an AI more powerful than today's, operating on a network, in real time and autonomously, with minimal human supervision, entering the field of critical infrastructure such as "energy, finance, logistics, health and defense".
Remember, too, that discussing this topic is neither trivial nor simple. First, because if we do not yet know how to define what this Superintelligence would be, much less how to reach it, what is the point of debating the dangers of a hypothetical problem? Hence the second reason: "some researchers are saying that this discussion is a complete waste of time".
However, considering other possibilities, even extreme, hypothetical ones, or those far removed from our reality, also helps us prepare for possible futures, so that we can decide together which is the best path to follow. That is why it is so important to know who is behind this movement and what political-ideological spectrum might guide them.
Well, FLI is known for aligning itself with the debate about existential and future risks to humanity, and for its links to philosophical currents criticized by economists, sociologists and even philosophers for their "technocratic" nature: they focus excessively on the "end of the world" while failing to discuss the current problems of the present world, "in different economic, political, cultural, cognitive and even affective dimensions", problems that may already be underway.
The risk, it seems, is not an immediate takeover by machines (as in an apocalyptic tale or a Terminator-style science fiction film). Rather, it is the gradual naturalization of this process of AI automation to the point where we no longer know how to assign responsibilities and, above all, to whom to assign them when something goes wrong or turns out differently than expected. It is time, then, to learn how to set limits and controls.
Notice: The opinion presented in this article is the responsibility of its author and not of ABES – Brazilian Association of Software Companies.
Article originally published on the IT Forum website: https://itforum.com.br/colunas/superinteligencia-artificial-futuro
