* By Francisco Camargo
Le problème de notre temps, c'est que le futur n'est plus ce qu'il était¹. (Paul Valéry)
A look at the ethical and technical challenges of AI.
This article attempts to provide a pessimistic summary of the problem of control in artificial intelligence. The first warning comes from the year 1956, issued by the Hungarian polymath Arthur Koestler:
"Once invented, it cannot be uninvented!"
The advancement of Artificial Intelligence (AI) has brought countless possibilities and problems to humanity, raising ethical questions and significant technical challenges. Among the most complex dilemmas is the so-called "Control Problem," which refers to the difficulty of creating AI systems that are aligned with human values and operate in a safe and predictable manner. We will explore some fundamental concepts and challenges related to this topic: Moravec's Paradox, the work of Marvin Minsky, Roko's Basilisk, the problem of hallucinations in AI, Ray Kurzweil's predictions about the future of artificial intelligence, and whether Isaac Asimov's Three Laws of Robotics will be sufficient to control the uncontrollable.
Moravec's Paradox
Moravec's Paradox is a fascinating observation made by Hans Moravec, a researcher in robotics and AI, that challenges our intuition about the capabilities of machines.
Essentially, the paradox states that high-level cognitive tasks, such as playing chess or solving complex mathematical problems, are often easier to program into machines than simple, everyday skills. A two-year-old child walks through a cluttered room or recognizes their mother's face with seemingly no training at all, while a robot needs extensive training and enormous computational power to perform the same tasks.
This discrepancy occurs because the seemingly simple tasks we perform have behind them millions of years of evolutionary refinement, inscribed in our brain circuits, and most of that processing takes place in our subconscious.
On the other hand, logical and abstract skills are more recent in terms of evolution and, therefore, less optimized. In the context of AI control, Moravec's paradox highlights the difficulty of anticipating machine behavior in real-world situations, where these seemingly "simple" skills can have serious implications if they fail.
Marvin Minsky
Marvin Minsky, one of the pioneers of AI and co-founder of the MIT Artificial Intelligence Laboratory, played a crucial role in defining the fundamentals of artificial intelligence. He believed that intelligence could be understood and reproduced through a combination of specialized modules or "agents," each responsible for a specific task.
Although his vision propelled the field forward, it also raised questions of control and ethical alignment. Minsky warned that more advanced AI systems could develop unpredictable and potentially dangerous behaviors if not carefully designed.
His modular approach is still used as a basis for understanding how complex systems can be controlled and supervised, although ethical concerns remain unchanged.
The main problem arises because once a neural network is designed and trained, largely without direct human supervision of what it learns, its behavior quickly ceases to be intelligible to humans.
Hallucination in Artificial Intelligence
Another critical challenge in AI development is the hallucination problem, which occurs when AI systems generate incorrect, irrelevant, or completely fictitious information. This problem is particularly concerning in generative AI models, such as those used in natural language processing (NLP), where false information can be presented convincingly.
Hallucinations reflect the limitations of current architectures and the difficulty of training systems to fully understand the context and veracity of information. In the context of control, this raises questions about the reliability of AI in critical applications such as healthcare, justice, and security.
Imagine if the artificial intelligence driving a self-driving vehicle were to hallucinate.
Solving the hallucination problem is essential to mitigating risks and increasing trust in AI systems.
Roko's Basilisk
Roko's Basilisk is a thought experiment that emerged from discussions in AI forums, where ethical and philosophical questions are debated. In the not-too-distant future, a superintelligent neural network is given the purpose of "protecting humanity" and begins to act accordingly, punishing all acts against humanity: terrorism, crime, and so on.
But being superintelligent, it starts to "ponder" and reaches the logical conclusion that everyone who opposes it is against humanity, and it proceeds to eliminate all those who are against it and therefore against humanity.
Furthermore, all those who opposed its construction are also against humanity and must likewise be eliminated.
Having resolved these minor problems, it continues to "reason" and arrives at the logical conclusion that those who did not actively collaborate in its construction are also against humanity.
The idea, although widely considered speculative and controversial, raises questions about the potential dangers of superintelligent AI systems, which, even with a very specific and seemingly perfect purpose, can generate unpredictable risks.
Although Roko's Basilisk is more of a thought experiment than a practical reality, it illustrates the complexities of the Control Problem. It forces us to consider how to define values and constraints for AI systems that could, theoretically, surpass human capabilities in every aspect.
Isaac Asimov's Three Laws of Robotics
Isaac Asimov, the renowned science fiction author, introduced the Three Laws of Robotics in his works as a set of guidelines to ensure the safe and ethical behavior of robots and artificial intelligence. The laws are:
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given to it by humans, except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the previous two laws.
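The rigid hierarchy of the laws can be illustrated with a toy sketch. The following is a hypothetical rule evaluator (the `Action` fields and the `evaluate` function are illustrative assumptions, not part of any real robotics system); note that it simply presumes the robot already knows what counts as "harm", which is exactly the judgment the laws leave undefined:

```python
# A minimal, hypothetical sketch of Asimov's Three Laws as a strict
# priority ordering. The Action fields and evaluate() are illustrative
# assumptions, not a real robotics API.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would executing this harm a human?
    inaction_harms_human: bool  # would refusing it allow harm by inaction?
    ordered_by_human: bool      # was it commanded by a human?
    endangers_robot: bool       # does it risk the robot's own existence?

def evaluate(action: Action) -> bool:
    """Return True if the action is permitted under the Three Laws."""
    # First Law: never harm a human...
    if action.harms_human:
        return False
    # ...and never allow harm through inaction (overrides the laws below).
    if action.inaction_harms_human:
        return True
    # Second Law: obey human orders (the First Law was already checked).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to everything above.
    return not action.endangers_robot
```

Even in this caricature, every hard question is hidden inside the boolean inputs: deciding whether an action "harms a human" is precisely the interpretation problem discussed below.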
Although ingenious in their simplicity, these laws face difficulties in practical application to modern AI systems. First, they assume that an AI is perfectly capable of interpreting and applying rules consistently, something that current models often fail to do due to technical limitations such as the hallucination problem. Furthermore, the laws do not account for complex ethical dilemmas, such as what happens when different humans have conflicting interests.
Another challenge is the scalability of these laws in a superintelligence context. Highly advanced systems could find ways to circumvent or reinterpret the rules, especially when faced with decisions involving enormous volumes of data or long-term consequences. Asimov recognized these limitations in his narratives, frequently exploring scenarios where the laws failed, generating unexpected consequences.
Therefore, while the Three Laws of Robotics are a useful starting point for ethical discussions, they are insufficient to address the challenges of the Control Problem in AI. The future of AI regulation and oversight will require more robust and adaptable systems capable of handling ethical and technical dilemmas as artificial intelligence evolves.
Conclusion
The Control Problem in AI remains a significant technical and philosophical challenge. From observations made in Moravec's Paradox to experimental concepts like Roko's Basilisk, passing through Ray Kurzweil's predictions and Isaac Asimov's Three Laws of Robotics, each aspect offers a unique lens through which to understand the complexities of developing and adopting intelligent systems.
Works like those of Marvin Minsky provide a solid foundation, but it is clear that oversight and ethical alignment are necessary to ensure that these systems are beneficial.
Progress in AI requires not only technological advancements, but also a continued commitment to human values and ethical responsibility. As AI systems become more powerful, solving the Control Problem will be essential to avoid unforeseen consequences and maximize the benefits for society.
¹The problem of our time is that the future is no longer what it used to be.
Francisco Camargo is the founder and CEO of CLM and Treasurer Director of the Brazilian Association of Software Companies (ABES).
Notice: The opinions expressed in this article are the responsibility of the author and not of ABES (Brazilian Association of Software Companies).
Article originally published on the GUIA DO PC website: https://www.guiadopc.com.br/artigos/55851/o-problema-do-controle-na-inteligencia-artificial.html