By Maria Luiza Reis, DSC
A recent tragedy reported in the newspapers – the death of a child due to a medical prescription error – goes beyond an isolated clinical failure. The doctor acknowledged the mistake, and the nursing technician admitted to finding the dosage strange, but administered it anyway because she was following orders. This case, beyond the necessary investigations, serves as a stark warning about the culture of teamwork in modern society.
The fundamental premise of any collective system is that people fail. The antidote to this inherent fallibility is not individual infallibility, but collaborative vigilance.
We work in groups precisely so that one eye can critique another, so that a different perspective can intercept an error before it becomes a tragedy. The crucial question is: do we still know how to work this way?
There is a growing sense that individualism is eroding this principle. Often, professionals reduce their scope of responsibility to a closed list of tasks, as if their ethical and intellectual commitment ended at the edges of their job description. "It's not my job" or "I already have enough problems" become mantras that isolate and weaken the entire system.
This behavior comes at a very high cost. Teams or institutions that disdain questions and constructive criticism create a culture of silence. In this environment, avoiding embarrassment or conflict becomes more important than avoiding disaster. The professional with a critical perspective, essential for the group's resilience, is stifled or marginalized.
Herein lies a dangerous paradox: by behaving like robots – executing orders without question, focusing only on predefined processes – we ourselves pave the way for our replacement. Automation and AI advance precisely in repetitive, rule-based tasks with a limited scope.
But the tragedy mentioned exposes precisely what machines cannot offer: ethical responsibility, empathy, and contextual judgment. A robot can accumulate all the medical knowledge in the world and be infinitely fast, but:
It cannot take responsibility for a mistake; it can only flag that one may have occurred;
It does not feel the empathic unease that leads a technician to question an abnormally high dose for a child;
It does not seek solutions to problems outside its initial scope, because it has no "moral scope," only a technical one.
Worse still, we may be creating an absurd precedent. A robot can make as many mistakes as a human, or more, but the blame, by definition, will always fall on a human being – the programmer, the supervisor, the institution. This transfer of responsibility does not make the system safer; it dilutes accountability and removes from leadership positions precisely those who would have the courage to answer for the whole.
The true antidote, therefore, is not more blind technology, but the courageous reaffirmation of what makes us human at work: the courage to question, the commitment to the collective result (and not just the individual task), and the empathy that makes us care about the consequences of our actions – and the actions of our colleagues.
The dangerous maxim is not "robots will replace people," but "people who behave like robots have already become replaceable." Our future value will not lie in carrying out orders, but in having the discernment to question them when necessary, in the name of a greater good. It is this humanity, unique and complex, that still makes us irreplaceable and capable of preventing the most avoidable tragedies.
*Maria Luiza Reis, DSC, CEO Maps245, VP Confederação Assespro, VP Assespro RJ, Advisor ABES Software and Director of ABEINFO.
Notice: The opinion presented in this article is the responsibility of its author and not of ABES - Brazilian Association of Software Companies
Article originally published on the IT Portal website: https://itportal.com.br/por-que-os-humanos-sao-insubstituiveis/