When automated decisions shape lives, true innovation lies in ensuring that ethics are not left out of the equation.

By Rodolfo Fücher

There are people being turned away by a computer. People invisible to a spreadsheet. People who don't know they were passed over because the system said no, without saying why.

Whether it's accessing credit, waiting in line for a transplant, searching for a job, or even driving a self-driving car, there's an invisible intelligence making the decisions. Faster. More efficient. Supposedly fairer. But is it really? 

Technology has become the new oracle. And no one questions the algorithm. The decision arrives already stamped "objective." But it's worth remembering John F. Kennedy's warning from 1962:
"Technology has no conscience of its own. It can be used for good or for evil. It depends on the human being."

The problem is that we are outsourcing this choice. 

Who decides when no one decides? 

Norbert Wiener, the father of cybernetics, warned long ago: the more we automate decision-making, the more we abdicate responsibility. And Hannah Arendt, writing about the banality of evil, reminded us that the danger lies in omission: following orders without thinking about the consequences. Today, the order comes from the code. And the evil is digital, clean, silent.

Governance for what we (still) don't understand. 

We are used to governing what is visible: assets, processes, known risks. But AI requires dealing with the invisible — biased data, self-learning models, secondary impacts that only become apparent later. 

It's time to broaden the scope of councils. It's not enough to know that an algorithm is operating. We need to ask: 

  • Do we understand what the algorithms are learning? 
  • Is there qualified human supervision with real autonomy to intervene? 
  • Are we promoting diversity in data, teams, and perspectives? 
  • Is there a channel for users to challenge automated decisions? 
  • Who notices when the system discriminates? 

AI governance is no longer solely the responsibility of the technology department. It belongs at the heart of strategy. Business boards and leaders need to take the lead, ensuring that the organization's values are reflected, and not distorted, by automated decisions.

What we've learned from "security by design" needs to evolve into "ethics by design": ethics incorporated from the very conception of the system. Not as an afterthought, but as a structural premise. As culture.

The algorithm is your mirror — and also your blueprint. 

Want to know what your company values? Look at the algorithms it uses. They reveal more than dashboards; they reveal choices, priorities, and biases. 

AI can be a lever. But it can also be a trap. It will depend on who is guiding it. 

When the algorithm decides for us, the least we can expect is that someone is looking out for ethics. If not you, who will it be? 

The good news: it's possible to do things differently. And it is precisely at this point that companies and boards have an irreplaceable role. Governing algorithms with purpose, transparency, and accountability is the new test of corporate integrity. It's not just about innovation. It's about reputation, trust, and the license to operate in a world that is increasingly sensitive to how we make decisions, even when a machine is the one making the decision. 

Because in the end, the algorithm is just a tool. What remains human, and irreplaceable, is the courage to ask what is right. And to act upon that question. 

Rodolfo Fücher is vice-president of the Board of Directors of the Brazilian Association of Software Companies (ABES).

Notice: the opinion presented in this article is the responsibility of its author and not of ABES - Brazilian Association of Software Companies.

Article originally published on the IT Forum website: https://itforum.com.br/colunas/etica-algoritmo-decide/
