How continuous monitoring can ensure responsible innovation and reduce risks in the use of artificial intelligence.

By Prof. Dr. André Filipe de Moraes Batista

The advancement of artificial intelligence (AI) presents governments and companies with a recurring dilemma in a new context: balancing the drive to innovate with the need for control. Excessive regulation can inhibit discoveries and discourage investment, while the absence of rules exposes society to diffuse and difficult-to-repair risks. Against this backdrop, algorithmic surveillance emerges as a viable alternative.

The inspiration for this model comes from the pharmaceutical sector. No drug is considered completely safe at the time of its launch. Continuous and rigorous monitoring allows for the identification of previously unknown side effects and enables necessary adjustments. This practice, called pharmacovigilance, has saved millions of lives over the decades. Considering that AI currently permeates medical diagnoses, judicial decisions, and public policies, an equivalent becomes necessary: algorithmic surveillance. 

Instead of trying to anticipate all risks before implementation, algorithmic surveillance proposes a regulatory learning model. This model allows for technological experimentation, provided it is accompanied by systematic monitoring, up-to-date documentation, and clear incident response protocols. In this way, security becomes a continuous process, not an obstacle to innovation. 
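As a minimal sketch of what such systematic monitoring might look like in practice, a deployed model's performance could be tracked against its launch baseline, with an incident opened whenever it degrades beyond a tolerance. The class name, metric, threshold, and suggested action below are illustrative assumptions, not part of any standard or of the author's proposal:

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmicMonitor:
    """Hypothetical post-deployment monitor: compares observed
    performance against the accuracy measured at launch."""
    baseline_accuracy: float         # accuracy measured at launch
    tolerance: float = 0.05          # allowed degradation before alerting
    incidents: list = field(default_factory=list)

    def record(self, period: str, observed_accuracy: float) -> bool:
        """Log one monitoring period; return True if an incident was opened."""
        degraded = observed_accuracy < self.baseline_accuracy - self.tolerance
        if degraded:
            self.incidents.append({
                "period": period,
                "accuracy": observed_accuracy,
                "action": "review and adjust model",  # incident response step
            })
        return degraded

monitor = AlgorithmicMonitor(baseline_accuracy=0.92)
monitor.record("2024-Q1", 0.91)   # within tolerance: no incident
monitor.record("2024-Q2", 0.84)   # degraded: incident opened
```

The point of the sketch is that safety checks run continuously after launch, rather than being a one-time approval gate before it.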

This approach has the potential to reconcile legal and practical aspects. Instead of static legislation based on hypothetical scenarios, it proposes the adoption of dynamic monitoring structures capable of updating standards as concrete evidence emerges. Thus, regulation evolves in parallel with technological development, reducing the gap between legal predictions and effective practices. 

Hence the need for the concept of an "AI leaflet." Just as a drug leaflet informs about risks, indications, and limitations, AI systems could provide transparency regarding training data, performance, biases, and update mechanisms. Such a measure does not impede innovation; on the contrary, it fosters institutional trust, an essential element for the sustainable development of the technology.
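To make the idea concrete, an "AI leaflet" could be published as a structured, machine-readable document alongside the model it describes. The field names and example values below are illustrative assumptions sketched for this article, not a proposed standard:

```python
import json

# Hypothetical "AI leaflet": a structured summary published with a model,
# analogous to a drug leaflet's indications, risks, and limitations.
ai_leaflet = {
    "system": "credit-scoring-model",          # illustrative system name
    "intended_use": "support (not replace) human credit decisions",
    "training_data": "anonymized loan records, 2015-2023",
    "performance": {"accuracy": 0.89, "evaluated_on": "held-out 2023 data"},
    "known_biases": ["lower accuracy for applicants with short credit history"],
    "limitations": ["not validated for business loans"],
    "update_policy": "retrained quarterly; changes logged publicly",
}

# Serializing it makes the leaflet easy to publish and audit.
print(json.dumps(ai_leaflet, indent=2))
```

A fixed, auditable schema like this is what would let regulators and users compare systems the way pharmacists compare drug leaflets.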

In essence, algorithmic surveillance proposes to overcome the dichotomy between innovation and control. The goal is to promote responsible innovation, based on learning from real-world use and course correction whenever necessary. It is a proposal for regulatory maturity that avoids both censorship and the absence of supervision, prioritizing intelligent monitoring. 

Considering AI as the new engine of the economy, it is essential to ensure that control mechanisms exist. Innovation will only be legitimate if accompanied by responsibility, and algorithmic surveillance is the instrument capable of enabling such a balance. 

*Prof. Dr. André Filipe de Moraes Batista is a researcher at ABES Think Tank and Professor of Artificial Intelligence at Insper. 

Notice: The opinion presented in this article is the responsibility of its author and not of ABES - Brazilian Association of Software Companies.

Article originally published on the IT Forum website: https://itforum.com.br/colunas/ia-inovacao-vigilancia-algoritmica/