By Edsel Simas

The advancement of generative AI in corporate environments is pressuring historically operational areas to revise their operating models. The service desk is one of the most visible examples. The adoption of LLMs (Large Language Models) in technical support flows has altered not only the user journey but also the very logic of incident triage, resolution, and escalation. The central point is not whether AI replaces the analyst, but rather how it redesigns the operational architecture and imposes new demands on the support team.

Unlike previous waves of automation, which focused on static scripts and linear interaction bots, what we are seeing now is the incorporation of language engines capable of interpreting, inferring, and suggesting actions from ambiguous or poorly formulated descriptions. This radically alters the operational design of service desks, repositioning the human analyst as a critical link between accumulated technical knowledge and the probabilistic inference systems that have come to mediate a significant part of the service.

On a tactical level, the introduction of generative AI reconfigures the lifecycle of a support ticket. Input, previously limited to predefined forms or human agents, can now be done via natural language, interpreted in real time by a language model trained on historical tickets, knowledge bases, and operational logs.

From this point, the system classifies the call, suggests a priority based on context (frequency, impact, criticality, history of related incidents), and can even propose solutions drawn from previous responses, which may be automatically drafted, tested, and sent to the end user, depending on the degree of autonomy defined by internal policy. The analyst does not disappear from this flow but moves to another position: as a validator, an auditor, and, most importantly, a modeler of the very knowledge that feeds the AI.

A new field of action for human support

This functional shift requires a structural requalification of the analyst's role. The previous logic, centered on direct execution and tacit operational knowledge, gives way to a demand for interpretive and structuring skills. The analyst needs to understand how AI makes decisions, what biases can compromise the accuracy of suggestions, how synthetic, noisy, or incomplete input data affects the outcome, and how to control the degradation of model performance over time.

This new profile is not a luxury reserved for cutting-edge companies, but a direct response to pressures for reduced SLAs and increased first-line resolution rates. In many operations, low-complexity tickets are being entirely absorbed by AI systems. This tends to artificially raise the average complexity of the remaining calls, which, if not properly interpreted by leadership, can give the false impression of a drop in human performance, when in fact it is a reflection of automated filtering.
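
A small worked example makes the filtering effect visible. The numbers below are assumed for illustration: if AI absorbs all of the simple tickets, the average handling time of the human queue jumps even though analyst performance is unchanged.

```python
# Illustrative figures (assumed, not from the article): 70 simple tickets
# at 10 minutes each and 30 complex tickets at 40 minutes each.
simple_n, simple_min = 70, 10
complex_n, complex_min = 30, 40

# Average handling time before automation, across the whole queue.
before = (simple_n * simple_min + complex_n * complex_min) / (simple_n + complex_n)

# After AI absorbs all simple tickets, only complex ones reach humans.
after = complex_min

print(before)  # 19.0 minutes per ticket before automation
print(after)   # 40 minutes per ticket in the remaining human queue
```

Per-ticket human performance is identical in both scenarios; only the mix changed. This is the statistical artifact leadership must account for when reading post-automation metrics.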

This is where the analyst needs to be more than just a ticket resolver: they become an interpreter of the support structure itself, working on the continuous reconfiguration of workflows, the calibration of models, and the active management of the knowledge base.

Adoption curve in Brazil

In Brazil, companies are at different stages of adopting these technologies. While large corporate operations are already testing AI co-pilots that assist analysts in real time, suggesting scripts, commands, or diagnoses based on patterns extracted from previous tickets, most companies operate in a hybrid model where automated workflows coexist with manual containment zones.

The central issue, however, is less technological than organizational: AI is already present, but the framework for governance, data curation, and technical training is still in its infancy. This creates a structural tension: the promise of efficiency gains cannot be sustained if the human team does not know how to operate, validate, and evolve the system.

Another relevant point is how AI responds to demands. Generative models operate through statistical inference: they suggest the most probable answer, not necessarily the most correct one. In technical support environments, where the margin of error can compromise information security, the stability of production environments, or compliance with regulatory requirements, this makes qualified human filters indispensable.

Analytical supervision becomes mandatory: the analyst needs to be able to identify model hallucinations, out-of-scope responses, or erroneous inferences generated by poorly trained databases.
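
This kind of supervision can be partly systematized. The sketch below assumes the model returns a confidence score and the knowledge-base articles it cites; the threshold, article IDs, and topic list are hypothetical policy values, not from any specific tool.

```python
# Hypothetical knowledge base and policy threshold (assumed values).
KNOWN_KB_ARTICLES = {"KB-101", "KB-204"}
CONFIDENCE_THRESHOLD = 0.85
IN_SCOPE_TOPICS = {"access", "network", "hardware"}

def requires_analyst_review(answer: dict) -> bool:
    # Low statistical confidence: the "most probable" answer may be wrong.
    if answer["confidence"] < CONFIDENCE_THRESHOLD:
        return True
    # Citations to articles that do not exist are a classic
    # hallucination signal.
    if not set(answer["cited_articles"]) <= KNOWN_KB_ARTICLES:
        return True
    # Out-of-scope topics always go to a human analyst.
    if answer["topic"] not in IN_SCOPE_TOPICS:
        return True
    return False

print(requires_analyst_review(
    {"confidence": 0.9, "cited_articles": ["KB-101"], "topic": "access"}
))
```

Checks like these do not replace the analyst's judgment; they decide which answers demand it, which is precisely the supervisory layer the article describes.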

This supervision, moreover, is part of a strategic operational layer that tends to expand. As AI takes over tasks with low cognitive density, the support analyst is pulled into more complex decision-making layers, such as the integration between observability tools and automatic response systems, the configuration of predictive alerts based on continuous learning, or the definition of auto-scaling parameters.

Technical support then becomes a critical node in the operational resilience chain of companies. And the analyst who adapts to this new role ceases to be a task executor and becomes a curator of intelligence.

In short, generative AI does not represent a threat to human work in technical support, but it does impose a structural reordering. The operational model is reconfigured, and with it the demand for new skills, new languages, and new mental models. The analyst who understands this transition and takes an active role in the architecture and curation of intelligent systems will not merely have a guaranteed place in the new ecosystem, but a leading one. Ignoring this movement has a price: technical irrelevance in a sector where automation has ceased to be a promise and has become infrastructure.

Edsel Simas is the CTO of Setrion.

Notice: The opinion presented in this article is the responsibility of its author and not of ABES - Brazilian Association of Software Companies