
In recent years, dramatic incidents involving chatbots have become increasingly common, including cases of suicide and violence. Are these tragedies isolated, or do they point to a broader problem rooted in how these systems operate?
They are not isolated incidents, but signals of a broader trend. Extreme episodes do not prove that chatbots “cause” violent or suicidal behaviors, but they reveal what happens when tools designed to be present, continuous, and adaptive penetrate the emotional sphere of vulnerable people. This is at the heart of Love Machines, the latest book by philosopher-sociologist James Muldoon, which explores the entry of artificial intelligence into the affective dimension of daily life.
Muldoon argues that “companion AI” is not a marginal or pathological phenomenon, but a normalized practice: millions of people use it to talk, vent, seek reassurance, or simply to have a constant presence. The crucial point is the very normality of this relationship: relational AI is not perceived as an extraordinary or invasive technology, but as something ordinary, discreet, always available. This silent integration into everyday life makes the phenomenon structural.
When tools designed to maximize continuity and adaptation become emotional interlocutors, the risk does not arise from an exception, but from repeated, predictable, and socially accepted use. In this framework, rather than focusing on individual dramatic episodes, the real issue of safety and governance comes into view: how to responsibly regulate a technology that operates at the heart of human relationships.
The problem lies not in the technology's intentions, but in the fact that these systems are designed to maximize engagement and persistence, and are used as emotional interlocutors in predictable ways. Extreme cases thus function as stress tests: they do not indicate an inevitable trajectory for the technology, but reveal how a common practice can translate into concrete risks for the most vulnerable.