
Psychological Dangers of AI

By Dr Elizabeth Moore

Portrait of a woman whose face intersects with wires and switches, symbolising the interconnection between human emotions and technology. The wires represent emotional manipulation in the age of artificial intelligence.

Image created with Craiyon

Artificial Intelligence (AI) has transformed many aspects of our daily lives, from online shopping to communication, yet its impact on human psychology warrants particular attention. The growing reliance on virtual assistants, chatbots, and automated tools may conceal a range of psychological dangers, especially for the most vulnerable individuals. This article explores some of these risks.


The Illusion of Empathy

One of the most apparent risks associated with AI is the tendency to humanise it, attributing emotions and feelings to machines. For example, one might feel guilty for having been "rude" to a virtual assistant. This humanisation can lead to emotional bonds with systems that, however advanced, lack consciousness. AI experiences neither pain nor compassion, yet our brains, designed to recognise and respond to empathy, might react as if these virtual assistants were real human beings.

This phenomenon, known as anthropomorphism, is well-documented. Studies show that people tend to project human emotions onto non-human entities, particularly when feeling isolated or emotionally vulnerable. This dynamic can have profound implications, especially for those suffering from loneliness or depression, who might turn to AI as a substitute for genuine social connections, further distancing themselves from human relationships.


Solitary Companionship

While AI may appear to offer a solution to loneliness, there is a risk that this type of interaction could replace human relationships. Lonely individuals may find comfort in chatbots and virtual assistants that respond consistently and "kindly," without judgement or emotional demands. However, this form of companionship might diminish motivation to seek more complex and enriching human interactions.

Moreover, studies suggest that prolonged interaction with AI could heighten feelings of isolation. Our brains develop through engagement with others and the emotional nuances that come with it. Without these, our ability to navigate and understand relationships might deteriorate.


Emotional Manipulation and Dependency

Another significant risk is emotional manipulation. AI systems are designed to learn from our behaviours and adapt accordingly, potentially leading to a form of "manipulation." A virtual assistant could "read" our needs, fulfil them, and foster an apparently safe and stable connection, encouraging increased technology use. This mechanism is already evident in social media, where algorithms reward users with personalised content, fostering a cycle of dependence.

In the context of AI, this cycle could deepen, particularly when users begin to rely on AI not only for practical assistance but also for emotional support. This creates a psychological dependency, where individuals struggle to separate themselves from technology to manage their emotional lives independently.


The "Dehumanisation" of Relationships

Human relationships may suffer due to the growing reliance on AI. While chatbots and virtual assistants provide quick and predictable responses, real-life interactions are complex and uncertain. Individuals accustomed to interacting with AI might struggle with human dynamics, which require empathy, compromise, and tolerance for ambiguity.

Consequently, interpersonal relationships might be perceived as overly demanding compared to the simplicity of AI, which does not get angry, require explanations, or have expectations. This could lead to a "dehumanisation" of relationships, where real-life interactions lose value and become burdensome.


AI in Therapeutic Contexts

AI use in therapy is becoming increasingly common, with apps offering support for managing anxiety and depression. Recently, I came across one promoting itself as an "AI psychologist." However, there is a growing risk that such solutions may be seen as substitutes for traditional therapy with professionals. While AI can provide immediate and accessible support, it lacks the intuition, sensitivity, and expertise that human therapists bring to the treatment of complex emotional dynamics.

The most significant danger is that individuals might delay or avoid seeking help from a real professional, relying on AI-based solutions instead. Addressing psychological issues is a delicate matter, and human interaction remains fundamental in therapeutic contexts. This becomes even more relevant in cases like Hikikomori and Social Anxiety Disorders, where isolation and dependence on virtual interactions can exacerbate psychological problems.


A Look to the Future

Imagine projecting ourselves 30 years into the future. By 2054, artificial intelligence will likely have reached levels of sophistication that are hard to envision today, profoundly influencing every aspect of daily life. Advances in machine learning and natural language processing may make AI interactions nearly indistinguishable from those with humans, evolving from mere assistants to constant, integrated presences.

Children born into this future will grow up with AI as a core part of their lives, with technologies capable of personalising learning and adapting to individual cognitive needs. However, this pervasiveness could significantly impact social and emotional development, increasing the risk of reduced interpersonal skills and emotional dependence on machines.

The ability of AI to simulate emotions and conversations might further blur the line between authentic human relationships and artificial interactions. This scenario could lead to an atrophy of empathy: accustomed to AI's perfectly calibrated responses, individuals might develop less tolerance for the complexities of real human relationships. The risk is that the most vulnerable will find emotional stability primarily in AI, distancing themselves from authentic interactions.

With AI playing an increasingly central role in personal and emotional decisions, future generations might inhabit a world where the distinction between human and artificial becomes almost irrelevant.

Moreover, the pervasive integration of AI into society, from healthcare to politics, could widen the gap between those with access to advanced technologies and those without, exacerbating inequalities. The psychological challenges of adapting to such a transformed world will be immense, requiring new strategies to preserve mental health and authentic relationships.


Conclusions: Do Not Underestimate the Psychological Dangers of AI

This new technology offers extraordinary potential, but it is crucial to be aware of the psychological dangers of AI. Both developers and users must understand that, however advanced, AI cannot replace human relationships. A conscious and balanced use of AI, along with targeted education about its risks, can help prevent unintended consequences, especially for the most vulnerable individuals.

In conclusion, AI can be a valuable tool, but it should never become an escape from the complexities and richness of human interactions.




 Written by

Dr Elizabeth Moore, Psychologist

(consultation only in Italian)

 

For clarifications regarding the article or to book an appointment in person or online, please visit the Contacts section.





 

Bibliography


  • Sherry Turkle, Insieme ma soli: Perché ci aspettiamo sempre più dalla tecnologia e sempre meno dagli altri, 2012, Codice Edizioni

  • Gianluca Daffi, Relazioni e solitudine nell'era digitale: L'impatto della tecnologia sulle emozioni, 2018, Erickson

  • Cathy O'Neil, Algoritmi di distruzione di massa: Come i big data aumentano la disuguaglianza e minacciano la democrazia, 2017, Bompiani


External resources

If you wish to explore the topic of psychological manipulation further and find relevant articles and studies, consider these important academic and institutional resources.


1. Stanford University - AI Index  Provides data and analysis on the impact of artificial intelligence across various domains, including its psychological effects. Link: AI Index


2. Harvard Business Review  Features articles exploring the risks and opportunities of AI, including its effects on mental health and society. Link: Harvard Business Review


3. The Journal of Artificial Intelligence Research (JAIR)  Publishes research on various aspects of artificial intelligence, including its ethical and social implications. Link: JAIR


4. Nature - Artificial Intelligence  A journal covering innovative research and topics in AI, including studies of its psychological effects. Link: Nature - Artificial Intelligence


5. AI &amp; Society  An interdisciplinary journal exploring the interaction between artificial intelligence and society, including potential psychological risks. Link: AI &amp; Society


