AI Chatbots Linked to ‘AI Psychosis’ Trend: Users Report Losing Touch with Reality

Recently, the term 'AI psychosis' has been circulating on social media. Users describe losing touch with reality after prolonged interactions with AI chatbots like ChatGPT. This phenomenon involves delusions or paranoia following extensive conversations with these digital assistants.

AI psychosis is not a clinically recognised condition; the term is used informally, much like 'brain rot' or 'doomscrolling'. The Washington Post reports on the trend amid the rapid growth of AI chatbots such as OpenAI's ChatGPT, which launched in 2022 and now approaches 700 million weekly users.

AI Summary

AI-generated summary, reviewed by editors

The term 'AI psychosis' describes potential delusions or paranoia from prolonged chatbot use, particularly with platforms like OpenAI's ChatGPT, which has nearly 700 million weekly users. Mental health experts are addressing this emerging issue, while companies like OpenAI, Anthropic, and Meta are implementing safeguards and providing support resources.

Understanding AI Psychosis

Psychosis typically results from factors such as drug use, trauma, or conditions like schizophrenia, and manifests through delusions and disorganised thinking. By analogy, 'AI psychosis' refers to similar symptoms reported after excessive chatbot interaction: users may develop false beliefs based on AI responses or form intense attachments to AI personas.

Mental health experts stress the importance of addressing AI psychosis promptly due to its potential impact on users' mental health. Vaile Wright from the American Psychological Association (APA) stated, "The phenomenon is so new and it's happening so rapidly that we just don't have the empirical evidence to have a strong understanding of what's going on." She noted the abundance of anecdotal stories surrounding this issue.

Efforts by AI Companies

OpenAI is actively working on improving ChatGPT's ability to detect signs of mental distress among users. The company aims for the chatbot to respond appropriately and direct users to evidence-based resources when necessary. OpenAI collaborates with various stakeholders, including clinicians and mental health experts, to enhance ChatGPT's responses in sensitive situations.

The company plans to adjust ChatGPT's behaviour in high-stakes scenarios. For instance, instead of giving direct answers to personal questions like "Should I break up with my boyfriend?", the chatbot will guide users through decision-making processes by asking follow-up questions and weighing pros and cons.

Other Companies' Initiatives

Amazon-backed Anthropic has introduced a feature in its Claude Opus models that ends conversations which become abusive or harmful, intended to protect the welfare of AI systems during distressing interactions. Anthropic treats the feature as an ongoing experiment and is seeking user feedback for refinement.

If Claude terminates a conversation, users can edit their prompt or start anew. They can also provide feedback using thumbs up/down reactions or a dedicated button.

Parental Controls and Support Resources

Meta has implemented parental controls allowing restrictions on children's interaction time with its AI chatbot on Instagram Teen Accounts. Additionally, Meta provides resources for users submitting prompts related to suicide, offering links and hotline numbers for support.

The APA is forming an expert panel to further study the role of AI chatbots in therapy. Its report, expected soon, will include recommendations for mitigating potential harms from chatbot interactions.
