How People Use Claude for Support, Advice, and Companionship
https://www.anthropic.com/news/how-people-use-claude-for-support-advice-and-companionship
Only 2.9% of Claude.ai interactions are affective (emotional/psychological) conversations.
- Companionship + roleplay < 0.5%; romantic/sexual roleplay < 0.1%.
Study analyzed 4.5M conversations via Clio, Anthropic’s privacy-preserving tool.
- Final corpus: 131,484 affective conversations after filtering.
Users bring career stress, loneliness, relationship issues, and existential questions.
- Two usage patterns: building mental-health skills proactively vs. working through anxiety or stress in the moment.
Claude pushes back in fewer than 10% of supportive conversations.
- Typical refusals: dangerous diet advice, support for self-harm, and requests for professional diagnoses.
- Conversations tend to end slightly more positively than they start; no evidence of negative emotional amplification.
- Anthropic partnering with ThroughLine (crisis support org) on safeguards + referral protocols.
- Key limitation: the analysis covers single-session text conversations only, with no validated outcome measures and no causal claims.
Miles McCain, Ryn Linthicum, Chloe Lubinski, Alex Tamkin, Saffron Huang, Michael Stern, Kunal Handa, Esin Durmus, Tyler Neylon, Stuart Ritchie, Kamya Jagadish, Paruul Maheshwary, Sarah Heck, Alexandra Sanderford, Deep Ganguli (Anthropic)
| Field | Value |
| --- | --- |
| Type | Link |
| Added | Apr 21, 2026 |