Counseling and Psychological Services

AI for Mental Health Support

Emergency Contacts

• GMU Crisis Service: 703-993-2380, option 1
• GMU Police: 703-993-2810
• 911

AI as a Tool, Not a Therapist

AI can play a valuable role in mental health support, but it is best used as a supplement to therapy rather than a replacement for it. Instead of thinking of AI as an alternative to human therapists, consider how AI tools and human expertise can work together.

Guidelines for Safe and Informed Use

1. Understand the Tool’s Purpose
• Use AI chatbots for general wellness support, mood tracking, or self-help—not as a replacement for therapy.
• Look for tools that clearly state they are not substitutes for professional care.
2. Check Privacy and Security Policies
• Read the privacy statement before sharing personal details.
• Prefer apps with end-to-end encryption and the option to delete your data.
3. Use Reputable, Evidence-Informed Tools
• Choose chatbots developed in collaboration with mental health professionals or backed by credible institutions.
• Look for scientific validation or peer-reviewed studies if available.
4. Stay Critical of the Information
• Verify advice from the chatbot with trusted mental health resources or professionals.
• Be cautious of any chatbot that claims to “diagnose” or “treat” mental illness.
5. Know the Limits in a Crisis
• Always have a crisis plan (e.g., local emergency numbers, suicide hotlines).
• If you are in danger or distress, contact human support immediately rather than relying on AI.
6. Maintain Human Connections
• Balance AI interactions with friends, family, or therapists.
• Use the chatbot as a supplementary tool, not a replacement for real relationships.
7. Report Problematic Outputs
• If the chatbot gives harmful, misleading, or triggering responses, report it to the developers or platform host.
• Avoid platforms that do not provide a way to give feedback or flag issues.

Risks of Using Mental Health Generative Chatbots

1. Lack of Professional Oversight
• Chatbots are not licensed therapists and cannot provide a clinical diagnosis or treatment.
• Risk: Users may rely on AI instead of seeking appropriate professional care.
2. Inaccurate or Harmful Advice
• AI models may generate incorrect, oversimplified, or culturally inappropriate suggestions.
• Risk: Following flawed advice could worsen mental health symptoms or delay effective treatment.
3. Data Privacy and Confidentiality
• Conversations may be stored or analyzed to improve AI systems.
• Risk: Sensitive emotional or mental health information could be inappropriately accessed or shared if data policies are unclear or breached.
4. Overreliance and Emotional Dependence
• Users may develop a false sense of companionship or become overly dependent on the chatbot.
• Risk: Reduced engagement with real social support networks or professionals.
5. Limited Crisis Support
• AI tools typically cannot recognize or respond adequately to emergencies such as suicidal ideation or self-harm.
• Risk: Delayed intervention in life-threatening situations.
6. Bias and Cultural Limitations
• Chatbots are trained on broad datasets that may reflect social or cultural biases.
• Risk: Insensitive, invalidating, or discriminatory responses.
7. Lack of Transparency and Accountability
• Some chatbots do not clearly disclose their limitations, how they use your data, or the fact that they are AI.
• Risk: Users may assume they are interacting with a human counselor.

If you are interested in learning more about the use of AI for mental health support, here are some helpful resources:

Leveraging AI to Support Student Mental Health and Well-Being

The Rise of AI in Mental Health: Promise or Illusion? Bridging the gap between human connection and machine innovation.

American Counseling Association: Artificial Intelligence in Counseling

Artificial intelligence in mental health care: a systematic review of diagnosis, monitoring, and intervention applications

American Psychological Association: Artificial intelligence, wellness apps alone cannot solve mental health crisis