AI-Based Mental Health Chatbot
Role: Research Assistant
Collaborators: Dr. S.N. Omkar (Supervisor), Dhruv Shinde
Institution: Computer Intelligence Lab, Indian Institute of Science (IISc)
Duration: June 2023 – December 2023
Project Overview
College students face rising levels of anxiety and depression, while access to trained therapists remains limited. Long waitlists, stigma, and affordability barriers reduce help-seeking behavior.
This project explored how Cognitive Behavioral Therapy (CBT) principles could be translated into a structured conversational AI system. Rather than validating clinical outcomes, the focus was on designing a research-grounded intervention model and proposing a rigorous evaluation framework.
The chatbot was built with the Rasa framework and structured around CBT-based therapeutic flows.
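In Rasa, a therapeutic flow of this kind is typically declared as a story that maps user intents to bot actions in sequence. The fragment below is a sketch only; the intent and action names are hypothetical placeholders, not the project's actual identifiers.

```yaml
version: "3.1"
stories:
  - story: thought reframing flow (illustrative)
    steps:
      - intent: report_negative_thought    # e.g. "I always fail my exams"
      - action: utter_ask_trigger          # ask what situation set off the thought
      - intent: describe_trigger
      - action: utter_identify_distortion  # guide the user to name the distortion
      - intent: label_distortion
      - action: utter_prompt_reframe       # invite a more balanced alternative thought
      - intent: provide_reframe
      - action: utter_suggest_action_plan  # close with a concrete next step
```

Declaring the flow as a story, rather than relying on free-form generation, is what enforces the fixed therapeutic sequencing described below.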
Problem Context
Mental health disorders represent a significant public health burden.
- 38 million people in India were living with anxiety disorders (World Health Organization, 2017).
- 56 million people in India were living with depression (World Health Organization, 2017).
- 1 in 7 people in India experience a mental disorder (National Mental Health Survey of India, NIMHANS, 2016).
- 1 in 7 adolescents aged 10–19 worldwide experience a mental health disorder (World Health Organization, 2021).
At the same time, Cognitive Behavioral Therapy is one of the most evidence-supported treatments for anxiety and depression.
Meta-analyses indicate that CBT demonstrates moderate to large effect sizes for anxiety disorders, with response rates often between 50% and 75%, depending on condition and study design (Hofmann et al., 2019, Cognitive Therapy and Research).
The structural challenge:
How might we design an AI system that responsibly delivers CBT informed support at scale while maintaining therapeutic integrity?
Research Objectives
- Translate CBT intervention structure into conversational AI flows
- Identify limitations in existing mental health chatbot solutions
- Design an evaluation strategy grounded in clinical research standards
- Define ethical guardrails for AI-based mental health support
Methodology
Literature Review
Reviewed:
- CBT clinical frameworks
- Digital mental health intervention studies
- Mental health chatbot evaluations
- Ethical risks in AI-mediated therapy
This review informed the decision to prioritize structured therapeutic scaffolding over open-ended chat.
CBT to Conversation Translation
Designed conversational modules aligned with the core CBT sequence:
Trigger → Automatic Thought → Cognitive Distortion → Cognitive Reframe → Action Plan
Modules included:
- Cognitive distortion identification
- Behavioral activation prompts
- Thought journaling
- Emotional labeling exercises
Each conversational branch was structured to reduce ambiguity and guide users toward reflection rather than passive venting.
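The fixed sequence above can be sketched as a small state machine: each stage has exactly one successor, which is what keeps the dialogue on therapeutic rails rather than drifting into open-ended chat. This is an illustrative sketch, not the project's implementation; the class and prompt wording are hypothetical.

```python
from enum import Enum, auto

class CBTStage(Enum):
    """Stages of the structured CBT conversation flow."""
    TRIGGER = auto()
    AUTOMATIC_THOUGHT = auto()
    COGNITIVE_DISTORTION = auto()
    COGNITIVE_REFRAME = auto()
    ACTION_PLAN = auto()
    DONE = auto()

# Fixed progression: each stage has exactly one successor.
NEXT_STAGE = {
    CBTStage.TRIGGER: CBTStage.AUTOMATIC_THOUGHT,
    CBTStage.AUTOMATIC_THOUGHT: CBTStage.COGNITIVE_DISTORTION,
    CBTStage.COGNITIVE_DISTORTION: CBTStage.COGNITIVE_REFRAME,
    CBTStage.COGNITIVE_REFRAME: CBTStage.ACTION_PLAN,
    CBTStage.ACTION_PLAN: CBTStage.DONE,
}

# Illustrative prompt wording for each stage.
PROMPTS = {
    CBTStage.TRIGGER: "What situation set off this feeling?",
    CBTStage.AUTOMATIC_THOUGHT: "What went through your mind in that moment?",
    CBTStage.COGNITIVE_DISTORTION: "Could that thought be an all-or-nothing or catastrophizing pattern?",
    CBTStage.COGNITIVE_REFRAME: "What is a more balanced way to see it?",
    CBTStage.ACTION_PLAN: "What is one small step you could take this week?",
}

class CBTSession:
    """Tracks a user's position in the flow and records their answers."""

    def __init__(self):
        self.stage = CBTStage.TRIGGER
        self.answers = {}

    def prompt(self) -> str:
        """Return the guiding question for the current stage."""
        return PROMPTS[self.stage]

    def record(self, user_reply: str) -> None:
        """Store the reply for the current stage and advance to the next."""
        self.answers[self.stage] = user_reply
        self.stage = NEXT_STAGE[self.stage]
```

Because the transition table admits no branches backward or sideways, every completed session yields a full Trigger-to-Action-Plan record that can later be reviewed for reflection quality.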
Competitive Landscape Analysis
Analyzed existing mental health chatbots on:
- Therapeutic depth
- Evidence-based grounding
- Personalization logic
- Crisis detection and handling
- User engagement design
Key finding:
Many solutions emphasize mood tracking and motivational messaging but lack structured cognitive restructuring protocols aligned with CBT methodology.
Proposed Evaluation Framework
Instead of claiming therapeutic efficacy, this project developed a proposed randomized controlled trial model for future validation.
Suggested design:
Participants: College students self-reporting anxiety symptoms
Groups:
- Control group receiving static campus mental health resources
- Intervention group receiving chatbot access
Outcome Measures:
- GAD-7 (Generalized Anxiety Disorder 7-item scale)
- PHQ-9 (Patient Health Questionnaire 9-item depression scale)
- Engagement frequency
- Drop-off rates
- Session completion rates
Evaluation Goals
- Measure symptom reduction over time
- Assess adherence
- Identify dropout patterns
- Evaluate perceived empathy and usability
This framework aligns evaluation standards with established clinical methodology rather than relying solely on UX metrics.
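One reason the GAD-7 works well as a trial outcome measure is that its scoring rule is fully specified: seven items rated 0–3, a total of 0–21, and standard severity cutoffs at 5, 10, and 15 (minimal / mild / moderate / severe). Symptom measurement could therefore be automated within the proposed trial; the function name below is illustrative.

```python
def score_gad7(item_scores: list[int]) -> tuple[int, str]:
    """Score the GAD-7: seven items, each rated 0-3, total 0-21.

    Severity bands follow the standard cutoffs at 5, 10, and 15.
    """
    if len(item_scores) != 7 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("GAD-7 expects seven item scores in the range 0-3")
    total = sum(item_scores)
    if total >= 15:
        severity = "severe"
    elif total >= 10:
        severity = "moderate"
    elif total >= 5:
        severity = "mild"
    else:
        severity = "minimal"
    return total, severity
```

The PHQ-9 follows the same shape (nine items rated 0–3, cutoffs at 5, 10, 15, and 20), so both pre/post measures reduce to simple, auditable arithmetic.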
Key Design Insights
1. Structure Matters More Than Fluency
Conversational smoothness alone does not ensure therapeutic value. Structured CBT sequencing is necessary to prevent superficial interaction.
2. Empathy Requires Tone Calibration
Overly clinical language reduced perceived emotional safety. Tone adjustments improved engagement potential.
3. Evaluation in Healthcare Must Be Clinical
Usability testing is insufficient in mental health tools. Any deployment must include symptom measurement and ethical oversight.
Ethical Considerations
- Clear disclaimers distinguishing support from therapy
- Escalation protocol for crisis language
- Data privacy and confidentiality safeguards
- Avoiding overclaiming efficacy
The project prioritized research design integrity over rapid deployment.
Role
Researcher and Design Strategist
Responsibilities included:
- Literature synthesis
- Competitive analysis
- CBT module structuring
- Evaluation framework design
- Prototype critique and iteration recommendations
Impact
This project established:
- A structured blueprint for CBT-aligned conversational AI
- A clinically grounded evaluation proposal
- A framework for ethically deploying AI mental health tools
Rather than presenting a validated product, the outcome was a rigorously defined research pathway toward validation.
Reflection
The most important lesson was methodological.
In healthcare, innovation without validation is irresponsible.
This project reinforced that UX research in sensitive domains must integrate psychological science, clinical evaluation standards, and ethical accountability before claiming impact.
Sources
World Health Organization. Depression and Other Common Mental Disorders: Global Health Estimates. 2017.
World Health Organization. Adolescent Mental Health Fact Sheet. 2021.
National Mental Health Survey of India. NIMHANS, 2016.
Hofmann, S. G., et al. "The Efficacy of Cognitive Behavioral Therapy: A Review of Meta-analyses." Cognitive Therapy and Research, 2019.