Survey: Consumers Open to AI in Mental Health Triage — But Only with Human Oversight

  • Writer: Niv Nissenson
  • Sep 18
  • 2 min read

Iris Telehealth has released new findings from its 2025 AI & Mental Health Emergencies Survey, shedding light on how Americans view the role of AI in identifying and responding to behavioral health crises. The results show a cautious openness: consumers are willing to let AI help detect risk, but overwhelmingly want humans to remain in charge of care decisions.


Key Findings

  • Privacy trade-offs are acceptable: Nearly half (49%) would allow AI to monitor facial expressions, voice tone, or typing patterns if it meant earlier crisis detection.

  • Human oversight is critical: 73% say providers must make the final call in emergencies flagged by AI; only 8% trust AI to act alone.

  • AI as an alert system: 21% see AI as innovative and potentially life-saving, but concerns remain about false positives (30%) and weakening human connection (23%).

  • Preferred crisis response: Consumers favor personal responses — such as notifying a trusted friend/family member (28%) or a counselor call within 30 minutes (27%).


Demographic Gaps

  • Gender: Men (56%) are more open to AI monitoring than women (41%), while women are more insistent on human final decisions.

  • Generations: Millennials and Gen Z are far more comfortable with AI detection than boomers.

  • Income: Lower-income respondents (61%) are more receptive to AI monitoring than high earners (44%).


AI is finding its way into healthcare, but the survey highlights a delicate balancing act. Consumers are pragmatic: they’ll accept privacy trade-offs and algorithmic monitoring if it speeds up life-saving intervention, but they draw a hard line at letting AI make the final call. Our take is that AI isn't ready to operate without human oversight on critical decisions. We've highlighted issues like hallucinations, alignment faking, and probabilistic variance that make keeping a human doctor in the loop essential for the foreseeable future.


The challenge ahead is that building AI for crisis detection is technically difficult — false positives could overwhelm clinicians, while missed cases could be devastating. The market will reward solutions that combine rapid detection with empathetic, human-led response. In mental health, trust and connection may matter as much as the technology itself.
