The Ethical Dilemma of A.I. and Mental Health
Health and Gender Policy Brief #158 | By: Geoffrey Small | March 10, 2023

Policy Summary
The United States continues to fall short of other high-income nations in providing basic healthcare. The comparatively poor quality of the U.S. healthcare system is compounded by a growing mental health crisis in a post-pandemic society, where demand for psychological help is higher than ever. Alongside this rising demand, a shortage of mental health professionals makes access even more daunting for individuals in need. U.S. companies are trying to find innovative solutions to this shortage by turning to A.I. (artificial intelligence).
However, as mental health companies take the initiative to expand access during a time of crisis, a debate has emerged in the healthcare community about tech companies' ethical practices. This policy brief explores the use of A.I. to address the national shortage of mental healthcare and the concerns that public health and tech professionals have about introducing A.I. without the ethical protocols the scientific community has followed since the Tuskegee Study.
Policy Analysis
According to the Commonwealth Fund, U.S. citizens experience the worst overall health outcomes of any high-income nation. People in the U.S. are more likely to die younger from avoidable causes than people in peer countries. Per-person healthcare spending in the U.S. is significantly higher than in any other high-income nation, yet Americans see healthcare professionals, such as physicians and psychologists, far less often than citizens of other countries. This comes at a time when the World Health Organization has reported a 25% worldwide increase in anxiety and depression directly related to the COVID-19 pandemic.
Business Insider recently profiled a nonprofit mental health company's solution to this growing accessibility crisis. Rob Morris, the cofounder of Koko, tweeted that his company used GPT-3 chatbots to help develop responses for 4,000 users seeking mental health-related support. Morris claimed the experiment was "exempt" from informed consent laws due to the nature of the test.
Although he indicated that humans were supervising the A.I. responses, the experiment stopped working once people learned a machine had been involved in their online conversations. After his initial tweets drew the ire of public health and tech professionals, Morris followed up to clarify that people were not paired with chatbots without their knowledge.
Some public health and tech professionals claimed that this method violated informed consent laws. The Department of Health and Human Services (HHS) clearly states that the "legally effective informed consent of individuals before involving them in research is one of the central protections provided."

Infographic taken from: Frontiers.org
For a participant to make an informed decision, the relevant information must be fully disclosed and understood. These guidelines were formulated after the 1979 Belmont Report, published by a federal commission that investigated the Tuskegee Study, in which African American men were clinically observed for the long-term effects of syphilis without being treated.
After criticism from the public health community, Morris stated that the A.I. program was discontinued in January. Innovative methods may be needed to address the growing U.S. mental health crisis, but the tech industry must respect informed consent laws when exposing participants to artificial intelligence.
Data collection and advocacy by organizations like the National Alliance on Mental Illness and Mental Health America are conducted according to ethical best practices. That is why it is important to donate to these organizations to better understand the repercussions A.I. chatbots may have on willing participants with mental health issues.
Engagement Resources
https://www.mhanational.org/donate-now