The Ethical Dilemma of A.I. and Mental Health

Health and Gender Policy Brief #158 | By: Geoffrey Small | March 10, 2023

Header photo taken from: APA.org



As AI becomes more integrated into our lives, concerns grow about the ethical and legal regulations that should accompany its use. These are questions that one mental health company, Koko, had to ask itself after providing AI-written counseling to 4,000 people without informing them.

Photo taken from: Shutterstock

Policy Summary

The United States continues to fall short in providing basic healthcare necessities that other high-income nations provide. This comparatively low-quality healthcare system is compounded by a growing mental health crisis in a post-pandemic society, where demand for psychological help is higher than ever. Amid this rising demand, a shortage of mental health professionals makes access even more daunting for individuals in need. U.S. companies are trying to find innovative solutions to this shortage by turning to artificial intelligence (A.I.).

However, as mental health companies take the initiative to address this accessibility shortage in a time of crisis, a debate has emerged in the healthcare community over tech companies' ethical practices. This policy brief will explore the use of A.I. to address the national shortage of mental healthcare, and the concerns of public health and tech professionals about introducing A.I. without the ethical protocols the scientific community has followed since the Tuskegee Study.

Policy Analysis

According to the Commonwealth Fund, U.S. citizens experience the worst overall health outcomes of any high-income nation. People in the U.S. are more likely to die younger from avoidable causes than people in peer countries. Healthcare spending per person in the U.S. is significantly higher than in any other high-income nation, yet Americans see healthcare professionals, such as physicians and psychologists, far less often than citizens of other countries. This comes at a time when the World Health Organization has reported a 25% increase in anxiety and depression worldwide, a rise directly related to the COVID-19 pandemic.

Business Insider recently profiled a nonprofit mental health company's attempted solution to this growing accessibility crisis. Rob Morris, the co-founder of Koko, tweeted that his company had used GPT-3 chatbots to help develop responses for 4,000 users who were in need of mental health-related support. Morris claimed that the experiment was "exempt" from informed consent laws due to the nature of the test.

Even though he indicated that humans were supervising the A.I. responses, the experiment stopped working once people learned that a machine had been involved in their online conversations. After his initial tweets drew the ire of public health and tech professionals, Morris followed up to clarify that users had not been paired with chatbots without their knowledge.

Some public health and tech professionals claimed that this method violated informed consent laws. The Department of Health and Human Services (HHS) clearly states that obtaining the "legally effective informed consent of individuals before involving them in research is one of the central protections provided."


Various ethical and legal conundrums involved in the use of artificial intelligence in healthcare, as illustrated above.

Infographic taken from: Frontiers.org


Full disclosure of the relevant information is required for a participant to make an informed decision. These guidelines were formulated following the 1979 Belmont Report, published by a federal commission that investigated the Tuskegee Study, in which African American men were clinically observed for the long-term effects of syphilis without being treated.

After criticism from the public health community, Morris stated that the A.I. program had been discontinued in January. Innovative methods may be needed to address the growing U.S. mental health crisis, but the tech industry needs to be mindful of informed consent laws when exposing participants to artificial intelligence.

Research and advocacy by organizations such as the National Alliance on Mental Illness and Mental Health America are conducted with ethical best practices. That is why it is important to donate to these organizations, so we can better understand the repercussions A.I. chatbots may have on willing participants with mental health issues.

Engagement Resources


National Alliance on Mental Illness (NAMI):

https://donate.nami.org/give/197406/#!/donation/checkout?utm_source=globalNav&utm_medium=website&utm_campaign=DonationTracking&c_src=WEBDG

Mental Health America:

https://www.mhanational.org/donate-now
