Business using ChatGPT for mental health support raises ethical issues

  • A digital mental health company is facing criticism for using GPT-3 technology without informing users.
  • Koko co-founder Rob Morris told Insider that the experiment is “exempt” from informed consent laws due to the nature of the test.
  • Some medical and engineering professionals said they feel the experiment was unethical.

As ChatGPT’s use cases expand, one company is using the artificial intelligence to experiment with digital mental health care, shedding light on ethical gray areas surrounding the use of the technology.

Rob Morris — co-founder of Koko, a free mental health service and nonprofit that works with online communities to find and treat at-risk people — wrote in a Twitter thread on Friday that his company used GPT-3 chatbots to help develop responses to 4,000 users.

Morris said in the thread that the company was testing a “co-pilot approach with humans monitoring the AI as needed” in messages sent via Koko peer support, a platform he described in an accompanying video as “a place where you can get help from our network or help someone else.”

“We’re making it really easy to help other people, and with GPT-3 we’re making it even easier to be more effective and efficient as an aid provider,” Morris said in the video.

ChatGPT is a variant of GPT-3, which generates human-like text based on prompts; both were created by OpenAI.

Koko users weren’t initially informed that the responses were generated by a bot, and “once people learned that the messages were generated by a machine, it didn’t work,” Morris wrote Friday.

“Simulated empathy feels weird, empty. Machines don’t have lived, human experience, so when they say ‘that sounds hard’ or ‘I understand,’ it sounds inauthentic,” Morris wrote in the thread. “A chatbot response generated in 3 seconds, no matter how elegant, feels cheap somehow.”

But on Saturday, Morris tweeted “some important clarification.”

“We didn’t pair people to chat with GPT-3, without their knowledge. (in retrospect I could have worded my first tweet to better reflect this),” the tweet read.

“This feature was opt-in. Everyone knew about the feature when it was active for a few days.”

Morris said Friday that Koko “pulled this from our platform pretty quickly.” He noted that AI-based messages were “rated significantly higher than those written by humans on their own” and that response times decreased by 50% thanks to the technology.

Ethical and legal considerations

The experiment sparked an outcry on Twitter, with some public health and technology professionals calling out the company over claims that it violated informed consent, a federal policy that requires human subjects to provide consent before involvement in research.

“This is deeply unethical,” media strategist and author Eric Seufert tweeted on Saturday.

“Wow, I would not admit this publicly,” Christian Hesketh, who describes himself on Twitter as a clinical scientist, tweeted Friday. “The participants should have given informed consent and this should have gone through an IRB [institutional review board].”

In a statement to Insider on Saturday, Morris said the company “does not connect people to chat with GPT-3” and said the ability to use the technology was removed after realizing it “felt like an inauthentic experience.”

“Rather, we offered our peer supporters the opportunity to use GPT-3 to help them write better answers,” he said. “They got suggestions to help them write more supportive answers faster.”

Morris told Insider that Koko’s study is “exempt” from informed consent laws, citing previously published research from the company that was also exempt.

“Each individual must provide consent to use the service,” Morris said. “If this was a university study (which it isn’t, it was just a product feature being explored), this would fall under an ‘exempt’ research category.”

He continued: “This posed no additional risk to users, no deception, and we do not collect any personally identifiable information or personal health information (no email, phone number, ip, username, etc.).”

A woman seeks mental health care on her phone. Beatriz Vera/EyeEm/Getty Images

ChatGPT and mental health’s gray area

Nevertheless, the experiment raises questions about ethics and the gray areas surrounding the use of AI chatbots in healthcare in general, after the technology has already caused turmoil in academia.

Arthur Caplan, professor of bioethics at New York University’s Grossman School of Medicine, wrote in an email to Insider that using AI technology without informing users is “grossly unethical.”

“The ChatGPT intervention is not standard care,” Caplan told Insider. “No psychiatric or psychological group has verified its effectiveness or laid out its potential risks.”

He added that people with mental illness “require special sensitivity in any experiment,” including “close review by a research ethics committee or institutional review board before, during, and after the intervention.”

Caplan said using GPT-3 technology in such ways could impact the future of healthcare more broadly.

“ChatGPT may have a future like many AI programs such as robotic surgery,” he said. “But what happened here can only delay and complicate that future.”

Morris told Insider that his intention was to “underscore the importance of the human in the human-AI discussion.”

“I hope it doesn’t get lost here,” he said.
