
What HCPs should know about using AI for serious mental illness


Generative AI has several shortcomings when it comes to serious mental illness, including its tone and the way it sometimes structures responses.

The increasing use of generative AI tools to research health information is a trend that all health care providers should be aware of, with implications for how we proactively communicate with the people we support.

In my decades of working with people living with serious mental illness, I have learned the importance of clear, consistent and open communication. Now, as I find myself more and more in conversations with people who have used AI platforms to source information about their diagnosis or treatment, I am aware of both its potential applications and where I believe caution is prudent.

I have seen a recent shift in patients using generative AI platforms as a de facto search engine, which is driving the topics and questions they are bringing to my office. For example, they may come into the clinic saying, "Here's what I found ... what do you think?"

It is important to recognize their efforts to engage in their care and support them in making sense of that information in a clinical context. Some patients may feel confident exploring these topics, while others may approach them with hesitation, particularly if they have had prior experiences of feeling dismissed or misunderstood in the health care system.

As a general recommendation, I encourage patients to learn more about their diagnoses. Equipping themselves with knowledge to understand their illness and to be active members of the team navigating the treatment journey is a good thing. But it is important that this information is accurate and integrated into conversations with care teams in the context of ongoing clinical care.

As providers, our job is to listen, engage thoughtfully and knowledgeably, and provide our clinical perspective. If a patient references AI, the first step is to understand the context. Is this someone who has been thinking about changing their treatment for a while? Are they using AI simply to get more informed? Or do they feel like they are not getting the information or support they need from their care team? Then, I aim to use their questions as opportunities for productive conversations, especially when the patient or their care partner may be looking to better understand a disease state or engage in shared decision-making.

In addition, I do think AI tools may have a role in helping to reduce stigma. For patients who may not feel comfortable asking their health care provider questions or talking to people in their support system, AI can be an accessible starting point to explore what they are experiencing. If typing a question into a search box gives someone the language or confidence to advocate for themselves, that is a win.

My concern, however, is that the information presented by various AI platforms can be oversimplified, potentially misleading or, even if not technically inaccurate, missing the clinical nuance that comes from a trusted, established patient-provider relationship. As a result, AI introduces new dynamics into the patient-provider relationship that benefit from thoughtful, collaborative discussion. In my experience, this can be particularly relevant in mental health care, a field widely acknowledged to be complex and heterogeneous.

One of the key issues is that the tone and structure of AI responses can give the impression that the information is neutral, objective or even authoritative, when in reality it might be incomplete, lacking clinical insight or otherwise inappropriate as a substitute for medical advice. In certain cases, patients may take a phrase or recommendation out of context or focus on one piece of information without the benefit of their HCP's perspective.

As humans, we are all prone to confirmation-seeking. It is in our nature to look for information that supports existing beliefs and makes us feel comforted and confident. Generative AI tools are particularly susceptible to reinforcing this tendency, especially for individuals who may be experiencing psychosis or delusional thinking as part of their condition. For example, a person experiencing paranoia might repeatedly prompt AI until it offers a response that aligns with their fear. Or if a person who does not believe they are ill or is skeptical of medication asks questions in a certain way, they may receive responses that validate those beliefs, regardless of whether they reflect clinical reality. This confirmation bias can end up affirming false or harmful beliefs. There have already been reports of generative AI tools giving unsubstantiated advice, such as telling a patient with schizophrenia to go off her medications, with real implications for health and safety.

Psychiatry is an inherently subjective science. There is no blood test to confirm the severity of depression, and no lab result to determine which antipsychotic will work best for a particular patient. That means AI, by default, is going to miss important data that HCPs rely on when partnering with patients on clinical decision-making. During appointments, I talk to my patients about their family history and past treatment responses, information that AI would not know and a patient may not feed into search queries. Even if AI had full access to such individualized information, it would still miss subtle cues, such as symptom severity, emerging medication side effects or whether a patient is taking care of their daily needs. These cues can only be assessed during a person-to-person appointment.

Treatment planning and medication choice for those living with serious mental illness is complex and highly personal in nature. I rely heavily on all the diagnostic tools available to me, elevated by my knowledge of the individual person I am working with and their needs. Thinking about the potential impact of AI on a person's choice to take their medication as prescribed, it is one reason I may consider long-acting injectables (LAIs) for appropriate patients. LAIs provide consistent medication coverage and can act as a safeguard against sudden interruptions in treatment plans that may be influenced by AI-generated content. As ever, though, it is important to broach conversations about LAI treatment with respect, so that the person living with the serious mental illness feels empowered in their choice.

As HCPs, we should view AI like any other external influence: validate it where it is helpful, address it where it is misleading and always bring the conversation back to shared decision-making grounded in clinical expertise and human connection. While AI can be helpful, it should complement, not replace, the human relationships that are central to effective care.

There is a place for AI in health care, but it cannot be the mainstay of treatment; it is only one tool in the toolbox.

Richard Miller, MD, staff psychiatrist at Elwyn Adult Behavioral Health in Rhode Island, can be reached through Sam Brancato at [email protected].
