Banner ID: A partially open laptop with a desert sunset on the screen. The light from the laptop screen is the only light; the laptop is otherwise surrounded by darkness. Text overlay reads: Is Artificial Intelligence Bringing Bias into Mental Health Treatment?

Is Artificial Intelligence Bringing Bias into Mental Health Treatment?

Curt and Katie chat about the responsibility therapists hold when they use AI applications for their therapy practices. We explore where bias can show up and how AI compares to therapists in acting on biased information. This is a continuing education podcourse.


In this podcast episode we talk about whether therapists or AI are more biased

As artificial intelligence tools are incorporated into psychotherapy, mental health treatment becomes accessible to a larger portion of the world. This course addresses the question “Do the same biases that exist in in-person psychotherapy exist in AI-delivered treatment?” at the awareness, support, and intervention levels of mental health treatment.

How is machine learning used in “AI” for therapists?

  • There are different types of AI used in mental health: machine learning, neural networks, and natural language processing
  • AI can be used for awareness, support, and/or intervention
  • There is a potential for bias within AI models

Where can bias come in when AI models are used in mental health?

“A lot of these intervention or AI chat bots…are based in the medical model, which has inherent bias in it…It was designed by humans, so it’s going to have just the bias of the folks who created it.” – Katie Vernoy, LMFT

  • Source material, like the DSM
  • Human error in the creation
  • Cultural humility and appropriateness

Are human therapists less biased than AI models in diagnosis and mental health intervention?

  • The short answer is no
  • A study shows that ChatGPT is significantly more accurate than physicians in diagnosing depression (95% or greater compared to 42%)
  • ChatGPT is less likely to provide biased treatment recommendations (e.g., it recommends therapy to people of all socioeconomic statuses)
  • There is still the possibility of bias, so diverse datasets and open-source models can be used to improve this

What is a potential future for mental health treatment that includes AI?

“If people are able to engage in more positive mental health stuff more consistently, if [AI] does some things better, that’s not a bad thing, and it allows for us to focus on maybe some more of the systemic pressures and adopting more of a more global mental health approach from what we do to be able to bring some of those social justice aspects in” – Curt Widhalm, LMFT

  • Curt compared future therapy practice to pilots overseeing autonomous planes: providing oversight, but much less direct intervention
  • Katie expressed concern about the lack of preparation that therapists have for these dramatic shifts in what our job looks like

Key takeaways from this podcast episode (as curated by Otter.ai)

  • Enhance the training and validation of AI algorithms with diverse datasets that consider intersectionality factors
  • Explore the integration of open-source AI systems to allow for more robust identification and addressing of biases and vulnerabilities
  • Develop educational standards and processes to prepare new therapists for the evolving role of AI in mental healthcare
  • Engage in advocacy and oversight efforts to ensure therapists have a voice in the development and implementation of AI-powered mental health tools

Receive Continuing Education for this Episode of the Modern Therapist’s Survival Guide

Hey modern therapists, we’re so excited to offer the opportunity for 1 unit of continuing education for this podcast episode – Therapy Reimagined is bringing you the Modern Therapist Learning Community!

Once you’ve listened to this episode, to get CE credit you just need to go to learn.moderntherapistcommunity.com/pages/podcourse, register for your free profile, purchase this course, pass the post-test, and complete the evaluation! Once that’s all completed – you’ll get a CE certificate in your profile or you can download it for your records. For our current list of CE approvals, check out moderntherapistcommunity.com.

You can find this full course (including handouts and resources) here: https://learn.moderntherapistcommunity.com/courses/is-artificial-intelligence-bringing-bias-into-mental-health-treatment

Continuing Education Approvals:

At the time this podcast episode airs, we have the following CE approval. Please check back as we add other approval bodies: Continuing Education Information including grievance and refund policies.

CAMFT CEPA: Therapy Reimagined is approved by the California Association of Marriage and Family Therapists to sponsor continuing education for LMFTs, LPCCs, LCSWs, and LEPs (CAMFT CEPA provider #132270). Therapy Reimagined maintains responsibility for this program and its content. Courses meet the qualifications for the listed hours of continuing education credit for LMFTs, LCSWs, LPCCs, and/or LEPs as required by the California Board of Behavioral Sciences. We are working on additional provider approvals, but are only able to provide CAMFT CEs at this time. Please check with your licensing body to ensure that they will accept this as an equivalent learning credit.

Resources for Modern Therapists mentioned in this Podcast Episode:

We’ve pulled together resources mentioned in this episode and put together some handy-dandy links. Please note that some of the links below may be affiliate links, so if you purchase after clicking below, we may get a little bit of cash in our pockets. We thank you in advance!

Dr. Ben Caldwell’s articles:

AI therapy is about to make therapy a lot cheaper

An AI therapist can’t really do therapy. Many clients will choose it anyway

How human therapists can thrive in a world of AI-based therapy

 

Precision Psychiatry article Katie mentioned:

Beyond One-Size-Fits-All: Precision Psychiatry Is Here

 

References mentioned in this continuing education podcast:

*The full reference list can be found in the course on our learning platform.

 

Relevant Episodes of MTSG Podcast:

AI Therapy is Already Here: An interview with Dr. Ben Caldwell

Is AI Really Ready for Therapists? An interview with Dr. Maelisa McCaffrey

Revisiting SEO and AI – Ethics and best practices: An Interview with Danica Wolf

The Future Is Now: Chatbots are Replacing Mental Health Workers

Is AI Smart for Your Therapy Practice? The ethics of artificial intelligence in therapy

Online Therapy Apps

What Therapists Need to Know About Menopause and Perimenopause: An interview with Dr. Sharon Malone, MD

Welcome to Therapist Grad School!

An Incomplete List of Everything Wrong with Therapist Education, An Interview with Diane Gehart, LMFT

 

Who we are:

Picture of Curt Widhalm, LMFT, co-host of the Modern Therapist's Survival Guide podcast; a nice young man with a glorious beard.

Curt Widhalm, LMFT

Curt Widhalm is in private practice in the Los Angeles area. He is the cofounder of the Therapy Reimagined conference, an Adjunct Professor at Pepperdine University and CSUN, a former Subject Matter Expert for the California Board of Behavioral Sciences, former CFO of the California Association of Marriage and Family Therapists, and a loving husband and father. He is 1/2 great person, 1/2 provocateur, and 1/2 geek, in that order. He dabbles in the dark art of making “dad jokes” and usually has a half-empty cup of coffee somewhere nearby. Learn more at: http://www.curtwidhalm.com

Picture of Katie Vernoy, LMFT, co-host of the Modern Therapist's Survival Guide podcast.

Katie Vernoy, LMFT

Katie Vernoy is a Licensed Marriage and Family Therapist, coach, and consultant supporting leaders, visionaries, executives, and helping professionals to create sustainable careers. Katie, with Curt, has developed workshops and a conference, Therapy Reimagined, to support therapists navigating through the modern challenges of this profession. Katie is also a former President of the California Association of Marriage and Family Therapists. In her spare time, Katie is secretly siphoning off Curt’s youthful energy, so that she can take over the world. Learn more at: http://www.katievernoy.com

A Quick Note:

Our opinions are our own. We are only speaking for ourselves – except when we speak for each other, or over each other. We’re working on it.

Our guests are also only speaking for themselves and have their own opinions. We aren’t trying to take their voice, and no one speaks for us either. Mostly because they don’t want to, but hey.

Stay in Touch with Curt, Katie, and the whole Therapy Reimagined #TherapyMovement:

Patreon

Buy Me A Coffee

Podcast Homepage

Therapy Reimagined Homepage

Facebook

Twitter

Instagram

YouTube

Consultation services with Curt Widhalm or Katie Vernoy:

The Fifty-Minute Hour

Connect with the Modern Therapist Community:

Our Facebook Group – The Modern Therapists Group

Modern Therapist’s Survival Guide Creative Credits:

Voice Over by DW McCann https://www.facebook.com/McCannDW/

Music by Crystal Grooms Mangano https://groomsymusic.com/

Transcript for this episode of the Modern Therapist’s Survival Guide podcast (Autogenerated):

Transcripts do not include advertisements, just a reference to the advertising break (as such, timing does not account for advertisements).

… 0:00
(Opening Advertisement)

Announcer 0:00
You’re listening to the Modern Therapist’s Survival Guide, where therapists live, breathe and practice as human beings. To support you as a whole person and a therapist, here are your hosts, Curt Widhalm and Katie Vernoy.

Curt Widhalm 0:15
Hey, modern therapists, we’re so excited to offer the opportunity for one unit of continuing education for this podcast episode. Once you’ve listened to this episode, to get CE credit, you just need to go to moderntherapistcommunity.com, register for your free profile, purchase this course, pass the post-test and complete the evaluation. Once that’s all completed, you’ll get a CE certificate in your profile, or you can download it for your records. For a current list of our CE approvals, check out moderntherapistcommunity.com.

Katie Vernoy 0:47
Once again, hop over to moderntherapistcommunity.com for one CE once you’ve listened, woohoo.

Curt Widhalm 0:53
Welcome back, modern therapists, this is the Modern Therapist’s Survival Guide. I’m Curt Widhalm with Katie Vernoy, and this is the podcast for therapists about things that go on in our profession, the things that affect how we work. And this is another one of our continuing education eligible episodes. So listen at the beginning or the end of the episode for information on how to get CE credits for this, or go to our show notes over at mtsgpodcast.com. That’s where we put some of the other information, too. We are following up on an episode that we had a couple of weeks ago with Dr. Ben Caldwell talking about AI being prevalent in the field, already here, so go back and listen to that episode. Part of what he had spoken about was something that we had already kind of started planning on talking about. So, it was very convenient for this to come a couple of weeks after that episode. We wanted to talk about bias, and we wanted to talk about bias in therapy. This is something that I have been a part of through the California Association of Marriage and Family Therapists Ethics Committee. I’m currently chairing that committee. I’m not speaking for that committee in this episode, and I will extend my welcome and gratitude to Katie, who is now on that committee this year.

Katie Vernoy 1:30
But I’m not speaking for the Ethics Committee either, partly because I’ve not even attended a meeting yet, but also because I’m not speaking for them.

Curt Widhalm 2:22
So, some of the things that we’re trying to do in the Ethics Committee is update the ethics code to include things like how we responsibly use AI. And one of the discussions that we’ve been having in the background is we know the potential for bias historically. It’s already part of our ethics codes, things that we work to not perpetuate. And as we’re looking at updating our ethics codes in looking at responsible use of technology and artificial intelligence, we kind of came to this question around we know that there’s a potential for bias to exist in AI, but what in the heck do we, as the therapist, do about it? We’re not the ones who are sitting in the background programming large language learning models or generative AI or machine learning sorts of things, but as end users, what are some of the responsibilities that we’re going to have? And as we were looking at: what do therapists need to know about the bias in AI? I was like, I’m already doing a bunch of this work. Why not make a podcast episode about this too? So…

Katie Vernoy 3:35
Absolutely.

Curt Widhalm 3:36
That’s kind of how we got here. And then it became convenient that Dr. Caldwell was talking about some of the ways that AI was showing up anyways. Now, if you’ve listened to some of our past episodes about artificial intelligence, you’ve heard us throw out a bunch of caution about, hey, there’s some potential ethical implications here, even if they’re not already part of our ethics codes. And I have to admit that I really went into the research in this particular episode, looking for what are the potential, really big pitfalls about how AI can get bias really, really wrong. And maybe part of that was born out of my need for defense of, hey, therapists can be pretty good at being helpful, and are we going to be better than the computer at things, or is this the proverbial John Henry versus the machine when it comes to building the railroads back in American history, lore and tall tales and that kind of stuff? So in this episode, I’m framing my bias. I’m owning it from the very beginning, as far as I’m coming into this, or I came into the research looking for AI, you’re going to be biased. But Katie, when I first proposed this episode, and particularly around continuing education, what were your thoughts?

Katie Vernoy 4:58
When we look at the bias that has come into other AI, whether it’s machine learning, large language models, any of this stuff, what people collectively call AI, I’ve seen some pretty harmful things, pretty biased things come about, and it’s based on human bias, where it’s built into the machine, so to speak, or it’s picked up as the model continues to learn. I think about the Twitter bots that became super racist. Or, you know, we have an episode about the eating disorder hotline replacement Tessa that started telling the people testing it out, before it even fully launched, to start dieting. And so I get worried, because, run rampant, AI has exacerbated human feelings and human bias. When I look at this and when I try to sort through what AI can do in our field, and after the conversation with Ben and reading his articles, and we’ll link to those things in the show notes over at mtsgpodcast.com, I started kind of assessing my own use of AI already. The things that I’m already doing that are either AI or AI adjacent. I don’t use ChatGPT much, but I use it some. We use AI for our podcast. I don’t know that many people know that, but the way we use it is we have Otter.ai doing our transcripts, and they actually have built into Otter some really nice things that help me to summarize the episode for the show notes. I go through all of it and kind of rewrite everything, so I don’t use it as much as other folks may. But there’s a lot that we’re already, that I’m already, anyway, using to make my life easier. There’s a number of things that I’ve jumped into where I find myself interacting with technology in the way that I might have interacted with a coach or a tutor or something like that. When we were talking before the episode, there was, you know, we both are big Duolingo fans, and that is generative. It, you know, kind of puts you through lessons. It tells you, it coaches you on things, all of those things. And I have a relationship with Duolingo. And so as I started looking at this, and as I look at some of the other things that I’m sure we’ll get into in the episode, I recognize that it’s here, which I know initially I was very terrified: it’s gonna take our jobs, it’s gonna take over the world. And it could be really, really helpful. So I think I’m coming with a little less bias than what you described, and I also recognize I’m also coming with less information. So I don’t know what bias will rise as we continue to talk about this. But I feel cautiously optimistic that there might be some good stuff for mental health, at least mental health patients or consumers, versus therapists that are worried about their jobs being obsolete. So I’m ready to jump in, but that’s kind of what I was thinking about when you first mentioned this episode: yeah, there’s places for bias, but there’s also places for utility.

Curt Widhalm 8:23
With how much things like ChatGPT have come to the public forefront in the last couple of years, I also am acknowledging my bias in being kind of extra critical of research. Now, behind the scenes a little bit: when we do continuing education episodes, for some of the continuing education bodies that are out there to give approval to courses, there’s a certain number of research articles that we have to point to that have been published in the last five years. And usually that’s kind of a challenge, especially in the post-COVID years, to get really robust kinds of publications that meet some of these requirements, but in particular, with AI and the proliferation of things like ChatGPT over the last couple of years, I found myself looking at articles that were published even as recently as 2021 with my response being: oh man, that’s old. That’s outdated already.

Katie Vernoy 9:23
But it’s moving so fast.

Curt Widhalm 9:24
It’s moving so fast, and some of the things that are addressed as far as ethical concerns within AI go back as far as the 1950s, when the terms around artificial intelligence were first coined, and even before that, to things like the Alan Turing test of people’s interactions with AI: whether they can tell that they’re interacting with a robot or not. But the term was first coined by John McCarthy in 1956 as the science and engineering of making intelligent machines. And we’ve pointed out a couple of different types of AI and how they can be used. Machine learning is something that just tries things over and over and over again until it finds which ways end up eventually working. So kind of the easiest way to think about it is programming something to try and guess a password, and it starts just guessing every combination of letters until it gets to what the password is. That’s a very, very basic description of what machine learning is.

Katie Vernoy 10:27
Sure.

Curt Widhalm 10:29
There are other things like supervised machine learning that are a little bit more hands-on by the coders behind the scenes, that say, look for certain factors. I think part of the bigger concern is around things like unsupervised machine learning, and this is algorithms that end up developing their own rules and start maybe with a data set that could be biased in the first place, and we’ll get into this more in this episode. But this is stuff that starts to make its own rules. There’s also things like artificial neural networks and deep learning, which is based on structures of how the human brain operates. There’s natural language processing models. So, this is not an episode about all of these kinds of things. You’re not going to get questions in continuing education about what each of these things do. Where we are steering this is that AI already has applications in mental health care, and this is stuff that Dr. Caldwell was talking about and has greatly improved in the last decade. And makes me think that it’s not done developing yet. It will continue to develop, and we’re already, as a field, kind of scrambling to catch up and, dare I say, embrace how AI can really be used. It makes a lot more logical sense for how AI can be used in things like physical health care. That going through patient data records, looking for certain factors that might show up in blood tests, that might be pre-screenings for cancer, that makes logical sense: we input data into a supervised machine learning sort of process that says, look for these factors, and people who show even kind of slightly elevated levels of these we might put into a protocol screening for cancer more frequently. Those things make sense when it comes to hard science data. You know, all of us who got made fun of for being social scientists, where feelings affected science back when we were in college, where these kinds of things point to, yeah, hard science makes sense here, but there are applications in mental health care. And a lot of where we started with the research in this episode is from a 2024 article called “Artificial intelligence in positive mental health: a narrative review”. This is by Thakkar, Gupta and De Sousa in Frontiers in Digital Health. They talk about how there are some different AI applications in mental healthcare, and the first one that they point to is awareness. And through using some of these AI models, we get more awareness out of mental health in general, innovative ways that it can be delivered, and some of the algorithms that lead to being able to identify even just some of the mental health factors that come up. So take, for example, things like the PHQ-9 that might be used by a physical health doctor, that would show, okay, people’s scores tend to be trending up as they visit over time. These might be people that we tend to screen for. If it seems like these are things that people who pay attention to scores over time would look at anyway and detect, then, yeah, that’s what these kinds of models end up doing. It brings more awareness to pinpointing people who are more at risk for some mental health stuff. This helps to deliver more personalized support and knowledge dissemination. Hey, you’re showing some trending signs more towards depression. Let’s talk more about depression in our visits. That helps to bring more awareness to the providers, and not just relying on every healthcare professional actually paying attention to all of the minor signs that somebody might be bringing in.
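To make the “awareness” idea concrete, here is a minimal sketch of what trend-based flagging of PHQ-9 scores could look like. It is purely illustrative and is not drawn from the episode or from the Thakkar, Gupta and De Sousa article; the function names, the slope threshold, and the cutoff of 10 (a commonly used marker for at least moderate symptoms on the 0-27 PHQ-9 scale) are assumptions chosen for the example.

```python
# Hypothetical illustration only: flag patients whose PHQ-9 scores trend upward
# across visits, or are already at/above a moderate-symptom cutoff, so a
# provider can follow up. Thresholds and names are assumptions, not a standard.
from statistics import mean

def phq9_slope(scores):
    """Least-squares slope of PHQ-9 scores across successive visits."""
    n = len(scores)
    if n < 2:
        return 0.0
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(scores)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, scores))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

def flag_for_followup(patient_scores, slope_threshold=1.0, score_threshold=10):
    """Return patient IDs whose scores are trending up or already moderate-plus."""
    flagged = []
    for patient_id, scores in patient_scores.items():
        if phq9_slope(scores) >= slope_threshold or scores[-1] >= score_threshold:
            flagged.append(patient_id)
    return flagged

if __name__ == "__main__":
    visits = {
        "patient_a": [3, 4, 5, 8],   # rising trend -> flagged
        "patient_b": [6, 5, 4, 3],   # improving -> not flagged
        "patient_c": [12, 11, 13],   # already moderate -> flagged
    }
    print(flag_for_followup(visits))  # ['patient_a', 'patient_c']
```

A real system would train and validate on much richer data than a single questionnaire; the point here is only that the flag mirrors what an attentive provider would notice by tracking scores over time.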

Katie Vernoy 14:40
So awareness sounds like it could be a combination of actual testing or diagnostics and potentially just flagging behaviors or flagging tests for further you know, kind of assessment by the provider. Is that what you’re saying?

Curt Widhalm 14:59
Yes.

Katie Vernoy 15:00
Okay, and so when I think about that, there’s a huge amount of utility, and I wonder about the challenges that might come into play there. Do we want to talk about challenges now, or do we want to…

Curt Widhalm 15:15
We’re going to save, we’re going to save the challenges for a little bit later in the episode.

Katie Vernoy 15:18
So, to summarize the awareness application: it’s potentially general awareness, or it could even be levels of diagnostics, or even, like, blood work.

Curt Widhalm 15:31
Exactly, yes.

Katie Vernoy 15:32
Which is interesting. I’ll put a link to an article I found in the show notes about precision psychiatry, because they’re actually finding biomarkers in fMRIs and different things that may guide some treatment in the same way that medical doctors are getting this feedback around depression, anxiety, that kind of stuff. So that’s beyond our scope of practice, but really interesting, because if we’ve got some awareness bots, diagnostic bots, that can really use a lot of information across the medical record, we could get some really interesting information. And so could, you know, kind of, individuals who are seeking treatment, or might want to seek treatment, once they find out that some of these things are present.

Curt Widhalm 16:15
The next thing that they talk about is support, and this is incorporated a lot more into mobile sort of devices, sorts of things; whether it be things like specific apps that people engage with regularly, but also in the way that AI tends to scrape things for all of the ways that you interact with things like your phone. For example, look at the way that targeted ads already work on things like your Facebook profile. That mine right now tend to skew very heavily towards here’s the suicide prevention hotline number. And are you depressed? Now, it might be saying things about me, but a lot of my searches lately have been around doing updates for a future, another 988, episode and…

Katie Vernoy 17:07
Yeah.

Curt Widhalm 17:08
…I work with a lot of clients who tend to run very high risk for suicidality. So, it makes sense to me that a lot of the content that I’m engaging with and that Facebook seems to be scraping my Google searches for, to target some things that are specifically for support. These things are already in place. It just so happens that there are already parameters built into things like Google and Facebook to flag and support: hey, you’re doing okay? You’re depressed? And I’m like, thanks, Facebook, for suggesting that I’m depressed. But these kinds of tracking things can even be used for support. Like you mentioned, Duolingo gives the little updates every single day that say, keep your streak alive, or I’m going to come and beat down your door so that way you can learn more Spanish. But it helps to enhance adherence too, by providing support, especially with some of the apps that are more specifically designed. So, some of the other ways that this shows up is in personal sensing and the ways that we might wear devices that indicate how our mood might fluctuate through the day. Or people who have smartwatches, where many of the even, you know, lower-cost ones end up having things that measure your stress level throughout the day. And I don’t know if you ever go through and look at, oh, my stress was really high during that session.

Katie Vernoy 18:43
I don’t look at that level of detail, but yeah, I do notice that on days when I’m feeling a little less myself, my stress level or my preparation-for-the-day level are off: stress high, low resources for the day.

Curt Widhalm 18:58
So, if you are able to monitor your sleep patterns, the effect of stress throughout the day, you’re able to more closely look at very personalized factors that AI points to as having effects on you throughout your day. There’s a ton of wearables out there that are also in development that are going to be used to not only track this, but also treat, different kinds of ways of helping regulate. I can imagine things like a meditation app: oh, put on these gloves, and it’ll, you know, vibrate your hands in certain ways, or something like that that helps to calm people’s stress down. Now, what you’re talking about, what Dr Caldwell had mentioned, is intervention.

… 19:48
(Advertisement Break)

Katie Vernoy 19:51
So with support, it sounds like we’re looking at the stuff that’s kind of on the periphery, potentially, you know, guiding your searches, or responding to your searches, getting you some resources, information, maybe some psychoeducation, or tapping into wearables or other types of interactive devices to give you that mental wellness. For example, being able to adjust your mood or even kind of assess your heart rate or different things so that you can calm yourself down in real time, versus getting to a place of a panic attack or something that gets worse. So supporting you in your goal for mental wellness through psychoeducation and a little bit of, you know, biofeedback, for example.

Curt Widhalm 20:41
Yes, exactly. Picking up where Dr Caldwell had been talking about is also the intervention aspects, and that’s a lot of what that episode had focused on is the delivery of therapy. But with some of this data, AI can also detect ahead of time, certain features that might indicate a relapse, it might indicate somebody is moving out of a prodromal phase into an active phase of hallucinations. I could imagine AI companies being able to talk specifically to bipolar patients that say you’d be better off taking medication the whole time, but we understand that you might not want to do that. So how about you sign up for our system, and we can start to flag six days before your next episode that you need to start taking things into care. So, you know, this is where we’ll point later, to the ways that capitalism and corporations can mess all of this stuff up. But even for people who are actively managing bipolar disorder through medication, would love to know ahead of time, I’m more at risk for something to happen. If you could get a warning a couple of days that, hey, if I put interventions into place now, then I could either take off the severity of the next episode, or I could miss it altogether. I think would be a billion dollar idea. So researchers go out and make a billion dollars on that.

Katie Vernoy 22:15
This is where I think we’ll end up spending some more time later. But there’s liability here, there’s the medical device, all the different routes that Ben was talking about. And I think this is the biggest part that folks in our profession seem to have concern that this is replacing the work that we’re doing by providing AI therapy, potentially doing diagnostics, potentially being their coach, which I think is, you know, a way that a lot of therapists go. They also have coaching clients, and so I think this is where people start getting nervous that this technology is there, and we’ll look at it, but potentially may be as good or better than therapists unfortunately.

Curt Widhalm 23:04
I’m not there yet, at least as far as this episode goes. But Katie and I both live in Southern California, and occasionally we get warnings from earthquake bots that say, hey, an earthquake is about to happen. And it gives us anywhere from two to 60 seconds to be able to brace for everything around us starting to shake. I don’t know, have you gotten any of those warnings yet?

Katie Vernoy 23:28
I did. There was a warning that came, and no earthquake was felt. I think it was a little bit too far from us.

Curt Widhalm 23:36
I had my first one recently, and it gave about a 30 second warning. But I have to imagine, there’s stuff out there for things like seizures to do the same kinds of warnings to be able to get to safety. So talking about the ways that these things can intervene, as great as we are as therapists, to be able to say, hey, here’s your warning signs for seizures, people with seizures would probably greatly appreciate, hey, I need to get to safety like right now. So so moving this more specifically to the focus of this episode is for all of the therapists that are worried about this taking our jobs, we have to take a step back and look at not from our own individual survival, but in particular people who might not be seeking out mental health in the first place. This might be places that can’t afford or can’t access therapy, that might be in particular places that don’t have education about positive mental health, that can have some of this information be provided on a more regular basis. Things that might be hey, here’s your daily reminder to de-stress by taking five minutes to focus on the area around you. Or here’s your daily reminder to reach out to a friend that you know, makes you feel good. I’m using really, really rudimentary examples here.

Katie Vernoy 25:04
Sure.

Curt Widhalm 25:05
But if these are things that are daily practices that come out of CBT, that look at even some of the psychosocial aspects that help to prevent some kinds of depression, that’s good. That’s what we want as far as social justice-oriented work, when it comes to providing mental health interventions to people that we might not normally be able to reach, might not normally reach out for services, and might have stigma about reaching out to services in the first place. So even if they’re less than perfect about it: I do have an app that gives me a reminder every day to, hey, do five minutes of stretches, because I signed up for an app that makes me stretch because I sit and talk on podcasts and talk with clients all day long. And even though I don’t do it every day, I’m better off for doing it some days rather than no days, because left to my own devices, I wouldn’t do this. So when we apply this to things like mental health, particularly in communities that might not normally engage with us, even if it’s less than perfect, if it prevents people from getting into more expensive treatments, this stuff has the benefit of working and being a good investment in overall public mental health.

Katie Vernoy 26:28
I think one of the challenges that comes into my mind when I try to think through this, and my hope is that we’re addressing these things. But the first one is my understanding that a lot of these intervention or AI chat bots, or whatever we’re wanting to call them, are based in the medical model, which has inherent bias in it. It also has some problematic ways that it has used human behavior. It was designed by humans, so it’s going to have just the bias of the folks who created it. And when we’re looking through a social justice lens, folks who are addressed from the framework of a medical model can be harmed. And so that’s the first one. The second one is what we’ve talked about, where there’s a lot of, whether it’s psychoeducation or AI therapy, that’s based in CBT, and while well researched and an evidence-based practice, there is research that shows that it’s not always culturally humble or culturally appropriate. And so in meeting groups of folks who don’t typically engage with therapy, those things have to be addressed, because that is basically providing a robot version of a therapy that is already not accessible for all folks.

Curt Widhalm 27:58
Katie, I want to take this moment to thank you for giving us the great pivot into the major components of why this episode is so important, and in some ways, a summary of seven years of our work on this podcast, as far as bringing up that those biases already exist in us as treatment providers. And while there is concern about AI not being this great panacea that is 100% accurate, even if it comes across as 80-85% accurate across most things, that might be better than most of us as individual practitioners in the limited capabilities that we have in interacting with clients in the first place. How many episodes do we start by talking to our guests as far as, what do most therapists get wrong about this? Which is already just a question that fully addresses: there’s bias, or there’s undertraining, that each individual therapist already brings into this. Now, hopefully, most of our modern therapists are not making most of those same mistakes, or especially after listening to our episodes, being able to take: all right, where’s bias in our approach to working with either a specific population or people from a specific background? At one point this episode was going to be just: Where’s bias in therapy in the 2020s? And it was a little bit broader episode. And some of the articles that I had researched around that looked not just at the traditional biases that we bring into our work, at things like confirmation bias, where it’s: I’m a specialist who works in neurodivergence, therefore I’m more likely to overdiagnose people with things like autism, or I’m more likely to overdiagnose people with ADHD without ruling out other things. That’s a confirmation bias kind of example.

Katie Vernoy 29:52
Absolutely. Yeah.

Curt Widhalm 29:54
That’s not where our research is showing bias showing up in therapy right now. Our bias in therapy right now is, maybe there’s another episode later, but it’s around personal feelings about polyamorous relationships, or personal feelings around consensual non monogamy. And as I was looking at those episodes, or as I was looking at those research articles, I was thinking, we already have episodes that address this. We’ve been doing our job on this so that ended up steering us more towards the episode that we’re having here. And to your point, some of these biases already exist in our research.

Katie Vernoy 30:33
That makes me think about the interview that we had with Dr Sharon Malone, and she was talking about the research that was around taking hormones in perimenopause and menopause and being able to sort through what had happened there, which was it was a biased sample. It was read and talked about publicly in a way that really biased the whole field. It biased patients. It biased, you know, doctors, those types of things. And so there was, like, 30 years, a whole generation of women who didn’t get hormone replacement therapy because of a study that was misrepresented and also started from a biased place. So, I know that I understand what you’re saying, Curt, I understand that there are, there is bias in mental health, medical health studies, those types of things, and that’s going to flavor everything. I guess. What I’m trying to get to is, how do we what’s our responsibility as clinicians if we’re going to be engaging with this, with this stuff, and what do we see as safeguards that are in place to try to address bias, as these language learning models or machine learning models or whatever it is those things are being created? Because I get concerned that if we’re looking at the monetization element, folks are going to move forward quickly with something that has some result. But my hope is that the result is not just for the shareholders, that there’s actually good results for the consumers.

Curt Widhalm 32:13
I think the first step is we have to really accept just how bad we might be at doing our jobs in the first place.

Katie Vernoy 32:27
I want to say that again: we need to accept how bad we are at our jobs, or we need to accept that we are bad at our jobs.

Curt Widhalm 32:35
We need to accept that we are not as good at doing what we think we are in the first place. So the first thing is addressing our own bias about how good we are. And as an example of this, and I realize looking still at kind of the very large ecosystems and the research that goes into this, but there’s a 2023 article in Family Medicine and Community Health titled “Identifying depression and its determinants upon initiating treatment: ChatGPT versus primary care physicians”

Katie Vernoy 33:07
Bom, bom, bom!

Curt Widhalm 33:08
And this is by Levkovich and Elyoseph. And they looked at how effective primary care physicians were at identifying depression versus ChatGPT being given the same information. And primary care physicians were only correct in identifying it in 42% of cases; for mild depression, ChatGPT 3.5 was 95% accurate, and ChatGPT 4 was 97.5% accurate.

Katie Vernoy 33:44
Wow. Okay.

Curt Widhalm 33:45
So, this is just on a diagnostic level that AI is a potentially better tool for being able to provide a diagnosis, but looking also at where our biases are, not only were the physicians less likely to identify it in the first place, they were also more likely to just prescribe medication, as opposed to some other combination of things, such as medication and recommendations for therapy.

Katie Vernoy 34:17
So to do my due diligence and some critical thinking, it sounds like this study shows that ChatGPT can identify depression, or some of, you know, the diagnostics, more so than the doctors. The question I have is, how is this defined? How is depression defined? What is the formulation of what this is? Where did they get: your answer is correct, your answer is incorrect?

Curt Widhalm 34:49
There’s a questionnaire that’s credited to Dumesnil et al, and seems to be the basis of a lot of research that looks at giving vignettes to general practitioners, psychiatrists, therapists with clients who present with certain complaints. And it looks like the complaints tend to be around lack of sleep, difficulty in being able to initiate new activities, kind of some of the classic features of depression. I did not get deep enough in the research to say they tie it to ICD 9, ICD 10, DSM, but this research does seem to be, or this questionnaire does seem to be tied to enough different research studies that I’m going to do the lazy thing and just say it seems to be robust enough that they keep coming back to these particular vignettes.

Katie Vernoy 34:49
Sure, I’ll return to the to the previous concern that if we’re following the DSM or the the medical model, it holds the same limitations that we’ve said over and over again about DSM and the medical model.

Curt Widhalm 35:58
Absolutely.

Katie Vernoy 35:59
But, but the 97% accuracy versus 42%: what are they suggesting are the reasons that the doctors missed these things so dramatically? I mean, it seems like if they’re only changing gender and other demographic information and severity, I mean, we’re looking at some serious doctor bias, if I’m understanding that correctly.

Curt Widhalm 36:25
Before we get into extrapolating the whys, I think that there’s some other interesting stuff that comes out of this. They find that primary care physicians prescribe antidepressants significantly less often to women than to men.

Katie Vernoy 36:40
Ooh, interesting.

Curt Widhalm 36:41
They showed that ChatGPT showed no significant differences in the therapeutic approach when it came to the presenting gender of the clients. They’d also found that the physicians commonly recommended antidepressant medication without psychotherapy to blue-collar workers and a combination of antidepressant drugs and psychotherapy to white-collar workers, where ChatGPT did not show any significant differences in its recommendations for those. And out of the medications that were recommended, physicians recommended a combination of antidepressants and anxiolytics in 67.4% of the cases, exclusive use of antidepressants in 17.7% of cases, and exclusive use of anxiolytic drugs in 14% of cases. ChatGPT showed some pretty significant variations, most significantly exclusive antidepressant treatment in 68-74% of cases, depending on which model of ChatGPT was being used. To kind of summarize and go back to your question, some of the why is that we have biases as people, as far as how people show up: if people are presenting as female, if people are presenting as being from a lower SES class. Some of the assumptions that are being made are that primary care physicians are going to lean towards: you’re not going to make the investment in therapy as a blue-collar worker, so you need to be treated just with drugs. The gender biases might be dismissive, as far as: female-presenting people are just more emotional, and your problems aren’t as big. So…

Katie Vernoy 38:25
Yeah.

Curt Widhalm 38:26
There’s some pretty damning evidence here that we make mistakes. I acknowledge that this is an article about primary care physicians and not about therapists in general, but it does illustrate that everybody has the potential for biases, and some of the biases that we’re concerned about showing up are going to be around things like diagnostics. We’ve talked not only on our episodes before, but it’s a lot in our modern therapist community. We see this all over social media, but it comes to things like the under diagnosis of things like autism in people who were assigned female at birth; that I remember hearing presentations when I was very much in grad school and immediately afterwards, so we’re talking 2005 to 2010, where the diagnostic rates were five times higher in people assigned male at birth, and some of the descriptions being floated at the time were around, there’s just certain neurotransmitters that show up five times higher in people who are assigned male at birth, and that explains the diagnostic difference, without looking at the potential that people assigned female at birth, can also show some of the diagnostic indicators, just in a different way.

Katie Vernoy 39:53
And when we’re looking at this comparison, ChatGPT or some sort of AI versus a clinician. I know that there are going to be similar concerns, because we’re going to go from whatever the DSM says or whatever the presentations that are loaded into the chat bot, for example, or that are loaded that are trained into the clinician, we’re going to have the same things. I wonder if having AI diagnostics or AI interventionists or whatever will be quicker to address new data as it comes in, or will be slower, because I can see both things happening where clinicians who are on the front line who are specializing, for example, in autism and are able to identify beyond the short little DSM 5 TR diagnostics, what can be showing up there and flowing with the new information, whereas AI might pick it up quickly, I don’t know, or they may be stuck in what’s the most recent edition of our diagnostic and statistical manual. I think those things are worth paying attention to. Now I am hearing you loud and clear that both diagnosticians can have the same bias, and AI is more likely to avoid the gender or SES or those types of biases because they’re going to look just at the data versus at the human with a human’s eye, right? So I think that there’s some benefit there, and I know that there’s also nuance that AI may miss, and so I think that there’s, I don’t think it’s this is a deal breaker, but there’s things to consider when, when it comes time for us to identify which diagnosis bot are we sending people to as clinicians, we’re going to need to pay attention to where are they getting the data to define diagnosis, because if it’s based on a very old model, it’s going to miss the same folks that we’ve been missing already.

… 42:07
(Advertisement Break)

Curt Widhalm 42:10
And that’s a big part of what is being put out in the research, as far as where the bias is already coming from. There’s an article titled “Artificial intelligence in mental health and the biases of language based models” by Straw and Callison-Burch. This is from 2020, and they get into evaluating kind of that natural language processing model that’s used in psychiatry. In this paper, they reviewed 52 papers using a couple of these natural language processing models. And while there’s biases shown in some of the original research, when it was redone through their AI bots, none of the AI bots showed the same biases. So, speaking to your: is it going to be slow? Is it going to be fast? Things tend to be trending fast. Now, we look at, you brought up, you know, Twitter bots becoming super racist. We had the episode about the NEDA Tessa bot that ended up going off and making really unhealthy recommendations to people interacting with it, with eating disorders. So things tend to trend towards fast as far as what we can see, and that’s part of where the recommendations at the end of the episode are going to be around some of the guardrails around this.

Katie Vernoy 43:34
Yeah.

Curt Widhalm 43:35
But before we move on to some of that, it’s looking at some of the historical biases as far as representation on the research in showing up in the research in the first place. You know, a lot of research tends to look at either college students as a convenient sample to be able to test out interventions on it, those are going to tend to trend more towards middle or upper SES type students, not necessarily racially diverse. A lot more convenient samples are going to come out of people who grow up in or accessible to more urbanized areas. Rural samples are going to be not represented as well. So any of the traditional historical arguments about representation are still going to show up there, but the more people that end up interacting with these things are going to provide a broader base for that research to continuously be updated. Where some of these biases do already show up and are showing the same kinds of problems that we already have are around things that are cultural expressions. So there is research that comes out of both the medical field as well as our own mental health field that show that some cultural things, particularly in clients who are presenting with darker skin tones, tend to be misdiagnosed as things like Oppositional Defiant Disorder, when it’s really more of a cultural expression. Again, in accepting where we as mental health clinicians, as a field already are, we tend to already over diagnose things like ODD in people who come from non white backgrounds in the first place. So it’s doing the same things that we’re already doing. It’s not perfect yet, but it’s doing things better and more consistently overall, and it’s getting better faster than we are as a field.

Katie Vernoy 45:35
The challenge, I think, for some therapists, may be, and maybe I’m making some assumptions here, that the folks who are creating these AI assessment, interventionists, all these different types of AI that’s going to enter into our field, or already are in our field, actually. The folks who are creating it may not take the care or may not have the level of sophistication around what we’re talking about, and so these things will run rampant. That being said, my hope is that their boards, or their panels, their expert panels, those things are bringing these things forward, and there’s actual thoughtful discourse and structures and processes around assessing for these biases, addressing systemic oppression within the mental health field so that they can make these changes. And it’s hard, because there’s such a monetization of this that therapists are very, at least from what I’ve heard, therapists are very suspicious of, you know, it’s like the evil tech bros or whatever, however they’re being discussed. I think that there is some real concern that makes sense. If people are entering this field and throwing tech at a problem they don’t fully understand there’s potential to make things a lot worse. But I also think that there’s theoretically mechanisms in place to have more oversight, more assessment, more development, moving progressing forward with newer ideas that the typical private practitioner therapist in solo practice is not doing. They’re not assessing their own work. They’re not assessing their own biases. I mean, we can, we can go either way and say there’s some huge problems when things aren’t looked at. And theoretically, depending on how an AI platform is being regulated, there could be some actually really good assessment and improvement that’s not present for human assessors and interventionists.

Curt Widhalm 48:06
Going back to the Thakkar, Gupta and De Sousa article, the very first recommendation they make for improving AI applications in mental healthcare is to enhance the training and validation of AI algorithms with diverse datasets. So you’re not the first one to make this recommendation.

Katie Vernoy 48:24
But we love when people affirm our record, right?

Curt Widhalm 48:27
Right, yes, and they even go so far as to point out that these data sets need to consider intersectionality factors. So I think that, at least within the AI ethicists’ models, there’s a lot of agreement, and it is very up to date on addressing some of the biases that are already there. The second recommendation that they make is that the AI systems should even consider going open source, so that people who might have more input are able to point out, here’s spots that your system is not considering, which allows for an even more robust way to address biases and vulnerabilities.

Katie Vernoy 49:16
I think that one may be tough, because I think a lot of folks are going to protect their algorithms and their pocketbooks.

Curt Widhalm 49:25
Looking at the integration of human oversight into this is also involving us as the mental health experts who use this. And this is ultimately the point that I find in talking about updating ethics codes being the most responsible way of doing this. We as a field, we as the individuals who are still working with clients that are not being replaced by the chat bots, that are still interacting with people, that might find benefit out of utilizing some of these tools. I still have a responsibility to recognize that if the chat bots catch and provide interventions or provide good therapy in the way that Dr Caldwell talked about with 85, 86, 90, 91% of the clients, and it gets to the end of the treatments with those bots, and it says you need to go see a human therapist that we recognize. Okay, here’s how 9% of you, or point 9% of you, let’s, let’s assume that this continues to get better and better, that we have the responsibilities, as we already do in our work, to point out all right, and here’s how this stuff doesn’t doesn’t apply to you. Here’s how we can further personalize treatments and interventions to you that the AI chat bots due to whatever intersectionality factors, whatever personal choice factors that you as a client have coming in, we recognize that there’s still limitations and still bias that comes out of this stuff, and here’s how we can tweak this for you. So it’s not just based on the data sets that are there. The metaphor that I’m coming up with in my mind is this is more like pilots who are flying almost fully autonomous aircraft that are there to catch the errors that end up coming out, and those are the ones that we trust for those particular situations to make sure that the plane gets landed safely. But a lot of us sure rely on fully autonomous flying aircrafts to be able to take off, fly and land and do a lot of things that we’re glad that the pilot and the co pilot don’t have to sit up in the cockpit and make a million adjustments during every single flight.

Katie Vernoy 51:44
It’s an interesting metaphor. I like the idea of having the therapist sit back a bit and be able to rely on other supports, technology to support clients while they observe and supervise and make sure that folks are getting what they need. It also sounds like a very different job, and I won’t reiterate all of what Ben said in the episode that we have with him, and I think it needs to be discussed what the job becomes when we may be only evaluating pre-assessed, pre-skilled clients for additional support. To me, I think about the benefit of having a client who can access all of the lessons that they need, so they can start learning tools and skills as fast as they can, and being able to support them at some point to, like you said, make them tailor them to their own experience. Make sure that they’re, they’re getting what they need, and that they’re that there’s not something that the bots missing, right? Because I think some of those lessons, while enjoyable at times, sometimes get monotonous, right when we’re teaching the same skills to the clients over and over and over and over again. This is why a lot of therapists become coaches, so they can create a course and just teach it once and let people purchase the course later. I don’t know that. I actually have some misgivings here. I think that the responsibility does lie with us to choose the right tools to engage in the process, to make the tools better if we’re going to be using them, and to be available to clients that have higher level needs. This is the job that we’re aiming towards. And I don’t know, I don’t know if the models of commerce are following that. People are talking about, I want to do X, Y and Z, that theoretically could be taken up by these, these folks, these folks that they’re wanting to do coaching that really is going to be by and large, I think, replaced by something on your phone.

Curt Widhalm 52:15
I think that you are bringing up a huge point about the nature of our job, potentially changing. This is particularly going to affect the cohorts that are in grad school now and are going to be coming into grad school, because the educational standards to learn everything, and we’ll link to a lot of past episodes in our show notes for this, but the demands placed on grad students today, if the workplace that they’re going to end up in is fundamentally different from what they’re being taught, it takes so long to change educational standards, for accreditation, for legislation…

Katie Vernoy 55:03
Sure.

Curt Widhalm 55:03
…that in some ways that’s going to potentially even speed up the way that AI ends up taking over our field, because if it’s taking even longer for therapists to get better at addressing bias and delivering 95 and 97% accurate healthcare. We’re just going to be continuing to train people in old models for jobs that look incredibly different, and the jobs have looked incredibly different even since you and I were in grad school. How many times have we talked about, okay, now you’re graduating, you’re out in the field, now you get to learn how to be a therapist, because you’re actually interacting with the systems around you. So I don’t see the quickness of changing educational responsibilities to address this stuff anytime soon. So AI tech that’s going to step in, and it’s going to do this stuff, and it’s doing a pretty good job for a lot of things. The bias that I acknowledged at the beginning of the episode, my opinions on this changed a lot in the research that I did to look at this. This is quickly becoming something that is much better than many of us can even fathom to deliver, and for the people who are coming in and having to, you know, be able to identify some of those, yeah, it’s going to be a learning curve to be able to do that. The data sets are going to update faster than research can get published. The algorithms, the learning models, will take in more and more diverse input from people from a variety of backgrounds, particularly from what some of the research points out to lower SES backgrounds, people who most of their engagement is through their mobile devices, where they’re already able to be accessed, potentially 24 hours a day, potentially with things like smartwatches and other wearables that gives better feedback than a once a week or less interaction with a mental health professional, I think we’re kind of, you know, echoing Dr. Caldwell’s points. We’re at a point where, if you really want to make your practice be in the way that you were trained. It’s going to be embracing the things that AI can’t do yet. And you know, this is work with relationships, work with you know, think of the children, you know, do the things where the ethics aren’t quite there as far as having populations that can just do CBT more regularly, but it’s also not fighting that even the delivery of large scale mental health kinds of tips that improves things. As a society, it will address some things that help it make it not necessarily be a rising tide raises all boats. But if people are able to engage in more positive mental health stuff more consistently, if it does some things better, that’s not a bad thing, and it allows for us to focus on maybe some more of the systemic pressures and adopting more of a more global mental health approach from what we do to be able to bring some of those social justice aspects in that do have things. I don’t think AI, you know, CBT is going to fix poverty, maybe things where we start to be able to focus, you know, some of the environmental factors that our knowledge can benefit, some of the aspects that technology can’t. So and coming from me, I’m surprised that these words coming out of my mouth, I’m convinced that there’s the potential for a lot more good than bad out of this and our choices as a profession is to either embrace this and the changing roles that we have, or to kind of get left behind and continue to be yelling at the cloud.

Katie Vernoy 57:17
Well, it’s good to hear that you landed kind of where I had been landing, where..

Curt Widhalm 59:16
I let, I let the research inform me.

Katie Vernoy 59:19
Yes. Excellent, excellent, excellent. For folks who are starting out or are deep into their clinician journey, I think that there are some elements to this that are going to be pretty hard to swallow, pretty hard to stomach. I think for you and I, we’re pretty far along, and maybe don’t always want to be doing one-to-one work. Don’t always want to be on the front lines of every single decision for our clients. And I think it can be a little bit easier for us to take this in. And maybe I’m overstating this, but to me, I’m excited to have some new tools to give to my clients, so I don’t have to be the end-all be-all. And I think that my hope is that there are avenues for folks like us who want to do advocacy or oversight or have some sort of impact on the future of the profession, that we will be able to get involved and try to address some of these things, so that oftentimes, whether it’s academia, the medical model, however we want to describe it, folks with lived experiences, therapists, will be at the table, I guess is what I’m trying to say. That we can actually say, hey, wait a second. Yes, CBT works in this regard. But there’s these other things that, in my practice, this is how I see things being slightly adjusted, being able to be part of the conversation. Because I think if we’re not, if we refuse to be part of the conversation, we don’t stop progress. We stop it being good.

Curt Widhalm 1:01:02
Well said. You can find our show notes over at mtsgpodcast.com. We’ll include a link to our references over there; our not-AI backend work will include a link to a ton of past episodes that kind of address some of these things. And follow us on our social media. Join our Facebook group, the Modern Therapists Group, to continue on with these conversations, and until next time, I’m Curt Widhalm with Katie Vernoy.

… 1:01:30
(Advertisement Break)

Katie Vernoy 1:01:31
Just a quick reminder: if you’d like one unit of continuing education for listening to this episode, go to moderntherapistcommunity.com, purchase this course and pass the post-test. A CE certificate will appear in your profile once you’ve successfully completed the steps.

Curt Widhalm 1:01:46
Once again, that’s moderntherapistcommunity.com.

Announcer 1:01:50
Thank you for listening to the Modern Therapist’s Survival Guide. Learn more about who we are and what we do at mtsgpodcast.com. You can also join us on Facebook and Twitter and please don’t forget to subscribe so you don’t miss any of our episodes.

 
