Banner ID: A robot sitting on a bench, working with something that looks like a cross between a book and a tablet, with a photo of Ben Caldwell to one side and a text overlay

AI Therapy is Already Here: An interview with Dr. Ben Caldwell

Curt and Katie interview Dr. Ben Caldwell about the state of Artificial Intelligence in therapy. We look at the “AI Therapists” that are already working, as well as how they are being regulated (or not). We talk about how AI therapy chatbots are being received and the likely next steps in innovation. We also explore what “human therapists” can do to protect their practices and address the influx of low-cost, always-available AI therapy.

Transcript

Click here to scroll to the podcast transcript.

An Interview with Dr. Benjamin E. Caldwell, PsyD, LMFT

Photo ID: Dr. Ben Caldwell, LMFT

Benjamin E. Caldwell, PsyD, is a California Licensed Marriage and Family Therapist and Adjunct Faculty for California State University Northridge. He is the author of Basics of California Law for LMFTs, LPCCs, and LCSWs, and lead author of AAMFT’s Best Practices in the Online Practice of Couple and Family Therapy. He has published and presented around the country on issues related to ethics, technology, supervision, and professional development. His company, High Pass Education, provides exam prep and continuing education courses for mental health professionals.

In this podcast episode, we look at the latest developments in Artificial Intelligence in Mental Health

Our friend Dr. Ben Caldwell has been writing some articles on the current state of AI in therapy. We decided this information needed to come to the podcast, so we invited him back on the show.

What is the current state of AI in the therapy profession?

  • There are chatbots providing “therapy” or mental health support
  • Some apps are pursuing registration with the FDA as medical devices, while others are staying in the coaching space

Who is regulating AI therapy?

  • Licensing boards for “human therapists” may have no ability to regulate the use of the term “therapy” by apps, medical devices, or “AI therapists”
  • State legislators may be the avenue for regulation, but there may not be an appetite to do so
  • FDA can regulate apps that get registered as a medical device

Who wants AI therapy?

“I think every audience except human therapists like us is going to be very much in favor of the widespread ready availability of AI therapy.” – Dr. Ben Caldwell

  • Clients or patients will seek out AI therapy as a cost-effective and readily available option for mental health support; AI therapists also will not judge clients and will always remember what clients have said
  • Insurance providers will see AI therapy as a way to expand networks
  • Legislators will likely purchase AI therapy for state and county Medicaid services, as well as support its expansion to address mental health workforce shortages
  • Basically, everyone wants AI therapy except for human therapists

What are the concerns about AI therapy?

  • It is only approximating the relationship between therapist and client
  • An AI therapist doesn’t have morals, values, or ethics
  • The apps are working only from manualized treatments
  • It may be only psychoeducation, without the current ability to do deeper work

What can therapists do to protect their practices now that AI therapy is here?

“I think a lot of us are not prepared for it, and some of preparing for it really amounts to thinking through what kind of practice do I want to have, especially in a world where clients can access AI therapy for less than 10 bucks a session. How am I going to set myself apart in a way that clients will be interested in, take me up on and, frankly, pay me more money for?” – Dr. Ben Caldwell

  • Make sure to vet any AI services or applications that you use
  • Shift to services that AI therapy doesn’t provide (like diagnosis, or more niche services with children, families, and couples)
  • Move to overseeing AI as an adjunct to therapy (i.e., “prescribe” a particular chatbot or AI therapist and check in with clients periodically or when the client is in crisis)
  • Work with AI therapy companies to train the AI therapists

Resources for Modern Therapists mentioned in this Podcast Episode:

We’ve pulled together resources mentioned in this episode and put together some handy-dandy links. Please note that some of the links below may be affiliate links, so if you purchase after clicking below, we may get a little bit of cash in our pockets. We thank you in advance!

Ben’s articles:

AI therapy is about to make therapy a lot cheaper

An AI therapist can’t really do therapy. Many clients will choose it anyway

How human therapists can thrive in a world of AI-based therapy

Ben’s Websites:

BenCaldwell.com

HighPass.com

PsychotherapyNotes.com

Facebook: facebook.com/highpasseducation


Woebot Health™: woebothealth.com

Wysa: wysa.com


Relevant Episodes of MTSG Podcast:


Ben’s episodes:

Want to Fix Mental Health Workforce Shortages? Speed up the Licensing Boards: An interview with Dr. Ben Caldwell

Addressing Racism in Clinical Licensing Exams: An Interview with Ben Caldwell and Tony Rousmaniere

The Fight to Save Psychotherapy, An Interview with Benjamin Caldwell, Psy.D.

COVID-19 Legal and Ethical Updates, An Interview with Ben Caldwell, PsyD, LMFT

Noteworthy Documentation: An Interview with Dr. Ben Caldwell, LMFT

Waiving Goodbye to Telehealth Progress, An Interview with Dr. Ben Caldwell, LMFT

Other relevant episodes:

Is AI Smart for Your Therapy Practice? The ethics of artificial intelligence in therapy

Is AI Really Ready for Therapists? An interview with Dr. Maelisa McCaffrey

The Future Is Now: Chatbots are Replacing Mental Health Workers

What Actually is Therapy?


Who we are:

Picture of Curt Widhalm, LMFT, co-host of the Modern Therapist's Survival Guide podcast; a nice young man with a glorious beard.

Curt Widhalm, LMFT

Curt Widhalm is in private practice in the Los Angeles area. He is the cofounder of the Therapy Reimagined conference, an Adjunct Professor at Pepperdine University and CSUN, a former Subject Matter Expert for the California Board of Behavioral Sciences, former CFO of the California Association of Marriage and Family Therapists, and a loving husband and father. He is 1/2 great person, 1/2 provocateur, and 1/2 geek, in that order. He dabbles in the dark art of making “dad jokes” and usually has a half-empty cup of coffee somewhere nearby. Learn more at: http://www.curtwidhalm.com

Picture of Katie Vernoy, LMFT, co-host of the Modern Therapist's Survival Guide podcast

Katie Vernoy, LMFT

Katie Vernoy is a Licensed Marriage and Family Therapist, coach, and consultant supporting leaders, visionaries, executives, and helping professionals to create sustainable careers. Katie, with Curt, has developed workshops and a conference, Therapy Reimagined, to support therapists navigating through the modern challenges of this profession. Katie is also a former President of the California Association of Marriage and Family Therapists. In her spare time, Katie is secretly siphoning off Curt’s youthful energy, so that she can take over the world. Learn more at: http://www.katievernoy.com

A Quick Note:

Our opinions are our own. We are only speaking for ourselves – except when we speak for each other, or over each other. We’re working on it.

Our guests are also only speaking for themselves and have their own opinions. We aren’t trying to take their voice, and no one speaks for us either. Mostly because they don’t want to, but hey.

Stay in Touch with Curt, Katie, and the whole Therapy Reimagined #TherapyMovement:

Patreon

Buy Me A Coffee

Podcast Homepage

Therapy Reimagined Homepage

Facebook

Twitter

Instagram

YouTube

Consultation services with Curt Widhalm or Katie Vernoy:

The Fifty-Minute Hour

Connect with the Modern Therapist Community:

Our Facebook Group – The Modern Therapists Group

Modern Therapist’s Survival Guide Creative Credits:

Voice Over by DW McCann https://www.facebook.com/McCannDW/

Music by Crystal Grooms Mangano https://groomsymusic.com/

Transcript for this episode of the Modern Therapist’s Survival Guide podcast (Autogenerated):

Transcripts do not include advertisements, just a reference to the advertising break (as such, timing does not account for advertisements).

… 0:00
(Opening Advertisement)

Announcer 0:00
You’re listening to the Modern Therapist’s Survival Guide, where therapists live, breathe and practice as human beings. To support you as a whole person and a therapist, here are your hosts, Curt Widhalm and Katie Vernoy.

Curt Widhalm 0:12
Welcome back, modern therapists. This is the Modern Therapist’s Survival Guide. I’m Curt Widhalm with Katie Vernoy, and this is the podcast for therapists about the things going on in our profession, the things that affect the ways that we practice. And we are once again visiting artificial intelligence. And I don’t know, about a year and a half ago, we had done an episode about is AI coming for your job? And we are here to definitively tell you they already have. They are doing it well, and this is going to have some bigger implications for the way that we practice, the ways that things like managed care end up looking at it, all sorts of questions around, can they really call it that, that are so far undefined, but probably. And here to help us with that conversation today is our very good friend, Dr. Ben Caldwell.

Dr. Ben Caldwell 1:07
Thank you, Curt. Thank you Katie. Good to be with you both as always.

Katie Vernoy 1:09
So you’ve been on the show like a million times, but for folks who somehow haven’t heard who you are and what you’re putting out in the world, can you share that with them?

Dr. Ben Caldwell 1:17
Happily, I’m Ben Caldwell. I’m a licensed LMFT in California, and my company is High Pass Education. We do continuing ed stuff and exam prep.

Curt Widhalm 1:27
You have been writing some blogs lately about artificial intelligence. You’ve been on your usual lecture circuit. You’ve been talking about AI quite a bit lately, and as we start many of our episodes, as you’ve been putting this information out there, what are you finding that other mental health professionals are kind of getting wrong about AI right now?

Dr. Ben Caldwell 1:50
Well, I think it’s not so much that therapists are making some enormous mistake around AI. Most of the discussion that I hear among therapists about AI has to do with documentation, and that is a good, necessary conversation about, you know, do we need client consent, and how do these different documentation systems work in terms of the level of PHI that they use? But I’m talking more in my discussion about AI actually providing therapy services. And I think the overwhelming majority of clinicians that I know, that I’ve talked to about it, have no idea how far out of the station the proverbial train already is, the level at which AI therapists are already providing therapy, and the degree to which a lot of stakeholder interests are going to be aligned in making AI therapy widely available at low cost. Of course, that’s going to impact us. But when I start talking with therapists about it, most folks I know just aren’t aware of where the technology is right now.

Katie Vernoy 3:02
Well, and I think a lot of folks are also saying, well, this is wrong, and it’ll never replace therapists, and there’s a lot of denial, a lot of anger, and then there’s also, I think legitimately, some concerns about what this means, not just for our own businesses and our profession, but for clients. So in reading the article, you mentioned one of my favorite names in all of the world, which is Woebot.

Dr. Ben Caldwell 3:27
Yes.

Katie Vernoy 3:28
I love that name, although I wasn’t super excited about it when I first heard about it. I don’t know that I’m excited about it now, but Woebot, W-o-e-bot, is one of the first I heard about, like, years ago, that had started down this path, and was one that you’re saying now is actually doing potentially some decent work. What does AI therapy actually look like right now?

Dr. Ben Caldwell 3:52
So most AI therapy right now is in the form of chat bots, and Woebot is a great example. Woebot is a company that has sort of chosen to go down the regulatory path. And what that means is that right now, what Woebot does has careful guardrails around it, has a lot of supervision, and is in the process of getting registered with the FDA as a medical device. And that’s a really important distinction when we think about the regulatory framework. Medical devices are regulated at the federal level, unlike therapists, who are regulated at the state level. And so right now, what Woebot Health can do, with significant limitations, is be prescribed by a physician for what amounts to sort of supervised work in the treatment of postpartum depression. It’s a very limited, narrow set of uses, again, careful guardrails. What they are wanting to do, clearly, is work within the federal regulatory framework to get broader recognition as a medical device for additional uses, and that’s one path that these companies can pursue. And certainly, Woebot has come a very, very long way from its origins. It’s come a long way even in the past year or two. There is a great deal of existing research on Woebot and its work that seems to suggest, even in randomized, controlled trials, that Woebot is in some ways comparable to a human therapist. And just to be clear, from a philosophical standpoint, I don’t believe that the human connection between client and therapist can ever be fully replaced by technology. I think there is something important, something even kind of magical, in that connection that is by itself healing. But as we’ll discuss here, AI gets better and better all the time at approximating that kind of a relationship, and for a lot of clients and other stakeholders, I think that approximation is going to be close enough.

Curt Widhalm 6:15
I remember a blog article that you had written several years ago testing Woebot. This was probably 2017 or so, where just typing in, “Hey, I’m suicidal,” its responses were, “Oh no, that would make people sad.” And it has really transformed over the last seven years to be a lot more responsive. But I do hear in a lot of the discussions that I have around AI this point that you’re bringing up, that is, oh no, the therapeutic relationship can never be replaced. And now I’m thinking it’s just going to take some kind of MRI machine or something that just, like, triggers people’s mirror neurons, in addition to these things, that will totally replace us at some point in the future. But until we get there, are clients actually using these kinds of programs?

Dr. Ben Caldwell 7:07
Yes, and that’s one of the things that I think most therapists just aren’t aware of, because it’s not part of our day to day world. That article that you’re referencing, by the way, that’s on my blog site, psychotherapynotes.com, but I actually didn’t write that article. That was my friend Jeff Liebert who wrote that article. It’s fantastic. It’s well done, but it is, as you said, a few years back, and the technology has come a long way since then. Woebot says on its website that it has already provided more than 77 million minutes of support to clients, and they talk about how usage spikes between 8 and 11pm, those times when people might most acutely need a therapist, but have the most difficult time actually connecting with a human therapist. Because a lot of human therapists, for various good, understandable reasons, don’t want to work that late into the evening. So for clients, having Woebot available on demand is actually a pretty great resource. There’s another company, or another AI system, called Wysa, W-y-s-a, and they’re taking a different path in terms of the regulatory framework, where they are advertising their service as mental health support or mental health coaching. They’re being pretty careful to avoid using the terminology of psychotherapy in the US because of regulatory concerns. But they say that their site, their platform, whatever you want to call it, their AI therapist, has already had more than 500 million conversations and delivered more than 2 million sessions of CBT. That is a tremendous amount of therapeutic work that these chat bots have already done.

Katie Vernoy 8:58
What do those sessions look like? You said it’s chat bots. I know a year and a half ago, Curt mentioned it. There was Tessa, the eating disorder hotline chat bot that had a curriculum loaded in and was providing prompts and conversations based on that curriculum. I think it was probably CBT based, but I don’t know that for a fact. And when it was tested, when it was brought online, before the end of the day, it started telling folks who were calling an eating disorder hotline to diet and all of those things that we really don’t want them to do.

Dr. Ben Caldwell 9:37
Right.

Katie Vernoy 9:38
And so what’s changed? How is that, how is the work looking? What is the work looking like, and what really does this resource provide to clients?

Dr. Ben Caldwell 9:48
It’s a great question, and part of the difficulty of the conversation is that different chat bot systems work differently. If you look at something like Woebot, at least from how they have described their work publicly, there are some pretty careful guardrails in that system. It is heavily scripted, and it has very limited opportunity to be generative, to develop novel kinds of responses that would then run the risk of what you’re talking about, or like we saw in the news a few months ago. Google’s AI system, for a while, was telling people to put glue on their pizza, right? We don’t want AI therapists telling people that kind of thing. So there are really careful guardrails there. If you look at other companies developing AI systems, the level of guardrails that they are using, the level of scripting that they are using, as opposed to allowing their AI therapists to be generative, it’s going to vary from one company to another. So, it’s tough to treat this as a monolith, but certainly what guardrails are necessary and appropriate for AI therapy is a great question that we should be asking, and alongside that is the question, who’s responsible for putting those guardrails in place? Because if you take a company like Wysa, and this is not in any way a knock on the company, it’s just pointing out a difference in regulatory frameworks, they’re not, to my knowledge, going through that FDA federal device registration process. So, their accountability in terms of guardrails on providing good services really kind of comes from the market and whether people are willing to use it and feel bothered enough by bad outcomes to sue them for fraud or other things. And again, not a knock on the company at all. It’s not a statement about their quality. They’ve got what seem to be some pretty positive research findings so far. But if you look at Woebot, if there’s negative research that comes out, if there are bad things resulting from those chat conversations, well, then the FDA can step in as a regulator and say, we’re not going to move you forward in the process of device recognition. There’s not that same kind of regulator in place for these services that are not advertising themselves as therapy and not working toward that kind of registration, but are more in the consumer space of mental health support or coaching.

… 12:27
(Advertisement Break)

Curt Widhalm 12:30
So all of the state licensure boards are really proactive on this right now, and are taking all kinds of steps to protect our licenses and even just terms like therapy, correct?

Dr. Ben Caldwell 12:42
Unfortunately, no. But I think it’s important to hold in mind here that a lot of folks have questions about, is this my job? If you’re a licensing board for human therapists, do you even have any ability to regulate what a medical device can do, or what it can call its work? I don’t know. And I’m not sure that they know either. You can look at what happens at the state level in terms of title protection and practice protection, that you need to have a certain license in order to call yourself whatever that title is and offer this particular service under this label. But because medical devices are regulated at the federal and not the state level, I’m sure licensing boards, to whatever degree they’re actually looking at the issue, are kind of scratching their heads, wondering if they even have a mechanism. Like, let’s say they find an AI company that is out there doing something that the board thinks is inappropriate. Well, licensing boards tend to exist in this world of administrative law. They can issue licenses and they can revoke licenses. There’s not a ton else that they can do. So, for a medical device company that’s not trying to get or stay licensed as a therapist, a board might not be the regulatory avenue. State legislatures might be the regulatory avenue. They can be the ones who can change statute about, well, what can these kinds of services truly call themselves? But, and I know we’ll get there in a minute, I don’t know that state legislatures are going to want to do that, because the motivations of other policymakers, I think, are going to lean in the direction of making AI therapy widely available at low cost. So we wind up in a situation where I think nobody really knows whose job it is to police this stuff.

Katie Vernoy 14:44
Well, and to get to that next point, I don’t know that people are going to want to do that, because this is theoretically a way to address access, a way to address the cost of mental health services. And so my assumption, and I’d like you to talk more about it, is that people, not necessarily therapists, but people will want this to continue to move forward as a way to make mental health support more available.

Dr. Ben Caldwell 15:10
100% agree, and I think every audience except human therapists like us is going to be very much in favor of the widespread ready availability of AI therapy. And of course, everybody has an interest in making sure that it’s safe, making sure that it continues to improve, making sure that it doesn’t give people unproductive, harmful, or even dangerous guidance. But if you look at the different audiences; so regulators: regulators will want AI therapy to be widely available, because right now, not enough people can get therapy. We know there’s a tremendous amount of need around us that is not being met because there simply aren’t enough therapists. And regulators have not shown the appetite to take more drastic steps to create a larger population of licensed therapists. Things like making the requirements for licensure easier, offering provisional licenses to people who haven’t yet passed a clinical exam, or getting rid of a clinical exam entirely. You know, those are all steps that could immediately increase the population of licensees, but there’s not a great appetite among legislators to do that. So AI therapy gives them a way to help solve this access issue, and to do it at relatively low cost. Bear in mind that state governments are also big purchasers of mental health services through their state and county systems. So even at the state level, where in California, if they can make AI therapy available to Medi-Cal recipients, that’s actually going to save the state a ton of money, they’re going to be happy to make that change. If you look at insurance companies, they’re facing the same problems. This falls under the banner of network adequacy for insurers, but basically it comes down to: there are some insurers that keep getting fined because they don’t have enough therapists to provide adequate access to care for the people who have health insurance that ostensibly covers mental health care. If they can contract with one of these AI companies and use AI therapy, even as kind of an adjunctive support or as a stopgap until somebody can meet with a human therapist, and they can do it at really, really low cost, then they’re going to go back to their regulators and say we have solved our network adequacy problem, and not only that, but we’re now able to make services more widely available in people’s preferred languages, with some sort of cultural context and cultural appropriateness to it, if AI functions as well as it theoretically can in terms of the customization of treatment. So insurers and regulators are going to be on board. And for clients themselves, I think we as therapists understandably underestimate the degree to which clients are going to really want this service. There was one study, and we don’t have great research on this yet, but there was one study a couple of years ago out of Turkey, and that study basically asked people from the general public their opinions about AI therapy. Would they prefer a human therapist or an AI one? They asked a bunch of related questions, and more than half of the people in that study said that they would prefer an AI therapist to a human therapist, and it’s for all the reasons that you might expect. It’s the lower cost, it’s the fact that they don’t have to go through the hassle of scheduling an appointment. It’s because they know an AI therapist can’t judge them the way that a human therapist might, and an AI therapist is not bound by state lines in the way that human therapists are.
You put all of that stuff together with low cost, and of course, a lot of clients will take advantage of that option once they know it’s available to them. It’s really only us. It’s only the human therapists who might be in a position of saying, wait a minute, wait a minute, hold on. I don’t know that this is a good idea. Everybody else is going to be on board.

Curt Widhalm 19:32
You know, I’m starting to wonder if you just wrote this from AI and all of your talking points are coming from the AI itself. This sounds incredibly, just, pro AI without addressing any of the things like ethical concerns or any of this kind of stuff. And for those who are unaware, I’m taking comments from one of your posts on LinkedIn, and what some people’s responses seem to boil down to, which is: which AI companies are paying you, Ben?

Dr. Ben Caldwell 20:07
No, no AI companies are paying me. This is simply my effort to kind of inform people about where the technology is. And listen, I don’t land in a place that is particularly pro AI or anti AI. I really deeply struggle with this question of access, knowing that for an awful lot of people, we’re not talking about the difference between an AI therapist and a human therapist. We’re going to be talking about the difference between getting therapy from an AI therapist and not getting care at all. And so the question then becomes, for those folks, is some level of treatment through AI therapy better than nothing? I really, really wrestle with that. I don’t know where I land. It changes day to day. But I do think that these are services that already exist, where public knowledge and utilization of these services is going to continue to grow, where there need to be guardrails, and I’m not sure that any public entity is in a position right now to take responsibility for making sure that those guardrails exist, you know, leaving aside those companies that are going through, like, the FDA approval process. So there are real and significant ethical concerns here. But I just think a lot of us aren’t aware of how far down the road we already are. And I think that some therapists understandably look at AI therapy, and they say, well, it’s kind of in the uncanny valley stage, right? You can tell when you’re talking to a robot, maybe not immediately, but usually pretty quickly. And to that, I would say a couple of things. One is that some other research shows us people might actually prefer the robot. And that goes back to, again, the fact that robots can’t judge you, will remember what you’ve said, et cetera, et cetera. I think we overestimate the degree to which the general public cares a ton about meeting with a human person. But the other thing is, if you look over historical timelines, and you think about, I can always go back to Tom Hanks in Polar Express as the example of this. Tom Hanks in Polar Express is the example of the uncanny valley, where he was that character, and it’s a good movie. That character at the time was the best that technology could get us in terms of motion capture and expression capture, trying to show us a human character in animated form. But it wound up coming across to a lot of folks as kind of wooden and creepy, unreal. But then you look today at the very realistic and believable de-aging of Indiana Jones. That progression didn’t take all that long, historically speaking. And that is the same kind of pathway that we are on with AI therapy. Again, I don’t think it’s ever going to fully replace that human relationship that I think is so important, so healing, so magical. And people who have concerns about AI talk about AI being unable to act as what they would call a moral agent, like it doesn’t have the ability to ethically reflect on its own work. It offers an approximation of therapy and not therapy itself. Those are all very real concerns, but again, I think for a lot of clients and other audiences, it’s going to be close enough. And so we have to think through, what does that mean for my practice then, and how can I thrive in a world where cheap, readily available AI therapy exists and is being used.

… 24:11
(Advertisement Break)

Katie Vernoy 24:16
I want to get to that question, but before we head there, I still have a question on what AI therapy actually is. Because what you’ve described, it’s mostly chat bots. Oftentimes, it seems to be drawing from, you know, curriculums tied to CBT or to defined interventions. And I think about a lot of the self help apps that have come out over the years, whether it’s Calm or, you know, like all the different ones, right? There’s something that people can log into at any time, get some specified education, really, maybe there’s availability for questions. To me, that isn’t therapy, that’s psychoeducation, or that’s some sort of self help. Are we actually differentiating between that and AI therapy? I’m not quite clear what we’re actually talking about.

Dr. Ben Caldwell 25:07
Well, and that’s a fascinating question. It gets to sort of what you call therapy and what you don’t call therapy.

Katie Vernoy 25:14
Sure.

Dr. Ben Caldwell 25:15
And if you have a chat bot that is working from a defined curriculum, think about how much, over the past couple of decades, therapists have worked to manualize the approaches from which we practice, and to try to standardize practice a bit from one clinician to another. So we are often talking about, you know, manualized to the point of being almost scripted approaches to treatment. Now, what these chat bots tend to offer is not the level of depth and personal growth that some long term therapy clients might reasonably want and expect. Instead, they tend to get built toward particular therapeutic tasks and, again, more manualized approaches. There’s a reason why a lot of these AI chat bots are being built with CBT specifically in mind. Manualized approach, definable tasks, things that you can look at in small chunks to see, did this particular, specific intervention work as intended. For those clients who want that sort of deeper, longer term work, I don’t know that the technology is sort of there yet, but there certainly are companies working in that direction. And so, you know, if you want to get into a philosophical discussion about the definition of therapy, I think that’s actually a worthwhile conversation to have. APA, the American Psychological Association, specifically defines therapy as a service provided by a qualified professional, meaning a person. So under the APA definition, no chat bot can offer therapy, because it’s not a human relationship. But if you look at sort of the delivery of therapeutic interventions with the intention of treating a defined mental health issue or a mental health diagnosis, then absolutely, that’s what these companies are offering. And again, right now, it tends to be text based. It tends to be chat bots, but the technology already exists that if you or I, or anybody else wanted to, well, let me back up a little bit. So I do a lot of teaching and training, right? And so I see this stuff in the teaching and training world, where there are companies, I could go to their studio right now today, and they could spend the day engaging in a process with me of motion capture, voice capture, sort of video capture to get my expressions modeled as closely as they can. And they would create an avatar of me, basically a three dimensional looking video image of me. And then I could give their technology a script for a training that I want to do, and I could do little things in that script to talk about points of emphasis, when I want to drop my voice, etc., and they can give me a video that looks, sounds and moves like me delivering the script that I have given them. It is a very, very short walk from that to the script being dynamically generated in conversation. So right now, most of what we have are chat bots, but this is going to be very similar to one on one video based therapy very quickly.

Curt Widhalm 28:59
So for our audience, who are now looking at bajillions of competitors coming in the form of chat bots and that kind of stuff, what can they do to protect their practices and their jobs going forward?

Dr. Ben Caldwell 29:12
I think it’s a great question, and there are two ways that we can look at that, right? One way is, how do you interact with AI systems if you want to make that part of your work, if you want to be involved in that world? And first and foremost, I would suggest that folks pay really careful attention to the privacy policies and practices of those companies, because, again, they’re not all created equal. And what they do with user data, the various ways in which they protect or commercialize user data, is going to be different from one company to the next. That’s even true with documentation stuff. Now, some of the AI documentation companies use session recordings to generate first drafts of progress notes. Others don’t. Some of them don’t want protected health information. They will tell you specifically not to use any PHI. But if you give it a little bit about what happened in session, it will generate a note for you based on that, and then you can kind of plug in the client specific information. So there’s a really broad range of how these companies use data, and if you want to be involved with the AI world, that’s an important thing to pay attention to. If you don’t want to be involved in that world, if you’re interested in protecting your practice, making sure that you can thrive in this very rapidly approaching world, I think there are a handful of things that you can do as a therapist that AI is unlikely to be able to do on its own in the short to middle term future. So things like diagnosing. Diagnosing is still very much the realm of people, very much the realm of trained professionals. AI can’t do that at this point, and I do foresee a possible world where some therapists act kind of in the way that many psychiatrists do now, where maybe you do an intake with a client, issue a diagnosis, and, I’m using this term loosely, not technically, you sort of prescribe, in air quotes, AI based therapy for this person’s problems. Then you become sort of a supervisor of that AI therapy. You become a person that the client checks in with once in a while, if the AI therapy goes wrong, or if it doesn’t seem to be working, but most of what you are doing is that initial intake and diagnosis, and then, I’m here for you if you need, right? So that’s one possibility. You can specialize. If you are someone who works with more involved kinds of disorders, with longer term treatment, with some of the more particular forms of therapy that AI is not currently pursuing, then I think you’re probably safer. I think AI is coming first to replace those of us who primarily do one on one, telehealth based therapy with adult clients with kind of garden variety problems, so, anxiety, depression, etc. The more you specialize, I think the more protected your practice is.

Katie Vernoy 32:27
You’re saying specialize as well as work with more challenging clients. So those of us who would prefer to work with the worried well, good luck keeping that population.

Dr. Ben Caldwell 32:38
Yeah.

Katie Vernoy 32:40
I just want it, because I could specialize into a very nice little niche of worried well. That’s not going to help me.

Dr. Ben Caldwell 32:47
Yeah, and listen, I think there’s always going to be a market for people who very much want to work with a human therapist, and that includes that kind of worried well population. But I think that market is probably going to get smaller as more of it’s going to be taken up by AI. The market of worried well who want to work with a human therapist, as that market shrinks, it’s going to be comparatively wealthier. And so if you’re one of the therapists who doesn’t want to do anything about this rapid approach of AI, you want to just keep your practice as it is right now, working with those worried well kinds of clients, that’s great. More power to you. I think it is going to become more difficult to compete. I don’t think your market’s going to disappear, certainly not overnight, and maybe not even in the long term, but it is going to shrink to some degree, and it’s going to be harder for you to keep that same kind of practice going. I think if you work with couples, families and kids, you’re safer, because AI is not currently capable of doing that especially well, to my knowledge. Especially if you’re working with folks in the room, that’s just harder for AI to intervene in a way that we would with our physical bodies, moving people around the room, or getting down on the floor to play with a kid. You know, those kinds of practices, I think, are probably better protected. And then, if you are up for it, not every therapist is, and I certainly understand if you’re not up for it, I think there is going to be demand from the AI companies to have therapists involved in helping train the AI therapists. I think some of the companies, and maybe not all of them, but some of them are going to want therapists involved in training the AI: flagging conversations that produce bad outcomes, or that, metaphorically speaking, tell clients to put glue on their pizza, and making corrections to their models so that they will continue to get better. Now, a lot of therapists, like Curt and, honestly, like me, this is kind of where I land too, I’m not interested in that kind of a role. I would be too uncomfortable with that, even though, overall, I’m not all that pro or con either way. But I wouldn’t want that kind of a role for myself in training the AI systems. But for those therapists who do want to be involved in the development and the improvement of the technology, I think a lot of these companies would be happy to have you.

Katie Vernoy 35:27
It’s such a challenging time, I think, for therapists, because, I’m going to use a cliche here, but big tech has definitely come for our jobs from a lot of different angles. We’ve got, from the insurance side, these gigantic, quote-unquote, group practices, where individuals can barely even negotiate with insurance companies anymore, and really it doesn’t make sense for individuals to take insurance in a lot of cases. There’s the AI tech that’s coming in to potentially do therapy, where we’re really raising the bar of what would qualify someone, maybe for insurance purposes, but maybe even for their own pocketbooks, for human to human therapy. And so, you know, the profession that I keep thinking about is like tailors and folks who actually make individual clothes, versus all of the warehouses that just make these clothes that work for everybody. I buy clothes that are probably mass produced. And what you’re talking about is really refining your practice to be exceptional, with a specialized practice that is not easily treated by anyone else, or potentially catering to the most wealthy, and especially the most wealthy who want their privacy and don’t want to be putting their data into a chatbot.

Dr. Ben Caldwell 36:53
Right.

Katie Vernoy 36:54
And so it really shifts, I think, what our profession is or could be. And I don’t know if I have a question about it, but I just, I feel okay. That’s what my practice kind of is. But I don’t think that there’s a lot of therapists who are ready for this.

Curt Widhalm 37:14
And I don’t think that the graduate programs are preparing new students for this and the upcoming changes that are happening in our field yet, either.

Dr. Ben Caldwell 37:27
I would agree with that, and I think, again, it’s not that therapists are making some horrible mistake here. I think it’s just that a lot of us are not aware of how far this technology has already come and how close we are to what I think is going to be a really significant shift in how people access mental health care. So my semester at CSUN just started yesterday, and I started talking with my students about this, but quite honestly, I scratched the surface last year, and I hadn’t been talking with students about it before then. Because this is such a rapidly developing area, I think a lot of us are not prepared for it, and some of preparing for it really amounts to thinking through: what kind of practice do I want to have, especially in a world where clients can access AI therapy for less than 10 bucks a session? How am I going to set myself apart in a way that clients will be interested in, take me up on, and, frankly, pay me more money for? I do worry about the possibility of kind of a two tiered system, where human therapists, with a lot of privacy protections and other guardrails that are built into the licensure process, are available for the wealthiest and those in the most need, while AI therapy, with far less consumer protection, is what gets offered to everybody else. And I think each of us has to decide for ourselves, what role do we want to play, either in that kind of a world or in pushing back against the momentum that would take us there.

Curt Widhalm 39:19
So if people want to find out more about these updates, where can they find out as you continue to put out new information about this and about the other cool things that you do in general?

Dr. Ben Caldwell 39:31
Well, thank you. So my articles about this topic are at the blog site, psychotherapynotes.com, and all the other stuff that I do, the exam prep (which we just released some fun new features for), continuing ed, all that kind of thing, that’s all at Highpass.com, H-i-g-h-p-a-s-s.com.

Curt Widhalm 39:52
And we’ll include links to those in our show notes over at mtsgpodcast.com. Follow us on our social media. Join our Facebook group, the Modern Therapists Group, to continue on with this conversation, and until next time, I’m Curt Widhalm with Katie Vernoy and Dr. Ben Caldwell.

… 40:07
(Advertisement Break)

Announcer 40:07
Thank you for listening to the Modern Therapist’s Survival Guide. Learn more about who we are and what we do at mtsgpodcast.com. You can also join us on Facebook and Twitter, and please don’t forget to subscribe so you don’t miss any of our episodes.

