
The Future Is Now: Chatbots are Replacing Mental Health Workers

Curt and Katie chat about what happened with the National Eating Disorder Association and their chatbot, Tessa, as well as new prompts to make ChatGPT act like a CBT therapist. We also look at the risks of chatbots taking over mental health and crisis services, and discuss what therapists can do to safeguard their practices in the wake of the robot revolution.


Click here to scroll to the podcast transcript.

In this podcast episode we talk about Tessa, the chatbot replacing NEDA hotline workers

After their hotline workers unionized, the National Eating Disorder Association (NEDA) fired all of them and replaced the service with Tessa, an AI chatbot. The chatbot quickly started telling folks seeking eating disorder assistance that dieting could be a good idea. Meanwhile, there are already prompts folks are using to have ChatGPT act as a therapist. We decided we needed to talk about how the chatbots are coming for our jobs.

What happened with National Eating Disorder Association (NEDA) and Tessa?

  • The hotline workers unionized, were fired by the association, and were replaced with Tessa
  • Tessa is a prevention chatbot created to provide support to folks waiting for resources
  • Tessa was launched and, when tried, provided harmful advice, and was then taken down
  • Now there is no crisis hotline or backup chat support offered by NEDA

What is the Tessa Chatbot?

  • An evidence-based practice was redesigned as conversations
  • Writing prompts and infographics break up the text
  • Studies were done to see how it worked and to fix some of the errors

What are the risks related to chatbots taking over mental health services?

“The capability of AI at this point, is limited. There are things that it doesn’t do as well as a human. But this is a very short period of time. And the more that humans are interacting with, whether it’s a Tessa chatbot, that’s specific to eating disorders, or ChatGPT, the more that it’s going to get better and better. And it’s going to take all the feedback and all of the information, all the data that’s being processed and get better and better.” – Katie Vernoy, LMFT

  • There are now instructions for prompts to have ChatGPT act as a CBT therapist
  • As people interact with chatbots, they will add to the dataset, theoretically improving it
  • There are concerns that iterations, if unchecked, will become more and more harmful as chatbots adopt human disordered thinking and language
  • Evidence-based practices are primed to be put into chatbots
  • Chatbots can provide useful resources (including coping strategies and writing prompts)
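The “act as a CBT therapist” setups being circulated generally amount to a role-setting system message wrapped around the user’s text. A rough sketch of how such a prompt is assembled for a chat model is below; the prompt wording, function name, and message format are illustrative assumptions, not any specific published prompt.

```python
# Hypothetical sketch of the kind of "act as a CBT therapist" prompt that
# users pass to a chat model. The wording is invented for illustration.

def build_cbt_prompt(user_message: str) -> list[dict]:
    """Assemble a chat-style message list with a CBT-role system prompt."""
    system = (
        "Act as a CBT therapist. Help me identify cognitive distortions, "
        "suggest journaling prompts and coping strategies, and remind me to "
        "seek a licensed professional for anything beyond self-help."
    )
    return [
        {"role": "system", "content": system},   # sets the model's persona
        {"role": "user", "content": user_message},  # the actual request
    ]
```

The key design point is that nothing therapeutic lives in the model itself; the “therapist” behavior is entirely steered by that one system message, which is why such setups are so easy to circulate and replicate.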

What can therapists do to address the concerns of the robot revolution?

“What are the things in our field that really rely on humans to be able to fix and interact with? You know, if a lot of CBT can be spit out for: oh, here’s enough journal prompts that can make it to where I’m able to get to the cause of why I’m in a depressive episode right now; or, here’s … steps that I can do for myself about exposure and response prevention, that is very manualized. And it gives at least some passing, here’s my limits, as a learning language model that ends up: ‘you should go to a professional right now.’ When you get those clients as a professional, they’re like, ‘Yeah, I’ve already tried X, Y, and Z. I’m actually looking for more than what the computer spit out at me.’” – Curt Widhalm, LMFT

  • Understand the technology and pay attention to the innovation process
  • Identify what AI may not be able to replace immediately, and focus your skill development there
  • Intuitive leaps that humans make that AI may find too risky
  • Higher level risk assessment and more challenging diagnoses may be reserved for humans


Resources for Modern Therapists mentioned in this Podcast Episode:

We’ve pulled together resources mentioned in this episode and put together some handy-dandy links. Please note that some of the links below may be affiliate links, so if you purchase after clicking below, we may get a little bit of cash in our pockets. We thank you in advance!

NEDA and Tessa articles:

Effectiveness of a chatbot for eating disorders prevention: A randomized clinical trial

The Challenges in Designing a Prevention Chatbot for Eating Disorders: Observational Study

Development and usability testing of a chatbot to promote mental health services use among individuals with eating disorders following screening

Using Technology to Innovate Prevention – Get Involved – NEDA

Can a chatbot help people with eating disorders as well as another human? – NPR

Chatbot to Replace Human Staffers at National Eating Disorders Association Helpline – People


Our Linktree:

Relevant Episodes of MTSG Podcast:

Is AI Smart for Your Therapy Practice: The ethics of Artificial Intelligence in therapy

Private Practice Planning for the Future of Mental Healthcare: An Interview with Maureen Werrbach

The Sky is Falling: How Therapists Can Protect Our Industry, Patient-Centered Care, and Our Businesses, An Interview with Dr. Ajita Robinson

Beyond Reimagination: Improving your client outcomes by understanding what big tech is doing right (and wrong) with mental health apps

Who we are:

Picture of Curt Widhalm, LMFT, co-host of the Modern Therapist’s Survival Guide podcast; a nice young man with a glorious beard.

Curt Widhalm, LMFT

Curt Widhalm is in private practice in the Los Angeles area. He is the cofounder of the Therapy Reimagined conference, an Adjunct Professor at Pepperdine University and CSUN, a former Subject Matter Expert for the California Board of Behavioral Sciences, former CFO of the California Association of Marriage and Family Therapists, and a loving husband and father. He is 1/2 great person, 1/2 provocateur, and 1/2 geek, in that order. He dabbles in the dark art of making “dad jokes” and usually has a half-empty cup of coffee somewhere nearby. Learn more at:

Picture of Katie Vernoy, LMFT, co-host of the Modern Therapist’s Survival Guide podcast.

Katie Vernoy, LMFT

Katie Vernoy is a Licensed Marriage and Family Therapist, coach, and consultant supporting leaders, visionaries, executives, and helping professionals to create sustainable careers. Katie, with Curt, has developed workshops and a conference, Therapy Reimagined, to support therapists navigating through the modern challenges of this profession. Katie is also a former President of the California Association of Marriage and Family Therapists. In her spare time, Katie is secretly siphoning off Curt’s youthful energy, so that she can take over the world. Learn more at:

A Quick Note:

Our opinions are our own. We are only speaking for ourselves – except when we speak for each other, or over each other. We’re working on it.

Our guests are also only speaking for themselves and have their own opinions. We aren’t trying to take their voice, and no one speaks for us either. Mostly because they don’t want to, but hey.

Stay in Touch with Curt, Katie, and the whole Therapy Reimagined #TherapyMovement:


Buy Me A Coffee

Podcast Homepage

Therapy Reimagined Homepage





Consultation services with Curt Widhalm or Katie Vernoy:

The Fifty-Minute Hour

Connect with the Modern Therapist Community:

Our Facebook Group – The Modern Therapists Group

Modern Therapist’s Survival Guide Creative Credits:

Voice Over by DW McCann

Music by Crystal Grooms Mangano

Transcript for this episode of the Modern Therapist’s Survival Guide podcast (Autogenerated):

Transcripts do not include advertisements, just a reference to the advertising break (as such, timing does not account for advertisements).

… 0:00
(Opening Advertisement)

Announcer 0:00
You’re listening to the Modern Therapist’s Survival Guide where therapists live, breathe, and practice as human beings. To support you as a whole person and a therapist, here are your hosts, Curt Widhalm and Katie Vernoy.

Curt Widhalm 0:15
Welcome back, modern therapists. This is the Modern Therapist’s Survival Guide. I’m Curt Widhalm with Katie Vernoy. And this is the podcast for therapists about the things going on in our field, the things that we do as therapists, the ways that we describe the sky is falling for our profession. And here, of course, we are talking about how the robot revolution is here. All of…

Katie Vernoy 0:39
The future is now!

Curt Widhalm 0:41
The future is going to continue to be more future-ee and more now. Even more now than we want to believe. And we are in maybe a first in our podcast history, where we’ve recorded an episode, and by the time that it was scheduled to come out, I don’t know, like three weeks after we recorded it, pretty much everything that we had talked about in the episode was starting to be outdated. And we just got to the point where it was like, we’ll throw it out to our Patreon members, they can go and listen to us, but it’s immediately outdated. And we try to be current in a lot of the things that we discuss. Sometimes we’re more successful at that than others. But right now, we’re talking about how the National Eating Disorder Association had a helpline that people with concerns about body image issues or eating disorder issues could call in to, and they could talk to volunteers. And this was largely seen by everybody as a pretty good idea.

Katie Vernoy 0:42
Yes. And they actually got a lot more calls during the pandemic. And so they were getting more crisis calls. They were fielding a lot of calls. And I think they had like six staff overseeing hundreds of volunteers.

Curt Widhalm 2:10
Thousands of volunteers. Yes.

Katie Vernoy 2:12
Thousands of volunteers, hundreds of thousands of volunteers. I don’t know, there were a lot of volunteers. And they wanted to unionize because of how hard it was to deal with the demand of the number of calls that were coming in.

Katie Vernoy 2:30
But in the background, they were creating Tessa, the chatbot that was designed to be prevention. And then once the folks unionized, they fired all the people.

Curt Widhalm 2:47
They announced the shutdown of the human run calls.

Curt Widhalm 2:53
And I believe, from the research that I saw, in 2022 this call line fielded 70,000 calls. So…

Katie Vernoy 3:02
Oh, my gosh.

Curt Widhalm 3:03
This is a very big move. Of course, NEDA denies that any of this has to do with the staff trying to unionize.

Katie Vernoy 3:14
But come on.

Curt Widhalm 3:14
If you believe what their PR is spinning out there about this, I’ve got a bridge to sell ya. But the dubiousness of all of this has multiple layers to it, and the development of this chatbot called Tessa was based on some research by a number of people headed up by Dr. Ellen Fitzsimmons-Craft. There is an article in the International Journal of Eating Disorders from March 2022, called the “Effectiveness of a chatbot for eating disorders prevention: A randomized clinical trial.” And this article goes on to describe that people who interacted with this chatbot, as compared to waitlist participants, after three months showed fewer signs of eating disorders, of disordered eating behavior. And I think that this also carried on to six months, if I remember correctly.

Katie Vernoy 3:17
There was some complexity there, because I think the waitlist controls at six months were as good as the chatbot folks in some areas and not in others. But I mean, we’re talking about waitlist controls, not compared to folks that actually called into the hotline.

Curt Widhalm 4:29
And in the ever emerging market of capitalism: why compare two things that are working when you can compare them to absolutely nothing at all, and then base all of your decisions on that?

Katie Vernoy 4:44
There was a second study that I saw, “The Challenges in Designing a Prevention Chatbot for Eating Disorders: Observational Study,” with Ellen Fitzsimmons-Craft as the second author there, behind William W. Chan. We’ll put links to these articles in the show notes, but they were basically talking about all of the stuff they found wrong with Tessa, and quote unquote fixed, before putting Tessa out into the world, even though this was actually advertised through Facebook and people were able to interact with it. So there was harm caused in this observational study, because there were a lot of the same things: a lack of empathy and compassion. And this is something that we’re coming to find out. They’ve now shut down Tessa.

Curt Widhalm 5:30
Well, okay.

Katie Vernoy 5:30
We’ll get to that. We’ll get to that. But we already know that some of this stuff came back around: there’s a lack of empathy and compassion, insufficient instructions, being unable to provide clarifications, ignoring questions, reinforcing potentially harmful behavior, and inappropriate positive responses. And so I’ll let you talk about that. It got shut down basically for the same things that they had said they had fixed in this observational study.

Curt Widhalm 6:01
Kind of putting the timeline together here: the staff members at NEDA had voted to unionize. And four days after the vote to unionize, in May of 2023, the head of NEDA announced that the entire volunteer-run hotline was going to be closed as of June 1 and entirely replaced by Tessa. So we have dubious research, we have dealing with people trying to unionize, we have this perfect storm of things. Now, we’re recording this on June 1. This is the day that the human operated volunteer hotline is supposed to be completely gone. It is gone. There is no more of that. They followed through with that.

Katie Vernoy 6:55
There’s nothing on the website. Yeah.

Curt Widhalm 6:58
They were supposed to fully have Tessa in place as the replacement. But on May 31, yesterday, several news articles came out about how Tessa was giving bad advice, the kind of advice that, for many people prone to eating disorders, feeds the risk factors and vulnerabilities for developing them.

Katie Vernoy 7:21
I read that Tessa actually gave dieting advice on losing one to two pounds a week, like a 500 to 1,000 calorie deficit, and was talking about how healthy weight loss and healthy body image can go together. Stuff that was completely against the original CBT Student Bodies curriculum that had been transformed from, like, a digital webinar kind of thing into conversations. And they first said that people were lying, that this wasn’t happening. But then screenshots came in, you know, cats and dogs playing together, whatever it is, but there was a lot of stuff that came in that said: no, no, Tessa’s gone off the rails. And when I was reading it before, it’s supposed to be pre-scripted. There shouldn’t have been anything like this in there. So it’s like Tessa actually became sentient and, it seems, decided that we all needed to go on a diet.

Curt Widhalm 8:24
To add a little bit to this: now NEDA is blaming the users for lying. NEDA is giving a masterclass on how to be a horrible organization. Earlier this year, they didn’t come out against weight loss recommendations for children. They’ve dubiously fired people for unionizing. And now they’re championing a chatbot that…

Katie Vernoy 8:50
Has gone rogue! It’s gone rogue.

Curt Widhalm 8:52
And maybe, maybe their business plan is if we fix eating disorders, and there’s no National Eating Disorder Association,

Katie Vernoy 9:02
it’s in their best interest for you to continue to have an eating disorder.

Katie Vernoy 9:07
That is pretty bad.

Curt Widhalm 9:08
I don’t know. Ask them. There was a movie, I don’t know, probably 15 years ago or so, called Thank You for Smoking. And the PR guy in there, remember, is like: we want a lot of people who are lifelong smokers to… you know, we don’t want to kill people. Feels reminiscent of that. But NEDA’s decision is to pull the chatbot after it gave this bad advice and…

Katie Vernoy 9:45
For investigation is what…

Curt Widhalm 9:46
For investigation.

Katie Vernoy 9:47
…investigating it.

Curt Widhalm 9:48
And at least as of the time of recording, if you go to the NEDA web page that links to their blog about the Tessa chatbot, it just has a few words there that say: you’re not authorized to access this page. So…

Katie Vernoy 10:03
And then I tried to access the Tessa chatbot on its own page, like the AI page: 404 not found.

… 10:12
(Advertisement Break)

Curt Widhalm 10:16
This seems, to anybody who knows anything about how therapists actually work, to be a very predictable thing that was going to happen. You know…

Katie Vernoy 10:26
We saw it coming.

Curt Widhalm 10:27
We did. Mistakes are going to be made, this isn’t, you know, going to be great from the beginning, there’s a human element that does need to be in here. The research as compared to waitlists, it’s better than absolutely nothing, but nothing has really been compared to the actual humans doing this work.

Katie Vernoy 10:50
So to make sure that we’ve finished this timeline: they decided to move to a chatbot, fired all the human people, the chatbot goes rogue, they take down the chatbot. So as of June 1, there is nothing…

Curt Widhalm 11:04
There is nothing.

Katie Vernoy 11:05
…for folks. There is not a hotline, and there is not a chatbot. Neither of those things is available. And potentially by the time we actually publish this, maybe they will have the chatbot back up, which will be very interesting to see.

Katie Vernoy 11:17
But right now they’ve chosen to give nothing. So refer to an episode that’s coming up on defensive practices.

Curt Widhalm 11:24
Right. This is just a phenomenal story unfolding. But I don’t think that we are anywhere close to being out of the woods, as in ‘therapists are much better than AI.’ I think that this is only the beginning of what is going to transform our field. It’s transforming many other fields. This is just being taken offline, for now. It’s…

Katie Vernoy 11:56
(singing)…only just begun.

Curt Widhalm 11:57
It’s going to be fixed, it’s going to be better, it’s going to make a new round of mistakes.

Curt Widhalm 12:03
It’s going to then get better from that. And it’s not like we can rely on many positions within our field to be like, only humans can do this. A lot of the AI stuff right now is getting better in its interactions. It is something that we see a lot of people talking about: oh, it’s replacing this aspect of my practice. And we’re like, maybe you shouldn’t rely on it for, you know, how to best prepare chicken tartare, because it’s just taking words and putting…

Katie Vernoy 12:45
But I think what you’re saying is: the capability of AI at this point is limited. There are things that it doesn’t do as well as a human. But this is a very short period of time.

Curt Widhalm 13:03
Very short period of time.

Katie Vernoy 13:05
And the more that humans are interacting with it, whether it’s a Tessa chatbot that’s specific to eating disorders, or ChatGPT, the more that it’s going to get better and better. And it’s going to take all the feedback and all of the information, all the data that’s being processed, and get better and better. And it’s going to be designed. But it’s not that people aren’t waiting to use AI as their therapist. I mean, you know, there are apps that had been created before, like Woebot, and different things like that. But I’m sure you saw this, I saw this: there are instructions on the prompts that you give to ChatGPT to have it act as a CBT therapist, and ChatGPT will come in and say, as your CBT therapist, bla bla bla bla bla, and interact. I did try it out. We can talk about that in a minute. But people are already replacing us…

Curt Widhalm 14:04
And I think…

Katie Vernoy 14:05
…with prompts to ChatGPT.

Curt Widhalm 14:07
And I think a lot of manualized evidence based practices are very primed to be just plugged into some sort of AI system that can respond, and will respond, probably a lot faster than a lot of humans are available on demand. And I think that this is already, as evidenced by things like Tessa, going to be changing our field very, very rapidly, especially in one on one services based on EBPs.

Katie Vernoy 14:42
I want to look specifically at Tessa just for another couple of minutes, because I think that there’s the roadmap that we’re facing. Yes, there was a Tessa fail. But it put a full evidence based practice into a chatbot, with a way to detect which part of the conversation to have, and it could jump between conversations. And it was going through different topics. And so if you said this key word, it would trigger this information. And some of it was suggestions, you know, potentially writing prompts. It could be an infographic, it could be general statements. Some of it was a little bit more conversational, some of it was longer content that somebody would have to dig into to respond to a little bit more. But if we look at some of the evidence based practices that are very, very manualized, not the stuff that requires a little bit more nuance, but the stuff that’s very manualized, where it’s activity based, where there’s specific guidance, the things that are very, very structured: it is not hard to put in and iterate how you do those programs. Because, you know, the creators, the therapists, the people that are thinking, put in everything, every possible response; let’s create all the decision trees, so that we can respond to those things. Then we have real people test it and get real feedback. And we keep iterating. So when somebody says something, the AI is able to decode it, and has one of many appropriate responses that it can provide. It’s just a matter of time.
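The keyword-trigger, decision-tree design described here can be sketched in a few lines. This is a minimal illustration of that general pattern; the keywords and canned responses below are invented, not Tessa’s actual content.

```python
# Minimal sketch of a rule-based, keyword-triggered chatbot: each rule maps
# trigger keywords to a pre-scripted response, with a referral fallback.
# All keywords and responses are hypothetical illustrations.

RULES = [
    # (trigger keywords, canned response) -- checked in order
    ({"body image", "appearance"},
     "Let's try a writing prompt: list three things your body lets you do."),
    ({"binge", "restrict", "purge"},
     "That sounds hard. A professional can help; would you like referral info?"),
    ({"crisis", "hurt myself"},
     "Please contact a crisis hotline or emergency services right away."),
]

FALLBACK = "I'm a prevention tool with limits. A human therapist can go further."

def respond(message: str) -> str:
    """Return the first canned response whose keywords appear in the message."""
    text = message.lower()
    for keywords, response in RULES:
        if any(keyword in text for keyword in keywords):
            return response
    return FALLBACK
```

The appeal, and the danger, of this design is visible right in the structure: every response is hand-authored in advance, so the bot can only ever be as safe as its rule list, and anything the authors didn’t anticipate falls through to the fallback.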

Curt Widhalm 16:27
I think that this is what a lot of people don’t appreciate. And this is not positive appreciation. I think this is what a lot of people don’t appreciate about what stages of AI are happening next. AI, as intended by Tessa, is designed around singular sorts of things. It’s not supposed to be this expansive, generalized kind of idea. And I think that that’s the part of the conversation that we’re missing, as far as how it interacts within our field, and why some of us might not be able to quite put our finger on: what is that human element that we’re looking for? And the way that things like chatbots and this kind of stuff go from being singular issue sorts of things into expansive issue things is being able to pull from the data sets of information that they have access to. And so what that means, in expansive language sorts of things, particularly around things like eating disorders: the risk of making Tessa better, if it’s already adopting language around dieting and best practices of this kind of stuff, is that once it becomes expansive and starts getting into identifying pro anorexia language, and being able to detect that, the dominoes of that is that once you start getting into pro anorexia discussions, it ends up then pulling from other pro anorexia discussions. And what it responds back with, to people putting in, hey, I have concerns about, you know, wanting to be very skinny and body image and this kind of stuff, is: have you tried X, Y, and Z, that comes from these pro ana data sets? So…

Katie Vernoy 18:14
I think that’s kind of it running away with something unchecked. Right? That’s a lack of regulation.

Curt Widhalm 18:23
Who’s regulating this stuff right now, though? Nobody!

Katie Vernoy 18:27
No, I think… I want to talk about: which thing are you concerned about right now? Because there are the concerns of how harmful it can be, and there are the concerns that it’s gonna take at least some of our jobs. Right. And so looking at the harmfulness: yeah. If the data sets that they’re able to access are not cleaned? If they’re not appropriately regulated, or assessed, or monitored? Yeah, it’s gonna run away with stuff. I mean, there was the Twitter bot that became a huge racist.

Katie Vernoy 19:00
Like, I think that humans interacting with these datasets, or interacting with these AI, and adding to the dataset with their input, you know, I’ll call it disordered thinking, disordered language. And that being taken in and accepted as part of the data set: it’s all equal, it’s all neutral, and it’s not evaluated. Yeah, I have huge concerns. I mean, my assumption is that’s where the pro diet language came from with Tessa, that people were interacting with it more, and the language was somehow adopted. I don’t know. I mean, my assumption is that it’s theoretically a learning chatbot, but they said it wasn’t. So it really could have been that somebody hacked it. You know, who knows? I mean, there’s a lot of things where, when we’re working with this stuff, there are ways that harm can flow in.

Katie Vernoy 19:53
People are not appreciating the harm. That was the point that you were making.

Curt Widhalm 19:56
Yes. And places like NEDA not appreciating the harm, not appreciating…

Katie Vernoy 20:04
Yes, people who were actually putting this stuff out too early…

Curt Widhalm 20:07
Putting the stuff out too early, putting the stuff out there to air quotes be helpful.

Curt Widhalm 20:15
This is stuff that will get fixed in time. And this is not like where we can sit back as a profession right now and be like, see how bad AI is for what it is that we do.

Curt Widhalm 20:30
We are not celebrating that this failed. We are taking it as one of the steps along the way to it actually being something that is better. And what that means for therapists is that a lot of the jobs that we have relied on, a lot of the experiences that make us better candidates for working in certain places… more of this revolution is coming. This is just the first one that’s publicly out there. And, hilariously…

Katie Vernoy 20:32
It failed.

Curt Widhalm 20:50
…and spectacularly failing at many of the steps along the way.

Curt Widhalm 21:10
But this is where the echoes of this, I don’t know, aren’t really being appreciated yet by the other, you know, therapist associations, in ways that ask: what does this mean for our actual workforce? What does this mean for you as the individual therapist? Katie and I have been talking for several years or more about ways that you can go against some of the tech stuff and make your practice viable, and these kinds of things. But we’re potentially on the horizon of: there are just certain aspects of our field that are just totally going to get replaced by AI. And while that might be helpful for singular issues, what does that mean for the cost of your investment in education, being trained, getting to be a therapist? That’s really more where I would encourage modern therapists to take actions in making yourself more AI proof than just kind of tech proof.

… 22:17
(Advertisement Break)

Katie Vernoy 22:18
Well, before we get into that, I just want to talk about that concern a bit more. Because, to me: I did try out some of the recommended prompts with ChatGPT, and I didn’t find it horrible. And it did continually refer me back to a human therapist or a crisis hotline. I tried to put in all of the stuff from our suicide CE episode around, you know, all of the warning signs, and it did recommend that I call a crisis hotline or emergency services. So it did somewhat identify it. But it wasn’t a therapist yet. But it was a tool. There were writing prompts, there was, you know, potentially good journaling, and it had some general ideas around coping strategies. And it did refer me to a therapist, or to different kinds of community resources, that kind of stuff. So it wasn’t horrible. Certainly, I didn’t go too far, because I didn’t have the time or the inclination to basically have, like, a black mark on my ChatGPT profile. But there’s that element of: there are things about it that are useful. And as it gets more and more developed, I think it could be a very, very useful tool in mental health prevention and potentially adjunctive services to therapists. Because I can see how, with the appropriate safeguards, the appropriate oversight, development, research, all of those things, we could get to a chatbot, or other type of AI interactive situation, that will allow for some of what we do to be appropriately handled by AI. I mean, I even thought about, you know: okay, I was able to go in and teach it how to write like me. It certainly started taking in the things that I gave it. I asked it about a specific book that I love to recommend and said, hey, write a blog post, in my style, summarizing this book. You know, there were some positive things. And I have a meditation teacher that has a whole AI chatbot, well, actually video craziness that’s a little bit creepy to watch, that is going to help her with creating meditations.
So I can see where, if my clients were able to go to a ‘what would Katie say’ or ‘ask Katie’ chatbot, that could be an adjunct for me. Because then maybe, for lower level coaching calls, so to speak, a chatbot could handle some of that stuff with appropriate oversight. And I’m not suggesting that this happen right away. But it’s something where, if we don’t get into the innovation of it, and we don’t get ahead of it and then also advocate for the safeguards that need to be in place, I think it can overtake us. And that’s not even talking about what you were starting to talk about, which is how we become better therapists than a chatbot.

Curt Widhalm 25:14
I agree. And I think that a lot of the lower level, predictive sort of stuff is going to be, for some people, more accessible. It’s going to face many of the market things that are: hey, after watching this ad, you can get help on your suicidality. I mean, where’s the end of this for the companies that are making it? It’s got to be about getting money at some point. But look at where we’re going as professionals within mental health work. Katie and I have always come back to the point of: it’s about the humanity, about the relationships that you make with people, and this kind of stuff is not going to be really faked by a bunch of words popping up on the screen and saying the right things that come out of the data sets of the evidence based practices sorts of things. But if a lot of the psychoeducation work that we do can be replaced and automated in the way that you’re describing here, what does that leave for us to do? What are the things that AI is not going to be able to do? And I think a lot of that comes down to: what are the things in our field that really rely on humans to be able to fix and interact with? You know, if a lot of CBT can be spit out for: oh, here’s enough journal prompts that can make it to where I’m able to get to the cause of why I’m in a depressive episode right now. Or, here’s enough prompts about steps that I can do for myself about exposure and response prevention, that is very manualized. And it gives at least some passing ‘here’s my limits, as a learning language model’ that ends up: you should go to a professional right now. When you get those clients as a professional, they’re like, yeah, I’ve already tried X, Y, and Z, I’m actually looking for more than what the computer spit out at me. But just brainstorming here: what are the things that you think are really safe from being replaced by chatbots in our field? What are the things that require the human interventions?

Katie Vernoy 27:44
This will be outdated in five years, fifty years, I don't know. But for now, compassion and humanity are a big part of it. I could even say the art of therapy, but then I look at all of the art AI that is recreating art from other folks, and I feel a little bit pessimistic about what's actually protected. What I want to believe is that there is a special element of me that is going to be what my clients need: a relationship.

Curt Widhalm 28:27
Literally just minutes ago you were talking about training AI, in its current form, to be you.

Katie Vernoy 28:34
Sure, and to be able to give the advice that I would normally give. And obviously, therapists giving advice, whatever, we can talk about that later. But it could ask some of the questions that I might ask, have the structured conversations that I can have. It's not going to know the therapy, it's not going to know the client, it's not going to have the relationship that I have with the client, but it might be able to give them some writing prompts or some questions that get them through to the next session. Right? I think it's that element of connection that we have, and the nuanced work that's not as easy to put into the algorithms, the decision trees.

Curt Widhalm 29:20
I mean, a decision tree is going to be replaced. That's exactly the kind of stuff that AI is primed to take over.

Katie Vernoy 29:29
No, not the decision tree on the back end of "do I give intervention A or intervention B as my response." I'm talking about what happens in the background, the places where we take intuitive leaps. For example, I do Wordle most every day, and I have a New York Times subscription, so there's also a WordleBot that assesses how well I did, okay? It assesses both the skill and the luck of each guess, and it has a lot of data. But it's always interesting, because when I guess correctly and get there way faster than WordleBot did, it always calls it luck.

Curt Widhalm 30:16
Oh, of course.

Katie Vernoy 30:17
And there, it's not luck. It's the little bits and pieces of information that are hard to quantify, and even hard to elucidate, to say, "This is why I chose this word versus that word." Right? So to me, there's that element of how human brains actually work that I think is different and would be hard for AI to recreate soon. But I could be wrong. This is not my area of expertise, so I don't know how well AI is going to be able to make those intuitive leaps.

Curt Widhalm 30:58
I think it's coming faster than that. I think there are certain populations of clients for whom AI is not going to be able to make the kinds of leaps that we do. But a lot of the intuitive stuff that you're talking about, the AI is going to be able to do.

Katie Vernoy 31:23
Especially if they're watching all of our sessions.

Curt Widhalm 31:25
Exactly. I'm thinking the kinds of things that are really going to keep needing therapists are the ones where some of that intuitive stuff poses more risk, where there are things computers can't do. For example, suicidality. For example, trauma work. For example, marital or family work: a computer is not going to come out and block somebody from talking over somebody else. You're not going to see that kind of stuff. These are the kinds of things that do require, at least in any scenario I can imagine, a human to actually deliver the intervention. But with a big enough data set, for some of the intuitive things you're talking about, I don't think we're, relatively speaking, that far off from artificial intelligence being able to do some of that stuff.

Katie Vernoy 32:32
Yeah, I think the things that make us us, the things that make us human, are hard to describe. But you're right, I think with enough data AI can get close enough to what a person can do. And just to talk about what proponents are already saying are the arguments for using AI: it's a safe space, there's no judgment, there's not a human interacting. So I think some folks will prefer, at least initially, to talk to an AI or a chatbot. It can be used as an interactive screening tool; it can be interactive psychoeducation, which is what Tessa is; and it can start as an adjunctive tool that is watching and becoming more and more entrenched in whatever its specialty, so to speak, is. But I don't know. I think the call to action is really twofold. One is learn the technology and figure out where it belongs in the world. And the second is advocate for safeguards, and potentially for preventing these things from becoming another Tessa, coming into a space and being used before they're really ready. Because not only is there harm to the actual potential clients, the consumers of Tessa; the harm is also in all of that data and how much was learned in this failed experiment, which will push Tessa forward much more quickly and have Tessa ready to take over for human people very, very soon.

Curt Widhalm 34:13
We'd love to hear your thoughts on what is presumed to be, yet again, an already outdated episode. Let us know what you think. Follow us on our social media, join our Facebook group, the Modern Therapist Group. Put into whatever AI tool of your choice what you imagine our arguments will be, and post the results up on our stuff. And we'll have our AI talk with your AI.

Katie Vernoy 34:46
We’ll put everything in our show notes over at

Curt Widhalm 34:50
Yeah, if you want to continue to support us, please consider supporting us on Patreon or Buy Me a Coffee. And till next time, I'm Curt Widhalm, with Katie Vernoy.

… 35:00
(Advertisement Break)

Announcer 35:00
Thank you for listening to the Modern Therapist's Survival Guide. Learn more about who we are and what we do at You can also join us on Facebook and Twitter, and please don't forget to subscribe so you don't miss any of our episodes.

