Image: “Training Therapists in the Age of AI: Preventing Deskilling and Teaching Clinical Judgment. A Continuing Education Podcourse.” Background shows an open laptop displaying an image of Earth.

Training Therapists in the Age of AI: Preventing Deskilling and Teaching Clinical Judgment

Curt and Katie examine how artificial intelligence is reshaping therapist training, supervision, and clinical practice – and why the greatest risk is not replacement, but erosion of clinical judgment. Drawing from learning science, supervision theory, and real-world examples, they explore how AI can create the appearance of competence while quietly undermining the developmental processes that make therapists clinically effective.
This is a continuing education podcourse.

Transcript


(Show notes provided in collaboration with Otter.ai and ChatGPT.)

In This Podcast Episode: Training Therapists to Think Clinically in an Automated World

Artificial intelligence is rapidly becoming embedded in mental health care—through documentation tools, treatment planning, assessment supports, and supervision adjuncts. For therapists, supervisors, and educators, the question is no longer whether AI will be used, but how it will shape clinical thinking over time.

In this continuing education episode, Curt Widhalm and Katie Vernoy explore therapist deskilling as a structural problem rather than an individual failing. They describe how AI tools can generate polished, confident output that masks gaps in reasoning—what the literature describes as the competence paradox—and why this poses particular risks for newer clinicians who are still developing clinical judgment.

Rather than arguing for or against AI, Curt and Katie focus on professional responsibility: the obligation of supervisors, educators, and clinical leaders to ensure that learning remains effortful, reflective, and grounded in human judgment. They examine how automation bias, algorithmic hallucinations, and AI sycophancy can interfere with learning—and how education and supervision can be structured to counteract those risks.

In this episode, Curt and Katie discuss:

“We can’t avoid that people are going to use the tools that are at their hands, and we can only do the best that we can to try to help them learn critically and independently.”
— Katie Vernoy, LMFT

  • Why therapist deskilling is a systemic outcome of training and productivity pressures, not personal laziness
  • How AI can create illusions of competence without developing clinical reasoning
  • The role of automation bias in deferring judgment to algorithmic output
  • Why ambiguity, struggle, and desirable difficulty are essential to learning psychotherapy
  • How supervision models can unintentionally prioritize efficiency over skill formation
  • The difference between producing correct answers and learning how to think critically and clinically
  • Why AI must be treated as an extension of the clinician, not an authority or peer
  • The ethical reality that responsibility for AI-assisted work always remains human

“A lot of psychotherapy and being a good therapist is tacit knowledge. It’s sitting through messiness. It’s sitting through ambiguity. And if you think about the output that you get from a lot of artificial intelligence, it’s very clean feedback, and it doesn’t necessarily operate in the messiness.”
— Curt Widhalm, LMFT

Why This Matters for Supervision and Education

Clinical judgment is not transmitted through templates or shortcuts – it is developed through repeated engagement with uncertainty, complexity, and human feedback. Curt and Katie argue that when AI removes too much of this process, it risks weakening the very skills therapists are ethically required to possess.

This episode outlines how educators and supervisors can actively protect clinical development by:

  • Emphasizing process over answers
  • Requiring human explanation and audit of AI-assisted work
  • Preserving opportunities for manual case formulation and conceptualization
  • Teaching algorithmic literacy alongside clinical skills
  • Reinforcing that AI does not reduce ethical or legal accountability

Resources on AI, Therapist Training, and Clinical Judgment

We’ve pulled together resources mentioned in this episode and put together some handy-dandy links. Please note that some of the links below may be affiliate links, so if you purchase after clicking below, we may get a little bit of cash in our pockets. We thank you in advance!

(Research citations are listed below.)

Continuing Education Information

Hey modern therapists, we’re so excited to offer the opportunity for 1 unit of continuing education for this podcast episode – Therapy Reimagined is bringing you the Modern Therapist Learning Community!

Once you’ve listened to this episode, to get CE credit you just need to:

  1. Go to moderntherapistcommunity.com
  2. Register for your free profile
  3. Purchase this course
  4. Pass the post-test
  5. Complete the evaluation

Once completed, your CE certificate will appear in your profile and can be downloaded for your records.

You can find this full course (including handouts and resources) here: https://learn.moderntherapistcommunity.com/courses/training-therapists-in-the-age-of-ai-preventing-deskilling-and-teaching-clinical-judgment

Continuing Education Approvals

At the time of airing this podcast episode, we have the following CE approval:

Therapy Reimagined is approved by the California Association of Marriage and Family Therapists to sponsor continuing education for LMFTs, LPCCs, LCSWs, and LEPs (CAMFT CEPA provider #132270). Therapy Reimagined maintains responsibility for this program and its content. Courses meet the qualifications for continuing education credit as required by the California Board of Behavioral Sciences. Please check with your licensing board to confirm eligibility.

Please check back as we add other approval bodies: Continuing Education Information (including grievance and refund policies).

References Mentioned in This Continuing Education Podcast

Relevant Episodes of MTSG Podcast

Meet the Hosts: Curt Widhalm & Katie Vernoy

Picture of Curt Widhalm, LMFT, co-host of the Modern Therapist's Survival Guide podcast; a nice young man with a glorious beard.

Curt Widhalm, LMFT

Curt Widhalm is in private practice in the Los Angeles area. He is the cofounder of the Therapy Reimagined conference, an Adjunct Professor at Pepperdine University and CSUN, a former Subject Matter Expert for the California Board of Behavioral Sciences, former CFO of the California Association of Marriage and Family Therapists, and a loving husband and father. He is 1/2 great person, 1/2 provocateur, and 1/2 geek, in that order. He dabbles in the dark art of making “dad jokes” and usually has a half-empty cup of coffee somewhere nearby. Learn more at: http://www.curtwidhalm.com

Picture of Katie Vernoy, LMFT, co-host of the Modern Therapist's Survival Guide podcast.

Katie Vernoy, LMFT

Katie Vernoy is a Licensed Marriage and Family Therapist, coach, and consultant supporting leaders, visionaries, executives, and helping professionals to create sustainable careers. Katie, with Curt, has developed workshops and a conference, Therapy Reimagined, to support therapists navigating through the modern challenges of this profession. Katie is also a former President of the California Association of Marriage and Family Therapists. In her spare time, Katie is secretly siphoning off Curt’s youthful energy, so that she can take over the world. Learn more at: http://www.katievernoy.com

A Quick Note:

Our opinions are our own. We are only speaking for ourselves – except when we speak for each other, or over each other. We’re working on it.

Our guests are also only speaking for themselves and have their own opinions. We aren’t trying to take their voice, and no one speaks for us either. Mostly because they don’t want to, but hey.

Join the Modern Therapist Community:

Linktree

Patreon | Buy Me A Coffee

Podcast Homepage | Therapy Reimagined Homepage

Facebook | Facebook Group | Instagram | YouTube | LinkedIn | Substack

Consultation services with Curt Widhalm or Katie Vernoy:

The Fifty-Minute Hour

Connect with the Modern Therapist Community:

Our Facebook Group – The Modern Therapists Group

Modern Therapist’s Survival Guide Creative Credits:

Voice Over by DW McCann https://www.facebook.com/McCannDW/

Music by Crystal Grooms Mangano https://groomsymusic.com/

 

Transcript for this episode of the Modern Therapist’s Survival Guide podcast (Autogenerated):

Transcripts do not include advertisements, just a reference to the advertising break (as such, timing does not account for advertisements).

… 0:00
(Opening Advertisement)

Announcer 0:00
You’re listening to the Modern Therapist’s Survival Guide, where therapists live, breathe and practice as human beings. To support you as a whole person and a therapist, here are your hosts, Curt Widhalm and Katie Vernoy.

Curt Widhalm 0:15
Hey, modern therapists, we’re so excited to offer the opportunity for one unit of continuing education for this podcast episode. Once you’ve listened to this episode, to get CE credit, you just need to go to moderntherapistcommunity.com, register for your free profile, purchase this course, pass the post test and complete the evaluation. Once that’s all completed, you’ll get a CE certificate in your profile, or you can download it for your records. For a current list of our CE approvals, check out moderntherapistcommunity.com.

Katie Vernoy 0:47
Once again, hop over to moderntherapistcommunity.com for 1 CE, once you’ve listened.

Curt Widhalm 0:55
Welcome back, modern therapists. This is the Modern Therapist’s Survival Guide. I’m Curt Widhalm with Katie Vernoy, and this is the podcast where we are doing CEs about things that go on in the therapist world. In this episode, we are addressing the concept of therapist de-skilling, especially as it comes to artificial intelligence and the ways that human nature leads people to rely on some of the tools around us, and in doing so, we end up losing some of the skills that really make us unique, especially when it comes to clinical decisions and the clinical aspects of what we do as therapists. We are diving into this episode with the intention that this is an episode really geared more towards educators and supervisors, but it really applies to us all, as artificial intelligence tools of all kinds come across in the ways that we work, offering us many shortcuts to get some of our tasks done faster, getting to the end of them faster, getting to better clinical outcomes, supposedly faster. We’re going to dive into how we can prepare new students and new supervisees to not lose some of the skills that make therapists therapists. As we look through this episode, I want to contextualize where Katie and I are coming from on this a little bit. I used to be a professor at a couple of different schools in the Los Angeles area. I served as an educator for, I don’t know, about 10 years in total, across three different universities. I am a California Association of Marriage and Family Therapists certified supervisor. I have supervisees in my practice. I have supervised in agencies. So I come from some of the areas of fostering new people into our profession, and occasionally I record podcasts and talk about the state of the field. So Katie, do you want to frame where you’re coming from in this episode?

Katie Vernoy 3:12
Sure, I can talk a little bit about that. The background that I have for supervision is I was a supervisor for many years, and then also a supervisor of supervisors. I managed a lot of folks who managed a lot of folks, and worked in community mental health. That was where I was doing a lot of supervision. I also am a clinician who has started incorporating AI into my practice, so I have some insight into what that looks like. And also have a few podcast episodes that I’ve done talking about the state of our profession. So.

Curt Widhalm 3:47
Part of the idea behind this episode came from I was part of a panel for the California Board of Behavioral Sciences. It was talking about the introduction of artificial intelligence to the therapy professions, what it means for consumers, what it means for therapists, concerns that therapists have. And I was there as a representative of CAMFT because through our recent work and our recent processes, Katie and I are on the CAMFT ethics committee, we were big proponents in pushing CAMFT to adopt ethical guidelines around the use of artificial intelligence. That’s how I ended up on this panel, and talking about some of the concerns, just overall in the delivery, and a lot of it came from talking points on this very podcast. And as that panel was wrapping up there was some commentary from some of the audience members, and one specific person had asked a question about, what are some of the concerns around therapists de-skilling, or the loss of some of the skills, and what are some of the concerns that are posed by artificial intelligence? And I thought that is a great topic for a podcast episode, and that’s how we arrived here. Now, largely what we’re going to be talking about as far as artificial intelligence today, and particularly some of the artificial intelligence tools are computer systems that perform tasks typically requiring human cognition: pattern recognition, language processing, decision making that supports clinical workflows and patient care. 
This isn’t necessarily about, you know, replacement of therapy skills such as human empathy, though we are going to talk about human empathy in this episode as something that therapists do. AI for clinicians, in a lot of what is available today at the time of recording, is largely going to come in the forms of natural language processing, machine learning, and generative AI type models. Some of these tools show up as scribes and documentation, but there are other AI tools out there that are decision making, that help develop treatment plans, that help develop diagnoses, that serve as digital therapeutics and in-between-session supports. And what we hope to do in this episode is to conceptualize that these are tools that exist, people are going to use them, and that we as professionals can’t approach educating the new and upcoming therapists with a blanket “don’t use those tools at all.” As an educator, I know these are tools that students are going to use. Why argue against them? Why try to out-human other humans when, yeah, they’re going to use them? So this is about teaching how to use artificial intelligence tools appropriately, not just relying on them.

Katie Vernoy 7:09
One of the thoughts that came to mind, and this is just an add-on to your point, so I’ll be brief: in going to the Behavioral Health Tech conference a couple years in a row, it is not just that people are going to use these things. I think that there is going to be a point at which therapists are required to use AI, especially depending on how they’re working, who they’re working for, those types of things. I do not believe that we will be able to avoid AI being a part of our processes. So it’s really important for any therapist, and actually, I disagree. I don’t think this is just for supervisors and educators. I think any therapist who is using AI in their practice needs to listen to this episode, to be able to talk about, or to be able to identify, tools to keep themselves thinking critically, having strong clinical skills, all of those things. I think this is a really important episode, and I’m glad we’re doing it.

Curt Widhalm 8:08
So, because this is a continuing education episode, and because this is asynchronous learning, you are going to have to take a quiz at the end if you are wanting to get your CE credits for it. And I know a guy who knows something about the quiz, and you should really pay attention to one of these first points, because it might show up on that quiz. So, wanting to lay out some of the structure for this about de-skilling in and of itself: I want to say that de-skilling is not necessarily an individual problem. It’s not necessarily born of individual laziness. It is human nature to want to be able to do things more efficiently, faster, sound smarter. I use artificial intelligence oftentimes to change the tone of how I’m presenting things. I end up putting things into Gemini as my tool of choice a lot of times. And I say, Does this sound too much like Curt Widhalm? And it says, Curt Widhalm comes across as very pragmatic. Here’s ways that he can soften his language. So I use artificial intelligence sometimes, even in this very podcast, to be able to kind of sound warmer than maybe I naturally am.

Katie Vernoy 9:32
You’re using AI to mask.

Curt Widhalm 9:34
I’m using AI to mask, and I have no shame in saying that.

Katie Vernoy 9:39
All right.

Curt Widhalm 9:42
And I have noticed that in doing this, actually, this is completely off script, that it has kind of made me a more personable person overall, so it has kind of had some influence on it. So.

Katie Vernoy 9:56
You’ve drunk the Kool Aid.

Curt Widhalm 9:57
I have learned in really looking at my language, that there are ways that I can come across warmer.

Katie Vernoy 10:04
That’s, that’s wonderful.

Curt Widhalm 10:06
But this is not just an individual laziness problem. This is something that we really also have to conceptualize as a structural issue: across the therapeutic lifespan, we have systems in place that prioritize efficiency and output over the actual effortful human process that goes into this. Think about the ways that insurance wants you to spend as few sessions as possible in order to get clients to their outcomes. Think about the ways that DMH funding might be based on outcome metrics rather than, did you really have human contact with the client in session? But even as part of the educational system, a lot of academia over the last several years has grappled with how do we anti-AI these courses, and really what that has brought to the surface is that a lot of education is focused on getting the right answer rather than the right process to get to the answer. So the context of where we’re starting is: there are systemic things in place that make a novice or even a seasoned therapist very likely to rely on AI tools to get to saying the right thing or getting the right result.

Katie Vernoy 11:43
And this isn’t a new phenomenon. I just want to reflect on calculators, the internet, Google search…

Curt Widhalm 11:51
Internal combustion engines.

Katie Vernoy 11:53
Internal combustion engines. I think that there are not just systemic issues, but also human issues. I mean, I guess it’s a different type of system, that we want to make things more efficient. Tying our shoes: we have to effortfully do it, and then it becomes a single process instead of five or six. So I think that there’s part of this that is structural, and, I think you didn’t use this word, but I will, kind of nefarious, and there’s part of it that potentially is natural progress. And so for me, one of the frames that I’m putting on this episode is: how do we avoid the nefarious structural issues, and how do we understand and navigate through the normal process of progress and advancement?

Curt Widhalm 12:48
Another thing that we are taking into account is that a lot of the emphasis that we have as educators or as supervisors inadvertently contributes to something that in the literature is called the competence paradox. This is where AI can create an illusion of competence, especially for novice or beginner therapists trying to show that they have the right answer. It might be to relieve their own anxieties. It might be to appease a professor, to get the right answer for a supervisor, so that they can feel good and get another, you know, stitch in their cardigan or something like that. But a lot of these are about performance, getting to the right answer sooner and faster. And it is just what it appears: it appears competent, but it might not have the underlying clinical judgment and foundational skills to actually get there. And if these are never developed, newer students and newer trainees can become over-reliant on artificial intelligence overall. For those of us who are more seasoned, if we are relying on artificial intelligence, it can actually lead to skill decay, or actual atrophy of some of the skills that we have. Now, even one of the problems that exists with the clinical exams is being able to put the right answer down in some sort of multiple choice exam. And many of us who have taken those tests, many of us who have written the tests, people like myself, I think we can all agree that that is not representative of what actually being a therapist is whatsoever.

Katie Vernoy 14:42
Nope.

Curt Widhalm 14:43
And that’s because a lot of psychotherapy and being a good therapist is tacit knowledge. It’s sitting through messiness. It’s sitting through ambiguity. And if you think about the output that you get from a lot of artificial intelligence, it’s very clean feedback, and it doesn’t necessarily operate in the messiness. I was reflecting to Katie before we pushed record that in nearly 20 years of practice, I think that I’ve only ever had one client that has followed a treatment plan step by step, and successfully gone from intervention A to intervention L, in order, without something messy coming up and diverting some of the opportunities that we were looking at. So a lot of expertise comes from needing to operate in that messiness. We also have talked on this podcast before about the risk of just following automation bias, that whatever AI ends up putting out would seem to be right, and to the untrained ear and the untrained eye, it probably does sound right. Couple that with the way that AI tends to exhibit a lot of sycophancy. I like this word. It’s fancy, and it even tells you it’s fancy: sycophancy, where it’s providing constant validation rather than critical challenges. If the AI is constantly just telling you, Hey, that’s a good question. Hey, you’re right, you’re moving down the right pathway. It does not challenge us in the ways that we want to learn, or the ways that we know that people learn, which is actually kind of struggling through stuff. And as Katie points out, there’s probably going to be a day at some point or another where there’s a shift in our professional identity, especially when the expectation of utilizing artificial intelligence tools becomes part of the profession. We want to know that therapists are more than just spectators of tools that might be part of this; we are still going to be responsible, active, ethical agents.
And a lot of the ethical updates that Katie and I helped to get CAMFT to adopt were around therapists needing to have responsibility over the outputs of artificial intelligence that they are signing off on, that they have checked it, they have done this, and it keeps the human at the center of the therapy delivery system.

… 17:31
(Advertisement Break)

Katie Vernoy 17:33
Much of this seems like we could be talking about the internet, or we could be talking about some of these other things. I think sycophancy is pretty unique to AI, so I don’t think Google’s like, hey, great question.

Curt Widhalm 17:46
That is a great point, Katie.

Katie Vernoy 17:49
Thank you. Thank you. There has been a journey we’ve all gone on as human cojourners. I don’t even know if that’s the right word, fellow journey people? Anyway, where we’ve had to identify: how do we incorporate internet search, or how do we incorporate using a calculator, whatever it is; we can use a typewriter. How do we incorporate those things into our responsibilities, and what becomes an individual responsibility that we need to learn, and what is a systemic responsibility? And I think the spectators and AI auditors that you were talking about, no matter what, we’re going to have to do those things; they should not be our only role, and we should be active ethical agents. And I do think that there is a responsibility of the system, and those of us who work to impact the system, to try to make sure that the tools that we have are not solely sycophants with, you know, step by step instructions that give us morally questionable or ethically wrong answers. I think there’s those things on both sides. Now, I’ll leave that there, because I think this is about how do we teach folks to use it. But I do think that there is just a call to action for those of us who have some of this information, that we try to impact how these tools are created, so that it’s easier and potentially less important for clinicians to have to do some of this that you’re talking about.

Curt Widhalm 19:26
One of the papers that I came across in preparing for this episode is by Avigail Ferdman. This is in AI & Society, called “AI deskilling is a structural problem.” This is from 2025, and part of me absolutely loves what this paper starts with, which is talking about Plato. Anytime that we can talk about artificial intelligence and then immediately go back a couple of thousand years, you’ve got me hooked from the very beginning. She uses Plato’s writing, in which Socrates cautions that writing will de-skill the capacity for memory in people’s minds: trusting an external entity will discourage people from using their own memory. So not only are we talking about calculators and internal combustion engines, this is literally a problem from the very beginnings of philosophical science. So.

Katie Vernoy 20:28
Yeah, if we write it down, how will we remember it? Okay.

Curt Widhalm 20:34
But the paper goes on to talk about developmental perfectionism, and really understanding that we have to use our capacities and we have to practice in order to get better. While there is a perfectionistic philosophy that we want to get to, especially when we’re a novice at something, it becomes more about getting the answer right than about the process itself. And a lot of this is just understanding how people learn skills. Many of the papers that I came across come back to the same point over and over again: the best way to learn skills is practice. It is actually doing the skill over and over until it gets to a place of better practice. Now, Katie, you pointed out a couple of times about the internet, about artificial intelligence, and what we’ve seen is that a lot of what these tools have done, and this goes back to the internet over the last, you know, 30 or 40 years, is take away a lot of the, and I’m going to put air quotes around this for those of you who can’t see me on this very audio podcast, “grunt work.” A lot of the very introductory level aspects of a number of different professions are very menial grunt work that is the building blocks of higher level skills. Now, Katie, what do you think some of the grunt work is in therapy?

Katie Vernoy 22:16
I’m sorry I got distracted by the idea of grunt work and the amount of hours I spent going through the stacks, finding articles and then copying them so that I could read them later. I was thinking what it would take to do this podcast prior to the complete adoption of the internet and scanned articles. So let me. Let me get on track.

Curt Widhalm 22:40
Well, so while you’re getting on track, one of the areas where I know a lot of the grunt work has been affected is the profession of law. It used to be that junior attorneys would have to be the ones going into a lot of old legal books and finding the laws, going to those very same copy machines and copying them, and bringing them to the more senior attorneys and saying, This is what I found on this. And what artificial intelligence, and even just a lot of internet tools, have done is removed a lot of the laborious work that junior attorneys had to do as part of the beginning steps of getting into their profession. But what those junior attorneys were learning by going through a lot of those books was reading about a lot of other cases that don’t apply. They were gaining a lot of knowledge, not just getting to the right answer real quickly. And now those senior attorneys, and likely, you know, legal assistants and paralegals and all of this kind of stuff, can just type a few things into some sort of search engine and get those articles a lot faster. It saves a few dollars, but at the cost of some really important, critical learning about the law profession.

Katie Vernoy 24:05
Yeah, I mean, I just imagine walking around the library and trying to find something, finding a different book, or talking to somebody who happens to be in my same or similar profession, quietly, because it was a library, and learning more. So there’s some of that that I think is relevant. But when you asked about the grunt work of being a therapist, I think that there are different stages of it. There’s obviously writing notes, there’s treatment planning and all the documentation relevant to that. It’s getting documents signed. I think we’ve talked about before, I used to print stuff out and have clients sign it, and now they can just go and click a button on my client portal. So I think there are a lot of different types of grunt work, and I hate to call it grunt work, but I think one thing that you’re talking about is clinical case conceptualization, and potentially writing that down. Because, you know, unlike Socrates or Plato, I don’t know that I could remember it without writing it down. But I think that there’s this other element: any work can feel like grunt work if it takes a long time and it doesn’t feel highly engaging. The work that feels most engaging to me is the face to face work with clients. That’s what I would never want to lose. But there are those sessions that I sit through and say, Okay, I’m teaching a basic skill. I have taught it to hundreds of people. This person is nodding and smiling, and I’m just continuing to teach this skill. So even though that’s face to face, one to one work, I could see skills teaching as potentially grunt work in therapy. And all of those things, full disclosure, have been made available or seen as opportunities for AI companies. So I’m not bringing those things out of nowhere, but there are different things that can be seen as grunt work.
What, what are you most concerned about, as far as the de-skilling that occurs with AI taking on some of this grunt work?

Curt Widhalm 26:19
News flash: quiz question.

Katie Vernoy 26:24
Pay attention, folks.

Curt Widhalm 26:26
So the literature talks about desirable difficulty, and these are manual tasks that we have to do to ensure that we are developing cognitive muscles that are exercised and mental scaffolding is built. So you’re talking about doing case notes, and you are a utilizer of your EHR system’s note taking feature.

Katie Vernoy 26:50
Yes.

Curt Widhalm 26:51
And I know that you have a couple of decades worth of experience doing notes. We’re going to talk about that a little bit later; you have done your duty as far as having typed out, and before that, handwritten, notes for your sessions. You have been trained on the golden thread of documentation. Okay, you’re nodding, tell us about the golden thread of documentation.

Katie Vernoy 27:19
That you have a diagnosis or some sort of assessment of what the presenting problem is and the clinical description of that. You take that and, collaboratively with the client, you create a treatment plan that works towards resolving any of the presenting problems, and your documentation shows how your session and the interventions that you’re incorporating into those sessions address the treatment plan, which then, of course, shifts the diagnosis.

Curt Widhalm 27:50
Now in my mind, I am picturing a young and eager Katie Vernoy, therapist, carrying a stack of papers that is: here is a treatment plan that we are developing together, client, in session. And then writing down afterwards on another stack of papers: here’s what we did in the session. And you are developing the skill of putting two and two together: okay, given what’s in the treatment plan and what I’ve written in this note, in the next session I am going to have to do stuff that helps to fulfill the treatment plan. This is scaffolding, and it makes you more likely to follow through on some of the tasks as they all fit together. Now there are plenty of AI tools out there that will create a treatment plan for you, and they will write those notes for you, and it doesn’t require some of the more in-depth learning: hey, I’m missing out on doing some of these aspects from the treatment plan. Well, AI is kind of saying that I’m doing it here, and all of that wonderful time that we’ve saved in documentation is just getting us to the right outcome, rather than the desirable amount of difficulty of actually learning how all of these things fit together. So what AI tools can do is really compartmentalize some of these various aspects that are the necessary building blocks to go from conceptualization and patient collaboration to delivery of some of the therapy skills. And instead, what we’re getting is: hey, I’ve saved 30 minutes on documentation for this particular client this week, because I’ve managed to type some things in, and the AI bot that my EHR provides me has spit out some fancy-sounding words that have some therapy-flavored aspects to them. Now, while you’re thinking of your response, I’m seeing you, and I’m responding to your very human interaction here; these are some of the skills that we will talk about later. But here is some of the difficult skill set that I used to give students and trainees when I was in an educator and supervisor role.
Like many universities, I would require students to record sessions, and most of the other professors would say that, you know, only about half of their students would even actually listen to those sessions again before they turned them in. Now, in order to get around this, what I would do is have my students or my supervisees transcribe their sessions. Before some of the internet tools that would provide transcriptions existed, they would have to sit there and listen to the session over and over again, and they would come into supervision and dramatically just fling some papers at me and say: I am actually a terrible therapist. What they were learning in this process is how often they were annoyed with what they were saying. They were finding that they were drifting off the topic of the session, that they were missing things that clients were saying, and they were learning some of these desirable-difficulty skills to be able to go back and better track what is being said in session. Now, as some of the internet tools started to transcribe some of these sessions, and I got the feeling that some of my supervisees were not doing the hard work, what I then had them start doing is going through transcripts of their sessions and coding the types of statements that they were making. Is this Socratic questioning, or is this reflective listening? And then I would also have them explain why they were doing it in that portion of the session. What I’m talking about here is how the evolution of navigating the available tools out there corresponded with some of the very skills that I was hoping that supervisees would learn. And that’s kind of the goal of the rest of this episode: to do some of the same things with what we’re talking about with artificial intelligence today.
It’s being able to take some of these tasks that are very manualized and have our students and our supervisees continue to practice those things, so that they’re still developing these muscles, eventually getting that to such an internalized process that they can then utilize some of these tools to help shorten the amount of time that they’re spending on it. But they’re still learning these skills along the way.

… 32:41
(Advertisement Break)

Katie Vernoy 32:43
A lot of what you’re talking about requires an educator, a supervisor, whoever, to have the bandwidth and the time, and the clinical skill themselves, to be able to lead this. So I just want to argue, not argue, I just want to point out that we’ve talked to a lot of folks over the years. I think about our conversations with Dr Maelisa McCaffrey, and how many of the people who work with her say they’ve never learned how to do documentation, or they’ve never learned much about the golden thread. And I think that sometimes folks who go into private practice, if they don’t have an amazing supervisor, like Real Honest Therapy, they don’t learn the golden thread. They don’t learn some of these things. And so I just want to speak to some of the aspirational elements of this, because I know that we also have supervisees listening to this and students listening to this. And so we want to make sure that supervisors and educators are getting these skills, whether it’s with AI or just in good old documentation, to be able to do this. Because I think about what I was doing in community mental health, trying to help folks get their documentation done, all the review, all of those things. Many of them hadn’t learned it, even though they had been working for the organization for six months or whatever; their supervisor didn’t have the skills, and so I was going back through their documentation, trying to teach it to them fresh. And I think the challenge is that if there was even training by osmosis from some of the AI tools, it might be better than what folks have been getting to date. So that’s a pessimistic version of me saying: hey, all of this requires really good knowledge, training, and the ability to teach from our educators and our supervisors, and I don’t know that that’s 100% the case. So I’ll leave that there. I know that’s not the main point you’re trying to get to here.
But I also want to reflect on an experience that I had with this very process when I was going through graduate school. In our diagnostic class, we had to functionally memorize the DSM. And maybe I’ve told you this story before, but on our tests, we could not use our DSM, and we only got bare-minimum information, and had to write down questions in our blue books. Do they still have blue books now? Anyway, we had to go back and forth and get the answers from the professor. And we only were able to diagnose as well as we were able to do the differential diagnosis and ask these questions. So we didn’t start with all the information; we actually had to go and get those things. And I feel like I was a stronger diagnostician for it. You know, people have different feelings about diagnosis, so…

Curt Widhalm 36:05
Not the point of this episode.

Katie Vernoy 36:06
Or not, but I feel like that process is kind of what you’re talking about: making sure that you don’t allow the tools that will later save you time to prevent the process of learning.

Curt Widhalm 36:27
These are aspirational, and they are really trying to be part of the ideas and the solutions that address the very concerns about humans being a part of delivering therapy, and not really giving in to, you know, AI is going to replace all of us. These are practical steps that people can do, and by no means am I recommending that anybody do all of the steps that I’m talking about, but these are amongst the very steps that can help us continue to develop solid clinicians and be the best therapists that we can be. And if there are people out there who are part of educational institutions that are spitting out students who are very, very AI-reliant without developing new skills, then we can compare in 5 to 10 years who’s getting better clinical outcomes. Because this is really trying to address some of the core issues at a systemic level. And my dream is that, you know, therapy departments are going to listen to this episode and say: we really need to incorporate these principles and these kinds of actions throughout all of our courses, rather than it just being relied upon by a singular professor here or a singular supervisor there.

Katie Vernoy 37:55
Yeah.

Curt Widhalm 37:57
Part of what you’re talking about in your educational experience here is one of the recommendations, and, again, it might show up on a quiz: process-based assessments. This is less about getting to the right answer and more about the thought process behind it. Now, when I was teaching a law and ethics course, one of the first assignments that I would have students do is write down their ethical decision-making process about a case scenario, and it wasn’t one where I necessarily had a right answer; there was a variety of ways that the students could answer a particular case. What I was really trying to emphasize for the students was that it is more about your thought process and your decisions in justifying the actions that you would take in this scenario than about getting to the right answer. And students really struggled with it. For a lot of people who came out of undergraduate programs, where they’re very used to getting really quick feedback on whether they got the right answer, this was an introduction, at least in my course, to: this is really about thinking and defending your logic, rather than about getting to the right destination. Now, I know that with a lot of the AI tools that are out there, people can get into prompt writing to think through all of these steps along the way. So if I was still teaching that course, what I would be doing instead is having people do this as an oral presentation, so that rather than having the opportunity to sit for a very long time with artificial intelligence, they have to justify on their feet what their thought process is. It’s, again, not about getting to the right outcome, but it still is very much about getting through the process in a way that is defensible.

Katie Vernoy 40:05
It’s kind of the show-your-work argument, which I really appreciate. Oftentimes, when I’ve worked with folks for interviews or that kind of stuff, if somebody asks a clinical question, how would you get there? I say: walk through your rationale. Talk through what you’re considering, what you would consider, what the things are that are most important. And I have seen what you’re talking about: a move in the clinicians that I’ve interacted with, where they wanted to say yes or no, or this is the diagnosis, or whatever it is, versus kind of vulnerably, and with a lot of distress tolerance, talking through: this is my thought process. It’s not always easy, and I think it is critically important for clinicians to be able to explain why they’re doing what they’re doing and what they’re considering. And I really like this idea. Obviously there are going to be folks who are AI natives, who are able to work all the algorithms and plan for any question you might be able to ask. But if they’re using that much time and energy on AI… I think we just can’t avoid that people are going to use the tools that are at their hands, and we can only do the best that we can to try to help them learn critically and independently.

Curt Widhalm 41:33
The next piece of advice that I have is that instructors and educators should build adversarial training into their courses. Now, I’m not talking about having students get up and do Fight Club in front of their peers. Instead, what I’m recommending is that, rather than simply teaching students how to use artificial intelligence, you give assignments where there are broken or biased AI-generated plans and require the students to identify the errors and justify the corrections that they would make.

Katie Vernoy 42:14
I think that’s so important. This is probably one of my favorite ones.

Curt Widhalm 42:18
And I’m so glad that this is one of your favorite ones, because I want to demonstrate with you what this might actually look like.

Katie Vernoy 42:26
Okay.

Curt Widhalm 42:27
In an AI-generated treatment plan, the patient profile is a 24-year-old male presenting with moderate depressive symptoms, social withdrawal, and high self-criticism. The AI comes back with a diagnosis of major depressive disorder, single episode, with treatment goals of: increase social activity to three times per week; goal number two, practice positive affirmations daily to counter self-criticism; and utilize standard cognitive behavioral therapy workbook modules as interventions. I realize that this is a little unfair to you, because I am just reading this to you, but tell me your thoughts as a seasoned clinician about this.

Katie Vernoy 43:13
So we know that it’s a 24 year old male. Is there anything else that we know?

Curt Widhalm 43:20
Nope.

Katie Vernoy 43:21
Okay. So first off, I would want to query to understand any other demographic information, looking for cultural impacts, cultural things, other demographic types of things, including disability or other things that could impact some of the recommendations. It also just says that there’s a depressive episode going on right now. What’s the information about the mental health concerns?

Curt Widhalm 43:54
They are presenting with moderate depressive symptoms, social withdrawal and high self criticism.

Katie Vernoy 44:00
Okay. So those things are just labels; there’s not enough for me to know what’s going on. So if I put that in, then maybe I would stick with: okay, they’ve got some depressive symptoms, social withdrawal, and high self-criticism. But I would want to know what else is going on. Has there been any trauma? Is there any grief or loss that’s happening? I’d want to understand what those depressive symptoms mean. There’s also the element that I don’t hear anything related to suicidal ideation or assessment for that. So I’d also want to make sure I’m looking at any crisis factors. If we’ve got social isolation and self-criticism and depressive symptoms, which aren’t named, I would really want to make sure that I’m doing a full assessment on any risk factors. Also, being male, and I would do this with all genders, but primarily thinking about male folks in their 20s, I’d want to look also at any homicidal ideation or any psychosis, or any other things that might be going along with this diagnostic picture. I’m going all out, Curt, so…

Curt Widhalm 45:14
You are, and so far you’re hitting most of the points that I’ve written down about this. So keep going.

Katie Vernoy 45:19
I was originally trained to do an oral exam in California, and then they got rid of it, which I was quite relieved about. But I was trained to do this a little bit. So I would want to look at any other factors that might lead to the depression, any other risk factors. And I would also want to be able to identify community supports. The treatment recommendations were increase activity and affirmations, and then was there another one?

Curt Widhalm 45:49
Those were increase social activity to three times per week and practice positive affirmations daily to counter self-criticism. And the intervention suggestion is using a standard cognitive behavioral therapy workbook.

Katie Vernoy 46:01
Okay, so: just go out and do stuff, kiddo. And notice how I feel very old calling a 24-year-old kiddo. But okay. So social activity is very different depending on social group and ability to access folks, and if there’s already social withdrawal, the question is: is it self-enforced, as in I don’t want to be around people, or is it socially enforced, because there’s something about this individual that is not socially accepted? So there’s looking at that. There are also safety issues around social activity, especially in today’s day and age. If we don’t know their culture, gender identity, or any kind of sexuality issues, we don’t know if they’re going to be accepted by society, and how easy that would be. And then the self-affirmations: I’m not opposed to that, but it seems like weak sauce. I mean, that’s what we got? Go out and do stuff and tell yourself nice things?

Curt Widhalm 47:06
So Katie, considering that I threw this at you absolutely out of nowhere.

Katie Vernoy 47:12
Well, I had one more thing, so let me just add the last thing: depending on culture, CBT might be completely wrong. It’s not generally effective, especially in Latinx cultures. So okay, that was my last thing.

Curt Widhalm 47:30
Katie, you managed to hit just about everything; I think you’ve hit everything on the rubric that I did on this. I had the joy and the luxury of having time to write all of this. You did this verbally, orally, on the spot. And, just as complete transparency to our audience, the only editing that was done over this section of the podcast was me talking over Katie. So the only things that were really cut out were my mistakes and my excitement about this. Some of the things that I had put in, specifically looking at this, is identifying the sycophancy, which is those positive affirmations. It’s kind of a common hallucination that gets put out, especially by a lot of free artificial intelligence: being able to just say, well, this seems to fit with people who negatively talk about themselves. Having more time, I’m sure that you would arrive at this yourself: especially for a lot of highly self-critical individuals, positive affirmations can actually worsen their mood and their depression.

Katie Vernoy 48:42
Yeah.

Curt Widhalm 48:43
The jumping straight to major depressive disorder as a diagnosis without considering a whole host of other diagnostics. The jumping to, you know, some sort of efficiency language: oh, go and be social three times per week, which is just a kind of meaningless, efficiency-based metric. And then there’s identifying trauma history, as well as screening for suicidality, that seems to be missing. This is the kind of work that I would hope a lot of educators would be doing: showing the limits in this, and not just showing it, but having assignments based on critiquing AI output, so that it’s training clinicians to not just accept at face value what is being presented by artificial intelligence.

Katie Vernoy 49:40
And in doing this, I just want to reflect on it really quickly. Part of it was knowing that there was stuff missing, but having a foundation, I think, actually was helpful for me to think harder. And so I think there is a possibility that we could not only delay de-skilling, or try to limit de-skilling, we could potentially also use these tools to get folks to think harder and potentially increase skills, because the foundational ones are going to be hit, the very superficial ones are going to be hit. So you don’t have all the, I keep calling them kids, all the young therapists saying: oh, well, I would recommend CBT. That’s already on the table. So now you have to think a level higher or a level deeper. You have to think about what could be missing here, and that actually is a different skill set than even just starting from scratch and doing something that’s good enough.

Curt Widhalm 50:44
And I’m also the kind of professor who would throw in examples in some of these assignments where the AI is actually correct, because it’s not just going in and assuming that every single AI output is going to have bias; it’s being able to really critically evaluate when some of these tools get it right. Now, another thing that may show up in your quiz: part of what we demonstrated in this as well is algorithmic literacy, and this is teaching some of the mechanics of AI, not just teaching that there is data bias and that there are hallucinations, but showing how these tools actually develop these hallucinations. And part of that is actively being able to point out, in the thought process that Katie did so well here, you know, that just-seems-like-weak-sauce sort of stuff. You know, that’s professional education language there.

Katie Vernoy 51:45
Yes, exactly.

Curt Widhalm 51:47
But teaching algorithmic literacy is one of the steps that should be incorporated into our education courses. And part of what we’ve talked about as far as doing oral presentations is what is also discussed in the literature as naked practice modules. Not naked that way, folks, but…

Katie Vernoy 52:10
Sounds very inappropriate, but it’s not.

Curt Widhalm 52:12
What it means is specific intervals or specific assignments where all digital tools are removed, to ensure that students are still functioning based on internalized knowledge. Now, all of these things might sound like: make people practice in the old-school way. And it is a little bit of taking the old-school way of learning, but it’s using the old-school way of learning by doing, and also doing that as it applies to artificial intelligence.

… 52:44
(Advertisement Break)

Katie Vernoy 52:45
Here’s where I just want to make a comment. I don’t know if it was just before we recorded or if I’ve already said it on the podcast, but there is a time at which naked practice modules, or those types of things, might feel like trying to do math with an abacus. And so I think that these things are going to continue to evolve, and understanding what the key critical skills are, and how we continue to make sure that folks are learning them while using the appropriate tools, and being able to do their work without relying so heavily on the tools, I think that assessment is going to need to continue. So these are recommendations that I think will stand up over time, but they will also need to be looked at critically, to make sure that it’s not just sounding like: let’s do this the old-school way. But: let’s make sure that you have the tools that are needed, the skills that are needed, to be a current, modern therapist, working in a way that’s effective and clinically appropriate, all of those things.

Curt Widhalm 53:51
So I’m gonna use the last part of this episode to talk about strategies for clinical supervisors, or agency directors around your licensed folks, or, if you are an independent practitioner, maybe some exercises that you can do just to continue to develop your skills and continue to work on your competency. The role of supervisors is not just to look over somebody’s documentation and make sure that they’re hitting metrics. We have a whole host of previous episodes where we talk about what supervision should be; we can include those in our show notes over at mtsgpodcast.com. But the role of supervision is scaffolding the skills of a pre-licensee, so that that therapist can eventually stand on their own. Some of the strategies that clinical supervisors can use come with the goal of maintaining a human-to-human feedback loop throughout this process. Part of this is ensuring that supervision is not just overseeing that we need to hit some level of metrics. That might be some kind of a starting point, as far as being able to evaluate how people are getting to their goals, but it’s really also utilizing supervision with a focal point on the art aspects of therapy: empathy, the alliance, the nuance, being able to track human expressions and human emotions that develop throughout the session, and ensuring that those are entered into the conceptualization about how interventions are working with a particular client.

Katie Vernoy 55:40
One of the companies that I talked to at Behavioral Health Tech, and I’m assuming that there are other folks doing stuff that’s similar. Actually, I was joking that it was like their clinical best buddy AI bot, but I could see it also being used for supervision. And so, when you were talking about supervision not being replaced by metrics, that’s what it says on our little thing that we’re reading from. I think that there is a way in which supervisors could use tools like these to enhance their supervision, but I fundamentally agree that the human-to-human feedback loop is going to be necessary. But just to give a little picture of what that looks like: a session is recorded, the notes written, but you can also go in and query: What did I miss? What opportunities did I miss? What clinical information did I miss? What questions should I have asked? There are ways in which AI is actually potentially augmenting supervision. And so supervisors should be aware of those tools, fully vet those tools before letting their supervisees use them, and then also sort through how that information might help both the clinician and the supervisor continue to address empathy, alliance, and nuance, as well as just raw clinical skill. And so I think there’s that element: more and more, AI is going to be able to do a lot of supervision stuff. Not just did you talk more than the client, but, you know, what percentage were you reflecting versus summarizing versus whatever? What opportunities did you miss? All of those things. Now, I haven’t fully tested that tool; that’s why I’m not going to mention it by name. But I think there is feasibly a lot more data that can be used than a supervisor watching a random video of a session. It should not, though, become a crutch or a mechanism of de-skilling our supervisors.

Curt Widhalm 57:47
There’s a wonderful article by Sherrill et al. called “Teaming with Artificial Intelligence to Learn and Sustain Psychotherapy Delivery Skills: Workplace, Ethical, and Research Implications,” from the Journal of Technology in Behavioral Science, 2024, that talks about some of the ways this can be used, not just in supervision, but also in workplace skills overall: being able to track such important things as supervisor availability and clinician warmth in delivery to their clients. That balances some of the data tracking with things that AI cannot do in some of the aspects of being in a workplace. The next recommendation for supervisors is requiring explanations and audits on such tasks as treatment plans: not accepting AI-assisted treatment plans unless the supervisee can trace the logic and provide the human audit of why they accept or reject a machine’s suggestions. We’ve kind of demonstrated this a couple of times in this episode, and, looking at how we’re getting close to the end of our time here, it’s around not just accepting that CBT is the recommendation, because a lot of the internet points to CBT as the answer for everything, and if CBT isn’t the answer, the answer is more CBT, so…

Katie Vernoy 59:22
Or 42.

Curt Widhalm 59:23
Yes. But requiring that supervisees trace the logic behind why an intervention is suggested, rather than just accepting at face value the output that is some of that therapy-flavored language, is a way of ensuring that there isn’t an over-reliance, and that it actually develops some of these skills. Similar to what we and the research that we’re basing this episode on recommend for educators, there’s having rules around identifying when AI is wrong or not optimal. This can be done as part of supervision; it can be done as a training aspect of supervision. And, similar to how we talked about this during the educator portion, there’s doing some skills resilience testing, where supervisees need to perform core clinical duties without artificial intelligence tools available to them. This can be done as part of supervision as well. It can be case presentations; it can be on-the-spot sorts of quizzes. These are such things as naked assessments, naked presentations. And by naked, I mean without tools; again, nothing that would come close to being actually naked with a supervisee. But supervisors in particular, in addition to educators, really need to emphasize the role articulation that AI is an extension of the practitioner, not a peer. Supervisors are in the particular position to reinforce that legal and ethical accountability remains solely with the human therapist who is signing off on the output and the delivery of treatment, and that it is indefensible to say: well, I did it because AI told me to.

Katie Vernoy 1:01:29
And that’s aligned with kind of what ended up in the California Association of Marriage and Family Therapists’ ethics code. The most important thing is that you are responsible for every single output. You can’t say: well, I thought it was good enough because AI said it was. But before we finish up, because we are getting pretty short on time: you said manual fallback plans, I think, or scenarios where we go to that place. And I know you mentioned the naked assessment, or naked initial assessment, and that makes sense: starting your assessment of a client without having any AI tools, and then adding them later, so that you have to at least start thinking about it clinically before being able to rely on other tools to help you with it. What other manual fallback plans might there be that folks can look at?

Curt Widhalm 1:02:21
So, similar to how we were talking about this with educators, supervisors can set aside one supervision session per month where therapists are required to draft a full case formulation or treatment plan entirely without the assistance of AI templates or autofill functions, as a way of not cognitively offloading some of these tasks. This helps to exercise the skills that therapists have to have in order to manually synthesize complex patient data, and the supervisor can then assess how the supervisee is doing and ensure that those skills are being developed, or are not atrophying. Supervisors can also require supervisees to write one to two sentences explaining why they are or are not accepting an AI suggestion, with specific patient-reported data that the AI might have misinterpreted. Sometimes artificial intelligence can create vague-sounding language that might seem to apply, so have supervisees critically explain their thought processes about why something might fit or might not fit with a particular client.

Katie Vernoy 1:03:45
And this can be especially important if you have clients who have more complex or non-typical presentations. I’m thinking neurodivergent clients, or even potentially different types of sexual orientation or gender identity. I think a lot of AI scribes and other types of AI tools are being trained for diversity, but some aren’t. And so your notes might have an assumption of who’s a sister or a brother, or of a gender. There might be details to pay attention to, but there also can be bias that comes in through that, based on not understanding what the client is actually saying, because the tool doesn’t have the full context, because it’s not that heteronormative, white, cis kind of presentation. And so I think it’s really important to think critically. And I love this idea of writing the one to two sentences about why you’re accepting it or not.

Curt Widhalm 1:04:50
And the last one that I will recommend here is divergent thinking workshops. This can be done as a staff training or in group supervision: a team is presented with an AI-generated standard treatment plan for a hypothetical complex case, and the therapists are asked to identify non-standard or idiosyncratic patient needs that the AI failed to address. A lot of what you just outlined, Katie, applies here, specifically looking for cultural nuances or other contraindications that might show up with a particular client. This helps develop the differential thinking that needs to be in place in order to craft therapy specifically to the case being discussed.

Katie Vernoy 1:05:44
Before we finish up, you do have another one listed here that I think we should mention: potentially conducting a paper-only day, where all AI support systems are turned off. I wouldn't do it twice a year; I may not even do it that often. I get nervous about that, because there are some things where, if technology fails, we just have to cancel the session. But there have been times when my technology hasn't worked: an AI scribe recording got recorded over, or I didn't hit record properly, or whatever it is, and I still had to complete those notes and all of that work. Doing that as a matter of course, to make sure that your skills are not getting lost and that you know how to do the work and not just hit a button, I think is important.

Curt Widhalm 1:06:32
And the extension of that is that it really does allow people in leadership or supervision roles to see which staff struggle the most without AI, and then to craft specific learning plans for those particular employees.

Katie Vernoy 1:06:48
Yeah. So I think it is important to do some of those failure simulations so that you can actually address those skill sets and understand what is being lost.

Curt Widhalm 1:07:00
Listen to the beginning and the end of the episode to learn how to get CE credits from us. You can find our references both in the course in our learning community and in our show notes over at mtsgpodcast.com. Follow us on our social media. Join our Facebook group, the Modern Therapist Group, to continue this and other conversations. Follow us on Substack and LinkedIn. And if you want us to come and train your educators or agency on implementing some of these skills, so that your agency doesn't get de-skilled, you can reach out to Katie and me. We do trainings as well. Until next time, I'm Curt Widhalm with Katie Vernoy.

… 1:07:44
(Advertisement Break)

Katie Vernoy 1:07:47
Just a quick reminder. If you’d like one unit of continuing education for listening to this episode, go to moderntherapistscommunity.com, purchase this course, and pass the post-test. A CE certificate will appear in your profile once you’ve successfully completed the steps.

Curt Widhalm 1:08:02
Once again, that’s moderntherapistscommunity.com.

Announcer 1:08:06
Thank you for listening to the Modern Therapist’s Survival Guide. Learn more about who we are and what we do at mtsgpodcast.com. You can also join us on Facebook and Twitter, and please don’t forget to subscribe so you don’t miss any of our episodes.
