Photo ID: A server room with a photo of Alyssa Dietz to one side and text overlay

The Advances in Artificial Intelligence for Mental Health: An interview with Dr. Alyssa Dietz

Curt and Katie chat with Dr. Alyssa Dietz, a clinical psychologist and digital mental health expert, about the evolving role of AI in therapy. Dr. Dietz discusses how AI can enhance therapy by delivering evidence-based care, particularly in structured approaches like CBT, while acknowledging its limitations with complex, multi-diagnosis cases. She emphasizes the need for collaboration between therapists and AI developers to ensure ethical, patient-centered innovation in digital mental health.

Transcript

Click here to scroll to the podcast transcript.

(Show notes provided in collaboration with Otter.ai and ChatGPT.)

An Interview with Dr. Alyssa Dietz, Head of US Clinical Strategy at ieso Digital Health

Dr. Alyssa Dietz - headshotDr. Alyssa Dietz is Head of US Clinical Strategy at ieso Digital Health. She works cross-functionally to envision, plan, and bring solutions that prioritize patients’ needs to the US market. A clinical psychologist by training, she began her career as a professor where she balanced treating patients, teaching courses, training therapists, and conducting research on the intersection of mental health and digital interventions. She has worked in a wide range of clinical settings including private practice, residential treatment, university-based behavioral health, the VA, community clinics, and academic medical centers. Looking to make a stronger impact by connecting treatments that work to the people who need them, she transitioned to industry first at Pear Therapeutics and then Twill (Happify), holding roles across medical affairs, corporate strategy, and clinical strategy.

In this podcast episode, we talk with Dr. Alyssa Dietz about Advances in AI for Therapy

Artificial Intelligence has already come for mental healthcare. The question is, what should therapists be doing about it? We dig into what AI therapy looks like from the inside, from a clinician’s perspective.

AI’s Growing Role in Therapy

  • AI can enhance therapy but won’t replace human therapists soon.
  • AI is particularly effective in skills-based therapies like CBT.
  • AI currently struggles with complex diagnoses and comorbidities.
  • The most active users of digital mental health tools are 50-60 years old, surprising many industry experts.

How AI is Changing the Therapist’s Role

“Personally, I think [only working on the most challenging cases is] a recipe for burnout. If we take that approach of: we only want therapists doing the hardest, most complex, most difficult cases, and we want them to do, you know, 1000 of them, because the AI is delivering some of the routine care.” – Dr. Alyssa Dietz

  • AI can automate administrative tasks and support therapists in data-driven decision-making.
  • Although tech and insurance folks say we can use AI to work at the “top of our license,” this approach could lead to therapist burnout.
  • Clinicians should work alongside AI developers to ensure ethical, patient-centered care.
  • AI’s judgment and decision-making remain limited, requiring human oversight.

Evaluating AI’s Effectiveness & Ethical Considerations

  • The tension between innovation, regulation, and evaluation in AI-driven therapy.
  • Importance of clear safety protocols and escalation (emergency) measures for client care.
  • AI must be rigorously tested for safety and effectiveness.
  • Understanding how AI products are trained and evaluated is critical for therapists before incorporating these clinical tools into your practice.

AI and Complex Diagnoses

  • Current AI models struggle to address comorbidities effectively.
  • Need for personalization and context-driven interventions.
  • Future AI tools must move beyond a one-size-fits-all approach.

How Therapists Can Adapt to AI in Mental Health

“I think types of work that therapists will do is likely to evolve, and I think that we should think about AI and mental health care as part of a menu of options, as opposed to something that’s gonna come in and completely take over the way that care is delivered.” – Dr. Alyssa Dietz

  • Educate yourself on AI’s capabilities and limitations.
  • Stay informed through online courses, conferences, and tech-focused therapist groups.
  • Engage with organizations like the Digital Medicine Society to understand responsible AI use.
  • Connect with the “Therapists in Tech” Slack group to network with others in the space.

 

Resources for Modern Therapists mentioned in this Podcast Episode:

We’ve pulled together resources mentioned in this episode and put together some handy-dandy links. Please note that some of the links below may be affiliate links, so if you purchase after clicking below, we may get a little bit of cash in our pockets. We thank you in advance!

ieso website – iesogroup.com

Digital Medicine Society website – dimesociety.org

Consumer Technology Association website – www.cta.tech

Therapists in Tech website – www.therapistsintech.com

DTX West Conference website – www.dtxglobalsummit.com/west

Dr. Alyssa’s Social Media – LinkedIn

 

Relevant Episodes of MTSG Podcast:

AI Therapy is Already Here: An interview with Dr. Ben Caldwell

Is AI Smart for Your Therapy Practice? The ethics of artificial intelligence in therapy

Is AI Really Ready for Therapists? An interview with Dr. Maelisa McCaffrey

The Future Is Now: Chatbots are Replacing Mental Health Workers

What Actually is Therapy?

Beyond Reimagination: Improving your client outcomes by understanding what big tech is doing right (and wrong) with mental health apps

Reporting Back from the Behavioral Health Tech 2024 Conference

 

Who we are:

Picture of Curt Widhalm, LMFT, co-host of the Modern Therapist's Survival Guide podcast; a nice young man with a glorious beard.Curt Widhalm, LMFT

Curt Widhalm is in private practice in the Los Angeles area. He is the cofounder of the Therapy Reimagined conference, an Adjunct Professor at Pepperdine University and CSUN, a former Subject Matter Expert for the California Board of Behavioral Sciences, former CFO of the California Association of Marriage and Family Therapists, and a loving husband and father. He is 1/2 great person, 1/2 provocateur, and 1/2 geek, in that order. He dabbles in the dark art of making “dad jokes” and usually has a half-empty cup of coffee somewhere nearby. Learn more at: http://www.curtwidhalm.com

Picture of Katie Vernoy, LMFT, co-host of the Modern Therapist's Survival Guide podcastKatie Vernoy, LMFT

Katie Vernoy is a Licensed Marriage and Family Therapist, coach, and consultant supporting leaders, visionaries, executives, and helping professionals to create sustainable careers. Katie, with Curt, has developed workshops and a conference, Therapy Reimagined, to support therapists navigating through the modern challenges of this profession. Katie is also a former President of the California Association of Marriage and Family Therapists. In her spare time, Katie is secretly siphoning off Curt’s youthful energy, so that she can take over the world. Learn more at: http://www.katievernoy.com

A Quick Note:

Our opinions are our own. We are only speaking for ourselves – except when we speak for each other, or over each other. We’re working on it.

Our guests are also only speaking for themselves and have their own opinions. We aren’t trying to take their voice, and no one speaks for us either. Mostly because they don’t want to, but hey.

Stay in Touch with Curt, Katie, and the whole Therapy Reimagined #TherapyMovement:

Patreon

Buy Me A Coffee

Podcast Homepage

Therapy Reimagined Homepage

Facebook

Twitter

Instagram

YouTube

Consultation services with Curt Widhalm or Katie Vernoy:

The Fifty-Minute Hour

Connect with the Modern Therapist Community:

Our Facebook Group – The Modern Therapists Group

Modern Therapist’s Survival Guide Creative Credits:

Voice Over by DW McCann https://www.facebook.com/McCannDW/

Music by Crystal Grooms Mangano https://groomsymusic.com/

Transcript for this episode of the Modern Therapist’s Survival Guide podcast (Autogenerated):

Transcripts do not include advertisements just a reference to the advertising break (as such timing does not account for advertisements).

… 0:00
(Opening Advertisement)

Announcer 0:00
You’re listening to the Modern Therapist’s Survival Guide, where therapists live, breathe and practice as human beings. To support you as a whole person and a therapist, here are your hosts, Curt Widhalm and Katie Vernoy.

Curt Widhalm 0:12
Welcome back, modern therapists. This is the Modern Therapist’s Survival Guide. I’m Curt Widhalm with Katie Vernoy, and this is the podcast for therapists about the things that go on in our profession, the things that happen in our workforce. And one of the things that we’ve been talking about a lot recently is about where AI is coming into our field, and how that is playing out. And we have done a lot of discussions around how it’s being discussed, or lack thereof in current educational situations in the schools, we’ve talked about how it looks in the research. Katie and I have been doing a lot of work on ethical codes around this, and this is one of the opportunities to help round out some of the discussion, where we are being joined by Dr. Alyssa Dietz from ieso ai to talk about it from the big bad AI companies that are coming to take all of our jobs and actually come in and talk about how the changing landscape for therapists might look as AI becomes more prevalent and accessible for those seeking out mental health services. So thank you very much for joining us and sharing your expertise on this.

Dr. Alyssa Dietz 1:27
Yes, happy to be here.

Katie Vernoy 1:29
So you and I spoke before, I kind of grabbed you at the behavioral health tech conference, and then we also had a nice conversation beforehand. So there’s some of this that we’re, that’s planned, and some of it might be a little bit more in the moment, but before we jump into our hard hitting questions, we’re going to ask the question that we ask all of our guests, which is, who are you and what are you putting out into the world?

Dr. Alyssa Dietz 1:52
Yeah. So my name is Alyssa Dietz. I’m a clinical psychologist by background. I got my start in the beginning of my career in academia, so I was teaching, training future therapists, directing a research lab, and I also maintained a case load as well, and I became frustrated by this lack of access to evidence based care. So when I looked around me, there were so many people in the US who needed access to mental health care, and they were either getting nothing, or in many cases, they were getting something, but it was not something that was consistent with best practices and standards of care. And so I moved from academia to digital mental health about six years ago, and I worked in a few different companies, and my sort of guiding principle has been that every person who wants access to evidence based care ought to be able to get it, and so that’s what sort of drives my motivation and the work that I do each day.

Curt Widhalm 2:49
How did you get into the digital space? That therapy, mental health in general, seems to be something that is kind of not really in that progressive sort of space, and you’re talking about joining this before the pandemic, when a lot of us were forced into doing some of the technology. So can you talk about how you got into it, how your perspective of where things have been has changed over the last six years?

Dr. Alyssa Dietz 3:15
Yeah, great. So I’m gonna take you on a journey back about 15 or so years ago when I started my doctoral training. So I was in a research lab that was looking at how technology could be leveraged to facilitate delivery of risk reduction programs for substance use. And if you think back, this is like 2009 ish substance use sort of prevention and early intervention programs were largely happening in schools, and there was a need for evidence based ways to intervene. And so the sort of original research was around, you may have heard of the expectancy effect, which is this idea that what alcohol does for us as a drug physically and what we believe it does for us, are not actually the same thing. They’re not congruent. And so there was an intervention developed by Alan Marla at University of Washington. He’d bring students into a bar lab give some sort of explanation, like studying the social effects of alcohol, and then everyone was going to be receiving alcohol and sort of their behavior observed. But what really happened is only about 50% of people would get alcohol, and the rest would get sort of fakes that were doctored up to smell and taste like alcohol, and then an intervention would sort of arise. At the end, there’d be a reveal, and people would have this discrepancy developed between sort of what they expect and what they experience and how that, you know, motivates their behavior. So as you can imagine, it was really hard to do this, right. So it was expensive. It was costly. Operationally, it was a nightmare. You had to pregnancy test people before you could give them alcohol or fake alcohol. There was a lot. So we were trying to figure out, like, okay, how can we deliver the impact of this intervention, but in a way that can be done much more easily? And so we developed a digital curriculum that essentially harnessed the lessons that came from that specific intervention. So my dissertation was delivering and evaluating a version of this for high school students in six high schools in Orlando, and looking at how we could leverage technology to sort of increase access to that program. And so that’s what kicked it all off for me. I was like, Oh my gosh, this is amazing. So we can have something that works, it’s effective, but we don’t have people to deliver it. Now we can use technology to make that happen in a way that we couldn’t before.

Katie Vernoy 5:35
So adding digital to mental health is obviously not new. There’s been a lot of different things. A lot of it’s been very rules based or something that’s pretty cookie cutter. You’ve got a program, somebody receives it, as long as they click the next button, they keep going along on the program. And that’s really shifted. There’s generative AI. There’s a lot of stuff that is happening that is really transforming how things work. And so therapists are terrified and are really worried that we’re going to have our jobs taken from us. There’s going to be so much that happens that is going to to really change the job that we have. And for me, I was at this behavioral health tech conference, absorbing the other perspective, which is, well, now clinicians can work at the top of their license, and they can have all of these, you know, AI co-pilots being able to take care of all the administrative stuff and teach the skills and put the curriculums forward and all of those things. And you and I have talked already about this, about how that sounds. But I just want your perspective on it, because you sit in this space of being a clinician or working as a clinician, used to work as a clinician and also promoting and working in the digital space, creating AI therapy. What do you think about that idea of of how this is being rolled out and framed?

Dr. Alyssa Dietz 7:13
It’s a great question. First of all, the top of the license thing. I think I told you before Katie, I hate that framing. And you know. So I personally, I think that’s a recipe for burnout. If we take that approach of, like, we only want therapists doing the hardest, most complex, most difficult cases, and we want them to do, you know, 1000 of them, because the AI is delivering some of the routine care. I think that’s not going to be in therapists or patients best interests, frankly, because, like I said, I think that would lead to burnout. I do think that the role of the therapist is likely to shift and evolve. First of all, I will say there’s always going to be people who don’t want to do therapy digitally, in terms of, like, on an app. They certainly are skeptical of AI and may not want to participate in that. So there’s always going to be that. Another thing I will say is that certain types of therapy lend themselves well to digital delivery, things like cognitive behavioral therapy, or other therapists that are skills based lend themselves in a way that I think that other types of therapy don’t. So I think there’s still going to be lots of room for therapists practicing other types that don’t lend themselves well. There are two sort of roles for clinicians inside these companies as well. So first of all, there is incredible need for clinicians who are experienced to be sitting alongside these other experts, these AI scientists, these software engineers, etc, to be making sure that all of the decisions that are made are with the patient at the center, and with clinical expertise and knowledge about how things need to be guided. I can give you an example of this. We were testing something. This was long before it got in front of patients, and it was for anxiety, and we were specifically targeting avoidance behavior. And so there was a question asked of the patient, like, Well, how would you cope with this? And they’re like, Oh, I’d go in my bed and cover my head in the cup with my covers and just pretend to disappear. And sort of the initial AI response was like, okay, you’ve got a coping skill that’s great. And so, you know, we needed a clinician to be like No, that undermines the entire point of what we’re trying to do with this specific thing about avoidance and escape behavior. So my point just being there will always be need for clinicians to be sitting in lockstep with AI scientists, engineers, etc, to have that clinical expertise to inform the building of these products, another area is for at least for right now, I don’t think the judgment of AI systems is where it needs to be. If you think about large language models, they can give information, but making decisions is a different story. And so when you think about some types of complex therapy, where there’s a huge amount of clinician judgment of deciding sort of what needs to happen next, and there’s more elaborate decisions than like, you know, session one is this, session two is this, so on and so forth. Again, those types of therapy, I don’t think the AI is anywhere near where it would need to be to be able to do those complex judgment makings. And so that also brings me to things like assessment and diagnosis. These things are complicated. It’s not just sort of black and white, where you check off some criteria and render a diagnosis. I think they’re still very much going to continue to persist, to be a role for therapists and psychologists in assessing, diagnosing those kinds of things, and frankly, helping to decide whether a digital approach is appropriate for an individual, right. And so I think there will be new types of roles for therapists. I think types of work that therapists will do is likely to evolve, and I think that we should think about AI and mental health care as part of a menu of options, as opposed to something that’s gonna like come in and completely take over the way that care is delivered.

… 11:15
(Advertisement Break)

Curt Widhalm 11:15
What do you see as the limitations of AI right now and within the broader scope of where AI is going? At the time of recording, even this week, the Google CEO is saying that, all right, the low hanging fruit of AI is kind of picked over. The next big leaps seem to be kind of off in the distance at this point. Where do you see these limitations of especially therapy adopting AI, and where that might look like for the next few years with these limitations in place?

Dr. Alyssa Dietz 11:53
Yeah, great question. I think the near term is that it will be absolutely used to enhance the provision of therapy. You know, something I haven’t mentioned at all is there are a lot of administrative tools that leverage AI, things like scribes, note takers, note writers, things of that nature that I think are going to continue to develop and be a more viable solution. In the near to medium term, we will continue to see the rise of AI delivering sort of content as a therapist’s extender. So not a replacement for but an extender and streamliner of the therapy that people are already providing. And I think a lot needs to happen, both in the technology, but also around like safety, ethics, responsible innovation to ensure that the things that are being developed meet the standards that we think they ought to for our patients and are appropriate to use in a context as sensitive as mental health.

Katie Vernoy 12:57
So you’re talking about AI is doesn’t have the judgment it, it can’t make decisions. And I don’t see that lasting forever.

Dr. Alyssa Dietz 13:09
Yeah. I agree with you.

Katie Vernoy 13:09
I feel like that is something that becomes algorithmic. I think it becomes you get enough of that clinical insight, intuition from hours and hours and hours of listening to sessions or transcribing sessions, or whatever it is. There’s, there is a point at which an AI therapist has a lot of utility and efficacy, and that’s, I think that’s part of the terror of a lot of therapists, and they don’t want to participate, because they don’t want to be part of creating that. And I think that we could go into why, why to help, and why not to help, but my understanding is that AI in the medical space is already providing at least more detailed information. I was sitting, you know, I had an extra medical test that I had to take because something came up on a scan. I’m fine everyone, but it was something that I think the human eye did not see, and AI saw, and so I had an extra test, and potentially that’s catching more things than medical doctors could do on their own. And so this is not new, where AI is extending competence, potentially getting into more detail. And so when you’re saying it doesn’t have judgment yet, and so therapists are doing assessments, and therapists have to make these clinical decisions, I don’t see that lasting for very long. And so when we look at AI therapy at this point, whether it’s a content extender, and I’m assuming that means providing the lessons that we do over and over and over again.

Dr. Alyssa Dietz 13:18
Right.

Katie Vernoy 13:25
Or it is making sure that your clients take your evidence based outcome measures before and after sessions, or whatever it is. Those types of things where having someone who, or having a being that does not forget, that has an automatic thing, that an automation that allows for some of these things to be done at a higher level, sounds helpful, but it does feel like it’s just steps away to this fully operational AI therapist that has most of the stuff, especially when therapists are participating. How do therapists protect themselves in this in this environment? Because I think a lot of clinicians are not, like I said, they’re not wanting to participate because they don’t want to participate in their own demise.

Dr. Alyssa Dietz 15:52
Yeah, it’s a good question. And, you know, like, it’s a meaty one, it’s complex. And like all meaty and complex answers, you know, there’s pros and cons to sort of both sides of that, right? So I think that, I think there’s going to continue to be a role. Not everyone is going to want a AI therapist. Certainly that’s not the case. And I think becoming knowledgeable to understand sort of how these systems work, what a large language model does, and how things like prompt engineering lead to what a patient could potentially eventually see, I think are really helpful topics for people to get themselves immersed in, even if they don’t ever want to work in sort of AI therapy or something along those lines. I think just understanding sort of what the capabilities and what the limitations are helps us to understand what niche areas are basically not in the near term horizon for being up for grabs for AI to be able to do. So, I think awareness is is super helpful educating ourselves. I do think that AI and other digital technologies are they have already taken off. It’s they are going to continue to expand. And again, I don’t think that that means there’s no role for therapists. I just think it’s going to look a bit different than maybe it has in the past for many. And then for others, I think they will have their niche area of delivering one to one face to face care, and that’s going to meet preferences and desires for for a lot of folks.

Curt Widhalm 17:30
We’re looking at how AI works and kind of holding it up to those rigorous evaluation standards. What kinds of metrics are we looking at that says, Yes, this is actually working. I mean, if we were to just take any company’s, you know, own research and be able to say, hey, yes, we’ve researched this thing, we give it our own stamp of approval. But how are we holding this as comparable to best practices that are out there?

Dr. Alyssa Dietz 17:59
Yeah, there is a tension between innovation, regulation, and evaluation. You know, innovation is moving quite quickly, and I think things like frameworks and and regulatory positions are really in their infancy around AI and other technologies. There is no playbook of how to do this and how to do this ethically and responsibly. Lots of people have their own opinions. What I personally think is going to be incredibly important is companies working on similar technologies, sort of working together, forming consortia to to really advocate for a responsible and ethical innovation and defining what that means, how to measure that, and then publishing their results of these types of evaluations.

Curt Widhalm 18:54
With what’s being put out there, when do you anticipate that using AI and using some of these tools actually becomes part of best practices?

Dr. Alyssa Dietz 19:06
Ooh, that’s that one’s quite, quite an interesting one to think about. I think we’re a ways off from that. I’ve been sort of an early entrant into digital health, including products that did not have AI. And the the speed of adoption and comfort with those products is very slow to slow to grow. And so I’m talking about like this is like 2018, these products were more or less digitized self help manuals. And even the skepticism around making things like that appropriate, like in best practices and clinical guidelines, has taken a very long time, despite good, strong clinical and scientific evidence. So I think it’s going to be, frankly, quite a while before AI in particular, or driven programs are thought of as part of like standards of care.

Curt Widhalm 20:04
When you think about the mental health workforce, which does tend to skew older, is this something where it’s kind of the pearl clutching older, kind of more conservative aspects of our profession that would be kind of holding that back? If we were a profession of younger 30 somethings that were more digital natives, do you think that that would be speeding up this curve around more of the adoption of this sooner?

Dr. Alyssa Dietz 20:32
It’s a great question. I don’t think it’s about therapists necessarily. I think it’s about healthcare culture. And I have a friend who likes to say slowness in healthcare is a feature, not a bug. There’s a reason that there is a slowness, right? People want to be sure. They want to feel that there’s good evidence and compelling, repeatable evidence that something is not only effective, but safe for the for the people who are going to be using them. So I don’t, I don’t necessarily think if it was older versus younger. I just think that is sort of the speed of healthcare, if you will.

Katie Vernoy 21:08
And yet, there’s incidences, I don’t know if you heard about the Tessa bot that was for the eating disorder association, and it started hallucinating and telling people who were calling an eating disorder hotline to start dieting, because that’s good and healthy. And it was, I think, based on more of this digital self help manual. It was based on a particular curriculum, and that was something that wasn’t even this generative AI that is a learning language model, versus something that’s kind of fixed and just supposed to be having some, some decision making. How are folks within the space trying to address, take care of, make sure hallucinations, glitches, client harm is not happening? Because there’s this, this element. I think part of the slowness is if I have a client at three in the morning who’s interacting with this bot, and this bot doesn’t know what to do, I’m still liable, right?

Dr. Alyssa Dietz 22:08
Right.

Katie Vernoy 22:08
If they’re my client, if I’ve, if I’ve endorsed this, or if I’m billing for it, because I some of the FDA approval ones are saying they’re gonna get billing codes for these things. How are we addressing that liability?

Dr. Alyssa Dietz 22:23
The liability piece is interesting. So it would depend on the deployment model or the go to market strategy of the individual company. So there are many, many to most, I would say, that are more clinically sound and more concerned about things like outcomes and safety are going a B to B to C model. And what I mean by that is they will sell the product to either a health plan or a health system, who then activates the end user. And so then the way that it works is a little bit different, because, for example, in the product that I work on, we have escalation protocols that are specific to the health system or the payer. So we follow protocols outlined and agreed upon by the entity that would be the one that would be liable in a situation something that would happen. So we do things that are consistent with what they want. An example I can give you is there’s a health system that we’re working on an implementation right now, so it’s top of mind, where, if somebody needs an escalation, we get in contact with the practice. It’s a primary care practice, so we get in contact with their provider and sort of help them make a plan about what to do, about what the clinical escalation is. Relatedly, if something happens sort of in the middle of the night, we have technology. I know others have similar that flag that as a risky utterance and present resources to the individual. So for example, they’ll get 911, 988, this particular implementation, they get like a nurse triage line that includes after hours lines. So I think when you’re evaluating whether or not to use a digital program as a therapist, one thing you should look at is what they say they do in the case of of an emergency or of a risky event that becomes evident. And I think that’s really one of the key differences between companies and products that are designing with the patient at the center and others that are more consumer grade, less rigorous in terms of the like clinical safety mechanisms and are less structured and have that have less oversight or other safety mechanisms in place.

Katie Vernoy 24:43
And I guess I made a very, you know, kind of complex question. I talked about liability, but there’s also the risks of hallucinations and glitches and ineffective care. And you started talking about some of the things earlier on, but if you can go into more detail around what is a rigorous positive process to determine efficacy of a AI therapy.

Dr. Alyssa Dietz 25:06
I’m going to take it sort of to the top in that the first thing to know about is what is the product or the service trained on? So is the data set from actual therapy, or have they essentially uploaded a textbook on cognitive behavior therapy and trained the language model on that? And so that sort of step one is, is this real clinical interactions? Is there subtlety and nuance because of the nature of the data set? Or is it like, you know these rules you need to follow this so on and so forth? So that’s the first thing to consider is, how is it trained? There are companies in the space that are taking an approach where there are therapists who are doing much of the prompt engineering and the building of the product. So for example, for our there’s an AI scientists are paired with our clinicians, and they sort of work together to get the the model to behave in a way that is clinically appropriate and effective. Then the next thing to understand is, what is the product, service or company doing to evaluate safety? So are they just sort of standing back and saying, seeing what happens, and will respond when something bad happens. Or do they have a plan? So, for example, we have a protocol whereby we do essentially a patient agent that’s been that acts as a patient to evaluate whether the model is responding appropriately at a high volume before a human ever gets exposure to the interactions, and then we have a graded sort of process of steps that we move through so that by the time we feel it’s ready to deploy to a user, we feel reasonably confident, I shouldn’t say reasonably confident. We feel very confident in its performance and adherence to what we’ve asked it to do. Now, there are other types of, you know, proprietary stuff around safety, filtering and monitoring that are relevant as well. But what I really want to make the point here is that there are, there’s a there’s a range of companies. Some are really consumer focused. They’re training on free data that’s available on the internet. They don’t have clinicians testing, evaluating and monitoring, and they’re, you know, slapping it in the app store and basically saying, all right, check that off. I made that product. And then there are other companies that are being extremely thoughtful and contemplative about what they’re building, how they’re building it and testing and evaluating it continuously to be confident in the performance of the visual program.

… 27:47
(Advertisement Break)

Curt Widhalm 27:51
You were saying earlier in the episode that the AI is great for things that are kind of manualized, CBT and that kind of stuff. And one of the criticisms of a lot of CBT research is that it’s very specialized around a singular diagnosis. And what many clinicians find when they enter into actual clinical work is that many people are more complex than just a DSM diagnosis that comes out there. How is AI handling things like comorbid diagnoses and multiple diagnoses in a way that is beyond just kind of the cookie cutter research that might be out there?

Dr. Alyssa Dietz 28:34
Yeah, so at this moment, I would say it probably isn’t in products that are available in the App Store right now. What many are working towards ieso included, is adding assessment in on the front end to understand the person in context. Again, it’s not going to be a human level of understanding and judgment, but if you can understand what are the maintaining mechanisms that the person is experiencing, and then configure a program that maps on to those specific things. So, you know, you talked about like, you know, comorbidities are the rule rather than the exception, once you get in clinical practice, right? And I’ll take it even a step further from that is that, you know, for depression, there are, you know, the you have to have five of nine criteria to meet a diagnostic threshold, but there are many permutations a person can have of which specific symptoms, and so their depression looks quite different than someone else’s. And so I think it’s not just about the delivery of the content, but we need to also build tools that understand the individual in context, to be able to configure a program that makes sense for them and what they’re experiencing and is personalized to to them, as opposed to this like one size fits all approach that we saw quite a bit of, especially in the early days of digital interventions.

Katie Vernoy 30:01
And if I’m understanding correctly, it seems like if it rises to a certain certain threshold, it would then be passed off to a higher level of care, potentially a clinician or crisis line, those types of things. And so it sounds like there’s a lot of thoughtfulness that’s there. One of the questions that one of my colleagues brought up to me that I thought was really interesting, especially in light of, I think there was a young man who had died by suicide after a AI relationship bot had said, go, go for it. And so when we’re looking at the relationship and how that’s navigated, people do create relationships with digital entities. How are people addressing that and making sure that that that relationship is at least understood, and if there’s like a therapeutic rupture or something, how that’s being addressed?

Dr. Alyssa Dietz 30:56
Yeah, again, I think this varies widely, depending on the company, product, etc. But I think being explicit from the front end of what it is and what it is not, and saying this is an AI powered conversational agent, as opposed to being vague about whether it’s your therapist or not. Also things like, if the individual tries to engage in a conversation about like, Oh, do you feel that way, you know, the conversational agent responding in such a way that’s like, remember, I’m not a human. I don’t have feelings, you know. Like, like, if you ask Siri, like, What’s your favorite food, she’ll say, I’m not a human. I don’t eat food, or something like that, you know. And so I think taking opportunities to reinforce that is, is important. I think also building in scripts. So, for example, there are many AI technologies, both in mental health, but just like, generally speaking, that can detect like sentiment, like, oh, this person is getting really irritated and frustrated. And so, you know, using something like that to say, like, I can see you getting frustrated of this conversation. Would you like to be facilitated, to be connected back to your primary care provider, or something along those lines, so that the person is not stuck in this loop where it’s almost like the the, you know, the call center loop, where you keep pressing, agent, agent, agent, we don’t want people to end up in that loop. So being thoughtful about that and being, you know, person centered inner design like, what would you want to experience? And making sure that those processes are built in there, so that people are A reminded that not a human, it’s a robot, but B, that there are ways for them to escalate out, or essentially exit out of the roundabout, if you will, if they want to engage with a human instead of an AI powered conversational agent.

Curt Widhalm 32:48
With where AI is at right now, what would be your advice to our listeners who might be interested in starting to incorporate AI into their work?

Dr. Alyssa Dietz 33:00
Oh, that’s a great question. So there are a few sort of organizations that I would recommend checking out. One is called Digital Medicine Society. They have some work around AI. Consumer Technology Association also does as well. And so, you know, the Consumer Tech Association, it’s obviously for more of a consumer lens, but it would at least help you to understand sort of the market players, what types of AI are being used or proposed for use in clinical practice. Because, like I said, not everything is service delivery, some of it is administrative burden, things of that nature. And so there are multiple areas that you could consider. I also, like, took a course on Coursera that was free to get myself up to speed on some things that I felt like I didn’t really understand. So checking out those resources and courses is also something I think that’s worth considering if you want to learn more and you aren’t really sure what to start it’s sort of a low, low investment way to be able to educate yourself.

Katie Vernoy 34:02
So we’ve, we’ve circled around this. So let’s get to the actual what is ieso?

Dr. Alyssa Dietz 34:08
Yeah.

Katie Vernoy 34:09
We didn’t really get get there yet. So let’s, let’s talk about that for a little bit.

Dr. Alyssa Dietz 34:13
Okay, yeah, so ieso is a digital mental health company that started in the UK about 15 years ago. They were some of the originals on the scene of tele mental health therapy. So they started delivering online CBT psychotherapy in the United Kingdom through the NHS about 15 or so years ago. And so if you fast forward to today, we have 750,000 hours of therapy transcripts with every utterance from the therapist in every utterance from the patient recorded because it was typed. And there was also weekly outcomes measurement that we can index against that. So now we have this data corpus of over 1 billion words that we’re leveraging to understand what works for who under what circumstances, and then building these digital programs, which I should mention, our offerings are digital and human combined. So we have member support, and we’re working on a model that also incorporates a clinician as well. And so we’re building these models where the conversational agent is is delivering what I think of as lines from a script, and those lines from a script have all been written by clinicians who, amongst themselves, have 10s of 1000s of hours of experience. So I often joke, it’s not two dudes in a garage in Silicon Valley, you know, throwing something together, but rather, it’s been a really thoughtful journey through mental health care and leveraging the best that science has to offer us to try to make access to evidence based interventions more widespread, more equitable and more available for people who want it.

Katie Vernoy 35:50
So it’s already happened. The people that are worried that that we’re training our replacements. We already were 15 years ago.

Curt Widhalm 35:58
The toothpaste is out of the tube.

Dr. Alyssa Dietz 36:00
The toothpaste is out of the tube.

Katie Vernoy 36:02
So you worked you worked another place. Now you’re at ieso what are some of the things that you’ve learned, and maybe even surprising things that you’ve learned that you want to share with other therapists?

Dr. Alyssa Dietz 36:12
Okay, this one is one of my favorites, and hearkens back to Curt question about like older age demographics and responsiveness to digital technology. So I’ve worked at three digital health companies now, one was focused primarily on substance use, one was on multiple conditions with mental health and related symptoms, and then now on depression and anxiety. At all three companies, the most active user group is people ages 50 to 60, 65, and I think that’s really surprising to a lot of people. If you ask them, they’d bet like, oh, 18 to 30 year olds are probably the ones who are using it. But that has not been the case. It’s actually been folks that are more like 50 to 60, 65-ish range. I have some hypotheses about why that might be, but that is something…

Curt Widhalm 37:00
You can’t just you can’t just drop that and then wait for the end of the episode.

Dr. Alyssa Dietz 37:05
That’s fair. That’s fair. So my hypothesis are around generational differences, around things like commitment, like once things are started, I think they’re more likely to stick with it. So we’d found, again, at all three companies that getting that age group into the product is sometimes a bit harder. Sometimes you have to work a little bit more to get them to come into the experience. But once they’re in, their their retention rates are much higher than their younger peers are. So there’s that. And then my personal hypothesis is this, like, sort of sandwich generation thing, like they’re working, they’re taking care of parents, maybe taking care of kids. And the idea of, like, going to a therapy session for an hour every week just feels like much too much, and it’s easier to you know, in these 15 quiet minutes before bed, work a little bit on your mental health, instead of having to allocate this, like, logistical chunk of time that’s hard to hard to realize. So those are hypotheses. I don’t have data for those two points, but that’s what I think.

Curt Widhalm 38:07
Where can people find out more about you and ieso?

Dr. Alyssa Dietz 38:11
I am on LinkedIn. Please feel free to to connect there. I also like to be on the sort of conference circuit. So Katie and I met at Behavioral Health Tech last November. I’ll be at DTX West in a couple of months, speaking on a panel about sort of therapists and AI and how we can work together. And there’s a few others that I’d love to connect with your listeners at if they’re going to be there. I also would like to give a plug to the group therapists in tech. If you haven’t checked them out before, I highly recommend you do. So it’s a Slack group where folks can trade resources, and there’s a whole spectrum from sort of tech curious all the way to seasoned veterans. There’s a mentorship program, there’s job posting. So if you want to learn more about maybe getting involved in the tech side of therapy. It’s a wonderful resource to have access to.

Curt Widhalm 39:05
And we will include links to all of those in our show notes over at mtsgpodcast.com. Follow us on our social media, join our Facebook group, the Modern Therapist Group, to continue on with these conversations, and until next time, I’m Curt Widhalm with Katie Vernoy and Dr Alyssa Dietz.

… 39:22
(Advertisement Break)

Announcer 39:23
Thank you for listening to the Modern Therapist’s Survival Guide. Learn more about who we are and what we do at mtsgpodcast.com. You can also join us on Facebook and Twitter, and please don’t forget to subscribe so you don’t miss any of our episodes.

 

0 replies
SPEAK YOUR MIND

Leave a Reply

Your email address will not be published. Required fields are marked *