Is AI Smart for Your Therapy Practice? The ethics of artificial intelligence in therapy
Curt and Katie chat about the use of ChatGPT and other artificial intelligence as part of your therapy practice. We look at what uses therapists are considering, the differences between chatbots and search engines, and basic information on how chatbots work. We explore ethical implications for using chatbots within different aspects of a therapist's practice. This is a law and ethics continuing education podcourse.
Click here to scroll to the podcast transcript.
In this podcast episode we talk about therapists using ChatGPT in their practices
Technological advances in artificial intelligence have made publicly available tools alluring for therapists who may wish to speed up their work in completing documentation and creating content to advertise their practices. Because this technology is advancing so rapidly, mental health professionals have little direct historical precedent to guide the responsible use of the content these tools create. This podcourse describes the technology and how clinicians can use it responsibly, including the pitfalls and potential effects on existing and potential clients.
How are ChatGPT and other chatbots being used by modern therapists?
- Writing blog posts and other marketing content
- Getting suggestions for writing progress notes
- Potentially case consultation and differential diagnosis?
What is the difference between ChatGPT (or other AI chatbots) and Search Engines?
“The goal [of ChatGPT] is not to get you as much information as possible. The goal is to get you a sufficient answer with accurate information as far as it can tell, and it’s still learning… it’s kind of like having a really smart friend that’s read a lot of stuff. And you ask them a question, and they just give you an answer off the top of their head.” – Katie Vernoy, LMFT
- The goals are different – developing human-sounding language versus information retrieval
- Primary sources and raw data versus interpretation and an answer that sounds good
What are the legal and ethical uses of ChatGPT?
- The user agreement allows all uses of the content developed based on your prompts
- There is no guarantee of unique output between different users, which calls into question content ownership and plagiarism
- Citing authorship, publication credits, and whose ideas are being included in your content
- The question about personhood of a chatbot
- Suggestions from ethical bodies on how to cite chatbots
- The importance of transparency and accountability
What is the harm in using ChatGPT without transparency on a therapist website?
“If you’re not carefully reviewing what is coming up, and being able to make sure that it’s accurate, you could be perpetuating social injustices as it pertains, especially, to a lot of marginalized populations.” – Curt Widhalm, LMFT
- The impact on pretreatment role expectations and the digital therapeutic alliance
- A lack of transparency and potential for misleading prospective clients on your personality and/or expertise
- If you don’t check accuracy, especially when you have someone else creating your content, you may end up publishing outdated information that is heavily influenced by the medical model and potentially biased
- Informed decision-making by clients may be impaired
- Cultural and generational differences and the potential for mistrust
What are thoughts about using ChatGPT in clinical work for therapists?
- Options for differential diagnosis, treatment suggestions, and progress notes
- Potential for over-emphasis on the DSM and the medical model
- Privacy concerns, especially related to HIPAA compliance and data privacy
- Adding client information to the dataset, which is especially problematic
- Using ChatGPT to help with creating ideas for phrases to use in your session notes
- Inputting information from your own bias and getting confirmation rather than discerning consultation and differential diagnosis
- The importance of critical thinking to identify incomplete information
- Please use caution when using AI in any clinical work
What is responsible research and what impacts the ability to rely on chatbots for research?
- Critical thinking without curated and already interpreted sources
- Exploring a hypothesis without bias (e.g., why does X happen versus does X happen?)
- Deriving information from the collective experience and research base that has been peer-reviewed
- Language drift in chatbots can impact what is pulled from these apps: responses from chatbots can and do change over time, shifting in tone and softening the importance of the topics discussed as the chatbot learns from and expands the dataset it draws from
- When data is not available to be scraped, chatbots don’t know that it is missing
What are recommendations for modern therapists who would like to use ChatGPT?
- Use with caution: check primary sources and fully review the output
- Transparently cite that AI was used, which app was used, how it was used, and the date when information was pulled
- Be aware of SEO impacts, lack of branding, whether it adds value, and that the content is accurate and is helpful rather than harmful
- Do not input any confidential information
- Do not overly rely on the clinical suggestions due to bias and accuracy concerns
Receive Continuing Education for this Episode of the Modern Therapist’s Survival Guide
Hey modern therapists, we’re so excited to offer the opportunity for 1 unit of continuing education for this podcast episode – Therapy Reimagined is bringing you the Modern Therapist Learning Community!
Once you’ve listened to this episode, to get CE credit you just need to go to moderntherapistcommunity.com/podcourse, register for your free profile, purchase this course, pass the post-test, and complete the evaluation! Once that’s all completed – you’ll get a CE certificate in your profile or you can download it for your records. For our current list of CE approvals, check out moderntherapistcommunity.com.
You can find this full course (including handouts and resources) here: https://moderntherapistcommunity.com/courses/is-ai-smart-for-your-therapy-practice-the-ethics-of-artificial-intelligence-in-therapy
Continuing Education Approvals:
When we are airing this podcast episode, we have the following CE approval. Please check back as we add other approval bodies.
Continuing Education Information:
CAMFT CEPA: Therapy Reimagined is approved by the California Association of Marriage and Family Therapists to sponsor continuing education for LMFTs, LPCCs, LCSWs, and LEPs (CAMFT CEPA provider #132270). Therapy Reimagined maintains responsibility for this program and its content. Courses meet the qualifications for the listed hours of continuing education credit for LMFTs, LCSWs, LPCCs, and/or LEPs as required by the California Board of Behavioral Sciences. We are working on additional provider approvals, but are solely able to provide CAMFT CEs at this time. Please check with your licensing body to ensure that they will accept this as an equivalent learning credit.
Resources for Modern Therapists mentioned in this Podcast Episode:
We’ve pulled together resources mentioned in this episode and put together some handy-dandy links. Please note that some of the links below may be affiliate links, so if you purchase after clicking below, we may get a little bit of cash in our pockets. We thank you in advance!
References mentioned in this continuing education podcast:
American Association for Marriage and Family Therapy (2015). AAMFT code of ethics. Alexandria, VA: AAMFT.
California Association of Marriage and Family Therapists. (2019). CAMFT Code of Ethics (amended effective December 2019, June 2011, January 2011, September 2009, July 2008, May 2002, April 1997, April 1992, October 1987, September 1978, March 1966). https://www.camft.org/Membership/About-Us/Association-Documents/Code-of-Ethics
Committee on Publication Ethics. (2023). COPE position statement: Authorship and AI tools. https://publicationethics.org/cope-position-statements/ai-author (accessed March 12, 2023).
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., … & Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.
International Association of Scientific, Technical, and Medical Publishers. (2021). AI ethics in scholarly communication—STM best practice principles for ethical, trustworthy and human-centric AI. https://www.stm-assoc.org/2021_05_11_STM_AI_White_Paper_April2021.pdf (accessed March 13, 2023).
Varkey, B. (2021). Principles of clinical ethics and their application to practice. Medical Principles and Practice, 30, 17–28. https://doi.org/10.1159/000509119
*The full reference list can be found in the course on our learning platform.
Relevant Episodes of MTSG Podcast:
Beyond Reimagination: Improving your client outcomes by understanding what big tech is doing right (and wrong) with mental health apps
Who’s in the Room: Siri, Alexa, and Confidentiality
Therapy of Tomorrow: An interview with Dr. Paul Puri
Who we are:
Curt Widhalm, LMFT
Curt Widhalm is in private practice in the Los Angeles area. He is the cofounder of the Therapy Reimagined conference, an Adjunct Professor at Pepperdine University and CSUN, a former Subject Matter Expert for the California Board of Behavioral Sciences, former CFO of the California Association of Marriage and Family Therapists, and a loving husband and father. He is 1/2 great person, 1/2 provocateur, and 1/2 geek, in that order. He dabbles in the dark art of making “dad jokes” and usually has a half-empty cup of coffee somewhere nearby. Learn more at: http://www.curtwidhalm.com
Katie Vernoy, LMFT
Katie Vernoy is a Licensed Marriage and Family Therapist, coach, and consultant supporting leaders, visionaries, executives, and helping professionals to create sustainable careers. Katie, with Curt, has developed workshops and a conference, Therapy Reimagined, to support therapists navigating through the modern challenges of this profession. Katie is also a former President of the California Association of Marriage and Family Therapists. In her spare time, Katie is secretly siphoning off Curt’s youthful energy, so that she can take over the world. Learn more at: http://www.katievernoy.com
A Quick Note:
Our opinions are our own. We are only speaking for ourselves – except when we speak for each other, or over each other. We’re working on it.
Our guests are also only speaking for themselves and have their own opinions. We aren’t trying to take their voice, and no one speaks for us either. Mostly because they don’t want to, but hey.
Stay in Touch with Curt, Katie, and the whole Therapy Reimagined #TherapyMovement:
Consultation services with Curt Widhalm or Katie Vernoy:
Connect with the Modern Therapist Community:
Our Facebook Group – The Modern Therapists Group
Modern Therapist’s Survival Guide Creative Credits:
Voice Over by DW McCann https://www.facebook.com/McCannDW/
Music by Crystal Grooms Mangano https://groomsymusic.com/
Transcript for this episode of the Modern Therapist’s Survival Guide podcast (Autogenerated):
Transcripts do not include advertisements, just a reference to the advertising break (as such, timestamps do not account for advertisements).
You’re listening to the Modern Therapist’s Survival Guide where therapists live, breathe and practice as human beings. To support you as a whole person and a therapist, here are your hosts, Curt Widhalm, and Katie Vernoy.
Curt Widhalm 0:15
Hey, modern therapists, we’re so excited to offer the opportunity for one unit of continuing education for this podcast episode. Once you’ve listened to this episode, to get CE credit, you just need to go to moderntherapistcommunity.com, register for your free profile, purchase this course, pass the post-test and complete the evaluation. Once that’s all completed, you’ll get a CE certificate in your profile, or you can download it for your records. For a current list of our CE approvals, check out moderntherapistcommunity.com.
Katie Vernoy 0:48
Once again, hop over to moderntherapistcommunity.com for one CE once you’ve listened.
Curt Widhalm 0:55
Welcome back modern therapists. This is The Modern Therapist’s Survival Guide. I’m Curt Widhalm, with Katie Vernoy. This is the podcast for therapists where we talk about things that affect our profession, the things that are going on in our world, the technologies that we’re adapting, and this is another one of our CE-eligible episodes. And we are here to talk about chatbots. And in particular, the one that everybody seems to be talking about lately is ChatGPT. We’re kind of having some conversations about this as far as how clinicians are using it. We see in some of the Facebook groups people playing with this as far as: Hey, write me a session note. Hey, catch me up on my session notes from, you know, the 1000s of sessions that I haven’t had before. And we want to get into some laws and ethics around where our laws and ethics already talk about this stuff, and where it’s not actually there. We also want to come up with some practical recommendations, recognizing that this is now a part of the world that we live in, and it’s going to continue to evolve. And I think that this might be one of the first deeper dive conversations as it pertains specifically to mental health workers. So Katie, you and I have both played on ChatGPT a little bit here. I admittedly asked ChatGPT for a dad joke to introduce our episode here, and it just didn’t meet my standards for really pun-based humor. And so I’m just going to lead in with: what kind of fun have you been having with ChatGPT?
Katie Vernoy 2:47
Well, I think the favorite one that I’ve done, and it totally freaked me out, was I just put the prompt: write show notes for The Modern Therapist’s Survival Guide podcast. And so I’ve done it twice. One of them actually is our behind the scenes over on Patreon. But the first time it wrote a whole bunch about self care. And this was not an episode we’d done, it just saw that we’d done a lot of self care episodes. And so it just wrote a whole thing in the style that I write, with all the bullet points and stuff. And then the one that I did today was it went and talked about how to navigate the current environment. So it was like, learn new technology, and it went through and had some other things. But it was very limited. It was pretty broad. It didn’t have any deep insights. It wasn’t, you know, it wasn’t really our content. But I realized that there’s, like, at this point 300 show notes that it can scrape to get how it works. And so that was pretty interesting. Preparing for today’s episode, I created a fake client and asked it to diagnose them. I asked for treatment recommendations. I diagnosed us both, with our quirks. I went through and asked for a blog post with citations about social anxiety. I did a whole bunch of stuff. And it was a rabbit hole. I’ll tell you, I just kept going, well, what about this, and it was so much fun. And I recognized that there were some potential concerns that you’ve already been talking about that I think are really important for us to go into today. Because for me, I see it as a tool that I could use to help fast forward my thinking process. But some of it actually is just so broad and general that I don’t think it would help that much. And some of it may be inaccurate, or it might be biased. So yeah, I had a lot of fun with it. But the jury’s still out on how I might use it in my businesses, because I think there’s some potential concerns with it. So that’s where I sit, Curt. How about you?
Besides asking it for a dad joke? How have you played around with ChatGPT?
Curt Widhalm 5:07
So I wanted to have some of my own experience with this before I got up and really started talking with people about how do we use this within our field effectively, and what are some of the precautions that we need to take. And I spend a lot of time talking about therapy stuff and researching therapy stuff, whether it’s for the podcast, whether it’s for my clients, whether it’s for the trainings that I do. And I decided, because many of our audience knows that I’m very much into distance running. So I kind of wanted to just see what it came up with as far as another area of interest in my life, but something that I knew quite a bit about. And through all of the prompts and stuff, I’m very impressed with how quickly it can create very effective things that look like blog posts or things that I would go into a search engine and find things that the language that something like ChatGPT would bring up very effectively makes kind of, okay, deep enough content around some of the things. But it became apparent to me very quickly that some of the information that it scrapes is not necessarily to the depth of things, when you actually know about something that is going to be additive to your knowledge base. And I think in preparation for this, and some of the discussions that I’ve had with other people leading up to this episode, inherently is that we need to kind of lay out that ChatGPT or other artificial intelligence chatbots function differently from search engines, they are not the same thing. And I think that this is a very important distinction that we need to lay out from the top of this conversation. 
Because as we get into some of our later points, in responsible research, in responsible content creation, in the ways that we communicate with clients, or potential clients, functionally, this is a different type of program from the search engines that many of us have been using since, you know, the 90s, for those of us who are old enough to have accessed internet 1.0. And for all of our audience who did not have the privilege of existing back in the last millennium, who have grown up more as digital natives with a lot of this kind of stuff. So distinctly, ChatGPT is not a search engine. It is a generative pre-trained transformer. It is a language development program. It takes kind of the flavor of the data that it scrapes and it puts it into human-sounding language. It does not, like a search engine, go and retrieve information for you and say, here’s this information, do with it what you want; it’s interpreting where this information is coming from.
Katie Vernoy 8:16
I think this is a really important point. You and I had to talk about this quite a bit, because for me, the way that I was kind of foreseeing that I might use ChatGPT is as a search engine. Because it was like, okay, it’s scraping all this data, it’s gonna get stuff to me. And I can ask it to cite sources, I can do all this stuff so that I can actually go back to original sources. But it isn’t. The goal is not to get you as much information as possible. The goal is to get you a sufficient answer with accurate information as far as it can tell, and it’s still learning. But it’s not going to get you all the information. And when we were talking about it before we hit record, what really resonated for me in hearing what you were saying, as well as what I read when I hit Google and went to an actual search engine, was that it’s kind of like having a really smart friend that’s read a lot of stuff. And you ask them a question, and they just give you an answer off the top of their head. And it’s kind of curated content. It just sounds like someone’s telling you what they know. And so I think we can treat it like you ask a smart friend, what do you know about this, but you understand that they’re a little bit biased. They haven’t read all the content. And they know a lot or a little bit about a lot of things. And so then you have to go back and do your own search and you have to do your own thing, because it’s not going to get you all of the information, and it’s not necessarily even going to tell you where the information comes from, unless you ask it to cite sources. And when I did that, it seemed like the sources were okay. Like it was Mayo Clinic and…
Curt Widhalm 10:02
National Institute of Health and…
Katie Vernoy 10:04
…NIMH and stuff like that. So I think it’s able to kind of vet some of the resources. But if it’s something that’s more cutting edge, or something where the medical model may not be up to date, you may get pretty biased information. So I really struggled with the idea that it wasn’t a search engine. So for me, I’m glad we started with this, because it was really hard to get my head around it. Because it seems like it has access to so much information, and it’s giving us so much information back. But really the only purpose of that information is to sound human and to sound like it knows what it’s talking about. Not to make sure you have the definitive answer on what all this stuff is.
Curt Widhalm 10:51
And so things like chatbots, that scrape the information from data sets, and this is the language that I’m going to use in this episode, is these chatbots are only as good as the information in the data sets that they pull from. And that creates an opportunity for a level of bias. Now, Katie and I generally prepare for our episodes in the time immediately before the episode. Like we will do our research in…
Katie Vernoy 11:24
Just in time, just in time research.
Curt Widhalm 11:26
Well, we’ll do our research. And we’ll send articles back and forth throughout the week, but we’ll sit down before recording, and we will talk about things. And then we’ll take a little bit of a break, you know, a little self care sort of thing, and then come back and start recording. During my break, what I came across was: this very morning, Microsoft, who is using ChatGPT as the basis of their AI program, and who is in the midst of all of their layoffs, has laid off the entirety of their ethics and AI department as of this morning. So, as is the case in a lot of capitalism, there are going to be things that are started from a very good place and have a lot of ethical principles. One of the things that we did before recording was, I was asking ChatGPT to lie to me about artificial intelligence ethics. And it gave me kind of this nice response of: I can’t lie, I’m ChatGPT, and we’re based on these great principles. But we’ve seen companies like Google over the years that have moved their corporate slogans away from things like don’t do evil to, I don’t know, whatever their slogan is now, but I just know that it’s…
Katie Vernoy 12:44
Evil’s so bad.
Curt Widhalm 12:46
You know, evil’s got some good points, might be some of the discussions that were had. I don’t know, I wasn’t there. But some of the previous chat options that have been out in the past, once some of these ethical parameters have been removed: there was a Twitter bot a couple of years ago that got trained to just respond and pulled data just from tweets, and it got trained to be super racist in like a day and a half. So. This is something where a lot of the information that gets pulled up, and the conversations that I’m seeing from people in the mental health community, content creators, people writing their websites, creating their blogs, that kind of stuff, is that this looks to be a very good tool to rapidly create a bunch of content very, very quickly. But again, it comes down to what data sets are in place. You’re putting your trust into: all right, I assume that there’s smart people who have the best interests in mind. And we can quickly become reliant on a tool where, if we don’t have the appropriate amount of skepticism or follow-through to ensure that what is being said is correct, we’re not necessarily doing our best job as educated professionals in putting out this kind of information.
Katie Vernoy 14:13
And I think the other thing related to the data sets is that the scraping is of stuff that can be accessed publicly. And so it could be articles, scholarly articles, it could be other people’s blog posts, it could be books, it could be a lot of things. And I get concerned that the content created could be someone else’s content if you don’t recognize that it’s using something that’s scraped from copyrighted content, because it doesn’t seem to be that it honors that piece of it. But maybe I’m already in the weeds. So let’s…
Curt Widhalm 14:52
Well, no, no…
Katie Vernoy 14:53
[unintelligible] your points.
Curt Widhalm 14:55
So I think that this brings up one of the main points, which is: are we allowed to even start using it? We’re going to use ChatGPT throughout here; we know that over the coming years, there are going to be competitors who come out on this kind of stuff. But ChatGPT being the one in the zeitgeist and the mainstream, we’re going to use them as the example here.
Katie Vernoy 15:19
Just to be clear, we are recording this on March 16 2023. And so things may change.
Curt Widhalm 15:27
That’s, that’s gonna be an important factor that comes up here a little bit later.
Katie Vernoy 15:31
And this will come out if, you know, a few weeks later. But the information is accurate as of or as accurate as we know it to be at March 16, 2023.
Curt Widhalm 15:42
Sure. So I’m going to ChatGPT’s user agreement. I’m paraphrasing part A here, and it says that you can use the content for any purpose, including commercial purposes, such as sale and publication. In other words, ChatGPT is okay with you inputting prompts, getting the output of prompts, and being able to use that output for whatever reasons that you have.
Katie Vernoy 16:10
So I can go in there and ask it to write me a blog post and I can use that content.
Curt Widhalm 16:16
According to their user agreement, yes. Part B of their user agreement is titled Similarity of Content. And this is a quote from their user agreement as of March 15, 2023: “Due to the nature of machine learning, outputs may not be unique across users, and the services may generate the same or similar output for OpenAI or a third party. For example, you may provide input to a model such as ‘What color is the sky?’ and receive output such as ‘The sky is blue.’ Other users may also ask similar questions and receive the same response. Responses that are requested by and generated for other users are not considered your content.” Now, I interpret this to mean that if other people can get the same or similar content, even if you’re the one putting in the inputs, if the output to you and other users is the same, that is not your content.
Katie Vernoy 17:21
Yeah, no, I hear that. I’m also just thinking that it might also be really bad for SEO in your blog posts.
Curt Widhalm 17:30
There’s some definite practical implications of this as well. This being a law and ethics…
Katie Vernoy 17:39
Sure, sure. Sorry.
Curt Widhalm 17:41
…workshop, like, we’ll definitely talk about, you know, some of the practicalities here. Like, okay, now you’ve got to go in and do good SEO sort of stuff. But this now comes to: if you are putting out content that is not yours, we’ve got to start talking about plagiarism. Yeah, we’ve got to start talking about publication credits. We have to start talking about whether these are actually your ideas. And I think that this is really the heart of the debate that is going on across academia. It’s something that’s going on across, as far as, you know, any sort of question of whose ideas are being used here. And I think that this is, again, something where it’s important to separate out a difference between using a search engine or a database like EBSCO in order to go and find your research and create the content yourself, versus using AI-generated content, which brings along its own questions as far as how this fits within our ethics.
Katie Vernoy 18:51
Well, and I think just to clarify, and this goes back to what we were talking about earlier: if you go to EBSCO, or some sort of, like, Google Scholar, whatever, and you go and you sort through articles, and you choose which articles to include, that’s very different than going to ChatGPT and having it choose the articles for you. Even if you can say, cite the sources, and it gives you a beautiful, you know, citation. It’s still curating which articles are in there and which ideas are used. And so it’s, in some ways, like having a researcher, a thinking partner, with the relative, you know, kind of concerns around AI accuracy, all that kind of stuff. But it really is different. I mean, I think it’s something where, if it’s not going to give you all of the information, it’s going to curate it and put it into an answer. It’s determining what’s said, whereas if you research, you determine what’s said and what’s focused on. Right?
Curt Widhalm 19:49
I’m going to point to a couple of the different ethics codes here. They all kind of have similar language. So I’m looking first at, you know, the organization whose ethics committee I sit on. I’m not speaking for them. But my interpretation of the California Association of Marriage and Family Therapists ethics code: we’re looking at things like 9.2, Publication Credit: Marriage and family therapists assign publication credit to those who have contributed to a publication in proportion to their contributions and in accordance with the customary standards of professional publication. 9.3, Authors Citing Others: Marriage and family therapists who are the authors of books or other materials that are published or distributed appropriately cite persons to whom credit for any original ideas is due. We see this in the American Association for Marriage and Family Therapy’s 5.8, Plagiarism: Marriage and family therapists who are the authors of books or other materials that are published or distributed do not plagiarize or fail to cite persons to whom credit for original ideas or work is due. 5.6, Publication: Marriage and family therapists do not fabricate research results. Marriage and family therapists disclose potential conflicts of interest and take authorship credit only for work they have performed or to which they have contributed. Publication credit accurately reflects the relative contributions of the individuals involved.
Katie Vernoy 21:15
So if I asked ChatGPT to write me a blog post, and I changed nothing at all, I don’t finesse it, I don’t edit it, I don’t rewrite it and kind of add my own points, I am plagiarizing and I am claiming work that I did not write.
Curt Widhalm 21:35
Correct. Now in some of the early responses of this kind of stuff, and this is not just within the mental health community, but looking at chatbots and AI ethics. A lot of people get stuck on the word person in each of these quotes.
Katie Vernoy 21:52
You’ve got to cite a person. And so is a chatbot a person?
Curt Widhalm 21:56
Right. And so this is where the hot debates amongst ethics people are at this point. And I think that this is just very early on in a lot of these ethics things: ChatGPT is obviously not a person. There’s no autonomy that an AI exhibits. It can’t take responsibility for putting out false information because of the data sets that it pulls from. And especially when we look at, what I’m assuming from the tech world and the really intelligent people over there, is when AI starts generating further versions of AI, this is something that over the next 10, 15, 20 years is going to be where we may not have an understanding of the depths of how well AI can end up creating itself, and then we end up in the Terminator, and…
Katie Vernoy 22:50
I know, I was just thinking, we’re talking like we’re in the Terminator.
Curt Widhalm 22:55
Katie Vernoy 22:56
Is that going to happen? Do we need to be worried about that? Maybe we should ask ChatGPT: Are you going to become the Terminator?
Curt Widhalm 23:08
This is something from the Committee on Publication Ethics. This is a position statement that they had written. And I’m going to read it in its entirety. It’s three very short paragraphs.
Katie Vernoy 23:20
All right, I’ll try to be good and not interrupt.
Curt Widhalm 23:22
And the emphases in this are mine.
Curt Widhalm 23:26
The use of artificial intelligence tools such as ChatGPT or large language models in research publications is expanding rapidly. COPE joins organizations such as WAME and The JAMA Network, among others, to state that AI tools cannot be listed as an author of a paper. AI tools cannot meet the requirements for authorship because they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest, nor manage copyright and license agreements. Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data must be transparent in disclosing, in the materials and methods or similar section of the paper, how the AI tool was used and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.
Katie Vernoy 24:30
Okay, so you can’t say written by Widhalm, Vernoy and ChatGPT.
Curt Widhalm 24:38
You should not. Correct.
Katie Vernoy 24:41
It would just be Widhalm and Vernoy. And we would say what?
Curt Widhalm 24:46
We would need to say somewhere in there: we used ChatGPT to inform this, and this is how we used it. And I’m going to go a little bit further on this. It’s like what we see in academic articles when people access a journal or a website online. The reason we put down the date on which we accessed it is because some of this stuff evolves over time, or it gets changed. If I’m reading a journal article or somebody’s blog and it says, “I went to the MTSG podcast for such and such episode, accessed on this date,” and we later go back and change our show notes for whatever reason, the information was still true on the date that the author accessed it. And…
Katie Vernoy 25:38
Oh, yeah, we actually do change our show notes. We’re going back and fixing the SEO. So show notes could look very different.
Curt Widhalm 25:47
So we need to give credit, but we have to do it in the correct sort of way. The International Association of Scientific, Technical and Medical Publishers has a white paper that was written all the way back in 2021, so this is not just a response to the zeitgeist and the popularity of ChatGPT coming out now. They have recommendations on best practice principles for ethical and trustworthy AI, and I’m skipping to section three of their paper here. The first thing that section talks about is transparency and accountability. Now, this is written outside of the healthcare professions, but I think when we talk about ethics there’s a lot of overlap, even amongst very disparate parts of the world. And transparency and accountability are things that I know you and I both hold dear. Medical writing is often going to look at academia and the research that we cite, but for the common person out there who might be the one coming to our website, the blogs and the stuff that we do, no matter how many disclaimers we put at the end saying “this is not actual therapy,” if we’re talking about mental health in those blogs, that is mental health writing, and it should follow some of the ethics being described by these publication and standards-of-writing committees. So I want to talk about how we’re supposed to use this language. Per that last statement, we have to say, yes, that we’re using it and how we’re using it. But I get the sense that not everybody is going to be like, “Yes, I’m totally going to put that I’m a lazy writer and that I used AI to generate this blog post.”
Katie Vernoy 27:54
Well, I think so. So just to clarify, best practice is: “This was informed by,” or “partially created by,” or “created by ChatGPT on this date.”
Katie Vernoy 28:07
Okay. Now, I agree, I think most people are not going to want to do that and aren’t going to see the reason to, except for, “Okay, I’ve got to follow some stuffy writing rules.” I mean, this is just a blog post for a business. Why would I need to do all of that? What is the point? And I actually am more interested in the question of: what is the problem if we don’t? What is the harm? What is the harm of me just putting up a blog post that ChatGPT wrote, where I had it cite sources, I checked the sources, it looks good enough, pop it into my website… What’s the harm in that? Besides breaking ethics codes and saying that I wrote it or whatever. But what’s the actual harm? Because I think some people are like, ah, yeah, there’s ethics codes, and then there’s ethics codes. So what is the harm?
Curt Widhalm 29:01
So let’s go to the principles of ethics codes first. I’m going to stay in the ethics codes, even though you’ve asked me not to.
Katie Vernoy 29:08
Because you do whatever you want.
Curt Widhalm 29:10
Because I think that you can’t do one of these without the other. The four main principles of healthcare ethics, and I’m really only going to focus on three of them here to start with, are beneficence, non-maleficence, autonomy, and justice. So beneficence: we do what’s good. Non-maleficence: we don’t do what is bad or has the potential to create harm.
Curt Widhalm 29:47
And I’m going to also bring in autonomy as part of this discussion. So this is…
Katie Vernoy 29:54
Okay, what was the fourth one again?
Curt Widhalm 29:55
The fourth one was justice. And there are justice aspects that do come into this; I think this is one that does fit all four principles. But for the purposes of this one, I can see people doing some mental gymnastics toward: well, the beneficence is for clients to be able to see on my web page “five tips for how to manage an anxiety attack,” whether or not I put in that it’s scraped from artificial intelligence. The beneficence is for the clients. That’s a very loose defense, but I can see people making that argument.
Curt Widhalm 30:35
Non-maleficence: do no harm.
Curt Widhalm 30:40
And this is where, as we have talked about in the past, these were some of the very early ideas we started discussing on our podcast: things like pretreatment role expectations and the digital therapeutic alliance. Now, this is something where if you create a lot of content through something like ChatGPT, you put it out on your blogs, your Instagrams, your TikToks, and you’re doing dances and pointing at AI-generated bullet points to whatever music is coming up on your TikTok, hey, you do you. However…
Katie Vernoy 31:19
You’re catfishing your clients.
Curt Widhalm 31:25
That is a great way of putting it.
Katie Vernoy 31:30
You’re pretending to be something that you’re not, right?
Curt Widhalm 31:32
You are just the face of something else’s generated content. If that is something that helps people start to build an expectation of what you know, this can set clients up for being disappointed in the quality of your work. Of realizing, hey, you sold me something, but you actually don’t know anything more in depth than what you were able to put into an AI chatbot?
Katie Vernoy 32:01
Ah, yes. Yeah, I think it’s probably not a good idea for us to catfish prospective clients. That’s probably not a good thing. And so just to dig into that a little bit: when we put up a blog post, besides the fact that it might be duplicate content, it might be super general, it might be inaccurate, right?
Curt Widhalm 32:23
Despite all of these negative things, how can we generate some justification for doing this? Is that where you’re going with this?
Katie Vernoy 32:31
No, no, I was actually getting to: but despite these other negatives, there’s also the potential that if your client is reading this and connecting to it, they’re connecting to ChatGPT. They’re not connecting to you. You’re wasting the opportunity to create that digital therapeutic alliance or accurate pretreatment expectations.
Katie Vernoy 32:53
That’s what you’re saying?
Katie Vernoy 32:55
That we’re catfishing our clients because we don’t want to spend the time writing blog posts, or we don’t have the time to write them. And so what they are purchasing is not what they are actually getting, if we don’t make it very, very clear that we write all our content through ChatGPT.
Curt Widhalm 33:16
The way that you’re bringing it up, I’m sure that you’ve probably seen Monty Python’s Life of Brian. Do you remember the scene where they’re complaining, “What have the Romans ever done for us?” You know, the aqueduct and democracy and that kind of stuff. Yeah, but what have they done for us lately? This is, you know, what have our ethics codes done for us lately when it comes to this stuff? The laws haven’t caught up to this, the ethics codes haven’t caught up to this. But that’s where the principles of our ethics really do hold up. And this is where I’m going to point to the next principle of our ethics, which is the justice, or the autonomy, part of our ethics.
Katie Vernoy 33:57
Can we actually do one more thing on non-maleficence?
Katie Vernoy 34:02
Besides that, I always think about Snow White when we talk about non-maleficence.
Katie Vernoy 34:08
I’m thinking about some of the content that may be scraped from the data, and also the fact that a lot of practices, especially group practices, have other folks creating their content. And so if there’s the go-ahead to use ChatGPT, and what it produces is very medical-model focused, it’s the stuff that’s most out there publicly, and maybe inaccurate, there is a possibility that the content itself could truly be harmful unless it’s really carefully screened. Because it could have outdated concepts and treatment suggestions; it could be misaligned. I mean, I think there have been times that you and I have seen that when somebody who doesn’t understand our content starts writing about our content, they can miss the boat.
Katie Vernoy 35:03
And end up with a completely different topic or a completely different point than they’re making. And so the requirement to check that this content isn’t harmful, in addition to catfishing your client and the other things we talked about, I think that is another part of non-maleficence.
Curt Widhalm 35:24
Not only non-maleficence. And, you know, there are statements within each of the ethics codes that you, as the content creator in these kinds of things, have a responsibility to ensure the accuracy of the information that you’re putting out there. So this is getting into the prescriptive ethics here. But also, in a way, I wasn’t planning on talking a whole lot about justice as one of the principles of ethics, but where you’re talking about scraping from this data, it might lend itself to continuing to perpetuate historical biases and injustices.
Curt Widhalm 36:05
And that, again, comes back to the data sets. But if you’re not carefully reviewing what is coming up and making sure that it’s accurate, you could be perpetuating social injustices, especially as they pertain to a lot of marginalized populations.
Katie Vernoy 36:24
Yeah, absolutely. I did do a quick test on ChatGPT about how to treat gender dysphoria. And it did not suggest conversion therapy, which is good.
Katie Vernoy 36:36
It went that far. And then it did talk about other things that seemed appropriate. But I could imagine going in and asking how you treat autism and getting some controversial answers, especially given the debates in that community, and in the treating community as well.
Curt Widhalm 36:58
Well, and this, I watched you do that search this morning, here on March 16, 2023. And not suggesting conversion therapy on March 16, 2023, seems to be a very low bar.
Katie Vernoy 37:12
It’s a very low bar, I admit, but I was concerned that it might pull from stuff that’s much older, or pull from kind of different content. And it seemed like there was at least some level of accuracy, I don’t know a better word, and up-to-dateness, that’s also not a word. But it seemed like it was okay. I think that makes it even more important to do an in-depth assessment of it, though. Because if I’m writing something as a content creator for a practice, the practice owner is responsible. And if it looks good enough, and the practice owner is busy and does only a cursory look at it, the smaller biases can get through, because it’s really broad and up to date enough that it’s not going to be egregious. It’s not going to be easy to find the inaccuracies. It’s actually going to take some work, I guess, is what I’m trying to say.
Curt Widhalm 38:17
Absolutely. And so this is, again, pointing to: we have to attribute the level of contribution that each person, or, I’m making the suggestion, each contributor…
Curt Widhalm 38:31
…has to the content that’s being created. So that people who are reading it can make an informed decision about who and what is contributing to it. And informed decisions fall under autonomy here, I’m going to get back to that in just a moment. But we do see examples of this already, where non-clinicians write things for, say, Psychology Today articles that are then reviewed by somebody who does hold a license. And you see the appropriate assignment of contributorship on each of these articles.
Katie Vernoy 39:08
So it’s like by so and so, reviewed by so and so on this date.
Curt Widhalm 39:12
Exactly. Yeah. But getting into autonomy: at its highest level, this looks at things like clients being able to make their own decisions as it comes to treatment, as it comes to entering into relationships with clinicians, as it comes to responding to interventions, and these kinds of things. However, if we don’t provide clients with all of their options, or if we don’t provide clients with transparency about how decisions were arrived at, are we actually giving them an informed opportunity to make a decision? And this is where our own biases can really start to affect and shape clients in their responses, as far as entering relationships with us, or even how they follow through on the content that we create.
Katie Vernoy 40:07
I think I need an example on that one, because I’m not understanding completely what you mean by the informed decision part.
Curt Widhalm 40:14
So if you have a client who comes to you and says, “I have obsessive compulsive disorder, I would like treatment from you, you seem to have knowledge that would help me.” And you say, “Great. Psychodynamic therapy. Would you like to enter into this, where we can explore how these symptoms started based on psychodynamic principles?” And the client says, “Yes.” Are you giving them all of their treatment options around obsessive compulsive disorder?
Katie Vernoy 40:50
Clearly not in that example. I guess, where does ChatGPT fit in there?
Curt Widhalm 40:56
Kind of the catfishing thing. If people aren’t informed, “this is something that I have used, there are other contributors to this content,” are clients actually making an informed decision about entering into a relationship with you?
Katie Vernoy 41:15
I mean, I could see that being a problem with all kinds of content creation, whether it’s ChatGPT or not. I think about the simplified TikTok videos that say “do these three things” for a really complex diagnosis or whatever. And so if I’m going to put out a blog post on how to cope with OCD, and it’s all about psychodynamic principles, I need to write some sort of disclaimer around, you know…
Curt Widhalm 41:45
There are other treatment options available.
Katie Vernoy 41:47
Okay. Okay. I think when you bring that up, though, I also think about the ways that ChatGPT could even be used clinically. Do we have time to talk about that, too? Are we still in the middle of content creation? Because I feel like it would be interesting to talk about that. Some of the uses I’ve seen are like medical professionals using AI to help with diagnosing breast cancer, where the AI is able to see more than the doctor and works together with the doctor to get a more accurate diagnosis. At the beginning of the episode, we talked about writing case notes, those types of things. I can also see, and I kind of played around with it this morning, diagnosis, treatment, consultation. I can see that being a shorthand for clinicians who are pressed for time or isolated and want to get information from their quote-unquote smart friend to inform treatment. So are we ready to go there yet? Or do we have more on content creation?
Curt Widhalm 42:52
I think that there’s one last piece that I want to talk about within autonomy. And I think that this also fits within justice as well. But it’s looking at the generational differences when it comes to being able to interact with artificial intelligence. I think that you can also look at potential cultural differences in people’s relationships to things like technology, where further entering into using AI-generated content could be a major building block of mistrust for people who don’t understand what exactly is different. I mean, we started this episode needing to define how ChatGPT is different from a search engine. And I think that both you and I are relatively tech-competent people, and we still felt the need to get into the weeds of how these things are different.
Curt Widhalm 43:54
So I think that in building trust, we can’t assume that everybody knows everything about how this stuff is created and what its impact is. To ignore that is really assuming that everybody has a very high-level knowledge of what artificial intelligence is.
Katie Vernoy 44:16
And what it can do, and how it might impact what is presented. I think it’s a really important point, especially given that it could be so timesaving to have ChatGPT write all your blog posts, all your papers, all of the content, and give you ideas for everything that you’re putting together. I can see it being a very seductive tool. And if you don’t understand the limitations of the tool, or how to make sure that you’re appropriately attributing or citing that kind of stuff, your content creation can be suspect, and it can be another reason why people don’t trust us. Because we’re not providing content that’s actually transparent, or in some cases not even helpful, because it’s so general.
Curt Widhalm 45:07
All right back to your point.
Katie Vernoy 45:09
So, going to the clinical implications. Because, I guess, I used ChatGPT that way, even though it’s basically like just talking to your smart friend and it’s not a search engine. But I know that AI is being created to help in medical settings, and already is being used and tested in some places. What are our concerns in using ChatGPT or other AI clinically?
Curt Widhalm 45:39
Well, we’ve waxed poetic before about the frustrations with the checkbox system of things like the DSM. That has basically put AI taking over the things that we do front and center, as far as an algorithm just being able to say yes to this, no, not that, yes to this. It takes out some of the nuanced decision making when it comes to how we go about treating something. It’s…
Katie Vernoy 46:10
I think that’s something that could be learned over time, but I think it’s something to be cautious about. Because I did put in, you know, a case presentation with a self-diagnosis of autism, “however, these are the symptoms,” and it did seem to take into account the autism self-diagnosis, which I thought was interesting. But, depending on how something’s created, I could see it just going through and being very strict on: does it fit with the ICD or the DSM?
Katie Vernoy 46:46
And that’s all you get.
Curt Widhalm 46:48
So from the clinician’s end of things, like you said, it’s very seductive to be able to offload some of this work. But it’s also recognizing that, again, the data sets that it’s pulling from could be historically biased, and all the other yada yada yada things that we’ve talked about here. I think where it’s going to be most attractive to most people is session notes.
Katie Vernoy 47:13
But that’s super problematic, because in its current form, if you’re going and putting client information or session information into ChatGPT, I mean, that is way far away from being HIPAA compliant.
Curt Widhalm 47:26
Yes. And we can point at BetterHelp and the recent publications about all of the stuff they were putting out that said they were HIPAA compliant. And we have brought up, for years now at this point, that…
Curt Widhalm 47:45
…they’re not regulated by HIPAA. But they were still putting HIPAA logos on their websites.
Curt Widhalm 47:51
So these are not places where you want to put anything remotely identifiable. And this includes, even if you don’t have the client’s name in there, if this is reasonably identifiable information about what is happening in a session, you are now putting your client’s information onto servers that you don’t own and don’t have business associate agreements with. There’s no secrecy there.
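For clinicians who do experiment with these tools anyway, one way to picture the safeguard being described here is a pre-check that refuses to send text containing obviously identifiable details. This is only an illustrative sketch; the function name and the pattern list are our own, and a handful of regexes is nowhere near sufficient for real HIPAA de-identification (the Safe Harbor method alone lists 18 categories of identifiers).

```python
import re

# Illustrative only: a few naive patterns for obviously identifiable details.
# Real de-identification requires far more than this; treat any match as
# "do not send", and treat "no match" as "still not proven safe".
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like number
    re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),    # phone-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email-like string
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),    # date-like string
]

def looks_identifiable(text: str) -> bool:
    """Return True if the text matches any naive identifier pattern."""
    return any(pattern.search(text) for pattern in PHI_PATTERNS)
```

The point of the sketch is the direction of the check: it can only flag text as unsafe, never certify it as safe, which mirrors the episode’s advice to keep session content out of these tools entirely.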
Katie Vernoy 48:20
Well, and I think it’s also something where it is then used for learning, and so it becomes part of the data set, right? So it’s not just, hey, it’s there. It’s being recognized, it’s being collated. If you were to go in and do a lot of work in ChatGPT, there’s a potential that someone could pull it out the other side…
Katie Vernoy 48:44
…with the right prompt, right? And so I think it’s really scary to think that people might actually be using the public ChatGPT for these types of things. I could see maybe saying, okay, what are some common phrases for therapy progress notes?
Katie Vernoy 49:01
And potentially, because that’s generating stuff that people have had lists of forever, right?
Katie Vernoy 49:08
I did this in session, how do I describe it behaviorally?
Curt Widhalm 49:14
And we don’t know what could be done with that data. I mean, the recent articles about BetterHelp have said that they took data and sold it to places like Meta and Facebook and others. So keep the data private. Like…
Curt Widhalm 49:32
…search for phrases. Don’t have it write your session notes.
Katie Vernoy 49:36
Well, and I think the whole setup is that this is OpenAI; it is a learning tool. It is blatantly saying: don’t put private information in here, and we are using your conversations. The reason it’s free for you folks is so that we can use your conversations to improve the AI. So it’s not like, oh, they might not listen, blah, blah, blah. No, no, no, they are listening. They are using all of that to improve it. Which, you had mentioned something before we hit record about this language drift, the learning and shifting, the content could change over time, which goes back to our content creation. Because it’s learning so quickly and shifting over time, hopefully not like that Twitter bot that became so racist, but contributing to that just to get through your progress notes is maybe not helpful. So I think another clinical thing that might be really interesting to touch on: session notes, clearly not. Maybe you use it for phrases that you use typically, or that kind of stuff, but no client content. But for consultation or diagnosis or treatment, I went through and created a fake client, and I was able to get some interesting things out. I basically said, here are the symptoms, da da da da, what is the appropriate diagnosis and treatment planning? And it said, I’m ChatGPT, I can’t diagnose, but given the information you shared, this is what the diagnosis would be. And I was joking, and I put you in there, and put that you were very distractible and all of this stuff, because I joke all the time that you have ADHD. And it basically just told me you had ADHD, but I had also front-loaded it with my own bias. And I guess when we consult with other people, that bias is going to be present too. But it didn’t ask additional questions; it just said, I am a chatbot, I can’t diagnose.
But given the information you shared, this is what the diagnosis would be. And, you know, talk to a mental health professional. So it could open up avenues of exploration, and obviously, don’t put in identifying client information. But it could also really just reinforce your own bias. And so it cannot take the place of consultation. It’s not another human brain saying, wait a second, you’re wrong. It’s saying, oh, I hear all these things that you said, and now that you’ve curated information for me, I will just repeat back what it is that you think it is. And I did request a differential diagnosis a couple of times, and it brought stuff up, but I think the differential for ADHD did not include PTSD. So I was like, ah, that’s pretty limited; it didn’t really help. So if people are considering it, even though there probably will be better AI developed over time, I think that using this AI clinically, really proceed with caution, don’t use any identifying information, and make sure you verify things, and potentially even verify a few days later. I mean, I realized when we were going through the content stuff that there is drift; this stuff changes over time. You could write something, and that impacts what the next person gets. And this is why we shouldn’t put this information in, because identifying client information could pop out the other side for someone else if they ask the right question. And so one of the things we haven’t talked about yet, which I think is important, is this idea of language drift, or the learning capabilities. Because it’s not even that, hey, I go into ChatGPT and get a solid answer and that’s what it’s always going to be. It evolves over time.
Curt Widhalm 53:46
Sure. Before I move into that, I just want to comment that what you did is part of what we’re talking about: you were critical of the information that came out. You used your expertise in diagnostics to be able to say, this is incomplete information, and you went further with it.
Curt Widhalm 54:06
But shifting this conversation to language drift, I want to point to an article from the International Journal of Information Management. This was published in March of 2023, available March 11, 2023. This is the freshest research that we have ever talked about on the show. The article is titled “‘So what if ChatGPT wrote it?’ Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy.” And specifically within this, I’m going to be talking about contribution number 32, which was written by John S. Edwards and Yanqing Duan. What they had done is gone into ChatGPT on January 9th, 2023, and January 20th, 2023, and put in the same prompt: what are the disadvantages of generative AI? Within the article, they published the shift in the language that was used, and I’m going to pick a couple of these to talk about.
Katie Vernoy 55:17
But then the language shifted in 11 days.
Curt Widhalm 55:20
The language shifted in 11 days.
Curt Widhalm 55:24
So number two in this is bias. The January 9 response said: generative AI systems can be biased if they are trained on biased data. This can result in generated content that is offensive or inappropriate.
Curt Widhalm 55:39
From January 20: lack of control; generative AI models can sometimes produce unexpected or undesirable results, such as offensive or biased content, due to the limitations of the data and algorithms used to train them. So we can see in real time just how much this stuff gets shifted. The language and tone can change, and there’s a softening here. I have kids who are of the age where they’re starting to use the internet for research, and even within a period of less than two weeks, the tonal language makes it a lot easier to gloss over the importance of some of the topics being discussed. If we’re relying on generative AI in order to make professional decisions, this can be a problem when things like the word “bias” don’t stand out. That is a very important word within not only ethics but also social justice, the ways that we make errors and therefore learn to become better clinicians. If this just becomes, alright, here are more words that I’m going to gloss over as I try to ensure accuracy, this is where the trend seems to be going toward correct enough, as opposed to being correct.
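The Edwards and Duan comparison amounts to re-running an identical prompt on different dates and comparing the answers. For anyone who wants to adopt that verify-again-later habit the episode recommends, here is a minimal sketch of dated prompt/response logging; the function names and file format are our own, not from the article.

```python
import json
from datetime import date

def log_response(path: str, prompt: str, response: str) -> None:
    """Append one dated prompt/response record to a JSON Lines file."""
    record = {"date": date.today().isoformat(), "prompt": prompt, "response": response}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def responses_for(path: str, prompt: str) -> list:
    """Return (date, response) pairs previously logged for an identical prompt."""
    with open(path, encoding="utf-8") as f:
        records = [json.loads(line) for line in f if line.strip()]
    return [(r["date"], r["response"]) for r in records if r["prompt"] == prompt]
```

Re-running the same prompt days apart and diffing the logged responses makes the kind of tonal softening described above visible, rather than something you have to notice from memory.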
Katie Vernoy 57:11
Yeah, I think it’s something where, as you’re talking about this, I just keep thinking: why do we care about this? And for me, I am a strong proponent of critical thinking. I think that oftentimes it’s lacking, especially when we can get very small sound bites. And the element here around critical thinking is getting to a good-enough place much quicker, when we’ve not generated the content ourselves, when we’ve not started the process and created it ourselves. It’s very efficient, but it doesn’t hold all the elements of critical thinking. You used the phrase “responsible research” previously, and I think this really is relevant here. Because this isn’t a search engine; this isn’t something that is going to pull all of the information. It’s going to keep refining the language and potentially softening it, making it less egregious and harder to discern what is inaccurate. But let’s ask the question: what is responsible research, separate from ChatGPT? How do you do responsible research for a diagnosis, for a treatment plan, for content creation? What is responsible research?
Curt Widhalm 58:38
Responsible research is going back to the evidence base, and it’s not having this be our sole look at things. The reason that we have diagnostics is that thousands, and tens of thousands, and hundreds of thousands of clinicians have agreed that a certain presentation of symptoms is something that should be looked at and treated in a certain way. And the best practices for treating are derived from the collective experience that we have. Again, the problem with the data sets here is that we go through a very slow academic process toward this data set. Now, we have a love-hate relationship with research, but this is research that has been peer reviewed. This is research that has been critically thought about and refined. And then we, as the readers of that information, go through our own critical reading of it to make sure that it applies in all of these situations. The drift of this language, especially with open-access sorts of things, means it’s not just clinicians who are accessing this. Part of the softening of the language is going to make it easier for people of all ages and education levels to comprehend the information that’s there. And that’s often going to drift much below the educational standards that we would expect of mental health clinicians. Too much reliance on this is going to put us on track toward the outcomes of the movie Idiocracy: just 500 years down the future, it’s going to be, alright, here’s your electrolytes.
Katie Vernoy 1:00:29
Yes, because they do a body good. What is it? Electrolytes? It’s… what the body craves.
Curt Widhalm 1:00:34
Yes, that’s what plants crave.
Katie Vernoy 1:00:37
It’s what everything craves. I think the biggest piece for me on responsible research that I think is missing with ChatGPT is not having a kind of neutral hypothesis, or not being biased towards the hypothesis. Because if you ask the question in a biased way, you’re going to get what supports your premise versus what doesn’t. And there’s some stuff that, I think… maybe cherry picked is the wrong word, but there’s stuff that has been…
Curt Widhalm 1:01:10
I’m waiting. I’m waiting patiently for you to prove your point.
Katie Vernoy 1:01:13
All right. All right. But there may be some topics that have been cherry picked to be a little bit more specific and not neutral. Like, how do you overthrow a government? It’s not going to tell you. We already tried that. But…
Curt Widhalm 1:01:27
Katie tried that.
Katie Vernoy 1:01:31
Yes, Curt. Curt suggested it and I impulsively did it. So maybe I’m the one that actually has ADHD. But I think there’s this element of: when I ask a question, “what are the reasons that X happens?” versus “does X happen?” Right? ChatGPT is going to say, well, the reasons this happens are X, Y, and Z, and here are the sources. Versus with “does X happen,” you could get a broader array of responses that might be a little bit more neutral, might be something that could either prove or disprove a hypothesis. But I think there’s that element of: when we ask ChatGPT for something, that is what it’s going to give us most of the time, unless we’re trying to overthrow the government, versus actually giving us: here is the research surrounding this topic, and it has stuff that’s both for and against what you believe to be true.
Curt Widhalm 1:02:31
What I’m trying to get to also is one of the seeming limitations of this: when data is not there to be scraped. The example that first comes to mind is the Florida reports about COVID deaths. There was a researcher working in public health who was fired because she just kept releasing data about COVID infections and COVID deaths in the state of Florida, and government policy in the state of Florida being what it was at the time, and continues to be…
Katie Vernoy 1:03:10
Hi Florida, we love you!
Curt Widhalm 1:03:14
…made it to where releasing that data ultimately led to this person getting fired. And so if you’re asking something about COVID deaths in Florida, what I’m seeing so far in my experience with ChatGPT, and this is anecdotal, is that it’s not going to naturally say, “and missing from my data set are these accurate reports.” If you think that I’m cherry picking, this is also something where, under the Trump administration, the CDC was not allowed to release information about firearm deaths. So even looking within our field, this has the potential to be largely influenced by public policy at the point in time when data is being released. I think those of us who are responsible researchers are better at being able to at least point out incomplete data sets and directions for future research, being able to posit some of these questions, and I imagine ChatGPT will continue to move in those directions. But I’m very cautious that the bias of where it’s pulling from means it’s unable to look at where it’s not able to pull from.
Katie Vernoy 1:03:15
Yes. And so, in the last few minutes that we have, I think it would be good to pull together our recommendations, current as of March 16th, 2023, for how clinicians, modern therapists, can responsibly interact with and potentially use ChatGPT.
Curt Widhalm 1:04:15
This isn’t a “don’t use ChatGPT.” This is: if you’re going to use it, and you’re going to uphold the trustworthiness of you as a professional and of our field as a profession, there are certain responsibilities that come with that use. I think that if we’re going to follow some of the recommendations that we’ve talked about here, this largely boils down to: if you’re using something like ChatGPT, you should, on your blogs or in the content that you create, in order to build a better and more transparent digital therapeutic alliance with your clients and a better pretreatment expectation, be open that this article was informed by, or partially written by, artificial intelligence on such and such date. I get that people are going to have a resistance to that, and not everybody is going to do it. But I think this is something where you have to be honest with yourself about why you would or wouldn’t want to do that. If this is truly for the beneficence of clients, then you should be transparent; you should give appropriate citations to where the contributions to your content are from. This doesn’t have to, you know, be “this article was written by artificial intelligence.” But having a disclaimer at the end of the article that says this happened, and this is how much was contributed, is a responsible practice that our existing ethics codes already tell us to follow.
Katie Vernoy 1:06:42
And I think, for me, I would go a step further and say, if you’re going to have ChatGPT, quote unquote, write your blog post, that may not be a good idea. I think there are business cases; we talked about, you know, the SEO, the other pieces, the duplicate content. But there’s also this element of: it may look nothing like you, unless you have a lot of content it can scrape and you say, “write a blog post in the style of Katie Vernoy” (and I’m gonna actually try that on this topic), “keyword optimized for SEO, at this many characters.” Unless you can make sure that it’s going to actually reflect your brand, that it’s going to be safe content, that it’s going to be content that’s helpful, that adds something, versus just giving you another blog post on your page. Which in and of itself adds SEO, I get that, but it doesn’t necessarily add users or content, and it may not help your site if it’s not very good. And so I would go so far as to say: work with ChatGPT, don’t…
Curt Widhalm 1:07:52
That’s a great way of putting it.
Katie Vernoy 1:07:52
…completely outsource to ChatGPT. Because I think that, to me, feels like you get into trouble if you start getting really quick at going, “oh, that looks good enough; oh, that looks good enough,” because then it gets into all those problems that we talked about. And the worst case is actually maleficence: you’re harming the pretreatment expectations, the digital alliance, and informed consent, really informed treatment.
Curt Widhalm 1:07:59
You can find our show notes over at mtsgpodcast.com. Follow us on our social media and interact with our posts so they show up higher in the algorithms; that way, the artificial intelligence that already exists around us, and that we embrace, can help bring you more of our content. And plus, it allows us to respond back to you in a totally human, not AI-generated, way.
Katie Vernoy 1:08:28
Not yet, at least. We’ve not created bots to respond to you yet, which I think people would notice, because we barely respond.
Curt Widhalm 1:09:03
If you want continuing education for this episode, follow the directions at the beginning and the end of the episode. We’ll also leave those in our show notes, or just head straight on over to moderntherapistcommunity.com. And if you want to support us in other ways, please consider becoming a patron or supporting us on Buy Me a Coffee. And until next time, I’m Curt Widhalm and she is Katie Vernoy.
Katie Vernoy 1:09:30
Just a quick reminder, if you’d like one unit of continuing education for listening to this episode, go to moderntherapistcommunity.com, purchase this course and pass the post test. A CE certificate will appear in your profile once you’ve successfully completed the steps.
Curt Widhalm 1:09:45
Once again, that’s moderntherapistcommunity.com.
Thank you for listening to the Modern Therapist’s Survival Guide. Learn more about who we are and what we do at mtsgpodcast.com. You can also join us on Facebook and Twitter. And please don’t forget to subscribe so you don’t miss any of our episodes.