The Five Most Important Takeaways from This Blog
- Mental-health bots, which are A.I. chatbots that can have therapeutic conversations with people, have been getting some high-profile exposure this year.
- Part of the appeal is affordability: a chatbot may be within reach for patients who are priced out of private mental-health care. Another potential application is using a mental-health bot as a supplementary tool for a patient already in human-to-human mental-health care.
- One prominent example of a mental-health bot is “Woebot”, which CBS’ program 60 Minutes has covered.
- Concerns surrounding the entire concept of mental-health-specialist chatbots include whether chatbots’ proclivity for hallucination (i.e., confidently presenting made-up or erroneous information) could end up harming patients.
- Mental-health practices will need to weigh the pros and cons carefully, including the especially thorny legal question of who should be held liable if an established connection exists between a chatbot’s conversation with a patient and harmful patient behavior.
Mental-health Specialists Must Ask If It Is Worth Trying
The fourth bullet point of the Key Takeaways section above raises an interesting what-if.
That is: what if mental-health chatbots end up being able to help a wide range of patients?
That outcome is entirely possible, but so is a darker version of it: the bots help many patients, but at the cost of some patients ending up worse off than before using the bot.
If you are a mental-health care provider toying with the idea of implementing one of the rising chatbot stars in mental health, then you will need to seriously consider the possibility of that what-if playing out in your practice.
This issue raises questions that the current American legal system does not really answer in full. For example: what if a patient commits a harmful act, self-directed or otherwise, with an evidently strong link to conversations held with a chatbot that offered “hallucinated” advice no professional mental-health practitioner in their right mind would ever give? (E.g., “Sprinting down the highway in dark clothes in the middle of the night could be an excellent way to relieve your anxiety, [insert patient name here]. Is there anything else you would like to chat about today?”)
And the potential legal mires such an implementation may lead to are just the beginning of the worries; a practice also has its own in-house ethical considerations vis-à-vis patient-treatment solutions to contend with.
One large consideration here: is implementing an A.I. system with such risks something that a practice should even bother with in the first place?
A loaded question, yes, but consider some other aspects of this technology before answering. The pros and cons of implementation will come up in the text below, where we consider the ability of chatbots to individualize conversations based on patient data.
Personalization in Mental-health A.I. Solutions
For mental-health practitioners, one of the appeals of implementing a chatbot is that the A.I. can be fed data about specific patients.
That way, you could offer automated conversations that are actually patient-centric, individualized for each patient.
So instead of the more general advice a chatbot extends to every patient, personalization allows the chatbot to tailor its conversations to a certain individual.
For patients, this can indeed make the chats feel more personal, and potentially more helpful, too.
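To make that idea a bit more concrete, below is a minimal, purely hypothetical sketch (in Python) of one way per-patient personalization can work: rather than training a separate model for each person, a practice’s integration could assemble clinician-approved context about the patient and prepend it to every chat session. The names here, such as PatientProfile and build_session_prompt, are our own illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: these names are illustrative, not taken from
# Woebot or any other real product. The idea shown is conditioning a
# general-purpose chatbot on per-patient context instead of training a
# separate model for each patient.

@dataclass
class PatientProfile:
    preferred_name: str
    goals: list = field(default_factory=list)              # e.g., "reduce social anxiety"
    coping_strategies: list = field(default_factory=list)  # approaches the clinician has approved
    topics_to_avoid: list = field(default_factory=list)    # clinician-flagged sensitive areas

def build_session_prompt(profile: PatientProfile) -> str:
    """Assemble the per-patient context prepended to each chat session."""
    return "\n".join([
        "You are a supportive wellness companion, not a replacement for the patient's clinician.",
        f"Address the patient as {profile.preferred_name}.",
        f"Current goals: {', '.join(profile.goals) or 'none recorded'}.",
        f"Clinician-approved coping strategies to reinforce: {', '.join(profile.coping_strategies) or 'none recorded'}.",
        f"Do not raise these topics unprompted: {', '.join(profile.topics_to_avoid) or 'none recorded'}.",
        "If the patient mentions self-harm or danger to others, stop and direct them to their clinician or emergency services.",
    ])

if __name__ == "__main__":
    profile = PatientProfile(
        preferred_name="Alex",
        goals=["practice grounding exercises before presentations"],
        coping_strategies=["box breathing", "5-4-3-2-1 grounding"],
        topics_to_avoid=["recent bereavement"],
    )
    print(build_session_prompt(profile))
```

Notice that the very thing that makes this approach appealing, the per-patient context, is also the confidential data at the center of the next section.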
Is Faultless Chatbot Confidentiality Possible?
Of course, this all invites justifiable concerns about patient-doctor confidentiality.
Yes, post-Freud there has always been the image of mental-health practitioners scribbling patient notes onto a legal pad. Private data in potentially legible handwriting.
And in the 21st century, there are screen recordings of telehealth appointments. Usually the patient consents before any recording happens, but it is another example of how not all information shared during a session stays exclusively in the mental memory banks of the participants. (Note that not all mental-health practices record sessions; some eschew the practice entirely.)
But unless some outside party were to find or steal one of those pads or access a telehealth recording, and assuming the mental-health practitioner is trustworthy, those conversations ought to remain entre nous between patient and practitioner.
But giving such personal data to a chatbot intuitively strikes most of us as quite different. Here, you are not only giving confidential information to a computer system, but to a computer system that talks. Talks to other patients, too, and who knows how many patients a practice may have?
Yes, yes, data-privacy safeguards and all that will be put in place, but no system is perfectly airtight, and the unpredictability of hallucination in chatbots should give many practices pause when they consider the possibility of privileged information resurfacing in another chat. Or turning up in a ransomware attack.