Health Calls

A.I., Empathy and Goal-Aligned Health Care

Episode Summary

When working with patients, it is essential for palliative care providers to have goal-oriented conversations. Without them, caregivers may not efficiently and collaboratively meet the patients' needs and goals for their own care. How can new technologies enhance these conversations, both from a productivity and a human relationship standpoint?

Episode Notes

When working with patients, it is essential for palliative care providers to have goal-oriented conversations. Without them, caregivers may not efficiently and collaboratively meet the patients' needs and goals for their own care. How can new technologies enhance these conversations, both from a productivity and a human relationship standpoint?

Matthew Gonzales, MD, of the Institute for Human Caring at Providence, joins the conversation to discuss IHC, the center’s successes, and its use of generative artificial intelligence. Gonzales lays out how “EmpathyAI” can enhance caregiver-to-patient interactions and lead to better outcomes for patients in a goal-aligned care model.

Resources

Visit the IHC’s official website to learn more about its work

Episode Transcription

Welcome to Health Calls, the podcast of the Catholic Health Association of the United States. I'm your host, Brian Reardon. And again, in this season of Health Calls, we are talking about technology and humanity. For this episode, our topic is EmpathyAI and beyond: generating pathways for better goal-aligned care. We're going to get to that in just a minute, but let me introduce our guest. I'm going to bring in, via Zoom, Dr. Matthew Gonzales. He's the Associate Vice President and Chief Medical and Operations Officer for the Providence Institute for Human Caring. Matt, how are you?

Matthew Gonzales, MD (00:41):

Oh, great. Thanks for having me. Super excited to be here with you.

Brian Reardon (00:44):

No, thanks for being with us and talking about this topic. Before we dive into the topic, tell us a little bit about the Institute for Human Caring.

Matthew Gonzales, MD (00:53):

That's one of my favorite topics, actually. We're talking about two of my favorite topics today, generative AI and the Institute for Human Caring. The Institute for Human Caring was founded in 2014 by Dr. Ira Byock, a pioneer in the field of palliative care, who came together with Providence to create the institute as a system-level resource to help us figure out how to care better for patients who are living with serious illness. I've been with the institute since 2015, and when Dr. Byock retired, just about two years ago now, I stepped into his role. We work across Providence trying to make that care experience as good as it can be: thinking about ways that we teach people to have better conversations, to understand what's truly meaningful to patients and families, and to make the systems and processes in Epic for charting that information easy to find and easy to act on. It's honestly a true privilege to work for an organization that's dedicated to getting this right, because I think all of us know there are really big opportunities to improve within healthcare globally on this. It makes me proud to be with Providence and a part of Catholic healthcare, which is leading the way nationally in figuring out the best models of care for seriously ill people.

Brian Reardon (02:26):

Yeah, and I'm glad you brought up Dr. Byock. I remember hearing him speak at an assembly, and actually I'm going to mention assembly twice here in a matter of moments, because you spoke at our last assembly, but this was years ago, like the mid-2010s. And at the time, I'm going to get a little personal here, but folks who have listened to episodes from past seasons have heard me talk about both of my parents being in hospice, and I had one parent in hospice at the time he was presenting. What a champion he was for families and for those suffering, like you said, serious illness, and for having those really critical, important conversations at the end of life. So getting a little bit of context about the Institute for Human Caring is really important for this conversation, because those conversations we have as family members with loved ones, as we help them ease their way through the end of their life, are just sacred. So I just wanted to mention him and what an impact he had on me some 10 years ago or more, when I was dealing with that issue.

Matthew Gonzales, MD (03:29):

I couldn't agree more.

Brian Reardon (03:29):

But yeah. So you kind of inherited the institute from him or?

Matthew Gonzales, MD (03:33):

I did. Yeah. We worked side by side for eight years, and then when he retired, I stepped into his role, and it's been a privilege taking the work that he's done and amplifying it in different ways. Honestly, those first eight years were such an amazing learning experience. When I took this job, I was just excited to be able to meet him, and to have worked side by side with him for that many years was remarkable, because you're right, he is a great champion. And I think that with him at the helm and the work that we've continued to do, we've continued to make significant progress. As you said, these critically sacred conversations are so meaningful. Across the US, if you survey people and ask them, would you want to have a conversation? Maybe not want, maybe that's the wrong word, but do you think it's important to have a conversation about what the end of your life looks like when you're seriously ill?

(04:31):

We know that 80% of Americans would say, yes, that's an important conversation. Nationally, though, we're only having these conversations maybe about 10% of the time, 13 to 15% in some of the really exceptional places. And that's one of the key things we've worked on at Providence: trying to change that. I'm really proud of the fact that this year to date in all of our ICUs, if you have an ICU length of stay of five days or more, 84% of the patients who fit that category, they or their families, have had one of these critical, sacred conversations about what's meaningful when they're seriously ill. So I think we've made really important strides, and yet we still have a long way to go. And that's partly what EmpathyAI is really teed up to do, which is to try to help us feel more confident in having these difficult conversations.

Brian Reardon (05:23):

And I'm guessing people listening now are thinking, okay, wait a second, we're talking about AI, generative AI. How does that come into play with this really important human-to-human conversation that needs to happen? Again, I mentioned the assembly, Dr. Byock at an assembly talking about end-of-life issues. And then this past June, you were at our assembly in San Diego talking about EmpathyAI. We had a number of topics on AI, but I think yours was a really well-received presentation. I guess to start with, when you were talking about EmpathyAI, were there any questions or comments after your presentation that maybe surprised you from that conference?

Matthew Gonzales, MD (06:05):

I think the one that resonated the most with me was that someone came up and said, I was nervous about AI, and watching the demo that you presented made me realize that training tools like EmpathyAI feel real and can help foster human connection. And that resonated because, as we talk about AI in our society, in healthcare, and in Catholic healthcare, I recognize there are some downsides, sure, but there are massive upsides. And to me, one of those is EmpathyAI, which is really a conversation trainer. Part of the way that we've gotten to having this many conversations is that we've done a lot of teaching around the best ways to have these conversations about serious illness, to help clinicians feel confident in exploring a patient's hopes, dreams, and worries in that context. But those are hard human-to-human conversations, and unless you've had a lot of training, it's hard to know how to respond when someone looks you in the eye, or someone's family looks you in the eye, and says, is my loved one dying? When I was in med school, I had 15 minutes of role play around how to break bad news in this way.

Brian Reardon (07:23):

15 minutes.

Matthew Gonzales, MD (07:24):

Yeah, total. In four years of med school, which is not sufficient at all,

(07:30):

But we know that role play can help people get better at this. And so one of the things that we've done is create a role-play-based tool. We call it advanced communication training; it was based on Atul Gawande's Serious Illness Conversation Guide training. We've trained 5,000 people at Providence since 2016, which is remarkable. But we're a big health system. We have 25,000 doctors, 35,000 nurses. It'd be impossible for us to train them in human-to-human role play one by one, or even in the classes of 12 or 24 that we often do. And so that's where this idea of EmpathyAI came in: could we replicate the experience of human-to-human role play and make it human-to-AI role play, where we have AI-driven patients with simulated hopes, dreams, and worries, and allow clinicians to interact with those, all under the watchful eye of a communication coach, to help them get better at having these conversations in a safe and scalable environment.

Brian Reardon (08:30):

And you created this model, and again, you presented it. I guess I should have asked this question probably earlier, is what was the genesis? Why come up with using generative AI to help clinicians role play end of life conversations?

Matthew Gonzales, MD (08:44):

Well, so part of my history is that I was a software engineer before I was a doctor. And so

(08:51):

Honestly, I've been very interested for a long time in trying to figure out a synthesis between the highly technical fields that exist in this world and the highly relational, emotional realm of palliative care. So it started, honestly, in my evenings and weekends, just trying to have fun and learn about AI. The reason I thought this problem might be interesting is because I teach these classes a lot, and it works, but it's so time-intensive, and we just can't train everybody. And then a paper came out, I would say around March of 2023, that really showed that these chatbot tools have a way to emulate what looks like empathy. It was a very interesting trial where they took health concerns from Reddit posts, asked a physician to respond to them, and then asked a chatbot to respond to them. And if you look at the difference in the empathy scores, as scored by independent raters after the fact, ChatGPT, these LLMs, as we call them, large language models, had much higher empathy-looking scores than the human responses. And I say empathy-looking because it's not real empathy, right? Computers don't have empathy,

(10:15):

But they're relentless in their ability to create words that look and feel to us as human beings, like empathy. And so it occurred to me, could we use that? Could we use this highly relational model to be able to allow us to try things out in a safe environment?

Brian Reardon (10:32):

And that was all, I would imagine, based on the inputs you were doing. And again, I'm maybe getting a little too in the weeds here, but I'm really fascinated by how these chatbot models work. A lot of them are sort of closed environments; others go out onto the worldwide web and pull in all sorts of crazy stuff. How contained did you have to keep this process to build a tool that wasn't going to give you a lot of, I guess, off-the-wall stuff as you were working on this training model?

Matthew Gonzales, MD (11:04):

Such a good question. Initially, it was literally on my own computer with the open-source stuff that existed, and then I presented it to a lot of the leaders at Providence, and they have been wonderful in partnering with us and taking that initial working prototype I had to real data scientists and software engineers who do this now, so it's not just sort of my coding. And it's been a really fun, inventive process to take that prototype and co-create something over the last year that's in a more secure environment, one that allows us to save these conversations so that we can learn from them. Because you're right, one- or two-word differences in the way that you use GPT can cause massively different outputs. And so one of the things we've done is store the outputs of all these conversations so that we can keep track and pay attention to whether changes we're making cause GPT to do something we didn't intend, or whether it's saying things that don't quite make sense in context or don't feel real.

(12:07):

And I think one of the strengths of this project is that at Providence, we've really had a tight alignment between the data scientists doing the work and those of us who are subject matter experts doing this every day. That's been such a real privilege, because we can work together to bring our unique skill sets and create something that feels, honestly, very real. When I think about my experience having these conversations, this is the first training I've ever done that I can't just fly through without putting thought into it. I actually have to use my skills to navigate these conversations. And the simple fact that it pushes my learning edge is why I think we're going to be able to create really high-quality trainings, not just for these difficult end-of-life conversations, but for other conversations.
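A minimal sketch of the conversation logging Gonzales describes above: persist every role-play transcript so reviewers can audit 100% of sessions and spot drift when a prompt tweak changes the model's behavior. The record schema, file layout, and log_session helper are assumptions made for illustration, not Providence's actual implementation.

```python
# Hypothetical logging for an AI role-play trainer: save each completed
# session to disk, fingerprinted by the prompt text it ran against, so
# behavior can be compared before and after a one- or two-word change.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_DIR = Path("session_logs")  # illustrative location

def log_session(transcript: list[dict], prompt_text: str) -> Path:
    """Write one completed role-play to disk, tagged with a prompt fingerprint."""
    LOG_DIR.mkdir(exist_ok=True)
    # Fingerprinting the prompts lets reviewers group sessions that ran
    # against identical prompts when hunting for unintended output changes.
    prompt_id = hashlib.sha256(prompt_text.encode()).hexdigest()[:12]
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    record = {
        "logged_at": stamp,
        "prompt_id": prompt_id,
        "turns": transcript,           # the full clinician/AI-patient exchange
        "reviewed": False,             # flipped to True once a human audits it
    }
    path = LOG_DIR / f"{stamp}_{prompt_id}.json"
    path.write_text(json.dumps(record, indent=2))
    return path
```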

Brian Reardon (13:01):

And how does this tool work? Do individual physicians have a chat box that they practice with? Do you have group training done virtually? Can you walk us through, from a practical standpoint, how this is actually applied in the training for, again, end-of-life conversations?

Matthew Gonzales, MD (13:19):

Yeah, so we're still in the alpha testing phase. Our beta testing is coming up, where we're going to invite a lot more people. We wanted to do this in a super safe environment, though. So, for instance, we've gone through our institutional review board to make sure that everything we're doing makes sense and that we're not putting people at risk. It's part of the reason we're reviewing a hundred percent of these conversations after a teaching scenario: we want to make sure that the advice people are getting makes sense and that we're not going to be teaching them a technique that the AI has hallucinated. And to date, it hasn't done anything we thought was inappropriate, which has been wonderful. The way we've set this up, to answer your question, is we have some asynchronous online modules that go through the theory and learnings of how one has these conversations according to the best known ways: what's the best opening, what's the best middle, what's the best ending of one of these conversations?

(14:18):

Sort of establishing the roadmap, as it were, for having these conversations. And then we release people out into the AI environment, where folks can interact with one of two different personas. We're actually building a bunch more now, and some more difficulty levels, but you have the ability to speak to these personas, and they speak back to you through your computer. You have a get-help button; if you get lost, you can click it, and the coach gives you a sense of how you're doing, along with a concrete suggestion of what you might say next. You take that suggestion and move the conversation forward. And then when you're done with the conversation, it gives you a score of sorts. It's not an A, B, C, D, but a sense of how well the skills we teach in the asynchronous online modules are replicated in the role play.
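For readers curious how a trainer like this might be wired together, here is a minimal sketch of the role-play loop Gonzales describes: an AI patient persona, a get-help coach, and a narrative skills read-out at the end. The persona text, prompts, model name, and use of the OpenAI client are all illustrative assumptions; the source does not specify Providence's stack.

```python
# A hypothetical clinician/AI-patient role-play loop with a coaching sidebar.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PATIENT_PERSONA = (
    "You are Maria, 68, living with metastatic lung cancer. You hope to see "
    "your granddaughter graduate and you worry about burdening your family. "
    "Stay in character; if the clinician says something confusing or off the "
    "wall, ask how it relates to your illness."
)

COACH_PROMPT = (
    "You are a serious-illness communication coach. Given the conversation "
    "so far, briefly assess the clinician's technique and suggest one "
    "concrete thing to say next."
)

def chat(system: str, transcript: list[dict]) -> str:
    """One model call: a system prompt plus the running transcript."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": system}, *transcript],
    )
    return resp.choices[0].message.content

def run_session() -> list[dict]:
    transcript: list[dict] = []
    print("Type 'help' for coaching, 'done' to finish.\n")
    while True:
        line = input("Clinician: ").strip()
        if line.lower() == "done":
            break
        if line.lower() == "help":
            # The "get help" button: the coach reads the transcript so far.
            print("Coach:", chat(COACH_PROMPT, transcript), "\n")
            continue
        transcript.append({"role": "user", "content": line})
        reply = chat(PATIENT_PERSONA, transcript)
        transcript.append({"role": "assistant", "content": reply})
        print("Patient:", reply, "\n")
    # Post-session feedback: not a letter grade, a read-out on skills used.
    print("\nFeedback:", chat(
        COACH_PROMPT + " Summarize how well the clinician explored the "
        "patient's hopes, worries, and next steps.", transcript))
    return transcript

if __name__ == "__main__":
    run_session()
```

The finished transcript could then be handed to something like the log_session helper sketched earlier, so every session is stored for human review.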

Brian Reardon (15:12):

Fascinating. I asked you earlier about the reaction that you had after you did this presentation at our assembly. What reaction are you getting from those physicians who are going through these alpha pilot tests?

Matthew Gonzales, MD (15:25):

So I think the most interesting thing is that everybody kind of thought this couldn't work. They came into it initially with this sense of, well, I don't know that this is going to replicate the experience. And I think for the most part, we've changed people's minds about that. I mean, honestly, I changed my own mind. I went into this thinking, well, maybe, I don't know, we'll see. AI, at the time I started building this, which was May 2023, felt like such a buzzword, and it felt like such a hype. And I've seen so many of these tech hypes over the years that I thought, I don't think this is going to take off the way other people do. But the fact that I have to use my real skills, and that other people report they have to in these tests, I think that changes their impression of it and gives people this aha moment of, wow, there's something real here. To be honest, though, Brian, we've got a much longer way to go on this. We have two patients now, and my dream is to be able to create almost a choose-your-character or choose-your-scenario experience, where you can choose from multiple different patient personas and multiple different illnesses in multiple different settings. And

(16:42):

We have a long way to go before we're able to replicate the variety of patients and diagnoses and settings that we need in order for people of all different types of disciplines to be able to feel like this works well.

Brian Reardon (16:56):

And what do you say to those hesitant clinicians, the ones going, whoa, wait, what are we doing? Because again, AI is seen as this soulless machine that may take over the world, may take our jobs, et cetera. How do you convince people: no, this can actually help you be better, help you be more empathetic?

Matthew Gonzales, MD (17:15):

It's such a good question. My sense is that it's really this moment of recognizing that the training you're doing feels different than every other training. There are a lot of tools out there. Some are very experiential, and those are wonderful. And then there are some module-based ones; we've even had some more extensive module-based ones. But I don't know about you all, with some of these module-based tools, they ask you maybe an open-ended question, or they ask you to reflect on something, and no matter what you put into the box, the system isn't smart enough to respond to it. It just says, oh, thanks for your reflection, or whatever it might be.

Brian Reardon (17:57):

Yeah, it's regurgitating, it seems like.

Matthew Gonzales, MD (17:58):

Yeah, right. And this doesn't do that. If you say something off the wall to the AI patient, because you think it's not going to know how to respond, it asks you questions like, doctor, I didn't understand what you were talking about. How is that important in the context of my cancer? And so you realize very quickly that you can't game this system. And I think that pushes you to a different place of really trying to interact with it, and that, I think, changes people's minds.

Brian Reardon (18:28):

Yeah, it's fascinating. Well, as we're wrapping up, I'm going to ask sort of the big philosophical question that I think we're trying to, maybe not answer, but explore throughout this season. And that is: how can this technology, particularly generative AI, enhance that connection between the patient and the clinician? And for those of us who work in Catholic healthcare, what advice would you give for keeping in mind the potentials and pitfalls of this technology?

Matthew Gonzales, MD (18:59):

Well, I think there are a few ways. One is the way we've been talking about, in terms of training: allowing really high-quality trainings to happen so people can practice these difficult conversations as many times as they want in the privacy of their own home, and learn without harming someone, a real patient or family. I think the other way that is really exciting is the opportunity for clinicians to get out from behind the computer. Honestly,

(19:29):

I get really inspired by the idea that someday AI will be listening to us having conversations and creating notes and orders from them, so that rather than paying attention to the screens and pixels in front of me instead of the real person, it creates the space to look up, to look another human being in the eye, and to hear their story at a much, much deeper level. That excites me, because I think for most of us who have gone into the healing arts, that's really what we want to do. The caution I would offer is that as we create these technologies, it can't just be the technical folks doing it. Their expertise is deeply needed and important, but I also think we need clinicians, and we need patients and families, helping us co-create these so that we understand how they're perceived and what the impact is.

(20:28):

So I think we have a lot more learning to do, but I'm excited to be able to do this. And as I said at the assembly, if we aren't the ones doing this, somebody else is going to. So if I could drop a call to action, I think it's time for us in Catholic healthcare to step up to this challenge and lead the way, because we know that we will do this in the right way, that we will think about all of the implications, the ethical implications, of the ways we do this, and balance those with the massive upsides that we see. And so I guess I would just say, Brian, it feels like a very exciting time, but as with any new technology, we've got to step carefully and methodically and thoughtfully.

Brian Reardon (21:19):

Yeah, great conversation. I really appreciate your insights, and thank you for taking time out both for this episode and to talk to our attendees at the assembly on this really interesting topic. And, as you just said, it's an exciting time. So again, thank you for your time.

Matthew Gonzales, MD (21:36):

Thanks for having me.

Brian Reardon (21:38):

That was, again, Dr. Matthew Gonzales. He's the Associate Vice President and Chief Medical and Operations Officer for the Providence Institute for Human Caring. I'm your host, Brian Reardon, and this has been Health Calls, the podcast of the Catholic Health Association of the United States. Our show's executive producer is Josh Matejka, and this episode was co-produced by Jenn Lyke, with additional production support from Yvonne Stroder. This episode was engineered by Brian Hartmann at Clayton Studios in St. Louis, Missouri. You can find Health Calls on all your favorite podcast apps and services, as well as on our website, chausa.org/podcast. If you enjoy the show, please go ahead and give us a five-star rating. We'd love to hear from you. And as always, thanks for listening.