Health Calls

Theology, Artificial Intelligence, and Catholic Health Care

Episode Summary

As artificial intelligence is implemented into more and more aspects of health care, it can be difficult to keep up with the varying ethical considerations that arise. If Catholic health care is to thrive in this new era of technology, a robust theological perspective is necessary.

Episode Notes

As artificial intelligence is implemented into more and more aspects of health care, it can be difficult to keep up with the varying ethical considerations that arise. If Catholic health care is to thrive in this new era of technology, a robust theological perspective is necessary.

Dan Daly, Executive Director of the Center for Theology and Ethics in Catholic Health, returns to Health Calls to discuss how Catholic health providers can find theological guidance and ethical clarity on technological questions. He discusses Pope Francis’s perspective on AI, why human empathy is crucial, and why efficiency and profit cannot be the ministry’s ultimate goals.

Resources

Visit the Center for Theology and Ethics in Catholic Health's new website

Revisit Dan Daly’s previous Health Calls episode on the Center’s mission and values

Episode Transcription

Brian Reardon (00:07):

Welcome to Health Calls, the podcast of the Catholic Health Association of the United States. I'm your host, Brian Reardon, and joining me for this episode as always is Josh Matejka. And we're also going to bring in, in just a minute, Dan Daly. He's the executive director of the Center for Theology and Ethics in Catholic Health. And Josh, we are continuing this season's theme of humanity and technology. In this episode, we're going to be talking about the theology and ethics of AI. And so I guess my question to you, to sort of set the context, is we've actually done some episodes on AI and ethics over the past few seasons, not just this season. So why do we want to bring Dan in on this conversation?

Josh Matejka (00:46):

Well, I mean, other than the fact that we want to see Dan in studio, and it was a great opportunity to bring him to St. Louis. One of the things that I have thought about, especially as we've been recording this season, is that these things change so fast, right? We've done an ethics episode for artificial intelligence each of the last two seasons, and every time we get to a new season, there are new topics emerging, and there are new thoughts and there are new tools. And so the ethics of AI are kind of constantly, maybe not evolving, but we're having to confront new issues. But also, one of the things that I want to put to our listeners is that the Center for Theology and Ethics in Catholic Health is theology and ethics and Catholic health. We haven't really approached this from a theological perspective, at least in such a rich way. And I think bringing Dan back in to have that type of conversation is really important because we need both of those things. We can't just leave one for the other.

Brian Reardon (01:40):

Yeah. Let's bring in Dan. Dan, it's great to have you here in the studio. I think the last time we talked, it was over Zoom, so it's always better to be in person. But again, Dan Daly is the executive director of the new Center for Theology and Ethics in Catholic Health. And I guess, Dan, for those folks that didn't listen to the last episode of season four where you talked about the center, can you just give us a quick recap? The center is up and running now, essentially, and you're on your way.

Dan Daly (02:05):

Thanks, Brian. Thanks, Josh. It is great to be with you in studio. So the Center for Theology and Ethics in Catholic Health is responsive to Catholic health care. It's taking up the questions that are pressing in the ministry and trying to provide some guidance, right? We're always looking to provide guidance, not necessarily make all the moral decisions for people, but help them to make their decisions in light of the Catholic tradition, in light of the best expression of Catholic theology and the best expression of Catholic ethics, in a way that's faithful to the church's teaching. But yeah, it's an exciting time for the center.

Brian Reardon (02:42):

And I think the reason, again, we wanted to bring you in is because of that notion of faithful guidance, and again, technology, how technology is being applied in all of our healthcare facilities. And this is not something new, but it's happening so rapidly. I think our first guest of the season talked about the exponential growth, and we've had subsequent episodes talking about all the different aspects of technology and how it interacts with, again, the care we're providing to our patients. So I guess my first question is, as you convene your advisory board for the center, what's some of the input that you're going to be looking for from that group as it comes specifically to the issue of emergent technology and the delivery of healthcare?

Dan Daly (03:22):

So the advisory board is an incredibly high-powered group of professionals in Catholic health. And so really what I'm looking to do is to draw upon their expertise, their diverse expertise. Some are VPs of mission and ethics in our systems. Some are professors. We've got a bishop and an archbishop. So to really draw on that diversity of experience to help us think about what is going on right now that we need to be attentive to with AI in Catholic health, and then going forward, what do we need to be on the lookout for? One of the things we want to do is we don't want to be so far behind the technology that it becomes settled and ossified and therefore really difficult to change. We want to be a conversation partner as the technology is being deployed, developed, and adapted within Catholic health.

Brian Reardon (04:16):

And I think the nice thing is that Pope Francis has spoken about this.

Dan Daly (04:20):

Yes.

Brian Reardon (04:20):

So there's a foundation from the Catholic church's perspective on this, and I think one of the lines you used in a presentation is 'exciting and fearsome tool.'

Dan Daly (04:28):

Yeah. So Francis is, I think, very clear-eyed about AI. He's neither a Luddite, nor is he someone who sees this as humanity's salvation, as some would suggest. So he sees it, as you noted, as an exciting and fearsome tool. And it's exciting because, think about this, a recent study showed that providers who used AI for things like the creation of the electronic health record or office visit notes saved on average three hours per week. They increased accuracy, they saved time, they decreased burnout, and they increased satisfaction. That's exciting. We should be excited about that kind of a technology. Now, Pope Francis is the victim of the most viral AI-generated image. It's him in a $4,000 puffer coat. We've probably all seen it. That's the fearsome side. It can deceive. And I think, Brian, we'll probably go on in this conversation to discuss other ways that we should fear AI. But I think it's really this clear-eyed approach, this balanced approach that Francis is taking, that is guiding us between the two extremes of being overly excited and not critical enough, and being kind of retrograde and not adopting these technologies that really can help us.

Brian Reardon (05:54):

And I think what's interesting in a presentation I heard you give was really talking about the tenets of scripture and Catholic social teaching in guiding this work around artificial intelligence and other emerging technologies. So you think, well, how do the scriptures play into this? Can you talk a little more about that?

Dan Daly (06:10):

It's a great question. Yeah. And it may seem to be far from the experience of AI, right? I mean, Jesus wasn't dealing with these levels of technology, clearly. But I think when we turn to the scripture, when we turn to the person of Jesus, and specifically to his healings, Catholic healthcare is continuing that healing ministry. So what does that ministry look like? Well, for Jesus, it looks like encounter. Jesus encounters those who are suffering. Jesus physically touches them. If you look at almost all of his healings, he's touching those who are wounded, those who are sick, and his touch, his healing, reintegrates the person back into community. It's not just that they're cured, they are healed, and healing and curing are different. Healing goes beyond curing. It often involves it, maybe not always,

(06:58):

But healing goes beyond it. And it really does involve that relational aspect. One of the ways that we'll have to judge AI is, does it enable us to continue that healing ministry of Jesus Christ? Does the healing happen, that is, the reintegration into community? Does it prevent touch or does it enable touch? Does it enable mercy or does it prevent it? This is a bit futuristic, but care bots, where care would be provided by some kind of a robot, are in development, and we would have to think really seriously about whether that could be something that we could adopt in Catholic health, because of the importance of touch and encounter. And the works of mercy are very relational and visceral, and those animate Catholic health. And we will judge AI based on how it helps us, because AI may liberate us to do more of that. If AI liberates us to do the works of mercy even better than we currently do, then we should be excited about AI.

Brian Reardon (08:04):

No, and I think the recent example that we discussed on this series was using AI to basically help clinicians have really difficult conversations with patients who may be at the end of their life, and be more empathetic. What's your reaction to that, as sort of using basically an algorithm, or artificial intelligence, to help human beings be more empathetic?

Dan Daly (08:29):

So what I would say about that is, an AI is not empathetic, because it can't feel, it can't feel with the other. But I think what it can do, like any good technology, like any textbook, is provide guidance. So in fact, I prefer the language of augmented intelligence, not artificial intelligence. Why? Well, it's augmenting our intelligence. It throws it back to our intelligence. We are the intelligent beings. This is just an algorithm that's reflecting our society. That's what AI essentially is. So it's augmenting. It's helping us to do what we do well, to do it even better. And I think there's no moral problem with that. What I would not want to see is that the AI takes the place of the provider who has that conversation, because

Brian Reardon (09:18):

There has to be a human element there.

Dan Daly (09:20):

Absolutely. We're human beings, we're embodied relational beings. We must never forget

Brian Reardon (09:24):

That. And I think as AI continues to develop, we've all read this, that at some point it starts feeding on itself. And so if there's not that human input, and maybe that's part of the whole notion of people fearing AI, I think we need to remember that AI is only going to be as good as the human, or humans, that are providing the input to whatever we're using it for, whatever we're creating. Eventually, if humans disengage, we're going to have robots talking to each other. And that's not really going to get us very far.

Dan Daly (09:57):

Yeah, I mean, I think the AI is only as good as the data that we feed it. I think that's one of the points that you're getting at. And I think we have examples of when we feed it racist data, when we feed it classist data, when we feed it sexist data, we have those kinds of outputs coming right back at us. So the data needs to be curated. We need to be critical about what we're putting in and what we're not putting in, about how we're interpreting the data. I think one of the dangers is that we come to trust the judgment of AI,

(10:29):

The "judgment" of AI, more than our own judgment. And I think that would be a colossal mistake. AI, again, I'd prefer us to think about it as augmented intelligence. It's augmenting our own intelligence and not replacing it, because it really is. I mean, one of the ways a scholar, Shannon Vallor, talks about AI is as a mirror. It is simply a mirror for our society, with all of our pathologies, with all of our vices. And the beauty is represented there as well, but it's no better than us. It is us. And so we need to be aware of that and not see it as an autonomous being that has some kind of objectivity that we lack. It simply is a reflection of our own society.

Brian Reardon (11:17):

And on that mirror concept you spoke to, it's also not morally neutral. I think that's another point that Pope Francis also brings up in his five themes.

Dan Daly (11:25):

Pope Francis is clear, and really there's consensus on this; he's not on an island here. Any ethicist who has really looked at this has said, this is not a morally neutral technology. You could look at a knife and say, well, a knife can be used to cut vegetables, to hunt animals, to keep a family alive, or it can be used to kill another human being. So in a way, it's neutral. It's how it's used. AI is not neutral. It's designed with certain purposes and ends in mind. So first of all, it has goals in mind. They're usually efficiency and profit. So we need to interrogate those goals. And it is, as I just noted, reflecting our own society. And so if we are racist as a society traditionally, and we're feeding it racist data, racist texts, texts that have been written predominantly by persons who are white, the outputs of AI will reflect that right back to us. And so we need to be aware of that when we're adopting AI.

Brian Reardon (12:28):

And when you speak to AI being motivated or created to enhance profit and be more efficient, how does that, I guess, come up against two of the other themes that Pope Francis had, that it should respect dignity and promote integral human development? Those seem to be in conflict.

Dan Daly (12:49):

Well, I would say the first point is, efficiency and profit in and of themselves are fine. They're good. We want to be efficient. Efficiency is better than inefficiency, and as for the generation of profit, there's nothing morally wrong with the generation of profit as long as it's done in a just way. Those are not the ultimate goals, though. Catholic healthcare does not exist for the sake of efficiency, nor does it exist for the sake of profit or some kind of excess revenue generation. The good of Catholic health is in the person, it's in the human being. The good that is created is in the person. It's in the healing, it's in the health. It's in the respecting of dignity. And, to your earlier point, the promotion of the integral human development of the person, that is, the development from having one's needs met, to being integrated in society, and then to flourishing in relationship to others and through education, and finally through this relationship with God. So it's this movement from conditions that are less human to conditions that are more human. And if AI can help us to get to those more human conditions, then we ought to see it as something that is positive for us. But we need to interrogate it based on that notion of integral human development. If it does not promote our relationships with others, our education, and our relationship with God, then we ought to be very critical of it, certainly.

Brian Reardon (14:14):

Yeah. And your interrogation line kind of brings me to the fifth point in Pope Francis's key themes, and that is enable encounter and fraternity. And you talked, in the presentation that I saw you give on AI, about the need to include vulnerable persons and communities in dialogue. I think you referred to that as inclusion monitoring. So can you speak a little bit more about that?

Dan Daly (14:36):

Yeah. So one of the things that Francis will write about and speak about is, he's going to give us the values that we ought to promote, but he also has procedures, procedures that we should follow when we're thinking about how we're going to use AI. And the central procedure is that it should be dialogue based, and that should be an inclusive dialogue, a dialogue, and Francis continually emphasizes this, that includes people who have been traditionally marginalized, people who are vulnerable. So we need to monitor who we're including and not including. So when you look at data, who's included in the data, who's represented in the data? If AI is being trained predominantly on the electronic health record, and persons of color and immigrants are less likely to have an electronic health record, then their reality is less reflected in the AI than other communities' realities are. So we need to be critical about that. Who's included in the data, and then who's included in the decision-making? Who is at the table where the decisions are made about what kind of AI to adopt and how to adapt it to our purposes? Are people who are going to be affected by that technology included? And so we need to be critical about that. And Francis comes back to this over and over and over again. He doesn't use the concept of inclusion monitoring, but certainly it is present in his thinking.

Brian Reardon (16:04):

So you have a nice foundation. You have those five key themes that Pope Francis has expressed: one, it's an exciting and fearsome tool; two, it's not morally neutral, and that's the notion that it's a mirror; it needs to respect dignity; it needs to promote integral human development; and then the final one is enable encounter and fraternity. So with those principles in mind, going back to the center, how do you see them being used? And I know you can't come with the full answer on this, because it's going to take a lot of work, but taking those principles, how do you see the ethical evaluation of these emerging technologies unfolding?

Dan Daly (16:42):

Well, yeah, I guess in a way I haven't gotten as much to the encounter piece, the fraternity piece. I would say ultimately the promise of AI, as I see it at this moment, is that it will free us in Catholic health to encounter patients more and more, with greater depth. Think of the encounters that we have with physicians and providers today. There's often a screen between us and the provider, whether it's an iPad or a computer, and the provider is spending a decent amount of time looking at that screen. If AI can free us, in a way that it can capture a lot of these health notes and create the electronic health record without the provider having to enter it, then that is an enormous gain for us, where the provider can do more listening, which is one of the things that patients say they want more of. They want their providers to listen to them, so that they're not just a disease, but they're a person who has some kind of a condition, and they want their providers to know how it affects their life, their person, their relationships. I think that is really the promise of AI. And I think as we begin to, and continue to, evaluate AI, we need to keep that centered, keep the patient centered, the patient's experience centered. And that is the way that we're going to promote human dignity in the most profound sense: being able to provide care for all persons and provide it for the whole person.

Brian Reardon (18:15):

Now, nice summary. I want to bring in Josh. Any final thoughts or questions for Dan?

Josh Matejka (18:20):

Yeah, Dan, thank you so much. This has been a really rich, information- and guidance-heavy episode, which I think is really powerful. One thing I wanted to ask you: one of the things that I often think about when we have these conversations across the spectrum of care, spiritual care, environmental care, is that there are so many issues, so many areas, that artificial intelligence and emerging technologies tackle that it can be really difficult to focus and center ourselves on what is our mission, what is our kind of calling in the Catholic health ministry. If there's a theological and ethical North Star that you could offer, what would it be for people who are interested in this space but want to do it in a way that honors the charism of Catholic health?

Dan Daly (19:07):

Josh, it's a great question. I would say to center the person of Jesus Christ. I think we would always do well to go back to the scripture and to read the scripture in its entirety, but read the gospels, read Jesus's healings, encounter art that depicts the healings of Jesus, encounter art that depicts the Good Samaritan, contemplate Van Gogh's The Good Samaritan, with his back bent, putting the man who's been beaten by the robbers on his own animal. We are storytelling creatures. A lot of what I've discussed today is very conceptual, and it's important. As a theologian, I would never say that the conceptualization of these things is not important. It is. I think the North Star, though, is the person of Jesus Christ. It's not an idea. It's the person of Jesus. And I think we encounter Jesus in the scripture. We encounter Jesus in the lived practice of the faith. We encounter Jesus through art, through literature. I think we need to be aware of that, just how embodied we are. And again, if we circle back to AI, we need to remember that AI is helping us, as embodied souls, encounter each other in light of the reality of God and Jesus Christ.

Brian Reardon (20:31):

Love it. Dan Daly, executive director of the Center for Theology and Ethics in Catholic Health. It was great to have you with us here in St. Louis, really providing, again, what I would call faithful guidance on this really big topic. So thanks for being with us.

Dan Daly (20:44):

Thanks, Brian. Thanks, Josh.

Brian Reardon (20:46):

And this has been another episode of Health Calls, the podcast of the Catholic Health Association. I'm your host, Brian Reardon. Our show's executive producer is Josh Matejka. We have additional production support from Yvonne Stroder. This episode was engineered by Brian Hartmann here at Clayton Studios in St. Louis. You can find Health Calls on all of your favorite podcast apps and services, as well as on our website, chausa.org/podcast. If you enjoy this show and our other shows, please go ahead and give us a five-star rating. We'd love to hear from you. As always, thanks for listening.