Health Calls

Viewing AI Through a Catholic Social Teaching Lens

Episode Summary

Nate Hibner, Ph.D., Senior Director of Ethics and Editor of Health Care Ethics USA (HCEUSA) at CHA, and Nicholas Kockler, Ph.D., MS, HEC-C, Vice President of System Ethics Services at Providence St. Joseph Health in Renton, Washington, join Health Calls for an enlightening discussion exploring the intersection of artificial intelligence (AI) and Catholic social teaching, the topic of Kockler’s recent HCEUSA article.

Episode Notes

Nate Hibner, Ph.D., Senior Director of Ethics and Editor of Health Care Ethics USA (HCEUSA) at CHA, and Nicholas Kockler, Ph.D., MS, HEC-C, Vice President of System Ethics Services at Providence St. Joseph Health in Renton, Washington, join Health Calls for an enlightening discussion exploring the intersection of artificial intelligence (AI) and Catholic social teaching, the topic of Kockler’s recent HCEUSA article. Discover how Catholic social teaching serves as a moral compass for navigating the complexities of AI, illustrating how AI can transcend being a mere tool and become an extension of our commitment to societal betterment by emphasizing values such as human dignity and the common good. Gain insights into the importance of forging partnerships in Catholic health care to ensure that AI is grounded in real-world needs.

Resource:

Generating Insights from Catholic Social Teaching: Ethical Guidelines for Artificial Intelligence in Health Care Ministries by Nicholas Kockler, Ph.D., MS, HEC-C - HCEUSA Fall 2023

Episode Transcription

Health Calls: AI Ethics 
April 9, 2024

Brian Reardon:

Hey, Nate. 

Nate Hibner:

Hi, Brian. 

Brian Reardon:

Good to see you. 

Nate Hibner:

Glad to be here.

Brian Reardon:

What AI tools are you playing around with in your work, or just out of your own curiosity?

Nate Hibner:

Actually, just the other day my wife and I were using ChatGPT to help plan for an upcoming vacation in Europe. 

Brian Reardon:

Oh, that's an interesting application.

Nate Hibner:

Oh, it's brilliant. You can say, "I'm going to spend this many days in this town, what should I see? Create a little itinerary for me." Then you can go back and say, "Oh, I'd like to see more of that, or let's eliminate this, or let's go to a nice restaurant that night." It really gives you a starting point, from which we can do our research and make final decisions.

Brian Reardon:

Nice. It's viewing AI through a tourism lens. 

Nate Hibner:

Very much so. 

Brian Reardon:

We'll have to do another episode on that at some point. But for today, we're going to talk about viewing AI through the lens of Catholic social teaching. You ready to go? 

Nate Hibner:

Very much so. 

Brian Reardon:

Let's do it. 

This is Health Calls, the podcast of the Catholic Health Association of the United States. I'm your host, Brian Reardon. As you just heard, I'm joined by Nate Hibner. Nate is our senior director of ethics at CHA. Always good to have him back on the show. In just a moment, we're going to also talk to Nicholas Kockler. He is vice president of system ethics services with Providence Saint Joseph Health in Renton, Washington. We'll talk to Nick in just a moment.

But, Nate, I'm going to come back to you. Obviously, AI is getting a ton of attention, and it has been for a while now. We actually wrote about this, and this is what we're going to talk to Nick about in a second, in the recent issue of Health Care Ethics USA, which you edit. Why have an article on the ethics of AI? What caused you to pick that as a topic?

Nate Hibner:

Well, a lot of times in the realm of ethics, we have questions posed to us by new technologies. Obviously, AI is a fairly new technology, at least for most people who don't work in the tech world. Those new technologies raise new questions. They're going to have impacts on the way that we live our lives and the way that we work, and for us in Catholic healthcare, on the way we provide care for patients and the communities we serve.

Those questions are arising, and we want to make sure that we're addressing them as they come in so that we're not falling behind, so that we're ahead of it and incorporating it in a way that aligns with our vision and our mission.

Brian Reardon:

Our boss, Sister Mary Haddad, often talks about needing to read the signs of the times. When we look at artificial intelligence and read the signs of the times, again through the lens of those you work with in Catholic healthcare, how do we read AI with that perspective? Particularly as it relates to some of the gospel principles that we follow.

Nate Hibner:

Some of the areas we might be concerned about regarding AI's use, for us and for other industries, include how we are going to utilize it to achieve the mission that we have within Catholic healthcare. Is it going to be a disruptor, or is it going to add a benefit? Is it going to make things easier? Is it going to help us or is it going to hinder us? We have to examine a number of different questions in order to move forward.

Doing so is reading the signs of the times. We're examining how AI is being developed, how it is being used and what the ultimate consequences of the technology are. Those three areas are how we examine and read the signs that are happening right now, today.

Brian Reardon:

You spend a lot of time with ethicists from across Catholic healthcare, from around the country. This topic, I'm sure, comes up constantly. When I talk to folks in communications and marketing, and I'm sure when clinical leaders are talking to each other, AI is constantly coming up. What are you hearing from your colleagues as far as their concerns, or maybe their hopes and dreams, with this technology?

Nate Hibner:

I think we go back to these three phases. The first is how is it being developed? What are the datasets, what are the algorithms? What transparency might need to be provided? Who's making them, et cetera.

The second would be what is the actual use of that technology? Is it to enhance, to provide additional support, to give people more free time, to streamline? 

The third is what are the consequences of this technology? Is it going to further patient care? Is it going to make our associates more productive? Or is it, unfortunately, going to lead to further biases and disparities?

These are the three general buckets through which I think most ethicists and our healthcare leaders are examining this technology.

Brian Reardon:

It just so happens the topics you just enumerated there are very much in line with what our guest here is going to talk about. That's, again, Nicholas Kockler. He is vice president of system ethics services with Providence Saint Joseph Health. 

Nicholas, great to have you with us. You wrote an article for Health Care Ethics USA that I thought was fascinating. We'll have a link on our podcast page so folks can read it. That's really why we wanted to bring you in, to talk a little bit more about it.

I guess to start with, you have a phrase in there that jumped out at me: you described AI, particularly generative AI, as "the industrialization of thought." Can you talk a little bit more about that?

Nicholas Kockler:

Hi, Brian. Hi, Nate. Great to be with you.

Yeah. As I was thinking about all the work we're doing in artificial intelligence, and reflecting on how we do so responsibly while being faithful to the Gospel, my mind went to comparisons. What analogies are there in human history and the development of technology that AI relates to? Two examples I point to in the article are the invention of the printing press and the development of mass production in the era of industrialization.

It occurs to me that, particularly with generative AI, what we see is the systematic production and execution of cognitive and creative tasks: the generation of words, the generation of images, the generation of video and motion pictures, on a scale that I don't think we've seen historically. And yet, there are seeds of this in mass production and the printing press that evoked, for me, this concept of the industrialization of thought. That's where it came from.

Brian Reardon:

As an ethicist, when you think about that term, "industrialization of thought," how much sleep do you lose when you read and hear about all of the potential ways AI could just go awry? 

Nicholas Kockler:

Well, I think you're going to get different answers from the different ethicists you ask. There's a virtue I like to imagine myself embracing, which in this instance is prudence: not recklessly adopting a technology, but also not being a Luddite.

I have worries, I have concerns, but I also feel like if I don't lean in, and learn, and figure out how AI is working and how it could affect my life, I'm going to be left behind. So I've leaned in and have not lost much sleep. Although, I will say that when I test AI and get into arguments about ethical issues or moral questions, it is scary. But I try to look at that positively and say this is an invitation to advance our thinking and our collective wisdom as a civilization.

I have perhaps a dose of optimism that allows me to sleep at night, but I wholeheartedly share in the concerns about what this means for displacing workers and for the adverse effects that AI can bring about. In fact, I think it was a New York Times article, an editorial, that stated that "AI has the potential and will likely, in some instances, reflect the worst in humanity." I just want to work to avoid that. 

Brian Reardon:

Yeah. I think your article does a really nice job of framing this up around Catholic social teaching. I want to get a little bit deep here on some terms, so listener, bear with me. In your article, you organize your reflections on this technology, again through the frame of Catholic social teaching, around three pillars: the axiological, the eschatological and the sociological.

Nick, I'm going to ask you to break it down and explain each of those three areas, those three pillars, because I think this is important for people to get a better understanding of how you frame this up. Could you share a little bit about what each of those pillars means? 

Nicholas Kockler:

Sure. Well, first of all, Catholic social teaching is not univocal. In order to come up with a reflection that was cohesive enough and drew from the broad wisdom in Catholic social teaching, I felt it necessary to come up with organizing categories. 

Axiological, to me, is all about the values that we hold in the healthcare ministry. In other words, what is it that we're working on or working toward? Human dignity, the common good, and so forth. The value, or the what, is the axiological component or pillar.

The eschatological pillar is all about the Kingdom of God. What is our destiny as humankind, and what does God yearn for, for us? Where is our love of God and of neighbor leading us? Where are we going? From the value, or the what, to the Kingdom of God, or the where.

Finally, the sociological pillar is about how are we going to get there? How are we going to walk along with one another in solidarity, in community, in right relationship with each other to get to where we want to go and to live out our values? 

Brian Reardon:

I think you do a really nice job, specific now to Catholic healthcare, of laying out examples where AI can really help us provide more efficient, better care to our patients. Can you give examples, again as an optimist, of some of the positive ways AI is actually being applied today in our facilities for the good of patients?

Nicholas Kockler:

Well, I think there's an enormous range of ways in which AI is improving, or has the potential of improving, healthcare. Some of them are around alleviating the administrative and documentation burden on our caregivers, our physicians and our nurses in documenting care. I think AI has the potential to surface deep insights around patterns of care, with the goal of contributing to the quadruple aim: quality of care, cost reduction, and so forth.

A couple of concrete examples: reading images, or sifting through the audio of a care encounter and helping the doctor consolidate notes for the patient, or learning from the patient, based on their questions, what could be concerning them. It's immense potential. I think with that comes the immense responsibility to make sure it's done right. If we're basing these AI tools on the wrong dataset, or a skewed dataset with inherent bias in it, they're going to worsen disparities.

Those are just some examples of the positive direction this can go, but it's certainly not without its hazards. 

Brian Reardon:

Yeah. I think that's why you, and people like Nate, are so valuable in this. Okay, so you get these positives, these ways we can use this technology to help us in healthcare do our jobs. But we've got to make sure, just like with any coworker, that they're doing their job right, so checking in on that.

From your perspective, and I think you brought this up in the article, not only is the data or information provided truthful, but is it authentic? Can you talk a little bit more about that, about discerning authenticity and truthfulness in the outputs that AI generates?

Nicholas Kockler:

Well, one of the major challenges with generative AI is what has been called, and I find this a fascinating use of the term, hallucination. When an AI produces a result based on a prompt, how does that result relate to what actually is?

I'll give you an example. In the early days, when I was learning and playing around with ChatGPT, I would ask it a research question: "Is there an article on normothermic regional perfusion in donation after cardiac death?" It would produce a list of resources. I naively gave that list to our medical librarian, who promptly responded, "I can't find these sources anywhere."

Brian Reardon:

Hm.

Nicholas Kockler:

This was more than a year ago so I'm sure that the model has improved and I've gotten smarter about how I design my prompts. But the point is it produced something that had the look and feel of being accurate and reliable, but it was a figment of the AI's imagination. It was a hallucination. I think we have to be guarded and really smart with how we design our prompts and how we interpret the results.

Brian Reardon:

Are there, I guess, ethical deployment protocols or governance structures that colleagues across Catholic healthcare that are leaning into AI should be aware of or should adopt? 

Nicholas Kockler:

Providence recently, earlier this year, signed the Rome Call for AI Ethics, an initiative that started with the Pontifical Academy for Life, which set out several principles that should be infused in the design, development, deployment and monitoring of artificial intelligence. It should be no surprise that those principles are reflected in Catholic social teaching and in my article.

Those principles from the Rome Call have been adapted into what I would call, and apologies for another 25-cent word, maxims. We have a process by which our caregivers and our leaders submit initiatives or implementations of AI. When we evaluate an AI project internally, we review the tool through the lens of the Rome Call principles and these maxims so that, at various gates along the process if you will, we can assess the risk as well as the mitigation strategies for assuring that this AI tool, whatever it happens to be, is going to be reflective of our values, our mission and our commitment to continuing the healing ministry of Jesus.

Brian Reardon:

So you've got some basic guidelines that are a good starting point?

Nicholas Kockler:

Well, basic guidelines and a process. It's not just that we give guidelines to our software engineers and say, "Go at it." It's a community of reflection and dialogue around, "Hey, we have this idea for an AI application," where we think together about how to make this the best possible AI tool for this particular function. It's the milieu, the basis of the conversation, the design work and the implementation work.

Brian Reardon:

Yeah. It's bringing that-

Nicholas Kockler:

It gets translated from principle to application.

Brian Reardon:

It's bringing that human element into it as well, so I think that's a really important point. 

I want to bring Nate back into the conversation. Nate, you've been hearing what Nicholas has shared, and you worked with him as editor on the article. Anything that comes to mind that you want to ask him, or that you're reflecting on from this conversation?

Nate Hibner:

Thank you for that, Brian.

Yeah. Nick, I know you're using Catholic social teaching in a very creative way here that really helps to provide us a roadmap, both a list of values and perhaps a process. As we look more broadly at AI, its development and its societal impact, how can Catholic healthcare collectively be a leader of change, a leader that helps to guide this in the right direction? You described a little bit of how Providence has been doing this internally. But how can we, in our overall ministry in the US, be an active leader of positive change for this technology?

Nicholas Kockler:

I think there are a variety of ways, Nate. The more Catholic healthcare systems acknowledge and adopt principles and values based in Catholic social teaching, take those earnestly, and infuse them into the operations and workflows that touch on the design, development and deployment of AI, I think, is one step.

I think whenever we enter partnerships or collaborative arrangements with other Catholic systems or Catholic organizations, or with other-than-Catholic organizations, we need to be vigilant in making sure that, whatever data we use or however the data is going to be used, we do our best to assure that they're going to respect and defend human dignity and promote the common good.

I think these are things that can permeate our culture, both nationally and internationally, but it's up to us to carry forward these values and principles, and to make them work in our decisions, whether internal to an organization or across institutions. It's, if you'll pardon my use of the word evangelization, in part about evangelizing these principles and values so that they are infused in the culture and the technology does not become its own entity. It doesn't become a technocratic reality.

Brian Reardon:

Nick, you wrap up your excellent article by saying, "We affirm our responsibility to harness the power of AI in ways that uplift humanity, honor our shared values and pave the way for a future where technology and ethics walk hand-in-hand." Final word from you: how do ethicists and this new frontier of AI walk hand-in-hand? What final advice would you give, particularly to those grappling with some of the issues around AI? What should they walk away with?

Nicholas Kockler:

Well, gosh, as Nate said earlier in this podcast, technology is always a catalyst for ethical reflection, and ethical reflection is a catalyst for influencing technology to serve the human person and the human community. If we can adopt a bit of a dialectic, an accompaniment mentality, and walk alongside our colleagues who are developing these technologies, and share in a common purpose of working toward the good of all, I think that's what I had in mind. It's a sense that we are in relationship and can mutually learn and benefit from each other. The moment we get fragmented is the moment I think we risk everything. As long as we walk together in pursuit of the good, we will be able to face what's ahead.

Brian Reardon:

Great perspective. Again, that was Nicholas Kockler, vice president of system ethics services for Providence Saint Joseph Health. Nate Hibner also joined us; he is senior director of ethics for the Catholic Health Association. I'm your host, Brian Reardon. Our producer for this episode is Jenn Lyke. Our engineer is Brian Hartman, here at Clayton Studios in St. Louis. You can download and listen to this episode, and read the article that Nick wrote, at our podcast page on our website. That's chausa.org/podcast. You can listen to all of our podcasts on that page, and I really encourage you to check out Nick's article from Health Care Ethics USA, really good insights. You can always listen to and download our podcasts on all of your favorite podcast applications. Thanks for listening.