by Mary K. Steffany
Over the past several decades, technological advances have repeatedly reshaped the delivery of health care. Computer-assisted surgery, robotics, computer applications (apps), online patient portals, telehealth, and artificial intelligence (AI) are just a few examples of how the delivery of patient care has evolved. While technology has expanded the cutting-edge resources available to providers and influenced many aspects of the patient experience, it is important to be mindful of the potential risks associated with these advances, in particular the possibly diminished role of critical thinking itself at a time when AI is being touted as the “next greatest advance.”
This article focuses on the use of critical thinking to evaluate the potential benefits and drawbacks of using AI in a variety of health care contexts. This is the first part of an ongoing discussion for risk management professionals as AI becomes more commonplace.
Critical thinking is an approach to problem-solving that is based upon the Socratic method of questioning and analyzing assumptions, biases, information, and problems to reach a conclusion. It is vital to the practice of medicine and nursing, but can also be used effectively by risk management professionals.(1,2)
Artificial intelligence is the ability of a machine to perform cognitive functions normally associated with the human mind, such as learning and problem-solving, using algorithms developed from the analysis of millions of pieces of data.(3,4) The application of AI to clinical problem-solving has understandably raised concerns about patient safety and quality of care.
As the role of health care technology grows, risk management has expanded beyond the clinical domain to include wider organizational enterprise risks such as cybersecurity, workplace violence, supply chain management, and AI. Risk management professionals are responsible for proactively and systematically safeguarding patient safety as well as the organization’s assets, market share, accreditation, reimbursement levels, brand value, and community standing.(5) To meet these responsibilities, risk management professionals need to understand the impact that adopting AI could have on their organization’s risk exposure, and critical thinking provides a framework for weighing those risks and tradeoffs.
For example, what if leadership is considering a cost-effective way to manage patient falls through implementation of a virtual sitter program powered by AI? The risk management professional, as a critical thinker, would partner with the appropriate leaders to assess the potential effectiveness of the program. That assessment would include vetting systems and vendors; analyzing the return on investment by weighing the costs of the system against the incidence and cost of patient falls with injury, including workers’ compensation costs for staff injuries associated with those falls; considering the privacy and consent implications; and comparing the data captured by the AI program, for relevance and accuracy, against data collected under current risk assessment parameters. Based upon the results of that analysis, an informed decision could be made about adopting a virtual sitter program.
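To make the return-on-investment arithmetic concrete, here is a minimal sketch of the comparison described above. Every figure in it (fall incidence, cost per injurious fall, workers’ compensation costs, system pricing, and the expected fall reduction) is a hypothetical assumption for illustration, not data from any actual program.

```python
# Illustrative ROI comparison for a virtual sitter program.
# All figures below are hypothetical assumptions for demonstration only.

ANNUAL_FALLS_WITH_INJURY = 40        # assumed baseline falls with injury per year
COST_PER_FALL_WITH_INJURY = 35_000   # assumed average cost per injurious fall ($)
ANNUAL_WORKERS_COMP_COST = 60_000    # assumed staff-injury claims tied to falls ($)
ANNUAL_SYSTEM_COST = 250_000         # assumed licensing, hardware, and support ($)
EXPECTED_FALL_REDUCTION = 0.30       # assumed 30% reduction in falls with injury

def annual_net_savings() -> float:
    """Estimated yearly savings after paying for the system."""
    baseline_cost = (ANNUAL_FALLS_WITH_INJURY * COST_PER_FALL_WITH_INJURY
                     + ANNUAL_WORKERS_COMP_COST)
    averted_cost = baseline_cost * EXPECTED_FALL_REDUCTION
    return averted_cost - ANNUAL_SYSTEM_COST

savings = annual_net_savings()
print(f"Net annual savings: ${savings:,.0f} (ROI: {savings / ANNUAL_SYSTEM_COST:.1%})")
```

Under these assumptions the program pays for itself, but a modest change in the assumed fall reduction flips the result, which is precisely why the underlying data must be vetted before a decision is made.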
From a risk management perspective, employing critical thinking to mitigate the risks of implementing AI technology requires a prudent, structured process. That process should include, but not be limited to, the following (6-8):
- Governance: Identify an oversight committee responsible for AI development and decision-making; create policies and procedures that tie the use of AI-based applications, such as ChatGPT and ambient scribing, to organizational objectives; establish ethical principles for AI implementation and use; outline the process for data collection, disclosure, and identification of potential bias and discrimination in AI algorithms; specify the security measures in place to protect against data breaches; and outline how the organization will comply with regulatory requirements.
- Teamwork: Establish a multidisciplinary team to review AI-related products and services before implementation.
- Testing: Prior to implementation, assess for safety by testing the effectiveness of AI processes through Failure Mode and Effects Analysis (FMEA). FMEA assesses the likelihood of failure at each step in a process and the impact of that failure on the final process outcome (a scoring sketch follows this list).(9)
- Training: Prepare training checklists for providers using AI programs and applications.
- Liability exposure: Engage insurance carriers and brokers in conversations about the potential insurance implications of AI use. (Example: A provider makes a decision contrary to the AI-recommended treatment, resulting in an unintended outcome. The AI might have been wrong or the provider might have been wrong, but either way the provider failed to document their clinical rationale. Is there potential for an AI-related liability claim for the patient harm that occurred?)(10)
- AI vendor contracts: Review vendor contracts and include requirements for privacy, transparency, bias testing, compliance with federal and state regulatory requirements, and indemnification in the agreement.
- Monitor AI systems: After deployment, monitor AI systems to ensure safety and accuracy (a monitoring sketch also follows this list). (Example: Physicians who use AI chatbots to ask questions need to be mindful that the program could “hallucinate,” i.e., generate a response that seems plausible but is factually incorrect or unrelated to the context.)(10)
- Updates to AI and technological acquisitions: Consider utilizing a technology assessment checklist when the organization is acquiring or updating technology.(11)
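To make the FMEA step concrete, the sketch below scores each step of a hypothetical virtual sitter alert workflow on the three conventional FMEA scales (severity, occurrence, and detectability, each rated 1-10) and ranks the steps by Risk Priority Number (RPN = severity × occurrence × detectability). The workflow steps and ratings are illustrative assumptions, not findings from an actual analysis.

```python
# Minimal FMEA scoring sketch: rank process steps by Risk Priority Number.
# The steps and ratings are hypothetical examples for a virtual sitter workflow.
from typing import NamedTuple

class FailureMode(NamedTuple):
    step: str
    severity: int       # 1 (negligible harm) .. 10 (catastrophic harm)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detectability: int  # 1 (certain to be caught) .. 10 (likely to be missed)

    @property
    def rpn(self) -> int:
        """Risk Priority Number: higher values are remediated first."""
        return self.severity * self.occurrence * self.detectability

modes = [
    FailureMode("Camera fails to detect patient movement", 9, 3, 7),
    FailureMode("False alarm sent to nursing station", 3, 6, 2),
    FailureMode("Alert routed to the wrong unit", 8, 2, 5),
    FailureMode("Alert fatigue delays staff response", 7, 5, 6),
]

for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {mode.rpn:>3}  {mode.step}")
```

Steps with the highest RPNs would be targeted for mitigation first and then re-scored after changes are made.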
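Similarly, part of the post-deployment monitoring described above can be automated. This sketch tracks a rolling accuracy rate for an AI system’s outputs, as confirmed or overturned by clinician review, and flags the system for investigation when accuracy falls below a threshold. The window size, threshold, and review workflow are all illustrative assumptions.

```python
# Minimal post-deployment accuracy monitor for an AI system's outputs.
# The window size and alert threshold are hypothetical assumptions.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.95):
        self.results = deque(maxlen=window)  # rolling record of review outcomes
        self.threshold = threshold

    def record(self, confirmed_correct: bool) -> None:
        """Log one clinician review of an AI output (True = confirmed correct)."""
        self.results.append(confirmed_correct)

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_review(self) -> bool:
        """Flag for human investigation once the window is full and accuracy sags."""
        return len(self.results) == self.results.maxlen and self.accuracy() < self.threshold

monitor = AccuracyMonitor()
# In practice, each AI output would be logged as clinicians confirm or correct it:
monitor.record(True)
if monitor.needs_review():
    print(f"Accuracy {monitor.accuracy():.1%} is below threshold; escalate for review.")
```

A sustained drop in the rolling accuracy would prompt the kind of multidisciplinary review outlined under governance above.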
Part of the challenge of AI implementation is monitoring its use and noting the challenges at the ground level. This underscores the need for risk management professionals to round frequently throughout the organization and to learn from staff feedback how AI is working, what challenges users are experiencing, and what yet-to-be-identified risks exist.
Given the breadth of AI’s capabilities, it is not yet possible to know what we do not know about AI. What is known is that critical thinking aligns with the process health care risk management professionals follow to identify actual risks and to mitigate potential risks to patients, staff, and the organization.(1,5) Thus, it is incumbent upon providers and risk management professionals to learn about AI and understand how it can best be deployed at their organization. Integrating AI, utilizing critical thinking as a framework, should enhance quality of care and improve patient safety through risk identification, stratification, and reduction strategies.
Author: Mary K. Steffany is an independent health care risk consultant. Since retiring from Zurich Financial Services, where she was a senior health care risk consultant, she has volunteered for ASHRM. Currently, she is completing her tenure as co-chair of ASHRM’s Educational Development Committee.
References
1. Paul R, Elder L, Bartell T. A Brief History of the Idea of Critical Thinking. The Foundation for Critical Thinking website. https://www.criticalthinking.org/pages/a-brief-history-of-the-idea-of-critical-thinking/408. Accessed November 13, 2023.
2. Marr B. 13 Easy Steps To Improve Your Critical Thinking Skills. Forbes website. https://forbes.com/sites/bernardmarr/2022/08/05/13-easy-steps-to-improve-your-critical-thinking-skills/?sb=35aad3565ecd. Published August 5, 2022. Accessed November 13, 2023.
3. What is AI? McKinsey & Company website. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-ai#/. Published April 24, 2023. Accessed November 18, 2023.
4. Kanade V. What Is Artificial Intelligence (AI)? Definitions, Types, Goals, Challenges and Trends in 2022. Spiceworks website. https://www.spiceworks.com/tech/artificial-intelligence/articles/what-is-ai/. Published March 14, 2022. Accessed November 19, 2023.
5. What is Risk Management in Healthcare? The New England Journal of Medicine website. https://catalyst.nejm.org/doi/full/10.1056/CAT.18.0197. Published April 25, 2018. Accessed January 5, 2024.
6. Porcaro JM. Artificial and Augmented Intelligence: Risk Management Considerations for Healthcare. WTW website. https://www.wtwco.com/en-us/insights/2023/10/artificial-and-augmented-intelligence-risk-management-considerations-for-healthcare. Published October 11, 2023. Accessed January 25, 2024.
7. Conmy S. Ten Steps to Creating an AI Policy. Corporate Governance Institute website. Accessed June 2, 2024.
8. Huben-Kearney A, Fischer-Sanchez D, Wilburn B. ASHRM/AHAS White Paper: Recognizing and Managing Bias in Digital Health. https://www.ashrm.org/system/files/media/file/2024/10/ASHRM-Recognizing%26Managing-Bias-in-Digital-Health-White-Paper.pdf. Published 2024.
9. Mercolli L, Rominger A, Shi K. Towards quality management of artificial intelligence systems for medical applications. Z Med Phys. 2024 May;34(2):343-352. doi: 10.1016/j.zemedi.2024.02.001.
10. Gallegos A. When Could You Be Sued for AI Malpractice? You’re Likely Using It Now. Medscape website. https://www.medscape.com/viewarticle/992808. Published June 6, 2023. Accessed June 9, 2024.
11. Digital Prism Advisors. Technology Assessment Checklist. https://dprism.com/wp-content/uploads/2021/02/dPrism_Tech-Assessment-Checklist_v1_02-05-21.pdf.