Obstacles To Widespread Adoption of AI in the Healthcare Industry

Why is AI adoption not yet widespread in healthcare? Find out.

May 9, 2023

AI in the Healthcare Industry

Artificial intelligence (AI) has the potential to dramatically improve the delivery of healthcare. Thanks to AI’s ability to unlock insights and patterns from very large data sets, the stage is set for innovative, high-value, augmentative capabilities such as predicting patient deterioration, suggesting appropriate interventions for specific conditions, and analyzing many vital signs in parallel at high frequency. Ophir Ronen, founder and CEO of CalmWave, discusses the hurdles in AI adoption and how the healthcare sector can overcome them.

According to a recent report from the Brookings Institution, however, the healthcare industry has been particularly cautious in adopting AI. Though it’s only natural to approach new technology carefully, this caution is especially pronounced in healthcare due to the enormous responsibility involved in providing the best care for patients. Many factors make clinicians wary of adopting AI, including fear of being marginalized, fear of AI-triggered errors harming patient health (or even causing death), and fear of poorly understood conclusions produced by black-box AI.

Before exploring these concerns, it’s important to understand what healthcare providers stand to gain from AI, specifically regarding working conditions.

What Can AI Do For Healthcare?

AI has the potential to revolutionize healthcare by augmenting clinicians’ abilities to identify and treat diseases. AI systems can analyze vast amounts of data from electronic health records, imaging studies, and other sources to find patterns that are difficult for humans to detect. These analyses can lead to earlier and more accurate diagnoses, better treatment outcomes, and more personalized care.

One area where AI can have a significant impact is in reducing clinician burnout. Nurses, in particular, are at risk of burnout due to the demanding nature of their work. AI can help alleviate this by providing objective measures of workload based on the frequency of ICU alarms, acuity of patients, and frequency and complexity of interventions. Enabling hospital administrators and managers to understand clinician workload and potential for burnout fosters data-driven opportunities to work towards healthier workplaces, places where clinicians want to remain and follow their passion for healing.
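As a rough illustration, a workload measure of this kind could combine those signals into a single score. The inputs, scales, and weights below are purely hypothetical, not CalmWave’s actual model:

```python
# Illustrative sketch of a composite clinician-workload score.
# The normalization caps and weights are hypothetical.
def workload_score(alarms_per_hour, patient_acuity, interventions_per_hour,
                   w_alarms=0.4, w_acuity=0.35, w_interventions=0.25):
    """Combine normalized workload signals into a single 0-100 score."""
    # Normalize each signal to a 0-1 range using rough ICU-scale caps.
    a = min(alarms_per_hour / 30.0, 1.0)        # 30+ alarms/hr saturates
    p = min(patient_acuity / 10.0, 1.0)         # acuity on a 0-10 scale
    i = min(interventions_per_hour / 10.0, 1.0)
    return round(100 * (w_alarms * a + w_acuity * p + w_interventions * i), 1)

print(workload_score(alarms_per_hour=24, patient_acuity=7,
                     interventions_per_hour=5))  # → 69.0
```

A score like this gives managers an objective, trendable number per shift, rather than relying on anecdotal reports of overload.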

In addition to reducing burnout, AI can help clinicians make more informed decisions by consolidating real-time data to provide actionable insights and predictive analytics. For example, AI algorithms can analyze patient data to identify those at risk of developing complications and alert clinicians to take preventative measures. This can improve patient outcomes and reduce healthcare costs by avoiding more serious complications.
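A minimal sketch of such an alert, assuming a pre-trained logistic risk model; the feature names, coefficients, and threshold here are invented for illustration:

```python
import math

# Hypothetical coefficients of a pre-trained logistic deterioration model.
COEFFS = {"heart_rate": 0.03, "resp_rate": 0.08, "lactate": 0.6}
INTERCEPT = -9.0
ALERT_THRESHOLD = 0.5

def deterioration_risk(vitals):
    """Return the model's probability of deterioration for one patient."""
    z = INTERCEPT + sum(COEFFS[k] * vitals[k] for k in COEFFS)
    return 1 / (1 + math.exp(-z))  # logistic (sigmoid) function

def should_alert(vitals):
    """Flag the patient when predicted risk crosses the alert threshold."""
    return deterioration_risk(vitals) >= ALERT_THRESHOLD

patient = {"heart_rate": 118, "resp_rate": 28, "lactate": 6.0}
print(should_alert(patient))  # → True
```

In practice the model, threshold, and escalation path would all be tuned with clinicians, since an overly sensitive threshold simply adds to the alarm fatigue AI is meant to reduce.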

Overall, AI has the potential to transform healthcare by augmenting clinicians’ abilities to analyze vast amounts of data and identify patterns that are difficult for humans to detect. By reducing burnout and providing real-time data and predictive analytics, AI can help clinicians make more informed decisions, improve patient outcomes, and reduce healthcare costs.

See More: The Debate on Responsible AI and IoT

Common Obstacles To Widespread AI Adoption

AI appears to hold the key to making the lives of healthcare workers significantly easier. However, introducing complex and unfamiliar technology into such an essential industry has several risks. In fact, many healthcare workers fear that AI could do more harm than good for providers and patients alike.

Here are a few reasons healthcare providers might be resistant to AI:

1. Explainability

Perhaps the biggest obstacle to the adoption of AI in healthcare is the mystery surrounding the mechanics of AI. How do these algorithms work? How are the aforementioned data points produced? “Black-box” AI is a thing of the past, and clinicians (and regulators) now expect explainability from AI-based solutions.

‘Explainability’ is the concept that a machine learning model and its output can be explained in a way that “makes sense” to a human being at an acceptable level. To feel comfortable implementing AI into their operations, healthcare practitioners must have proof that it will adhere to the Hippocratic Oath, i.e., “do no harm.” Without a thorough understanding of how AI makes decisions, practitioners will have a hard time leaving essential responsibilities in the hands of a machine.
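For a simple linear model, one way to "make sense" to a clinician is to show how much each input contributed to the score, so the reason for a flag is visible rather than hidden. This sketch uses hypothetical feature names and weights:

```python
# Sketch of a per-feature explanation for a linear risk model: report each
# input's contribution to the score so a clinician can see *why* the model
# flagged a patient. Feature names and weights are hypothetical.
WEIGHTS = {"resp_rate": 0.08, "spo2_deficit": 0.12, "lactate": 0.6}

def explain(vitals):
    """Return (feature, contribution) pairs, most influential first."""
    contribs = {k: round(WEIGHTS[k] * vitals[k], 2) for k in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: -kv[1])

print(explain({"resp_rate": 30, "spo2_deficit": 8, "lactate": 5.0}))
# → [('lactate', 3.0), ('resp_rate', 2.4), ('spo2_deficit', 0.96)]
```

For nonlinear models, attribution methods in the same spirit (such as those in the SHAP library) produce comparable per-feature breakdowns, though their fidelity still has to be validated clinically.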

2. Bias and discrimination

Many healthcare systems are steadily increasing their efforts toward addressing racial disparities and expanding the accessibility of their services to minorities and underserved communities. Unfortunately, medicine has a long history of bias. And there are instances where AI has been used to exacerbate that issue. 

Practitioners may worry that AI algorithms trained on specific datasets will perpetuate discriminatory practices by systematically ignoring company-wide initiatives to improve health equity.  Any AI-based technologies in healthcare today must consider these dynamics as they develop more holistic, robust solutions to improve care for everyone.

3. Risk and comfort

Technology will never be perfect. Healthcare providers strive for perfection because anything less could mean lives impacted. The stakes are high in healthcare, and so are expectations for any new medical technology. AI-based products are strikingly accurate but not perfect. Therefore, new AI-based technologies could still cause errors or failures, potentially resulting in a misdiagnosis or the mistreatment of critically ill patients. This expectation isn’t unique to AI, but it sets a high and sometimes unrealistic bar, slowing adoption. There is also the ongoing challenge of legacy systems.

Different organizations have their own systems and methodologies for patient care. Providers often weigh familiarity and consistency more heavily than advancement and accuracy. It isn’t enough for a technology to be good or accurate; it’s just as important to consider how comfortable clinicians are with using and understanding it.

4. Lack of regulations

Though the FDA has approved hundreds of AI-powered medical devices, there are no regulations for non-commercial AI algorithms in healthcare. The challenge of creating these regulations largely stems from how quickly AI advances. This seeming lack of oversight and accountability is understandably troubling for healthcare workers, who would prefer to know that this new technology has been approved by a regulatory body and adheres to certain standards, particularly in relation to privacy and anonymity.

See More: Digital Transformation in Healthcare: Eight Pillars of Change

How To Introduce AI Into Healthcare

Despite reservations from clinicians, AI can and will change the face of healthcare. In order to implement AI-based tools successfully, though, clinicians must be at the forefront of the design, testing, and training of novel medical technology. 

Design

For healthcare practitioners to trust AI systems, they must be directly involved in their design and implementation. You can’t blame clinicians for wanting assurance that AI developers share their goals and are fully aware of their concerns.

Hospitals are complex ecosystems with critical workflows. Successful integration of AI into healthcare systems requires a comprehensive look at existing workflows to improve them rather than causing more work. Including health workers in the design phase is crucial to ensuring that AI prioritizes usability and fits seamlessly into the day-to-day workflow.

Transparency

Developers of AI systems must give practitioners full visibility into the AI decision-making process. Users must be shown not only the end result but also the data that supported the decision. Without this foundational requirement, the prospect of AI deployment in critical care functions seems remote. Clinicians must be able to stand behind both the design of the algorithms and the data the AI processes to produce the expected results.

User testing

To that end, healthcare workers should receive ample opportunities to test AI within clinical settings. These real-world interactions will ultimately reveal which use cases support practitioners and patient care delivery versus those that create unnecessary complications. 

Simply dropping AI technology into a hospital ward without providing user testing with clinicians will exacerbate clinician concerns about unfamiliarity, biases, and malfunction risks. Getting clinicians comfortable with using the technology from the start will ease concerns and improve integration. Additionally, the feedback from healthcare workers will ultimately help AI companies consistently enhance their technology’s capacity to simplify daily tasks and meet practitioners’ most pressing needs.

Clinical evidence

One thing is guaranteed to gain the acceptance of healthcare providers: proof. Much of healthcare follows evidence-based methods. Evidence-based medicine (EBM) is an approach to medical practice that emphasizes the use of the best available research evidence to guide clinical decision-making. The goal of EBM is to improve the quality of patient care by ensuring that treatments and interventions are based on the most current and reliable scientific evidence. The key word here is evidence.

Though this takes more time and can seem like a massive inconvenience and barrier to adoption, it is often a necessary step to ensure safe and sustainable solutions. To be clear, there are varying levels of evidence, and it’s important that the healthcare industry, including regulators, adapt to conditions, scenarios, and exceptions to provide the proper flexibility to accelerate access to technology. Putting the evidence behind the technology will not only improve patient care but also instill the confidence that clinicians need to drive adoption.

Healthcare Providers: AI’s Purpose Is To Augment and Empower You

The healthcare industry’s reservations about AI are undoubtedly valid and deserve to be taken seriously. That begins with acknowledging the changes AI presents and eradicating the notion that AI’s introduction will be rushed to instantaneously modernize the industry.

It’s important for healthcare workers to know that AI will not be adopted without their input and that any AI initiative will have clearly defined objectives, values, and evidence. Clinicians can, should, and must have a voice in designing, testing, and implementing AI technology. There is no healthcare without clinicians. The barriers to widespread adoption will gradually fall as more healthcare workers become integral to this generation of medical AI and grow aware of the capabilities it offers them: diminished stress, improved working conditions, and better patient outcomes.

How are you trying to overcome the barriers to AI adoption? Share with us on Facebook, Twitter, and LinkedIn. We’d love to hear from you!

Image Source: Shutterstock


Ophir Ronen
Ophir Ronen is the CEO and Founder of CalmWave, a healthcare company using AI to remediate clinical alarm fatigue in ICUs and build a first-to-market hospital operations orchestration platform. He is a serial tech entrepreneur, having begun his career as a co-founder of Internap Network Services, one of the first commercial Internet backbones which IPO'd in 1999. He has started six companies and achieved three successful exits, the last of which was PagerDuty's acquisition of EEHQ.