When Change Management Isn’t Enough for Healthcare AI Adoption
Content contribution by
Elissa Torres, Director of Transformation and Enablement
A common pattern shows up across healthcare organizations as AI initiatives move from early success into broader adoption.
An AI initiative gains momentum: a vendor is chosen, pilots succeed, and rollout begins. Then resistance arises. Clinicians raise concerns, compliance scrutinizes outputs, managers delay adoption, and skepticism fills the feedback forms. Leadership routinely diagnoses this as a change management problem and prescribes communications plans, training, town halls, executive roadshows, and adoption champions.
Six months later, adoption metrics are still flat. The organization concludes that its workforce is unusually resistant to change. This diagnosis misses the underlying issue, and the remedy it produces consistently fails by treating a governance design problem as a communications problem.
Healthcare professionals aren't resisting change. They're rationally resisting risk transfer, which is a different problem that requires a different solution.
Where Change Management Falls Short
Change management plays a critical role in how organizations operate and evolve. It works well when the source of resistance is unfamiliarity, anxiety about new processes, or a lack of information about why a change is happening. The assumption underlying all change management is that once people understand the change and feel heard, they will accept it.
That assumption breaks down in a specific circumstance: when the people resisting the change are not uninformed. They understand the change well, and they know it transfers risk onto them.
This dynamic is increasingly visible in healthcare AI adoption. When a clinician is asked to use an AI-assisted diagnostic tool, they are being asked to validate outputs they didn't generate, from a model they didn't design, trained on data they've never seen. In the event of an adverse outcome, accountability ultimately rests with the clinician who acted on the recommendation, regardless of where the input originated. That arrangement reflects how accountability is currently structured, and it cannot be addressed through communications alone.
Three Questions to Establish Clarity on Accountability
Before commissioning a change management workstream for AI adoption, leaders should be able to answer three questions:
1: If this AI output leads to a patient harm event, who is liable? Is that clearly documented, communicated to clinical staff, and reflected in the organization's indemnification posture?
2: What recourse does a clinician have if they disagree with the AI's recommendation? Does the organization protect clinicians who override AI in good faith, or does the performance management system penalize deviation from AI-driven recommendations?
3: Has the organization changed its documentation requirements, accountability structures, audit processes, or liability protections for clinicians using AI-assisted tools? Or has it simply added AI to existing workflows without redesigning the accountability architecture?
If leadership cannot answer these three questions confidently, the workforce can’t either. And when adoption slows, it’s not irrational. It reflects clinicians protecting themselves and their patients in the absence of clear accountability.
The Risk Transfer Problem in Plain Language
Here is what AI deployment often looks like from a frontline healthcare professional's perspective: In many cases, AI deployment decisions are made at the organizational level, with tool selection, configuration, and approval handled through centralized processes. Clinicians are then asked to incorporate these tools into their workflows.
What no one has told them explicitly is who is accountable if the tool makes an error that affects a patient. In the absence of that answer, they assume it is the person who used it.
This is risk transfer without acknowledgment. Organizations doing it are not facing a workforce that resists change; they are facing a workforce that has correctly identified it is being asked to absorb downside risk it had no role in creating. Change management cannot address this, because the workforce's concern is not about understanding the change. It is about who is accountable when outcomes are negative.
What Liability Design Looks Like
Properly addressing AI adoption resistance requires redesigning governance, not additional communication plans. This means four things specifically:
Step 1: Clearly map accountability before deployment. For every AI tool in a clinical workflow, provide a concise, accessible statement answering: who is accountable for what when the tool is used with patients? Make this available for all clinical staff, not hidden in policy documents.
Step 2: Define and protect clinical override rights. Clinicians must have the explicit, organizational right to override an AI recommendation when their clinical judgment conflicts with it. That right must be protected in performance management, not penalized. The documentation required for an override should be reasonable, not a bureaucratic deterrent.
Step 3: Provide clear AI error indemnification. In partnership with legal and risk management, set and communicate your AI-assisted decision indemnification policies. Specify the liability coverage provided when a selected AI tool contributes to a patient harm event, for both the organization and clinicians. Ensure these answers are as clear as those in other clinical risk domains.
Step 4: Enable meaningful governance participation before deployment. Ensure that the clinicians and compliance professionals who will use AI tools participate directly in their configuration, approval, and use. Give them real authority in pre-deployment governance, not just post-deployment input. When those at risk design the controls, adoption becomes a matter of ownership rather than persuasion.
The Governance Design Reframe
AI adoption challenges in healthcare are often best understood as questions of accountability design, rather than communication effectiveness. Healthcare organizations seeing stronger AI adoption tend to have addressed accountability questions early and made those answers visible to the people expected to use the tools.
Your workforce is not the primary obstacle to AI adoption, but governance design often is. And no amount of change management fixes a governance design problem.
Before your next AI adoption initiative, take the time to clearly answer the three questions outlined above and ensure those answers are visible to the people expected to use the tools.
Where accountability is well-defined, consistently applied, and understood by clinical staff, adoption challenges are more likely to center on communication and change enablement. Where those answers remain unclear or inconsistent, adoption challenges often reflect gaps in governance design that are best addressed upfront.
If your organization is navigating AI adoption and encountering similar challenges, Kona Kai Corporation works with leadership teams to assess readiness, define governance structures, and align AI initiatives with operational reality.