Rethinking AI Skepticism in Healthcare Governance
Content contribution by
Elissa Torres, Director of Transformation and Enablement
AI adoption in healthcare is accelerating, driven by the promise of improved outcomes, greater efficiency, and better use of clinical data. Organizations are investing in tools that support diagnosis, streamline workflows, and enhance decision-making across care settings. At the same time, these deployments are introducing new layers of complexity around accountability, risk, and trust, particularly for the employees expected to use them in practice.
As organizations work through these complexities, attention often turns to adoption. Leaders look for signals that AI is being used consistently and effectively across teams. When adoption slows or varies, the focus quickly shifts to the people using the tools and how to bring them along. Employees who are skeptical of AI are treated as a problem to fix: the assumption is that skepticism gets in the way of adoption, so the goal becomes reducing it.
That assumption is often incomplete. In many cases, organizations may be overlooking a valuable source of risk insight at a point when understanding potential issues early matters most.
Employees who are most resistant to AI adoption often have strong instincts about where meaningful risks may exist. Treating them primarily as obstacles can lead organizations to overlook important governance considerations.
When Skepticism Signals Risk
Not all AI skepticism is equal. There is a meaningful difference between a generalized anxiety about technology and change, and a specific, substantiated concern about how a particular AI tool is performing in a particular clinical or operational context.
The first kind of skepticism may indeed be a change management challenge. The second kind is risk intelligence, and healthcare organizations may be systematically misclassifying the second kind as the first.
Risk intelligence is the real-world insight employees have about where AI may fail before those risks are visible in formal data or outcomes.
Consider what a thoughtful AI skeptic in a clinical setting often knows that an AI governance committee does not. They are the voice of process and experience. They can see outlier cases in their patient population that are not represented in the training data. They know the workflow realities that make the AI's outputs impractical, or even risky, to act on within the available time. They may notice how the tool's suggestions shift when patient demographics change. They hear the questions their colleagues are quietly asking but not raising formally.
These concerns reflect a deep understanding of how the technology is used in practice, often exceeding that of the people making the adoption decisions.
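One way to make that kind of frontline observation checkable is a simple subgroup audit of a tool's outputs. The sketch below is illustrative only: it assumes a hypothetical log of AI recommendations alongside the clinician's final decision, and the column names, agreement metric, and minimum group size are placeholders, not any vendor's actual schema.

```python
import pandas as pd

def subgroup_agreement_audit(df: pd.DataFrame,
                             group_col: str = "demographic_group",
                             pred_col: str = "ai_recommendation",
                             outcome_col: str = "clinician_final_decision",
                             min_n: int = 30) -> pd.DataFrame:
    """Compare AI/clinician agreement rates across demographic subgroups.

    A large gap between subgroups is the quantitative version of a
    clinician noticing that the tool's suggestions shift when patient
    demographics change. All column names are illustrative placeholders.
    """
    rows = []
    for group, sub in df.groupby(group_col):
        if len(sub) < min_n:
            continue  # skip groups too small for a stable rate
        agreement = (sub[pred_col] == sub[outcome_col]).mean()
        rows.append({"group": group, "n": len(sub), "agreement_rate": agreement})
    if not rows:
        return pd.DataFrame(columns=["group", "n", "agreement_rate"])
    # Lowest-agreement subgroups surface first for review
    return pd.DataFrame(rows).sort_values("agreement_rate")
```

A report like this turns "the suggestions shift when demographics change" from an anecdote into a number a governance committee can act on.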
How to Empower AI Skeptics
Most healthcare AI governance processes do include clinical and operational stakeholders. Skeptics are often present, and their concerns are documented as part of the process.
The distinction that matters, however, is not participation, but influence. In many cases, governance is structured to gather input, not to act on it. Concerns are surfaced and recorded, but there is no clear mechanism for those concerns to alter the direction of a deployment.
Effective governance requires more than consultation. There should be a defined pathway for concerns to shape outcomes. This could include formal escalation processes, the ability to pause or revisit decisions, and clear criteria for when additional review is required. Without these mechanisms, dissent becomes informational rather than actionable.
When input does not translate into impact, participation loses credibility. Over time, individuals who consistently raise concerns see little change, disengage from the process, and are often labeled as resistant. In reality, the issue is not resistance, but a governance structure that does not fully utilize the insight it is designed to capture.
Four Structural Changes for AI Governance
Redesigning AI governance to genuinely incorporate skeptic perspectives requires four structural changes:
- Formal dissent rights and defined escalation paths. Governance processes should include a mechanism for a named participant to formally register a concern significant enough to warrant an independent review before deployment. This is not a veto or complete stop; it is a structured pause. The concern must be addressed, not just acknowledged.
- A standing AI red team function. Every healthcare organization deploying AI at scale should have a named group with a formal mandate to identify potential issues before, during, and after deployment. This group should include clinical skeptics, compliance professionals, and operational leaders who have expressed concerns about specific AI tools. The role here is to make AI safer by stress-testing it before it reaches patients.
- Operational representation in AI governance with domain authority. Governance committees should not be composed primarily of leaders and technologists. They should include people who are close to the actual use of AI tools and can recognize when something behaves unexpectedly. These representatives should have sufficient domain authority that their observations carry weight.
- Post-deployment feedback loops with real consequences. Skeptics who were overruled at deployment should be re-engaged after go-live to see whether their concerns materialized. Their feedback should go directly into model performance reviews. When post-deployment findings validate pre-deployment concerns, document that finding. The governance process should learn from these patterns (a minimal sketch of such a tracking record follows this list).
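To make the feedback loop in the last item concrete, each registered concern could live as a structured record that is revisited after go-live. The sketch below shows one possible shape for such a record; the field names and status values are hypothetical assumptions for illustration, not a prescribed standard or an existing tool's schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional

class ConcernStatus(Enum):
    OPEN = "open"                    # raised, not yet resolved
    VALIDATED = "validated"          # post-deployment evidence confirmed it
    NOT_OBSERVED = "not_observed"    # monitored; no supporting evidence found

@dataclass
class GovernanceConcern:
    """One registered concern about a specific AI tool (illustrative schema)."""
    tool_name: str
    raised_by: str                         # the named participant with dissent rights
    description: str
    date_raised: date
    escalated: bool = False                # did it trigger an independent review?
    overruled_at_deployment: bool = False  # deployment proceeded despite the concern
    status: ConcernStatus = ConcernStatus.OPEN
    post_deployment_review: Optional[date] = None
    evidence_notes: list[str] = field(default_factory=list)

def lessons_for_governance(log: list[GovernanceConcern]) -> list[GovernanceConcern]:
    """Concerns that were overruled pre-deployment and later validated:
    the pattern the governance process is supposed to learn from."""
    return [c for c in log
            if c.overruled_at_deployment and c.status is ConcernStatus.VALIDATED]
```

A query for overruled-then-validated concerns is also exactly the documentation a board or general counsel will ask for after an incident.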
Considerations for Boards and Oversight
For board members and general counsel, there is another angle. When an AI-related harm event happens in healthcare, such as a patient harmed by a biased decision support recommendation or a regulatory finding tied to AI-aided prior authorization, one of the first post-incident questions will be: who raised concerns, and what happened to those concerns?
Organizations with structured processes for surfacing and addressing dissenting views are better positioned. They create clear pathways to raise concerns, evaluate them, and document how decisions are made. This strengthens governance and provides greater clarity in legal and regulatory contexts compared to environments where concerns are acknowledged but not meaningfully addressed.
This is not theoretical. The same logic applies to product liability, informed consent, and quality management in every other domain of healthcare.
When issues arise, organizations are better positioned when they can demonstrate that concerns were surfaced, evaluated, and reflected in decision-making, rather than simply managed or set aside.
A Practical First Step
For leaders who want to begin this shift without overhauling their entire governance architecture, there is a simple starting point.
Identify your three most persistent, most thoughtful AI skeptics. Not the loudest resisters, but the ones whose concerns are specific, substantiated, and domain-grounded. These are the people who keep returning to the same two or three questions about a particular AI tool and have not been satisfied with the answers they've received.
What should you do? Schedule a direct conversation with each of them. Don't make it a change management conversation designed to address their concerns. Make it a listening conversation to understand them. Ask what they believe could go wrong, what evidence they've seen, and what governance structure would make them feel confident to engage constructively.
Then, before the next AI governance cycle, propose one structural change that reflects what you heard. That single step can begin to shift the dynamic, moving a skeptic from being managed to being meaningfully involved in governance. Over time, changes like this help organizations better incorporate frontline insight into decision-making.
This does not eliminate all risk, but it improves how risk is identified and addressed. In healthcare AI, it allows organizations to better distinguish between resistance to change and valid concerns about how risk is being introduced and managed.
If your organization is navigating AI adoption in healthcare, it is worth taking a closer look at how accountability, risk, and governance are structured in practice. Kona Kai Corporation partners with leadership teams to assess these areas and help align AI initiatives with the realities of clinical and operational decision-making.