Rethinking AI Skepticism in Healthcare Governance

May 6, 2026

Content contribution by Elissa Torres, Director of Transformation and Enablement


AI adoption in healthcare is accelerating, driven by the promise of improved outcomes, greater efficiency, and better use of clinical data. Organizations are investing in tools that support diagnosis, streamline workflows, and enhance decision-making across care settings. At the same time, these deployments are introducing new layers of complexity around accountability, risk, and trust, particularly for the employees expected to use them in practice. 


As organizations work through these complexities, attention often turns to adoption. Leaders look for signals that AI is being used consistently and effectively across teams. When adoption slows or varies, the focus quickly shifts to the people using the tools and how to bring them along. Employees who are skeptical of AI are treated as a problem to fix: the assumption is that skepticism stands in the way of adoption, so the goal becomes reducing it.


That assumption is often incomplete. In many cases, organizations may be overlooking a valuable source of risk insight at a point when understanding potential issues early matters most. 


Employees who are most resistant to AI adoption often have strong instincts about where meaningful risks may exist. Treating them primarily as obstacles can lead organizations to overlook important governance considerations. 


When Skepticism Signals Risk 

Not all AI skepticism is equal. There is a meaningful difference between a generalized anxiety about technology and change, and a specific, substantiated concern about how a particular AI tool is performing in a particular clinical or operational context. 


The first kind of skepticism may indeed be a change management challenge. The second kind is risk intelligence, and healthcare organizations may be systematically misclassifying the second kind as the first. 

Risk intelligence is the real-world insight employees have about where AI may fail before those risks are visible in formal data or outcomes. 


Consider what a thoughtful AI skeptic in a clinical setting often knows that an AI governance committee does not. They are the voice of process and experience. They can see outlying cases in their patient population that are not represented in the training data. They know workflow realities that make the AI's outputs impractical or risky to act on in the time available. They may notice how the tool's suggestions shift when patient demographics change. They hear the questions their colleagues are quietly asking but not raising formally.


These concerns reflect a deep understanding of how the technology is used in practice, often exceeding that of those making the adoption decision. 


How to Empower AI Skeptics 

Most healthcare AI governance processes do include clinical and operational stakeholders. Skeptics are often present, and their concerns are documented as part of the process. 


The distinction that matters, however, is not participation, but influence. In many cases, governance is structured to gather input, not to act on it. Concerns are surfaced and recorded, but there is no clear mechanism for those concerns to alter the direction of a deployment. 


Effective governance requires more than consultation. There should be a defined pathway for concerns to shape outcomes. This could include formal escalation processes, the ability to pause or revisit decisions, and clear criteria for when additional review is required. Without these mechanisms, dissent becomes informational rather than actionable. 



When input does not translate into impact, participation loses credibility. Over time, individuals who consistently raise concerns see little change, disengage from the process, and are often labeled as resistant. In reality, the issue is not resistance, but a governance structure that does not fully utilize the insight it is designed to capture. 


Four Structural Changes for AI Governance

Redesigning AI governance to genuinely incorporate skeptic perspectives requires four structural changes: 


  1. Formal dissent rights and defined escalation paths. Governance should include a mechanism for a named participant to formally register a concern significant enough to warrant an independent review before deployment. This is not a veto or a complete stop; it is a structured pause. The concern must be addressed, not just acknowledged.
  2. A standing AI red team function. Every healthcare organization deploying AI at scale should have a named group with a formal mandate to identify potential issues before, during, and after deployment. This group should include clinical skeptics, compliance professionals, and operational leaders who have expressed concerns about specific AI tools. The role here is to make AI safer by stress-testing it before it reaches patients. 
  3. Operational representation in AI governance with domain authority. Governance committees should not be composed primarily of leaders and technologists. They should include people who are close to the actual use of AI tools and can recognize when something behaves unexpectedly. These representatives should have sufficient domain authority that their observations carry weight. 
  4. Post-deployment feedback loops with real consequences. Skeptics who were overruled at deployment should be re-engaged after go-live to see whether their concerns materialized. Their feedback should go directly into model performance reviews. When post-deployment findings validate pre-deployment concerns, document that finding. The governance process should learn from these patterns. 


Considerations for Boards and Oversight 

For board members and general counsel, there is another angle. When an AI-related harm event occurs in healthcare, such as a patient harmed by a biased decision support recommendation or a regulatory finding tied to AI-aided prior authorization, one of the first post-incident questions will be: who raised concerns, and what happened to those concerns?


Organizations with structured processes for surfacing and addressing dissenting views are better positioned. They create clear pathways to raise concerns, evaluate them, and document how decisions are made. This strengthens governance and provides greater clarity in legal and regulatory contexts compared to environments where concerns are acknowledged but not meaningfully addressed.


This is not theoretical. The same logic applies to product liability, informed consent, and quality management in every other domain of healthcare. 


When issues arise, organizations are better positioned when they can demonstrate that concerns were surfaced, evaluated, and reflected in decision-making, rather than simply managed or set aside.


A Practical First Step 

For leaders who want to begin this shift without overhauling their entire governance architecture, there is a simple starting point. 


Identify your three most persistent, most thoughtful AI skeptics. Not the loudest resisters, but the ones whose concerns are specific, substantiated, and domain-grounded. These are the people who keep returning to the same two or three questions about a particular AI tool and have not been satisfied with the answers they've received. 


What should you do? Schedule a direct conversation with each of them. Don't make it a change management conversation designed to address their concerns. Make it a listening conversation designed to understand them. Ask what they believe could go wrong, what evidence they've seen, and what governance structure would make them confident enough to engage constructively.


Then, before the next AI governance cycle, propose one structural change that reflects what you heard. That single step can begin to shift the dynamic, moving a skeptic from being managed to being meaningfully involved in governance. Over time, changes like this help organizations better incorporate frontline insight into decision-making.

 

This does not eliminate all risk, but it improves how risk is identified and addressed. In healthcare AI, it allows organizations to better distinguish between resistance to change and valid concerns about how risk is being introduced and managed. 

 

If your organization is navigating AI adoption in healthcare, it is worth taking a closer look at how accountability, risk, and governance are structured in practice. Kona Kai Corporation partners with leadership teams to assess these areas and help align AI initiatives with the realities of clinical and operational decision-making. 

