AI in Practice: A Payer Q&A Series

From Transparency to Trust: Inside AI Governance for Health Plans

Artificial intelligence is rapidly reshaping how health plans manage clinical operations, member services, provider interactions, and payment integrity. As AI-driven decisions increasingly influence high-impact areas—such as healthcare outcomes, customer support, and financial accuracy—health plans face growing pressure to ensure these solutions are transparent, explainable, and trustworthy. Enterprises rely on these principles to reduce risk, meet regulatory expectations, and drive meaningful adoption across teams and stakeholders.

In this edition of our Q&A series, Joseph Ancil, Chief AI Officer at healthplans.ai, addresses the core concepts of AI transparency, explainability, and trust, and clarifies why these distinctions matter for organizations responsible for protecting members, providers, and sensitive health information. He also explores the operational challenges health plans face, including the complexity of AI models, data quality issues, and the growing need for auditable, reviewable AI solutions.

As regulatory scrutiny increases and AI becomes more deeply embedded in decision-making processes, health plans must not only understand how AI solutions work but also be able to prove their fairness, reliability, and safety. The insights Joseph provides here offer a practical guide for health plan leaders who are building responsible AI programs that enhance performance while maintaining trust with members, providers, and regulators.

  1. How do you distinguish between AI transparency, explainability, and trust, and why do these concepts matter for health plans?

    Transparency is about being clear on what data the AI uses, how it operates, and how its behavior can be reviewed. Explainability goes a step further, ensuring the solution can provide simple, understandable reasons for its outputs and the sources behind them. Trust is ultimately about confidence: people must believe the AI is safe, accurate, and fair. These distinctions matter because you can be transparent without being easy to interpret, and you can explain results without necessarily earning trust.

  2. Why are transparency, explainability, and trust becoming essential for health plans as AI becomes more integrated into operations and decision-making?

    AI is now embedded in high-impact workflows such as healthcare decisions, customer support, and payment operations—areas where mistakes carry significant consequences. Transparency and explainability help reduce risk by making decisions reviewable and understandable. Trust is what enables real adoption across staff, leadership, and regulatory stakeholders, ensuring AI can be used confidently and responsibly.

  3. What challenges make it difficult to build transparent and explainable AI solutions?

    Many AI models learn complex patterns that are difficult to translate into plain language. Even when the technology is capable, poor-quality, fragmented, or incomplete data can make it hard to explain outcomes consistently. These issues create real obstacles for teams trying to make AI outputs both reviewable and understandable.

  4. What steps can health plans take to build employee confidence in new AI tools and workflows?

    Health plans can build employee confidence in new AI tools by clearly positioning them as decision support, not decision replacement, and by explicitly preserving human accountability. Effective solutions provide transparent, explainable recommendations, clearly show how outputs are generated and used, and allow safe human overrides. Confidence grows through role-based training, early pilots with trusted frontline teams, and visible feedback loops that demonstrate impact and improvement. Equally important are strong governance and compliance engagement, paired with clear communication that AI is designed to reduce rework and cognitive burden—not replace jobs—which is essential for sustained adoption and trust.

  5. Which AI governance practices are most effective in fostering trust across an organization?

    Effective governance includes written AI policies, formal approval steps before deployment, and clearly defined ownership for every solution and its results. Maintaining audit trails, testing models before release, and reviewing performance routinely with business and compliance partners all reinforce consistent, responsible use of AI.

  6. How do today’s regulatory and compliance requirements influence how transparency and explainability must be implemented?

    Regulations, and even individual payer contracts, often require organizations to demonstrate how decisions were made, particularly when dealing with Protected Health Information (PHI) or patient rights. This drives the need for documented data usage, traceable audit logs, and explanations that reviewers can readily understand.

  7. How should organizations balance advanced technical performance with the need for explainability?

    For high-risk decisions, simpler models are often the better choice because they’re easier to interpret and validate. More complex models can certainly be used, but only when paired with strong controls, monitoring, and clear explanation layers. If a model cannot provide meaningful insight into how it works, it should be limited to advisory roles where humans remain in control.

  8. Why is ongoing monitoring for bias and drift critical, and how can it help protect members and providers over time?

    Ongoing monitoring ensures AI remains fair and accurate as the underlying data changes. It enables early detection of issues, prevents harm to members, providers, and financial outcomes, and reinforces long-term trust.

  9. What guidance would you offer to executives who are beginning to prioritize trustworthy and transparent AI?

    Every AI solution should have a clearly defined purpose, a named owner, and simple metrics that demonstrate safety and effectiveness. Start with small, manageable deployments; insist on audit trails and human oversight; and scale only after results are consistent and reliable. This creates a strong foundation for long-term governance.

  10. How do you expect “trustworthy AI” to evolve over the next few years, and what will health plans need to demonstrate to maintain confidence from stakeholders?

    Trust in AI will evolve from assumption to proof, grounded in rigorous testing, clear explainability, and thorough documentation. As AI influences a growing number of healthcare decisions, organizations will be measured by their ability to demonstrate each solution’s safety, fairness, and reliability. Sustaining trust will be essential as AI becomes a foundational component of healthcare operations.

The Takeaway

For health plans, the path to responsible AI is not defined solely by technical performance—it is anchored in clarity, oversight, and trust. As Joseph Ancil highlights, organizations must adopt governance practices such as written AI policies, human oversight in sensitive decisions, audit logs, and ongoing performance reviews to ensure AI remains fair, safe, and aligned with organizational goals.

Building trust is an ongoing commitment. It requires transparent communication with employees, clear ownership for every AI solution, and simple, meaningful success metrics that demonstrate real value without compromising member or provider experience.

Joseph emphasizes that trust in AI will continue to evolve—from believing an AI solution works to being able to prove it works through rigorous testing, monitoring, explainability, and documentation.

As AI becomes a foundational component of operational efficiency, cost savings, and improved healthcare outcomes, health plans that embrace transparency and explainability will be better positioned to innovate confidently and responsibly. The principles outlined above serve as a blueprint for leaders looking to scale AI safely while earning and maintaining the trust of members, providers, employees, and regulators.

Joseph Ancil, Chief AI Officer at healthplans.ai

Joseph leads the company’s AI practice, overseeing the rollout of new AI solutions. In this role, he develops and implements AI strategies that drive organizational transformation and innovation, augmenting human efficiency through AI.

About the Q&A Series

The healthplans.ai Q&A Series is a thought-leadership publication designed to help payer executives and operational teams navigate the rapidly changing landscape of artificial intelligence. Each issue features conversations with industry experts and innovators who break down complex AI topics into clear, practical insights tailored for the unique challenges health plans face.

Our goal is simple: to demystify AI and give payers the clarity and confidence they need to adopt it responsibly and effectively. To that end, this series delivers practical guidance for payer leaders on automating workflows, strengthening governance and compliance, enhancing member experience, and reducing risk—insights they can apply immediately.
