The Viewpoint from Vida: where does AI fit into healthcare?

AI in telehealth must be governed by the principle of “do no harm” and must serve, not shape, care. AI improves care by reducing clinical workflow friction and streamlining data, among other benefits. But in healthcare, where do we have a responsibility to be sure humans are calling the shots?

With great power comes great responsibility: AI in telehealth

AI has arrived in healthcare the way many transformative technologies do: quickly, loudly, and with enormous promise. The question isn’t whether AI belongs in telehealth, but where it belongs, and where it absolutely does not.

“Humans have been through this before. The first steam-powered engine amazed people. And burned people,” says Kelly Rawlings, our VP of Program and Intervention Design. “Humans are resourceful. We have learned how to design better, using tools. It’s all faster now in our AI era. Amplified. And yet in healthcare, ‘do no harm’ continues to be a golden, guiding principle.”

At Vida, we think about AI the same way clinicians think about any tool: it must operate under the same guiding principles that define person-to-person medicine, and it must serve care, not shape it. 

Where AI meaningfully improves care

Used properly, AI can make care better in ways that are both practical and human-centered:

It can reduce friction in clinical workflows, help care teams communicate more effectively, and ultimately make complex information easier to navigate for providers and their patients. 

In practice, that can look as simple as prioritizing lab results, or summarizing research or documentation. Crucially, none of this replaces expertise; it creates space for it.

AI allows clinicians to adapt in real time. When used responsibly, it can meaningfully strengthen care and insights while keeping clinical judgment firmly at the center, but only if the right safeguards are in place. 

Where healthcare must draw a hard line

Despite its potential, AI poses real risks, especially in clinical decision-making.

AI systems can exaggerate, reflect bias, and produce outputs that are incomplete or wrong. And in healthcare, “almost right” isn’t good enough.

Take, for example, this recent Nature study. Researchers found that AI systems confidently described a completely fabricated medical condition as real, underscoring how easily these tools can generate plausible but incorrect clinical information.

Additionally, a recent state-approved pilot in Utah allows an AI system to participate in prescription renewals for chronic conditions, one of the first real-world examples of AI stepping into clinical decision-making. While the program includes guardrails like limited use cases, escalation to clinicians, and oversight requirements, it also raises important concerns among physicians about validation, accountability, and the risk of removing critical clinical touchpoints.

The bottom line is that AI should never make diagnoses, independently create treatment plans, generate clinical recommendations without clinician oversight, or communicate directly with patients without guardrails and review. 

When it comes to healthcare, human review isn’t optional; it’s foundational. 

How Vida applies AI in a clinically responsible way

To ensure the highest standards of patient safety, Vida operates on the principle that AI serves as a supportive tool rather than a final decision-maker in diagnosis and treatment.

“The current technology, regulatory environment, and the lack of validation don’t yet support diagnosis and treatment by AI,” says Vida Chief Medical Officer, Dr. Richard Frank. “But, we expect that eventually healthcare will evolve and AI will become a more critical component of the physician’s workflow.”

Rather than replacing care, AI supports clinicians and enhances the member experience.

Our AI capabilities are built on a secure, scalable foundation, with the appropriate agreements and controls in place to support HIPAA compliance. This allows us to responsibly centralize and analyze large volumes of patient data, turning it into real-time insights that clinicians can actually use, like weight loss trends, blood sugar levels, and behavioral health indicators. 

But insight alone isn’t the goal. Lasting metabolic control is.

Every output must support human-led decision making, not replace it. Our clinicians remain the final decision-makers. Every recommendation, message, and intervention is grounded in clinical expertise and reviewed accordingly.

As standards of care evolve, our models evolve with them, ensuring that our solutions remain current, evidence-based, and trustworthy, and holding us accountable to our clients. 

Healthcare has a responsibility to govern AI

Any healthcare organization that chooses to implement AI takes on the responsibility of governing it.

That responsibility shows up in very different ways:

  • Ensuring no Protected Health Information (PHI) is exposed in unsecured tools
  • Validating outputs against established clinical guidelines
  • Designing systems that are transparent and understandable for clinicians
  • Identifying and mitigating bias, including weight stigma, blind spots around Social Determinants of Health (SDOH), and other cultural assumptions

As AI systems evolve, so must the standards that guide them.

Why clinical judgment must remain at the center

The most important safeguard is also the simplest: 

Clinicians must make the final decision. 

At Vida, AI is designed to support providers, not replace them. Every insight and recommendation ultimately routes back to a human who understands the full context of a patient’s life, beyond data plugged into a machine.

Patients aren’t static; bodies, circumstances, and motivations change. Data alone, no matter how sophisticated, can become noise without clinically led interpretation to turn that information into wraparound care. 

AI will continue to shape the future of telehealth, but progress in healthcare isn’t measured by how quickly we adopt new tools; it’s measured by how responsibly we use them to serve our patients.

Resources