Can Agentic AI Deliver? Rethinking Innovation, ROI, and Risk in Managed Care
Andy De explains why generative AI has largely stalled in health care and argues that agentic AI—focused on automating workflows and driving action—offers a more practical path to measurable ROI, improved clinician productivity, and better patient outcomes.
Key Takeaways
- Generative AI in health care has been hindered by limitations such as hallucinations, bias, and a lack of scalable use cases, with most investments failing to move beyond pilot stages.
- Agentic AI builds on these technologies by enabling action—automating repetitive workflows like care coordination, patient engagement, and monitoring—while maintaining necessary human oversight.
- As adoption grows, agentic AI has the potential to improve clinical productivity, reduce costs, and enhance patient outcomes, though questions around workforce impact, governance, and ethics remain unresolved.
Andy De, CMO: My name is Andy De. I’m the chief marketing officer at Lightbeam Health Solutions—its first chief marketing officer. I have been here for almost a year.
Previously, I worked with very large enterprise software leaders like SAP, GE Healthcare, and GM. I also had experience at Tableau and Alteryx, building their health care and life sciences businesses. Then I became chief marketing officer at MedeAnalytics, which is primarily a revenue cycle management vendor based in Dallas. I had a brief stint at Verato, which is a patient identity management company.
Now, I’ve been at Lightbeam for almost a year, focused squarely on value-based care and population health management. The common thread is a lot of analytics, and now artificial intelligence. I write extensively about artificial intelligence, have a readership across 47 countries, and almost 13 500 followers on LinkedIn.
What lessons should managed care organizations take from previous failures of generative artificial intelligence (AI) initiatives in health care, and how might agentic AI offer a more sustainable return on investment for health plans, ACOs, and provider networks?
De: There is this notion of a portfolio management approach to AI in health care and life sciences. There’s been so much hype and noise, especially for generative AI. For a lot of people, you ask them what AI is, and they say, “Oh, gen AI and large language models (LLMs).” And it’s not.
A very simple schematic really looks at all the modalities and technologies that fall under the artificial intelligence umbrella. It really started with machine learning, which is a statistical approach to recognizing patterns and doing predictive analytics, followed by deep learning, which is a subset of that, leveraging neural networks to deal with not only structured data, which machine learning does, but also unstructured data—kind of simulating the working of the human brain.
Then came natural language processing and natural language generation, which basically enable conversation in plain language. For instance, in a health care context, a nurse or a physician going to the patient’s bedside, and rather than having to pull up a chart and the analytics, would get a natural language summary of the patient’s condition, what the complications are, medication, et cetera.
Then came, obviously, generative AI based on large language models. It’s a bit challenging—this is probabilistic, not very high on accuracy—but obviously seeing tremendous traction just by virtue of ChatGPT, Claude, OpenAI, Perplexity, et cetera. I’ll delve into what the challenge is.
What is conventionally not thought of as AI in a health care context is also medical robotics, which actually preceded generative AI, deep learning, and all of that. Probably the pioneer was the da Vinci surgical robot from Intuitive Surgical, with the promise that you could operate on patients and significantly reduce the length of stay and recuperation. Robotic process automation briefly made its foray, but it, too, is now being subsumed by generative AI. Within medical robotics, you had technologies like machine vision, augmented reality, and virtual reality.
But the new kid on the block, and where I think a lot of unrealized promise will finally be delivered, is agentic AI. I wrote a piece arguing that agentic AI will really subsume generative AI and LLMs. Here's why: the generative AI and LLM model is pretty simple, a probabilistic query-response. Agentic AI, by contrast, can use LLMs or small language models, but it has additional capabilities, such as call routers and generators, to weed out many of those challenges and, most importantly, to lead to action rather than just a response to a query.
The most obvious and significant challenge is hallucination. Models are getting better, but they still hallucinate: query an LLM with variations of the same question and you will likely get different answers. AI bias is a huge challenge, and non-determinism, because the models are probabilistic, further reduces accuracy. There are also security issues and the ability to manipulate models, often called AI grooming; copyright infringement, with billions of dollars in lawsuits against the likes of OpenAI; and simply limited use cases and applications.
Interestingly enough, unlike with previous generations of technology, health care is actually at the forefront this time. Even in a gen AI context, while a recent MIT report says 95% of all gen AI investments have not gone anywhere, with only 5% proceeding beyond pilots, one of the biggest blockbuster applications in health care is ambient listening. If there is interest, I can tell you where ambient listening is going.
For all of these reasons, we are seeing this evolution happen as we speak. Venture capital money is pouring in, billions of dollars of it, with more and more CXOs embracing gen AI as a panacea for all health care issues. Unfortunately, beyond ambient listening and physician scribes, we have not seen many applications proliferate.
What we see instead is a lot of these startups, which started out as gen AI startups around a very specific use case, really scrambling to reinvent themselves with agentic AI.
Why is agentic AI so interesting? Agentic AI fundamentally starts to automate repetitive tasks, processes, and workflows. Great examples, including several where we at Lightbeam are driving value in a care management and population health management context, are a post-discharge care transition agent and our care management enrollment agent. We have a best-in-class application called Remote Patient Care Signal, which can actually call patients and enroll them into that remote patient application. We have patient referral agents and, of course, a care gap closure agent. For instance: have you done your annual wellness visit? If not, do you want to schedule that with me? This conversational voice agent can literally take a patient through the process.
To answer your question about how agentic AI will deliver value: this framework, which is getting a lot of traction, is something I crafted. It plots the level of human intervention needed on the x-axis, going from high to modest to none, against the anticipated value on the y-axis. Clearly, it's early stages. We are starting to see some evidence of value and ROI being computed, but that has been the biggest challenge with AI in general: organizations deploy pilots, which are often abandoned, and the value and ROI are never computed.
I think there are 5 stages. I’ve put a lot of thought and research into this.
The first stage is prescriptive actioning. What that means is moving from descriptive and predictive analytics to prescriptive analytics, which can also be called actionable decision support. For instance, identifying for a cohort what are the prescriptive care gaps to be closed for care managers so they don’t have to do that analysis themselves.
The second stage is the notion of AI assistants or copilots. “Copilot” is also a branded term from Microsoft, but it’s a good way of thinking about AI assistants. Assistants help with complex or repetitive process steps and tasks, especially as they lend themselves to self-service, both from a patient perspective as well as a clinician perspective—patient or member self-service, and clinician decision support for evidence-based guidelines and recommendations.
Stage 3 is happening as we speak. This is the notion of automated monitoring of tasks, workflows, and processes, with exception alerting and management, but with human oversight and intervention, given appropriate governance processes in place. For instance, conversational AI–enabled remote patient monitoring triggers an alert about a patient's A1C being out of range. AI alerts care team members to similar care gaps, which in turn triggers next steps to call the patient and intervene.
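The stage-3 pattern, automated monitoring that queues an alert for a human rather than acting on its own, can be sketched in a few lines. Every detail here is illustrative: the threshold is not clinical guidance, and the field names are made up.

```python
from dataclasses import dataclass
from typing import Optional

A1C_UPPER_LIMIT = 7.0   # illustrative threshold, not clinical guidance

@dataclass
class Alert:
    patient_id: str
    message: str
    requires_human_review: bool = True   # governed automation: a person decides

def monitor_a1c(patient_id: str, a1c: float) -> Optional[Alert]:
    """Return an alert for the care team if the reading is out of range."""
    if a1c > A1C_UPPER_LIMIT:
        return Alert(patient_id, f"A1C {a1c} above {A1C_UPPER_LIMIT}; call patient")
    return None

alert = monitor_a1c("P-001", 8.2)
if alert:
    print(alert.message)   # a care manager, not the agent, takes the next step
```

The `requires_human_review` flag is the whole point of this stage: the automation surfaces the exception, and the intervention stays with the care team.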
The fourth stage is where I think the holy grail is today: automating repetitive task workflows of business or clinical processes with human oversight and intervention, especially in health care. You can think of this as “lights out” in industries like retail or logistics, but in health care, because we are dealing with lives, this remains governed automation.
A good example is the notion of an AI radiologist. Today, an average midsize hospital might employ 4 or 5 radiologists at $400 000 each annually. Instead, you could have an agent with machine learning and machine vision that scans DICOM images and proactively identifies outliers—for instance, patients at risk of heart failure, heart attack, or stroke—and escalates those to a radiologist, who then applies domain expertise to validate and refer as needed. This improves outcomes and reduces costs significantly.
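Schematically, the triage pattern described above looks like the sketch below. It is entirely hypothetical: a real system would run a vision model over DICOM studies, whereas here each study simply arrives with a precomputed risk score, and only outliers are escalated for radiologist review.

```python
ESCALATION_THRESHOLD = 0.8   # illustrative cutoff, not a clinical standard

def triage(studies: list) -> list:
    """Return the studies whose model risk score warrants radiologist review."""
    return [s for s in studies if s["risk_score"] >= ESCALATION_THRESHOLD]

studies = [
    {"study_id": "S1", "risk_score": 0.31},   # routine; no escalation
    {"study_id": "S2", "risk_score": 0.92},   # outlier; flagged for human review
]
for s in triage(studies):
    print(f"Escalating {s['study_id']} to radiologist")
```

The economics De describes come from the shape of this filter: the model reads every study, but the expensive human expert sees only the escalated subset.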
The fifth stage is the ultimate holy grail: artificial general intelligence (AGI). This is where you have a system of multiple AI agents working in parallel or sequentially, capable of making decisions and performing tasks automatically without human intervention, adapting based on outcomes and learning.
The biggest example is self-driving cars, which raise ethical questions. For example, if a car must choose between hitting a grandmother or an infant, what does it do? That opens up a huge set of ethical and governance issues, which are not keeping pace with innovation.
It’s hard to imagine health care reaching AGI today, but that could change over the next 4 or 5 years. There is speculation—some experts say 3 years, 5 years, or 7 years. That remains bleeding edge. Where we see value today is in agentic AI as a framework to help organizations understand where to start and where to go.
As an autonomous operating system, how does an agentic AI model integrate with existing utilization management, care coordination, and population health workflows without adding administrative complexity or cost?
De: A great example is ambient listening. What ambient listening does today is, when you have a patient-physician interaction, it records everything, captures the data, presents it to the physician, and saves the physician the challenge of having to type everything. It saves time, and the physician can quickly edit and post it into the electronic health record.
Now, when you integrate agentic AI with that—think stage 2 to stage 4—the agentic AI not only monitors what comes out of that interaction but also determines what actions should follow. Does the patient need an appointment with a cardiologist? Medication? Follow-up with a nurse practitioner? Once you integrate agentic AI with ambient listening, you start to automate those processes with caregiver governance.
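The pipeline described here, ambient listening produces an encounter summary and an agent determines what should follow, can be sketched as below. The keyword rules and action names are all hypothetical; a production system would use a language model rather than phrase matching, and every proposed action would still require caregiver sign-off.

```python
# Hypothetical mapping from phrases in an encounter summary to follow-up actions.
FOLLOW_UP_RULES = {
    "chest pain": "refer to cardiologist",
    "new prescription": "send medication order to pharmacy",
    "elevated blood pressure": "schedule nurse practitioner follow-up",
}

def extract_follow_ups(encounter_summary: str) -> list:
    """Return proposed actions; each still requires caregiver approval."""
    text = encounter_summary.lower()
    return [action for phrase, action in FOLLOW_UP_RULES.items() if phrase in text]

summary = "Patient reports chest pain; elevated blood pressure noted."
for action in extract_follow_ups(summary):
    print(f"Proposed (pending caregiver approval): {action}")
```

This is the stage-2-to-stage-4 progression in miniature: the documentation step is already automated, and the agent layers proposed next steps on top of it under human governance.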
Fundamentally, the value of AI in health care exists because physicians, nurses, and clinicians are overworked. They want to maximize time with patients while remaining productive and avoiding burnout. That’s the biggest value: higher productivity, lower fatigue, and better outcomes because nothing falls through the cracks. From a population health perspective, savings come from avoidable admissions, readmissions, and reduced length of stay.
As value-based care continues to evolve, what role do you see agentic AI playing in helping managed care organizations meet quality benchmarks, manage risk, and remain compliant with shifting policy and reimbursement models?
De: This is a framework that was just published, showing where ambient listening is going. The core premise is that ambient listening integrated with agentic AI will drive value.
Today, ambient listening is used for physician-patient encounters by about 27% of health systems and 30% of physician practices. The next step is integrating it with agentic AI agents for scheduling, referrals, closing care gaps, care transitions, prior authorizations, and specialty-specific workflows—cardiology, neurology, gynecology, et cetera.
We’re also seeing adoption expand beyond physicians. Care managers and nurses are starting to use ambient listening. At Lightbeam, we use it in care management contexts as well.
We’ll also see this evolve into operating rooms, surgical documentation, rehabilitation, occupational therapy, telehealth, hybrid visits, home health, and remote patient monitoring. A company like Caresyntax is already doing this in surgical contexts.
Beyond that, agentic AI can extend into revenue cycle management—ensuring documentation is complete and reducing claim denials—and even into life sciences and medical devices, such as ventilators and infusion pumps, helping contextualize alarms and reduce false positives.
It will also enhance medical robotics for surgery, telepresence, and rehabilitation, improving automation, documentation, and communication, and ultimately driving better outcomes and productivity.
Despite the misses with generative AI, many of those use cases may gain new life when integrated with agentic AI, particularly by automating repetitive processes prone to human error.
That said, there are big questions about workforce impact. As we automate workflows and tasks, do employees move into higher-value roles, or are they displaced?
Finally, we are in an arms race—across nations and companies—to reach more advanced AI. But governance, oversight, and ethics are not keeping pace. That gap presents risks, especially if bad actors misuse the technology. That’s the big picture as I see it today.