
2024 was the year A.I. officially arrived in healthcare. Startups sprinted to build, health systems scrambled to evaluate, and investors poured billions into a flood of new companies promising to reduce burdens and improve care. But the adoption numbers tell a different story. While usage is rising, just 66 percent of clinicians surveyed by the American Medical Association report using A.I. in their daily workflows. That disconnect between investment and actual impact isn’t just a hiccup. It’s a signal. Without careful attention, we risk echoing one of the most frustrating periods in healthcare technology: the slow adoption of electronic health records (EHRs).
We’ve seen this movie before
When EHRs first emerged in the 1990s, they were supposed to be a game changer, finally offering clinicians easier access to patient information. But progress was painfully slow.
Many clinicians faced inconsistent software design, clunky interfaces and interoperability problems: a lack of standardization made sharing data between EHR systems tedious and disrupted workflows. These early EHRs tried to solve too many problems—clinical, billing and administrative—resulting in fragmented systems that performed poorly. As late as 2004, over 10 percent of healthcare facilities still didn’t plan to adopt one. It wasn’t until federal mandates pushed EHRs into hospitals in 2016 that the industry finally caught up and adoption accelerated. This story of delayed adoption and forced integration isn’t unique to EHRs. It’s a pattern that threatens to repeat itself with ambient A.I. The difference? This time, there’s no government mandate forcing adoption. The momentum is market-driven. And while that momentum is promising, it’s also risky if it isn’t backed by clear adoption and returns.
A.I. risks the same fate
Right now, we’re making the same mistakes with A.I. The excitement is there. The race is on. But speed without strategy is dangerous. If we don’t prioritize intuitive design, clinical usefulness and seamless interoperability, we risk building A.I. solutions that promise much but deliver little impact. Clinicians don’t want more dashboards. They don’t want more tools that require training, handholding or workarounds. They want tools that feel invisible—solutions that fade into the background and just work.
From start to finish, the clinician-patient interaction involves scheduling, reviewing a patient summary, surfacing key data during the visit, creating notes, staging orders, extracting codes, providing contextual nudges and ensuring the encounter is closed and everyone is paid. Piecemeal solutions that focus on only one of those tasks won’t cut it.
Imagine a future where all of these tasks happen seamlessly and invisibly. That is the vision we should be building—but it only works if the technology integrates effortlessly into the existing tech stack and can power the entire clinician-patient journey.
Interoperability isn’t optional—it’s survival
A.I. is not valuable in isolation. A solution that improves scheduling or documentation is helpful, but if it doesn’t plug into the broader system, it won’t move the needle at scale. Healthcare is made up of layers—EHRs, billing systems, compliance protocols—and A.I. needs to be interoperable across all of them to be worth the effort of adoption.
In time, A.I. will permeate nearly every healthcare technology product on the market. That’s why clinicians will need a true partner to ensure A.I. is implemented responsibly and holistically. An A.I. platform that supports daily healthcare workflows is far more important than narrow A.I. point solutions.
Which brings us to a critical point: adoption is the only metric that matters. Not demos. Not hype. Not press coverage or inflated valuations. If clinicians aren’t using your tool day in and day out, it’s not working. And it won’t survive.
In healthcare, demonstrating early value isn’t optional; it’s critical. Clinicians are overwhelmed, systems are stretched and there’s no patience for technology that overpromises and underdelivers. Start with high-impact, high-friction use cases like documentation—pain points clinicians feel every day—to show early value. Solve those, and you earn the trust that makes more sophisticated use cases viable.
Health systems and investors need to align around this reality and ask:
- Does this A.I. tool meaningfully reduce the time spent on administrative work?
- Does it integrate cleanly into clinical workflows?
- Does it actually improve the clinician and patient experience?
- Does it improve patient care?
If the answer is no, then we’re investing in complexity, not progress.
A.I. should assist, not replace
Let’s also be realistic about what A.I. can and should do. Large language models and ambient tools have huge potential, but they are assistive, not autonomous. They should help clinicians make decisions, not make decisions for them.
Already, there are troubling signs—like doctors using ChatGPT during patient visits, relying on generalized models not built for medical accuracy or context. That’s not innovation; that’s risk. A.I. should support clinical judgment, not substitute for it—at least not yet. To reach the point where A.I. can be more than an assistant, we have to do the responsible work first.
We are beyond the “move fast and break things” era, and A.I. policies are being developed to bring responsibility to innovation. But they must be carefully calibrated. Too little oversight risks harm; too much could stifle innovation and adoption.
The path forward
This isn’t about slowing down. It’s about building smart. If we want A.I. to transform healthcare, we must stop chasing the next shiny thing and start focusing on real outcomes. That means co-building with clinicians, prioritizing user experience, making interoperability non-negotiable and holding ourselves—and each other—accountable to adoption as the North Star. Adoption is not solely the responsibility of A.I. companies; health systems have to partner hand in hand to deploy these tools.
We have the technology. We have the momentum. What we need is the discipline to get it right. If we succeed, we can create technology that genuinely lightens the load for clinicians, enhances patient care and brings lasting improvements to the entire healthcare ecosystem. We can do better. And if we’re serious about change, we must.