2025-05-20

Stanford AI professor offers some cautionary advice for deploying the technology
So many in healthcare today have such high hopes for artificial intelligence to do everything from defeating clinician burnout to advancing medical research. The challenge with AI in healthcare is not excitement – there's plenty of that. Rather, the key challenge for IT leaders is clarity – seeing AI for what it can actually accomplish. Where is AI actually delivering ROI? What's safe to deploy now? And how do you manage risk, governance and long-term value without getting swept up in the hype?

Dr. Justin Norden is a Stanford professor and cofounder and CEO of Qualified Health, which sells infrastructure for generative AI in healthcare designed to deliver the technology, training and support hospitals and health systems need to get started with generative AI and scale it safely across their organizations.

Emphasis on the word caution

He cautions IT leaders at healthcare provider organizations to weigh ROI, safety issues, risk, governance and long-term value when approaching AI systems and tools. And he puts emphasis on the word caution.

"We're now nearly two-and-a-half years into ChatGPT's release, and while the buzz around generative AI in healthcare is louder than ever, it's time to ask, 'Where's the real ROI?'" Norden said. "Despite all the excitement, even the most widely adopted use case – ambient documentation – hasn't delivered consistent financial returns across provider groups. Some physicians love it, but adoption is still limited and uneven.

"Meanwhile, dramatic headlines about AI outperforming doctors on diagnostic tasks grab attention, but these miss the point: These clinical use cases are not what will define AI's impact near term in healthcare," he continued.

The real value of AI today is in healthcare operations, he contended.

"That's where we're starting to see ROI," he said. "AI now can unlock insights from unstructured data – the bulk of what healthcare produces. Tasks like quality reporting, improving revenue cycle workflows, and simplifying patient outreach may not sound flashy, but they're essential and time-consuming.

"AI is finally capable of automating what's been buried in PDFs, faxes and clinical notes," he continued. "These behind-the-scenes improvements may look small individually, but together they represent significant, scalable impact."

Ideas from the people closest to the work

Many in health IT and healthcare overall still are searching for a single "killer app" to change everything. But real transformation will come from hundreds – or thousands – of small, practical use cases embedded into everyday work, and the best ideas won't come from the top down but from the people closest to the work, Norden said.

"Doctors and nurses already are using AI – just unofficially, on personal devices or through workarounds," he noted. "That tells us two things: there's demand, and there's risk. The path forward is clear – we need to bring AI above the table. Make it secure, HIPAA-compliant and accessible so we can turn this quiet revolution into lasting, system-wide progress."

On another front, when it comes to safely deploying AI in healthcare today, Norden said it is critical to begin by acknowledging what is not safe – because that's where many organizations are still exposed, whether they know it or not.
One of the most pressing concerns is staff using personal AI accounts to process sensitive patient information – and it is more common than many realize.

"Talk to leaders across the country and you'll hear everything from, 'We know it's happening, but we're looking the other way' to 'We'll deal with it if it becomes a problem,'" he said. "Some even operate under a quiet 'Don't ask, don't tell' policy. But none of these are viable long-term strategies. We've seen this before with Meta's pixel and Google ad tracking, where once privacy violations came to light, lawsuits followed. The same is likely with AI. The legal and reputational risks are too big to ignore.

"Another area that demands caution is public-facing AI chat tools," he continued. "While the demos can be impressive, these systems are vulnerable to 'jailbreaking.' We've seen AI tools exploited to produce inappropriate, harmful or even dangerous content, often completely bypassing the systems' intended safeguards. In a clinical setting, that could result in anything from misinformation to data leaks or even harmful patient interactions."

Watch out for the open internet

The risk grows exponentially when these models are connected to the open internet, he added.

"Bad actors can plant malicious content online designed to influence AI behavior, creating serious cybersecurity threats," he said. "At best, this leads to a PR headache. At worst, it can result in data breaches or ransomware attacks that bring entire systems to a halt.

"The safer path forward starts with internal deployment of AI in secure, HIPAA-compliant environments with human-in-the-loop systems," he stated. "This makes sure data is being used safely, and humans are still signing off on actions being taken by AI systems. Early AI applications should focus on operational areas like streamlining admin tasks, improving workflows and reducing friction – areas that offer ROI without introducing clinical risk. The goal isn't to avoid AI – it's to use it wisely, building value and trust with every step."

Norden also offers cautionary advice when it comes to managing risk, governance and long-term value with AI without getting swept up in the hype, calling this one of the biggest challenges in healthcare today.

"There's a natural hesitation, and rightfully so, given how much is still unknown," he said. "That's why many health systems are stuck in cautious mode – launching pilots, dabbling in internal side projects and experimenting without a clear path forward.

"The shift from 'This looks promising' to 'This is safe and scalable' starts with clear leadership and direction on what tools to use, and how we should measure success before we start," he continued. "What's difficult is, without clear direction for our workforce now, people are turning to outside public tools and under-the-table use. We must start with HIPAA-compliant options for where our workforce should access these tools."

Much more than safety

But safety alone isn't enough.

"Too often, safer tools are also clunkier or less helpful, which pushes people right back to public options," he explained. "We need to make internal tools both secure and genuinely more valuable. That means embedding AI into real workflows and enriching it with internal data, so it's not just compliant, but also indispensable."
"As usage expands across the organization, governance must scale, too," he continued. "That includes tracking usage, auditing interactions and educating users – not to police them, but to guide safe, responsible use. If someone tries to use AI for high-risk tasks like medication dosing, we need systems in place to catch and correct that behavior early."

Ultimately, long-term value comes from building a repeatable, scalable process, he added.

"That means structured pilots, performance thresholds, and infrastructure that helps governance teams track and grow what works," he said. "With strong tools, smart policies and clear priorities from leadership, we can move past experimentation and into sustainable, system-wide transformation."

Avoiding common missteps

So how can hospitals and health systems avoid common missteps that stall progress? In a variety of ways, Norden said.

"Right now, when we talk to healthcare leaders, we see most AI strategies falling into one of four buckets – waiting for the EHR vendor to roll something out, banning tools like ChatGPT outright, buying a point system like ambient documentation, or trying to build everything in-house," Norden observed. "All of these approaches have some logic behind them, but on their own they often miss the bigger picture.

"What's really needed is a clear, shared vision across the organization that AI is coming, it's going to change the way healthcare operates, and we need to start preparing for that future now," he continued. "Without that buy-in, teams end up working in silos, unsure of where to focus, and progress gets stuck."

Another common pitfall is trying to do too much at once.

"We've all seen systems chasing dozens of pilots with different vendors, spreading their time and resources thin," Norden said. "The result is not enough traction in any one place to make a meaningful impact. What works better is picking a few high-priority areas where AI can make a clear, immediate difference and investing in those with real support and leadership backing.

"It's about making fewer, smarter bets and giving those teams the tools, data and clarity they need to succeed," he added. "That focused approach builds momentum and makes it easier to scale what's working."

Don't forget people

And finally, Norden said one cannot talk about avoiding missteps without talking about people.

"Most of our staff already are using AI tools in their personal lives, and increasingly they're bringing them into work," he noted. "If we ignore that or try to shut it down, we're missing a huge opportunity. What we need to do is lean into it by giving them safe, secure tools to experiment with and teaching them how to use AI effectively and responsibly.

"Education and training can't be a one-off; it needs to be an ongoing part of how we support our teams," he concluded. "The future of AI in healthcare isn't just about the technology – it's about empowering our people to use it well. When leadership brings everyone along, that's when real transformation happens."