The Billion-Dollar Gamble: Hospitals and AI Investment
Hospitals are projected to invest billions of dollars in artificial intelligence (AI) technologies over the coming years, yet many are still grappling with a crucial question: how can they accurately measure the return on investment (ROI) from those expenditures? As health system leaders navigate this largely uncharted territory, they are experimenting with a range of assessment methods, from concrete metrics like patient outcomes to more subjective indicators such as physician job satisfaction.
Rethinking ROI in Healthcare AI
The challenge of measuring ROI is not lost on health system leaders. Kiran Mysore, chief data and analytics officer at Sutter Health, emphasizes that many pilot programs overlook ROI considerations from the outset. "It’s often a case of ‘let’s just solve the problem,’" he explains. This approach can lead to significant investments without a clear understanding of the value derived from the technology. To make informed decisions, hospital leaders must estimate a tool’s ROI before adoption, which can influence the scale of investment they are willing to make.
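To make that kind of pre-adoption estimate concrete, a minimal back-of-the-envelope sketch might look like the following. Every figure in it (license cost, minutes saved per encounter, loaded clinician cost) is a hypothetical placeholder for illustration, not a number reported by Sutter Health or any vendor.

```python
# Back-of-the-envelope ROI estimate for an AI documentation tool.
# Every input below is a hypothetical placeholder used purely for illustration.

def estimate_roi(annual_license_cost: float,
                 clinicians: int,
                 encounters_per_clinician: int,
                 minutes_saved_per_encounter: float,
                 clinician_cost_per_hour: float) -> float:
    """Simple ROI: (estimated annual benefit - annual cost) / annual cost."""
    hours_saved = (clinicians * encounters_per_clinician
                   * minutes_saved_per_encounter) / 60
    annual_benefit = hours_saved * clinician_cost_per_hour
    return (annual_benefit - annual_license_cost) / annual_license_cost

# Hypothetical scenario: 200 clinicians, ~4,000 encounters each per year,
# 2 minutes saved per encounter, $150/hour loaded clinician cost, $1.5M license.
roi = estimate_roi(1_500_000, 200, 4_000, 2, 150)
print(f"Estimated first-year ROI: {roi:.0%}")  # ~167% under these assumptions
```

Even a rough model like this forces leaders to state their assumptions up front, which is exactly the discipline Mysore says many pilots skip.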
Such an estimate is not always straightforward. Consider AI-powered ambient listening tools designed to ease the documentation burden on physicians: the time saved can be hard to measure, especially when a physician sees many patients in quick succession. Qualitative metrics like cognitive burden then become essential. Mysore points out that while these measures may not be scientific, they capture a physician’s sense of relief and improved ability to engage with patients.
The Importance of Qualitative Metrics
In the current healthcare landscape, where clinician burnout is rampant, qualitative metrics carry significant weight. Scott Arnold, CIO and chief of innovation at Tampa General Hospital, echoes this sentiment, noting that traditional metrics like staff attrition rates and job satisfaction can serve as indicators of an AI tool’s impact. "While I may not have a direct ROI figure for the CFO, I can point to a decrease in attrition rates, showing that happier staff are staying," he explains. Other technologies, by contrast, demand a more quantitative approach, such as tracking the average length of stay (LOS) after implementing AI-driven discharge processes.
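The arithmetic behind that kind of metric is simple; the sketch below compares average LOS before and after a hypothetical go-live date, using made-up placeholder values rather than real patient data.

```python
# Minimal sketch: compare average length of stay (LOS) before and after an
# AI-assisted discharge workflow goes live. All values are made-up placeholders,
# not real patient data.
from statistics import mean

los_before = [5.2, 4.8, 6.1, 5.5, 4.9]  # days per discharge, pre-rollout (hypothetical)
los_after = [4.7, 4.5, 5.3, 4.9, 4.4]   # days per discharge, post-rollout (hypothetical)

change = mean(los_after) - mean(los_before)
print(f"Average LOS before: {mean(los_before):.2f} days")
print(f"Average LOS after:  {mean(los_after):.2f} days")
print(f"Change:             {change:+.2f} days")
```

In practice, a health system would need far more than a handful of data points and would have to control for case mix and seasonality before attributing any change to the tool itself.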
Challenges in Scaling AI Solutions
Once a promising AI solution has been piloted successfully, scaling it across a health system presents its own set of challenges. Mysore highlights the need for tailored deployment strategies, because different specialties work differently: primary care physicians may document patient interactions differently than cardiologists, for example, and the rollout has to be customized accordingly. Without that tailoring, even the most effective AI tools risk stalling at the pilot phase.
Moreover, many health systems lack the necessary infrastructure to scale AI solutions efficiently. Tej Shah, managing director at Accenture, likens this issue to "building the lab but not the garage." While hospitals are investing in AI pilot projects, they often neglect the foundational infrastructure required for broader implementation. A strong digital core, characterized by cloud-based operations and structured, accessible data, is essential for AI tools to deliver reliable insights.
The Role of Data Governance
Data governance plays a critical role in the successful deployment of AI technologies. Hospitals must establish robust governance structures to ensure that their digital tools are secure and ethical. This involves not only protecting patient data but also ensuring that AI algorithms are free from bias and inefficiencies. Poor data quality can lead to flawed insights and missed opportunities for scaling AI solutions.
Training staff on how to effectively use AI tools is equally important. Shah emphasizes the need for healthcare providers to invest in their workforce, helping clinicians understand the capabilities and limitations of AI technologies. "It’s about helping them navigate the jagged frontier of AI," he notes, underscoring that the success of AI in healthcare hinges on both people and processes.
Bridging the Evidence Gap
A significant hurdle in scaling AI solutions is the lack of external evidence to guide decision-making. Meg Barron, managing director at the Peterson Health Technology Institute (PHTI), points out that hospitals often struggle to find reliable data on which AI solutions are most effective. PHTI aims to address this issue by publishing research that evaluates the clinical and economic impact of digital health tools, emphasizing the importance of prioritizing clinical effectiveness over user satisfaction.
Barron warns that not all evidence is created equal: bias can creep into efficacy studies, particularly when the vendors behind a product have a financial stake in the results. To combat this, PHTI systematically reviews evidence with a focus on real-world data and performance rather than relying solely on randomized controlled trials, which may not accurately reflect the effectiveness of rapidly evolving digital health technologies.
The Need for Real-World Evidence
Real-world evidence for healthcare AI tools is scarce, and providers often find it challenging to access. Many vendors rely on studies conducted in controlled environments, using simulated data rather than real patient information. For example, a recent analysis of over 500 studies on large language models in healthcare revealed that only 5% utilized real-world patient data. As healthcare providers scrutinize digital health vendors, it is crucial to investigate claims regarding cost savings and clinical outcomes.
While cost reduction may not be the primary goal of every AI tool, improving health outcomes often leads to lower spending. Barron advocates for digital health technology assessments that consider both clinical effectiveness and budget impact, particularly within the common one-to-three-year contract cycles in healthcare. PHTI’s research has shown that some digital solutions, such as virtual physical therapy, can save costs while delivering clinical results comparable to in-person care.
In contrast, other technologies, like digital diabetes management tools, have raised costs without demonstrating superior outcomes, despite vendor claims of cost-saving capabilities. As the healthcare industry accelerates its adoption of AI, a meticulous examination of real-world evidence will be essential in guiding providers’ decisions on which technologies to scale. Without this critical component, hospitals risk investing in tools that promise much but ultimately fail to deliver on their clinical and cost-saving potential.