System to Classify, Detect and Prevent Hallucinatory Error in Clinical Summarization

An Analysis of 5.4 Million AI-Generated Inpatient Clinical Summaries

PUBLISHED BY

Daniel Kreitzberg, PhD, Yukun Chen, PhD, Asohan Amarasingham, PhD, Ruben Amarasingham, MD

In today’s fast-paced healthcare environment, clinicians are overwhelmed by extensive patient charts, some as long as Hamlet (over 30,000 words). The sheer volume of information makes it difficult to focus on what matters most during patient encounters. This is where Pieces steps in with our innovative solution, the Pieces Working Summary, which distills a patient’s status and provides up-to-date key information such as reasons for admission, treatments, and upcoming care activities.

Ensuring the safety and reliability of these AI-generated summaries is a responsibility we at Pieces take very seriously. We have spent years developing the Pieces Classification Framework for Hallucinatory Error to tackle this issue head-on, and we are eager to share more in this technical white paper.

Pieces Working Summary

Pioneering AI Quality Oversight

Learn how we assess and address hallucinations in AI-generated clinical summaries, setting a new standard for AI quality oversight and risk mitigation in the healthcare industry

Rigorously Measuring Effectiveness

Discover how our statistical methodologies ensure the reliability of our AI tools and build trust in clinical settings

Creating a Vision for the Future

Explore our commitment to advancing AI safety in clinical care by collaborating with industry experts

While large language models (LLMs) are excellent at summarizing large amounts of data, they come with limitations, most notably the potential for “hallucinations,” where the AI generates inaccurate or misleading information. To address this, Pieces developed a risk classification framework that categorizes hallucinations by severity.

In this paper, we outline our pioneering classification framework, the “Pieces Framework,” the methodology behind it, and how we mitigate hallucination risks to ensure AI reliably supports healthcare providers.
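To make the idea of a severity-based classification concrete, the sketch below shows one way such a taxonomy might be represented in code. This is a minimal illustration in Python, assuming hypothetical severity tiers (NEGLIGIBLE through CRITICAL) and field names; it is not the Pieces Framework’s actual taxonomy, which is detailed in the full white paper.

```python
from dataclasses import dataclass
from enum import Enum


class HallucinationSeverity(Enum):
    """Hypothetical severity tiers for illustration only; the actual
    Pieces Framework defines its own categories."""
    NEGLIGIBLE = 1   # e.g., benign rephrasing with no clinical impact
    MINOR = 2        # e.g., imprecise but non-actionable detail
    MAJOR = 3        # e.g., incorrect clinical fact a reader might act on
    CRITICAL = 4     # e.g., error that could directly cause patient harm


@dataclass
class HallucinationFinding:
    """A single flagged span in a generated summary (illustrative schema)."""
    summary_id: str
    span: str                        # the offending text in the summary
    severity: HallucinationSeverity
    rationale: str                   # why the span was judged hallucinatory


def requires_physician_review(finding: HallucinationFinding) -> bool:
    """Example escalation rule: route MAJOR and above to human review."""
    return finding.severity.value >= HallucinationSeverity.MAJOR.value
```

A scheme like this lets escalation rules key off severity rather than ad hoc judgments, which is the property any severity framework needs to drive the review workflow described next.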

How It Works

Step 1: Generation

Pieces receives patient information and generates an initial Pieces Working Summary

Step 2: Detection

Pieces’ adversarial models review all Summaries and flag any at high risk of hallucination

Step 3: Escalation

Summaries that are not flagged proceed to the EHR; flagged Summaries are escalated to SafeRead

Step 4: Evaluation

Pieces’ board-certified physicians review the flagged Summaries in SafeRead

Step 5: Verification

After review, approved Summaries are sent to the EHR

Step 6: Validation

Physicians can edit Summaries in the EHR, and these edits feed back into Pieces AI for continuous improvement (see the pipeline sketch below)
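To make the flow concrete, here is a minimal sketch of how such a generate-detect-escalate loop could be orchestrated. All names here (WorkingSummary, run_pipeline, safe_read_queue, and the callables passed in) are hypothetical placeholders for illustration; the white paper does not describe Pieces’ internal APIs.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class WorkingSummary:
    patient_id: str
    text: str
    flagged: bool = False
    approved: bool = False


def run_pipeline(
    patient_record: dict,
    generate: Callable[[dict], str],                # Step 1: LLM summarizer
    detect_risk: Callable[[str], bool],             # Step 2: adversarial model
    safe_read_queue: List[WorkingSummary],          # Steps 3-5: physician review queue
    send_to_ehr: Callable[[WorkingSummary], None],  # EHR hand-off
) -> WorkingSummary:
    """Illustrative orchestration of the six-step flow above.

    Every callable and field here is a hypothetical placeholder,
    not a Pieces API.
    """
    summary = WorkingSummary(
        patient_id=patient_record["patient_id"],
        text=generate(patient_record),           # Step 1: Generation
    )
    summary.flagged = detect_risk(summary.text)  # Step 2: Detection
    if summary.flagged:
        safe_read_queue.append(summary)          # Step 3: Escalation to SafeRead
        # Steps 4-5: a board-certified physician reviews the Summary in
        # SafeRead (Evaluation); only approved Summaries are then sent to
        # the EHR (Verification).
    else:
        summary.approved = True
        send_to_ehr(summary)                     # Step 3: unflagged Summaries proceed to the EHR
    # Step 6 (Validation) occurs downstream: physician edits in the EHR are
    # collected as feedback to continuously improve the models.
    return summary
```

In a real deployment, detect_risk would wrap the adversarial models from Step 2, and the SafeRead queue would be a clinical review workload rather than an in-memory list; the sketch only shows how the routing decision separates the automated path from the human-review path.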

Explore our vision for the future of clinical AI that supports safe and effective care delivery. Learn more about our hallucination classification framework and how we’re advancing AI for healthcare.
