Our AI Safety and Security Process

We use a robust system of human and machine review to ensure AI-generated content is trustworthy

Introduction

In today’s fast-paced healthcare environment, clinicians are overwhelmed by extensive patient charts, some as long as Hamlet (over 30,000 words). The sheer volume of information makes it challenging to focus on the most critical and pertinent details during patient encounters. Pieces distills and summarizes this information, filling in the gaps for physicians and providing more accurate data.

Ensuring the safety and reliability of these AI-generated summaries is a huge responsibility that we at Pieces take very seriously. We’ve been diligently developing our Pieces Classification Framework for Hallucinatory Error for years to tackle this issue head-on, and are eager to share more in this technical white paper.

You'll learn

Pioneering AI Quality Oversight

Learn how we assess and address hallucinations in AI-generated clinical summaries, setting a new standard in the healthcare industry for AI quality oversight and risk mitigation

Rigorously Measuring Effectiveness

Discover how our statistical methodologies ensure the reliability of our AI tools and build trust in clinical settings

Creating a Vision for the Future

Explore our commitment to advancing AI safety in clinical care by collaborating with industry experts

Pieces Classification Framework for Hallucinatory Error

While Large Language Models (LLMs) are excellent at summarizing large amounts of data, they come with limitations – most notably, the potential for “hallucinations,” where the AI can generate inaccurate or misleading information. To address this, Pieces developed a risk classification framework to categorize hallucinations by severity.

In this paper, we outline our pioneering classification framework, the “Pieces Framework,” the methodology behind it, and how we mitigate hallucination risks to ensure AI reliably supports healthcare providers.
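
To give a concrete feel for severity-based classification, here is a minimal sketch in Python. The tier names, data fields, and triage rule are illustrative assumptions made for this page; they are not the categories the white paper defines.

```python
from dataclasses import dataclass
from enum import Enum


class HallucinationSeverity(Enum):
    # Hypothetical tiers for illustration only; the real taxonomy is
    # defined in the technical white paper.
    NONE = 0          # faithful to the source chart
    MINOR = 1         # cosmetic error with no clinical impact
    SIGNIFICANT = 2   # misleading, but unlikely to change care
    CRITICAL = 3      # could plausibly alter a clinical decision


@dataclass
class ReviewFinding:
    """One reviewer judgment about a single summary statement."""
    statement: str
    severity: HallucinationSeverity
    rationale: str


def overall_severity(findings: list[ReviewFinding]) -> HallucinationSeverity:
    """Rate a summary by its worst finding (a common triage convention,
    assumed here rather than taken from the Pieces Framework)."""
    if not findings:
        return HallucinationSeverity.NONE
    return max(findings, key=lambda f: f.severity.value).severity
```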

SafeRead: Our patented clinical document review system

SafeRead is a patented platform designed for Pieces clinicians to review AI-generated content. Our approach combines adversarial AI models with a scalable human-in-the-loop process to monitor for bias, hallucinations, and omissions, and to assess overall quality. A simplified code sketch of this workflow appears after the steps below.

How it works

Step 1: Generation

Pieces receives patient information and generates an initial Pieces Working Summary

Step 2: Detection

Pieces adversarial models review all summaries and flag any at potential risk of hallucination

Step 3: Escalation

Summaries that are not flagged proceed directly to the EHR; flagged summaries are escalated to SafeRead

Step 4: Evaluation

Pieces board-certified physicians review flagged summaries in SafeRead

Step 5: Verification

After physician review, approved summaries are sent to the EHR

Step 6: Validation

Physicians can edit summaries in the EHR, and those edits feed back into Pieces AI for continuous improvement
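
To make the six steps concrete, here is a minimal sketch of the flow in Python. Everything in it is illustrative: the function names, the risk threshold, the stubbed adversarial score, and the handling of rejected summaries are assumptions for this page, not the production SafeRead implementation.

```python
import random
from dataclasses import dataclass

RISK_THRESHOLD = 0.5  # hypothetical operating point, not a published value


@dataclass
class Summary:
    patient_id: str
    text: str


def generate_working_summary(chart_text: str, patient_id: str) -> Summary:
    """Step 1: Generation. The LLM call is stubbed out in this sketch."""
    return Summary(patient_id, f"Working summary ({len(chart_text.split())}-word chart)")


def hallucination_risk(summary: Summary) -> float:
    """Step 2: Detection. An adversarial model would score the summary
    against the source chart; a random score stands in for it here."""
    return random.random()


def physician_review(summary: Summary) -> bool:
    """Step 4: Evaluation. Placeholder for SafeRead review by a
    board-certified physician; this stub always approves."""
    return True


def send_to_ehr(summary: Summary) -> str:
    """Steps 3 and 5: delivery to the EHR, where physicians can still
    edit the summary (Step 6: Validation) and those edits feed back
    into model improvement."""
    return f"delivered to EHR for patient {summary.patient_id}"


def process_chart(chart_text: str, patient_id: str) -> str:
    summary = generate_working_summary(chart_text, patient_id)  # Step 1
    if hallucination_risk(summary) < RISK_THRESHOLD:            # Step 2
        return send_to_ehr(summary)                             # Step 3: unflagged
    if physician_review(summary):                               # Step 4: escalated
        return send_to_ehr(summary)                             # Step 5: verified
    # The page doesn't say what happens to rejected summaries; withholding
    # them is an assumption of this sketch.
    return "withheld pending correction"


if __name__ == "__main__":
    print(process_chart("Patient presented with chest pain and dyspnea.", "pt-001"))
```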

Download the Technical White Paper

Explore our vision for the future of clinical AI that supports safe and effective care delivery. Learn more about our hallucination classification framework and how we’re advancing AI for healthcare.

Ready to learn more?

Our flexible healthcare AI solution can be tailored to your needs.