Pioneers Blog Post

Can AI in the laboratory be trusted? What years of peer-reviewed evidence shows.

March 2026

Author

  • Tiffany Chen

I’m often asked a version of the same question when speaking with pathologists and laboratory professionals:

“Is AI really worth it? And more importantly, can we trust it in our laboratory?”

It’s a fair question. 

Clinical labs are built on consistency, reproducibility, and evidence. New technologies are not adopted simply because they are innovative. They are adopted when data demonstrates that they can support real workflows and address real diagnostic challenges.

Over the past several years, I’ve worked alongside teams developing and validating AI-assisted digital pathology solutions across both veterinary and human medicine. What stands out most isn’t a single breakthrough, but the growing body of peer-reviewed research showing how AI can be validated and successfully integrated into routine laboratory settings.

We’re quickly approaching the point where the question is no longer whether AI belongs in the lab, but whether labs that aren’t using AI are leaving quality and efficiency on the table. 


Where AI development really began

Many people are surprised to learn that a significant amount of Techcyte’s early AI development happened in veterinary diagnostics. These studies evaluated AI-assisted workflows for fecal parasite detection, blood smear analysis, and urine sediment evaluation across canine, feline, and equine samples.

Over this period, multiple peer-reviewed veterinary studies demonstrated agreement rates approaching or exceeding 95% across multiple parasite classes. Just as important, these studies exposed algorithms to significant variation in staining quality, morphology, and specimen preparation, all challenges that mirror those seen every day in human clinical laboratories.

That experience mattered. It helped establish the data pipelines, validation frameworks, and quality processes that later shaped human clinical applications.

What the human studies tell us

As digital pathology adoption expanded, our research collaborations with institutions such as ARUP Laboratories, CorePlusPR, Mayo Clinic, Quest Diagnostics, the University of Bern, and other partners began evaluating AI-assisted workflows in human clinical settings.

In clinical parasitology, studies published in the Journal of Clinical Microbiology reported positive and negative agreement above 98% for AI-assisted detection of intestinal protozoa. A prospective study published in Diagnostics demonstrated 98.1% overall agreement and a Cohen’s kappa of 0.915 between AI-assisted workflows and manual microscopy using routine clinical samples.
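For readers less familiar with Cohen's kappa, the sketch below shows how overall agreement and kappa are derived from a 2x2 confusion matrix comparing two review methods. The counts used are invented purely for illustration; they are not data from the cited studies.

```python
# Illustrative sketch: overall agreement and Cohen's kappa from a
# 2x2 confusion matrix comparing AI-assisted review against manual
# microscopy. All counts below are hypothetical.

def agreement_and_kappa(tp, fn, fp, tn):
    """Return (overall agreement, Cohen's kappa) for a 2x2 table."""
    n = tp + fn + fp + tn
    # Observed agreement: fraction of samples where both methods agree.
    p_o = (tp + tn) / n
    # Expected chance agreement, computed from the marginal totals.
    p_yes = ((tp + fn) / n) * ((tp + fp) / n)
    p_no = ((fp + tn) / n) * ((fn + tn) / n)
    p_e = p_yes + p_no
    # Kappa corrects observed agreement for agreement expected by chance.
    return p_o, (p_o - p_e) / (1 - p_e)

# Hypothetical counts: 120 positives both methods agree on, 3 found by
# one method only, 2 by the other only, 875 negatives both agree on.
p_o, kappa = agreement_and_kappa(tp=120, fn=3, fp=2, tn=875)
print(f"overall agreement = {p_o:.3f}, kappa = {kappa:.3f}")
```

The point of kappa is that a high raw agreement can be misleading when most samples are negative; kappa discounts the agreement two methods would reach by chance alone, which is why it is reported alongside percent agreement in validation studies.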

One finding consistently stood out: AI-assisted review identified additional organisms that were initially missed during manual examination. This wasn’t because experts were wrong; rather, AI provided a consistent, expanded second look across every field of view.

Beyond microbiology, our AI-assisted cervical cytology screening workflow was clinically evaluated by a laboratory across 1,400 whole slide images, achieving 97% accuracy and 99% specificity. Based on these findings, the laboratory implemented the screening system to support 100% quality-control review.

These are not hypothetical studies or controlled lab demonstrations built on curated datasets. They reflect real laboratory environments and routine clinical workflows.

What years of research have taught me

When people ask whether AI can be trusted, I don’t point to a single algorithm or a single dataset. I point to the pattern that emerges across independent studies.

Across veterinary and human diagnostics, the evidence consistently demonstrates:

  • Strong agreement between AI-assisted workflows and traditional microscopy
  • Successful validation across diverse specimen types, including fecal samples, blood smears, cytology slides, and transplant pathology
  • AI functioning as a valuable tool to support quality control, triage, and efficiency within existing laboratory workflows

Trust in AI doesn’t come from replacing expertise. It comes from building systems that reinforce consistency, reduce variability, and provide additional layers of review.


Why the question is changing

Earlier in my career, the question was often whether AI belonged in the laboratory at all.

Today, the conversation is shifting. Laboratories are no longer asking if AI will play a role in digital pathology; they are asking which solutions are grounded in real scientific evidence.

What gives me confidence is seeing years of iterative development across species, specimen types, and clinical environments. From veterinary parasitology to human cytology and microbiology, the growing publication record shows that AI development is becoming more mature, more collaborative, and more deeply integrated into laboratory workflows.

AI is not a shortcut. It is the result of sustained research, careful validation, and collaboration between clinicians, laboratorians, and engineers.

So when I’m asked whether laboratories can trust AI, my answer is simple: look at the evidence.


The following peer-reviewed studies reflect independent evaluations of AI-assisted digital pathology workflows developed using the Techcyte platform:

In addition to peer-reviewed publications, Techcyte’s AI development has been evaluated in more than 25 scientific abstracts presented across veterinary and human clinical meetings.
