AI-Powered Medical Diagnostics

Blueberry developed an on-device AI system for real-time throat abnormality detection, built on a mobile app we rewrote from the ground up for reliability and scale.

Healthcare

The Challenge

Our client had an innovative and disruptive idea: create a low-cost laryngoscope (a camera for viewing a patient's throat) by cleverly combining a custom-moulded bracket, an inexpensive UVC (USB Video Class) camera, and a standard mobile phone as the display. The concept was brilliant, with the potential to undercut the market on price by a huge factor.

They had engaged another software company to build the mobile application to power the device. However, the software was unreliable. The live camera feed—the product's core function—would crash at random intervals.

The breaking point came during a product demonstration to prospective customers. With the device in use on a patient, the feed crashed, rendering the product useless and untrustworthy. It was clear the existing software was not fit for purpose, and the client came to us for a solution. 

Phase 1: Delivering a Stable, Commercial-Grade Application

After reviewing the initial, failing codebase, we advised a different path. Rather than attempting to patch a fundamentally flawed application, the most effective route was to rewrite it from scratch. This would give us full control, a deeper understanding of the system, and the ability to build a truly robust and reliable product.

Drawing on extensive experience in mobile development, our team built the new application using Flutter, a modern, high-performance framework. To keep costs down, the client opted for Android 10 devices, an older OS version that introduced additional technical challenges. The primary hurdle was Android's inconsistent support for UVC cameras: existing plugins were often incomplete, outdated, or incompatible with this version.

Our solution was to take the most functional open-source plugin, correct its underlying errors, and build out the missing functionality ourselves. We quickly developed a proof of concept that demonstrated a stable, crash-free camera feed running for extended periods of time.

The Outcome of Phase 1: We delivered a reliable, commercial-grade application that our client could sell with confidence, rescuing the product and turning a critical failure into a market-ready success.

Phase 2: Proactive Innovation—On-Device AI for Early Detection

With a stable product delivered, the client asked a transformative question: "What if the device could not only see, but understand what it's seeing?" 

We initiated a research project to explore the feasibility of running on-device ("Edge AI") machine learning to flag potential throat abnormalities, such as cancers or other diseases, in real time.

1. Feasibility & Hardware Research: We first benchmarked a range of mobile devices—from low-cost Android phones to the latest Samsung Galaxy S25—to confirm that modern phones possessed the necessary processing power for real-time AI inference. Our tests proved it was not only possible but highly practical on modern hardware. 
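
The benchmark harness itself isn't shown in this write-up, but the core of such a test is simply timing repeated single-frame inferences. Below is a minimal desktop sketch using ONNX Runtime; the model filename and 640×640 input shape are illustrative assumptions, and the on-device equivalent would use a mobile runtime such as TensorFlow Lite.

```python
# Minimal latency benchmark: times single-frame inference for a YOLO-style
# detector exported to ONNX. Filename and input shape are illustrative.
import time

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("throat_detector.onnx")  # hypothetical export
input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 640, 640).astype(np.float32)  # dummy RGB frame

for _ in range(10):  # warm-up runs so caches and threads settle
    session.run(None, {input_name: frame})

runs = 100
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: frame})
elapsed = time.perf_counter() - start

avg_ms = 1000 * elapsed / runs
print(f"average inference: {avg_ms:.1f} ms (~{1000 / avg_ms:.0f} fps)")
```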

2. Model Selection & Data Science: Our team selected YOLO, a mature and efficient object detection model ideal for running on edge devices. The greatest challenge in any medical AI project is sourcing and preparing high-quality data. Our process involved: 

  • Sourcing Datasets: We scoured medical research for suitable public datasets of laryngoscope images, filtering heavily to find those captured with the correct type of camera (white light) and without watermarks or other artifacts. 
  • Data Labelling: Using professional annotation tools like Label Studio, we meticulously drew "bounding boxes" around problem areas on hundreds of images to teach the model what to look for. 
  • Dataset Augmentation: We used the Roboflow platform to intelligently augment our dataset, applying rotations and other transformations to multiply our 400 initial images into a more robust training set of nearly 1,400 (a sketch of the resulting training pipeline follows this list).
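
To give a flavour of that pipeline, here is a hedged sketch of how an augmented dataset can be pulled from Roboflow and used to fine-tune a YOLO model. The API key, workspace, project name, and training settings are placeholders, and we use the open-source Ultralytics tooling as a stand-in for whichever YOLO variant ends up in production.

```python
# Illustrative training pipeline: download the augmented dataset from Roboflow
# and fine-tune a small YOLO model on it. All names and keys are placeholders.
from roboflow import Roboflow
from ultralytics import YOLO

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("laryngoscope-images")
dataset = project.version(1).download("yolov8")  # YOLO-format export

model = YOLO("yolov8n.pt")  # small pretrained model, suited to edge devices
model.train(
    data=f"{dataset.location}/data.yaml",  # classes and train/val/test splits
    epochs=100,
    imgsz=640,
)
```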

Current Results & Next Steps

Our initial prototype model, focused on identifying a single type of abnormality (granuloma), is already achieving ~70% accuracy on test data. This confirms the fundamental approach is sound. 
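
For an object detector, a figure like this is normally measured as mean average precision (mAP) on a held-out test split. A minimal evaluation sketch, again assuming the Ultralytics tooling and hypothetical file paths:

```python
# Evaluate the trained detector on the held-out test split.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # weights from the training run
metrics = model.val(data="laryngoscope-images-1/data.yaml", split="test")
print(f"mAP@0.5: {metrics.box.map50:.2f}")
```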

Our roadmap is clear: 

  • Finalise On-Device Integration: Complete the engineering work to feed the live UVC camera frames directly to the AI inference engine on the phone (a prototype of this frame loop is sketched after the list).
  • Expand the Model: Continue the data science work to train the model to recognise all eight major pathology classes in our dataset. 
  • Real-World Data Collection: Begin collecting data from the device itself to further fine-tune the model for our client's specific hardware, creating an even more accurate and powerful diagnostic tool. 
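
On the phone, that frame loop will sit between the Flutter camera layer and a mobile inference runtime, but its shape is easy to show on a desktop, where a UVC camera enumerates as an ordinary video device. A hedged prototype, with the weights file again hypothetical:

```python
# Prototype frame loop: read frames from a UVC camera and run detection on
# each one, drawing bounding boxes over the live feed.
import cv2
from ultralytics import YOLO

model = YOLO("best.pt")  # trained detector from the earlier sketch
cap = cv2.VideoCapture(0)  # UVC cameras appear as standard video devices

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)  # per-frame inference
    annotated = results[0].plot()  # overlay detected regions
    cv2.imshow("laryngoscope feed", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```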

This project perfectly illustrates our philosophy: we begin by solving the immediate, critical business problem, and then act as a proactive partner to explore and build what's next. 
