The Blueberry Multiplexor
The Blueberry Multiplexor is a software module that lets your application send a prompt to any one of 37+ major LLMs, such as ChatGPT or Claude, just by changing a setting. It is paired with Blueberry's LLM Database, a constantly updated repository holding key information on every supported model, including performance metrics, features, and real-time pricing. Together, the flexible router and the rich database open up several powerful possibilities for your business.
The Problem with a Single AI Provider
You've integrated an AI model into your application, and it works well. But the landscape of Large Language Models (LLMs) is volatile. Prices change, new and better models are released every month, and relying on a single provider leaves you exposed. What happens when your provider doubles their prices? Or when a competitor's model becomes 50% faster?
Being locked into a single AI vendor is a significant strategic risk. Re-coding your entire application to switch providers is expensive, time-consuming, and puts your innovation on hold.
The Blueberry Multiplexor is the solution. It's a powerful software layer that decouples your application from any single AI model, giving you the freedom to choose, the power to optimise, and the flexibility to control your AI strategy and your budget.
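To make the decoupling idea concrete, here is a minimal sketch of what a provider-agnostic client could look like. Everything in it is an illustrative assumption, not Blueberry's actual API: the `MultiplexorClient` class, its parameters, the payload fields, and the model names are hypothetical, and the transport is stubbed out in place of a real HTTP call.

```python
# Hypothetical sketch only: MultiplexorClient, its payload shape, and the
# model names are assumptions for illustration, not the real Blueberry API.
from typing import Callable

class MultiplexorClient:
    """The target LLM is a runtime setting, so callers never hard-code a vendor."""

    def __init__(self, api_key: str, transport: Callable[[dict], str]):
        self.api_key = api_key
        self.transport = transport  # e.g. an HTTP POST to the Multiplexor endpoint

    def complete(self, prompt: str, model: str) -> str:
        # One payload shape for every provider; only `model` changes.
        payload = {"model": model, "prompt": prompt, "key": self.api_key}
        return self.transport(payload)

# A stub transport standing in for the real endpoint:
fake_transport = lambda p: f"[{p['model']}] echo: {p['prompt']}"
client = MultiplexorClient("demo-key", fake_transport)

# Swapping vendors is a parameter change, not a re-integration:
print(client.complete("Summarise this contract", model="gpt-4o"))
print(client.complete("Summarise this contract", model="claude-3-5-sonnet"))
```

The point of the sketch is the shape of the call site: the application code stays identical whichever provider answers the prompt.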
Use Cases
Model Selection Research & Benchmarking
The Challenge: Before starting a new AI project, how do you know which model will perform best for your specific needs? Choosing based on hype is a recipe for failure.
Our Solution: We provide a multi-LLM query interface powered by the Multiplexor. You can enter a sample prompt and instantly see how different leading models respond side-by-side. Our integrated database also allows you to filter models by price point, capability, or provider, so you can benchmark the options that are most relevant to you.
The Value: Make data-driven decisions from day one. Avoid costly mistakes by selecting the optimal model for your project's budget and performance requirements before you write a single line of integration code.
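The side-by-side comparison described above can be sketched in a few lines. This is illustrative only: `query_model` stands in for a real Multiplexor call, and the model names are assumptions.

```python
# Illustrative sketch: `query_model` is a placeholder for a real Multiplexor
# call, and the model names are assumptions.
def compare_models(prompt, models, query_model):
    """Run one prompt against several models and collect responses side by side."""
    return {name: query_model(name, prompt) for name in models}

# A stub in place of the real API call:
def stub_query(model, prompt):
    return f"{model} answer to: {prompt}"

results = compare_models(
    "Explain our refund policy in one sentence.",
    ["gpt-4o", "claude-3-5-sonnet", "llama-3-70b"],
    stub_query,
)
for model, answer in results.items():
    print(f"{model:20s} {answer}")
```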
Ongoing Model & Prompt Evaluation
The Challenge: You've built a business process around a specific model and prompt, but the AI landscape is constantly changing. A new, cheaper, or more effective model could be released next week, but you have no way of knowing without constant manual testing.
Our Solution: We turn model evaluation into a hands-off, automated service. Give us your core prompts, sample data, and success criteria. Every week, we will automatically run your tests against a range of models—including all the latest releases—and email you a detailed performance report.
The Value: Future-proof your business. Get automated alerts when a new model offers a significant improvement in cost or quality, allowing you to continually optimise your operations and maintain a competitive edge without the manual overhead.
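The evaluation loop described above can be sketched as follows. This is a hedged illustration under stated assumptions: the scoring function, model names, and the improvement threshold are all hypothetical, with stubs in place of real model calls.

```python
# Hedged sketch of an automated evaluation loop; the scoring function,
# model names, and the 0.05 improvement margin are illustrative assumptions.
def evaluate(models, test_cases, query_model, score):
    """Score each model over the test suite; return the average score per model."""
    report = {}
    for model in models:
        scores = [score(query_model(model, case["prompt"]), case["expected"])
                  for case in test_cases]
        report[model] = sum(scores) / len(scores)
    return report

def flag_improvements(report, incumbent, margin=0.05):
    """List models that beat the current model by a meaningful margin."""
    baseline = report[incumbent]
    return [m for m, s in report.items() if s > baseline + margin]

# Toy run with stubs in place of real model calls:
cases = [{"prompt": "2+2?", "expected": "4"}]
stub = lambda model, prompt: "4" if model != "old-model" else "5"
exact = lambda got, want: 1.0 if got == want else 0.0
report = evaluate(["old-model", "new-model"], cases, stub, exact)
print(flag_improvements(report, incumbent="old-model"))  # → ['new-model']
```

Scheduling this loop weekly and emailing the report is then a matter of ordinary automation around it.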
Integration & Licensing Options
The Challenge: You are building an AI-powered product or internal tool and want to avoid vendor lock-in. You need the flexibility to switch LLMs easily as the market evolves.
Our Solution: The Blueberry Multiplexor can be licensed and integrated directly into your own systems. We offer two primary models:
- API Access: A simple, managed API call that gives you access to our entire routing and database system.
- Embedded Code Licence: For deeper integration, we can license the Multiplexor as a software module to be embedded directly within your application, all backed by a full support contract.
The Value: Build on a flexible foundation. Give your product the powerful, built-in capability to switch LLMs easily, offering your customers the best performance and ensuring you can always manage your own costs and technical strategy.
Who Is This For?
- Product builders: Build your application on a flexible foundation from day one. Avoid vendor lock-in and ensure you can always offer your customers the best performance at the lowest cost.
- Operations teams: Manage a complex portfolio of AI tasks efficiently. Use high-cost models only where necessary and optimise the rest of your workflows with faster, cheaper alternatives.
- Technology leaders: Gain a top-down view of your organisation's AI usage and costs. Enforce governance and make strategic decisions about which models are approved for use across different departments.
- R&D teams who need to benchmark, test, and compare multiple models efficiently.
- SaaS companies who want to offer their end-users a choice of AI engines to power a feature.
Key Features & Your Business Benefits
| Feature | Benefits |
|---|---|
| 37+ Pre-Integrated LLM APIs | Instantly access and test new models as they are released, without any new development work. Future-proof your application against market changes. |
| Centralised API Gateway | Your team makes one simple integration, saving hundreds of hours of repetitive engineering and maintenance effort. Swap models in or out on the fly, so you can always leverage the latest, most powerful technology without being locked into a single vendor. |
| Dynamic, Rule-Based Routing | Automatically route different types of prompts to the models best suited to handle them. Send a complex legal query to a premium reasoning model, a simple summarisation to a fast and cheap model, and a language translation to a specialised one, all seamlessly. |
| Cost and Performance Optimisation | Filter and test models based on your specific criteria. Need the three best models under a certain price point for a specific task? The Multiplexor lets you benchmark them in real time, so you can make informed, data-driven decisions that minimise cost and maximise quality for any given task. |
| Simple Licensing Model | The Multiplexor is available as a licensable software component that can be deployed directly within your own environment, ensuring your data and routing logic remain under your control. A simple, predictable cost replaces the chaos of managing multiple API keys and contracts. |
How It Works: A Simple, Powerful Workflow
1. Your application sends a prompt to the secure Blueberry AI Multiplexor endpoint, specifying which of the 37+ supported models you want to use. You can switch from ChatGPT to Claude to an open-source model with a single parameter change.
2. The Multiplexor checks your business rules (e.g., "for 'summarise' tasks, use model X, Y, or Z if cost is below $0.50/1M tokens").
3. The prompt is sent to the optimal LLM API, and the response is returned directly to your application.
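The business-rule check described above might look something like the following sketch. The rule format, model names, and prices are illustrative assumptions; in practice the price data would come from Blueberry's LLM Database rather than a hard-coded table.

```python
# Minimal sketch of the rule-based routing described above; the rule format,
# model names, and prices are illustrative assumptions.
RULES = {
    # task type -> (candidate models, max cost in $ per 1M tokens)
    "summarise": (["fast-model-a", "fast-model-b", "fast-model-c"], 0.50),
    "legal":     (["premium-reasoning-model"], 15.00),
}

PRICES = {  # $ per 1M tokens, as the LLM Database might report them
    "fast-model-a": 0.30,
    "fast-model-b": 0.60,
    "fast-model-c": 0.45,
    "premium-reasoning-model": 12.00,
}

def route(task: str) -> str:
    """Pick the cheapest candidate that satisfies the task's cost ceiling."""
    candidates, max_cost = RULES[task]
    eligible = [m for m in candidates if PRICES[m] <= max_cost]
    if not eligible:
        raise ValueError(f"No model satisfies the rules for task {task!r}")
    return min(eligible, key=PRICES.__getitem__)

print(route("summarise"))  # → fast-model-a (cheapest under the $0.50 ceiling)
```

Because the routing decision is data-driven, updating a price in the database or adding a model to a rule changes behaviour without touching application code.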