Copilot Adds Anthropic & OpenAI LLMs
Microsoft Copilot
Dec 11, 2025 10:09 PM

by HubSite 365 about Szymon Bochniak (365 atWork)

Microsoft 365 atWork; Senior Digital Advisor at Predica Group

Microsoft Copilot goes multi-LLM with OpenAI GPT-5 and Anthropic Claude via Copilot Studio, covered by enterprise data protection (EDP)

Key insights

  • Multi-LLM orchestration: The video explains that Microsoft Copilot now routes requests across multiple models so each task uses the best-suited engine.
    This approach lets businesses mix GPT-5 and Anthropic Claude models for higher-quality, task-specific results.
  • Copilot Studio: The presenter shows how Copilot Studio gives low-code tools to build, tune, and manage AI agents.
    Teams can select models, set prompts, and test agents without deep AI expertise.
  • Model routing and modes: Copilot can automatically pick the ideal model or let users choose manually in Studio.
    Specialized models handle reasoning, summarization, grounding, or content generation for better accuracy.
  • Data grounding and integration: Copilot connects to enterprise sources through Microsoft Graph and web grounding layers so answers reflect company context.
    This keeps responses relevant to internal data and the current working context.
  • Enterprise data protection: The video highlights built-in protections and governance when using external models, including compliance tools and monitoring.
    Organizations keep control over data, model tuning, and access policies during deployments.
  • Business benefits and reach: Multi-model Copilot improves task optimization, customization, and multi-agent workflows across Microsoft 365 apps like Word, Excel, Teams, and developer tools.
    Businesses gain scalable AI assistants tuned to their needs and workflows.

Introduction

The newsroom reviewed a recent YouTube video by Szymon Bochniak (365 atWork) that outlines how Microsoft is evolving Copilot into a multi-LLM solution for business. The video explains the addition of models such as GPT-5 and Anthropic’s Claude family into Microsoft’s ecosystem. Consequently, this shift aims to improve performance, reasoning, and task-specific capabilities across the productivity suite.

Bochniak highlights that these models are available through the Microsoft tenant and are governed by enterprise data protections. Furthermore, he demonstrates how administrators and developers can see and select new models inside Copilot Studio. Therefore, the video serves as both an update and a practical walkthrough for IT teams and business leaders.

What the Video Shows

Initially, Bochniak walks viewers through the interface where administrators can check which models are enabled on their tenant. He pauses to show how GPT-5 appears as an option with tuning capabilities, and then presents how Anthropic’s Claude models can be controlled within the studio. As a result, viewers gain a clear view of what model choices look like in a real Microsoft 365 environment.

Next, the video covers orchestration and routing inside the platform and points to controls for model selection and governance. Bochniak also demonstrates Copilot’s connectivity to organizational data sources, emphasizing enterprise protections. Thus, the segment underscores both functionality and compliance features that enterprise teams must consider.
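
Conceptually, grounding means the assistant first retrieves relevant organizational content and only then asks a model to answer with that context attached. The minimal Python sketch below illustrates the flow in generic terms; the functions search_enterprise_index and call_model are placeholders invented for this article, not real Microsoft Graph or Copilot APIs.

    # Illustrative grounding flow: retrieve company context, then prompt a model.
    # All names here are hypothetical placeholders, not real Copilot or Graph APIs.

    def search_enterprise_index(query: str, top: int = 3) -> list[str]:
        """Stand-in for a search over SharePoint, Teams, or other tenant content."""
        return [f"[document snippet {i} matching '{query}']" for i in range(1, top + 1)]

    def call_model(prompt: str) -> str:
        """Stand-in for an LLM call routed through the tenant's protection layer."""
        return f"(model answer based on: {prompt[:80]}...)"

    def grounded_answer(question: str) -> str:
        snippets = search_enterprise_index(question)
        context = "\n".join(snippets)
        prompt = (
            "Answer using only the company context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return call_model(prompt)

    print(grounded_answer("What is our travel reimbursement limit?"))

In a production deployment, that retrieval step is what ties answers back to tenant permissions and governance policies.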

Benefits of the Multi-LLM Approach

Adopting multiple LLMs allows Microsoft to assign tasks to models that are best suited for them, which can improve quality and accuracy. For example, one model may excel in structured reasoning while another is stronger at creative writing, and the system can route requests accordingly. Consequently, businesses can exploit specialized strengths without relying on a single model for every task.
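
To make that idea concrete, here is a minimal Python sketch of task-based routing. The task labels, model identifiers, and keyword rules are illustrative assumptions made for this article, not Copilot's actual orchestration logic.

    # Minimal sketch of task-based model routing. The task labels, model names,
    # and keyword rules are illustrative assumptions, not Copilot's real logic.

    ROUTES = {
        "reasoning": "gpt-5",             # structured, multi-step analysis
        "summarize": "small-fast-model",  # cheap, low-latency summarization
        "creative": "claude-sonnet",      # drafting and rewriting text
    }

    def classify_task(request: str) -> str:
        """Very rough keyword classifier standing in for a real orchestrator."""
        text = request.lower()
        if any(w in text for w in ("why", "analyze", "plan", "compare")):
            return "reasoning"
        if any(w in text for w in ("summarize", "recap", "tl;dr")):
            return "summarize"
        return "creative"

    def route(request: str) -> str:
        task = classify_task(request)
        return f"Routing '{request}' as a {task} task to {ROUTES[task]}"

    print(route("Analyze why Q3 churn increased"))
    print(route("Summarize this meeting transcript"))

A real orchestrator would rely on much richer signals than keywords, but the overall structure, classify the request and then dispatch to a model, is the same.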

Moreover, Copilot Studio offers low-code tools to tune models with company data, lowering the barrier to customization. This makes it easier for teams to create tailored assistants that reflect internal workflows and domain knowledge. In turn, businesses may see productivity gains because the assistants provide more relevant and actionable outputs.
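
As a rough illustration of what such a tailored assistant might capture, the sketch below describes an agent declaratively in Python: model choice, instructions, knowledge sources, and test prompts. The field names and connector paths are hypothetical and do not reflect Copilot Studio's actual configuration schema.

    # Hypothetical agent definition, loosely mirroring the kind of choices a maker
    # records in a low-code builder: model, instructions, knowledge sources, tests.
    # This is not Copilot Studio's real configuration format.

    agent = {
        "name": "expense-policy-assistant",
        "model": "claude-sonnet",                  # chosen per task; could be gpt-5
        "instructions": (
            "Answer questions about travel and expense policy. "
            "Cite the policy section you used. Say 'I don't know' if unsure."
        ),
        "knowledge_sources": [
            "sharepoint:/sites/finance/policies",  # illustrative connector paths
            "web:company-intranet",
        ],
        "test_prompts": [
            "What is the daily meal allowance in Germany?",
            "Can I book business class for flights under 4 hours?",
        ],
    }

    for prompt in agent["test_prompts"]:
        print(f"[test against {agent['model']}] {prompt}")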

Challenges and Tradeoffs

Nevertheless, integrating multiple LLMs introduces complexity in governance, cost, and latency that organizations must manage. Routing queries to different models can increase infrastructure overhead and may complicate monitoring, particularly when models differ in response time or pricing. Therefore, IT teams will need new policies and tools to track usage and performance across providers.
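
The sketch below shows one simple way such tracking could be structured: aggregate calls per model and compare volume, latency, and estimated spend. The model names and per-token prices are made-up illustrative values, not actual provider pricing.

    # Sketch of per-model usage tracking so IT can compare providers on volume,
    # latency, and spend. Prices and model names are illustrative, not real rates.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Call:
        model: str
        tokens: int
        latency_ms: float

    PRICE_PER_1K_TOKENS = {"gpt-5": 0.01, "claude-sonnet": 0.008}  # illustrative

    def summarize(calls: list[Call]) -> None:
        stats = defaultdict(lambda: {"calls": 0, "tokens": 0, "latency": 0.0})
        for c in calls:
            s = stats[c.model]
            s["calls"] += 1
            s["tokens"] += c.tokens
            s["latency"] += c.latency_ms
        for model, s in stats.items():
            cost = s["tokens"] / 1000 * PRICE_PER_1K_TOKENS.get(model, 0.0)
            avg_latency = s["latency"] / s["calls"]
            print(f"{model}: {s['calls']} calls, {s['tokens']} tokens, "
                  f"~${cost:.2f}, avg {avg_latency:.0f} ms")

    summarize([
        Call("gpt-5", 1200, 950.0),
        Call("claude-sonnet", 800, 620.0),
        Call("gpt-5", 400, 480.0),
    ])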

There is also a tradeoff between automation and control: while model orchestration can automate model selection, manual tuning and oversight remain important to prevent inconsistent outputs and reduce hallucinations. Additionally, businesses must balance the benefits of specialized models against potential vendor lock-in and the skills needed to manage multi-model pipelines. Consequently, the promise of flexibility comes with demands for governance and talent.
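
One common pattern for keeping a human in the loop is a confidence gate: automated routing still runs, but low-confidence answers are held for review rather than returned directly. The sketch below illustrates the idea; the confidence score and threshold are assumptions for illustration, not a documented Copilot feature.

    # Sketch of an oversight gate: automated routing still runs, but low-confidence
    # answers are flagged for human review instead of being returned directly.
    # The confidence score and threshold are illustrative assumptions.

    def answer_with_oversight(question: str, answer: str, confidence: float,
                              threshold: float = 0.75) -> str:
        if confidence >= threshold:
            return answer
        # Below the bar: hold the draft and route it to a reviewer queue.
        return f"[held for human review] draft: {answer} (confidence {confidence:.2f})"

    print(answer_with_oversight("Summarize the contract", "The contract covers ...", 0.91))
    print(answer_with_oversight("Is this clause enforceable?", "Probably yes ...", 0.55))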

Security, Compliance, and Data Protection

The video stresses that enterprise data protection covers the use of third-party models and that Microsoft ties outputs back to organizational policies. Bochniak demonstrates controls that allow administrators to limit model choices and to apply data governance rules across instances. As a result, organizations can adopt these models while maintaining control over sensitive information and compliance requirements.
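
As a conceptual illustration of that kind of control, the sketch below checks a request against an admin-defined policy: which models a department may use and which data sources are approved. The policy structure, names, and values are hypothetical and do not represent Microsoft's governance tooling.

    # Sketch of an admin-side policy check: which models a department may use and
    # whether a data source is approved. Policy names and values are hypothetical.

    POLICY = {
        "allowed_models": {
            "finance": {"gpt-5"},
            "marketing": {"gpt-5", "claude-sonnet"},
        },
        "approved_sources": {"sharepoint:/sites/finance/policies", "web:company-intranet"},
    }

    def check_request(department: str, model: str, sources: list[str]) -> list[str]:
        """Return a list of policy violations; an empty list means the request is allowed."""
        violations = []
        if model not in POLICY["allowed_models"].get(department, set()):
            violations.append(f"model '{model}' not allowed for {department}")
        for src in sources:
            if src not in POLICY["approved_sources"]:
                violations.append(f"source '{src}' is not approved")
        return violations

    print(check_request("finance", "claude-sonnet", ["sharepoint:/sites/finance/policies"]))
    print(check_request("marketing", "claude-sonnet", ["web:company-intranet"]))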

However, practical challenges persist, such as proving data residency, auditing model behavior, and ensuring consistent policy enforcement across heterogeneous models. These issues require integrated tooling and clear procedures, which can be time-consuming to implement. Thus, the pathway to secure multi-LLM deployment is feasible but requires deliberate planning and resourcing.

Implications for Businesses and Next Steps

For businesses, the multi-LLM Copilot approach offers a pathway to build smarter, more specialized assistants without deep AI expertise. By using GPT-5, Anthropic Claude models, and connectivity through Microsoft 365 and Azure AI Foundry, organizations can tailor solutions that reflect their data and processes. Consequently, early adopters may gain competitive advantage through faster insights and automation.

Yet, firms should approach adoption with a staged plan that includes pilot programs, governance frameworks, and training for administrators and makers. They should also monitor costs, latency, and output quality while iterating on policy and tuning. In summary, Bochniak’s video provides a useful introduction and practical view of both the opportunities and responsibilities that come with multi-LLM Copilot deployments.

Keywords

Copilot multi-LLM integration, multi LLM business solutions, Anthropic Claude enterprise, OpenAI GPT Copilot for business, Copilot Anthropic Claude OpenAI, enterprise AI assistant integration, multi-LLM Copilot deployment, AI copilots for enterprises