Copilot Studio: LEGO Rebrickable Demo
Microsoft Copilot Studio
Feb 2, 2026, 16:24


by HubSite 365 on Andrew Hess - MySPQuestions

I currently share my knowledge of the Power Platform, including Power Apps and Power Automate. With over 8 years of experience, I have been working with SharePoint and SharePoint Online.

Copilot Studio demo: call external APIs like Rebrickable, parse large JSON into structured chat responses, and manage payload limits

Key insights

  • Copilot Studio + External APIs: Demonstration uses the Rebrickable API to show how Copilot Studio agents call live REST endpoints, parse results, and return structured answers for real tasks like listing LEGO parts.
    Agents turn user prompts into API calls and present the data back in chat-style responses.
  • How it works: Developers supply an OpenAPI spec, authentication details, and tool descriptions so the agent knows when and how to call the API.
    Copilot Studio parses the spec, routes prompts to the right endpoint, and formats the API output for the user.
  • Large responses and limits: The demo highlights handling very large JSON payloads and a key limit: when an API returns over 25,000 records, images and structured content may be truncated or fail to render properly.
    Design agents to page, filter, or summarize results to avoid hitting payload constraints.
  • Practical workflow: Typical steps are to prepare the API assets (OpenAPI spec, authentication), write a simple YAML tool definition, set headers or query parameters, test queries, and iterate on instructions.
    Keep prompts and inputs like “min parts” or “search” clear to reduce ambiguous or repeated prompts.
  • Benefits for teams: Integrating external APIs brings real-time data, secure enterprise controls, and natural-language actions without heavy coding.
    It supports secure authentication, governance, and use cases from hobby demos to production apps that use RAG (Retrieval Augmented Generation) for grounded answers.
  • Best practices and lessons: Always update agent instructions, limit returned fields, and use human-in-the-loop checks for complex changes.
    Monitor usage and analytics, and design for paging and error handling to keep responses reliable and scalable.
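To make the pattern in these insights concrete, here is a minimal sketch of how an agent-side tool might translate a user prompt into a filtered API request. The endpoint and parameter names (`search`, `min_parts`, `page_size`, the `Authorization: key …` header) follow the public Rebrickable v3 API, but the helper itself is illustrative, not code from the demo:

```python
from typing import Optional
from urllib.parse import urlencode

# Base endpoint for searching LEGO sets in the Rebrickable v3 API.
BASE_URL = "https://rebrickable.com/api/v3/lego/sets/"

def build_sets_request(api_key: str,
                       search: Optional[str] = None,
                       min_parts: Optional[int] = None,
                       page_size: int = 25):
    """Return (url, headers) for a filtered /lego/sets/ query.

    A small page_size default keeps the response well under chat
    payload limits; callers can page for more.
    """
    params = {"page_size": page_size}
    if search:
        params["search"] = search
    if min_parts is not None:
        params["min_parts"] = min_parts
    headers = {"Authorization": f"key {api_key}"}
    return f"{BASE_URL}?{urlencode(params)}", headers

# Example: "sets with at least 1000 parts matching 'castle'"
url, headers = build_sets_request("API_KEY", search="castle", min_parts=1000)
```

Keeping request construction in one place like this makes it easy to enforce defaults (page size, field filters) before the agent ever issues the call.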

In a clear and practical YouTube walkthrough, Andrew Hess - MySPQuestions demonstrates how Copilot Studio retrieves live data from external services, using the Rebrickable API as a working example. The video walks viewers through connecting an agent to a REST endpoint, parsing large JSON responses, and returning structured chat answers that can include images and parts lists. Importantly, the demo surfaces a real-world limit when a query returns more than 25,000 records and shows how that constraint affects responses that combine images and structured content. The walkthrough therefore serves as both a practical tutorial and a cautionary case for production usage.

Overview of the LEGO Rebrickable Demo

The demo begins by showing how an agent queries the Rebrickable API to retrieve LEGO set inventories and part details, which illustrates the common pattern of live data integration. As the agent receives large datasets, the video explains how to parse, filter, and structure that data so a chat response is coherent and useful to a user. The presenter emphasizes that this is not a toy example but the same pattern used in production, which helps teams understand real operational behaviors and constraints. Consequently, viewers get a realistic picture of how external APIs behave when integrated into conversational agents.

How Copilot Studio Connects to External APIs

Hess outlines the practical steps for connecting an agent: provide an OpenAPI specification, configure authentication, and supply descriptive metadata so the model knows when to invoke the API. He demonstrates creating an API tool in the studio, writing a simple YAML tooling file, and deciding whether parameters belong in headers or queries, which affects both security and caching. The video also covers a useful trick to avoid repeated prompts for dynamic inputs by instructing the agent how to fill them automatically. Thus, the guide simplifies configuration while highlighting choices that influence reliability and user experience.
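A tool definition along these lines might look like the following minimal OpenAPI fragment. The paths and parameter names mirror the public Rebrickable v3 API, but this is a hypothetical sketch, not the file from the video; verify field names against the official docs before importing it into Copilot Studio:

```yaml
# Hypothetical, trimmed OpenAPI fragment for one Rebrickable endpoint.
openapi: 3.0.0
info:
  title: Rebrickable LEGO API
  version: "3"
servers:
  - url: https://rebrickable.com/api/v3
paths:
  /lego/sets/:
    get:
      operationId: searchSets
      # The description doubles as guidance for the agent on when to call this tool.
      description: Search LEGO sets. Use for prompts about finding or filtering sets.
      parameters:
        - name: search
          in: query
          schema: { type: string }
        - name: min_parts
          in: query
          schema: { type: integer }
        - name: page_size
          in: query
          schema: { type: integer }
      responses:
        "200":
          description: Paged list of matching sets
```

Note how the `description` fields carry the "descriptive metadata" Hess mentions: the model relies on them to decide when this endpoint is the right tool for a given prompt.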

Managing Large JSON Responses and Payload Limits

A key part of the demo focuses on what happens when the API returns very large lists: responses can exceed token or payload limits and images or structured items make the problem worse. Hess reproduces a scenario where more than 25,000 records cause practical failures or truncated replies, and he shows how that impacts both the agent’s reasoning and the chat output formatting. As a result, the video stresses the importance of pre-filtering, pagination, and server-side aggregation so the agent only requests what it can handle effectively. In short, controlling dataset size up front reduces latency and prevents unexpected behavior in production.
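The pre-filtering and pagination idea can be sketched as a small helper that pages through results and strips fields before anything reaches the agent. The Rebrickable-style paged shape (`{"results": [...], "next": ...}`) matches the v3 API, but the helper and its names are illustrative, with the HTTP call injected so it is easy to test:

```python
from typing import Callable, Dict, Iterator, List

def iter_trimmed(fetch_page: Callable[[int], Dict],
                 keep_fields: List[str],
                 max_records: int = 500) -> Iterator[Dict]:
    """Yield at most max_records items, each reduced to keep_fields.

    fetch_page(page_number) stands in for an HTTP call returning
    Rebrickable-style paged JSON: {"results": [...], "next": ...}.
    Capping records and dropping heavy fields (e.g. image URLs) keeps
    the payload the agent must reason over small.
    """
    page, seen = 1, 0
    while seen < max_records:
        data = fetch_page(page)
        for item in data["results"]:
            yield {k: item[k] for k in keep_fields if k in item}
            seen += 1
            if seen >= max_records:
                return
        if not data.get("next"):  # no further pages
            return
        page += 1

# Demo with a fake two-page API returning three records per page.
fake = lambda page: {
    "results": [{"set_num": f"{page}-{i}", "name": "x", "img": "big-url"}
                for i in range(3)],
    "next": page < 2,
}
rows = list(iter_trimmed(fake, ["set_num", "name"], max_records=4))
```

Because the cap is enforced before yielding, the agent never has to truncate mid-response, which is exactly the failure mode the 25,000-record scenario exposes.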

Practical Lessons from the Walkthrough

Throughout the demonstration, the presenter points out several hands-on lessons: start from a blank project to understand defaults, iteratively update instructions when you encounter issues, and validate each step with realistic queries. He shows using inputs like minimum parts or search terms to limit returned results, which improves performance and user relevance. Moreover, the walkthrough highlights how small changes to prompts or configuration often resolve the “initial issues” many developers encounter. Therefore, the learning loop of test, adjust, and retest is central to deploying reliable agents.
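One way to avoid the repeated-prompt problem for optional inputs is to resolve user values against sensible defaults before the tool call. The input names (`search`, `min_parts`) mirror the demo; the helper and the specific defaults are assumptions for illustration:

```python
# Defaults applied when the user does not supply a value, so the agent
# never needs to re-prompt for optional inputs. Values are illustrative.
DEFAULTS = {"min_parts": 1, "page_size": 25}

def resolve_inputs(user_inputs: dict) -> dict:
    """Merge user-supplied values over defaults, dropping empty values.

    An empty string or None counts as "not provided" so a blank chat
    input falls back to the default rather than triggering a re-prompt.
    """
    cleaned = {k: v for k, v in user_inputs.items() if v not in ("", None)}
    return {**DEFAULTS, **cleaned}

# "castle" sets, no minimum given -> default min_parts kicks in.
resolved = resolve_inputs({"search": "castle", "min_parts": ""})
```

The same idea can live in the agent's instructions instead of code ("if min parts is not given, use 1"), which is the low-code route the video favors.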

Tradeoffs and Challenges

Integrating live APIs into conversational agents requires balancing completeness against performance; you must choose between returning exhaustive data and keeping responses fast and within payload limits. Security and authentication also create tradeoffs: richer integrations may need OAuth flows and stricter permissions, while simpler API key approaches are easier but less flexible. Additionally, including images and detailed structured content increases bandwidth and token consumption, so teams must decide which elements truly add value to a user’s experience. Ultimately, these decisions involve weighing user needs, operational cost, and system complexity.

Best Practices and Next Steps

Hess recommends several practical steps: filter and aggregate data server-side, limit results with sensible defaults, and return a concise summary with an option to fetch more detail on demand. He also advises documenting the OpenAPI and YAML configurations clearly, testing with production-sized datasets, and adding human-in-the-loop checkpoints for complex operations. For teams planning to scale, it makes sense to instrument usage and errors to monitor when payload limits or structured content cause failures. In this way, designers can iterate safely and prioritize the highest-value interactions.
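The "concise summary with detail on demand" recommendation can be sketched as a function that returns a short answer plus the identifiers needed for a follow-up call, instead of dumping every record into chat. The field names (`name`, `num_parts`, `set_num`) match Rebrickable set records, but the function itself is a hypothetical illustration:

```python
from typing import Dict, List, Tuple

def summarize_results(results: List[Dict], total: int,
                      top_n: int = 3) -> Tuple[str, List[str]]:
    """Return a short chat-ready summary and ids for follow-up detail.

    Only top_n rows are rendered; the returned ids let the agent offer
    "show me more about X" without re-running the full search.
    """
    top = results[:top_n]
    lines = [f"Found {total} matching sets; showing top {top_n}:"]
    lines += [f"- {s['name']} ({s['num_parts']} parts)" for s in top]
    follow_up_ids = [s["set_num"] for s in top]
    return "\n".join(lines), follow_up_ids
```

Pairing a summary like this with instrumentation (log `total` whenever it exceeds a threshold) gives teams the usage signal Hess suggests monitoring before payload limits become user-visible failures.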

In conclusion, the video by Andrew Hess - MySPQuestions provides a focused, production-oriented tour of connecting Copilot Studio agents to external REST APIs using the Rebrickable demo as a running example. By combining configuration walkthroughs, real-world failure modes, and practical workarounds, the tutorial helps viewers understand both how to implement integrations and how to manage their tradeoffs. Therefore, teams experimenting with custom agents or data-driven chat experiences will find actionable guidance for avoiding common pitfalls and building more robust solutions. For readers, the chapters listed in the video make it easy to revisit specific steps and reproduce the demonstrated patterns in their own projects.


Keywords

Copilot Studio external APIs, Rebrickable API tutorial, LEGO Rebrickable integration, Copilot Studio demo, Microsoft Copilot API integration, Rebrickable LEGO parts search, AI app with external APIs, No-code Copilot Studio integration