
In a clear and practical YouTube walkthrough, Andrew Hess - MySPQuestions demonstrates how Copilot Studio agents retrieve live data from external services, using the Rebrickable API as a working example. The video walks viewers through connecting an agent to a REST endpoint, parsing large JSON responses, and returning structured chat answers that can include images and parts lists. Importantly, the demo surfaces a real-world limit when a query returns more than 25,000 records and shows how that constraint affects responses that combine images and structured content. The walkthrough therefore serves as both a practical tutorial and a cautionary case for production usage.
The demo begins by showing how an agent queries the Rebrickable API to retrieve LEGO set inventories and part details, which illustrates the common pattern of live data integration. As the agent receives large datasets, the video explains how to parse, filter, and structure that data so a chat response is coherent and useful to a user. The presenter emphasizes that this is not a toy example but the same pattern used in production, which helps teams understand real operational behaviors and constraints. Consequently, viewers get a realistic picture of how external APIs behave when integrated into conversational agents.
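The retrieval step described above can be sketched as a small Python helper. The endpoint path and the shape of the JSON response below follow Rebrickable's public v3 API as commonly documented, but the header format and field names are assumptions that should be verified against the live API docs; `YOUR_KEY` is a placeholder.

```python
from urllib.request import Request, urlopen
import json

API_BASE = "https://rebrickable.com/api/v3"

def set_parts_request(set_num: str, api_key: str) -> Request:
    """Build a GET request for the parts inventory of one LEGO set."""
    url = f"{API_BASE}/lego/sets/{set_num}/parts/"
    # Rebrickable accepts the key in an Authorization header: "key <value>".
    return Request(url, headers={"Authorization": f"key {api_key}"})

def parse_parts(payload: dict) -> list[dict]:
    """Flatten one page of the paginated response into simple rows."""
    return [
        {
            "name": item["part"]["name"],
            "color": item["color"]["name"],
            "quantity": item["quantity"],
        }
        for item in payload.get("results", [])
    ]

# Live call (requires a real key and network access):
# with urlopen(set_parts_request("75192-1", "YOUR_KEY")) as resp:
#     rows = parse_parts(json.load(resp))
```

Splitting the request builder from the parser keeps the parsing logic testable without network access, which mirrors the filter-and-structure step the video emphasizes.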
Hess outlines the practical steps for connecting an agent: provide an OpenAPI specification, configure authentication, and supply descriptive metadata so the model knows when to invoke the API. He demonstrates creating an API tool in the studio, writing a simple YAML tooling file, and deciding whether parameters belong in headers or queries, which affects both security and caching. The video also covers a useful trick to avoid repeated prompts for dynamic inputs by instructing the agent how to fill them automatically. Thus, the guide simplifies configuration while highlighting choices that influence reliability and user experience.
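A tooling file of the kind Hess writes might look like the following OpenAPI (Swagger 2.0) fragment. The operation name, parameter descriptions, and security scheme here are illustrative assumptions modeled on the Rebrickable sets endpoint, not a copy of the file shown in the video; note how the descriptions double as metadata telling the model when to invoke the API, and how the key lives in a header rather than a query string.

```yaml
swagger: "2.0"
info:
  title: Rebrickable LEGO Sets (illustrative)
  description: Searches LEGO sets. Call this when the user asks about LEGO sets or parts.
  version: "1.0"
host: rebrickable.com
basePath: /api/v3
schemes: [https]
paths:
  /lego/sets/:
    get:
      operationId: SearchSets
      summary: Search LEGO sets by name, capped by page size
      parameters:
        - name: search
          in: query
          type: string
          description: Free-text set name to search for
        - name: min_parts
          in: query
          type: integer
          description: Only return sets with at least this many parts
        - name: page_size
          in: query
          type: integer
          description: Maximum records per page; keep small to stay within limits
      responses:
        "200":
          description: Paginated list of matching sets
securityDefinitions:
  api_key:
    type: apiKey
    name: Authorization
    in: header
```

Putting the key in a header rather than a query parameter keeps it out of URL logs and caches, which is the security/caching tradeoff the paragraph above alludes to.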
A key part of the demo focuses on what happens when the API returns very large lists: responses can exceed token or payload limits, and images or structured items make the problem worse. Hess reproduces a scenario where more than 25,000 records cause practical failures or truncated replies, and he shows how that impacts both the agent’s reasoning and the chat output formatting. As a result, the video stresses the importance of pre-filtering, pagination, and server-side aggregation so the agent only requests what it can handle effectively. In short, controlling dataset size up front reduces latency and prevents unexpected behavior in production.
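The "only request what the agent can handle" mitigation can be sketched as a pagination loop with a hard cap. The page iterator here is deliberately abstract so the same logic works with any paginated client, and the cap value is a configurable assumption (the 25,000-record ceiling from the demo is just one possible setting).

```python
from typing import Iterable

def collect_capped(pages: Iterable[list[dict]], max_records: int) -> list[dict]:
    """Accumulate paginated results, stopping before exceeding max_records.

    `pages` yields one page of records at a time (e.g. successive calls to a
    paginated endpoint). We stop consuming pages as soon as the cap is hit,
    rather than downloading everything and truncating afterwards, which is
    what keeps latency and payload size under control.
    """
    collected: list[dict] = []
    for page in pages:
        room = max_records - len(collected)
        if room <= 0:
            break
        collected.extend(page[:room])
    return collected
```

Because iteration stops early, pages past the cap are never fetched at all when `pages` is a lazy generator wrapping the API.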
Throughout the demonstration, the presenter points out several hands-on lessons: start from a blank project to understand defaults, iteratively update instructions when you encounter issues, and validate each step with realistic queries. He shows using inputs like minimum parts or search terms to limit returned results, which improves performance and user relevance. Moreover, the walkthrough highlights how small changes to prompts or configuration often resolve the “initial issues” many developers encounter. Therefore, the learning loop of test, adjust, and retest is central to deploying reliable agents.
Integrating live APIs into conversational agents requires balancing completeness against performance; you must choose between returning exhaustive data and keeping responses fast and within payload limits. Security and authentication also create tradeoffs: richer integrations may need OAuth flows and stricter permissions, while simpler API key approaches are easier but less flexible. Additionally, including images and detailed structured content increases bandwidth and token consumption, so teams must decide which elements truly add value to a user’s experience. Ultimately, these decisions involve weighing user needs, operational cost, and system complexity.
Hess recommends several practical steps: filter and aggregate data server-side, limit results with sensible defaults, and return a concise summary with an option to fetch more detail on demand. He also advises documenting the OpenAPI and YAML configurations clearly, testing with production-sized datasets, and adding human-in-the-loop checkpoints for complex operations. For teams planning to scale, it makes sense to instrument usage and errors to monitor when payload limits or structured content cause failures. In this way, designers can iterate safely and prioritize the highest-value interactions.
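The "concise summary with detail on demand" recommendation can be sketched as a small formatter. The row shape (`name` and `quantity` keys) is an illustrative assumption about how the parts data has been flattened, not a format prescribed by the video.

```python
def summarize_rows(rows: list[dict], limit: int = 5) -> str:
    """Return a short chat-friendly summary of the first `limit` rows.

    Anything beyond the limit is collapsed into a "fetch more" hint, so the
    reply stays small no matter how many records the API returned.
    """
    shown = rows[:limit]
    lines = [f"- {r['name']} x{r['quantity']}" for r in shown]
    remaining = len(rows) - len(shown)
    if remaining > 0:
        lines.append(f"...and {remaining} more. Ask for the full list to see them.")
    return "\n".join(lines)
```

Pairing a formatter like this with server-side aggregation means the agent's reply size is bounded even when the underlying query matches thousands of records.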
In conclusion, the video by Andrew Hess - MySPQuestions provides a focused, production-oriented tour of connecting Copilot Studio agents to external REST APIs using the Rebrickable demo as a running example. By combining configuration walkthroughs, real-world failure modes, and practical workarounds, the tutorial helps viewers understand both how to implement integrations and how to manage their tradeoffs. Therefore, teams experimenting with custom agents or data-driven chat experiences will find actionable guidance for avoiding common pitfalls and building more robust solutions. For readers, the chapters listed in the video make it easy to revisit specific steps and reproduce the demonstrated patterns in their own projects.