Dec 17, 2025 6:09 AM

Microsoft Foundry: LLM Tools in Python

by HubSite 365 about Andrew Hess - MySPQuestions

Currently I am sharing my knowledge of the Power Platform, including PowerApps and Power Automate. With over 8 years of experience, I have been working with SharePoint and SharePoint Online.

Microsoft Foundry: Python tool calling with OpenAI API and GitHub examples for controlled weather data workflows

Key insights

  • Tool calling
    LLMs request named functions instead of directly calling external APIs. The app runs those functions in Python, returns the results, and then the model produces the final reply.
  • Three-step workflow
    Send user input and function definitions to the model, let the model generate a tool call, execute that call in Python, and pass the result back to the model for a final answer.
  • Azure AI Foundry & AIProjectClient
    Foundry’s Python SDK (AIProjectClient) creates agents and threads, accepts tool definitions via the tools parameter, and manages calls to deployed models and compute.
  • Define tools clearly
    Create Python functions with type annotations (for example, a get_current_time or get_weather function), register them as tools, then test and validate each tool before use; a minimal sketch follows this list.
  • Tool choices and control
    Use tool_choice options like "auto" for model-driven calls, specific names to force a tool, or "none" for direct answers; this keeps external calls explicit and predictable.
  • Benefits and demo example
    Tool calling gives developers control, improves security, and scales with Foundry’s model catalog; the video uses a weather tool to show argument extraction, execution, and follow-up responses in a real workflow.
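
To make that "define tools clearly" step concrete, here is a minimal sketch of two annotated functions in the spirit of the bullet above; the function bodies, the zoneinfo usage, and the stubbed weather payload are illustrative assumptions rather than code from the video.

```python
from datetime import datetime
from zoneinfo import ZoneInfo


def get_current_time(timezone: str = "UTC") -> str:
    """Return the current time in the given IANA timezone as an ISO 8601 string."""
    return datetime.now(ZoneInfo(timezone)).isoformat()


def get_weather(location: str, unit: str = "celsius") -> dict:
    """Placeholder weather lookup; a real implementation would call an HTTP API."""
    # Illustrative stub so the tool can be tested locally before registration
    return {"location": location, "temperature": 21, "unit": unit, "conditions": "clear"}


# Quick local test before registering the functions as tools
if __name__ == "__main__":
    print(get_current_time("Europe/Berlin"))
    print(get_weather("Seattle"))
```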

In a clear, developer-focused YouTube walkthrough, Andrew Hess - MySPQuestions explains how to implement tool calling with Python using the OpenAI API and Microsoft Foundry. He demonstrates the pattern where the model requests a function, the application executes it, and then the model receives the real-world result. Consequently, the video emphasizes predictability and control instead of letting the model directly call external services. This article summarizes the main ideas and assesses tradeoffs so developers can decide when to adopt the approach.

What the Video Covers

First, the presenter lays out the core concept: give the model a function signature it can request instead of granting direct API access. Next, he shows a practical flow where the model signals a need for data, the app runs a tool in Python with standard HTTP requests, and the result returns to the model for a final answer. For clarity, the video uses a weather lookup example to illustrate argument extraction and follow-up responses. Meanwhile, viewers get a sense of how this pattern fits into real agent workflows in Microsoft Foundry.

Furthermore, the author walks through the code updates, adds the dedicated tool role to the message exchange, and then tests the tool to confirm that the model and the Python code exchange messages correctly. He times the sequence and highlights how the agent chooses when to call a tool or answer directly. This stepwise demo keeps the process concrete, focusing on the developer experience rather than hype. Ultimately, the video targets engineers who want to understand the mechanics behind tool calling.

How Tool Calling Works in Practice

The essential loop involves three actions: define your callable functions with clear parameter types, send those definitions to the model, and execute requested calls in your application code. In effect, the model returns a structured request that the developer-run code fulfills, so the application remains in control of external interactions. This approach reduces ambiguity because external services are invoked explicitly rather than implicitly by the model. As a result, teams can apply existing security, logging, and error handling to every external call.
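
To ground that loop, the following sketch uses the OpenAI Python SDK's chat-completions tools interface; the model name, the JSON schema, and the stubbed get_weather helper are assumptions made for illustration, not the exact code shown in the video.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def get_weather(location: str) -> dict:
    # Stand-in for a real HTTP lookup; see the weather example later in the article
    return {"location": location, "temperature_c": 18, "conditions": "partly cloudy"}


# Step 1: describe the callable function so the model can request it
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string", "description": "City name"}},
            "required": ["location"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Seattle right now?"}]

# Step 2: the model returns a structured tool call instead of calling anything itself
response = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools, tool_choice="auto"
)
tool_call = response.choices[0].message.tool_calls[0]  # sketch assumes the model chose the tool
args = json.loads(tool_call.function.arguments)

# Step 3: the application executes the call and hands the result back to the model
result = get_weather(**args)
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)})

final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```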

To set this up, the video shows initializing a client and creating an agent thread, then passing tool definitions when starting the conversation. The agent can use options like automatic tool selection or be restricted to a specific function set. Also, good system messages and context help guide the model about when and how to use tools. Thus, configuration and messaging become important levers for predictable behavior.
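
On the Foundry side, the setup he describes corresponds roughly to the azure-ai-projects SDK; the sketch below is a hedged approximation of that flow (method and parameter names have shifted between preview releases), not a transcript of the video's code.

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import FunctionTool, ToolSet


# Plain Python callables become tools the agent may request
def get_weather(location: str) -> str:
    return f"Sunny and 18 C in {location}"  # stub; replace with a real lookup


project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],  # taken from the Foundry project page
)

toolset = ToolSet()
toolset.add(FunctionTool(functions={get_weather}))

# System-style instructions guide when the agent should call a tool vs. answer directly
agent = project_client.agents.create_agent(
    model="gpt-4o-mini",
    name="weather-agent",
    instructions="Use get_weather only when the user asks about current conditions.",
    toolset=toolset,
)

thread = project_client.agents.create_thread()
project_client.agents.create_message(
    thread_id=thread.id, role="user", content="What's the weather in Redmond?"
)
# Note: this parameter was named assistant_id in some earlier preview releases
run = project_client.agents.create_and_process_run(thread_id=thread.id, agent_id=agent.id)
print(run.status)
```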

Example: Weather Lookup Workflow

Andrew uses a weather retrieval tool to make the concept tangible, showing how the model extracts arguments such as location and time, requests the function, and then receives the fetched data. He then demonstrates how the application executes the HTTP call, returns the raw result, and allows the model to build a user-facing response. This sequence shows both the benefits and the places developers must add robustness, such as retry logic and validation. Therefore, the weather example serves as a compact template that teams can adapt for other real-world APIs.
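
As a rough sketch of the execution step, the snippet below calls the free Open-Meteo endpoint as a stand-in for whichever weather service the video uses; the coordinate-based signature and the error handling are illustrative assumptions.

```python
import requests


def get_weather(latitude: float, longitude: float) -> dict:
    """Fetch current conditions; the model supplies the coordinates as tool arguments."""
    url = "https://api.open-meteo.com/v1/forecast"
    params = {"latitude": latitude, "longitude": longitude, "current_weather": True}
    try:
        resp = requests.get(url, params=params, timeout=10)
        resp.raise_for_status()
        return resp.json().get("current_weather", {})
    except requests.RequestException as exc:
        # Return a structured error so the model can explain the failure to the user;
        # production code would add retries here
        return {"error": str(exc)}


# Example: arguments extracted from a tool call asking about Berlin
print(get_weather(52.52, 13.41))
```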

During testing, he highlights the need to validate arguments before executing calls and to sanitize external responses prior to feeding them back to the model. These checks help prevent injection or misuse and preserve data integrity. Meanwhile, the demo underlines the separation of concerns: the model decides intent while the application enforces execution rules. This separation supports secure and auditable integrations.
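
One way to implement those checks is to validate tool arguments before the call and to whitelist fields from the external response before it re-enters the conversation; the specific limits and field names below are assumptions for illustration.

```python
ALLOWED_FIELDS = {"temperature", "windspeed", "weathercode", "time"}


def validate_args(args: dict) -> dict:
    """Reject malformed tool arguments before any external call is made."""
    lat, lon = float(args["latitude"]), float(args["longitude"])
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        raise ValueError("coordinates out of range")
    return {"latitude": lat, "longitude": lon}


def sanitize_result(raw: dict) -> dict:
    """Keep only expected fields so untrusted API output cannot smuggle extra text back to the model."""
    return {k: raw[k] for k in ALLOWED_FIELDS if k in raw}
```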

Tradeoffs and Implementation Challenges

One tradeoff is latency: adding an execution step introduces network and processing delays compared with purely generative responses. However, developers gain traceability and security, which often matter more in production settings. Another tradeoff involves complexity; defining tools, handling errors, and testing end-to-end interactions require more engineering work than simple prompt-based systems. Consequently, teams must weigh the value of control against the cost of added infrastructure.

There are also subtle modeling challenges, including ensuring the model chooses tools when appropriate and avoids overusing them for trivial answers. Moreover, permissioning and role assignment in the platform can become intricate when multiple services and teams are involved. Thus, organizations should plan for logs, monitoring, and clear governance to manage those risks. In short, tool calling shifts responsibility back to developers while imposing operational requirements.

Practical Tips and Takeaways

For developers adopting this pattern, the video recommends clear function signatures, strong input validation, and explicit system messages that instruct the model when to call tools. It also stresses local testing and staged rollouts to catch edge cases before wide release. Additionally, maintain thorough logging so each requested tool call and its result are auditable, which aids debugging and compliance. These practices help balance flexibility with safety in production systems.
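
A small wrapper along the following lines keeps every requested call and its result auditable; the logger setup and record shape are illustrative choices, not something the video prescribes.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("tool_calls")


def run_tool(name: str, func, args: dict):
    """Execute a requested tool call and log the request, outcome, and duration."""
    start = time.monotonic()
    try:
        result = func(**args)
        logger.info(json.dumps({"tool": name, "args": args, "ok": True,
                                "elapsed_s": round(time.monotonic() - start, 3)}))
        return result
    except Exception:
        logger.exception("tool %s failed with args %s", name, args)
        raise
```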

Finally, Andrew’s walkthrough makes a persuasive case that tool calling via Python in a managed environment like Microsoft Foundry suits teams needing deterministic, auditable interactions between LLMs and external APIs. While the approach requires more engineering effort, it delivers clearer control and integrates easily into existing security and operational workflows. Therefore, developers should evaluate the tradeoffs and start with small, well-instrumented tools before expanding their agent’s capabilities.


Keywords

LLM tool integration Python, call tool from LLM Python, Microsoft Foundry Pro-Code tutorial, invoke external tools in LLM Python, tool invocation in language models, Python prompt engineering for tools, LLM tool chaining example, production-ready LLM tool calls