
Pragmatic Works published a YouTube video demonstrating how to make Microsoft Power BI's conversational assistant more consistent using Prep Data for AI and, specifically, AI Instructions. In the clip, Justin walks viewers through a side-by-side comparison that highlights the difference between an ungoverned model and one enriched with clear guidance. The video emphasizes that inconsistent Copilot answers are often a matter of interpretation rather than a bug, and that authors can shape those interpretations with explicit rules, framing governance as a practical way to increase trust in AI-driven insights.
The video starts with a baseline demonstration of Copilot interacting with an unprepped report where terms like revenue and growth are ambiguous. In that scenario, Copilot frequently asks follow-up questions or invents its own calculations to fill gaps, which can confuse business users. Justin then switches to a model with AI Instructions and shows how verified answers and explicit rules reduce unnecessary interpretation, making a clear case for adding context directly into the semantic model.
Justin explains that AI Instructions act as "rules of the road" for Copilot by encoding business definitions and priorities inside the semantic model. For example, you can instruct Copilot to always treat revenue as the model's Total Sales field and to prefer an official year-over-year growth measure instead of inventing new math. After applying these instructions and restarting the chat pane, Copilot responds more consistently and aligns with validated business logic. This connection between model-level guidance and conversational output is the core benefit highlighted in the video.
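To make the "rules of the road" idea concrete, instructions of this kind are written as short natural-language statements attached to the semantic model. The sketch below follows the two examples the video gives (revenue mapped to Total Sales, and a preferred year-over-year measure); the bracketed measure names are placeholders, not necessarily the names used in the demo model:

```
When the user asks about "revenue", always use the [Total Sales] measure.
For growth questions, use the model's [YoY Growth] measure; do not invent
a new year-over-year calculation.
If a question does not match a defined measure, ask for clarification
instead of guessing.
```

Statements like these are what Copilot reads before interpreting a question, which is why the restarted chat in the demo stops improvising its own math.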
The tutorial shows authors how to access the feature from both Power BI Desktop and the online service: Justin opens the Prep Data for AI dialog and chooses Add AI Instructions. He recommends writing short, actionable statements for terms such as revenue, customers, and growth, along with ranking rules for "top performing" items. After applying the changes, he restarts the Copilot chat to test the results and confirms that the assistant uses the model's verified answers and instructions. Finally, Justin points out where these instructions live inside the service's semantic model so teams can maintain them centrally.
While adding AI Instructions improves consistency, the video implicitly acknowledges tradeoffs such as increased maintenance and reduced exploratory flexibility. Tightly governed instructions can limit Copilot's ability to offer creative or unexpected insights, which may be valuable in discovery scenarios, and authors must keep instructions up to date as definitions and measures evolve, creating ongoing governance work. Teams therefore need to balance authoritative guidance with room for adaptive, investigative queries.
Justin suggests keeping instructions concise and focused on common, high-impact terms, which reduces ambiguity without overwhelming the model with rules. He also demonstrates the benefit of pairing verified answers with model-level instructions so Copilot has both precise, validated outputs and general context to fall back on. Teams should test changes by restarting the chat and validating responses before publishing reports broadly; pragmatic governance that combines verification with clear instructions is likely to yield the most reliable conversational analytics for business users.
In conclusion, the Pragmatic Works video offers a practical guide for Power BI authors who want to make conversational AI more trustworthy by shaping its interpretation. Using Prep Data for AI to add AI Instructions, organizations can reduce ambiguous responses and align Copilot outputs with official measures and business logic, though the approach requires careful maintenance and thoughtful tradeoffs between control and flexibility. The demonstration provides an actionable path to smarter, more consistent AI-driven reporting.