
In a recent YouTube presentation, Pragmatic Works trainer Alison Gonzalez demonstrates how Measure Killer can help tidy up bloated Power BI models. She shows the tool scanning a report to flag which measures and columns are actually used versus those that are unused or referenced only by other unused items. The demo gives viewers a clear sense of the risks of removing DAX artifacts without understanding their dependencies, and it frames the tool as a way to reduce accidental breakage while improving model maintainability.
Measure Killer runs from the External Tools menu in Power BI Desktop and performs a model-level analysis that reports usage status for measures, columns, and other artifacts. The tool categorizes objects as used, unused, or in a special state the video calls "used by unused", meaning an item referenced only by artifacts that are themselves unused; this surfaces indirect dependencies that might otherwise be overlooked. It then presents a summary with percentages, artifact counts, and details of where items appear across report pages and visuals. The demo also highlights the tool's icons and accompanying explanations, which help users interpret the results confidently.
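The "used by unused" category can be understood as a reachability test over the model's dependency graph. The following is a minimal, hypothetical Python sketch of that idea (not Measure Killer's actual logic): an artifact counts as used if some visual reaches it through the chain of measure references, and as "used by unused" if it is referenced only by artifacts that are themselves unused.

```python
def classify_artifacts(references, used_in_visuals):
    """Classify artifacts as 'used', 'unused', or 'used by unused'.

    references: maps each artifact to the artifacts it depends on.
    used_in_visuals: artifacts that appear directly in report visuals.
    """
    # An artifact is "used" if a visual reaches it through the reference chain.
    used = set()
    stack = list(used_in_visuals)
    while stack:
        item = stack.pop()
        if item in used:
            continue
        used.add(item)
        stack.extend(references.get(item, []))

    # Anything referenced at all, but never reached from a visual, can only
    # be referenced by unused artifacts: that is the "used by unused" state.
    referenced = {dep for deps in references.values() for dep in deps}
    status = {}
    for item in references:
        if item in used:
            status[item] = "used"
        elif item in referenced:
            status[item] = "used by unused"
        else:
            status[item] = "unused"
    return status

# Illustrative example: "Helper Measure" is referenced only by the
# unused "Old KPI", so it lands in the "used by unused" bucket.
status = classify_artifacts(
    {"Total Sales": [], "Sales YoY": ["Total Sales"],
     "Old KPI": ["Helper Measure"], "Helper Measure": []},
    used_in_visuals=["Sales YoY"],
)
```

A sketch like this also shows why the category matters for cleanup: deleting "Old KPI" first would promote "Helper Measure" to plainly unused on the next scan.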
Alison walks through the typical cleanup workflow: launching the tool, choosing to analyze a single model and report, reviewing the results, exporting findings to a spreadsheet, and re-running the scan after cleanup to validate the changes. She emphasizes exporting to Excel as a simple way to document decisions and share a proposed cleanup plan with colleagues. After the initial scan, the video shows how quick wins, such as removing clearly unused measures, reduce clutter without changing report visuals. Finally, she re-analyzes the file to confirm that no dependencies were accidentally broken and that the model is leaner.
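Once the findings are exported to a tabular file, the review step can be scripted. This is a hedged sketch only: the "Artifact" and "Status" column names are illustrative assumptions, not Measure Killer's actual export schema, so adjust them to match the real spreadsheet.

```python
import csv

def cleanup_candidates(lines):
    """Return the artifacts flagged unused in an exported findings file.

    `lines` is any iterable of CSV lines, e.g. an open file handle.
    The "Artifact" and "Status" column names are assumptions made for
    illustration; match them to the columns in the real export.
    """
    return [row["Artifact"]
            for row in csv.DictReader(lines)
            if row["Status"].strip().lower() == "unused"]

# Example with inline data; in practice this might read the Excel
# findings saved as CSV, e.g. cleanup_candidates(open("findings.csv")).
plan = cleanup_candidates([
    "Artifact,Status",
    "Old KPI,unused",
    "Total Sales,used",
])
```

Filtering the export this way yields a shareable deletion list that colleagues can sign off on before anything is removed from the model.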
The demonstration also contrasts desktop-level usage with enterprise scenarios, noting that more advanced tenant-level features exist for broader governance. These include scanning across workspaces and checking lineage and security settings, which is valuable for organizations that manage many datasets and reports. The presenter makes clear, however, that some of those features sit behind paid tiers, so teams must weigh budget against the scale of their governance needs and decide whether a desktop-first cleanup or an enterprise-wide program better fits their situation.
Cleaning a semantic model involves balancing speed, safety, and completeness, and the video addresses each of these tradeoffs. Automated scans speed up discovery and can remove obvious dead measures quickly, but automation can miss business logic held in unexpected places, such as bookmarks, hidden pages, or external report references. The tool's dependency mapping, especially the identification of "used by unused" relationships, helps reduce false positives, but it does not fully replace human review: teams still need careful testing and versioned backups to mitigate the risk of disrupting reports.
Another challenge discussed is the difference between deleting measures and deleting calculated columns: measures can often be removed without a full model refresh, whereas column deletion may require edits to Power Query and can force recomputation. The tradeoff is therefore operational: removing columns can reduce memory usage but may demand more development time and testing. For large organizations, tenant-wide scans provide governance benefits but raise questions about scanning frequency, performance impact, and the complexity of interpreting results at scale, so decision-makers must balance governance automation against resource constraints and change-management overhead.
The video closes with practical advice on when and how to run cleanups, which aligns with typical best practices for BI lifecycle management. Alison recommends running audits before major publication or deployment events, and then scheduling periodic checks to prevent accumulation of redundant artifacts. She also suggests exporting findings to a shareable format so that stakeholders can review proposed deletions, which improves transparency and reduces the chance of accidental breakage. Moreover, re-scanning after making changes is shown as a necessary validation step to confirm that the cleanup had the intended effect.
For teams deciding whether to adopt the tool, the presenter frames it as especially useful when inheriting models or managing many reports, since those situations tend to produce technical debt quickly. She cautions, however, that adoption should be paired with governance policies and testing routines so that cleanup becomes part of a sustainable workflow rather than a one-off fix. In short, the video positions Measure Killer as a practical aid for model hygiene while urging teams to complement it with human oversight and change control.
Overall, the tutorial by Pragmatic Works offers a clear, actionable look at how Measure Killer identifies and removes dead artifacts from Power BI models. It balances a demonstration of features with sensible warnings about dependency risks and the need for testing, and it shows how the tool fits into both desktop and enterprise workflows. Analysts and BI teams can see how the tool may reduce model bloat and improve maintainability, provided they pair it with careful review and governance practices, and they should weigh the benefits of faster cleanup against the costs of deeper testing and potential licensing for tenant-level capabilities.
Keywords: Measure Killer Power BI, Power BI measure cleanup tool, remove unused measures Power BI, Power BI measure management, Power BI performance optimization, optimize DAX measures Power BI, Measure Killer tutorial Power BI, Measure Killer review Power BI