Founder | CEO @ RADACAD | Coach | Power BI Consultant | Author | Speaker | Regional Director | MVP
Reza Rad of RADACAD, a Microsoft MVP, recently published a YouTube video that examines the best way to create a calculated column in Power BI; this article summarizes his key points. The video contrasts two main techniques: creating a column with Power Query's Add Custom Column feature and creating a calculated column with DAX. It also explains why, in many situations, a custom column in Power Query is the more efficient choice.
Moreover, the video clarifies when each approach fits different workflows and highlights performance and maintenance considerations. Consequently, the guidance helps report developers decide whether to shape data before loading or to add logic inside the model. This report distills the practical advice while pointing out tradeoffs and implementation challenges.
First, the video outlines the technical distinction: a custom column built in Power Query is written in the M language and is evaluated while the data is imported, whereas a calculated column is written in DAX and is computed after the data loads, as part of model processing. Custom columns therefore become part of the transformed table before the model stores it; calculated columns, by contrast, persist in model storage and can increase the memory footprint.
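To make the distinction concrete, here is the same derived column sketched both ways. The table and column names (Customers, FirstName, LastName) are illustrative assumptions, not taken from the video.

```
// Power Query (M): an Add Custom Column step, evaluated during import,
// so the concatenated value is already part of the table the model loads
= Table.AddColumn(Customers, "FullName",
    each [FirstName] & " " & [LastName], type text)
```

```
-- DAX calculated column: computed after the data loads,
-- then stored (and compressed) inside the model
FullName = Customers[FirstName] & " " & Customers[LastName]
```

Both produce the same physical column, but only the DAX version adds storage and processing cost inside the model itself.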
Additionally, Reza notes that calculated columns can reference other tables through model relationships, which makes them useful for calculations that need model context. By contrast, custom columns operate only within the current query and are ideal for row-by-row transformations such as text concatenation and basic data cleaning. The choice therefore often depends on whether you need relational context or simply want to pre-process raw data.
The video stresses several scenarios where using a Custom Column is preferable, and this guidance is practical for everyday report authors. For instance, when you perform simple row-level transformations, remove or combine text fields, or normalize data prior to loading, Power Query is faster and reduces model size. Consequently, dataset refreshes usually run quicker because fewer calculated fields inflate the model.
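As a minimal sketch of that kind of pre-load cleanup, the following Power Query step trims and normalizes a text field before it ever reaches the model; the step name Source and the City column are assumed for illustration.

```
// Power Query (M): normalize a text field during import,
// so the model never stores the messy original values
= Table.TransformColumns(
    Source,
    {{"City", each Text.Proper(Text.Trim(_)), type text}}
)
```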
Furthermore, Reza recommends Power Query for predictable transformations that do not depend on filter context or relationships. Because those columns are finalized during the ETL stage, their values stay fixed regardless of visuals and slicers; if a value must react to user selections, a measure is needed instead. For common cleansing tasks, keeping the logic in Power Query simplifies maintenance and reduces long-term complexity.
On the other hand, the video acknowledges that Calculated Columns and measures still have clear use cases, particularly when calculations require relationships or dynamic behavior. For example, if a new column must refer to related tables or be used as a slicer or row label, a calculated column inside the model may be necessary. Likewise, complex aggregations and analytics should usually be implemented as measures, since measures evaluate dynamically and avoid inflating the model with stored values.
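A hedged sketch of both cases follows; the Sales and Product tables, their relationship, and the column names are assumptions for illustration only.

```
-- DAX calculated column pulling a value across a relationship
-- (each Sales row looks up its related Product row);
-- usable afterwards as a slicer or row label
ProductCategory = RELATED ( Product[Category] )

-- A complex aggregation implemented as a measure rather than a stored
-- column: evaluated dynamically, nothing added to model storage
Total Margin =
    SUMX ( Sales, Sales[Quantity] * ( Sales[UnitPrice] - Sales[UnitCost] ) )
```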
Reza also points out that developers should default to measures where possible, because measures are evaluated at query time and store nothing, keeping the model lean. However, when a column must exist as a physical field for visuals, slicers, or export, a calculated column becomes unavoidable. Balancing storage, interactivity, and usability therefore drives the decision between DAX columns and measures.
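To illustrate the "default to measures" advice, this pair contrasts a stored per-row column with a measure that computes the equivalent total only on demand; the Sales table and its columns are, again, hypothetical.

```
-- Stored per-row value: occupies model memory for every row of Sales
LineAmount = Sales[Quantity] * Sales[UnitPrice]

-- Preferred where possible: evaluated in the current filter context
-- at query time, with no stored values in the model
Total Sales Amount =
    SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] )
```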
The video highlights performance tradeoffs and offers best practices, and these recommendations are actionable for model builders. In particular, minimizing the number and complexity of stored calculated columns reduces model size and speeds refresh cycles, whereas pushing transformations to Power Query often yields better performance. Therefore, teams should audit columns regularly and migrate simple transformations to Power Query when feasible.
Moreover, Reza recommends using profiling tools and monitoring refresh times to identify bottlenecks, and he encourages developers to weigh maintainability against immediate convenience. While Power Query is efficient for ETL, it can complicate refresh logic when applied inconsistently across multiple queries, so governance matters. Ultimately, the tradeoff involves choosing the path that balances refresh performance, model size, and the need for relational context.
Finally, the video candidly addresses common challenges that teams face when choosing between approaches, and it provides pragmatic steps to manage them. For example, version control and collaboration can become harder when many transformations live in Power Query, and debugging complex M or DAX logic demands different skills from team members. Therefore, standardizing patterns and documenting rationale for column placement helps reduce friction across projects.
In conclusion, Reza Rad’s video delivers clear guidance: prefer Power Query Custom Columns for simple row-level work, use Calculated Columns when model relationships require them, and favor measures for dynamic calculations. By balancing performance, maintainability, and functionality, report teams can design cleaner, faster Power BI solutions that meet both business and technical needs.