
Founder | CEO @ RADACAD | Coach | Power BI Consultant | Author | Speaker | Regional Director | MVP
In a concise YouTube presentation, Reza Rad of RADACAD, a Microsoft MVP, demonstrates how to extract structured data from a custom name-value field and prepare it for analysis in Power BI. The video focuses on a straightforward Power Query transformation, Pivot Column, which turns name-value pairs into table columns. As a result, data that arrives in a compact, semi-structured form becomes usable in reports and models. This article summarizes the method, key takeaways, and practical tradeoffs for BI practitioners.
First, Power Query lets you select the column that holds the names and the column that holds the values, and then pivot the names into distinct columns. Consequently, each name becomes a header and its corresponding values populate rows under that column, which turns scattered custom fields into a relational layout. The transformation typically requires choosing an aggregation method if multiple values map to the same pivot intersection, and the video shows simple choices to ensure predictable output. Therefore, the technique helps convert data from systems like CRM or ERP into a more analysis-ready schema.
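Conceptually, the pivot takes long-form (record, name, value) rows and turns each distinct name into a column, with an aggregation resolving collisions. A minimal Python sketch of that reshaping, outside Power Query and purely to illustrate the mechanics (the record layout and the "take the first value" aggregation are assumptions, not from the video):

```python
from collections import defaultdict

# Long-form rows: (record id, field name, field value), as a CRM might export them.
rows = [
    (1, "Industry", "Retail"),
    (1, "Region", "West"),
    (2, "Industry", "Finance"),
    (2, "Region", "East"),
    (2, "Region", "North"),  # two values at the same (id, name) intersection
]

def pivot(rows, aggregate=lambda values: values[0]):
    """Turn name/value pairs into one wide row per record id.

    `aggregate` resolves collisions when several values share the same
    (id, name) intersection -- the same decision Power Query's pivot
    step asks you to make.
    """
    cells = defaultdict(list)
    names = []
    for rid, name, value in rows:
        cells[(rid, name)].append(value)
        if name not in names:
            names.append(name)
    ids = sorted({rid for rid, _, _ in rows})
    return [
        {"id": rid,
         **{n: aggregate(cells[(rid, n)]) if (rid, n) in cells else None
            for n in names}}
        for rid in ids
    ]

wide = pivot(rows)
# Each distinct name ("Industry", "Region") is now its own column.
```

The `aggregate` parameter mirrors the aggregation choice in Power Query's pivot dialog; swapping in `max` or a count function changes how duplicate intersections are resolved.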
Reza Rad walks viewers through connecting to the source, removing unwanted columns, and selecting the key name and value fields before applying the pivot operation. He then emphasizes the importance of setting the proper aggregation and data types so that numbers remain numeric and dates remain dates after the pivot. Moreover, he points out that the same steps work across tools: in Power BI Desktop, in Dataflows, in Microsoft Fabric Data Factory, and even in Excel. As a result, teams can standardize the approach across deployment targets to ensure consistent models in warehouses and reports.
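The re-typing step matters because pivoted custom-field values typically arrive as text. A small sketch of applying per-column converters after the pivot (the column names and converter map are hypothetical, chosen only for illustration):

```python
from datetime import date

# Hypothetical per-column converters: re-type pivoted text so numbers
# stay numeric and dates stay dates, echoing the "set data types" step.
converters = {
    "AnnualRevenue": float,
    "RenewalDate": date.fromisoformat,
}

def retype(record, converters):
    """Apply each column's converter, leaving missing values as None."""
    out = dict(record)
    for column, convert in converters.items():
        if out.get(column) is not None:
            out[column] = convert(out[column])
    return out

row = {"id": 7, "AnnualRevenue": "1250.5", "RenewalDate": "2024-06-01"}
typed = retype(row, converters)
```

Doing this explicitly, rather than relying on automatic type detection, keeps refreshes predictable when the source adds or drops custom fields.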
While pivoting simplifies analysis, it can also introduce sparsity when a dataset contains many unique names, which increases the column count and can bloat models. For instance, pivoting a large set of diverse custom fields may hurt performance and complicate downstream relationships, so you must balance convenience with model efficiency. Alternatively, keeping the data in a normalized long form preserves flexibility but adds the burden of extra joins or DAX logic for reporting. Consequently, teams must weigh the tradeoff between a wide, flat model that favors ease of visual creation and a long model that better supports scalability and storage efficiency.
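The sparsity cost can be made concrete with a back-of-the-envelope count: when each record fills only a few of the many distinct names, most cells in the wide table are empty, while the long form stores only the populated pairs. A quick sketch with purely illustrative numbers (not from the video):

```python
# Illustrative sizing: each record populates only a handful of the
# possible custom fields, so the pivoted (wide) table is mostly empty.
records = 10_000
distinct_names = 200      # unique custom-field names in the source
fields_per_record = 8     # names actually populated per record

wide_cells = records * distinct_names       # cells the wide table must hold
filled_cells = records * fields_per_record  # cells that carry real data
sparsity = 1 - filled_cells / wide_cells    # fraction of empty cells

# The long (normalized) form stores only the filled pairs as rows.
long_rows = filled_cells
```

With these assumed numbers the wide table is 96% empty cells, which is the kind of bloat the tradeoff in the paragraph above is warning about.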
One common challenge is inconsistent naming in the source field, which leads to multiple near-duplicate columns after pivoting; therefore, standardizing keys before pivoting is essential. Additionally, the presence of nulls and mixed data types requires careful cleaning so that the pivot step does not force everything into text. Reza suggests simple preprocessing—such as trimming, case normalization, and type conversion—to reduce errors and speed up the pivot operation. In practice, these small steps often prevent the need to rebuild transformations later.
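The key-standardization idea can be sketched in a few lines: normalize each field name before pivoting so casing and whitespace variants collapse into one column rather than several near-duplicates (the normalization rules here are one reasonable choice, not prescribed by the video):

```python
def normalize_key(name: str) -> str:
    """Standardize field names so 'Region ', 'region' and 'REGION'
    pivot into one column instead of three near-duplicates."""
    return name.strip().lower().replace(" ", "_")

raw_names = ["Region ", "region", "REGION", "Annual Revenue"]
normalized = [normalize_key(n) for n in raw_names]
```

In Power Query the equivalent cleanup is a transform on the name column before the pivot step; running it first means the pivot produces a stable, predictable set of columns on every refresh.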
Adopting a consistent pivot approach streamlines data import and improves report reliability, and it also supports reuse when implemented in shared Dataflow or Microsoft Fabric artifacts. However, governance matters: naming conventions, change control, and refresh strategies must align across teams to prevent breaking published reports. Moreover, when pivoting at scale, consider moving the transformation into the data warehouse layer if possible, because doing heavy transformation at report time can impact performance. Ultimately, the method shown in the video offers a practical balance between quick wins for analysts and the longer-term needs of enterprise data architecture.
 