Maximizing Optimization in PowerBI: Overcoming Frustration
Power BI
Sep 28, 2023 7:30 AM

by HubSite 365 about Marc Lelijveld (Data – Marc) [MVP]

Data Platform MVP | FastTrack Recognized Solution Architect | Microsoft Certified Trainer | Public Speaker | Data enthusiast | Solution Architect Data & Analytics

Data Analytics | Power BI | M365 Hot News

Unlock Power BI's full potential with expert insights on optimizing data models for better performance, simplified processing, and improved user experience.

In this blog post, the author, MVP Marc Lelijveld, shares his experiences dealing with performance issues in Power BI data models. The problems frequently involve failed refreshes, slow queries, or models that fall short of user expectations. He describes his standard practice of asking a set of questions before starting any optimization work.

Lelijveld goes on to describe a common complaint among Power BI users: poorly performing refreshes or underperforming Power BI solutions. Clients often question their choice of Power BI, doubting whether it can handle their requirements. In his experience, Power BI falls short of expectations for a handful of recurring reasons, listed below.

  • Dataset refreshes that fail
  • Datasets that grow unexpectedly large
  • Pressure to switch to DirectQuery because the Power BI data model has become too big
  • Poorly performing reports

In one client scenario, the initial approach was to create domain-oriented datasets in Power BI. Rushing the build of those datasets, however, caused problems: they grew significantly in size, with one exceeding 40 GB while holding just a year's worth of historical data.

The author stresses the importance of alignment between data engineers and Power BI developers at the start of such a migration. A design decision was made to keep all logic out of the Power BI data models and push it down to the data platform instead. However, because the migration was rushed and the teams were misaligned, the data engineers could not always optimize the resulting views for Power BI use.

Product owners also played a crucial part in the challenge. Facing a hard deadline to migrate to Power BI and shut down the old systems, teams had to build, validate, and get solutions signed off by the business quickly. As various solutions were merged, the datasets expanded rapidly, and optimization took a back seat while more columns, more tables, and ever wider tables were continually added.

Business users kept proposing new requirements that had to be added to the solutions. As a result, the dataset's granularity became extremely fine: columns ended up holding many unique values, which undermines Power BI's columnar compression and makes tables grow. A quick cardinality check, as sketched below, is a good way to find the offending columns.
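
As a minimal, hypothetical illustration (the file name and columns are assumptions, not details from the original post), a pandas script can audit which columns of a source extract hold so many distinct values that they will compress poorly once imported into Power BI:

```python
import pandas as pd

# Hypothetical extract of the source view that feeds the Power BI dataset.
df = pd.read_csv("sales_order_lines.csv")

# Count distinct values per column. High-cardinality columns (order IDs,
# timestamps, free-text notes) compress poorly in a columnar engine like
# Power BI's and are the usual suspects when a model balloons in size.
cardinality = df.nunique().sort_values(ascending=False)
print(cardinality.head(10))

# Rough heuristic: flag columns where almost every row is unique. These
# are candidates to drop, round (e.g. strip time from datetimes), or split.
suspects = cardinality[cardinality > 0.9 * len(df)]
print("Columns to reconsider:", list(suspects.index))
```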

The post then moves on to optimization itself. Lelijveld suggests several questions everyone should ask when building a Power BI dataset. In the client's solution, the refresh process was a stumbling block: the large datasets all kicked off their refreshes concurrently, putting a significant load on the data platform. He mentions using orchestration, specifically Azure Data Factory, to optimize the refreshes; the sketch after this paragraph illustrates the underlying idea.
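
The blog post uses Azure Data Factory for the orchestration; as a rough sketch of the same idea, the Power BI REST API can be called directly to run refreshes one after another instead of all at once. The workspace and dataset IDs and the token acquisition below are placeholders, not details from the original post:

```python
import time
import requests

# Hypothetical placeholders: in practice the token would come from a
# service principal (e.g. via MSAL) with permission on the workspace.
TOKEN = "<aad-access-token>"
GROUP_ID = "<workspace-id>"
DATASETS = ["<dataset-id-1>", "<dataset-id-2>"]  # desired refresh order

HEADERS = {"Authorization": f"Bearer {TOKEN}"}
BASE = f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/datasets"

def latest_refresh_status(dataset_id: str) -> str:
    # The refresh-history endpoint lists the most recent refresh first;
    # a status of "Unknown" means the refresh is still in progress.
    r = requests.get(f"{BASE}/{dataset_id}/refreshes?$top=1", headers=HEADERS)
    r.raise_for_status()
    return r.json()["value"][0]["status"]

for dataset_id in DATASETS:
    # Trigger the refresh, then wait for it to finish before starting the
    # next one, so the data platform never serves two full loads at once.
    requests.post(f"{BASE}/{dataset_id}/refreshes", headers=HEADERS).raise_for_status()
    while latest_refresh_status(dataset_id) == "Unknown":
        time.sleep(60)
```

In Azure Data Factory, the equivalent pattern is a chain of Web activities calling these same endpoints.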

As the discussion becomes more detailed, Lelijveld introduces the benefits of user-defined aggregations. Although these are often dismissed because of the anticipated maintenance burden, he suggests that automatic aggregations can be a viable alternative, offering the same efficiency without having to keep adjusting aggregation levels by hand. The sketch after this paragraph shows why an aggregation table pays off.
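
To make the size argument concrete, here is a small, hypothetical pandas illustration. In Power BI itself the aggregation table is defined in the model (via Manage aggregations); the file and column names here are assumptions:

```python
import pandas as pd

# Hypothetical order-line detail, the lowest grain in the model.
lines = pd.read_csv("sales_order_lines.csv")

# Summarize the detail to one row per product per day: the shape a
# user-defined aggregation table would take in the Power BI model.
agg = (lines.groupby(["OrderDate", "ProductKey"], as_index=False)
            .agg(SalesAmount=("SalesAmount", "sum"),
                 Quantity=("Quantity", "sum")))

# Millions of order lines typically collapse into a far smaller table,
# which can answer most visuals without ever touching the detail rows.
print(f"{len(lines):,} detail rows -> {len(agg):,} aggregated rows")
```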

Towards the end, the discussion turns to the high number of tables in the dataset. Questioning why some tables were not appended together or modeled into dimensions, he underlines the goal of a proper star schema, which improves Power BI performance and makes the data easier to understand.

General Considerations

The blog post provides valuable insights into dealing with issues clients often face when handling data models in Power BI. Proper consideration of data size, refresh processes, granularity, dataset optimization, and leveraging features such as aggregations could significantly enhance the performance of Power BI models.

Read the full article PowerBI: From Frustration to Optimization

Learn about PowerBI: From Frustration to Optimization

The recent growth of data analysis and business intelligence (BI) has increased the importance of tools such as Power BI for drawing meaningful insights from large datasets. Power BI can transform raw data into insightful visualizations; however, challenges such as slow queries, failed refreshes, or general underperformance often arise. Learning how to optimize Power BI's performance will help you understand the tool better and extract the most from it.

Firstly, understanding the characteristics and demands of Power BI is crucial. The data model, the size of the dataset, and the frequency of refreshes all play a significant role in performance. When trying to speed up the refresh process, the time of day and the configuration of the refresh can make a dramatic difference. Learning about data engineering and the role of the product owner also helps with optimization. The refresh schedule can be set in the Power BI service or programmatically, as sketched after this paragraph.
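
As a hedged example of the programmatic route (the IDs and token are placeholders), the Power BI REST API's update-refresh-schedule operation can move a scheduled refresh to a quiet hour:

```python
import requests

# Hypothetical IDs and token; the endpoint and body shape follow the
# Power BI REST API "Datasets - Update Refresh Schedule In Group" call.
TOKEN = "<aad-access-token>"
URL = ("https://api.powerbi.com/v1.0/myorg/groups/"
       "<workspace-id>/datasets/<dataset-id>/refreshSchedule")

# Refresh early in the morning so the load does not compete with other
# jobs on the data platform during business hours.
body = {
    "value": {
        "days": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
        "times": ["05:00"],
        "localTimeZoneId": "UTC",
        "notifyOption": "MailOnFailure",
    }
}

r = requests.patch(URL, headers={"Authorization": f"Bearer {TOKEN}"}, json=body)
r.raise_for_status()
```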

Furthermore, optimization techniques are important: choosing the right granularity of the data, configuring user-defined or automatic aggregations, and questioning whether every table and column in your dataset is truly needed. In this scenario, the data was kept at sales order line level, which is highly detailed and granular. That granularity led to a larger dataset, so addressing the level of detail in the data should be a priority for optimization.

Moreover, learning about orchestration tools like Azure Data Factory and the enhanced refresh API can make the refresh process more efficient. Triggering individual tables in your Power BI dataset to refresh as soon as they are ready on the data platform can significantly shorten the overall processing time; the sketch after this paragraph shows what such a call looks like.
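
A minimal sketch of a table-level enhanced refresh, assuming placeholder IDs, token, and hypothetical table names (the endpoint and body shape follow the documented enhanced refresh REST API):

```python
import requests

TOKEN = "<aad-access-token>"
URL = ("https://api.powerbi.com/v1.0/myorg/groups/"
       "<workspace-id>/datasets/<dataset-id>/refreshes")

# Refresh only the tables that have just landed on the data platform,
# instead of reprocessing the whole model.
body = {
    "type": "full",
    "commitMode": "transactional",
    "objects": [
        {"table": "Sales"},
        {"table": "Customer"},
    ],
}

r = requests.post(URL, headers={"Authorization": f"Bearer {TOKEN}"}, json=body)
r.raise_for_status()
# Enhanced refresh is asynchronous: the response's Location header points
# to the refresh operation, which can be polled for completion.
print("Refresh accepted:", r.headers.get("Location"))
```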

Lastly, understanding star schema design and the distinction between fact and dimension tables is beneficial. These techniques help you avoid big, wide tables and keep the data model understandable; a minimal sketch of such a split follows.
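
Purely as an illustration, with a hypothetical wide export and made-up column names, a flat table can be split into a fact table plus a dimension before it ever reaches Power BI:

```python
import pandas as pd

# Hypothetical single wide export table.
wide = pd.read_csv("wide_sales_export.csv")

# Build a Product dimension: one row per distinct product, with a
# surrogate key to relate it back to the fact table.
dim_product = (wide[["ProductName", "Category", "Brand"]]
               .drop_duplicates()
               .reset_index(drop=True))
dim_product["ProductKey"] = dim_product.index

# The fact table keeps only the measures and the dimension keys, removing
# the repeated descriptive text that bloats wide tables.
fact_sales = (wide.merge(dim_product, on=["ProductName", "Category", "Brand"])
                  [["OrderDate", "ProductKey", "Quantity", "SalesAmount"]])
```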

To learn more about these topics, the following training courses might be useful:

  • Power BI Courses on Microsoft Learn: Offers a wide range of courses from beginner to advanced covering topics such as data modeling and data transformation.
  • LinkedIn Learning's "Power BI Essential Training": This course covers several topics including how to work with the Power BI service, Power BI mobile apps, and Power BI desktop.
  • Power BI courses on edX or Coursera: These platforms offer comprehensive Power BI classes from basic concepts to advanced techniques suitable for different proficiency levels.

This blog post should spark curiosity about Power BI and its capabilities. In summary, learning about Power BI, its data model, data engineering, product ownership, data granularity, and aggregations will all pay off. By applying these techniques and working through the recommended training courses, optimizing Power BI should become a far more intuitive process.

More links about PowerBI: From Frustration to Optimization

  • From Frustration to Optimization: A Journey through Power ...
    Feb 24, 2023 — In this blog post, I will delve into the characteristics of each scenario, typical findings, and where to start optimizing. I will also share my ...
  • Marc Lelijveld's Post
    From Frustration to Optimization: A Journey through Power BI Data Model Design.
  • Reid Havens - From Frustration to Optimization
    From Frustration to Optimization: A Journey through Power BI Data Model Design.

Keywords

PowerBI optimization, PowerBI frustration solutions, PowerBI tips, optimize PowerBI, solve PowerBI frustration, improve PowerBI use, mastering PowerBI, PowerBI training, PowerBI insights, PowerBI efficiency.