The article introduces Data Pipelines in Microsoft Fabric Data Factory, a tool that complements Dataflows in Fabric's analytics stack. A Data Pipeline is a group of activities bundled together into a workflow, giving fine-grained control over data processing. It can automate tasks such as running a Dataflow in a loop until a condition is met, sending out emails upon completion or failure, copying data, and running stored procedures.
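Under the hood, a Fabric Data Pipeline is stored as a JSON definition much like an Azure Data Factory pipeline. The sketch below, plain Python building such a JSON document, shows roughly how a "run a Dataflow until a condition is met, email on failure" workflow might be shaped; the pipeline name, activity type names, and property values are illustrative assumptions, not taken from the article.

```python
import json

# Illustrative sketch of a pipeline definition, modeled on the Azure Data
# Factory JSON shape that Fabric Data Pipelines also use. Activity type names
# ("Until", "ExecuteDataflow", "Office365Outlook") and all values here are
# assumptions for illustration only.
pipeline = {
    "name": "LoadSalesPipeline",  # hypothetical pipeline name
    "properties": {
        "activities": [
            {
                "name": "RetryDataflowUntilDone",
                "type": "Until",  # loop until the expression evaluates true
                "typeProperties": {
                    "expression": {
                        # hypothetical: loop until a variable is set to 'Done'
                        "value": "@equals(variables('LoadStatus'), 'Done')",
                        "type": "Expression",
                    },
                    "timeout": "0.02:00:00",  # give up after two hours
                    "activities": [
                        {
                            "name": "RunSalesDataflow",
                            "type": "ExecuteDataflow",  # assumed type name
                            "typeProperties": {
                                "dataflowId": "<dataflow-guid>"  # placeholder
                            },
                        }
                    ],
                },
            },
            {
                "name": "EmailOnFailure",
                "type": "Office365Outlook",  # assumed notification activity
                "dependsOn": [
                    {
                        "activity": "RetryDataflowUntilDone",
                        "dependencyConditions": ["Failed"],  # only on failure
                    }
                ],
            },
        ]
    },
}

print(json.dumps(pipeline, indent=2))
```

In the portal these pieces are assembled visually; the JSON view is simply what that canvas serializes to.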
The author compares the Data Pipeline to SQL Server Integration Services (SSIS) and Power Automate. While it resembles Power Automate, a Data Pipeline is built for Data Factory, which focuses on moving data at scale and ships with activities such as Dataflow Gen2, Delete data, Fabric Notebook, and Fabric Spark job definition.
One of the benefits of Data Pipelines is that they bring the transformational power of Dataflows into Data Factory, combining a scalable data-ingestion tool with Power Query's simplicity for data transformation.
The author clarifies that Dataflows and Data Pipelines are complementary; while Dataflows are used for data transformation, Data Pipelines control the flow of execution and orchestrate ETL jobs.
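This division of labor shows up in the pipeline definition itself: the pipeline only points at a Dataflow and wires activities together with dependencies, while the transformation logic lives inside the Dataflow. A minimal orchestration sketch follows, using the same hypothetical JSON shape as above; activity type names are assumptions modeled on Data Factory conventions.

```python
import json

# Minimal ETL orchestration sketch: the pipeline sequences the work; the
# actual transformation logic lives in the Dataflow it calls. Type names
# ("Copy", "ExecuteDataflow", "SqlServerStoredProcedure") are illustrative
# assumptions, not confirmed by the article.
etl_pipeline = {
    "name": "NightlyEtl",  # hypothetical name
    "properties": {
        "activities": [
            {"name": "IngestRawFiles", "type": "Copy"},
            {
                "name": "TransformWithDataflow",
                "type": "ExecuteDataflow",
                # run only after ingestion succeeds
                "dependsOn": [{"activity": "IngestRawFiles",
                               "dependencyConditions": ["Succeeded"]}],
            },
            {
                "name": "LoadWarehouse",
                "type": "SqlServerStoredProcedure",
                "dependsOn": [{"activity": "TransformWithDataflow",
                               "dependencyConditions": ["Succeeded"]}],
            },
        ]
    },
}

print(json.dumps(etl_pipeline, indent=2))
```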
The article then walks through creating a Data Pipeline, explains the activities involved, and illustrates how a Data Pipeline works with a simple example. The author concludes by noting that Data Pipelines can be scheduled, and their execution monitored, from within the Microsoft Fabric portal.
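Beyond the portal, a pipeline run can also be triggered and watched programmatically. The sketch below assumes the Fabric REST API's job-scheduler endpoints; the endpoint paths, the "Pipeline" jobType value, and the status strings reflect my understanding of the public Fabric API and should be verified against current documentation, and the IDs and token are placeholders.

```python
import time
import requests

# A minimal sketch of triggering and monitoring a Data Pipeline run outside
# the portal, assuming the Fabric REST API's on-demand job endpoints.
BASE = "https://api.fabric.microsoft.com/v1"
WORKSPACE_ID = "<workspace-guid>"      # placeholder
PIPELINE_ID = "<pipeline-item-guid>"   # placeholder
HEADERS = {"Authorization": "Bearer <aad-token>"}  # placeholder token

# Start an on-demand run of the pipeline.
run = requests.post(
    f"{BASE}/workspaces/{WORKSPACE_ID}/items/{PIPELINE_ID}/jobs/instances",
    params={"jobType": "Pipeline"},
    headers=HEADERS,
)
run.raise_for_status()
job_url = run.headers["Location"]  # URL of the new job instance

# Poll the job instance until it reaches a terminal state.
while True:
    status = requests.get(job_url, headers=HEADERS).json()["status"]
    print("pipeline status:", status)
    if status in ("Completed", "Failed", "Cancelled"):
        break
    time.sleep(30)  # modest polling interval
```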
https://radacad.com/getting-started-with-data-pipelines-in-fabric-data-factory