Microsoft Data & AI

All Things Azure

Azure Data Factory (ADFv1) Techniques for Starting a Pipeline On Demand During Development Phase; Pipeline Schedules

4/25/2017

One of the challenges of ADF is getting a pipeline to run on demand.  I am told by Microsoft that the next version of ADF, coming fall 2017, will include this functionality, but right now there is no <Run Now!> button to be found.  Granted, you will find a rerun button in the ADF Monitor Console (orange box below) ...
[Screenshot: the ADF Monitor Console with the rerun button highlighted by an orange box]
... but this button is only enabled for failed pipelines.  What about during development?  Those of us coming from SSIS expect <Execute Now> functionality, but with ADF there is no such thing, because the pipeline is tied to the job schedule.  This is when I remind myself that just because something is different, it doesn't make it wrong.  Okay, so let's work with what we have.

First, an assumption: we do not want to use a custom .NET activity, a PowerShell command, or an Azure job scheduler.  The scenario is development mode, and all we want to do is test whether our new datasets and pipeline produce the desired result.  Second, recall that ADF decides whether an activity needs to run by comparing the "start" property of the pipeline to the last run DateTime of each dataset, and there is no way to remove a dataset's execution history.
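To make that concrete, here is a minimal sketch of an ADFv1 dataset definition (the dataset name, linked service, and folder path are placeholders, not taken from this pipeline).  The "availability" section defines the slices whose execution history ADF keeps for the dataset and compares against the pipeline's active period:

    {
      "name": "OutputBlobDataset",
      "properties": {
        "type": "AzureBlob",
        "linkedServiceName": "AzureStorageLinkedService",
        "typeProperties": {
          "folderPath": "adfoutput/dev-test",
          "format": { "type": "TextFormat" }
        },
        "availability": {
          "frequency": "Day",
          "interval": 1
        }
      }
    }

Because that slice history lives with the dataset and cannot be cleared, the only way to make ADF "forget" it is to delete and recreate the dataset itself, which is exactly what Option #1 below does.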

Consequently, there are two ways to start a non-failed pipeline on demand:

1.  Clone, delete and redeploy the datasets used in the pipeline. 
2.  Change the "start" property of the pipeline to an earlier point in time.

Option #1: Clone, delete and redeploy the datasets
The first thing you will run into is dependencies.  This isn't a big deal if your pipeline has only one set of input/output datasets.  However, when you are working with a pipeline containing a plethora of tables or files, this becomes a time-consuming and outright ridiculous option.
1.  Navigate to the Azure Data Factory Author and Deploy action
2.  Select each input and output dataset used in the pipeline
3.  Click <Clone>
4.  Right-click each dataset you just cloned and click <delete>
5.  From the set of Drafts created in step #3, click <deploy>

Be aware of the "start", "end" and "pipelineMode" pipeline properties before you redeploy.  The pipeline "start" is in UTC and must be the current UTC DateTime or a DateTime in the past, or the pipeline won't start.  If you have cloned a paused pipeline, you will also need to change the "pipelineMode" property to "Scheduled".
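As a rough illustration (the dates are placeholder values), these properties sit near the bottom of the pipeline JSON's "properties" section in the Author and Deploy editor:

    "start": "2017-04-25T00:00:00Z",
    "end": "2017-04-26T00:00:00Z",
    "isPaused": false,
    "pipelineMode": "Scheduled"

Depending on how the pipeline was authored, you may also see the "isPaused" flag shown above; for the slices to run right away it needs to be false, and "start" needs to be at or before the current UTC time.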

As a matter of habit, I clone, delete and redeploy both my datasets and my pipeline to remove all internal run history.  You could change the activity properties of the pipeline and then just clone, delete and redeploy your input datasets, but that seems more complicated to me.  For pipelines that have a limited number of activities, the clone, delete and redeploy goes pretty quickly and produces the desired result: a running pipeline!

Disclaimer: This is a DEV solution.  If you are in PROD you will overwrite existing files and folders!!


Option #2: Change the "start" property of the pipeline

If all your datasets are in step with each other (all have the same execution history), you can also get the activities of a pipeline to run on demand by updating the "start" property of the pipeline to a prior point in time.  This causes ADF to "backfill" the destination, creating new destination files or folders for that earlier point in time.  On a full data load (a pipeline that does not filter on something like WHERE LastModifiedDateTime > [x]), the resulting destination files will duplicate data already loaded for a later point in time, but at the moment we are in DEV and are just trying to test the input, the output, and the resulting destination.  If you are in PROD, be very careful.  Because you have not deleted anything, existing files, folders and tables will not get overwritten, but you may have just caused issues for a downstream ETL process.
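For example (the dates are placeholders), if the pipeline's active period originally looked like this:

    "start": "2017-04-25T00:00:00Z",
    "end":   "2017-04-26T00:00:00Z"

... then moving "start" back one day and redeploying:

    "start": "2017-04-24T00:00:00Z",
    "end":   "2017-04-26T00:00:00Z"

... tells ADF there are now unproduced slices for 2017-04-24, so with daily dataset availability it creates and executes one new output file or folder per backfilled slice.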

Disclaimer: This option has worked for me except when my pipeline has activities referencing datasets that have different execution histories.  I've looked at my new (earlier) destination location and found missing files, because some of the datasets had execution histories going back to the dawn of time, so ADF treated those earlier slices as already produced and skipped them.

1 Comment
Nutan Patel
7/13/2017 12:39:19 am

Thanks Man!!
This is really a very nice trick.

how we can manually run pipeline using Powershell command ?

