Free DP-300 Exam Braindumps (page: 15)


You have an Azure Data Factory that contains 10 pipelines.

You need to label each pipeline with its main purpose of either ingest, transform, or load. The labels must be available for grouping and filtering when using the monitoring experience in Data Factory.

What should you add to each pipeline?

  A. an annotation
  B. a resource tag
  C. a run group ID
  D. a user property
  E. a correlation ID

Answer(s): A

Explanation:

Azure Data Factory annotations are free-form tags that you can add to pipelines (and to other factory objects such as datasets, linked services, and triggers). In the Data Factory monitoring experience you can group and filter pipeline runs by annotation, which makes it easy to label each pipeline's purpose, compare performance, and find errors faster.
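
As an illustration (not part of the original question), the following is a minimal sketch of adding an "ingest" annotation with the azure-mgmt-datafactory Python SDK. It assumes the PipelineResource model exposes an annotations list, and the subscription, resource group, factory, and pipeline names are all placeholders.

    # Minimal sketch: tag an existing pipeline with an "ingest" annotation so it
    # can be grouped and filtered in the Data Factory monitoring experience.
    # Assumption: PipelineResource exposes an `annotations` list.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    subscription_id = "<subscription-id>"   # placeholder
    resource_group = "<resource-group>"     # placeholder
    factory_name = "<data-factory-name>"    # placeholder

    client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

    # Fetch the pipeline, append the annotation, and write the definition back.
    pipeline = client.pipelines.get(resource_group, factory_name, "PipelineIngestSales")
    pipeline.annotations = (pipeline.annotations or []) + ["ingest"]
    client.pipelines.create_or_update(resource_group, factory_name, "PipelineIngestSales", pipeline)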


Reference:

https://www.techtalkcorner.com/monitor-azure-data-factory-annotations/



HOTSPOT (Drag and Drop is not supported)
You have an Azure data factory that has two pipelines named PipelineA and PipelineB. PipelineA has four activities as shown in the following exhibit.


PipelineB has two activities as shown in the following exhibit.


You create an alert for the data factory that uses Failed pipeline runs metrics for both pipelines and all failure types. The metric has the following settings:

- Operator: Greater than
- Aggregation type: Total
- Threshold value: 2
- Aggregation granularity (Period): 5 minutes
- Frequency of evaluation: Every 5 minutes

Data Factory monitoring records the failures shown in the following table.


For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:

  A. See Explanation section for answer.

Answer(s): A

Explanation:



Box 1: No
Only one failure occurred within the 5-minute period; 1 is not greater than the threshold of 2, so the alert does not fire.

Box 2: No
Only two failures occurred within the 5-minute period; 2 is not greater than 2, so the alert does not fire.

Box 3: No
Only two failures occurred within the 5-minute period; 2 is not greater than 2, so the alert does not fire.
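
To make the arithmetic explicit: with Aggregation type Total, Operator Greater than, and Threshold 2, the alert fires only when a 5-minute period contains three or more failures. A minimal sketch of that evaluation logic follows; the timestamps are hypothetical, since the exhibit table is not reproduced here.

    from datetime import datetime, timedelta

    def alert_fires(failure_times, window_end, period=timedelta(minutes=5), threshold=2):
        # Sum (Total) the failures that fall inside the 5-minute period and
        # compare with the threshold using "Greater than".
        window_start = window_end - period
        total = sum(1 for t in failure_times if window_start < t <= window_end)
        return total > threshold  # 2 failures is not > 2, so no alert

    # Hypothetical example: two failures in one period -> alert does not fire.
    failures = [datetime(2021, 1, 1, 10, 1), datetime(2021, 1, 1, 10, 3)]
    print(alert_fires(failures, window_end=datetime(2021, 1, 1, 10, 5)))  # False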


Reference:

https://docs.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-metric-overview



Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Data Lake Storage account that contains a staging zone.

You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.

Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes a mapping data flow, and then inserts the data into the data warehouse.

Does this meet the goal?

  A. Yes
  B. No

Answer(s): B

Explanation:

If you need to transform data in a way that Data Factory does not support natively, such as running an R script, you can create a custom activity (not a mapping data flow) with your own data processing logic and use that activity in the pipeline. For example, you can create a custom activity to run R scripts on an HDInsight cluster that has R installed.
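
For context, a Custom activity is declared in the pipeline definition together with the command it should run. The following is a minimal, hypothetical sketch written as a Python dict that mirrors the pipeline JSON; the linked service name and script path are placeholders, and the compute that runs the command must have R installed.

    # Hypothetical Custom activity definition mirroring the pipeline JSON.
    custom_activity = {
        "name": "RunRTransform",
        "type": "Custom",
        "linkedServiceName": {
            "referenceName": "ComputeLinkedService",  # placeholder linked service
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "command": "Rscript transform.R",  # Rscript must exist on the compute
            "folderPath": "scripts/"           # placeholder path to the script
        }
    }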


Reference:

https://docs.microsoft.com/en-US/azure/data-factory/transform-data



Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Data Lake Storage account that contains a staging zone.

You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.

Solution: You schedule an Azure Databricks job that executes an R notebook, and then inserts the data into the data warehouse.

Does this meet the goal?

  A. Yes
  B. No

Answer(s): B

Explanation:

The solution must use Azure Data Factory, not an Azure Databricks job.


Reference:

https://docs.microsoft.com/en-US/azure/data-factory/transform-data






Post your Comments and Discuss Microsoft DP-300 exam with other Community members:

laks commented on December 26, 2024
so far seems good
UNITED STATES

Jack commented on October 24, 2024
The questions are very good.
Anonymous

TheUser commented on October 23, 2024
So far seems good
Anonymous

anonymus commented on October 23, 2024
Master database differential backup is not supported in SQL Server.
EUROPEAN UNION

Ntombi commented on October 17, 2024
I find the questions helpful for my exam preparation.
Anonymous

Ntombi commented on October 17, 2024
The questions help me to see if I understood what I have learned
Anonymous

ntombi commented on October 17, 2024
Writing the exam at the end of the month.
Anonymous

Raby commented on August 13, 2024
Wonderful work guys. The PDF version helped me pass. Thank you
EUROPEAN UNION