Free DP-300 Exam Braindumps (page: 14)


What should you do after a failover of SalesSQLDb1 to ensure that the database remains accessible to SalesSQLDb1App1?

  1. Configure SalesSQLDb1 as writable.
  2. Update the connection strings of SalesSQLDb1App1.
  3. Update the firewall rules of SalesSQLDb1.
  4. Update the users in SalesSQLDb1.

Answer(s): B

Explanation:

Scenario: SalesSQLDb1 uses database firewall rules and contained database users.
After a failover, the former geo-secondary becomes the new primary, so SalesSQLDb1App1 must be redirected to the new primary server by updating the application's connection strings.
Incorrect:
Not C: When using public network access to connect to the database, database-level IP firewall rules are recommended for geo-replicated databases. These rules are replicated with the database, which ensures that all geo-secondaries have the same IP firewall rules as the primary. This eliminates the need to manually configure and maintain firewall rules on the servers hosting the primary and secondary databases.
Not D: Contained database users are stored in the database itself, so they are replicated to the geo-secondary along with the data and require no changes after a failover.


Reference:

https://docs.microsoft.com/en-us/azure/azure-sql/database/active-geo-replication-overview#keeping-credentials-and-firewall-rules-in-sync
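
The following is a minimal sketch, not part of the official answer, of what "update the connection strings" looks like in practice. It uses pyodbc with hypothetical server names, a hypothetical contained database user, and a placeholder password; only the server name changes, because the contained user and the database-level firewall rules are replicated with the database.

    import pyodbc

    # Before failover: SalesSQLDb1App1 connects to the original primary (hypothetical name).
    OLD_PRIMARY = "sales-sql-primary.database.windows.net"
    # After failover: the former geo-secondary is now the primary (hypothetical name).
    NEW_PRIMARY = "sales-sql-secondary.database.windows.net"

    def build_connection_string(server: str) -> str:
        # The credentials belong to a contained database user, so only the server name changes.
        return (
            "Driver={ODBC Driver 18 for SQL Server};"
            f"Server=tcp:{server},1433;"
            "Database=SalesSQLDb1;"
            "Uid=salesapp_user;Pwd=<password>;"   # hypothetical contained user
            "Encrypt=yes;TrustServerCertificate=no;"
        )

    # The only change SalesSQLDb1App1 needs after the failover:
    conn = pyodbc.connect(build_connection_string(NEW_PRIMARY))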



HOTSPOT (Drag and Drop is not supported)
You have an Azure Data Factory instance named ADF1 and two Azure Synapse Analytics workspaces named WS1 and WS2.

ADF1 contains the following pipelines:

P1: Uses a copy activity to copy data from a nonpartitioned table in a dedicated SQL pool of WS1 to an Azure Data Lake Storage Gen2 account
P2: Uses a copy activity to copy data from text-delimited files in an Azure Data Lake Storage Gen2 account to a nonpartitioned table in a dedicated SQL pool of WS2

You need to configure P1 and P2 to maximize parallelism and performance.

Which dataset settings should you configure for the copy activity of each pipeline? To answer, select the appropriate options in the answer area.

Hot Area:

  1. See Explanation section for answer.

Answer(s): A

Explanation:




P1: Set the Partition option to Dynamic range.
The Azure Synapse Analytics connector used by the copy activity provides built-in data partitioning; configuring a partition option such as Dynamic range lets the copy activity read from the nonpartitioned source table in parallel.

P2: Set the Copy method to PolyBase.
PolyBase is the most efficient way to load data into Azure Synapse Analytics. PolyBase can read directly from Azure Blob storage and Azure Data Lake Storage; for other source data stores, use the staged copy feature to still achieve high load speeds.


Reference:

https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-data-warehouse
https://docs.microsoft.com/en-us/azure/data-factory/load-azure-sql-data-warehouse
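
As an illustration only, the relevant copy-activity settings can be sketched as Python dictionaries that mirror the Data Factory JSON described in the referenced documentation; the column name and partition bounds below are hypothetical.

    # P1: dedicated SQL pool (WS1) -> Azure Data Lake Storage Gen2.
    # A partition option on the source enables parallel reads from the nonpartitioned table.
    p1_copy_source = {
        "type": "SqlDWSource",
        "partitionOption": "DynamicRange",
        "partitionSettings": {
            "partitionColumnName": "SaleId",     # hypothetical integer or date column
            "partitionLowerBound": "1",
            "partitionUpperBound": "1000000",
        },
    }

    # P2: text-delimited files in Azure Data Lake Storage Gen2 -> dedicated SQL pool (WS2).
    # PolyBase on the sink gives the fastest bulk load into Azure Synapse Analytics.
    p2_copy_sink = {
        "type": "SqlDWSink",
        "allowPolyBase": True,
        "polyBaseSettings": {
            "rejectType": "value",
            "rejectValue": 0,
            "useTypeDefault": True,
        },
    }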



You have the following Azure Data Factory pipelines:
- Ingest Data from System1
- Ingest Data from System2
- Populate Dimensions
- Populate Facts

Ingest Data from System1 and Ingest Data from System2 have no dependencies. Populate Dimensions must execute after Ingest Data from System1 and Ingest Data from System2. Populate Facts must execute after the Populate Dimensions pipeline. All the pipelines must execute every eight hours.

What should you do to schedule the pipelines for execution?

  1. Add a schedule trigger to all four pipelines.
  2. Add an event trigger to all four pipelines.
  3. Create a parent pipeline that contains the four pipelines and use an event trigger.
  4. Create a parent pipeline that contains the four pipelines and use a schedule trigger.

Answer(s): D


Reference:

https://www.mssqltips.com/sqlservertip/6137/azure-data-factory-control-flow-activities-overview/
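
A minimal sketch of the chosen approach, expressed as Python dictionaries that mirror the Data Factory JSON: a parent pipeline chains the four existing pipelines with Execute Pipeline activities, and a single schedule trigger fires it every eight hours. The parent pipeline name, activity names, and trigger start time are hypothetical.

    def execute_pipeline(name, pipeline_name, depends_on=()):
        # Execute Pipeline activity that waits for the child pipeline to finish.
        return {
            "name": name,
            "type": "ExecutePipeline",
            "typeProperties": {
                "pipeline": {"referenceName": pipeline_name, "type": "PipelineReference"},
                "waitOnCompletion": True,
            },
            "dependsOn": [
                {"activity": d, "dependencyConditions": ["Succeeded"]} for d in depends_on
            ],
        }

    parent_pipeline = {
        "name": "PL_Master",    # hypothetical parent pipeline name
        "properties": {
            "activities": [
                execute_pipeline("RunIngest1", "Ingest Data from System1"),
                execute_pipeline("RunIngest2", "Ingest Data from System2"),
                execute_pipeline("RunDimensions", "Populate Dimensions",
                                 depends_on=("RunIngest1", "RunIngest2")),
                execute_pipeline("RunFacts", "Populate Facts",
                                 depends_on=("RunDimensions",)),
            ]
        },
    }

    schedule_trigger = {
        "name": "TR_Every8Hours",    # hypothetical trigger name
        "properties": {
            "type": "ScheduleTrigger",
            "typeProperties": {
                "recurrence": {"frequency": "Hour", "interval": 8,
                               "startTime": "2024-01-01T00:00:00Z", "timeZone": "UTC"},
            },
            "pipelines": [
                {"pipelineReference": {"referenceName": "PL_Master",
                                       "type": "PipelineReference"}},
            ],
        },
    }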



You have an Azure Data Factory pipeline that performs an incremental load of source data to an Azure Data Lake Storage Gen2 account.

Data to be loaded is identified by a column named LastUpdatedDate in the source table. You plan to execute the pipeline every four hours.
You need to ensure that the pipeline execution meets the following requirements:

- Automatically retries the execution when the pipeline run fails due to concurrency or throttling limits.
- Supports backfilling existing data in the table.

Which type of trigger should you use?

  1. tumbling window
  2. on-demand
  3. event
  4. schedule

Answer(s): A

Explanation:

The tumbling window trigger supports backfill scenarios: pipeline runs can be scheduled for windows in the past. It also has a built-in retry policy, so a run that fails because of concurrency or throttling limits is retried automatically.

Incorrect Answers:
D: A schedule trigger does not support backfill scenarios; pipeline runs can be executed only for time periods from the current time onward.


Reference:

https://docs.microsoft.com/en-us/azure/data-factory/concepts-pipeline-execution-triggers
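
As an illustration only, a tumbling window trigger meeting both requirements can be sketched as a Python dictionary mirroring the Data Factory JSON. The pipeline name, parameter names, retry values, and start time are hypothetical; the start time lies in the past so that earlier windows are backfilled, and the retry policy reruns windows that fail due to concurrency or throttling limits.

    tumbling_window_trigger = {
        "name": "TR_IncrementalLoad",
        "properties": {
            "type": "TumblingWindowTrigger",
            "typeProperties": {
                "frequency": "Hour",
                "interval": 4,                              # every four hours
                "startTime": "2024-01-01T00:00:00Z",        # in the past -> backfill
                "maxConcurrency": 4,
                "retryPolicy": {"count": 3, "intervalInSeconds": 300},
            },
            "pipeline": {
                "pipelineReference": {"referenceName": "PL_IncrementalLoad",
                                      "type": "PipelineReference"},
                # Window boundaries let the pipeline filter rows on LastUpdatedDate.
                "parameters": {
                    "WindowStart": "@trigger().outputs.windowStartTime",
                    "WindowEnd": "@trigger().outputs.windowEndTime",
                },
            },
        },
    }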


