Free DP-500 Exam Braindumps (page: 14)

Page 14 of 46

DRAG DROP (Drag and Drop is not supported)
You manage a Power BI dataset that queries a fact table named SalesDetails. SalesDetails contains three date columns named OrderDate, CreatedOnDate, and ModifiedDate.

You need to implement an incremental refresh of SalesDetails. The solution must ensure that OrderDate starts on or after the beginning of the prior year.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
Select and Place:

  1. See Explanation section for answer.

Answer(s): A

Explanation:



Step 1: Create RangeStart and RangeEnd DateTime parameters.
When configuring incremental refresh in Power BI Desktop, you first create two Power Query date/time parameters with the reserved, case-sensitive names RangeStart and RangeEnd. These parameters, defined in the Manage Parameters dialog in Power Query Editor, are initially used to filter the data loaded into the Power BI Desktop model table to include only rows with a date/time within that period.
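As a sketch, each parameter is its own query in Power Query; the names are reserved and case-sensitive, and the initial values below are placeholders:

```m
// Sketch of the two parameter queries (initial values are placeholders)

// RangeStart
#datetime(2023, 1, 1, 0, 0, 0) meta [IsParameterQuery = true, Type = "DateTime", IsParameterQueryRequired = true]

// RangeEnd
#datetime(2023, 12, 31, 0, 0, 0) meta [IsParameterQuery = true, Type = "DateTime", IsParameterQueryRequired = true]
```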

Step 2: Add an applied step that adds a custom date filter OrderDate is Between RangeStart and RangeEnd.
With RangeStart and RangeEnd parameters defined, you then apply custom Date filters on your table's date column. The filters you apply select a subset of data that will be loaded into the model when you click Apply.
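Applied to SalesDetails, the filter step might look like the following M (the step names are assumptions). Using >= on one boundary and < on the other avoids loading the same row into two partitions:

```m
// Sketch: keep rows whose OrderDate falls between the parameters
#"Filtered Rows" = Table.SelectRows(#"Changed Type", each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd)
```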

Step 3: Configure an incremental refresh to archive data that starts two years before the refresh date.
After filters have been applied and a subset of data has been loaded into the model, you then define an incremental refresh policy for the table. After the model is published to the service, the policy is used by the service to create and manage table partitions and perform refresh operations. To define the policy, you will use the Incremental refresh and real-time data dialog box to specify both required settings and optional settings.

Step 4: Add an applied step that filters OrderDate to the start of the prior year.
This additional filter satisfies the requirement that OrderDate starts on or after the beginning of the prior year.
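A hedged sketch of such a filter step (assuming OrderDate is a date/time column and the preceding step name):

```m
// Sketch: keep only rows on or after January 1 of the prior year
#"Filtered Prior Year" = Table.SelectRows(#"Filtered Rows", each [OrderDate] >= Date.StartOfYear(Date.AddYears(DateTime.LocalNow(), -1)))
```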


Reference:

https://docs.microsoft.com/en-us/power-bi/connect-data/incremental-refresh-overview



DRAG DROP (Drag and Drop is not supported)
You plan to create a Power BI report that will use an OData feed as the data source. You will retrieve all the entities from two different collections by using the same service root.

The OData feed is still in development. The location of the feed will change once development is complete.

The report will be published before the OData feed development is complete.

You need to minimize development effort to change the data source once the location changes.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:

  1. See Explanation section for answer.

Answer(s): A

Explanation:



Step 1: Create a parameter that contains the service root URI

Step 2: Get data from OData feed source and use the parameter to populate the first part of the URL.
The URI is in the first part of the query.
Example:

let
    Source = OData.Feed(
        "https://analytics.dev.azure.com/{organization}/{project}/_odata/v3.0-preview/WorkItemSnapshot?"
            & "$apply=filter("
            & "WorkItemType eq 'Bug' "
            & "and StateCategory ne 'Completed' "
            & "and startswith(Area/AreaPath,'{areapath}') "
            & "and DateValue ge {startdate} "
            & ") "
            & "/groupby("
            & "(DateValue,State,WorkItemType,Priority,Severity,Area/AreaPath,Iteration/IterationPath,AreaSK), "
            & "aggregate($count as Count)"
            & ")",
        null,
        [Implementation = "2.0", OmitValues = ODataOmitValues.Nulls, ODataVersion = 4]
    )
in
    Source
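A minimal sketch of how the parameter keeps the feed location in one place, so only the parameter value changes when development completes (the parameter name ODataServiceRoot and the collection name are assumptions):

```m
let
    // ODataServiceRoot is a text parameter holding the service root URI;
    // the collection name is appended to form the full URL
    Source = OData.Feed(ODataServiceRoot & "WorkItemSnapshot", null, [Implementation = "2.0"])
in
    Source
```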

Step 3: From Advanced Editor, duplicate the query and change the resource path in the URL.

Choose Get Data, and then Blank Query.
From the Power Query Editor, choose Advanced Editor.
The Advanced Editor window opens.
Edit the query, changing the resource path at the end of the URL so that it points to the second collection.

Incorrect:
Not: From Advanced Editor, get data from an OData feed source and use the parameter to populate the last part of the URL.
The URI is in the first part of the query.


Reference:

https://docs.microsoft.com/en-us/azure/devops/report/powerbi/odataquery-connect



DRAG DROP (Drag and Drop is not supported)
You have an Azure Synapse Analytics serverless SQL pool.

You need to return a list of files and the number of rows in each file.

How should you complete the Transact-SQL statement? To answer, drag the appropriate values to the targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.
Select and Place:

  1. See Explanation section for answer.

Answer(s): A

Explanation:





Box 1: APPROX_COUNT_DISTINCT
The APPROX_COUNT_DISTINCT function returns the approximate number of unique non-null values in a group.
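For illustration only (the table and column names below are assumptions), APPROX_COUNT_DISTINCT is used like any other aggregate:

```sql
-- Approximate number of distinct customers per region (illustrative names)
SELECT Region,
       APPROX_COUNT_DISTINCT(CustomerId) AS ApproxCustomerCount
FROM dbo.Sales
GROUP BY Region;
```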

Box 2: OPENROWSET
The OPENROWSET function in Synapse SQL reads the content of the file(s) from a data source. The data source is an Azure storage account, and it can be explicitly referenced in the OPENROWSET function or dynamically inferred from the URL of the files that you want to read. The OPENROWSET function can optionally contain a DATA_SOURCE parameter to specify the data source that contains the files.

The OPENROWSET function can be referenced in the FROM clause of a query as if it were a table named OPENROWSET. It supports bulk operations through a built-in BULK provider that enables data from a file to be read and returned as a rowset.
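A sketch of OPENROWSET in a FROM clause, with filepath() used to label each file as the question requires (the storage URL is a placeholder, not a real account):

```sql
-- Sketch: list files and the number of rows in each (placeholder URL)
SELECT result.filepath() AS file_name,
       COUNT_BIG(*) AS row_count
FROM OPENROWSET(
        BULK 'https://myaccount.dfs.core.windows.net/mycontainer/csv/taxi/*.csv',
        FORMAT = 'CSV',
        PARSER_VERSION = '2.0'
     ) AS result
GROUP BY result.filepath();
```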


Reference:

https://docs.microsoft.com/en-us/sql/t-sql/functions/approx-count-distinct-transact-sql
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-openrowset



HOTSPOT (Drag and Drop is not supported)
You have an Azure Synapse Analytics serverless SQL pool and an Azure Data Lake Storage Gen2 account.

You need to query all the files in the ‘csv/taxi/’ folder and all its subfolders. All the files are in CSV format and have a header row.

How should you complete the query? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

  1. See Explanation section for answer.

Answer(s): A

Explanation:



Box 1: BULK 'csv/taxi/**',
The ** wildcard matches all files in the 'csv/taxi/' folder and, recursively, in all of its subfolders; a pattern such as '*.csv' alone would match only the top-level folder.

Box 2: FIRSTROW=2
Because the files include a header row, loading should start from the second line.

Note: FIRSTROW = 'first_row'

Specifies the number of the first row to load. The default is 1 and indicates the first row in the specified data file. The row numbers are determined by counting the row terminators. FIRSTROW is 1-based.

Incorrect:
Not FIRSTROW=1. FIRSTROW=1 is used when there is no header.
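Putting the two selections together, a hedged sketch (the DATA_SOURCE name is an assumption standing in for an external data source pointing at the storage account):

```sql
-- Sketch: read headered CSVs from a folder and all its subfolders
SELECT *
FROM OPENROWSET(
        BULK 'csv/taxi/**',              -- folder and all subfolders
        DATA_SOURCE = 'MyDataLake',      -- assumed external data source name
        FORMAT = 'CSV',
        PARSER_VERSION = '2.0',
        FIRSTROW = 2                     -- skip the header row
     ) AS rows;
```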


Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-openrowset






Post your Comments and Discuss Microsoft DP-500 exam with other Community members:

Summer commented on July 28, 2024
Wonderful site. It helped me pass my exam. Way to go guys!
UNITED STATES

Siyya commented on January 19, 2024
might help me to prepare for the exam
Anonymous

siyaa commented on January 19, 2024
helped me understand the material better.
Anonymous

Bunny commented on June 19, 2023
Good Content
Anonymous

Demetrius commented on June 01, 2023
Important and useful
Anonymous

Kartoos commented on April 06, 2023
The practice exam was an important part of my preparation and helped me understand the material better.
FRANCE