Free DP-500 Exam Braindumps (page: 16)

Page 16 of 46

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a Power BI dataset named Dataset1.

In Dataset1, you currently have 50 measures that use the same time intelligence logic.

You need to reduce the number of measures, while maintaining the current functionality.

Solution: From Tabular Editor, you create a calculation group.

Does this meet the goal?

  A. Yes
  B. No

Answer(s): A

Explanation:

Calculation groups can significantly reduce the number of redundant measures by grouping common measure expressions as calculation items.
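
A minimal sketch of the calculation items such a group might contain, assuming a hypothetical 'Date'[Date] column; Tabular Editor stores each item as a single DAX expression, and SELECTEDMEASURE() applies it to whichever of the 50 measures is in context:

    -- Calculation item "YTD": year-to-date version of the measure in context
    CALCULATE ( SELECTEDMEASURE (), DATESYTD ( 'Date'[Date] ) )

    -- Calculation item "PY": the same measure for the same period one year earlier
    CALCULATE ( SELECTEDMEASURE (), SAMEPERIODLASTYEAR ( 'Date'[Date] ) )

One base measure plus a handful of calculation items replaces a separate YTD and PY copy of every measure.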



Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a Power BI dataset named Dataset1.

In Dataset1, you currently have 50 measures that use the same time intelligence logic.

You need to reduce the number of measures, while maintaining the current functionality.

Solution: From DAX Studio, you write a query that uses grouping sets.

Does this meet the goal?

  A. Yes
  B. No

Answer(s): B

Explanation:

A query written in DAX Studio only returns results; it does not add, change, or remove objects in Dataset1. Grouping sets (for example, ROLLUPADDISSUBTOTAL inside SUMMARIZECOLUMNS) aggregate data at query time, so the 50 measures remain untouched. A calculation group is needed to consolidate the shared time intelligence logic.
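For comparison, a grouping-sets-style DAX query of the kind the reference discusses (the Sales table and 'Date'[Year] column are hypothetical); it adds subtotal rows to the query result but creates nothing in the model:

    EVALUATE
    SUMMARIZECOLUMNS (
        ROLLUPADDISSUBTOTAL ( 'Date'[Year], "Is Year Total" ),  -- subtotal (grouping-set) row
        "Total Sales", SUM ( Sales[Amount] )
    )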

Reference:

https://community.powerbi.com/t5/Desktop/DAX-Grouping-Sets/m-p/2079648#M774276



Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-encoded business names, survey names, and participant counts. The database is configured to use the default collation.

The queries use OPENROWSET and infer the schema shown in the following table.

  Column name        Inferred data type
  businessName       varchar(8000)
  surveyName         varchar(8000)
  participantCount   int

You need to recommend changes to the queries to reduce I/O reads and tempdb usage.

Solution: You recommend defining an external table for the Parquet files and updating the query to use the table.

Does this meet the goal?

  A. Yes
  B. No

Answer(s): B

Explanation:

Defining an external table over the same Parquet files does not change how the oversized string columns are read, so it does not reduce I/O reads or tempdb usage. Instead, recommend using OPENROWSET WITH to explicitly specify the maximum length for businessName and surveyName: the inferred varchar(8000) columns are far larger than the data requires, and reducing their declared size reduces the resources the query consumes.

A SELECT...FROM OPENROWSET(BULK...) statement queries the data in a file directly, without importing the data into a table. Such statements can also list bulk-column aliases by using a format file to specify column names and data types.
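
A minimal sketch of the recommended query shape, assuming a hypothetical storage path and hypothetical maximum lengths; the WITH clause overrides the inferred varchar(8000) columns:

    SELECT businessName, surveyName, participantCount
    FROM OPENROWSET(
        BULK 'https://account.dfs.core.windows.net/container/surveys/*.parquet',
        FORMAT = 'PARQUET'
    )
    WITH (
        businessName varchar(200),   -- explicit maximum length instead of inferred varchar(8000)
        surveyName varchar(200),
        participantCount int
    ) AS [rows];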


Reference:

https://docs.microsoft.com/en-us/sql/t-sql/functions/openrowset-transact-sql



Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-encoded business names, survey names, and participant counts. The database is configured to use the default collation.

The queries use OPENROWSET and infer the schema shown in the following table.

  Column name        Inferred data type
  businessName       varchar(8000)
  surveyName         varchar(8000)
  participantCount   int

You need to recommend changes to the queries to reduce I/O reads and tempdb usage.

Solution: You recommend using OPENROWSET WITH to explicitly specify the maximum length for businessName and surveyName.

Does this meet the goal?

  A. Yes
  B. No

Answer(s): B

Explanation:

Specifying only the maximum length does not meet the goal. Instead, recommend using OPENROWSET WITH to explicitly define the collation for businessName and surveyName as Latin1_General_100_BIN2_UTF8.

From "Query Parquet files using serverless SQL pool in Azure Synapse Analytics":
Important: Ensure you are using a UTF-8 database collation (for example, Latin1_General_100_BIN2_UTF8), because string values in Parquet files are encoded using UTF-8. A mismatch between the text encoding in the Parquet file and the collation may cause unexpected conversion errors. You can easily change the default collation of the current database using the following T-SQL statement: ALTER DATABASE CURRENT COLLATE Latin1_General_100_BIN2_UTF8;

Note: If you use the Latin1_General_100_BIN2_UTF8 collation, you get an additional performance boost compared to the other collations. The Latin1_General_100_BIN2_UTF8 collation is compatible with Parquet string sorting rules, so the SQL pool can eliminate parts of the Parquet files that do not contain data needed by the query (file/column-segment pruning). With other collations, all data from the Parquet files is loaded into Synapse SQL and filtering happens within the SQL process. The Latin1_General_100_BIN2_UTF8 collation has an additional performance optimization that works only for Parquet and Azure Cosmos DB. The downside is that you lose fine-grained comparison rules such as case insensitivity.
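
A minimal sketch of the recommended query shape, assuming a hypothetical storage path; the COLLATE clause in WITH applies the UTF-8 binary collation to the string columns so the serverless pool can push filters down to the Parquet files:

    SELECT businessName, surveyName, participantCount
    FROM OPENROWSET(
        BULK 'https://account.dfs.core.windows.net/container/surveys/*.parquet',
        FORMAT = 'PARQUET'
    )
    WITH (
        businessName varchar(200) COLLATE Latin1_General_100_BIN2_UTF8,
        surveyName varchar(200) COLLATE Latin1_General_100_BIN2_UTF8,
        participantCount int
    ) AS [rows];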


Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/query-parquet-files






Post your Comments and Discuss Microsoft DP-500 exam with other Community members:

Summer commented on July 28, 2024
Wonderful site. It helped me pass my exam. Way to go guys!
UNITED STATES

Siyya commented on January 19, 2024
might help me to prepare for the exam
Anonymous

siyaa commented on January 19, 2024
helped me understand the material better.
Anonymous

Bunny commented on June 19, 2023
Good Content
Anonymous

Demetrius commented on June 01, 2023
Important and useful
Anonymous

Kartoos commented on April 06, 2023
The practice exam was an important part of my preparation and helped me understand the material better.
FRANCE