Free DP-203 Exam Braindumps

You have an Azure Synapse Analytics workspace named WS1 that contains an Apache Spark pool named Pool1.
You plan to create a database named DB1 in Pool1.
You need to ensure that when tables are created in DB1, the tables are available automatically as external tables to the built-in serverless SQL pool.
Which format should you use for the tables in DB1?

  1. CSV
  2. ORC
  3. JSON
  4. Parquet

Answer(s): D

Explanation:

Serverless SQL pool can automatically synchronize metadata from Apache Spark pools. A serverless SQL pool database is created for each database that exists in the serverless Apache Spark pools.
For each Spark external table based on Parquet and located in Azure Storage, an external table is created in the corresponding serverless SQL pool database, so the tables become available automatically.
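As a sketch, the DDL a notebook attached to Pool1 might run so that the resulting table is picked up by metadata synchronization could look like the following (the table name, columns, storage account, and path are hypothetical, not from the question):

```python
# Hypothetical Spark SQL statements (table name, columns, storage account,
# and path are illustrative). Run in a notebook attached to Pool1; the
# built-in serverless SQL pool then exposes DB1.events automatically.
create_db_sql = "CREATE DATABASE IF NOT EXISTS DB1"

create_table_sql = (
    "CREATE TABLE DB1.events (EventId INT, EventTime TIMESTAMP) "
    "USING PARQUET "  # Parquet format makes the table shareable with serverless SQL
    "LOCATION 'abfss://data@contosostorage.dfs.core.windows.net/events/'"
)

# In a real Synapse notebook these strings would be submitted with spark.sql(...):
# spark.sql(create_db_sql)
# spark.sql(create_table_sql)
```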


Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-storage-files-spark-tables



You are planning a solution to aggregate streaming data that originates in Apache Kafka and is output to Azure Data Lake Storage Gen2. The developers who will implement the stream processing solution use Java. Which service should you recommend using to process the streaming data?

  1. Azure Event Hubs
  2. Azure Data Factory
  3. Azure Stream Analytics
  4. Azure Databricks

Answer(s): D

Explanation:

Azure Databricks supports stream processing through Apache Spark Structured Streaming, which has first-class Java APIs, and it can read from Apache Kafka and write to Azure Data Lake Storage Gen2. Azure Stream Analytics uses a SQL-based query language and does not support authoring jobs in Java, so Azure Databricks is the recommended service for this team. The reference below summarizes the key differences in capabilities among the stream processing technologies in Azure.


Reference:

https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/stream-processing



You plan to implement an Azure Data Lake Storage Gen2 container that will contain CSV files. The size of the files will vary based on the number of events that occur per hour.
File sizes range from 4 KB to 5 GB.
You need to ensure that the files stored in the container are optimized for batch processing. What should you do?

  1. Convert the files to JSON
  2. Convert the files to Avro
  3. Compress the files
  4. Merge the files

Answer(s): D
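Merging many small files into fewer large files reduces the per-file open, close, and scheduling overhead that dominates batch jobs over small files in Azure Data Lake Storage Gen2. A minimal, hypothetical Python sketch of the idea using only the standard library (the file contents shown are illustrative):

```python
import csv
import io


def merge_csv_files(files, keep_header=True):
    """Merge several small CSV files (given as text streams) into one.

    Keeps the header row from the first file only, which is the usual
    layout when hourly exports share a schema.
    """
    out = io.StringIO()
    writer = csv.writer(out)
    header_written = False
    for f in files:
        reader = csv.reader(f)
        header = next(reader)
        if keep_header and not header_written:
            writer.writerow(header)
            header_written = True
        for row in reader:
            writer.writerow(row)
    return out.getvalue()


# Example: two small hourly exports merged into one batch-friendly file.
hour1 = io.StringIO("event_id,count\n1,10\n2,5\n")
hour2 = io.StringIO("event_id,count\n3,7\n")
merged = merge_csv_files([hour1, hour2])
```

In practice the same merge is usually done at scale with Spark or Azure Data Factory rather than in-process, but the principle is the same: fewer, larger files are cheaper to batch-process.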



HOTSPOT (Drag and Drop is not supported)
You store files in an Azure Data Lake Storage Gen2 container. The container has the storage policy shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct selection is worth one point.
Hot Area:

  1. See Explanation section for answer.

Answer(s): A

Explanation:


Box 1: moved to cool storage
The ManagementPolicyBaseBlob.TierToCool property gets or sets the function to tier blobs to cool storage. It is supported for blobs currently at the hot tier.
Box 2: container1/contoso.csv, as defined by prefixMatch.
prefixMatch: an array of strings for prefixes to be matched. Each rule can define up to 10 case-sensitive prefixes. A prefix string must start with a container name.
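A hedged sketch of what such a lifecycle policy and the prefix matching might look like, expressed as a Python structure shaped like the Azure Storage policy JSON (the rule name and 30-day threshold are assumptions, since the exhibit is not shown; only the container1/contoso prefix comes from the explanation above):

```python
# Illustrative lifecycle-management policy (rule name and day threshold are
# assumptions, not from the exhibit; the prefix comes from the explanation).
policy = {
    "rules": [
        {
            "name": "tier-contoso-csv",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["container1/contoso"],
                },
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 30}
                    }
                },
            },
        }
    ]
}


def rule_applies(policy_rule, blob_path):
    """prefixMatch is case-sensitive and anchored at the container name."""
    prefixes = policy_rule["definition"]["filters"]["prefixMatch"]
    return any(blob_path.startswith(p) for p in prefixes)


rule = policy["rules"][0]
# container1/contoso.csv matches the prefix; container2/contoso.csv does not,
# and neither does container1/Contoso.csv (matching is case-sensitive).
```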


Reference:

https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.management.storage.fluent.models.managementpolicybaseblob.tiertocool





