Free DP-300 Exam Braindumps (page: 31)


A company plans to use Apache Spark analytics to analyze intrusion detection data.

You need to recommend a solution to analyze network and system activity data for malicious activities and policy violations. The solution must minimize administrative efforts.

What should you recommend?

  A. Azure Data Lake Storage
  B. Azure Databricks
  C. Azure HDInsight
  D. Azure Data Factory

Answer(s): C

Explanation:

Azure HDInsight offers pre-made monitoring dashboards in the form of solutions that can be used to monitor the workloads running on your clusters. There are solutions for Apache Spark, Hadoop, Apache Kafka, Hive LLAP (Live Long and Process), Apache HBase, and Apache Storm available in the Azure Marketplace.

Note: With Azure HDInsight you can set up Azure Monitor alerts that trigger when the value of a metric, or the result of a query, meets certain conditions. You can set a condition on a query returning a record with a value greater than or less than a given threshold, or on the number of results a query returns. For example, you could create an alert that sends an email if a Spark job fails or if Kafka disk usage exceeds 90 percent.
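Such an alert is ultimately backed by a Log Analytics query. As a hedged illustration only, the sketch below runs a query of that shape with the azure-monitor-query client library; the workspace ID is a placeholder, and the table and column names (SparkApplicationEvents_CL, State_s) are hypothetical, since the tables HDInsight actually writes depend on how its monitoring integration is configured.

  import com.azure.identity.DefaultAzureCredentialBuilder
  import com.azure.monitor.query.LogsQueryClientBuilder
  import com.azure.monitor.query.models.QueryTimeInterval

  object SparkFailureQuery {
    def main(args: Array[String]): Unit = {
      // Authenticate with DefaultAzureCredential (environment variables,
      // managed identity, or developer sign-in) and build a logs client.
      val client = new LogsQueryClientBuilder()
        .credential(new DefaultAzureCredentialBuilder().build())
        .buildClient()

      // Hypothetical KQL: count failed Spark applications in the last day.
      // An Azure Monitor alert rule could fire when this count exceeds zero.
      val kql = """SparkApplicationEvents_CL | where State_s == "FAILED" | summarize FailedJobs = count()"""

      val result = client.queryWorkspace("<workspace-id>", kql, QueryTimeInterval.LAST_DAY)
      result.getTable.getRows.forEach(row => println(row))
    }
  }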


Reference:

https://azure.microsoft.com/en-us/blog/monitoring-on-azure-hdinsight-part-4-workload-metrics-and-logs/



DRAG DROP (Drag and Drop is not supported)
Your company analyzes images from security cameras and sends alerts to security teams that respond to unusual activity. The solution uses Azure Databricks.

You need to send Apache Spark level events, Spark Structured Streaming metrics, and application metrics to Azure Monitor.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Select and Place:

  A. See Explanation section for answer.

Answer(s): A

Explanation:

Send application metrics using Dropwizard.
Spark uses a configurable metrics system based on the Dropwizard Metrics Library.
To send application metrics from Azure Databricks application code to Azure Monitor, follow these steps:
Step 1: Configure your Azure Databricks cluster to use the Databricks monitoring library.
Step 2: Build the spark-listeners-loganalytics-1.0-SNAPSHOT.jar JAR file.
Step 3: Create Dropwizard gauges or counters in your application code.
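Step 3 is the only one that touches application code. The sketch below follows the counter sample published with the Azure Databricks monitoring library (the mspnp/spark-monitoring project, which also builds the spark-listeners-loganalytics JAR); the namespace and counter name are arbitrary placeholders, and the builder API should be checked against the library version you build.

  import org.apache.spark.metrics.UserMetricsSystems

  object AlertMetrics {
    // Placeholder names; pick a namespace and counter that suit the job.
    private val MetricNamespace = "securitycameras"
    private val CounterName     = "alertsraised"

    // getMetricSystem registers a Dropwizard metric registry whose values
    // the library's Spark listeners forward to Azure Monitor (Log Analytics).
    private lazy val metrics = UserMetricsSystems.getMetricSystem(
      MetricNamespace,
      builder => builder.registerCounter(CounterName))

    // Call this wherever the application raises a security alert.
    def recordAlert(): Unit = metrics.counter(CounterName).inc()
  }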



You have an Azure data solution that contains an enterprise data warehouse in Azure Synapse Analytics named DW1.

Several users execute ad hoc queries to DW1 concurrently. You regularly perform automated data loads to DW1.
You need to ensure that the automated data loads have enough memory available to complete quickly and successfully when the ad hoc queries run.

What should you do?

  A. Assign a smaller resource class to the automated data load queries.
  B. Create sampled statistics for every column in each table of DW1.
  C. Assign a larger resource class to the automated data load queries.
  D. Hash distribute the large fact tables in DW1 before performing the automated data loads.

Answer(s): C

Explanation:

The performance capacity of a query is determined by the user's resource class.
Smaller resource classes reduce the maximum memory per query but increase concurrency; larger resource classes increase the maximum memory per query but reduce concurrency. Assigning the user that runs the automated loads to a larger resource class therefore reserves the memory the loads need even while the ad hoc queries continue to run.
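The assignment itself is a single role-membership change in DW1. Below is a minimal sketch over JDBC, assuming the Microsoft JDBC driver (mssql-jdbc) is on the classpath and a hypothetical LoadUser login that runs the automated loads; largerc is one of the built-in dynamic resource classes (smallrc, mediumrc, largerc, xlargerc) documented for dedicated SQL pools.

  import java.sql.DriverManager

  object AssignResourceClass {
    def main(args: Array[String]): Unit = {
      // Server, database, and credentials are placeholders.
      val url  = "jdbc:sqlserver://<server>.sql.azuresynapse.net:1433;database=DW1"
      val conn = DriverManager.getConnection(url, "<admin-user>", "<password>")
      try {
        // Resource classes are assigned by adding the loading user to one
        // of the built-in database roles; loads run by LoadUser will then
        // receive the larger per-query memory grant.
        val stmt = conn.createStatement()
        stmt.execute("EXEC sp_addrolemember 'largerc', 'LoadUser'")
        stmt.close()
      } finally conn.close()
    }
  }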


Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/resource-classes-for-workload-management



You are monitoring an Azure Stream Analytics job.

You discover that the Backlogged Input Events metric is increasing slowly and is consistently non-zero. You need to ensure that the job can handle all the events.
What should you do?

  A. Remove any named consumer groups from the connection and use $default.
  B. Change the compatibility level of the Stream Analytics job.
  C. Create an additional output stream for the existing input stream.
  D. Increase the number of streaming units (SUs).

Answer(s): D

Explanation:

Backlogged Input Events: the number of input events that are backlogged. A non-zero value for this metric implies that your job isn't able to keep up with the number of incoming events. If this value is slowly increasing or consistently non-zero, you should scale out your job by increasing the SUs.
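Streaming units can be raised in the portal, or programmatically. A rough sketch against the Azure Resource Manager REST API follows; the resource path, default transformation name ("Transformation"), api-version, and bearer-token handling are all assumptions to verify against the current Stream Analytics REST reference.

  import java.net.URI
  import java.net.http.{HttpClient, HttpRequest, HttpResponse}

  object ScaleStreamingUnits {
    def main(args: Array[String]): Unit = {
      // All identifiers below are placeholders.
      val subscriptionId = "<subscription-id>"
      val resourceGroup  = "<resource-group>"
      val jobName        = "<job-name>"
      val token          = "<aad-bearer-token>" // e.g. from `az account get-access-token`

      // Assumed ARM path and api-version for Transformations - Update.
      val url = s"https://management.azure.com/subscriptions/$subscriptionId" +
        s"/resourceGroups/$resourceGroup/providers/Microsoft.StreamAnalytics" +
        s"/streamingjobs/$jobName/transformations/Transformation?api-version=2020-03-01"

      // Raise the job's streaming units; a growing, non-zero backlog means
      // the current SU count cannot keep up with the input volume.
      val body = """{"properties": {"streamingUnits": 12}}"""

      val request = HttpRequest.newBuilder(URI.create(url))
        .header("Authorization", s"Bearer $token")
        .header("Content-Type", "application/json")
        .method("PATCH", HttpRequest.BodyPublishers.ofString(body))
        .build()

      val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
      println(s"${response.statusCode}: ${response.body}")
    }
  }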


Reference:

https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-monitoring






Post your Comments and Discuss Microsoft DP-300 exam with other Community members:

laks commented on December 26, 2024
So far seems good.
UNITED STATES

Jack commented on October 24, 2024
The questions are very good.
Anonymous

TheUser commented on October 23, 2024
So far seems good.
Anonymous

anonymus commented on October 23, 2024
Master database differential backup is not supported in SQL Server.
EUROPEAN UNION

Ntombi commented on October 17, 2024
I find the questions helpful for my exam preparation.
Anonymous

Ntombi commented on October 17, 2024
The questions help me to see if I understood what I have learned.
Anonymous

ntombi commented on October 17, 2024
Writing the exam at the end of the month.
Anonymous

Raby commented on August 13, 2024
Wonderful work, guys. The PDF version helped me pass. Thank you!
EUROPEAN UNION