Free Certified Data Engineer Professional Exam Braindumps (page: 18)


The business intelligence team has a dashboard configured to track various summary metrics for retail stores. This includes total sales for the previous day alongside totals and averages for a variety of time periods. The fields required to populate this dashboard have the following schema:


For demand forecasting, the Lakehouse contains a validated table of all itemized sales updated incrementally in near real-time. This table, named products_per_order, includes the following fields:


Because reporting on long-term sales trends is less volatile, analysts using the new dashboard only require the data to be refreshed once daily. Since the dashboard will be queried interactively by many users throughout a normal business day, it should return results quickly and minimize the total compute associated with each materialization.

Which solution meets the expectations of the end users while controlling and limiting possible costs?

  1. Populate the dashboard by configuring a nightly batch job to save the required values as a table overwritten with each update.
  2. Use Structured Streaming to configure a live dashboard against the products_per_order table within a Databricks notebook.
  3. Configure a webhook to execute an incremental read against products_per_order each time the dashboard is refreshed.
  4. Use the Delta Cache to persist the products_per_order table in memory to quickly update the dashboard with each query.
  5. Define a view against the products_per_order table and define the dashboard against this view.

Answer(s): A
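
For context, a minimal PySpark sketch of option A. The table and column names here (store_sales_summary, store_id, order_date, total) are assumptions, since the schemas are not reproduced above. A scheduled nightly job recomputes the summary and overwrites the table that the dashboard reads from:

    from pyspark.sql import functions as F

    # Nightly batch job: recompute the dashboard metrics from the validated
    # itemized-sales table and overwrite the pre-computed summary table.
    # Column names (store_id, order_date, total) are assumed for illustration.
    summary = (
        spark.table("products_per_order")
            .groupBy("store_id", F.to_date("order_date").alias("sales_date"))
            .agg(F.sum("total").alias("total_sales"),
                 F.avg("total").alias("avg_sale"))
    )

    (summary.write
        .format("delta")
        .mode("overwrite")                    # overwritten with each nightly update
        .saveAsTable("store_sales_summary"))  # dashboard queries this small table

Because the dashboard only reads the small pre-aggregated table, interactive queries stay fast and the heavy aggregation runs exactly once per day.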



A data ingestion task requires a one-TB JSON dataset to be written out to Parquet with a target part-file size of 512 MB. Because Parquet is being used instead of Delta Lake, built-in file-sizing features such as Auto-Optimize & Auto-Compaction cannot be used.

Which strategy will yield the best performance without shuffling data?

  1. Set spark.sql.files.maxPartitionBytes to 512 MB, ingest the data, execute the narrow transformations, and then write to parquet.
  2. Set spark.sql.shuffle.partitions to 2,048 partitions (1TB*1024*1024/512), ingest the data, execute the narrow transformations, optimize the data by sorting it (which automatically repartitions the data), and then write to parquet.
  3. Set spark.sql.adaptive.advisoryPartitionSizeInBytes to 512 MB, ingest the data, execute the narrow transformations, coalesce to 2,048 partitions (1TB*1024*1024/512), and then write to parquet.
  4. Ingest the data, execute the narrow transformations, repartition to 2,048 partitions (1TB*1024*1024/512), and then write to parquet.
  5. Set spark.sql.shuffle.partitions to 512, ingest the data, execute the narrow transformations, and then write to parquet.

Answer(s): A
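
For context, a sketch of option A with hypothetical input and output paths. spark.sql.files.maxPartitionBytes caps how much data each input partition reads, and because only narrow transformations follow, the roughly 512 MB partitions carry through to the Parquet part files with no shuffle:

    # Read ~512 MB of source data per input partition.
    spark.conf.set("spark.sql.files.maxPartitionBytes", str(512 * 1024 * 1024))

    df = spark.read.json("/mnt/raw/events/")    # hypothetical 1 TB JSON source

    cleaned = df.filter("value IS NOT NULL")    # narrow transformations only: no shuffle,
                                                # so the input partitioning is preserved

    cleaned.write.mode("overwrite").parquet("/mnt/curated/events/")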



A junior data engineer has been asked to develop a streaming data pipeline with a grouped aggregation using DataFrame df. The pipeline needs to calculate the average humidity and average temperature for each non-overlapping five-minute interval. Incremental state information should be maintained for 10 minutes for late-arriving data.

Streaming DataFrame df has the following schema:
"device_id INT, event_time TIMESTAMP, temp FLOAT, humidity FLOAT"

Code block:


Choose the response that correctly fills in the blank within the code block to complete this task.

  1. withWatermark("event_time", "10 minutes")
  2. awaitArrival("event_time", "10 minutes")
  3. await("event_time + '10 minutes'")
  4. slidingWindow("event_time", "10 minutes")
  5. delayWrite("event_time", "10 minutes")

Answer(s): A
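
For context, a sketch showing where option A fits; the surrounding aggregation code is an assumption, since the original code block is not reproduced above:

    from pyspark.sql import functions as F

    agg = (
        df.withWatermark("event_time", "10 minutes")     # keep state for up to 10 minutes of late data
          .groupBy(F.window("event_time", "5 minutes"))  # non-overlapping (tumbling) 5-minute windows
          .agg(F.avg("humidity").alias("avg_humidity"),
               F.avg("temp").alias("avg_temp"))
    )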



A data team's Structured Streaming job is configured to calculate running aggregates for item sales to update a downstream marketing dashboard. The marketing team has introduced a new promotion, and they would like to add a new field to track the number of times this promotion code is used for each item. A junior data engineer suggests updating the existing query as follows. Note that proposed changes are in bold.

Original query:


Proposed query:

.start("/item_agg")

Which step must also be completed to put the proposed query into production?

  1. Specify a new checkpointLocation
  2. Increase the shuffle partitions to account for additional aggregates
  3. Run REFRESH TABLE delta.`/item_agg`
  4. Register the data in the "/item_agg" directory to the Hive metastore
  5. Remove .option('mergeSchema', 'true') from the streaming write

Answer(s): A
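
For context, a sketch of restarting the modified query with a fresh checkpoint: the existing checkpoint holds state for the original aggregation and is incompatible with the changed query. The DataFrame name, output mode, and checkpoint path are assumptions:

    (item_agg_df.writeStream
        .format("delta")
        .outputMode("complete")
        .option("checkpointLocation", "/checkpoints/item_agg_v2")  # new location; old state cannot be reused
        .start("/item_agg"))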





