Databricks-Certified-Professional-Data-Engineer: Certified Data Engineer Professional
Free Practice Exam Questions (page: 6)
Updated On: 2-Jan-2026

A production workload incrementally applies updates from an external Change Data Capture feed to a Delta Lake table as an always-on Structured Streaming job. When data was initially migrated for this table, OPTIMIZE was executed and most data files were resized to 1 GB. Auto Optimize and Auto Compaction were both turned on for the streaming production job. A recent review of data files shows that most data files are under 64 MB, although each partition in the table contains at least 1 GB of data and the total table size is over 10 TB.
Which of the following likely explains these smaller file sizes?

  A. Databricks has autotuned to a smaller target file size to reduce duration of MERGE operations
  B. Z-order indices calculated on the table are preventing file compaction
  C. Bloom filter indices calculated on the table are preventing file compaction
  D. Databricks has autotuned to a smaller target file size based on the overall size of data in the table
  E. Databricks has autotuned to a smaller target file size based on the amount of data in each partition

Answer(s): A
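
The behavior described in answer A can also be forced explicitly through a Delta table property. The sketch below (PySpark on Databricks) sets the documented delta.tuneFileSizesForRewrites property, which tells Databricks to tune target file sizes down for tables frequently rewritten by MERGE; the table name is illustrative, not taken from the question.

  # Force workload-based file-size autotuning for a MERGE-heavy table.
  # Databricks also enables this automatically when it detects frequent rewrites.
  spark.sql("""
      ALTER TABLE bronze_cdc_target
      SET TBLPROPERTIES (delta.tuneFileSizesForRewrites = true)
  """)

  # Confirm the effective property on the table.
  spark.sql("SHOW TBLPROPERTIES bronze_cdc_target").show(truncate=False)

Without this MERGE-driven tuning, a table over 10 TB would normally be autotuned toward a 1 GB target file size, which is why answers D and E do not fit the observed sub-64 MB files.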



Which statement regarding stream-static joins and static Delta tables is correct?

  A. Each microbatch of a stream-static join will use the most recent version of the static Delta table as of each microbatch.
  B. Each microbatch of a stream-static join will use the most recent version of the static Delta table as of the job's initialization.
  C. The checkpoint directory will be used to track state information for the unique keys present in the join.
  D. Stream-static joins cannot use static Delta tables because of consistency issues.
  E. The checkpoint directory will be used to track updates to the static Delta table.

Answer(s): A
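
The key point behind answer A is that the static side of a stream-static join is re-resolved against the Delta table on every microbatch. A minimal sketch, assuming illustrative table and column names:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Streaming side: incremental reads from a Delta table.
  orders = spark.readStream.table("orders_bronze")

  # Static side: a plain batch read. Because the source is Delta, each
  # microbatch joins against the latest committed version of this table,
  # not a snapshot captured when the query started.
  customers = spark.read.table("customers_dim")

  enriched = orders.join(customers, on="customer_id", how="left")

  query = (enriched.writeStream
           .format("delta")
           .option("checkpointLocation", "/tmp/checkpoints/orders_enriched")
           .toTable("orders_enriched"))

The join itself is stateless, which is why the checkpoint tracks only streaming progress and no per-key join state, ruling out answers C and E.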



A junior data engineer has been asked to develop a streaming data pipeline with a grouped aggregation using DataFrame df. The pipeline needs to calculate the average humidity and average temperature for each non-overlapping five-minute interval. Events are recorded once per minute per device.

Streaming DataFrame df has the following schema:
"device_id INT, event_time TIMESTAMP, temp FLOAT, humidity FLOAT"
Code block:

[Code block image not reproduced; the blank to fill is the grouping expression in the groupBy clause.]
Choose the response that correctly fills in the blank within the code block to complete this task.

  A. to_interval("event_time", "5 minutes").alias("time")
  B. window("event_time", "5 minutes").alias("time")
  C. "event_time"
  D. window("event_time", "10 minutes").alias("time")
  E. lag("event_time", "10 minutes").alias("time")

Answer(s): B
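
For reference, here is a minimal sketch of the completed pipeline with answer B filled into the blank; everything around the groupBy (output table, checkpoint path, output mode) is an assumption, since the original code block is not reproduced here:

  from pyspark.sql.functions import avg, window

  # Non-overlapping five-minute tumbling windows per device.
  agg_df = (df
            .groupBy(window("event_time", "5 minutes").alias("time"),
                     "device_id")
            .agg(avg("temp").alias("avg_temp"),
                 avg("humidity").alias("avg_humidity")))

  query = (agg_df.writeStream
           .format("delta")
           .outputMode("complete")
           .option("checkpointLocation", "/tmp/checkpoints/device_5min_avg")
           .toTable("device_5min_avg"))

A single duration argument to window gives tumbling (non-overlapping) windows; a ten-minute window (answer D) would still be non-overlapping but would miss the required five-minute interval.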



A data architect has designed a system in which two Structured Streaming jobs will concurrently write to a single bronze Delta table. Each job subscribes to a different topic from an Apache Kafka source, but both write data with the same schema. To keep the directory structure simple, a data engineer has decided to nest a single checkpoint directory that will be shared by both streams.
The proposed directory structure is displayed below:

[Directory-structure image not reproduced; it shows a single checkpoint directory shared by both streams.]
Which statement describes whether this checkpoint directory structure is valid for the given scenario and why?

  A. No; Delta Lake manages streaming checkpoints in the transaction log.
  B. Yes; both of the streams can share a single checkpoint directory.
  C. No; only one stream can write to a Delta Lake table.
  D. Yes; Delta Lake supports infinite concurrent writers.
  E. No; each of the streams needs to have its own checkpoint directory.

Answer(s): E
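
Below is a minimal sketch of the valid layout, assuming illustrative topic, path, and table names: both streams append to the same bronze Delta table, but each writeStream gets its own checkpointLocation, because Structured Streaming requires a unique checkpoint directory per query.

  # Start one streaming query per Kafka topic, each with a private checkpoint.
  def start_topic_stream(topic, checkpoint_dir):
      raw = (spark.readStream
             .format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", topic)
             .load())
      return (raw.writeStream
              .format("delta")
              .option("checkpointLocation", checkpoint_dir)
              .toTable("bronze_events"))

  # Separate checkpoint directories; sharing one would corrupt each query's
  # source-offset and progress tracking.
  q_a = start_topic_stream("topic_a", "/checkpoints/bronze_events/topic_a")
  q_b = start_topic_stream("topic_b", "/checkpoints/bronze_events/topic_b")

Delta Lake itself handles the two concurrent appenders through its transaction log; the restriction applies only to the checkpoint directory, which is why the answer is "No" despite concurrent writes being supported.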





