Databricks Certified Data Engineer Professional Exam Questions
Certified Data Engineer Professional (Page 4)

Updated On: 23-Apr-2026

A production workload incrementally applies updates from an external Change Data Capture feed to a Delta Lake table as an always-on Structured Streaming job. When data was initially migrated for this table, OPTIMIZE was executed and most data files were resized to 1 GB. Auto Optimize and Auto Compaction were both turned on for the streaming production job. A recent review shows that most data files are under 64 MB, although each partition in the table contains at least 1 GB of data and the total table size is over 10 TB.

Which of the following likely explains these smaller file sizes?

  A. Databricks has autotuned to a smaller target file size to reduce duration of MERGE operations
  B. Z-order indices calculated on the table are preventing file compaction
  C. Bloom filter indices calculated on the table are preventing file compaction
  D. Databricks has autotuned to a smaller target file size based on the overall size of data in the table
  E. Databricks has autotuned to a smaller target file size based on the amount of data in each partition

Answer(s): A
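
For context, the autotuning behavior the correct answer describes can also be made explicit through Delta table properties. The sketch below (the table name is hypothetical) shows the properties involved; with delta.tuneFileSizesForRewrites enabled, Databricks targets smaller files on tables that are frequently rewritten by MERGE.

```python
# Hedged sketch: enable write optimization, auto compaction, and
# MERGE-oriented file-size tuning on a Delta table.
# The table name "cdc_bronze" is hypothetical.
spark.sql("""
    ALTER TABLE cdc_bronze SET TBLPROPERTIES (
        'delta.autoOptimize.optimizeWrite' = 'true',
        'delta.autoOptimize.autoCompact'   = 'true',
        'delta.tuneFileSizesForRewrites'   = 'true'
    )
""")
```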



Which statement regarding stream-static joins and static Delta tables is correct?

  A. Each microbatch of a stream-static join will use the most recent version of the static Delta table as of each microbatch.
  B. Each microbatch of a stream-static join will use the most recent version of the static Delta table as of the job's initialization.
  C. The checkpoint directory will be used to track state information for the unique keys present in the join.
  D. Stream-static joins cannot use static Delta tables because of consistency issues.
  E. The checkpoint directory will be used to track updates to the static Delta table.

Answer(s): A
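
A minimal stream-static join sketch (table names and checkpoint path are hypothetical) illustrating the behavior the correct answer describes: the static Delta table is resolved anew for each microbatch, so every batch joins against its latest committed version.

```python
# Streaming fact data joined against a static Delta dimension table;
# the static side is re-read in each microbatch.
streaming_df = spark.readStream.table("bronze_events")
static_df = spark.read.table("dim_customers")

joined = streaming_df.join(static_df, "customer_id")

query = (joined.writeStream
         .option("checkpointLocation", "/chk/silver_events")
         .toTable("silver_events"))
```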



A junior data engineer has been asked to develop a streaming data pipeline with a grouped aggregation using DataFrame df. The pipeline needs to calculate the average humidity and average temperature for each non-overlapping five-minute interval. Events are recorded once per minute per device.

Streaming DataFrame df has the following schema:

"device_id INT, event_time TIMESTAMP, temp FLOAT, humidity FLOAT"

Code block:

[code snippet not reproduced in this copy; it contains a single blank to be completed by one of the options below]

Choose the response that correctly fills in the blank within the code block to complete this task.

  A. to_interval("event_time", "5 minutes").alias("time")
  B. window("event_time", "5 minutes").alias("time")
  C. "event_time"
  D. window("event_time", "10 minutes").alias("time")
  E. lag("event_time", "10 minutes").alias("time")

Answer(s): B
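
A plausible reconstruction of the completed code block (the result variable and column aliases are assumptions): window("event_time", "5 minutes") buckets events into non-overlapping, i.e. tumbling, five-minute intervals.

```python
from pyspark.sql.functions import avg, window

# Tumbling five-minute windows: each event is assigned to exactly one
# non-overlapping interval, then humidity and temperature are averaged.
result = (df
          .groupBy(window("event_time", "5 minutes").alias("time"))
          .agg(avg("humidity").alias("avg_humidity"),
               avg("temp").alias("avg_temp")))
```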



A data architect has designed a system in which two Structured Streaming jobs will concurrently write to a single bronze Delta table. Each job subscribes to a different topic from an Apache Kafka source, but both write data with the same schema. To keep the directory structure simple, a data engineer has decided to nest a single checkpoint directory that will be shared by both streams.

The proposed directory structure is displayed below:

[directory listing not reproduced in this copy; it shows one checkpoint directory nested under the bronze table's directory, shared by both streams]

Which statement describes whether this checkpoint directory structure is valid for the given scenario and why?

  A. No; Delta Lake manages streaming checkpoints in the transaction log.
  B. Yes; both of the streams can share a single checkpoint directory.
  C. No; only one stream can write to a Delta Lake table.
  D. Yes; Delta Lake supports infinite concurrent writers.
  E. No; each of the streams needs to have its own checkpoint directory.

Answer(s): E
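
To make the fix concrete, here is a hedged sketch (broker address, topic names, and paths are hypothetical) in which both streams append to the same bronze table but each keeps its own checkpoint directory, since a checkpoint encodes the offsets and state of exactly one query.

```python
# Each stream gets a dedicated checkpoint directory; the target table is shared.
def start_stream(topic, checkpoint_path):
    return (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")
            .option("subscribe", topic)
            .load()
            .writeStream
            .option("checkpointLocation", checkpoint_path)
            .toTable("bronze"))

q1 = start_stream("orders",   "/bronze/_checkpoints/orders")
q2 = start_stream("payments", "/bronze/_checkpoints/payments")
```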



A Structured Streaming job deployed to production has been experiencing delays during peak hours of the day. At present, during normal execution, each microbatch of data is processed in less than 3 seconds. During peak hours of the day, execution time for each microbatch becomes very inconsistent, sometimes exceeding 30 seconds. The streaming write is currently configured with a trigger interval of 10 seconds.

Holding all other variables constant and assuming records need to be processed in less than 10 seconds, which adjustment will meet the requirement?

  A. Decrease the trigger interval to 5 seconds; triggering batches more frequently allows idle executors to begin processing the next batch while longer-running tasks from previous batches finish.
  B. Increase the trigger interval to 30 seconds; setting the trigger interval near the maximum execution time observed for each batch is always best practice to ensure no records are dropped.
  C. The trigger interval cannot be modified without modifying the checkpoint directory; to maintain the current stream state, increase the number of shuffle partitions to maximize parallelism.
  D. Use the trigger once option and configure a Databricks job to execute the query every 10 seconds; this ensures all backlogged records are processed with each batch.
  E. Decrease the trigger interval to 5 seconds; triggering batches more frequently may prevent records from backing up and large batches from causing spill.

Answer(s): E
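
As a sketch of the chosen adjustment (table name and checkpoint path are hypothetical), only the trigger clause changes; a shorter interval means each microbatch accumulates less input, which keeps batch sizes small during peak hours.

```python
# Lower the trigger interval from 10 seconds to 5 seconds so less data
# backs up between microbatches during peak load.
query = (df.writeStream
         .option("checkpointLocation", "/chk/peak_stream")
         .trigger(processingTime="5 seconds")
         .toTable("target_table"))
```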



Which statement describes Delta Lake Auto Compaction?

  A. An asynchronous job runs after the write completes to detect if files could be further compacted; if so, an OPTIMIZE job is executed toward a default of 1 GB.
  B. Before a Jobs cluster terminates, OPTIMIZE is executed on all tables modified during the most recent job.
  C. Optimized writes use logical partitions instead of directory partitions; because partition boundaries are only represented in metadata, fewer small files are written.
  D. Data is queued in a messaging bus instead of committing data directly to memory; all data is committed from the messaging bus in one batch once the job is complete.
  E. An asynchronous job runs after the write completes to detect if files could be further compacted; if so, an OPTIMIZE job is executed toward a default of 128 MB.

Answer(s): E
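
For reference, a hedged sketch of how Auto Compaction is enabled (the table name is hypothetical): after a write, Databricks checks for partitions with many small files and compacts them toward a default maximum of 128 MB, in contrast to the 1 GB target of a manual OPTIMIZE.

```python
# Session-level switch for auto compaction on Delta writes.
spark.conf.set("spark.databricks.delta.autoCompact.enabled", "true")

# Equivalent per-table property.
spark.sql("""
    ALTER TABLE bronze SET TBLPROPERTIES (
        'delta.autoOptimize.autoCompact' = 'true'
    )
""")
```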



Which statement characterizes the general programming model used by Spark Structured Streaming?

  A. Structured Streaming leverages the parallel processing of GPUs to achieve highly parallel data throughput.
  B. Structured Streaming is implemented as a messaging bus and is derived from Apache Kafka.
  C. Structured Streaming uses specialized hardware and I/O streams to achieve sub-second latency for data transfer.
  D. Structured Streaming models new data arriving in a data stream as new rows appended to an unbounded table.
  E. Structured Streaming relies on a distributed network of nodes that hold incremental state values for cached stages.

Answer(s): D
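
A tiny illustration of that model (the query name is arbitrary): the built-in rate source continuously emits rows, and the query treats them as rows appended to an unbounded input table that is processed incrementally.

```python
# The rate source emits (timestamp, value) rows continuously; conceptually,
# each new row is appended to an unbounded input table.
stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

# The memory sink materializes the incrementally computed result so it can
# be queried like any table.
query = (stream.writeStream
         .format("memory")
         .queryName("unbounded_demo")
         .outputMode("append")
         .start())
```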



Which configuration parameter directly affects the size of a Spark partition upon ingestion of data into Spark?

  A. spark.sql.files.maxPartitionBytes
  B. spark.sql.autoBroadcastJoinThreshold
  C. spark.sql.files.openCostInBytes
  D. spark.sql.adaptive.coalescePartitions.minPartitionNum
  E. spark.sql.adaptive.advisoryPartitionSizeInBytes

Answer(s): A
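
A short sketch (the input path is hypothetical) of how this parameter shapes partitioning at read time: the default is 128 MB, and lowering it increases the number of partitions created on ingest.

```python
# Cap each input partition at 64 MB instead of the 128 MB default, so the
# same input files are split across more, smaller Spark partitions.
spark.conf.set("spark.sql.files.maxPartitionBytes", str(64 * 1024 * 1024))

df = spark.read.parquet("/data/events")
print(df.rdd.getNumPartitions())  # higher count than at the default setting
```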




