Databricks Certified Data Engineer Professional Exam Questions
Certified Data Engineer Professional (Page 17)

Updated On: 12-Apr-2026

A user new to Databricks is trying to troubleshoot long execution times for some pipeline logic they are working on. Presently, the user is executing code cell-by-cell, using display() calls to confirm code is producing the logically correct results as new transformations are added to an operation. To get a measure of average time to execute, the user is running each cell multiple times interactively.

Which of the following adjustments will get a more accurate measure of how code is likely to perform in production?

  1. Scala is the only language that can be accurately tested using interactive notebooks; because the best performance is achieved by using Scala code compiled to JARs, all PySpark and Spark SQL logic should be refactored.
  2. The only way to meaningfully troubleshoot code execution times in development notebooks is to use production-sized data and production-sized clusters with Run All execution.
  3. Production code development should only be done using an IDE; executing code against a local build of open source Spark and Delta Lake will provide the most accurate benchmarks for how code will perform in production.
  4. Calling display() forces a job to trigger, while many transformations will only add to the logical query plan; because of caching, repeated execution of the same logic does not provide meaningful results.
  5. The Jobs UI should be leveraged to occasionally run the notebook as a job and track execution time during incremental code development because Photon can only be enabled on clusters launched for scheduled jobs.

Answer(s): D
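The caching pitfall behind option 4 can be illustrated without Spark at all. The sketch below uses Python's `functools.lru_cache` as a stand-in for Spark's caching: once a result is cached, re-running the same logic interactively measures the cache hit, not the real cost of the work.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(n: int) -> int:
    """Stand-in for a transformation whose first run does real work."""
    time.sleep(0.05)
    return n * n

t0 = time.perf_counter()
expensive(10)                      # first run: pays the full cost
first = time.perf_counter() - t0

t0 = time.perf_counter()
expensive(10)                      # repeat run: served from cache
second = time.perf_counter() - t0

print(second < first)  # True: repeated timings measure the cache, not the work
```

The same effect occurs in Spark when intermediate results are cached: timing the second and later executions of a cell systematically understates production execution time.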



A production cluster has 3 executor nodes and uses the same virtual machine type for the driver and executor.

When evaluating the Ganglia Metrics for this cluster, which indicator would signal a bottleneck caused by code executing on the driver?

  1. The five-minute Load Average remains consistent/flat
  2. Bytes Received never exceeds 80 million bytes per second
  3. Total Disk Space remains constant
  4. Network I/O never spikes
  5. Overall cluster CPU utilization is around 25%

Answer(s): E
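The reasoning behind option 5 is simple arithmetic: with one driver and three identically sized executors, code running only on the driver keeps one of four nodes busy, so cluster-wide CPU utilization hovers near one quarter. A quick sketch:

```python
# 1 driver + 3 executors, all the same VM type (per the question).
nodes = 1 + 3
busy_nodes = 1  # driver-only code leaves the executors idle
utilization = busy_nodes / nodes
print(f"{utilization:.0%}")  # 25%
```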



Where in the Spark UI can one diagnose a performance problem induced by not leveraging predicate push-down?

  1. In the Executor's log file, by grepping for "predicate push-down"
  2. In the Stage's Detail screen, in the Completed Stages table, by noting the size of data read from the Input column
  3. In the Storage Detail screen, by noting which RDDs are not stored on disk
  4. In the Delta Lake transaction log, by noting the column statistics
  5. In the Query Detail screen, by interpreting the Physical Plan

Answer(s): E



Review the following error traceback:



Which statement describes the error being raised?

  1. The code executed was PySpark but was executed in a Scala notebook.
  2. There is no column in the table named heartrateheartrateheartrate
  3. There is a type error because a column object cannot be multiplied.
  4. There is a type error because a DataFrame object cannot be multiplied.
  5. There is a syntax error because the heartrate column is not correctly identified as a column.

Answer(s): B
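The tripled column name in option 2 is a classic Python string-repetition pitfall: multiplying a Python `str` by an integer repeats it, so an expression like `3 * "heartrate"` (an assumed cause, since the traceback itself is not reproduced here) asks Spark for a single nonexistent column. A Spark-free sketch:

```python
# In Python, multiplying a string by an integer repeats it:
expr = 3 * "heartrate"
print(expr)  # heartrateheartrateheartrate
print(len(expr) == 3 * len("heartrate"))  # True

# In PySpark the fix is to multiply a Column object, not the name string
# (not executed here; shown for illustration):
#   from pyspark.sql import functions as F
#   df.select((3 * F.col("heartrate")).alias("heartrate_x3"))
```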



Which distribution does Databricks support for installing custom Python code packages?

  1. sbt
  2. CRAN
  3. npm
  4. Wheels
  5. jars

Answer(s): D
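Wheels (`.whl` files) are Python's standard binary package format, and the PEP 427 filename itself encodes the package metadata. A small sketch with an illustrative (made-up) package name:

```python
# PEP 427 wheel filename: {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl
wheel = "mypkg-0.1.0-py3-none-any.whl"  # illustrative name, not a real package
name, version, py_tag, abi_tag, plat_tag = wheel[:-len(".whl")].split("-")
print(name, version)       # mypkg 0.1.0
print(plat_tag == "any")   # True: a pure-Python wheel installs on any platform
# On Databricks, such a wheel is typically installed with `%pip install <path>.whl`
# in a notebook, or attached to the cluster as a library.
```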



Which Python variable contains a list of directories to be searched when trying to locate required modules?

  1. importlib.resource_path
  2. sys.path
  3. os.path
  4. pypi.path
  5. pylib.source

Answer(s): B
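`sys.path` is an ordinary Python list, so inspecting or extending the module search path at runtime is a one-line operation; a minimal sketch (the appended directory is illustrative):

```python
import sys

# sys.path is the list of directories Python searches for importable modules.
print(type(sys.path) is list)  # True
print(sys.path[:3])            # e.g. script dir, stdlib locations, site-packages

# Appending a directory (illustrative path) makes its modules importable:
sys.path.append("/tmp/my_modules")
print("/tmp/my_modules" in sys.path)  # True
```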



Incorporating unit tests into a PySpark application requires upfront attention to the design of your jobs, or a potentially significant refactoring of existing code.

Which statement describes a main benefit that offsets this additional effort?

  1. Improves the quality of your data
  2. Validates a complete use case of your application
  3. Troubleshooting is easier since all steps are isolated and tested individually
  4. Yields faster deployment and execution times
  5. Ensures that all steps interact correctly to achieve the desired end result

Answer(s): C
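The isolation benefit in option 3 comes from factoring pipeline logic into small pure functions that can be asserted against known inputs independently of Spark; a Spark-free sketch with illustrative names:

```python
# Illustrative pipeline step factored out as a pure, unit-testable function.
def bpm_to_zone(bpm: int) -> str:
    """Map a heart-rate reading to a training zone."""
    if bpm < 100:
        return "rest"
    if bpm < 140:
        return "moderate"
    return "intense"

# Each function is tested in isolation, so a failing test points at one step
# rather than at the whole pipeline.
assert bpm_to_zone(72) == "rest"
assert bpm_to_zone(120) == "moderate"
assert bpm_to_zone(160) == "intense"
print("all unit tests passed")
```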



Which statement describes integration testing?

  1. Validates interactions between subsystems of your application
  2. Requires an automated testing framework
  3. Requires manual intervention
  4. Validates an application use case
  5. Validates behavior of individual elements of your application

Answer(s): A





