Databricks DATABRICKS-CERTIFIED-ASSOCIATE-DEVELOPER-FOR-APACHE-SPARK-3.0 Exam Questions
Certified Associate Developer for Apache Spark (Page 4)

Updated On: 21-Feb-2026

Which of the following statements about RDDs is incorrect?

  1. An RDD consists of a single partition.
  2. The high-level DataFrame API is built on top of the low-level RDD API.
  3. RDDs are immutable.
  4. RDD stands for Resilient Distributed Dataset.
  5. RDDs are great for precisely instructing Spark on how to do a query.

Answer(s): A

Explanation:

An RDD consists of a single partition.
Quite the opposite: Spark partitions RDDs and distributes the partitions across multiple nodes.



Which of the elements that are labeled with a circle and a number contain an error or are misrepresented?

  1. 1, 10
  2. 1, 8
  3. 10
  4. 7, 9, 10
  5. 1, 4, 6, 9

Answer(s): B

Explanation:

1: Correct – This should just read "API" or "DataFrame API". The DataFrame is not part of the SQL API. To make a DataFrame accessible via SQL, you first need to create a DataFrame view. That view can then be accessed via SQL.
4: Although "K_38_INU" looks odd, it is a completely valid name for a DataFrame column.
6: No, StringType is a correct type.
7: Although a StringType may not be the most efficient way to store a phone number, there is nothing fundamentally wrong with using this type here.
8: Correct – TreeType is not a type that Spark supports.
9: No, Spark DataFrames support ArrayType variables. In this case, the variable would represent a sequence of elements with type LongType, which is also a valid type for Spark DataFrames.
10: There is nothing wrong with this row.
More info: Data Types - Spark 3.1.1 Documentation (https://bit.ly/3aAPKJT)



Which of the following describes characteristics of the Spark UI?

  1. Via the Spark UI, workloads can be manually distributed across executors.
  2. Via the Spark UI, stage execution speed can be modified.
  3. The Scheduler tab shows how jobs that are run in parallel by multiple users are distributed across the cluster.
  4. There is a place in the Spark UI that shows the property spark.executor.memory.
  5. Some of the tabs in the Spark UI are named Jobs, Stages, Storage, DAGs, Executors, and SQL.

Answer(s): D

Explanation:

There is a place in the Spark UI that shows the property spark.executor.memory.
Correct, you can see Spark properties such as spark.executor.memory in the Environment tab.
Some of the tabs in the Spark UI are named Jobs, Stages, Storage, DAGs, Executors, and SQL.
Wrong – Jobs, Stages, Storage, Executors, and SQL are all tabs in the Spark UI. DAGs can be inspected in the "Jobs" tab in the job details or in the Stages or SQL tab, but are not a separate tab.
Via the Spark UI, workloads can be manually distributed across executors.
No, the Spark UI is meant for inspecting the inner workings of Spark, which ultimately helps to understand, debug, and optimize Spark jobs.
Via the Spark UI, stage execution speed can be modified.
No, see above.
The Scheduler tab shows how jobs that are run in parallel by multiple users are distributed across the cluster.
No, there is no Scheduler tab.



Which of the following statements about broadcast variables is correct?

  1. Broadcast variables are serialized with every single task.
  2. Broadcast variables are commonly used for tables that do not fit into memory.
  3. Broadcast variables are immutable.
  4. Broadcast variables are occasionally dynamically updated on a per-task basis.
  5. Broadcast variables are local to the worker node and not shared across the cluster.

Answer(s): C

Explanation:

Broadcast variables are local to the worker node and not shared across the cluster.
This is wrong because broadcast variables are meant to be shared across the cluster. As such, they are never just local to the worker node, but available to all worker nodes.
Broadcast variables are commonly used for tables that do not fit into memory.
This is wrong because broadcast variables can only be broadcast because they are small and do fit into memory.
Broadcast variables are serialized with every single task.
This is wrong because they are cached on every machine in the cluster, which precisely avoids having to serialize them with every single task.
Broadcast variables are occasionally dynamically updated on a per-task basis.
This is wrong because broadcast variables are immutable – they are never updated.
More info: Spark – The Definitive Guide, Chapter 14



Which of the following is a viable way to improve Spark's performance when dealing with large amounts of data, given that there is only a single application running on the cluster?

  1. Increase values for the properties spark.default.parallelism and spark.sql.shuffle.partitions
  2. Decrease values for the properties spark.default.parallelism and spark.sql.partitions
  3. Increase values for the properties spark.sql.parallelism and spark.sql.partitions
  4. Increase values for the properties spark.sql.parallelism and spark.sql.shuffle.partitions
  5. Increase values for the properties spark.dynamicAllocation.maxExecutors, spark.default.parallelism, and spark.sql.shuffle.partitions

Answer(s): A

Explanation:

Decrease values for the properties spark.default.parallelism and spark.sql.partitions
No, these values would need to be increased, and there is no property spark.sql.partitions.
Increase values for the properties spark.sql.parallelism and spark.sql.partitions
Wrong, there is no property spark.sql.parallelism (nor spark.sql.partitions).
Increase values for the properties spark.sql.parallelism and spark.sql.shuffle.partitions
See above: spark.sql.parallelism does not exist.
Increase values for the properties spark.dynamicAllocation.maxExecutors, spark.default.parallelism, and spark.sql.shuffle.partitions
The property spark.dynamicAllocation.maxExecutors is only in effect if dynamic allocation is enabled via the spark.dynamicAllocation.enabled property, which is disabled by default. Dynamic allocation can be useful when running multiple applications on the same cluster in parallel. However, in this case there is only a single application running on the cluster, so enabling dynamic allocation would not yield a performance benefit.
More info: Practical Spark Tips For Data Scientists | Experfy.com and Basics of Apache Spark Configuration Settings | by Halil Ertan | Towards Data Science (https://bit.ly/3gA0A6w, https://bit.ly/2QxhNTr)





