Databricks-Machine-Learning-Associate: Certified Machine Learning Associate
Free Practice Exam Questions

A machine learning engineer has created a Feature Table new_table using Feature Store Client fs.
When creating the table, they specified a metadata description with key information about the Feature Table. They now want to retrieve that metadata programmatically.
Which of the following lines of code will return the metadata description?

  A. There is no way to return the metadata description programmatically.
  B. fs.create_training_set("new_table")
  C. fs.get_table("new_table").description
  D. fs.get_table("new_table").load_df()
  E. fs.get_table("new_table")

Answer(s): C

Explanation:

To retrieve the metadata description of a feature table created with the Feature Store Client (referred to here as fs), call get_table on the client with the table name and then access the description attribute of the returned object. The snippet fs.get_table("new_table").description does exactly this: it fetches the FeatureTable object for "new_table" and reads its description attribute, where the metadata supplied at creation time is stored. The other options either create a training set, load the table's data, or return the table object without accessing its description, and so do not retrieve the metadata description.


Reference:

Databricks Feature Store documentation (Accessing Feature Table Metadata).
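As a rough sketch of how this looks end to end (assuming a Databricks environment with the Feature Store client available and a feature table already registered as "new_table"; in practice the name is usually qualified as <database>.<table>):

    from databricks.feature_store import FeatureStoreClient

    fs = FeatureStoreClient()

    # get_table returns a FeatureTable object carrying the table's metadata;
    # description holds the free-text comment supplied when the table was created.
    new_table_metadata = fs.get_table("new_table")
    print(new_table_metadata.description)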



A data scientist has a Spark DataFrame spark_df. They want to create a new Spark DataFrame that contains only the rows from spark_df where the value in column price is greater than 0.
Which of the following code blocks will accomplish this task?

  A. spark_df[spark_df["price"] > 0]
  B. spark_df.filter(col("price") > 0)
  C. SELECT * FROM spark_df WHERE price > 0
  D. spark_df.loc[spark_df["price"] > 0,:]
  E. spark_df.loc[:,spark_df["price"] > 0]

Answer(s): B

Explanation:

To filter rows in a Spark DataFrame based on a condition, use the filter method with a column expression. The correct PySpark syntax is spark_df.filter(col("price") > 0), which keeps only the rows where the value in the "price" column is greater than 0. The col function, imported from pyspark.sql.functions, refers to a column by name so it can be used in such expressions. The other options either rely on pandas-style indexing (and .loc in particular is not supported on Spark DataFrames) or are raw SQL, which would have to be run through spark.sql against a registered temporary view rather than as a Python expression.


Reference:

PySpark DataFrame API documentation (Filtering DataFrames).
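A minimal, self-contained sketch of the answer (the sample rows are invented purely for illustration; on Databricks a SparkSession is already available as spark):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.getOrCreate()

    # Toy data: only the first row has a positive price
    spark_df = spark.createDataFrame(
        [(1, 19.99), (2, -4.50), (3, 0.0)], ["id", "price"]
    )

    # Keep only rows where price is strictly greater than 0
    positive_df = spark_df.filter(col("price") > 0)
    positive_df.show()  # only the row with id=1 remains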



A health organization is developing a classification model to determine whether or not a patient currently has a specific type of infection. The organization's leaders want to maximize the number of positive cases identified by the model.
Which of the following classification metrics should be used to evaluate the model?

  A. RMSE
  B. Precision
  C. Area under the receiver operating characteristic (ROC) curve
  D. Accuracy
  E. Recall

Answer(s): E

Explanation:

When the goal is to identify as many positive cases as possible, the metric of interest is Recall. Recall, also known as sensitivity or the true positive rate, measures the proportion of actual positives that the model correctly identifies. It is the key metric when missing a positive case (a false negative) has serious consequences, as in medical diagnostics. RMSE is a regression metric and does not apply to classification, while Precision and Accuracy measure other aspects of performance and do not specifically reward catching every positive case.


Reference:

Classification Metrics in Machine Learning (Understanding Recall).
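Recall can be computed directly from predictions; a small sketch using scikit-learn follows (the labels are made-up illustrative values, not data from the question):

    from sklearn.metrics import recall_score

    # 1 = infected, 0 = not infected (invented example labels)
    y_true = [1, 1, 1, 0, 0, 1]
    y_pred = [1, 0, 1, 0, 1, 1]

    # recall = true positives / (true positives + false negatives)
    # Here: 3 true positives, 1 false negative -> 0.75
    print(recall_score(y_true, y_pred))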



In which of the following situations is it preferable to impute missing feature values with their median value over the mean value?

  A. When the features are of the categorical type
  B. When the features are of the boolean type
  C. When the features contain a lot of extreme outliers
  D. When the features contain no outliers
  E. When the features contain no missing values

Answer(s): C

Explanation:

Imputing missing values with the median is preferred over the mean when the data contains many extreme outliers. The median is a more robust measure of central tendency because it is not pulled toward the outliers the way the mean is, so the imputed values stay representative of a typical data point and better preserve the shape of the feature's distribution. Categorical and boolean features are usually imputed with the mode rather than the mean or median, and when a feature has no outliers (or no missing values at all) the choice between mean and median makes little practical difference.


Reference:

Data Imputation Techniques (Dealing with Outliers).
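The effect is easy to see on a toy numeric feature; the sketch below uses pandas, with values invented to include one extreme outlier:

    import numpy as np
    import pandas as pd

    # One extreme outlier (1000.0) and one missing value
    prices = pd.Series([10.0, 12.0, 11.0, 9.0, 1000.0, np.nan])

    print(prices.mean())    # 208.4 -> dragged upward by the outlier
    print(prices.median())  # 11.0  -> robust to the outlier

    # Imputing with the median keeps the filled value near typical prices
    prices_imputed = prices.fillna(prices.median())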


