Databricks-Machine-Learning-Associate: Certified Machine Learning Associate
Free Practice Exam Questions (page: 3)
Updated On: 2-Jan-2026

A data scientist has replaced missing values in their feature set with each respective feature variable's median value. A colleague suggests that the data scientist is throwing away valuable information by doing this.
Which of the following approaches can they take to include as much information as possible in the feature set?

  1. Impute the missing values using each respective feature variable's mean value instead of the median value
  2. Refrain from imputing the missing values in favor of letting the machine learning algorithm determine how to handle them
  3. Remove all feature variables that originally contained missing values from the feature set
  4. Create a binary feature variable for each feature that contained missing values indicating whether each row's value has been imputed
  5. Create a constant feature variable for each feature that contained missing values indicating the percentage of rows from the feature that was originally missing

Answer(s): 4

Explanation:

By creating a binary feature variable for each feature with missing values to indicate whether a value has been imputed, the data scientist can preserve information about the original state of the data. This approach maintains the integrity of the dataset by marking which values are original and which are synthetic (imputed). Here are the steps to implement this approach:
  1. Identify missing values: determine which features contain missing values.
  2. Impute missing values: continue with median imputation or choose another method (mean, mode, regression, etc.) to fill them.
  3. Create indicator variables: for each feature that had missing values, add a new binary feature that is 1 if the original value was missing and imputed, and 0 otherwise.
  4. Integrate the data: add these binary features to the existing dataset. This keeps a record of where imputation occurred, allowing models to weight those observations differently.
  5. Adjust the model: account for the new features, which might involve considering interactions between the binary indicators and other features.
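The steps above can be sketched in a few lines of pandas. This is a minimal illustration, not the question's actual data; the column names (`age`, `income`) are assumptions:

```python
import numpy as np
import pandas as pd

# Illustrative feature set with missing values (hypothetical data).
df = pd.DataFrame({
    "age": [25.0, np.nan, 40.0, 31.0],
    "income": [50_000.0, 62_000.0, np.nan, 58_000.0],
})

for col in ["age", "income"]:
    # Binary indicator: 1 if the row's value was missing (and will be imputed).
    df[f"{col}_was_missing"] = df[col].isna().astype(int)
    # Median imputation, as in the original approach.
    df[col] = df[col].fillna(df[col].median())

print(df)
```

scikit-learn's SimpleImputer supports the same pattern directly via its add_indicator=True parameter, which appends the missing-value indicator columns automatically.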
Reference
"Feature Engineering for Machine Learning" by Alice Zheng and Amanda Casari (O'Reilly Media, 2018), especially the sections on handling missing data. Scikit-learn documentation on imputing missing values: https://scikit-learn.org/stable/modules/impute.html



A data scientist wants to explore summary statistics for Spark DataFrame spark_df. The data scientist wants to see the count, mean, standard deviation, minimum, maximum, and interquartile range (IQR) for each numerical feature.
Which of the following lines of code can the data scientist run to accomplish the task?

  1. spark_df.summary()
  2. spark_df.stats()
  3. spark_df.describe().head()
  4. spark_df.printSchema()
  5. spark_df.toPandas()

Answer(s): 1

Explanation:

The summary() function in PySpark's DataFrame API provides descriptive statistics for numeric columns: count, mean, standard deviation, min, the 25%, 50%, and 75% percentiles, and max. The 25% and 75% percentiles are exactly what is needed to compute the interquartile range (IQR = 75% - 25%). Here are the steps on how it can be used:
  1. Import PySpark: ensure PySpark is installed and correctly configured in the Databricks environment.
  2. Load data: load the data into a Spark DataFrame.
  3. Apply summary: run spark_df.summary() to generate the summary statistics.
  4. View results: the output includes count, mean, standard deviation, min, max, and the quartiles from which the IQR can be derived.
Reference
PySpark Documentation:
https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.summary.html



An organization is developing a feature repository and is electing to one-hot encode all categorical feature variables. A data scientist suggests that the categorical feature variables should not be one-hot encoded within the feature repository.
Which of the following explanations justifies this suggestion?

  1. One-hot encoding is not supported by most machine learning libraries.
  2. One-hot encoding is dependent on the target variable's values which differ for each application.
  3. One-hot encoding is computationally intensive and should only be performed on small samples of training sets for individual machine learning problems.
  4. One-hot encoding is not a common strategy for representing categorical feature variables numerically.
  5. One-hot encoding is a potentially problematic categorical variable strategy for some machine learning algorithms.

Answer(s): 5

Explanation:

One-hot encoding transforms categorical variables into a format that can be provided to machine learning algorithms to better predict the output. However, when done prematurely or universally within a feature repository, it can be problematic:
  1. Dimensionality increase: one-hot encoding significantly expands the feature space, especially for high-cardinality features, which can lead to high memory consumption and slower computation.
  2. Model specificity: some models handle categorical variables natively (such as decision trees and gradient-boosting algorithms), and premature one-hot encoding can cause inefficiency and loss of information (e.g., ordinal relationships).
  3. Sparse matrix issue: it often produces a sparse matrix where most values are zero, which can be inefficient in both storage and computation for some algorithms.
  4. Generalization vs. specificity: encoding should ideally be tailored to specific models and use cases rather than applied universally in a feature repository.
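The dimensionality point can be made concrete with a short pandas sketch. The `zip_code` feature here is a hypothetical example of a high-cardinality categorical, not data from the question:

```python
import pandas as pd

# Hypothetical high-cardinality categorical: 1000 distinct ZIP codes.
df = pd.DataFrame({"zip_code": [f"{i:05d}" for i in range(1000)]})

# One-hot encoding turns one column into one column per distinct category.
encoded = pd.get_dummies(df, columns=["zip_code"])

print(df.shape[1], "->", encoded.shape[1])  # 1 -> 1000
```

Stored this way in a shared feature repository, every downstream consumer inherits the 1000-column representation, even models that would have handled the raw categorical natively.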
Reference
"Feature Engineering and Selection: A Practical Approach for Predictive Models" by Max Kuhn and Kjell Johnson (CRC Press, 2019).



A data scientist has created two linear regression models. The first model uses price as a label variable and the second model uses log(price) as a label variable.
When evaluating the RMSE of each model by comparing the label predictions to the actual price values, the data scientist notices that the RMSE for the second model is much larger than the RMSE of the first model.
Which of the following possible explanations for this difference is invalid?

  1. The second model is much more accurate than the first model
  2. The data scientist failed to exponentiate the predictions in the second model prior to computing the RMSE
  3. The data scientist failed to take the log of the predictions in the first model prior to computing the RMSE
  4. The first model is much more accurate than the second model
  5. The RMSE is an invalid evaluation metric for regression problems

Answer(s): 5

Explanation:

The Root Mean Squared Error (RMSE) is a standard and widely used metric for evaluating the accuracy of regression models. The statement that it is invalid is incorrect. Here's a breakdown of why the other statements are or are not valid:

  1. Transformations and RMSE calculation: if a model's predictions were made on a transformed scale (e.g., log), they should be converted back to the original scale before computing RMSE. Missteps in this conversion can produce misleading RMSE values.
  2. Accuracy of models: without both sets of predictions properly converted back to the original price scale, we cannot definitively say which model is more accurate.
  3. Appropriateness of RMSE: RMSE is entirely valid for regression problems, as it measures how accurately a model predicts the outcome, expressed in the same units as the dependent variable.
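A small NumPy sketch of the back-transformation issue, using simulated prices (illustrative data, not from the question). Even a near-perfect log-scale model shows a huge RMSE if its predictions are compared to raw prices without exponentiating first:

```python
import numpy as np

rng = np.random.default_rng(0)
price = rng.uniform(100_000, 500_000, size=200)

# Simulate a second model that predicts log(price) almost perfectly.
log_pred = np.log(price) + rng.normal(0, 0.01, size=200)

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

wrong = rmse(price, log_pred)          # forgot to exponentiate: enormous RMSE
right = rmse(price, np.exp(log_pred))  # back-transform first: small RMSE
print(wrong, right)
```

This is why a much larger RMSE for the log-price model can reflect a bookkeeping error rather than a worse model, and why the only invalid explanation is the claim that RMSE itself is unsuitable for regression.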
Reference
"Applied Predictive Modeling" by Max Kuhn and Kjell Johnson (Springer, 2013), particularly the chapters discussing model evaluation metrics.



Viewing page 3 of 20
Viewing questions 9 - 12 out of 140 questions


