Amazon AWS Certified Machine Learning Engineer - Associate MLA-C01 Exam

Updated On: 7-Feb-2026

Case study

An ML engineer is developing a fraud detection model on AWS. The training dataset includes transaction logs, customer profiles, and tables from an on-premises MySQL database. The transaction logs and customer profiles are stored in Amazon S3.

The dataset has a class imbalance that affects the learning of the model's algorithm. Additionally, many of the features have interdependencies. The algorithm is not capturing all the desired underlying patterns in the data.

After the data is aggregated, the ML engineer must implement a solution to automatically detect anomalies in the data and to visualize the result.

Which solution will meet these requirements?

  A. Use Amazon Athena to automatically detect the anomalies and to visualize the result.
  B. Use Amazon Redshift Spectrum to automatically detect the anomalies. Use Amazon QuickSight to visualize the result.
  C. Use Amazon SageMaker Data Wrangler to automatically detect the anomalies and to visualize the result.
  D. Use AWS Batch to automatically detect the anomalies. Use Amazon QuickSight to visualize the result.

Answer(s): C

Explanation:

Amazon SageMaker Data Wrangler is designed to preprocess, analyze, and visualize data efficiently. It provides built-in tools for anomaly detection, allowing the ML engineer to automatically identify anomalies in the dataset. Additionally, SageMaker Data Wrangler includes visualization capabilities to explore the data and results, meeting the requirements for anomaly detection and visualization in one integrated environment.
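Data Wrangler's anomaly detection is a managed, largely no-code feature, so there is no code to show for it directly. As a conceptual stand-in only, the sketch below illustrates the underlying idea with a simple z-score check on hypothetical transaction amounts (the data and the 2-standard-deviation threshold are illustrative assumptions, not Data Wrangler's actual method):

```python
import statistics

# Hypothetical transaction amounts; 500.0 is an injected outlier.
amounts = [100.0, 102.0, 98.0, 101.0, 99.0, 103.0, 97.0, 500.0]

mean = statistics.mean(amounts)
sd = statistics.stdev(amounts)

# Flag values more than 2 sample standard deviations from the mean.
anomalies = [v for v in amounts if abs(v - mean) / sd > 2.0]
print(anomalies)  # [500.0]
```

In practice, the engineer would apply Data Wrangler's built-in analyses and visualizations inside SageMaker Studio rather than hand-rolling a detector like this.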



Case study

An ML engineer is developing a fraud detection model on AWS. The training dataset includes transaction logs, customer profiles, and tables from an on-premises MySQL database. The transaction logs and customer profiles are stored in Amazon S3.

The dataset has a class imbalance that affects the learning of the model's algorithm. Additionally, many of the features have interdependencies. The algorithm is not capturing all the desired underlying patterns in the data.

The training dataset includes categorical data and numerical data. The ML engineer must prepare the training dataset to maximize the accuracy of the model.

Which action will meet this requirement with the LEAST operational overhead?

  A. Use AWS Glue to transform the categorical data into numerical data.
  B. Use AWS Glue to transform the numerical data into categorical data.
  C. Use Amazon SageMaker Data Wrangler to transform the categorical data into numerical data.
  D. Use Amazon SageMaker Data Wrangler to transform the numerical data into categorical data.

Answer(s): C

Explanation:

Transforming categorical data into numerical data is essential for ML models that require numerical input, as it allows the algorithm to process the categorical information effectively. Amazon SageMaker Data Wrangler provides an intuitive interface for data preparation, including built-in transformations like one-hot encoding and label encoding for categorical data. Using SageMaker Data Wrangler reduces operational overhead by offering an integrated environment to preprocess data without needing to write extensive code.
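To make the encoding concrete, here is a minimal sketch of one-hot encoding with pandas on hypothetical fraud-dataset rows (the column names and values are illustrative assumptions; in Data Wrangler this is a point-and-click transform rather than code):

```python
import pandas as pd

# Hypothetical training rows mixing numerical and categorical features.
df = pd.DataFrame({
    "amount": [120.0, 80.5, 310.2],
    "payment_method": ["card", "bank_transfer", "card"],
})

# One-hot encode the categorical column into numerical indicator columns.
encoded = pd.get_dummies(df, columns=["payment_method"])
print(list(encoded.columns))
# ['amount', 'payment_method_bank_transfer', 'payment_method_card']
```

Each category becomes its own 0/1 indicator column, which tree-based and linear algorithms can consume directly.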



Case study

An ML engineer is developing a fraud detection model on AWS. The training dataset includes transaction logs, customer profiles, and tables from an on-premises MySQL database. The transaction logs and customer profiles are stored in Amazon S3.

The dataset has a class imbalance that affects the learning of the model's algorithm. Additionally, many of the features have interdependencies. The algorithm is not capturing all the desired underlying patterns in the data.

Before the ML engineer trains the model, the ML engineer must resolve the issue of the imbalanced data.

Which solution will meet this requirement with the LEAST operational effort?

  A. Use Amazon Athena to identify patterns that contribute to the imbalance. Adjust the dataset accordingly.
  B. Use Amazon SageMaker Studio Classic built-in algorithms to process the imbalanced dataset.
  C. Use AWS Glue DataBrew built-in features to oversample the minority class.
  D. Use the Amazon SageMaker Data Wrangler balance data operation to oversample the minority class.

Answer(s): D

Explanation:

The Amazon SageMaker Data Wrangler balance data operation provides a built-in capability to handle class imbalance by oversampling the minority class or undersampling the majority class. This solution minimizes operational effort by offering an integrated, no-code/low-code approach to address the imbalance directly within SageMaker's data preparation workflow. It ensures that the dataset is balanced, improving the performance of the ML model.
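For intuition about what the balance data operation does when it oversamples, the sketch below reproduces the effect by hand with pandas on a hypothetical imbalanced fraud dataset (the column names, values, and random seed are illustrative assumptions):

```python
import pandas as pd

# Hypothetical imbalanced labels: 8 legitimate rows, 2 fraud rows.
df = pd.DataFrame({
    "amount": [10, 20, 15, 30, 25, 40, 12, 18, 900, 850],
    "is_fraud": [0, 0, 0, 0, 0, 0, 0, 0, 1, 1],
})

counts = df["is_fraud"].value_counts()
minority_label = counts.idxmin()
target_n = counts.max()

# Resample the minority class with replacement up to the majority count.
minority_upsampled = df[df["is_fraud"] == minority_label].sample(
    n=target_n, replace=True, random_state=42
)
balanced = pd.concat(
    [df[df["is_fraud"] != minority_label], minority_upsampled],
    ignore_index=True,
)
print(balanced["is_fraud"].value_counts().to_dict())  # {0: 8, 1: 8}
```

In Data Wrangler this is a single configured transform step, which is what makes it the lowest-operational-effort option.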



Case study

An ML engineer is developing a fraud detection model on AWS. The training dataset includes transaction logs, customer profiles, and tables from an on-premises MySQL database. The transaction logs and customer profiles are stored in Amazon S3.

The dataset has a class imbalance that affects the learning of the model's algorithm. Additionally, many of the features have interdependencies. The algorithm is not capturing all the desired underlying patterns in the data.

The ML engineer needs to use an Amazon SageMaker built-in algorithm to train the model.

Which algorithm should the ML engineer use to meet this requirement?

  A. LightGBM
  B. Linear learner
  C. K-means clustering
  D. Neural Topic Model (NTM)

Answer(s): A

Explanation:

LightGBM is a gradient-boosted decision tree algorithm that is available as an Amazon SageMaker built-in algorithm. Tree-based boosting captures the nonlinear relationships and feature interdependencies described in the scenario, and it provides parameters (such as scale_pos_weight) to compensate for class imbalance. Linear learner assumes largely linear relationships, while K-means clustering and Neural Topic Model (NTM) are unsupervised algorithms that are not suited to a supervised fraud classification task.



A company has deployed an XGBoost prediction model in production to predict if a customer is likely to cancel a subscription. The company uses Amazon SageMaker Model Monitor to detect deviations in the F1 score.

During a baseline analysis of model quality, the company recorded a threshold for the F1 score. After several months in production with no changes to the model, the model's F1 score decreases significantly.

What could be the reason for the reduced F1 score?

  A. Concept drift occurred in the underlying customer data that was used for predictions.
  B. The model was not sufficiently complex to capture all the patterns in the original baseline data.
  C. The original baseline data had a data quality issue of missing values.
  D. Incorrect ground truth labels were provided to Model Monitor during the calculation of the baseline.

Answer(s): A

Explanation:

Concept drift occurs when the statistical properties of the data change over time, meaning the relationship between input features and the target variable in the production data differs from the data used during model training. This is a common reason for the degradation of a model's performance metrics, such as the F1 score, over time. In this case, changes in customer behavior or other external factors could cause the predictions to deviate from the actual outcomes, leading to a drop in the F1 score.
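The monitoring logic amounts to comparing a recomputed F1 score against the recorded baseline threshold. The sketch below shows that comparison with hand-computed F1 on hypothetical labels and a hypothetical threshold (Model Monitor performs this automatically as a managed service; none of the names here are its API):

```python
# Minimal illustration: an F1 score falling below a recorded baseline
# threshold signals possible concept drift. Labels/threshold are invented.

def f1_score(y_true, y_pred):
    # Standard F1 = harmonic mean of precision and recall.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

BASELINE_F1_THRESHOLD = 0.70  # recorded during the baseline analysis

# Hypothetical ground-truth churn labels vs. current model predictions.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]

current_f1 = f1_score(y_true, y_pred)  # 4/7, about 0.571
if current_f1 < BASELINE_F1_THRESHOLD:
    print("F1 below baseline threshold: investigate drift or retrain")
```

When the alert fires, the usual remediation is to retrain the model on recent data that reflects the drifted relationships.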





