Amazon AWS Certified Machine Learning Engineer - Associate MLA-C01 Exam
AWS Certified Machine Learning Engineer - Associate MLA-C01 (Page 6)

Updated On: 9-Feb-2026

A company needs to run a batch data-processing job on Amazon EC2 instances. The job will run during the weekend and will take 90 minutes to finish running. The processing can handle interruptions. The company will run the job every weekend for the next 6 months.

Which EC2 instance purchasing option will meet these requirements MOST cost-effectively?

  A. Spot Instances
  B. Reserved Instances
  C. On-Demand Instances
  D. Dedicated Instances

Answer(s): A

Explanation:

Spot Instances are the most cost-effective option for batch jobs that can tolerate interruptions. They offer significant discounts compared to On-Demand Instances because they utilize unused EC2 capacity. Since the job runs on the weekend, lasts only 90 minutes, and can handle interruptions, Spot Instances are ideal for this use case. This purchasing option minimizes costs while meeting the company's requirements.
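To make the savings concrete, here is a back-of-the-envelope cost comparison for this workload (90 minutes per weekend for roughly 26 weeks). The hourly rates and discount below are hypothetical placeholders, not actual AWS prices — check current On-Demand and Spot pricing for your instance type and Region.

```python
# Rough cost comparison for the weekend batch job.
# All rates are HYPOTHETICAL examples, not real AWS prices.

HOURS_PER_RUN = 1.5       # the 90-minute job
RUNS = 26                 # one run per weekend for ~6 months
ON_DEMAND_RATE = 0.40     # $/hour, assumed for illustration
SPOT_DISCOUNT = 0.70      # Spot often runs well below On-Demand; 70% off assumed

def total_cost(hourly_rate: float) -> float:
    """Total cost of all weekend runs at the given hourly rate."""
    return HOURS_PER_RUN * RUNS * hourly_rate

on_demand = total_cost(ON_DEMAND_RATE)
spot = total_cost(ON_DEMAND_RATE * (1 - SPOT_DISCOUNT))
print(f"On-Demand: ${on_demand:.2f}  Spot: ${spot:.2f}")
```

Under these assumed rates the On-Demand total is $15.60 versus $4.68 for Spot; the exact figures will differ, but the relative saving is why Spot wins for interruptible batch work.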



An ML engineer has an Amazon Comprehend custom model in Account A in the us-east-1 Region. The ML engineer needs to copy the model to Account B in the same Region.

Which solution will meet this requirement with the LEAST development effort?

  A. Use Amazon S3 to make a copy of the model. Transfer the copy to Account B.
  B. Create a resource-based IAM policy. Use the Amazon Comprehend ImportModel API operation to copy the model to Account B.
  C. Use AWS DataSync to replicate the model from Account A to Account B.
  D. Create an AWS Site-to-Site VPN connection between Account A and Account B to transfer the model.

Answer(s): B

Explanation:

Amazon Comprehend provides the ImportModel API operation, which allows you to copy a custom model between AWS accounts. By creating a resource-based IAM policy on the model in Account A, you can grant Account B the necessary permissions to access and import the model. This approach requires minimal development effort and is the AWS-recommended method for sharing custom models across accounts.
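A minimal sketch of the two pieces involved, using hypothetical account IDs and a hypothetical model ARN: the resource-based policy attached to the model in Account A (e.g. via the Comprehend PutResourcePolicy API) and the parameters Account B passes to ImportModel. The dicts are built locally so the sketch runs without AWS credentials.

```python
import json

# Hypothetical ARN and account IDs, for illustration only.
SOURCE_MODEL_ARN = ("arn:aws:comprehend:us-east-1:111111111111:"
                    "document-classifier/demo-classifier")
ACCOUNT_B = "222222222222"

# Resource-based policy attached to the model in Account A so that
# Account B is allowed to import it.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_B}:root"},
        "Action": "comprehend:ImportModel",
        "Resource": SOURCE_MODEL_ARN,
    }],
}

# Parameters Account B would pass to the ImportModel API
# (boto3: comprehend_client.import_model(**import_params)).
import_params = {
    "SourceModelArn": SOURCE_MODEL_ARN,
    "ModelName": "demo-classifier-copy",
}

print(json.dumps(resource_policy, indent=2))
```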



An ML engineer is training a simple neural network model. The ML engineer tracks the performance of the model over time on a validation dataset. The model's performance improves substantially at first and then degrades after a specific number of epochs.

Which solutions will mitigate this problem? (Choose two.)

  A. Enable early stopping on the model.
  B. Increase dropout in the layers.
  C. Increase the number of layers.
  D. Increase the number of neurons.
  E. Investigate and reduce the sources of model bias.

Answer(s): A,B

Explanation:

Early stopping halts training once the performance on the validation dataset stops improving. This prevents the model from overfitting, which is likely the cause of performance degradation after a certain number of epochs.
Dropout is a regularization technique that randomly deactivates neurons during training, reducing overfitting by forcing the model to generalize better. Increasing dropout can help mitigate the problem of performance degradation due to overfitting.
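The early-stopping rule described above can be sketched framework-agnostically: track the best validation loss seen so far and stop once it has not improved for a fixed number of consecutive epochs (the "patience"). The loss values below are made up to illustrate the improve-then-degrade pattern from the question.

```python
# Minimal sketch of early stopping: stop training once the validation
# loss has not improved for `patience` consecutive epochs.
def early_stopping_epoch(val_losses, patience=3):
    """Return the epoch index at which training would stop,
    or len(val_losses) if early stopping never triggers."""
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            bad_epochs = 0       # improvement resets the counter
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch     # patience exhausted: stop here
    return len(val_losses)

# Validation loss improves, then degrades as overfitting sets in.
losses = [0.9, 0.6, 0.45, 0.4, 0.42, 0.44, 0.47, 0.5]
print(early_stopping_epoch(losses, patience=3))  # → 6
```

Deep-learning frameworks ship this as a callback (e.g. Keras `EarlyStopping`), and dropout as a layer type, so in practice neither needs to be hand-rolled.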



A company has a Retrieval Augmented Generation (RAG) application that uses a vector database to store embeddings of documents. The company must migrate the application to AWS and must implement a solution that provides semantic search of text files. The company has already migrated the text repository to an Amazon S3 bucket.

Which solution will meet these requirements?

  A. Use an AWS Batch job to process the files and generate embeddings. Use AWS Glue to store the embeddings. Use SQL queries to perform the semantic searches.
  B. Use a custom Amazon SageMaker AI notebook to run a custom script to generate embeddings. Use SageMaker Feature Store to store the embeddings. Use SQL queries to perform the semantic searches.
  C. Use the Amazon Kendra S3 connector to ingest the documents from the S3 bucket into Amazon Kendra. Query Amazon Kendra to perform the semantic searches.
  D. Use an Amazon Textract asynchronous job to ingest the documents from the S3 bucket. Query Amazon Textract to perform the semantic searches.

Answer(s): C

Explanation:

Amazon Kendra is an AI-powered search service designed for semantic search use cases. It allows ingestion of documents from an Amazon S3 bucket using the Amazon Kendra S3 connector. Once the documents are ingested, Kendra enables semantic searches with its built-in capabilities, removing the need to manually generate embeddings or manage a vector database. This approach is efficient, requires minimal operational effort, and meets the requirements for a Retrieval Augmented Generation (RAG) application.
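The two API calls involved can be sketched as parameter dicts (the index ID, role ARN, and bucket name are placeholders, not real resources): an S3 data source for ingestion, then a natural-language query against the index.

```python
# Hypothetical parameters for the two Kendra calls; the index ID,
# role ARN, and bucket name are placeholders.
INDEX_ID = "example-index-id"

# boto3: kendra_client.create_data_source(**data_source_params)
data_source_params = {
    "Name": "docs-s3-connector",
    "IndexId": INDEX_ID,
    "Type": "S3",
    "RoleArn": "arn:aws:iam::111111111111:role/KendraS3Access",
    "Configuration": {
        "S3Configuration": {"BucketName": "company-text-repository"}
    },
}

# boto3: kendra_client.query(**query_params) -- semantic search in
# natural language, no embeddings or vector database to manage.
query_params = {
    "IndexId": INDEX_ID,
    "QueryText": "What is our refund policy?",
}

print(data_source_params["Type"], query_params["IndexId"])
```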



A company uses Amazon Athena to query a dataset in Amazon S3. The dataset has a target variable that the company wants to predict.

The company needs to use the dataset in a solution to determine if a model can predict the target variable.

Which solution will provide this information with the LEAST development effort?

  A. Create a new model by using Amazon SageMaker Autopilot. Report the model's achieved performance.
  B. Implement custom scripts to perform data pre-processing, multiple linear regression, and performance evaluation. Run the scripts on Amazon EC2 instances.
  C. Configure Amazon Macie to analyze the dataset and to create a model. Report the model's achieved performance.
  D. Select a model from Amazon Bedrock. Tune the model with the data. Report the model's achieved performance.

Answer(s): A

Explanation:

Amazon SageMaker Autopilot automates the process of building, training, and tuning machine learning models. It provides insights into whether the target variable can be effectively predicted by evaluating the model's performance metrics. This solution requires minimal development effort as SageMaker Autopilot handles data preprocessing, algorithm selection, and hyperparameter optimization automatically, making it the most efficient choice for this scenario.
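A sketch of the Autopilot job parameters, assuming the Athena results have been exported to S3; the job name, S3 paths, role ARN, and target column name are hypothetical. The key field is `TargetAttributeName`, which tells Autopilot which column to predict.

```python
# Hypothetical parameters for a SageMaker Autopilot job.
# boto3: sagemaker_client.create_auto_ml_job(**automl_params)
automl_params = {
    "AutoMLJobName": "target-predictability-check",
    "InputDataConfig": [{
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://company-dataset/athena-export/",  # placeholder path
            }
        },
        "TargetAttributeName": "target",  # the column Autopilot will predict
    }],
    "OutputDataConfig": {"S3OutputPath": "s3://company-dataset/automl-output/"},
    "RoleArn": "arn:aws:iam::111111111111:role/SageMakerExecutionRole",
}

print(automl_params["InputDataConfig"][0]["TargetAttributeName"])
```

Autopilot then reports per-candidate metrics (e.g. accuracy or F1 for classification), which directly answer whether the target variable is predictable from the data.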





