Amazon AWS Certified Machine Learning - Specialty Exam
AWS Certified Machine Learning - Specialty (MLS-C01) (Page 6)

Updated On: 1-Feb-2026

An analytics company has an Amazon SageMaker hosted endpoint for an image classification model. The model is a custom-built convolutional neural network (CNN) and uses the PyTorch deep learning framework. The company wants to increase throughput and decrease latency for customers that use the model.

Which solution will meet these requirements MOST cost-effectively?

  A. Use Amazon Elastic Inference on the SageMaker hosted endpoint.
  B. Retrain the CNN with more layers and a larger dataset.
  C. Retrain the CNN with more layers and a smaller dataset.
  D. Choose a SageMaker instance type that has multiple GPUs.

Answer(s): A
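
For context, Elastic Inference attaches a fractional GPU accelerator to a CPU-hosted endpoint, which is cheaper than moving to a full multi-GPU instance while still cutting inference latency. Below is a minimal boto3 sketch of attaching an accelerator through a new endpoint configuration; the model, endpoint, and accelerator names are illustrative assumptions, not values from the question.

```python
import boto3

sm = boto3.client("sagemaker")

# Hypothetical names -- substitute your own model and config.
sm.create_endpoint_config(
    EndpointConfigName="cnn-classifier-eia-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "pytorch-cnn-classifier",  # existing SageMaker model
            "InstanceType": "ml.m5.xlarge",         # CPU host instance
            "InitialInstanceCount": 1,
            "AcceleratorType": "ml.eia2.medium",    # Elastic Inference accelerator
            "InitialVariantWeight": 1.0,
        }
    ],
)

# Point the hosted endpoint at the new configuration.
sm.update_endpoint(
    EndpointName="image-classifier-endpoint",
    EndpointConfigName="cnn-classifier-eia-config",
)
```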



A company’s data scientist has trained a new machine learning model that performs better on test data than the company’s existing model does in the production environment. The data scientist wants to replace the existing model, which runs on an Amazon SageMaker endpoint in the production environment. However, the company is concerned that the new model might not work well on the production environment data.

The data scientist needs to perform A/B testing in the production environment to evaluate whether the new model performs well on production environment data.

Which combination of steps must the data scientist take to perform the A/B testing? (Choose two.)

  A. Create a new endpoint configuration that includes a production variant for each of the two models.
  B. Create a new endpoint configuration that includes two target variants that point to different endpoints.
  C. Deploy the new model to the existing endpoint.
  D. Update the existing endpoint to activate the new model.
  E. Update the existing endpoint to use the new endpoint configuration.

Answer(s): A,E
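
These two steps map directly onto the SageMaker API: one create_endpoint_config call that lists both models as production variants with traffic weights, followed by one update_endpoint call on the existing endpoint. A minimal boto3 sketch, where all names and the 90/10 traffic split are illustrative assumptions:

```python
import boto3

sm = boto3.client("sagemaker")

# Step 1 (answer A): one endpoint config, two production variants.
sm.create_endpoint_config(
    EndpointConfigName="ab-test-config",
    ProductionVariants=[
        {
            "VariantName": "existing-model",
            "ModelName": "model-v1",        # current production model
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.9,    # 90% of traffic
        },
        {
            "VariantName": "candidate-model",
            "ModelName": "model-v2",        # new model under test
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.1,    # 10% of traffic
        },
    ],
)

# Step 2 (answer E): switch the existing endpoint to the new config.
sm.update_endpoint(
    EndpointName="prod-endpoint",
    EndpointConfigName="ab-test-config",
)
```

Traffic can later be shifted gradually, without another deployment, by calling update_endpoint_weights_and_capacities on the same endpoint.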



A machine learning (ML) engineer is integrating a production model with a customer metadata repository for real-time inference. The repository is hosted in Amazon SageMaker Feature Store. The engineer wants to retrieve only the latest version of the customer metadata record for a single customer at a time.

Which solution will meet these requirements?

  A. Use the SageMaker Feature Store BatchGetRecord API with the record identifier. Filter to find the latest record.
  B. Create an Amazon Athena query to retrieve the data from the feature table.
  C. Create an Amazon Athena query to retrieve the data from the feature table. Use the write_time value to find the latest record.
  D. Use the SageMaker Feature Store GetRecord API with the record identifier.

Answer(s): D
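
GetRecord is a single-record, low-latency lookup against the online store, and it always returns only the latest version of the record, so no extra filtering is needed. A minimal sketch using the Feature Store runtime client; the feature group name and record identifier are assumptions:

```python
import boto3

# The online store is served by a dedicated runtime client.
fs_runtime = boto3.client("sagemaker-featurestore-runtime")

response = fs_runtime.get_record(
    FeatureGroupName="customer-metadata",          # hypothetical feature group
    RecordIdentifierValueAsString="customer-1234",
)

# Each feature comes back as {"FeatureName": ..., "ValueAsString": ...}.
record = {f["FeatureName"]: f["ValueAsString"] for f in response["Record"]}
print(record)
```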



Each morning, a data scientist at a rental car company creates insights about the previous day’s rental car reservation demand. The company needs to automate this process by streaming the data to Amazon S3 in near real time. The solution must detect high-demand rental cars at each of the company’s locations. The solution also must create a visualization dashboard that automatically refreshes with the most recent data.

Which solution will meet these requirements with the LEAST development time?

  A. Use Amazon Kinesis Data Firehose to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using Amazon QuickSight ML Insights. Visualize the data in QuickSight.
  B. Use Amazon Kinesis Data Streams to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using the Random Cut Forest (RCF) trained model in Amazon SageMaker. Visualize the data in Amazon QuickSight.
  C. Use Amazon Kinesis Data Firehose to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using the Random Cut Forest (RCF) trained model in Amazon SageMaker. Visualize the data in Amazon QuickSight.
  D. Use Amazon Kinesis Data Streams to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using Amazon QuickSight ML Insights. Visualize the data in QuickSight.

Answer(s): A
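
Option A needs the least development because nothing has to be written on the consumer side: records sent to a Firehose delivery stream are buffered and delivered to S3 automatically, and QuickSight ML Insights detects anomalies without training or hosting a model. A minimal sketch of the producer side, assuming a delivery stream named reservations-to-s3 already exists (all names and fields are illustrative):

```python
import json
import boto3

firehose = boto3.client("firehose")

event = {
    "location_id": "LAX-01",   # illustrative reservation fields
    "car_class": "suv",
    "reservations": 42,
    "event_time": "2026-02-01T08:00:00Z",
}

# Firehose buffers records and writes them to S3 in near real time.
firehose.put_record(
    DeliveryStreamName="reservations-to-s3",   # hypothetical stream name
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)
```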



A machine learning (ML) engineer at a bank is building a data ingestion solution to provide transaction features to financial ML models. Raw transactional data is available in an Amazon Kinesis data stream.

The solution must compute rolling averages of the ingested data from the data stream and must store the results in Amazon SageMaker Feature Store. The solution also must serve the results to the models in near real time.

Which solution will meet these requirements?

  A. Load the data into an Amazon S3 bucket by using Amazon Kinesis Data Firehose. Use a SageMaker Processing job to aggregate the data and to load the results into SageMaker Feature Store as an online feature group.
  B. Write the data directly from the data stream into SageMaker Feature Store as an online feature group. Calculate the rolling averages in place within SageMaker Feature Store by using the SageMaker GetRecord API operation.
  C. Consume the data stream by using an Amazon Kinesis Data Analytics SQL application that calculates the rolling averages. Generate a result stream. Consume the result stream by using a custom AWS Lambda function that publishes the results to SageMaker Feature Store as an online feature group.
  D. Load the data into an Amazon S3 bucket by using Amazon Kinesis Data Firehose. Use a SageMaker Processing job to load the data into SageMaker Feature Store as an offline feature group. Compute the rolling averages at query time.

Answer(s): C
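
The last hop of option C is the Lambda function that consumes the aggregated result stream and writes each rolling average into the online feature group with PutRecord. A minimal handler sketch; the feature group name and payload fields are assumptions for illustration:

```python
import base64
import json
import boto3

fs_runtime = boto3.client("sagemaker-featurestore-runtime")

def handler(event, context):
    # Kinesis delivers base64-encoded payloads in batches.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # Publish the rolling average to the online feature group.
        fs_runtime.put_record(
            FeatureGroupName="transaction-features",   # hypothetical group
            Record=[
                {"FeatureName": "account_id",
                 "ValueAsString": str(payload["account_id"])},
                {"FeatureName": "rolling_avg_amount",
                 "ValueAsString": str(payload["rolling_avg"])},
                {"FeatureName": "event_time",
                 "ValueAsString": payload["event_time"]},
            ],
        )
```

Because the feature group is online, models can then read the freshest values with GetRecord at inference time, which satisfies the near-real-time serving requirement.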


