Free MLS-C01 Exam Braindumps (page: 18)


A large consumer goods manufacturer has the following products on sale:

• 34 different toothpaste variants
• 48 different toothbrush variants
• 43 different mouthwash variants

The entire sales history of all these products is available in Amazon S3. Currently, the company is using custom-built autoregressive integrated moving average (ARIMA) models to forecast demand for these products. The company wants to predict the demand for a new product that will soon be launched.

Which solution should a Machine Learning Specialist apply?

  1. Train a custom ARIMA model to forecast demand for the new product.
  2. Train an Amazon SageMaker DeepAR algorithm to forecast demand for the new product.
  3. Train an Amazon SageMaker k-means clustering algorithm to forecast demand for the new product.
  4. Train a custom XGBoost model to forecast demand for the new product.

Answer(s): B

Explanation:

The Amazon SageMaker DeepAR forecasting algorithm is a supervised learning algorithm for forecasting scalar (one-dimensional) time series using recurrent neural networks (RNNs). Classical forecasting methods, such as autoregressive integrated moving average (ARIMA) or exponential smoothing (ETS), fit a single model to each individual time series and then extrapolate that series into the future, so they cannot forecast a product that has no sales history. DeepAR instead trains one model jointly over all of the related time series (here, the 125 existing product histories) and can use that model to generate forecasts for new time series that are similar to the ones it was trained on, which makes it the right choice for a newly launched product.


Reference:

https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html
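
For reference, here is a minimal sketch of how such a DeepAR training job could be launched with the SageMaker Python SDK (v2). The role ARN, S3 paths, instance type, and hyperparameter values are hypothetical placeholders; DeepAR expects the sales histories as JSON Lines time series in the training channel.

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role ARN

# Resolve the built-in DeepAR container image for the current region
image_uri = image_uris.retrieve("forecasting-deepar", session.boto_region_name)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.c5.2xlarge",
    output_path="s3://my-bucket/deepar/output/",  # hypothetical bucket
    sagemaker_session=session,
)

# DeepAR's required hyperparameters: daily data, 28-day horizon (values assumed)
estimator.set_hyperparameters(
    time_freq="D",
    context_length="28",
    prediction_length="28",
    epochs="100",
)

# A single model is trained jointly across all 125 product time series
estimator.fit({"train": "s3://my-bucket/deepar/train/"})  # hypothetical JSON Lines data
```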



A Machine Learning Specialist uploads a dataset to an Amazon S3 bucket protected with server-side encryption using AWS KMS.

How should the ML Specialist define the Amazon SageMaker notebook instance so it can read the same dataset from Amazon S3?

  1. Define security group(s) to allow all HTTP inbound/outbound traffic and assign those security group(s) to the Amazon SageMaker notebook instance.
  2. Configure the Amazon SageMaker notebook instance to have access to the VPC. Grant permission in the KMS key policy to the notebook’s KMS role.
  3. Assign an IAM role to the Amazon SageMaker notebook with S3 read access to the dataset. Grant permission in the KMS key policy to that role.
  4. Assign the same KMS key used to encrypt data in Amazon S3 to the Amazon SageMaker notebook instance.

Answer(s): C
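
Explanation:

The notebook instance runs with an IAM execution role. That role needs s3:GetObject permission on the dataset, and because the objects are encrypted with SSE-KMS, the role must also be allowed to use the key, which is granted in the KMS key policy. Below is a hedged boto3 sketch of that key-policy grant; the key ID, account ID, and role ARN are hypothetical placeholders.

```python
import json
import boto3

kms = boto3.client("kms")

# Hypothetical identifiers; substitute the real key ID and notebook role ARN
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"
notebook_role_arn = "arn:aws:iam::123456789012:role/SageMakerNotebookRole"

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Retain administrative access for the account (standard default statement)
            "Sid": "EnableRootAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # Allow the notebook's execution role to decrypt the SSE-KMS objects
            "Sid": "AllowNotebookRoleDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": notebook_role_arn},
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}

# A KMS key has a single policy, which must be named "default"
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(key_policy))
```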



A Data Scientist needs to migrate an existing on-premises ETL process to the cloud. The current process runs at regular time intervals and uses PySpark to combine and format multiple large data sources into a single consolidated output for downstream processing.

The Data Scientist has been given the following requirements for the cloud solution:
• Combine multiple data sources.
• Reuse existing PySpark logic.
• Run the solution on the existing schedule.
• Minimize the number of servers that will need to be managed.

Which architecture should the Data Scientist use to build this solution?

  1. Write the raw data to Amazon S3. Schedule an AWS Lambda function to submit a Spark step to a persistent Amazon EMR cluster based on the existing schedule. Use the existing PySpark logic to run the ETL job on the EMR cluster. Output the results to a “processed” location in Amazon S3 that is accessible for downstream use.
  2. Write the raw data to Amazon S3. Create an AWS Glue ETL job to perform the ETL processing against the input data. Write the ETL job in PySpark to leverage the existing logic. Create a new AWS Glue trigger to trigger the ETL job based on the existing schedule. Configure the output target of the ETL job to write to a “processed” location in Amazon S3 that is accessible for downstream use.
  3. Write the raw data to Amazon S3. Schedule an AWS Lambda function to run on the existing schedule and process the input data from Amazon S3. Write the Lambda logic in Python and implement the existing PySpark logic to perform the ETL process. Have the Lambda function output the results to a “processed” location in Amazon S3 that is accessible for downstream use.
  4. Use Amazon Kinesis Data Analytics to stream the input data and perform real-time SQL queries against the stream to carry out the required transformations within the stream. Deliver the output results to a “processed” location in Amazon S3 that is accessible for downstream use.

Answer(s): B
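
Explanation:

AWS Glue is a serverless Spark environment, so it satisfies all four requirements: it runs PySpark jobs natively (reusing the existing logic), supports scheduled triggers with cron expressions, and leaves no servers to manage. Option A keeps a persistent EMR cluster running, option C is impractical because Lambda is not suited to running large PySpark workloads, and option D targets streaming rather than scheduled batch ETL. A minimal sketch of a Glue PySpark job script follows; the S3 paths and the join are hypothetical placeholders for the existing logic.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job initialization boilerplate
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# The existing PySpark logic can run unchanged against regular Spark DataFrames
orders = spark.read.parquet("s3://my-bucket/raw/orders/")        # hypothetical source
customers = spark.read.parquet("s3://my-bucket/raw/customers/")  # hypothetical source
combined = orders.join(customers, "customer_id")                 # placeholder for existing logic

# Write the consolidated output to the "processed" location for downstream use
combined.write.mode("overwrite").parquet("s3://my-bucket/processed/consolidated/")

job.commit()
```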



A Data Scientist is building a model to predict customer churn using a dataset of 100 continuous numerical features. The Marketing team has not provided any insight about which features are relevant for churn prediction. The Marketing team wants to interpret the model and see the direct impact of relevant features on the model outcome. While training a logistic regression model, the Data Scientist observes that there is a wide gap between the training and validation set accuracy.

Which methods can the Data Scientist use to improve the model performance and satisfy the Marketing team’s needs? (Choose two.)

  1. Add L1 regularization to the classifier
  2. Add features to the dataset
  3. Perform recursive feature elimination
  4. Perform t-distributed stochastic neighbor embedding (t-SNE)
  5. Perform linear discriminant analysis

Answer(s): A,C

Explanation:

A) The wide gap between training and validation accuracy indicates overfitting. Adding L1 regularization to the logistic regression classifier penalizes model complexity and reduces that overfitting. Because L1 regularization shrinks the coefficients of irrelevant features to exactly zero, the remaining non-zero coefficients also show the Marketing team which features directly drive the prediction.

C) Recursive feature elimination can be used to select the most relevant features for the model. This can help to improve the model performance and highlight the relevant features for churn prediction.
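
A brief scikit-learn sketch of both techniques, using randomly generated placeholder data in place of the real churn dataset (the feature count, regularization strength, and number of selected features are assumptions):

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X: (n_samples, 100) continuous features, y: churn labels (placeholder data)
X = np.random.rand(1000, 100)
y = np.random.randint(0, 2, 1000)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

# L1 penalty drives coefficients of irrelevant features to exactly zero
l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
l1_model.fit(X_train, y_train)
print("non-zero coefficients:", np.sum(l1_model.coef_ != 0))
print("validation accuracy:", l1_model.score(X_val, y_val))

# RFE repeatedly drops the weakest feature until 20 remain (count is arbitrary)
rfe = RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=20)
rfe.fit(X_train, y_train)
print("selected feature indices:", np.where(rfe.support_)[0])
```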






Post your Comments and Discuss Amazon MLS-C01 exam with other Community members:

Richard commented on October 24, 2023
I am thrilled to say that I passed my Amazon Web Services MLS-C01 exam, thanks to study materials. They were comprehensive and well-structured, making my preparation efficient.
Anonymous

Ken commented on October 13, 2021
I would like to share my good news with you about successfully passing my exam. This study package is very relevant and helpful.
AUSTRALIA

Alex commented on April 19, 2021
A great number of the questions are from the real exam. Almost the same wording. :)
SOUTH KOREA

MD ABU S CHOWDHURY commented on January 18, 2020
Working on the test...
UNITED STATES