Free MLS-C01 Exam Braindumps

A large mobile network operating company is building a machine learning model to predict customers who are likely to unsubscribe from the service. The company plans to offer an incentive to these customers, because the cost of churn is far greater than the cost of the incentive.

The model produces the following confusion matrix after evaluation on a test dataset of 100 customers:

[Confusion matrix figure not reproduced here.]

Based on the model evaluation results, why is this a viable model for production?

  1. The model is 86% accurate and the cost incurred by the company as a result of false negatives is less than the false positives.
  2. The precision of the model is 86%, which is less than the accuracy of the model.
  3. The model is 86% accurate and the cost incurred by the company as a result of false positives is less than the false negatives.
  4. The precision of the model is 86%, which is greater than the accuracy of the model.

Answer(s): A
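The metrics referenced in the options follow directly from the confusion-matrix cells. A minimal sketch of the arithmetic, using hypothetical counts (the original matrix figure is not reproduced, so these numbers are illustrative only and sum to the 100-customer test set):

```python
# Hypothetical confusion-matrix counts for a 100-customer test set.
# These are NOT the values from the original figure; they only
# illustrate how accuracy, precision, and recall are computed.
tp, fp = 10, 4   # predicted churn: correct / incorrect
fn, tn = 10, 76  # predicted no churn: missed churners / correct

accuracy = (tp + tn) / (tp + fp + fn + tn)  # fraction of all predictions correct
precision = tp / (tp + fp)                  # of predicted churners, how many churned
recall = tp / (tp + fn)                     # of actual churners, how many were caught

print(f"accuracy:  {accuracy:.2f}")   # (10 + 76) / 100 = 0.86
print(f"precision: {precision:.2f}")
print(f"recall:    {recall:.2f}")
```

With these illustrative counts, an 86% accuracy coexists with a much lower precision, which is why the options distinguish the two metrics.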



A machine learning (ML) specialist wants to secure calls to the Amazon SageMaker Service API. The specialist has configured Amazon VPC with a VPC interface endpoint for the Amazon SageMaker Service API and is attempting to secure traffic from specific sets of instances and IAM users. The VPC is configured with a single public subnet.

Which combination of steps should the ML specialist take to secure the traffic? (Choose two.)

  1. Add a VPC endpoint policy to allow access to the IAM users.
  2. Modify the users' IAM policy to allow access to Amazon SageMaker Service API calls only.
  3. Modify the security group on the endpoint network interface to restrict access to the instances.
  4. Modify the ACL on the endpoint network interface to restrict access to the instances.
  5. Add a SageMaker Runtime VPC endpoint interface to the VPC.

Answer(s): A,C


Reference:

https://aws.amazon.com/blogs/machine-learning/private-package-installation-in-amazon-sagemaker-running-in-internet-free-mode/
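The two chosen steps can be sketched with boto3: attach a VPC endpoint policy that allows only the named IAM users (step A), and tighten the security group on the endpoint's network interface so only the specific instances can reach it over HTTPS (step C). The account ID, user name, endpoint ID, and CIDR below are placeholders, not values from the question:

```python
import json

# Hypothetical principal -- replace with the real IAM user ARNs.
ENDPOINT_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowNamedUsersOnly",
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::111122223333:user/ml-specialist"]},
        "Action": "sagemaker:*",
        "Resource": "*",
    }],
}

def apply_endpoint_controls(endpoint_id: str, sg_id: str, instance_cidr: str) -> None:
    """Attach the endpoint policy (step A) and restrict the ENI's
    security group to the instances' address range (step C)."""
    import boto3  # deferred so the policy above can be inspected offline
    ec2 = boto3.client("ec2")
    ec2.modify_vpc_endpoint(
        VpcEndpointId=endpoint_id,
        PolicyDocument=json.dumps(ENDPOINT_POLICY),
    )
    # Allow HTTPS only from the specific instances.
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": instance_cidr}],
        }],
    )
```

The endpoint policy controls *who* may call the SageMaker API through the endpoint, while the security group controls *which instances* can reach the endpoint's network interface; the two are complementary, which is why the answer combines them.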



An e-commerce company wants to launch a new cloud-based product recommendation feature for its web application. Due to data localization regulations, any sensitive data must not leave its on-premises data center, and the product recommendation model must be trained and tested using nonsensitive data only. Data transfer to the cloud must use IPsec. The web application is hosted on premises with a PostgreSQL database that contains all the data. The company wants the data to be uploaded securely to Amazon S3 each day for model retraining.

How should a machine learning specialist meet these requirements?

  1. Create an AWS Glue job to connect to the PostgreSQL DB instance. Ingest tables without sensitive data through an AWS Site-to-Site VPN connection directly into Amazon S3.
  2. Create an AWS Glue job to connect to the PostgreSQL DB instance. Ingest all data through an AWS Site-to-Site VPN connection into Amazon S3 while removing sensitive data using a PySpark job.
  3. Use AWS Database Migration Service (AWS DMS) with table mapping to select PostgreSQL tables with no sensitive data through an SSL connection. Replicate data directly into Amazon S3.
  4. Use PostgreSQL logical replication to replicate all data to PostgreSQL in Amazon EC2 through AWS Direct Connect with a VPN connection. Use AWS Glue to move data from Amazon EC2 to Amazon S3.

Answer(s): C


Reference:

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html
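The key to the DMS answer is the table-mapping document, whose selection rules can include only the tables that hold no sensitive data. A minimal sketch, with hypothetical schema and table names (the mapping is passed to `CreateReplicationTask` as a JSON string):

```python
import json

# Hypothetical schema/table names -- adjust to the real database.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-nonsensitive",
            "object-locator": {"schema-name": "public", "table-name": "product_views"},
            "rule-action": "include",
        },
        {
            "rule-type": "selection",
            "rule-id": "2",
            "rule-name": "exclude-pii",
            "object-locator": {"schema-name": "public", "table-name": "customer_pii"},
            "rule-action": "exclude",
        },
    ]
}

# Passed to the DMS task, e.g.:
# dms.create_replication_task(..., TableMappings=json.dumps(table_mappings))
print(json.dumps(table_mappings, indent=2))
```

Because the selection happens at the replication layer, sensitive tables never leave the on-premises database, and the task can target an S3 endpoint over an SSL-protected connection.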



A logistics company needs a forecast model to predict next month's inventory requirements for a single item in 10 warehouses. A machine learning specialist uses Amazon Forecast to develop a forecast model from 3 years of monthly data. There is no missing data. The specialist selects the DeepAR+ algorithm to train a predictor.
The predictor's mean absolute percentage error (MAPE) is much larger than the MAPE produced by the current human forecasters.

Which changes to the CreatePredictor API call could improve the MAPE? (Choose two.)

  1. Set PerformAutoML to true.
  2. Set ForecastHorizon to 4.
  3. Set ForecastFrequency to W for weekly.
  4. Set PerformHPO to true.
  5. Set FeaturizationMethodName to filling.

Answer(s): C,D


Reference:

https://docs.aws.amazon.com/forecast/latest/dg/forecast.dg.pdf
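The two chosen answers map onto specific `CreatePredictor` parameters: `ForecastFrequency` lives inside `FeaturizationConfig`, and `PerformHPO` enables hyperparameter optimization for DeepAR+. A sketch of the request, with placeholder ARNs (these are not real resources):

```python
# Sketch of CreatePredictor parameters touched by the chosen answers.
# Dataset-group and algorithm ARNs below are placeholders.
predictor_params = {
    "PredictorName": "inventory_deepar_hpo",
    "AlgorithmArn": "arn:aws:forecast:::algorithm/Deep_AR_Plus",
    "ForecastHorizon": 4,  # required field: number of steps at ForecastFrequency
    "PerformHPO": True,    # answer D: let Forecast tune DeepAR+ hyperparameters
    "InputDataConfig": {
        "DatasetGroupArn": "arn:aws:forecast:us-east-1:111122223333:dataset-group/inventory",
    },
    "FeaturizationConfig": {
        "ForecastFrequency": "W",  # answer C: weekly frequency
    },
}
# Submitted via boto3, e.g.:
# forecast = boto3.client("forecast")
# forecast.create_predictor(**predictor_params)
```

Note that `PerformAutoML` and `PerformHPO` are mutually exclusive in the API, which is one reason options A and D cannot both be selected.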



A data scientist wants to use Amazon Forecast to build a forecasting model for inventory demand for a retail company. The company has provided a dataset of historic inventory demand for its products as a .csv file stored in an Amazon S3 bucket. The table below shows a sample of the dataset.

[Sample table not reproduced here.]

How should the data scientist transform the data?

  1. Use ETL jobs in AWS Glue to separate the dataset into a target time series dataset and an item metadata dataset. Upload both datasets as .csv files to Amazon S3.
  2. Use a Jupyter notebook in Amazon SageMaker to separate the dataset into a related time series dataset and an item metadata dataset. Upload both datasets as tables in Amazon Aurora.
  3. Use AWS Batch jobs to separate the dataset into a target time series dataset, a related time series dataset, and an item metadata dataset. Upload them directly to Forecast from a local machine.
  4. Use a Jupyter notebook in Amazon SageMaker to transform the data into the optimized protobuf recordIO format. Upload the dataset in this format to Amazon S3.

Answer(s): A
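Amazon Forecast imports .csv datasets from Amazon S3 as a target time series dataset plus optional related time series and item metadata datasets, so the transformation step is splitting the combined export by column role. A minimal stdlib sketch, using hypothetical column names since the sample table is not reproduced here:

```python
import csv
import io

# Hypothetical combined export; the real dataset's columns may differ.
combined = io.StringIO(
    "item_id,timestamp,demand,category,lead_time_days\n"
    "SKU1,2023-01-01,120,grocery,3\n"
    "SKU2,2023-01-01,45,apparel,7\n"
)

target_rows, metadata = [], {}
for row in csv.DictReader(combined):
    # Target time series: item_id, timestamp, and the value to forecast.
    target_rows.append((row["item_id"], row["timestamp"], row["demand"]))
    # Item metadata: one static record per item (no timestamp).
    metadata[row["item_id"]] = (row["category"], row["lead_time_days"])

print(target_rows)
print(sorted(metadata))
```

Each resulting dataset would then be written back to S3 as its own .csv file for Forecast's import jobs, since Forecast reads training data from S3 rather than from a relational database.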