Free Professional Machine Learning Engineer Exam Braindumps (page: 19)


You work for a global footwear retailer and need to predict when an item will be out of stock based on historical inventory data. Customer behavior is highly dynamic, since footwear demand is influenced by many different factors. You want to serve models that are trained on all available data, but track your performance on specific subsets of data before pushing to production. What is the most streamlined and reliable way to perform this validation?

  1. Use the TFX ModelValidator tools to specify performance metrics for production readiness.
  2. Use k-fold cross-validation as a validation strategy to ensure that your model is ready for production.
  3. Use the last relevant week of data as a validation set to ensure that your model is performing accurately on current data.
  4. Use the entire dataset and treat the area under the receiver operating characteristics curve (AUC ROC) as the main metric.

Answer(s): A
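K-fold cross-validation would not let you track performance on specific data subsets before pushing to production; the TFX ModelValidator tools are built for exactly that gating step, and in current TFX releases this functionality has been folded into the Evaluator component. Below is a minimal sketch, assuming TFX and TensorFlow Model Analysis are installed; the label key, the `region` slicing feature, the 0.8 AUC floor, and the upstream `example_gen`/`trainer` components are all hypothetical placeholders.

```python
# Minimal sketch: gate production readiness on per-slice metrics with the
# TFX Evaluator (successor to ModelValidator). The label key, slicing
# feature, and AUC threshold below are hypothetical placeholders.
import tensorflow_model_analysis as tfma
from tfx.components import Evaluator

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='out_of_stock')],
    slicing_specs=[
        tfma.SlicingSpec(),                         # overall metrics
        tfma.SlicingSpec(feature_keys=['region']),  # track a specific subset
    ],
    metrics_specs=[
        tfma.MetricsSpec(metrics=[
            tfma.MetricConfig(
                class_name='AUC',
                threshold=tfma.MetricThreshold(
                    value_threshold=tfma.GenericValueThreshold(
                        lower_bound={'value': 0.8}
                    )
                ),
            ),
        ])
    ],
)

# The Evaluator blesses the model only if every threshold passes, so a
# downstream Pusher component will not push an underperforming model.
evaluator = Evaluator(
    examples=example_gen.outputs['examples'],  # upstream ExampleGen
    model=trainer.outputs['model'],            # upstream Trainer
    eval_config=eval_config,
)
```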



You have deployed a model on Vertex AI for real-time inference. During an online prediction request, you get an “Out of Memory” error. What should you do?

  1. Use batch prediction mode instead of online mode.
  2. Send the request again with a smaller batch of instances.
  3. Use base64 to encode your data before using it for prediction.
  4. Apply for a quota increase for the number of prediction requests.

Answer(s): B
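An "Out of Memory" error on an online prediction request usually means the payload of instances is too large for the deployed container, so resending the request in smaller batches (option B) is the fix; base64 encoding only changes the wire format, not the memory footprint. A minimal sketch, assuming the google-cloud-aiplatform SDK; the endpoint resource name and batch size are hypothetical placeholders.

```python
# Minimal sketch: retry an oversized online prediction request by splitting
# the instances into smaller batches. The endpoint resource name and the
# batch size are hypothetical placeholders.
from google.cloud import aiplatform

endpoint = aiplatform.Endpoint(
    endpoint_name='projects/my-project/locations/us-central1/endpoints/1234567890'
)

def predict_in_batches(instances, batch_size=8):
    """Send instances in chunks small enough to fit in the serving container's memory."""
    predictions = []
    for start in range(0, len(instances), batch_size):
        response = endpoint.predict(instances=instances[start:start + batch_size])
        predictions.extend(response.predictions)
    return predictions
```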



You work at a subscription-based company. You have trained an ensemble of trees and neural networks to predict customer churn, which is the likelihood that customers will not renew their yearly subscription. The average prediction is a 15% churn rate, but for a particular customer the model predicts that they are 70% likely to churn. The customer has a product usage history of 30%, is located in New York City, and became a customer in 1997. You need to explain the difference between the actual prediction, a 70% churn rate, and the average prediction. You want to use Vertex Explainable AI. What should you do?

  1. Train local surrogate models to explain individual predictions.
  2. Configure sampled Shapley explanations on Vertex Explainable AI.
  3. Configure integrated gradients explanations on Vertex Explainable AI.
  4. Measure the effect of each feature as the weight of the feature multiplied by the feature value.

Answer(s): B
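An ensemble of trees and neural networks is not fully differentiable, so integrated gradients does not apply; sampled Shapley (option B) is the Vertex Explainable AI method that supports such models and attributes the gap between the 70% prediction and the 15% baseline across the input features. A minimal sketch, assuming the google-cloud-aiplatform SDK; the project, artifact URI, serving image, and feature/output keys are hypothetical placeholders.

```python
# Minimal sketch: upload a model with sampled Shapley attributions enabled.
# All resource names and keys below are hypothetical placeholders.
from google.cloud import aiplatform

aiplatform.init(project='my-project', location='us-central1')

explanation_parameters = aiplatform.explain.ExplanationParameters(
    {'sampled_shapley_attribution': {'path_count': 10}}  # more paths -> finer attributions
)
explanation_metadata = aiplatform.explain.ExplanationMetadata(
    inputs={'features': {}},            # input tensor to attribute over
    outputs={'churn_probability': {}},  # output tensor to explain
)

model = aiplatform.Model.upload(
    display_name='churn-ensemble',
    artifact_uri='gs://my-bucket/churn-model/',
    serving_container_image_uri='us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest',
    explanation_parameters=explanation_parameters,
    explanation_metadata=explanation_metadata,
)
```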



You are working on a classification problem with time series data. After conducting just a few experiments using random cross-validation, you achieved an Area Under the Receiver Operating Characteristic Curve (AUC ROC) value of 99% on the training data. You haven’t explored using any sophisticated algorithms or spent any time on hyperparameter tuning. What should your next step be to identify and fix the problem?

  1. Address the model overfitting by using a less complex algorithm and use k-fold cross-validation.
  2. Address data leakage by applying nested cross-validation during model training.
  3. Address data leakage by removing features highly correlated with the target value.
  4. Address the model overfitting by tuning the hyperparameters to reduce the AUC ROC value.

Answer(s): C
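A near-perfect AUC after almost no tuning is the classic signature of data leakage, and random cross-validation on time series compounds it by letting the model train on the future; option C removes the leaking features, and a time-ordered split keeps the evaluation honest. A minimal sketch, assuming pandas and scikit-learn; the 0.95 correlation threshold and the `features` array are hypothetical placeholders.

```python
# Minimal sketch: flag features suspiciously correlated with the target
# (likely leakage) and replace random CV with a time-ordered split.
# The 0.95 threshold is a hypothetical placeholder.
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit

def flag_leaky_features(df: pd.DataFrame, target: str, threshold: float = 0.95):
    """Return feature names whose absolute correlation with the target is suspect."""
    corr = df.corr(numeric_only=True)[target].drop(target).abs()
    return corr[corr > threshold].index.tolist()

# TimeSeriesSplit keeps every validation fold strictly later in time than
# its training fold, so future information never leaks into training.
tscv = TimeSeriesSplit(n_splits=5)
for train_idx, val_idx in tscv.split(features):  # `features` is a hypothetical array
    ...  # train and evaluate per fold
```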





