Free Professional Machine Learning Engineer Exam Braindumps (page: 34)


You have recently created a proof-of-concept (POC) deep learning model. You are satisfied with the overall architecture, but you need to determine the value for a couple of hyperparameters. You want to perform hyperparameter tuning on Vertex AI to determine both the appropriate embedding dimension for a categorical feature used by your model and the optimal learning rate. You configure the following settings:

• For the embedding dimension, you set the type to INTEGER with a minValue of 16 and a maxValue of 64.
• For the learning rate, you set the type to DOUBLE with a minValue of 10e-05 and a maxValue of 10e-02.

You are using the default Bayesian optimization tuning algorithm, and you want to maximize model accuracy. Training time is not a concern. How should you set the hyperparameter scaling for each hyperparameter and the maxParallelTrials?

  A. Use UNIT_LINEAR_SCALE for the embedding dimension, UNIT_LOG_SCALE for the learning rate, and a large number of parallel trials.
  B. Use UNIT_LINEAR_SCALE for the embedding dimension, UNIT_LOG_SCALE for the learning rate, and a small number of parallel trials.
  C. Use UNIT_LOG_SCALE for the embedding dimension, UNIT_LINEAR_SCALE for the learning rate, and a large number of parallel trials.
  D. Use UNIT_LOG_SCALE for the embedding dimension, UNIT_LINEAR_SCALE for the learning rate, and a small number of parallel trials.

Answer(s): B
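The embedding dimension spans a small integer range, so linear scaling is appropriate, while the learning rate spans several orders of magnitude and is better searched on a log scale; Bayesian optimization benefits from learning from completed trials, which argues for a small number of parallel trials. Below is a minimal sketch of that study configuration with the Vertex AI Python SDK; the project settings, training container, and trial counts are placeholders, not values from the question.

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

# Assumed project settings and training image; replace with real values.
aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",
)

worker_pool_specs = [{
    "machine_spec": {"machine_type": "n1-standard-8"},
    "replica_count": 1,
    "container_spec": {"image_uri": "gcr.io/my-project/poc-trainer:latest"},
}]

trial_job = aiplatform.CustomJob(
    display_name="poc-trial",
    worker_pool_specs=worker_pool_specs,
)

hp_job = aiplatform.HyperparameterTuningJob(
    display_name="poc-hp-tuning",
    custom_job=trial_job,
    metric_spec={"accuracy": "maximize"},  # maximize model accuracy
    parameter_spec={
        # Small integer range: linear scaling.
        "embedding_dim": hpt.IntegerParameterSpec(min=16, max=64, scale="linear"),
        # Learning rate spans orders of magnitude: search it on a log scale.
        "learning_rate": hpt.DoubleParameterSpec(min=10e-05, max=10e-02, scale="log"),
    },
    max_trial_count=30,      # training time is not a concern
    parallel_trial_count=2,  # keep maxParallelTrials small so Bayesian optimization
                             # can learn from completed trials
)
hp_job.run()
```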



You are the Director of Data Science at a large company, and your Data Science team has recently begun using the Kubeflow Pipelines SDK to orchestrate their training pipelines. Your team is struggling to integrate their custom Python code into the Kubeflow Pipelines SDK. How should you instruct them to proceed in order to quickly integrate their code with the Kubeflow Pipelines SDK?

  A. Use the func_to_container_op function to create custom components from the Python code.
  B. Use the predefined components available in the Kubeflow Pipelines SDK to access Dataproc, and run the custom code there.
  C. Package the custom Python code into Docker containers, and use the load_component_from_file function to import the containers into the pipeline.
  D. Deploy the custom Python code to Cloud Functions, and use Kubeflow Pipelines to trigger the Cloud Function.

Answer(s): A
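For quick integration, the Kubeflow Pipelines v1 SDK can wrap an ordinary Python function directly into a component with func_to_container_op, without hand-building and publishing container images first. The sketch below assumes a hypothetical preprocess function and a stock python:3.9 base image.

```python
import kfp
from kfp.components import func_to_container_op

# Existing custom Python code, written as a self-contained function.
# (Illustrative placeholder for the team's real logic.)
def preprocess(input_path: str) -> str:
    """Run the team's custom preprocessing and return a status string."""
    # ... custom Python code would run here ...
    return "done"

# Turn the function into a reusable pipeline component; no Dockerfile needed.
preprocess_op = func_to_container_op(preprocess, base_image="python:3.9")

@kfp.dsl.pipeline(name="custom-training-pipeline")
def pipeline(input_path: str):
    preprocess_task = preprocess_op(input_path)

# Compile to a package that can be uploaded to Kubeflow Pipelines.
kfp.compiler.Compiler().compile(pipeline, "custom_training_pipeline.yaml")
```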



You work for the AI team of an automobile company, and you are developing a visual defect detection model using TensorFlow and Keras. To improve your model performance, you want to incorporate some image augmentation functions such as translation, cropping, and contrast tweaking. You randomly apply these functions to each training batch. You want to optimize your data processing pipeline for run time and compute resources utilization. What should you do?

  A. Embed the augmentation functions dynamically in the tf.data pipeline.
  B. Embed the augmentation functions dynamically as part of Keras generators.
  C. Use Dataflow to create all possible augmentations, and store them as TFRecords.
  D. Use Dataflow to create the augmentations dynamically per training run, and stage them as TFRecords.

Answer(s): A

Precomputing augmentations with Dataflow (options C and D) multiplies storage and preprocessing cost and is not random per epoch, whereas applying the functions inside the tf.data input pipeline keeps them random per batch and overlaps augmentation with training, as sketched below.
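A minimal sketch of the tf.data approach follows; the TFRecord schema, file pattern, image sizes, and batch size are assumptions for illustration. Mapping the augmentation function with num_parallel_calls and prefetching lets the CPU prepare randomly augmented batches while the accelerator trains.

```python
import tensorflow as tf

def parse_example(serialized):
    # Hypothetical TFRecord schema: raw JPEG bytes plus an integer label.
    features = tf.io.parse_single_example(serialized, {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    image = tf.io.decode_jpeg(features["image"], channels=3)
    image = tf.image.resize(image, [256, 256])
    return image, features["label"]

def augment(image, label):
    # Random augmentations applied on the fly, so every epoch sees different variants.
    image = tf.image.random_crop(image, size=[224, 224, 3])
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    image = tf.image.random_flip_left_right(image)
    return image, label

dataset = (
    tf.data.TFRecordDataset(tf.io.gfile.glob("gs://my-bucket/train-*.tfrecord"))
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    .map(augment, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(1024)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)
```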



You work for an online publisher that delivers news articles to over 50 million readers. You have built an AI model that recommends content for the company’s weekly newsletter. A recommendation is considered successful if the article is opened within two days of the newsletter’s published date and the user remains on the page for at least one minute.

All the information needed to compute the success metric is available in BigQuery and is updated hourly. The model is trained on eight weeks of data; on average, its performance degrades below the acceptable baseline after five weeks; and training takes 12 hours. You want to ensure that the model's performance stays above the acceptable baseline while minimizing cost. How should you monitor the model to determine when retraining is necessary?

  A. Use Vertex AI Model Monitoring to detect skew of the input features with a sample rate of 100% and a monitoring frequency of two days.
  B. Schedule a cron job in Cloud Tasks to retrain the model every week before the newsletter is created.
  C. Schedule a weekly query in BigQuery to compute the success metric.
  D. Schedule a daily Dataflow job in Cloud Composer to compute the success metric.

Answer(s): C

The success metric can be computed directly in BigQuery, and the model only degrades after about five weeks, so a weekly BigQuery query is the cheapest way to detect when retraining is needed; a daily Dataflow job orchestrated by Cloud Composer adds cost without catching degradation meaningfully sooner, as sketched below.
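Below is a minimal sketch of such a weekly check using the BigQuery Python client; the project, dataset, table, column names, and baseline threshold are hypothetical. The same query could equally be run as a BigQuery scheduled query, with retraining triggered only when the metric drops below the baseline.

```python
from google.cloud import bigquery

ACCEPTABLE_BASELINE = 0.15  # hypothetical success-rate threshold

client = bigquery.Client()

# Hypothetical schema: one row per recommended article per reader.
query = """
SELECT
  SAFE_DIVIDE(
    COUNTIF(opened_within_two_days AND seconds_on_page >= 60),
    COUNT(*)
  ) AS success_rate
FROM `my-project.newsletter.recommendation_events`
WHERE newsletter_date = DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
"""

success_rate = next(iter(client.query(query).result())).success_rate

if success_rate is not None and success_rate < ACCEPTABLE_BASELINE:
    # Kick off retraining only when the metric actually falls below the baseline.
    print(f"Success rate {success_rate:.3f} is below baseline; trigger retraining.")
```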





