Free Professional Machine Learning Engineer Exam Braindumps (page: 13)


You work for an online travel agency that also sells advertising placements on its website to other companies. You have been asked to predict the most relevant web banner that a user should see next. Security is important to your company. The model latency requirements are 300ms@p99, the inventory is thousands of web banners, and your exploratory analysis has shown that navigation context is a good predictor. You want to implement the simplest solution. How should you configure the prediction pipeline?

  A. Embed the client on the website, and then deploy the model on AI Platform Prediction.
  B. Embed the client on the website, deploy the gateway on App Engine, and then deploy the model on AI Platform Prediction.
  C. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Cloud Bigtable for writing and for reading the user’s navigation context, and then deploy the model on AI Platform Prediction.
  D. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Memorystore for writing and for reading the user’s navigation context, and then deploy the model on Google Kubernetes Engine.

Answer(s): B
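
For answer B, here is a minimal sketch of what the App Engine gateway could look like. It assumes a Flask app, a hypothetical project and model name, and that the embedded website client posts the user's navigation context as JSON; it is an illustration, not a reference implementation.

```python
# Hypothetical App Engine gateway that forwards the user's navigation context
# to an AI Platform Prediction model and returns the banner ranking.
# Project, model, and field names are illustrative assumptions.
from flask import Flask, jsonify, request
import googleapiclient.discovery

app = Flask(__name__)

# AI Platform Prediction (ml v1) client; uses the App Engine default credentials.
ml_service = googleapiclient.discovery.build("ml", "v1")
MODEL_NAME = "projects/my-project/models/banner_ranker"  # assumed names


@app.route("/predict_banner", methods=["POST"])
def predict_banner():
    # The website client posts the navigation context as a JSON instance.
    context = request.get_json()
    response = (
        ml_service.projects()
        .predict(name=MODEL_NAME, body={"instances": [context]})
        .execute()
    )
    return jsonify(response["predictions"])
```

Keeping the model behind a gateway means the browser never calls AI Platform Prediction directly (which addresses the security requirement), while the managed prediction service and the absence of an extra database keep the pipeline as simple as possible.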



Your team is building a convolutional neural network (CNN)-based architecture from scratch. The preliminary experiments running on your on-premises CPU-only infrastructure were encouraging but showed slow convergence. You have been asked to speed up model training to reduce time-to-market. You want to experiment with virtual machines (VMs) on Google Cloud to leverage more powerful hardware. Your code does not include any manual device placement and has not been wrapped in Estimator model-level abstraction.
Which environment should you train your model on?

  A. A VM on Compute Engine and 1 TPU with all dependencies installed manually.
  B. A VM on Compute Engine and 8 GPUs with all dependencies installed manually.
  C. A Deep Learning VM with an n1-standard-2 machine and 1 GPU with all libraries pre-installed.
  D. A Deep Learning VM with more powerful CPU e2-highcpu-16 machines with all libraries pre-installed.

Answer(s): C
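
A quick way to see why the single-GPU Deep Learning VM option needs no code changes: TensorFlow (assumed here, since the question does not name the framework) places ops on the visible GPU automatically when the code contains no manual device placement. The snippet below is a sketch you could run on such a VM, where the libraries come pre-installed.

```python
# Minimal check on a single-GPU Deep Learning VM: with no tf.device() calls
# and no distribution strategy, ops still land on the GPU automatically,
# so unmodified CNN training code gets the hardware speed-up.
import tensorflow as tf

print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# A toy convolution; note there is no device placement code at all.
x = tf.random.normal([8, 64, 64, 3])
conv = tf.keras.layers.Conv2D(filters=16, kernel_size=3)
y = conv(x)
print(y.device)  # expected to report a GPU device when one is attached
```

A TPU or a multi-GPU VM, by contrast, would only pay off after adding an Estimator wrapper or an explicit distribution strategy, which the question rules out.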



You work on a growing team of more than 50 data scientists who all use AI Platform. You are designing a strategy to organize your jobs, models, and versions in a clean and scalable way. Which strategy should you choose?

  A. Set up restrictive IAM permissions on the AI Platform notebooks so that only a single user or group can access a given instance.
  B. Separate each data scientist’s work into a different project to ensure that the jobs, models, and versions created by each data scientist are accessible only to that user.
  C. Use labels to organize resources into descriptive categories. Apply a label to each created resource so that users can filter the results by label when viewing or monitoring the resources.
  D. Set up a BigQuery sink for Cloud Logging logs that is appropriately filtered to capture information about AI Platform resource usage. In BigQuery, create a SQL view that maps users to the resources they are using.

Answer(s): C
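
As a rough illustration of answer C, the sketch below uses the AI Platform (ml v1) REST client to attach labels when creating a model and to filter by label when listing. The project, model, and label names are made-up placeholders, and the label-filter syntax is an assumption to be checked against the API docs.

```python
# Illustrative sketch: organize AI Platform models with labels so that a large
# team can filter resources instead of scanning everything.
import googleapiclient.discovery

ml_service = googleapiclient.discovery.build("ml", "v1")
parent = "projects/my-project"  # assumed project id

# Create a model tagged with its owner, team, and lifecycle phase.
ml_service.projects().models().create(
    parent=parent,
    body={
        "name": "churn_classifier",
        "labels": {"owner": "alice", "team": "growth", "phase": "experiment"},
    },
).execute()

# Later, any teammate can narrow the listing by label
# (filter syntax shown here is an assumption).
models = (
    ml_service.projects()
    .models()
    .list(parent=parent, filter="labels.team=growth")
    .execute()
)
```

The same label keys can be applied to training jobs and model versions, which keeps a 50-person team's resources in one project while still making them easy to browse and monitor.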



You are training a deep learning model for semantic image segmentation, and you want to reduce training time. While using a Deep Learning VM Image, you receive the following error: The resource 'projects/deeplearning-platform/zones/europe-west4-c/acceleratorTypes/nvidia-tesla-k80' was not found. What should you do?

  A. Ensure that you have GPU quota in the selected region.
  B. Ensure that the required GPU is available in the selected region.
  C. Ensure that you have preemptible GPU quota in the selected region.
  D. Ensure that the selected GPU has enough GPU memory for the workload.

Answer(s): B
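
To act on answer B, one option (sketched below, assuming the Compute Engine v1 API and placeholder project and zone values) is to list the accelerator types a zone actually offers before requesting the Deep Learning VM there.

```python
# Check which GPU types a zone offers before attaching one to a VM;
# the "not found" error above means the requested accelerator type does
# not exist in that zone.
import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "v1")
result = (
    compute.acceleratorTypes()
    .list(project="my-project", zone="europe-west4-c")
    .execute()
)
for accel in result.get("items", []):
    print(accel["name"])  # e.g. nvidia-tesla-v100; the K80 may not be listed
```

If nvidia-tesla-k80 does not appear in the returned list, either pick a zone that offers it or switch the VM to a GPU type that the chosen zone does provide.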


