Free Professional Machine Learning Engineer Exam Braindumps (page: 24)


You have been asked to build a model using a dataset that is stored in a medium-sized (~10 GB) BigQuery table. You need to quickly determine whether this data is suitable for model development. You want to create a one-time report that includes both informative visualizations of data distributions and more sophisticated statistical analyses to share with other ML engineers on your team. You require maximum flexibility to create your report. What should you do?

  1. Use Vertex AI Workbench user-managed notebooks to generate the report.
  2. Use Google Data Studio to create the report.
  3. Use the output from TensorFlow Data Validation on Dataflow to generate the report.
  4. Use Dataprep to create the report.

Answer(s): A
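
Explanation: A notebook gives the maximum flexibility the question asks for; you can pull the BigQuery table into a DataFrame, plot distributions, and run any statistical test you like in a one-time, shareable report. Below is a minimal sketch of how such a report could start in a Vertex AI Workbench notebook; the project, dataset, and table names are placeholders.

```python
# Minimal EDA sketch for a Vertex AI Workbench notebook.
# `my-project.my_dataset.my_table` is a placeholder table reference.
from google.cloud import bigquery
import matplotlib.pyplot as plt
from scipy import stats

client = bigquery.Client()
df = client.query(
    "SELECT * FROM `my-project.my_dataset.my_table`"  # ~10 GB; sample with TABLESAMPLE if the VM is small
).to_dataframe()

# Informative visualizations of the data distributions
df.hist(figsize=(12, 8), bins=50)
plt.tight_layout()
plt.show()

# More sophisticated statistical analysis, e.g. a normality test per numeric column
for col in df.select_dtypes("number"):
    stat, p = stats.normaltest(df[col].dropna())
    print(f"{col}: normaltest statistic={stat:.2f}, p-value={p:.4f}")
```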



You work on an operations team at an international company that manages a large fleet of on-premises servers located in a few data centers around the world. Your team collects monitoring data from the servers, including CPU/memory consumption. When an incident occurs on a server, your team is responsible for fixing it. Incident data has not been properly labeled yet. Your management team wants you to build a predictive maintenance solution that uses monitoring data from the VMs to detect potential failures and then alerts the service desk team. What should you do first?

  1. Train a time-series model to predict the machines’ performance values. Configure an alert if a machine’s actual performance values significantly differ from the predicted performance values.
  2. Implement a simple heuristic (e.g., based on z-score) to label the machines’ historical performance data. Train a model to predict anomalies based on this labeled dataset.
  3. Develop a simple heuristic (e.g., based on z-score) to label the machines’ historical performance data. Test this heuristic in a production environment.
  4. Hire a team of qualified analysts to review and label the machines’ historical performance data. Train a model based on this manually labeled dataset.

Answer(s): B
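
Explanation: Because the incident data is unlabeled, the cheapest and fastest first step is a simple heuristic that generates labels from the historical metrics, after which a model can be trained and later improved. The sketch below illustrates the idea with a z-score threshold; the CSV file and column names are hypothetical stand-ins for the collected monitoring data.

```python
# Sketch of heuristic labeling followed by model training.
# "monitoring_history.csv" and the metric columns are hypothetical stand-ins.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("monitoring_history.csv")
metrics = ["cpu_util", "mem_util"]

# Heuristic: flag a sample as anomalous if any metric is >3 standard deviations from its mean
z = (df[metrics] - df[metrics].mean()) / df[metrics].std()
df["label"] = (z.abs() > 3).any(axis=1).astype(int)

# Train a model on the heuristically labeled history
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(df[metrics], df["label"])
```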



You are developing an ML model that uses sliced frames from a video feed and creates bounding boxes around specific objects. You want to automate the following steps in your training pipeline: ingestion and preprocessing of data in Cloud Storage, followed by training and hyperparameter tuning of the object detection model using Vertex AI jobs, and finally deploying the model to an endpoint. You want to orchestrate the entire pipeline with minimal cluster management. What approach should you use?

  1. Use Kubeflow Pipelines on Google Kubernetes Engine.
  2. Use Vertex AI Pipelines with TensorFlow Extended (TFX) SDK.
  3. Use Vertex AI Pipelines with Kubeflow Pipelines SDK.
  4. Use Cloud Composer for the orchestration.

Answer(s): C
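
Explanation: Vertex AI Pipelines runs Kubeflow pipelines serverlessly, so there is no GKE cluster to manage, and the Kubeflow Pipelines SDK is not tied to TFX components, which suits custom Vertex AI training, tuning, and deployment steps. Below is a minimal KFP v2 sketch; the component bodies, project, bucket, and resource names are placeholders rather than a working object-detection pipeline.

```python
# Minimal Kubeflow Pipelines SDK (v2) sketch for Vertex AI Pipelines.
# Component bodies, project, bucket, and resource names are placeholders.
from kfp import dsl, compiler
from google.cloud import aiplatform

@dsl.component
def preprocess(gcs_input: str) -> str:
    # Ingest and preprocess the sliced video frames from Cloud Storage (placeholder)
    return gcs_input + "/processed"

@dsl.component
def train(data_path: str) -> str:
    # Launch Vertex AI custom training / hyperparameter tuning jobs (placeholder)
    return "projects/my-project/locations/us-central1/models/object-detector"

@dsl.component
def deploy(model_resource: str) -> str:
    # Deploy the tuned model to a Vertex AI endpoint (placeholder)
    return "projects/my-project/locations/us-central1/endpoints/object-detector"

@dsl.pipeline(name="object-detection-pipeline")
def pipeline(gcs_input: str):
    data = preprocess(gcs_input=gcs_input)
    model = train(data_path=data.output)
    deploy(model_resource=model.output)

compiler.Compiler().compile(pipeline, "pipeline.json")

aiplatform.init(project="my-project", location="us-central1")  # placeholder project
job = aiplatform.PipelineJob(
    display_name="object-detection-pipeline",
    template_path="pipeline.json",
    parameter_values={"gcs_input": "gs://my-bucket/frames"},  # placeholder bucket
)
job.run()  # serverless execution; no cluster to manage
```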



You are training an object detection machine learning model on a dataset that consists of three million X-ray images, each roughly 2 GB in size. You are using Vertex AI Training to run a custom training application on a Compute Engine instance with 32 cores, 128 GB of RAM, and 1 NVIDIA P100 GPU. You notice that model training is taking a very long time. You want to decrease training time without sacrificing model performance. What should you do?

  1. Increase the instance memory to 512 GB and increase the batch size.
  2. Replace the NVIDIA P100 GPU with a v3-32 TPU in the training job.
  3. Enable early stopping in your Vertex AI Training job.
  4. Use the tf.distribute.Strategy API and run a distributed training job.

Answer(s): D
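
Explanation: With three million large images on a single P100, the bottleneck is compute, so scaling out with the tf.distribute.Strategy API and a multi-GPU or multi-worker Vertex AI Training job cuts wall-clock time without changing what the model learns. The sketch below shows the general pattern with MirroredStrategy; the tiny model and random dataset are stand-ins for the real custom object-detection training code.

```python
# Sketch of distributed training with the tf.distribute.Strategy API.
# The tiny model and random dataset are stand-ins for the custom object-detection code.
import tensorflow as tf

# MirroredStrategy uses every GPU on the VM; MultiWorkerMirroredStrategy spans
# the multi-node worker pool that Vertex AI Training can provision.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

GLOBAL_BATCH_SIZE = 16 * strategy.num_replicas_in_sync

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(128, 128, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(4),  # placeholder bounding-box regression head
    ])
    model.compile(optimizer="adam", loss="mse")

# In the real job this would be a tf.data pipeline streaming images from Cloud Storage
images = tf.random.uniform([32, 128, 128, 3])
boxes = tf.random.uniform([32, 4])
dataset = tf.data.Dataset.from_tensor_slices((images, boxes)).batch(GLOBAL_BATCH_SIZE)

model.fit(dataset, epochs=1)
```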





