Free Professional Machine Learning Engineer Exam Braindumps

You have a demand forecasting pipeline in production that uses Dataflow to preprocess raw data prior to model training and prediction. During preprocessing, you employ Z-score normalization on data stored in BigQuery and write it back to BigQuery. New training data is added every week. You want to make the process more efficient by minimizing computation time and manual intervention. What should you do?

  1. Normalize the data using Google Kubernetes Engine.
  2. Translate the normalization algorithm into SQL for use with BigQuery.
  3. Use the normalizer_fn argument in TensorFlow’s Feature Column API.
  4. Normalize the data with Apache Spark using the Dataproc connector for BigQuery.

Answer(s): B
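The chosen option pushes normalization into BigQuery itself, since z-score normalization is simple enough to express directly in SQL (roughly `(value - AVG(value) OVER ()) / STDDEV(value) OVER ()`), avoiding a separate Dataflow pass each week. As a rough, non-authoritative sketch of the computation involved (values and function name are illustrative, not from the exam):

```python
# Illustrative sketch of z-score normalization, i.e. the computation that
# option B would express in BigQuery SQL using AVG and STDDEV window
# functions. Pure-Python version shown for clarity; inputs are made up.

def z_score_normalize(values):
    """Return z-score normalized values: (x - mean) / stddev."""
    n = len(values)
    mean = sum(values) / n
    # Population standard deviation, matching BigQuery's STDDEV_POP.
    std = (sum((x - mean) ** 2 for x in values) / n) ** 0.5
    return [(x - mean) / std for x in values]

normalized = z_score_normalize([10.0, 20.0, 30.0, 40.0])
```

Because BigQuery computes the same statistics with built-in aggregate functions, no external cluster or manual step is needed when new weekly data arrives.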



You need to design a customized deep neural network in Keras that will predict customer purchases based on their purchase history. You want to explore model performance using multiple model architectures, store training data, and be able to compare the evaluation metrics in the same dashboard. What should you do?

  1. Create multiple models using AutoML Tables.
  2. Automate multiple training runs using Cloud Composer.
  3. Run multiple training jobs on AI Platform with similar job names.
  4. Create an experiment in Kubeflow Pipelines to organize multiple runs.

Answer(s): C



You are developing a Kubeflow pipeline on Google Kubernetes Engine. The first step in the pipeline is to issue a query against BigQuery. You plan to use the results of that query as the input to the next step in your pipeline. You want to achieve this in the easiest way possible. What should you do?

  1. Use the BigQuery console to execute your query, and then save the query results into a new BigQuery table.
  2. Write a Python script that uses the BigQuery API to execute queries against BigQuery. Execute this script as the first step in your Kubeflow pipeline.
  3. Use the Kubeflow Pipelines domain-specific language to create a custom component that uses the Python BigQuery client library to execute queries.
  4. Locate the Kubeflow Pipelines repository on GitHub. Find the BigQuery Query Component, copy that component’s URL, and use it to load the component into your pipeline. Use the component to execute queries against BigQuery.

Answer(s): A



You are building a model to predict daily temperatures. You split the data randomly and then transformed the training and test datasets. Temperature data for model training is uploaded hourly. During testing, your model performed with 97% accuracy; however, after deploying to production, the model’s accuracy dropped to 66%. How can you make your production model more accurate?

  1. Normalize the data for the training and test datasets as two separate steps.
  2. Split the training and test data based on time rather than a random split to avoid leakage.
  3. Add more data to your test set to ensure that you have a fair distribution and sample for testing.
  4. Apply data transformations before splitting, and cross-validate to make sure that the transformations are applied to both the training and test sets.

Answer(s): B
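The accuracy drop is a classic symptom of leakage in time-series data: a random split lets the model train on observations that come after some test observations, which inflates test accuracy relative to production. A minimal sketch of a chronological split (field names are assumptions for illustration):

```python
# Minimal sketch of a time-based train/test split for time-series data.
# Sorting by timestamp and cutting chronologically ensures the model is
# evaluated only on data from after its training window, as it would be
# in production. Record structure and field names are assumed.

def time_based_split(records, train_fraction=0.8):
    """Sort records by timestamp and split chronologically."""
    ordered = sorted(records, key=lambda r: r["timestamp"])
    cut = int(len(ordered) * train_fraction)
    return ordered[:cut], ordered[cut:]

# Toy hourly temperature records.
data = [{"timestamp": t, "temp": 20 + t % 5} for t in range(10)]
train, test = time_based_split(data)
# Every training timestamp precedes every test timestamp.
```

Any transformations (such as normalization statistics) should then be fit on the training window only and applied to the test window, mirroring what the model will see at prediction time.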





