Free Professional Machine Learning Engineer Exam Braindumps (page: 20)

Page 20 of 69

You need to execute a batch prediction on 100 million records in a BigQuery table with a custom TensorFlow DNN regressor model, and then store the predicted results in a BigQuery table. You want to minimize the effort required to build this inference pipeline. What should you do?

  1. Import the TensorFlow model with BigQuery ML, and run the ml.predict function.
  2. Use the TensorFlow BigQuery reader to load the data, and use the BigQuery API to write the results to BigQuery.
  3. Create a Dataflow pipeline to convert the data in BigQuery to TFRecords. Run a batch inference on Vertex AI Prediction, and write the results to BigQuery.
  4. Load the TensorFlow SavedModel in a Dataflow pipeline. Use the BigQuery I/O connector with a custom function to perform the inference within the pipeline, and write the results to BigQuery.

Answer(s): A
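Answer A keeps the whole pipeline inside BigQuery: import the TensorFlow SavedModel with BigQuery ML, then score the table with `ML.PREDICT`. The sketch below shows the two statements involved; every project, dataset, table name, and the GCS path are placeholders, not values from the question.

```python
# Hypothetical sketch of answer A: import the TensorFlow SavedModel into
# BigQuery ML, then score the 100M-row table entirely inside BigQuery.
# All project/dataset/table names and the GCS path are placeholders.

create_model_sql = """
CREATE OR REPLACE MODEL `my_project.my_dataset.dnn_regressor`
  OPTIONS (MODEL_TYPE = 'TENSORFLOW',
           MODEL_PATH = 'gs://my-bucket/saved_model/*')
"""

predict_sql = """
CREATE OR REPLACE TABLE `my_project.my_dataset.predictions` AS
SELECT *
FROM ML.PREDICT(MODEL `my_project.my_dataset.dnn_regressor`,
                TABLE `my_project.my_dataset.input_records`)
"""

# To actually run these, submit each statement with the BigQuery client:
#   from google.cloud import bigquery
#   client = bigquery.Client()
#   client.query(create_model_sql).result()
#   client.query(predict_sql).result()
```

Because both the data and the predictions stay in BigQuery, no Dataflow job, TFRecord conversion, or separate serving infrastructure is needed, which is why this is the lowest-effort option.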



You are creating a deep neural network classification model using a dataset with categorical input values. Certain columns have a cardinality greater than 10,000 unique values. How should you encode these categorical values as input into the model?

  1. Convert each categorical value into an integer value.
  2. Convert the categorical string data to one-hot hash buckets.
  3. Map the categorical variables into a vector of boolean values.
  4. Convert each categorical value into a run-length encoded string.

Answer(s): B
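The hash-bucket option (B) relies on the "hashing trick": each categorical string is hashed into one of a fixed number of buckets, so the input dimensionality stays bounded even with 10,000+ unique values. A minimal pure-Python sketch, with a made-up bucket count and feature value:

```python
import hashlib

NUM_BUCKETS = 5000  # far fewer dimensions than the 10,000+ raw categories

def hash_bucket(value: str, num_buckets: int = NUM_BUCKETS) -> int:
    """Map a categorical string to a stable bucket id (the hashing trick)."""
    digest = hashlib.md5(value.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

def one_hot(bucket: int, num_buckets: int = NUM_BUCKETS) -> list:
    """One-hot encode a bucket id into a fixed-length indicator vector."""
    vec = [0] * num_buckets
    vec[bucket] = 1
    return vec

bucket = hash_bucket("product_id_123456")  # hypothetical feature value
encoding = one_hot(bucket)
```

In TensorFlow itself this corresponds to a hashing layer over the categorical column; distinct values may collide in the same bucket, which is the accepted trade-off for keeping the model input compact at this cardinality.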



You need to train a natural language model to perform text classification on product descriptions that contain millions of examples and 100,000 unique words. You want to preprocess the words individually so that they can be fed into a recurrent neural network. What should you do?

  1. Create a one-hot encoding of words, and feed the encodings into your model.
  2. Identify word embeddings from a pre-trained model, and use the embeddings in your model.
  3. Sort the words by frequency of occurrence, and use the frequencies as the encodings in your model.
  4. Assign a numerical value to each word from 1 to 100,000 and feed the values as inputs in your model.

Answer(s): B
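Option B replaces each word with a dense vector from a pre-trained embedding table (e.g. word2vec or GloVe), which is far more compact than a 100,000-dimensional one-hot vector and carries semantic similarity into the RNN. A toy sketch, where the tiny embedding table and all the words in it are made up:

```python
import numpy as np

# Toy stand-in for a pre-trained embedding table; a real table maps
# the full ~100,000-word vocabulary to dense vectors of dim 50-300.
EMBEDDING_DIM = 4
pretrained = {
    "cheap":   np.array([0.9, 0.1, 0.0, 0.2]),
    "durable": np.array([0.1, 0.8, 0.3, 0.0]),
    "shoe":    np.array([0.2, 0.2, 0.9, 0.1]),
}
unk = np.zeros(EMBEDDING_DIM)  # fallback for out-of-vocabulary words

def embed(tokens):
    """Replace each token with its embedding, yielding a
    (sequence_len, EMBEDDING_DIM) array an RNN can consume."""
    return np.stack([pretrained.get(t, unk) for t in tokens])

seq = embed(["cheap", "durable", "shoe", "widget"])  # "widget" maps to unk
```

The resulting sequence of fixed-size vectors is exactly the input shape a recurrent layer expects, which is why embeddings, not raw indices or frequencies, are the right per-word preprocessing here.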



You work for an online travel agency that also sells advertising placements on its website to other companies. You have been asked to predict the most relevant web banner that a user should see next. Security is important to your company. The model latency requirements are 300ms@p99, the inventory is thousands of web banners, and your exploratory analysis has shown that navigation context is a good predictor. You want to implement the simplest solution. How should you configure the prediction pipeline?

  1. Embed the client on the website, and then deploy the model on AI Platform Prediction.
  2. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Firestore for writing and for reading the user’s navigation context, and then deploy the model on AI Platform Prediction.
  3. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Cloud Bigtable for writing and for reading the user’s navigation context, and then deploy the model on AI Platform Prediction.
  4. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Memorystore for writing and for reading the user’s navigation context, and then deploy the model on Google Kubernetes Engine.

Answer(s): C
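The answer-C architecture has three moving parts: an App Engine gateway that the website client calls, Cloud Bigtable holding the per-user navigation context (its low-latency reads and writes fit the 300ms@p99 budget), and the model on AI Platform Prediction. The flow can be sketched with in-memory stand-ins; every component below is a placeholder, not real GCP client code:

```python
# Minimal in-memory sketch of the answer-C request flow. The dict plays
# Cloud Bigtable, and the functions play the App Engine gateway and the
# AI Platform Prediction endpoint.

navigation_context = {}  # Bigtable stand-in: user_id -> visited pages

def record_navigation(user_id, page):
    """Write path: append the latest page to the user's context."""
    navigation_context.setdefault(user_id, []).append(page)

def predict_banner(context):
    """AI Platform Prediction stand-in: score the context, pick a banner."""
    return "hotel_deal_banner" if "hotels" in context else "generic_banner"

def gateway(user_id, current_page):
    """App Engine gateway: persist context, read it back, call the model."""
    record_navigation(user_id, current_page)
    return predict_banner(navigation_context[user_id])

banner = gateway("user_42", "hotels")
```

Keeping the gateway as the single entry point also addresses the security requirement: the client never talks to the database or the model endpoint directly.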





