Free Professional Machine Learning Engineer Exam Braindumps


You need to analyze user activity data from your company’s mobile applications. Your team will use BigQuery for data analysis, transformation, and experimentation with ML algorithms. You need to ensure real-time ingestion of the user activity data into BigQuery. What should you do?

  1. Configure Pub/Sub to stream the data into BigQuery.
  2. Run an Apache Spark streaming job on Dataproc to ingest the data into BigQuery.
  3. Run a Dataflow streaming job to ingest the data into BigQuery.
  4. Configure Pub/Sub and a Dataflow streaming job to ingest the data into BigQuery.

Answer(s): A
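
For reference, the stated answer relies on Pub/Sub's BigQuery subscriptions, which stream messages straight into a table without an intermediate pipeline. A minimal sketch using the google-cloud-pubsub client, with placeholder project, topic, subscription, and table names:

```python
from google.cloud import pubsub_v1

# Placeholder identifiers -- replace with your own resources.
project_id = "my-project"
topic_id = "user-activity"
subscription_id = "user-activity-to-bq"
bigquery_table_id = "my-project.analytics.user_activity"  # table must already exist

publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()
topic_path = publisher.topic_path(project_id, topic_id)
subscription_path = subscriber.subscription_path(project_id, subscription_id)

# BigQuery subscription config: Pub/Sub writes each message into the table.
bigquery_config = pubsub_v1.types.BigQueryConfig(
    table=bigquery_table_id,
    write_metadata=True,  # also store message_id, publish_time, attributes
)

with subscriber:
    subscription = subscriber.create_subscription(
        request={
            "name": subscription_path,
            "topic": topic_path,
            "bigquery_config": bigquery_config,
        }
    )
    print(f"Created BigQuery subscription: {subscription.name}")
```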



You work for a gaming company that manages a popular online multiplayer game where teams with 6 players play against each other in 5-minute battles. There are many new players every day. You need to build a model that automatically assigns available players to teams in real time. User research indicates that the game is more enjoyable when battles have players with similar skill levels. Which business metrics should you track to measure your model’s performance?

  1. Average time players wait before being assigned to a team
  2. Precision and recall of assigning players to teams based on their predicted versus actual ability
  3. User engagement as measured by the number of battles played daily per user
  4. Rate of return as measured by additional revenue generated minus the cost of developing a new model

Answer(s): C
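
As a side note, the engagement metric in option C is easy to compute once battle logs are available. A small sketch using pandas on a made-up battle log; the column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical battle log: one row per (player, battle) with a timestamp.
battles = pd.DataFrame({
    "player_id": ["p1", "p1", "p2", "p2", "p2", "p3"],
    "battle_ts": pd.to_datetime([
        "2024-04-01 10:00", "2024-04-01 18:30",
        "2024-04-01 11:15", "2024-04-02 09:00", "2024-04-02 20:45",
        "2024-04-02 14:10",
    ]),
})

battles["day"] = battles["battle_ts"].dt.date
daily_battles_per_user = (
    battles.groupby(["day", "player_id"]).size()  # battles per player per day
           .groupby("day").mean()                 # average across players
)
print(daily_battles_per_user)
```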



You are building an ML model to predict trends in the stock market based on a wide range of factors. While exploring the data, you notice that some features have a large range. You want to ensure that the features with the largest magnitude don’t overfit the model. What should you do?

  1. Standardize the data by transforming it with a logarithmic function.
  2. Apply a principal component analysis (PCA) to minimize the effect of any particular feature.
  3. Use a binning strategy to replace the magnitude of each feature with the appropriate bin number.
  4. Normalize the data by scaling it to have values between 0 and 1.

Answer(s): D
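
For context, option D corresponds to min-max normalization: rescaling each feature to the [0, 1] range via (x - min) / (max - min), so large-magnitude features cannot dominate training. A minimal sketch using scikit-learn's MinMaxScaler on made-up data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical feature matrix with very different ranges per column.
X = np.array([
    [1_000_000.0, 0.02, 35.0],
    [2_500_000.0, 0.07, 41.0],
    [  750_000.0, 0.01, 29.0],
])

scaler = MinMaxScaler()            # scales each column to [0, 1] by default
X_scaled = scaler.fit_transform(X)
print(X_scaled)
```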



You work for a biotech startup that is experimenting with deep learning ML models based on properties of biological organisms. Your team frequently works on early-stage experiments with new architectures of ML models, and writes custom TensorFlow ops in C++. You train your models on large datasets and large batch sizes. Your typical batch size has 1024 examples, and each example is about 1 MB in size. The average size of a network with all weights and embeddings is 20 GB. What hardware should you choose for your models?

  1. A cluster with 2 n1-highcpu-64 machines, each with 8 NVIDIA Tesla V100 GPUs (128 GB GPU memory in total), and a n1-highcpu-64 machine with 64 vCPUs and 58 GB RAM
  2. A cluster with 2 a2-megagpu-16g machines, each with 16 NVIDIA Tesla A100 GPUs (640 GB GPU memory in total), 96 vCPUs, and 1.4 TB RAM
  3. A cluster with an n1-highcpu-64 machine with a v2-8 TPU and 64 GB RAM
  4. A cluster with 4 n1-highcpu-96 machines, each with 96 vCPUs and 86 GB RAM

Answer(s): B
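
One way to sanity-check the stated answer is a rough memory estimate. The per-GPU figures below are derived from the totals quoted in options A and B (128 GB across 8 GPUs versus 640 GB across 16 GPUs); the assumption that each GPU holds a full model replica under data parallelism is mine:

```python
# Back-of-the-envelope memory check for the GPU options.
model_size_gb = 20                  # weights + embeddings, from the question
batch_size_gb = 1024 * 1 / 1024     # 1024 examples x ~1 MB each = ~1 GB

options = [
    ("A: V100", 128 / 8),   # 16 GB per GPU
    ("B: A100", 640 / 16),  # 40 GB per GPU
]

for name, per_gpu_gb in options:
    fits = per_gpu_gb >= model_size_gb + batch_size_gb
    print(f"Option {name}: {per_gpu_gb:.0f} GB per GPU -> "
          f"{'fits' if fits else 'cannot hold'} a 20 GB replica plus a ~1 GB batch")

# Custom C++ TensorFlow ops rule out the TPU in option C, and the CPU-only
# cluster in option D would train far too slowly for models of this size.
```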





