Free Professional Data Engineer Exam Braindumps (page: 4)

Page 4 of 68

You create an important report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. You notice that visualizations are not showing data that is less than 1 hour old.
What should you do?

  1. Disable caching by editing the report settings.
  2. Disable caching in BigQuery by editing table details.
  3. Refresh your browser tab showing the visualizations.
  4. Clear your browser history for the past hour, then reload the tab showing the visualizations.

Answer(s): A


Your company built a TensorFlow neural-network model with a large number of neurons and layers. The model fits the training data well. However, when tested against new data, it performs poorly.
What method can you employ to address this?

  1. Threading
  2. Serialization
  3. Dropout Methods
  4. Dimensionality Reduction

Answer(s): C


Reference: prediction-using-tensorflow-30505541d877
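Dropout (option C) fights overfitting by randomly zeroing a fraction of unit activations during training and rescaling the survivors, so the network cannot rely on any single neuron. TensorFlow exposes this as `tf.keras.layers.Dropout`; below is a minimal NumPy sketch of the same "inverted dropout" idea (the function name and argument layout are my own, not an official API):

```python
import numpy as np

def dropout(activations, rate, rng=None, training=True):
    """Inverted dropout: zero out a fraction `rate` of activations and
    rescale the survivors so the expected value is unchanged.
    At inference time (training=False) activations pass through untouched."""
    if not training or rate == 0.0:
        return activations
    if rng is None:
        rng = np.random.default_rng()
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob  # True = keep unit
    return activations * mask / keep_prob

# With rate=0.5, each kept unit is doubled so the layer's expected
# output matches what the next layer sees at inference time.
```

Because the rescaling happens during training, no extra correction is needed when the model is deployed.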

You are deploying 10,000 new Internet of Things devices to collect temperature data in your warehouses globally. You need to process, store and analyze these very large datasets in real time.
What should you do?

  1. Send the data to Google Cloud Datastore and then export to BigQuery.
  2. Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store the data in Google BigQuery.
  3. Send the data to Cloud Storage and then spin up an Apache Hadoop cluster as needed in Google Cloud Dataproc whenever analysis is required.
  4. Export logs in batch to Google Cloud Storage and then spin up a Google Cloud SQL instance, import the data from Cloud Storage, and run an analysis as needed.

Answer(s): B
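In the Pub/Sub → Dataflow → BigQuery pipeline of option B, Dataflow's main job is to parse and transform each streamed message before the streaming insert into BigQuery. As a sketch, the per-message transform might look like the function below (the field names and schema are hypothetical; in a real Apache Beam pipeline it would be applied with `beam.Map` between `beam.io.ReadFromPubSub` and `beam.io.WriteToBigQuery`):

```python
import json

def message_to_row(payload: bytes) -> dict:
    """Turn one Pub/Sub message payload (JSON bytes from a hypothetical
    IoT temperature sensor) into a row dict shaped for a streaming
    insert into a BigQuery table with matching columns."""
    record = json.loads(payload.decode("utf-8"))
    return {
        "device_id": record["device_id"],          # assumed field names
        "warehouse": record["warehouse"],
        "temperature_c": float(record["temperature_c"]),
        "event_time": record["timestamp"],
    }
```

Keeping the transform a plain function makes it easy to unit-test outside the pipeline before wiring it into Dataflow.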

You are building a model to predict whether or not it will rain on a given day. You have thousands of input features and want to see if you can improve training speed by removing some features while having a minimum effect on model accuracy.
What can you do?

  1. Eliminate features that are highly correlated to the output labels.
  2. Combine highly co-dependent features into one representative feature.
  3. Instead of feeding in each feature individually, average their values in batches of 3.
  4. Remove the features that have null values for more than 50% of the training records.

Answer(s): B
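Option B works because two highly co-dependent features carry nearly the same signal, so collapsing them into one representative feature shrinks the input width with little accuracy loss. A minimal NumPy sketch (the helper name and the simple standardize-and-average scheme are illustrative; PCA is the more principled version of the same idea):

```python
import numpy as np

def combine_correlated(X, i, j):
    """Replace two highly correlated columns i and j of feature matrix X
    with their standardized average, reducing width by one column."""
    a = (X[:, i] - X[:, i].mean()) / X[:, i].std()
    b = (X[:, j] - X[:, j].mean()) / X[:, j].std()
    combined = (a + b) / 2.0  # one representative feature
    keep = [c for c in range(X.shape[1]) if c not in (i, j)]
    return np.column_stack([X[:, keep], combined])
```

Repeating this (or running PCA) across all strongly correlated feature pairs cuts training cost while preserving most of the predictive information.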

