Free Google Cloud Data Engineer Professional Exam Braindumps (page: 11)


Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time.

Which approach should you take?

  1. Attach the timestamp to each message in the Cloud Pub/Sub subscriber application as it is received.
  2. Attach the timestamp and package ID to the outbound message from each publisher device as it is sent to Cloud Pub/Sub.
  3. Use the NOW() function in BigQuery to record the event's time.
  4. Use the automatically generated timestamp from Cloud Pub/Sub to order the data.

Answer(s): B

Explanation:

The timestamp and package ID must be attached at the source, on the outbound message from each publisher device, so they record when the tracking event actually occurred. Timestamps added by the subscriber application or generated automatically by Cloud Pub/Sub reflect delivery or ingestion time, which can lag the event because of network delays and retries and would skew any analysis of the package data over time.
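As a minimal sketch of answer B, a publisher device (or its gateway process) could attach the package ID and event timestamp as Pub/Sub message attributes at publish time. The project, topic, and attribute names below are illustrative assumptions, not part of the question.

```python
from datetime import datetime, timezone

from google.cloud import pubsub_v1

# Hypothetical project and topic names, for illustration only.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("flowlogistic-prod", "package-tracking")


def publish_tracking_event(package_id: str, payload: bytes) -> None:
    """Publish one tracking message with event-time metadata attached."""
    future = publisher.publish(
        topic_path,
        data=payload,
        # Attach the package ID and the event (not processing) timestamp as
        # attributes so the data can be analyzed over time in BigQuery later.
        package_id=package_id,
        event_timestamp=datetime.now(timezone.utc).isoformat(),
    )
    future.result()  # block until Pub/Sub acknowledges the publish


publish_tracking_event("PKG-000123", b'{"status": "IN_TRANSIT"}')
```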


MJTelco's Google Cloud Dataflow pipeline is now ready to start receiving data from the 50,000 installations. You want to allow Cloud Dataflow to scale its compute power up as required.
Which Cloud Dataflow pipeline configuration setting should you update?

  1. The zone
  2. The number of workers
  3. The disk size per worker
  4. The maximum number of workers

Answer(s): D

Explanation:

Dataflow autoscaling adds workers as load increases, but only up to the configured maximum number of workers, so raising that ceiling is what allows the pipeline to scale its compute power up for the 50,000 installations. Changing the zone, the initial worker count, or the disk size per worker does not raise that limit.
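For illustration, a hedged Apache Beam sketch of where that setting lives: the autoscaling ceiling for a Dataflow job is set through the max_num_workers pipeline option. The project, region, bucket, and subscription names are placeholders.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder project/region/bucket/subscription values, for illustration only.
options = PipelineOptions(
    runner="DataflowRunner",
    project="mjtelco-prod",
    region="us-central1",
    temp_location="gs://mjtelco-temp/dataflow",
    streaming=True,
    autoscaling_algorithm="THROUGHPUT_BASED",
    # Raising this ceiling is what lets Dataflow scale compute power up:
    # the service adds workers as throughput demands, up to this maximum.
    max_num_workers=100,
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadTelemetry" >> beam.io.ReadFromPubSub(
            subscription="projects/mjtelco-prod/subscriptions/telemetry-sub")
        | "Decode" >> beam.Map(lambda message: message.decode("utf-8"))
        | "Process" >> beam.Map(print)  # stand-in for the real transforms
    )
```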




You need to compose visualizations for operations teams with the following requirements:

Which approach meets the requirements?

  1. Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.
  2. Load the data into Google BigQuery tables, write Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.
  3. Load the data into Google Cloud Datastore tables, write a Google App Engine Application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google charts and visualization API.
  4. Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.

Answer(s): D

Explanation:

A Google Data Studio 360 report that connects directly to BigQuery can calculate the metric and use a filter expression to show only the suboptimal rows, with no custom application code. Cloud Datastore is not designed for analytical scans across all rows, and the Sheets-based approaches require manual loading and do not scale to large telemetry tables.
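As a rough sketch of the BigQuery side of answer D (the same calculation a Data Studio filter would surface), the snippet below assumes hypothetical project, table, column, and threshold values, since the requirements list is not reproduced above.

```python
from google.cloud import bigquery

# Hypothetical project, table, and metric definition, for illustration only.
client = bigquery.Client(project="mjtelco-prod")

query = """
SELECT
  link_id,
  AVG(latency_ms) AS avg_latency_ms
FROM
  `mjtelco-prod.telemetry.link_metrics`
WHERE
  event_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY
  link_id
HAVING
  avg_latency_ms > 200  -- keep only suboptimal links
ORDER BY
  avg_latency_ms DESC
"""

for row in client.query(query).result():
    print(row.link_id, row.avg_latency_ms)
```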




You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data.

Which two actions should you take? (Choose two.)

  1. Ensure all the tables are included in global dataset.
  2. Ensure each table is included in a dataset for a region.
  3. Adjust the settings for each table to allow a related region-based security group view access.
  4. Adjust the settings for each view to allow a related region-based security group view access.
  5. Adjust the settings for each dataset to allow a related region-based security group view access.

Answer(s): B,E

Explanation:

Access in this design is granted at the dataset level, so the regional policy is enforced by placing each region's table in its own dataset and giving the matching region-based security group view access on that dataset. No views are created, so view-level settings do not apply, and a single global dataset would expose every region's table to everyone who can read it.
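A minimal sketch of answers B and E with the BigQuery Python client, assuming hypothetical per-region dataset and group names: each region's table lives in its own dataset, and the matching region-based security group is granted view (READER) access on that dataset.

```python
from google.cloud import bigquery

client = bigquery.Client(project="acme-reporting")  # hypothetical project

# One dataset per region; each holds only that region's table(s).
dataset = client.get_dataset("acme-reporting.sales_emea")

# Grant the region's security group view access at the dataset level.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="groupByEmail",
        entity_id="emea-analysts@example.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```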





