Free Professional Data Engineer Exam Braindumps


Given the volume of record streams MJTelco wants to ingest each day, they are concerned that their Google BigQuery costs will increase. MJTelco asks you to provide a design solution. They require a single large data table called tracking_table, they want to minimize the cost of daily queries while still performing fine-grained analysis of each day's events, and they want to use streaming ingestion.
What should you do?

  1. Create a table called tracking_table and include a DATE column.
  2. Create a partitioned table called tracking_table and include a TIMESTAMP column.
  3. Create sharded tables for each day following the pattern tracking_table_YYYYMMDD.
  4. Create a table called tracking_table with a TIMESTAMP column to represent the day.

Answer(s): B
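
Why B: a table partitioned on a TIMESTAMP column lets BigQuery prune every partition outside the queried day, so fine-grained daily analysis only pays to scan that day's data, and partitioned tables accept streaming inserts. Date-sharded tracking_table_YYYYMMDD tables (option C) would also limit scans, but they contradict the single-table requirement and add per-table overhead. Below is a minimal sketch using the google-cloud-bigquery Python client; the project, dataset, and column names (my_project, mjtelco, event_ts) are hypothetical stand-ins, not part of the question.

```python
# Minimal sketch: create a day-partitioned BigQuery table for streaming
# ingestion. Assumes the dataset my_project.mjtelco already exists; all
# names here are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

schema = [
    bigquery.SchemaField("installation_id", "STRING"),
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
    bigquery.SchemaField("payload", "STRING"),
]

table = bigquery.Table("my_project.mjtelco.tracking_table", schema=schema)
# Partition by day on the TIMESTAMP column so a daily query scans one partition.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",
)
client.create_table(table)

# A query that filters on event_ts is billed only for the partitions it
# touches, which is what keeps the daily query cost down:
#   SELECT * FROM `my_project.mjtelco.tracking_table`
#   WHERE event_ts >= TIMESTAMP("2023-06-01")
#     AND event_ts <  TIMESTAMP("2023-06-02")
```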




You need to compose visualizations for operations teams with the following requirements:

  - Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute).
  - The report must not be more than 3 hours delayed from live data.
  - The actionable report should show only suboptimal links.
  - Most suboptimal links should be sorted to the top.
  - Suboptimal links can be grouped and filtered by regional geography.
  - User response time to load the report must be less than 5 seconds.

You create a data source to store the last 6 weeks of data, and create visualizations that allow viewers to see multiple date ranges, distinct geographic regions, and unique installation types. You always show the latest data without any changes to your visualizations. You want to avoid creating and updating new visualizations each month.
What should you do?

  1. Look through the current data and compose a series of charts and tables, one for each possible combination of criteria.
  2. Look through the current data and compose a small set of generalized charts and tables bound to criteria filters that allow value selection.
  3. Export the data to a spreadsheet, compose a series of charts and tables, one for each possible combination of criteria, and spread them across multiple tabs.
  4. Load the data into relational database tables, write a Google App Engine application that queries all rows, summarizes the data across each criterion, and then renders results using the Google Charts and visualization API.

Answer(s): B
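
Why B: a small set of generalized charts bound to filter controls can serve every combination of date range, geography, and installation type from one data source, so nothing has to be rebuilt each month. The same idea at the query layer is one parameterized query instead of one query per combination; the sketch below uses BigQuery query parameters, with a hypothetical telemetry table and column names.

```python
# Minimal sketch: one parameterized query serving any date-range/region
# selection from dashboard filters. Table and column names are hypothetical.
import datetime

from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT region, installation_type, AVG(latency_ms) AS avg_latency_ms
    FROM `my_project.mjtelco.telemetry`
    WHERE event_ts BETWEEN @start_ts AND @end_ts
      AND region = @region
    GROUP BY region, installation_type
    ORDER BY avg_latency_ms DESC
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter(
            "start_ts", "TIMESTAMP",
            datetime.datetime(2023, 5, 1, tzinfo=datetime.timezone.utc)),
        bigquery.ScalarQueryParameter(
            "end_ts", "TIMESTAMP",
            datetime.datetime(2023, 6, 12, tzinfo=datetime.timezone.utc)),
        bigquery.ScalarQueryParameter("region", "STRING", "us-east1"),
    ]
)
for row in client.query(query, job_config=job_config):
    print(row.region, row.installation_type, row.avg_latency_ms)
```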




You need to compose visualizations for operations teams with the following requirements:

  - Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute).
  - The report must not be more than 3 hours delayed from live data.
  - The actionable report should show only suboptimal links.
  - Most suboptimal links should be sorted to the top.
  - Suboptimal links can be grouped and filtered by regional geography.
  - User response time to load the report must be less than 5 seconds.

Which approach meets the requirements?

  1. Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.
  2. Load the data into Google BigQuery tables, write a Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.
  3. Load the data into Google Cloud Datastore tables, write a Google App Engine application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google Charts and visualization API.
  4. Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.

Answer(s): D
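
Why D: Google Data Studio 360 (now Looker Studio) connects directly to BigQuery, computes calculated metrics, and applies filter expressions, which satisfies the freshness, sorting, and sub-5-second requirements without custom serving code; querying all rows out of Datastore through App Engine (option C) would not scale to billions of telemetry rows. The sketch below shows the kind of metric-plus-filter query such a report could be built on; the schema and the utilization threshold that defines a "suboptimal" link are assumptions for illustration only.

```python
# Minimal sketch: compute a per-link metric over recent telemetry and keep
# only suboptimal links, worst first -- the query a Data Studio report could
# sit on. Schema and the 0.99 utilization threshold are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT
      link_id,
      region,
      SAFE_DIVIDE(SUM(bytes_sent), SUM(capacity_bytes)) AS utilization
    FROM `my_project.mjtelco.telemetry`
    WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 3 HOUR)
    GROUP BY link_id, region
    HAVING utilization > 0.99   -- filter expression: suboptimal links only
    ORDER BY utilization DESC   -- worst links sorted to the top
"""
for row in client.query(query):
    print(row.link_id, row.region, row.utilization)
```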




MJTelco is building a custom interface to share data. They have these requirements:

  - They need to do aggregations over petabyte-scale datasets.
  - They need to scan specific time-range rows with very fast response times (milliseconds).

Which combination of Google Cloud Platform products should you recommend?

  1. Cloud Datastore and Cloud Bigtable
  2. Cloud Bigtable and Cloud SQL
  3. BigQuery and Cloud Bigtable
  4. BigQuery and Cloud Storage

Answer(s): C
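
Why C: BigQuery handles ad-hoc aggregations over petabyte-scale datasets, while Cloud Bigtable serves reads over contiguous row-key ranges in single-digit milliseconds, so encoding the timestamp in the row key turns a time-range scan into a fast range read. Below is a minimal sketch of that Bigtable scan using the google-cloud-bigtable Python client; the instance, table, column family, and row-key layout are all hypothetical.

```python
# Minimal sketch: millisecond time-range scan in Cloud Bigtable, assuming a
# hypothetical row-key design of "<installation_id>#<yyyymmddhhmmss>" so that
# one installation's rows for one day are contiguous in key order.
from google.cloud import bigtable
from google.cloud.bigtable.row_set import RowSet

client = bigtable.Client(project="my_project")
instance = client.instance("mjtelco-instance")
table = instance.table("tracking")

# A start/end key pair covers exactly one installation-day; Bigtable reads
# only that contiguous slice instead of scanning the whole table.
row_set = RowSet()
row_set.add_row_range_from_keys(
    start_key=b"install-0042#20230601000000",
    end_key=b"install-0042#20230602000000",
)

for row in table.read_rows(row_set=row_set):
    cell = row.cells["telemetry"][b"payload"][0]
    print(row.row_key, cell.value)
```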


