Free Google Cloud Data Engineer Professional Exam Braindumps (page: 13)


Your company has recently grown rapidly and is now ingesting data at a significantly higher rate than before. You manage the daily batch MapReduce analytics jobs in Apache Hadoop, but the recent increase in data volume means the batch jobs are falling behind. You have been asked to recommend ways the development team could increase the responsiveness of the analytics without increasing costs.
What should you recommend they do?

  1. Rewrite the job in Pig.
  2. Rewrite the job in Apache Spark.
  3. Increase the size of the Hadoop cluster.
  4. Decrease the size of the Hadoop cluster but also rewrite the job in Hive.

Answer(s): B

Rewriting the job in Apache Spark lets it run on the existing Hadoop cluster (via YARN) while benefiting from in-memory execution, improving responsiveness at no additional cost. Pig compiles down to MapReduce and offers no speedup, and resizing the cluster either raises costs or slows the jobs further.



You work for a large fast food restaurant chain with over 400,000 employees. You store employee information in Google BigQuery in a Users table consisting of a FirstName field and a LastName field. A member of IT is building an application and asks you to modify the schema and data in BigQuery so the application can query a FullName field consisting of the value of the FirstName field concatenated with a space, followed by the value of the LastName field for each employee. How can you make that data available while minimizing cost?

  1. Create a view in BigQuery that concatenates the FirstName and LastName field values to produce the FullName.
  2. Add a new column called FullName to the Users table. Run an UPDATE statement that updates the FullName column for each user with the concatenation of the FirstName and LastName values.
  3. Create a Google Cloud Dataflow job that queries BigQuery for the entire Users table, concatenates the FirstName value and LastName value for each user, and loads the proper values for FirstName, LastName, and FullName into a new table in BigQuery.
  4. Use BigQuery to export the data for the table to a CSV file. Create a Google Cloud Dataproc job to process the CSV file and output a new CSV file containing the proper values for FirstName, LastName and FullName. Run a BigQuery load job to load the new CSV file into BigQuery.

Answer(s): A

A view adds no storage and requires no load or processing jobs: FullName is computed at query time from the existing columns, which minimizes cost. The Dataflow and Dataproc options both duplicate the data into a new table, and the UPDATE approach incurs DML and storage costs.
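For reference, the view in option 1 can be defined with a single CREATE VIEW statement. The sketch below shows the SQL (the dataset and view names `mydataset.UsersWithFullName` are assumptions, not from the question) and mirrors the CONCAT logic in plain Python:

```python
# Sketch of the SQL a BigQuery view for option 1 might use.
# Dataset/view names are assumptions; only FirstName/LastName/FullName
# come from the question.
VIEW_SQL = """
CREATE VIEW mydataset.UsersWithFullName AS
SELECT
  FirstName,
  LastName,
  CONCAT(FirstName, ' ', LastName) AS FullName
FROM mydataset.Users
"""

def full_name(first_name: str, last_name: str) -> str:
    """Mirror of the view's CONCAT expression: first name, a space, last name."""
    return f"{first_name} {last_name}"
```

Because the view computes FullName at query time, no data is copied and no load, Dataflow, or Dataproc job is needed.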



You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity 'Movie' the properties 'actors' and 'tags' have multiple values but the property 'date_released' does not. A typical query would ask for all movies with actor=<actorname> ordered by date_released, or all movies with tag=Comedy ordered by date_released. How should you avoid a combinatorial explosion in the number of indexes?



  1. Option A
  2. Option B
  3. Option C
  4. Option D

(The text of the answer options was presented as an image in the original and is not recoverable.)

Answer(s): A
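Although the option text is missing, the standard way to avoid a combinatorial index explosion in Cloud Datastore for this query pattern is to declare only the composite indexes the queries actually need in index.yaml: one for actors + date_released and one for tags + date_released, never mixing the two multi-valued properties in a single index. A sketch, using the entity and property names from the question:

```yaml
# index.yaml -- composite indexes limited to the two query shapes in
# the question. Because no index combines both multi-valued properties
# (actors and tags), the combinatorial explosion of index entries is avoided.
indexes:
- kind: Movie
  properties:
  - name: actors
  - name: date_released
- kind: Movie
  properties:
  - name: tags
  - name: date_released
```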



You work for a manufacturing plant that batches application log files together into a single log file once a day at 2:00 AM. You have written a Google Cloud Dataflow job to process that log file. You need to make sure the log file is processed once per day as inexpensively as possible.
What should you do?

  1. Change the processing job to use Google Cloud Dataproc instead.
  2. Manually start the Cloud Dataflow job each morning when you get into the office.
  3. Create a cron job with Google App Engine Cron Service to run the Cloud Dataflow job.
  4. Configure the Cloud Dataflow job as a streaming job so that it processes the log data immediately.

Answer(s): C
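Option 3 can be implemented with a cron.yaml entry in App Engine. The sketch below is illustrative: the /run-dataflow handler is a hypothetical endpoint that would launch the Dataflow batch pipeline (for example, via a Dataflow template), and the 2:30 AM schedule is an assumption chosen to run after the 2:00 AM batching completes:

```yaml
# cron.yaml -- runs shortly after the 2:00 AM batch file is written.
# /run-dataflow is a hypothetical App Engine handler that starts the
# Dataflow batch job (e.g. by launching a Dataflow template).
cron:
- description: "daily log file processing"
  url: /run-dataflow
  schedule: every day 02:30
```

This keeps the job a batch pipeline (cheaper than a streaming job) and removes the manual step of starting it each morning.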





