Free Google Cloud Data Engineer Professional Exam Questions (page: 12)

Your company is loading comma-separated values (CSV) files into Google BigQuery. The data is fully imported successfully; however, the imported data does not match the source file byte for byte.
What is the most likely cause of this problem?

  1. The CSV data loaded in BigQuery is not flagged as CSV.
  2. The CSV data has invalid rows that were skipped on import.
  3. The CSV data loaded in BigQuery is not using BigQuery's default encoding.
  4. The CSV data has not gone through an ETL phase before loading into BigQuery.

Answer(s): C
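Explanation:

BigQuery treats CSV data as UTF-8 encoded by default. If the source file uses a different encoding (for example, ISO-8859-1) and that encoding is not declared at load time, BigQuery converts the values to UTF-8 during import, so the load succeeds but the stored data no longer matches the source byte for byte. Skipped invalid rows (option B) would contradict the statement that the data was fully imported.

The encoding can be declared explicitly at load time. A minimal sketch using the bq CLI; the dataset, table, bucket, and schema names here are illustrative:

  bq load \
    --source_format=CSV \
    --encoding=ISO-8859-1 \
    mydataset.mytable \
    gs://mybucket/data.csv \
    field1:STRING,field2:INTEGER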



Your company produces 20,000 files every hour. Each data file is formatted as a comma-separated values (CSV) file that is less than 4 KB. All files must be ingested on Google Cloud Platform before they can be processed. Your company site has a 200 ms latency to Google Cloud, and your Internet connection bandwidth is limited to 50 Mbps. You currently deploy a secure FTP (SFTP) server on a virtual machine in Google Compute Engine as the data ingestion point. A local SFTP client runs on a dedicated machine to transmit the CSV files as-is. The goal is to make reports with data from the previous day available to the executives by 10:00 a.m. each day. This design is barely able to keep up with the current volume, even though the bandwidth utilization is rather low.

You are told that due to seasonality, your company expects the number of files to double for the next three months.
Which two actions should you take? (Choose two.)

  1. Introduce data compression for each file to increase the rate of file transfer.
  2. Contact your internet service provider (ISP) to increase your maximum bandwidth to at least 100 Mbps.
  3. Redesign the data ingestion process to use gsutil tool to send the CSV files to a storage bucket in parallel.
  4. Assemble 1,000 files into a tape archive (TAR) file. Transmit the TAR files instead, and disassemble the CSV files in the cloud upon receiving them.
  5. Create an S3-compatible storage endpoint in your network, and use Google Cloud Storage Transfer Service to transfer on-premises data to the designated storage bucket.

Answer(s): C,E
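Explanation:

With 20,000 small files per hour and 200 ms of latency, the bottleneck is the per-file round-trip overhead of sequential SFTP transfers, not bandwidth, so parallelizing the uploads (options C and E) addresses the real constraint. As a sketch of option C, gsutil can copy files in parallel with the -m flag; the bucket name is illustrative:

  gsutil -m cp *.csv gs://my-ingest-bucket/

Storage Transfer Service (option E) similarly moves data from an S3-compatible endpoint in parallel, without funneling it through a single SFTP virtual machine.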



You are choosing a NoSQL database to handle telemetry data submitted from millions of Internet-of-Things (IoT) devices. The volume of data is growing at 100 TB per year, and each data entry has about 100 attributes. The data processing pipeline does not require atomicity, consistency, isolation, and durability (ACID). However, high availability and low latency are required.

You need to analyze the data by querying against individual fields.
Which three databases meet your requirements? (Choose three.)

  1. Redis
  2. HBase
  3. MySQL
  4. MongoDB
  5. Cassandra
  6. HDFS with Hive

Answer(s): B,D,E

Explanation:

HBase, MongoDB, and Cassandra are NoSQL databases that offer high availability and low-latency reads and writes, and each can query against individual fields (column qualifiers in HBase, document fields in MongoDB, and indexed columns in Cassandra). Redis is a key-value store that is not suited to querying arbitrary fields, MySQL is a relational ACID database rather than a NoSQL one, and Hive on HDFS is a batch query engine with high latency.


Suppose you have a table that includes a nested column called "city" inside a column called "person", but when you try to submit the following query in BigQuery, it gives you an error.

SELECT person FROM `project1.example.table1` WHERE city = "London"

How would you correct the error?

  1. Add ", UNNEST(person)" before the WHERE clause.
  2. Change "person" to "person.city".
  3. Change "person" to "city.person".
  4. Add ", UNNEST(city)" before the WHERE clause.

Answer(s): A

Explanation:

Because person is a repeated (ARRAY) column, the nested city field cannot be referenced directly; you must flatten the array with UNNEST(person), joined to table1 with a comma (an implicit CROSS JOIN), before filtering on city.
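
The corrected query would then look like the following; this sketch assumes person is a repeated record containing a city field, as the question describes:

SELECT person FROM `project1.example.table1`, UNNEST(person) WHERE city = "London"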


Reference:

https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sql#nested_repeated_results



What are two of the benefits of using denormalized data structures in BigQuery?

  1. Reduces the amount of data processed, reduces the amount of storage required
  2. Increases query speed, makes queries simpler
  3. Reduces the amount of storage required, increases query speed
  4. Reduces the amount of data processed, increases query speed

Answer(s): B

Explanation:

Denormalization increases query speed for tables with billions of rows because BigQuery's performance degrades when joining large tables; with a denormalized data structure, all of the data is combined into one table, so no JOINs are needed. For the same reason, denormalization makes queries simpler to write.

Denormalization increases the amount of data processed and the amount of storage required because it creates redundant data.
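
As an illustration, the join can be paid for once when the denormalized table is built, so that later reporting queries avoid it entirely. A sketch in BigQuery standard SQL; the dataset and table names are hypothetical:

  -- One-time step: materialize the join into a single wide table
  CREATE TABLE mydataset.orders_denorm AS
  SELECT o.order_id, o.order_date, c.customer_name, c.region
  FROM mydataset.orders AS o
  JOIN mydataset.customers AS c ON o.customer_id = c.customer_id;

  -- Reporting queries then read one table with no JOIN
  SELECT region, COUNT(*) AS order_count
  FROM mydataset.orders_denorm
  GROUP BY region;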


Reference:

https://cloud.google.com/solutions/bigquery-data-warehouse#denormalizing_data


