Free Professional Data Engineer Exam Braindumps (page: 33)


Cloud Bigtable is Google's ______ Big Data database service.

  1. Relational
  2. mySQL
  3. NoSQL
  4. SQL Server

Answer(s): C

Explanation:

Cloud Bigtable is Google's NoSQL Big Data database service. It is the same database that powers many of Google's own services, such as Search, Analytics, Maps, and Gmail.

It is designed for low-latency, high-throughput workloads, including Internet of Things (IoT) telemetry, user analytics, and financial data analysis.
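To make "NoSQL" concrete, here is a minimal sketch (not the Bigtable client API; all table, family, and key names are illustrative) of the sparse, wide-column data model Bigtable uses: each row is addressed by a single row key, and rows need not share the same set of columns.

```python
# Illustrative sketch of a sparse wide-column store, NOT the Bigtable API.
# Each row is keyed by one row key; cells are addressed by
# (column family, qualifier), and rows store only the cells they have.

table = {}  # row_key -> {(column_family, qualifier): value}

def put(row_key, family, qualifier, value):
    """Write one cell into a row, creating the row if needed."""
    table.setdefault(row_key, {})[(family, qualifier)] = value

def get(row_key, family, qualifier):
    """Read one cell, or None if the row or cell is absent."""
    return table.get(row_key, {}).get((family, qualifier))

# Rows are schemaless relative to each other: these two rows share no columns.
put("user#1001", "profile", "name", "Ada")
put("user#1001", "profile", "email", "ada@example.com")
put("device#42", "telemetry", "temp_c", "21.5")

print(get("user#1001", "profile", "name"))      # Ada
print(get("device#42", "telemetry", "temp_c"))  # 21.5
```

Unlike a relational table, there is no fixed schema to migrate when a new row type needs different columns; sparsity costs nothing because absent cells are simply not stored.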


Reference:

https://cloud.google.com/bigtable/



When you store data in Cloud Bigtable, what is the recommended minimum amount of stored data?

  1. 500 TB
  2. 1 GB
  3. 1 TB
  4. 500 GB

Answer(s): C

Explanation:

Cloud Bigtable is not a relational database: it does not support SQL queries, joins, or multi-row transactions. It is also not a good fit for datasets smaller than 1 TB.
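The 1 TB guidance can be encoded as a trivial sanity check; the constant comes from the explanation above, and the function name is my own:

```python
# Recommended minimum dataset size for Bigtable, per the guidance above.
MIN_RECOMMENDED_BYTES = 1 * 1024**4  # 1 TiB

def bigtable_size_ok(stored_bytes: int) -> bool:
    """Return True if the dataset meets the recommended 1 TB minimum."""
    return stored_bytes >= MIN_RECOMMENDED_BYTES

print(bigtable_size_ok(500 * 1024**3))  # 500 GiB -> False
print(bigtable_size_ok(2 * 1024**4))    # 2 TiB   -> True
```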


Reference:

https://cloud.google.com/bigtable/docs/overview#title_short_and_other_storage_options



When running a performance test that depends on Cloud Bigtable, all but one of the choices below are recommended steps.
Which one is NOT a recommended step?

  1. Do not use a production instance.
  2. Run your test for at least 10 minutes.
  3. Before you test, run a heavy pre-test for several minutes.
  4. Use at least 300 GB of data.

Answer(s): A

Explanation:

If you're running a performance test that depends upon Cloud Bigtable, be sure to follow these steps as you plan and execute your test:

Use a production instance. A development instance will not give you an accurate sense of how a production instance performs under load.

Use at least 300 GB of data. Cloud Bigtable performs best with 1 TB or more of data. However, 300 GB of data is enough to provide reasonable results in a performance test on a 3-node cluster. On larger clusters, use 100 GB of data per node.

Before you test, run a heavy pre-test for several minutes. This step gives Cloud Bigtable a chance to balance data across your nodes based on the access patterns it observes.

Run your test for at least 10 minutes. This step lets Cloud Bigtable further optimize your data, and it helps ensure that you will test reads from disk as well as cached reads from memory.
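The test plan above can be sketched as a two-phase harness: an unmeasured pre-test so the system can rebalance and warm caches, then a longer measured run. This is a scaled-down simulation against an in-memory store; durations are fractions of a second purely for demonstration, whereas the real guidance is several minutes of pre-test and at least 10 minutes of measurement.

```python
# Scaled-down sketch of the recommended test shape: warm-up phase first
# (results discarded), then a longer measured phase. The "store" here is a
# plain dict standing in for the system under test.
import random
import time

def run_phase(store, duration_s, record):
    """Issue random reads for duration_s seconds; return latencies if record."""
    end = time.monotonic() + duration_s
    latencies = []
    while time.monotonic() < end:
        key = f"row{random.randrange(1000):04d}"
        t0 = time.monotonic()
        store.get(key)  # the operation under test
        latencies.append(time.monotonic() - t0)
    return latencies if record else []

store = {f"row{i:04d}": b"x" * 32 for i in range(1000)}
run_phase(store, 0.1, record=False)            # pre-test: warm up, discard
measured = run_phase(store, 0.2, record=True)  # measured run
print(f"{len(measured)} ops measured")
```

Against real Bigtable the pre-test matters because the service observes access patterns and rebalances data across nodes; measuring before that happens understates steady-state performance.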


Reference:

https://cloud.google.com/bigtable/docs/performance



Cloud Bigtable is a recommended option for storing very large amounts of ____________________________?

  1. multi-keyed data with very high latency
  2. multi-keyed data with very low latency
  3. single-keyed data with very low latency
  4. single-keyed data with very high latency

Answer(s): C

Explanation:

Cloud Bigtable is a sparsely populated table that can scale to billions of rows and thousands of columns, allowing you to store terabytes or even petabytes of data. A single value in each row is indexed; this value is known as the row key. Cloud Bigtable is ideal for storing very large amounts of single-keyed data with very low latency. It supports high read and write throughput at low latency, and it is an ideal data source for MapReduce operations.
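Because the row key is the only indexed value, efficient access patterns are point lookups and range or prefix scans over lexicographically sorted keys. The following sketch (illustrative only; not the Bigtable client API, and the key scheme is my own) shows why a sorted single-key layout makes prefix scans cheap:

```python
# Sketch of the single-key, lexicographically ordered row model: rows are
# kept sorted by row key, so a prefix scan is two binary searches plus a
# contiguous slice. Illustrative only; not the Bigtable API.
import bisect

row_keys = sorted([
    "sensor#a#2024-01-01", "sensor#a#2024-01-02",
    "sensor#b#2024-01-01", "user#1001",
])

def prefix_scan(keys, prefix):
    """Return all keys starting with `prefix` via two binary searches."""
    lo = bisect.bisect_left(keys, prefix)
    hi = bisect.bisect_left(keys, prefix + "\xff")
    return keys[lo:hi]

print(prefix_scan(row_keys, "sensor#a#"))
# ['sensor#a#2024-01-01', 'sensor#a#2024-01-02']
```

This is also why "multi-keyed" access is a poor fit: a lookup on anything other than the row key would require a full scan, so all query dimensions must be encoded into the one key.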


Reference:

https://cloud.google.com/bigtable/docs/overview





