Free Professional Data Engineer Exam Braindumps (page: 34)


Google Cloud Bigtable indexes a single value in each row. This value is called the _______.

  1. primary key
  2. unique key
  3. row key
  4. master key

Answer(s): C

Explanation:

Cloud Bigtable is a sparsely populated table that can scale to billions of rows and thousands of columns, allowing you to store terabytes or even petabytes of data. A single value in each row is indexed; this value is known as the row key.


Reference:

https://cloud.google.com/bigtable/docs/overview
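Because the row key is the only indexed value, Bigtable schema design revolves around it. As an illustrative sketch (the helper below and its field layout are hypothetical, not part of the Bigtable API), a common pattern is a compound row key with a reversed timestamp so recent data sorts first:

```python
# Hypothetical row-key builder: Bigtable treats the row key as an opaque
# byte string, so the layout below is purely an application-level choice.
def make_row_key(device_id: str, timestamp_ms: int) -> bytes:
    # Reverse the timestamp so the most recent readings sort first
    # (Bigtable stores rows in lexicographic order by row key).
    reversed_ts = 10**13 - 1 - timestamp_ms
    return f"{device_id}#{reversed_ts:013d}".encode("utf-8")

key_new = make_row_key("sensor-42", 1_700_000_000_000)
key_old = make_row_key("sensor-42", 1_600_000_000_000)
# Newer readings sort before older ones for the same device.
assert key_new < key_old
```

The same design pressure is why the docs warn against monotonically increasing keys such as raw timestamps alone: all writes would land on one node.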



What is the HBase Shell for Cloud Bigtable?

  1. The HBase shell is a GUI based interface that performs administrative tasks, such as creating and deleting tables.
  2. The HBase shell is a command-line tool that performs administrative tasks, such as creating and deleting tables.
  3. The HBase shell is a hypervisor based shell that performs administrative tasks, such as creating and deleting new virtualized instances.
  4. The HBase shell is a command-line tool that performs only user account management functions to grant access to Cloud Bigtable instances.

Answer(s): B

Explanation:

The HBase shell is a command-line tool that performs administrative tasks, such as creating and deleting tables. The Cloud Bigtable HBase client for Java makes it possible to use the HBase shell to connect to Cloud Bigtable.


Reference:

https://cloud.google.com/bigtable/docs/installing-hbase-shell
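Once the HBase shell is installed and pointed at a Bigtable instance (per the reference above), a typical administrative session looks like the following sketch; the table and column-family names here are made up for illustration:

```shell
# Start the shell (connection settings come from hbase-site.xml).
hbase shell

# Inside the shell: create a table with one column family.
create 'my-table', 'cf1'

# List tables and inspect the new one.
list
describe 'my-table'

# Write and read a cell.
put 'my-table', 'row1', 'cf1:greeting', 'hello'
get 'my-table', 'row1'

# Tables must be disabled before they can be dropped.
disable 'my-table'
drop 'my-table'
```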



What is the recommended action for switching between SSD and HDD storage for your Google Cloud Bigtable instance?

  1. create a third instance and sync the data from the two storage types via batch jobs
  2. export the data from the existing instance and import the data into a new instance
  3. run parallel instances where one is HDD and the other is SSD
  4. the selection is final and you must continue using the same storage type

Answer(s): B

Explanation:

When you create a Cloud Bigtable instance and cluster, your choice of SSD or HDD storage for the cluster is permanent. You cannot use the Google Cloud Platform Console to change the type of storage that is used for the cluster.

If you need to convert an existing HDD cluster to SSD, or vice-versa, you can export the data from the existing instance and import the data into a new instance. Alternatively, you can write a Cloud Dataflow or Hadoop MapReduce job that copies the data from one instance to another.


Reference:

https://cloud.google.com/bigtable/docs/choosing-ssd-hdd
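One way to run the export/import is with the Google-provided Dataflow templates for Bigtable-to-Avro and Avro-to-Bigtable. The sketch below assumes those templates and their parameter names; the project, instance, bucket, and table names are placeholders, so verify everything against the current Dataflow template reference before use:

```shell
# Sketch: export the HDD instance's table to Avro files in Cloud Storage,
# then import them into a new SSD instance. Template paths and parameter
# names are assumptions -- check the Dataflow template docs.
gcloud dataflow jobs run export-bigtable \
    --gcs-location gs://dataflow-templates/latest/Cloud_Bigtable_to_GCS_Avro \
    --region us-central1 \
    --parameters bigtableProjectId=my-project,bigtableInstanceId=hdd-instance,bigtableTableId=my-table,outputDirectory=gs://my-bucket/export,filenamePrefix=my-table-

gcloud dataflow jobs run import-bigtable \
    --gcs-location gs://dataflow-templates/latest/GCS_Avro_to_Cloud_Bigtable \
    --region us-central1 \
    --parameters bigtableProjectId=my-project,bigtableInstanceId=ssd-instance,bigtableTableId=my-table,inputFilePattern='gs://my-bucket/export/my-table-*'
```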



You are training a spam classifier. You notice that you are overfitting the training data. Which three actions can you take to resolve this problem? (Choose three.)

  1. Get more training examples
  2. Reduce the number of training examples
  3. Use a smaller set of features
  4. Use a larger set of features
  5. Increase the regularization parameters
  6. Decrease the regularization parameters

Answer(s): A,C,E

Explanation:

Overfitting means the model is fitting noise in the training set rather than the underlying pattern. You can mitigate it by getting more training examples, using a smaller set of features so the model has less capacity to memorize the data, and increasing the regularization parameters to penalize overly complex models.
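The effect of the regularization parameter can be seen in a small ridge-regression sketch (pure NumPy, illustrative only, with made-up random data): increasing the penalty shrinks the fitted weights toward zero, which is why increasing regularization combats overfitting.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = rng.normal(size=20)

w_weak = ridge_fit(X, y, lam=0.01)   # weak regularization
w_strong = ridge_fit(X, y, lam=100)  # strong regularization

# Stronger regularization shrinks the weight vector toward zero,
# reducing the model's capacity to fit noise in the training set.
assert np.linalg.norm(w_strong) < np.linalg.norm(w_weak)
```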





