Free PROFESSIONAL-CLOUD-DATABASE-ENGINEER Exam Braindumps (page: 4)

Page 4 of 34

You recently launched a new product to the US market. You currently have two Bigtable clusters in one US region to serve all the traffic. Your marketing team is planning an immediate expansion to APAC. You need to roll out the regional expansion while implementing high availability according to Google-recommended practices.
What should you do?

  A. Maintain a target of 23% CPU utilization by locating:
     - cluster-a in zone us-central1-a
     - cluster-b in zone europe-west1-d
     - cluster-c in zone asia-east1-b
  B. Maintain a target of 23% CPU utilization by locating:
     - cluster-a in zone us-central1-a
     - cluster-b in zone us-central1-b
     - cluster-c in zone us-east1-a
  C. Maintain a target of 35% CPU utilization by locating:
     - cluster-a in zone us-central1-a
     - cluster-b in zone australia-southeast1-a
     - cluster-c in zone europe-west1-d
     - cluster-d in zone asia-east1-b
  D. Maintain a target of 35% CPU utilization by locating:
     - cluster-a in zone us-central1-a
     - cluster-b in zone us-central2-a
     - cluster-c in zone asia-northeast1-b
     - cluster-d in zone asia-east1-b

Answer(s): D

Explanation:

Option D is the only choice that both confines clusters to the markets being served (two US clusters plus two APAC clusters) and uses the lower 35% CPU utilization target that Google recommends when each cluster must be able to absorb a failed-over peer's traffic.
https://cloud.google.com/bigtable/docs/replication-settings#regional-failover



Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8 TB of data and can scale to millions of read and write requests per second.
What should you do?

  A. Write your data into Bigtable and use Dataproc and the Apache HBase libraries for analysis.
  B. Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage and analyze with Dataproc and the Cloud Storage connector.
  C. Use Memorystore to handle your low-latency requirements and for real-time analytics.
  D. Stream your data into BigQuery and use Dataproc and the BigQuery Storage API to analyze large volumes of data.

Answer(s): A

Explanation:

Bigtable is designed for exactly this profile: it is a low-latency, wide-column store that scales horizontally to millions of read and write requests per second, and the Apache HBase client libraries allow Dataproc to analyze the data directly. Memorystore is not suitable here: Memorystore for Memcached supports clusters only up to 5 TB, which cannot hold the 8 TB dataset.
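For context, Bigtable performance depends heavily on row-key design: a common pattern for clickstream data is to prefix keys with a high-cardinality value (such as a user ID) followed by a timestamp, so writes spread across tablets while one user's events stay contiguous for range scans. A minimal sketch (the field names and key layout here are illustrative assumptions, not part of the exam question):

```python
# Illustrative Bigtable row-key design for clickstream events.
# The high-cardinality user-ID prefix spreads writes across tablets;
# the timestamp suffix keeps each user's events contiguous so they
# can be read back with an efficient range scan.
def clickstream_row_key(user_id: str, event_ts_ms: int) -> bytes:
    # Zero-pad the millisecond timestamp so keys sort
    # lexicographically by time within each user prefix.
    return f"{user_id}#{event_ts_ms:013d}".encode("utf-8")

key = clickstream_row_key("user-42", 1700000000000)
```

The same key is then used when writing the row with the Bigtable (or HBase) client; the design choice, not the client call, is what prevents write hotspots.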



Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hot-spots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation.
What should you do? (Choose two.)

  A. Use an auto-incrementing value as the primary key.
  B. Normalize the data model.
  C. Promote low-cardinality attributes in multi-attribute primary keys.
  D. Promote high-cardinality attributes in multi-attribute primary keys.
  E. Use a bit-reversed sequential value as the primary key.

Answer(s): D,E

Explanation:

https://cloud.google.com/spanner/docs/schema-design

D) Promote high-cardinality attributes in multi-attribute primary keys. High-cardinality attributes have many distinct values, such as UUIDs, email addresses, or timestamps. Placing them first in a multi-attribute primary key distributes rows more evenly across the key space, so requests are spread across servers instead of concentrating on one, which is exactly what you want when the database is hot-spotting.

E) Use a bit-reversed sequential value as the primary key. Bit reversal turns a monotonically increasing value, such as a timestamp or an auto-incrementing ID, into a pseudo-random value. This spreads writes across the key space instead of sending every insert to the end of the table. Spanner supports this pattern natively: https://cloud.google.com/spanner/docs/schema-design#bit_reverse_primary_key
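The bit-reversal technique behind answer E can be sketched in plain code. This is a generic 64-bit reversal for illustration, independent of Spanner's built-in `BIT_REVERSED_POSITIVE` sequence implementation:

```python
def bit_reverse_64(value: int) -> int:
    """Reverse the bits of a 64-bit unsigned integer.

    Monotonically increasing inputs (e.g. auto-incrementing IDs)
    map to values scattered across the key space, avoiding a
    write hotspot at the tail of the table.
    """
    result = 0
    for _ in range(64):
        result = (result << 1) | (value & 1)  # append the lowest bit
        value >>= 1                           # consume it
    return result

# Sequential IDs land far apart after reversal: reversing 1 sets the
# top bit, reversing 2 sets the next bit down, and so on.
```

Reversal is its own inverse, so the original sequential value can always be recovered from the stored key if needed.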



You are managing multiple applications connecting to a database on Cloud SQL for PostgreSQL. You need to be able to monitor database performance to easily identify applications with long-running and resource-intensive queries.
What should you do?

  A. Use log messages produced by Cloud SQL.
  B. Use Query Insights for Cloud SQL.
  C. Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.
  D. Use Cloud SQL instance monitoring in the Google Cloud Console.

Answer(s): B

Explanation:

https://cloud.google.com/sql/docs/mysql/using-query-insights#introduction





