Free PROFESSIONAL-CLOUD-DATABASE-ENGINEER Exam Braindumps (page: 6)

Page 6 of 34

Your team uses thousands of connected IoT devices to collect device maintenance data for your oil and gas customers in real time. You want to design inspection routines, device repair, and replacement schedules based on insights gathered from the data produced by these devices. You need a managed solution that is highly scalable, supports a multi-cloud strategy, and offers low latency for these IoT devices.
What should you do?

  1. Use Firestore with Looker.
  2. Use Cloud Spanner with Data Studio.
  3. Use MongoDB Atlas with Charts.
  4. Use Bigtable with Looker.

Answer(s): C

Explanation:

This scenario initially suggests Bigtable: large volumes of data from many devices, analyzed in real time. It could even be argued to qualify as a multi-cloud solution, given its HBase API compatibility. However, Bigtable does not support SQL queries and is therefore not compatible (on its own) with Looker. Firestore with Looker has the same problem. Cloud Spanner with Data Studio is at least a compatible pairing, but it does not fit this use case, not least because it is Google-native rather than multi-cloud. By contrast, MongoDB Atlas is a managed solution (just not managed by Google) that is compatible with the proposed reporting tool (MongoDB's own Charts), is well suited to this type of IoT workload, and can run on any major cloud.
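As a sketch of how such maintenance insights might be gathered from Atlas, the following builds a hypothetical MongoDB aggregation pipeline. The collection name (`device_readings`), field names (`device_id`, `vibration`), and the threshold value are illustrative assumptions, not part of the exam question.

```python
# Hypothetical MongoDB aggregation pipeline: average each device's
# vibration readings and flag devices over an assumed alert threshold,
# worst-first, to drive inspection and replacement schedules.

VIBRATION_LIMIT = 7.5  # assumed threshold, for illustration only

def maintenance_pipeline(limit=VIBRATION_LIMIT):
    """Build a pipeline grouping readings per device and keeping only
    devices whose average vibration exceeds the limit."""
    return [
        {"$group": {"_id": "$device_id",
                    "avg_vibration": {"$avg": "$vibration"},
                    "readings": {"$sum": 1}}},
        {"$match": {"avg_vibration": {"$gt": limit}}},
        {"$sort": {"avg_vibration": -1}},
    ]

# With pymongo against an Atlas cluster this would run as:
#   db.device_readings.aggregate(maintenance_pipeline())
```

The same pipeline stages can be reused directly as a data source for a MongoDB Charts visualization.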



Your application follows a microservices architecture and uses a single large Cloud SQL instance, which is starting to have performance issues as your application grows. In the Cloud Monitoring dashboard, the CPU utilization looks normal. You want to follow Google-recommended practices to resolve and prevent these performance issues while avoiding any major refactoring.
What should you do?

  1. Use Cloud Spanner instead of Cloud SQL.
  2. Increase the number of CPUs for your instance.
  3. Increase the storage size for the instance.
  4. Use many smaller Cloud SQL instances.

Answer(s): D

Explanation:

https://cloud.google.com/sql/docs/mysql/best-practices#data-arch

Google's data architecture best practices recommend splitting a monolithic database into many smaller Cloud SQL instances where possible, rather than scaling up a single large instance. In a microservices architecture, giving each service its own instance matches this guidance and requires no major application refactoring.
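A minimal sketch of the "many smaller instances" approach: each microservice is mapped to its own Cloud SQL instance so its load lands on a dedicated database. The project, region, and instance connection names below are hypothetical.

```python
# Sketch: one smaller Cloud SQL instance per microservice instead of a
# single shared instance. All connection names here are hypothetical.

SERVICE_INSTANCES = {
    "orders":    "my-project:us-central1:orders-sql",
    "inventory": "my-project:us-central1:inventory-sql",
    "billing":   "my-project:us-central1:billing-sql",
}

def instance_for(service: str) -> str:
    """Return the Cloud SQL instance connection name for a microservice,
    so each service connects only to its own instance."""
    try:
        return SERVICE_INSTANCES[service]
    except KeyError:
        raise ValueError(f"no Cloud SQL instance mapped for service {service!r}")
```

Each service would pass its own connection name to the Cloud SQL Auth proxy or connector it already uses, so no query-level refactoring is needed.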



You need to perform a one-time migration of data from a running Cloud SQL for MySQL instance in the us-central1 region to a new Cloud SQL for MySQL instance in the us-east1 region. You want to follow Google-recommended practices to minimize performance impact on the currently running instance.
What should you do?

  1. Create and run a Dataflow job that uses JdbcIO to copy data from one Cloud SQL instance to another.
  2. Create two Datastream connection profiles, and use them to create a stream from one Cloud SQL instance to another.
  3. Create a SQL dump file in Cloud Storage using a temporary instance, and then use that file to import into a new instance.
  4. Create a CSV file by running the SQL statement SELECT...INTO OUTFILE, copy the file to a Cloud Storage bucket, and import it into a new instance.

Answer(s): C

Explanation:

https://cloud.google.com/sql/docs/mysql/import-export#serverless

A serverless export spins up a temporary instance to create the SQL dump file, offloading the export work from the running primary instance. This minimizes the performance impact on the source database, which is exactly what the question asks for.
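The serverless export can be requested through the Cloud SQL Admin API by setting `offload` in the export context. The sketch below only builds the request body; the bucket, dump file, and database names are hypothetical.

```python
# Sketch of a Cloud SQL Admin API instances.export request body that
# uses a temporary (serverless) instance via exportContext.offload.
# Bucket and database names are hypothetical.

def serverless_export_body(bucket: str, dump_name: str, databases):
    """Build the request body for instances.export; offload=True tells
    Cloud SQL to run the dump on a temporary instance, minimizing load
    on the primary."""
    return {
        "exportContext": {
            "fileType": "SQL",
            "uri": f"gs://{bucket}/{dump_name}",
            "databases": list(databases),
            "offload": True,  # serverless export
        }
    }

# Equivalent gcloud command:
#   gcloud sql export sql SOURCE_INSTANCE gs://BUCKET/dump.sql \
#       --database=mydb --offload
```

The resulting dump file in Cloud Storage is then imported into the new us-east1 instance with a standard import.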



You are running a mission-critical application on a Cloud SQL for PostgreSQL database with a multi-zonal setup. The primary and read replica instances are in the same region but in different zones. You need to ensure that you split the application load between both instances.
What should you do?

  1. Use Cloud Load Balancing for load balancing between the Cloud SQL primary and read replica instances.
  2. Use PgBouncer to set up database connection pooling between the Cloud SQL primary and read replica instances.
  3. Use HTTP(S) Load Balancing for database connection pooling between the Cloud SQL primary and read replica instances.
  4. Use the Cloud SQL Auth proxy for database connection pooling between the Cloud SQL primary and read replica instances.

Answer(s): B

Explanation:

https://severalnines.com/blog/how-achieve-postgresql-high-availability-pgbouncer/

https://cloud.google.com/blog/products/databases/using-haproxy-to-scale-read-only-workloads-on-cloud-sql-for-postgresql
This answer is correct because PgBouncer is a lightweight connection pooler for PostgreSQL that can help you distribute read requests between the Cloud SQL primary and read replica instances. PgBouncer also improves performance and scalability by reducing the overhead of creating new connections and reusing existing ones. You can install PgBouncer on a Compute Engine instance and configure it to connect to the Cloud SQL instances using private IP addresses or the Cloud SQL Auth proxy.
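A minimal sketch of the PgBouncer side of this setup: the `[databases]` section can expose the primary under one logical name and the read replica under another, so the application routes writes and reads to separate pools. The host IPs and database name below are hypothetical.

```python
# Sketch: render a pgbouncer.ini [databases] section with separate pool
# entries for the Cloud SQL primary (writes) and read replica (reads).
# Host addresses and the database name are hypothetical.

def pgbouncer_databases(primary_host: str, replica_host: str,
                        dbname: str = "appdb") -> str:
    """Build a [databases] section; the app connects to `appdb` for
    writes and `appdb_ro` for read-only traffic."""
    return (
        "[databases]\n"
        f"{dbname} = host={primary_host} port=5432 dbname={dbname}\n"
        f"{dbname}_ro = host={replica_host} port=5432 dbname={dbname}\n"
    )

print(pgbouncer_databases("10.0.0.5", "10.0.0.6"))
```

With this in place, the application only changes which logical database it connects to per query type; PgBouncer handles pooling and connection reuse for both instances.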





