Free PROFESSIONAL-CLOUD-DATABASE-ENGINEER Exam Braindumps (page: 5)


You are building an application that allows users to customize their website and mobile experiences. The application will capture user information and preferences. User profiles have a dynamic schema, and users can add or delete information from their profile. You need to ensure that user changes automatically trigger updates to your downstream BigQuery data warehouse.
What should you do?

  1. Store your data in Bigtable, and use the user identifier as the key. Use one column family to store user profile data, and use another column family to store user preferences.
  2. Use Cloud SQL, and create different tables for user profile data and user preferences from your recommendations model. Use SQL to join the user profile data and preferences.
  3. Use Firestore in Native mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
  4. Use Firestore in Datastore mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.

Answer(s): C

Explanation:

Use Firestore in Native mode for new mobile and web apps: it offers mobile and web client libraries with real-time and offline features, and it automatically scales to millions of concurrent clients. Because this application customizes website and mobile experiences and stores profiles with a dynamic, per-user schema, Native mode (C) is the best fit. Firestore in Datastore mode is intended for new server projects: it preserves established Datastore server architectures while removing fundamental Datastore limitations and can scale to millions of writes per second, but it lacks the mobile and web client libraries, which rules out D.
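
For illustration, here is a minimal sketch of how such a schemaless profile could be stored and updated with the Firestore (Native mode) Python client; the collection name, field names, and user identifier are hypothetical:

    from google.cloud import firestore

    # Firestore in Native mode: each user profile is one document with a
    # dynamic schema, keyed by the user identifier.
    db = firestore.Client()
    profile_ref = db.collection("user_profiles").document("user-12345")

    # Create or overwrite the profile; fields can vary per user.
    profile_ref.set({"display_name": "Alex", "theme": "dark"})

    # Users can add or delete fields at any time.
    profile_ref.update({
        "preferences.language": "en",      # add a nested preference
        "theme": firestore.DELETE_FIELD,   # remove a field
    })

    # Look the profile up by the user identifier.
    print(profile_ref.get().to_dict())

Each document change can then drive the downstream warehouse, for example through a Cloud Functions Firestore trigger or the Stream Firestore to BigQuery extension.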



Your application uses Cloud SQL for MySQL. Your users run reports that rely on near-real-time data, but the additional analytics workload caused excessive load on the primary database. You created a read replica for the analytics workloads, but now your users are complaining about the lag in data changes and that their reports are still slow. You need to improve report performance and shorten the data replication lag without making changes to the current reports.
Which two approaches should you implement? (Choose two.)

  1. Create secondary indexes on the replica.
  2. Create additional read replicas, and partition your analytics users to use different read replicas.
  3. Disable replication on the read replica, and set the flag for parallel replication on the read replica. Re-enable replication and optimize performance by setting flags on the primary instance.
  4. Disable replication on the primary instance, and set the flag for parallel replication on the primary instance. Re-enable replication and optimize performance by setting flags on the read replica.
  5. Move your analytics workloads to BigQuery, and set up a streaming pipeline to move data and update BigQuery.

Answer(s): B,C

Explanation:

The question combines two problems: replication lag and slow report performance. E is eliminated because moving the workload to BigQuery would require changes to the current reports. Report slowness can come from poor indexing, from excessive read load, or both; since the question explicitly mentions excessive load, creating additional read replicas and partitioning the analytics users across them is the right fix, which makes B correct and eliminates A as the way to speed up reporting. That leaves the replication lag. Cloud SQL uses single-threaded replication by default, so enabling parallel replication shortens the lag. To do that, you disable replication on the replica (not the primary), set the parallel replication flags on the replica, and optionally set flags on the primary instance to optimize performance. That makes C correct and D incorrect.
https://cloud.google.com/sql/docs/mysql/replication/manage-replicas#configuring-parallel-replication
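
As a rough sketch of option C, the replica-side steps can be driven through the Cloud SQL Admin API; the project and instance names below are hypothetical, and the exact flags and values should be taken from the linked documentation:

    from googleapiclient import discovery

    # Cloud SQL Admin API client (uses application default credentials).
    sqladmin = discovery.build("sqladmin", "v1beta4")
    project = "my-project"          # hypothetical project ID
    replica = "analytics-replica"   # hypothetical read replica name

    # 1. Stop replication on the read replica (not the primary).
    sqladmin.instances().stopReplica(project=project, instance=replica).execute()

    # 2. Set MySQL parallel-replication flags on the replica.
    body = {"settings": {"databaseFlags": [
        {"name": "slave_parallel_workers", "value": "4"},
        {"name": "slave_parallel_type", "value": "LOGICAL_CLOCK"},
    ]}}
    sqladmin.instances().patch(project=project, instance=replica, body=body).execute()

    # 3. Re-enable replication on the replica.
    sqladmin.instances().startReplica(project=project, instance=replica).execute()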



You are evaluating Cloud SQL for PostgreSQL as a possible destination for your on-premises PostgreSQL instances. Geography is becoming increasingly relevant to customer privacy worldwide. Your solution must support data residency requirements and include a strategy to:
- configure where data is stored
- control where the encryption keys are stored
- govern the access to data
What should you do?

  1. Replicate Cloud SQL databases across different zones.
  2. Create a Cloud SQL for PostgreSQL instance on Google Cloud for the data that does not need to adhere to data residency requirements. Keep the data that must adhere to data residency requirements on-premises. Make application changes to support both databases.
  3. Allow application access to data only if the users are in the same region as the Google Cloud region for the Cloud SQL for PostgreSQL database.
  4. Use features like customer-managed encryption keys (CMEK), VPC Service Controls, and Identity and Access Management (IAM) policies.

Answer(s): D

Explanation:

Choosing the instance's Google Cloud region configures where the data is stored, customer-managed encryption keys (CMEK) control where the encryption keys are stored, and VPC Service Controls together with IAM policies govern access to the data.
https://cloud.google.com/blog/products/identity-security/meet-data-residency-requirements-with-google-cloud
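
As a hedged illustration of option D, a Cloud SQL for PostgreSQL instance can be pinned to a region (controlling where the data is stored) and encrypted with a customer-managed Cloud KMS key (controlling where the key is stored) at creation time; all names below are hypothetical:

    from googleapiclient import discovery

    sqladmin = discovery.build("sqladmin", "v1beta4")

    body = {
        "name": "pg-residency-demo",       # hypothetical instance name
        "databaseVersion": "POSTGRES_14",
        "region": "europe-west3",          # data residency: where data is stored
        # CMEK: the key's location controls where the encryption key is stored.
        "diskEncryptionConfiguration": {
            "kmsKeyName": ("projects/my-project/locations/europe-west3/"
                           "keyRings/my-ring/cryptoKeys/my-key")
        },
        "settings": {"tier": "db-custom-2-7680"},
    }
    sqladmin.instances().insert(project="my-project", body=body).execute()

Access to the data is then governed by IAM policies on the instance and a VPC Service Controls perimeter around the project.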



Your customer is running a MySQL database on-premises with read replicas. The nightly incremental backups are expensive and add maintenance overhead. You want to follow Google-recommended practices to migrate the database to Google Cloud, and you need to ensure minimal downtime.
What should you do?

  1. Create a Google Kubernetes Engine (GKE) cluster, install MySQL on the cluster, and then import the dump file.
  2. Use the mysqldump utility to take a backup of the existing on-premises database, and then import it into Cloud SQL.
  3. Create a Compute Engine VM, install MySQL on the VM, and then import the dump file.
  4. Create an external replica, and use Cloud SQL to synchronize the data to the replica.

Answer(s): D

Explanation:

Configuring a Cloud SQL replica that replicates from the external on-premises primary keeps the data continuously synchronized, so the final cutover requires only minimal downtime. A one-time mysqldump import (B) would require an extended write freeze, and options A and C are self-managed MySQL rather than the Google-recommended managed approach.
https://cloud.google.com/sql/docs/mysql/replication/configure-replication-from-external
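
A rough sketch of option D with the Cloud SQL Admin API: a source representation instance describes the external on-premises primary, and a Cloud SQL replica is created against it. Hostnames and instance names are hypothetical, and the complete procedure (replication user, certificates, initial dump) follows the linked guide:

    from googleapiclient import discovery

    sqladmin = discovery.build("sqladmin", "v1beta4")
    project = "my-project"  # hypothetical

    # Source representation instance: a stand-in for the on-premises primary.
    source = {
        "name": "onprem-source",
        "databaseVersion": "MYSQL_8_0",
        "region": "us-central1",
        "onPremisesConfiguration": {"hostPort": "203.0.113.10:3306"},
    }
    sqladmin.instances().insert(project=project, body=source).execute()

    # Cloud SQL replica that continuously synchronizes from the external primary.
    replica = {
        "name": "cloudsql-replica",
        "region": "us-central1",
        "masterInstanceName": "onprem-source",
        "settings": {"tier": "db-custom-2-7680"},
    }
    sqladmin.instances().insert(project=project, body=replica).execute()

Once the replica has caught up, it can be promoted and the application cut over with minimal downtime.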





