Free PROFESSIONAL-CLOUD-DATABASE-ENGINEER Exam Braindumps (page: 13)


You are migrating an on-premises application to Google Cloud. The application requires a high availability (HA) PostgreSQL database to support business-critical functions. Your company's disaster recovery strategy requires a recovery time objective (RTO) and recovery point objective (RPO) within 30 minutes of failure. You plan to use a Google Cloud managed service.
What should you do to maximize uptime for your application?

  1. Deploy Cloud SQL for PostgreSQL in a regional configuration. Create a read replica in a different zone in the same region and a read replica in another region for disaster recovery.
  2. Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Take periodic backups, and use this backup to restore to a new Cloud SQL for PostgreSQL instance in another region during a disaster recovery event.
  3. Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Create a cross-region read replica, and promote the read replica as the primary node for disaster recovery.
  4. Migrate the PostgreSQL database to multi-regional Cloud Spanner so that a single region outage will not affect your application. Update the schema to support Cloud Spanner data types, and refactor the application.

Answer(s): C

Explanation:

Deploying Cloud SQL for PostgreSQL with HA enabled (regional availability) protects against zonal failures, and a cross-region read replica that can be promoted to primary during a disaster recovery event keeps both RTO and RPO well within the 30-minute requirement. Restoring from periodic backups (option B) cannot reliably meet a 30-minute RPO, and moving to Cloud Spanner (option D) requires schema and application refactoring that the scenario does not call for. A sketch of this setup follows below.
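For illustration only, a minimal sketch of this configuration using the gcloud CLI invoked from Python. The project, instance names, regions, and machine tier (my-project, pg-primary, pg-dr-replica, us-central1, us-east1, db-custom-4-16384) are placeholders chosen for the example, not values from the question:

```python
"""Sketch: HA primary, cross-region replica, and promotion for DR."""
import subprocess

def run(cmd):
    # Print and execute a gcloud command, raising on failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Primary with HA enabled (regional availability: standby in another zone).
run([
    "gcloud", "sql", "instances", "create", "pg-primary",
    "--project=my-project",
    "--database-version=POSTGRES_14",
    "--tier=db-custom-4-16384",
    "--region=us-central1",
    "--availability-type=REGIONAL",
])

# 2. Cross-region read replica for disaster recovery.
run([
    "gcloud", "sql", "instances", "create", "pg-dr-replica",
    "--project=my-project",
    "--master-instance-name=pg-primary",
    "--region=us-east1",
])

# 3. During a regional outage, promote the replica to a standalone primary.
run([
    "gcloud", "sql", "instances", "promote-replica", "pg-dr-replica",
    "--project=my-project",
])
```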



Your team is running a Cloud SQL for MySQL instance with a 5 TB database that must be available 24/7. You need to save database backups on object storage with minimal operational overhead or risk to your production workloads.
What should you do?

  1. Use Cloud SQL serverless exports.
  2. Create a read replica, and then use the mysqldump utility to export each table.
  3. Clone the Cloud SQL instance, and then use the mysqldump utility to export the data.
  4. Use the mysqldump utility on the primary database instance to export the backup.

Answer(s): A

Explanation:

https://cloud.google.com/blog/products/databases/introducing-cloud-sql-serverless-exports
A serverless export offloads the dump to a temporary instance, so the 5 TB export runs without adding load to, or risking, the production primary or its replicas, and without extra infrastructure for you to manage. A sketch follows below.
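For illustration only, a sketch of a serverless export via the gcloud CLI from Python. The instance, bucket, and database names (mysql-prod, my-backup-bucket, appdb) are placeholders; the --offload flag is what requests a serverless export:

```python
"""Sketch: serverless export of a Cloud SQL for MySQL database to Cloud Storage."""
import subprocess

subprocess.run([
    "gcloud", "sql", "export", "sql", "mysql-prod",
    "gs://my-backup-bucket/mysql-prod-dump.sql.gz",
    "--database=appdb",
    "--offload",  # serverless export: a temporary instance performs the dump
], check=True)
```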



You are deploying a new Cloud SQL instance on Google Cloud using the Cloud SQL Auth proxy. You have identified snippets of application code that need to access the new Cloud SQL instance. The snippets reside and execute on an application server running on a Compute Engine machine. You want to follow Google-recommended practices to set up Identity and Access Management (IAM) as quickly and securely as possible.
What should you do?

  1. For each application code snippet, set up a common shared user account.
  2. For each application code snippet, set up a dedicated user account.
  3. For the application server, set up a service account.
  4. For the application server, set up a common shared user account.

Answer(s): C

Explanation:

https://cloud.google.com/sql/docs/mysql/sql-proxy#using-a-service-account
Google recommends authenticating the Cloud SQL Auth proxy with a service account attached to the application server and granted the Cloud SQL Client role, rather than managing per-snippet user accounts. A sketch of the IAM setup follows below.
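For illustration only, a sketch of the IAM setup using the gcloud CLI from Python. The service account name, project, VM name, and zone (sql-proxy-sa, my-project, app-server, us-central1-a) are placeholders for this example:

```python
"""Sketch: one service account for the application server running the Auth proxy."""
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

PROJECT = "my-project"
SA = f"sql-proxy-sa@{PROJECT}.iam.gserviceaccount.com"

# 1. Create a dedicated service account for the application server.
run(["gcloud", "iam", "service-accounts", "create", "sql-proxy-sa",
     "--project", PROJECT])

# 2. Grant it the Cloud SQL Client role so the Auth proxy can connect.
run(["gcloud", "projects", "add-iam-policy-binding", PROJECT,
     "--member", f"serviceAccount:{SA}",
     "--role", "roles/cloudsql.client"])

# 3. Attach the service account to the Compute Engine VM (the VM must be stopped first).
run(["gcloud", "compute", "instances", "set-service-account", "app-server",
     "--zone", "us-central1-a",
     "--service-account", SA,
     "--scopes", "cloud-platform"])
```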



Your organization is running a low-latency reporting application on Microsoft SQL Server. In addition to the database engine, you are using SQL Server Analysis Services (SSAS), SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS) in your on-premises environment. You want to migrate your Microsoft SQL Server database instances to Google Cloud. You need to ensure minimal disruption to the existing architecture during migration.
What should you do?

  1. Migrate to Cloud SQL for SQL Server.
  2. Migrate to Cloud SQL for PostgreSQL.
  3. Migrate to Compute Engine.
  4. Migrate to Google Kubernetes Engine (GKE).

Answer(s): C

Explanation:

https://cloud.google.com/sql/docs/sqlserver/features
Cloud SQL for SQL Server does not support SSAS, SSRS, or SSIS, so only a lift-and-shift of the SQL Server instances to Compute Engine preserves the full existing architecture with minimal disruption. A sketch follows below.
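For illustration only, a sketch of creating a Compute Engine VM from a Google-provided SQL Server image using the gcloud CLI from Python. The machine type, zone, disk settings, and image family shown here are assumptions for the example; the currently available families can be listed with `gcloud compute images list --project windows-sql-cloud`:

```python
"""Sketch: lift-and-shift target VM built from a SQL Server image."""
import subprocess

subprocess.run([
    "gcloud", "compute", "instances", "create", "sqlserver-reporting",
    "--zone=us-central1-a",
    "--machine-type=n2-standard-8",
    "--image-project=windows-sql-cloud",
    "--image-family=sql-std-2019-win-2019",  # SQL Server Standard on Windows Server (example family)
    "--boot-disk-size=200GB",
    "--boot-disk-type=pd-ssd",
], check=True)
```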


