Free Google Cloud Architect Professional Exam Braindumps (page: 27)


A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space. How can you remediate the problem with the least amount of downtime?

  A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
  B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine.
  C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux.
  D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk.
  E. In the Cloud Platform Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.

Answer(s): A

Explanation:

On Linux instances, connect to your instance and manually resize your partitions and file systems to use the additional disk space that you added.
Extend the file system on the disk or the partition to use the added space. If you grew a partition on your disk, specify the partition. If your disk does not have a partition table, specify only the disk ID.

sudo resize2fs /dev/[DISK_ID][PARTITION_NUMBER]
where [DISK_ID] is the device name and [PARTITION_NUMBER] is the partition number for the device where you are resizing the file system.
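As a rough end-to-end sketch of this answer (the disk name, zone, new size, and device path below are placeholders, not values from the question), the persistent disk is grown first and the ext4 file system is then extended online, with no VM restart:

# Grow the persistent disk; this can be done while the disk is attached and in use.
gcloud compute disks resize data-disk --size=500GB --zone=us-central1-a

# On the VM, extend the ext4 file system to fill the larger disk while it remains mounted.
sudo resize2fs /dev/sdb

If the disk carries a partition table, the partition would need to be grown first (for example with growpart) before running resize2fs against the partition device.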


Reference:

https://cloud.google.com/compute/docs/disks/add-persistent-disk



Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow from the web tier to the API tier and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?

  A. Add each tier to a different subnetwork.
  B. Set up software-based firewalls on individual VMs.
  C. Add tags to each tier and set up routes to allow the desired traffic flow.
  D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.

Answer(s): D

Explanation:


Google Cloud Platform (GCP) controls traffic between instances with firewall rules that can target instances by network tags. Firewall rules and tags are defined once per network and apply across all regions.
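A minimal sketch of the tag-based rules, assuming illustrative tag names (web, api, db), network name, and ports, could look like the following. Traffic from the web tier to the database tier is then blocked, provided no broader rule (such as a default allow-internal rule) still permits it:

# Allow only the web tier to reach the API tier.
gcloud compute firewall-rules create web-to-api \
    --network=my-vpc --allow=tcp:8080 --source-tags=web --target-tags=api

# Allow only the API tier to reach the database tier.
gcloud compute firewall-rules create api-to-db \
    --network=my-vpc --allow=tcp:3306 --source-tags=api --target-tags=db

Each tier's instances are tagged accordingly, for example: gcloud compute instances add-tags api-vm-1 --tags=api --zone=us-central1-a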


Reference:

https://cloud.google.com/docs/compare/openstack/
https://aws.amazon.com/it/blogs/aws/building-three-tier-architectures-with-security-groups/



To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department.
Which two steps should you take? Choose two answers.

  A. Use the --no-auto-delete flag on all persistent disks and stop the VM.
  B. Use the --auto-delete flag on all persistent disks and terminate the VM.
  C. Apply a VM CPU utilization label and include it in the BigQuery billing export.
  D. Use Google BigQuery billing export and labels to associate cost to groups.
  E. Store all state on local SSD, snapshot the persistent disks, and terminate the VM.
  F. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM.

Answer(s): A,D

Explanation:

https://cloud.google.com/billing/docs/how-to/export-data-bigquery
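The two chosen steps can be sketched with the commands below. The instance, disk, zone, label, project, and dataset names are placeholders, and the billing export table is assumed to follow the standard gcp_billing_export_v1_<BILLING_ACCOUNT_ID> naming:

# Label development VMs so their cost can be grouped per team in the billing export.
gcloud compute instances create dev-vm-1 --zone=us-central1-a --labels=env=dev,team=payments

# Keep the persistent disk (and its state) even if the instance is ever deleted.
# The boot disk name is assumed to match the instance name here.
gcloud compute instances set-disk-auto-delete dev-vm-1 --zone=us-central1-a \
    --disk=dev-vm-1 --no-auto-delete

# Stop, rather than delete, the VM between working sessions; attached persistent disks are retained.
gcloud compute instances stop dev-vm-1 --zone=us-central1-a

# Group exported costs by the team label in the BigQuery billing export.
bq query --use_legacy_sql=false '
  SELECT l.value AS team, ROUND(SUM(cost), 2) AS total_cost
  FROM `my-project.billing_dataset.gcp_billing_export_v1_XXXXXX`, UNNEST(labels) AS l
  WHERE l.key = "team"
  GROUP BY team
  ORDER BY total_cost DESC'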



Your company's user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the portal. The portal meets a 99.99% availability SLA under these conditions. However, next quarter your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once the additional user load is introduced.
What should you do?

  A. Capture existing users' input, and replay the captured user load until autoscaling is triggered on all layers. At the same time, terminate all resources in one of the zones.
  B. Create synthetic random user input, replay the synthetic load until autoscaling logic is triggered on at least one layer, and introduce "chaos" into the system by terminating random resources in both zones.
  C. Expose the new system to a larger group of users, and increase the group size each day until autoscaling logic is triggered on all layers. At the same time, terminate random resources in both zones.
  D. Capture existing users' input, and replay the captured user load until resource utilization crosses 80%.
    Also, derive the estimated number of users based on existing users' usage of the app, and deploy enough resources to handle 200% of the expected load.

Answer(s): A
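Option A can be sketched roughly as follows; the replay tool, capture file, hostname, and zone below are hypothetical placeholders. The idea is to replay recorded real-user traffic until autoscaling engages on every layer and then simulate the loss of a single zone:

# Replay previously captured production traffic against the portal
# (the replay tool and capture file are hypothetical).
./replay-captured-load --input=captured_requests.log --target=https://portal.example.com --rate=ramp &

# Once autoscaling has added instances on all layers, terminate every instance in one zone
# to verify that the remaining zone still meets the 99.99% availability SLA.
gcloud compute instances list --zones=us-central1-b --format='value(name)' | \
  xargs -r gcloud compute instances delete --zone=us-central1-b --quiet

The managed instance groups should recreate the deleted instances, and the error rate and latency measured during the test indicate whether the SLA would hold.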





