Free Google Associate Cloud Engineer Exam Braindumps

You need to deploy an application, which is packaged in a container image, in a new project. The application exposes an HTTP endpoint and receives very few requests per day. You want to minimize costs.
What should you do?

  A. Deploy the container on Cloud Run.
  B. Deploy the container on Cloud Run on GKE.
  C. Deploy the container on App Engine Flexible.
  D. Deploy the container on Google Kubernetes Engine, with cluster autoscaling and horizontal pod autoscaling enabled.

Answer(s): A

Explanation:

Cloud Run is a fully managed serverless platform that runs any container image and automatically scales from zero to N instances depending on traffic. Because the service scales to zero between requests, an endpoint that receives very few requests per day costs almost nothing: you pay only while requests are being processed. Cloud Run on GKE and a GKE cluster both bill for nodes even when idle, and App Engine Flexible keeps at least one instance running at all times, so none of those options minimizes cost here.

https://cloud.google.com/run
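
As a minimal sketch, such a deployment is a single command; the service name, image path, and region below are placeholders, not values from the question:

    # Deploy the container image as a fully managed Cloud Run service.
    # my-service, the image path, and the region are placeholder values.
    gcloud run deploy my-service \
        --image=gcr.io/my-project/my-app:latest \
        --region=us-central1 \
        --allow-unauthenticated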



Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice. You would like to consolidate all costs as of tomorrow.
What should you do?

  A. Link the acquired company's projects to your company's billing account.
  B. Configure the acquired company's billing account and your company's billing account to export the billing data into the same BigQuery dataset.
  C. Migrate the acquired company's projects into your company's GCP organization. Link the migrated projects to your company's billing account.
  D. Create a new GCP organization and a new billing account. Migrate the acquired company's projects and your company's projects into the new GCP organization and link the projects to the new billing account.

Answer(s): A

Explanation:

Linking the acquired company's projects to your company's billing account takes effect immediately, so all charges from both sets of projects appear on a single invoice starting tomorrow. Exporting both billing accounts into one BigQuery dataset merely combines the data for analysis while still producing two invoices, and migrating hundreds of projects between organizations cannot realistically be completed in a day.

https://cloud.google.com/resource-manager/docs/project-migration
https://cloud.google.com/resource-manager/docs/project-migration#oauth_consent_screen
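
As a rough sketch of option A, each acquired project can be linked with one command per project; the project ID and billing account ID are placeholders, and on older gcloud releases this command lives under gcloud beta billing:

    # Link an acquired project to your company's existing billing account.
    # PROJECT_ID and the billing account ID are placeholder values.
    gcloud billing projects link PROJECT_ID \
        --billing-account=0X0X0X-0X0X0X-0X0X0X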



You built an application on Google Cloud Platform that uses Cloud Spanner. Your support team needs to monitor the environment but should not have access to table data. You need a streamlined solution to grant the correct permissions to your support team, and you want to follow Google-recommended practices.

What should you do?

  A. Add the support team group to the roles/monitoring.viewer role.
  B. Add the support team group to the roles/spanner.databaseUser role.
  C. Add the support team group to the roles/spanner.databaseReader role.
  D. Add the support team group to the roles/stackdriver.accounts.viewer role.

Answer(s): A

Explanation:

roles/monitoring.viewer provides read-only access to get and list information about all monitoring data and configurations, which is exactly what the support team needs to observe the environment. It grants no access to Cloud Spanner table data, whereas roles/spanner.databaseUser and roles/spanner.databaseReader both allow reading from the database. Granting the role to the support team group, rather than to individual users, also follows Google-recommended practice, so roles/monitoring.viewer is the right answer.


Reference:

https://cloud.google.com/iam/docs/understanding-roles#cloud-spanner-roles
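
As a minimal sketch of option A, the binding can be added at the project level; the project ID and group address are placeholders:

    # Grant read-only monitoring access to the support team group.
    # my-project and the group email are placeholder values.
    gcloud projects add-iam-policy-binding my-project \
        --member="group:support-team@example.com" \
        --role="roles/monitoring.viewer"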



For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Stackdriver Logging agent on all the instances. You want to minimize cost.
What should you do?

  A. 1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances. 2. Update your instances' metadata to add the following value: logs-destination: bq://platform-logs.
  B. 1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink. 2. Create a Cloud Function that is triggered by messages in the logs topic. 3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.
  C. 1. In Stackdriver Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.
  D. 1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset. 2. Configure this Cloud Function to create a BigQuery job that executes this query: INSERT INTO dataset.platform-logs (timestamp, log) SELECT timestamp, log FROM compute.logs WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY). 3. Use Cloud Scheduler to trigger this Cloud Function once a day.

Answer(s): C

Explanation:

1. In Stackdriver Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.

A Logging export sink streams the filtered logs directly into BigQuery with no intermediate infrastructure, so nothing beyond Logging and BigQuery is billed. The Pub/Sub and Cloud Function alternatives add components that cost money and need maintenance, and the logs-destination metadata key in option A is not a real Logging agent feature.
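
The same export can be created from the command line; this is a sketch assuming a project called my-project, with the sink name chosen for illustration:

    # Route only Compute Engine logs to the platform-logs BigQuery dataset.
    # my-project and the sink name are placeholder values.
    gcloud logging sinks create platform-logs-sink \
        bigquery.googleapis.com/projects/my-project/datasets/platform-logs \
        --log-filter='resource.type="gce_instance"'

After creating the sink, its writer identity (printed by the command) still needs the BigQuery Data Editor role on the dataset before logs start flowing.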





