Free Google Associate Cloud Engineer Exam Braindumps (page: 14)


You have a virtual machine that is currently configured with 2 vCPUs and 4 GB of memory. It is running out of memory. You want to upgrade the virtual machine to have 8 GB of memory.
What should you do?

  1. Rely on live migration to move the workload to a machine with more memory.
  2. Use gcloud to add metadata to the VM. Set the key to required-memory-size and the value to 8 GB.
  3. Stop the VM, change the machine type to n1-standard-8, and start the VM.
  4. Stop the VM, increase the memory to 8 GB, and start the VM.

Answer(s): D

Explanation:

In Google Compute Engine, if the predefined machine types don't meet your needs, you can create an instance with custom virtualized hardware settings. Specifically, you can create an instance with a custom number of vCPUs and a custom amount of memory, effectively using a custom machine type. Custom machine types are ideal for the following scenarios:
1. Workloads that aren't a good fit for the predefined machine types that are available to you.
2. Workloads that require more processing power or more memory but don't need all of the upgrades that are provided by the next machine type level.
In our scenario, we only need a memory upgrade. Moving to a larger predefined machine type such as n1-standard-8 would also increase the vCPU count, which we don't need, so a custom machine type is the better fit. Memory cannot be changed while the instance is running, so you must first stop the instance, change the memory, and then start it again.
Ref: https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type
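As a minimal sketch, assuming a hypothetical VM named my-vm in zone us-central1-a, the resize could look like this with gcloud:

  # The machine type cannot be changed while the VM is running, so stop it first.
  gcloud compute instances stop my-vm --zone=us-central1-a

  # Keep the 2 vCPUs but raise memory to 8 GB via a custom machine type.
  gcloud compute instances set-machine-type my-vm \
      --zone=us-central1-a --custom-cpu=2 --custom-memory=8GB

  # Start the VM again with the new configuration.
  gcloud compute instances start my-vm --zone=us-central1-a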



You have production and test workloads that you want to deploy on Compute Engine. Production VMs need to be in a different subnet than the test VMs. All the VMs must be able to reach each other over internal IP without creating additional routes. You need to set up VPC and the 2 subnets.
Which configuration meets these requirements?

  1. Create a single custom VPC with 2 subnets. Create each subnet in a different region and with a different CIDR range.
  2. Create a single custom VPC with 2 subnets. Create each subnet in the same region and with the same CIDR range.
  3. Create 2 custom VPCs, each with a single subnet. Create each subnet in a different region and with a different CIDR range.
  4. Create 2 custom VPCs, each with a single subnet. Create each subnet in the same region and with the same CIDR range.

Answer(s): A

Explanation:

When subnets are created in the same VPC network with different, non-overlapping CIDR ranges, resources in them can reach each other over internal IP automatically through the VPC's subnet routes, with no additional routes needed. Resources within a VPC network can communicate with one another using internal (private) IPv4 addresses, subject to applicable network firewall rules. Two subnets in the same VPC cannot use the same CIDR range, and VMs in two separate VPCs cannot reach each other over internal IP without extra configuration such as VPC Network Peering, so the other options do not meet the requirements.

Ref: https://cloud.google.com/vpc/docs/vpc
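A minimal sketch with gcloud, using hypothetical names (my-vpc, subnet-prod, subnet-test) and example regions and CIDR ranges:

  # Create a custom-mode VPC network.
  gcloud compute networks create my-vpc --subnet-mode=custom

  # Production subnet.
  gcloud compute networks subnets create subnet-prod \
      --network=my-vpc --region=us-central1 --range=10.0.1.0/24

  # Test subnet, with a different, non-overlapping range.
  gcloud compute networks subnets create subnet-test \
      --network=my-vpc --region=us-east1 --range=10.0.2.0/24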



You need to create an autoscaling managed instance group for an HTTPS web application. You want to make sure that unhealthy VMs are recreated.
What should you do?

  1. Create a health check on port 443 and use that when creating the Managed Instance Group.
  2. Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.
  3. In the Instance Template, add the label 'health-check'.
  4. In the Instance Template, add a startup script that sends a heartbeat to the metadata server.

Answer(s): A

Explanation:

A managed instance group recreates unhealthy VMs only when an autohealing policy is configured, and that policy requires an application-based health check. For an HTTPS web application, a health check on port 443 lets the MIG verify that the application itself is responding rather than just that the VM is running. Labels, multi-zone placement, and heartbeat scripts do not trigger autohealing.

Ref: https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs#setting_up_an_autohealing_policy
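As a sketch, assuming hypothetical names (my-https-check, my-mig, my-template) and zone us-central1-a:

  # Health check probing the application on port 443.
  gcloud compute health-checks create https my-https-check --port=443

  # Managed instance group with an autohealing policy based on that check;
  # --initial-delay gives new VMs time to boot before being health-checked.
  gcloud compute instance-groups managed create my-mig \
      --zone=us-central1-a --template=my-template --size=3 \
      --health-check=my-https-check --initial-delay=300

  # Enable autoscaling on the group.
  gcloud compute instance-groups managed set-autoscaling my-mig \
      --zone=us-central1-a --max-num-replicas=10 --target-cpu-utilization=0.6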



Your company has a Google Cloud Platform project that uses BigQuery for data warehousing. Your data science team changes frequently and has few members. You need to allow members of this team to perform queries. You want to follow Google-recommended practices.
What should you do?

  1. 1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery jobUser role to the group.
  2. 1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery dataViewer user role to the group.
  3. 1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery jobUser role to the group.
  4. 1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery dataViewer user role to the group.

Answer(s): C

Explanation:

Google-recommended practice is to grant IAM roles to groups rather than to individual user accounts, so a frequently changing team only requires group membership changes, not IAM policy changes. To run queries, the data scientists need the BigQuery Job User role; the Data Viewer role alone only grants read access and does not allow running jobs.

BigQuery Data Viewer
(roles/bigquery.dataViewer)

When applied to a table or view, this role provides permissions to:

Read data and metadata from the table or view.
This role cannot be applied to individual models or routines.

When applied to a dataset, this role provides permissions to:

Read the dataset's metadata and list tables in the dataset.
Read data and metadata from the dataset's tables.
When applied at the project or organization level, this role can also enumerate all datasets in the project. Additional roles, however, are necessary to allow the running of jobs.

Lowest-level resources where you can grant this role:

Table
View

BigQuery Job User
(roles/bigquery.jobUser)

Provides permissions to run jobs, including queries, within the project.

Lowest-level resources where you can grant this role:

Project

Ref: https://cloud.google.com/bigquery/docs/access-control#bigquery.jobUser
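As a small sketch, assuming a hypothetical project my-project and group data-scientists@example.com:

  # Grant the BigQuery Job User role to the group at the project level.
  gcloud projects add-iam-policy-binding my-project \
      --member="group:data-scientists@example.com" \
      --role="roles/bigquery.jobUser"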





