Free Professional Cloud Security Engineer Exam Braindumps (page: 17)


A customer wants to deploy a large number of 3-tier web applications on Compute Engine.

How should the customer ensure authenticated network separation between the different tiers of the application?

  1. Run each tier in its own Project, and segregate using Project labels.
  2. Run each tier with a different Service Account (SA), and use SA-based firewall rules.
  3. Run each tier in its own subnet, and use subnet-based firewall rules.
  4. Run each tier with its own VM tags, and use tag-based firewall rules.

Answer(s): B

Explanation:

"Isolate VMs using service accounts when possible. [...] Even though it is possible to use tags for target filtering in this manner, we recommend that you use service accounts where possible. Target tags are not access-controlled and can be changed by someone with the instanceAdmin role while VMs are in service. Service accounts are access-controlled, meaning that a specific user must be explicitly authorized to use a service account. There can be only one service account per instance, whereas there can be multiple tags. Also, service accounts assigned to a VM can only be changed when the VM is stopped."
https://cloud.google.com/solutions/best-practices-vpc-design#isolate-vms-service-accounts
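As an illustration of service-account-based firewall rules, a `gcloud` command along the following lines restricts traffic between two tiers (the service-account emails, network name, and port below are hypothetical placeholders, not values from the question):

```shell
# Allow only instances running as the web tier's service account to reach
# instances running as the app tier's service account on TCP 8080.
# All names and emails below are illustrative placeholders.
gcloud compute firewall-rules create allow-web-to-app \
    --network=prod-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8080 \
    --source-service-accounts=web-tier@my-project.iam.gserviceaccount.com \
    --target-service-accounts=app-tier@my-project.iam.gserviceaccount.com
```

Because the rule is keyed to service accounts rather than tags, only a principal explicitly authorized to act as those service accounts can place a VM inside the rule's scope.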



A manager wants to start retaining security event logs for 2 years while minimizing costs. You write a filter to select the appropriate log entries.

Where should you export the logs?

  1. BigQuery datasets
  2. Cloud Storage buckets
  3. StackDriver logging
  4. Cloud Pub/Sub topics

Answer(s): B


Reference:

https://cloud.google.com/logging/docs/exclusions
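For context, exporting filtered log entries to a Cloud Storage bucket is done with a logging sink. A minimal sketch (bucket name and filter are hypothetical placeholders):

```shell
# Create a sink that exports matching security event logs to a Cloud Storage
# bucket for long-term, low-cost retention.
# The bucket name and log filter below are illustrative placeholders.
gcloud logging sinks create security-events-sink \
    storage.googleapis.com/my-security-logs-bucket \
    --log-filter='logName:"cloudaudit.googleapis.com"'
```

After creating the sink, grant its writer identity (printed by the command) write access to the bucket; a bucket lifecycle or retention policy can then enforce the 2-year retention window.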



For compliance reasons, an organization needs to ensure that in-scope PCI Kubernetes Pods reside on "in-scope" Nodes only. These Nodes can only contain the "in-scope" Pods.

How should the organization achieve this objective?

  1. Add a nodeSelector field to the pod configuration to only use the Nodes labeled inscope: true.
  2. Create a node pool with the label inscope: true and a Pod Security Policy that only allows the Pods to run on Nodes with that label.
  3. Place a taint on the Nodes with the label inscope: true and effect NoSchedule and a toleration to match in the Pod configuration.
  4. Run all in-scope Pods in the namespace "in-scope-pci".

Answer(s): A

Explanation:

nodeSelector is the simplest recommended form of node selection constraint. You add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have; Kubernetes then schedules the Pod only onto nodes that have each of those labels. (https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)

Tolerations, by contrast, are applied to Pods and allow the scheduler to place them on nodes with matching taints. Tolerations permit scheduling but do not guarantee it, because the scheduler also evaluates other parameters as part of its function, so option C alone does not ensure the Pods land only on in-scope Nodes. (https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)
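A minimal sketch of the nodeSelector approach, applied via `kubectl` (the Pod name, image, and node-pool label value are hypothetical placeholders; only the `inscope: "true"` label comes from the question):

```shell
# Schedule a Pod only onto Nodes carrying the label inscope=true.
# The Pod name and container image below are illustrative placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pci-workload
spec:
  nodeSelector:
    inscope: "true"
  containers:
  - name: app
    image: gcr.io/my-project/pci-app:latest
EOF
```

On GKE, the label would typically be applied to a dedicated node pool at creation time so every Node in that pool carries it automatically.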



In an effort for your company's messaging app to comply with FIPS 140-2, a decision was made to use GCP compute and network services. The messaging app architecture includes a Managed Instance Group (MIG) that controls a cluster of Compute Engine instances. The instances use Local SSDs for data caching and UDP for instance-to-instance communications. The app development team is willing to make any changes necessary to comply with the standard.

Which options should you recommend to meet the requirements?

  1. Encrypt all cache storage and VM-to-VM communication using the BoringCrypto module.
  2. Set Disk Encryption on the Instance Template used by the MIG to customer-managed key and use BoringSSL for all data transit between instances.
  3. Change the app instance-to-instance communications from UDP to TCP and enable BoringSSL on clients' TLS connections.
  4. Set Disk Encryption on the Instance Template used by the MIG to Google-managed Key and use BoringSSL library on all instance-to-instance communications.

Answer(s): A

Explanation:

"Google Cloud Platform uses a FIPS 140-2 validated encryption module called BoringCrypto (certificate 3318) in our production environment. This means that both data in transit to the customer and between data centers, and data at rest are encrypted using FIPS 140-2 validated encryption. The module that achieved FIPS 140-2 validation is part of our BoringSSL library."
https://cloud.google.com/security/compliance/fips-140-2-validated





