Free Professional Cloud Security Engineer Exam Braindumps (page: 12)


Your company runs a website that will store PII on Google Cloud Platform. To comply with data privacy regulations, this data may only be stored for a specific period and must be fully deleted once that period ends. Data that has not yet reached the end of its retention period must not be deleted. You want to automate the process of complying with this regulation.

What should you do?

  1. Store the data in a single Persistent Disk, and delete the disk at expiration time.
  2. Store the data in a single BigQuery table and set the appropriate table expiration time.
  3. Store the data in a Cloud Storage bucket, and configure the bucket's Object Lifecycle Management feature.
  4. Store the data in a single BigTable table and set an expiration time on the column families.

Answer(s): C

Explanation:

"To support common use cases like setting a Time to Live (TTL) for objects, retaining noncurrent versions of objects, or "downgrading" storage classes of objects to help manage costs, Cloud Storage offers the Object Lifecycle Management feature. This page describes the feature as well as the options available when using it. To learn how to enable Object Lifecycle Management, and for examples of lifecycle policies, see Managing Lifecycles." https://cloud.google.com/storage/docs/lifecycle
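The TTL behavior described above can be expressed as a bucket lifecycle policy. A minimal sketch, assuming a hypothetical 365-day retention period (substitute the period your regulation requires):

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    }
  ]
}
```

Saved as `lifecycle.json`, this can be applied with `gsutil lifecycle set lifecycle.json gs://BUCKET_NAME`. Because the `Delete` action fires only when the `age` condition is met, objects younger than the retention period are left untouched, which satisfies the requirement that unexpired data not be deleted.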



A DevOps team will create a new container to run on Google Kubernetes Engine. As the application will be internet-facing, they want to minimize the attack surface of the container.

What should they do?

  1. Use Cloud Build to build the container images.
  2. Build small containers using small base images.
  3. Delete non-used versions from Container Registry.
  4. Use a Continuous Delivery tool to deploy the application.

Answer(s): B

Explanation:

Small containers generally present a smaller attack surface than containers built from large base images, because they ship fewer packages and binaries that could contain exploitable vulnerabilities. https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-how-and-why-to-build-small-container-images
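A common way to achieve this is a multi-stage build: compile in a full toolchain image, then copy only the resulting binary into a minimal base. A sketch, assuming a Go application (the paths and image tags are illustrative, not from the source):

```dockerfile
# Build stage: full toolchain image (discarded after the build)
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: distroless base with no shell or package manager
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains only the static binary and minimal runtime files, so there is no shell, package manager, or extra library for an attacker to leverage.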



While migrating your organization's infrastructure to GCP, a large number of users will need to access the GCP Console. The Identity Management team already has a well-established way to manage your users and wants to keep using your existing Active Directory or LDAP server along with the existing SSO password.

What should you do?

  1. Manually synchronize the data in Google domain with your existing Active Directory or LDAP server.
  2. Use Google Cloud Directory Sync to synchronize the data in Google domain with your existing Active Directory or LDAP server.
  3. Users sign in directly to the GCP Console using the credentials from your on-premises Kerberos compliant identity provider.
  4. Users sign in using OpenID (OIDC) compatible IdP, receive an authentication token, then use that token to log in to the GCP Console.

Answer(s): B

Explanation:

https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-configuring-single-sign-on


Reference:

https://cloud.google.com/blog/products/identity-security/using-your-existing-identity-management-system-with-google-cloud-platform



Your company is using GSuite and has developed an application intended for internal use on Google App Engine. You need to make sure that an external user cannot gain access to the application even when an employee's password has been compromised.

What should you do?

  1. Enforce 2-factor authentication in GSuite for all users.
  2. Configure Cloud Identity-Aware Proxy for the App Engine Application.
  3. Provision user passwords using GSuite Password Sync.
  4. Configure Cloud VPN between your private network and GCP.

Answer(s): A

Explanation:

https://docs.google.com/document/d/11o3e14tyhnT7w45Q8-r9ZmTAfj2WUNUpJPZImrxm_F4/edit?usp=sharing
https://support.google.com/a/answer/175197?hl=en





