Free CLOUD-DIGITAL-LEADER Exam Braindumps (page: 18)


Your team is publishing research results and needs to make large amounts of data available to other researchers within the professional community and the public at minimum cost.

How should you host the data?

  1. Use a Cloud Storage bucket and enable "Requester Pays."
  2. Use a Cloud Storage bucket and provide Signed URLs for the data files.
  3. Use a Cloud Storage bucket and set up a Cloud Interconnect connection to allow access to the data.
  4. Host the data on-premises and set up a Cloud Interconnect connection to allow access to the data.

Answer(s): A

Explanation:

Enabling Requester Pays is useful when you have a large amount of data that you want to make available to users without being charged for their access to it: the requester's project is billed for the download and request charges instead of yours.

Reference:

https://cloud.google.com/storage/docs/requester-pays
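As a sketch of how this is configured with the gcloud CLI (BUCKET_NAME and REQUESTER_PROJECT are placeholders, and the commands assume an existing bucket and appropriate permissions):

```shell
# Enable Requester Pays on an existing bucket.
gcloud storage buckets update gs://BUCKET_NAME --requester-pays

# Make the objects publicly readable; requesters pay for their own downloads.
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
    --member=allUsers --role=roles/storage.objectViewer

# A requester must then name a billing project when accessing the data.
gcloud storage cp gs://BUCKET_NAME/dataset.csv . --billing-project=REQUESTER_PROJECT
```

With this in place, the bucket owner still pays for storage, but network egress and operation charges are billed to the requester's project.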



Your company needs to segment Google Cloud resources used by each team from the others. The teams' efforts are changing frequently, and you need to reduce operational risk and maintain cost visibility.
Which approach does Google recommend?

  1. One project per team.
  2. One organization per team.
  3. One project that contains all of each team's resources.
  4. One top-level folder per team.

Answer(s): A


Reference:

https://cloud.google.com/security/infrastructure/design

The teams need to be segmented into separate projects so that each team's resources are isolated from the others and the cost each team incurs remains visible. Projects are lightweight to create and delete, which suits teams whose efforts change frequently.
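A minimal sketch of the project-per-team approach with the gcloud CLI (the project ID, FOLDER_ID, label values, and BILLING_ACCOUNT_ID are all placeholders):

```shell
# Create one project per team; labels support per-team cost reporting.
gcloud projects create acme-team-frontend --folder=FOLDER_ID \
    --labels=team=frontend,cost-center=cc101

# Link the project to billing; Cloud Billing reports and exports can then
# be filtered by project and label to keep each team's spend visible.
gcloud billing projects link acme-team-frontend \
    --billing-account=BILLING_ACCOUNT_ID
```

Because IAM policies, quotas, and billing all attach at the project boundary, deleting a team's project cleanly removes its resources, which reduces operational risk.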



How do Migrate for Compute Engine and Migrate for Anthos differ?

  1. Unlike Migrate for Anthos, Migrate for Compute Engine assumes that the migration source is VMware vSphere.
  2. Migrate for Compute Engine charges for ingress, but Migrate for Anthos does not.
  3. Migrate for Compute Engine is closed source, and Migrate for Anthos is open source.
  4. Migrate for Anthos migrates to containers, and Migrate for Compute Engine migrates to virtual machines.

Answer(s): D


Reference:

https://cloud.google.com/migrate/anthos
Migrate workloads to Compute Engine with Migrate for Compute Engine. Migrate from Compute Engine to containers with Migrate for Anthos and GKE.
This method makes sense, for instance, in cases where you want to conduct a data-center migration and migrate all workloads into Compute Engine, and only at a second stage selectively modernize suitable workloads to containers.

https://cloud.google.com/migrate/containers/docs/architecture



An IoT platform provides services to home security systems. It has more than a million customers, each with many home devices. Burglaries and child safety issues are major concerns for the platform's customers, so the platform has to respond in near real time.
What could be a typical data pipeline used to support this platform on Google Cloud?

  1. Cloud Pub/Sub, Cloud Dataflow, Data Studio
  2. Cloud Functions, Cloud Dataproc, Looker
  3. Cloud Pub/Sub, Cloud Dataflow, BigQuery
  4. Cloud Functions, Cloud Dataproc, BigQuery

Answer(s): C

Explanation:

=> Cloud Pub/Sub - the best endpoint for ingesting large amounts of data. It scales as required, streams data to downstream systems, and can also work with intermittently available backends.

=> Cloud Dataflow - supports streaming data and is therefore the appropriate option for processing the ingested events.

=> BigQuery - also supports streaming inserts, so near real-time analytics on the data is possible.

=> Data Studio - Data Studio and Looker are visualization tools; they do not provide the analytical storage and query layer needed here.

=> Cloud Functions - a useful serverless endpoint. However, Pub/Sub is better in this case because it retains messages for a configurable period and retries delivery if they could not be delivered the first time.

=> Cloud Dataproc - designed for Hadoop/Spark workloads and not a good fit here.
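The ingest-process-store shape of this pipeline can be illustrated with a small local analogy using only the Python standard library. This is not the Google Cloud client API; the queue, worker thread, and list below merely stand in for Pub/Sub, Dataflow, and BigQuery, and the event fields are invented for illustration:

```python
import queue
import threading

# Local stand-ins for the managed services (an analogy only):
#   queue.Queue   ~ Pub/Sub topic (ingestion buffer)
#   worker thread ~ Dataflow job (streaming transform)
#   results list  ~ BigQuery table (analytical sink)

events = queue.Queue()   # "Pub/Sub": devices publish raw events here
table = []               # "BigQuery": processed rows land here
SENTINEL = None          # marks end of the stream in this toy example

def dataflow_worker():
    """Consume raw events, enrich them, and write rows to the sink."""
    while True:
        event = events.get()
        if event is SENTINEL:
            break
        # "Dataflow" transform: flag events that need an urgent response.
        row = {**event, "urgent": event["type"] in ("intrusion", "child_alert")}
        table.append(row)

worker = threading.Thread(target=dataflow_worker)
worker.start()

# Devices "publish" events to the ingestion buffer.
events.put({"device": "door-1", "type": "intrusion"})
events.put({"device": "cam-2", "type": "heartbeat"})
events.put(SENTINEL)
worker.join()

urgent = [r for r in table if r["urgent"]]
print(len(table), len(urgent))  # 2 rows stored, 1 urgent
```

The decoupling shown here is the point of the real architecture: devices publish without waiting for processing, the transform scales independently, and the sink supports queries over both fresh and historical events.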





