Free CLOUD-DIGITAL-LEADER Exam Braindumps (page: 51)


Your customer is moving to Google Cloud. They have many teams, each working on many projects.
How should they organize resources?

  1. Let each team have one shared Folder with multiple Projects within it so that there is a separation of concerns.
  2. Let each Project have one Folder so that there is a clear separation of concerns.
  3. Let each team have an Organization so that they can entirely manage themselves with their own identity.
  4. Let each team have one shared Project so that it is easy to manage.

Answer(s): A

Explanation:

The recommended approach is to create one folder per team or department and let each team manage the projects inside its own folder.
-> Sharing a single project would cause conflicts over resources, billing, separation of concerns, etc. -> One folder per project is an unnecessary extra layer of abstraction/grouping. -> A company's teams and projects should ideally be managed centrally under a single Organization, so giving each team its own Organization is not recommended.



Considering the different Google Cloud storage options, which of the following correctly matches an option to its real-world use cases?

  1. Cloud Storage: images, large media files, backups.
  2. Cloud Bigtable: AdTech, financial, and IoT data.
  3. Cloud SQL: user credentials, customer orders.
  4. All of the above.

Answer(s): D

Explanation:

Cloud Datastore is best for semi-structured application data, such as data used in App Engine applications. Bigtable is best for analytical workloads with heavy read/write volume, such as AdTech, financial, or IoT data. Cloud Storage is best for structured and unstructured binary or object data, such as images, large media files, and backups. Cloud SQL is best for web frameworks and existing applications, for example storing user credentials and customer orders. Cloud Spanner is best for large-scale database applications larger than two terabytes, for example financial trading and e-commerce use cases. Depending on your application, you might use one or several of these services to get the job done.
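The service-to-use-case mapping described above can be sketched as a simple lookup table. This is a hedged illustration in Python: the service names and example use cases come from the explanation itself, not from any Google API.

```python
# Illustrative mapping of Google Cloud storage services to the example
# use cases named in the explanation above. Documentation-style data only.
STORAGE_USE_CASES = {
    "Cloud Storage": ["images", "large media files", "backups"],
    "Cloud Bigtable": ["AdTech", "financial data", "IoT data"],
    "Cloud SQL": ["user credentials", "customer orders"],
    "Cloud Datastore": ["semi-structured application data"],
    "Cloud Spanner": ["large-scale (>2 TB) databases", "financial trading", "e-commerce"],
}

def services_for(use_case: str) -> list[str]:
    """Return every service whose listed use cases mention the given term."""
    term = use_case.lower()
    return [svc for svc, cases in STORAGE_USE_CASES.items()
            if any(term in c.lower() for c in cases)]
```

For example, `services_for("backups")` returns `["Cloud Storage"]`, matching option 1 of the question.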



The government has mandated that companies in a particular sector of healthcare must retain all the data they collect for a period of 10 years in case an audit needs to be done. Your client, who is in that industry, must follow these regulations. In addition, your client wants to analyze the data quite frequently in the first year. They also don't want to be liable for any data beyond year 10.
What would you recommend to your customer?

  1. Use Cloud Storage with Nearline storage in year one and Coldline storage thereafter. Use Object Lifecycle Management to move between storage classes and delete the objects after 10 years.
  2. Use Cloud Storage with Standard storage in year one and Coldline storage thereafter. Set a Cloud Scheduler trigger at 1 year to change storage classes and at 10 years to delete the data.
  3. Use Cloud Storage with Standard storage in year one and Archive storage thereafter. Use Object Lifecycle Management to move between storage classes and delete the objects after 10 years.
  4. Use Cloud Storage with Standard storage in year one and Coldline storage thereafter. Set up Cloud Tasks to trigger at 1 year to change storage classes and at 10 years to delete the data.

Answer(s): C

Explanation:

Cloud Storage supports Object Lifecycle Management. To support common use cases like setting a Time to Live (TTL) for objects, retaining noncurrent versions of objects, or "downgrading" the storage class of objects to help manage costs, Cloud Storage offers the Object Lifecycle Management feature.
Standard storage is recommended for frequently accessed data, and Archive for data accessed less than once a year.

Nearline, Coldline, and Archive offer ultra-low-cost, highly durable, highly available archival storage. For data accessed less than once a year, Archive is a cost-effective option for the long-term preservation of data. Coldline is ideal for cold storage: data your business expects to touch less than once a quarter. For warmer storage, choose Nearline: data you expect to access less than once a month, but possibly multiple times throughout the year.
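As a sketch of how such a policy might look, here is a lifecycle configuration for the audit scenario, built as plain Python dictionaries in the JSON shape Cloud Storage's lifecycle configuration uses. Treat the exact field names as an assumption to verify against the Cloud Storage documentation before use.

```python
import json

# Lifecycle policy for the scenario above:
#  - after 1 year (365 days), move objects from Standard to Archive storage
#  - after 10 years (3650 days), delete objects so no data is retained
# The structure mirrors the Cloud Storage lifecycle configuration format.
lifecycle_config = {
    "rule": [
        {
            "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
            "condition": {"age": 365},
        },
        {
            "action": {"type": "Delete"},
            "condition": {"age": 3650},
        },
    ]
}

print(json.dumps(lifecycle_config, indent=2))
```

A configuration like this can typically be applied with `gsutil lifecycle set config.json gs://BUCKET_NAME` or through the Cloud Console, after which Cloud Storage enforces the transitions and deletions automatically, with no scheduler or task queue involved.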



You are discussing scaling requirements with a gaming company.
When the game launches, they are expecting incoming data surges of 2 million users or more during weekends and holidays. Their on-premises systems have had issues scaling, and they want your advice on solving the issue.
What do you recommend?

  1. Either Compute Engine VMs or Kubernetes nodes will work, but it is better to keep a buffer for an extra 2 million users.
  2. Deploy Pub/Sub to ingest the data; it will scale to absorb demand and pass it on to later stages.
  3. Allocate Compute Engine VMs estimating 80% of the capacity needed for 2 million users.
  4. Allocate Kubernetes nodes estimating 80% of the capacity needed for 2 million users.

Answer(s): B

Explanation:

When there are huge surges in demand, it is preferable to use serverless technologies that scale automatically. In this case, the key concern is data ingestion: Pub/Sub is a serverless service that expands to absorb such demand and decouples ingestion from the downstream processing stages.
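The decoupling that makes this work can be illustrated with a minimal in-memory sketch using Python's standard `queue` module. This is an analogy for Pub/Sub's publish/subscribe buffering, not the google-cloud-pubsub API: a fast publisher enqueues a surge of events while a subscriber drains them at its own pace.

```python
import queue
import threading

# A stand-in "topic": the publisher enqueues events as fast as they arrive,
# while the subscriber drains the queue at its own pace. Pub/Sub plays the
# same buffering role at scale, so a surge of publishes does not overwhelm
# the downstream stages.
topic = queue.Queue()
processed = []

def publisher(n_events: int) -> None:
    for i in range(n_events):
        topic.put({"user_id": i, "event": "login"})

def subscriber() -> None:
    while True:
        msg = topic.get()
        if msg is None:           # sentinel: no more events
            break
        processed.append(msg)     # downstream processing stage

pub = threading.Thread(target=publisher, args=(10_000,))
sub = threading.Thread(target=subscriber)
pub.start(); sub.start()
pub.join()
topic.put(None)                   # signal the subscriber to stop
sub.join()
print(len(processed))             # every event absorbed and handed off
```

The publisher never blocks waiting for the consumer, which is the property that lets a managed message bus absorb a launch-day spike that would overwhelm fixed-size VM capacity.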





