Free Google Cloud Architect Professional Exam Braindumps (page: 5)


For this question, refer to the TerramEarth case study.

To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections.
What should you do?

  A. Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket.
  B. Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in us, eu, and asia. Run the ETL process using the data in the bucket.
  C. Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.
  D. Directly transfer the files to different Google Cloud Regional Storage bucket locations in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.

Answer(s): D

Explanation:

https://cloud.google.com/storage/docs/locations
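Answer D works because uploads over the Cloud Storage API can be made resumable, so a dropped cellular connection only retries the failed chunk instead of restarting the whole file, and each vehicle writes directly to the Regional bucket closest to it. As an illustration only, here is a minimal Python sketch using the google-cloud-storage client; the bucket and object names are assumptions made up for the example, not values from the case study.

```python
# Minimal sketch (assumed names) of a direct, resumable upload to a regional
# Cloud Storage bucket using the google-cloud-storage Python client.
from google.cloud import storage

def upload_telemetry(local_path: str, bucket_name: str, object_name: str) -> None:
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(object_name)

    # Setting a chunk size forces a resumable upload, so a dropped cellular
    # connection only re-sends the failed chunk rather than the whole file.
    blob.chunk_size = 8 * 1024 * 1024  # 8 MiB, a multiple of 256 KiB

    blob.upload_from_filename(local_path)

# Example: a vehicle in Europe writes to the EU regional bucket (names assumed).
upload_telemetry("telemetry-20240101.csv.gz",
                 "terramearth-telemetry-eu",
                 "raw/vehicle-1234/telemetry-20240101.csv.gz")
```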




For this question, refer to the TerramEarth case study.

TerramEarth's 20 million vehicles are scattered around the world. Based on the vehicle's location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100K miles. You want to run this job on all the data.
What is the most cost-effective way to run this job?

  A. Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.
  B. Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.
  C. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-regional bucket and use a Dataproc cluster to finish the job.
  D. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster to finish the job.

Answer(s): D

Explanation:

Multi-Regional Storage guarantees at least two geo-diverse replicas (at least 100 miles apart), which gives better remote latency and availability.

More importantly, Multi-Regional storage heavily leverages edge caching and CDNs to serve content to end users.

All this redundancy and caching means that Multi-Regional storage comes with overhead to sync and ensure consistency between geo-diverse areas. As such, it is better suited to write-once-read-many scenarios, i.e. frequently accessed ("hot") objects around the world, such as website content, streaming video, gaming, or mobile applications. For a one-off analytical job like this one, preprocessing and compressing in each region and then consolidating into a Regional bucket co-located with the Dataproc cluster is the more cost-effective choice.


Reference:

https://medium.com/google-cloud/google-cloud-storage-what-bucket-class-for-the-best-performance-5c847ac8f9f2
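For illustration, the sketch below shows one way the "move the compressed data into a single regional bucket" step from answer D could look with the google-cloud-storage Python client. The bucket names and prefix are assumptions invented for the example, not part of the case study.

```python
# Rough sketch (assumed bucket names and prefix) of consolidating the
# already-compressed per-region output into one regional bucket that the
# final Cloud Dataproc job will read from.
from google.cloud import storage

SOURCE_BUCKETS = [                      # assumed names, one per region
    "terramearth-compressed-us",
    "terramearth-compressed-eu",
    "terramearth-compressed-asia",
]
DEST_BUCKET = "terramearth-consolidated-us"   # assumed final regional bucket
PREFIX = "compressed/"                        # assumed output prefix

def consolidate() -> None:
    client = storage.Client()
    dest = client.bucket(DEST_BUCKET)
    for name in SOURCE_BUCKETS:
        src = client.bucket(name)
        for blob in client.list_blobs(src, prefix=PREFIX):
            # Server-side copy; only the compressed (much smaller) objects
            # cross regions, which is the cost saving behind this answer.
            src.copy_blob(blob, dest, new_name=f"{name}/{blob.name}")

if __name__ == "__main__":
    consolidate()
```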




For this question, refer to the TerramEarth case study.

TerramEarth has equipped unconnected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs.
What should they do?

  A. Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket.
  B. Push the telemetry data in real-time to a streaming Dataflow job that compresses the data, and store it in Google BigQuery.
  C. Push the telemetry data in real-time to a streaming Dataflow job that compresses the data, and store it in Cloud Bigtable.
  D. Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket.

Answer(s): D

Explanation:

Coldline Storage is the best choice for data that you plan to access at most once a year, due to its slightly lower availability, 90-day minimum storage duration, costs for data access, and higher per-operation costs. For example:
Cold data storage - Infrequently accessed data, such as data stored for legal or regulatory reasons, can be stored at low cost as Coldline Storage and be available when you need it.
Disaster recovery - In the event of a disaster recovery event, recovery time is key. Cloud Storage provides low-latency access to data stored as Coldline Storage.


Reference:

https://cloud.google.com/storage/docs/storage-classes
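As a rough illustration of answer D, the following Python sketch compresses an hourly snapshot on the vehicle and uploads it to a bucket whose default storage class is Coldline; the file names and bucket name are assumptions made up for the example.

```python
# Minimal sketch (assumed names and paths): compress an hourly telemetry
# snapshot, then store it in a bucket created with default storage class
# COLDLINE so the rarely-read training data stays cheap.
import gzip
import shutil
from google.cloud import storage

def archive_snapshot(snapshot_path: str, bucket_name: str, object_name: str) -> None:
    # Compress the raw snapshot before it leaves the vehicle.
    compressed_path = snapshot_path + ".gz"
    with open(snapshot_path, "rb") as src, gzip.open(compressed_path, "wb") as dst:
        shutil.copyfileobj(src, dst)

    # Upload; the object inherits the bucket's Coldline default storage class.
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    bucket.blob(object_name).upload_from_filename(compressed_path)

# Example call with assumed names.
archive_snapshot("snapshot-2024-01-01T10.csv",
                 "terramearth-telemetry-archive",   # Coldline-by-default bucket
                 "vehicle-1234/2024/01/01/10.csv.gz")
```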




For this question, refer to the TerramEarth case study.

Operational parameters such as oil pressure are adjustable on each of TerramEarth's vehicles to increase their efficiency, depending on their environmental conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field.
How can you accomplish this goal?

  A. Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically.
  B. Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically.
  C. Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically.
  D. Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.

Answer(s): B
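The idea behind answer B is to train models centrally on the captured operating data and then run inference locally on each vehicle, so the unconnected vehicles can still make automatic adjustments without connectivity. One possible (assumed, not specified by the case study) way to get a trained model onto a vehicle's onboard computer is to export it to TensorFlow Lite, as sketched below; the paths are illustrative only.

```python
# Assumed-path sketch: convert a model trained in the cloud (exported as a
# TensorFlow SavedModel) to TensorFlow Lite so the vehicle's onboard computer
# can run inference locally and adjust parameters such as oil pressure.
import tensorflow as tf

SAVED_MODEL_DIR = "exported_efficiency_model/"   # assumed local export directory

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize for small devices
tflite_model = converter.convert()

with open("efficiency_model.tflite", "wb") as f:
    f.write(tflite_model)
# The .tflite file is then shipped to vehicles, which run it offline to make
# operational adjustments automatically.
```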





