Free Professional Data Engineer Exam Braindumps (page: 20)

Page 20 of 68

If you want to create a machine learning model that predicts the price of a particular stock based on its recent price history, what type of estimator should you use?

  A. Unsupervised learning
  B. Regressor
  C. Classifier
  D. Clustering estimator

Answer(s): B

Explanation:

Regression is the supervised learning task for modeling and predicting continuous, numeric variables. Examples include predicting real-estate prices, stock price movements, or student test scores.
Classification is the supervised learning task for modeling and predicting categorical variables. Examples include predicting employee churn, email spam, financial fraud, or student letter grades.
Clustering is an unsupervised learning task for finding natural groupings of observations (i.e., clusters) based on the inherent structure within your dataset. Examples include customer segmentation, grouping similar items in e-commerce, and social network analysis.
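Because the target (a stock's next price) is a continuous number, a regressor is the right estimator. As a toy illustration (not from the source), the following sketch fits an ordinary least-squares regressor in plain NumPy, using the previous three prices as features and the next price as the target; the price series is invented for the example.

```python
import numpy as np

# Invented toy price history (a simple upward trend).
prices = np.array([10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5])

# Build lagged features: each row is a window of 3 consecutive prices,
# and the target is the price that follows that window.
window = 3
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

# Fit ordinary least squares with an intercept column appended.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the price that follows the most recent window.
next_price = np.array([*prices[-window:], 1.0]) @ coef
```

The same idea scales up to real regressors (e.g. gradient-boosted trees or neural networks); the key point is that the output is numeric, not a class label.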


Reference:

https://elitedatascience.com/machine-learning-algorithms



To run a TensorFlow training job on your own computer using Cloud Machine Learning Engine, what would your command start with?

  A. gcloud ml-engine local train
  B. gcloud ml-engine jobs submit training
  C. gcloud ml-engine jobs submit training local
  D. You can't run a TensorFlow program on your own computer using Cloud ML Engine.

Answer(s): A

Explanation:

gcloud ml-engine local train runs a Cloud ML Engine training job locally. This command runs the specified module in an environment similar to that of a live Cloud ML Engine training job.
This is especially useful for testing distributed models, as it lets you validate that you are properly interacting with the Cloud ML Engine cluster configuration.
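A typical invocation might look like the sketch below. The `--module-name` and `--package-path` flags are part of the documented command; the module name `trainer.task`, the `trainer/` directory, and the flags after `--` (which are passed through to your own program) are placeholders for illustration.

```shell
# Run the training module locally before submitting a cloud job.
gcloud ml-engine local train \
  --module-name trainer.task \
  --package-path trainer/ \
  -- \
  --train-files data/train.csv \
  --job-dir output/
```

Once the local run succeeds, the equivalent cloud job is submitted with `gcloud ml-engine jobs submit training` (option B), which is why that answer is wrong for a local run.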


Reference:

https://cloud.google.com/sdk/gcloud/reference/ml-engine/local/train



When you design a Google Cloud Bigtable schema, it is recommended that you _________.

  A. Avoid schema designs that are based on NoSQL concepts
  B. Create schema designs that are based on a relational database design
  C. Avoid schema designs that require atomicity across rows
  D. Create schema designs that require atomicity across rows

Answer(s): C

Explanation:

All operations in Cloud Bigtable are atomic only at the row level. For example, if you update two rows in a table, it is possible that one row is updated successfully while the other update fails. For this reason, avoid schema designs that require atomicity across rows.
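The design consequence can be shown with a toy illustration (plain Python dictionaries standing in for tables, not the Bigtable client): because only single-row writes are atomic, related fields that must change together should live in one row as separate column qualifiers, rather than being spread across rows.

```python
# Anti-pattern: balance and status for one account live in two rows,
# so updating both requires two writes that are NOT atomic together.
bad_design = {
    "account#123#balance": {"cf:value": 100},
    "account#123#status": {"cf:value": "active"},
}

# Preferred: one row per account. A single row mutation can update
# both cells, and single-row mutations are atomic in Bigtable.
good_design = {
    "account#123": {"cf:balance": 100, "cf:status": "active"},
}
```

With the preferred layout, a single-row write keeps balance and status consistent; with the anti-pattern, a failure between the two writes leaves the account in a half-updated state.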


Reference:

https://cloud.google.com/bigtable/docs/schema-design#row-keys



The _________ for Cloud Bigtable makes it possible to use Cloud Bigtable in a Cloud Dataflow pipeline.

  A. Cloud Dataflow connector
  B. Dataflow SDK
  C. BigQuery API
  D. BigQuery Data Transfer Service

Answer(s): A

Explanation:

The Cloud Dataflow connector for Cloud Bigtable makes it possible to use Cloud Bigtable in a Cloud Dataflow pipeline. You can use the connector for both batch and streaming operations.
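As a hedged sketch of what using the connector looks like from the Apache Beam Python SDK (which exposes it as the `bigtableio.WriteToBigTable` transform): the project, instance, and table IDs below are placeholders, and the sample data is invented.

```python
import apache_beam as beam
from apache_beam.io.gcp.bigtableio import WriteToBigTable
from google.cloud.bigtable.row import DirectRow

def to_row(kv):
    # Build a Bigtable DirectRow mutation from a (key, value) pair.
    key, value = kv
    row = DirectRow(row_key=key.encode())
    row.set_cell("cf1", b"value", value.encode())
    return row

with beam.Pipeline() as p:
    (p
     | beam.Create([("row-1", "hello"), ("row-2", "world")])
     | beam.Map(to_row)
     | WriteToBigTable(project_id="my-project",    # placeholder
                       instance_id="my-instance",  # placeholder
                       table_id="my-table"))       # placeholder
```

The same connector supports both batch and streaming pipelines, as the explanation notes; the Java SDK offers an equivalent HBase-based connector.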


Reference:

https://cloud.google.com/bigtable/docs/dataflow-hbase





