Free AZ-204 Exam Braindumps (page: 35)


HOTSPOT (Drag and Drop is not supported)
A company develops a series of mobile games. All games use a single leaderboard service.
You have the following requirements:
- Code must be scalable and allow for growth.
- Each record must consist of a playerId, gameId, score, and time played.
- When users reach a new high score, the system will save the new score using the SaveScore function below.
Each game is assigned an Id based on the series title.
You plan to store customer information in Azure Cosmos DB. The following data already exists in the database:
You develop the following code to save scores in the data store. (Line numbers are included for reference only.)
You develop the following code to query the database. (Line numbers are included for reference only.)
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:




  1. See Explanation section for answer.

Answer(s): A

Explanation:


Box 1: Yes
Create a table.
A CloudTableClient object lets you get reference objects for tables and entities. The following code creates a CloudTableClient object and uses it to create a new CloudTable object, which represents a table:
// Retrieve the storage account from the connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);

// Create the table client.
CloudTableClient tableClient = storageAccount.createCloudTableClient();

// Create the table if it doesn't exist.
String tableName = "people";
CloudTable cloudTable = tableClient.getTableReference(tableName);
cloudTable.createIfNotExists();
Box 2: No
New records are inserted with TableOperation.insert; existing records are not updated. To update existing records, TableOperation.insertOrReplace should be used instead.
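As an illustration only, a minimal sketch of the upsert pattern with the Azure Storage Table SDK for Java (the same library used in the snippet above). The table name, keys, and property names are hypothetical and simply mirror the leaderboard record described in the question:

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.table.*;

public class SaveScoreSample {
    public static void main(String[] args) throws Exception {
        CloudStorageAccount account =
                CloudStorageAccount.parse(System.getenv("STORAGE_CONNECTION_STRING"));
        CloudTable table = account.createCloudTableClient().getTableReference("scores");
        table.createIfNotExists();

        // Hypothetical leaderboard record: partitioned by gameId, keyed by playerId.
        DynamicTableEntity score = new DynamicTableEntity("game-42", "player-7");
        score.getProperties().put("score", new EntityProperty(1250));
        score.getProperties().put("timePlayed", new EntityProperty(360));

        // insertOrReplace upserts the entity; a plain insert would instead fail
        // with 409 Conflict once the player already has a saved score for this game.
        table.execute(TableOperation.insertOrReplace(score));
    }
}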
Box 3: No
Box 4: Yes


Reference:

https://docs.microsoft.com/en-us/azure/cosmos-db/table-storage-how-to-use-java



You develop and deploy a web application to Azure App Service. The application accesses data stored in an Azure Storage account. The account contains several containers, each holding blobs with large amounts of data. You deploy all Azure resources to a single region.
You need to move the Azure Storage account to a new region. You must copy all data to the new region.
What should you do first?

  1. Export the Azure Storage account Azure Resource Manager template
  2. Initiate a storage account failover
  3. Configure object replication for all blobs
  4. Use the AzCopy command line tool
  5. Create a new Azure Storage account in the current region
  6. Create a new subscription in the current region

Answer(s): A

Explanation:

To move a storage account, create a copy of your storage account in another region. Then move your data to that account by using AzCopy or another tool of your choice, and finally delete the resources in the source region.
To get started, export and then modify a Resource Manager template.
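Purely as an illustration of those two steps (the resource group name, storage account names, and SAS tokens below are placeholders, not values from the question), the export and the later copy might look like:

# Export the Resource Manager template of the resource group that holds the account.
az group export --name my-resource-group > template.json

# After deploying the modified template in the target region, copy all blob data.
azcopy copy "https://sourceaccount.blob.core.windows.net/?<source-SAS>" \
            "https://targetaccount.blob.core.windows.net/?<target-SAS>" --recursive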


Reference:

https://docs.microsoft.com/en-us/azure/storage/common/storage-account-move?tabs=azure-portal



HOTSPOT (Drag and Drop is not supported)
You are developing an application to collect the following telemetry data for delivery drivers: first name, last name, package count, item id, and current location coordinates. The app will store the data in Azure Cosmos DB.
You need to configure Azure Cosmos DB to query the data.
Which values should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

  1. See Explanation section for answer.

Answer(s): A

Explanation:


Box 1: Core (SQL)
The Core (SQL) API stores data in document format. It offers the best end-to-end experience because Microsoft has full control over the interface, service, and SDK client libraries. The SQL API supports analytics and offers performance isolation between operational and analytical workloads.
Box 2: item id
item id is a unique identifier and is suitable for the partition key.
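As a sketch of how that configuration could be applied with the Azure Cosmos DB Java SDK v4 (the endpoint variables, database name, container name, and the /itemId property path are illustrative assumptions, not part of the question):

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CosmosContainerProperties;

public class CreateTelemetryContainer {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint(System.getenv("COSMOS_ENDPOINT"))
                .key(System.getenv("COSMOS_KEY"))
                .buildClient();

        client.createDatabaseIfNotExists("telemetry");
        CosmosDatabase database = client.getDatabase("telemetry");

        // Core (SQL) API container partitioned by the item id, per Box 2.
        CosmosContainerProperties properties =
                new CosmosContainerProperties("driverTelemetry", "/itemId");
        database.createContainerIfNotExists(properties);

        client.close();
    }
}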


Reference:

https://docs.microsoft.com/en-us/azure/cosmos-db/choose-api
https://docs.microsoft.com/en-us/azure/cosmos-db/partitioning-overview



DRAG DROP (Drag and Drop is not supported)
You are implementing an Azure solution that uses Azure Cosmos DB and the latest Azure Cosmos DB SDK. You add a change feed processor to a new container instance.
You attempt to read a batch of 100 documents. The process fails when reading one of the documents. The solution must monitor the progress of the change feed processor instance on the new container as the change feed is read. You must prevent the change feed processor from retrying the entire batch when one document cannot be read.
You need to implement the change feed processor to read the documents.
Which features should you use? To answer, drag the appropriate features to the correct requirements. Each feature may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:

  1. See Explanation section for answer.

Answer(s): A

Explanation:


Box 1: Change feed estimator
You can use the change feed estimator to monitor the progress of your change feed processor instances as they read the change feed, or use the life cycle notifications to detect underlying failures.
Box 2: Dead-letter queue
To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to a dead-letter queue. This design ensures that you can keep track of unprocessed changes while still being able to continue processing future changes. The dead-letter queue might be another Cosmos container; the exact data store does not matter, as long as the unprocessed changes are persisted.
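A minimal sketch of how those two features could be wired together with the Azure Cosmos DB Java SDK v4. The database, container, and host names are placeholders, and processDocument stands in for the application's own per-document logic:

import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.CosmosClientBuilder;
import com.fasterxml.jackson.databind.JsonNode;
import java.util.Map;

public class LeaderboardChangeFeed {
    public static void main(String[] args) {
        CosmosAsyncClient client = new CosmosClientBuilder()
                .endpoint(System.getenv("COSMOS_ENDPOINT"))
                .key(System.getenv("COSMOS_KEY"))
                .buildAsyncClient();

        CosmosAsyncContainer feed = client.getDatabase("games").getContainer("scores");
        CosmosAsyncContainer leases = client.getDatabase("games").getContainer("leases");
        CosmosAsyncContainer deadLetter = client.getDatabase("games").getContainer("deadletter");

        ChangeFeedProcessor processor = new ChangeFeedProcessorBuilder()
                .hostName("worker-1")
                .feedContainer(feed)
                .leaseContainer(leases)
                .handleChanges(docs -> {
                    for (JsonNode doc : docs) {
                        try {
                            // Process each document individually so one failure
                            // does not force a retry of the whole batch.
                            processDocument(doc);
                        } catch (Exception ex) {
                            // Dead-letter queue: persist the unprocessed change and
                            // move on; production code should await this write.
                            deadLetter.createItem(doc).subscribe();
                        }
                    }
                })
                .buildChangeFeedProcessor();

        processor.start().block();

        // Change feed estimator: report how far this instance lags behind the feed.
        Map<String, Integer> lag = processor.getEstimatedLag().block();
        lag.forEach((lease, pending) ->
                System.out.println(lease + ": " + pending + " changes pending"));

        processor.stop().block();
        client.close();
    }

    private static void processDocument(JsonNode doc) {
        // Placeholder for the application's per-document processing logic.
    }
}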


Reference:

https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-processor





