Free DBS-C01 Exam Braindumps (page: 14)


A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table. The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The Database Specialist has restored the latest backup to a new table.
To prepare the new table with identical settings, which steps should be performed? (Choose two.)

  A. Re-create global secondary indexes in the new table
  B. Define IAM policies for access to the new table
  C. Define the TTL settings
  D. Encrypt the table from the AWS Management Console or use the update-table command
  E. Set the provisioned read and write capacity

Answer(s): B, C

Explanation:

A table restored from a DynamoDB backup contains the source table's data and, by default, its global secondary indexes. Several settings, however, are not carried over and must be reconfigured manually on the new table, including IAM policies, TTL settings, auto scaling policies, CloudWatch metrics and alarms, tags, and stream settings. Defining IAM policies (B) and the TTL settings (C) are therefore the required steps here.
Reference:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
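As a sketch of the two answer steps, assuming hypothetical table, attribute, and ARN names, the request payloads look roughly like this (with boto3, the first would be passed as client.update_time_to_live(**ttl_request(...)); the second is a policy document you would attach via IAM):

```python
def ttl_request(table_name: str, ttl_attribute: str) -> dict:
    """Parameters for UpdateTimeToLive; TTL settings are not restored
    from a DynamoDB backup and must be redefined on the new table."""
    return {
        "TableName": table_name,
        "TimeToLiveSpecification": {
            "Enabled": True,
            "AttributeName": ttl_attribute,
        },
    }


def access_policy(table_arn: str) -> dict:
    """IAM policy document granting access to the restored table; policies
    scoped to the old table's ARN do not automatically cover the new one."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": table_arn,
        }],
    }


print(ttl_request("orders-restored", "expires_at")["TimeToLiveSpecification"])
```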



A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors.
Which process should the Database Specialist recommend to meet these requirements?

  A. Organize common and environment-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.
  B. Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.
  C. Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.
  D. Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.

Answer(s): A

Explanation:

Systems Manager Parameter Store supports organizing parameters hierarchically (for example, by environment), and CloudFormation can resolve them at deployment time through dynamic references. A single parameterized template therefore standardizes the core components, while environment-specific values are maintained once in Parameter Store, minimizing rework caused by configuration errors.
Reference:

https://aws.amazon.com/blogs/mt/aws-cloudformation-signed-sealed-and-deployed/
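A minimal sketch of how option A fits together, assuming a hypothetical /app/environment/setting parameter hierarchy: CloudFormation dynamic references embed Parameter Store values using the {{resolve:ssm:...}} syntax, so the parameter path (and thus the environment) can be assembled from the environment name passed to the stack.

```python
def ssm_parameter_path(app: str, env: str, setting: str) -> str:
    """Hierarchical Parameter Store path, e.g. /inventory/dev/table-read-capacity."""
    return f"/{app}/{env}/{setting}"


def dynamic_reference(path: str, version: int = 1) -> str:
    """CloudFormation dynamic-reference string that resolves an SSM
    String parameter at deployment time."""
    return f"{{{{resolve:ssm:{path}:{version}}}}}"


ref = dynamic_reference(ssm_parameter_path("inventory", "dev", "table-read-capacity"))
print(ref)  # {{resolve:ssm:/inventory/dev/table-read-capacity:1}}
```

Only the environment segment of the path changes between deployments, so the same template serves every environment.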



A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. Tests were run on the database after work hours, generating additional database logs. Free storage on the RDS DB instance is now low because of these additional logs.
What should the company do to address this space constraint issue?

  A. Log in to the host and run the rm $PGDATA/pg_logs/* command
  B. Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted
  C. Create a ticket with AWS Support to have the logs deleted
  D. Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs

Answer(s): B

Explanation:

Shell access to the underlying host is not available on Amazon RDS, so the log files cannot be removed directly. The rds.log_retention_period parameter controls how long (in minutes) RDS retains database logs; setting it to 1440 keeps logs for 24 hours, after which RDS deletes older logs automatically and frees the storage.
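As an illustration (the parameter group name is hypothetical), the ModifyDBParameterGroup request that applies this setting would be shaped like this; with boto3 it would be passed as client.modify_db_parameter_group(**request):

```python
def log_retention_request(parameter_group: str, minutes: int = 1440) -> dict:
    """Parameters for ModifyDBParameterGroup: retain RDS database logs
    for `minutes` (1440 minutes = 24 hours); logs older than that are
    then deleted automatically."""
    return {
        "DBParameterGroupName": parameter_group,
        "Parameters": [{
            "ParameterName": "rds.log_retention_period",
            "ParameterValue": str(minutes),
            "ApplyMethod": "immediate",
        }],
    }


req = log_retention_request("oltp-postgres-params")
print(req["Parameters"][0]["ParameterValue"])  # 1440
```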



A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic.

What should a Database Specialist recommend for this user?

  A. Create an Amazon DynamoDB table with provisioned capacity mode
  B. Create an Amazon DocumentDB cluster
  C. Create an Amazon DynamoDB table with on-demand capacity mode
  D. Create an Amazon Aurora Serverless DB cluster

Answer(s): C

Explanation:

DynamoDB is a fully managed key-value and document database, so it offloads the operational burden of running a distributed database. On-demand capacity mode charges per request and accommodates unpredictable traffic without capacity planning, making it the cost-effective fit here.
Reference:

https://aws.amazon.com/dynamodb/
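A sketch of the CreateTable parameters for on-demand mode (table and key names are hypothetical): setting BillingMode to PAY_PER_REQUEST means no ProvisionedThroughput is supplied at all, which is the difference from option A.

```python
def on_demand_table_request(table_name: str) -> dict:
    """Parameters for CreateTable with on-demand (pay-per-request)
    billing; DynamoDB scales to the workload, so no read/write
    capacity units are declared."""
    return {
        "TableName": table_name,
        "BillingMode": "PAY_PER_REQUEST",
        "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
    }


req = on_demand_table_request("sessions")
print(req["BillingMode"])  # PAY_PER_REQUEST
```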





