Free AWS-Certified-Big-Data-Specialty Exam Braindumps

Kinesis Streams supports re-sharding which enables you to adjust the number of shards in your stream in order to adapt to changes in the rate of data flow through the stream. Which statements are true about re-sharding?

  A. In a shard split, you divide a single shard into two shards.
  B. In a shard merge, you combine two shards into a single shard.
  C. You can merge more than two shards in a single operation.
  D. You cannot split a shard into more than two shards in a single operation.

Answer(s): A,B,D

Explanation:

Kinesis Streams supports re-sharding which enables you to adjust the number of shards in your stream in order to adapt to changes in the rate of data flow through the stream.
In a shard split, you divide a single shard into two shards. In a shard merge, you combine two shards into a single shard. You cannot split a shard into more than two shards in a single operation, and you cannot merge more than two shards in a single operation.


Reference:

http://docs.aws.amazon.com/streams/latest/dev/working-with-kinesis.html
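The split operation above can be sketched in a few lines. Each Kinesis shard covers a range of 128-bit hash keys, and an even split sets the child shard's starting hash key to the midpoint of the parent's range, which is then passed to the SplitShard API. The stream and shard names below are hypothetical placeholders; this is an illustration, not the AWS SDK itself.

```python
# Sketch: computing the NewStartingHashKey for an even shard split.
# A Kinesis shard owns a contiguous range of 128-bit hash keys; splitting
# it evenly means the second child starts at the midpoint of that range.

def split_hash_key(starting_hash_key: str, ending_hash_key: str) -> str:
    """Return the midpoint of a shard's hash key range as a decimal string."""
    start, end = int(starting_hash_key), int(ending_hash_key)
    return str((start + end) // 2)

# Example: a shard covering the entire 128-bit hash key space.
full_range_end = str(2**128 - 1)
midpoint = split_hash_key("0", full_range_end)
print(midpoint)  # roughly half of the hash key space

# With boto3 this value would be supplied to the SplitShard call, e.g.:
# kinesis.split_shard(StreamName="my-stream",
#                     ShardToSplit="shardId-000000000000",
#                     NewStartingHashKey=midpoint)
```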



By default, how long does Amazon Kinesis Streams store records? (choose one)

  A. 24 hours
  B. 72 hours
  C. 48 hours
  D. 12 hours

Answer(s): A

Explanation:

By default, Amazon Kinesis Streams stores records for 24 hours.


Reference:

http://docs.aws.amazon.com/streams/latest/dev/working-with-kinesis.html
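As an illustration of the 24-hour default, a consumer can check whether a record's approximate arrival time still falls inside the retention window. The helper and timestamps below are hypothetical; retention is enforced by Kinesis itself, not by client code.

```python
from datetime import datetime, timedelta, timezone

# Default retention period for a Kinesis stream is 24 hours; records
# older than the retention period are no longer readable from the stream.
DEFAULT_RETENTION = timedelta(hours=24)

def within_retention(arrival_time: datetime, now: datetime,
                     retention: timedelta = DEFAULT_RETENTION) -> bool:
    """Return True if a record that arrived at arrival_time is still retained."""
    return now - arrival_time <= retention

now = datetime(2017, 1, 2, 12, 0, tzinfo=timezone.utc)
print(within_retention(now - timedelta(hours=23), now))  # True: inside 24 h
print(within_retention(now - timedelta(hours=25), now))  # False: expired
```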



A Snowball client transfers data to a Snowball appliance using the ____ command, with ____ as the root directory identifier in the destination path.

  A. cp; s3://
  B. scp; /usr/sbin/

Answer(s): A

Explanation:

To transfer data to a Snowball appliance using the Snowball client from a command prompt, you run the copy command with two paths specified, the source and the destination: snowball cp [options] [path/to/data/source] s3://[path/to/data/destination]


Reference:

http://docs.aws.amazon.com/snowball/latest/ug/transfer-data.html
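The command form quoted in the explanation can be sketched as a small helper that assembles the copy invocation from its parts. The paths and bucket name below are placeholders, and the helper itself is hypothetical; only the `snowball cp [options] [source] s3://[destination]` syntax comes from the documentation.

```python
# Build a Snowball client copy command, per the documented syntax
# snowball cp [options] [path/to/data/source] s3://[path/to/data/destination].
def snowball_cp_command(source: str, bucket: str, prefix: str,
                        recursive: bool = True) -> str:
    """Assemble a snowball cp command string; s3:// marks the destination root."""
    options = "--recursive " if recursive else ""
    return f"snowball cp {options}{source} s3://{bucket}/{prefix}"

print(snowball_cp_command("/Logs/April", "MyBucket", "Logs"))
# snowball cp --recursive /Logs/April s3://MyBucket/Logs
```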



When using Snowball to import data to AWS, the data is stored in a(n) _.

  A. DynamoDB instance
  B. EBS volume
  C. S3 bucket
  D. EC2 instance

Answer(s): C

Explanation:

When using Snowball to import data to AWS, the data is stored in an S3 bucket. Data can also be copied from an S3 bucket to your datacenter.


Reference:

http://docs.aws.amazon.com/snowball/latest/ug/jobs.html





