Free DAS-C01 Exam Braindumps (page: 12)


A mortgage company has a microservice for accepting payments. This microservice uses the Amazon DynamoDB Encryption Client with AWS KMS managed keys to encrypt the sensitive data before writing the data to DynamoDB. The finance team should be able to load this data into Amazon Redshift and aggregate the values within the sensitive fields. The Amazon Redshift cluster is shared with other data analysts from different business units.
Which steps should a data analyst take to accomplish this task efficiently and securely?

  1. Create an AWS Lambda function to process the DynamoDB stream. Decrypt the sensitive data using the same KMS key. Save the output to a restricted S3 bucket for the finance team. Create a finance table in Amazon Redshift that is accessible to the finance team only. Use the COPY command to load the data from Amazon S3 to the finance table.
  2. Create an AWS Lambda function to process the DynamoDB stream. Save the output to a restricted S3 bucket for the finance team. Create a finance table in Amazon Redshift that is accessible to the finance team only. Use the COPY command with the IAM role that has access to the KMS key to load the data from S3 to the finance table.
  3. Create an Amazon EMR cluster with an EMR_EC2_DefaultRole role that has access to the KMS key. Create Apache Hive tables that reference the data stored in DynamoDB and the finance table in Amazon Redshift. In Hive, select the data from DynamoDB and then insert the output to the finance table in Amazon Redshift.
  4. Create an Amazon EMR cluster. Create Apache Hive tables that reference the data stored in DynamoDB. Insert the output to the restricted Amazon S3 bucket for the finance team. Use the COPY command with the IAM role that has access to the KMS key to load the data from Amazon S3 to the finance table in Amazon Redshift.

Answer(s): A
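
For reference, below is a minimal sketch of the Lambda half of option A, assuming a Python runtime. The bucket name, object key layout, and the decrypt_item helper are hypothetical; the real decryption must use the same DynamoDB Encryption Client configuration (materials provider and KMS key) that the payments microservice uses to encrypt the items.

```python
import json
import os

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name -- restricted to the finance team by bucket policy.
FINANCE_BUCKET = os.environ.get("FINANCE_BUCKET", "finance-restricted-bucket")


def decrypt_item(new_image):
    """Hypothetical helper: decrypt the sensitive attributes with the
    DynamoDB Encryption Client, using the same KMS key and materials
    provider configuration as the payments microservice."""
    raise NotImplementedError("wire up the DynamoDB Encryption Client here")


def handler(event, context):
    """Triggered by the DynamoDB stream of the payments table."""
    rows = []
    for record in event.get("Records", []):
        if record.get("eventName") not in ("INSERT", "MODIFY"):
            continue
        new_image = record["dynamodb"]["NewImage"]
        rows.append(json.dumps(decrypt_item(new_image)))

    if rows:
        # One newline-delimited JSON object per payment, ready for COPY.
        s3.put_object(
            Bucket=FINANCE_BUCKET,
            Key=f"payments/{context.aws_request_id}.json",
            Body="\n".join(rows).encode("utf-8"),
        )
```

The finance table would then be loaded with a COPY statement along the lines of COPY finance.payments FROM 's3://finance-restricted-bucket/payments/' IAM_ROLE '<role-arn>' FORMAT AS JSON 'auto'; with SELECT on that table granted only to the finance team's database group.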



A company is building a data lake and needs to ingest data from a relational database that has time-series data. The company wants to use managed services to accomplish this. The process needs to be scheduled daily and bring incremental data only from the source into Amazon S3.
What is the MOST cost-effective approach to meet these requirements?

  1. Use AWS Glue to connect to the data source using JDBC Drivers. Ingest incremental records only using job bookmarks.
  2. Use AWS Glue to connect to the data source using JDBC Drivers. Store the last updated key in an Amazon DynamoDB table and ingest the data using the updated key as a filter.
  3. Use AWS Glue to connect to the data source using JDBC Drivers and ingest the entire dataset. Use appropriate Apache Spark libraries to compare the dataset, and find the delta.
  4. Use AWS Glue to connect to the data source using JDBC Drivers and ingest the full data. Use AWS DataSync to ensure the delta only is written into Amazon S3.

Answer(s): A
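
As a rough illustration of option A, a daily-scheduled AWS Glue job might look like the PySpark sketch below. The catalog database, table, bookmark key, and S3 path are hypothetical, and the JDBC connection is assumed to be defined in the Glue Data Catalog; the transformation_ctx values together with an enabled job bookmark are what limit each run to the rows added since the previous run.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)  # job bookmarks are tracked per job name

# Read the time-series table through the JDBC connection registered in the
# Glue Data Catalog. The transformation_ctx lets the job bookmark remember
# the last value of the bookmark key between daily runs, so only new rows
# are ingested. (Database, table, key, and path names are hypothetical.)
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",
    table_name="transactions",
    transformation_ctx="source",
    additional_options={
        "jobBookmarkKeys": ["updated_at"],
        "jobBookmarkKeysSortOrder": "asc",
    },
)

glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://datalake-raw/transactions/"},
    format="parquet",
    transformation_ctx="sink",
)

job.commit()  # advances the bookmark only after a successful run
```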



An Amazon Redshift database contains sensitive user data. Logging is necessary to meet compliance requirements. The logs must contain database authentication attempts, connections, and disconnections. The logs must also contain each query run against the database and record which database user ran each query.
Which steps will create the required logs?

  1. Enable Amazon Redshift Enhanced VPC Routing. Enable VPC Flow Logs to monitor traffic.
  2. Allow access to the Amazon Redshift database using AWS IAM only. Log access using AWS CloudTrail.
  3. Enable audit logging for Amazon Redshift using the AWS Management Console or the AWS CLI.
  4. Enable and download audit reports from AWS Artifact.

Answer(s): C


Reference:

https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html
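
A minimal sketch of option C using boto3, with hypothetical cluster, bucket, and parameter group names. Enabling audit logging delivers connection and user logs to Amazon S3; recording every query and the database user who ran it additionally requires the enable_user_activity_logging parameter to be set to true in the cluster's parameter group.

```python
import boto3

redshift = boto3.client("redshift")

# Deliver Amazon Redshift audit logs (connection and user logs) to S3.
# The bucket policy must allow the Redshift logging service to write to it.
redshift.enable_logging(
    ClusterIdentifier="analytics-cluster",
    BucketName="redshift-audit-logs",
    S3KeyPrefix="audit/",
)

# Capture user activity logs (each query plus the database user who ran it)
# by turning on enable_user_activity_logging in the cluster parameter group.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="analytics-param-group",
    Parameters=[
        {
            "ParameterName": "enable_user_activity_logging",
            "ParameterValue": "true",
        }
    ],
)
```

Because enable_user_activity_logging is a static parameter, the cluster must be rebooted before user activity logging takes effect.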



A company that monitors weather conditions from remote construction sites is setting up a solution to collect temperature data from the following two weather stations.

Station A, which has 10 sensors
Station B, which has five sensors

These weather stations were placed by onsite subject-matter experts.
Each sensor has a unique ID. The data collected from each sensor will be collected using Amazon Kinesis Data Streams. Based on the total incoming and outgoing data throughput, a single Amazon Kinesis data stream with two shards is created. Two partition keys are created based on the station names. During testing, there is a bottleneck on data coming from Station A, but not from Station B. Upon review, it is confirmed that the total stream throughput is still less than the allocated Kinesis Data Streams throughput.

How can this bottleneck be resolved without increasing the overall cost and complexity of the solution, while retaining the data collection quality requirements?

  1. Increase the number of shards in Kinesis Data Streams to increase the level of parallelism.
  2. Create a separate Kinesis data stream for Station A with two shards, and stream Station A sensor data to the new stream.
  3. Modify the partition key to use the sensor ID instead of the station name.
  4. Reduce the number of sensors in Station A from 10 to 5 sensors.

Answer(s): C
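
For illustration, a producer following option C might look like the sketch below; the stream name and record fields are hypothetical. Using the sensor ID as the partition key gives 15 distinct keys instead of 2, so records hash across both shards rather than piling Station A's 10 sensors onto a single shard.

```python
import json

import boto3

kinesis = boto3.client("kinesis")


def publish_reading(station: str, sensor_id: str, temperature_c: float) -> None:
    """Send one temperature reading to the stream (names are hypothetical)."""
    kinesis.put_record(
        StreamName="weather-temperature-stream",
        Data=json.dumps(
            {"station": station, "sensor_id": sensor_id, "temperature_c": temperature_c}
        ).encode("utf-8"),
        # Partitioning on the sensor ID (15 distinct values) instead of the
        # station name (2 values) spreads the load across both shards and
        # removes the hot shard caused by Station A's 10 sensors.
        PartitionKey=sensor_id,
    )


# Example: a reading from one of Station A's sensors.
publish_reading("station-a", "station-a-sensor-07", 21.4)
```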





