Free DAS-C01 Exam Braindumps (page: 8)


A media company has a streaming playback application. The company needs to collect and analyze data to provide near-real-time feedback on playback issues within 30 seconds. The company requires a consumer application to identify playback issues, such as decreased quality during a specified time frame. The data will be streamed in JSON format. The schema can change over time.
Which solution will meet these requirements?

  A. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure an S3 event to invoke an AWS Lambda function to process and analyze the data.
  B. Send the data to Amazon Managed Streaming for Apache Kafka. Configure Amazon Kinesis Data Analytics for SQL Application as the consumer application to process and analyze the data.
  C. Send the data to Amazon Kinesis Data Firehose with delivery to Amazon S3. Configure Amazon S3 to initiate an event for AWS Lambda to process and analyze the data.
  D. Send the data to Amazon Kinesis Data Streams. Configure an Amazon Kinesis Data Analytics for Apache Flink application as the consumer application to process and analyze the data.

Answer(s): D
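Answer D fits because Kinesis Data Streams provides sub-second ingestion and a Flink consumer can evaluate playback metrics within the 30-second window, while JSON payloads tolerate schema drift. A minimal producer-side sketch; the stream name `playback-events` and the `session_id` partition-key field are assumptions for illustration, not part of the question:

```python
import json

def build_playback_record(event: dict) -> dict:
    """Build a Kinesis PutRecord payload for one playback event.
    Stream name and partition-key field are hypothetical."""
    return {
        "StreamName": "playback-events",            # hypothetical stream name
        "Data": json.dumps(event).encode("utf-8"),  # JSON tolerates schema changes
        "PartitionKey": event["session_id"],        # spreads load across shards
    }

record = build_playback_record({"session_id": "s-42", "bitrate_kbps": 800})
# With AWS credentials configured, this payload would be sent via:
#   import boto3
#   boto3.client("kinesis").put_record(**record)
```

The Flink application then consumes the stream and computes windowed quality metrics (for example, average bitrate per session over 30-second windows).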



An ecommerce company stores customer purchase data in Amazon RDS. The company wants a solution to store and analyze historical data. The most recent 6 months of data will be queried frequently for analytics workloads; this data set is several terabytes in size. Once a month, historical data for the last 5 years must be accessible and will be joined with the more recent data. The company wants to optimize performance and cost.
Which storage solution will meet these requirements?

  A. Create a read replica of the RDS database to store the most recent 6 months of data. Copy the historical data into Amazon S3. Create an AWS Glue Data Catalog of the data in Amazon S3 and Amazon RDS. Run historical queries using Amazon Athena.
  B. Use an ETL tool to incrementally load the most recent 6 months of data into an Amazon Redshift cluster. Run more frequent queries against this cluster. Create a read replica of the RDS database to run queries on the historical data.
  C. Incrementally copy data from Amazon RDS to Amazon S3. Create an AWS Glue Data Catalog of the data in Amazon S3. Use Amazon Athena to query the data.
  D. Incrementally copy data from Amazon RDS to Amazon S3. Load and store the most recent 6 months of data in Amazon Redshift. Configure an Amazon Redshift Spectrum table to connect to all historical data.

Answer(s): D
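Answer D keeps the hot 6 months in Redshift for fast, frequent queries and exposes the S3 history through Redshift Spectrum, so the monthly job can join both tiers without loading 5 years of data into the cluster. A sketch of the external-schema DDL involved, composed in Python; the schema, Glue database, and role names below are hypothetical:

```python
def spectrum_external_schema_ddl(schema: str, glue_db: str, iam_role_arn: str) -> str:
    """DDL that maps a Glue Data Catalog database into Redshift as an
    external (Spectrum) schema; historical S3 tables then appear under it."""
    return (
        f"CREATE EXTERNAL SCHEMA {schema} "
        f"FROM DATA CATALOG DATABASE '{glue_db}' "
        f"IAM_ROLE '{iam_role_arn}';"
    )

ddl = spectrum_external_schema_ddl(
    "history",                                  # hypothetical schema name
    "sales_archive",                            # hypothetical Glue database
    "arn:aws:iam::123456789012:role/spectrum",  # hypothetical role ARN
)

# A monthly query can then join hot and cold tiers, for example:
#   SELECT ... FROM public.orders_recent r
#   JOIN history.orders_archive h ON r.customer_id = h.customer_id;
```

Spectrum charges per query against S3, which suits a once-a-month access pattern, while the frequently queried recent data stays on fast cluster storage.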



A company uses Amazon Athena for ad hoc queries against data stored in Amazon S3. The company wants to implement additional controls to separate query execution and query history among users, teams, or applications running in the same AWS account to comply with internal security policies.
Which solution meets these requirements?

  A. Create an S3 bucket for each given use case, create an S3 bucket policy that grants permissions to the appropriate individual IAM users, and apply the S3 bucket policy to the S3 bucket.
  B. Create an Athena workgroup for each given use case, apply tags to the workgroup, and create an IAM policy using the tags to apply appropriate permissions to the workgroup.
  C. Create an IAM role for each given use case, assign appropriate permissions to the role for the given use case, and associate the role with Athena.
  D. Create an AWS Glue Data Catalog resource policy for each given use case that grants permissions to the appropriate individual IAM users, and apply the resource policy to the specific tables used by Athena.

Answer(s): B


Reference:

https://aws.amazon.com/athena/faqs/
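Workgroups (answer B) isolate query execution, history, and result locations per team, and tag-based IAM conditions (e.g. `aws:ResourceTag`) restrict who can use each workgroup. A sketch of the request body for the boto3 `create_work_group` call; the results bucket `athena-results` is a hypothetical name:

```python
def workgroup_request(team: str) -> dict:
    """Build a payload for boto3 athena.create_work_group(**request)
    that separates query results and history per team."""
    return {
        "Name": f"wg-{team}",
        "Configuration": {
            "ResultConfiguration": {
                # each team's query results land in its own prefix
                "OutputLocation": f"s3://athena-results/{team}/",
            },
            # prevent clients from overriding the workgroup settings
            "EnforceWorkGroupConfiguration": True,
        },
        # IAM policies can match this tag in a Condition block
        "Tags": [{"Key": "team", "Value": team}],
    }

request = workgroup_request("analytics")
# With AWS credentials configured:
#   import boto3
#   boto3.client("athena").create_work_group(**request)
```

An accompanying IAM policy would then allow `athena:StartQueryExecution` only when the workgroup carries the matching `team` tag, keeping each team's queries and history separate within the same account.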



A company wants to use an automatic machine learning (ML) Random Cut Forest (RCF) algorithm to visualize complex real-world scenarios, such as detecting seasonality and trends, excluding outliers, and imputing missing values. The team working on this project is non-technical and is looking for an out-of-the-box solution that will require the LEAST amount of management overhead.
Which solution will meet these requirements?

  A. Use an AWS Glue ML transform to create a forecast and then use Amazon QuickSight to visualize the data.
  B. Use Amazon QuickSight to visualize the data and then use ML-powered forecasting to forecast the key business metrics.
  C. Use a pre-built ML AMI from the AWS Marketplace to create forecasts and then use Amazon QuickSight to visualize the data.
  D. Use calculated fields to create a new forecast and then use Amazon QuickSight to visualize the data.

Answer(s): B


Reference:

https://aws.amazon.com/blogs/big-data/query-visualize-and-forecast-trufactor-web-session-intelligence-with-aws-data-exchange/





