Free DEA-C01 Exam Braindumps (page: 24)


A data engineer wants to improve the performance of SQL queries in Amazon Athena that run against a sales data table.
The data engineer wants to understand the execution plan of a specific SQL statement. The data engineer also wants to see the computational cost of each operation in a SQL query.
Which statement does the data engineer need to run to meet these requirements?

  1. EXPLAIN SELECT * FROM sales;
  2. EXPLAIN ANALYZE FROM sales;
  3. EXPLAIN ANALYZE SELECT * FROM sales;
  4. EXPLAIN FROM sales;

Answer(s): C
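In Athena, `EXPLAIN` returns the execution plan without running the statement, while `EXPLAIN ANALYZE` actually executes the query and reports the computational cost (CPU time, input rows, output rows) of each operation in the plan. A minimal illustration, assuming a `sales` table exists in the current database:

```sql
-- Show the execution plan only; the query is not run.
EXPLAIN SELECT * FROM sales;

-- Run the query and report per-operation computational cost
-- (CPU time, rows processed) alongside the plan.
EXPLAIN ANALYZE SELECT * FROM sales;
```

This is why option C is correct: only `EXPLAIN ANALYZE` with a complete SELECT statement both shows the plan and measures the cost of each operation; options B and D are not valid SQL, and option A shows the plan without cost information.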



A company plans to provision a log delivery stream within a VPC. The company configured the VPC flow logs to publish to Amazon CloudWatch Logs. The company needs to send the flow logs to Splunk in near real time for further analysis.
Which solution will meet these requirements with the LEAST operational overhead?

  1. Configure an Amazon Kinesis Data Streams data stream to use Splunk as the destination. Create a CloudWatch Logs subscription filter to send log events to the data stream.
  2. Create an Amazon Kinesis Data Firehose delivery stream to use Splunk as the destination. Create a CloudWatch Logs subscription filter to send log events to the delivery stream.
  3. Create an Amazon Kinesis Data Firehose delivery stream to use Splunk as the destination. Create an AWS Lambda function to send the flow logs from CloudWatch Logs to the delivery stream.
  4. Configure an Amazon Kinesis Data Streams data stream to use Splunk as the destination. Create an AWS Lambda function to send the flow logs from CloudWatch Logs to the data stream.

Answer(s): B
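Option B has the least operational overhead because Kinesis Data Firehose supports Splunk as a built-in destination and a CloudWatch Logs subscription filter streams log events to it with no custom code. A sketch of the subscription-filter step, assuming the Firehose delivery stream with a Splunk destination already exists and using placeholder names, ARNs, and account IDs:

```shell
# Sketch only: the log group, stream name, and role ARN below are
# hypothetical. The IAM role must trust the CloudWatch Logs service
# and allow firehose:PutRecord / firehose:PutRecordBatch.
aws logs put-subscription-filter \
  --log-group-name "/vpc/flow-logs" \
  --filter-name "flow-logs-to-splunk" \
  --filter-pattern "" \
  --destination-arn "arn:aws:firehose:us-east-1:111122223333:deliverystream/vpc-flow-to-splunk" \
  --role-arn "arn:aws:iam::111122223333:role/CWLtoFirehoseRole"
```

By contrast, Kinesis Data Streams (options A and D) has no native Splunk destination, and the Lambda-based options (C and D) add code to write and maintain.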



A company has a data lake on AWS. The data lake ingests data from multiple business units. The company uses Amazon Athena for queries. The storage layer is Amazon S3 with an AWS Glue Data Catalog as a metadata repository.
The company wants to make the data available to data scientists and business analysts. However, the company first needs to manage fine-grained, column-level data access for Athena based on the user roles and responsibilities.
Which solution will meet these requirements?

  1. Set up AWS Lake Formation. Define security policy-based rules for the users and applications by IAM role in Lake Formation.
  2. Define an IAM resource-based policy for AWS Glue tables. Attach the same policy to IAM user groups.
  3. Define an IAM identity-based policy for AWS Glue tables. Attach the same policy to IAM roles. Associate the IAM roles with IAM groups that contain the users.
  4. Create a resource share in AWS Resource Access Manager (AWS RAM) to grant access to IAM users.

Answer(s): A
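AWS Lake Formation is the AWS-recommended way to enforce fine-grained, column-level access for Athena queries over Glue Data Catalog tables; plain IAM policies (options B and C) cannot restrict individual columns, and AWS RAM (option D) shares resources across accounts rather than filtering columns. A sketch of a column-level grant, using hypothetical database, table, column, and role names:

```shell
# Sketch: grants SELECT on only two columns of a hypothetical
# "sales" table to a hypothetical analyst role. Athena enforces
# the column filter at query time once Lake Formation governs
# the table.
aws lakeformation grant-permissions \
  --principal DataLakePrincipalIdentifier=arn:aws:iam::111122223333:role/BusinessAnalystRole \
  --permissions "SELECT" \
  --resource '{
      "TableWithColumns": {
          "DatabaseName": "sales_db",
          "Name": "sales",
          "ColumnNames": ["order_id", "order_date"]
      }
  }'
```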



A company has developed several AWS Glue extract, transform, and load (ETL) jobs to validate and transform data from Amazon S3. The ETL jobs load the data into Amazon RDS for MySQL in batches once every day. The ETL jobs use a DynamicFrame to read the S3 data.
The ETL jobs currently process all the data that is in the S3 bucket. However, the company wants the jobs to process only the daily incremental data.
Which solution will meet this requirement with the LEAST coding effort?

  1. Create an ETL job that reads the S3 file status and logs the status in Amazon DynamoDB.
  2. Enable job bookmarks for the ETL jobs to update the state after a run to keep track of previously processed data.
  3. Enable job metrics for the ETL jobs to help keep track of processed objects in Amazon CloudWatch.
  4. Configure the ETL jobs to delete processed objects from Amazon S3 after each run.

Answer(s): B
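Job bookmarks are AWS Glue's built-in mechanism for incremental processing: the job records which S3 objects it has already read and skips them on the next run, with no custom tracking code (unlike options A, C, and D). Bookmarks are enabled with the job argument `--job-bookmark-option job-bookmark-enable`, and the DynamicFrame read needs a `transformation_ctx` so Glue can associate bookmark state with that source. A sketch of the relevant script fragment, with placeholder bucket and job names; it runs only inside the AWS Glue environment, where the `awsglue` library is available:

```python
# Sketch of a Glue ETL script fragment (Glue environment only).
# The job must be started with --job-bookmark-option job-bookmark-enable.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# transformation_ctx lets the bookmark record which S3 objects were
# already processed, so only the new daily files are read.
sales = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-bucket/sales/"]},
    format="json",
    transformation_ctx="sales_source",
)

# ... validate, transform, and write to Amazon RDS for MySQL ...

job.commit()  # persists the bookmark state for the next run
```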





