Free AWS Certified Data Engineer - Associate DEA-C01 Exam Braindumps (page: 2)

Page 2 of 39

A data engineer maintains custom Python scripts that perform a data formatting process that many AWS Lambda functions use. When the data engineer needs to modify the Python scripts, the data engineer must manually update all the Lambda functions.
The data engineer requires a less manual way to update the Lambda functions.
Which solution will meet this requirement?

  1. Store a pointer to the custom Python scripts in the execution context object in a shared Amazon S3 bucket.
  2. Package the custom Python scripts into Lambda layers. Apply the Lambda layers to the Lambda functions.
  3. Store a pointer to the custom Python scripts in environment variables in a shared Amazon S3 bucket.
  4. Assign the same alias to each Lambda function. Call each Lambda function by specifying the function's alias.

Answer(s): B
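Why B: a Lambda layer packages the shared Python scripts once, and every function that attaches the layer picks up the code from its own `sys.path`; updating the scripts then means publishing a new layer version instead of editing each function. A minimal sketch of how such a layer could be packaged (the script name `formatting.py` and layer/function names are hypothetical; the commented boto3 calls exist in the AWS SDK but need real credentials to run):

```python
import io
import zipfile

def build_layer_zip(scripts):
    """Zip shared scripts under the python/ prefix, which is where the
    Lambda Python runtime looks for layer code on sys.path."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, source in scripts.items():
            zf.writestr(f"python/{name}", source)
    return buf.getvalue()

# Hypothetical shared formatting script.
layer_zip = build_layer_zip(
    {"formatting.py": "def fmt(value):\n    return value.strip()\n"}
)

# Publishing the layer and attaching it would use boto3, e.g.:
# import boto3
# lam = boto3.client("lambda")
# layer = lam.publish_layer_version(
#     LayerName="shared-formatting",            # assumed name
#     Content={"ZipFile": layer_zip},
#     CompatibleRuntimes=["python3.12"],
# )
# lam.update_function_configuration(
#     FunctionName="my-fn",                     # assumed name
#     Layers=[layer["LayerVersionArn"]],
# )
```

Each function that lists the layer can then simply `import formatting`.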



A company created an extract, transform, and load (ETL) data pipeline in AWS Glue. A data engineer must crawl a table that is in Microsoft SQL Server. The data engineer needs to extract, transform, and load the output of the crawl to an Amazon S3 bucket. The data engineer also must orchestrate the data pipeline.
Which AWS service or feature will meet these requirements MOST cost-effectively?

  1. AWS Step Functions
  2. AWS Glue workflows
  3. AWS Glue Studio
  4. Amazon Managed Workflows for Apache Airflow (Amazon MWAA)

Answer(s): B
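Why B: AWS Glue workflows orchestrate crawlers and ETL jobs natively inside Glue at no extra orchestration cost, unlike Step Functions or MWAA. A workflow is wired together with triggers: an on-demand (or scheduled) trigger starts the crawler, and a conditional trigger fires the ETL job when the crawl succeeds. A sketch of the trigger definitions such a workflow might use (crawler and job names are hypothetical; these dicts mirror the shape `glue.create_trigger` accepts, which would require AWS credentials to actually call):

```python
def workflow_triggers(workflow, crawler, job):
    """Return the two trigger definitions for a crawl-then-ETL workflow:
    one to start the crawler, one to run the job after a successful crawl."""
    return [
        {
            "Name": f"{workflow}-start",
            "WorkflowName": workflow,
            "Type": "ON_DEMAND",
            "Actions": [{"CrawlerName": crawler}],
        },
        {
            "Name": f"{workflow}-run-etl",
            "WorkflowName": workflow,
            "Type": "CONDITIONAL",
            "Predicate": {
                "Conditions": [
                    {
                        "LogicalOperator": "EQUALS",
                        "CrawlerName": crawler,
                        "CrawlState": "SUCCEEDED",
                    }
                ]
            },
            "Actions": [{"JobName": job}],
        },
    ]

triggers = workflow_triggers("sqlserver-to-s3", "sqlserver-crawler", "etl-to-s3")
```

With boto3 this would become `glue.create_workflow(Name=...)` followed by one `glue.create_trigger(**t)` per definition.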



A financial services company stores financial data in Amazon Redshift. A data engineer wants to run real-time queries on the financial data to support a web-based trading application. The data engineer wants to run the queries from within the trading application.
Which solution will meet these requirements with the LEAST operational overhead?

  1. Establish WebSocket connections to Amazon Redshift.
  2. Use the Amazon Redshift Data API.
  3. Set up Java Database Connectivity (JDBC) connections to Amazon Redshift.
  4. Store frequently accessed data in Amazon S3. Use Amazon S3 Select to run the queries.

Answer(s): B
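Why B: the Redshift Data API is a plain HTTPS API, so the web application can run SQL without managing persistent JDBC connections, drivers, or VPC networking. A sketch of building an `execute_statement` request for the Data API (the workgroup, database, and parameter names are hypothetical; the real call via `boto3.client("redshift-data")` needs AWS credentials):

```python
def data_api_request(workgroup, database, sql, params=None):
    """Build the keyword arguments for redshift-data execute_statement,
    using named parameters to keep user input out of the SQL string."""
    req = {"WorkgroupName": workgroup, "Database": database, "Sql": sql}
    if params:
        req["Parameters"] = [
            {"name": k, "value": str(v)} for k, v in params.items()
        ]
    return req

req = data_api_request(
    "trading-wg",          # assumed Redshift Serverless workgroup
    "trades",              # assumed database
    "SELECT price FROM quotes WHERE symbol = :symbol",
    {"symbol": "AMZN"},
)

# With credentials configured, the application would then run:
# import boto3
# client = boto3.client("redshift-data")
# resp = client.execute_statement(**req)
# ...and poll describe_statement / get_statement_result for the rows.
```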



A company uses Amazon Athena for one-time queries against data that is in Amazon S3. The company has several use cases. The company must implement permission controls to separate query processes and access to query history among users, teams, and applications that are in the same AWS account.
Which solution will meet these requirements?

  1. Create an S3 bucket for each use case. Create an S3 bucket policy that grants permissions to appropriate individual IAM users. Apply the S3 bucket policy to the S3 bucket.
  2. Create an Athena workgroup for each use case. Apply tags to the workgroup. Create an IAM policy that uses the tags to apply appropriate permissions to the workgroup.
  3. Create an IAM role for each use case. Assign appropriate permissions to the role for each use case. Associate the role with Athena.
  4. Create an AWS Glue Data Catalog resource policy that grants permissions to appropriate individual IAM users for each use case. Apply the resource policy to the specific tables that Athena uses.

Answer(s): B
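Why B: Athena workgroups isolate query execution and query history per use case, and tagging each workgroup lets a single IAM policy grant access by tag rather than by enumerating workgroups. A sketch of the tag-based IAM policy such a setup might attach (the tag key `team`, account ID, and region are placeholders; the Athena actions and `aws:ResourceTag` condition key are real):

```python
import json

def workgroup_policy(account_id, region, team_tag):
    """Build an IAM policy allowing query actions only on Athena
    workgroups tagged team=<team_tag>."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "athena:StartQueryExecution",
                    "athena:GetQueryExecution",
                    "athena:GetQueryResults",
                    "athena:ListQueryExecutions",
                ],
                "Resource": f"arn:aws:athena:{region}:{account_id}:workgroup/*",
                "Condition": {
                    "StringEquals": {"aws:ResourceTag/team": team_tag}
                },
            }
        ],
    }

policy = workgroup_policy("123456789012", "us-east-1", "trading")
policy_json = json.dumps(policy, indent=2)
```

The matching workgroup would be created with `athena.create_work_group(Name=..., Tags=[{"Key": "team", "Value": "trading"}])`, so only principals whose policy carries that tag condition can query it or see its history.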





