Free DAS-C01 Exam Braindumps

A company analyzes its data in an Amazon Redshift data warehouse, which currently has a cluster of three dense storage nodes. Due to a recent business acquisition, the company needs to load an additional 4 TB of user data into Amazon Redshift. The engineering team will combine all the user data and apply complex calculations that require I/O-intensive resources. The company needs to adjust the cluster's capacity to support the change in analytical and storage requirements.
Which solution meets these requirements?

  1. Resize the cluster using elastic resize with dense compute nodes.
  2. Resize the cluster using classic resize with dense compute nodes.
  3. Resize the cluster using elastic resize with dense storage nodes.
  4. Resize the cluster using classic resize with dense storage nodes.

Answer(s): A


Reference:

https://aws.amazon.com/redshift/pricing/
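For illustration, this kind of resize can be triggered through the Redshift API. The sketch below uses boto3 to request an elastic resize onto dense compute nodes; the cluster identifier, node type, and node count are hypothetical values, not figures taken from the question.

```python
import boto3

# Minimal sketch (assumed values): request an elastic resize onto dense compute nodes.
redshift = boto3.client("redshift")

response = redshift.resize_cluster(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster name
    ClusterType="multi-node",
    NodeType="dc2.8xlarge",                 # dense compute node type
    NumberOfNodes=6,                        # illustrative count to cover the extra 4 TB
    Classic=False,                          # False = elastic resize; True = classic resize
)
print(response["Cluster"]["ClusterStatus"])
```

Elastic resize typically completes in minutes with only a brief pause in query processing, whereas a classic resize provisions a new cluster and can leave the source cluster read-only for much longer.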



A company stores its sales and marketing data that includes personally identifiable information (PII) in Amazon S3. The company allows its analysts to launch their own Amazon EMR cluster and run analytics reports with the data. To meet compliance requirements, the company must ensure the data is not publicly accessible throughout this process. A data engineer has secured Amazon S3 but must ensure the individual EMR clusters created by the analysts are not exposed to the public internet.
Which solution should the data engineer use to meet this compliance requirement with the LEAST amount of effort?

  1. Create an EMR security configuration and ensure the security configuration is associated with the EMR clusters when they are created.
  2. Check the security group of the EMR clusters regularly to ensure it does not allow inbound traffic from IPv4 0.0.0.0/0 or IPv6 ::/0.
  3. Enable the block public access setting for Amazon EMR at the account level before any EMR cluster is created.
  4. Use AWS WAF to block public internet access to the EMR clusters across the board.

Answer(s): C


Reference:

https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-security-groups.html
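The account-level setting in option C can also be turned on programmatically before any cluster exists. Below is a minimal boto3 sketch of that call; the permitted port range (SSH on 22) is an illustrative assumption, not part of the question.

```python
import boto3

# Minimal sketch: enable Amazon EMR block public access for the account in this Region.
emr = boto3.client("emr")

emr.put_block_public_access_configuration(
    BlockPublicAccessConfiguration={
        "BlockPublicSecurityGroupRules": True,
        # Assumed exception: still permit inbound SSH; omit to block every public rule.
        "PermittedPublicSecurityGroupRuleRanges": [{"MinRange": 22, "MaxRange": 22}],
    }
)

# Confirm the setting took effect.
current = emr.get_block_public_access_configuration()
print(current["BlockPublicAccessConfiguration"]["BlockPublicSecurityGroupRules"])
```

Because the setting applies account-wide in the Region, it automatically covers clusters the analysts create later with no per-cluster configuration, which is why it takes less effort than auditing security groups or attaching a security configuration to every cluster.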



A financial company uses Amazon S3 as its data lake and has set up a data warehouse using a multi-node Amazon Redshift cluster. The data files in the data lake are organized in folders based on the data source of each data file. All the data files are loaded to one table in the Amazon Redshift cluster using a separate COPY command for each data file location. With this approach, loading all the data files into Amazon Redshift takes a long time to complete. Users want a faster solution with little or no increase in cost while maintaining the segregation of the data files in the S3 data lake.
Which solution meets these requirements?

  1. Use Amazon EMR to copy all the data files into one folder and issue a COPY command to load the data into Amazon Redshift.
  2. Load all the data files in parallel to Amazon Aurora, and run an AWS Glue job to load the data into Amazon Redshift.
  3. Use an AWS Glue job to copy all the data files into one folder and issue a COPY command to load the data into Amazon Redshift.
  4. Create a manifest file that contains the data file locations and issue a COPY command to load the data into Amazon Redshift.

Answer(s): D


Reference:

https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
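Option D works because a single COPY can read a manifest, a JSON file in S3 that lists every data file location, and load all of them in parallel while the files stay in their per-source folders. The sketch below assembles such a manifest with boto3; the bucket, prefixes, table name, and IAM role ARN are placeholders, not values from the question.

```python
import json
import boto3

# Minimal sketch: build a COPY manifest listing files from several source folders,
# then load them with one COPY command. All names below are placeholders.
s3 = boto3.client("s3")

bucket = "example-data-lake"                         # hypothetical bucket
prefixes = ["source_a/", "source_b/", "source_c/"]   # one folder per data source

entries = []
for prefix in prefixes:
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            entries.append({"url": f"s3://{bucket}/{obj['Key']}", "mandatory": True})

s3.put_object(
    Bucket=bucket,
    Key="manifests/load.manifest",
    Body=json.dumps({"entries": entries}),
)

# One COPY command loads every listed file in parallel across the cluster's slices:
copy_sql = (
    "COPY sales_table "
    f"FROM 's3://{bucket}/manifests/load.manifest' "
    "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
    "MANIFEST;"
)
print(copy_sql)  # run via your SQL client or the Redshift Data API
```

Because COPY distributes the listed files across slices, one manifest-driven COPY is typically much faster than issuing a separate COPY per folder, and it adds no extra services or cost.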



A company's marketing team has asked for help in identifying a high-performing, long-term storage service for its data, based on the following requirements:
The data size is approximately 32 TB uncompressed.
There is a low volume of single-row inserts each day.
There is a high volume of aggregation queries each day.
Multiple complex joins are performed.
The queries typically involve a small subset of the columns in a table.
Which storage service will provide the MOST performant solution?

  1. Amazon Aurora MySQL
  2. Amazon Redshift
  3. Amazon Neptune
  4. Amazon Elasticsearch

Answer(s): B
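Amazon Redshift suits this workload because columnar storage scans only the columns a query references, and sort and distribution keys keep daily aggregations and multi-table joins fast at 32 TB. As a rough illustration only, the sketch below creates such a table and runs one aggregation through the Redshift Data API; every identifier (cluster, database, user, table, columns) is a hypothetical example.

```python
import boto3

# Minimal sketch: define an aggregation-friendly table and run a columnar-friendly
# query via the Redshift Data API. All identifiers are hypothetical.
rsd = boto3.client("redshift-data")

ddl = """
CREATE TABLE IF NOT EXISTS marketing_events (
    event_date   DATE,
    campaign_id  INT,
    channel      VARCHAR(32),
    spend        DECIMAL(12,2),
    clicks       BIGINT
)
DISTKEY (campaign_id)   -- co-locate rows joined on campaign_id
SORTKEY (event_date);   -- prune blocks for date-range aggregations
"""

rsd.execute_statement(
    ClusterIdentifier="marketing-dw",  # hypothetical cluster
    Database="analytics",
    DbUser="admin",
    Sql=ddl,
)

# A typical aggregation touches only a few columns, so the columnar engine reads far
# less data than a row store would for the same table.
rsd.execute_statement(
    ClusterIdentifier="marketing-dw",
    Database="analytics",
    DbUser="admin",
    Sql="SELECT channel, SUM(spend) FROM marketing_events "
        "WHERE event_date >= '2023-01-01' GROUP BY channel;",
)
```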





