Free SAP-C01 Exam Braindumps (page: 59)


A company has built a high-performance computing (HPC) cluster in AWS for a tightly coupled workload that generates a large number of shared files stored in Amazon EFS. The cluster was performing well when the number of Amazon EC2 instances in the cluster was 100. However, when the company increased the cluster size to 1,000 EC2 instances, overall performance was well below expectations.

Which collection of design choices should a solutions architect make to achieve the maximum performance from the HPC cluster? (Choose three.)

  A. Ensure the HPC cluster is launched within a single Availability Zone.
  B. Launch the EC2 instances and attach elastic network interfaces in multiples of four.
  C. Select EC2 instance types with an Elastic Fabric Adapter (EFA) enabled.
  D. Ensure the cluster is launched across multiple Availability Zones.
  E. Replace Amazon EFS with multiple Amazon EBS volumes in a RAID array.
  F. Replace Amazon EFS with Amazon FSx for Lustre.

Answer(s): A, C, F

Explanation:

A) Launching the HPC cluster within a single Availability Zone reduces latency, which is crucial for tightly coupled workloads.
C) Selecting EC2 instances with Elastic Fabric Adapter (EFA) provides lower latency and higher throughput for high-performance computing, which is essential for improving the cluster's performance at larger scales.
F) Replacing Amazon EFS with Amazon FSx for Lustre is a better choice for HPC workloads because FSx for Lustre is optimized for fast, shared storage and high-throughput performance, ideal for large-scale, tightly coupled workloads.
These options provide the best performance improvements for the HPC cluster when scaling to a larger number of instances.
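
For illustration, below is a minimal boto3 sketch of how choices A and C could be combined: EFA-enabled instances launched into a cluster placement group, which also keeps the nodes in a single Availability Zone. The AMI, subnet, security group, and instance type are placeholder values, not details from the question.

```python
# Hedged sketch: EFA-enabled EC2 instances in a cluster placement group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group packs instances close together in one
# Availability Zone, minimizing inter-node latency for tightly
# coupled MPI-style workloads.
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="c5n.18xlarge",          # an EFA-capable instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster-pg"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet (one AZ)
        "Groups": ["sg-0123456789abcdef0"],      # placeholder security group
        "InterfaceType": "efa",                  # request an Elastic Fabric Adapter
    }],
)
```

An FSx for Lustre file system (choice F) would then be created and mounted on these instances in place of EFS; that step is omitted here.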



A company is designing an AWS Organizations structure. The company wants to standardize a process to apply tags across the entire organization. The company will require tags with specific values when a user creates a new resource. Each of the company's OUs will have unique tag values.

Which solution will meet these requirements?

  A. Use an SCP to deny the creation of resources that do not have the required tags. Create a tag policy that includes the tag values that the company has assigned to each OU. Attach the tag policies to the OUs.
  B. Use an SCP to deny the creation of resources that do not have the required tags. Create a tag policy that includes the tag values that the company has assigned to each OU. Attach the tag policies to the organization's management account.
  C. Use an SCP to allow the creation of resources only when the resources have the required tags. Create a tag policy that includes the tag values that the company has assigned to each OU. Attach the tag policies to the OUs.
  D. Use an SCP to deny the creation of resources that do not have the required tags. Define the list of tags. Attach the SCP to the OUs.

Answer(s): A

Explanation:

A) Using a Service Control Policy (SCP) to deny the creation of resources that do not have the required tags ensures that all resources comply with the company's tagging standards. Attaching a tag policy with specific tag values to each Organizational Unit (OU) allows the company to enforce unique tag values for each OU. This solution meets the requirement of standardizing tags across the organization while ensuring that specific tag values are applied at the OU level.
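
As a rough sketch of answer A, the snippet below creates an SCP that denies untagged EC2 launches and a tag policy that pins the allowed values, then attaches both to an OU. The CostCenter tag key, its value, and the OU ID are invented for illustration.

```python
# Hedged sketch: SCP to require a tag, plus a per-OU tag policy.
import json
import boto3

org = boto3.client("organizations")

# SCP: deny launching EC2 instances when the CostCenter tag is absent.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUntaggedInstances",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"Null": {"aws:RequestTag/CostCenter": "true"}},
    }],
}

# Tag policy: pin the tag key's capitalization and its allowed value
# for this OU; each OU would get its own policy with its own values.
tag_policy_document = {
    "tags": {
        "costcenter": {
            "tag_key": {"@@assign": "CostCenter"},
            "tag_value": {"@@assign": ["Finance-123"]},  # OU-specific value
        }
    }
}

scp = org.create_policy(
    Content=json.dumps(scp_document),
    Description="Deny untagged EC2 instances",
    Name="require-costcenter-tag",
    Type="SERVICE_CONTROL_POLICY",
)
tag_policy = org.create_policy(
    Content=json.dumps(tag_policy_document),
    Description="CostCenter values for this OU",
    Name="costcenter-values",
    Type="TAG_POLICY",
)
for policy in (scp, tag_policy):
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-abcd-12345678",  # placeholder OU ID
    )
```

In practice the SCP would cover additional resource types beyond EC2, but the deny-when-tag-is-null pattern stays the same.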



A company has more than 10,000 sensors that send data to an on-premises Apache Kafka server by using the Message Queuing Telemetry Transport (MQTT) protocol. The on-premises Kafka server transforms the data and then stores the results as objects in an Amazon S3 bucket.

Recently, the Kafka server crashed. The company lost sensor data while the server was being restored. A solutions architect must create a new design on AWS that is highly available and scalable to prevent a similar occurrence.

Which solution will meet these requirements?

  A. Launch two Amazon EC2 instances to host the Kafka server in an active/standby configuration across two Availability Zones. Create a domain name in Amazon Route 53. Create a Route 53 failover policy. Route the sensors to send the data to the domain name.
  B. Migrate the on-premises Kafka server to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Create a Network Load Balancer (NLB) that points to the Amazon MSK broker. Enable NLB health checks. Route the sensors to send the data to the NLB.
  C. Deploy AWS IoT Core, and connect it to an Amazon Kinesis Data Firehose delivery stream. Use an AWS Lambda function to handle data transformation. Route the sensors to send the data to AWS IoT Core.
  D. Deploy AWS IoT Core, and launch an Amazon EC2 instance to host the Kafka server. Configure AWS IoT Core to send the data to the EC2 instance. Route the sensors to send the data to AWS IoT Core.

Answer(s): C

Explanation:

C) Deploying AWS IoT Core and connecting it to an Amazon Kinesis Data Firehose delivery stream provides a highly available, scalable, and serverless solution for ingesting data from the sensors. AWS Lambda can transform the data before Firehose stores it in Amazon S3, eliminating the need for self-managed Kafka servers and ensuring the system is resilient to failures. This solution prevents data loss and provides seamless scalability, making it ideal for the company's requirements.
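
As a minimal sketch of the transformation step in answer C, the Lambda function below follows the standard request/response contract that Kinesis Data Firehose expects from a transformation function. The sensor payload shape (a JSON document with a temp_c field) is an assumption made for the example.

```python
# Hedged sketch: Kinesis Data Firehose transformation Lambda.
import base64
import json

def handler(event, context):
    output = []
    for record in event["records"]:
        # Firehose delivers each record's data base64-encoded.
        payload = json.loads(base64.b64decode(record["data"]))

        # Example transformation: convert Celsius to Fahrenheit
        # before the batch is delivered to Amazon S3.
        payload["temp_f"] = payload.pop("temp_c") * 9 / 5 + 32

        # Return the record with the same recordId, a result status,
        # and the re-encoded (newline-delimited) payload.
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(
                (json.dumps(payload) + "\n").encode()
            ).decode(),
        })
    return {"records": output}
```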



A company recently started hosting new application workloads in the AWS Cloud. The company is using Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) file systems, and Amazon RDS DB instances.

To meet regulatory and business requirements, the company must make the following changes for data backups:

• Backups must be retained based on custom daily, weekly, and monthly requirements.
• Backups must be replicated to at least one other AWS Region immediately after capture.
• The backup solution must provide a single source of backup status across the AWS environment.
• The backup solution must send immediate notifications upon failure of any resource backup.

Which combination of steps will meet these requirements with the LEAST amount of operational overhead? (Choose three.)

  A. Create an AWS Backup plan with a backup rule for each of the retention requirements.
  B. Configure an AWS Backup plan to copy backups to another Region.
  C. Create an AWS Lambda function to replicate backups to another Region and send notification if a failure occurs.
  D. Add an Amazon Simple Notification Service (Amazon SNS) topic to the backup plan to send a notification for finished jobs that have any status except BACKUP_JOB_COMPLETED.
  E. Create an Amazon Data Lifecycle Manager (Amazon DLM) snapshot lifecycle policy for each of the retention requirements.
  F. Set up RDS snapshots on each database.

Answer(s): A,B,D

Explanation:

A) Creating an AWS Backup plan with rules for each retention requirement allows you to meet the company's custom daily, weekly, and monthly backup retention needs with minimal operational overhead.
B) Configuring the AWS Backup plan to copy backups to another AWS Region ensures that backups are immediately replicated to meet the cross-region replication requirement.
D) Adding an Amazon SNS topic to the backup plan provides immediate notifications for any backup failure, ensuring timely alerts if a backup job fails.
This combination leverages AWS Backup's native features to meet all the requirements efficiently and with the least operational effort.
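
To make answers A, B, and D concrete, here is a hedged boto3 sketch: a backup plan whose daily rule includes a cross-Region copy action, plus vault notifications routed to an SNS topic. Vault names, ARNs, account IDs, and schedules are placeholders rather than values from the question.

```python
# Hedged sketch: AWS Backup plan with cross-Region copy and SNS alerts.
import boto3

backup = boto3.client("backup", region_name="us-east-1")

backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "regulatory-backups",
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "primary-vault",
        "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
        "Lifecycle": {"DeleteAfterDays": 35},       # custom daily retention
        "CopyActions": [{                           # copy to another Region on capture
            "DestinationBackupVaultArn":
                "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",
            "Lifecycle": {"DeleteAfterDays": 35},
        }],
    }],
    # Weekly and monthly rules with their own schedules and lifecycles
    # would be appended here to cover the remaining retention requirements.
})

# Publish backup job events to an SNS topic. BACKUP_JOB_COMPLETED fires
# for every finished job; the message body carries the job state, so a
# subscription filter policy can suppress successful jobs and alert only
# on failures.
backup.put_backup_vault_notifications(
    BackupVaultName="primary-vault",
    SNSTopicArn="arn:aws:sns:us-east-1:111122223333:backup-alerts",
    BackupVaultEvents=["BACKUP_JOB_COMPLETED"],
)
```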






Post your comments and discuss the Amazon SAP-C01 exam with other community members:

Mike commented on October 08, 2024
Not bad at all
CANADA
upvote

Petro UA commented on October 01, 2024
Hate DNS questions, so I need to practice more.
UNITED STATES
upvote

Gilbert commented on September 14, 2024
Can't wait to pass mine.
Anonymous
upvote

Paresh commented on April 19, 2023
There were only 3 questions in my exam that I did not see in this dump. The rest of the questions were all word for word from this dump.
UNITED STATES
upvote

Matthew commented on October 18, 2022
An extremely helpful study package. I highly recommend.
UNITED STATES
upvote

Peter commented on June 23, 2022
I thought these were practice exam questions but they turned out to be real questions from the actual exam.
NETHERLANDS
upvote

Henry commented on September 29, 2021
I do not have the words to thank you guys. Passing this exam was creating many scary thoughts. I am glad I used your braindumps and passed. I can get a beer and relax now.
AUSTRALIA
upvote

Nik commented on April 12, 2021
I would not be able to pass my exam without your help. You guys rock!
SINGAPORE
upvote

Rohit commented on January 09, 2021
Thank you for the 50% sale. I really appreciate this price cut during this extraordinary time when everyone is having financial problems.
INDIA
upvote

Roger-That commented on December 23, 2020
The 20% holiday discount is a sweet deal. Thank you for the discount code.
UNITED STATES
upvote

Duke commented on October 23, 2020
It is helpful. Questions are real. Purchase is easy, but the only problem is there is no option to pay in Euro. Only USD.
GERMANY
upvote

Tan Jin commented on September 09, 2020
The questions from this exam dump are valid. I got 88% in my exam today.
SINGAPORE
upvote

Dave commented on November 05, 2019
Useful practice questions to get a feel of the actual exam. Some of the answers are not correct so please exercise caution.
EUROPEAN UNION
upvote

Je commented on October 02, 2018
Great
UNITED STATES
upvote

Invisible Angel commented on January 11, 2018
Have yet to try it, but most recommend it.
NEW ZEALAND
upvote

Mic commented on December 26, 2017
Nice dumps, site is secure and checkout process is a breeze.
UNITED STATES
upvote