Free AWS-SOLUTIONS-ARCHITECT-PROFESSIONAL Exam Braindumps (page: 59)


A company has built a high-performance computing (HPC) cluster on AWS for a tightly coupled workload that generates a large number of shared files stored in Amazon EFS. The cluster was performing well when the number of Amazon EC2 instances in the cluster was 100. However, when the company increased the cluster size to 1,000 EC2 instances, overall performance was well below expectations.

Which collection of design choices should a solutions architect make to achieve the maximum performance from the HPC cluster? (Choose three.)

  A. Ensure the HPC cluster is launched within a single Availability Zone.
  B. Launch the EC2 instances and attach elastic network interfaces in multiples of four.
  C. Select EC2 instance types with an Elastic Fabric Adapter (EFA) enabled.
  D. Ensure the cluster is launched across multiple Availability Zones.
  E. Replace Amazon EFS with multiple Amazon EBS volumes in a RAID array.
  F. Replace Amazon EFS with Amazon FSx for Lustre.

Answer(s): A,C,F

Explanation:

A) Launching the HPC cluster within a single Availability Zone minimizes inter-node network latency, which is crucial for tightly coupled workloads.
C) Selecting EC2 instances with Elastic Fabric Adapter (EFA) provides lower latency and higher throughput for high-performance computing, which is essential for improving the cluster's performance at larger scales.
F) Replacing Amazon EFS with Amazon FSx for Lustre is a better choice for HPC workloads because FSx for Lustre is optimized for fast, shared storage and high-throughput performance, ideal for large-scale, tightly coupled workloads.
These options provide the best performance improvements for the HPC cluster when scaling to a larger number of instances.
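The following is a minimal boto3 sketch of choices A and C combined: a cluster placement group (which keeps all instances in a single Availability Zone on low-latency network hardware) and EFA-enabled instances launched into it. The AMI, subnet, security group, and instance count are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group packs instances onto low-latency network
# hardware within a single Availability Zone (choice A).
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch EFA-capable instances into the group (choice C). The AMI,
# subnet, and security group IDs are placeholders for illustration.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical HPC AMI
    InstanceType="c5n.18xlarge",       # an EFA-capable instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",        # attach an Elastic Fabric Adapter
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
    }],
)
```

In a real cluster the instances would also mount the FSx for Lustre file system (choice F), which is created separately and served to the instances over the same low-latency network.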



A company is designing an AWS Organizations structure. The company wants to standardize a process to apply tags across the entire organization. The company will require tags with specific values when a user creates a new resource. Each of the company's OUs will have unique tag values.

Which solution will meet these requirements?

  A. Use an SCP to deny the creation of resources that do not have the required tags. Create a tag policy that includes the tag values that the company has assigned to each OU. Attach the tag policies to the OUs.
  B. Use an SCP to deny the creation of resources that do not have the required tags. Create a tag policy that includes the tag values that the company has assigned to each OU. Attach the tag policies to the organization's management account.
  C. Use an SCP to allow the creation of resources only when the resources have the required tags. Create a tag policy that includes the tag values that the company has assigned to each OU. Attach the tag policies to the OUs.
  D. Use an SCP to deny the creation of resources that do not have the required tags. Define the list of tags. Attach the SCP to the OUs.

Answer(s): A

Explanation:

A) Using a Service Control Policy (SCP) to deny the creation of resources that do not carry the required tags ensures that no new resource can bypass the company's tagging standards. Because each OU has its own tag values, the tag policies must be attached at the OU level rather than to the management account; attaching a tag policy with the OU-specific values to each Organizational Unit (OU) enforces the unique values per OU. This combination standardizes tagging across the organization while applying the correct values at each OU.
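As an illustration of the deny pattern, here is a minimal boto3 sketch that creates such an SCP and attaches it to one OU. The tag key (CostCenter), the OU ID, and the restriction to ec2:RunInstances are assumptions for brevity; a real policy would list every resource-creating action the company uses, and the OU-specific tag values would live in a separate policy of type TAG_POLICY attached the same way.

```python
import json
import boto3

org = boto3.client("organizations")

# SCP: deny launching EC2 instances when the request carries no
# "CostCenter" tag. "Null": "true" matches requests where the tag is
# absent. (Hypothetical tag key; extend Action to other services.)
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "*",
        "Condition": {"Null": {"aws:RequestTag/CostCenter": "true"}},
    }],
}

policy = org.create_policy(
    Name="require-costcenter-tag",
    Description="Deny resource creation without required tags",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to an OU (placeholder OU ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",
)
```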



A company has more than 10,000 sensors that send data to an on-premises Apache Kafka server by using the Message Queuing Telemetry Transport (MQTT) protocol. The on-premises Kafka server transforms the data and then stores the results as objects in an Amazon S3 bucket.

Recently, the Kafka server crashed. The company lost sensor data while the server was being restored. A solutions architect must create a new design on AWS that is highly available and scalable to prevent a similar occurrence.

Which solution will meet these requirements?

  A. Launch two Amazon EC2 instances to host the Kafka server in an active/standby configuration across two Availability Zones. Create a domain name in Amazon Route 53. Create a Route 53 failover policy. Route the sensors to send the data to the domain name.
  B. Migrate the on-premises Kafka server to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Create a Network Load Balancer (NLB) that points to the Amazon MSK broker. Enable NLB health checks. Route the sensors to send the data to the NLB.
  C. Deploy AWS IoT Core, and connect it to an Amazon Kinesis Data Firehose delivery stream. Use an AWS Lambda function to handle data transformation. Route the sensors to send the data to AWS IoT Core.
  D. Deploy AWS IoT Core, and launch an Amazon EC2 instance to host the Kafka server. Configure AWS IoT Core to send the data to the EC2 instance. Route the sensors to send the data to AWS IoT Core.

Answer(s): C

Explanation:

C) Deploying AWS IoT Core and connecting it to an Amazon Kinesis Data Firehose delivery stream provides a highly available, scalable, and serverless solution for ingesting data from the sensors over MQTT. An AWS Lambda function can transform the data before Firehose stores it in Amazon S3, eliminating the need for self-managed Kafka servers and ensuring that the system is resilient to failures. This solution prevents data loss and provides seamless scalability, making it ideal for the company's requirements.
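To make the transformation step concrete, here is a minimal sketch of the kind of Lambda function Kinesis Data Firehose invokes for record transformation. The sensor payload shape is an assumption; the contract with Firehose (base64-encoded data in and out, each record echoing its recordId with a result status) is fixed by the service.

```python
import base64
import json

def handler(event, context):
    """Firehose data-transformation Lambda: decode each record,
    reshape it, and return it base64-encoded with a status."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))

        # Hypothetical transformation: keep only the fields the
        # downstream consumers need, and append a newline so the
        # objects Firehose writes to S3 are line-delimited JSON.
        transformed = {
            "sensor_id": payload.get("sensor_id"),
            "timestamp": payload.get("timestamp"),
            "value": payload.get("value"),
        }
        data = (json.dumps(transformed) + "\n").encode()

        output.append({
            "recordId": record["recordId"],
            "result": "Ok",   # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(data).decode(),
        })
    return {"records": output}
```

An AWS IoT Core topic rule subscribed to the sensors' MQTT topic would forward matching messages into the Firehose delivery stream, which in turn invokes this function before writing to S3.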



A company recently started hosting new application workloads in the AWS Cloud. The company is using Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) file systems, and Amazon RDS DB instances.

To meet regulatory and business requirements, the company must make the following changes for data backups:

• Backups must be retained based on custom daily, weekly, and monthly requirements.
• Backups must be replicated to at least one other AWS Region immediately after capture.
• The backup solution must provide a single source of backup status across the AWS environment.
• The backup solution must send immediate notifications upon failure of any resource backup.

Which combination of steps will meet these requirements with the LEAST amount of operational overhead? (Choose three.)

  A. Create an AWS Backup plan with a backup rule for each of the retention requirements.
  B. Configure an AWS Backup plan to copy backups to another Region.
  C. Create an AWS Lambda function to replicate backups to another Region and send a notification if a failure occurs.
  D. Add an Amazon Simple Notification Service (Amazon SNS) topic to the backup plan to send a notification for finished jobs that have any status except BACKUP_JOB_COMPLETED.
  E. Create an Amazon Data Lifecycle Manager (Amazon DLM) snapshot lifecycle policy for each of the retention requirements.
  F. Set up RDS snapshots on each database.

Answer(s): A,B,D

Explanation:

A) Creating an AWS Backup plan with rules for each retention requirement allows you to meet the company's custom daily, weekly, and monthly backup retention needs with minimal operational overhead.
B) Configuring the AWS Backup plan to copy backups to another AWS Region ensures that backups are immediately replicated to meet the cross-region replication requirement.
D) Adding an Amazon SNS topic to the backup plan provides immediate notifications for any backup failure, ensuring timely alerts if a backup job fails.
This combination leverages AWS Backup's native features to meet all the requirements efficiently and with the least operational effort.
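Here is a minimal boto3 sketch of that combination. The vault names, schedules, retention periods, Region, and ARNs are placeholders; a monthly rule would follow the same shape as the daily and weekly ones.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Destination vault in another Region for immediate cross-Region copies.
DEST_VAULT = "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault"

# One rule per retention requirement (choice A); each rule copies its
# recovery points to the other Region as soon as the backup finishes
# (choice B). A monthly rule is analogous and omitted here.
backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "org-standard-backups",
    "Rules": [
        {
            "RuleName": "daily",
            "TargetBackupVaultName": "primary-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily, 05:00 UTC
            "Lifecycle": {"DeleteAfterDays": 35},
            "CopyActions": [{"DestinationBackupVaultArn": DEST_VAULT}],
        },
        {
            "RuleName": "weekly",
            "TargetBackupVaultName": "primary-vault",
            "ScheduleExpression": "cron(0 5 ? * 1 *)",  # weekly, Sundays
            "Lifecycle": {"DeleteAfterDays": 90},
            "CopyActions": [{"DestinationBackupVaultArn": DEST_VAULT}],
        },
    ],
})

# Route backup-job events to SNS (choice D). BACKUP_JOB_COMPLETED fires
# for every finished job; the SNS subscription can then filter out jobs
# whose status is COMPLETED so only failures raise alerts.
backup.put_backup_vault_notifications(
    BackupVaultName="primary-vault",
    SNSTopicArn="arn:aws:sns:us-east-1:111122223333:backup-alerts",
    BackupVaultEvents=["BACKUP_JOB_COMPLETED", "COPY_JOB_FAILED"],
)
```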






Post your comments and discuss the Amazon AWS-SOLUTIONS-ARCHITECT-PROFESSIONAL exam with other community members:

Zak commented on June 28, 2024
@AppleKid, I managed to pass this exam after failing once. Do not sit for your exam without memorizing these questions. These are what you will see in the real exam.
Anonymous

Apple Kid commented on June 26, 2024
Did anyone take the exam recently? Can you tell us whether these questions are still good?
Anonymous

Captain commented on June 26, 2024
This is so helpful
Anonymous

udaya commented on April 25, 2024
Still learning, but the questions seem to be helpful.
Anonymous

Jerry commented on February 18, 2024
Very good for the exam!!!!
HONG KONG

AWS-Guy commented on February 16, 2024
Precise and to the point. I aced this exam and am now going for the next one. Very grateful to this site and its wonderful content.
CANADA

Jerry commented on February 12, 2024
very good exam stuff
HONG KONG

travis head commented on November 16, 2023
I took the Amazon SAP-C02 test and prepared from this site, as it has the latest mock tests available, which helped me evaluate my performance and score 919/1000.
Anonymous

Weed Flipper commented on October 07, 2020
This is good stuff man.
CANADA

IT-Guy commented on September 29, 2020
The Xengine software is good and free. Too bad it is only in English, with no support for French.
FRANCE

pema commented on August 30, 2019
Can I have the latest version of this exam?
GERMANY

MrSimha commented on February 23, 2019
Thank you
Anonymous

Phil C. commented on November 12, 2018
Too soon to tell, but I will be back to post a review after my exam.
Anonymous

MD EJAZ ALI TANWIR commented on August 20, 2017
This is a valid dump in the US. Thank you guys for providing this.
UNITED STATES

flypig commented on June 02, 2017
These braindumps will shorten my prep time for this exam!
CHINA