Free AWS-SOLUTIONS-ARCHITECT-PROFESSIONAL Exam Braindumps (page: 36)

Page 36 of 134

A weather service provides high-resolution weather maps from a web application hosted on AWS in the eu-west-1 Region. The weather maps are updated frequently and stored in Amazon S3 along with static HTML content. The web application is fronted by Amazon CloudFront.

The company recently expanded to serve users in the us-east-1 Region, and these new users report that viewing their respective weather maps is slow from time to time.

Which combination of steps will resolve the us-east-1 performance issues? (Choose two.)

  A. Configure the AWS Global Accelerator endpoint for the S3 bucket in eu-west-1. Configure endpoint groups for TCP ports 80 and 443 in us-east-1.
  B. Create a new S3 bucket in us-east-1. Configure S3 cross-Region replication to synchronize from the S3 bucket in eu-west-1.
  C. Use Lambda@Edge to modify requests from North America to use the S3 Transfer Acceleration endpoint in us-east-1.
  D. Use Lambda@Edge to modify requests from North America to use the S3 bucket in us-east-1.
  E. Configure the AWS Global Accelerator endpoint for us-east-1 as an origin on the CloudFront distribution. Use Lambda@Edge to modify requests from North America to use the new origin.

Answer(s): B,D

Explanation:

To resolve the performance issues for users in the us-east-1 Region who are experiencing slow access to high-resolution weather maps, the following steps can be taken:
B) Create a new S3 bucket in us-east-1. Configure S3 cross-Region replication to synchronize from the S3 bucket in eu-west-1.
-By creating an S3 bucket in us-east-1 and setting up cross-Region replication, the company ensures that copies of the frequently updated weather maps are stored closer to the new user base. This minimizes latency since users in the us-east-1 Region will be able to access the data directly from their local bucket rather than fetching it from eu-west-1.
D) Use Lambda@Edge to modify requests from North America to use the S3 bucket in us-east-1.
-Implementing Lambda@Edge to modify the requests allows the CloudFront distribution to dynamically route requests to the new S3 bucket in us-east-1 when users in North America access the weather maps. This ensures that requests from users in the us-east-1 Region are served from the geographically closer bucket, reducing latency and improving performance.
These two steps effectively address the performance issues by leveraging AWS infrastructure to optimize content delivery based on user location while ensuring that data is kept up to date across Regions.
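The Lambda@Edge routing in step D can be sketched as an origin-request handler. This is a minimal illustration, not the exam's reference implementation: the bucket names and country list are assumptions, and the CloudFront-Viewer-Country header is only present when it is whitelisted on the distribution.

```python
# Hypothetical Lambda@Edge origin-request handler that reroutes North American
# viewers to the replicated S3 bucket in us-east-1. Bucket names are illustrative.

NORTH_AMERICA = {"US", "CA", "MX"}
US_EAST_1_ORIGIN = "weather-maps-us-east-1.s3.us-east-1.amazonaws.com"

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    # CloudFront adds this header when it is whitelisted on the distribution.
    country = headers.get("cloudfront-viewer-country", [{}])[0].get("value")
    if country in NORTH_AMERICA:
        # Point the request at the replica bucket instead of eu-west-1.
        request["origin"]["s3"]["domainName"] = US_EAST_1_ORIGIN
        headers["host"] = [{"key": "Host", "value": US_EAST_1_ORIGIN}]
    return request
```

Because the weather maps are kept in sync by cross-Region replication (step B), serving North American viewers from the us-east-1 replica changes only the latency, not the content.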



A solutions architect is investigating an issue in which a company cannot establish new sessions in Amazon WorkSpaces. An initial analysis indicates that the issue involves user profiles. The Amazon WorkSpaces environment is configured to use Amazon FSx for Windows File Server as the profile share storage. The FSx for Windows File Server file system is configured with 10 TB of storage.

The solutions architect discovers that the file system has reached its maximum capacity. The solutions architect must ensure that users can regain access. The solution also must prevent the problem from occurring again.

Which solution will meet these requirements?

  A. Remove old user profiles to create space. Migrate the user profiles to an Amazon FSx for Lustre file system.
  B. Increase capacity by using the update-file-system command. Implement an Amazon CloudWatch metric that monitors free space. Use Amazon EventBridge to invoke an AWS Lambda function to increase capacity as required.
  C. Monitor the file system by using the FreeStorageCapacity metric in Amazon CloudWatch. Use AWS Step Functions to increase the capacity as required.
  D. Remove old user profiles to create space. Create an additional FSx for Windows File Server file system. Update the user profile redirection for 50% of the users to use the new file system.

Answer(s): B

Explanation:

To address the issue of the Amazon FSx for Windows File Server file system that stores the Amazon WorkSpaces user profiles reaching its maximum capacity, the solutions architect should take the following steps:
B) Increase capacity by using the update-file-system command. Implement an Amazon CloudWatch metric that monitors free space. Use Amazon EventBridge to invoke an AWS Lambda function to increase capacity as required.
-Increase Capacity: The immediate action is to increase the storage capacity of the FSx for Windows File Server file system using the update-file-system command. This will provide more space for user profiles and allow users to establish new sessions again.
-Monitoring and Automation: Implementing an Amazon CloudWatch metric to monitor the free storage capacity allows the organization to stay informed about the current usage. By setting up an Amazon EventBridge rule that triggers an AWS Lambda function when the available space falls below a specified threshold, the company can automate the process of increasing storage capacity proactively. This prevents similar issues from occurring in the future by ensuring that the file system has enough capacity to accommodate user profiles.
This solution effectively resolves the current access issue while also implementing a monitoring and automation strategy to prevent future capacity-related problems.



An international delivery company hosts a delivery management system on AWS. Drivers use the system to upload confirmation of delivery. Confirmation includes the recipient’s signature or a photo of the package with the recipient. The driver’s handheld device uploads signatures and photos through FTP to a single Amazon EC2 instance. Each handheld device saves a file in a directory based on the signed-in user, and the file name matches the delivery number. The EC2 instance then adds metadata to the file after querying a central database to pull delivery information. The file is then placed in Amazon S3 for archiving.

As the company expands, drivers report that the system is rejecting connections. The FTP server is experiencing dropped connections and memory issues. In response to these problems, a system engineer schedules a cron task to reboot the EC2 instance every 30 minutes. The billing team reports that files are not always in the archive and that the central system is not always updated.

A solutions architect needs to design a solution that maximizes scalability to ensure that the archive always receives the files and that systems are always updated. The handheld devices cannot be modified, so the company cannot deploy a new application.

Which solution will meet these requirements?

  A. Create an AMI of the existing EC2 instance. Create an Auto Scaling group of EC2 instances behind an Application Load Balancer. Configure the Auto Scaling group to have a minimum of three instances.
  B. Use AWS Transfer Family to create an FTP server that places the files in Amazon Elastic File System (Amazon EFS). Mount the EFS volume to the existing EC2 instance. Point the EC2 instance to the new path for file processing.
  C. Use AWS Transfer Family to create an FTP server that places the files in Amazon S3. Use an S3 event notification through Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.
  D. Update the handheld devices to place the files directly in Amazon S3. Use an S3 event notification through Amazon Simple Queue Service (Amazon SQS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.

Answer(s): C

Explanation:

To address the issues with the delivery management system and ensure scalability, reliability, and efficiency, the following solution is recommended:
C) Use AWS Transfer Family to create an FTP server that places the files in Amazon S3. Use an S3 event notification through Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.
1.AWS Transfer Family for FTP: By using AWS Transfer Family, the company can create a fully managed FTP server that scales automatically, eliminating the current issues with dropped connections and memory problems associated with the single EC2 instance. This service can handle multiple concurrent connections, which is crucial as the company expands.
2.Storing Files in Amazon S3: Storing the uploaded files directly in Amazon S3 provides a durable and highly available storage solution. This setup ensures that files are archived reliably without the risk of loss due to instance failures.
3.S3 Event Notifications: Configuring S3 event notifications to trigger an AWS Lambda function whenever a file is uploaded allows for the automation of metadata processing. The Lambda function can be designed to query the central database for delivery information, add the necessary metadata to the file, and update the delivery system. This removes manual intervention and ensures that the system is always updated with the latest delivery confirmations.
This solution meets the requirements for scalability and reliability, while also allowing the existing handheld devices to continue functioning without modification. It addresses the connectivity and file archiving issues effectively.
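The Lambda step in option C can be sketched as follows. The S3 event arrives JSON-encoded inside the SNS message, the delivery number is taken from the file name (per the upload convention in the scenario), and the object is rewritten in place with the metadata attached. The database lookup and central-system update are hypothetical stubs.

```python
# Hedged sketch of the S3 -> SNS -> Lambda metadata processor.
import json
import os

def delivery_number_from_key(key):
    """The uploaded file name matches the delivery number (upload convention)."""
    return os.path.splitext(os.path.basename(key))[0]

def lookup_delivery(delivery_number):
    # Hypothetical stand-in for the query against the central delivery database.
    return {"delivery-number": delivery_number}

def update_delivery_system(delivery_number):
    # Hypothetical stand-in for the central-system update.
    pass

def handler(event, context):
    import boto3  # deferred so the parsing helpers stay unit-testable
    s3 = boto3.client("s3")
    # S3 -> SNS -> Lambda: each SNS record carries the S3 event, JSON-encoded.
    for record in event["Records"]:
        s3_event = json.loads(record["Sns"]["Message"])
        for s3_record in s3_event["Records"]:
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            delivery = delivery_number_from_key(key)
            # Rewrite the object in place with the delivery metadata attached.
            s3.copy_object(
                Bucket=bucket, Key=key,
                CopySource={"Bucket": bucket, "Key": key},
                Metadata=lookup_delivery(delivery),
                MetadataDirective="REPLACE",
            )
            update_delivery_system(delivery)
```

Because S3 and Lambda both scale automatically, this pipeline has no single instance to reboot, which directly removes the failure mode the cron task was papering over.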



A company is running an application in the AWS Cloud. The application runs on containers in an Amazon Elastic Container Service (Amazon ECS) cluster. The ECS tasks use the Fargate launch type. The application's data is relational and is stored in Amazon Aurora MySQL. To meet regulatory requirements, the application must be able to recover to a separate AWS Region in the event of an application failure. In case of a failure, no data can be lost.

Which solution will meet these requirements with the LEAST amount of operational overhead?

  A. Provision an Aurora Replica in a different Region.
  B. Set up AWS DataSync for continuous replication of the data to a different Region.
  C. Set up AWS Database Migration Service (AWS DMS) to perform a continuous replication of the data to a different Region.
  D. Use Amazon Data Lifecycle Manager (Amazon DLM) to schedule a snapshot every 5 minutes.

Answer(s): A

Explanation:

To meet the requirements for disaster recovery and ensure no data loss in the event of an application failure, the recommended solution is:
A) Provision an Aurora Replica in a different Region.
1.Aurora Global Database: Creating an Aurora Replica in a different Region takes advantage of Amazon Aurora's Global Database feature, which allows for low-latency global reads and enables quick recovery from regional failures. This setup allows for automatic replication of data across regions.
2.Continuous Replication: Aurora provides continuous, asynchronous replication to the replica in another Region, typically with sub-second lag, so changes made to the primary database reach the replica almost immediately. This minimal replication lag is what allows the solution to approach the regulatory requirement that no data be lost in case of a failure, and it far outperforms snapshot-based approaches such as option D.
3.Minimal Operational Overhead: Setting up an Aurora Replica in another Region is straightforward and requires minimal ongoing management compared to options like AWS DataSync or AWS DMS, which may involve more complex configuration and monitoring.
4.Rapid Failover: In the event of a failure, failing over to the Aurora Replica can be done quickly through the AWS Management Console or API, facilitating a smooth transition with minimal downtime.
This solution effectively balances the requirements of regulatory compliance with low operational overhead while ensuring high availability and data integrity.
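Provisioning the cross-Region Aurora Replica comes down to a single CreateDBCluster call in the target Region that names the primary cluster as the replication source. The sketch below builds the parameters as a testable helper; the identifiers and ARN are illustrative assumptions.

```python
# Hedged sketch: parameters for creating a cross-Region Aurora Replica with
# the RDS CreateDBCluster API, called in the *target* Region.

def replica_cluster_params(source_cluster_arn, replica_id, kms_key_id=None):
    """Build CreateDBCluster kwargs for a cross-Region Aurora Replica."""
    params = {
        "DBClusterIdentifier": replica_id,
        "Engine": "aurora-mysql",
        # Names the primary cluster; RDS keeps the replica continuously in sync.
        "ReplicationSourceIdentifier": source_cluster_arn,
    }
    if kms_key_id:  # required when the source cluster is encrypted
        params["KmsKeyId"] = kms_key_id
    return params

def create_replica(region, source_cluster_arn, replica_id, kms_key_id=None):
    import boto3
    rds = boto3.client("rds", region_name=region)
    return rds.create_db_cluster(
        **replica_cluster_params(source_cluster_arn, replica_id, kms_key_id)
    )
```

After the replica is created, failover is a matter of promoting it in the target Region, which matches the low-operational-overhead framing of option A.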





