Free SAP-C01 Exam Braindumps (page: 36)

Page 36 of 134

A weather service provides high-resolution weather maps from a web application hosted on AWS in the eu-west-1 Region. The weather maps are updated frequently and stored in Amazon S3 along with static HTML content. The web application is fronted by Amazon CloudFront.

The company recently expanded to serve users in the us-east-1 Region, and these new users report that viewing their respective weather maps is slow from time to time.

Which combination of steps will resolve the us-east-1 performance issues? (Choose two.)

  1. Configure the AWS Global Accelerator endpoint for the S3 bucket in eu-west-1. Configure endpoint groups for TCP ports 80 and 443 in us-east-1.
  2. Create a new S3 bucket in us-east-1. Configure S3 cross-Region replication to synchronize from the S3 bucket in eu-west-1.
  3. Use Lambda@Edge to modify requests from North America to use the S3 Transfer Acceleration endpoint in us-east-1.
  4. Use Lambda@Edge to modify requests from North America to use the S3 bucket in us-east-1.
  5. Configure the AWS Global Accelerator endpoint for us-east-1 as an origin on the CloudFront distribution. Use Lambda@Edge to modify requests from North America to use the new origin.

Answer(s): B,D

Explanation:

To resolve the performance issues for users in the us-east-1 Region who are experiencing slow access to high-resolution weather maps, the following steps can be taken:
B) Create a new S3 bucket in us-east-1. Configure S3 cross-Region replication to synchronize from the S3 bucket in eu-west-1.
- By creating an S3 bucket in us-east-1 and setting up cross-Region replication, the company ensures that copies of the frequently updated weather maps are stored closer to the new user base. This minimizes latency since users in the us-east-1 Region will be able to access the data directly from their local bucket rather than fetching it from eu-west-1.
D) Use Lambda@Edge to modify requests from North America to use the S3 bucket in us-east-1.
- Implementing Lambda@Edge to modify the requests allows the CloudFront distribution to dynamically route requests to the new S3 bucket in us-east-1 when users in North America access the weather maps. This ensures that requests from users in the us-east-1 Region are served from the geographically closer bucket, reducing latency and improving performance.
These two steps effectively address the performance issues by leveraging AWS infrastructure to optimize content delivery based on user location while ensuring that data is kept up to date across Regions.
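The routing step above can be sketched as a Lambda@Edge origin-request handler. This is a minimal sketch, not the exam's reference implementation: the bucket domain names are hypothetical placeholders, and it assumes the CloudFront distribution is configured to forward the CloudFront-Viewer-Country header to the origin request.

```python
# Minimal Lambda@Edge origin-request handler (Python) that reroutes
# North American viewers to a hypothetical replica bucket in us-east-1.
NORTH_AMERICA = {"US", "CA", "MX"}
US_EAST_1_ORIGIN = "weather-maps-us-east-1.s3.us-east-1.amazonaws.com"

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    # CloudFront populates this header only if it is whitelisted
    # on the distribution's origin request policy.
    country = headers.get("cloudfront-viewer-country", [{}])[0].get("value")
    if country in NORTH_AMERICA:
        # Point the S3 origin at the replica bucket in us-east-1.
        request["origin"]["s3"]["domainName"] = US_EAST_1_ORIGIN
        headers["host"] = [{"key": "Host", "value": US_EAST_1_ORIGIN}]
    return request
```

Viewers outside North America fall through unchanged and continue to hit the eu-west-1 origin.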



A solutions architect is investigating an issue in which a company cannot establish new sessions in Amazon WorkSpaces. An initial analysis indicates that the issue involves user profiles. The Amazon WorkSpaces environment is configured to use Amazon FSx for Windows File Server as the profile share storage. The FSx for Windows File Server file system is configured with 10 TB of storage.

The solutions architect discovers that the file system has reached its maximum capacity. The solutions architect must ensure that users can regain access. The solution also must prevent the problem from occurring again.

Which solution will meet these requirements?

  1. Remove old user profiles to create space. Migrate the user profiles to an Amazon FSx for Lustre file system.
  2. Increase capacity by using the update-file-system command. Implement an Amazon CloudWatch metric that monitors free space. Use Amazon EventBridge to invoke an AWS Lambda function to increase capacity as required.
  3. Monitor the file system by using the FreeStorageCapacity metric in Amazon CloudWatch. Use AWS Step Functions to increase the capacity as required.
  4. Remove old user profiles to create space. Create an additional FSx for Windows File Server file system. Update the user profile redirection for 50% of the users to use the new file system.

Answer(s): B

Explanation:

To address the issue of reaching the maximum capacity of the Amazon FSx for Windows File Server file system that is used for user profiles in Amazon Workspaces, the solutions architect should implement the following steps:
B) Increase capacity by using the update-file-system command. Implement an Amazon CloudWatch metric that monitors free space. Use Amazon EventBridge to invoke an AWS Lambda function to increase capacity as required.
- Increase Capacity: The immediate action is to increase the storage capacity of the FSx for Windows File Server file system using the update-file-system command. This will provide more space for user profiles and allow users to establish new sessions again.
- Monitoring and Automation: Implementing an Amazon CloudWatch metric to monitor the free storage capacity allows the organization to stay informed about the current usage. By setting up an Amazon EventBridge rule that triggers an AWS Lambda function when the available space falls below a specified threshold, the company can automate the process of increasing storage capacity proactively. This prevents similar issues from occurring in the future by ensuring that the file system has enough capacity to accommodate user profiles.
This solution effectively resolves the current access issue while also implementing a monitoring and automation strategy to prevent future capacity-related problems.
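The automated remediation described above can be sketched as a small Lambda function. This is a sketch under assumptions: the file system ID is a hypothetical placeholder, and the 20% growth step is an arbitrary choice. One real constraint is reflected in the helper: the FSx API rejects capacity increases smaller than 10% of the current value.

```python
import math

def next_capacity(current_gib, increase_pct=20):
    """Return a new storage capacity in GiB, at least 10% larger than
    the current value (the FSx API rejects smaller increases)."""
    pct = max(increase_pct, 10)
    return math.ceil(current_gib * (100 + pct) / 100)

def handler(event, context):
    # Invoked by EventBridge when the CloudWatch alarm on the
    # FreeStorageCapacity metric fires. File system ID is hypothetical.
    import boto3  # imported here so the pure helper is testable offline
    fsx = boto3.client("fsx")
    fs = fsx.describe_file_systems(
        FileSystemIds=["fs-0123456789abcdef0"])["FileSystems"][0]
    fsx.update_file_system(
        FileSystemId=fs["FileSystemId"],
        StorageCapacity=next_capacity(fs["StorageCapacity"]),
    )
```

For the 10 TB (10,240 GiB) file system in the question, a 20% step would grow it to 12,288 GiB on the first alarm.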



An international delivery company hosts a delivery management system on AWS. Drivers use the system to upload confirmation of delivery. Confirmation includes the recipient’s signature or a photo of the package with the recipient. The driver’s handheld device uploads signatures and photos through FTP to a single Amazon EC2 instance. Each handheld device saves a file in a directory based on the signed-in user, and the file name matches the delivery number. The EC2 instance then adds metadata to the file after querying a central database to pull delivery information. The file is then placed in Amazon S3 for archiving.

As the company expands, drivers report that the system is rejecting connections. The FTP server is experiencing dropped connections and memory issues. In response to these problems, a systems engineer scheduled a cron task to reboot the EC2 instance every 30 minutes. The billing team reports that files are not always in the archive and that the central system is not always updated.

A solutions architect needs to design a solution that maximizes scalability to ensure that the archive always receives the files and that systems are always updated. The handheld devices cannot be modified, so the company cannot deploy a new application.

Which solution will meet these requirements?

  1. Create an AMI of the existing EC2 instance. Create an Auto Scaling group of EC2 instances behind an Application Load Balancer. Configure the Auto Scaling group to have a minimum of three instances.
  2. Use AWS Transfer Family to create an FTP server that places the files in Amazon Elastic File System (Amazon EFS). Mount the EFS volume to the existing EC2 instance. Point the EC2 instance to the new path for file processing.
  3. Use AWS Transfer Family to create an FTP server that places the files in Amazon S3. Use an S3 event notification through Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.
  4. Update the handheld devices to place the files directly in Amazon S3. Use an S3 event notification through Amazon Simple Queue Service (Amazon SQS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.

Answer(s): C

Explanation:

To address the issues with the delivery management system and ensure scalability, reliability, and efficiency, the following solution is recommended:
C) Use AWS Transfer Family to create an FTP server that places the files in Amazon S3. Use an S3 event notification through Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.
1. AWS Transfer Family for FTP: By using AWS Transfer Family, the company can create a fully managed FTP server that scales automatically, eliminating the current issues with dropped connections and memory problems associated with the single EC2 instance. This service can handle multiple concurrent connections, which is crucial as the company expands.
2. Storing Files in Amazon S3: Storing the uploaded files directly in Amazon S3 provides a durable and highly available storage solution. This setup ensures that files are archived reliably without the risk of loss due to instance failures.
3. S3 Event Notifications: Configuring S3 event notifications to trigger an AWS Lambda function whenever a file is uploaded allows for the automation of metadata processing. The Lambda function can be designed to query the central database for delivery information, add the necessary metadata to the file, and update the delivery system. This removes manual intervention and ensures that the system is always updated with the latest delivery confirmations.
This solution meets the requirements for scalability and reliability, while also allowing the existing handheld devices to continue functioning without modification. It addresses the connectivity and file archiving issues effectively.
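The Lambda stage of this pipeline could look roughly like the sketch below. Assumptions are called out in the comments: the key layout `<user>/<delivery-number>.<ext>` follows the directory-per-user description in the question, the SNS envelope is unwrapped per record, and the central-database query is elided.

```python
import json
import posixpath
from urllib.parse import unquote_plus

def delivery_info_from_key(key):
    """Derive the signed-in user and delivery number from an uploaded key.
    Assumes keys look like '<user>/<delivery-number>.<ext>', matching the
    handheld devices' directory-per-user layout."""
    key = unquote_plus(key)
    user, filename = key.split("/", 1)
    delivery_number = posixpath.splitext(posixpath.basename(filename))[0]
    return {"driver": user, "delivery-number": delivery_number}

def handler(event, context):
    import boto3  # imported here so the pure helper is testable offline
    s3 = boto3.client("s3")
    for sns_record in event["Records"]:
        # SNS wraps the original S3 event as a JSON string in Message.
        s3_event = json.loads(sns_record["Sns"]["Message"])
        for record in s3_event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            metadata = delivery_info_from_key(key)
            # ... query the central delivery database here, merge the
            # result into `metadata`, and update the delivery system ...
            # Rewrite the object in place with the metadata attached.
            s3.copy_object(
                Bucket=bucket, Key=key,
                CopySource={"Bucket": bucket, "Key": key},
                Metadata=metadata, MetadataDirective="REPLACE",
            )
```

Rewriting the object via `copy_object` with `MetadataDirective="REPLACE"` is how S3 object metadata is changed after upload, since metadata cannot be edited in place.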



A company is running an application in the AWS Cloud. The application runs on containers in an Amazon Elastic Container Service (Amazon ECS) cluster. The ECS tasks use the Fargate launch type. The application's data is relational and is stored in Amazon Aurora MySQL. To meet regulatory requirements, the application must be able to recover to a separate AWS Region in the event of an application failure. In case of a failure, no data can be lost.

Which solution will meet these requirements with the LEAST amount of operational overhead?

  1. Provision an Aurora Replica in a different Region.
  2. Set up AWS DataSync for continuous replication of the data to a different Region.
  3. Set up AWS Database Migration Service (AWS DMS) to perform a continuous replication of the data to a different Region.
  4. Use Amazon Data Lifecycle Manager (Amazon DLM) to schedule a snapshot every 5 minutes.

Answer(s): A

Explanation:

To meet the requirements for disaster recovery and ensure no data loss in the event of an application failure, the recommended solution is:
A) Provision an Aurora Replica in a different Region.
1. Aurora Global Database: Creating an Aurora Replica in a different Region takes advantage of Amazon Aurora's Global Database feature, which allows for low-latency global reads and enables quick recovery from regional failures. This setup allows for automatic replication of data across Regions.
2. Continuous Replication: Aurora provides continuous, asynchronous replication to the replica in another Region, ensuring that changes made to the primary database are replicated with minimal lag. This feature helps in achieving the regulatory requirement that no data can be lost in case of a failure.
3. Minimal Operational Overhead: Setting up an Aurora Replica in another Region is straightforward and requires minimal ongoing management compared to options like AWS DataSync or AWS DMS, which may involve more complex configuration and monitoring.
4. Rapid Failover: In the event of a failure, failing over to the Aurora Replica can be done quickly through the AWS Management Console or API, facilitating a smooth transition with minimal downtime.
This solution effectively balances the requirements of regulatory compliance with low operational overhead while ensuring high availability and data integrity.
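The two API calls behind this pattern can be sketched with boto3. All identifiers below are hypothetical placeholders; this only shows the shape of the requests, not a production runbook.

```python
def global_cluster_request(global_id, source_cluster_arn):
    """Build the parameters for rds.create_global_cluster, which turns an
    existing Aurora cluster into the primary of a global database."""
    return {
        "GlobalClusterIdentifier": global_id,
        "SourceDBClusterIdentifier": source_cluster_arn,
    }

def promote_secondary(global_id, secondary_cluster_arn):
    """On a regional failure, detach the secondary cluster so it can
    accept writes in its own Region."""
    import boto3  # imported here so the builder above is testable offline
    rds = boto3.client("rds")
    rds.remove_from_global_cluster(
        GlobalClusterIdentifier=global_id,
        DbClusterIdentifier=secondary_cluster_arn,
    )
```

After `create_global_cluster`, a secondary cluster is added in the recovery Region (for example via `create_db_cluster` with the same `GlobalClusterIdentifier`), and Aurora handles the cross-Region replication from there.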





