Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 15)

Updated On: 18-Mar-2026

A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3. Company policy requires the files to be stored for 4 years before they can be deleted. Immediate accessibility is always required as the files contain critical business data that is not easy to reproduce. The files are frequently accessed in the first 30 days of the object creation but are rarely accessed after the first 30 days.
Which storage solution is MOST cost-effective?

  1. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object creation. Delete the files 4 years after object creation.
  2. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from object creation. Delete the files 4 years after object creation.
  3. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Delete the files 4 years after object creation.
  4. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Move the files to S3 Glacier 4 years after object creation.

Answer(s): C

Explanation:

A) Incorrect. Glacier does not provide immediate retrieval; transitioning after 30 days is fine, but Glacier retrievals incur latency (minutes to hours), which conflicts with the requirement that the files always be immediately accessible.
B) Incorrect. S3 One Zone-IA stores data in a single Availability Zone, so it has lower resilience; the loss of that AZ could destroy critical data that is not easy to reproduce.
C) Correct. S3 Standard-IA offers lower storage costs after the first 30 days while keeping objects immediately retrievable, matching both the 4-year retention and the immediate-access requirement.
D) Incorrect. Company policy allows the files to be deleted after 4 years; moving them to Glacier instead of deleting them means continuing to pay for storage that is no longer needed.
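The lifecycle policy in option C can be expressed as an S3 lifecycle configuration. A minimal sketch in Python, building the configuration as a plain dictionary (the rule ID is a placeholder, and applying it would require boto3 and credentials):

```python
# Hypothetical sketch of the lifecycle configuration for option C.
# Objects move to Standard-IA after 30 days and expire after 4 years.

lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire",     # placeholder rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},        # apply to every object
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
            # 4 years expressed in days (ignoring leap days)
            "Expiration": {"Days": 4 * 365},
        }
    ]
}

# With boto3 this could be applied as:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket",               # placeholder bucket
#     LifecycleConfiguration=lifecycle_configuration,
# )
```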



A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?

  1. Use the CreateQueue API call to create a new queue.
  2. Use the AddPermission API call to add appropriate permissions.
  3. Use the ReceiveMessage API call to set an appropriate wait time.
  4. Use the ChangeMessageVisibility API call to increase the visibility timeout.

Answer(s): D

Explanation:

The correct answer is D. Duplicates appear when the visibility timeout is shorter than the processing time: the message becomes visible again before the first consumer deletes it, so a second consumer receives and processes the same message. Increasing the visibility timeout with ChangeMessageVisibility keeps the message hidden from other consumers for the full processing window.
A is incorrect because CreateQueue creates a new queue and does not address duplicate processing. B is incorrect because AddPermission manages access rights, not deduplication. C is incorrect because the wait time in ReceiveMessage controls long polling, which reduces empty responses but does not prevent two consumers from processing the same message.
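The visibility-timeout semantics can be illustrated with a small in-memory simulation (this is a toy model of the delivery behavior, not the SQS API): a received message stays invisible to other consumers until the timeout elapses, so a timeout longer than the processing time prevents redelivery mid-processing.

```python
class SimulatedQueue:
    """Toy model of SQS visibility-timeout semantics (not the real API)."""

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = []   # each entry: {"body": ..., "invisible_until": ...}
        self.clock = 0       # logical time in seconds

    def send(self, body):
        self.messages.append({"body": body, "invisible_until": 0})

    def receive(self):
        # Deliver the first message that is currently visible.
        for msg in self.messages:
            if msg["invisible_until"] <= self.clock:
                msg["invisible_until"] = self.clock + self.visibility_timeout
                return msg["body"]
        return None

    def advance(self, seconds):
        self.clock += seconds

# Processing takes 40s. With a 30s timeout the message is redelivered
# mid-processing (a duplicate); with a 60s timeout it is not.
short = SimulatedQueue(visibility_timeout=30)
short.send("order-123")
first = short.receive()          # consumer 1 starts processing
short.advance(40)                # consumer 1 is still processing
redelivered = short.receive()    # consumer 2 polls -> duplicate delivery

long_timeout = SimulatedQueue(visibility_timeout=60)
long_timeout.send("order-123")
long_timeout.receive()
long_timeout.advance(40)
not_redelivered = long_timeout.receive()  # still invisible -> None
```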



A solutions architect is designing a new hybrid architecture to extend a company's on-premises infrastructure to AWS. The company requires a highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails.
What should the solutions architect do to meet these requirements?

  1. Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection fails.
  2. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a backup if the primary VPN connection fails.
  3. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if the primary Direct Connect connection fails.
  4. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct Connect connection fails.

Answer(s): A

Explanation:

A) Correct: Primary Direct Connect for low latency and high availability; VPN as backup ensures connectivity if DX fails, meeting high availability with lower cost by using VPN over the public internet during failover.
B) Incorrect: A VPN-only design cannot provide the consistent low latency required of the primary connection, because VPN traffic traverses the public internet.
C) Incorrect: A second Direct Connect connection does provide high availability, but at significantly higher cost; the company is willing to accept slower traffic on failover, so the cheaper VPN backup is preferred.
D) Incorrect: There is no "Direct Connect failover" attribute in the AWS CLI that automatically creates a backup connection; an independent backup path (such as a VPN) must be provisioned explicitly.
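In option A the failover itself is handled by route preference (routing normally prefers the Direct Connect path, and the VPN route takes over only when Direct Connect is down), not by application code. That preference can be sketched as a simple selection function:

```python
# Hypothetical sketch of the path-selection behavior in option A.
# Real failover is performed by BGP route preference; this only models
# "prefer Direct Connect, fall back to VPN".

def select_path(direct_connect_up: bool, vpn_up: bool) -> str:
    """Return the active path to the AWS Region."""
    if direct_connect_up:
        return "direct-connect"   # consistent low latency (primary)
    if vpn_up:
        return "vpn"              # slower, over the public internet
    return "unreachable"

# While Direct Connect is healthy, all traffic takes the low-latency path.
active = select_path(direct_connect_up=True, vpn_up=True)
```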



A company is running a business-critical web application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances are in an Auto Scaling group. The application uses an Amazon Aurora PostgreSQL database that is deployed in a single Availability Zone. The company wants the application to be highly available with minimum downtime and minimum loss of data.
Which solution will meet these requirements with the LEAST operational effort?

  1. Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora PostgreSQL Cross-Region Replication.
  2. Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ. Configure an Amazon RDS Proxy instance for the database.
  3. Configure the Auto Scaling group to use one Availability Zone. Generate hourly snapshots of the database. Recover the database from the snapshots in the event of a failure.
  4. Configure the Auto Scaling group to use multiple AWS Regions. Write the data from the application to Amazon S3. Use S3 Event Notifications to launch an AWS Lambda function to write the data to the database.

Answer(s): B

Explanation:

B) Correct: An Auto Scaling group spanning multiple Availability Zones combined with a Multi-AZ database provides high availability with minimal operational effort. Multi-AZ protects the database from AZ-level failures, and RDS Proxy manages connection pooling and failover handling for the application behind the ALB.
A) Incorrect: A multi-Region design adds cross-Region complexity and latency, and Aurora Cross-Region Replication is suited to disaster recovery rather than in-Region high availability.
C) Incorrect: A single Availability Zone with hourly snapshots gives poor availability and up to an hour of data loss.
D) Incorrect: Writing through Amazon S3 and AWS Lambda adds latency and complexity and does not make the database itself highly available.
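During a Multi-AZ failover, RDS Proxy keeps the application-facing endpoint stable, so the application typically only needs brief connection retries. A hedged sketch of such a retry wrapper, where the connect callable is a placeholder for a real driver call (e.g. a PostgreSQL driver's connect):

```python
import time

def connect_with_retry(connect, retries=5, backoff=0.5):
    """Retry a database connection, e.g. while RDS Proxy completes a
    Multi-AZ failover. `connect` is any callable returning a connection
    object (placeholder for a real driver call)."""
    for attempt in range(retries):
        try:
            return connect()
        except ConnectionError:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

# Simulated driver: fails twice (failover in progress), then succeeds.
attempts = {"n": 0}

def fake_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("failover in progress")
    return "connection-object"

conn = connect_with_retry(fake_connect, backoff=0.01)
```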



A company's HTTP application is behind a Network Load Balancer (NLB). The NLB's target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service.
The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application's availability without writing custom scripts or code.
What should a solutions architect do to meet these requirements?

  1. Enable HTTP health checks on the NLB, supplying the URL of the company's application.
  2. Add a cron job to the EC2 instances to check the local application's logs once each minute. If HTTP errors are detected, the application will restart.
  3. Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company's application. Configure an Auto Scaling action to replace unhealthy instances.
  4. Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to replace unhealthy instances when the alarm is in the ALARM state.

Answer(s): C

Explanation:

The correct answer is C.
C) Correct: Replacing the NLB with an ALB allows HTTP health checks against a URL path into the application, so application-level errors mark targets as unhealthy, and the Auto Scaling group then replaces unhealthy instances automatically without any custom code.
A) Incorrect: Enabling health checks on the NLB only stops traffic from being routed to unhealthy targets; nothing replaces the instances, which currently require a manual restart.
B) Incorrect: A cron job requires writing and maintaining custom scripts, contradicting the requirement to avoid custom scripts or code.
D) Incorrect: The UnhealthyHostCount metric reflects the NLB's own health checks, which are not detecting the application's HTTP errors in the first place, so the alarm would not fire when it is needed.
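The ALB target group in option C carries the HTTP health check with a path into the application. A sketch of the relevant settings as a parameter dictionary (names follow boto3's elbv2 `create_target_group` call; the group name, path, and VPC ID are placeholders):

```python
# Hypothetical sketch of the ALB target-group health-check settings for
# option C. Name, path, and VPC ID are placeholders.

target_group_params = {
    "Name": "web-service-tg",
    "Protocol": "HTTP",
    "Port": 80,
    "VpcId": "vpc-0123456789abcdef0",     # placeholder
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPath": "/health",         # URL path into the application
    "HealthCheckIntervalSeconds": 30,
    "HealthyThresholdCount": 2,
    "UnhealthyThresholdCount": 3,
    "Matcher": {"HttpCode": "200"},       # only HTTP 200 counts as healthy
}

# boto3.client("elbv2").create_target_group(**target_group_params)
# The Auto Scaling group, configured to use ELB health checks, then
# terminates and replaces instances that fail this check.
```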



Viewing page 15 of 205
Viewing questions 71 - 75 out of 824 questions


