Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 24)

Updated On: 18-Mar-2026

A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low operational complexity.
Which architecture offers the HIGHEST availability?

  A. Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
  B. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
  C. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
  D. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.

Answer(s): D

Explanation:

Amazon MQ with active/standby brokers across two AZs, an Auto Scaling group for the consumer EC2 instances across two AZs, and Multi-AZ RDS provide built-in high availability at the broker, compute, and database layers, with automatic failover and recovery and low operational complexity.
A) Requires manual broker and database replication across AZs; higher complexity and potential single points of failure.
B) Moves to managed Amazon MQ and adds a consumer, but still relies on self-managed MySQL on EC2; no managed database high availability, so operational burden remains.
C) Adds Multi-AZ RDS, but without an Auto Scaling group the consumer layer has no automatic replacement if an instance fails.
D) Correct: fully managed, highly available architecture at every layer, with Auto Scaling and Multi-AZ RDS minimizing maintenance and ensuring failover.
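The three managed layers in option D can be sketched as a CloudFormation fragment. This is an abbreviated illustration, not a deployable template: the broker name, instance sizes, subnet IDs, and the referenced launch template are placeholders, and networking, security groups, and credentials are omitted.

```yaml
Resources:
  MessageBroker:                       # replaces the self-managed ActiveMQ on EC2
    Type: AWS::AmazonMQ::Broker
    Properties:
      BrokerName: orders-broker        # illustrative name
      EngineType: ACTIVEMQ
      DeploymentMode: ACTIVE_STANDBY_MULTI_AZ   # active/standby across two AZs
      HostInstanceType: mq.m5.large
      PubliclyAccessible: false
      Users:
        - Username: app
          Password: "{{resolve:secretsmanager:mq-credentials}}"  # placeholder secret

  ConsumerGroup:                       # consumers span two AZs and self-heal
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "6"
      VPCZoneIdentifier:
        - subnet-az1a                  # placeholder subnet IDs, one per AZ
        - subnet-az1b
      LaunchTemplate:
        LaunchTemplateId: !Ref ConsumerLaunchTemplate   # assumed to exist
        Version: !GetAtt ConsumerLaunchTemplate.LatestVersionNumber

  ResultsDatabase:                     # managed MySQL with a synchronous standby
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      MultiAZ: true
      DBInstanceClass: db.m5.large
      AllocatedStorage: "100"
```

Failover at each layer is then automatic: the standby broker, a replacement consumer instance, or the standby database takes over without manual intervention.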



A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is growing quickly. The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS with minimum code changes and minimum development effort.
Which solution will meet these requirements with the LEAST operational overhead?

  A. Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute the incoming requests.
  B. Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming requests.
  C. Use AWS Lambda with a new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use Amazon API Gateway as an entry point to the Lambda functions.
  D. Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming requests at the appropriate scale.

Answer(s): A

Explanation:

AWS Fargate with ECS and Service Auto Scaling minimizes operational overhead by running containerized workloads without managing servers, while an Application Load Balancer elastically distributes traffic.
A) Correct: Fargate eliminates server provisioning and management; ECS handles container orchestration; Auto Scaling adapts to demand; ALB provides Layer 7 routing for HTTP(S).
B) Requires managing EC2 instances and capacity planning; more maintenance than Fargate; scaling must be handled at OS/container level.
C) Lambda requires rewriting the application as stateless functions in a supported language, contradicting the minimum-code-change requirement; it is not suited to an existing long-running containerized web app, and API Gateway adds another component to manage.
D) HPC clusters are inappropriate for web request scaling; intended for compute-heavy batch workloads; high operational overhead.
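Option A's pieces map to a small set of resources. A hedged CloudFormation sketch, assuming the ECS cluster, task definition, and ALB target group are defined elsewhere and the subnet IDs are placeholders:

```yaml
Resources:
  WebService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref WebCluster             # assumed existing ECS cluster
      LaunchType: FARGATE                  # no servers to provision or patch
      DesiredCount: 2
      TaskDefinition: !Ref WebTaskDef      # the existing container image, unchanged
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets: [subnet-az1a, subnet-az1b]   # placeholders, two AZs
      LoadBalancers:
        - ContainerName: web
          ContainerPort: 80
          TargetGroupArn: !Ref WebTargetGroup   # behind the Application Load Balancer

  ScalingTarget:                           # registers the service with Service Auto Scaling
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      ServiceNamespace: ecs
      ScalableDimension: ecs:service:DesiredCount
      ResourceId: !Sub service/${WebCluster}/${WebService.Name}
      MinCapacity: 2
      MaxCapacity: 10

  ScalingPolicy:                           # track average CPU; target value is illustrative
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: cpu-target-tracking
      PolicyType: TargetTrackingScaling
      ScalingTargetId: !Ref ScalingTarget
      TargetTrackingScalingPolicyConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ECSServiceAverageCPUUtilization
        TargetValue: 60
```

With target tracking, the service adds or removes Fargate tasks as request load changes, with no instance-level scaling to manage.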



A company uses 50 TB of data for reporting. The company wants to move this data from on premises to AWS. A custom application in the company’s data center runs a weekly data transformation job. The company plans to pause the application until the data transfer is complete and needs to begin the transfer process as soon as possible.
The data center does not have any available network bandwidth for additional workloads. A solutions architect must transfer the data and must configure the transformation job to continue to run in the AWS Cloud.
Which solution will meet these requirements with the LEAST operational overhead?

  A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue.
  B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.
  C. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS Glue.
  D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2 instance on AWS to run the transformation application.

Answer(s): C

Explanation:

The correct answer is C. A Snowball Edge Storage Optimized device can move the 50 TB offline, satisfying the constraint that no network bandwidth is available, and the transfer can begin as soon as the device arrives. Once the data is imported into Amazon S3, AWS Glue runs the transformation as a serverless job, which gives the least operational overhead.
A is incorrect because AWS DataSync transfers data over the network, and the data center has no spare bandwidth.
B is incorrect because AWS Snowcone's capacity (on the order of 8-14 TB per device) is far below 50 TB, and running the transformation on the device does not move the job into the AWS Cloud.
D is incorrect because running the transformation application on a self-managed EC2 instance adds patching and capacity management; the serverless Glue job in option C carries less overhead.
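Once the Snowball import lands the data in S3, the weekly transformation in option C could be expressed as a scheduled AWS Glue job. A minimal sketch, where the job name, IAM role, script location, and cron schedule are all assumptions:

```yaml
Resources:
  WeeklyTransform:
    Type: AWS::Glue::Job
    Properties:
      Name: weekly-report-transform                 # illustrative
      Role: !Ref GlueJobRole                        # assumed IAM role
      GlueVersion: "4.0"
      Command:
        Name: glueetl
        ScriptLocation: s3://reporting-bucket/scripts/transform.py   # placeholder

  WeeklySchedule:
    Type: AWS::Glue::Trigger
    Properties:
      Name: weekly-schedule
      Type: SCHEDULED
      Schedule: cron(0 3 ? * MON *)                 # weekly, mirroring the on-premises cadence
      StartOnCreation: true
      Actions:
        - JobName: !Ref WeeklyTransform
```

Because Glue is serverless, nothing has to be operated or paid for between the weekly runs.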



A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.
Which solution meets these requirements?

  A. Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.
  B. Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.
  C. Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
  D. Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store the photos and metadata.

Answer(s): C

Explanation:

AWS Lambda for processing, Amazon S3 for the photos, and DynamoDB for the metadata provide serverless, scalable compute that automatically handles widely varying concurrency with no EC2 capacity planning. S3 offloads object storage and pairs naturally with event-driven Lambda processing to apply the requested frames.
A) DynamoDB is not suited to storing the photos themselves; its 400 KB item size limit rules out most images, and this option has no scalable object store.
B) Kinesis Data Firehose is for streaming data delivery, not for on-demand photo processing and metadata association.
C) Correct: Lambda + S3 + DynamoDB delivers scalable, event-driven processing with proper separation of object data and metadata.
D) Scaling EC2 with io2 EBS volumes means manual capacity planning and is neither as cost-efficient nor as elastic as the serverless option.
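The event-driven flow in option C might look like the following abbreviated template. The bucket, code location, handler, and IAM role are placeholders, and the Lambda invoke permission for S3 is omitted for brevity:

```yaml
Resources:
  PhotoBucket:
    Type: AWS::S3::Bucket
    Properties:
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: s3:ObjectCreated:*        # each upload triggers frame processing
            Function: !GetAtt FramePhotos.Arn

  FramePhotos:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: app.handler                   # illustrative handler
      Role: !GetAtt FrameFunctionRole.Arn    # assumed IAM role
      Code:
        S3Bucket: code-bucket                # placeholder code bucket
        S3Key: frames.zip

  MetadataTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST           # on-demand capacity absorbs spiky traffic
      AttributeDefinitions:
        - AttributeName: imageId
          AttributeType: S
      KeySchema:
        - AttributeName: imageId
          KeyType: HASH
```

Lambda concurrency and DynamoDB on-demand capacity both scale with upload volume automatically, matching the varying time-of-day and day-of-week load.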



A medical records company is hosting an application on Amazon EC2 instances. The application processes customer data files that are stored on Amazon S3. The EC2 instances are hosted in public subnets. The EC2 instances access Amazon S3 over the internet, but they do not require any other network access.
A new requirement mandates that the network traffic for file transfers take a private route and not be sent over the internet.
Which change to the network architecture should a solutions architect recommend to meet this requirement?

  A. Create a NAT gateway. Configure the route table for the public subnets to send traffic to Amazon S3 through the NAT gateway.
  B. Configure the security group for the EC2 instances to restrict outbound traffic so that only traffic to the S3 prefix list is permitted.
  C. Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S3, and link the endpoint to the route table for the private subnets.
  D. Remove the internet gateway from the VPC. Set up an AWS Direct Connect connection, and route traffic to Amazon S3 over the Direct Connect connection.

Answer(s): C

Explanation:

Placing the EC2 instances in private subnets and creating a gateway VPC endpoint for Amazon S3, associated with the private subnets' route table, keeps the S3 traffic entirely on the AWS network.
A) A NAT gateway still sends traffic out through the internet gateway; the path to S3 remains public.
B) Restricting security group egress to the S3 prefix list limits the destinations, but does not change the network path; traffic still traverses the internet.
C) Correct: a gateway endpoint provides private connectivity to S3 with no internet exposure.
D) Direct Connect links an on-premises network to AWS; it does not by itself provide private EC2-to-S3 connectivity within the VPC, and it is unnecessary and costly here.
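Option C amounts to one extra resource plus moving the instances. A minimal sketch, assuming the VPC and the private subnets' route table already exist under the placeholder names shown:

```yaml
Resources:
  S3GatewayEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref AppVpc                                # assumed existing VPC
      ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
      VpcEndpointType: Gateway                          # gateway endpoints for S3 carry no charge
      RouteTableIds:
        - !Ref PrivateRouteTable                        # the private subnets' route table
```

Creating the endpoint adds a route for the S3 prefix list to that route table, so the instances reach S3 without an internet gateway or NAT device in the path.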


