Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 30)

Updated On: 20-Mar-2026

A company runs a web-based portal that provides users with global breaking news, local alerts, and weather updates. The portal delivers each user a personalized view by using a mixture of static and dynamic content. Content is served over HTTPS through an API server running on an Amazon EC2 instance behind an Application Load Balancer (ALB). The company wants the portal to provide this content to its users across the world as quickly as possible.
How should a solutions architect design the application to ensure the LEAST amount of latency for all users?

  1. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve all static and dynamic content by specifying the ALB as an origin.
  2. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 latency routing policy to serve all content from the ALB in the closest Region.
  3. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve the static content. Serve the dynamic content directly from the ALB.
  4. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 geolocation routing policy to serve all content from the ALB in the closest Region.

Answer(s): A

Explanation:

CloudFront with the ALB as the origin minimizes latency for both static and dynamic content: static content is cached at edge locations, and dynamic requests travel to the ALB over CloudFront's optimized global network. Although option A uses a single Region, edge caching and persistent connections back to the origin reduce round trips and speed delivery for users worldwide.
B) Multi-Region deployment with latency-based routing adds cost and complexity, and it still provides no edge caching for static content.
C) Caching only the static content leaves every dynamic request to travel directly to the ALB, increasing latency for distant users.
D) Geolocation routing duplicates the stack across Regions and provides no edge caching or network-path optimization for dynamic content.
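As a sketch of option A, the distribution can apply different cache behaviors to static and dynamic paths while using the one ALB as origin. The dict below is a hypothetical, trimmed fragment of the kind of `DistributionConfig` you would pass to boto3's `cloudfront.create_distribution`; the ALB DNS name and the cache-policy IDs are placeholders, not real values.

```python
# Hypothetical, trimmed CloudFront DistributionConfig: one ALB origin,
# static paths cached at the edge, dynamic paths passed through.
# In practice:  boto3.client("cloudfront").create_distribution(DistributionConfig=config)

ALB_ORIGIN = "portal-alb-123456789.us-east-1.elb.amazonaws.com"  # placeholder DNS name

config = {
    "CallerReference": "portal-2026-03-20",
    "Comment": "Global news portal - ALB origin",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "portal-alb",
            "DomainName": ALB_ORIGIN,
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "https-only",  # keep HTTPS to the origin
            },
        }],
    },
    # Dynamic (personalized) content: not cached, but still delivered over
    # CloudFront's optimized network path back to the ALB.
    "DefaultCacheBehavior": {
        "TargetOriginId": "portal-alb",
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": "CACHING-DISABLED-POLICY-ID",  # placeholder for a managed policy
    },
    # Static assets: cached at edge locations worldwide.
    "CacheBehaviors": {
        "Quantity": 1,
        "Items": [{
            "PathPattern": "/static/*",
            "TargetOriginId": "portal-alb",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": "CACHING-OPTIMIZED-POLICY-ID",  # placeholder for a managed policy
        }],
    },
}
```

The key design point is that a single origin serves both behaviors; only the cache policy differs by path pattern.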



A gaming company is designing a highly available architecture. The application runs on a modified Linux kernel and supports only UDP-based traffic. The company needs the front-end tier to provide the best possible user experience. That tier must have low latency, route traffic to the nearest edge location, and provide static IP addresses for entry into the application endpoints.
What should a solutions architect do to meet these requirements?

  1. Configure Amazon Route 53 to forward requests to an Application Load Balancer. Use AWS Lambda for the application in AWS Application Auto Scaling.
  2. Configure Amazon CloudFront to forward requests to a Network Load Balancer. Use AWS Lambda for the application in an AWS Application Auto Scaling group.
  3. Configure AWS Global Accelerator to forward requests to a Network Load Balancer. Use Amazon EC2 instances for the application in an EC2 Auto Scaling group.
  4. Configure Amazon API Gateway to forward requests to an Application Load Balancer. Use Amazon EC2 instances for the application in an EC2 Auto Scaling group.

Answer(s): C

Explanation:

AWS Global Accelerator supports UDP traffic, provides two static anycast IP addresses, and carries user traffic over the AWS global network from the nearest edge location, which matches the requirements for UDP, low latency, and static entry points. It can forward to a Network Load Balancer, which operates at Layer 4 and handles TCP/UDP at high performance. Using EC2 instances in an Auto Scaling group preserves control over the modified Linux kernel and UDP handling.
A) Route 53 with an ALB is HTTP/HTTPS focused; an ALB operates at Layer 7 and cannot serve UDP or provide static edge IPs.
B) CloudFront is HTTP/HTTPS only and does not support UDP traffic, regardless of the load balancer behind it.
D) API Gateway targets HTTP APIs and is not suited to UDP or static edge IPs; pairing it with an ALB does not change that.
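A minimal sketch of option C, assuming hypothetical names, ports, and ARNs: each dict mirrors the parameters of the corresponding boto3 `globalaccelerator` call (`create_accelerator`, `create_listener`, `create_endpoint_group`).

```python
# Hypothetical Global Accelerator front end for the UDP game traffic.
# ARNs, ports, and the Region below are placeholders.

NLB_ARN = (
    "arn:aws:elasticloadbalancing:us-west-2:111122223333:"
    "loadbalancer/net/game-nlb/abc123"  # placeholder NLB ARN
)

# The accelerator assigns static anycast IP addresses as the fixed entry points.
accelerator_params = {
    "Name": "game-front-end",
    "IpAddressType": "IPV4",
    "Enabled": True,
}

# A listener on the accelerator accepts UDP, which CloudFront and ALB cannot.
listener_params = {
    "Protocol": "UDP",
    "PortRanges": [{"FromPort": 4000, "ToPort": 4000}],  # placeholder game port
}

# The endpoint group forwards traffic entering the nearest edge location
# to the Network Load Balancer in front of the EC2 Auto Scaling group.
endpoint_group_params = {
    "EndpointGroupRegion": "us-west-2",
    "EndpointConfigurations": [{"EndpointId": NLB_ARN, "Weight": 128}],
}
```

Adding a second endpoint group in another Region would give multi-Region failover behind the same two static IPs.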



A company wants to migrate its existing on-premises monolithic application to AWS. The company wants to keep as much of the front-end code and the backend code as possible. However, the company wants to break the application into smaller applications. A different team will manage each application. The company needs a highly scalable solution that minimizes operational overhead.
Which solution will meet these requirements?

  1. Host the application on AWS Lambda. Integrate the application with Amazon API Gateway.
  2. Host the application with AWS Amplify. Connect the application to an Amazon API Gateway API that is integrated with AWS Lambda.
  3. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2 instances in an Auto Scaling group as targets.
  4. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer with Amazon ECS as the target.

Answer(s): D

Explanation:

The correct answer is D. ECS with an Application Load Balancer scales containers and supports microservices architecture, enabling multiple teams to own distinct services while minimizing operational overhead through managed orchestration and auto scaling.
A) Lambda would require substantial refactoring of the existing front-end and backend code; stateful or long-running components and cold-start latency make lifting the monolith into functions impractical.
B) Amplify is primarily for front-end web/mobile apps; it doesn’t natively manage diverse backend microservices as a scalable, multi-team orchestration solution.
C) EC2 with ASG provides full control but introduces significant operational overhead compared to managed containers for breaking into smaller services.
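To illustrate option D, each team can own one ECS service registered with its own ALB target group, so the teams deploy independently while one load balancer fronts everything. The helper below builds hypothetical `ecs.create_service` parameter dicts; the cluster, service names, and ARNs are placeholders.

```python
# Hypothetical sketch: each team runs its slice of the former monolith as
# its own ECS service behind its own ALB target group (all names/ARNs are
# placeholders). Each dict mirrors parameters for boto3 ecs.create_service.

def team_service(name, target_group_arn, container_port=8080):
    """Build create_service parameters for one team's microservice."""
    return {
        "cluster": "portal-cluster",
        "serviceName": name,
        "taskDefinition": f"{name}-task",  # container image reuses the existing code
        "desiredCount": 2,
        "launchType": "FARGATE",           # managed capacity: no servers to patch
        "loadBalancers": [{
            "targetGroupArn": target_group_arn,
            "containerName": name,
            "containerPort": container_port,
        }],
    }

# One ALB with path-based listener rules (e.g. /orders/*, /catalog/*) routes
# each path to the matching target group, so teams ship on their own schedule.
orders = team_service("orders", "arn:aws:elasticloadbalancing:placeholder:targetgroup/orders/111")
catalog = team_service("catalog", "arn:aws:elasticloadbalancing:placeholder:targetgroup/catalog/222")
```

The split preserves the existing front-end and backend code inside containers while the ALB and ECS handle scaling and routing.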



A company recently started using Amazon Aurora as the data store for its global ecommerce application. When large reports are run, developers report that the ecommerce application is performing poorly. After reviewing metrics in Amazon CloudWatch, a solutions architect finds that the ReadIOPS and CPUUtilization metrics are spiking when monthly reports run.
What is the MOST cost-effective solution?

  1. Migrate the monthly reporting to Amazon Redshift.
  2. Migrate the monthly reporting to an Aurora Replica.
  3. Migrate the Aurora database to a larger instance class.
  4. Increase the Provisioned IOPS on the Aurora instance.

Answer(s): B

Explanation:

The correct answer is B. An Aurora Replica offloads the read-intensive reporting queries from the primary instance, reducing ReadIOPS and CPUUtilization on the primary. It is cost-effective because Aurora Replicas attach to the cluster's shared storage volume and can be added without re-architecting the application.
A) Redshift is a separate analytics-oriented data warehouse; migrating adds cost and development effort beyond what the problem requires.
C) Scaling the Aurora instance class may help, but it increases cost and still concentrates load on a single primary; it’s less cost-efficient than distributing reads.
D) Increasing Provisioned IOPS on the primary could reduce latency but at higher ongoing cost and doesn’t specifically offload read workloads.
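A sketch of option B, with hypothetical identifiers and endpoints: the replica is added to the existing cluster (parameters mirror boto3 `rds.create_db_instance`), and only the reporting job is repointed at the cluster's read-only endpoint.

```python
# Hypothetical sketch: add an Aurora Replica and send monthly reports to the
# cluster's reader endpoint so heavy reads never touch the writer.
# Identifiers, instance class, and endpoints below are placeholders.

replica_params = {
    "DBInstanceIdentifier": "ecommerce-reporting-replica",
    "DBClusterIdentifier": "ecommerce-cluster",  # joins the existing Aurora cluster
    "DBInstanceClass": "db.r6g.large",
    "Engine": "aurora-mysql",
    # Aurora replicas share the cluster storage volume, so no second
    # copy of the data is provisioned.
}

# The application keeps using the writer endpoint; only reporting moves.
WRITER_ENDPOINT = "ecommerce-cluster.cluster-abc.us-east-1.rds.amazonaws.com"     # placeholder
READER_ENDPOINT = "ecommerce-cluster.cluster-ro-abc.us-east-1.rds.amazonaws.com"  # placeholder

def reporting_dsn(user="report_user", db="shop"):
    """Connection string for the read-only reporting workload."""
    return f"mysql://{user}@{READER_ENDPOINT}:3306/{db}"
```

Pointing reports at the reader endpoint also means Aurora load-balances them across replicas if more are added later.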



A company hosts a website analytics application on a single Amazon EC2 On-Demand Instance. The analytics software is written in PHP and uses a MySQL database. The analytics software, the web server that provides PHP, and the database server are all hosted on the EC2 instance. The application is showing signs of performance degradation during busy times and is presenting 5xx errors. The company needs to make the application scale seamlessly.
Which solution will meet these requirements MOST cost-effectively?

  1. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second EC2 On-Demand Instance. Use an Application Load Balancer to distribute the load to each EC2 instance.
  2. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web application. Use the AMI to launch a second EC2 On-Demand Instance. Use Amazon Route 53 weighted routing to distribute the load across the two EC2 instances.
  3. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AWS Lambda function to stop the EC2 instance and change the instance type. Create an Amazon CloudWatch alarm to invoke the Lambda function when CPU utilization surpasses 75%.
  4. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AMI of the web application. Apply the AMI to a launch template. Create an Auto Scaling group with the launch template. Configure the launch template to use a Spot Fleet. Attach an Application Load Balancer to the Auto Scaling group.

Answer(s): D

Explanation:

The correct answer is D. Aurora MySQL moves the database off the web instance, the AMI in a launch template lets an Auto Scaling group add and remove web instances with demand, and Spot capacity behind an Application Load Balancer keeps the scaled-out web tier inexpensive.
A) Adds RDS and a second instance behind an ALB, but with no Auto Scaling group it cannot scale seamlessly as load grows.
B) Route 53 weighted routing distributes traffic but does not launch, replace, or health-manage instances; capacity stays fixed.
C) Stopping the instance to change its type causes downtime and scales only vertically; it is neither seamless nor cost-effective.
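As a sketch of option D's web tier, the Auto Scaling group can combine the AMI-based launch template with a mixed-instances policy that favors Spot capacity. The dict below mirrors boto3 `autoscaling.create_auto_scaling_group` parameters; the launch template ID, subnets, and target group ARN are placeholders.

```python
# Hypothetical Auto Scaling group for the PHP web tier: built from a launch
# template (created from the application AMI), attached to an ALB target
# group, and using Spot capacity above a small On-Demand base for cost.
# All IDs and ARNs below are placeholders.

asg_params = {
    "AutoScalingGroupName": "analytics-web-asg",
    "MinSize": 2,
    "MaxSize": 10,
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc"
    ],
    "VPCZoneIdentifier": "subnet-aaa,subnet-bbb",  # spread across AZs
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0123456789abcdef0",  # built from the app AMI
                "Version": "$Latest",
            },
        },
        "InstancesDistribution": {
            # Keep one On-Demand instance as a stable base; fill the rest
            # with Spot so scale-out stays cheap.
            "OnDemandBaseCapacity": 1,
            "OnDemandPercentageAboveBaseCapacity": 0,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
}
```

With the database moved to Aurora, the web instances are stateless, which is what makes Spot interruptions and automatic replacement safe here.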





