Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 6)

Updated On: 18-Mar-2026

An ecommerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24 hours. The company wants to be able to handle millions of requests each hour with millisecond latency during peak hours.
Which solution will meet these requirements with the LEAST operational overhead?

  A. Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the distributions. Store the order data in Amazon S3.
  B. Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.
  C. Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for MySQL.
  D. Use an Amazon S3 bucket to host the website's static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.

Answer(s): D

Explanation:

A concise justification:
D) Uses S3 for static content behind CloudFront, API Gateway and Lambda for the backend APIs, and DynamoDB for scalable, low-latency data storage. This fully serverless approach minimizes operational overhead, scales to millions of requests per hour with millisecond latency, and avoids provisioning or managing servers and clusters.
A) S3 with CloudFront handles static content delivery at low latency but provides no scalable backend, and S3 is not a suitable data store for dynamic order data.
B) EC2 Auto Scaling with ALBs works, but managing instances, AMIs, and scaling policies plus an RDS for MySQL backend means considerably more operational effort.
C) EKS with the Cluster Autoscaler adds Kubernetes management complexity and still depends on a relational database, increasing maintenance.
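The option-D backend can be sketched as a single Lambda handler behind API Gateway that records an order in DynamoDB. This is a minimal illustration, not the exam's reference architecture: the table name, attribute names, and request shape are all assumptions, and the boto3 call is left as a comment so the sketch runs without AWS credentials.

```python
import json
import time
import uuid

TABLE_NAME = "DailyDealOrders"  # hypothetical DynamoDB table name

def build_order_item(body):
    """Map a parsed API Gateway request body to a DynamoDB item dict."""
    return {
        "orderId": str(uuid.uuid4()),        # partition key (assumed schema)
        "productId": body["productId"],
        "quantity": int(body.get("quantity", 1)),
        "createdAt": int(time.time()),       # epoch seconds for ordering
    }

def handler(event, context=None):
    """API Gateway proxy-integration Lambda handler."""
    body = json.loads(event["body"])
    item = build_order_item(body)
    # In a deployed function you would persist the item, e.g.:
    #   boto3.resource("dynamodb").Table(TABLE_NAME).put_item(Item=item)
    return {"statusCode": 201, "body": json.dumps({"orderId": item["orderId"]})}
```

Because Lambda scales concurrency automatically and DynamoDB delivers single-digit-millisecond reads and writes, this pattern absorbs the daily traffic spike with no servers to manage.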



A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files.
Which storage option meets these requirements?

  A. S3 Standard
  B. S3 Intelligent-Tiering
  C. S3 Standard-Infrequent Access (S3 Standard-IA)
  D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

Answer(s): B

Explanation:

B) S3 Intelligent-Tiering automatically moves objects between frequent-access and infrequent-access tiers based on usage, minimizing cost when access patterns are unpredictable. Like S3 Standard, it stores objects redundantly across multiple Availability Zones, so the files remain resilient to the loss of an AZ.
A) S3 Standard is durable and highly available but is not cost-optimized for files that are rarely accessed.
C) S3 Standard-IA lowers storage cost but adds retrieval fees, which are hard to control when access patterns are unpredictable.
D) S3 One Zone-IA stores data in a single AZ, so it does not survive the loss of that AZ, contradicting the resilience requirement.
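Intelligent-Tiering is selected per object at upload time via the storage class. A minimal sketch of how the request would be assembled (bucket and key names are illustrative; the builder returns the keyword arguments an S3 `PutObject` call would take, so it stays runnable offline):

```python
def intelligent_tiering_put_kwargs(bucket, key, data):
    """Build keyword arguments for an S3 PutObject call that stores the
    object in the Intelligent-Tiering storage class from day one."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": data,
        "StorageClass": "INTELLIGENT_TIERING",  # S3's automatic-tiering class
    }

# In practice the dict is passed straight to the S3 client, e.g.:
#   boto3.client("s3").put_object(**intelligent_tiering_put_kwargs(
#       "media-bucket", "videos/clip01.mp4", payload))
```

Once stored this way, S3 monitors each object's access pattern and moves it between tiers automatically; no lifecycle rules are needed.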



A company is storing backup files by using Amazon S3 Standard storage. The files are accessed frequently for 1 month. However, the files are not accessed after 1 month. The company must keep the files indefinitely.
Which storage solution will meet these requirements MOST cost-effectively?

  A. Configure S3 Intelligent-Tiering to automatically migrate objects.
  B. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.
  C. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1 month.
  D. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 month.

Answer(s): B

Explanation:

S3 Glacier Deep Archive is the lowest-cost S3 storage class for long-term data that is rarely accessed, making B cost-optimal for backups that must be kept indefinitely after an initial month of frequent access.
A) S3 Intelligent-Tiering adds per-object monitoring charges, and its standard tiers are not as cheap as Glacier Deep Archive for data that is never accessed again.
B) Correct: transitioning objects to Glacier Deep Archive after 1 month minimizes storage cost while preserving the ability to retrieve the files if ever needed.
C) S3 Standard-IA has a much higher per-GB price than Glacier Deep Archive for long-term, rarely accessed data.
D) S3 One Zone-IA stores data in a single AZ, increasing the risk of loss, and is still far more expensive than Glacier Deep Archive for indefinite retention.
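A lifecycle rule for this pattern is a small JSON document. The sketch below builds the configuration as a plain dict (rule ID and bucket name are illustrative assumptions); `DEEP_ARCHIVE` is the storage-class value S3 uses for Glacier Deep Archive, and 30 days approximates the 1-month mark:

```python
def deep_archive_lifecycle_config(days=30):
    """Build an S3 Lifecycle configuration that transitions every object
    to Glacier Deep Archive after `days` days."""
    return {
        "Rules": [
            {
                "ID": "backups-to-deep-archive",  # illustrative rule name
                "Status": "Enabled",
                "Filter": {"Prefix": ""},          # empty prefix = all objects
                "Transitions": [
                    {"Days": days, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    }

# Applied to the bucket with, e.g.:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="backup-bucket",
#       LifecycleConfiguration=deep_archive_lifecycle_config())
```

After the rule is in place the transition happens automatically; no application changes or manual moves are required.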



A company observes an increase in Amazon EC2 costs in its most recent bill. The billing team notices unwanted vertical scaling of instance types for a couple of EC2 instances. A solutions architect needs to create a graph comparing the last 2 months of EC2 costs and perform an in-depth analysis to identify the root cause of the vertical scaling.
How should the solutions architect generate the information with the LEAST operational overhead?

  A. Use AWS Budgets to create a budget report and compare EC2 costs based on instance types.
  B. Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types.
  C. Use graphs from the AWS Billing and Cost Management dashboard to compare EC2 costs based on instance types for the last 2 months.
  D. Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket. Use Amazon QuickSight with Amazon S3 as a source to generate an interactive graph based on instance types.

Answer(s): B

Explanation:

The correct answer is B. Cost Explorer's granular filtering lets you drill down into EC2 costs by instance type over a two-month window, enabling fast root-cause analysis with minimal setup and operational overhead.
A) AWS Budgets focuses on cost thresholds and alerts, not in-depth per-instance-type cost analysis.
C) Billing dashboard graphs offer basic visuals but lack the granular, customizable filters needed for root-cause analysis by instance type and time range.
D) Cost and Usage Reports with QuickSight provides full analytics but involves additional data processing and setup, increasing overhead.
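The same analysis is also exposed programmatically through Cost Explorer's `GetCostAndUsage` API. As a sketch, the parameters for a two-month, per-instance-type breakdown might look like this (the builder returns the request dict only, so no AWS call is made; the service filter string is the standard Cost Explorer value for EC2 compute):

```python
from datetime import date, timedelta

def ec2_cost_by_instance_type_request(end=None, months=2):
    """Build parameters for Cost Explorer's GetCostAndUsage call,
    grouping EC2 compute cost by instance type over the last `months` months."""
    end = end or date.today()
    start = end - timedelta(days=30 * months)  # approximate month length
    return {
        "TimePeriod": {"Start": start.isoformat(), "End": end.isoformat()},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
        "Filter": {
            "Dimensions": {
                "Key": "SERVICE",
                "Values": ["Amazon Elastic Compute Cloud - Compute"],
            }
        },
    }

# In practice: boto3.client("ce").get_cost_and_usage(**params)
```

The same filters (service, instance type, time range) are what the Cost Explorer console applies interactively, which is why option B needs no extra plumbing.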



A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort.
Which solution will meet these requirements?

  A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect to the database by using native Java Database Connectivity (JDBC) drivers.
  B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.
  C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
  D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.

Answer(s): D

Explanation:

A) Running Tomcat on EC2 increases management overhead, gives up Lambda's automatic scaling, and adds provisioning complexity.
B) Switching to DynamoDB with DAX changes the database engine entirely, adding migration complexity without addressing how to scale writes into Aurora PostgreSQL.
C) SNS is a push-based pub/sub service: it pushes each message to the loader function immediately and provides no durable buffer, so traffic spikes still hit the database at full concurrency.
D) Correct: an SQS queue decouples ingestion from loading, buffers bursts, and lets Lambda poll messages in batches, reducing concurrent executions and smoothing writes into Aurora PostgreSQL.
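The option-D pattern can be sketched with two small pieces: the ingestion side builds an SQS `SendMessage` request, and the loader Lambda receives records in batches from the SQS event source mapping. Queue URL and payload fields are hypothetical placeholders, and the actual database insert is left as a comment so the sketch stays self-contained:

```python
import json

def queue_message(body):
    """Build the SQS SendMessage parameters the ingestion Lambda would use
    (queue URL is a hypothetical placeholder)."""
    return {
        "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue",
        "MessageBody": json.dumps(body),
    }

def loader_handler(event, context=None):
    """Second Lambda: invoked by the SQS event source mapping, which polls
    the queue and delivers records in batches -- this batching is what
    relieves per-request concurrency pressure on the database."""
    rows = [json.loads(record["body"]) for record in event["Records"]]
    # Here you would batch-insert `rows` into Aurora PostgreSQL, e.g. with
    # psycopg2's executemany or the RDS Data API (omitted for self-containment).
    return {"batchSize": len(rows)}
```

If the database slows down, messages simply accumulate in the queue instead of being dropped or throttled, which is the backpressure property the explanation refers to.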



Viewing page 6 of 205
Viewing questions 26 - 30 out of 824 questions


