Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 27)

Updated On: 18-Mar-2026

A solutions architect needs to implement a solution to reduce a company's storage costs. All the company's data is in the Amazon S3 Standard storage class. The company must keep all data for at least 25 years. Data from the most recent 2 years must be highly available and immediately retrievable.
Which solution will meet these requirements?

  1. Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive immediately.
  2. Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years.
  3. Use S3 Intelligent-Tiering. Activate the archiving option to ensure that data is archived in S3 Glacier Deep Archive.
  4. Set up an S3 Lifecycle policy to transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately and to S3 Glacier Deep Archive after 2 years.

Answer(s): B

Explanation:

B) Correct: Transitioning objects to S3 Glacier Deep Archive after 2 years keeps the most recent 2 years of data in S3 Standard, where it is highly available and immediately retrievable, while older data moves to the lowest-cost archive tier for the remainder of the 25-year retention period.
A) Incorrect: An immediate transition to Glacier Deep Archive makes the most recent 2 years of data subject to hours-long retrieval times, violating the requirement that it be immediately retrievable.
C) Incorrect: S3 Intelligent-Tiering with the archiving option moves objects based on access patterns, not age, so it cannot guarantee that all data from the most recent 2 years remains immediately retrievable.
D) Incorrect: S3 One Zone-IA stores data in a single Availability Zone, so an immediate transition would leave the most recent 2 years of data less available than S3 Standard, violating the high-availability requirement.
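The lifecycle rule described in option B can be sketched as a configuration document. This is a minimal illustration, assuming a hypothetical bucket and rule name; 730 days approximates the 2-year boundary.

```python
# Hypothetical sketch of an S3 Lifecycle rule matching option B:
# keep objects in S3 Standard for ~2 years (730 days), then move
# them to Glacier Deep Archive. The rule ID is a placeholder.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-after-2-years",   # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},        # apply to all objects in the bucket
            "Transitions": [
                {
                    "Days": 730,                    # ~2 years in S3 Standard first
                    "StorageClass": "DEEP_ARCHIVE", # lowest-cost archive tier
                }
            ],
        }
    ]
}

# With boto3, this would be applied roughly as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-bucket",
#       LifecycleConfiguration=lifecycle_configuration)
```

Note that no Expiration action is set: the requirement is to keep all data for at least 25 years, so the rule only transitions objects and never deletes them.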



A media company is evaluating the possibility of moving its systems to the AWS Cloud. The company needs at least 10 TB of storage with the maximum possible I/O performance for video processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet requirements for archival media that is not in use anymore.
Which set of services should a solutions architect recommend to meet these requirements?

  1. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
  2. Amazon EBS for maximum performance, Amazon EFS for durable data storage, and Amazon S3 Glacier for archival storage
  3. Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage, and Amazon S3 for archival storage
  4. Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage

Answer(s): D

Explanation:

The correct answer is D.
A) Incorrect: Amazon EBS provides durable block storage, but EC2 instance store (NVMe/SSD physically attached to the host) delivers the maximum possible I/O performance, so EBS is not the best fit for the video-processing tier.
B) Incorrect: In addition to EBS not offering maximum I/O, Amazon EFS is far more expensive than Amazon S3 for 300 TB of durable media content.
C) Incorrect: Amazon EFS is not cost-effective for 300 TB of media content, and S3 Standard is not the most cost-effective choice for 900 TB of archival media that is no longer in use; S3 Glacier is.
D) Correct: EC2 instance store delivers maximum I/O for video processing (transient working data), Amazon S3 provides 99.999999999% (11 nines) durability for the 300 TB of media content, and S3 Glacier offers the lowest cost for the 900 TB archive.
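The storage split in option D can be illustrated with the S3 storage class each upload would target. This is a hedged sketch; the bucket name and object keys are made-up placeholders.

```python
# Hypothetical sketch of how the two S3 tiers in option D would be
# addressed via the StorageClass parameter of an S3 put_object call.
# Bucket and keys are placeholders.
media_upload_kwargs = {
    "Bucket": "example-media-bucket",
    "Key": "content/episode-001.mp4",
    "StorageClass": "STANDARD",      # 300 TB of durable, active media content
}
archive_upload_kwargs = {
    "Bucket": "example-media-bucket",
    "Key": "archive/episode-001.mp4",
    "StorageClass": "GLACIER",       # 900 TB of archival media no longer in use
}

# The 10 TB high-I/O scratch space is not in S3 at all: it would live
# on EC2 instance store volumes attached to the processing instances,
# accepting that instance store data is lost when the instance stops.
```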



A company wants to run applications in containers in the AWS Cloud. These applications are stateless and can tolerate disruptions within the underlying infrastructure. The company needs a solution that minimizes cost and operational overhead.
What should a solutions architect do to meet these requirements?

  1. Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers.
  2. Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
  3. Use On-Demand Instances in an Amazon EC2 Auto Scaling group to run the application containers.
  4. Use On-Demand Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.

Answer(s): B

Explanation:

B) Correct: Spot Instances in an EKS managed node group combine the lowest compute cost with a managed Kubernetes control plane and automated node lifecycle (provisioning, graceful draining on Spot interruption, upgrades), which suits stateless, disruption-tolerant containers while minimizing operational overhead.
A) Incorrect: Spot Instances in a plain EC2 Auto Scaling group are inexpensive, but the company would have to build and operate its own container orchestration on top of the instances.
C) Incorrect: On-Demand Instances cost more than Spot, and a plain Auto Scaling group still leaves container orchestration to the company.
D) Incorrect: An EKS managed node group minimizes overhead, but On-Demand pricing does not minimize cost for workloads that can tolerate interruption.
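The Spot-backed managed node group from option B can be sketched as the parameters for an EKS create-nodegroup request. This is a minimal illustration, assuming hypothetical cluster, subnet, and IAM role names; diversifying instance types across several Spot pools is the usual way to reduce interruption impact.

```python
# Hypothetical sketch of boto3 parameters for an EKS managed node group
# running on Spot capacity (option B). All names/ARNs are placeholders.
nodegroup_params = {
    "clusterName": "example-cluster",
    "nodegroupName": "stateless-spot-nodes",
    "capacityType": "SPOT",          # Spot pricing for disruption-tolerant work
    "instanceTypes": ["m5.large", "m5a.large", "m4.large"],  # diversify Spot pools
    "scalingConfig": {"minSize": 1, "maxSize": 10, "desiredSize": 3},
    "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],       # placeholders
    "nodeRole": "arn:aws:iam::123456789012:role/ExampleNodeRole",
}

# Applied roughly as:
#   boto3.client("eks").create_nodegroup(**nodegroup_params)
# EKS then handles node provisioning and cordons/drains nodes that
# receive a Spot interruption notice.
```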



A company is running a multi-tier web application on premises. The web application is containerized and runs on a number of Linux hosts connected to a PostgreSQL database that contains user records. The operational overhead of maintaining the infrastructure and capacity planning is limiting the company's growth. A solutions architect must improve the application's infrastructure.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)

  1. Migrate the PostgreSQL database to Amazon Aurora.
  2. Migrate the web application to be hosted on Amazon EC2 instances.
  3. Set up an Amazon CloudFront distribution for the web application content.
  4. Set up Amazon ElastiCache between the web application and the PostgreSQL database.
  5. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).

Answer(s): A,E

Explanation:

The correct combination is A and E, which modernizes both the database and the compute tier with managed services, reducing operational overhead.
A) Correct: Migrating PostgreSQL to Amazon Aurora provides a fully managed, PostgreSQL-compatible database with automated backups, patching, and replication, removing database maintenance work.
E) Correct: Running the containers on AWS Fargate with Amazon ECS eliminates host provisioning, patching, and capacity planning; the application is already containerized, so it fits directly.
B) Incorrect: Self-managed EC2 instances retain the operational burden the company is trying to shed.
C) Incorrect: CloudFront accelerates content delivery but does not reduce the overhead of maintaining hosts or the database.
D) Incorrect: ElastiCache adds a caching layer (and another component to operate) without addressing the infrastructure-maintenance problem.
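The Fargate side of the answer can be sketched as an ECS task definition. This is a hedged illustration, assuming placeholder family, image, and port values.

```python
# Hypothetical sketch of an ECS task definition for Fargate (option E).
# Family, image URI, and port are placeholders.
task_definition = {
    "family": "web-app",
    "requiresCompatibilities": ["FARGATE"],  # serverless container compute
    "networkMode": "awsvpc",                 # required network mode for Fargate
    "cpu": "512",                            # 0.5 vCPU
    "memory": "1024",                        # 1 GiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        }
    ],
}

# Registered roughly as:
#   boto3.client("ecs").register_task_definition(**task_definition)
# No EC2 hosts or cluster capacity need to be managed; Fargate
# provisions compute per task.
```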



An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the group?

  1. Use a simple scaling policy to dynamically scale the Auto Scaling group.
  2. Use a target tracking policy to dynamically scale the Auto Scaling group.
  3. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
  4. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.

Answer(s): B

Explanation:

B) Correct: A target tracking policy maintains a specified metric at a target value (for example, average CPU utilization at 40%), automatically adding and removing capacity to keep the group near the desired utilization.
A) Incorrect: Simple scaling reacts to a single alarm threshold with a fixed adjustment and a cooldown period, so it tends to overshoot or oscillate rather than hold utilization near a target.
C) Incorrect: A custom Lambda function that updates desired capacity adds complexity and latency to replicate behavior Auto Scaling provides natively.
D) Incorrect: Scheduled scaling follows a fixed timetable and cannot respond to real-time load changes, so it cannot maintain a target utilization under variable traffic.
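The target tracking policy from option B can be sketched as the parameters of a put-scaling-policy request. This is a minimal illustration; the Auto Scaling group and policy names are placeholders.

```python
# Hypothetical sketch of boto3 parameters for a target tracking scaling
# policy that holds average CPU at 40% (option B). Names are placeholders.
policy_params = {
    "AutoScalingGroupName": "example-asg",
    "PolicyName": "keep-cpu-at-40",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            # Average CPU utilization across all instances in the group
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 40.0,  # desired average CPU utilization (%)
    },
}

# Applied roughly as:
#   boto3.client("autoscaling").put_scaling_policy(**policy_params)
# Auto Scaling then creates the CloudWatch alarms and adjusts capacity
# automatically to track the 40% target.
```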



