Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 9)

Updated On: 18-Mar-2026

A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company must use an AWS Key Management Service (AWS KMS) customer managed key to encrypt all data that is stored in the S3 buckets. The data in both S3 buckets must be encrypted and decrypted with the same KMS key. The data and the key must be stored in each of the two Regions.
Which solution will meet these requirements with the LEAST operational overhead?

  A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
  B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the application to use the KMS key with client-side encryption.
  C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
  D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS KMS keys (SSE-KMS). Configure replication between the S3 buckets.

Answer(s): B

Explanation:

B is correct. A multi-Region KMS key is a single customer managed key whose key ID and key material are replicated into a second Region, so the same key is stored in, and usable from, both Regions. With client-side encryption, the application encrypts objects before upload, and S3 replication copies the already-encrypted objects, so both buckets hold the same data protected by the same key.
A and C are wrong because SSE-S3 uses Amazon S3 managed keys rather than a customer managed KMS key. D is wrong because creating a separate customer managed key in each Region produces two different keys, so the data cannot be encrypted and decrypted with the same KMS key in both Regions.
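
As a minimal sketch of the key setup in option B, the boto3 calls below create a multi-Region customer managed key and replicate it to a second Region; the Region names and the key description are illustrative assumptions, not part of the question.

```python
import boto3

# Create the primary multi-Region customer managed key in the first Region.
kms_east = boto3.client("kms", region_name="us-east-1")  # Region is an assumption
primary = kms_east.create_key(
    Description="Multi-Region key shared by both S3 buckets",
    MultiRegion=True,  # key ID and key material can be replicated across Regions
)
key_id = primary["KeyMetadata"]["KeyId"]

# Replicate the key into the second Region. The replica has the same key ID
# and key material, so ciphertext produced in either Region decrypts in both.
kms_east.replicate_key(KeyId=key_id, ReplicaRegion="us-west-2")

# The application would then use this key with a client-side encryption
# library (for example, the AWS Encryption SDK) before uploading to S3.
print("Multi-Region key:", key_id)
```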



A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy to access and administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS services and follows the AWS Well-Architected Framework.
Which solution will meet these requirements with the LEAST operational overhead?

  A. Use the EC2 serial console to directly access the terminal interface of each instance for administration.
  B. Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager to establish a remote SSH session.
  C. Create an administrative SSH key pair. Load the public key into each EC2 instance. Deploy a bastion host in a public subnet to provide a tunnel for administration of each instance.
  D. Establish an AWS Site-to-Site VPN connection. Instruct administrators to use their local on-premises machines to connect directly to the instances by using SSH keys across the VPN tunnel.

Answer(s): B

Explanation:

AWS Systems Manager Session Manager, with the appropriate IAM role attached to each instance, provides audited, permission-controlled remote access through the SSM Agent without opening inbound ports or managing SSH keys. It integrates with IAM, supports session logging and encryption, and minimizes operational overhead, in line with the security pillar of the AWS Well-Architected Framework.
A) The EC2 serial console is a manual, per-instance troubleshooting tool and does not scale for routine remote administration.
B) Correct: scalable, secure, and the least operational overhead.
C) An SSH key pair plus a bastion host adds key distribution, patching, and bastion management overhead and exposes a public entry point.
D) A Site-to-Site VPN with direct SSH is complex and costly and increases the attack surface by exposing instances directly.
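
A minimal boto3 sketch of option B, assuming a hypothetical role name and instance ID. Interactive terminals are normally opened with `aws ssm start-session` from the CLI (which requires the Session Manager plugin); the `start_session` call here just shows the API surface.

```python
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

ROLE = "SessionManagerRole"          # hypothetical role name
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance ID

# Role that EC2 instances can assume, with the AWS managed policy that
# lets the SSM Agent register the instance with Systems Manager.
iam.create_role(
    RoleName=ROLE,
    AssumeRolePolicyDocument="""{
      "Version": "2012-10-17",
      "Statement": [{"Effect": "Allow",
                     "Principal": {"Service": "ec2.amazonaws.com"},
                     "Action": "sts:AssumeRole"}]
    }""",
)
iam.attach_role_policy(
    RoleName=ROLE,
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)
iam.create_instance_profile(InstanceProfileName=ROLE)
iam.add_role_to_instance_profile(InstanceProfileName=ROLE, RoleName=ROLE)

# Attach the profile to an existing instance; new instances can launch with it.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": ROLE},
    InstanceId=INSTANCE_ID,
)

# Once the agent registers, administrators open sessions with no inbound ports.
session = ssm.start_session(Target=INSTANCE_ID)
print(session["SessionId"])
```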



A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from around the world. The company must decrease latency for users who access the website.
Which solution meets these requirements MOST cost-effectively?

  A. Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routing entries.
  B. Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bucket. Edit the Route 53 entries to point to the IP addresses of the accelerators.
  C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.
  D. Enable S3 Transfer Acceleration on the bucket. Edit the Route 53 entries to point to the new endpoint.

Answer(s): C

Explanation:

CloudFront places a global CDN in front of the S3 bucket, caching the static content at edge locations worldwide and reducing latency for all users at low cost. C is correct.
A) Replicating the bucket to every Region multiplies storage and management costs, and geolocation routing directs users by location rather than by lowest latency; this is far less cost-effective than a CDN.
B) AWS Global Accelerator fronts Application Load Balancers, Network Load Balancers, EC2 instances, and Elastic IP addresses; it cannot be associated with an S3 bucket, and it adds fixed hourly cost without caching.
D) S3 Transfer Acceleration speeds up long-distance transfers to and from a bucket but does not cache content at the edge, so it is less effective and less cost-efficient than CloudFront for serving a public static website.
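
A minimal boto3 sketch of the Route 53 half of option C, assuming hypothetical hosted zone, domain, and distribution names; `Z2FDTNDATAQYW2` is the fixed hosted zone ID that Route 53 uses for all CloudFront alias targets.

```python
import boto3

route53 = boto3.client("route53")

# Point the site's A record at the CloudFront distribution via an alias record.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # hypothetical hosted zone for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",  # hypothetical site domain
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront's fixed zone ID
                    "DNSName": "d111111abcdef8.cloudfront.net",  # the distribution
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```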



A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company's website.
The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem.
Which solution addresses this performance issue?

  A. Change the storage type to Provisioned IOPS SSD.
  B. Change the DB instance to a memory optimized instance class.
  C. Change the DB instance to a burstable performance instance class.
  D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.

Answer(s): A

Explanation:

Provisioned IOPS SSD delivers consistent, configurable IOPS and throughput, which is what a large, write-heavy table needs.
A) Correct: 10-second inserts against a heavily updated table point to a storage bottleneck. General Purpose SSD IOPS scale with volume size (roughly 3 IOPS per GB, or about 6,000 IOPS for 2 TB) and can throttle under sustained write load; Provisioned IOPS SSD lets you size IOPS to the workload.
B) A memory optimized instance class helps CPU- or RAM-bound workloads but does not raise storage IOPS or throughput.
C) Burstable instances (t3, t4g) throttle under sustained load, which would make the latency worse.
D) Read replicas scale reads, not writes; every insert still goes to the primary instance, so the write bottleneck remains.
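
A minimal boto3 sketch of option A, assuming a hypothetical DB instance identifier and an illustrative IOPS figure sized for the write load.

```python
import boto3

rds = boto3.client("rds")

# Switch the instance's storage from General Purpose SSD to Provisioned
# IOPS SSD (io1) and provision IOPS for the write-heavy workload.
rds.modify_db_instance(
    DBInstanceIdentifier="items-mysql",  # hypothetical instance name
    StorageType="io1",
    Iops=12000,              # illustrative figure; size to the observed workload
    AllocatedStorage=2048,   # keep the existing 2 TB of storage
    ApplyImmediately=True,   # storage changes can briefly degrade performance
)
```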



A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?

  A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
  B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
  C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster. Set up the Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
  D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.

Answer(s): A

Explanation:

A) Correct: Kinesis Data Firehose is a fully managed, highly available ingestion service that batches the alerts and delivers them to durable S3 storage, and an S3 Lifecycle rule archives objects older than 14 days to S3 Glacier with no infrastructure to manage.
B) Requires running and maintaining an EC2 fleet across Availability Zones plus custom delivery scripts, adding cost and operational overhead that Firehose eliminates.
C) An OpenSearch Service cluster must be provisioned and managed, and daily manual snapshots plus deletion of data older than 14 days add ongoing operational burden; it is also an expensive place to keep archival data.
D) SQS is a queue, not a storage or analytics layer: 14 days is its maximum retention, and the custom polling, age-checking, and copy-to-S3 logic adds complexity and a risk of losing messages before they are archived.
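
As a minimal sketch of the lifecycle piece of option A, assuming a hypothetical Firehose destination bucket name, the boto3 call below transitions objects to S3 Glacier 14 days after creation.

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule on the Firehose destination bucket: objects move to
# S3 Glacier 14 days after creation, keeping recent data in S3 Standard.
s3.put_bucket_lifecycle_configuration(
    Bucket="edge-device-alerts",  # hypothetical destination bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-14-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
        }]
    },
)
```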





