Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 20)

Updated On: 16-Mar-2026

A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and read objects. According to the company's security regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?

  A. Configure an S3 gateway endpoint.
  B. Create an S3 bucket in a private subnet.
  C. Create an S3 bucket in the same AWS Region as the EC2 instances.
  D. Configure a NAT gateway in the same subnet as the EC2 instances.

Answer(s): A

Explanation:

An S3 gateway endpoint routes S3 traffic privately over the AWS network within the Region, so API requests never traverse the internet.
A) Correct: a gateway VPC endpoint for S3 adds route-table entries that keep traffic to S3 on the AWS backbone, so it never leaves the AWS network.
B) S3 buckets are not created inside subnets; Amazon S3 is a Regional service that lives outside the VPC, so this option is not possible and does not address the routing path.
C) Keeping the bucket in the same Region does not change the network path; without a VPC endpoint, API calls still resolve to public S3 endpoints.
D) A NAT gateway exists precisely to provide internet egress for private subnets, which directly violates the no-internet requirement.
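As a minimal sketch of option A, the request below shows the parameters a gateway endpoint for S3 would need if created with boto3's `create_vpc_endpoint`; the VPC ID, Region, and route table ID are hypothetical placeholders.

```python
# Sketch: request parameters for an S3 gateway endpoint.
# The VPC ID, route table ID, and Region are placeholders, not real resources.
params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0abc1234",                      # hypothetical VPC ID
    "ServiceName": "com.amazonaws.us-east-1.s3",  # S3 service name in the VPC's Region
    "RouteTableIds": ["rtb-0def5678"],            # route tables used by the EC2 subnets
}

# With valid credentials, this would be submitted as:
# boto3.client("ec2").create_vpc_endpoint(**params)
print(params["ServiceName"])
```

The endpoint type `Gateway` is what distinguishes this from an interface endpoint: it works by adding a prefix-list route to the listed route tables rather than placing an ENI in the subnets.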



A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the application tier running on Amazon EC2 instances inside a VPC.
Which combination of steps should a solutions architect take to accomplish this? (Choose two.)

  A. Configure a VPC gateway endpoint for Amazon S3 within the VPC.
  B. Create a bucket policy to make the objects in the S3 bucket public.
  C. Create a bucket policy that limits access to only the application tier running in the VPC.
  D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.
  E. Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket.

Answer(s): A,C

Explanation:

A) A VPC gateway endpoint for Amazon S3 gives the EC2 instances private, Regional access to S3 without traversing the public internet.
C) A bucket policy with a condition such as aws:SourceVpce (matching the endpoint ID) restricts access to requests arriving from the application tier's VPC, enforcing least privilege.
B) Making objects public breaches the confidentiality of the sensitive user data.
D) Copying long-term IAM user credentials onto EC2 instances is insecure and not recommended; use an instance profile (IAM role) instead.
E) A NAT instance is unnecessary once a VPC endpoint exists, and it adds cost, maintenance, and complexity.
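A hedged sketch of the bucket policy from option C: it denies any request that does not arrive through the VPC endpoint. The bucket name and endpoint ID are hypothetical placeholders.

```python
import json

# Sketch: bucket policy restricting access to a specific VPC endpoint.
# "example-app-bucket" and "vpce-0abc1234" are illustrative placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyViaVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-app-bucket",
            "arn:aws:s3:::example-app-bucket/*",
        ],
        "Condition": {
            # Deny everything EXCEPT traffic from the gateway endpoint
            "StringNotEquals": {"aws:SourceVpce": "vpce-0abc1234"}
        },
    }],
}
print(json.dumps(policy, indent=2))
```

Note the deny-unless pattern: an explicit Deny with StringNotEquals is stronger than an Allow, because it overrides any other Allow that might grant access from outside the VPC.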



A company runs an on-premises application that is powered by a MySQL database. The company is migrating the application to AWS to increase the application's elasticity and availability.
The current architecture shows heavy read activity on the database during times of normal operation. Every 4 hours, the company's development team pulls a full export of the production database to populate a database in the staging environment. During this period, users experience unacceptable application latency. The development team is unable to use the staging environment until the procedure completes.
A solutions architect must recommend replacement architecture that alleviates the application latency issue. The replacement architecture also must give the development team the ability to continue using the staging environment without delay.
Which solution meets these requirements?

  A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
  B. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on-demand.
  C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging database.
  D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.

Answer(s): B

Explanation:

Aurora MySQL with Aurora Replicas offloads the heavy read traffic, and Aurora's copy-on-write database cloning creates a staging copy in minutes without exporting from, or adding load to, production. This removes both the latency spike and the staging wait.
A) mysqldump performs a full logical export that stresses the production database and takes time to restore, reproducing both the latency and the staging-delay problems.
C) The Multi-AZ standby is not readable or otherwise accessible; it exists only for failover, so it cannot serve as a staging database.
D) Like A, this relies on mysqldump for staging data, introducing heavy I/O and latency during the export and delaying staging access.
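For illustration, Aurora cloning is exposed through the RDS `RestoreDBClusterToPointInTime` API with a copy-on-write restore type. The parameters below are a sketch; both cluster identifiers are hypothetical.

```python
# Sketch: parameters for cloning an Aurora cluster on-demand (option B).
# Cluster identifiers are hypothetical placeholders.
clone_params = {
    "DBClusterIdentifier": "staging-clone",      # name for the new staging clone
    "SourceDBClusterIdentifier": "prod-aurora",  # production cluster to clone
    "RestoreType": "copy-on-write",              # clone shares pages until modified
    "UseLatestRestorableTime": True,             # clone from the current state
}

# With valid credentials, this would be submitted as:
# boto3.client("rds").restore_db_cluster_to_point_in_time(**clone_params)
print(clone_params["RestoreType"])
```

Because the clone shares storage pages with the source until either side writes to them, creation is fast and places no export load on production, which is exactly why option B beats the mysqldump approaches.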



A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple processing to transform the data and save the data in JSON format for later analysis.
Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files. On other days, users will upload a few files or no files.
Which solution meets these requirements with the LEAST operational overhead?

  A. Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an Amazon Aurora DB cluster.
  B. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
  C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
  D. Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is uploaded. Use an AWS Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in an Amazon Aurora DB cluster.

Answer(s): C

Explanation:

S3 event notifications to SQS with a Lambda consumer (option C) give near-real-time, asynchronous, decoupled processing of each uploaded file. Lambda scales automatically with the event rate, including down to zero on idle days, and DynamoDB is a fully managed store well suited to small JSON records, so operational overhead is minimal.
A) EMR requires cluster management and is far too heavy for simple one-time processing with variable load.
B) EC2-based consumers add patching, scaling, and capacity management overhead compared with serverless.
D) EventBridge plus Kinesis adds unnecessary components and complexity, and Aurora introduces a relational database that must be managed.
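As a sketch of the first step in option C, the notification configuration below is the shape S3's `put_bucket_notification_configuration` API expects for routing ObjectCreated events to an SQS queue; the queue ARN and bucket name are hypothetical.

```python
# Sketch: S3 event notification configuration targeting an SQS queue.
# The queue ARN and bucket name are illustrative placeholders.
notification_config = {
    "QueueConfigurations": [{
        "Id": "NewUploadToQueue",
        "QueueArn": "arn:aws:sqs:us-east-1:123456789012:upload-queue",
        "Events": ["s3:ObjectCreated:*"],  # fire on every new object
    }]
}

# With valid credentials, this would be applied as:
# boto3.client("s3").put_bucket_notification_configuration(
#     Bucket="example-upload-bucket",
#     NotificationConfiguration=notification_config,
# )
print(notification_config["QueueConfigurations"][0]["Events"])
```

Lambda would then be configured with the SQS queue as an event source, so each uploaded file is delivered to a function invocation with no servers to manage.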



An application allows users at a company's headquarters to access product data. The product data is stored in an Amazon RDS MySQL DB instance. The operations team has isolated an application performance slowdown and wants to separate read traffic from write traffic. A solutions architect needs to optimize the application's performance quickly.
What should the solutions architect recommend?

  A. Change the existing database to a Multi-AZ deployment. Serve the read requests from the primary Availability Zone.
  B. Change the existing database to a Multi-AZ deployment. Serve the read requests from the secondary Availability Zone.
  C. Create read replicas for the database. Configure the read replicas with half of the compute and storage resources as the source database.
  D. Create read replicas for the database. Configure the read replicas with the same compute and storage resources as the source database.

Answer(s): D

Explanation:

A) Multi-AZ is a high-availability feature; continuing to serve reads from the primary does nothing to separate read traffic from write traffic.
B) The Multi-AZ standby is not accessible for reads; Multi-AZ provides synchronous failover, not read scaling.
C) Halving the replicas' compute and storage risks underprovisioning: replication lag would grow and read latency would remain poor, so this is not a sound quick fix.
D) Correct: read replicas offload read traffic from the primary, truly separating reads from writes, and sizing the replicas the same as the source keeps read performance consistent with minimal configuration risk.
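To illustrate option D, the parameters below match the shape of RDS's `create_db_instance_read_replica` API, with the replica sized identically to the source; instance identifiers and the class are hypothetical placeholders.

```python
# Sketch: parameters for a read replica sized the same as its source (option D).
# Identifiers and the instance class are illustrative placeholders.
source_instance_class = "db.r6g.large"  # assumed class of the source instance

replica_params = {
    "DBInstanceIdentifier": "product-db-replica-1",    # name of the new replica
    "SourceDBInstanceIdentifier": "product-db",        # existing MySQL primary
    "DBInstanceClass": source_instance_class,          # same size as the source
}

# With valid credentials, this would be submitted as:
# boto3.client("rds").create_db_instance_read_replica(**replica_params)
print(replica_params["DBInstanceClass"])
```

The application's read queries are then pointed at the replica's endpoint while writes continue to go to the primary, achieving the read/write separation the operations team asked for.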





