Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 12)

Updated On: 18-Mar-2026

A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the shipping statistics, organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every morning.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

  A. Configure the application to send the data to Amazon Kinesis Data Firehose.
  B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
  C. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application's API for the data.
  D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application's API for the data.
  E. Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to send the report by email.

Answer(s): B,D

Explanation:

The correct combination uses a scheduled EventBridge rule to invoke a Lambda function that fetches the data, and Amazon SES to format and email the HTML report. D schedules an EventBridge event that invokes a Lambda function to query the application's API every morning. B uses SES to format the data into HTML and send the report to multiple recipients. A is incorrect because Kinesis Data Firehose delivers streaming data to storage and analytics destinations; it does not generate or email formatted reports. C is incorrect because AWS Glue is an ETL service, which is overkill for a simple API query and cannot send email. E is incorrect because SNS as an S3 event destination sends notifications when objects change; it does not run on a schedule, format HTML, or deliver a report.
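The D + B combination can be sketched as a single Lambda handler. This is a minimal illustration, not a production implementation: the API URL, sender, and recipient addresses are hypothetical placeholders, and error handling is omitted.

```python
# Sketch of the scheduled Lambda handler for options D and B.
# API_URL and the email addresses below are hypothetical placeholders.
import json
import urllib.request

API_URL = "https://example.com/shipping-stats"  # hypothetical endpoint


def build_html_report(stats):
    """Render a list of {"region", "shipped"} dicts as a simple HTML table."""
    rows = "".join(
        f"<tr><td>{item['region']}</td><td>{item['shipped']}</td></tr>"
        for item in stats
    )
    return (
        "<html><body><table>"
        "<tr><th>Region</th><th>Shipped</th></tr>"
        f"{rows}</table></body></html>"
    )


def handler(event, context):
    # Invoked by the EventBridge schedule: query the application's REST API.
    with urllib.request.urlopen(API_URL) as resp:
        stats = json.loads(resp.read())
    html = build_html_report(stats)

    # Send the formatted report through SES (boto3 is imported here so the
    # formatting logic above can be exercised without AWS credentials).
    import boto3
    ses = boto3.client("ses")
    ses.send_email(
        Source="reports@example.com",  # hypothetical, must be SES-verified
        Destination={"ToAddresses": ["team@example.com"]},
        Message={
            "Subject": {"Data": "Daily shipping report"},
            "Body": {"Html": {"Data": html}},
        },
    )
```

The EventBridge schedule itself would be a cron expression such as `cron(0 7 * * ? *)` targeting this function.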



A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead.
Which solution will meet these requirements?

  A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
  B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage.
  C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
  D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for storage.

Answer(s): C

Explanation:

The correct answer is C.
A) ECS with S3: S3 is object storage, not a standard file system, and does not provide shared POSIX-style file system semantics across instances.
B) EKS with EBS: EBS is block storage attached to individual instances; it does not provide a shared, elastically scaling file system for files that range up to hundreds of terabytes.
C) EC2 instances in a Multi-AZ Auto Scaling group with EFS get a shared, POSIX-compliant NFS file system that scales automatically, is highly available across Availability Zones, and handles large, varying file sizes with minimal operational overhead.
D) EC2 with EBS: block storage is not a shared file system and lacks the automatic scaling and cross-AZ availability this workload requires.
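Each instance in the Auto Scaling group would mount the same EFS file system at boot. A minimal sketch, assuming the `amazon-efs-utils` mount helper is installed and using a placeholder file system ID and mount point:

```shell
# fs-12345678 and /mnt/app-output are hypothetical placeholders
sudo mount -t efs -o tls fs-12345678:/ /mnt/app-output

# Equivalent persistent entry in /etc/fstab:
# fs-12345678:/  /mnt/app-output  efs  _netdev,tls  0  0
```

Because every instance sees the same NFS namespace, the application keeps writing to a standard directory tree while EFS grows and shrinks with the data.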



A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be archived for an additional 9 years. No one at the company, including administrative users and root users, can delete the records during the entire 10-year period. The records must be stored with maximum resiliency.
Which solution will meet these requirements?

  A. Store the records in S3 Glacier for the entire 10-year period. Use an access control policy to deny deletion of the records for a period of 10 years.
  B. Store the records by using S3 Intelligent-Tiering. Use an IAM policy to deny deletion of the records. After 10 years, change the IAM policy to allow deletion.
  C. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance mode for a period of 10 years.
  D. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year. Use S3 Object Lock in governance mode for a period of 10 years.

Answer(s): C

Explanation:

C) is correct: an S3 Lifecycle rule transitions the records from S3 Standard to S3 Glacier Deep Archive after 1 year, providing immediate access for the first year and low-cost archival for the remaining 9 years. S3 Object Lock in compliance mode enforces immutable retention for the full 10 years; compliance-mode retention cannot be shortened or removed by any user, including the root user. Both S3 Standard and Glacier Deep Archive store data redundantly across multiple Availability Zones, meeting the maximum-resiliency requirement.
A) Glacier alone does not provide immediate access during the first year, and an access control policy can be changed by administrators.
B) An IAM policy can be modified or bypassed by privileged users, so it does not guarantee immutable retention.
D) S3 One Zone-IA stores data in a single Availability Zone (reduced resiliency), and governance mode allows users with special permissions to override or remove the lock.
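The two pieces of option C map to two bucket-level configurations. A sketch of each (bucket prefix and rule ID are hypothetical placeholders): first, the lifecycle rule payload for `PutBucketLifecycleConfiguration`:

```json
{
  "Rules": [
    {
      "ID": "archive-accounting-records",
      "Status": "Enabled",
      "Filter": { "Prefix": "accounting/" },
      "Transitions": [
        { "Days": 365, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

and second, the default retention payload for `PutObjectLockConfiguration` (Object Lock must be enabled when the bucket is created):

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": { "Mode": "COMPLIANCE", "Years": 10 }
  }
}
```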



A company runs multiple Windows workloads on AWS. The company's employees use Windows file shares that are hosted on two Amazon EC2 instances. The file shares synchronize data between themselves and maintain duplicate copies. The company wants a highly available and durable storage solution that preserves how users currently access the files.
What should a solutions architect do to meet these requirements?

  A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.
  B. Set up an Amazon S3 File Gateway. Mount the S3 File Gateway on the existing EC2 instances.
  C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
  D. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon EFS.

Answer(s): C

Explanation:

Extending to Amazon FSx for Windows File Server with a Multi-AZ configuration preserves native SMB file shares and Windows access semantics (NTFS permissions, Windows ACLs, Active Directory integration) while providing built-in high availability and durability, so users keep accessing files exactly as they do today.
A) S3 with IAM authentication is object storage; it does not preserve SMB access or Windows file-share semantics.
B) An S3 File Gateway exposes S3 objects over SMB or NFS, but it is a caching gateway backed by object storage, not a fully featured native Windows file server, so parity with current usage is not maintained.
D) EFS provides NFS shares, not SMB, so it does not match how the Windows workloads access files.
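As a sketch, the Multi-AZ file system could be declared in CloudFormation roughly as follows; the subnet IDs, directory ID, and capacity values are hypothetical placeholders to adapt to the actual environment:

```yaml
Resources:
  WindowsFileSystem:
    Type: AWS::FSx::FileSystem
    Properties:
      FileSystemType: WINDOWS
      StorageCapacity: 1024            # GiB, placeholder
      SubnetIds:
        - subnet-11111111              # placeholder, AZ 1
        - subnet-22222222              # placeholder, AZ 2
      WindowsConfiguration:
        DeploymentType: MULTI_AZ_1     # standby file server in the second AZ
        PreferredSubnetId: subnet-11111111
        ThroughputCapacity: 32         # MB/s, placeholder
        ActiveDirectoryId: d-1234567890  # placeholder AWS Managed AD
```

`MULTI_AZ_1` provisions an active file server in the preferred subnet and a hot standby in the other Availability Zone, with automatic failover that is transparent to SMB clients.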



A solutions architect is developing a VPC architecture that includes multiple subnets. The architecture will host applications that use Amazon EC2 instances and Amazon RDS DB instances. The architecture consists of six subnets in two Availability Zones. Each Availability Zone includes a public subnet, a private subnet, and a dedicated subnet for databases. Only EC2 instances that run in the private subnets can have access to the RDS databases.
Which solution will meet these requirements?

  A. Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table with the database subnets.
  B. Create a security group that denies inbound traffic from the security group that is assigned to instances in the public subnets. Attach the security group to the DB instances.
  C. Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the security group to the DB instances.
  D. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.

Answer(s): C

Explanation:

The correct answer is C. A security group that allows inbound traffic from the security group of private-subnet instances ensures only EC2 in private subnets can reach the RDS DB instances, enforcing the required isolation.
A) A route table that excludes the public subnets' CIDR blocks does not work: the local route within a VPC cannot be removed, and routing is not an access-control mechanism at the database layer.
B) Security groups support only allow rules; they cannot contain explicit deny rules, so this configuration is not possible.
C) Correct: an explicit allow rule that references the private-subnet instances' security group as the source restricts database access to those instances only.
D) VPC peering connects separate VPCs; it does not apply to subnets within the same VPC and adds unnecessary complexity.
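The security-group-to-security-group reference in option C can be sketched in CloudFormation; the VPC ID and the MySQL port are hypothetical placeholders (use the port of the actual database engine):

```yaml
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: App tier in the private subnets
      VpcId: vpc-12345678              # placeholder

  DatabaseSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow DB traffic only from the private app tier
      VpcId: vpc-12345678              # placeholder
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 3306               # MySQL, placeholder for the engine port
          ToPort: 3306
          # Source is a security group, not a CIDR: only instances that
          # carry AppSecurityGroup can reach the DB instances.
          SourceSecurityGroupId: !Ref AppSecurityGroup
```

Referencing a security group as the source (rather than a CIDR block) keeps the rule correct even as instances in the private subnets scale in and out.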





