Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 4)

Updated On: 15-Mar-2026

A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.
What should a solutions architect do to accomplish this goal?

  1. Use AWS Secrets Manager. Turn on automatic rotation.
  2. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
  3. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
  4. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.

Answer(s): A

Explanation:

AWS Secrets Manager provides centralized credential management with automatic rotation for database credentials, minimizing operational overhead for applications running on EC2 and connecting to Aurora. It supports seamless retrieval by apps and can rotate secrets without code changes.
A) Correct: Secrets Manager with automatic rotation reduces manual credential maintenance and ensures credentials are rotated automatically.
B) Systems Manager Parameter Store can store secure string values, but it has no built-in rotation for database credentials; rotation would require custom automation, unlike Secrets Manager's managed rotation for RDS and Aurora.
C) Storing credentials in S3 introduces risk if access controls or rotation aren’t robust; not ideal for dynamic database credentials.
D) Encrypting and mounting credentials on EBS volumes increases maintenance and does not automate rotation or centralized management.
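As a sketch of how an application might consume such a secret: the `get_secret_value` call and the JSON `SecretString` payload (with `username` and `password` keys for RDS-managed secrets) are real Secrets Manager behavior, while the secret name and the fake client here are illustrative stand-ins so the code runs without AWS access.

```python
import json

def db_credentials(sm_client, secret_id):
    """Fetch and parse a database secret stored in AWS Secrets Manager.

    RDS/Aurora secrets managed by Secrets Manager are JSON documents
    containing at least 'username' and 'password' keys, so the app can
    reread the secret after every rotation instead of caching a file.
    """
    resp = sm_client.get_secret_value(SecretId=secret_id)
    secret = json.loads(resp["SecretString"])
    return secret["username"], secret["password"]

class FakeSecretsManager:
    """Local stand-in for boto3.client('secretsmanager'), for testing."""
    def get_secret_value(self, SecretId):
        return {"SecretString": json.dumps(
            {"username": "app_user", "password": "s3cret"})}

# Hypothetical secret name; in production, pass a real boto3 client.
user, pwd = db_credentials(FakeSecretsManager(), "prod/aurora/app")
```

Because the application rereads the secret at connection time, rotation needs no code change beyond removing the local credential file.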



A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?

  1. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
  2. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
  3. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
  4. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.

Answer(s): A

Explanation:

A) Correct: A single CloudFront distribution can have multiple origins, with cache behaviors routing requests by path. The S3 bucket serves as the origin for static content and the ALB as the origin for dynamic content. CloudFront edge locations cache the static data and forward dynamic requests to the ALB over the AWS backbone, reducing latency for both, while Route 53 routes the company's domain to the distribution.
B, C, D) Incorrect: Global Accelerator is unnecessary when CloudFront already provides edge caching and acceleration, and it does not support S3 buckets or CloudFront distributions as endpoints. Splitting traffic across multiple domain names or accelerators adds complexity and complicates TLS and routing.
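The dual-origin routing described above can be sketched as the shape of a CloudFront `DistributionConfig`. This is heavily abbreviated relative to the full payload `create_distribution` requires (which also needs caller reference, viewer protocol settings, quantities, and more); the domain names and path pattern are illustrative.

```python
def dual_origin_config(s3_domain, alb_domain):
    """Minimal shape of a CloudFront config with two origins:
    /static/* is cached from S3, everything else goes to the ALB."""
    return {
        "Origins": [
            {"Id": "static-s3", "DomainName": s3_domain},
            {"Id": "dynamic-alb", "DomainName": alb_domain},
        ],
        # Path-based behavior sends static requests to the S3 origin.
        "CacheBehaviors": [
            {"PathPattern": "/static/*", "TargetOriginId": "static-s3"},
        ],
        # Unmatched paths (dynamic content) fall through to the ALB.
        "DefaultCacheBehavior": {"TargetOriginId": "dynamic-alb"},
    }

cfg = dual_origin_config(
    "example-bucket.s3.amazonaws.com",        # hypothetical bucket
    "my-alb-123.us-east-1.elb.amazonaws.com", # hypothetical ALB
)
```

One distribution, one domain name in Route 53: this is why option A has the least moving parts.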



A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?

  1. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule.
  2. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the required Regions. Configure Systems Manager to rotate the secrets on a schedule.
  3. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.
  4. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate the secrets.

Answer(s): A

Explanation:

A) Secrets Manager with multi-Region secret replication and scheduled rotation minimizes operational overhead: secret storage, automatic rotation, and cross-Region replication are all built in, and AWS provides managed rotation functions for Amazon RDS for MySQL, so no custom tooling is needed.
B) Systems Manager Parameter Store secure string parameters have no native multi-Region replication and no built-in rotation for database credentials; both would require custom automation.
C) S3 SSE plus Lambda rotation adds significant custom logic and lacks native secret rotation for RDS; higher maintenance.
D) DynamoDB with KMS keys and Lambda rotation is a custom approach requiring bespoke rotation logic and does not provide native, managed RDS credential rotation.
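The two-step setup in option A can be sketched against the real Secrets Manager API: `replicate_secret_to_regions` and `rotate_secret` exist with these parameter shapes on boto3's client. The secret name, Regions, and recording fake client are illustrative (a real rotation also needs a rotation Lambda, for which AWS supplies RDS MySQL templates).

```python
def replicate_and_schedule(sm_client, secret_id, regions,
                           schedule="rate(30 days)"):
    """Replicate a secret to additional Regions and put rotation
    on a schedule, matching the monthly maintenance cadence."""
    sm_client.replicate_secret_to_regions(
        SecretId=secret_id,
        AddReplicaRegions=[{"Region": r} for r in regions],
    )
    sm_client.rotate_secret(
        SecretId=secret_id,
        RotationRules={"ScheduleExpression": schedule},
    )

class RecordingClient:
    """Records API calls instead of hitting AWS, for a local dry run."""
    def __init__(self):
        self.calls = []
    def replicate_secret_to_regions(self, **kw):
        self.calls.append(("replicate_secret_to_regions", kw))
    def rotate_secret(self, **kw):
        self.calls.append(("rotate_secret", kw))

client = RecordingClient()
# Hypothetical secret name and Regions.
replicate_and_schedule(client, "prod/rds/mysql",
                       ["eu-west-1", "ap-southeast-2"])
```

Replica secrets stay in sync automatically after each rotation, which is exactly the cross-Region behavior the question asks for.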



A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.
Which solution will meet these requirements?

  1. Use Amazon Redshift with a single node for leader and compute functionality.
  2. Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
  3. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
  4. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.

Answer(s): C

Explanation:

Aurora with Auto Scaling and replicas provides a highly available, read-heavy, scalable relational database layer that automatically adds read replicas to handle unpredictable read workloads while maintaining multi-AZ durability.
A) Redshift is a data warehouse optimized for analytics, not an OLTP database for a live ecommerce workload, and a single node provides neither read scaling nor high availability.
B) A Single-AZ RDS deployment does not meet the high-availability requirement, and manually added read replicas do not scale automatically with unpredictable read demand the way Aurora Auto Scaling does.
D) ElastiCache Memcached adds a cache layer but does not replace the authoritative transactional database; relying on Spot Instances is unreliable for a critical database tier.
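Aurora replica auto scaling is configured through the Application Auto Scaling service; the `rds` namespace, `rds:cluster:ReadReplicaCount` dimension, and `RDSReaderAverageCPUUtilization` predefined metric below are the real identifiers for this feature, while the cluster name, capacity bounds, target value, and recording fake client are illustrative.

```python
def enable_reader_autoscaling(aas_client, cluster_id,
                              min_readers=1, max_readers=15):
    """Register an Aurora cluster's replica count as a scalable target
    and attach a target-tracking policy on average reader CPU."""
    resource_id = f"cluster:{cluster_id}"
    aas_client.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId=resource_id,
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=min_readers,
        MaxCapacity=max_readers,
    )
    aas_client.put_scaling_policy(
        PolicyName="aurora-reader-cpu",
        ServiceNamespace="rds",
        ResourceId=resource_id,
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,  # add replicas when reader CPU > 60%
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
            },
        },
    )

class RecordingAAS:
    """Records calls instead of hitting AWS, for a local dry run."""
    def __init__(self):
        self.calls = {}
    def register_scalable_target(self, **kw):
        self.calls["register"] = kw
    def put_scaling_policy(self, **kw):
        self.calls["policy"] = kw

aas = RecordingAAS()
enable_reader_autoscaling(aas, "my-aurora-cluster")  # hypothetical cluster
```

Reads go to the cluster's reader endpoint, which load-balances across however many replicas the policy has provisioned; writes stay on the writer, and Multi-AZ failover covers availability.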



A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud.
Which solution will meet these requirements?

  1. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.
  2. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
  3. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
  4. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.

Answer(s): C

Explanation:

AWS Network Firewall is a managed, in-line firewall that inspects and filters traffic entering and leaving a VPC, matching the on-premises inspection server's functionality.
A) GuardDuty is a threat-detection service that analyzes logs and generates findings; it does not inspect or filter traffic in line.
B) Traffic Mirroring copies traffic to out-of-band analysis appliances; it cannot block or filter the live traffic flow.
D) Firewall Manager centrally manages firewall policies (including Network Firewall policies) across accounts, but it is not itself an inspection engine.
Therefore, Network Firewall is the correct in-line traffic inspection and filtering solution.
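Filtering rules for Network Firewall are defined in rule groups; the payload below matches the shape `create_rule_group` accepts for stateful, Suricata-compatible rules, though the group name, capacity, and the example rule itself are illustrative.

```python
def stateful_rule_group(name, rules, capacity=100):
    """Payload shape for Network Firewall's create_rule_group call,
    using Suricata-compatible rule strings for stateful inspection."""
    return {
        "RuleGroupName": name,
        "Type": "STATEFUL",
        "Capacity": capacity,  # rule-processing capacity units
        "RuleGroup": {
            "RulesSource": {"RulesString": "\n".join(rules)},
        },
    }

# Hypothetical rule: drop outbound telnet from the protected subnets.
payload = stateful_rule_group(
    "block-outbound-telnet",
    ['drop tcp $HOME_NET any -> any 23 (msg:"block telnet"; sid:1;)'],
)
```

The rule group is then referenced from a firewall policy attached to the firewall endpoints in the production VPC, giving in-line enforcement rather than passive analysis.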





