Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 29)

Updated On: 20-Mar-2026

A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The on-premises database must remain online and accessible during the migration. The Aurora database must remain synchronized with the on-premises database.
Which combination of actions must a solutions architect take to meet these requirements? (Choose two.)

  A. Create an ongoing replication task.
  B. Create a database backup of the on-premises database.
  C. Create an AWS Database Migration Service (AWS DMS) replication server.
  D. Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT).
  E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor the database synchronization.

Answer(s): A,C

Explanation:

AWS DMS supports continuous (ongoing) replication, so the source database can remain online while the Aurora target stays synchronized throughout the migration. A) An ongoing replication task performs the initial load and then continuously captures and applies changes to the target. C) An AWS DMS replication server (replication instance) runs the task and moves the data between the on-premises database and Aurora.
B) A one-time backup does not keep the target synchronized. D) AWS SCT is needed only for heterogeneous migrations; PostgreSQL to Aurora PostgreSQL is homogeneous, so no schema conversion is required. E) EventBridge/CloudWatch Events can monitor events but does not perform or maintain data synchronization.
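As a sketch of what the correct answers look like in practice, the payload below shows the parameters one might pass to boto3's `dms.create_replication_task` call. The ARNs and identifiers are placeholders, not real resources; the key field is `MigrationType="full-load-and-cdc"`, which loads existing data and then replicates ongoing changes.

```python
# Sketch: request parameters for a DMS task that does a full load followed
# by ongoing change data capture (CDC). All ARNs below are placeholders.

def build_dms_task_params(source_arn: str, target_arn: str, instance_arn: str) -> dict:
    """Build the kwargs for a boto3 dms.create_replication_task call."""
    table_mappings = (
        '{"rules": [{"rule-type": "selection", "rule-id": "1", '
        '"rule-name": "all", "object-locator": '
        '{"schema-name": "%", "table-name": "%"}, "rule-action": "include"}]}'
    )
    return {
        "ReplicationTaskIdentifier": "pg-to-aurora-sync",
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        # full-load-and-cdc: initial copy, then continuous replication so
        # the on-premises source stays online during the migration.
        "MigrationType": "full-load-and-cdc",
        "TableMappings": table_mappings,
    }

params = build_dms_task_params(
    "arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    "arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    "arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
)
print(params["MigrationType"])
```

The replication instance referenced by `ReplicationInstanceArn` is the "replication server" from option C; the task above is the "ongoing replication task" from option A.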



A company uses AWS Organizations to create dedicated AWS accounts for each business unit so that each business unit can manage its account independently upon request. The root email recipient missed a notification that was sent to the root user email address of one account. The company wants to ensure that all future notifications are not missed. Future notifications must be limited to account administrators.
Which solution will meet these requirements?

  A. Configure the company’s email server to forward notification email messages that are sent to the AWS account root user email address to all users in the organization.
  B. Configure all AWS account root user email addresses as distribution lists that go to a few administrators who can respond to alerts. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.
  C. Configure all AWS account root user email messages to be sent to one administrator who is responsible for monitoring alerts and forwarding those alerts to the appropriate groups.
  D. Configure all existing AWS accounts and all newly created accounts to use the same root user email address. Configure AWS account alternate contacts in the AWS Organizations console or programmatically.

Answer(s): B

Explanation:

A) Not correct. Forwarding root email to all users broadly distributes sensitive messages and violates least privilege. B) Correct. Making each root user email address a distribution list that reaches a small group of administrators ensures notifications never depend on one person's inbox, and configuring alternate contacts (billing, operations, security) per account routes AWS notifications to the appropriate administrators. C) Not correct. A single administrator is a single point of failure and does not scale across many accounts. D) Not correct. Each AWS account must have a unique root user email address, so a shared root email across accounts is not possible, and it would also blur per-account ownership.
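The alternate-contact half of option B can be sketched as the request payloads for boto3's `account.put_alternate_contact` API, one per contact type. Only the payloads are built here so the shape is visible without AWS credentials; the account ID, domain, and phone number are hypothetical.

```python
# Sketch: per-account alternate contact payloads, one for each of the
# three contact types AWS supports. Email addresses are placeholder
# distribution lists, as recommended in the explanation above.

CONTACT_TYPES = ["BILLING", "OPERATIONS", "SECURITY"]

def alternate_contact_requests(account_id: str, dl_domain: str) -> list[dict]:
    """One put_alternate_contact payload per contact type for an account."""
    return [
        {
            "AccountId": account_id,
            "AlternateContactType": contact_type,
            # A distribution list, not an individual, so alerts are never
            # dependent on a single person reading a mailbox.
            "EmailAddress": f"aws-{contact_type.lower()}@{dl_domain}",
            "Name": f"{contact_type.title()} Team",
            "Title": "Administrator",
            "PhoneNumber": "+1-555-0100",
        }
        for contact_type in CONTACT_TYPES
    ]

requests = alternate_contact_requests("111122223333", "example.com")
print([r["AlternateContactType"] for r in requests])
```

A management-account script could loop these payloads over every member account, which is the "programmatically" path mentioned in option B.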



A company runs its ecommerce application on AWS. Every new order is published as a message in a RabbitMQ queue that runs on an Amazon EC2 instance in a single Availability Zone. These messages are processed by a different application that runs on a separate EC2 instance. This application stores the details in a PostgreSQL database on another EC2 instance. All the EC2 instances are in the same Availability Zone.
The company needs to redesign its architecture to provide the highest availability with the least operational overhead.
What should a solutions architect do to meet these requirements?

  A. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for EC2 instances that host the application. Create another Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.
  B. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ. Create a Multi-AZ Auto Scaling group for EC2 instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
  C. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2 instances that host the application. Migrate the database to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
  D. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another Multi-AZ Auto Scaling group for EC2 instances that host the application. Create a third Multi-AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.

Answer(s): B

Explanation:

Migrating the queue to Amazon MQ for RabbitMQ provides a managed, highly available broker with redundancy across Availability Zones, and a Multi-AZ deployment of Amazon RDS for PostgreSQL provides a managed database with a synchronous standby and automatic failover. Together they deliver the highest availability with the least operational overhead.
B) Correct: a managed, redundant broker on Amazon MQ plus Multi-AZ RDS for PostgreSQL removes both the messaging and the database from self-managed EC2.
A) Incorrect: an EC2 Auto Scaling group does not replicate database data; scaling out database instances behind an ASG is not a valid high-availability design, and it leaves the database self-managed.
C) Incorrect: self-managing RabbitMQ on EC2, even across AZs, still requires cluster configuration, patching, and failover handling that Amazon MQ eliminates.
D) Incorrect: self-managing both the broker and the database on EC2 maximizes, rather than minimizes, operational overhead, and the database ASG again provides no data replication.
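The two managed pieces in option B can be sketched as the request payloads for boto3's `mq.create_broker` and `rds.create_db_instance` calls. Names, passwords, versions, and sizes are placeholders; the fields that matter for availability are `DeploymentMode` (a Multi-AZ RabbitMQ cluster on Amazon MQ) and `MultiAZ` on RDS.

```python
# Sketch: boto3 request payloads for a managed RabbitMQ broker (Amazon MQ)
# and a Multi-AZ RDS for PostgreSQL instance. All identifiers, versions,
# and credentials below are placeholders.

def broker_params() -> dict:
    """Payload shape for mq.create_broker."""
    return {
        "BrokerName": "orders-broker",
        "EngineType": "RABBITMQ",
        "EngineVersion": "3.13",
        "HostInstanceType": "mq.m5.large",
        # Multi-AZ cluster deployment gives broker redundancy without
        # operating RabbitMQ nodes on EC2 ourselves.
        "DeploymentMode": "CLUSTER_MULTI_AZ",
        "PubliclyAccessible": False,
        "Users": [{"Username": "app", "Password": "change-me-please"}],
    }

def db_params() -> dict:
    """Payload shape for rds.create_db_instance."""
    return {
        "DBInstanceIdentifier": "orders-db",
        "Engine": "postgres",
        "DBInstanceClass": "db.m5.large",
        "AllocatedStorage": 100,
        # Multi-AZ RDS keeps a synchronous standby in another AZ and
        # fails over automatically; no self-managed replication needed.
        "MultiAZ": True,
        "MasterUsername": "app",
        "MasterUserPassword": "change-me-please",
    }

print(broker_params()["DeploymentMode"], db_params()["MultiAZ"])
```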



A reporting team receives files each day in an Amazon S3 bucket. The reporting team manually reviews and copies the files from this initial S3 bucket to an analysis S3 bucket each day at the same time to use with Amazon QuickSight. Additional teams are starting to send more files in larger sizes to the initial S3 bucket.
The reporting team wants to move the files automatically to the analysis S3 bucket as the files enter the initial S3 bucket. The reporting team also wants to use AWS Lambda functions to run pattern-matching code on the copied data. In addition, the reporting team wants to send the data files to a pipeline in Amazon SageMaker Pipelines.
What should a solutions architect do to meet these requirements with the LEAST operational overhead?

  A. Create a Lambda function to copy the files to the analysis S3 bucket. Create an S3 event notification for the analysis S3 bucket. Configure Lambda and SageMaker Pipelines as destinations of the event notification. Configure s3:ObjectCreated:Put as the event type.
  B. Create a Lambda function to copy the files to the analysis S3 bucket. Configure the analysis S3 bucket to send event notifications to Amazon EventBridge (Amazon CloudWatch Events). Configure an ObjectCreated rule in EventBridge (CloudWatch Events). Configure Lambda and SageMaker Pipelines as targets for the rule.
  C. Configure S3 replication between the S3 buckets. Create an S3 event notification for the analysis S3 bucket. Configure Lambda and SageMaker Pipelines as destinations of the event notification. Configure s3:ObjectCreated:Put as the event type.
  D. Configure S3 replication between the S3 buckets. Configure the analysis S3 bucket to send event notifications to Amazon EventBridge (Amazon CloudWatch Events). Configure an ObjectCreated rule in EventBridge (CloudWatch Events). Configure Lambda and SageMaker Pipelines as targets for the rule.

Answer(s): D

Explanation:

D is correct because S3 Replication copies new objects to the analysis bucket automatically on the server side, with no custom code to write or maintain. Enabling EventBridge notifications on the analysis bucket then lets a single Object Created rule fan out to both Lambda (for the pattern-matching code) and a SageMaker pipeline, since EventBridge supports SageMaker Pipelines as a rule target.
A and B are incorrect because a custom Lambda copy function must be written, maintained, and monitored, which is more operational overhead than S3 Replication.
A and C are also incorrect because standard S3 event notifications can deliver only to SNS, SQS, and Lambda; they cannot invoke SageMaker Pipelines directly, so EventBridge routing is required.
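The EventBridge rule in option D can be sketched as a concrete event pattern. The bucket name is a placeholder, and the bucket must have EventBridge notifications enabled for these events to flow; one rule with this pattern can then carry both a Lambda target and a SageMaker pipeline target.

```python
import json

# Sketch: EventBridge event pattern matching every new object written to
# the analysis bucket. "analysis-bucket" is a placeholder name.

def object_created_pattern(bucket: str) -> str:
    """Event pattern (as a JSON string) for S3 'Object Created' events."""
    return json.dumps({
        # S3 emits these events to EventBridge once the bucket has
        # EventBridge notifications turned on.
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": [bucket]}},
    })

pattern = object_created_pattern("analysis-bucket")
print(pattern)
```

This pattern would be passed as the `EventPattern` argument of an `events.put_rule` call, with the Lambda function and the SageMaker pipeline attached via `events.put_targets`.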



A solutions architect needs to help a company optimize the cost of running an application on AWS. The application will use Amazon EC2 instances, AWS Fargate, and AWS Lambda for compute within the architecture.
The EC2 instances will run the data ingestion layer of the application. EC2 usage will be sporadic and unpredictable. Workloads that run on EC2 instances can be interrupted at any time. The application front end will run on Fargate, and Lambda will serve the API layer. The front-end utilization and API layer utilization will be predictable over the course of the next year.
Which combination of purchasing options will provide the MOST cost-effective solution for hosting this application? (Choose two.)

  A. Use Spot Instances for the data ingestion layer.
  B. Use On-Demand Instances for the data ingestion layer.
  C. Purchase a 1-year Compute Savings Plan for the front end and API layer.
  D. Purchase 1-year All Upfront Reserved Instances for the data ingestion layer.
  E. Purchase a 1-year EC2 Instance Savings Plan for the front end and API layer.

Answer(s): A,C

Explanation:

A) Spot Instances fit the data ingestion layer: usage is sporadic and unpredictable, and the workload tolerates interruption, so Spot's deep discount applies without risk. C) A 1-year Compute Savings Plan covers EC2, Fargate, and Lambda, so it matches the predictable front-end (Fargate) and API-layer (Lambda) usage. B is incorrect because On-Demand costs more than Spot for interruption-tolerant work. D is incorrect because Reserved Instances require a commitment to steady usage, which the sporadic ingestion layer does not have. E is incorrect because an EC2 Instance Savings Plan applies only to EC2 usage and would not cover Fargate or Lambda.
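The Spot half of the answer can be sketched as the request payload for boto3's `ec2.run_instances` call with `InstanceMarketOptions`. The AMI ID and instance type are placeholders.

```python
# Sketch: launching an interruption-tolerant ingestion instance on Spot
# capacity. The AMI ID and instance type below are placeholders.

def spot_launch_params(ami_id: str) -> dict:
    """Payload shape for ec2.run_instances requesting Spot capacity."""
    return {
        "ImageId": ami_id,
        "InstanceType": "m5.large",
        "MinCount": 1,
        "MaxCount": 1,
        "InstanceMarketOptions": {
            "MarketType": "spot",
            "SpotOptions": {
                # 'terminate' on interruption suits batch ingestion; the
                # work must be checkpointed or re-queued when interrupted.
                "SpotInstanceType": "one-time",
                "InstanceInterruptionBehavior": "terminate",
            },
        },
    }

print(spot_launch_params("ami-0123456789abcdef0")["InstanceMarketOptions"]["MarketType"])
```

The Compute Savings Plan half of the answer is a billing commitment made in the console or Cost Explorer rather than an API call at launch time, so no code is needed for it.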





