Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 10)

Updated On: 21-Mar-2026

A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?

  1. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
  2. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
  3. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule's target. Create a second EventBridge (Amazon CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.
  4. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.

Answer(s): B

Explanation:

Amazon AppFlow is a fully managed integration service that transfers data between SaaS applications and AWS services such as Amazon S3, removing the EC2-based receive-and-upload logic entirely while scaling automatically.
A) An Auto Scaling group scales the EC2 fleet but keeps the instance-based transfer logic in place, so it does not reduce management of the SaaS-to-S3 pipeline; it adds capacity, not simplicity.
B) AppFlow handles the SaaS-to-S3 transfers natively and scales automatically, and the S3 event notification to the SNS topic covers the completion message with minimal configuration, meeting the low-operational-overhead requirement.
C) An EventBridge rule per SaaS source means multiple rules and custom routing logic to build and maintain, which increases operational overhead.
D) Migrating to containers on Amazon ECS adds orchestration and deployment management while still leaving the application responsible for the transfers, so it is not as low-overhead as AppFlow.
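The completion notification in options A and B relies on the same S3 feature. As a rough sketch (the topic ARN below is a placeholder, and this only builds the configuration dict that boto3's `put_bucket_notification_configuration` expects as `NotificationConfiguration`), the wiring might look like:

```python
# Build the S3 event-notification configuration that publishes
# "object created" events to an SNS topic when an upload completes.
# The topic ARN is a placeholder, not a real resource.
def upload_complete_notification(topic_arn: str) -> dict:
    return {
        "TopicConfigurations": [
            {
                "TopicArn": topic_arn,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    }

config = upload_complete_notification(
    "arn:aws:sns:us-east-1:123456789012:upload-complete"
)
```

In practice the SNS topic's access policy must also allow the bucket to publish to it before this configuration is applied.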



A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges.
What is the MOST cost-effective way for the company to avoid Regional data transfer charges?

  1. Launch the NAT gateway in each Availability Zone.
  2. Replace the NAT gateway with a NAT instance.
  3. Deploy a gateway VPC endpoint for Amazon S3.
  4. Provision an EC2 Dedicated Host to run the EC2 instances.

Answer(s): C

Explanation:

A) A NAT gateway in each Availability Zone avoids cross-AZ traffic to the NAT gateway but adds per-gateway hourly costs, and S3 traffic still flows through NAT with its data processing charges.
B) A NAT instance may reduce hourly costs but requires management and still routes S3 traffic through NAT, so the transfer charges remain.
C) A gateway VPC endpoint for Amazon S3 routes S3 traffic privately over the AWS network, bypassing the NAT gateway entirely; there is no charge for the endpoint or for data transferred between the VPC and S3 in the same Region.
D) An EC2 Dedicated Host affects instance placement and licensing, not S3 data transfer costs or routing.
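As a sketch of how option C is provisioned (the VPC and route table IDs are placeholders; this builds the keyword arguments for boto3's `ec2.create_vpc_endpoint`):

```python
# Keyword arguments for creating a gateway VPC endpoint for Amazon S3.
# A gateway endpoint works by adding routes to the given route tables,
# so S3 traffic bypasses the NAT gateway. IDs here are placeholders.
def s3_gateway_endpoint_params(region: str, vpc_id: str,
                               route_table_ids: list) -> dict:
    return {
        "VpcEndpointType": "Gateway",
        "ServiceName": f"com.amazonaws.{region}.s3",
        "VpcId": vpc_id,
        "RouteTableIds": route_table_ids,
    }

params = s3_gateway_endpoint_params("us-east-1", "vpc-0abc1234",
                                    ["rtb-0def5678"])
```

Associating the endpoint with the subnets' route tables is what redirects S3-bound traffic away from the NAT gateway.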



A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown, and users complain about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows timely backups to Amazon S3 with minimal impact on internet connectivity for internal users.
Which solution meets these requirements?

  1. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint.
  2. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
  3. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.
  4. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.

Answer(s): B

Explanation:

A) AWS VPN connections still run over the internet, so large, time-sensitive backups would continue to consume the same internet bandwidth; this does not address the long-term bandwidth problem.
B) AWS Direct Connect provides a dedicated, high-bandwidth, low-latency connection to AWS. Directing backup traffic over it keeps backups timely and removes that traffic from the internet link, meeting both requirements.
C) AWS Snowball is designed for offline, bulk data transfer and is not suitable for continuous, time-sensitive daily backups; shipping devices every day adds operational overhead and timing gaps.
D) Removing S3 service limits does not address network bandwidth or backup performance; it changes quotas without changing the data transfer path.



A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

  1. Enable versioning on the S3 bucket.
  2. Enable MFA Delete on the S3 bucket.
  3. Create a bucket policy on the S3 bucket.
  4. Enable default encryption on the S3 bucket.
  5. Create a lifecycle policy for the objects in the S3 bucket.

Answer(s): A,B

Explanation:

Enabling versioning preserves previous object versions, allowing recovery from accidental deletions and overwrites. Enabling MFA Delete requires multi-factor authentication to permanently delete an object version or change the bucket's versioning state, adding a safeguard against both accidental and malicious deletions.
A) Correct: versioning preserves prior versions of the data.
B) Correct: MFA Delete provides an additional deletion safeguard.
C) A bucket policy alone does not prevent deletions unless it is written with explicit deny rules for delete actions.
D) Default encryption protects data at rest but offers no deletion protection.
E) Lifecycle policies manage object aging and storage-class transitions; they do not protect against deletion.
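A sketch of enabling both protections in one call (the bucket name and MFA device values are placeholders; this builds the arguments for boto3's `s3.put_bucket_versioning`):

```python
# Arguments for s3.put_bucket_versioning that turn on versioning and
# MFA Delete together. The MFA value is "<device-serial> <code>";
# both parts below are placeholders.
def enable_versioning_with_mfa_delete(bucket: str, mfa_serial: str,
                                      mfa_code: str) -> dict:
    return {
        "Bucket": bucket,
        "MFA": f"{mfa_serial} {mfa_code}",
        "VersioningConfiguration": {
            "Status": "Enabled",
            "MFADelete": "Enabled",
        },
    }

args = enable_versioning_with_mfa_delete(
    "critical-data-bucket",
    "arn:aws:iam::123456789012:mfa/root-device",
    "123456",
)
```

Note that MFA Delete can only be enabled by the bucket owner's root credentials with an MFA device attached.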



A company has a data ingestion workflow that consists of the following:
• An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries
• An AWS Lambda function to process the data and record metadata
The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda function does not ingest the corresponding data unless the company manually reruns the job.
Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Choose two.)

  1. Deploy the Lambda function in multiple Availability Zones.
  2. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.
  3. Increase the CPU and memory that are allocated to the Lambda function.
  4. Increase provisioned throughput for the Lambda function.
  5. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.

Answer(s): B,E

Explanation:

Options B and E together decouple ingestion from real-time delivery and make processing reliable.
B) Subscribing an SQS queue to the SNS topic persists every published message durably, so deliveries survive transient network failures or Lambda unavailability.
E) Reading from the SQS queue lets the Lambda function poll and process messages at its own pace, with built-in retries for failed invocations.
A) Lambda already runs across multiple Availability Zones; this does not provide retry semantics for missed data.
C) More CPU and memory does not help with transient network failures and does not guarantee retries.
D) Lambda has no provisioned throughput setting; it is governed by concurrency limits, not throughput units.
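A sketch of the subscription in option B (the ARNs are placeholders; this builds the arguments for boto3's `sns.subscribe`). `RawMessageDelivery` is optional but lets the Lambda function read the original payload from SQS without the SNS envelope:

```python
# Arguments for sns.subscribe that attach an SQS queue to the SNS
# topic, so every notification is durably stored until the Lambda
# function processes it. ARNs are placeholders.
def subscribe_queue_to_topic(topic_arn: str, queue_arn: str) -> dict:
    return {
        "TopicArn": topic_arn,
        "Protocol": "sqs",
        "Endpoint": queue_arn,
        "Attributes": {"RawMessageDelivery": "true"},
    }

sub = subscribe_queue_to_topic(
    "arn:aws:sns:us-east-1:123456789012:new-data",
    "arn:aws:sqs:us-east-1:123456789012:ingest-queue",
)
```

The queue's access policy must also allow the SNS topic to send messages, and the queue is then configured as the Lambda function's event source.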


