Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 31)

Updated On: 20-Mar-2026

A company runs a stateless web application in production on a group of Amazon EC2 On-Demand Instances behind an Application Load Balancer. The application experiences heavy usage during an 8-hour period each business day. Application usage is moderate and steady overnight. Application usage is low during weekends.
The company wants to minimize its EC2 costs without affecting the availability of the application.
Which solution will meet these requirements?

  1. Use Spot Instances for the entire workload.
  2. Use Reserved Instances for the baseline level of usage. Use Spot instances for any additional capacity that the application needs.
  3. Use On-Demand Instances for the baseline level of usage. Use Spot Instances for any additional capacity that the application needs.
  4. Use Dedicated Instances for the baseline level of usage. Use On-Demand Instances for any additional capacity that the application needs.

Answer(s): B

Explanation:

A cost-optimized design pairs a discounted, stable baseline with cheaper Spot capacity for peak demand.
A) Incorrect: Running the entire workload on Spot Instances risks interruptions, which would harm availability.
B) Correct: Reserved Instances cover the steady baseline (the moderate overnight usage) at a significant discount, while Spot Instances supply the extra capacity needed during the 8-hour daily peak. If Spot capacity is interrupted, the Reserved baseline keeps the application available.
C) Incorrect: On-Demand pricing for a predictable baseline costs more than Reserved Instances; Spot for peaks is sound, but this option forgoes the Reserved Instance discount on steady usage.
D) Incorrect: Dedicated Instances carry a price premium and are unnecessary for this scenario; this option does not minimize cost.
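The baseline-plus-Spot split described above maps directly onto an EC2 Auto Scaling group's mixed instances policy. Below is a minimal sketch of the `MixedInstancesPolicy` section of a `CreateAutoScalingGroup` request, built as a plain dictionary; the launch template name and instance types are hypothetical placeholders. Note that Reserved Instances are a billing construct: they automatically discount the On-Demand baseline capacity defined here.

```python
# Sketch of an Auto Scaling MixedInstancesPolicy: a fixed On-Demand baseline
# (discounted by matching Reserved Instance purchases) with all capacity
# above the baseline served by Spot. Names are hypothetical.

def build_mixed_instances_policy(baseline_capacity: int) -> dict:
    """Return the MixedInstancesPolicy section of a CreateAutoScalingGroup call."""
    return {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "web-app-template",  # hypothetical
                "Version": "$Latest",
            },
            # Several instance types improve the odds of finding Spot capacity.
            "Overrides": [{"InstanceType": t} for t in ("m5.large", "m5a.large")],
        },
        "InstancesDistribution": {
            # The first `baseline_capacity` instances run On-Demand; RIs bought
            # for that baseline discount them automatically on the bill.
            "OnDemandBaseCapacity": baseline_capacity,
            # Everything above the baseline is 100% Spot.
            "OnDemandPercentageAboveBaseCapacity": 0,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    }

policy = build_mixed_instances_policy(baseline_capacity=4)
```

With boto3, this dictionary would be passed as the `MixedInstancesPolicy` parameter of `autoscaling.create_auto_scaling_group(...)`.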



A company needs to retain application log files for a critical application for 10 years. The application team regularly accesses logs from the past month for troubleshooting, but logs older than 1 month are rarely accessed. The application generates more than 10 TB of logs per month.
Which storage option meets these requirements MOST cost-effectively?

  1. Store the logs in Amazon S3. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
  2. Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.
  3. Store the logs in Amazon CloudWatch Logs. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
  4. Store the logs in Amazon CloudWatch Logs. Use Amazon S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.

Answer(s): B

Explanation:

Storing logs in S3 with lifecycle policy to move older-than-1-month data to S3 Glacier Deep Archive is most cost-effective for 10-year retention and infrequent access, while keeping recent logs readily available in S3.
A) Incorrect: AWS Backup is not the mechanism for tiering S3 data; transitions to Glacier Deep Archive are configured directly with S3 Lifecycle rules, and adding AWS Backup only adds cost and complexity.
C) Incorrect: CloudWatch Logs is not cost-effective for 10-year retention at more than 10 TB per month, and AWS Backup does not move CloudWatch Logs data into Glacier Deep Archive.
D) Incorrect: S3 Lifecycle policies do not apply to CloudWatch Logs; log data would first have to be exported to S3 before lifecycle tiering could be used.
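Option B amounts to a single S3 Lifecycle rule: transition after 30 days, expire at the end of the retention period. A minimal sketch of that rule follows, built as the request dictionary for `put_bucket_lifecycle_configuration`; the prefix is a hypothetical placeholder.

```python
# Sketch of the S3 Lifecycle configuration for option B: transition log
# objects to Glacier Deep Archive after 30 days and delete them after
# roughly 10 years. The prefix is hypothetical.

RETENTION_DAYS = 10 * 365  # ~10 years, ignoring leap days

lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "app-logs/"},  # hypothetical prefix
            "Transitions": [
                # Rarely-accessed logs move to the cheapest archive tier.
                {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
            ],
            # Objects are removed once the 10-year retention window ends.
            "Expiration": {"Days": RETENTION_DAYS},
        }
    ]
}
```

With boto3 this would be applied via `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=lifecycle_configuration)`.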



A company has a data ingestion workflow that includes the following components:
An Amazon Simple Notification Service (Amazon SNS) topic that receives notifications about new data deliveries
An AWS Lambda function that processes and stores the data
The ingestion workflow occasionally fails because of network connectivity issues. When failure occurs, the corresponding data is not ingested unless the company manually reruns the job.
What should a solutions architect do to ensure that all notifications are eventually processed?

  1. Configure the Lambda function for deployment across multiple Availability Zones.
  2. Modify the Lambda function's configuration to increase the CPU and memory allocations for the function.
  3. Configure the SNS topic’s retry strategy to increase both the number of retries and the wait time between retries.
  4. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the on-failure destination. Modify the Lambda function to process messages in the queue.

Answer(s): D

Explanation:

The correct answer is D.
D) Using an SQS queue as the on-failure destination and configuring Lambda to process messages from the queue ensures eventual processing and decouples retries from the live flow. Messages that fail Lambda processing are durably stored in SQS and retried, even during transient network issues, eliminating manual reruns.
A) Deploying Lambda across AZs does not affect processing guarantees or failure handling for transient network outages.
B) Increasing CPU/memory does not address failure handling, retries, or message durability; it may improve performance but not guarantee eventual processing.
C) SNS retry strategy applies to message delivery to endpoints but does not guarantee retries of Lambda processing failures or provide durable queuing for failed invocations.
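For SNS-triggered (asynchronous) invocations, option D is configured through Lambda's event invoke config: after the built-in retries are exhausted, the failed event is routed durably to an SQS on-failure destination. Below is a minimal sketch of that request dictionary; the function name and queue ARN are hypothetical.

```python
# Sketch of option D: route events that still fail after Lambda's async
# retries to an SQS queue, where they persist until reprocessed. The
# function name and ARN are hypothetical.

event_invoke_config = {
    "FunctionName": "ingest-data",  # hypothetical function name
    "MaximumRetryAttempts": 2,      # Lambda's built-in async retries run first
    "DestinationConfig": {
        "OnFailure": {
            # Events that exhaust retries land here durably instead of
            # being lost, so no manual rerun is needed.
            "Destination": "arn:aws:sqs:us-east-1:123456789012:ingest-failures"
        }
    },
}
```

With boto3 this would be applied via `lambda_client.put_function_event_invoke_config(**event_invoke_config)`; the function (or a redrive job) then polls the queue to reprocess the stored events.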



A company has a service that produces event data. The company wants to use AWS to process the event data as it is received. The data is written in a specific order that must be maintained throughout processing. The company wants to implement a solution that minimizes operational overhead.
How should a solutions architect accomplish this?

  1. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages. Set up an AWS Lambda function to process messages from the queue.
  2. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an AWS Lambda function as a subscriber.
  3. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to hold messages. Set up an AWS Lambda function to process messages from the queue independently.
  4. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a subscriber.

Answer(s): A

Explanation:

The correct answer is A because a FIFO SQS queue preserves message order and exactly-once processing with Lambda as a consumer minimizes operational overhead, meeting ordered processing requirements with serverless integration.
B is incorrect because SNS with Lambda as a subscriber delivers messages but does not guarantee in-order processing; not suitable for maintaining strict event order.
C is incorrect because a standard SQS queue does not guarantee message order, risking out-of-order processing, which violates the requirement.
D is incorrect because a standard SNS topic fanning out to a standard SQS queue does not guarantee ordering; preserving order would require the FIFO variants of both services, which adds complexity compared with option A.
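Option A hinges on two FIFO-specific details: the queue name must end in `.fifo`, and every send must carry a `MessageGroupId` (ordering is guaranteed per group, so a single group id preserves strict global order). A minimal sketch with hypothetical names:

```python
# Sketch of option A: create-queue parameters for an SQS FIFO queue and a
# helper that pins all events to one MessageGroupId so they are delivered
# in the order they were sent. Names are hypothetical.

queue_params = {
    "QueueName": "event-data.fifo",  # FIFO queue names must end in .fifo
    "Attributes": {
        "FifoQueue": "true",
        # Deduplicate by message-body hash instead of explicit dedup ids.
        "ContentBasedDeduplication": "true",
    },
}

def build_send_params(queue_url: str, body: str) -> dict:
    """Return SendMessage parameters that preserve strict ordering."""
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        # One group id across all messages = one strictly ordered stream.
        "MessageGroupId": "event-stream",
    }
```

With boto3, `sqs.create_queue(**queue_params)` creates the queue and `sqs.send_message(**build_send_params(url, body))` enqueues events; an SQS event source mapping on the Lambda function then consumes each message group in order.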



A company is migrating an application from on-premises servers to Amazon EC2 instances. As part of the migration design requirements, a solutions architect must implement infrastructure metric alarms. The company does not need to take action if CPU utilization increases to more than 50% for a short burst of time. However, if the CPU utilization increases to more than 50% and read IOPS on the disk are high at the same time, the company needs to act as soon as possible. The solutions architect also must reduce false alarms.
What should the solutions architect do to meet these requirements?

  1. Create Amazon CloudWatch composite alarms where possible.
  2. Create Amazon CloudWatch dashboards to visualize the metrics and react to issues quickly.
  3. Create Amazon CloudWatch Synthetics canaries to monitor the application and raise an alarm.
  4. Create single Amazon CloudWatch metric alarms with multiple metric thresholds where possible.

Answer(s): A

Explanation:

The correct answer is A.
A) A composite alarm combines existing metric alarms with a rule expression; an AND rule fires only when both the CPU alarm and the read-IOPS alarm are in the ALARM state at the same time, which suppresses alerts for short CPU bursts alone and reduces false positives.
B) Dashboards visualize metrics but do not raise alarms or automate actions, so they don’t meet the “act as soon as possible” requirement.
C) Synthetics canaries monitor application endpoints for availability; they do not correlate multiple infrastructure metrics for alarming.
D) A single metric alarm cannot evaluate two independent metrics against separate thresholds; composite alarms are the supported mechanism for correlated conditions.
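The AND condition in option A is expressed as a composite alarm's rule string referencing the two child metric alarms by name. Below is a minimal sketch of the `put_composite_alarm` parameters; the alarm names and SNS topic ARN are hypothetical, and the two child alarms would be ordinary metric alarms created separately.

```python
# Sketch of option A: a composite alarm that fires only when the CPU alarm
# AND the read-IOPS alarm are both in the ALARM state simultaneously.
# Alarm names and the SNS ARN are hypothetical.

composite_alarm = {
    "AlarmName": "cpu-and-read-iops-high",
    # The rule references the two child metric alarms by name; short CPU
    # bursts alone leave the expression false, avoiding false alarms.
    "AlarmRule": "ALARM(cpu-above-50) AND ALARM(read-iops-high)",
    "ActionsEnabled": True,
    "AlarmActions": [
        "arn:aws:sns:us-east-1:123456789012:ops-alerts"  # hypothetical
    ],
}
```

With boto3 this would be created via `cloudwatch.put_composite_alarm(**composite_alarm)` after the two child alarms exist.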



Viewing page 31 of 205
Viewing questions 151 - 155 out of 824 questions


