Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 8)

Updated On: 16-Mar-2026

A company that hosts its web application on AWS wants to ensure that all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?

  A. Use AWS Config rules to define and detect resources that are not properly tagged.
  B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
  C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
  D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code.

Answer(s): A

Explanation:

A) AWS Config rules provide continuous governance and can evaluate tagging compliance across EC2, RDS, and Redshift with built-in or custom rules, minimizing operational effort.
B) Cost Explorer is for cost and usage but not for enforcing tag compliance; it does not automatically detect or remediate tagging gaps.
C) Writing API calls and running from EC2 is manual, ad-hoc, and requires ongoing maintenance; not scalable or low-effort.
D) Scheduling API checks via Lambda adds automation but still requires building/maintaining custom logic; Config rules offer centralized, managed tagging governance.
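As a minimal sketch of what answer A implies: AWS Config's managed `required-tags` rule handles this case with no custom code, but a custom rule's backing Lambda function would perform a check along these lines. The tag keys and the `evaluate_tags` helper below are illustrative assumptions, not part of the question.

```python
# Illustrative tag-compliance check, as a custom AWS Config rule's Lambda
# function might perform it. In practice the managed "required-tags" rule
# covers this scenario without any custom code.

REQUIRED_TAGS = {"Environment", "Owner", "CostCenter"}  # assumed tag keys

def evaluate_tags(resource_tags):
    """Return a Config-style compliance verdict for one resource.

    resource_tags: dict of tag key -> value as reported in the resource's
    configuration item, e.g. {"Environment": "prod"}.
    """
    missing = REQUIRED_TAGS - set(resource_tags)
    if missing:
        return {
            "ComplianceType": "NON_COMPLIANT",
            "Annotation": "Missing required tags: " + ", ".join(sorted(missing)),
        }
    return {"ComplianceType": "COMPLIANT",
            "Annotation": "All required tags present"}
```

Either way, Config evaluates resources continuously and surfaces noncompliant ones centrally, which is what makes option A lower effort than the custom-code options.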



A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images.
Which method is the MOST cost-effective for hosting the website?

  A. Containerize the website and host it in AWS Fargate.
  B. Create an Amazon S3 bucket and host the website there.
  C. Deploy a web server on an Amazon EC2 instance to host the website.
  D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.

Answer(s): B

Explanation:

B) is correct: Amazon S3 static website hosting is the most cost-effective option for purely static content (HTML, CSS, client-side JavaScript, and images), offering low per-GB storage and GET request pricing, automatic scalability, and minimal maintenance; CloudFront can optionally be added in front. A) Fargate adds container orchestration and compute costs that static assets do not need. C) An EC2 web server incurs ongoing server management and higher cost for a simple static site. D) An ALB with a Lambda/Express.js target introduces unnecessary compute and latency for serving static files.
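A rough sketch of option B, assuming boto3 is available: the website configuration shape below matches the S3 `PutBucketWebsite` API, while the bucket name and document names are hypothetical. The actual API call requires AWS credentials, so it is shown commented out.

```python
# Sketch of enabling S3 static website hosting. Only the configuration
# dict is exercised here; the boto3 call itself needs a real account.

def build_website_config(index_doc="index.html", error_doc="error.html"):
    """Build the WebsiteConfiguration dict passed to put_bucket_website."""
    return {
        "IndexDocument": {"Suffix": index_doc},
        "ErrorDocument": {"Key": error_doc},
    }

# In a real account (requires credentials, hence commented out):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_website(
#     Bucket="example-team-site",  # hypothetical bucket name
#     WebsiteConfiguration=build_website_config(),
# )
```

The bucket would also need the static files uploaded and a policy (or CloudFront origin access) allowing the other teams to read them.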



A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?

  A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
  B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
  C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
  D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.

Answer(s): C

Explanation:

The correct answer, C, uses Kinesis Data Streams for near-real-time ingestion, Lambda for on-the-fly redaction, and DynamoDB for low-latency storage accessible by multiple applications, meeting scalability and latency requirements.
A is incorrect because DynamoDB Streams with direct writes and in-place data masking is not designed for real-time processing of large volumes and cross-application consumption with flexible transformation; it’s better suited for event-driven updates, not streaming ingestion.
B is incorrect because Kinesis Data Firehose delivers to destinations such as Amazon S3, Amazon Redshift, and Amazon OpenSearch Service; it does not deliver to DynamoDB directly. Its buffering adds latency, and objects in S3 are less suitable for low-latency retrieval than a DynamoDB table.
D is incorrect because batching to S3 and per-file Lambda processing introduces additional delay and complexity; DynamoDB as the primary low-latency store is not used directly for stream consumers here.
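To make option C concrete, here is a hedged sketch of the redaction step a Lambda function consuming the Kinesis data stream might perform. The field names in `SENSITIVE_FIELDS` and the record payload shape are assumptions for illustration; only the Kinesis event envelope (base64-encoded `kinesis.data`) follows the real Lambda event format.

```python
import base64
import json

# Assumed names of sensitive fields to strip before storage.
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}

def redact(transaction):
    """Return a copy of the transaction with sensitive fields removed."""
    return {k: v for k, v in transaction.items() if k not in SENSITIVE_FIELDS}

def handler(event, context=None):
    """Decode Kinesis records, redact each one, and return the clean items.

    A real handler would write each cleaned item to DynamoDB (e.g. with
    table.put_item()); that call is omitted so the sketch stays runnable
    without AWS credentials.
    """
    cleaned = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        cleaned.append(redact(payload))
    return cleaned
```

Other internal applications consume the same stream independently via their own shard iterators or enhanced fan-out consumers, which is what makes C's fan-out scalable.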



A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration changes on its AWS resources and record a history of API calls made to these resources.
What should a solutions architect do to meet these requirements?

  A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls.
  B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
  C. Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls.
  D. Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls.

Answer(s): B

Explanation:

AWS Config tracks configuration changes and provides a history of resource configurations, while AWS CloudTrail records API calls for governance and auditing. Therefore option B correctly assigns AWS Config for configuration history and AWS CloudTrail for API call logging.
A) Reverses roles; CloudTrail logs API activity, not configuration history; Config is needed for resource configuration changes.
C) CloudWatch does not provide a comprehensive API-call log for governance; it is primarily for metrics and logs, not an authoritative API activity history.
D) CloudTrail handles API calls, but CloudWatch alone does not provide configuration history.



A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks.
Which solution meets these requirements?

  A. Enable Amazon GuardDuty on the account.
  B. Enable Amazon Inspector on the EC2 instances.
  C. Enable AWS Shield and assign Amazon Route 53 to it.
  D. Enable AWS Shield Advanced and assign the ELB to it.

Answer(s): D

Explanation:

AWS Shield Advanced, with the ELB registered as a protected resource, provides enhanced DDoS mitigation for the public-facing application, including near-real-time attack visibility, 24/7 access to the AWS DDoS Response Team, and cost protection against scaling charges during an attack. This fits the requirement to detect and protect against large-scale DDoS attacks.
A) GuardDuty detects anomalous activity and potential threats in AWS accounts and workloads; it is not a DDoS mitigation service.
B) Inspector assesses EC2 instances for software vulnerabilities; it provides no DDoS protection.
C) The company uses a third-party DNS service, so Route 53 is not part of the architecture; Shield Standard is also enabled automatically and cannot be "assigned" to a resource. Shield Advanced applied to the ELB (option D) is the correct pairing for this scenario.



Viewing page 8 of 205
Viewing questions 36 - 40 out of 824 questions


