Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 11)

Updated On: 18-Mar-2026

A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of the files can exceed 200 GB in size.
Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not have been included. The company wants administrators to be alerted if PII is shared again. The company also wants to automate remediation.
What should a solutions architect do to meet these requirements with the LEAST development effort?

  1. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Inspector to scan the objects in the bucket. If objects contain PII, trigger an S3 Lifecycle policy to remove the objects that contain PII.
  2. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
  3. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
  4. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Email Service (Amazon SES) to trigger a notification to the administrators and trigger an S3 Lifecycle policy to remove the objects that contain PII.

Answer(s): B

Explanation:

S3 + Macie provides scalable, low-effort PII discovery with automated remediation and alerting, minimizing development work.
A) Incorrect: Amazon Inspector assesses EC2 instances, container images, and Lambda functions for software vulnerabilities; it does not scan S3 objects for PII. In addition, S3 Lifecycle rules run on age-based schedules and cannot be triggered by a scan finding.
B) Correct: Macie automatically discovers PII in S3, can alert via SNS for remediation, and requires minimal custom code.
C) Incorrect: Custom Lambda requires building and maintaining scanning logic for PII, increasing development effort; lacks the built-in PII discovery capabilities of Macie.
D) Incorrect: Uses SES for alerts (email) and lifecycle, but requires custom scanning; more friction and less robust alerting than Macie + SNS.
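In practice, Macie publishes its findings to Amazon EventBridge, and a rule routes matching findings to the SNS topic. The sketch below shows, under that assumption, the event pattern and the rule/target payloads you would pass to the EventBridge API; the topic ARN and rule name are hypothetical examples, and no API call is made.

```python
import json

# Event pattern matching Amazon Macie sensitive-data findings
# (Macie emits events with source "aws.macie" and detail-type
# "Macie Finding"; sensitive-data finding types start with "SensitiveData").
event_pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {
        "type": [{"prefix": "SensitiveData"}],
    },
}

# Payloads for events:PutRule and events:PutTargets (shown as plain
# dicts; names and the SNS topic ARN are hypothetical examples).
rule_params = {
    "Name": "macie-pii-findings",
    "EventPattern": json.dumps(event_pattern),
    "State": "ENABLED",
}
target_params = {
    "Rule": "macie-pii-findings",
    "Targets": [
        {
            "Id": "pii-alerts-sns",
            "Arn": "arn:aws:sns:us-east-1:123456789012:pii-alerts",
        }
    ],
}
```

Administrators subscribed to the topic then receive the alert and can remove the offending objects, which keeps custom code to a minimum.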



A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week.
What should the company do to guarantee the EC2 capacity?

  1. Purchase Reserved Instances that specify the Region needed.
  2. Create an On-Demand Capacity Reservation that specifies the Region needed.
  3. Purchase Reserved Instances that specify the Region and three Availability Zones needed.
  4. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.

Answer(s): D

Explanation:

Creating On-Demand Capacity Reservations (ODCRs) that specify the Region and the three Availability Zones guarantees EC2 capacity for the event. A Capacity Reservation holds capacity for a specific instance type and Availability Zone, takes effect immediately, and can be canceled when the 1-week event ends, so no long-term commitment is required.
A) Incorrect: Reserved Instances provide discounted pricing, not guaranteed capacity or explicit AZ-level reservations for a time-bound event.
B) Incorrect: a Capacity Reservation must target a specific Availability Zone; specifying only the Region does not guarantee capacity in the three required AZs.
C) Incorrect: zonal Reserved Instances do include a capacity reservation, but RIs require a 1- or 3-year commitment, which does not fit a 1-week event.
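Since `ec2:CreateCapacityReservation` takes one Availability Zone per call, guaranteeing capacity in three AZs means three reservations. A minimal sketch of the request payloads, assuming example instance type, counts, and AZ names:

```python
# One CreateCapacityReservation payload per Availability Zone.
# Instance type, count, and AZ names are example assumptions.
AZS = ["us-east-1a", "us-east-1b", "us-east-1c"]

reservations = [
    {
        "InstanceType": "m5.xlarge",      # example instance type
        "InstancePlatform": "Linux/UNIX",
        "AvailabilityZone": az,
        "InstanceCount": 20,              # example capacity per AZ
        "EndDateType": "limited",         # reservation expires automatically
        # "EndDate" would be set ~1 week out to cover the event window.
    }
    for az in AZS
]
```

Using `EndDateType: "limited"` with an end date about a week out releases the capacity automatically after the event, so the company stops paying for it without manual cleanup.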



A company's website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly available and that the catalog is stored in a durable location.
What should a solutions architect do to meet these requirements?

  1. Move the catalog to Amazon ElastiCache for Redis.
  2. Deploy a larger EC2 instance with a larger instance store.
  3. Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.
  4. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.

Answer(s): D

Explanation:

The correct answer is D.
A) Not correct because EC2 instance store is ephemeral and does not provide durability or high availability; data is lost on stop, termination, or failure.
B) Not correct; increasing instance size does not protect against instance failure or data loss in the ephemeral store, and it still lacks durable, shared storage.
C) Not correct; S3 Glacier Deep Archive is for long-term archival, not high availability or low-latency access for catalog data.
D) Correct because Amazon EFS provides a durable, scalable, shared file system accessible from multiple instances, enabling high availability and data durability beyond a single EC2 instance.
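To make the file system reachable from instances in several AZs, EFS needs one mount target per AZ. The sketch below shows the payloads for `efs:CreateFileSystem` and `efs:CreateMountTarget` as plain dicts; the subnet and security group IDs are hypothetical placeholders.

```python
# Payload for efs:CreateFileSystem.
file_system_params = {
    "PerformanceMode": "generalPurpose",
    "ThroughputMode": "elastic",
    "Encrypted": True,
    "Tags": [{"Key": "Name", "Value": "catalog-fs"}],
}

# One mount target per AZ (subnet/security group IDs are placeholders);
# EC2 instances in each AZ then mount the same durable, shared file system.
mount_targets = [
    {"SubnetId": subnet, "SecurityGroups": ["sg-0123456789abcdef0"]}
    for subnet in ["subnet-aaa111", "subnet-bbb222"]
]
```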



A company stores call transcript files on a monthly basis. Users access the files randomly within 1 year of the call, but users access the files infrequently after 1 year. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than 1-year-old as quickly as possible. A delay in retrieving older files is acceptable.
Which solution will meet these requirements MOST cost-effectively?

  1. Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to retrieve the files from S3 Glacier Instant Retrieval.
  2. Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year. Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3 Glacier Select.
  3. Store individual files with tags in Amazon S3 Standard storage. Store search metadata for each archive in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Instant Retrieval after 1 year. Query and retrieve the files by searching for metadata from Amazon S3.
  4. Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Deep Archive after 1 year. Store search metadata in Amazon RDS. Query the files from Amazon RDS. Retrieve the files from S3 Glacier Deep Archive.

Answer(s): B

Explanation:

A) Incorrect: S3 Glacier Instant Retrieval charges a retrieval fee on every access, which is poorly suited to files accessed randomly throughout their first year, and S3 object tags are not a query mechanism for fast lookups.
B) Correct: Intelligent-Tiering keeps files that are under 1 year old in low-latency storage at an optimized cost, the Lifecycle policy moves them to Glacier Flexible Retrieval after 1 year, and Athena (for objects in S3) plus S3 Glacier Select (for archived objects) provide querying, matching fast access for recent files and acceptable delays for older ones.
C) Incorrect: S3 Standard plus per-archive metadata objects adds complexity and costs more than Intelligent-Tiering for the same access pattern.
D) Incorrect: although a delay for older files is acceptable, Deep Archive retrievals can take up to 12 hours, and running Amazon RDS solely to store search metadata adds cost and operational overhead compared with an S3-native approach.
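The 1-year transition in option B is a single S3 Lifecycle rule. A minimal sketch of the configuration you would pass to `s3:PutBucketLifecycleConfiguration`, assuming an example prefix (the API storage class value for Glacier Flexible Retrieval is `GLACIER`):

```python
# S3 Lifecycle rule: transition transcript objects to Glacier Flexible
# Retrieval after 1 year. The "transcripts/" prefix is an example assumption.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-transcripts-after-1-year",
            "Status": "Enabled",
            "Filter": {"Prefix": "transcripts/"},
            "Transitions": [
                # "GLACIER" is the API value for Glacier Flexible Retrieval.
                {"Days": 365, "StorageClass": "GLACIER"}
            ],
        }
    ]
}
```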



A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability.
What should a solutions architect do to meet these requirements?

  1. Create an AWS Lambda function to apply the patch to all EC2 instances.
  2. Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.
  3. Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.
  4. Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.

Answer(s): D

Explanation:

A) Incorrect: Lambda is not designed for large-scale remote command execution on EC2 instances; it would require custom orchestration, connectivity handling, and error tracking for 1,000 targets.
B) Incorrect: Patch Manager works from OS patch baselines and approved patch catalogs; third-party software may not be covered without building a custom patch source, delaying remediation.
C) Incorrect: a maintenance window schedules the patch for a future time slot, which delays an immediate critical fix.
D) Correct: Systems Manager Run Command executes a custom patch command across all 1,000 instances immediately and at scale, with centralized status tracking and no scheduling delay.
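A minimal sketch of the `ssm:SendCommand` payload, using the managed `AWS-RunShellScript` document to target instances by tag; the tag key/value and the patch command path are hypothetical examples.

```python
# Payload for ssm:SendCommand using the AWS-RunShellScript document.
# The tag and the vendor patch command are hypothetical examples.
send_command_params = {
    "DocumentName": "AWS-RunShellScript",
    "Targets": [{"Key": "tag:Workload", "Values": ["production"]}],
    "Parameters": {
        "commands": ["sudo /opt/vendor/bin/apply-patch --critical"]
    },
    "MaxConcurrency": "10%",  # roll the patch out in waves
    "MaxErrors": "5%",        # halt if too many instances fail
}
```

`MaxConcurrency` and `MaxErrors` give a controlled rollout across 1,000 instances while still starting immediately, unlike a scheduled maintenance window.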


