Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 19)

Updated On: 16-Mar-2026

A company has several web servers that need to frequently access a common Amazon RDS MySQL Multi-AZ DB instance. The company wants a secure method for the web servers to connect to the database while meeting a security requirement to rotate user credentials frequently.
Which solution meets these requirements?

  A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to access AWS Secrets Manager.
  B. Store the database user credentials in AWS Systems Manager OpsCenter. Grant the necessary IAM permissions to allow the web servers to access OpsCenter.
  C. Store the database user credentials in a secure Amazon S3 bucket. Grant the necessary IAM permissions to allow the web servers to retrieve credentials and access the database.
  D. Store the database user credentials in files encrypted with AWS Key Management Service (AWS KMS) on the web server file system. The web server should be able to decrypt the files and access the database.

Answer(s): A

Explanation:

A) Storing credentials in AWS Secrets Manager and granting web servers access aligns with rotating credentials automatically and securely via built-in secret rotation for RDS-compatible databases.
B) OpsCenter is for operational issue management, not credential storage or rotation.
C) Storing credentials in S3 requires manual rotation and access controls; not as seamless or secure for frequent rotation as Secrets Manager.
D) Per-host file-based encryption with KMS lacks centralized rotation, auditability, and scalable credential management compared to Secrets Manager.
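To make option A concrete, here is a minimal sketch of how a web server could consume a rotated secret. The secret name, JSON field names, and sample values are assumptions for illustration; Secrets Manager's RDS rotation templates do store credentials as a JSON string, but the exact keys depend on how the secret was created.

```python
import json

def parse_db_secret(secret_string: str) -> dict:
    """Extract MySQL connection parameters from a GetSecretValue response string."""
    secret = json.loads(secret_string)
    return {
        "host": secret["host"],
        "user": secret["username"],
        "password": secret["password"],
        "database": secret.get("dbname", "appdb"),  # fallback name is an assumption
    }

# In production the string would come from (secret ID is a placeholder):
#   boto3.client("secretsmanager").get_secret_value(
#       SecretId="prod/webapp/mysql")["SecretString"]
sample = ('{"username": "app_user", "password": "s3cret", '
          '"host": "db.example.internal", "dbname": "movies"}')
params = parse_db_secret(sample)
```

Because the server fetches the secret at connection time rather than caching it in config files, rotation by Secrets Manager takes effect without redeploying the web servers.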



A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save customer data to an Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish database connections until the upgrade is complete. The result is that customer data is not recorded for some of the events.
A solutions architect needs to design a solution that stores customer data that is created during database upgrades.
Which solution will meet these requirements?

  A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the RDS proxy.
  B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the database.
  C. Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to the database.
  D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database.

Answer(s): D

Explanation:

D) Using an SQS FIFO queue decouples data ingestion from the database write. API Gateway/Lambda can enqueue events when the database is unavailable during upgrades, ensuring no data is lost, and a separate Lambda consumer can reliably persist data to Aurora once connections are available.
A) RDS Proxy helps manage database connections, but does not guarantee data durability during upgrade outages; it doesn’t inherently buffer writes when the DB is unavailable.
B) Extending Lambda duration and retries may still fail to preserve data if the DB is unreachable; timing is unpredictable and may violate data integrity.
C) Storing locally in Lambda is volatile and not durable across function invocations or instances, risking data loss.
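The producer side of option D can be sketched as follows: the API-facing Lambda enqueues customer data instead of writing to Aurora directly. The queue URL and payload shape below are placeholders for illustration; the `MessageGroupId` and `MessageDeduplicationId` fields shown are the ones SQS FIFO queues actually require.

```python
import hashlib
import json

def build_enqueue_request(queue_url: str, customer_event: dict) -> dict:
    """Build SendMessage parameters for an SQS FIFO queue."""
    body = json.dumps(customer_event, sort_keys=True)
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        # FIFO queues require a message group; a single group keeps events ordered.
        "MessageGroupId": "customer-data",
        # Content-based dedup ID so retries of the same event are dropped.
        "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
    }

req = build_enqueue_request(
    "https://sqs.us-east-1.amazonaws.com/123456789012/customer-data.fifo",
    {"customer_id": 42, "event": "signup"},
)
# Sent with: boto3.client("sqs").send_message(**req)
# A separate consumer Lambda polls the queue and inserts rows once Aurora is reachable.
```

During a database upgrade the messages simply accumulate in the queue; no code path depends on the database being up at ingestion time.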



A survey company has gathered data for several years from areas in the United States. The company hosts the data in an Amazon S3 bucket that is 3 TB in size and growing. The company has started to share the data with a European marketing firm that has S3 buckets. The company wants to ensure that its data transfer costs remain as low as possible.
Which solution will meet these requirements?

  A. Configure the Requester Pays feature on the company's S3 bucket.
  B. Configure S3 Cross-Region Replication from the company's S3 bucket to one of the marketing firm's S3 buckets.
  C. Configure cross-account access for the marketing firm so that the marketing firm has access to the company's S3 bucket.
  D. Configure the company's S3 bucket to use S3 Intelligent-Tiering. Sync the S3 bucket to one of the marketing firm's S3 buckets.

Answer(s): B

Explanation:

Cross-Region Replication copies each object to the destination Region exactly once; the European firm then reads from a bucket local to its Region, avoiding repeated cross-Region data transfer charges on every access.
A) Requester Pays shifts data access costs to the requester, not reducing cross-border data transfer fees for shared data, so it doesn’t minimize overall transfer costs for this workflow.
C) Cross-account access enables access rights but does not automatically reduce data transfer costs or replicate data to the partner’s region.
D) S3 Intelligent-Tiering optimizes storage costs, not cross-region data sharing or transfer costs, and syncing to another bucket does not inherently reduce egress charges.
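Option B's replication rule can be sketched as the configuration dict that boto3's `put_bucket_replication` accepts. The bucket names and role ARN below are placeholders, and a real setup additionally requires versioning enabled on both buckets and a cross-account bucket policy on the destination.

```python
def build_replication_config(role_arn: str, destination_bucket: str) -> dict:
    """Build an S3 replication configuration replicating the whole bucket."""
    return {
        "Role": role_arn,  # IAM role that S3 assumes to replicate objects
        "Rules": [
            {
                "ID": "replicate-survey-data",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter: replicate every object
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": f"arn:aws:s3:::{destination_bucket}"},
            }
        ],
    }

config = build_replication_config(
    "arn:aws:iam::123456789012:role/s3-replication-role",
    "marketing-firm-eu-bucket",
)
# Applied with: s3.put_bucket_replication(
#     Bucket="survey-data", ReplicationConfiguration=config)
```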



A company uses Amazon S3 to store its confidential audit documents. The S3 bucket uses bucket policies to restrict access to audit team IAM user credentials according to the principle of least privilege. Company managers are worried about accidental deletion of documents in the S3 bucket and want a more secure solution.
What should a solutions architect do to secure the audit documents?

  A. Enable the versioning and MFA Delete features on the S3 bucket.
  B. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account.
  C. Add an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates.
  D. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key.

Answer(s): A

Explanation:

Enabling versioning and MFA Delete protects objects from accidental deletion and requires additional authentication for deletion, providing a robust, data-protective layer beyond IAM least privilege.
A) Correct: Versioning preserves previous object versions; MFA Delete enforces deletion requests with MFA, preventing accidental or malicious deletions.
B) MFA on IAM users prevents sign-in, not deletion actions unless MFA Delete is enabled; it does not enforce object-level safeguards.
C) Lifecycle policies manage transitions/deletions by time, not per-action denial; they cannot selectively deny DeleteObject for audit dates.
D) KMS encryption protects data at rest but does not prevent deletions; access to keys can’t alone block DeleteObject actions.
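Option A boils down to a single `PutBucketVersioning` request, sketched below as the parameter dict boto3 accepts. Note that MFA Delete can only be enabled by the root user; the bucket name, MFA device serial, and token code are placeholders.

```python
def build_versioning_request(bucket: str, mfa_serial: str, token: str) -> dict:
    """Build PutBucketVersioning parameters enabling versioning and MFA Delete."""
    return {
        "Bucket": bucket,
        # "SerialNumber TokenCode" is the format S3 expects for the MFA field.
        "MFA": f"{mfa_serial} {token}",
        "VersioningConfiguration": {
            "Status": "Enabled",
            "MFADelete": "Enabled",
        },
    }

req = build_versioning_request(
    "audit-documents",
    "arn:aws:iam::123456789012:mfa/root-account-mfa-device",
    "123456",
)
# Applied with: s3.put_bucket_versioning(**req)
```

With this in place, a plain delete only adds a delete marker (the prior version survives), and permanently removing a version requires a valid MFA token.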



A company is using a SQL database to store movie data that is publicly accessible. The database runs on an Amazon RDS Single-AZ DB instance. A script runs queries at random intervals each day to record the number of new movies that have been added to the database. The script must report a final total during business hours.
The company's development team notices that the database performance is inadequate for development tasks when the script is running. A solutions architect must recommend a solution to resolve this issue.
Which solution will meet this requirement with the LEAST operational overhead?

  A. Modify the DB instance to be a Multi-AZ deployment.
  B. Create a read replica of the database. Configure the script to query only the read replica.
  C. Instruct the development team to manually export the entries in the database at the end of each day.
  D. Use Amazon ElastiCache to cache the common queries that the script runs against the database.

Answer(s): B

Explanation:

B) Creating a read replica allows the script to run queries against the replica, offloading read traffic from the primary DB instance and improving performance during development tasks, with minimal operational overhead.
A) Multi-AZ improves high availability and database failover, not read scaling; it adds overhead without separating the script's read workload from the primary instance.
C) Manual daily exports add significant operational overhead and potential inconsistencies; not scalable or automatic.
D) ElastiCache caching helps with repeated reads but introduces cache coherence and added complexity; using a read replica is simpler and more direct for read-heavy workloads.
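Option B's replica can be sketched as the parameter dict for boto3's `create_db_instance_read_replica`. The instance identifiers and instance class are placeholders for illustration.

```python
def build_read_replica_request(source_id: str, replica_id: str) -> dict:
    """Build parameters for creating an RDS read replica of a source instance."""
    return {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_id,
        # Instance class is an assumption; size it to the script's workload.
        "DBInstanceClass": "db.t3.medium",
    }

req = build_read_replica_request("movies-db", "movies-db-replica")
# Applied with: rds.create_db_instance_read_replica(**req)
# The reporting script then points its connection string at the replica's
# endpoint, leaving the primary instance free for development traffic.
```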


