Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 47)

Updated On: 16-Mar-2026

A company runs an application on a large fleet of Amazon EC2 instances. The application reads and writes entries into an Amazon DynamoDB table. The size of the DynamoDB table continuously grows, but the application needs only data from the last 30 days. The company needs a solution that minimizes cost and development effort.
Which solution meets these requirements?

  1. Use an AWS CloudFormation template to deploy the complete solution. Redeploy the CloudFormation stack every 30 days, and delete the original stack.
  2. Use an EC2 instance that runs a monitoring application from AWS Marketplace. Configure the monitoring application to use Amazon DynamoDB Streams to store the timestamp when a new item is created in the table. Use a script that runs on the EC2 instance to delete items that have a timestamp that is older than 30 days.
  3. Configure Amazon DynamoDB Streams to invoke an AWS Lambda function when a new item is created in the table. Configure the Lambda function to delete items in the table that are older than 30 days.
  4. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the table. Configure DynamoDB to use the attribute as the TTL attribute.

Answer(s): D

Explanation:

D) Using DynamoDB Time to Live (TTL) with a per-item 30-day expiry is the simplest, most cost-efficient, lowest-effort approach. TTL automatically deletes items after the specified timestamp without consuming write capacity, requires no ongoing Lambda or EC2 maintenance, and keeps storage costs down as data ages.
A) CloudFormation redeploy every 30 days is operationally heavy and unnecessary; it does not automate data expiry.
B) EC2-based monitoring with DynamoDB Streams and custom deletion adds unmanaged maintenance, scaling concerns, and cost.
C) Lambda-driven deletion still incurs invocation costs and requires continuous event handling; TTL is more straightforward and scalable.
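The per-item expiry in option D can be sketched as follows. This is a minimal illustration with hypothetical names: the `orders` table, `expire_at` attribute, and item shape are assumptions, not values from the question; only the 30-day window comes from the scenario. The dict is in DynamoDB's wire format, as it would be passed to a boto3 `put_item` call.

```python
import time

TTL_DAYS = 30  # the application only needs the last 30 days of data

def item_with_ttl(pk, payload, now=None):
    """Build a DynamoDB item dict carrying a TTL attribute.

    DynamoDB TTL expects the attribute to hold a Unix epoch timestamp
    in seconds; DynamoDB deletes the item after that time passes,
    without consuming write capacity.
    """
    now = time.time() if now is None else now
    expire_at = int(now) + TTL_DAYS * 24 * 60 * 60
    return {
        "pk": {"S": pk},
        "payload": {"S": str(payload)},
        # This attribute would be registered as the table's TTL attribute,
        # e.g. via boto3: client.update_time_to_live(TableName="orders",
        #   TimeToLiveSpecification={"Enabled": True, "AttributeName": "expire_at"})
        "expire_at": {"N": str(expire_at)},
    }

item = item_with_ttl("user#123", {"event": "login"}, now=1_700_000_000)
```

Because expiry is stored per item, no sweeper process is needed; DynamoDB handles deletion in the background.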



A company has a Microsoft .NET application that runs on an on-premises Windows Server. The application stores data by using an Oracle Database Standard Edition server. The company is planning a migration to AWS and wants to minimize development changes while moving the application. The AWS application environment should be highly available.
Which combination of actions should the company take to meet these requirements? (Choose two.)

  1. Refactor the application as serverless with AWS Lambda functions running .NET Core.
  2. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
  3. Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon Machine Image (AMI).
  4. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Amazon DynamoDB in a Multi-AZ deployment.
  5. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.

Answer(s): B,E

Explanation:

Rehosting in Elastic Beanstalk with a Multi-AZ deployment preserves existing .NET app architecture and provides high availability, while migrating the Oracle database to Oracle on RDS (Multi-AZ) minimizes changes and maintains Oracle compatibility. This aligns with minimizing development changes and ensures HA across compute and database layers.
A) Refactoring to serverless would require substantial code changes and is not aligned with minimizing changes.
C) EC2 with the Amazon Linux AMI would require porting the Windows-based .NET Framework application to run on Linux, which contradicts the goal of minimizing development changes.
D) Migrating to DynamoDB is a redesign and not suitable for the existing Oracle workloads.
E) Oracle on RDS in Multi-AZ preserves Oracle features and provides managed HA for the database.
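The database half of the answer (option E) hinges on one setting: `MultiAZ`, which provisions a synchronous standby replica in a second Availability Zone. Below is a hedged sketch of the request parameters that would be passed to boto3's `rds.create_db_instance()`; the identifier, instance class, storage size, and credentials are placeholders, not values from the question.

```python
# Hypothetical parameters for boto3 rds.create_db_instance(); all
# identifiers and sizes below are placeholders for illustration only.
create_db_instance_params = {
    "DBInstanceIdentifier": "app-oracle-db",
    "Engine": "oracle-se2",           # Oracle Standard Edition Two on RDS
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,          # GiB
    "MultiAZ": True,                  # synchronous standby in a second AZ
    "MasterUsername": "admin",
    "MasterUserPassword": "CHANGE_ME",  # placeholder; use Secrets Manager in practice
}
```

With `MultiAZ` enabled, RDS handles failover automatically, so the application keeps its Oracle compatibility while gaining managed high availability.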



A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The company is using a MongoDB database for data storage. The company wants to migrate some of these environments to AWS, but no code changes or deployment method changes are possible at this time. The company needs a solution that minimizes operational overhead.
Which solution meets these requirements?

  1. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage.
  2. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage
  3. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data storage.
  4. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.

Answer(s): D

Explanation:

D) EKS with Fargate provides serverless Kubernetes compute, minimizing operational overhead, and Amazon DocumentDB (with MongoDB compatibility) offers MongoDB-compatible storage without managing a database engine. This matches a no-code-change, low-ops migration while preserving the Kubernetes workloads.
A) ECS with EC2 and MongoDB on EC2 requires managing MongoDB and compute capacity, increasing ops.
B) ECS with Fargate and DynamoDB changes data storage to a different database, not preserving MongoDB behavior.
C) EKS with EC2 and DynamoDB replaces MongoDB with a different store and requires managing nodes, increasing ops.
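What makes option D low-overhead is the Fargate profile: pods matching its selectors are scheduled onto Fargate, so there are no EC2 worker nodes to patch or scale. A hedged sketch of the request parameters for boto3's `eks.create_fargate_profile()` follows; the cluster name, role ARN, subnets, and namespace are placeholders.

```python
# Hypothetical request for boto3 eks.create_fargate_profile(); the
# cluster name, role ARN, subnets, and namespace are placeholders.
fargate_profile = {
    "clusterName": "app-cluster",
    "fargateProfileName": "default-ns",
    "podExecutionRoleArn": "arn:aws:iam::123456789012:role/eks-fargate-pods",
    "subnets": ["subnet-aaa111", "subnet-bbb222"],
    # Any pod in this namespace runs on Fargate-managed capacity, so the
    # existing Kubernetes manifests deploy unchanged.
    "selectors": [{"namespace": "default"}],
}
```

The existing deployment method (kubectl, Helm, etc.) keeps working against the EKS API, satisfying the "no deployment method changes" constraint.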



A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns. The transcript files must be stored for 7 years for auditing purposes.
Which solution will meet these requirements?

  1. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning models for transcript file analysis.
  2. Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena for transcript file analysis.
  3. Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries for transcript file analysis.
  4. Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract for transcript file analysis.

Answer(s): B

Explanation:

B) Amazon Transcribe provides automatic speech recognition with speaker diarization (identifying multiple speakers) and writes transcript files to Amazon S3, where Amazon Athena can query them directly for business-pattern analysis.
A) is incorrect because Amazon Rekognition analyzes images and video, not audio; it cannot produce transcripts.
C) is incorrect because Amazon Translate performs language translation, not transcription or speaker recognition, and loading transcripts into Redshift adds cost and complexity that Athena avoids.
D) is incorrect because Amazon Textract extracts text from scanned documents, not audio, and Rekognition does not produce transcripts.
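Speaker diarization in Transcribe is enabled through the `ShowSpeakerLabels` and `MaxSpeakerLabels` settings. Below is a hedged sketch of the parameters that would be passed to boto3's `transcribe.start_transcription_job()`; the job name, bucket names, and media URI are placeholders, not values from the question.

```python
# Hypothetical parameters for boto3 transcribe.start_transcription_job();
# the job name, S3 bucket, and media URI below are placeholders.
transcription_job = {
    "TranscriptionJobName": "call-2024-001",
    "LanguageCode": "en-US",
    "Media": {"MediaFileUri": "s3://call-recordings/call-2024-001.wav"},
    "OutputBucketName": "call-transcripts",  # transcripts land in S3 for Athena
    "Settings": {
        "ShowSpeakerLabels": True,  # speaker diarization (multiple speakers)
        "MaxSpeakerLabels": 2,      # e.g. agent + customer on a call
    },
}
```

An S3 Lifecycle rule on the output bucket (e.g. transition to Glacier, expire after 7 years) would cover the audit-retention requirement cheaply.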



A company hosts its application on AWS. The company uses Amazon Cognito to manage users. When users log in to the application, the application fetches required data from Amazon DynamoDB by using a REST API that is hosted in Amazon API Gateway. The company wants an AWS managed solution that will control access to the REST API to reduce development efforts.
Which solution will meet these requirements with the LEAST operational overhead?

  1. Configure an AWS Lambda function to be an authorizer in API Gateway to validate which user made the request.
  2. For each user, create and assign an API key that must be sent with each request. Validate the key by using an AWS Lambda function.
  3. Send the user’s email address in the header with every request. Invoke an AWS Lambda function to validate that the user with that email address has proper access.
  4. Configure an Amazon Cognito user pool authorizer in API Gateway to allow Amazon Cognito to validate each request.

Answer(s): D

Explanation:

The correct answer is D. A Cognito user pool authorizer allows API Gateway to validate JWTs issued by Cognito, providing built-in, managed authentication with no extra Lambda code, aligning with the requirement to minimize operational overhead.
A is incorrect because a Lambda authorizer adds custom logic and maintenance; it increases overhead compared to built-in Cognito integration.
B is incorrect because API keys are not tied to user authentication and are not intended for per-user access control; this approach is unsuitable for scalable, secure user-level access.
C is incorrect because sending emails in headers and validating in Lambda is custom, brittle, and does not leverage managed authentication or token validation.
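The managed integration in option D comes down to one authorizer of type `COGNITO_USER_POOLS`: API Gateway then validates the Cognito-issued JWT on every request with no custom code. A hedged sketch of the request parameters for boto3's `apigateway.create_authorizer()` follows; the REST API ID and user pool ARN are placeholders.

```python
# Hypothetical parameters for boto3 apigateway.create_authorizer(); the
# REST API ID and Cognito user pool ARN below are placeholders.
authorizer = {
    "restApiId": "abc123def4",
    "name": "cognito-auth",
    "type": "COGNITO_USER_POOLS",  # API Gateway validates the Cognito JWT itself
    "providerARNs": [
        "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"
    ],
    # Clients send their Cognito ID token in this header on each request.
    "identitySource": "method.request.header.Authorization",
}
```

Compared with a Lambda authorizer (option A), there is no authorizer function to write, deploy, or monitor, which is what "LEAST operational overhead" is testing.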





