Amazon SAA-C03 Exam Questions
AWS Certified Solutions Architect - Associate SAA-C03 (Page 17)

Updated On: 18-Mar-2026

A company receives 10 TB of instrumentation data each day from several machines located at a single factory. The data consists of JSON files stored on a storage area network (SAN) in an on-premises data center located within the factory. The company wants to send this data to Amazon S3 where it can be accessed by several additional systems that provide critical near-real-time analytics. A secure transfer is important because the data is considered sensitive.
Which solution offers the MOST reliable data transfer?

  1. AWS DataSync over public internet
  2. AWS DataSync over AWS Direct Connect
  3. AWS Database Migration Service (AWS DMS) over public internet
  4. AWS Database Migration Service (AWS DMS) over AWS Direct Connect

Answer(s): B

Explanation:

AWS DataSync over AWS Direct Connect provides a secure, high-bandwidth, low-latency transfer path from the on-premises SAN to Amazon S3, with reliable, verified, incremental transfers and encryption in transit, meeting the near-real-time analytics requirement.
A) DataSync over the public internet is less reliable because it is subject to internet variability and greater security exposure.
C) DMS is designed for database replication, not bulk file/object transfer to S3, so it is not suited to JSON files stored on a SAN.
D) DMS over Direct Connect has the same problem: DMS specializes in database migrations rather than general object storage transfers, regardless of the network path.
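A DataSync transfer of this kind is configured as a task between a source and destination location. The sketch below builds the request payload for such a task; all ARNs and names are placeholders, not real resources.

```python
# Sketch of the boto3 request that would create an AWS DataSync task
# moving JSON files from an on-premises location to Amazon S3.
# The location ARNs below are hypothetical placeholders.

ON_PREM_LOCATION_ARN = "arn:aws:datasync:us-east-1:111122223333:location/loc-onprem-example"
S3_LOCATION_ARN = "arn:aws:datasync:us-east-1:111122223333:location/loc-s3-example"

create_task_request = {
    "SourceLocationArn": ON_PREM_LOCATION_ARN,
    "DestinationLocationArn": S3_LOCATION_ARN,
    "Name": "factory-instrumentation-sync",
    "Options": {
        # Verify integrity of transferred files for reliability.
        "VerifyMode": "ONLY_FILES_TRANSFERRED",
        # Transfer only files that changed since the last run (incremental).
        "TransferMode": "CHANGED",
    },
}

# With credentials configured, the request would be sent as:
#   boto3.client("datasync").create_task(**create_task_request)
# DataSync encrypts data in transit with TLS; running the agent's traffic
# over a Direct Connect link keeps it off the public internet.
```

The `VerifyMode` and `TransferMode` options are what give DataSync its reliability edge over a hand-rolled transfer script.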



A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms data as the data is streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?

  1. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
  2. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3.
  3. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
  4. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3.

Answer(s): C

Explanation:

The correct option is C because API Gateway → Kinesis Data Stream → Firehose → S3 provides a fully managed, real-time ingestion pipeline with built-in data transformation via Lambda and minimal operational overhead.
A) Uses EC2 to host the API; the Kinesis, Firehose, and Lambda pipeline is otherwise the same, so this is feasible, but running and patching an EC2 instance adds unnecessary moving parts and operational burden.
B) AWS Glue is an ETL/batch service, not a real-time streaming ingestion endpoint, and hosting the API on EC2 adds management overhead; disabling source/destination checking is irrelevant to this requirement.
D) API Gateway with Glue and Lambda lacks real-time streaming integration and the Firehose buffering and managed delivery that reduce operations; Glue is not a streaming endpoint here.
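In the pipeline from option C, Firehose invokes a transform Lambda on each buffered batch of records before delivery to S3. A minimal sketch of that handler is below; the `processed` field it adds is purely illustrative.

```python
import base64
import json

def lambda_handler(event, context):
    """Sketch of the Lambda transform that Kinesis Data Firehose invokes
    on each buffered batch of records before delivery to Amazon S3.

    Firehose passes records as base64-encoded blobs and expects each one
    back with the same recordId, a result status, and re-encoded data.
    """
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))

        # Example transformation: tag each record (field name is illustrative).
        payload["processed"] = True

        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # Firehose also accepts "Dropped" or "ProcessingFailed"
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}
```

Because Firehose handles batching, retries, and S3 delivery, the only custom code to maintain is this small function, which is what keeps the operational overhead low.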



A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years.
What is the MOST operationally efficient solution that meets these requirements?

  1. Use DynamoDB point-in-time recovery to back up the table continuously.
  2. Use AWS Backup to create backup schedules and retention policies for the table.
  3. Create an on-demand backup of the table by using the DynamoDB console. Store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.
  4. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function. Configure the Lambda function to back up the table and to store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.

Answer(s): B

Explanation:

AWS Backup provides centralized, automated backup scheduling and long-term retention policies for DynamoDB, meeting the 7-year retention requirement with minimal operational overhead.
A) Point-in-time recovery provides continuous backups for a maximum of 35 days, not long-term archival for 7 years.
C) On-demand DynamoDB backups require manual initiation and separate lifecycle management, so this approach does not scale well for policy-driven retention.
D) An EventBridge-triggered Lambda backup adds custom code to build, monitor, and maintain, with a risk of gaps; it is less efficient than a managed backup service with retention policies.
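The retention policy from option B is expressed as a backup plan rule. The sketch below builds such a plan; the plan name, vault name, and schedule are illustrative assumptions.

```python
# Sketch of an AWS Backup plan enforcing 7-year retention for the
# DynamoDB table. Names and the schedule are illustrative assumptions.

RETENTION_DAYS = 7 * 365  # the 7-year retention requirement

backup_plan = {
    "BackupPlanName": "dynamodb-transactions-7yr",
    "Rules": [
        {
            "RuleName": "daily-7yr-retention",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
            "Lifecycle": {
                # AWS Backup deletes recovery points after this many days.
                "DeleteAfterDays": RETENTION_DAYS,
            },
        }
    ],
}

# With credentials configured, the plan would be created with:
#   boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
# The DynamoDB table is then attached to the plan via a backup selection.
```

Once the plan and selection exist, scheduling and expiry are fully managed, which is exactly the "operationally efficient" property the question asks for.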



A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not be used on most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very quickly.
What should a solutions architect recommend?

  1. Create a DynamoDB table in on-demand capacity mode.
  2. Create a DynamoDB table with a global secondary index.
  3. Create a DynamoDB table with provisioned capacity and auto scaling.
  4. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.

Answer(s): A

Explanation:

A) Correct: DynamoDB on-demand capacity mode handles unpredictable traffic automatically with pay-per-request pricing, so the table costs nothing on idle mornings and absorbs sudden evening spikes without provisioning or scaling configuration.
B) Incorrect: a global secondary index supports additional query patterns; it does not address throughput provisioning or burst behavior.
C) Incorrect: provisioned capacity with auto scaling reacts to observed traffic and can lag behind very sudden spikes, whereas on-demand is designed for unpredictable workloads.
D) Incorrect: global tables provide multi-Region replication, not cost optimization for bursty traffic.
Overall, on-demand mode is the best fit for infrequent usage with highly variable, sudden traffic.
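On-demand mode is a single table-creation setting. The sketch below shows the relevant `create_table` parameters; the table and attribute names are illustrative.

```python
# Sketch of the create_table parameters for DynamoDB on-demand capacity
# mode. Table and attribute names are illustrative assumptions.

create_table_params = {
    "TableName": "AppData",
    "AttributeDefinitions": [
        {"AttributeName": "ItemId", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "ItemId", "KeyType": "HASH"},
    ],
    # PAY_PER_REQUEST selects on-demand mode: no capacity planning,
    # per-request billing, and no ProvisionedThroughput block at all.
    "BillingMode": "PAY_PER_REQUEST",
}

# With credentials configured:
#   boto3.client("dynamodb").create_table(**create_table_params)
```

Note the absence of a `ProvisionedThroughput` section: in on-demand mode there are no read/write capacity units to size or auto-scale.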



A company recently signed a contract with an AWS Managed Service Provider (MSP) Partner for help with an application migration initiative. A solutions architect needs to share an Amazon Machine Image (AMI) from an existing AWS account with the MSP Partner's AWS account. The AMI is backed by Amazon Elastic Block Store (Amazon EBS) and uses an AWS Key Management Service (AWS KMS) customer managed key to encrypt EBS volume snapshots.
What is the MOST secure way for the solutions architect to share the AMI with the MSP Partner's AWS account?

  1. Make the encrypted AMI and snapshots publicly available. Modify the key policy to allow the MSP Partner's AWS account to use the key.
  2. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the key policy to allow the MSP Partner's AWS account to use the key.
  3. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner's AWS account only. Modify the key policy to trust a new KMS key that is owned by the MSP Partner for encryption.
  4. Export the AMI from the source account to an Amazon S3 bucket in the MSP Partner's AWS account. Encrypt the S3 bucket with a new KMS key that is owned by the MSP Partner. Copy and launch the AMI in the MSP Partner's AWS account.

Answer(s): B

Explanation:

Option B is correct: sharing the AMI's launch permission with only the MSP Partner's account, and updating the KMS key policy so that account can use the customer managed key to decrypt the EBS snapshots, preserves encryption, avoids public exposure, and restricts access to the intended account.
A) Incorrect because making the AMI and snapshots public violates least privilege and exposes sensitive data to every AWS account.
C) Incorrect because the snapshots are already encrypted with the source account's key; the key policy grants access to principals, and trusting a new MSP-owned key for encryption is unnecessary and adds complexity.
D) Incorrect because exporting the AMI through S3 and recreating it in the MSP account is unnecessary, operationally heavy, and risks integrity; sharing the AMI with proper launch permissions is simpler and secure.
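Option B amounts to two changes: an EC2 launch-permission update and a KMS key-policy statement. The sketch below builds both payloads; the account ID, AMI ID, and statement Sid are placeholders.

```python
# Sketch of the two payloads behind option B. The account ID, AMI ID,
# and policy Sid below are hypothetical placeholders.

MSP_ACCOUNT_ID = "444455556666"  # hypothetical MSP Partner account

# 1) Share the AMI's launchPermission with the partner account only.
modify_image_attribute_params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI ID
    "LaunchPermission": {
        "Add": [{"UserId": MSP_ACCOUNT_ID}],
    },
}
#   boto3.client("ec2").modify_image_attribute(**modify_image_attribute_params)

# 2) Add a statement to the customer managed key's policy so the partner
# account can decrypt the EBS snapshots when launching from the AMI.
kms_policy_statement = {
    "Sid": "AllowMSPPartnerUseOfKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{MSP_ACCOUNT_ID}:root"},
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:CreateGrant",  # lets EC2 create grants to attach encrypted volumes
    ],
    "Resource": "*",  # key policies apply to the key they are attached to
}
```

Nothing is made public and no data leaves the source account's encryption boundary; access is scoped to the single partner account in both payloads.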


