SAA-C03 Exam Discussions & Posts
ovelaguz
Commented on April 13, 2026
Question 73:
The correct choices are C and D.
- C: Replace the bastion host's SG to only allow inbound SSH (port 22) from the company’s external IP range. This ensures the bastion is reachable only from known on-premise networks, reducing exposure.
- D: Replace the application instances’ SG to only allow inbound SSH from the bastion host’s private IP address. This enforces SSH access to apps only via the bastion over the private path inside the VPC.
Why these work:
- They implement a secure bastion-based SSH flow: on-prem access goes to the bastion, then from the bastion’s private IP to the app instances.
- Using the bastion’s private IP for the app SSH source prevents SSH from the bastion’s public endpoint, avoiding exposure to the internet.
Why the others are incorrect:
- A: Limiting the bastion to inbound from application IPs blocks the initial SSH from on-prem to the bastion.
- B: Allowing only internal IPs would block on-prem access via the internet to the bastion.
- E: Allowing SSH to apps from the bastion’s public IP would expose the app tier to the internet and bypass the private path.
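A minimal sketch of the two rules as boto3-style parameters (the security group IDs, the corporate CIDR, and the bastion's private IP below are made-up examples, not from the question):

```python
# Sketch of the two ingress rules. These dicts mirror the parameters passed to
# ec2.authorize_security_group_ingress in boto3 (or the equivalent CLI call);
# all IDs and CIDRs are illustrative placeholders.

def ssh_ingress_rule(group_id, source_cidr):
    """Build an ingress rule allowing SSH (TCP 22) from a single source range."""
    return {
        "GroupId": group_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": source_cidr}],
        }],
    }

# C: bastion SG accepts SSH only from the company's external IP range
bastion_rule = ssh_ingress_rule("sg-bastion", "203.0.113.0/24")

# D: application SG accepts SSH only from the bastion's private IP (/32)
app_rule = ssh_ingress_rule("sg-app", "10.0.1.10/32")
```

With rules this narrow, every SSH session must originate on-premises and traverse the bastion; nothing else can reach port 22 on either tier.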
San Miguel De Allende, Mexico
ovelaguz
Commented on April 13, 2026
Question 66:
- Why: Use a lifecycle policy to move data from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days. The data is frequently accessed in the first 30 days, then rarely accessed but still needs immediate retrieval. S3 Standard-IA cuts storage cost after the initial period while preserving millisecond access, and the data can remain there for the rest of the 4-year retention period.
- Why the other options are not as good:
- A: Moving to S3 Glacier incurs retrieval latency; not ideal for immediate access needs in the first 30 days.
- B: One Zone-IA has lower durability (Single AZ) and is riskier for critical data.
- D: Moving to Glacier after 4 years adds unnecessary retrieval steps; Standard-IA already meets long-term, infrequent-access needs with immediate retrieval.
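The chosen policy can be sketched as an S3 lifecycle configuration (the rule ID is made up, and expiring at the end of the 4-year retention is an illustrative choice; this is the document shape accepted by put-bucket-lifecycle-configuration):

```python
# Illustrative lifecycle configuration: Standard -> Standard-IA at 30 days,
# delete at the end of the 4-year retention period. Rule ID is a placeholder.

lifecycle_config = {
    "Rules": [{
        "ID": "standard-to-ia-then-expire",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply to all objects in the bucket
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},  # rarely read after day 30
        ],
        "Expiration": {"Days": 4 * 365},  # end of the 4-year retention window
    }],
}
```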
ovelaguz
Commented on April 13, 2026
Question 49:
- Correct answer: B — Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year.
- Why: Data under 1 year stays in a fast, automatically tiered storage class, while older data moves to a cheaper archival tier. This matches the pattern of frequent access for recent data and infrequent access for older data, with cost-effective analytics via Amazon Athena on current data and Glacier retrieval for older data when needed.
- Why the other options are not suitable:
- A: Glacier Instant Retrieval isn’t ideal for ongoing analytics on data under 1 year; querying directly from Glacier is not efficient.
- C: Relying solely on Athena without automatic tiering misses the cost optimization for data that ages into cheaper storage.
- D: Overly complex (uses RDS for metadata) and uses Glacier Deep Archive, which is too slow for the near-term data and adds unnecessary components.
- Key concept: Use a tiering strategy with S3 Intelligent-Tiering for active data and Glacier Flexible Retrieval for long-term archival, enabling cost-effective, fast access to recent data and inexpensive storage for older data.
ovelaguz
Commented on April 13, 2026
Question 48:
- Correct answer: D — Move the catalog to an Amazon EFS file system.
- Why: EC2 instance store is ephemeral; data is not durable and is lost if the instance stops, terminates, or fails. To achieve high availability and durability, you need shared, durable storage that multiple EC2 instances can mount.
- Why other options are not suitable:
- A: ElastiCache for Redis is in-memory caching, not durable storage for a catalog.
- B: Increasing instance size with instance store doesn't provide durability or multi-AZ availability.
- C: S3 Glacier Deep Archive is archival storage with high latency, not suitable for a catalog requiring frequent access.
- Key concept: Use Amazon EFS for a durable, scalable, shared file system that is accessible from multiple EC2 instances and replicated across AZs, delivering high availability and durability for the catalog data.
ovelaguz
Commented on April 13, 2026
Question 193:
- Correct answer: B. Use Amazon ElastiCache for Redis.
Why:
- The goal is to reduce reads against RDS while keeping high availability. Putting Redis in front of RDS as an in-memory cache means frequently read data is served from memory instead of hitting the database.
- ElastiCache for Redis supports replication and automatic failover, providing high availability for the cache layer itself.
- Redis offers richer data structures and persistence options, making it more suitable than Memcached in this scenario.
- Options A (EC2 MySQL) and C (Route 53 caching) do not reduce DB reads or provide appropriate caching/HA. D (Memcached) is an alternative but Redis is generally preferred for its features and resilience.
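The cache-aside read path described above can be sketched as follows; a plain dict stands in for Redis so the logic runs without a cluster, and the database call is a stub (with redis-py you would swap in client.get and client.setex):

```python
# Minimal cache-aside sketch. `cache` is a stand-in for ElastiCache for Redis;
# `query_database` is a stub for a slow RDS query.

cache = {}

def query_database(key):
    # stub: pretend this is an expensive RDS read
    return f"row-for-{key}"

def get_with_cache(key):
    """Serve reads from the cache; on a miss, read the DB once and populate it."""
    if key in cache:              # cache hit: no load on RDS
        return cache[key]
    value = query_database(key)   # cache miss: exactly one DB read
    cache[key] = value            # later reads are served from memory
    return value

get_with_cache("user:42")   # miss -> hits the database
get_with_cache("user:42")   # hit  -> served from the cache
```

In production you would also set a TTL on each cached entry (e.g. Redis SETEX) so stale rows expire.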
ovelaguz
Commented on April 12, 2026
Question 189:
The correct answers are B and D.
- B: Use S3 Object Lock in compliance mode. This enforces immutable, write-once behavior for the retention period, preventing any overwrites or deletions during the 5 years.
- D: Use server-side encryption with AWS KMS customer managed keys (CMKs) and enable automatic key rotation. This provides at-rest encryption with automated rotation for long-term data.
Why the others aren’t correct:
- A uses Object Lock in governance mode, which can be overridden; not guaranteed immutability for the full period.
- C uses SSE-S3 (no key rotation control).
- E uses imported keys, which require manual rotation and more overhead.
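A sketch of the compliance-mode piece (the bucket name and the choice of a default retention rule are illustrative; this mirrors the shape used by S3's put-object-lock-configuration API, and the bucket must have been created with Object Lock and versioning enabled):

```python
# Illustrative Object Lock configuration: compliance mode, 5-year default
# retention. Bucket name is a placeholder.

object_lock_request = {
    "Bucket": "example-records-bucket",
    "ObjectLockConfiguration": {
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {
                "Mode": "COMPLIANCE",  # cannot be shortened or removed by anyone
                "Years": 5,
            }
        },
    },
}
```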
ovelaguz
Commented on April 12, 2026
Question 188:
The correct answer is A.
Why A works:
- They need a highly available, low-ops SFTP solution that writes to an S3 data lake. AWS Transfer Family provides a fully managed SFTP server with an S3 backend, handling scaling, availability, and server maintenance automatically.
- It offers a public endpoint (or VPC endpoints if private access is needed) and integrates with IAM for authentication and access control, meeting partner onboarding and security needs without managing servers.
Why the others don’t:
- B (S3 File Gateway) is for on-premises file access to S3 and does not natively provide an SFTP server for partners.
- C–E require running and maintaining EC2 instances and custom upload logic (cron jobs), which adds operational overhead and is less available than a managed service.
Key concept: For a managed, highly available SFTP interface to S3 with minimal ops, use AWS Transfer Family with an S3 backend.
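A sketch of the core Transfer Family settings (the role ARN, user name, and bucket prefix are placeholders; these mirror parameters of the Transfer Family CreateServer and CreateUser APIs):

```python
# Illustrative Transfer Family setup: a managed SFTP endpoint backed by S3.
# All names and ARNs below are placeholders.

server_params = {
    "Protocols": ["SFTP"],                       # managed SFTP endpoint
    "Domain": "S3",                              # uploads land directly in S3
    "EndpointType": "PUBLIC",                    # or VPC for private partner access
    "IdentityProviderType": "SERVICE_MANAGED",   # Transfer Family stores the users
}

user_params = {
    "UserName": "partner-a",
    "Role": "arn:aws:iam::111122223333:role/transfer-s3-access",  # placeholder
    "HomeDirectory": "/example-data-lake/partner-a",              # S3 prefix
}
```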
ovelaguz
Commented on April 12, 2026
Question 172:
- Answer: Configure a CloudFront field-level encryption profile.
- Field-level encryption (FLE) encrypts specific data fields at the CloudFront edge, so sensitive information stays encrypted throughout the request journey to the origin.
- Only designated applications that hold the corresponding decryption keys can access the sensitive fields, meeting the requirement to restrict access.
- Why the other options are not suitable:
- A) CloudFront signed URL: controls access to content, not per-field data encryption.
- B) CloudFront signed cookies: also access control, not per-field encryption.
- D) Origin Protocol Policy HTTPS Only: secures transport to the origin but does not encrypt specific fields or restrict field-level access.
- Quick implementation idea (conceptual):
- Create a field-level encryption profile/config and specify which fields to encrypt.
- Upload a public key to CloudFront for encryption; keep the private key with the designated applications (or your origin) for decryption.
- Deploy so the origin can decrypt and handle data only by authorized apps.
ovelaguz
Commented on April 12, 2026
Question 164:
- Correct answer: C — Use Amazon SQS (standard queue) with a dead-letter queue, and integrate both applications with the queue.
Why this is correct:
- SQS standard queue provides durable, highly scalable message storage and supports at-least-once delivery. Messages can be retained for up to 14 days, satisfying the requirement to keep messages if processing fails.
- A dead-letter queue (DLQ) collects messages that fail repeatedly, preventing them from blocking others and enabling separate investigation.
- Decoupling sender and processor improves operational efficiency; components scale independently and you avoid managing servers.
Why the other options are less suitable:
- A: EC2 + Redis requires managing infrastructure and does not inherently guarantee durable persistence or robust message processing semantics.
- B: Kinesis is a streaming service with more complex semantics and retention, and is not as straightforward for per-message retry with a DLQ.
- D: SNS is publish/subscribe, not a durable queue; it does not retain messages for later processing, so a consumer that fails can miss messages entirely.
Key concepts:
- Use a durable queue for decoupling and reliability.
- Implement idempotent processing on the consumer side to handle potential duplicates from SQS.
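The queue attributes involved can be sketched as follows (queue name and DLQ ARN are placeholders, and a maxReceiveCount of 5 is an example; this is the Attributes shape accepted by SQS CreateQueue):

```python
import json

# Illustrative work-queue attributes: the SQS maximum 14-day retention, plus a
# redrive policy that moves a message to the DLQ after 5 failed receives.
# The DLQ ARN is a placeholder.

MAX_RETENTION_SECONDS = 14 * 24 * 60 * 60  # SQS maximum: 1,209,600 seconds

work_queue_attributes = {
    "MessageRetentionPeriod": str(MAX_RETENTION_SECONDS),
    "RedrivePolicy": json.dumps({
        "deadLetterTargetArn": "arn:aws:sqs:us-east-1:111122223333:app-dlq",
        "maxReceiveCount": "5",  # after 5 failed attempts, move to the DLQ
    }),
}
```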
ovelaguz
Commented on April 12, 2026
Question 159:
Question 159 asks how to block unauthorized requests to a publicly accessible API built with API Gateway and Lambda when botnet traffic spikes.
- A: Create a usage plan with an API key shared only with genuine users. This restricts access to known users and allows quota controls at the API level.
- C: Implement an AWS WAF rule to target malicious requests and filter them out at the edge before they reach Lambda.
Why the others aren’t correct
- B: Filtering inside Lambda is brittle (IP spoofing, botnets) and adds processing cost; edge-based controls are more reliable.
- D: Turning into a private API would block legitimate public users.
- E: DNS redirection is not a proper access control measure and can disrupt users.
- F: Creating an IAM role per user is not scalable for a public API.
Concepts:
- Use API keys with a usage plan to control access to public APIs.
- Use AWS WAF at API Gateway to block or rate-limit malicious traffic before it reaches backend services.
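A sketch of a WAFv2 rate-based rule that could back option C (the rule name and the 1,000-requests-per-5-minutes limit are examples; this is the shape of a Rule entry in a WebACL associated with the API Gateway stage):

```python
# Illustrative WAFv2 rate-based rule: block any single IP that exceeds
# 1,000 requests in a 5-minute window. Name and limit are examples.

rate_limit_rule = {
    "Name": "throttle-botnet-ips",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 1000,             # requests per 5-minute window, per IP
            "AggregateKeyType": "IP",  # count requests by source IP
        }
    },
    "Action": {"Block": {}},           # drop offending traffic at the edge
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "throttle-botnet-ips",
    },
}
```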
ovelaguz
Commented on April 12, 2026
Question 157:
Question 157 asks how to meet 5-year data retention and indefinite audit logs for an Aurora PostgreSQL DB, given automated backups are already configured.
- D: Configure an Amazon CloudWatch Logs export for the DB cluster. This stores database activity/audit logs in CloudWatch Logs indefinitely, satisfying the requirement to keep audit logs long-term.
- E: Use AWS Backup to take the backups and keep them for 5 years. AWS Backup provides centralized lifecycle management and can retain backups for a fixed 5-year period, meeting the data-retention requirement beyond what Aurora automated backups offer.
Why the others aren’t correct
- A (Take a manual snapshot): not automated and not suitable for long-term retention.
- B (Lifecycle policy for automated backups): not a defined feature for RDS/Aurora automated backups.
- C (Automated backup retention for 5 years): automated backups in Aurora have a limited retention window (not indefinite 5 years) and don’t cover audit-log retention.
Concepts:
- Use CloudWatch Logs exports to preserve audit logs beyond DB lifecycle.
- Use AWS Backup for long-term retention and centralized policy control for backups.
ovelaguz
Commented on April 12, 2026
Question 154:
The correct answer is B.
- B. Use S3 Object Lock in compliance mode with a retention period of 365 days.
Why: Compliance mode enforces write-once-read-many for all objects for the retention period, and cannot be overridden, meeting the requirement that no one can modify or delete files for at least 1 year after creation. This also allows the few scientists to add new files, while existing files remain immutable for the retention window.
Why the others aren’t correct:
- A. Governance mode with a legal hold of 1 year.
Governance mode can be overridden by users with the s3:BypassGovernanceRetention permission, so it is not truly immutable for all users; also, a legal hold has no duration, so it cannot express a 1-year window.
- C. IAM role to restrict deletions.
IAM alone cannot guarantee immutability or prevent overwrites/deletes once objects are added.
- D. Lambda to track hashes.
Hash tracking does not enforce immutability or prevent deletions/modifications.
ovelaguz
Commented on April 12, 2026
Question 150:
Answer: A — Create CloudWatch composite alarms where possible.
- Composite alarms let you combine multiple underlying alarms (e.g., CPU > 50% and disk read IOPS high) and trigger only when all are in ALARM (AND logic). This matches the requirement to act only when both conditions occur, reducing false alarms from short CPU bursts.
- Why not the others:
- Dashboards visualize data but don’t raise alarms or automate actions.
- Synthetics canaries monitor availability, not real-time infrastructure metric correlation.
- A single CloudWatch alarm watches one metric (or metric math expression); combining independent alarm states with AND logic is exactly what composite alarms are designed for.
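The AND logic can be sketched as follows (the alarm names are made up; AlarmRule is the boolean expression the CloudWatch PutCompositeAlarm API evaluates over existing alarms):

```python
# Illustrative composite alarm: fire only when BOTH child alarms are in ALARM.
# Alarm names are placeholders for alarms that already exist.

cpu_alarm = "high-cpu-over-50"
disk_alarm = "high-disk-read-iops"

composite_alarm = {
    "AlarmName": "cpu-and-disk-pressure",
    # AND logic: a short CPU burst alone no longer triggers a notification
    "AlarmRule": f"ALARM({cpu_alarm}) AND ALARM({disk_alarm})",
}
```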
ovelaguz
Commented on April 12, 2026
Question 143:
- Answer: D — Host the application on Amazon ECS. Set up an Application Load Balancer with ECS as the target.
- You want to break the monolith into smaller, independently managed microservices with minimal operational overhead. ECS provides managed container orchestration, enabling multiple teams to own distinct services.
- An Application Load Balancer routes traffic to the appropriate containerized services and supports auto scaling, enhancing scalability.
- Why other options aren’t as suitable:
- A: Using Lambda is serverless but often requires substantial refactoring for a monolith moving to microservices; may introduce startup latency and stateful handling issues.
- B: Amplify is frontend-focused and not designed to orchestrate multiple backend microservices.
- C: Pure EC2 with ASG gives control but higher operational overhead vs a managed container approach for breaking into multiple services.
ovelaguz
Commented on April 12, 2026
Question 142:
Why:
- AWS Global Accelerator supports UDP traffic, provides static anycast IP addresses, and brings users onto the AWS network at the nearest edge location, delivering low latency for UDP-based traffic.
- It can forward traffic to a Network Load Balancer (NLB), which is suitable for UDP/TCP workloads and supports static IPs.
- Using EC2 instances in an Auto Scaling group gives control over kernel/UDP handling and allows scalable, highly available front-end servers.
Why the others aren’t as good:
- A: Route 53 with ALB targets HTTP/HTTPS, not UDP, and lacks static edge IPs.
- B: CloudFront is HTTP/HTTPS oriented and doesn’t support UDP at edge; NLB behind CloudFront adds complexity and latency.
- D: API Gateway targets HTTP traffic; not suitable for UDP and doesn’t provide static edge IPs.
ovelaguz
Commented on April 12, 2026
Question 141:
Why:
- Use CloudFront with the ALB as the origin. CloudFront caches static content at edge locations worldwide, so users get static assets from nearby edges.
- For dynamic content, CloudFront forwards requests to the origin (the ALB in the single region). The nearby edge location still reduces the round-trip time to the origin, improving latency for dynamic responses.
- A single region is sufficient because edge caching at CloudFront delivers content globally without duplicating back-end deployments; latency is minimized by the edge network rather than multi-region routing.
Why the others aren’t as good:
- B adds multi-region deployment and Route 53 latency routing, which is unnecessary since CloudFront already optimizes global delivery.
- C only caches static content; dynamic content would still travel to the ALB and be slower.
- D uses geolocation routing to a closest region but loses CloudFront’s global edge caching benefits and adds complexity.
ovelaguz
Commented on April 12, 2026
Question 140:
Correct answers: A and C
- A. Use Spot Instances for the data ingestion layer. The data ingestion on EC2 is sporadic and can tolerate interruptions, making Spot Instances the most cost-efficient option for this layer.
- C. Purchase a 1-year Compute Savings Plan for the front end and API layer. The front end runs on Fargate and the API on Lambda. A 1-year Compute Savings Plan covers compute usage across EC2, Fargate, and Lambda, providing significant savings with flexibility across these services.
Why others are less optimal:
- B: On-Demand is more expensive for the ingestion layer than Spot.
- D: An All Upfront RI for the ingestion layer is inflexible and doesn't suit potentially interrupted workloads.
- E: An EC2-only Savings Plan ignores Fargate/Lambda, which would miss savings on those services.
ovelaguz
Commented on April 12, 2026
Question 139:
- Enable S3 Replication between the source and analysis buckets so new objects are automatically copied as they arrive, with minimal manual effort.
- Use EventBridge to propagate ObjectCreated events from the analysis bucket to Lambda (for pattern matching) and to SageMaker Pipelines (for the ML pipeline), enabling event-driven processing.
- Why the other options aren’t as good:
- A: Copies done by a Lambda function would add overhead, risk duplicates, and require custom logic instead of built-in replication.
- B: Relying on EventBridge alone won’t ensure immediate cross-bucket replication.
- C: Lacking EventBridge routing means coordinating Lambda and SageMaker without centralized event-driven triggers.
- E: S3 Replication already does the heavy lifting; layering additional per-bucket copy rules on top adds unnecessary complexity.
ovelaguz
Commented on April 12, 2026
Question 135:
Question 135 asks how to connect privately to a service hosted in an external provider’s VPC, with access restricted to that service and initiated only from your VPC.
- Correct answer: D) Use AWS PrivateLink to connect to the target service. Create a VPC endpoint for the target service.
The provider creates a VPC Endpoint Service; you create an Interface Endpoint in your VPC to connect. Traffic stays on the AWS network and is restricted to that service.
- PrivateLink creates a private, service-specific connection that originates from your VPC and remains within the AWS network.
- It limits access to the single target service, satisfying the security requirement.
- Why the other options are incorrect:
- A: VPC peering connects entire VPCs, not a single service; traffic may reach other resources and isn’t PrivateLink-based.
- B: A provider-side VPN/Gateway doesn’t restrict access to just one service and isn’t PrivateLink.
- C: NAT gateway exposes traffic to the internet and does not establish a private, service-scoped connection.
ovelaguz
Commented on April 12, 2026
Question 134:
Question 134 asks for a serverless, globally replicated data analytics solution for data in S3, with encryption and minimal ops.
- Correct answer: C) Load into the existing S3 bucket. Enable CRR with SSE-S3. Use Athena to query the data.
- Using a single data store (S3) with serverless analytics (Athena) minimizes operational overhead.
- Enable CRR to replicate encrypted objects to another region, fulfilling global availability and DR needs.
- SSE-S3 keeps encryption managed by S3 with no extra key management.
- Why the other options are less suitable:
- A: Requiring SSE-KMS multi-Region keys adds key management overhead and potential latency; although valid, it's unnecessary for least overhead.
- B: Recommends RDS, which is not serverless analytics and increases operational overhead.
- D: Also uses RDS, which introduces database management and is not aligned with a serverless analytics model.
Key concepts:
- CRR replicates S3 objects between regions.
- SSE-S3 provides server-side encryption with minimal management.
- Athena enables serverless SQL queries directly on S3 data.
ovelaguz
Commented on April 12, 2026
Question 133:
Question 133 asks how to upgrade an on-premises Oracle DB to the latest version, set up DR, minimize operational overhead, and keep OS access.
- Correct answer: C) Migrate to RDS Custom for Oracle. Create a read replica for the database in another AWS Region.
- RDS Custom for Oracle provides managed DB provisioning with OS access for maintenance/admin tasks, while handling patching/upgrades and reducing operational overhead.
- A cross-region read replica gives DR capability with controlled lag and regional failover, meeting DR requirements while still allowing OS access for maintenance if needed.
- Why the other options are less suitable:
- A) EC2 + manual replication: high operational overhead (full OS/db management, failover handling).
- B) RDS for Oracle: no OS access; cross-region backups don’t provide OS-level control.
- D) Standby in another AZ: not cross-region DR and still limits OS access.
Key concepts:
- RDS Custom provides OS access plus managed DB operations.
- Cross-region read replicas support DR with lower overhead than full active-active setups.
ovelaguz
Commented on April 12, 2026
Question 131:
Question 131 asks how to serve all files via CloudFront while preventing direct access to the S3 URL.
- Correct approach: Use an Origin Access Identity (OAI). Create an OAI, assign it to the CloudFront distribution, and configure the S3 bucket so that only the OAI has permission to read objects. This keeps the bucket private and ensures that objects can only be retrieved through CloudFront, not via direct S3 URLs.
- Why the other options are incorrect:
- A: Creating per-object IAM users and policies for CloudFront isn't a standard, scalable way to restrict S3 access from CloudFront; CloudFront doesn’t use IAM users for access in this scenario.
- B: A bucket policy with the CloudFront distribution ID as the Principal is not a valid pattern; OAIs are the supported mechanism.
- C: A bucket policy that uses the distribution ID as Principal is invalid; you should use an OAI to grant S3 access to CloudFront.
Implementation outline:
- In CloudFront, create an Origin Access Identity and attach it to the distribution.
- In the S3 bucket, block public access and grant s3:GetObject to the OAI.
- Verify that objects are accessible via the CloudFront URL but not via the S3 URL.
Outcome: S3 remains private; content is served securely and privately through CloudFront.
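The outline above can be sketched as a bucket policy (the bucket name and OAI ID are placeholders; the special cloudfront:user principal ARN is how an OAI is referenced in S3 policies):

```python
# Illustrative bucket policy: only the CloudFront OAI may read objects.
# Combined with Block Public Access, CloudFront becomes the only read path.
# Bucket name and OAI ID are placeholders.

oai_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": ("arn:aws:iam::cloudfront:user/"
                    "CloudFront Origin Access Identity EXAMPLEOAIID")
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-content-bucket/*",
    }],
}
```

Note that Origin Access Control (OAC) is CloudFront's newer mechanism for the same goal, but the pattern of granting read access only to the CloudFront identity is the same.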
ovelaguz
Commented on April 12, 2026
Question 120:
Question 120 summary:
- Scenario: Self-managed DNS with three EC2s behind an NLB in us-west-2, plus another NLB in eu-west-1 with three EC2s. Goal: fast, highly available routing across US/Europe.
- Correct answer: Create an AWS Global Accelerator accelerator with endpoint groups in us-west-2 and eu-west-1. Add the two NLBs as endpoints.
Why this is correct:
- Global Accelerator provides a single fixed set of global IPs and health-aware routing across multiple regions, directing traffic to the closest healthy endpoint (here, the two NLBs). This improves performance and availability for users in both US and Europe with minimal configuration.
Why the other options are less suitable:
- A: Geolocation routing + CloudFront is regional or cached and won’t automatically optimize health across regions, adding unnecessary complexity.
- C: Attaching Elastic IPs to six instances is impractical and does not provide regional health-aware routing.
- D: Latency-based routing to ALBs would require changing infrastructure and does not give centralized, global optimization like Global Accelerator.
Key concepts:
- Global Accelerator uses endpoint groups per region and health checks to route traffic to healthy NLBs across regions.
- It provides fast failover and consistent performance for multi-region setups.
ovelaguz
Commented on April 12, 2026
Question 117:
- The requirement: store all application logs in Amazon OpenSearch Service in near real time with the least operational overhead.
- Correct answer: A — Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service. This is a fully managed, near-real-time integration with minimal setup and no custom code.
- It provides near-real-time ingestion directly from CloudWatch Logs to OpenSearch.
- It involves minimal configuration and no additional services or agents.
- Why the other options are worse for this scenario:
- B: A Lambda function would require writing code, handling retries, scaling, and maintenance.
- C: Kinesis Data Firehose adds an extra managed service layer and more configuration for near-real-time ingestion.
- D: Installing agents on each server and using Kinesis adds significant operational burden and scaling concerns.
ovelaguz
Commented on April 12, 2026
Question 109:
Answer: D
Explanation:
- Use an S3 bucket with S3 Object Lock enabled and Versioning. This provides immutability for new uploads so objects can't be deleted or overwritten during the retention/hold period.
- Apply a Legal Hold to the objects. A legal hold prevents deletion or modification until it’s released, giving flexible control without a fixed retention period.
- Grant delete capability only to specific users by adding the IAM permission s3:PutObjectLegalHold (and related policies) to those users, limiting who can remove the hold or delete objects.
- Why the others don’t fit:
- A) Glacier is archival and not directly integrated with per-object delete permissions in S3.
- B) Governance mode with a long retention prevents deletions but doesn’t provide per-user delete control.
- C) CloudTrail/recovery does not enforce immutability.
ovelaguz
Commented on April 12, 2026
Question 108:
Answer: A
Explanation:
- This uses an event-driven pattern: when an RDS update occurs (a listing is sold), a Lambda function is triggered to publish a message to an SQS queue.
- A standard (non-FIFO) SQS queue supports multiple consumers polling and processing the deletion data, enabling decoupled, scalable, reliable delivery to multiple targets.
- Why not B: A FIFO queue is unnecessary here; it introduces lower throughput and additional deduplication logic, which increases overhead.
- Why not C: RDS event notifications are for DB instance-level events, not for per-row data changes like a sold listing; and adding SNS fan-out adds complexity.
- Why not D: An SNS fan-out with multiple SQS queues adds extra hops; a direct Lambda-to-SQS flow is simpler and more efficient for this use case.
Key concepts:
- Event-driven architecture with AWS Lambda and SQS
- Decoupled, scalable delivery to multiple targets
- Standard vs FIFO queues trade-offs (throughput vs strict ordering)
ovelaguz
Commented on April 12, 2026
Question 98:
- Standard SQS queues provide at-least-once delivery, so a message can be delivered again if processing isn’t complete before the visibility timeout.
- Increasing the visibility timeout to exceed the sum of the Lambda function timeout and the batch window prevents the message from becoming visible again while still being processed, eliminating duplicate Lambda invocations and duplicate emails.
- A) Long polling reduces empty receives but doesn’t prevent duplicates.
- B) FIFO with deduplication is unnecessary overhead for this issue; standard queues inherently allow occasional duplicates, and the root cause here is the visibility timeout, not ordering.
- D) Deleting messages before processing risks losing messages if processing fails.
- E) Not applicable to this scenario.
- Key concept: Adjust the SQS visibility timeout in relation to the Lambda runtime and batch window to avoid reprocessing and duplicates.
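The arithmetic behind the fix is simple; the timeout values below are examples, not from the question:

```python
# The visibility timeout must exceed the longest time a message can
# legitimately be in flight: the batch window spent collecting messages
# plus the Lambda function timeout. Values below are illustrative.

lambda_timeout_s = 120   # example function timeout
batch_window_s = 30      # example maximum batching window

# Any value above this sum prevents the message from reappearing while it is
# still being processed. (AWS guidance suggests generous headroom, e.g. six
# times the function timeout plus the batch window.)
minimum_visibility_timeout_s = batch_window_s + lambda_timeout_s
```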
Anonymous User
Commented on April 12, 2026
Question 86:
Question 86 asks for a secure way for web servers to connect to a common RDS MySQL Multi-AZ DB instance while meeting a requirement to rotate user credentials frequently.
- Correct answer: A. Store the database user credentials in AWS Secrets Manager and grant the web servers the necessary IAM permissions to access Secrets Manager. Secrets Manager supports automatic rotation for RDS-compatible databases, giving centralized, secure, and frequently rotated credentials without manual effort.
Why this works:
- Centralized, automatically rotated credentials reduce the risk of credential leakage.
- Web servers fetch credentials securely from Secrets Manager via IAM permissions.
- Rotation is built-in and scalable for multiple web servers.
Why the other options are not suitable:
- B: AWS Systems Manager OpsCenter is for operational issue management, not credential storage or rotation.
- C: Storing credentials in a secure S3 bucket requires manual rotation and lacks integrated rotation/auditability.
- D: Per-host files encrypted with KMS do not provide centralized rotation or easy auditability for many servers.
Implementation hint:
- Create a secret in Secrets Manager for the DB credentials, enable rotation for the secret, and attach an IAM role to the web servers that allows secretsmanager:GetSecretValue (and related permissions) to retrieve credentials at runtime.
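The IAM piece of that hint can be sketched as a policy document (the account ID, Region, and secret name are placeholders; the trailing wildcard accommodates the random suffix Secrets Manager appends to secret ARNs):

```python
# Illustrative IAM policy for the web servers' instance role: read access to
# the one DB secret, nothing more. ARN components are placeholders.

secrets_read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "secretsmanager:GetSecretValue",
            "secretsmanager:DescribeSecret",
        ],
        # wildcard covers the random 6-character suffix on secret ARNs
        "Resource": "arn:aws:secretsmanager:us-east-1:111122223333:secret:prod/mysql-creds-*",
    }],
}
```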
Anonymous User
Commented on April 12, 2026
Question 59:
Question 59 focuses on ingesting and analyzing 30+ TB of clickstream data daily.
- Answer: D
- Why: Use a managed streaming pipeline: collect with Kinesis Data Streams, deliver with Kinesis Data Firehose to an S3 data lake, then load into Amazon Redshift for analytics. This provides scalable, real-time ingestion and straightforward analytics loading.
Why the others are not suitable:
- A) AWS Data Pipeline is deprecated for new workloads.
- B) EC2-based ECS requires managing infrastructure and isn’t a managed streaming solution.
- C) CloudFront is a CDN, not a data ingestion or streaming mechanism; Lambda alone isn’t scalable for continuous 30 TB/day without orchestration.
Anonymous User
Commented on April 12, 2026
Question 11:
- Correct answer: A — Use AWS Secrets Manager with automatic rotation.
Why this is correct:
- Centralizes database credentials in Secrets Manager instead of local files.
- Supports automatic rotation for database credentials, reducing manual maintenance.
- Applications can retrieve secrets at runtime without code changes, improving security and reducing ops overhead.
How to implement (high level):
- Create a secret in Secrets Manager for the Aurora DB credentials.
- Enable automatic rotation (MySQL/Aurora-compatible rotation) with the built-in Lambda function.
- Grant your EC2 instances an IAM role that allows secretsmanager:GetSecretValue.
- Update the application to fetch credentials from Secrets Manager (or continue using the secret’s value without embedding passwords in the host).
Why the other options are less suitable:
- B: Parameter Store can hold secrets but lacks the seamless, native rotation for database credentials you get with Secrets Manager.
- C: Storing credentials in S3 and rotating with Lambda is manual and riskier; no native rotation flow.
- D: Rotating via encrypted EBS volumes is complex, manual, and doesn’t provide centralized or automated credential rotation.
Anonymous User
Commented on April 12, 2026
Question 3:
- The global condition key aws:PrincipalOrgID allows you to grant access only to principals (users/roles) that belong to your AWS Organization.
- By adding this condition to the S3 bucket policy, all accounts within the organization gain access without managing per-account permissions or OUs.
- This minimizes ongoing admin effort and scales automatically as accounts are added to the org.
- Why the others are less suitable:
- B (aws:PrincipalOrgPaths): Requires explicit OU paths and ongoing maintenance as accounts move between OUs; increases complexity.
- C (CloudTrail monitoring): It logs events but cannot enforce real-time access control.
- D (aws:PrincipalTag): Needs tagging every user and maintaining tag-based policies, adding manual overhead.
- Quick example (conceptual):
- Policy with a Condition like "aws:PrincipalOrgID": "o-1234567890" grants access to any principal from your organization, with no per-user updates needed.
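To make the conceptual example concrete, here is a sketch of the full bucket policy built as a Python dict. aws:PrincipalOrgID is the real global condition key; the bucket name and the org ID are placeholders:

```python
import json

ORG_ID = "o-1234567890"                            # placeholder org ID
BUCKET_ARN = "arn:aws:s3:::example-shared-bucket"  # hypothetical bucket

# S3 bucket policy granting read access to any principal in the organization.
# No per-account or per-OU maintenance is needed as the org grows.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrgRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [BUCKET_ARN, f"{BUCKET_ARN}/*"],
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Note that the broad Principal of "*" is safe here only because the StringEquals condition restricts access to principals inside the organization.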
San Miguel De Allende, Mexico
Prasad
Commented on January 24, 2026
Question 108: D is correct answer, not A
Anonymous
Reborn
Commented on January 24, 2026
Question 534
Did the answer consider:
- The 90-Day Minimum Billing Rule of S3 Glacier Flexible Retrieval
- Retrieval "Index" Overhead: Glacier is designed for massive archives. When you move a file to Glacier Flexible Retrieval, AWS adds 40 KB of metadata (32 KB for index/billing and 8 KB for the filename/metadata) to every single object.
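A back-of-envelope sketch of what that per-object overhead means at scale, assuming the documented split of 32 KB billed at Glacier rates plus 8 KB billed at S3 Standard rates; the object count is illustrative:

```python
# Rough metadata-overhead estimate for archiving many small objects to
# S3 Glacier Flexible Retrieval: ~40 KB added per object, split as
# 32 KB billed at Glacier rates and 8 KB billed at S3 Standard rates.
NUM_OBJECTS = 1_000_000
GLACIER_OVERHEAD_KB = 32
STANDARD_OVERHEAD_KB = 8

kb_per_gb = 1024 * 1024
glacier_overhead_gb = NUM_OBJECTS * GLACIER_OVERHEAD_KB / kb_per_gb
standard_overhead_gb = NUM_OBJECTS * STANDARD_OVERHEAD_KB / kb_per_gb

print(f"Glacier-rate overhead:  {glacier_overhead_gb:.1f} GB")   # ~30.5 GB
print(f"Standard-rate overhead: {standard_overhead_gb:.1f} GB")  # ~7.6 GB
```

For a million tiny files that is roughly 38 GB of billable metadata before counting the objects' own data, which is why Glacier tiers suit large archives far better than many small objects.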
EUROPEAN UNION
Reborn
Commented on January 24, 2026
Question 523:
Why not Answer A: AWS AppSync pipeline resolvers
AWS AppSync pipeline resolvers are purpose-built for exactly this use case.
What AppSync pipeline resolvers do well
Allow a single API call to:
Execute multiple DynamoDB operations
Query multiple tables sequentially or in parallel
Run fully managed, serverless logic (no infrastructure to manage)
Offload orchestration logic from Lambda code
Maintain low latency by executing resolver steps natively inside AppSync
EUROPEAN UNION
Reborn
Commented on January 22, 2026
Question 338:
I choose Answer D: Set up an Aurora global database for the DB cluster. Specify a minimum of one DB instance in the secondary Region.
Answer B: Set up an Aurora global database for the DB cluster. When setup is complete, remove the DB instance from the secondary Region.
- If you remove the instance, there is no DB instance to serve as a DR standby.
EUROPEAN UNION
Reborn
Commented on January 22, 2026
Question 302:
Which combination of solutions will meet these requirements? (Choose two.)
But the shown answer only includes one option.
EUROPEAN UNION
Reborn
Commented on January 22, 2026
Question 275:
Answer A: Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.
This is a predictable traffic pattern, not an unexpected one, so proactively scaling out before users arrive solves the stated problem: the app is very slow when the day begins.
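A sketch of the scheduled-action request. The parameter keys match the EC2 Auto Scaling PutScheduledUpdateGroupAction API; the group name, action name, and schedule are hypothetical. In practice this dict would be passed to boto3 as client.put_scheduled_update_group_action(**scheduled_action):

```python
# Scheduled scale-out shortly before the office opens. Recurrence uses
# cron syntax evaluated in UTC; names and times below are placeholders.
scheduled_action = {
    "AutoScalingGroupName": "web-app-asg",         # hypothetical ASG name
    "ScheduledActionName": "scale-out-before-open",
    "Recurrence": "45 8 * * MON-FRI",              # 08:45 UTC on weekdays
    "DesiredCapacity": 20,                         # capacity needed at open
    "MinSize": 20,
}

print("recurrence:", scheduled_action["Recurrence"])
```

A matching evening action can then lower DesiredCapacity and MinSize again after hours, so the group only runs 20 instances during the workday.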
EUROPEAN UNION
Reborn
Commented on January 21, 2026
Question: 263
A - Deploy an Amazon Elastic Container Service (Amazon ECS) cluster.
D - Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type. Specify a desired task number level of greater than or equal to 2.
D looks like it fully covers A on its own, yet the question requires choosing two options.
EUROPEAN UNION
Reborn
Commented on January 21, 2026
QUESTION: 208
Answers A and B are a bit confusing.
Option B, ignoring its incorrect mention of security groups, looks correct.
EUROPEAN UNION
amazon
Commented on January 10, 2026
QUESTION: 88 — the right answer is A) Configure the Requester Pays feature on the company's S3 bucket.
CANADA
Raju Prasad
Commented on November 19, 2025
In Question 239 I think the correct answer is A. Reason: API Gateway introduces extra operational overhead and cost. However, API Gateway can integrate with Lambda and supports AWS_IAM authentication, and that works correctly. On the other hand, Lambda function URLs provide HTTPS endpoints without needing API Gateway or an ALB, and they natively support IAM authentication. Please suggest whether my analysis is correct or not.
That said, API Gateway also provides additional features such as caching, throttling, and monitoring, which can be useful for a microservice, so that is another plus for API Gateway. So which one should we mark as the correct answer?
CHINA
Glauco
Commented on November 19, 2025
Question 82 is D!
Anonymous
Ricardo Nelumba
Commented on September 12, 2025
On question 30 the correct answer is A
Here's why:
- Cost savings: You only pay for storage while the instance is stopped, eliminating compute charges for ~672 hours per month (720 - 48 = 672 hours minimum)
- Maintains requirements: The instance retains its full compute and memory configuration when restarted
- Minimal operational overhead: Stop/start operations are simple, quick (typically 2-5 minutes), and can be automated
- No data risk: All data and configurations remain intact
- Performance Insights: Remains enabled and configured as before
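A quick back-of-envelope check of the savings claimed above, assuming a 720-hour month with ~48 hours of actual use; the hourly rate is an illustrative placeholder, not AWS pricing:

```python
# Stop/start savings estimate: pay compute only while the instance runs.
HOURS_PER_MONTH = 720
HOURS_RUNNING = 48
COMPUTE_RATE = 0.50   # $/hour while running (placeholder rate)

hours_stopped = HOURS_PER_MONTH - HOURS_RUNNING
always_on_cost = HOURS_PER_MONTH * COMPUTE_RATE
stop_start_cost = HOURS_RUNNING * COMPUTE_RATE
savings_pct = 100 * (always_on_cost - stop_start_cost) / always_on_cost

print(f"hours stopped:   {hours_stopped}")         # 672
print(f"compute savings: {savings_pct:.1f}%")      # 93.3%
```

Storage (EBS) charges continue while stopped, so the real saving is slightly below the compute-only figure, but it still dominates for a mostly idle instance.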
Anonymous
Rud
Commented on September 10, 2025
I passed this exam today. Here are my 2 cents:
1- Focus on understanding core AWS services
2- Practice with these questions, as they closely reflect the real exam.
3- Time management is crucial so pay attention.
GERMANY
MK
Commented on September 06, 2025
Question 88 - answer should be C
Configure cross-account access for the marketing firm so that the marketing firm has access to the company's S3 bucket. (correct)
The cheapest way is to grant cross-account IAM permissions (a bucket policy or resource policy).
The marketing firm can directly access the U.S.-based S3 bucket.
Because the data stays in the original bucket, this avoids unnecessary duplication or replication costs.
Most cost-effective.
AUSTRALIA
MK
Commented on September 06, 2025
Question 82 - A company hosts its web applications in the AWS Cloud. The company configures Elastic Load Balancers to use certificates that are imported into AWS Certificate Manager (ACM). The company's security team must be notified 30 days before the expiration of each certificate.
What should a solutions architect recommend to meet this requirement?
Answer should be D as the efficient and easy solution.
AUSTRALIA
Geremy
Commented on August 28, 2025
Passed this exam about 3 days ago. Most of the questions were the same in my exam.
This is a valid exam dumps pdf and all worth the time.
UNITED KINGDOM
Gayathiri
Commented on August 25, 2025
I prepared for the AWS SAA-C03 exam and, afterwards, came here to revise my knowledge. I passed the exam. 15 to 20 exactly the same questions appeared in the exam, and I understood the concepts from working through the entire practice test. Thank you for providing these practice tests.
Anonymous
Anoop
Commented on August 09, 2025
These questions are really beneficial for someone preparing for AWS SAA-C03.
Anonymous
Suraj
Commented on August 01, 2025
Good exam dumps PDF, which is very helpful while prepping for the SAA.
Thanks!
Anonymous