Amazon SAP-C02 Exam
AWS Certified Solutions Architect - Professional SAP-C02 (Page 3)

Updated On: 1-Feb-2026

A company is running a traditional web application on Amazon EC2 instances. The company needs to refactor the application as microservices that run on containers. Separate versions of the application exist in two distinct environments: production and testing. Load for the application is variable, but the minimum load and the maximum load are known. A solutions architect needs to design the updated application with a serverless architecture that minimizes operational complexity.

Which solution will meet these requirements MOST cost-effectively?

  A. Upload the container images to AWS Lambda as functions. Configure a concurrency limit for the associated Lambda functions to handle the expected peak load. Configure two separate Lambda integrations within Amazon API Gateway: one for production and one for testing.
  B. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the ECS clusters.
  C. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Kubernetes Service (Amazon EKS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the EKS clusters.
  D. Upload the container images to AWS Elastic Beanstalk. In Elastic Beanstalk, create separate environments and deployments for production and testing. Configure two separate Application Load Balancers to direct traffic to the Elastic Beanstalk deployments.

Answer(s): B

Explanation:

Option B is the most cost-effective and operationally efficient solution because it uses Amazon ECS with the Fargate launch type, which runs containers without any servers to manage and scales automatically between the known minimum and maximum loads. EKS (option C) would also work technically but adds a per-cluster control-plane charge and Kubernetes operational overhead, while Lambda (option A) is not suited to hosting a refactored long-running web application. Option B keeps production and testing fully separated, with traffic directed by distinct Application Load Balancers.
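As a rough sketch of option B's moving parts, the following builds the request payloads that would be passed to boto3's `ecs.create_service` and Application Auto Scaling's `register_scalable_target` for each environment. All names, image references, and task counts are illustrative assumptions, not values from the question:

```python
# Sketch of the ECS-on-Fargate setup from option B. Cluster names, the
# task definition names, and the task counts are made-up placeholders; in
# practice these dicts would be passed to boto3 ECS / Application Auto
# Scaling clients.

def fargate_service_params(env: str, min_tasks: int) -> dict:
    """Parameters for one environment's ECS service on Fargate."""
    return {
        "cluster": f"webapp-{env}",          # one cluster per environment
        "serviceName": f"webapp-{env}-svc",
        "taskDefinition": f"webapp-{env}",   # task def built from the ECR image
        "launchType": "FARGATE",             # serverless: no EC2 capacity to manage
        "desiredCount": min_tasks,           # start at the known minimum load
    }

def scaling_target_params(env: str, min_tasks: int, max_tasks: int) -> dict:
    """Scalable-target bounds matching the known min/max load."""
    return {
        "ServiceNamespace": "ecs",
        "ResourceId": f"service/webapp-{env}/webapp-{env}-svc",
        "ScalableDimension": "ecs:service:DesiredCount",
        "MinCapacity": min_tasks,
        "MaxCapacity": max_tasks,
    }

# One parameter set per environment, each fronted by its own ALB.
prod = fargate_service_params("production", min_tasks=4)
test = fargate_service_params("testing", min_tasks=1)
```

Because the minimum and maximum loads are known, the scalable-target bounds can be set precisely, so the company pays only for the task capacity actually running.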



A company has a multi-tier web application that runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The ALB and the Auto Scaling group are replicated in a backup AWS Region. The minimum value and the maximum value for the Auto Scaling group in the backup Region are set to zero. An Amazon RDS Multi-AZ DB instance stores the application’s data. The DB instance has a read replica in the backup Region. The application presents an endpoint to end users by using an Amazon Route 53 record.

The company needs to reduce its RTO to less than 15 minutes by giving the application the ability to automatically fail over to the backup Region. The company does not have a large enough budget for an active-active strategy.

What should a solutions architect recommend to meet these requirements?

  A. Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.
  B. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Configure Route 53 with a health check that monitors the web application and sends an Amazon Simple Notification Service (Amazon SNS) notification to the Lambda function when the health check status is unhealthy. Update the application’s Route 53 record with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs.
  C. Configure the Auto Scaling group in the backup Region to have the same values as the Auto Scaling group in the primary Region. Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Remove the read replica. Replace the read replica with a standalone RDS DB instance. Configure Cross-Region Replication between the RDS DB instances by using snapshots and Amazon S3.
  D. Configure an endpoint in AWS Global Accelerator with the two ALBs as equal weighted targets. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.

Answer(s): B

Explanation:

Option B is the best choice because it uses an AWS Lambda function to promote the read replica and adjust the Auto Scaling group values in the backup region. This setup, combined with Route 53 health checks and failover routing, ensures automatic failover within the required RTO of less than 15 minutes without the need for an active-active strategy, keeping costs low.
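The Lambda function in option B only has to do two things: promote the read replica and raise the backup Auto Scaling group from 0/0. A minimal, dependency-injected sketch follows; the replica identifier, ASG name, and capacity numbers are made-up placeholders, and in AWS the `rds` and `autoscaling` arguments would be boto3 clients for the backup Region:

```python
# Minimal sketch of the failover Lambda from option B. Identifiers and
# capacity values are illustrative assumptions, not values from the
# question; clients are passed in so the logic can be exercised offline.

def failover_handler(rds, autoscaling,
                     replica_id="app-db-replica",
                     asg_name="app-backup-asg",
                     min_size=2, max_size=6):
    # 1. Promote the cross-Region read replica to a standalone, writable primary.
    rds.promote_read_replica(DBInstanceIdentifier=replica_id)
    # 2. Raise the backup ASG from 0/0 so instances launch behind the backup ALB.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        MinSize=min_size,
        MaxSize=max_size,
        DesiredCapacity=min_size,
    )
    return {"promoted": replica_id, "asg": asg_name}
```

Route 53's failover routing policy redirects users to the backup ALB as soon as the health check fails, so DNS failover and capacity provisioning proceed in parallel, which is what keeps the RTO under 15 minutes.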



A company is hosting a critical application on a single Amazon EC2 instance. The application uses an Amazon ElastiCache for Redis single-node cluster for an in-memory data store. The application uses an Amazon RDS for MariaDB DB instance for a relational database. For the application to function, each piece of the infrastructure must be healthy and must be in an active state.

A solutions architect needs to improve the application's architecture so that the infrastructure can automatically recover from failure with the least possible downtime.

Which combination of steps will meet these requirements? (Choose three.)

  A. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are part of an Auto Scaling group that has a minimum capacity of two instances.
  B. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are configured in unlimited mode.
  C. Modify the DB instance to create a read replica in the same Availability Zone. Promote the read replica to be the primary DB instance in failure scenarios.
  D. Modify the DB instance to create a Multi-AZ deployment that extends across two Availability Zones.
  E. Create a replication group for the ElastiCache for Redis cluster. Configure the cluster to use an Auto Scaling group that has a minimum capacity of two instances.
  F. Create a replication group for the ElastiCache for Redis cluster. Enable Multi-AZ on the cluster.

Answer(s): A,D,F

Explanation:

To ensure the application’s infrastructure can automatically recover from failures with minimal downtime, the following steps are necessary:

A: Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are part of an Auto Scaling group that has a minimum capacity of two instances.

This step ensures high availability and fault tolerance for the application tier by distributing traffic across multiple instances and automatically replacing unhealthy instances, so a single instance failure no longer takes the application down.

D: Modify the DB instance to create a Multi-AZ deployment that extends across two Availability Zones.

Multi-AZ deployments provide enhanced availability and durability for the database. In the event of an AZ failure, Amazon RDS automatically fails over to the standby instance in another AZ, minimizing downtime.

F: Create a replication group for the ElastiCache for Redis cluster. Enable Multi-AZ on the cluster.

Enabling Multi-AZ for ElastiCache ensures that the in-memory data store is highly available and can fail over to a replica in another AZ if the primary node fails, thus reducing downtime.

These steps collectively enhance the resilience and availability of the application’s infrastructure, ensuring it can recover quickly from failures.
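Steps D and F can be sketched as the request payloads one would pass to boto3's `rds.modify_db_instance` and `elasticache.create_replication_group`. Identifiers and the node size are illustrative assumptions:

```python
# Sketch of steps D and F as boto3-style request payloads. The instance
# identifier, replication group name, and node type are made-up
# placeholders, not values from the question.

rds_multi_az = {
    "DBInstanceIdentifier": "app-mariadb",
    "MultiAZ": True,          # provisions a synchronous standby in a second AZ
    "ApplyImmediately": True,
}

redis_replication_group = {
    "ReplicationGroupId": "app-cache",
    "ReplicationGroupDescription": "App cache with automatic failover",
    "Engine": "redis",
    "CacheNodeType": "cache.t3.medium",
    "NumCacheClusters": 2,              # primary plus one replica
    "MultiAZEnabled": True,             # replica placed in a different AZ
    "AutomaticFailoverEnabled": True,   # required when Multi-AZ is enabled
}
```

Note the pairing in the Redis payload: Multi-AZ on ElastiCache requires automatic failover to be enabled and at least one replica to exist, which is why option F (a replication group plus Multi-AZ) is correct while option E's Auto Scaling group is not how ElastiCache nodes are managed.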



A retail company is operating its ecommerce application on AWS. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses an Amazon RDS DB instance as the database backend. Amazon CloudFront is configured with one origin that points to the ALB. Static content is cached. Amazon Route 53 is used to host all public zones.

After an update of the application, the ALB occasionally returns a 502 status code (Bad Gateway) error. The root cause is malformed HTTP headers that are returned to the ALB. The webpage returns successfully when a solutions architect reloads the webpage immediately after the error occurs.

While the company is working on the problem, the solutions architect needs to provide a custom error page instead of the standard ALB error page to visitors.

Which combination of steps will meet this requirement with the LEAST amount of operational overhead? (Choose two.)

  A. Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.
  B. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Target.FailedHealthChecks is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
  C. Modify the existing Amazon Route 53 records by adding health checks. Configure a fallback target if the health check fails. Modify DNS records to point to a publicly accessible webpage.
  D. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Elb.InternalError is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
  E. Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.

Answer(s): A,E

Explanation:

To provide a custom error page with minimal operational overhead:

A: Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.

This allows for a simple and scalable way to serve custom error pages.

E: Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.

This ensures users see a custom error page through CloudFront, reducing backend load and providing a seamless experience.
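The CloudFront side of the fix can be sketched as the `CustomErrorResponses` section of a distribution config, pointing 502 responses at an error page hosted in the S3 bucket from step A (added to the distribution as a second origin). The path and TTL below are illustrative assumptions:

```python
# Sketch of the CloudFront custom error response from option E. The page
# path and caching TTL are made-up placeholders; this dict follows the
# shape of the CustomErrorResponses block in a CloudFront distribution
# configuration.

custom_error_responses = {
    "Quantity": 1,
    "Items": [
        {
            "ErrorCode": 502,                        # the ALB's Bad Gateway errors
            "ResponsePagePath": "/errors/502.html",  # object served from the S3 origin
            "ResponseCode": "502",                   # keep the true status code
            "ErrorCachingMinTTL": 10,                # low TTL: retry the origin quickly,
        }                                            # since reloads usually succeed
    ],
}
```

Because CloudFront already sits in front of the ALB, this requires no Lambda functions, alarms, or DNS changes to intercept the errors, which is what makes the A+E combination the lowest-overhead answer.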



A company has many AWS accounts and uses AWS Organizations to manage all of them. A solutions architect must implement a solution that the company can use to share a common network across multiple accounts.

The company’s infrastructure team has a dedicated infrastructure account that has a VPC. The infrastructure team must use this account to manage the network. Individual accounts cannot have the ability to manage their own networks. However, individual accounts must be able to create AWS resources within subnets.

Which combination of actions should the solutions architect perform to meet these requirements? (Choose two.)

  A. Create a transit gateway in the infrastructure account.
  B. Enable resource sharing from the AWS Organizations management account.
  C. Create VPCs in each AWS account within the organization in AWS Organizations. Configure the VPCs to share the same CIDR range and subnets as the VPC in the infrastructure account. Peer the VPCs in each individual account with the VPC in the infrastructure account.
  D. Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each subnet to associate with the resource share.
  E. Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each prefix list to associate with the resource share.

Answer(s): B,D

Explanation:

To share a common network across multiple AWS accounts, the solutions architect should leverage AWS Resource Access Manager (RAM) and AWS Organizations for efficient and secure resource sharing.

B: Enable resource sharing from the AWS Organizations management account: This action allows the sharing of resources, such as VPCs and subnets, across accounts within the organization. AWS Organizations helps streamline governance and resource management across multiple AWS accounts.

D: Create a resource share in AWS Resource Access Manager in the infrastructure account: By using AWS RAM, the infrastructure team can share specific resources like subnets with other accounts, ensuring that individual accounts can create resources in shared subnets without managing their own network infrastructure. RAM allows secure and managed sharing of resources within the organization's structure.

These steps ensure that the network is centrally managed by the infrastructure team while still allowing other accounts to deploy resources within the shared network environment.
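Option D can be sketched as the parameters the infrastructure account would pass to boto3's `ram.create_resource_share`. All ARNs and names below are made-up placeholders:

```python
# Sketch of the AWS RAM share from option D. Account IDs, subnet IDs, and
# the OU ARN are illustrative placeholders, not real identifiers.

resource_share = {
    "name": "shared-network",
    "resourceArns": [
        # The subnets of the infrastructure account's VPC to share.
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc",
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0def",
    ],
    "principals": [
        # The OU whose member accounts may launch resources in these subnets.
        "arn:aws:organizations::111111111111:ou/o-example/ou-example",
    ],
    "allowExternalPrincipals": False,  # restrict sharing to the organization
}
```

Sharing subnets (rather than the whole VPC) is exactly the split the question asks for: member accounts can launch EC2 instances, RDS databases, and other resources into the shared subnets, but route tables, NACLs, and the VPC itself remain controllable only from the infrastructure account.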



Viewing page 3 of 107
Viewing questions 11 - 15 out of 533 questions
