Amazon SAP-C01 Exam
AWS Certified Solutions Architect - Professional SAP-C02

Updated On: 1-Feb-2026

A company is building a serverless application that runs on an AWS Lambda function that is attached to a VPC. The company needs to integrate the application with a new service from an external provider. The external provider supports only requests that come from public IPv4 addresses that are in an allow list.

The company must provide a single public IP address to the external provider before the application can start using the new service.

Which solution will give the application the ability to access the new service?

  1. Deploy a NAT gateway. Associate an Elastic IP address with the NAT gateway. Configure the VPC to use the NAT gateway.
  2. Deploy an egress-only internet gateway. Associate an Elastic IP address with the egress-only internet gateway. Configure the elastic network interface on the Lambda function to use the egress-only internet gateway.
  3. Deploy an internet gateway. Associate an Elastic IP address with the internet gateway. Configure the Lambda function to use the internet gateway.
  4. Deploy an internet gateway. Associate an Elastic IP address with the internet gateway. Configure the default route in the public VPC route table to use the internet gateway.

Answer(s): A

Explanation:

A) Deploy a NAT gateway. Associate an Elastic IP address with the NAT gateway. Configure the VPC to use the NAT gateway is the correct solution. A NAT gateway lets resources in private subnets (such as a Lambda function attached to a VPC) reach the internet or external services while remaining unreachable from the internet. Placing the NAT gateway in a public subnet and pointing the private subnets' default route (0.0.0.0/0) at it sends all of the Lambda function's outbound traffic through the gateway. Because the NAT gateway is associated with an Elastic IP address, that traffic always leaves AWS from a single, predictable public IPv4 address, which satisfies the requirement of providing one IP address for the external provider's allow list.
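For illustration, a minimal boto3 sketch of this setup, assuming the Lambda function's VPC already has a public subnet for the NAT gateway and a private subnet (with its own route table) for the function's network interfaces; the subnet and route table IDs below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs -- substitute the real public subnet and private route table.
PUBLIC_SUBNET_ID = "subnet-0123456789abcdef0"      # public subnet that hosts the NAT gateway
PRIVATE_ROUTE_TABLE_ID = "rtb-0123456789abcdef0"   # route table of the Lambda function's private subnets

# 1. Allocate an Elastic IP address; this is the single public IP the external provider allow-lists.
eip = ec2.allocate_address(Domain="vpc")

# 2. Create the NAT gateway in the public subnet and attach the Elastic IP to it.
nat = ec2.create_nat_gateway(
    SubnetId=PUBLIC_SUBNET_ID,
    AllocationId=eip["AllocationId"],
)
nat_gateway_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before adding the route.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])

# 3. Send the private subnets' internet-bound traffic through the NAT gateway,
#    so the Lambda function's outbound requests leave AWS from the Elastic IP.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway_id,
)

print("Allow-list this address with the external provider:", eip["PublicIp"])
```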



A solutions architect has developed a web application that uses an Amazon API Gateway Regional endpoint and an AWS Lambda function. The consumers of the web application are all close to the AWS Region where the application will be deployed. The Lambda function only queries an Amazon Aurora MySQL database. The solutions architect has configured the database to have three read replicas.

During testing, the application does not meet performance requirements. Under high load, the application opens a large number of database connections. The solutions architect must improve the application’s performance.

Which actions should the solutions architect take to meet these requirements? (Choose two.)

  1. Use the cluster endpoint of the Aurora database.
  2. Use RDS Proxy to set up a connection pool to the reader endpoint of the Aurora database.
  3. Use the Lambda Provisioned Concurrency feature.
  4. Move the code for opening the database connection in the Lambda function outside of the event handler.
  5. Change the API Gateway endpoint to an edge-optimized endpoint.

Answer(s): B,D

Explanation:

B) Use RDS Proxy to set up a connection pool to the reader endpoint of the Aurora database, and
D) Move the code for opening the database connection in the Lambda function outside of the event handler are the correct answers.

RDS Proxy helps improve database performance by efficiently managing database connections through a connection pool, which is critical in high-load scenarios where too many connections could overwhelm the Aurora MySQL database. By directing traffic to the reader endpoint, it also offloads the read queries from the primary instance.
Moving the code for opening the database connection outside the Lambda function's event handler ensures that the database connection is reused across multiple invocations, reducing the overhead of repeatedly opening and closing connections, improving both performance and scalability.
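A minimal sketch of that Lambda pattern, assuming pymysql is packaged with the function and that the RDS Proxy read-only endpoint and credentials are supplied through environment variables (the variable names and the example table are illustrative):

```python
import os
import pymysql

# Opened once per execution environment (outside the handler), so warm invocations
# reuse the same connection instead of opening a new one on every request.
# The host points at an RDS Proxy read-only endpoint, which pools connections
# in front of the Aurora read replicas.
connection = pymysql.connect(
    host=os.environ["PROXY_READER_ENDPOINT"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
    connect_timeout=5,
)


def handler(event, context):
    # Reuse the long-lived connection; ping(reconnect=True) re-establishes it
    # if it was closed while the execution environment sat idle.
    connection.ping(reconnect=True)
    with connection.cursor() as cursor:
        cursor.execute("SELECT COUNT(*) FROM orders")  # example read-only query
        (count,) = cursor.fetchone()
    return {"order_count": count}
```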



A company is planning to host a web application on AWS and wants to load balance the traffic across a group of Amazon EC2 instances. One of the security requirements is to enable end-to-end encryption in transit between the client and the web server.

Which solution will meet this requirement?

  1. Place the EC2 instances behind an Application Load Balancer (ALB). Provision an SSL certificate using AWS Certificate Manager (ACM), and associate the SSL certificate with the ALB. Export the SSL certificate and install it on each EC2 instance. Configure the ALB to listen on port 443 and to forward traffic to port 443 on the instances.
  2. Associate the EC2 instances with a target group. Provision an SSL certificate using AWS Certificate Manager (ACM). Create an Amazon CloudFront distribution and configure it to use the SSL certificate. Set CloudFront to use the target group as the origin server.
  3. Place the EC2 instances behind an Application Load Balancer (ALB). Provision an SSL certificate using AWS Certificate Manager (ACM), and associate the SSL certificate with the ALB. Provision a third-party SSL certificate and install it on each EC2 instance. Configure the ALB to listen on port 443 and to forward traffic to port 443 on the instances.
  4. Place the EC2 instances behind a Network Load Balancer (NLB). Provision a third-party SSL certificate and install it on the NLB and on each EC2 instance. Configure the NLB to listen on port 443 and to forward traffic to port 443 on the instances.

Answer(s): C

Explanation:

C) Place the EC2 instances behind an Application Load Balancer (ALB). Provision an SSL certificate using AWS Certificate Manager (ACM), and associate the SSL certificate with the ALB. Provision a third-party SSL certificate and install it on each EC2 instance. Configure the ALB to listen on port 443 and to forward traffic to port 443 on the instances is the correct solution.

This approach ensures end-to-end encryption by using an SSL certificate on both the Application Load Balancer (ALB) and the EC2 instances. The ALB terminates the client TLS connection with the ACM certificate, and installing a third-party SSL certificate on each EC2 instance keeps the connection between the ALB and the instances encrypted as well. A third-party certificate is required on the instances because ACM public certificates cannot be exported, which is why the option that exports the ACM certificate to the instances is not viable.

This setup meets the security requirement while providing load balancing for traffic to the EC2 instances.
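A rough boto3 sketch of the ALB side of this configuration (the load balancer ARN, VPC ID, and ACM certificate ARN are placeholders; the third-party certificates on the instances themselves are installed separately, for example in the web server configuration):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs and IDs -- substitute real values.
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example/abc123"
ACM_CERT_ARN = "arn:aws:acm:us-east-1:123456789012:certificate/example"
VPC_ID = "vpc-0123456789abcdef0"

# Target group that forwards to the instances over HTTPS on port 443,
# so traffic stays encrypted between the ALB and the web servers.
tg = elbv2.create_target_group(
    Name="web-https-443",
    Protocol="HTTPS",
    Port=443,
    VpcId=VPC_ID,
    TargetType="instance",
    HealthCheckProtocol="HTTPS",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# HTTPS listener on port 443 that terminates client TLS with the ACM certificate
# and forwards to the HTTPS target group.
elbv2.create_listener(
    LoadBalancerArn=ALB_ARN,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": ACM_CERT_ARN}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```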



A company wants to migrate its data analytics environment from on premises to AWS. The environment consists of two simple Node.js applications. One of the applications collects sensor data and loads it into a MySQL database. The other application aggregates the data into reports. When the aggregation jobs run, some of the load jobs fail to run correctly.

The company must resolve the data loading issue. The company also needs the migration to occur without interruptions or changes for the company’s customers.

What should a solutions architect do to meet these requirements?

  1. Set up an Amazon Aurora MySQL database as a replication target for the on-premises database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind a Network Load Balancer (NLB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the NLB.
  2. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Move the aggregation jobs to run against the Aurora MySQL database. Set up collection endpoints behind an Application Load Balancer (ALB) as Amazon EC2 instances in an Auto Scaling group. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS.
  3. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind an Application Load Balancer (ALB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS.
  4. Set up an Amazon Aurora MySQL database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as an Amazon Kinesis data stream. Use Amazon Kinesis Data Firehose to replicate the data to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the Kinesis data stream.

Answer(s): C

Explanation:

C) Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind an Application Load Balancer (ALB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS is the correct answer.

This solution leverages AWS DMS for seamless, continuous replication from the on-premises MySQL database to Amazon Aurora, ensuring minimal downtime during the migration. Using Aurora Replica for the aggregation jobs offloads the read traffic, improving performance and reducing load on the primary database. By setting up AWS Lambda functions for data collection and Amazon RDS Proxy for connection management, the solution provides scalability and handles the load jobs efficiently. The use of ALB for the Lambda endpoints allows smooth handling of traffic, ensuring no interruptions for customers during the migration.
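A minimal boto3 sketch of the DMS piece of this design, assuming the replication instance and the source (on-premises MySQL) and target (Aurora MySQL) endpoints have already been created; the ARNs and the "sensordb" schema name are placeholders:

```python
import json
import boto3

dms = boto3.client("dms")

# Placeholder ARNs -- substitute the real replication instance and endpoints.
REPLICATION_INSTANCE_ARN = "arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE"
SOURCE_ENDPOINT_ARN = "arn:aws:dms:us-east-1:123456789012:endpoint:ONPREM-MYSQL"
TARGET_ENDPOINT_ARN = "arn:aws:dms:us-east-1:123456789012:endpoint:AURORA-MYSQL"

# Replicate every table in the sensor-data schema (schema name is an assumption).
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-sensordb",
            "object-locator": {"schema-name": "sensordb", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# Full load plus change data capture (CDC) keeps Aurora in sync with the
# on-premises database until the DNS cutover, after which the task can be stopped.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-to-aurora-cdc",
    SourceEndpointArn=SOURCE_ENDPOINT_ARN,
    TargetEndpointArn=TARGET_ENDPOINT_ARN,
    ReplicationInstanceArn=REPLICATION_INSTANCE_ARN,
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```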



A health insurance company stores personally identifiable information (PII) in an Amazon S3 bucket. The company uses server-side encryption with S3 managed encryption keys (SSE-S3) to encrypt the objects. According to a new requirement, all current and future objects in the S3 bucket must be encrypted by keys that the company’s security team manages. The S3 bucket does not have versioning enabled.

Which solution will meet these requirements?

  1. In the S3 bucket properties, change the default encryption to SSE-S3 with a customer managed key. Use the AWS CLI to re-upload all objects in the S3 bucket. Set an S3 bucket policy to deny unencrypted PutObject requests.
  2. In the S3 bucket properties, change the default encryption to server-side encryption with AWS KMS managed encryption keys (SSE-KMS). Set an S3 bucket policy to deny unencrypted PutObject requests. Use the AWS CLI to re-upload all objects in the S3 bucket.
  3. In the S3 bucket properties, change the default encryption to server-side encryption with AWS KMS managed encryption keys (SSE-KMS). Set an S3 bucket policy to automatically encrypt objects on GetObject and PutObject requests.
  4. In the S3 bucket properties, change the default encryption to AES-256 with a customer managed key. Attach a policy to deny unencrypted PutObject requests to any entities that access the S3 bucket. Use the AWS CLI to re-upload all objects in the S3 bucket.

Answer(s): B

Explanation:

B) In the S3 bucket properties, change the default encryption to server-side encryption with AWS KMS managed encryption keys (SSE-KMS). Set an S3 bucket policy to deny unencrypted PutObject requests. Use the AWS CLI to re-upload all objects in the S3 bucket is the correct answer.

The requirement is to ensure that all current and future objects in the S3 bucket are encrypted with keys that the company's security team manages, which means using server-side encryption with AWS KMS keys (SSE-KMS) backed by a customer managed KMS key instead of the default SSE-S3 encryption.

To meet this requirement:

  1. Change the default encryption setting to SSE-KMS.
  2. Deny any unencrypted PutObject requests to ensure compliance for future uploads.
  3. Because the bucket does not have versioning enabled, re-upload the existing objects to apply the new encryption (SSE-KMS).

This solution ensures compliance with the new encryption requirements while properly encrypting both existing and future objects with KMS-managed keys.
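A condensed boto3 sketch of those three steps, assuming a bucket name and customer managed KMS key ARN (both placeholders) and a bucket small enough to re-encrypt with a simple in-place copy loop:

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-pii-bucket"                                    # placeholder bucket name
KMS_KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/example"   # customer managed KMS key

# 1. Make SSE-KMS with the customer managed key the bucket's default encryption.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ARN,
                }
            }
        ]
    },
)

# 2. Deny PutObject requests that do not specify SSE-KMS.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonKmsUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

# 3. Re-encrypt the existing objects by copying each one onto itself with SSE-KMS.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=BUCKET,
            Key=obj["Key"],
            CopySource={"Bucket": BUCKET, "Key": obj["Key"]},
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId=KMS_KEY_ARN,
        )
```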


