Amazon SAP-C01 Exam
AWS Certified Solutions Architect - Professional SAP-C02 (Page 2)

Updated On: 26-Jan-2026

A company needs to architect a hybrid DNS solution. This solution will use an Amazon Route 53 private hosted zone for the domain cloud.example.com for the resources stored within VPCs.

The company has the following DNS resolution requirements:

-On-premises systems should be able to resolve and connect to cloud.example.com.
-All VPCs should be able to resolve cloud.example.com.

There is already an AWS Direct Connect connection between the on-premises corporate network and AWS Transit Gateway.

Which architecture should the company use to meet these requirements with the HIGHEST performance?

  A. Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.
  B. Associate the private hosted zone to all the VPCs. Deploy an Amazon EC2 conditional forwarder in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the conditional forwarder.
  C. Associate the private hosted zone to the shared services VPC. Create a Route 53 outbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the outbound resolver.
  D. Associate the private hosted zone to the shared services VPC. Create a Route 53 inbound resolver in the shared services VPC. Attach the shared services VPC to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.

Answer(s): A

Explanation:

The best architecture is to associate the private hosted zone with all VPCs so that each VPC resolves cloud.example.com natively, without an extra forwarding hop. For on-premises systems, a Route 53 inbound Resolver endpoint in the shared services VPC receives queries for cloud.example.com that the on-premises DNS servers forward over the Direct Connect connection and through the transit gateway. Because the inbound endpoint is a managed Resolver service, it avoids the scaling and maintenance limits of a self-managed EC2 conditional forwarder (option B), and associating the zone with every VPC avoids the indirect resolution path of the shared-services-only designs (options C and D). This setup ensures that both on-premises systems and VPC resources resolve the domain with minimal latency.
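
For reference, a minimal boto3 sketch of this architecture (option A). The hosted zone ID, VPC IDs, subnet IDs, security group ID, and Region below are placeholders, not values from the question:

import boto3

route53 = boto3.client("route53")
resolver = boto3.client("route53resolver", region_name="us-east-1")  # placeholder Region

# Associate the cloud.example.com private hosted zone with every workload VPC
# so that resources inside each VPC resolve the zone natively.
for vpc_id in ["vpc-0aaa111", "vpc-0bbb222", "vpc-0shared3"]:  # placeholder VPC IDs
    route53.associate_vpc_with_hosted_zone(
        HostedZoneId="Z0123456789EXAMPLE",  # placeholder private hosted zone ID
        VPC={"VPCRegion": "us-east-1", "VPCId": vpc_id},
    )

# Create an inbound Resolver endpoint in the shared services VPC. The on-premises
# DNS servers forward queries for cloud.example.com to the endpoint's IP addresses
# over Direct Connect and the transit gateway.
endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="hybrid-dns-inbound-1",
    Name="shared-services-inbound",
    Direction="INBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder SG allowing DNS (TCP/UDP 53)
    IpAddresses=[
        {"SubnetId": "subnet-0aaa1111"},        # placeholder subnets in two AZs
        {"SubnetId": "subnet-0bbb2222"},
    ],
)
print("Inbound endpoint:", endpoint["ResolverEndpoint"]["Id"])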



A company’s CISO has asked a solutions architect to re-engineer the company's current CI/CD practices to make sure patch deployments to its application can happen as quickly as possible with minimal downtime if vulnerabilities are discovered. The company must also be able to quickly roll back a change in case of errors.

The web application is deployed in a fleet of Amazon EC2 instances behind an Application Load Balancer. The company is currently using GitHub to host the application source code, and has configured an AWS CodeBuild project to build the application. The company also intends to use AWS CodePipeline to trigger builds from GitHub commits using the existing CodeBuild project.

What CI/CD configuration meets all of the requirements?

  A. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for in-place deployment. Monitor the newly deployed code, and, if there are any issues, push another code update.
  B. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for blue/green deployments. Monitor the newly deployed code, and, if there are any issues, trigger a manual rollback using CodeDeploy.
  C. Configure CodePipeline with a deploy stage using AWS CloudFormation to create a pipeline for test and production stacks. Monitor the newly deployed code, and, if there are any issues, push another code update.
  D. Configure CodePipeline with a deploy stage using AWS OpsWorks and in-place deployments. Monitor the newly deployed code, and, if there are any issues, push another code update.

Answer(s): B

Explanation:

Configuring CodePipeline with a deploy stage that uses AWS CodeDeploy for blue/green deployments allows patches to be deployed quickly with minimal downtime. In this setup, the new application version is deployed to a replacement (green) fleet of EC2 instances while the current (blue) fleet remains in service behind the Application Load Balancer; traffic is shifted only when the new version is ready. If vulnerabilities or other issues are found after deployment, a manual rollback in CodeDeploy shifts traffic back to the previous version without affecting the user experience. This configuration meets the requirements for rapid deployments and reliable rollback, keeping the application available and secure.
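
A minimal boto3 sketch of the blue/green deployment group described above (option B). The application, deployment group, Auto Scaling group, target group, role ARN, and deployment ID are placeholders:

import boto3

codedeploy = boto3.client("codedeploy")

# Blue/green deployment group: CodeDeploy provisions a replacement (green) fleet,
# shifts ALB traffic to it, and keeps the original (blue) fleet available for rollback.
codedeploy.create_deployment_group(
    applicationName="web-app",                    # placeholder
    deploymentGroupName="web-app-bluegreen",      # placeholder
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",  # placeholder
    autoScalingGroups=["web-app-asg"],            # placeholder ASG behind the ALB
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    loadBalancerInfo={"targetGroupInfoList": [{"name": "web-app-tg"}]},  # placeholder ALB target group
    blueGreenDeploymentConfiguration={
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "KEEP_ALIVE"  # keep the old fleet so rollback is immediate
        },
    },
    autoRollbackConfiguration={"enabled": True, "events": ["DEPLOYMENT_FAILURE"]},
)

# Manual rollback of an in-flight deployment if monitoring flags a problem; a
# completed deployment is rolled back by redeploying the previous revision.
codedeploy.stop_deployment(deploymentId="d-EXAMPLE123", autoRollbackEnabled=True)  # placeholder ID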



A company needs to monitor a growing number of Amazon S3 buckets across two AWS Regions. The company also needs to track the percentage of objects that are encrypted in Amazon S3. The company needs a dashboard to display this information for internal compliance teams.

Which solution will meet these requirements with the LEAST operational overhead?

  A. Create a new S3 Storage Lens dashboard in each Region to track bucket and encryption metrics. Aggregate data from both Regional dashboards into a single dashboard in Amazon QuickSight for the compliance teams.
  B. Deploy an AWS Lambda function in each Region to list the number of buckets and the encryption status of objects. Store this data in Amazon S3. Use Amazon Athena queries to display the data on a custom dashboard in Amazon QuickSight for the compliance teams.
  C. Use the S3 Storage Lens default dashboard to track bucket and encryption metrics. Give the compliance teams access to the dashboard directly in the S3 console.
  D. Create an Amazon EventBridge rule to detect AWS CloudTrail events for S3 object creation. Configure the rule to invoke an AWS Lambda function to record encryption metrics in Amazon DynamoDB. Use Amazon QuickSight to display the metrics in a dashboard for the compliance teams.

Answer(s): C

Explanation:

Using the S3 Storage Lens default dashboard allows the company to monitor bucket and encryption metrics directly from the S3 console with minimal setup and maintenance. This solution provides a comprehensive overview of the encryption status of objects across multiple S3 buckets and Regions, meeting the compliance requirements without the need for additional infrastructure or custom implementations. Giving the compliance teams direct access to this dashboard ensures they can easily view and track the necessary metrics.
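
If the compliance teams also want to read the same information programmatically, the configuration behind the default dashboard can be fetched with boto3. This is a sketch under the assumption that the default dashboard's configuration ID is "default-account-dashboard"; the account ID is a placeholder:

import boto3

s3control = boto3.client("s3control")

# Fetch the account-level Storage Lens configuration that backs the default dashboard.
config = s3control.get_storage_lens_configuration(
    ConfigId="default-account-dashboard",  # assumed ID of the default dashboard
    AccountId="111122223333",              # placeholder account ID
)
lens = config["StorageLensConfiguration"]
print("Default dashboard enabled:", lens["IsEnabled"])
# The default dashboard aggregates bucket and encryption metrics across all
# Regions in the account, so no per-Region setup is needed.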



A software as a service (SaaS) company uses AWS to host a service that is powered by AWS PrivateLink. The service consists of proprietary software that runs on three Amazon EC2 instances behind a Network Load Balancer (NLB). The instances are in private subnets in multiple Availability Zones in the eu-west-2 Region. All the company's customers are in eu-west-2.

However, the company now acquires a new customer in the us-east-1 Region. The company creates a new VPC and new subnets in us-east-1. The company establishes inter-Region VPC peering between the VPCs in the two Regions.

The company wants to give the new customer access to the SaaS service, but the company does not want to immediately deploy new EC2 resources in us-east-1.

Which solution will meet these requirements?

  A. Configure a PrivateLink endpoint service in us-east-1 to use the existing NLB that is in eu-west-2. Grant specific AWS accounts access to connect to the SaaS service.
  B. Create an NLB in us-east-1. Create an IP target group that uses the IP addresses of the company's instances in eu-west-2 that host the SaaS service. Configure a PrivateLink endpoint service that uses the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS service.
  C. Create an Application Load Balancer (ALB) in front of the EC2 instances in eu-west-2. Create an NLB in us-east-1. Associate the NLB that is in us-east-1 with an ALB target group that uses the ALB that is in eu-west-2. Configure a PrivateLink endpoint service that uses the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS service.
  D. Use AWS Resource Access Manager (AWS RAM) to share the EC2 instances that are in eu-west-2. In us-east-1, create an NLB and an instance target group that includes the shared EC2 instances from eu-west-2. Configure a PrivateLink endpoint service that uses the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS service.

Answer(s): B

Explanation:

Creating an NLB in us-east-1 with an IP target group that uses the IP addresses of the existing EC2 instances in eu-west-2 allows the company to provide access to the SaaS service without deploying new resources in the new region. This setup enables the existing resources to be accessed over the inter-Region VPC peering connection while utilizing a PrivateLink endpoint service. Granting specific AWS accounts access to this PrivateLink service ensures that only authorized customers can connect to the SaaS offering, meeting the requirements for the new customer.
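
A minimal boto3 sketch of option B in us-east-1. The VPC, subnet, and account IDs are placeholders, and the target IPs stand in for the private IP addresses of the eu-west-2 instances that are reachable over the inter-Region VPC peering connection:

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# IP target group; targets outside the NLB's own VPC (reached via peering)
# must be registered with AvailabilityZone="all".
tg = elbv2.create_target_group(
    Name="saas-eu-west-2-targets",          # placeholder
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0useast1example",            # placeholder us-east-1 VPC
    TargetType="ip",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": ip, "Port": 443, "AvailabilityZone": "all"}
             for ip in ["10.1.10.11", "10.1.11.12", "10.1.12.13"]],  # placeholder instance IPs in eu-west-2
)

nlb = elbv2.create_load_balancer(
    Name="saas-us-east-1-nlb",              # placeholder
    Type="network",
    Scheme="internal",
    Subnets=["subnet-0use1a", "subnet-0use1b"],  # placeholder subnets in the new VPC
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

elbv2.create_listener(
    LoadBalancerArn=nlb_arn, Protocol="TCP", Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# Expose the NLB as a PrivateLink endpoint service and allow the new customer's account.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[nlb_arn], AcceptanceRequired=True)
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=svc["ServiceConfiguration"]["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],  # placeholder customer account
)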



A financial services company has an asset management product that thousands of customers use around the world. The customers provide feedback about the product through surveys. The company is building a new analytical solution that runs on Amazon EMR to analyze the data from these surveys. The following user personas need to access the analytical solution to perform different actions:

•Administrator: Provisions the EMR cluster for the analytics team based on the team’s requirements
•Data engineer: Runs ETL scripts to process, transform, and enrich the datasets
•Data analyst: Runs SQL and Hive queries on the data

A solutions architect must ensure that all the user personas have least privilege access to only the resources that they need. The user personas must be able to launch only applications that are approved and authorized. The solution also must ensure tagging for all resources that the user personas create.

Which solution will meet these requirements?

  A. Create IAM roles for each user persona. Attach identity-based policies to define which actions the user who assumes the role can perform. Create an AWS Config rule to check for noncompliant resources. Configure the rule to notify the administrator to remediate the noncompliant resources.
  B. Set up Kerberos-based authentication for EMR clusters upon launch. Specify a Kerberos security configuration along with cluster-specific Kerberos options.
  C. Use AWS Service Catalog to control the Amazon EMR versions available for deployment, the cluster configuration, and the permissions for each user persona.
  D. Launch the EMR cluster by using AWS CloudFormation. Attach resource-based policies to the EMR cluster during cluster creation. Create an AWS Config rule to check for noncompliant clusters and noncompliant Amazon S3 buckets. Configure the rule to notify the administrator to remediate the noncompliant resources.

Answer(s): C

Explanation:

Using AWS Service Catalog allows the company to create and manage approved products and configurations, which ensures that each user persona can launch only authorized EMR applications and configurations. By defining the permissions for each persona within the Service Catalog, the solutions architect can enforce least privilege access while maintaining control over the EMR versions and configurations available for deployment. This solution also facilitates resource tagging during the provisioning process, aligning with the company’s governance and compliance requirements.
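
A minimal boto3 sketch of the Service Catalog setup described above (option C). The portfolio, product, template URL, role ARNs, persona role names, and tag values are placeholders:

import boto3

sc = boto3.client("servicecatalog")

# Portfolio that holds the approved EMR configurations.
portfolio = sc.create_portfolio(
    DisplayName="analytics-emr-portfolio",   # placeholder
    ProviderName="cloud-platform-team",      # placeholder
)
portfolio_id = portfolio["PortfolioDetail"]["Id"]

# Approved EMR cluster product, published from a CloudFormation template that
# pins the EMR release and cluster configuration.
product = sc.create_product(
    Name="approved-emr-cluster",             # placeholder
    Owner="cloud-platform-team",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        "Info": {"LoadTemplateFromURL": "https://example-bucket.s3.amazonaws.com/emr-cluster.yaml"},  # placeholder
    },
)
product_id = product["ProductViewDetail"]["ProductViewSummary"]["ProductId"]
sc.associate_product_with_portfolio(ProductId=product_id, PortfolioId=portfolio_id)

# Launch constraint: provisioning runs under a scoped role, so the personas
# themselves never need broad EMR/EC2 permissions (least privilege).
sc.create_constraint(
    PortfolioId=portfolio_id,
    ProductId=product_id,
    Type="LAUNCH",
    Parameters='{"RoleArn": "arn:aws:iam::111122223333:role/SCEmrLaunchRole"}',  # placeholder role
)

# Grant each persona's IAM role access to the portfolio.
for role_arn in [
    "arn:aws:iam::111122223333:role/EmrAdministrator",  # placeholder persona roles
    "arn:aws:iam::111122223333:role/DataEngineer",
    "arn:aws:iam::111122223333:role/DataAnalyst",
]:
    sc.associate_principal_with_portfolio(
        PortfolioId=portfolio_id, PrincipalARN=role_arn, PrincipalType="IAM")

# TagOption enforced on resources provisioned from the portfolio.
tag_option = sc.create_tag_option(Key="team", Value="analytics")
sc.associate_tag_option_with_resource(
    ResourceId=portfolio_id, TagOptionId=tag_option["TagOptionDetail"]["Id"])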


