Amazon SAP-C01 Exam
AWS Certified Solutions Architect - Professional SAP-C02

Updated On: 1-Feb-2026

A company is in the process of implementing AWS Organizations to constrain its developers to use only Amazon EC2, Amazon S3, and Amazon DynamoDB. The developers account resides in a dedicated organizational unit (OU). The solutions architect has implemented the following SCP on the developers account:



When this policy is deployed, IAM users in the developers account are still able to use AWS services that are not listed in the policy.

What should the solutions architect do to eliminate the developers’ ability to use services outside the scope of this policy?

  A. Create an explicit deny statement for each AWS service that should be constrained.
  B. Remove the FullAWSAccess SCP from the developers account’s OU.
  C. Modify the FullAWSAccess SCP to explicitly deny all services.
  D. Add an explicit deny statement using a wildcard to the end of the SCP.

Answer(s): B

Explanation:

B) Remove the FullAWSAccess SCP from the developers account’s OU is the correct answer.

In AWS Organizations, service control policies (SCPs) act as guardrails that set the maximum permissions available to member accounts; they do not grant permissions on their own. AWS attaches the FullAWSAccess SCP to the root and to every OU and account by default, and when multiple SCPs are attached at the same level, an action only needs to be allowed by one of them to pass that level. As long as FullAWSAccess remains attached to the developers OU, it continues to allow every AWS service, so the restrictive allow-list SCP has no practical effect.

To enforce the limitation to only Amazon EC2, Amazon S3, and Amazon DynamoDB, the FullAWSAccess SCP must be removed, ensuring that only the restrictive SCP with the allowed services is applied. This eliminates the developers' ability to access services outside of the defined scope in the SCP.

Adding an explicit deny statement for each other AWS service (option A) is unnecessary and does not scale: once FullAWSAccess is detached, any action that the allow-list SCP does not explicitly allow is implicitly denied. FullAWSAccess is an AWS managed policy and cannot be edited (option C), and adding a wildcard deny to the SCP (option D) would block the three permitted services as well, because an explicit deny in an SCP overrides any allow.
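For reference, the kind of allow-list SCP described in this scenario, together with the attach and detach steps, can be expressed with boto3 roughly as follows. This is an illustrative sketch, not the exact policy shown in the question: the OU ID is a placeholder, and p-FullAWSAccess is the ID of the default AWS managed FullAWSAccess policy.

```python
import json
import boto3

org = boto3.client("organizations")

# Allow-list SCP: only EC2, S3, and DynamoDB actions can pass this level.
# SCPs grant nothing by themselves; IAM policies are still required.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:*", "s3:*", "dynamodb:*"],
            "Resource": "*",
        }
    ],
}

developers_ou_id = "ou-xxxx-devexample"  # placeholder OU ID

policy = org.create_policy(
    Name="AllowOnlyEC2S3DynamoDB",
    Description="Allow-list SCP for the developers OU",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=developers_ou_id,
)

# Detach the default FullAWSAccess SCP so the allow list above becomes
# the effective permission ceiling for the OU (option B).
org.detach_policy(PolicyId="p-FullAWSAccess", TargetId=developers_ou_id)
```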



A company is hosting a monolithic REST-based API for a mobile app on five Amazon EC2 instances in public subnets of a VPC. Mobile clients connect to the API by using a domain name that is hosted on Amazon Route 53. The company has created a Route 53 multivalue answer routing policy with the IP addresses of all the EC2 instances. Recently, the app has been overwhelmed by large and sudden increases to traffic. The app has not been able to keep up with the traffic.

A solutions architect needs to implement a solution so that the app can handle the new and varying load. Which solution will meet these requirements with the LEAST operational overhead?

  A. Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.
  B. Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress.
  C. Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record.
  D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.

Answer(s): D

Explanation:

D) Create an Application Load Balancer (ALB) in front of the API, move the EC2 instances to private subnets, add the instances as targets for the ALB, and update the Route 53 record to point to the ALB is the correct answer because it handles the new and varying load with the least operational overhead. The ALB scales automatically to absorb sudden traffic spikes, distributes requests only to healthy targets, and removes the need to keep a Route 53 multivalue record in sync with instance IP addresses. Moving the instances to private subnets improves security because only the ALB remains reachable from the internet. The other options involve far more work: rewriting the monolith as individual Lambda functions (option A), building and operating an EKS cluster (option B), or maintaining a custom Lambda function that rewrites DNS records on every Auto Scaling event (option C).
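As a rough sketch of the moving parts in option D, using boto3; the subnet, VPC, instance, and hosted zone IDs are placeholders, and the domain name api.example.com is illustrative.

```python
import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

# Internet-facing ALB in the public subnets; the instances move to private subnets.
alb = elbv2.create_load_balancer(
    Name="api-alb",
    Subnets=["subnet-public-a", "subnet-public-b"],  # placeholder subnet IDs
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="api-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    HealthCheckPath="/health",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": i} for i in ["i-aaa111", "i-bbb222"]],  # placeholder instance IDs
)

elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# Replace the multivalue A records with a single alias record pointing at the ALB.
route53.change_resource_record_sets(
    HostedZoneId="Z0PLACEHOLDER",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": alb["CanonicalHostedZoneId"],
                    "DNSName": alb["DNSName"],
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)
```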



A company has created an OU in AWS Organizations for each of its engineering teams. Each OU owns multiple AWS accounts. The organization has hundreds of AWS accounts.

A solutions architect must design a solution so that each OU can view a breakdown of usage costs across its AWS accounts.

Which solution meets these requirements?

  A. Create an AWS Cost and Usage Report (CUR) for each OU by using AWS Resource Access Manager. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
  B. Create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
  C. Create an AWS Cost and Usage Report (CUR) in each AWS Organizations member account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
  D. Create an AWS Cost and Usage Report (CUR) by using AWS Systems Manager. Allow each team to visualize the CUR through Systems Manager OpsCenter dashboards.

Answer(s): B

Explanation:

B) Create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account and allow each team to visualize the CUR through an Amazon QuickSight dashboard is the correct solution. A CUR created in the management account contains line items for every member account in the organization, so a single report covers all OUs without any per-account setup. Amazon QuickSight can then visualize this data, filtered by account or OU, so each team sees a breakdown of its own costs in a user-friendly dashboard. This avoids the complexity of managing hundreds of per-account CURs (option C), and neither AWS Resource Access Manager (option A) nor AWS Systems Manager (option D) is a service for creating CURs.
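As an illustration of option B, a single report definition created from the management account might look like the following boto3 sketch; the report name, bucket, and prefix are placeholders, and Athena or QuickSight would then be pointed at the delivered report files.

```python
import boto3

# The Cost and Usage Reports API endpoint lives in us-east-1.
cur = boto3.client("cur", region_name="us-east-1")

cur.put_report_definition(
    ReportDefinition={
        "ReportName": "org-wide-cur",          # placeholder report name
        "TimeUnit": "DAILY",
        "Format": "Parquet",
        "Compression": "Parquet",
        "AdditionalSchemaElements": ["RESOURCES"],
        "S3Bucket": "example-cur-bucket",      # placeholder bucket in the management account
        "S3Prefix": "cur/",
        "S3Region": "us-east-1",
        "AdditionalArtifacts": ["ATHENA"],     # Athena integration for QuickSight queries
        "RefreshClosedReports": True,
        "ReportVersioning": "OVERWRITE_REPORT",
    }
)
```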



A company is storing data on premises on a Windows file server. The company produces 5 GB of new data daily.

The company migrated part of its Windows-based workload to AWS and needs the data to be available on a file system in the cloud. The company already has established an AWS Direct Connect connection between the on-premises network and AWS.

Which data migration strategy should the company use?

  A. Use the file gateway option in AWS Storage Gateway to replace the existing Windows file server, and point the existing file share to the new file gateway.
  B. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx.
  C. Use AWS Data Pipeline to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS).
  D. Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon Elastic File System (Amazon EFS).

Answer(s): B

Explanation:

B) Use AWS DataSync to schedule a daily task to replicate data between the on-premises Windows file server and Amazon FSx is the correct answer. AWS DataSync automates and accelerates data transfer between on-premises storage and AWS, and it can run over the company's existing Direct Connect connection. Amazon FSx for Windows File Server provides a fully managed, SMB-based file system that Windows workloads can use natively, unlike Amazon EFS, which is NFS-based. A scheduled DataSync task can replicate the 5 GB of new data each day from the on-premises file server to FSx with minimal operational overhead.
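A boto3 sketch of how the DataSync pieces in option B could be wired together; the hostnames, ARNs, credentials, and schedule are placeholders, and an already activated DataSync agent on the on-premises network is assumed.

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises Windows file server, reached through a DataSync agent.
src = datasync.create_location_smb(
    ServerHostname="fileserver.corp.example.com",   # placeholder hostname
    Subdirectory="/share",
    User="svc-datasync",                            # placeholder credentials
    Password="REPLACE_ME",
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE"],
)

# Destination: an existing Amazon FSx for Windows File Server file system.
dst = datasync.create_location_fsx_windows(
    FsxFilesystemArn="arn:aws:fsx:us-east-1:111122223333:file-system/fs-EXAMPLE",
    SecurityGroupArns=["arn:aws:ec2:us-east-1:111122223333:security-group/sg-EXAMPLE"],
    User="svc-datasync",
    Password="REPLACE_ME",
)

# Task that copies only changed data, on a daily schedule.
datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="daily-fileserver-to-fsx",
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},  # 02:00 UTC daily
    Options={"TransferMode": "CHANGED", "VerifyMode": "ONLY_FILES_TRANSFERRED"},
)
```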



A company’s solutions architect is reviewing a web application that runs on AWS. The application references static assets in an Amazon S3 bucket in the us-east-1 Region. The company needs resiliency across multiple AWS Regions. The company already has created an S3 bucket in a second Region.

Which solution will meet these requirements with the LEAST operational overhead?

  A. Configure the application to write each object to both S3 buckets. Set up an Amazon Route 53 public hosted zone with a record set by using a weighted routing policy for each S3 bucket. Configure the application to reference the objects by using the Route 53 DNS name.
  B. Create an AWS Lambda function to copy objects from the S3 bucket in us-east-1 to the S3 bucket in the second Region. Invoke the Lambda function each time an object is written to the S3 bucket in us-east-1. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins.
  C. Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins.
  D. Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. If failover is required, update the application code to load S3 objects from the S3 bucket in the second Region.

Answer(s): C

Explanation:

C) Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins is the correct answer because it provides resiliency across multiple Regions with minimal operational overhead. S3 replication ensures that objects are automatically copied from one Region to another, and Amazon CloudFront's origin group allows for automatic failover between the two S3 buckets. This approach ensures high availability and resilience for the web application while requiring minimal manual intervention.
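A boto3 sketch of the replication half of option C, followed by the shape of a CloudFront origin group that fails over between the two buckets; the bucket names, replication role ARN, and origin IDs are placeholders, and versioning is assumed to be enabled on both buckets.

```python
import boto3

s3 = boto3.client("s3")

# Replicate new objects from the us-east-1 bucket to the bucket in the second Region.
s3.put_bucket_replication(
    Bucket="assets-us-east-1",  # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # placeholder role
        "Rules": [{
            "ID": "replicate-all",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter: replicate every object
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::assets-second-region"},
        }],
    },
)

# Shape of the OriginGroups member inside a CloudFront distribution config:
# the us-east-1 bucket is the primary origin, the replica bucket is the failover.
origin_group = {
    "Id": "s3-failover-group",
    "FailoverCriteria": {"StatusCodes": {"Quantity": 2, "Items": [500, 503]}},
    "Members": {
        "Quantity": 2,
        "Items": [
            {"OriginId": "assets-us-east-1"},
            {"OriginId": "assets-second-region"},
        ],
    },
}
```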


