Amazon AWS Certified CloudOps Engineer - Associate SOA-C03 Exam Questions
AWS Certified CloudOps Engineer - Associate SOA-C03 (Page 16)

Updated On: 29-Mar-2026

An AWS CloudFormation template creates an Amazon RDS instance. The template is used to create development environments on demand, and the stack is deleted when an environment is no longer required. The data persisted in RDS must be retained for later use, even after the CloudFormation stack is deleted.

How can this be achieved in a reliable and efficient way?

  1. Write a script to continue backing up the RDS instance every five minutes.
  2. Create an AWS Lambda function to take a snapshot of the RDS instance, and manually invoke the function before deleting the stack.
  3. Use the Snapshot Deletion Policy in the CloudFormation template definition of the RDS instance.
  4. Create a new CloudFormation template to perform backups of the RDS instance, and run this template before deleting the stack.

Answer(s): C

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

AWS CloudFormation supports the DeletionPolicy attribute to control what happens to a resource when a stack is deleted. For Amazon RDS DB instances, setting DeletionPolicy: Snapshot instructs CloudFormation to retain a final DB snapshot automatically at stack deletion. CloudOps best practice recommends using this native mechanism for data retention and auditability, avoiding manual scripts or out-of-band processes. Options A, B, and D introduce operational overhead and potential for human error. With DeletionPolicy set to Snapshot, the environment can be repeatedly created and torn down while preserving data for later restoration with minimal manual steps. This aligns with infrastructure-as-code principles (declarative, repeatable, and reliable) and supports efficient lifecycle management of ephemeral development stacks.
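
The attribute can be illustrated with a minimal template fragment, expressed here as a Python dict for readability; the resource name and property values are illustrative placeholders, not a complete production template.

```python
import json

# Sketch of a CloudFormation template where the RDS instance carries
# DeletionPolicy: Snapshot. Note that DeletionPolicy is a resource-level
# attribute (a sibling of Type and Properties), not a Property.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DevDatabase": {  # placeholder logical ID
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",  # take a final DB snapshot on stack delete
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.micro",
                "AllocatedStorage": "20",
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

For replacement-type updates (not just stack deletion), the companion UpdateReplacePolicy attribute can be set to Snapshot as well, so data also survives resource replacement.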


Reference:

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Deployment, Provisioning and Automation

· AWS CloudFormation User Guide – DeletionPolicy Attribute (Snapshot for RDS)

· AWS Well-Architected Framework – Operational Excellence Pillar



A company has a VPC that contains a public subnet and a private subnet. The company deploys an Amazon EC2 instance that uses an Amazon Linux Amazon Machine Image (AMI) and has the AWS Systems Manager Agent (SSM Agent) installed in the private subnet. The EC2 instance is in a security group that allows only outbound traffic.

A CloudOps engineer needs to give a group of privileged administrators the ability to connect to the instance through SSH without exposing the instance to the internet.

Which solution will meet this requirement?

  1. Create an EC2 Instance Connect endpoint in the private subnet. Update the security group to allow inbound SSH traffic. Create an IAM group for privileged administrators. Assign the PowerUserAccess managed policy to the IAM group.
  2. Create a Systems Manager endpoint in the private subnet. Update the security group to allow SSH traffic from the private network where the Systems Manager endpoint is connected. Create an IAM group for privileged administrators. Assign the PowerUserAccess managed policy to the IAM group.
  3. Create an EC2 Instance Connect endpoint in the public subnet. Update the security group to allow SSH traffic from the private network. Create an IAM group for privileged administrators. Assign the PowerUserAccess managed policy to the IAM group.
  4. Create a Systems Manager endpoint in the public subnet. Create an IAM role that has the AmazonSSMManagedInstanceCore permission for the EC2 instance. Create an IAM group for privileged administrators. Assign the AmazonEC2ReadOnlyAccess IAM policy to the IAM group.

Answer(s): A

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

EC2 Instance Connect Endpoint (EIC Endpoint) enables SSH access to instances in private subnets without public IPs and without traversing the public internet. CloudOps guidance explains that you deploy the endpoint in the same VPC/subnet as the targets, then allow inbound SSH on the instance security group from the endpoint's security group. Access is governed by IAM: administrators must have Instance Connect permissions. While the example uses a broad policy, the key mechanism is the EIC endpoint in the private subnet plus security group rules scoped to the endpoint. Systems Manager Session Manager can provide shell access without SSH, but the requirement explicitly states "connect through SSH," making EIC the purpose-built solution. Options B and D misuse Systems Manager for SSH and propose unnecessary security group changes or incorrect endpoint placement; Option C places the endpoint in a public subnet, which is not required for private SSH access. Therefore, creating an EC2 Instance Connect endpoint in the private subnet and updating security groups accordingly meets the requirement while keeping the instance non-internet-exposed.
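
Assuming the endpoint and instance share a VPC, the two pieces of wiring can be sketched as prepared boto3 request parameters; all IDs below are placeholders, and the real calls would be ec2.create_instance_connect_endpoint(**create_endpoint_params) and ec2.authorize_security_group_ingress(**ingress_params).

```python
# Placeholder identifiers for illustration only.
PRIVATE_SUBNET_ID = "subnet-0abc1234567890def"  # the instance's private subnet
ENDPOINT_SG_ID = "sg-0aaa111"                   # SG attached to the EIC endpoint
INSTANCE_SG_ID = "sg-0bbb222"                   # SG on the target EC2 instance

# The EIC endpoint goes in the private subnet, with its own security group.
create_endpoint_params = {
    "SubnetId": PRIVATE_SUBNET_ID,
    "SecurityGroupIds": [ENDPOINT_SG_ID],
}

# Allow inbound SSH only from the endpoint's security group, never from a
# 0.0.0.0/0 CIDR, so the instance remains unreachable from the internet.
ingress_params = {
    "GroupId": INSTANCE_SG_ID,
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": ENDPOINT_SG_ID}],
    }],
}
```

Scoping the ingress rule to the endpoint's security group (rather than a CIDR range) is what keeps SSH reachable only through the endpoint, with IAM controlling who may use it.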


Reference:

· AWS Certified CloudOps Engineer – Associate (SOA-C03) Exam Guide – Security and Compliance

· Amazon EC2 – Instance Connect Endpoint (Private SSH Access)

· AWS Well-Architected Framework – Security Pillar (Least Privilege Network Access)



A global company runs a critical primary workload in the us-east-1 Region. The company wants to ensure business continuity with minimal downtime in case of a workload failure. The company wants to replicate the workload to a second AWS Region.

A CloudOps engineer needs a solution that achieves a recovery time objective (RTO) of less than 10 minutes and a zero recovery point objective (RPO) to meet service level agreements.

Which solution will meet these requirements?

  1. Implement a pilot light architecture that provides real-time data replication in the second Region. Configure Amazon Route 53 health checks and automated DNS failover.
  2. Implement a warm standby architecture that provides regular data replication in a second Region. Configure Amazon Route 53 health checks and automated DNS failover.
  3. Implement an active-active architecture that provides real-time data replication across two Regions. Use Amazon Route 53 health checks and a weighted routing policy.
  4. Implement a custom script to generate a regular backup of the data and store it in an S3 bucket that is in a second Region. Use the backup to launch the application in the second Region in the event of a workload failure.

Answer(s): C

Explanation:

According to the AWS Cloud Operations and Disaster Recovery documentation, the active-active multi-Region architecture provides the lowest possible RTO and RPO among all disaster recovery strategies. In this approach, workloads are deployed and actively running in multiple AWS Regions simultaneously. All data is continuously replicated in real time between Regions using fully managed replication services, ensuring zero data loss (zero RPO).

Because both Regions are active and capable of handling requests, failover between them is instantaneous, meeting the RTO of less than 10 minutes. Amazon Route 53 is used with weighted or latency-based routing policies and health checks to automatically route traffic away from an impaired Region to the healthy Region without manual intervention.

In contrast:

Pilot Light Architecture maintains only a minimal copy of the environment in the secondary Region. It requires time to scale up infrastructure during a disaster, resulting in longer RTO and potential data loss (non-zero RPO).

Warm Standby Architecture keeps partially running infrastructure in the secondary Region. Although faster than pilot light, it still requires scaling and synchronization, resulting in higher RTO and RPO compared to active-active.

Backup and Restore (option D) relies on periodic backups and restores data when needed. This approach has the highest RTO and RPO, unsuitable for mission-critical workloads demanding high availability and zero data loss.

Therefore, based on AWS-recommended disaster recovery strategies outlined in the AWS Cloud Operations and Disaster Recovery Guide, the Active-Active Multi-Region architecture (Option C) is the only approach that guarantees RTO <10 minutes and RPO = 0, achieving continuous availability and business continuity across Regions.
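
The routing half of this design can be sketched as a Route 53 change batch with weighted alias records; every hostname, hosted zone ID, and health check ID below is a placeholder.

```python
def weighted_record(set_id, target_dns, alias_zone_id, weight, health_check_id):
    """Build one weighted alias record change for an active-active Region.

    All names and IDs passed in are illustrative placeholders.
    """
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": set_id,          # distinguishes the two weighted records
            "Weight": weight,
            "HealthCheckId": health_check_id,  # a failing check withdraws this record
            "AliasTarget": {
                "HostedZoneId": alias_zone_id,
                "DNSName": target_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

# Equal 50/50 weights split traffic across both Regions while both are healthy;
# when a health check fails, Route 53 routes all traffic to the surviving Region.
change_batch = {
    "Changes": [
        weighted_record("us-east-1", "alb-use1.example.amazonaws.com",
                        "Z-PLACEHOLDER-1", 50, "hc-use1"),
        weighted_record("us-west-2", "alb-usw2.example.amazonaws.com",
                        "Z-PLACEHOLDER-2", 50, 'hc-usw2'),
    ]
}
```

In practice this batch would be submitted with the Route 53 ChangeResourceRecordSets API; the zero-RPO requirement is met separately by the real-time cross-Region data replication layer, not by DNS.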


Reference:

AWS Cloud Operations and Disaster Recovery Whitepaper – Section: Disaster Recovery Strategies – Multi-Site (Active-Active) Approach; AWS CloudOps Best Practices for Reliability and Business Continuity.




A CloudOps engineer is using AWS Compute Optimizer to generate recommendations for a fleet of Amazon EC2 instances. Some of the instances use newly released instance types, while other instances use older instance types.

After the analysis is complete, the CloudOps engineer notices that some of the EC2 instances are missing from the Compute Optimizer dashboard.

What is the likely cause of this issue?

  1. The missing instances have insufficient historical Amazon CloudWatch metric data for analysis.
  2. Compute Optimizer does not support the instance types of the missing instances.
  3. Compute Optimizer already considers the missing instances to be optimized.
  4. The missing instances are running a Windows operating system.

Answer(s): B

Explanation:

According to the AWS Cloud Operations and Compute Optimizer documentation, Compute Optimizer provides right-sizing recommendations by analyzing Amazon CloudWatch metrics and instance configuration data. However, AWS explicitly notes that only supported instance types are included in Compute Optimizer analyses. If an EC2 instance type is newly released or not yet supported by Compute Optimizer, it will not appear in the Compute Optimizer dashboard until official support is added.

The documentation explains that "Compute Optimizer analyses only supported resource types and instance families. Instances using unsupported or newly launched instance types will not appear in the Compute Optimizer console." This ensures the service provides accurate recommendations based on sufficient performance history and benchmark data.

While CloudWatch metrics are required for analysis, the complete absence of instances from the dashboard (rather than "insufficient metric data" notifications) points to unsupported instance types. Compute Optimizer would normally still display instances with limited metrics and flag them as having insufficient data, not remove them entirely.

Therefore, the most accurate cause of missing instances in this case is that Compute Optimizer does not support the newly released instance types, making option B correct.
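
A quick way to spot such gaps is to diff the fleet against the instances Compute Optimizer actually returned recommendations for. The sketch below uses sample data whose instance IDs and ARNs are invented; `recommendations` mirrors the shape of results from boto3's compute-optimizer get_ec2_instance_recommendations call.

```python
def find_missing_instances(fleet_instance_ids, recommendations):
    """Return fleet instance IDs with no Compute Optimizer recommendation.

    In practice these are often instances whose (e.g. newly released)
    instance type is not yet supported by the service.
    """
    # Instance ID is the last path segment of each recommendation's ARN.
    analyzed = {r["instanceArn"].rsplit("/", 1)[-1] for r in recommendations}
    return sorted(set(fleet_instance_ids) - analyzed)

# Hypothetical fleet: two older-generation instances plus one newly released type.
fleet = ["i-oldgen1", "i-oldgen2", "i-newgen1"]
recs = [
    {"instanceArn": "arn:aws:ec2:us-east-1:111122223333:instance/i-oldgen1",
     "finding": "OVER_PROVISIONED"},
    {"instanceArn": "arn:aws:ec2:us-east-1:111122223333:instance/i-oldgen2",
     "finding": "OPTIMIZED"},
]

missing = find_missing_instances(fleet, recs)
print(missing)  # ['i-newgen1']
```

Instances surfacing here warrant a check against the Compute Optimizer supported-resources list before assuming a metrics problem.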


Reference:

AWS Cloud Operations & Compute Optimizer Guide – Section: Supported Resources and Limitations in Compute Optimizer



A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company needs to send specific events from all the accounts in the organization to a new receiver account, where an AWS Lambda function will process the events.

A CloudOps engineer configures Amazon EventBridge to route events to a target event bus in the us-west-2 Region in the receiver account. The CloudOps engineer creates rules in both the sender and receiver accounts that match the specified events. The rules do not specify an account parameter in the event pattern. IAM roles are created in the sender accounts to allow PutEvents actions on the target event bus.

However, the first test events from the us-east-1 Region are not processed by the Lambda function in the receiving account.

What is the likely reason the events are not processed?

  1. Interface VPC endpoints for EventBridge are required in the sender accounts and receiver accounts.
  2. The target Lambda function is in a different AWS Region, which is not supported by EventBridge.
  3. The resource-based policy on the target event bus must be modified to allow PutEvents API calls from the sender accounts.
  4. The rule in the receiving account must specify {"account": ["sender-account-id"]} in its event pattern and must include the receiving account ID.

Answer(s): C

Explanation:

Per the AWS Cloud Operations and EventBridge documentation, when events are sent across AWS accounts, particularly from multiple accounts in an AWS Organization, the target event bus in the receiver account must include a resource-based policy that explicitly allows events:PutEvents API calls from the sender accounts or the organization ID.

Even if the sender accounts have IAM permissions to call PutEvents, the receiving event bus must trust those accounts via a resource policy. Without this configuration, EventBridge automatically rejects incoming cross-account events, and those events never reach the target Lambda function for processing.

AWS guidance states that "Cross-account event delivery requires a resource-based policy on the event bus that grants permissions to the source accounts or organization." The policy can include either individual AWS account IDs or the organization's root ID.

In this scenario, because the events originate from multiple accounts and there is no resource policy on the target event bus to authorize those sender accounts, the events are not delivered.

Therefore, the correct cause is C: the resource-based policy on the target event bus must be modified to allow PutEvents API calls from the sender accounts.
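
An organization-wide bus policy can be sketched as follows; the organization ID, account number, and bus name are placeholders. Such a policy could then be attached via the EventBridge PutPermission API (passing the JSON through its Policy parameter) or in the console.

```python
import json

ORG_ID = "o-exampleorg123"  # placeholder AWS Organizations ID
BUS_ARN = "arn:aws:events:us-west-2:111122223333:event-bus/central-bus"  # placeholder

# Resource-based policy for the receiver account's event bus: any principal
# belonging to the organization may call events:PutEvents against this bus,
# enforced by the aws:PrincipalOrgID condition key.
bus_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOrgAccountsPutEvents",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "events:PutEvents",
        "Resource": BUS_ARN,
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
    }],
}

print(json.dumps(bus_policy, indent=2))
```

Using the organization ID condition avoids maintaining a per-account list as member accounts are added or removed.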


Reference:

AWS Cloud Operations – EventBridge Cross-Account Event Delivery Section, Permissions for Event Bus Targets and Organizational Event Routing






