Amazon AWS Certified CloudOps Engineer - Associate SOA-C03 Exam Questions
AWS Certified CloudOps Engineer - Associate SOA-C03 (Page 5)

Updated On: 24-Mar-2026

Application A runs on Amazon EC2 instances behind a Network Load Balancer (NLB). The EC2 instances are in an Auto Scaling group and are in the same subnet that is associated with the NLB. Other applications from an on-premises environment cannot communicate with Application A on port 8080.

To troubleshoot the issue, a CloudOps engineer analyzes the flow logs. The flow logs include the following records:

ACCEPT from 192.168.0.13:59003 172.31.16.139:8080

REJECT from 172.31.16.139:8080 192.168.0.13:59003

What is the reason for the rejected traffic?

  A. The security group of the EC2 instances has no Allow rule for the traffic from the NLB.
  B. The security group of the NLB has no Allow rule for the traffic from the on-premises environment.
  C. The ACL of the on-premises environment does not allow traffic to the AWS environment.
  D. The network ACL that is associated with the subnet does not allow outbound traffic for the ephemeral port range.

Answer(s): D

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

VPC Flow Logs show the request arriving and being ACCEPTed on destination port 8080 and the corresponding response being REJECTed on the return path to the client's ephemeral port (59003). AWS networking guidance states that security groups are stateful (return traffic is automatically allowed), while network ACLs are stateless and require explicit inbound and outbound rules for both directions. CloudOps operational guidance for VPC networking further notes that when you allow an inbound request (for example, TCP 8080) through a subnet's network ACL, you must also allow the outbound ephemeral port range (typically 1024-65535) for the response traffic; otherwise, the return packets are dropped and appear as REJECT in the flow logs. The observed pattern (request accepted on 8080, response rejected to 59003) matches a missing outbound ephemeral-range allow rule on the subnet's network ACL. Therefore, the cause is the subnet network ACL, not the security groups or on-premises ACLs. The remediation is to add an outbound ALLOW rule on the network ACL for the appropriate ephemeral TCP port range back to the on-premises CIDR (and the corresponding inbound rule if the path is asymmetric).
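The stateless evaluation described above can be sketched as a small model (this is a simplification, not the AWS API; the rule numbers and port ranges are hypothetical examples):

```python
# Minimal sketch: simulating stateless network ACL evaluation to show why
# the response to the client's ephemeral port (59003) is rejected.

def nacl_evaluate(rules, port):
    """Return the action of the first rule whose port range matches.
    NACL rules are evaluated in ascending rule-number order; anything
    not matched by an explicit rule hits the implicit final deny."""
    for _rule_number, port_from, port_to, action in sorted(rules):
        if port_from <= port <= port_to:
            return action
    return "REJECT"  # implicit deny

# Hypothetical subnet NACL: inbound allows 8080; outbound also only
# allows 8080, with no rule for the ephemeral range.
inbound_rules = [(100, 8080, 8080, "ACCEPT")]
outbound_rules = [(100, 8080, 8080, "ACCEPT")]

print(nacl_evaluate(inbound_rules, 8080))    # ACCEPT: request reaches the target
print(nacl_evaluate(outbound_rules, 59003))  # REJECT: response is dropped

# Remediation: add an outbound allow for the ephemeral port range.
outbound_rules.append((120, 1024, 65535, "ACCEPT"))
print(nacl_evaluate(outbound_rules, 59003))  # ACCEPT: response now returns
```

This mirrors the flow-log pattern in the question: the request is accepted inbound, but the stateless outbound evaluation has no matching rule for port 59003 until the ephemeral-range entry is added.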


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Networking and Content Delivery

· Amazon VPC - Network ACLs (stateless behavior and rule requirements)

· Amazon VPC - Security Groups (stateful return traffic)

· VPC Flow Logs - Record fields, ACCEPT/REJECT analysis



A company runs a website on Amazon EC2 instances. Users can upload images to an Amazon S3 bucket and publish the images to the website. The company wants to deploy a serverless image- processing application that uses an AWS Lambda function to resize the uploaded images.

The company's development team has created the Lambda function. A CloudOps engineer must implement a solution to invoke the Lambda function when users upload new images to the S3 bucket.

Which solution will meet this requirement?

  A. Configure an Amazon Simple Notification Service (Amazon SNS) topic to invoke the Lambda function when a user uploads a new image to the S3 bucket.
  B. Configure an Amazon CloudWatch alarm to invoke the Lambda function when a user uploads a new image to the S3 bucket.
  C. Configure S3 Event Notifications to invoke the Lambda function when a user uploads a new image to the S3 bucket.
  D. Configure an Amazon Simple Queue Service (Amazon SQS) queue to invoke the Lambda function when a user uploads a new image to the S3 bucket.

Answer(s): C

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

Use Amazon S3 Event Notifications with AWS Lambda to trigger image processing on object creation. S3 natively supports invoking Lambda for events such as s3:ObjectCreated:*, providing a serverless, low-latency pipeline without managing additional services. AWS operational guidance states that "Amazon S3 can directly invoke a Lambda function in response to object-created events," allowing you to pass event metadata (bucket/key) to the function for resizing and writing results back to S3. This approach minimizes operational overhead, scales automatically with upload volume, and integrates with standard retry semantics. SNS or SQS can be added for fan-out or buffering patterns, but they are not required when the requirement is simply "invoke the Lambda function on upload." CloudWatch alarms do not detect individual S3 object uploads and cannot directly satisfy per-object triggers. Therefore, configuring S3 Lambda event notifications meets the requirement most directly and aligns with CloudOps best practices for event-driven, serverless automation.
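A handler for this pattern can be sketched as follows. The event shape follows the documented S3 notification record format; the bucket name and key are examples, and the hand-off to the team's resizing logic is left as a placeholder:

```python
# Minimal sketch of a Lambda handler invoked by S3 Event Notifications
# (s3:ObjectCreated:*). Extracts bucket/key pairs from the event records.
import urllib.parse

def handler(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in the event (spaces become '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        processed.append((bucket, key))  # hand off to the resize logic here
    return processed

# Example invocation with a trimmed s3:ObjectCreated:Put event:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads-bucket"},
                "object": {"key": "photos/cat+1.jpg"}}}
    ]
}
print(handler(sample_event, None))  # [('uploads-bucket', 'photos/cat 1.jpg')]
```

With the notification configured on the bucket for object-created events, S3 invokes this function directly per upload, with no intermediate SNS topic or SQS queue required.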


Reference:

· Using AWS Lambda with Amazon S3 (Lambda Developer Guide)

· Amazon S3 Event Notifications (S3 User Guide)

· AWS Well-Architected - Serverless Applications (Operational Excellence)



A company hosts a production MySQL database on an Amazon Aurora single-node DB cluster. The database is queried heavily for reporting purposes. The DB cluster is experiencing periods of performance degradation because of high CPU utilization and maximum connections errors. A CloudOps engineer needs to improve the stability of the database.

Which solution will meet these requirements?

  A. Create an Aurora Replica node. Create an Auto Scaling policy to scale replicas based on CPU utilization. Ensure that all reporting requests use the read-only connection string.
  B. Create a second Aurora MySQL single-node DB cluster in a second Availability Zone. Ensure that all reporting requests use the connection string for this additional node.
  C. Create an AWS Lambda function that caches reporting requests. Ensure that all reporting requests call the Lambda function.
  D. Create a multi-node Amazon ElastiCache cluster. Ensure that all reporting requests use the ElastiCache cluster. Use the database if the data is not in the cache.

Answer(s): A

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

Amazon Aurora supports up to 15 Aurora Replicas that share the same storage volume and provide read scaling and improved availability. Official guidance states that replicas "offload read traffic from the writer" and that you should direct read-only workloads to the reader endpoint, reducing CPU pressure and connection counts on the primary. Aurora also supports Replica Auto Scaling through Application Auto Scaling policies using metrics such as CPU utilization or connections to add or remove replicas automatically. This design addresses both high CPU and maximum connections by moving reporting traffic to read replicas while keeping a single write primary for OLTP. Option B creates a separate cluster with independent storage, increasing operational overhead and data synchronization complexity. Options C and D introduce application-layer caching changes that may not guarantee data freshness or relieve the write node directly. Therefore, adding read replicas and routing reporting to the reader endpoint, with auto scaling based on load, is the least intrusive, CloudOps-aligned way to stabilize performance.
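The read/write split described above can be sketched as simple application-side routing. The endpoint names below are hypothetical (Aurora reader endpoints use the `-ro-` infix), and a real application would route via its database driver or a proxy rather than string matching:

```python
# Minimal sketch: route read-only reporting queries to the Aurora reader
# endpoint, keep writes on the cluster (writer) endpoint.
WRITER_ENDPOINT = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Send read-only statements to the reader endpoint; everything else
    (INSERT/UPDATE/DELETE/DDL) to the writer."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    read_only = first_word in ("SELECT", "SHOW", "EXPLAIN")
    return READER_ENDPOINT if read_only else WRITER_ENDPOINT

print(endpoint_for("SELECT * FROM sales_report"))           # reader endpoint
print(endpoint_for("UPDATE orders SET status = 'shipped'")) # writer endpoint
```

Because the reader endpoint load-balances across all replicas, replica auto scaling adds capacity behind the same connection string with no application change.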


Reference:

· Amazon Aurora - Replicas and Reader Endpoint (Aurora User Guide)

· Aurora Replica Auto Scaling (Aurora & Application Auto Scaling Guides)

· AWS Well-Architected Framework - Reliability & Performance Efficiency



A CloudOps engineer configures an application to run on Amazon EC2 instances behind an Application Load Balancer (ALB) in a simple scaling Auto Scaling group with the default settings. The Auto Scaling group is configured to use the RequestCountPerTarget metric for scaling. The CloudOps engineer notices that the RequestCountPerTarget metric exceeded the specified limit twice in 180 seconds.

How will the number of EC2 instances in this Auto Scaling group be affected in this scenario?

  A. The Auto Scaling group will launch an additional EC2 instance every time the RequestCountPerTarget metric exceeds the predefined limit.
  B. The Auto Scaling group will launch one EC2 instance and will wait for the default cooldown period before launching another instance.
  C. The Auto Scaling group will send an alert to the ALB to rebalance the traffic and not add new EC2 instances until the load is normalized.
  D. The Auto Scaling group will try to distribute the traffic among all EC2 instances before launching another instance.

Answer(s): B

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

With simple scaling policies, an Auto Scaling group performs one scaling activity when the alarm condition is met, then observes a default cooldown period (300 seconds) before another scaling activity of the same type can begin. CloudOps guidance explains that cooldown prevents rapid successive scale-outs by allowing time for the newly launched instance(s) to register with the load balancer and impact the metric. Even if the alarm breaches multiple times during the cooldown window, the group waits until the cooldown completes before evaluating and acting again. In this case, although RequestCountPerTarget exceeded the threshold twice within 180 seconds, the group will launch a single instance and then wait for cooldown before any additional scale-out can occur. Options A, C, and D do not reflect the behavior of simple scaling with cooldowns; A describes step/target-tracking-like behavior, and C/D are not Auto Scaling mechanics.
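The cooldown behavior can be sketched as a small simulation (a simplified model of simple scaling, not the Auto Scaling service itself):

```python
# Minimal sketch: alarm breaches that occur during the default 300-second
# cooldown do not trigger another scale-out activity.
COOLDOWN_SECONDS = 300

def simulate(breach_times, cooldown=COOLDOWN_SECONDS):
    """Given alarm-breach timestamps (seconds), return the timestamps at
    which a scaling activity actually starts."""
    scale_outs = []
    for t in sorted(breach_times):
        # A new activity starts only if no prior activity is in cooldown.
        if not scale_outs or t >= scale_outs[-1] + cooldown:
            scale_outs.append(t)
    return scale_outs

# Two breaches 180 seconds apart -> a single instance launch at t=0,
# matching the scenario in the question.
print(simulate([0, 180]))        # [0]
# A breach after the cooldown expires does trigger a second launch.
print(simulate([0, 180, 320]))   # [0, 320]
```

The second breach at t=180 falls inside the 300-second cooldown started at t=0, so only one instance is launched, which is why option B is correct.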


Reference:

· Amazon EC2 Auto Scaling - Simple Scaling Policies and Cooldown (User Guide)

· Elastic Load Balancing Metrics - ALB RequestCountPerTarget (CloudWatch Metrics)

· AWS Well-Architected Framework - Performance Efficiency & Operational Excellence



A company uses Amazon ElastiCache (Redis OSS) to cache application data. A CloudOps engineer must implement a solution to increase the resilience of the cache. The solution also must minimize the recovery time objective (RTO).

Which solution will meet these requirements?

  A. Replace ElastiCache (Redis OSS) with ElastiCache (Memcached).
  B. Create an Amazon EventBridge rule to initiate a backup every hour. Restore the backup when necessary.
  C. Create a read replica in a second Availability Zone. Enable Multi-AZ for the ElastiCache (Redis OSS) replication group.
  D. Enable automatic backups. Restore the backups when necessary.

Answer(s): C

Explanation:

Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:

For high availability and fast failover, ElastiCache for Redis supports replication groups with Multi-AZ and automatic failover. CloudOps guidance states that a primary node can be paired with one or more replicas across multiple Availability Zones; if the primary fails, Redis automatically promotes a replica to primary in seconds, thereby minimizing RTO. This architecture maintains in-memory data continuity without waiting for backup restore operations. Backups (Options B and D) provide durability but require restore and re-warm procedures that increase RTO and may impact application latency. Switching engines (Option A) to Memcached does not provide Redis replication/failover semantics and would not inherently improve resilience for this use case. Therefore, creating a read replica in a different AZ and enabling Multi-AZ with automatic failover is the prescribed CloudOps pattern to increase resilience and achieve the lowest practical RTO for Redis caches.
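The Multi-AZ failover behavior can be sketched as a small model (this simulates the promotion logic conceptually; it is not the ElastiCache API, and the node IDs and AZ names are hypothetical):

```python
# Minimal sketch: when the primary fails, a healthy replica in another AZ
# is promoted, so the cache keeps serving without a backup restore.

def failover(nodes):
    """nodes: dict of node-id -> {'role', 'az', 'healthy'}.
    Promote a healthy replica if the primary is unhealthy; return the id
    of the node serving as primary afterward. Raises StopIteration if no
    healthy replica exists (a real cluster would then need a restore)."""
    primary = next(n for n, meta in nodes.items() if meta["role"] == "primary")
    if nodes[primary]["healthy"]:
        return primary
    replica = next(n for n, meta in nodes.items()
                   if meta["role"] == "replica" and meta["healthy"])
    nodes[replica]["role"] = "primary"
    nodes[primary]["role"] = "replica"
    return replica

cluster = {
    "node-1": {"role": "primary", "az": "us-east-1a", "healthy": False},
    "node-2": {"role": "replica", "az": "us-east-1b", "healthy": True},
}
print(failover(cluster))  # node-2 in the second AZ is promoted
```

Because promotion reuses a replica that already holds the replicated in-memory data, recovery takes seconds, whereas the backup-based options (B and D) must restore and re-warm the cache, inflating the RTO.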


Reference:

· AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Reliability and Business Continuity

· Amazon ElastiCache for Redis - Replication Groups, Multi-AZ, and Automatic Failover

· AWS Well-Architected Framework - Reliability Pillar


