Amazon AWS Certified DevOps Engineer - Professional DOP-C02 Exam Questions
AWS Certified DevOps Engineer - Professional DOP-C02 (Page 4)

Updated On: 23-Apr-2026

A company's developers use Amazon EC2 instances as remote workstations. The company is concerned that users can create or modify EC2 security groups to allow unrestricted inbound access.

A DevOps engineer needs to develop a solution to detect when users create unrestricted security group rules. The solution must detect changes to security group rules in near real time, remove unrestricted rules, and send email notifications to the security team. The DevOps engineer has created an AWS Lambda function that reads a security group ID from its input, removes rules that grant unrestricted access, and sends notifications through Amazon Simple Notification Service (Amazon SNS).

What should the DevOps engineer do next to meet the requirements?

  1. Configure the Lambda function to be invoked by the SNS topic. Create an AWS CloudTrail subscription for the SNS topic. Configure a subscription filter for security group modification events.
  2. Create an Amazon EventBridge scheduled rule to invoke the Lambda function. Define a schedule pattern that runs the Lambda function every hour.
  3. Create an Amazon EventBridge event rule that has the default event bus as the source. Define the rule's event pattern to match EC2 security group creation and modification events. Configure the rule to invoke the Lambda function.
  4. Create an Amazon EventBridge custom event bus that subscribes to events from all AWS services.
    Configure the Lambda function to be invoked by the custom event bus.

Answer(s): C
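The remediation Lambda function in this scenario has to decide which ingress rules are "unrestricted" before revoking them. The following sketch shows one way that check might be written; the function name and the overall shape are illustrative, not the exam's reference implementation. The input uses the same structure as the IpPermissions field returned by the EC2 DescribeSecurityGroups API.

```python
# Sketch of the rule-inspection logic such a Lambda function might use.
# In the real handler, the security group ID would come from the
# EventBridge event detail, and each flagged rule would then be passed
# to ec2.revoke_security_group_ingress before publishing to SNS.

UNRESTRICTED_CIDRS = {"0.0.0.0/0", "::/0"}

def find_unrestricted_rules(ip_permissions):
    """Return the ingress rules that allow traffic from any address.

    `ip_permissions` has the same shape as the IpPermissions field
    returned by the EC2 DescribeSecurityGroups API.
    """
    flagged = []
    for rule in ip_permissions:
        v4_open = any(r.get("CidrIp") in UNRESTRICTED_CIDRS
                      for r in rule.get("IpRanges", []))
        v6_open = any(r.get("CidrIpv6") in UNRESTRICTED_CIDRS
                      for r in rule.get("Ipv6Ranges", []))
        if v4_open or v6_open:
            flagged.append(rule)
    return flagged
```

Keeping the detection logic separate from the boto3 calls like this also makes the function easy to unit test without AWS credentials.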



A DevOps engineer is creating an AWS CloudFormation template to deploy a web service. The web service will run on Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). The DevOps engineer must ensure that the service can accept requests from clients that have IPv6 addresses.

What should the DevOps engineer do with the CloudFormation template so that IPv6 clients can access the web service?

  1. Add an IPv6 CIDR block to the VPC and the private subnet for the EC2 instances. Create route table entries for the IPv6 network, use EC2 instance types that support IPv6, and assign IPv6 addresses to each EC2 instance.
  2. Assign each EC2 instance an IPv6 Elastic IP address. Create a target group, and add the EC2 instances as targets. Create a listener on port 443 of the ALB, and associate the target group with the ALB.
  3. Replace the ALB with a Network Load Balancer (NLB). Add an IPv6 CIDR block to the VPC and subnets for the NLB, and assign the NLB an IPv6 Elastic IP address.
  4. Add an IPv6 CIDR block to the VPC and subnets for the ALB. Create a listener on port 443, and specify the dualstack IP address type on the ALB. Create a target group, and add the EC2 instances as targets.
    Associate the target group with the ALB.

Answer(s): D
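The dualstack configuration from option D might look like the following CloudFormation fragment. This is a sketch, not a complete template: the logical IDs and the subnet, security group, certificate, and target group references are placeholders, and the referenced subnets must already have IPv6 CIDR blocks assigned.

```yaml
  WebALB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internet-facing
      IpAddressType: dualstack      # accept both IPv4 and IPv6 clients
      Subnets:
        - !Ref PublicSubnetA        # subnets need an IPv6 CIDR block
        - !Ref PublicSubnetB
      SecurityGroups:
        - !Ref ALBSecurityGroup

  HttpsListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref WebALB
      Port: 443
      Protocol: HTTPS
      Certificates:
        - CertificateArn: !Ref CertificateArn
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref WebTargetGroup
```

Note that the EC2 targets themselves can stay IPv4-only in a private subnet; the ALB terminates the IPv6 connection and forwards to the targets over IPv4.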



A company uses AWS Organizations and AWS Control Tower to manage all the company's AWS accounts.
The company uses the Enterprise Support plan.

A DevOps engineer is using Account Factory for Terraform (AFT) to provision new accounts. When new accounts are provisioned, the DevOps engineer notices that the support plan for the new accounts is set to the Basic Support plan. The DevOps engineer needs to implement a solution to provision the new accounts with the Enterprise Support plan.

Which solution will meet these requirements?

  1. Use an AWS Config conformance pack to deploy the account-part-of-organizations AWS Config rule and to automatically remediate any noncompliant accounts.
  2. Create an AWS Lambda function to create a ticket for AWS Support to add the account to the Enterprise Support plan. Grant the Lambda function the support:ResolveCase permission.
  3. Add an additional value to the control_tower_parameters input to set the AWSEnterpriseSupport parameter as the organization's management account number.
  4. Set the aft_feature_enterprise_support feature flag to True in the AFT deployment input configuration.
    Redeploy AFT and apply the changes.

Answer(s): D
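In Terraform, the feature flag from option D is an input to the AFT module. The fragment below is a sketch against the aws-ia AFT module; the module source pin and the elided inputs are placeholders for whatever the existing deployment already sets.

```hcl
module "aft" {
  source = "github.com/aws-ia/terraform-aws-control_tower_account_factory"

  # ... existing AFT inputs (account IDs, Regions, VCS settings) ...

  # Enroll newly vended accounts in the Enterprise Support plan.
  aft_feature_enterprise_support = true
}
```

After changing the flag, the AFT deployment must be re-applied for new account provisioning to pick it up; it does not retroactively change accounts that were already vended.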



A company's DevOps engineer uses AWS Systems Manager to perform maintenance tasks during maintenance windows. The company has a few Amazon EC2 instances that require a restart after notifications from AWS Health. The DevOps engineer needs to implement an automated solution to remediate these notifications. The DevOps engineer creates an Amazon EventBridge rule.

How should the DevOps engineer configure the EventBridge rule to meet these requirements?

  1. Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a Systems Manager document to restart the EC2 instance.
  2. Configure an event source of Systems Manager and an event type that indicates a maintenance window.
    Target a Systems Manager document to restart the EC2 instance.
  3. Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the
    EC2 instance during a maintenance window.
  4. Configure an event source of EC2 and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the EC2 instance during a maintenance window.

Answer(s): A
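The EventBridge rule in option A would use an event pattern along these lines. This is a sketch; the exact `eventTypeCategory` and `eventTypeCode` values to match should be checked against the AWS Health event reference for the maintenance notifications in question.

```json
{
  "source": ["aws.health"],
  "detail-type": ["AWS Health Event"],
  "detail": {
    "service": ["EC2"],
    "eventTypeCategory": ["scheduledChange"]
  }
}
```

The rule's target would then be a Systems Manager Automation document (such as one that stops and starts the instance), with the instance ID extracted from the event's affected-entities detail.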



A company has containerized all of its in-house quality control applications. The company is running Jenkins on Amazon EC2 instances, which require patching and upgrading. The compliance officer has requested that a DevOps engineer begin encrypting build artifacts, since they contain company intellectual property.

What should the DevOps engineer do to accomplish this in the MOST maintainable manner?

  1. Automate patching and upgrading using AWS Systems Manager on EC2 instances and encrypt Amazon EBS volumes by default.
  2. Deploy Jenkins to an Amazon ECS cluster and copy build artifacts to an Amazon S3 bucket with default encryption enabled.
  3. Leverage AWS CodePipeline with a build action and encrypt the artifacts using AWS Secrets Manager.
  4. Use AWS CodeBuild with artifact encryption to replace the Jenkins instance running on EC2 instances.

Answer(s): D



An IT team has built an AWS CloudFormation template so others in the company can quickly and reliably deploy and terminate an application. The template creates an Amazon EC2 instance with a user data script to install the application and an Amazon S3 bucket that the application uses to serve static webpages while it is running.

All resources should be removed when the CloudFormation stack is deleted. However, the team observes that CloudFormation reports an error during stack deletion, and the S3 bucket created by the stack is not deleted.

How can the team resolve the error in the MOST efficient manner to ensure that all resources are deleted without errors?

  1. Add a DeletionPolicy attribute to the S3 bucket resource, with the value Delete, forcing the bucket to be removed when the stack is deleted.
  2. Add a custom resource with an AWS Lambda function with the DependsOn attribute specifying the S3 bucket, and an IAM role. Write the Lambda function to delete all objects from the bucket when RequestType is Delete.
  3. Identify the resource that was not deleted. Manually empty the S3 bucket and then delete it.
  4. Replace the EC2 and S3 bucket resources with a single AWS OpsWorks Stacks resource. Define a custom recipe for the stack to create and delete the EC2 instance and the S3 bucket.

Answer(s): B
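The custom resource in option B works because the Lambda function empties the bucket before CloudFormation tries to delete it (CloudFormation cannot delete a non-empty bucket, even with DeletionPolicy: Delete). The sketch below shows the emptying logic; names are illustrative, and the part where a real custom resource signals success or failure back to CloudFormation (for example via the cfnresponse helper module) is elided. The S3 client is passed in as a parameter so the logic can be exercised without AWS credentials.

```python
# Sketch of the custom-resource handler from option B. The response
# callback to CloudFormation is omitted; only the emptying logic is shown.

def empty_bucket(s3, bucket_name):
    """Delete every object in the bucket so CloudFormation can remove it."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket_name):
        keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
        if keys:
            s3.delete_objects(Bucket=bucket_name, Delete={"Objects": keys})

def handler(event, s3):
    # Only act on stack deletion; Create and Update are no-ops here.
    if event.get("RequestType") == "Delete":
        empty_bucket(s3, event["ResourceProperties"]["BucketName"])
```

A real deployment would also need the DependsOn attribute on the custom resource (so it runs before the bucket's deletion) and an IAM role granting s3:ListBucket and s3:DeleteObject, as the option describes.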



A company has an AWS CodePipeline pipeline that is configured with an Amazon S3 bucket in the eu-west-1 Region. The pipeline deploys an AWS Lambda application to the same Region. The pipeline consists of an AWS CodeBuild project build action and an AWS CloudFormation deploy action.

The CodeBuild project uses the aws cloudformation package AWS CLI command to build an artifact that contains the Lambda function code's .zip file and the CloudFormation template. The CloudFormation deploy action references the CloudFormation template from the output artifact of the CodeBuild project's build action.

The company wants to also deploy the Lambda application to the us-east-1 Region by using the pipeline in eu-west-1. A DevOps engineer has already updated the CodeBuild project to use the aws cloudformation package command to produce an additional output artifact for us-east-1.

Which combination of additional steps should the DevOps engineer take to meet these requirements? (Choose two.)

  1. Modify the CloudFormation template to include a parameter for the Lambda function code's zip file location.
    Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to pass in the us-east-1 artifact location as a parameter override.
  2. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.
  3. Create an S3 bucket in us-east-1. Configure the S3 bucket policy to allow CodePipeline to have read and write access.
  4. Create an S3 bucket in us-east-1. Configure S3 Cross-Region Replication (CRR) from the S3 bucket in eu-west-1 to the S3 bucket in us-east-1.
  5. Modify the pipeline to include the S3 bucket for us-east-1 as an artifact store. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.

Answer(s): C,E
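A cross-Region pipeline declares one artifact store per Region it acts in, which is why options C and E go together: the us-east-1 bucket must exist and be readable/writable by CodePipeline, and the pipeline definition must reference it. The fragment below sketches the relevant part of the pipeline JSON; the pipeline name and bucket names are placeholders.

```json
{
  "pipeline": {
    "name": "lambda-multi-region-pipeline",
    "artifactStores": {
      "eu-west-1": { "type": "S3", "location": "artifact-bucket-eu-west-1" },
      "us-east-1": { "type": "S3", "location": "artifact-bucket-us-east-1" }
    }
  }
}
```

Each deploy action then specifies its `region`, and CodePipeline copies the action's input artifacts into that Region's artifact store before running it.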



A company runs an application on one Amazon EC2 instance. Application metadata is stored in Amazon S3 and must be retrieved if the instance is restarted. The instance must restart or relaunch automatically if the instance becomes unresponsive.

Which solution will meet these requirements?

  1. Create an Amazon CloudWatch alarm for the StatusCheckFailed metric. Use the recover action to stop and start the instance. Use an S3 event notification to push the metadata to the instance when the instance is back up and running.
  2. Configure AWS OpsWorks, and use the auto healing feature to stop and start the instance. Use a lifecycle event in OpsWorks to pull the metadata from Amazon S3 and update it on the instance.
  3. Use EC2 Auto Recovery to automatically stop and start the instance in case of a failure. Use an S3 event notification to push the metadata to the instance when the instance is back up and running.
  4. Use AWS CloudFormation to create an EC2 instance that includes the UserData property for the EC2 resource. Add a command in UserData to retrieve the application metadata from Amazon S3.

Answer(s): B






Amazon AWS Certified DevOps Engineer - Professional DOP-C02: Skills Tested, Job Roles, and Study Tips

The AWS Certified DevOps Engineer - Professional DOP-C02 certification is designed for individuals who perform a DevOps engineer role with two or more years of experience provisioning, operating, and managing AWS environments. This certification validates technical expertise in implementing continuous delivery systems and methodologies on the AWS platform, as well as automating security controls, governance processes, and compliance validation. Organizations hiring for cloud-native roles, such as DevOps Engineers, Site Reliability Engineers, and Cloud Architects, prioritize this credential because it demonstrates a candidate's ability to design and maintain resilient, scalable, and secure infrastructure. Achieving this Amazon certification signifies that a professional possesses the advanced skills required to manage complex, multi-account AWS environments effectively.

What the AWS Certified DevOps Engineer - Professional DOP-C02 Exam Covers

The exam evaluates a candidate's proficiency across several critical domains, including SDLC Automation, Configuration Management and IaC, Resilient Cloud Solutions, Monitoring and Logging, Incident and Event Response, and Security and Compliance. These topics are not tested in isolation; rather, the exam presents complex, scenario-based practice questions that require you to synthesize knowledge across these areas to solve real-world operational challenges. For instance, you might be asked to design a CI/CD pipeline that integrates automated security testing, which touches upon both SDLC Automation and Security and Compliance. By engaging with our practice questions, you will encounter scenarios that mirror the multifaceted nature of these domains, ensuring you are prepared for the integrated way AWS tests these concepts. Mastering these topics requires a deep understanding of how various AWS services interact to support automated, secure, and resilient software delivery lifecycles.

Among these domains, Resilient Cloud Solutions often presents the most significant challenge for candidates because it requires a comprehensive understanding of high availability, disaster recovery, and fault tolerance across distributed systems. You must demonstrate the ability to architect solutions that can withstand service failures while maintaining performance and data integrity, which often involves complex configurations of AWS services like Auto Scaling, Elastic Load Balancing, and multi-region deployments. This area demands more than theoretical knowledge; it requires the ability to analyze trade-offs between cost, performance, and availability in high-pressure scenarios. Candidates must be prepared to evaluate architectural diagrams and operational requirements to select the most resilient design patterns that align with AWS best practices.

Are These Real AWS Certified DevOps Engineer - Professional DOP-C02 Exam Questions?

Our practice questions are sourced and verified by the community, consisting of IT professionals and recent test-takers who have sat for the actual exam. Because these questions are community-verified, they reflect the style, complexity, and focus areas that appear on the real exam, providing a reliable way to gauge your readiness. If you've been searching for AWS Certified DevOps Engineer - Professional DOP-C02 exam dumps or braindump files, our community-verified practice questions offer something more valuable — each question is verified and explained by IT professionals who recently passed the exam. We do not provide leaked or confidential content, as our goal is to help you understand the underlying concepts rather than memorize answers. This approach ensures that you are prepared for the logic and reasoning required on the actual certification exam.

The community verification process is central to the reliability of our study materials, as it involves active participation from users who have recently completed their certification journey. When a question is posted, users discuss the answer choices, debate the technical nuances of the scenario, and flag any inaccuracies based on their recent exam experience. This collaborative environment allows for the refinement of explanations, ensuring that the reasoning provided is accurate and aligned with current AWS documentation. By engaging with these discussions, you gain insights into how experienced professionals approach complex problems, which is far more effective than relying on static, unverified sources.

How to Prepare for the AWS Certified DevOps Engineer - Professional DOP-C02 Exam

Effective exam preparation requires a combination of hands-on experience and a deep understanding of AWS architectural principles. You should spend significant time in a sandbox or real AWS environment, building and breaking infrastructure to see how services like AWS CloudFormation, AWS CodePipeline, and AWS Systems Manager behave under different conditions. Rely heavily on official Amazon documentation and whitepapers, as these are the definitive sources of truth for the services covered in the exam. Every practice question includes a free AI Tutor explanation that breaks down the reasoning behind the correct answer — so you understand the concept, not just the answer. Creating a consistent study schedule that allocates time for both reading and practical application is essential for retaining the vast amount of information required for this professional-level certification.

A common mistake candidates make is relying on rote memorization of facts rather than developing the ability to apply knowledge to scenario-based questions. The DOP-C02 exam is heavily focused on situational judgment, meaning you must understand not just what a service does, but when and why to use it over an alternative in a specific context. To avoid this, focus on understanding the "why" behind every architectural decision in your practice sessions. Additionally, many candidates struggle with time management during the exam; practicing with timed sets of questions will help you build the stamina and speed necessary to complete the exam within the allotted time frame.

What to Expect on Exam Day

On the day of your exam, you will encounter a series of questions designed to test your ability to apply AWS knowledge in professional scenarios. The exam typically consists of multiple-choice and multiple-response questions, which may require you to select one or more correct answers based on the provided requirements. These questions are often presented as complex, multi-paragraph scenarios that describe a business problem, a set of constraints, and a desired outcome. You will take the exam at a Pearson VUE testing center or via an online proctored environment, where strict security protocols are enforced to maintain the integrity of the Amazon certification process. Being familiar with the interface and the style of questioning beforehand is a critical component of your overall exam prep strategy.

Who Should Use These AWS Certified DevOps Engineer - Professional DOP-C02 Practice Questions

These practice questions are intended for experienced DevOps engineers, cloud architects, and systems administrators who are ready to validate their expertise at a professional level. Ideally, you should have at least two years of hands-on experience managing AWS environments before attempting this certification exam. This exam is a significant step for professionals looking to demonstrate their capability to lead complex DevOps initiatives and manage large-scale, automated cloud infrastructure. By using these resources, you are engaging in a structured exam preparation process that helps identify knowledge gaps and reinforces your understanding of AWS best practices. The career impact of passing this exam is substantial, as it serves as a recognized benchmark of your ability to handle the operational demands of modern cloud-native organizations.

To get the most out of these practice questions, treat each one as a learning opportunity rather than a simple test. Do not just read the correct answer; engage with the AI Tutor explanation to understand the underlying logic, and read the community discussions to see how others interpreted the scenario. If you get a question wrong, flag it and revisit it later to ensure you have mastered the concept, rather than just memorizing the correction. This iterative process of testing, reviewing, and refining your knowledge is the most effective way to prepare for the rigors of the actual exam. Browse the questions above and use the community discussions and AI Tutor to build real exam confidence.

