Amazon AWS Certified DevOps Engineer - Professional DOP-C02 Exam Questions
AWS Certified DevOps Engineer - Professional DOP-C02 (Page 7)

Updated On: 25-Apr-2026

A DevOps engineer is building a multistage pipeline with AWS CodePipeline to build, verify, stage, test, and deploy an application. A manual approval stage is required between the test stage and the deploy stage. The development team uses a custom chat tool with webhook support that requires near-real-time notifications.

How should the DevOps engineer configure status updates for pipeline activity and approval requests to post to the chat tool?

  1. Create an Amazon CloudWatch Logs subscription that filters on CodePipeline Pipeline Execution State Change. Publish subscription events to an Amazon Simple Notification Service (Amazon SNS) topic.
    Subscribe the chat webhook URL to the SNS topic, and complete the subscription validation.
  2. Create an AWS Lambda function that is invoked by AWS CloudTrail events. When a CodePipeline Pipeline Execution State Change event is detected, send the event details to the chat webhook URL.
  3. Create an Amazon EventBridge rule that filters on CodePipeline Pipeline Execution State Change. Publish the events to an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function that sends event details to the chat webhook URL. Subscribe the function to the SNS topic.
  4. Modify the pipeline code to send the event details to the chat webhook URL at the end of each stage.
    Parameterize the URL so that each pipeline can send to a different URL based on the pipeline environment.

Answer(s): C
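Answer C chains EventBridge to SNS to Lambda to the chat webhook. A minimal sketch of the two custom pieces, assuming a hypothetical webhook URL and a simple `{"text": ...}` chat payload shape:

```python
import json
import urllib.request

# EventBridge event pattern for the rule; the rule's target is the SNS topic.
EVENT_PATTERN = {
    "source": ["aws.codepipeline"],
    "detail-type": ["CodePipeline Pipeline Execution State Change"],
}

# Placeholder -- substitute the chat tool's real webhook endpoint.
CHAT_WEBHOOK_URL = "https://chat.example.com/hooks/pipeline"

def build_payload(detail):
    """Turn the EventBridge detail section into a chat message body."""
    return {"text": f"Pipeline {detail['pipeline']} is now {detail['state']}"}

def lambda_handler(event, context):
    """Subscribed to the SNS topic; forwards pipeline state to the chat tool."""
    # SNS delivers the EventBridge event as a JSON string in the message body.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    req = urllib.request.Request(
        CHAT_WEBHOOK_URL,
        data=json.dumps(build_payload(message["detail"])).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Manual-approval notifications follow the same path: an additional rule (or the same rule broadened to action-level detail types) matches the approval event and the Lambda function formats it for the webhook.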



A company's application development team uses Linux-based Amazon EC2 instances as bastion hosts. Inbound SSH access to the bastion hosts is restricted to specific IP addresses, as defined in the associated security groups. The company's security team wants to receive a notification if the security group rules are modified to allow SSH access from any IP address.

What should a DevOps engineer do to meet this requirement?

  1. Create an Amazon EventBridge rule with a source of aws.cloudtrail and the event name AuthorizeSecurityGroupIngress. Define an Amazon Simple Notification Service (Amazon SNS) topic as the target.
  2. Enable Amazon GuardDuty and check the findings for security groups in AWS Security Hub. Configure an Amazon EventBridge rule with a custom pattern that matches GuardDuty events with an output of NON_COMPLIANT. Define an Amazon Simple Notification Service (Amazon SNS) topic as the target.
  3. Create an AWS Config rule by using the restricted-ssh managed rule to check whether security groups disallow unrestricted incoming SSH traffic. Configure automatic remediation to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.
  4. Enable Amazon Inspector. Include the Common Vulnerabilities and Exposures-1.1 rules package to check the security groups that are associated with the bastion hosts. Configure Amazon Inspector to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.

Answer(s): C
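Answer C relies on the `restricted-ssh` managed rule, whose underlying identifier is `INCOMING_SSH_DISABLED`. A sketch of the rule definition that would be passed to boto3's `config_client.put_config_rule(ConfigRule=...)`; the remediation that publishes to SNS would typically use the `AWS-PublishSNSNotification` automation document:

```python
# AWS Config managed rule definition. The rule evaluates security groups
# and flags any that allow unrestricted inbound SSH (0.0.0.0/0 on port 22).
RESTRICTED_SSH_RULE = {
    "ConfigRuleName": "restricted-ssh",
    "Source": {
        "Owner": "AWS",
        # Managed-rule identifier behind the "restricted-ssh" console name.
        "SourceIdentifier": "INCOMING_SSH_DISABLED",
    },
    "Scope": {"ComplianceResourceTypes": ["AWS::EC2::SecurityGroup"]},
}
```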



A DevOps team manages an API running on-premises that serves as a backend for an Amazon API Gateway endpoint. Customers have been complaining about high response latencies, which the development team has verified using the API Gateway latency metrics in Amazon CloudWatch. To identify the cause, the team needs to collect relevant data without introducing additional latency.

Which actions should be taken to accomplish this? (Choose two.)

  1. Install the CloudWatch agent server side and configure the agent to upload relevant logs to CloudWatch.
  2. Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and upload those segments to X-Ray during each request.
  3. Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and use the X-Ray daemon to upload segments to X-Ray.
  4. Modify the on-premises application to send log information back to API Gateway with each request.
  5. Modify the on-premises application to calculate and upload statistical data relevant to the API service requests to CloudWatch metrics.

Answer(s): A,C
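Both correct options avoid adding latency because uploads happen out of the request path: the CloudWatch agent ships logs asynchronously, and the X-Ray daemon buffers trace segments locally over UDP and uploads them in the background. A sketch of the agent's log-collection configuration for the on-premises API servers, with a hypothetical log path and group name:

```python
# CloudWatch agent configuration fragment (would be saved as the agent's
# JSON config file). The file path and log group name are placeholders.
AGENT_CONFIG = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/api/access.log",
                        "log_group_name": "onprem-api-access",
                        "log_stream_name": "{hostname}",
                    }
                ]
            }
        }
    }
}
```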



A company has an application that is using a MySQL-compatible Amazon Aurora Multi-AZ DB cluster as the database. A cross-Region read replica has been created for disaster recovery purposes. A DevOps engineer wants to automate the promotion of the replica so it becomes the primary database instance in the event of a failure.

Which solution will accomplish this?

  1. Configure a latency-based Amazon Route 53 CNAME with health checks so it points to both the primary and replica endpoints. Subscribe an Amazon SNS topic to Amazon RDS failure notifications from AWS CloudTrail and use that topic to invoke an AWS Lambda function that will promote the replica instance as the primary.
  2. Create an Aurora custom endpoint to point to the primary database instance. Configure the application to use this endpoint. Configure AWS CloudTrail to run an AWS Lambda function to promote the replica instance and modify the custom endpoint to point to the newly promoted instance.
  3. Create an AWS Lambda function to modify the application's AWS CloudFormation template to promote the replica, apply the template to update the stack, and point the application to the newly promoted instance.
    Create an Amazon CloudWatch alarm to invoke this Lambda function after the failure event occurs.
  4. Store the Aurora endpoint in AWS Systems Manager Parameter Store. Create an Amazon EventBridge event that detects the database failure and runs an AWS Lambda function to promote the replica instance and update the endpoint URL stored in AWS Systems Manager Parameter Store. Code the application to reload the endpoint from Parameter Store if a database connection fails.

Answer(s): D
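Answer D can be sketched as an EventBridge rule that invokes a Lambda function. The identifiers, parameter name, and event-category filter below are assumptions for illustration; verify the exact RDS event categories before relying on the pattern:

```python
# Placeholders for this sketch.
REPLICA_CLUSTER_ID = "app-replica-cluster"
ENDPOINT_PARAM_NAME = "/app/db/endpoint"

# EventBridge pattern for Aurora cluster failure events (the category
# name "failure" is an assumption -- confirm against the RDS event list).
FAILURE_EVENT_PATTERN = {
    "source": ["aws.rds"],
    "detail-type": ["RDS DB Cluster Event"],
    "detail": {"EventCategories": ["failure"]},
}

def lambda_handler(event, context):
    """Promote the cross-Region replica and store its endpoint in SSM."""
    import boto3  # imported lazily so the module loads without the SDK
    rds = boto3.client("rds")
    ssm = boto3.client("ssm")
    rds.promote_read_replica_db_cluster(DBClusterIdentifier=REPLICA_CLUSTER_ID)
    cluster = rds.describe_db_clusters(
        DBClusterIdentifier=REPLICA_CLUSTER_ID)["DBClusters"][0]
    ssm.put_parameter(Name=ENDPOINT_PARAM_NAME, Value=cluster["Endpoint"],
                      Type="String", Overwrite=True)
    return cluster["Endpoint"]
```

The application side of the design simply re-reads `ENDPOINT_PARAM_NAME` from Parameter Store whenever a database connection fails.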



A company hosts its staging website using an Amazon EC2 instance backed with Amazon EBS storage. The company wants to recover quickly with minimal data loss in the event of network connectivity issues or power failures on the EC2 instance.


Which solution will meet these requirements?

  1. Add the instance to an EC2 Auto Scaling group with the minimum, maximum, and desired capacity set to 1.
  2. Add the instance to an EC2 Auto Scaling group with a lifecycle hook to detach the EBS volume when the EC2 instance shuts down or terminates.
  3. Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric and select the EC2 action to recover the instance.
  4. Create an Amazon CloudWatch alarm for the StatusCheckFailed_Instance metric and select the EC2 action to reboot the instance.

Answer(s): C
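Answer C works because `StatusCheckFailed_System` indicates an underlying AWS infrastructure problem (such as loss of network or power), and the EC2 recover action moves the instance to healthy hardware while preserving its instance ID, private IP addresses, and attached EBS volumes. A sketch of the parameters that would be passed to `cloudwatch_client.put_metric_alarm(**RECOVER_ALARM)`; the instance ID and Region in the action ARN are placeholders:

```python
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

RECOVER_ALARM = {
    "AlarmName": "ec2-system-check-recover",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",
    "Dimensions": [{"Name": "InstanceId", "Value": INSTANCE_ID}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 2,
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    # Built-in EC2 recover action (Region in the ARN is an example).
    "AlarmActions": ["arn:aws:automate:us-east-1:ec2:recover"],
}
```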



A company wants to use AWS development tools to replace its current bash deployment scripts. The company currently deploys a LAMP application to a group of Amazon EC2 instances behind an Application Load Balancer (ALB). During the deployments, the company unit tests the committed application, stops and starts services, unregisters and re-registers instances with the load balancer, and updates file permissions. The company wants to maintain the same deployment functionality through the shift to using AWS services.

Which solution will meet these requirements?

  1. Use AWS CodeBuild to test the application. Use bash scripts invoked by AWS CodeDeploy's appspec.yml
    file to restart services, and deregister and register instances with the ALB. Use the appspec.yml file to update file permissions without a custom script.
  2. Use AWS CodePipeline to move the application from the AWS CodeCommit repository to AWS CodeDeploy. Use CodeDeploy's deployment group to test the application, unregister and re-register instances with the ALB, and restart services. Use the appspec.yml file to update file permissions without a custom script.
  3. Use AWS CodePipeline to move the application source code from the AWS CodeCommit repository to AWS CodeDeploy. Use CodeDeploy to test the application. Use CodeDeploy's appspec.yml file to restart services and update permissions without a custom script. Use AWS CodeBuild to unregister and re-register instances with the ALB.
  4. Use AWS CodePipeline to trigger AWS CodeBuild to test the application. Use bash scripts invoked by AWS CodeDeploy's appspec.yml file to restart services. Unregister and re-register the instances in the AWS CodeDeploy deployment group with the ALB. Update the appspec.yml file to update file permissions without a custom script.

Answer(s): D
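Answer D maps each bash-script task onto a CodeDeploy feature: bash scripts in lifecycle hooks restart services, the `permissions` section sets file permissions declaratively, and the deployment group's load-balancer integration handles ALB deregistration and re-registration, so no custom script is needed for either of those last two. A minimal appspec.yml sketch (script paths, destination directory, and owner are hypothetical), held in a string for illustration:

```python
# Sketch of an appspec.yml for an EC2/on-premises CodeDeploy deployment.
APPSPEC = """\
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
permissions:
  - object: /var/www/html
    owner: www-data
    mode: 644
hooks:
  ApplicationStop:
    - location: scripts/stop_services.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_services.sh
      timeout: 300
"""
```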



A company runs an application across Amazon EC2 instances and on-premises servers. A DevOps engineer needs to standardize patching across both environments. Company policy dictates that patching only happens during non-business hours.

Which combination of actions will meet these requirements? (Choose three.)

  1. Add the physical machines into AWS Systems Manager using Systems Manager Hybrid Activations.
  2. Attach an IAM role to the EC2 instances, allowing them to be managed by AWS Systems Manager.
  3. Create IAM access keys for the on-premises machines to interact with AWS Systems Manager.
  4. Run an AWS Systems Manager Automation document to patch the systems every hour.
  5. Use Amazon EventBridge scheduled events to schedule a patch window.
  6. Use AWS Systems Manager Maintenance Windows to schedule a patch window.

Answer(s): A,B,F
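With the EC2 instances and on-premises machines registered as managed nodes (options A and B), option F schedules patching inside business-policy hours via a maintenance window. A sketch of the parameters that would be passed to `ssm_client.create_maintenance_window(**PATCH_WINDOW)`; the cron expression (22:00 UTC nightly) is an example of a non-business-hours schedule and should be adjusted to the company's time zone:

```python
PATCH_WINDOW = {
    "Name": "nightly-patching",
    # SSM cron format: cron(minute hour day-of-month month day-of-week year)
    "Schedule": "cron(0 22 ? * * *)",
    "Duration": 3,   # window stays open for 3 hours
    "Cutoff": 1,     # stop starting new tasks 1 hour before the window closes
    "AllowUnassociatedTargets": False,
}
```

Targets and a Run Command task (for example, the patch-baseline document) are then registered against the window so patching runs only inside it.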



A company has chosen AWS to host a new application. The company needs to implement a multi-account strategy. A DevOps engineer creates a new AWS account and an organization in AWS Organizations. The DevOps engineer also creates the OU structure for the organization and sets up a landing zone by using AWS Control Tower.

The DevOps engineer must implement a solution that automatically deploys resources for new accounts that users create through AWS Control Tower Account Factory. When a user creates a new account, the solution must apply AWS CloudFormation templates and SCPs that are customized for the OU or the account to automatically deploy all the resources that are attached to the account. All the OUs are enrolled in AWS Control Tower.

Which solution will meet these requirements in the MOST automated way?

  1. Use AWS Service Catalog with AWS Control Tower. Create portfolios and products in AWS Service Catalog. Grant granular permissions to provision these resources. Deploy SCPs by using the AWS CLI and JSON documents.
  2. Deploy CloudFormation stack sets by using the required templates. Enable automatic deployment. Deploy
    stack instances to the required accounts. Deploy a CloudFormation stack set to the organization's management account to deploy SCPs.
  3. Create an Amazon EventBridge rule to detect the CreateManagedAccount event. Configure AWS Service Catalog as the target to deploy resources to any new accounts. Deploy SCPs by using the AWS CLI and JSON documents.
  4. Deploy the Customizations for AWS Control Tower (CfCT) solution. Use an AWS CodeCommit repository as the source. In the repository, create a custom package that includes the CloudFormation templates and the SCP JSON documents.

Answer(s): D
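The CfCT solution drives deployments from a manifest file in the source repository. A minimal manifest.yaml sketch, held in a string for illustration; the OU name, file paths, and resource names are placeholders, and the schema version shown is the 2021-03-15 format used by recent CfCT releases:

```python
# Sketch of a Customizations for AWS Control Tower (CfCT) manifest.
MANIFEST = """\
region: us-east-1
version: 2021-03-15
resources:
  - name: baseline-resources
    resource_file: templates/baseline.yaml
    deploy_method: stack_set
    deployment_targets:
      organizational_units:
        - Workloads
  - name: deny-root-scp
    resource_file: policies/deny-root.json
    deploy_method: scp
    deployment_targets:
      organizational_units:
        - Workloads
"""
```

When Account Factory creates an account in a targeted OU, CfCT applies the stack sets and SCPs automatically, which is why this option is the most automated of the four.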






Amazon AWS Certified DevOps Engineer - Professional DOP-C02: Skills Tested, Job Roles, and Study Tips

The AWS Certified DevOps Engineer - Professional DOP-C02 certification is designed for individuals who perform a DevOps engineer role with two or more years of experience provisioning, operating, and managing AWS environments. This certification validates technical expertise in implementing continuous delivery systems and methodologies on the AWS platform, as well as automating security controls, governance processes, and compliance validation. Organizations hiring for cloud-native roles, such as DevOps Engineers, Site Reliability Engineers, and Cloud Architects, prioritize this credential because it demonstrates a candidate's ability to design and maintain resilient, scalable, and secure infrastructure. Achieving this Amazon certification signifies that a professional possesses the advanced skills required to manage complex, multi-account AWS environments effectively.

What the AWS Certified DevOps Engineer - Professional DOP-C02 Exam Covers

The exam evaluates a candidate's proficiency across several critical domains, including SDLC Automation, Configuration Management and IaC, Resilient Cloud Solutions, Monitoring and Logging, Incident and Event Response, and Security and Compliance. These topics are not tested in isolation; rather, the exam presents complex, scenario-based practice questions that require you to synthesize knowledge across these areas to solve real-world operational challenges. For instance, you might be asked to design a CI/CD pipeline that integrates automated security testing, which touches upon both SDLC Automation and Security and Compliance. By engaging with our practice questions, you will encounter scenarios that mirror the multifaceted nature of these domains, ensuring you are prepared for the integrated way AWS tests these concepts. Mastering these topics requires a deep understanding of how various AWS services interact to support automated, secure, and resilient software delivery lifecycles.

Among these domains, Resilient Cloud Solutions often presents the most significant challenge for candidates because it requires a comprehensive understanding of high availability, disaster recovery, and fault tolerance across distributed systems. You must demonstrate the ability to architect solutions that can withstand service failures while maintaining performance and data integrity, which often involves complex configurations of AWS services like Auto Scaling, Elastic Load Balancing, and multi-region deployments. This area demands more than theoretical knowledge; it requires the ability to analyze trade-offs between cost, performance, and availability in high-pressure scenarios. Candidates must be prepared to evaluate architectural diagrams and operational requirements to select the most resilient design patterns that align with AWS best practices.

Are These Real AWS Certified DevOps Engineer - Professional DOP-C02 Exam Questions?

Our practice questions are sourced and verified by the community, consisting of IT professionals and recent test-takers who have sat for the actual exam. Because these questions are community-verified, they reflect the style, complexity, and focus areas that appear on the real exam, providing a reliable way to gauge your readiness. If you've been searching for AWS Certified DevOps Engineer - Professional DOP-C02 exam dumps or braindump files, our community-verified practice questions offer something more valuable: each question is verified and explained by IT professionals who recently passed the exam. We do not provide leaked or confidential content, as our goal is to help you understand the underlying concepts rather than memorize answers. This approach ensures that you are prepared for the logic and reasoning required on the actual certification exam.

The community verification process is central to the reliability of our study materials, as it involves active participation from users who have recently completed their certification journey. When a question is posted, users discuss the answer choices, debate the technical nuances of the scenario, and flag any inaccuracies based on their recent exam experience. This collaborative environment allows for the refinement of explanations, ensuring that the reasoning provided is accurate and aligned with current AWS documentation. By engaging with these discussions, you gain insights into how experienced professionals approach complex problems, which is far more effective than relying on static, unverified sources.

How to Prepare for the AWS Certified DevOps Engineer - Professional DOP-C02 Exam

Effective exam preparation requires a combination of hands-on experience and a deep understanding of AWS architectural principles. You should spend significant time in a sandbox or real AWS environment, building and breaking infrastructure to see how services like AWS CloudFormation, AWS CodePipeline, and AWS Systems Manager behave under different conditions. Rely heavily on official Amazon documentation and whitepapers, as these are the definitive sources of truth for the services covered in the exam. Every practice question includes a free AI Tutor explanation that breaks down the reasoning behind the correct answer, so you understand the concept, not just the answer. Creating a consistent study schedule that allocates time for both reading and practical application is essential for retaining the vast amount of information required for this professional-level certification.

A common mistake candidates make is relying on rote memorization of facts rather than developing the ability to apply knowledge to scenario-based questions. The DOP-C02 exam is heavily focused on situational judgment, meaning you must understand not just what a service does, but when and why to use it over an alternative in a specific context. To avoid this, focus on understanding the "why" behind every architectural decision in your practice sessions. Additionally, many candidates struggle with time management during the exam; practicing with timed sets of questions will help you build the stamina and speed necessary to complete the exam within the allotted time frame.

What to Expect on Exam Day

On the day of your exam, you will encounter a series of questions designed to test your ability to apply AWS knowledge in professional scenarios. The exam typically consists of multiple-choice and multiple-response questions, which may require you to select one or more correct answers based on the provided requirements. These questions are often presented as complex, multi-paragraph scenarios that describe a business problem, a set of constraints, and a desired outcome. You will take the exam at a Pearson VUE testing center or via an online proctored environment, where strict security protocols are enforced to maintain the integrity of the Amazon certification process. Being familiar with the interface and the style of questioning beforehand is a critical component of your overall exam prep strategy.

Who Should Use These AWS Certified DevOps Engineer - Professional DOP-C02 Practice Questions

These practice questions are intended for experienced DevOps engineers, cloud architects, and systems administrators who are ready to validate their expertise at a professional level. Ideally, you should have at least two years of hands-on experience managing AWS environments before attempting this certification exam. This exam is a significant step for professionals looking to demonstrate their capability to lead complex DevOps initiatives and manage large-scale, automated cloud infrastructure. By using these resources, you are engaging in a structured exam preparation process that helps identify knowledge gaps and reinforces your understanding of AWS best practices. The career impact of passing this exam is substantial, as it serves as a recognized benchmark of your ability to handle the operational demands of modern cloud-native organizations.

To get the most out of these practice questions, treat each one as a learning opportunity rather than a simple test. Do not just read the correct answer; engage with the AI Tutor explanation to understand the underlying logic, and read the community discussions to see how others interpreted the scenario. If you get a question wrong, flag it and revisit it later to ensure you have mastered the concept, rather than just memorizing the correction. This iterative process of testing, reviewing, and refining your knowledge is the most effective way to prepare for the rigors of the actual exam. Browse the questions above and use the community discussions and AI Tutor to build real exam confidence.

