Amazon AWS Certified DevOps Engineer - Professional DOP-C02 Exam Questions

Updated On: 23-Apr-2026

A company has multiple AWS accounts. The company uses AWS IAM Identity Center (AWS Single Sign-On) that is integrated with AWS Toolkit for Microsoft Azure DevOps. The attributes for access control feature is enabled in IAM Identity Center.

The attribute mapping list contains two entries. The department key is mapped to ${path:enterprise.department}. The costCenter key is mapped to ${path:enterprise.costCenter}.

All existing Amazon EC2 instances have a department tag that corresponds to three company departments (d1, d2, d3). A DevOps engineer must create policies based on the matching attributes. The policies must minimize administrative effort and must grant each Azure AD user access to only the EC2 instances that are tagged with the user's respective department name.

Which condition key should the DevOps engineer include in the custom permissions policies to meet these requirements?





Answer(s): C
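The answer choices are not reproduced above, but the standard attribute-based access control (ABAC) pattern for this scenario is a condition that compares the instance's `department` tag with the `department` attribute mapped onto the principal's session. A minimal sketch of such a policy statement, assuming the `aws:ResourceTag`/`aws:PrincipalTag` pairing (action list and scope are illustrative):

```python
# Sketch of an ABAC policy statement for the scenario above (an assumed
# pattern, since the original answer choices are not shown): allow EC2
# actions only when the instance's department tag matches the department
# attribute passed in from the identity provider via attributes for
# access control.
abac_statement = {
    "Effect": "Allow",
    "Action": ["ec2:StartInstances", "ec2:StopInstances"],
    "Resource": "*",
    "Condition": {
        # aws:ResourceTag/department is read from the EC2 instance's tags;
        # aws:PrincipalTag/department is populated from the mapped attribute.
        "StringEquals": {
            "aws:ResourceTag/department": "${aws:PrincipalTag/department}"
        }
    },
}
```

Because the comparison is dynamic, one policy covers all three departments (d1, d2, d3) with no per-department statements, which is what keeps administrative effort minimal.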



A company hosts a security auditing application in an AWS account. The auditing application uses an IAM role to access other AWS accounts. All the accounts are in the same organization in AWS Organizations.

A recent security audit revealed that users in the audited AWS accounts could modify or delete the auditing application's IAM role. The company needs to prevent any modification to the auditing application's IAM role by any entity other than a trusted administrator IAM role.

Which solution will meet these requirements?

  1. Create an SCP that includes a Deny statement for changes to the auditing application's IAM role. Include a condition that allows the trusted administrator IAM role to make changes. Attach the SCP to the root of the organization.
  2. Create an SCP that includes an Allow statement for changes to the auditing application's IAM role by the trusted administrator IAM role. Include a Deny statement for changes by all other IAM principals. Attach the SCP to the IAM service in each AWS account where the auditing application has an IAM role.
  3. Create an IAM permissions boundary that includes a Deny statement for changes to the auditing application's IAM role. Include a condition that allows the trusted administrator IAM role to make changes.
    Attach the permissions boundary to the audited AWS accounts.
  4. Create an IAM permissions boundary that includes a Deny statement for changes to the auditing application's IAM role. Include a condition that allows the trusted administrator IAM role to make changes.
    Attach the permissions boundary to the auditing application's IAM role in the AWS accounts.

Answer(s): A
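The SCP in option 1 can be sketched as a Deny on IAM write actions against the auditing role, with a condition that exempts only the trusted administrator role. Role names and the action list are placeholders; an SCP attached at the organization root applies to every member account:

```python
# Sketch of the SCP from option 1 (role names are hypothetical): deny IAM
# write actions on the auditing application's role for every principal
# except the trusted administrator role.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "iam:AttachRolePolicy", "iam:DeleteRole", "iam:DeleteRolePolicy",
            "iam:DetachRolePolicy", "iam:PutRolePolicy",
            "iam:UpdateAssumeRolePolicy",
        ],
        "Resource": "arn:aws:iam::*:role/AuditingAppRole",  # placeholder name
        "Condition": {
            # The Deny applies unless the caller is the trusted admin role.
            "ArnNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::*:role/TrustedAdminRole"
            }
        },
    }],
}
```

A permissions boundary (options 3 and 4) cannot achieve this: boundaries limit what the role they are attached to can do, not what other principals can do to that role.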



A company has an on-premises application that is written in Go. A DevOps engineer must move the application to AWS. The company's development team wants to enable blue/green deployments and perform A/B testing.

Which solution will meet these requirements?

  1. Deploy the application on an Amazon EC2 instance, and create an AMI of the instance. Use the AMI to create an automatic scaling launch configuration that is used in an Auto Scaling group. Use Elastic Load Balancing to distribute traffic. When changes are made to the application, a new AMI will be created, which will initiate an EC2 instance refresh.
  2. Use Amazon Lightsail to deploy the application. Store the application in a zipped format in an Amazon S3 bucket. Use this zipped version to deploy new versions of the application to Lightsail. Use Lightsail deployment options to manage the deployment.
  3. Use AWS CodeArtifact to store the application code. Use AWS CodeDeploy to deploy the application to a fleet of Amazon EC2 instances. Use Elastic Load Balancing to distribute the traffic to the EC2 instances.
    When making changes to the application, upload a new version to CodeArtifact and create a new CodeDeploy deployment.
  4. Use AWS Elastic Beanstalk to host the application. Store a zipped version of the application in Amazon S3.
    Use that location to deploy new versions of the application. Use Elastic Beanstalk to manage the deployment options.

Answer(s): D
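Elastic Beanstalk implements blue/green by deploying the new application version to a clone environment and then swapping the environments' CNAMEs. This sketch only builds the request parameters for that swap; the environment names are hypothetical, and the actual call would be `boto3.client("elasticbeanstalk").swap_environment_cnames(**swap_params)`:

```python
# Blue/green on Elastic Beanstalk: deploy the new version to a second
# environment, verify it, then swap CNAMEs so traffic shifts to it.
# Environment names below are placeholders.
swap_params = {
    "SourceEnvironmentName": "my-app-blue",        # currently serving traffic
    "DestinationEnvironmentName": "my-app-green",  # running the new version
}
```

Because the swap is a DNS-level change, rolling back is a second swap, which is what makes this approach attractive for A/B testing and blue/green deployments.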



A developer is maintaining a fleet of 50 Amazon EC2 Linux servers. The servers are part of an Amazon EC2 Auto Scaling group and use Elastic Load Balancing for load balancing.

Occasionally, some application servers are terminated after failing ELB HTTP health checks. The developer would like to perform a root cause analysis, but the servers are terminated before the application logs can be accessed.

How can log collection be automated?

  1. Use Auto Scaling lifecycle hooks to put instances in a Pending:Wait state. Create an Amazon CloudWatch alarm for EC2 Instance Terminate Successful and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
  2. Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an AWS Config rule for EC2 Instance-terminate Lifecycle Action and trigger a step function that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
  3. Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch subscription filter for EC2 Instance Terminate Successful and trigger a CloudWatch agent that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
  4. Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon EventBridge rule for EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.

Answer(s): D
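The EventBridge rule in option 4 matches the lifecycle-hook event that Auto Scaling emits while an instance sits in the Terminating:Wait state. A sketch of that event pattern (the Auto Scaling group name is a placeholder):

```python
# Sketch of the EventBridge event pattern from option 4: matches the
# lifecycle-hook event emitted while an instance is held in
# Terminating:Wait. The group name is a placeholder.
event_pattern = {
    "source": ["aws.autoscaling"],
    "detail-type": ["EC2 Instance-terminate Lifecycle Action"],
    "detail": {"AutoScalingGroupName": ["my-asg"]},
}

# The Lambda target would run an SSM Run Command document to collect the
# logs, upload them to S3, and then release the instance by calling
# autoscaling CompleteLifecycleAction.
```

The hook is what buys the time window: until the Lambda function completes the lifecycle action (or the hook times out), the instance is held and its logs remain reachable.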



A company has an organization in AWS Organizations. The organization includes workload accounts that contain enterprise applications. The company centrally manages users from an operations account. No users can be created in the workload accounts. The company recently added an operations team and must provide the operations team members with administrator access to each workload account.

Which combination of actions will provide this access? (Choose three.)

  1. Create a SysAdmin role in the operations account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the workload accounts.
  2. Create a SysAdmin role in each workload account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the operations account.
  3. Create an Amazon Cognito identity pool in the operations account. Attach the SysAdmin role as an authenticated role.
  4. In the operations account, create an IAM user for each operations team member.
  5. In the operations account, create an IAM user group that is named SysAdmins. Add an IAM policy that allows the sts:AssumeRole action for the SysAdmin role in each workload account. Add all operations team members to the group.
  6. Create an Amazon Cognito user pool in the operations account. Create an Amazon Cognito user for each operations team member.

Answer(s): B,D,E
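The two policies behind options 2 and 5 can be sketched as follows (account IDs and role names are placeholders). In each workload account, the SysAdmin role trusts the operations account; in the operations account, the SysAdmins group policy lets members assume that role everywhere:

```python
# Trust policy on the SysAdmin role in each workload account (option 2):
# allows principals in the operations account to assume it. The account ID
# is a placeholder.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # operations account
        "Action": "sts:AssumeRole",
    }],
}

# Policy on the SysAdmins IAM user group in the operations account
# (option 5): lets group members assume the SysAdmin role in any
# workload account.
group_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::*:role/SysAdmin",
    }],
}
```

Option 4 supplies the third piece: the IAM users themselves live only in the operations account, which satisfies the requirement that no users exist in the workload accounts.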



A company has multiple accounts in an organization in AWS Organizations. The company's SecOps team needs to receive an Amazon Simple Notification Service (Amazon SNS) notification if any account in the organization turns off the Block Public Access feature on an Amazon S3 bucket. A DevOps engineer must implement this change without affecting the operation of any AWS accounts. The implementation must ensure that individual member accounts in the organization cannot turn off the notification.

Which solution will meet these requirements?

  1. Designate an account to be the delegated Amazon GuardDuty administrator account. Turn on GuardDuty for all accounts across the organization. In the GuardDuty administrator account, create an SNS topic.
    Subscribe the SecOps team's email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for GuardDuty findings and a target of the SNS topic.
  2. Create an AWS CloudFormation template that creates an SNS topic and subscribes the SecOps team's email address to the SNS topic. In the template, include an Amazon EventBridge rule that uses an event pattern of CloudTrail activity for s3:PutBucketPublicAccessBlock and a target of the SNS topic. Deploy the stack to every account in the organization by using CloudFormation StackSets.
  3. Turn on AWS Config across the organization. In the delegated administrator account, create an SNS topic.
    Subscribe the SecOps team's email address to the SNS topic. Deploy a conformance pack that uses the s3-bucket-level-public-access-prohibited AWS Config managed rule in each account and uses an AWS Systems Manager document to publish an event to the SNS topic to notify the SecOps team.
  4. Turn on Amazon Inspector across the organization. In the Amazon Inspector delegated administrator account, create an SNS topic. Subscribe the SecOps team's email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for public network exposure of the S3 bucket and publishes an event to the SNS topic to notify the SecOps team.

Answer(s): C
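The heart of option 3 is the `s3-bucket-level-public-access-prohibited` managed rule deployed through a conformance pack, which member accounts cannot modify or delete. A sketch of the relevant conformance-pack resource (the logical resource name is a placeholder):

```python
# Sketch of the AWS Config rule resource inside the conformance pack from
# option 3 (logical name is a placeholder): the managed rule flags buckets
# whose bucket-level Block Public Access settings are disabled.
conformance_pack_template = {
    "Resources": {
        "S3BucketLevelPublicAccessProhibited": {
            "Type": "AWS::Config::ConfigRule",
            "Properties": {
                "ConfigRuleName": "s3-bucket-level-public-access-prohibited",
                "Source": {
                    "Owner": "AWS",
                    "SourceIdentifier": "S3_BUCKET_LEVEL_PUBLIC_ACCESS_PROHIBITED",
                },
            },
        }
    }
}
```

Deploying from the delegated administrator account is what prevents individual member accounts from turning the notification off, unlike the per-account StackSets approach in option 2.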



A company has migrated its container-based applications to Amazon EKS and wants to establish automated email notifications. Each email address receives notifications for specific activities related to EKS components. The solution will include Amazon SNS topics and an AWS Lambda function that evaluates incoming log events and publishes messages to the correct SNS topic.

Which logging solution will support these requirements?

  1. Enable Amazon CloudWatch Logs to log the EKS components. Create a CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
  2. Enable Amazon CloudWatch Logs to log the EKS components. Create CloudWatch Logs Insights queries linked to Amazon EventBridge events that invoke Lambda.
  3. Enable Amazon S3 logging for the EKS components. Configure an Amazon CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
  4. Enable Amazon S3 logging for the EKS components. Configure S3 PUT Object event notifications with AWS Lambda as the destination.

Answer(s): A
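Option 1's per-component subscription filter can be sketched as the parameters for CloudWatch Logs `PutSubscriptionFilter` (log group, filter name, and Lambda ARN below are placeholders). The actual call would be `boto3.client("logs").put_subscription_filter(**filter_params)`:

```python
# Sketch of one per-component subscription filter from option 1 (log group,
# filter pattern, and function ARN are placeholders). An empty filter
# pattern forwards every log event to the Lambda function, which then
# routes messages to the correct SNS topic.
filter_params = {
    "logGroupName": "/aws/eks/my-cluster/kube-apiserver",
    "filterName": "kube-apiserver-to-lambda",
    "filterPattern": "",
    "destinationArn": "arn:aws:lambda:us-east-1:111111111111:function:eks-log-router",
}
```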



A company is implementing an Amazon Elastic Container Service (Amazon ECS) cluster to run its workload. The company's architecture will run multiple ECS services on the cluster. The architecture includes an Application Load Balancer on the front end and uses multiple target groups to route traffic.

A DevOps engineer must collect application and access logs. The DevOps engineer then needs to send the logs to an Amazon S3 bucket for near-real-time analysis.

Which combination of steps must the DevOps engineer take to meet these requirements? (Choose three.)

  1. Download the Amazon CloudWatch Logs container instance from AWS. Configure this instance as a task.
    Update the application service definitions to include the logging task.
  2. Install the Amazon CloudWatch Logs agent on the ECS instances. Change the logging driver in the ECS task definition to awslogs.
  3. Use Amazon EventBridge to schedule an AWS Lambda function that will run every 60 seconds and will run the Amazon CloudWatch Logs create-export-task command. Then point the output to the logging S3 bucket.
  4. Activate access logging on the ALB. Then point the ALB directly to the logging S3 bucket.
  5. Activate access logging on the target groups that the ECS services use. Then send the logs directly to the logging S3 bucket.
  6. Create an Amazon Kinesis Data Firehose delivery stream that has a destination of the logging S3 bucket.
    Then create an Amazon CloudWatch Logs subscription filter for Kinesis Data Firehose.

Answer(s): B,D,F
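The `awslogs` driver from option 2 is set per container in the task definition; from CloudWatch Logs, a subscription filter streams the events to Kinesis Data Firehose and on to S3 (option 6). A sketch of the container definition (image, names, and region are placeholders):

```python
# Sketch of the container log configuration from option 2 (names are
# placeholders): the awslogs driver ships container stdout/stderr to a
# CloudWatch Logs group, which a subscription filter can then stream to
# Kinesis Data Firehose for near-real-time delivery to S3.
container_definition = {
    "name": "web",
    "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/web:latest",
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/web",
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "web",
        },
    },
}
```

Access logging, by contrast, is an ALB feature that writes directly to S3 (option 4); target groups have no access-logging setting of their own, which is why option 5 is wrong.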






Amazon AWS Certified DevOps Engineer - Professional DOP-C02: Skills Tested, Job Roles, and Study Tips

The AWS Certified DevOps Engineer - Professional DOP-C02 certification is designed for individuals who perform a DevOps engineer role with two or more years of experience provisioning, operating, and managing AWS environments. This certification validates technical expertise in implementing continuous delivery systems and methodologies on the AWS platform, as well as automating security controls, governance processes, and compliance validation. Organizations hiring for cloud-native roles, such as DevOps Engineers, Site Reliability Engineers, and Cloud Architects, prioritize this credential because it demonstrates a candidate's ability to design and maintain resilient, scalable, and secure infrastructure. Achieving this Amazon certification signifies that a professional possesses the advanced skills required to manage complex, multi-account AWS environments effectively.

What the AWS Certified DevOps Engineer - Professional DOP-C02 Exam Covers

The exam evaluates a candidate's proficiency across several critical domains, including SDLC Automation, Configuration Management and IaC, Resilient Cloud Solutions, Monitoring and Logging, Incident and Event Response, and Security and Compliance. These topics are not tested in isolation; rather, the exam presents complex, scenario-based questions that require you to synthesize knowledge across these areas to solve real-world operational challenges. For instance, you might be asked to design a CI/CD pipeline that integrates automated security testing, which touches upon both SDLC Automation and Security and Compliance. By engaging with our practice questions, you will encounter scenarios that mirror the multifaceted nature of these domains, ensuring you are prepared for the integrated way AWS tests these concepts. Mastering these topics requires a deep understanding of how various AWS services interact to support automated, secure, and resilient software delivery lifecycles.

Among these domains, Resilient Cloud Solutions often presents the most significant challenge for candidates because it requires a comprehensive understanding of high availability, disaster recovery, and fault tolerance across distributed systems. You must demonstrate the ability to architect solutions that can withstand service failures while maintaining performance and data integrity, which often involves complex configurations of AWS services like Auto Scaling, Elastic Load Balancing, and multi-region deployments. This area demands more than theoretical knowledge; it requires the ability to analyze trade-offs between cost, performance, and availability in high-pressure scenarios. Candidates must be prepared to evaluate architectural diagrams and operational requirements to select the most resilient design patterns that align with AWS best practices.

Are These Real AWS Certified DevOps Engineer - Professional DOP-C02 Exam Questions?

Our practice questions are sourced and verified by the community, consisting of IT professionals and recent test-takers who have sat for the actual exam. Because these questions are community-verified, they reflect the style, complexity, and focus areas that appear on the real exam, providing a reliable way to gauge your readiness. If you've been searching for AWS Certified DevOps Engineer - Professional DOP-C02 exam dumps or braindump files, our community-verified practice questions offer something more valuable — each question is verified and explained by IT professionals who recently passed the exam. We do not provide leaked or confidential content, as our goal is to help you understand the underlying concepts rather than memorize answers. This approach ensures that you are prepared for the logic and reasoning required on the actual certification exam.

The community verification process is central to the reliability of our study materials, as it involves active participation from users who have recently completed their certification journey. When a question is posted, users discuss the answer choices, debate the technical nuances of the scenario, and flag any inaccuracies based on their recent exam experience. This collaborative environment allows for the refinement of explanations, ensuring that the reasoning provided is accurate and aligned with current AWS documentation. By engaging with these discussions, you gain insights into how experienced professionals approach complex problems, which is far more effective than relying on static, unverified sources.

How to Prepare for the AWS Certified DevOps Engineer - Professional DOP-C02 Exam

Effective exam preparation requires a combination of hands-on experience and a deep understanding of AWS architectural principles. You should spend significant time in a sandbox or real AWS environment, building and breaking infrastructure to see how services like AWS CloudFormation, AWS CodePipeline, and AWS Systems Manager behave under different conditions. Rely heavily on official Amazon documentation and whitepapers, as these are the definitive sources of truth for the services covered in the exam. Every practice question includes a free AI Tutor explanation that breaks down the reasoning behind the correct answer — so you understand the concept, not just the answer. Creating a consistent study schedule that allocates time for both reading and practical application is essential for retaining the vast amount of information required for this professional-level certification.

A common mistake candidates make is relying on rote memorization of facts rather than developing the ability to apply knowledge to scenario-based questions. The DOP-C02 exam is heavily focused on situational judgment, meaning you must understand not just what a service does, but when and why to use it over an alternative in a specific context. To avoid this, focus on understanding the "why" behind every architectural decision in your practice sessions. Additionally, many candidates struggle with time management during the exam; practicing with timed sets of questions will help you build the stamina and speed necessary to complete the exam within the allotted time frame.

What to Expect on Exam Day

On the day of your exam, you will encounter a series of questions designed to test your ability to apply AWS knowledge in professional scenarios. The exam typically consists of multiple-choice and multiple-response questions, which may require you to select one or more correct answers based on the provided requirements. These questions are often presented as complex, multi-paragraph scenarios that describe a business problem, a set of constraints, and a desired outcome. You will take the exam at a Pearson VUE testing center or via an online proctored environment, where strict security protocols are enforced to maintain the integrity of the Amazon certification process. Being familiar with the interface and the style of questioning beforehand is a critical component of your overall exam prep strategy.

Who Should Use These AWS Certified DevOps Engineer - Professional DOP-C02 Practice Questions

These practice questions are intended for experienced DevOps engineers, cloud architects, and systems administrators who are ready to validate their expertise at a professional level. Ideally, you should have at least two years of hands-on experience managing AWS environments before attempting this certification exam. This exam is a significant step for professionals looking to demonstrate their capability to lead complex DevOps initiatives and manage large-scale, automated cloud infrastructure. By using these resources, you are engaging in a structured exam preparation process that helps identify knowledge gaps and reinforces your understanding of AWS best practices. The career impact of passing this exam is substantial, as it serves as a recognized benchmark of your ability to handle the operational demands of modern cloud-native organizations.

To get the most out of these practice questions, treat each one as a learning opportunity rather than a simple test. Do not just read the correct answer; engage with the AI Tutor explanation to understand the underlying logic, and read the community discussions to see how others interpreted the scenario. If you get a question wrong, flag it and revisit it later to ensure you have mastered the concept, rather than just memorizing the correction. This iterative process of testing, reviewing, and refining your knowledge is the most effective way to prepare for the rigors of the actual exam. Browse the questions above and use the community discussions and AI Tutor to build real exam confidence.

Updated on: 27 April, 2026
