Amazon AWS DevOps Engineer Professional Exam
AWS DevOps Engineer - Professional (DOP-C01) (Page 3)

Updated On: 12-Feb-2026

A company has many applications. Different teams in the company developed the applications by using multiple languages and frameworks. The applications run on premises and on different servers with different operating systems. Each team has its own release protocol and process. The company wants to reduce the complexity of the release and maintenance of these applications. The company is migrating its technology stacks, including these applications, to AWS. The company wants centralized control of source code, a consistent and automatic delivery pipeline, and as few maintenance tasks as possible on the underlying infrastructure.
What should a DevOps engineer do to meet these requirements?

  A. Create one AWS CodeCommit repository for all applications. Put each application's code in a different branch. Merge the branches, and use AWS CodeBuild to build the applications. Use AWS CodeDeploy to deploy the applications to one centralized application server.
  B. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time. Use AWS CodeDeploy to deploy the applications to one centralized application server.
  C. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time to create one AMI for each server. Use AWS CloudFormation StackSets to automatically provision and decommission Amazon EC2 fleets by using these AMIs.
  D. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build one Docker image for each application in Amazon Elastic Container Registry (Amazon ECR). Use AWS CodeDeploy to deploy the applications to Amazon Elastic Container Service (Amazon ECS) on infrastructure that AWS Fargate manages.

Answer(s): D


Reference:

https://towardsdatascience.com/ci-cd-logical-and-practical-approach-to-build-four-step-pipeline-on-aws-3f54183068ec
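For the ECS-on-Fargate deployment in option D, CodeDeploy reads an AppSpec file that points at the new task definition built from the image pushed to Amazon ECR. A minimal sketch is below; the task definition ARN, container name, and port are hypothetical placeholders:

```yaml
# AppSpec for CodeDeploy's Amazon ECS compute platform (blue/green deployment).
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        # Hypothetical task definition referencing the image built into ECR
        TaskDefinition: "arn:aws:ecs:us-east-1:111122223333:task-definition/my-app:1"
        LoadBalancerInfo:
          ContainerName: "my-app"
          ContainerPort: 8080
        PlatformVersion: "LATEST"
```

Because CodeDeploy shifts load balancer traffic from the old task set to the new one, every team gets the same pipeline shape regardless of language or framework: push code, build an image, deploy, with no servers to maintain.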



A DevOps engineer is developing an application for a company. The application needs to persist files to Amazon S3. The application needs to upload files with different security classifications that the company defines. These classifications include confidential, private, and public. Files that have a confidential classification must not be viewable by anyone other than the user who uploaded them. The application uses the IAM role of the user to call the S3 API operations.
The DevOps engineer has modified the application to add a DataClassification tag with the value of confidential and an Owner tag with the uploading user's ID to each confidential object that is uploaded to Amazon S3. Which set of additional steps must the DevOps engineer take to meet the company's requirements?

  A. Modify the S3 bucket's ACL to grant bucket-owner-read access to the uploading user's IAM role. Create an IAM policy that grants s3:GetObject operations on the S3 bucket when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Attach the policy to the IAM roles for users who require access to the S3 bucket.
  B. Modify the S3 bucket policy to allow the s3:GetObject action when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
  C. Modify the S3 bucket policy to allow the s3:GetObject action when aws:ResourceTag/DataClassification equals confidential, and aws:RequestTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
  D. Modify the S3 bucket's ACL to grant authenticated-read access when aws:ResourceTag/DataClassification equals confidential, and s3:ExistingObjectTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.

Answer(s): B
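The per-owner restriction can be expressed with the s3:ExistingObjectTag/&lt;key&gt; condition key, which S3 evaluates against the tags already on the requested object. A sketch of such a policy statement, attached to the users' IAM roles; the bucket name is a hypothetical placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OwnerOnlyConfidentialRead",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-classified-bucket/*",
      "Condition": {
        "StringEquals": {
          "s3:ExistingObjectTag/DataClassification": "confidential",
          "s3:ExistingObjectTag/Owner": "${aws:userid}"
        }
      }
    }
  ]
}
```

Because ${aws:userid} is resolved at request time, the same statement works for every role: a read succeeds only when the Owner tag written at upload time matches the caller's own user ID.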



A company has developed an AWS Lambda function that handles orders received through an API. The company is using AWS CodeDeploy to deploy the Lambda function as the final stage of a CI/CD pipeline.
A DevOps Engineer has noticed intermittent failures of the ordering API for a few seconds after deployment. After some investigation, the DevOps Engineer believes the failures occur because database changes have not fully propagated before the Lambda function begins executing.
How should the DevOps Engineer overcome this?

  A. Add a BeforeAllowTraffic hook to the AppSpec file that tests and waits for any necessary database changes before traffic can flow to the new version of the Lambda function.
  B. Add an AfterAllowTraffic hook to the AppSpec file that forces traffic to wait for any pending database changes before allowing the new version of the Lambda function to respond.
  C. Add a BeforeInstall hook to the AppSpec file that tests and waits for any necessary database changes before deploying the new version of the Lambda function.
  D. Add a ValidateService hook to the AppSpec file that inspects incoming traffic and rejects the payload if dependent services, such as the database, are not yet ready.

Answer(s): A
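In a Lambda-platform AppSpec, each hook names another Lambda function that CodeDeploy invokes at that lifecycle event; the hook function must report success before traffic shifting proceeds. A sketch, with hypothetical function names and version numbers:

```yaml
# AppSpec for CodeDeploy's AWS Lambda compute platform.
version: 0.0
Resources:
  - orderHandler:
      Type: AWS::Lambda::Function
      Properties:
        Name: "order-handler"   # hypothetical function name
        Alias: "live"
        CurrentVersion: "3"
        TargetVersion: "4"
Hooks:
  # Invoked before any traffic is shifted to TargetVersion; the hook
  # function should poll until the database changes have propagated
  # and only then signal success back to CodeDeploy.
  - BeforeAllowTraffic: "checkDatabaseReady"
```

If the hook reports failure (or times out), CodeDeploy never routes traffic to the new version, which eliminates the window where requests hit a function whose schema changes are still propagating.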



A software company wants to automate the build process for a project where the code is stored in GitHub. When the repository is updated, source code should be compiled, tested, and pushed to Amazon S3.
Which combination of steps would address these requirements? (Choose three.)

  A. Add a buildspec.yml file to the source code with build instructions.
  B. Configure a GitHub webhook to trigger a build every time a code change is pushed to the repository.
  C. Create an AWS CodeBuild project with GitHub as the source repository.
  D. Create an AWS CodeDeploy application with the Amazon EC2/On-Premises compute platform.
  E. Create an AWS OpsWorks deployment with the install dependencies command.
  F. Provision an Amazon EC2 instance to perform the build.

Answer(s): A,B,C
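The three chosen steps fit together as follows: the CodeBuild project sources from GitHub (C), the webhook starts a build on every push (B), and a buildspec.yml in the repository tells CodeBuild how to compile and test (A). A minimal sketch of such a buildspec; the Node.js runtime and commands are hypothetical, and the artifacts section uploads the output to the S3 location configured on the CodeBuild project:

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18          # hypothetical runtime for this project
  build:
    commands:
      - npm ci            # install dependencies
      - npm test          # run the test suite
      - npm run build     # compile the source
artifacts:
  files:
    - '**/*'
  base-directory: dist    # ship build output to the project's S3 artifact bucket
```

No EC2 instance or deployment service is needed for this requirement: CodeBuild provides the managed build environment, and its artifact setting handles the push to Amazon S3.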



An online retail company based in the United States plans to expand its operations to Europe and Asia in the next six months. Its product currently runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. All data is stored in an Amazon Aurora database instance.
When the product is deployed in multiple Regions, the company wants a single product catalog across all Regions, but for compliance purposes, its customer information and purchases must be kept in each Region.
How should the company meet these requirements with the LEAST amount of application changes?

  A. Use Amazon Redshift for the product catalog and Amazon DynamoDB tables for the customer information and purchases.
  B. Use Amazon DynamoDB global tables for the product catalog and regional tables for the customer information and purchases.
  C. Use Aurora with read replicas for the product catalog and additional local Aurora instances in each Region for the customer information and purchases.
  D. Use Aurora for the product catalog and Amazon DynamoDB global tables for the customer information and purchases.

Answer(s): C





