Amazon AWS DevOps Engineer Professional Exam
AWS DevOps Engineer - Professional (DOP-C01) (Page 20)

Updated On: 19-Jan-2026

A DevOps engineer is creating a CI/CD pipeline for an Amazon ECS service. The ECS container instances run behind an Application Load Balancer as the web tier of a three-tier application. An acceptance criterion for a successful deployment is the verification that the web tier can communicate with the database and middleware tiers of the application upon deployment.

How can this be accomplished in an automated fashion?

  1. Create a health check endpoint in the web application that tests connectivity to the data and middleware tiers. Use this endpoint as the health check URL for the load balancer.
  2. Create an approval step for the quality assurance team to validate connectivity. Reject changes in the pipeline if there is an issue with connecting to the dependent tiers.
  3. Use an Amazon RDS active connection count and an Amazon CloudWatch ELB metric to alarm on a significant change to the number of open connections.
  4. Use Amazon Route 53 health checks to detect issues with the web service and roll back the CI/CD pipeline if there is an error.

Answer(s): A
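The "deep" health check in option A can be sketched as a handler that probes both downstream tiers and maps the result to an HTTP status the load balancer understands. This is a minimal illustration only; `check_db` and `check_mw` are hypothetical connectivity probes the application would supply (e.g., a lightweight query against each tier):

```python
def deep_health_status(check_db, check_mw):
    """Return (status_code, body) for the ALB health check URL.

    check_db / check_mw are callables that return truthy when the
    database or middleware tier is reachable. Any exception from a
    probe is treated as a failed check, so the ALB marks the target
    unhealthy instead of receiving a 5xx crash page.
    """
    try:
        db_ok = bool(check_db())
        mw_ok = bool(check_mw())
    except Exception:
        return 503, "dependency probe raised an exception"
    if db_ok and mw_ok:
        return 200, "ok"
    return 503, "db_ok=%s mw_ok=%s" % (db_ok, mw_ok)
```

Pointing the target group's health check path at this endpoint means a deployment only passes once the new tasks can actually reach the data and middleware tiers, which is what makes the verification automatic.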



A development team manages website deployments using AWS CodeDeploy blue/green deployments. The application is running on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group.

When deploying a new revision, the team notices the deployment eventually fails, but it takes a long time to fail. After further inspection, the team discovers the AllowTraffic lifecycle event ran for an hour and eventually failed without providing any other information. The team wants to ensure failure notices are delivered more quickly while maintaining application availability even upon failure.

Which combination of actions should be taken to meet these requirements? (Choose two.)

  1. Change the deployment configuration to CodeDeployDefault.AllAtOnce to speed up the deployment process by deploying to all of the instances at the same time.
  2. Create a CodeDeploy trigger for the deployment failure event and make the deployment fail as soon as a single health check failure is detected.
  3. Reduce the HealthCheckIntervalSeconds and UnhealthyThresholdCount values within the target group health checks to decrease the amount of time it takes for the application to be considered unhealthy.
  4. Use the appspec.yml file to run a script on the AllowTraffic hook to perform lighter health checks on the application instead of making CodeDeploy wait for the target group health checks to pass.
  5. Use the appspec.yml file to run a script on the BeforeAllowTraffic hook to perform health checks on the application and fail the deployment if the health checks performed by the script are not successful.

Answer(s): B,E
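Option E wires a script into the BeforeAllowTraffic lifecycle hook via the AppSpec file. A hedged sketch of what that appspec.yml could look like follows; the script path `scripts/health_check.sh` and the install destination are assumed names, and the timeout is what makes the deployment fail quickly rather than hanging for an hour:

```yaml
# Illustrative appspec.yml (EC2/On-Premises); paths are placeholders.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/app
hooks:
  BeforeAllowTraffic:
    - location: scripts/health_check.sh   # exits non-zero to fail the deployment
      timeout: 300                        # fail fast instead of waiting out AllowTraffic
      runas: root
```

Because the hook runs before traffic is shifted to the replacement instances, a failed check aborts the deployment while the original fleet keeps serving requests, preserving availability.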



A development team manually builds an artifact locally and then places it in an Amazon S3 bucket. The application has a local cache that must be cleared when a deployment occurs. The team executes a command to do this, downloads the artifact from Amazon S3, and unzips the artifact to complete the deployment.

A DevOps team wants to migrate to a CI/CD process and build in checks to stop and roll back the deployment when a failure occurs. This requires the team to track the progression of the deployment.

Which combination of actions will accomplish this? (Choose three.)

  1. Allow developers to check the code into a code repository. Using Amazon CloudWatch Events, on every pull into master, trigger an AWS Lambda function to build the artifact and store it in Amazon S3.
  2. Create a custom script to clear the cache. Specify the script in the BeforeInstall lifecycle hook in the AppSpec file.
  3. Create user data for each Amazon EC2 instance that contains the clear cache script. Once deployed, test the application. If it is not successful, deploy it again.
  4. Set up AWS CodePipeline to deploy the application. Allow developers to check the code into a code repository as a source for the pipeline.
  5. Use AWS CodeBuild to build the artifact and place it in Amazon S3. Use AWS CodeDeploy to deploy the artifact to Amazon EC2 instances.
  6. Use AWS Systems Manager to fetch the artifact from Amazon S3 and deploy it to all the instances.

Answer(s): B,D,E
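Option B's cache-clearing step would be declared in the AppSpec file's BeforeInstall hook, so CodeDeploy runs it on each instance before unpacking the new revision. A minimal sketch, assuming a hypothetical script at `scripts/clear_cache.sh` and an install path of `/opt/app`:

```yaml
# Illustrative appspec.yml fragment; script name and paths are assumptions.
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/app
hooks:
  BeforeInstall:
    - location: scripts/clear_cache.sh    # clears the local cache before the new artifact lands
      timeout: 60
      runas: root
```

With CodePipeline as the orchestrator (option D) and CodeBuild/CodeDeploy producing and deploying the artifact (option E), each lifecycle event is tracked, so a failed hook stops the deployment and can trigger a rollback.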



A company is implementing a well-architected design for its globally accessible API stack. The design needs to ensure both high reliability and fast response times for users located in North America and Europe.

The API stack contains the following three tiers:

· Amazon API Gateway
· AWS Lambda
· Amazon DynamoDB

Which solution will meet the requirements?

  1. Configure Amazon Route 53 to point to API Gateway APIs in North America and Europe using health checks. Configure the APIs to forward requests to a Lambda function in that Region. Configure the Lambda functions to retrieve and update the data in a DynamoDB table in the same Region as the Lambda function.
  2. Configure Amazon Route 53 to point to API Gateway APIs in North America and Europe using latency-based routing and health checks. Configure the APIs to forward requests to a Lambda function in that Region. Configure the Lambda functions to retrieve and update the data in a DynamoDB global table.
  3. Configure Amazon Route 53 to point to API Gateway in North America, create a disaster recovery API in Europe, and configure both APIs to forward requests to the Lambda functions in that Region. Retrieve the data from a DynamoDB global table. Deploy a Lambda function to check the North America API health every 5 minutes. In the event of a failure, update Route 53 to point to the disaster recovery API.
  4. Configure Amazon Route 53 to point to API Gateway API in North America using latency-based routing. Configure the API to forward requests to the Lambda function in the Region nearest to the user. Configure the Lambda function to retrieve and update the data in a DynamoDB table.

Answer(s): B
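The latency-based routing in option B is expressed in Route 53 as two records with the same name but different `Region` and `SetIdentifier` values, each tied to a health check. A sketch of the change batch for `aws route53 change-resource-record-sets` follows; the domain, health check IDs, and alias targets in angle brackets are placeholders, not real values:

```json
{
  "Comment": "Latency-based alias records for the two regional API Gateway endpoints (illustrative)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "A",
        "SetIdentifier": "us-east-1",
        "Region": "us-east-1",
        "HealthCheckId": "<us-health-check-id>",
        "AliasTarget": {
          "HostedZoneId": "<us-regional-api-zone-id>",
          "DNSName": "<us-regional-api-domain>",
          "EvaluateTargetHealth": true
        }
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "A",
        "SetIdentifier": "eu-west-1",
        "Region": "eu-west-1",
        "HealthCheckId": "<eu-health-check-id>",
        "AliasTarget": {
          "HostedZoneId": "<eu-regional-api-zone-id>",
          "DNSName": "<eu-regional-api-domain>",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
```

Route 53 answers each query with the lowest-latency healthy record, and a DynamoDB global table keeps the two Regions' data replicated, which is why B satisfies both the reliability and response-time requirements.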



A company manages an application that stores logs in Amazon CloudWatch Logs. The company wants to archive the logs in Amazon S3. Logs are rarely accessed after 90 days and must be retained for 10 years.

Which combination of steps should a DevOps engineer take to meet these requirements? (Choose two.)

  1. Configure a CloudWatch Logs subscription filter to use AWS Glue to transfer all logs to an S3 bucket.
  2. Configure a CloudWatch Logs subscription filter to use Amazon Kinesis Data Firehose to stream all logs to an S3 bucket.
  3. Configure a CloudWatch Logs subscription filter to stream all logs to an S3 bucket.
  4. Con gure the S3 bucket lifecycle policy to transition logs to S3 Glacier after 90 days and to expire logs after 3,650 days.
  5. Con gure the S3 bucket lifecycle policy to transition logs to Reduced Redundancy after 90 days and to expire logs after 3,650 days.

Answer(s): B,D
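The lifecycle rule in option D maps directly onto an S3 lifecycle configuration: transition to S3 Glacier at 90 days, expire at 3,650 days (10 years). A sketch of the JSON accepted by `aws s3api put-bucket-lifecycle-configuration` follows; the rule ID is an arbitrary example and the empty prefix applies the rule to the whole bucket:

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 3650 }
    }
  ]
}
```

Combined with the Kinesis Data Firehose subscription filter from option B, this gives a hands-off path from CloudWatch Logs to cheap archival storage and automatic deletion at the retention deadline.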





