EC-Council 312-97 Exam
Certified DevSecOps Engineer (ECDE) (Page 3)

Updated On: 7-Feb-2026

(Debra Aniston is a DevSecOps engineer at an IT company that develops software products and web applications. Her team has found various coding issues in the application code, and Debra would like to catch such issues before they reach the shared repository. She recommended a DevSecOps tool to the software development team that highlights bugs and security vulnerabilities with clear remediation guidance, helping developers fix security issues before the code is committed. Based on the information given, which of the following tools has Debra recommended to the software development team?)

  A. SonarLint.
  B. Arachni.
  C. OWASP ZAP.
  D. Tenable.io.

Answer(s): A

Explanation:

SonarLint is a static code analysis tool designed to run inside developers' IDEs, where it provides immediate feedback while code is being written. It highlights bugs, security vulnerabilities, and code smells and, importantly, provides clear remediation guidance that explains why an issue exists and how to fix it. This aligns directly with Debra's requirement: issues are caught and fixed before the code is committed to the repository. Arachni and OWASP ZAP are dynamic application security testing (DAST) tools that require a running application and are typically used later in the pipeline. Tenable.io is a vulnerability management platform focused on infrastructure and application scanning rather than real-time developer feedback. By using SonarLint, developers receive continuous guidance during coding, supporting the shift-left security approach in DevSecOps and reducing the cost and effort of fixing vulnerabilities later in the lifecycle.
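As an illustration of the kind of finding such an IDE-based analyzer surfaces, the snippet below shows a classic injection bug alongside its remediated form. This is a generic example written for this note, not actual SonarLint output:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Typical static-analysis finding: user input concatenated into SQL (injection risk).
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchone()

def find_user_safe(conn, username):
    # Remediation: a parameterized query, so input is never interpreted as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'debra')")
print(find_user_safe(conn, "debra"))  # → (1,)
```

A tool like SonarLint flags the first pattern as it is typed and explains the fix, which is exactly the "before commit" feedback loop the question describes.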



(Terry Diab has been working as a DevSecOps engineer in an IT company that develops software products and web applications for a call center. She would like to integrate Snyk with AWS CodeCommit to monitor and remediate vulnerabilities in the code repository. Terry pushed code to AWS CodeCommit; this triggered an Amazon EventBridge rule, which in turn started AWS CodePipeline, and AWS CodePipeline passed the code to a Snyk CLI run.
Which of the following services interacts with the Snyk CLI and sends the results to the Snyk UI?)

  A. AWS CodeDeploy.
  B. AWS CodeCommit.
  C. AWS CodePipeline.
  D. AWS CodeBuild.

Answer(s): D

Explanation:

In an AWS CI/CD architecture, AWS CodePipeline acts as an orchestration service that coordinates different stages but does not execute build or scan commands itself. AWS CodeBuild is the service responsible for running commands such as compiling code, executing tests, and running third-party security tools like the Snyk CLI. In Terry's workflow, CodeCommit stores the source code, EventBridge triggers the pipeline, and CodePipeline passes the source to CodeBuild. CodeBuild then executes the Snyk CLI, performs vulnerability scanning, and sends the scan results to the Snyk UI using the configured authentication token. AWS CodeDeploy is focused on application deployment and does not interact with Snyk CLI. Therefore, AWS CodeBuild is the component that interacts with Snyk CLI and communicates results back to the Snyk platform. This integration ensures that dependency vulnerabilities are detected early in the Build and Test stage.
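In practice, the commands CodeBuild runs are defined in the project's buildspec. A minimal buildspec.yml sketch of this stage is shown below; the Secrets Manager entry name snyk/token and the use of npm to install the CLI are assumptions for illustration, not details given in the question:

```yaml
# buildspec.yml -- CodeBuild executes these commands; CodePipeline only orchestrates.
version: 0.2
env:
  secrets-manager:
    SNYK_TOKEN: "snyk/token"    # assumed Secrets Manager entry holding the Snyk API token
phases:
  install:
    commands:
      - npm install -g snyk      # install the Snyk CLI in the build container
  build:
    commands:
      - snyk auth "$SNYK_TOKEN"  # authenticate so results can reach the Snyk UI
      - snyk test                # scan dependencies; a non-zero exit fails the build
      - snyk monitor             # upload a project snapshot to the Snyk UI
```

The key point for the exam is that these commands run inside CodeBuild, not CodePipeline, which is why CodeBuild is the service that talks to the Snyk CLI and the Snyk platform.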



(William McDougall has been working as a DevSecOps engineer in an IT company located in Sacramento, California. His organization uses the Microsoft Azure DevOps service to develop software products securely and quickly. To make proactive decisions about security issues and reduce overall security risk, William would like to integrate ThreatModeler with Azure Pipelines. How can ThreatModeler be integrated with Azure Pipelines and made part of his organization's DevSecOps pipeline?)

  A. By using a bidirectional API.
  B. By using a unidirectional API.
  C. By using a unidirectional UI.
  D. By using a bidirectional UI.

Answer(s): A

Explanation:

ThreatModeler integration with Azure Pipelines is achieved using a bidirectional API, which allows automated and continuous interaction between the pipeline and the threat modeling platform. This bidirectional communication enables Azure Pipelines to trigger threat modeling activities while also receiving results, risk scores, and actionable insights back from ThreatModeler. Such feedback loops are critical for proactive security decision-making during the Plan stage of DevSecOps. Unidirectional APIs or UI-based integrations limit automation and do not support continuous feedback, making them unsuitable for pipeline-driven workflows. UI-based approaches also introduce manual steps, which conflict with DevSecOps principles of automation and consistency. By using a bidirectional API, William's organization can embed threat modeling into the planning process, identify architectural risks early, and ensure security considerations are continuously enforced as part of the pipeline.



(Peter Dinklage has been working as a senior DevSecOps engineer at SacramentSoft Solution Pvt. Ltd. He has deployed applications in Docker containers, and his team leader asked him to check for exposure of unnecessary ports.
Which of the following commands should Peter use to list all running containers and their exposed ports?)

  A. docker ps --quiet | xargs docker inspect --all --format {{ .Id }}: Ports={{ .NetworkSettings.Ports }}
  B. docker ps --quiet | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}'
  C. docker ps --quiet | xargs docker inspect --format '{{ .Id }}: Ports{{ .NetworkSettings.Ports }}'
  D. docker ps --quiet | xargs docker inspect --all --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}'

Answer(s): B

Explanation:

To inspect exposed ports for running Docker containers, the recommended approach is to first retrieve container IDs using docker ps --quiet and then pass them to docker inspect. The --format option takes a Go template that selects fields from each container's configuration, including port mappings. The command docker ps --quiet | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}' prints each container ID together with its port mappings. The options that include --all are invalid because docker inspect does not support that flag, and the remaining distractor omits the = in the format template. Checking exposed ports is an important activity in the Operate and Monitor stage because unnecessary open ports increase the attack surface and may violate container security best practices. Regular inspection helps ensure that only required ports are exposed, supporting secure runtime operations.
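The Ports value printed by this check comes from each container's NetworkSettings.Ports structure in the inspect output. As a sketch of what that data looks like and how to read it, the following parses an abbreviated, illustrative payload (the container ID and mappings are made up, not taken from a live daemon):

```python
import json

# Abbreviated sample of `docker inspect` JSON for one container (illustrative only).
sample = json.loads("""
[{"Id": "4fa6e0f0c678",
  "NetworkSettings": {"Ports": {
      "80/tcp": [{"HostIp": "0.0.0.0", "HostPort": "8080"}],
      "9000/tcp": null}}}]
""")

for container in sample:
    ports = container["NetworkSettings"]["Ports"]
    for port, bindings in ports.items():
        # A null binding means the port is exposed but not published to the host.
        hosts = [f'{b["HostIp"]}:{b["HostPort"]}' for b in bindings or []]
        print(f'{container["Id"]}: {port} -> {hosts or "not published"}')
```

Here 80/tcp is published to the host (a real attack-surface concern), while 9000/tcp is exposed but unpublished; the exam's one-liner condenses exactly this information into a single formatted line per container.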



(Jason Wylie has been working as a DevSecOps engineer in an IT company located in Sacramento, California. He would like to use Jenkins for CI and Azure Pipelines for CD to deploy a Spring Boot app to an Azure Kubernetes Service (AKS) cluster. He created a namespace for Jenkins in AKS and then deployed the Jenkins application to a pod in that namespace.
Which of the following commands should Jason run to see the pods that have been spun up and are running?)

  A. kubectl get pods -k Jenkins.
  B. kubectl get pods -s jenkins.
  C. kubectl get pods -n jenkins.
  D. kubectl get pods -p jenkins.

Answer(s): C

Explanation:

Kubernetes uses namespaces to logically isolate resources such as pods, services, and deployments.

When an application like Jenkins is deployed into a specific namespace, the correct way to view the pods running in that namespace is by using the -n (or --namespace) flag with the kubectl get pods command. The command kubectl get pods -n jenkins instructs Kubernetes to list all pods in the "jenkins" namespace. The other options use invalid or unrelated flags that are not supported for namespace selection. Verifying pod status during the Release and Deploy stage is essential to ensure that applications have been deployed successfully and are running as expected before exposing services or proceeding to monitoring. This step supports deployment validation and operational readiness in Kubernetes-based DevSecOps environments.





