Free DCA Exam Braindumps (page: 6)


A company's security policy specifies that development and production containers must run on separate nodes in a given Swarm cluster.

Can this be used to schedule containers to meet the security policy requirements?

Solution: node taints

  A. Yes
  B. No

Answer(s): A

Explanation:

Node taints mark nodes in a cluster so that they repel containers that do not carry a matching toleration. By tainting the nodes designated for development and those designated for production, the company can ensure that only containers with the matching toleration are scheduled on each set of nodes, which satisfies the security policy. A taint is expressed as key=value:effect, where the effect can be NoSchedule, PreferNoSchedule, or NoExecute. For example, to taint a node for development only, one can run:

kubectl taint nodes node1 env=dev:NoSchedule

This means that no container can be scheduled onto node1 unless it tolerates the taint env=dev:NoSchedule. A toleration is added to a container by specifying it in the pod spec, for example:

tolerations:
- key: "env"
  operator: "Equal"
  value: "dev"
  effect: "NoSchedule"

This toleration matches the taint on node1 and allows the container to be scheduled on it.
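As a minimal sketch of how the pieces fit together (the pod name dev-app, the image, and the assumption that development nodes also carry an env=dev label are illustrative, not part of the question), a complete pod manifest might combine the toleration with a nodeSelector so the pod both tolerates the development taint and stays off other nodes:

apiVersion: v1
kind: Pod
metadata:
  name: dev-app               # hypothetical pod name
spec:
  nodeSelector:
    env: dev                  # assumes the development nodes also carry the label env=dev
  tolerations:
  - key: "env"
    operator: "Equal"
    value: "dev"
    effect: "NoSchedule"      # matches the taint applied with kubectl taint above
  containers:
  - name: app
    image: nginx:alpine       # placeholder image

A taint only repels pods that lack the toleration; it does not attract them, so the nodeSelector (or node affinity) is what actually pins the workload to the development nodes.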


Reference:

Taints and Tolerations | Kubernetes

Update the taints on one or more nodes in Kubernetes

A Complete Guide to Kubernetes Taints & Tolerations



A company's security policy specifies that development and production containers must run on separate nodes in a given Swarm cluster.

Can this be used to schedule containers to meet the security policy requirements?

Solution: label constraints

  A. Yes
  B. No

Answer(s): A

Explanation:

Label constraints can be used to schedule containers to meet the security policy requirements. Node labels let you mark which nodes a service may run on, and placement constraints restrict a service to nodes whose labels match. For example, you can label the nodes intended for development with env=dev and the nodes intended for production with env=prod, then use the --constraint flag when creating each service. Running docker service create --name dev-app --constraint 'node.labels.env == dev' ... creates a service that runs only on development nodes, while docker service create --name prod-app --constraint 'node.labels.env == prod' ... creates one that runs only on production nodes. This way, development and production containers are guaranteed to run on separate nodes in the Swarm cluster.
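As a minimal sketch, assuming the commands are run on a manager node and using hypothetical node names (node-dev1, node-prod1) and a placeholder image, the full workflow might look like this:

# Label the nodes according to their role (run on a manager node)
docker node update --label-add env=dev node-dev1
docker node update --label-add env=prod node-prod1

# Create services constrained to the matching nodes
docker service create --name dev-app \
  --constraint 'node.labels.env == dev' \
  nginx:alpine

docker service create --name prod-app \
  --constraint 'node.labels.env == prod' \
  nginx:alpine

A task whose constraint matches no available node simply stays pending, so it is worth labelling every node in the cluster so each service has somewhere to land.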


Reference:

Add labels to swarm nodes

Using placement constraints with Docker Swarm

Multiple label placement constraints in docker swarm



One of several containers in a pod is marked as unhealthy after failing its livenessProbe many times. Is this the action taken by the orchestrator to fix the unhealthy container?

Solution: Kubernetes automatically triggers a user-defined script to attempt to fix the unhealthy container.

  A. Yes
  B. No

Answer(s): B

Explanation:

A livenessProbe is how a container signals its internal health to the Kubernetes control plane. When a container fails its liveness probe repeatedly, the kubelet kills the container and restarts it according to the pod's restartPolicy. Kubernetes does not trigger a user-defined repair script; the only built-in remediation for a failed liveness check is restarting the container. Therefore this solution does not describe the action taken by the orchestrator, and the answer is No.
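As a minimal sketch (the pod name, image, and probe settings are illustrative assumptions), a liveness probe is declared in the container spec; note that it only describes how health is checked, and there is no field for a user-defined repair script:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo         # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:alpine       # placeholder image
    livenessProbe:
      httpGet:
        path: /               # nginx returns 200 on /, so this probe passes
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3     # after 3 consecutive failures the kubelet restarts the container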


Reference:

Kubernetes Pods

Configure Liveness, Readiness and Startup Probes

Docker and Kubernetes



One of several containers in a pod is marked as unhealthy after failing its livenessProbe many times. Is this the action taken by the orchestrator to fix the unhealthy container?

Solution: The unhealthy container is restarted.

  A. Yes
  B. No

Answer(s): A

Explanation:

A liveness probe is a mechanism for indicating your application's internal health to the Kubernetes control plane, and Kubernetes uses liveness probes to detect issues within your pods. When a liveness check fails, the kubelet kills the container and restarts it according to the pod's restartPolicy, in an attempt to restore the service to an operational state. Therefore, the action taken by the orchestrator to fix the unhealthy container is to restart it.
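As a small usage note (the pod name my-app is a placeholder), the restart behaviour can be observed with standard kubectl commands:

# The RESTARTS column increments each time the kubelet restarts the container
kubectl get pod my-app

# The Events section records the liveness probe failures and the resulting restarts
kubectl describe pod my-app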


Reference:

A Practical Guide to Kubernetes Liveness Probes | Airplane





