Free Linux Foundation CNPA Exam Questions (page: 8)

A developer is tasked with securing a Kubernetes cluster and needs to implement Role-Based Access Control (RBAC) to manage user permissions.
Which of the following statements about RBAC in Kubernetes is correct?

  A. RBAC does not support namespace isolation and applies globally across the cluster.
  B. RBAC allows users to have unrestricted roles and access to all resources in the cluster.
  C. RBAC is only applicable to Pods and does not extend to other Kubernetes resources.
  D. RBAC uses roles and role bindings to grant permissions to users for specific resources and actions.

Answer(s): D

Explanation:

Role-Based Access Control (RBAC) in Kubernetes is a cornerstone of cluster security, enabling fine-grained access control based on the principle of least privilege. Option D is correct because RBAC leverages Roles (or ClusterRoles) that define sets of permissions, and RoleBindings (or ClusterRoleBindings) that assign those roles to users, groups, or service accounts. This mechanism ensures that users have only the minimum required access to perform their tasks, enhancing both security and governance.

Option A is incorrect because RBAC fully supports namespace-scoped roles, allowing isolation of permissions at the namespace level in addition to cluster-wide roles. Option B is wrong because RBAC is specifically designed to restrict, not grant, unrestricted access. Option C is misleading because RBAC applies broadly across Kubernetes API resources, not just Pods--it includes ConfigMaps, Secrets, Deployments, Services, and more.

By applying RBAC correctly, platform teams can align with security best practices, ensuring that sensitive operations (e.g., managing secrets or modifying cluster configurations) are tightly controlled. RBAC is also central to compliance frameworks, as it provides auditability of who has access to what resources.
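As a minimal illustration, a namespaced Role granting read-only access to Pods, bound to a single user, might look like the following (the namespace, role, and user names are illustrative, not taken from the question):

```yaml
# Role: read-only access to Pods in the "dev" namespace (names are illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]            # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# RoleBinding: grants the pod-reader Role to user "jane" within "dev" only
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because both objects are namespaced, the binding demonstrates exactly the namespace isolation that option A denies: jane can list Pods in dev but has no access anywhere else.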


Reference:

-- CNCF Kubernetes Security Best Practices

-- Kubernetes RBAC Documentation (aligned with CNCF platform engineering security guidance)

-- Cloud Native Platform Engineering Study Guide



Why is centralized configuration management important in a multi-cluster GitOps setup?

  A. It requires all clusters to have the exact same configuration, including secrets and environment variables, to maintain uniformity.
  B. It ensures consistent and auditable management of configurations and policies across clusters from a single Git repository or set of coordinated repositories.
  C. It eliminates the need for automated deployment tools like Argo CD or Flux since configurations are already stored centrally.
  D. It makes it impossible for different teams to customize configurations for specific clusters, reducing flexibility.

Answer(s): B

Explanation:

In a GitOps-driven multi-cluster environment, centralized configuration management ensures that platform teams can maintain consistency, governance, and security across multiple clusters, all while leveraging Git as the single source of truth. Option B is correct because centralization allows teams to enforce policies, apply configurations, and audit changes across environments in a traceable and reproducible way. This supports compliance, as every change is version-controlled, peer-reviewed, and automatically reconciled by tools like Argo CD or Flux.

Option A is misleading--centralized management does not mean clusters must have identical configurations; it enables consistent patterns while still allowing environment-specific overlays or customizations (e.g., dev vs. prod). Option C is incorrect because GitOps tools remain essential for continuous reconciliation between desired and actual state. Option D is also incorrect because centralized management does not remove flexibility--it supports parameterization and customization per cluster.

By combining centralization with declarative configuration and GitOps automation, organizations gain operational efficiency, faster recovery from drift, and improved auditability in multi-cluster scenarios.
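As a sketch of how this looks in practice, an Argo CD Application can sync a per-cluster Kustomize overlay from a central repository; the repository URL and paths below are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-config-prod            # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/platform-config.git   # hypothetical central repo
    targetRevision: main
    path: overlays/prod                 # per-cluster Kustomize overlay
  destination:
    server: https://kubernetes.default.svc
    namespace: platform
  syncPolicy:
    automated:
      prune: true                       # remove resources deleted from Git
      selfHeal: true                    # reconcile manual drift back to the Git state
```

A base directory shared by all clusters plus thin `overlays/dev` and `overlays/prod` directories gives the consistency of option B while preserving the per-cluster customization that option D claims is lost.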


Reference:

-- CNCF GitOps Principles for Platforms

-- CNCF Platforms Whitepaper

-- Cloud Native Platform Engineering Study Guide



A platform team is implementing an API-driven approach to enable development teams to consume platform capabilities more effectively.
Which of the following examples best illustrates this approach?

  A. Providing a documented process for developers to submit feature requests for the platform.
  B. Developing a dashboard that visualizes platform usage statistics without exposing any APIs.
  C. Allowing developers to request and manage development environments on demand through an internal tool.
  D. Implementing a CI/CD pipeline that automatically deploys updates to the platform based on developer requests.

Answer(s): C

Explanation:

An API-driven approach in platform engineering enables developers to interact with the platform programmatically through self-service capabilities. Option C is correct because giving developers the ability to request and manage environments on demand via APIs or internal tooling exemplifies the API-first model. This approach abstracts infrastructure complexity, reduces manual intervention, and ensures automation and repeatability--all key goals of platform engineering.

Option A is a traditional request/response workflow but does not empower developers with real-time, self-service capabilities. Option B provides visibility but does not expose APIs for consumption or management. Option D focuses on automating platform updates rather than enabling developer interaction with platform services.

By exposing APIs for services such as provisioning environments, databases, or networking, the platform team empowers developers to operate independently while maintaining governance and consistency. This improves developer experience and accelerates delivery, aligning with internal developer platform (IDP) practices.
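One common way to expose such an API is as a Kubernetes custom resource, so a developer's environment request is itself a small declarative manifest. The `EnvironmentClaim` kind and its fields below are hypothetical, shown only to illustrate the shape of a self-service API:

```yaml
apiVersion: platform.example.com/v1alpha1   # hypothetical platform API group
kind: EnvironmentClaim
metadata:
  name: team-a-dev-env
spec:
  size: small          # abstracted T-shirt sizing instead of raw CPU/memory specs
  ttl: 72h             # environment is reclaimed automatically after 72 hours
  services:            # backing services wired up by the platform, not the developer
    - postgres
    - redis
```

The developer states intent; a platform controller handles provisioning, policy checks, and teardown, keeping governance centralized while interaction stays self-service.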


Reference:

-- CNCF Platforms Whitepaper

-- CNCF Platform Engineering Maturity Model

-- Cloud Native Platform Engineering Study Guide



In a Kubernetes environment, which component is responsible for watching the state of resources during the reconciliation process?

  A. Kubernetes Scheduler
  B. Kubernetes Dashboard
  C. Kubernetes API Server
  D. Kubernetes Controller

Answer(s): D

Explanation:

The Kubernetes reconciliation process ensures that the actual cluster state matches the desired state defined in manifests. The Kubernetes Controller (option D) is responsible for watching the state of resources through the API Server and taking action to reconcile differences. For example, the Deployment Controller ensures that the number of Pods matches the replica count specified, while the Node Controller monitors node health.

Option A (Scheduler) is incorrect because the Scheduler's role is to assign Pods to nodes based on constraints and availability, not ongoing reconciliation. Option B (Dashboard) is simply a UI for visualization and does not manage cluster state. Option C (API Server) exposes the Kubernetes API and serves as the communication hub, but it does not perform reconciliation logic itself.

Controllers embody the core Kubernetes design principle: continuous reconciliation between declared state and observed state. This makes them fundamental to declarative infrastructure and aligns with GitOps practices where controllers continuously enforce desired configurations from source control.
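The pattern is visible in any Deployment manifest: the spec declares desired state, and the Deployment controller continuously acts to make observed state match it (names and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 3          # desired state: the controller recreates Pods if fewer than 3 are running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27   # illustrative image
```

Delete one of the three Pods and the controller, watching via the API Server, observes the mismatch and creates a replacement; the Scheduler then merely assigns the new Pod to a node.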


Reference:

-- CNCF Kubernetes Documentation

-- CNCF GitOps Principles

-- Cloud Native Platform Engineering Study Guide



To simplify service consumption for development teams on a Kubernetes platform, which approach combines service discovery with an abstraction of underlying infrastructure details?

  A. Manual configuration of service dependencies within application code.
  B. A shared document of service connection strings and network configurations.
  C. Direct Kubernetes API access with detailed documentation.
  D. A service catalog with abstracted APIs and automated service registration.

Answer(s): D

Explanation:

Simplifying developer access to platform services is a central goal of internal developer platforms (IDPs). Option D is correct because a service catalog with abstracted APIs and automated registration provides a unified interface for developers to consume services without dealing with low-level infrastructure details. This approach combines service discovery with abstraction, offering golden paths and self-service capabilities.

Option A burdens developers with hardcoded dependencies, reducing flexibility and portability. Option B relies on manual documentation, which is error-prone and not dynamic. Option C increases cognitive load by requiring developers to interact directly with Kubernetes APIs, which goes against platform engineering's goal of reducing complexity.

A service catalog enables developers to provision databases, messaging queues, or APIs with minimal input, while the platform automates backend provisioning and wiring. It also improves consistency, compliance, and observability by embedding platform-wide policies into the service provisioning workflows. This results in a seamless developer experience that accelerates delivery while maintaining governance.
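As one concrete (and hedged) illustration, catalog tools such as Backstage register services through a small descriptor file checked into each repository; a minimal `catalog-info.yaml` might look like this, with all names illustrative:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: orders-service           # illustrative service name
  description: Handles order processing
  annotations:
    backstage.io/kubernetes-id: orders-service   # links catalog entry to cluster workloads
spec:
  type: service
  lifecycle: production
  owner: team-commerce           # illustrative owning team
```

Automated registration of such descriptors is what keeps the catalog accurate without the manual, error-prone documentation of option B.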


Reference:

-- CNCF Platforms Whitepaper

-- CNCF Platform Engineering Maturity Model

-- Cloud Native Platform Engineering Study Guide



Viewing page 8 of 18
Viewing questions 36 - 40 out of 85 questions


