Free HashiCorp HCVA0-003 Exam Questions

What API endpoint is used to manage secrets engines in Vault?

  A. /secret-engines/
  B. /sys/mounts
  C. /sys/capabilities
  D. /sys/kv

Answer(s): B

Explanation:

Vault's API provides endpoints for managing its components, including secrets engines, which generate and manage secrets (e.g., AWS, KV, Transit). Managing secrets engines involves enabling, disabling, tuning, or listing them. Let's evaluate:
Option A: /secret-engines/
This is not a valid Vault API endpoint. Vault uses /sys/ for system-level operations, and no endpoint named /secret-engines/ exists in the official API documentation. It's a fabricated path, possibly a misunderstanding of secrets engine management. Incorrect.
Option B: /sys/mounts
This is the correct endpoint. The /sys/mounts endpoint allows operators to list all mounted secrets engines (GET), enable a new one (POST to /sys/mounts/<path>), or tune an existing one (POST to /sys/mounts/<path>/tune). For example, enabling the AWS secrets engine at aws/ uses POST /v1/sys/mounts/aws with a payload specifying the type (aws). This endpoint is the central hub for secrets engine management. Correct.
Option C: /sys/capabilities
The /sys/capabilities endpoint checks which capabilities (e.g., read or write) a token has on specific paths. It's unrelated to managing secrets engines; it's for checking token permissions, not mount operations. Incorrect.
Option D: /sys/kv
There's no /sys/kv endpoint. The KV secrets engine, when enabled, lives at a user-defined path (e.g., kv/), not under /sys/. System endpoints under /sys/ handle configuration, not specific secrets engine instances. Incorrect.
Detailed Mechanics:
The /sys/mounts endpoint interacts with Vault's mount table, a registry of all enabled backends (auth methods and secrets engines). A GET request to /v1/sys/mounts returns a JSON list of mounts, e.g., {"kv/": {"type": "kv", "options": {"version": "2"}}}. A POST request to /v1/sys/mounts/my-mount with {"type": "kv"} mounts a new KV engine. Tuning (e.g., setting TTLs) uses /sys/mounts/<path>/tune. This endpoint's versatility makes it the go-to for secrets engine management.
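As a minimal sketch of the tune workflow described above (assuming a Vault server at http://127.0.0.1:8200 and a valid token in the VAULT_TOKEN environment variable; the kv/ mount and TTL values are illustrative):

  # Tune default and max lease TTLs on an existing kv/ mount (illustrative values)
  curl -X POST \
    -H "X-Vault-Token: $VAULT_TOKEN" \
    -d '{"default_lease_ttl": "1h", "max_lease_ttl": "24h"}' \
    http://127.0.0.1:8200/v1/sys/mounts/kv/tune

  # Read back the mount's tuning configuration to verify the change
  curl -H "X-Vault-Token: $VAULT_TOKEN" \
    http://127.0.0.1:8200/v1/sys/mounts/kv/tune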
Real-World Example:
To enable the Transit engine:
  curl -X POST -H "X-Vault-Token: <token>" -d '{"type":"transit"}' http://127.0.0.1:8200/v1/sys/mounts/transit
To list mounts:
  curl -X GET -H "X-Vault-Token: <token>" http://127.0.0.1:8200/v1/sys/mounts
Overall Explanation from Vault Docs:
"The /sys/mounts endpoint is used to manage secrets engines in Vault... List, enable, or tune mounts via this system endpoint."


Reference:

https://developer.hashicorp.com/vault/api-docs/system/mounts



You are deploying Vault in a local data center, but you want a secondary Vault cluster available in the event the primary cluster goes offline. Applications are also running in the secondary data center, as they are architected to run active/active.
Which type of replication would be best in this scenario?

  A. Disaster Recovery replication
  B. Performance replication

Answer(s): B

Explanation:

Vault supports two replication types: Performance Replication and Disaster Recovery (DR) Replication, each serving distinct purposes. The scenario involves an on-premises primary cluster and a secondary cluster in another data center, with active/active applications needing Vault access. Let's analyze:
Option A: Disaster Recovery replication
DR replication mirrors the primary cluster's state (secrets, tokens, leases) to a secondary cluster, which remains in standby mode until it is promoted during a failover. It's designed for disaster scenarios where the primary is lost, not for active/active use. The secondary doesn't serve reads or writes until promoted, which doesn't suit applications actively running in the secondary data center. Incorrect.
Option B: Performance replication
Performance replication creates an active secondary cluster that replicates data from the primary in near real-time. It supports read operations locally, reducing latency for applications in the secondary data center, and can handle writes (forwarded to the primary). This fits an active/active architecture, providing redundancy and performance. If the primary fails, the secondary can continue serving reads (though writes need reconfiguring). Correct.
Detailed Mechanics:
Performance replication uses a primary-secondary model with log shipping via Write-Ahead Logs (WALs). The secondary maintains its own storage, synced from the primary, and can serve reads independently. Writes are forwarded to the primary, ensuring consistency. In an active/active setup, applications in both data centers can query their local Vault cluster, leveraging the secondary's read capability. DR replication, conversely, keeps the secondary dormant, requiring manual promotion, which introduces downtime unsuitable for active apps.
Real-World Example:
Primary cluster at dc1.vault.local:8200, secondary at dc2.vault.local:8200. Apps in DC2 query the secondary for secrets (e.g., GET /v1/secret/data/my-secret), avoiding cross-DC latency. If DC1 fails, DC2 continues serving cached reads until a new primary is established.
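As a minimal sketch of setting this up (performance replication is a Vault Enterprise feature; the cluster names and the token placeholder below are illustrative):

  # On the primary cluster (DC1): enable performance replication
  vault write -f sys/replication/performance/primary/enable

  # Generate an activation token for the DC2 secondary
  vault write sys/replication/performance/primary/secondary-token id=dc2

  # On the secondary cluster (DC2): activate replication with that token
  vault write sys/replication/performance/secondary/enable token=<wrapped-activation-token>

  # Check replication status on either cluster
  vault read sys/replication/performance/status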
Overall Explanation from Vault Docs:
"Performance replication... allows secondary clusters to serve reads locally, ideal for active/active setups... DR replication is for failover, keeping secondaries in standby."


Reference:

https://developer.hashicorp.com/vault/docs/enterprise/replication



How long does the Transit secrets engine store the resulting ciphertext by default?

  A. 24 hours
  B. 30 days
  C. 32 days
  D. Transit does not store data

Answer(s): D

Explanation:

The Transit secrets engine in Vault is designed for encryption-as-a-service, not data storage. Let's evaluate:
Option A: 24 hours
Transit doesn't store ciphertext, so no TTL applies. Incorrect.
Option B: 30 days
No storage means no 30-day retention. Incorrect.
Option C: 32 days
This matches Vault's default maximum lease TTL (768 hours), not Transit behavior. Incorrect.
Option D: Transit does not store data
Transit encrypts data and returns the ciphertext to the caller without persisting it in Vault. Correct.
Detailed Mechanics:
When you run vault write transit/encrypt/mykey plaintext=<base64-data>, Vault uses the named key (e.g., mykey) to encrypt the input and returns a response like vault:v1:<ciphertext>. This ciphertext is not stored in Vault's storage backend (e.g., Consul, Raft); it's the client's responsibility to save it (e.g., in a database). This stateless design keeps Vault lightweight and secure, avoiding data retention risks.
Real-World Example:
Encrypt a credit card: vault write transit/encrypt/creditcard plaintext=$(base64 <<< "1234-5678-9012-3456"). Response: ciphertext=vault:v1:<data>. You store this in your app's database; Vault retains nothing.
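To round out the stateless flow (a minimal sketch; the creditcard key follows the example above), the client later sends its stored ciphertext back to Vault for decryption:

  # Decrypt: the client supplies the ciphertext it stored elsewhere
  vault write transit/decrypt/creditcard ciphertext="vault:v1:<data>"

  # The response returns the plaintext base64-encoded; decode it locally with
  # base64 --decode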
Overall Explanation from Vault Docs:
"Vault does NOT store any data encrypted via the transit/encrypt endpoint... The ciphertext is returned to the caller for storage elsewhere."


Reference:

https://developer.hashicorp.com/vault/docs/secrets/transit



Which of the following policies would permit a user to generate dynamic credentials on a database?

  A. path "database/creds/read_only_role" { capabilities = ["generate"] }
  B. path "database/creds/read_only_role" { capabilities = ["update"] }
  C. path "database/creds/read_only_role" { capabilities = ["list"] }
  D. path "database/creds/read_only_role" { capabilities = ["read"] }

Answer(s): D

Explanation:

The Database secrets engine generates dynamic credentials for database access. The endpoint database/creds/<role> (e.g., read_only_role) provides these credentials via a read operation. Let's analyze:
Option A: capabilities = ["generate"]
There's no generate capability in Vault policies. Capabilities are create, read, update, delete, list, etc. This is invalid. Incorrect.
Option B: capabilities = ["update"]
update (PUT/POST) modifies existing data; it doesn't generate credentials. The creds endpoint uses GET. Incorrect.
Option C: capabilities = ["list"]
list retrieves metadata or paths, not credential data. Incorrect.
Option D: capabilities = ["read"]
Generating dynamic credentials involves a GET request to database/creds/<role>, mapped to the read capability. This policy allows it. Correct.
Detailed Mechanics:
For a role read_only_role defined with vault write database/roles/read_only_role db_name=my-db creation_statements="CREATE USER...", a user with read on database/creds/read_only_role can run vault read database/creds/read_only_role to get temporary credentials. Vault's policy system aligns HTTP verbs to capabilities: GET = read, PUT = update. This counterintuitive mapping (GET for creation) is specific to dynamic secrets.
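As a minimal sketch of granting and exercising this permission (the policy name db-reader is illustrative; the role matches the question):

  # db-reader.hcl - grants read on the creds endpoint for the role
  path "database/creds/read_only_role" {
    capabilities = ["read"]
  }

  # Load the policy; a token holding it can then request dynamic credentials
  vault policy write db-reader db-reader.hcl
  vault read database/creds/read_only_role
  # Returns a temporary username/password plus a lease ID and duration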
Overall Explanation from Vault Docs:
"Generating database credentials requires read capability on database/creds/<role>... Despite creating credentials, the HTTP request is a GET."


Reference:

https://developer.hashicorp.com/vault/tutorials/db-credentials/database-secrets


