CompTIA CY0-001 Exam Questions
CompTIA SecAI+ Beta (Page 4)

Updated On: 12-May-2026

Which of the following is the most concerning risk for a company that allows corporate end users to use public-facing large language models (LLMs)?

  1. Inaccuracies due to hallucinations
  2. Out-of-date acceptable use policies
  3. Data security regulatory violations
  4. Malicious code generation

Answer(s): C

Explanation:

The greatest concern with employees using public-facing LLMs is the potential exposure of sensitive or regulated corporate data. Submitting such information to external systems may violate data protection laws (e.g., GDPR, HIPAA), creating legal and compliance risks that outweigh issues like hallucinations or malicious outputs.



Which of the following requires developers to harden infrastructure to protect AI systems?

  1. Intake processes
  2. Acceptable use policies
  3. Development guidelines
  4. Configuration standards

Answer(s): D

Explanation:

Configuration standards define how infrastructure and systems must be securely set up and maintained. By following these standards, developers harden the environment that supports AI systems, reducing risks from misconfigurations and vulnerabilities.
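
Hardening against a configuration standard is often automated as a baseline check. The sketch below compares a deployed service's settings against a hardening baseline; the baseline keys and values are illustrative assumptions, not drawn from any real CompTIA or vendor standard.

```python
# Illustrative hardening baseline; keys and values are assumptions.
HARDENING_BASELINE = {
    "tls_enabled": True,
    "debug_mode": False,
    "min_tls_version": "1.2",
}

def check_config(config: dict) -> list[str]:
    """Return a list of deviations from the hardening baseline."""
    violations = []
    for key, required in HARDENING_BASELINE.items():
        actual = config.get(key)
        if actual != required:
            violations.append(f"{key}: expected {required!r}, got {actual!r}")
    return violations
```

A non-empty result indicates the environment deviates from the standard and should be remediated before the AI system goes live.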



Which of the following is the best example of an AI model that is trained to identify multiple points from input using a neural network to provide output for authentication?

  1. Facial recognition
  2. Encryption key
  3. Open Authorization (OAuth)
  4. Bounding box

Answer(s): A

Explanation:

Facial recognition uses neural networks to analyze multiple points or features from an input image (such as eyes, nose, mouth, and facial structure) to generate a unique identifier for authentication purposes.
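
In practice, the neural network reduces those facial points to an embedding vector, and authentication compares the probe embedding to the enrolled one. A minimal sketch of that comparison step, assuming the embeddings have already been produced by a model (the vectors and threshold here are toy values):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def authenticate(probe: list[float], enrolled: list[float],
                 threshold: float = 0.9) -> bool:
    """Accept the user if the embeddings are sufficiently similar."""
    return cosine_similarity(probe, enrolled) >= threshold
```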



An organization is developing and implementing AI features in a customer service application. Which of the following practices should the organization put in place before releasing the application for customer trials?

  1. Data masking and sanitization
  2. External compliance audits
  3. Approved AI vendor lists
  4. Third-party risk management

Answer(s): A

Explanation:

Before releasing AI features for customer trials, it is critical to protect sensitive information that may be used during testing. Data masking and sanitization ensure customer or corporate data is anonymized or obfuscated, reducing the risk of data exposure while still allowing realistic evaluation of the AI system.
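
As a concrete illustration, masking often targets well-known PII patterns such as payment card numbers. The sketch below replaces all but the last four digits of card-like numbers in free text; the regex is a simple approximation, not an exhaustive PII detector.

```python
import re

# Matches 13-16 digit card-like numbers, optionally separated by
# spaces or hyphens. Simplified for illustration.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def mask_pans(text: str) -> str:
    """Replace all but the last four digits of card-like numbers."""
    def _mask(match: re.Match) -> str:
        digits = re.sub(r"\D", "", match.group())
        return "*" * (len(digits) - 4) + digits[-4:]
    return CARD_RE.sub(_mask, text)
```

Running the masked data through the trial environment lets testers evaluate realistic behavior without exposing real card numbers.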



An internal user enters a client credit card number into an internal generative machine learning (ML) model:

#User prompt: Customer Jane Doe has a new credit card that she wants to add to her account. The number is 5555-5555-5555-5555

Which of the following is the most effective way to prevent prompt injection attacks against a large language model (LLM)?

  1. Guardrails
  2. Antivirus
  3. Web application firewall (WAF)
  4. Role-based access control

Answer(s): A

Explanation:

Guardrails are the primary security control for LLMs to prevent prompt injection attacks. They enforce rules on what inputs are accepted and how the model responds, blocking malicious or sensitive prompts (such as credit card numbers) before they can manipulate or exploit the model.
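
A minimal input-side guardrail can be sketched as a pre-screening step that runs before the prompt ever reaches the model. The pattern and refusal message below are illustrative assumptions, not taken from any specific guardrail product:

```python
import re

# Card-like number: 13-19 digits, optionally space/hyphen separated.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def guard_prompt(prompt: str) -> tuple[bool, str]:
    """Screen a prompt before it reaches the LLM.

    Returns (allowed, text); blocked prompts get a refusal message.
    """
    if CARD_PATTERN.search(prompt):
        return False, "Blocked: prompt appears to contain a payment card number."
    return True, prompt
```

Production guardrails typically layer several such checks (pattern matching, classifiers, output filters); this shows only the input-validation idea.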



A security alert triggers an agentic system. An analyst notices the following payload in the logs:

[payload image not reproduced in this copy]

The alert includes multiple shell commands that are not typically run as part of any hardening. Which of the following is the most effective control to implement?

  1. Adding logic that includes approved strings before running the shell commands
  2. Deprecating model usage and retraining the model with safer parameters
  3. Modifying the application to ignore the SECURITY_UPDATE tag
  4. Using only approved libraries when interacting with agentic systems

Answer(s): A

Explanation:

The payload in the alert attempts to trick the system into executing unauthorized shell commands. The most effective control is to implement allow-list validation (approved strings) before execution. This ensures that only predefined, safe commands are executed, blocking prompt injection attempts that introduce malicious code such as the fake patch script.
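
The allow-list idea can be sketched as a gate in front of the agent's command execution. The approved command set below is a placeholder for an organization's own policy:

```python
import shlex

# Placeholder allow-list; in practice this comes from policy, not code.
APPROVED_COMMANDS = {"systemctl status sshd", "df -h", "uptime"}

def run_if_approved(command: str) -> bool:
    """Execute a shell command only if it matches the allow-list exactly."""
    normalized = " ".join(shlex.split(command))
    if normalized not in APPROVED_COMMANDS:
        print(f"Rejected unapproved command: {command!r}")
        return False
    # subprocess.run(shlex.split(normalized), check=True)  # only run here
    return True
```

Because the check is an exact allow-list rather than a deny-list, a prompt-injected "patch script" fails the gate no matter how it is phrased.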



A global security operations center (SOC) wants to adopt AI and leverage its strengths to enhance its security operations. Which of the following is the best way to enhance the global SOC functions?

  1. Generate code and execute in production to help save time.
  2. Enable a personal assistant that can act in the global SOC with no human intervention.
  3. Use open-source models in production to help the efficiency of threat detection and threat analysis.
  4. Summarize alerts to easily gain insights on the environment.

Answer(s): D

Explanation:

AI can significantly enhance SOC operations by summarizing and correlating high volumes of alerts, enabling analysts to quickly identify patterns, prioritize threats, and gain actionable insights. This reduces analyst fatigue and improves response times without introducing unsafe automation risks.
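
One common pre-processing step for AI-assisted triage is collapsing raw alerts into a compact digest before summarization. The sketch below groups alerts by signature into counts; the field name `signature` is an assumption about the alert schema:

```python
from collections import Counter

def summarize_alerts(alerts: list[dict]) -> list[str]:
    """Collapse raw alerts into per-signature counts, busiest first."""
    counts = Counter(a["signature"] for a in alerts)
    return [f"{sig}: {n} occurrence(s)" for sig, n in counts.most_common()]
```

Feeding a digest like this to an LLM (or a human analyst) conveys the shape of the environment without forwarding thousands of raw events.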



An attacker successfully completes a denial-of-service (DoS) attack through the context window of an AI system. Thousands of characters are obfuscated and hidden behind an emoji. Which of the following techniques best mitigates this type of attack?

  1. Fraud detection
  2. Large language model (LLM)-as-a-judge
  3. Pattern recognition
  4. Prompt filter

Answer(s): D

Explanation:

A DoS attack through the context window relies on overwhelming the model with excessive or obfuscated input. Prompt filtering prevents such malicious or oversized inputs from being processed, ensuring that the model only receives safe, properly structured data within acceptable limits.
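
A prompt filter for this specific attack can combine a length cap with a check for the invisible code points (variation selectors, zero-width format characters) used to smuggle data behind an emoji. The limit and threshold below are illustrative assumptions:

```python
import unicodedata

MAX_PROMPT_CHARS = 4000  # illustrative context budget

def filter_prompt(prompt: str) -> bool:
    """Reject oversized prompts or prompts hiding payload in invisible
    characters (e.g. data smuggled behind an emoji via variation
    selectors). Returns True if the prompt may proceed."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return False
    suspicious = sum(
        1 for ch in prompt
        if unicodedata.category(ch) == "Cf"          # zero-width/format chars
        or "\ufe00" <= ch <= "\ufe0f"                # variation selectors
        or "\U000e0100" <= ch <= "\U000e01ef"        # variation selectors supp.
    )
    return suspicious <= 2  # a couple may occur legitimately in emoji
```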


