CompTIA CY0-001 Exam Questions
CompTIA SecAI+ Beta (Page 2 )

Updated On: 12-May-2026

Which of the following job roles in an organizational governance structure develops a model from business use cases?

  1. Platform architect
  2. AI risk analyst
  3. Machine learning operations (MLOps) engineer
  4. Data scientist

Answer(s): D

Explanation:

A data scientist develops models from business use cases by translating organizational needs into machine learning solutions. They prepare data, select algorithms, and build models that align with the use cases.



An administrator, who works for a financial institution, is required to implement data security controls for data at rest within AI systems that involve data disclosure. Which of the following is the most suitable control?

  1. Data lineage
  2. Rate limits
  3. Encryption
  4. Masking

Answer(s): C

Explanation:

For financial institutions handling AI systems, protecting data at rest against disclosure requires encryption.
Encryption ensures that even if the storage medium is accessed or compromised, the data remains unreadable without the proper decryption keys.



A security engineer needs to monitor an AI-based system for runtime operations. The engineer is mostly concerned about the visibility of internal activity. Which of the following is the most appropriate monitoring solution?

  1. Deploying a security information and event management (SIEM) tool
  2. Implementing a web application firewall (WAF) with header logging
  3. Relying on vendor model controls and monitoring prompt inputs
  4. Enabling stack call and debugging level traces at the function level

Answer(s): D

Explanation:

For runtime visibility into internal activity of an AI system, the most suitable control is enabling stack calls and debugging-level traces. This provides granular insights into function-level execution, dependencies, and operations, which directly supports monitoring of runtime behavior.



Which of the following should an auditor reference when reviewing a company's human resources AI systems for legal non-compliance?

  1. Organization for Economic Cooperation and Development (OECD) standard
  2. National Institute of Standards and Technology (NIST) AI Risk Management Framework 9RMF)
  3. European Union (EU) AI Act
  4. International Organization for Standardization (ISO)

Answer(s): C

Explanation:

The EU AI Act is legally binding legislation that specifically governs the use of AI systems, including those used in human resources for hiring, promotion, and evaluation. An auditor reviewing AI systems for legal non- compliance must reference this act because it establishes enforceable requirements related to transparency, bias, risk classification, and prohibited practices.



An airline corporation wants to implement a chatbot application using a large language model (LLM) so its customers:

-Can ask question and receive answers about flight details.
-Have the option to upload files.

Which of the following security controls should the airline use to protect against malicious input and unauthorized use beyond the service-level agreement? (Choose two.)

  1. Prompt guardrails
  2. Role-based access controls
  3. Firewall rules
  4. Model token quotas

Answer(s): A,D

Explanation:

Prompt guardrails are needed to prevent malicious or manipulated inputs (prompt injection) from causing the chatbot to provide harmful, misleading, or unauthorized responses.
Model token quotas limit the amount of input/output a user can generate, preventing abuse or excessive usage beyond the service-level agreement (SLA).



A security operations center (SOC) has a very high volume of logs and alerts. The manager proposes the implementation of machine learning (ML) system to help with triage. Which of the following tasks is most suitable?

  1. Applying filters on specific alerts
  2. Automatically patching vulnerable systems
  3. Identifying and classifying alerts
  4. Summarizing the content of alerts

Answer(s): C

Explanation:

Machine learning is best suited for analyzing large volumes of security data and distinguishing between true threats and false positives. By identifying and classifying alerts, the ML system helps the SOC prioritize incidents and reduce analyst workload.



An organization recently created a custom model that integrates with a language model (LLM). The developer notices that the application programming interface (API) costs have increased. Which of the following is the best control to reduce cost?

  1. Implementing prompt templates
  2. Increasing central processing unit (CPU) and memory
  3. Reducing the model size
  4. Adjusting token limits

Answer(s): D

Explanation:

API costs for large language model integrations are directly tied to token usage (input + output tokens). By adjusting token limits, the organization can reduce unnecessary processing of overly long prompts or responses, thereby lowering overall API costs without changing model size or infrastructure resources.



A security administrator needs to improve an AI model. During an initial investigation, the administrator notices that two successive login features are recorded every day, and then a successful login occurs after a specific time interval. All the successful login attempts have been during office hours. Which of the following techniques should the administrator use to improve the AI model's security?

  1. Access management
  2. Pattern recognition
  3. Signature matching
  4. Vulnerability analysis

Answer(s): B

Explanation:

The administrator is analyzing repeated login behaviors and time-based patterns that precede successful access. Pattern recognition allows the AI model to detect these behavioral trends, improving its ability to identify anomalies or potential attacks while aligning with normal office-hour login behavior.



Viewing page 2 of 15
Viewing questions 9 - 16 out of 106 questions


CY0-001 Exam Discussions & Posts (Share your experience with others)

AI Tutor AI Tutor 👋 I’m here to help!