CY0-001: CompTIA SecAI+ Beta
Free Practice Exam Questions (page: 2)
Updated On: 10-Jan-2026

Which of the following job roles in an organizational governance structure develops a model from business use cases?

  1. Platform architect
  2. AI risk analyst
  3. Machine learning operations (MLOps) engineer
  4. Data scientist

Answer(s): D

Explanation:

A data scientist develops models from business use cases by translating organizational needs into machine learning solutions. They prepare data, select algorithms, and build models that align with the use cases.



An administrator, who works for a financial institution, is required to implement data security controls for data at rest within AI systems that involve data disclosure.
Which of the following is the most suitable control?

  1. Data lineage
  2. Rate limits
  3. Encryption
  4. Masking

Answer(s): C

Explanation:

For financial institutions handling AI systems, protecting data at rest against disclosure requires encryption.
Encryption ensures that even if the storage medium is accessed or compromised, the data remains unreadable without the proper decryption keys.



A security engineer needs to monitor an AI-based system for runtime operations. The engineer is mostly concerned about the visibility of internal activity.
Which of the following is the most appropriate monitoring solution?

  1. Deploying a security information and event management (SIEM) tool
  2. Implementing a web application firewall (WAF) with header logging
  3. Relying on vendor model controls and monitoring prompt inputs
  4. Enabling stack call and debugging level traces at the function level

Answer(s): D

Explanation:

For runtime visibility into internal activity of an AI system, the most suitable control is enabling stack calls and debugging-level traces. This provides granular insights into function-level execution, dependencies, and operations, which directly supports monitoring of runtime behavior.



Which of the following should an auditor reference when reviewing a company's human resources AI systems for legal non-compliance?

  1. Organization for Economic Cooperation and Development (OECD) standard
  2. National Institute of Standards and Technology (NIST) AI Risk Management Framework 9RMF)
  3. European Union (EU) AI Act
  4. International Organization for Standardization (ISO)

Answer(s): C

Explanation:

The EU AI Act is legally binding legislation that specifically governs the use of AI systems, including those used in human resources for hiring, promotion, and evaluation. An auditor reviewing AI systems for legal non- compliance must reference this act because it establishes enforceable requirements related to transparency, bias, risk classification, and prohibited practices.



An airline corporation wants to implement a chatbot application using a large language model (LLM) so its customers:
Can ask question and receive answers about flight details.

Have the option to upload files.

Which of the following security controls should the airline use to protect against malicious input and unauthorized use beyond the service-level agreement? (Choose two.)

  1. Prompt guardrails
  2. Role-based access controls
  3. Firewall rules
  4. Model token quotas

Answer(s): A,D

Explanation:

Prompt guardrails are needed to prevent malicious or manipulated inputs (prompt injection) from causing the chatbot to provide harmful, misleading, or unauthorized responses.
Model token quotas limit the amount of input/output a user can generate, preventing abuse or excessive usage beyond the service-level agreement (SLA).



Viewing page 2 of 17
Viewing questions 6 - 10 out of 78 questions



Post your Comments and Discuss CompTIA CY0-001 exam prep with other Community members:

CY0-001 Exam Discussions & Posts