CertNexus AIP-210 Exam
Certified Artificial Intelligence Practitioner

Updated On: 9-Feb-2026

Which three security measures could be applied in different ML workflow stages to defend them against malicious activities? (Select three.)

  1. Disable logging for model access.
  2. Launch ML instances in a virtual private cloud (VPC).
  3. Monitor model degradation.
  4. Use data encryption.
  5. Use max privilege to control access to ML artifacts.
  6. Use Secrets Manager to protect credentials.

Answer(s): 2, 4, 6

Explanation:

Security measures can be applied in different ML workflow stages to defend them against malicious activities, such as data theft, model tampering, or adversarial attacks. Some of the security measures are:
Launch ML Instances In a virtual private cloud (VPC): A VPC is a logically isolated section of a cloud provider's network that allows users to launch and control their own resources. By launching ML instances in a VPC, users can enhance the security and privacy of their data and models, as well as restrict the access and traffic to and from the instances. Use data encryption: Data encryption is the process of transforming data into an unreadable format using a secret key or algorithm. Data encryption can protect the confidentiality, integrity, and availability of data at rest (stored in databases or files) or in transit (transferred over networks). Data encryption can prevent unauthorized access, modification, or leakage of sensitive data. Use Secrets Manager to protect credentials: Secrets Manager is a service that helps users securely store, manage, and retrieve secrets, such as passwords, API keys, tokens, or certificates. Secrets Manager can help users protect their credentials from unauthorized access or exposure, as well as rotate them automatically to comply with security policies.



A healthcare company experiences a cyberattack, where the hackers were able to reverse-engineer a dataset to break confidentiality.
Which of the following is TRUE regarding the dataset parameters?

  1. The model is overfitted and trained on a high quantity of patient records.
  2. The model is overfitted and trained on a low quantity of patient records.
  3. The model is underfitted and trained on a high quantity of patient records.
  4. The model is underfitted and trained on a low quantity of patient records.

Answer(s): 2

Explanation:

Overfitting is a problem that occurs when a model learns too much from the training data and fails to generalize well to new or unseen data. Overfitting can result from using a low quantity of training data, a high complexity of the model, or a lack of regularization. Overfitting also increases the risk of reverse-engineering a dataset from a model's outputs (for example, via model-inversion or membership-inference attacks), because the model may reveal too much information about the specific features or patterns of the training data. This can break the confidentiality of the data and expose sensitive information about the individuals in the dataset.
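A toy illustration of the "low quantity of data" failure mode (the data values below are made up for the sketch): a degree-4 polynomial has as many parameters as these five points, so it memorizes them exactly but extrapolates wildly, while a simple linear fit tracks the underlying trend.

```python
import numpy as np

# Five noisy observations of a roughly linear trend (y ~ x).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.2, 1.9, 3.1, 4.0])

overfit = np.polyfit(x, y, deg=4)  # as many parameters as data points
simple = np.polyfit(x, y, deg=1)   # two parameters

# The overfitted model reproduces the training data almost exactly...
train_error = np.abs(np.polyval(overfit, x) - y).max()

# ...but at an unseen point (x = 6, true trend ~6) it is far off,
# while the simple model stays close.
overfit_pred = np.polyval(overfit, 6.0)
simple_pred = np.polyval(simple, 6.0)
```

The near-zero training error paired with a large error on new input is the signature of overfitting, and it is exactly this memorization of individual records that an attacker can exploit.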



When working with textual data and trying to classify text into different languages, which approach to representing features makes the most sense?

  1. Bag of words model with TF-IDF
  2. Bag of bigrams (2 letter pairs)
  3. Word2Vec algorithm
  4. Clustering similar words and representing words by group membership

Answer(s): 2

Explanation:

A bag of bigrams (2 letter pairs) is an approach to representing features for textual data that involves counting the frequency of each pair of adjacent letters in a text. For example, the word "hello" would be represented as {"he": 1, "el": 1, "ll": 1, "lo": 1}. A bag of bigrams captures information about the spelling and structure of words, which is useful for identifying the language of a text: each language has characteristic bigram frequencies, such as "th" in English or "ch" in German.
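The bigram counting described above fits in a few lines of plain Python. Note that the tiny English/German reference profiles below are illustrative placeholders, not real corpus statistics:

```python
from collections import Counter

def char_bigrams(text: str) -> Counter:
    """Count adjacent character pairs, e.g. 'hello' -> he, el, ll, lo."""
    text = text.lower()
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

# Toy reference profiles of common bigrams -- illustrative only.
PROFILES = {
    "english": {"th", "he", "in", "er", "an"},
    "german": {"ch", "en", "ei", "ie", "sc"},
}

def guess_language(text: str) -> str:
    """Pick the language whose profile bigrams occur most often in the text."""
    counts = char_bigrams(text)
    scores = {lang: sum(counts[bg] for bg in profile)
              for lang, profile in PROFILES.items()}
    return max(scores, key=scores.get)
```

A real system would compare full bigram frequency distributions (e.g. by cosine similarity) against profiles built from large corpora, but the principle is the same.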



Which of the following options is a correct approach for scheduling model retraining in a weather prediction application?

  1. As new resources become available
  2. Once a month
  3. When the input format changes
  4. When the input volume changes

Answer(s): 3

Explanation:

The input format is the way that the data is structured, organized, and presented to the model. For example, the input format could be a CSV file, an image file, or a JSON object. The input format can affect how the model interprets and processes the data, and therefore how it makes predictions.
When the input format changes, it may require retraining the model to adapt to the new format and ensure its accuracy and reliability. For example, if the weather prediction application switches from using numerical values to categorical values for some features, such as wind direction or cloud cover, it may need to retrain the model to handle these changes.
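One way to detect such a format change is to validate incoming records against the schema the model was trained on and flag retraining on any mismatch. This is a hedged sketch; the field names and types below are hypothetical, not from the exam:

```python
# Hypothetical training-time schema: field name -> expected Python type.
EXPECTED_SCHEMA = {
    "temperature": float,
    "humidity": float,
    "wind_direction": float,  # trained on numeric degrees, e.g. 270.0
}

def input_format_changed(record: dict) -> bool:
    """Return True if a record's fields or types differ from the training schema."""
    if set(record) != set(EXPECTED_SCHEMA):
        return True  # fields added or removed
    return any(not isinstance(record[name], expected)
               for name, expected in EXPECTED_SCHEMA.items())

# Same format as the training data: no retraining trigger.
ok = input_format_changed(
    {"temperature": 21.5, "humidity": 0.6, "wind_direction": 270.0})

# wind_direction switched from numeric to categorical: trigger retraining.
changed = input_format_changed(
    {"temperature": 21.5, "humidity": 0.6, "wind_direction": "NW"})
```

Running such a check in the ingestion pipeline turns "the input format changed" from a silent failure into an explicit retraining signal.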



Which of the following tools would you use to create a natural language processing application?

  1. AWS DeepRacer
  2. Azure Search
  3. DeepDream
  4. NLTK

Answer(s): 4

Explanation:

NLTK (Natural Language Toolkit) is a Python library that provides a set of tools and resources for natural language processing (NLP). NLP is a branch of AI that deals with analyzing, understanding, and generating natural language texts or speech. NLTK offers modules for various NLP tasks, such as tokenization, stemming, lemmatization, parsing, tagging, chunking, sentiment analysis, named entity recognition, machine translation, text summarization, and more.
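A minimal sketch of two of those tasks, tokenization and stemming, using NLTK components that need no extra corpus downloads (assumes `nltk` is installed via pip):

```python
from nltk.stem import PorterStemmer
from nltk.tokenize import RegexpTokenizer

# Split the sentence on word characters, then reduce each token to its stem.
tokenizer = RegexpTokenizer(r"\w+")
stemmer = PorterStemmer()

tokens = tokenizer.tokenize("NLTK makes tokenizing and stemming easy!")
stems = [stemmer.stem(t) for t in tokens]
```

Other NLTK features, such as part-of-speech tagging or named entity recognition, follow the same pattern but require downloading the corresponding data packages with `nltk.download()` first.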





