Free Salesforce-AI-Associate Exam Braindumps (page: 12)


A service leader wants to use AI to help customers resolve their issues more quickly in a guided self-serve application.
Which Einstein functionality provides the best solution?

  1. Case Classification
  2. Bots
  3. Recommendation

Answer(s): B

Explanation:

"Bots provide the best solution for a service leader who wants to use AI to help customers resolve their issues quicker in a guided self-serve application. Bots are a feature that uses natural language processing (NLP) and natural language understanding (NLU) to create conversational interfaces that can interact with customers using text or voice. Bots can help automate and streamline customer service processes by providing answers, suggestions, or actions based on the customer's intent and context."
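To make the idea concrete, here is a minimal, hypothetical sketch of intent matching in a self-service bot. Real Einstein Bots use NLP/NLU models rather than keyword overlap; the intent names and replies below are invented purely for illustration.

```python
# Hypothetical intent table: each intent maps a keyword set to a canned reply.
INTENTS = {
    "reset_password": ({"password", "reset", "login"},
                       "You can reset your password from the login page."),
    "order_status": ({"order", "status", "shipped"},
                     "You can check your order status under My Orders."),
}

def reply(utterance: str) -> str:
    """Pick the intent whose keywords best overlap the customer's words."""
    words = set(utterance.lower().split())
    best, best_overlap = None, 0
    for _intent, (keywords, answer) in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = answer, overlap
    # Fall back to escalation when no intent matches.
    return best or "Let me connect you with an agent."

print(reply("I need to reset my password"))
# You can reset your password from the login page.
```

The escalation fallback mirrors how guided self-service typically hands off to a human agent when the bot cannot resolve the issue.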



Why is it critical to consider privacy concerns when dealing with AI and CRM data?

  1. Ensures compliance with laws and regulations
  2. Confirms the data is accessible to all users
  3. Increases the volume of data collected

Answer(s): A

Explanation:

"It is critical to consider privacy concerns when dealing with AI and CRM data because it ensures compliance with laws and regulations. Data privacy is the right of individuals to control how their personal data is collected, used, shared, or stored by others. Data privacy laws and regulations are legal frameworks that define and enforce the rights and obligations of data subjects, data controllers, and data processors regarding personal data. Data privacy laws and regulations vary by country, region, or industry, and may impose different requirements or restrictions on how AI and CRM data can be handled."



Which action should be taken to develop and implement trusted generative AI with Salesforce's safety guideline in mind?

  1. Develop right-sized models to reduce our carbon footprint.
  2. Create guardrails that mitigate toxicity and protect PII
  3. Be transparent when AI has created and automatically delivered content.

Answer(s): B

Explanation:

"Creating guardrails that mitigate toxicity and protect PII is an action that should be taken to develop and implement trusted generative AI with Salesforce's safety guideline in mind. Salesforce's safety guideline is one of the Trusted AI Principles that states that AI systems should be designed and developed with respect for the safety and well-being of humans and the environment. Creating guardrails means implementing measures or mechanisms that can prevent or limit the potential harm or risk caused by AI systems. For example, creating guardrails can help mitigate toxicity by filtering out inappropriate or offensive content generated by AI systems. Creating guardrails can also help protect PII by masking or anonymizing personal or sensitive information generated by AI systems."
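As a rough illustration of the PII-masking side of a guardrail, the sketch below redacts email addresses and phone numbers from generated text before it reaches a user. This is a simplified assumption of how such a filter might work; production systems like the Einstein Trust Layer use far more sophisticated detection.

```python
import re

# Hypothetical patterns for two common PII types (illustrative only).
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def mask_pii(text: str) -> str:
    """Replace detected email addresses and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(mask_pii("Contact jane.doe@example.com or 555-123-4567 for help."))
# Contact [EMAIL] or [PHONE] for help.
```

A toxicity guardrail would work analogously, scoring or filtering generated content before delivery rather than masking substrings.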



What is a potential source of bias in training data for AI models?

  1. The data is collected in real time from source systems.
  2. The data is skewed toward a particular demographic or source.
  3. The data is collected from a diverse range of sources and demographics.

Answer(s): B

Explanation:

"A potential source of bias in training data for AI models is that the data is skewed toward a particular demographic or source. Skewed data means that the data is not balanced or representative of the target population or domain. Skewed data can introduce or exacerbate bias in AI models, as they may overfit or underfit the model to a specific subset of data. For example, skewed data can lead to bias if the data is collected from a limited or biased demographic or source, such as a certain age group, gender, race, location, or platform."
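The effect of skew is easy to see by tallying how a training set is distributed across demographic segments. The data and segment names below are invented solely to illustrate the imbalance described above.

```python
from collections import Counter

# Hypothetical training set for a support-case model, tagged with the
# demographic segment each record came from (illustrative data only).
training_records = (
    ["age_18_25"] * 80 +   # heavily over-represented
    ["age_26_40"] * 15 +
    ["age_41_65"] * 5      # under-represented
)

counts = Counter(training_records)
for segment in sorted(counts):
    share = counts[segment] / len(training_records)
    print(f"{segment}: {share:.0%}")
# age_18_25: 80%
# age_26_40: 15%
# age_41_65: 5%
```

A model fit to this data would see the 41–65 segment in only 5% of examples, so it may perform poorly for those customers even if overall accuracy looks good.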


