Salesforce Data-Con-101 Exam Questions
Salesforce Certified Data Cloud Consultant (Page 3)

Updated On: 27-Apr-2026

A Data Cloud customer wants to adjust their identity resolution rules to increase the accuracy of their matches. Rather than matching on email address, they want to review a rule that joins their CRM Contacts with their Marketing Contacts, where both use the CRM ID as their primary key.

Which two steps should the consultant take to address this new use case?

Choose 2 answers

  A. Map the primary key from the two systems to Party Identification, using CRM ID as the identification name for both.
  B. Map the primary key from the two systems to Party Identification, using CRM ID as the identification name for individuals coming from the CRM, and Marketing ID as the identification name for individuals coming from the marketing platform.
  C. Create a custom matching rule for an exact match on the Individual ID attribute.
  D. Create a matching rule based on party identification that matches on CRM ID as the party identification name.

Answer(s): A,D

Explanation:

To address this new use case, the consultant should map the primary key from the two systems to Party Identification, using CRM ID as the identification name for both, and create a matching rule based on party identification that matches on CRM ID as the party identification name. This ensures that CRM Contacts and Marketing Contacts are matched on their CRM ID, which is a unique identifier for each individual. Using Party Identification also brings the benefits of that attribute type, such as matching across different entities and sources and handling multiple values for the same individual.

The other options are incorrect because they either do not use the CRM ID as the primary key, or they do not use Party Identification as the attribute type.
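
The winning combination (A plus D) can be pictured as a small configuration sketch. The keys and values below are assumptions made for illustration and do not mirror the exact Identity Resolution Config Input schema; the point is the shape of the answer: both sources map their primary key to Party Identification under the same identification name, and the match rule keys on that shared name.

```python
# Illustrative only: these dict keys are assumptions, not the real
# Identity Resolution Config Input schema. Both sources map their
# primary key to Party Identification under the SAME identification
# name ("CRM ID"), which is what lets an exact-match rule join them
# (answer A).
party_identification_mappings = [
    {"source": "CRM_Contacts",       "field": "ContactId",    "identification_name": "CRM ID"},
    {"source": "Marketing_Contacts", "field": "CrmContactId", "identification_name": "CRM ID"},
]

# The match rule then targets that shared identification name (answer D).
match_rule = {
    "rule_type": "PartyIdentification",
    "identification_name": "CRM ID",
    "match_method": "Exact",
}
```

Had the two sources used different identification names, as in option B, an exact match on a single party identification name could never join the two sets of contacts.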


Reference:

Configure Identity Resolution Rulesets, Identity Resolution Match Rules, Data Cloud Identity Resolution Ruleset, Data Cloud Identity Resolution Config Input



Which consideration related to the way Data Cloud ingests CRM data is true?

  A. CRM data cannot be manually refreshed and must wait for the next scheduled synchronization.
  B. The CRM Connector's synchronization times can be customized to up to 15-minute intervals.
  C. Formula fields are refreshed at regular sync intervals and are updated at the next full refresh.
  D. The CRM Connector allows standard fields to stream into Data Cloud in real time.

Answer(s): D

Explanation:

The correct answer is D. The CRM Connector allows standard fields to stream into Data Cloud in real time. This means that any changes to the standard fields in the CRM data source are reflected in Data Cloud almost instantly, without waiting for the next scheduled synchronization. This feature enables Data Cloud to have the most up-to-date and accurate CRM data for segmentation and activation1.

The other options are incorrect for the following reasons:

A. CRM data can be manually refreshed at any time by clicking the Refresh button on the data stream detail page2. This option is false.

B. The CRM Connector's synchronization times can be customized to intervals as short as 60 minutes, not 15 minutes3. This option is false.

C. Formula fields are not refreshed at regular sync intervals, but only at the next full refresh4. A full refresh is a complete data ingestion process that occurs once every 24 hours or when manually triggered. This option is false.

1: Connect and Ingest Data in Data Cloud article on Salesforce Help

2: Data Sources in Data Cloud unit on Trailhead

3: Data Cloud for Admins module on Trailhead

4: Formula Fields in Data Cloud unit on Trailhead

5: Data Streams in Data Cloud unit on Trailhead



What does the Source Sequence reconciliation rule do in identity resolution?

  A. Includes data from sources where the data is most frequently occurring
  B. Identifies which individual records should be merged into a unified profile by setting a priority for specific data sources
  C. Identifies which data sources should be used in the process of reconciliation by prioritizing the most recently updated data source
  D. Sets the priority of specific data sources when building attributes in a unified profile, such as a first or last name

Answer(s): D

Explanation:

The Source Sequence reconciliation rule sets the priority of specific data sources when building attributes in a unified profile, such as a first or last name. This rule lets you define which data source should be used as the primary source of truth for each attribute, and which data sources should be used as fallbacks when the primary source is missing or invalid.

For example, you can set the Source Sequence rule to use data from Salesforce CRM as the first priority, data from Marketing Cloud as the second priority, and data from Google Analytics as the third priority for the first name attribute. The unified profile will then use the first name value from Salesforce CRM if it exists; otherwise it will fall back to Marketing Cloud, and so on. This rule helps ensure the accuracy and consistency of unified profile attributes across different data sources.
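
As a thought experiment, that fallback behavior can be sketched in a few lines of Python. The source names, record shape, and helper function are assumptions made for illustration; Data Cloud applies this logic internally per attribute rather than exposing it as code.

```python
# Minimal sketch of the *logic* behind a Source Sequence reconciliation
# rule, not how Data Cloud implements it. Source names and record shapes
# are assumptions.
SOURCE_PRIORITY = ["Salesforce_CRM", "Marketing_Cloud", "Google_Analytics"]

def reconcile_attribute(records, attribute):
    """Return the attribute value from the highest-priority source that has one."""
    by_source = {r["source"]: r for r in records}
    for source in SOURCE_PRIORITY:
        record = by_source.get(source)
        value = record.get(attribute) if record else None
        if value:  # skip sources where the value is missing or empty
            return value
    return None

records = [
    {"source": "Marketing_Cloud", "first_name": "Bob"},
    {"source": "Google_Analytics", "first_name": "Robert"},
    {"source": "Salesforce_CRM"},  # no first_name, so Marketing_Cloud wins
]
print(reconcile_attribute(records, "first_name"))  # -> "Bob"
```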


Reference:

Salesforce Data Cloud Consultant Exam Guide, Identity Resolution, Reconciliation Rules



Which two dependencies prevent a data stream from being deleted?

Choose 2 answers

  A. The underlying data lake object is used in activation.
  B. The underlying data lake object is used in a data transform.
  C. The underlying data lake object is mapped to a data model object.
  D. The underlying data lake object is used in segmentation.

Answer(s): B,C

Explanation:

To delete a data stream in Data Cloud, the underlying data lake object (DLO) must not have any dependencies or references to other objects or processes. The following two dependencies prevent a data stream from being deleted1:

Data transform: This is a process that transforms the ingested data into a standardized format and structure for the data model. A data transform can use one or more DLOs as input or output. If a DLO is used in a data transform, it cannot be deleted until the data transform is removed or modified2.

Data model object: This is an object that represents a type of entity or relationship in the data model. A data model object can be mapped to one or more DLOs to define its attributes and values. If a DLO is mapped to a data model object, it cannot be deleted until the mapping is removed or changed3.

1: Delete a Data Stream article on Salesforce Help

2: Data Transforms in Data Cloud unit on Trailhead

3: Data Model in Data Cloud unit on Trailhead



What should a user do to pause a segment activation with the intent of using that segment again?

  A. Deactivate the segment.
  B. Delete the segment.
  C. Skip the activation.
  D. Stop the publish schedule.

Answer(s): A

Explanation:

The correct answer is A, deactivate the segment. If a segment is no longer needed, it can be deactivated through Data Cloud, and the deactivation applies to all chosen targets. A deactivated segment no longer publishes, but it can be reactivated at any time1. This lets the user pause a segment activation with the intent of using that segment again.

The other options are incorrect for the following reasons:

B. Delete the segment. This option permanently removes the segment from Data Cloud and cannot be undone2. It does not allow the user to use the segment again.

C. Skip the activation. This option skips the current activation cycle for the segment, but does not affect future activation cycles3. It does not pause the segment activation indefinitely.

D. Stop the publish schedule. This option stops the segment from publishing to the chosen targets, but does not deactivate the segment4. It does not pause the segment activation completely.

1: Deactivated Segment article on Salesforce Help

2: Delete a Segment article on Salesforce Help

3: Skip an Activation article on Salesforce Help

4: Stop a Publish Schedule article on Salesforce Help



When creating a segment on an individual, what is the result of using two separate containers linked by an AND, as shown below?

GoodsProduct | Count | At Least | 1

Color | Is Equal To | red

AND

GoodsProduct | Count | At Least | 1

PrimaryProductCategory | Is Equal To | shoes

  A. Individuals who purchased at least one of any 'red' product and also purchased at least one pair of 'shoes'
  B. Individuals who purchased at least one 'red shoes' item as a single line item in a purchase
  C. Individuals who made a purchase of at least one 'red shoes' item and nothing else
  D. Individuals who purchased at least one of any 'red' product or purchased at least one pair of 'shoes'

Answer(s): A

Explanation:

When creating a segment on an individual, using two separate containers linked by an AND means that the individual must satisfy the conditions in both containers. In this case, the individual must have purchased at least one product with the color attribute equal to 'red' and at least one product with the primary product category attribute equal to 'shoes'. The products do not have to be the same or purchased in the same transaction. Therefore, the correct answer is A.

The other options are incorrect because they imply different logical operators or conditions. Option B implies that the individual must have purchased a single product that has both the color attribute equal to 'red' and the primary product category attribute equal to 'shoes'. Option C implies that the individual must have purchased only one such product and no other products. Option D implies that the individual must have purchased either one product with the color attribute equal to 'red' or one product with the primary product category attribute equal to 'shoes' or both, which is equivalent to using an OR operator instead of an AND operator.
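
To make the container semantics concrete, here is a minimal sketch in Python, assuming a toy purchase-history shape (the record fields are illustrative, not Data Cloud's internal representation). It shows why an individual with a red hat and blue shoes still qualifies under answer A:

```python
# Sketch of AND-across-containers logic in plain Python.
# Record fields are assumptions for illustration only.
purchases = [
    {"color": "red",  "category": "hats"},
    {"color": "blue", "category": "shoes"},
]

# Container 1: at least one product where Color = 'red'
has_red = sum(1 for p in purchases if p["color"] == "red") >= 1
# Container 2: at least one product where PrimaryProductCategory = 'shoes'
has_shoes = sum(1 for p in purchases if p["category"] == "shoes") >= 1

# AND joins the two *container results*, not attributes on a single row,
# so this individual qualifies even though no single item is a red shoe.
qualifies = has_red and has_shoes  # True
```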

Reference:

Create a Container for Segmentation, Create a Segment in Data Cloud, Navigate Data Cloud Segmentation



What should an organization use to stream inventory levels from an inventory management system into Data Cloud in a fast and scalable, near-real-time way?

  A. Cloud Storage Connector
  B. Commerce Cloud Connector
  C. Ingestion API
  D. Marketing Cloud Personalization Connector

Answer(s): C

Explanation:

The Ingestion API is a RESTful API that lets you stream data from any source into Data Cloud in a fast and scalable way. You can use it to send records from your inventory management system into Data Cloud as JSON objects, and then use Data Cloud to create data models, segments, and insights based on your inventory data. The Ingestion API supports both batch and streaming modes and can handle up to 100,000 records per second. It also provides features such as data validation, encryption, compression, and retry mechanisms to help ensure data quality and security.
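
A hedged sketch of a streaming insert is shown below. The endpoint pattern follows the streaming-ingestion shape described in the Ingestion API Developer Guide, but the tenant host, connector API name (inventory_connector), object name (InventoryLevel), record fields, and token are all placeholders you would replace with values from your own org.

```python
import requests

# Placeholders: substitute your Data Cloud tenant endpoint and an OAuth
# access token obtained via the documented token exchange.
TENANT = "MY_TENANT.c360a.salesforce.com"
ACCESS_TOKEN = "..."

# Streaming endpoint pattern: /api/v1/ingest/sources/{connector}/{object}.
# "inventory_connector" and "InventoryLevel" are hypothetical names.
url = f"https://{TENANT}/api/v1/ingest/sources/inventory_connector/InventoryLevel"

payload = {
    "data": [
        # One record per inventory change; fields must match the schema
        # registered on the Ingestion API connector.
        {"sku": "TENT-001", "warehouse": "WH-04", "quantity": 112},
    ]
}

resp = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()  # a 2xx response means the records were accepted
print(resp.status_code)
```

The API's batch mode follows a separate jobs-based flow; the streaming call above is the fit for the near-real-time inventory requirement in this question.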


Reference:

Ingestion API Developer Guide, Ingest Data into Data Cloud



Northern Trail Outfitters (NTO), an outdoor lifestyle clothing brand, recently started a new line of business. The new business specializes in gourmet camping food. For business reasons as well as security reasons, it's important to NTO to keep all Data Cloud data separated by brand.

Which capability best supports NTO's desire to separate its data by brand?

  A. Data streams for each brand
  B. Data model objects for each brand
  C. Data spaces for each brand
  D. Data sources for each brand

Answer(s): C

Explanation:

Data spaces are logical containers that allow you to separate and organize your data by different criteria, such as brand, region, product, or business unit1. Data spaces can help you manage data access, security, and governance, as well as enable cross-cloud data integration and activation2. For NTO, data spaces support the desire to separate data by brand, so that the outdoor lifestyle clothing and gourmet camping food businesses can have different data models, rules, and insights. Data spaces can also help NTO comply with any data privacy and security regulations that may apply to its different brands3.

The other options are incorrect because they do not provide the same level of data separation and organization as data spaces. Data streams are used to ingest data from different sources into Data Cloud, but they do not separate the data by brand4. Data model objects are used to define the structure and attributes of the data, but they do not isolate the data by brand5. Data sources are used to identify the origin and type of the data, but they do not partition the data by brand.


Reference:

Data Spaces Overview, Create Data Spaces, Data Privacy and Security in Data Cloud, Data Streams Overview, Data Model Objects Overview, Data Sources Overview


