Microsoft DP-700 Exam Questions
Implementing Data Engineering Solutions Using Microsoft Fabric (Page 4)

Updated On: 21-Feb-2026

You have a Fabric workspace named Workspace1 that contains a notebook named Notebook1.
In Workspace1, you create a new notebook named Notebook2.
You need to ensure that you can attach Notebook2 to the same Apache Spark session as Notebook1.
What should you do?

  A. Enable high concurrency for notebooks.
  B. Enable dynamic allocation for the Spark pool.
  C. Change the runtime version.
  D. Increase the number of executors.

Answer(s): A

Explanation:

To ensure that Notebook2 can attach to the same Apache Spark session as Notebook1, you need to enable high concurrency for notebooks. High concurrency allows multiple notebooks to share a Spark session, enabling them to run within the same Spark context and thus share resources like cached data, session state, and compute capabilities. This is particularly useful when you need notebooks to run in sequence or together while leveraging shared resources.
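As a rough illustration of the shared-state idea (the class and attribute names below are hypothetical stand-ins, not the Fabric or Spark API), a high concurrency session behaves like a single session object that several notebooks attach to, instead of each notebook starting its own:

```python
# Conceptual sketch only: SharedSparkSession is a hypothetical stand-in,
# not a real Fabric or Spark class. The point is that a high concurrency
# session is created once and shared, so state registered by one notebook
# is visible to any other notebook attached to the same session.

class SharedSparkSession:
    """Stand-in for one Spark session shared by several notebooks."""
    def __init__(self):
        self.temp_views = {}   # session-scoped state (e.g., temp views, caches)

session = SharedSparkSession()                 # started once

# "Notebook1" attaches and registers a temp view in the session.
session.temp_views["orders_v"] = [(1, "widget"), (2, "gadget")]

# "Notebook2" attaches to the SAME session rather than creating a new one,
# so the view (and any cached data) is already available to it.
notebook2 = session
print("orders_v" in notebook2.temp_views)      # True
```

Without high concurrency, each notebook would get its own session, and the second notebook would have to recreate or reload the view itself.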



You have a Fabric workspace named Workspace1 that contains a lakehouse named Lakehouse1. Lakehouse1 contains the following tables:
Orders
Customer
Employee
The Employee table contains Personally Identifiable Information (PII).
A data engineer is building a workflow that requires writing data to the Customer table; however, the data engineer does NOT have the elevated permissions required to view the contents of the Employee table.
You need to ensure that the data engineer can write data to the Customer table without reading data from the Employee table.
Which three actions should you perform? Each correct answer presents part of the solution.
Note: Each correct selection is worth one point.

  A. Share Lakehouse1 with the data engineer.
  B. Assign the data engineer the Contributor role for Workspace2.
  C. Assign the data engineer the Viewer role for Workspace2.
  D. Assign the data engineer the Contributor role for Workspace1.
  E. Migrate the Employee table from Lakehouse1 to Lakehouse2.
  F. Create a new workspace named Workspace2 that contains a new lakehouse named Lakehouse2.
  G. Assign the data engineer the Viewer role for Workspace1.

Answer(s): D,E,F

Explanation:

Isolating the PII is the key. Create a new workspace named Workspace2 that contains a new lakehouse named Lakehouse2, migrate the Employee table from Lakehouse1 to Lakehouse2, and assign the data engineer the Contributor role for Workspace1. The Contributor role lets the data engineer write to the Customer table in Lakehouse1, and because the data engineer has no role in Workspace2, the Employee table and its PII remain inaccessible.



You have a Fabric warehouse named DW1. DW1 contains a table that stores sales data and is used by multiple sales representatives.
You plan to implement row-level security (RLS).
You need to ensure that the sales representatives can see only their respective data.
Which warehouse object should you use to implement RLS?

  A. STORED PROCEDURE
  B. CONSTRAINT
  C. SCHEMA
  D. FUNCTION

Answer(s): D

Explanation:

To implement Row-Level Security (RLS) in a Fabric warehouse, you need to use a function that defines the security logic for filtering the rows of data based on the user's identity or role. This function can be used in conjunction with a security policy to control access to specific rows in a table.
In the case of sales representatives, the function would define the filtering criteria (e.g., based on a column such as SalesRepID or SalesRepName), ensuring that each representative can only see their respective data.
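The mechanism can be sketched in plain Python (in a Fabric warehouse this is actually done with a T-SQL inline table-valued function bound to the table by a security policy; the table, column, and function names below are assumptions for illustration):

```python
# Illustrative sketch of the RLS pattern: a predicate function decides,
# row by row, whether the current user may see that row, and the "policy"
# applies it to every query automatically. Names here are assumed, not
# the actual warehouse objects.

SALES = [
    {"order_id": 1, "sales_rep": "alice", "amount": 100},
    {"order_id": 2, "sales_rep": "bob",   "amount": 250},
    {"order_id": 3, "sales_rep": "alice", "amount": 75},
]

def security_predicate(row, current_user):
    """Analog of the RLS predicate function: keep a row only if it
    belongs to the user running the query."""
    return row["sales_rep"] == current_user

def query_sales(current_user):
    # The security policy applies the predicate to every row, so the
    # caller never sees rows that fail the check.
    return [r for r in SALES if security_predicate(r, current_user)]

print([r["order_id"] for r in query_sales("alice")])   # [1, 3]
```

In the warehouse itself, the same role is played by a function created with CREATE FUNCTION and attached to the table with a security policy, which is why FUNCTION is the required object.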



HOTSPOT (Drag and Drop is not supported)
You have a Fabric workspace named Workspace1_DEV that contains the following items:
10 reports
Four notebooks
Three lakehouses
Two data pipelines
Two Dataflow Gen1 dataflows
Three Dataflow Gen2 dataflows
Five semantic models, each with a scheduled refresh policy
You create a deployment pipeline named Pipeline1 to move items from Workspace1_DEV to a new workspace named Workspace1_TEST.
You deploy all the items from Workspace1_DEV to Workspace1_TEST.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Note: Each correct selection is worth one point.
Hot Area:

  A. See Explanation section for answer.

Answer(s): A

Explanation:



Data from the semantic models will be deployed to the target stage: No.
In a deployment pipeline, the semantic models themselves (including structure and definitions) are deployed, but the data they contain (such as cached data or the results of model refreshes) is not transferred automatically. You would need to handle data refreshes separately after the semantic models are deployed.

The Dataflow Gen1 dataflows will be deployed to the target stage: Yes.
Dataflows, including Dataflow Gen1 dataflows, are part of the deployment pipeline and will be deployed to the target stage. These dataflows are part of the solution being deployed, and the pipeline ensures their migration to the target workspace (Workspace1_TEST).

The scheduled refresh policies will be deployed to the target stage: No.
Although the scheduled refresh policies for semantic models are part of the configuration, they are not deployed through the pipeline. Deployment pipelines move content such as reports, notebooks, and dataflows, but scheduled refresh settings are not transferred with the deployment. You would need to configure the refresh policies manually in Workspace1_TEST after the deployment.



You have a Fabric deployment pipeline that uses three workspaces named Dev, Test, and Prod.
You need to deploy an eventhouse as part of the deployment process.
What should you use to add the eventhouse to the deployment process?

  A. GitHub Actions
  B. a deployment pipeline
  C. an Azure DevOps pipeline

Answer(s): C

Explanation:

Correct:
* an Azure DevOps pipeline
Incorrect:
* a deployment pipeline
* GitHub Actions
Eventhouses are not among the item types supported by Fabric deployment pipelines, so the eventhouse must be deployed through source control instead. Fabric Git integration works with Azure DevOps, which makes an Azure DevOps pipeline the appropriate way to add the eventhouse to the deployment process.





