Microsoft DP-700 Exam Questions
Implementing Data Engineering Solutions Using Microsoft Fabric (Page 3)

Updated On: 21-Feb-2026

You have a Fabric workspace that contains a warehouse named Warehouse1.
You have an on-premises Microsoft SQL Server database named Database1 that is accessed by using an on-premises data gateway.
You need to copy data from Database1 to Warehouse1.
Which item should you use?

  1. a Dataflow Gen1 dataflow
  2. a data pipeline
  3. a KQL queryset
  4. a notebook

Answer(s): B

Explanation:

To copy data from an on-premises Microsoft SQL Server database (Database1) to a warehouse (Warehouse1) in Microsoft Fabric, the best option is to use a data pipeline. A data pipeline in Fabric allows for the orchestration of data movement, from source to destination, using connectors, transformations, and scheduled workflows. Since the data is being transferred from an on-premises database and requires the use of a data gateway, a data pipeline provides the appropriate framework to facilitate this data movement efficiently and reliably.



You have a Fabric workspace that contains a warehouse named Warehouse1.
You have an on-premises Microsoft SQL Server database named Database1 that is accessed by using an on-premises data gateway.
You need to copy data from Database1 to Warehouse1.
Which item should you use?

  1. an Apache Spark job definition
  2. a data pipeline
  3. a Dataflow Gen1 dataflow
  4. an eventstream

Answer(s): B

Explanation:

To copy data from an on-premises Microsoft SQL Server database (Database1) to a warehouse (Warehouse1) in Fabric, a data pipeline is the most appropriate tool. A data pipeline in Fabric is designed to move data between various data sources and destinations, including on-premises databases like SQL Server and cloud-based storage like Fabric warehouses. The data pipeline can handle the connection through an on-premises data gateway, which is required to access on-premises data. This solution facilitates the orchestration of data movement and transformations if needed.



You have a Fabric F32 capacity that contains a workspace. The workspace contains a warehouse named DW1 that is modelled by using MD5 hash surrogate keys.
DW1 contains a single fact table that has grown from 200 million rows to 500 million rows during the past year.
You have Microsoft Power BI reports that are based on Direct Lake. The reports show year-over-year values.
Users report that the performance of some of the reports has degraded over time and some visuals show errors.
You need to resolve the performance issues. The solution must meet the following requirements:
Provide the best query performance.
Minimize operational costs.
What should you do?

  1. Change the MD5 hash to SHA256.
  2. Increase the capacity.
  3. Enable V-Order.
  4. Modify the surrogate keys to use a different data type.
  5. Create views.

Answer(s): B

Explanation:

Correct:
* Increase the capacity.
In Direct Lake mode, each Fabric capacity SKU enforces a guardrail on the maximum number of rows per table. On an F32 capacity the limit is 300 million rows, so once the fact table grew to 500 million rows, report queries began exceeding the guardrail, falling back to DirectQuery (degrading performance) or returning errors in visuals. Upgrading to an F64 capacity raises the guardrail to 1.5 billion rows per table, which restores Direct Lake query performance without redesigning the model.
Incorrect:
* Change the MD5 hash to SHA256. Changing the hash algorithm does not affect the row-count guardrail.
* Enable V-Order. V-Order improves read performance but does not raise the row limit.
* Modify the surrogate keys to use a different data type. A different key data type does not change the number of rows.
* Create views. Views do not reduce the size of the underlying table that Direct Lake reads.



HOTSPOT (Drag and Drop is not supported)
You have a Fabric workspace that contains a warehouse named DW1. DW1 contains the following tables and columns.

You need to create an output that presents the summarized values of all the order quantities by year and product. The results must include a summary of the order quantities at the year level for all the products.
How should you complete the code? To answer, select the appropriate options in the answer area.
Note: Each correct selection is worth one point.
Hot Area:

  1. See Explanation section for answer.

Answer(s): A

Explanation:



Summarize by year and product: The query needs to group data by both year and product name.
Include a summary of order quantities at the year level: This is achieved using grouping mechanisms like ROLLUP or CUBE.
SELECT YEAR(SO.ModifiedDate)
Since the data must be summarized by year, extracting the year from the ModifiedDate column with YEAR(SO.ModifiedDate) is the correct choice. Options such as CAST or CONVERT do not extract the year component.
ROLLUP(YEAR(SO.ModifiedDate), P.Name)
The ROLLUP subclause of GROUP BY generates subtotal rows for each grouping combination. In this case:
It groups by YEAR(SO.ModifiedDate) and P.Name (product name).
It also emits a subtotal row for each year across all products, which meets the requirement of summarizing the order quantities at the year level for all products.
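Putting the pieces together, the completed query likely resembles the following sketch. Only the aliases SO and P, the ModifiedDate and Name columns, and the ROLLUP grouping appear in the explanation; the table names (dbo.SalesOrderDetail, dbo.Product), the join key, and the OrderQty measure are assumptions.

```sql
-- Hypothetical reconstruction: table names, the join key, and the
-- OrderQty column are assumed; the grouping logic matches the
-- explanation above.
SELECT
    YEAR(SO.ModifiedDate) AS OrderYear,
    P.Name                AS ProductName,
    SUM(SO.OrderQty)      AS SumOrderQty
FROM dbo.SalesOrderDetail AS SO
INNER JOIN dbo.Product AS P
    ON P.ProductID = SO.ProductID
GROUP BY ROLLUP (YEAR(SO.ModifiedDate), P.Name);
```

In addition to one row per (year, product) pair, ROLLUP emits a subtotal row per year (where ProductName is NULL) and a grand-total row (where both grouping columns are NULL).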



You have a Fabric workspace that contains a lakehouse named Lakehouse1. Data is ingested into Lakehouse1 as one flat table. The table contains the following columns.


You plan to load the data into a dimensional model and implement a star schema. From the original flat table, you create two tables named FactSales and DimProduct. You will track changes in DimProduct.
You need to prepare the data.
Which three columns should you include in the DimProduct table? Each correct answer presents part of the solution.
Note: Each correct selection is worth one point.

  1. Date
  2. ProductName
  3. ProductColor
  4. TransactionID
  5. SalesAmount
  6. ProductID

Answer(s): B,C,F

Explanation:

In a star schema, the DimProduct table serves as a dimension table that contains descriptive attributes about products. It will provide context for the FactSales table, which contains transactional data. The following columns should be included in the DimProduct table:
1. ProductName: The ProductName is an important descriptive attribute of the product, which is needed for analysis and reporting in a dimensional model.
2. ProductColor: ProductColor is another descriptive attribute of the product. In a star schema, it makes sense to include attributes like color in the dimension table to help categorize products in the analysis.
3. ProductID: ProductID is the primary key for the DimProduct table, which will be used to join the FactSales table to the product dimension. It's essential for uniquely identifying each product in the model.
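The split described above could be sketched in T-SQL with CREATE TABLE AS SELECT (CTAS). This is a minimal illustration, not the exam's reference solution: the flat table name dbo.SalesFlat is an assumption, while the column names come from the answer options.

```sql
-- Hypothetical sketch: the flat table name (dbo.SalesFlat) is assumed;
-- column names are taken from the answer options above.

-- Dimension table: one row per product, holding descriptive attributes.
CREATE TABLE dbo.DimProduct AS
SELECT DISTINCT ProductID, ProductName, ProductColor
FROM dbo.SalesFlat;

-- Fact table: transactional measures, keeping ProductID as the foreign key.
CREATE TABLE dbo.FactSales AS
SELECT TransactionID, [Date], ProductID, SalesAmount
FROM dbo.SalesFlat;
```

Because the scenario says changes in DimProduct will be tracked (a slowly changing dimension), a production model would typically also add a surrogate key and validity columns to the dimension; the question, however, only asks which source columns belong in it.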





