Microsoft DP-300 Exam Questions
Administering Microsoft Azure SQL Solutions (Page 6)

Updated On: 17-Feb-2026

You have an Azure SQL database named DB1.

You have a table named Table1 that has 20 columns of type CHAR(400). Row compression is enabled for Table1.

During a database audit, you discover that none of the fields contain more than 150 characters.

You need to ensure that you can apply page compression to Table1.

What should you do?

  A. Configure the columns as sparse.
  B. Change the column type to NVARCHAR(MAX).
  C. Change the column type to VARCHAR(MAX).
  D. Change the column type to VARCHAR(200).

Answer(s): D

Explanation:

Reducing the column type from CHAR(400) to VARCHAR(200) matches the actual data (no field exceeds 150 characters) and removes the fixed-length padding, after which page compression can be applied to the table.
Incorrect:
Not A: Sparse columns are useful when many values are NULL.
The SQL Server Database Engine uses the SPARSE keyword in a column definition to optimize the storage of values in that column: when the column value is NULL for any row in the table, the value requires no storage. That does not help here, because the fields contain data.
Not B, Not C: SQL Server 2005 worked around the 8-KB page size limit by introducing varchar(max), a non-Unicode large variable-length character data type that can store up to 2^31 - 1 bytes (2 GB); nvarchar(max) is its Unicode counterpart. Large-value data stored off-row is not page-compressed, so neither MAX type helps here.
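The fix can be sketched in T-SQL. This is a sketch under assumptions: the column names (Col1, Col2, ...) are placeholders, since the real names are not given in the question.

```sql
-- Shrink each fixed-length CHAR(400) column to VARCHAR(200);
-- the audit showed no field exceeds 150 characters, so no data is truncated.
ALTER TABLE dbo.Table1 ALTER COLUMN Col1 VARCHAR(200);
ALTER TABLE dbo.Table1 ALTER COLUMN Col2 VARCHAR(200);
-- ...repeat for the remaining columns...

-- Rebuild the table with page compression enabled.
ALTER TABLE dbo.Table1 REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Note that ALTER COLUMN to a shorter type fails if any existing value would be truncated, which is why the audit finding matters.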


Reference:

https://www.sqlshack.com/sql-varchar-data-type-deep-dive/
https://36chambers.wordpress.com/2020/06/18/nvarchar-everywhere-a-thought-experiment/



You have an on-premises Microsoft SQL Server instance named SQL1 that hosts five databases.

You need to migrate the databases to an Azure SQL managed instance. The solution must minimize downtime and prevent data loss.

What should you use?

  A. Always On availability groups
  B. Backup and restore
  C. Log shipping
  D. Database Migration Assistant

Answer(s): B
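For a managed instance, the backup-and-restore path is a native backup to Azure Blob Storage followed by a restore from URL on the managed instance. A minimal sketch, assuming placeholder names for the storage account, container, SAS token, and database:

```sql
-- On SQL1: create a SAS credential for the container, then back up to URL.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<sas-token>';

BACKUP DATABASE DB1
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/DB1.bak'
    WITH COPY_ONLY, COMPRESSION;

-- On the managed instance (after creating the same credential there):
RESTORE DATABASE DB1
    FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/DB1.bak';
```

Repeat for each of the five databases.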



You have an Azure subscription that contains an Azure SQL database. The database contains a table named table1 that uses partitioned columnstores.

You need to configure table1 to meet the following requirements:

Each partition must be compressed.

The compression ratio must be maximized.

You must be able to index the compressed data.

What should you use?

  A. page compression
  B. columnstore compression
  C. GZIP compression
  D. columnstore archival compression

Answer(s): D

Explanation:

SQL Server, Azure SQL Database, and Azure SQL Managed Instance support row and page compression for rowstore tables and indexes, and support columnstore and columnstore archival compression for columnstore tables and indexes.
All columnstore tables and indexes always use columnstore compression; this is not user-configurable.
Compressing columnstore indexes with archival compression causes the index to perform more slowly than columnstore indexes without archival compression. Use archival compression only when you can afford the extra time and CPU resources to compress and retrieve the data.
The benefit of archival compression is reduced storage, which is useful for data that is not accessed frequently. For example, if you have a partition for each month of data and most of your activity targets the most recent months, you could archive older months to reduce storage requirements.
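Archival compression is applied per table or per partition with a rebuild. A short sketch for the scenario's partitioned columnstore table (the specific partition number is illustrative):

```sql
-- Rebuild every partition of table1 with columnstore archival compression.
ALTER TABLE dbo.table1
    REBUILD PARTITION = ALL
    WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);

-- Or target a single cold partition, e.g. partition 3:
ALTER TABLE dbo.table1
    REBUILD PARTITION = 3
    WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
```

Switching back to standard columnstore compression uses the same statement with DATA_COMPRESSION = COLUMNSTORE.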


Reference:

https://docs.microsoft.com/en-us/sql/relational-databases/data-compression/data-compression



You have an Azure subscription linked to a Microsoft Entra tenant. The subscription contains 10 virtual machines that run Windows Server 2019 and host Microsoft SQL Server 2019 instances.

You need to ensure that you can manage the SQL Server instances by using a single user account.

What should you do first?

  A. Enable a user-assigned managed identity on each virtual machine.
  B. Deploy a Microsoft Entra Domain Services domain and join the virtual machines to the domain.
  C. Enable a system-assigned managed identity on each virtual machine.
  D. Join the virtual machines to the Microsoft Entra tenant.

Answer(s): B



DRAG DROP (Drag and Drop is not supported)

You have an Azure subscription.

You plan to deploy a new Azure virtual machine that will host a Microsoft SQL Server instance.

You need to configure the disks on the virtual machine. The solution must meet the following requirements:

Minimize latency for transaction logs.

Minimize VM allowed IO.

Which type of disk should you use for each workload? To answer, drag the appropriate disk types to the correct workloads. Each disk type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

Note: Each correct selection is worth one point.

Select and Place:

  A. See Explanation section for answer.

Answer(s): A

Explanation:

Box 1: Premium SSD

From the storage performance best practices for SQL Server on Azure VMs: after choosing the optimal VM size, place tempdb on the local ephemeral SSD (the default D:\ drive) for most SQL Server workloads that are not part of a Failover Cluster Instance (FCI).

For the data drive, select Premium SSD instead of Standard SSD for better performance.

Box 2: Ultra Disk
For the log drive, plan for capacity and test performance versus cost while evaluating the Premium P30 - P80 disks.
If submillisecond storage latency is required, use Azure ultra disks for the transaction log. For M-series virtual machine deployments, consider Write Accelerator instead of Azure ultra disks.


Reference:

https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-storage





