Free DP-100 Exam Braindumps

You are developing a hands-on workshop to introduce Docker for Windows to attendees.
You need to ensure that workshop attendees can install Docker on their devices.
Which two prerequisite components should attendees install on the devices? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  A. Microsoft Hardware-Assisted Virtualization Detection Tool
  B. Kitematic
  C. BIOS-enabled virtualization
  D. VirtualBox
  E. Windows 10 64-bit Professional

Answer(s): C,E


C: Make sure your Windows system supports Hardware Virtualization Technology and that hardware virtualization support is turned on in the BIOS settings.

E: To run Docker for Windows, your machine must have a 64-bit operating system; Windows 10 64-bit Professional (or Enterprise) is required.


You are preparing to use the Azure ML SDK to run an experiment and need to create compute. You run the following code:

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:

Exhibit A:

Exhibit B:

  A. Please refer to Exhibit B for the answer.

Answer(s): A


Box 1: No
If a compute cluster with the specified name already exists, it is reused rather than created.

Box 2: Yes
The wait_for_completion method waits for the current provisioning operation to finish on the cluster.

Box 3: Yes
Low-priority VMs use Azure's surplus capacity and are therefore cheaper, but your run risks being preempted.

Box 4: No
You need to call training_compute.delete() to deprovision and delete the AmlCompute target; it is not removed automatically.
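The four boxes above can be illustrated in one sketch. This is a hypothetical example, not the code from the exhibit: the function and cluster names are invented, and the azureml-sdk import is deferred inside the function so the sketch can be read without the SDK installed.

```python
def provision_training_cluster(ws, cluster_name="training-cluster"):
    """Hypothetical sketch: create or reuse an AmlCompute cluster on
    low-priority VMs, wait for provisioning, and return the target."""
    # Import deferred so this sketch parses without azureml-sdk installed.
    from azureml.core.compute import AmlCompute, ComputeTarget
    from azureml.core.compute_target import ComputeTargetException

    try:
        # If a cluster with this name already exists, it is reused (Box 1).
        target = ComputeTarget(workspace=ws, name=cluster_name)
    except ComputeTargetException:
        config = AmlCompute.provisioning_configuration(
            vm_size="STANDARD_DS2_V2",
            vm_priority="lowpriority",  # cheaper, but runs may be preempted (Box 3)
            max_nodes=4)
        target = ComputeTarget.create(ws, cluster_name, config)
        # Blocks until the current provisioning operation finishes (Box 2).
        target.wait_for_completion(show_output=True)
    return target

# Deprovisioning requires an explicit call (Box 4):
#     target.delete()
```

Note that wait_for_completion and delete are methods on the returned ComputeTarget, which is why Box 2 is Yes and Box 4 is No.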


You create a datastore named training_data that references a blob container in an Azure Storage account. The blob container contains a folder named csv_files in which multiple comma-separated values (CSV) files are stored.

You have a script in a local folder named ./script that you plan to run as an experiment using an estimator. The script includes the following code to read data from the csv_files folder:

You have the following script.

You need to configure the estimator for the experiment so that the script can read the data from a data reference named data_ref that references the csv_files folder in the training_data datastore.

Which code should you use to configure the estimator?

Answer(s): B


Besides passing the dataset through the inputs parameter of the estimator, you can also pass the dataset through script_params and get the data path (mounting point) in your training script via arguments. This way, you can keep your training script independent of azureml-sdk; in other words, you can use the same training script for local debugging and for remote training on any cloud platform.

from azureml.train.sklearn import SKLearn

script_params = {
    # mount the dataset on the remote compute and pass the mounted path
    # as an argument to the training script
    '--data-folder': mnist_ds.as_named_input('mnist').as_mount(),
    '--regularization': 0.5
}

# The original snippet was truncated after source_directory; the remaining
# arguments (entry_script name included) are illustrative.
est = SKLearn(source_directory=script_folder,
              script_params=script_params,
              compute_target=compute_target,
              entry_script='train_mnist.py')

# Run the experiment
run = experiment.submit(est)

Incorrect Answers:
A: A pandas DataFrame is not used here.
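The training-script side of this pattern needs no azureml-sdk imports at all, which is the point of passing the path through script_params. A minimal sketch (the argument values are supplied inline here for illustration; on a real run they come from the command line):

```python
import argparse

# Minimal sketch of the training script's argument handling. Because the
# mounted dataset path arrives as a plain CLI argument, the same script
# runs locally or on remote compute without importing azureml-sdk.
parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str, dest='data_folder',
                    help='path where the dataset is mounted')
parser.add_argument('--regularization', type=float, default=0.5)

# Simulate the arguments the estimator's script_params would produce;
# '/tmp/mnist' stands in for the actual mount point.
args = parser.parse_args(['--data-folder', '/tmp/mnist',
                          '--regularization', '0.5'])

data_path = args.data_folder
print(data_path)
print(args.regularization)
```

In the remote run, argparse receives the real mount path that `as_mount()` resolved, so the script never needs to know whether it is running locally or on AmlCompute.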


You create a multi-class image classification deep learning experiment by using the PyTorch framework. You plan to run the experiment on an Azure Compute cluster that has nodes with GPUs.

You need to define an Azure Machine Learning service pipeline to perform the monthly retraining of the image classification model. The pipeline must run with minimal cost and minimize the time required to train the model.

Which three pipeline steps should you run in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Select and Place:

Exhibit A:

Exhibit B:
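The exhibits with the answer choices are not reproduced here, so the following is only an illustrative sketch of how a retraining pipeline step on a GPU cluster might be assembled, not the exam's answer key. All names are hypothetical, and the azureml imports are deferred inside the function so the sketch can be read without the SDK installed.

```python
def build_retraining_pipeline(ws, gpu_cluster):
    """Hypothetical sketch: a pipeline with a single training step that
    targets a GPU AmlCompute cluster for monthly retraining."""
    # Imports deferred so this sketch parses without azureml-sdk installed.
    from azureml.pipeline.core import Pipeline
    from azureml.pipeline.steps import PythonScriptStep

    train_step = PythonScriptStep(
        name="train-pytorch-classifier",
        script_name="train.py",        # hypothetical entry script
        source_directory="./scripts",  # hypothetical folder
        compute_target=gpu_cluster,    # GPU nodes minimize training time
        allow_reuse=True)              # reuse unchanged steps to minimize cost
    return Pipeline(workspace=ws, steps=[train_step])
```

Setting allow_reuse=True lets the pipeline skip steps whose inputs and code have not changed, which helps meet the minimal-cost requirement in the question.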