PDF Only
$35.00 Free Updates Up to 90 Days
- DP-100 Dumps PDF
- 428 Questions
- Updated On November 04, 2024
PDF + Test Engine
$55.00 Free Updates Up to 90 Days
- DP-100 Question Answers
- 428 Questions
- Updated On November 04, 2024
Test Engine
$45.00 Free Updates Up to 90 Days
- DP-100 Practice Questions
- 428 Questions
- Updated On November 04, 2024
How to pass the Microsoft DP-100 exam with the help of dumps?
DumpsPool provides the high-quality study resources you have been searching for, so you can stop stressing and start preparing for the exam. Our Online Test Engine gives you the guidance you need to pass the certification exam, and we guarantee top-grade results because every topic is covered in a precise and understandable manner. Our expert team prepared the latest Microsoft DP-100 Dumps to meet your training needs, and they come in two formats: Dumps PDF and Online Test Engine.
How Do I Know Microsoft DP-100 Dumps Are Worth It?
Did we mention our latest DP-100 Dumps PDF is also available as an Online Test Engine? And that is just where the benefits begin. Of all the features DumpsPool offers, the money-back guarantee may be the most valuable: you never have to worry about your payment. Beyond affordable Real Exam Dumps, you also get three months of free updates.
You can easily browse our large catalog of certification exams and pick any exam to start your training. That's right, DumpsPool isn't limited to just Microsoft exams. We know our customers need an authentic and reliable resource, so we make sure there is never any outdated content in our study materials. Our expert team keeps everything up to the mark by watching every single update. Our main focus is helping you understand the real exam format, so you can pass the exam more easily!
IT Students Are Using Our Designing and Implementing a Data Science Solution on Azure Dumps Worldwide!
It is a well-established fact that certification exams can't be conquered without some help from experts, and that is exactly the point of using Designing and Implementing a Data Science Solution on Azure Practice Question Answers. You are supported by IT experts who have been through what you are about to face and know it well. DumpsPool's 24/7 customer service keeps you in touch with these experts whenever you need them. Our 100% success rate and worldwide validity make us the most trusted resource among candidates. The updated Dumps PDF helps you pass the exam on the first attempt, and the money-back guarantee lets you buy with confidence: you can claim a refund if you do not pass the exam.
How to Get DP-100 Real Exam Dumps?
Getting access to real exam dumps is as easy as pressing a button, literally! Many resources available online sell scams or copied content, so if you are going to attempt the DP-100 exam, you need to be sure you are buying the right kind of dumps. All the Dumps PDF available on DumpsPool are as unique and up to date as they can be, and our Practice Question Answers are tested and approved by professionals, making them the most authentic resource available on the internet. Our experts make sure the Online Test Engine is free from outdated or fake content, repeated questions, and false or vague information. We make every penny count, and you leave our platform fully satisfied!
Frequently Asked Questions
Question # 1
You use the following code to run a script as an experiment in Azure Machine Learning: You must identify the output files that are generated by the experiment run. You need to add code to retrieve the output file names. Which code segment should you add to the script?
A. files = run.get_properties()
B. files= run.get_file_names()
C. files = run.get_details_with_logs()
D. files = run.get_metrics()
E. files = run.get_details()
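For context, the v1 Azure ML SDK exposes run-inspection methods on the Run object. A minimal sketch of listing a run's generated files (the experiment and script names are hypothetical):

from azureml.core import Workspace, Experiment, ScriptRunConfig

ws = Workspace.from_config()
experiment = Experiment(workspace=ws, name='my-experiment')  # hypothetical name
run = experiment.submit(ScriptRunConfig(source_directory='.', script='train.py'))
run.wait_for_completion()

# get_file_names() returns the names of files generated by the run,
# such as everything the script wrote to its ./outputs folder.
files = run.get_file_names()
print(files)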
Question # 2
You write five Python scripts that must be processed in the order specified in Exhibit A, which allows independent modules to run in parallel but waits for modules with dependencies. You must create an Azure Machine Learning pipeline using the Python SDK, because you want the script that creates the pipeline to be tracked in your version control system. You have created five PythonScriptSteps and have named the variables to match the module names.
A. Option A
B. Option B
C. Option C
D. Option D
Question # 3
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. An IT department creates the following Azure resource groups and resources: The IT department creates an Azure Kubernetes Service (AKS)-based inference compute target named aks-cluster in the Azure Machine Learning workspace. You have a Microsoft Surface Book computer with a GPU. Python 3.6 and Visual Studio Code are installed. You need to run a script that trains a deep neural network (DNN) model and logs the loss and accuracy metrics. Solution: Install the Azure ML SDK on the Surface Book. Run Python code to connect to the workspace and then run the training script as an experiment on local compute. Does the solution meet the goal?
A. Yes
B. No
Question # 4
A set of CSV files contains sales records. All the CSV files have the same data schema. Each CSV file contains the sales record for a particular month and has the filename sales.csv. Each file is stored in a folder that indicates the month and year when the data was recorded. The folders are in an Azure blob container for which a datastore has been defined in an Azure Machine Learning workspace. The folders are organized in a parent folder named sales to create the following hierarchical structure: At the end of each month, a new folder with that month's sales file is added to the sales folder. You plan to use the sales data to train a machine learning model based on the following requirements: You must define a dataset that loads all of the sales data to date into a structure that can be easily converted to a dataframe. You must be able to create experiments that use only data that was created before a specific previous month, ignoring any data that was added after that month. You must register the minimum number of datasets possible. You need to register the sales data as a dataset in Azure Machine Learning service workspace. What should you do?
A. Create a tabular dataset that references the datastore and explicitly specifies each 'sales/mm-yyyy/sales.csv' file every month. Register the dataset with the name sales_dataset each month, replacing the existing dataset and specifying a tag named month indicating the month and year it was registered. Use this dataset for all experiments.
B. Create a tabular dataset that references the datastore and specifies the path 'sales/*/sales.csv', register the dataset with the name sales_dataset and a tag named month indicating the month and year it was registered, and use this dataset for all experiments.
C. Create a new tabular dataset that references the datastore and explicitly specifies each 'sales/mm-yyyy/sales.csv' file every month. Register the dataset with the name sales_dataset_MM-YYYY each month with appropriate MM and YYYY values for the month and year. Use the appropriate month-specific dataset for experiments.
D. Create a tabular dataset that references the datastore and explicitly specifies each 'sales/mm-yyyy/sales.csv' file. Register the dataset with the name sales_dataset each month as a new version and with a tag named month indicating the month and year it was registered. Use this dataset for all experiments, identifying the version to be used based on the month tag as necessary.
Question # 5
You have the following Azure subscriptions and Azure Machine Learning service workspaces:
A. Yes
B. No
Question # 6
You are creating a classification model for a banking company to identify possible instances of credit card fraud. You plan to create the model in Azure Machine Learning by using automated machine learning. The training dataset that you are using is highly unbalanced. You need to evaluate the classification model. Which primary metric should you use?
A. normalized_mean_absolute_error
B. spearman_correlation
C. AUC_weighted
D. accuracy
E. normalized_root_mean_squared_error
Question # 7
You run an experiment that uses an AutoMLConfig class to define an automated machine learning task with a maximum of ten model training iterations. The task will attempt to find the best performing model based on a metric named accuracy. You submit the experiment with the following code: You need to create Python code that returns the best model that is generated by the automated machine learning task. Which code segment should you use?
A. Option A
B. Option B
C. Option C
D. Option D
Question # 8
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train and register a machine learning model. You plan to deploy the model as a real-time web service. Applications must use key-based authentication to use the model. You need to deploy the web service. Solution: Create an AksWebservice instance. Set the value of the auth_enabled property to True. Deploy the model to the service. Does the solution meet the goal?
A. Yes
B. No
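For context, key-based authentication on an AKS web service is controlled through the auth_enabled setting of the deployment configuration in the v1 SDK. A hedged sketch, assuming a workspace ws, a registered model, an inference configuration, and an AKS target already exist (the service name is hypothetical):

from azureml.core.model import Model
from azureml.core.webservice import AksWebservice

deployment_config = AksWebservice.deploy_configuration(auth_enabled=True)  # key-based auth
service = Model.deploy(ws, 'fraud-service', [model], inference_config,
                       deployment_config, deployment_target=aks_target)
service.wait_for_deployment(show_output=True)

# Clients present one of these keys in the Authorization header.
primary_key, secondary_key = service.get_keys()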
Question # 9
You use the following Python code in a notebook to deploy a model as a web service: The deployment fails. You need to use the Python SDK in the notebook to determine the events that occurred during service deployment and initialization. Which code segment should you use?
A. service.state
B. service.environment
C. service.get_logs()
D. service.serialize()
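For reference, a minimal sketch of pulling the deployment and initialization logs for a failed service (the service name is hypothetical and a workspace object ws is assumed):

from azureml.core.webservice import Webservice

service = Webservice(ws, name='my-service')  # hypothetical service name
# get_logs() returns the container logs, including entry-script errors
# raised during deployment and initialization.
print(service.get_logs())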
Question # 10
You must use the Azure Machine Learning SDK to interact with data and experiments in the workspace. You need to configure the config.json file to connect to the workspace from the Python environment. Which two additional parameters must you add to the config.json file in order to connect to the workspace? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. subscription_id
B. Key
C. resource_group
D. region
E. Login
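For context, the config.json file read by Workspace.from_config() holds the subscription ID, resource group, and workspace name. A minimal sketch (placeholder values):

# config.json, typically stored in a .azureml folder:
# {
#     "subscription_id": "<subscription-id>",
#     "resource_group": "<resource-group>",
#     "workspace_name": "<workspace-name>"
# }
from azureml.core import Workspace

ws = Workspace.from_config()  # searches the current and parent folders for config.json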
Question # 11
You use the Azure Machine Learning service to create a tabular dataset named training_data. You plan to use this dataset in a training script. You create a variable that references the dataset using the following code: training_ds = workspace.datasets.get("training_data") You define an estimator to run the script. You need to set the correct property of the estimator to ensure that your script can access the training_data dataset. Which property should you set?
A. Option A
B. Option B
C. Option C
D. Option D
Question # 12
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train a classification model by using a logistic regression algorithm. You must be able to explain the model’s predictions by calculating the importance of each feature, both as an overall global relative importance value and as a measure of local importance for a specific set of predictions. You need to create an explainer that you can use to retrieve the required global and local feature importance values. Solution: Create a MimicExplainer. Does the solution meet the goal?
A. Yes
B. No
Question # 13
You create an Azure Machine Learning workspace. You must configure an event handler to send an email notification when data drift is detected in the workspace datasets. You must minimize development effort. You need to configure an Azure service to send the notification. Which Azure service should you use?
A. Azure Function apps
B. Azure DevOps pipeline
C. Azure Automation runbook
D. Azure Logic Apps
Question # 14
You create a binary classification model. The model is registered in an Azure Machine Learning workspace. You use the Azure Machine Learning Fairness SDK to assess the model fairness. You develop a training script for the model on a local machine. You need to load the model fairness metrics into Azure Machine Learning studio. What should you do?
A. Implement the download_dashboard_by_upload_id function
B. Implement the create_group_metric_set function
C. Implement the upload_dashboard_dictionary function
D. Upload the training script
Question # 15
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You create an Azure Machine Learning service datastore in a workspace. The datastore contains the following files:
• /data/2018/Q1.csv
• /data/2018/Q2.csv
• /data/2018/Q3.csv
• /data/2018/Q4.csv
• /data/2019/Q1.csv
All files store data in the following format:
id,f1,f2,l
1,1,2,0
2,1,1,1
3,2,1,0
You run the following code:
A. Yes
B. No
Question # 16
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You create an Azure Machine Learning service datastore in a workspace. The datastore contains the following files:
• /data/2018/Q1.csv
• /data/2018/Q2.csv
• /data/2018/Q3.csv
• /data/2018/Q4.csv
• /data/2019/Q1.csv
All files store data in the following format:
id,f1,f2,l
1,1,2,0
2,1,1,1
3,2,1,0
You run the following code:
A. Yes
B. No
Question # 17
You use Azure Machine Learning Studio to build a machine learning experiment. You need to divide data into two distinct datasets. Which module should you use?
A. Partition and Sample
B. Assign Data to Clusters
C. Group Data into Bins
D. Test Hypothesis Using t-Test
Question # 18
You create a workspace by using Azure Machine Learning Studio. You must run a Python SDK v2 notebook in the workspace by using Azure Machine Learning Studio. You must preserve the current values of variables set in the notebook for the current instance. You need to maintain the state of the notebook. What should you do?
A. Change the compute.
B. Change the current kernel
C. Stop the compute.
D. Stop the current kernel.
Question # 19
You have an Azure Machine Learning workspace named workspace1. You must add a datastore that connects an Azure Blob storage container to workspace1. You must be able to configure a privilege level. You need to configure authentication. Which authentication method should you use?
A. Account key
B. SAS token
C. Service principal
D. Managed identity
Question # 20
You run a script as an experiment in Azure Machine Learning. You have a Run object named run that references the experiment run. You must review the log files that were generated during the experiment run. You need to download the log files to a local folder for review. Which two code segments can you run to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. run.get_details()
B. run.get_file_names()
C. run.get_metrics()
D. run.download_files(output_directory='./runfiles')
E. run.get_all_logs(destination='./runlogs')
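For reference, a sketch of the two calls that copy run files to local folders (the folder names are arbitrary, and run is assumed to reference the completed run):

run.download_files(output_directory='./runfiles')  # downloads all files from the run
run.get_all_logs(destination='./runlogs')          # downloads only the log files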
Question # 21
You develop and train a machine learning model to predict fraudulent transactions for a hotel booking website. Traffic to the site varies considerably. The site experiences heavy traffic on Monday and Friday and much lower traffic on other days. Holidays are also high web traffic days. You need to deploy the model as an Azure Machine Learning real-time web service endpoint on compute that can dynamically scale up and down to support demand. Which deployment compute option should you use?
A. attached Azure Databricks cluster
B. Azure Container Instance (ACI)
C. Azure Kubernetes Service (AKS) inference cluster
D. Azure Machine Learning Compute Instance
E. attached virtual machine in a different region
Question # 22
You train and register a model in your Azure Machine Learning workspace. You must publish a pipeline that enables client applications to use the model for batch inferencing. You must use a pipeline with a single ParallelRunStep step that runs a Python inferencing script to get predictions from the input data. You need to create the inferencing script for the ParallelRunStep pipeline step. Which two functions should you include? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. run(mini_batch)
B. main()
C. batch()
D. init()
E. score(mini_batch)
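For context, a ParallelRunStep entry script implements an init() function, called once per worker process, and a run(mini_batch) function, called for each batch of input files. A minimal sketch, assuming a registered scikit-learn model with a hypothetical name:

import os
import joblib
import pandas as pd
from azureml.core.model import Model

def init():
    # Runs once per worker process: load the registered model.
    global model
    model_path = Model.get_model_path('fraud-model')  # hypothetical model name
    model = joblib.load(model_path)

def run(mini_batch):
    # Runs for each mini-batch of files; return one result per input item.
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        predictions = model.predict(data)
        results.append(f'{os.path.basename(file_path)}: {predictions.tolist()}')
    return results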
Question # 23
You create a batch inference pipeline by using the Azure ML SDK. You run the pipeline by using the following code:
from azureml.pipeline.core import Pipeline
from azureml.core.experiment import Experiment
pipeline = Pipeline(workspace=ws, steps=[parallelrun_step])
pipeline_run = Experiment(ws, 'batch_pipeline').submit(pipeline)
You need to monitor the progress of the pipeline execution. What are two possible ways to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Option A
B. Option B
C. Option C
D. Option D
E. Option E
Question # 24
You are creating a new Azure Machine Learning pipeline using the designer. The pipeline must train a model using data in a comma-separated values (CSV) file that is published on a website. You have not created a dataset for this file. You need to ingest the data from the CSV file into the designer pipeline using the minimal administrative effort. Which module should you add to the pipeline in Designer?
A. Convert to CSV
B. Enter Data Manually
C. Import Data
D. Dataset
Question # 25
You use Azure Machine Learning to train a model. You must use a sampling method for tuning hyperparameters. The sampling method must pick samples based on how the model performed with previous samples. You need to select a sampling method. Which sampling method should you use?
A. Grid
B. Bayesian
C. Random
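For context, Bayesian sampling chooses new hyperparameter values based on how previous samples performed, unlike grid or random sampling. A hedged sketch (the parameter names and ranges are illustrative):

from azureml.train.hyperdrive import BayesianParameterSampling, choice, uniform

param_sampling = BayesianParameterSampling({
    '--batch_size': choice(16, 32, 64),
    '--learning_rate': uniform(0.001, 0.1),
})
# Note: Bayesian sampling does not support early-termination policies,
# because each trial's final metric informs the next samples.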
Question # 26
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train a classification model by using a logistic regression algorithm. You must be able to explain the model’s predictions by calculating the importance of each feature, both as an overall global relative importance value and as a measure of local importance for a specific set of predictions. You need to create an explainer that you can use to retrieve the required global and local feature importance values. Solution: Create a PFIExplainer. Does the solution meet the goal?
A. Yes
B. No
Question # 27
You create a multi-class image classification deep learning model. You train the model by using PyTorch version 1.2. You need to ensure that the correct version of PyTorch can be identified for the inferencing environment when the model is deployed. What should you do?
A. Save the model locally as a .pt file, and deploy the model as a local web service.
B. Deploy the model on computer that is configured to use the default Azure Machine Learning conda environment.
C. Register the model with a .pt file extension and the default version property.
D. Register the model, specifying the model_framework and model_framework_version properties.
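For context, a hedged sketch of registering a model with framework metadata so the inferencing stack can be identified (the path and name are hypothetical, and a workspace object ws is assumed):

from azureml.core.model import Model

model = Model.register(
    workspace=ws,
    model_path='./outputs/model.pt',          # hypothetical local path
    model_name='image-classifier',            # hypothetical name
    model_framework=Model.Framework.PYTORCH,  # records the framework
    model_framework_version='1.2',            # records the exact version
)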
Question # 28
You use Azure Machine Learning studio to analyze a dataset containing a decimal column named column1. You need to verify that the column1 values are normally distributed. Which statistic should you use?
A. Profile
B. Type
C. Max
D. Mean
Question # 29
You are a lead data scientist for a project that tracks the health and migration of birds. You create a multi-class image classification deep learning model that uses a set of labeled bird photographs collected by experts. You have 100,000 photographs of birds. All photographs use the JPG format and are stored in an Azure blob container in an Azure subscription. You need to access the bird photograph files in the Azure blob container from the Azure Machine Learning service workspace that will be used for deep learning model training. You must minimize data movement. What should you do?
A. Create an Azure Data Lake store and move the bird photographs to the store.
B. Create an Azure Cosmos DB database and attach the Azure Blob storage containing the bird photographs to the database.
C. Create and register a dataset by using TabularDataset class that references the Azure blob storage containing bird photographs.
D. Register the Azure blob storage containing the bird photographs as a datastore in Azure Machine Learning service.
E. Copy the bird photographs to the blob datastore that was created with your Azure Machine Learning service workspace.
Question # 30
You create an Azure Machine Learning workspace named workspace1. You create a Python SDK v2 notebook to perform custom model training in workspace1. You need to run the notebook from Azure Machine Learning Studio in workspace1. What should you provision first?
A. default storage account
B. real-time endpoint
C. Azure Machine Learning compute cluster
D. Azure Machine Learning compute instance
Question # 31
You create an Azure Machine Learning workspace. You train an MLflow-formatted regression model by using tabular structured data. You must use a Responsible AI dashboard to assess the model. You need to use the Azure Machine Learning studio UI to generate the Responsible AI dashboard. What should you do first?
A. Deploy the model to a managed online endpoint.
B. Register the model with the workspace.
C. Create the model explanations.
D. Convert the model from the MLflow format to a custom format.
Question # 32
You have a Python script that executes a pipeline. The script includes the following code:
from azureml.core import Experiment
pipeline_run = Experiment(ws, 'pipeline_test').submit(pipeline)
You want to test the pipeline before deploying the script. You need to display the pipeline run details written to the STDOUT output when the pipeline completes. Which code segment should you add to the test script?
A. pipeline_run.get_metrics()
B. pipeline_run.wait_for_completion(show_output=True)
C. pipeline_param = PipelineParameter(name="stdout", default_value="console")
D. pipeline_run.get_status()
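For reference, a minimal sketch of blocking until the pipeline finishes while streaming its details to STDOUT:

# Waits for the pipeline run to complete and echoes run output to STDOUT.
pipeline_run.wait_for_completion(show_output=True)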
Question # 33
You train a machine learning model. You must deploy the model as a real-time inference service for testing. The service requires low CPU utilization and less than 48 MB of RAM. The compute target for the deployed service must initialize automatically while minimizing cost and administrative overhead. Which compute target should you use?
A. Azure Kubernetes Service (AKS) inference cluster
B. Azure Machine Learning compute cluster
C. Azure Container Instance (ACI)
D. attached Azure Databricks cluster
Question # 34
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are using Azure Machine Learning to run an experiment that trains a classification model. You want to use Hyperdrive to find parameters that optimize the AUC metric for the model. You configure a HyperDriveConfig for the experiment by running the following code:
A. Yes
B. No
Question # 35
You create a training pipeline by using the Azure Machine Learning designer. You need to load data into a machine learning pipeline by using the Import Data component. Which two data sources could you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Azure Blob storage container through a registered datastore
B. Azure SQL Database
C. URL via HTTP
D. Azure Data Lake Storage Gen2
E. Registered dataset
Question # 36
You are creating a compute target to train a machine learning experiment. The compute target must support automated machine learning, machine learning pipelines, and Azure Machine Learning designer training. You need to configure the compute target. Which option should you use?
A. Azure HDInsight
B. Azure Machine Learning compute cluster
C. Azure Batch
D. Remote VM
Question # 37
You create a script that trains a convolutional neural network model over multiple epochs and logs the validation loss after each epoch. The script includes arguments for batch size and learning rate. You identify a set of batch size and learning rate values that you want to try. You need to use Azure Machine Learning to find the combination of batch size and learning rate that results in the model with the lowest validation loss. What should you do?
A. Run the script in an experiment based on an AutoMLConfig object
B. Create a PythonScriptStep object for the script and run it in a pipeline
C. Use the Automated Machine Learning interface in Azure Machine Learning studio
D. Run the script in an experiment based on a ScriptRunConfig object
E. Run the script in an experiment based on a HyperDriveConfig object
Question # 38
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. An IT department creates the following Azure resource groups and resources: The IT department creates an Azure Kubernetes Service (AKS)-based inference compute target named aks-cluster in the Azure Machine Learning workspace. You have a Microsoft Surface Book computer with a GPU. Python 3.6 and Visual Studio Code are installed. You need to run a script that trains a deep neural network (DNN) model and logs the loss and accuracy metrics. Solution: Attach the mlvm virtual machine as a compute target in the Azure Machine Learning workspace. Install the Azure ML SDK on the Surface Book and run Python code to connect to the workspace. Run the training script as an experiment on the mlvm remote compute resource. Does the solution meet the goal?
A. Yes
B. No
Question # 39
You plan to run a script as an experiment using a Script Run Configuration. The script uses modules from the scipy library as well as several Python packages that are not typically installed in a default conda environment. You plan to run the experiment on your local workstation for small datasets and scale out the experiment by running it on more powerful remote compute clusters for larger datasets. You need to ensure that the experiment runs successfully on local and remote compute with the least administrative effort. What should you do?
A. Create and register an Environment that includes the required packages. Use this Environment for all experiment runs.
B. Always run the experiment with an Estimator by using the default packages.
C. Do not specify an environment in the run configuration for the experiment. Run the experiment by using the default environment.
D. Create a config.yaml file defining the conda packages that are required and save the file in the experiment folder.
E. Create a virtual machine (VM) with the required Python configuration and attach the VM as a compute target. Use this compute target for all experiment runs.
Question # 40
You use the Azure Machine Learning Python SDK to define a pipeline to train a model. The data used to train the model is read from a folder in a datastore. You need to ensure the pipeline runs automatically whenever the data in the folder changes. What should you do?
A. Set the regenerate_outputs property of the pipeline to True
B. Create a ScheduleRecurrence object with a frequency of auto. Use the object to create a Schedule for the pipeline
C. Create a PipelineParameter with a default value that references the location where the training data is stored
D. Create a Schedule for the pipeline. Specify the datastore in the datastore property, and the folder containing the training data in the path_on_datastore property
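For context, a reactive Schedule in the v1 SDK polls a datastore path and triggers the published pipeline when the files under that path change. A hedged sketch, assuming a published pipeline and a registered datastore already exist (the names are hypothetical):

from azureml.pipeline.core import Schedule

schedule = Schedule.create(
    ws,
    name='retrain-on-new-data',          # hypothetical schedule name
    pipeline_id=published_pipeline.id,
    experiment_name='training',          # hypothetical experiment name
    datastore=datastore,                 # the datastore to watch
    path_on_datastore='training-data',   # folder whose changes trigger a run
    polling_interval=5,                  # minutes between checks
)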
Question # 41
You need to record the row count as a metric named row_count that can be returned using the get_metrics method of the Run object after the experiment run completes. Which code should you use?
A. run.upload_file('row_count', './data.csv')
B. run.log('row_count', rows)
C. run.tag('row_count', rows)
D. run.log_table('row_count', rows)
E. run.log_row('row_count', rows)
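For reference, a minimal sketch of logging a scalar metric that get_metrics() can return after the run completes (the data file is hypothetical):

from azureml.core import Run
import pandas as pd

run = Run.get_context()
data = pd.read_csv('data.csv')        # hypothetical input file
rows = len(data)
run.log('row_count', rows)            # scalar metric, returned by run.get_metrics()
run.complete()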
Question # 42
You have a dataset that is stored in an Azure Machine Learning workspace. You must perform a differential privacy data analysis by using the SmartNoise SDK. You need to measure the distribution of reports for repeated queries to ensure that they are balanced. Which type of test should you perform?
A. Bias
B. Accuracy
C. Privacy
D. Utility
Question # 43
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to use a Python script to run an Azure Machine Learning experiment. The script creates a reference to the experiment run context, loads data from a file, identifies the set of unique values for the label column, and completes the experiment run:
from azureml.core import Run
import pandas as pd
run = Run.get_context()
data = pd.read_csv('data.csv')
label_vals = data['label'].unique()
# Add code to record metrics here
run.complete()
The experiment must record the unique labels in the data as metrics for the run that can be reviewed later. You must add code to the script to record the unique label values as run metrics at the point indicated by the comment. Solution: Replace the comment with the following code:
for label_val in label_vals:
    run.log('Label Values', label_val)
Does the solution meet the goal?
A. Yes
B. No
Question # 44
You have a Jupyter Notebook that contains Python code that is used to train a model. You must create a Python script for the production deployment. The solution must minimize code maintenance. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. Refactor the Jupyter Notebook code into functions
B. Save each function to a separate Python file
C. Define a main() function in the Python script
D. Remove all comments and functions from the Python script
Question # 45
You have the following Azure subscriptions and Azure Machine Learning service workspaces:
A. Yes
B. No
Question # 46
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to use a Python script to run an Azure Machine Learning experiment. The script creates a reference to the experiment run context, loads data from a file, identifies the set of unique values for the label column, and completes the experiment run:
from azureml.core import Run
import pandas as pd
run = Run.get_context()
data = pd.read_csv('data.csv')
label_vals = data['label'].unique()
# Add code to record metrics here
run.complete()
The experiment must record the unique labels in the data as metrics for the run that can be reviewed later. You must add code to the script to record the unique label values as run metrics at the point indicated by the comment. Solution: Replace the comment with the following code:
run.log_table('Label Values', label_vals)
Does the solution meet the goal?
A. Yes
B. No
Question # 47
You create an Azure Machine Learning compute resource to train models. The compute resource is configured as follows: Minimum nodes: 2 Maximum nodes: 4 You must decrease the minimum number of nodes and increase the maximum number of nodes to the following values: Minimum nodes: 0 Maximum nodes: 8 You need to reconfigure the compute resource. What are three possible ways to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Use the Azure Machine Learning studio.
B. Run the update method of the AmlCompute class in the Python SDK.
C. Use the Azure portal.
D. Use the Azure Machine Learning designer.
E. Run the refresh_state() method of the BatchCompute class in the Python SDK.
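For reference, a sketch of rescaling an existing cluster with the v1 SDK's update method (the cluster name is hypothetical, and a workspace object ws is assumed):

from azureml.core.compute import AmlCompute

compute_target = AmlCompute(workspace=ws, name='cpu-cluster')  # hypothetical name
compute_target.update(min_nodes=0, max_nodes=8)
compute_target.wait_for_completion(show_output=True)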
Question # 48
You manage an Azure Machine Learning workspace by using the Azure CLI ml extension v2. You need to define a YAML schema to create a compute cluster. Which schema should you use?
A. https://azuremlschemas.azureedge.net/latest/computeInstance.schema.json
B. https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
C. https://azuremlschemas.azureedge.net/latest/vmCompute.schema.json
D. https://azuremlschemas.azureedge.net/latest/kubernetesCompute.schema.json
Question # 49
You use an Azure Machine Learning workspace. You have a trained model that must be deployed as a web service. Users must authenticate by using Azure Active Directory. What should you do?
A. Deploy the model to Azure Kubernetes Service (AKS). During deployment, set the token_auth_enabled parameter of the target configuration object to true
B. Deploy the model to Azure Container Instances. During deployment, set the auth_enabled parameter of the target configuration object to true
C. Deploy the model to Azure Container Instances. During deployment, set the token_auth_enabled parameter of the target configuration object to true
D. Deploy the model to Azure Kubernetes Service (AKS). During deployment, set the auth_enabled parameter of the target configuration object to true
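For context, Azure AD token authentication is available only on AKS deployments in the v1 SDK, and key authentication must be disabled when it is enabled. A hedged sketch:

from azureml.core.webservice import AksWebservice

deployment_config = AksWebservice.deploy_configuration(
    token_auth_enabled=True,  # Azure AD token authentication (AKS only)
    auth_enabled=False,       # key auth cannot be enabled at the same time
)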
Question # 50
You create a deep learning model for image recognition on Azure Machine Learning service using GPU-based training. You must deploy the model to a context that allows for real-time GPU-based inferencing. You need to configure compute resources for model inferencing. Which compute type should you use?
A. Azure Container Instance
B. Azure Kubernetes Service
C. Field Programmable Gate Array
D. Machine Learning Compute
Question # 51
You use the designer to create a training pipeline for a classification model. The pipeline uses a dataset that includes the features and labels required for model training. You create a real-time inference pipeline from the training pipeline. You observe that the schema for the generated web service input is based on the dataset and includes the label column that the model predicts. Client applications that use the service must not be required to submit this value. You need to modify the inference pipeline to meet the requirement. What should you do?
A. Add a Select Columns in Dataset module to the inference pipeline after the dataset and use it to select all columns other than the label.
B. Delete the dataset from the training pipeline and recreate the real-time inference pipeline.
C. Delete the Web Service Input module from the inference pipeline.
D. Replace the dataset in the inference pipeline with an Enter Data Manually module that includes data for the feature columns but not the label column.
Question # 52
You are implementing hyperparameter tuning by using Bayesian sampling for an Azure ML Python SDK v2-based model training from a notebook. The notebook is in an Azure Machine Learning workspace. The notebook uses a training script that runs on a compute cluster with 20 nodes. The code implements a Bandit termination policy with slack_factor set to 0.2 and a sweep job with max_concurrent_trials set to 10. You must increase the effectiveness of the tuning process by improving sampling convergence. You need to select which sampling convergence to use. What should you select?
A. Set the value of slack_factor of the early_termination policy to 0.1.
B. Set the value of max_concurrent_trials to 4.
C. Set the value of slack_factor of the early_termination policy to 0.9.
D. Set the value of max_concurrent_trials to 20.
Question # 53
You have an Azure Machine Learning workspace. You build a deep learning model. You need to publish a GPU-enabled model as a web service. Which two compute targets can you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Azure Kubernetes Service (AKS)
B. Azure Container Instances (ACI)
C. Local web service
D. Azure Machine Learning compute clusters
Question # 54
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. An IT department creates the following Azure resource groups and resources: The IT department creates an Azure Kubernetes Service (AKS)-based inference compute target named aks-cluster in the Azure Machine Learning workspace. You have a Microsoft Surface Book computer with a GPU. Python 3.6 and Visual Studio Code are installed. You need to run a script that trains a deep neural network (DNN) model and logs the loss and accuracy metrics. Solution: Install the Azure ML SDK on the Surface Book. Run Python code to connect to the workspace. Run the training script as an experiment on the aks-cluster compute target. Does the solution meet the goal?
A. Yes
B. No
Question # 55
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to use a Python script to run an Azure Machine Learning experiment. The script creates a reference to the experiment run context, loads data from a file, identifies the set of unique values for the label column, and completes the experiment run:
from azureml.core import Run
import pandas as pd
run = Run.get_context()
data = pd.read_csv('data.csv')
label_vals = data['label'].unique()
# Add code to record metrics here
run.complete()
The experiment must record the unique labels in the data as metrics for the run that can be reviewed later. You must add code to the script to record the unique label values as run metrics at the point indicated by the comment. Solution: Replace the comment with the following code:
run.upload_file('outputs/labels.csv', './data.csv')
Does the solution meet the goal?
A. Yes
B. No
Question # 56
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You create an Azure Machine Learning service datastore in a workspace. The datastore contains the following files:
• /data/2018/Q1.csv
• /data/2018/Q2.csv
• /data/2018/Q3.csv
• /data/2018/Q4.csv
• /data/2019/Q1.csv
All files store data in the following format:
id,f1,f2,l
1,1,2,0
2,1,1,1
3,2,1,0
You run the following code:
A. Yes
B. No
Question # 57
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to use a Python script to run an Azure Machine Learning experiment. The script creates a reference to the experiment run context, loads data from a file, identifies the set of unique values for the label column, and completes the experiment run: The experiment must record the unique labels in the data as metrics for the run that can be reviewed later. You must add code to the script to record the unique label values as run metrics at the point indicated by the comment. Solution: Replace the comment with the following code:
run.log_list('Label Values', label_vals)
Does the solution meet the goal?
A. Yes
B. No
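For reference, run.log_list records a list of values under a single metric name, which matches recording all unique labels for the run. A minimal sketch mirroring the script described in the question:

from azureml.core import Run
import pandas as pd

run = Run.get_context()
data = pd.read_csv('data.csv')
label_vals = data['label'].unique()
run.log_list('Label Values', label_vals.tolist())  # one metric holding the label list
run.complete()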
Question # 58
You train and register a machine learning model. You create a batch inference pipeline that uses the model to generate predictions from multiple data files. You must publish the batch inference pipeline as a service that can be scheduled to run every night. You need to select an appropriate compute target for the inference service. Which compute target should you use?
A. Azure Machine Learning compute instance
B. Azure Machine Learning compute cluster
C. Azure Kubernetes Service (AKS)-based inference cluster
D. Azure Container Instance (ACI) compute target
Question # 59
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have a Python script named train.py in a local folder named scripts. The script trains a regression model by using scikit-learn. The script includes code to load a training data file which is also located in the scripts folder. You must run the script as an Azure ML experiment on a compute cluster named amlcompute. You need to configure the run to ensure that the environment includes the required packages for model training. You have instantiated a variable named aml-compute that references the target compute cluster. Solution: Run the following code: Does the solution meet the goal?
A. Yes
B. No
Question # 60
You create a multi-class image classification model with automated machine learning in Azure Machine Learning. You need to prepare labeled image data as input for model training in the form of an Azure Machine Learning tabular dataset. Which data format should you use?
A. COCO
B. JSONL
C. JSON
D. Pascal VOC
Question # 61
You are using Azure Machine Learning to monitor a trained and deployed model. You implement Event Grid to respond to Azure Machine Learning events. Model performance has degraded due to model input data changes. You need to trigger a remediation ML pipeline based on an Azure Machine Learning event. Which event should you use?
A. RunStatusChanged
B. DatasetDriftDetected
C. ModelDeployed
D. RunCompleted
Question # 62
You create an Azure Machine Learning pipeline named pipeline1 with two steps that contain Python scripts. Data processed by the first step is passed to the second step. You must update the content of the downstream data source of pipeline1 and run the pipeline again. You need to ensure the new run of pipeline1 fully processes the updated content. Solution: Set the allow_reuse parameter of the PythonScriptStep object of both steps to False. Does the solution meet the goal?
A. Yes
B. No
Question # 63
You plan to use automated machine learning to train a regression model. You have data that has features which have missing values, and categorical features with few distinct values. You need to configure automated machine learning to automatically impute missing values and encode categorical features as part of the training task. Which parameter and value pair should you use in the AutoMLConfig class?
A. featurization = 'auto'
B. enable_voting_ensemble = True
C. task = 'classification'
D. exclude_nan_labels = True
E. enable_tf = True
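For context, featurization='auto' asks automated ML to handle preprocessing such as missing-value imputation and categorical encoding. A hedged sketch (the dataset and label column are hypothetical):

from azureml.train.automl import AutoMLConfig

automl_config = AutoMLConfig(
    task='regression',
    training_data=train_ds,         # hypothetical tabular dataset
    label_column_name='price',      # hypothetical label column
    primary_metric='normalized_root_mean_squared_error',
    featurization='auto',           # auto-impute missing values, encode categoricals
)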
Question # 64
You are a data scientist working for a bank and have used Azure ML to train and register a machine learning model that predicts whether a customer is likely to repay a loan. You want to understand how your model is making selections and must be sure that the model does not violate government regulations such as denying loans based on where an applicant lives. You need to determine the extent to which each feature in the customer data is influencing predictions. What should you do?
A. Enable data drift monitoring for the model and its training dataset.
B. Score the model against some test data with known label values and use the results to calculate a confusion matrix.
C. Use the Hyperdrive library to test the model with multiple hyperparameter values.
D. Use the interpretability package to generate an explainer for the model.
E. Add tags to the model registration indicating the names of the features in the training dataset.
Question # 65
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are using Azure Machine Learning to run an experiment that trains a classification model. You want to use Hyperdrive to find parameters that optimize the AUC metric for the model. You configure a HyperDriveConfig for the experiment by running the following code:
A. Yes
B. No
Question # 66
You train and register an Azure Machine Learning model. You plan to deploy the model to an online endpoint. You need to ensure that applications will be able to use the authentication method with a non-expiring artifact to access the model. Solution: Create a managed online endpoint with the default authentication settings. Deploy the model to the online endpoint. Does the solution meet the goal?
A. Yes
B. No
Question # 67
You deploy a real-time inference service for a trained model. The deployed model supports a business-critical application, and it is important to be able to monitor the data submitted to the web service and the predictions the data generates. You need to implement a monitoring solution for the deployed model using minimal administrative effort. What should you do?
A. View the explanations for the registered model in Azure ML studio.
B. Enable Azure Application Insights for the service endpoint and view logged data in the Azure portal.
C. Create an ML Flow tracking URI that references the endpoint, and view the data logged by ML Flow.
D. View the log files generated by the experiment used to train the model.
Question # 68
You plan to provision an Azure Machine Learning Basic edition workspace for a data science project. You need to identify the tasks you will be able to perform in the workspace. Which three tasks will you be able to perform? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Create a Compute Instance and use it to run code in Jupyter notebooks.
B. Create an Azure Kubernetes Service (AKS) inference cluster.
C. Use the designer to train a model by dragging and dropping pre-defined modules.
D. Create a tabular dataset that supports versioning.
E. Use the Automated Machine Learning user interface to train a model.
Question # 69
You use Azure Machine Learning designer to create a real-time service endpoint. You have a single Azure Machine Learning service compute resource. You train the model and prepare the real-time pipeline for deployment. You need to publish the inference pipeline as a web service. Which compute type should you use?
A. HDInsight
B. Azure Databricks
C. Azure Kubernetes Services
D. the existing Machine Learning Compute resource
E. a new Machine Learning Compute resource
Question # 70
You register a file dataset named csv_folder that references a folder. The folder includes multiple comma-separated values (CSV) files in an Azure storage blob container. You plan to use the following code to run a script that loads data from the file dataset. You create and instantiate the following variables:
A. Option A
B. Option B
C. Option C
D. Option D
Question # 71
You train a model and register it in your Azure Machine Learning workspace. You are ready to deploy the model as a real-time web service. You deploy the model to an Azure Kubernetes Service (AKS) inference cluster, but the deployment fails because an error occurs when the service runs the entry script that is associated with the model deployment. You need to debug the error by iteratively modifying the code and reloading the service, without requiring a re-deployment of the service for each code update. What should you do?
A. Register a new version of the model and update the entry script to load the new version of the model from its registered path.
B. Modify the AKS service deployment configuration to enable application insights and redeploy to AKS.
C. Create an Azure Container Instances (ACI) web service deployment configuration and deploy the model on ACI.
D. Add a breakpoint to the first line of the entry script and redeploy the service to AKS.
E. Create a local web service deployment configuration and deploy the model to a local Docker container.
Question # 72
You use Azure Machine Learning designer to create a training pipeline for a regression model. You need to prepare the pipeline for deployment as an endpoint that generates predictions asynchronously for a dataset of input data values. What should you do?
A. Clone the training pipeline.
B. Create a batch inference pipeline from the training pipeline.
C. Create a real-time inference pipeline from the training pipeline.
D. Replace the dataset in the training pipeline with an Enter Data Manually module.
Question # 73
You retrain an existing model. You need to register the new version of a model while keeping the current version of the model in the registry. What should you do?
A. Register a model with a different name from the existing model and a custom property named version with the value 2.
B. Register the model with the same name as the existing model.
C. Save the new model in the default datastore with the same name as the existing model. Do not register the new model.
D. Delete the existing model and register the new one with the same name.
Question # 74
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train a classification model by using a logistic regression algorithm. You must be able to explain the model’s predictions by calculating the importance of each feature, both as an overall global relative importance value and as a measure of local importance for a specific set of predictions. You need to create an explainer that you can use to retrieve the required global and local feature importance values. Solution: Create a TabularExplainer. Does the solution meet the goal?
A. Yes
B. No
Question # 75
You create an MLflow model. You must deploy the model to Azure Machine Learning for batch inference. You need to create the batch deployment. Which two components should you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Compute target
B. Kubernetes online endpoint
C. Model files
D. Online endpoint
E. Environment
Question # 76
You manage an Azure Machine Learning workspace named workspace1. You must develop Python SDK v2 code to attach an Azure Synapse Spark pool as a compute target in workspace1. The code must invoke the constructor of the SynapseSparkCompute class. You need to invoke the constructor. What should you use?
A. Synapse workspace web URL and Spark pool name
B. resource ID of the Synapse Spark pool and a user-defined name
C. pool URL of the Synapse Spark pool and a system-assigned name
D. Synapse workspace name and workspace web URL
Question # 77
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train and register a machine learning model. You plan to deploy the model as a real-time web service. Applications must use key-based authentication to use the model. You need to deploy the web service. Solution: Create an AciWebservice instance. Set the value of the ssl_enabled property to True. Deploy the model to the service. Does the solution meet the goal?
A. Yes
B. No
Question # 78
You run an automated machine learning experiment in an Azure Machine Learning workspace. Information about the run is listed in the table below:
A. Option A
B. Option B
C. Option C
D. Option D
Question # 79
You are training machine learning models in Azure Machine Learning. You use Hyperdrive to tune the hyperparameters. In previous model training and tuning runs, many models showed similar performance. You need to select an early termination policy that meets the following requirements: • accounts for the performance of all previous runs when evaluating the current run • avoids comparing the current run with only the best performing run to date Which two early termination policies should you use? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. Bandit
B. Median stopping
C. Default
D. Truncation selection
Question # 80
You register a model that you plan to use in a batch inference pipeline. The batch inference pipeline must use a ParallelRunStep step to process files in a file dataset. The script that the ParallelRunStep step runs must process six input files each time the inferencing function is called. You need to configure the pipeline. Which configuration setting should you specify in the ParallelRunConfig object for the ParallelRunStep step?
A. process_count_per_node= "6"
B. node_count= "6"
C. mini_batch_size= "6"
D. error_threshold= "6"
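For context, mini_batch_size on the ParallelRunConfig controls how many files from a file dataset are handed to each run(mini_batch) call. A hedged sketch (the folder, script, environment, and compute names are hypothetical):

from azureml.pipeline.steps import ParallelRunConfig

parallel_run_config = ParallelRunConfig(
    source_directory='./scripts',     # hypothetical folder containing the entry script
    entry_script='batch_score.py',    # hypothetical entry script
    mini_batch_size='6',              # six files per run(mini_batch) call
    error_threshold=10,
    output_action='append_row',
    environment=batch_env,            # hypothetical environment object
    compute_target=compute_target,
    node_count=2,
)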
Question # 81
You are developing a machine learning model. You must run inference with the machine learning model for testing. You need to use a minimal-cost compute target. Which two compute targets should you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Local web service
B. Remote VM
C. Azure Databricks
D. Azure Machine Learning Kubernetes
E. Azure Container Instances
Question # 82
You create a multi-class image classification deep learning model that uses the PyTorch deep learning framework. You must configure Azure Machine Learning Hyperdrive to optimize the hyperparameters for the classification model. You need to define a primary metric to determine the hyperparameter values that result in the model with the best accuracy score. Which three actions must you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. Set the primary_metric_goal of the estimator used to run the bird_classifier_train.py script to maximize.
B. Add code to the bird_classifier_train.py script to calculate the validation loss of the model and log it as a float value with the key loss.
C. Set the primary_metric_goal of the estimator used to run the bird_classifier_train.py script to minimize.
D. Set the primary_metric_name of the estimator used to run the bird_classifier_train.py script to accuracy.
E. Set the primary_metric_name of the estimator used to run the bird_classifier_train.py script to loss.
F. Add code to the bird_classifier_train.py script to calculate the validation accuracy of the model and log it as a float value with the key accuracy.
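For reference, a sketch tying the pieces together: the training script logs a float with the key accuracy, and the HyperDrive configuration names that key as the primary metric and maximizes it (the run configuration and sampling objects are hypothetical):

# In bird_classifier_train.py:
#     run.log('accuracy', float(validation_accuracy))
from azureml.train.hyperdrive import HyperDriveConfig, PrimaryMetricGoal

hyperdrive_config = HyperDriveConfig(
    run_config=script_run_config,            # hypothetical ScriptRunConfig
    hyperparameter_sampling=param_sampling,  # hypothetical sampling object
    primary_metric_name='accuracy',          # must match the logged metric key
    primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
    max_total_runs=20,
)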
Question # 83
You manage an Azure Machine Learning workspace. You have an environment for training jobs which uses an existing Docker image. A new version of the Docker image is available. You need to use the latest version of the Docker image for the environment configuration by using the Azure Machine Learning SDK v2. What should you do?
A. Modify the conda_file to specify the new version of the Docker image.
B. Use the Environment class to create a new version of the environment.
C. Use the create_or_update method to change the tag of the image.
D. Change the description parameter of the environment configuration.
Question # 84
You are a data scientist working for a hotel booking website company. You use the Azure Machine Learning service to train a model that identifies fraudulent transactions. You must deploy the model as an Azure Machine Learning real-time web service using the Model.deploy method in the Azure Machine Learning SDK. The deployed web service must return real-time predictions of fraud based on transaction data input. You need to create the script that is specified as the entry_script parameter for the InferenceConfig class used to deploy the model. What should the entry script do?
A. Start a node on the inference cluster where the web service is deployed.
B. Register the model with appropriate tags and properties.
C. Create a Conda environment for the web service compute and install the necessary Python packages.
D. Load the model and use it to predict labels from input data.
E. Specify the number of cores and the amount of memory required for the inference compute.
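For context, a real-time entry script implements init() to load the registered model and run() to score each request. A minimal sketch (the model name and input format are hypothetical):

import json
import joblib
import numpy as np
from azureml.core.model import Model

def init():
    # Runs once when the service starts: load the registered model.
    global model
    model_path = Model.get_model_path('fraud-model')  # hypothetical model name
    model = joblib.load(model_path)

def run(raw_data):
    # Runs per request: parse input JSON, predict, return a serializable result.
    data = np.array(json.loads(raw_data)['data'])
    predictions = model.predict(data)
    return predictions.tolist()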
Question # 85
You create an Azure Machine Learning workspace named ML-workspace. You also create an Azure Databricks workspace named DB-workspace. DB-workspace contains a cluster named DB-cluster. You must use DB-cluster to run experiments from notebooks that you import into DB-workspace. You need to use ML-workspace to track MLflow metrics and artifacts generated by experiments running on DB-cluster. The solution must minimize the need for custom code. What should you do?
A. From DB-cluster, configure the Advanced Logging option.
B. From DB-workspace, configure the Link Azure ML workspace option.
C. From ML-workspace, create an attached compute.
D. From ML-workspace, create a compute cluster.
Question # 86
You create an Azure Machine Learning workspace. You must configure an event-driven workflow to automatically trigger upon completion of training runs in the workspace. The solution must minimize the administrative effort to configure the trigger. You need to configure an Azure service to automatically trigger the workflow. Which Azure service should you use?
A. Event Grid subscription
B. Azure Automation runbook
C. Event Hubs Capture
D. Event Hubs consumer