(bright music) - In the previous lesson, we learned how to evaluate our decision tree by examining accuracy, precision, and recall. Our goal in this lesson is to create a graphic of our decision tree's logic. Although not strictly necessary, the decision tree graphic gives a good understanding of how the model makes predictions. This is especially useful when sharing your model's results with people who have no experience with machine learning.

First, we'll import the Graphviz library. This contains the functions we'll need to build this tree. Installing it involves using the Anaconda Prompt. While Anaconda installs most of the libraries you'll use for machine learning, it doesn't install all of them, as the installation would become too big. When this is the case, we need to install them manually using the Anaconda Prompt. Second, we'll visualize the decision tree. We'll do this using functions from the imported libraries.

We'll start step one by tapping the Windows key and then typing Anaconda. We can see Anaconda Prompt as one of the options here. We'll right-click it and select Run as administrator. Note that you may not have permission to do this if you're using a work machine. This means you will not be able to import the Graphviz library. You can resolve this by contacting your IT support. If this isn't possible, don't worry; decision tree graphs aren't necessary for building and deploying decision tree models.

Installing libraries with Anaconda Prompt is easy if you have the correct code. In this case, we just need to type conda install python-graphviz and hit enter. Anaconda will run this code for some time, so we'll jump ahead. We'll eventually arrive at a point where we're asked to confirm that we want to proceed. Again, we'll jump ahead through the rest of the installation process, and once it's finished, we can close this window.

We now need to import this library into our Jupyter notebook. We'll do this in a new cell, where we'll write import graphviz. On a line below, we'll type from sklearn.tree import export_graphviz. This imports a function from the tree section of the sklearn library that builds the decision tree graphic.

We'll now move on to step two, where we'll build the decision tree graphic. We'll start on a new line, where we'll create a new variable called graphic_export. This will store the output of the export_graphviz() function. We'll type an equal sign and then the export_graphviz function followed by a pair of parentheses. This function has a few parameters, so we'll add each one on a separate line to make our code more readable. The first parameter is our classifier variable, which we created in an earlier lesson and which contains the DecisionTreeClassifier() function. The next parameter needs to contain a list of our predictor feature names. We can get this by setting it to the list function applied to the predictors variable; you may recall from an earlier lesson that this function outputs the column names of a data frame. The next parameter needs to contain the different values found in our response feature. We'll set this to equal a list containing the strings "Stayed" and "Churned". The next two parameters are purely aesthetic. If we set the first to true, the corners of the rectangles in the graphic will be rounded. If we set the second to true, the rectangles in the graphic will be color-coded: orange will represent stayed, while blue will represent churned, and the darker the color, the stronger the prediction.

Next, we need to create a new variable called decision_tree_graphic. We'll set this to equal the Source() function, which we'll take from the graphviz library. The only argument this takes is the graphic_export variable we just created. We'll type decision_tree_graphic on a line below and run the code. When we do, we can see our decision tree chart in the output. However, it's very complex and too difficult to read. The point of this chart is readability, so having an overly complex one defeats the purpose. Fortunately, we can simplify (aka prune) it by modifying the DecisionTreeClassifier() function.
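The steps above can be sketched end to end. This is a minimal illustration, not the lesson's actual notebook: the tiny training set and the feature names are invented stand-ins for the churn data, and passing out_file=None makes export_graphviz() return the DOT text as a string instead of writing a file.

```python
from sklearn.tree import DecisionTreeClassifier, export_graphviz

# Invented stand-in for the lesson's churn data: 0 = stayed, 1 = churned.
X = [[30, 1], [45, 0], [25, 1], [50, 0], [40, 1], [35, 0]]
y = [1, 0, 1, 0, 1, 0]
predictors = ["age", "monthly_charges"]  # hypothetical feature names

classifier = DecisionTreeClassifier(random_state=42)
classifier.fit(X, y)

graphic_export = export_graphviz(
    classifier,
    feature_names=list(predictors),     # list() also pulls column names from a DataFrame
    class_names=["Stayed", "Churned"],  # maps onto the sorted class values 0, 1
    rounded=True,                       # rounded rectangle corners
    filled=True,                        # color-code nodes by predicted class
    out_file=None,                      # return the DOT source as a string
)

# Rendering the chart requires the separately installed graphviz package:
# import graphviz
# decision_tree_graphic = graphviz.Source(graphic_export)
# decision_tree_graphic  # run in a Jupyter cell to display the chart
```

The Source() step is left commented out so the sketch runs even where the graphviz package from conda isn't installed; graphic_export is plain DOT text either way.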
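The closing point, simplifying the tree by modifying the DecisionTreeClassifier() call, could be illustrated with the max_depth parameter, one common pruning option; the value 3 and the toy data here are illustrative assumptions, not taken from the lesson.

```python
from sklearn.tree import DecisionTreeClassifier

# Invented toy data; capping max_depth keeps the resulting chart readable.
X = [[30, 1], [45, 0], [25, 1], [50, 0], [40, 1], [35, 0]]
y = [1, 0, 1, 0, 1, 0]

pruned_classifier = DecisionTreeClassifier(max_depth=3, random_state=42)
pruned_classifier.fit(X, y)

# The fitted tree never grows deeper than the cap, so the exported
# graphic stays small no matter how large the training data is.
depth = pruned_classifier.get_depth()
```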