Understanding decision trees

Decision trees are among the most commonly used supervised learning models for classification and regression tasks. In this section, we will show how you can visualize decision tree classifiers to better understand their logic.

Decision tree classifiers build a sequence of simple if/else rules on the data, which they then use to predict the target value.
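
To illustrate what those if/else rules look like, the fitted tree can also be printed as plain text with scikit-learn's export_text function. The following is a minimal, self-contained sketch that uses the built-in wine dataset rather than our well log data, purely for illustration:

from sklearn.datasets import load_wine
from sklearn.tree import DecisionTreeClassifier, export_text

# load a small built-in dataset so the sketch runs on its own
wine = load_wine()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(wine.data, wine.target)

# print the learned if/else rules as indented text
print(export_text(clf, feature_names=list(wine.feature_names)))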

Decision trees are usually easier to interpret than other models because of their structure and because the fitted tree can be visualized, for example with scikit-learn's export_graphviz function.

The following standard Python code visualizes the decision tree model that we built earlier in our notebook:

!pip install graphviz
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn import tree
from sklearn.datasets import load_wine
from IPython.display import SVG
from graphviz import Source
from IPython.display import display

# feature matrix
feature_names = [ 'GCR', 'NPHI', 'PE', 'ILD', 'ILM']
X = df_data_1[feature_names]

# target vector
y = df_data_1['lithofacies']

# fit a decision tree classifier
estimator = DecisionTreeClassifier()
estimator.fit(X, y)

graph = Source(tree.export_graphviz(estimator, out_file=None,
                                    feature_names=feature_names,
                                    filled=True))
display(SVG(graph.pipe(format='svg')))
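
If Graphviz is not installed in your environment, a similar diagram can be drawn with matplotlib alone using scikit-learn's plot_tree function. This is a minimal sketch, assuming the same estimator and feature_names as above:

import matplotlib.pyplot as plt
from sklearn import tree

# draw the fitted tree with matplotlib only (no Graphviz binary required)
fig, ax = plt.subplots(figsize=(20, 10))
tree.plot_tree(estimator, feature_names=feature_names, filled=True, ax=ax)
plt.show()

Alternatively, calling graph.render('decision_tree', format='pdf') writes the Graphviz output to a file, which is convenient when the tree is too large to inspect inline.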

The following screenshot shows the code executed in our notebook and the graphical output it produces (the full tree is too large to fit in a single screenshot):
