Explainable AI Whiteboard Technical Series: Decision Trees

The Explainable AI Whiteboard Technical Series explores and explains some of the key tools within our data science toolbox that power the Juniper AI-Native Networking Platform. In this video, we cover decision trees.
Juniper's decision trees help identify network issues such as faulty cables, unhealthy APs or switches, and poor wireless coverage, using algorithms like Random Forest, Gradient Boosting, and XGBoost. Decision trees follow three steps: collect data, train a model, and deploy it.
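
The snippet below is a minimal Python sketch of that collect/train/deploy loop using scikit-learn's RandomForestClassifier. The feature names, synthetic data, and thresholds are illustrative assumptions, not Juniper's actual telemetry or model.

```python
# Minimal collect -> train -> deploy sketch with hypothetical per-port
# telemetry features (frame errors, one-way traffic, utilisation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1. Collect data: synthetic stand-in for per-port telemetry.
n = 1000
X = np.column_stack([
    rng.poisson(2.0, n),        # frame errors per interval
    rng.integers(0, 2, n),      # one-way-traffic flag
    rng.normal(50.0, 10.0, n),  # link utilisation (%)
])
y = ((X[:, 0] > 4) | (X[:, 1] == 1)).astype(int)  # 1 = suspected bad cable

# 2. Train a model: an ensemble of decision trees (Random Forest).
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# 3. Deploy it: score new telemetry as it arrives.
print("held-out accuracy:", model.score(X_test, y_test))
```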

For example, they can detect cable faults using features like frame errors and one-way traffic. The tree decides which questions to ask using two metrics: Gini impurity (a measure of uncertainty) and information gain (how much a question reduces that uncertainty).
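
To make the two metrics concrete, here is a small Python sketch that scores a hypothetical yes/no question such as "frame_errors > 4". Information gain is computed here as the reduction in Gini impurity, matching the description above, and the sample values are made up for illustration.

```python
import numpy as np

def gini(labels):
    """Gini impurity: 0 = pure node, larger = more mixed labels."""
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels) / len(labels)
    return 1.0 - np.sum(p ** 2)

def information_gain(labels, answers):
    """Drop in impurity after splitting on a yes/no question."""
    yes, no = labels[answers], labels[~answers]
    weighted = (len(yes) * gini(yes) + len(no) * gini(no)) / len(labels)
    return gini(labels) - weighted

frame_errors = np.array([0, 1, 7, 9, 0, 8, 2, 6])
faulty       = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # 1 = faulty cable
print(information_gain(faulty, frame_errors > 4))   # gain from this question
```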

The root node starts with the entire dataset and splits it by asking yes/no questions. Splitting continues until the nodes reach a high level of certainty, and overfitting is avoided by pruning branches. Gini impurity ranges from zero (a pure node with no uncertainty) toward one (highly mixed labels), while information gain identifies the questions that reduce that uncertainty the most.
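
Pruning and early stopping are how off-the-shelf libraries keep a tree from memorizing noise. The sketch below uses scikit-learn's DecisionTreeClassifier on synthetic data purely to illustrate the idea; the parameter values are arbitrary assumptions, not Juniper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Stop growing early (pre-pruning) ...
shallow = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10).fit(X, y)

# ... or grow fully, then prune branches back (cost-complexity pruning).
pruned = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X, y)

full = DecisionTreeClassifier(random_state=0).fit(X, y)
print("depth without pruning:", full.get_depth())
print("depth with pruning:   ", pruned.get_depth())
```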

The tree grows by iterating over candidate feature values, selecting the question with the highest information gain at each node, and splitting the data until a final prediction is made at the leaf nodes. These decision trees are part of Juniper's AI-Native solutions for resolving network issues efficiently.
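
That greedy growth loop can be sketched in a few lines of Python. This toy version scans candidate thresholds, keeps the question with the highest information gain (reduction in Gini impurity), and recurses until no question improves purity; it is an illustration only, not Juniper's production code, and the toy feature columns are invented.

```python
import numpy as np

def gini(y):
    """Gini impurity of a set of integer class labels."""
    p = np.bincount(y) / len(y)
    return 1.0 - np.sum(p ** 2)

def best_question(X, y):
    """Scan every (feature, threshold) pair; return the split with the most gain."""
    best_f, best_t, best_gain = None, None, 0.0
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            mask = X[:, f] > t
            if mask.all() or not mask.any():
                continue                          # question does not split the data
            weighted = (mask.sum() * gini(y[mask])
                        + (~mask).sum() * gini(y[~mask])) / len(y)
            gain = gini(y) - weighted             # information gain of this question
            if gain > best_gain:
                best_f, best_t, best_gain = f, t, gain
    return best_f, best_t, best_gain

def grow(X, y):
    """Recursively split on the best question; stop at pure (leaf) nodes."""
    f, t, gain = best_question(X, y)
    if gain == 0.0:
        return {"predict": int(np.bincount(y).argmax())}   # leaf: majority label
    mask = X[:, f] > t
    return {"question": (f, t),
            "yes": grow(X[mask], y[mask]),
            "no": grow(X[~mask], y[~mask])}

# Toy data: column 0 = frame errors, column 1 = one-way-traffic flag.
X = np.array([[0, 0], [1, 0], [7, 1], [9, 0], [2, 0], [8, 1]])
y = np.array([0, 0, 1, 1, 0, 1])
print(grow(X, y))
```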

Chapters:
0:00: Introduction
1:51: Decision tree
2:41: Gini impurity
3:18: Information gain
4:02: Building the tree

What is Explainable AI?
https://www.juniper.net/us/en/research-topics/what-is-explainable-ai-xai.html

Explainable AI
https://www.juniper.net/us/en/dm/explainable-ai.html
Category: Juniper Networks
Tags: decision tree, ML, AI