Explainable AI Whiteboard Technical Series: Mutual Information

The Explainable AI Whiteboard Technical Series explores and explains some of the key tools in the data science toolbox that powers the Juniper AI-Native Networking Platform. In this video, we cover mutual information.

Juniper's Mist AI platform uses mutual information to identify which network features—like mobile device type or access points—are most valuable for predicting the success or failure of SLE (Service Level Expectation) metrics.
Mutual information is built on the uncertainty (entropy) of random variables: it measures how much knowing one feature reduces uncertainty about another. Entropy is explained with a coin toss example: the more uncertain the outcome, the higher the entropy.
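
As a rough sketch of these ideas (not code from the video or the Mist platform), the snippet below estimates Shannon entropy for a fair versus a biased coin and mutual information via the identity I(X; Y) = H(X) + H(Y) - H(X, Y). The device-type and SLE-outcome samples are hypothetical and only meant to illustrate the calculation.

import numpy as np
from collections import Counter

def entropy(values):
    # Shannon entropy (in bits) of a discrete sample.
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_information(x, y):
    # I(X; Y) = H(X) + H(Y) - H(X, Y), estimated from two paired samples.
    return entropy(x) + entropy(y) - entropy(list(zip(x, y)))

# Coin-toss intuition: a fair coin is maximally uncertain (1 bit),
# a heavily biased coin is far more predictable (lower entropy).
fair_coin = ["heads", "tails"] * 500
biased_coin = ["heads"] * 900 + ["tails"] * 100
print(entropy(fair_coin))    # 1.0
print(entropy(biased_coin))  # ~0.47

# Hypothetical feature/outcome pairs: how much does knowing the device
# type reduce uncertainty about the SLE outcome?
device = ["phoneA", "phoneA", "phoneB", "phoneB"] * 250
outcome = ["success", "success", "failure", "success"] * 250
print(mutual_information(device, outcome))  # ~0.31 bits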

Additionally, Pearson correlation shows whether a feature is associated with success (positive correlation) or with failure (negative correlation). These concepts are integrated into the Mist dashboard, where they aid network performance analysis alongside the virtual assistant.
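
A minimal sketch of that sign interpretation, using hypothetical 0/1 encodings of a feature and an SLE outcome rather than real Mist data:

import numpy as np

# Hypothetical 0/1 encodings: feature present vs. absent, SLE success vs. failure.
feature = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 1])
success = np.array([1, 1, 0, 0, 0, 1, 0, 1, 1, 1])

# The sign of the Pearson coefficient gives the direction of the association:
# positive -> the feature tends to accompany success,
# negative -> the feature tends to accompany failure.
r = np.corrcoef(feature, success)[0, 1]
print(round(r, 2))  # ~0.52 for this made-up sample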

Chapters:
0:00 Introduction
0:34 Mutual Information Definition
1:24 Entropy
3:53 Pearson Correlation

What is explainable AI, or XAI?
https://www.juniper.net/us/en/research-topics/what-is-explainable-ai-xai.html

Explainable AI
https://www.juniper.net/us/en/dm/explainable-ai.html
Category: Juniper Networks
Tags: AI, xai, artificial intelligence