Explainable AI Wiki

What is explainable AI?

Explainable AI is also referred to as interpretable machine learning. Organizations use explainable AI techniques to gain insight into their AI models' behavior and underlying decision-making processes.

Explainable AI, or explainable artificial intelligence (also known as XAI), is a field within artificial intelligence concerned with the ability of data scientists and machine learning engineers to understand and explain how an AI model reached its findings or predictions. The main aim of explainable AI is to make AI as transparent as possible, since most complex machine learning and deep learning models are black boxes.

Why is explainable AI important?

In progress.

Model Interpretability Techniques

SHAP

SHAP (SHapley Additive exPlanations) explains an individual prediction by attributing it to the input features using Shapley values from cooperative game theory: each feature's attribution is its average marginal contribution to the prediction across feature coalitions, and the attributions sum (together with a baseline expected value) to the model's output for that instance.
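A minimal sketch of computing SHAP values with the open-source shap library; the toy dataset, model, and sample sizes below are illustrative assumptions, not part of this wiki:

```python
# Minimal sketch, assuming the `shap` and `scikit-learn` packages are installed.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Illustrative toy model; any tree ensemble works with TreeExplainer.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley-value attributions for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:20])  # one attribution per feature per row

# For each row i, the attributions plus the expected value recover the prediction:
# model.predict(X[i:i+1]) ~= explainer.expected_value + shap_values[i].sum()
```

From the same values, shap.summary_plot(shap_values, X[:20]) gives a global view of which features drive the model's output.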