Visual Explanation of Evidence in Additive Classifiers
Machine-learned classifiers are important components of many data mining
and knowledge discovery systems. In several application domains, an explanation
of the classifier's reasoning is critical to its acceptance by the
end-user. We describe a framework, ExplainD, for explaining decisions made by classifiers
that use additive evidence. ExplainD applies to many widely used classifiers, including
linear discriminants and many additive models. We demonstrate our ExplainD framework using
implementations of naïve Bayes, linear support vector machine, and logistic regression
classifiers on example applications. ExplainD uses a simple graphical explanation of the
classification process to provide visualization of the classifier's decisions, visualization
of the evidence for those decisions, the capability to speculate on the effect of changes to
the data, and the capability, wherever possible, to drill down and audit the source of the evidence.
We demonstrate the effectiveness of ExplainD in the context of a deployed web-based system
(Proteome Analyst) and using a downloadable Python-based implementation.
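The notion of "additive evidence" can be illustrated with a minimal sketch: for a linear classifier such as logistic regression or a linear SVM, the decision score decomposes into a sum of per-feature contributions plus a bias, and each contribution can be displayed as evidence for or against a class. The weights, feature names, and inputs below are illustrative assumptions, not taken from the paper's implementations.

```python
# Illustrative additive-evidence decomposition for a linear classifier.
# Weights and feature names are hypothetical examples.
weights = {"feature_a": 1.5, "feature_b": -2.0, "feature_c": 0.5}
bias = 0.25

def evidence(x):
    """Per-feature evidence contributions w_i * x_i for input x."""
    return {name: w * x[name] for name, w in weights.items()}

def decide(x):
    """Sum the evidence and the bias; the sign gives the class."""
    score = bias + sum(evidence(x).values())
    return ("positive" if score >= 0 else "negative", score)

x = {"feature_a": 1.0, "feature_b": 0.5, "feature_c": 2.0}
contribs = evidence(x)   # each value is one bar in an evidence plot
label, score = decide(x)
```

Plotting `contribs` as signed bars gives the kind of per-feature evidence visualization the framework describes; "speculation" corresponds to re-running `decide` on a modified `x`.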