[120 total points]
Refer to the web page for policies regarding collaboration, due dates, and extensions.
In many classification tasks, naive Bayes is competitive with, or even outperforms, the best Bayesian-network models... which is surprising, given that the naive Bayes model is so simple that it essentially ignores dependencies between features! This seems to be an argument against structure learning, until one realizes that most structure-learning methods try to model the joint distribution, which does not necessarily correspond to a good estimate of the class-conditional distribution.
Tree-Augmented Naive Bayes (TAN) is a model that attempts to optimize this class-conditional distribution. It augments naive Bayes by adding arcs between features so that each feature has as its parents the class node and at most one other feature. (If you remove the class node, the remaining feature-feature arcs form a tree structure.)
The following is an example of a TAN model. (Note that the graph over only the evidence variables (W, X, Y, Z) forms a tree.)
The algorithm for learning TAN models is a variant of the Chow-Liu algorithm for learning tree-structured Bayes nets. Let C represent the class variable, and let X_1, ..., X_n be the features (non-class variables).
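For reference, below is a minimal MATLAB sketch of that Chow-Liu-style procedure (Friedman, Geiger, and Goldszmidt, 1997): weight each feature pair by its empirical conditional mutual information I(Xi; Xj | C), extract a maximum-weight spanning tree (Prim's algorithm here), and direct the edges away from an arbitrarily chosen root. The interface (an m-by-n discrete feature matrix X and class vector C) and all names are illustrative assumptions, not a prescribed design:

function parents = tanstruct_sketch(X, C)
% Sketch of TAN structure learning (Chow-Liu variant).
% X: m-by-n matrix of discrete features; C: m-by-1 vector of class labels.
% Returns parents(i) = index of the feature parent of X_i (0 = root, no
% feature parent); every feature additionally has C as a parent.
n = size(X, 2);
W = zeros(n);                              % pairwise I(X_i; X_j | C)
for i = 1:n
    for j = i+1:n
        W(i, j) = cond_mutual_info(X(:, i), X(:, j), C);
        W(j, i) = W(i, j);
    end
end
% Maximum-weight spanning tree via Prim's algorithm, grown from X_1;
% recording the crossing edge directs each edge away from the root.
inTree = false(n, 1); inTree(1) = true;
parents = zeros(n, 1);
for step = 2:n
    best = -inf; bi = 0; bj = 0;
    for i = find(inTree)'
        for j = find(~inTree)'
            if W(i, j) > best, best = W(i, j); bi = i; bj = j; end
        end
    end
    parents(bj) = bi;
    inTree(bj) = true;
end
end

function I = cond_mutual_info(x, y, c)
% Empirical conditional mutual information I(X; Y | C) from samples.
I = 0;
for cv = unique(c)'
    idx = (c == cv); pc = mean(idx);           % P(C = cv)
    xs = x(idx); ys = y(idx);
    for xv = unique(xs)'
        for yv = unique(ys)'
            pxy = mean(xs == xv & ys == yv);   % P(x, y | c)
            px = mean(xs == xv); py = mean(ys == yv);
            if pxy > 0
                I = I + pc * pxy * log(pxy / (px * py));
            end
        end
    end
end
end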
(a: 15 points) Implement the above algorithm for learning the structure of a TAN model, and submit your code as tanstruct.m.
(b: 5 points) Here we consider the task of breast cancer typing, that is, classifying a tumor as either malignant or benign; see breast.csv within the zip file. Learn a TAN structure for this data, and draw the resulting structure (directed acyclic graph) in your writeup.
(c: 10 points) Classification
In this question you will compare the classification accuracy of naive Bayes and TAN. First, randomly withhold 183 records as a test set. Then, using a training set of size m, for m = 100, 200, 300, 400, 500, learn both a naive Bayes model and a TAN model and report each model's classification accuracy on the withheld test set. (You may want to consider several such training sets for each m.)
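The following is one way this experiment might be organized, sketched in MATLAB; it assumes breast.csv is a plain numeric table whose last column is the class label, and train_nb, train_tan, and predict_class are hypothetical placeholders for your own routines (train_tan would call your tanstruct.m):

% Learning-curve experiment sketch; helper names are placeholders.
data = csvread('breast.csv');            % assumes a headerless numeric CSV
perm = randperm(size(data, 1));
test = data(perm(1:183), :);             % withheld test set
pool = data(perm(184:end), :);           % remaining records for training
sizes = 100:100:500;
acc = zeros(numel(sizes), 2);            % columns: naive Bayes, TAN
for k = 1:numel(sizes)
    train = pool(1:sizes(k), :);
    nb  = train_nb(train);
    tan = train_tan(train);              % structure from tanstruct.m
    acc(k, 1) = mean(predict_class(nb,  test(:, 1:end-1)) == test(:, end));
    acc(k, 2) = mean(predict_class(tan, test(:, 1:end-1)) == test(:, end));
end
plot(sizes, acc, '-o');
legend('naive Bayes', 'TAN');
xlabel('training set size m'); ylabel('test-set accuracy');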
Consider a factor ν produced as a product of some of the CPDs in a Bayesian network B, say ν = ∏_{X_i ∈ S} P(X_i | Pa_{X_i}) for some subset S of the network's variables, and let W denote the set of variables appearing in these CPDs.
(a: 5 points) Show that ν is a conditional probability in some network. More precisely, construct another Bayesian network B' and a disjoint partition W = Y ∪ Z such that ν(W) = P_{B'}(Y | Z).
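For intuition, here is a small worked instance (illustrative only, not the general construction): take B to be the chain A → B → C and let ν be the product of the last two CPDs,

    ν(A, B, C) = P(B | A) · P(C | B).

Computing in B itself, P(B, C | A) = P(A) P(B | A) P(C | B) / P(A) = ν(A, B, C), so here B' = B already works, with Y = {B, C} and Z = {A}; the exercise asks for a construction that handles an arbitrary product of CPDs.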
(b: 5 points) Show that the intermediate factors produced by the variable elimination algorithm are also conditional probabilities in some network.
[10 points] Markov Networks & Factorization
Consider a distribution P over four binary random variables {X1, X2, X3, X4} that assigns probability 1/8 to each of the following eight configurations (x1, x2, x3, x4):
(0,0,0,0) (1,0,0,0) (1,1,0,0) (1,1,1,0)
(0,0,0,1) (0,0,1,1) (0,1,1,1) (1,1,1,1)
and probability 0 to the other 8 configurations.
The distribution P satisfies the global Markov property with respect to the graph H: X1 - X2 - X3 - X4 - X1. (Note that this graph is a cycle.) For example, consider the independence claim (X1⊥X3|X2,X4). For the assignment X2 = 1, X4 = 0, only assignments with X1 = 1 receive positive probability. Thus P(X1 = 1 | X2 = 1, X4 = 0) = 1, and X1 is trivially independent of X3 in this conditional distribution. A similar analysis applies to all other cases, so the global Markov assumptions hold. However, the distribution P does not factorize according to H. Give a proof of this. (Hint: Use a proof by contradiction.)
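To set up the contradiction, recall what factorizing according to H would mean: the maximal cliques of the 4-cycle are its four edges, so there would have to exist nonnegative potentials φ with

    P(x1, x2, x3, x4) = (1/Z) φ12(x1, x2) φ23(x2, x3) φ34(x3, x4) φ41(x4, x1)

for all assignments; the proof proceeds by showing that no such potentials can reproduce the eight positive-probability configurations above.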
(b: 5 points) Build a cluster graph for this Bayesian net, and determine the message-passing scheme and the initial potential of each cluster (use a cluster that contains the variable C as the root).
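As a reminder, in the sum-product scheme the message from cluster i to a neighboring cluster j over their sepset S_{i,j} is

    δ_{i→j}(S_{i,j}) = Σ_{C_i \ S_{i,j}} ψ_i(C_i) · ∏_{k ∈ Nb(i), k ≠ j} δ_{k→i}(S_{k,i}),

where ψ_i is cluster i's initial potential (the product of the CPDs assigned to it). With the cluster containing C as the root, messages are passed first from the leaves toward the root and then from the root back out.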
(d: 5 points) Answer the following queries using the calibrated cluster tree. Use the following values for the Bayes model parameters:
[a = .25, b = .5, c = .65, d = .55, e = .4, f = .4, g = .5, h = .3, i = .8, j = .4, k = .5].
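Recall that once the tree is calibrated, every cluster belief β_i(C_i) is proportional to the joint marginal over its scope, so a query about a variable X can be answered from any cluster whose scope contains X: P(X) ∝ Σ_{C_i \ {X}} β_i(C_i).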
[Question removed - no time to cover Markov Network Approximations]