An Optimized Theory Revision Module

Russell Greiner, R. Bharat Rao and Glenn Meredith

In progress

Essentially all theory revision systems use a set of theory-to-theory transformations { t_k } to hill-climb from a given initial theory to a new theory whose empirical accuracy, over a given set of labeled training instances { q_j }, is a local optimum. At the heart of each such process is an "evaluator", which compares the accuracy of the current theory KB with that of each of its "neighbors" N(KB) = { t_k(KB) }, formed by applying one transformation to KB, with the goal of determining which neighbor has the highest score. One obvious way to implement such an evaluator is simply to run each neighbor theory KB_k = t_k(KB) on each instance q_j; this corresponds to the "wrapper" model [JKP'93]. In practice, however, this approach can be prohibitively slow. We therefore built an alternative system DD, which employs a smarter evaluator that can quickly compute the score of a transformed theory t_k(KB), relative to the current theory KB, by "looking inside" KB and reasoning about the effects of the t_k transformation. This paper presents data showing that DD runs around 35 times faster than the naive wrapper-based system Delta, while attaining the same accuracy. We also discuss DD's source of power, and generalize from this specific implementation to suggest other situations to which these ideas may apply.
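As a rough illustration of the two evaluation strategies described above, the Python sketch below contrasts one hill-climbing step under the wrapper model, which re-runs every neighbor theory on every training instance, with a DD-style step that consults an incremental scorer for the *change* in accuracy each transformation would produce. All names here (`Theory`, `Transformation`, `score_change`, etc.) are hypothetical placeholders, not the actual DD or Delta code.

```python
from typing import Callable, List, Tuple

# Hypothetical types (illustration only, not the DD implementation):
# a Theory classifies instances; a Transformation maps one theory to a neighbor.
Instance = Tuple[object, bool]          # (query q_j, correct label)
Theory = Callable[[object], bool]       # theory viewed as a classifier
Transformation = Callable[[Theory], Theory]


def accuracy(kb: Theory, data: List[Instance]) -> float:
    """Empirical accuracy of a theory over the labeled training instances."""
    return sum(kb(q) == label for q, label in data) / len(data)


def wrapper_step(kb: Theory, ts: List[Transformation], data: List[Instance]):
    """One hill-climbing step under the 'wrapper' model: build every
    neighbor KB_k = t_k(KB) and re-run it on every instance q_j."""
    best_kb, best = kb, accuracy(kb, data)
    for t in ts:
        neighbor = t(kb)                     # KB_k = t_k(KB)
        score = accuracy(neighbor, data)     # full re-evaluation (slow)
        if score > best:
            best_kb, best = neighbor, score
    return best_kb, best


def delta_step(kb: Theory, ts: List[Transformation], data: List[Instance],
               score_change: Callable[[Theory, Transformation, List[Instance]], float]):
    """One step in the spirit of DD: `score_change` stands in for DD's
    internal reasoning about a transformation's effect on KB, returning the
    estimated change in accuracy without re-running the whole neighbor."""
    base = accuracy(kb, data)
    gain, t = max(((score_change(kb, t, data), t) for t in ts),
                  key=lambda pair: pair[0])
    if gain <= 0:
        return kb, base                      # local optimum reached
    return t(kb), base + gain
```

Repeating either step until no transformation improves the score yields the hill-climbing loop; the two variants differ only in how each neighbor's score is obtained.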
