Efficient Reasoning/Inference

Efficient ways to do general inference tasks.
See Efficient Interpretation for results related to (visual) interpretation, and Effective Performance for results related to effective performance in general (PALO).

Efficient Reasoning

Efficient Reasoning

Russell Greiner, Christian Darken and N. Iwan Santoso
Computing Surveys, 33:1 (March 2001), p. 1-30.

Many tasks require "reasoning" --- i.e., deriving conclusions from a corpus of explicitly stored information --- to solve their range of problems. An ideal reasoning system would produce all-and-only the correct answers to every possible query, produce answers that are as specific as possible, be expressive enough to permit any possible fact to be stored and any possible query to be asked, and be efficient. Unfortunately, this is provably impossible: as correct and precise systems become more expressive, they become increasingly inefficient, or even undecidable. This tutorial first formalizes these hardness results, in the context of both logic- and probability-based reasoning, then surveys the techniques now used to address, or at least side-step, this dilemma. Throughout, we also include some alternative proposals.

Lookahead Control Policy

Performance of Lookahead Control Policies in the Face of Abstractions and Approximations

Ilya Levner, Vadim Bulitko, Omid Madani, and Russell Greiner
Abstraction, Reformulation and Approximation: Proceedings of the 5th International SARA Symposium,  August 2-4, 2002, p. 299-308.

This paper explores the formulation of image interpretation as a Markov Decision Process (MDP), highlighting the important assumptions in the MDP formulation. State abstraction, value-function and action approximations, and lookahead search are then presented as necessary solution methodologies. We view the task of image interpretation as a dynamic control problem in which the optimal vision operator is selected responsively, based on the problem-solving state at hand. The control policy therefore maps problem-solving states to operators, in an attempt to minimize the total problem-solving time while reliably interpreting the image. Real-world domains, like image interpretation, usually have incredibly large state spaces, which require abstraction to be manageable by today's information-processing systems. In addition, the optimal value function (V*) used to evaluate state quality is generally unavailable, so approximations must be used in conjunction with state abstraction. The performance of the system is therefore directly related to the types of abstractions and approximations present.

Realtime Lookahead Control Policy

Real-time Lookahead Control Policies

Vadim Bulitko, Ilya Levner, and Russell Greiner
AAAI/KDD/UAI-2002 Joint Workshop on Real-Time Decision Support and Diagnosis Systems,  29 July 2002, p. ??.

Decision-making in practical domains is usually complex, as a coordinated sequence of actions is needed to reach a satisfactory state, and responsive, as no fixed sequence works for all cases --- instead we need to select actions after sensing the environment. At each step, a lookahead control policy chooses among feasible actions by envisioning their effects into the future and selecting the action leading to the most promising state.

There are several challenges to producing the appropriate policy. First, when each individual state description is large, the policy may instead use a low-dimensional abstraction of the states. Second, in some situations the quality of the final state is not given, but can only be learned from data.

Deeper lookahead typically selects actions that lead to higher-quality outcomes. Of course, deep forecasts are computationally expensive, which is problematic when computation time is a factor. This paper makes this accuracy/efficiency tradeoff explicit, defining a system's effectiveness in terms of both the quality of the returned response and the computational cost. We then investigate how deeply a system should search to optimize this ``type II'' performance criterion.
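A minimal sketch of this tradeoff (the function names and the toy quality/cost models below are illustrative assumptions, not the paper's actual formulation): the effective score of searching to depth d combines response quality and computation cost, and we pick the depth that maximizes it.

```python
# Hypothetical sketch of a "type II" performance criterion:
# effectiveness(d) = quality(d) - cost(d), maximized over candidate depths.

def best_depth(quality, cost, depths):
    """Return the lookahead depth maximizing quality(d) - cost(d)."""
    return max(depths, key=lambda d: quality(d) - cost(d))

# Toy model (assumed for illustration): quality shows diminishing
# returns with depth, while search cost grows exponentially with
# branching factor b = 3.
quality = lambda d: 1.0 - 0.5 ** d
cost = lambda d: 0.001 * 3 ** d

best = best_depth(quality, cost, range(1, 10))
```

Under this toy model the optimum is an intermediate depth: beyond it, the exponential search cost outweighs the diminishing quality gains.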

Pathology in Single Agent Search

Lookahead Pathologies for Single Agent Search

Vadim Bulitko, Lihong Li, Russell Greiner and Ilya Levner,
Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Acapulco, 2003.

Admissible and consistent heuristic functions are usually preferred in single-agent heuristic search as they guarantee optimal solutions with complete search methods such as A* and IDA*. Larger problems, however, frequently make a complete search intractable due to space and/or time limitations. In particular, a path-planning agent in a real-time strategy game may need to take an action before its complete search has the time to finish. In such cases, incomplete search techniques (such as RTA*, SRTA*, RTDP, DTA*) can be used. Such algorithms conduct a limited ply lookahead and then evaluate the states envisioned using a heuristic function. The action selected on the basis of such evaluations can be suboptimal due to the incompleteness of search and inaccuracies in the heuristic. It is usually believed that deeper lookahead increases the chances of taking the optimal action. In this paper, we demonstrate that this is not necessarily the case, even when admissible and consistent heuristic functions are used.

And-Or Trees

Optimal Depth-First Strategies for And-Or Trees

Russell Greiner, Ryan Hayward and Michael Molloy
Proceedings of the Eighteenth Annual National Conference on Artificial Intelligence (AAAI-02),  p. 725-730, Edmonton, Aug 2002.

Many tasks require evaluating a specified boolean expression f over a set of probabilistic tests where we know the probability that each test will succeed, and also the cost of performing each test. A strategy specifies when to perform which test, towards determining the overall outcome of f. This paper investigates the challenge of finding the strategy with the minimum expected cost.

We observe first that this task is typically NP-hard --- e.g., when tests can occur many times within f, or when there are probabilistic correlations between the test outcomes. We therefore focus on the situation where the tests are probabilistically independent and each appears only once in f. Here, f can be written as an and-or tree, where each internal node corresponds to either the ``And'' or ``Or'' of its children, and each leaf node is a probabilistic test.

There is an obvious depth-first approach to evaluating such and-or trees: First evaluate each penultimate subtree in isolation; then reduce this subtree to a single ``mega-test'' with appropriate cost and probability, and recur on the resulting reduced tree. After formally defining this approach, we show first that it produces the optimal strategy for shallow (depth 1 or 2) and-or trees, then show it can be arbitrarily bad for deeper trees. We next consider a larger, natural subclass of strategies --- those that can be expressed as a linear sequence of tests --- and show that the best such ``linear strategy'' can also be very much worse than the optimal strategy in general. Finally, we show that our results hold in a more general model, where internal nodes can also be probabilistic tests.
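A minimal sketch of this depth-first reduction (the tree representation and names are ours, not the paper's), assuming independent tests with success probabilities strictly between 0 and 1. Each And/Or node is reduced to a single mega-test using the classic cost-to-probability ratio ordering, which is optimal for the depth-1 case; as the paper shows, applying it recursively can be arbitrarily suboptimal on deeper trees.

```python
def reduce_tree(node):
    """Depth-first reduction of an and-or tree to one 'mega-test',
    returning (expected_cost, success_probability).

    node is one of (an assumed representation):
      ('test', cost, p)      -- a probabilistic test, 0 < p < 1
      ('or',  [children])    -- succeeds if any child succeeds
      ('and', [children])    -- succeeds if all children succeed
    """
    kind = node[0]
    if kind == 'test':
        return node[1], node[2]
    # First reduce each child subtree in isolation (depth-first).
    kids = [reduce_tree(child) for child in node[1]]
    if kind == 'or':
        # Under an Or, run cheap/likely-to-succeed tests first:
        # increasing cost / success-probability; stop at first success.
        kids.sort(key=lambda cp: cp[0] / cp[1])
        cost, fail = 0.0, 1.0
        for c, p in kids:
            cost += fail * c        # pay for this test only if all before failed
            fail *= (1.0 - p)
        return cost, 1.0 - fail
    else:  # 'and'
        # Under an And, run cheap/likely-to-fail tests first:
        # increasing cost / failure-probability; stop at first failure.
        kids.sort(key=lambda cp: cp[0] / (1.0 - cp[1]))
        cost, succ = 0.0, 1.0
        for c, p in kids:
            cost += succ * c        # pay only if all before succeeded
            succ *= p
        return cost, succ
```

For example, an Or over two tests with cost 1 (p = 0.5) and cost 2 (p = 0.5) runs the cheaper test first, for expected cost 1 + 0.5 * 2 = 2 and success probability 0.75.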


A Formal Analysis of Solution Caching

Vinay Chaudhri and Russell Greiner
Proceedings of the Ninth Canadian Conference on Artificial Intelligence (CSCSI-92), Vancouver, May 1992.

Many knowledge representation systems and deductive databases store and maintain the conclusions found during a derivation process, in a form that allows these conclusions to be used during subsequent derivations. As this approach, called "solution caching", allows the system to avoid repeating these derivations, it can reduce the system's overall cost for query answering. Unfortunately, as there is a cost for storing these conclusions, it is not always desirable to cache every solution found --- this depends on whether the savings achieved by performing fewer inference steps for these future queries exceed the storage overhead incurred. This paper formally characterizes this tradeoff, and presents an efficient algorithm that produces an optimal caching strategy: i.e., given an inference graph of a knowledge base, anticipated number of queries at each node, and various parameters, it determines which nodes in the inference graph should cache their solutions.
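The per-node tradeoff can be caricatured as follows (a deliberately simplified sketch; the paper's algorithm optimizes over the whole inference graph, whereas this decision rule and its parameter names are illustrative assumptions):

```python
def should_cache(expected_queries, derivation_cost, storage_cost):
    """Cache a node's solution iff the expected inference savings
    (queries avoided times the cost of re-deriving the answer)
    exceed the overhead of storing the cached conclusion.

    A simplified, node-at-a-time view of the tradeoff; in a real
    inference graph, caching one node also changes the derivation
    costs of its ancestors.
    """
    return expected_queries * derivation_cost > storage_cost
```

For instance, a conclusion expected to be queried 10 times at a re-derivation cost of 2 units is worth caching against a storage overhead of 5 units; the same conclusion queried only once is not.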

Return to Greiner's home page