Applications are invited for a one-year (renewable) fellowship to work in the areas of
The postdoc will be expected to carry out high-quality research over the entire gamut of research activities, ranging from formal theoretical explorations to concrete implementations and empirical testing. S/he may work with various local companies, including
as well as the design and development of the AI PlayGround. This is in addition to pushing on your own "curiosity-driven" and "technology push" ideas; see also RG's ideas.
Candidates should have a Ph.D. in Computer Science or the equivalent. Previous research excellence and strong productivity, in addition to a good computing background, are essential. If interested, please send
(to arrive ASAP) to:
Russell Greiner
Department of Computing Science
Athabasca Hall 359
University of Alberta
Edmonton, AB T6G 2H1
Email: greiner@cs.ualberta.ca
Phone: (780) 492-5461
Fax: (780) 492-1071
Review of applications will begin immediately, and will continue until the position is filled.
Electronic submissions -- in plain text or PostScript -- are encouraged.
We also encourage applicants to apply for various additional sources of funding, including
as well as other possible sources.
See http://www.cs.ualberta.ca/~rgreiner/ for more information about my research, and http://www.cs.ualberta.ca/ for more information about the department in general.
Edmonton is also a great place to live! See AboutEdmonton for more information about the city!
While I (Russ Greiner) will be the primary contact, several others at UofAlberta have related interests, including:
Peter van Beek | constraint satisfaction, planning
Bill Armstrong | adaptive logic networks
Renee Elio | cognitive modeling, agent communication
Randy Goebel | default reasoning, and other representation issues
Jonathan Schaeffer | game playing (Chinook), search, parallel systems
Tony Marsland | game playing, search
as well as on-going activities in logic programming, vision, robotics, and many collaborations with others in areas outside of AI (including philosophy, psychology, ...); see also AI Lab HomePage.
We are most interested in a researcher who can do high-quality research that results in publications and perhaps distributable code; see SoftwarePage. I also anticipate getting some money from an industrial company to work on specific theoretical aspects of certain funded applications. Here, the postdoc and I will be expected to apply the ideas and code that we develop to their datasets. (I view this as a wonderful opportunity: getting data, and specific problems, is often one of the hardest aspects of research!) Of course, I will also make sure that the funders expect research from us, rather than development.
Also, the postdoc will have the option of teaching a course, as a way to supplement his/her income.
The raison d'être for building a Bayesian Net is to answer queries; this often involves computing P( h | e ), the posterior probability of the hypothesis h, conditioned on the observations e. E.g., a patient may want to know the probability that he is suffering from a heart problem, given the set of recent sensor measurements. Unfortunately, it can take an extremely long time to produce answers to queries, both in theory (it is NP-hard) and in practice. Fortunately, there are often ways to make this computation more efficient. In particular, even if one query-answering algorithm (QA) is slow for a specified query, another algorithm may be quite efficient for the same query. Moreover, different BNs can express the same distribution. Even if a particular QA algorithm is slow for a given query when using one BN, that same algorithm may be efficient for this query in a different, but equivalent, BN.
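To make the task concrete, here is a minimal sketch (my own illustration, with made-up probability values, not code from our group) of such a query, answered by brute-force summation over a toy two-node network:

    # Toy Bayesian network: HeartProblem -> AbnormalSensor.
    # All probability values are invented, purely for illustration.

    # P(HeartProblem)
    p_heart = {True: 0.01, False: 0.99}
    # P(AbnormalSensor | HeartProblem)
    p_sensor_given_heart = {True:  {True: 0.90, False: 0.10},
                            False: {True: 0.05, False: 0.95}}

    def joint(heart, sensor):
        """P(HeartProblem=heart, AbnormalSensor=sensor), via the chain rule."""
        return p_heart[heart] * p_sensor_given_heart[heart][sensor]

    def posterior(sensor):
        """P(HeartProblem=True | AbnormalSensor=sensor): joint over evidence."""
        evidence = sum(joint(h, sensor) for h in (True, False))
        return joint(True, sensor) / evidence

    print(posterior(True))   # about 0.15: an abnormal reading raises the posterior

Brute-force summation like this is exponential in the number of variables, which is exactly why efficient query-answering algorithms matter. Note too that this same joint distribution could instead be factored as P(AbnormalSensor) P(HeartProblem | AbnormalSensor) -- the sense in which different BNs can encode the same distribution.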
Our challenge, then, is to find the "most efficient" BN/QA combination -- i.e., to determine which (equivalent) network and which query-answering algorithm together minimize the expected time to answer queries, over the distribution of queries that will be encountered.
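In symbols (my own notation here, not taken from the articles below): writing B_0 for the given network, D for the anticipated distribution of queries, and Time(A, B, q) for the cost of answering query q with algorithm A on network B, the goal is to find

    (B^*, A^*) \;=\; \arg\min_{(B,\,A)\,:\,B \,\equiv\, B_0} \; \mathbb{E}_{q \sim D}\!\left[ \mathrm{Time}(A, B, q) \right]

where B ≡ B_0 means that B encodes the same joint distribution as B_0.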
I am very interested in hearing any specific ideas on how to solve this problem.
See these articles for one possible framework for posing this problem.