Positions for PostDoctoral Fellows

We are looking to hire strong researchers as postdoctoral fellows (PDFs) for various projects -- both application pull and technology push. These positions are all associated with the Alberta Ingenuity Centre for Machine Learning -- see our ad.
If you are interested in any of the positions mentioned below, please send me ...
We may have some funding for the first three positions; in general, it helps if you can bring in (part of) your salary from an external source (eg, a PostDoctoral Fellowship).
Patient-specific Cancer Treatment (PolyomX) (pdf)

Learn which treatment should be most effective for each specific (cancer) patient, based on
Brain Tumor Analysis Project

GOAL#1: Segmentation -- find the location of the brain tumour
Bovine Haplotype Project (AFNS)
This project involves developing genomic selection methods and tools for beef cattle -- eg, analysing SNP (single nucleotide polymorphism) profiles of cattle to help estimate "breeding value". This is joint with members of the Department of Agricultural, Food and Nutritional Science (AFNS).
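To make "breeding value" concrete: one common formulation (sketched here on made-up data -- this is an illustration, not this project's actual pipeline) treats it as a linear score over an animal's SNP allele counts, with the per-SNP effects fit by ridge regression:

```python
def snp_ridge(X, y, lam=1.0, lr=0.01, steps=2000):
    """Fit SNP effects b by ridge regression, minimising
    sum_i (x_i . b - y_i)^2 + lam * |b|^2 with plain gradient descent.
    X: rows of SNP allele counts (0/1/2); y: observed phenotypes.
    (Hypothetical toy data; real genomic selection uses far more SNPs.)"""
    p = len(X[0])
    b = [0.0] * p
    for _ in range(steps):
        grad = [2 * lam * bj for bj in b]          # ridge-penalty term
        for xi, yi in zip(X, y):
            r = sum(xij * bj for xij, bj in zip(xi, b)) - yi
            for j in range(p):
                grad[j] += 2 * r * xi[j]           # squared-error term
        b = [bj - lr * g for bj, g in zip(b, grad)]
    return b

def breeding_value(snps, b):
    """Estimated breeding value of an animal with SNP profile `snps`."""
    return sum(s * bj for s, bj in zip(snps, b))
```

With the effects in hand, a candidate animal's estimated breeding value is just the dot product of its SNP profile with the fitted coefficients.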
Proteome Analyst

The Proteome Analyst system can analyse a set of peptide sequences (proteins) in a given proteome and return the general function, and subcellular location, of each protein, as well as a functional summary of the entire proteome. The current version first maps each novel protein to a set of attributes -- namely, the tokens that appear in certain fields of the (known-protein) homologs found by Blast -- then finds the general function (resp., subcellular location) most associated with this token set, using a learned classifier. We are looking for a researcher (summer student, grad student, postdoctoral fellow) to help us extend Proteome Analyst in several ways:
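The token-set-to-label step above can be sketched with a simple naive Bayes classifier over homolog tokens (the tokens and labels below are invented for illustration; the actual system's learned classifier may differ):

```python
from collections import defaultdict
from math import log

def train(examples):
    """examples: list of (token_set, label) pairs, where token_set holds
    the tokens extracted from a protein's Blast homologs."""
    label_counts = defaultdict(int)
    token_counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for tokens, label in examples:
        label_counts[label] += 1
        for t in tokens:
            token_counts[label][t] += 1
            vocab.add(t)
    return label_counts, token_counts, vocab

def predict(model, tokens):
    """Return the label (eg, general function) most associated with
    this token set, via naive Bayes with add-one smoothing."""
    label_counts, token_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label, n in label_counts.items():
        score = log(n / total)
        denom = sum(token_counts[label].values()) + len(vocab)
        for t in tokens & vocab:               # ignore never-seen tokens
            score += log((token_counts[label][t] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best
```

The same machinery, trained on location-annotated examples, would produce the subcellular-location prediction.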
Learning and Validating Belief Nets
Bayesian belief nets (BN) are becoming the preferred tool for a wide variety of tasks, ranging from sensor fusion to information retrieval. We are currently developing and experimenting with various tools for learning these BNs from training data. We are looking for a student to help us here, both in developing and implementing these learning systems, and in running careful experiments to compare the different approaches. We also plan to investigate ways to learn, and use, Probabilistic Relational Models -- extensions of belief nets that allow the representation of relationships.
We will also explore ways to compute and use the "variance" around a belief net's response (see webpage), extending the work on "mixture using variance" and perhaps building a variance-based model of value-of-information.
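One basic ingredient of learning a BN from training data is estimating its conditional probability tables (CPTs) once a structure is fixed. A minimal sketch, for binary variables and complete data (the structure and example names here are invented):

```python
from collections import defaultdict
from itertools import product

def learn_cpts(structure, data):
    """structure: {node: [parent names]};  data: list of {node: 0/1} rows.
    Returns {node: {parent_config_tuple: P(node=1 | parents)}},
    estimated by maximum likelihood with add-one (Laplace) smoothing."""
    cpts = {}
    for node, parents in structure.items():
        counts = defaultdict(lambda: [0, 0])   # config -> [n_rows, n_node=1]
        for row in data:
            cfg = tuple(row[p] for p in parents)
            counts[cfg][0] += 1
            counts[cfg][1] += row[node]
        cpt = {}
        for cfg in product([0, 1], repeat=len(parents)):
            n, k = counts[cfg]
            cpt[cfg] = (k + 1) / (n + 2)       # smoothed P(node=1 | cfg)
        cpts[node] = cpt
    return cpts
```

Structure learning -- choosing the parent sets themselves -- is the harder half, and is part of what the tools above compare; the smoothed counts here also hint at where posterior "variance" around a response comes from (fewer matching rows means a less certain estimate).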
Learning tasks typically begin with a data sample --- eg, symptoms and test results for a set of patients, together with their clinical outcomes. By contrast, many real-world studies begin with no actual data, but instead with a budget --- funds that can be used to collect the relevant information. For example, one study has allocated $30 thousand to develop a system to diagnose cancer, based on a battery of patient tests, each with its own (known) cost and (unknown) discriminative power. Given our goal of identifying the most accurate classifier, what is the best way to spend the $30 thousand? Should we indiscriminately run every test on every patient, until exhausting the budget? Or should we selectively, and dynamically, determine which tests to run on which patients? We call this task budgeted learning.
There are many open questions here, both theoretical and empirical.
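As a toy illustration of the "selective, dynamic" alternative to indiscriminate spending: model each test as a coin with unknown bias, keep a Beta posterior per test, and spend each unit of budget on the test we are currently most uncertain about, per dollar. (This greedy variance-per-cost policy is just a simple hypothetical baseline, not a claimed solution to budgeted learning.)

```python
import random

def budgeted_probes(true_bias, costs, budget, seed=0):
    """Toy budgeted-learning loop.  Test i is a coin with unknown bias
    true_bias[i] and known cost costs[i].  While budget remains, buy one
    observation of the affordable test with the largest posterior
    variance per unit cost, and update its Beta(a, b) posterior."""
    rng = random.Random(seed)
    post = [[1, 1] for _ in true_bias]         # uniform Beta(1,1) priors

    def var_per_cost(i):
        a, b = post[i]
        var = a * b / ((a + b) ** 2 * (a + b + 1))
        return var / costs[i]

    spent = 0
    while True:
        affordable = [i for i, c in enumerate(costs) if spent + c <= budget]
        if not affordable:
            break
        i = max(affordable, key=var_per_cost)  # greedy purchase
        spent += costs[i]
        if rng.random() < true_bias[i]:        # observe one outcome
            post[i][0] += 1
        else:
            post[i][1] += 1
    estimates = [a / (a + b) for a, b in post]
    return estimates, spent
```

Even this toy version exposes the interesting questions: the greedy policy can be provably suboptimal, and the right purchase depends on costs, remaining budget, and what has been observed so far.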