Projects
 Human-Computer Interaction

I am interested in advanced human-computer interaction, i.e. interaction that goes beyond the usual windows, keyboard, and mouse. These modalities include, for example, haptic input/output, vision-based input, and virtual and augmented reality.

Xing Dong Yang studies how well we can perceive haptic (kinesthetic) feedback while moving our arms and hands, for example, how well we discriminate the magnitude and the direction of haptic forces. The results of his research can be used to better understand how haptic feedback can improve the learning of complex motor skills.
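To illustrate what "how well we discriminate" can mean in practice, here is a minimal sketch of estimating a just-noticeable difference (JND) for force magnitude from discrimination data; the design, numbers, and function names are hypothetical and not taken from Xing Dong's experiments.

# Illustrative only: estimating a just-noticeable difference (JND) for force
# magnitude from hypothetical discrimination-task data.
import numpy as np

def weber_fraction(reference_force, comparison_forces, proportion_correct,
                   threshold=0.75):
    """Interpolate the comparison force at the threshold proportion correct
    and express the JND as a fraction of the reference force.
    Assumes performance improves as the comparison moves away from the reference."""
    order = np.argsort(comparison_forces)
    forces = np.asarray(comparison_forces, dtype=float)[order]
    p = np.asarray(proportion_correct, dtype=float)[order]
    jnd_force = np.interp(threshold, p, forces)   # linear interpolation
    return (jnd_force - reference_force) / reference_force

# Hypothetical data: reference of 2.0 N, comparisons from 2.05 N to 2.6 N.
print(weber_fraction(2.0, [2.05, 2.1, 2.2, 2.4, 2.6],
                     [0.52, 0.60, 0.71, 0.86, 0.95]))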
Fraser Anderson is developing a system that records the movements of, and the forces applied to, the surgical tools in laparoscopic surgeries, and that compares the movement/force patterns of trainee surgeons to those of expert surgeons. This will enable us to objectively assess the performance, and the improvement in performance, of trainee surgeons.
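One plausible way to compare a trainee's recorded tool trajectory to an expert's is dynamic time warping, which tolerates differences in movement speed. The sketch below is only an illustration under assumed data formats, not necessarily the comparison method used in Fraser's system.

# Illustrative sketch: comparing tool trajectories with dynamic time warping.
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of samples,
    where each sample is a vector such as (x, y, z, force)."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Hypothetical recordings: rows are time samples of (x, y, z, applied force).
expert  = np.array([[0.0, 0.0, 0.0, 1.0], [0.1, 0.0, 0.0, 1.2], [0.2, 0.1, 0.0, 1.1]])
trainee = np.array([[0.0, 0.0, 0.0, 0.8], [0.05, 0.0, 0.0, 1.5],
                    [0.1, 0.1, 0.0, 1.6], [0.2, 0.1, 0.0, 1.0]])
print(dtw_distance(expert, trainee))   # larger values = less expert-like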
Milena Radzikowska is developing interfaces for optimization-based decision support in integrated mining operations; these interfaces should display all process information relevant to personnel making local operating decisions.

I am interested in taking on new students to work in this area, especially on projects involving haptic interfaces.

 Computer-Assisted Therapy

In physical and occupational therapy, there is a tremendous need for new methods that allow objective evaluation of therapy progress, that are fun (and thus motivating) to use, and that are hopefully cheap and portable, so that patients can continue using them once they return home from the rehabilitation hospital.

Michelle Annett is interested in technology-assisted rehabilitation methods. She is currently working on a large touchscreen table that can be used in occupational therapy for upper-body rehabilitation. This project involves the development of hardware and software components, and a pilot system is currently under evaluation at a local rehabilitation hospital.

I am very interested in taking on new students in this area, and a number of interesting projects are ready to go.

 Spatial Navigation

I have a long-standing interest in spatial navigation: how people find their way through the environment, what perceptual cues they rely on, and what navigational cues they attend to.

With Debbie Kelly, I study how people orient in their environment. Is it based on landmarks in the environment or is it based on general geometric information? If it is based on landmarks, what landmark information is used (e.g. shape or color)? And if it is based on geometric information, what information is used (e.g. distances or angles)?
With Jan Snyder, I study to what extent we rely on static location information for determining egomotion (heading) and which static cues are most important. The results suggest that we process heading information in two separate visual pathways, the dorsal pathway relying on motion information and the ventral pathway relying on static visual information.
In her Master's thesis, Michelle Annett developed a system that allows researchers to create and run standard spatial-navigation paradigms on monitors, in a head-mounted display, or in a virtual reality CAVE. The system is very easy to use for researchers with no programming or other technical knowledge, but it is also easy for programmers and VR experts to extend.
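To give a sense of what a paradigm description a non-programmer could write might look like, here is a purely hypothetical sketch; none of these field names or options come from Michelle's actual software.

# Hypothetical illustration only: a declarative description of a simple
# landmark-based navigation task that such a system could run on a monitor,
# in a head-mounted display, or in a CAVE.
water_maze_task = {
    "environment": "circular_arena",
    "landmarks": [
        {"shape": "cylinder", "color": "red",  "position": (2.0, 0.0)},
        {"shape": "cube",     "color": "blue", "position": (0.0, 2.0)},
    ],
    "hidden_goal": {"position": (1.0, 1.0), "radius": 0.3},
    "trials": 20,
    "display": "hmd",          # alternatives: "monitor", "cave"
}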

I am interested in taking on a student to work on a computational, biological model of spatial navigation with particular emphasis on perceptual and attentional factors.

 Other

Computer-Aided Map Revision: Cartographic agencies invest large human and financial resources in updating maps. It would be helpful to automate map updating, but for legal reasons human operators cannot simply be replaced. In this context, Jun Zhou has developed a program that operates like an apprentice: it learns from human operators what needs to be done during map revision and offers to complete revision steps, initially very simple ones and later more complex ones, while leaving the human operator in charge of the revision process. Jun Zhou, Li Cheng and I are currently revising this system using online learning methods.
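To make the apprentice idea concrete, here is a minimal sketch of an online learner that stays silent until it is confident and is updated from each operator decision; the features, thresholds, and learning rule are illustrative assumptions, not the project's actual implementation.

# Illustrative sketch of the apprentice idea with a simple online learner.
import numpy as np

class RevisionApprentice:
    """Learns online, from operator decisions, whether a candidate map
    change should be applied, and only offers suggestions once confident."""

    def __init__(self, n_features, confidence=0.9):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.confidence = confidence

    def _score(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w @ np.asarray(x) + self.b)))  # logistic score

    def suggest(self, x):
        """Return 'apply'/'skip' only when confident, else defer to the operator."""
        p = self._score(x)
        if p >= self.confidence:
            return "apply"
        if p <= 1.0 - self.confidence:
            return "skip"
        return None  # apprentice stays silent; the operator decides

    def learn(self, x, operator_applied, lr=0.1):
        """Online logistic-regression update from the operator's decision."""
        error = float(operator_applied) - self._score(x)
        self.w += lr * error * np.asarray(x)
        self.b += lr * error

# Hypothetical features of a candidate revision, e.g. (change area, edge
# strength, distance to existing road); the operator's choice trains the model.
apprentice = RevisionApprentice(n_features=3)
apprentice.learn([0.8, 0.6, 0.1], operator_applied=True)
print(apprentice.suggest([0.8, 0.6, 0.1]))   # None: not yet confident after one example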



Updated December 2009