Projects

1. CAVE/touch-table collaboration

The aim of this project is to develop and investigate collaboration between a user located in the CAVE and a user at the touch-table. Specifically, real objects will be placed on the table surface, and these will be represented virtually in the CAVE display. As the table user moves the real objects, their virtual representations in the CAVE will be updated accordingly. Experimentally, a board-game scenario or an object-arrangement task could be investigated. The main task is development of the basic networked VE system. Subsequently, there are many avenues of research on this project, including representation of users and of the environment.
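As a starting point for the networked VE system, object poses tracked on the table could be broadcast to the CAVE host and applied to the virtual scene. The following is a minimal sketch only; the (id, x, y, angle) pose format, the UDP/JSON wire protocol and the host name are all assumptions for illustration, not part of any existing system.

```python
# Sketch of the table-to-CAVE state channel (UDP + JSON).
# Assumes the table's tracker yields (object_id, x, y, angle) tuples;
# all names and the message format here are illustrative.
import json
import socket
import time

CAVE_ADDR = ("cave.local", 9000)  # hypothetical CAVE host and port

def send_table_state(tracked_objects, sock):
    """Table side: broadcast the current pose of every tracked object."""
    message = {
        "timestamp": time.time(),
        "objects": [
            {"id": oid, "x": x, "y": y, "angle": angle}
            for (oid, x, y, angle) in tracked_objects
        ],
    }
    sock.sendto(json.dumps(message).encode("utf-8"), CAVE_ADDR)

def receive_loop(port=9000, on_update=print):
    """CAVE side: apply each incoming state message to the virtual scene."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, _ = sock.recvfrom(65536)
        on_update(json.loads(data.decode("utf-8")))  # update virtual objects
```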

2. Eye tracking for navigation in VEs

The aim of this project is to develop and investigate methods of using eye tracking to navigate VEs in immersive displays such as the CAVE. The project will involve development and experimental work. Various methods of interaction using eye tracking are possible, since gaze direction, blinks and pupil dilation can all be tracked. Experimentally, various methods of navigation in VEs (including eye tracking) will be compared in terms of task performance and usability.
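To illustrate one possible interaction method (not a design prescribed by the project), gaze offset from the display centre could be mapped to steering and forward motion, with a dead zone so that ordinary fixations do not move the viewpoint. The constants below are illustrative tuning parameters.

```python
# Sketch: steer through a VE using gaze offset from the display centre.
# gaze_x, gaze_y are normalised gaze coordinates in [-1, 1]; the dead-zone
# and gain constants are illustrative and would need tuning per user.
DEAD_ZONE = 0.15   # ignore small fixations near the centre
TURN_GAIN = 60.0   # degrees/second of yaw at full gaze deflection
MOVE_SPEED = 1.5   # metres/second forward when looking above centre

def gaze_to_motion(gaze_x, gaze_y, dt):
    """Return (yaw_delta_degrees, forward_metres) for one frame of length dt."""
    yaw = TURN_GAIN * gaze_x * dt if abs(gaze_x) > DEAD_ZONE else 0.0
    forward = MOVE_SPEED * dt if gaze_y > DEAD_ZONE else 0.0
    return yaw, forward
```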

3. Lie detector using eye tracking

The aim of this project is to develop and investigate methods of using eye tracking to detect whether people are telling the truth or lying. This will be an extension to my CHI 2010 paper, which demonstrated that people's oculesic behaviour of gaze, blinks and pupil dilation changes depending on their state of veracity in mediated communication systems (video and avatar), similarly to what is observed in real-world interaction. This project aims to record eye-tracking data of people engaged in truthful and deceptive conversation, and to build a model that may be used to estimate when people are lying based on their oculesic behaviour. Experimental work, for instance comparison with traditional methods of lie detection, will also be performed.
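As a rough illustration of the modelling step, per-segment oculesic features could be fed to a standard classifier and evaluated by cross-validation. The feature set (blink rate, pupil dilation, gaze aversion) and the synthetic numbers below are assumptions for illustration only; real features and labels would come from the recorded sessions.

```python
# Sketch: classify truthful vs. deceptive segments from oculesic features.
# The data here is synthetic, standing in for recorded eye-tracking sessions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 100  # synthetic conversation segments per condition
# Columns: blinks/min, mean pupil diameter change (mm), gaze-aversion fraction.
truthful = rng.normal([18.0, 0.05, 0.20], [4.0, 0.02, 0.08], (n, 3))
deceptive = rng.normal([24.0, 0.12, 0.35], [4.0, 0.02, 0.08], (n, 3))
X = np.vstack([truthful, deceptive])
y = np.array([0] * n + [1] * n)  # 0 = truthful, 1 = deceptive

model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X, y, cv=5).mean())  # rough accuracy estimate
```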

4. Real-time lip synchronisation for multi-user VEs

The aim of this project is to develop and investigate real-time lip synchronisation in multi-user VEs. Current games often use a variety of animation techniques to bring characters to life, including accurate mouth movement matching a character's vocal utterances. However, in real-time multi-user VEs in which users are embodied by avatars, the verbal communication between users is spontaneous and unpredictable. This project aims to develop avatars capable of accurate lip-sync in real time. This involves audio processing, mapping phonemes (sounds) to visemes (mouth shapes) using an open-source library, and transferring the data across a network to the other clients connected to the shared VE, which then animate the avatar. Experimental work will focus on comparison with other methods of lip sync and on measuring social presence.
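The phoneme-to-viseme stage might look like the following sketch, assuming an upstream audio-processing library already emits timestamped phoneme labels; the mapping table, wire format and peer addressing are illustrative assumptions, not a specific library's API.

```python
# Sketch: map recognised phonemes to viseme IDs and send them to peers.
# Assumes an upstream recogniser yields (timestamp, phoneme) pairs; the
# mapping table and message format below are illustrative only.
import json
import socket

# Coarse phoneme -> viseme grouping (many-to-one, as mouth shapes repeat).
PHONEME_TO_VISEME = {
    "p": "closed", "b": "closed", "m": "closed",
    "f": "lip_teeth", "v": "lip_teeth",
    "aa": "open", "ae": "open",
    "uw": "rounded", "ow": "rounded",
}

def viseme_events(phoneme_stream):
    """Convert (timestamp, phoneme) pairs to (timestamp, viseme) pairs."""
    for t, ph in phoneme_stream:
        yield t, PHONEME_TO_VISEME.get(ph, "neutral")

def broadcast(events, peers, avatar_id):
    """Send each viseme event to every peer; peers blend the named shape."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for t, viseme in events:
        msg = json.dumps({"avatar": avatar_id, "t": t, "viseme": viseme})
        for peer in peers:  # peer = (host, port)
            sock.sendto(msg.encode("utf-8"), peer)
```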
