
Research Projects

The project work, which starts after the exams in June and occupies students full‐time until early September, is intended to provide an extended opportunity to plan, execute and evaluate a significant piece of work, working closely with an expert in the field. Projects will either be related to a problem of industrial interest or to a topic near the leading edge of research. Examples of past projects are detailed below.


Name: Mahdi MohammadBagher Year: 2008-2009
Project Title: Screen-Space Percentage-Closer Soft Shadows (SS-PCSS)

Since rendering soft shadows is computationally expensive, Mahdi proposed computing Percentage-Closer Soft Shadows (PCSS), one of the state-of-the-art soft shadow techniques, inside a screen-space rendering loop. Edge-aware filtering such as cross-bilateral filtering is required to preserve edges that would otherwise be lost in screen space. To make the technique 3 to 10 times faster than standard PCSS, he approximated the cross-bilateral filter with a separable version. The results are visually comparable to traditional soft shadow algorithms as well as the ground truth, while being very fast to compute. The method combines naturally with a deferred shading pipeline, making it well suited to video games.
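
The separable approximation can be sketched as follows (an illustrative NumPy version of the idea rather than Mahdi's shader code; the filter radius and sigma values are assumed): each 1-D pass blurs the screen-space shadow buffer while weighting neighbours by depth similarity, and a horizontal pass followed by a vertical pass approximates the full 2-D cross-bilateral filter.

```python
import numpy as np

def cross_bilateral_1d(shadow, depth, axis, radius=8, sigma_s=4.0, sigma_d=0.05):
    """One 1-D pass of a cross-bilateral filter: blur `shadow` along `axis`,
    weighting each neighbour by how close its depth is to the centre pixel,
    so the blur does not leak across geometric edges."""
    out = np.zeros_like(shadow)
    weight_sum = np.zeros_like(shadow)
    for offset in range(-radius, radius + 1):
        shifted_shadow = np.roll(shadow, offset, axis=axis)
        shifted_depth = np.roll(depth, offset, axis=axis)
        # Spatial Gaussian weight times a range weight on the guide (depth) image.
        w_spatial = np.exp(-(offset ** 2) / (2.0 * sigma_s ** 2))
        w_range = np.exp(-((depth - shifted_depth) ** 2) / (2.0 * sigma_d ** 2))
        w = w_spatial * w_range
        out += w * shifted_shadow
        weight_sum += w
    return out / np.maximum(weight_sum, 1e-8)

def separable_cross_bilateral(shadow, depth, radius=8):
    """Approximate the full 2-D cross-bilateral filter with a horizontal
    pass followed by a vertical pass (the separable approximation)."""
    horizontal = cross_bilateral_1d(shadow, depth, axis=1, radius=radius)
    return cross_bilateral_1d(horizontal, depth, axis=0, radius=radius)
```

In a real renderer both passes would run as fragment shaders over the G-buffer; the wrap-around behaviour of np.roll at image borders is just a simplification here.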


Name: Fabrizio Pece Year: 2008-2009
Project Title: High Dynamic Range for Dynamic Scenes

Digital cameras, due to their design limitations, cannot capture the full dynamic range (the ratio between the darkest and brightest regions) present in the real world. High Dynamic Range (HDR) photography overcomes this limitation, but it is unfortunately not suitable for dynamic scenes: moving objects produce undesirable artefacts, called ghosts, in the final HDR image. Fabrizio's project developed techniques to adapt HDR imaging to dynamic scenes. These techniques detect moving objects in a scene captured as a bracketed exposure sequence and erase the ghosts that this movement generates in the corresponding HDR images.
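
A minimal sketch of one common ghost-detection idea (illustrative, and not necessarily the specific method Fabrizio developed; the linear response assumption and threshold are invented): normalise each exposure by its shutter time so that static pixels agree, then flag pixels whose values disagree across the bracketed sequence as likely motion, so they can be excluded or repaired when merging the HDR image.

```python
import numpy as np

def detect_ghost_mask(exposures, exposure_times, threshold=0.15):
    """exposures: list of float images in [0, 1], all the same size.
    exposure_times: matching shutter times in seconds.
    Returns a boolean mask that is True where the exposures disagree,
    i.e. where something probably moved between shots."""
    # Normalise each shot by its exposure time so static pixels should agree
    # (assumes an approximately linear camera response, for simplicity).
    radiances = [img / t for img, t in zip(exposures, exposure_times)]
    stack = np.stack(radiances, axis=0)
    mean = stack.mean(axis=0)
    # Coefficient of variation across the stack: high where pixels disagree.
    disagreement = stack.std(axis=0) / (mean + 1e-6)
    mask = disagreement > threshold
    # Collapse colour channels if present: a pixel is a ghost if any channel moved.
    if mask.ndim == 3:
        mask = mask.any(axis=-1)
    return mask
```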


Name: Cristina Amati Year: 2008-2009
Project Title: Animating 2D ink paintings with 3D wind motion

Hand-drawn art is often much faster to produce and more expressive than a 3D model. Cristina's project offers a solution for creating animations directly from ink paintings by using a recording of the artist at work. The algorithm uses image processing and machine vision techniques to analyse the video frames. Using this information, it segments a high-resolution scan of the painting into semantically meaningful object components (e.g. a plant is broken into leaves, petals, etc.). A model of the object is then constructed in 2.5D and animated with a 3D wind simulation in an automated process.
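
The wind-animation step can be illustrated with a toy sway function (a loose sketch, not Cristina's actual 3D wind simulation; the gust signal and parameters are invented for illustration): each segmented component is displaced along the wind direction by an amount that grows with its distance from the point where it attaches to its parent.

```python
import numpy as np

def sway_component(vertices, anchor, time, wind_dir=(1.0, 0.0, 0.0),
                   strength=0.05, frequency=1.3):
    """vertices: (N, 3) points of one segmented component (e.g. a leaf) in 2.5D.
    anchor: (3,) point where the component attaches to its parent (e.g. the stem).
    Displaces each vertex along the wind direction by an amount proportional to
    its distance from the anchor, driven by a simple oscillating wind signal."""
    wind_dir = np.asarray(wind_dir, dtype=float)
    wind_dir /= np.linalg.norm(wind_dir)
    lever = np.linalg.norm(vertices - anchor, axis=1, keepdims=True)
    # A crude gusty signal: two sine waves of different frequencies summed.
    gust = (np.sin(2 * np.pi * frequency * time)
            + 0.5 * np.sin(2 * np.pi * 0.37 * frequency * time))
    return vertices + strength * gust * lever * wind_dir
```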


Name: Thomi Mertzanidou Year: 2007‐2008
Project Title: Image parsing.

Thomi's project involved image parsing, the process of associating a label with each pixel of an image. Within a small region of an image the visual information is often ambiguous (e.g. the sky and the sea look much the same). However, by combining information about context it is possible to reason about what is going on. For instance, we may see a chair next to a table, above the floor and surrounded by wall, but we are unlikely to see a chair above a table on top of a window. Thomi's project investigated incorporating context into image labelling.
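
The role of context can be illustrated with a toy labelling loop (a sketch of the general idea, not Thomi's actual model; the label set and compatibility matrix are invented): a local classifier's per-pixel scores are combined with a learned compatibility between neighbouring labels, and the labels are refined iteratively.

```python
import numpy as np

LABELS = ["sky", "sea", "table", "chair", "floor"]   # illustrative label set (L = 5)

def relabel_with_context(unary_logp, compat, n_iters=5):
    """unary_logp: (H, W, L) log-probabilities from a local classifier.
    compat: (L, L) log-compatibility of neighbouring labels
            (e.g. high for chair/table, low for chair/sky).
    Iteratively re-labels each pixel using its classifier score plus the
    compatibility with the current labels of its 4-neighbours."""
    labels = unary_logp.argmax(axis=-1)           # start from local evidence only
    H, W, L = unary_logp.shape
    for _ in range(n_iters):
        # Sum neighbour compatibility for every candidate label at every pixel.
        context = np.zeros((H, W, L))
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            neigh = np.roll(labels, (dy, dx), axis=(0, 1))
            context += compat[:, neigh].transpose(1, 2, 0)
        labels = (unary_logp + context).argmax(axis=-1)
    return labels
```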


Name: Frederic Besse Year: 2007‐2008
Project Title: Panoramic videos and time‐flow manipulation

The goal of Frederic's project was to explore time-flow editing in order to generate dynamic panoramic videos. He used time-flow editing to create composite images in which different parts of the image come from different moments in time. For example, in the original video the entire stadium collapsed simultaneously; with a modified time front, different pixels come from different times in the video so that the right half collapses first. He then combined this idea with image panorama techniques to make panoramic videos.
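
The time-front idea can be sketched in a few lines (illustrative NumPy, not Frederic's implementation; it assumes the frames are already registered): the video is treated as a 3-D volume and each output pixel is sampled from the frame index given by a per-pixel time map.

```python
import numpy as np

def apply_time_front(video, time_map):
    """video: (T, H, W, C) array of registered frames.
    time_map: (H, W) array giving, for each pixel, the frame index to sample.
    Returns a single composite image whose pixels come from different times."""
    T, H, W, C = video.shape
    t = np.clip(time_map.astype(int), 0, T - 1)
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    return video[t, rows, cols]          # fancy indexing picks one frame per pixel

# Example: a linear time front so the right-hand side of the frame is "later".
# video = ...  (T, H, W, C) stabilised footage
# time_map = np.tile(np.linspace(0, video.shape[0] - 1, video.shape[2]),
#                    (video.shape[1], 1))
# composite = apply_time_front(video, time_map)
```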


Name: Laura Panagiotaki Year: 2006‐2007
Project Title: Automated camera placement

Current video-game camera control techniques are criticised for inadequate capture of game action.  Laura's project developed and evaluated a set of real-time automated algorithms for the movement and placement of virtual cameras. These addressed key limitations of existing techniques by drawing on cinematographic principles to drive the autonomous control system. The camera reacts to changes in the game environment in real‐time, and allows control parameters to be tailored to maximize dramatic impact and playability.
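
One cinematographic rule of this kind can be sketched as follows (a toy example, not Laura's actual system; the framing rule, z-up convention and smoothing constant are assumptions): place the camera perpendicular to the line joining two subjects, far enough back for both to fit in the field of view, and ease towards that position each frame so the motion stays smooth as the game state changes.

```python
import numpy as np

def frame_two_subjects(cam_pos, a, b, fov_deg=60.0, smoothing=0.1, height=1.7):
    """cam_pos: current camera position (3,). a, b: subject positions (3,).
    Returns the new camera position and look-at target. The desired camera
    sits perpendicular to the line joining the subjects, far enough back for
    both to fit in the field of view, and the camera eases towards it so the
    shot stays smooth as the subjects move."""
    midpoint = (a + b) / 2.0
    separation = np.linalg.norm(b - a)
    # Distance needed so the pair spans roughly the horizontal field of view.
    distance = (separation / 2.0) / np.tan(np.radians(fov_deg) / 2.0) + 1.0
    line = (b - a) / (separation + 1e-6)
    side = np.cross(line, np.array([0.0, 0.0, 1.0]))   # perpendicular in the ground plane
    side /= np.linalg.norm(side) + 1e-6
    desired = midpoint + side * distance + np.array([0.0, 0.0, height])
    new_pos = cam_pos + smoothing * (desired - cam_pos)  # exponential easing per frame
    return new_pos, midpoint
```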



Name: Saurabh Sethi Year: 2006‐2007
Project Title: Text retrieval from Archimedes palimpsest.

Saurabh's project developed new image-processing techniques for recovering hidden text from multispectral images of an ancient document: the Archimedes palimpsest. As was common in the Middle Ages, the author took old parchment, scraped off the original writing and wrote his own text on top. Historians are now interested in the text written underneath, which is still discernible in places. The project used state-of-the-art computer vision techniques to enhance the hidden text.
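
One standard way of enhancing undertext in multispectral data can be sketched as follows (illustrative only, and not necessarily the technique Saurabh used; the choice of bands is an assumption): combine a band in which the undertext ink responds strongly with one in which it does not, then stretch the contrast of the result so the faint writing stands out.

```python
import numpy as np

def enhance_undertext(band_uv, band_red, eps=1e-6):
    """band_uv, band_red: float images of the same page captured under
    different illumination/wavelengths, values in [0, 1].
    Returns a contrast-stretched ratio image in which ink that behaves
    differently in the two bands (the scraped-off undertext) stands out."""
    ratio = band_uv / (band_red + eps)
    # Robust contrast stretch to [0, 1] using the 2nd and 98th percentiles.
    lo, hi = np.percentile(ratio, [2, 98])
    return np.clip((ratio - lo) / (hi - lo + eps), 0.0, 1.0)
```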




Name: Umar Mohammed Year: 2005‐2006
Project Title: Generative Models for Face Recognition

Face recognition algorithms have many real-world applications, but current approaches are not sufficiently reliable for widespread market acceptance. Umar's project developed a new approach to face recognition based on recent developments in machine learning. The algorithm calculates the probability that faces have an underlying common cause (they come from the same person). Umar's experiments demonstrate several advantages of this approach over the current state of the art.
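
The kind of probabilistic comparison involved can be sketched with a toy Gaussian model (illustrative, not Umar's actual model; the variances are assumed): sharing a latent identity variable makes two observations of the same person correlated, so the two hypotheses can be compared by their likelihoods.

```python
import numpy as np
from scipy.stats import multivariate_normal

def same_person_log_odds(x1, x2, sigma_between=1.0, sigma_within=0.5):
    """x1, x2: face feature vectors (already mean-centred), same length.
    Returns log p(x1, x2 | same person) - log p(x1, x2 | different people)
    under a per-dimension Gaussian model:
        x = identity + noise,  identity ~ N(0, sigma_between^2),
                               noise    ~ N(0, sigma_within^2).
    Sharing the identity variable couples the two observations."""
    sb2, sw2 = sigma_between ** 2, sigma_within ** 2
    cov_same = np.array([[sb2 + sw2, sb2],
                         [sb2,       sb2 + sw2]])   # shared identity -> correlated
    cov_diff = np.array([[sb2 + sw2, 0.0],
                         [0.0,       sb2 + sw2]])   # independent identities
    pairs = np.stack([x1, x2], axis=1)              # one 2-D point per feature dimension
    log_same = multivariate_normal(mean=[0, 0], cov=cov_same).logpdf(pairs).sum()
    log_diff = multivariate_normal(mean=[0, 0], cov=cov_diff).logpdf(pairs).sum()
    return log_same - log_diff                      # > 0 favours "same person"
```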



Name: Jania Aghajanian Year: 2005‐2006
Project Title: Predicting cognitive states from fMRI using support vector machines.

Jania wrote an image processing system for a mind reading competition run by the Organization for Human Brain Mapping in 2006. The competition provided functional MRI scans taken every few seconds from a subject watching a movie inside an MRI scanner. Competitors had to write a computer program to determine the contents of the movie from the subject's scans. Jania combined image processing techniques with machine learning to predict cognitive states from features of the fMRI data.
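
The general approach can be sketched with scikit-learn (illustrative placeholder data, not the competition's actual features or ratings): a support vector machine is trained to map each scan's feature vector to a label describing what was on screen at that moment.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per fMRI scan (e.g. voxel or region-of-interest features),
# y: what was on screen at that moment (e.g. 0 = "no faces", 1 = "faces").
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                               # placeholder feature matrix
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)   # placeholder labels

train, test = slice(0, 150), slice(150, 200)   # keep the train/test split chronological
model = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
model.fit(X[train], y[train])
print("held-out accuracy:", model.score(X[test], y[test]))
```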




Name: Sun Hyun Lee Year: 2005‐2006
Project Title: Pedestrian Detection and Segmentation based on human silhouette models in a still image

Sung Hyun's project addressed pedestrian detection in static images, a challenging task because the shape and appearance of human beings exhibit considerable variability. He built a model that describes the family of shapes that human beings can take and then searched the image for these shapes. He took real-world images and fitted this model using an iterative method that built up a colour model for each detected person. Finally, each pedestrian was segmented using a Markov Random Field.
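
The colour-model step can be sketched as follows (a simplified illustration, not Sung Hyun's exact formulation): a Gaussian colour model is fitted to pixels inside a detected silhouette, and its per-pixel log-likelihood map is the kind of evidence that the Markov Random Field then smooths into the final segmentation.

```python
import numpy as np

def fit_colour_model(pixels):
    """pixels: (N, 3) colours sampled from inside a detected silhouette.
    Returns the mean and covariance of a single Gaussian colour model."""
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)   # regularise
    return mean, cov

def colour_log_likelihood(image, mean, cov):
    """image: (H, W, 3) float image. Returns an (H, W) map of how well each
    pixel fits the person's colour model; a Markov Random Field combines this
    data term with a smoothness prior to produce the segmentation."""
    diff = image - mean
    inv = np.linalg.inv(cov)
    maha = np.einsum("hwi,ij,hwj->hw", diff, inv, diff)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (maha + logdet + 3 * np.log(2 * np.pi))
```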


Name: Christine Dubreau Year: 2004‐2005
Project Title: Active contours and support vector machines: SVM Shapes

Christine's project created a "snake" algorithm driven by a support vector machine that identifies image textures corresponding to objects of interest. A snake is a dynamic contour that crawls around the image and attempts to align itself with objects of interest. She combined this with a support vector machine classifier which learnt to discriminate between different image textures of interest. She used the algorithm to construct a system for outlining multiple‐sclerosis lesions in MRI brain scans.
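
The coupling between the two components can be sketched as follows (a toy version, not Christine's implementation; the update rule and step sizes are assumptions): the SVM's decision values form a per-pixel score map that is high on the texture of interest, and the snake's points climb the gradient of that map while a smoothing term keeps the contour coherent.

```python
import numpy as np

def evolve_snake(points, score_map, n_iters=200, alpha=0.2, step=1.0):
    """points: (N, 2) array of (row, col) contour coordinates.
    score_map: (H, W) SVM decision values, high on the object's texture.
    Each iteration pulls every point up the gradient of the score map
    (towards the object) and towards the midpoint of its two neighbours
    (keeping the contour smooth)."""
    grad_r, grad_c = np.gradient(score_map)
    H, W = score_map.shape
    pts = points.astype(float).copy()
    for _ in range(n_iters):
        r = np.clip(pts[:, 0].round().astype(int), 0, H - 1)
        c = np.clip(pts[:, 1].round().astype(int), 0, W - 1)
        external = np.stack([grad_r[r, c], grad_c[r, c]], axis=1)
        internal = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0)) - pts
        pts += step * external + alpha * internal
        pts[:, 0] = np.clip(pts[:, 0], 0, H - 1)
        pts[:, 1] = np.clip(pts[:, 1], 0, W - 1)
    return pts
```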




Name: Lisa Gralewski Year: 2002‐2003
Project Title: Foreign body identification in automatic food sorting systems using colour.

Machine vision is extensively used in industry to detect damaged or low-quality produce and to find defects in products on assembly lines. Lisa wrote software to detect foreign bodies in images of streams of crops falling in front of a camera in an industrial food sorting machine. The project used colour to identify the crop and distinguish it from other materials, which the machine then ejects from the stream.
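
The colour-based test can be sketched as follows (illustrative hue thresholds and library choices, not the deployed system): each sufficiently bright pixel is classified as crop or non-crop by its hue, and connected groups of non-crop pixels are flagged as candidate foreign bodies for ejection.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv
from scipy import ndimage

def find_foreign_bodies(rgb_image, hue_range=(0.08, 0.20), min_pixels=30):
    """rgb_image: (H, W, 3) float image in [0, 1] of the falling crop stream.
    hue_range: hypothetical hue band covering the crop's colour.
    Pixels outside the crop's colour band (and not background-dark) are
    grouped into connected components and returned as candidate foreign bodies."""
    hsv = rgb_to_hsv(rgb_image)
    hue, val = hsv[..., 0], hsv[..., 2]
    is_material = val > 0.15                          # ignore the dark background
    is_crop = is_material & (hue >= hue_range[0]) & (hue <= hue_range[1])
    is_foreign = is_material & ~is_crop
    labels, n = ndimage.label(is_foreign)             # connected components
    sizes = np.bincount(labels.ravel())[1:]           # pixel count of each component
    keep = np.where(sizes >= min_pixels)[0] + 1       # drop tiny noise blobs
    return np.isin(labels, keep)                      # mask of foreign-body pixels
```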