Research Masters in Computer Vision, Image Processing, Graphics and Simulation

Machine Vision and Image Processing Research Projects

Bernard Buxton, January 2002

1 Human Body Modelling

B F Buxton, (Conny Ruiz) and Ioannis Douros, with Hamamatsu Photonics, UK.

3D digitisation of the human body using, for example, the Body Lines scanner [1] loaned to the Department by Hamamatsu Photonics can provide dense sampling of the body surface, of the order of 100,000 points or, for other scanners, several million data points per image. In order to use this data, it is necessary to reconstruct the surface at high resolution, dealing appropriately with any noise. Considerable research effort (see for example: [2,3,4,5]) has been put into generalised 3D reconstruction techniques, but these do not take advantage of knowledge about the morphology of the human body, which can potentially reduce the complexity and improve the accuracy of surface reconstruction from 3D human scan data. A number of techniques that do so have been developed at UCL over the past few years [6,7,8] to produce robust, fast mesh- and spline-based surface reconstruction techniques for subjects who are unclothed or are wearing close-fitting garments and have been scanned in an approximately known posture. These techniques make use of a number of assumptions about topology and surface curvature when reconstructing the skin, and can provide surfaces with good levels of detail, but utilise scanner data that has been pre-processed or "cleaned" to remove artifacts and reduce noise.

For clothed subjects these assumptions do not generally hold and, because of the complexity of the folded, crumpled surface of clothing, the data cannot be pre-processed to the same extent as for an unclothed subject. Last year, in her MRes project, Conny Ruiz [11] successfully combined fabric draping techniques with ideas borrowed from robust statistics to develop a method for the model-based fitting of a surface to such scanner data. The method was shown to work well, and the fitted models could, for example, be used for visualisation in virtual environments and for numerical analysis of fabric drape [9,10]. However, there is scope to use a better distance metric between the data points and the model. The aim of this project is to explore such an approach and to see if it can provide an even better surface reconstruction than that developed previously, in particular when applied to scans of subjects in underwear, i.e. for modelling human skin.
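The project does not prescribe a particular metric, but the flavour of robust, model-based fitting can be illustrated by a small sketch (assuming NumPy): an iteratively reweighted least-squares fit of a plane to noisy points, with a Tukey biweight down-weighting gross outliers standing in for clothing folds off the body surface. The plane model, constants and data here are purely illustrative, not the method of [11].

```python
import numpy as np

def tukey_weights(r, c):
    """Tukey biweight: down-weights, then ignores, large residuals."""
    w = np.zeros_like(r)
    inlier = np.abs(r) < c
    w[inlier] = (1.0 - (r[inlier] / c) ** 2) ** 2
    return w

# Synthetic 'scan patch': a plane z = 0.5x - 0.3y + 1 plus noise,
# with 10% gross outliers standing in for clothing folds.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, (200, 2))
z = 0.5 * xy[:, 0] - 0.3 * xy[:, 1] + 1.0 + 0.01 * rng.standard_normal(200)
z[:20] += 2.0

A = np.column_stack([xy, np.ones(200)])
params = np.linalg.lstsq(A, z, rcond=None)[0]          # ordinary LS start
for _ in range(20):
    r = z - A @ params
    s = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust scale (MAD)
    w = tukey_weights(r, 4.685 * s)
    Aw = A * w[:, None]
    params = np.linalg.solve(Aw.T @ A, Aw.T @ z)       # weighted LS step
```

The recovered plane coefficients are close to the true (0.5, -0.3, 1.0) despite the outliers, whereas the initial ordinary least-squares fit is visibly biased.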

References

[1] Horiguchi C, "Sensors that Detect Shape", J. Adv. Automation Technology Vol. 7 No. 3, 1995, pp. 210-216.

[2] Amenta N, Bern M and Kamvysselis M, "A new Voronoi-based surface reconstruction algorithm", Siggraph '98, pp 415-421.

[3] Hoppe H, "Surface Reconstruction from Unorganized Points", PhD Thesis, U. Washington, 1994.

[4] Li P, and Jones P, "Anthropometry-Based Surface Modelling of the Human Torso", Computers in Engineering, Amer. Soc. Mech. Eng., Minneapolis, 1994, pp. 469-474.

[5] Hilton A, Stoddart A, Illingworth J, and Windeatt T, "Implicit surface-based geometric fusion", Computer Vision and Image Understanding, vol 69, pp 273-291, 1998.

[6] West, E, "B-spline surface skinning for body scanner data", MRes Thesis, Department of Computer Science, University College London, September 1997.

[7] Dekker L, Khan S, West E, Buxton B, Treleaven P, "Models for Understanding the 3D Human Body Form", IEEE International Workshop on Model-Based 3D Image Analysis, IEEE, 1998, pp. 65-74.

[8] Douros, I, "B-spline surface reconstruction of the human body from 3D scanner data", MRes Thesis, Department of Computer Science, University College London, September 1998.

[9] Tsopelas N, "Modelling thin-walled objects in computer graphics and animation", PhD Thesis, Department of Computer Science, Queen Mary and Westfield College, University of London, 1993.

[10] Volino P, Courchesne M, and Magnenat Thalmann, N, "Versatile and efficient techniques for simulating cloth and other deformable objects", Computer Graphics (SIGGRAPH), annual Conference Series, 1995, pp137-144.

[11] Ruiz, M C, "Modelling Clothed People", MRes Thesis, Department of Computer Science, University College London, September 2000.


2 Articulation and Animation of 3D Digitised Body Surface Images

B F Buxton and J Oliveira, with Hamamatsu Photonics, UK.

Animated models of human beings are typically based on synthetic polygonal or metaball forms [1], using detailed positioning techniques such as those described in the h-anim standard [2] and key framing or, as in recent research, physics and biomechanics based approaches to model realistic shape and motion [3,4,5]. The shape of these models can be adjusted to that of actual people, most famously as in the work of N and D Thalmann on the development of a Marilyn Monroe avatar [6], but also, more generally for any subject using video silhouettes as in the work of Hilton and Gentils [7,8]. However, the availability of dense, 3D digitised data of the human body surface from devices such as the Body Lines scanner [9] loaned to the Department by Hamamatsu Photonics potentially allows an additional level of realism to be applied by articulating and animating skinned models based on such digitised data.

Teichmann [10] has developed a technique for assisted articulation, in which a semi-interactive process is used to define the 'skeleton' of an arbitrary object, based on a closed mesh, resulting in an appropriately articulated form. Some exploratory work has been done at UCL along these lines to segment 3D body images by automated location of key landmarks that correspond to joint locations, generating articulated VRML output that can be manipulated interactively to move the limbs [11]. The major shortcoming of this approach is the lack of surface deformation, which is particularly necessary at areas that undergo major shape change during movement, such as the shoulders and hips. MRes projects completed in the last two years [14,15] have addressed these problems by coupling the skin to the underlying articulated skeleton [12,13] and developing deformable skin models to produce automated techniques that can take 3D skinned body forms and generate articulated h-anim compliant output. These projects used an elasticity model and a conjugate gradient optimisation routine to produce pseudo-dynamics for the body skin. This worked well, but could obviously be improved by incorporating a purely geometrical motion of the skin [16,17] and correct, Newtonian dynamics. An initial attempt was made in a more recent project to build a hybrid system for animating whole body scans by using a combination of geometric morphing, elasticity and Newtonian dynamics to the best effect. A basic hybrid system was built, but there is ample scope for a second attempt at this project, in particular incorporating the recent work at UCL on level of detail and animation, for example using the geometric blending and/or elasticity forces to control the local level of detail.
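The idea of pseudo-dynamic skin settling can be sketched in a deliberately simplified form (this is not the code of [14,15]): a strip of skin vertices joined by springs, with both ends pinned to the skeleton, relaxes by minimising its spring energy after a joint is moved. The projects used conjugate gradients; plain gradient descent is shown here for brevity, and the geometry is invented.

```python
import numpy as np

rest = 1.0                            # spring rest length
p = np.zeros((5, 2))
p[:, 0] = np.arange(5.0)              # skin vertices, initially along the x axis
target = np.array([3.0, 2.0])         # the end vertex is dragged by a moved joint

def spring_energy(p):
    d = np.linalg.norm(p[1:] - p[:-1], axis=1)
    return float(((d - rest) ** 2).sum())

def energy_grad(p):
    g = np.zeros_like(p)
    d = p[1:] - p[:-1]
    length = np.linalg.norm(d, axis=1)
    f = (2.0 * (length - rest) / length)[:, None] * d
    g[1:] += f                        # each spring pulls its two vertices
    g[:-1] -= f                       # toward its rest length
    return g

p[-1] = target
for _ in range(10000):                # simple gradient descent; the projects
    g = energy_grad(p)                # used a conjugate gradient routine
    g[0] = 0.0
    g[-1] = 0.0                       # both end vertices pinned to the skeleton
    p -= 0.05 * g
```

After relaxation the interior vertices redistribute so that every spring is close to its rest length, the one-dimensional analogue of the skin settling smoothly between joints.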

References

[1] Porcher Nedel L, Thalmann D, "Modelling and Deformation of the Human Body Using an Anatomically-Based Approach", Proc. Computer Animation, 1998.

[2] Roehl B, "Draft specification for a standard VRML Humanoid, Version 1.0", http://ece.uwaterloo.ca/~h-anim/, 1997.

[3] Tek H, and Kimia B, B, "Volumetric segmentation of medical images by three-dimensional bubbles", in IEEE Workshop on Physics-Based Modelling in Computer Vision, 1995.

[4] Brogan D,C, Metroyer R,A, and Hodgins J,K, "Dynamically simulated characters in virtual environments", IEEE Computer Graphics and Applications, September/October 1998, pp 58-69.

[5] Metaxas, D and Terzopoulos D, "Shape and nonrigid motion estimation through physics-based synthesis", IEEE PAMI, vol 15, 1993.

[6] Magnenat Thalmann N, and Thalmann, D, editors, "Artificial Life and Virtual Reality", John Wiley & Sons, 1994. See for example, the introduction by the Thalmanns themselves.

[7] Hilton A, and Gentils T, "Popup People: Capturing human models to populate virtual worlds", Centre for Vision, Speech and Signal Processing, University of Surrey, http://www.ee.surrey.ac.uk?Research/VSS/P3Dvision.

[8] Hilton A, Gentils T, and Beresford D, "Virtual People: Capturing 3D Articulated Models of Individual People", IEE Colloquium on Computer Vision for Virtual Human Modelling, 1998.

[9] Horiguchi C, "Sensors that Detect Shape", J. Adv. Automation Technology Vol. 7 No. 3, 1995, pp. 210-216.

[10] Teichmann M, Teller S, "Assisted Articulation of Closed Polygonal Models", MIT Technical report and technical sketch at Siggraph '98.

[11] Carruthers D, "Conversion and Articulation of 3-D Body Scans", MSc Thesis, Department of Computer Science, University College London, September 1998.

[12] Scheepers F, " Anatomy-Based Modeling of the Human Musculature", Proceedings SIGGRAPH'97.

[13] Wilhelms J and Van Gelder A, "Anatomically Based Modeling", Proceedings of SIGGRAPH'97.

[14] Kenton A,"Articulation and Animation of 3D Digitised Body Surface Images", CVIPGS MRes project dissertation, UCL, September 1999.

[15] Chapman C, "Using geometry and elasticity to articulate and animate 3-D digitised body surface images", CVIPGS MRes project dissertation, UCL, September 2000.

[16] Sun W, Hilton A, Smith R and Illingworth J, "Building animated models from 3-d scanned data", in Scanning 2000, Paris, May 2000.

[17] Smith R, Hilton A,and Sun W, "Seamless VRML humans", in Scanning 2000, Paris, May 2000.


3 Creating a skeleton for 3D models of the human body

B F Buxton, J Oliveira and I Douros in collaboration with Hamamatsu UK.

The availability of dense, 3D digitised data of the human body surface from devices such as the Body Lines scanner [1] loaned to the Department by Hamamatsu Photonics potentially allows a number of applications to be tackled at an unprecedented level of detail and fidelity. One is the articulation and animation of human body models described above. This requires an underlying skeleton which, to date, has been produced by hand. However, traditional image processing techniques such as the medial axis transform [2,3,4] may be extended to higher dimensions and adapted to work from tessellated data [5], and such methods may then be used to define the axes of the main limbs as precursors to the definition of a humanoid skeleton [6]. Alternatively, techniques such as principal components analysis may be adapted to provide initial limb axes for data from the Body Lines scanner. For example, using the former approach, Teichmann [7] has developed a semi-interactive process to define the 'skeleton' of an arbitrary object, based on a closed mesh, resulting in an appropriately articulated form.
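As an illustrative sketch (not the project's implementation), principal components analysis yields an initial limb axis directly from a point cloud: the axis is the eigenvector of the covariance matrix with the largest eigenvalue. The 'limb' below is synthetic, a vertical 40 cm segment with transverse scatter playing the role of the limb's girth.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 'limb': 500 scan points scattered about a 40 cm vertical segment
t = rng.uniform(0.0, 40.0, 500)
pts = np.column_stack([np.zeros(500), np.zeros(500), t])
pts += rng.standard_normal((500, 3)) * [3.0, 3.0, 0.5]   # girth as scatter

centre = pts.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov((pts - centre).T))
limb_axis = evecs[:, np.argmax(evals)]    # direction of greatest spread
```

The estimated axis is within a degree or so of the true vertical, enough to seed the fitting of a humanoid skeleton segment.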

Recently, however, a much more robust technique has been developed for constructing the affine skeleton [8]. A preliminary implementation of the affine skeleton has been carried out [9] which shows that the technique is extremely interesting when applied to human whole-body scan data in a slice format, as obtained from the Hamamatsu body scanner. The aim of this project is to explore the use of such techniques for automatically building a humanoid skeleton from a body scan.

References

[1] Horiguchi C, "Sensors that Detect Shape", J. Adv. Automation Technology Vol. 7 No. 3, 1995, pp. 210-216.

[2] Blum H, "A transformation for extracting new descriptions of shape", Symposium on Models for the Perception of Speech and Visual Form, Cambridge, MIT Press, 1964.

[3] Jain A K, "Fundamentals of Digital Image Processing", Prentice Hall, 1987. See Chapter 9.

[4] de Berg M, van Krevald M, Overmars M and Schwarzkopf O, "Computational Geometry. Algorithms and Applications", Springer 1997. See chapter 7.

[5] Chin F J, Snoeyink J, and Wang C-A, "Finding the medial axis of a simple polygon in linear time", in Proc. 6th Annual International Symposium of Algorithms and Computation (ISAAC'95). Lecture Notes in Computer Science, Vol 1004, pp 382-391, Springer-Verlag, 1995.

[6] Carruthers D, "Conversion and Articulation of 3-D Body Scans", MSc Thesis, Department of Computer Science, University College London, September 1998.

[7] Teichmann M, Teller S, "Assisted Articulation of Closed Polygonal Models", MIT Technical report and technical sketch at Siggraph '98.

[8] Betelu S, Sapiro G, Tannenbaum A, and Giblin P J, "Noise-Resistant Affine Skeletons of Planar Curves", in Proceedings of the 6th European Conference on Computer Vision, Dublin, Ireland, 27-30 June 2000, edited by D Vernon, Lecture Notes in Computer Science, Vol. II, pp. 742-754, Springer, June 2000.

[9] Moon-Sik Jeong and B F Buxton, "The Affine Skeleton of Cross-Sectional, 3-D Human Body Scan Data", draft December 2001.


4 Modelling The Human Body

B F Buxton and I Douros in collaboration with the 3D Centre and Hamamatsu UK.

There are now a number of 3D scanners, such as the Body Lines scanner [1] loaned to the Department by Hamamatsu Photonics, and other systems available that can produce 3D images of the surface of a whole human body. Techniques have been developed for converting the cloud of data points delivered by such systems into a surface representation, either as a mesh, a series of spline curves or as a smooth spline surface [see for example: 2,3,4 and subsequent publications]. Although such representations are convenient for visualisation and a number of applications, be they qualitative, such as texture mapping, or quantitative, such as body volume and surface area measurement, they do not go very far towards providing a description of body shape. Shape descriptions would themselves be extremely useful in several ways, for example in applications in the clothing and fashion industries, in ergonomics, and in medicine and healthcare. In addition, shape descriptions could also serve as the basis for further processing of 3D body scans, in particular: for matching scans, either of the same subject taken at different times or of different subjects, to an idealised reference model; for human body model building, for example via the Procrustes alignment procedure [5] and subsequent statistical modelling via linear and non-linear principal components analysis [6-9]; and for the development of automatic electro-optical measurement techniques for human anthropometry [10].
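The core of Procrustes alignment [5] for two scans with known point correspondences can be sketched as follows. This is an illustrative rigid (rotation plus translation) alignment only; the generalised procedure also handles scaling and aligns many scans simultaneously.

```python
import numpy as np

def procrustes_align(X, Y):
    """Rigidly align point set Y to X (rotation + translation, least squares)."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    U, _, Vt = np.linalg.svd((X - mx).T @ (Y - my))
    if np.linalg.det(U @ Vt) < 0:     # guard against a reflection
        U[:, -1] *= -1
    R = U @ Vt
    return (Y - my) @ R.T + mx

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 3))                 # landmark points on one scan
theta = 0.7                                      # a rotation about the z axis
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Y = X @ Rz.T + np.array([5.0, -2.0, 1.0])        # the same scan, moved
Y_aligned = procrustes_align(X, Y)
```

For exactly corresponding points the alignment is recovered to machine precision; real scans would first need correspondences established between the two surfaces.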

The local differential geometry of the human body has thus been studied [11,12] and techniques developed, for example, for estimating surface curvature from the scanner data. However, recent work [13] suggests that non-local geometric properties of the body surface, such as the loci of bitangent curves, may be more robust and more useful. The aim of this project is therefore to develop a system for generating such local and non-local geometric descriptors and to study their behaviour in a scale space as the body data is progressively smoothed or its level of detail progressively reduced by decimation or edge collapse.
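For example, curvature along a single cross-sectional slice can be estimated by finite differences. The sketch below uses a synthetic circular slice, where the true curvature is the reciprocal of the radius; real scanner slices would first need ordering and smoothing, and the values near the ends of an open curve are less accurate.

```python
import numpy as np

# Curvature of a closed planar profile (e.g. a body slice) by finite differences
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
r = 2.0
x, y = r * np.cos(t), r * np.sin(t)            # a circular 'slice' of radius 2

dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
kappa = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
```

Away from the array boundaries the estimate is close to the true value 1/r = 0.5; smoothing the slice before differencing would trace out the scale-space behaviour mentioned above.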

References

[1] Horiguchi C, "Sensors that Detect Shape", J. Adv. Automation Technology Vol. 7 No. 3, 1995, pp. 210-216.

[2] West, E, "B-spline surface skinning for body scanner data", MRes Thesis, Department of Computer Science, University College London, September 1997.

[3] Dekker L, Khan S, West E, Buxton B, Treleaven P, "Models for Understanding the 3D Human Body Form", IEEE International Workshop on Model-Based 3D Image Analysis, IEEE, 1998, pp. 65-74.

[4] Douros, I, "B-spline surface reconstruction of the human body from 3D scanner data", MRes Thesis, Department of Computer Science, University College London, September 1998.

[5] Gower J C, "Generalised Procrustes Analysis", Psychometrika, vol 40, pp. 33-51, (1975).

[6] Hill A, Cootes T F and Taylor C J, "Active shape models - 'smart snakes'", in Proceedings of the 3rd British Machine Vision Conference, University of Leeds, 22-24 September, pp. 266-275, Springer-Verlag, (1992).

[7] Hill A, Thornham A, Taylor C J, "Model-Based Interpretation of 3D Medical Images", in Proceedings of the 4th British Machine Vision Conference, University of Surrey, 21-23 September, pp. 339-348, BMVA Press (1994).

[8] Haslam J, Taylor C J and Cootes T F, "A probabilistic fitness measure for deformable template models", in Proceedings of the 4th British Machine Vision Conference, University of Surrey, 21-23 September, pp. 33-42, BMVA Press (1994).

[9] Sozou P D, Cootes T F, Taylor C J, and Di Mauro E C, "Non-linear point distribution modelling using a multi-layer perceptron", in Proceedings of the 5th British Machine Vision Conference, University of Birmingham, 11-14 September, pp. 107-116, BMVA Press (1995).

[10] Li P, and Jones P, "Anthropometry-Based Surface Modelling of the Human Torso", Computers in Engineering, Amer. Soc. Mech. Eng., Minneapolis, 1994, pp. 469-474.


5 Manipulation of 3D scanned models of the human body

B F Buxton and I Douros in collaboration with the 3D Centre for Electronic Commerce and Hamamatsu UK.

3D scans of the human body may now routinely be collected and, for example, the shape of the torso analysed statistically, but it is not so easy to manipulate a scan so as, for example, to edit it to produce a slightly different body shape. The basic problem, well known in computer graphics and animation, is that editing by hand a model that may be defined by many thousands of vertices is extremely tedious and difficult to accomplish satisfactorily, as it has to be carried out by a myriad of detailed operations, one for each vertex. It is thus frustrating that, although it is easy to build representations of an object's shape (the whole human body or just the torso in this case) that describe the shape systematically in increasing detail, for example via the central moments, the process is not easily reversed. Thus, one cannot edit selected moments in order to create changes in the shape at the desired scale, as the reconstruction of the surface turns out to be undefined.
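The forward direction, computing central moments from a shape, is indeed straightforward; the difficulty described above lies entirely in the reverse direction. As an illustrative sketch, for a 2D binary silhouette the central moments are simple sums over the shape's pixels (the rectangular silhouette here is synthetic):

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a binary silhouette image."""
    ys, xs = np.nonzero(img)
    xbar, ybar = xs.mean(), ys.mean()
    return float(((xs - xbar) ** p * (ys - ybar) ** q).sum())

# A simple axis-aligned rectangular 'silhouette'
img = np.zeros((60, 40), dtype=bool)
img[10:50, 5:35] = True

mu00 = central_moment(img, 0, 0)   # area in pixels
mu11 = central_moment(img, 1, 1)   # zero for an axis-aligned shape
mu20 = central_moment(img, 2, 0)   # spread about the vertical axis
```

Going the other way, from a set of edited moments back to a unique silhouette or surface, is the underdetermined inverse problem that this project's constrained reconstruction scheme addresses.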

A constrained reconstruction scheme based on iterative linearisation of the solution implied by use of Lagrange multipliers has been developed. The aim of this project is to implement this scheme and to see to what extent it can provide a useful means of interactively tailoring object shape, for example by manipulating 2D body shapes (slices, profiles and silhouettes) and the body surface.


6 Guided refinement of the visual appearance of digital clothes

B F Buxton and B Spanlang in collaboration with the 3D Centre for Electronic Commerce

Simulation of deformable surfaces such as garments has been of interest to the computer graphics community for more than 15 years. Different algorithms have emerged that improve simulation accuracy and speed. Nevertheless, one always has to live with a trade-off between the two, and complete physical accuracy cannot be reached because the models are only approximations. People are now interested in shopping for clothes virtually on the Internet, expecting to try on garments on a 3D representation of themselves. Typical customers do not mind if the visual quality is not perfect when garments are animated. But when it comes to high-quality still images, as published in catalogues, researchers are challenged to find the right adjustments to their models to make them look indistinguishable from reality. To this end, so-called Kawabata tests are made on real fabrics to obtain physical properties such as weight, stretch, shear and bend. These properties are then mapped to the computer model, in our case a mass-spring model. Comparing the behaviour of simulated fabrics with that of the real ones in images gives very good results, but the rendered image and the photograph are still not identical.
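The behaviour of such a mass-spring model can be sketched for a single cloth node hanging from a fixed point (the constants are illustrative, not Kawabata-derived; a full garment simulator couples thousands of such nodes and springs):

```python
import numpy as np

k = 50.0        # spring stiffness (would come from a fabric stretch measurement)
m = 0.01        # node mass, kg
g = 9.81
damping = 0.2
rest = 0.1      # spring rest length, m
dt = 1e-3

# Semi-implicit Euler integration: velocity first, then position
x = rest        # current spring length
v = 0.0
for _ in range(20000):
    stretch = x - rest
    a = g - (k * stretch + damping * v) / m
    v += a * dt
    x += v * dt

equilibrium = rest + m * g / k   # analytic resting length for comparison
```

The node oscillates, is damped out, and settles at the analytic equilibrium; updating the velocity before the position keeps this explicit scheme stable at this time step.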

The aim of this project is to find algorithms that automatically adjust the visual appearance of virtual clothing by taking a real image of the same garment as a guide. The algorithm should find visible differences and adjust the virtual model so that it looks identical to the real picture. To get the shading right, normals of the virtual garment will need to be adjusted, possibly by utilising normal/bump maps. New per-pixel lighting hardware, global illumination approximations and, if necessary, programmable hardware shaders will be employed to allow fine tuning of the visual quality.

Example images of garments and their simulated counterparts can be found at http://www.cs.ucl.ac.uk/research/vr/Projects/3DCentre/different_fabric_results.htm and a description of the project at http://www.cs.ucl.ac.uk/staff/b.spanlang/project/GarmentAppearance.htm


7 A computer interactive dance performance

B F Buxton, M Slater, and S Hu in collaboration with M Ramsgaard Thomsen in the Bartlett School and Carol Brown of Diamond Dance Studios, London N1.

There is scope for two or three collaborative projects in connection with an idea recently proposed for an interactive stage and dance performance. The opportunities are for:

(i)    Building and implementing a statistical tracker that will deliver, in real time, a stream of data about the movement and pose of a dancer,

(ii)    Building a system to track the position, orientation and shape of a moveable, flexible screen to be used for projection of computer-generated imagery driven by the characteristics of the dance performance and by rehearsed, preprogrammed, choreographed dance sequences. It may also be necessary to construct, or at least to collaborate with others on constructing, the appropriate computer-generated imagery.

(iii)    Building a genetic/evolutionary programming system that learns to assemble the tracker data, rehearsed dance sequences and preprogrammed choreographed sequences for generation of a computer display consistent with the artistic content of the performance.

For a diagrammatic layout see: http://www.cs.ucl.ac.uk/staff/mthomsen/download/spawn/index.html. A full copy of the proposed collaboration can be provided on request.
 


8 Application of multiview techniques to the visualisation of historical artefacts and creation of a content addressable image database

B F Buxton and M Hansard with Dr Susanne Kuechler-Fogden, Department of Anthropology

The Department of Anthropology has, in collaboration with colleagues elsewhere in the UK, in the USA and in Europe, been studying the appearance and significance of a set of historical artefacts, which are rare, distributed across several continents, and display a number of unique patterns. Since the artefacts are physically available only at one location and, even then, often only to a few people on a restricted basis, there is a need to create a visual database from which they can be viewed, for example, by researchers worldwide. One way to do this would be to create complete, computerised 3D models of the artefacts, for example using close-range photogrammetry [1]. Another would be to use implicit 3D modelling and visualisation techniques recently developed in computer vision research to facilitate visualisation from a database of a number of different basis views.

The aim of this project is to study the feasibility of the latter for the typical case where the viewer wishes to look around an object. Both an approximate 3D visualisation based on a linear combination of basis views [2,3,4] which has recently been shown to work well, for example, for face encoding [5], and a full, non-linear, perspective visualisation based on the trifocal tensor [6] may be used. Since the objects are quite complicated and their relief is often very important, it is of particular interest to determine how many basis views are required for a good visualisation and to what extent the simpler and more robust linear combination of views can be used. It is also important to explore to what extent such a database could be used to build a visual interface to the collection that could be used to search for objects from their appearance and the characteristics of the patterns each displays.
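The linear combination of views [2,3] is easy to demonstrate under orthographic projection: the image coordinates of points in a novel view lie in the span of the corresponding basis-view coordinates, so the novel view can be synthesised by solving a small least-squares problem. An illustrative sketch with synthetic points and views:

```python
import numpy as np

rng = np.random.default_rng(3)
P = rng.standard_normal((50, 3))                # 3-D object points

def ortho(P, angle):
    """Orthographic view after rotation about the y axis."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return (P @ R.T)[:, :2]

v1 = ortho(P, 0.0)                              # basis view 1
v2 = ortho(P, 0.4)                              # basis view 2
novel = ortho(P, 0.9)                           # the view to be synthesised

# Each novel image coordinate is a linear combination of basis coordinates
B = np.column_stack([v1, v2[:, 0], np.ones(len(P))])   # [x1, y1, x2, 1]
coeffs, *_ = np.linalg.lstsq(B, novel, rcond=None)
synth = B @ coeffs
```

For clean orthographic data the synthesis is exact; with real images, correspondence errors and perspective effects determine how many basis views are needed, which is precisely the question the project poses.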

References:

[1] Atkinson K B, ed. "Close range photogrammetry and machine vision", Whittles Publishing, 1996.

[2] Ullman S and Basri R, "Recognition by linear combination of models", IEEE PAMI, vol 13, pp992-1006, (1991).

[3] Ullman S, "High level vision: Object recognition and visual cognition", MIT Press (1996).

[4] Mendonca P R S and Cipolla R, "Analysis and computation of an affine trifocal tensor", Proceedings of BMVC'98, September 1998, pp125-133.

[5] Koufakis I and Buxton B F, "Very low bit rate face video compression using linear combination of 2D face views and principal components analysis", Image and Vision Computing, vol 17, number 14, pp 1031-1051, 1999.

[6] Torr P H S and Zisserman A, "Robust parameterisation and computation of the trifocal tensor", Proceedings of BMVC'96, September 1996, pp 655-664.


9 Extrapolation of linear multiview techniques

B F Buxton and John Gilby, Sira.

The generation of novel or virtual views from a linear combination of basis views has been extensively investigated over the last few years [1,2,3]. The most accurate, most pleasing results for multiview interpolation are obtained by a combination of homogeneous least-squares fitting to obtain the geometry of the control points [4,5] and a weighted blending of pixel values [6] to obtain the intensity or colour at each pixel on the object. Images outside the range of the initial basis views may be similarly generated, but with less confidence, as it is not known how far one may extrapolate in this way. Recently, it has been shown [7], in particular for objects such as faces that are symmetric, that the structure of the eigenproblem embedded in the homogeneous least-squares fitting procedure may be used to indicate that the extrapolation breaks down at a particular critical view.
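The homogeneous least-squares step at the heart of [4,5] amounts to taking the right singular vector associated with the smallest singular value, i.e. the eigenvector of the smallest eigenvalue of the normal matrix. An illustrative sketch for a homogeneous line fit (the geometry-fitting problems in [4,5] are larger but have the same structure):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 100)
y = 2 * x + 1 + 0.001 * rng.standard_normal(100)   # points near 2x - y + 1 = 0

# Solve A h ~ 0 with ||h|| = 1: the last right singular vector of A
A = np.column_stack([x, y, np.ones_like(x)])
_, _, Vt = np.linalg.svd(A)
h = Vt[-1]                      # eigenvector of A^T A with smallest eigenvalue
h /= h[0]                       # fix the arbitrary scale for comparison
```

The recovered coefficients are proportional to the true (2, -1, 1); it is the behaviour of the corresponding eigenvalues as the view changes that signals the critical view studied in [7].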

The aim of this project is to study this phenomenon further, to determine to what extent this critical view depends on an object's shape and appearance, and to generalise the results obtained to date to asymmetric objects. A key problem, currently under investigation, is finding a suitable parameterisation of the elements of the matrix eigenproblem similar to that developed for a linear fitting procedure [8], so that the critical view may be predicted from sample views taken from between the initial basis views.

References:

[1] Ullman S and Basri R, "Recognition by linear combination of models", IEEE PAMI, vol 13, pp992-1006, (1991).

[2] Ullman S, "High level vision: Object recognition and visual cognition", MIT Press (1996).

[3] Mendonca P R S and Cipolla R, "Analysis and computation of an affine trifocal tensor", Proceedings of BMVC'98, September 1998, pp125-133.

[4] Kennedy D, Buxton B F and Gilby J H, “Application of the Total Least Squares Procedure to Linear View Interpolation” in Proc BMVC’99, pp 305-314, 1999.

[5] Mühlich M and Mester R, “The Role of Total Least Squares in Motion Analysis”, in Proceedings of the 5th European Conference on Computer Vision, ECCV'98, Springer Verlag, pp 305-321, (1998).

[6] Koufakis I and Buxton B F, "Very low bit rate face video compression using linear combination of 2D face views and principal components analysis", Image and Vision Computing, vol 17, number 14, pp 1031-1051, (1999).

[7] Kennedy D M, Buxton B F and Gilby J H, "The Analysis of Critical Views in Linear Multiview Relationships", draft April 2000.

[8] Hansard M E and Buxton B F, “Parametric View-Synthesis”, in Proceedings of the 6th European Conference on Computer Vision, Dublin, Ireland, 27-30 June 2000, edited by D Vernon, Lecture Notes in Computer Science, Vol. I, pp. 191-202, Springer, June 2000.


10 3D Modelling of Dynamic Face Shape Changes

P Hammond and T J Hutton, Biomedical Informatics Unit, Eastman Dental Institute

This project will make use of 3D surface data captured on the Eastman's face scanner. Unlike some other scanners, this scanner acquires face texture and shape simultaneously and, because the acquisition time is only 0.1 seconds, it can capture facial configurations that may exist only momentarily. Example data with registered texture can be downloaded from http://www.eastman.ucl.ac.uk/~thutton/tim.wrl .

Using such data the student will model changes in surface data and grey-levels that occur during speech and facial expression. A model that can synthesise facial expression, jaw movement and speech could be used for animation, for the interpretation of new images or for the study of speech and eating disorders. Improved simulation of effects of surgery might be obtained by creating a dynamic synthesis of the predicted face movement before and after surgery, as opposed to static views currently used. The project ties in closely with other work within the department, see http://www.eastman.ucl.ac.uk/~dmi/MINORI for a summary.

This project is likely to be extremely challenging, but exciting, requiring the student to attain rapid familiarity with the dataset and with the modelling techniques.

Contact: T.Hutton@eastman.ucl.ac.uk, P.Hammond@eastman.ucl.ac.uk


11 Other projects

B F Buxton

I may undertake to supervise other computer vision projects suggested by students if they are feasible and interesting.

One area of interest is eye tracking, either in video taken from multimedia experiments or, in collaboration with Anthony Steed, in the CAVE, where he is interested in seeing if it is possible to track the eye under a pair of the stereo graphics shutter glasses. Anthony has been looking at cameras and has ascertained that micro-head cameras are down to about 9 mm, so they should be mountable on glasses if their field of view is wide enough to capture the eye from such an oblique angle. There are thus a few practical problems to be resolved, if anyone is interested, before we could commit to such a project. Note, however, that experimentation would not need to be in the CAVE, at least initially, as we have desktop configurations for the same glasses.