COMPGV15 - Computational Photography and Capture

Note: Whilst every effort is made to keep the syllabus and assessment records correct, the precise details must be checked with the lecturer(s).

Code: COMPGV15 (also taught as: COMPM085)
Year: MSc
Prerequisites: A-Level maths and basic knowledge of Matlab
Term: 2
Taught By: Tim Weyrich (50%), Gabriel Brostow (50%)
Aims: The module is designed to be self-contained, introducing the theoretical and practical aspects of modern photography and capture algorithms to students with only a limited mathematical background. The two primary aims are i) to introduce universal models of colour, computer-controlled cameras, lighting and shape capture, and ii) to motivate students to choose among the topics presented, either for continuing study (for those considering MScs and PhDs) or for future careers in the field of advanced imaging.
Learning Outcomes: Students will develop in-depth knowledge and understanding of the main computational photography topics listed in the outline syllabus below.

Content:

Introduction to Computational Photography
More on cameras, sensors and colour
Blending and compositing
Background subtraction and matting
Warping, morphing, mosaics and panoramas
High dynamic range imaging / tone mapping (see the merging and tone-mapping sketch after this list)
Hybrid images
Flash photography
Stylised rendering using multi-flash
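
For the high dynamic range item, a minimal sketch of exposure merging and global tone mapping, assuming the inputs are already linear and normalised to [0, 1] (real pipelines first recover the camera response curve, as in Debevec and Malik); the hat-shaped weighting, the L/(1+L) tone curve and the function names are illustrative choices, not the course's reference implementation.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge linear LDR exposures (values in [0, 1]) into a radiance map.
    Each pixel is a weighted average of value / exposure_time, with a hat
    weight that down-weights under- and over-exposed samples."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64)
        w = 1.0 - np.abs(2.0 * img - 1.0)      # hat weight, peaks at mid-grey
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)

def tone_map(radiance):
    """Simple global tone-mapping operator L / (1 + L), applied per channel."""
    return radiance / (1.0 + radiance)
```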

Image Inpainting
Texture synthesis
Image quilting (see the patch-quilting sketch after this list)
Heeger and Bergen pyramid-based texture analysis/synthesis
Simplicial complex of morphable textures (Matusik 2005)
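
As a rough illustration of the quilting idea, a greedy patch-tiling sketch: each new patch is chosen by sum-of-squared-differences over the regions where it overlaps what has already been synthesised. Efros and Freeman additionally cut a minimum-error boundary through the overlap, which is omitted here; the patch size, overlap and random candidate search are arbitrary assumptions.

```python
import numpy as np

def quilt(texture, out_size, patch=48, overlap=8, seed=None):
    """Greedy image-quilting sketch: fill the output left-to-right, top-to-bottom
    with source patches whose overlap with existing output has minimal SSD."""
    rng = np.random.default_rng(seed)
    tex = texture.astype(np.float64)
    h, w = tex.shape[:2]
    step = patch - overlap
    out = np.zeros((out_size, out_size) + tex.shape[2:], dtype=np.float64)
    max_y, max_x = h - patch, w - patch            # last valid source corners
    for oy in range(0, out_size - patch + 1, step):
        for ox in range(0, out_size - patch + 1, step):
            best, best_err = None, np.inf
            for _ in range(200):                   # cheap random candidate search
                sy = rng.integers(0, max_y + 1)
                sx = rng.integers(0, max_x + 1)
                cand = tex[sy:sy + patch, sx:sx + patch]
                err = 0.0
                if ox > 0:   # left overlap with existing output
                    err += np.sum((cand[:, :overlap] - out[oy:oy + patch, ox:ox + overlap]) ** 2)
                if oy > 0:   # top overlap with existing output
                    err += np.sum((cand[:overlap, :] - out[oy:oy + overlap, ox:ox + patch]) ** 2)
                if err < best_err:
                    best, best_err = (sy, sx), err
            sy, sx = best
            out[oy:oy + patch, ox:ox + patch] = tex[sy:sy + patch, sx:sx + patch]
    return out
```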

Extension to the temporal domain
Tour into the Picture (TIP), video textures (see the transition-matrix sketch after this list)
Temporal sequence rendering
Ezzat et al. speech animation, controlled video sprites
Video-based rendering: using photographs to enhance videos of a static scene
Motion magnification
Non-photorealistic rendering and animation
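
A minimal sketch of the video-textures idea (Schödl et al. 2000): build a frame-to-frame distance matrix and turn it into stochastic transition probabilities. The exponential temperature below is a heuristic assumption, and the paper's preservation-of-dynamics filtering and dead-end avoidance are omitted.

```python
import numpy as np

def video_texture_transitions(frames, sigma=None):
    """Return P of shape (N-1, N): P[i, j] is the probability of jumping from
    frame i to frame j, proportional to exp(-D[i+1, j] / sigma), where D is the
    pairwise L2 distance between frames."""
    F = np.stack([f.astype(np.float64).ravel() for f in frames])   # (N, pixels)
    sq = np.sum(F * F, axis=1)
    D = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * F @ F.T, 0.0))
    if sigma is None:
        sigma = 0.1 * D.mean()             # heuristic temperature (assumption)
    P = np.exp(-D[1:, :] / sigma)          # jumping from i should resemble showing i+1
    P /= P.sum(axis=1, keepdims=True)
    return P

# playback: after showing frame i, sample the next frame j from the row P[i]
```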

Colourization and colour transfer
Colorization using optimization
Colour transfer between images (see the statistics-matching sketch after this list)
N-Dimensional probability density function transfer and its application to colour transfer
Intrinsic images
Vectorising raster images
Poisson image editing
Seam carving
De-blurring / de-hazing
Coded aperture imaging
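
For the colour-transfer item, a minimal statistics-matching sketch in the spirit of Reinhard et al. 2001: match each channel's mean and standard deviation of the source image to those of the target palette image. For brevity it operates directly on RGB and assumes 8-bit input; the paper works in the decorrelated lαβ colour space, which gives visibly better results.

```python
import numpy as np

def transfer_colour(source, target):
    """Per-channel mean/standard-deviation matching from source to target palette."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mu) * (t_sd / s_sd) + t_mu
    return np.clip(out, 0, 255)            # assumes 8-bit images
```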

Image-based rendering
Image-based modelling and photo editing
View dependence, light dependence and the plenoptic function
Selected ways to capture the above representations

Extensions to the temporal domain
Factored time-lapse video
Computational time-lapse video
Video synopsis and indexing

Capturing images with structured light
Laser-stripe projection
ShadowCuts
Stripe codes
Edge codes
Phase shift
Brief recap of stereo, spatio-temporal stereo
Photometric stereo (see the least-squares sketch after this list)
The Helmholtz wheel (Helmholtz reciprocity) 
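
For the photometric stereo item, a minimal Lambertian least-squares sketch under the usual assumptions (distant point lights with known directions, greyscale images of a static scene, no shadows or specularities); the function name and interface are illustrative only.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """With >= 3 greyscale images under known lights L (n_lights x 3), Lambertian
    intensities satisfy I = L @ (albedo * normal); solve per pixel by least squares."""
    I = np.stack([im.astype(np.float64).ravel() for im in images])   # (n_lights, n_pixels)
    L = np.asarray(light_dirs, dtype=np.float64)                     # (n_lights, 3)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)                        # (3, n_pixels) = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    h, w = images[0].shape
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```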

Dual photography (see the transport-matrix sketch after this list)
Seeing around corners
The dual light stage
Separation of global and local reflectance
Image-based BRDF measurements
Measuring the BSSRDF
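
A toy sketch of the dual-photography relationship (Sen et al. 2005): if the measured light-transport matrix T maps projector pixels to camera pixels (primal image c = T p for projector pattern p), Helmholtz reciprocity lets the transpose of T render the dual image, i.e. the scene seen from the projector under "illumination" from the camera. The small dense random T below is purely illustrative; real systems measure T with adaptive, multiplexed illumination.

```python
import numpy as np

def dual_image(T, camera_pattern):
    """Render the dual view: transpose of the transport matrix applied to a
    virtual illumination pattern defined over camera pixels."""
    return T.T @ camera_pattern            # image over projector pixels

# toy example: 4 projector pixels, 6 camera pixels
T = np.random.rand(6, 4)                   # light transport (camera x projector), assumed measured
primal = T @ np.ones(4)                    # fully-on projector -> primal camera image
dual = dual_image(T, np.ones(6))           # fully-on virtual camera light -> dual image
```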

Method of Instruction:

Lecture presentations are supplemented by practical lab demonstration sessions and by substantial online content with further detailed examples, as well as links to both further reading and existing demo software.

Assessment:

The course has the following assessment components:

  • Individual Project (60%)
  • Coursework (40%)

To pass this course, students must:

  • Obtain an overall pass mark of 50% for all sections combined.
  • Submit a proper attempt at the coursework component.


(The Individual Project consists of an implementation and a written report. The Coursework consists of two separate pieces of coursework.)

Resources:

Peter J. Burt, Edward H. Adelson, “The Laplacian Pyramid as a Compact Image Code”, IEEE Transactions on Communications, 31(4): 532–540, 1983.
Patrick Pérez, Michel Gangnet, Andrew Blake, “Poisson Image Editing”, Proceedings of ACM SIGGRAPH 2003, pages 313–318.
Pradeep Sen, Billy Chen, Gaurav Garg, Stephen R. Marschner, Mark Horowitz, Marc Levoy, Hendrik P. A. Lensch, “Dual Photography”, ACM Transactions on Graphics, 24(3): 745–755, 2005.
Tim Hawkins, Per Einarsson, Paul Debevec, “A Dual Light Stage”, Proceedings Eurographics Symposium on Rendering 2005
Youichi Horry, Ken-Ichi Anjyo, Kiyoshi Arai, “Tour into the Picture: Using a Spidery Mesh Interface to Make Animation from a Single Image”, Proceedings of SIGGRAPH 1997, pages 225–232.
Thaddeus Beier, Shawn Neely, “Feature-based image metamorphosis”, Proceedings of ACM SIGGRAPH 1992, pages 35-42.
Levin, Lischinski, Weiss, “Colorization using Optimization”, SIGGRAPH 2004.
Reinhard, Ashikhmin, Gooch, Shirley, “Color Transfer Between Images”, IEEE Computer Graphics and Applications, 2001.
Pritch, Rav-Acha, Peleg, “Video Synopsis and Indexing”, ICCV 2007.
Pravin Bhat, C. Lawrence Zitnick, Noah Snavely, Aseem Agarwala, Maneesh Agrawala, Brian Curless, Michael Cohen, Sing Bing Kang, “Using Photographs to Enhance Videos of a Static Scene”, Proceedings of the Eurographics Symposium on Rendering 2007, pages 327–338.
Arno Schödl, Richard Szeliski, David H. Salesin, and Irfan Essa, “Video Textures”, Proceedings of SIGGRAPH 2000, pages 489-498, July 2000.
Arno Schödl, Irfan Essa, “Controlled Animation of Video Sprites”, Symposium on Computer Animation 2002.
Pitié, Kokaram, Dahyot, “N-Dimensional Probability Density Function Transfer and its Application to Colour Transfer”, ICCV 2005.