
COMPM085 - Computational Photography and Capture

This database contains 2016-17 versions of the syllabuses. For current versions please see here.

Code COMPM085 (Also taught as: COMPGV15)
Year 4
Prerequisites

Completion of years 1 and 2 of the BSc/BEng/MEng Computer Science or CS with EE programme, plus A-level Mathematics and basic knowledge of MATLAB

Term 2
Taught By Tim Weyrich (50%)
Gabriel Brostow (50%)
Aims The module is designed to be self-contained, introducing the theoretical and practical aspects of modern photography and capture algorithms to students with only a limited mathematical background. The two primary aims are (i) to introduce universal models of colour, computer-controlled cameras, lighting and shape capture, and (ii) to motivate students to choose among the topics presented, either for continuing study (for those considering MScs and PhDs) or for future careers in advanced imaging.
Learning Outcomes Students will develop in-depth knowledge and understanding of the main Computational Photography topics as listed in the attached outline syllabus.

Content:

Introduction to Computational Photography
More on cameras, sensors and colour
Blending and compositing
Background subtraction and matting
Warping, morphing, mosaics and panoramas
High dynamic range imaging / tone mapping (see the tone-mapping sketch following this list)
Hybrid images
Flash photography
Stylised rendering using multi-flash
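
As a concrete illustration of the tone-mapping topic above, the following is a minimal sketch of a global operator in the spirit of Reinhard et al.'s photographic tone reproduction. It is not course-supplied code: the function name, the Rec. 709 luminance weights and the fixed display gamma are illustrative assumptions, written in Python/NumPy.

```python
# A minimal global tone-mapping sketch (hypothetical helper, not course code).
# It applies a Reinhard-style operator L_d = L (1 + L / L_white^2) / (1 + L),
# driven by the log-average luminance of the HDR input.
import numpy as np

def tone_map(hdr, key=0.18, eps=1e-6):
    """Map a linear HDR image (H x W x 3, floats) to displayable [0, 1] range."""
    # Luminance from linear RGB (Rec. 709 weights).
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Log-average luminance sets the overall exposure ("key" of the scene).
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    scaled = key * lum / log_avg
    l_white = scaled.max()                      # burn out only the brightest pixel
    mapped = scaled * (1.0 + scaled / (l_white ** 2 + eps)) / (1.0 + scaled)
    # Re-apply the compressed luminance to the colour channels.
    ratio = mapped / (lum + eps)
    ldr = np.clip(hdr * ratio[..., None], 0.0, 1.0)
    return ldr ** (1.0 / 2.2)                   # simple gamma for display

# Example: tone-map a synthetic high-contrast radiance map.
hdr = np.random.rand(64, 64, 3) * np.logspace(0, 3, 64)[None, :, None]
ldr = tone_map(hdr)
```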

Image Inpainting

Texture synthesis
Image quilting (see the texture-synthesis sketch following this list)
Heeger and Bergen
Simplicial complex of morphable textures (Matusik 2005)
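
The following is a heavily simplified, hypothetical sketch of patch-based texture synthesis in the spirit of image quilting (Efros and Freeman): patches are chosen by SSD against the overlap with what has already been synthesised, and the minimum-error boundary cut of the full method is omitted. Function names, patch sizes and the random stand-in texture are illustrative only.

```python
# A much-simplified image-quilting sketch (grayscale, no minimum-error boundary
# cut): patches are chosen by SSD against the already-synthesised overlap region
# and pasted in raster order.
import numpy as np

def quilt(texture, out_size=128, patch=32, overlap=8, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    step = patch - overlap
    out = np.zeros((out_size, out_size), dtype=np.float64)
    h, w = texture.shape
    # Pre-extract all candidate patches from the input texture.
    cands = np.array([texture[i:i + patch, j:j + patch]
                      for i in range(h - patch) for j in range(w - patch)])
    for y in range(0, out_size - patch + 1, step):
        for x in range(0, out_size - patch + 1, step):
            if y == 0 and x == 0:
                out[:patch, :patch] = cands[rng.integers(len(cands))]
                continue
            # SSD of each candidate against the left/top overlap already in `out`.
            region = out[y:y + patch, x:x + patch]
            mask = np.zeros((patch, patch), dtype=bool)
            if x > 0:
                mask[:, :overlap] = True
            if y > 0:
                mask[:overlap, :] = True
            errs = ((cands - region) ** 2 * mask).sum(axis=(1, 2))
            # Pick randomly among the best few matches to avoid repetition.
            best = np.argsort(errs)[:10]
            out[y:y + patch, x:x + patch] = cands[rng.choice(best)]
    return out

tex = np.random.rand(64, 64)          # stand-in for a real texture sample
result = quilt(tex)
```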

Extension to the temporal domain

TIP (Tour into the Picture), video textures (see the video-textures sketch following this list)
Temporal sequence rendering
Ezzat's videorealistic speech animation, controlled video sprites
Video-based rendering: using photographs to enhance videos of a static scene
Motion magnification
Non-photorealistic rendering and animation
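
As an illustration of the video-textures idea above, this hypothetical sketch builds the pairwise frame-distance matrix and converts it into jump probabilities P(i -> j) proportional to exp(-D[i+1, j] / sigma); the dynamics-preserving filtering and future-cost terms of Schödl et al.'s full method are omitted, and all names and parameters are illustrative.

```python
# A minimal video-textures sketch: build the frame-to-frame distance matrix
# D[i, j] = ||frame_i - frame_j||^2 and turn it into transition probabilities
# P[i, j] ~ exp(-D[i+1, j] / sigma).
import numpy as np

def transition_probabilities(frames, sigma=None):
    """frames: array of shape (N, H, W) or (N, H, W, 3), float."""
    n = len(frames)
    flat = frames.reshape(n, -1)
    # Pairwise squared L2 distances between all frames.
    sq = (flat ** 2).sum(axis=1)
    dist = sq[:, None] + sq[None, :] - 2.0 * flat @ flat.T
    dist = np.maximum(dist, 0.0)
    # Jumping from frame i to frame j should look like the cut i -> j, so the
    # relevant cost for leaving frame i is the distance of frame i+1 to frame j.
    cost = dist[1:, :]                      # cost[i, j] = D[i+1, j]
    sigma = sigma if sigma is not None else 0.1 * cost[cost > 0].mean()
    prob = np.exp(-cost / sigma)
    prob /= prob.sum(axis=1, keepdims=True)
    return prob                             # shape (N-1, N)

frames = np.random.rand(30, 48, 64)         # stand-in for a short clip
P = transition_probabilities(frames)
next_frame = np.random.default_rng().choice(len(frames), p=P[0])
```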

Colourization and colour transfer

Colourization using optimization
Colour transfer between images (see the colour-transfer sketch following this list)
N-Dimensional probability density function transfer and its application to colour transfer
Intrinsic images
Vectorising raster images
Poisson image editing
Seam carving
De-blurring / dehazing
Coded aperture imaging
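
Below is a simplified sketch of statistical colour transfer in the spirit of Reinhard et al. (2001): per-channel mean and standard deviation of the source are matched to those of the target. The paper works in the decorrelated lαβ colour space; this sketch stays in RGB for brevity, so it is only a rough approximation, and the function name and example images are illustrative.

```python
# A simplified colour-transfer sketch: match per-channel mean and standard
# deviation of the source image to those of the target (RGB here, rather than
# the lαβ space used by Reinhard et al.).
import numpy as np

def transfer_colour(source, target, eps=1e-6):
    """source, target: float images of shape (H, W, 3) in [0, 1]."""
    src_mean = source.mean(axis=(0, 1))
    src_std = source.std(axis=(0, 1)) + eps
    tgt_mean = target.mean(axis=(0, 1))
    tgt_std = target.std(axis=(0, 1)) + eps
    # Shift and scale each channel so its statistics match the target's.
    result = (source - src_mean) / src_std * tgt_std + tgt_mean
    return np.clip(result, 0.0, 1.0)

src = np.random.rand(64, 64, 3)
tgt = np.random.rand(64, 64, 3) * np.array([1.0, 0.6, 0.3])   # warm-toned target
out = transfer_colour(src, tgt)
```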

Image-based rendering

Image-based modelling and photo editing: view dependence, light dependence, the plenoptic function
Selected ways to capture the above representations

Extensions to the temporal domain

Factored time-lapse video
Computational time-lapse video
Video synopsis and indexing

Capturing images with structured light

Laser-stripe projection
ShadowCuts
Stripe codes
Edge codes
Phase shift
Brief recap of stereo, spatio-temporal stereo
Photometric stereo (see the sketch following this list)
The Helmholtz wheel (Helmholtz reciprocity)
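
To indicate what the photometric-stereo topic involves, here is a minimal sketch of classic Lambertian photometric stereo: with images taken under known distant light directions, each pixel satisfies I_k = ρ n·l_k, which is solved per pixel by least squares. Shadows, highlights and light calibration are ignored, and the synthetic sphere example and all names are illustrative assumptions.

```python
# A minimal Lambertian photometric-stereo sketch: stacking the known light
# directions into L, each pixel gives I = L g with g = albedo * normal,
# solved for all pixels at once by least squares.
import numpy as np

def photometric_stereo(images, lights):
    """images: (K, H, W) intensities; lights: (K, 3) unit light directions."""
    k, h, w = images.shape
    obs = images.reshape(k, -1)                         # (K, H*W)
    g, *_ = np.linalg.lstsq(lights, obs, rcond=None)    # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / (albedo + 1e-8)
    return albedo.reshape(h, w), normals.T.reshape(h, w, 3)

# Synthetic example: a Lambertian hemisphere lit from four known directions.
lights = np.array([[0, 0, 1], [0.5, 0, 1], [0, 0.5, 1], [-0.5, -0.5, 1]], float)
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
zz = np.sqrt(np.clip(1 - xx**2 - yy**2, 0, None))
n = np.dstack([xx, yy, zz + 1e-3])
n /= np.linalg.norm(n, axis=2, keepdims=True)
images = np.clip(n @ lights.T, 0, None).transpose(2, 0, 1)   # (4, 64, 64)
albedo, normals = photometric_stereo(images, lights)
```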

Dual photography (see the transport-matrix sketch following this list)

Seeing around corners
Dual light stage
Separation of global and local reflectance
Image-based BRDF measurements
Measuring the BSSRDF
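
To make the dual-photography idea concrete, the toy sketch below uses a synthetic light-transport matrix T (camera image = T · projector pattern); by Helmholtz reciprocity, the transposed matrix gives the image seen from the projector's viewpoint under a virtual projector placed at the camera. Measuring T efficiently with coded illumination is the actual contribution of the paper and is not shown; the matrix and pixel counts here are placeholders.

```python
# A toy illustration of the dual-photography idea: if the light transport from
# an m-pixel projector to an n-pixel camera is the matrix T (camera = T @ pattern),
# then reciprocity says the dual camera (the projector's viewpoint) sees
# T.T @ virtual_pattern under a "virtual projector" placed at the real camera.
import numpy as np

rng = np.random.default_rng(0)
m, n = 16 * 16, 32 * 32                     # projector and camera pixel counts
T = rng.random((n, m)) ** 8                 # sparse-ish synthetic transport matrix

pattern = rng.integers(0, 2, size=m).astype(float)   # projector pattern
primal = T @ pattern                        # what the camera records

virtual_pattern = np.zeros(n)
virtual_pattern[n // 2] = 1.0               # "light" a single camera pixel
dual = T.T @ virtual_pattern                # image seen from the projector's view
dual_image = dual.reshape(16, 16)
```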

Method of Instruction:

Lecture presentations supplemented by practical lab demonstration sessions and substantial online content, with further detailed examples and links to both additional reading and existing demo software.

Assessment:

The course has the following assessment components:

  • Individual Project (implementation + written report, 60%)
  • Coursework Section (2 pieces, 40%)

To pass this course, students must:

  • Submit a proper attempt at the coursework component.
  • Obtain an overall pass mark of 50% for all sections combined.
  • Obtain a minimum mark of 40% in each component worth ≥ 30% of the module as a whole.

Resources:

Peter J. Burt, Edward H. Adelson, “The Laplacian Pyramid as a Compact Image Code”, IEEE Transactions on Communications, 31(4): 532–540, 1983.

Patrick Pérez, Michel Gangnet, Andrew Blake, “Poisson Image Editing”, Proceedings of ACM SIGGRAPH 2003, pp. 313–318.

Pradeep Sen, Billy Chen, Gaurav Garg, Stephen R. Marschner, Mark Horowitz, Marc Levoy, Hendrik P. A. Lensch, “Dual Photography”, ACM Transactions on Graphics, 24(3): 745–755, 2005.

Tim Hawkins, Per Einarsson, Paul Debevec, “A Dual Light Stage”, Proceedings of the Eurographics Symposium on Rendering 2005.

Youichi Horry, Ken-Ichi Anjyo, Kiyoshi Arai, “Tour into the Picture: Using a Spidery Mesh Interface to Make Animation from a Single Image”, Proceedings of SIGGRAPH 1997, pp. 225–232.

Thaddeus Beier, Shawn Neely, “Feature-Based Image Metamorphosis”, Proceedings of ACM SIGGRAPH 1992, pp. 35–42.

Levin, Lischinski, Weiss, “Colorization Using Optimization”, Proceedings of SIGGRAPH 2004.

Reinhard, Ashikhmin, Gooch, Shirley, “Color Transfer Between Images”, IEEE Computer Graphics and Applications, 2001.

Pritch, Rav-Acha, Peleg, “Video Synopsis and Indexing”, Proceedings of ICCV 2007.

Pravin Bhat, C. Lawrence Zitnick, Noah Snavely, Aseem Agarwala, Maneesh Agrawala, Brian Curless, Michael Cohen, Sing Bing Kang, “Using Photographs to Enhance Videos of a Static Scene”, Proceedings of the Eurographics Symposium on Rendering 2007, pp. 327–338.

Arno Schödl, Richard Szeliski, David H. Salesin, Irfan Essa, “Video Textures”, Proceedings of SIGGRAPH 2000, pp. 489–498.

Arno Schödl, Irfan Essa, “Controlled Animation of Video Sprites”, Symposium on Computer Animation 2002.

Pitié, Kokaram, Dahyot, “N-Dimensional Probability Density Function Transfer and its Application to Colour Transfer”, Proceedings of ICCV 2005, pp. 1434–1439.

Eric Bennett, Leonard McMillan, “Computational Time-Lapse Video”, ACM SIGGRAPH 2007, pp. 102–108.

Sunkavalli, Matusik, Pfister, Rusinkiewicz, “Factored Time-Lapse Video”, Proceedings of SIGGRAPH 2007.

Tony Ezzat, Gadi Geiger, Tomaso Poggio, “Trainable Videorealistic Speech Animation”, Proceedings of SIGGRAPH 2002, pp. 388–398.

Land and McCann, “Lightness and Retinex Theory”, Journal of the Optical Society of America, 61: 1–11, 1971.

H. G. Barrow and J. M. Tenenbaum, “Recovering Intrinsic Scene Characteristics from Images”, in Computer Vision Systems, A. Hanson and E. Riseman, eds., pp. 3–26, Academic Press, 1978.

P. Sinha, E. H. Adelson, “Recovering Reflectance and Illumination in a World of Painted Polyhedra”, Proceedings of ICCV 1993, pp. 156–163.

Yair Weiss, “Deriving Intrinsic Images from Image Sequences”, Proceedings of ICCV 2001, pp. 68–75.

Marshall F. Tappen, William T. Freeman, Edward H. Adelson, “Recovering Intrinsic Images from a Single Image”, pp. 1343–1350.

Graham D. Finlayson, Mark S. Drew, Cheng Lu, “Intrinsic Images by Entropy Minimization”, Proceedings of ECCV 2004, pp. 582–595.

Module home page.