COMPGI20 - Introduction to Supervised Learning

This database contains the 2016-17 versions of the syllabuses.

Taught By
Iasonas Kokkinos (100%)

Aims
To give students a solid grounding in the topics listed under Learning Outcomes below.
Learning Outcomes

Students successfully completing the module should understand:
- Linear regression and regularisation
- Kernels and support vector machines
- Linear algebra tools for machine learning
- Neural networks
- Structured prediction
- Programming in either Matlab or Python


Content

Introduction to Machine Learning
- Elements of Decision Theory, Bayes’ rule
- Nearest Neighbours
- Linear Regression, Ridge Regression
- Generalization, Cross-Validation
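As a minimal illustrative sketch (not course code) of the ridge-regression material in this block, the penalised least-squares solution can be computed in closed form with NumPy; the toy data and penalty strength below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x - 1 plus a little noise.
X = rng.normal(size=(50, 1))
y = 3.0 * X[:, 0] - 1.0 + 0.1 * rng.normal(size=50)

# Append a bias column and solve the ridge objective
#   min_w ||Xw - y||^2 + lam ||w||^2
# in closed form: w = (X^T X + lam I)^{-1} X^T y.
# (For simplicity the bias is regularised too, which is often avoided in practice.)
Xb = np.hstack([X, np.ones((50, 1))])
lam = 0.1
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(2), Xb.T @ y)
# w[0] ≈ 3 (slope), w[1] ≈ -1 (bias)
```

Sweeping `lam` and scoring held-out error is the cross-validation procedure the block refers to.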

Logistic Regression
- Probabilistic interpretation, cross-entropy
- Newton-Raphson method
- Applications in vision
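The Newton-Raphson iteration for logistic regression can be written out in a few lines; the sketch below (illustrative only, with an arbitrary toy dataset and a small L2 penalty added for numerical stability) shows the gradient and Hessian of the cross-entropy loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian blobs, labels in {0, 1}.
X = np.vstack([rng.normal(-1.0, 1.0, size=(40, 2)),
               rng.normal(1.0, 1.0, size=(40, 2))])
y = np.array([0] * 40 + [1] * 40)
Xb = np.hstack([X, np.ones((80, 1))])  # bias column

lam = 0.1   # small L2 penalty keeps the Hessian well conditioned
w = np.zeros(3)
for _ in range(10):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))               # sigmoid probabilities
    g = Xb.T @ (p - y) + lam * w                    # gradient of penalised cross-entropy
    R = p * (1.0 - p)                               # per-example Hessian weights
    H = Xb.T @ (Xb * R[:, None]) + lam * np.eye(3)  # Hessian
    w -= np.linalg.solve(H, g)                      # Newton-Raphson step

p = 1.0 / (1.0 + np.exp(-Xb @ w))
acc = np.mean((p > 0.5) == (y == 1))
```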

Support Vector Machines
- Functional and Geometric Margins
- SVM optimisation problem - primal and dual formulations
- Kernel trick
- Applications in vision
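The kernel trick named above can be checked directly in a small example: for the homogeneous quadratic kernel k(x, z) = (x·z)^2 in two dimensions, evaluating the kernel agrees with an explicit inner product in the induced feature space (a standard identity, sketched here in NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2)
z = rng.normal(size=2)

def phi(v):
    # Explicit feature map of the homogeneous quadratic kernel in 2-D:
    # phi(v) = (v1^2, v2^2, sqrt(2) v1 v2)
    return np.array([v[0] ** 2, v[1] ** 2, np.sqrt(2.0) * v[0] * v[1]])

lhs = (x @ z) ** 2       # kernel evaluated directly, O(d)
rhs = phi(x) @ phi(z)    # same value via an explicit inner product in feature space
# lhs == rhs up to floating-point error
```

The point the course makes is that the left-hand side never constructs `phi`, which is what lets SVMs work in very high-dimensional (even infinite-dimensional) feature spaces.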

Ensemble Methods
- Weak learners
- Adaboost
- Applications in vision
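AdaBoost with decision-stump weak learners fits in a short script; the sketch below (illustrative only, on an arbitrary 1-D toy problem that no single stump can solve) shows the re-weighting and weighted-vote steps:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy problem: label +1 inside [-1, 1], -1 outside.
X = rng.uniform(-2.0, 2.0, size=200)
y = np.where(np.abs(X) < 1.0, 1, -1)

w = np.ones(200) / 200.0          # example weights
ensemble = []                     # (alpha, threshold, sign) per round
for _ in range(20):
    # Weak learner: the decision stump with lowest weighted error.
    err, t, s = min(((w[(s * np.sign(X - t)) != y].sum(), t, s)
                     for t in np.linspace(-2.0, 2.0, 41) for s in (1, -1)),
                    key=lambda e: e[0])
    alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
    pred = s * np.sign(X - t)
    w *= np.exp(-alpha * y * pred)   # re-weight: boost the examples this stump got wrong
    w /= w.sum()
    ensemble.append((alpha, t, s))

# Strong classifier: weighted vote of all stumps.
F = sum(a * s * np.sign(X - t) for a, t, s in ensemble)
train_acc = np.mean(np.sign(F) == y)
```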

Neural Networks
- Multi-layer perceptrons
- Stochastic Gradient Descent
- Weight decay, momentum, batch normalization
- Convolutional Neural Networks
- Applications in vision
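Several items in this block (multi-layer perceptrons, minibatch SGD, momentum, weight decay) can be combined in one hand-written sketch; the toy regression task, network size, and hyperparameters below are arbitrary choices, and batch normalization is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(256, 1))
y = np.sin(X)

# 1-16-1 MLP with tanh hidden units, trained by minibatch SGD
# with momentum and weight decay, all written out by hand.
W1 = rng.normal(0.0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)
params = [W1, b1, W2, b2]
vel = [np.zeros_like(p) for p in params]   # momentum buffers
lr, mom, wd = 0.02, 0.9, 1e-4

def forward(Xb):
    H = np.tanh(Xb @ W1 + b1)
    return H, H @ W2 + b2

loss0 = np.mean((forward(X)[1] - y) ** 2)  # loss before training

for epoch in range(200):
    order = rng.permutation(256)
    for start in range(0, 256, 32):        # minibatches of 32
        idx = order[start:start + 32]
        Xb, yb = X[idx], y[idx]
        H, out = forward(Xb)
        d_out = 2.0 * (out - yb) / len(idx)   # dL/d(out) for mean squared error
        gW2 = H.T @ d_out
        gb2 = d_out.sum(0)
        dH = (d_out @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
        gW1 = Xb.T @ dH
        gb1 = dH.sum(0)
        for p, v, g in zip(params, vel, [gW1, gb1, gW2, gb2]):
            v *= mom
            v -= lr * (g + wd * p)            # weight decay = L2 gradient term
            p += v

loss1 = np.mean((forward(X)[1] - y) ** 2)     # loss after training
```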

Elements of structured prediction
- Markov Random Fields, Conditional Random Fields, Bayesian Networks
- Inference in chain/tree-structured graphs: max/sum-product
- CNN-CRF hybrids
- Applications in vision
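Max-product inference on a chain is the Viterbi algorithm; a minimal sketch on a made-up 3-node, 2-state chain (arbitrary unary and pairwise scores, chosen so the pairwise term favours staying in the same state):

```python
import numpy as np

# Chain model, score(x) = sum_t unary[t, x_t] + sum_t pair[x_t, x_{t+1}].
# Max-product finds the highest-scoring assignment in O(T K^2).
unary = np.array([[2.0, 0.0],
                  [0.0, 1.0],
                  [3.0, 0.0]])
pair = np.array([[1.0, 0.0],
                 [0.0, 1.0]])        # reward for keeping the same state

T, K = unary.shape
msg = np.zeros((T, K))               # best score of any prefix ending in each state
back = np.zeros((T, K), dtype=int)   # backpointers

msg[0] = unary[0]
for t in range(1, T):
    scores = msg[t - 1][:, None] + pair + unary[t][None, :]
    back[t] = scores.argmax(axis=0)
    msg[t] = scores.max(axis=0)

# Backtrack the best assignment.
x = np.zeros(T, dtype=int)
x[-1] = msg[-1].argmax()
for t in range(T - 1, 0, -1):
    x[t - 1] = back[t, x[t]]
# Here the best assignment is [0, 0, 0] with score 7.0.
```

Replacing max with sum in the forward pass gives the sum-product recursion for marginals, the other half of this bullet.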

Method of Instruction

Lectures and lab classes.  


Assessment

The course has the following assessment components:

  • Written Examination (2.5 hours, 50%)
  • Coursework Section (50%)

To pass this course, students must: 

  • Obtain an overall pass mark of 50% for all sections combined. 




Reading List

D. Barber: Bayesian Reasoning and Machine Learning. Cambridge University Press (2012)

K. Murphy: Machine Learning: A Probabilistic Perspective. MIT Press (2012)

D.J.C. MacKay: Information Theory, Inference and Learning Algorithms. Cambridge University Press (2003)

C. M. Bishop: Pattern Recognition and Machine Learning. Springer (2006)