COMPGC26 - Artificial Intelligence and Neural Computing
This database contains 2017-18 versions of the syllabuses.
|Code||COMPGC26 (Also taught as COMP3058)|
|Prerequisites||Students should have either: (1) a degree in Mathematics; or (2) a degree in Philosophy in which they have completed and passed a formal logic module.|
|Taught By||Denise Gorse (50%), Anthony Hunter (50%)|
|Aims||This course introduces artificial intelligence and neural computing both as technical subjects and as fields of intellectual activity. The overall aims are: (1) to present basic methods of expressing knowledge in forms suitable for holding in computing systems, together with methods for deriving consequences from that knowledge by automated reasoning; (2) to present basic methods for learning knowledge; and (3) to introduce neural computing as an alternative knowledge acquisition/representation paradigm, to explain its basic principles and their relationship to neurobiological models, and to describe a range of neural computing techniques and their application areas.|
|Learning Outcomes||Ability to identify problems that can be expressed as search problems or logic problems, to translate them into the appropriate form, and to know how they could be addressed using an algorithmic approach. Ability to identify problems that can be expressed in terms of neural networks, and to select an appropriate learning methodology for the problem area.|
Scope of the Subject
- Nature and goals of AI.
- Application areas.
Problem Solving by Search
- Use of states and transitions to model problems.
- Breadth-first, depth-first and related types of search.
- A* search algorithm.
- Use of heuristics in search.
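As an illustration of the search topics above, the following is a minimal sketch of A* search on a small grid world; the 5x5 grid, unit step costs, and Manhattan-distance heuristic are illustrative choices, not course material.

```python
import heapq

def a_star(start, goal, neighbours, h):
    """Return the cost of a cheapest path from start to goal, or None."""
    frontier = [(h(start), 0, start)]   # entries are (f = g + h, g, state)
    best_g = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == goal:
            return g
        if g > best_g.get(state, float("inf")):
            continue                     # stale queue entry; skip it
        for nxt, step_cost in neighbours(state):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt))
    return None

# Example transition model: 4-connected moves on a 5x5 grid, unit cost.
def grid_neighbours(state):
    x, y = state
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

# Admissible heuristic: Manhattan distance to the goal (4, 4).
manhattan = lambda s: abs(s[0] - 4) + abs(s[1] - 4)
print(a_star((0, 0), (4, 4), grid_neighbours, manhattan))  # → 8
```

With the heuristic replaced by `lambda s: 0`, the same code degenerates to uniform-cost (breadth-first-style) search, which is one way to see the role heuristics play.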
Reasoning in logic
- Brief revision of propositional and predicate logic.
- Different characterisations of reasoning.
- Generalized modus ponens.
- Forward and backward chaining.
- Diversity of knowledge.
- Inheritance hierarchies.
- Semantic networks.
- Knowledgebase ontologies.
- Diversity of uncertainty.
- Dempster-Shafer theory.
- Induction of knowledge.
- Decision tree learning algorithms.
- An architecture for intelligent agents.
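The forward-chaining style of reasoning listed above can be sketched for the propositional Horn-clause case; the rules and facts here are illustrative toy examples, and the full course material covers the first-order (generalized modus ponens) case.

```python
def forward_chain(rules, facts):
    """Repeatedly apply rules (premises, conclusion) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Modus ponens: if every premise is known, add the conclusion.
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"rain", "outside"}, "wet"),
         ({"wet"}, "cold")]
derived = forward_chain(rules, {"rain", "outside"})
print(sorted(derived))  # → ['cold', 'outside', 'rain', 'wet']
```

Backward chaining works in the opposite direction, starting from a goal atom and recursively seeking rules whose conclusion matches it.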
Nature and Goals of Neural Computing
- Overview of network architectures and learning paradigms.
Binary Decision Neurons
- The McCulloch-Pitts model.
- Single-layer perceptrons and their limitations.
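A minimal sketch of the single-layer perceptron learning rule on the (linearly separable) AND function; the learning rate and epoch count are illustrative choices. On XOR, which is not linearly separable, this procedure never converges, which is the limitation noted above.

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Perceptron rule: nudge weights by (target - output) on each sample."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b    += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND]
print(preds)  # → [0, 0, 0, 1]
```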
The Multilayer Perceptron
- The sigmoid output function.
- Hidden units and feature detectors.
- Training by error backpropagation.
- The error surface and local minima.
- Generalisation and how to avoid 'overtraining'.
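The ideas above can be sketched as a tiny 2-2-1 multilayer perceptron trained on XOR by error backpropagation with sigmoid units; the network size, learning rate, and epoch count are illustrative choices. Because the error surface has local minima, convergence to a perfect solution is not guaranteed, but the squared error should fall as training proceeds.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# Hidden layer: 2 units, each with 2 input weights and a bias.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
# Output unit: 2 hidden-unit weights and a bias.
W2 = [random.uniform(-1, 1) for _ in range(3)]

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, y

def epoch(lr=0.5):
    """One pass of online backpropagation; returns total squared error."""
    total = 0.0
    for x, t in XOR:
        h, y = forward(x)
        total += 0.5 * (t - y) ** 2
        # Output-layer delta, then backpropagate it to the hidden layer.
        d_out = (y - t) * y * (1 - y)
        d_hid = [d_out * W2[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            W2[i] -= lr * d_out * h[i]
        W2[2] -= lr * d_out
        for i in range(2):
            W1[i][0] -= lr * d_hid[i] * x[0]
            W1[i][1] -= lr * d_hid[i] * x[1]
            W1[i][2] -= lr * d_hid[i]
    return total

first = epoch()
last = first
for _ in range(5000):
    last = epoch()
print(first, last)  # error after training should be well below the initial error
```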
The Hopfield Model
- Content addressable memories and attractor nets.
- Hopfield energy function.
- Setting the weights.
- Storage capacity.
- The Kohonen self-organising feature map.
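The Hopfield topics above can be sketched as follows: Hebbian weight setting for a single stored ±1 pattern, then asynchronous threshold updates that let a corrupted probe settle back to the stored attractor. The pattern, probe, and update schedule are illustrative choices.

```python
import random

pattern = [1, -1, 1, -1, 1, -1]       # one stored pattern of +/-1 units
N = len(pattern)

# Hebbian weight setting, with zero self-connections.
W = [[(pattern[i] * pattern[j]) / N if i != j else 0.0
      for j in range(N)] for i in range(N)]

def recall(state, sweeps=5):
    """Asynchronously update units until (in practice) the net settles."""
    state = list(state)
    random.seed(1)                     # fixed update order for reproducibility
    for _ in range(sweeps):
        for i in random.sample(range(N), N):
            act = sum(W[i][j] * state[j] for j in range(N))
            state[i] = 1 if act >= 0 else -1
    return state

probe = [1, -1, 1, -1, 1, 1]          # stored pattern with one unit flipped
print(recall(probe))  # → [1, -1, 1, -1, 1, -1]
```

Each update can only lower (or leave unchanged) the Hopfield energy function, which is why the corrupted probe is drawn back to the stored pattern acting as an attractor.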
Method of Instruction
Assessment
The course has the following assessment components:
- Written Examination (2.5 hours, 90%)
- Coursework Section (2 pieces, 10%)
To pass this module, students must:
- Obtain an overall pass mark of 50% for all components combined.
Reading list available via the UCL Library catalogue.