
COMPGS03 - Validation and Verification

This is the 2016-17 version of the syllabus.

Code: COMPGS03 (Also taught as: COMPM023)
Year: MSc
Prerequisites: Successful completion of years 1 and 2 of the Computer Science programme.
Term: 2
Taught By: Yue Jia (50%), Earl Barr (50%)
Aims: The module will train students in the principles and techniques of validating and verifying software systems. The training will be intellectually demanding and will cover not only the state of the practice in validation and verification, but also the most significant trends, problems and results in validation and verification research.
Learning Outcomes: On completion of the module, the successful student should have a good knowledge and understanding of correctness, consistency, faults and failures, static analysis and testing. The student should be able to understand the use of logic as a formal language for the specification of systems, understand the use of symbolic execution and the main verification techniques used in symbolic model checking, and be able to verify simple systems. In addition, students should have a good understanding of the range of approaches to testing that can be applied to software systems and be able to undertake both black-box and white-box (unit-level) testing. Further, successful students will be able to appreciate the limitations of current tools and will have insight into ongoing research aimed at overcoming those limitations.

Content:

Basic concepts:
Software engineering lifecycle context 
Correctness
Soundness and completeness
Faults
Errors
Failures
Static and dynamic analysis
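
For orientation, a minimal Python sketch (illustrative only; the function and values are invented, not course material) of how these terms are usually distinguished: a fault in the source code, when executed, produces an erroneous internal state, which may propagate to an observable failure.

# Illustrative Python sketch: fault vs. error vs. failure.

def average(values):
    """Intended to return the arithmetic mean of a non-empty list of numbers."""
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)  # FAULT: should divide by len(values)

if __name__ == "__main__":
    expected = 4.0
    # Executing the fault yields an ERROR: the divisor is 2 instead of 3.
    result = average([2, 4, 6])
    if result != expected:
        # The error propagates to the output, becoming an observable FAILURE.
        print(f"Failure observed: expected {expected}, got {result}")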

Validation:
Kinds of testing (unit, functional, integration, system, acceptance, regression) 
Black Box Testing: input partitioning and combinatorial testing
Coverage and Structural Testing
Mutation Testing
Regression Testing
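
A minimal sketch of the black-box and mutation-testing ideas listed above, assuming a hypothetical function classify_sign (the function, the mutant and the test cases below are invented for illustration, not taken from the course materials). Tests are chosen by partitioning the input domain described in the specification, and a suite that covers every partition should also "kill" a mutant in which a relational operator has been changed.

# Illustrative Python sketch: input partitioning and mutation testing.

def classify_sign(x: int) -> str:
    """Program under test: classify an integer as positive, negative or zero."""
    if x > 0:
        return "positive"
    elif x < 0:
        return "negative"
    return "zero"

def classify_sign_mutant(x: int) -> str:
    """Mutant of the program above: '>' has been mutated to '>='."""
    if x >= 0:  # mutated relational operator
        return "positive"
    elif x < 0:
        return "negative"
    return "zero"

# Black-box test cases: one representative value per equivalence partition
# (positive, negative, zero), chosen from the specification alone.
PARTITION_CASES = [(5, "positive"), (-3, "negative"), (0, "zero")]

def suite_passes(program) -> bool:
    """Return True if every partition test passes against the given program."""
    return all(program(x) == expected for x, expected in PARTITION_CASES)

if __name__ == "__main__":
    print("original passes:", suite_passes(classify_sign))           # True
    print("mutant killed:", not suite_passes(classify_sign_mutant))  # True: the x == 0 case fails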

Verification:
Propositional and Predicate Logic 
Specifying and verifying programs
Symbolic Execution
Hoare Logic
Model Checking
Temporal Logic: NuSMV, Computation Tree Logic (CTL)
Abstract Interpretation
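
The sketch below illustrates the core idea behind checking a safety property on a finite transition system; it uses explicit-state reachability rather than the symbolic (BDD-based) representation used by tools such as NuSMV, and the two-process mutual-exclusion system and all names in it are invented for illustration. The property checked corresponds to the CTL formula AG !(c1 & c2): no reachable state has both processes in their critical sections.

# Illustrative Python sketch: explicit-state check of a safety property.
from collections import deque

# Toy two-process mutual exclusion: each process is idle (i), trying (t),
# or in its critical section (c); a process may enter only if the other
# is not already in its critical section.
TRANSITIONS = {
    ("i", "i"): [("t", "i"), ("i", "t")],
    ("t", "i"): [("c", "i"), ("t", "t")],
    ("i", "t"): [("i", "c"), ("t", "t")],
    ("t", "t"): [("c", "t"), ("t", "c")],
    ("c", "i"): [("i", "i"), ("c", "t")],
    ("i", "c"): [("i", "i"), ("t", "c")],
    ("c", "t"): [("i", "t")],
    ("t", "c"): [("t", "i")],
}
INITIAL = ("i", "i")

def violates_mutex(state) -> bool:
    """Bad state for AG !(c1 & c2): both processes in the critical section."""
    return state == ("c", "c")

def holds_globally(initial, transitions, bad) -> bool:
    """Breadth-first search over reachable states; True iff no bad state is reachable."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if bad(state):
            return False
        for successor in transitions.get(state, []):
            if successor not in seen:
                seen.add(successor)
                frontier.append(successor)
    return True

if __name__ == "__main__":
    print("AG !(c1 & c2) holds:", holds_globally(INITIAL, TRANSITIONS, violates_mutex))

In NuSMV the same check would be expressed by describing the transition system in the SMV input language and stating the property as a CTL specification.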

Reading: 
Selected surveys and research papers

Method of Instruction:

Lectures and coursework. There are two pieces of coursework, equally weighted.

Assessment:

The course has the following assessment components:

  • Written Examination (2.5 hours, 80%)
  • Coursework Section (2 pieces, 20%)

To pass this course, students must:

  • Obtain an overall pass mark of 50% for all sections combined.

Resources:

Yue Jia and Mark Harman, "An Analysis and Survey of the Development of Mutation Testing", IEEE Transactions on Software Engineering, vol. 37, no. 5, pp. 649–678, September 2011.

Shin Yoo and Mark Harman, "Regression Testing Minimisation, Selection and Prioritisation: A Survey", Software Testing, Verification and Reliability, vol. 22, no. 2, pp. 67–120, 2012.

S. Anand, E. Burke, T. Y. Chen, J. Clark, M. B. Cohen, W. Grieskamp, M. Harman, M. J. Harrold and P. McMinn, "An Orchestrated Survey on Automated Software Test Case Generation", Journal of Systems and Software, vol. 86, no. 8, pp. 1978–2001, August 2013.

P. McMinn, "Search-Based Software Test Data Generation: A Survey", Software Testing, Verification and Reliability, vol. 14, no. 2, pp. 105–156, June 2004.

Alex Groce, Amin Alipour, and Rahul Gopinath, "Coverage and Its Discontents", Onward! Essays 2014, http://www.cs.cmu.edu/~agroce/onwardessays14.pdf