COMPM023 - Validation and Verification
This database contains 2016-17 versions of the syllabuses. For current versions please see here.
|Code||COMPM023 (Also taught as COMPGS03)|
|Prerequisites||Successful completion of years 1 and 2 of the Computer Science programme.|
|Taught By||Yue Jia (50%), Earl Barr (50%)|
|Aims||The module will train students in the principles and techniques of validating and verifying software systems. The training will be intellectually demanding and will cover not only the state of the practice in validation and verification, but also the most significant trends, problems and results in validation and verification research.|
|Learning Outcomes||On completion of the module, the successful student should have a good knowledge and understanding of correctness, consistency, faults and failures, static analysis, and testing. The student should understand the use of logic as a formal language for specifying systems, understand symbolic execution and the main verification techniques used in symbolic model checking, and be able to verify simple systems. In addition, students should have a good understanding of the range of approaches to testing that can be applied to software systems, and be able to undertake both black-box and white-box (unit-level) testing. Finally, successful students should appreciate the limitations of current tools and have insight into ongoing research aimed at overcoming those limitations.|
Content:
- Software engineering lifecycle context
- Soundness and completeness
- Static and dynamic analysis
- Kinds of testing (unit, functional, integration, system, acceptance, regression)
- Black Box Testing: input partitioning and combinatorial testing
- Coverage and Structural Testing
- Mutation Testing
- Regression Testing
- Propositional and Predicate Logic
- Specifying and verifying programs
- Symbolic Execution
- Hoare Logic
- Model Checking
- Temporal Logic: NuSMV, Computational Tree Logic (CTL)
- Abstract Interpretation
- Selected surveys and research papers
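To make one of the topics above concrete, the following is a minimal, hand-rolled sketch of mutation testing: a small syntactic change (a mutant) is applied to a program, and the test suite is judged by whether it detects ("kills") the mutant. The `is_leap` example and its test cases are hypothetical illustrations, not material from the module; real mutation-testing tools generate and run mutants automatically.

```python
def is_leap(year):
    """Original program under test: the Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_leap_mutant(year):
    """Mutant: the relational operator in `year % 100 != 0` is
    flipped to `== 0` (a classic relational-operator mutation)."""
    return year % 4 == 0 and (year % 100 == 0 or year % 400 == 0)

def suite_passes(f):
    """Run a small test suite against implementation `f`;
    return True only if every test case passes."""
    cases = {2000: True, 1900: False, 2024: True, 2023: False}
    return all(f(y) == expected for y, expected in cases.items())

# The suite passes on the original program but fails on the mutant,
# so this mutant is "killed". Mutants that survive the suite point
# to inputs the tests do not adequately exercise.
original_ok = suite_passes(is_leap)             # True
mutant_killed = not suite_passes(is_leap_mutant)  # True
```

The mutation score (killed mutants divided by non-equivalent mutants) gives a measure of test-suite adequacy; the Jia and Harman survey in the reading list covers this in depth.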
Method of Instruction:
Lectures, coursework. There are two pieces of coursework, equally weighted.
The course has the following assessment components:
- Written Examination (2.5 hours, 80%)
- Coursework Section (2 pieces, 20%)
To pass this module, students must:
- Obtain an overall pass mark of 50% for all components combined;
- Obtain a minimum mark of 40% in each component worth ≥ 30% of the module as a whole.
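The two pass rules combine as sketched below, assuming the written examination (80%) is the only component worth 30% or more, so the 40% floor applies to it alone; the weights are taken from the assessment components above.

```python
def passes_module(exam_mark, coursework_mark):
    """Apply the pass rules: overall weighted mark >= 50%, and
    >= 40% in each component worth 30%+ of the module (the exam)."""
    overall = 0.8 * exam_mark + 0.2 * coursework_mark
    return overall >= 50 and exam_mark >= 40

# 45% exam, 90% coursework: overall 54% and the exam clears 40%.
passes_module(45, 90)   # True
# 38% exam, 100% coursework: overall 50.4%, but the exam is below 40%.
passes_module(38, 100)  # False
```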
Reading list:
Yue Jia and Mark Harman. "An Analysis and Survey of the Development of Mutation Testing". IEEE Transactions on Software Engineering, vol. 37, no. 5, pp. 649–678, September 2011.
Shin Yoo and Mark Harman. "Regression Testing Minimisation, Selection and Prioritisation: A Survey". Software Testing, Verification and Reliability, vol. 22, no. 2, pp. 67–120, 2012.
S. Anand, E. Burke, T. Y. Chen, J. Clark, M. B. Cohen, W. Grieskamp, M. Harman, M. J. Harrold, and P. McMinn. "An Orchestrated Survey on Automated Software Test Case Generation". Journal of Systems and Software, vol. 86, no. 8, pp. 1978–2001, August 2013.
P. McMinn. "Search-Based Software Test Data Generation: A Survey". Software Testing, Verification and Reliability, vol. 14, no. 2, pp. 105–156, June 2004.
Alex Groce, Amin Alipour, and Rahul Gopinath. "Coverage and Its Discontents". Onward! Essays 2014. http://www.cs.cmu.edu/~agroce/onwardessays14.pdf