COMPM023 - Validation and Verification

Note: Whilst every effort is made to keep the syllabus and assessment records correct, the precise details must be checked with the lecturer(s).
- COMPM023 (Also taught as: COMPGS03)
- Prerequisites: Successful completion of years 1 and 2 of the Computer Science programme.
- Taught By
- Earl Barr (50%)
Shin Yoo (50%)
- Aims
- The module will train students in the principles and techniques of validating and verifying software systems. The training will be at an intellectually demanding level and will cover not only the state of the practice in validation and verification, but also the most significant trends, problems and results in validation and verification research.
- Learning Outcomes
- On completion of the module, the successful student should have a good knowledge and understanding of correctness, consistency, faults and failures, static analysis and testing. The student should be able to understand the use of logic as a formal language for the specification of systems, understand the main verification techniques used in symbolic model checking, and be able to verify simple systems. In addition, students should have a good understanding of the range of approaches to testing that can be applied to software systems and be able to undertake code-level unit testing. Further, successful students will be able to appreciate the limitations of current tools and have insight into the ongoing research that aims to overcome them.
- Basic concepts
- Software engineering lifecycle context
Static and dynamic analysis
- Static Analysis
- Propositional and Predicate Logic
Specification languages - Alloy
Computation Tree Logic - CTL
Temporal Logic - NuSMV
Specifying and verifying programs
Java Modelling Language (JML) and ESC/Java2
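To give a flavour of the JML style listed above, here is a minimal sketch of a specified Java method. The class, method, and annotations are illustrative examples, not taken from the course materials; JML annotations live in comments, so the file compiles as plain Java, and a checker such as ESC/Java2 would attempt to verify the pre- and postconditions statically.

```java
// Illustrative JML-annotated class (hypothetical example).
public class Account {
    private int balance; // intended invariant: balance >= 0

    //@ requires amount > 0 && amount <= balance;
    //@ ensures balance == \old(balance) - amount;
    public void withdraw(int amount) {
        // The runtime check mirrors the JML precondition above.
        if (amount <= 0 || amount > balance) {
            throw new IllegalArgumentException("precondition violated");
        }
        balance -= amount;
    }

    //@ requires amount > 0;
    //@ ensures balance == \old(balance) + amount;
    public void deposit(int amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("precondition violated");
        }
        balance += amount;
    }

    public int getBalance() {
        return balance;
    }
}
```

A static checker reasons about the annotations without running the code; the explicit runtime checks shown here are one common way to keep the implementation honest when no checker is in use.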
- Dynamic Analysis
- Kinds of testing (unit, functional, integration, system, acceptance, regression)
The testing lifecycle
Coverage and Mutation Analysis
Identifying test cases (boundary analysis, partitioning)
Code level unit testing with JUnit/TestNG
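The boundary-analysis and partitioning ideas above can be sketched as follows. The grading function and its thresholds are hypothetical examples (they are not the course's marking scheme); plain assertions are used in place of JUnit/TestNG so the sketch stays self-contained, but the test cases are chosen exactly as equivalence partitioning and boundary-value analysis prescribe.

```java
// Illustrative boundary-value tests for a hypothetical grading function.
// Partitions: invalid (<0 or >100), fail (0-39), pass (40-69),
// distinction (70-100); tests probe each partition and its boundaries.
public class GradeTest {
    public static String grade(int mark) {
        if (mark < 0 || mark > 100) {
            throw new IllegalArgumentException("mark out of range");
        }
        if (mark >= 70) return "distinction";
        if (mark >= 40) return "pass";
        return "fail";
    }

    public static void main(String[] args) {
        // One representative per partition, plus each boundary value.
        check(grade(0), "fail");          // lower boundary of fail
        check(grade(39), "fail");         // upper boundary of fail
        check(grade(40), "pass");         // lower boundary of pass
        check(grade(69), "pass");         // upper boundary of pass
        check(grade(70), "distinction");  // lower boundary of distinction
        check(grade(100), "distinction"); // upper boundary of distinction
        System.out.println("all boundary tests passed");
    }

    private static void check(String actual, String expected) {
        if (!actual.equals(expected)) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```

In JUnit or TestNG each `check` call would become a separate annotated test method, so a single failing boundary is reported individually rather than aborting the whole run.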
- Selected surveys and research papers
Method of Instruction:
Lectures, coursework, lab classes. There are three pieces of coursework, all equally weighted.
The course has the following assessment components:
- Written Examination (2.5 hours, 75%)
- Coursework Section (3 pieces, 25%)
To pass this course, students must:
- Obtain an overall pass mark of 50% for all sections combined
The examination rubric is:
Answer TWO Questions from SECTION A and ONE question from SECTION B (THREE questions in total).
All questions carry equal marks. Section A covers static analysis; Section B covers dynamic analysis.
Reading List:
Michael Huth and Mark Ryan, Logic in Computer Science: Modelling and Reasoning about Systems, Second Edition, Cambridge University Press, 2004.
Mauro Pezzè and Michal Young, Software Testing and Analysis: Process, Principles, and Techniques, Wiley, 2007.
Ricky W. Butler and George B. Finelli, The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software,
IEEE Transactions on Software Engineering, 19(1):3-12, January 1993.
Phyllis Frankl, Dick Hamlet, Bev Littlewood and Lorenzo Strigini, Evaluating Testing Methods by Delivered Reliability, IEEE
Transactions on Software Engineering, 24(8):586-601, August 1998.
Phyllis G. Frankl and Elaine J. Weyuker, An Applicable Family of Data Flow Testing Criteria, IEEE Transactions on Software
Engineering, 14(10):1483-1498, October 1988.
Todd L. Graves, Mary Jean Harrold, Jung-Min Kim, Adam Porter and Gregg Rothermel, An Empirical Study of Regression Test Selection
Techniques, ACM Transactions on Software Engineering and Methodology, 10(2):184-208, April 2001.
Thomas J. Ostrand and Mark J. Balcer, The Category-Partition Method for Specifying and Generating Functional Tests, Communications
of the ACM, 31(6):676-686, June 1988.
Thomas Reps, Thomas Ball, Manuvir Das and James Larus, The Use of Program Profiling for Software Maintenance with Applications
to the Year 2000 Problem, Proceedings of the Sixth European Software Engineering Conference (ESEC/FSE 1997), Zurich, Switzerland,
pp.432-449, September 1997.
Elaine J. Weyuker, On Testing Non-Testable Programs, Computer Journal, 25(4):465-470, November 1982.
Elaine J. Weyuker, Axiomatizing Software Test Data Adequacy, IEEE Transactions on Software Engineering, SE-12(12):1128-1138, December 1986.
Michal Young and Richard N. Taylor, Rethinking the Taxonomy of Fault Detection Techniques, Proceedings of the 11th International Conference on Software Engineering (ICSE 1989), Pittsburgh, PA, USA, pp. 53-62, May 1989.
Hong Zhu, Patrick A.V. Hall and John H.R. May, Software Unit Test Coverage and Adequacy, ACM Computing Surveys, 29(4):366-427, December 1997.
Gregg Rothermel and Mary Jean Harrold, Analyzing Regression Test Selection Techniques, IEEE Transactions on Software Engineering, 22(8):529-551, August 1996.
M. Harrold, J. McGregor and K. Fitzpatrick, Incremental Testing of Object-Oriented Class Structures, Proceedings of the 1992 International Conference on Software Engineering (ICSE 1992), pp. 68-80.
Roong-Ko Doong and Phyllis Frankl, The ASTOOT Approach to Testing Object-Oriented Programs, ACM Transactions on Software Engineering and Methodology, 3(2):101-130, April 1994.
R. DeMillo, R. Lipton and F. Sayward, Hints on Test Data Selection: Help for the Practicing Programmer, IEEE Computer, 11(4):34-41, April 1978.
David S. Rosenblum and Elaine J. Weyuker, Using Coverage Information to Predict the Cost-Effectiveness of Regression Testing Strategies, IEEE Transactions on Software Engineering, 23(3):146-156, March 1997.
Further lecture notes will be provided.