Grand Challenges: Computer Programs that Always Work

UCL Contact: Sarah Turnbull (Please note this is an internal UCL-CS event).
Date/Time: 24 Nov 17, 12:00 - 14:00
Venue: IOE - Bedford Way (20) – 728

Abstract

The sixth of the Eight Challenges will be "Computer Programs that Always Work" on the 24th November at 12:00 in room 728 in the Institute of Education (20 Bedford Way).

The talks will be given by the following speakers, with abstracts below:

Jens Groth: How to make computation resilient and how to verify correctness

What can possibly go wrong in a computation? Just about everything! Unintentional errors or deliberate attacks mean that we cannot trust computation to be correct.

During computation we can mitigate errors and attacks by distributing the computation across multiple parties. Even if program execution on one machine fails, the other machines provide redundancy and can recover the correct result. Secure multi-party computation protocols provide resilience even against multiple corrupt parties and at the same time preserve the privacy of inputs.
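As a loose, self-contained illustration of the distribution idea (additive secret sharing in Python, with an arbitrary prime modulus chosen purely for this sketch; not the protocols presented in the talk), each party holds only a random-looking share, yet the parties can jointly compute on shared values and reconstruct the result:

    import random

    P = 2**61 - 1  # arbitrary prime modulus chosen for this sketch

    def share(secret, n=3):
        """Split a secret into n additive shares modulo P."""
        shares = [random.randrange(P) for _ in range(n - 1)]
        shares.append((secret - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        """Recover the secret by summing all shares modulo P."""
        return sum(shares) % P

    # Each party holds one share of x and one of y; adding shares
    # locally yields shares of x + y without revealing x or y.
    x_shares = share(42)
    y_shares = share(100)
    sum_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
    assert reconstruct(sum_shares) == 142

No single share reveals anything about the inputs; only the combination of all shares does.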

Alternatively, after a computation we may demand evidence that the output is correct. Cryptographic proofs can be given that a computation is correct, and recent techniques allow very efficient verification of a computation. Moreover, the proofs may be zero-knowledge, which means that they do not reveal any confidential information that was used in the computation.
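As a toy illustration of a zero-knowledge proof (a textbook Schnorr identification protocol with deliberately tiny parameters; not the succinct proof systems alluded to above), the prover convinces the verifier that it knows a secret exponent without revealing it:

    import random

    # Toy Schnorr protocol: prove knowledge of x with h = g^x mod p.
    # Parameters are far too small for real security.
    p, q, g = 23, 11, 4        # g has order q in Z_p*

    x = 6                      # prover's secret
    h = pow(g, x, p)           # public value

    r = random.randrange(q)    # prover's commitment randomness
    a = pow(g, r, p)           # commitment sent to the verifier

    c = random.randrange(q)    # verifier's random challenge

    z = (r + c * x) % q        # prover's response

    # Verifier checks g^z == a * h^c (mod p) without learning x.
    assert pow(g, z, p) == (a * pow(h, c, p)) % p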

David Clark: Measuring Test Suite Quality

Programs that always work demand test suites that, in execution, exercise every significant aspect of the program's behaviour. This property is sometimes termed the adequacy of the test suite. This quality consideration is usually regarded as met by coming close to achieving a 100% coverage criterion for the paths in the program's control flow graph, and search-based tools such as EvoSuite now exist that search for covering sets of tests. Limitations of the coverage approach include:
1. This is not an option when the source code is not available.
2. There is experimental evidence that coverage alone does not achieve adequacy.
3. Some paths in the graph may be infeasible.
4. On average, 10% of tests that cover an error fail to propagate the error information to the oracle.
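As a rough illustration of the coverage idea discussed above (line coverage rather than path coverage, and unrelated to EvoSuite itself), the following Python sketch traces which lines of a hypothetical function a test suite actually executes:

    import dis
    import sys

    def classify(x):
        # Hypothetical function under test.
        if x < 0:
            return "negative"
        if x == 0:
            return "zero"
        return "positive"

    def line_coverage(func, tests):
        """Fraction of func's source lines executed by the given tests."""
        executed = set()

        def tracer(frame, event, arg):
            if event == "line" and frame.f_code is func.__code__:
                executed.add(frame.f_lineno)
            return tracer

        sys.settrace(tracer)
        try:
            for args in tests:
                func(*args)
        finally:
            sys.settrace(None)

        all_lines = {ln for _, ln in dis.findlinestarts(func.__code__) if ln}
        return len(executed & all_lines) / len(all_lines)

    print(line_coverage(classify, [(-1,), (5,)]))        # the x == 0 branch is missed
    print(line_coverage(classify, [(-1,), (0,), (5,)]))  # full line coverage

Even 100% line coverage here says nothing about whether the tests would detect a wrong return value, which is the adequacy gap the abstract points to.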

Information theory offers software measures for improving test suite adequacy. Use of quantified information flow provides a lightweight semantic element in test suite selection that can overcome limitations in syntactic coverage. In this talk we examine the issues of test suite diversity and avoidance of failed error propagation, and touch on the under-researched question of practical automated oracles.
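As a crude, illustrative proxy for the information-theoretic measures mentioned here (not quantified information flow itself), the Shannon entropy of the outputs a test suite observes gives one lightweight signal of suite diversity:

    import math
    from collections import Counter

    def output_entropy(func, tests):
        """Shannon entropy (bits) of the outputs observed by the tests.
        Higher entropy means the suite drives the program into more
        distinct observable behaviours."""
        outputs = Counter(func(*args) for args in tests)
        total = sum(outputs.values())
        return -sum((n / total) * math.log2(n / total) for n in outputs.values())

    def sign(x):
        return (x > 0) - (x < 0)

    print(output_entropy(sign, [(1,), (2,), (3,)]))    # 0.0 bits: all outputs identical
    print(output_entropy(sign, [(-1,), (0,), (4,)]))   # ~1.58 bits: three distinct outputs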

Ann Blandford: Programs that always work: a person-centred perspective

In this talk I will review different interpretations of what it means for a program to work, from a user-centred perspective. I will focus particularly on what it means for an interactive, programmable system to be safe, drawing on examples from the ECLIPSE project. We have studied how nurses program interactive medical devices and identified sources of error and - at least as importantly - of resilience: making systems work in the face of distractions and obstacles.

Emmanuel Letier: Goal-Oriented Requirements Engineering in 15 Minutes

Developing programs that always work requires that we know and agree on what it means for a program to work. In this talk, I will outline how goal-oriented requirements engineering techniques can facilitate such activity. These techniques consist of deriving precise specifications of software requirements from stakeholders' goals, analysing the validity of domain assumptions used during such derivation, and managing the inevitable uncertainty about stakeholders' goals, domain assumptions and software requirements through Bayesian decision analysis.
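As a minimal sketch of the Bayesian decision analysis step (with an invented cost model and a single Beta-distributed belief about whether a domain assumption holds; not the techniques presented in the talk), two candidate designs can be compared by expected cost under uncertainty:

    import random

    def expected_cost(cost_if_holds, cost_if_fails, alpha, beta, samples=100_000):
        """Monte Carlo estimate of expected cost when our belief that a
        domain assumption holds is modelled as Beta(alpha, beta)."""
        total = 0.0
        for _ in range(samples):
            p = random.betavariate(alpha, beta)  # sampled belief the assumption holds
            total += p * cost_if_holds + (1 - p) * cost_if_fails
        return total / samples

    # Design A is cheap but costly if the assumption fails;
    # Design B hedges against the assumption failing. Numbers are invented.
    cost_a = expected_cost(cost_if_holds=10, cost_if_fails=200, alpha=8, beta=2)
    cost_b = expected_cost(cost_if_holds=50, cost_if_fails=60, alpha=8, beta=2)
    print(f"expected cost A = {cost_a:.1f}, B = {cost_b:.1f}")

The decision can then be revisited as evidence about the assumption updates the Beta belief.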