6  Requirements in the Wider Context

How to define requirements is strongly influenced by the context in which software development takes place. In this chapter, we will look at the roles of requirements in client projects, in product development, in greenfield and brownfield projects, and in regulated environments. We will also discuss the relations between requirements engineering and social responsibility, and between requirements engineering and artificial intelligence.

6.1 Requirements in Client Projects

Much software development takes place in the context of a client project. These are projects where software is developed by an IT company to address the needs of a specific client. For example, the client may be an ambulance service that needs new ambulance dispatching software, or it may be a government administration that needs a new system to distribute social benefits. In a client project, the initial requirements come from a client organisation and the software is to be developed by a provider organisation.

The tendering process

The client usually selects a provider through competitive tender: the client invites multiple providers to submit a proposal, then selects the best or cheapest proposal among those it received. Requirements engineering plays a crucial role in all stages of the tendering process. This process is illustrated in Figure 6.1.

Figure 6.1: The tender process and change requests in client projects. Requirements engineering is a significant activity in all stages, and all documents contain descriptions of stakeholder goals and software requirements.

In the first step, the client assembles a procurement team that will prepare a request for proposal. The request for proposal defines the client’s needs for the project by describing the stakeholders’ goals, the context in which the software is to be used, and a list of required features. For complex projects, requests for proposals can be several hundred pages long. Preparing a good request for proposal involves a substantial amount of requirements discovery, analysis and documentation to be done by the client organisation, sometimes with the help of external consultants. The quality of requirements in the request for proposal is crucial because this document will set the direction for the whole project.

In the second step, multiple candidate supplier organisations assemble bidding teams to prepare proposals in response to the request. These proposals describe how their organisation will deliver the system or services described in the request for proposal, at what cost, and by what time. For complex projects, preparing these proposals involves significant requirements engineering activities. You must analyse all requirements in the request for proposal, envision an architecture that will help you deliver the requirements, and explain in your proposal how your system will deliver each and every one of them. These proposals can again be several hundred pages long.

The client must then evaluate all proposals and select the one that best matches its selection criteria. The client and the selected provider then enter contractual negotiations to produce the project contract. The contract is legally binding; each party can sue the other if it violates its contractual obligations. The requirements are part of the contract, and therefore need to be written and checked carefully by both sides.

The development team then receives the contract and uses the requirements it contains as the basis for their implementation. Usually, the requirements in the contract do not specify everything in full detail, and the development team will perform additional requirements engineering to clarify details with a client representative.

During development, the client may send change requests to the development team asking them to modify some of the requirements. These change requests are managed differently depending on the type of contract.

Contract types

There are two main types of contracts.

The first type is fixed scope and price: all requirements are defined in advance and must be delivered for a fixed price. When the client wants to change the requirements, it must send a formal change request and negotiate the cost of that change with the provider. This type of contract is aligned with the waterfall model. A development team working under a fixed-scope contract can still be agile, in the sense that it can deliver the software iteratively and incrementally, but some of the benefits of agile will be lost.

The second type of contract is time and materials: the client pays the provider based on the time spent developing the system. This type of contract is more favourable to agile processes. It gives the client more flexibility to change requirements during development but less certainty about the cost. The client may also be concerned that the supplier will not assign its most efficient developers to the project.

In Summary

The key point in this section is that significant requirements engineering activities take place during all stages of the tendering process and during the management of change requests. The various documents in this process —the request for proposal, the proposal, the contract, and change requests— all contain descriptions of stakeholder goals and software requirements. The quality of the requirements in the initial request for proposal and in the signed contract is critical to the project's success.

6.2 Requirements in Product Development

In product development, the software is developed for a market rather than for a specific client. Usually, the system is designed and developed internally, although some components can be outsourced. Examples of product development include much of the software that you use every day: email clients, messaging applications, word processors, IDEs, video conferencing systems, university course management systems, etc. In this context, the software development team does not receive requirements from an external client. The requirements are defined internally based on the company’s vision for the product, market analysis, and feedback from product stakeholders.

This context is more favourable to agile development than client-driven projects. The role of requirements engineering in product development is illustrated in Figure 6.2.

Figure 6.2: Requirements Engineering in Product Development: the main role of the product owner is to identify and prioritize what to build next based on feedback and ideas from stakeholders and the development team.

In agile methods, the person responsible for deciding and communicating the project requirements is called the product owner. One of the main roles of the product owner is to collect feedback and ideas from a variety of stakeholders, to prioritize these ideas, to transform them into concrete features, and to maintain the “product backlog”, which is a prioritized list of feature requests and change requests that the development team will potentially implement. The development team then delivers product updates to the stakeholders at regular intervals.
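To make the idea of a prioritized backlog concrete, here is a minimal sketch in Python. All fields, item names, and the value-over-effort heuristic are illustrative assumptions, not a prescription of how product owners must work:

```python
# Illustrative sketch (hypothetical fields and items): a product backlog as a
# prioritised list that the product owner reorders as feedback and ideas arrive.
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    value: int   # product owner's estimate of stakeholder value
    effort: int  # development team's estimate of implementation effort

def prioritise(backlog):
    """Order items by value-to-effort ratio, highest first.

    This is only one common heuristic; real product owners also weigh
    risk, dependencies, and strategic fit.
    """
    return sorted(backlog, key=lambda item: item.value / item.effort,
                  reverse=True)

backlog = [
    BacklogItem("dark mode", value=5, effort=5),
    BacklogItem("export to PDF", value=8, effort=2),
    BacklogItem("bulk import", value=9, effort=3),
]
# prioritise(backlog) orders: export to PDF (4.0), bulk import (3.0), dark mode (1.0)
```

The point of the sketch is that prioritisation is an explicit, repeated decision over the whole backlog, not a one-off ordering fixed in a contract.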

Requirements engineering in this context is much more iterative and incremental than in the context of client projects. The product owner’s approach to requirements engineering and their relationships with stakeholders and the development team are quite different from the requirements negotiation and client-supplier relations in a typical client project.

Following a product development approach is not incompatible with a client project. The client and provider may agree on a time and materials contract that treats the project as product development. If the project has a fixed-scope and price contract, it is also possible for the development team to treat the project internally as if it were a product development and manage its requirements accordingly.

6.3 Greenfield vs. Brownfield Projects

Another important distinction is between greenfield and brownfield projects.

  • A greenfield project is one where you develop a new system from scratch. An example is the development of a new ambulance dispatching system intended to replace a paper-based system, as was done for the London Ambulance Service in 1992.
  • A brownfield project is one where an existing software system needs to be modified or replaced. An example would be to change some of the components of the existing ambulance dispatching system in response to new Government standards about incident priorities and response times.

The vast majority of software projects today are brownfield projects. Working on a greenfield project is the exception.

The fundamental principles of requirements engineering apply to both greenfield and brownfield projects. In both cases, you need to understand the stakeholders' goals and the context in which the system is used, and define the desired behaviour and quality of the new system. One important difference, however, is that in brownfield projects you have less flexibility to change the context, and you need to ensure that any change you make integrates with existing applications and working practices. Therefore, requirements engineering in brownfield projects requires a deeper analysis of the context, and in particular of the constraints imposed by legacy applications.

6.4 Requirements in Regulated Environments

In some industries, software must be audited for regulatory compliance before it can be deployed and used. For example, software developed for aerospace, nuclear power plants, or medical devices goes through a rigorous auditing process during which systems engineers are required to provide evidence of their system's safety.

The obligation to provide evidence that software complies with regulations has a strong impact on requirements engineering. It necessitates the production of comprehensive documentation for software requirements and the establishment of traceability from regulations to requirements and from requirements to code, tests, and test results. Developing high-quality software and having to demonstrate to auditors that the software is safe and complies with regulations forces an organisation to use more systematic and rigorous practices than it might otherwise have used. Some of the techniques we will study in later chapters are about defining requirements in such contexts, where high quality is mandated and must be demonstrated. This notably includes techniques for maintaining traceability links from stakeholder goals to machine requirements and domain assumptions.
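The traceability described above can be sketched as a simple data structure. This is a minimal illustration with invented identifiers, not a real compliance tool:

```python
# Illustrative sketch (all ids hypothetical): a minimal store of traceability
# links from regulations to requirements, and from requirements to the code
# and tests that realise them.
from dataclasses import dataclass, field

@dataclass
class TraceStore:
    reg_to_req: dict = field(default_factory=dict)       # regulation -> requirements
    req_to_artefact: dict = field(default_factory=dict)  # requirement -> code/tests

    def link_regulation(self, reg_id, req_id):
        self.reg_to_req.setdefault(reg_id, set()).add(req_id)

    def link_artefact(self, req_id, artefact_id):
        self.req_to_artefact.setdefault(req_id, set()).add(artefact_id)

    def audit_trail(self, reg_id):
        """All artefacts that evidence compliance with a given regulation."""
        artefacts = set()
        for req in self.reg_to_req.get(reg_id, set()):
            artefacts |= self.req_to_artefact.get(req, set())
        return artefacts

    def uncovered_requirements(self):
        """Requirements with no implementing artefact: gaps an auditor would flag."""
        all_reqs = set().union(*self.reg_to_req.values()) if self.reg_to_req else set()
        return {r for r in all_reqs if not self.req_to_artefact.get(r)}
```

The two query methods mirror the two directions auditors care about: following a regulation down to its evidence, and finding requirements that have no evidence yet.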

6.5 Requirements and Social Responsibility

Social responsibility is the idea that businesses and individuals have a duty to act in the best interest of society as a whole. This means that when you develop software, you and the organisation you work for have a responsibility to consider all impacts that the software may have on society and the environment.

Some of the most important concerns are defined in Table 6.1. The list is far from complete.

Table 6.1: Some of the social responsibility concerns for software systems.

  • Safety: avoiding physical harm to people and infrastructure.
  • Security: protecting people and assets against harm caused by others.
  • Privacy: protecting people’s freedom not to be observed or disturbed.
  • Environmental sustainability: protecting the environment by conserving natural resources, avoiding pollution, and reducing contributions to climate change.
  • Fairness: avoiding discrimination, notably based on race, religion, and gender.

The most important social concerns are expressed in national and international regulations. These include, among others, anti-discrimination legislation, accessibility requirements, and privacy regulations such as the European Union's General Data Protection Regulation (GDPR). Many sectors, like finance, also have specific regulations.

In terms of personal social responsibility, all software engineers are expected to follow the minimum ethical and professional standards set in the codes of ethics of professional organisations such as the IEEE and ACM. Beyond regulations and codes of ethics, your social responsibility may also include choosing to work on projects that you find valuable to society and avoiding projects that you find harmful or pointless. This includes the choice of organisations you decide to work for.

Sometimes, it is easy to see that some software is illegal, unethical, or harmful to others. Well-known examples are the Volkswagen emissions scandal where a piece of software was installed into cars to cheat pollution emission tests, and the use of dark patterns to trick users into behaviours or purchases they do not consent to.

Often, however, deciding what is socially responsible is a complex and nuanced problem. This is particularly true for defence applications, for policing and the justice system, in healthcare, and in other domains with complex ethical and fairness implications. Addressing social responsibility concerns is usually a “wicked problem”. A wicked problem is one where there is no definitive, universally agreed definition of the problem to be solved. This is in contrast to “tame problems”, like the game of chess, that have clear goals and a fixed set of rules. Many software engineering projects, particularly those with important social impacts, are wicked problems: the goals are unclear and hard to define in advance, different stakeholders have competing needs and values, and the impacts of the software on society are hard to predict and measure.

Requirements engineering plays a crucial role in addressing social responsibility concerns. Within the software development process, requirements engineering is the set of activities where software engineers are expected to engage with stakeholders and experts from other disciplines (law, social science, environmental science, psychology, philosophy) to analyse the potential impacts of technical software engineering decisions on society and the environment. Requirements engineering brings specific skills and techniques that complement those of other disciplines, notably for identifying stakeholders, understanding their needs and concerns, clarifying vague concerns such as safety and fairness and translating them into precise software requirements, managing conflicts between competing goals, analysing risks, and evolving software systems to deal with changing needs and contexts.

6.6 Requirements and Artificial Intelligence

Artificial Intelligence (AI) is an umbrella term covering a broad range of computing techniques that imitate, or claim to imitate, human reasoning of one form or another. AI techniques include machine learning techniques such as deep learning and reinforcement learning, optimisation techniques such as planning and search algorithms, and knowledge representation and formal reasoning techniques such as various formal logics and reasoning systems.

An AI system is a software system (a Machine) where some of the system’s core components are implemented using one or more AI techniques. Examples of AI systems include self-driving cars and other autonomous vehicles, virtual assistants on mobile phones, recommender systems used by online stores and streaming services, credit scoring systems used in banking, risk assessment tools used by police and judges to assess the likelihood of future criminal activity, diagnostic systems for medical images, and of course generative AI systems that generate text, images, or software code.

Requirements Engineering for AI

The rapid progress of AI is creating huge opportunities to transform many sectors of activity. Over the next decade, many organisations will be looking at how best to incorporate AI in their business. Business analysts and requirements experts who specialise in AI systems will be in high demand. Without such expertise, many AI projects will result in expensive failures.

At a high level, requirements engineering for AI systems has the same concerns as for any other type of system: we must understand the stakeholders' needs for the AI system, the context in which it will be used, and its desired behaviour and qualities.

Although defining requirements for AI systems raises specific challenges, many established requirements engineering practices remain essential when building AI systems.

For example, many AI initiatives start as exploratory projects without clear business goals. Requirements engineering practices are important in this context to guide the explorations: they can help data scientists understand the context in which the AI system would be used, clarify and quantify the potential impacts of the AI system on business goals, and analyse the tradeoffs and risks of different AI solutions.

After the transition from the exploratory to the development phase, effective requirements engineering practices are needed to address common requirements engineering challenges reported by software engineers and data scientists building AI systems. These include dealing with stakeholders' unrealistic expectations about what can be achieved with AI and the costs involved, dealing with regulatory constraints, and dealing with requirements for the complex IT infrastructure needed to monitor, manage, and evolve machine learning models and datasets.

Tradeoffs and Risk Analysis

While well-established requirements engineering practices are essential, they are not sufficient, because engineering AI systems demands much greater attention to tradeoffs and risks than most other types of systems.

Requirements tradeoffs are everywhere in AI systems. For example, if you build a classifier to detect potential frauds in credit card transactions, you need to find the right balance between the rates of false positives (classifying a legitimate transaction as fraudulent) and false negatives (classifying a fraudulent transaction as legitimate). Each error type has different costs and consequences for the bank and the card holders. In general, requirements engineering for AI systems involves exploring a large range of design decisions that have important impacts on multiple stakeholder goals. These design decisions include decisions about what tasks to automate, the roles of humans in the system, tradeoffs between quality metrics (e.g. false positives and false negatives), what data to use in training, etc. Furthermore, the impacts of these decisions are often hard to predict, and the decisions must take into account multiple, conflicting stakeholder goals. Requirements engineering for AI systems therefore involves significant and complex decisions under uncertainty.
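The fraud-detection tradeoff above can be made concrete with a toy calculation. All numbers here (costs, classifier scores, the threshold grid) are invented for illustration; real systems estimate them from data and stakeholder input:

```python
# Toy sketch of the false-positive / false-negative tradeoff. A transaction
# is flagged as fraud when its classifier score is at or above a threshold.

def expected_cost(threshold, scores_legit, scores_fraud,
                  cost_fp=10.0, cost_fn=500.0):
    """Total cost of a threshold choice on a batch of scored transactions.

    cost_fp: assumed cost of flagging a legitimate transaction (customer friction).
    cost_fn: assumed cost of missing a fraudulent transaction (lost money).
    """
    false_positives = sum(1 for s in scores_legit if s >= threshold)
    false_negatives = sum(1 for s in scores_fraud if s < threshold)
    return false_positives * cost_fp + false_negatives * cost_fn

# Hypothetical classifier scores: higher means "more likely fraud".
legit = [0.05, 0.10, 0.20, 0.30, 0.55]
fraud = [0.40, 0.70, 0.90]

# Sweep a grid of thresholds: stricter thresholds reduce false positives but
# increase false negatives. The "best" threshold is entirely determined by
# the stakeholders' cost assumptions, which is a requirements decision.
best = min((t / 100 for t in range(0, 101, 5)),
           key=lambda t: expected_cost(t, legit, fraud))
```

Changing `cost_fp` or `cost_fn` moves the optimal threshold, which is exactly why the balance cannot be decided by the data scientists alone.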

Requirements engineering for AI systems also demands significant attention to risks. AI systems built using machine learning are rarely if ever 100% accurate. For example, in a financial fraud detection system, some legitimate transactions will be incorrectly labelled as fraudulent, and vice-versa. In some contexts, the inaccuracies are benign (e.g. an inaccurate video recommendation) but in other contexts inaccuracies can have important consequences (e.g. for medical diagnosis or for identifying a person in front of a moving self-driving car). An additional source of risk is that many machine learning models are non-deterministic and too complex to be understood through inspection by humans. AI systems based on such models are therefore at greater risk of behaving in unexpected and harmful ways in situations that have not been covered in training and testing. Furthermore, the increased automation that is typical of AI systems reduces the possibility for humans to intervene to recover from unexpected and undesired situations. Requirements engineering for many AI systems will therefore demand much greater attention to risks than for most other systems.

Fairness, Accountability, Transparency

The use of machine learning for decisions that affect people’s lives (such as access to loans and insurance products, granting of bail and parole in the justice system, screening of job applications, fraud investigations, etc.) has led to increased concerns about the fairness, accountability and transparency of these decisions.

  • Fairness refers to the idea that the decision process should not create, reinforce or perpetuate unjust discriminations, notably based on race, gender or religion.
  • Accountability refers to the idea that someone must be held accountable for decisions taken by the system. Unaccountability would be a situation where everyone avoids responsibility by blaming decisions on the “algorithm”.
  • Transparency refers to the idea that decisions can be explained and justified to people affected by them.

These concerns are not new and not specific to AI systems. For example, the new UK system of access to social benefits has transparency concerns despite not being built using AI techniques. The growing use of machine learning has, however, made everyone more aware of their importance.

How to translate these concerns into specific stakeholder goals and machine requirements is a new challenge for requirements engineers. This is an important area of ongoing research in the software engineering, machine learning, and human-computer interaction communities.

The AI Alignment Problem

Another important concern is the AI alignment problem.

Many AI systems are machines whose behaviour is guided by the pursuit of some goal: the machine observes the world and acts on it with the purpose of optimising some objective function. Typical examples of such systems are robots, self-driving cars, conversational AI, and stock trading algorithms.

The AI alignment problem arises when the objective function given to the AI system differs from the actual stakeholder goals that system designers, users, and society would like it to pursue. This can lead to situations where the AI system optimizes its objective function but does not satisfy some important stakeholder goals. Such a situation is a typical symptom of a requirements error: the machine satisfies its requirements but not the stakeholder goals (Chapter 4).

One example that received much attention was an AI chatbot called Tay that Microsoft launched on Twitter in 2016, only to shut it down a few hours later when the chatbot started generating offensive racist and sexist tweets, presumably because it maximized an objective function related to generating as much attention as possible. Another example, reported in the press in 2023, was the hypothetical scenario of an AI-enabled military drone tasked to identify and destroy enemy sites, with the possibility for a human operator to abort the mission before destruction. In this hypothetical scenario, the AI drone observed during training that destroying the enemy site brings a higher reward than if the mission is aborted. It thus learned that it could maximize its objective function by preventing the abort command, either by killing its operator or by destroying the operator’s communication tower.

Formulating better objective functions and specifying constraints on the AI system's behaviour can reduce AI alignment problems, but it would be naive to believe that they can be completely eliminated. In any complex system, the objective functions are always proxies (simplified representations) for complex concerns that cannot be described fully in mathematical functions. Requirements engineering has an essential role to play here, both in helping to formulate better objective functions and constraints, and in designing systems that take into account the impossibility of formulating mathematical objective functions and constraints that capture the full complexities of stakeholder goals and the world in which the system operates.
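The gap between a proxy objective and the stakeholder goals it stands for can be shown in a few lines. The actions and scores below are entirely invented; the point is only the structural mismatch:

```python
# Toy illustration (invented numbers) of the alignment gap: a machine picks
# the action that maximizes its proxy objective (e.g. engagement), even
# though the stakeholders' actual goal would rank the actions differently.

actions = {
    # action: (proxy score the machine optimizes, true stakeholder value)
    "show_outrage_bait":    (0.95, 0.10),
    "show_helpful_article": (0.60, 0.90),
    "show_nothing":         (0.10, 0.50),
}

chosen_by_machine = max(actions, key=lambda a: actions[a][0])
preferred_by_stakeholders = max(actions, key=lambda a: actions[a][1])

# The machine satisfies its specification (it found the maximal proxy score)
# while violating the stakeholder goal: a requirements error, not a coding error.
assert chosen_by_machine != preferred_by_stakeholders
```

No amount of better optimisation fixes this: only a better objective function, i.e. better requirements, can close the gap (and, as argued above, only partially).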

AI for Requirements Engineering

AI could also be used to support, and perhaps even automate, the requirements engineering process. This is an idea that has been explored in various forms since the beginning of requirements engineering, more than 30 years ago. The recent breakthroughs in machine learning and natural language processing have renewed and increased interest in this idea. Examples of the use of AI in requirements engineering include:

  • tools for analysing the quality of requirements sentences, for example to flag ambiguous words using pre-defined rules and, more recently, to analyse sentence quality using machine learning;
  • tools for analysing user feedback, such as that found in app reviews or on social networks, notably for classifying app reviews and discovering requirements-related information in their contents;
  • process mining tools that apply machine learning to process data (event logs) to analyse existing workflows, revealing bottlenecks and other areas for improvement.
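The first kind of tool above, rule-based flagging of ambiguous words, can be sketched in a few lines. The word list here is a small invented sample; real tools use carefully curated lexicons and more sophisticated matching:

```python
# Minimal sketch of a rule-based ambiguity checker for requirements
# sentences. AMBIGUOUS_TERMS is a tiny illustrative sample, not a
# curated lexicon.
import re

AMBIGUOUS_TERMS = {
    "fast", "user-friendly", "appropriate", "adequate",
    "as soon as possible", "flexible", "robust", "easy",
}

def flag_ambiguities(requirement):
    """Return the ambiguous terms found in a requirement sentence."""
    text = requirement.lower()
    return sorted(t for t in AMBIGUOUS_TERMS
                  if re.search(r"\b" + re.escape(t) + r"\b", text))

flag_ambiguities("The system shall respond fast and be user-friendly.")
# → ['fast', 'user-friendly']
```

A flagged term is a prompt to replace vagueness with something measurable, for example "respond fast" becoming "respond within 2 seconds".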

There are also various prototype tools that use formal reasoning techniques (a form of AI) to derive machine requirements from stakeholder goals, or to infer machine requirements from scenarios. People have also started exploring the use of generative AI tools (like ChatGPT) to write requirements and requirements models.

Although the real-world applications of AI in requirements engineering are still marginal, major improvements and wider adoption might be around the corner. The latest research in this area will be covered in Part II.

Requirements in Software Engineering with AI

Requirements engineering is also affected by the increasing use of AI in software development, notably for automated code generation, testing, debugging and optimisation.

As software development tasks become increasingly automated, software engineers will gradually spend less time writing code by hand and more time instructing an AI system how to generate, debug, and optimize the code automatically. To do so, software engineers will need to be able to formulate suitable functional requirements and objective functions for code generation and improvement. What formats the requirements and objective functions will take remains to be seen. They might be expressed in formal languages, in natural language, or a combination of both. The requirements might be specified as general properties, as a collection of scenarios, or more likely as a combination of general properties and scenarios. However instructions are given to the AI system, the ability to analyse conflicting stakeholder goals and to write suitable functional requirements and objective functions is likely to become an increasingly critical skill for software engineers.

In the near future, once most coding activities have been successfully automated, requirements engineering might be one of the few parts of your current software engineering education that remains relevant 😉.

6.7 Notes and Further Readings

The description of requirements in client projects is partly based on Chapter 9 of Jeremy Dick, Elizabeth Hull and Ken Jackson’s requirements engineering book (Dick, Hull, and Jackson 2017). The book includes more details about requirements engineering activities for the client and provider organisations.

The description of requirements in product development is partly based on Henrik Kniberg’s video on “Agile Product Ownership in a Nutshell”. Requirements engineering in product development is also described in Chapter 9 of (Dick, Hull, and Jackson 2017).

Social responsibility is an important concern for software engineers. Martin Fowler gives an excellent talk on the topic, reminding us that we are “Not just code monkeys”. The ACM and IEEE codes of ethics are important references (Gotterbarn et al. 2001). Although they can be hard to enforce and are vague on guidance, they provide essential baselines of professional conduct for all software engineers. Ronald Howard and Clinton Korver recommend extending such baselines with a personal code of ethics (Howard, Korver, and Birchard 2008). Writing a personal code of ethics can help us clarify our ethical thoughts and values ahead of time, before being caught in the heat of a tricky situation or before we start violating our principles without even realising it. Kevin Ryan’s ethical exercises for software engineers are a great way to start thinking about our personal values and tradeoffs (Ryan 2020). The paper also describes some of the ways we can push back on unethical practices in our jobs. Yuriy Brun and Alexandra Meliou introduced the topic of software fairness to the software engineering research community (Brun and Meliou 2018). The paper presents many examples of fairness issues in software systems and shows how dealing with fairness affects the various stages of the software development process. Important research is also being conducted on the topic of values in software engineering (Ferrario et al. 2016; Hussain et al. 2020).

The roles and impacts of AI in requirements engineering are the topic of many recent papers. Nadia Nahar and colleagues have helpfully compiled and summarized the results of 50 studies describing the challenges most commonly reported by software engineers and data scientists building AI systems with ML components (Nahar et al. 2023). Boris Scharinger and co-authors describe how requirements engineering plays a crucial role in addressing many of these challenges (Scharinger et al. 2022). Iason Gabriel presents a clear and in-depth discussion of multiple perspectives on the AI value alignment problem and potential approaches to address it (Gabriel 2020).