KANBrief 3/22

Conflicting values: a challenge in the design of AI systems

The challenges arising during the development of systems that use artificial intelligence are not only technical in nature. Several economic and social values, which may in some cases conflict with safety requirements, also come into play. The ETTO principle highlights potential conflicts and shows that these values need to be carefully balanced if artificial intelligence is to be successfully established and accepted by society.

The EU, originally established as an organization for enhancing economic development, has become a political community of 27 member states. It represents the European values of human dignity, freedom, equality, democracy, human rights and the rule of law. It has also assumed a role as one of the most influential international institutions – one that regards assuring safety as a key public interest. The EU Machinery Directive 2006/42/EC has become an influential means for securing the safety of products. Evaluations have shown that the Directive is serving its purpose, but that the rapid development of digital products and AI applications has created a need to complement it with additional measures.

Market stimuli and the public good – a potential conflict?

Attempts to formulate regulations that help boost the economy while at the same time safeguarding European values reveal the conflicts and discrepancies between important values. The consultations currently taking place regarding a European AI Regulation, which is to promote AI “made in Europe”, are a good example. According to EU documents, however, the potential conflicts between commercial, political and social values are often illusory, as the protection of citizens’ rights is intended to serve as a competitive asset on the global market. Such a statement may nevertheless indicate a propensity for wishful thinking. Where economic interests conflict with the public good and core social values, regulatory measures or reconciliation of the interests of the stakeholders concerned can be beneficial. The use of regulation as a means of negotiating between various interests and important values may engender protests and suspicion. Some manufacturers would prefer recommendations and self-assessment tools to binding regulation and national legislation. The public may regard directives as a hindrance to easy access to and use of products and services: for a typical Internet user, for example, the most tangible effect of the General Data Protection Regulation may have been to make surfing the Internet and using different applications more cumbersome.

Emerging technologies are a source of both high hopes and growing worries. In the current situation, the risk-based approach adopted in the EU to ensure both safety and the protection of its citizens’ fundamental rights seems more warranted than ever. Awareness of the risks is a first step, but it must be complemented by ways of negotiating between diverse, possibly conflicting values. This is not an easy task in the world of AI, where products and services change and develop as they are continuously updated, and where the borderline between products and services is often blurred.

The ETTO principle

The precautionary principle protects against unnecessary hype, but it can also foster conceptual soundness and application of the reality principle during the design and development of new products and services. Erik Hollnagel, a well-known safety scientist, has developed a simple tool for this purpose: the ETTO (Efficiency-Thoroughness Trade-Off) principle. The ETTO principle is motivated by the fact that any human action, whether individual or collective, is constrained by scarcity. Time, information, materials, tools, energy and labour are rarely available in abundance. However, people usually manage their tasks by adjusting their actions to the prevailing conditions. In doing so, Hollnagel says, they follow the ETTO principle.

Thoroughness requires planning, which by necessity postpones commencement of the task: the time spent on preparations reduces the time allocated to performance of the task itself. Realizing efficiency, for its part, implies minimizing the resources required to achieve an intended objective. Efficient functioning often requires at least some level of systematic planning, as it is impossible to be efficient without first being thorough.

The ETTO principle reveals that, in any activity, the attention given to thoroughness and to efficiency involves a trade-off. Investing in thoroughness reduces efficiency, and vice versa. Concentrating on just one of these values is not an option, as it is not possible to complete any activity without both. The rational outcome of the trade-off depends on the priority assigned to each of the values associated with the task. Although efficiency and thoroughness cannot both be maximized at the same time, each can be used to boost the other.

Usability versus safety

The relationship between thoroughness and efficiency resembles the relationship between usability and safety. Both are essential design values; it appears impossible, however, to maximize both simultaneously, since ensuring safety often makes a product more difficult to use. The tension between thoroughness and efficiency, and between safety and usability, must be negotiated with a view to what level of risk is acceptable and for how long a person can sustain their activity. The greater the risks associated with failure and mismanagement, the more important thoroughness and safety become.

The ETTO principle does not provide us with a tool for finding easy solutions to the trade-offs to be made between various design values and fundamental European values. Rather, its usefulness lies in the inherent paradoxes that it reveals. Many features of AI are great assets and, at the same time, deep vulnerabilities. We face choices, recognizing that pursuing some values often involves jeopardizing others. The planned AI Regulation is intended to support the Machinery Regulation with regard to artificial intelligence. Particularly where AI systems are complex and lack transparency, legislation and standardization face the challenge of making the right trade-offs.

Jaana Hallamaa, Professor of Social Ethics, University of Helsinki
jaana.hallamaa@helsinki.fi