KANBrief 1/21

Is product safety compatible with complex artificial intelligence?

Where the behaviour of systems cannot be predicted, defining requirements for them presents legislators with a challenge

No universally accepted definition of artificial intelligence exists. It is clear however that the various methods of artificial intelligence are intended to support human beings in reaching decisions – or even to take these decisions out of their hands. It remains unresolved in what cases, and subject to what criteria, it is permissible for decisions with a bearing upon safety to be taken automatically by methods of artificial intelligence, or under their influence.

The risks presented by a product must be assessed and reduced to an acceptable level before the product is made available on the market. The directives and regulations of the European Single Market specify the high level of protection to be observed. Where products and work equipment lie outside this harmonized scope, they are subject to national regulations.

Under the hierarchy of protective measures, a product should be designed such that hazards cannot arise in the first place. Where this is not feasible, protective equipment must reduce the risks until only acceptable residual risks remain. Finally, users must be informed of these residual risks. Where control systems execute the safety functions of a product, they play a significant role in this concept.

It is crucially important that manufacturers are able to assess the risks presented by their products. This is precisely where the problem would lie if, for example, a control system supported by machine learning were to be relied upon to prevent people from being endangered by the moving parts of a machine. (In machine learning, computers learn a task from data rather than being explicitly programmed to perform it or trained with rules that are comprehensible to human beings.) The designers of systems based on the more complex methods of artificial intelligence, such as machine learning with neural networks, have as yet been unable to explain satisfactorily, even after the event, why their systems behaved in a certain way.
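To make this contrast concrete, the following Python sketch juxtaposes an explicitly programmed stop rule with a small learned model. It is purely illustrative and not taken from the article or any cited standard; all function names, thresholds and training data are invented assumptions.

# Hypothetical sketch: an auditable rule versus a learned model for the same
# safety decision. All values and names are invented for illustration only.
import numpy as np

# --- Explicitly programmed rule: every condition is readable and auditable. ---
def rule_based_stop(distance_m: float, speed_m_s: float) -> bool:
    """Stop the machine if a person is closer than the stopping distance."""
    stopping_distance = 0.5 + 0.8 * speed_m_s   # fixed, documented formula
    return distance_m < stopping_distance

# --- Learned model: the behaviour is encoded in numeric weights, not rules. ---
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0], [3.0, 2.0], size=(2000, 2))    # (distance, speed)
y = (X[:, 0] < 0.5 + 0.8 * X[:, 1]).astype(float)          # synthetic labels

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)            # one hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
for _ in range(3000):                                       # plain gradient descent
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))                    # predicted stop probability
    grad_out = (p - y[:, None]) / len(X)                    # logistic-loss gradient
    grad_h = grad_out @ W2.T * (1 - h ** 2)                 # backpropagation
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(axis=0)

def learned_stop(distance_m: float, speed_m_s: float) -> bool:
    h = np.tanh(np.array([distance_m, speed_m_s]) @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return p.item() > 0.5

# The two functions usually agree, but only the first can be justified line by
# line; the second offers nothing beyond weight matrices, which is the
# explainability gap described above.
print(rule_based_stop(1.0, 1.0), learned_stop(1.0, 1.0))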

Safety technology in unknown territory

More complex methods of artificial intelligence now enable systems to take decisions automatically. These may include decisions that have a bearing upon safety. The technical principles and assumptions upon which conventional safety technology is based were not designed for application in such cases, however. For this reason, research is currently being conducted into evaluation methods. The results are intended to be prepared as soon as possible for consideration in standardization activity (for example the ISO/TR 5469 project, “Artificial intelligence – Functional safety and AI systems”, in ISO/IEC JTC 1/SC 42/WG 3). The goal is to determine how artificial intelligence may be used, if at all, in the context of safety-related systems.

One strategy that can be used to reliably demonstrate the safety of highly complex systems is to define “arguments” that use inductive reasoning to obtain strong circumstantial evidence (though not absolute proof). This strategy has long been used for very complex technologies, for example in nuclear technology or aeronautics and aerospace, and also to determine whether software is suitable for safety-related use.
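As a purely illustrative aid, not drawn from any of the standards or regulations mentioned here, such an “argument” might be recorded as a structured hierarchy of claims and supporting evidence. The following Python sketch uses invented class and field names.

# Hypothetical sketch of a structured safety argument: a claim is treated as
# supported if it has evidence and all of its sub-claims are supported, which
# yields confidence rather than absolute proof.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    text: str                                             # the assertion argued for
    evidence: List[str] = field(default_factory=list)     # circumstantial support
    sub_claims: List["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        return bool(self.evidence) and all(c.supported() for c in self.sub_claims)

top = Claim(
    "The ML-based stop function is acceptably safe",
    evidence=["hazard analysis report"],
    sub_claims=[
        Claim("Training data covers the intended operating conditions",
              evidence=["data specification", "coverage tests"]),
        Claim("Runtime monitoring detects out-of-distribution inputs",
              evidence=["monitor test log"]),
    ],
)
print(top.supported())   # True here, but only as strong circumstantial evidence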

Attempts are now being made to use such approaches, which tend to have their origins in the field of risk management, to create catalogues of criteria for an acceptable level of risk that can also be applied to methods of artificial intelligence. These criteria may concern specification and modelling, explainability and accountability of decisions, transferability to different situations, verification and validation of the system, monitoring during runtime, human-machine interaction, process assurance and certification, and also safety-related ethics and data security. The European Parliament’s call for an EU regulation on ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies is similar in its purpose; here, the Parliament is proposing such criteria for assessing conformity.

Under an approach of this kind, safety is defined primarily not by verifiable product properties, but by verifiable process criteria. However, in order to attain a level of safety approximating that embodied in European product safety legislation and in the basic principle of prevention at the workplace, the criteria for the aforementioned “arguments” would first have to be shown to be complete and reliable. Strictly speaking, therefore, even regulations governing the framework and basic requirements for this purpose cannot be set out until the assumptions on which they are based have been reliably proven.

Initial regulatory approaches

The recently published ISO/TR 22100-5:2021-01 (Safety of machinery – Relationship with ISO 12100 – Part 5: Implications of artificial intelligence machine learning) attempts to set out the limits within which machine learning could be embedded in a machine control system in accordance with legislation and standardization in their current form. The European Commission is currently presenting proposals for revision of the Machinery Directive 2006/42/EC and for a regulation governing artificial intelligence, both of which contain legally binding framework conditions for the use of artificial intelligence.

These framework conditions must contain complete, clear and verifiable requirements setting out in what cases, and subject to what criteria, safety-related decisions taken by a system may be influenced or automated by methods of artificial intelligence. Whether this has been achieved must now be determined by the experts.

Corrado Mattiuzzo
mattiuzzo@kan.de