ISO and IEC are currently working on a technical report that for the first time will lay down principles for development and scrutiny of AI-based functions relevant to safety.
Artificial intelligence (AI) is currently the subject of particular attention, since it is regarded as a key technology of the future and already underpins numerous technical innovations.
The importance of this technology has also been recognized by the European Commission, which presented the first draft of a new regulation on the use of artificial intelligence in April 2021. Once this regulation enters into force, there will be a major need for its requirements to be supported by international standards.
In the area of functional safety, artificial intelligence has not yet been addressed adequately, if at all. For example, the generic functional safety standard IEC 61508¹ contains no usable information on addressing artificial intelligence in the context of functional safety, nor is any provision made for this aspect to be considered during the standard's current revision.
One approach to eliminating this deficit is currently being developed by the ISO/IEC JTC 1 SC 42 WG3 working group in conjunction with experts in the IEC SC65A working group responsible for IEC 61508. These parties are jointly developing the Technical Report ISO/IEC TR 5469, Artificial intelligence – Functional safety and AI systems. The Technical Report is already planned to serve as the basis for further normative documents such as technical specifications, which lends it particular importance. Publication is currently expected in mid-2022.
The goal of the Technical Report is to promote awareness of the characteristics, safety risk factors, available methods and potential limitations of artificial intelligence. This is intended to equip developers of safety-related systems to make appropriate use of artificial intelligence in safety functions. The document also aims to provide information on the challenges involved in assuring the safety of systems employing artificial intelligence, and on concepts for addressing them.
To this end, Section 5 of TR 5469 provides an initial overview of the relationships between functional safety and artificial intelligence technologies. Section 6 then attempts to provide a qualitative overview of different safety risk levels of AI systems. Assessment of these levels is based upon a combination of AI technology classes and various usage levels.
Usage levels differ according to their possible influence upon the safety function. Systems in which artificial intelligence is used within a safety function itself are regarded as highly critical, for example, whereas the use of artificial intelligence during the development of a safety function is considered less so. No consideration is given in this context, however, to the actual risk emanating from the system as a whole and its application.
Moreover, classification under the second evaluation criterion, the AI technology class, is based solely on compliance with existing or future functional safety standards. Opinions on this criterion differ, since the very fact that current functional safety standards do not yet address artificial intelligence is itself the subject of this Technical Report. Assignment to the AI technology classes is not based on the particular features of the technology concerned; indeed, these features play no role whatsoever here.
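The two-criterion assessment described above can be pictured as a simple matrix that combines a usage level with a technology class to yield a qualitative risk rating. The following Python sketch is purely illustrative: the level names, class names and scoring rule are assumptions for the purpose of demonstration, not the actual taxonomy or method defined in ISO/IEC TR 5469.

```python
from enum import Enum


class UsageLevel(Enum):
    # Illustrative levels: AI inside the safety function itself is treated
    # as most critical, AI used only during development of the safety
    # function as less critical (values are assumed weights, not from the TR).
    AI_IN_SAFETY_FUNCTION = 3
    AI_IN_DEVELOPMENT = 2
    AI_OUTSIDE_SAFETY_FUNCTION = 1


class TechnologyClass(Enum):
    # Illustrative classes keyed to the criterion the report uses:
    # whether existing or future functional safety standards can be met.
    COMPLIANT_WITH_EXISTING_STANDARDS = 1
    COMPLIANT_WITH_FUTURE_STANDARDS = 2
    NOT_COVERED_BY_STANDARDS = 3


def qualitative_risk(usage: UsageLevel, tech: TechnologyClass) -> str:
    """Combine both criteria into a coarse qualitative rating (assumed rule)."""
    score = usage.value * tech.value
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"


# AI within the safety function, technology not covered by any standard:
print(qualitative_risk(UsageLevel.AI_IN_SAFETY_FUNCTION,
                       TechnologyClass.NOT_COVERED_BY_STANDARDS))  # high
```

Note that, as criticized above, such a scheme rates the role of the AI and its standards coverage, but not the actual risk posed by the overall system in its application.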
In this context, Section 8 could serve as a more effective tool for assessing different AI technology classes and the risks to which they give rise. It addresses not only the topic of safe and trustworthy use of AI systems, but also the specific characteristics of modern AI systems, and presents the risks and challenges posed by them. For example, it is difficult to fully evaluate a system based on deep learning, since the high complexity of such a system prevents it from being described in full. Possible solutions to these challenges and risks, involving suitable verification and validation measures, processes and methods, and also measures for scrutiny and for risk reduction, are considered in Sections 9, 10 and 11. A method for the use of AI technology in safety-related systems that are not suitable for the application of existing functional safety standards is also presented in Section 7.
Altogether, ISO/IEC TR 5469 already provides a wealth of information on the use of artificial intelligence in the context of functional safety within the scope of IEC 61508. In particular, the risks and risk-mitigation methods specific to AI that are presented by the report make a valuable contribution to discussion in this area. However, other concepts still necessitate critical discussion. A separate sector-specific report for automotive applications is currently in preparation.
¹ IEC 61508 series of standards: Functional safety of electrical/electronic/programmable electronic safety-related systems
Institute for Occupational Safety and Health of the DGUV (IFA)