Around 130 experts from the areas of occupational safety and health, research, standardization and regulation met on 20 October at the 7th EUROSHNET conference in Paris to discuss the challenges presented by artificial intelligence for occupational safety and health.
Artificial intelligence is already being used in numerous areas. These include transport and logistics, the industrial sector, agriculture, healthcare, human resources and insurance. What is still lacking, however, is a clear definition of artificial intelligence. Raja Chatila, Professor Emeritus of artificial intelligence, robotics and IT ethics at Sorbonne University in Paris, made the case for a definition broad enough to cover all current and future AI systems. At the same time, he pointed out that AI must be defined narrowly enough for specific requirements for such systems to be formulated. What AI applications have in common is that they process large volumes of data and use statistical models to draw conclusions from them. However, AI recognizes neither the quality nor the context of the data, and is often a “black box” whose decision-making processes human beings are unable to grasp.
What are the characteristics of good AI?
For artificial intelligence to meet with acceptance and be used responsibly, it must be trustworthy. The European Commission’s High-Level Expert Group on AI has drawn up key requirements for trustworthy AI. These requirements include human beings remaining in control, systems being transparent, technically robust and secure, data protection being assured, discrimination and systematic errors being eliminated, and legal accountability being clarified. Raja Chatila further pointed out that AI cannot be considered in isolation, but must always be seen in the context of its application, i.e. the system in which it is used.
Using impressive examples, André Steimers, Professor at Koblenz University of Applied Sciences, showed how easily AI can reach the wrong conclusions. This may be due to the data being outdated or unrepresentative. In some cases, however, it may be very difficult or even impossible for humans to grasp why such errors arise. This raises questions regarding the reliability of a system and what level of automation is permissible, particularly in safety-critical scenarios.
Sebastian Hallensleben, Chairman of the CEN-CENELEC Joint Technical Committee on Artificial Intelligence, brought home the important contribution that standardization can make to the trustworthiness of AI. He pointed out the need for an approach that is practicable for industry, regulators and consumers alike, and that makes the various aspects comprehensible. One conceivable solution is a standardized label, similar to that for the energy efficiency of electrical appliances. Such a label would show at a glance what level of transparency, comprehensibility, data protection, fairness and reliability an AI product provides.
The need for a regulatory framework
For AI to be used safely, it is imperative that European regulation should keep pace with technological developments. Victoria Piedrafita, who holds responsibility for the proposed Machinery Regulation at the European Commission’s Directorate-General GROW, explained how the proposal addresses AI and interacts with the AI Regulation. For example, all AI applications impacting upon safety-related functions are to be assigned to the highest risk category, for which certification by a notified body is mandatory. Attention must also be paid to hazards that arise only after the machines have been placed on the market, as a result of the machines developing further autonomously. If this aspect is not considered, the machines must not be placed on the market, as safety has top priority.
At present, it remains unclear to what extent the planned AI Regulation will apply to areas of application that impact upon the safety and health of workers at work, or to issues of collective bargaining autonomy. Antonio Aloisi of IE University Law School in Madrid showed that algorithms now support humans in many management tasks, and in some cases replace them altogether. Algorithms evaluate curricula vitae, issue work instructions, measure employees’ performance and may even influence employee dismissals. However, as Aloisi pointed out, these developments are not yet sufficiently addressed by legislation, collective agreements or risk assessments. These regulatory loopholes must be closed urgently. Several papers also highlighted the importance of ensuring that the data are appropriate and balanced for the problem at hand. Automated decisions may otherwise be biased against certain groups of people owing to their gender, age or skin colour.
How strict does regulation need to be?
In the concluding panel discussion, Isabelle Schömann (European Trade Union Confederation) cautioned against allowing AI applications to be introduced on a trial-and-error basis. European legislation clearly states that unsafe products are unacceptable. Jörg Firnkorn (DEKRA) advocated moderation: in his view, both over-regulation and under-regulation should be avoided; a calculated risk also opens up the opportunity to learn from mistakes and improve the technology. Franck Gambelli (French employers’ association UIMM) drew a parallel with the increasing use of robots 30 years ago, which likewise initially raised serious concerns that ultimately did not materialize. Gambelli considers it important that standardization offer practicable tools for implementation. Christoph Preusse (German Social Accident Insurance Institution for the woodworking and metalworking industries/BGHM) pointed out that the activities of other countries are also relevant to Europe; China and the USA, for example, are seeking to develop international standards that will also impact upon issues of workplace organization. Companies with an international focus will not be willing to differentiate between regions and modify their products accordingly.
Action rather than reaction
“Prevention means proactivity. As occupational safety and health experts, we can’t afford to wait and see what happens, and then react,” was how EUROSHNET Chair Pilar Cáceres Armendáriz of INSST, the Spanish Occupational Safety and Health Institute, summed up the situation in her concluding remarks. In her view, an important contribution of the conference was therefore that it had brought the various stakeholders into dialogue with one another in order to learn from each other and, together, explore how artificial intelligence could best be addressed in legislation and occupational safety and health.