KANBrief 4/19

Questioning occupational safety and health in the age of AI

Automated decision-making is becoming increasingly accepted. Machine learning now allows management to make decisions about workers at a more granular level than ever before, based on comprehensive information preselected by algorithms. Given the cutting-edge nature of the technologies involved, it is important to examine both the occupational safety and health issues that arise and the benefits for workers today.

The term ‘artificial intelligence’ (AI) came into being in the 1950s, at an academic conference where scientists set out to make a machine behave in ways that would seem intelligent if a human ‘were so behaving’. ‘Intelligence’ at this time was linked to the use of language, the formation of concepts, and a machine’s ability to improve itself, as well as to solve problems originally ‘reserved for humans’ (McCarthy, J., Minsky, M. L., Rochester, N., Shannon, C. E., 1955, ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’).

AI research was at first largely experimental and focussed on the invention of robots; over time it became linked to so-called neural networks and computational processing power. The growing capacity of computer memory and the increasing sophistication of algorithms now promise better AI, and AI tools and applications are increasingly being incorporated into social and institutional practices, from medicine to welfare, and into workplaces.

AI at the workplace

The integration of AI applications opens up a number of possibilities for workplace progress and productivity growth. However, important occupational safety and health (OSH) questions also arise as AI is integrated into workplaces. Stress, discrimination (e.g. owing to ethnic and/or gender bias), heightened precariousness, possible work intensification and job losses, and even musculoskeletal disorders have already been shown to pose risks, including psychosocial risks, in digitalised workplaces (see “OSH and the future of work”). These risks are exacerbated when AI augments existing technological tools that were not conceived for this purpose, or when it is introduced uncritically into workplace management and design.

Experts from European OSH authorities have stated that the collection of worker data for decision-making in AI-augmented analytics, tools and applications is one of the most urgent issues in workplaces today. Practitioners, however, are often simply not aware of the possible uses of such management tools. OSH risks such as worker stress and job losses (e.g. driven by automated human resource management) arise when AI-augmented technologies are implemented without appropriate consultation, training or communication.

A topic for standardization

In response to some of these issues, a committee within the International Organization for Standardization (ISO/TC 260) has been working since 2018 on a standard addressing the use of dashboards (information screens displaying company metrics, including OSH metrics, for managers) and of metrics generated with AI tools in workplaces. The standard includes provisions on gathering and using data from workers, and on how dashboards on which data can be viewed and used should be set up. Data-gathering tools are of increasing interest, particularly to multinational companies. Homogeneous, standardized data based on metrics is essential for the functioning of AI tools.

Representatives of the manufacturers of the software used to standardise data are active in these ISO discussions. Since metrics on OSH measures and the uncritical use of AI tools can have a considerable impact on occupational safety and health, representatives of practitioners and the social partners should also be involved.

International standards can be an effective way of ensuring that the benefits of AI tools are realised. For this to work, international corporate practices must be equivalent at some level, and data must be standardisable. Practitioners must be involved in the discussions and implementation processes so that those processes are humane as well as functional (cf. Rolf Jaeger, European Industrial Relations Intercultural Communication and Negotiation).

Assoc Prof Dr Phoebe V Moore
University of Leicester / Social Science Center Berlin (WZB)
Pm358@leicester.ac.uk