The impact of AI is likely to be great, since it permits a revolution in the workplace. In the long term, AI may indeed take on much of the effort that is currently performed by people, including professional work that requires special training and up to now has been beyond the ability of machines. In the short to medium term, though, AI tools will need to be taught to do the work that they will assume in the future.
Machines do not learn by themselves, despite what the phrase ‘machine learning’ (ML) implies. Learning requires training, and it is humans who train the machines, or who at least provide the foundations for learning. Training AI tools is conceptually simple: ML systems are provided with ‘marked-up’ examples that illustrate how the phenomena being sought are defined. These examples must include many borderline cases that the human trainers mark up as good or bad, enabling the ML system to develop criteria for the selection it will perform on future data. Often these criteria are not the ones the human trainers would have used, but the outcome is typically the same: the AI makes the same choices as humans would, and can therefore replace humans in decision-making.
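The training process described above can be sketched in miniature. In this illustration (the feature values, labels and threshold-search rule are invented for the example, not any specific ML system), human trainers label examples as good or bad, and the machine derives its own decision criterion from them:

```python
# A minimal sketch of supervised learning from human-labelled examples.
# All values and labels below are illustrative assumptions.

def train_threshold(examples):
    """Find the cut-off that best separates human-labelled examples.

    examples: list of (feature_value, label) pairs, label True = 'good'.
    Returns the threshold that misclassifies the fewest examples,
    treating values >= threshold as 'good'.
    """
    candidates = sorted(value for value, _ in examples)
    best_threshold, best_errors = candidates[0], len(examples) + 1
    for t in candidates:
        errors = sum((value >= t) != label for value, label in examples)
        if errors < best_errors:
            best_threshold, best_errors = t, errors
    return best_threshold

# Human trainers mark up examples, including borderline cases.
labelled = [(0.2, False), (0.4, False), (0.45, False),
            (0.55, True), (0.6, True), (0.9, True)]

threshold = train_threshold(labelled)   # the criterion the machine derived itself
print(threshold)                        # 0.55
print(0.7 >= threshold)                 # classify a new, unseen case: True
```

Note that the learned cut-off of 0.55 is simply whatever best fits the labelled examples; a human trainer might have drawn the line elsewhere, yet the resulting decisions coincide.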
Many thousands of instances are required, however, even in a simple case – and many more for safety-relevant issues. Training AI to recognise different human faces is conceptually straightforward: the key dimensions are the distances between landmarks such as the irises, the tip of the nose and the upper lip. To make recognition independent of image scale, these distances are compared as ratios, and the landmarks themselves are located from identified edges, such as the points on the silhouette of a cheek or chin. Recognising these ‘edges’ has taken years of learning – shadows alter their appearance, and strands of hair obscure them. For professional work, the learning tasks will be even greater and more difficult to undertake.
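The ratio idea can be shown concretely. In this hedged sketch (the landmark coordinates are invented for illustration, not taken from any real system), distances between facial landmarks are turned into ratios, so the same face yields the same signature regardless of image scale:

```python
# Illustrative sketch: scale-invariant ratios from facial landmark distances.
# Landmark coordinates below are invented for the example.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def signature(landmarks):
    """Return scale-invariant ratios built from three key distances."""
    eye_span = distance(landmarks["left_iris"], landmarks["right_iris"])
    nose_to_lip = distance(landmarks["nose_tip"], landmarks["upper_lip"])
    eye_to_nose = distance(landmarks["left_iris"], landmarks["nose_tip"])
    return (nose_to_lip / eye_span, eye_to_nose / eye_span)

face = {"left_iris": (30.0, 40.0), "right_iris": (70.0, 40.0),
        "nose_tip": (50.0, 60.0), "upper_lip": (50.0, 75.0)}
# The same face photographed at twice the scale...
face_2x = {k: (x * 2, y * 2) for k, (x, y) in face.items()}

print(signature(face) == signature(face_2x))  # ratios unchanged: True
```

The hard part, as the text notes, is not this arithmetic but locating the landmarks reliably in the first place – which is where the thousands of labelled training instances go.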
The training of AI will not make for interesting work – even if the phenomenon in question has interesting properties, as is often the case with the material that professionals deal with. However interesting a phenomenon may be initially, it will become dull if it is observed and ‘labelled’ hundreds if not thousands of times. For facial recognition, much of the training has been done by graduate students in computer science departments, and the cost was therefore low. But when it comes to identifying issues in, say, legal corpora, or safety-relevant aspects, the training of AI will be done by individuals qualified in the field in question – lawyers or engineers in this example – and their time is not cheap.
Motivation will also be an issue: the dullness of the work will seem all the more dispiriting when its purpose is to put the trainers out of work. At least, this is what is implied in most narratives concerning the impact of AI – and indeed this is what many professionals will themselves fear.
This need not be the outcome, however: not because professionals will resist undertaking this training or, out of mischief, perform it so badly that the training never ends, but because of the enormous cost of AI tools and licences – a factor that is frequently underestimated.
These investments are so high that one way of obtaining an adequate return is to expand the role of AI. Paradoxically, however, this will require more training, which takes time and thus adds to the labour and licensing costs of implementing AI. As a result, individuals who imagined they would be training the AI tools to replace themselves may well find themselves in a self-perpetuating loop: servants of AI, with no idea when their servitude will end.
The AI revolution may not lead to professional unemployment, but to transformations in the experience of being a professional and what the corresponding work entails. If they are to fulfil their tasks successfully in a future world of work, OSH authorities need to face these complex and heterogeneous developments.
Institute for Social Futures,
Lancaster University, UK