Here we are, nearly halfway through Day 2 of the CIPD Annual Conference and Exhibition. In one of my pre-event blogs I asked: “Is this conference relevant to an employment lawyer? Is an employment lawyer relevant to this conference?”
I had a fantastic day yesterday attending the opening keynote speech (“Trust and why it is key for success”) and subsequent breakout sessions on diversity and inclusion, and on keeping human connections in the digital age. Everything I’ve seen and heard so far has application to my work as a collaborative and resolution-focused employment lawyer, and I’ll share my thoughts on why that is in a subsequent blog.
This morning I attended a panel discussion entitled: “Will automation and artificial intelligence help or hinder good people management?”
A paraphrased quote from the book “Hello World”, by Hannah Fry, was shared with us early on. In essence: the concerns people currently have (and which take up disproportionate column inches in the media) about evil AI (Artificial Intelligence) taking over and ruling the world are about as relevant today as worrying about overcrowding on Mars.
We heard from Cheryl Allen, HR Director – Transformation at IT services company Atos, who shared that the HR team has a digital colleague, FREDA (“Friendly Robotic Engineer Delivering Advantage”), who takes on many calculation and reporting tasks, freeing up the human beings to add value elsewhere, in places where human interaction is absolutely crucial. And this, for me, is the fundamental point. It’s not the fact that increasingly clever technology exists; it’s the application of it that can cause issues in the workplace. The decisions about how and where artificial intelligence is used in a business are made by human beings.
Employment lawyers hear many negative stories about automation and how it is used inappropriately to the detriment of employees. It’s been 15 years since the first stories filtered through of mass redundancies being announced by text message. We also hear of women on maternity leave being removed from IT systems because a long period without logging in triggers an automated leavers’ process. There are more, and more shocking, examples I could share. But the point is that somebody somewhere (or several people) thought that this use of automation was the right thing to do.
Human beings are amazing and wonderful, but nevertheless fundamentally flawed. Concerns about giving “control” to computers can overlook the point that human bias creates countless problems at work.
Consider human beings in the recruitment process filtering CVs based on presumptions about the candidates – as in this recent example, where the candidate “Adam Henton” received three times as many interviews as “Mohamed Allam”, despite having identical qualifications and experience. An automated review of the qualifications and experience of candidates in a recruitment process, provided the parameters are set correctly, could help to prevent discriminatory practices in recruitment.
A solution described by Andy Spence on this morning’s panel makes a similar point. There is a body of research suggesting that particular wording in recruitment advertisements appeals differently to men and women, and Textio can be used to reduce that bias in recruitment (there’s a case study on their site detailing how Johnson & Johnson added 90,000 women to their science and technology pipeline through the use of this technology!).
In short, human judgment is not better than artificial intelligence “judgment”. Both have their upsides and downsides; the trick is realising the limitations of each. Making decisions about the livelihoods and futures of human beings is a massive responsibility. However it is done, organisations need to take it seriously and use whatever means are appropriate to achieve their objectives in a fair and non-discriminatory way.
I believe that great outcomes can be achieved by a careful blend of automated and human decision making – what do you think?