AI: Stripping the ‘Human’ out of Human Resources and Employment Law?

Artificial Intelligence (AI) and Automated Decision-Making (ADM), often used to increase the speed and efficiency of decision-making at work, have been on companies’ radar for some time now. Whilst many companies still view such technological advances as something for the future, both AI and ADM are already present in many workplaces… and that has trade unions worried.

Concerns about how AI and ADM may adversely affect the employment relationship seem to be growing steadily, and the latest reports by the Trades Union Congress (TUC) have highlighted the issue as one of concern moving forward. Linked below, the reports analyse the impact of such technology on the employment relationship and reach several stark conclusions. This bulletin will highlight three of the many areas the reports focus on: i) discrimination, ii) the right to privacy, and iii) protecting employees from irrational and unfair decisions.

Since the outbreak of the COVID-19 pandemic, AI and ADM have become increasingly prevalent in businesses for a variety of reasons. We have already seen uses including assisting employers in managing their employees’ homeworking activities; monitoring employees’ workflow; and assisting employers in the recruitment process where there are large volumes of applicants.

Discrimination

How could the adoption of AI and ADM result in an increased risk of discrimination in the employment context? Let’s take the recruitment process as an example (of particular interest to me, as I worked in recruitment prior to joining BPE). Programmes are increasingly being used to speed up recruitment by carrying out an initial sift of applicants. In principle this makes sense, given the large number of applicants for each role in the current economic climate. However, issues can arise when deciding which criteria the sifting tool should filter on. For example, a sift tool screening applications for a particular qualification may not take into account equivalent qualifications gained outside the UK, which could lead to foreign-born applicants not making the cut for no reason other than the tool’s decision-making.
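To make that risk concrete, here is a minimal, purely illustrative sketch in Python of the kind of blunt, exact-match filtering described above. The qualification names, applicant records and the naive_sift function are all hypothetical and are not drawn from any real recruitment tool; the point is simply that a rigid criterion silently excludes broadly equivalent overseas qualifications.

```python
# Purely illustrative: a naive sift rule that matches qualification names
# as literal strings. "LLB (Hons)" and the broadly equivalent overseas
# "Juris Doctor" are hypothetical examples, not taken from any real tool.

REQUIRED_QUALIFICATION = "LLB (Hons)"  # assumed filter criterion

applicants = [
    {"name": "Applicant A", "qualification": "LLB (Hons)"},
    {"name": "Applicant B", "qualification": "Juris Doctor"},  # studied outside the UK
]

def naive_sift(applicant: dict) -> bool:
    # Rejects anything that is not an exact string match, so an equivalent
    # qualification gained abroad never makes it past the first sift.
    return applicant["qualification"] == REQUIRED_QUALIFICATION

shortlisted = [a["name"] for a in applicants if naive_sift(a)]
print(shortlisted)  # ['Applicant A'] - Applicant B is filtered out automatically
```

However the criterion is actually implemented, the broader point stands: filter rules of this kind need to be tested against equivalent qualifications and kept under human review.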

Moreover, at the video interview stage, on what criteria are candidates’ responses to AI-driven video interviews rated? Candidates’ expressions and the way they speak are influenced by a vast range of factors. A sensitive and significant factor which may influence a candidate’s expression is the existence of a disability: a protected characteristic under the Equality Act 2010. AI algorithms which are biased against such individuals, scoring them poorly because of mannerisms linked to a disability, will undoubtedly leave the interviewing company susceptible to a discrimination claim.

To balance the interests of both the employer and employee going forward, the TUC report suggests that “all actors in the ‘value chain’ leading to the implementation of AI and ADM in the workplace are liable for discrimination, subject to a reasonable steps defence”. Such a defence already exists by virtue of s.109(4) Equality Act 2010 in relation to the vicarious liability of employers; it would therefore need to be extended to actors in the value chain, i.e. those who contribute to the development of the programmes.

The Right to Privacy

Whilst the report suggests Article 8 of the European Convention on Human Rights (ECHR) adequately protects employees’ right to privacy from invasive AI and ADM, there is currently “inadequate legally binding guidance to employers explaining when Article 8 rights are infringed by the use of AI-powered technology and how, practically speaking, the Article 8 balancing exercise is to be resolved”. Without such guidance, it is difficult for employers to weigh their business interest in monitoring employees’ work efficiency against employees’ right to privacy. Striking the wrong balance, i.e. through excessive monitoring, may ultimately undermine the mutual trust and confidence on which the employment relationship is predicated.

Throughout the COVID-19 outbreak we have seen a shift to working from home for a large number of workers following the stay-at-home message. The right to privacy has not been considered in great detail by employers, who have had to adapt to a new way of working very quickly over the last year. In practice, and through media reporting, we have heard genuine horror stories of employers requiring access to webcams or using tools like Sneek to take regular images to ensure a worker is really carrying out work and not topping up their tan in the back garden. Such intrusive practices are one of the reasons unions are raising concerns.

Protecting Employees from Irrational or Unfair Decisions

Under the current unfair dismissal legislation, employees who qualify for protection under s.97 and s.98 ERA 1996 may still be protected from decisions to dismiss based on incorrectly generated AI or ADM information, depending on the circumstances of each case. Additionally, the TUC report points out that Article 5(1)(d) GDPR can also be used to protect both employees and workers where decisions have been made using inaccurate data, because under that provision personal data being processed ‘must be accurate’.

Whilst that is true, effective legal protection is not currently available to employees who do not qualify for unfair dismissal rights because they lack two years’ continuity of employment, although one can envisage automatically unfair dismissal claims being brought in certain circumstances. Clearly, more needs to be done in this area.

Concluding Thoughts

Whilst there is not much that can be done to prevent the integration and adoption of AI and ADM in the workplace from becoming the norm, given the commercial advantages they bring, there is much that can be done to ensure that fair decisions are reached. Hopefully, by flagging the problematic and discriminatory outcomes that can arise from AI-powered processes, programmers can ensure AI and ADM achieve fair, just, and accurate outcomes for employees and workers. How workers and employees feel about their employer’s increasingly watchful eye is up for debate, but what is not up for debate is the need to ensure that AI is integrated and incorporated in a way which respects workers’ and employees’ dignity and privacy.

What does this mean for you or your business?

As noted above, it is important that the integration of AI or ADM into your workplace does not tarnish the mutual respect between employer and employee. Commercial and legal considerations should be taken into account at the planning stage, with consideration given to privacy, automated processing and any potential negative publicity that may arise from the introduction of AI in the workplace.

What do you need to be doing now?

If AI and ADM tools are incorporated into your business, this should be done openly rather than covertly, and careful thought should be given to their design and programming to prevent discrimination and breaches of privacy in their application.

Recommended Reading

Technology managing people - The worker experience

Technology Managing People - the legal implications

European Commission's consultation paper


These notes have been prepared for the purpose of articles only. They should not be regarded as a substitute for taking legal advice.
