Robin Turnbull
- Partner
In today’s fast-paced work environment, the rapid integration of artificial intelligence (AI) represents a transformative shift in how businesses operate. On the one hand, it creates unprecedented opportunities; on the other, it carries significant risks.
Most of our existing employment law and workplace policies were not created with such technological advancements in mind – and that’s going to present unique challenges going forward for employers and employees.
While the EU has implemented an AI Act, the UK government has no immediate plans to legislate to regulate AI, opting instead for a non-statutory approach that relies on existing regulators to oversee the use of AI.
In the meantime, the Trades Union Congress has published a draft Bill designed to regulate the use of AI systems by employers – the Artificial Intelligence (Employment and Regulation) Bill.
The Bill primarily focuses on regulating “high-risk” decision-making, defined broadly as decision-making with the capacity or potential to produce legal effects concerning a worker, employee or jobseeker, or other similarly significant effects. This could cover many crucial stages of the employment lifecycle, such as hiring, firing and performance assessment.
What could the Bill mean in practice?
Practically, the Bill mandates extensive measures before an employer can implement a high-risk AI system. Employers must consult with unions, conduct comprehensive Workplace AI Risk Assessments covering health and safety, equality, data protection and human rights, and transparently communicate the functionality of AI systems to employees through a register.
Additionally, employees would be entitled to request human reconsideration of automated high-risk decisions and a personalised statement explaining how they may be affected by AI decisions. A notable provision of the Bill is its proposal to shift the burden of proof in cases of AI-based discrimination under the Equality Act 2010. By making it easier for affected individuals to bring claims, this measure is designed to address concerns about hidden biases and prejudices within AI algorithms.
If an AI system itself is discriminatory, an employer could face multiple claims for unlimited compensation. Once one supplier’s software is found to be discriminatory, every employer who uses that software could face similar claims. That risk applies not just to employers, but also to service providers and those in education, making it one of the most significant risks arising from AI. (For other risks, such as data breaches and how to prepare for one, see our other article.)
Employers and service providers may wish to exercise caution in deploying AI decision-making systems, both to avoid unintended discriminatory consequences and to help mitigate potential legal and reputational risks.
An AI tool is only as good as the data it is fed. For example, if a dismissal decision is based on the analysis of an AI tool that has relied on inaccurate data, it will not be reasonable for the employer to rely on that assessment to support the dismissal. If the same analysis were used in a mass redundancy exercise, there could be multiple unfair dismissal claims.
Without clear guidelines, there is a danger that those using generative AI might input offensive, discriminatory or inappropriate content as part of a prompt. There is also a risk that the outputs from the AI might contain discriminatory or biased content because of the inherent biases in the data used to train the AI.
The important takeaway is that employers and service providers should not use AI blindly. There are various steps that can help mitigate the risk even before an AI system is integrated. These could include seeking reassurances from, or asking further questions of, the AI provider; using contractual warranties and indemnities to guard against a claim; updating policies and training; and undertaking an AI risk assessment or audit. Lawyers may be involved in an audit process so that any initial assessments of risk are protected by legal privilege.
As the legal landscape continues to evolve in response to AI advancements, it remains crucial for employers to stay abreast of developments and proactively adapt their practices to align with emerging regulations and industry standards. Proactive measures can help organisations navigate this transformative era responsibly and minimise the risks associated with AI integration.
Whether organisations and the law will manage to keep pace with AI’s rapid progress remains to be seen.
If this has affected your organisation, get in touch with Robin Turnbull or your regular Anderson Strathern contact for tailored employment law advice.