Employment law is historically rooted in maintaining a fair hiring process for all workers. For example, the Statute of Labourers 1351 required labourers to work for the same wages as before the Black Death, aiming to “control how much people [labourers] were paid and conserve a social structure” (Oak National Academy). Since then, employment law has consistently sought to protect fairness and prevent individuals from exploiting systems for personal gain.
Even 674 years later, the same principle remains relevant. Therefore, employment law should also regulate AI systems to prevent unfair hiring and workplace management practices.
Since AI is a relatively new technology, legal precedents are scarce worldwide and the UK has no explicit AI legislation. In effect, this leaves AI unchecked in workplaces, particularly in hiring processes. A clear example of the risks is Amazon’s failed recruitment system, built in 2014. Trained on the CVs of existing employees, the system reproduced gender bias because Amazon’s workforce had a 6:4 male-to-female ratio. As a result, the AI favoured male applicants, reinforcing discrimination instead of eliminating it. This demonstrates how easily AI, left unregulated, can perpetuate harmful biases.
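To illustrate the mechanism (not Amazon’s actual system, which was never published), the short sketch below trains a simple model on synthetic, deliberately biased historical hiring data; the feature names, ratios, and numbers are invented for illustration only. Even though gender is never given to the model as an input, it learns to penalise a CV keyword that merely correlates with being female.

```python
# Toy illustration only: a model trained on historically skewed hiring
# outcomes learns to penalise a feature that proxies for gender,
# even when skill is held constant. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic "historical" applicant pool: ~60/40 male-to-female,
# with past recruiters hiring men at a higher rate for the same skill.
is_female = rng.random(n) < 0.4
skill = rng.normal(0, 1, n)
womens_keyword = is_female & (rng.random(n) < 0.7)  # e.g. "women's chess club" on the CV
hire_prob = 1 / (1 + np.exp(-(skill - 1.0 * is_female)))  # biased past decisions
hired = rng.random(n) < hire_prob

# Train only on "neutral-looking" CV features; gender itself is never an input.
X = np.column_stack([skill, womens_keyword.astype(float)])
model = LogisticRegression().fit(X, hired)

print(f"weight on skill:            {model.coef_[0][0]:+.2f}")
print(f"weight on 'women's' keyword: {model.coef_[0][1]:+.2f}")
# Typical output: the keyword weight is negative, i.e. the model has
# absorbed the historical bias and would downgrade otherwise identical CVs.
```

The point of the sketch is simply that nothing in the training process corrects for past discrimination: the model optimises for agreement with biased historical decisions, so it reproduces them.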
Amazon was transparent about its failed system, but there is no guarantee that other companies would do the same. AI hiring systems risk becoming “black boxes,” where decision-making is hidden. This issue surfaced when the Department for Work and Pensions (DWP) used AI in disability assessments. The claimant, known as Ben, challenged the lack of transparency, and the High Court supported his case. Although the AI did not directly violate the Equality Act 2010, the case established a precedent for accountability and the right to transparency in AI-driven decisions.
Many AI-related employment cases focus on automated CV sorting, but the risks go further. Microsoft’s chatbot Tay (2016), for example, quickly began promoting hate speech, conspiracy theories, and offensive remarks despite being designed by expert engineers. This raises a key concern: if AI systems created by leading companies can fail so drastically, how can workplace AI systems be trusted to manage people safely?
Consider a scenario in which an AI management system orders an employee to do something irresponsible, negligent, or even illegal. Who would be liable? The AI itself cannot be held responsible. Should liability fall on the company that developed the AI, even if the harm was unintended? Or on the innocent employee acting under orders? Applying Lady Hale’s CBC criteria from the Barclays Bank case (2020) shows that key requirements for vicarious liability are not satisfied. This grey area highlights the urgent need for employment law to regulate AI in workplaces.
A recurring theme is the lack of AI-specific legislation, with cases often relying on equality or general employment law instead. Some argue that this is sufficient and that new laws would be unnecessary. However, in a society where AI is rapidly influencing work and daily life, specific legislation is crucial: it ensures fairness, prevents abuse, and establishes clear precedents.
The UK government has stated that AI-specific laws are “not right for today,” arguing that regulation could burden business and stifle innovation. However, with job security already strained by the cost-of-living crisis, the uncertainty surrounding AI only increases public anxiety. Legislation here is not a cosmetic accessory; it is essential to maintaining trust, fairness, and equality in the workplace.
The European Union has already moved in this direction: the Artificial Intelligence Act, proposed by the European Commission in April 2021, was adopted in 2024 and its provisions are taking effect in stages from 2025. This could provide a model for the UK to follow.
Ultimately, governments, businesses, and individuals alike acknowledge that AI laws are inevitable. With AI-related cases increasing, employment law must regulate AI in hiring and workplace management to preserve fairness, equality, and social structure for all.