
By Rachel Potter, Senior Associate, and Vanessa Jacklin-Levin, Partner, Bowmans, South Africa

The recent US decision of Mobley v Workday signals what may lie ahead for South Africa as AI software is increasingly used to make key decisions affecting South Africans’ fundamental rights. The current absence of legislation or regulation in the country does not mean that companies creating, selling and using AI software in South Africa are immune to legal action.


In Mobley v Workday the United States District Court for the Northern District of California ruled that AI service providers, like Workday, could be directly liable for employment discrimination under federal anti-discrimination laws.

Derek Mobley, an African American male over 40 with anxiety and depression, alleged that Workday’s AI-powered hiring tools discriminated against him and other job applicants based on race, age, and disability. He had applied for over 100 positions through companies using Workday’s platform since 2017 and was consistently rejected.

The ruling is not final and merely allows the case to move to the discovery phase, where both parties will gather more evidence. This is similar to the exception phase or the class action certification phase in the South African context – the court rules that there is a legal basis for liability, but the plaintiff still has to prove a claim on the facts against the specific defendant in question.

In a nutshell: AI vendors as ‘agents’ of employers

Workday provides a broad suite of ‘human resource management services’, including providing its customers with a platform on the customer’s website to collect, process, and screen job applications.


The software in dispute in this case is an algorithmic decision-making tool used for applicant screening in the hiring process.

Mobley claimed that the tools provided by Workday disproportionately rejected applicants who were African American, over 40 years old, or disabled. He claimed that the automated nature of the rejections, often occurring at very late or early hours, indicated that the decisions were made by Workday’s AI-driven tools rather than human evaluators.

The Court accepted the argument that AI vendors/suppliers can be considered ‘agents’ of employers and could therefore fall within the definition of an ‘employer’ under the relevant employment discrimination legislation. This means that, if an AI tool used by an employer discriminates against job applicants, the vendor/supplier providing that tool could be held directly responsible.

The Court emphasised that Workday’s software was not merely following employers’ instructions: it qualified as an agent because its tools are alleged to perform a traditional hiring function, rejecting candidates at the screening stage and recommending whom to advance to subsequent stages, through the use of artificial intelligence and machine learning.

In the South African context

While there is no law regulating AI in South Africa, the Department of Communications and Digital Technologies released an Artificial Intelligence Policy Framework in August 2024. The Policy Framework is broad, and the country is still far from implementing AI legislation and regulations.

One of the strategic pillars of South Africa’s AI policy laid out in the Policy Framework is fairness and mitigating bias. This includes human control of technology (a human-centred approach to AI systems); human-in-the-loop systems (ensuring critical AI decisions involve human oversight, especially in generative AI); and decision-making frameworks (developing frameworks for AI decision-making that prioritise human judgment).

South Africa, and indeed the world, is alive to the importance of human oversight in critical decision-making functions of AI, particularly those that impact fundamental human rights.

In the EU, lawmakers signed the Artificial Intelligence Act in June 2024, and it entered into force in August 2024. The AI Act adopts a risk-based approach, classifying AI systems into several risk categories to which different degrees of regulation apply.

Outright prohibited AI practices include biometric categorisation systems that infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (except for lawful labelling or filtering for law-enforcement purposes), and AI systems that evaluate or classify individuals or groups based on social behaviour or personal characteristics, where this leads to detrimental or unfavourable treatment in contexts unrelated to those in which the data was generated, or to treatment that is unjustified or disproportionate to their behaviour.

As noted above, the current absence of legislation or regulation in South Africa does not mean that companies creating, selling and using AI software in the country are immune to legal action. Our courts are entitled to develop the law so as to apply the rights contained in the Bill of Rights to individuals and companies, not only to the state and public entities, where legislation does not adequately protect those rights.

Individuals may seek the protection of the courts where their rights to equality, dignity, privacy, housing, healthcare, food, water and social security, among others, are infringed by decision-making left to AI software, with or without human oversight.

One approach may be to extend the law of vicarious liability (under which an employer is strictly liable for the negligent or intentional conduct of its employees acting in the course and scope of their employment) or the law of agency, to hold companies that use or sell AI software liable for the decisions, acts or omissions of the AI software that they use.

Amid all the uncertainty in this space, one thing is certain: litigation is coming.

 
