
Mazur Judgment Highlights Accountability Challenges of AI in Legal Practice

The recent Court of Appeal ruling in Mazur v Charles Russell Speechlys clarifies the boundaries of delegation in litigation but raises pressing questions about the role of artificial intelligence in legal workflows. While the law places responsibility squarely with authorised individuals, the evolving use of AI makes genuine supervision and control harder to maintain. The ruling underscores that lawyers cannot delegate away their duty of oversight, a principle now being tested by rapid advances in AI tools affecting legal service delivery.

By Dania Martine | 16 May 2026
Scales of justice held by robotic hand symbolising AI accountability in law

The Court of Appeal’s judgment in Mazur v Charles Russell Speechlys confirms that only authorised individuals are entitled to conduct litigation, emphasising that delegation within the legal profession must be accompanied by genuine oversight, management and control.

At issue was the involvement of a senior employee conducting significant litigation tasks without a practising certificate. The High Court initially held that supervision alone did not equate to authorisation. However, the Court of Appeal endorsed a more nuanced position: unauthorised staff may carry out such work provided the authorised legal professional maintains ultimate responsibility and implements proper arrangements for delegation and supervision.

This decision foregrounds a fundamental tension as legal workflows become increasingly disaggregated and technologically enhanced. Tasks like drafting, research and document review are often shared among paralegals, trainees, external providers and AI systems. Yet while responsibility under the Legal Services Act 2007 cannot be delegated, the pace and complexity of modern workflows pose significant challenges to meaningful human control.

AI’s growing role in legal practice complicates this dynamic further. The legislation does not currently define where AI ceases to be a mere tool and begins to take substantive decisions in litigation. If AI systems autonomously determine procedural steps or generate pleadings with limited meaningful human oversight, the concern arises that the named professional may be positioned more as a symbolic signatory than a true decision-maker.

Court rulings have already reflected this concern, holding lawyers accountable for AI-generated errors, regardless of the source of the mistake. The obligation extends beyond supervision to include verification, ensuring outputs are accurate before being presented to the court. This reflects a strict professional responsibility standard intended to preserve accountability in an evolving technological environment.

For regulated professionals, particularly those governed by the Solicitors Regulation Authority (SRA), the judgment signals the need for updated guidance setting out the standards for supervision and control in AI-enhanced workflows. Such guidance will be critical to ensuring firms implement appropriate checks, keep records, and clearly delineate oversight responsibilities, in line with the regulatory framework.

It is increasingly clear that AI literacy is now a core competence within legal practice: not merely a technical skill, but a prerequisite for exercising professional judgment. Lawyers must understand how AI systems work, where they fail, and how to validate their outputs if they are to meet their duties effectively.

Failure to align legal responsibility with operational control carries risks. If workflows become fragmented without corresponding supervisory mechanisms, authorised professionals risk being positioned as “fuses” absorbing liability for processes they do not fully control or understand. This structural weakness underscores the importance of designing legal workflows that preserve the meaningful exercise of professional responsibility.

In summary, the Mazur ruling reaffirms that legal responsibility cannot be abdicated despite technological advances. Only authorised individuals may conduct litigation, and they must genuinely direct and control the work, including that performed or facilitated by AI. The challenge moving forward will be to translate this principle into practical frameworks that ensure accountability is preserved as legal services continue to adapt to AI and digital tools.

The courts, regulators and legal professionals must collaborate to embed these standards within evolving practice environments. Properly implemented, AI has the potential to enhance legal service delivery without undermining core principles of accountability and responsibility.