AI in the Courtroom: The Dangers of Using ChatGPT in Legal Practice in South Africa

A court case in South Africa made headlines in January 2025 for all the wrong reasons. The legal team in Mavundla v MEC: Ministry of Cooperative Government and Traditional Affairs KwaZulu-Natal and others had relied on case law that simply didn’t exist. It had been generated by ChatGPT, a generative artificial intelligence (AI) chatbot developed by OpenAI.

Only two of the nine cases the legal team cited in its submissions to the high court were genuine. The rest were “hallucinations” fabricated by the AI. The court called this conduct “irresponsible and unprofessional” and referred the matter for investigation to the Legal Practice Council, the statutory body that regulates lawyers in South Africa.

It was not the first time South African courts have had to deal with such an incident. Parker v Forsyth in 2023 also involved fake case law generated by ChatGPT. But the judge was more lenient in that case, finding there had been no intention to mislead the court. The Mavundla ruling marks a turning point: courts are losing patience with lawyers who use AI irresponsibly.

We are legal scholars who have researched the increasing use of AI, especially generative AI, in legal research and teaching. While these technologies offer powerful tools to increase efficiency and productivity, they also pose serious risks when used irresponsibly.

Aspiring lawyers who misuse AI tools without proper guidance or an ethical foundation risk serious professional consequences even before they begin their careers. Law schools should equip their students with the skills and judgment they need to use AI tools responsibly. But most institutions are still unprepared for the speed at which AI is being adopted.

Very few universities have formal policies or training on AI. Students have no guide through this rapidly evolving terrain. Our work calls for a proactive and structured approach to AI education in law schools.

When technology becomes a burden

The lawyer in the Mavundla case admitted that she had not checked the citations and had instead relied on the research of a junior colleague. This colleague, a trainee lawyer, claimed to have obtained the material through an online research tool. Although she denied using ChatGPT, the pattern was consistent with similar incidents around the world in which lawyers unknowingly filed AI-generated case law.

In the American case Park v. Kim (2024), the lawyer cited non-existent case law in her reply brief, which had been generated by ChatGPT. In the Canadian case Zhang v. Chen (2024), the lawyer filed a notice of motion containing two non-existent case authorities invented by ChatGPT.

The Mavundla court was clear: no matter how advanced technology becomes, lawyers remain responsible for ensuring that every source they cite is accurate. Work pressure or ignorance of the risks of AI is no defense.

The judge also criticized the supervising attorney for not checking the documents before submitting them. The episode underscored a broader ethical principle: Senior lawyers must properly train and supervise younger colleagues.

The lesson here goes far beyond one law firm. Integrity, accuracy and critical thinking are not optional extras in the legal profession. These are core values that must be taught and practiced from the very start of legal training.

The classroom is the first courtroom

The Mavundla case should serve as a warning to universities. If experienced legal practitioners can fall into AI traps when it comes to the law, so can students who are still learning to research and argue.

Generative AI tools like ChatGPT can be powerful allies: they can summarize cases, draft arguments and analyze complex texts in seconds. But they can also confidently fabricate information. Because AI models don’t always “know” when they are wrong, they produce text that looks authoritative but may be completely false.

There are two dangers for students. First, over-reliance on AI can hinder the development of essential research skills. Second, it can lead to serious academic or professional misconduct. A student who submits AI-generated content could face disciplinary action at university and reputational damage that follows them into their legal career.

In our article, we argue that law schools should teach students to use AI tools responsibly rather than banning them outright. This means developing “AI literacy”: the ability to question, verify and contextualize AI-generated information. Students should learn to treat AI systems as assistants, not as authorities.

In South African legal practice, authority traditionally refers to accepted sources such as statutes, precedents and scholarly commentary that lawyers use to support their arguments. These sources are accessed through established legal databases and law reports, a process that, although time-consuming, ensures accuracy, accountability and adherence to the rule of law.

From law schools to courtrooms

Legal educators can embed AI skills into existing courses on research methodology, professional ethics and legal writing. Exercises could include checking AI-generated summaries against real judgments or analyzing the ethical implications of relying on machine-generated arguments.

Teaching responsible AI use is not just about avoiding embarrassment in court. It is about protecting the integrity of the justice system itself. As seen in Mavundla, a trainee lawyer’s uncritical use of AI led to professional investigations, public scrutiny and reputational damage to the firm.

The financial risks are also real. Courts can order lawyers to pay costs out of their own pockets where serious professional misconduct has occurred. In the digital age, where court rulings and media reports are disseminated online within moments, a lawyer’s reputation can collapse overnight if he or she is found to have relied on fake or unverified AI material. It would also be helpful for courts to be trained in identifying fake AI-generated cases.

The way forward

Our study concludes that AI is here to stay, and so is its use in law. The challenge is not whether the legal profession should use AI, but how. Law schools have a critical opportunity and an ethical obligation to prepare future practitioners for a world in which technology and human judgment must work side by side.

Speed and convenience can never replace accuracy and integrity. As AI becomes a routine part of legal research, tomorrow’s lawyers will need to be trained not only to prompt, but also to think.
