
Artificial intelligence mustn’t be allowed to influence the decision-making process within the Canadian Federal Court

Canadian society is moving ever deeper into the digital age. Artificial intelligence (AI) technologies – such as the generative AI chatbot ChatGPT and the legal platform Harvey – are increasingly shaping judicial processes and legal systems, even in the deciding of complicated cases.

Like other parts of the world, Canada isn't immune to these evolving interfaces of AI technology and their impact on the administration of justice.

2024 is the first full year of implementation of Canada's new AI policy for the Federal Court. To date, not a single Supreme Court judge in Canada has said a decisive "no" to the use of AI in the courts.

The Federal Court's AI policy statement offered only a commitment that more "public consultations" were needed – without describing what that meant.

A delicate dance

Instead of halting the use of AI – as was recently done in British Columbia in a dispute over fake AI-generated cases – the Federal Court has embarked on a delicate dance. The focus has been to minimise the known risks of "automated decision-making" within the justice system, while capitalising on the potential for business efficiencies. This includes translating court texts, conducting legal research, performing administrative tasks, dealing with case management issues, supporting self-represented litigants and supporting alternative dispute resolution.

Under the Bangalore Principles of Judicial Conduct, this amounts to treading carefully in technological footsteps.

As these technologies become ubiquitous, a delicate question emerges from the shadows of the Federal Court's bench: is it even the court's task to decide such a critical matter, or should this be left to Parliament?

Global News reports on fake cases brought before the British Columbia court.

Instructions for using AI

The Federal Court's AI directive states that its intention is to "guide the potential use of AI by Members of the Court and their legal trainees."

But then it says: “The Court, through its Technology Committee, will begin investigating and testing possible uses of AI for internal administrative purposes.”

There is nothing merely "potential" about it – AI will actually be used by the Court, although not yet in formal adjudication. And the Chief Justice has delegated his own oversight functions to an unelected committee, thereby bypassing Parliament's role in legislating significant changes to the judicial process.

This matter must not be left to committees or to the sole authority of a single Chief Justice not elected by the Canadian people.

While the authors of the guideline state that they are merely exploring the potential uses of AI, the Federal Court also bluntly admits that AI "can save time and reduce the workload of judges and court staff, just as it does for lawyers."

To be fair, the court also acknowledged "the potential for AI to have a negative impact on judicial independence" and that "there is a risk that public confidence in the judiciary may be undermined by certain uses of AI."

However, the court does not provide any information on how it intends to implement and enforce control mechanisms – for example, over the use of ChatGPT itself.

Eliminating judicial review

Another federal initiative was launched during COVID-19 by the Treasury Board of Canada (TBOC). In this case, TBOC sought to ensure "responsible" use of automated decision-making to minimise risks to clients, federal institutions and Canadian society. This raised many questions among legal scholars about AI and its role in administrative decisions, even when machines replace a human decision-maker.

If used improperly, AI could undermine the role of Canadian judges and limit the courts' role in judicial review, although some believe that prospect is still distant.

The Federal Court has stated that it will "consult the relevant stakeholders before implementing AI." However, if the federal government is a stakeholder, the question arises as to what influence the executive may have on the operational policy of the judiciary.

Lack of research on the impact on courts

The Federal Court's AI policy suggests an alarming opening for machine learning to operate within a poorly structured policy that favours potential efficiencies over inherent risks. It also ignores the possibility of erasing legal diversity and entrenching cultural bias, such as displacing Indigenous legal customs and traditions in favour of Eurocentric legal norms and processes.

This raises further questions about how federal court policy will, over time, deal with issues of advancing machine learning and the physical and psychological relationships between judges, court staff, lawyers and machines – relationships that could ultimately pave the way for the removal of human judges from our courts.



While the intersections between AI and broader legal contexts are sadly under-researched, it is the duty of the legal profession to ensure that we are governed and heard by the people we entrust with our freedoms, not by the machines others build. Business efficiency has nothing to do with the true role of our courts – upholding the rule of law and protecting the Constitution.
