
The rise of the “machine defendant” – who’s guilty when an AI makes mistakes?

Few industries remain untouched by the transformative potential of artificial intelligence (AI) – or at least by the hype.

For businesses, the promise of the technology goes far beyond writing emails. It is already being used to automate a wide range of business processes and interactions, from coaching employees to helping doctors analyse medical data.

Competition between the various AI model developers – including OpenAI, Meta, Anthropic and Google – will continue to drive rapid improvements.

We can expect these systems to become more capable over time, which means we may entrust them with more and more responsibility.

The big question then is: what happens if something goes wrong? Who is ultimately liable for the decisions a machine makes?

My research has looked into this very issue. Worryingly, our current legal framework may not be up to the task.

We appear to have escaped catastrophe – so far

With any technological advancement, it is inevitable that things will go wrong. We have already seen this with the internet, which has brought enormous benefits to society but has also created a host of new problems – such as social media addiction, data breaches and the rise of cybercrime.

So far, it seems we have been spared a global internet disaster, but the CrowdStrike outage in July – which quickly brought businesses and many other services to a halt – was a timely reminder of how dependent we have become on technology and how quickly things can unravel in such an interconnected web.

The CrowdStrike outage in July showed how vulnerable our technology-driven global economy has become. Michael Dwyer/AP


Like the early internet, generative AI promises enormous benefits to society, but is likely to bring with it some significant and unpredictable drawbacks.

There has certainly been no shortage of warnings. In extreme scenarios, some experts believe out-of-control AI could pose a "nuclear-level" threat and a serious existential risk for humanity.

One of the most obvious risks is that "bad actors" – such as organised crime groups and rogue states – will use the technology to cause deliberate harm. This could include using deepfakes and other misinformation to influence elections, or committing mass cybercrime. We have already seen examples of such use.

Less dramatic, but still highly problematic, are the risks that arise when we entrust important tasks and responsibilities to AI, especially in the running of businesses and other essential services. It is certainly no exaggeration to imagine a future global technology failure caused by computer code written and deployed entirely by AI.

If these AIs make autonomous decisions that inadvertently cause harm – whether financial loss or actual injury – whom do we hold liable?

Our laws are not prepared

Troublingly, our existing theories of legal liability may not be suited to this new reality.

This is because, with the exception of some product liability laws, current theories generally require fault in the form of wilful intent, or at least provable negligence, on the part of a person.

AI systems exhibit "emergent" behaviours that often cannot be predicted by their developers. DC Studio/Shutterstock

For example, a claim based on negligence requires that the damage was reasonably foreseeable and actually caused by the conduct of the designer, manufacturer, seller or whoever else might be the defendant in a given case.

But as AI systems evolve and become more intelligent, they will almost certainly produce results that may not have been fully expected or anticipated by their manufacturers, developers and others.

This "emergent behaviour" could arise because the AI has become more intelligent than its creators. But it could also reflect self-protective, and in turn self-serving, drives or goals developed by advanced AI systems.

My own research aims to draw attention to a serious, looming problem in the assessment of liability.

In a hypothetical case where an AI has caused significant harm, its human and corporate creators may be able to shield themselves from criminal or civil liability.

They could do this by arguing that the damage was not reasonably foreseeable, or that the AI's unexpected actions broke the chain of causation between the manufacturer's conduct and the losses or injuries suffered by the victims.

These would be possible defences against both criminal and civil claims.

The same applies to the defence argument that what is known as the "fault element" of a crime – intention, knowledge, recklessness or negligence – on the part of the AI system's developer was not accompanied by the necessary "physical element", which in this case would have been carried out by a machine.

We must prepare now

Market forces are already driving developments in artificial intelligence at a rapid pace. Where exactly they will lead is less certain.

It may turn out that the common law we have today, developed by the courts, is adaptable enough to deal with these new problems. But it is also possible we will find that current laws fall short, which could add a sense of injustice to future disasters.

It is important to ensure that the companies that have benefited most from the development of artificial intelligence are also held accountable for the costs and consequences when things go wrong.

Preparing to resolve this issue should be a priority for the courts and governments of all nations, not just Australia.
