In October 2023, New York City Mayor Eric Adams announced an AI-powered chatbot, built in collaboration with Microsoft, to help business owners understand government regulations.
The project soon veered off track, dispensing illegal advice on sensitive questions about housing and consumer rights.
For example, when landlords asked whether they had to accept tenants with Section 8 vouchers, the chatbot advised that they could turn them away.
Under New York City law, discriminating against tenants based on their source of income is illegal, with very limited exceptions.
Upon examining the chatbot’s outputs, Rosalind Black, Citywide Housing Director at Legal Services NYC, found that it also advised landlords that they could lock out tenants. The chatbot claimed, “There are no restrictions on the amount of rent that you can charge a residential tenant.”
The chatbot’s flawed advice extended beyond housing. “Yes, you can make your restaurant cash-free,” it advised, contradicting a 2020 city law that requires businesses to accept cash so that customers without bank accounts are not discriminated against.
It also wrongly suggested that employers could take a cut of their workers’ tips, and gave misinformation about the rules for notifying staff of scheduling changes.
Black warned, “If this chatbot is not being done in a way that is responsible and accurate, it should be taken down.”
Andrew Rigie, Executive Director of the NYC Hospitality Alliance, cautioned that anyone following the chatbot’s advice could incur hefty legal liabilities. “AI can be a powerful tool to support small business… but it can also be a massive liability if it’s providing the wrong legal information,” Rigie said.
In response to mounting criticism, Leslie Brown of the NYC Office of Technology and Innovation framed the chatbot as a work in progress.
Brown asserted, “The city has been clear the chatbot is a pilot program and will improve, but has already provided thousands of people with timely, accurate answers.”
One has to question whether deploying a “work in progress” in such a sensitive area is a reasonable idea.
AI legal liabilities hit companies
AI chatbots can do many things, but providing legal advice isn’t one of them.
In February, Air Canada found itself at the center of a legal dispute over a misleading refund policy communicated by its AI chatbot.
Jake Moffatt, seeking clarity on the airline’s bereavement fare policy during a personal crisis, was wrongly told by the chatbot that he could secure a special discounted rate after booking. This contradicted the airline’s actual policy, which does not permit bereavement refunds after booking.
The ensuing legal battle culminated in Air Canada being ordered to honor the incorrect policy stated by the chatbot, and Moffatt received a refund.
AI has also gotten lawyers in trouble. Perhaps most notably, New York lawyer Steven A Schwartz used ChatGPT for legal research and inadvertently cited fabricated legal cases in a brief.
Given everything we know about AI hallucinations, relying on chatbots for legal advice isn’t advisable, no matter how trivial the matter may seem.