Clearview AI fined $33m for facial recognition image scraping

Clearview AI provides facial recognition services to law enforcement and other government agencies, but a Dutch data watchdog says its database of 30B images was built illegally.

The Dutch Data Protection Authority (DPA) says Clearview scraped images of individuals from the web and used them without their consent. European GDPR rules are fiercely protective of the privacy of EU residents, and the DPA has fined Clearview just over $33m for this alleged breach.

The DPA says that “Clearview has built an illegal database with billions of photos of faces, including of Dutch people. The Dutch DPA warns that using the services of Clearview is also prohibited.”

While EU regulators aren’t fans of AI-powered facial recognition, Clearview says it provides its services to the ‘good guys’ and benefits society. The company’s website says “Clearview AI’s investigative platform allows law enforcement to rapidly generate leads to help identify suspects, witnesses and victims to close cases faster and keep communities safe.”

The DPA doesn’t believe the ends justify the means. Dutch DPA chairman Aleid Wolfsen says, “Facial recognition is a highly intrusive technology that you cannot simply unleash on anyone in the world.

If there is a photo of you on the Internet – and doesn’t that apply to all of us? – then you can end up in the database of Clearview and be tracked. This is not a doom scenario from a scary film. Nor is it something that could only be done in China.”

In a statement sent to The Register, Jack Mulcaire, Chief Legal Officer at Clearview AI, said: “Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR. This decision is unlawful, devoid of due process and is unenforceable.”

This isn’t the first time EU regulators have taken action against Clearview, but the company is unlikely to change tack or pay up, as it operates outside the EU. Wolfsen says the DPA is looking for other ways to stop Clearview from using images of EU residents in its database.

“Such company cannot continue to violate the rights of Europeans and get away with it. Certainly not in this serious manner and on this massive scale. We are now going to investigate if we can hold the management of the company personally liable and fine them for directing those violations,” Wolfsen said.

The recent arrest of Telegram CEO Pavel Durov shows that this isn’t an idle threat, and it’s a problem that could affect directors of other AI firms besides Clearview.

If AI models are trained using illegally obtained data or used to breach EU rules, more tech bosses could find themselves held personally liable if they set foot in Europe. Elon Musk has already suggested in a recent X post that he might “limit movements” to countries where free speech is “constitutionally protected.”

The ease with which Grok creates controversial images, combined with loose content moderation on X, is unlikely to impress EU data watchdogs. Meta has also held back on releasing some of its AI products in the EU over potential rule breaches.

Last week, Uber was fined by the DPA for violating EU rules by sending the personal data of European taxi drivers to the United States.

Clearview and other US-based firms argue that they aren’t subject to GDPR rules, but their directors might want to reconsider their travel plans if they’re thinking of a European trip.
