EU ChatGPT Taskforce publishes report on data protection

The European Data Protection Board set up the ChatGPT task force a year ago to determine whether OpenAI's handling of personal data complies with the GDPR. A report with preliminary findings has now been published.

The EU is remarkably strict when it comes to handling its citizens' personal data. The GDPR explicitly states what companies can and cannot do with this data.

Do AI companies like OpenAI comply with these laws when using data to train and run their models? A year after the ChatGPT task force began its work, the short answer is: maybe, maybe not.

The report notes that these are preliminary results and that it is "not yet possible to provide a full description of the results".

The three key areas the task force examined were lawfulness, fairness, and accuracy.

Lawfulness

To create its models, OpenAI collected public data, filtered it, used it to train its models, and continues to train its models with user input. Is this legal in Europe?

OpenAI's web scraping inevitably collects personal data, and the GDPR says you may only use this data if there is a legitimate interest, taking into account people's reasonable expectations about how their data will be used.

OpenAI states that its models comply with Article 6(1)(f) of the GDPR, which says, among other things, that the use of personal data is lawful if "processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party."

The report states: “Provisions must be made to delete or anonymise personal data collected via web scraping before the training phase.”

OpenAI says it has put safeguards in place for personal data, but the task force points out that "the burden of proof of the effectiveness of such measures lies with OpenAI."
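To make the requirement concrete: the Python snippet below is a minimal, purely illustrative sketch of what deleting or anonymising scraped personal data before training could look like. It is not OpenAI's actual pipeline, and the patterns shown (email addresses and phone numbers) cover only a tiny fraction of what counts as personal data under the GDPR.

```python
import re

# Purely illustrative: regex-based redaction of two common
# personal-data patterns from scraped text before it enters a
# training corpus. Real anonymisation pipelines are far more
# involved (named-entity recognition, review, auditing); this
# only sketches the idea behind the task force's requirement.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_personal_data(text: str) -> str:
    """Replace matched personal-data patterns with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

scraped = "Contact Jane at jane.doe@example.com or +49 30 1234567."
print(redact_personal_data(scraped))
# -> Contact Jane at [EMAIL] or [PHONE].
```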

Fairness

When EU citizens interact with companies, they expect their personal data to be handled properly.

Is it fair that ChatGPT's terms and conditions contain a clause stating that users are responsible for their chat input? According to the GDPR, a company cannot shift responsibility for GDPR compliance onto the user.

The report states: "If ChatGPT is made available to the public, it should be assumed that individuals will sooner or later enter personal data. If those inputs then become part of the data model and are shared, for example, with anyone asking a specific question, OpenAI remains responsible for complying with the GDPR and should not argue that the input of certain personal data was prohibited in the first place."

The report concludes that OpenAI must be transparent and explicitly inform users that their input may be used for training purposes.

Accuracy

AI models hallucinate, and ChatGPT is no exception. If it doesn't know the answer, it sometimes just makes something up. If it provides false facts about individuals, ChatGPT violates the GDPR's requirement that personal data be accurate.

The report states: "End users are likely to take the outputs provided by ChatGPT, including information about individuals, as factually accurate, regardless of their actual accuracy."

Although ChatGPT warns users that it sometimes makes mistakes, according to the task force this is "not sufficient to comply with the data accuracy principle."

OpenAI is facing a lawsuit because ChatGPT repeatedly misstates the birth date of a well-known public figure.

In its defense, the company said that the problem cannot be fixed and that people should instead request that all references to them be removed from the model.

Last September, OpenAI established an Irish legal entity in Dublin, which is now regulated by the Irish Data Protection Commission (DPC), shielding the company from GDPR challenges by individual EU member states.

Will the ChatGPT task force make legally binding findings in its next report? Could OpenAI comply even if it wanted to?

In their current form, ChatGPT and other models may never be able to fully comply with privacy rules written before the advent of artificial intelligence.
