OpenAI's hunger for data raises privacy concerns

Last month, OpenAI came out against a yet-to-be-passed California law that would set basic safety standards for developers of large-scale artificial intelligence (AI) models. This was a change of stance for the company, whose CEO Sam Altman had previously spoken in support of AI regulation.

The former non-profit, which rose to prominence in 2022 with the release of ChatGPT, is now valued at up to US$150 billion. It remains at the forefront of AI development, with the release of a new "reasoning" model designed to handle more complex tasks.

The company has made several moves in recent months that suggest a growing appetite for data collection — not only the text or images used to train current generative AI tools, but potentially sensitive data related to online behavior, personal interactions and health.

There is no indication that OpenAI plans to merge these different data streams, but doing so would offer significant business advantages. The mere possibility of access to such comprehensive information raises significant questions about privacy and the ethical implications of centralized data control.

Media contracts

This year, OpenAI has signed multiple partnerships with media companies such as Time Magazine, the Financial Times, Axel Springer, Le Monde, Prisa Media and, most recently, Condé Nast, owner of publications such as Vogue, The New Yorker, Vanity Fair and Wired.

The partnerships give OpenAI access to large amounts of content. OpenAI's products can also be used to analyze user behavior and interaction metrics such as reading habits, preferences, and engagement patterns across platforms.

If OpenAI is given access to this data, the company could gain comprehensive insights into how users interact with various kinds of content. These insights could be used for detailed user profiling and tracking.

Video, biometrics and health

OpenAI has also invested in a webcam startup called Opal. The aim is to enhance the cameras with advanced AI features.

Video footage captured by AI-powered webcams could yield even more sensitive biometric data, such as facial expressions and inferred psychological states.

In July, OpenAI and Thrive Global launched Thrive AI Health. The company says it will use AI to "hyper-personalize and scale behavioral change" in health.

While Thrive AI Health promises to have "robust privacy and security safeguards," it is unclear what these will look like.

Previous AI healthcare projects have involved extensive sharing of personal data, such as a partnership between Microsoft and Providence Health in the US and another between Google DeepMind and the Royal Free London NHS Foundation Trust in the UK. In the latter case, DeepMind faced legal action over its use of private health data.

Sam Altman's eyeball scanning side project

Altman has also invested in other data-hungry companies, most notably a controversial cryptocurrency project called Worldcoin (which he co-founded). Worldcoin aims to create a global financial network and identification system using biometric identification, specifically iris scans.

The company claims it has already scanned the eyeballs of more than 6.5 million people in nearly 40 countries. More than a dozen jurisdictions have now either suspended the company's operations or are reviewing its data processing.

The Bavarian authorities are currently assessing whether Worldcoin complies with European data protection regulations. A negative ruling could result in the company being banned from doing business in Europe.

The main concerns under investigation include the collection and storage of sensitive biometric data.

Why does this matter?

Existing AI models such as OpenAI's flagship GPT-4o have largely been trained on publicly available data from the internet. However, future models will need more data — and it is becoming increasingly difficult to obtain.

Last year, OpenAI said its goal was to develop AI models that "comprehensively understand all subject areas, industries, cultures and languages," which would require "the broadest possible training dataset."

Against this backdrop, OpenAI's pursuit of media partnerships, its investments in biometric and medical data collection technologies, and its CEO's ties to controversial projects like Worldcoin paint a worrying picture.

By accessing vast amounts of user data, OpenAI is positioned to develop the next wave of AI models — but privacy could fall by the wayside.

The risks are manifold. Large collections of personal data are vulnerable to breaches and misuse, such as the MediSecure data breach in which almost half of all Australians had their personal and medical data stolen.

The potential for large-scale data consolidation also raises concerns about profiling and surveillance. Again, there is no indication that OpenAI currently plans such practices.

However, OpenAI's privacy record has so far been less than perfect. Technology companies more broadly also have a long history of questionable data practices.

It is not hard to imagine a scenario in which, by centrally controlling many types of data, OpenAI could exert significant influence over people in both personal and public spheres.

Will security take a back seat?

OpenAI's recent history does little to allay security and privacy concerns. In November 2023, Altman was temporarily ousted as CEO, allegedly due to internal conflicts over the company's strategic direction.

Altman is a strong advocate of the rapid commercialization and adoption of AI technologies, and has reportedly often prioritized growth and market penetration over safety measures.

Altman's removal from the role was brief, followed by a swift reinstatement and a major shakeup of OpenAI's board of directors, suggesting the company's leadership now endorses his aggressive approach to AI deployment despite the potential risks.

With this in mind, the implications of OpenAI's recent opposition to the California bill go beyond a single policy disagreement. The anti-regulation stance points to a troubling trend.
