This week in AI: OpenAI considers allowing AI porn

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of recent stories from the world of machine learning, along with notable research and experiments we didn't cover on our own.

By the way, TechCrunch is planning to launch an AI newsletter soon. Stay tuned. In the meantime, we're increasing the frequency of our semi-regular AI column, which previously ran about twice a month, to weekly – so keep an eye out for future editions.

This week in AI, OpenAI revealed that it is exploring how to generate AI porn "responsibly." Yes, you read that right. Announced in a document meant to pull back the curtain on its AI's instructions and gather feedback, OpenAI's proposed NSFW policy is intended to start a conversation about how and where the company might permit explicit images and text in its AI products, OpenAI said.

"We want to make sure that people have maximum control without violating the law or other people's rights," Joanne Jang, a member of OpenAI's product team, told NPR. "There are creative cases in which content involving sexuality or nudity is important to our users."

It's not the first time OpenAI has signaled a willingness to wade into controversial territory. Earlier this year, Mira Murati, the company's CTO, told The Wall Street Journal that she was "not sure" whether OpenAI would eventually allow its video generation tool Sora to be used to create adult content.

So what to make of this?

There is a future in which OpenAI opens the door to AI-generated porn and everything turns out… fine. I don't think Jang is wrong when she says that there are legitimate forms of adult artistic expression – expression that could be created with AI-powered tools.

But I'm not sure we can trust OpenAI – or any other generative AI vendor – to get it right.

First, consider creators' rights. OpenAI's models were trained on massive amounts of public web content, some of which is undoubtedly pornographic in nature. But OpenAI hasn't licensed all of that content, and until relatively recently it didn't even allow creators to opt out of training (and even then, only certain forms of training).

It's hard enough to make a living from adult content as it is. If OpenAI were to bring AI-generated porn into the mainstream, creators would face even stiffer competition – competition built on the back of those very creators' works, used without compensation.

The other problem, in my view, is the fallibility of current safeguards. OpenAI and its rivals have been refining their filtering and moderation tools for years. Yet users continue to find workarounds that let them misuse companies' AI models, apps and platforms.

Just in January, Microsoft was forced to make changes to its Designer image creation tool, which uses OpenAI models, after users found a way to create nude images of Taylor Swift. And on the text generation side, it's trivial to find chatbots built on supposedly "safe" models, like Anthropic's Claude 3, that readily spit out erotica.

AI has already enabled a new form of sexual abuse. Elementary and high school students are using AI-powered apps to "undress" photos of their classmates without their consent. And a 2021 survey conducted in the United Kingdom, New Zealand and Australia found that 14% of respondents aged 16 to 64 had been victimized by deepfake imagery.

New laws in the US and elsewhere aim to address this. But the jury is out on whether the justice system – a system that already struggles to prosecute most sex crimes – can regulate an industry as fast-moving as AI.

Frankly, it's hard to imagine an approach to AI-generated porn that OpenAI could take that isn't fraught with risk. Maybe OpenAI will reconsider its stance. Or maybe – against all odds – it will find a better way. Whatever the case, it seems we'll find out sooner rather than later.

Here are some other notable AI stories from recent days:

  • Apple's AI plans: Apple CEO Tim Cook shared a few details about the company's plans to advance AI during its earnings call with investors last week. Sarah has the full story.
  • Enterprise GenAI: The CEOs of Dropbox and Figma – Drew Houston and Dylan Field – have invested in Lamini, a startup building generative AI technology along with a generative AI hosting platform for enterprise organizations.
  • AI for customer service: Airbnb is rolling out a new feature that lets hosts opt in to AI-powered suggestions for replying to guests' questions, such as sending a guest a property's checkout guide.
  • Microsoft restricts AI use: Microsoft has reiterated its ban on US police departments using generative AI for facial recognition. Law enforcement agencies worldwide have also been banned from using facial recognition technology on body cameras and dash cams.
  • Money for the cloud: Alternative cloud providers like CoreWeave are raising hundreds of millions of dollars as the generative AI boom drives demand for affordable hardware to train and run models.
  • RAG has its limits: Hallucinations are a major problem for companies looking to integrate generative AI into their operations. Some vendors claim they can eliminate them using a technique called RAG, but those claims are greatly exaggerated.
  • Vogels' meeting summarizer: Amazon CTO Werner Vogels has open sourced a meeting summarization app called Distill. As you might expect, it relies heavily on Amazon's services.
