
OpenAI shut down the Ghibli craze – now users are turning to open source

When OpenAI released its latest image generator a couple of days ago, the company probably didn’t expect it to bring the web to its knees.

But that’s kind of what happened, as tens of millions of people rushed to transform their pets, selfies, and favorite memes into something that looked like it came straight out of a Studio Ghibli movie. All you needed to do was add a prompt like “in the style of Studio Ghibli.”

For anyone unfamiliar, Studio Ghibli is the legendary Japanese animation studio behind Spirited Away, Kiki’s Delivery Service, and Princess Mononoke.

Its soft, hand-drawn style and magical settings are immediately recognizable – and surprisingly easy to mimic using OpenAI’s latest model. Social media is full of anime versions of people’s cats, family portraits, and inside jokes.

It took many by surprise. Normally, OpenAI’s tools reject prompts that name an artist or designer, as honoring them would show, more or less unequivocally, that copyrighted imagery is rife in training datasets.

For a while, though, that didn’t seem to matter anymore. Even OpenAI CEO Sam Altman changed his own profile photo to a Ghibli-style image and posted on X:

can yall please chill on generating images that is insane our team needs sleep

At one point, over a million people had signed up for ChatGPT within an hour.

Then, quietly, it stopped working for many.

Users began to notice that prompts referencing Ghibli, or even attempts to describe the style more indirectly, no longer returned the same results.

Some prompts were rejected altogether. Others just produced generic art that looked nothing like what had been going viral the day before. Many now speculate that the model was updated, and that OpenAI had rolled out copyright restrictions behind the scenes.

OpenAI later said that, despite spurring on the trend, they were throttling Ghibli-style images by taking a “conservative approach,” refusing any attempt to create images in the style of a living artist.

This kind of thing isn’t new. It happened with DALL·E as well. A model launches with plenty of flexibility and loose guardrails, catches fire online, then gets quietly dialed back, often in response to legal concerns or policy updates.

The original version of DALL·E could do things that were later disabled. The same appears to be happening here.

One Reddit commenter explained:

“The problem is it actually goes like this: Closed model releases which is a lot better than anything we have now. Closed model gets heavily nerfed. Open source model comes out that’s getting near the nerfed version.”

OpenAI’s sudden retreat has left many users looking elsewhere, and some are turning to open-source models, such as Flux, developed by Black Forest Labs, a company founded by former Stability AI researchers.

Unlike OpenAI’s tools, Flux and other open-source text-to-image models don’t apply server-side restrictions (or at least, any restrictions are looser and limited to illicit or profane material). So they haven’t filtered out prompts referencing Ghibli-style imagery.

That freedom doesn’t mean open-source tools avoid ethical issues, of course. Models like Flux are often trained on the same kind of scraped data that fuels debates around style, consent, and copyright.

The difference is that they aren’t subject to corporate risk management – meaning the creative freedom is wider, but so is the grey area.
