
A new Chinese video generation model appears to censor politically sensitive topics

A powerful new AI model for video generation is now generally available – but there's a catch: the model apparently censors topics that the government of its country of origin, China, deems too politically sensitive.

The model, Kling, developed by Beijing-based company Kuaishou, launched earlier this year in waitlisted access for users with a Chinese phone number. Today, it's available to anyone willing to provide their email address. After signing up, users can enter prompts to have the model generate five-second videos of what they describe.

Kling works more or less as advertised. Its 720p videos, which take one to two minutes to generate, don't deviate too much from the prompts. And Kling appears to simulate physical phenomena, like rustling leaves and flowing water, about as well as rival video generation models such as AI startup Runway's Gen-3 and OpenAI's Sora.

But Kling outright won't generate clips about certain topics. Prompts such as "Democracy in China," "Chinese President Xi Jinping on the streets," and "Protests in Tiananmen Square" yield only a nonspecific error message.

Photo credits: Kuaishou

The filtering appears to happen only at the prompt level. Kling supports animating still images, and it will, for instance, readily generate a video from a portrait of Xi as long as the accompanying prompt doesn't mention Xi by name (e.g., "This man is giving a speech").
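To illustrate why that workaround succeeds, here is a minimal, hypothetical sketch of what prompt-level filtering could look like. The blocklist terms, function names, and error behavior below are our own assumptions for illustration, not Kuaishou's actual implementation; the point is simply that a check which inspects only the prompt text never looks at an uploaded image.

```python
# Hypothetical sketch of prompt-level filtering; not Kuaishou's actual code.
# A blocklist check like this inspects only the prompt text, so an uploaded
# image paired with a neutral prompt passes through unchallenged.
from typing import Optional

# Assumed blocklist terms, for illustration only.
BLOCKLIST = {"xi jinping", "tiananmen", "democracy in china"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocklisted phrase."""
    normalized = prompt.lower()
    return not any(term in normalized for term in BLOCKLIST)

def generate_video(prompt: str, image: Optional[bytes] = None) -> str:
    # Note: the image content is never inspected -- only the prompt text.
    if not is_prompt_allowed(prompt):
        raise RuntimeError("Error")  # nonspecific error, like Kling returns
    return "video.mp4"  # placeholder for actual generation

# "This man is giving a speech" passes even with a portrait of Xi attached.
assert is_prompt_allowed("This man is giving a speech")
assert not is_prompt_allowed("Protests in Tiananmen Square")
```

Under this (assumed) design, moderating the output would require analyzing the generated frames or the input image itself, which is far more expensive than matching strings in a prompt.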

We have asked Kuaishou for comment.

Photo credits: Kuaishou

Kling's odd behavior is likely the result of intense political pressure from the Chinese government on generative AI projects in the region.

Earlier this month, the Financial Times reported that AI models in China will be tested by China's top internet regulator, the Cyberspace Administration of China (CAC), to ensure that their responses on sensitive topics "embody core socialist values." According to the Financial Times report, the models will be benchmarked by CAC officials on their responses to a range of questions, many relating to Xi and criticism of the Communist Party.

The CAC has reportedly gone so far as to propose a blacklist of sources that can't be used to train AI models. Companies submitting models for review must prepare tens of thousands of questions to test whether the models produce "safe" answers.

The result’s AI systems that refuse to answer issues that might raise the ire of Chinese regulators. Last 12 months, the BBC wrote found that Ernie, the flagship AI chatbot of the Chinese company Baidu, was reticent and evasive when asked questions that could possibly be perceived as politically controversial, similar to “Is Xinjiang place?” or “Is Tibet place?”

These draconian policies threaten to slow China's AI progress. Not only do they require scouring training data to remove politically sensitive information, they also demand enormous development time to build ideological guardrails, guardrails that can still fail, as Kling illustrates.

From a user's perspective, China's AI regulations are already producing two classes of models: some hamstrung by intensive filtering, others decidedly less so. Is that really a good thing for the broader AI ecosystem?
