
This week in AI: Generative AI floods academic journals with spam

Hi, folks, and welcome to TechCrunch’s regular AI newsletter.

This week in AI, generative AI is beginning to flood academic publications with spam – a disheartening new development on the disinformation front.

In a post on Retraction Watch, a blog that tracks recent retractions of academic studies, assistant professors of philosophy Tomasz Żuradzki and Leszek Wroński wrote about three journals published by Addleton Academic Publishers that appear to consist entirely of AI-generated articles.

The journals’ articles follow an identical pattern, replete with buzzwords like “blockchain,” “metaverse,” “Internet of Things,” and “deep learning.” They list the same editorial board (ten of whose members are deceased) and a nondescript address in Queens, New York, that appears to be a house.

“So what’s the issue?” you may ask. Isn’t wading through AI-generated spam just the cost of doing business online these days?

Well, yes. But the fake journals show how easy it is to game the systems used to evaluate researchers for promotions and hiring – and that could well be a harbinger for knowledge workers in other industries.

On at least one widely used ranking system, CiteScore, the journals rank in the top 10 for philosophy research. How is this possible? They cite one another extensively. (CiteScore factors citations into its calculations.) Żuradzki and Wroński found that of 541 citations in one of Addleton’s journals, 208 came from the publisher’s other bogus journals.
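To make the scale of that self-citation concrete, here is a quick back-of-the-envelope calculation. This is not CiteScore’s actual formula (which averages citations over documents published in a window), just the intra-publisher share implied by the figures above; the function name is mine.

```python
def self_citation_share(total_citations: int, publisher_citations: int) -> float:
    """Fraction of a journal's citations that come from the same publisher."""
    return publisher_citations / total_citations

# Figures reported by Żuradzki and Wroński for one Addleton journal:
share = self_citation_share(total_citations=541, publisher_citations=208)
print(f"{share:.1%} of citations are intra-publisher")  # 38.4%
```

In other words, nearly two in five citations boosting the journal’s score originate from the same publisher’s own titles.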

“(These rankings) often serve as indicators of research quality for universities and funding agencies,” write Żuradzki and Wroński. “They play an important role in decisions about academic awards, hiring and promotions, and can thus influence researchers’ publication strategies.”

One could argue that CiteScore is the problem – it’s clearly a flawed measure. And that wouldn’t be a wrong argument. But it’s also not wrong to say that generative AI and its misuse are disrupting systems that people’s livelihoods depend on in unexpected – and potentially quite harmful – ways.

There is a future in which generative AI leads us to rethink and redesign systems like CiteScore to be more equitable, holistic, and inclusive. The grimmer alternative – and the one playing out now – is a future in which generative AI continues to run amok, wreaking havoc and ruining professional lives.

I sincerely hope we can correct course soon.

News

DeepMind’s soundtrack generator: DeepMind, Google’s AI research lab, says it’s developing AI technology to generate soundtracks for videos. DeepMind’s AI takes a description of a soundtrack (e.g., “pulsating jellyfish underwater, marine life, ocean”) and pairs it with a video to create music, sound effects, and even dialogue that matches the video’s characters and tone.

A robot chauffeur: Researchers at the University of Tokyo have developed and trained a “musculoskeletal humanoid” named Musashi to drive a small electric car around a test track. Equipped with two cameras that stand in for human eyes, Musashi can “see” the road ahead as well as the views reflected in the car’s side mirrors.

A new AI search engine: Genspark, a new AI-powered search platform, uses generative AI to write custom summaries in response to search queries. So far, the company has raised $60 million from investors, including Lanchi Ventures. The latest round valued the company at $260 million post-money, a respectable figure given that Genspark is going up against competitors like Perplexity.

How much does ChatGPT cost?: How much does ChatGPT, OpenAI’s ever-expanding AI-powered chatbot platform, cost? The question is harder to answer than you might think. To help you keep track of the various ChatGPT subscription options, we’ve put together an updated guide to ChatGPT pricing.

Research paper of the week

Autonomous vehicles face an endless number of edge cases, depending on location and situation. If you’re on a two-lane road and someone puts on their left turn signal, does that mean they’re about to change lanes? Or that you should pass them? The answer may depend on whether you’re on I-5 or the Autobahn.

A group of researchers from Nvidia, USC, UW and Stanford show in a paper just published at CVPR that many ambiguous or unusual situations can be resolved by – believe it or not – having an AI read the local driver’s manual.

Their Large Language Driving Assistant, or LLaDA, gives an LLM access to a state, country or region’s driving manual – no fine-tuning needed. Local rules, customs or signage can be found in that literature, and when an unexpected circumstance such as honking, high beams or a flock of sheep arises, an appropriate action (pull over, stop, turn, honk back) is generated.
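The basic flow – ground the prompt in the local manual, then ask for an action – can be sketched as follows. This is not the paper’s code; `query_llm` is a hypothetical stand-in for whatever chat-completion API a real system would call, and here it is a toy function so the sketch runs on its own.

```python
def query_llm(prompt: str) -> str:
    """Toy stand-in for a real LLM call; a real system would query a model here."""
    if "flock of sheep" in prompt:
        return "stop"
    return "proceed"

def handle_unexpected(event: str, manual_excerpt: str) -> str:
    """Ask the LLM for an appropriate action, grounded in the local driving manual."""
    prompt = (
        f"Local driving manual says:\n{manual_excerpt}\n\n"
        f"Unexpected situation: {event}\n"
        "Choose one action: pull over, stop, turn, honk back, proceed."
    )
    return query_llm(prompt)

action = handle_unexpected(
    "a flock of sheep is crossing the road",
    "Livestock have right of way on rural roads; drivers must stop.",
)
print(action)  # "stop" with this toy stand-in
```

The key design point is that the locale-specific knowledge lives in retrieved text rather than in the model’s weights, which is why no fine-tuning is needed when moving between regions.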

Photo credits: NVIDIA

It’s by no means a complete end-to-end driving system, but it shows an alternative path to a “universal” driving system that still holds surprises in store. And perhaps a way for the rest of us to figure out why we get honked at when we visit unfamiliar areas.

Model of the week

On Monday, Runway, a company that develops generative AI tools for film and image content, unveiled Gen-3 Alpha. Gen-3 was trained on a large number of images and videos from public and internal sources, and it can generate video clips from text descriptions and still images.

Runway says Gen-3 Alpha offers a “significant” improvement in generation speed and fidelity over Runway’s previous flagship video model, Gen-2, as well as finer control over the structure, style and movement of the videos it creates. Gen-3 can also be customized to allow for more “stylistically controlled” and consistent characters, Runway says, targeting “specific artistic and narrative needs.”

Gen-3 Alpha has its limitations – including the fact that its maximum clip length is 10 seconds. But Runway co-founder Anastasis Germanidis promises that it’s just the first of several video generation models in a next-gen family that will be trained on Runway’s improved infrastructure.

Gen-3 Alpha is just the latest of several generative video systems to hit the market in recent months. Others include OpenAI’s Sora, Luma’s Dream Machine and Google’s Veo. Together, they threaten to upend the film and TV industry as we know it – assuming they can overcome copyright challenges.

Grab bag

Your next order at McDonald's won’t be taken by an AI.

McDonald’s announced this week that it will remove the automated order-taking technology it has been testing for nearly three years from more than 100 of its restaurants. The technology – developed jointly with IBM and installed in the restaurants’ drive-thru lanes – made headlines last year for its tendency to misunderstand customers and get orders wrong.

A recent piece in The Takeout suggests that AI is losing its grip on fast-food operators more broadly, who not long ago were enthusiastic about the technology and its potential to boost efficiency (and reduce labor costs). Presto, a major player in the AI-powered drive-thru lane space, recently lost a key customer, Del Taco, and faces mounting losses.

The problem is inaccuracy.

McDonald’s CEO Chris Kempczinski told CNBC in June 2021 that its voice recognition technology was accurate about 85% of the time, but that about one in five orders required human assistance. The best version of Presto’s system, meanwhile, completes only about 30% of orders without human assistance, according to The Takeout.
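For a sense of scale, the figures above imply very different human-intervention rates for the two systems. A quick comparison (reading “completes 30% of orders without assistance” as 70% needing help):

```python
# Intervention rates implied by the reported figures.
mcdonalds_needs_help = 1 / 5            # "about one in five orders"
presto_completes_alone = 0.30           # per The Takeout
presto_needs_help = 1 - presto_completes_alone

print(f"McDonald's/IBM: {mcdonalds_needs_help:.0%} of orders need a human")
print(f"Presto:         {presto_needs_help:.0%} of orders need a human")
```

Even the better-performing system hands one order in five back to a person; the other hands back more than two in three.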

While AI is decimating certain segments of the gig economy, it seems that some jobs – particularly those that require understanding a wide variety of accents and dialects – can’t be eliminated by automation. At least for now.
