
This week in AI: AI isn't the end of the world – but it's still plenty harmful

Hey guys, welcome to TechCrunch's regular AI newsletter.

This week, a new study suggests that generative AI isn't all that harmful – at least not in an apocalyptic sense.

In a study submitted to the annual conference of the Association for Computational Linguistics, researchers from the University of Bath and the University of Darmstadt argue that models like those in Meta's Llama family cannot learn independently or acquire new skills without explicit instruction.

The researchers ran thousands of experiments to test the ability of various models to perform tasks they had never seen before, such as answering questions about topics outside the scope of their training data. They found that while the models could superficially follow instructions, they were unable to learn new skills on their own.

“Our study shows that the fear that a model will do something completely unexpected, innovative and potentially dangerous is unfounded,” Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study, said in a statement. “The prevailing view that this type of AI poses a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the real problems that require our attention.”

The study has its limitations, however. The researchers didn't test the latest and most capable models from vendors such as OpenAI and Anthropic, and benchmarking models tends to be an inexact science. But the research is far from the first to find that today's generative AI technology poses no existential threat to humanity – and that assuming otherwise risks unfortunate policymaking.

In an op-ed in Scientific American last year, AI ethicist Alex Hanna and linguistics professor Emily Bender argued that corporate AI labs are drawing regulators' attention to imaginary end-of-the-world scenarios as a bureaucratic ploy. They pointed to OpenAI CEO Sam Altman's appearance at a May 2023 congressional hearing, where he suggested – without evidence – that generative AI tools could “go quite wrong.”

“The general public and regulators must not fall for this maneuver,” Hanna and Bender wrote. “Instead, we should turn to scientists and activists who practice peer review and have pushed back on AI hype in order to understand its harmful effects here and now.”

Their arguments and Madabushi's are key points to keep in mind as investors continue to pump billions into generative AI and the hype cycle nears its peak. There's a lot at stake for the companies backing generative AI technology, and what's good for them – and their backers – isn't necessarily good for the rest of us.

Generative AI may not cause our extinction. But it's already causing harm in other ways – consider the proliferation of nonconsensual deepfake porn, wrongful arrests based on facial recognition, and the hordes of underpaid data annotators. Hopefully policymakers see this too and share that view – or eventually come around to it. If not, humanity may very well have reason to fear.

News

Google Gemini and AI, oh dear: Google's annual Made By Google hardware event took place on Tuesday, and the company announced a slew of updates to its Gemini assistant – as well as new phones, earbuds, and smartwatches. For the latest news, check out TechCrunch's roundup.

Copyright lawsuit against AI progresses: A class action lawsuit brought by artists who claim that Stability AI, Runway AI and DeviantArt illegally trained their AIs on copyrighted works can proceed, but only in part, the presiding judge ruled Monday. In a mixed ruling, several of the plaintiffs' claims were dismissed while others survived, meaning the suit could end up going to trial.

Problems for X and Grok: X, Elon Musk's social media platform, has become the target of a series of privacy complaints after it used the data of users in the European Union to train AI models without obtaining their consent. X has agreed to stop EU data processing for training Grok – for now.

YouTube tests Gemini brainstorming: YouTube is testing an integration with Gemini to help creators brainstorm video ideas, titles, and thumbnails. The feature, called Brainstorm with Gemini, is currently only available to select creators as part of a small, limited experiment.

OpenAI's GPT-4o does strange things: GPT-4o is the company's first model trained on voice as well as text and image data. And that leads it to behave in strange ways sometimes – like mimicking the voice of the person it's talking to or randomly shouting in the middle of a conversation.

Research paper of the week

There are plenty of companies offering tools that claim to be able to reliably detect text written by a generative AI model, which would be useful for combating misinformation and plagiarism, for example. But when we tested a few of them a while back, the tools rarely worked. And a new study suggests the situation hasn't improved much.

Researchers at UPenn designed a dataset and leaderboard, the Robust AI Detector (RAID), with over 10 million AI-generated and human-written recipes, news articles, blog posts, and more to measure the performance of AI text detectors. They found that the detectors they evaluated were “mostly useless” (in the researchers' words), working only when applied to specific use cases and to text similar to the text they were trained on.
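To make concrete what “measuring the performance of AI text detectors” involves, here is a minimal sketch under stated assumptions – it is not the RAID benchmark code. It runs a hypothetical `detector` callable over labeled human-written and AI-generated samples and reports overall accuracy along with the false positive rate, the figure that matters most for the false-accusation scenario described below.

```python
from typing import Callable, Iterable, Tuple

def evaluate_detector(
    detector: Callable[[str], bool],      # hypothetical: returns True if it thinks the text is AI-generated
    samples: Iterable[Tuple[str, bool]],  # (text, is_ai_generated) ground-truth pairs
) -> dict:
    """Score a detector on labeled samples: overall accuracy, plus how often
    it wrongly flags human-written text as machine-generated."""
    correct = total = false_positives = human_total = 0
    for text, is_ai in samples:
        prediction = detector(text)
        total += 1
        correct += int(prediction == is_ai)
        if not is_ai:
            human_total += 1
            false_positives += int(prediction)  # human text flagged as AI
    return {
        "accuracy": correct / total if total else 0.0,
        "false_positive_rate": false_positives / human_total if human_total else 0.0,
    }

# Toy usage with a naive length-based "detector" (purely illustrative):
naive = lambda text: len(text) > 500
print(evaluate_detector(naive, [("a short human-written note", False), ("x" * 600, True)]))
```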

“If universities or schools were to rely on a narrowly trained detector to catch students' use of (generative AI) in written assignments, they could falsely accuse students of cheating when they are not,” Chris Callison-Burch, a professor of computer and information science and co-author of the study, said in a statement. “They could also miss students who are cheating by using other (generative AI) to create their homework.”

It seems there is no magic formula for AI text detection – the problem is an intractable one.

According to reports, OpenAI itself has developed a new text detection tool for its AI models – an improvement over the company's first attempt – but is declining to release it, fearing it could disproportionately affect non-English-speaking users and be rendered ineffective by minor changes to the text. (Less philanthropically, OpenAI is also said to be concerned about how a built-in AI text detector could affect how its products are perceived – and used.)

Model of the week

Generative AI isn't just good for memes, it seems. Researchers at MIT are applying it to identify problems in complex systems such as wind turbines.

A team at MIT's Computer Science and Artificial Intelligence Laboratory developed a framework called SigLLM that includes a component for converting time series data – measurements taken repeatedly over time – into text-based inputs that a generative AI model can process. A user can feed this prepared data to the model and ask it to start detecting anomalies. The model can also be used to forecast future time series data points as part of an anomaly detection pipeline.
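To illustrate the forecasting flavor of that idea, here is a minimal sketch – not SigLLM itself, and not any published API. It serializes a window of readings into a plain-text prompt, asks a language model for the next few values, and flags points where the observed signal deviates sharply from the forecast. The `query_llm` callable, the prompt wording, and the threshold are all assumptions made for illustration.

```python
def serialize(window):
    """Turn a window of sensor readings into a plain-text prompt."""
    values = ", ".join(f"{v:.2f}" for v in window)
    return (
        "The following are hourly sensor readings: "
        f"{values}. Predict the next 3 readings, comma-separated."
    )

def parse_forecast(text, n=3):
    """Pull numeric predictions back out of the model's text reply."""
    numbers = []
    for token in text.replace(",", " ").split():
        try:
            numbers.append(float(token))
        except ValueError:
            continue
    return numbers[:n]

def detect_anomalies(series, query_llm, window=24, horizon=3, threshold=3.0):
    """Flag indices where observations diverge from the model's forecast.

    `query_llm` is a hypothetical stand-in for whatever text-in, text-out
    model endpoint is actually used."""
    anomalies = []
    for start in range(0, len(series) - window - horizon, horizon):
        context = series[start:start + window]
        actual = series[start + window:start + window + horizon]
        forecast = parse_forecast(query_llm(serialize(context)), horizon)
        if len(forecast) < horizon:
            continue  # skip windows where the reply wasn't parseable
        for i, (a, f) in enumerate(zip(actual, forecast)):
            if abs(a - f) > threshold:
                anomalies.append(start + window + i)
    return anomalies
```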

The framework didn't perform particularly well in the researchers' experiments. But if its performance can be improved, SigLLM could, for example, help engineers flag potential problems in equipment such as heavy machinery before they occur.

“Since this is only the first iteration, we didn't expect to get it right on the first try, but these results show that there's an opportunity here to leverage (generative AI models) for complex anomaly detection tasks,” Sarah Alnegheimish, a doctoral student in electrical engineering and computer science and lead author of a paper on SigLLM, said in a statement.

Grab bag

OpenAI updated ChatGPT, its AI-powered chatbot platform, to a new base model this month – but published no changelog (well, barely a changelog).

So what should we make of it? What can we make of it, exactly? There is nothing to go on but anecdotal evidence from subjective tests.

I think Ethan Mollick, a professor at Wharton who studies AI, innovation and startups, has the right mindset. It is hard to write release notes for generative AI models because the models “feel” different from one interaction to the next; they are largely vibes-based. At the same time, people are using ChatGPT – and paying for it. Don't they deserve to know what they're getting into?

It could be that the improvements are incremental, and OpenAI considers it unwise to signal this for competitive reasons. Less likely is that the model somehow relates to OpenAI's reported breakthroughs in reasoning. Regardless, transparency should be a top priority in AI. Without it, there can be no trust – and OpenAI has already lost plenty of that.
