
This week in AI: Generation Z has mixed feelings about AI

Hey guys, welcome to TechCrunch's regular AI newsletter.

This week, polls suggest that Generation Z – a daily subject of mainstream media fascination – has very mixed opinions about AI.

Samsung recently surveyed over 5,000 Gen Z members in France, Germany, Korea, the UK and the US about their views on AI and technology generally. Almost 70% said they see AI as an indispensable resource for work-related tasks such as summarizing documents and meetings and conducting research, as well as non-work-related tasks such as finding inspiration and brainstorming.

But according to a report that EduBirdie, a professional essay writing service, published earlier this year, more than a third of Gen Z employees who use OpenAI's chatbot platform ChatGPT and other AI tools at work feel guilty about it. Respondents expressed concerns that AI could limit their critical thinking skills and hamper their creativity.

Of course, we have to take both surveys with a grain of salt. Samsung isn't exactly impartial; it develops and sells many AI-powered products, and so has a vested interest in portraying AI in an overall flattering light. The same goes for EduBirdie, whose main business is in direct competition with ChatGPT and other AI writing assistants: it would undoubtedly prefer that people were suspicious of AI – especially AI apps that give essay suggestions.

However, it could be that the members of Generation Z who don't approve of AI, or would even boycott it (if that were possible), are simply more aware of the potential consequences of AI and technology generally than previous generations were.

In a separate study by the National Society of High School Scholars, an academic honor society, a majority of Gen Z respondents (55%) said they believe AI will have a negative rather than positive impact on society over the next decade. Fifty-five percent also believe AI will have a major impact on privacy – and not in a positive way.

And the opinions of Generation Z matter. A report from NielsenIQ predicts that Generation Z will soon become the wealthiest generation, with spending potential reaching $12 trillion by 2030 and surpassing baby boomers' spending by 2029.

With some AI startups spending more than 50% of their revenue on hosting, computing power and software (according to data from accounting firm Kruze), every dollar counts. So allaying Gen Z's fears about AI is a smart business move. Whether those fears can be allayed remains to be seen, given the many technical, ethical and legal challenges AI presents. But the least companies can do is try. It never hurts to try.

News

OpenAI signs with Condé: OpenAI has signed a deal with Condé Nast – publisher of prestigious media outlets such as The New Yorker, Vogue and Wired – to feature stories from its properties in OpenAI's AI-powered chatbot platform ChatGPT and its search prototype SearchGPT, and to train its AI on Condé Nast's content.

Demand for AI threatens water supply: The AI boom is driving demand for data centers, which in turn is driving up water consumption. In Virginia – home to the world's largest concentration of data centers – water consumption increased by nearly two-thirds between 2019 and 2023, from 1.13 billion gallons to 1.85 billion gallons, according to the Financial Times.

Gemini Live and Advanced Voice Mode reviewed: Two new AI-powered, voice-focused chat experiences launched by tech giants this month: Google's Gemini Live and OpenAI's Advanced Voice Mode. Both offer realistic voices and the freedom to interrupt the bot at any time.

Trump shares Taylor Swift deepfakes again: On Sunday, former President Donald Trump posted a set of memes on Truth Social that appeared to show Taylor Swift and her fans supporting his candidacy. But my colleague Amanda Silberling writes that as new laws come into effect, these images could have deeper implications for the use of AI-generated imagery in political campaigns.

The big debate about SB 1047: California's SB 1047, a bill that aims to prevent AI-caused real-world disasters before they occur, continues to draw prominent criticism. Just recently, Congresswoman Nancy Pelosi released a statement outlining her opposition, calling the bill "well-intentioned" but "ill-informed."

Research paper of the week

Proposed by a team of Google researchers in 2017, the transformer has become by far the dominant architecture for generative AI models. Transformers form the basis of OpenAI's video generation model Sora, the latest version of Stable Diffusion, and Flux. They also form the core of text generation models such as Anthropic's Claude and Meta's Llama.

And now Google is using them to make music recommendations.

In a recent blog post, a team at Google Research, one of Google's many research and development divisions, describes the new (or at least novel) transformer-based system behind YouTube Music recommendations. The system, they say, is designed to capture signals including the "intent" of a user's action (e.g. pausing on a track), the "salience" of that action (e.g. the percentage of the track played), and other metadata in order to identify related tracks the user might like.

Google says the transformer-based recommendation system resulted in a "significant" reduction in music skip rates and an increase in the time users spent listening to music. Sounds (no pun intended) like a win for El Goog.

Model of the week

While it's not exactly new, OpenAI's GPT-4o is my pick for model of the week because it can now be fine-tuned with custom data.

On Tuesday, OpenAI publicly launched fine-tuning for GPT-4o, allowing developers to use proprietary datasets to adjust the structure and tone of the model's responses or to make the model follow "domain-specific" instructions.

Fine-tuning isn't a panacea, but, as OpenAI explains in a blog post announcing the feature, it can have a major impact on model performance.

Grab bag

Another day, another generative AI copyright lawsuit, this time involving Anthropic.

A group of authors and journalists filed a class action lawsuit against Anthropic in federal court this week, claiming the company committed "large-scale theft" by training its AI chatbot, Claude, on pirated e-books and articles.

Anthropic has "built a multibillion-dollar business by stealing hundreds of thousands of copyrighted books," the plaintiffs say in their lawsuit. "People who learn from books buy legitimate copies of them or borrow them from libraries that buy them, thereby providing at least some compensation to the authors and creators."

Most models are trained using data obtained from public websites and datasets around the web. Companies argue that fair use shields their efforts to indiscriminately harvest data and use it to train commercial models. However, many copyright holders disagree and are filing lawsuits of their own to stop the practice.

In this latest case against Anthropic, the company is accused of using "The Pile," a collection of datasets that includes a massive library of pirated e-books called "Books3." Anthropic recently confirmed to Vox that The Pile was among the datasets in Claude's training set.

The plaintiffs are demanding unspecified damages as well as a permanent injunction barring Anthropic from misusing the authors' works.
