
This week in AI: Looking for balance in the news flood

Longtime readers of the newsletter may have noticed that we skipped a week last week. That wasn't our intention, and we apologize.

The reason is that we've reached an inflection point in the AI news cycle. We're a small team, and we're stretched thin. It has become nearly impossible to cover every announcement, controversy, academic paper, trend, open model release, lawsuit, and so on.

Take this month, for instance. OpenAI is in the middle of what's effectively a 12-day press blitz. Google is gearing up to launch major new AI products, as is Elon Musk's company, xAI. And that's just the news from the biggest AI players.

To better manage the surge, we're making a small change to This Week in AI. Going forward, the newsletter will be slightly shorter. It won't be a drastic reduction – you may not even notice it – but the idea is to make This Week in AI more concise while ensuring it lands in your inbox on a regular schedule.

We hope you find the streamlined newsletter easier to digest. As always, we're open to feedback – feel free to message me anytime.

News

Photo credit: Kirillm/Getty Images

A test for AGI: A well-known artificial general intelligence (AGI) test is close to being solved, but the test's creators say this points to flaws in the test's design rather than a genuine research breakthrough.

Amazon's new lab: Amazon says it's setting up a new research and development lab in San Francisco, the Amazon AGI SF Lab, which will focus on developing "foundational" capabilities for AI agents.

OpenAI's video generator launches: Most subscribers to OpenAI's ChatGPT Pro and Plus plans will have access to Sora, OpenAI's video generator, as of Monday. Folks in Europe, however, are out of luck.

China is investigating Nvidia: China's market regulator has reportedly opened an antitrust investigation into Nvidia's acquisition of Mellanox, an Israel-based company that develops high-performance chips for supercomputers.

Yelp adds AI: Yelp rolled out several new features this week, including AI-powered review insights. The platform's AI attempts to analyze the sentiment of reviews and highlight them by category (e.g., food quality).

Google's renewable energy spree: Google has signed a deal to supply enough carbon-free electricity to power multiple gigawatt-scale data centers. In total, its investments in renewable energy amount to around $20 billion.

Reddit introduces conversational AI: Reddit's newest AI-powered feature, Reddit Answers, allows users to ask questions and receive curated summaries of relevant answers and threads across the platform.

X's new image generator: X (formerly Twitter) has gotten a new image generator courtesy of xAI, Elon Musk's AI startup. It's called Aurora and is tuned for "photorealistic rendering." You can find it in X's Grok assistant.

Research paper of the week

Photo credit: Bryce Durbin/TechCrunch

A team of computer scientists from Ai2 and UC San Diego says it has created an AI model that can predict 100 years of climate patterns in 25 hours.

The model, called Spherical Dyffusion, starts from knowledge of basic climate science and then applies a series of transformations to predict future patterns. Unlike many state-of-the-art climate prediction models, Spherical Dyffusion can run on relatively modest hardware, the team claims.

The model has its limitations, but the researchers plan to refine it further. The next version will simulate how the atmosphere responds to carbon dioxide, they say.

Separately, Ai2 released the second generation of its climate-modeling AI, a climate emulator.

Model of the week

CausVid
Photo credit: Yang et al.

Sora may be getting all the attention, but a new video generation model from MIT CSAIL and Adobe Research may be more exciting.

The model, called CausVid, can start playing videos as soon as they begin generating – offering a kind of preview of the finished clip. That's in contrast to models like Sora, which can't display clips while they're still being generated.

The researchers plan to release an open source implementation soon.

Grab bag

The group of artists who leaked access to Sora last November has published a series of essays explaining why they did it.

The essays are well worth reading, but the gist is that the group wanted to call out what it saw as the exploitation of creatives for research and development and for public relations.

“We called on artists to think beyond proprietary systems,” the group wrote in a post, “and the constraints of adopting a model mediated by Big Tech.”
