The AI news cycle hasn't slowed down much this holiday season. Between OpenAI's 12 days of "Shipmas" and DeepSeek's release of a major model on Christmas Day, blink and you'll miss a new development.
And it's not slowing down now. On Sunday, OpenAI CEO Sam Altman said in a post on his personal blog that he believes OpenAI knows how to build artificial general intelligence (AGI) and is beginning to turn its attention to superintelligence.
AGI is a nebulous term, but OpenAI has its own definition: "highly autonomous systems that outperform humans at most economically valuable work." As for superintelligence, which Altman sees as a step beyond AGI, he said in the blog post that it could "massively accelerate" innovation well beyond what humans could achieve on their own.
"[OpenAI continues] to believe that iteratively putting great tools in the hands of people leads to great, broadly distributed outcomes," Altman wrote.
Altman, like Dario Amodei, CEO of OpenAI rival Anthropic, is optimistic that AGI and superintelligence will bring wealth and prosperity to all. But assuming AGI and superintelligence are even achievable without new technological breakthroughs, how can we be sure they will benefit everyone?
One recent, worrying data point is a study flagged by Wharton professor Ethan Mollick on X earlier this month. Researchers from the National University of Singapore, the University of Rochester, and Tsinghua University examined the impact of OpenAI's AI-powered chatbot, ChatGPT, on freelancers across various labor markets.
The study identified an economic "AI tipping point" for different types of jobs. Before the tipping point, AI boosted freelancers' earnings; web developers, for example, saw an increase of around 65%. But after the tipping point, AI began to replace freelancers; translators saw a decline of about 30%.
The study suggests that once AI begins to replace a job, it doesn't reverse course. And that should worry us all if more powerful AI is indeed on the horizon.
Altman wrote in his post that he's "quite confident" that "everyone" in the age of AGI, and of superintelligence, will recognize the importance of "maximizing broad benefit and empowerment." But what if he's wrong? What if AGI and superintelligence do arrive, and only corporations have anything to show for it?
The result won't be a better world, but more of the same inequality. And if that's AI's legacy, it will be deeply depressing.
News
Silicon Valley stifles AI doom: Technologists have been ringing alarm bells for years about the possibility of AI causing catastrophic harm. But in 2024, those warning calls were drowned out.
OpenAI is losing money: OpenAI CEO Sam Altman said the company is currently losing money on its $200-per-month ChatGPT Pro plan because people are using it more than the company expected.
Record funding for generative AI: Investments in generative AI, which encompasses a range of AI-powered apps, tools, and services for generating text, images, videos, speech, music, and more, reached new heights last year.
Microsoft increases data center spending: Microsoft has committed $80 billion in fiscal 2025 to building data centers to handle AI workloads.
Grok 3 MIA: xAI's next-generation AI model, Grok 3, didn't arrive on time, adding to a trend of flagship models missing their promised launch windows.
Research paper of the week
AI can make plenty of mistakes. But it can also empower experts in their work.
At least, that's the finding of a research team from the University of Chicago and MIT. In a new study, they suggest that investors who use OpenAI's GPT-4o to summarize earnings forecasts earn higher returns than those who don't.
The researchers recruited investors and had GPT-4o give them AI-generated summaries tailored to their level of investment expertise. Experienced investors received more technical notes, while novices received simpler ones.
The more experienced investors saw a 9.6% improvement in their one-year returns after using GPT-4o, while the less experienced investors saw a 1.7% increase. Not too bad for human-AI collaboration, I'd say.
Model of the week
Prime Intellect, a startup building infrastructure for decentralized AI training, has released an AI model that it claims can help detect pathogens.
The model, called METAGENE-1, was trained on a dataset of over 1.5 trillion DNA and RNA base pairs sequenced from human wastewater samples. Developed in collaboration with the University of Southern California and SecureBio's Nucleic Acid Observatory, METAGENE-1 can be used for a range of metagenomic applications, such as studying organisms, according to Prime Intellect.
"METAGENE-1 achieves state-of-the-art performance across various genomic benchmarks and new evaluations focused on human pathogen detection," Prime Intellect wrote in a series of posts on X.
Grab bag
In response to legal action by major music publishers, Anthropic has agreed to maintain guardrails that prevent its AI-powered chatbot, Claude, from sharing copyrighted song lyrics.
Labels including Universal Music Group, Concord Music Group, and ABKCO sued Anthropic in 2023, accusing the startup of copyright infringement for training its AI systems on lyrics from at least 500 songs. The lawsuit hasn't been settled, but Anthropic has agreed, for now, to stop Claude from providing lyrics to the publishers' songs and from creating new lyrics based on the copyrighted material.
"We continue to look forward to demonstrating that, consistent with existing copyright law, using potentially copyrightable material in the training of generative AI models is a quintessential fair use," Anthropic said in a statement.