This week was something of a swan song for the Biden administration.
On Monday, the White House announced sweeping new restrictions on the export of AI chips – restrictions that tech giants including Nvidia loudly criticized. (Nvidia's business could be severely impaired by the restrictions, should they go into effect as proposed.) Then on Tuesday, the administration issued an executive order opening up federal land to AI data centers.
But the obvious question is: will these measures have a lasting impact? Will Trump, who takes office on January 20, simply reverse Biden's actions? So far, Trump hasn't signaled his intentions either way. But he certainly has the power to undo Biden's recent AI actions.
Biden's export rules are set to take effect after a 120-day comment period. The Trump administration will have wide latitude in how it implements the measures – and in whether to alter them in any way.
As for the executive order on federal land use, Trump could repeal it. Former PayPal COO David Sacks, Trump's AI and crypto “czar,” has already committed to revoking another AI-related executive order from Biden, one that sets standards for AI safety.
However, there is reason to believe that the new administration won't shake things up too much.
In line with Biden's push to free up federal land for data centers, Trump recently promised expedited approvals for companies investing at least $1 billion in the US. He has also picked Lee Zeldin, who has pledged to cut regulations he sees as burdensome on businesses, to take the helm of the EPA.
Aspects of Biden's export rules could survive as well. Some of the regulations target China, and Trump has made no secret of the fact that he sees China as the US's biggest rival in AI.
One point of contention is the inclusion of Israel in the list of countries subject to trade caps on AI hardware. As recently as October, Trump described himself as a “protector” of Israel, and he has signaled that he's likely to be more lenient toward Israel's military actions in the region.
In any case, we'll have a clearer picture as the weeks progress.
News
ChatGPT, remind me…: Paid users of OpenAI's ChatGPT can now ask the AI assistant to schedule reminders or recurring requests. The new beta feature, called Tasks, is rolling out this week to ChatGPT Plus, Team and Pro users around the globe.
Meta versus OpenAI: Executives and researchers leading Meta's AI efforts obsessed over beating OpenAI's GPT-4 model while developing Meta's own Llama 3 family of models, according to communications unsealed by a court on Tuesday.
OpenAI's board grows: OpenAI has appointed Adebayo “Bayo” Ogunlesi, an executive at investment firm BlackRock, to its board. The company's current board bears little resemblance to OpenAI's late 2023 board, whose members fired CEO Sam Altman, only to reinstate him days later.
Blaize goes public: Blaize is expected to be the first AI chip startup to go public in 2025. Founded in 2011 by former Intel engineers, the company has raised $335 million from investors including Samsung for its chips for cameras, drones and other edge devices.
A “reasoning model” that thinks in Chinese: OpenAI's AI reasoning model o1 sometimes “thinks” in languages like Chinese, French, Hindi and Thai, even when a question is asked in English – and nobody really knows why.
Research paper of the week
A recent study co-authored by Dan Hendrycks, an advisor to billionaire Elon Musk's AI company xAI, suggests that many AI safety benchmarks correlate with the capabilities of AI systems. That is, as a system's overall performance improves, it “scores better” on the benchmarks – making it appear as if the model is “safer.”
“Our analysis reveals that many AI safety benchmarks – around half – often inadvertently capture latent factors that are closely tied to general capabilities and raw training compute,” write the researchers behind the study. “Overall, it is hard to avoid measuring upstream model capabilities in AI safety benchmarks.”
In the study, the researchers propose an empirical basis for developing “more meaningful” safety metrics that they hope will “advance the science” of safety assessments in AI.
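To make the study's core claim concrete, here's a minimal sketch (not the authors' code) of the kind of check it describes: score a set of models on a general-capability benchmark and on a nominal safety benchmark, then test whether the two track each other. All model scores below are invented for illustration.

```python
# Illustrative sketch (not the paper's code): check whether a "safety"
# benchmark mostly tracks general capability across a set of models.
# All scores below are hypothetical.
from statistics import correlation  # Pearson's r, Python 3.10+

capability_scores = [42.0, 55.5, 61.2, 70.8, 78.3]  # e.g., general-knowledge accuracy
safety_scores = [40.1, 52.7, 59.9, 69.4, 80.2]      # nominal "safety" benchmark

r = correlation(capability_scores, safety_scores)
print(f"capability vs. safety correlation: r = {r:.2f}")

# An r near 1.0 suggests the safety benchmark is largely a proxy for
# capability and compute, which is the entanglement the study flags.
if r > 0.8:
    print("Benchmark likely entangled with general capability.")
```

A benchmark that passes this kind of check would show low correlation with capability, which is roughly what the authors mean by a “more meaningful” safety metric.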
Model of the week
In a technical paper published on Tuesday, Japanese AI company Sakana AI detailed Transformer² (“Transformer-squared”), an AI system that dynamically adapts to new tasks.
Transformer² first analyzes a task – for example, writing code – to understand its requirements. It then applies “task-specific adjustments” and optimizations to tailor the model to the task at hand.
Sakana says the methods behind Transformer² can be applied to open models like Meta's Llama, and that they “offer a glimpse into a future where AI models are no longer static.”
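The two-pass idea is easy to sketch in toy form: first classify the incoming prompt by task type, then apply a pre-learned, task-specific adjustment to the weights before answering. The sketch below is a heavy simplification, not Sakana's code; the keyword-based `classify_task` and the per-task scaling vectors are hypothetical stand-ins (the adjustment here rescales a weight matrix's singular values, loosely in the spirit of the paper).

```python
# Toy sketch of Transformer²'s two-pass adaptation idea, heavily simplified.
# The task classifier and per-task "expert" vectors are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # stand-in for one model weight matrix
U, S, Vt = np.linalg.svd(W)       # decompose once, offline

# One learned scaling vector per task; values invented for illustration.
experts = {
    "code": 1.0 + 0.1 * rng.standard_normal(8),
    "math": 1.0 + 0.1 * rng.standard_normal(8),
}

def classify_task(prompt: str) -> str:
    # Pass 1: decide what kind of task this is (trivial keyword stand-in).
    return "code" if "def " in prompt or "function" in prompt else "math"

def adapt(prompt: str) -> np.ndarray:
    # Pass 2: rescale the singular values for the detected task and
    # reassemble the weight matrix before running inference with it.
    z = experts[classify_task(prompt)]
    return U @ np.diag(S * z) @ Vt

W_task = adapt("write a function that reverses a list")
print("max weight delta after adaptation:", np.abs(W_task - W).max())
```

The appeal of this style of adaptation is that the expensive decomposition happens once, while the per-task adjustment at inference time is just a cheap rescaling.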
Grab bag
A small team of developers has released an open alternative to AI-powered search engines such as Perplexity and OpenAI's SearchGPT.
The project, called PrAIvateSearch, is available on GitHub under an MIT license, meaning it can be used largely without restrictions. It relies on openly available AI models and services, including Alibaba's Qwen family of models and the search engine DuckDuckGo.
The PrAIvateSearch team says its goal is to “implement similar functionality to SearchGPT,” but in an “open source, local, and private way.” For tips on getting started, check out the team's latest blog post.
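The underlying pattern – local search-augmented generation – can be sketched in a few lines. The snippet below is not the project's code: it wires DuckDuckGo results into a locally run Qwen model via the `duckduckgo_search` and `transformers` Python packages, and the specific checkpoint name is an assumption.

```python
# Minimal sketch of the search-augmented pattern PrAIvateSearch implements:
# fetch web results locally, then let a local Qwen model answer from them.
# Not the project's own code; the model checkpoint is an assumed example.
from duckduckgo_search import DDGS
from transformers import AutoModelForCausalLM, AutoTokenizer

query = "What is Transformer-squared?"

# 1. Retrieve a few web snippets via DuckDuckGo (no API key required).
results = DDGS().text(query, max_results=5)
context = "\n".join(f"- {r['title']}: {r['body']}" for r in results)

# 2. Load a small instruction-tuned Qwen model locally.
model_id = "Qwen/Qwen2.5-1.5B-Instruct"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# 3. Ask the model to answer using only the retrieved snippets.
messages = [
    {"role": "system", "content": "Answer using only the provided search results."},
    {"role": "user", "content": f"Search results:\n{context}\n\nQuestion: {query}"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=300)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because both the retrieval step and the model run on your own machine, queries never have to leave it, which is the “local and private” property the team is after.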