
This week in AI: How Kamala Harris could regulate AI

Hey, folks, welcome to TechCrunch's regular AI newsletter.

Last Sunday, President Joe Biden announced that he no longer intends to run for re-election. Instead, he offered his “full support” to Vice President Kamala Harris as the Democratic Party's nominee. In the days that followed, Harris secured the support of a majority of Democratic delegates.

Harris has been outspoken on technology and AI policy. What would it mean for AI regulation in the US if she wins the presidency?

My colleague Anthony Ha wrote a few words about this over the weekend. Harris and President Biden have previously said they “reject the false choice that suggests we can either protect the public or advance innovation.” At the time, Biden issued an executive order calling on companies to set new standards for the development of AI. Harris said the voluntary commitments were “a first step toward a safer AI future, and there will be more to come” because “in the absence of regulation and strong government oversight, some technology companies are putting profits ahead of the well-being of their customers, the safety of our communities, and the stability of our democracies.”

I also spoke to AI policy experts to get their views, and most of them said they'd expect continuity from a Harris administration, as opposed to the dismantling of current AI policy and general deregulation advocated by Donald Trump's camp.

Lee Tiedrich, an AI advisor at the Global Partnership on Artificial Intelligence, told TechCrunch that Biden's endorsement of Harris could “increase the chances of maintaining continuity in US AI policy.” “(This is) framed by the 2023 AI executive order and also by multilateralism through the United Nations, the G7, the OECD and other organizations,” she said. “The executive order and related actions also call for greater government oversight of AI, including through increased enforcement, stronger agency AI rules and policies, a focus on safety, and certain mandatory testing and disclosures for some large AI systems.”

Sarah Kreps, a professor of government at Cornell University with a special interest in AI, noted that there's a perception in certain segments of the tech industry that the Biden administration has been too aggressive in regulating AI and that the executive orders are “micromanagement overkill.” She doesn't expect Harris to roll back the AI safety protocols put in place under Biden, but wonders whether a Harris administration might take a less top-down approach to regulation to appease critics.

Krystal Kauffman, a research fellow at the Distributed AI Research Institute, agrees with Kreps and Tiedrich that Harris will likely continue Biden's work to address the risks associated with AI deployment and to increase transparency around AI. But she hopes that, if Harris wins the presidential election, she will cast a wider net of stakeholders when formulating policy – a net that includes data workers, whose plight (poor pay, poor working conditions and mental health issues) often goes unnoticed.

“Harris needs to include the voices of data workers who help program AI in these important conversations going forward,” Kauffman said. “We can no longer look to off-the-record meetings with tech CEOs as a way of crafting policy. If this continues, we will certainly be heading down the wrong path.”

News

Meta launches new models: Meta this week released Llama 3.1 405B, a text-generation and analysis model with 405 billion parameters. Llama 3.1 405B, the largest “open” model to date, is making its way into various Meta platforms and apps, including the Meta AI experience on Facebook, Instagram, and Messenger.

Adobe updates Firefly: Adobe released new Firefly tools for Photoshop and Illustrator on Tuesday, giving graphic designers more ways to use the company's AI models.

Facial recognition in schools: An English school has been formally reprimanded by the UK Data Protection Authority after using facial recognition technology without obtaining students’ explicit consent to process their facial scans.

Cohere raises half a billion: Cohere, a generative AI startup co-founded by former Google researchers, has raised $500 million in new funding from investors including Cisco and AMD. Unlike many of its rivals in the generative AI startup space, Cohere customizes AI models for large enterprises – a key factor in its success.

Interview with the CIA's AI director: As part of TechCrunch's ongoing Women in AI series, I interviewed Lakshmi Raman, the CIA's director of AI. We talked about her path to becoming director, the CIA's use of AI, and the balance that must be struck between embracing new technologies and deploying them responsibly.

Research paper of the week

Ever heard of the transformer? It's the AI model architecture of choice for complex reasoning tasks, powering models like OpenAI's GPT-4o, Anthropic's Claude, and many others. But as powerful as transformers are, they have their weaknesses. That's why researchers are investigating possible alternatives.

One of the most promising candidates is state space models (SSMs), which combine the qualities of several older types of AI models, such as recurrent neural networks and convolutional neural networks, to create a more computationally efficient architecture capable of processing long sequences of data (think novels and films). And one of the strongest incarnations of SSMs to date, Mamba-2, was detailed in a paper this month by research scientists Tri Dao (a professor at Princeton) and Albert Gu (of Carnegie Mellon).

Like its predecessor Mamba, Mamba-2 can handle larger amounts of input data than transformer-based equivalents while remaining competitive with transformer-based models on certain language-generation tasks. Dao and Gu say that if SSMs continue to improve, they will someday run on commodity hardware – and deliver more powerful generative AI applications than today's transformers allow.
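For a rough sense of why SSMs scale so well, here is a minimal sketch of the basic linear state-space recurrence the family builds on – not Mamba-2 itself, whose selective, hardware-aware design is far more involved. The matrices and dimensions below are illustrative placeholders:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Basic linear state-space recurrence:
    h_t = A @ h_{t-1} + B @ x_t,  y_t = C @ h_t.
    The hidden state h stays fixed-size no matter how long the
    input is, so cost grows linearly with sequence length
    (versus quadratically for full transformer attention)."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:  # one step per input token
        h = A @ h + B @ x_t
        ys.append(C @ h)
    return np.stack(ys)

# Illustrative sizes: a 1,000-step input, 4-dim features, 8-dim state.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(8)              # decaying memory of past inputs
B = 0.1 * rng.normal(size=(8, 4))
C = rng.normal(size=(2, 8))
x = rng.normal(size=(1000, 4))
y = ssm_scan(x, A, B, C)         # output shape: (1000, 2)
print(y.shape)
```

The key point is the fixed-size state: the model compresses everything it has seen into `h`, rather than re-attending to the entire history at every step.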

Model of the week

In another development on the architecture front, a team of researchers has developed a new type of generative AI model that they say can rival – or even outperform – both transformers and Mamba in terms of efficiency.

The models, built on an architecture called test-time training (TTT), can reason over millions of tokens, the researchers say, and potentially scale up to billions of tokens in future, more refined designs. (In generative AI, “tokens” are chunks of raw text and other bite-sized pieces of data; see the toy example below.) Because TTT models can handle many more tokens than conventional models, and do so without putting undue strain on hardware resources, the researchers say they're well suited to power the “next generation” of generative AI apps.
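To make “tokens” concrete, here's a toy sketch; note that production models use learned subword vocabularies (such as byte-pair encoding) rather than this naive whitespace split, so real token counts differ:

```python
# Toy tokenizer: splits on whitespace. Real models use learned
# subword vocabularies (e.g., byte-pair encoding) instead.
text = "TTT models can reason over millions of tokens."
tokens = text.split()
print(tokens)       # ['TTT', 'models', 'can', ..., 'tokens.']
print(len(tokens))  # 8 tokens under this naive scheme
```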

To dive deeper into TTT models, check out our recent feature.

Grab bag

Stability AI, the generative AI startup that was recently rescued from financial ruin by investors including Napster co-founder Sean Parker, has generated considerable controversy with the restrictive terms of use and licensing policies for its new products.

Until recently, to commercially use Stability AI's latest open AI image model, Stable Diffusion 3, companies with annual revenues of less than $1 million had to purchase a “Creator” license that capped the total number of images they could generate at 6,000 per month. The bigger issue for most customers, however, was Stability's restrictive fine-tuning terms, which gave Stability AI the right (or at least the appearance of the right) to charge for, and exercise control over, any model trained on images generated by Stable Diffusion 3.

Stability AI's heavy-handed approach prompted CivitAI, one of the largest hosts of image-generating models, to impose a temporary ban on models based on or trained on images from Stable Diffusion 3 while the company sought legal advice on the new license.

“The concern is that, from our current understanding, this license grants Stability AI too much power over the use of not only any model fine-tuned on Stable Diffusion 3, but also any other model that includes Stable Diffusion 3 images in its dataset,” CivitAI wrote in a post on its blog.

In response to the backlash, Stability AI announced earlier this month that it would adjust the licensing terms for Stable Diffusion 3 to allow for more liberal commercial use. “As long as you don't use it for illegal activities, or clearly violate our license or usage guidelines, Stability AI will never ask you to delete resulting images, fine-tunes, or other derived products – even if you never pay Stability AI,” Stability wrote in a blog post.

The saga highlights the legal pitfalls that continue to plague generative AI – and the extent to which “open” remains a matter of interpretation. Call me a pessimist, but the growing number of controversial and restrictive licenses leads me to believe that the AI industry won't reach consensus anytime soon – and that clarity will be slow to emerge.
