This week in AI: OpenAI and publishers are partners of purpose

Keeping up with an industry as fast-moving as AI is a giant challenge. Until an AI can do it for you, here's a handy roundup of the newest developments from the world of machine learning, in addition to notable research and experiments that we haven't covered individually.

By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we're increasing the frequency of our semi-regular AI column, which used to appear about twice a month, to weekly – so keep an eye out for more editions.

This week in AI, OpenAI announced that it has reached an agreement with News Corp, the news publishing giant, to train generative AI models developed by OpenAI on articles from News Corp brands such as MarketWatch. The agreement, which the companies describe as "multi-year" and "historic," also gives OpenAI the right to display News Corp titles in apps such as ChatGPT in response to certain questions – presumably in cases where the answers come partly or entirely from News Corp publications.

Sounds like a win-win, right? News Corp gets a cash injection for its content – reportedly over $250 million – at a time when the prospects for the media industry look even darker than usual. (Generative AI hasn't helped, and threatens to significantly reduce the referral traffic that publications depend on.) Meanwhile, OpenAI, which is battling copyright holders on multiple fronts over fair use disputes, has one less costly court battle to worry about.

But the devil is in the details. Note that the News Corp deal has an end date – like all of OpenAI's content licensing deals.

This in and of itself isn't ill will on OpenAI's part. Perpetual licenses are a rarity in media, as all parties want to keep the option of renegotiating the contract open. However, it is a little questionable given recent comments from OpenAI CEO Sam Altman about the declining importance of training data for AI models.

In an appearance on the "All-In" podcast, Altman said that he "definitely doesn't believe there will be an arms race for (training) data," because "when models become intelligent enough, at some point it will no longer be about more data – at least not for training." Elsewhere, he told James O'Donnell of MIT Technology Review that he is optimistic that OpenAI – and/or the AI industry as a whole – "will find a way to stop needing more and more training data."

The models are not yet that "intelligent," which is why OpenAI is reportedly experimenting with synthetic training data and scouring the vastness of the internet – and YouTube – for organic sources. But let's assume they do one day need lots of additional data to improve by leaps and bounds. Where does that leave publishers, especially once OpenAI has already mined their entire archives?

My point is that the publishers – and the other content owners OpenAI has partnered with – appear to be short-term, self-interested partners, nothing more. By entering into licensing agreements, OpenAI effectively neutralizes a legal threat – at least until the courts resolve how fair use applies in the context of AI training – and gets to celebrate a PR victory. The publishers get much-needed capital. And work on AI that could seriously harm those publishers continues.

Here are some other notable AI stories from the past few days:

  • Spotify’s AI DJ: Spotify's addition of the "AI DJ" feature, which presents users with personalized song selections, was the company's first step into an AI future. Now Spotify is developing an alternate version of that DJ that will speak Spanish, Sarah writes.
  • Meta's AI advisory board: Meta announced on Wednesday the creation of an AI advisory board. But there's one big problem: it's all white men. That seems a little insensitive, considering that marginalized groups are the ones most likely to suffer the consequences of AI technology's shortcomings.
  • FCC proposes AI disclosures: The Federal Communications Commission (FCC) has proposed a rule requiring disclosure of – but not banning – AI-generated content in political ads. Devin has the full story.
  • Answer calls with your voice: Thanks to a new partnership with Microsoft, customers of the widely known caller ID service Truecaller will soon be able to answer calls with their own voice using its AI-powered assistant.
  • Humane is considering a sale: Humane, the company behind the highly publicized Ai Pin, which had a mixed launch last month, is searching for a buyer. The company has reportedly set a price tag of between $750 million and $1 billion, and the sales process is still in the early stages.
  • TikTok embraces generative AI: TikTok is the latest tech company to integrate generative AI into its ads business. On Tuesday, the company announced that it's launching a new TikTok Symphony AI suite for brands. The tools will help marketers write scripts, produce videos, and enhance their current ad assets, Aisha reports.
  • AI summit in Seoul: At an AI safety summit in Seoul, South Korea, government officials and AI industry leaders agreed to apply basic safety measures in this rapidly evolving field and to build an international safety research network.
  • Microsoft’s AI PCs: In two keynotes during its annual Build developer conference this week, Microsoft unveiled a new line of Windows computers (and Surface laptops) it calls Copilot+ PCs, along with generative AI-powered features like Recall, which helps users find apps, files, and other content they've viewed in the past.
  • OpenAI’s voice debacle: OpenAI is removing one of the voices from ChatGPT's text-to-speech feature. Users found the voice, named Sky, to be eerily similar to Scarlett Johansson's (she has played AI characters before), and Johansson herself issued a statement saying she had hired legal counsel to inquire about the Sky voice and get precise details about its development.
  • UK autonomous driving law: The UK's self-driving car regulations are now official following Royal Assent, the final step a bill must go through before becoming law.

More machine learning

This week we’ve got some interesting AI-related research for you. Prolific University of Washington researcher Shyam Gollakota strikes again, this time with noise-canceling headphones that can block out everything except the one person you want to listen to. While wearing the headphones, you press a button while looking at that person; the system then samples the voice arriving from that direction and uses it to power an acoustic exclusion engine that filters out background noise and other voices.

The researchers, led by Gollakota and several graduate students, call the system Target Speech Hearing and presented it at a conference in Honolulu last week. It's useful both as an accessibility tool and as an everyday option, and one can imagine one of the major technology companies copying the feature for the next generation of high-end headphones.
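The paper describes a far more sophisticated system, but the core idea of "sample the voice from one direction, suppress everything else" can be illustrated with classic delay-and-sum beamforming. The sketch below is a toy two-microphone simulation (all signals and delays are made up for illustration), not the researchers' actual method:

```python
# Toy delay-and-sum beamforming: align multiple mic channels on a chosen
# arrival direction so the target adds coherently and off-axis noise cancels.
import numpy as np


def delay_and_sum(mics: np.ndarray, delays: list[int]) -> np.ndarray:
    """Shift each mic channel back by its integer-sample delay, then average."""
    aligned = [np.roll(channel, -d) for channel, d in zip(mics, delays)]
    return np.mean(aligned, axis=0)


rng = np.random.default_rng(0)
n = 1000
target = np.sin(2 * np.pi * 0.01 * np.arange(n))  # the voice we steer toward
noise = rng.standard_normal(n)                     # interference from elsewhere

# Target reaches mic 1 three samples later than mic 0; noise arrives
# from a different direction, so its inter-mic delay differs (7 samples).
mic0 = target + noise
mic1 = np.roll(target, 3) + np.roll(noise, 7)

out = delay_and_sum(np.stack([mic0, mic1]), delays=[0, 3])

# Steering at the target's delays keeps the voice intact while the
# misaligned noise copies partially cancel in the average.
target_err = np.mean((out - target) ** 2)
raw_err = np.mean((mic0 - target) ** 2)
print(target_err < raw_err)  # beamformed output is closer to the target
```

A real system like Target Speech Hearing additionally has to learn the target speaker's voice and track them as they (or the wearer) move, which is where the neural components come in.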

Chemists at EPFL are apparently tired of doing lab tasks, because they've trained a model called ChemCrow to do those tasks for them. Not hands-on tasks like titrating and pipetting, but planning work like combing through the literature and mapping out reaction chains. ChemCrow doesn't do everything for the researchers, of course, but acts more like a natural language interface to the whole toolset, calling on search or calculation tools as needed.

Photo credits: EPFL

The lead author of the paper introducing ChemCrow said it's "analogous to a human expert with access to a calculator and databases" – i.e., a PhD student – so hopefully they can work on something more important or skip the boring parts. It reminds me a little of Coscientist. As for the name, it's "because crows are known to be good with tools." Good enough!
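The "natural language interface to a toolset" pattern is easy to sketch. ChemCrow itself couples a large language model with expert-built chemistry tools; the stand-in below replaces the LLM with a naive keyword router, and every tool name and response here is hypothetical, purely to show the dispatch shape:

```python
# Toy tool-dispatch agent: route a natural-language request to a tool,
# in the spirit of (but far simpler than) LLM tool use in ChemCrow.

def search_literature(query: str) -> str:
    """Stand-in for a literature search tool."""
    return f"[search results for: {query}]"


def plan_synthesis(target: str) -> str:
    """Stand-in for a reaction-planning tool."""
    return f"[reaction steps toward: {target}]"


def calculate(expression: str) -> str:
    """Toy calculator; eval is restricted to bare arithmetic here."""
    return str(eval(expression, {"__builtins__": {}}))


TOOLS = {
    "search": search_literature,
    "synthesize": plan_synthesis,
    "calculate": calculate,
}


def dispatch(request: str) -> str:
    """Pick the first tool whose keyword appears in the request;
    everything after the keyword becomes the tool's argument."""
    lowered = request.lower()
    for keyword, tool in TOOLS.items():
        if keyword in lowered:
            return tool(lowered.split(keyword, 1)[1].strip())
    return "No matching tool; answering directly."


print(dispatch("search aspirin solubility"))
print(dispatch("calculate 2 * 21"))  # -> 42
```

In the real system an LLM both chooses the tool and composes the final answer from the tool's output, which is what makes the agent feel like a single conversational expert.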

Disney Research roboticists are working hard to make the movements of their creations more realistic without having to manually animate every possible movement. A new paper they will present at SIGGRAPH in July shows a combination of procedurally generated animation and an artist interface for tweaking those animations. All of this runs on an actual bipedal robot (a Groot).

The idea is that an artist can create a style of locomotion – springy, stiff, unstable – and the engineers don't have to implement every detail, just make sure it stays within certain parameters. The movement can then be performed on the fly, with the proposed system improvising the precise motions. Expect to see this at Disney World in a few years…
