
OpenAI expands lobbying team to influence regulation

OpenAI is assembling a global team of lobbyists to influence the politicians and regulators who are increasingly scrutinizing powerful artificial intelligence.

The San Francisco-based startup told the Financial Times that it had grown its global affairs team from three people at the start of 2023 to 35, and that it aims to increase that number to 50 by the end of 2024.

The move comes as governments review and debate AI safety regulations that could hinder the startup's growth and the development of the cutting-edge models that underpin products such as ChatGPT.

"We're not approaching this from the perspective that we just have to step in and quash regulations… because our goal is not to maximise profits; our goal is to make sure that AGI benefits all of humanity," said Anna Makanju, vice-president of government affairs at OpenAI, referring to artificial general intelligence, the idea of machines with the same cognitive abilities as humans.

Although it represents only a small portion of OpenAI's 1,200 employees, the global affairs department is the company's most international unit and is strategically positioned in locations where AI legislation is advancing. It includes staff in Belgium, the UK, Ireland, France, Singapore, India, Brazil and the US.

However, OpenAI lags behind its Big Tech competitors in this regard. According to public filings in the US, Meta spent a record $7.6 million lobbying the US government in the first quarter of this year, while Google spent $3.1 million and OpenAI $340,000. On AI advocacy specifically, Meta has registered 15 lobbyists, Google five and OpenAI just two.

"When I walked in the door, (ChatGPT) had 100 million users (but the company had) three people doing public policy," said David Robinson, head of policy planning at OpenAI, who joined the company in May last year after a career in academia and as a White House adviser on AI policy.

"It literally got to the point where someone at a high level wanted to have a conversation, but nobody could answer the phone," he added.

But OpenAI's global affairs department does not handle some of the most sensitive regulatory cases. That task falls to its legal team, which is dealing with UK and US regulators' review of the company's $18 billion alliance with Microsoft; the US Securities and Exchange Commission's investigation into whether Chief Executive Officer Sam Altman misled investors during his brief ouster by the board in November; and the US Federal Trade Commission's consumer protection investigation into the company.

Instead, OpenAI's lobbyists are focused on shaping AI legislation itself. The UK, US and Singapore are among the many countries drafting AI rules, and they are consulting closely with OpenAI and other technology companies on proposed regulations.

The company was involved in the discussions surrounding the EU AI Act, passed this year, one of the most far-reaching pieces of legislation to regulate powerful AI models.

OpenAI was among the AI companies that argued, during early drafts of the law, that some of its models should not be counted among those posing a "high risk" and therefore subject to stricter rules, according to three people involved in the negotiations. Despite that push, the company's most capable models will fall within the law's scope.

OpenAI also opposed the EU's push to scrutinise all data fed into its foundation models, according to people familiar with the negotiations.

The company told the FT that pre-training data – the data sets that give large language models a broad understanding of language and patterns – should not be within the scope of the regulation, because it is a poor means of understanding an AI system's outputs. Instead, the company suggested the focus should be on post-training data, which is used to fine-tune models for a specific task.

The EU has decided that regulators can still require access to the training data of high-risk AI systems to ensure it is free of errors and bias.

Since the EU law was passed, OpenAI has hired Chris Lehane – who worked for President Bill Clinton and on Al Gore's presidential campaign, and was Airbnb's policy chief – as vice-president of public works. Lehane will work closely with Makanju and her team.

OpenAI also recently poached Jakob Kucharczyk, a former head of competition at Meta. Sandro Gianella, head of European policy and partnerships, joined in June last year after stints at Google and Stripe, while James Hairston, head of international policy and partnerships, joined from Meta in May last year.

The company has recently been in discussions with policymakers in the US and other markets about its Voice Engine model, which can clone and create custom voices. OpenAI scaled back the model's release plans after concerns arose about the risks of how it could be used in this year's global elections.

The team has held workshops and published guidelines on how to deal with misinformation in countries such as Mexico and India, which are holding elections this year. In autocratic countries, OpenAI grants direct access to its models to "trusted individuals" in regions where it does not consider it safe to release its products.

A government official who works closely with OpenAI said another priority for the company is to ensure that any rules remain flexible in future and are not overtaken by new scientific or technological advances.

OpenAI hopes to shed some of the baggage of the social media era, which Makanju says has led to a "general distrust of Silicon Valley companies."

"Unfortunately, people often look at AI through the same lens," she added. "We spend a lot of time trying to make people understand that this technology is very different, and that the regulatory interventions that make sense for it are going to be very different."

However, some industry representatives are critical of OpenAI's expanded lobbying efforts.

"Initially, OpenAI recruited people and specialists who were deeply involved in AI policy, whereas now they simply hire ordinary tech lobbyists, which is a very different strategy," said one person who worked directly with OpenAI on drafting legislation.

"They just want to influence lawmakers in the way big tech companies have been doing for over a decade."

Robinson, OpenAI's head of policy planning, said the global affairs team has more ambitious goals. "The mission is safe and broadly beneficial. And what does that mean? It means creating laws that not only let us innovate and deliver beneficial technology to people, but also result in a world where the technology is safe."

