
Only 2% of AI research addresses safety, according to a study from Georgetown University

We hear a lot about AI safety, but does that mean it's being heavily discussed in research?

A new study from Georgetown University's Emerging Technology Observatory suggests that, despite the noise, AI safety research occupies only a tiny fraction of the industry's research focus.

Researchers analyzed over 260 million scientific publications and found that only 2% of AI-related papers published between 2017 and 2022 directly addressed topics related to AI safety, ethics, robustness, or governance.

While the number of publications on AI safety grew by an impressive 315% over that period, from around 1,800 to over 7,000 per year, safety research is still being far outpaced by the explosive growth of AI capabilities.

Here are the key findings:

  • Only 2% of AI research from 2017 to 2022 focused on AI safety
  • AI safety research grew by 315% during that period, but is dwarfed by overall AI research
  • The US is the frontrunner in AI safety research, while China lags behind
  • Key challenges include robustness, fairness, transparency, and maintaining human control

AI safety research grew 315% between 2017 and 2022 but still falls short. Source: Emerging Technology Observatory.

Many leading AI researchers and ethicists warn of existential risks if artificial general intelligence (AGI) is developed without sufficient safeguards and precautions.

Imagine an AGI system capable of recursively improving itself, quickly surpassing human intelligence while pursuing goals that are misaligned with our values. It's a scenario that some argue could spiral beyond our control.

However, the debate isn't all one-way traffic. In fact, many AI researchers believe that concerns about AI safety are overblown.

Furthermore, some even believe the hype has been orchestrated to help Big Tech push through regulation and eliminate grassroots and open-source competitors.

However, even today's narrow AI systems, trained on past data, can exhibit biases, produce harmful content, violate privacy, and be used maliciously.

So while AI safety must look to the long term, it must also address risks in the here and now, which it arguably does not do sufficiently as deepfakes, bias, and other issues continue to play a significant role.

Effective AI safety research must address both shorter-term challenges and longer-term speculative risks.

The US is the frontrunner in AI safety research

Looking deeper into the data, the US is the clear leader in AI safety research, hosting 40% of related publications, compared with 12% from China.

However, China's safety output lags far behind its overall AI research: while 5% of American AI research addressed safety, only 1% of Chinese research did so.

One might note that surveying Chinese research is a difficult task in itself. Moreover, China has been proactive about regulation – arguably more so than the US – so this data may not give the country's AI industry a fair hearing.

At the institutional level, Carnegie Mellon University, Google, MIT and Stanford are leaders.

But globally, no organization produced more than 2% of total safety-related publications, underscoring the need for a larger, more concerted effort.

Safety imbalances

So what can be done to correct this imbalance?

That depends on whether one believes AI safety is a pressing risk on a par with nuclear war, pandemics, and the like. There is no clear answer to this question, making AI safety a highly speculative topic with little consensus among researchers.

Safety and ethics research is also something of a subfield within machine learning, requiring different skills, academic backgrounds, and so on, and it may not be adequately funded.

Closing the AI safety gap also requires addressing questions of openness and secrecy in AI development.

The biggest tech firms conduct extensive internal safety studies that are never made public. As AI becomes more commercialized, companies are becoming more protective of their AI breakthroughs.

OpenAI, for instance, was a research powerhouse in its early days.

The company previously conducted extensive independent reviews of its products, documenting biases and risks – such as the sexist bias in its CLIP project.

Anthropic is still actively involved in public AI safety research and regularly publishes studies on topics such as bias and jailbreaking.

DeepMind has also documented the possibility of AI models developing "emergent goals," actively contradicting their instructions, or turning against their creators.

Overall, however, safety has taken a backseat as Silicon Valley lives by its motto of "move fast and break things."

Ultimately, the Georgetown study makes it clear that universities, governments, technology companies, and research funders need to invest more effort and money in AI safety.

Some have also called for an international body for AI safety, similar to the International Atomic Energy Agency (IAEA), which was founded after a series of nuclear incidents that made intensive international cooperation necessary.

Does AI need its own catastrophe to reach this level of government and corporate collaboration? Hopefully not.
