
CIA AI Director Lakshmi Raman claims the agency is taking a “thoughtful approach” to AI

As part of TechCrunch's ongoing Women in AI series, which aims to give AI-focused academics and others their well-deserved, and long overdue, time in the spotlight, TechCrunch interviewed Lakshmi Raman, the CIA's AI director. We talked about her path to becoming director, the CIA's use of AI, and the balance that needs to be struck between embracing new technologies and deploying them responsibly.

Raman has been in the intelligence community for a long time. She joined the CIA in 2002 as a software developer after earning her bachelor's degree from the University of Illinois Urbana-Champaign and her master's degree in computer science from the University of Chicago. A few years later, she moved into management at the agency, eventually leading the CIA's entire enterprise-wide data science effort.

Raman says she was fortunate to have female role models and predecessors at the CIA, given that the intelligence field has traditionally been a male-dominated one.

“I still have people I can look up to, ask for advice and approach about what the next level of leadership should look like,” she said. “I think there are things every woman has to navigate as part of her career.”

In her role as director, Raman orchestrates, integrates and drives AI activities across the CIA. “We believe AI supports our mission,” she said. “Humans and machines together are at the heart of our use of AI.”

AI is nothing new to the CIA. The agency has been researching applications of data science and AI since around 2000, Raman said, particularly in the areas of natural language processing (i.e., text analysis), computer vision (image analysis) and video analysis. The CIA tries to stay current with newer trends like generative AI, she added, following a roadmap informed by both industry and academia.

“When we think about the huge amounts of data we have to process at the agency, content triage is one area where generative AI can make a difference,” Raman said. “We're looking at things like search and discovery assistance, ideation assistance, and helping to develop counterarguments to counteract potential analytical bias.”

There is a sense of urgency within the U.S. intelligence community to deploy tools that could help the CIA combat growing geopolitical tensions around the world, from terror threats motivated by the war in Gaza to disinformation campaigns by foreign actors (e.g., China, Russia). Last year, the Special Competitive Studies Project, a high-profile advisory group focused on AI in national security, set out a two-year plan for domestic intelligence services to move beyond experiments and limited pilot projects and implement generative AI at scale.

One generative AI-powered tool the CIA has developed is called Osiris, which is similar to OpenAI's ChatGPT but adapted for intelligence applications. It summarizes data (currently only unclassified and publicly or commercially available data) and allows analysts to dig deeper by asking follow-up questions in plain English.

Osiris is now used by hundreds of analysts, not only inside the CIA but also across the 18 U.S. intelligence agencies. Raman declined to say whether the software was developed in-house or built on third-party technology, but said the CIA has partnerships with reputable vendors.

“We use commercial services,” Raman said, adding that the CIA also uses AI tools for tasks such as translation and alerting analysts to potentially important developments outside of work hours. “We need to work closely with the private sector to be able to provide not only the larger services and solutions you've heard about, but also niche services from non-traditional vendors that you may not have considered.”

A delicate technology

There are plenty of reasons to be skeptical of, and concerned about, the CIA's use of artificial intelligence.

In February 2022, Senators Ron Wyden (D-OR) and Martin Heinrich (D-NM) revealed in a public letter that the CIA, although generally prohibited from investigating Americans and American companies, maintains a secret, undisclosed data repository that includes information collected on U.S. citizens. And last year, a report by the Office of the Director of National Intelligence showed that U.S. intelligence agencies, including the CIA, buy data on Americans from data brokers such as LexisNexis and Sayari Analytics with little oversight.

If the CIA ever used AI to sift through this data, many Americans would surely object. It would be a clear violation of civil liberties and, given AI's limitations, could lead to significantly unfair outcomes.

Several studies have shown that predictive policing algorithms from companies such as Geolitica are easily skewed by arrest rates and tend to disproportionately flag Black communities. Other studies suggest that facial recognition results in higher rates of misidentification for people of color than for white people.

Bias aside, even the best AI today hallucinates, inventing facts and figures in response to queries. Take, for example, Microsoft's meeting summarization software, which occasionally attributes quotes to non-existent people. One can imagine how this could become a problem in intelligence work, where accuracy and verifiability are of paramount importance.

Raman insisted that the CIA not only complies with all U.S. laws, but also “follows all ethical guidelines” and uses AI “in a way that mitigates bias.”

“I would call it a thoughtful approach (to AI),” she said. “I would say that in our approach, we want our users to understand as much as possible about the AI system they're using. Developing responsible AI means involving all stakeholders; that means AI developers, that means our Office of Privacy and Civil Liberties (and so on).”

Raman's point: whatever an AI system is designed to do, it is important that the system's designers make clear the areas where it could fall short. In a recent study, researchers at North Carolina State University found that AI tools, including facial recognition and gunshot detection algorithms, were being used by police officers who were not familiar with the technologies or their shortcomings.

A particularly blatant example of law enforcement misuse of AI, one that may have arisen out of such ignorance: the NYPD reportedly once used photos of celebrities, distorted images and sketches to generate facial recognition matches on suspects when surveillance photos yielded no results.

“Any output generated by AI should be clearly understandable to users, and of course that means labeling AI-generated content and providing clear explanations of how AI systems work,” Raman said. “In everything we do at the agency, we adhere to our regulatory requirements, and we make sure that our users, our partners and our stakeholders are aware of all relevant laws, regulations and policies governing the use of our AI systems, and we comply with all of those rules.”

This reporter, at least, hopes that's true.
