Verizon executive unveils responsible AI strategy amid 'Wild West' landscape

Verizon is using generative AI applications to enhance customer support and experience for its more than 100 million phone customers, and is expanding its responsible AI team to mitigate the technology's risks.

Michael Raj, vice president of AI for network enablement at Verizon, said the company is implementing several measures as part of this initiative, including requiring data scientists to register AI models with a central data team to ensure security audits, and increasing scrutiny of the types of large language models (LLMs) used in Verizon's applications to minimize bias and prevent harmful language.

AI auditing is like the "Wild West"

Raj spoke last week at the VentureBeat AI Impact event in New York City, which focused on auditing generative AI applications, where the LLMs used can be notoriously unpredictable. He and other speakers agreed that the field of AI auditing is still in its early stages and that companies need to accelerate their efforts in the area because regulators haven't yet set specific guidelines.

The steady drumbeat of high-profile errors by AI agents in customer service, from big names like Chevy, Air Canada and even New York City, and even from leading LLM providers like Google, whose Gemini model generated images of Black Nazis, has brought the need for greater reliability back into focus.

The technology is evolving so quickly that government regulators are releasing only high-level guidelines and leaving private companies to work out the details behind them, said Justin Greenberger, senior vice president at UiPath, which helps large companies with automation, including generative AI. "In some ways, it feels like the Wild West," added Rebecca Qian, co-founder of Patronus AI, a company that helps enterprises audit their LLM projects.

Many companies are currently focused on the first step of AI governance: defining rules and policies for the use of generative AI. The next step is audits to ensure that applications adhere to those policies. However, very few companies have the resources to do this well, the speakers said.

A recent Accenture report found that while 96% of organizations support some level of government regulation of AI, only 2% have fully implemented responsible AI across their operations.

Verizon's focus is on supporting agents with intelligent AI

Raj explained that Verizon wants to become a leading player in applied AI, with a focus on equipping field staff with an intelligent conversational assistant to help them manage customer interactions. These customer service employees, whether in call centers or Verizon stores, face information overload, but a generative AI-based assistant can ease that burden. It can immediately provide staff with personalized details about a customer's plan and preferences and handle "80 percent of the repetitive tasks," such as questions about different devices and phone plans. This allows staff to focus on the "20 percent of issues that truly require human intervention" and provide personalized recommendations.

Verizon also uses generative AI and other deep learning technologies to improve the customer experience across its network and website and to learn more about its services. Raj mentioned that the company has implemented models to predict the propensity to churn among its more than 100 million customers. (See video of his full remarks below.)

Verizon has made significant investments in AI governance, including tracking model drift and bias, Raj said. This was made possible by consolidating all governance functions into a single "AI and Data" organization, which includes the Responsible AI unit. Raj noted that this unit is being "built out" to implement standards around privacy and respectful language. He said the unit is a necessary "single point of contact" that helps with anything related to AI security and works closely with the CISO office as well as procurement managers. Verizon released its roadmap for responsible AI earlier this year in a whitepaper produced in collaboration with Northeastern University (download PDF).

To ensure AI models are properly managed, Verizon has made data sets available to developers and engineers so that they can interact directly with approved models rather than using unapproved ones, Raj said.

This trend of registering AI models is likely to become more common among other B2C companies over time, said UiPath's Greenberger. Models need to be "version controlled and audited," similar to how pharmaceutical companies handle drugs. He suggested that companies should assess their risk profiles more frequently because of the rapid pace of technological change. Legislation is being discussed in the U.S. and other countries to mandate model registration because these models are trained on publicly available data, Greenberger added.

The emergence of AI governance units

Most advanced companies are establishing centralized AI teams, similar to Verizon, Greenberger said. The emergence of "AI governance" groups is also gaining momentum in many companies. Working with third-party LLM providers is also forcing companies to rethink how they approach collaboration: each provider offers multiple LLMs with differing and fast-changing capabilities.

The nature of generative AI applications is fundamentally different from other technologies, making it difficult to legislate the review process. LLMs, by their nature, produce unpredictable results, Patronus AI's Qian said, leading to safety errors, bias, hallucinations and uncertain outcomes. This requires regulations for each category of these errors, as well as industry-specific regulations, she said. In sectors such as transportation or health, errors can mean "life or death," while in e-commerce recommendations, the stakes are lower, Qian explained.

In the nascent field of AI testing, creating transparency in models is a major challenge. Traditional AI could be understood by examining its code, but generative AI is more complex. Getting even the fundamentals of testing right is a challenge most companies can't yet meet: only about 5% have completed pilot projects focused on bias-free and accountable AI, Greenberger estimates.

As the AI landscape continues to evolve at a rapid pace, Verizon's commitment to responsible AI can serve as an example and benchmark for the industry, while the many ways LLMs fail underscore the urgent need for greater governance, transparency and ethical standards in their adoption. Watch the video of the speaker's full Q&A below.
