This topic is currently making headlines and giving executives sleepless nights: how do companies audit their AI models for bias, performance, and ethical standards?
VentureBeat welcomed UiPath and others to the recent VB AI Impact Tour in New York City to share methodologies, best practices, and real-world case studies. Michael Raj, VP of Network Enablement (AI and Data) at Verizon Communications, Rebecca Qian, co-founder and CTO at Patronus AI, and Matt Turck, managing director at FirstMark, brought diverse viewpoints. To close out the event, VB CEO Matt Marshall spoke with Justin Greenberger, SVP Client Success at UiPath, about what audit success looks like and where to begin.
“It used to be that the risk landscape was assessed annually,” said Greenberger. “I think the risk landscape now needs to be assessed almost monthly. Do you understand your risks? Do you understand the controls that mitigate them and how you will assess them? The IIA (Institute of Internal Auditors) just released its updated AI framework. It's good, but it's also a lot of fundamentals. What are your KPIs to monitor? What is the transparency of the data source? Do you have a source? Are there responsibilities? Do you have people signing off on the data sources? The assessment cycle needs to be much tighter.”
He pointed to GDPR, which was widely seen as over-regulation at the time but ultimately formed the basis of data security for many companies today. The interesting thing about generative AI is that, unlike the usual lag seen in countries with stricter regulations, markets worldwide are keeping pace and developing at essentially the same speed. The competitive landscape is leveled as companies weigh their risk tolerance across all areas of technology as well as their potential impact.
Challenges posed by the growing number of pilot projects and proofs of concept
True enterprise-wide transformation is still in its infancy, but many companies have initial projects underway and are testing the waters to some extent. Some challenges remain the same – for example, finding subject matter experts with the contextual understanding and critical thinking skills needed to set the parameters of use cases and their implementation. Another common challenge in auditing and governance is empowerment and engagement, which entails training employees, although at this stage of the AI revolution the full scope of what employees should and shouldn't know or do is not yet entirely clear, says Greenberger, especially as technologies like deepfakes gain traction.
Finally, the component-based implementation of generative AI still needs to catch up. Companies are largely integrating generative AI into existing workflows rather than redesigning entire processes, and audits will need to adapt as it becomes more widespread – for example, to monitor how private data is incorporated and used in a medical use case.
How the role of humans will evolve
Humans will remain in the loop for now as risks and controls evolve along with the technology, Greenberger said. A user first makes a request, then the AI runs the calculations and provides the data the employee needs to do their job. For a logistics provider, for example, that might be a job offer the employee accepts and presents to the customer. That decision and the direct interaction with the customer remain a human role, but one that may end up on the hit list.
“The decision-making process is still driven by people,” Greenberger said. “Over time, as we get used to audit controls and spot checks, we'll see that lessen. Will people take on more of the creative and emotional side? That's what we're being taught now as managers and leaders. Focus on creative and emotional concepts, because your decision-making responsibility could be taken away from you. It's more a matter of time than anything else.”