
AI governance is evolving rapidly – here's how government agencies must prepare

The global AI governance landscape is complex and rapidly evolving. Key themes and concerns are emerging, but government agencies can stay ahead of the curve by assessing their agency-specific priorities and processes. Compliance with official policies through auditing tools and other measures is merely the final step. The groundwork for effectively operationalizing governance is human-centered and includes securing funded mandates, identifying accountable leaders, developing agency-wide AI literacy and centers of excellence, and incorporating insights from academia, nonprofits and the private sector.

The global governance landscape

At the time of this writing, the OECD Policy Observatory lists 668 national AI governance initiatives from 69 countries, territories and the EU. These include national strategies, agendas and plans; AI coordination or monitoring bodies; public consultations with stakeholders or experts; and initiatives to use AI in the public sector. The OECD also places legally enforceable AI regulations and standards in a separate category from the initiatives above, where it lists a further 337 entries.

The term AI governance itself can be difficult to define. In the context of AI, it may refer to the safety and ethics guardrails of AI tools and systems, to policies concerning data access and model usage, or to government-mandated regulation itself. National and international policies accordingly address these overlapping and intersecting definitions in different ways. For all of these reasons, AI governance should begin at the concept stage and continue throughout the lifecycle of the AI solution.

Common challenges, common themes

Broadly speaking, government agencies pursue governance that supports and balances societal concerns of economic prosperity, national security and political dynamics, as seen in the recent White House order establishing AI governance bodies in US federal agencies. Meanwhile, many private companies appear to prioritize economic prosperity, focusing on the efficiency and productivity gains that drive business success and shareholder value. Some companies, such as IBM, emphasize integrating guardrails into AI workflows.

Nongovernmental organizations, academics and other experts also publish guidance that can be useful to public sector agencies. This year, the World Economic Forum's AI Governance Alliance released the Presidio AI Framework (PDF), which “provides a structured approach to the safe development, deployment and use of generative AI.” The framework highlights gaps and opportunities in addressing safety concerns, viewed from the perspective of four key stakeholders: AI model creators, AI model adapters, AI model users and AI application users.

Some common regulatory themes are emerging across industries and sectors. For example, it is becoming increasingly advisable to provide end users with transparency about the presence and use of any AI they are interacting with. Leaders must ensure reliable performance and resilience to attack, as well as an actionable commitment to social responsibility. This includes prioritizing fairness and impartiality in training data and outputs, minimizing environmental impact, and increasing accountability by designating responsible individuals and providing organization-wide education.

Guidelines are not enough

Whether governance policies rest on soft law or formal enforcement, and however comprehensively or scholarly they are formulated, they remain only principles. What matters is how organizations put them into action. For example, New York City published its own AI Action Plan in October 2023 and formalized its AI Principles in March 2024. Although these principles were consistent with the themes above – including the statement that AI tools “should be tested before use” – the AI-powered chatbot the city rolled out to answer questions about starting and operating a business gave answers that encouraged users to break the law. Where did implementation go wrong?

Operationalizing governance requires a human-centered, accountable and participatory approach. Let's look at three key actions agencies must take:

1. Designate accountable leaders and fund their mandates

Trust cannot exist without accountability. To implement governance frameworks, government agencies need accountable leaders with funded mandates to get the job done. To name just one knowledge gap: several senior technology leaders we spoke with lacked an understanding of how data can be biased. Data is an artifact of human experience, prone to calcifying worldviews and perpetuating inequity; AI can be viewed as a mirror that reflects our biases back at us. It is imperative that we identify accountable leaders who understand this, and who can be both funded and held responsible for ensuring their AI is operated ethically and aligned with the values of the communities it serves.

2. Provide applied governance training

We are seeing many agencies host AI “innovation days” and hackathons aimed at improving operational efficiency (for example, reducing costs, engaging residents or employees, and improving other KPIs). We recommend expanding the scope of these hackathons to address AI governance challenges through the following steps:

  • Step 1: Have a governance lead candidate deliver a keynote on AI ethics to hackathon participants three months before pilots are unveiled.
  • Step 2: Have the government agency that sets the policy act as judge for the event. Provide criteria for assessing pilot projects that include AI governance artifacts (documentation deliverables) such as fact sheets, audit reports, impact-level analyses (intended, unintended, primary and secondary impacts), and functional and non-functional requirements of the model in operation; a minimal fact sheet sketch follows these steps.
  • Step 3: Six to eight weeks before the presentation date, give teams hands-on training in developing these artifacts through workshops on their specific use cases. Empower development teams by inviting diverse, multidisciplinary participants into these workshops to assess ethics and model risks.
  • Step 4: On the day of the event, have each team present their work holistically, showing how they have assessed and would mitigate the various risks of their use case. Judges with subject-matter, regulatory and cybersecurity expertise should question and evaluate each team's work.

These timelines are based on our experience delivering hands-on training to practitioners on well-scoped use cases. They give aspiring leaders the chance to practice governance under a coach's guidance while placing team members in the role of informed governance judges.
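To make the artifact requirement in Step 2 concrete, below is a minimal sketch of what a machine-readable model fact sheet for a hackathon pilot might look like. The schema, field names and example values are illustrative assumptions, not a prescribed standard; real fact sheet programs define richer structures.

    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class ModelFactSheet:
        """Illustrative governance artifact for a hackathon pilot (hypothetical schema)."""
        model_name: str
        intended_purpose: str             # what the model is for, in plain language
        accountable_owner: str            # the named, funded, accountable leader
        training_data_sources: list[str]  # provenance of the training data
        known_limitations: list[str]      # documented failure modes and caveats
        intended_impacts: list[str]       # primary and secondary intended effects
        unintended_impacts: list[str]     # anticipated secondary or disparate impacts
        mitigations: list[str] = field(default_factory=list)

        def to_json(self) -> str:
            """Serialize the fact sheet so it can be filed alongside audit reports."""
            return json.dumps(asdict(self), indent=2)

    # Example entry a hackathon team might submit for judging (fictional values):
    sheet = ModelFactSheet(
        model_name="permit-triage-pilot",
        intended_purpose="Route resident permit questions to the right department",
        accountable_owner="Office of Digital Services, named governance lead",
        training_data_sources=["historical service tickets, 2019-2023"],
        known_limitations=["trained on English-language tickets only"],
        intended_impacts=["faster response times for residents"],
        unintended_impacts=["non-English speakers may receive lower-quality routing"],
        mitigations=["human review of low-confidence routings"],
    )
    print(sheet.to_json())

A structured artifact like this lets judges compare pilots on a like-for-like basis, which is far harder to do with free-form slide decks.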

But hackathons are not enough. No one can learn everything in three months. Agencies should invest in building a culture of AI literacy that encourages continuous learning and discards old assumptions when necessary.

3. Assess your inventory beyond algorithmic impact assessments

Organizations that develop many AI models often rely on algorithmic impact assessment forms as the primary mechanism for collecting essential metadata about their assets and for assessing and mitigating the risks of AI models before deploying them. These forms typically ask AI model owners or procurers about the purpose of the AI model, its training data and approach, the responsible parties, and concerns about disparate impact.
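As a rough illustration, such a form can be represented as data with a trivial completeness check. The questions below paraphrase the categories just described and are hypothetical, not any agency's actual form; note how easily an automated check is satisfied by low-effort answers, which previews the concerns that follow.

    # Hypothetical excerpt of an algorithmic impact assessment form, paraphrasing
    # the categories described above; not any agency's actual questions.
    IMPACT_ASSESSMENT_QUESTIONS = {
        "purpose": "What decision or task will this AI model support?",
        "training_data": "What data was the model trained on, and how was it collected?",
        "responsible_party": "Who is accountable for the model's outcomes?",
        "disparate_impact": "Which groups could be affected differently, and how was this tested?",
    }

    def missing_answers(responses: dict[str, str]) -> list[str]:
        """Flag unanswered or near-empty responses for human follow-up."""
        return [
            key for key in IMPACT_ASSESSMENT_QUESTIONS
            if len(responses.get(key, "").split()) < 3
        ]

A model owner who answers the disparate-impact question with “I have the best intentions” passes this check, which is precisely why such forms cannot stand alone.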

There are many reasons for concern when these forms are used in isolation, without rigorous education, communication and cultural considerations. These include:

  1. Incentives: Are people incentivized, or disincentivized, to complete these forms rigorously? We find that most people rush through them because they have quotas to meet.
  2. Liability for risks: These forms may imply that model owners are absolved of risk because they used a particular technology or cloud host, or purchased a model from a third party.
  3. Relevant definitions of AI: Model owners may not realize that what they are procuring or deploying meets a regulation's definition of AI or intelligent automation.
  4. Blindness to disparate impact: By placing the responsibility for completing and submitting an algorithmic assessment form on a single individual, one could argue that an accurate assessment of disparate impact is precluded from the start.

We have seen worrying form submissions from AI practitioners across geographies and education levels, including from individuals who said they had read the published policy and understood the principles. Entries have included “How could my AI model be unfair if I don't collect personal data?” and “There is no risk of disparate impact because I have the best intentions.” These point to the urgent need for applied training and an organizational culture that consistently measures model behavior against clearly defined ethical guidelines.

Create a culture of responsibility and collaboration

A participatory and inclusive culture is essential as organizations grapple with a technology with such far-reaching impacts. As we have discussed previously, diversity is not a political talking point but a mathematical one. Multidisciplinary centers of excellence are essential to ensure employees are informed, responsible AI users who understand the risks and disparate impacts involved. Organizations must make governance an integral part of collaborative innovation efforts and emphasize that accountability lies with everyone, not only with model owners. They must identify truly accountable leaders who bring a socio-technical perspective to governance issues and mitigate AI risks regardless of where the guidance comes from – governmental, non-governmental or academic.

IBM Consulting can help organizations operationalize responsible AI governance


For more information on this topic, see a summary of a recent IBM Center for The Business of Government roundtable with government leaders and stakeholders on how the responsible use of artificial intelligence can benefit the public through improved delivery of government services.
