AI Governance

AI governance is emerging globally as policymakers and regulators aim to make AI trustworthy. In this chapter, we propose a model for addressing the increasingly complex set of new rules in a seamless and consistent way.

Introduction

Making AI trustworthy has become a core goal for policymakers and regulators globally. In 2021, the European Union introduced the first major legislative proposal for a dedicated cross-sector framework for regulating AI, commonly referred to as the AI Act. Since then, several other proposals and initiatives have been introduced worldwide. The emergence of new regulations in this area is predominantly driven by the policy objective of protecting end-users, consumers and individuals from potential harms arising from the deployment of autonomous and semi-autonomous technologies. Essentially, emerging global AI regulation aims to achieve digital trust.

In practice, this means that organizations developing or implementing AI systems face an increasingly complex set of new rules that require an effective management approach. AI governance has therefore become a pillar for addressing these rules in a way that enables AI technology to be successfully deployed in a legally compliant way. Doing this at a global scale requires vision and the implementation of practices that are aligned with business objectives. We have developed a model that is specifically aimed at meeting this need in a seamless and consistent way.

How can organizations successfully deploy AI technology at a global scale in a legally compliant way?

Potential risks widely regarded as capable of eroding trust in AI include:

Algorithmic bias – The potential for outputs from an AI system to be biased in a way that results in unfair or unlawful discrimination against specific groups or individuals.

Opacity – Due to model complexity, operators of an AI system or individuals impacted by its outputs may find it challenging to understand and interpret the rationale for those outputs in a given context.

Performance – Unanticipated inaccuracies, unreliability and other performance issues may arise, and even go undetected.

Misinformation & disinformation – Content produced by generative AI can misinform due to performance issues. Equally, a model may be intentionally manipulated to produce false or inaccurate content used to spread disinformation.

Security attacks – Bad actors may seek to launch attacks against the AI system, aiming to gain access to confidential information or personal data, or manipulate the system’s behavior.

Safety – Performance issues and malicious attacks can, in certain contexts, result in the AI system becoming unsafe. When the system operates in a physical environment, such safety concerns could pose risks of injury or death to individuals.

In light of this, policymakers across many jurisdictions are focusing on introducing principles and obligations for developers and deployers of AI that are broadly similar. Our recently published global survey of AI principles identifies eight key areas of focus for policymakers across jurisdictions including the US, European Union, United Kingdom, China, Japan and Australia.

The consistency with which these principles and obligations are being deployed in regulatory proposals allows developers and deployers to adopt a global approach to AI governance. Simplicity and consistency are key to helping build trust in these systems. Our proposed global approach to AI governance comprises the following core components:

Oversight

To ensure the effective implementation of an AI governance program, it is vital to have adequate oversight and coordination among different departments or teams in an organization. Based on our experience, it is increasingly common practice to establish an AI governance committee (or similar) responsible for providing direction, setting objectives and managing overall enterprise risk.

It is also advisable to undertake an independent audit of existing compliance standards (under legal privilege) in order to identify gaps and inform future priorities.

Responsible AI by design

To comply with many of the obligations proposed under emerging AI regulations, it will be necessary to integrate various technical governance measures into the design, development and deployment of AI tools. These measures are intended to help mitigate the risks identified above; in this sense, “responsible” AI means trustworthy AI. They include:

Data governance – Taking appropriate steps to assess the quality of training and testing data sets, and making interventions to address potentially harmful biases (an illustrative sketch of such a check follows this list).

Explainability – Integrating explainability features into AI models to allow for easier traceability of outputs.

Performance and accuracy – Ensuring that models launched into a live production environment perform as expected and that, where feasible, inaccuracies are identified and addressed.

Robustness – Addressing the risks of third-party malicious attacks on models, which may result in data leaks or manipulation of an AI system.
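
To make these measures concrete, the sketch below (referenced in the data governance point above) shows the kind of simple automated bias check a development team might run over a training data set before release. It is a minimal illustration under stated assumptions: the column names, the pandas-based approach and the 0.8 “four-fifths” threshold are choices made for the example, not requirements drawn from any particular regulation.

```python
# Illustrative sketch only: a simple pre-training data check that flags
# large gaps in positive-outcome rates between demographic groups.
# Column names ("group", "label") and the 0.8 threshold are assumptions.
import pandas as pd

def disparate_impact_check(df: pd.DataFrame,
                           group_col: str = "group",
                           label_col: str = "label",
                           threshold: float = 0.8) -> dict:
    """Return per-group positive rates and flag ratios below the threshold."""
    rates = df.groupby(group_col)[label_col].mean()  # positive-outcome rate per group
    ratio = rates.min() / rates.max()                # worst-case ratio across groups
    return {
        "positive_rates": rates.to_dict(),
        "min_max_ratio": float(ratio),
        "flagged": bool(ratio < threshold),          # True => review for harmful bias
    }

# Toy example: group B receives positive outcomes half as often as group A,
# so the check flags the data set for review.
data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 1, 0, 1, 0, 0],
})
print(disparate_impact_check(data))
```

A check of this kind is a starting point for review by the governance function, not a substitute for it; whether a flagged disparity amounts to unfair or unlawful discrimination remains a contextual, legal and ethical judgment.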

Documentation

To set consistent standards, demonstrate accountability and provide evidence of responsible AI deployment, it is important to produce appropriate documentation. This may include internal policies and procedures, AI impact assessments and technical documentation. Impact assessments are likely to be a particularly important risk management tool, providing a basis for documenting potential risks and mitigations in relation to AI models.

Quality assurance

A key component of the AI Act is ensuring the consistent performance of AI systems throughout their lifecycle and promptly detecting and addressing issues when they arise. Implementing appropriate human oversight, together with testing and ongoing monitoring of systems, is necessary to detect inaccuracies, errors, unexpected behaviors and potentially harmful biases. Employees should receive training on internal AI governance practices to maintain consistent standards throughout the organization.
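
As an illustration of what ongoing monitoring can look like in practice, the sketch below compares the distribution of a single model input observed in production against its training-time baseline, using a two-sample Kolmogorov-Smirnov test from scipy. The synthetic data, the 0.05 significance level and the alerting logic are assumptions made for the example; a production monitoring program would track many inputs and outputs over time and route alerts to human reviewers.

```python
# Illustrative sketch only: flag distribution drift in one model input by
# comparing recent production values against the training-time baseline.
# The 0.05 significance level is an assumption chosen for the example.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline: np.ndarray, production: np.ndarray,
                alpha: float = 0.05) -> bool:
    """Return True when the production distribution differs significantly."""
    result = ks_2samp(baseline, production)  # two-sample Kolmogorov-Smirnov test
    return result.pvalue < alpha

rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)    # training-time values
production = rng.normal(loc=0.3, scale=1.0, size=1_000)  # shifted live values

if drift_alert(baseline, production):
    print("Drift detected: schedule human review and model re-validation.")
```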

Transparency

Regulations may require providers to develop detailed instructions for deployers of an AI system, outlining how the system should be operated and any limitations of the technology. End-users should also receive information about how models operate and be made aware that they are interfacing with artificial intelligence.

Liability

In addition to the emergence of new AI regulations, certain jurisdictions are also considering rules for the allocation of liability for faults and damage caused by AI systems. This includes the EU’s AI Liability Directive, which is currently being negotiated. Therefore, it is important to assess whether existing contractual protections are sufficient and whether disclaimers or exclusions should apply to the use of systems in particular environments or for specific high-risk purposes.

Ultimately, AI governance is about understanding the objectives of policy makers and legislators and deploying effective and transparent practices that can maximize the benefits of AI and minimize risk in a lawful and ethical manner. This is essential to responsible deployment and establishment of trust by demonstrating safe application of the technology over time.

Key recommendations

1. Undertake a global regulatory applicability assessment

To understand the full impact of emerging AI regulations, we recommend performing a global regulatory assessment. The objective is to identify the potential applicability and impact of any proposed laws on both an organization’s existing projects and its potential future projects.

2. Perform a compliance assessment

Considering the applicable requirements, a compliance assessment should be performed to identify the practices already in place, as well as those on the horizon (e.g., data governance, bias mitigations), that can support future adherence to AI regulations.

3. Implement an AI impact assessment tool

This tool should be used to perform and document risk assessments related to existing AI systems and should also be deployed for future solutions being designed and developed. As mentioned above, responsible AI programs involve embedded audit and feedback loops which allow risk identification and mitigation not just in theory, but live and in practice. The importance of impact assessment tools, together with appropriately structured organization level governance to review, understand and act on assessment outcomes, cannot be over-emphasized.
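
By way of illustration only, the sketch below shows how a minimal impact-assessment record might be captured as structured data, so that risks, mitigations and accountable owners can be stored, reviewed and audited alongside other governance documentation. The fields and example values are assumptions made for the sketch, not a prescribed regulatory template.

```python
# Illustrative sketch only: a minimal, auditable AI impact-assessment record.
# Fields and values are assumptions, not a prescribed regulatory template.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class RiskItem:
    description: str  # e.g., "scores may disadvantage a protected group"
    severity: str     # e.g., "low" / "medium" / "high"
    mitigation: str   # planned or implemented control
    owner: str        # accountable person or team

@dataclass
class AIImpactAssessment:
    system_name: str
    intended_purpose: str
    assessed_on: date
    risks: list[RiskItem] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize for storage alongside other governance documentation."""
        return json.dumps(asdict(self), default=str, indent=2)

assessment = AIImpactAssessment(
    system_name="resume-screening-model",
    intended_purpose="Rank job applications for recruiter review",
    assessed_on=date(2024, 1, 15),
    risks=[RiskItem(
        description="Scores may disadvantage candidates from under-represented groups",
        severity="high",
        mitigation="Pre-release bias testing; recruiter retains final decision",
        owner="HR analytics team",
    )],
)
print(assessment.to_json())
```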
