Why We Need to Govern AI

There is no one-size-fits-all when it comes to AI governance, but failing to adequately address it can have far-reaching ramifications, including financial loss and reputational damage.

Harnessed appropriately, Artificial Intelligence (AI) can deliver significant benefits for business and society, and support decision-making that is fairer, safer, more inclusive and better informed. But these benefits can only be realised when we consider how AI’s development and usage should be governed, and what degree of legal and ethical oversight is needed.

Regulators around the globe continue to grapple with how best to encourage the responsible development and adoption of AI technology. Many governments and regulatory bodies have released high-level principles on AI ethics and governance, and a number of proposals have emerged from international organisations in recent years, as geopolitical entities such as the UN, the EU and the OECD have begun to encourage discussion of AI regulation.

The goal behind many of these recommendations is to foster a human-centred approach to the development of AI, ensuring a minimum set of guarantees for all citizens. The OECD, for instance, has adopted its Recommendation of the Council on Artificial Intelligence, a set of general principles signed by 42 countries. The document calls for both responsibility and transparency in the creation and use of the technology.

In 2020, Singapore’s Personal Data Protection Commission (PDPC) released the second edition of its Model AI Governance Framework, along with the Implementation and Self-Assessment Guide for Organisations (ISAGO), developed in collaboration with the World Economic Forum; another example of a practical approach to encouraging responsible AI adoption.

Large organisations that rely heavily on machine learning (ML), such as Facebook, Google, and Uber, have built systems to automate the management of AI governance. Uber, for example, has an AI workflow management system, Michelangelo, a combination of open-source systems and components built in-house that enforces a standard workflow across data management, model training, model evaluation, model deployment, prediction making, and prediction monitoring.

Yet only a little more than 20 per cent of organisations have implemented formal AI governance processes or tools, and most think of governance as optional rather than essential, according to O’Reilly’s AI Adoption in the Enterprise 2020 report.

Since last year, there has been even more focus on the role of AI across the environmental, social, and governance (ESG) landscape. This includes AI use cases and applications in healthcare, education, law enforcement and financial services among others. 

An AI governance plan refers to the idea that AI systems and machine learning applications need an underlying legal basis and policy framework, and that practices and possible outcomes are researched thoroughly and implemented fairly. It matters not just for heavily regulated industries such as finance, banking and healthcare, but for any business making decisions based on an algorithm’s output that will affect the company and its clients.

The main elements of AI governance are: 

AI model definition: The purpose of the AI model must be clearly defined. What does the organisation wish to achieve, and is the AI model capable of achieving it?

AI model management: It is important to develop an AI model catalogue to ensure the right model is being used for the right purpose, and to track what each model can and cannot do, which departments are using which models for what purpose, and who built each model; a minimal sketch of a catalogue entry follows.
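
As a concrete illustration, here is a minimal sketch of what a catalogue entry might record; the field names and the register helper are illustrative assumptions, not features of any particular tool.

```python
# A minimal, illustrative model catalogue entry; field names are assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelCatalogueEntry:
    name: str                    # e.g. "churn-predictor"
    version: str                 # e.g. "2.3.1"
    purpose: str                 # what the model is approved to do
    built_by: str                # who built the model
    consuming_departments: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)  # what it can't do
    approved_on: date | None = None

catalogue: dict[str, ModelCatalogueEntry] = {}

def register(entry: ModelCatalogueEntry) -> None:
    """Add a model to the catalogue, keyed by name and version."""
    catalogue[f"{entry.name}:{entry.version}"] = entry

register(ModelCatalogueEntry(
    name="churn-predictor", version="2.3.1",
    purpose="flag accounts at risk of cancelling",
    built_by="data science platform team",
    consuming_departments=["marketing"],
    known_limitations=["not validated for enterprise accounts"],
))
```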

Data governance: Because successful AI outcomes depend on high-quality data, effective data governance is essential. Experts say that a business with a data policy already in place is almost halfway there, because the relationship between data and AI is so close. What data do you have? Where is it coming from? How is the data being altered? These are the questions a data policy answers; the sketch below shows one way lineage might be recorded.
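
To make those questions concrete, the sketch below records where a dataset came from and how it has been altered; the DatasetRecord class and its fields are hypothetical, shown only to illustrate the idea of lineage.

```python
# A minimal, illustrative data lineage record; the class is hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    name: str                                      # what data do you have?
    source: str                                    # where is it coming from?
    transformations: list[str] = field(default_factory=list)  # how is it altered?

    def log_transformation(self, description: str) -> None:
        """Append a timestamped note describing how the data was altered."""
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.transformations.append(f"{stamp}: {description}")

customers = DatasetRecord(name="customers", source="CRM export")
customers.log_transformation("dropped rows with missing email addresses")
customers.log_transformation("normalised country names to ISO 3166-1 codes")
print(customers.transformations)
```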

At the same time, it is also important to recognise that data scientists have different approaches and skill sets from application developers. In other words, there need to be clear-cut requirements and principles.

In the competitive rush to design, develop, and deploy AI-driven applications, many organisations fail to give AI governance due consideration.

Also, since the automation of AI governance is in its nascent stages, AI governance requires a hands-on approach. AI introduces unique problems: training data is often flawed, with errors, duplications and even bias, and there is the issue of model drift, where the AI degrades over time because the algorithms and data no longer adequately reflect changes in the real world. There is an increasing realisation among organisations that failing to adequately address AI governance can have far-reaching ramifications, just as failing to deal with data security can cause serious financial harm and damage a company’s reputation. One common way to watch for drift is sketched below.
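
As an illustration, the sketch below compares a live score distribution against the training distribution using the Population Stability Index (PSI), one common rule-of-thumb drift measure; the data is synthetic and the threshold quoted in the comment is a convention, not a regulatory standard.

```python
# Illustrative drift check: Population Stability Index between a training
# sample and live data. Synthetic data; thresholds are rules of thumb.
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference sample ('expected') and a live sample ('actual')."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) on empty bins.
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)
live_scores = rng.normal(0.3, 1.1, 10_000)   # the world has shifted slightly
print(f"PSI = {population_stability_index(train_scores, live_scores):.3f}")
# A PSI above roughly 0.25 is often read as significant drift.
```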

An AI governance plan should be actionable, incorporating the concerns of all stakeholders, and AI engineers, data engineers and operations engineers need to refer to this plan for all projects.

The most complex part of the process is setting the objectives of the AI plan.
Here are the key areas for suggested actions.

Explainability standards

  • Assemble a collection of best-practice explanations, along with commentary on their praiseworthy characteristics, to provide practical inspiration. Assess whether your company is at the right stage of its AI journey, prioritise goals, and take on the governance tasks appropriate to the sophistication of your AI projects.
  • Provide guidelines for hypothetical use cases to balance the benefits of using complex AI systems against the practical constraints that different standards of explainability impose. 
  • Describe minimum acceptable standards and application contexts; one example of such an explanation artefact is sketched after this list.
  • Provide an indication to individuals of what will happen with their data if the purposes for processing are unclear at the outset. Update and proactively communicate privacy information as processing purposes become clearer.
  • Design a process to update the privacy information and communicate the changes to individuals before starting any new processing where there are plans to use personal data for a new purpose within AI processing.
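
As one example of what a minimum-standard explanation artefact could look like, the sketch below ranks the features that most influence a model’s predictions using permutation importance; the dataset and model are illustrative stand-ins, not recommendations.

```python
# Illustrative explanation artefact: rank features by permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)  # stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Surface the top drivers in plain language, suitable for an explanation log.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for feature, importance in ranked[:5]:
    print(f"{feature}: mean importance {importance:.4f}")
```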

Fairness appraisal

  • Articulate frameworks to balance competing goals and definitions of fairness; a minimal check against one such definition is sketched after this list.
  • Clarify the relative prioritisation of competing factors in some common hypothetical situations, even if this will likely differ across cultures and geographies. 
  • Inform individuals if they are tracked by an AI system or if their personal data is collected or used for analysis.
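
For illustration, here is one narrow check a fairness appraisal might include, demographic parity: comparing positive-outcome rates across groups. It captures a single definition of fairness, and, as noted above, real appraisals must weigh several competing definitions; the data and group labels are hypothetical.

```python
# Illustrative fairness check: demographic parity gap across groups.
def demographic_parity_gap(outcomes: list[int], groups: list[str]) -> float:
    """Largest difference in positive rates (outcome == 1) between groups."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan approvals (1 = approved) for two applicant groups.
approved = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_gap(approved, group))  # 0.6 - 0.4 = 0.2
```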

Safety considerations 

  • Outline basic workflows and standards of documentation for specific application contexts that are sufficient to show due diligence in carrying out safety checks.
  • Establish safety certification marks to signify that a service has been assessed as passing specified tests for critical applications.
  • Engage an external security expert to view, read and debug part of the AI’s source code, and implement appropriate system vulnerability monitoring and testing tools.

Human-AI collaboration 

  • Determine the contexts in which decision-making should not be fully automated by an AI system; a minimal gating approach is sketched after this list.
  • Assess different approaches to enabling human review and supervision of AI systems.
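
A minimal gating approach, routing low-confidence predictions to a human reviewer instead of automating them, is sketched below; the threshold and review queue are assumptions for illustration.

```python
# Illustrative human-in-the-loop gate: defer low-confidence decisions.
from typing import Callable

REVIEW_THRESHOLD = 0.85  # illustrative; set per application context

def decide(prediction: str, confidence: float,
           enqueue_for_review: Callable[[str, float], None]) -> str | None:
    """Auto-apply confident decisions; defer the rest to human review."""
    if confidence >= REVIEW_THRESHOLD:
        return prediction            # fully automated path
    enqueue_for_review(prediction, confidence)
    return None                      # decision pending human sign-off

review_queue: list[tuple[str, float]] = []
print(decide("approve", 0.97, lambda p, c: review_queue.append((p, c))))  # approve
print(decide("deny", 0.60, lambda p, c: review_queue.append((p, c))))     # None
```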

Liability frameworks 

  • Evaluate potential weaknesses in existing liability rules and explore complementary rules for specific high-risk applications. 

Review standards and best practices

  • Periodically review the standards and best practices to ensure they are keeping pace with evolving knowledge across the industry. For larger organisations, consider creating an independent and dedicated governance team to enforce standards and consistency across the organisation.

Developing an AI governance framework can be challenging. Each industry has its own nuances, and there is no one-size-fits-all when it comes to AI governance. In general, transparency is the key item in AI governance: knowing how and why an AI made a decision. Fairness and ethics are also high on many lists, ensuring that biases or unseen prejudices in data are not reproduced by AI. Accountability is a principle championed by organisations such as Microsoft and the Partnership on AI.

For the most part, AI governance needs to be well thought out before an AI project is undertaken. As AI systems continue to sweep across industry sectors, spanning education, public safety, healthcare, transportation, economics, and business, AI governance can establish the standards that unify accountability and ethics as the technology evolves.

Over the next few years, we will likely see a variety of new tools and services come to market to support these efforts, especially as companies continue to increase investments in AI.