AI, Explain Yourself!


As machine learning becomes more powerful, the onus will be on businesses to account for what their algorithms know and how they know it.

Everybody keeps saying that artificial intelligence (AI) and machine learning (ML)-powered systems can outperform humans at decidedly human tasks because computers have more data-crunching power than our three-pound brains. AI- and ML-powered systems can indeed make excellent predictions on a wide range of subjects and bring unprecedented accuracy and efficacy to dauntingly complex optimisation problems, helping companies use these findings to augment existing decisions and incrementally improve business outcomes.

The success of deep learning (DL) models such as deep neural networks (DNNs) stems from a combination of efficient learning algorithms and their huge parametric space. DNNs comprise hundreds of layers and millions of parameters, which makes them complex. Models such as ensembles and DNNs derive their rules from the data: the more data, the better. But these intelligent systems are inscrutable to us. This is the “black box” problem: our inability to discern exactly what machines are doing.
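
To give a sense of the scale behind that inscrutability, the short sketch below (using PyTorch purely as an illustration; the layer sizes are arbitrary) builds a small fully connected network and counts its trainable parameters. Even this toy has hundreds of thousands of weights, and production DNNs run to millions or billions.

```python
# Illustrative only: a tiny fully connected network, to show how parameter
# counts grow with width and depth. Real "black box" models are far larger.
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters in the model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# A small 4-layer network over 100 input features
model = nn.Sequential(
    nn.Linear(100, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 2),   # binary decision, e.g. approve / decline
)

print(count_parameters(model))  # roughly 446,000 trainable parameters
```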

Machines are making more and more decisions for us, decisions that affect human lives. That shift has created a new demand for transparency and a field of research called Explainable AI, or XAI, driven by issues ranging from unfair bias and model instability to regulatory requirements. The goal of XAI, a term first used in 2004, is to make machines able to account for the things they learn in ways that we can understand.

While their grounding in data spares AI models, which are getting ever more advanced, from risks around assumptions and other preconceived notions, it does not make them invulnerable. The danger lies in making and acting on decisions that are not justifiable, or whose behaviour simply cannot be explained.
Explanations supporting the output of a model are crucial, especially in precision medicine, where experts require far more from the model than a simple binary prediction to support their diagnosis. Deep neural networks can now detect certain kinds of cancer as accurately as a human, or better. But human doctors still have to make the decisions, and they won’t trust an AI unless it can explain itself.

AI techniques may seem alien to our ways of thinking; nonetheless, they must conform to the settings in which decisions require explanations, whether in the way business is run or in the advice our doctors give us.

Banks and other lenders are also increasingly using ML technology to cut costs and speed up loan decisions. A computer program decides whether to give you a loan by comparing the loan amount with your income; it looks at your credit history, marital status or age; then it might consider any number of further data points. After weighing all the possible variables against millions of past cases and their outcomes, the computer gives its decision. And there is always room for an ML algorithm to tweak itself so that it predicts more reliably how likely each loan is to default.
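
As a hedged illustration of what such a pipeline might look like (the features, data and model choice below are invented for the sketch and do not describe how any particular lender works), a simple linear model trained on loan records can at least show how each input pushes the decision, which is exactly the kind of account a deep “black box” model does not offer.

```python
# A minimal sketch of a loan-default model on invented data. Real lending
# pipelines use far more features, validation and fairness checks.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000
loans = pd.DataFrame({
    "loan_amount":   rng.uniform(1_000, 50_000, n),
    "annual_income": rng.uniform(20_000, 150_000, n),
    "credit_score":  rng.integers(300, 850, n),
    "age":           rng.integers(21, 70, n),
})
# Synthetic target: borrowing a lot relative to income raises default risk
risk = loans["loan_amount"] / loans["annual_income"] - loans["credit_score"] / 1_000
loans["defaulted"] = (risk + rng.normal(0, 0.2, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    loans.drop(columns="defaulted"), loans["defaulted"], random_state=0
)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")

# Because the model is linear, its coefficients give a rough account of how
# each (standardised) input pushes the default prediction up or down.
for name, coef in zip(X_train.columns, model.named_steps["logisticregression"].coef_[0]):
    print(f"{name:>14}: {coef:+.3f}")
```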

Meanwhile, there have been cases of unconscious inherent biases in the data fed to AI systems that humans program. In 2019, a US financial regulator, New York’s Department of Financial Services (DFS), opened an investigation into claims Apple’s credit card offered different credit limits for men and women. It followed complaints, including from Apple’s co-founder Steve Wozniak, that algorithms used to set limits might be inherently biased against women. Bloomberg reported that tech entrepreneur David Heinemeier Hansson had complained that the Apple Card gave him 20 times the credit limit that his wife got. Any discrimination, intentional or not, “violates New York law”, the DFS said.

The US healthcare giant UnitedHealth Group was likewise investigated over claims that an algorithm favoured white patients over black patients. In neither case was there clear evidence that the algorithms were deliberately sexist or racist, but a lack of transparency has been a recurring theme.


Just as AI gains its strength from harvesting deep and wide datasets, a poor dataset condemns an AI to inadequacy, risking misleading rather than informing. Skewed data injects subjectivity into an otherwise objective decision-making process, which can harm both the businesses using AI as a decision-making aid and the customers or constituents of those organisations.

It’s for this reason that AI algorithms need explanatory capabilities, so that users can understand why certain decisions were made. Working out how actually to deal with AI falls back on the conventional bromides about human agency and oversight, privacy and governance, diversity, non-discrimination and fairness, societal well-being, accountability and that old favourite, “transparency”.

In 2018, the European Union’s GDPR came into force, a law that requires many decisions made by a machine to be readily explainable. XAI is now one of the first concerns of companies: data scientists work to build trust among stakeholders across the organisation, regulators and end-users so that AI models are better understood.

When it comes to autonomous vehicles, a report on The Moral Algorithm by Gowling WLG argues that harmonised safety regulations will be needed, for example on when it is permissible for a car to break the rules of the road, or on the “assertiveness” of a vehicle when it interacts with other road users.
Life-threatening or not, there’s a call for greater transparency around the systems used by dominant tech firms. 

ML-powered models have produced some of the most breathtaking technological accomplishments, from learning how to translate words with better-than-human accuracy to learning how to drive, yet the sheer proliferation of different techniques can leave businesses flummoxed over which model to choose.

To avoid limiting the effectiveness of the current generation of AI systems, experts propose creating a suite of ML techniques that produce more explainable models while maintaining a high level of learning performance (for example, prediction accuracy), and that enable humans to understand, appropriately trust and effectively manage the emerging generation of AI partners. Procedural regularity (the consistent application of algorithms) and distributed ledgers, in the way the blockchain works, have been mooted as possible ways to monitor algorithms.
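
One common post-hoc technique, offered here as a sketch rather than as the specific suite those experts have in mind, is the “global surrogate”: a deliberately simple model trained to mimic the black box’s predictions, trading a little fidelity for rules a human can read.

```python
# A sketch of a "global surrogate": a shallow decision tree trained to mimic
# a black-box model's predictions, trading some fidelity for explainability.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=4_000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque, high-performing model
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The surrogate learns to reproduce the black box's outputs, not the raw labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate agrees with the black box on {fidelity:.0%} of test cases")
print(export_text(surrogate))  # a human-readable set of if/then rules
```

The fidelity score matters as much as the rules themselves: a surrogate that rarely agrees with the black box explains nothing.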

Meanwhile, researchers are designing new kinds of deep neural networks made up of smaller, more easily understood modules, which can fit together like Legos to accomplish complex tasks. A team at Rutgers is designing a deep neural network that, once it makes a decision, can then sift through its data set to find the example that best demonstrates why it made that decision.
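
A very rough way to approximate that example-based style of explanation (this sketch is not the Rutgers system, just an illustration of the idea with off-the-shelf scikit-learn tools) is to point, for each new decision, at the training case the model treats most similarly.

```python
# Sketch: explain a decision by retrieving the most similar training example
# that the model classifies the same way. A crude stand-in for the example-
# based approach described above, not a reimplementation of it.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
model = RandomForestClassifier(random_state=0).fit(X_train_s, y_train)

# Take one new case and find the closest training case given the same verdict
x = scaler.transform(X_test[:1])
decision = model.predict(x)[0]

same_class = np.where(model.predict(X_train_s) == decision)[0]
neighbours = NearestNeighbors(n_neighbors=1).fit(X_train_s[same_class])
_, idx = neighbours.kneighbors(x)
best_example = same_class[idx[0, 0]]

print(f"Model decision for the new case: {data.target_names[decision]}")
print(f"Most similar training case: index {best_example}, "
      f"true label {data.target_names[y_train[best_example]]}")
```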

An international research team from TU Wien (Vienna), IST Austria and MIT (US) has developed an AI system based on the brains of tiny animals, such as threadworms. This AI system can control a vehicle with just a few artificial neurons. The team says the system has decisive advantages over previous deep learning models: it is far simpler, and its mode of operation can be explained in detail.

Questions of transparency and accountability are among those the Partnership on AI and OpenAI could take on. The Partnership on AI is a collaboration between Facebook, Google, Microsoft, IBM and Amazon to increase the transparency of algorithms and automated processes.

To ensure that AI develops in a way that is ethical and accountable and advances the public interest, LinkedIn founder Reid Hoffman and eBay founder Pierre Omidyar jointly contributed $20m to the Ethics and Governance of Artificial Intelligence Fund, which aims to keep artificial intelligence technology in check.

Meanwhile, Sir David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society, who has spent his life trying to teach people how to understand statistical reasoning, published an article in the Harvard Data Science Review on the question “Should we trust algorithms?”


He suggests a set of questions one should ask about any algorithm: Could I explain how it works (in general) to anyone interested? Could I explain to an individual how it reached its conclusion in their particular case? Does it know when it is on shaky ground, and can it acknowledge uncertainty? Do people use it appropriately, with the right level of scepticism?
The most important of these, for Spiegelhalter, is the question about shaky ground: a machine should know when it doesn’t know, and admit it. Otherwise the long-term consequence will be machines, and thereby businesses, wielding ever more power over things in an opaque and unaccountable way.
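
As a minimal sketch of what “admitting it doesn’t know” can mean in practice (the 0.8 threshold and the synthetic data are arbitrary illustrations, not a recommendation), a model can simply abstain and refer a case to a human whenever its own predicted probability is too close to a coin flip.

```python
# A minimal sketch of "knowing when it doesn't know": the model abstains and
# defers to a human whenever its predicted probability falls below a threshold.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3_000, n_features=10, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
proba = model.predict_proba(X_test).max(axis=1)  # confidence in the chosen class

confident = proba >= 0.8   # arbitrary illustrative threshold
preds = model.predict(X_test)

print(f"Decided automatically: {confident.mean():.0%} of cases")
print(f"Accuracy when confident: {(preds[confident] == y_test[confident]).mean():.0%}")
print(f"Referred to a human: {(~confident).mean():.0%} of cases")
```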