The State Of The AI Superheroes Committee

AI ethics is multifaceted, and examining every possibility is critical. It is the first step towards regulation and monitoring

With great power comes great responsibility. That applies not just to Spider-Man but to every brand that deals with data. In today’s data-driven world, what matters is not just owning data but how a company uses it.

Salesforce’s engineers used Natural Language Processing (NLP) and Machine Learning (ML) tools to analyse people’s sentiment towards its products on social media, and that was when they discovered a bias problem.

The company’s AI ethics board, led by Kathy Baxter, stepped in. Despite the team’s warnings, the product management leaders wanted to roll out the product. After much back and forth, the ethics team won. The team conducted bias mitigation, ran countless tests, and finally launched the product in 2019.
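How does a team discover that kind of bias in the first place? One common probe is counterfactual testing: swap identity terms into otherwise identical sentences and compare the model’s scores. The sketch below is purely illustrative, not Salesforce’s actual method; the toy lexicon scorer, the template sentences, and the deliberately planted biased association are all invented to show the shape of the test.

```python
# Counterfactual bias probe for a sentiment model (illustrative only).
# The "model" here is a toy lexicon scorer standing in for a real NLP
# system; the planted negative association on an identity term mimics
# the biased correlations real models can learn from web text.

POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"terrible", "hate", "awful", "immigrant"}  # planted flaw

def score_sentiment(text: str) -> int:
    """Toy scorer: +1 per positive word, -1 per negative word."""
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in text.lower().split())

# Identical sentences that differ only in the identity term inserted.
TEMPLATES = [
    "the {group} customer said the product was great",
    "as a {group} user I hate the new update",
]
GROUPS = ["young", "elderly", "american", "immigrant"]

for template in TEMPLATES:
    scores = {g: score_sentiment(template.format(group=g)) for g in GROUPS}
    gap = max(scores.values()) - min(scores.values())
    # A non-zero gap means the identity term alone shifted the score --
    # exactly the kind of disparity an ethics review should flag.
    print(f"{template!r}: {scores} (gap={gap})")
```

In practice the toy scorer would be replaced by the production model, and any non-zero gap would be evidence that the identity term alone is shifting predictions.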

While many data scientists use Artificial Intelligence (AI) tools to process data and turn it into actionable insights, the process is not foolproof. Not every data asset is good and relevant, and not every AI system can interpret every dataset correctly.

There are several questions an AI system might be blind to. Do the algorithms know the source of their data? Are they unknowingly discriminating? Do they have permission to access the data? An AI ethics board seems like a good idea, and many brands have already chosen that path.

Ethical risks, technological biases, and unintended consequences can create chaos in an otherwise organised company. According to Harvard Business Review, if a company utilises AI, it needs an AI ethics board.

A Rising Priority

Years ago, companies worldwide began leveraging AI for its immense and revolutionary potential, with little thought given to ethics, bias, and other possible harms. With time, people began to question and debate ethical concerns, and more companies formed internal or external AI ethics boards and committees to keep track of AI usage. For instance, IBM launched its committee in 2018. Meta has an interdisciplinary AI group that works with product teams to address fairness. Microsoft’s responsible AI board adds governance processes across departments and has another team to incorporate responsible AI into engineering workflows.

Even governments have been taking an active interest. A couple of years ago, the European Union announced a possible ban on unacceptable uses of AI, such as mass surveillance.

Although AI ethics may seem to have become a necessity only in the last few years, the foundations were laid over a decade ago. In 2011, a German science and technology firm set up an advisory panel and a code of ethical principles, drawing on frameworks such as Beauchamp and Childress’s Principles of Biomedical Ethics, to guide digital innovation.

Years later, in 2019, ethical questions around data and AI in medical research grew within Merck as it became knee-deep in developing digital health solutions. The firm’s bioethics panel lacked specialist expertise, so it brought in more experts in technology and regulation. The team researched and zeroed in on 42 ethical AI frameworks, from which Merck derived five core ethical principles for digital innovation: justice, autonomy, beneficence, non-maleficence, and transparency. The Merck CoDE (Code of Digital Ethics) was introduced in 2020.

The firm provides basic CoDE training for all employees, who can then question projects, analyse solutions, and flag risks under the CoDE, which the Digital Ethics Advisory Board reviews.

The Bigger Picture

AI ethics is not just about “fairness” and “bias”. Discussing tools that can only identify bias is not good enough.

The few companies that stop to consider AI ethics usually adopt a risk-mitigation strategy built around some identification or monitoring tool. But according to HBR, this barely scratches the surface of the ethical risks AI brings with it.

It argues that equating AI ethics with fairness is a mistake, as fairness is only a subset of ethical issues. Facial recognition technology, for example, is not free of ethical problems even once bias is removed, and the chief ethical risks of AI-powered self-driving cars are not bias or privacy but killing and maiming.

Although the search for technical solutions to AI ethics is understandable, many issues cannot be resolved simply through metrics or KPIs.
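A small worked example makes the point. Demographic parity difference, a standard fairness metric, is the gap in positive-outcome rates between groups; the toy decisions and scenario below are hypothetical, invented purely for illustration.

```python
# Demographic parity difference: the gap in positive-outcome rates
# between groups. Data and scenario are hypothetical.

from collections import defaultdict

# (group, decision) pairs -- e.g., faces flagged by a recognition system.
decisions = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 1),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
parity_gap = max(rates.values()) - min(rates.values())
print(f"positive rates: {rates}, parity gap: {parity_gap:.2f}")
# Output: both groups at 0.50, parity gap 0.00 -- a "passing" KPI.
```

The metric comes out perfect, yet if those decisions feed a mass-surveillance pipeline, the system remains ethically problematic. The KPI measures one narrow property, not whether the application should exist at all.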

The truth is that AI ethics is multifaceted, and looking at every possible perspective is critical. AI ethics boards are the first step towards regulation and monitoring, and a comprehensive AI ethical risk-mitigation programme, implemented by individual brands, is imperative.

Data laws and guidelines exist, but it takes an interdisciplinary team to ensure they are enforced in product development and company data use. From IT and legal experts, privacy and security analysts to audit specialists and industry experts in consumer affairs, a diverse and knowledgeable board can ensure the guidelines are followed and effective. The goal should be to make AI ethics a part of the company culture.

Consider Google’s state of ethical AI. Google started and dissolved its AI ethics board before it even had a chance to work on a project. The external advisory board, the Advanced Technology External Advisory Council (ATEAC), was disbanded after a huge controversy and public backlash over its constituent members. Yet Google continues to take AI ethics seriously.

Changing Mindsets, Responsible AI

A 2021 Reuters report stated that Google, IBM, and Microsoft have been turning down projects due to ethical concerns. The tech giants have rejected facial recognition, voice mimicking software, and emotion analysis projects.

For instance, in 2020, a financial company approached Google Cloud about a collaboration. After several internal discussions, Google opted out, as it did not want to create an AI that makes lending decisions; the project seemed ethically wrong.

Although the AI-powered credit-scoring industry promised potential growth, Google researchers believed the AI could absorb racial and gender bias from the data.

Similarly, Microsoft stopped using AI software that mimicked voices. The company weighed the software’s benefits in restoring impaired speech against the possibility of deepfakes being created. What would stop malicious threat actors from using voice mimicry technology to impersonate people without consent?

But companies cannot sit quietly and let the technology go to waste. With a Responsible AI focus, Microsoft gave the green light under restricted uses, such as requiring that user consent be verified before the tool is used.
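What might such a restriction look like in practice? The sketch below is a hypothetical consent gate, not Microsoft’s actual API; the function names, consent records, and IDs are all invented to illustrate the pattern of refusing synthesis without verified consent.

```python
# Hypothetical consent gate for a voice-mimicry tool. The API, names,
# and records are invented; this is a pattern sketch, not Microsoft's
# actual implementation.

from dataclasses import dataclass

@dataclass
class ConsentRecord:
    voice_owner_id: str
    verified: bool  # e.g., confirmed via a signed release form

# Stand-in for a real consent registry.
CONSENT_DB = {"user-42": ConsentRecord("user-42", verified=True)}

def synthesize_voice(voice_owner_id: str, text: str) -> str:
    """Refuse synthesis unless verified consent is on record."""
    record = CONSENT_DB.get(voice_owner_id)
    if record is None or not record.verified:
        raise PermissionError(f"no verified consent for {voice_owner_id}")
    # Placeholder for the actual text-to-speech call.
    return f"[synthesized audio of {voice_owner_id}: {text!r}]"

print(synthesize_voice("user-42", "Hello"))   # allowed
# synthesize_voice("user-99", "Hello")        # would raise PermissionError
```

The design choice worth noting is that the gate sits in front of the synthesis call itself, so consent cannot be bypassed by any code path that reaches the tool.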

In 2020, IBM rejected an advanced facial recognition project, and six months later, the brand discontinued its entire facial recognition service. IBM’s AI ethics board also discussed the unethical possibilities of implants and other wearable technology: hackers could manipulate the thoughts of people wearing neuro-implants intended to help with hearing.

Time To Level Up

We are familiar with Zillow’s AI controversy and Facebook’s decision to step away from facial recognition systems. Will 2022 continue to headline such disappointments?

Oxylabs, a global provider of premium proxies and data-scraping solutions for large-scale web data extraction, has an AI ethics board, and its AI and ML Advisory Board member Pujaa Rajan believes the combination of AI and blockchain technology will be revolutionary.

“Blockchain has the potential to give people the power to manage data privacy and security. Imagine a world where you own your data and/or algorithms, control its transparency, and maybe even get paid for your data. Currently, many AI applications do not run on the blockchain because of latency issues and data centralisation. However, advancements in both fields could change this in 2022,” she said.

In the future, experts predict, several other companies will strengthen internal ethics boards by leveraging secure technology such as blockchain. With AI ethics laws and regulations becoming a government priority too, it is hard to ignore the wider ethical picture any longer.
