The key challenges organisations face in countering bias include developing trustworthy algorithms and determining what data is used to train AI
AI can help solve complex problems, but can we trust the AI solutions? Do organisations have the proper systems in place to prevent, or quickly address, issues resulting from AI bias?
AI bias is harming businesses and there’s a significant appetite for more regulation to help counter the problem, according to the findings of the State of AI Bias report by DataRobot in collaboration with the World Economic Forum and global academic leaders.
The report draws on responses from over 350 CIOs, IT directors, IT managers, and development leads in the US and the UK who use or plan to use AI. It examines how organisations perceive the issue of AI bias and its importance, which risks are considered the biggest if AI bias is left unchecked, and what kinds of tools and capabilities organisations are looking for to help them mitigate bias in AI.
“DataRobot’s research shows what many in the artificial intelligence field have long-known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long,” said Kay Firth-Butterfield, Head of AI and Machine Learning at the World Economic Forum.
“The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics,” Firth-Butterfield added.
Just over half (54 per cent) of respondents have “deep concerns” around the risk of AI bias while a much higher percentage (81 per cent) want more government regulation to prevent it.
Given the still relatively small adoption of AI at this stage across most organisations, a concerning number report harm from bias.
Over a third (36 per cent) of organisations experienced challenges or a direct negative business impact from AI bias in their algorithms. This includes:
- Lost revenue (62 per cent)
- Lost customers (61 per cent)
- Lost employees (43 per cent)
- Incurred legal fees due to a lawsuit or legal action (35 per cent)
- Damaged brand reputation/media backlash (6 per cent)
“The core challenge to eliminate bias is understanding why algorithms arrived at certain decisions in the first place. Organisations need guidance when it comes to navigating AI bias and the complex issues attached. There has been progress, including the EU proposed AI principles and regulations, but there’s still more to be done to ensure models are fair, trusted, and explainable,” said Ted Kwartler, VP of Trusted AI at DataRobot.
The report identifies four key challenges behind organisations’ struggle to counter bias:
- Understanding why an AI was led to make a specific decision
- Comprehending patterns between input values and AI decisions
- Developing trustworthy algorithms
- Determining what data is used to train AI
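The last two challenges above come down to measurement: before an organisation can trust an algorithm, it needs a way to quantify bias in its decisions. As a minimal illustrative sketch (the data and group names here are hypothetical, not from the DataRobot report), one common starting point is the demographic parity difference, the gap in positive-decision rates between two groups:

```python
# Illustrative sketch: demographic parity difference between two groups.
# All data below is hypothetical and not drawn from the report.

def selection_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a, group_b):
    """Absolute gap in selection rates between groups; 0 means parity."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical model decisions (1 = approved, 0 = denied) per group
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6 of 8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 3 of 8 approved

gap = demographic_parity_diff(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375
```

A large gap does not by itself prove unfairness, but it flags where the training data or model behaviour deserves closer scrutiny, which is exactly the kind of check responsible-AI tooling automates.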
Fortunately, a growing number of solutions are becoming available to help counter and reduce AI bias as the industry matures. Responsible AI solutions offer a range of capabilities that help companies turn AI principles, such as fairness and transparency, into consistent practices.
Experts say demand for these solutions will likely double next year as interest extends beyond highly regulated industries into all enterprises using AI for critical business operations.