Why AI Still Needs Humans

The business world is in love with AI, but AI-powered algorithms are not self-sustaining; they can falter.

Last November, Zillow, the American digital real estate company, shut down its home-buying business after the AI-powered pricing algorithm behind its iBuying operation failed to accurately predict real estate prices during the pandemic, leaving homes to be sold at a discount to their purchase price. Zillow buried that well-oiled, AI-powered buying machine after blowing $1.4 billion on flipping houses.

The key takeaway is that Zillow’s failure shows AI is far from perfect. It is a warning sign to other businesses that rely entirely on AI algorithms.

The bigger question is: while no company can afford to fall behind in the AI race, can any of them let their AI systems run without human intervention?

The pandemic changed buyer behaviour. An MIT article indicated that all was not well with AI-based systems during the pandemic, forcing humans to step in to set them straight. If AI algorithms handle your inventory, customer support, and other functions efficiently without human intervention, models trained on normal behaviour can falter when they face massive deviations in “not normal” situations.
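To make this concrete, here is a minimal sketch, in Python, of how a team might watch for the kind of “not normal” shift described above: compare incoming data against the data the model was trained on and escalate to a human when the two diverge. The feature, numbers, and threshold are hypothetical, and the two-sample Kolmogorov-Smirnov test is just one of many possible drift checks.

    # A minimal drift-monitoring sketch: compare live data against the
    # training distribution and flag the model for human review when the
    # live data no longer looks "normal". All values here are synthetic.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(42)
    training_orders = rng.normal(loc=100, scale=15, size=5000)  # pre-pandemic order sizes (hypothetical)
    live_orders = rng.normal(loc=160, scale=40, size=500)       # behaviour during the shock (hypothetical)

    statistic, p_value = ks_2samp(training_orders, live_orders)
    if p_value < 0.01:
        # The live data no longer matches what the model was trained on,
        # so a person should review its decisions before trusting them.
        print(f"Distribution drift detected (KS statistic={statistic:.2f}); escalate to a human analyst.")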

Failed AI cases

From delivering hyper-personalised services to improving efficiency in operations and productivity, AI is helping organisations make faster and more insightful decisions. No wonder the global market for AI software, hardware, and services is expected to surpass $500 billion by 2024, according to IDC.

Yet even the big AI organisations keep humans in the loop and are not ready to let their AI systems run independently. Facebook’s recent problems are largely tied to algorithmic failure: is its content selection tuned to inflame rather than empower?

And what is happening with autonomous vehicles? We were supposed to be driving those AI-powered vehicles by now, but the problems turned out to be more complex than many tech experts imagined. Remember Uber’s fatal autonomous car crash in 2018?

Such instances serve as a reminder that human involvement in automated systems is crucial. The AI journey has seen repeated setbacks when machines have not worked as they should have.

In the healthcare industry, AI and humans can work together to improve outcomes only if humans are fully engaged in the decision-making process. AI can make recommendations to the doctor, who then evaluates whether the recommendation is sound.

IBM’s Watson for Oncology was shelved after it gave incorrect medical advice. According to a report, the problem lay in Watson being trained on a small number of “synthetic cancer cases” rather than real patient data. Amazon’s experimental hiring engine was found to be biased against women, favouring male candidates. And in 2016, Microsoft launched its intelligent chatbot Tay, which spewed out hate speech; the tech giant had to pull it offline immediately.

In another instance, in 2020, Genderify, an AI-powered tool designed to identify a person’s gender by analysing their name, username, or email address, was shut down just hours after making waves and triggering a backlash on social media.

Negative business impact

AI malfunctions will multiply in the coming years, rattling business reputations and incurring huge financial costs.

AI bias is harming businesses, according to the findings of the State of AI Bias report by DataRobot in collaboration with the World Economic Forum and global academic leaders. Over 36 per cent of organisations surveyed experienced challenges or a direct negative business impact from AI bias in their algorithms, including lost revenue, lost customers, lost employees, legal fees incurred from lawsuits or legal action, and damaged brand reputation or media backlash.

Not surprisingly, IDC predicts that by 2022, possibly due to a few high-profile PR disasters, over 70 per cent of G2000 companies will have formal programs to monitor their “digital trustworthiness” as digital trust becomes a critical corporate asset.

Human intervention and monitoring of AI outputs can avoid flaws that could lead to harm and can continuously train the models so they get better.

The core challenge in eliminating bias is understanding why algorithms arrive at certain decisions, so organisations need guidance from human experts when navigating AI bias and the complex issues attached to it.
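As an illustration of the kind of check a human expert might run to understand what a model’s decisions hinge on, here is a minimal sketch using permutation feature importance: shuffle one feature at a time and see how much accuracy drops. A model leaning heavily on a sensitive attribute is a red flag for review. The data and feature names below are synthetic placeholders, not any organisation’s real pipeline.

    # A minimal sketch of probing why a model decides the way it does,
    # using permutation feature importance from scikit-learn.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    feature_names = ["income", "tenure_years", "gender_encoded"]  # hypothetical features
    X = rng.normal(size=(1000, 3))
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic target that leaks the sensitive column

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature and measure how much accuracy drops; a large drop
    # for a sensitive attribute warrants a human review of the model.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, score in zip(feature_names, result.importances_mean):
        print(f"{name}: {score:.3f}")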

Reinforcement learning

Human-machine hybrid intelligence is one of the core research directions in the new generation of AI. An example of how human intervention supports AI systems in making better decisions is Human-in-the-Loop (HitL) Reinforcement Learning. HitL models, which learn by observing humans dealing with real-life work and use cases, continuously self-develop and improve based on human feedback.

This limits the inherent risk of bias, especially in the manufacture of critical parts for vehicles or aircraft, where equipment must be up to standard. While the AI increases the accuracy of inspections, the human eye provides added assurance that parts are safe and secure for passengers. Such human-in-the-loop approaches can be applied to a variety of areas in real life.
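For readers who want to see the idea in code, below is a minimal human-in-the-loop sketch, not the production approach of any company mentioned here: the model decides confident cases on its own, defers uncertain cases to a person, and is retrained on the human-verified labels. The confidence threshold and the ask_human_reviewer function are hypothetical stand-ins for a real review queue.

    # A minimal human-in-the-loop sketch: confident predictions pass through,
    # low-confidence cases go to a human, and the human's labels are fed back
    # into training so the model keeps improving. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for automatic decisions

    def ask_human_reviewer(sample):
        """Stand-in for a real review step; here a person would inspect the case."""
        return int(sample.sum() > 0)  # placeholder human judgment

    rng = np.random.default_rng(1)
    X_seed = rng.normal(size=(200, 4))
    y_seed = (X_seed[:, 0] > 0).astype(int)
    model = LogisticRegression().fit(X_seed, y_seed)

    X_new = rng.normal(size=(50, 4))
    human_X, human_y = [], []
    for sample in X_new:
        confidence = model.predict_proba(sample.reshape(1, -1)).max()
        if confidence < CONFIDENCE_THRESHOLD:
            # Defer to a person when the model is unsure, and keep the label.
            human_X.append(sample)
            human_y.append(ask_human_reviewer(sample))

    if human_X:
        # Retrain on the original data plus the human-verified labels.
        X_aug = np.vstack([X_seed, np.array(human_X)])
        y_aug = np.concatenate([y_seed, np.array(human_y)])
        model = LogisticRegression().fit(X_aug, y_aug)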

Using cognitive AI in collaboration with people who possess expertise, empathy, and moral judgment also leads to augmented intelligence and positive outcomes.

There are ethical and regulatory reasons to keep humans in the loop, too. Inaccurate data can lead to poor decisions over time. Biases can also creep into the system while the AI model is being trained, whether because the training environment changes or because of trending bias, where the AI system reacts to recent activity more than to earlier data.

There is a delicate co-dependence in which changes to our behaviour change how AI works, and how AI works changes our behaviour.

Experts believe that AI should be trained on worst-case scenarios such as the Great Depression of the 1930s, the 2007-08 financial crisis, and the Covid-19 pandemic. The pandemic highlights how easy it is to fail to foresee some scenarios while building AI models, and the consequences of such limited imagination.

AI systems are only as good as the data they are trained on. To overcome bias and malfunction, it is imperative to train AI across disparate data sets and to put human checks in place. As algorithms become embedded ever more deeply in all aspects of human life, AI systems will be best applied when they are monitored by, and augment, people.

Virtually every industry faces AI disruption, and those that fail to make AI a priority risk extinction. So where does that leave tech firms using AI?

Organisations can learn from Zillow’s recent failure and many before it.

They need to stop over-applying AI technology without checks and balances and instead focus on applying AI to make humans better, because humans are still central to any technology-powered solution.
