Toby Walsh, a professor of Artificial Intelligence at the University of New South Wales in Sydney who leads a research group at Data61, Australia’s Centre of Excellence for ICT Research, is one of the world’s leading experts on artificial intelligence. He has been working with the Campaign to Stop Killer Robots, a coalition of scientists and human rights leaders seeking to halt the development of autonomous robotic weapons. His open letter to the UN asking AI researchers to pledge to stand up against killer robots was signed by more than 20,000 AI researchers and high-profile scientists, entrepreneurs and intellectuals, including Stephen Hawking, Noam Chomsky, Steve Wozniak, Elon Musk and Bill Gates.
His new book, 2062: The World that AI Made, a call for thoughtful decision-making, is already available in a dozen languages and will soon be published in Arabic.
In this interview, Walsh briefly talks about quantum computing, his concerns about people misusing AI, how Middle East companies should build ethical principles into AI, and his new book.
Excerpts from the interview:
Your new book is titled 2062: The World that AI Made. Why did you pick 2062 as a pivotal year?
I surveyed 300 of my colleagues, other experts in AI, and 2062 was the median year in which they predicted machines would match human intelligence. There was a lot of variability in their answers but the important thing is that it will likely happen in 50 to 100 years, in the lifetime of our children, and if we’re lucky, in ours.
You are a scientist and an inventor. How did you become an activist in the fight against “killer robots”?
By accident. I never set out to be one. But I felt a fundamental responsibility to speak out about this dangerous path that we do not need to go down. When my daughter grows up, I don’t want her to ask why I sat back and did nothing.
You have said that computers can do the dirty, dull, difficult and dangerous tasks while we sit back and enjoy the finer things in life. But how soon will that happen?
2062. But seriously, computers are already doing dirty, dull, difficult and dangerous tasks. In Australia we have some of the most automated mines on the planet. And that’s been great. It used to be that 300 people would die each year in mining accidents. Last year, it was just 32. That’s 32 too many, but a tenth of what it used to be, as we hand over many tasks to machines.
What are the key ingredients that allowed AI to thrive?
Data. Compute. And better algorithms.
What were the most exciting developments in AI in the past ten years?
The most exciting part is that AI is leaving the lab and becoming part of our lives. You can’t open a newspaper without reading multiple stories about it.
We talk about democratising AI, but as companies move toward democratisation, even the most sophisticated AI systems can fall victim to bias, explainability issues, and other flaws. How concerned are you about the idea that people are misusing AI?
Greatly. Especially in politics. In the legal system, the welfare system. In surveillance, insurance, and marketing. There are so many areas to worry about.
In the Middle East, companies are creating AI-based products and services. How should they go about building ethical principles into AI?
There are many useful frameworks, from the OECD principles to those proposed by the IEEE and ISO, that they can look to for helpful questions. Essentially, these frameworks come down to asking: who profits from this system, and who might be harmed by it?
What advice would you have for a company looking at injecting some machine intelligence into their operations?
Go for it! Your competitor will.
One of the fertile areas for quantum computing is AI. How do they help each other?
Quantum computing will give us faster computers, and that will help AI. But AI is much more than faster computing, so quantum computing is not a silver bullet that will solve all our AI problems.
Initial hopes for AI are on the verge of being realised. But just as in the first decades of moonshot hope, ambitious predictions continue to be the norm. In your opinion, which predictions about AI have gone horribly wrong, or are likely to?
We have made almost no progress on giving computers our common-sense intelligence. We call this “Moravec’s paradox”: the hard things for us, like playing chess, are often easy for computers, while the easy things for us, like picking up a chess piece, are often surprisingly hard for computers.