Science continues to test AI for consciousness. Meanwhile, it is worth pausing to ask whether we are anywhere near it at all.
The debate around AI becoming sentient has likely resurfaced because of the recent strides the AI community has made towards AGI. The real question, however, is whether AGI itself is near and what it holds for the future of AI.
Whether AI can be sentient has been debated for years. In June this year, however, Google engineer Blake Lemoine asserted that Google’s large language model LaMDA had become sentient. Lemoine was promptly fired; interestingly, the stated reasons were violations of the company’s employment and data confidentiality policies, not his taking the sentience debate this far.
Critics jumped in in no time and dismissed the claims as unwarranted. Experts suggest AI first has to achieve Artificial General Intelligence (AGI); until then, any talk of AI being sentient is premature. AGI extends AI with a generalisation capability: the ability to perform mental tasks as well as or better than humans across a wide range of domains. So the question remains: are we heading towards general intelligence?
March towards AGI
Physicist Mark Gubrud coined the term AGI in 1997; Webmind founder Ben Goertzel and DeepMind co-founder Shane Legg later popularised it.
The development of AGI is a major topic of discussion today. The ultimate goal is essentially an AI that can handle a variety of arbitrary tasks and work out how to tackle a new one on its own, much like a human. Some researchers believe we are very close to reaching AGI, and their observations do not rest on shaky ground. Several recent milestones lend weight to the claim:
April 2022: OpenAI announced DALL-E 2, an AI system that can create realistic images from a description in natural language. Given the prompt – “teddy bears working on new AI research on the moon in the 1980s” – by Sam Altman, CEO of OpenAI, it generated an image and filled in details based on its understanding of the input.
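For a sense of how such a system is invoked programmatically, here is a minimal sketch using OpenAI’s Python client to request an image from a text prompt. The package, endpoint, and parameters reflect OpenAI’s publicly documented image API, not the internal setup behind the demo above, and should be treated as illustrative.

```python
# Minimal sketch: text-to-image through OpenAI's Python client (pre-v1 SDK).
# Assumes: pip install openai, and an API key in the OPENAI_API_KEY env var.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="teddy bears working on new AI research on the moon in the 1980s",
    n=1,                # number of images to generate
    size="1024x1024",   # one of the supported square sizes
)

print(response["data"][0]["url"])  # URL of the generated image
```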
May 2022: Until recently, AI models were designed to carry out one specific task each. Gato, a new “generalist” AI model from DeepMind, can play Atari video games, caption pictures, converse, and stack blocks with a real robot arm – 604 distinct tasks in all.
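As a rough illustration of the “generalist” idea (not DeepMind’s actual code), the sketch below shows how inputs from different modalities could be flattened into a single token sequence for one shared model; every name and tokeniser here is a hypothetical stand-in.

```python
# Toy sketch of a generalist interface: heterogeneous inputs are flattened
# into one token stream a single model could consume. All names hypothetical.
from typing import List

SPECIALS = {"<img>": 0, "<text>": 1, "<action>": 2}

def tokenize_text(s: str) -> List[int]:
    # Stand-in tokeniser: map characters to ids offset past the special tokens.
    return [ord(c) % 1000 + 10 for c in s]

def tokenize_image(pixels: List[int]) -> List[int]:
    # Stand-in: quantise pixel intensities (0-255) into discrete tokens.
    return [p // 4 + 10 for p in pixels]

def build_sequence(image: List[int], instruction: str) -> List[int]:
    # One flat sequence: [<img>, image tokens..., <text>, text tokens...]
    return [SPECIALS["<img>"], *tokenize_image(image),
            SPECIALS["<text>"], *tokenize_text(instruction)]

seq = build_sequence(image=[0, 128, 255, 64], instruction="stack the red block")
print(seq)  # a single token stream, whatever the underlying task
```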
June 2022: GPT-3, the language model by San Francisco-based AI research laboratory OpenAI, was asked to write an academic paper about itself in 500 words, with scientific references and citations inside the text. It produced the paper in two hours, and the paper is currently under peer review for publication in a reputed scientific journal. GPT-3 is renowned for producing human-like text; it has already written articles, produced books in a day, and generated stories.
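A minimal sketch of prompting GPT-3 through OpenAI’s Python client is shown below; the model name and parameters are illustrative assumptions, not the setup used in the experiment described above.

```python
# Minimal sketch: prompting GPT-3 via OpenAI's (pre-v1) Python client.
# Assumes: pip install openai, and an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

completion = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 model; the choice is illustrative
    prompt="Write a 500-word academic paper about GPT-3, with references.",
    max_tokens=800,             # rough budget for ~500 words plus citations
    temperature=0.7,
)

print(completion["choices"][0]["text"])
```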
July 2022: Biologists have grappled with the “protein folding problem” – determining a protein’s three-dimensional form from its one-dimensional amino acid sequence – for half a century, and the world’s most brilliant minds have failed to crack it. DeepMind’s AI model AlphaFold released predicted structures for nearly all catalogued proteins known to science, expanding the AlphaFold database from almost 1 million to over 200 million structures – with the potential to significantly advance our understanding of biology. The achievement was beyond the grasp of the human mind but not of modern AI systems.
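The predictions themselves are freely downloadable. Below is a minimal sketch of fetching one predicted structure from the AlphaFold database hosted at EMBL-EBI by UniProt accession; the URL pattern and model version follow the database’s public file naming and should be verified against the current release.

```python
# Minimal sketch: download one predicted structure from the AlphaFold database.
# The URL pattern follows the database's public file naming (verify the version).
import requests

uniprot_id = "P69905"  # human haemoglobin subunit alpha, as an example
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v3.pdb"

response = requests.get(url, timeout=30)
response.raise_for_status()

with open(f"{uniprot_id}.pdb", "wb") as f:
    f.write(response.content)  # atomic coordinates of the predicted structure

print(f"Saved predicted structure for {uniprot_id}")
```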
In one of his tweets, Elon Musk even set a deadline: “2029 feels like a pivotal year. I’d be surprised if we don’t have AGI by then. Hopefully, people on Mars too.” Such milestones and predictions make us wonder whether we are nearing full-fledged AGI – and, with it, the possibility of sentient AI.
Much of our understanding might be wrong
The concept of sentience can be described along two axes – awareness and emotions. For awareness, the Turing Test, proposed by Alan Turing in 1950, still holds ground: it asks whether a machine’s conversational responses can be told apart from a human’s, a behavioural proxy for thinking like a human. As for emotions, given the speed at which the technology is advancing, the day may not be far when a machine expresses sadness while speaking with someone who has just lost their job, or fury upon learning that the person was fired without justification.
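To make the awareness criterion concrete, here is a toy sketch of Turing’s imitation game: a judge reads answers from two hidden respondents and must pick out the machine. If the machine’s answers are indistinguishable, the judge is right only about half the time – the pass condition. Everything here is an illustrative stand-in.

```python
# Toy imitation game: when the machine's answers are indistinguishable from
# the human's, the judge can only guess, landing at ~50% accuracy.
import random

def human_answer(question: str) -> str:
    return "Let me think... that is a hard one."

def machine_answer(question: str) -> str:
    return "Let me think... that is a hard one."  # indistinguishable by design

def judge(answer_a: str, answer_b: str) -> str:
    # A real judge would weigh both answers; identical ones force a coin flip.
    return "a" if answer_a != answer_b else random.choice(["a", "b"])

trials, correct = 1000, 0
for _ in range(trials):
    a, b = machine_answer("q"), human_answer("q")  # slot "a" holds the machine
    if judge(a, b) == "a":
        correct += 1

print(f"Judge identified the machine in {correct / trials:.0%} of trials")
```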
Considering the state of AI today, it significantly outperforms humans in some areas, as we have seen with AlphaFold predicting complex protein structures. At the same time, modern AI systems still struggle to capture the “common sense” knowledge that guides prediction, inference, and behaviour in everyday human scenarios.
However, two things need to be understood here: first, there is no clear, widely accepted definition of sentience; second, the concept of AGI is itself debatable. Yann LeCun, chief AI scientist at Meta, for example, says, “There is no such thing as AGI. Reaching “Human-Level AI” may be a useful goal, but even humans are specialised.” Other experts, including Andrew Ng, consider the wider focus and debate around AGI or sentience pointless.
We may yet see AI behaviours that resemble sentience; for the time being, though, the focus should be on using AI to solve larger problems in healthcare, climate, urbanisation, and transportation, among others. It is in humanity’s interest that AI remain dumb and dependent on humans.