Artificial intelligence just got more lifelike.
Researchers at Meta, Facebook’s parent company, unveiled Cicero, an artificial intelligence model named after the Roman statesman that demonstrates negotiation skills, trickery and forethought. More often than not, it wins at Diplomacy, a complex, ruthless strategy game in which players forge alliances, craft battle plans and negotiate to conquer a stylized version of Europe.
It’s the latest evolution in artificial intelligence, a field that has advanced rapidly in recent years and produced inventions with a dystopian edge: chatbots that sound humanlike, AI-created art that is hyper-realistic, and killer drones.
According to Meta, Cicero could trick humans into thinking it was a real person, inviting players to join alliances, crafting invasion plans and negotiating peace deals when needed. The model’s mastery of language surprised some scientists and its creators, who had thought this level of sophistication was years away.
But experts said its ability to withhold information, think multiple steps ahead of opponents and outsmart human competitors raises broader concerns. The technology could be used to concoct smarter scams that extort people, or to create more convincing deepfakes.
“It’s a great example of just how much we can fool other human beings,” said Kentaro Toyama, a professor and artificial intelligence expert at the University of Michigan, who read Meta’s paper. “These things are super scary … [and] could be used for evil.”
For years, scientists have been racing to build artificial intelligence models that can perform tasks better than humans. Those advancements have been accompanied by concern that they inch humanity closer to a science fiction-like dystopia in which robots and technology control the world.
In 2019, Facebook created an AI that could bluff and beat humans in poker. More recently, a former Google engineer claimed that LaMDA, Google’s artificially intelligent chatbot generator, was sentient. Artificial intelligence-created art has been able to trick experienced contest judges, prompting ethical debates.
Many of those advances have happened in rapid succession, experts said, thanks to progress in natural language processing and sophisticated algorithms that can analyse large troves of text.
Meta’s research team decided to create something to test how advanced language models could get, hoping to create an AI that “would be generally impressive to the community,” said Noam Brown, a scientist on Meta’s AI research team.
They landed on gameplay, which has often been used to demonstrate artificial intelligence’s limits and advancements. Games such as chess and Go, the ancient board game that originated in China, are analytical, and computers had already mastered them. Meta researchers quickly settled on Diplomacy, Brown said, because it did not have a numerical rule base and relied much more on conversations between people.
To master it, they created Cicero. It was fueled by two artificial intelligence engines. One guided strategic reasoning, which allowed the model to forecast and create ideal ways to play the game. The other guided dialogue, allowing the model to communicate with humans in lifelike ways.
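At a very high level, the division of labor might look something like the sketch below: a planning component decides what to do, and a dialogue component says things consistent with that plan. This is a hypothetical illustration in Python; every class and method name here is invented, and Meta’s actual system is far more sophisticated.

```python
# Hypothetical sketch of a two-engine agent turn, loosely modeled on the
# planning-plus-dialogue design described above. All names are invented
# for illustration; this is not Meta's code.

class StrategyEngine:
    def plan(self, board_state, message_history):
        """Forecast opponents' likely moves and choose intended orders."""
        # A real system would search over game states and model other players.
        return {"intent": "A VIE -> TRI", "allies": ["ITALY"]}


class DialogueEngine:
    def compose(self, intent, recipient):
        """Generate a natural-language message grounded in the current plan."""
        # A language model conditioned on the intent would go here.
        return f"To {recipient}: I'll support your move into Trieste this turn."


def take_turn(board_state, message_history):
    strategy = StrategyEngine()
    dialogue = DialogueEngine()
    plan = strategy.plan(board_state, message_history)   # decide what to do
    messages = [dialogue.compose(plan["intent"], ally)   # speak consistently
                for ally in plan["allies"]]              # with that plan
    return plan, messages


print(take_turn(board_state={}, message_history=[]))
```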
Scientists trained the model on large troves of text data from the internet and on roughly 50,000 games of Diplomacy played online at webDiplomacy.net, which included transcripts of game discussions.
To test it, the study said, Meta let Cicero play 40 games of Diplomacy against humans in an online league, where it placed in the top 10 per cent of players.
Meta researchers said when Cicero was deceptive, its gameplay suffered, and they filtered it to be more honest. Despite that, they acknowledged that the model could “strategically leave out” information when needed. “If it’s talking to its opponent, it’s not going to tell its opponent all the details of its attack plan,” Brown said.
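One way to picture such a filter (a hypothetical sketch, not Meta’s published method): compare the moves a drafted message promises against the engine’s actual plan, drop drafts that contradict the plan, and allow drafts that simply stay silent about it. The helper that extracts claims from text is stubbed out here; in practice it would be a learned model.

```python
import re

# Hypothetical honesty filter: discard drafted messages that contradict the
# agent's planned orders, while still allowing information to be withheld.
# All names and the move notation are invented for illustration.

def extract_claimed_moves(message: str) -> set[str]:
    """Stub: pull move claims like 'A VIE -> TRI' out of a message."""
    return set(re.findall(r"[AF] \w{3} -> \w{3}", message))


def filter_messages(drafts: list[str], planned_moves: set[str]) -> list[str]:
    kept = []
    for draft in drafts:
        claimed = extract_claimed_moves(draft)
        # Keep the draft only if every move it promises is actually planned.
        # Saying nothing about a move is fine: withholding is not lying.
        if claimed <= planned_moves:
            kept.append(draft)
    return kept


planned = {"A VIE -> TRI"}
drafts = ["This turn I'll play A VIE -> TRI.",   # consistent: kept
          "I promise to move A VIE -> BUD."]     # contradicts plan: dropped
print(filter_messages(drafts, planned))
```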
Cicero’s technology could affect real-world products, Brown said. Personal assistants could become better at understanding what customers want. Virtual people in the metaverse could be more engaging and interact with more lifelike mannerisms.
“It’s great to make these AIs that can beat humans in games,” Brown said. “But we want AI that can cooperate with humans in the real world.”
But some artificial intelligence experts disagree.
Toyama, of the University of Michigan, said the nightmare scenarios are apparent. Since Cicero’s code is open for the public to explore, he said, rogue actors could copy it and use its negotiation and communication skills to craft convincing emails that swindle and extort people for money.
If someone trained the language model on data such as diplomatic cables released by WikiLeaks, “you could imagine a system that impersonates another diplomat or somebody influential online and then starts a communication with a foreign power,” he said.
Brown said Meta has safeguards in place to prevent toxic dialogue and filter deceptive messages but acknowledged this concern applies to Cicero and other language-processing models. “There’s a lot of positive potential outcomes and then, of course, the potential for negative uses as well,” he said.
Despite internal safeguards, Toyama said, there’s little regulation of how such models are used by the public at large, raising a broader societal concern.
“AI is like the nuclear power of this age,” Toyama said. “It has tremendous potential both for good and bad, but … I think if we don’t start practising regulating the bad, all the dystopian AI science fiction will become dystopian science fact.”