To figure out whether random AI can help people coordinate, Hirokazu Shirado, a sociologist and systems engineer, and Nicholas Christakis, a sociologist and physician, both at Yale University, asked volunteers to play a simple online game. Each person controlled one node among 20 in a network. The nodes were colored green, orange, or purple, and people could change their node color at any time. The goal was for no two adjacent nodes to share the same color, but players could see only their own color and the colors of the nodes to which they were connected, so resolving a conflict with one neighbor could create unseen conflicts farther out in the network. If the network achieved the goal before the 5-minute time limit was up, all players in the network received extra payment. The researchers recruited 4000 players and placed them in 230 randomly generated networks.
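The game's objective is a classic graph-coloring constraint: the network is solved when no edge connects two nodes of the same color. A minimal sketch of that check (the function and data names here are hypothetical, not from the study):

```python
def is_solved(edges, coloring):
    """Return True if no two adjacent nodes share a color.

    edges: list of (u, v) pairs for an undirected network
    coloring: dict mapping each node to its current color
    """
    return all(coloring[u] != coloring[v] for u, v in edges)

# A triangle needs three distinct colors:
edges = [(0, 1), (1, 2), (0, 2)]
print(is_solved(edges, {0: "green", 1: "orange", 2: "purple"}))  # True
print(is_solved(edges, {0: "green", 1: "green", 2: "purple"}))   # False
```

Because each player sees only their immediate neighborhood, no individual can evaluate this global condition, which is what makes the coordination problem hard.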
Some of the networks had 20 people controlling the nodes, but others had three of the most central or well-connected nodes already colored in such a way that they fit one of the solutions. (Each network had multiple solutions.) And some of the networks had 17 people and three bots, or simple AI programs, in charge of the nodes. In some networks, the bot-controlled nodes were placed centrally, in some they were placed peripherally, and in some they were placed randomly. The bots also varied in how much noise, or randomness, influenced their choice of node color. In some networks, every 1.5 seconds the bots picked whatever color differed from the greatest number of neighbors—generally a good strategy among people playing the game. In some networks, they followed this strategy, but 10% of the time they would pick randomly. And in some networks, they would pick randomly 30% of the time.
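The bot rule described above, follow the conflict-minimizing heuristic most of the time but pick a random color some fraction of the time, can be sketched as follows. This is a simplified illustration under assumed names; the study's actual implementation is not shown in the article:

```python
import random

COLORS = ["green", "orange", "purple"]

def bot_pick(neighbor_colors, noise=0.10, rng=random):
    """Choose a color for a bot node.

    With probability `noise`, pick uniformly at random;
    otherwise pick the color held by the fewest neighbors,
    i.e., the one that differs from the greatest number of them.
    """
    if rng.random() < noise:
        return rng.choice(COLORS)
    counts = {c: 0 for c in COLORS}
    for c in neighbor_colors:
        counts[c] += 1
    return min(COLORS, key=lambda c: counts[c])

# With no noise, the bot deterministically avoids the majority color:
print(bot_pick(["green", "green", "orange"], noise=0.0))  # purple
```

In the experiment this choice was repeated every 1.5 seconds, with `noise` set to 0%, 10%, or 30% depending on the condition.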
All of the networks with bots performed the same as the networks with 20 people, except for one type. The networks in which the bots were placed centrally and randomized their decisions 10% of the time outperformed the all-human networks. They solved the coordination game within the time limit more frequently (85% versus 67% of the time). And the median time spent on the task was 103 seconds versus 232 seconds, a significant difference, the researchers report today in Nature. That bots with 0% noise or 30% noise did not outperform humans suggests there's a Goldilocks zone of randomness.
What’s more, the bot-aided networks performed just as well as the networks that already had a head start—those with three nodes preset to fit a solution. But whereas the set-color networks required top-down control, the noisy bots achieved equal results with just a bit of local randomness. “We get the same bang,” Christakis says. “To me that was a beautiful result.”