Benjamin Franklin wrote a book about chess. Napoleon spent his post-Waterloo years in exile playing the game on St. Helena. John Wayne carried a set and played during downtime while filming “El Dorado.”
“Chess can be addictive,” Dimitri Bertsekas says.
Long before Bertsekas became a luminary in mathematics and computer science, authoring foundational textbooks on reinforcement learning, a type of artificial intelligence, or AI, he was an undergraduate with a passion for chess.
“I was playing all the time and missing classes,” he says, jokingly. “I ultimately decided I wanted to be a mathematician more than a chess player, and for a while I gave up the game.”
Bertsekas explains the attraction.
“Games are designed to challenge human intelligence,” he says. “So they are a good way to also demonstrate the intelligence of an artificial system.”
He adds that games tend to have well-known, fixed rules, which means that the results are socially well understood: “We all know what it means to win or to lose a game.”
Now Bertsekas, a member of the National Academy of Engineering and a professor in the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University, has found a way to combine his lifelong passion for chess with his expertise in developing innovative forms of AI.
Working with Yuchao Li, a Fulton Schools postdoctoral research scholar, and Atharva Gundawar, an ASU computer science graduate student, he has created a meta-algorithm that leverages the outputs of multiple top chess engines. An algorithm is a set of instructions that a computer follows to complete its work, while a meta-algorithm is an algorithm that operates on other algorithms, combining and improving upon their outputs.
Bertsekas, Li and Gundawar have published their findings in the paper “Superior Computer Chess with Model Predictive Control, Reinforcement Learning, and Rollout” and are seeking a patent for the new technology.
The meta-algorithm has other potentially far-reaching applications in areas such as automated transportation, health care and cybersecurity.
The early moves
Artificial intelligence researchers have long been interested in chess. In 1949, American mathematician Claude Shannon laid the foundation for computer chess in a transformative paper. The next year, British computer scientist Alan Turing developed the first chess algorithm, designed to be executed by hand, as computers at the time were not powerful enough to run it.
Their work triggered a quest among computer scientists to develop AI that could beat a world chess champion — efforts that were ultimately successful in 1997 when IBM’s supercomputer, Deep Blue, famously defeated Garry Kasparov, a Russian champion recognized as one of the best chess players of all time.
Computer scientists have developed several chess-playing computer programs, known as chess engines, that can both compete with and help train human players. Traditional engines use different forms of minimax search, an algorithm that looks for the best move, while taking into account the opponent’s countermove. The engines also evaluate chess positions with handcrafted algorithms that are based on human intuition.
Enter neural networks
Google DeepMind’s chess engine AlphaZero stunned the world in 2017 with its powerful and innovative play, blowing away all competition. AlphaZero combined a powerful neural network with a kind of reinforcement learning wherein a computer uses trial and error to learn from its successes. The program played millions of games against itself, teaching the neural network to evaluate chess positions far more accurately than the handcrafted algorithms of the past.
Other chess engines, such as the popular Stockfish and Leela Chess Zero programs, followed AlphaZero’s lead to combine classical minimax search with neural network calculations.
Standing on the shoulders of giants
The new AI meta-algorithm developed by Bertsekas, Li and Gundawar builds on this work. It incorporates evaluations of multiple existing engines within a reinforcement learning framework, called model predictive control, to assess and compare the future consequences of candidate moves at a given position.
The meta-algorithm improves the play of its component engines and can beat the original engines in actual play.
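A rough sketch of the idea is a one-step lookahead over the evaluations of several component engines. The code below is a hedged illustration only: the engines here are hypothetical stand-in scoring functions, positions are just numbers, and taking the maximum across engines is one possible aggregation rule, not necessarily the one used in the paper.

```python
# Hedged sketch of one-step lookahead over multiple engine evaluations,
# loosely in the spirit of a model predictive control / rollout scheme.
# The "engines" are hypothetical evaluation functions, not real chess
# engines such as Stockfish or Leela Chess Zero.

def best_move(position, legal_moves, apply_move, engines):
    """Pick the move whose resulting position scores highest across
    the component engines' evaluations."""
    def score(move):
        next_pos = apply_move(position, move)
        # Aggregate the engines' opinions of the resulting position;
        # here we simply take the best score among them (an assumption).
        return max(engine(next_pos) for engine in engines)
    return max(legal_moves, key=score)

# Toy demonstration: positions are integers, a move adds an offset,
# and two mock "engines" score positions differently.
engines = [lambda p: p, lambda p: -abs(p - 5)]
move = best_move(0, [1, 3, 7], lambda p, m: p + m, engines)
print(move)  # 7: position 7 earns the highest score among the mocks
```

The point of the sketch is the structure: rather than trusting a single evaluator, the selection step consults several and lets the lookahead compare the future consequences of each candidate move.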
There are many applications of the new work across engineering disciplines: A self-driving car can make safer decisions on the road. AI-powered health systems can make more accurate predictions about disease progression and suggest sophisticated treatments. Cybersecurity solutions can more efficiently forecast and forestall the actions of hackers.
Bertsekas also remains hopeful that AI can be used to educate players and enhance enjoyment of the game.
“Human players will never beat AI in chess,” Bertsekas says. “But AI has made chess more fascinating and better understood, and people will always continue to play it.”