Educational project: chess and artificial intelligence

Let's look at some basic concepts that will help us create a simple artificial intelligence that can play chess:

  • move generation;
  • board evaluation;
  • minimax;
  • alpha-beta pruning.

At each step, we will improve our algorithm with one of these time-tested chess programming techniques. You will see how each of them affects the algorithm's play style.

The finished algorithm can be found on GitHub.

Step 1. Generation of moves and visualization of the chessboard

We will use the chess.js library to generate moves and chessboard.js to render the board. The move-generation library implements all the rules of chess, which lets us compute every legal move for a given board state.

Visualization of the move-generation function: the starting position is the input, and the output is every possible move from that position.
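As a toy illustration of what a move generator does, here is a small self-contained sketch (not part of chess.js) that enumerates the moves of a lone knight on an empty board; chess.js does the same for every piece while also respecting checks, castling and en passant:

```javascript
// All knight moves from a square on an otherwise empty board.
// file and rank are 0..7; a move is kept only if it stays on the board.
function knightMoves(file, rank) {
  var deltas = [[1, 2], [2, 1], [2, -1], [1, -2],
                [-1, -2], [-2, -1], [-2, 1], [-1, 2]];
  var moves = [];
  for (var i = 0; i < deltas.length; i++) {
    var f = file + deltas[i][0];
    var r = rank + deltas[i][1];
    if (f >= 0 && f < 8 && r >= 0 && r < 8) moves.push([f, r]);
  }
  return moves;
}
```

A knight in the corner has only 2 moves, while a centralized knight has 8, a fact the improved evaluation function in Step 5 will exploit.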

Using these libraries will help us focus on the most interesting task: creating the algorithm that finds the best move. We start by writing a function that returns a random move out of all possible moves:

var calculateBestMove = function (game) {
    //generate all the moves for a given position
    var newGameMoves = game.ugly_moves();
    return newGameMoves[Math.floor(Math.random() * newGameMoves.length)];
};

Although this algorithm is not a very solid chess player, it is a good starting point, since it is strong enough to play against us:

Black plays random moves

You can see what happened at this stage on JSFiddle.

Step 2. Evaluate the board

Now let's try to understand which side is stronger in a given position. The easiest way to do this is to count the relative strength of the pieces on the board using a table of piece values:
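For illustration, here is a minimal material-only version of such an evaluation. The board representation (an 8×8 array of piece letters, uppercase for white, null for empty) and the concrete values (the classic pawn = 100 scale mentioned later in this document) are assumptions for this sketch; the tutorial's own table may use different numbers:

```javascript
// Material values on the classic scale: pawn 100, knight/bishop 300,
// rook 500, queen 900; a large value for the king.
var pieceValue = { p: 100, n: 300, b: 300, r: 500, q: 900, k: 9000 };

// Sum piece values over the board: white pieces (uppercase) count
// positive, black pieces (lowercase) negative.
function evaluateBoard(board) {
  var total = 0;
  for (var i = 0; i < 8; i++) {
    for (var j = 0; j < 8; j++) {
      var piece = board[i][j];
      if (!piece) continue;
      var value = pieceValue[piece.toLowerCase()];
      total += piece === piece.toUpperCase() ? value : -value;
    }
  }
  return total;
}
```

A positive score means white has the material advantage, a negative one means black does.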

With the score function, we can create an algorithm that chooses the move with the highest score:

var calculateBestMove = function (game) {
    var newGameMoves = game.ugly_moves();
    var bestMove = null;
    //use any large negative number
    var bestValue = -9999;

    for (var i = 0; i < newGameMoves.length; i++) {
        var newGameMove = newGameMoves[i];
        game.ugly_move(newGameMove);
        //take the negative, since the AI plays black
        var boardValue = -evaluateBoard(game.board());
        game.undo();
        if (boardValue > bestValue) {
            bestValue = boardValue;
            bestMove = newGameMove;
        }
    }
    return bestMove;
};

The only tangible improvement is that our algorithm will now capture a piece if it can:

Black plays with a simple evaluation function

You can see what happened at this stage on JSFiddle .

Step 3. Search tree and minimax

Next we will create a search tree from which the algorithm can choose the best move. This is done using the minimax algorithm.

Translator's note: we have already dealt with minimax in one of our articles, where we learned how to create an AI that cannot be beaten at tic-tac-toe.

In this algorithm, a recursive tree of all possible moves is explored to a given depth, and the position is evaluated on the "leaves" of the tree.

After that, we return either the smallest or the largest child value to the parent node, depending on whose move is being calculated (that is, we alternately minimize and maximize the result at each level).

Visualization of minimax in an artificial position. The best move for white is b2-c3, since we can then guarantee reaching a position with an evaluation of -50.

var minimax = function (depth, game, isMaximisingPlayer) {
    if (depth === 0) {
        return -evaluateBoard(game.board());
    }

    var newGameMoves = game.ugly_moves();

    if (isMaximisingPlayer) {
        var bestMove = -9999;
        for (var i = 0; i < newGameMoves.length; i++) {
            game.ugly_move(newGameMoves[i]);
            bestMove = Math.max(bestMove, minimax(depth - 1, game, !isMaximisingPlayer));
            game.undo();
        }
        return bestMove;
    } else {
        var bestMove = 9999;
        for (var i = 0; i < newGameMoves.length; i++) {
            game.ugly_move(newGameMoves[i]);
            bestMove = Math.min(bestMove, minimax(depth - 1, game, !isMaximisingPlayer));
            game.undo();
        }
        return bestMove;
    }
};

With minimax, our algorithm begins to understand basic chess tactics:

Minimax with search depth 2

You can see what happened at this stage on JSFiddle .

The effectiveness of minimax depends largely on the achievable search depth. This is what we will improve in the next step.

Step 4. Alpha-beta pruning

Positions that need not be considered when alpha-beta pruning is used. The tree is visited in the order shown.

With alpha-beta pruning we get a significant improvement over plain minimax, as the following example shows:

The number of positions that had to be evaluated in a depth-4 search from the position shown in the picture.

You can see what happened at this stage on JSFiddle .

Step 5. Improved scoring function

The original scoring function is rather naive: we simply count the values of the pieces on the board. To improve it, we start taking the position of the pieces into account. For example, a knight in the center of the board is "worth more" because it has more available moves and is therefore more active than a knight on the edge of the board.
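One cheap way to encode this is a positional bonus per square, added to the material value. The sketch below is a hand-made illustration rather than a real piece-square table (engines use full 8×8 tables with tuned values): it rewards a centralized knight and gives nothing on the rim:

```javascript
// Positional bonus for a knight on (file, rank), both 0..7.
// Distance from the board's center (3.5, 3.5) is measured with the
// Chebyshev metric; the bonus is 0 on the rim and +30 in the center.
function knightBonus(file, rank) {
  var centreDist = Math.max(Math.abs(file - 3.5), Math.abs(rank - 3.5));
  return Math.round((3.5 - centreDist) * 10);
}
```

The evaluation then becomes material value plus the sum of such bonuses over all pieces, with each piece type getting its own table.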

In 1977, Bobby Fischer played against a chess program developed by engineers at the Massachusetts Institute of Technology. Fischer checkmated the computer three times, winning by a landslide. In his letters the chess player wrote that programs make "blunders," and he called the computers themselves "useless pieces of iron."

But in the same year Monty Newborn, one of the first scientists to study computer chess, spoke prophetic words:

“Grandmasters used to come to computer chess tournaments to have a laugh. Now they come to observe, and in the future they will study there.”

Bobby Fischer after defeating the computer. Photo: Getty Images

It seems that people have some innate love for mind games. When King Charles I of England was sentenced to death in 1649, he took two things with him to his execution: a Bible and a chess set. The famous 20th-century artist Marcel Duchamp left for Argentina at the peak of his career, began carving chess pieces from wood, and gave himself over to the game. And in 19th-century Japan there was a mysterious story connected with Go: according to legend, spirits prompted a famous player to three brilliant moves. He won, and after the game his opponent collapsed to the floor, coughed up blood, and died.

Computers are far from all this mysticism, but in just a couple of decades they have studied mind games more deeply than humanity had in millennia. In 2014, Google acquired DeepMind for $400 million to "carry out the most unusual and complex research, the ultimate goal of which is to unravel the essence of intelligence." In particular, the scientists wanted to teach a computer to play Go, a game far more difficult than chess. In 1985, a Taiwanese industrial magnate promised $1.4 million for a program that could beat the best Go player. The magnate died in 1997, and three years later his offer expired with the prize unclaimed.

Today it could have belonged to DeepMind's AlphaGo program, which uses modern neural networks. A year earlier it had defeated the international Go champion Lee Sedol, and in May of this year it again beat the world's best Go player, as well as a team of five other professionals.

AlphaGo became the absolute champion, but soon after its high-profile victories oblivion awaits it: at the end of May, DeepMind quietly announced that AlphaGo was leaving the competitive scene. To mark the occasion, the company published 50 games the program had played against itself. In the future, DeepMind plans to release a final research paper describing the effectiveness of the program's algorithm.

As for chess, humanity ceded the palm 20 years before these events, when Garry Kasparov lost to IBM's Deep Blue supercomputer. Chess and Go are not the only games AI has been set to learn: computers have been taught checkers, backgammon, reversi, poker and many other board games, and human play can no longer compete. This is partly due to advances in technology. Back in 1997, Deep Blue ranked 259th among the world's fastest supercomputers and could perform about 11 billion operations per second; now, thanks to modern algorithms, even your smartphone can defeat Kasparov.

Garry Kasparov vs the Deep Blue computer. On the left is one of the IBM engineers, Feng-hsiung Hsu. Photo: Getty Images

These AI achievements have provoked thoroughly human emotions: sadness, depression, despair. After Lee Sedol was defeated by AlphaGo, he went through an existential crisis. "I questioned human ingenuity," he admitted after the match. "I wondered whether all the moves in Go that I knew were correct." According to one eyewitness, after the defeat Lee looked "physically ill." Kasparov felt no better after losing to the computer: returning to his hotel, he simply undressed, got into bed, and stared at the ceiling.

"The computer analyzes some positions so deeply that it plays like a god," Kasparov said.

Deep Blue showed the public for the first time that a computer can outperform humans at solving intellectual problems. "Then it was a shock," said Murray Campbell, one of Deep Blue's creators. "Now we are gradually getting used to the idea." Still, it is not clear what awaits humanity next, or how achievements in games can be applied in the real world. Campbell's answer sounds pessimistic: "It's hard to find a good example of such success in board games," he said. "In the early '90s, an IBM employee named Gerald Tesauro tried to teach an AI to play backgammon and made real advances in reinforcement learning. His methods are now often used in robotics. But his case is rather an exception to the rule."

Unfortunately, no better algorithms for chess are known than searching through a great many positions. True, the search has been optimized in more ways than one, but it is still a big enumeration. To find a reply, a tree is built with the original move at its root, edges representing replies, and nodes representing the resulting positions.

It is easy to explain how the next move is chosen in elementary algorithms. On your turn you choose the move that (in your opinion) brings you the greatest benefit (maximizes your gain), while on his next move the opponent chooses the move that brings him the most benefit (maximizes his gain and minimizes yours). An algorithm built on this principle is called minimax. At each stage you assign every node of the tree a position evaluation (more on that later), maximize it on your own move, and minimize it on the opponent's move. While it runs, the algorithm must visit every node of the tree, that is, every position reachable in the game, which makes it completely impractical in terms of running time.
Its next improvement is alpha-beta pruning (a branch-and-bound method).

As the name suggests, the algorithm prunes by two parameters, alpha and beta. The main idea is that we now keep a pruning window (a lower bound alpha and an upper bound beta) and do not consider the evaluations of nodes that fall below the window, since they cannot affect the result: they are simply worse than moves already found. The window itself narrows as better moves are discovered. Although alpha-beta pruning is much better than plain minimax, its running time is still very long. If we assume that in the middlegame each side has roughly 40 possible moves, the running time can be estimated as O(40^P), where P is the depth of the move tree. Of course, there can be an ordering of moves under which we make no cutoffs at all, and alpha-beta pruning simply degenerates into minimax; at best, it examines roughly the square root of the number of positions that minimax would. To avoid such long running times (with such big-O complexity), the tree is searched to some fixed depth and the nodes there are evaluated. This evaluation is a very rough approximation of the node's true value (which would mean searching to the end of the tree, where the result is "win, loss or draw"). There are a great many methods for evaluating a node (see the links at the end of the article). In short: one counts the player's material (in one system, with integers: pawn 100, knight and bishop 300, rook 500, queen 900; in another, with reals measured in fractions of a pawn) plus the positional value for that player. As for position, this is where one of the nightmares of writing a chess program begins, since the speed of the program mainly depends on the evaluation function and, more precisely, on the evaluation of the position.
Here everyone does as they see fit: a plus for paired rooks, a plus for covering the king with its own pawns, a plus for a pawn close to the far end of the board, and so on; hanging pieces, an exposed king and the like count against the position. Dozens of such factors can be written in. To evaluate a game position, an evaluation is built for the player to move, and the evaluation of the opponent's corresponding position is subtracted from it. As they say, one picture is sometimes better than a thousand words, and perhaps a piece of pseudo-C# code will also be better than explanations:

enum CurrentPlayer { Me, Opponent };

public int AlphaBetaPruning(int alpha, int beta, int depth, CurrentPlayer currentPlayer)
{
    // value of the current node
    int value;
    // count the current node
    ++nodesSearched;
    // get the opposite of currentPlayer
    CurrentPlayer opponentPlayer = GetOppositePlayerTo(currentPlayer);
    // generate all moves for the player whose turn it is to move;
    // moves generated by this method exclude moves after which
    // the current player would be in check
    List<Move> moves = GenerateAllMovesForPlayer(currentPlayer);
    // loop through the moves
    foreach (Move move in moves)
    {
        MakeMove(move);
        ++ply;
        // if there is depth left, continue to search deeper
        if (depth > 1)
            value = -AlphaBetaPruning(-beta, -alpha, depth - 1, opponentPlayer);
        else
            // if no depth is left (leaf node), evaluate that position
            value = EvaluatePlayerPosition(currentPlayer) - EvaluatePlayerPosition(opponentPlayer);
        RollBackLastMove();
        --ply;
        if (value > alpha)
        {
            // this move is so good that it causes a cutoff of the rest of the tree
            if (value >= beta)
                return beta;
            alpha = value;
        }
    }
    if (moves.Count == 0)
    {
        // if there are no moves, the position is checkmate or stalemate
        if (IsInCheck(currentPlayer))
            return -MateValue + ply;
        else
            return 0;
    }
    return alpha;
}

I think a few explanations of the code will not be superfluous:

  • GetOppositePlayerTo() simply changes CurrentPlayer.Me to CurrentPlayer.Opponent and vice versa
  • MakeMove() makes the next move from the list of moves
  • ply - a global variable (a class field) that holds the number of half-moves (plies) made at the current depth
An example of using the method:

{
    ply = 0;
    nodesSearched = 0;
    int score = AlphaBetaPruning(-MateValue, MateValue, max_depth, CurrentPlayer.Me);
}
where MateValue is a sufficiently large number.
The max_depth parameter is the maximum depth to which the algorithm will descend in the tree. Keep in mind that the pseudo-code is purely illustrative, yet quite workable.

Instead of inventing a new algorithm, the people promoting alpha-beta pruning have come up with many different heuristics. A heuristic is just a little hack that sometimes yields a very large speed gain. There are countless heuristics for chess; I will give only the main ones, and the rest can be found in the links at the end of the article.

First, the very well-known "null move" heuristic is applied. In a quiet position, the opponent is allowed to make two moves in a row instead of one, and the resulting tree is examined to depth (depth - 2) instead of (depth - 1). If, after evaluating such a subtree, the current player still turns out to hold an advantage, there is no point in considering the subtree further, since his actual next move would only make his position better. Because the cost of the search grows exponentially with depth, the saving is noticeable. Sometimes the opponent does level out the advantage, and then the whole subtree has to be considered to the end. The null move must not always be made (for example, when one of the kings is in check, in zugzwang, or in the endgame).
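A sketch of how the null-move check might look, written against a hypothetical engine interface (inCheck, makeNullMove/undoNullMove and a search method); the reduction R = 2 and the zero-width window are typical choices for illustration, not values from the article:

```javascript
var R = 2; // depth reduction for the null-move search

// Returns beta if the null-move search proves a cutoff, or null if
// the normal full-width search must continue. The game object is an
// assumed stand-in for a real engine's API.
function nullMovePrune(game, depth, beta) {
  if (game.inCheck() || depth <= R) return null; // null move unsafe here
  game.makeNullMove();                           // let the opponent move twice
  // zero-width search at reduced depth, negamax convention
  var score = -game.search(depth - 1 - R, -beta, -beta + 1);
  game.undoNullMove();
  return score >= beta ? beta : null;            // still winning even after passing
}
```

If even "passing" leaves the side to move at or above beta, the subtree cannot be relevant and is cut off.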

Next comes the idea of first trying the move that captures the opponent's piece which made the last move. Since almost all moves considered during the search are poor, this idea narrows the search window considerably at the start, cutting off many unnecessary branches.

Also well known is the history heuristic, or the "best moves" cache. During the search, the best moves at a given tree depth are saved, and when considering a position one can first try such a move for that depth (based on the observation that at equal depths in the tree the same moves very often turn out best).
This kind of move caching is known to have improved the performance of the Soviet program Kaissa tenfold.

There are also ideas about the order of move generation. Winning captures are considered first, that is, captures in which a piece of lower value takes a piece of higher value. Then come promotions (when a pawn on the far rank can be exchanged for a stronger piece), then equal captures, and then moves from the history-heuristic cache. The remaining moves can be sorted by board control or some other criterion.
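This ordering can be sketched as a scoring function used to sort the generated moves. The move shape ({ piece, captured, promotion }) and the score bands are assumptions made for this illustration:

```javascript
// Piece values for ordering purposes (pawn = 100 scale).
var value = { p: 100, n: 300, b: 300, r: 500, q: 900 };

// Higher score = searched earlier: winning captures, then promotions,
// then other captures, then quiet moves.
function moveScore(move) {
  if (move.captured && value[move.captured] > value[move.piece]) {
    return 3000 + value[move.captured] - value[move.piece]; // winning capture
  }
  if (move.promotion) return 2000; // promotion
  if (move.captured) return 1000;  // equal or losing capture
  return 0;                        // quiet move
}

function orderMoves(moves) {
  return moves.slice().sort(function (a, b) {
    return moveScore(b) - moveScore(a);
  });
}
```

Good ordering makes alpha-beta cutoffs happen earlier, which is exactly where its best-case speedup comes from.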

Everything would be fine if alpha-beta pruning were guaranteed to give the best answer, even allowing for the long search. But no such luck. The problem is that after searching to a fixed depth the position is evaluated and that is that, yet, as it turned out, in some positions the search simply cannot be stopped there. After many attempts it became clear that the search can only be stopped safely in quiet positions. Therefore an additional search was added to the main one, in which only captures, promotions and checks are considered (the forced, or quiescence, search). It was also noticed that some positions with an exchange in the middle of a line need to be searched deeper. Thus appeared the ideas of extensions and reductions, that is, deepenings and shortenings of the search tree. Best suited for extensions are positions such as pawn endgames, escapes from check, an exchange in the middle of a line, and so on; "absolutely quiet" positions suit reductions. In the Soviet program Kaissa the forced search was somewhat special: after any capture during the search, a forced search began immediately, and its depth was not limited (since it exhausts itself after a while in a quiet position).
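A sketch of such a forced (quiescence) search in negamax form, again over an assumed game interface (evaluate, captureMoves, make, undo) standing in for a real engine's API:

```javascript
// Quiescence search: at the nominal horizon keep searching capture
// moves only, so that the static evaluation is taken in a quiet
// position rather than in the middle of an exchange.
function quiescence(game, alpha, beta) {
  var standPat = game.evaluate();     // score if we stop searching here
  if (standPat >= beta) return beta;  // already too good: cutoff
  if (standPat > alpha) alpha = standPat;

  var captures = game.captureMoves(); // captures only, not all moves
  for (var i = 0; i < captures.length; i++) {
    game.make(captures[i]);
    var score = -quiescence(game, -beta, -alpha); // negamax convention
    game.undo();
    if (score >= beta) return beta;
    if (score > alpha) alpha = score;
  }
  return alpha;
}
```

In the main search this function replaces the bare evaluation call at depth 0.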

As Tony Hoare said, "Premature optimization is the root of all evil in programming." (Note: for those who attribute this quote to Knuth, there are interesting discussions of its origin.)


The new artificial intelligence became the best chess player on Earth after just 4 hours of training!

Do you remember what a sensation the Deep Blue chess supercomputer caused in 1996, when it won the first game of its match against world champion Garry Kasparov? Although Kasparov went on to win that match, it was already clear that artificial intelligence was progressing rapidly and would some day become the best chess player, after which playing against the program would be pointless for people. The only question was when that would happen.

Representatives of Google have said that this moment has finally come. According to the experts, the AlphaZero neural network they developed turned into the most virtuosic and flawless chess player in the history of the game after just 4 hours of self-play. The super-powerful artificial intelligence learned chess knowing only its rules. After playing against itself for 4 hours, it learned to play so well that it easily defeated Stockfish, the chess program previously considered the strongest. The computers played 100 games: AlphaZero won 28 of them and drew the remaining 72. The advanced neural network, which mimics the workings of the human brain, is able to take risks and even use a kind of intuition.

It is no longer necessary to dream of victory over artificial intelligence

Earlier "AlphaZero" models learned the game by watching live chess players. The developers assumed this would help the artificial intelligence better understand game strategy. In fact, it turned out that watching people only slowed the program's development: when the neural network was left to its own devices, its abilities skyrocketed. Now Google's engineers are thinking about how to apply such technologies for real benefit to mankind, since a game of chess, however virtuosic, serves no practical purpose.

In 1968, the chess master David Levy made a bet that no program would beat him within the next decade. All that time he competed regularly against various chess computers and beat them every time. In 1978 he defeated Chess 4.7, the strongest program of its day, and won the bet. Unfortunately, these days there are no such interesting matches to look forward to: we can now only read about one fantastic neural network defeating another. Human chess players can no longer even dream of defeating such monsters. And this is just the beginning of AI's victories over people…


A.I. Alifirov

Cand. Ped. Sci., Associate Professor, RSSU, Moscow, Russian Federation

I.V. Mikhailova, Cand. Ped. Sci., Associate Professor, RSSU, Moscow, Russian Federation

"ARTIFICIAL INTELLIGENCE" IN CHESS

Abstract

The article discusses the genesis of the use of software and hardware capable of carrying out intellectual activity comparable to the intellectual activity of a person.

Keywords

Computer technologies in chess, chess programs, chess.

Today the term "artificial intelligence" (AI) refers to the theory of creating software and hardware capable of intellectual activity comparable to that of a human. In practice, one most often works from a list of tasks, considering a computer system that can solve them to be an AI system. This list usually includes playing chess, proving theorems, solving diagnostic problems from initially incomplete data, understanding natural language, the ability to learn and self-learn, the ability to classify objects, and the ability to derive new knowledge by generating new rules and regularities in knowledge models.

One of the most important problems of the new science of cybernetics was how to improve management and decision-making. One of the founders of cybernetics, C. Shannon, proposed to formalize and program chess in order to use a chess computer as a model for solving similar control problems. Shannon's authority was so great that his ideas immediately laid the foundation for a new scientific direction; they were taken up in the works of A. Turing, K. Zuse and D. Prinz.

The author of information theory, C. Shannon, wrote: "The chess machine is an ideal one to start with, since: (1) the problem is sharply defined both in the allowed operations (the moves) and in the ultimate goal (checkmate); (2) it is neither so simple as to be trivial nor too difficult for satisfactory solution; (3) chess is generally considered to require 'thinking' for skillful play, so a solution of this problem will force us either to admit the possibility of mechanized thinking or to further restrict our concept of 'thinking'; (4) the discrete structure of chess fits well into the digital nature of modern computers."

Later, chess became the arena of a contest between natural and artificial intelligence, and the world's leading chess players played a number of matches against computers. In 1995, in an interview with the popular magazine Wired, G. Kasparov outlined his view of the game: "Chess is not mathematics. It is fantasy and imagination, it is human logic, not a game with a predictable result. I don't think the game of chess can theoretically fit into a set of formulas or algorithms." Two years later, the Deep Blue supercomputer defeated the 13th world champion in a six-game rematch and removed the question of the possibilities of chess-playing artificial intelligence from the agenda. Deep Blue kept a complete database of games in memory and analyzed strategy by calculation alone. After the match Kasparov changed his view, admitting: "Chess is the only field where one can compare human intuition and creativity with the power of the machine." The match changed the course of both classical and computer chess, and artificial-intelligence assistance became widely used in training. D.I. Bronstein wrote in his book "David vs. Goliath" (2003): "Botvinnik believed that chess is the art of analysis, and that the time of lone improvisers like Anderssen, Morphy and Zukertort was gone forever. Looking at modern chess, we must admit that Botvinnik turned out to be right. The 'computer boys' have carried his idea of the need for home analysis to the point of absurdity. They do not even hide the fact that they polish opening variations to a clear result. At the tournament in Linares (2000), the Hungarian Leko admitted without a trace of embarrassment that his entire game with Anand had been on his computer!"

List of used literature:

1. Alifirov A.I. Career guidance work in secondary schools by means of chess // Problems of the Development of Science and Education: Theory and Practice. Proceedings of the International Scientific and Practical Conference, August 31, 2015: in 3 parts. Part II. Moscow: AR-Consult, 2015. Pp. 13-14.

2. Mikhailova I.V., Alifirov A.I. Tactical actions of chess players // Results of Scientific Research: Proceedings of the International Scientific and Practical Conference (February 15, 2016): in 4 parts. Part 3. Ufa: AETERNA, 2016. Pp. 119-121.

3. Mikhailova I.V., Alifirov A.I. Theoretical and methodological foundations of the method of thinking in schemes of chess players // Results of Scientific Research: Proceedings of the International Scientific and Practical Conference (February 15, 2016): in 4 parts. Part 3. Ufa: AETERNA, 2016. Pp. 123-125.

4. Mikhailova I.V. Training of young highly qualified chess players with the help of computer chess programs and the Internet: abstract of Cand. Ped. Sci. dissertation: 13.00.04 / Mikhailova Irina Vitalievna; RSUPC. Moscow, 2005. 24 p.

© Alifirov A.I., Mikhailova I.V., 2016
