Informed Search Techniques - Hill Climbing, Best First Search, Greedy Search, A* Search; Adversarial Search Techniques - Minimax Procedure, Alpha-Beta Procedure

Summary

Informed search is a strategy that uses problem-specific knowledge beyond the problem definition, so it can find solutions more efficiently than an uninformed search. The additional knowledge may be represented as an evaluation function or heuristic function. The term heuristic refers to a method that might not always find the best solution but is expected to find a good solution in a reasonable time. The heuristic function is an estimate, based on domain-specific information computable from the current state description, of how close the current state is to the goal state.

Hill climbing searches by always moving uphill: it is simply a loop that continuously moves in the direction of increasing value, so downhill moves are never taken. It terminates when it reaches a peak where no neighbor has a higher value. The hill climbing algorithm does not look ahead beyond the immediate neighbors of the current state, and it typically chooses randomly among the set of best successors if there is more than one. It can be viewed as a special case of depth first search that uses some additional information, namely "push the farthest child onto the stack first so that the nearest node is visited first"; the additional information used by hill climbing is the cost between two nodes.

The best first search technique combines depth first search and breadth first search. It tries to exploit the advantages of both, so the search may change from breadth first search to depth first search and vice versa. As the name implies, it always chooses the best node among the generated successors and their siblings. The greedy search algorithm tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Greedy search is incomplete even in a finite state space, much like depth-first search, because it can get stuck in loops; reaching the goal is therefore not guaranteed.

The A* algorithm is a generic search algorithm that overcomes the problem of best first search by considering the past cost as well as the future cost; its evaluation function is defined as f(n) = g(n) + h(n), where g(n) is the past cost (the cost to reach the current node from the start node) and h(n) is the future cost (the estimated cost to reach the goal node from the current node).

Minimax search is known for its usefulness in calculating the best move in games where all the information is available, for example chess or tic-tac-toe. It navigates a tree that captures all the possible moves in the game, where each move is represented in terms of loss and gain for one of the players. The problem with minimax search is that the number of game states it has to examine is exponential in the number of moves. To deal with this problem we use pruning to eliminate large parts of the tree from consideration, and alpha-beta pruning is preferable for this purpose. The effectiveness of alpha-beta pruning depends on the order of successors. Alpha-beta cutoffs optimize minimax search by evaluating the most promising moves first, remembering prior positions and reusing their backed-up values, and avoiding the generation of equivalent states.

Things to Remember

  • Informed search is a strategy that uses problem-specific knowledge beyond the problem definition. It can find solutions more efficiently than an uninformed search. The additional knowledge may be represented as an evaluation function or heuristic function.
  • The term heuristic refers to a method that might not always find the best solution but is expected to find a good solution in a reasonable time.
  • The heuristic function is an estimate, based on domain-specific information computable from the current state description, of how close the current state is to the goal state.
  • Hill-climbing search always moves uphill: each step goes to a neighbor with a higher value, so downhill moves are never needed.
  • The hill climbing algorithm does not look ahead beyond the immediate neighbors of the current state. It typically chooses randomly among the set of best successors if there is more than one.
  • The best first search technique is the combination of depth first search and breadth first search.
  • The greedy search algorithm tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Greedy search is incomplete even in a finite state space, much like depth-first search, because it can get stuck in loops.
  • A* algorithm is a generic search algorithm which overcomes the problem of best first search.
  • The minimax search is known for its usefulness in calculating the best move in games where all the information is available, for example chess or tic-tac-toe.
  • The effectiveness of alpha-beta pruning depends on the order of successors. Alpha-beta cutoffs are used to optimize minimax search by evaluating the most promising moves first.
  • Alpha-beta cutoffs work by remembering prior positions and reusing their backed-up values, and by avoiding the generation of equivalent states.

Informed Search Techniques - Hill Climbing, Best First Search, Greedy Search, A* Search; Adversarial Search Techniques - Minimax Procedure, Alpha-Beta Procedure

Informed Search:

This is a strategy that uses problem-specific knowledge beyond the problem definition; it can find solutions more efficiently than an uninformed search. The additional knowledge may be represented as an evaluation function or heuristic function.

The term heuristic refers to a method that might not always find the best solution but is expected to find a good solution in a reasonable time. The heuristic function is an estimate, based on domain-specific information computable from the current state description, of how close the current state is to the goal state.
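
As an illustration, the following minimal Python sketch shows one common heuristic for the 8-puzzle: the number of misplaced tiles. The state representation used here, a tuple of nine values with 0 for the blank, is an assumption made only for this example.

def misplaced_tiles(state, goal):
    """Heuristic h(n): count the tiles that are not in their goal position.
    'state' and 'goal' are tuples of nine values; 0 denotes the blank."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

# Two tiles (7 and 8) are out of place, so h(n) = 2.
goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)
print(misplaced_tiles(state, goal))  # prints 2

The closer this count is to zero, the closer the state is judged to be to the goal.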

Hill Climbing Algorithm:

Hill-climbing search moves through the state space the way a climber moves up a hill: steps are taken only in directions that improve the current value, so downhill moves are never needed.

(Figure: graphical representation of the hill-climbing algorithm.)

It is simply a loop that continuously moves in the direction of increasing value, that is, uphill. It terminates when it reaches a peak where no neighbor has a higher value. The hill climbing algorithm does not look ahead beyond the immediate neighbors of the current state.

The hill climbing algorithm typically chooses randomly among the set of best successors if there is more than one. It can be viewed as a special case of depth-first search that uses some additional information: "push the farthest child onto the stack first so that the nearest node is visited first". The additional information used by the hill climbing algorithm is the cost between two nodes.

Algorithm:

  1. Push the initial node onto the stack.
  2. Pop a node from the stack.
  3. If the node is the goal node, go to step 7.
  4. If the node has already been visited, ignore it; otherwise go to step 5.
  5. Do the following:
    Mark the popped node as visited.
    Push its children onto the stack so that the farthest child is pushed first.
  6. Repeat steps 2 to 5 until the stack is empty.
  7. Display the solution.
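
The loop described in the text can also be sketched directly as steepest-ascent hill climbing. This is only a minimal illustration: it assumes the caller supplies a neighbours(state) function that generates successors and a value(state) evaluation function, and it is not tied to any particular problem.

def hill_climbing(start, neighbours, value):
    """Keep moving to the best-valued neighbour until no neighbour is better."""
    current = start
    while True:
        candidates = neighbours(current)      # generate the successors
        if not candidates:
            return current
        best = max(candidates, key=value)     # steepest ascent: pick the best neighbour
        if value(best) <= value(current):
            return current                    # peak reached (possibly only a local maximum)
        current = best

Because the loop stops as soon as no neighbour improves the value, it can halt at a local maximum, a plateau, or a ridge, which are the drawbacks listed below.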

Drawbacks:

  • Local maxima: a state whose value is higher than that of each of its neighbors but lower than the global maximum.
  • Plateau: an area of the search space where the evaluation function is flat or constant, so there is no uphill direction to follow.
  • Ridges: a sequence of local maxima that is very difficult for greedy algorithms to navigate.

Best First Search:

The best first search technique combines depth first search and breadth first search. It tries to exploit the advantages of both: the search may change from breadth first search to depth first search and vice versa. As the name implies, it always chooses the best node among the generated successors and their siblings. The best node is decided with the help of the evaluation function f(n) = h(n), where h(n) is the heuristic estimate of the cost to reach the goal node from the current node.

Algorithm:

  1. Start with the initial node and put it into the priority queue.
  2. Pick the best node from the priority queue.
  3. Generate its successors.
  4. For each successor, do the following:
    If it has not been generated before, evaluate it and add it to the priority queue.
    If it has been generated before, change its parent if the new path is better, and update the cost of reaching it and any of its successors.
    If the goal node is found or no node is left in the priority queue, stop; otherwise,
  5. Go to step 2.
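
The steps above can be sketched with Python's heapq module acting as the priority queue, ordered by f(n) = h(n). This is a simplified sketch: the graph interface successors(node) and the heuristic h are assumptions, and, unlike the second case of step 4, nodes that have already been generated are simply skipped rather than re-parented.

import heapq

def best_first_search(start, goal, successors, h):
    """Always expand the node with the smallest heuristic value f(n) = h(n)."""
    frontier = [(h(start), start)]            # priority queue keyed on h(n)
    parent = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)     # pick the best node
        if node == goal:
            path = []                         # reconstruct the path via parent links
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for child in successors(node):
            if child not in parent:           # not generated before
                parent[child] = node
                heapq.heappush(frontier, (h(child), child))
    return None                               # no node left: failure

Because the frontier is ordered purely by h(n), the same sketch also behaves as the greedy search described in the next section.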

Greedy Search Algorithm:

The greedy search algorithm tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic function, that is, f(n) = h(n). Greedy search is incomplete even in a finite state space, much like depth-first search, because it can get stuck in loops; reaching the goal is therefore not guaranteed. Like depth first search, the algorithm may generate O(b^m) nodes, where m is the maximum depth of the search space, but a good heuristic can give a dramatic improvement in average cost. The space complexity of the tree version is also O(b^m). With a good heuristic function, however, the complexity can be reduced substantially; the amount of reduction depends on the particular problem and on the quality of the heuristic.

A* search algorithm:

This is a generic search algorithm that overcomes the problem of best first search by considering the past cost as well as the future cost. The evaluation function is defined as:
f(n) = g(n) + h(n)
where g(n) is the past cost, the cost to reach the current node from the start node, and h(n) is the future cost, the estimated cost to reach the goal node from the current node.

The sum of g(n) and h(n) is also called the fitness number, which is used to select the best successor.
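
A minimal Python sketch of A* is given below. The weighted-graph interface successors(node), assumed here to yield (neighbour, step_cost) pairs, and the heuristic h are illustrative assumptions rather than part of the original note.

import heapq

def a_star(start, goal, successors, h):
    """Order the frontier by the fitness number f(n) = g(n) + h(n)."""
    g = {start: 0}                                   # past cost g(n)
    parent = {start: None}
    frontier = [(h(start), start)]                   # priority queue keyed on f(n)
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []                                # reconstruct the path via parent links
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path)), g[goal]
        for child, step_cost in successors(node):
            new_g = g[node] + step_cost
            if child not in g or new_g < g[child]:   # found a cheaper path to child
                g[child] = new_g
                parent[child] = node
                heapq.heappush(frontier, (new_g + h(child), child))
    return None, float("inf")                        # goal not reachable

With an admissible heuristic, one that never overestimates the remaining cost, A* returns an optimal path.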

Adversarial Search:

In many applications, multiple agents may search for a solution in the same solution space. This scenario typically occurs in game playing, where two opponents, also called adversaries, are each searching for their own goal. Their goals are usually contrary: two or more players with opposing goals explore the same solution space, each trying to find a solution. This process is called adversarial search. In this setting a player tries to anticipate the opponent's behaviour by intelligently deciding what the impact of each of their own moves will be on the overall configuration of the game.

Minimax search:

The minimax search is known for its usefulness in calculating the best move in games where all the information is available, for example chess or tic-tac-toe. It navigates a tree that captures all the possible moves in the game, where each move is represented in terms of loss and gain for one of the players. It follows that it can only be used to make decisions in zero-sum games, where one player's loss is the other player's gain. Theoretically, this search algorithm is based on Von Neumann's minimax theorem, which states that in these types of games there is always a set of strategies that leads to both players gaining the same value, and that, since this is the best value one can expect to gain, one should employ this set of strategies.

The first step of this search algorithm is to give each node a value in terms of utility for the first player (player 1). These values are derived from the values of the terminal nodes, each of which represents an end of the game. The levels of the tree alternate between the first player's moves, which try to maximize her score, and the second player's moves, which try to minimize the first player's score in order to undermine her success. If the level above the terminal nodes represents the second player's possible moves, then the value of each node is the lowest value among its children, that is, the minimum. On the other hand, if this level represents the possible moves of the first player, then the value of each node is the highest value among its children. Finally, once all the nodes have been assigned values, it is decision time for the first player, who simply picks the move that leads to the highest value.
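
The backing-up of values can be sketched recursively in Python. The game interface used here (is_terminal, utility and successors) is an assumption made for illustration; utilities are measured from the first player's (MAX's) point of view.

def minimax(state, maximizing, is_terminal, utility, successors):
    """Return the backed-up minimax value of 'state' for the MAX player."""
    if is_terminal(state):
        return utility(state)                        # value of an end-of-game node
    values = [minimax(s, not maximizing, is_terminal, utility, successors)
              for s in successors(state)]
    return max(values) if maximizing else min(values)

def best_move(state, is_terminal, utility, successors):
    """MAX picks the successor whose backed-up value is highest."""
    return max(successors(state),
               key=lambda s: minimax(s, False, is_terminal, utility, successors))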

Alpha-Beta Pruning:

The problem with minimax search is that the number of game states it has to examine is exponential in the number of moves. In order to deal with this problem, we use pruning to eliminate large parts of the tree from consideration, and alpha-beta pruning is preferable for this purpose. The key idea is to recognize when a position can never be chosen in minimax, no matter what its children are:

  • Max(3, Min(2, x, y), …) is always ≥ 3, and Min(2, Max(3, x, y), …) is always ≤ 2; we know this without knowing x and y.
  • Alpha is the value of the best choice found so far for MAX (the highest value).
  • Beta is the value of the best choice found so far for MIN (the lowest value).
  • When maximizing, cut off branches whose values are lower than alpha.
  • When minimizing, cut off branches whose values are greater than beta.

The effectiveness of alpha-beta pruning depends on the order of successors. If we can evaluate the best successor first, the search is O(b^(d/2)) instead of O(b^d), which means that in the same amount of time alpha-beta search can search twice as deep. Pruning does not affect the final result, but good move ordering improves its effectiveness: with perfect ordering the time complexity is O(b^(m/2)), effectively doubling the depth of search.
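
A minimal Python sketch of minimax with alpha-beta cutoffs is shown below; as in the earlier sketch, the game interface (is_terminal, utility, successors) is an assumption made for illustration.

def alphabeta(state, maximizing, alpha, beta, is_terminal, utility, successors):
    """Minimax value of 'state' with alpha-beta pruning.
    alpha: best value found so far for MAX; beta: best value found so far for MIN."""
    if is_terminal(state):
        return utility(state)
    if maximizing:
        value = float("-inf")
        for s in successors(state):
            value = max(value, alphabeta(s, False, alpha, beta,
                                         is_terminal, utility, successors))
            alpha = max(alpha, value)
            if alpha >= beta:                        # MIN above will never allow this branch
                break                                # beta cutoff
        return value
    else:
        value = float("inf")
        for s in successors(state):
            value = min(value, alphabeta(s, True, alpha, beta,
                                         is_terminal, utility, successors))
            beta = min(beta, value)
            if beta <= alpha:                        # MAX above will never allow this branch
                break                                # alpha cutoff
        return value

Called as alphabeta(root, True, float("-inf"), float("inf"), ...), this returns the same value as plain minimax but usually examines far fewer nodes, especially when the most promising successors are generated first.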

Optimizing the minimax search:

Alpha-beta cutoffs are used to optimize minimax search by evaluating the most promising moves first. The search is further improved by remembering prior positions and reusing their backed-up values, and by avoiding the generation of equivalent states.

