Problem Solving in AI

Problem Solving and Searching
IT Elect 104
(Chapter 3)
Some text and images in these slides were drawn from
Russell & Norvig’s published material

Problem Solving
Agent Function

Problem Solving Agent
* Agent finds an action sequence to achieve a goal
* Requires problem formulation
* Determine goal
* Formulate problem based on goal
* Searches for an action sequence that solves the problem
* Actions are then carried out, ignoring percepts during that period
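
A minimal sketch of this formulate-search-execute loop; the goal, problem record, and canned search result below are toy placeholders invented for illustration, not taken from the slides:

```python
# Sketch of the problem-solving agent's loop: formulate a goal and a
# problem, search once for a complete action sequence, then carry it
# out while ignoring percepts.  All helpers here are toy placeholders.
def formulate_goal(percept):
    return "both squares clean"                      # invented goal

def formulate_problem(percept, goal):
    return {"initial": percept, "goal": goal}        # invented problem record

def search(problem):
    return ["Suck", "Right", "Suck"]                 # stands in for a real search

def problem_solving_agent(percept):
    goal = formulate_goal(percept)
    problem = formulate_problem(percept, goal)
    for action in search(problem):                   # execute the whole plan
        yield action                                 # new percepts are ignored

print(list(problem_solving_agent("A dirty, B dirty")))   # ['Suck', 'Right', 'Suck']
```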

Problem
* Initial state
* Possible actions / Successor function
* Goal test
* Path cost function
* State space can be derived from the initial state and the successor function
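
These four components can be written down as a small interface; the sketch below uses illustrative names of my own (they are not defined in the slides), and a concrete problem would fill in the methods:

```python
# Sketch of a search problem built from the four components above.
# Class and method names are illustrative, not taken from the slides.
class Problem:
    def __init__(self, initial_state):
        self.initial_state = initial_state

    def actions(self, state):
        """Possible actions in `state` (successor function, part 1)."""
        raise NotImplementedError

    def result(self, state, action):
        """State reached by applying `action` in `state` (part 2)."""
        raise NotImplementedError

    def goal_test(self, state):
        """True if `state` satisfies the goal."""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Cost of one step; the path cost is the sum of step costs."""
        return 1
```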

Example: Vacuum World
* Environment consists of two squares,
A (left) and B (right)
* Each square may or may not be dirty
* An agent may be in A or B
* An agent can perceive whether a square is dirty or not
* An agent may move left, move right, suck dirt (or do nothing)
* Question: is this a complete PEAS description?
Vacuum World Problem
* Initial state: configuration describing
* location of agent
* dirt status of A and B
* Successor function
* R, L, or S, causes a different configuration
* Goal test
* Check whether A and B are both not dirty
* Path cost
* Number of actions

State Space
* 2 possible locations × 2 × 2 dirt combinations (A is clean/dirty, B is clean/dirty) = 8 states
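
As a concrete check of that count, here is a small sketch of the vacuum-world formulation; the state encoding (agent location plus dirt status of A and B) is an assumption made for illustration, since the slides number the states in a figure instead:

```python
from itertools import product

# Vacuum-world sketch: a state is (agent_location, A_is_dirty, B_is_dirty).
STATES = list(product("AB", [True, False], [True, False]))
print(len(STATES))                       # 8 = 2 locations x 2 x 2 dirt combinations

def result(state, action):
    """Successor function: Left, Right, or Suck applied to a state."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":                 # cleans only the current square
        return (loc, dirt_a and loc != "A", dirt_b and loc != "B")
    return state                         # "do nothing"

def goal_test(state):
    """Goal: both A and B are clean."""
    return not state[1] and not state[2]
```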

Sample Problem and Solution
* Initial State: 2
* Action Sequence: Suck, Left, Suck (brings us to which state?)

States and Successors

Example: 8-Puzzle
* Initial state: as shown
* Actions? Successor function?
* Goal test?
* Path cost?
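
A sketch of one possible formulation; the flat-tuple state encoding with 0 as the blank is an assumption for illustration:

```python
# 8-puzzle sketch: a state is a tuple of 9 entries, row by row, with 0 as
# the blank.  Actions slide the blank; each step costs 1; the goal test
# compares against a fixed goal configuration.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
MOVES = {"Up": -3, "Down": +3, "Left": -1, "Right": +1}

def actions(state):
    """Legal blank moves in `state`."""
    row, col = divmod(state.index(0), 3)
    legal = []
    if row > 0: legal.append("Up")
    if row < 2: legal.append("Down")
    if col > 0: legal.append("Left")
    if col < 2: legal.append("Right")
    return legal

def result(state, action):
    """Swap the blank with the tile it slides onto."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)      # an arbitrary example configuration
print(actions(start))                     # ['Up', 'Down', 'Left', 'Right']
```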

Example: 8-Queens Problem
* Position 8 queens on a chessboard so that no queen attacks any other queen
* Initial state?
* Successor function?
* Goal test?
* Path cost?
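
A sketch of the common incremental formulation, where a state is the sequence of columns of queens placed row by row; the encoding and helper names are assumptions:

```python
# 8-queens sketch (incremental formulation): a state is a tuple of column
# indices, one per queen already placed, filling rows from the top down.
N = 8

def attacks(state, col):
    """Would a queen in the next row, column `col`, attack a placed queen?"""
    row = len(state)
    return any(c == col or abs(c - col) == abs(r - row)
               for r, c in enumerate(state))

def actions(state):
    """Columns where a queen can be added to the next row without attacks."""
    return [col for col in range(N) if not attacks(state, col)]

def result(state, col):
    return state + (col,)

def goal_test(state):
    return len(state) == N               # all 8 queens placed, none attacking

print(actions(()))                       # first queen: any of the 8 columns
```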
Example: Route-finding
* Given a set of locations, links (with values) between locations, an initial location and a destination, find the best route
* Initial state?
* Successor function?
* Goal test?
* Path cost?
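
A sketch of the route-finding formulation on a small made-up map; the locations and link values below are invented for illustration:

```python
# Route-finding sketch on a made-up map; locations and link values are
# invented, but any weighted graph works the same way.
LINKS = {
    "Home":    {"Mall": 4, "School": 2},
    "Mall":    {"Home": 4, "Airport": 5},
    "School":  {"Home": 2, "Airport": 9},
    "Airport": {"Mall": 5, "School": 9},
}
DESTINATION = "Airport"

def actions(state):
    """From a location, the actions are the directly linked locations."""
    return list(LINKS[state])

def result(state, action):
    return action                        # moving to a linked location

def step_cost(state, action):
    return LINKS[state][action]          # value on the link

def goal_test(state):
    return state == DESTINATION

# Path cost of Home -> Mall -> Airport:
print(step_cost("Home", "Mall") + step_cost("Mall", "Airport"))   # 9
```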

Some Considerations
* Environment ought to be static, deterministic, and observable
* Why?
* If some of the above properties are relaxed, what happens?
* Toy problems versus real-world problems
Searching for Solutions
* Searching through the state space
* Search tree rooted at initial state
* A node in the tree is expanded by applying the successor function for each valid action
* Children nodes are generated with a different path cost and depth
* Return a solution once a node with a goal state is reached

Tree-Search Algorithm
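
The algorithm figure is not reproduced here; the sketch below shows the usual shape of a generic tree search, with the fringe holding unexpanded nodes. The problem object is assumed to follow the earlier Problem sketch, and plain FIFO removal stands in for the strategy:

```python
from collections import deque

# Generic tree-search sketch: the fringe holds unexpanded nodes; which
# node is removed next is exactly what distinguishes one strategy from
# another (FIFO removal is used here only as a placeholder).
# A node records the state and the action sequence that reached it.
def tree_search(problem):
    fringe = deque([(problem.initial_state, [])])
    while fringe:
        state, path = fringe.popleft()            # strategy = removal order
        if problem.goal_test(state):
            return path                           # solution: action sequence
        for action in problem.actions(state):     # expand via successor fn
            fringe.append((problem.result(state, action), path + [action]))
    return None                                   # fringe exhausted: failure
```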

Search Strategy
* Strategy: specifies the order of node expansion
* Uninformed search strategies: no additional information beyond states and successors
* Informed or heuristic search: expands “more promising” states
Evaluating Strategies
* Completeness
* does it always find a solution if one exists?
* Time complexity
* number of nodes generated
* Space complexity
* maximum number of nodes in memory
* Optimality
* does it always find a least-cost solution?
Time and space complexity
Expressed in terms of:
* b: branching factor
* depends on possible actions
* max number of successors of a node
* d: depth of shallowest goal node
* m: maximum path-length in state space
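
As a rough illustration of how b and d drive cost (numbers invented for illustration): a complete tree with branching factor b has 1 + b + b^2 + … + b^d nodes down to depth d.

```python
# Rough node count for a complete tree with branching factor b, down to
# the depth d of the shallowest goal (numbers invented for illustration).
b, d = 10, 5
print(sum(b ** i for i in range(d + 1)))   # 111111 nodes for b=10, d=5
```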
Uninformed Search Strategies
* Breadth-First Search
* Uniform-Cost Search
* Depth-First Search
* Depth-Limited Search
* Iterative Deepening Search
Breadth-First Search
* fringe is a regular first-in-first-out queue
* Start with the initial state; then process the successors of the initial state, followed by their successors, and so on…
* Shallow nodes first before deeper nodes
* Complete
* Optimal (if path-cost = node depth)
* Time Complexity:...
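
A self-contained sketch of breadth-first search on the two-square vacuum world (same illustrative state encoding as before); the FIFO fringe guarantees shallow nodes are expanded before deeper ones:

```python
from collections import deque

# Breadth-first search sketch on the two-square vacuum world: the FIFO
# fringe means every node at depth d is expanded before any node at
# depth d+1, so the shallowest solution is found first.
def vacuum_result(state, action):
    loc, dirt_a, dirt_b = state                    # (location, A dirty, B dirty)
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    return (loc, dirt_a and loc != "A", dirt_b and loc != "B")   # Suck

def breadth_first_search(initial):
    fringe = deque([(initial, [])])                # FIFO queue of (state, path)
    while fringe:
        state, path = fringe.popleft()             # shallowest node first
        if not state[1] and not state[2]:          # goal: both squares clean
            return path
        for action in ("Left", "Right", "Suck"):
            fringe.append((vacuum_result(state, action), path + [action]))

print(breadth_first_search(("A", True, True)))     # ['Suck', 'Right', 'Suck']
```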