Action-Based Discretization for AI Search
Dr. Todd W. Neller*
Department of Computer Science, Gettysburg College, Campus Box 402, Gettysburg, PA 17325-1486

Introduction

As computer gaming reaches ever-greater heights in realism, we can expect the complexity of simulated dynamics to grow as well. To populate such gaming environments with agents that behave intelligently, there must be some means of reasoning about the consequences of agent actions. This ability to seek out the ramifications of possible action sequences, commonly called “lookahead”, is found in programs that play chess, but special challenges face game programmers who wish to apply AI search techniques to complex continuous dynamical systems. In particular, the game programmer must “discretize” the problem, that is, approximate the continuous problem as a discrete problem suitable for an AI search algorithm.

As a concrete example, consider the problem of navigating a simulated submarine through a set of static obstacles. This continuous problem has infinitely many possible states (e.g., submarine positions and velocities) and infinitely many possible trajectories. The standard way to discretize the problem is to define a graph of “waypoints” between which the submarine can easily travel. A simple waypoint graph can be searched, but this approach is not without significant disadvantages.

First, the dynamics of such approximate navigation are not realistic. It is still common to see massive vehicles in computer games turn about instantly and maintain constant velocity at all times. Once acceleration enters into agent behavior, one quickly realizes that the notion of a “waypoint” becomes far more complex. For example, a vehicle with realistic physical limitations cannot ignore momentum and turn a tight corner at arbitrary velocity. A generalized waypoint for such a system would contain not only a position vector but a velocity vector as well, doubling the dimensionality of the waypoint.
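The waypoint idea above can be sketched as a plain graph search. This is a minimal illustration, not the paper's method: the graph, waypoint names, and start/goal choices here are hypothetical, and as the text notes, a generalized waypoint would pair each position with a velocity, multiplying the number of graph nodes.

```python
from collections import deque

def shortest_waypoint_path(graph, start, goal):
    """Breadth-first search over a waypoint graph; returns a shortest
    path (fewest waypoints) from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical waypoint adjacency for a small obstacle course.
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
```

A generalized waypoint in this scheme would be a tuple like `("C", velocity)` rather than `"C"` alone, which is exactly the dimensionality growth the text describes.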
If waypoint density is held constant, memory requirements grow exponentially with the number of waypoint dimensions. The second disadvantage is that, in a dynamic environment, relevant state can incorporate many factors beyond waypoints. If the programmer wishes the submarine to pilot around moving obstacles, state dimensionality increases further, bringing another exponential increase in the memory requirements of our state-based discretization.

* This work was done both at the Stanford Knowledge Systems Laboratory, with support from the Stanford Gerald J. Lieberman Fellowship and NASA Grant NAG2-1337, and at Gettysburg College.

An alternative way to look at the discretization of continuous search problems makes no attempt to discretize the search space at all. Instead, the programmer focuses on two
separate discretization issues: (1) discretizing action parameters (choosing a set of ways to act), and (2) discretizing action timing (choosing when to act). When the high dimensionality of the state space makes a state-based discretization infeasible for search, an action-based discretization can provide a feasible alternative if the computer agent's control interface is low-dimensional, with few discrete action alternatives.

Even so, action-based discretization is not trivial. In our submarine example, an action-based approach might sample control parameters that affect positional and angular velocity. The choice of this sample is (1) not obvious, and (2) crucial to the effectiveness of search. Additionally, the programmer needs to choose good timing of control actions. If the time intervals between actions are too short, search is shallow in time and behavior is shortsighted; if they are too long, search is deep in time but behavior cannot respond adequately between actions.

This paper reviews state-of-the-art search algorithms (e.g., Epsilon-Admissible Iterative-Deepening A* and Recursive Best-First Search), and presents new action-based discretization search algorithms that perform action parameter and action timing discretization.
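The two discretization choices above can be made concrete with a small sketch. This is not the paper's algorithm, only a minimal depth-limited lookahead for a hypothetical 1-D point mass: the sampled acceleration set `ACTIONS` is the action-parameter discretization, and the fixed time step `DT` is the action-timing discretization; the goal position and tolerance are likewise invented for illustration.

```python
ACTIONS = [-1.0, 0.0, 1.0]  # sampled accelerations: the action-parameter discretization
DT = 0.5                    # fixed interval between actions: the action-timing discretization

def step(state, accel, dt=DT):
    """Simulate the (hypothetical) point-mass dynamics for one time step."""
    x, v = state
    v += accel * dt
    x += v * dt
    return (x, v)

def plan(state, depth, goal_x=4.0, tol=0.5):
    """Depth-limited lookahead over sampled actions; returns an action
    sequence whose final state lies within tol of goal_x, or None."""
    x, v = state
    if abs(x - goal_x) <= tol:
        return []            # already at the goal: empty plan
    if depth == 0:
        return None          # lookahead horizon exhausted
    for a in ACTIONS:
        rest = plan(step(state, a), depth - 1, goal_x, tol)
        if rest is not None:
            return [a] + rest
    return None
```

Note how the horizon `depth * DT` captures the trade-off in the text: shrinking `DT` at a fixed depth makes the lookahead shortsighted, while growing it coarsens the agent's responsiveness.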