"Game AI is not about trying to make something smart; it's about making something look smart while still being able to be beaten, though not too easily. That's what makes the game fun, and the key to game AI is fun through illusion, not true intelligence."
• emergent behavior--What does this mean?
Light-switch timer example--trying to outsmart thieves
• Hard-Coded AI--predictable, 100% deterministic
• Randomization--completely random won't work; use a random deviation from the base start and end times
• Weighted Randoms--[Doesn't apply to light program, but it could be applied] Game monster can attack, cast fire spell, run away--60% attacks, 30% casts spell, 10% run away
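A minimal sketch of the weighted-random pick for the monster example (the action names and the use of random.choices are my additions):

```python
import random

# Weighted random action selection: 60% attack, 30% cast fire spell,
# 10% run away, per the monster example.
ACTIONS = ["attack", "cast_fire_spell", "run_away"]
WEIGHTS = [0.60, 0.30, 0.10]

def choose_action():
    """Pick one action at random according to its weight."""
    return random.choices(ACTIONS, weights=WEIGHTS, k=1)[0]
```

Over many frames the monster mostly attacks but stays unpredictable on any single decision, which is the point of weighting rather than hard-coding.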
Finite State Machines
• states, transitions--Patrol, Attack, Run Away (Figure 18.1)
Update function iterates through transitions and does the first one that evaluates to true.
What are examples of transitions? Are they easy or difficult to evaluate?
What are examples of good applications of FSMs? What things would they not work so well for?
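A small sketch of the Patrol/Attack/Run Away machine, with an update function that takes the first transition whose condition is true. The condition thresholds and world-state dictionary are invented for illustration:

```python
# Minimal finite state machine for the Patrol/Attack/Run Away guard.
# transitions maps: state -> list of (condition_fn, next_state), checked in order.

class FSM:
    def __init__(self, start_state, transitions):
        self.state = start_state
        self.transitions = transitions

    def update(self, world):
        # Take the FIRST transition that evaluates to true, then stop.
        for condition, next_state in self.transitions.get(self.state, []):
            if condition(world):
                self.state = next_state
                break

transitions = {
    "Patrol":   [(lambda w: w["player_visible"], "Attack")],
    "Attack":   [(lambda w: w["health"] < 25, "Run Away"),
                 (lambda w: not w["player_visible"], "Patrol")],
    "Run Away": [(lambda w: w["health"] >= 75, "Patrol")],
}
```

Note that transition order matters: in Attack, low health is checked before losing sight of the player, so a hurt guard flees rather than resuming patrol.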
Decision Trees
Each nonleaf node is a decision node and each leaf node is an action.
Start at the root and walk down the tree until you get to an action.
Figure 18.3 Decision tree for a guard
Advantages and disadvantages compared to FSMs?
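A sketch of the walk-from-root-to-leaf evaluation; the guard tree below is a hypothetical example loosely in the spirit of Figure 18.3, not a reproduction of it:

```python
# Decision tree: nonleaf nodes test a condition; leaves (plain strings
# here) are actions. Evaluation walks from the root until it hits a leaf.

class Decision:
    def __init__(self, test, yes, no):
        self.test, self.yes, self.no = test, yes, no

def evaluate(node, world):
    while isinstance(node, Decision):
        node = node.yes if node.test(world) else node.no
    return node  # reached a leaf: this is the action

# Hypothetical guard tree.
guard_tree = Decision(
    lambda w: w["enemy_visible"],
    Decision(lambda w: w["enemy_close"], "attack", "chase"),
    "patrol",
)
```

Unlike an FSM, the tree is stateless: every evaluation starts fresh at the root, so there is no notion of "the state I was in last frame."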
Fuzzy Logic
membership in sets--not just yes or no
AttackSet = player is close AND I am healthy
RunSet = player is close AND I am hurt
Could be in both sets at the same time and do some combination.
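A sketch of fuzzy membership for AttackSet and RunSet. The distance and health scales are invented, and fuzzy AND is taken as the minimum of the memberships (one common choice):

```python
# Fuzzy sets: membership is a degree in [0, 1], not just yes or no.
# Thresholds below (20 units, 100 hp) are made up for illustration.

def closeness(distance):
    """1.0 when touching, falling linearly to 0.0 at 20 units away."""
    return max(0.0, min(1.0, (20.0 - distance) / 20.0))

def healthiness(hp, max_hp=100.0):
    return max(0.0, min(1.0, hp / max_hp))

def attack_membership(distance, hp):
    # "player is close AND I am healthy" -- fuzzy AND as min().
    return min(closeness(distance), healthiness(hp))

def run_membership(distance, hp):
    # "hurt" as the complement of "healthy".
    return min(closeness(distance), 1.0 - healthiness(hp))
```

At half health and close range both memberships are nonzero, so the agent is partly in both sets at once and can blend the two behaviors.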
function GetBestAction()
    bestUtility = 0
    bestAction = none
    for action in currentWorldState.GetPossibleActions()
        tempWorldState = currentWorldState
        tempWorldState.ApplyAction(action)
        utility = tempWorldState.Utility()
        if utility > bestUtility
            bestAction = action
            bestUtility = utility
    return bestAction
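A runnable sketch of the GetBestAction pseudocode. The two actions, their effects, and the utility function are all invented; the key idea is simulating each action on a copy of the world state and scoring the result:

```python
# Each action maps a copy of the world state (a dict) to the state
# after the action. Both actions and the scoring are hypothetical.
ACTIONS = {
    "heal":   lambda s: {**s, "health": min(s["health"] + 30, 100)},
    "attack": lambda s: {**s, "enemy_health": s["enemy_health"] - 25},
}

def utility(state):
    # Hypothetical score: prefer high own health and low enemy health.
    return state["health"] - state["enemy_health"]

def get_best_action(state):
    best_utility, best_action = float("-inf"), None
    for name, apply_action in ACTIONS.items():
        temp_state = apply_action(dict(state))  # simulate on a copy
        u = utility(temp_state)
        if u > best_utility:
            best_action, best_utility = name, u
    return best_action
```

Copying before applying the action matters: the real world state must not change while the agent is only considering its options.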
Goal-Oriented Action Planning
• goals--"desirable world states that the agent wants to achieve"
Choose a goal, then decide how to achieve it
Various techniques for choosing goals, including decision trees, etc.
Figuring out how to achieve the goal can be more difficult
Goal: not be hungry. How can you achieve it?
--Find car keys
--Drive to store
Each action has a set of effects (conditions) that it produces and a set of preconditions (also conditions) that it requires.
What are the preconditions and effects of the actions in the plan above?
The algorithm starts with the goal state and looks for actions whose effects achieve those conditions, then treats each such action's preconditions as "mini" goals, walking backwards until it reaches conditions already true in the world.
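A backward-chaining sketch of this idea for the "not hungry" goal. The two listed steps are extended with two invented ones ("buy food", "eat") so the chain reaches the goal, and all preconditions/effects here are guesses for illustration:

```python
# Each action has preconditions it requires and effects it produces.
ACTIONS = {
    "find car keys":  {"pre": [],            "eff": ["have keys"]},
    "drive to store": {"pre": ["have keys"], "eff": ["at store"]},
    "buy food":       {"pre": ["at store"],  "eff": ["have food"]},
    "eat":            {"pre": ["have food"], "eff": ["not hungry"]},
}

def plan(goal, state):
    """Backward chaining: pick an action whose effects include the goal,
    then recursively plan for its preconditions as "mini" goals.
    No cycle or conflict handling -- real GOAP uses an A*-style search."""
    if goal in state:
        return []  # already true, nothing to do
    for name, action in ACTIONS.items():
        if goal in action["eff"]:
            steps = []
            for pre in action["pre"]:
                steps += plan(pre, state)
            return steps + [name]
    return []  # unreachable goal; a real planner would report failure
```

Planning from an empty world state yields the full chain: find car keys, drive to store, buy food, eat.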
How to search the action space efficiently
Pathfinding
Graph of nodes and edges--resulting paths can look robotic
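A minimal graph search sketch: breadth-first search finds a shortest path in edge count on a node/edge graph. (Games usually use A* with a heuristic; BFS is just the simplest illustration, and the graph below is made up.)

```python
from collections import deque

# Adjacency list: node -> neighbors.
GRAPH = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def bfs_path(start, goal):
    """Return a shortest node-to-node path from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in GRAPH[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None
```

Following such node-to-node paths literally is part of why movement can look robotic: agents turn sharply at each node unless the path is smoothed afterward.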