Game Coding Complete
Ch. 18 An Introduction to Game AI, by David "Rez" Graham

Safari Books Online: http://proquest.safaribooksonline.com/book/-/9781133776574
Chapter 18

Intro
"Game AI is not about trying to make something smart; it's about making something look smart while still being able to be beaten, though not too easily. That's what makes the game fun, and the key to game AI is fun through illusion, not true intelligence."

AI Techniques
Emergent behavior--what does this mean?
Light-switch timer--trying to outsmart thieves
Hard-Coded AI--predictable, 100% deterministic
Randomization--completely random won't work; instead, add random deviation to the start and end times
Weighted Randoms--[doesn't apply to the light program, but it could be] A game monster can attack, cast a fire spell, or run away--60% attack, 30% cast spell, 10% run away
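The weighted-random pick above can be sketched in a few lines of Python; the action names and weights are the ones from the notes, and the function name is my own.

```python
import random

# Monster actions and weights from the notes:
# 60% attack, 30% cast fire spell, 10% run away.
ACTIONS = ["attack", "cast_fire_spell", "run_away"]
WEIGHTS = [60, 30, 10]

def choose_action(rng=random):
    """Pick one action, with probability proportional to its weight."""
    return rng.choices(ACTIONS, weights=WEIGHTS, k=1)[0]
```

Over many calls the monster attacks most often but still surprises the player occasionally, which is the point of weighting rather than uniform randomness.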

Finite State Machines
states, transitions--Patrol, Attack, Run Away (Figure 18.1)
Update function iterates through transitions and does the first one that evaluates to true.
What are examples of transitions? Are they easy or difficult to evaluate?
What are examples of good applications of FSMs? What things would they not work so well for?
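A minimal sketch of the FSM described above, where Update walks the transition list and fires the first one that evaluates to true. The states match Figure 18.1; the conditions and thresholds are invented for illustration.

```python
# Each state owns an ordered list of (condition, next_state) pairs;
# update() takes the first transition whose condition is true.

class State:
    def __init__(self, name):
        self.name = name
        self.transitions = []  # list of (condition, State)

    def add_transition(self, condition, next_state):
        self.transitions.append((condition, next_state))

class StateMachine:
    def __init__(self, initial_state):
        self.current = initial_state

    def update(self, world):
        for condition, next_state in self.current.transitions:
            if condition(world):
                self.current = next_state
                break  # only the first true transition fires

# Patrol -> Attack when an enemy is seen; Attack -> RunAway when hurt.
patrol, attack, run_away = State("Patrol"), State("Attack"), State("RunAway")
patrol.add_transition(lambda w: w["enemy_visible"], attack)
attack.add_transition(lambda w: w["health"] < 25, run_away)
attack.add_transition(lambda w: not w["enemy_visible"], patrol)
```

Note that transition order matters: putting the health check before the visibility check means a hurt guard flees even while the enemy is still visible.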

Decision Trees
Each nonleaf node is a decision node and each leaf node is an action.
Start at the root and walk down the tree until you get to an action.
Figure 18.3 Decision tree for a guard
Advantages and disadvantages compared to FSMs?
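The root-to-leaf walk can be sketched as below. The specific tests and actions are invented to resemble a guard tree; Figure 18.3 in the book may differ.

```python
# Nonleaf nodes are decisions; leaves are actions. Evaluation starts
# at the root and follows yes/no branches until it reaches an action.

class Decision:
    def __init__(self, test, yes, no):
        self.test, self.yes, self.no = test, yes, no

    def evaluate(self, world):
        branch = self.yes if self.test(world) else self.no
        return branch.evaluate(world)

class Action:
    def __init__(self, name):
        self.name = name

    def evaluate(self, world):
        return self.name  # leaf: the walk ends in an action

tree = Decision(lambda w: w["enemy_visible"],
                yes=Decision(lambda w: w["enemy_close"],
                             yes=Action("attack"),
                             no=Action("chase")),
                no=Action("patrol"))
```

Compared to an FSM, the tree has no memory of a current state: every evaluation starts fresh from the root.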

Fuzzy Logic
De-Vulcanizing
membership in sets--not just yes or no
Fuzzy sets:
AttackSet = player is close AND I am healthy
RunSet = player is close AND I am hurt
The agent can have partial membership in both sets at once and blend the behaviors.
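A sketch of the fuzzy sets above: each condition returns a degree of membership in [0, 1] instead of a yes/no answer. The function names, thresholds, and the use of min() for AND are illustrative choices (min is a common fuzzy conjunction), not taken from the book.

```python
def close_to_player(distance, near=5.0, far=20.0):
    """1.0 when within `near`, 0.0 beyond `far`, linear in between."""
    if distance <= near:
        return 1.0
    if distance >= far:
        return 0.0
    return (far - distance) / (far - near)

def healthy(hp, max_hp):
    return hp / max_hp

def hurt(hp, max_hp):
    return 1.0 - healthy(hp, max_hp)

# AttackSet = player is close AND I am healthy
def attack_membership(distance, hp, max_hp):
    return min(close_to_player(distance), healthy(hp, max_hp))

# RunSet = player is close AND I am hurt
def run_membership(distance, hp, max_hp):
    return min(close_to_player(distance), hurt(hp, max_hp))
```

At half health and close range, the agent is 0.5 in both sets simultaneously, so it might, say, attack while backing away.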

Utility Theory

Stuart Russell and Peter Norvig provide an excellent definition for utility theory in their book Artificial Intelligence: A Modern Approach:
"Utility theory says that every state has a degree of usefulness, or utility, to an agent and that the agent will prefer states with higher utility."
• Every possible state has a utility value--how happy will the agent be in that state?
Take the current world state, find the anticipated world state after performing an action, and see what the utility of that new world state is
• Pseudocode from the book:
function GetBestAction()
  bestUtility = 0
  bestAction = none
  for action in currentWorldState.GetPossibleActions()
    tempWorldState = currentWorldState
    tempWorldState.ApplyAction(action)
    utility = tempWorldState.Utility()
    if utility > bestUtility
      bestAction = action
      bestUtility = utility
  return bestAction
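The pseudocode above can be made concrete with a toy world state. Everything here except the loop structure is an assumption: the WorldState class, its two actions, and the utility function are invented for illustration.

```python
import copy

class WorldState:
    def __init__(self, hp, enemy_hp):
        self.hp, self.enemy_hp = hp, enemy_hp

    def get_possible_actions(self):
        return ["attack", "heal"]

    def apply_action(self, action):
        if action == "attack":
            self.enemy_hp -= 10
        elif action == "heal":
            self.hp = min(100, self.hp + 20)

    def utility(self):
        # Happier with more of our health and less of the enemy's.
        return self.hp - self.enemy_hp

def get_best_action(current):
    best_utility, best_action = float("-inf"), None
    for action in current.get_possible_actions():
        temp = copy.deepcopy(current)  # simulate without mutating reality
        temp.apply_action(action)
        u = temp.utility()
        if u > best_utility:
            best_action, best_utility = action, u
    return best_action
```

One change from the pseudocode: starting bestUtility at negative infinity rather than 0, so an action is still chosen when every outcome has negative utility.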

Goal-Oriented Action Planning
goals "desirable world states that the agent wants to achieve"
Choose a goal, then decide how to achieve it
Various techniques for choosing goals, including decision trees, etc.
Figuring out how to achieve the goal can be more difficult
Goal: not be hungry. How can you achieve it?
--Find car keys
--Drive to store
--Buy food
--Drive home
--Cook food
--Eat food
Each action has a set of effects (conditions) that it produces and a set of preconditions (also conditions) that it requires.
What are the preconditions and effects of each action in the plan above?
The planning algorithm starts from the goal state, finds actions whose effects satisfy it, then treats those actions' preconditions as "mini" goals, walking backwards until it reaches conditions already true in the world.
Problems:
World representation
How to search the action space efficiently
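The backward walk can be sketched with the "not be hungry" plan above. The precondition/effect sets are invented to mirror the notes, and this naive recursion has no cycle handling or cost comparison; real GOAP implementations search the action space with something like A*.

```python
# Each action maps to a set of preconditions and a set of effects.
ACTIONS = {
    "find_keys":      {"pre": set(),                      "eff": {"have_keys"}},
    "drive_to_store": {"pre": {"have_keys"},              "eff": {"at_store"}},
    "buy_food":       {"pre": {"at_store"},               "eff": {"have_food"}},
    "drive_home":     {"pre": {"have_keys", "have_food"}, "eff": {"at_home"}},
    "cook_food":      {"pre": {"at_home", "have_food"},   "eff": {"food_cooked"}},
    "eat_food":       {"pre": {"food_cooked"},            "eff": {"not_hungry"}},
}

def plan(goal, state, steps=None):
    """Backward-chain from `goal`: pick an action whose effects achieve
    it, then satisfy that action's preconditions as sub-goals first."""
    if steps is None:
        steps = []
    if goal in state:
        return steps  # already true, nothing to do
    for name, action in ACTIONS.items():
        if goal in action["eff"]:
            for pre in sorted(action["pre"]):  # sorted for determinism
                plan(pre, state, steps)
            steps.append(name)
            state |= action["eff"]  # the action's effects now hold
            return steps
    raise ValueError(f"no action achieves {goal}")
```

Starting from an empty world state, planning for "not_hungry" reproduces the six-step plan from the notes in order.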

Applying Goal-Oriented Action Planning to Games by Jeff Orkin
http://web.media.mit.edu/~jorkin/GOAP_draft_AIWisdom2_2003.pdf

Pathfinding

Graph of nodes and edges--movement that follows the edges exactly can look robotic
A*
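A minimal A* sketch on a 4-connected grid, since the notes only name the algorithm. The grid representation, unit step cost, and Manhattan heuristic are my assumptions; in a game the graph would come from waypoints or a navigation mesh.

```python
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None.
    grid: 2D list where 0 = walkable, 1 = blocked."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible for 4-way movement
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
    came_from = {start: None}
    best_g = {start: 0}

    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = []  # rebuild the path by walking parents backwards
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1  # uniform step cost of 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    came_from[(r, c)] = cell
                    heapq.heappush(open_heap, (ng + h((r, c)), ng, (r, c)))
    return None  # goal unreachable
```

The heuristic is what distinguishes A* from plain Dijkstra: it biases the search toward the goal, expanding far fewer nodes on large maps while still finding a shortest path.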