DecisionMakingAlgos

Artificial Intelligence Algorithms

Individual Project

This post covers a series of popular Artificial Intelligence decision-making algorithms. All of the algorithms are written in C++; I developed them in Visual Studio using the openFrameworks library. openFrameworks is a simple yet powerful library that provides support for multiple systems such as graphics, GUI, sound, etc. Throughout the post I use the word "boid" to mean an AI agent. Implementing the decision-making algorithms was really fun, and I particularly enjoyed developing the behavior trees. There are three parts in this post, each describing a unique decision-making algorithm:

1. Decision Trees

2. Behavior Trees

3. Goal Oriented Action Planning (GOAP)

Before we get into the decision-making algorithms, I want to talk about two other kinds of algorithms that act on the decisions the decision-making layer produces. These are:

1. Movement algorithms: These sit at the lowest level of the AI hierarchy. Movement algorithms let the agents move using either kinematic or dynamic motion. They include Seek, Evade, Flocking, etc.

2. Pathfinding algorithms: These sit one level below the decision-making algorithms. Pathfinding generates a path between two points in the world by converting world space into graph space (quantization); the generated path is then converted back into world space (localization). Popular pathfinding algorithms are Dijkstra's and A*.

I use the Seek movement algorithm, which updates movement based on kinematic motion, and the A* algorithm for pathfinding. The window is divided into a tile-based graph like the one below. The tiles filled in black are obstacles and cannot be passed through. The obstacles and the path are generated dynamically.
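As a rough illustration, a kinematic Seek update can be as small as the sketch below. This is a minimal sketch under my own naming assumptions (KinematicSeek, maxSpeed, update); it is not the project's exact code.

```cpp
#include "ofMain.h"

// Minimal kinematic Seek step: the boid heads straight toward the target
// at full speed, with no acceleration or steering forces involved.
struct KinematicSeek {
    ofVec2f position;   // current boid position in world space
    float   maxSpeed;   // movement speed in pixels per second

    void update(const ofVec2f& target, float dt) {
        ofVec2f toTarget = target - position;
        if (toTarget.length() < 1.0f) return;                   // close enough, stop
        position += toTarget.getNormalized() * maxSpeed * dt;   // kinematic update
    }
};
```

In the project itself, a step like this is presumably applied waypoint by waypoint along the path that A* produces on the tile graph.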

Decision Trees:

Here is a rough sketch of the decision tree I implemented. It is pretty straightforward and simple. There are two types of nodes in the decision tree below:

1. Decision nodes (diamond-shaped): Perform a check based on either a boolean condition or a numerical evaluation, and branch to the true or false child depending on the result.

2. Action nodes (rounded rectangles): Perform an action. An action can be anything that changes the state of the world.

[Image: DT (decision tree sketch)]

To summarize the decision tree and what it does at each node:

Node 1: Checks whether there is a valid target to seek. If yes, go to node 3; if not, go to node 2.

Node 2: Sets a random target. This is an action node.

Node 3: There is a valid target, so this decision node checks whether we have reached it. If yes, go to node 5; if not, go to node 4.

Node 4: We have a target and haven't reached it yet, so just seek the target. This is also an action node.

Node 5: Checks whether we have reached the target and been idle for more than 5 seconds. If yes, go to node 7; if not, go to node 6.

Node 6: We have reached the target but don't want to set a new one just yet; we wait for 5 seconds. The seek action here keeps seeking the current target, which is where the boid already is, so it effectively does nothing.

Node 7: Sets the target to null so that the next time the decision tree is executed, the check at node 1 fails and execution goes to node 2.

If we look at all the nodes, pathfinding is only used at action nodes 4 and 6, where the action is seeking the target. Most of the other nodes either set a value or check a boolean. To evaluate those boolean checks I needed to store some information about the world. These variables are:

  1. Target : a vector representing the current location of the target

  2. Timer : to check if 5 seconds have passed since I’ve reached the target.

Since it is not optimal to run the decision tree every frame, I added a timer so the tree is executed once every 5 seconds. The toughest part was authoring the decision tree itself. My goal was to design a balanced tree, but for the sake of simplicity I chose to leave it imbalanced. The algorithm itself isn't hard to implement, but the data structures are a little tricky. I made a base node class named DecisionTreeNode, and the other node types such as ActionNode and DecisionNode derive from it. There is also an Action class which is the parent of all the actions that can be executed, and since each action can be different, I made a class for each type of action by extending Action. To give an example of this:
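As a rough sketch of how that hierarchy might look (MakeDecision and the constructor shapes are my own assumptions; only the class names DecisionTreeNode, DecisionNode, ActionNode and Action come from the project):

```cpp
#include <functional>
#include <utility>

class Action;   // the action hierarchy is shown further below

// Base class for every node in the decision tree.
class DecisionTreeNode {
public:
    virtual ~DecisionTreeNode() = default;
    // Walk the tree from this node and return the action to perform.
    virtual Action* MakeDecision() = 0;
};

// Diamond-shaped node: evaluates a condition and recurses into a branch.
class DecisionNode : public DecisionTreeNode {
public:
    DecisionNode(std::function<bool()> condition,
                 DecisionTreeNode* trueBranch,
                 DecisionTreeNode* falseBranch)
        : condition(std::move(condition)),
          trueBranch(trueBranch),
          falseBranch(falseBranch) {}

    Action* MakeDecision() override {
        return condition() ? trueBranch->MakeDecision()
                           : falseBranch->MakeDecision();
    }

private:
    std::function<bool()> condition;
    DecisionTreeNode* trueBranch;
    DecisionTreeNode* falseBranch;
};

// Rounded-rectangle node: a leaf that simply returns its action.
class ActionNode : public DecisionTreeNode {
public:
    explicit ActionNode(Action* action) : action(action) {}
    Action* MakeDecision() override { return action; }

private:
    Action* action;
};
```

With classes like these, node 1 in the tree above would be a DecisionNode whose condition checks for a valid target, with node 3 as its true branch and node 2 (an ActionNode) as its false branch.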

This is the data structure for Action, which is what an ActionNode returns when it is executed. Examples of actions in the decision tree above are seeking, setting the target to a random position, and setting the target to null.

[Image: AIAlgos_ActionClass (the Action base class)]

Now, when I wanted an action that seeks the target, I created a new class by extending the Action class above. By overriding the Execute function I can have as many kinds of actions as I want.

[Image: AIAlgos_SeekAction (the SeekAction class)]
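A minimal sketch of what the Action base class and the SeekAction subclass could look like; the Execute override is from the post, while the callback-based constructor is purely my own stand-in for the project's boid and pathfinding code:

```cpp
#include <functional>
#include <utility>
#include "ofMain.h"

// Base class of all actions a decision tree can return.
class Action {
public:
    virtual ~Action() = default;
    virtual void Execute() = 0;   // overridden once per concrete action
};

// Seeks the current target. Instead of the project's real boid class,
// this sketch takes a callback that performs the A*-path-following
// seek step toward a given point.
class SeekAction : public Action {
public:
    SeekAction(std::function<void(const ofVec2f&)> seekTowards, ofVec2f* target)
        : seekTowards(std::move(seekTowards)), target(target) {}

    void Execute() override {
        seekTowards(*target);   // pathfind to the current target and move along the path
    }

private:
    std::function<void(const ofVec2f&)> seekTowards;
    ofVec2f* target;
};
```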

Below is a gif showing the execution of the decision tree.

[GIF: AIAlgos_DecisionTree (the decision tree running)]

In the video, the red circle represents the current target. As you can see, the boid starts seeking the target; when it reaches it, it rotates wildly for 5 seconds (the video is sped up) because it keeps seeking the same location even after reaching it. Then the target is updated to a new position and the boid starts seeking the new target. The behavior matches what I wanted to achieve.

Behavior Trees:

Behavior trees are my favorite among the decision-making algorithms. They are comparatively simple to author and develop, and they are easy to experiment with: you can change part of a tree and attach it to another part of the tree easily. The behavior I wanted to achieve was to have the monster wander until it finds the player, then seek the player until it kills it. When the monster reaches the player, the positions of both monster and player are reset and the behavior tree execution starts again from the beginning. In this section I'm going to use the words enemy and player interchangeably, because this behavior tree belongs to the monster and, from the monster's point of view, the enemy is the player.

This is the behavior tree I authored

[Image: BTAuthored (the behavior tree I authored)]

There are 4 different types of nodes and a total of 9 nodes. Each node has an execution status associated with it; the status can be Success, Failure, or Running.

Node 1 (Selector): A selector executes its children from left to right. It succeeds when one of its children succeeds and fails only if all of them fail. It moves from left to right until it finds a child that succeeds; once a child succeeds, the children to its right are not executed. So children on the left have higher priority than those on the right.

Node 2, 6, 8 (Sequencer): A sequencer also executes its children from left to right, but it only succeeds if all of its children succeed and fails as soon as one of them fails. It breaks out of execution as soon as it finds a failed child, which means the children to the right of the failed child don't get a chance to execute.

Node 3, 5 (Until fail): This node executes its child repeatedly until the child fails, so it keeps running as long as the child succeeds.

Node 4 (Atomic): An atomic node (or action node) is where an action is executed. At node 4 the action is checking whether the enemy (player) is in range. If it fails, the sequencer above it (node 2) also fails and node 5 is never executed.

Node 7 (Atomic): Node 7's task is to seek the enemy. The success or failure of an atomic task has to be determined by the task itself, and although it sounds odd, I made the seek node fail when the monster reaches the player. So there is no 'Success' status for the seek action: it is either 'Running' while it is still seeking, or 'Failure' when it actually reaches the enemy. I chose to do this because I didn't want to complicate things by creating a new type of node that inverts the execution status.

Node 9 (Atomic): This node's task is to wander, but it also checks whether the enemy is in range. If the enemy is in range it fails, which fails its sequencer parent (node 8) and ultimately node 3 (Until fail).

The expected behavior is: wander around until the monster sees or senses the player, then seek the player and kill it.

The fun part about behavior trees is that even though you don't always know which state the agent is currently trying to achieve, you always know where it will end up, based on the tree you authored.

Implementing the algorithm was mostly similar to the decision tree, but behavior trees have one additional piece: a blackboard. A blackboard is a container that holds the necessary data about the world; it is the knowledge source of the BT. I created an unnamed namespace and filled it with the variables I wanted to expose to the BT. Since the variables in that namespace are only accessible from the BT's translation unit, it served the purpose of a blackboard quite well.
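A rough sketch of that idea, with variable names that are my own guesses rather than the project's:

```cpp
// BehaviorTree.cpp
#include "ofMain.h"

// The unnamed namespace plays the role of the blackboard: internal linkage
// means these variables are visible only to the BT code in this file.
namespace {
    ofVec2f playerPosition;          // where the player boid currently is
    ofVec2f monsterPosition;         // where the monster boid currently is
    float   senseRange    = 300.0f;  // sensing radius (the monster stops seeking beyond a distance of 300)
    bool    playerInRange = false;   // refreshed each frame before the tree runs
}
```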

There is also a Tick object, which is passed to every node in the BT. The Tick contains information about the world, the current execution status of the BT, and the task it is executing, if any. This is the data structure for Tick:

[Image: TickStruct (the Tick data structure)]
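A minimal sketch of what such a Tick might contain (the field names are my own assumptions):

```cpp
#include "ofMain.h"

// Execution status every BT node can report.
enum class BTStatus { Success, Failure, Running };

class BTTask;   // sketched in the next snippet

// Passed to every node on each frame; carries world information and the
// execution state of the tree.
struct Tick {
    ofVec2f  playerPosition;    // player information exposed to the tree
    ofVec2f  monsterPosition;   // monster information exposed to the tree
    BTStatus status;            // execution status being bubbled back up
    BTTask*  currentTask;       // the task currently executing, if any
};
```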

I had to add two variables to it, one for the player information and one for the monster information. A task is any node in the BT; all the different types of tasks derive from the BTTask class:

[Image: BTTaskClass (the BTTask base class)]
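Building on the Tick sketch above, the BTTask base class could look roughly like this (again, the member names beyond run are my own assumptions):

```cpp
#include <vector>

// Base class for every task (node) in the behavior tree.
class BTTask {
public:
    virtual ~BTTask() = default;

    // Each node type (selector, sequencer, until-fail, atomic action)
    // provides its own run() semantics.
    virtual BTStatus run(Tick& tick) = 0;

    void addChild(BTTask* child) { children.push_back(child); }

protected:
    std::vector<BTTask*> children;
};
```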

For example this is the SelectorTask

[Image: SelectorBTTask (the SelectorTask class)]
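Its run method could look roughly like the sketch below (assuming the BTTask interface sketched above); a sequencer is included alongside it for contrast:

```cpp
// Selector: succeeds as soon as one child succeeds; fails only if all fail.
class SelectorTask : public BTTask {
public:
    BTStatus run(Tick& tick) override {
        for (BTTask* child : children) {
            BTStatus result = child->run(tick);
            if (result != BTStatus::Failure) {
                return result;        // Success or Running stops the scan
            }
        }
        return BTStatus::Failure;     // every child failed
    }
};

// Sequencer: fails as soon as one child fails; succeeds only if all succeed.
class SequencerTask : public BTTask {
public:
    BTStatus run(Tick& tick) override {
        for (BTTask* child : children) {
            BTStatus result = child->run(tick);
            if (result != BTStatus::Success) {
                return result;        // Failure or Running stops the scan
            }
        }
        return BTStatus::Success;     // every child succeeded
    }
};
```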

The tasks differ in their implementation of the run method. The actions are similar to the ones we discussed in the decision tree section. The BT is executed every frame, which is different from the decision tree execution. The way the BT works is that the current execution status is passed down the tree from each parent to its children, and when the currently executing task finishes with either success or failure, that information is bubbled back up to the top through the Tick.

Here is the GIF of the BT execution. (The playback speed is modified)

[GIF: BT (the behavior tree running)]

The black boid is the player and the red one is the monster. The player boid just wanders aimlessly. As you can see, the monster starts by wandering around; after some time it senses the player and seeks it, and once it reaches the player, both are reset to their start positions. This is exactly the behavior I wanted to achieve.

Goal Oriented Action Planning (GOAP):

GOAP is the trickiest one to implement. It is a planning system that tries to achieve a specified goal by chaining a set of actions. Each action has preconditions and postconditions, and an action can be executed only when its preconditions are met. Think of the preconditions as a required state of the world, represented in some data structure. If the current world state matches an action's preconditions, that action can be executed and the world state is changed by applying the action's postconditions. Applying postconditions (or effects) can also mean applying negative effects (delete effects), which remove information from the world state.

With GOAP, the plan was to implement the player's behavior so that it provokes certain reactions from the monster's BT, so that interesting things happen when both are run in parallel.

This is the class for a Planning state. 

[Image: PlanningState (the planning state class)]
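A minimal sketch of what such a planning state might hold, given the 5-bit world-state representation described next (the member names are my own assumptions):

```cpp
#include <bitset>
#include <string>

// One plannable state/action (Wander, Seek or Evade) together with the
// world-state bits it requires and the bits it adds or removes when applied.
class PlanningState {
public:
    std::string    name;            // "Wander", "Seek" or "Evade"
    std::bitset<5> preconditions;   // bits that must be set in the world state
    std::bitset<5> addEffects;      // bits set after this state runs
    std::bitset<5> deleteEffects;   // bits cleared after this state runs
};
```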

For the world-state representation I used a bit array of 5 elements. Technically that allows 2^5 = 32 possible world states, but the planner I implemented has only 3 states, namely Seek, Evade and Wander. The 5 bits (indices 0-4) represent:

0 - Time since last collision with the monster > 10 seconds

1 - Time since last collision with the monster > 20 seconds

2 - Distance to the monster < 250

3 - Distance to the monster > 300

4 - Distance to the monster > 500

Even though bit 2 can largely be deduced from bit 3, I wanted to add bit 2 to make it simpler for me to understand what is happening. That's purely a design choice I made solely for my own benefit.
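For illustration, the bit indices can be captured in an enum like the one below; the enum names and helper function are mine, and the example builds the initial world state shown later in this section:

```cpp
#include <bitset>

// Indices into the 5-bit world state (names are illustrative).
enum WorldStateBit {
    TIME_SINCE_COLLISION_GT_10 = 0,
    TIME_SINCE_COLLISION_GT_20 = 1,
    DIST_TO_MONSTER_LT_250     = 2,
    DIST_TO_MONSTER_GT_300     = 3,
    DIST_TO_MONSTER_GT_500     = 4
};

// Example: the initial world state described further below.
std::bitset<5> MakeInitialWorldState() {
    std::bitset<5> state;
    state[TIME_SINCE_COLLISION_GT_10] = true;
    state[DIST_TO_MONSTER_GT_300]     = true;
    state[DIST_TO_MONSTER_GT_500]     = true;
    // Bits 1 and 2 stay false: less than 20 seconds since the last
    // collision, and the monster is not within 250 units.
    return state;
}
```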

The preconditions and effects of the three states are as follows:

1. Wander

[Image: WanderGOAP (Wander preconditions and effects)]

2. Evade

[Image: EvadeGOAP (Evade preconditions and effects)]

3. Seek

[Image: SeekGOAP (Seek preconditions and effects)]

The initial state of the world is:

[Image: GOAPInitState (the initial world state)]

This effectively means:

Time since last collision with the monster > 10 is true

Time since last collision with the monster > 20 is false

Distance to the monster < 250 is false

Distance to the monster > 300 is true

Distance to the monster > 500 is true

This is the code that checks whether a state satisfies the preconditions:

[Image: GOAPCheckConditions (the precondition check)]
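A minimal sketch of such a check using std::bitset, together with how a state's effects might be applied; this is my own approximation rather than the project's exact code:

```cpp
#include <bitset>

// True if every precondition bit is also set in the current world state.
bool CheckConditions(const std::bitset<5>& worldState,
                     const std::bitset<5>& preconditions) {
    return (worldState & preconditions) == preconditions;
}

// Applying a state's effects: clear its delete effects, then set its add effects.
std::bitset<5> ApplyEffects(std::bitset<5> worldState,
                            const std::bitset<5>& addEffects,
                            const std::bitset<5>& deleteEffects) {
    worldState &= ~deleteEffects;
    worldState |= addEffects;
    return worldState;
}
```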

With the initial world state, the player enters the wander state; after 20 seconds it starts to seek the monster, and then it starts to evade. The purpose of this is to provoke a reaction from the monster: when the player seeks the monster, the monster's BT senses the player and starts running towards it, while the player tries to evade, which should produce interesting scenarios. One thing to notice is that once the player enters the wander state, it can't leave it until 10 seconds have passed, so if the monster catches the player during those ten seconds, it's a win for the monster.

The goal in this GOAP setup is to provoke different actions from the monster's BT. If the monster is wandering, the goal is to get close enough that it notices the player, and once the monster starts chasing, the goal is to evade it. The monster's BT also stops seeking if it fails to sense the player (distance > 300).

Here are some videos of the behaviors achieved. There are some interesting scenarios where the monster tries really hard to kill the player but the player eventually evades (because I gave the player more speed ;-) ). The reason the player boid always ends up at the edges is the evade action I implemented: in evade, the player always tries to reach the best possible corner it can get to, so you can see somewhat predictable behavior from the player.

[GIF: GOAP1]

[GIF: GOAP2]