UnityLearn – AI For Beginners – Crowd Simulations – Fleeing

May 22, 2020

AI For Beginners

Crowd Simulations

Part 2


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Intro

I was going to combine both of the end parts of the Crowd Simulation tutorial into one blog, but the Flocking tutorial was rather large and interesting so I decided to break it off into its own post. Because of that, this is a smaller one just focused on the basic fleeing crowd logic.

Fleeing

This is similar to the concept covered with single AI agents, just applied to many agents at once. An object is introduced to the environment, and each fleeing agent determines the vector toward that object and sets a new course along the exact opposite of that vector.

Key parameters covered for their flee-inducing object:
Radius of Influence – how close agents must be to be affected by the object
Cooldown – how long an agent flees before returning to standard behavior

They decided to induce fleeing by adding a cylinder object to the Nav Mesh environment on mouse click. This required a mouse button script along with adding another dynamic obstacle to the Nav Mesh (similar to the FPS controller from the last tutorial).

This tutorial gets into several useful NavMesh methods that have not been used yet. These are helpful references to use when expanding and developing my own pathfinding methods, since they show some useful options to give developers when working with intelligent pathfinding.

There are NavMeshPath objects which hold information about a path for the agent to some given destination. You can set these path objects using the CalculatePath method, using a vector3 position (the destination) and the NavMeshPath object as parameters to set that path with that given destination. This alone does NOT set the agent along this path, it simply gets the information for that path (this is useful for the next step).

They perform a check on the path using fields within the NavMeshPath class and an enum named NavMeshPathStatus before actually setting the new destination of the agent. The check is as follows:

NavMeshPath path = new NavMeshPath();
agent.CalculatePath(newGoal, path);

if (path.status != NavMeshPathStatus.PathInvalid) { … }

You are able to access the status of a NavMeshPath with a field within the class. They use this to check that the status of the newly created path is valid by checking it against the PathInvalid option within the NavMeshPathStatus enum. Only after passing this check do they set the newly determined path of the agent. Here, they also show that NavMeshPath has a vector3 array field named corners, which are effectively the waypoints of the path the agent uses for its pathfinding.
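Putting the pieces above together, a fleeing agent might look something like the following. This is a hedged sketch of the behavior described, not the tutorial's exact code; field names like scaredObject, radius, and cooldown are my own placeholders for the key parameters.

```csharp
using UnityEngine;
using UnityEngine.AI;

public class FleeingAgent : MonoBehaviour
{
    public GameObject scaredObject; // the cylinder clicked into the scene
    public float radius = 10.0f;    // radius of influence
    public float cooldown = 5.0f;   // how long to flee before standard behavior
    NavMeshAgent agent;
    float fleeTimer = 0.0f;

    void Start() { agent = GetComponent<NavMeshAgent>(); }

    void Update()
    {
        // While the cooldown is running, keep fleeing without re-evaluating.
        if (fleeTimer > 0.0f) { fleeTimer -= Time.deltaTime; return; }
        if (scaredObject == null) return;

        Vector3 toThreat = scaredObject.transform.position - transform.position;
        if (toThreat.magnitude < radius)
        {
            // Flee target: the exact opposite of the vector toward the threat.
            Vector3 newGoal = transform.position - toThreat;

            // Calculate the path first; this does NOT move the agent yet.
            NavMeshPath path = new NavMeshPath();
            agent.CalculatePath(newGoal, path);

            // Only commit to the destination if the path is actually valid.
            if (path.status != NavMeshPathStatus.PathInvalid)
            {
                agent.SetDestination(newGoal);
                fleeTimer = cooldown;
            }
        }
    }
}
```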

Summary

This fleeing logic was pretty basic, but it was helpful to learn a bit more about Nav Mesh in general for Unity. The extra information about NavMeshPath was good to learn about, as well as giving me more options on useful aspects to give my own pathfinding systems.

UnityLearn – AI For Beginners – Crowd Simulations – Flocking

May 22, 2020

AI For Beginners

Crowd Simulations

Part 3


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Intro

This was by far the most interesting crowd simulation tutorial covered in the basic AI tutorials. This one really got into an actual rule based logic system for pathing of agents within a large group to move them in an interesting way with emergent behavior that is still controlled and possible to direct.

Flocking

Part 1

Flocking: simple rules to generate movement for groups of individuals to move towards common goals (i.e. birds or fish)

They create a FlockManager because flock movement requires the individual agents to know about the movement and positioning of all the other agents around them. This manager sits at a higher level, providing data for the entire flock as a whole. It starts with logic to instantiate the flock by creating many fish prefabs at randomized starting positions bound around the FlockManager’s position. They also created a class named Flock to go directly on the individual fish agents themselves.

    Flocking Rules:

  1. Move towards average position of the group
  2. Align with the average heading of the group
  3. Avoid crowding other group members

Flock Rule 1: Move Towards Average Position of Group

This is done by summing all the positions of the agents within the group and dividing by the number of group members, so it is directly the average position of the agents within the group. The agents can then find where they are in relation to this average position, and turn towards it.

Flock Rule 2: Align with the Average Heading of the Group

Similar to rule 1, this is also directly an average within the entire group, but this time it is done using the heading vectors of all the agents within the group. The heading vectors are summed and divided by the total number of agents within a group to determine the group’s overall average heading. The agents then attempt to align their heading with this average heading.

Flock Rule 3: Avoid Crowding Other Group Members

The agents must be aware of the positions of their nearest neighbors and turn away from them, as not to collide with them.

Finally, these three rules produce three vectors which are summed to generate the actual new heading of each individual agent.

new heading = group heading + avoid heading + group position

Back in the Flock class, they start applying some of these rules. Here is a list of some of the variables within their ApplyRules() method and what they represent:

Vector3 vcenter = Vector3.zero; // Average center position of a group
Vector3 vavoid = Vector3.zero; // Average avoidance vector (since avoiding all members in group)
float gSpeed = 0.01f; // Global speed of the entire group (Average speed of the group)
float nDistance; // Neighbor distance to check if other agents are close enough to be considered within the same group
int groupSize = 0; // Count how many agents are within a group (smaller part of the group an individual agent considers neighbors)

When setting up their Flock class and applying these rules, they decided to only apply them to neighbor agents. This means the agents are not tied to the entire flock at all times; they simply check for agents within a certain radius around them and determine their behavior based only on the agents within that radius. I just wanted to clarify this since it was unclear whether some or all of the rules applied to neighbors or to the entire flock (here they apply all rules only to neighbors).

To summarize the Flock class at this point, specifically the ApplyRules() method: each agent finds all the other agents within the given neighbor distance to determine which agents to momentarily flock with. It sums all the positions of these agents together to eventually get the average position. It then checks if any of these agents are extraordinarily close to determine if it should avoid them, and if so, calculates the avoidance vector (just the vector directly away from that agent) and sums that into a total avoidance vector (which is NOT averaged later on). It then sums all the speeds of the neighbors, again to average later on.

Finally, it checks if the agent is within a group (so is groupSize > 0), and performs the averaging mentioned earlier here. The new heading is calculated here by summing the average center position of a group with the avoidance vector (and subtracting the agent’s position itself to get a proper vector relative to its current position) and the agent performs a slerp to move towards this new heading.
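The two paragraphs above can be sketched roughly as follows. This is a hedged reconstruction using the variable names the post mentions (vcenter, vavoid, gSpeed, nDistance, groupSize); the FlockManager fields (allFish, rotationSpeed) and the exact threshold values are assumptions.

```csharp
using UnityEngine;

public class Flock : MonoBehaviour
{
    public FlockManager manager; // assumed: holds allFish[] and tuning values
    float speed;

    void ApplyRules()
    {
        Vector3 vcenter = Vector3.zero; // summed positions, averaged later
        Vector3 vavoid = Vector3.zero;  // summed avoidance, NOT averaged
        float gSpeed = 0.01f;           // summed speeds, averaged later
        float nDistance = 5.0f;         // neighbor radius (tuned in inspector)
        int groupSize = 0;

        foreach (GameObject go in manager.allFish)
        {
            if (go == this.gameObject) continue;
            float d = Vector3.Distance(go.transform.position, transform.position);
            if (d <= nDistance)
            {
                vcenter += go.transform.position;
                groupSize++;
                if (d < 1.0f) // extraordinarily close: steer directly away
                    vavoid += transform.position - go.transform.position;
                gSpeed += go.GetComponent<Flock>().speed;
            }
        }

        if (groupSize > 0)
        {
            vcenter = vcenter / groupSize; // average neighbor position
            speed = gSpeed / groupSize;    // average neighbor speed

            // New heading: center + avoidance, made relative to our position.
            Vector3 direction = (vcenter + vavoid) - transform.position;
            if (direction != Vector3.zero)
                transform.rotation = Quaternion.Slerp(
                    transform.rotation,
                    Quaternion.LookRotation(direction),
                    manager.rotationSpeed * Time.deltaTime);
        }
    }
}
```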

This created a swirling ball of fish that did not particularly seem to deviate from this large mass with a large number of fish (50) and average values for all the sliders (speeds of 0.25 to 2.5; neighbor distance of ~5; rotation speed ~3.5). While moving, significantly reducing the neighbor distance ( < 1.0) did have them separate into smaller groups and swim off infinitely.

Part 2

Adding General Goal Position for Flock

To help provide the group of agents with a general direction, they added a general goal position to the FlockManager class. The Flock class then uses this position within its average center position calculation to help influence the direction of the agents towards this goal position. Initially they tied this to the position of the FlockManager object itself, and moving this around in the editor moves all the agents tied to it in the general direction towards this object (their goal position).

To automate this process a bit, they have the goal position move randomly every now and then within the given instantiation limits (these are the position ranges which the initial agents spawn in around the FlockManager object). This allows for the agents to move around with some more guidance on their own.

They then extend this random “every now and then” process to the Flock class as well. Here they apply it to randomize the agent’s speed and only run the ApplyRules() method occasionally, so they are not constantly following the flocking rules every single frame. This has the added benefit of reducing the processing intensity as well as the agents will not perform the entire logic of flocking every single frame now.
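The "every now and then" pattern is just a random chance evaluated per frame. A sketch of how the Flock class's Update might use it (the percentage thresholds and manager speed limits are illustrative, not the tutorial's exact numbers):

```csharp
void Update()
{
    // Occasionally pick a new random speed within the manager's limits.
    if (Random.Range(0, 100) < 10)
        speed = Random.Range(manager.minSpeed, manager.maxSpeed);

    // Only run the full flocking logic on a fraction of frames,
    // which also cuts the per-frame processing cost.
    if (Random.Range(0, 100) < 20)
        ApplyRules();

    // Always keep swimming forward at the current speed.
    transform.Translate(0, 0, Time.deltaTime * speed);
}
```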

Returning Stray Agents Back to Flock

They finally add logic to deal with rogue agents that leave the flock and travel outward forever. They use the same bounds that define the general spawn area for the agents as the greater bounds to contain them. Unity’s Bounds class is used to check whether an agent is contained within these bounds. If not, the agent changes its heading toward the FlockManager’s location instead, and keeps that heading until it encounters other agents to flock with.
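A sketch of that containment check, assuming the FlockManager exposes the spawn limits as a field (here called swimLimits; Bounds takes a total size, hence the doubling):

```csharp
// Bounds centered on the FlockManager, spanning the spawn limits.
Bounds b = new Bounds(manager.transform.position, manager.swimLimits * 2.0f);

if (!b.Contains(transform.position))
{
    // Stray agent: turn back toward the manager until neighbors are found.
    Vector3 direction = manager.transform.position - transform.position;
    transform.rotation = Quaternion.Slerp(
        transform.rotation,
        Quaternion.LookRotation(direction),
        manager.rotationSpeed * Time.deltaTime);
}
```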

Part 3

Obstacle Detection/Avoidance

The major addition in this final flocking tutorial is obstacle detection. To accomplish this, they have the individual agents cast a physics ray forward, along their direction of travel; if it detects an obstacle, it will start to turn them away from it.

To have the agents turn away from an obstacle, they choose to use Unity’s Reflect method. Using the hit information of the raycast and the normal information from the object hit, Unity can determine the reflection vector based on the incoming ray. This produces a vector away from the object at the same angle, relative to the normal of the surface, as the incoming vector.
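A sketch of that check (ray length and turn speed are illustrative values):

```csharp
RaycastHit hit;
// Look ahead along the direction of travel for an obstacle.
if (Physics.Raycast(transform.position, transform.forward, out hit, 10.0f))
{
    // Mirror the incoming direction about the surface normal, then
    // steer toward that reflected direction.
    Vector3 direction = Vector3.Reflect(transform.forward, hit.normal);
    transform.rotation = Quaternion.Slerp(
        transform.rotation,
        Quaternion.LookRotation(direction),
        rotationSpeed * Time.deltaTime);
}
```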

Video of the Flocking Result In Action

My Flocking Example

Summary

The implementation and fine tuning of Reynolds’ flocking rules here was by far the most interesting part of the crowd simulation tutorials in this overall section. The idea of using a set of fairly simple rules on many, many agents in a large group to provide interesting yet realistic motion with controllable direction and emergent behaviors is exactly what I hoped for when looking into crowd behavior, and AI in general.

It was interesting to learn that Reynolds’ rules are applied to motion by simply converting each of the three rules into its own vector, and then for the most part just summing those vectors. It is also very cool to see just how much you can change the behavior and general motion of the agents by altering just a few values, like neighbor distance, speed, and rotation speed.

The additional features they covered after the bare minimum of flocking were also very helpful and practical. Showing ways to control stray agents and move them in a unified general direction towards a common goal are very good additional behaviors to add to flocking, and they were implemented in a very easy to understand way. Obstacle detection is also extremely nice, but its implementation was very straightforward and basic so it wasn’t quite as exciting (although the use of Unity’s Reflect method is something I hadn’t used before, so that was helpful to learn).

UnityLearn – AI For Beginners – Crowd Simulations – Moving As One & Creating A City Crowd

May 20, 2020

AI For Beginners

Crowd Simulations

Part 1


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Intro

This is the first part of the Crowd Simulations section of the AI course that I covered. This was the first two major tutorials of four, Moving As One & Creating a City Crowd. These introduce the basic concepts of crowd simulations and cover ways to start moving groups of agents with Nav Mesh Agents in Unity to help visualize those concepts (more so than creating the agent crowd AI behaviors yourself).

1: Moving As One

Crowd Behavior: agents move en masse, as opposed to solely as individual units; their movements are usually also dependent on each other’s movements

Reynolds’ Flocking Algorithm:

    Common flocking logic based on 3 simple rules:

  1. Turn towards average heading of group
  2. Turn towards average center of the group
  3. Avoid those around you

They start with a really basic Nav Mesh setup with a bunch of blue capsule agents on one side of a tight corridor, and a bunch of red capsule agents on the other side; each group needs to navigate to a goal on the opposite side, running through/past the other group of capsules. The standard Nav Mesh Agent setup with colliders already gives interesting group interactions by itself. This simulation was mostly to help visualize the idea of lines of flow in crowds, as well as instances of turbulence within crowd simulations.

2: Creating A City Crowd

Part 1

The starting setup adds the Street Crowd Simulation package to a Unity project. They included a starting scene of a street with several different characters with basic idle, walk, and run animations. This setup just helps visualize crowd behavior in a more interesting and realistic setting. Initially the agents select a random object from all the objects tagged “goal” as their destination and move there.

The additions done in the tutorial were having the agents continue finding new destinations after reaching the first one and adding more static meshes as obstacles for the agents to move around. The first addition used a distance check to detect when agents were close enough to their destination, after which they would select another goal randomly from the initialized goal array. To accomplish the second part, they just added large cubes to simulate buildings and make a more realistic street walking setup for the characters.

Part 2

They added a bunch of buildings to the center of the street scene so agents would walk around the outer edge as normal street traffic.

Alternating Character Animations:

All the characters are using the same animations, so they look strange because they are all doing the exact same animations on the exact same time cycles. To rectify this, they looked into the “Cycle Offset” parameter within the standard walking animation for all the characters in the Animator.

They suggest setting this value with the SetFloat method within Unity’s Animator class. I tried doing this to set the value for the float parameter they added and tied to Cycle Offset, but it was not working for me. The string I entered matches the parameter name, and I connected the value to the parameter as they showed, but it was having no effect for me.

FIXED:

I was able to rectify this issue by explicitly stating within the Random.Range method that the values for 0 and 1 are floats (0.0f and 1.0f). Without this explicit declaration, it used the int overload, which excludes the maximum value and so returned 0 every time (initializing everything as it was originally). Making them floats fixed the issue and the characters were put on different cycles as expected.

They also set varied starting speeds for the agents. This included changing the speed of their animations along with their actual Nav Mesh Agent travel speed. To keep these working together well, they just randomly selected a float value and applied this same value to both of these parameters.
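A sketch of those two randomizations together. "CycleOffset" stands in for whatever the Animator parameter tied to the walk animation's Cycle Offset was named, and the speed range is illustrative:

```csharp
using UnityEngine;
using UnityEngine.AI;

public class CrowdCharacter : MonoBehaviour
{
    void Start()
    {
        Animator anim = GetComponent<Animator>();

        // Random.Range(0, 1) would use the int overload and always return 0;
        // the float literals force the float overload, giving [0.0, 1.0].
        anim.SetFloat("CycleOffset", Random.Range(0.0f, 1.0f));

        // Apply one random value to both animation speed and travel speed
        // so the walk cycle keeps pace with the actual movement.
        float speedScale = Random.Range(0.8f, 1.2f);
        anim.speed = speedScale;
        GetComponent<NavMeshAgent>().speed *= speedScale;
    }
}
```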

Dynamic Nav Mesh Obstacle (with First Person Controller):

They then move on to adding dynamic Nav Mesh obstacles by adding a First Person controller and making the player an obstacle on the Nav Mesh. This is done simply by adding a Nav Mesh Obstacle component to the First Person controller. It creates a wireframe similar to a collider that can be shaped to control the area it carves out of the Nav Mesh itself.

Summary

Overall the tutorials have been extremely basic so far and have not really delved into the actual AI behaviors of groups yet. I am hoping they explore implementing Reynolds’ flocking rules at some point so I can get some experience with that. If not, I may take a detour into the world of “boids” to expose myself to more crowd behavior AI.

UnityLearn – AI For Beginners – Autonomously Moving Agents

May 13, 2020

AI For Beginners

Autonomously Moving Agents

*Full Course*


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Intro

I completed the full course for Autonomously Moving Agents within the AI Unity Learn course and am just covering it all in this blog post. I broke each individual tutorial into its own section.

1: Seek and Flee

They start a new project and scene with a robber (NPC) and a cop (player controlled); the robber is given logic to satisfy seeking and fleeing in relation to the player and the terrain obstacles.

Seek: Follow something else around
Flee: Move away from a specific object

Seek follows logic they have already used for following an object. Using the difference in positions between two objects, you can generate a vector which gives the direction between them and use that to guide one object towards the other. Similar logic is used for Flee, except they use the opposite of the vector generated between the objects to point the agent directly away from its target. Since most movement done currently in these projects is destination based, they add this vector to the agent’s position to create an actual target in the desired direction.
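The symmetry between the two behaviors can be sketched in a couple of methods (shaped after the post's description, not necessarily the tutorial's exact code):

```csharp
using UnityEngine;
using UnityEngine.AI;

public class SeekFlee : MonoBehaviour
{
    NavMeshAgent agent;

    void Start() { agent = GetComponent<NavMeshAgent>(); }

    void Seek(Vector3 location)
    {
        // Head straight toward the target location.
        agent.SetDestination(location);
    }

    void Flee(Vector3 location)
    {
        // Same vector, opposite sign: re-anchor it at the agent's own
        // position so it can be used as a NavMesh destination.
        Vector3 fleeVector = location - transform.position;
        agent.SetDestination(transform.position - fleeVector);
    }
}
```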

2: Pursuit

Pursuit: similar to seek, but uses information to determine where the target will be to decide on pathing, as opposed to just its immediate location

Mathematically, this uses information on the target’s position, as well as its velocity vector, to determine the target location. This information, combined with the NPC’s own speed, allows it to determine a path that should intercept its target some time in the future, assuming constant velocity.
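A sketch of the interception math: look ahead in proportion to the current gap and the combined speeds, then Seek the predicted position. Estimating the target's velocity from its position across frames is one assumed approach; the bookkeeping for prevTargetPosition is omitted.

```csharp
void Pursue()
{
    Vector3 targetDir = target.transform.position - transform.position;

    // Velocity estimated from the target's movement since last frame
    // (prevTargetPosition is updated elsewhere each frame).
    Vector3 targetVelocity =
        (target.transform.position - prevTargetPosition) / Time.deltaTime;

    // Time to close the gap at the combined speeds,
    // under the constant-velocity assumption.
    float lookAhead =
        targetDir.magnitude / (agent.speed + targetVelocity.magnitude);

    // Seek where the target will be, not where it is now.
    Seek(target.transform.position + targetVelocity * lookAhead);
}
```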

Conditions they are adding for varied movement:

  1. If the player is not moving, change NPC behavior from Pursue to Seek
  2. If the NPC is ahead of the player, turn around and change Pursue to Seek

They again use the angle of the forward vectors and vectors between the two interacting objects to determine the relative positioning of the NPC and player (whether the NPC is ahead of the player). They use a Transform method, TransformVector, to ensure the target’s forward vector is accurately translated to world space to compare with the NPC’s own forward vector in world space to get a proper angle measurement.

3: Evade

Evade: similar to pursuit but uses predicted position to move away from where target is predicted to end up

Since it is so similar to pursuit, implementing it in the tutorial could basically be done by copying the pursuit logic and just changing the Seek method to the Flee method (which effectively just sets the destination to the opposite of the vector between the agent and its target position). They do it even more simply by just getting the target’s predicted position and straight away using the Flee method on that position.

4: Wander

Wander: the somewhat random movement of the agent when it does not have a particular task; comparable to an idle standing state

Their approach for wander creates a circle in front of the agent; a target location sits on the edge of this circle and moves a bit along it over time to provide some variety in the agent’s pathing.

Important Parameters:

  Wander Distance – distance from agent to center of circle
  Wander Radius – size of circle
  Wander Jitter – influences how the target location moves about the circle
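A sketch of the wander-circle approach using those three parameters (default values are illustrative):

```csharp
public float wanderDistance = 10.0f; // agent to circle center
public float wanderRadius = 5.0f;    // size of circle
public float wanderJitter = 1.0f;    // drift of the target point per frame
Vector3 wanderTarget = Vector3.zero; // kept in the agent's local space

void Wander()
{
    // Nudge the target randomly on the horizontal plane...
    wanderTarget += new Vector3(
        Random.Range(-1.0f, 1.0f) * wanderJitter,
        0,
        Random.Range(-1.0f, 1.0f) * wanderJitter);

    // ...then snap it back onto the edge of the circle.
    wanderTarget.Normalize();
    wanderTarget *= wanderRadius;

    // Project the circle in front of the agent and convert the point
    // to world space to use as a destination.
    Vector3 targetLocal = wanderTarget + new Vector3(0, 0, wanderDistance);
    Vector3 targetWorld = transform.TransformPoint(targetLocal);

    Seek(targetWorld);
}
```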

5: Hide

Part 1

Hiding requires having objects to hide behind. Their approach is to create a vector from the chasing object to an object to hide behind, and extend this vector a bit to create a target destination for the agent to hide at. A vector is then created between the hiding agent and this hiding destination to guide pathing.

The “hide” objects are tagged “hide” in Unity and are static navigation meshes. They created a singleton class named World to hold all the hide locations, which it gathers using the FindGameObjectsWithTag method.

They mention two ways to find the best hiding spot. The agent can look for either the furthest spot or the nearest spot. They decided to use the nearest spot approach.

Hide Method

This goes through the array of all the “hide”-tagged objects gathered in the World class and determines the hiding position for each of them relative to the target object the agent is hiding from. Using this information, it chooses the nearest hiding position and moves there. Each hiding position is determined by creating a vector from the target to the hide object and extending it a bit further in the same direction, so the spot sits some distance behind the hiding object and the hiding object is between the target and the agent.

Part 2

The hide vector calculations use the transform position of the hiding objects, which is generally in the center of the object. This means the distance to place the agent away from the center should vary, because objects can be different sizes.

To consistently get a position close to an object regardless of its size, they use an approach where the objects have a collider and the vector passes fully through this collider and beyond. It then produces a vector from that position coming directly back in the opposite direction until it hits the collider again. This position where it hits the collider is then what is used as the position for the agent to hide in. This is done because it is much easier to determine where a ray or vector first collides with a collider than where it leaves a collider.
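A sketch of that collider-based approach; the method shape follows the description above, with World.Instance.GetHidingSpots() standing in for however the singleton exposes its hide array, and the ray distances as assumed values.

```csharp
void CleverHide()
{
    float dist = Mathf.Infinity;
    Vector3 chosenSpot = Vector3.zero;

    foreach (GameObject hideObj in World.Instance.GetHidingSpots())
    {
        // A point well beyond the obstacle, on the far side from the target.
        Vector3 hideDir = hideObj.transform.position - target.transform.position;
        Vector3 farPoint = hideObj.transform.position + hideDir.normalized * 100.0f;

        // Cast back toward the obstacle: the first collider hit from this
        // side is exactly the far edge, i.e. the actual hiding position.
        Collider hideCol = hideObj.GetComponent<Collider>();
        Ray backRay = new Ray(farPoint, -hideDir.normalized);
        RaycastHit info;
        hideCol.Raycast(backRay, out info, 250.0f);

        // Keep the nearest hiding position found so far.
        if (Vector3.Distance(transform.position, info.point) < dist)
        {
            chosenSpot = info.point + hideDir.normalized * 2.0f; // small buffer
            dist = Vector3.Distance(transform.position, info.point);
        }
    }

    agent.SetDestination(chosenSpot);
}
```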

The new method they created with this additional logic is named CleverHide(). Combining this logic with the NavMeshAgent tools in Unity can require some fine tuning. The main factor to watch on the NavMesh side is the stopping distance, the distance within the actual destination an agent must reach for the system to consider it arrived and stop it.

They added a house object to test the system with more objects of various sizes and it worked ok. It was interesting because the agent wouldn’t move without significant movement from the target player, so sometimes you could get very close before it would move to another hiding position. I am not positive whether this was another NavMesh stopping-distance interaction or something in the new hiding method.

Finally, for efficiency purposes they did not want to run any Hide method every frame, so they added a bool check to see whether the agent was already hidden before actually performing the Hide logic. To check if the agent should hide, they performed a raycast to see if it could directly see the player target. If so, it would hide; if not, it would stay where it was.

6: Complex Behaviors

Complex Behaviours: combining the basic behaviours covered so far to give decisions for the agent to determine how it should act. The first combination they do is choosing between pursue (when player target is looking away) and hide (when player target is looking at them). The looking check is done with an angle check between the target and the forward vector.

They also added a cooldown timer to update the transition between states since it led to some very weird behavior when hiding with the current setup (the agent would break line of sight with the target immediately upon hiding, which would then cause it to pursue immediately, and then hide again, etc. so it would just move back and forth at the edge of a hiding spot).

7: Behavior Challenge

Challenge:

  1. Keep pursuit and hiding as they are
  2. Include range: if the player is outside a distance of 10, the agent should wander

My approach: I added a bool method named FarFromTarget() that did a distance check between the agent and the target. If the magnitude of that vector was greater than 10, it returned true, otherwise it returned false.

Then in Update I added another option of logic after the cooldown bool check to see if FarFromTarget was true. If so, the agent entered Wander and else it performed the similar logic before with the checks to perform either CleverHide or Pursue.

Their approach: They also created a bool method, but they named it TargetInRange(). Following this, they did the same exact process I did. They checked for TargetInRange within the cooldown if statement check, and performed Wander above all else, then it checked with else if it did the other previous logic (CleverHide or Pursue).
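The combined decision logic both solutions arrive at can be sketched roughly as follows; the method names follow the post, while the cooldown handling (coolDown flag plus a hypothetical BehaviourCooldown reset helper) is a simplified assumption.

```csharp
void Update()
{
    if (!coolDown)
    {
        if (FarFromTarget()) // distance > 10: nothing nearby to react to
        {
            Wander();
        }
        else if (CanSeeTarget() && TargetCanSeeMe())
        {
            CleverHide();
            coolDown = true;
            // Hypothetical helper that sets coolDown back to false.
            Invoke("BehaviourCooldown", 5);
        }
        else
        {
            Pursue();
        }
    }
}
```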

Summary

All these behaviors are actually relatively simple on their own, but as shown in the final tutorials, combining these with transitions can create interesting and seemingly complex behaviors. This type of design would work very well with the Finite State Machine systems covered in earlier tutorials (as well as others I’ve researched as well). State Machines are also very nice to ensure the isolation and encapsulation of the individual types of behavior.

Completed Prototype Model of a Shark for Dark Shark Project

April 14, 2020

Model Shark for Dark Shark

Modeling in Maya 2019

Modeling a Shark – Pt 4

Youtube – Tutorial #1

By: Dan Pejril


Summary

I followed parts 4 and 5 of this tutorial series on Youtube to guide my process for making my shark model for the Dark Shark project. Part 4 helped me find a way to mirror objects in Maya to create the pectoral fins for the model. The approach they used was to delete half the faces of the model, then duplicate the remaining half with “Duplicate Special” and a scale of -1.0 in the x. This way anything done on one half of the model was duplicated on the other side.

Part 5 got more into creating the mirrored objects and merging them back together. After creating the pectoral fins, they needed to combine the two halves back into a single mesh, which was done with “Combine”. This unites the individual meshes into one; however, it does not really modify the verts and edges. To complete the combination, they used “Merge” to join the verts and edges that are very close together, which properly finishes combining the mesh.

Results

I was pretty happy with the results since I had not modeled in a while. I was just looking to make a simple, low-poly prototype model that I could use as an effective placeholder in the transition to making my game project 3D. My Dark Shark project uses an orthographic view from the top, so the character silhouette from the top view is the most important; I focused on making that stand out and look nice. I am still working on how to cleanly represent the dorsal fin and tail fins in this view, but I made sure the pectoral fins were clearly visible to really drive home the imagery of a shark.

Shark Model in Maya

Shark Model in Unity and Game View

Modeling a Shark Player for Dark Shark Project

April 13, 2020

Model Shark for Dark Shark

Modeling in Maya 2019

Modeling a Shark – Pt 1

Youtube – Tutorial #1

By: Dan Pejril


Summary

I am looking to convert my original Dark Shark small game project from a 2D game to a 3D game as I progress with it, so I am using this as an opportunity to practice using Maya as a 3D modeling software and the pipeline of migrating models from there into Unity, since I haven’t done either in a while. My main focus is the gameplay and designing the underlying game systems, so I do not need an amazing model for now. This is just practice, so I’m making a simple, low-poly shark to act as a placeholder, and I can always upgrade it and make it nicer later if I want.

The video tutorial I have linked is a simple tutorial for modeling a simple shark, so I am using it as a rough reference as I brush up on the basics of using Maya as well as helping me design the shark a bit. As I mentioned, I’m just looking for a simple blocky low poly shark for now, so I am not dealing with some of the finer details from the tutorial, and ignoring details such as smoothing.

Some of the techniques it has helped me brush up on so far are: adding edge loops, selecting/moving/scaling verts/faces/edges, and extrusion. This is after going through the first three parts of this modeling tutorial series. I have done all this before, but it has been a long time now so it was helpful just to have a bit of guidance to remember which techniques to use when.

UnityLearn – AI For Beginners – Finite State Machines – Pt.02 – Finite State Machine Challenge

April 6, 2020

AI For Beginners

Finite State Machines

Finite State Machines


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Finite State Machine Challenge

This tutorial provided a challenge to complete and then provided a solution. The challenge was to create a state where the npc would retreat to an object tagged as “Safe” when the player approached the NPC closely from behind.

My Approach

Since they had a State enum named Sleep already that we had not worked with yet, I used that as the name of this new state (I started with Retreat, but then found the extra Sleep enum so I changed to that since I assumed it would be more consistent with the tutorial). Similar to the CanSeePlayer bool method added to the base State class for detecting when the player is in front of the NPC, I added an IsFlanked bool method here that worked similarly, but just detected if the player was very close behind instead of in front. I used this check in the Patrol and Idle state Update methods to determine if the agent should be sent into the new Sleep state.

In the Sleep state itself I used similar logic from the Pursue state for the constructor, so I set the speed to a higher value and isStopped to false so the NPC would start quickly moving to the safe location. In the Enter stage I found the GameObject with tag “Safe” (since this was set in the conditions for the challenge to begin with) and used SetDestination with that object’s transform.position.

The Update stage simply checked if the npc got close to the safe object with a continuous magnitude vector check, and once it got close enough, it would set the nextState to Idle before exiting (since Idle quickly goes back to Patrol in this tutorial anyway, this is the only option really needed).

Finally, the Exit stage just performs ResetTrigger for isRunning to reset the animator and moves on to the next State (which is only Idle as an option at this time).
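My Sleep state described above might look roughly like this; the base State class, stage/EVENT handling, and animator triggers follow the tutorial's FSM pattern, but the exact constructor signature, speed value, and arrival distance are assumptions.

```csharp
public class Sleep : State
{
    GameObject safe;

    public Sleep(GameObject _npc, NavMeshAgent _agent, Animator _anim, Transform _player)
        : base(_npc, _agent, _anim, _player)
    {
        name = STATE.SLEEP;
        agent.speed = 5;       // move quickly toward the safe spot
        agent.isStopped = false;
    }

    public override void Enter()
    {
        // The challenge specifies the safe location is tagged "Safe".
        safe = GameObject.FindGameObjectWithTag("Safe");
        agent.SetDestination(safe.transform.position);
        anim.SetTrigger("isRunning");
        base.Enter();
    }

    public override void Update()
    {
        // Close enough to the safe object: settle back into Idle.
        if ((safe.transform.position - npc.transform.position).magnitude < 2.0f)
        {
            nextState = new Idle(npc, agent, anim, player);
            stage = EVENT.EXIT;
        }
    }

    public override void Exit()
    {
        anim.ResetTrigger("isRunning");
        base.Exit();
    }
}
```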

Their Approach:

Most of what they did was very similar, although they did make a new State named RunAway instead of the extra Sleep State, so I could have stuck with Retreat and been fine.

Notable differences were that they checked if the player was behind them by changing the order of subtraction when performing the direction check (changed player – npc to npc – player) where I just had the angle check use the negative forward vector of the npc instead of the positive vector. These give effectively the same results, but I liked my approach better since it matched up with what was actually being checked better.

They also set the safe gameObject immediately in the constructor, where I was setting it in the Enter stage of the State. Again, this gives the same results in most cases, but I think their approach was better here: the sooner you perform that FindGameObjectWithTag and cache the result, the more certain you are it is available when it is actually needed.

Finally, for their distance check to see if they had arrived at the safe zone, they used a native NavMeshAgent value, remainingDistance. I used the standard distance check of subtracting the vectors and checking the magnitude, so these again give similar results. Mine is more explicit about what it is checking, while the NavMeshAgent value is cleaner, so both had pros and cons.

Summary

This was a nice challenge just to work with a simple existing finite state machine. As they mentioned in the tutorial, I think setting the safe object in the static GameEnvironment script and pulling from that (instead of calling FindGameObjectWithTag every time the NPC enters the Sleep/RunAway State) would be much more efficient. To help with checking states and debugging, I also added a Debug.Log to the base State Enter stage method that printed the name of the current State. This showed me immediately which State was entered each time, a very handy state machine check that only required a single line of code.

UnityLearn – AI For Beginners – Finite State Machines – Pt.01 – Finite State Machines

April 1, 2020

AI For Beginners

Finite State Machines

Finite State Machines


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Finite State Machines

Finite State Machine (FSM): conceptual machine that can be in exactly one of any number of states at any given time.
Represented by a graph where nodes are the states, and the paths between them are the transitions from one state to another. An NPC will stay in one state until a condition is met which changes it to another state.

Each state has 3 core methods: Enter, Update, Exit

  • Enter: runs as soon as the state is transitioned to
  • Update: is the continuous logic run while in this state
  • Exit: run at the moment before the NPC moves on to the next state

State Machine Template Code (Can use core of this for each individual state):

public class State
{
    public enum STATE
    {
        IDLE, PATROL, PURSUE, ATTACK, SLEEP
    };

    public enum EVENT
    {
        ENTER, UPDATE, EXIT
    };

    public STATE name;
    protected EVENT stage;
    protected State nextState;   // state to transition to once this one exits

    public State()
    { stage = EVENT.ENTER; }

    public virtual void Enter() { stage = EVENT.UPDATE; }
    public virtual void Update() { stage = EVENT.UPDATE; }
    public virtual void Exit() { stage = EVENT.EXIT; }

    public State Process()
    {
        if (stage == EVENT.ENTER) Enter();
        if (stage == EVENT.UPDATE) Update();
        if (stage == EVENT.EXIT)
        {
            Exit();
            return nextState;
        }
        return this;
    }
}

Creating and Using A State Class

State class template (similar but slightly different from last tutorial, with some comments):

using UnityEngine;
using UnityEngine.AI;   // needed for NavMeshAgent

public class State
{
    public enum STATE
    {
        IDLE, PATROL, PURSUE, ATTACK, SLEEP
    };

    public enum EVENT
    {
        ENTER, UPDATE, EXIT
    };

    // Core state identifiers
    public STATE name;
    protected EVENT stage;

    // Data to set for each NPC
    protected GameObject npc;
    protected Animator anim;
    protected Transform player;
    protected State nextState;
    protected NavMeshAgent agent;

    // Parameters for NPC utilizing states
    float visionDistance = 10.0f;
    float visionAngle = 30.0f;
    float shootDistance = 7.0f;

    public State(GameObject _npc, NavMeshAgent _agent, Animator _anim, Transform _player)
    {
        npc = _npc;
        agent = _agent;
        anim = _anim;
        stage = EVENT.ENTER;
        player = _player;
    }

    public virtual void Enter() { stage = EVENT.UPDATE; }
    public virtual void Update() { stage = EVENT.UPDATE; }
    public virtual void Exit() { stage = EVENT.EXIT; }

    public State Process()
    {
        if (stage == EVENT.ENTER)
            Enter();
        if (stage == EVENT.UPDATE)
            Update();
        if (stage == EVENT.EXIT)
        {
            Exit();
            return nextState;
        }
        return this;
    }
}

Notice that the public virtual methods for the various stages of a state look a bit awkward: both Enter and Update set stage to EVENT.UPDATE. This is because each method should set stage to the next stage that Process should call, and for both of them that next stage is Update. After entering, you move to Update, and each Update moves back to Update again until something tells the state to Exit.

They also started to make actual State scripts, which create new classes that inherit from this base State class. The first example was an Idle state with little going on to get a base understanding. Each of the stage methods (Enter, Update, Exit) used the base versions of the methods from the base class as well as their own unique logic particular to that state. Adding the base methods within just ensured the next stage is set properly and uniformly.

Patrolling the Perimeter

This tutorial adds the next State for the State Machine, Patrol. This gets the NPC moving around the waypoints designated in the scene using a NavMesh.

They then create the AI class, which is the foundational logic for the NPC that will actually be utilizing the states. This is a fairly simple script in that the Update simply runs the line:

currentState = currentState.Process();

This handles properly setting the next State with each Update, as well as deciding which state to use and which stage of that state to run.

It turns out running the base Update method at the end of each individual State class's Update method was overwriting its ability to set the stage to Exit, so the states could never leave the Update stage. They fixed this by simply removing those base method calls.
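To see why those base calls break the flow, here is a minimal Python mirror of the Process pattern (the Idle/Patrol classes and the tick counter are made up for illustration; the tutorial's actual code is the C# above): the Idle state sets nextState and stage itself, and deliberately does not call the base update afterwards, so Process can reach the Exit branch.

```python
# Stage constants mirroring the EVENT enum in the C# template.
ENTER, UPDATE, EXIT = range(3)

class State:
    def __init__(self):
        self.stage = ENTER
        self.next_state = self

    def enter(self):
        self.stage = UPDATE

    def update(self):
        self.stage = UPDATE

    def exit(self):
        self.stage = EXIT

    def process(self):
        if self.stage == ENTER:
            self.enter()
        if self.stage == UPDATE:
            self.update()
        if self.stage == EXIT:
            self.exit()
            return self.next_state
        return self

class Idle(State):
    def __init__(self, ticks_until_bored=2):
        super().__init__()
        self.ticks = ticks_until_bored

    def update(self):
        self.ticks -= 1
        if self.ticks <= 0:
            # Set the transition directly; calling the base update()
            # after this would overwrite stage back to UPDATE and the
            # state could never exit (the bug the tutorial fixed).
            self.next_state = Patrol()
            self.stage = EXIT

class Patrol(State):
    pass

current = Idle()
for _ in range(3):                 # stands in for the AI class's Update loop
    current = current.process()
print(type(current).__name__)      # Patrol
```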

Summary

Using Finite State Machines is a clean and organized way to give NPCs various types of behaviors. It keeps the code clean by organizing each state of behaviors into its own class and using a central manager AI for each NPC to move between the states. This also helps ensure an NPC is only in a single state at any given time, to reduce errors and debugging.

This setup is similar to other Finite State Machine implementations I have run into in Unity. The Enter, Update, and Exit methods are core in any basic implementation.

UnityLearn – AI For Beginners – Navigation Meshes – Pt.02 – Navigation Meshes

March 30, 2020

AI For Beginners

Navigation Meshes

Navigation Meshes


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Navigation Mesh Introduction

Nav Mesh: Unity function for making a navigable area for agents to traverse over.

There are 4 main tabs in the Navigation window:

  • Agents
  • Area
  • Bake
  • Object

Agents

Core Parameters: Radius, Height, Step Height, Max Slope

These dimensions determine where the agent can fit when navigating around obstacles, as well as how it can traverse elevation differences. Radius and height limit where a character can go (it cannot fit through very small gaps or under objects very close to the ground), step height is the elevation difference it can traverse in a single step, and max slope is the steepest incline the agent can travel on. The Name field lets you save several types of agents with different values for these four core parameters.

Area

How you define different costs for different types of areas (used in A* pathing).

Bake

Creates the Nav Mesh over your given series of meshes using their polygons. It does this using a template agent called the baked agent size. By default there is only a single mesh created for the default agent type, but there are additional tools that can allow you to create multiple meshes to help handle various sized/types of agents. The Generated Off Mesh Links parameters determine how far off the mesh an agent should be able to jump or drop to get to another mesh location.

Advanced Options (Under Bake):
  • Manual Voxel Size: lets you set the voxel size used to generate the Nav Mesh; larger voxels are less detailed and follow the underlying meshes less accurately; the default is 3.00 voxels per agent radius, and you should generally aim for a value between 2 and 4
  • Min Region Area: Helps remove areas that are deemed navigable, but are too small to really be of use or will cause issues by existing
  • Height Mesh: bool to determine if Nav Mesh should average elevations to turn steps into slopes or not

Object

Where you assign specific area types to different parts of the mesh, choose which parts of your mesh generate nav meshes, and create mesh links, which let agents travel from one nav mesh to another by jumping or falling.

From Waypoints to NavMesh

Introduction of Unity NavMesh, where they introduce the concept of baking the Nav Mesh with use of static game objects and adding the NavMeshAgent component to the AI agents you want to follow the Nav Mesh.

NavMeshAgents

They set up a new scene and project with a Nav Mesh. Baking specifically looks for game objects marked “Navigation Static” to build around. They showed that selecting a gameObject before going into play mode with the Nav Mesh active helps visualize how the Nav Mesh determines that agent's paths.

Areas and Costs

The Areas tab in the Navigation Window lets you set different parts of the mesh as different types of areas. When selecting the polygons or meshes to use for these different areas, select the desired polygons and go to the Object tab. Here you can select which Area to apply to those specific polygons. These areas can then be used to tell agents where they can and cannot go. This is done by going to the NavMeshAgent component of the agent and changing the Area Mask of the Path Finding for that agent. By turning an Area type on or off, you designate which meshes it can travel on at all.

In the Areas tab, you can assign different Cost values to these different areas. This can make them more or less appealing for the agents to traverse across (higher values for cost mean the agents are less likely to use that path).
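As a toy model of how those costs weigh into path selection (this is an illustrative sketch with made-up area names and numbers, not Unity's actual computation): if the cost of a path is each segment's length multiplied by its area's cost, then a longer detour can still beat a shorter route through an expensive area.

```python
def path_cost(segments, area_costs):
    """segments: list of (area_name, length) pairs along the path.
    Total cost is the sum of length * area cost per segment."""
    return sum(length * area_costs[area] for area, length in segments)

area_costs = {"Walkable": 1.0, "Mud": 3.0}   # Mud is 3x as expensive

direct_through_mud = [("Walkable", 2.0), ("Mud", 4.0), ("Walkable", 2.0)]
longer_detour = [("Walkable", 10.0)]         # longer, but avoids Mud

print(path_cost(direct_through_mud, area_costs))  # 16.0
print(path_cost(longer_detour, area_costs))       # 10.0 -> the detour wins
```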

Following a Player on a NavMesh and Setting Up Off-Mesh Links

This tutorial set up another scene in another project, this time based around following the player. This was as simple as calling the NavMeshAgent component's SetDestination method with the player's position in Update.

Off Mesh Links

Off Mesh Links: used if you have gaps in your NavMesh that you want an agent to cross

To create the Mesh Links, select the polygons of interest that you want to link over a gap and go to the Object tab. Here you can select “Generate Mesh Links”. Then go to the Bake tab, set the “Drop Height” and/or “Jump Distance” under the “Generated Off Mesh Links” section, and bake the mesh again to incorporate these links. They will be visually shown as circles between the meshes. Jump Distance is good for crossing horizontal disconnects, where Drop Height is good for crossing significant vertical disconnects. Setting drop links is similar, in that you select the two main groups of meshes in question and generate the mesh links.

Summary

After doing a lot of work with A*, having built a pathfinding system with it from scratch, a lot of the tuning factors and parameters for Nav Mesh made sense given how A* works. It was good to see so much overlap, since I can learn from the parameters they expose in Nav Mesh to give my own A* system effective, modifiable parameters to tweak for design needs.

Nav Mesh does seem powerful and very easy to get running, but I would have to work with it more to see just how controllable it is. I still like having my own A* system, whose internals I know well enough to tweak exactly how I want, but Nav Mesh offers a lot of the features I am looking to add, so I may need to explore it more to see how easy it is to work with.

UnityLearn – AI For Beginners – Waypoints and Graphs – Pt.01 – Graph Theory

March 24, 2020

AI For Beginners

Graph Theory

Graph Theory


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Graph Theory

Intro

The AI for Beginners course starts very basic so I have not covered a lot up until now. It handles the basics of moving an object with simple scripting, as well as guiding and aiming that movement a bit. This section starts to get into some more interesting theory and background for AI.

Graphs are simply collections of nodes and edges. Nodes are locations or points of data, and edges are the paths connecting them (which also carry significant data themselves). There are two directional types for edges within these graphs: directed and undirected. Directed edges only allow movement between two nodes in a single direction, while undirected edges allow movement in either direction between nodes.

Graphs are used in any case involving movement between states. These states can be physical or conceptual, so the nodes and edges between them can be much more theoretical than actual objects or locations.

Utility Value: This is the value for an edge. Some examples shown were time, distance, effort, and cost. These are values that help an NPC make a decision to move from one node to the next over said edge.

Basic High Level Algorithms for Searching Nodes

Breadth-First Search

Marks the original node as 1, then all nodes adjacent to it as 2, then all nodes adjacent to those as 3, and so on until it reaches the destination node. It then counts backwards to determine the path. It examines all reachable nodes in the graph to find the best path, which makes it effective but expensive and time consuming.
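A minimal sketch of the idea in Python (using a made-up adjacency-list graph, since the course stays at the conceptual level here): nodes are expanded layer by layer while each records its parent, and the path is recovered by walking the parents backwards from the destination.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search over an adjacency dict: expand nodes
    layer by layer, then walk the parent chain backwards."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:   # count backwards via parents
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor in graph[node]:
            if neighbor not in parents:
                parents[neighbor] = node
                queue.append(neighbor)
    return None                       # goal unreachable

graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(bfs_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```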

Depth-First Search

Starts at the NPC position, finds one adjacent node and numbers it, then finds another single adjacent node and numbers it. This continues until it hits a dead end, in which case it returns to the last node that had another direction to try and heads off in a different direction with the same method.
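The same made-up graph can illustrate this one too; a recursive Python sketch makes the backtracking explicit, since returning None from a dead end hands control back to the previous node to try its next untried edge.

```python
def dfs_path(graph, start, goal, visited=None):
    """Depth-first search: commit to one adjacent node at a time and
    backtrack from dead ends to the last node with an untried edge."""
    if visited is None:
        visited = set()
    visited.add(start)
    if start == goal:
        return [start]
    for neighbor in graph[start]:
        if neighbor not in visited:
            rest = dfs_path(graph, neighbor, goal, visited)
            if rest is not None:
                return [start] + rest
    return None          # dead end: the caller backtracks

graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(dfs_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```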

More Advanced General High Level Algorithm

A* Algorithm

All the nodes are numbered. It creates an open list and a closed list, which keep track of nodes visited. There are three main cost values associated with the edges in this case:

  • Heuristic Cost (H Cost): estimated cost of getting to the destination from that specific node (this value is generally distance related)
  • Movement Cost (G Cost): utility cost of moving from one node to another node
  • F Cost: sum of the H cost and G cost, which determines the total value of that node

Each node stores these cost values as well as its parent, which is the closest node that continues the proper path. Once the final node is reached, the algorithm follows this chain of parent nodes backwards to determine which nodes make up the path to travel.
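The whole procedure can be sketched in Python (the graph, coordinates, and straight-line-distance heuristic are made up for illustration; Unity's NavMesh internals are not exposed like this): the open list is a priority queue ordered by F cost, the closed set holds expanded nodes, and each node records its parent for rebuilding the path.

```python
import heapq
import itertools

def a_star(graph, coords, start, goal):
    """A* search: G = movement cost so far, H = straight-line estimate
    to the goal, F = G + H. Pops the lowest-F node from the open list
    and rebuilds the path by following parents backwards at the end."""
    def h(node):
        (x1, y1), (x2, y2) = coords[node], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    tie = itertools.count()   # tie-breaker so equal F costs never compare nodes
    open_list = [(h(start), next(tie), 0.0, start, None)]
    parents, closed = {}, set()
    while open_list:
        f, _, g, node, parent = heapq.heappop(open_list)
        if node in closed:
            continue          # already expanded via a cheaper route
        parents[node] = parent
        closed.add(node)
        if node == goal:
            path = []
            while node is not None:   # follow the chain of parents
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor, step in graph[node]:
            if neighbor not in closed:
                ng = g + step         # G cost through this node
                heapq.heappush(open_list, (ng + h(neighbor), next(tie), ng, neighbor, node))
    return None

coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 0)}
graph = {
    "A": [("B", 1.0), ("C", 2.0)],
    "B": [("A", 1.0), ("D", 1.0)],
    "C": [("A", 2.0), ("D", 1.5)],
    "D": [("B", 1.0), ("C", 1.5)],
}
print(a_star(graph, coords, "A", "D"))  # ['A', 'B', 'D']
```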

Summary

This was a very simplified approach to graph theory that was at the very least a helpful refresher on how A* works. I also learned that Unity's NavMesh uses the A* algorithm at its foundation. It does, however, give me a good starting point of subjects and terminology to investigate to understand some of the theory behind AI: graph theory, nodes and edges, and the basic search methods.