UnityLearn – AI For Beginners – GOAP – Adding More Resources to the World

August 10, 2020

AI For Beginners

Goal Orientated Action Planning (GOAP)

Part 11


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Adding More Resources to the World

Adding More Resources to the World

The next step is setting up the use of the cubicles in the world. Like patients, these cubicles act as a resource available to the agents, so they must be tracked by the world state in some way to determine how many are available, and which ones, so they can be distributed properly when needed.

Setting this up was very similar to setting up the patient resources. They added a queue of gameobjects to the GWorld class to hold the cubicles. This was a bit different in that the pool starts out full of resources, so the cubicles were immediately added to this queue (through FindGameObjectsWithTag) upon initialization. Then, again similar to patients (except through the GWorld class itself instead of an action), a state named “FreeCubicle” was added to the WorldStates with a value equal to the number of cubicles.
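
A rough sketch of what this initialization might look like in GWorld (the “Cubicle” tag and the RemoveCubicle helper name are assumptions, and the singleton boilerplate is abbreviated):

using System.Collections.Generic;
using UnityEngine;

public sealed class GWorld
{
    private static readonly GWorld instance = new GWorld();
    private static WorldStates world = new WorldStates();
    private static Queue<GameObject> cubicles = new Queue<GameObject>();

    static GWorld()
    {
        // The cubicles already exist in the scene, so the queue starts full.
        GameObject[] cubes = GameObject.FindGameObjectsWithTag("Cubicle");
        foreach (GameObject c in cubes)
            cubicles.Enqueue(c);

        // Mirror the availability count into the world state.
        world.ModifyState("FreeCubicle", cubes.Length);
    }

    public static GWorld Instance { get { return instance; } }

    public WorldStates GetWorld() { return world; }

    // Hand out a cubicle to whoever claims it, or null if none are free.
    public GameObject RemoveCubicle()
    {
        return cubicles.Count > 0 ? cubicles.Dequeue() : null;
    }
}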

Having resources that are not other agents led them to create a new class called GInventory to help organize and allocate these resources. The class holds a list of gameobjects named items, which effectively just stores whatever gameobjects may be needed as resources. The overarching GAgent class then had a GInventory added to it so all agents could access this pool of resources.
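
A minimal sketch of what GInventory amounts to (the helper methods beyond the items list are assumptions):

using System.Collections.Generic;
using UnityEngine;

public class GInventory
{
    // Generic pool of resource objects (cubicles, etc.) held by an agent.
    public List<GameObject> items = new List<GameObject>();

    public void AddItem(GameObject item)
    {
        items.Add(item);
    }

    // Hypothetical helper: find the first held resource with a given tag.
    public GameObject FindItemWithTag(string tag)
    {
        foreach (GameObject item in items)
            if (item != null && item.CompareTag(tag))
                return item;
        return null;
    }

    public void RemoveItem(GameObject item)
    {
        items.Remove(item);
    }
}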

They then move on to the GetPatient GAction again to modify it to work with this new GInventory setup. They added a gameobject named resource to hold a reference to a cubicle. The PrePerform method of the GetPatient GAction now grabs a cubicle from the world state (if one is available) and references it through this resource. If successful, it uses the WorldStates ModifyState method to reduce the “FreeCubicle” state by 1 through the following line:

GWorld.Instance.GetWorld().ModifyState("FreeCubicle", -1);

Similarly, the PostPerform method of this GetPatient GAction now uses ModifyState to reduce the count of the “Waiting” state by 1 to account for the patient removed from the waiting status. Since every agent now has a GInventory of its own, the patient agent is given a reference to the same cubicle resource the nurse agent is using so that, in the next steps, they can travel to the same cubicle.
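
Putting both hooks together, the modified action might look roughly like this (assumptions: PrePerform/PostPerform are the overridable GAction hooks, GAction exposes inventory and target fields, and RemoveCubicle exists on GWorld as sketched above):

public class GetPatient : GAction
{
    public GameObject resource; // the claimed cubicle

    public override bool PrePerform()
    {
        // Try to claim a free cubicle from the world pool.
        resource = GWorld.Instance.RemoveCubicle();
        if (resource == null) return false;

        inventory.AddItem(resource);
        GWorld.Instance.GetWorld().ModifyState("FreeCubicle", -1);
        return true;
    }

    public override bool PostPerform()
    {
        // One fewer patient is waiting now.
        GWorld.Instance.GetWorld().ModifyState("Waiting", -1);

        // Give the patient a reference to the same cubicle so both
        // agents can travel to it in the next steps.
        if (target != null)
            target.GetComponent<GAgent>().inventory.AddItem(resource);
        return true;
    }
}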

Summary

The addition of the GInventory adds another interesting avenue for organizing and allocating resources for the system to use. The following tutorials support the start of its utilization so I am interested to see how they further use this inventory system to properly maintain control of the resources of the world.

Architecture AI Prototype Historical Path Visualization and UI Updates

July 1, 2020

Architecture AI Project

Added Path Visualization and UI Updates

Introduction

To top off the prototype, we added a quick visualization tool to show any paths that agents have taken within the current game session. This makes it much easier to compare the paths of various combinations of agents and environments. Along with this, some elements for modifying the system that were only accessible within the Unity Inspector have been moved to the in-game UI. This makes it much more manageable to alter the agents and their pathing for Unity users and non-users alike.

The following quick reference image shows an example of this new path visualization tool (the red line going through the diagonal of the area).

Reference for New Path Visualization Tool (the Red Line is the Path)

Vimeo – My Demo of the Past Path Visualization Tool (with New UI Elements)

Past Path Visualization Feature

Visualization

The path data was already being recorded in an in-depth manner for storing the data in a text format for export, so here we just focused on using that data during the game session to provide another useful data visualization tool. This data is kept as an array of Node objects, which in turn individually hold information on their world position. This provided a solid foundation to visualize the information again.

The array of Nodes could then provide me with an array of world positions which I could use with Unity’s line renderer to create a basic line connecting all of these points. Using the line renderer component also leaves the tool flexible through the Inspector to modify how the line is drawn to fit different needs or environments.
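
A minimal sketch of that conversion (assuming the Node class exposes its position through a worldPosition field):

using UnityEngine;

public class PathLineVisualizer : MonoBehaviour
{
    public LineRenderer line; // assigned and styled in the Inspector

    public void ShowPath(Node[] path)
    {
        // Pull the world positions out of the recorded nodes...
        Vector3[] points = new Vector3[path.Length];
        for (int i = 0; i < path.Length; i++)
            points[i] = path[i].worldPosition;

        // ...and hand them to the line renderer as one connected line.
        line.positionCount = points.Length;
        line.SetPositions(points);
    }
}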

Dropdown UI

To make this easy to access for the user, I added a Unity dropdown UI element to the scene to hold information about all the past paths. It initializes by creating a newly cleared dropdown and adding a simple “None” option which helps keep everything in line, as well as providing the user an obvious spot to go back to when they don’t want to show any previous paths. Then as paths are generated, more options are added to this dropdown to keep it in line with all the paths created (this also keeps their indices in sync for easy reference between the dropdown and the recorded paths).
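
Roughly, the dropdown bookkeeping could look like this (standard Unity UI Dropdown; the option labels and handler are placeholders):

using UnityEngine;
using UnityEngine.UI;

public class PathHistoryDropdown : MonoBehaviour
{
    public Dropdown dropdown; // assigned in the Inspector

    void Start()
    {
        // Start from a clean slate with "None" at index 0.
        dropdown.ClearOptions();
        dropdown.options.Add(new Dropdown.OptionData("None"));
        dropdown.RefreshShownValue();

        // React only when the user actually changes the selection.
        dropdown.onValueChanged.AddListener(OnPathSelected);
    }

    // Called as each new path is recorded, keeping dropdown indices
    // in sync with the recorded path list (offset by one for "None").
    public void AddPathOption(int pathNumber)
    {
        dropdown.options.Add(new Dropdown.OptionData("Path " + pathNumber));
    }

    void OnPathSelected(int index)
    {
        // Index 0 is "None"; any other index maps to recorded path (index - 1).
    }
}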

Added UI Elements for Operating the System

As a prototype tool, a quick way to allow for tweaking values and controlling assets is to just use public variables or Unity tool assets (like Unity Ranges and enums within the scripts). This became cumbersome even for me, so it could be a large issue and slowdown for less savvy Unity users. This made me want to explore moving some of these parameters into the game UI for easier use.

Agent Affinity Sliders

I looked into creating Unity UI sliders, and since they were pretty easy to implement, I moved the range sliders for setting the different agent affinities for the next spawned agent from the Unity Inspector to the game UI. This was straightforward in that the slider values could almost be transferred directly to the spawn manager (the slider actually ranges between values of 0.0 – 1.0 and is multiplied by the max affinity value of the system to properly stay constrained within that range).
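
That mapping is small enough to sketch (the max affinity of 100 and the spawn manager hookup are assumptions):

using UnityEngine;
using UnityEngine.UI;

public class AffinitySlider : MonoBehaviour
{
    public Slider slider;            // Unity UI slider, default 0.0 – 1.0 range
    public float maxAffinity = 100f; // the system's max affinity value (assumed)

    void Start()
    {
        slider.onValueChanged.AddListener(OnAffinityChanged);
    }

    void OnAffinityChanged(float normalized)
    {
        // Scale the 0–1 slider value into the system's affinity range.
        float affinity = normalized * maxAffinity;
        // The spawn manager would store this for the next spawned agent,
        // e.g. spawnManager.windowAffinity = affinity; (hypothetical field)
    }
}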

Agent Pathing Type Dropdown

After exploring the dropdowns for the path history visualization tool, I also decided to use them for determining the path type of the next spawned agent. This allows the user to quickly choose between different architectural types to use to calculate the agent’s pathing.

Since the architectural types are within a public enum, the dropdown could be populated with the ToString() versions of the entire enum (which can be iterated through in C#). Since the options are populated in the enum’s own order, the dropdown can inform PathFindingHeapSimple (the class that needs to know the type in order to determine which pathing calculations to use) using the enum index number; the two stay in sync by the way the dropdown is created.
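
A sketch of that population pattern (the enum members here are stand-ins for the project's actual types):

using System;
using UnityEngine;
using UnityEngine.UI;

// Stand-in for the project's architectural type enum.
public enum ArchitecturalType { Window, Connectivity }

public class PathTypeDropdown : MonoBehaviour
{
    public Dropdown dropdown;

    void Start()
    {
        // Populate options in enum order so dropdown index == enum index.
        dropdown.ClearOptions();
        foreach (string name in Enum.GetNames(typeof(ArchitecturalType)))
            dropdown.options.Add(new Dropdown.OptionData(name));
        dropdown.RefreshShownValue();

        dropdown.onValueChanged.AddListener(index =>
        {
            // Because the ordering matches the enum, a cast recovers the type.
            ArchitecturalType type = (ArchitecturalType)index;
            // PathFindingHeapSimple would be told to use this type here.
        });
    }
}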

Summary

The past path visualization tool is very useful for comparing paths a bit more quickly (since changing values for individual paths can take a small bit of time). In the future it may help to visualize multiple paths at once so they can be compared directly. The system also displays information about the path it is showing so the user better understands why the path is the way it is.

Unity’s UI elements are pretty easy to work with, and adding methods to their OnValueChanged delegate is a very nice and consistent way of only calling changes/methods specifically when a UI element is modified. This was perfect for both the dropdowns and the sliders since they only need to invoke change within the system when their values are changed. This keeps them a bit cleaner and more efficient.

The UI elements are strongly tied to the borders of the screen using the standard Unity positioning constraint system, so it can look really messy at smaller resolutions. It operates well enough since there’s not too much going on, so just allocating a decent resolution to the window when using it at least shows everything, which is the most important aspect for a prototype.

Architecture AI Completed Prototype with Data Visualization and Pathing Options

June 24, 2020

Architecture AI Project

Completed Project Prototype

Introduction

This post covers the completed project prototype. It brings together the data visualization options for the node grid, the ability to change which architectural values influence the agents’ pathing, options for constructing the A* node grid itself, and the file reading system for assigning values to the nodes.

Data Visualization Options

I moved everything dealing with the major logic of the A* grid node visualization into its own class named AStarGridVisualization. This allowed me to continue to hold on to all the different debug visualization methods I created over the course of the project and convert them into a list of options for the designer to use for visualization purposes.

As can be seen in the demo, there is a drop down list in the Unity Inspector so the designer can tell the system which data to visualize from the nodes. The options currently available are: Walkable, Window, and Connectivity. Walkable shows which nodes are considered walkable or not. Window and Connectivity show colored heat maps of those respective values within the nodes themselves.
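
A condensed sketch of how such a class might choose its gizmo colors (the mode enum mirrors the options above; the Node fields, the AGrid accessor, and the 0–100 value range are assumptions):

using UnityEngine;

public class AStarGridVisualization : MonoBehaviour
{
    public enum DebugMode { Walkable, Window, Connectivity }
    public DebugMode mode; // shows up as a drop down in the Inspector

    public AGrid grid; // assumed to expose its nodes

    void OnDrawGizmos()
    {
        if (grid == null) return;
        foreach (Node node in grid.Nodes()) // hypothetical accessor
        {
            Gizmos.color = NodeColor(node);
            Gizmos.DrawCube(node.worldPosition, Vector3.one * 0.9f);
        }
    }

    Color NodeColor(Node node)
    {
        switch (mode)
        {
            case DebugMode.Walkable:
                return node.walkable ? Color.white : Color.red;
            case DebugMode.Window:
                return Heat(node.windowValue);
            case DebugMode.Connectivity:
                return Heat(node.connectivityValue);
        }
        return Color.black;
    }

    Color Heat(float value)
    {
        // Assumed 0–100 as the system's possible architectural value range.
        return Color.HSVToRGB(Mathf.InverseLerp(0f, 100f, value), 1f, 1f);
    }
}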

Changing Values Used for Pathing

We wanted the project to be able to use different types of values to see their effects on the agents’ pathing, so every node holds several different types of architectural values (i.e. Window and Connectivity). Using the window information for pathing will give different agent paths than using the connectivity values, because the agents have different affinities towards these types of values and a single node contains different values for each of these types.

The PathFindingHeapSimple class handles the calculations for the costs of the paths for the agents, so I have also located the options to switch between pathing value checks here. This class is responsible for checking nodes and their values to see which lead to the lowest cost paths, so there is another drop down list available to the designer here that determines which architectural value type these checks look at when looking at the nodes. This allows the designer to choose which architectural value actually influences the agents’ paths.

A* Node Grid Construction Options

At this point the project was still working with various types of model imports from various softwares to find which worked best and most consistently, so I added some extra options for generating the node grid in the first place to help quickly line up the node coverage area with the model at hand. The major options, located in the AGrid class itself, are: GridOriginAtBottomLeft and UseHalfNodeOffset (a third determines whether the system reads data from a file or not to add values to the node grid, but that’s a more file oriented option).

GridOriginAtBottomLeft

GridOriginAtBottomLeft determines where Unity’s origin is located relative to the node grid. When true, Unity’s origin is used as the bottom left most point of the entire node grid. When false, Unity’s origin is used as the central point of the generated node grid.

UseHalfNodeOffset

UseHalfNodeOffset is responsible for shifting the nodes ever so slightly to give a different centering option for the nodes. When true, half of the dimension of an individual node is added to the positioning vector of the node (in x and z) so that the node is centered within the coordinates made by the system. When false, the offset is not added, so the nodes are directly located exactly at the determined coordinates.
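
Both options boil down to small offsets when computing a node's world position. A sketch of the combined math, assuming a grid on the XZ plane with square nodes (all names here are illustrative):

using UnityEngine;

public static class GridPositioning
{
    // World position of node (x, z) in a countX-by-countZ grid of
    // nodeSize-sized cells, under the two layout options.
    public static Vector3 NodePosition(int x, int z, int countX, int countZ,
        float nodeSize, bool gridOriginAtBottomLeft, bool useHalfNodeOffset)
    {
        Vector3 origin = Vector3.zero; // Unity's origin

        // When Unity's origin is the grid's center, shift everything
        // back by half the grid's total footprint.
        if (!gridOriginAtBottomLeft)
            origin -= new Vector3(countX * nodeSize, 0f, countZ * nodeSize) / 2f;

        Vector3 pos = origin + new Vector3(x * nodeSize, 0f, z * nodeSize);

        // Optionally center the node within its grid cell.
        if (useHalfNodeOffset)
            pos += new Vector3(nodeSize / 2f, 0f, nodeSize / 2f);

        return pos;
    }
}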

File Reading to Set Values Within Nodes

There are two major ways to apply architectural values to the nodes: using architectural objects (within Unity itself) or reading data from a file. The data files contain x and y coordinate information along with a value for a specific architectural type for many data points throughout a space. These files can be placed in Unity’s Assets/Resources folder to be read into the system. The option referenced above in the AGrid to read data from a file or not is the IsReadingDataFromFile bool, and it just tells the system whether or not to read the given file to assign values to the grid.

The FileReader object holds the CSVReader class, which deals with the logic of actually reading this file information. The name of the file to be read needs to be inserted here (generally by copy/paste to make sure it is exact), and the dimensions of the data should be manually assigned at this time to make sure the system gets all of the data points and assigns them to the grid appropriately (these data dimensions are the “x dimension” and “y/z dimension” ranges of the coordinates within the data).
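
The reading itself can be fairly minimal. A sketch under the assumption that each line is “x,z,value” and that AGrid exposes a setter for node values (both assumptions):

using UnityEngine;

public class CSVReader : MonoBehaviour
{
    public string fileName;  // exact name of the file in Assets/Resources (no extension)
    public int xDimension;   // manually assigned data dimensions
    public int zDimension;

    public void ReadValues(AGrid grid)
    {
        // Resources.Load works on anything under an Assets/Resources folder.
        TextAsset data = Resources.Load<TextAsset>(fileName);

        foreach (string line in data.text.Split('\n'))
        {
            string[] parts = line.Split(',');
            if (parts.Length < 3) continue; // skip headers/blank lines

            int x = int.Parse(parts[0]);
            int z = int.Parse(parts[1]);
            float value = float.Parse(parts[2]);

            grid.SetNodeValue(x, z, value); // hypothetical AGrid hook
        }
    }
}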

Demonstration of Prototype

This demonstration helps show the different data visualization options of the grid, the agent spawning system, how changing the architectural value path type changes the agents’ paths, and how a lot of these systems look in the Unity Inspector.

Vimeo – Architecture AI Project Prototype Demonstration

Summary

I really liked a lot of the options I was able to add to help increase the project’s overall usability and make it much easier to switch between various options at run time to test different setups. The end product is meant to help users that may not have a lot of Unity experience, so I tried to reduce the learning curve by making a lot of options easily accessible with visible bools and drop down lists, while also providing some simple buttons in the game space to help with some other basic actions (such as spawning new agents).

Many of the classes were constructed with scalability and efficiency in mind, so expanding the system should be very doable. However, much of this scalability is still contained within the programming itself, and the classes would need to be modified to add more options to the system at this point (such as new architectural types). This is something I would like to make more accessible to the designer so that it’s intuitive and possible to modify without programming knowledge.

The file reading system works well enough for these basic cases, but it could definitely use some work to be more flexible and to work with various node grid options. File I/O is something I could use more practice with, and systems for reading in files and writing out files to record data would be useful to understand better.

Architecture AI Working with Revit Models and Scripts in Unity

June 15, 2020

Architecture AI Project

Revit Models in Unity

Introduction

We were exploring different methods for importing architectural models into Unity and decided to try Revit to see what it had to offer. The models are easy to move between softwares, and it also creates .cs scripts that Unity can use to provide some interactive data visualization when using the Revit models.

Parameters Import Script

This script creates a large list of strings. It receives an identifier-like string and uses a method to convert that string into its corresponding parameter information.

Example:

private string ParamByName(string d)
{
    switch (d)
    {
        case "Generic_-_12\"_[354721]":
            d = "Area : 8100 ft2 \nElevation at Bottom : -1 ft \nElevation at Top : 0 ft \nType Name : Generic - 12\" \n";
            break;

        // ...one generated case per model element...
    }

    return d;
}

Parameter Mouse Over Script

This script uses the name of this.gameObject to add information to the parameter list. The names of all the individual parts of the Revit model correspond to the input strings found in the ParametersImport partner script. This suggests that the script most likely needs to be placed on every piece of the model individually (since each instance only adds the name of its own this.gameObject).

Adding this script to each individual gameObject present in the Revit model got it working. When mousing over the individual parts, it highlights that part with a color and displays some text information about that piece.
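
A rough sketch of what such a per-piece mouse-over script amounts to (the real scripts come with the Revit export; the class and method names here are illustrative, and the real version draws the text on screen rather than logging it):

using UnityEngine;

public class ParameterMouseOver : MonoBehaviour
{
    public Color highlightColor = Color.yellow;
    private Color originalColor;
    private string parameters;

    void Start()
    {
        originalColor = GetComponent<Renderer>().material.color;
        // Look up this piece's parameter text by its object name
        // (hypothetical public version of the ParamByName lookup).
        parameters = ParametersImport.ParamByName(gameObject.name);
    }

    void OnMouseEnter() // requires a collider on the piece
    {
        GetComponent<Renderer>().material.color = highlightColor;
        Debug.Log(parameters);
    }

    void OnMouseExit()
    {
        GetComponent<Renderer>().material.color = originalColor;
    }
}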

Revit In Unity Example

My Example Image of Revit Model with Mouse Over Parameters Shown

The following link shows an example of me using a Revit model in Unity with the corresponding scripts to get the Mouse Over action working.

My Showcase of Revit Model and Scripts

Summary

This is a nice data visualization tool that comes essentially for free. The information it provides could possibly be helpful, but the scripts are also very simple, so they could easily be tweaked to include other information. All the information is also already included in strings within the scripts, so that data could possibly be extracted for other uses (if that turns out to be an easier way to get said data).

Architecture AI Varied Agent Affinities with Varied Pathing

May 28, 2020

Architecture AI Project

Varied Agent Affinities with Varied Pathing

Demonstration of Varied Agent Affinities with Varied Pathing

Vimeo Link – My Demo of Agents Pathing with Varied Affinities

Explaining Simple but Unclear Heatmap Coloring System

Just to help explain, since I don’t have great UI in place to explain everything currently: the first thing is that I am testing a really rough “heat map” visual for the architectural values for now. When you turn on the option to show the node gizmos, instead of uniform black cubes showing all the nodes, the cubes are colored according to the architectural value of the node (for this test case, the window value). Unfortunately this color system isn’t great, especially without a legend, but you can at least see it’s working (I just need to pick a better color gradient). Red unfortunately represents both values at/near 0 (min) and values at/near 100 (max), but the value range here is only 0 – 80, so all red in this demo is 0.

Agent Affinity/Architectural Value Interaction for Pathing

The more exciting part is that the basis of the agent affinity and architectural value interactions seems to be working and affecting pathing in a way that at least makes sense. Again just for demo purposes so far, as can be seen in the video (albeit a bit blurry), I added a quick slider in the Inspector for the Spawn Manager to determine the “Window Affinity” for the next agent it spawns (for clarity I also added it as a text UI element that can be seen at the top of the Game View window). Just to rehash, this has a set range between 0 and 100, where 0 means they “hate” windows and avoid them and 100 means they “adore” windows and gravitate towards them.
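
The exact cost math is hardcoded behind the scenes, so the following is purely an illustrative guess at the shape of the interaction: a low affinity turns a node's architectural value into a pathing penalty, while a high affinity cancels it out.

public static class AffinityCosts
{
    // Illustrative only — not the project's actual formula.
    // affinity 0   -> full extra cost for high-value nodes (avoidance)
    // affinity max -> no extra cost (the node is effectively attractive)
    public static float ExtraCost(float nodeValue, float affinity, float maxAffinity)
    {
        return nodeValue * (1f - affinity / maxAffinity);
    }
}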

Test #1: Spawn Position 1

As can be seen in the first quick part of the demo, I spawn 2 agents from the same position (position 1) but with different affinities. The first has an affinity of 0, and the second has an affinity of 100. Here you can already see the 0 affinity agent steers towards the left side of the large wide obstacle to avoid the blue (relatively high) window area, whereas the 100 affinity agent goes around the right side of the same obstacle, preferring the high window valued areas in the blue marked zone.

Test #2: Spawn Position 2

Both the 0 affinity and 100 affinity agents take very similar paths, differing by only a couple of node deviations here and there. This makes sense, as routing around the large high window value area would take a lot of effort, so even the window avoidant agent decides to take the relatively straightforward path over rerouting.

Test #3: Spawn Position 3

This test demonstrated similar results to Test #1. The 100 affinity agent moved up and over the large obstacle in the southern part of the area (again preferring the high window value area in the middle), whereas the 0 affinity agent moved below the same obstacle and even routed a bit further south to avoid some of the smaller window affected areas as well.

Summary

I did some testing of values between 0 and 100, and with the low complexity of the area so far, most agents ended up taking one of the same paths as the 0 or 100 affinity agents from what I saw. This will require more testing to see if there already exists some variance, but if not, it suggests that some of the hardcoded behind-the-scenes values or calculations for additional cost may need to be tweaked (as is expected). Overall though, the results came out pretty well and seem to make sense. The agents don’t circle around objects in a very odd way, but they also do not go extremely out of their way to avoid areas even when they don’t like them.

Architecture AI Heatmap of Architectural Costs

May 27, 2020

Architecture AI Project

Visualizing Heatmap of Architectural Costs

Architectural Cost Heatmap

Originally I was just using basic black cubes to visualize the grid of nodes in the AGrid A* system. Now I finally decided to update that system with a bit more information by adding a colored heatmap to map out the architectural values of the nodes. This will help me understand the pathing of my agents better as they start to use these values in more complex and interesting ways.

Basic Heatmap Color System

It turns out making a heatmap color scale that decently ranges between values of 0.0 and 1.0 is pretty easy. Using HSV color values, you can just vary the H (hue) value while keeping the other two values constant at 1.0 to cover a heatmap spectrum (where H of 0.0 is the lowest value in the range, 1.0 is the highest value in the range, and all other colors are bound within that range). By simply using the min and max architectural values possible in the system as the range bounds I could easily set this type of system up.
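
The whole mapping fits in a couple of lines. One caveat worth a comment: hue wraps around, so 0.0 and 1.0 are both red, which is why the two ends of a value range can come out the same color unless the hue range is clamped.

using UnityEngine;

public static class Heatmap
{
    // Maps value in [min, max] to a hue in [0, 1] at full saturation
    // and brightness. Hue wraps (0.0 and 1.0 are both red), so clamping
    // the hue range (e.g. to 0.0 – 0.7, red through blue) keeps the min
    // and max values from sharing a color.
    public static Color ValueToColor(float value, float min, float max)
    {
        float h = Mathf.InverseLerp(min, max, value);
        return Color.HSVToRGB(h, 1f, 1f);
    }
}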

Applying Colors to Visualization

For now I decided to just apply this heatmap color methodology to the gizmo cubes mapping out the A* grid. While these cubes used to be colored plain black for simplicity, now they are colored according to the heatmap color system using the architectural value found within each node.

Bug: Influence Objects Found to Radiate Improperly

The influence objects which apply architectural values to the nodes around them were initially designed to apply in an area around them, with the objects in the center. After applying the heatmap, it was clear that the influence objects were not applying to the nodes intended. It appears the influence actually uses the position of the influence object as the bottom left corner, and radiates upward and to the right (so just in the positive x and z direction away from the object). This is something I will have to look into to make sure it’s an issue with the influence objects, and not the coloring system.

Current Heatmap System (with Bugged Influence Ranges)

UnityLearn – AI For Beginners – Crowd Simulations – Fleeing

May 22, 2020

AI For Beginners

Crowd Simulations

Part 2


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Intro

I was going to combine both of the end parts of the Crowd Simulation tutorial into one blog, but the Flocking tutorial was rather large and interesting so I decided to break it off into its own post. Because of that, this is a smaller one just focused on the basic fleeing crowd logic.

Fleeing

This is similar to the concept covered with single AI agents; it is just being applied to many agents here. An object is introduced to the environment, and the fleeing agents determine the vector towards that object and set a new course for the exact opposite of that vector.
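
That inversion is a one-liner with a NavMeshAgent. A minimal sketch:

using UnityEngine;
using UnityEngine.AI;

public class FleeingAgent : MonoBehaviour
{
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    // Head in the exact opposite direction of the flee-inducing object.
    public void Flee(Vector3 threatPosition)
    {
        Vector3 toThreat = threatPosition - transform.position;
        agent.SetDestination(transform.position - toThreat);
    }
}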

Key Parameters covered for their flee inducing object:
Radius of Influence – how close agents must be to be affected by the object
Cooldown – how long an agent flees before returning to standard behavior

They decided to induce fleeing by adding a cylinder object to the Nav Mesh environment on mouse click. This required a mouse button script, along with adding a dynamic obstacle to the Nav Mesh again (similar to the FPS controller from the last tutorial).

This tutorial gets into several useful NavMesh methods that have not been used yet. These are helpful references to use when expanding and developing my own pathfinding methods, since they show some useful options to give developers when working with intelligent pathfinding.

There are NavMeshPath objects which hold information about a path for the agent to some given destination. You can set these path objects using the CalculatePath method, passing a Vector3 position (the destination) and the NavMeshPath object as parameters to fill that path with that given destination. This alone does NOT set the agent along the path; it simply computes the information for that path (which is useful for the next step).

They perform a check on the path using fields within the NavMeshPath class and an enum named NavMeshPathStatus before actually setting the new destination of the agent. The check is as follows:

NavMeshPath path = new NavMeshPath();
agent.CalculatePath(newGoal, path); // fills in path without moving the agent

if (path.status != NavMeshPathStatus.PathInvalid) { … }

You are able to access the status of a NavMeshPath through a field on the class. They use this to check that the status of the newly created path is valid by checking it against the PathInvalid option of the NavMeshPathStatus enum. Only after passing this check do they set the newly determined path on the agent. They also show that NavMeshPath has a Vector3 array field named corners, which holds the effective waypoints of the path the agent uses for its pathfinding.

Summary

This fleeing logic was pretty basic, but it was helpful to learn a bit more about Nav Mesh in general for Unity. The extra information about NavMeshPath was good to learn about, as well as giving me more options on useful aspects to give my own pathfinding systems.

UnityLearn – AI For Beginners – Crowd Simulations – Flocking

May 22, 2020

AI For Beginners

Crowd Simulations

Part 3


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Intro

This was by far the most interesting crowd simulation tutorial covered in the basic AI tutorials. This one really got into an actual rule based logic system for pathing of agents within a large group to move them in an interesting way with emergent behavior that is still controlled and possible to direct.

Flocking

Part 1

Flocking: simple rules to generate movement for groups of individuals to move towards common goals (i.e. birds or fish)

They create a FlockManager because flock movement requires the individual agents to know about the movement and positioning of all the other agents around them. This sits at a higher level, providing data for the entire flock as a whole. It starts with logic to instantiate the flock by creating many fish prefabs at randomized starting positions bound around the FlockManager’s position. They also created a class named Flock to go directly on the individual fish agents themselves.

    Flocking Rules:

  1. Move towards average position of the group
  2. Align with the average heading of the group
  3. Avoid crowding other group members

Flock Rule 1: Move Towards Average Position of Group

This is done by summing the positions of all the agents within the group and dividing by the number of group members, which directly gives the average position of the agents within the group. The agents can then find where they are in relation to this average position and turn towards it.

Flock Rule 2: Align with the Average Heading of the Group

Similar to rule 1, this is also directly an average within the entire group, but this time it is done using the heading vectors of all the agents within the group. The heading vectors are summed and divided by the total number of agents within a group to determine the group’s overall average heading. The agents then attempt to align their heading with this average heading.

Flock Rule 3: Avoid Crowding Other Group Members

The agents must be aware of the positions of their nearest neighbors and turn away from them so as not to collide with them.

Finally, these three rules produce three vectors which are summed to generate the actual new heading of each individual agent.

new heading = group heading + avoid heading + group position

Back in the Flock class, they start applying some of these rules. Here is a list of some of the variables within their ApplyRules() method and what they represent:

Vector3 vcenter = Vector3.zero; // Average center position of a group
Vector3 vavoid = Vector3.zero; // Average avoidance vector (since avoiding all members in group)
float gSpeed = 0.01f; // Global speed of the entire group (Average speed of the group)
float nDistance; // Neighbor distance to check if other agents are close enough to be considered within the same group
int groupSize = 0; // Count how many agents are within a group (smaller part of the group an individual agent considers neighbors)

When setting up their Flock class and applying these rules, they decided to only apply these rules to neighbor agents. This means that the agents are not directly tied to the entire flock at all times, they simply check for agents within a certain distance around them and they only determine their behavior based on all the agents within that certain radius. I just wanted to clarify since it was unclear if some or all of the rules applied to neighbors or the entire flock (here they just apply all rules to only neighbors).

To summarize the Flock class so far, specifically the ApplyRules() method: each agent finds all the other agents within the given neighbor distance to determine which agents to momentarily flock with. It sums the positions of these agents, to be averaged later. It then checks whether any of these agents are extraordinarily close to determine if it should avoid them, and if so, calculates the avoidance vector (just the vector directly away from that agent) and adds it into a total avoidance vector (which is NOT averaged later on). It also sums the speeds of the neighbors, again to average later on.

Finally, it checks if the agent is within a group (i.e. groupSize > 0) and performs the averaging mentioned earlier. The new heading is calculated by summing the average center position of the group with the avoidance vector (and subtracting the agent’s own position to get a proper vector relative to its current position), and the agent performs a slerp to turn towards this new heading.
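
A condensed sketch of that flow, assuming a FlockManager that exposes an allFish array and a rotationSpeed value (names and thresholds are approximations of the tutorial, not exact code):

using UnityEngine;

public class Flock : MonoBehaviour
{
    public FlockManager manager; // assumed to expose allFish and rotationSpeed
    public float speed = 1f;

    void ApplyRules()
    {
        Vector3 vcenter = Vector3.zero; // summed neighbor positions
        Vector3 vavoid = Vector3.zero;  // summed avoidance vectors (never averaged)
        float gSpeed = 0.01f;           // summed neighbor speeds
        float nDistance = 5f;           // neighbor radius (assumed value)
        int groupSize = 0;

        foreach (GameObject other in manager.allFish)
        {
            if (other == gameObject) continue;

            float d = Vector3.Distance(other.transform.position, transform.position);
            if (d <= nDistance)
            {
                vcenter += other.transform.position;
                groupSize++;

                // Very close neighbors push directly away from themselves.
                if (d < 1f)
                    vavoid += transform.position - other.transform.position;

                gSpeed += other.GetComponent<Flock>().speed;
            }
        }

        if (groupSize > 0)
        {
            // Average the center and speed over the neighbors found.
            vcenter /= groupSize;
            speed = gSpeed / groupSize;

            // The rule vectors combine into one heading relative to the agent.
            Vector3 direction = (vcenter + vavoid) - transform.position;
            if (direction != Vector3.zero)
                transform.rotation = Quaternion.Slerp(transform.rotation,
                    Quaternion.LookRotation(direction),
                    manager.rotationSpeed * Time.deltaTime);
        }
    }

    void Update()
    {
        ApplyRules();
        transform.Translate(0f, 0f, Time.deltaTime * speed);
    }
}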

This created a swirling ball of fish that did not particularly seem to deviate from this large mass with a large number of fish (50) and average values for all the sliders (speeds of 0.25 to 2.5; neighbor distance of ~5; rotation speed ~3.5). While moving, significantly reducing the neighbor distance ( < 1.0) did have them separate into smaller groups and swim off infinitely.

Part 2

Adding General Goal Position for Flock

To help provide the group of agents with a general direction, they added a general goal position to the FlockManager class. The Flock class then uses this position within its average center position calculation to help influence the direction of the agents towards this goal position. Initially they tied this to the position of the FlockManager object itself, and moving this around in the editor moves all the agents tied to it in the general direction towards this object (their goal position).

To automate this process a bit, they have the goal position move randomly every now and then within the given instantiation limits (the position ranges in which the initial agents spawn around the FlockManager object). This allows the agents to move around on their own with some more guidance.

They then extend this random “every now and then” process to the Flock class as well. Here they use it to randomize the agent’s speed and to run the ApplyRules() method only occasionally, so the agents are not following the flocking rules every single frame. This has the added benefit of reducing processing intensity, since the agents no longer perform the entire flocking logic every frame.
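
The pattern is just a small per-frame probability check. A sketch of both uses (the probabilities and the swimLimits name are assumptions):

using UnityEngine;

public class FlockTiming : MonoBehaviour
{
    public Vector3 swimLimits = new Vector3(5f, 5f, 5f); // spawn range around the manager
    public Vector3 goalPos;
    public float minSpeed = 0.25f, maxSpeed = 2.5f;
    float speed = 1f;

    void Update()
    {
        // Roughly 1 frame in 100: relocate the goal within the spawn limits.
        if (Random.Range(0, 100) < 1)
        {
            goalPos = transform.position + new Vector3(
                Random.Range(-swimLimits.x, swimLimits.x),
                Random.Range(-swimLimits.y, swimLimits.y),
                Random.Range(-swimLimits.z, swimLimits.z));
        }

        // Roughly 1 frame in 5: re-randomize speed and re-run the rules,
        // instead of paying for the full flocking logic every frame.
        if (Random.Range(0, 5) < 1)
        {
            speed = Random.Range(minSpeed, maxSpeed);
            // ApplyRules(); // the rule pass sketched earlier
        }
    }
}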

Returning Stray Agents Back to Flock

They finally add logic to deal with rogue agents that leave the flock and travel outward forever. The same bounds that determine the general spawn area for the agents are used as the greater bounds to contain them. Unity’s Bounds class is used to check whether an agent is contained within these bounds or not. If not, the agent changes its heading towards the FlockManager’s location instead, which remains in effect until it encounters other agents to flock with.
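
A sketch of the containment check (again assuming the manager exposes a swimLimits range):

using UnityEngine;

public class StrayCheck : MonoBehaviour
{
    public FlockManager manager; // assumed to expose swimLimits
    public float rotationSpeed = 3.5f;

    bool InsideBounds()
    {
        // Bounds takes a center and a total size, hence the doubling.
        Bounds b = new Bounds(manager.transform.position, manager.swimLimits * 2f);
        return b.Contains(transform.position);
    }

    void Update()
    {
        if (!InsideBounds())
        {
            // Turn back toward the manager until neighbors are found again.
            Vector3 direction = manager.transform.position - transform.position;
            transform.rotation = Quaternion.Slerp(transform.rotation,
                Quaternion.LookRotation(direction),
                rotationSpeed * Time.deltaTime);
        }
    }
}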

Part 3

Obstacle Detection/Avoidance

The major addition in this final flocking tutorial is obstacle detection. To accomplish this, they have the individual agents cast a physics ray forward, along their direction of travel, and if it detects an obstacle, they start to turn away from it.

To have the agents turn away from an obstacle, they choose to use Unity’s Reflect method. Using the hit information from the raycast and the normal of the surface hit, Unity can determine the reflection vector for the incoming ray. This produces a vector away from the object at the same angle relative to the surface normal as the incoming vector.
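
A minimal sketch of the ray-plus-reflect combination (the look-ahead distance and turn speed are placeholder values):

using UnityEngine;

public class ObstacleAvoidance : MonoBehaviour
{
    public float lookAhead = 5f;
    public float rotationSpeed = 3.5f;

    void Update()
    {
        RaycastHit hit;
        // Look ahead along the direction of travel for an obstacle.
        if (Physics.Raycast(transform.position, transform.forward, out hit, lookAhead))
        {
            // Bounce the heading off the surface: the reflected vector
            // leaves at the same angle to the normal as it arrived.
            Vector3 newDirection = Vector3.Reflect(transform.forward, hit.normal);
            transform.rotation = Quaternion.Slerp(transform.rotation,
                Quaternion.LookRotation(newDirection),
                rotationSpeed * Time.deltaTime);
        }
    }
}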

Video of the Flocking Result In Action

My Flocking Example

Summary

The implementation and fine tuning of Reynolds’ flocking rules here was by far the most interesting part of the crowd simulation tutorials in this overall section. The idea of using a set of fairly simple rules on many, many agents in a large group to provide interesting yet realistic motion with controllable direction and emergent behaviors is exactly what I hoped for when looking into crowd behavior, and AI in general.

It was interesting to learn that Reynolds’ rules are applied to motion by simply converting each of the three rules into its own vector and then, for the most part, just summing those vectors. It is also very cool to see just how much you can change the behavior and general motion of the agents by altering just a few values, like neighbor distance, speed, and rotation speed.

The additional features they covered beyond the bare minimum of flocking were also very helpful and practical. Ways to control stray agents and move them in a unified general direction towards a common goal are very good additional behaviors to add to flocking, and they were implemented in a very easy to understand way. Obstacle detection is also extremely nice, but its implementation was very straightforward and basic, so it wasn’t quite as exciting (although Unity’s Reflect method is something I hadn’t used before, so that was helpful to learn).

UnityLearn – AI For Beginners – Crowd Simulations – Moving As One & Creating A City Crowd

May 20, 2020

AI For Beginners

Crowd Simulations

Part 1


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Intro

This is the first part of the Crowd Simulations section of the AI course that I covered. It spans the first two major tutorials of four, Moving As One & Creating a City Crowd. These introduce the basic concepts of crowd simulations and cover ways to start moving groups of agents with Nav Mesh Agents in Unity to help visualize those concepts (more so than creating agent crowd AI behaviors yourself).

1: Moving As One

Crowd Behavior: agents move en masse, as opposed to solely as individual units; their movements are usually also dependent on each other’s movements

Reynolds’ Flocking Algorithm:

    Common flocking logic based on 3 simple rules:

  1. Turn towards average heading of group
  2. Turn towards average center of the group
  3. Avoid those around you

They start with a really basic Nav Mesh setup with a bunch of blue capsule agents on one side of a tight corridor and a bunch of red capsule agents on the other side, where each group needs to navigate to a goal on the opposite side, running through/past the other group of capsules. The standard Nav Mesh Agent setup with colliders already gives interesting group interactions by itself. This simulation was mostly to help visualize the idea of lines of flow in crowds, as well as instances of turbulence within crowd simulations.

2: Creating A City Crowd

Part 1

The starting setup adds the Street Crowd Simulation package to a Unity project. It includes a starting scene of a street with several different characters with basic idle, walk, and run animations. This setup just helps visualize crowd behavior in a more interesting and realistic setting. Initially, each agent selects a random object from all the objects tagged “goal” as its destination and moves there.

The additions made in the tutorial were having the agents continue finding new destinations after reaching the first one, and adding more static meshes as obstacles for the agents to move around. The first part used a distance check for when the agents were close enough to their destination, at which point they would select another goal randomly from the initialized goal array. For the second part, they just added large cubes to simulate buildings, making a more realistic street walking setup for the characters.
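
A sketch of that goal-cycling pattern (the “close enough” threshold is a placeholder):

using UnityEngine;
using UnityEngine.AI;

public class WanderingCharacter : MonoBehaviour
{
    GameObject[] goals;
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        goals = GameObject.FindGameObjectsWithTag("goal");
        PickNewGoal();
    }

    void Update()
    {
        // Close enough to the current goal? Pick another one at random.
        if (Vector3.Distance(transform.position, agent.destination) < 1f)
            PickNewGoal();
    }

    void PickNewGoal()
    {
        agent.SetDestination(goals[Random.Range(0, goals.Length)].transform.position);
    }
}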

Part 2

They added a bunch of buildings to the center of the street scene so agents would walk around the outer edge as normal street traffic.

Alternating Character Animations:

All the characters use the same animations, so they look strange because they are all doing the exact same animations on the exact same time cycles. To rectify this, they looked into the “Cycle Offset” parameter within the standard walking animation for all the characters in the Animator.

They suggest setting this value with the SetFloat method of Unity’s Animator class. I tried using this to set the value for the float parameter they added and tied to Cycle Offset, but it was not working for me. The string I entered matched the parameter name, and I connected the value to the parameter as they showed, but it was having no effect.

FIXED:

I was able to rectify this issue by explicitly stating within the Random.Range method that the values for 0 and 1 are floats (0.0f and 1.0f). Without this explicit declaration, it must have been using the int overload, which only ever returned 0 (so it initialized everything as it was originally). Making them floats fixed the issue, and the characters were put on different cycles as expected.

They also set varied starting speeds for the agents. This included changing the speed of their animations along with their actual Nav Mesh Agent travel speed. To keep these working together well, they randomly selected a single float value and applied it to both of these parameters.
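
Both tweaks together fit in a few lines (the animator parameter names and the speed range here are placeholders, not the package's actual names):

using UnityEngine;
using UnityEngine.AI;

public class CharacterVariety : MonoBehaviour
{
    void Start()
    {
        Animator anim = GetComponent<Animator>();

        // Explicit floats matter: Random.Range(0, 1) with ints always
        // returns 0, which is exactly what broke the offset originally.
        anim.SetFloat("cycleOffset", Random.Range(0.0f, 1.0f));

        // One random value drives both animation playback speed and
        // travel speed so the walk cycle matches the actual movement.
        float speedMultiplier = Random.Range(0.5f, 1.5f);
        anim.SetFloat("speedMultiplier", speedMultiplier);
        GetComponent<NavMeshAgent>().speed *= speedMultiplier;
    }
}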

Dynamic Nav Mesh Obstacle (with First Person Controller):

They then move on to adding dynamic Nav Mesh obstacles by adding a First Person controller and making the player an obstacle on the Nav Mesh. This is done simply by adding a Nav Mesh Obstacle component to the First Person controller. It creates a wireframe similar to a collider that can be shaped and sized to control the area it carves out of the Nav Mesh itself.

Summary

Overall the tutorials have been extremely basic so far and have not really delved into the actual AI behaviors of groups yet. I am hoping they explore implementing Reynolds’ flocking algorithm at some point so I can get some experience with that. If not, I may take a detour into the world of “boids” to expose myself to more crowd behavior AI.

UnityLearn – AI For Beginners – Autonomously Moving Agents

May 13, 2020

AI For Beginners

Autonomously Moving Agents

*Full Course*


Beginner Programming: Unity Game Dev Courses

Unity Learn Course – AI For Beginners

Intro

I completed the full course for Autonomously Moving Agents within the AI Unity Learn course and am just covering it all in this blog post. I broke each individual tutorial into its own section.

1: Seek and Flee

They start a new project and scene where there is a robber (NPC) and a cop (player controlled), and the robber is given logic to satisfy seeking and fleeing in relation to the player and the terrain obstacles.

Seek: Follow something else around
Flee: Move away from a specific object

Seek follows logic they have already used for following an object. Using the difference in positions between two objects, you can generate a vector which gives the direction between them and use that to guide an object towards the other one. Similar logic is used for Flee, except the opposite of the vector generated between the objects informs the agent of a direction directly away from a specific target. Since most movement done currently in these projects is target based, they just add this vector to the agent’s position to actually create a target in the desired direction.
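
Both behaviors in sketch form with a NavMeshAgent:

using UnityEngine;
using UnityEngine.AI;

public class SeekFlee : MonoBehaviour
{
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    // Head straight for the target's current position.
    void Seek(Vector3 location)
    {
        agent.SetDestination(location);
    }

    // Invert the vector to the target and seek that point instead.
    void Flee(Vector3 location)
    {
        Vector3 toTarget = location - transform.position;
        agent.SetDestination(transform.position - toTarget);
    }
}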

2: Pursuit

Pursuit: similar to seek, but uses information to determine where the target will be to decide on pathing, as opposed to just its immediate location

Mathematically, this uses information on the target’s position, as well as its velocity vector, to determine the target location. This information, combined with the NPC’s knowledge of its own speed, allows it to determine a path which should let it intercept its target some time in the future, assuming a constant velocity.

Conditions they added for varied movement:
If the player is not moving, change NPC behavior from Pursue to Seek
If the NPC is ahead of the player, turn around and change Pursue to Seek

They again use the angle between forward vectors and the vectors between the two interacting objects to determine the relative positioning of the NPC and player (whether the NPC is ahead of the player). They use a Transform method, TransformVector, to ensure the target’s forward vector is accurately translated to world space so it can be compared with the NPC’s own world space forward vector for a proper angle measurement.
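
A sketch of that decision logic (the angle thresholds are approximations; the target's speed is estimated here from frame-to-frame movement, and the vectors are compared directly in world space):

using UnityEngine;
using UnityEngine.AI;

public class Pursuer : MonoBehaviour
{
    public Transform target;
    NavMeshAgent agent;
    Vector3 previousTargetPosition;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        previousTargetPosition = target.position;
    }

    void Seek(Vector3 location) { agent.SetDestination(location); }

    void Update()
    {
        // Estimate the target's speed from how far it moved this frame.
        float targetSpeed =
            (target.position - previousTargetPosition).magnitude / Time.deltaTime;
        previousTargetPosition = target.position;

        Vector3 targetDir = target.position - transform.position;
        float relativeHeading = Vector3.Angle(transform.forward, target.forward);
        float toTarget = Vector3.Angle(transform.forward, targetDir);

        // Already ahead of the target and heading the same way, or the
        // target is standing still? Fall back to plain Seek.
        if ((toTarget > 90f && relativeHeading < 20f) || targetSpeed < 0.01f)
        {
            Seek(target.position);
            return;
        }

        // Otherwise aim at where the target will be, assuming constant velocity.
        float lookAhead = targetDir.magnitude / (agent.speed + targetSpeed);
        Seek(target.position + target.forward * targetSpeed * lookAhead);
    }
}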

3: Evade

Evade: similar to pursuit but uses predicted position to move away from where target is predicted to end up

Since it is so similar to pursuit, implementing it in the tutorial could basically be done by copying the pursuit logic and changing the Seek method to the Flee method (which effectively just sets the destination to the opposite of the vector between the agent and its target position). They do it even more simply by just getting the target’s predicted position and calling the Flee method on that position straight away.

4: Wander

Wander: the somewhat random movement of an agent when it does not have a particular task; comparable to an idle standing state

Their approach for wander creates a circle in front of the agent; a target location sits on the edge of this circle and moves a bit along it each frame to provide some variety in the agent’s pathing.

Important Parameters:
Wander Distance – distance from agent to center of circle
Wander Radius – size of circle
Wander Jitter – influences how the target location moves about the circle
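
A sketch of the whole mechanism (the parameter defaults are placeholders):

using UnityEngine;
using UnityEngine.AI;

public class Wanderer : MonoBehaviour
{
    public float wanderDistance = 10f; // distance from agent to circle center
    public float wanderRadius = 10f;   // size of the circle
    public float wanderJitter = 1f;    // how far the target slides per frame

    Vector3 wanderTarget = Vector3.zero;
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        // Nudge the target by a small random amount, then snap it
        // back onto the circle's edge.
        wanderTarget += new Vector3(Random.Range(-1f, 1f) * wanderJitter,
                                    0f,
                                    Random.Range(-1f, 1f) * wanderJitter);
        wanderTarget.Normalize();
        wanderTarget *= wanderRadius;

        // Project the circle in front of the agent and head for the target.
        Vector3 targetLocal = wanderTarget + new Vector3(0f, 0f, wanderDistance);
        Vector3 targetWorld = transform.TransformPoint(targetLocal);
        agent.SetDestination(targetWorld);
    }
}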

5: Hide

Part 1

Hiding requires having objects to hide behind. Their approach is to create a vector from the chasing object to an object to hide behind, and extend this vector a bit to create a target destination for the agent to hide at. A vector is then created between the hiding agent and this hiding destination to guide pathing.

The “hide” objects are tagged “hide” in Unity and are static navigational meshes. They created a singleton class named World to hold all the hide locations, which it gathers by using the FindGameObjectsWithTag method to find all the objects with the “hide” tag.

They mention two ways to find the best hiding spot. The agent can look for either the furthest spot or the nearest spot. They decided to use the nearest spot approach.

Hide Method

This goes through the array of all the “hide” tagged objects gathered in the World class and determines the hiding position for each of them relative to the target object it is hiding from. Using this information, it chooses the nearest hiding position and moves there. Each hiding position is determined by creating a vector from the target the agent is hiding from to a hide object and extending it a bit further in the same direction, so the result is some distance behind the hiding object and the hiding object sits between the target and the agent.
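
A sketch of that selection loop (the World accessor name and the fixed offset distance are assumptions):

using UnityEngine;
using UnityEngine.AI;

public class Hider : MonoBehaviour
{
    public Transform target; // the object being hidden from
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Hide()
    {
        Vector3 chosenSpot = Vector3.zero;
        float closest = Mathf.Infinity;

        foreach (GameObject hideObj in World.Instance.GetHidingSpots())
        {
            // Extend the target->obstacle vector past the obstacle so the
            // obstacle ends up between the target and the hiding position.
            Vector3 hideDir = hideObj.transform.position - target.position;
            Vector3 hidePos = hideObj.transform.position + hideDir.normalized * 5f;

            // Keep whichever candidate is nearest to the agent.
            float dist = Vector3.Distance(transform.position, hidePos);
            if (dist < closest)
            {
                closest = dist;
                chosenSpot = hidePos;
            }
        }

        agent.SetDestination(chosenSpot);
    }
}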

Part 2

The hide vector calculations use the transform position of the hiding objects, which is generally the center of the object. This means the distance to set the agent away from that center should vary, because objects can be different sizes.

To consistently get a position close to an object regardless of its size, they use an approach where the objects have a collider and the hide vector passes fully through this collider and beyond. From that far position, a ray is cast back in the opposite direction until it hits the collider again. The position where it hits the collider is then used as the position for the agent to hide in. This is done because it is much easier to determine where a ray first enters a collider than where it leaves one.

The new method they created with this additional logic is named CleverHide(). Combining this logic with the NavMeshAgent tools in Unity can require some fine tuning. The main factor to keep track of on the NavMesh side is the stopping distance, which is how close the agent must get to the actual destination for the system to consider it reached and stop the agent.
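
The core of the back-ray trick, extracted into a sketch (the 100f reach and 2f cushion are assumed values):

using UnityEngine;

public static class HideMath
{
    // Returns a hiding position just behind hideObj's collider, relative
    // to the target position, regardless of the obstacle's size.
    public static Vector3 HidePositionBehind(GameObject hideObj, Vector3 target)
    {
        Vector3 hideDir = (hideObj.transform.position - target).normalized;

        // Start well past the obstacle and cast back toward it; finding
        // where a ray enters a collider is easy, where it exits is not.
        Vector3 farPoint = hideObj.transform.position + hideDir * 100f;
        Ray backRay = new Ray(farPoint, -hideDir);

        RaycastHit info;
        Collider col = hideObj.GetComponent<Collider>();
        if (col.Raycast(backRay, out info, 200f))
            return info.point + hideDir * 2f; // just behind the far face

        return farPoint; // fallback if the cast somehow misses
    }
}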

They added a house object to test the system with more objects of various sizes, and it worked okay. It was interesting that the agent wouldn’t move unless the target player moved significantly, so sometimes you could get very close before it would move to another hiding position. I am not positive if this was another NavMesh stopping distance interaction or something with the new hiding method.

Finally, for efficiency purposes they did not want to run any Hide method every frame, so they added a bool check to see if the agent was already hidden before actually performing the Hide logic. To check if the agent should hide, they performed a raycast to see if the agent could directly see the player target. If so, it would hide; if not, it would stay where it was.
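
A minimal sketch of that visibility test:

using UnityEngine;

public class VisibilityCheck : MonoBehaviour
{
    // True if a ray toward the target reaches it unblocked, meaning the
    // agent can be seen and should (re)run the hide logic.
    public bool CanSeeTarget(Transform target)
    {
        Vector3 toTarget = target.position - transform.position;
        RaycastHit hit;
        if (Physics.Raycast(transform.position, toTarget, out hit, toTarget.magnitude))
            return hit.transform == target;
        return false;
    }
}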

6: Complex Behaviors

Complex Behaviors: combining the basic behaviors covered so far into decisions for the agent to determine how it should act. The first combination they do is choosing between pursue (when the player target is looking away) and hide (when the player target is looking at them). The looking check is done with an angle check between the target’s forward vector and the vector between the two objects.

They also added a cooldown timer to update the transition between states since it led to some very weird behavior when hiding with the current setup (the agent would break line of sight with the target immediately upon hiding, which would then cause it to pursue immediately, and then hide again, etc. so it would just move back and forth at the edge of a hiding spot).

7: Behavior Challenge

Challenge:
Keep pursuit and hiding as they are
Include range – if the player is outside a distance of 10, the agent should wander

My approach: I added a bool method named FarFromTarget() that did a distance check between the agent and the target. If the magnitude of that vector was greater than 10, it returned true, otherwise it returned false.

Then in Update, I added another branch of logic after the cooldown bool check to see if FarFromTarget was true. If so, the agent entered Wander; otherwise it performed the same logic as before with the checks to perform either CleverHide or Pursue.

Their approach: They also created a bool method, but they named it TargetInRange(). Beyond that, they followed the same exact process I did. They checked TargetInRange within the cooldown if statement, performed Wander above all else, and otherwise fell through to an else branch with the other previous logic (CleverHide or Pursue).

Summary

All these behaviors are actually relatively simple on their own, but as shown in the final tutorials, combining these with transitions can create interesting and seemingly complex behaviors. This type of design would work very well with the Finite State Machine systems covered in earlier tutorials (as well as others I’ve researched as well). State Machines are also very nice to ensure the isolation and encapsulation of the individual types of behavior.