Architecture AI Pathing Project: Upward Raycast for Better Opening Detection

January 11, 2021

Raycast for A* Nodes

Architecture AI Project


Original Downward Raycast and Downfalls

The raycasting system that detects the environment to set up the base A* pathing nodes originally used downward raycasts. Each ray traveled until it hit the environment, then set the node's position and whether it was walkable. An extra sphere collision check around the point of contact also looked for obstacles right next to the node.

This works for fairly open environments, but it had a major downside for our particular needs: it failed to detect openings within walls, primarily doors. Doors are almost always set within a wall, so the raycast would hit the wall above the door and mark the node unwalkable. This left most doors as unwalkable areas because of the walls above and around them.

Upward Raycast System Approach

Method

The idea of using an upward raycast was to alleviate this issue of marking openings within walls unwalkable when they should be walkable. By firing the rays upward, the first object hit can in most cases safely be assumed to be the floor, because our system only works with a single level at this time. Upon hitting the first walkable target (the assumed floor), that point is recorded and another raycast is fired upward until it hits an unwalkable target. If nothing is hit, the node is walkable (there is clearly nothing unwalkable blocking the way). If a target is hit, the distance between the two contact points is calculated and compared against a constant height check; if the distance is greater, the node is still marked walkable even though the ray eventually hit an unwalkable object.

This approach attempts to measure the available space between the floor and any unwalkable objects above it. If a wall sits directly on the floor, as most do, the distance will be very small, so the node is appropriately set unwalkable. If there is a walkable door or a large opening such as an arch, the distance between the floor and the wall above the opening should be large enough that the system still marks the area walkable.
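Below is a minimal sketch of how the two upward raycasts might combine, assuming hypothetical names (walkableMask, unwalkableMask, minOpeningHeight) rather than the project's actual identifiers:

    // Sketch of the upward raycast walkability check described above.
    // walkableMask, unwalkableMask, and minOpeningHeight are assumed names.
    using UnityEngine;

    public class UpwardNodeCheckSketch : MonoBehaviour
    {
        public LayerMask walkableMask;      // floors
        public LayerMask unwalkableMask;    // walls and obstacles
        public float minOpeningHeight = 2f; // clearance needed to stay walkable

        // Fires upward from below the grid; returns walkability and the floor point.
        public bool IsNodeWalkable(Vector3 rayOrigin, out Vector3 floorPoint)
        {
            floorPoint = rayOrigin;

            // First upward ray: the first walkable hit is assumed to be the
            // floor (safe here because the system only handles one level).
            if (!Physics.Raycast(rayOrigin, Vector3.up, out RaycastHit floorHit,
                                 Mathf.Infinity, walkableMask))
                return false; // no floor above this point at all

            floorPoint = floorHit.point;

            // Second upward ray from just above the floor: look for anything
            // unwalkable sitting over this node.
            Vector3 above = floorHit.point + Vector3.up * 0.01f;
            if (!Physics.Raycast(above, Vector3.up, out RaycastHit ceilingHit,
                                 Mathf.Infinity, unwalkableMask))
                return true; // nothing unwalkable blocking the way

            // Something unwalkable is overhead; the node stays walkable only
            // if the gap (a doorway, an archway) clears the height threshold.
            return (ceilingHit.point.y - floorHit.point.y) >= minOpeningHeight;
        }
    }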

Sphere Collision Check for Obstacles

As in the original system, we still wanted a sphere collision check to flag obstacles very close to nodes, so that obstacles could not slip between the cracks of the cast rays and effectively become walkable. We kept this check, but with one caveat: the initial hit is now below the floor, so the thickness of the floor must be accounted for. Currently a larger check radius is simply used so the sphere can reach up through the floor. In future edits, it could make sense to define a constant floor thickness and perform the sphere check above the initial raycast contact point according to that thickness.
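As a rough illustration of that suggested future fix, a sketch assuming hypothetical floorThickness and obstacleMask parameters:

    // Shift the sphere check up by a known floor thickness instead of
    // enlarging the radius. floorThickness and obstacleMask are assumed.
    bool HasNearbyObstacle(Vector3 floorHitPoint, float nodeRadius,
                           float floorThickness, LayerMask obstacleMask)
    {
        // The upward ray hits the underside of the floor, so move the check
        // point above the walking surface before testing for obstacles.
        Vector3 checkCenter = floorHitPoint + Vector3.up * floorThickness;
        return Physics.CheckSphere(checkCenter, nodeRadius, obstacleMask);
    }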

Test Results

Details

I compared the two raycast methods with a few areas in our current test project. In the following images, the nodes are visualized with Unity’s cube gizmos. The yellow nodes are “walkable” and the red nodes are “unwalkable”. Most of the large white objects are walls, and the highlighted light blue material objects are the doors being observed.

Test 1

A high density door area was chosen to observe the node results of both systems.

The downward check can work for doors when the ray does not directly hit a wall, as the sphere check properly marks them walkable from the floor. However, the rays that hit walls are visible as the unwalkable nodes placed on top of them. A clear barrier forms along the entirety of the walls, effectively making the doors unwalkable.

The upward raycast test clearly shows that every door has at least a single-node-wide gap of walkable nodes. The doors that were unwalkable with the downward raycast are now walkable because the height check was met.

Test 1: Downward Raycast




Test 1: Upward Raycast

Test 2

A larger room with a single notable door was observed.

The downward check does pose a problem here, as a full line of unwalkable nodes can be seen on the floor blocking access through the door and into the room. Because the problem shows up as nodes on the floor rather than nodes on top of the wall, this is actually a case where the sphere collision check is the problem rather than the raycast itself. Changing the collision radius for the check could potentially solve the issue here.

The upward raycast cleanly presents a walkable path through this door into the room. While this gives the result we wanted, it should be noted again that this difference can be attributed to the sphere collision check for obstacles. The same radius was used for both tests, but the upward raycast's sphere originates from the bottom of the floor, so the extra distance it has to travel through the floor is what really opens up this path.

Test 2: Downward Raycast




Test 2: Upward Raycast

Summary

The upward raycast looks extremely promising in providing the results we wanted for openings, doors especially. The tests clearly demonstrate that it resolves a major issue case for the downward check, so it is at worst a significant upgrade to the original approach. The upward raycast with its distance check also succeeded in other door locations, supporting its consistency. It will still have trouble from time to time with very narrow openings, but should work in a majority of cases.


Architecture AI Completed Prototype with Data Visualization and Pathing Options

June 24, 2020

Architecture AI Project

Completed Project Prototype

Introduction

We were exploring different methods for importing architectural models into Unity and decided to try Revit to see what it had to offer. The models are easy to move between programs, and Revit also creates .cs scripts that Unity can use to provide some interactive data visualization with the Revit models.

Data Visualization Options

I moved everything dealing with the major logic of the A* grid node visualization into its own class named AStarGridVisualization. This allowed me to continue to hold on to all the different debug visualization methods I created over the course of the project and convert them into a list of options for the designer to use for visualization purposes.

As can be seen in the demo, there is a drop-down list in the Unity Inspector so the designer can tell the system which data to visualize from the nodes. The options currently available are Walkable, Window, and Connectivity. Walkable shows which nodes are considered walkable; Window and Connectivity show colored heat maps of those respective values within the nodes.
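An illustrative sketch of that drop-down idea follows; the Node fields and method names here are assumptions, not the project's exact code:

    // An enum exposed in the Inspector selects which value the gizmos show.
    using UnityEngine;

    public class Node { public bool walkable; public float windowValue, connectivityValue; }

    public enum NodeDebugView { Walkable, Window, Connectivity }

    public class AStarGridVisualizationSketch : MonoBehaviour
    {
        public NodeDebugView view = NodeDebugView.Walkable;

        // Called per node while drawing gizmos; returns the value to color by.
        float GetDisplayValue(Node node)
        {
            switch (view)
            {
                case NodeDebugView.Window:       return node.windowValue;
                case NodeDebugView.Connectivity: return node.connectivityValue;
                default:                         return node.walkable ? 1f : 0f;
            }
        }
    }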

Changing Values Used for Pathing

We wanted the project to be able to use different types of values to see their effects on the agents' pathing, so every node holds several different types of architectural values (i.e. Window and Connectivity). Using the window information for pathing gives different agent paths than using the connectivity values, because the agents have different affinities toward these value types and each node holds a separate value for each type.

The PathFindingHeapSimple class handles the path cost calculations for the agents, so I also located the option to switch between pathing value checks there. This class checks nodes and their values to find the lowest-cost paths, so another drop-down list available to the designer here determines which architectural value type these checks read from the nodes. This allows the designer to choose which architectural value actually influences the agents' paths.

A* Node Grid Construction Options

At this point the project was still working with model imports from various programs to find which worked best and most consistently, so I added some extra options for generating the node grid that make it quicker to line up the node coverage area with the model at hand. The major options, located in the AGrid class itself, are GridOriginAtBottomLeft and UseHalfNodeOffset (a third determines whether the system reads data from a file to add values to the node grid, but that is a more file-oriented option).

GridOriginAtBottomLeft

GridOriginAtBottomLeft determines where Unity’s origin is located relative to the node grid. When true, Unity’s origin is used as the bottom left most point of the entire node grid. When false, Unity’s origin is used as the central point of the generated node grid.

UseHalfNodeOffset

UseHalfNodeOffset is responsible for shifting the nodes ever so slightly to give a different centering option for the nodes. When true, half of the dimension of an individual node is added to the positioning vector of the node (in x and z) so that the node is centered within the coordinates made by the system. When false, the offset is not added, so the nodes are directly located exactly at the determined coordinates.
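A sketch of how the two options could combine when positioning a node at grid coordinates (x, z); all identifiers are assumed from the descriptions above, imagined as living on the AGrid MonoBehaviour:

    using UnityEngine;

    public class AGridOptionsSketch : MonoBehaviour
    {
        public bool gridOriginAtBottomLeft = true;
        public bool useHalfNodeOffset = true;
        public float nodeDiameter = 1f;
        public int gridSizeX = 100, gridSizeZ = 100;

        Vector3 GetNodePosition(int x, int z)
        {
            Vector3 origin = transform.position;

            // Centered-origin mode: shift back by half the grid so node
            // (0, 0) still starts at the bottom-left corner.
            if (!gridOriginAtBottomLeft)
                origin -= new Vector3(gridSizeX, 0f, gridSizeZ) * nodeDiameter * 0.5f;

            Vector3 position = origin + new Vector3(x, 0f, z) * nodeDiameter;

            // Half-node offset centers each node within its grid cell.
            if (useHalfNodeOffset)
                position += new Vector3(nodeDiameter, 0f, nodeDiameter) * 0.5f;

            return position;
        }
    }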

File Reading to Set Values Within Nodes

There are two major ways to apply architectural values to the nodes: using architectural objects (within Unity itself) or reading data from a file. The data files contain x and y coordinate information along with a value for a specific architectural value type for many data points throughout a space. These files can be placed in Unity's Assets/Resources folder to be read into the system. The option referenced above in AGrid, the IsReadingDataFromFile bool, simply tells the system whether or not to check the given file to assign values to the grid.

The FileReader object holds the CSVReader class, which deals with the logic of actually reading this file information. The name of the file to be read needs to be inserted here (generally by copy/paste to make sure it is exact), and the dimensions of the data should be manually assigned at this time so the system gets all of the data points and assigns them to the grid appropriately (these data dimensions are the "x dimension" and "y/z dimension" ranges of the coordinates within the data).
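A minimal sketch of that reading flow: load a CSV from Resources, parse x/y coordinates plus one architectural value per line, and hand each point to the grid through a callback. The file name, column order, and callback are all assumptions:

    using UnityEngine;

    public class CSVReaderSketch : MonoBehaviour
    {
        public string fileName = "windowData"; // pasted in exactly, no extension

        void ReadDataFile(System.Action<float, float, float> applyValueAt)
        {
            TextAsset csv = Resources.Load<TextAsset>(fileName);
            if (csv == null) { Debug.LogError("Data file not found: " + fileName); return; }

            foreach (string line in csv.text.Split('\n'))
            {
                string[] cols = line.Split(',');
                if (cols.Length < 3) continue; // skip header and blank lines

                float x = float.Parse(cols[0]);
                float y = float.Parse(cols[1]);
                float value = float.Parse(cols[2]);

                // The grid maps the data point's coordinates onto the node
                // it falls within and stores the value there.
                applyValueAt(x, y, value);
            }
        }
    }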

Demonstration of Prototype

This demonstration helps show the different data visualization options of the grid, the agent spawning system, how changing the architectural value path type changes the agents’ paths, and how a lot of these systems look in the Unity Inspector.

Vimeo – Architecture AI Project Prototype Demonstration

Summary

I really liked a lot of the options I was able to add to help increase the project's overall usability and make it much easier to switch between various options at runtime to test different setups. The end product is meant to help users who may not have a lot of Unity experience, so I tried to reduce the learning curve by making a lot of options easily accessible with visible bools and drop-down lists, while also providing some simple buttons in the game space to help with other basic actions (such as spawning new agents).

Many of the classes were constructed with scalability and efficiency in mind, so expanding the system should be very doable. However, much of this scalability is still contained within the programming itself, and the classes would need to be modified to add more options to the system at this point (such as new architectural types). This is something I would like to make more accessible to the designer so that it is intuitive and possible to modify without programming knowledge.

The file reading system works well enough for these basic cases, but it could definitely use some work to be more flexible and to work with various node grid options. This is something I could use more practice with in general; systems for reading in files and writing files out to record data would be useful to understand better.

Architecture AI Varied Agent Affinities with Varied Pathing

May 28, 2020

Architecture AI Project

Varied Agent Affinities with Varied Pathing

Demonstration of Varied Agent Affinities with Varied Pathing

Vimeo Link – My Demo of Agents Pathing with Varied Affinities

Explaining Simple but Unclear Heatmap Coloring System

Just to help explain, since I don't have great UI in place to explain everything currently: the first thing is that I was testing a really rough "heat map" visual for the architectural values. When you turn on the option to show the node gizmos now, instead of uniform black cubes showing all the nodes, the cubes are colored according to the architectural value of the node (for this test case, the window value). Unfortunately this color system isn't great, especially without a legend, but you can at least see it's working (I just need to pick a better color gradient). Red unfortunately represents both values at/near 0 (min) and values at/near 100 (max) (the value range here is only 0 – 80, though, so all red in this demo is 0).

Agent Affinity/Architectural Value Interaction for Pathing

The more exciting part is that the basis of the agent affinity and architectural value interactions seems to be working and affecting pathing in a way that at least makes sense. Again just for demo purposes, as can be seen in the video (albeit a bit blurry), I added a quick slider in the Inspector for the Spawn Manager that determines the "Window Affinity" of the next agent it spawns (for clarity I also added it as a text UI element visible at the top of the Game View window). To rehash, this value has a set range between 0 and 100, where 0 means agents "hate" windows and avoid them and 100 means they "adore" windows and gravitate toward them.

Test #1: Spawn Position 1

As can be seen in the first quick part of the demo, I spawn two agents from the same spawn position 1 but with different affinities. The first has an affinity of 0, and the second has an affinity of 100. Here you can already see the 0-affinity agent steers toward the left side of the large wide obstacle to avoid the blue (relatively high window value) area, whereas the 100-affinity agent goes around the right side of the same obstacle, preferring the high-window-value areas in the blue marked zone.

Test #2: Spawn Position 2

Both the 0-affinity and 100-affinity agents take very similar paths, differing by only a couple of node deviations here and there. This makes sense, as routing around the large high-window-value area would take a lot of effort, so even the window-avoidant agent takes the relatively straightforward path over rerouting.

Test #3: Spawn Position 3

This test demonstrated results similar to Test #1. The 100-affinity agent moved up and over the large obstacle in the southern part of the area (again preferring the high window value area in the middle), whereas the 0-affinity agent moved below the same obstacle and even routed a bit further south to avoid some of the smaller window-affected areas as well.

Summary

I did some testing of values between 0 and 100, and with the low complexity of the area so far, most agents ended up taking one of the same paths as the 0- or 100-affinity agents from what I saw. This will require more testing to see if some variance already exists, but if not, it suggests that some of the hardcoded behind-the-scenes values or calculations for additional cost may need tweaking (as expected). Overall though, the results came out pretty well and seem to make sense. The agents don't circle around objects in very odd ways, but they also do not go extremely out of their way to avoid areas even when they don't like them.

Architecture AI Heatmap of Architectural Costs

May 27, 2020

Architecture AI Project

Visualizing Heatmap of Architectural Costs

Architectural Cost Heatmap

I was originally just using basic black cubes to visualize the grid of nodes in the AGrid A* system. Now I finally decided to give that system a bit more information by adding a colored heatmap of the nodes' architectural values. This will help me understand the pathing of my agents better as they start to use these values in more complex and interesting ways.

Basic Heatmap Color System

It turns out that making a heatmap color scale that ranges decently between values of 0.0 and 1.0 is pretty easy. Using HSV color values, you can vary just the H (hue) value while keeping the other two constant at 1.0 to cover a heatmap spectrum (where an H of 0.0 is the lowest value in the range, 1.0 is the highest, and all other colors are bound within that range). By simply using the minimum and maximum architectural values possible in the system as the range bounds, I could easily set this up.
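A sketch of that HSV mapping: normalize the node's architectural value into [0, 1] and vary only the hue (the min/max parameters are the system's assumed range bounds):

    using UnityEngine;

    public static class HeatmapColorSketch
    {
        public static Color FromValue(float archValue, float minArchValue, float maxArchValue)
        {
            float t = Mathf.InverseLerp(minArchValue, maxArchValue, archValue);
            // Hue sweeps the spectrum; saturation and value stay at full.
            return Color.HSVToRGB(t, 1f, 1f);
        }
    }

One caveat: hue wraps around, so 0.0 and 1.0 are both red, which is exactly the two-reds problem noted in the May 28 post; scaling t down (to roughly 0.7, ending the gradient at blue/violet) would avoid the wraparound.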

Applying Colors to Visualization

For now I decided to apply this heatmap color methodology just to the cubes mapping out the A* grid. While these gizmo cubes used to be colored plain black for simplicity, they are now colored according to the heatmap system using the architectural value found within each node.

Bug: Influence Objects Found to Radiate Improperly

The influence objects which apply architectural values to the nodes around them were initially designed to apply in an area around them, with the objects at the center. After applying the heatmap, it was clear that the influence objects were not applying to the intended nodes. It appears the influence actually uses the position of the influence object as the bottom-left corner and radiates upward and to the right (just in the positive x and z directions away from the object). This is something I will have to look into, to make sure it is an issue with the influence objects and not the coloring system.

Current Heatmap System (with Bugged Influence Ranges)

Architecture AI Cost Calculation Class for AStar

April 27, 2020

Architecture AI Project

Building Architecture Value to A* Cost Class

Math for Calculating A* Cost with Architecture Value and Agent Affinity Integration

I needed to translate agent affinities for architectural elements, and their interaction with the architectural values found in the nodes throughout the A* grid, into cost values that integrate with A*'s pathfinding logic. For this I wanted a class that is always available and contains all the math and constraints for this cost translation.

Class MathArchCost

The class MathArchCost was created to fulfill this purpose. It is a public class with a public static instance, making it readily available for other classes to access for calculations or for checking minimum and maximum values. I went this route instead of a truly static class because I wanted an instance of the object that I could edit in the Inspector to tweak and test values. I may investigate ways of editing a truly static class in the Inspector in the future.

Relationship Between Agent Affinity and Architectural Value

I was looking to make a system where agents that have a very high affinity for a particular architectural element would be drawn to nodes with that architectural value type, and more strongly the higher the node's architectural value. Similarly, I wanted a very low affinity for an architectural element type to drive agents away from nodes with that type of value, and even more so from those with high values of that type. Example types include: window, light, narrowness, etc.

To solve this, I decided to use a basic formula that takes an agent affinity as input and outputs a "cost per architectural value" rate. This way high affinities create very low or strongly negative rates (because lowering cost is what persuades an agent to move onto a node) and low affinities produce high cost rates. Working with a rate reinforces the idea that more of an architectural element in a node has a greater impact on the overall cost.

Default Architectural Cost

Architectural cost is the resulting cost on a node for a particular agent, accounting for the agent's affinity and the node's architectural values; it is then incorporated into the overall A* cost calculations to determine how "good" the agent perceives traveling on this node to be. Since subtracting values can sometimes have strange or unwanted results, I did not want certain affinity and architectural value combinations to remove cost from the node's overall cost right away. To circumvent this, I added a "default architectural cost" to each node that the affinity/architectural-value relationship can raise or reduce (without going below 0).

Summarizing Architectural Cost from Architectural Value and Agent Affinity Relationship

So the overall concept is that every node has some cost used in the A* pathfinding. An additional cost, the architectural cost, accounts for the agent's affinities and the node's architectural values. This cost starts at the default for every node and is either reduced (to a possible minimum of 0) by very high affinity combined with very high architectural value, or raised, with an arbitrary ceiling determined by the maximum possible affinity and the maximum possible architectural value a node can have. It is more important to ensure the cost never dips below 0 than to restrict the upper limit.

Next Steps

I have already looked into combining these values in a precise mathematical relationship, which I will explore in my next blog post. I am looking to finally integrate this into the A* pathfinding system and test it with agents that have a single architectural affinity type (at various values), adding varied architectural values of that type to nodes throughout the A* grid, to see whether the results are in the direction I expect and how processing intensive it is. We only need to test a single agent at a time currently, so the process can be rather expensive if needed, but it still needs to be tested, and adding to the cost determination process does give me some performance concern.

Updating A* to Incorporate Various Architectural Elements Influencing Pathfinding

March 21, 2020

Updating A*

Different Agent Types and Different Architectural Elements

Adding Architectural Elements of Influence to A* Pathfinding

We want to be able to add various architectural elements to an environment and use them to influence the pathing of AI agents using A* pathfinding. Along with this, we want various agents to respond differently to the same architectural stimulus (i.e. some agents should prefer moving near windows, while others should prefer to stay away from them). This required adding architectural data to both the agents and the nodes within the A* grid.

Implementing Architectural Elements

I have started to implement the approach outlined in yesterday's blog post to create the foundation for this architectural pathing addition. I started with just a single data point to work out the feature, adding a "window" value to both the agents and the nodes of the grid. The agent's window value measures its affinity for window spaces (a higher value means the agent is more likely to move near/toward windows), and the node's window value represents how much that space is influenced by windows (being closer to a window or near several windows raises this value; a higher value means the node is more window-influenced and more appealing to high-window-affinity agents).

I modified the existing Influence class into an abstract base class that I will use as the basis for these architectural influence objects. This lets me pass on traits shared by all types of influence objects (such as dimensions for the area of influence) as well as methods they all need (such as one that determines how they apply influence to the area around them). Related to this, I was able to move the influence method out of the AGrid class, which also prepares the entire grid, so this was good for trimming that class down.

Moving the influence-spreading methods to the Influence objects themselves was also crucial because it allows different objects to apply influence differently and more uniquely. Some objects may apply to simple rectangular shapes, while others may project out in a cone or even use raycasts to determine their area of influence. To help these Influence objects affect the proper area, I pass them a reference to the grid as well as their starting position (in terms of nodes on the grid). The AGrid class, which currently holds a reference to all the Influence objects in the scene, is a helpful starting point for this, as it uses their transform information to determine which node each is centered on (which is passed on to the Influence object itself so it knows where it sits in the grid).

Finally, I had to make sure a reference to the individual agent gets passed through the entire pathfinding process so it can use that information to provide a possibly unique path for each type of agent. This just meant adding references to the UnitSimple class to pass through as a parameter from UnitSimple itself, to the PathRequestManagerSimple, and eventually to the PathfindingHeapSimple, which finally uses that data to modify how it calculates cost when creating paths.

Investigating How to Apply Architectural Influence as Cost

Figuring out how to combine the architectural value within the nodes and the architectural affinities of the agent in a mathematical sense that works has proven difficult, and I am still investigating a clean way to implement it. Because of how costs work in this pathfinding system, subtracting values from costs in an uncontrolled way can be risky and prone to breaking, as negative values can produce some very strange results.

To help determine a system to implement, I broke it down as simply as I could. The goal is to use the two values, agent affinity and node architectural value, together to produce an architectural cost added on top of the normal distance cost of a node. From this, I created a set of constraints to guide the process, and eventually a basic system to follow for now that gives results at least in the proper realm of what we are looking for.

Architectural Cost Constraint Breakdown

Early on I thought it would be helpful to have a MAX affinity value and a MAX architectural value, since I expected to either subtract from these values (so bigger numbers lead to lower additional cost) or divide by them (to provide some proportionality within the cost calculation). These arbitrary values and their concept were then used in coming up with the initial constraints. These constraints were (the results are the architectural cost, added to the total node cost):

  • MAX Affinity with MAX Architectural Value = 0
  • MIN Affinity with MAX Architectural Value = MAX Architectural Cost
  • Average Affinity with ANY Architectural Value = DEFAULT
  • ANY Affinity with 0 Architectural Value = DEFAULT

The ideas behind these constraints were: having the highest affinity with the highest possible architectural value on a node should produce the lowest architectural cost possible (0 in this case); the lowest affinity (or greatest aversion) with the highest architectural value should produce the highest architectural cost possible; an average affinity means the agent is neutral toward that architectural element, so no value has any major influence on the cost; and finally, if a node has 0 architectural value for a feature, the agent takes on no extra cost regardless of its affinity, since the feature is not present in any way for that node.
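One linear form that hits all four constraint points exactly, as a sketch with assumed constants (not the project's final formula, which is still open at this point):

    using UnityEngine;

    public static class MathArchCostSketch
    {
        public const float MAX_AFFINITY   = 100f; // affinity range [0, MAX]
        public const float MAX_ARCH_VALUE = 100f; // node value range [0, MAX]
        public const float DEFAULT_COST   = 50f;  // default architectural cost

        public static float ArchitecturalCost(float affinity, float archValue)
        {
            float average = MAX_AFFINITY * 0.5f;

            // Cost per unit of architectural value: below-average affinity
            // adds cost, above-average affinity subtracts it.
            float costPerValue = (average - affinity) / average
                                 * (DEFAULT_COST / MAX_ARCH_VALUE);

            // Never let the architectural cost dip below 0.
            return Mathf.Max(0f, DEFAULT_COST + costPerValue * archValue);
        }
    }

Checking the corners: MAX affinity with MAX value gives 0; MIN affinity with MAX value gives 100, the maximum architectural cost; and an average affinity or a 0 architectural value leaves the default of 50 untouched.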

Further Breakdown of Interaction Between Agent Affinity and Architectural Node Value

While a step in the right direction, I still was not sure exactly how I wanted to numerically determine the architectural cost. To aid myself further, I settled on a system that made a decent amount of sense for how I wanted these factors to interact. The tricky part is that it is the interaction between the two factors that influences the resulting cost on the node; it is not simply that more of one or the other reduces cost on its own.

The concept I settled on was that the agent's affinity value would be used to calculate a cost-per-architectural-value factor (by basic dimensional analysis, multiplying this by an architectural value results in a cost). I decided that having a MAX possible affinity value made sense and could help with this approach. Affinities below the average (the value midway between the MIN and MAX affinities) would increase the cost of the node, while values above the average would reduce it. The architectural value would then determine how strongly this rate influences the node's cost. This correctly hits the points that agents with very low affinities should strongly avoid nodes with a lot of that architectural element, and those with very high affinities should strongly prefer them.

I feel like at this point I have most of the constraining factors and rules in place; I just need to bring them all together in a relatively clean calculation. My first attempt does not need to be the perfect solution, and I can modify it later if certain mathematical approaches work better for this situation. I just need to make sure it encompasses the general ideas and goals we need from the system for now.

Next Steps

I mostly just need to bring everything together into a calculation that satisfies our basic rules and goals. As stated, I can modify it if better mathematical approaches are found, but we just need something to test for now. I am investigating basic math formulas to see which curves look like they would fit the preferred results, so I will at least record those to test in the future (just simple squared, cubed, and root graphs). Finally, I need to fully implement the system so I can test the bare basics: making sure the different agents actually travel differently around the window objects in the world, and that they are not slowing processing to a crawl.

Updating A* for Multiple Agent Types of Pathfinding

March 20, 2020

Updating A*

Different Pathfinding Logic for Different Agent Types

A* Current System

The A* pathfinding currently performs the same calculations for every agent to determine its path, so every unit takes the same path given the same starting and target positions. We are looking to adjust this so there are various types of agents with different affinities toward certain objects in the environment. This then needs to be incorporated into the path determination so that different types of agents take different paths, even when using the same starting position and same target position.

Differentiating Pathfinding for Different Types of Agents

We want to add more parameters to the grid nodes besides flat cost that influence the agents’ paths. These parameters also need to be able to influence different agents’ paths differently (i.e. the parameters may give a node a cost of 10 for one agent, but 20 for another).

Approach

The initial needs for this system are the addition of parameters to the nodes of the A* grid and parameters that correlate with these on the agents themselves. With this in place, we are just going to add a reference pass of the individual agents themselves to the eventual path calculation so it can also factor in these additional values.
The actual steps for this concept are laid out here, with a signature-level sketch following the list:

– Individual agents need to have data within them that determines how they act
– Pass the UnitSimple instance itself through PathRequestManagerSimple.RequestPath along with the current information (its position, its target position, and the OnPathFound method)
– In the PathRequestManagerSimple script, add a UnitSimple variable to the PathRequest struct
– Now RequestPath, with the added UnitSimple parameter, can also pass its UnitSimple reference to its newly created PathRequest object
– Then the TryProcessNext method is run
– Here the UnitSimple reference can be passed on to PathfindingHeapSimple through the pathfinding.StartFindPath() method call
– Add a UnitSimple parameter to the StartFindPath method in the PathfindingHeapSimple class along with the existing parameters (start position of the path request, target position of the path request)
– PathfindingHeapSimple can now use variables and factors of UnitSimple when determining and calculating the path for that specific agent
– Finally, it returns the determined path to the agent as a set of Vector3 waypoints (effectively the same process as before)
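The sketch below shows the reference pass at the signature level; method bodies are elided and the exact signatures are assumptions based on the listed steps:

    using UnityEngine;

    public class UnitSimple : MonoBehaviour { /* agent data and affinities live here */ }

    public class PathRequestManagerSimple : MonoBehaviour
    {
        struct PathRequest
        {
            public Vector3 pathStart, pathEnd;
            public UnitSimple unit; // added so each request carries its agent
            public System.Action<Vector3[], bool> callback;
        }

        public static void RequestPath(Vector3 pathStart, Vector3 pathEnd,
                                       UnitSimple unit, // new parameter
                                       System.Action<Vector3[], bool> callback)
        {
            // ...queue a PathRequest holding the unit, then TryProcessNext()
            // forwards it via pathfinding.StartFindPath()...
        }
    }

    public class PathfindingHeapSimple : MonoBehaviour
    {
        public void StartFindPath(Vector3 startPos, Vector3 targetPos, UnitSimple unit)
        {
            // ...A* search that can now read the unit's affinities while
            // calculating node costs, then returns Vector3[] waypoints...
        }
    }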

Next Steps

This is just a conceptual plan, so I still need to implement it to see if it works and how it runs. I will also be looking to add some parameters to the nodes and agents, so I can start differentiating various types of agents to test the system.

A* Architecture Project – Spawning Agents and Area of Influence Objects

March 25, 2020

Updating A*

Spawn and Area of Influence Objects

Spawning Agents

Goal: the ability to spawn agents that can use the A* grid for pathing, with options to spawn in different locations, all using the grid properly.

This was rather straightforward to implement, but I did run into a completely unrelated issue. I created an AgentSpawnManager class which simply holds a prefab reference for the agents to spawn, a transform for the target to pass on to the agents, and an array of possible spawn points (to support several spawn locations). This class creates new gameObjects from the prefab reference and then sets their target to the one held by the spawner. This was worth watching, since there can sometimes be issues with Awake and Start methods when setting values after instantiation.
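A sketch of that spawner; the fields follow the description above, while the UnitSimple target field is an assumption:

    using UnityEngine;

    public class UnitSimple : MonoBehaviour { public Transform target; } // stub for the project's agent class

    public class AgentSpawnManagerSketch : MonoBehaviour
    {
        public GameObject agentPrefab;  // prefab reference for spawned agents
        public Transform target;        // target passed on to each agent
        public Transform[] spawnPoints; // possible spawn locations

        public void SpawnAgent(int spawnIndex)
        {
            GameObject agent = Instantiate(agentPrefab,
                                           spawnPoints[spawnIndex].position,
                                           Quaternion.identity);

            // Set the target after instantiation; worth watching, since
            // values assigned here can race with the agent's Awake/Start.
            agent.GetComponent<UnitSimple>().target = target;
        }
    }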

This was all simple enough, but the agents were spawning without moving at all. It turned out the issue was that the spawn location was above a surface that was above an obstacle (the obstacle was below the terrain and entirely obstructed). This was an issue with how my ray detection and node grid were set up.

Editing the Grid Creation Raycast

The node and grid creation for the A* system uses a raycast to detect obstacles, as well as types of terrain, to inform the nodes of their costs and whether they are usable at all. Since it is very common to use large planes or surfaces as general terrain and place obstacles on top, using a full ray check would almost always pick up walkable terrain, even if the ray hit an obstacle as well.

To get around this, I simply had it check for obstacles first; if it detected one, it marked the node unwalkable and moved on. This created an issue in the reverse case, however: if an obstacle extended a bit past the terrain into nodes below the surface, those would be picked up as false obstacles. In this case, it was picking up obstacles located entirely below the surface.

I was using a distance raycast in Unity, which checks for everything within a set distance. I looked into switching to a system that detects only the first collider the ray hits and uses just that information, and found that using the raycast's hit information does exactly this.

Unfortunately I am using a layer setup for walkable and unwalkable (obstacle) terrain, so I needed to incorporate that into my hit check. Checking layers is weird and unreadable in Unity scripting, since I am currently just using the hardcoded integer value of the unwalkable layer when checking for obstacles. This at least suffices to let the system work properly for now.
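A sketch of that first-hit check; the hardcoded layer index (9 here) is illustrative, matching the post's mention of a hardcoded integer for the unwalkable layer:

    using UnityEngine;

    public class GridRaycastSketch : MonoBehaviour
    {
        const int UNWALKABLE_LAYER = 9; // hardcoded unwalkable layer index

        bool NodeIsWalkable(Vector3 rayOrigin, float rayDistance)
        {
            // Take only the first collider the ray hits, instead of
            // everything within the ray's distance.
            if (Physics.Raycast(rayOrigin, Vector3.down, out RaycastHit hit, rayDistance))
            {
                // If the first thing hit is on the unwalkable layer the node
                // is blocked; obstacles buried below the surface are never
                // reached by the ray, so they no longer register.
                return hit.collider.gameObject.layer != UNWALKABLE_LAYER;
            }
            return false; // nothing hit: no terrain at this node
        }
    }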

Influence Area with Objects

I wanted to be able to create objects that could influence the overall cost of nodes around them in an area significantly larger than the objects themselves. The idea is that a small but visible or detectable object could influence the appeal of the nodes around it, drawing agents toward it or pushing them away.

For a base test, I created a simple class called Influence to place on these objects. The first value it needed was an influence int determining how much to alter the cost of the nodes its influence reaches. Then, to determine the influence range, I gave it an int each for the x and z directions, creating the dimensions of a rectangle of influence measured in node units. I also added some get-only variables to help calculate values from x and z for centering these influence areas in the future.

I then added an Influence array to the AGrid class, which contains all the logic for initializing the grid of nodes and setting their values at startup. After setting up the grid, it goes through this array of Influence objects and uses their center transform positions to determine which node each is centered on, finds all the nodes around it according to that influence object's x and z dimensions, and modifies their cost values with the object's influence value. Everything worked pretty nicely here.

As a final touch to help with visualization, I added a DrawGizmos method that draws yellow wire cubes around the influence objects to match their areas of influence. Since the dimensions are in node units but the wire cube wants true Unity units, I simply multiplied the influence's x and z node dimensions by the nodeDiameter (the real-world size of each node) to convert them into units that made sense.
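A condensed sketch of the Influence object and its gizmo, using assumed names (influence, xRange/zRange in node units, nodeDiameter):

    using UnityEngine;

    public class InfluenceSketch : MonoBehaviour
    {
        public int influence = 200;     // cost added to each node in range
        public int xRange = 4;          // influence rectangle width, in nodes
        public int zRange = 4;          // influence rectangle depth, in nodes
        public float nodeDiameter = 1f; // real-world size of one node

        // Get-only helpers for centering the rectangle on this object.
        public int HalfX => xRange / 2;
        public int HalfZ => zRange / 2;

        void OnDrawGizmos()
        {
            // Node units must be converted to world units for the wire cube.
            Gizmos.color = Color.yellow;
            Gizmos.DrawWireCube(transform.position,
                new Vector3(xRange * nodeDiameter, 1f, zRange * nodeDiameter));
        }
    }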

I have two sample images showing the influence objects in effect below. The small purple squares are the influence objects, and the yellow wireframe cubes around them show their estimated areas of influence. The first image shows the paths of the agents when the influence of all squares is set to 0 (no influence); the second shows the paths when the influence is set to 200 (making the nodes around them much more costly and less appealing to travel over).

Paths with Influence Set to 0


Paths with Influence Set to 200

Summary

The raycast and layer system for detecting the terrain and initializing the grid could use some work to perform more cleanly and safely, especially for future updates. The spawning seems to have no issues at all, so it should be good to work with and edit moving forward. The basic implementation of the influence objects has been promising so far; I will look into using Influence as a higher-level parent class or an interface moving forward, as this will be a large part of the project and there may be many objects that use this same core logic but want special twists on it (such as different area-of-influence shapes or various calculations for how influence should be applied).

Updating A* to Move Units in 3D for Elevations (Cont.)

March 12, 2020

Updating A*

Adding 3D Movement (Elevations)

A* Current System and Notes

The A* system can now move units along the y-axis according to the terrain, but it still conflicts with the advanced path smoothing functionality, since that only cares about points where the unit turns on the xz-plane. To keep the system simpler for now, I am rolling back some of the more advanced functionality from the Sebastian Lague tutorial I followed so I can incorporate y-axis movement into a more simplified pathing system.

I want to ignore the SimplifyPath method for now (which narrows the full list of nodes in a path down to only those at which the agent turns, and uses only these as waypoints); it also causes problems when combined with path smoothing, which is another reason to remove path smoothing temporarily.

A* Update

I wanted to keep the advanced path smoothing options available in the project while using the simplified version for now (with updates to keep the y-axis movement). To do this, I identified which scripts dealt specifically with path smoothing, which led me to this chain of scripts:

  • PathFindingHeap
  • Unit
  • PathRequestManager

Since those were the scripts that dealt with applying path smoothing, I looked through the versions of the project I had saved as stepping stones of the tutorial from before path smoothing was added (around episode 8). From those projects, I grabbed the respective versions of these scripts and copied them into the working project. To keep them as unique but separate options, I appended "Simple" to the names of these versions.

With these scripts, I could build a scene (named SimpleTest) for testing the y-axis movement with this simpler movement logic. While it is less aesthetically pleasing, it is in a way more accurate and representative of the core information the system is working with, so I wanted to get back to this state to have a better idea of what I was working with. The new scene has PathFindingHeapSimple and PathRequestManagerSimple on its A* gameobject, and each of the Seekers uses the UnitSimple script instead.

With all of these in place to remove path smoothing for now, I could also remove the SimplifyPath method from PathFindingHeap (and from PathFindingHeapSimple for this case). This ensures the agents travel to every individual position designated by each node along their paths, which lets me see exactly what data the agents are working with as they move and confirm they are properly receiving the right elevation data throughout the entirety of their paths.

With all of this done, I got the results I expected. The units immediately drop into their starting positions and move along the terrain properly, regardless of elevation. They move up and down inclined surfaces and properly navigate bumps and ramps in the terrain. They also show every single node point they are using for movement as a gizmo, which clearly shows their paths following the terrain. I have included a video link with a quick demonstration of the agents in motion.

Vimeo – A* Movement in 3D Edit

AI Agents Moving in 3D with Path Highlighted

Next Steps

This simplified system appears to work exactly how I need it to, and it may be the preferred approach moving forward since it will be used to represent and show theoretical data. It may be more practical to show the exact pathing information as given, as opposed to the skewed pathing created by the path smoothing operations.

The next step will be creating objects that can influence the cost of a number of nodes around them (as opposed to just the nodes they sit on directly, which is somewhat covered by the behavior of obstacles). This way, different objects can influence the likelihood of an agent moving toward or around them.

I am looking at starting with a simple area-of-effect approach where the grid detects the object on a certain node and then alters the cost of all the nodes around it to some degree (the simpler approach, which mimics the blur effect already present from the tutorial I followed; that blur softens the cost additions of the grass and road around where they meet). A future advancement would be to allow objects to influence the cost of nodes based on line of sight, which involves a much more variable set of nodes and is a bit more difficult to implement.

Updating A* to Move Units in 3D for Elevations

March 11, 2020

Updating A*

Adding 3D Movement (Elevations)

A* Current System

The current A* system works solely on a 2D plane (the x- and z-axes). It creates a grid on this plane and allows agents to move specifically along it, with no regard for height. It currently casts rays from above the grid nodes to detect the layer of terrain each node is on. Each node also casts a small sphere of its own size to detect collisions with obstacles, determining whether the node is traversable.

A* Update

Since the system already uses raycasts to detect the terrain type, I edited it to detect height information from the terrain as well. Using RaycastHit.point, the raycast returns the exact point it hits, so I can pass the y-axis information from there to the node. I do this along with the terrain detection and pass it into the node's world position data.

Since this update is specifically targeted at working with various elevations, I do not like using the collision spheres on the nodes to detect obstacles. Since raycasts are already incorporated, it made sense to have them detect obstacles as well. Because obstacles and terrain live on separate layers, the raycast first checks whether it detects an obstacle (unwalkable terrain); only if it does not find one does it check for walkable terrain and its type.

Finally, just to clean up, I added an extra variable to account for unit height when assigning elevation positions. This can be changed in the editor; it adds a bit of elevation to the exact value from the terrain so the unit's position as it moves along is slightly above the ground (not in or below it).
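A sketch of pulling each node's elevation from the terrain raycast with that small height offset; unitHeightOffset is an assumed name for the editor-tweakable variable mentioned above:

    using UnityEngine;

    public class NodeElevationSketch : MonoBehaviour
    {
        public LayerMask walkableMask;
        public float unitHeightOffset = 0.5f; // lift above the exact terrain point

        Vector3 GetNodeWorldPosition(Vector3 rayOrigin)
        {
            if (Physics.Raycast(rayOrigin, Vector3.down, out RaycastHit hit,
                                Mathf.Infinity, walkableMask))
            {
                // RaycastHit.point is the exact contact point; its y becomes
                // the node's elevation, raised slightly so units travel just
                // above the ground instead of inside it.
                return new Vector3(rayOrigin.x, hit.point.y + unitHeightOffset, rayOrigin.z);
            }
            return rayOrigin; // no terrain below this node
        }
    }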

Next Steps

These changes worked pretty well and got close to the desired effects: the units will move along terrain with altered elevations to some degree. The current testing was done with a further simplified waypoint system that, after grabbing all of the nodes, only takes into account the position data of points where the unit needs to turn. This causes weird issues, since the unit will not move up and down for elevation while moving in a straight line on the xz-plane. It only accounts for elevation if the specific waypoints land on varied elevation points (which is why it looks fairly correct on a simple slanted plane).

I will look into reverting the simplified system to see if the standard system, which visits every single node, works first. After I get that working in all 3 dimensions, I will adjust back to the simplified waypoint system.