December 10, 2019
A* Tutorial Series
Pt. 07 – Smooth Weights
A* Pathfinding (E07: smooth weights)
By: Sebastian Lague
Intro:
This tutorial covers smoothing the movement penalty values out over an area, as opposed to areas just being assigned one flat value or another. This will also be helpful for my own uses, since in most cases updating movement penalties should not only change the single exact node where an important action happened. While that node should most likely get the largest change to its weight, it makes sense to apply a lighter value change to the surrounding area as well, which fits right in with smoothing or blending of weight values.
Theory
Box Blur: weight smoothing algorithm using averages of the values within a kernel around a block in the grid
Kernel: a smaller grid sample within the overall grid; used here as the space to gather data from to create an average value
The smoothing theory they look at is called Box Blur. In this case, each element of the grid is replaced with the average value of the blocks around it. The smaller grid used for the averaging process is called a kernel. They initially show this off using a 3×3 kernel, which averages all 9 values in the kernel and places that average in another grid at the same relative location as the center of the kernel. This shows off the idea well, but they mention a more efficient way.
The more efficient way is to do a horizontal pass and a vertical pass separately. Representing the same overall 3×3 kernel as before, they start by doing the horizontal pass with a 1×3 sized kernel. This adds the 3 values in a row horizontally. This becomes more efficient because when they move to the next grid element, instead of searching for the next 3 elements, they simply keep tabs on the current sum and subtract the leftmost value (which is no longer in the kernel) and add the new rightmost value (which was just added to the kernel).
This is repeated over the entirety of the grid until every element has received a horizontal pass (in all cases, values near the edges of the grid are repeated if the kernel would go out of bounds). They then go through this new grid of summed horizontal values and do the same process vertically (so a 3×1 kernel). This gives the exact same resulting grid as the first method that averaged 9 values at once with the 3×3 kernel, but more efficiently.
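To make the running-sum trick concrete, here is a minimal sketch of the horizontal pass over a single row (plain C#, not the tutorial's code; the Math.Clamp calls are my stand-in for the edge-duplication described above):

```csharp
using System;

// One horizontal pass over a single row of values. Each output element holds
// the SUM of the kernel (the average is only taken later, after the vertical pass).
static int[] HorizontalPass(int[] row, int blurSize)
{
    int[] sums = new int[row.Length];

    // Prime the running sum for the first element, clamping indices so the
    // edge value is repeated wherever the kernel would go out of bounds.
    int sum = 0;
    for (int x = -blurSize; x <= blurSize; x++)
        sum += row[Math.Clamp(x, 0, row.Length - 1)];
    sums[0] = sum;

    // Slide the kernel to the right: subtract the value that just left it on
    // the left and add the value that just entered it on the right.
    for (int x = 1; x < row.Length; x++)
    {
        int removeIndex = Math.Clamp(x - blurSize - 1, 0, row.Length - 1);
        int addIndex = Math.Clamp(x + blurSize, 0, row.Length - 1);
        sum += row[addIndex] - row[removeIndex];
        sums[x] = sum;
    }
    return sums;
}
```

The vertical pass is the same idea run down each column of the horizontally-summed grid, which is why the two passes end up cheaper than re-summing a full 3×3 kernel at every node.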
AGrid
The approach here is pretty close to what I would have guessed. They create 2 new grids with dimensions equal to the original pathing grid, one for each of the blurring passes (horizontal and vertical). They do the horizontal pass first, which grabs values from the original grid and sums them in kernels whose size is determined by the designer. The vertical pass then uses the values from the horizontal pass grid to fill its own grid. These final values are then assigned as the movement penalty values of the corresponding original grid nodes.
The key things to note were:
- They wanted the kernel dimension to always be an odd value so that it has a clear central node, so the kernel size is computed from the input parameter blurSize as blurSize * 2 + 1, which mathematically guarantees an odd value.
- To replicate the idea of “duplicating values at the boundaries outside of the grid”, they used Mathf.Clamp when indexing the grid elements to sum, so any index that would go out of bounds is clamped to the edge instead (effectively sampling the edge value again).
- The running-sum core of the algorithm needs previous values to already exist, so there is some additional logic at the start of each pass just to make sure the initial sums are properly set before the sliding process begins. A rough sketch of the whole pass is included below.
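Putting those notes together, this is roughly how I picture the full blur pass. It assumes the method lives in the grid class with hypothetical gridSizeX / gridSizeY fields and a grid[x, y] array of nodes that each have a movementPenalty field; it is my reconstruction of the idea, not Lague's exact code:

```csharp
using UnityEngine;

// Sketch of the penalty blur: horizontal running-sum pass, vertical running-sum
// pass, then the averaged value is written back onto the node. Assumed fields:
// gridSizeX, gridSizeY, and Node[,] grid with an int movementPenalty.
void BlurPenaltyMap(int blurSize)
{
    int kernelSize = blurSize * 2 + 1;        // always odd, so the kernel has a clear centre
    int kernelExtents = (kernelSize - 1) / 2; // nodes between the centre and the kernel's edge

    int[,] penaltiesHorizontalPass = new int[gridSizeX, gridSizeY];
    int[,] penaltiesVerticalPass = new int[gridSizeX, gridSizeY];

    // Horizontal pass: one running sum per row, clamping indices at the grid edges.
    for (int y = 0; y < gridSizeY; y++)
    {
        // Prime the first element of the row by summing its kernel directly.
        for (int x = -kernelExtents; x <= kernelExtents; x++)
        {
            int sampleX = Mathf.Clamp(x, 0, gridSizeX - 1);
            penaltiesHorizontalPass[0, y] += grid[sampleX, y].movementPenalty;
        }
        // Slide right: drop the value leaving the kernel, add the one entering it.
        for (int x = 1; x < gridSizeX; x++)
        {
            int removeIndex = Mathf.Clamp(x - kernelExtents - 1, 0, gridSizeX - 1);
            int addIndex = Mathf.Clamp(x + kernelExtents, 0, gridSizeX - 1);
            penaltiesHorizontalPass[x, y] = penaltiesHorizontalPass[x - 1, y]
                - grid[removeIndex, y].movementPenalty
                + grid[addIndex, y].movementPenalty;
        }
    }

    // Vertical pass over the horizontal sums; divide by the kernel area only here,
    // then assign the blurred average back to the node.
    for (int x = 0; x < gridSizeX; x++)
    {
        for (int y = -kernelExtents; y <= kernelExtents; y++)
        {
            int sampleY = Mathf.Clamp(y, 0, gridSizeY - 1);
            penaltiesVerticalPass[x, 0] += penaltiesHorizontalPass[x, sampleY];
        }
        grid[x, 0].movementPenalty = Mathf.RoundToInt(
            (float)penaltiesVerticalPass[x, 0] / (kernelSize * kernelSize));

        for (int y = 1; y < gridSizeY; y++)
        {
            int removeIndex = Mathf.Clamp(y - kernelExtents - 1, 0, gridSizeY - 1);
            int addIndex = Mathf.Clamp(y + kernelExtents, 0, gridSizeY - 1);
            penaltiesVerticalPass[x, y] = penaltiesVerticalPass[x, y - 1]
                - penaltiesHorizontalPass[x, removeIndex]
                + penaltiesHorizontalPass[x, addIndex];
            grid[x, y].movementPenalty = Mathf.RoundToInt(
                (float)penaltiesVerticalPass[x, y] / (kernelSize * kernelSize));
        }
    }
}
```

Note that the sums are only divided by the kernel area at the very end, which keeps the intermediate passes as cheap integer additions and subtractions.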
Visualization
This is again something I always like to learn more about, and it was exactly what I wanted with this type of system: a way to truly see what is going on behind the scenes with all the pathfinding logic.
They did this in the way I would have approached it. As they set the penalty values, they have a simple check to track the lowest and highest penalty values in the whole grid. They then use these values as the bounds for a Color.Lerp between black and white in the visualization. This gives the nodes a grayscale shade representing their cost, with black for the highest values, white for the lowest, and shades of gray for everything in between.
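A minimal sketch of how that grayscale drawing could look, assuming the grid is already drawn with Gizmos and that penaltyMin / penaltyMax were tracked while the penalties were set (the field names and the Mathf.InverseLerp mapping are my own choices, not necessarily what the tutorial uses):

```csharp
using UnityEngine;

// Grayscale debug drawing of the grid: darker nodes mean higher movement penalties.
// Assumed fields: Node[,] grid, int penaltyMin / penaltyMax, float nodeDiameter,
// and nodes with movementPenalty and worldPosition members.
void OnDrawGizmos()
{
    if (grid == null)
        return;

    foreach (Node node in grid)
    {
        // Map this node's penalty into the 0..1 range defined by the tracked
        // min and max, then lerp from white (lowest cost) to black (highest cost).
        float t = Mathf.InverseLerp(penaltyMin, penaltyMax, node.movementPenalty);
        Gizmos.color = Color.Lerp(Color.white, Color.black, t);
        Gizmos.DrawCube(node.worldPosition, Vector3.one * nodeDiameter);
    }
}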
This did show an issue with the current code: the obstacles had a slight blur of lower penalties around them. This is because the code does not factor obstacle areas into the penalty values at all, so they currently sit at a penalty of 0, and that 0 bleeds into the surrounding walkable nodes through the blur. This is generally not ideal.
Summary
This tutorial was, again, a great step in the direction I am trying to take this A* pathfinding logic for my project. Since I want to update the penalty values in real time based on events that take place within the grid, it will make sense to disperse some of that value updating out from the single node the events occur on.