UnityLearn – Beginner Programming – Swords and Shovels: Character Controller and AI – Pt. 02 – NPC Controller

November 21, 2019

Beginner Programming: Unity Game Dev Courses

Unity Learn Course – Beginner Programming

NPC Controller

Creating NPC Patrol

The NPC has two main states: it will patrol within a set of waypoints or it will pursue the player. They create two main methods to control the NPC behavior, Patrol and Tick.

Both of these are called in the Awake method of NPCController using InvokeRepeating. This is done because you can set a time interval that must pass between invocations, so these methods effectively run in a "slow Update" fashion. Tick is called every 0.5s, while Patrol is called every patrolTime seconds, which defaults to 15s.
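A minimal sketch of how this "slow Update" setup might look (field names like waypoints, patrolTime, and index are assumptions based on these notes, not the tutorial's exact code):

```csharp
using UnityEngine;
using UnityEngine.AI;

public class NPCController : MonoBehaviour
{
    public Transform[] waypoints;   // patrol points (assumed field name)
    public float patrolTime = 15f;  // seconds between waypoint switches

    private NavMeshAgent agent;
    private int index;

    void Awake()
    {
        agent = GetComponent<NavMeshAgent>();

        // "Slow Update": Tick runs every 0.5s and Patrol every patrolTime
        // seconds, without needing coroutines or per-frame Update logic.
        InvokeRepeating("Tick", 0, 0.5f);
        InvokeRepeating("Patrol", 0, patrolTime);
    }

    void Patrol()
    {
        // Wrap back to the first waypoint after reaching the last one
        index = index == waypoints.Length - 1 ? 0 : index + 1;
    }

    void Tick()
    {
        agent.destination = waypoints[index].position;
    }
}
```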

The Patrol method used an inline ternary operator to set the value of index, which is something I am not used to seeing. The line was as follows:

index = index == waypoints.Length - 1 ? 0 : index + 1;

I believe this checks if index is equal to (waypoints.Length - 1); if that is true, it sets index to 0, and if it is false, it sets index to (index + 1) (i.e. index++). This is actually pretty basic waypoint logic, the syntax was just unusual.

This system already has some flaws. The InvokeRepeating calls start their timer countdowns immediately upon invocation. So even though the NPC takes time to travel between waypoints, the countdown to the next waypoint has already started as soon as it begins moving. This means travel time must be taken into account when setting this value; if it is too low, the NPC will start moving toward the next waypoint before it has even reached the current one.

Synchronizing Animation and Creating the Pursue Behavior

This tutorial starts similarly to the player character section: synchronizing the NPC's speed with its animation. They simply pass the NavMeshAgent component's speed value into the Animator controller's speed parameter to alter the animation as the character's speed changes.

To create the pursue behavior, they added extra logic to the Tick method in the NPCController class. They added an if conditional to check if the player was within the aggroRange of the NPC, and if so, it would add the player’s position as the NavMeshAgent destination value and increase its speed.
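A hedged sketch of what the extended Tick logic might look like (player, aggroRange, and the two speed values are assumed names and numbers, not confirmed from the tutorial):

```csharp
void Tick()
{
    // Default behavior: head toward the current waypoint at patrol speed
    agent.destination = waypoints[index].position;
    agent.speed = 2f;

    // Pursue: if the player is within aggro range, chase them instead,
    // at an increased speed
    if (player != null &&
        Vector3.Distance(player.position, transform.position) < aggroRange)
    {
        agent.destination = player.position;
        agent.speed = 4f;
    }
}
```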

SUMMARY

While the NPC logic was not very interesting, some of the programming syntax used was new to me. The use of InvokeRepeating, without any coroutines, was also a neat way to get a quick prototype system set up when you want something to run repeatedly but Update is overkill.

UnityLearn – Beginner Programming – Swords and Shovels: Character Controller and AI – Pt. 01 – Overview and Character Controller

November 21, 2019

Beginner Programming: Unity Game Dev Courses

Unity Learn Course – Beginner Programming

Overview and Character Controller

Navigation and Animation Review

This part of the tutorial just gets you familiar with the NavMesh setup they have and the animator they are using on the player character. Some of the items are a bit off since they are using an older Unity version (2017), like where the Navigation tab is located (it is now under Window -> AI).

The player character animator has a blend tree that holds a speed parameter. This parameter appears to determine how much to blend between an idle animation and a running animation, as well as how fast the animation plays.

Controlling Motion with C# and Unity Events

This tutorial portion started with adding a NavMesh Agent component to the Hero character to have it interact with the baked NavMesh in the scene.

The next step was to edit the MouseManager class to actually move the character according to its NavMesh Agent and the created NavMesh. This would require a reference to the NavMesh Agent component. They did not want to use a public field (as this creates a dependency that can lead to issues down the road), so they turned to making an event.

They needed to create this event, which they did as a new class within the MouseManager script file, but outside of the MouseManager class itself. This was done in the following way:

[System.Serializable]
public class EventVector3 : UnityEvent<Vector3> { }

The generic base class UnityEvent<Vector3> lets it send Vector3 data through an event.
They then created a public field of the type EventVector3 named OnClickEnvironment. This was then used to do the following:

if (Input.GetMouseButtonDown(0))
{
    OnClickEnvironment.Invoke(hit.point);
}

The public field created a box within the inspector similar to UI events in Unity. You could drag an object in as a reference, and select a component and method to call when that event happened. In this case they used the Hero object and selected the NavMeshAgent's destination property.

Synchronizing Animation with Motion

They start by simply referencing the NavMeshAgent speed and passing that value into the animator’s speed parameter. They then begin to alter the NavMeshAgent component’s properties to fine tune the movement to feel better to play with. They significantly increased the Speed, Angular Speed, and Acceleration, and turned off Auto Braking. This felt much better, but led to an issue where the player would run back and forth near their destination.
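The speed synchronization described above might look roughly like this (the parameter name "Speed" and the use of velocity.magnitude are assumptions; the tutorial's exact field names may differ):

```csharp
void Update()
{
    // Drive the blend tree's "Speed" parameter from the agent's actual
    // velocity, so the idle/run blend and the playback rate follow the
    // character's real movement.
    anim.SetFloat("Speed", agent.velocity.magnitude);
}
```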

The issue is caused by turning off Auto Braking and not having a Stopping Distance. I assume since the character has trouble “perfectly” reaching the value of its given destination, the Stopping Distance of 0 means it will keep trying to get there even though it cannot.

Setting the Stopping Distance to 0.5 fixed this issue. This allows the agent to stop moving once it is within 0.5 units of the given destination. There was still a strange issue where the character would move slowly around stairs, but they said they would cover this issue later.

Handling the Passageway

This portion deals with adding code to allow the character to traverse the special passageway section of the level. The passageways have unique colliders to inform the cursor that a different action should take place when they are clicked. These colliders also have direction, which will be used to determine which way to move the character through the hallway.

They then go back to the MouseManager to show where the cursor detects the unique collider (currently just to swap the mouse icon with a new icon). This is where they add extra logic: if door is true (a doorway has been clicked), they grab that object's transform and move a fixed distance (10 in this case) in that object's z-direction. This is why the collider's orientation is important.
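A sketch of that branch inside the MouseManager's click handling (the door flag and the hard-coded distance of 10 come from these notes; the other names are assumptions):

```csharp
if (Input.GetMouseButtonDown(0))
{
    if (door)
    {
        // Move a fixed 10 units along the doorway collider's local z-axis;
        // this is why the collider's orientation matters
        Transform doorway = hit.transform;
        OnClickEnvironment.Invoke(doorway.position + doorway.forward * 10f);
    }
    else
    {
        OnClickEnvironment.Invoke(hit.point);
    }
}
```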

This setup seems very flawed to me. It only works if every passageway is exactly the same length, since the travel distance is hard-coded. You can still click on areas between the passageway entrances, so the character just gets awkwardly stuck in the middle with the camera pressed up against a wall. There is also no input limitation while taking a passageway, so the player can click again mid-passage and alter their character's trajectory, which can be weird or buggy.

I think I would prefer a system that links the two passageway entrances so that, no matter where they are, when the player clicks one they enter it and come out of the other (using its transform with some offset). Player input should also be limited during the passage; there is no reason to do anything else within it, and allowing input only invites more errors.

Note:

The MouseManager uses a lot of if/else conditional statements, and from the previous lessons, I think it could possibly benefit from having a small state machine setup. While I think it is just on the edge of being ok in its current setup, the Update method for the MouseManager is already looking a bit bloated and any more complexity may start to cause issues.

SUMMARY

I think this tutorial was mostly focused on just getting you used to finding and using some different features of Unity, such as NavMesh and the Animator. Nothing here seemed to be that deep or well done, so it felt very introductory. Getting a refresher on NavMesh agents was definitely useful at least, and working with animation never hurts.

Sebastian Lague A* Tutorial Series – Algorithm Implementation – Pt. 03

November 20, 2019

A* Tutorial Series

Pt. 03 – Algorithm Implementation

A* Pathfinding (E03: algorithm implementation)

Link – Tutorial

By: Sebastian Lague


Intro:

Since these sections are very coding intensive, I set up a breakdown into the different classes that are worked on, and I try to discuss anything done within a class in that class's section here. As should be expected, they do not continuously work on a single section before moving to the next; they jump back and forth so that the references and fields make more sense for why they exist, so I try to mention that where needed.

Pathfinding

This class begins to implement the pseudocode logic for the actual A* algorithm. It starts with a method, FindPath, that takes in two Vector3 values, one for the starting position and one for the target position. It then uses our NodeFromWorldPoint method in AGrid to determine which nodes those positions are associated with so we can do all the work with our node system.

It then creates a List of nodes for the openSet and a HashSet of nodes for the closed set, as seen in the pseudocode. It is around here that they begin to update the Node class since it will need to hold more information.

The next part is the meat of the algorithm, where it searches through the entire openSet to determine which node to explore further (by using the logic of finding the one with the lowest fCost, and in the case of ties, that with the lowest hCost). Once found, it removes this node from the openSet and adds it to the closedSet. It is mentioned that this is very unoptimized, but it is one of the simplest ways of setting it up initially (they return to this for optimization in future tutorials).

Continuing to follow the pseudocode, they go through the list of neighbors for the currentNode and check to see if any are walkable and not already in the closedSet to determine which to further explore.

Here they create the distance calculating method that will serve as the foundation for finding the gCost and hCost. This method, named GetDistance, takes two nodes and returns the total distance between them in terms of the grid system. Just to reiterate, it returns an approximated and scaled integer distance value between two nodes. Orthogonal moves have a normalized distance of 1, while diagonal moves are sqrt(2), which is approximately 1.4. These values are then multiplied by 10 to give 10 and 14 respectively, for ease of use and readability.
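The distance logic described above can be sketched as follows, using the 10/14 convention (a sketch; the tutorial's exact implementation may differ slightly):

```csharp
int GetDistance(Node nodeA, Node nodeB)
{
    int dstX = Mathf.Abs(nodeA.gridX - nodeB.gridX);
    int dstY = Mathf.Abs(nodeA.gridY - nodeB.gridY);

    // Move diagonally (cost 14) along the shorter axis, then straight
    // (cost 10) for whatever distance remains on the longer axis.
    if (dstX > dstY)
        return 14 * dstY + 10 * (dstX - dstY);
    return 14 * dstX + 10 * (dstY - dstX);
}
```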

If it is determined that the neighbor node should be evaluated, it calculates the neighbor's gCost from the current node by adding the distance between them to the currentNode's gCost. It then checks if this is lower than the neighbor's current gCost (meaning a cheaper route to the same node was found) or if the neighbor is not in the openSet (meaning it has never been evaluated, so there is no gCost to compare). If either criterion is met, it sets the neighbor's gCost to this determined value and calculates the hCost using the new GetDistance method between the neighbor node and the targetNode.

It finally sets that neighbor node’s parent as the currentNode, and checks if the neighbor was already in the openSet. If not, it adds this node to the openSet.

The RetracePath method was created, which determines the path of nodes to follow once the target has finally been reached. Starting with the endNode (target position), it cycles through each node’s parent by continually changing the checked node to the current node’s parent until it gets back to the startNode, and adds them to a list named path. Finally, it reverses the list so they are in the proper order matching the actual object’s traversal path (since doing it this way effectively gives you the list of nodes backwards, starting with the end).
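A sketch of RetracePath following that description (assumes the parent links set up during the search):

```csharp
List<Node> RetracePath(Node startNode, Node endNode)
{
    List<Node> path = new List<Node>();
    Node currentNode = endNode;

    // Walk the parent links backwards from the end node to the start node
    while (currentNode != startNode)
    {
        path.Add(currentNode);
        currentNode = currentNode.parent;
    }

    // The list was built end-to-start, so reverse it into traversal order
    path.Reverse();
    return path;
}
```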

Node

They finally add the gCost, hCost, and fCost as public ints here. The fCost is actually just a read-only property that returns gCost + hCost. This is a nice setup that provides some extra encapsulation, as fCost will never be anything else, so it may as well just return that sum whenever it is accessed.

Later they also add ints gridX and gridY, which are references to their indices in the overall grid array. This helps locate them, as well as their neighbors, more easily in later code.

A field is created of the type Node named parent to hold a reference to a parent node. This serves as the link between nodes to give a path to follow once the final destination has been reached. As the lowest fCost nodes are found, they will create a chain of parent nodes which can be followed. This is done with the RetracePath method in Pathfinding.
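Putting those pieces together, the Node class might look roughly like this (a sketch based on the notes above):

```csharp
using UnityEngine;

public class Node
{
    public bool walkable;         // does this node contain an obstacle?
    public Vector3 worldPosition; // real world position in Unity units
    public int gridX;             // indices into the overall grid array
    public int gridY;
    public int gCost;             // distance from the start node
    public int hCost;             // heuristic distance to the target node
    public Node parent;           // link followed by RetracePath

    // fCost is always just the sum, so expose it as a read-only property
    public int fCost
    {
        get { return gCost + hCost; }
    }
}
```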

AGrid

They added the GetNeighbors method here. It takes in a node, then returns a list of nodes that are its neighbors. It effectively checks the 8 potential areas around the node with simple for loops spanning -/+ 1 in the x and y axes relative to the given node. It skips the node itself by ignoring the check when x and y are both 0. It also makes sure any potential locations to check exist within the grid borders (so it does not look for nodes outside of the grid for nodes on the edges for example).
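The described neighbor search might look like this (a sketch; grid, gridSizeX, and gridSizeY are assumed AGrid fields):

```csharp
public List<Node> GetNeighbours(Node node)
{
    List<Node> neighbours = new List<Node>();

    // Check the 8 cells surrounding the node
    for (int x = -1; x <= 1; x++)
    {
        for (int y = -1; y <= 1; y++)
        {
            if (x == 0 && y == 0)
                continue; // skip the node itself

            int checkX = node.gridX + x;
            int checkY = node.gridY + y;

            // Only add cells that actually exist within the grid borders
            if (checkX >= 0 && checkX < gridSizeX &&
                checkY >= 0 && checkY < gridSizeY)
            {
                neighbours.Add(grid[checkX, checkY]);
            }
        }
    }
    return neighbours;
}
```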

Sebastian Lague A* Tutorial Series – Node Grid – Pt. 02

November 20, 2019

A* Tutorial Series

Pt. 02 – Node Grid

A* Pathfinding (E02: node grid)

Link – Tutorial

By: Sebastian Lague


Intro:

This started by creating the overall Node class to contain valuable information for each individual node, and the Grid class that would be dealing with the overall grid of all the nodes (I rename this to AGrid to avoid errors in Unity).

Node

For now, this simply holds a bool walkable, which represents whether this node contains an obstacle or not, and a Vector3 worldPosition, which contains data on its Unity real world position.

AGrid

This class has a couple parameters that influence the coverage and resolution of the overall A* system. The gridWorldSize represents the two dimensions covered by the entire grid (so 30, 30 will cover a 30 by 30 area in Unity units). The nodeRadius is half the dimension of a node square, which will be used to fill the entire grid. The lower the nodeRadius, the more nodes (so higher resolution, but more computing cost).

The initial setup is a lot of work that simply breaks the overall area covered by the grid down into integer-indexed chunks, used as indices into a 2D array containing all the nodes. The NodeFromWorldPoint method is also created, a nice method that takes in a Vector3 value and returns the node encompassing that point. I like the extra step of clamping the values here to reduce possible errors in the future.

Unity Feature Notes:

Clamp Example:

// Clamped to prevent values from going out of bounds (will never be less than 0 or greater than 1)
percentX = Mathf.Clamp01(percentX);
percentY = Mathf.Clamp01(percentY);

Mathf.Clamp01 clamps a value within the bounds of 0 and 1. This percent value should never be outside of those for the purposes of the grid anyway (they help determine basically what percentage away a node is on the x and y axes separately relative to the bottom left node). So in error cases, this will simply give a node that is at least on the border of the grid.
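Putting the clamping in context, NodeFromWorldPoint might look roughly like this (a sketch; it assumes the grid is centered on this object at the origin and uses the field names from these notes):

```csharp
public Node NodeFromWorldPoint(Vector3 worldPosition)
{
    // How far along each axis the point sits, as a 0..1 fraction,
    // measured relative to the bottom-left corner of the grid
    float percentX = (worldPosition.x + gridWorldSize.x / 2) / gridWorldSize.x;
    float percentY = (worldPosition.z + gridWorldSize.y / 2) / gridWorldSize.y;

    // Clamp so out-of-bounds points still map to a border node
    percentX = Mathf.Clamp01(percentX);
    percentY = Mathf.Clamp01(percentY);

    int x = Mathf.RoundToInt((gridSizeX - 1) * percentX);
    int y = Mathf.RoundToInt((gridSizeY - 1) * percentY);
    return grid[x, y];
}
```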

The CreateGrid method in the AGrid script uses Physics.CheckSphere to determine whether a node is traversable. This creates a collision sphere of a given radius and returns true if it overlaps any colliders.

Gizmos:

They use Gizmos.DrawWireCube, which lets you draw a wire cube outline with defined dimensions. This is very nifty for visually conveying the general area covered by something in the editor.

Warning:

Unity had an issue with naming the one script “Grid”, as they already have something named Grid built into the system. It gave me a warning that I would not be able to use components with this object. Just to make sure I did not run into any future issues, I renamed it “AGrid”.

Sebastian Lague A* Tutorial Series – Algorithm Explanation – Pt. 01

November 20, 2019

A* Tutorial Series

Pt. 01 – Algorithm Explanation

A* Pathfinding (E01: algorithm explanation)

Link – Tutorial

By: Sebastian Lague


Notes:

G cost = distance from the starting node
H cost (heuristic) = distance from the end node
F cost = G cost + H cost

A* starts by creating a grid of an area. There is a starting node (initial position) and an end node (destination). Generally, you can move orthogonally (which is normalized as a distance of 1) or diagonally (which would then be a value of sqrt of 2, which is approx. 1.4). Just to make it look nicer and easier to read, it is standard to multiply these distances by 10 so that moving orthogonally has a value of 10, and diagonally has a value of 14.

The A* algorithm starts by generating the G cost and H cost of all 8 squares around the starting point (in a 2D example for ease of understanding). These are then used to calculate the F cost for each square. It starts by searching for the lowest F cost and then expanding the calculations to every square in contact with it (orthogonally and diagonally). Recalculations may be necessary if a certain path finds a way to reach the same square with a lower F cost. If multiple squares have the same F cost, it prioritizes the one with the lowest H cost (closest to the end node). And if there is still a tie, it basically can just pick one at random.

It is worth reiterating that the costs of a square can be updated during a single pathfinding event. This, however, only occurs if the resulting F cost would be lower than what is already recorded in that square. This is actually very important, as the search can lead to strange paths that give certain squares higher F costs than they should have when there is a much more direct way to reach the same square from the starting node.

Coding Approach

Pseudocode (directly from tutorial):

OPEN   //the set of nodes to be evaluated
CLOSED //the set of nodes already evaluated
add the start node to OPEN

loop
    current = node in OPEN with the lowest f_cost
    remove current from OPEN
    add current to CLOSED

    if current is the target node //path has been found
        return

    foreach neighbour of the current node
        if neighbour is not traversable OR neighbour is in CLOSED
            skip to the next neighbour

        if new path to neighbour is shorter OR neighbour is not in OPEN
            set f_cost of neighbour
            set parent of neighbour to current
            if neighbour is not in OPEN
                add neighbour to OPEN

There are two lists of nodes: OPEN and CLOSED. The OPEN list are those nodes selected to be evaluated, and the CLOSED list are nodes that have already been evaluated. It starts by finding the node in the OPEN list with the lowest F cost. They then move this node from the OPEN list to the CLOSED list.

If that current node is the target node, it can be assumed the path has been determined so it can end right there. Otherwise, it checks each neighbour of the current node. If that neighbour is not traversable (an obstacle) or it is in the CLOSED list, it just skips to check the next neighbour.

Once it finds a neighbour to check, it checks whether the neighbour is either not in the OPEN list (meaning it is a completely unchecked node, since we just verified it is not in the CLOSED list either) or reachable by a new, shorter path (determined by recalculating that neighbour's F cost, since it could now be different). If either condition is met, it sets the calculated F cost as the neighbour's actual F cost (since it is either lower or has never been calculated) and sets the current node as the parent of this neighbour node. Finally, if the neighbour was not in the OPEN list, it is added to the OPEN list.

Setting the current node as the parent of the neighbour in the last part of the pseudocode is helpful for keeping track of the full path. This gives some indication of where a node “came from”, so that when you reach the end you have some reference of which nodes to traverse.

Notes on Research Paper: Automatic Game Progression Design through Analysis of Solution Features by Butler et al.

November 19, 2019

Thesis Research

Notes on Resources

TITLE:

Automatic Game Progression Design through Analysis of Solution Features

AUTHORS:

E. Butler, E. Andersen, A. M. Smith, S. Gulwani, and Z. Popović

IEEE Citation – Zotero

[1] E. Butler, E. Andersen, A. M. Smith, S. Gulwani, and Z. Popović, “Automatic Game Progression Design through Analysis of Solution Features,” 2015, pp. 2407–2416.
ABSTRACT
– use intelligent tutoring systems and progression strategies to help with mastery of base concepts 
as well as combinations of these concepts
– System input: model of what the player needs to do to complete each level
– Expressed as either:
– Imperative procedure for producing solutions
– Representation of features common to all solutions
– System output: progression of levels that can be adjusted by changing high level parameters
INTRODUCTION
– Level Progression: “how game introduces new concepts and grows in complexity as the player progresses”
– link between level progressions and player engagement
– “Game designer Daniel Cook claims that many players derive fun from “the act of mastering 
knowledge, skills and tools,” and designs games by considering the sequence of skills that the 
player masters throughout the game [11].”
– *** Why we want to have system produce similar but varied versions of the same type 
of experience
– “Intelligent tutoring systems (ITS) [29, 22] have advanced the
teaching potential of computers through techniques that track
the knowledge of students and select the most appropriate
practice problems [8, 12].”
– “We aim to enable a different method
of game design that shifts away from manual design of levels
and level progressions, towards modeling the solution space
and tweaking high-level parameters that control pacing and
ordering of concepts”
– *** Possible major goal of my research
– “Andersen et al. [1] proposed a theory to automatically
estimate the difficulty of procedural problems by analyzing
features of how these problems are solved”
– Procedural Problems: those that can be solved by following a well-known solution procedure
– i.e. Integer division using long division
– Procedural Paths: code paths a solver would follow when executing procedure for a particular 
problem
– Use this as a measure of difficulty
– “This is mainly because Andersen’s
framework does not account for pacing. In contrast,
our system allows a designer to control the rate of increase
in complexity, the length of time spent reinforcing concepts
before introducing new ones, and the frequency and order
in which unrelated concepts are combined together to construct
composite problems”
– *** Another goal/direction for my research
RELATED WORK
– Intelligent Tutoring Systems have “several models for capturing student knowledge and selecting 
problems”
– *** Possible future work research
– “Generated levels are not useful in isolation; they must be sequenced
into some design-relevant order, called a progression”
APPLICATION
– Game called Refraction
– split lasers into number fractions that need to satisfy certain values at end
SYSTEM OVERVIEW
– Level: “any completeable chunk of content”
– Level Progression: “sequence of levels that create an entire game”
– Solution Features: “properties of the solution to a level”
– “need to be able to extract features from the levels that can
be used to create an intelligent ordering” for progression
EXTRACTING SOLUTION FEATURES
– Example of following procedural traces and n-grams within the exercise of doing hand subtraction
– Certain steps have a letter output that is recorded to see if a certain type of step 
was used and how often (i.e. Was borrowing necessary for subtraction?)
– n-grams: n-length substrings of the trace
– Example: Trace = ABABCABAB
– 1-grams: {A, B, C}
– 2-grams: {AB, BA, BC, CA}
– 3-grams: {ABA, BAB, ABC, BCA, CAB}
– 1-grams are fundamental concepts
– Compare complexity by comparing n-grams
– They used these general concepts to build their own similar, but not identical, system
– their game didn’t fit this model perfectly well, so they broke their game down into 
graph objects that they thought were more representative
– they had fundamental graph objects (similar to 1-grams), that could then build into 
more complex graph objects by combining these defined fundamental graph objects
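The n-gram extraction and the strict-subset comparison described in this section could be sketched as follows (hypothetical helper names, not from the paper):

```csharp
using System.Collections.Generic;

static class NGramUtil
{
    // Distinct n-grams of a trace string, e.g. NGrams("ABABCABAB", 2)
    // yields { "AB", "BA", "BC", "CA" }
    public static HashSet<string> NGrams(string trace, int n)
    {
        var grams = new HashSet<string>();
        for (int i = 0; i + n <= trace.Length; i++)
            grams.Add(trace.Substring(i, n));
        return grams;
    }

    // Level 1 is conceptually simpler than level 2 if its n-gram set is a
    // strict subset of level 2's n-gram set (for some fixed n)
    public static bool IsSimpler(HashSet<string> gramsL1, HashSet<string> gramsL2)
    {
        return gramsL2.IsProperSupersetOf(gramsL1);
    }
}
```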
AUTOMATICALLY GENERATING LEVELS
– “We applied the level generation process described by Smith et
al. [30] to generate a large and diverse database of puzzles”
– *** Look into this for our system?
CREATING A PROGRESSION
– n-grams = graphlets (their specific type of n-gram)
– “To summarize, given two levels L1 and L2, L1 is considered conceptually simpler if, for
some positive integer n, the set of n-grams of L1 is a strict subset of the set of n-grams of L2”
– if a level contains the steps required for another level, it is more complex (has the same and 
more basically)
– Because of several issues, their approach was to create a rather large library of generated levels 
to choose from, and select a few that represent an effective progression
– model tracks whether player has completed a problem containing each component
– also track n-grams of those components (up to very small n as values get exponentially huge)
– hard to know what to do on failure
– General Framework:
– set of problems
– domain of concepts
– player model
– These three things are what are used to generate sequence of problems for the player
– Choosing the Next Problem
– given current model state and set of problems, what is appropriate next problem?
– dynamic cost model
– cost of problem, p, is weighted sum of the n-grams in the trace of p
– weight has 2 components:
– one based on what model knows about player
– one designer-specified weight
– Player Model Cost:
“First, at each point in the progression, for a given n-gram
x, the player model assigns a cost k(x). This cost should
be high for unencountered n-grams and low for ones already
mastered, which will ensure more complex problems have a
higher cost than simpler ones with respect to the player’s history”
– Designer Added Cost:
“as expert designers, we know
(possibly from data gathered during user tests) that particular
concepts are more challenging than others or should otherwise
appear later in the progression”
– “Thus, given a library of problems P, choosing the next problem
pnext consists of finding the problem with the closest cost
to some target value T”
– “In order for this sequencing policy to be effective, it requires
a large library of levels from which to choose. The levels
should be diverse, covering the space of all interesting solution
features. Furthermore, in order to enable effective control
of pacing, this space should be dense enough that the progression
can advance without large conceptual jumps”
– *** Supports need for my thesis work

UnityLearn – Beginner Programming – Finite State Machine – Pt. 02 – Finite State Machines

November 18, 2019

Beginner Programming: Unity Game Dev Courses

Unity Learn Course – Beginner Programming

Finite State Machines

Building the Machine

This part of the tutorial has a lot more hands on parts, so I skip some of the sections for note purposes when there is not much substance to them other than following inputs.

A key property of finite state machines is that an object can only ever be in exactly one state at a time. This helps ensure each state is completely self-contained.

Elements of a Finite State Machine:
Context: maintains an instance of a concrete state as the current state
Abstract State: defines an interface which encapsulates behaviors common to all concrete states
Concrete State: implements behaviors specific to a particular state of context

To get started in the tutorial, they created a public abstract class PlayerBaseState, which will be the abstract state for this example. PlayerController_FSM is the context. They note that while all the abstract state methods take the PlayerController_FSM (the context) in as a parameter in this case, that does not necessarily have to be true for the general FSM pattern.

Concrete States

It is noted that the context in an FSM needs to hold a reference to a concrete state as the current state. This is done in the example by creating a variable of the abstract state type, which is PlayerBaseState in this case. They then create a method called TransitionToState which takes a PlayerBaseState as a parameter. It sets currentState to that parameter state and then calls the new state's EnterState method (all states have this method, as it is dictated by the abstract class they all implement). This determines what actions should be done immediately upon entering the new state.

Example:

public void TransitionToState(PlayerBaseState state)
{
    currentState = state;
    currentState.EnterState(this);
}

The tutorial also shows a way to take control of the context’s general Unity methods and pass the work on to the concrete states instead. This example did this with Update and OnCollisionEnter. The abstract state, and in turn, all of the concrete states, have their own Update and OnCollisionEnter method. The context, PlayerController_FSM, then simply calls currentState.Update(this) in its Update method, and currentState.OnCollisionEnter(this) in its OnCollisionEnter method, so that the current concrete state’s logic for these methods are used without flooding the context itself with any more code.
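That delegation pattern might look roughly like this in the context class (a sketch; only the forwarding calls are shown):

```csharp
using UnityEngine;

public class PlayerController_FSM : MonoBehaviour
{
    private PlayerBaseState currentState;

    // Unity callbacks just forward to the current concrete state, so all
    // state-specific logic lives in the state classes instead of here
    void Update()
    {
        currentState.Update(this);
    }

    void OnCollisionEnter(Collision collision)
    {
        currentState.OnCollisionEnter(this);
    }
}
```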

Since it is necessary that your context has some initial state, they do this by simply calling the TransitionToState method within the Start method and entering the IdleState. IdleState is the initial state for this case.

Beginning the Implementation

Important benefits seen using this system:
While working on the concrete classes themselves, we never needed to go back to the PlayerController_FSM class (the context) to modify any code there. The entire behvior is handled within the concrete states and is abstracted from the character controller (the context). Setting expressions was much easier as no checks are needed and this can just be set in the EnterState method of each concrete state.

It is already clear that this method removes a lot of boolean checks from the overall code, and helps organize the code by ensuring any logic about a state is contained within the class for that state itself (with less bleeding into the code of other states).

Continuing the Implementation

It is worth noting that PlayerController_FSM holds a reference to every concrete state except the spinning state. This was done because they have the jumping state create a new spinning state on each transition to it. They apparently do this so that the local rotation field within the spinning state is reset to 0 each time it is entered, but there seem to be less wasteful ways to accomplish this (such as resetting it to 0 when exiting the state). I am also not sure if this is intended behavior, but with this setup the spin also immediately cancels upon contacting the ground (resetting the player rotation to 0), whereas in the previous setup the spin completed even if the player contacted the ground.

Module Conclusion

Benefits of FSM:

  • More modular
  • Easier to read and maintain
  • Less difficult to debug
  • More extensible

Cons of FSM:

  • Takes time to set up initially
  • More moving parts
  • Potentially less performant

Something very notable with this approach: it seems much harder to break than the naive implementation. If I spammed key presses (like pressing jump and duck a lot) with the naive approach, I could sometimes break the system and leave the player stuck in the duck position, or ducking while jumping. I have not been able to break the full FSM setup at all, which makes sense since transition behaviors exist solely within the states themselves, so these inputs cannot be jumbled in any way.

SUMMARY

State machine systems appear far easier to use and build on than the “naive” approach of basic boolean behaviors (with lots of if statements and boolean checks). Not only am I excited about how much more scalable this approach appears, it also just worked better and more cleanly when it was all put together.

The other version had small bugs that would pop up if you spammed all the different action keys (such as getting stuck ducking, or being ducked in a jump), which were possible because key presses could get recorded before reaching the bools or if statements that should have ruled them out as proper options. These very separated states make that type of error impossible, as each is only concerned with a single state at a time.

This type of system just seems much cleaner, more organized, and less error prone than what I have done before and I am very excited to try and build a system like this for my own project (for both players and enemy AI).

UnityLearn – Beginner Programming – Finite State Machine – Pt. 01 – Managing State

November 15, 2019

Beginner Programming: Unity Game Dev Courses

Unity Learn Course – Beginner Programming

Managing State

Project Overview

This part of the tutorial is much more hands-on, so I skip some sections in these notes when there is not much substance to them beyond following inputs.

The basics covered in this section are:
  • What is state and how to manage it
  • The Finite State Machine pattern
  • Building your own finite state machine

Introduction

State: condition of something variable
State Examples: Game state, Player state, NPC State

Finite State Machine: abstract machine that can be in exactly one of a finite number of states at any given time

Parts of a Finite State Machine:
  • List of possible states
  • Conditions for transitioning between those states
  • The state it is in when initialized (initial state)
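The three parts listed above can be captured in a bare-bones, table-driven sketch (Python, with illustrative state names that are not from the course):

```python
class FSM:
    """Minimal finite state machine: states, transition rules, initial state."""
    def __init__(self, states, transitions, initial):
        self.states = states            # list of possible states
        self.transitions = transitions  # (state, trigger) -> next state
        self.state = initial            # the state it starts in

    def handle(self, trigger):
        key = (self.state, trigger)
        if key in self.transitions:     # triggers with no rule are ignored
            self.state = self.transitions[key]
        return self.state

# Example: a two-state machine for a jumping character.
fsm = FSM(
    states=["Idle", "Jumping"],
    transitions={("Idle", "jump"): "Jumping", ("Jumping", "land"): "Idle"},
    initial="Idle",
)
```

The machine is always in exactly one state, and the only way it changes is through an explicit transition rule.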

Naive Approach to Managing State

This naive approach focuses on boolean states and if statements. It uses a lot of if and else if statements in the Update method to determine what state the player is in and if/when/how they can switch to another state. Even with two states this becomes tedious and somewhat difficult to read. This example exists just to emphasize the value of proper finite state machines.
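A rough sketch of what this naive approach looks like (in Python rather than the course’s C#; the flags and inputs are illustrative): boolean fields plus interdependent checks in a per-frame update, which already tangle with only two states.

```python
class NaivePlayer:
    """Naive state management: one boolean per state, checked everywhere."""
    def __init__(self):
        self.is_jumping = False
        self.is_ducking = False

    def update(self, jump_pressed, duck_pressed):
        # Every branch has to know about every other state's flag.
        if jump_pressed and not self.is_jumping and not self.is_ducking:
            self.is_jumping = True
        elif duck_pressed and not self.is_jumping and not self.is_ducking:
            self.is_ducking = True
        elif not duck_pressed and self.is_ducking:
            self.is_ducking = False
```

Adding a third state means revisiting every existing branch to add yet another flag check, which is exactly the scaling problem the FSM pattern addresses.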

Actions, Triggers, & Conditions

Break each behavior down into a set of: actions, triggers, and conditions.
Example for Arthur jumping:

  • Actions: Arthur jumps; jumping expression
  • Triggers: Spacebar is pressed
  • Conditions: Arthur is not jumping
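The action/trigger/condition breakdown above can be sketched as a tiny guard function (Python; the parameter names are hypothetical stand-ins for the course’s inputs):

```python
def try_jump(spacebar_pressed, arthur_is_jumping):
    """Trigger: spacebar pressed. Condition: not already jumping.
    Action: Arthur jumps (and would play the jumping expression)."""
    if spacebar_pressed and not arthur_is_jumping:
        return "jump"   # action fires
    return None         # trigger ignored while the condition fails
```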

Continuing with the naive state management approach, we see that every time we add a new state, it makes the snippets for all the other states more complex and harder to follow. It is clear that this will become unmanageable with even just a few states.

Module Overview

The biggest issue with the naive approach is the interdependent logic of the various states. Each state becomes exponentially harder to work with as more states are added, so its scalability is very limited. This does not even come with a benefit to readability, as the code also becomes difficult to read quickly.

SUMMARY

Using the naive approach (boolean fields and if/else statements) to manage state is really only usable for extremely simple cases. As soon as you reach 3 or 4 states with even small amounts of logic to manage them, this approach becomes very awkward and unwieldy. Finite State Machines should hopefully open up a better way to manage more states with better scalability, and allow for more complexity with better readability.

Programming A* in Unity

November 14, 2019

A* Programming

Unity

A* Pathfinding (E01: algorithm explanation)

Tutorial #1 – Link

By: Sebastian Lague


Unity – A Star Pathfinding Tutorial

Tutorial #2 – Link

By: Daniel


Unity3D – How to Code Astar A*

Tutorial #3 – Link

By: Coding With Unity


I have used A* before, but I would like to learn how to set up my own system with it so that I can make my own alterations. I would like to explore some of the AI methods and techniques I have discovered in my AI class and use them to alter a foundational A* pathfinding system, coming up with some interesting ways to influence pathfinding in games. I would eventually like to have the AI “learn” from the player’s patterns in some way to influence their overall “A* type” pathfinding, finding different ways to approach or avoid the player.

Maya Tutorials – Basics and Fish

November 11, 2019

Maya Modeling

Basics and Fish Tutorials

Maya Fish Tutorial Part 1 Modeling

Tutorial #1 – Link

By: Alberto Alvarez


Cartoon animals). Fish 3d modeling in Autodesk Maya 2016

Tutorial #2 – Link

By: Cartoon with MK


3D Shark Modeling Tutorial – Autodesk Maya 2018

Tutorial #3 – Link

By: Vedant VFX


I may explore transitioning one of the game projects I have worked on from a 2D game to a 3D game. It is mostly sea themed at this point, so it would help to be able to put together some basic fish-shaped models; I gathered a variety of sources here just to get started. These videos give me a detailed tutorial on a general fish, plus a quick look at how people have made sharks and more cartoon-styled fish.