Thesis Current Resources and Citations

October 31, 2019

Academic Resources

Current Citations

I just wanted to compile a list of all the sources I have gathered that may be useful for my thesis. This list will be modified over time as I determine which topics I will not get to in my thesis and which topics need more support, but it is a pretty good foundation for now.

  1. [1]J. L. Anderson and M. Barnett, “Learning Physics with Digital Game Simulations in Middle School Science,” Journal of Science Education and Technology, vol. 22, no. 6, pp. 914–926, Dec. 2013.
  2. [2]M.-V. Aponte, G. Levieux, and S. Natkin, “Measuring the level of difficulty in single player video games,” Entertainment Computing, vol. 2, no. 4, pp. 205–213, Jan. 2011.
  3. [3]S. Arnab et al., “Mapping learning and game mechanics for serious games analysis: Mapping learning and game mechanics,” British Journal of Educational Technology, vol. 46, no. 2, pp. 391–411, Mar. 2015.
  4. [4]E. Butler, E. Andersen, A. M. Smith, S. Gulwani, and Z. Popović, “Automatic Game Progression Design through Analysis of Solution Features,” 2015, pp. 2407–2416.
  5. [5]M. J. Callaghan, K. McCusker, J. L. Losada, J. Harkin, and S. Wilson, “Using Game-Based Learning in Virtual Worlds to Teach Electronic and Electrical Engineering,” IEEE Transactions on Industrial Informatics, vol. 9, no. 1, pp. 575–584, Feb. 2013.
  6. [6]M. Callaghan, M. Savin-Baden, N. McShane, and A. G. Eguiluz, “Mapping Learning and Game Mechanics for Serious Games Analysis in Engineering Education,” IEEE Transactions on Emerging Topics in Computing, vol. 5, no. 1, pp. 77–83, Jan. 2017.
  7. [7]D. B. Clark, B. C. Nelson, H.-Y. Chang, M. Martinez-Garza, K. Slack, and C. M. D’Angelo, “Exploring Newtonian mechanics in a conceptually-integrated digital game: Comparison of learning and affective outcomes for students in Taiwan and the United States,” Computers & Education, vol. 57, no. 3, pp. 2178–2195, Nov. 2011.
  8. [8]M. M. Cruz-Cunha, Ed., Handbook of Research on Serious Games as Educational, Business and Research Tools. IGI Global, 2012.
  9. [9]A. A. Deshpande and S. H. Huang, “Simulation games in engineering education: A state-of-the-art review,” Computer Applications in Engineering Education, vol. 19, no. 3, pp. 399–410, Sep. 2011.
  10. [10]M. D. Dickey, “Game design narrative for learning: Appropriating adventure game design narrative devices and techniques for the design of interactive learning environments,” Educational Technology Research and Development, vol. 54, no. 3, pp. 245–263, 2006.
  11. [11]J. Dormans and S. Bakkes, “Generating Missions and Spaces for Adaptable Play Experiences,” IEEE Transactions on Computational Intelligence and AI in Games, vol. 3, no. 3, pp. 216–228, Sep. 2011.
  12. [12]M. Ebner and A. Holzinger, “Successful implementation of user-centered game based learning in higher education: An example from civil engineering,” Computers & Education, vol. 49, no. 3, pp. 873–890, Nov. 2007.
  13. [13]M. Eraut, “Non-formal learning and tacit knowledge in professional work,” British Journal of Educational Psychology, vol. 70, no. 1, pp. 113–136, Mar. 2000.
  14. [14]D. Farrell and D. Moffat, “Cognitive Walkthrough for Learning Through Game Mechanics,” in European Conference on Games Based Learning, 2013, p. 163.
  15. [15]H. Fernandez, K. Mikami, and K. Kondo, “Adaptable game experience through procedural content generation and brain computer interface,” 2016, pp. 1–2.
  16. [16]A. Foster, “Games and motivation to learn science: Personal identity, applicability, relevance and meaningfulness,” Journal of Interactive Learning Research, vol. 19, no. 4, p. 597, 2008.
  17. [17]C. Franzwa, Y. Tang, A. Johnson, and T. Bielefeldt, “Balancing Fun and Learning in a Serious Game Design,” International Journal of Game-Based Learning, vol. 4, no. 4, pp. 37–57, Oct. 2014.
  18. [18]J. Fraser, M. Katchabaw, and R. E. Mercer, “A methodological approach to identifying and quantifying video game difficulty factors,” Entertainment Computing, vol. 5, no. 4, pp. 441–449, Dec. 2014.
  19. [19]J. P. Gee, “What video games have to teach us about learning and literacy,” Computers in Entertainment, vol. 1, no. 1, p. 20, Oct. 2003.
  20. [20]J. P. Gee, “Good video games and good learning,” in Phi Kappa Phi Forum, 2005, vol. 85, p. 33.
  21. [21]B. Gregorcic and M. Bodin, “Algodoo: A Tool for Encouraging Creativity in Physics Teaching and Learning,” The Physics Teacher, vol. 55, no. 1, pp. 25–28, Jan. 2017.
  22. [22]R. H. Mulder, “Exploring feedback incidents, their characteristics and the informal learning activities that emanate from them,” European Journal of Training and Development, vol. 37, no. 1, pp. 49–71, Jan. 2013.
  23. [23]T. Hainey, T. M. Connolly, E. A. Boyle, A. Wilson, and A. Razak, “A systematic literature review of games-based learning empirical evidence in primary education,” Computers & Education, vol. 102, pp. 202–223, Nov. 2016.
  24. [24]P. Hämäläinen, X. Ma, J. Takatalo, and J. Togelius, “Predictive Physics Simulation in Game Mechanics,” 2017, pp. 497–505.
  25. [25]M. Hendrikx, S. Meijer, J. Van Der Velden, and A. Iosup, “Procedural content generation for games: A survey,” ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 9, no. 1, pp. 1–22, Feb. 2013.
  26. [26]R. Hunicke, M. LeBlanc, and R. Zubek, “MDA: A Formal Approach to Game Design and Game Research,” p. 5.
  27. [27]I. Iacovides, P. McAndrew, E. Scanlon, and J. Aczel, “The gaming involvement and informal learning framework,” Simulation & Gaming, vol. 45, no. 4–5, pp. 611–626, 2014.
  28. [28]P. Lameras, S. Arnab, I. Dunwell, C. Stewart, S. Clarke, and P. Petridis, “Essential features of serious games design in higher education: Linking learning attributes to game mechanics: Essential features of serious games design,” British Journal of Educational Technology, vol. 48, no. 4, pp. 972–994, Jun. 2017.
  29. [29]H. B. Lisboa, “3D Virtual Environments for Manufacturing Automation,” vol. 6, p. 9, 2014.
  30. [30]R. Lopes, T. Tutenel, and R. Bidarra, “Using gameplay semantics to procedurally generate player-matching game worlds,” 2012, pp. 1–8.
  31. [31]S. D. Mohanty and S. Cantu, “Teaching introductory undergraduate physics using commercial video games,” Physics Education, vol. 46, no. 5, p. 570, 2011.
  32. [32]S. Moser, J. Zumbach, and I. Deibl, “The effect of metacognitive training and prompting on learning success in simulation-based physics learning,” Science Education, vol. 101, no. 6, pp. 944–967, Nov. 2017.
  33. [33]C. Peach, D. Rohrick, D. Kilb, J. Orcutt, E. Simms, and J. Driscoll, “DEEP learning: Promoting informal STEM learning through ocean research videogames,” in Oceans-San Diego, 2013, 2013, pp. 1–4.
  34. [34]J.-N. Proulx, M. Romero, and S. Arnab, “Learning Mechanics and Game Mechanics Under the Perspective of Self-Determination Theory to Foster Motivation in Digital Game Based Learning,” Simulation & Gaming, vol. 48, no. 1, pp. 81–97, Feb. 2017.
  35. [35]M. Qian and K. R. Clark, “Game-based Learning and 21st century skills: A review of recent research,” Computers in Human Behavior, vol. 63, pp. 50–58, Oct. 2016.
  36. [36]S. Sampayo-Vargas, C. J. Cope, Z. He, and G. J. Byrne, “The effectiveness of adaptive difficulty adjustments on students’ motivation and learning in an educational computer game,” Computers & Education, vol. 69, pp. 452–462, Nov. 2013.
  37. [37]M. Shaker, M. H. Sarhan, O. A. Naameh, N. Shaker, and J. Togelius, “Automatic generation and analysis of physics-based puzzle games,” 2013, pp. 1–8.
  38. [38]N. Shaker, M. Nicolau, G. N. Yannakakis, J. Togelius, and M. O’Neill, “Evolving levels for Super Mario Bros using grammatical evolution,” 2012, pp. 304–311.
  39. [39]G. Smith, “Understanding procedural content generation: a design-centric analysis of the role of PCG in games,” 2014, pp. 917–926.
  40. [40]V. Wendel, M. Gutjahr, S. Göbel, and R. Steinmetz, “Designing collaborative multiplayer serious games: Escape from Wilson Island—A multiplayer 3D serious game for collaborative learning in teams,” Education and Information Technologies, vol. 18, no. 2, pp. 287–308, Jun. 2013.
  41. [41]K. A. Wilson et al., “Relationships Between Game Attributes and Learning Outcomes: Review and Research Proposals,” Simulation & Gaming, vol. 40, no. 2, pp. 217–266, Apr. 2009.
  42. [42]J.-C. Woo, “Digital Game-Based Learning Supports Student Motivation, Cognitive Success, and Performance Outcomes.,” Journal of Educational Technology & Society, vol. 17, no. 3, 2014.
  43. [43]G. N. Yannakakis and J. Togelius, “Experience-Driven Procedural Content Generation,” IEEE Transactions on Affective Computing, vol. 2, no. 3, pp. 147–161, Jul. 2011.

Drawing and Animating 2D Pixel Art in Photoshop

October 30, 2019

Pixel Art for Games

Photoshop Tutorials

How To Draw Pixel Art | Tutorial

Tutorial #1 – Link

By: TipTut


How To Animate Pixel Art | Tutorial

Tutorial #2 – Link

By: TipTut


I will most likely be doing some more work on some of my older 2D pixel art games, so I wanted to brush up with some basic tutorials on pixel art in Photoshop to get me back up to speed and hopefully learn some new things along the way. This creator had the benefit of following up with an animation video as well, which is a nice bonus to tag on. I may need to reset my Photoshop settings so they are proper for pixel art again.

UnityLearn – Beginner Programming – Tips & Considerations – Pt. 05

October 29, 2019

Beginner Programming: Unity Game Dev Courses


Unity Learn Course – Beginner Programming

Tips & Considerations

Unity’s Order of Events

Unity’s Order of Events:
Awake → OnEnable → Start → Update → OnDisable → OnDestroy

  • Awake: first function called when object is instantiated; true whether added to scene in editor or instantiated in code
  • OnEnable: event; fired when enabled gameObject is instantiated or when a disabled object is enabled
  • Start: after Awake, and OnEnable, but before the 1st frame update
  • Update: happens every frame after initialization methods
    • FixedUpdate: runs on a fixed timestep independent of frame rate, and occurs before physics calculations are performed
    • LateUpdate: called once per frame after update method has completed execution
  • OnDisable: event; fires when object is disabled, or before it is destroyed
  • OnDestroy: execute when object is destroyed in code, or when scene containing it is unloaded
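
To see this ordering for yourself, a small logging component can be attached to any object; this is my own quick sketch, not from the course:

```csharp
using UnityEngine;

// Attach to any gameObject and watch the console to observe Unity's event ordering.
public class EventOrderLogger : MonoBehaviour
{
    void Awake()     { Debug.Log("Awake"); }
    void OnEnable()  { Debug.Log("OnEnable"); }
    void Start()     { Debug.Log("Start"); }
    void Update()    { Debug.Log("Update"); }      // every frame after initialization
    void OnDisable() { Debug.Log("OnDisable"); }
    void OnDestroy() { Debug.Log("OnDestroy"); }
}
```

Disabling and re-enabling the object in the editor makes the OnDisable/OnEnable pairing easy to observe as well.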

Reference Caching: creating a field for a reference and getting that reference once during initialization to use whenever that object is needed

This information is very useful to understand when you start referencing a lot of objects throughout your code and have a lot of pieces working together. This will help with avoiding errors, and debugging when those types of issues do come up. This is something I ran into a lot when working on my scene manager project as getting references was a bit of a pain and the timing was very crucial.
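
A minimal sketch of reference caching (the component and field names here are my own, not from the course): the reference is fetched once during initialization instead of on every use:

```csharp
using UnityEngine;

public class PlayerMovement : MonoBehaviour
{
    // Cached reference; fetched once rather than with GetComponent every frame
    private Rigidbody2D cachedBody;

    void Awake()
    {
        cachedBody = GetComponent<Rigidbody2D>();
    }

    void FixedUpdate()
    {
        // Uses the cached reference instead of calling GetComponent here
        cachedBody.AddForce(Vector2.right);
    }
}
```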

Using Attributes

Attributes (C#): powerful method of associating metadata, or declarative information, with code
Unity provides a number of attributes to help avoid mistakes and enhance the functionality of the Inspector.

Some common attributes used in Unity include:

  • SerializeField: makes a field accessible to the Inspector
  • Range: clamps a numeric field to a range and displays it as a slider
  • Header: adds a header label above fields in the Inspector
  • Space: adds vertical spacing between fields in the Inspector
  • RequireComponent: automatically adds a required component to the gameObject

Public fields are inherently accessible by the Inspector, but other access modifiers (like private) hide the field from the Inspector. To expose such a field anyway, you add the SerializeField attribute.

The RequireComponent attribute takes in a type and automatically adds that component type to the same gameObject whenever this script is added. It also ensures that the other component cannot be removed while this component exists on the gameObject.
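
A small sketch showing these attributes together (the class and field names are hypothetical):

```csharp
using UnityEngine;

// RequireComponent auto-adds a Rigidbody2D and prevents its removal
[RequireComponent(typeof(Rigidbody2D))]
public class EnemyStats : MonoBehaviour
{
    [SerializeField]
    private int health = 100;          // private, but still visible in the Inspector

    [Header("Movement")]               // section label in the Inspector
    [Space]                            // extra spacing above the next field
    [Range(0f, 10f)]                   // clamps the value and shows a slider
    [SerializeField]
    private float moveSpeed = 2.5f;
}
```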

How to Draw with Colored Pencils

October 28, 2019

Colored Pencils

Better Fundamentals

DO’S & DON’TS: How to Draw with Colored Pencils

Tutorial #1 – Link

By: Kirsty Partridge Art


I am trying to get back into some activities I liked doing in the past but have not found time for recently. Using colored pencils was always something I enjoyed, so I would like to get back to them for some simple coloring and sketching. This also seemed like a good time to improve on how I did this, and this video looked like a good start for learning some basics to work on if you actually want to get better at using colored pencils.

UnityLearn – Beginner Programming – Delegates, Events, and Actions – Pt. 04

October 25, 2019

Beginner Programming: Unity Game Dev Courses


Unity Learn Course – Beginner Programming

Delegates, Events, and Actions

Delegates

Delegate: in C#, a type designed to hold a reference to a method in a delegate object

  • Delegates are created using the “delegate” keyword.
  • They are defined by their signature: their return type and the parameters they take in, similar to methods.

Using a delegate allowed us to parameterize a method. Using the delegate as a parameter for a method also allows us to use any method which matches that delegate’s signature to satisfy the parameter.

These are some snippets from two scripts, GameSceneController and EnemyController, that show some basics of utilizing delegates:

// In GameSceneController script
public delegate void TextOutputHandler(string text);

public void OutputText(string output)
{
    Debug.LogFormat("{0} output by GameSceneController", output);
}

// In EnemyController script
void Update()
{
    MoveEnemy(gameSceneController.OutputText);
}

private void MoveEnemy(TextOutputHandler outputHandler)
{
    transform.Translate(Vector2.down * Time.deltaTime, Space.World);

    float bottom = transform.position.y - halfHeight;

    if (bottom <= -gameSceneController.screenBounds.y)
    {
        outputHandler("Enemy at bottom");
        gameSceneController.KillObject(this);
    }
}

For example, in our case we created a delegate called TextOutputHandler that returns void and takes a string parameter. Then another one of our methods, MoveEnemy, took a TextOutputHandler as a parameter, named outputHandler. Any method matching the signature of the delegate (in this case, returning void and taking a single string parameter) can satisfy the input parameter for the MoveEnemy method. As can be seen in the example, whatever method is passed in will be given the string “Enemy at bottom”.

A delegate used this way is commonly known as a “Callback”.

Events

C# Events: enable a class or object to notify other classes or objects when something of interest occurs.
Publisher: class that sends the event
Subscriber: class that receives/handles the event

Things like Unity’s UI elements use events inherently. For example, the Button script uses events to tell scripts when to activate when a button is clicked. This is different from a basic way of doing inputs which checks every frame if a button is being pressed (which is called “polling”). This is also why creating UI elements in Unity automatically creates an EventSystem object for you.

In C#, events are declared using the “event” keyword, and all events have an underlying delegate type.
C# events are multicast delegates.
Multicast delegate: delegate that can reference multiple methods

To help with my understanding, I tried testing the setup without the EnemyController parameter to see why it was needed. I discovered it was necessary to pass along the reference so the small event system knew which object to destroy when calling the EnemyAtBottom method. Using Destroy(this.gameObject) or Destroy(gameObject) both just destroyed the GameSceneController as opposed to the individual enemies. This also helped me understand that adding a method to an event does not pass the method over as an equivalent; it simply means that when the event is invoked, any methods assigned to it are also called, in their current location in a class. So even though the event was being invoked in the EnemyController script, the method I added to it was still called within the GameSceneController script, which makes sense.

Example:

In EnemyController script:

public delegate void EnemyEscapedHandler(EnemyController enemy);

public class EnemyController : Shape, IKillable
{
    public event EnemyEscapedHandler EnemyEscaped;

    void Update()
    {
        MoveEnemy();
    }

    private void MoveEnemy()
    {
        transform.Translate(Vector2.down * Time.deltaTime, Space.World);

        float bottom = transform.position.y - halfHeight;

        if (bottom <= -gameSceneController.screenBounds.y)
        {
            if (EnemyEscaped != null)
            {
                EnemyEscaped(this);
            }
            // Can be simplified to:
            // EnemyEscaped?.Invoke(this);
        }
    }
}

In GameSceneController script:


public class GameSceneController : MonoBehaviour
{
    private IEnumerator SpawnEnemies()
    {
        WaitForSeconds wait = new WaitForSeconds(2);

        while (true)
        {
            float horizontalPosition = Random.Range(-screenBounds.x, screenBounds.x);
            Vector2 spawnPosition = new Vector2(horizontalPosition, screenBounds.y);

            EnemyController enemy = Instantiate(enemyPrefab, spawnPosition, Quaternion.identity);

            // Subscribe to each spawned enemy's event
            enemy.EnemyEscaped += EnemyAtBottom;

            yield return wait;
        }
    }

    private void EnemyAtBottom(EnemyController enemy)
    {
        Destroy(enemy.gameObject);
        Debug.Log("Enemy escaped");
    }
}

I’ve simplified the scripts down to just the parts dealing with the events to make it easier to follow. As I understand it, we create the delegate: public delegate void EnemyEscapedHandler(EnemyController enemy) in the EnemyController script (but outside of the EnemyController class). Within the EnemyController class, we create an event of the type EnemyEscapedHandler, so this event can take on methods with the same signature as EnemyEscapedHandler. Within the MoveEnemy method, we invoke the EnemyEscaped event and satisfy its parameters by passing in this, which is the unique instance of the EnemyController script (after checking that there is a method assigned to this event).

Then in the GameSceneController script, we see that when we instantiate an enemy, we keep a reference to its EnemyController script. This is so we can assign the EnemyAtBottom method to each one’s EnemyEscaped event. Now any time EnemyEscaped is invoked in the EnemyController script, it will then call the EnemyAtBottom method here, passing whatever parameter it (EnemyEscaped) received on to the parameter of EnemyAtBottom. In this case, passing this in EnemyEscaped ensures that EnemyAtBottom knows which enemy to destroy.

Actions

Actions (C#): types in the System namespace that allow you to encapsulate methods without explicitly defining a delegate

  • In fact, they are delegates.
  • Actions can be generic.

An Action is essentially a ready-made delegate type, so it can back an event without another delegate being declared first to serve as a reference. The Action’s generic type parameters determine what parameters are necessary for the passed methods.
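
As a minimal sketch, the custom EnemyEscapedHandler delegate from earlier could be replaced with a generic Action (assuming the same EnemyController setup; the OnEscaped method is my own placeholder):

```csharp
using System;
using UnityEngine;

public class EnemyController : MonoBehaviour
{
    // Action<EnemyController> already means "a void method taking an EnemyController",
    // so no separate delegate declaration is needed.
    public event Action<EnemyController> EnemyEscaped;

    private void OnEscaped()
    {
        // Invoke only if at least one method is subscribed
        EnemyEscaped?.Invoke(this);
    }
}
```

Subscribing works exactly as before: enemy.EnemyEscaped += EnemyAtBottom.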

Thesis Project – Concepts for System to Control Script Timings

October 24, 2019

Working with the HFFWS

System for Properly Timing Running of Scripts

Discovering the Timing Issues with the HFFWS Tools

While working with the Rope script in the Human API, I was again encountering timing issues when trying to accomplish tasks through script. I was having trouble instantiating and connecting rigid bodies to the ends of the rope in script. I ran several tests to confirm the timing issues I was having again.

Test #1
  • 2 Rigid Body objects exist in scene
  • Script has 2 gameObject references for those objects
  • Script sets startBody and endBody references to those objects at Start

This did not work. The references could be seen as being properly assigned in the Rope script in the editor, but that connection was being made too late and had no effect on the objects.

Test #2
  • 2 Rigid Body objects exist in scene
  • Script has 2 gameObject references for those objects
  • Script sets startBody and endBody references to those objects at Awake

This did have the intended result of connecting the rigid body objects to the rope.

Test #3
  • All done in Awake
  • 2 Rigid Body objects instantiated into scene
  • Script sets startBody and endBody references to those objects at Awake

This did work once, although I had some issues repeating it afterward. The objects were prefabs which were instantiated in Awake. GameObject references were created for these, and used for the values of the Rope script startBody and endBody, all done in Awake as well.

Test #4
  • Start or Awake is irrelevant here
  • Object with Rope script starts deactivated
  • 2 Rigid Body objects instantiated into scene
  • Script sets startBody and endBody references to those objects
  • Rope object is then activated

This is the best solution that works, as it can be done in Awake or Start. This leads to a much more controllable environment which is ideal for our system. This will be the foundation for the main systematic approach.

System for Controlling Awake Calls for HFFWS Objects

Deactivating/Activating Objects or Enabling/Disabling Scripts

The combination of all of these tests, most notably Test #4, showed that some important connections are made during the Awake phase of many HFFWS objects. It is important to note that Test #4 specifically indicates that a lot of the important functionality may be happening in the Awake methods of the scripts on the individual objects themselves (as opposed to some system-wide class making connections in Awake). This important differentiation leads to the option of simply deactivating objects with HFFWS scripts (or possibly disabling the scripts specifically) to keep their Awake methods from running before my systems are ready for them. I can run my scripts, and then activate the HFFWS objects so their Awake methods run at that point instead.

This concept seems the most promising for safely and consistently controlling the timing of my script elements versus those of the HFFWS objects (since I do not have access to their core scripts). As this is a common issue I have run into with many objects, it makes sense to make a small script to attach to any prefabs I will be instantiating which can just hold references to gameObjects and/or scripts that I will need to have deactivated or disabled initially, and then activate or enable after everything is properly set.
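
A rough sketch of that small helper script (the class and field names are my own invention): it holds references to objects that should start deactivated, and activates them once everything else is set up:

```csharp
using UnityEngine;

// Attached to a prefab; keeps HFFWS objects deactivated so their Awake
// methods do not run until our own setup scripts have finished.
public class DelayedActivator : MonoBehaviour
{
    [SerializeField]
    private GameObject[] objectsToActivate;   // e.g. objects carrying HFFWS scripts

    // Called by our setup code once all references are assigned
    public void ActivateAll()
    {
        foreach (GameObject obj in objectsToActivate)
        {
            obj.SetActive(true);   // their Awake methods run at this point
        }
    }
}
```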

Script Execution Order

This was actually the first idea to get around these timing issues, but it seems less safe so it will most likely only be used as a last resort. The HFFWS package already has a lot of various scripts ordered in the script execution order of Unity, so there is already a precedent set by them for when a lot of things should run relative to each other.

To confirm this approach works at all, I ran a test where setting everything in Awake did not work, but moving my script to the first slot in the script execution order made it work. This was a similar test involving connecting rigid bodies to a Rope script object.

This is not an ideal process to use, however, as it can easily lead to weird behavior that is very hard to debug and will not scale well in general. Because of these factors, I will mostly be looking to expand upon the other system concept.

HFFWS Working with the Rope Script

October 23, 2019

Human Fall Flat Workshop

Rope Script

Identifying Key Components

The main components for controlling the Rope script, when it comes to placing it and connecting it with objects, are the handles and the start/end bodies. The handles are transform positions that dictate key points on the rope. Two is the standard, as they determine the beginning and the end of the rope, but more can be used to direct the rope in multiple directions. The start and end bodies are rigid body references that determine which objects will be physically connected to the ends of the rope.

The discovery of how the handles work opens the possibility for more rope configurations beyond flat ropes that fall over other objects. Being able to guide a rope around in interesting patterns could potentially allow for setups closer to actual pulley setups, where a rope needs to go around a wheel object at least once.

The first prefab I worked with that used the Rope script had rigid body objects with Fixed Joints on them that seemed necessary at first. While working with another rope-using prefab (the hook), I found that it did not use these Fixed Joints. This showed that they are actually not mandatory for connecting rigid bodies to the rope; only the rigid body references on the Rope script itself for startBody and endBody matter. Further investigation may show what purpose the Fixed Joints served.
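
Combined with the Test #4 finding from the timing experiments, the wiring would look roughly like this (the Rope type and its startBody/endBody fields come from the HFFWS package; the prefabs and this setup method are my own hypothetical sketch):

```csharp
using UnityEngine;

// Sketch only: assumes the Rope object starts deactivated
// so its Awake method has not yet run.
void SetupRope(Rope rope, Rigidbody startPrefab, Rigidbody endPrefab)
{
    Rigidbody start = Instantiate(startPrefab);
    Rigidbody end = Instantiate(endPrefab);

    rope.startBody = start;
    rope.endBody = end;

    // Activating the rope now lets its Awake pick up the references
    rope.gameObject.SetActive(true);
}
```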

UnityLearn – Beginner Programming – Working with Classes – Pt. 03

October 22, 2019

Beginner Programming: Unity Game Dev Courses


Unity Learn Course – Beginner Programming

Working with Classes

The Four Pillars of OOP
  • 1. Encapsulation: grouping of data and methods into a cohesive object
  • 2. Abstraction: process of exposing only those features of an object necessary for interactions
  • 3. Inheritance: creating a new class based on and extending another
  • 4. Polymorphism: ability of an object or function to take on a different form

The PlayerController class created for this section of the tutorials derived from the Shape class, which allowed it to inherit the SetColor method and change the player to yellow. It also extended the class by creating its own method, MovePlayer. I am trying to keep track of this to ensure I keep all the terminology straight.

There was an interesting approach to using WaitForSeconds in the enemy spawning method. Instead of using new WaitForSeconds directly in the yield return statement of the coroutine, they created a WaitForSeconds variable named wait and used it in the yield return statement in place of all the WaitForSeconds syntax. This is nice to keep in mind as another way to organize coroutines, especially those that use similar wait values for multiple yield statements.

Inheritance and Polymorphism

Inheritance was demonstrated by creating protected variables within the base class that could be used by all of the derived classes. The examples here were halfHeight and halfWidth, which assumed the values of the bounds.extents of the SpriteRenderer at Start. This was done in the Start method of the base class, so the derived classes simply had to call base.Start() to have those values individually set for all of them inheriting from Shape class.

It is important to note for this to work they made the Start method in the base class a virtual protected method. This allowed the derived classes to override the Start method to add functionality, while also using the base.Start() method still to assume the base class’s Start method functionality. This started to get into polymorphism.

virtual: this keyword can be used to modify a method, property, indexer, or event declaration and allow for it to be overridden in a derived class

IMPORTANT: This can be used in conjunction with the protected access modifier to allow for a base class’s Start method to be useable within the Start method of derived classes. By creating a protected virtual void Start method in the base class, the derived classes can have their own modified Start methods by using a protected override void Start method and calling the base.Start() method from within.
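
The pattern described above can be sketched like this (the halfHeight/halfWidth fields come from the course; the class bodies are trimmed down to just the inheritance mechanics):

```csharp
using UnityEngine;

public class Shape : MonoBehaviour
{
    protected float halfHeight;
    protected float halfWidth;

    // virtual + protected: derived classes can override this,
    // but outside code cannot call it
    protected virtual void Start()
    {
        Bounds bounds = GetComponent<SpriteRenderer>().bounds;
        halfHeight = bounds.extents.y;
        halfWidth = bounds.extents.x;
    }
}

public class PlayerController : Shape
{
    protected override void Start()
    {
        base.Start();    // run the base class's Start first
        // ...player-specific initialization here...
    }
}
```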

Unity Procedural Landmass Generation by Sebastian Lague

October 21, 2019

Procedural Terrain Generation

Tutorial

Procedural Landmass Generation (E01: Introduction)

Tutorial #1 – Link

By: Sebastian Lague


This is the beginning of a procedural landmass tutorial by Sebastian Lague. This appears to get more involved in setting up noise with more controlled variability, along with shading and other interesting tricks. This will complement a procedural terrain tutorial I followed by Brackeys, since this one gets much more involved and gives many more designer options.

Thesis Puzzle Generation – Pulley Information and Variations

October 17, 2019

Thesis Puzzle Generation

Pulley Variants

Pulley Wikipedia Page

Info #1 – Link

By: Wikipedia


Pulley

General

The Wikipedia definition of a pulley is:
“a wheel on an axle or shaft that is designed to support movement and change of direction of a taut cable or belt, or transfer of power between the shaft and cable or belt”
The pulleys we will be generating are generally simpler setups. They mostly consist of a rope with a physics object (rigid body) on each end. This is because of how the system is used to create the pulleys. Ropes cannot be intertwined between objects accurately, so they are just generated above their setup locations and allowed to fall into position. As a final note, the pulley can also be tied into demonstrating the wheel and axle simple machine as well.

Varieties

Hook on an end to latch onto other objects

This focuses on the pulley’s inherent nature of changing the direction of force. The hooks allow the player to attach one end to various objects to move them into place. It could also be as simple as lifting other objects.

Object to move can be used as a platform
Focus on rotation of wheel

This would use the weight/friction of the rope to rotate/spin the wheel it rests on as the rope is moved. I am not sure how well the HFFWS system works with this.

Build a Compound Pulley

Since the system does not allow for the setup of complex block and tackle style pulleys, set up the environment for the player to build their own. This gets around the limitation of the system itself by letting the player do the setup. This would require building the environment in such a way (such as vertical moving platforms or stairs) that the player can move the rope through a series of wheels to build this pulley.

Lift Heavy Object

One end attached of the rope is fixed to a very heavy or massive object. The other end is where work is done to try to lift the heavy object. Examples of this could be a large door or simply a heavy platform. The system focuses on maximizing the overall force output. This will mostly be done by focusing on the free end of the rope, but applying an aiding upward force to the heavy mass may also be factored in.