Drawing and Rendering Many Cube Meshes in Unity (Part 1 of Part 1)

March 03, 2021

Shaders for Game Devs

Shaders


Title:
Shader Basics, Blending & Textures • Shaders for Game Devs [Part 1]


By: Freya Holmér


Unity – Forum

Description:
Discussions and code for drawing and rendering many cube meshes.


Overview

I have been exploring shaders as an option for efficiently generating large amounts of geometry and came across this recent talk covering shaders all the way from the beginning. This seems like a good opportunity to at least get a better understanding of what they are and good cases to look into using them.

Intro to Shaders

Shaders: code that runs on the GPU, in their truest form.
This was their answer for the simplest explanation of what a shader is from a game development point of view, and I liked it as a good foundation for my understanding. Textures, normal maps, bump maps, etc. are all examples of inputs for shaders. Shaders then use the information those provide, along with their code, to determine how to visualize and render it.

Fresnel Shader: as a surface turns away from you, it receives a stronger light.
It often looks like an outline effect, but it is not one: it highlights surfaces that are angled away from your view. As the angle between a surface and the camera direction becomes very shallow (a grazing angle), the effect grows stronger. This is just a commonly used type of shader.

Structures of a Shader

Structure within Unity (Description)[Language or Tool to Modify]:

Shader

– Properties (Input data) [ShaderLab]

– Colors

– Values

– Textures

– Mesh

– Matrix4x4 (transform data: where it is, how it’s rotated, how it’s scaled)

– SubShader (can have multiple in a single shader) [ShaderLab]

– Pass (Render/Draw pass; Can have multiple)

– Vertex Shader [HLSL]

– Fragment Shader (“Pixel” Shader) [HLSL]

Vertex Shader

This deals with all the vertices of the mesh, similar to a foreach loop that runs over every vertex you have. One of the first common jobs of a vertex shader is placing the vertices. Shaders, however, do not particularly care about world space; they generally output positions in clip space, normalized coordinates that determine where vertices land on the screen. This can often be done simply by taking the local space coordinates and transforming them with an MVP (model-view-projection) matrix to convert them to clip space, and you are done.

The vertex shader is often used to animate water or sway grass and foliage in the wind, so it frequently provides movement or animation. They mention that UV coordinates can be manipulated in either the vertex shader or the fragment shader, but if it is possible to do in the vertex shader, it should be done there first. All you do here is set the positions of vertices or pass data to the fragment shader.

Fragment Shader

This is similar to a foreach loop that runs over each fragment. A pixel usually refers directly to a pixel being rendered on the screen, which a fragment does not always correspond to. However, the two often overlap, which is why some call this a pixel shader. The job here generally comes down to determining what color to set for every fragment or pixel. The vertex shader always runs before the fragment shader, and data can be passed from the vertex shader to the fragment shader, but not vice versa.

Shaders vs. Materials

The Mesh and Matrix4x4 are normally supplied by your MeshRenderer component or something like that, whereas colors, values, and textures are something you must define yourself. These properties are generally defined with materials. The material contains these parameters, which are then passed into the shader. You never "add a shader to an object" in Unity; it is effectively done by adding a material, which then references the shader to be used. You can think of materials as preconfigured parameters for rendering something with a shader. You can have multiple materials that all use the same shader but have different input properties.

via Blogger http://stevelilleyschool.blogspot.com/2021/03/drawing-and-rendering-many-cube-meshes_3.html

UFGTX Maker of SkullGirls Talk on Making a Fighting Game

September 18, 2020

Game Development

Fighting Games

UFGTX: How to Make Fighting Games

Youtube – Link

By: Mike Zaimont


Overview

Mike Zaimont covers a lot about how he believes fighting games should be built and developed for the user experience. He covers topics such as using rollback netcode (GGPO specifically), providing hitbox data to players, as well as frame data. I liked this talk as an example of providing information you as a developer have to your players if they so wish to use it.

Indiecade Europe 2019 Talk – The Simple Yet Powerful Math We Don’t Talk About

July 7, 2020

Indiecade Europe 2019 Talk

Math in Game Dev

The Simple Yet Powerful Math We Don’t Talk About

Youtube – Link

By: Indiecade Europe


Presenter: Freya Holmér


Introduction

I am always interested to find new ways of using math within game development to produce fun and unique effects as well as for creating cool systems, so this talk sounded right up my alley. They focus on 3 major functions found in a lot of software as well as Unity: Lerp, InverseLerp, and Remap. While I have used Lerp pretty extensively already, I had never used the other two so covering all of them together was eye opening to see how many different ways they can be utilized for different systems.

Notes from Talk

Lerp

Lerp(a, b, t) = value

Lerp(a, b, t), where a is the starting point, b is the end point, and t is a fraction, generally between 0 and 1. Lerp outputs a blended value between a and b based on t: at t = 0 it outputs a, and at t = 1.0 it outputs b. t does not have to be a time value; it can come from anything. They show using values from positional data, so the outputs are based on a location in space. Alpha blending literally just lerps pixels based on their alpha values to determine what to show when sprites are layered over each other.

Inverse Lerp

InverseLerp(a, b, value) = t

Just like it sounds, this finds the t value that would produce a given Lerp output. They show an example of controlling audio volume based on distance using InverseLerp. Since it outputs t values, generally between 0.0 and 1.0, you can use that output as a multiplier for the volume. The a and b values are the min and max distances (the distance below which the sound stops getting louder as you move closer, and the distance beyond which it can't get any quieter), and the listener's distance is input as the "value".

The InverseLerp example doesn't particularly work well without clamping, so that's the next feature covered. Some Lerp functions have clamping variants, so keep this in mind when working with Lerps. InverseLerp can also be used to compress a range (again, with clamping in mind): something like InverseLerp(0.3, 0.6, value) remaps the range so that everything at 0.3 and lower becomes 0.0, everything at 0.6 and higher becomes 1.0, and the values in between are spread linearly across the new 0.0 to 1.0 range.

Color Elimination By Depth

InverseLerp can also be used separately on all three color channels (i.e. RGB), which can produce interesting color effects and hue shifts that are difficult to get with normal gradients.

They cover how light color is affected by depth when traveling through water, with a concise chart showing that red light is lost quickly, green light much more slowly, and blue light lingers the longest, which gives the deepest water its deep blue tones. Simply using this concept with Lerp and depth information, they created a pretty decent-looking starting point for a water shader, which was then prettied up with some extra effects (specular highlights, fog, and edge foam).

Remap

Remap(iMin, iMax, oMin, oMax, value) = ov

  • iMin and iMax are the input range
  • oMin and oMax are the output range
  • value is an input between iMin and iMax
  • ov is a value between oMin and oMax

Remap is like an all-in-one combination of Lerp and InverseLerp. To make that clear, they showed the actual equivalent of Remap built from these two:

t = InvLerp(iMin, iMax, value)
Lerp(oMin, oMax, t) = ov

Their example for this was a health bar that changes color over a certain value range (which is similar to something I had done in the past, so Remap was a new way of approaching it that I hadn't seen before). The sample formula used for this was:

Remap(20, 50, Color.Red, Color.Green, health) = color

With inherent clamping, this makes the health bar pure red at 20 health and below, pure green at 50 health and above, then a blend of red and green at values between 20 and 50.

Other Examples

Some other examples they cover for Remap were:

  • Stats based on level (which can be useful unclamped so it will continue to grow)
  • Explosion damage based on distance (you most likely want clamping here, since without it the formula could produce effects like healing at very far distances)

Simple Math Breakdown Behind the Functions

float Lerp(float a, float b, float t){
    return (1.0f - t) * a + b * t;
}

float InvLerp(float a, float b, float v){
    return (v - a) / (b - a);
}

float Remap(float iMin, float iMax, float oMin, float oMax, float v){
    float t = InvLerp(iMin, iMax, v);
    return Lerp(oMin, oMax, t);
}

They finally show a very complicated-looking equation that is actually the equation behind Bezier curves, which are commonly found in graphical software. They explain that a Bezier curve is effectively just several nested Lerps: you lerp between the user's control points to create new points, then lerp between those, repeating until a single point on the curve remains.

Summary

Covering Lerps is always interesting because there's always a new way to learn to utilize them. Learning about InverseLerp and Remap was very beneficial to me, though, and they are covered in a very easy-to-understand way here that makes it easy to look into and implement them right away in my current or next projects. I have actually already built systems where I can see using these tools (like the color range clamps for health bars), so I believe they will be very useful moving forward.

GDC 2020 Talk – Cursed Problems in Game Design

February 6, 2020

Cursed Problems in Game Design

GDC 2020 Talk

GDC 2020: Cursed Problems in Game Design

Video – Link

By: GDC – Alex Jaffe (Riot Games)


Notes

Cursed Problem: an unsolvable design problem, rooted in a conflict between core player promises

Politics: competition through social negotiation, alliances, and manipulation

The main skill here is identifying cursed problems so you know to avoid them or navigate them so you do not waste time on “solving” them.

Examples from Talk: Hard or Cursed?

No Man’s Sky: Exploration game with millions of worlds
Hard: Evidence for this is that the game became successful after significant time spent updating it, so it is easy to see this was just a hard problem and not an impossible one.

Diablo: Loot games with efficient trading
Cursed: Many loot experiences are just incompatible with efficient marketplace trading systems, because the loot experience relies on the random drops themselves feeling unique or special to the player. Having a market to trade this loot for other loot just turns every type of loot into "gold", or currency, and the specialness of all drops is significantly hampered.
The Commodified Reward Problem supports this idea.

Pokemon GO: always-on location-based games
Cursed: These games look to provide an augmented-reality layer over real life for rewards, but this runs directly against the general need for personal safety and convenience. The core of the curse is that one promise is "play anywhere/any time", while the only way to make these games safe is "only play when appropriate".

4 Core Solution Techniques for Cursed Problems

The 4 techniques are: Barriers, Gates, Carrots, S’mores

Barriers

Cut affordances that break promises

This technique focuses on just preventing the player from performing actions that would break the design promise.

Example: Free-For-All Politics (Fortnite, or any large battle royale game)
Approach: Limit players' agency over one another
Sacrifice: Some of the PvP fantasy of control
The game is so large in scale, with so many players, so much missing information, and such high lethality, that the political game is not very feasible to develop. It is hard to create an alliance or single out a specific targeted player with all of these variables. This, however, removes the feeling of "I have outplayed you specifically," the personal level of domination, since the game is so large in scale.

Gates

Hide bad states behind challenge

This technique aims to make it harder to find bad ways to play the game so that players will hopefully focus on the good ways intended.

Example: Free-For-All Politics (PlayStation All-Stars Battle Royale (4-player brawler))
Approach: Limit visibility of players' success
Sacrifice: Some of the tension of buzzer beaters
They hid each player's score and made the scoring system a bit convoluted (2 * kills - deaths) so it was harder to reason out in the middle of a fast-paced game. This was done to encourage every player to play their best, with less fear of being targeted for being in the lead, reducing the politics factor. Not being sure of the score, however, can reduce the high-tension moments near the end of the game, since no one is really sure who is in the lead, and you do not even know who won until the game tells you.

Carrots

Incentivize avoiding the bad states

Provide incentives for players that do not go into the bad game states.

Example: Free-For-All Politics (Catan tournament)
Approach: Add meta-game effects
Sacrifice: Magic circle of an individual game
In a Catan tournament, players play many games and their standings are used to add to their total score for placement. This makes getting first less of a “be all, end all” scenario and incentivizes each player to just do as well as they can. They suggest this ends up reducing overall politics in the game since players are continually incentivized to do their individual best. The con however is that it just makes each individual game feel less special or “magical” to play.

S’mores

Make it fun

This technique leans into the bad states of the game and just makes them fun to play as well.

Example: Free-For-All Politics (Diplomacy)
Approach: Give players tools for deep political play (e.g. secrecy)
Sacrifice: Emphasis on moment-to-moment action
The game Diplomacy goes hard into politics, with secrets and deception alongside the gameplay, which can make the political aspects more fun and interesting. This, however, generally makes the game itself (in this case, your plays with your units) feel like it is taking a backseat, leaving players mostly focused on the relationships they make or break along the way.

Summary

Do not think of these as solutions, but as a small design framework. These techniques were identified to help you approach difficult problems you find in the design process for making a game to help you navigate them more efficiently. It can also just be beneficial to understand some problems are not “solvable” and you will have to make sacrifices to accomplish some goals or mitigate some negative aspects.

Challenges

These were some challenges given as examples of problems so cursed that they really have not been approached very much.

These are the challenges:

  • Balanced player-generated content in multiplayer games
  • PvP games in which all players feel there was a just outcome
  • Mystery and discovery in the age of the internet

The last challenge really resonated with me as something I have investigated, specifically through the concept of procedural content generation.

As an avid Pokemon player, I would always go into the new games knowing every new Pokemon from information released beforehand, whether from strategy guides back in the Johto days up through the Alola-region games. I stumbled upon player-made ROM hacks, however, and played them mostly blind and enjoyed them much more. The sense of discovery and figuring things out was so much more interesting and refreshing.

I then got into Pokemon randomizers, where you can randomize as much or as little as you want, and that gave fresh new life to old games I had played. This gave me the idea of using procedural generation, something akin to the roguelikes of today, but in a more exploratory or experimental way. I think you could look into procedurally generating the rules of the game, as opposed to just specific content, and the player could have consistent ways to discover those rules each time they play. This way, the player's skill of discovery is really what is emphasized and rewarded as they improve, and they use this core skill to get better at whatever world they are put into.

GDC 2019 Talks – Into the Breach and Dead Cells

April 11, 2019

2019 GDC Vault

Talks

GDC 2019 – ‘Into the Breach’ Design Postmortem

By: Matthew Davis
From: Subset Games

GDC 2019 – ‘Dead Cells’: What the F*n!?

By: Sebastien Benard
From: Motion Twin

Into the Breach Design Postmortem

Constraint Focused Design: This basically has them set constraints as early in the game design process as possible that help direct and restrict future design choices. This can be something like a genre, or a game mechanic like health.

Gameplay-focused design: They approach making games by starting with the main gameplay they want to create, and then building everything else to fit around that. Everything else, such as narrative or theme, ends up following the gameplay.

They used a very board game inspired design for “Into the Breach”. They like to keep numbers small, easy to understand, and meaningful. The difference from 1 to 2 should be impactful. This also leads into how they designed enemies. There is a “chess-like” feel to enemies, and the overall gameplay in general, in that the player should have a good understanding of how a piece operates and its zone of danger it creates and be able to play around that.

Randomness was something to be kept to a minimum. This went very well with the idea of having the enemies forecast their attacks. Again, this also ties in with making as much information available to the player as possible. Even though they wanted to reduce randomness to a minimum, they still implemented it when they deemed it a much better option.

As most game designers do, they wanted a strong focus on interesting decisions. This led to keeping out a lot of complex systems that were too difficult to use for what they really brought to the game. This was also done a bit differently on their team since they only have two people, so they could iterate quickly and often and scrap ideas and designs much more easily.

‘Dead Cells’: What the F*n!?

They really wanted to focus on “permadeath” as a game mechanic. Focusing on this led them to some core mechanics to “make death fun”. The core of this was death being a way to progress in the game: it gave you a way to improve your character’s abilities, as well as to access previous areas again (since the game does not allow backtracking).

They wanted to focus the design on combat, NOT platforming. Even though it is still a 2D platformer, they really wanted the player to use their skills for fighting and limit or even remove punishments for platforming. To achieve this, they used many small mechanics such as allowing players to jump a few frames after leaving a platform, teleporting the player onto a platform if they missed a jump by a few pixels, and implementing an edge grabbing mechanic.

This player-helping system in platforming got carried over into other mechanics as well, such as helping aim attacks. If the player is facing the wrong direction when going for an attack, the game can actually assist them and turn them around to face the other direction. I was initially very against this concept, but I appreciated their take on it and was much more accepting after their explanations. This helped reduce the mechanical demands of the game, rewarding the player much more for strategic combat gameplay. In this way, it actually allowed them to make the game more difficult in some respects, since the player wouldn’t be punished for these less important aspects.

Just a small note, since they use a lot of community feedback to update their game, they will leave comments in the patch notes stating who inspired the change. This is just a good way to let the community know how impactful they are and make them more willing to help in the future.

2019 GDC Vault Opening

April 9, 2019

2019 GDC Vault

Talks

GDC 2019 – ‘Into the Breach’ Design Postmortem

By: Matthew Davis
From: Subset Games

GDC 2019 – Cursed Problems in Game Design

By: Alex Jaffe
From: Riot Games

GDC 2019 – How to Teach 5 Semesters of Game Design in 1 Class

By: Jason Wiser
From: Yaya Play Games, Tufts and Harvard University

GDC 2019 – Creating Customized Game Events with Machine Learning (Presented by Amazon)

By: Africa Perianez
From: Yokozuna Data

The 2019 GDC Vault has finally opened! I just wanted to take a quick peek and see if I could find any interesting talks to watch in the near future. I really loved playing Into the Breach, so I’m excited to check out their postmortem. Riot Games always has interesting talks, given League of Legends’ size and popularity. The other two just seemed like interesting-sounding titles that I’d like to hear more about.

Unite Europe 2017 – Multi-scene editing in Unity for FAR: Lone Sails

March 15, 2019

Unite Europe 2017

Multi-scene editing in Unity for FAR: Lone Sails

Youtube – Unite Europe 2017 – Multi-scene editing in Unity for FAR: Lone Sails

By: Goran Saric

I wanted to look into pushing the tower defense tutorial series I did from Brackeys into a more polished little game project, so one of the first upgrades I wanted to look into for the project was the suggestion about breaking up the scenes in a more efficient manner.

This was mentioned as the best way to set up the overall project if you want to make it significantly larger while keeping it easy to build upon. Currently, a lot of objects in a given level persist between every level, and right now they are just copied into every single level. If you edit any of these objects, they need to be copied into every other scene again. This can be minimized by using prefabs much more extensively, but having a scene solely for these objects keeps everything much cleaner and easier to edit.

So, searching for how to properly implement this idea of broken-up scenes, I came across this Unite Europe 2017 talk where the makers of FAR: Lone Sails detail exactly how they approached the use of multi-scene editing in their game.

Before getting into their general additive scene setup, they mention how they broke down the main level content from a giant full story board type setup into equally sized level scenes. They then had two level scenes loaded at any given time to keep the memory usage down to a minimum, but ensure the next scene was ready when the player arrived.

The Scene Abstraction:

  • Logic Scene
    • Always loaded
    • Contains all relevant managers to keep game running
    • Examples: Scene Manager; Save Manager
  • Base Scene
    • All elements of game that are always present during gameplay and level independent
    • Examples: character(player); camera; their giant vehicle
  • Level Content 1 Scene
    • The rest of the level content that is unique for that area/level
  • Level Content 2 Scene
    • This is the same type as the previous scene
    • Just enforcing that the game has 2 level scenes loaded at any one time

They then detail some of their workflow with these level content scenes in the editor. Even though the game only ever has two level scenes loaded at once, sometimes they had several level scenes open at once in the editor to ensure the overall theme was consistent, for aligning geometry, etc. It is also noted that editor play time gets faster when you only load the scenes you need for testing. More broken-up scenes also help reduce merge conflicts.

Helper Tools

There were two main tools they mentioned being helpful for the designers to keep them organized: Highlight Scene Borders and Teleportation Keyboard Shortcuts.

Highlight Scene Borders: This tool just placed a large colored plane at the ends of the scenes to help indicate where scenes started and ended when multiple were open at once. This was especially helpful since they are dealing with more of a 2D platforming game world. It ensures objects are placed in the correct scene, and also helps when determining camera frustum angles.

Teleportation Keyboard Shortcuts: They had an issue where, during testing, they constantly had to slide some of the major game components through the scenes to get them where they were needed. A much easier solution was a keyboard shortcut that teleported these constantly moving pieces to the current mouse position. If done while the game is running, this has the added benefit that it doesn’t touch the editor scene at all, and everything resets to its proper location after testing.

Scene Collection

Unity doesn’t yet have a built-in way to save scene hierarchies in the editor, but there are many tutorials online about creating your own editor tools to do this. Unity offers the corresponding API; it just needs some work to be user friendly. They created a ScriptableObject that can save the constellation of all loaded scenes to be loaded again at a later time.

Cross-Scene References

Unity does not normally allow for cross scene references between objects within the Inspector. There are several ways to access an object from another scene though.

GameObject.Find: this is very slow and can be tricky to find the correct instance

Singleton/Statics: most programmers dislike using these, but they can be used for this purpose

Scriptable Objects: keep references of instances in a scriptable object and link them to your scripts

Zenject – Dependency Injection Framework: offers some built-in features to support reference injections over multiple scenes

For FAR, the way they set up the scenes meant they didn’t need many cross-scene references. When they did need one, they had a simple MonoBehaviour component in the Base Scene which registered a reference in a static reference pool. This static field reference could then be accessed and updated by simple MonoBehaviour components in the current Level Scene, exposing some of the methods of the original component in the Base Scene. Their setup also helps keep the level designers from touching scripts they shouldn’t be modifying.

Finally, if you want some really advanced features already setup for you, there is a tool you can buy on the Unity store. It’s titled “Advanced Multi-Scene: Cross-Scene References”. It has some nice features such as scene merging and cross scene Inspector referencing.

Game Architecture with Scriptable Objects Talks

February 7, 2019

Talks on Scriptable Objects

Unite Talks – 2017

Youtube – Unite Austin 2017 – Game Architecture with Scriptable Objects

By: Ryan Hipple

Youtube – Unite ’17 Seoul – ScriptableObjects What they are and why to use them

By: Ian Dundore

Unite talks on using ScriptableObjects for your game architecture. This just seems like a good thing to learn about to keep your game creation and coding organized and easy to work with. ScriptableObjects are something I need to learn more about in general.

AI Project Research – Looking for Inspiration

January 9, 2019

AI Project Research

GDC 2018 – AI Wish List: What Do Designers Want out of AI?

By: Raph Koster, Dave Mark, Richard Lemarchand, Laralyn McWilliams, Noah Falstein, and Robin Hunicke

GDC 2018 – AI Wish List: What Do Designers Want out of AI?

I’m bringing back a video for reference to try to come up with an AI-intensive project. There are lots of quotes and the notation is very note-y; I’m just using it to keep track of points of inspiration.

Laralyn mentions that she wants AI to be able to acknowledge that what the player is doing is unique. The way they are playing is differentiating from what is expected, and it should be noted by surrounding characters. Could we apply a type of “physical constraint” concept to this to help accomplish this goal? Physical constraints mathematically map out what an equilibrium should be, and then apply forces/actions to bring things out of line back to equilibrium. Could we use this concept to perform AI actions when it acknowledges a player diverges from what is expected, the “equilibrium” of the player experience?

From Raph Koster, “…building characters as props that are primarily reactive, and if we want to exploit AI, they need to have their own inner lives, out of which this stuff rises organically.”

“Heartificial Intelligence” – giving NPCs empathy; have a deeper version of the Sims system. Users are more interested in “broken” characters with flaws, e.g. a dog that is now scared of people and is harder to train. The problem with the Sims system was that every object had to echo out how it should affect an NPC, and they all needed to interact with each other, so adding things to the system was very difficult(?).

Time Stamps: (0:49:40 – 0:51:18) : AI can be recursive and add detail. “…Can you do that but in a way that is data rich, instead of just going for ‘no let’s plot thousands of trees’, how about when I go look at that tree it’s way more detailed. Now there’s an ecosystem in that tree. Now there’s a burrow in this one and the gophers live under that, and there’s a woodpecker in that one. That’s the kind of detail we will never ever be manually able to do. We just can’t afford it, it’s ridiculous. And it’s exactly the kind of intelligently constructed, realistic, responsive environment. You want the woodpecker to know there’s a gopher. I’m not talking just shallowly, again, it’s not just splatting the textures. Can we actually create a data rich environment? … it’s a fractal or segmented thing. We don’t need each tree to know about each tree. But we could do something really interesting within the tree… each one could have variations.”

Solaris example – it builds things from your memory; makes you miss being on earth

“…go to a foreign planet and you find an object, and when you look inside that object there’s more objects, and you look inside those objects; and then come up with a design of mechanics that are interesting from the exploratory perspective that’s very similar to when you’re a child. When you wander into your backyard, and then the woods there and you find a stream, then you look in the stream and you see the tadpoles and then you poke the tadpoles. Think of it from a mechanical perspective, not an aesthetics perspective, and then you get the dynamics that are so interesting.”

Decentralize the player; other things go on without the player. Make the player feel proud of something else. Have an AI do something the player didn’t expect it to do. Going off of this: have an AI that the player needs to “teach” how to do something through their actions, similar to “Clumsy Ninja”.

Procedural Generation – Far Cry 5 GDC Content Generation

July 30th, 2018

Procedural Generation – GDC 2018 Talk – Procedural World Generation of ‘Far Cry 5’

By: Etienne Carrier

GDC 2018 – Procedural World Generation of ‘Far Cry 5’ – GDC Vault Link

GDC talk where Etienne details a large scale tool they created using procedural generation techniques to help build the world of Far Cry 5. It covers both the user end (world editors/designers) and the behind the scenes processes that make everything possible. They use Dunia and Houdini together to create these tools.