March 15, 2019
Unity ML Agents
Balancing Ball Setup
Basic Project Settings
Make sure the “Scripting Runtime Version” for every platform you are targeting is set to “.NET 4.6 Equivalent” (or “.NET 4.x Equivalent”, depending on the Unity version). I had to update the project to work with Unity 2018, which already had “.NET 4.x Equivalent” as the default setting for all of my platforms.
Overall GameObject Hierarchy
The overall platform prefab has a “Ball 3D Agent” script, which needs a Brain assigned to its brain property.
The Brain object, in turn, holds a TensorFlow model property.
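For context, here is a rough sketch of the shape an Agent script takes under the 0.x version of the ML-Agents API I was using. The class name, field names, and the tilt/reward values are simplified placeholders rather than the toolkit’s actual Ball3DAgent code, and note that the Brain never appears in code: it is assigned on the component in the Inspector, which is the brain property mentioned above.

    using UnityEngine;
    using MLAgents;

    // Simplified balance-ball agent sketch (ML-Agents 0.x-era API).
    public class BalanceBallAgentSketch : Agent
    {
        public GameObject ball;   // the ball resting on this platform

        public override void CollectObservations()
        {
            // What the brain (and later the trained model) gets to see.
            AddVectorObs(gameObject.transform.rotation.z);
            AddVectorObs(gameObject.transform.rotation.x);
            AddVectorObs(ball.transform.position - gameObject.transform.position);
            AddVectorObs(ball.GetComponent<Rigidbody>().velocity);
        }

        public override void AgentAction(float[] vectorAction, string textAction)
        {
            // The brain's two continuous outputs tilt the platform.
            float actionZ = 2f * Mathf.Clamp(vectorAction[0], -1f, 1f);
            float actionX = 2f * Mathf.Clamp(vectorAction[1], -1f, 1f);
            gameObject.transform.Rotate(new Vector3(0, 0, 1), actionZ);
            gameObject.transform.Rotate(new Vector3(1, 0, 0), actionX);

            // Small reward for keeping the ball up; end the episode if it falls.
            if (ball.transform.position.y < gameObject.transform.position.y - 1f)
            {
                SetReward(-1f);
                Done();
            }
            else
            {
                SetReward(0.1f);
            }
        }

        public override void AgentReset()
        {
            // Put the ball back above the platform for the next episode.
            ball.transform.position = gameObject.transform.position + new Vector3(0f, 1.5f, 0f);
            ball.GetComponent<Rigidbody>().velocity = Vector3.zero;
        }
    }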
Setting Up Training Environments
There are two ways to train your agents: in the Unity Scene Editor, or by using a built executable.
The first example trains in the Unity Scene Editor. This is done by selecting the “Ball 3D Academy” object, adding the “3DBallLearning” brain to the Broadcast Hub of the “Ball 3D Academy” script, and checking the Control checkbox. The Broadcast Hub exposes the brain to the Python process, and the Control checkbox allows that Python process to control the brain.
Next, I needed to use the Anaconda Prompt to run the training process. Since I’m still getting the hang of this, I ran into a few basic issues noted in the Problems section.
After training completes successfully, the trained model is located at models/<run-identifier>/<brain_name>.nn.
You then bring the model (the .nn file) into your Unity project and set it as the Model property of the Brain you are using.
Problems
Apparently I did not follow the default installation setup, so I was unable to access “mlagents-learn” from any directory. I found my ml-agents folder location and learned how to change directories in the Anaconda Prompt to get into the correct location. This allowed the first step to run properly, which was the line: mlagents-learn config/trainer_config.yaml --run-id=firstRun --train
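For reference, the full Anaconda Prompt sequence ended up looking roughly like the lines below. The environment name and the install path are placeholders for whatever you used during installation (and newer Anaconda versions use “conda activate” rather than “activate”).

    :: activate the conda environment created during the ML-Agents install
    :: (the name "ml-agents" here is a placeholder)
    activate ml-agents

    :: change into the cloned ml-agents repository so config/trainer_config.yaml resolves
    :: (the path is a placeholder for wherever the repository lives on your machine)
    cd C:\path\to\ml-agents

    :: start a training run; --run-id names the run and --train enables training mode
    mlagents-learn config/trainer_config.yaml --run-id=firstRun --train

Once the command starts up, it waits for the Unity editor; pressing Play in the editor is what actually begins training.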
After resolving that, I was getting a UnityTimeOutException error in the Anaconda Prompt. This was simply because Unity was unable to communicate with the Python process: I had forgotten to check the Control checkbox from the tutorial.
Finally, when I went to add my newly trained model to the Learning Brain and play the scene, I got an error and the platforms did not move at all. I had not reopened the scene as stated in the tutorial notes, so I figured a default value in the scene might still have been altered. It turned out I just needed to uncheck the Control checkbox in the Broadcast Hub, which makes sense, since that checkbox determines whether the platforms are driven by the outside Python process. Turning it off let them perform on their own with the designated model again.
Next Steps
These are the next steps suggested at the end of this small setup tutorial:
- For more information on the ML-Agents toolkit, in addition to helpful background, check out the ML-Agents Toolkit Overview page.
- For a more detailed walk-through of our 3D Balance Ball environment, check out the Getting Started page.
- For a “Hello World” introduction to creating your own Learning Environment, check out the Making a New Learning Environment page.
- For a series of YouTube video tutorials, check out the Machine Learning Agents PlayList page.