Procedural Music in Typical
Updated: Jan 27, 2022
Procedural music is an approach to music creation driven by sampling, randomization, and automation. In games, this process allows for relatively non-repetitive music that can respond to the player's actions and help characterize the environment.
The process of creating procedural music includes a decent amount of formalization. For example, I may tell the software to play note A every four beats. But the rules for generating music are far from a priori, as one must listen carefully to the software actually playing the music out: does the product suit the tone of the scene, and do my rules actually build towards something appealing? Patterns may emerge where I never intended: should I keep them or discard them?
That is an overview of the type of procedural music involved in Typical. The following sections will walk through how to implement it with Unity and FMOD Studio.
I will be using Unity 2020.3.4f1 and FMOD 2.02.03 on a Mac. The process should be similar on Windows and Linux, but please refer to the respective manuals where things diverge. Both tools are free and well documented online, from the installation process all the way to sample projects you can play with.
If you just want to see the product of this tutorial, I suggest cloning a demo from https://github.com/aelu419/UnityAmbientTutorial.git. You can also check out other branches in the repository for different stages of the project.
If you want to build the demo yourself, I encourage going through FMOD's integration tutorial first. You will also need to register for accounts at unity3d.com and fmod.com, and to "buy" (get for free) the FMOD plugin from the Unity Asset Store. Finally, create a Unity project from the 2D or 3D template; this can be done through Unity Hub. I recommend checking out https://learn.unity.com/pathway/unity-essentials if any of this is confusing.
If you have any trouble understanding the rest of this, feel free to send me an email at email@example.com.
The sounds I use come from Freesound.org.
Open FMOD Studio and create a new project; you should see a screen similar to the one below. Save the project directly under your Unity project (it should be a sibling of the "Assets" folder). I am calling it "FMODProj".
Just so that I have something to work with, I have imported a few sounds into the Assets tab. Please ignore the foxraid_chant: at the time of writing, I have come to recognize how overly creepy it is, and have excluded it from the project.
II. Creating Looped Instruments
Create a new Event > 2D Timeline by right clicking under the Events tab.
Rename it to “background” and remember to assign it to the “Master” bank. Banks are collections of events and actions in FMOD. I will not delve deep into them in this tutorial, so we will only be using Master.
On the right, you should see a similar screen to this:
At the top are the tracks that will be played; at the bottom are modulations and effects; and on the right are the parameters of the event itself.
Go to the Assets tab and drag the sterferny stream sound onto the timeline. Place your cursor on the edge of the blue bar until it becomes a left-right arrow. Then, drag towards the center of the blue bar. You will see a white curve from the bottom to the top: this is the fading of the audio. You can press SPACE to play the sound. Notice the fade in and fade out: this is what the white curve does.
Then, right click on the black bar at the top, just below the gray bar marked "0:00", "0:05", and so on: this is the "logic tracks" section. Choose "Add Loop Region". This allows the sound to be looped.
Notice the teal bar (the loop region) appearing in the logic track. Drag it out so that its left and right edges still sit between the fade-in and fade-out curves.
Click on the loop region and notice the "transition conditions" appearing in the bottom-left section called "Region". These determine when the loop should transition back to the beginning. There are fun ways to change this behavior, but for now, choose Add Condition > Add Event Condition > Event State: "Not Stopping". This simply means the segment loops until the event is told to stop.
Now play the sound. Notice that the sound does not immediately stop inside the loop region when you hit the stop button. Instead, it continues until the end of the region, exits, and reaches the end of the timeline.
But there's one more problem with our loop: the transition is too abrupt. To fix this, we need to add a similar fade-in fade-out effect for the transition itself, which could be done through transition timelines. Right click on the loop region and select "Add Transition Timeline".
Notice the bar splitting (see below). The left half is the end of the loop region, and the right half is the beginning of the next loop.
Drag the dashed region on the left and notice a differently colored bar following your cursor. Extend this bar all the way to the right edge. Do the same on the right side, dragging it to the left edge.
A similar white curve appears, meaning that we have set up the fade-in and fade-outs of the loop transition. Try playing the sound again, this time the jump should sound smoother. You can also play with the curve itself as well as the length of the green and yellow sections to reach the best effect.
Next, I will show another type of background loop where it is not just one audio track playing again and again. Instead, we jump randomly between different tracks.
Create another 2D Timeline called "Whisper" and assign it to master bank. Then, right click on the audio track and select "Add Multi Instrument". As its name suggests, a multi-instrument is an instrument that can play multiple tracks. And we can set it up so that it plays these tracks at random.
Go to the assets tab and drag all the whisper sounds into the green multi-instrument. Notice on the bottom that these tracks appear in a list.
Select Playlist > Shuffle > Randomize. This means that every time the multi-instrument is played, it selects a random track. How is this different from Shuffle? Shuffle randomizes the order once and then loops through it like a regular playlist, while Randomize picks at random every single time.
We then want to drag out the length of the multi-instrument. Notice the dashed regions on the right:
This is caused by the tracks not all having the same length. If we set the multi-instrument to play for 20 seconds, some tracks will finish early while others won't reach their end. To prevent this, make sure the instrument is long enough, but not so long that dashed regions appear.
Similar to the background timeline, we add fade in, fade out, loop region, and transition timeline to the track. The finished product should look something like the following:
III. Creating Singular Instruments
Aside from loops, we also want to incorporate singular sounds that can be actively played at points during play. This part is actually easier, as we do not need to set up loops and transitions (although, arguably, the complexity comes back when we revisit it in the modulation section). For now, create a 2D timeline called "Flute", assign it to the master bank, and drag the hyena flutee track onto it.
Then, create another 2D timeline called "Vibraslap", assign it to the master bank, and put the vibraslap track into it.
Here, we actually want to adjust the track. Notice that it is significantly quieter: go to the knob on the Audio 1 track where it says "0.0dB" and drag it upwards until it reads 10dB; the track should be louder now. Add a fade-in and fade-out, and we are done.
In order to be absolutely sure about what sounds your program produces, I suggest wearing headphones or earphones and checking that stereo sound works. You may need to restart FMOD Studio for it to recognize your device.
Now, go to the Background timeline, and click on the Master track. Notice a "fader" appearing on the bottom with dashed boxes on each side. This is where we place our audio effects. On the left, right click and add an FMOD 3-EQ. I will not go too deep into how it works, but a 3-EQ basically splits your sound into low, mid, and high frequency bands and lets you control how loud each one is. Change the settings to the ones here:
Notice the wind sound disappearing. This is because it is in "mid" frequency, and we have set Mid to -74dB, which makes it much quieter. You can change Mid back to 0.0dB and notice the wind appearing again.
In the right dashed box, add an FMOD Reverb effect. I will not pretend to know how this works, so just keep twisting knobs until it sounds good :) For reference, I changed the Wet Level, Dry Level, High Cut, and Early/Late parameters.
Now, right click the top bar of the reverb effect and select Bypass. (Alternatively, you can click the curved arrow with a dot below it.) This basically lets you disable the effect: make sure that by adding the effect, you are actually making the sound better, not worse!
If you like the reverb version, unselect Bypass. If you like the original, just proceed.
Moving on to the Whisper timeline, add a 3-EQ effect just as on the Background track. This time, we want to reduce the High frequencies, since that cuts down on the noise.
For the flute, we also add a reverb.
Notice on the bottom right there is a "Pitch" setting with the unit "st", which stands for semitones. In the Western 12-tone scale, a semitone is the interval between adjacent notes such as "E flat" and "E", or "F" and "F sharp". This will be useful later when we attach programmatic variables to the instrument.
For the Vibraslap, I used a Flanger, a Chorus, and a Reverb, as well as changing the fade-in to "ease in" instead of "ease out". This is not very important, as the vibraslap is only here for demo purposes.
Now that we have effects added, we move on to setting parameters that our game can control.
Go to Flute, and double click on the reverb's title bar to minimize it. On the right, add an FMOD Panner and choose "Stereo". Notice the dial named "Pan": this controls the left-right balance of the sound; negative values make the sound come from the left, and positive values from the right.
We want the note to come from random directions instead of having a constant pan value. To achieve this, right click on the Pan dial and select "Add Automation". (Another way to achieve this is to use Add Modulation > Random, but that gives us less control from Unity's side.)
Notice an extended interface on the right side of the panner. Choose "Add Curve" and then "New Parameter".
A pop-up will appear asking how we want to set up this variable. Select Continuous, name the parameter "Pan", set the range from -1 to 1 with an initial value of 0, and select "OK".
A curve-editing interface appears, with the horizontal axis (the value of the Pan parameter) going from -1 to 1 as we have configured. Adjust the red line so that it goes from -30 on the left to +30 on the right. These two extreme values are the left-most and right-most positions our panner will reach. Feel free to try other values.
Now, go to the Whisper track. We want to adjust it so that the whisper will sound different each time. This can be achieved by adjusting the parameters of our 3-EQ effect. Note that when we reduce "Low", the voice sharpens and the noise gets exaggerated; when we reduce "High", the voice muffles. Create a "Sharpness" parameter that goes from 0 to 1, and add automations for Low and High (the finished product should look like below).
We are technically done on the FMOD side of this project. Remember to save and build.
Scripted Sound Playing
I must apologize here, as I am not great at explaining code. I will do my best to lay out the high-level structure and trust you to follow the actual code, which should be object-oriented enough:
We will set up an abstract Instrument class that can "play", "halt", and "update" the status of the instrument based on the time elapsed
We will implement a looped instrument and a singular instrument
We will create a MonoBehaviour component called "Song" to play each "Instrument". It will store each instrument as a variable and update each one as time passes. Once in a while, the Song will play the singular instruments, setting their parameters to random values.
Also, note that each of the following classes imports the following namespaces:
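Given the Unity and FMOD APIs used below, a likely set is:

```csharp
using UnityEngine;        // MonoBehaviour, Mathf, Random, WaitForSeconds
using FMODUnity;          // RuntimeManager, EventReference
using FMOD.Studio;        // EventInstance, STOP_MODE
using System.Collections; // IEnumerator, for the coroutines
```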
I. The Instrument Class
First, set up the following variables.
And set up the actions the class would perform, using abstract methods.
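Putting the variables and the abstract actions together, the base class might look roughly like this (a sketch; the field and method names are my own guesses, not canonical):

```csharp
// assumes: using UnityEngine; using FMODUnity;

// Base class for every instrument in the song. It is a component,
// so it can be dragged onto a GameObject and into Song's slots.
public abstract class Instrument : MonoBehaviour
{
    // The FMOD event this instrument triggers (set via the
    // magnifier icon in the inspector).
    public EventReference Reference;

    // Base loudness of this instrument, before the song's master gain.
    [Range(0f, 1f)] public float gain = 1f;

    // How strongly the noise function sways the gain over time.
    [Range(0f, 1f)] public float fluctuation = 0f;

    // Random offset so each instrument samples a different noise slice.
    protected float noiseRoot;

    // Trigger the instrument: start the loop, or play one note.
    public abstract void Invoke();

    // Stop the instrument; a forced halt skips the fade-out.
    public abstract void Halt(bool forced);

    // Called regularly by Song with the current target gain.
    public abstract void UpdateState(float targetGain);
}
```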
A common approach you would see in generative art in general is the use of noise to create periodic but also randomized variations.
Do not worry too much about it. Just know that we will call this function by passing in the time, and get back a naturally varying value between 1 - fluctuation and 1 + fluctuation.
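One possible sketch, using Unity's built-in Perlin noise (the 0.1f time scale is an arbitrary choice of mine):

```csharp
// Returns a smoothly varying value in [1 - fluctuation, 1 + fluctuation].
// Mathf.PerlinNoise returns roughly [0, 1]; we recenter that around 1.
protected float Noise(float time)
{
    float n = Mathf.PerlinNoise(noiseRoot, time * 0.1f);
    return 1f + (2f * n - 1f) * fluctuation;
}
```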
II. Singular Instrument
We need a coherent way to vary between notes in a smooth manner. To achieve this, we create an “underlying gain” variable.
The most complicated part is Invoke (the Mathf.Pow part for setting pitch will be explained later on):
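A sketch of what my Invoke looks like (the semitoneRange parameter and exact numbers are my own; "Pan" is the parameter we automated in FMOD Studio):

```csharp
// assumes: using UnityEngine; using FMODUnity; using FMOD.Studio;

public class SingularInstrument : Instrument
{
    // Smoothed gain, so consecutive notes change volume gradually.
    protected float underlyingGain;

    // Widest random pitch offset, in semitones.
    [Range(0, 24)] public int semitoneRange = 12;

    public override void Invoke()
    {
        // Each note is a fresh, fire-and-forget event instance.
        EventInstance note = RuntimeManager.CreateInstance(Reference);

        // Random semitone offset, converted to a pitch multiplier:
        // one semitone up is a factor of 2^(1/12).
        int semitones = Random.Range(-semitoneRange, semitoneRange + 1);
        note.setPitch(Mathf.Pow(2f, semitones / 12f));

        // Random left-right placement via the "Pan" parameter.
        note.setParameterByName("Pan", Random.Range(-1f, 1f));

        note.setVolume(underlyingGain);
        note.start();
        note.release(); // FMOD cleans it up once it finishes
    }

    // Halt and UpdateState omitted here.
}
```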
In this code, we generate a randomized “note” from the instrument, play it, and simply move on.
As for updating the instrument, we just want to update the underlying gain:
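For instance (the smoothing factor is arbitrary; Noise is the fluctuation function described earlier):

```csharp
public override void UpdateState(float targetGain)
{
    // Ease the underlying gain toward the noisy target so that
    // notes played close together do not jump in volume.
    underlyingGain = Mathf.Lerp(
        underlyingGain,
        targetGain * Noise(Time.time),
        0.1f);
}
```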
III. Looped Instrument
Unlike the singular instrument, we want the looped instrument to remember the “note” it is playing—i.e. the background track itself.
Invoke and updates will operate on this event instance:
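A sketch of the class (again, names are my own):

```csharp
// assumes: using UnityEngine; using FMODUnity; using FMOD.Studio;

public class LoopedInstrument : Instrument
{
    // The single long-lived "note": the looping event itself.
    EventInstance instance;

    public override void Invoke()
    {
        instance = RuntimeManager.CreateInstance(Reference);
        instance.start();
    }

    public override void UpdateState(float targetGain)
    {
        // The loop plays continuously; we only steer its volume.
        instance.setVolume(targetGain * Noise(Time.time));
    }

    // Halting is discussed in the next paragraph.
}
```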
The complicated part is halting the instrument. We want to differentiate between a forced halt (stopping immediately) and an unforced halt (allowing the sound to fade out).
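FMOD's stop() takes a STOP_MODE that maps nicely onto this distinction:

```csharp
public override void Halt(bool forced)
{
    // IMMEDIATE cuts the sound off at once; ALLOWFADEOUT lets the
    // event exit its loop region and fade out naturally.
    instance.stop(forced
        ? FMOD.Studio.STOP_MODE.IMMEDIATE
        : FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
    instance.release();
}
```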
1. Master volume
We want to control the song as an entire entity. For this tutorial, we create the simplest type of control, which is setting a master volume (gain) for the entire song. In the Song class, create the following variable:
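For example:

```csharp
// Master volume for the whole song; every instrument's own gain
// is scaled by this value on each update.
[Range(0f, 1f)] public float gain = 1f;
```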
2. Setting up the instruments
Set up the following variables:
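Something like the following (the variable names are my own, matching the timelines created earlier):

```csharp
// The four instruments built in FMOD Studio, assigned by dragging
// the corresponding components into these slots in the inspector.
public LoopedInstrument background;
public LoopedInstrument whisper;
public SingularInstrument flute;
public SingularInstrument vibraslap;
```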
Switch back to Unity, and open “Sample Scene”. Select the “Main Camera” object and drag the Song script to the inspector. Then, drag two looped instruments and two singular instruments to main camera as well.
Then, click and hold on the title bars for each looped instrument, and drag them onto respective slots on the song component.
Do the same for singular instruments.
In the Reference variable of each instrument, click the magnifier icon and select the correct timeline. It will also be nicer to work with if each parameter can be set by dragging, instead of typing in a value. To achieve this, we add attributes to the parameters to "inform" the Unity editor about how they should look:
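Unity's built-in [Range] attribute, for example, draws a numeric field as a draggable slider (the probability and interval fields here are illustrative):

```csharp
[Range(0f, 1f)]  // drawn as a slider from 0 to 1 in the inspector
public float probability = 0.5f;

[Range(0f, 10f)] // seconds between note attempts
public float interval = 1f;
```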
Going back to Unity, you should see the inspector UI updating.
3. Playing notes
When the song component is enabled, we want to start all the looped instruments, and start coroutines that will generate notes from the singular instruments.
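A sketch of this, using the field names from earlier:

```csharp
void OnEnable()
{
    // Start the loops right away...
    background.Invoke();
    whisper.Invoke();

    // ...and let coroutines decide when the singular
    // instruments play their notes.
    StartCoroutine(NotePlayer(flute));
    StartCoroutine(NotePlayer(vibraslap));
}
```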
Then, implement the following note player coroutine.
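A minimal version, assuming the probability and interval fields sketched earlier:

```csharp
// assumes: using System.Collections;

IEnumerator NotePlayer(SingularInstrument instrument)
{
    while (true)
    {
        // Wait a while, then "throw a die": on success, play a note.
        yield return new WaitForSeconds(instrument.interval);
        if (Random.value < instrument.probability)
        {
            instrument.Invoke();
        }
    }
}
```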
Note that this basically "throws a die" once in a while; if the roll succeeds, it plays a note.
4. Updating Instruments
We want to have separate gains for each instrument. For example, we want the whispers to consistently sound quieter than the water sound, instead of both directly following the master volume. To achieve this, we adjust the Instrument implementations.
And each update, we want to multiply the master volume by these initial values, instead of directly passing the initial values.
Notice that we haven’t called the UpdateState functions of our instruments yet. This will be done in Song’s FixedUpdate method.
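For example (field names as in the earlier sketches):

```csharp
void FixedUpdate()
{
    // Each instrument's own base gain is multiplied by the master
    // volume, so the whispers can stay quieter than the water.
    background.UpdateState(gain * background.gain);
    whisper.UpdateState(gain * whisper.gain);
    flute.UpdateState(gain * flute.gain);
    vibraslap.UpdateState(gain * vibraslap.gain);
}
```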
5. Stopping the Instruments
When the song is disabled, we want to stop all the instruments. This means to halt the looped instruments and stop the coroutines.
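A sketch:

```csharp
void OnDisable()
{
    // Unforced halts, so the loops fade out instead of cutting off.
    background.Halt(false);
    whisper.Halt(false);

    // Stop scheduling new notes.
    StopAllCoroutines();
}
```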
6. Additional Info
When I was implementing the variables, I did not realize that FMOD scripting in Unity does not support setting pitch by semitones. Instead, setPitch takes a frequency multiplier: raising by one semitone multiplies the frequency by the twelfth root of two (2^(1/12)), and lowering by one semitone divides by it. We can call the setPitch method to set this multiplier directly.
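In code, the conversion is a one-liner:

```csharp
// setPitch takes a frequency multiplier, not a semitone count.
// +12 semitones (one octave up) gives a multiplier of exactly 2,
// and -12 gives 0.5.
float SemitonesToPitch(float semitones)
{
    return Mathf.Pow(2f, semitones / 12f);
}
```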
Also, Random.value cannot be called in a field initializer or constructor, so in Instrument the noise root must be set in the Start function instead:
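For example (the scaling factor is arbitrary; it just spreads the instruments out over the noise field):

```csharp
void Start()
{
    // Unity forbids calling UnityEngine.Random from constructors
    // and field initializers, so seed the noise offset here.
    noiseRoot = Random.value * 1000f;
}
```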
Now that the coding part is finished, switch back to Unity and hit "Play". You can drag the "gain" of the Song to set the volume, and you should hear all the timelines combined. Notice that each instrument's gain follows that of the song.
However, as you will probably notice: it’s really boring :/
To make it better and actually use the features we have implemented, we can adjust each instrument until it “feels right”.
The tutorial is basically done. But since we have come this far, we might as well add some tricks: namely, tweaking the flute so it plays a little tune instead of just random notes.
This is where good old OOP comes in handy. Simply derive a Flute class from the SingularInstrument class and add "some functionality".
In Song, we adjust the note player so that a silent note actually flushes the flute's parameters. This means each phrase will be semi-coherent internally, but phrases separated by silences may differ.
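As a very rough sketch of what "some functionality" could mean, here is one way to make successive notes walk along a scale instead of jumping around (the scale and walk logic are my own invention, not necessarily what Typical uses):

```csharp
public class Flute : SingularInstrument
{
    // Pentatonic scale steps, in semitones above the root.
    static readonly int[] Scale = { 0, 2, 4, 7, 9 };

    int lastStep; // where the current tune left off

    public override void Invoke()
    {
        // Wander to a neighboring scale step instead of picking
        // a completely random pitch.
        lastStep = Mathf.Clamp(
            lastStep + Random.Range(-1, 2), 0, Scale.Length - 1);
        // ...then play Scale[lastStep] the same way
        // SingularInstrument plays its random note.
    }

    // Called by Song's note player whenever a "silent note" comes
    // up, so the next phrase starts somewhere fresh.
    public void Flush()
    {
        lastStep = Random.Range(0, Scale.Length);
    }
}
```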
After adjusting some more parameters and removing the vibraslap because it doesn’t actually sound good, we get
But there really isn’t a barrier to adding more flutes! This can be done in two ways. The first one is adding another flute component and pairing it with a new flute2 variable. The second one is having two note players for the same flute variable. We will use both of these methods to create three flutes in total. The result is this:
Playing semi-random notes is often dangerous, as it walks the thin line between “special” and “cursed” music. An easy way to go around this problem is to play short segments of music instead of single randomized notes in order to preserve some structure within each phrase. Another solution is to have a better formalism for writing music and use that in the scripting process—this is essentially saying “git gud”.
For example, if we change the starting note of each phrase to only vary by octave
and make the phrases longer, we get:
Notice that some notes are going way too high, and there aren’t enough low notes. This can be solved by shifting the domain:
Now we allow the instrument to play from 3 octaves below all the way to 1 octave above.
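In other words, instead of drawing the octave symmetrically around the root, draw it from a shifted range (a sketch):

```csharp
// Octave offset in [-3, 1]: mostly below the root, a little above.
// Random.Range's upper bound is exclusive for ints, hence the 2.
int octave = Random.Range(-3, 2);
int semitones = octave * 12;
```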
We can even add chords by playing multiple notes at once, and have their notes separated by an interval. As I don’t know much about chords, especially when working with the pentatonic scale I am using, I will just have two notes displaced by an octave.
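A crude sketch of this idea, where PlayNote is a hypothetical helper that creates, pitches, and starts one event instance:

```csharp
// Two simultaneous notes an octave apart: the chosen pitch,
// plus the same pitch shifted up 12 semitones.
void PlayChord(int semitones)
{
    PlayNote(semitones);      // the root
    PlayNote(semitones + 12); // one octave up
}
```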
The result is this:
We can reduce memory usage by having FMOD "release" event instances we no longer need to track. This applies to basically all the notes from the singular instruments.