12
Advanced: Music

Summary: Musical stingers, melodic pickups, generative and sequenced approaches for variation, algorithmic / procedural music

Project:DemoCh12AdMusic01 Level:MusicPlatformer01

Introduction

In this section we’ll be looking at some more advanced approaches to music that treat it in more of a granular way, rather than working with relatively long prerendered waves.

You can skip forward to any screen of the level by holding Ctrl and then pressing the appropriate number key. Use A to move left, D to move right, and the Space Bar to jump.

Harmonically and Rhythmically Aware Stingers

Harmonically Appropriate Stingers

We’ve looked at stingers before in both Chapter 03 and Chapter 04. On those occasions we were writing our stingers to fit with whatever music was currently playing. This is fine if your stingers are rhythmic in nature or your music is harmonically static, since there’s no danger of your indeterminately placed stinger clashing with the harmony of the background music.

If we want to have music that changes chords, then we need to know what chord we’re currently on, and when a stinger is called, we need to pick an appropriate stinger that we know fits with that chord. *Screen 01* (Bookmark 1) of the advanced music map features your stealthy ninja sneaking across the rooftops. As you do so, you can pick up some loot, indicated subtly by the large rotating gold coins.

fig0576

The background music is a repeating three-bar sequence based around the chords of E Minor, F Major and A Minor. Each bar has its own stinger (actually an arpeggio figure), and so we need to set up a system that will play the correct stinger when a coin is picked up, depending on where we are in the music. In order to do this, we’ve set up a <Timeline> with a Float Track containing values that change at the start of each bar.

fig0577

These values are rounded up to the nearest integer and then used to control the -Switch- inside our {Harm_Stinger} Sound Cue.

fig0578

The background elements {AdM_Bassline} and {AdM_Drums_01} are also looped by an Event Track in the <Timeline>, just to keep everything in sync.

fig0579

This means that when you pick up a coin the {Harm_Stinger} cue is triggered, but it will play a different stinger depending on what bar the music is currently in so that the stinger and underlying harmony will match.
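The bar-tracking logic above can be sketched in a few lines. This is a minimal Python sketch of the idea, not the actual Blueprint: the names of the stinger banks are hypothetical, and the bar length (2.24 seconds, i.e., four beats at the 0.56 seconds per beat used on this screen) is taken from the beat timing described later in this section.

```python
import random

# Hypothetical sketch of the bar-aware stinger logic: the chord sequence
# repeats every three bars, and each bar has its own bank of stinger
# variations (arpeggio figures) that fit that bar's chord.
BAR_CHORDS = ["E minor", "F major", "A minor"]
STINGER_BANKS = {
    0: ["em_arp_a", "em_arp_b"],     # stingers that fit E Minor
    1: ["fmaj_arp_a", "fmaj_arp_b"], # stingers that fit F Major
    2: ["am_arp_a", "am_arp_b"],     # stingers that fit A Minor
}
SECONDS_PER_BAR = 2.24  # 4 beats at 0.56 s per beat

def pick_stinger(music_time_s: float) -> str:
    """Return a stinger that fits the chord of the current bar."""
    bar = int(music_time_s // SECONDS_PER_BAR) % len(BAR_CHORDS)
    return random.choice(STINGER_BANKS[bar])
```

In the level itself this bar index comes from the <Timeline>'s Float Track rather than from elapsed time, but the mapping from "current bar" to "harmonically safe stinger" is the same.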

Rhythmically Synched Stingers

As well as making your stingers fit harmonically, you might want them to actually play in time with the underlying score so that when a pickup occurs, they don’t play immediately but perhaps wait until the next musical beat to do so. *Screen 02* demonstrates an example of this.

fig0580

Here we continue to use the accompanying music from *Screen 01*, but if you look in the [Level Blueprint] below the <Timeline>, you’ll see that we are also starting a <Timer> object. This loops around and finishes every 0.56 seconds (i.e., every beat of the music). It’s also restarted at the start of every bar to ensure the <Timer> stays in sync with the <Timeline>.

fig0581

We’ve created a function named “Beat” for the <Timer>, and we pick this up as a <Custom Event> for our rhythmically-aligned stingers. Every time we receive a coin collected event from the coin Blueprints in the level, a <Gate> is opened to allow the next beat through. Now our stingers will play in time with the music.
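The gate mechanism can be sketched as a small state machine. This is an illustrative Python sketch of the Blueprint's <Gate> behavior (the method names are hypothetical), not the actual implementation:

```python
class BeatGate:
    """Sketch of the <Gate> idea: a coin pickup 'opens' the gate, and the
    stinger only fires on the next beat tick, after which the gate closes."""
    def __init__(self):
        self.open = False
        self.fired = []   # beat indices on which a stinger played

    def on_coin_collected(self):
        self.open = True              # open the gate; wait for the beat

    def on_beat(self, beat_index: int):
        if self.open:
            self.fired.append(beat_index)  # play the stinger here, on the beat
            self.open = False              # close until the next pickup
```

The pickup itself makes no sound; the music system "catches" it on the next beat, which is what keeps the stinger in time.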

fig0582

Sequenced Melodies

Play for Proximity

On *Screen 03* (Bookmark 3) there is an enemy guard. To highlight the tension of being near him, we’ve introduced some cartoon-style creeping musical footsteps. We check that we’re within a given distance (Footstep Threshold) of the enemy and at the same height (i.e., on the same platform, not shown below), and if both these conditions are true (tested with an <AND>), then the <Footstep> event is allowed to <Branch> to True and play the {Walking Bass} Sound Cue (can you see what we did there?).
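The two-condition test can be sketched as follows. This is a hedged Python sketch of the logic only; the threshold and tolerance values are hypothetical placeholders, not the values used in the level:

```python
def should_play_footstep(player_pos, enemy_pos,
                         footstep_threshold=400.0,   # hypothetical distance
                         height_tolerance=10.0):     # "same platform" tolerance
    """Both conditions must be true (the <AND> in the Blueprint):
    within the threshold distance AND at (roughly) the same height."""
    px, py, pz = player_pos
    ex, ey, ez = enemy_pos
    close = ((px - ex) ** 2 + (py - ey) ** 2) ** 0.5 <= footstep_threshold
    same_height = abs(pz - ez) <= height_tolerance
    return close and same_height
```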

fig0583

Chained Melodic Pickups

On *Screen 03* the walking bass notes in the Sound Cue were chosen randomly, but you may want to instill a sense of climax and reward when the player achieves a chain of pickups by working through a melodic phrase. On *Screen 04*, if you are on a streak of pickups the melody progresses, but if there’s a gap where you miss one, it resets and you start the phrase again.

fig0584

Our melodic phrase is defined in the {Chained Pickups} Sound Cue through a sequence of -Switch- inputs.

fig0585

Each time a pickup is collected, the Pickup Count is incremented by one and this goes to the <Set Integer Parameter> to control the -Switch-. If the Pickup Count reaches 6 (the length of our phrase) or the player misses a pickup (i.e., hasn’t picked anything up within the last two seconds), then the Pickup Count is reset, ready to start the phrase again.
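The counting and reset logic can be sketched like this. A minimal Python sketch, assuming the class and method names (the actual system lives in the [Level Blueprint] and the {Chained Pickups} Sound Cue):

```python
class PickupChain:
    """Sketch of the chained-melody counter: each pickup advances the
    phrase; reaching the end of the 6-note phrase, or a gap of more than
    2 seconds between pickups, resets the count."""
    PHRASE_LENGTH = 6
    TIMEOUT_S = 2.0

    def __init__(self):
        self.count = 0
        self.last_pickup_time = None

    def on_pickup(self, now: float) -> int:
        timed_out = (self.last_pickup_time is not None
                     and now - self.last_pickup_time > self.TIMEOUT_S)
        if timed_out or self.count >= self.PHRASE_LENGTH:
            self.count = 0            # reset, ready to start the phrase again
        self.count += 1               # advance to the next note in the phrase
        self.last_pickup_time = now
        return self.count             # drives the -Switch- input in the cue
```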

fig0586

Algorithmic or Procedural Forms

There’s a long history of algorithmic music (built using a systems approach) and generative music (a systems approach that includes elements of randomization) that we can learn a lot from in terms of creating music that is more variable, and potentially more reactive. If we want a finer level of control over our music, then we need to think at a more granular, note level. For example, you might want to represent the development of a character in an RPG or the evolution of a city in a simulation-type game. As with sound, the number of possible permutations starts to mean that prerecording them all becomes an impossibility, and so you might want to start to look at a more procedural or algorithmic approach.

Generative Combinations

Writing music that will be played back with some degree of freedom or emergent behavior requires a change in mindset from the typical linear composition that you may be used to—but it can be quite liberating!

Simple Random Combinations

A simple way to generate variation in your music tracks is to randomly recombine the different layers that make up the track. For example in *Screen 05* (Bookmark 5), the music consists of a nyatiti line {AdM_Bassline}, a low drum part {AdM_Drums_01}, a pizzicato cello part {AdM_C_Bass02_01}, and some stick percussion {AdM_C_Sticks_01}.

fig0587

The Sound Cue {Combinations_01} is set up with a -Random- node and a series of -Mixers- that will combine these in different ways.

fig0588

Combinations with Variations

Musicians will typically add small variations as they play, perhaps a slight change in timing or an extra note or two. By having a few versions of each part, you can maintain interest (as heard in *Screen 06* {Combinations_02}).

fig0589

Asynchronous Loops

We looked at asynchronous or phasing loops (that is, the variation that comes from combining loops of different lengths) in terms of ambience, but this can also work well for music, so long as the loops are edited to contain a whole number of beats. It works best with rhythmic elements; with harmonic ones, the combinations are often too complex to predictably avoid clashes.

If we had one rhythmic loop that lasted three bars and another that lasted four bars, then we would only hear the same music repeated after 12 bars. In other words, what you would hear at arrow marker 1 in the diagram below would not occur again until arrow marker 5. Although the start of part one will always sound the same, the overall music will sound different because you are hearing it in combination with a different section of part two each time. Only after four repetitions of part one (and three of part two) will you hear the start of part one and the start of part two at the same time, as you did at the beginning.

fig0590

A 6-bar and 8-bar combination would only repeat every 24 bars, and a 7-bar and 9-bar combination only every 63 bars!

The music for *Screen 07* is from the Sound Cue {Phasing_Percussion}, which contains a 4-bar loop (780 kb), a 7-bar loop (1,171 kb), a 9-bar loop (1,756 kb), and an 11-bar loop (2,146 kb). By finding the lowest common multiple of the loop lengths (4 × 7 × 9 × 11 = 2,772 bars, since these lengths share no common factors), we can work out that in order for them to come back into sync, so that we hear exactly what we heard when they all began to play, we would have to play the 4-bar cue 693 times, the 7-bar cue 396 times, the 9-bar cue 308 times, and the 11-bar cue 252 times. The 4-bar loop lasts 9 seconds (2.25 seconds per bar), so that means we get around 1 hour and 44 minutes of music before it repeats: not bad for 5.8 MB of music (listen to it and check if you don’t believe us).
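The arithmetic above is just a lowest-common-multiple calculation, which can be verified with a few lines of Python:

```python
from math import gcd
from functools import reduce

def lcm(*values: int) -> int:
    """Least common multiple: the number of bars before the combined
    phasing pattern repeats exactly."""
    return reduce(lambda a, b: a * b // gcd(a, b), values)

cycle_bars = lcm(4, 7, 9, 11)            # 2772 bars before an exact repeat
plays = {n: cycle_bars // n for n in (4, 7, 9, 11)}  # repeats of each loop
seconds = cycle_bars * (9 / 4)           # the 4-bar loop lasts 9 s, so 2.25 s/bar
```

Because 4, 7, 9, and 11 share no common factors, the LCM is simply their product; choosing loop lengths this way maximizes the time before the texture repeats.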

Of course, although every detail will not repeat until the cycle is complete, this does not mean that it’s necessarily interesting to listen to! That is a judgment for you to make, but it is a useful tool for producing non-repetitive textures.

fig0591

Granular Note-level Sequences

*Screen 08* (Bookmark 8) is an example of how you could start to treat your game engine as a musical sequencer. The 16 outputs of the <MultiGate> act as a step sequencer, and the <Beat_V02> event is triggered every beat. The taiko (bottom right) ticks along with a randomized accompaniment while the other elements are attached to different beats. In the Sound Cues of each part there is randomization, and sometimes the inputs to the -Random- node are left blank, so sometimes that part won’t play at all. Working like this not only allows a great deal of variation, but also allows the music to be very responsive, since you can bring parts in and out (by muting or unmuting them) on the next beat. (This is not to be confused with granular synthesis; we’re using “granular” in the sense that we’re working with note-level grains rather than wave chunks.)
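The step-sequencer idea can be sketched as follows. This is an illustrative Python sketch, not the level’s Blueprint: the part names and patterns are invented, and `None` entries stand in for the blank -Random- inputs that let a part fall silent on some steps:

```python
# Each part stores a 16-step pattern; None = silence on that step,
# mirroring the blank inputs to the -Random- nodes in the Sound Cues.
PATTERNS = {
    "taiko":  ["hit"] * 16,                                # every beat
    "shaker": ["shk", None] * 8,                           # off-beats silent
    "flute":  [None] * 12 + ["phrase_a", None, "phrase_b", None],
}

def on_beat(step: int, muted=()) -> list:
    """Called once per beat (the <Beat_V02> event); returns the
    samples to trigger on this step of the 16-step cycle."""
    triggered = []
    for part, pattern in PATTERNS.items():
        if part in muted:
            continue                  # parts can be muted/unmuted per beat
        note = pattern[step % 16]
        if note is not None:
            triggered.append((part, note))
    return triggered
```

Because muting is checked on every beat, a part can be brought in or dropped out with at most one beat of latency, which is what makes this approach so responsive.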

fig0592

Using Random Seeds

Interestingly, the random numbers produced by computers are not actually random at all! They are pseudorandom, generated by a deterministic algorithm. When constructing procedural or algorithmic music systems, this can be very useful.

If we seed a random number generator, then we get a predictable set of outcomes. For example, if we have a random seed of 234 and ask for integers in the range 1–100, then we will predictably receive 37, 95, 60, 67, 4, 76, 21, and so on. The system in *Screen 09* uses this characteristic to generate repeatable musical patterns. We set up a timer to output a pulse at the tempo we wanted (every 0.25 seconds = 240 BPM, but these are actually half-beats, so our tempo is our old friend 120 BPM). Every 8 half-beats (i.e., once per bar) we reset the random seed, and so we get a predictable, repeatable series of 8 numbers.
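The same trick works in any language with a seedable generator. A minimal Python sketch (note that Python’s generator will produce a different specific sequence than the engine’s 37, 95, 60, … example above, but the principle of a repeatable sequence per seed is identical):

```python
import random

def bar_pattern(seed: int, length: int = 8, low: int = 1, high: int = 100):
    """Re-seeding at the start of each bar makes the 'random' sequence
    repeat exactly, giving a stable 8-number musical pattern per seed."""
    rng = random.Random(seed)      # a fresh, deterministically seeded generator
    return [rng.randint(low, high) for _ in range(length)]
```

Changing the seed gives you a completely different, but equally repeatable, bar of music, so a single integer can stand in for an entire pattern.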

fig0593

We could just send these to control the -Switch- in our Sound Cue via a <Set Integer Parameter>, but that would give us a very repetitive 8-note (one-bar) pattern. Since we only have 5 notes in our Sound Cue, we’ve restricted the output of the seeded random to give us numbers in this range.

fig0594

Instead we randomly weight the likelihood of each note actually being played. By doing this, we’ll retain the underlying character of the pattern but will introduce variation regarding which notes will actually get played, much like a musician might sometimes add notes or miss ones out to create variety.

fig0595

This random weighting system takes the 8-pulse note count and uses it to read through a curve. The curve is 8 seconds long because we are treating the 8-pulse count as seconds with which to read through it (this has nothing to do with time; it is just an easy way to read out the contents of the curve).

fig0596

We generate another random number to compare the output of the curve to. This is a way of giving us a percent chance of the note playing. If our curve defines note one (the reading at 1 second) as being 0.5, then the chances of our <Random Float in Range> producing a number less than or equal to 0.5 is 50%. If our curve output is 0.25, then the chances of the random number falling within this range is 25%.

So our curve is like a probability table, defining the likelihood of each note. For example, this curve would never play the first note (0% chance), but would always play the second (100% chance).
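The probability comparison can be sketched in one line. An illustrative Python sketch of the comparison (the curve values shown are hypothetical, not the level's bass-line curve); a strict less-than means a reading of 0.0 never plays and a reading of 1.0 always does:

```python
import random

def note_plays(probability: float, rng=random) -> bool:
    """The curve output is a probability: compare it against a uniform
    random float. A reading of 0.5 plays the note about half the time;
    0.0 never plays, 1.0 always plays."""
    return rng.uniform(0.0, 1.0) < probability

# A hypothetical curve read out per step: never play step 1, always play
# step 2, then various weighted chances.
curve = [0.0, 1.0, 0.25, 0.5, 0.75]
```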

fig0597

And this one would do the opposite.

fig0598

Our actual curve for the bass line is a lot more interesting than this, as it introduces lots of weighted probability giving us a wide variety of possible outcomes.

fig0599

You could extend this to read through arrays of notes that define certain keys or modes, or to weight certain harmonically important notes, for example. There are many more techniques associated with algorithmic composition, such as Markov chains or cellular automata, that also have potential applications in games. These would not only allow us to produce music of great variety (within defined parameters), but also music that can be responsive to gameplay at a note, rather than track, level.

Rhythm-action

Rhythm-action games come in all kinds of guises but are united by the underlying principles that the player will gain advantage through performing actions in synchrony with musical events. Although the core approach of a performance where you play along to pre-existing music remains popular, there has also been a lot of innovation in this field in the last few years that continues to blur the lines between gameplay, performance, improvisation, and even composition.

fig0600

The demo levels here are for a bit of fun (you deserve it!) and as a basis for you to develop further if you wish. Given that this is an advanced chapter, we will trust that you can make some kind of sense of them through examining the systems themselves, so we will just provide a brief overview.

Graphically Led

  • Project: DemoCh12Rhythm01
  • Level: GraphicallyLed01

The descending ships represent the notes of the bass line that you must play in time by pressing the keys A, S, and D to move the target into the correct one of the three positions. We call this a graphically led style of game since the notes are represented in a piano roll style on the screen, and you could theoretically play these with the sound turned off. Other games might be more audio led, in that you actually have to listen in order to master the game.

fig0601

Two synchronized tracks are started, the {Accompaniment} and the {Bassline}. The bass line is immediately muted by the <Set Volume Multiplier> (set to 0.01 so as not to actually stop the track and get out of sync).

The system is controlled by note sequences in a [Matinee]. These are identical but offset by the amount of time it takes the ships to reach the bottom of the screen. The Note01 event in the illustration below spawns a ship that starts to move down the screen, and the PlayNote01 event opens a Window variable (<Set>s it to True) for a set length of time, during which we expect the player to attempt to play this note.

fig0602

fig0603

When the player presses a key, we check to see if the window is open (or True). If it is then the player has correctly timed their input with a note, and we spawn the explosion and momentarily unmute the {Bassline} track. If the window is not open (False), then we play a DuffNote sample instead.
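The window check can be sketched as a small class. A hedged Python sketch of the logic only (the window length is a hypothetical placeholder; in the level it is set per note in the [Matinee]):

```python
class NoteWindow:
    """Sketch of the timing window: a PlayNote event opens the window
    for a set length; a key press inside it counts as a correct hit."""
    def __init__(self, window_length: float = 0.3):  # hypothetical length
        self.window_length = window_length
        self.open_until = -1.0

    def open(self, now: float):
        """Called by the PlayNote event (<Set>s the Window to True)."""
        self.open_until = now + self.window_length

    def on_key_press(self, now: float) -> str:
        if now <= self.open_until:
            return "unmute_bassline"   # correct timing: let the note sound
        return "duff_note"             # window closed: play the DuffNote
```

Widening the window makes the game more forgiving; narrowing it demands tighter timing from the player.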

fig0604

Call and Response

  • Project: DemoCh12Rhythm01
  • Level: CallandResponse01
fig0605

Rather than muting and unmuting a synchronized music track or stem, call and response games tend to use repeated stingers for the particular notes you must echo. The lines at the top represent 8 musical beats, and the player must successfully respond by imitating the pattern they just heard.

Each pattern is stored in an array with 8 elements. This is read through with a <ForEachLoop> and places each element of the array (if present) at its designated spawn point (the [Target Point] Actors’ positions) along the 8 beats.

fig0606

This gives us a series of pickups placed rhythmically across the screen, each of which is associated with a specific key press A, S, or D. A <FlipFlop> alternates the system between audition mode, where the player listens for the pattern, and play mode, where they attempt to recreate it.

A ship ([GAB_SpaceShooterEnemy_Small]) is spawned and starts to move across the screen, passing each beat marker at the correct time thanks to its designated speed. When it overlaps a pickup, it sets the Boolean variable associated with that pickup (for example, AOpen) to True, and when the player presses a key, we check the condition of that Boolean. If it is True, then we play the pickup sound and award some points; if not, we subtract some points. We can set the degree of accuracy we require by changing the size of the box around the ship that gives us the overlap events.
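The 8-element pattern array and the scoring check can be sketched as follows. This is an illustrative Python sketch with an invented pattern and invented point values; the actual system uses per-pickup Booleans and overlap events as described above:

```python
# Hypothetical 8-beat call pattern: each element is the key the player
# must echo on that beat, with None marking a rest (no pickup spawned).
PATTERN = ["A", None, "S", None, "A", "D", None, "S"]

def score_press(beat: int, key: str, score: int = 0) -> int:
    """Score a key press against the pattern element for this beat:
    a matching key on a pickup beat awards points; anything else
    (wrong key, or a press on a rest) subtracts them."""
    expected = PATTERN[beat % 8]
    if expected is not None and key == expected:
        return score + 10          # correct echo: award points
    return score - 5               # wrong key or a rest: subtract points
```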

fig0607

Conclusion

There are many ways to utilize established methods of algorithmic or generative composition in games that have yet to be explored and this continues to be an exciting field of innovation. Once we are able to work on a more granular note level, the possibilities for integrating and adapting music with gameplay are huge, and make many current systems that simply rely on starting, stopping, or changing the volume of prebaked musical stems look pretty primitive. There’s lots of fun to be had here!

For further reading please see the up-to-date list of books and links on the book website.
