Explaining two main challenges in Adaptive Audio
By Matthias Wagner
When we say Adaptive Audio, some of our friends think it sounds kind of futuristic and also pretty complicated. But we believe it's easy to explain, so we had a little chat with our audio developer Felix Niedermair to help us with that.
Let’s ask Felix
At WLA, Felix is responsible for integrating music, ambience, and sound effects into our Engine, and he says that if we want to dive into the magic of Adaptive Audio, we'd best start with two key terms: re-sequencing and re-orchestration.
The concept is simple: re-sequencing works horizontally and refers to the organization and seamless blending of different pieces of music or ambience on the timeline. Re-orchestration, on the other hand, can be seen as vertical: it is about shaping the character and sound of the music in real time, using different arrangement layers and sound effects.
The horizontal challenge: re-sequencing
The goal is to create an organic transition between two pieces. By that we mean much more than an ordinary fade-out and fade-in: a transition that feels so natural it seems not to have been generated by software at all, but composed and arranged that way from the start.
The crucial thing here is timing, and that first of all requires a whole new, non-linear way of composing and producing. You have to conceive each transition so that it not only fits the current piece, but also opens a door to the next one. And more than that: we want to be able to leave one piece and enter any other – at any time.
Feeding tracks with metadata
We approach this in a completely new way, for which Felix feeds our engine with certain metadata. Part of that metadata concerns the so-called sync points in a track: the positions where an entry or exit is possible.
Other metadata specify how the transitions between different pieces can be designed. For this there are specially produced musical fills that function as outros and intros. They are exactly what a composer, band, or producer would provide as typical transitions between parts: drum rolls, brass pickups, a glissando signaling the beginning of a new part, a short harmonic modulation, or the riser and drop in electronic dance music.
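To make this more concrete, here is a minimal sketch of what such track metadata could look like. All names and fields are our illustration, not the actual Welove.audio schema:

```ts
// Hypothetical track metadata: where a piece may be entered or left,
// and which pre-produced fills are available for the transition.
type SyncPoint = {
  bar: number;                      // musical position of the sync point
  kind: "entry" | "exit" | "both";  // what is allowed here
};

type TransitionFill = {
  role: "outro" | "intro";
  audioFile: string;     // the specially produced fill, e.g. a drum roll
  lengthBeats: number;   // duration of the fill, in beats
  crossfadeMs: number;   // how the fill blends with the main track
};

type TrackMetadata = {
  trackId: string;
  bpm: number;
  syncPoints: SyncPoint[];  // assumed sorted by bar
  fills: TransitionFill[];
};
```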
Endless possibilities
These are the core features we work with for re-sequencing in the Welove.audio Engine. It's fascinating but also very complex: a single two-minute piece of music contains at least 10 to 20 sync points and fade descriptions from which the transition to the next piece can be initiated.
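Using the metadata sketched above, the scheduling step can be pictured roughly like this. Again, a simplified illustration of the idea, not the engine's actual code:

```ts
// Find the next allowed exit after the current playhead position
// (sync points are assumed sorted by bar).
function nextExit(meta: TrackMetadata, currentBar: number): SyncPoint | undefined {
  return meta.syncPoints.find(
    (p) => p.bar > currentBar && (p.kind === "exit" || p.kind === "both")
  );
}

// When a transition is requested, wait for the next exit sync point,
// then play the current piece's outro fill followed by the next piece's intro.
function scheduleTransition(from: TrackMetadata, to: TrackMetadata, currentBar: number) {
  const exit = nextExit(from, currentBar);
  if (!exit) return; // no exit left in this piece; let it play out
  const outro = from.fills.find((f) => f.role === "outro");
  const intro = to.fills.find((f) => f.role === "intro");
  // In a real engine these would be sample-accurate audio events.
  console.log(
    `leave "${from.trackId}" at bar ${exit.bar}`,
    outro ? `via ${outro.audioFile}` : "(hard cut)",
    intro ? `then enter "${to.trackId}" via ${intro.audioFile}` : ""
  );
}
```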
In the case of our soon-to-be-released app TableTone, that next piece is selected from hundreds of options, always depending on the game's current situation and location. The app automatically picks the best-fitting music from the catalogue by means of an intelligent "proximity search" algorithm that incorporates the user's favorites and preferences.
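We won't reproduce the actual selection logic here, but a proximity search of this kind can be pictured as a nearest-neighbour lookup over descriptive features, with favorites nudging the score. A rough sketch under those assumptions:

```ts
// Each track carries descriptive features (e.g. mood, energy, location
// affinity as numbers) plus a boost for user favorites. The engine picks
// the track closest to the current game state. Purely illustrative.
type TrackProfile = { trackId: string; features: number[]; favoriteBoost: number };

function pickClosest(target: number[], catalogue: TrackProfile[]): TrackProfile {
  const score = (t: TrackProfile) => {
    // Euclidean distance between the track's features and the target state
    const dist = Math.sqrt(
      t.features.reduce((sum, v, i) => sum + (v - target[i]) ** 2, 0)
    );
    return dist - t.favoriteBoost; // favorites win ties and near-ties
  };
  return catalogue.reduce((best, t) => (score(t) < score(best) ? t : best));
}
```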
Learning to trust
All this means that the number of possible combinations and transitions grows exponentially, and we can't possibly check every single variation manually. So we've learned to trust the Engine, and we keep stumbling upon great moments. Sometimes two pieces in combination sound quite different from what we expected, yet the result is astonishingly good every time.
The vertical challenge: re-orchestration
While re-sequencing is about the seamless transition between different pieces of music, re-orchestration is about the possibility of discovering different sound worlds within a single piece. As we all know, one can completely change the mood and character of the music by adding or omitting elements such as drums, synthesizers, orchestral parts, guitars, vocals, soloists, etc.
To do this, we work with different layers in the arrangement that can be automatically faded in and out, and also processed. This lets us create dynamic remixes, controlled by any number and combination of mapped parameters coming from the client app's business logic.
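The core idea can be sketched as layers that each declare how they respond to named parameters, with the engine resolving the mix from whatever the app sends. The parameter names and mappings below are invented for illustration:

```ts
// Each arrangement layer maps the current parameter set to a gain (0..1).
type Layer = {
  name: string; // e.g. "drums", "strings", "vocals"
  gainFor: (params: Record<string, number>) => number;
};

const layers: Layer[] = [
  { name: "drums",   gainFor: (p) => p.intensity },             // louder as intensity rises
  { name: "strings", gainFor: (p) => 1 - p.intensity * 0.5 },   // recede at high intensity
  { name: "vocals",  gainFor: (p) => (p.mood > 0.5 ? 1 : 0.3) } // present in brighter moods
];

// Resolve the gain of every layer for the current parameter values.
function resolveMix(params: Record<string, number>): Map<string, number> {
  return new Map(layers.map((l) => [l.name, l.gainFor(params)]));
}
```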
For example, if you love saxophones, wouldn’t it be great to mute everything but that crazy solo in a jazz performance? Or, when you dance, why not change the sound of a track so that it will actually fit your mood and emotions at that very moment? Why not take out the beat of some Billie Eilish song, or maybe, if you feel like it, listen to only the beat?
Keeping it intuitive
To keep it simple for the user, our Engine translates this process to an intuitive and emotional level. For example, in our app TableTone you end up operating just a few buttons to choose the location and situation of the game – and two sliders: one for the desired basic mood, and a second for the preferred sonic intensity.
At this point the engine takes over, creating smooth, seamless transitions between the pieces of music and the ambience sounds, and operating all the pre-produced sound layers and effects to produce immersive, emotional soundscape changes from minimal user interaction – in real time.
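Assuming a layer mapping like the one sketched earlier, the two sliders are simply two parameters feeding that mapping; everything else lives in the metadata:

```ts
// Two slider values in, a full layer mix out (using the illustrative
// resolveMix from the earlier sketch).
const mix = resolveMix({ mood: 0.8, intensity: 0.35 });
// => e.g. drums at 0.35, strings at 0.825, vocals at 1
```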
Special FX
To enable adaptivity through re-orchestration (or re-mixing), the engine is not only capable of playing associated layers in parallel and controlling the volume relations between them, but also features object-related plugin processors such as filters, delays, and compressors, dramatically changing the musical experience according to user or game input.
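As a toy example of such a per-object processor, here is a one-pole low-pass filter whose cutoff is driven by a mapped parameter. A real engine would use proper DSP plugins; this only illustrates the idea:

```ts
// One-pole low-pass over a block of samples; cutoff01 in (0..1] acts as
// the smoothing coefficient (higher = brighter). Illustrative only.
function lowPass(samples: Float32Array, cutoff01: number): Float32Array {
  const a = Math.min(Math.max(cutoff01, 0.001), 1);
  const out = new Float32Array(samples.length);
  let y = 0;
  for (let i = 0; i < samples.length; i++) {
    y += a * (samples[i] - y); // y[n] = y[n-1] + a * (x[n] - y[n-1])
    out[i] = y;
  }
  return out;
}
```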
Real time arrangements, beautiful and organic
As with re-sequencing, a lot of metadata needs to be implemented to make sure our engine knows how to manage the individual musical elements and effects. Tweaking parameters and values for the perfect performance requires sensitivity and a love of detail from the producer. But the results sound beautiful and organic, just as if they had been composed for that specific situation. We are already having a lot of fun playing around with it.