Welcome back to Scoring for Films with Vito Lo Re and Fabrizio Campanelli. The family has expanded once again today. Who is our guest? A fantastic beast! A brain that didn't flee abroad. Here we have an Italian excellence.
Like buffalo mozzarella... Emanuele Paravicini. Welcome. Thanks, hello everyone. Our friends have already heard about this, because I have used SWAM on various occasions. Last time we did an episode on sample libraries, and today we will look at, I don't want to say a competitor, but another way of using virtual sound.
Emanuele with Stefano Lucato and the whole Audio Modeling team is one of the creators of this technology that blew my mind. Today we are talking about something truly special, but before introducing it, let’s show something... first. Did you like how they played? Yes, I would say so. We have to reveal a small secret: we didn’t actually listen to what the people shown in the video were playing, but we were listening to the SWAM.
And that’s the incredible thing. Tell us how the company was born, who had the idea, and a bit about you. It all started at the end of 2009. I received an email from Stefano Lucato, whose name I had read in a music technology magazine, and he said: "I have a project in mind; since you are a software engineer, I need someone to help me develop it." He was working in Kontakt but was hitting its limits, because Kontakt is a sampler and he wanted to do something much more model-based. I jumped at the chance and told him, "Okay, I'll help you." We worked for almost two years on the first instrument, the Soprano Sax, which we released in 2011, and it was very successful. From there we grew: in 2017 we founded Audio Modeling, we continued to grow, and now we are three partners and ten employees developing plugins and other solutions. So, explain to us in the simplest terms what the difference is between a sample library and what you do.
A sample library is a collection of many recorded samples, with an algorithm that selects among them based on velocity or on the chosen articulation. Modeling, on the other hand, is based on the mathematical virtualization of a mechanical system to achieve certain outputs. Just like in reality: with a violin, we draw the bow which, by rubbing against the string, sets the string into resonance. The same thing happens in the digital world: we create the digital conditions to make these numbers oscillate according to mathematical and physical laws that we have studied for years and that emulate the behavior of the mechanical instrument. What is the difference between what you do and a '70s keyboard that had, for example, the sound of a clarinet? Well, on those keyboards the sound was generated by an oscillator: the clarinet sound was more or less a square wave. So what did they do?
They started from the result: if I know I need to get a square wave, I create an oscillator that already has a square wave. But obviously, by doing so, you don't have all the nuances that a real instrument has. On the other hand, physical modeling starts from the conditions that will then generate that waveform; that waveform is the final result of processes that simulate the acoustic behavior of the instrument. Let's open the session of our friends who unknowingly played and see... some automation... which, even as a modern painting, isn't bad...
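To make the distinction concrete for readers, here is a toy Python sketch (my illustration, not Audio Modeling's code): a square-wave oscillator bakes the result in ahead of time, while a Karplus-Strong plucked string, one of the simplest physical models, lets the waveform emerge from a simulated string that is excited and then loses energy.

```python
import math
import random

def square_wave(freq, sr, n):
    """Fixed-recipe oscillator: the waveform is decided in advance."""
    return [1.0 if math.sin(2 * math.pi * freq * i / sr) >= 0 else -1.0
            for i in range(n)]

def plucked_string(freq, sr, n, seed=0):
    """Karplus-Strong: the waveform emerges from a modeled string.

    A delay line (the 'string') is excited with noise (the 'pluck');
    a simple two-sample average acts as energy loss at the bridge.
    """
    random.seed(seed)
    delay = [random.uniform(-1, 1) for _ in range(int(sr / freq))]
    out = []
    for _ in range(n):
        s = delay.pop(0)
        delay.append(0.5 * (s + delay[0]))  # lowpass feedback = damping
        out.append(s)
    return out
```

The oscillator always produces the same two values; the modeled string starts noisy, settles into a pitched tone, and decays, all without any of that being written into the code explicitly, which is the point Emanuele is making.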
We have lots of automation, and this means we have lots of expressive possibilities. This demo was made by Luigi Zaccheo, who is a professor at the Saint Louis College of Music in Rome. It's an exceptionally well-executed automation, with a great level of refinement. Any choice regarding the use of instruments is always a trade-off: a compromise between the resources we can employ, including time, and the goal we need to achieve.
It's not always necessary to automate all these parameters. It depends on the result you want to achieve. Obviously, this was done precisely to show that it is possible to control many parameters and thus achieve all those nuances needed to obtain that performance. And we see in the interface the controls, starting with the most important ones. The most important ones are on the home page and then there are all the pages that allow you to navigate all the various parameters. For example, in the timbre we can choose the instrument body, i.e., the type of instrument body.
We have given them names of Italian cities, up to the electric violin, which is the one without a body. This is very useful when you create a small ensemble: you can diversify them so they don't all play in exactly the same way. Like in a real orchestra, where each player has a different violin, right? When you buy a SWAM you don't buy just one instrument, you buy many of them, almost infinite, because you can adjust so many parameters. With a sample library you get that one sampled violin; it's unlikely you'll have three or four sampled solo violins. So that's another advantage.
Another thing I find interesting is the bow noise. You may decide how much bow noise you hear, even the amount of rosin. The rosin put on the bow determines these nuances during bow changes or attacks. Maybe if you make just one bow stroke, you don't realize it, but if you listen to the performance... For example, in Baroque music, where you can hear a lot of this very aggressive grip on the string; maybe we want to emphasize the rosin "sound". And then, Emanuele, we also have here something curious: the pitch.
The master tune has been lowered to 415. Actually, compared to the original recording - which we hear now while we're talking - the tuning of the SWAM has been aligned. It’s aligned because the recording actually sounds lower than 440. On Baroque repertoire, generally, A is always very, very low, it’s never set to 440. This is not stretching the sample, but it's actually a relaxation of the string that therefore achieves a lower pitch. I won't even ask how you manage to achieve this result, I don’t want to know!...
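The 440 → 415 shift is easy to quantify: it scales every frequency by 415/440, which works out to roughly a semitone down. A quick check in plain Python, just for illustration:

```python
import math

A_MODERN, A_BAROQUE = 440.0, 415.0

# Every pitch is scaled by this ratio when the master tune drops to 415 Hz.
ratio = A_BAROQUE / A_MODERN          # ~0.943

# Expressed in cents: 1200 * log2(ratio), close to -100 (one semitone down).
cents = 1200 * math.log2(ratio)
```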
Let’s hear the solo violin. You had the wonderful idea not only to offer the modeling of the single instrument, but also of the section. Actually, here we only heard the violin, but we can also hear the section of the first violins, of which we can also choose the number of players. This is another important thing, because we mentioned last time when talking about samples, that obviously each library has a specific number of musicians with which it was recorded. Some might do it with 14 first violins, some with fewer... Being able to choose the number of musicians is interesting, right?
Yes, we've chosen this option: for each violin section it is possible to choose from 4 to 6 violins, so small sections; if you want larger sections, you might use two or three tracks to obtain a larger ensemble. The section cannot exist without spatialization. Okay, so in fact we also modeled the space these instruments are in, because the sound of the ensemble is heavily influenced by that space and by its reflections. When you have a large section, it's clear that the reflections from the violin on your left, or behind you, reach your ears or the microphones differently; they interact with each other with delays and phasing that give this realism. Here's a difference compared to samples: layering several sampled tracks is not the same as recording the sound of a genuinely larger ensemble, whereas with modeling the model itself expands the generation of the sound to the larger ensemble.
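The "layering" approach that gets contrasted here with true section modeling amounts to mixing slightly detuned, slightly delayed copies of one source. A toy Python sketch using sine-wave "players" (an assumption for illustration only, not SWAM's method):

```python
import math
import random

def player(freq, sr, n, detune_cents, delay_samples):
    """One sine 'player', slightly detuned and delayed versus the others."""
    f = freq * 2 ** (detune_cents / 1200)
    tone = [math.sin(2 * math.pi * f * i / sr) for i in range(n)]
    return [0.0] * delay_samples + tone[:n - delay_samples]

def section(freq, sr, n, players=4, seed=1):
    """Mix several slightly different players; the phasing between them
    is what suggests a 'section' (here with toy sine stand-ins)."""
    random.seed(seed)
    mix = [0.0] * n
    for _ in range(players):
        p = player(freq, sr, n,
                   detune_cents=random.uniform(-8, 8),
                   delay_samples=random.randint(0, 200))
        for i in range(n):
            mix[i] += p[i] / players
    return mix
```

Even this crude version shows why stacked copies never quite equal a recorded (or modeled) larger ensemble: the interactions are only between playback copies, not between instruments in a shared acoustic space.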
I had already formed an idea of the concept behind modeling a single instrument. The section, honestly, left me amazed, because understanding how to generate a whole section of instruments, each interacting with the others, with harmonics that blend together and so on, is something I had never even imagined could be done. SWAM stands for Synchronous Wave Acoustic Modeling. We started by modeling the woodwinds and, as mentioned, we began inside Kontakt; we weren't yet experts in physical modeling, so we also started from samples: very short samples containing the attack and a bit of sustain.
But everything else is enriched and generated by physical modeling. This is very important because the attack is key to making a sound realistic or less realistic. So you're telling me that the attack of your sound is sampled? Yes, it is sampled but it's not just that. Okay, there is the sample to provide realism, but it is helped by the modeling. So, the sample and modeling work together and that’s why we named it SWAM.
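The sample-plus-model idea can be pictured as a crossfade: a short recorded attack hands over to a synthesized continuation. A minimal sketch of that general technique (my assumption of the mechanism, not SWAM's actual algorithm):

```python
def hybrid_note(attack, modeled, fade):
    """Crossfade a short sampled attack into a modeled sustain.

    attack  : samples of the recorded attack transient
    modeled : samples produced by the synthesis model
    fade    : number of samples over which one hands over to the other
    """
    n = len(attack)
    out = list(attack)
    for i in range(fade):
        t = i / fade  # 0.0 -> all sample, approaching 1.0 -> all model
        out[n - fade + i] = (1 - t) * attack[n - fade + i] + t * modeled[i]
    out += modeled[fade:]
    return out
```

The realism of the transient comes from the recording; everything after it can respond to the player in real time because it is generated, not played back.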
This thing about the attack literally made my jaw drop when I started playing the woodwinds, because two incredible things happened. First, the attack, the start of the sound envelope from its minimum to its peak, happens in a fraction of a second; it's a decisive element, but one we cannot isolate acoustically. So when we talk about using a sampled wave on the attack, it's just "an injection," but it gives a realism I had never heard before. Second, the possibility, even just while playing, without heavy programming, of achieving a truly incredible result: you can play with a level of realism in phrasing that with samples I could only reach by programming with key switches almost note by note. The principles that guide us in designing these instruments are: first, the organic nature of the system, instead of a collage of many recordings; here the sound is a single one that transforms continuously from one state to another. Second, adherence to the physical-acoustic behavior of the real instrument. Third, the interaction between the musician and the instrument through parameters that interact continuously in real time.
In fact, they are often used even in live concerts, and here we are seeing an example of how they can be played with a breath controller. The market offers many expressive controllers: the breath controller, the Seaboard, rings with accelerometers. And many of our clients are musicians who also play live. There's another thing that left me amazed: the Room & Position control. Recently a plugin called Ambiente was added, which is an incredible spatializer. The sound we heard is the result of the combination of three parameters: absorption, room size, and distance from the microphones. The amazing thing is that not only can we place the SWAM instruments wherever we want, we can also place separate audio tracks!
I can record the piano here in the studio and then combine it with woodwinds or strings. Normally I would have to find a reverb, in their plugin or in another one, that makes the piano match the environment I've chosen for the SWAM instruments. But the amazing thing here is: I open Ambiente, place everything inside it, and position the audio tracks inside Ambiente as if they were virtual SWAM instruments. For instance, we see the harpsichord from Pianoteq and the lute playing as if they were in the same environment where the SWAM instruments are playing. Especially if you need to replicate a certain positioning of the orchestra, this is amazing. There is no section without its own environment, because the reflections of the various elements in the section arrive with different delays and from different directions.
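Two of the cues behind placing a source in a room are easy to state: a farther source arrives later (speed of sound) and quieter (inverse distance). A toy mono sketch in Python, for illustration only; a room model like Ambiente handles far more than these two cues:

```python
SPEED_OF_SOUND = 343.0  # m/s, dry air at ~20 C

def position_source(signal, distance_m, sr):
    """Apply the two simplest distance cues to a mono signal:
    a propagation delay and an inverse-distance gain."""
    delay = int(round(distance_m / SPEED_OF_SOUND * sr))
    gain = 1.0 / max(distance_m, 1.0)  # clamp so close sources don't blow up
    return [0.0] * delay + [s * gain for s in signal]
```

A real spatializer adds early reflections, a reverb tail, air absorption, and direction, which is exactly what the discussion above is about.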
What the microphone picks up is a combination of factors, with the reflections being essential. The first idea we had was to use a convolution reverb, but it was impossible: since we want to place the section wherever we want, you would need endless sampled impulse responses; it's unfeasible. Convolution is a method where you play an impulse sound in the room, record it with one or more microphones, and capture how the room transformed it, how it multiplied through reflections; to move a source around in space, you'd need infinite recordings. It was too complicated, so, since we're experts in modeling, we modeled the room: we studied its acoustics and how the tails are generated. What we're hearing is the result of the Ambiente plugin. As mentioned, it was included in the sections, and we've also added it to the standalone version.
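For readers, convolution reverb in its textbook form: every sample of the impulse response becomes a scaled, delayed echo of the dry signal, which is also why each source position would need its own recorded impulse response. A direct (slow, purely educational) Python implementation:

```python
def convolve(dry, ir):
    """Direct convolution: each impulse-response sample ir[j] contributes
    a copy of the dry signal, scaled by ir[j] and delayed by j samples."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out
```

Production reverbs do this with FFTs for speed, but the principle, and its limitation that the impulse response is fixed to one measured position, is the same.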
And why not offer it as a standalone plugin? A plugin where anyone can put their own material; we don't want to limit its use to the SWAM instruments. If you have dry recorded libraries, you can put them in Ambiente and place them. In our recordings there is often a Lexicon or a Valhalla adding to the natural reverb; what we have here instead is the natural reverberation of whatever space we want: an auditorium, a church... So we place the instruments, and then we can further process them exactly as we would with a recording made in a church or an auditorium. It's also possible to change the type of room, with different materials and different sizes. In real life, when an engineer designs a room, they try to design it so there aren't too many resonating modes; we've done the same thing digitally, because the goal is realism. In this case we're not looking to expose many tweakable parameters; instead, we keep the parameters that serve extreme fidelity and extreme realism.
We see all these instruments in this interface, but we can also see all the others: from each instance we can see the instruments on the other tracks, and all the instances communicate with each other. I don't know if you've noticed, but the colors of the dots are taken from the colors of the tracks. It's not necessary to put everything in the same room: like in a recording facility with several rooms, we could choose to put some things in other spaces. We have a studio with many rooms available. This plugin just came out, right? It was released a few days ago.
We’ll put the link to Audio Modeling in the description; they have many products, including Camelot... Everything can be purchased on our website, www.audiomodeling.com. And we're approaching Black Friday... keep an eye out... What are the future developments? I remember you told me about SWAM at least a couple of years ago, maybe more... I use them a lot for demos: I always need to simulate the effect of the recording. So in my case there's no need for extreme refinement, but having this degree of expressive realism helps with composition, because everyone starts from a basic sound to begin composing.
Having a degree of expressiveness and continuity without touching anything is a great support to creativity. John Powell uses SWAM precisely so he doesn't get interrupted in his creative flow; maybe he won't use the result in the final work, but it's needed for creation. Later he may record a real orchestra or use libraries: it doesn't matter, the important thing is that his creative flow isn't interrupted. Do we want to stay in the digital realm and deliver an output straight from the session? Then we can refine all those controls and automations we're seeing, and also process the audio output, for example by opening up the high harmonics with equalization to our liking. That's also what happens in reality: after recording, processors are applied. The microphone capture is passed through Avalon equalizers, exactly like pulling a color grade from a raw photo that has all the information inside. I have all the information here, and I choose to amplify some of it; I can do that because the model provides everything needed to enhance those characteristics.
Well, we’ve said a lot to help you get to know this wonderful company. As usual, we have no affiliate relationship: we invited Lele because we believe in his work, so we invite you to check it out. We released the new portal and the new software center a few weeks ago, which let you try all the Audio Modeling products for 15 days; just click on "try"... Additionally, we're available: customer support responds promptly. And they can also write to you in Italian! Of course, because the team is entirely Italian, but if you write in English, we'll answer in English... Write in the comments if you've tried SWAM and what you think of them.