Welcome back to Scoring for Films with Vito Lo Re and Fabrizio Campanelli. Today we’re talking about a topic you’ve requested many times: samples, the sound libraries widely used by professionals in the industry for purposes we’ll discuss. Before we begin, we want to clarify that we’re not affiliated with any of the brands we’ll be discussing; we’re simply sharing the libraries we use and giving you our opinion on them. Which samples do we use? It depends!
What we need to understand, even before discussing which samples, is when to use them. Typically, we work with real orchestras. However, there are times when, either to create a demo or sometimes even to support the final product, we might use samples. After all, everything we create with the orchestra will first need to be approved; it has to sound good! Maybe not too good, though... Yes, because there’s a risk that the producer and director fall in love with the sample’s sound, and then, when you play it with the real orchestra, no matter how hard you work with the mix engineer, you won’t achieve the same result, because it’s simply different. It often happens that the director becomes attached to a certain sound they hear many times in editing. They hear it, they absorb all its characteristics, not necessarily positive ones, and it becomes something “familiar.” That familiarity is something they won’t find in the real sound, and familiarity outweighs quality...
It’s a bit like the old story of the temp track. The funny thing is that here the temp track is your own, yet it still ends up becoming the reference. Sometimes we even have to adjust the recorded mix to match the sample’s sound. These libraries are created by recording instruments played in many different ways, at various levels of detail and with different articulations. We could create a library by simply playing a violin the same way for each note, then associating that note with an increase or decrease in volume.
But we know that raising or lowering the volume isn’t enough: the violin’s expressiveness changes based on how hard you press the bow against the string. So a forte on a violin isn’t achieved just by playing the note louder... In a detailed library, every time a velocity within a given range is received via MIDI, the sampler triggers the note that was recorded at exactly that dynamic. If, for example, we play a note with a MIDI velocity of 60, which is then sent to the virtual instrument, it will play a violin sample recorded at, say, mezzo-forte, not a forte lowered in volume, because the character is completely different. So, as we increase the level of detail we want in our library, we need as many recordings as there are dynamic groups we want to associate with the virtual performance. To delve even further into this topic, we asked a friend and collaborator of ours: Luca Antonini.
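The velocity-to-layer mechanism just described can be sketched in a few lines. This is a minimal illustration, not any particular library’s actual mapping: the layer names and velocity boundaries below are assumptions chosen for the example.

```python
# Minimal sketch of how a sampler might map an incoming MIDI note-on
# velocity to a recorded dynamic layer. Ranges and layer names are
# illustrative assumptions, not a real library's mapping.

DYNAMIC_LAYERS = [
    (0, 31, "piano"),         # softly bowed recording
    (32, 63, "mezzo-forte"),  # medium bow pressure
    (64, 95, "forte"),        # heavier bow pressure, brighter timbre
    (96, 127, "fortissimo"),  # aggressive bowing
]

def layer_for_velocity(velocity: int) -> str:
    """Return which recorded dynamic layer a note-on velocity triggers."""
    if not 0 <= velocity <= 127:
        raise ValueError("MIDI velocity must be 0-127")
    for low, high, layer in DYNAMIC_LAYERS:
        if low <= velocity <= high:
            return layer
    raise AssertionError("ranges should cover 0-127")

# A velocity of 60 triggers the mezzo-forte recording, not a
# volume-scaled forte: the timbre itself is different.
print(layer_for_velocity(60))  # mezzo-forte
```

The point of the lookup, as opposed to scaling one recording’s volume, is exactly what we said above: each range points to a different recording with its own character.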
Luca is an expert in sampled sounds and libraries; he’s one of those people who know how to make them sound good. Many of you ask us: which is the best library? Well, first of all you need to understand what it will be used for, but then you also need to know how to use it, because it’s not enough to own a Ferrari: if you don’t know how to drive it, you’ll crash into a pole on the first curve. Luca is one of those who drive the Ferrari very well. We start with a short fragment of a piece I recorded with the orchestra, which we will hear at the end. It’s a fragment that really tests the samples because it’s very phrased, with a four-voice polyphonic alternation that creates a texture in an area that is very difficult for samples: the organic quality of the phrase, and therefore the naturalness the sample has to give back, which is very hard to achieve.
We’ll try to listen to it now with different libraries, then with Luca’s library, for which we asked him to refine the sample performance; doing this requires time and skill, and this processing can sometimes be useful and other times completely unnecessary, depending on the result we want to achieve. If you can, use good headphones to hear the differences. We start with what I usually use, an older library: the East West Symphonic Orchestra, which does not handle volume via modulation but works through expression and assigns dynamic switching to note velocity. I use it because, since my goal is to record, it gives me a good balance between quality, speed of use and, especially, uniformity across all sections. Let’s hear it: the color scheme assigns fuchsia to the first violins, violet to the second violins, green to the violas, and red to the cellos.
These voices intertwine but are played on independent tracks across four channels. This sound has been only lightly processed; we haven’t spent days working on it. We’re not saying this is the best achievable result; we’re saying that the same amount of work has been done on each of the libraries we’ll be listening to. East West is a great compromise: a library that sounds pretty good, is quite easy to use, and is also very balanced. It may seem like a trivial detail, but having a library that includes not just string sections but all the other instruments as well lets you work on velocity and reproduce the same dynamic relationships we’ll find between different sections in a real orchestra. The sound of samples easily leads you to create a dynamic different from the actual result you’ll get. There are samples that let me get a very loud sound, but at a dynamic level that, when transcribed for the orchestra, will not yield the same effect in the recording, and then I’ll go into a panic...
In this case, having a library that is easy to use and has a balanced dynamic range lets you be confident in the kind of writing you’re doing for the recording. Let’s now move on to another library: “Adagietto” is a string ensemble, a single virtual instrument without separated sections; it’s one instrument playing all the strings. At the top of the keyboard you’ll hear the violins; the lowest keys will play the cellos. Of course, there’s a middle range where the voices of the various instruments overlap, while in the range of a single instrument only that instrument’s sound is heard. From this perspective, this configuration is faster because you can play it directly and it’s ready to use; you don’t need to worry about instrument range limits; you have the entire keyboard at your disposal. Let’s listen to “Adagietto.” It’s a different performance, to my taste more detailed and realistic; I didn’t quite like the last note, though...
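The overlapping keyboard mapping we just described can be pictured as a crossfade between sections. Here is a deliberately simplified sketch with only two sections and made-up zone boundaries; it is not “Adagietto”’s actual mapping, just the general idea.

```python
# Simplified sketch of an ensemble patch's keyboard mapping: outside the
# overlap zone one section plays alone; inside it, the patch crossfades.
# Only two sections are modeled and the zone boundaries (middle C to C5)
# are invented for illustration.

def section_weights(note: int) -> dict[str, float]:
    """Return mix weights for cellos vs violins at a given MIDI note."""
    lo, hi = 60, 72  # hypothetical overlap zone
    if note < lo:
        return {"cellos": 1.0, "violins": 0.0}
    if note > hi:
        return {"cellos": 0.0, "violins": 1.0}
    t = (note - lo) / (hi - lo)  # 0.0 at the bottom of the zone, 1.0 at the top
    return {"cellos": 1.0 - t, "violins": t}

# Halfway through the zone, both sections sound at half weight,
# which is exactly where the timbre can get ambiguous.
print(section_weights(66))  # {'cellos': 0.5, 'violins': 0.5}
```

That midpoint, where several sections sound at once on a single note, is precisely the situation we’re about to criticize.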
But there’s an issue... on some notes, many, many musicians are playing. It’s an ensemble! So, especially on a single exposed note, there’s the risk that the timbre isn’t really realistic. For example, a high F# is played by the violins, but the violas can also play that note. In this ensemble there will be a viola too, and on that note the viola has a very strained sound.
Sure, the mapping crosses over and overlaps gradually, but there are some notes where everyone plays. Now, let’s listen to another ensemble: the Spitfire Symphonic Strings. We’ll play our passage down a fourth and see what happens... Spot the error! It’s an easy mistake to make, because if we simply transpose to another key and play, there’s no apparent problem; but when we take the individual lines of each instrument, we find notes that the poor instrument can’t actually play...
For instance, we find the first violins playing a very dangerous note, an F# they don’t have... and yet this F# on the keyboard will sound. It sounds, so what’s the problem? Having an entire ensemble at our disposal to start crafting our thematic arrangement sometimes leads us to forget to check the range in which the intermediate voices move... in this case, we might forget about this F# and even write it in the notation software; then we get to the orchestra and the concertmaster might stand up and say, “Maestro, look, that note... the violin doesn’t have it...” So the advice is always to switch from an ensemble mode to a split mode: not only will it sound better, because the voices will be organized in a logical way, but we’ll also have control over the ranges in which we want the voices to move. Obviously, this problem is supposed to be sorted out before reaching the recording studio if you have an orchestrator or someone who checks for this type of error; if you don’t, and you do everything yourself, check very, very carefully, because, as we’ve already said, ONE minute lost, multiplied many times, means time taken from recording. Now, let’s hear how the Symphonic Strings work, not as an ensemble but as individual instruments.
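The range check we’re recommending is easy to automate before a session. A small sketch, assuming the F# in question sits below the violin’s open G string (MIDI 54, one possible reading of the example); the lower limits below are the instruments’ standard lowest notes, while the upper limits are conservative assumptions.

```python
# Sketch: validate each voice's notes against its instrument's playable
# range before the recording session. Lower limits are the standard
# lowest notes (violin G3 = MIDI 55, viola C3 = 48, cello C2 = 36);
# upper limits are conservative assumptions for illustration.

INSTRUMENT_RANGES = {
    "violin": (55, 103),
    "viola": (48, 91),
    "cello": (36, 84),
}

def out_of_range(instrument: str, notes: list[int]) -> list[int]:
    """Return the notes this instrument cannot actually play."""
    low, high = INSTRUMENT_RANGES[instrument]
    return [n for n in notes if not low <= n <= high]

# F#3 is MIDI 54: the ensemble patch will happily sound it,
# but it sits below the violin's open G string.
print(out_of_range("violin", [54, 62, 70]))  # [54]
```

Running something like this on every exported voice costs seconds and can save that multiplied minute in the studio.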
It sounds less artificial... it flows better, no doubt about it; this is the correct approach when you want to create a demo, that is, to develop the voices of your composition. Never think of your music in terms of the piano; think in terms of voices instead, because that way we stay aware of the movement of each component that builds our polyphony, our harmonies, and even the themes in a small counterpoint such as this one. Let’s now listen to the “Adagio”, not the ensemble we heard before but the separated sections. This has good voice definition, but in expressiveness I found it a notch below Spitfire. What differences do we often hear? The number of orchestral players used in the sampling.
Every library has pros and cons. This one has a more intimate output and perhaps allows us to activate controls like these, which give us the variety that makes the sample realistic. There’s no perfect library for every use; those who handle them really well own at least four or five. For one specific strings sound they’ll prefer one; for another type of strings, another; for brass, yet another, and so on. Never forget the final purpose for which you’re using the samples: the goal is crucial in choosing the right library, because it’s impossible to choose one that suits all uses.
Let’s do a test with the Symphobia ensemble, a library that I love and use often for very broad sustained chords because it has a peculiar realism. This is interesting; very, very realistic, especially considering the amount of time we put into it... and we repeat: everything can be improved. We spent little time refining precisely because we want to highlight the differences in perspective when using libraries. My suggestion: unless you’re really very experienced with sampled sounds, turn to a professional; I always do. Here we bring Luca Antonini back into the discussion; I often work with him. Whether it’s a demo or any other use, he knows much more than I do, so, if possible, rely on a professional; obviously, a professional might spend days working on a piece of a few minutes, and obviously that time must be compensated...
What if you don’t have an orchestra at hand, or the job requires a digital sound as the final output and therefore virtual instruments? Then inevitably our goal will be first to use samples to build our idea and then to refine them to achieve the best possible result. And this is a very important point: it’s clear that all of us must have at least a basic knowledge of every aspect of our work, but we can’t do everything in our job at a super-professional level, obviously. So, if you can, have the things you’re not particularly good at done by a professional. It’s clear that the professional won’t work for free, but trust me: the time you save and, especially, the result you achieve will almost always be worth the expense.
Especially if you’re at the beginning of your career and need your work to sound good, allocate part of your budget to what you’re not really very strong at. Luca has sent us from his studio two recordings of the same excerpt made with two different libraries: the Cinematic Studio Strings and the BBC Orchestra from Spitfire. Let’s first listen to the Cinematic Studio Strings. Great stuff! Here you can hear the time it took and, of course, his ability to handle these instruments.
Let’s look at the level of detail in the expression and modulation curves. Notice how even the ending was finished smoothly, and the attack before the last note had already been softened... very elegant, very nice and, above all, very musical. Each voice is carefully developed, with a movement that belongs to that section and is separate from the others. If, instead, we use an ensemble and want to work on the dynamics, we are working on the dynamics of the entire group.
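The per-voice dynamics shaping we’re admiring here can be sketched very simply. Most string libraries map CC1 (the modulation wheel) to dynamics; the curve values and step counts below are illustrative, not taken from Luca’s actual automation.

```python
# Sketch: shaping a dynamics curve (commonly CC1, the modulation wheel)
# independently for each voice, which is what per-section tracks make
# possible. Values and step counts are illustrative assumptions.

def ramp(start: int, end: int, steps: int) -> list[tuple[int, int]]:
    """Generate (step, cc_value) pairs for a linear crescendo or diminuendo."""
    return [
        (i, round(start + (end - start) * i / (steps - 1)))
        for i in range(steps)
    ]

# A gentle swell on the viola line only, leaving the other voices flat:
viola_swell = ramp(40, 90, 6) + ramp(90, 40, 6)
print(ramp(40, 90, 6))  # [(0, 40), (1, 50), (2, 60), (3, 70), (4, 80), (5, 90)]
```

With an ensemble patch, one such curve would move every section at once; with split tracks, each voice gets its own.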
So working on the sections separately, and then on the dynamics, which in many libraries are controlled by the modulation wheel, allows us to achieve an obviously greater degree of realism. Let’s hear the same thing Luca did with the BBC Orchestra from Spitfire. Here I hear a fuller sound overall, but less definition of the individual voices, I don’t know if you agree. Yes, the sound differs in the amplitude of the dynamic ranges of the individual voices. The Spitfire BBC is currently used by many of our colleagues, with all the caveats, since there are many gigabytes to deal with. No big deal if we have a huge hard disk...
But it’s clear that the software has to process a huge amount of data... We have another option: virtual instruments that are physical-modeling synthesizers, from Audio Modeling. These are not sampled sounds, so you don’t have to download gigabytes of material. They are true synths that reproduce the physical model of sound generation of the various instruments, or of individual sections. We’ve seen them a few times, in the videos about John Williams.
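To make the sample-versus-model distinction concrete: a physical model computes the waveform from a description of the instrument rather than playing back recordings. A classic toy example (not SWAM’s actual, far more sophisticated algorithm) is the Karplus-Strong plucked string, which fits in a dozen lines and needs no stored audio at all.

```python
# Toy illustration of physical modeling (NOT SWAM's actual algorithm):
# the Karplus-Strong model synthesizes a plucked-string tone from a
# short noise burst fed through a delay line with averaging, instead of
# playing back a recorded sample.
import random

def karplus_strong(freq: float = 440.0, sr: int = 44100,
                   seconds: float = 1.0) -> list[float]:
    n = int(sr / freq)  # delay-line length sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]  # noise burst = pluck
    out = []
    for _ in range(int(sr * seconds)):
        first = buf.pop(0)
        buf.append(0.5 * (first + buf[0]))  # averaging = string damping
        out.append(first)
    return out
```

No gigabytes on disk: the “instrument” is the handful of lines above plus its parameters, which is exactly the storage advantage we’re describing.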
Here they are: SWAM. Let’s listen... So, this performance we just heard is definitely quite rough: we haven’t worked at all on the details of the sound construction. Why not? Because, since the sample and the synth are two different things, two different refinement processes would be required, and those don’t interest us at this moment.
So what you’ve listened to is quite rough, and I believe that even in its roughness there is a good level of realism, with the advantage of not filling a hard disk with gigabytes of sounds. Absolutely: we can have the entire orchestra at our disposal without external samples, just by opening the relevant synths. And that is something really interesting and useful. Now it’s time to hear the fragment we’ve listened to so far in its final version. It’s a piece called “Another World”, part of the soundtrack of the documentary film “Fukushima: A Nuclear Story” by Matteo Gagliardi.
The recording was made at Studio 22 in Budapest with the Budapest Symphony Orchestra. Let’s listen to it. Well, we have to draw some conclusions, Fabri. The first, in my opinion, is that the biggest difficulty in phrasing with samples comes from the attacks of the individual notes: each note, as captured in the recording or set in the instrument, has the same attack. We saw Luca work on the overlap of the notes to give a sense of legato and continuity.
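That overlap trick can be sketched as a small note-list transformation: extend each note slightly past the next note’s onset so the sampler’s legato crossfade can hide the uniform attack. The overlap amount and note format below are illustrative assumptions, not Luca’s actual settings.

```python
# Sketch of the legato-overlap trick: stretch each note's duration a bit
# past the next note's onset so the crossfade masks the uniform attack.
# The 60 ms overlap is an arbitrary illustrative value.

def add_legato_overlap(notes, overlap=0.06):
    """notes: list of (start_sec, duration_sec, midi_pitch) tuples.
    Returns a new list where each note overlaps the next one's onset."""
    out = []
    for i, (start, dur, pitch) in enumerate(notes):
        if i + 1 < len(notes):
            next_start = notes[i + 1][0]
            # extend the note end slightly past the next onset
            dur = max(dur, next_start - start + overlap)
        out.append((start, dur, pitch))
    return out

phrase = [(0.0, 0.5, 60), (0.5, 0.5, 62), (1.0, 0.5, 64)]
legato = add_legato_overlap(phrase)
```

It is a mechanical version of what Luca does by hand, note by note; done uniformly like this it helps, but the hand-tuned, per-note version is what produces the musical result we heard.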
Even when the overlap is tuned differently for each note, the sample, compared with the synth, retains an inevitable rigidity: its phrasing suffers from the artificiality of those uniform attacks, and overcoming it takes a lot of hard work, as Luca showed. So, to conclude: there isn’t a single perfect library for all occasions. Identify what your purpose is and who will be listening, and, if possible, consult a professional. In this regard, we thank our friend and collaborator Luca Antonini. And I’d say we’ll see you in the next episode.
Don’t miss it. Bye.