Welcome back to Scoring for Films. I am plagued and tormented by a terrible cold; Fabrizio is not, but he will be soon! Nevertheless, stoic as Ancient Roman warriors, today we bring you the second part of the three-episode mini-series where we explain how to write a film score, or at least how we do it. Someone might even say how NOT to write a film score! In the last episode we tried not only to delve into technical details that are useful for learning certain mechanisms, but above all to show the work that goes on behind it all, the thinking behind the organizational phase. Today we get into the specifics: we've written the music, and now we need to move on to the realization, evolving from the demo to the orchestral recording.
The piece you write nowadays for a film is typically written on a digital audio workstation. Inevitably it will be written in Cubase, in Live, in Logic; some even write in Pro Tools or Digital Performer. At a certain point, you will have to leave that environment and transform the piece into something that can be performed. What actually happens? We can open the session of the piece we analyzed in the previous episode. Our first goal is to export: to leave Cubase and enter the notation software, Sibelius. Here are the tracks of the virtual instruments, which are of course written in MIDI; let's open, for example, the violas and the cellos.
The MIDI language is based on three central elements: the note's pitch; its duration in time, which we can see represented by the length of the bar in this visualization called the Piano Roll; and the velocity, meaning the note's intensity, because the assumption is that the speed of the key press is proportional to the volume produced. Just like on a piano: the harder you strike, the faster the key travels and the louder the note. If you are used to starting everything by playing on a keyboard, then when you later transfer what you played into notation, you need to quantize it properly, because otherwise the exported MIDI is unreadable. Maybe one day we could make a video dedicated to quantization. Let's reopen the horns track we saw in the previous episode and look at the expression curves: in this case, on this sample, expression controls the volume.
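The three elements just described, plus the quantization step, can be sketched in a few lines of code. This is a minimal illustration, not any DAW's actual API; the names and the sixteenth-note grid are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class MidiNote:
    pitch: int       # MIDI note number, 0-127 (60 = middle C)
    start: float     # position in beats from the start of the piece
    duration: float  # length in beats (the bar you see in the Piano Roll)
    velocity: int    # key-press intensity, 0-127 (harder press = louder)

def quantize(note: MidiNote, grid: float = 0.25) -> MidiNote:
    """Snap a freely played note's start to the nearest grid position
    (0.25 of a beat = a sixteenth-note grid), as a quantize command would."""
    snapped = round(note.start / grid) * grid
    return MidiNote(note.pitch, snapped, note.duration, note.velocity)

# A note played slightly late lands back on the grid:
played = MidiNote(pitch=60, start=4.07, duration=1.0, velocity=96)
print(quantize(played).start)  # 4.07 snaps to 4.0 on a sixteenth grid
```

Without that snap, the notation software tries to render the 0.07-beat offset literally, which is exactly why an unquantized export comes out unreadable.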
Then there's the pitch bend, the intonation: I made the horns decrease in pitch gradually, with what is called a portamento, even though here there's no proper arrival note. Yes, and regardless of what we call it, what matters is that the conductor explains to the musicians what he wants. Why are we talking about this? Because we can see right away the equivalent of this MIDI in Sibelius, our notation software. Here the composer has written exactly what he wants, and this little mark has been added, which explains the bending passage we also saw when analyzing Beltrami.
And then there's this flat sign, which tells us that the pitch must go down; not by a full semitone, though, but by a quarter tone, so it stops just short of the flat. This is aleatoric writing, which cannot and should not be played identically by all the musicians; otherwise, you'd lose that sense of reality and naturalness. In this case, the bending is spread over the first two beats.
It could be spread over the whole measure, for example. In fact, when all the horns and trombones come in, the bending is spread over four beats. I tried to be clear in the indication, because what's written here will be read by the musician on his part. For example, here is the part for the horn. So from Cubase, you export all the MIDI tracks that are then imported into the notation software. And from there, they will start to be reworked.
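On the MIDI side, that quarter-tone bend corresponds to a specific pitch-bend value. As a hedged sketch, assuming the common default bend range of plus or minus 2 semitones (not every sampler is set this way, and the numbers change with other ranges):

```python
# Mapping a detuning in semitones to a raw MIDI pitch-bend value.
# Assumption: the instrument's bend range is the common +/-2 semitones.

BEND_CENTER = 8192        # 14-bit pitch bend: 0..16383, 8192 = no bend
BEND_RANGE_SEMITONES = 2  # assumed +/-2 semitone bend range

def bend_value(semitones: float) -> int:
    """Pitch-bend value for a detuning in semitones (negative = down)."""
    steps_per_semitone = BEND_CENTER / BEND_RANGE_SEMITONES  # 4096
    raw = BEND_CENTER + semitones * steps_per_semitone
    return max(0, min(16383, int(raw)))  # clamp to the 14-bit range

print(bend_value(-0.5))  # quarter tone down -> 6144
print(bend_value(-1.0))  # full semitone down -> 4096
```

Spreading the bend over two beats, as in the score, would mean ramping from 8192 down to 6144 across that span rather than jumping.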
Still within the DAW environment, we will have electronic sequences (in this case, there is a lot of electronic music) or recordings made outside the orchestral session. In my case, for example, most of the pianos are performed here in the studio and then brought into what we call pre-production. Nowadays a project can certainly be entirely orchestral; in film projects, however, there is almost always the addition of something electronic, worked on in the studio, that will be layered with the orchestra. Moreover, some things are recorded separately for practical reasons. Maybe you want that specific guitarist who is in Rome while you're in Milan: he records remotely, sends it to you, and you add it.
This is crucial because the musicians, when they play their part, hear the rest. Otherwise, especially in certain orchestral pieces with very long notes, the musician might not grasp the sense of the piece; but if they hear what's happening around that very long note, of course they will play it with a different intention. All the external recordings, whether done remotely, in the studio, or any other way, are combined with the audio export of all the synthesizers, and the sound engineer imports that collection of audio tracks into his session, along with another essential thing: the tempo track. In applied music, precisely because it is applied to an image, what we do must be perfectly synchronized with the image it accompanies. And in order to sync up, you often need to make adjustments to the tempo.
I watch the image, play along with it to get a rough idea of where something needs to happen, and play, for example, at 120 BPM. But then I discover that to match a sync point or a scene change perfectly, the right tempo isn't 120 but 123 or 126. That 126 ensures that the entire phrase is perfectly spaced within the sequence it is meant to comment on. And that is the tempo track, in the case where the tempo is stable; but, as we've seen many times, the tempo is almost never stable. In this piece, as it happens, the tempo was stable, but a scene has its movements, the music follows the movements of the scene, and the speed increases and sometimes decreases. Depending on what we want to achieve, we can use mixed meters, bars with added or subtracted quarters or eighths: whatever we need to land perfectly on the downbeat of the next measure.
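The jump from the 120 you played to the 126 that actually fits is simple arithmetic once you know which beat must land on the sync point. A minimal sketch with hypothetical numbers, assuming a single steady tempo from the start of the cue:

```python
# Find the exact BPM that places a given beat on a given timecode.
# The beat number and the 15.238 s sync point below are made-up examples.

def bpm_for_sync(beat_at_sync: float, sync_time_seconds: float) -> float:
    """Tempo (quarter notes per minute) that puts the given beat
    exactly on the given timecode, assuming a steady tempo from 0:00."""
    return beat_at_sync * 60.0 / sync_time_seconds

# Say the downbeat of bar 9 (beat 32 in 4/4, counting from 0) must hit
# a scene change at 15.238 seconds:
print(round(bpm_for_sync(32, 15.238), 1))  # -> 126.0, not the 120 you played
```

With a variable tempo track, the same idea applies piecewise: each segment's BPM is chosen so its downbeats fall where the picture demands.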
Sometimes ramps are used: programmed lines in the sequencer where the tempo gradually slows down. This tempo track will also become the basis for the metronome that the orchestra players hear in their headphones, along with the sequences we want them to hear. Besides everything else, the musicians have the click in their headphones, and the click is essential. And the question arises: so what is the conductor for? Well, this is an interesting topic, because the job an orchestra conductor does in the recording studio, say for a film score, is quite different from what a conductor does in concert performing Beethoven's Seventh Symphony. It's a completely different job. What they have in common is the figure, the baton in hand, and the same educational background, but the function is entirely different.
Since the orchestra has to follow the click exactly, without ever deviating from it, the conductor's main task is to ensure that everything that is played stays in sync with the click. Then, of course, he checks that the notes are correct and the intonation is right, although in this case, I must say, the sound engineer often helps with that. Another uncomfortable thing the first time you conduct in a recording studio is that the conductor, too, has to wear headphones, because he also has the click and the other tracks to hear. And trust me: listening to the orchestra with headphones or without them is a whole different world. Things you would naturally hear without headphones you might miss with them, because the levels are different (you might have the cellos louder than the violas and hear the violas much less), so you might not catch everything that's going on; here too, the sound engineer can help a lot. I have given up any ambition to conduct my own music; I prefer to be present in the listening. Listening on the stage, amid the orchestra, is unbeatable for satisfaction, one of the most beautiful experiences you can have, but it is extremely misleading. I, on the other hand, unlike Fabrizio, always conduct my own music, but I already know there will come a moment when I decide to stay at the console, because you don't get the real perception of what is happening by standing on the podium waving a baton; you get it at the console. So in theory it's more useful for the composer to be at the console than on the podium. In my case, I communicate to the conductor, Enrico Goldoni, the changes and adjustments: intentions, intonation corrections, blending the sections, changing dynamics.
The demo we made with virtual sounds does not match the real performance, which has a thousand more variables, a thousand more human elements, and that always tends to bring surprises, sometimes positive, sometimes negative, that need to be worked on. Remember that the orchestra plays pieces it has never heard before; the orchestra has not heard the demos. The orchestra is always sight-reading; they have not received the parts a week in advance to study them. So the orchestra doesn't know what they're in for: whether the music is sad, happy, joyous, heroic, weird... That musical phrase you are following, what does it mean? Why should it be played one way rather than another?
In a very expressive way or, on the contrary, in a cold way, completely without vibrato? You can write these things, and I tend to write a lot... But "espressivo" or "molto espressivo": how much? How expressive should it be? That's where the musicians' sensitivity comes into play and meets what the composer had imagined in his head, the point where those two worlds converge. For example, I want to get a certain effect from the cellos, so I write some hairpins.
The hairpins are the dynamics that increase and decrease. Yes, exactly, they are graphic signs that open and close, visually communicating a crescendo or a diminuendo. But these hairpins tend to kill the sound. Maybe at that moment, I don’t want the sound to die. And so, what do I do? Do I have to explain everything to the orchestra every time?
Or do I try to write so that the same section has two ways to read the same notes? We have this example in one piece: half of the stands play with a hairpin, and the other half play held notes. So that means the same note will be played by the same section in two different ways: half the section will play the full note, and the other half will play the diminuendo. This is very important because it's one of those subtle things you can achieve with a real orchestra. The limitation of working with samples is that they all sound the same. Moreover, with the orchestra, you can do something else: if you want a crescendo and you have, for example, 8 cellos, from bar X to bar Y, only 2 play, from bar Z, 4 play, and so on.
This is something you can't achieve with those samples. When all the work we mentioned is done, we go from the sequencer to MIDI and from MIDI to the printed score, often with a copyist who takes care of preparing the full score and the parts. Here's the conductor's score, printed with all the instruments that will play on that film score. It's a large format, an A3, because it needs to be easily readable. Another thing worth mentioning, a common practice in pop music and film scores: in the parts, the transposing instruments have their lines written in F, in B-flat, or in A, while the conductor's score is often written in concert pitch.
It's a matter of habit; I, for example, am one of those who still keeps the score with transposed instruments: for the horns in F, I don't read the concert pitches, I read the transposed notes in F. That said, it's just a matter of convenience and practicality for the conductor. For me it's also a matter of speed: to avoid any risk of confusion, I prefer to have everything in C, because then what changes for me is the clef, not the accidentals. So, we've said that the conductor's role in the studio is different from the role of a conductor in concert, but there's a little trick we can share...
Yes, because many people wonder what the conductor is for. On my channel, I've made two videos just about that: "What does an orchestra conductor do?" Even if the orchestra has the click in their headphones, the conductor is still really conducting, not just with intentions but with a gesture that has a precise tempo. Now, what's the trick to tell whether a conductor is faking or really conducting? Look at whether the conductor's arm movements are perfectly in time with the music's tempo. Why? Many would get it wrong and say, "He's good if he's in time." No!
The conductor's gesture must always be ahead of the beat, especially if the orchestra is large, because if I go down like this... by the time I’m down, the beat has already gone. I must be ahead of the downbeat so that when I go down, by the time the movement reaches the last row of violas, they’ll be perfectly in sync. The faster the piece, the smaller the anticipation because the margin is smaller. With slow pieces, you’ll see that often the sound is not... but rather...
That is, the gesture must always be ahead. So if you see a conductor who — especially in a slow tempo piece — is always perfectly on the beat, it means the orchestra is ignoring the conductor because if they were watching him, they would slow down with each beat until they stopped. The real conductor is predicting what needs to be played, and that’s what makes conducting so complex: you have to be independent from the outside world. In other words, you have to listen to what’s being played but think about what needs to be played. Exactly, so... Well, I’d say that for part two of this three-part series, we’ve covered a lot...
but now we’re getting to the real meat. Should we give a preview? Yes, we’ve prepared the recording, now we just need to press the "rec" button. We’ll see visual material that not only shows the setup preparation on stage but also the recording of what has been written and then mixed; not only mixed, but we’ll also get to the final step: the mastering process, so don’t miss part three! If you haven’t already, it’s mandatory to subscribe to the channel! Leave your thoughts, wishes, and requests in the comments.
See you in the next video. Bye!