Humanize Playback

Could you please expand on that particular point a little? I’m not an audio expert by a long stretch, and while I can see the value in panning, I’ve never understood what compression actually does for the sound. (To be honest, I have only the vaguest notion what compression is, anyway.)

First, with regard to panning: if you have good stereo speakers in a good orientation (each speaker equidistant from your ears in the position you normally take while auditioning material), even a small amount of stereo separation can help individual parts jump out. That is especially true when using the same sound samples for multiple parts. It is easy to do with the Dorico mixer (or, if using NP, you must use NP’s mixer).
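
If you want to see the arithmetic behind a pan knob, here is a minimal sketch of one common pan law (the equal-power law; actual mixer implementations vary, and the function name here is just illustrative):

```python
import math

def equal_power_pan(sample: float, pan: float) -> tuple[float, float]:
    """Split a mono sample into (left, right) using an equal-power pan law.

    pan runs from -1.0 (hard left) through 0.0 (center) to +1.0 (hard right);
    perceived loudness stays roughly constant across the sweep.
    """
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# Even a small offset helps separate two parts that share the same sample:
print(equal_power_pan(1.0, -0.2))  # first part nudged slightly left
print(equal_power_pan(1.0, +0.2))  # second part nudged slightly right
```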

Compression has to do with dynamics, and it is a little more complicated than panning. The basic concept is to reduce the volume of the loudest sounds, which lowers the overall sound energy. Generally this is combined with a boost of the entire signal to bring it back up to the original loudness, usually called “make-up gain.” In applying the make-up gain, the softer sounds are amplified, making them easier to discern. Some compressors apply the make-up gain automatically; others require the user to do it manually. Even small amounts of compression can make the inner parts substantially easier to hear. If you go crazy with compression, everything ends up with the same level of energy and no dynamics at all. Less is usually better.
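
To make the threshold/ratio/make-up idea concrete, here is a rough Python sketch of a compressor’s static gain curve (the numbers and names are illustrative only; real compressors add smoothing and usually a “knee”):

```python
def compressed_level_db(input_db: float, threshold_db: float = -18.0,
                        ratio: float = 3.0, makeup_db: float = 4.0) -> float:
    """Static compressor curve: level above the threshold is scaled down
    by the ratio, then make-up gain lifts the whole signal back up."""
    if input_db > threshold_db:
        # Only the overshoot above the threshold is reduced.
        output_db = threshold_db + (input_db - threshold_db) / ratio
    else:
        output_db = input_db          # below threshold: left untouched
    return output_db + makeup_db      # make-up gain raises everything

# A loud peak at -6 dB comes out quieter, while a soft passage at -30 dB
# comes out louder: exactly the effect that makes inner parts easier to hear.
print(compressed_level_db(-6.0))    # -10.0
print(compressed_level_db(-30.0))   # -26.0
```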

That’s the basic principle. However, sound engineers often use compressors for a completely different objective. Most compressors include control over attack time and release time. In this case, “attack” is the opposite of what you would normally think: it is the attack time of the compressor itself, i.e. how quickly the compressor decides to attenuate the loudest sounds. The release determines how long the compressor remains clamped down after the transient has passed. By manipulating these timings, you can achieve a harder-sounding attack, or more pulsation in hard-driving music. For example, with an attack time of 20 ms, the first 20 ms of a loud sound like a kick drum will pass at full volume and then be clamped down, letting the other music be heard after that initial thump. Used in this manner, compression is usually applied to individual instruments, which is more germane to the DAW mixing environment. For Dorico, I would add a compressor to the main output bus only and give it a quick attack time.
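
For the curious, here is a bare-bones sketch of how those attack and release timings are often implemented (assuming a simple one-pole envelope follower; real designs differ in many details, and the helper names are made up):

```python
import math

def smoothing_coeff(time_ms: float, sample_rate: float = 48000.0) -> float:
    """One-pole smoothing coefficient for a given time constant."""
    return math.exp(-1.0 / (time_ms * 0.001 * sample_rate))

def follow_envelope(levels, attack_ms=20.0, release_ms=150.0):
    """Track signal level with separate attack/release speeds.

    With attack_ms=20, the follower takes roughly 20 ms to react to a
    kick-drum hit, so the initial thump passes at full volume before the
    compressor clamps down; release_ms controls how long it stays clamped.
    """
    a = smoothing_coeff(attack_ms)
    r = smoothing_coeff(release_ms)
    env, out = 0.0, []
    for level in levels:
        coeff = a if level > env else r   # rising -> attack, falling -> release
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out
```

The compressor’s gain reduction would then be computed from that envelope using a static curve like the one sketched above.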

For a visual model, you can think of a compressor as somebody manually riding the mixer’s fader, pulling it down to soften the loudest notes while letting the rest of the material come through at full volume.

===

On edit, I would also add that if one has been using bargain-basement $24 computer speakers, one will be surprised at how much more clarity Dorico’s playback has after upgrading even to a $100 set of speakers targeted at video gamers. Some of those come with a sub-woofer that sounds at least decent at a satisfactory listening level.

If one is using an external audio interface, one probably already has a speaker setup targeted at the home studio. There are some very nice choices that are quite affordable, such as the JBL 306 at about $300/pair. If one goes with a separate woofer, one can use smaller satellite speakers such as the Mackie CRS-X at $200 for the pair. A woofer for such a studio might be the JBL LSR310S at $300 (that’s 200 W with an internal crossover). Nearly as good would be the Mackie CR8S-XBT (a new model), though curiously that one doesn’t have XLR connectors. It uses balanced TRS cables, so it would be fine, but one would need to make sure one has balanced cables. These aren’t necessarily recommendations, and a person can easily spend thousands of dollars on high-end studio monitors. But a setup such as the one listed here will be a real awakening for anyone accustomed to really cheap PC speakers.

Agreed, and how nice to get further confirmation from the team that regular strides are to be made:

Dorico has come out of the gates running and done such a great job advancing music writing and publishing. I have already moved most of my workflow into Dorico and look forward to continued improvements in the playback space. Thank you Daniel and team!

I’m one of these people who hears it all internally, but I still agree with you completely. Sure, you can rattle off letters like a speeding train and I can recognize every letter I hear instantly, but I can recall fewer barlines than Mozart. Even Mozart had to go back and listen again at times. Aural recognition is one aspect of charting how our minds work with music. Composition informs aural skill and vice versa.

I recently switched to a StaffPad (writing) + Studio One (recording) + Dorico (engraving) workflow due to SP’s in-app pre-mapped library integration, something I suggested to most developers years back as the ultimate composer UX end goal, and was told it was light-years away from anything the industry has. Library integration and human playback are essential to keeping people out of frequent piano-roll edits. As a human being, I’m prone to seeing 30 things to do, doing 10 of them, my daughter asks for help, then I come back and… wait, oh yeah, I had 8 other things to do (notice that 12 edits on my agenda just got lost to the void). Less performance tweaking makes 10,000x the difference in one’s writing. I’ve written music without computers, and without instruments. Any aural feedback will have influence over the writing. But the closer it is to the real thing, the less distracting it is.

I don’t want it out of a need to hear, but out of a need to fight to manage my attention.

Others might disagree, but I don’t believe such people exist.

There may be people who think they need that to “liberate the music inside them,” but the real test is whether anyone else thinks their music was worth liberating.

Let me phrase this another way. Consider much of the vocal music that comes from tribal regions and is created and passed down through oral tradition. If these groups were not allowed to sing as part of the composition process, and instead had to compose parts using a piano, another pitched instrument, or just their minds, I am suggesting we would not enjoy much of the rich soul, melodies, and harmonies of African and Polynesian chants and spirituals. There is a process that occurs when the real thing is brought together and you are exposed to texture, tone, depth, presence, and a number of other things beyond pitch, tempo, and dynamics. Pitch, tempo, and dynamics are not the exclusive informers of worthy music. That is why two groups performing the same number yield completely different audience reactions.

You may disagree on this point as well, but there is a body of soundscape-infused/inspired music that has its place and successfully moves people emotionally, makes them laugh, cry, feel relief, etc., and some of it can only be discovered through actual performance. How do these artists discover and write their expressions without the real thing? The same is true for music with electronic elements… the growl of a synth, the frequency of a sub bass, and the punchiness of a lead are all critical to the expression, and these elements are often difficult to discover and settle upon without experiencing the real thing and then recognizing: “yes, that is the expression I want to liberate.”

And finally, consider the more traditional improvisation we are already cozy with in notation software. Just put in a few slashes and let someone do their thing. This gets directly at the heart of liberating the music within only by experiencing it. Lots of written music and hit tunes have come out of improvisation sessions. I am suggesting these improvisation sessions are liberating in part because they are as close to the real thing as you can get; they are the real thing. If we restricted improvisation sessions for all music to just a piano, I am suggesting you would liberate less, and some people would be unable to liberate anything at all despite having an expression within that others would deem completely worthy and desirable.

What I think Rob is referring to (and with which I agree) is that one can imagine a musical result without hearing it first. Many composers do not even need a keyboard to imagine and create their music, and music came into being for thousands of years with nothing more than the voice or a keyboard to work with.

It is nice to have modern electronic means to simulate and create or confirm new sounds, but I would say that most people imagine what they want and then (if necessary) search for a means to reproduce it. They do not need the electronic (or even acoustic) crutch as a prerequisite to imagining it.

Absolutely agree… 🙂

I just want to say that it’s great to see such well-articulated sentiment, coming from several people, regarding the importance of playback, and the need for Dorico to continue to develop in this area as a primary consideration.

It seems we’ve come a long way from the earlier days of Sibelius, when expressing concern over playback would be met almost immediately with numerous Luddite admonitions that Sibelius was notation software, dammit.

That division was not limited to the Sibelius world. One saw the same arguments in the Finale world. There are people whose musical universe requires them to set music to notation and apparently little else. That’s OK, but as time passes, one finds the competitive environment moving quickly. Many of us find ourselves in a position of needing to pitch our ideas to clients, and the standard of playback is rising every year. A level of playback that might have been successful in 2010 may now come across as amateurish and reflect negatively on the whole enterprise.

Fortunately good notation and good playback are not mutually exclusive, so there is actually little to debate other than priorities.

Apart from the need of some composers to touch the sound material itself, being able to hear a piece composed on the score in advance is absolutely necessary for students, competitions, and film work. I’ve never attended a composition course or competition, or met a film director, without being asked for a “MIDI mockup.” The better it is, the more chance one has to be selected, to win, to be hired.

Paolo

Absolutely. I write without piano or playback - just a blank page. But more and more, what is discussed in meetings is the playback and not the printout.

I don’t have much to add here, except to agree strongly that easy and realistic playback continues to be increasingly important. I know fewer and fewer musicians who are able to audiate strongly, especially when it comes to orchestration. Everyone wants an audio mock-up.

We can bemoan the decline of music literacy (and I do), but it’s reality. The technology is there… Let’s use it.

I really don’t see this as any indication of the direction of music literacy. Music existed before notation. Notation is only an approximation of the actual music. I venture that most of the greatest musical works were not perfect in their first drafts. Maybe people like Mozart, Sousa, and Beethoven, who were basically mechanics following familiar formulas opus after opus, could be happy with every note they wrote. But most great composers have always wanted to hear the music and have the opportunity to revise.

If Ravel or Stravinsky wanted to revise their work after hearing it performed, that would surely not indicate any decline in their skills. Indeed, it might indicate they were pushing the envelope with each new work, moving farther and farther from the paint-by-numbers reality of lesser composers.

I don’t think there is anything new or distressing about musicians wanting to hear their compositions auditioned. To the extent that a computer can save the cost of employing a full orchestra, surely that is a good thing.

Fair point.

Well, I suppose if you compare Rondo alla Turca, Für Elise, and one of the handful of Sousa marches that are still played (if only by the US military), you can put all three composers in the same category if you like.

Except that you are comparing the best of one (99% of Sousa’s compositions are totally forgotten, and for good reason if you spend five minutes looking at the scores - they are endless pages of formulaic trash) with a couple of bits of trivia by the other two.

Actually, I would say that Für Elise is less paint-by-numbers than his symphonies, but to each his own. None of that would make it onto my list of music I’d select if banished to a desert island.

I wonder if Bach would have done things differently if he had had Dorico. Most of his stuff was written quickly under the weekly pressure of his church commission. Even without Dorico, and under all that time pressure, just about everything he did (that survives) is elegant, and it is hard to find any angle for improvement. I guess he threw away 100 times as much music as most of us will ever write. Perhaps with the benefit of Dorico and good playback, some of the discards would have become survivors.

Good question, and additionally, who else might have produced works and what types of works might have been produced?

Throughout history, the music business has favored those who had classical training, including intense fluency in writing and reading notated music. But I have had the pleasure of working with some really brilliant musicians who could barely read or write a note of classical notation. Some of them have become real monsters working within DAW environments. And I know a few who are using the notation features in the DAWs as well as notation products like Dorico to teach themselves to understand “our language.” For people like this, it begins with the sound, not the dots.

The only problem is that no mock-up behaves like an orchestra. Note that I didn’t say ‘sounds’ like one - it’s possible to make some great-sounding mockups. But in (for example) Dorico with NotePerformer, that involves a lot of odd tweaking of dynamics and articulations (and sometimes doubling or removing parts) just to get it to sound like an orchestra would at, say, ‘mp’. If you don’t know that and just write ‘mp’, you’ll get a shock when it goes in front of a band.