Humanize Playback

Discussions about our next-generation scoring application, Dorico.
scoredfilms
Junior Member
Posts: 193
Joined: Thu Feb 26, 2015 6:28 pm
Contact:

Humanize Playback

Post by scoredfilms »

Greetings again,

VSL sold me on Humanize. Then Cubase had a function to randomize velocities. It was imperfect, but I could work with it because the velocity handle bars in the CC-lane area were brilliant. Now I use another DAW with Humanize baked into the UI, and I use it constantly. It's better than Cubase's Functions option, except that Cubase's velocity handle bars in the CC-lane area are brilliant for velocity editing. I know I'm adding a million considerations to think about, and that programming is something to do one step at a time, so I hope I'm not overwhelming you folks with feedback. I don't expect every feature to happen in v1, just everything I want. ;)

Cheers,
Sean

Wizzarts
Junior Member
Posts: 121
Joined: Thu May 09, 2013 12:01 pm
Location: Germany
Contact:

Re: Humanize Playback

Post by Wizzarts »

I do a lot of "controlled" humanizing using the Logical Editor in Cubase. I would love to see such a feature in one of the upcoming versions.

A generic controller feature could also help with using external MIDI editors such as C_Brains for Android/iPad, which I use to randomize/edit MIDI events.
Master: i9 9800 - 256GB RAM, Win 10 64-bit. Slaves: i9 9680 - 128GB RAM + Angelbird Crest (4x 480GB SSD jumpered as RAM), Win 10 64-bit. Cubase 10.5, Dorico Pro 3.5, NI Komplete, VSL incl. Ensemble Pro + MIR Pro, EastWest CCC1 + CCC3, 8Dio, Embertone, various others.

External Gear: Avalon, Tascam, TC-Electronics, Manley, Midas F32 Desk (Firewire 32-ch), MoTu Midi Express XT

scoredfilms
Junior Member
Posts: 193
Joined: Thu Feb 26, 2015 6:28 pm
Contact:

Re: Humanize Playback

Post by scoredfilms »

Agreed

When I used Cubase, the Logical Editor was useful, but I find myself using S1's baked-in Humanize more. And nothing was as good for this as C_Brains, but C_Brains slows Cubase down and causes glitches if you use the full-size preset library. The problem is that the editing tools in Cubase are a page out of 2001. They work, but at such a basic level that it's painful. C_Brains finally addressed that, and brilliantly, but it needs to be native, not scripts. Speed alone is one argument. But archaic editing tools that aren't built for a fast workflow... in 2016? Yeah, that doesn't fly with me.

If Steinberg is smart, they'll pay attention to C_Brains and the MIDI-transforming tools being made, which are leaps and bounds ahead of something as simple as an LFO. Those need to be a native part of the workflow, not a half-working patch-in. C_Brains is intelligent, and the disconnect from the native tools is massive. That screams loud and clear that DAW makers aren't using their own product enough. C_Brains came from someone who used the piano roll and could code. That's the solution here. It's not one feature or another; it's making tools like this part of the development philosophy. Smoothly working native features as robust as C_Brains are what I believe Cubase needs, and Dorico, even if only half as robust, would benefit too.

Pattern based editing is a no-brainer for music.

Rob Tuley
Grand Senior Member
Posts: 3976
Joined: Fri May 20, 2016 12:41 am

Re: Humanize Playback

Post by Rob Tuley »

I must have missed something here - I thought Dorico was supposed to be a music notation program?

Maybe somebody needs to explain to me what "humanized notation" looks like ... :?:

(I know perfectly well what humanized playback is, of course.)

scoredfilms
Junior Member
Posts: 193
Joined: Thu Feb 26, 2015 6:28 pm
Contact:

Re: Humanize Playback

Post by scoredfilms »

Rob,

I can't tell if you're sarcastic or genuinely asking.

I'm not asking for a feature on the Engrave tab, but on the Write/Play tabs. Those parts of the program have less to do with engraving principles and everything to do with whether Dorico sounds realistic when users hit the play button. For many if not most film composers that's a necessity, and many software companies have recognized it and developed features for it. Hence my interest in Dorico's playback. So I'm not describing a feature that appears on the page, but one in the program UI, in an area of Dorico that users who care about realism will rely on. I hope that answers your question.

Cheers,
Sean

fratveno
Senior Member
Posts: 1584
Joined: Thu Dec 04, 2014 1:53 pm
Location: Norway
Contact:

Re: Humanize Playback

Post by fratveno »

Rob Tuley wrote:I must have missed something here - I thought Dorico was supposed to be a music notation program?

Maybe somebody needs to explain to me what "humanized notation" looks like ... :?:

(I know perfectly well what humanized playback is, of course.)
"humanized notation" is making sure no tuplets cross the barlines :D
(re-tired)

rpmseattle
New Member
Posts: 45
Joined: Fri Jun 17, 2016 4:14 pm
Location: Seattle
Contact:

Re: Humanize Playback

Post by rpmseattle »

The fact that Dorico will store separately editable note onsets, durations and velocities within each notated note is good news, since this provides the most important building block not only for whatever performance-editing tools come online later, but also for renotating/retranscribing the visible music notation.

See : viewtopic.php?f=246&t=97746

I've always associated the term "Humanize Playback" with tools which on their own make micro adjustments to performance attributes like timings, dynamics or tempo changes using some set of controlled random variables in order to simulate "live performance". I've never really cared much for this approach, since it doesn't effectively capture human phrasing at all.

Another approach, used by Logic's "Groove Templates", DP's "Groove Quantize" and Pro Tools' "Feel Injector" (all based on work by acoustic researcher Ernest Cholakis), allows specific performance attributes from one track to be copied to another. It can *also* include some amount of mild randomization, or leave alone existing notes or phrases already within a particular tolerance, to "humanize" what would otherwise be an exact performance copy.
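The copy-with-tolerance idea behind these tools can be sketched in a few lines. This is a sketch of the general concept only; the function name, the tick-based data model, and the `tolerance`/`strength` parameters are illustrative assumptions, not the actual algorithm of any of the products named above:

```python
def groove_quantize(notes, groove, tolerance=5, strength=1.0):
    """Pull each note toward the timing of a reference performance.

    notes:  list of (grid_tick, actual_tick) pairs for the target track.
    groove: dict mapping grid_tick -> timing offset (in ticks) captured
            from the reference performance.
    Notes already within `tolerance` ticks of the groove position are
    left untouched, preserving existing human variation; `strength`
    controls how far the rest are pulled toward the reference feel.
    """
    out = []
    for grid, actual in notes:
        target = grid + groove.get(grid, 0)
        if abs(actual - target) <= tolerance:
            out.append(actual)  # close enough: leave it alone
        else:
            out.append(round(actual + strength * (target - actual)))
    return out
```

A robotic note exactly on the grid gets pulled onto the reference player's offset, while a note already "in the pocket" is left as played.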


The net effect is much more realistic than simply randomizing performance attributes, but is perhaps more ideally suited for music centered around a rhythm section than for orchestral phrasing.

There is also "on the fly" randomization of playback nuance built into some sample libraries, which people seem to like.

However, it seems that the discussion focus here is on editing tools which can filter for specific performance data types within specific ranges, dynamics, octaves or locations etc. and allow quick editing to these selections.

I had to look up C_brains, which I now know is a Cubase MIDI editor for iOS Lemur. Here is a link for those not familiar:

http://www.midikinetics.com/c_brains.html

C_brains is a testament to how talented, third party developers can add tremendous productivity and value to an existing program if the proper scripting hooks are made available to them.
Last edited by rpmseattle on Thu Jul 07, 2016 10:30 pm, edited 2 times in total.
Robert Puff
music preparer, editor, score producer, librarian and educator
RPM Seattle Music Preparation

scoredfilms
Junior Member
Posts: 193
Joined: Thu Feb 26, 2015 6:28 pm
Contact:

Re: Humanize Playback

Post by scoredfilms »

I'm one to prefer the ability to simply randomize all my note start times and velocities by a small percentage. Then, if I want to override it, I can tweak as needed. But I prefer that to a constantly changing algorithm that will never be consistent and will sometimes give dramatically poor results. Once it sounds right, it should stay that way. One note too loud can ruin a scene (film).

And for me this is not just for percussion. If I copy and paste four bars of 16th notes for strings and want the pasted result to be unique, I don't want to edit it note by note. I just want to click a button and get an instantly unique performance. Basically, the program should be as close to working with real human beings as possible. Real people aren't robots that never vary. So while I want exact control, I don't want to have to tell the program to do something I feel it should have known to do in the first place.

I'm not sure if that's terribly clear, but that's how I view humanization.

rpmseattle
New Member
Posts: 45
Joined: Fri Jun 17, 2016 4:14 pm
Location: Seattle
Contact:

Re: Humanize Playback

Post by rpmseattle »

This article seems somehow related to this thread: "Sound Analysis of Swing In Jazz Drummers"
http://www.numericalsound.com/research/ ... s-of-swing

The author (Ernest Cholakis, referenced above) analyses the "feel" of 16 well known jazz drummers, turning something abstract into definable numerical values. There is a PDF accompaniment to the article as well, which has bar charts comparing each player's velocity and timing over two bars as well as their "swing ratio" and other factors.

Each player produces consistent patterns of variation from absolute time that are unique to them. It's not random; rather, these variations in timing and dynamics within the context of a swing groove make up a specific "musical signature" that can be identified with that specific player over and over.

While I could certainly see the usefulness of a tool like the Sibelius "Transform Live Playback" for Dorico in time, the C_brains Lemur editor makes a great case for third party specialized tools that produce a super-set of specific features appealing to individual interests.

This has the additional advantage of lightening developer load so the focus can be on making the core feature set of the program as attractive as possible. Third party programmers are already doing great things for Finale with Lua, so I am excited about the possibilities for Dorico.

All of this is rather outside the topic of music notation specifically (as is this thread, actually), but interesting, nonetheless.
Robert Puff
music preparer, editor, score producer, librarian and educator
RPM Seattle Music Preparation

Alberto Maria
Junior Member
Posts: 119
Joined: Wed Oct 05, 2011 7:33 pm
Contact:

Re: Humanize Playback

Post by Alberto Maria »

Humble opinion: I strongly doubt that humanization can be done randomly and still get decent results... ;) \off
AmM
Alberto Maria


wXpPro SP3 still working in C6.5] W10pro 64b et al. like Mac.., SX1.0.6.78, SX2.2.39, SX3.1.1.944, C4.5.2.274, C5.5.3.651, C6.5.5.176, C7.0.7.2276, C7.5.40.315, C8.0.40.623, C8.5.30.192, C9.0.40.292, C9.5.50.345, C10.0.30.288, TgSe 3.1.0.196, VidEng 1.2.1.12, H5.2.2.6.87, HsonicSe3.2.20.194, D1.2.10.139, D2.2.10.1286

scoredfilms
Junior Member
Posts: 193
Joined: Thu Feb 26, 2015 6:28 pm
Contact:

Re: Humanize Playback

Post by scoredfilms »

I've seen humanizing done two ways:

Random:
VSL's Vienna Instruments Pro can play a random performance every time
Cubase's logical editor can very roughly do the same thing

Fixed:
Cubase's logical editor has a preset to randomize velocity of selected notes
Studio One has the same thing as a UI embedded feature, but also randomizes start times of each note

S1's is the most effective to me: it randomizes both parameters, it does so within a limited range (players don't tend to randomly blurt out excessively loud notes), and it goes the farthest toward making a duplicated set of bars just unique enough not to sound mechanical, like a measure-by-measure machine-gun effect.
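The "fixed" style described above (randomize once, within a bounded range, then leave the result alone) might look something like this. This is a hypothetical sketch; the note representation and parameter names are assumptions, not any DAW's actual API:

```python
import random

def humanize(notes, timing_range_ticks=10, velocity_pct=0.08, seed=None):
    """Apply a one-time, bounded randomization to note starts and velocities.

    notes: list of dicts with 'start' (ticks) and 'velocity' (1-127).
    Returns a new list. Because the randomization happens once, the result
    is stable on every playback, matching the 'fixed' approach: once it
    sounds right, it stays that way.
    """
    rng = random.Random(seed)
    out = []
    for n in notes:
        jitter = rng.randint(-timing_range_ticks, timing_range_ticks)
        vel_scale = 1.0 + rng.uniform(-velocity_pct, velocity_pct)
        out.append({
            "start": max(0, n["start"] + jitter),
            # Clamp to the valid MIDI velocity range so no note blurts out.
            "velocity": max(1, min(127, round(n["velocity"] * vel_scale))),
        })
    return out
```

Running this once over a pasted passage gives an "instantly unique" copy, and the small percentage bounds keep it from ever producing a dramatically wrong note.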

I assumed that by using a term like "humanize", developers would be familiar with how existing tools have approached this. But some comments lead me to think I'll be misunderstood instead. Hopefully this helps to clarify.

-Sean

cparmerlee
Member
Posts: 829
Joined: Sat Jun 18, 2016 4:32 pm
Contact:

Re: Humanize Playback

Post by cparmerlee »

scoredfilms wrote: and it goes the farthest at making a duplicated set of bars just unique enough not to sound mechanical, like a measure by measure machine gun effect.
The start and stop timing is only part of making the performance sound realistic, and I would have thought a relatively small part. I suppose on very repetitive, very rhythmic material, the note timing is pretty important. But to my ears, the things that make the performance sound human are:
  • Smooth dynamics in the natural flow of musical lines
  • Good implementation of fp and similar markings
  • Reasonable treatment of rit and fermatas
  • etc.
In the past, Finale and Sibelius have done a mediocre job on these things. There is clearly a lot more to be done. I would think there could be some work with intonation adjustments. Certainly an advanced wind or string ensemble can apply just intonation. And a not-so-advanced ensemble can have intermittent bits of less-than-perfect intonation.

I think the real question here is whether the Dorico team really appreciates how important playback is to many folks, or if they really consider the intricacies of the printed notation to be paramount. I think we all realize the early releases need to focus on foundations, so surely nobody expects advanced playback early on, but I am curious where this fits in the overall priority scheme.
Dorico 3, Cubase 10.5, Windows 10, Focusrite Scarlett 18i20 audio i/f
http://sonocrafters.com/

DG
Member
Posts: 704
Joined: Wed Dec 15, 2010 8:08 pm
Contact:

Re: Humanize Playback

Post by DG »

cparmerlee wrote:
scoredfilms wrote: and it goes the farthest at making a duplicated set of bars just unique enough not to sound mechanical, like a measure by measure machine gun effect.
The start and stop timing is only part of making the performance sound realistic, and I would have thought a relatively small part.
Yes and no. If you ever have a sampled chord where all the instruments play at exactly the same time, it sounds incredibly fake. In fact, even with live players, when everyone plays exactly at the same time (by crafty editing) it often sounds strangely small. So whilst the actual performance of each and every line is very important, I wouldn't want to play down the advantage that something like the humanisation in VI Pro gives when working with samples.
Nuendo 6.07
Intel Xeon 3.0GHz 10 Core 20 Threads
32GB RAM
Windows 7 (x64)Pro
RME Multiface II
Intensity
nVidia GT 640 graphics card

scoredfilms
Junior Member
Posts: 193
Joined: Thu Feb 26, 2015 6:28 pm
Contact:

Re: Humanize Playback

Post by scoredfilms »

DG, I actually agree with that. But I rarely convey it in a way people understand, so I just focus on timing and velocity as static elements. I believe a dualistic approach is the best answer here, to get the best of both worlds; without it we run the risk of neglecting performance. The truth is, humanization is a multifaceted area to dive into. But it is one of the most important in a program where musical performance is key to many users.

One of the most difficult things to convey, I think, is the need for humanization of CC data. Where performance is a dynamic and fluid thing for long notes, and where not every player in the orchestra crescendos the exact same way, there is a need for fluid humanization as well. Much like in VI Pro, I would like to be able to adjust sliders which delay and increase/decrease the values of CC data by a percentage.

That may not be the perfect solution, but I believe it at least highlights the problem. The problem is that when I copy a violin part to the viola, the two parts should not sound so identical that they feel fake and mechanical. The thing is, real players listen to each other and adjust to each other. So I find myself editing CC data to have subtle variations in the curves which imitate this effect. I find that without doing this, the music is never believable. I believe this to be the single most important factor in realism from sampled instruments, apart from the obvious like having enough articulations, etc.
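The slider idea described above (delay the CC data and scale its values by a percentage) can be sketched as a small transform applied to a copied curve. This is a hypothetical sketch; the event format and parameter names are assumptions, not the API of VI Pro or any other product:

```python
import random

def vary_cc_curve(events, delay_ticks=20, value_pct=0.05, seed=None):
    """Make a subtly different copy of a CC curve for a second section.

    events: list of (tick, value) pairs, value in 0-127 (e.g. CC1 dynamics).
    The whole curve is shifted by a small random delay and every value is
    scaled by a small random percentage, imitating a section that follows
    and responds to another rather than duplicating it exactly.
    """
    rng = random.Random(seed)
    delay = rng.randint(0, delay_ticks)
    scale = 1.0 + rng.uniform(-value_pct, value_pct)
    return [(tick + delay, max(0, min(127, round(value * scale))))
            for tick, value in events]
```

Copying the violins' CC1 lane to the violas through such a transform keeps the overall phrasing while breaking the mechanical sample-accurate identity between the two sections.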

I would be interested in hearing of a more effective solution; I just believe that's the problem to solve. LFOs aren't effective because they simply follow a pattern that is not very human.

My process is to craft a CC performance which is realistic. I believe everyone will do this, as everyone wants an element of custom control; I don't believe you can satisfy most people with an algorithm alone. But if I can craft the general performance style, and the program then understands how to apply it to different parts in a way that is unique to each instrument section, the program would save a great deal of headache.

The overall goal I believe we all share is simply stated by saying that the performance should not sound synthetic, but organic. That requires a lot of unique detail.

-Sean

aveter
Junior Member
Posts: 53
Joined: Fri May 04, 2018 1:11 pm
Contact:

Re: Humanize Playback

Post by aveter »

Hello, friends!
I'm sorry to resurrect a very old topic, but I couldn't find a more appropriate one, and I didn't want to start a new topic.
I have a problem with the starting position of notes when slurs and articulations are applied to them. For example, if notes have no articulations (no accent, no legato, no marcato, etc.), then in Play mode the starting positions of the notes correspond to the "Note start position" parameter in the Playback Options. But as soon as I apply an accent, a marcato or a slur, the starting position of the note is snapped strictly to the grid. The humanization disappears.
At first I thought this was caused by the playback techniques in the expression map I created, but the problem also occurs with the default expression map.
Maybe I am doing something wrong? Maybe there is some parameter that controls this behaviour in Dorico? Unfortunately, I could not find such a parameter.
Dear friends, please tell me how to solve this problem.
Thanks.

Daniel at Steinberg
Moderator
Posts: 18524
Joined: Mon Nov 12, 2012 10:35 am
Contact:

Re: Humanize Playback

Post by Daniel at Steinberg »

I believe Dorico does this so that it can be sure that the playing techniques (keyswitches, controller changes, etc.) that it needs to trigger in order to produce the correct result in playback will be synced up with the notes to which they apply.

PaulWalmsley
Steinberg Employee
Posts: 2079
Joined: Tue May 17, 2016 9:24 pm
Location: Steinberg, London
Contact:

Re: Humanize Playback

Post by PaulWalmsley »

This is something that is on our backlog; there are some extra phases of processing that we need to do to make it work. The reason it happens is that if you add an articulation to a note, it might require a keyswitch or controller switch. But if the note is humanized, that may cause it to start before the keyswitch, so you would get a note that plays with the wrong sound; therefore we set the humanization to 0 for that note.
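The constraint Paul describes can be sketched in a few lines. These are hypothetical names, a sketch of the behaviour as explained above, not Dorico's actual code:

```python
def humanized_start(grid_tick, humanize_offset, needs_keyswitch):
    """Return the playback start tick for a note.

    A note whose articulation requires a keyswitch/controller switch gets
    its humanization zeroed, guaranteeing that the switch message (sent at
    the grid position) always precedes the note-on, so the correct sound
    plays. Notes without such switches keep their humanize offset.
    """
    if needs_keyswitch:
        # Any offset (especially a negative one) could put the note-on
        # ahead of the keyswitch, so drop the humanization entirely.
        return grid_tick
    return grid_tick + humanize_offset
```

This also explains aveter's observation: the moment an accent, marcato or slur forces a switch, the note snaps back to the grid.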
Architect & Developer - Steinberg London

aveter
Junior Member
Posts: 53
Joined: Fri May 04, 2018 1:11 pm
Contact:

Re: Humanize Playback

Post by aveter »

Thank you very much for your prompt reply, Daniel and Paul!
Thank you very much for your prompt replies, Daniel and Paul!
I had assumed that techniques, articulations etc. were attached to the quantized positions rather than to the note events. Yes, it is a very difficult task and cannot be corrected quickly.
I'm a little upset about this, but I hope you will be able to solve this difficult problem. Please tell me, do you plan to implement this in the third version? Thank you very much!

Daniel at Steinberg
Moderator
Posts: 18524
Joined: Mon Nov 12, 2012 10:35 am
Contact:

Re: Humanize Playback

Post by Daniel at Steinberg »

We can't say for sure at this stage, I'm afraid.

aveter
Junior Member
Posts: 53
Joined: Fri May 04, 2018 1:11 pm
Contact:

Re: Humanize Playback

Post by aveter »

I understand, thank you, Daniel.

aveter
Junior Member
Posts: 53
Joined: Fri May 04, 2018 1:11 pm
Contact:

Re: Humanize Playback

Post by aveter »

I noticed that the end positions of notes under a slur are also lost when an accent is assigned to those notes. Legato notes are transmitted to the VSTi incorrectly, as non-legato, so the VSTi's scripts for legato notes don't work. :(
At first I didn't understand why my pedal lines were exported incorrectly to Cubase 10; that explains it. Add to that the broken Beat Stress...
As a result, humanization is currently not working, and as a consequence playback is incorrect. Very sad. :cry:
Dear Daniel and Paul, I pray to you as gods: please solve this problem, at least in the third version.
With respect to you, Alexander.

alindsay55661
Junior Member
Posts: 62
Joined: Thu Sep 05, 2019 6:43 pm
Contact:

Re: Humanize Playback

Post by alindsay55661 »

I'm going to add another voice to this thread. "Humanization" will of course never replace actual performance, but it can mean the difference between something feeling completely synthetic and something totally inspiring. This has significant workflow implications, just as relevant as the beauty and form of high-quality engraving. In fact, if we take a step back, everything Dorico does to support "notation" is really about "humanization," which is to say the features are all designed to help a person craft very specific and intentional visual output for another person:
  • a beautiful piece of art
  • an educationally focused set of practices or examples
  • a score
  • a lead sheet
  • a reduction of popular songs for intermediate guitar players
The point here is that it's all about being intentional, providing a specific message and experience for the reader, or in other words, expressing our humanity or dare I say "humanizing" the written output as much as possible. Why? Because it will be experienced by a human so those little details really make a difference, those details are the difference. Can we agree that output riddled with mechanical constraints and artifacts of a computer is far less palatable than the deliberate work of a person? If not, Dorico would have no user base, or certainly a much smaller set of features that aren't directly tied to efficient writing.

How does this relate to playback? The answer is workflow. If Dorico is nothing more than a tool to notate music you already know, or are given from another musician or a MIDI file, its notation focus is spot on. If, however, Dorico intends to be a tool for writing music, there is far more at stake. Notation is critical for communication... but maybe not the most critical element for writing. Consider the diversity of writing styles:
  • Some musicians will write John Williams style and need nothing more than staves and a piano, or maybe not even a piano...
  • Others require, or prefer, a high degree of experimentation, discovery or malleability while writing...
  • Some rely deeply on emotion, intuition and "feeling" the music in real-time at the highest possible fidelity...
What we see here is a range of feedback requirements. While John Williams needs only visual feedback, an understudy may need audible pitches from time to time, but not in every bar as most music is still heard internally. These folks don't need so much aural feedback because they can already hear the live, and very human, orchestra playing in all its glory and imperfection. For them, perhaps Dorico is only a tool of efficiency.

But further down the line is someone that needs to hear a pitch in every bar, and someone that needs to hear harmony, someone that needs to hear rhythm, someone needing to hear the contrast of multiple instruments played together, until you finally get to the writers that need to experience near-human performance before they can liberate the music inside them... should these writers be forced to the piano roll? Nay! Much of this workflow is fairly well covered in Dorico already, consider the feature support: custom VSTs, expression maps, automation lanes, note offsets, but why? Why provide audible feedback and tooling if not to support those whose workflows require more than "notation?" The answer is that Dorico aspires to be more than an exceptional notation tool, it aspires to be an exceptional writing tool. Notation is just the anchor (see below).

Now, before you think I've swung the pendulum too far, let's acknowledge that every good product has a degree of focus and constraint. Let's call this an "anchor". We can generally say that Bitwig and Ableton Live have anchors in electronic and live production, but that doesn't mean you can't use them for orchestration. And Cubase is anchored in recording, but can still make effective MIDI mockups. So yes, Dorico's anchor is notation; it won't aspire to be the next leading notationless DAW, and its workflows will always cater to notation-based goals. Nobody is arguing this. A plea to improve aural workflows takes nothing away from the kingdom where notation is king. These expanded borders actually enlarge the kingdom and further glorify notation itself. In the end, all workflows within Dorico, even those requiring high-fidelity audible feedback, come back to notation. Music writers that aren't interested in notation will never have given Dorico a second glance.

So, as a writer who finds notation far more efficient and creative than the piano roll, and would also love to cut out DAWs for high-fidelity aural feedback, I'm asking you to please build the most exceptional flexibility and realism your business can afford to. Why stop at being the best notation software? A great next step for me would be high-quality humanization parameters (something at least as good as Divisimate or an integration with Divisimate). I could stay in Dorico and write more efficiently, accurately and beautifully! :D

cparmerlee
Member
Posts: 829
Joined: Sat Jun 18, 2016 4:32 pm
Contact:

Re: Humanize Playback

Post by cparmerlee »

alindsay55661 wrote:
Fri May 29, 2020 12:25 am
Dorico aspires to be more than an exceptional notation tool, it aspires to be an exceptional writing tool. Notation is just the anchor
Let me just add a few thoughts to what you have put out there. A few years ago (quite recently, really) a well-respected college professor in the area told me emphatically that when scoring any arrangement, every instrument should be set to piano "so you could hear all the notes." He was speaking from a long history with Finale, but that attitude might have come from a person working on any platform. Indeed, in 2000, I probably worked that way for the same reasons.

I wasn't interested in a debate, but the fact is we have come a long way from that point. I have produced a few things on the DAW that fooled some people into thinking it was a human performance, but few people will be fooled by a rendering coming directly from Dorico, or any other notation program, even using NotePerformer. I suppose most people would say that should not be the ultimate objective for any notation program. But I suggest it actually should be one of the measurements of success long-term for all the reasons you mentioned.

Auditioning music (or better yet, evolving music iteratively) is most effective when it sounds realistic. And honestly, we aren't that far off already. Going back to that professor's advice, if one does simple things like panning the instruments across the stereo field and maybe using a touch of compression to bring out the inner voices, one really can hear the music pretty well. And with NP in particular, the interpretation of slurs and simple articulations is good enough that it often motivates me to add markings to the score. That is to say, the same markings that make NP sound more human-like also will help the human play the music as intended. Dynamics are a bit spooky, but there have been many times that the interpretation of dynamics has been realistic enough to cause me to change the score in a constructive way.

And while most of the virtual instruments don't really sound like the "real thing", they do have most of the lower overtones such that you can actually hear how successful different voicings are likely to be when played by humans.

What I'm trying to say is that I agree with what you wrote; I just wanted to point out how far the technology has already come.
Last edited by cparmerlee on Fri May 29, 2020 4:55 pm, edited 1 time in total.
Dorico 3, Cubase 10.5, Windows 10, Focusrite Scarlett 18i20 audio i/f
http://sonocrafters.com/

kimfierens
Junior Member
Posts: 69
Joined: Sat Apr 18, 2020 2:58 pm
Contact:

Re: Humanize Playback

Post by kimfierens »

cparmerlee wrote:
Fri May 29, 2020 2:48 pm
Going back to that professor's advice, if one does simple things like panning the instruments across the stereo field and maybe using a touch of compression to bring out the inner voices, one really can hear the music pretty well.
Could you please expand on that particular point a little? I'm not an audio expert by a long stretch, and while I can see the value in panning, I've never understood what compression actually does for the sound. (To be honest, I have only the vaguest notion what compression is, anyway.)

cparmerlee
Member
Posts: 829
Joined: Sat Jun 18, 2016 4:32 pm
Contact:

Re: Humanize Playback

Post by cparmerlee »

kimfierens wrote:
Fri May 29, 2020 3:50 pm
cparmerlee wrote:
Fri May 29, 2020 2:48 pm
Going back to that professor's advice, if one does simple things like panning the instruments across the stereo field and maybe using a touch of compression to bring out the inner voices, one really can hear the music pretty well.
Could you please expand on that particular point a little? I'm not an audio expert by a long stretch, and while I can see the value in panning, I've never understood what compression actually does for the sound. (To be honest, I have only the vaguest notion what compression is, anyway.)
First, with regard to panning: if you have good stereo speakers in a good orientation (each speaker equidistant from your ears in the position you normally take while auditioning material), even a small amount of stereo separation can help individual parts jump out. That is especially true if you're using the same sound samples for multiple parts. It is easy to do with the Dorico mixer (or, if using NP, you must use NP's mixer).

Compression has to do with dynamics and is a little more complicated than panning. The basic concept is to reduce the volume of the loudest sounds, which lowers the overall sound energy. Generally this is combined with a boost of the entire signal to bring it back up to the original loudness, usually called "make-up gain." In applying the make-up gain, the softer sounds are amplified, making them easier to discern. Some compressors apply make-up gain automatically; others require the user to do it manually. Even small amounts of compression can make the inner parts substantially easier to hear. If you go crazy with compression, everything will have the same level of energy and there will be no dynamics at all. Less is usually better.
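The basic threshold/ratio/make-up-gain behaviour described above can be written as a simple static curve. This is an illustrative sketch of the textbook formula, not any particular plug-in's implementation; the parameter values are arbitrary examples:

```python
def compress_sample(level_db, threshold_db=-18.0, ratio=4.0, makeup_db=6.0):
    """Static compression curve on a signal level in dB.

    Levels above the threshold are reduced: only 1/ratio of the excess
    passes through. Make-up gain then lifts everything back up, which is
    what makes the softer inner voices easier to hear.
    """
    if level_db > threshold_db:
        over = level_db - threshold_db          # dB above the threshold
        level_db = threshold_db + over / ratio  # compress the excess
    return level_db + makeup_db
```

With a 4:1 ratio and an -18 dB threshold, a -6 dB peak comes out at -15 dB (before make-up gain), while a quiet -30 dB inner voice passes untouched and then gains the full make-up boost.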

That's the basic principle. However, sound engineers often use compressors for a completely different objective. Most compressors include control over attack time and release time. Here, "attack" is the opposite of what you might expect: it is the attack time of the compressor, i.e. how quickly the compressor decides to attenuate the loudest sounds. The release determines how long the compressor remains clamped down after the transient has passed. By manipulating these timers, you can achieve a harder-sounding attack, or more pulsation in hard-driving music. For example, with an attack time of 20 ms, the first 20 ms of a loud sound like a kick drum will pass at full volume, then be clamped down to let other material be heard after that initial thump. Used in this manner, compression is usually applied to individual instruments, which is more germane to the DAW mixing environment. For Dorico, I would add a compressor to the main output bus only and give it a quick attack time.
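The attack/release behaviour amounts to smoothing the gain reduction over time rather than applying it instantly. A minimal sketch, assuming one-pole smoothing with illustrative coefficients rather than calibrated millisecond values:

```python
def gain_envelope(levels_db, threshold_db=-18.0, ratio=4.0,
                  attack_coeff=0.5, release_coeff=0.05):
    """Track the compressor's gain reduction (in dB) over a level sequence.

    A slow attack (small attack_coeff) lets the first part of a transient,
    such as a kick drum's thump, pass at nearly full volume before the
    compressor clamps down; the release coefficient controls how long it
    stays clamped after the loud passage ends.
    """
    gr = 0.0   # current gain reduction in dB (positive = reducing)
    out = []
    for level in levels_db:
        # Target reduction from the static curve: compress the excess.
        target = max(0.0, (level - threshold_db) * (1.0 - 1.0 / ratio))
        coeff = attack_coeff if target > gr else release_coeff
        gr += coeff * (target - gr)  # move gradually toward the target
        out.append(gr)
    return out
```

Feeding it a quiet passage followed by a loud one shows the reduction ramping in over several samples instead of snapping, which is exactly the "let the thump through" effect described above.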

For a visual model, you can think of a compressor like somebody manually moving the mixer's fader up and down to try to soften the loudest notes but let the rest of the material come through at full volume.

===

On edit, I would also add that if one has been using bargain-basement $24 computer speakers, one will be surprised at how much more clarity Dorico's playback has after upgrading even to a $100 set of speakers targeted at video gamers. Some of those come with a subwoofer that is at least decent at a satisfactory sound level.

If one is using an external audio interface, one probably already has a speaker setup targeted at the home studio. There are some very nice and very affordable choices, such as the JBL 306 at about $300/pair. If one goes with a separate woofer, one can use smaller satellite speakers such as the Mackie CRS-X at $200 for the pair. A woofer for such a studio might be the JBL LSR310S at $300 (that's 200W with an internal crossover). Nearly as good would be the Mackie CR8S-XBT (a new model), but curiously that one doesn't have XLR connectors; it uses balanced TRS cables, so it would be fine, but one would need to make sure one has balanced cables. These aren't necessarily recommendations, and a person can easily spend thousands of dollars on high-end studio monitors. But a setup such as this will be a real awakening if one is accustomed to really cheap PC speakers.
Dorico 3, Cubase 10.5, Windows 10, Focusrite Scarlett 18i20 audio i/f
http://sonocrafters.com/
