Humanize Playback

Greetings again,

VSL sold me on Humanize. Then Cubase added a function to randomize velocities. It was imperfect, but I could work with it because the velocity handle bars in the velocity/CC-lane area were brilliant. Now I use another DAW with Humanize baked into the UI, and I use it constantly. It’s better than Cubase’s Function option, except that Cubase still has those velocity handle bars in the CC-lane area, which are brilliant for velocity editing. I know I’m adding a million considerations, and that programming is something to do one step at a time, so I hope I’m not overwhelming you folks with feedback. I don’t expect every feature to happen in v1, just everything I want. :wink:

Cheers,
Sean

I am doing a lot of “controlled” humanizing using the logical editor in Cubase. I would love to see such a feature in one of the upcoming versions.

Also, a generic controller feature could help with using an external MIDI editor such as C_brains for Android/iPad, which I use to randomize and edit MIDI events.

Agreed

When I used Cubase, the logical editor was useful, but I find myself using S1’s baked-in Humanize more. And nothing was as good for this as using C_brains. But C_brains slows down Cubase and causes glitches if you use the full-size preset library. The problem is that the editing tools in Cubase are a page out of 2001. They work, but they’re so basic it’s painful. C_brains finally addressed that, and brilliantly, but it needs to be native, not scripts. Speed alone is one argument. But archaic editing tools that aren’t built for a fast workflow… in 2016? Yeah, that doesn’t fly with me.

If Steinberg is smart, they’ll pay attention to C_brains and the MIDI-transforming tools that are being made, which are leaps and bounds ahead of something as simple as an LFO. Those need to be a native part of the workflow, not a half-working patch-in. C_brains is intelligent, and the disconnect from the native tools is massive. That screams loud and clear that DAW makers aren’t using their own product enough. C_brains came from someone who used the piano roll and could code. That’s the solution here. It’s not one feature or another; it’s making tools like this part of the development philosophy. Smooth-working native features as robust as C_brains are what I believe Cubase needs, and Dorico, even if only half as robust, would benefit from them too.

Pattern based editing is a no-brainer for music.

I must have missed something here - I thought Dorico was supposed to be a music notation program?

Maybe somebody needs to explain to me what “humanized notation” looks like … :question:

(I know perfectly well what humanized playback is, of course.)

Rob,

I can’t tell if you’re sarcastic or genuinely asking.

I’m not asking for a feature on the Engraving tab, but on the Write/Play tabs. Those parts of the program have less to do with engraving principles and everything to do with whether Dorico sounds realistic when users hit the play button. And for many if not most film composers that’s a necessity that many software companies have recognized and developed features for, hence my interest in Dorico’s playback. So I’m not describing a feature that appears on the page, but one in the program UI, in an area of Dorico that users who care about realism will rely on. I hope that answers your question.

Cheers,
Sean

“humanized notation” is making sure no tuplets cross the barlines :smiley:

The fact that Dorico will have separately editable note onsets, durations and velocities stored within each notated note is good news, since this provides the most important building block not only for whatever performance editing tools come online later, but also for renotating / retranscribing the visible music notation.
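Purely to illustrate what that separation buys you (the names below are my own invention, not Dorico’s actual internals), a note can carry its written position alongside independently editable performance offsets, so humanizing never disturbs the notation:

```python
from dataclasses import dataclass

# Hypothetical sketch only; the field names do not reflect Dorico's real data model.
# The point is that played onset, duration and velocity can be edited without
# touching what is notated.
@dataclass
class PerformedNote:
    written_position: float       # position in quarter notes, as notated
    written_duration: float       # duration in quarter notes, as notated
    onset_offset: float = 0.0     # played start, relative to written_position
    duration_scale: float = 1.0   # played length as a fraction of written_duration
    velocity: int = 80            # played MIDI velocity (1-127)

    def played_start(self) -> float:
        return self.written_position + self.onset_offset

    def played_duration(self) -> float:
        return self.written_duration * self.duration_scale
```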

See: Midi files and also Performance settings - Dorico - Steinberg Forums

I’ve always associated the term “Humanize Playback” with tools which on their own make micro adjustments to performance attributes like timings, dynamics or tempo changes using some set of controlled random variables in order to simulate “live performance”. I’ve never really cared much for this approach, since it doesn’t effectively capture human phrasing at all.
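As a rough sketch of what that traditional approach amounts to (a generic illustration, not any particular product’s algorithm), it is little more than bounded random offsets applied note by note:

```python
import random

# Naive "controlled random" humanization: jitter each note's start time (in beats)
# and velocity within fixed bounds. The note format (dicts with "start" and
# "velocity" keys) is assumed purely for illustration.
def humanize_random(notes, max_shift=0.02, max_vel_change=8):
    for note in notes:
        note["start"] += random.uniform(-max_shift, max_shift)
        jitter = random.randint(-max_vel_change, max_vel_change)
        note["velocity"] = max(1, min(127, note["velocity"] + jitter))
    return notes
```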

Another approach is taken by Logic’s “Groove Templates”, DP’s “Groove Quantize” and Pro Tools’ “Feel Injector” (all based on work by acoustic researcher Ernest Cholakis): they allow specific performance attributes from one track to be copied to another. These can also include some amount of mild randomization, or can leave alone notes or phrases already within a particular tolerance, to “humanize” what would otherwise be an exact performance copy.

The net effect is much more realistic than simply randomizing performance attributes, but is perhaps more ideally suited for music centered around a rhythm section than for orchestral phrasing.
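A rough sketch of the groove-template idea (my own simplification, not how Logic, DP or Pro Tools actually implement it): record how far each reference note deviates from the grid, then pull the corresponding target notes toward the same deviations, with a strength factor and a tolerance below which notes are left alone.

```python
# Copy the timing "feel" of a reference track onto a target track.
# Notes are assumed to be dicts with a "start" time in beats; illustrative only.
def apply_groove(target_notes, reference_notes, grid=0.25, strength=1.0, tolerance=0.0):
    feel = {}
    for note in reference_notes:
        slot = round(note["start"] / grid)          # nearest grid index
        feel[slot] = note["start"] - slot * grid    # deviation from that grid line

    for note in target_notes:
        slot = round(note["start"] / grid)
        wanted = feel.get(slot, 0.0)                # deviation heard in the reference
        current = note["start"] - slot * grid       # deviation the target already has
        if abs(wanted - current) > tolerance:
            note["start"] += (wanted - current) * strength
    return target_notes
```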

There is also “on the fly” randomization of playback nuance built into some sample libraries, which people seem to like.

However, it seems that the discussion focus here is on editing tools which can filter for specific performance data types within specific ranges, dynamics, octaves or locations etc. and allow quick editing of those selections.
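In code terms, such a tool boils down to a query over the performance data followed by a transformation of whatever matched. A minimal sketch, again assuming notes as simple dicts (nothing here is any existing product’s API):

```python
# Filter notes by pitch range, velocity range and (optionally) position,
# in the spirit of a logical-editor query. Illustrative sketch only.
def select_notes(notes, min_pitch=0, max_pitch=127,
                 min_velocity=1, max_velocity=127, start=None, end=None):
    selected = []
    for note in notes:
        if not (min_pitch <= note["pitch"] <= max_pitch):
            continue
        if not (min_velocity <= note["velocity"] <= max_velocity):
            continue
        if start is not None and note["start"] < start:
            continue
        if end is not None and note["start"] >= end:
            continue
        selected.append(note)
    return selected

# Example: soften every note above middle C in the first four 4/4 bars.
# for note in select_notes(notes, min_pitch=60, start=0.0, end=16.0):
#     note["velocity"] = max(1, note["velocity"] - 6)
```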

I had to look up C_brains, which I now know is a Cubase MIDI editor for iOS Lemur. Here is a link for those not familiar:

http://www.midikinetics.com/c_brains.html

C_brains is a testament to how talented, third party developers can add tremendous productivity and value to an existing program if the proper scripting hooks are made available to them.

I’m one to prefer the ability to simply randomize all my note start times and velocities by a small percentage. Then, because the result is written into the notes, I can tweak it if needed. I prefer that to a constantly changing algorithm that will never be consistent and will sometimes give dramatically poor results. Once it sounds right, it should stay that way. One note too loud can ruin a scene (film).

And for me this is not just for percussion. If I copy and paste 4 bars of 16th notes for strings and want the pasted result to be unique, I don’t want to have to edit the pasted result note by note. I just want to click a button and get an instantly unique performance. Basically, the program should be as close to working with real human beings as possible. Real people aren’t robots that never vary. So while I want exact control, I don’t want to have to tell the program to do something I feel it should have already known to do in the first place.
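A minimal sketch of that “click a button” idea (my own illustration, not any existing feature): apply one bounded randomization pass to the pasted phrase and write the offsets into the notes, so the result is fixed from then on rather than re-rolled at every playback.

```python
import random

# One-shot variation for a copied phrase. The offsets are written into the note
# data, so playback stays consistent afterwards; passing a seed makes the
# variation reproducible. Notes are assumed to be dicts, purely for illustration.
def vary_pasted_phrase(notes, max_shift=0.015, max_vel_change=6, seed=None):
    rng = random.Random(seed)
    for note in notes:
        note["start"] += rng.uniform(-max_shift, max_shift)
        jitter = rng.randint(-max_vel_change, max_vel_change)
        note["velocity"] = max(1, min(127, note["velocity"] + jitter))
    return notes
```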

I’m not sure if that’s terribly clear, but that’s how I view humanization.

This article seems somehow related to this thread: “Sound Analysis of Swing In Jazz Drummers”

The author (Ernest Cholakis, referenced above) analyses the “feel” of 16 well known jazz drummers, turning something abstract into definable numerical values. There is a PDF accompaniment to the article as well, which has bar charts comparing each player’s velocity and timing over two bars as well as their “swing ratio” and other factors.

Each player produces consistent patterns of variation from absolute time that are unique to them. It’s not random; rather, these variations in timing and dynamics within the context of a swing groove make up a specific “musical signature” that can be identified with that specific player over and over.
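For readers unfamiliar with the numbers involved, one of the factors the article measures, the swing ratio, can be expressed very roughly like this (a simplification, with made-up example values):

```python
# Rough swing ratio for one pair of "swung" eighths: the time taken by the first
# note of the pair divided by the time taken by the second. A straight feel gives
# about 1.0; a hard triplet swing gives about 2.0.
def swing_ratio(beat_onset, offbeat_onset, next_beat_onset):
    first_len = offbeat_onset - beat_onset
    second_len = next_beat_onset - offbeat_onset
    return first_len / second_len

# Example: beat at 0.0, off-beat note played at 0.66, next beat at 1.0
# swing_ratio(0.0, 0.66, 1.0) -> roughly 1.94, close to triplet swing
```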

While I could certainly see the usefulness of a tool like the Sibelius “Transform Live Playback” for Dorico in time, the C_brains Lemur editor makes a great case for third party specialized tools that produce a super-set of specific features appealing to individual interests.

This has the additional advantage of lightening developer load so the focus can be on making the core feature set of the program as attractive as possible. Third party programmers are already doing great things for Finale with Lua, so I am excited about the possibilities for Dorico.

All of this is rather outside the topic of music notation specifically (as is this thread, actually), but interesting, nonetheless.

Humble opinion on: I strongly doubt that humanization could be done randomly and get decent results… :wink: \off

I’ve seen humanizing done 2 ways

Random:
  • VSL’s Vienna Instruments Pro can play a random performance every time
  • Cubase’s logical editor can very roughly do the same thing

Fixed:
  • Cubase’s logical editor has a preset to randomize the velocity of selected notes
  • Studio One has the same thing as a UI-embedded feature, but it also randomizes the start time of each note

S1’s is the most effective to me: it randomizes both parameters, it works within a limited range (players don’t tend to randomly blurt out excessively loud notes), and it goes the farthest at making a duplicated set of bars just unique enough not to sound mechanical, avoiding a measure-by-measure machine-gun effect.

I assume by using a term like humanize, that developers will be familiar with how existing tools have been approaching this. But some comments lead me to think I will be misunderstood instead. Hopefully this helps to clarify.

-Sean

The start and stop timing is only part of making the performance sound realistic, and I would have thought a relatively small part. I suppose on very repetitive, very rhythmic material, the note timing is pretty important. But to my ears, the things that make the performance sound human are:

  • Smooth dynamics in the natural flow of musical lines
  • Good implementation of fp and similar markings
  • Reasonable treatment of rit and fermatas
  • etc.

In the past, Finale and Sibelius have done a mediocre job on these things. There is clearly a lot more to be done. I would think there could be some work with intonation adjustments. Certainly an advanced wind or string ensemble can apply just intonation. And a not-so-advanced ensemble can have intermittent bits of less-than-perfect intonation.
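To make the intonation point concrete (standard tuning arithmetic, not anything specific to Dorico): just-intonation intervals sit a measurable number of cents away from their equal-tempered counterparts, which is exactly the kind of subtle adjustment an advanced ensemble makes and a weaker ensemble only approximates.

```python
import math

# Deviation of a just-intonation interval from its equal-tempered equivalent,
# in cents. Positive means the just interval is wider than the ET one.
def just_deviation_cents(ratio, semitones):
    just_cents = 1200 * math.log2(ratio)
    return just_cents - 100 * semitones

# A few familiar values:
# just_deviation_cents(5/4, 4)  -> about -13.7 (just major third, ~14 cents flat of ET)
# just_deviation_cents(3/2, 7)  -> about  +2.0 (just perfect fifth, ~2 cents sharp)
# just_deviation_cents(7/4, 10) -> about -31.2 (harmonic seventh)
```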

I think the real question here is whether the Dorico team really appreciates how important playback is to many folks, or if they really consider the intricacies of the printed notation to be paramount. I think we all realize the early releases need to focus on foundations, so surely nobody expects advanced playback early on, but I am curious where this fits in the overall priority scheme.

Yes and no. If you ever have a sampled chord where all the instruments play at exactly the same time, it sounds incredibly fake. In fact, even with live players, when everyone plays at exactly the same time (by crafty editing) it often sounds strangely small. So whilst the actual performance of each and every line is very important, I wouldn’t want to play down the advantage that something like the humanisation in VI Pro gives when working with samples.

DG, I actually agree with that. But I rarely manage to convey it in a way that people understand, so I just focus on timing and velocity as static elements. I believe a dualistic approach is the best answer here, in order to get the best of both worlds; without that we run the risk of neglecting performance. The truth is, humanization is a multifaceted area to dive into. But it is one of the most important in a program where musical performance is key to many users, of course.

One of the most difficult things I think to convey is a need for humanization with CC data. Where performance is a dynamic and fluid thing for long notes, and where not every player in the orchestra crescendos the exact same way, there is a need for fluid humanization as well. Much like in VIP, I would like to be able to adjust sliders which delay and increase/decrease the values of CC data by a percentage.

That may not be the perfect solution, but I believe it highlights the problem at the very least. The problem is that when I copy a violin part to the viola, the two parts should not sound so identical that they come across as fake and mechanical. The thing is, real players listen to each other and adjust to each other. So I find myself editing CC data to have subtle variations in the curves which imitate this effect. I find that without doing this, the music is never believable. I believe this to be the single most important factor in realism from sampled instruments, apart from the obvious like having enough articulations, etc.
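A minimal sketch of what that VIP-style slider idea could look like for a copied CC curve (my own illustration, not VSL’s or anyone else’s implementation): shift the whole curve slightly in time and scale its values by a small percentage, so the viola’s crescendo is no longer a clone of the violin’s.

```python
import random

# Make a copied CC curve (e.g. CC1 dynamics) subtly different from the original.
# cc_events is assumed to be a list of (time_in_beats, value) pairs; passing a
# seed keeps the variation reproducible. Illustrative sketch only.
def vary_cc_curve(cc_events, max_delay=0.05, max_scale=0.08, seed=None):
    rng = random.Random(seed)
    delay = rng.uniform(0.0, max_delay)               # small timing lag, in beats
    scale = 1.0 + rng.uniform(-max_scale, max_scale)  # e.g. 0.92 .. 1.08
    varied = []
    for time, value in cc_events:
        new_value = int(max(0, min(127, round(value * scale))))
        varied.append((time + delay, new_value))
    return varied

# Example: viola_cc = vary_cc_curve(violin_cc, seed=2)
```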

I would be interested in hearing of a more effective solution; I just believe that’s the problem to solve. LFOs aren’t effective because they simply follow a pattern that is not very human.

My process is to create a CC performance which is realistic. I believe everyone will do this, as everyone wants an element of custom control; I don’t believe you can satisfy most people with an algorithm alone. But if I can craft the general performance style, and the program then understands how to apply it to different parts in a way that is unique to each instrument section, the program would save a great deal of headache.

The overall goal I believe we all share is simply stated by saying that the performance should not sound synthetic, but organic. That requires a lot of unique detail.

-Sean

Hello, friends!
I’m sorry to resurrect a very old topic, but I could not find a more appropriate one and did not want to start a new thread.
I have a problem with the starting positions of notes when slurs and articulations are applied to them. For example, if notes have no articulations (no Accent, Legato, Marcato, etc.), then in Play mode their starting positions follow the “Note start position” parameter in Playback Options. But as soon as I add an Accent, a Marcato or a slur, the starting position of the note snaps exactly to the grid. The humanization disappears.
At first I thought this problem was due to the playback techniques in the expression map I created, but it also occurs with the default expression map.
Maybe I am doing something wrong? Is there some parameter that controls this behaviour in Dorico? Unfortunately, I did not find one.
Dear friends, please tell me how to solve this problem.
Thanks.

I believe Dorico does this so that it can be sure that the playing techniques (keyswitches, controller changes, etc.) that it needs to trigger in order to produce the correct result in playback will be synced up with the notes to which they apply.

This is something that is on our backlog. There are some extra phases of processing that we need to do to make this work. The reason it happens is that if you add an articulation to a note, it might require a keyswitch or a controller change. But if the note is humanized, that may cause it to start before the keyswitch, and you would get a note that plays with the wrong sound, so we set the humanization to 0 for that note.
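To illustrate the ordering problem described here (this is only a toy sketch, not Dorico’s code): the keyswitch has to be sent before the note sounds, so a negative humanize offset could push the note in front of its own keyswitch and trigger the previous articulation instead. Zeroing (or clamping) the offset avoids that.

```python
# Toy illustration of the scheduling constraint. Times are in beats; keyswitch_lead
# is how far ahead of the note the keyswitch would be sent. Not Dorico's code.
def schedule_note(note_start, humanize_offset, keyswitch_lead=0.01):
    keyswitch_time = note_start - keyswitch_lead
    played_start = note_start + humanize_offset
    if played_start < keyswitch_time:
        # The note would begin before its keyswitch, i.e. with the wrong sound,
        # so fall back to the un-humanized start (as described above).
        played_start = note_start
    return keyswitch_time, played_start
```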

Thank you very much for your prompt reply, Daniel and Paul!
I assumed that techniques, articulations, etc. were attached to the quantized positions and not to the note events themselves. Yes, it is a very difficult task and cannot be corrected quickly.
I’m a little upset about this, but I hope you will be able to solve this difficult problem. Please tell me, do you plan to implement this in the third version?
Thank you very much!

We can’t say for sure at this stage, I’m afraid.

I understand, thank you, Daniel.