Extensive sound library expression maps vs. just using Note Performer (opinions sought!)

Exactly! Hopefully the Dorico team has something up their sleeve to improve the ease with which high-quality playback is implemented, as it feels very difficult right now (as much as I appreciate their attempts to address it all with a piano roll, expression maps, etc.). I agree that Note Performer seems to have the right idea (now if only NP’s samples could be improved and - in particular for me, at least - it offered more appropriate jazz and other non-purely-orchestral playback, etc.).

  • D.D.

And re: adding ReWire to Dorico: barring this, I’ve noticed that Sibelius’s video import allows audio files to be imported as well (somewhat counterintuitively, as with many things in Sibelius, since it’s labelled “video import” :slight_smile:). Just allowing a single stereo audio file to be imported this way into Dorico would at least save the (slightly tedious) process of attaching an audio file to a static video image and then importing the video into Dorico (whenever I want to sync Dorico to something from Logic whose MIDI I’ve already imported) - just a suggestion to the Dorico team…
Best -

  • D.D.

Yes and no. We still want/need the palette of samples, but it’s also true that the modeling for those samples requires constant attention (or at least different templates for style/tempo/timbre/intonation/etc.). For the sustained-strings example, you basically only ‘need’ about 4 samples, and you can ‘remodel’ them for different phrases (often in real time as the piece plays). Still, you’re going to want a sound shaped for faster aggressive bowing, slower lush bowing, different variances of pressure, routines to handle the tuning scheme being used, and an A/B set for each bow direction.

A good library isn’t just about having a lot of ‘samples’. In fact, libraries often have dozens of ‘patches/programs’ based on surprisingly few actual ‘samples’. It’s still a viable ‘palette’ that gives the studio musician a starting ‘reference point’ for emulating the various aesthetic characteristics of a given instrument throughout a phrase. Once you have the ‘sketch’ in place, you can use the expressive controls to put the polish on it.

I can use the SAME SAMPLE and work up dozens of variations for a sound: ADSR, exciting or filtering specific frequency ranges, tuning characteristics, LFO or looped/enveloped vibrato effects, and more. In the tracking DAW, we often use a lot of channels: snip out just the portions where we want that sound, place them on a new track pointing there, and then pull up the VST and ‘tweak the thing’ in real time (maybe even with a slider/knob/breath controller/touch-tablet screen/anything we want), using our ears, while the DAW loops.
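
To make that concrete, here’s a tiny sketch in plain Lua of one sample spawning several ‘programs’ just by swapping parameter sets - the zone fields are hypothetical stand-ins for whatever the sample engine actually exposes:

```lua
-- Plain-Lua sketch: one sample, many 'programs'. The zone fields below are
-- hypothetical stand-ins for whatever the sample engine actually exposes.

local variations = {
  aggressive = { attack = 0.005, release = 0.15, cutoff = 9000, vibDepth = 0.10, vibRate = 5.5 },
  lush       = { attack = 0.120, release = 0.60, cutoff = 6500, vibDepth = 0.35, vibRate = 4.8 },
  flautando  = { attack = 0.200, release = 0.80, cutoff = 4000, vibDepth = 0.20, vibRate = 4.2 },
}

-- 'zone' points at the SAME underlying sample every time; only the
-- shaping parameters change.
local function applyVariation(zone, name)
  for param, value in pairs(variations[name]) do
    zone[param] = value
  end
end

local sustainZone = { sample = "violins_sustain.wav" }  -- hypothetical zone
applyVariation(sustainZone, "lush")
```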

Case in point…here is a screenshot from a macro-building phase where, ultimately, I’ve taught HALion to make something like half a dozen bowing variations using the same sustained sample layered up with a remodeled spiccato sample (samples from the HSO library)…but the dynamics (and sometimes more) are remodeled. So I’m only using something like 8 or 9 samples to get ALL these variations.

It uses keyswitches to bounce among the bowing variations.
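
Conceptually, the keyswitch logic looks something like this - a minimal sketch assuming a Lua script processor like HALion’s script module (onNote and postEvent are HALion Script calls; the keyswitch range and the routing field are hypothetical):

```lua
-- Minimal keyswitch sketch. Assumes a Lua script processor such as HALion's
-- script module; the 'bus' routing field is a hypothetical stand-in for the
-- real layer-selection logic in the macro.

local KS_LOW, KS_HIGH = 24, 29   -- hypothetical C1..F1 range: six bowing variations
local currentVariation = 1

function onNote(event)
  if event.note >= KS_LOW and event.note <= KS_HIGH then
    currentVariation = event.note - KS_LOW + 1   -- silently select a variation
  else
    event.bus = currentVariation                 -- hypothetical: route to that layer
    postEvent(event)                             -- sound the note
  end
end
```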

And what it looks like loaded into HALion SE.

Here’s a quick rendering directly from Dorico. Every sample I’ve used is included in Dorico Pro/HSO. The vstsound supplement required to get this running in HALion SE is only around 13 MB (most of that is bitmap images and other information required for the macro page). It only takes a double-click to install the supplement.

The primary bowing techniques I’m attempting to mimic here are clarity about where each phrase begins, smooth legato inside the phrases where it’s supposed to be, subtle changes/variations when the bow direction reverses, some degree of controllable vibrato application, and a martelé-like stroke on most of the non-harmonic tones occurring within the first 2 beats of the measure (dots living under slurs in the score…ta de da). I’ve roughed in a marcato stroke as well, but it’s pretty bad right now (I should be able to come up with something better).

Applied effects are the French Theatre reverb in the aux bus, and Maximizer/UV22 16-bit dithering on the way to MP3.

The score I’m attempting to interpret here is Mark Starr’s arrangement of Zortzico.

I know, it’s not that great a translation on my part, and it took a while to get it ‘this good’. Oh well…


I did this before we could ‘channel bounce’ and save sound templates! Two added benefits: I scripted it to delay the legato event, so legato slurred phrases make more sense now, and I built a ‘cross-fade’ between tutti and solo sounds from the same stave. Now that we can channel bounce and such, I don’t really need to continue the project…I can do most of this with the bog-standard HSO library (I side-host it in a Bidule instance and fix the legato-pedal issue there).
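
The cross-fade idea is simple enough to sketch. Here’s a minimal illustration in Lua - onController is the controller callback name from HALion’s Lua script module, but the CC number and level variables are hypothetical, so treat this as a sketch of the technique rather than the actual macro:

```lua
-- Equal-power crossfade between a 'tutti' layer and a 'solo' layer, driven
-- by a single MIDI CC. The CC number and level variables are hypothetical;
-- onController is HALion Script's controller callback.

local XFADE_CC = 11
local tuttiLevel, soloLevel = 1.0, 0.0

function onController(controller, value)
  if controller == XFADE_CC then
    local x = value / 127                    -- 0 = all tutti, 1 = all solo
    tuttiLevel = math.cos(x * math.pi / 2)   -- equal-power curve keeps the
    soloLevel  = math.sin(x * math.pi / 2)   -- perceived loudness steady
  end
end
```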

One can teach Dorico to make these adjustments with big lists of CCs in each expression map entry, sent to the same instrument on a single channel, or one can have several copies of the sample loaded, with the ‘choices’ easily available via simple channel bounces. The advantage of the latter is that during the work session you can start the score, isolate phrases, and then manipulate the VST (and/or controller data) while the score is playing (an audition process). Once you get it pretty close, it’s just a matter of bouncing channels whenever you need that base model for a sound.

I agree that modeling is extremely important. Regardless of whether you are using 4 samples or hundreds to shape a phrase, the fact remains that Dorico has to send information about the score to the plugin, and the plugin has to interpret that information to ‘decide’ what to do with it.

Software today pretty much works on this concept: if an event triggers me, I check all the data attached to the event; if certain conditions are met, I do this or that to something in memory; if not, I pass the event down the line for another if/then check. Down the line it goes until there is nothing left to check. Then it finally takes all the information that has built up in memory and uses it to send the ultimate command that makes a sound (which goes through the whole process yet again, several times over, in different stages of sound production, before a sound actually comes out of the speakers).
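
As a toy illustration of that chain (plain Lua, not any particular product’s code - the event shapes and field names are made up):

```lua
-- Toy illustration of the if/then chain described above: each check either
-- updates state in memory or passes the event down the line, and only at the
-- end does the accumulated state shape the command that makes a sound.

local state = { articulation = "arco", dynamics = 64 }

local function process(event)
  if event.type == "keyswitch" then
    state.articulation = event.name      -- remember for later notes
  elseif event.type == "cc" and event.cc == 1 then
    state.dynamics = event.value         -- update the dynamic level
  elseif event.type == "note" then
    -- the ultimate command, built from everything accumulated so far
    print(("note %d -> %s at dyn %d"):format(event.note, state.articulation, state.dynamics))
  end
end

process{ type = "keyswitch", name = "staccato" }
process{ type = "cc", cc = 1, value = 90 }
process{ type = "note", note = 60 }      -- prints: note 60 -> staccato at dyn 90
```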

One thing computers don’t do very often is LISTEN to what is coming through the speakers, in a real room, while they play, and make constant adjustments (real musicians and conductors do this CONSTANTLY…it’s part of what makes music ‘musical’). So we must have the machine model based on theoretical sets of rules and make CHOICES. It can get pretty close to what a human might do in terms of what we hear, but humans still usually need to freeze the final performance and make touch-up adjustments and/or override some things.

I think about this a lot, in the sense of trying to crack the code of humanizing music without actually playing it first. At least I think it is a code. Some people would call it the mystery of artistic creation - the part you can’t quite put a solid formula on, but which is very important to musicality.

Yep, and an AI that can post-analyze adds latency. Sometimes that’s not important, but it can be a serious problem as more and more AI tasks are piled onto a given scenario.

For the time being, I’m in full support of doing the best we can with theoretical real time modeling.

Given that building libraries that are good at that sort of thing will take TIME, I’m also in favor of continuing to beef up the play tab in Dorico, adding enough tracking-DAW-like features for us humans to ‘touch up and polish’ the score interpretation as we see fit.

Until then…true audio shaping software such as Cubase will still be there for us as an optional stage for putting extra detail and polish on a mock-up.

Do you mean programs like Note Performer, in terms of needing to “look ahead” to figure out how best to interpret the score? Couldn’t this be solved by being able to temporarily disable look-ahead when entering new MIDI performances in real time (like Logic Pro’s Low Latency Mode button I mentioned previously), and then re-enabling it for playback afterwards? To me, this would be a reasonable trade-off (especially since, to achieve this currently, I’ve taken to simply turning off MIDI Thru and using an external piano plug-in to play in real time - not exactly ideal, since the sound doesn’t match the sound I’m trying to enter).

  • D.D.

I think that temporarily disabling look-ahead has been discussed before and it’s not something Arne can currently do. Happy to be corrected if this is not the case.

Has anyone taken a Dorico NotePerformer mockup and made it sound like “real music” in a DAW? Practical examples would be more illustrative than talk, interesting though it has no doubt been to follow the discussion.

Similar, but not really. I think our computers are fast enough nowadays to handle ‘theoretical’ modeling without much latency. The main reason Note Performer has that designed latency is that neither Sibelius nor Dorico has the VST pins to send information like time signature and tempo, keeping it updated with each clock tick. It has to buffer a bit, ‘analyze’ what’s in the buffer, and combine that with events sent by the Dorico/Sibelius expression maps to guess at how best to model the sounds. If Dorico sent a bit more information as it translates the score, they could probably cut that latency down quite a bit, making it easier to combine NP with other libraries in the same score.
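
To illustrate the buffering idea, here’s a generic look-ahead sketch in plain Lua - this is not NotePerformer’s actual design, just the general shape of “hold events long enough that later context can inform earlier notes”:

```lua
-- Generic look-ahead buffer: hold incoming events for LOOKAHEAD ms so that
-- later context (slur ends, phrase peaks) can shape earlier notes.
-- Plain-Lua illustration; nothing here is NotePerformer's real code.

local LOOKAHEAD = 1000   -- ms (NP's designed latency is roughly a second)
local buffer = {}

local function render(e, future)
  -- phrasing decisions can consult the 'future' events still buffered
  print(("render note %d with %d future events in view"):format(e.note, #future))
end

local function onEvent(e, now)
  buffer[#buffer + 1] = e
  while #buffer > 0 and (now - buffer[1].time) >= LOOKAHEAD do
    render(table.remove(buffer, 1), buffer)
  end
end

onEvent({ time = 0,    note = 60 }, 0)      -- buffered
onEvent({ time = 500,  note = 64 }, 500)    -- buffered
onEvent({ time = 1200, note = 67 }, 1200)   -- flushes note 60 (aged past 1000 ms)
```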

What I was referring to here is more like this:

Imagine an AI intelligent enough to have mics hooked up, and LISTEN to itself playing, and make adjustments as it plays.

We humans do this constantly when playing in an ensemble. The sound reverberates around and in effect changes how we feel, and how we contribute to the group’s sound. If the euphonium player sitting behind a sax section puts a certain inflection into his playing, the sax players may well pick up on it and mimic it, or fight it, or even feel compelled to do something totally different yet ‘complementary’ to the effect. These things also have a profound effect on the overall intonation of a REAL performing group.

The way we currently have affordable mainstream ‘machines’ doing AI…well, even if we could find a way to have the machine hear itself in a real room before reacting to itself, it’d add considerable amounts of latency…

I’m definitely not a programmer, but I would guess it could be disabled - it’s just that the score interpretation would temporarily sound unmusical (but obviously Arne can chime in!). If doing so were a way to avoid latency on real-time MIDI input, it would be a reasonable, temporary trade-off to me…(as it also is using Logic’s Low Latency Mode, which temporarily disables plug-ins)…

  • D.D.

I hope it’s not rude to bump, but in this case I’ve gone back and edited a bit, added images and renderings, etc.
A few people had already posted before I got the edits in.

Here’s the post reference link.

That Google Drive link requires access (just to let you know): https://drive.google.com/file/d/1uygtQW … sp=sharing

Oops, is this better? (also edited above)

Thanks for the upload — I think I get a better idea of where you’re coming from with this Zortzico (quite a nice wee piece, by the way). It does look as if you’ve made HALion phrase in a considerably more musical way overall, and it’s set in a well-balanced acoustic. But there’s no getting round the fact that this library still often sounds like a barrel organ, particularly near the end, and the shorter notes are particularly artificial. I see trying to mould the timbre as wasted time even for one as talented as yourself – would it not be better to start with more sophisticated samples in the first place? My feeling was also that the top line dominates too often, not letting the texture fully emerge. Higher notes naturally sound louder than mid-range ones, but of course you’ll know all this and it might just be an artistic choice, which is fair enough.

I made the rendering a bit too hot as well - a little distortion and some unwanted artifacts at the loudest parts of the piece. Nothing a fresh rendering with the main fader down a bit can’t fix.

No, I have NOT put anything on a scope to balance it out yet. What you hear there was done by ear, mainly to show someone the Lua trick for delaying the legato pedal. The native HSO macros were locked down, so I ended up having to make my own. I wanted to learn that process anyway, so I took off with it. While I was in there, I cherry-picked some samples (none of which are Dorico’s default choices) and attempted to balance them out quickly and shape up a few attack styles.

My monitors are too far apart for the distance I sit from them, in a lousy room, tucked back into bookshelves that screw up any chance of getting a solid reference…on a PC I built around 2012-ish. They are small Equator monitors with very little low range at all, so I have to kind of guess at the bottom end…I’d rather it be too soft down there than cause ‘booming’ or ‘rattling’ on the next user’s system (80% of the things I hear on SoundCloud, made with those $800 sample libraries, are BOOMY as heck, even causing average consumer speakers to CLIP…and they almost always go WAY too wet with reverb). I do not have a sub at all.

Going from there to a set of Bose computer speakers that do quite a bit of sound coloring, also in the same crappy room with less-than-ideal placement…it’s pretty balanced. Oh well.

Why not fix my workstation and put the speakers in a better place? Well…let’s just say I’m barely allowed to keep a home studio as it is. It MUST be closed in a cabinet, out of sight (even the speakers), when it’s not in use.

As for starting with a ‘better library’: my students and clients don’t have $800 plugins and sample libraries on their rigs. I don’t ‘own’ many myself either (I sometimes get to play with borrowed or demo copies, or use someone else’s studio, etc.). So there’s that. If I can send a 20 MB supplement along with the score that makes it sound like this on the target user’s system…that’s a big plus for me.

I agree about the mix, but keep in mind that other than my roughed-in HALion patches, a simple, short expression map to send slurs/legato-pedaling events, and a few key-switches, this is what Dorico made of it!

If you wanted to bring down the 1st and 2nd violins a bit, you’ve got a few options here:

  1. Adjust the dynamic curve showing in the screen shot a couple of posts back.
  2. Pull the faders down.
  3. Set up a graphic or parametric EQ and roll off some frequencies.
  4. Adjust the dynamic curve in Dorico’s playback settings.
  5. Adjust the damping and such in the reverb plugin.
  6. Tweak the terraced dynamics in the score itself.
  7. Make changes in the play tab editor(s).

Other than CCs to bounce between tutti and solo, and one to turn some vibrato effects on/off, I have nothing else customized for expressive data. All the mixer faders are hard-set at 50%. All the pans are at center. The only things in the effect slots are the Steinberg convolution reverb on the aux, and the Maximizer and dithering on the main (to kind of normalize the rendering). All the fader EQs are disabled.

Yes, there’s a lot more that can be done to make it sound better. That’s a rough template. I probably won’t bother though…people seem to like the default sounds and mixes better anyway.

You can probably tell that the sfz notes are a bit much. Of course they can be toned down a bit.

I ‘put’ it in a barrel on purpose. I felt it fit the piece to have that tight/cramped spatial feeling…like a dance tavern with lots of posts, corners, various objects between the listener and the stage, bodies absorbing a lot of the sound, a lower ceiling, etc. The samples are raw, dry, close-miked, steady, and in tune…there’s a lot one can do with them in terms of staging, or getting them ‘out of the barrel’. It takes time, though, and I spent most of mine making the scripts and macros to get this far.

Thanks for your thoughts. I take your point about your clients and students not having expensive sample libraries on their rigs, so doing what you’ve done does indeed have a practical function – and of course you could easily do more if you felt it was worthwhile. Then again, your students and clients will largely not have your mockup skills if they wanted to do a lot with little themselves, as I would put it. And if they like the default, well, there’s not much more one can say…

Here are a couple of renderings of the Scherzo from Schubert’s SQ 15, done with HSO.

  1. Using the Dorico Defaults:
    Schubert-SQ15-Scherzo-HSO.mp3 - Google Drive

  2. With the same rough HSO patches as above in the Zortzico rendering, and a very simple expression map. NO effort spent shaping things in the play tab. Raw and exposed.

In this case, all that’s used are two key-switches: one for arco and one for staccato. Legato pedaling mutes a softly layered attack phase from a spiccato sample (optional - the user can adjust the level of this overlay ‘attack’, or even mute it entirely, on an as-needed basis using CCs, or directly in the macro if desired), flattens the dynamic attack of legato notes (any sustained note played while the CC68 pedal is engaged), and extends the ringing of the note a bit. Legato-pedal events are delayed 10 ms (adjustable from 0 up to about 40 ms) before being implemented, which allows the first note of a slurred phrase to get that bite, while the rest of the notes living under the slur are smoothly connected (even overlapping a bit).
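
For anyone wanting to try the delay trick themselves, here’s a simplified sketch of it for HALion’s Lua script module (onController, Event, and postEvent are HALion Script calls as I understand them; the attack-layer muting and release extension are omitted, so this is the shape of the idea, not the finished macro):

```lua
-- Delay CC68 (legato pedal) slightly so the first note under a slur keeps
-- its natural bite before legato smoothing engages. Simplified sketch of
-- the technique described above; the real macro does much more.

local LEGATO_CC   = 68
local legatoDelay = 10   -- ms; adjustable from 0 up to ~40 in the macro

function onController(controller, value)
  local e = Event(EventType.controller)   -- HALion Script event constructor
  e.controller, e.value = controller, value
  if controller == LEGATO_CC then
    postEvent(e, legatoDelay)             -- second arg = delay in milliseconds
  else
    postEvent(e)                          -- everything else passes straight through
  end
end
```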

Only a small touch of extra reverb - ‘Music Academy’ from REVerence. It’s a bit boomy on my rig in the mids and lows, way too much from the cello (I can also hear some phasing issues in the cello that I’d need to fix), and that boom seems to come from the sampled reverb tails included in the library and released on note-off. I didn’t make time to tone those reverb tails down, and with the setup I’m sitting at now it’d be mostly guesswork as to how it translates to other speakers/systems anyway. I just loaded it, set up an expression map for slurs and staccato, and rendered.

My point is that even with the bog-standard stuff that comes with Dorico Pro, things could be better out of the box.

Maybe something is wrong with my ears, but after paying $599 for it, which results would you rather have on your first raw rendering with the product?

If enough people agree that option 1 is better…then I’m sorely done with trying to use my ears for anything musical. My theory is that people just get used to it, and then something different comes along and doesn’t seem right.

If it’s option 2: well, most of the difference is that I chose a different set of samples, from different presets than the defaults, and layered in spiccato at a really low volume (unless slurring) - but mostly, the way legato is handled is the largest factor between the two examples given here.

It’s SO much about the mixing, micro-sliding things to the right spot on the timeline, dynamics, other expressive data, and general sound staging. Here are examples from libraries old and new. Even FREEBIE ones like Sonatina.

Here’s one from way back in 2011…using the very same library that ships with Dorico.
Halion Symphonic Orchestra and Cubase

Another from 2010.
Halion Symphonic Orchestra and Cubase

HSO in Cubase

Here’s one using Garritan GPO 4 and Cubase; it’s an old library that relies mostly on modeling as well (not a lot of sampled articulation choices).

VSL and Digital Orchestrator Pro

VSL and Sonar

East West in Cubase (the choirs are East West virtual plugins too)

East West Libraries in Reaper

Sonatina (a freebie from here), Requiem Lite & Logic Pro (virtual choirs too)

Well, thanks for all that, Brian - it was certainly something to get my teeth into! I must say I’d never heard of Sonatina, and I’m astonished how much impact it could have in something like the Verdi, though I’ve no idea how it would fare in other, rather more intimate music.

With the Schubert, there was no problem whatsoever telling the difference, so don’t worry about your ears. Solo strings are, however, a particularly difficult area for VSTs, and I suspect there really isn’t much more that can be done with HALion here. In the Dvorak pieces, the programmer seems to have made considerable progress between 2010 and 2011, not just technically but musically, and the finale gets off to a great start as the strings do indeed have some real bite. One or two oddities (why staccato horns at 4’09"?), and you can hear that some of the woodwind and brass lack individuality, but I don’t think I’ve heard better from HALion overall.

On VSL, well, we know that Jay Bacal can programme. In some ways even more impressive is the Dvorak scherzo here https://www.vsl.co.at/en/Starter_Editions/Synchronized_Special_Edition_Bundle, as it’s using only the most basic patches of the SE and yet has sufficient realism and musicality to sound very close to a real orchestra most of the time. Without hearing a comparable Dorico-only mock-up it’s hard to really evaluate the achievement, but my limited experience with the library so far in my own work is mostly positive. The Beethoven 5 is clearly an artistic experiment which, to my taste, largely doesn’t work.

Yes - thanks for sharing, Brian. I will shortly take a look at your other examples, but I have to agree that your original shared example does have a bit of a “barrel organ” sensibility - I think in part because of the inherent uniformity of the vibrato, making it feel more “keyboard-played”, which is probably unavoidable (and I’m sure, at the same time, that playback without your extensive programming would sound much inferior). The REAL comparison to me, though, would be for you to play back the exact same track in Dorico with Note Performer instead (if you have it). I suspect it would sound better, which gets back to my original point: you obviously have a serious handle on the state of the art of Dorico (and DAW) programming, but the process of obtaining what you achieved in this example seems extremely involved, while Note Performer in comparison is literally “plug and play” (provided you’ve entered appropriate dynamic/articulation markings in the actual Dorico score). This is why I’m wondering whether Dorico may be “barking up the wrong tree” in doubling down only on expression maps, control of virtual instruments, etc., and whether they (or a competitor) shouldn’t also consider expanding the capabilities of the Note Performer approach, which seems to me so musical and effective (again, at least at present, specifically for more classical orchestral music). But I’d love to hear the above sample with Note Performer first (just to see!)

  • D.D.

I noticed Sonatina has been taken down from its old primary hosting site. I think I still have it around here somewhere. It’s nothing special, I assure you. It was basically a random collection of samples someone had collected and gotten permission to distribute. If I recall correctly, it came in sfz format, which meant using sforzando, ARIA, or Cakewalk/Dimension. There was very little if any dynamic/pitch shaping done in the opcodes…all it did was trigger the sample (one-shot at that, no loop points or anything) and let it play until you released the key. Some people would also just take the raw samples and either put them in their sample engine of choice or lay them out directly on audio tracks.

I do believe it is possible to get very high-quality, super-detailed mock-ups in Dorico. It just seems like it’s going to be much more difficult at this time, in terms of painstaking manual labor, either with maps and VST programming or with working the lanes in the play tab. Doubling instruments, using overlays, setting up delay effects, finding the right spaces in the mix to tuck the right thing, a touch of compression here and there side-chained to the right triggers, etc. The bells and whistles of the DAW offer so much in these areas, as well as tools to speed up the process of making use of them. Even more can be done if one renders the performance to audio waveforms and starts touching up minor details in those tracks.

Even with my ‘really rough’ template attempting to improve Dorico’s out-of-the-box interpretation of the score with HSO, demonstrated earlier, the right mix and effects applied in the native Dorico mixer can make a profound difference. Something as simple as a parametric EQ and a multi-band compressor on the main fader can really help bring it ‘out of the barrel’, so to speak. The right touches of reverb on each individual stave can help take the edge off the samples when desired as well (this can be done in HALion itself, or via mixer inserts). And that’s mixing tasks alone.

That’s before I even go in and touch things up…like the bad sfz attacks at the end (the dynamic curve I set up in HALion is very abrupt…I overdid it to hear the extremes of what I was trying to do, getting familiar with what the HALion engine and samples can and cannot do). Just bringing up the level of the martelé-like key-switch might help. I haven’t built a con sordino for it yet either (I think all the strings are supposed to be muted at the end as well). There are missing articulation marks that I apparently did not finish inputting, and I still need to come up with something better for the brief marcato passage, etc.

Bottom line…for $599, even though it’s touted as a pro-line product (kind of like an empty and dumb DAW would be), I still think Dorico should sound better at default settings, out of the box. While the competition’s products may ultimately be less capable of efficiently doing superb mock-ups than Dorico, they do give a better impression of sound quality out of the box at this time. They’re pretty warm and fuzzy sounding, and comfortable on the ears as a composition/arranging workstation.

In my opinion, there are actually quite a few instruments in Sonic 3 and HALion 6 from which it’s easier to get a good sound, and that’s just using the General MIDI expression maps! Here’s an old rendering I did way back when H6 first came out, using the newer Symphonic Strings content (nothing special about it - it mostly serves as an unlocked example for studying how macros are built), just as an experiment. It’s using the general expression map alone (just velocity-based dynamics).

So…at this point I’m going to lay off the thread for a while. I apologize if I’ve sort of hijacked it. Time for me to lie low and let others make points on the topic.