extensive sound library expression maps vs. just using Note Performer (opinions sought!)

Similar, but not really. I think our computers are fast enough nowadays to handle ‘theoretical’ modeling without much latency. The main reason NotePerformer has that designed-in latency is that neither Sibelius nor Dorico has the VST plumbing to send information like time signature and tempo and keep it updated with each clock tick. So it has to buffer a bit, ‘analyze’ what’s in the buffer, and combine that with events sent by the Dorico/Sibelius expression maps to guess at how best to model the sounds. If Dorico sent a bit more information as it translates the score, they could probably cut that latency down quite a bit, making it easier to combine NP with other libraries in the same score.
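To make the buffering idea concrete, here is a minimal sketch of a look-ahead buffer of the kind described above: events are held back until enough musical future is visible, then shaped using that context. Everything here is hypothetical illustration; NotePerformer’s actual internals aren’t public, and the class and parameter names are my own.

```python
from collections import deque

class LookaheadBuffer:
    """Toy look-ahead scheme (hypothetical, for illustration only).

    Incoming note events are held for `lookahead_beats` so the model can
    inspect what comes next (e.g. a crescendo target) before rendering
    the current note -- which is where the perceived latency comes from.
    """
    def __init__(self, lookahead_beats=2.0):
        self.lookahead = lookahead_beats
        self.pending = deque()  # queued (beat, note, velocity) tuples

    def push(self, beat, note, velocity):
        """Queue an incoming event; return any events now ready to render."""
        self.pending.append((beat, note, velocity))
        ready = []
        # Release an event only once we can see `lookahead` beats past it.
        while self.pending and beat - self.pending[0][0] >= self.lookahead:
            ready.append(self._render(self.pending.popleft()))
        return ready

    def _render(self, event):
        beat, note, velocity = event
        # With future events visible, shape the note: here we crudely nudge
        # velocity halfway toward the loudest upcoming event.
        upcoming = [v for _, _, v in self.pending]
        target = max(upcoming, default=velocity)
        shaped = velocity + (target - velocity) * 0.5
        return (beat, note, shaped)
```

So a note entered at beat 0 isn’t rendered until beat 2 arrives, and by then the renderer already knows a louder note is coming and can lean into it. That trade of delay for context is exactly why the buffer can’t simply be removed without more upstream information from the host.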

What I was really referring to here is this:

Imagine an AI intelligent enough to have mics hooked up, and LISTEN to itself playing, and make adjustments as it plays.

We humans do this constantly when playing in an ensemble. The sound reverberates around and in effect changes how we feel, and how we contribute to the group’s sound. If the euphonium player sitting behind a sax section puts a certain inflection into his playing, the sax players may well pick up on it and mimic it, or fight it, or even feel compelled to do something totally different yet ‘complementary’ to it. These things also have a profound effect on the overall intonation of a REAL performing group.

The way we currently have affordable mainstream ‘machines’ doing AI… well, even if we could find a way for the machine to hear itself in a real room before reacting to itself, the way our current mainstream AI software and devices work, it would add a considerable amount of latency.
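A back-of-envelope tally shows why that latency adds up: the audio interface’s input and output buffers, the acoustic flight time across the room, an analysis window long enough to actually hear pitch and dynamics, and the model’s own inference time all stack. Every figure below is an illustrative assumption, not a measurement.

```python
def feedback_loop_latency_ms(buffer_samples=512, sample_rate=48000,
                             room_distance_m=5.0, analysis_window_ms=100.0,
                             model_inference_ms=30.0):
    """Rough latency budget for a machine that listens to itself.

    All defaults are hypothetical but plausible: a 512-sample I/O buffer
    at 48 kHz (counted twice, for input and output), a 5 m path through
    the room, a 100 ms analysis window, and 30 ms of model inference.
    """
    SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
    io_ms = 2 * (buffer_samples / sample_rate) * 1000.0  # in + out buffers
    acoustic_ms = (room_distance_m / SPEED_OF_SOUND) * 1000.0
    return io_ms + acoustic_ms + analysis_window_ms + model_inference_ms
```

With these assumed numbers the round trip lands somewhere around 165 ms, and the analysis window dominates, which is the same trade-off human players resolve with anticipation rather than reaction.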