Humanize Playback

I’m one of those people who hears it all internally, but I still agree with you completely. Sure, you can rattle off letters like a speed train and I can recognize every letter I hear instantly, but I can recall fewer barlines than Mozart could. Even Mozart had to go back and listen again at times. Aural recognition is one facet of how our minds work with music. Composition informs aural skill and vice versa.

I recently switched to a StaffPad (writing) + Studio One (recording) + Dorico (engraving) workflow because of StaffPad’s in-app, pre-mapped library integration, something I suggested to most developers years back as the ultimate composer UX end goal and was told was light-years away from anything the industry had. Library integration and humanized playback are essential to keeping people out of frequent piano-roll edits. As a human being, I’m prone to seeing 30 things to do, doing 10 of them, then my daughter asks for help, and when I come back… oh yeah, I had 8 other things to do (notice that 12 edits on my agenda just got lost to the void). Less performance tweaking makes 10,000x the difference in one’s writing. I’ve written music without computers, and without instruments. Any aural feedback will influence the writing, but the closer it is to the real thing, the less distracting it is.

I don’t want it out of a need to hear the music, but out of a need to manage my attention.