MIDI Jitter during playback!!!



The clueless kitty is spamming/trolling as usual. Why do you have to display your utter lack of knowledge again and again? OCD, or what? Convolution, as a mathematical concept, has nothing to do with the OP’s problems. And if you think I’m wrong then please, enlighten me: what is convolution?

And jitter doesn’t have to be a hardware issue. Educate yourself, please!

Kris.

No. You tell me smartass. I will wait for your explanation for a couple of years.
I don’t pointlessly comment like you do to anyone else’s posts. Explain yourself sensibly and professionally or butt out. Idiot.
At least the other posters are willing to be coherent and actually provide reasons for their thinking and allow me mine even if we don’t agree. I’m not a yes man or a + one man or a troll and I don’t have to insult others to explain myself.
I left that kiddies playground a while ago.

This is not true. I can confirm this bug on my XP machine and on my MBP as well.

Some time ago I investigated this problem and could nail it down, at least on my machines: if you compare the MIDI part against the exported audio in a null test, the MIDI slips out of time one sample back and forth! You can easily check this if you switch your timeline to “Samples” and then move the exported audio back and forth, using your mouse wheel to change the start point of the audio part in the info line above. I experienced this behavior at 44.1 kHz. Never checked 48 kHz or higher…
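A quick way to automate the manual “nudge by one sample” check described above is to scan a few small offsets and see where the residual is smallest. This is only a sketch (NumPy, with synthetic mono data standing in for the recorded and exported takes; the function name is made up):

```python
import numpy as np

def best_null_offset(ref, test, max_shift=4):
    """Scan small sample shifts and return the shift at which
    (ref - shifted test) has the lowest residual energy, i.e.
    where the two takes null best."""
    best = (None, np.inf)
    n = len(ref)
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(test, shift)
        # ignore the edge samples wrapped around by np.roll
        lo, hi = max_shift, n - max_shift
        residual = float(np.sum((ref[lo:hi] - shifted[lo:hi]) ** 2))
        if residual < best[1]:
            best = (shift, residual)
    return best

# Synthetic demo: the "exported" take is the reference delayed by 1 sample
rng = np.random.default_rng(0)
ref = rng.standard_normal(1000)
late = np.roll(ref, 1)              # simulates a 1-sample slip
shift, residual = best_null_offset(ref, late)
print(shift, residual)              # best null at shift -1, residual ~0
```

If the best-nulling shift wanders between takes, that is exactly the one-sample back-and-forth slip described above.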

Ola Kris, just hit
to mute the unwanted track :wink:

Also depends on the soundcard being used (which can have inherent jitter, as it’s hardware). I see now, besides the goalposts being moved slightly, that the game has changed from this being a Cubase problem to a Windows problem.

What you could try, besides different sample rates, is to increase the soundcard buffers and do the null tests again to see if it tightens up any. Differences in the MIDI timing depend on what arrives in each buffer cycle and (depending on clocking quality) when. The more you get in each cycle, the more chance they have of closer timing. The latency will be horrendous, so remember to change it back to your normal buffer size for recording.
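To illustrate the buffer-cycle idea: in a hypothetical worst-case host that only applies incoming MIDI at buffer boundaries (an assumption for illustration, not necessarily how Cubase works), every event slips to the next boundary. A larger buffer means events falling in the same cycle land together, at the price of much larger absolute latency:

```python
def buffer_quantise(event_samples, buffer_size):
    """Worst-case model: a host that only applies MIDI events at the
    start of each audio buffer pushes every event to the next
    buffer boundary."""
    return [((s + buffer_size - 1) // buffer_size) * buffer_size
            for s in event_samples]

events = [100, 250, 700]            # intended positions in samples
for buf in (64, 256, 1024):
    played = buffer_quantise(events, buf)
    jitter = [p - e for p, e in zip(played, events)]
    print(buf, played, jitter)
```

Note how at 1024 samples all three hits collapse onto one boundary: tight relative timing between the hits, horrendous absolute latency; exactly the trade-off described above.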

Ever since the excellent Atari and, I think, the early Apple machines (which both used the same Motorola chips), there has apparently always been great difficulty getting MIDI timing right on any version of Windows.

Is this “jitter” noticeable without nulling? ie: On normal playback.

How can you tell? As I wrote I can confirm on OS X as well.
What I’ve checked by now:

Win XP, RME 9632 (see specs), C6 (32-bit), different buffer settings: fail
Win 7 (64-bit), C6 (32-bit), RME 9632, different buffer settings: fail
OS X (my MacBook Pro), C6 (32-bit), TC Konnekt 24D, different buffer settings: fail
OS X (my MacBook Pro), C6 (32-bit), built-in audio device: fail

So I would not say it’s a Windows thing. Tomorrow I will check several iMacs and see what happens there. As far as I can remember this behavior didn’t occur on SX3, it began with C4.

For my needs, I only notice it when it comes to layering drums, as the highs and mids sound different on every hit. See:

Hm, so it also seems to be an OS X problem. I presume that must be with the newer Macs. As far as friends using Macs were concerned, the problems with Windows were apparently not present on Macs. But it could have been one of those “Mac vs. PC” comments and not exactly true.
But it still seems to be a Windows problem as well as an OS X problem, so my comment stands.

I would layer the drums as Audio or put two samples per pad if possible or layer using the same (snare) variations on two or more different pads.
But I would still not expect complete synchronisation there (in GA unconverted to audio) either but you may be lucky.

When layering the object usually is to get an interesting texture from the kit which may well involve a little thickening, chorusing or distortion here or there for character.
On bare drums this may at times sound odd but, like real drums, when the other instruments are laid on top, those little sound details (some would say imperfections, and you’d be surprised how many howls, zings and fluffs happen on a real kit) are masked, and the imperfections actually add something.
Also, when recording you can get bogged down with details that don’t really matter. I still sometimes do myself. It’s like the recording product, the music, is in a giant fishbowl and every mistake or nuance can seem as big as the planet. I think this can be one of those cases.
Cubase is not a scientific test-bed. It is, after all, just another way of recording music. Music isn’t a science. It’s an art.

PS: Try recording two REAL drummers doing the same part and tell me what sort of comb-filtering they produce. :mrgreen:

OK, the implication that Cubase’s MIDI timing, good or bad, has anything to do with the hardware used doesn’t hold a lot of water with me. Are you really suggesting that the crystal clock of the converters (interface) is so unstable that Cubase, using it as a reference for its MIDI timing, can’t keep up? Do you know how bad that clock would have to be? MIDI has, what, 1/50th of the timing of a 44.1 clock?
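For what it’s worth, the “1/50th” figure is roughly right for the old 5-pin DIN wire. A back-of-envelope sketch, assuming the classic 31250-baud serial rate with 10 bits per byte (start/stop bits included) and a 3-byte Note On; internal host timing is a separate matter:

```python
# Back-of-envelope check of the "1/50th" figure, using the classic
# 5-pin DIN MIDI wire rate versus a 44.1 kHz sample clock.
MIDI_BAUD = 31250
byte_time = 10 / MIDI_BAUD            # seconds per MIDI byte on the wire
note_on_time = 3 * byte_time          # a Note On message is 3 bytes
sample_period = 1 / 44100

print(f"Note On on the wire: {note_on_time * 1e6:.0f} us")   # 960 us
print(f"One sample period:   {sample_period * 1e6:.1f} us")  # 22.7 us
print(f"Ratio: {note_on_time / sample_period:.0f}")          # ~42
```

So one serial Note On spans about 42 sample periods at 44.1 kHz; the hardware clock is orders of magnitude finer than anything MIDI needs.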

MIDI timing inside an application (not addressing external apps or MIDI sound sources) is a code thing: either Cubase or the VI being used. Nuendo 2 had some of the most truly hideous MIDI timing I’ve ever experienced. That said, Cubase 4 and now 6 have been relatively stable for me.

I’ve not run the test here…and frankly, could point to GA1, which…well, I just don’t care. Not sure who uses that…however, I ran a test recently to prove a different point to a friend.

Played a track live (a clav, so it would have some nice transients). I simultaneously recorded the audio and the MIDI. Then I re-recorded the audio of the MIDI track. Then I did a third bounce offline. Then a fourth and a fifth in real time, but with several REVerence and T-RackS instances (latency-compensation-inducing plugins) on OTHER tracks in the project.

Points proved:
C6’s ADC (automatic delay compensation) worked 100%.
And the point I was actually making: NOTHING replicated the live audio.

Questionable thing:
While all three real-time bounces nulled, and the two offline bounces nulled to each other, they did NOT null between real time and offline. This could very well be due to Scarbee’s K4 scripting being “timing sensitive”, and offline does mess with time. If the script says something like “don’t play the same variation within 100 ms of the last one”, bouncing offline could throw that kind of scripting off. I’ve also found that BFD2 doesn’t bounce properly offline all the time, so I just DON’T… real time saves time in the long run because it’s always what you hear.

Anyway… the thing I was trying to prove, and did? You can LOOK at the audio versus the MIDI rendered to audio and see: some hits in the MIDI are early, some late. I was making the point that you should be recording audio rather than MIDI for anything you can play with your ten fingers. MIDI, in this day and age, should be for programming things you can’t play: drums, horns, strings, things more about programming than playing. The side effect I learned was that Cubase 6 on Win7 x64 is as good as you can ask from a MIDI sequencer in being consistent in playback timing.

FWIW: Core 2 Quad, 8 GB, Nvidia mobo chipset, MOTU 5x5, Echo Gina3G. Kontakt 4; I don’t remember the last update I did, maybe with Alicia’s Keys 1.2? I remember 4.1 because it enabled the background loading… I tend not to update unless there’s an issue or a worthwhile feature… still on C6.0.

Anyway… try your test with a third-party sampler. See if you still have the issue. If not, it’s likely GA1… what are you using in Studio One and Pro Tools? I’m really picky about MIDI timing. C4 and C6, I’ve had no real issues, save offline bouncing… which I resolve by getting a cup of coffee while it bounces in real time!

conman - with almost every post on here you seem to reveal yourself as having no understanding of what it is you’re pretending to be an expert in.

in this thread you’ve gone from:

there’s no problem with cubase

to

it’s a hardware problem

to

it’s a system specific windows problem

to

confusing midi jitter with jitter in digital audio clocking

to

it’s convolution

back to

it’s not a problem (for you?)

to

the problem is with GA

back to

it’s hardware

to

the problem is with non-Motorola chips

to

it’s a windows AND osx problem

to

it’s a desirable feature

I’ve noticed your random uninformed posts spread thickly around this forum. I know you’re ‘probably’ only trying to help but you really need to STFU. It’s like trying to read a book with somebody scribbling sh*t over the page whilst you’re reading. . . . worse still, somebody might read your posts and actually think your random utterances are fact.

sorry I have to be so personal but I suspect that subtlety isn’t in your dictionary.

Help! Who let the chimps out? :mrgreen:
The chimps keep commenting on nothing but others’ posts. You have nothing to say about the subject. These trolls that follow people about with inane uninformed comments are starting to clutter the forum.

It seems I’m not allowed to discuss problems and try to resolve them in anything like a professional way; instead I’m expected to browbeat Steinberg. Sorry, I don’t play those schoolkid games.
If you haven’t got any idea about the subject then butt out, back to the bedroom, kid.
I’m talking about BASICS here. If you don’t understand, or can’t explain why I’m so wrong in a clear and professional way, then you’ve got to be trolling.

Jitter isn’t a Cubase problem.
Jitter is a hardware problem.
MIDI has always been hard to implement on Windows.
MIDI takes time (latency) to travel around a computer circuit and travels in things called “packets”. All these things I write about are documented and searchable.

I am by no means the expert you say I think I am and you think you are. I never said I was an expert. I just bring stuff to the table to consider and discuss. Which I do without denigrating others.

What I’m saying here is that the OP and the title are wrong. BUT. He does have a problem. What we have to establish is whether the problem is with GA and whether it can be fixed by Steinberg, or whether the OP needs to change his working practice in some way.
My take is that (even sampling) synths like GA generate sounds unevenly and so nulling would rarely take place.
My second take is that a better way to get nulling would be to record two tracks of audio.
And. Double tracking, stacking and / or double sampling is sometimes not easy.

My background: Cubase, 20 years; drums, 31 years; engineering, 15 years.
Consider yourself told off by a guy who never stops learning but is used to listening to fascists who stopped learning long ago. Dr. Chimp.

PS: Apologies to all and thank you popmann for your good post.

Convolution (like multiplication and addition) produces the same result every time. A (straight) convolution reverb ought therefore to produce the same reverb every time - though I believe there are convolution reverbs that employ extra procedures to deliberately introduce some variation/randomisation(?). In (digital) audio, convolution isn’t confined to reverb, but unless there’s another process introducing variation, the results of repeated applications of convolution (in any guise) must always be the same.

IMHO, it’s thus confusing/misleading to say variation is caused by convolution if what’s meant is that another process is acting in conjunction with the convolution to cause the variation (deliberate or otherwise).

Convolution, per se, can’t cause two MIDI playbacks to be different.
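A trivial sketch of the point: convolution is a pure mathematical operation, so repeated runs on the same input are bit-identical (NumPy, toy numbers standing in for a signal and an impulse response):

```python
import numpy as np

# Convolution is a pure function: same inputs, same output, every time.
signal = np.array([1.0, 0.5, -0.25, 0.0])
impulse = np.array([0.8, 0.2])         # a toy "impulse response"

first  = np.convolve(signal, impulse)
second = np.convolve(signal, impulse)

print(first)                           # identical on every run
print(np.array_equal(first, second))   # True
```

Any run-to-run variation therefore has to come from somewhere else in the chain (modulation, randomised sample selection, scheduling), never from the convolution itself.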

I don’t know who this question is directed to, but I’m suggesting nothing. I only share my findings. Like I said earlier, for most users this isn’t a showstopper. But I just find it remarkable that one sequencer is spot on and the other isn’t, on exactly the same system.

FWIW I did the same test with Battery, Kontakt and SampleTank (all with a single-layer sample) and the result is the same.
Random shifts in time when the playback is recorded in real time. I used GA1 simply because anyone using Cubase could reproduce the test. To exclude any VSTi- or ASIO-specific problems, I also recorded the MIDI out directly to the MIDI in, in different sequencers, again on exactly the same system; the result is in this post.

Please note, these tests are not done by bouncing in real time but by recording the playback directly from a group.
E.g. record the audio in two independent takes from the same VSTi with a single-layer sample and all effects and variables in volume and filters turned off. When you null them against each other, they are slightly different in every recorded instance, while different exports null out.
Remarkably, when I do the exact same test in another sequencer (Studio One) on exactly the same system with the same VSTi (Battery), the independent recordings of the playback null out. So on my system Cubase isn’t consistent in real-time playback and another sequencer is. My question is: what could cause that?
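If anyone wants to measure the slippage per hit rather than eyeball the null test, one crude approach is to detect the onsets in each recorded take and subtract. A sketch with synthetic data (the threshold detector and the signals are made up for illustration):

```python
import numpy as np

def onsets(x, thresh=0.5):
    """Indices where the signal first crosses the threshold after
    having been below it -- a crude drum-hit detector."""
    above = np.abs(x) >= thresh
    return np.flatnonzero(above & ~np.roll(above, 1))

def make_take(hit_positions, n=2000):
    """Build a synthetic take with a 20-sample 'hit' at each position."""
    x = np.zeros(n)
    for p in hit_positions:
        x[p:p + 20] = 1.0
    return x

# Two takes: identical hits, but take B's second hit is 3 samples late,
# the kind of per-hit shift a null test exposes.
a = make_take([100, 900, 1500])
b = make_take([100, 903, 1500])

print(onsets(a))                   # [ 100  900 1500]
print(onsets(b) - onsets(a))       # per-hit offset: [0 3 0]
```

Run on the two real recorded takes, a nonzero, varying offset vector would be a direct measurement of the jitter being discussed.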

I would love to know if your system acts differently when you record the playback in real time, because if that’s the case, it could have something to do with our setups and maybe narrow it down.

Convoluted = highly complex or intricate, and occasionally devious.
You are right mathematically, but we aren’t talking maths; we’re talking synth (and reverb) terms for sound reproduction. Convolution in these terms serves to make the sound interesting, i.e. varied. And when a sound varies, finding two instances the same would be rare.
To my ears GA sounds decent enough for me, a drummer, to use. This means that the sounds aren’t boring me to death yet, which means there is some “interest” in there. Some variation must be happening or I’d spot it and not use it.

On nulling: Logic 8 (Apple) seems (or seemed) to have it. Some suggestions are to change the pan law.

Yes, I do know what “convoluted” means in ordinary English.

So, you were using the word “convolution” to mean the condition of being twisted, coiled or intricate, etc, rather than the technical use of the word in maths or digital audio.

I’m surprised to learn that you associate that (complexity-type) meaning with “convolution” when speaking about reverb, instead of meaning the type of reverb that works by convolution (in the digital-audio/mathematical sense). IMHO, that usage is VERY likely to cause confusion amongst readers of this forum - I expect that if you use the word “convolution” when describing reverb, most of us will think you’re talking about convolution reverb.

If I spoke about the frequency of the A above middle C, forum members wouldn’t expect me to mean how often that note is played. If I said I’d just bought a compressor, they wouldn’t envisage a pump that pressurises gas.

Similarly with other words, such as: note, key, scale, interval, staff, quaver, bar, rest, pause, bow, string, wind, roll, amplify, distortion, fidelity, gate, delay, insert, effects, instrumental, conductor, … The context (this forum) sets up an expectation that such words have a meaning that’s not the meaning found in ordinary English.

I may be wrong, but I expect the majority of people reading posts in this forum will expect the word “convolution” to be used in the technical sense when discussing aspects of digital audio. If I’m right, there, using it to mean twisted/coiled/intricate/etc in such circumstances is, IMHO, pretty well asking to be misunderstood.

Anyway, I’m glad you’ve now explained what you meant by “convolution”, because I thought for a while that you weren’t willing to say what you meant by it …



Please note, these tests are not done by bouncing in real time but by recording the playback directly from a group.
E.g. record the audio in two independent takes from the same VSTi with a single-layer sample and all effects and variables in volume and filters turned off. When you null them against each other, they are slightly different in every recorded instance, while different exports null out.

I will certainly check that out next time I’m down there in “geeky mode”. I do get the difference. I never really considered a real time export and a “live bounce” as having different results…interesting.

What I was saying about the thing being hardware related was that neither your audio nor your MIDI hardware has anything to do with any INTERNAL software timing, except by providing the master audio crystal clock to which the audio and MIDI engines sync. Nor does it matter that MIDI has been troublesome on Windows, because even though the app is running on Windows, only EXTERNAL MIDI would use any of the USB or DirectMusic MIDI functions of the OS. So, I think we are in agreement that your issue would be host related, or VI related; and since you clarified that you get the same results with other VIs by different developers, I would say it sounds like a Cubase bug.

Interesting that the real-time exports null, though. It’s as if something is distracting the CPU during the live bounce, whereas the real-time export has a chance to put the hit in the right place after detecting it was interrupted. Do you get these results in a project with only the MIDI track in question? Also… have you seen any difference between using an instrument track versus the instrument rack plus a MIDI track?

Side note: I’ve often wondered why, to solve both our issues, companies don’t just make the MIDI timing sample-accurate. Rather than 480/960… why not make it “snap” to the nearest sample pulse? Effectively there would be no difference between the timing of sequenced MIDI and “live played” MIDI. Just something I’ve wondered in all these years of having less-than-stellar internal VI timing. While I get that there will never be a true “MIDI 2.0”, why not go ahead and establish a bridge to that with the timing resolution locked to the sample clock… giving yet another reason to go to 96k!
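The snapping idea above is easy to sketch: at 120 BPM and 480 PPQ, one tick already spans 50 samples at 48 kHz, so locking MIDI to the sample clock would be a far finer grid. A minimal sketch (hypothetical helper, not any real host API):

```python
def tick_to_sample(tick, ppq=480, bpm=120.0, sample_rate=48000):
    """Map a MIDI tick position to the nearest sample index, i.e.
    'snap' MIDI timing to the audio sample clock as suggested."""
    seconds_per_tick = 60.0 / (bpm * ppq)
    return round(tick * seconds_per_tick * sample_rate)

# At 120 BPM one beat = 0.5 s = 24000 samples @ 48 kHz,
# so one 480-PPQ tick = 50 samples.
print(tick_to_sample(1))     # 50
print(tick_to_sample(480))   # 24000
```

In other words, a host that stored event positions in samples instead of ticks would have 50x the timing resolution in this configuration, with no change to the MIDI data itself.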

I do credit the forum members with at least moderate intelligence enough to tell I’m not talking about a math concept but a variable sound reproduction process. If they didn’t have that intelligence they’d never be able to use Cubase to the full.

I may have found why there’s no nulling here! On copying the same track I did, as expected, find comb filtering. Then I panned one hard left and one hard right. The drum sounds did a nice stereo dance, with toms etc. panning across the monitors.
So? What you are summing is, although it doesn’t say so explicitly, already a stereo sound field of drums, and this movement is what is causing the “out of phasement” and non-nulling, and why recording copies produces no nulling.
I suggest that to properly double any sounds you perform a “Dissolve Part” to get the required sounds, typically mostly snare and bass drum; you might find things are rather more civilised, but I suspect you may still get some phasing. Mostly though I’d expect phasing, especially if you double different samples from another drum patch. Not that that isn’t mostly the typical idea of doubling, i.e. to get interesting phases and reinforcements and, at times, to weaken a too-strong signal. Phase reversal is, after all, usually used on input from stereo mic pairs and overhead vs. drumkit mics, and not for checking MIDI signals. And even live drumkits can have undesirable phase relationships between the overheads and the close mics, which can be remedied by careful phase reversal.

Unfortunately (it seems) for me, when you were talking about convolution in the context of digital audio, I did think you meant convolution in the technical sense, ie as used in digital audio.

TBH, I’ve never before known the word “convolution” to be used to describe a “variable sound process” - and TBH I’m not actually sure what that might include: things like “round-robin” samples, perhaps? (used to stop repeated notes sounding too similar); phasing? flanging? free-running oscillators that don’t restart at a particular part of a waveform when a note is triggered? the liberal use of MIDI Continuous Controllers to breathe life into samples? any live (real) instrument? – Do all real instruments exhibit “convolution” (in this sense) whenever they’re played?

Anyway, now I know I’m doomed never to be able “to use Cubase to the full”, because I don’t have moderate intelligence (for, if I had it, I wouldn’t have thought you were using “convolution” in the technical sense). Fortunately for me, I don’t actually want to use it to the full - I just want to be able to use the small corner of Cubase that deals with the way I make my music, and use it to the depth required to make things sound like real performances.

BTW, as of today, I’m going to use the word “tomato” to mean any food that isn’t green. I think anyone with at least moderate intelligence will know what I actually mean when I ask for a tomato.

‘as a spasmodic chortle burst forth, Como barely managed to avoid spewing his partially masticated mouthful on the computer screen’

‘Words mean what I want them to mean when I want them to mean that.’
Paraphrasing the Hookah Smoking Caterpillar in Alice In Wonderland.

“It all depends what ‘is’ means.” Quoting Bill Clinton.

Como

Please excuse me for butting in… He said butt…

But isn’t this all getting a bit… em, convoluted?

I’m still learning and I bow to the man who obviously knows it all. :mrgreen:

Convolution is an adjective or describing word and not a noun like “tomato”. And tomatoes can also be green.