Is WaveLab a real multicore DAW?

I think by ‘having a hard time’ you mean it took some time? And yes, only one core can be used, because no distribution of tasks is possible in a sequential process like rendering: plugin A needs to be processed before plugin B can be processed.

‘Having a hard time’ meant real-time playback feeling really heavy or sluggish.

So a machine with a higher clock speed works better than one with more cores for rendering?

For rendering a single file, yes, but not when batch processing.

Philippe

When rendering multiple regions from a Montage, would it be possible to use multiple cores? Wouldn’t each sequential process be handled separately?

It is currently not possible, but it could become possible in the future.

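For what it’s worth, the idea is straightforward in principle: each montage region is an independent render job, so a process pool could run them on separate cores. A minimal Python sketch, where `render_region` is a hypothetical stand-in for the real (CPU-heavy) per-region render:

```python
# Hypothetical sketch: regions are independent, so a process pool can
# render them in parallel. render_region here just sums fake sample
# indices to stand in for real per-region DSP work.
from concurrent.futures import ProcessPoolExecutor

def render_region(region):
    """Stand-in for rendering one region of a montage."""
    start, end = region
    return sum(range(start, end))

regions = [(0, 1000), (1000, 2000), (2000, 3000)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # one region per worker process; order of results is preserved
        results = list(pool.map(render_region, regions))
    print(results)  # → [499500, 1499500, 2499500]
```

The key requirement is that regions really are independent, which holds when each region’s plugin chain starts from a clean state.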
Please please please!!!

For me, and I’m sure many others, then this would be a massive time saver. I typically master a project/album in one montage. Once the analog captures have taken place and final processing is done then I bounce off the final masters. CPU hungry, post-analog processing plugins can take quite some time to render. Even on a fast machine. Being able to spread this via multiple cores would result in huge time savings for me.

I would say that if it’s technically possible, it would be very good to make it a priority. It seems that even on pretty decent computers, working at 96 kHz with today’s modern mastering plugins can easily max out a CPU and make WaveLab feel slow.

I would 2nd Justin’s comments.

Thirded.

I’ve gone rounds with Presonus and Cakewalk on this. Their real-time audio “engine” seems to only work on a single core. So you’re not always likely to see a performance increase from Hyper-Threading or adding more cores. In fact, more cores often means a lower CPU clock speed, and the scheduling of tasks to each core can slow you down as well. In both cases, I’ve had the recommendation to go with the absolute fastest CPU you can get.

When I’ve mixed 96 kHz, 30+ channel Presonus mixes in real time, I’ve had to offload Waves plugins to a Waves Impact server, which provides more CPU power. I’m using an i7 3770K CPU at 4.8 GHz from 2014 and 16 GB of RAM. I can easily run 10 DMG, Waves, Flux, Plugin Alliance, etc. plugins with minimal effort. The best workaround today still remains:

  1. Commit. Commit some processing to free up plugin CPU requirements.
  2. Shut down everything else on your machine while you work.
  3. Set your plugins for Low-CPU for listening, but then set them to highest quality for Rendering.
  4. Down-sample to 48 kHz. This one is controversial, because a lot of people honestly believe there is a significant improvement between 48 kHz and 96 kHz. However, many mastering engineers admit to using 48 kHz.
  5. Go through your system and evaluate some services that run, but you don’t need. You can set them to Low-priority (green leaf) mode and save some CPU there in Windows 10.
  6. Update all your plugins and drivers.
  7. Reboot a few times, and certainly watch your Task Manager to see what’s eating your CPU.

Hope this helps!

Rendering an audio stream is inherently a single-threaded process, as you need all the results at any point in the stream to feed into the next point.

Paul
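The sequential dependency described above can be sketched in a few lines (a toy Python model with hypothetical one-buffer “plugins”, not WaveLab’s actual engine): each buffer must pass through plugin A before plugin B can see it, so there is no parallelism to exploit within the chain.

```python
# Toy model of a plugin chain: a strict sequential fold over buffers.
# Plugin B cannot start on a buffer until plugin A has finished it.

def render(buffers, plugins):
    """Process audio buffers through an ordered plugin chain."""
    out = []
    for buf in buffers:
        for plugin in plugins:   # order matters: A feeds B feeds C ...
            buf = plugin(buf)
        out.append(buf)
    return out

# toy "plugins": pure functions on a buffer (a list of samples)
gain = lambda buf: [s * 0.5 for s in buf]
invert = lambda buf: [-s for s in buf]

print(render([[1.0, 2.0], [3.0, 4.0]], [gain, invert]))
# → [[-0.5, -1.0], [-1.5, -2.0]]
```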

FWIW, my comments are not really aimed at rendering, more towards general playback while trying to work in a montage at 96 kHz with commonly used third-party plugins for mastering.

It’s tough for me to fully gauge, because my iMac Pro is at my studio, where I usually lean more on analog gear and less on plugins, while my maxed-out Mac Mini and fairly powerful MacBook Pro are of course only able to do “in the box” mastering, and that’s where I feel WaveLab struggling prematurely when trying to work.

Render times are something I’m used to.

With Mac Minis and MacBook Pros, also keep in mind that Apple doesn’t usually provide the fastest CPUs. They aim more for rendering speeds and other work. More cores lower a CPU’s clock speed, and CPUs with higher core counts are slower in per-core speed. The idea is that Macs can multitask quite well. Unfortunately for the MacBook Pro, Apple also lowered the clock-speed priority as a trade-off for cooling and power requirements.

Real-time playback at 96 kHz sampling is also going to suffer if you don’t have the ability to set higher read buffers in your DAW for ASIO outputs, for example.

See if just one plugin consumes lots of CPU.
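As a quick worked example of the buffer trade-off (illustrative numbers, not WaveLab settings): latency grows with buffer size divided by sample rate, which is why larger ASIO buffers survive 96 kHz playback with fewer dropouts.

```python
# Buffer latency in milliseconds = buffer_size / sample_rate * 1000.
# Illustrative values: a larger read buffer trades latency for stability
# at high sample rates.
def buffer_latency_ms(buffer_size, sample_rate):
    return buffer_size / sample_rate * 1000

print(round(buffer_latency_ms(512, 96000), 2))   # → 5.33
print(round(buffer_latency_ms(2048, 96000), 2))  # → 21.33
```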

I think it would be cool if there was a facility in WaveLab that would automatically chop up a single audio file by the number of cores available, render each segment, and put the rendered segments back together again. It would make single renders really fast.

It would also require overlaps at the cuts of double the longest processing tail of any of the plugins in use, and probably various other complications.

Paul
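The bookkeeping for that overlap idea can be sketched roughly like this (hypothetical sample counts; `tail` stands for the longest plugin tail): each chunk is read with pre-roll and post-roll padding, processed, and the padding trimmed before rejoining.

```python
# Sketch of chunking with overlap: extend each chunk's read range by the
# longest plugin tail on both sides, so time-domain effects (compressor
# release, reverb decay) settle before the audible part of the chunk,
# then trim the padding before rejoining. Values are sample indices.

def chunk_bounds(total, n_chunks, tail):
    """Yield (read_start, read_end, trim_left, trim_right) per chunk."""
    size = total // n_chunks
    for i in range(n_chunks):
        start = i * size
        end = total if i == n_chunks - 1 else (i + 1) * size
        read_start = max(0, start - tail)   # pre-roll so tails settle
        read_end = min(total, end + tail)   # post-roll for ring-out
        yield read_start, read_end, start - read_start, read_end - end

print(list(chunk_bounds(total=1000, n_chunks=4, tail=50)))
# → [(0, 300, 0, 50), (200, 550, 50, 50), (450, 800, 50, 50), (700, 1000, 50, 0)]
```

Even with this, stateful effects would not produce bit-identical results at the seams, which is likely among the “various other complications” mentioned.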

IMO it would also limit the possible use to FX that do nothing in the time domain. Release and attack times for compressors, decay time for reverbs are just two problem areas that I can quickly come up with at the chop up locations. And I’m sure there are more.

I tried today with one hour of dialogue audio, using auto split at specific time intervals. It didn’t really work: when I joined the segments back together, the start of each rejoined segment was jumpy. What did work was this: I created 10 segments with region markers placed manually where a person took a pause of silence, used auto split, put the split files into a batch for sonic treatment, then rejoined the files. It took 2 minutes to complete rather than 14 minutes for the entire hour. I definitely think that with these massive core counts there is something there that could massively reduce render times, in the dialogue world anyway…

It would also require overlaps at the cuts of double the longest processing tail of any of the plugins in use

Exactly

Well, I am getting decent results using auto split. This morning a 1-hour single-file render was going to take 28 minutes. I created 10 regions manually where the dialogue took a break, used auto split to create 10 files, added 1 second top and tail, and selected to create a batch processor from a template. The processing took 5 minutes, which is a significant reduction. The only thing needed now is a solution to automatically compile the files back into one file again after the batch processing.

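In the meantime, that missing rejoin step can be scripted outside WaveLab with Python’s standard-library wave module (hypothetical filenames; this simple butt-join assumes the segments share format and sample rate, and that any added top/tail padding has already been trimmed):

```python
# Concatenate batch-processed WAV segments back into one file.
# All segments must share channel count, sample width, and sample rate.
import wave

def join_wavs(segment_paths, out_path):
    """Append each segment's frames to a single output WAV."""
    with wave.open(out_path, "wb") as out:
        for i, path in enumerate(segment_paths):
            with wave.open(path, "rb") as seg:
                if i == 0:
                    out.setparams(seg.getparams())  # copy format from first file
                out.writeframes(seg.readframes(seg.getnframes()))

# join_wavs(["part_01.wav", "part_02.wav"], "full_dialogue.wav")
```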
Interesting to know. Maybe I will do some experiments with this.

Philippe