DAW/Score Editors and AI

I think that with the advancement of AI, it is time for notation software and DAWs to use the power of machine learning in music production.

CS

Yeah, hopefully in the future we don’t have to do anything. Why learn and play and compose and stuff when we don’t have to.

I think those links just made me puke. Yeah, screw all that. Seriously? What the …

We’ve already been using algorithms! Chord pads (chord progressions), arpeggiators, transposition, scale adaptation, etc.
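To underline that point: features like these are ordinary, deterministic algorithms, not AI. A minimal sketch in Python (the helper names are invented for illustration; notes are MIDI note numbers):

```python
# Transposition and arpeggiation as plain algorithms, no learning involved.

def transpose(notes, semitones):
    """Shift each MIDI note number by a fixed interval."""
    return [n + semitones for n in notes]

def arpeggiate(chord, pattern="up", repeats=2):
    """Expand a chord into a note sequence, like a DAW arpeggiator."""
    order = sorted(chord) if pattern == "up" else sorted(chord, reverse=True)
    return order * repeats

c_major = [60, 64, 67]            # C4, E4, G4
print(transpose(c_major, 2))      # -> [62, 66, 69] (D major)
print(arpeggiate(c_major, "up"))  # -> [60, 64, 67, 60, 64, 67]
```

The same input always gives the same output; there is no model and nothing learned from data.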

How is this guy producing music? Ian Kirkpatrick on Producing with the Slide Function | Steinberg Spotlights - YouTube, and many others.

It seems this method of music production (like building something out of random Lego pieces) is everywhere, at least in digital/electronic music, although I have never composed music this way myself! Instead of blindly going through samples and applying the slip/slide function to them, an AI system could help us find more suitable material via a well-designed recommender system. Like I said, it is already there, and it will hit us soon!
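As a sketch of what such a recommender could look like: rank the sample library by cosine similarity of audio feature vectors. The feature values and file names below are invented for the example; a real system would extract features (brightness, percussiveness, loudness, etc.) from the audio itself.

```python
# Nearest-neighbour sample recommendation over hand-made feature vectors.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

library = {
    "kick_tight.wav":   [0.2, 0.9, 0.8],
    "snare_bright.wav": [0.8, 0.7, 0.6],
    "pad_soft.wav":     [0.3, 0.1, 0.4],
}

def recommend(query, library, k=2):
    """Return the k samples closest to the query feature vector."""
    ranked = sorted(library, key=lambda name: cosine(query, library[name]),
                    reverse=True)
    return ranked[:k]

print(recommend([0.25, 0.85, 0.75], library))
# -> ['kick_tight.wav', 'snare_bright.wav']
```

Tools like Sononym and XLN XO mentioned later in this thread work along broadly similar lines, ranking samples by audio similarity rather than file name.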

Using algorithms to help with chord progressions is maybe fine if you’re intending to learn how to do it yourself. If you continue to use them year after year, you’re just lazy. Same with anything similar.

It’s not really “AI” though, it’s algorithms.

But it isn’t “AI”. He’s still using some amount of skill to match rhythm, pitch and form. I wouldn’t call it being a “musician” per se, but he is using some amount of skill.

This is what I see:

Like, why bother? Is there no pride and value in learning a craft and creating art yourself?

Completely agree with MattiasNYC…

AI, as most people imagine it, is still in its very early days… just look at how useless auto-correct is on most mobile phones; AI still can’t cope with things like context, and it’ll be a long time until it can.
If one can be bothered to google (other search engines are available and most likely pay their taxes) around, there are myriad examples of ‘AI created’ ‘music’ and ‘poetry’ floating around and they’re all dreadful! … how do you think we ended up with dubstep! :wink:

Thank you guys for the feedback.

Skill is part of art (and also of music), but across history and many cultures the definition of art has changed drastically over the years.

In Middle English usually with a sense of “skill in scholarship and learning” (c. 1300), especially in the seven sciences, or liberal arts. This sense remains in Bachelor of Arts, etc. Meaning “human workmanship” (as opposed to nature) is from late 14c. Meaning “system of rules and traditions for performing certain actions” is from late 15c. Sense of “skill in cunning and trickery” first attested late 16c. (the sense in artful, artless). Meaning “skill in creative arts” is first recorded 1610s; especially of painting, sculpture, etc., from 1660s.

It is hard to say he is not a musician!

  • The definition of musician: a composer, conductor, or performer of music.
  • The definition of music: the science or art of ordering tones or sounds in succession, in combination, and in temporal relationships to produce a composition having unity and continuity.

I suppose no one likes poetry and music generated by ML (deep learning, the current state of the art for music and natural-language generation). But looking at his résumé and his success in the mainstream media tells us that songwriters today can no longer use orchestration techniques (most likely they don’t know them) and try to stay away from them. Just listening to a piece like this tells us how different we are now. It takes many years to play and memorize this on violin, and even the composer, Max Karl August Bruch in this case, asked Joseph Joachim and Willy Hess to advise him on his writing for violin, and it took years to write this music. Now, just like fast food, composers/songwriters want to create many artworks almost instantly. That is unfortunate.

I was referring to the task, not the person.

It is the epitome of our culture: zero output and 100% return, in less than a Quibi of course.

I think that the real question is: when I produce music, do I put into it something really new, personal, something that can’t be automatically inferred from the analysis of other pieces of music? If the answer is yes, there’s no problem, at least for many years from now. If the answer is no, future developments in artificial intelligence could prove to be really really harmful for our business…

Well, the big problem comes when listeners can ‘change the mood’ of music on the fly and decide for themselves how it evolves as they listen. There may exist a future where you buy AI artists/instrumentalists for day-to-day music in your car/phone/iPod, which you can interact with and affect - and therefore no longer listen to traditional fixed passages of music/prewritten songs and albums.

If/When that becomes the hot fashion in music and listeners can share their own ‘compositions’, based on the sliders/settings they used, with friends - yes, it’s a really big hit to traditional artists.

The big question is would a listener get more enjoyment/stimulation from an AI controlled ‘real time’ music generator that changes with their activity/time/house lighting and decisions. Or an album that has been personally worked on for hundreds of hours?

Also, does the ‘listener’ then become a ‘user’ of music? This will happen, and there’s a clear business model for it too; when put into a competitive sphere, the progression will be very rapid. It would even extend to the point where you could ‘buy’ someone like Drake/Ed Sheeran etc. to lay vocal phrases over the music in real time.

I know so many more people today who listen to instrumentals/soundscapes than ever before. Growing up, it was all about albums, one by one; now it’s just about playlists. It’s also much more about social media and sharing experiences, and this plays right into that market. “Everyone” will feel that they’re a musician or producer.

What you wrote sounds extremely fascinating to me. I’m not thinking of an AI system that creates music from scratch, but of a way to produce music that could give the listener some level of interactivity, based on our choices as composers/producers. You don’t produce a song that is “fixed”, written in stone, but somehow a living creature that can change depending on the interaction with the listener. After all, it sounds like music for videogames, but here the focus would be the music, not the game. I think that somebody is working on similar ideas, and I’d be more than happy to work on “open” pieces of music of this kind, soooo stimulating…

Seems like you have written a good book!

Thanks Chikitin! I hope that it will help as many people as possible to enter the world of music production, or to improve their knowledge of Cubase…

I prefer music created by a human rather than by artificial intelligence; so to speak, music in which the soul was invested.

AI is here to stay! Although I hate it in the sense that it will write music for us, it’s already been here for some time. Think about chord tracks, audio, MIDI, arpeggiators, randomizers, etc.

The thing is…Everyone seriously involved in music wants to ‘create’! They don’t want ‘something’ to create it for them! It’s only people that are not musicians or not talented at all that are looking for stuff like this?

For a novice musician it may be very cool to set a few parameters and press a few buttons and end up with a complete song? But he/she will soon find out there’s a lot more to music than just generating chords and a melody?

Music needs character and storytelling for it to be remotely interesting to the listener. AI (still?) doesn’t have that. It can create chords and melody based on mathematical models, but it can’t create music based on human emotion. You can put in all the right parameters and press all the right buttons on your TensorFlow app, and it still can’t tell from what emotion you’re trying to write this piece of music. So it will always be AI writing the music based on models, not you!
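The “mathematical models” part is easy to illustrate. Here is a toy sketch of a first-order Markov chain picking chords (the transition table is entirely made up for the example): exactly the kind of mechanical generation being described, with no emotional intent anywhere in it.

```python
# A toy Markov-chain chord generator: each chord is drawn from a fixed
# table of allowed successors. The table below is invented for the example.
import random

transitions = {
    "C":  ["F", "G", "Am"],
    "F":  ["G", "C"],
    "G":  ["C", "Am"],
    "Am": ["F", "G"],
}

def generate_progression(start="C", length=8, seed=42):
    """Walk the transition table to produce a chord progression."""
    rng = random.Random(seed)  # fixed seed so the walk is reproducible
    chords = [start]
    for _ in range(length - 1):
        chords.append(rng.choice(transitions[chords[-1]]))
    return chords

print(generate_progression())
```

Every progression it emits is “valid” by construction, and equally anonymous: the model only knows which chord may follow which, nothing about why.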

And let’s for a minute suppose you can get the perfect chords and melody out of this TensorFlow application?

Is it also going to arrange, orchestrate, mix and produce this perfect song for you? Is it also going to pick the synth sounds and samples you had in mind? The fades between sounds that make it special? Guess not!

Whenever ‘stuff’ like this comes up in music production, I always compare it to photography as opposed to drawing/painting. Our cameras today can capture any scenery, and every person or face in the greatest of detail, with just a push of a button. Why is it then that some people still spend hours, days or even weeks to capture that same scenery, person or face in a drawing or a painting?

It’s because artists feel the urge to create! To create something unique!

TensorFlow can maybe create generic chords and melody for the masses?

It will never replace the creative mind. The creative mind that has lived the situation that it wants to portray in music.

“And after the music has been created the next process starts……

I think about what parts should be played by whom? And what drums/percussion should be used? Who should I invite to sing the part? Maybe I could shift this melody here to the clarinet so the singer could switch to the second voice? I might put in a string arrangement on the chorus? Ahh! Here, on those two notes I really need the piano to do solid block chords….I have a patch on the Prophet5 somewhere that’s just perfect for this part, let me….! Mmmm…This snare is just not…it! Let’s try the Yamaha for this? I really think this melody sucks here and the chords are wrong? I’ll rearrange this….”

That’s a common example of the difference between a computer program like TensorFlow dictating chords and melody to you, and a real live human being involved in making music….

Except it doesn’t ‘need’ that globally, beauty is in the eye of the beholder… Or rather ‘ear’, in this case. The listener can add/apply whatever emotions they want to a piece of music - no matter what the original artist had in mind.

So many songs are written about sorrow or happiness and millions perceive it as something completely different. Don’t discount the vast amount of instrumentals that people listen to while working, or just chilling out - not all genres are vocal driven, and AI will be huge in this domain - particularly when interactivity from the listener becomes mainstream.

AI-generated art has sold for nearly half a million dollars in the past, so even at a high level a somewhat intriguing demand exists.

I think there’s a lot of novelty-based interest in AI. Personally I think AI tools can help with the more mundane aspects of production, like sorting samples (e.g. XLN XO, Sononym, Algonaut Atlas). Maybe some of the more tedious aspects of EQing and mixing can be taken over by AI, like iZotope’s products. But as for composition, production and the more creative aspects: even if AI gets good at them and fills a certain demand, there will always be listeners who are interested in music as human expression. That won’t change, I think.