Well, the big problem comes when listeners can ‘change the mood’ of music on the fly and decide for themselves how it evolves as they listen. There may be a future where you buy AI artists/instrumentalists for day-to-day listening in your car/phone/ipod — ones you can interact with and affect — and therefore stop listening to traditional fixed passages of music/prewritten songs and albums.
If/when that becomes the hot fashion in music, and listeners can share their own ‘compositions’ with friends based on the sliders/settings they used — yes, it’s a really big hit to traditional artists.
The big question is: would a listener get more enjoyment/stimulation from an AI-controlled ‘real time’ music generator that changes with their activity/time/house lighting and decisions, or from an album that has been personally worked on for hundreds of hours?
Also, does the ‘listener’ then become a ‘user’ of music? This will happen, and there’s a clear business model for it too; once put into a competitive sphere, the progression will be very rapid. It could even extend to the point where you could ‘buy’ someone like Drake/Ed Sheeran etc. to lay vocal phrases over the music in real time.
I know so many more people today who listen to instrumental music/soundscapes than ever before. Growing up, it was all about albums one by one; then it became all about playlists. Now it’s much more about social media and sharing experiences, and this plays right into that market. “Everyone” will feel that they’re a musician or producer.