Deezer says consumption of AI-generated music on the platform is still very low, between 1% and 3% of total streams, and that 85% of those streams are detected as fraudulent and demonetized.
There are also good reasons for people to use AI music. If you just want some background music for a video you're posting somewhere, licensing is a total legal nightmare where I live. If you're a small business, it's even more nightmarish. Licensing songs is expensive and hard to do, so just generating some OK tune is the best way forward.
I think people should be very careful about how dependent they become on such things. If adoption ever does creep up, the price of accessing those models will spike to astronomically more than having some jingle writer slap something together. Right now these companies are desperate for adoption, but those servers aren't free to run. If they're ever going to turn a profit, the fees for these tools will be orders of magnitude more than any small business owner can afford, and by then there won't be any aspiring new artists left to take a cash job; they'll have either starved or moved on. You're basically Wile E. Coyote-ing yourself off an advertising cliff by using AI like that, and the same goes for other similar uses.
Music doesn’t stay under copyright forever. You could use anything that’s aged out of copyright, too. And then you, as a business, won’t alienate people who choose not to consume AI for ethical reasons.
I hear that, but it really depends on the service, the prompt (including the service’s hidden internal prompt), and the result, all of which are often black boxes.
I personally think artists and labels will have a tough time proving infringement based purely on training data when the outputs themselves aren’t infringing. But there’s really no way to be sure that a “generated,” “uncopyrightable” AI track distilled from unlicensed source music isn’t itself infringing as a pure substantial-similarity question (or whatever your jurisdiction’s infringement test is).