Although not yet released due to copyright concerns, MusicLM is a signal of what is coming: AI that generates music from text descriptions.
What happens when we take all of that musical knowledge, feed it to an AI, and then simply ask questions that express our musical taste? Who needs to be a musician to entertain when we will soon be able to make our own music more easily than ever, harnessing the genius of any musician who has ever been recorded?
Google researchers describe MusicLM as a model that generates high-quality music from text descriptions such as “a soothing violin melody backed by a distorted guitar riff”. The details are available on GitHub.
MusicLM is built on a neural network trained on a large dataset of over 280,000 hours of music, enabling it to produce novel tracks spanning different instruments, genres, and concepts from textual descriptions.
Essentially, the AI attempts to mimic a human brain by taking in all the musical patterns and sound frequencies it is exposed to. One need only search YouTube for AI-generated Carti tracks such as “Digital Butterflies” to hear this kind of technology in action.
Like a magic wand, MusicLM can produce sound with higher fidelity than earlier systems, and you can even hum a tune to guide the model toward the beat you want to hear. According to the Google researchers, the model generates music at 24 kHz that remains consistent over several minutes.
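To put that 24 kHz figure in perspective, here is a small back-of-the-envelope sketch (illustrative arithmetic only, not part of MusicLM itself) showing how many raw audio samples a model must stay coherent across to sustain a multi-minute piece:

```python
# Sample rate reported by the MusicLM researchers: 24 kHz.
SAMPLE_RATE_HZ = 24_000

def num_samples(duration_s: float, sample_rate_hz: int = SAMPLE_RATE_HZ) -> int:
    """Number of raw audio samples needed for a clip of the given duration."""
    return int(duration_s * sample_rate_hz)

# A five-minute piece -- the "several minutes" of consistency the paper claims:
print(num_samples(5 * 60))  # 7,200,000 samples
```

Keeping millions of samples musically consistent is what makes long-form generation hard, and it is why such systems typically generate compact intermediate token sequences rather than raw waveforms directly.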
This is truly a sign of things to come in the music world. We need to answer hard questions and build strategies for effective policy and legislation, such as:
- What are the risks of AI algorithms creating their own compositions and works, and who owns that work: the AI or the human?
- Who owns the music when a new song is a mix of everything on the world wide web, built from the brilliance of our musicians?
- When you buy music, do you also buy the right to use the audio as AI training data?
Since YouTube sensation and American Idol contestant Taryn Southern began composing music with AI, musicians around the world have been trying to understand AI's impact on their craft.
What is clear is that we need to improve legislation around musicians' ownership of their music and decide how AI algorithms should be treated and managed in the music industry.
Google, Meta, Microsoft, OpenAI and other AI market leaders will continue to push the boundaries in every industry that uses AI. We as humans have an ethical responsibility to think harder about the world we want to create and leave for future generations.
If you are a chairman, CEO or other C-level executive in the music industry, it is important to learn more about AI so you can understand its long-term effects on the industry and shape the world you want to protect. "Human brains and musical talent" have value, and we are rapidly commoditizing precious creative DNA into bits and bytes, with major consequences for musicians' creative ownership rights.
At the very least, ask the hard questions and run some scenario-based risk analysis.
To support future research, Google has also published MusicCaps, a dataset of 5,500 music-text pairs with rich text descriptions provided by human experts.