Google's new AI model uses your brain activity to create music

10 August 2023

Google is no stranger to using AI to make music: the company launched MusicLM in January to generate music from text. Now it has upped the ante, developing an AI that reads your brain and creates sound based on your brain activity.

In a new study titled Brain2Music, Google uses artificial intelligence (AI) to reconstruct music from brain activity captured with functional magnetic resonance imaging (fMRI). Five test subjects listened to the same 15-second music clips spanning a variety of genres, including blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, and rock, and researchers analyzed the fMRI data recorded from them.

The researchers then used that data to train a deep neural network to learn the connections between brain activity patterns and various musical attributes, such as rhythm and emotion. Once trained, the model could reconstruct music from an fMRI scan using MusicLM. Because MusicLM generates music from text, it was trained to produce music semantically close to the original musical stimuli.
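To make the idea concrete, here is a minimal sketch of the core pipeline in Python: learn a mapping from fMRI voxel activity to a music-embedding space, then hand the predicted embedding to an embedding-conditioned music generator. The data shapes, the use of ridge regression, and the `musiclm_generate` call are illustrative assumptions, not Google's actual code or API.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training data: one fMRI response vector and one
# music embedding per 15-second clip (sizes are illustrative only).
n_clips, n_voxels, embed_dim = 480, 2000, 128
fmri = np.random.randn(n_clips, n_voxels)               # voxel responses per clip
music_embeddings = np.random.randn(n_clips, embed_dim)  # e.g. MuLan-style embeddings

# Fit a regularized linear map from brain activity to embedding space.
decoder = Ridge(alpha=1.0)
decoder.fit(fmri, music_embeddings)

# At test time, decode a new scan into a predicted music embedding...
new_scan = np.random.randn(1, n_voxels)
predicted_embedding = decoder.predict(new_scan)

# ...and (hypothetically) condition a MusicLM-like generator on it:
# audio = musiclm_generate(embedding=predicted_embedding)
```

The design choice worth noting is that the decoder does not generate audio itself; it only predicts a compact semantic embedding, and the heavy lifting of turning that embedding into sound is delegated to the pretrained music model.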

When put to the test, the generated music shared characteristics such as genre, instrumentation, and mood with the musical stimuli the subject had originally listened to. On the research page's website, you can listen to a number of snippets of the original musical stimuli and compare them with the reconstructions produced by MusicLM. The results are truly impressive.
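One hedged illustration of how this kind of "semantic closeness" between an original stimulus and its reconstruction might be quantified is cosine similarity between their embeddings. The embeddings below are random placeholders; the study's actual evaluation is more involved.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

original = np.random.randn(128)        # embedding of the clip the subject heard
reconstruction = np.random.randn(128)  # embedding of the generated audio

print(f"semantic similarity: {cosine_similarity(original, reconstruction):.3f}")
```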

Essentially, the model can read your mind (technically, your brain patterns) to produce music similar to what you were listening to. In one example, a 15-second clip of Britney Spears' classic song "Oops!...I Did It Again" served as the stimulus. Like the original, the three reconstructions were peppy and upbeat. The audio did not, of course, sound exactly like the original, since the study considers only the musical elements and not the lyrical content.
