The artificially intelligent marimba-playing ‘Shimon’ robot can compose and perform its own musical pieces.
Researchers at the Georgia Tech Center for Music Technology have created a four-armed robot that uses machine learning to learn semantic relationships between musical units, then writes and plays its own music.
In 2013, Compressorhead — the world’s first robotic musical group — wowed audiences the world over with their renditions of hits by AC/DC, Black Sabbath and other rock giants.
While undoubtedly an impressive venture, the German “band” was nonetheless reliant on pre-programmed sequences to play their tunes. Improvisation, let alone penning pieces of their own, was totally out of the question.
Now, thanks to the groundbreaking work at the Georgia Institute of Technology, the prospect of a fully robotic band performing original music doesn’t seem so ridiculous. Mason Bretan, a PhD candidate at the Institute and lead researcher on the project, explains they set out to create a robot capable of engaging in meaningful musical interactions with humans, leading to novel musical experiences and outcomes for man and machine alike.
Named Shimon, the bot is equipped with four arms and eight sticks for playing the marimba, a camera to ensure it plays the right notes and — crucially — artificial intelligence and deep learning capabilities.
“Deep learning allows Shimon to learn from example — we don’t tell it explicitly what to learn, but feed it with compositions by great artists old and new, ranging from Beethoven and Mozart to The Beatles. From here, Shimon learns musical concepts, and develops its own ideas of what is musically important — motifs, rhythmic patterns, pitch relationships, beat locations, genres and moods, and more. With this understanding, it can detect patterns that appear across many different genres and styles,” Mr. Bretan told Sputnik.
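The idea of learning patterns that recur across genres and keys can be illustrated with a toy sketch (this is an illustration only, not Georgia Tech's actual system): converting melodies to pitch intervals makes motifs key-independent, so the same motif can be detected in pieces written in different keys.

```python
from collections import Counter

def interval_motifs(melody, n=3):
    """Convert a melody (MIDI pitches) into pitch intervals and count
    every n-interval motif. Intervals make motifs key-independent."""
    intervals = [b - a for a, b in zip(melody, melody[1:])]
    return Counter(tuple(intervals[i:i + n]) for i in range(len(intervals) - n + 1))

# Two toy "compositions" in different keys sharing the same contour.
classical = [60, 62, 64, 65, 64, 62, 60]   # C major fragment
jazz      = [65, 67, 69, 70, 69, 67, 65]   # same shape, transposed up a fourth

# Counter intersection keeps motifs that appear in both pieces.
shared = interval_motifs(classical) & interval_motifs(jazz)
assert (2, 2, 1) in shared   # the ascending step motif appears in both
```

A real system would of course learn such relationships implicitly from thousands of scores rather than by explicit counting, but the principle of finding structure shared across styles is the same.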
For Mr. Bretan, Shimon has been a labor of love: he has worked on the project for seven years as of 2017, and only in the first half of that year did Shimon manage to compose its own material. Its output to date consists of two calming 30-second bursts of totally original music, recalling jazz and classical.
Before composing the pieces, Shimon was “seeded” with a rough starting point to work from: eight notes in the first case, 16 in the second. The finished works represent a huge leap from the robot’s first efforts, which depended on probabilistic note-to-note transitions and produced very basic, brief, repetitive ditties.
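The "probabilistic note-to-note transitions" of those early efforts describe a first-order Markov chain, which can be sketched in a few lines (a minimal illustration, not the project's actual code): each note is chosen based only on the note before it, which is why the results tend to be repetitive.

```python
import random

def train_transitions(melody):
    """Count note-to-note transitions to form a first-order Markov model."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, seed_note, length, rng):
    """Walk the transition table, falling back to the seed on dead ends."""
    out = [seed_note]
    while len(out) < length:
        choices = table.get(out[-1])
        out.append(rng.choice(choices) if choices else seed_note)
    return out

# A toy training melody (MIDI pitch numbers).
training = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
table = train_transitions(training)
tune = generate(table, seed_note=60, length=8, rng=random.Random(7))
assert len(tune) == 8
assert all(n in {60, 62, 64, 65} for n in tune)
```

Because the model only ever looks one note back, it cannot plan a phrase or develop a motif; the deep-learning approach described above was the step beyond this baseline.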
While a landmark achievement, Shimon’s work is not the first time machines have created music — there are a number of software applications available for general use that can generate music autonomously, and artificial intelligence applications have composed tunes.
In 2016 alone, Sony artificial intelligence algorithms wrote “Daddy’s Car” — a song based on the style of The Beatles — and a Google “Brain” machine produced a 90-second melody, which through machine learning could duet with humans.
What’s new is that Shimon actually physically plays its compositions on a traditional instrument. Mr. Bretan says the key to this advancement was Shimon’s integration of its understanding of its own physical self with its understanding of musical structure: the robot knows its limitations and opportunities given its physical constraints, and works from that basis.
Shimon thinks about itself, and music, optimizing both variables to generate a performance specific to its physical design. In time, Shimon will be able to compose and interact with fellow musicians — be they human or robot.
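One way to picture this coupling of musical intent and physical constraint is a reachability filter (a purely hypothetical sketch; the arm positions, travel speed, and selection rule below are invented for illustration and are not Shimon's actual control logic):

```python
# Hypothetical: semitones an arm can travel per beat.
ARM_SPEED = 12

def playable(candidate, arm_positions, beats_available=1):
    """A note is playable if any arm can reach it in the time available."""
    return any(abs(candidate - pos) <= ARM_SPEED * beats_available
               for pos in arm_positions)

def choose_note(preferred, arm_positions):
    """Pick the highest-ranked musically preferred note the hardware can
    actually reach, mirroring the idea of weighing musical intent
    against physical constraints."""
    for note in preferred:
        if playable(note, arm_positions):
            return note
    return None

arms = [48, 60, 72, 84]                      # four arms resting over the bars
assert choose_note([100, 55], arms) == 55    # 100 is out of reach in time
assert choose_note([61], arms) == 61
```

In this toy version, the generator's first choice is discarded whenever no arm can reach it in time, so the performance that emerges is shaped by the body producing it.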
“Companies like Google and Sony are pumping resources into similar research, trying to identify how to get machines to generate music through their own design. We could have robotic saxophone, guitar and piano players in future, relaying musical ideas through their own ‘bodies’,” Mr. Bretan continues.
However, Shimon’s capabilities don’t merely have musical applications — teaching machines to think creatively is a major ongoing project, as such skills are fundamental to creating robots that can survive successfully in the real world. The ability to think creatively “on the fly” and make corresponding decisions is what separates humanity from machines, and has long been thought to be a permanent dividing line. Shimon’s achievements suggest machines can master these human processes.
If musicians — or indeed atonal members of the public — are scared of the prospect of machines superseding them in time, Mr. Bretan is quick to offer reassurance.
“The purpose of Shimon and other such applications is to augment human capabilities. We’re not trying to supplant people here, but to put them into new scenarios and allow them to expand their own creative processes with machine input. With the help of robots, humans may be able to do things, such as writing and performing songs, that they were never able to do before,” Mr. Bretan told Sputnik.
Edmondo Burr