AI Music's CEO explains dynamic music solutions | Music Tech Fest podcast

Towards the end of 2020, the MTF Labs’ Director Andrew Dubber welcomed AI Music CEO Siavash Mahdavi to the podcast to talk about all things, well, AI music: what we do, how we train machines to make music, and the possibilities of it all... as well as so much more.


In this extract, we’ve included some of the key highlights from their Q&A, but you can listen to the full episode here if you want to skip ahead!


Q. Dubber: (...) So you’re CEO of something called AI Music, which obviously puts it very squarely into our area of interest. What’s AI Music?


Siavash: At AI Music, we are looking at different ways of interacting with music. We always saw music as one of the most creative art forms, and yet one that ninety-nine percent of people consume in a passive, lean-back way. Unless you play an instrument or can produce or can sing, your interaction with music is to sit back and someone says “Here’s my song.” and you go “I’m listening to your song.”, and that’s all you get.


What we’re exploring is whether we can shift music from what we call static consumption to something more along the lines of dynamic co-creation, where an artist, a musician, will make a song but you decide how to interpret that song. “Can you make the song a bit faster? Can you change the key to make it something you can sing along to? Can you change the genre to suit maybe your activity or mood?”.


That was the high-level philosophy of the company, and then we went deeper into “Okay, what does that actually mean?”. “Can we make a product out of that?”. We explored shape-changing music. We explored creating hyper-customised remixes of songs so that when a song gets released, we can create ten thousand versions of that song, and everyone gets their own super-unique version to suit them.


Some of the things we’re doing now are essentially the same thing, but maybe a little bit more practical, around music beds for audio adverts. So can a brand - they’re launching a new phone, they’re launching a new restaurant, whatever it might be - when they create an advert have the music bed of the advert customise itself to the music you were listening to before the advert came in? And what does that do?


Let’s say you’re listening to jazz - you’re listening to Jazz FM or something - and then this advert pops in. Which, no one wants adverts, anyway, but they’re there, and they pay for the musicians. They pay for everything. Can that be a jazzy version of the ad so that you go “Oh.”? It wasn’t as disruptive. I can almost click along to it. It might be that good. But also I’d feel the brand understands me more and wants to engage with me more, and I’m actually more likely to then buy the product.
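To make that idea concrete, here is a purely illustrative sketch of how an ad’s music bed might be matched to the genre of whatever the listener was just hearing. The genre labels, the variant file names, and the `pick_music_bed` helper below are hypothetical assumptions for illustration, not a description of AI Music’s actual system.

```python
# Illustrative sketch only -- variant files and genre labels are hypothetical,
# not AI Music's actual system. Assumes the advert was pre-rendered with a
# few genre-matched versions of its music bed.

AD_VARIANTS = {
    "jazz": "phone_ad_jazz.mp3",
    "hip-hop": "phone_ad_hiphop.mp3",
    "house": "phone_ad_house.mp3",
}
DEFAULT_VARIANT = "phone_ad_neutral.mp3"


def pick_music_bed(preceding_genre: str) -> str:
    """Pick the ad music bed that best matches what the listener was just hearing."""
    return AD_VARIANTS.get(preceding_genre.lower(), DEFAULT_VARIANT)


# The listener was on Jazz FM, so the jazzy version of the ad plays.
print(pick_music_bed("Jazz"))  # -> phone_ad_jazz.mp3
```

In practice the selection could of course be far more fine-grained than a simple lookup, but the basic idea is the same: the preceding listening context drives which version of the music bed is served.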


Dubber: To make ads less offensive.


Siavash: Yeah. Make them less offensive. Less disruptive.


Q. Dubber: And when you hear ‘AI music’, particularly… I talk to a lot of musicians, and the phrase ‘AI music’ really makes them bristle because the assumption is that it’s a replacement for creative musicians. Is that how you see it?


Siavash: No. So my background is in the AI bit. I studied a master’s in machine learning. This was back in… What? 2002. We were actually the first cohort at UCL, in London, that did this master’s in machine learning. It didn’t exist. AI has been around since the ‘60s/’70s, but, for some reason, no one thought to make it into an actual master’s degree. So we were the first guys that did it there. I’ve always been fascinated by the role of automation and where AI sits, and what can machines do better than humans, where can they support humans, and where do humans win? And that’s always been really exciting for me.


Actually, I started my first company when I finished my PhD in machine learning in 2008, and there we used machine learning to automatically design objects that were then 3D printed - so big 3D shapes - for a range of industries. And we focussed on aerospace and Formula One and medical, and in those applications, the same challenge was there.


So there are engineers who are also very creative. They might use a bit more maths, but I would argue that they’re as creative as musicians, and they’re designing very complicated things. (...). And what’s happening is that people are simply designing more things. They’re focussing on other aspects of the car or the components and focussing on driving these tools using higher-level abstractions, using higher-level control. And the outcome is better, and the engineer gets to focus more on what he or she is interested in doing.


Moving back onto music, what we’re looking at doing is creating tools for musicians, including professional musicians, that, firstly, lower the barriers to entry for those who need it. If you don’t have access to a studio, we have some tools that allow you to sing straight into the microphone of your mobile phone, and we use some clever machine learning to augment that signal and make it sound better. So make it sound a little bit more like you’re singing in the studio. That tool is purely just helping people lower the barriers to entry to creating content. But also when it comes to composition and to creating assets, again, allowing musicians to focus more on the creative stuff and less on the searching, less on the mixing, less on the production-y bits.



Q. Dubber: Where does the co-creation element come into your music and how can people use it?


Siavash: What we’ve been focussing on is creating the underlying elements of a song. We then have something that sits on top of that, which we call the remix engine, and that’s where the co-creation comes in. So that allows the end user – anyone from an individual who just wants to have fun to a big corporation that wants to make radio ads – to then interact with that music.


One of the fun things we’re working on now is fully dynamic music that changes in real time. So the applications there would be in fitness, for example, where you might map sensors from a smartwatch to elements of the song. As an example, we took someone’s running pace and mapped that to the tempo of a song, and then we took their heart rate and we mapped that to the energy of the song. And so as you’re running, if you’re running slowly to begin with, maybe you listen to some hip-hop, and as you go faster and faster, it moves into house and then ends up in drum and bass at 174 bpm.
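As a rough sketch of the kind of mapping described above, the snippet below linearly rescales running cadence to a tempo and heart rate to an “energy” level. The ranges, function names, and the linear scaling are illustrative assumptions, not AI Music’s implementation.

```python
# Illustrative sketch of mapping fitness sensor data to musical parameters.
# The ranges and the linear scaling are assumptions, not AI Music's implementation.

def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly rescale value from one range to another, clamping to the input range."""
    value = max(in_min, min(in_max, value))
    scale = (value - in_min) / (in_max - in_min)
    return out_min + scale * (out_max - out_min)


def music_parameters(cadence_spm, heart_rate_bpm):
    """Map running cadence (steps/min) to tempo and heart rate to an energy level in [0, 1]."""
    tempo = map_range(cadence_spm, 120, 180, 85, 174)    # slow jog ~ hip-hop tempo, sprint ~ drum and bass
    energy = map_range(heart_rate_bpm, 90, 180, 0.0, 1.0)
    return tempo, energy


print(music_parameters(130, 110))   # easy pace: slower tempo, lower energy
print(music_parameters(180, 175))   # flat out: 174 bpm, near-maximum energy
```

In a real system, values like the energy level would presumably drive which stems or genre layers are active at any moment, which is how the hip-hop-to-house-to-drum-and-bass shift described here could be achieved.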


This kind of technology allows an adaptation by the user that’s actually unconscious. They’re not actively controlling it by saying “Make it faster”; they’re just running faster and the music is adapting to them. We’re speaking to some fitness companies about that. We’re also speaking to VR companies about the possibilities in gaming as well. For instance, if you’re playing a game, the music will just interact with your gameplay.


Dubber: Increased peril equals heightened music. That sort of thing?


Siavash: Yeah, exactly.



Up next


...and that’s not all. We’ve given you just a taste of what we’re up to at AI Music, but to get the full story and the answers to questions such as “Will the machine take over?”, “How do you train AI to compose a masterpiece?”, or “How do you solve the music licensing issues?”, you’ll have to listen to the full episode here.