June 2022 | Page 33

Powell, noting that “Season of the Witch” by Donovan was one of the Peer-supplied songs that stumped the AI for a long time, as the vocal would be confused with the guitar and bleed into the vocal stem.
To overly simplify it, the AI model is trained on thousands of existing stems to learn the sonic characteristics of vocals, guitars, piano, drums, etc., and then is tested continuously on songs that it hasn’t seen. Initially, guitar stems proved tricky to master, Powell says, because of the guitar’s ability to sound like other instruments. Synth bass, likewise, is very difficult for the AI to identify and separate. Currently, AudioShake provides guitar, drums, bass, vocal, and piano stems. And because of the continuously learning nature of the AI, the stems it separates today are better than those it generated last year, next year’s stems will be even better, and so on.
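The process described above (learn what each instrument looks like in a time-frequency representation, then carve it out of the mixture) can be illustrated with a toy spectrogram mask. The sketch below is plain NumPy and is not AudioShake’s actual model: a trained system predicts the mask from the mixture alone, whereas this “ideal ratio mask” cheats by using the known sources.

```python
import numpy as np

def stft(x, win=256, hop=128):
    """Naive short-time Fourier transform with a Hann window."""
    w = np.hanning(win)
    frames = [np.fft.rfft(w * x[i:i + win])
              for i in range(0, len(x) - win + 1, hop)]
    return np.array(frames)

# Two toy "sources": a low sine (bass-like) and a high sine (vocal-like).
sr = 8000
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 110 * t)
vocal = np.sin(2 * np.pi * 1760 * t)
mix = bass + vocal

# An ideal ratio mask is built from the (normally unknown) source
# spectrograms; a trained model learns to predict a mask like this
# from the mixture alone.
B, V, M = np.abs(stft(bass)), np.abs(stft(vocal)), stft(mix)
mask = V / (V + B + 1e-10)    # fraction of each bin owned by the "vocal"
vocal_est = mask * M          # masked mixture approximates the vocal stem

# Energy check: the masked mixture should track the vocal, not the bass.
err_vocal = np.linalg.norm(np.abs(vocal_est) - V)
err_bass = np.linalg.norm(np.abs(vocal_est) - B)
print(err_vocal < err_bass)
```

In a real separator, a neural network replaces the `mask = V / (V + B)` line, estimating the mask from the mixture spectrogram alone; inverting the masked STFT then yields the audio stem.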
“What’s interesting about sound separation, and I don’t know if you ever fully resolve this, but sometimes you can do this perfect separation where things are really cleanly separated, but you lose a certain element of presence. Our goal is to deliver you the cleanest separation possible, but also retain all those other elements of the mix that you need. I’ve seen cases where, according to the way sound separation gets measured, the separation is just perfect. But you take that to an audio engineer, and they’re like, ‘I would have taken that with a few artifacts. Give me the version that actually performs less according to the metrics and I can cover up a few artifacts in the mix.’ So, there’s a very subjective element to it,” notes Powell. “Another example would be if there’s ducking in the master, then there will be ducking in the stems. But there’s a good argument to be made that you actually don’t want that stem with ducking. Could you do some sort of post-processing afterwards that compensates, even though you’re retaining what actually was happening in the master?”
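The way sound separation “gets measured” is usually a signal-to-distortion ratio (SDR) in the style of the BSS Eval metrics. A simplified, hedged version (real BSS Eval also allows a scaled or filtered reference) makes Powell’s point concrete: the number only captures distortion energy, not how easy the residue would be to hide in a mix.

```python
import numpy as np

def sdr(reference, estimate):
    """Simplified signal-to-distortion ratio in dB: energy of the true
    stem versus energy of everything the estimate got wrong."""
    noise = estimate - reference
    return 10 * np.log10(np.sum(reference**2) / (np.sum(noise**2) + 1e-12))

rng = np.random.default_rng(0)
ref = rng.standard_normal(8000)                      # stand-in for a true stem
clean = ref + 0.01 * rng.standard_normal(8000)       # near-perfect separation
artifacty = ref + 0.1 * rng.standard_normal(8000)    # audibly flawed separation

# The metric ranks the cleaner estimate higher, but says nothing about
# whether an engineer could mask the artifacts in a full mix.
print(sdr(ref, clean) > sdr(ref, artifacty))
```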
Since AudioShake launched commercially with an Enterprise subscription-based service in 2021, it has been used by all three major labels; publishing companies like Primary Wave, Downtown, and others; and distributors like CD Baby. As one example, because the original masters were lost, Crush Music used AudioShake to separate Green Day’s pop-punk album “Kerplunk,” then shared the stems on TikTok so that fans could play along. Another example is Audiosocket, a music licensing and technology company, which used AudioShake to recreate the lost stems for a song to be used in a Super Bowl commercial. Cleopatra Records has used it for remixes and sync licensing, and Vulture’s popular Switched on Pop podcast uses it for educational and analysis purposes.
While sync licensing and remixes are AudioShake’s bread-and-butter uses at the moment, Powell sees broad horizons for the technology. For one, as immersive mixes for virtual reality are more in demand for movies, TV, and video games, the need for stems will grow. But, maybe not so surprising given her past life with Google, Powell also sees some truly massive potential for it in social media and communication.

[Photo: French DJ Madeon on Instagram]
“If you think of social media, it seems pretty clear to us that the next TikTok is going to let us manipulate audio, and edit, customize, and experience audio with the same ease that we do images and video today. Like, neither you nor I necessarily knew how to use Photoshop 10 years ago, but we’ve probably used a couple Instagram filters or done something on TikTok,” she says. Powell notes that TikTok already has the Duet feature and predicts that in the near future, whether on TikTok or a totally new platform, users will be able to insert themselves into a song or pull it apart and remix it with some dramatically simplified editing tools and presets, essentially doing for audio what Instagram filters did for photos and video.
“That stuff to me seems pretty straightforward — not from a rights perspective, necessarily, but for an end-user experience. It makes sense to me that it will happen and that it should happen relatively soon,” adds Powell. “What I also think could happen, but it’s a little harder for me to immediately imagine the UI for it, is… I could easily imagine a world where audio is memeable in the same way that images are in texts and other forms of communication.”
But as Powell alluded to, any time you’re dealing with public and commercial uses of music, respecting song rights is a concern. The Enterprise service, for labels, publishers, and other companies that need regular access to stems from a large quantity of songs, requires that they assert they own the rights to the songs they’re separating. Likewise, the newly launched Indie service – which allows artists and others to pay on a per-song basis or for a subscription at a more affordable price than the Enterprise service – also requires that the user asserts they own the rights to the song. That said, there is no sure-fire, workable way to ensure that the user owns the song rights that they claim.
“I think that the price points, while they are designed to be friendly for indie artists, are such that I don’t think you’re going to have a lot of bedroom producers using it. The bulk of the content that, say, a bedroom producer might be trying to get at is Drake and Cardi B songs or an old Motown record or something like that, and less so that independent content. So, in terms of what drives infringement, it’s not this content for the most part. And secondly, if you just want to remix someone’s song, you’re probably going to head to one of the [free or cheaper] open-access sites and not bother with our product,” says Powell. “Having said that, if any artists ever want to not be on the site, we work with Audible Magic, which is the industry-leading content recognition technology, so it’s fine. But we had to develop a way because, otherwise, indie artists could never use our tool.”
“To hear musical stems created with this technology — on demand — at this high quality gives any songwriter, artist, producer, or rights holder not just new life creatively for their life’s work, but also the ability to create new revenue opportunities for every stakeholder involved,” said songwriter and producer Billy Mann when AudioShake launched in July 2021. Not to mention its convenient potential for remixing and remastering old records where the masters have been lost, which is surprisingly common.
But what matters here is the technology, not necessarily the specific company, and how this innovative use of AI can bolster and propel the work and careers of artists, producers, engineers, and rights holders.
“I want to see people build stuff with stems. I want to see artists and songwriters contributing to that conversation. It shouldn’t just be what some random technologists think that the world wants. I think artists have some pretty great ideas about how they’d like to see their content, their music, being consumed and interesting ways to play it,” says Powell towards the end of our conversation. “I think there’s a lot of artists, particularly on the producing side, who are doing interesting things around letting listeners play with their stems and remix them and some playful interfaces, too, for them to do that without having to work in a DAW.”
The future , indeed , is very intriguing .
For the full conversation with AudioShake CEO & Co-Founder Jessica Powell, listen to the March 2, 2022, episode of the Canadian Musician Podcast.
Michael Raine is the Editor-in-Chief of Professional Sound.