

Exploring a journey from Bengali heritage to electronic invention

Delivered... Peter Kirn | Artists, Labels, Scene | Mon 16 Jul 2018 8:42 pm

Can electronic music tell a story about who we are? Debashis Sinha talks about his LP for Establishment, The White Dog, and how everything from Toronto noodle bowls to Bengali field recordings got involved.

The Canadian artist has a unique knack for melding live percussion techniques and electro-acoustic sound with digital manipulation, and in The White Dog, he dives deep into his own Bengali heritage. Just don’t think of “world music.” What emerges is deeply his own, composed in an entirely electro-acoustic way, not a pastiche of someone else’s musical tradition glued onto some beats. And that’s what drew me to it – this is really the sound of the culture of Debashis, the individual.

And that seems connected to what electronic music production can be – where its relative ease and accessibility can allow us to focus on our own performance technique and a deeper sense of expression. So it’s a great chance not just to explore this album, but also what the journey behind it might say to the rest of us.

CDM’s label side project Establishment put out the new release. I spoke to Debashis just after he finished a trip to Germany and a live performance of the album at our event in Berlin. He writes us from his home in Toronto.

First, the album:

I want to start with this journey you took across India. What was that experience like? How did you manage to gather research while in that process?

I’ve been to India many times to travel on my own since I turned 18 – usually I spend time with family in and near Kolkata, West Bengal and then travel around, backpacking style. Since the days of Walkman cassette recorders, I’ve always carried something with me to record sound. I didn’t have a real agenda in mind when I started doing it – it was the time of cassettes, really, so in my mind there wasn’t much I could do with these recordings – but it seemed like an important process to undertake. I never really knew what I was going to do with them. I had no knowledge of what sound art was, or radio art, or electroacoustic music. I switched on the recorder when I felt I had to – I just knew I had to collect these sounds, somehow, for me.

As the years went on and I understood the possibilities for using sound captured in the wild on both a conceptual and technical level, and with the advent of tools to use them easily, I found, to my surprise, that the act of recording (when in India, at least) didn’t really change. I still felt I was documenting something that was personal and vital to my identity or heart, and the urge to turn on the recorder still came from a very deep place. It could easily have been that I gathered field sound in response to or in order to complete some kind of musical idea, but every time I tried to turn on the recorder in order to gather “assets” for my music, I found myself resisting. So in the end I just let it be, safe in the knowledge that whatever I gathered had a function for me, and may (or may not) in future have a function for my music or sound work. It didn’t feel authentic to gather sound otherwise.

Even though this is your own heritage, I suppose it’s simultaneously something foreign. How did you relate to that, both before and after the trip?

My father moved to Winnipeg, in the center of Canada, almost 60 years ago, and at the time there were next to no Indians (i.e. people from India) there. I grew up knowing all the brown people in the city. It was a different time, and the community was so small, and from all over India and the subcontinent. Passing on art, stories, myth and music was important, but not so much language, and it was easy to feel overwhelmed – I think that passing on of culture operated very differently from family to family, with no overall cultural support at large to bolster that identity for us.

My mom – who used to dance with Uday Shankar’s troupe – would corral all the community children to choreograph “dance-dramas” based on Hindu myths. The first wave of Indian people in Winnipeg finally built the first Hindu temple in my childhood – until then we would congregate around people’s basement altars, or in apartment building common rooms.

There was definitely a relationship with India, but it was one that left me what I call “in/between” cultures. I had to find my own way to incorporate my cultural heritage with my life in Canada. For a long time, I had two parallel lives — which seemed to work fine, but when I started getting serious about music it became something I really had to wrestle with. On the one hand, there was this deep and rich musical heritage that I had tenuous connections to. On the other hand, I was also interested in the 2-Tone music of the UK, American hardcore, and experimental music. I took tabla lessons in my youth, as I was interested in and playing drums, but I knew enough to know I would never be a classical player, and had no interest in pursuing that path, understanding even then that my practice would be eclectic.

I did have a desire to contribute to my Indian heritage from where I sat – to express somehow that “in/between”-ness. And the various trips I undertook on my own to India since I was a young person were in part an effort to explore what form that expression might take, whether I knew it or not. The collections of field recordings (audio and later video) became a parcel of sound that somehow was a thread to my practice in Canada on the “world music” stage and later in the realms of sound art and composition.

One of the projects I do is a durational improvised concert called “The (X) Music Conference”, which is modeled after the all-night classical music concerts that take place across India. They start in the evening and the headliner usually goes on around 4am and plays for 3 or more hours. Listening to music for that long, and all night, does something to your brain. I wanted to give that experience to audience members, but I’m only one person, so my concert starts at midnight and goes to 7am. There is tea and other snacks, and people can sit or lie down. I wanted to actualize this idea of form (the classical music concert) suffused with my own content (sound improvisations) – it was a way to connect the music culture of India to my own practice. Using field recordings in my solo work is another way; re-presenting and re-imagining Hindu myths is yet another.

I think with the development of the various facets of my sound practice, I’ve found a way to incorporate this “form and content” approach, allowing the way that my cultural heritage functions in my psyche to express itself through the tools I use in various ways. It wasn’t an easy process to come to this balance, but along the way I played music with a lot of amazing people that encouraged me in my explorations.

In terms of integrating what you learned, what was the process of applying that material to your work? How did your work change from its usual idioms?

I went through a long process of compartmentalizing when I discovered electroacoustic work – and when consumer technology made it easy to produce. When I was concentrating on playing live music with others on the stage, I spent a lot of time studying various drumming traditions under masters all over – Cairo, Athens, NYC, LA, Toronto – and that was really what kept me curious and driven, knowing I was only glimpsing something that was almost unknowable completely.

As the “world music” industry developed, though, I found the “story” of playing music based on these traditions less and less engaging, and the straight folk festival concert format more and more trivial – fun, but trivial – in some ways. I was driven to tell stories with sound in ways that were more satisfying to me, that ran deeper. These field recordings were a way in, and I made my first record with this in mind – Quell. I simply sat down and gathered my ideas and field recordings, and started to work. It was the first time I really sustained an artistic intention all the way through a major project on my own. As I gained facility with my tools, and as I became more educated on what was out there in the world of this kind of sound practice, I found myself seeking these kinds of sound contexts more and more.

However, what I also started to do was eschew my percussion experience. I’m not sure why, but it was a long time before I gave myself permission to introduce more musical and percussion elements into the sound art type of work I was producing. I think in retrospect I was making up rules that I thought applied, in an effort to navigate this new world of sound production – maybe that was what was happening. I think now I’m finding a balance between music, sound, and story that feels good to me. It took a while though.

I’m curious about how you constructed this. You’ve talked a bit about assembling materials over a longer span of time (which is interesting, too, as I know Robert is working the same way). As we come along on this journey of the album, what are we hearing; how did it come together? I know some of it is live… how did you then organize it?

This balance between the various facets of my sound practice is a delicate one, but it’s also driven by instinct, because really, instinct is all I have to depend on. Whereas before I would give myself very strict parameters about how or what I would produce for a given project, now I’m more comfortable drawing from many kinds of sound production practice.

Many of the pieces on “The White Dog” started as small ideas – procedural or mixing explorations. The “Harmonium” pieces were from a remix of the soundtrack to a video art piece I made at the Banff Centre in Canada, where I wanted to make that video piece a kind of club project. “entr’acte” is from a live concert I did with prepared guitar and laptop accompanying the works of Canadian visual artist Clive Holden. Tracks on other records were part of scores for contemporary dance choreographer Peggy Baker (who has been a huge influence on how I make music, speaking of being open). What brought all these pieces together was in a large part instinct, but also a kind of story that I felt was being told. This cross-pollination of an implied dramatic thread is important to me.

And there’s some really beautiful range of percussion and the like. What are the sources for the record? How did you layer them?

I’ve quite a collection, and luckily I’ve built that collection through real relationships with the instruments, both technical and emotional/spiritual. They aren’t just cool sounds (although they’re that, too) — but each has a kind of voice that I’ve explored and understood in how I play it. In that regard, it’s pretty clear to me what instrument needs to be played or added as I build a track.

Something new happens when you add a live person playing a real thing inside an electronic environment. It’s something I feel is a deep part of my voice. It’s not the only way to hear a person inside a piece of music, but it’s the way I put myself in my works. I love metallic sounds, and sounds with a lot of sustain, or power. I’m intrigued by how percussion can be a texture as well as a rhythm, so that is something I explore. I’m a huge fan of French percussionist Le Quan Ninh, so the bass-drum-as-tabletop is a big part of my live setup and also my studio setup.

This programmatic element is part of what makes this so compelling to me as a full LP. How has your experience in the theater imprinted on your musical narratives?

My theater work encompasses a wide range of theater practice – from very experimental and small to quite large stages. Usually I do both the sound design and the music, meaning I handle pretty much anything coming out of a speaker, from sound effects to music.

My inspiration starts from many non-musical places. That’s mostly the text/story, but not always — anything could spark a cue, from the set design to the director’s ideas to even how an actor moves. Being open to these elements has made me a better composer, as I often end up reacting to something that someone says or does, and follow a path that ends up in music that I never would have made on my own. It has also made me understand better how to tell stories, or rather maybe how not to – the importance of inviting the audience into the construction of the story and the emotion of it in real time. Making the listener lean forward instead of lean back, if you get me.

This practice of collaborative storytelling of course has impact on my solo work (and vice versa) – it’s made me find a voice that is more rooted in story, in comparison to when I was spending all my time in bands. I think it’s made my work deeper and simpler in many ways — distilled it, maybe — so that the story becomes the main focus. Of course when I say “story” I mean not necessarily an explicit narrative, but something that draws the listener from end to end. This is really what drives the collecting and composition of a group of tracks for me (as well as the tracks themselves) and even my improvisations.

Oh, and on the narrative side – what’s going on with Buddha here, actually, as narrated by the ever Buddha-like Robert Lippok [composer/artist on Raster Media]?

I asked Robert Lippok to record some text for me many years ago, a kind of reimagining of the mind of Gautama Buddha under the bodhi tree in the days leading to his enlightenment. I had this idea that maybe what was going through his mind might not have been what we may imagine when we think of the myth itself. I’m not sure where this idea came from – although I’m sure that hearing many different versions of the same myths from various sources while growing up had its effect – but it was something I thought was interesting. I do this often with my works (see above link to Kailash) and again, it’s a way I feel I can contribute to the understanding of my own cultural heritage in a way that is rooted in both my ancestors’ history as well as my own.

And of course, when one thinks of what the Buddha might have sounded like, I defy you to find someone who sounds more perfect than Robert Lippok.

Techno is some kind of undercurrent for this label, maybe not in the strict definition of the genre… I wonder actually if you could talk a bit about pattern and structure. There are these rhythms throughout that are really hypnotic, that regularity seems really important. How do you go about thinking about those musical structures?

The rhythms I seem drawn to run the gamut of time signatures and tempos. Of course, this comes from my studies of various music traditions and repertoire (Arabic, Greek, Turkish, West Asian, South Indian…). As a hand percussionist for many years playing and studying music from various cultures, I found a lot of parallels and cross-talk, particularly in the rhythms of the material I encountered. I delighted in finding the groove in various tempos and time signatures. There is a certain lilt to any rhythm; if you put your mind and hands to it, the muscles will reveal this lilt. At the same time, the sound material of electronic music I find very satisfying and clear. I’m at best a middling recording engineer, so capturing audio is not my forte – working in the box I find way easier. As I developed skills in programming and sound design, I seemed to be drawn to trying to express the rhythms I’ve encountered in my life with new tools and sounds.

Regularity and grid are important in rhythm – even breaking the grid, or stretching it to its breaking point, has a place. (You can hear this very well in South Indian music, among others.) This grid undercurrent is the basis of electronic music and the tools used to make it. The juxtaposition of the human element with various degrees of quantization of electronic sound is something I think I’ll never stop exploring. Even working strongly with a grid has a kind of energy and urgency to it if you’re playing acoustic instruments. There’s a lot to dive into, and I’m planning to work with that idea a lot more for the next release(s).
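[That interplay between grid and human feel is easy to experiment with yourself. Here is a minimal, hypothetical Python sketch – the function and parameter names are ours, not anything Debashis uses – in which event times are pulled toward a grid with adjustable strength, then nudged slightly off it again.]

```python
import random

def quantize(times, grid=0.25, strength=1.0, humanize=0.0):
    """Pull event times (in beats) toward a grid.

    strength: 0.0 leaves the timing untouched, 1.0 snaps hard to the grid.
    humanize: maximum random offset (in beats) re-applied after snapping.
    """
    out = []
    for t in times:
        nearest = round(t / grid) * grid          # closest grid line
        snapped = t + (nearest - t) * strength    # partial quantization
        snapped += random.uniform(-humanize, humanize)
        out.append(round(snapped, 4))
    return out

# A slightly loose hand-played phrase (times in beats):
played = [0.02, 0.49, 1.03, 1.48, 2.01, 2.52, 3.04]
print(quantize(played, grid=0.5, strength=0.8, humanize=0.01))
```

[Sweep strength between 0 and 1 and you get the spectrum he describes, from loose hand timing to a rigid machine grid.]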

And where does Alvin Lucier fit in, amidst this Bengali context?

The real interest for me in creating art lies in actualizing ideas, and Lucier is perhaps one of the masters of this – taking an idea of sound and making it real and spellbinding. “Ng Ta (Lucier Mix)” was a piece I started to make with a number of noodle bowls I found in Toronto’s Chinatown – the white ones with blue fishes on them. The (over)tones and rhythms of the piece as it came together reminded me of a piece I’m really interested in performing, “Silver Streetcar for The Orchestra”, a piece for amplified triangle by Lucier. Essentially the musician plays an amplified triangle, muting and playing it in various places for the duration of the piece. It’s an incredible meditation, and to me Ng Ta on The White Dog is a meditation as well – it certainly came together in that way. And so the title.

I wrestle with the degree to which I invoke my cultural heritage in my work. Sometimes it’s very close to the surface, and the work is derived very directly from Hindu myth, say, or field recordings from Kolkata. Sometimes it simmers in other ways, and with varying strength. I struggle with whether to allow it to be expressed instinctually or more directly and with more intent. Ultimately, the music I make is from me, and all those ideas apply whether or not I think of them consciously.

One of the problems I have with the term “world music” is it’s a marketing term to allow the lumping together of basically “music not made by white people”, which is ludicrous (as well as other harsher words that could apply). To that end, the urge to classify my music as “Indian” in some way, while true, can also be a misnomer or an “out” for lazy listening. There are a billion people in India, I believe, and more on the subcontinent and abroad. Why wouldn’t a track like “entr’acte” be “Indian”? On the other hand, why would it? I’m also a product of the west. How can I manage those worlds and expectations and still be authentic? It’s something I work on and think about all the time – but not when I’m actually making music, thank goodness.

I’m curious about your live set, how you were working with the Novation controllers, and how you were looping, etc.

My live sets are always, always constructed differently – I’m horrible that way. I design new effects chains and different ways of using my outboard MIDI gear depending on the context. I might use contact mics on a kalimba and a prepared guitar for one show, and then a bunch of external percussion that I loop and chop live for another, and for another just my voice, and for yet another only field recordings from India. I’ve used Ableton Live to drive a lot of sound installations as well, using follow actions on clips (“any” comes in handy a lot), and I’ve even made some installations that do the same thing with live input (making sure I have a 5 second delay on that input has….been occasionally useful, shall we say).

The concert I put together for The White Dog project is one that I try and keep live as much as possible. It’s important to me to make sure there is room in the set for me to react to the room or the moment of performance – this is generally true for my live shows, but since I’m re-presenting songs that have a life on a record, finding a meaningful space for improv was trickier.

Essentially, I try and have as many physical knobs and faders as possible – either a Novation Launch Control XL or a Behringer BCR2000 [rotary controller], which is a fantastic piece of gear (I know – Behringer?!). I use a Launchpad Mini to launch clips and deal with grid-based effects, and I also have a little Launch Control mapped to the effects parameters and track views or effects I need to see and interact with quickly. Since I’m usually using both hands to play/mix, I always have a Logidy UMI3 to control live looping from a microphone. It’s a 3-button pedal which is luckily built like a tank, considering how many times I’ve dropped it. I program it in various ways depending on the project – for The White Dog concerts, I use MIDI learn with the Ableton Looper to map record/overdub, undo, and clear buttons, but the Logidy software allows you to go a lot deeper. I have the option to feed up to 3 effects chains, which I sometimes switch on the fly with dummy clips.

The Max For Live community has been amazing and I often keep some kind of chopper on one of the effect chains, and use the User mode on the Launchpad Mini to punch in and out or alter the length of the loop or whatnot. Sometimes I keep controls for another looper on that grid.

Basically, if you want an overview – I’m triggering clips, and have a live mic that I use for percussion and voice for the looper. I try and keep the mixer in a 1:1 relationship with what’s being played/played back/routed to effects because I’m old school – I find it tricky to do much jumping around when I’m playing live instruments. It’s not the most complicated setup but it gets the job done, and I feel like I’ve struck a balance between electronics and live percussion, at least for this project.

What else are you listening to? Do you find that your musical diet is part of keeping you creative, or is it somehow partly separate?

I jump back and forth – sometimes I listen to tons of music with an ear to try and expand my mind, sometimes just to enjoy myself. Sometimes I stop listening to music just because I’m making a lot on my own. One thing I try to always take care of is my mind. I try to keep it open and curious, and try to always find new ideas to ponder. I am inspired by a lot of different things – paintings, visual art, music, sound art, books – and in general I’m really curious about how people make an idea manifest – science, art, economics, architecture, fashion, it doesn’t matter. Looking into or trying to derive that jump from the idea in the mind to its actual real-life expression is something I find endlessly fascinating and inspiring, even when I’m not totally sure how it might have happened. It’s the guessing that fuels me.

That being said, at the moment I’m listening to lots of things that I feel are percolating some ideas in me for future projects, and most of it coming from digging around the amazing Bandcamp site. Frank Bretschneider turned me on to goat(jp), which is an incredible quartet from Japan with incredible rhythmic and textural muscle. I’ve rediscovered the fun of listening to lots of Stereolab, who always seem to release the same record but still make it sound fresh. Our pal Robert Lippok just released a new record and I am so down with it – he always makes music that straddles the emotional and the electronic, which is something I’m so interested in doing.

I continue to make my way through the catalog of French percussionist Le Quan Ninh, who is an absolute warrior in his solo percussion improvisations. Tanya Tagaq is an incredible singer from Canada – I’m sure many of the people reading this know of her – and her live band – drummer Jean Martin, violinist Jesse Zubot, and choirmaster Christine Duncan, an incredible improv vocalist in her own right – are unstoppable. We have a great free music scene in Toronto, and I love so many of the musicians who are active in it, many of them internationally known – Nick Fraser (drummer/composer), Lina Allemano (trumpet), Andrew Downing (cello/composer), Brodie West (sax) – not to mention folks like Sandro Perri and Ryan Driver. They’ve really lit a fire under me to be fierce and in the moment – listening to them is a recurring lesson in what it means to be really punk rock.

Buy and download the album now on Bandcamp.

https://debsinha.bandcamp.com/album/the-white-dog


Escape look-alike Ableton Live colors with these free themes

Delivered... Peter Kirn | Scene | Tue 10 Jul 2018 1:04 pm

You stare at its interface for hours on end. Why not give your eyes something different to look at? Now Ableton Live 10, too, gets access to custom colors.

Judging by looking over people’s shoulders, a lot of Live users simply don’t know that you can hack into Ableton’s custom theme files and modify things. And so we’re all caught in drab uniformity, with the same color theme – both unoriginal and uninspiring.

Fortunately, we have Berlin native and leading Ableton Live guru and educator Madeleine Bloom coming to our rescue. Madeleine has long made some pleasing variations for Live’s colors. Now she’s got two new sets (with more on the way) for Ableton Live 10. Live 10 can still read your old color modifications, but because of some minor changes to the interface, files made for its new XML-based format will work better. (Ableton also changed the name from “skins” to “themes,” for some reason.)

Free Ableton Live Themes Set #1

Free Ableton Live Themes Set #2 [I spot a naming pattern here]

To install a theme, follow this tutorial (it covers both Live 10 and Live 9 and earlier):

Ableton Live Tutorial: How to install new Skins

And if you think these colors aren’t quite right, Madeleine has also written a tutorial for creating your own themes or making modifications to these:

How to Create Your Own Ableton Live Themes & Free PDF Theming Guide

There’s even a link there to a graphical theme editor for Mac and Windows with previews, in case you don’t like editing XML files.

“But, Peter!” says you, “you’re just now a paid shill for Ableton, trying to force me to upgrade to Live 10 when I don’t need it!”

Why, you’ve just made me spit out some of this lifetime supply of Club-Mate soda that Ableton has delivered to my flat every day, you ungrateful readers! Of course, I can’t imagine why you wouldn’t upgrade to Live 10 — why, it’s The Future of Sound. Oh… wait, actually, that’s Native Instruments’ slogan. Sometimes I forget who I’m shilling for.

Anyway, if you are stuck on the clearly inferior and not-having-an-Echo-effect Live 9 or earlier, Madeleine is nice enough to have you covered, too, with a whole bunch of skins for those versions. There are dozens of those, including various contributions from readers:

https://sonicbloom.net/en/?s=ableton+live+skins&submit=Search

And there’s an accompanying guide to making your own skins, as well.

Now, enjoy. I have to go lie down, as I think all this Club-Mate sponsorship has made me feel a bit lightheaded.

You’ll find a ton of resources for Live at Sonic Bloom, the site Madeleine runs. It’s a complete hub for information, which is way better than trying to navigate random YouTube uploads:

https://sonicbloom.net/en/


Pretend you can play and produce drums with this free plug-in

Delivered... Peter Kirn | Scene | Tue 3 Jul 2018 5:18 pm

Spitfire’s latest LABS plug-in release is out, with the theme “DRUMS.” Here’s how to get started with it – and why it may make you feel like you magically know how to actually play and properly record an acoustic drum kit.

Okay, apologies – I’m projecting a little. Some of you, I know, can do both those things. For me, those count as “not at all” and “yes, but only in theory; please hire an actual producer.”

But DRUMS packs an enormous amount of nuance into a deceptively simple, two-octave mapping. Ever had a chocolate sundae and said, you know, I’m really kind of about the cherry and this bit of peanuts covered in chocolate most? You get the feeling that that’s what’s in this pack.

Here’s a sample. This is literally just me mucking around on the keys. (I ran the sound through the Arturia TridA-Pre, from Arturia’s 3 Preamps You’ll Actually Use set, just to add some dynamics.)

Ready to get started? Here’s where to begin.

Get going with LABS – don’t fear the app!

If you missed our first story on LABS, we covered its launch, which came with a lovely soft piano and a chamber string ensemble recorded through a vintage mic:

LABS is a free series of sound tools for everyone, and you’ll want it now

Your first step is to head to the LABS site, and choose the free sound you want. If you created a login before at Spitfire, that will work for “DRUMS” – just click ‘get’ and login. If you haven’t got a login yet, you can register with an email address and password.

https://www.spitfireaudio.com/labs/

I find two things scare people about free software, and I understand your frustration, so to allay those fears:

They’re not signing you up for a newsletter, unless you want one!

Some useful assistance, not annoying intrusion. The app is only there to aid in downloading. It doesn’t launch at startup or anything like that. Basically, it’s there because it’s better than your Web browser – it will actually put the files in the right place and let you choose where those hundreds of megs go, and it will finish a download if interrupted. (That’s especially useful on a slow connection.)

Specifically on Windows, you can make sure it finds your correct VST folder so you don’t load up your DAW and wonder where the heck it went.

The app downloads and installs the content in one step.

Another key feature of the Spitfire app – you can select where the sample content goes, so you can use an external drive if you’re short on space on your internal drive.

Give it a play!

Once LABS is installed, you have your drum kit, which Spitfire says is the creation of drummer Oliver Waton and engineer Stanley Gabriel.

That minimal interface shouldn’t worry you – have a fiddle with the controls and dial in whatever variation you like. Most of the nuance to the LABS kits is really in actually playing them, so the best idea here is to connect your favorite velocity-sensitive instrument and play, whether that’s a drum pad controller or keyboard or whatever else you have handy.

In my case, I wirelessly paired a ROLI Seaboard Block. It’s conveniently also two octaves, so you just need to set the octave range to match the DRUMS.

As opposed to sprawling sample libraries, LABS are simple and compact, so don’t worry – just go ahead and play.

Beginning some ideas with a familiar sound can also be the basis of doing something a bit radical, because a well-recorded acoustic source will give you a rich sonic range – and dares you to make it sound like something else. So, using another free add-on we’ve covered lately, I loaded up the Creative Extensions Pack from Ableton, which works in Live 10 Suite or any copy of Live 10 with a Max for Live license.

To bend this into experimental/IDM territory, I stacked on various effects, including reversing and gating the sound and adding spectral ambience … generally mucking about. The idea was to keep the character of the drum source, but make it sound like spacetime had gone a bit amiss.

Pairing conventional sounds with out-there effects is one way to go. Ableton Live 10 users can grab another freebie (for Suite or Max for Live). Choose Creative Extensions from the browser and download.

And here is a not terribly-well-thought-out effects chain using those Creative Extensions. Could your cat do better? Possibly. I like cats, though. Give those felines some production opportunities, too.

This time I finished off the sound using Native Instruments’ VC 76 compressor and Enhanced EQ.

But I was just having a bit of fun. So I’d love to hear what you come up with using these sorts of sounds. One of the common complaints about production today is that everyone has easy access to sounds and very often the same tools. But let’s use that – let’s see what you all come up with.

If you’re interested in learning more about how to better record drums, I’m happy to ask Spitfire about how they recorded this set, too. Playing with it actually does make me want to grab some mics and a kit, too.

Feel free to post thoughts, questions, and sound links in comments.


Arturia’s KeyLab MKII: a more metal, more connected keyboard controller

Delivered... Peter Kirn | Scene | Fri 29 Jun 2018 9:07 pm

Oh, look, a new MIDI controller keyboard – that ranks right up there with “wow, a new moderately-priced mid-sized sedan.” But… Arturia may have a hit on their hands with the MKII KeyLab. Here’s why.

While everyone else guns for the elusive entry level “everyone,” Arturia has won over specific bands of enthusiasts. The BeatStep Pro is a prime example: by connecting to both MIDI and control voltage, these compact pad-sequencer units have become utterly ubiquitous in modular rigs. They’re the devices that prevent modular performances from turning into aimless noodling. (Well, or at least they give your aimless noodling a set of predictable patterns and rhythm.)

Now, is the modular market big enough to sell the majority of BeatSteps Pro? Probably not. But the agnostic design approach here makes this a multitasker tool in every kitchen, and so word of mouth spreads.

So, keyboards. Native Instruments, love them or hate them, have had a pretty big hit with the Komplete Kontrol line, partly because they do less. They’re elegant looking, they’re not overcrowded, and their encoders let you access not only NI’s software, but lots of other plug-ins via the NKS format.

But the KeyLab MKII looks like it could fit a different niche, by connecting easily to hardware and DAWs.

Backlit pads. 4×4 pads (with velocity and continuous pressure – good), which can also be assigned to chords in case finger drumming isn’t what you had in mind.

DAW control. A lot of people record/edit while playing in parts on the keyboard. So here’s your DAW control layout with some handy shortcut buttons.

Faders/mixing. You get 9 faders with 9 rotaries – so that can be 8 channels plus a master fader. There are assignable buttons underneath those.

Pitch and mod wheels. Dear Arturia: thank you for not being innovative here, as wheels are what many people prefer.

And a big navigator. This bit lets you pull up existing presets.

Okay, none of that is all that exciting – we’ve literally seen exactly this set of features before. But Arturia have pulled it together in some nice ways, like adding a dedicated switch to move into chord mode, letting you change MIDI channel with a button on the front panel (hello, hardware owners), and even thoughtfully including not only those shortcut keys for DAWs, but a magnetic overlay to access them.

Still, keyboards from Nektar and M-Audio, to name just two, cover similar ground. So where Arturia set themselves apart is connectivity.

Class-compliant USB MIDI operation. No drivers needed, meaning you can pair this with anything, including iOS and Android and Linux (including Raspberry Pi).

Control Voltage. 4 CV/Gate outputs, controlling pitch, gate, and modulation. Yes, four. Also one CV input.

MIDI in and out.

Pedals. Expression, sustain, and 3 assignable auxiliary pedal inputs.

Software integration. This is obviously a winner if you’re into Arturia’s Analog Collection library, which has gone from varied and pretty okay to really, really great as it’s matured. And since there are so many instruments, having this hardware to navigate them is a godsend. There’s also the obligatory software bundle to sweeten the pot, but I suspect the real draw here is out-of-box compatibility with the DAW of your choice – including Pro Tools, Logic Pro X, FL Studio, Bitwig, Cubase, Ableton Live, Digital Performer, and Studio One.

Made of metal. Okay, not the keys. (That’d be awesome, if… wrong.) But the chassis is aluminum, and the wheels are even metal.

There’s a pretty nice piano and a bunch of analog presets built in here, making this a good deal.

I think if your workflow isn’t tied to Native Instruments software and plug-ins, the connectivity and standalone operation here could make the Arturia the one to beat. The thing to check, obviously, is hardware and build quality, though note that Arturia say the keybed at least is what’s found on the Brute line.

There are 49- and 61-key variations, and they come in either black or white, so you can, you know, coordinate with your studio and tastes.

Video, of course:

Arturia KeyLab MKII


Moving AV architectures of sine waves: Zeno van den Broek

Delivered... Peter Kirn | Scene | Thu 28 Jun 2018 4:07 pm

Dutch-born, Danish-based audiovisual artist Zeno van den Broek continues to enchant with his immersive, minimalistic constructions. We talk to him about how his work clicks.

Zeno had a richly entrancing audiovisual release with our Establishment label in late 2016, Shift Symm. But he’s been prolific in his audiovisual work, with structures made of vector lines in sight and raw, chest-rattling sine waves in sound. It’s abstract and intellectual in the sense that there’s always a clear sense of form and intent – but it’s also visceral, both for the eyes and ears, as these mechanisms are set into motion, overlapping and interacting. They tug you into another world.

Zeno is joining a lineup of artists around our Establishment label tonight in Berlin – come round if you see this in time and happen to be in town with us.

But wherever you are, we want to share his work and the way he thinks about it.

CDM: So you’ve relocated from the Netherlands to Copenhagen – what’s that location like for you now, as an artist or individually?

Zeno: Yes, I’ve been living there for a little over two years now; it’s been a very interesting shift both personally and workwise. Copenhagen is a very pleasant city to live in – it’s so spacious, green and calm. For my work, it took some more time to feel at home, since it’s structured quite differently from Holland, and interdisciplinary work isn’t as common as in Amsterdam or Berlin. I’ve recently joined a composers’ society, which is a totally new thing to me, so I’m very curious to see where this will lead in the future. Living in such a tranquil environment has enabled me to focus my work and to dive deeper into the concepts behind my work; it feels like a good and healthy base to explore the world from, like being in Berlin these days!

Working with these raw elements, I wonder how you go about conceiving the composition. Is there some experimentation process, adjustment? Do you stand back from it and work on it at all?

Well, it all starts from the concepts. I’ve been adapting the ‘conceptual art’ practice more and more, by using the ideas as the ‘engine’ that creates the work.

For Paranon, this concept came to life out of the desire to deepen my knowledge of sine waves and interference, which always play a role in my art but often more in an instinctive way. Before I created a single tone of Paranon, I did more research on this subject and discovered the need for a structural element in time: the canon, which turned out to be a very interesting method for structuring sine wave developments and for creating patterns of interference that emerge from the shifting repetitions.

Based on this research, I composed canon structures for various parameters of my sine wave generators, such as frequency deviation and phase shifting, and movements of visual elements, such as lines and grids. After reworking the composition into Ableton, I pressed play and experienced the outcome. It doesn’t make sense to me to do adjustments or experiment with the outcome of the piece because all decisions have a reason, related to the concept. To me, those reasons are more important than if something sounds pleasant.

If I want to make changes, I have to go back to the concept, and see where my translation from concept to sound or image can be interpreted differently.
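[To make the canon idea a little more concrete for readers, here is only an illustrative Python sketch under our own assumptions – not Zeno’s actual Ableton/Max setup. One line of parameter changes is repeated in several voices, each entering at a fixed offset, so the overlaps – and the beating between nearby sine frequencies – fall out of the structure rather than being composed by hand.]

```python
# One "line" of the canon: a sequence of (duration_s, frequency_hz) steps.
# Values are invented for illustration; nearby frequencies beat against each other.
line = [(4.0, 220.0), (4.0, 221.5), (4.0, 220.0), (4.0, 218.5)]

def canon_voices(line, n_voices=4, entry_offset=2.0):
    """Repeat the same line in n_voices, each entering entry_offset seconds
    after the previous one – the simplest possible canon structure."""
    voices = []
    for v in range(n_voices):
        t = v * entry_offset
        events = []
        for dur, freq in line:
            events.append((t, dur, freq))   # (start time, duration, frequency)
            t += dur
        voices.append(events)
    return voices

for i, voice in enumerate(canon_voices(line)):
    print(f"voice {i}: {voice}")
```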

There’s such a strong synesthetic element to how you merge audio and visual in all your works. Do you imagine visuals as you’re working with the sound? What do they look like?

I try to avoid creating an image based on the sound. To me, both senses and media are equally important, so I treat them equally in my methods, going from concept to creation. Because I work with fundamental elements in both the visuals and the sound — such as sine waves, lines, grids, and pulses — they create strong relationships and new, often unexpected, results appear from the merging of the elements.

Can you tell us a bit about your process – and I think this has changed – in terms of how you’re constructing your sonic and visual materials?

Yes, that’s true; I’ve been changing my tools to better match my methods. Because of my background in architecture, drawing was always the foundation of my work — to form structures and concepts, but also to create the visual elements. My audiovisual work Shift Symm was still mainly built up out of animated vector drawings in combination with generative elements.

But I’ve been working on moving to more algorithmic methods, because the connection to the concepts feels more natural and it gives more freedom, not being limited by my drawing ability and going almost directly from concept to algorithm to result. So I’ve been incorporating more and more Max in my Ableton sets, and I started using [Derivative] TouchDesigner for the visuals. So Paranon was completely generated in TouchDesigner.

You’ve also been playing out live a lot more. What’s evolving as you perform these works?

Live performances are really important to me, because I love the feeling of having to perform a piece at exactly that time and place, with all the tension of being able to f*** it up — the uncompromising and unforgiving nature of a performance. This tension, in combination with being able to shape the work to the acoustics of the venue, makes a performance into something much bigger than I can rationally explain. It means that in order to achieve this I have to really perform it live: I always give myself the freedom to shape the path a performance takes, to time various phrases and transitions and to be able to adjust many parameters of the piece. This does give a certain friction with the more rational algorithmic foundation of the work but I believe this friction is exactly what makes a live performance worthwhile.

So on our release of yours, Shift Symm, we got to play a little bit with distribution methods – which, while I don’t know if that was a huge business breakthrough, was interesting at least in changing the relationship to the listener. Where are you currently deploying your artwork; what’s the significance of these different gallery / performance / club contexts for you?

Yes, our Shift Symm release was my first ‘digital only’ audiovisual release; this new form has given me many opportunities in the realm of film festivals, where it has been screened and performed worldwide. I enjoy showing my work at these film festivals because of the more equal approach to the sound and image and the more focused attention of the audience. But I also enjoy performing in a club context a lot, because of the energy and the possibilities to work outside the ‘black box’, to explore and incorporate the architecture of the venues in my work.

It strikes me that minimalism in art or sound isn’t what it once was. Obviously, minimal art has its own history. And I got to talk to Carsten Nicolai and Olaf Bender at SONAR a couple years back about the genesis of their work in the DDR – why it was a way of escaping images containing propaganda. What does it mean to you to focus on raw and abstract materials now, as an artist working in this moment? Is there something different about that sensibility – aesthetically, historically, technologically – because of what you’ve been through?

I think my love for minimal aesthetics comes from when I worked as an architect in programs like AutoCAD — the beautiful minimalistic world of the black screen, with the thin monochromatic lines representing spaces and physical structures. And, of course, there is a strong historic relation between conceptual art and minimalism with artists like Sol LeWitt.

But to me, it most strongly relates to what I want to evoke in the person experiencing my work: I’m not looking to offer a way to escape reality or to give an immersive blanket of atmosphere with a certain ambiance. I’m aiming to ‘activate’ by creating a very abstract but coherent world. It’s one in which expectations are being created, but also distorted the next moment — perspectives shift and the audience only has these fundamental elements to relate to, which don’t have a predefined connotation but evoke questions, moments of surprise, and some insights into the conceptual foundation of the work. The reviews and responses I’m getting on a quite ‘rational’ and ‘objective’ piece like Paranon are surprisingly emotional and subjective; the abstract and minimalistic world of sound and images seemingly opens up and activates while keeping enough space for personal interpretation.

What will be your technical setup in Berlin tonight; how will you work?

For my Paranon performance in Berlin, I’ll work with custom-programmed sine wave generators in [Cycling ’74] Max, for which the canon structures are composed in Ableton Live. These structures are sent as OSC messages, and the audio signal is routed to TouchDesigner for the visuals. On stage, I’m working with various parameters of the sound and image that control fundamental elements, where the slightest alteration has a big impact on the whole process.
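[For anyone curious about the plumbing of that kind of Live/Max-to-TouchDesigner link, the OSC side is just timed messages on a UDP port. A rough Python equivalent using the python-osc package might look like the sketch below; the addresses, port, and values are invented for illustration, not taken from Zeno’s patch.]

```python
from pythonosc.udp_client import SimpleUDPClient

# TouchDesigner listens for OSC on a port you choose; 7000 here is arbitrary.
client = SimpleUDPClient("127.0.0.1", 7000)

# Hypothetical control values for the visuals to react to.
client.send_message("/paranon/freq", 220.0)    # current sine frequency in Hz
client.send_message("/paranon/phase", 0.25)    # phase offset, 0..1
client.send_message("/paranon/grid", [8, 8])   # grid subdivision for the visuals
```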

Any works upcoming next?

Besides performing and screening my audiovisual pieces such as Paranon and Hysteresis, I’m working on two big projects.

One is an ongoing concert series in the Old Church of Amsterdam, where the installation Anastasis by Giorgio Andreotta Calò filters all the natural light in the church into a deep red. In June, I performed a first piece in the church, for which I composed a short piece for organ and church bells and re-amplified it in the church with the process made famous by Alvin Lucier’s “I Am Sitting in a Room” — slowly forming the organ and bells to the resonant frequencies of the church. In August, this will get a continuation in a collaboration with B.J. Nilsen, expanding on the resonant frequencies and getting deeper into the surface of the bells.

The other project is a collaboration with Robin Koek named Raumklang: with this project, we aim to create immaterial sound sculptures that are based on the acoustic characteristics of the location they will be presented in. Currently, we are developing the technical system to realize this, based on spatial tracking and choreographies of recording. In the last months, we’ve done residencies at V2 in Rotterdam and STEIM in Amsterdam and we’re aiming to present a first prototype in September.
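[Back to the church project for a moment: for readers who don’t know the Lucier process Zeno mentions, the principle is to play a recording back into a space, re-record it, and repeat, so the room’s resonances pile up generation after generation. A toy NumPy simulation of that idea – the “room” here is a made-up impulse response, not the Old Church – looks roughly like this:]

```python
import numpy as np

def reamplify(signal, room_ir, passes=8):
    """Crude simulation of the 'I Am Sitting in a Room' process: each pass
    plays the previous generation back through the same room (modeled as
    convolution with one impulse response), reinforcing its resonances."""
    out = signal.astype(float)
    for _ in range(passes):
        out = np.convolve(out, room_ir)[: len(signal)]
        out /= np.max(np.abs(out)) + 1e-12    # renormalize each generation
    return out

sr = 44100
source = np.random.randn(sr)                  # stand-in for the organ/bell recording
room_ir = np.zeros(sr // 2)
room_ir[0] = 1.0
room_ir[int(0.01 * sr)] = 0.6                 # toy "room": one strong early reflection
print(reamplify(source, room_ir).shape)
```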

Thanks, Zeno! Really looking forward to tonight!

If you missed Shift Symm on Establishment, here’s your chance:

And tonight in Berlin, at ACUD:

Debashis Sinha / Jemma Woolmore / Zeno van den Broek / Marsch

http://zenovandenbroek.com


AutoTrig and TATAT generate rhythms for Ableton, modular gear

Delivered... Peter Kirn | Scene | Mon 25 Jun 2018 6:44 pm

Composer Alessio Santini is back with more tools for Ableton Live, both intended to help you get off the grid and generate elaborate, insane rhythms.

Developer K-Devices, Santini’s music software house, literally calls this series “Out Of Grid,” or OOG for short. They’re a set of Max for Live devices with interfaces that look like the flowcharts inside a nuclear power plant, but the idea is all about making patterns.

AutoTrig: multiple tracks of shifting structures and grooves, based on transformation and probability, primarily for beat makers. Includes Push 2, outboard modular/analog support.

TATAT: input time, note, and parameter structures, output melodic (or other) patterns. Control via MIDI keyboard, and export to clips (so you can dial up settings until you find some clips you like, then populate your session with those).

AutoTrig spits out multiple tracks of rhythms for beat mangling.

And for anyone who complains that rhythms are repetitive, dull, and dumb on computers, these tools do none of that. This is about climbing into the cockpit of an advanced alien spacecraft, mashing some buttons, and then getting warped all over hyperspace, your face melting into another dimension.

Here’s the difference: those patterns are generated by an audio engine, not a note or event engine per se. So the things you’d do to shape an audio signal – sync, phase distortion – then spit out complex and (if you like) unpredictable streams of notes or percussion, translating that fuzzy audio world into the MIDI events you use elsewhere.
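[K-Devices doesn’t publish the internals of that engine, so treat the following as a loose analogy only, with invented names and numbers: in Python, you could sample a hard-synced sine once per step and threshold it into note-like events, which is the general “audio signal in, event stream out” idea described above.]

```python
import math

def pattern_from_synced_sine(freq=3.0, sync_freq=2.0, steps=32,
                             step_dur=0.125, threshold=0.6):
    """Toy 'audio engine as sequencer': sample a hard-synced sine once per
    step and emit a note-like event whenever it exceeds a threshold."""
    events = []
    for i in range(steps):
        t = i * step_dur
        time_in_cycle = t % (1.0 / sync_freq)      # master oscillator resets the phase
        value = math.sin(2 * math.pi * freq * time_in_cycle)
        if value > threshold:
            events.append((round(t, 3), round(value, 2)))  # (time in s, velocity-ish)
    return events

print(pattern_from_synced_sine())
```

[Nudge freq, sync_freq, or the threshold and the resulting pattern can change drastically – the same knife-edge behavior that makes this kind of device feel alive.]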

TATAT is built more for melodic purposes, but the main thing here is, you can spawn patterns using time and note structures. And you can even save the results as clips.

And that’s only if you stay in the box. If you have some analog or modular gear, you can route audio to those directly, making Ableton Live a brain for spawning musical events outside via control voltage connection. (Their free MiMu6 Max for Live device handles this, making use of the new multichannel support in Max for Live added to Live 10).

Making sense of this madness are a set of features to produce some order, like snapshots and probability switches on AutoTrig, and sliders that adjust timing and probability on TATAT. TATAT also lets you use a keyboard to set pitch, so you can use this more easily live.

If you were just sent into the wilderness with these crazy machines, you might get a bit lost. But they’ve built a pack for each so you can try out sounds. AutoTrig works with a custom Push 2 template, and TATAT works well with any MIDI controller.

Pricing:
AutoTrig 29€ ($34 US)
TATAT 29€ ($34 US)
Bundle AutoTrig + TATAT 39€ ($45 US)

Bundle MOOR + Twistor + AutoTrig + TATAT 69€ ($81)

They’ve presumably already worked out that this sort of thing will appeal mainly to the sorts of folks who read CDM, as they’ve made a little discount coupon for us.

The code is “koog18”

Enter that at checkout, and your pricing is reduced to 29€ ($34 US) for both AutoTrig and TATAT.

Check out their stuff on the K-Devices site:

OOG part 2: AutoTrig and TATAT, lunatic Max For Live devices

https://k-devices.com/

See, the problem with this job is, I find a bunch of stuff that would require me to quit this job in order to have time to use it… but I will find a way to play with Monday’s sequencing haul! I know we all feel the same pain there.

Here we go in videos:


New Faderfox, from mixer-style controllers to RGB insanity

Delivered... Peter Kirn | Scene | Fri 22 Jun 2018 6:46 pm

Want something vanilla, like a MIDI controller with a classic mixer layout or a bunch of pots? Or want something crazy – like a psycho-bright light-up show controller? Faderfox has you either way.

The one-person German boutique controller company Faderfox has been making clever controllers since some of the first days of doing that for software, and they keep getting better. Mathias – he really is just one guy – wrote in with the latest. He’s now shipping the first controllers in his “MODULE” line. The idea here is, you get to mix and match some simple options to build up a virtual mixing surface, for your hardware or software.

These are pre-configured to work with Ableton Live and Elektron’s boxes, and the form factor even matches the Elektron so you can arrange or rack them together neatly.

You can use the new MODULE line with any MIDI-enabled hardware or software, but fans of Elektron will notice something about the dimensions.

On the MX12, you get twelve fader strips – 12 faders, 24 pots, and 24 buttons. Those still send whatever you want, so you can control whatever hardware or software tool you wish (via control change, program change, all the goodies), either by manually creating templates or using MIDI learn to automatically assign them.
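If you’ve never worked with a generic controller like this, what a fader strip sends is just MIDI control change messages. A hedged Python sketch using the mido library – the CC numbers are invented for illustration, not Faderfox defaults – shows what that boils down to, and why MIDI learn on the software side can map it to anything:

```python
import mido

# Open the first available MIDI output (requires a backend such as python-rtmidi).
out = mido.open_output()

# A hypothetical mapping for one fader strip: fader, two pots, two buttons.
strip = {"fader": 14, "pot_a": 15, "pot_b": 16, "button_a": 17, "button_b": 18}

out.send(mido.Message("control_change", channel=0, control=strip["fader"], value=100))
out.send(mido.Message("control_change", channel=0, control=strip["pot_a"], value=64))
out.send(mido.Message("control_change", channel=0, control=strip["button_a"], value=127))
```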

On the PC12, you get just pots – 72 of them. You could put those two together, or use them individually, or build a monster system by chaining these together.

You might get away with the generic one and some adjustment in your software, but there are up to 30 custom setups if not.

And each comes in an aluminum case with two MIDI ins and two MIDI outs, plus USB. An extension port lets you connect to other stuff.

Oh, and this is cute and useful – these come with a dry erase marker and empty overlay, so you can mark up your controller and know what everything is. There’s even a matching stand. Different colored fader caps let you add additional visual feedback.

399 EUR (before VAT) for each.

Okay, so that’s the practical – now let’s get to the impractical (but fun). Mathias has done various one-off custom controller builds, but the GT1 is the craziest, biggest yet – a light-up, 144 RGB LED show controller.

So, for anyone complaining about laptop performance behind a blue glow, uh… take this.

Of course it syncs to music. Of course.

More:

http://www.faderfox.de/gt1.html


Ableton’s Creative Extensions are a set of free tools for sound, inspiration

Delivered... Peter Kirn | Scene | Wed 6 Jun 2018 12:00 pm

On the surface, Ableton’s new free download today is just a set of sound tools. But Ableton also seem focused on helping you find some inspiration to get ideas going.

Creative Extensions are now a free addition to Live 10. They’re built in Max for Live, so you’ll need either Ableton Live 10 Suite or a copy of Live 10 Standard and Max for Live. (Apparently some of you do fit the latter scenario.)

To find the tools, once you have those prerequisites, you’ll just launch the new Live 10 browser, then click Packs in the sidebar, and Creative Extensions will pop up under “Available Packs” as a download option. Like so:

I’m never without my trusty copy of Sax for Live. The rest I can download here.

Then once you’re there, you get a tool for experimenting with melodies, two virtual analog instruments (a Bass, and a polysynth with modulation and chorus), and effects (two delays, a limiter, an envelope processor, and a “spectral blur” reverb).

Have a look:

Melodic Steps is a note sequencer with lots of options for exploration.

Bass is a virtual analog monosynth, with four oscillators. (Interesting that this is the opposite of the approach taken by Native Instruments with the one-oscillator bass synth in Maschine.)

Poli is a virtual analog polysynth, basically staking out some more accessible ground versus the AAS-developed Analog already in Live.

Pitch Hack is a delay – here’s where things start to get interesting. You can transpose, reverse audio, randomize transposition interval, and fold the delayed signal back into the effect. If you’ve been waiting for a wild new delay from the company that launched with Grain Delay, this could be it.

Gated Delay is a second delay, combining a gate sequencer and delay. (Logic Pro 10.4 added some similar business via acquired developer Camel, but nice to have this in Live, too.)

Color Limited is modeled on hardware limiters.

Re-enveloper is a three-band, frequency dependent envelope processor. That gives you some more precise control of envelope on a sound – or you could theoretically use this in combination with other effects. Very useful stuff, so this could quietly turn out to be the tool out of this set you use the most.

Spectral Blur is perhaps the most interesting – it creates dense clouds of delays, which produce a unique reverb-style effect (but one distinct from other reverbs).

And the launch video:

All in all, it’s a nice addition to Ableton you can grab as a free update, and a welcome thank you to Live 10 adopters. I’m going to try some experimentation with the delays and re-enveloper, and I can already tell I’m going to be into this Spectral Blur. (Logic Pro’s ChromeVerb goes a similar direction, and I’m stupidly hooked on that, too.)

Creative Extensions: New in Live 10 Suite

If these feel a little pedestrian and vanilla to you – the world certainly does have a lot of traditional virtual analog – you might want to check out the other creations by this developer, Amazing Noises. They have something called Granular Lab on the Max for Live side, plus a bunch of wonderful iOS effects. And you can always use an iPad or iPhone as an outboard effects processor for your Live set, too, taking advantage of the touch-centric controls. (Think Studiomux.)

https://www.ableton.com/en/packs/by/amazing-noises/

https://www.amazingnoises.com/

http://apps.amazingnoises.com/

If you’re a Max for Live user or developer and want to recommend one of your creations, too, please do!

Want some more quick inspiration / need to unstick your creative imagination today? Check out the Sonic Bloom Oblique Strategies. Here’s today’s:

And plenty more where that came from:

http://sonicbloom.net/en/category/oblique-strategies/

The post Ableton’s Creative Extensions are a set of free tools for sound, inspiration appeared first on CDM Create Digital Music.

Listening, the secret of sound design: Francis Preve at Loop

Delivered... Peter Kirn | Scene | Fri 18 May 2018 3:55 pm

To master sound design, no technology can top your own hearing. That’s the message from Francis Preve, who gave a gripping talk at Ableton Loop. Now we’ve got video – and more discussion. Nothing is sacred – not even the vaunted TB-303 filter.

It’s really easy to fall into the trap of trying to define specialization in the narrowest terms possible, chasing worth in whatever trend is generating it at the moment. But part of why I’ve been glad to know Fran over the years is, he has knowledge and experience that is deep and far-reaching, and that he adapts that ability to a range work. That is, if ever you worry about how to live off your love of music and machines, Fran is a great model: he’s built a skill set that can shift to new opportunities when times change.

So, essentially what he can do is understand sound, technology, and music, put them together, and apply that to diverse results. He’s quietly been a big part of sound design for clients from Dave Smith to KORG to Ableton. He teaches, and keeps up a huge workload of writing and editing. He’s run a label, been a producer, and made hit remixes. And now he has his own unique sound design products, Symplesound and his Scapes series, which act as a calling card for his ability to produce sounds and articulate their significance.

Francis isn’t shy about sharing his thought process. But as with his presets, that means you can learn that thinking method and then apply it to your own work. And that’s how we started at Ableton Loop, beginning with some listening.

Maybe most poetic: finding the same joy in teaching as you do in gardening.

About the 303…

There are a bunch of mini TED talk-style inspirational moments in there, but maybe the most quotable came in Francis’ take on resonance – and the TB-303.

But wait a minute – even if you love the 303, it’s worth listening to Francis’ analysis of why it sits at the edge between success and failure. (And actually, part of why I like the TB-303 personally is that I don’t feel obligated by anyone else to like it.) Fran re-watched our talk and chose to elaborate for CDM:

To further explain my point, Nate Harrison’s Bassline Baseline is a wonderful historical analysis of the whole 303 phenomenon and why it was initially unsuccessful.

That said, I feel quite differently about the TB-03 and expressed this in my 2016 review for Electronic Musician. For starters, it expands greatly on the original’s synthesis parameters—adding distortion, delay, and reverb—which vastly broadens its tonal palette. These effects were also essential components of the “acid house” sound, as most 303 owners relied on them to beef up its thin, resonant flavor. The TB-03 also addressed the original 303’s absolutely opaque approach to sequencing, which resolves my other issue with the first unit (and the music it produced).

So, while I generally dislike the sound of envelope modulated resonant lowpass filters, I wanted to clarify my statements on the 303 and specifically the TB-03. It’s common knowledge that I’m a diehard Roland user and frankly, the TR-8S and System-8 are cornerstones of my current rig (as well as an original SH-101), but after 35 years, I still can’t find a way to enjoy the original 303.

Francis’ TB-03, TR-09 review for EM: https://www.emusician.com/gear/review-roland-tb-03-and-tr-09

Here’s actually where Francis and I agree – and I’ve taken some flak for saying I thought the TB-03 improves on the original. But that little Boutique often finds its way into my luggage when I’m playing live for this very reason, and I know I’m not alone. (And I do like the original 303 and acid house and acid techno – and I love cilantro, too, as it happens!)

Get more of Fran’s brain (and sounds)

Francis has a regular masterclass series for Electronic Musician. Of particular interest: deep dives into Ableton’s new Wavetable in Live 10 and the latest Propellerhead Reason instruments, the phenomenal Europa and Grain.

https://www.francispreve.com/blog/

And meanwhile, he’s continuing to teach sound design to college students including making Scapes part of the curriculum – which is timely, thanks to growing demand in augmented and virtual reality.

More…

https://www.francispreve.com/bio/

https://www.francispreve.com/scapes/

http://www.symplesound.com

https://www.xferrecords.com/preset_packs

Since 2016, Francis has added sounds to:
– Ableton Live 10
– Korg Prologue
– Dave Smith REV2
– Korg Gadget
– Korg iMonoPoly
– Propellerhead Reason
– Xfer preset packs
– PurpleDrums
– Various Symplesound products

New physical modeling sounds for AAS’ unique Chromaphone.

Serum is a heavyweight among producers; Fran’s got your tools for Xfer.

(Other clients over the years: Propellerhead, Roland, iZotope)

And this year, so far:
DSI Prophet X
AAS Solids Chromaphone 2 Pack (arriving next week – rather keen for this one; physical modeling in Chromaphone is great!)
System-8 and Roland Cloud Synthwave pack (with Carma Studios)

Xfer Serum Toolkit Vol 3 (summer release)
Major multi-platform Symplesound release
More Scapes based on field recordings (Fran is roaming with a camper van now) – he says he’s “cracked the code for recreating fire in Ableton”

Live 10 (literally hundreds of presets, mostly Operator and quite a few wavetables)
Korg Prologue, Gadget, and iMonoPoly
Dave Smith REV2
Xfer Serum Toolkit Vol 2 expansion pack – https://www.xferrecords.com/preset_packs/serum_toolkit_2
Scapes – https://www.francispreve.com/scapes/ (or your piece)

But the big hit is perhaps the one we debuted here on CDM:

Get a free pack that recreates Prince’s signature drum sounds

Stay tuned for whatever’s next.

The post Listening, the secret of sound design: Francis Preve at Loop appeared first on CDM Create Digital Music.

Free new tools for Live 10 unlock 3D spatial audio, VR, AR

Delivered... Peter Kirn | Scene | Wed 25 Apr 2018 7:06 pm

Envelop began life by opening a space for exploring 3D sound, directed by Christopher Willits. But today, the nonprofit is also releasing a set of free spatial sound tools you can use in Ableton Live 10 – and we’ve got an exclusive first look.

First, let’s back up. Listening to sound in three dimensions is not just some high-tech gimmick. It’s how you hear naturally with two ears. The way that actually works is complex – the Wikipedia overview alone is dense – but close your eyes, tilt your head a little, and listen to what’s around you. Space is everything.

And just as in the leap from mono to stereo, space can change a musical mix – it allows clarity and composition of sonic elements in a new way, which can transform its impact. So it really feels like the time is right to add three dimensions to the experience of music and sound, personally and in performance.

Intuitively, 3D sound seems even more natural than its visual counterparts. You don’t need to don weird new headgear, or accept disorienting inputs, or rely on something like 19th century stereoscopic illusions. Sound is already as ephemeral as air (quite literally), and so, too, is 3D sound.

So, what’s holding us back?

Well, stereo sound required a chain of gear, from delivery to speaker. But those delivery mechanisms are fast evolving for 3D, and not just in terms of proprietary cinema setups.

But stereo audio also required something else to take off: mixers with pan pots. Stereo effects. (Okay, some musicians still don’t know how to use this and leave everything dead center, but that only proves my point.) Stereo only happened because tools made its use accessible to musicians.

Looking at something like Envelop’s new tools for Ableton Live 10, you see something like the equivalent of those first pan pots. Add some free devices to Live, and you can improvise with space, hear the results through headphones, and scale up to as many speakers as you want, or deliver to a growing, standardized set of virtual reality / 3D / game / immersive environments.

And that could open the floodgates for mixing music in 3D. (Maybe it could even open your own floodgates there.)

Envelop tools for Live 10

Today, Envelop for Live (E4L) has hit GitHub. It’s not a completely free set of tools – you need the full version of Ableton Live Suite, and Live 10 at minimum (since it provides the requisite multi-point audio plumbing). Provided you’re working from that as a base, though, musicians get a set of Max for Live-powered devices for working with spatial audio production and live performance, and developers get a set of tools for creating their own effects.

Start here for the download, installation instructions, and overview:

https://github.com/EnvelopSound/EnvelopForLive/

Read an overview of the system, and some basic explanations of how it works (including some definitions of 3D sound terminology):

https://github.com/EnvelopSound/EnvelopForLive/wiki/System-Overview

And then find a getting started guide, routing, devices, and other reference materials on the wiki:

https://github.com/EnvelopSound/EnvelopForLive/wiki

Here’s the basic idea of how the whole package works, though.

Output. There’s a Master Bus device that stands in for your output buses. It decodes your spatial audio, and adapts routing to however many speakers you’ve got connected – whether that’s just your headphones or four speakers or a huge speaker array. (That’s the advantage of having a scalable system – more on that in a moment.)

Sources. Live 10’s Mixer may be built largely with the idea of mixing tracks down to stereo, but you probably already think of it sort of as a set of particular musical materials – as sources. The Source Panner device, added to each track, lets you position that particular musical/sonic entity in three-dimensional space.

Processors. Any good 3D system needs not only 3D positioning, but also separate effects and tools – because normal delays, reverbs, and the like presume left/right or mid/side stereo output. (Part of what completes the immersive effect is hearing not only the positioning of the source, but reflections around it.)

In this package, you get:
Spinner: automates motion in 3D space horizontally and with vertical oscillations
B-Format Sampler: plays back existing Ambisonics wave files (think samples with spatial information already encoded in them)
B-Format Convolution Reverb: imagine a convolution reverb that works with three-dimensional information, not just two-dimensional – in other words, exactly what you’d want from a convolution reverb
Multi-Delay: cascading, three-dimensional delays out of a mono source
HOA Transform: without explaining Ambisonics, this basically molds and shapes the spatial sound field in real-time
Meter: Spatial metering. Cool.

Spinner, for automating movement.

Spatial multi-delay.

Convolution reverb, Ambisonics style.

Envelop SF and Envelop Satellite venues also have some LED effects, so you’ll find some devices for controlling those (which might also be useful templates for stuff you’re doing).

All of this spatial information is represented via a technique called Ambisonics. Basically, any spatial system – even stereo – involves applying some maths to determine relative amplitude and timing of a signal to create particular impressions of space and depth. What sets Ambisonics apart is, it represents the spatial field – the sphere of sound positions around the listener – separately from the individual speakers. So you can imagine your sound positions existing in some perfect virtual space, then being translated back to however many speakers are available.

This scalability really matters. Just want to check things out with headphones? Set your master device to “binaural,” and you’ll get a decent approximation through your headphones. Or set up four speakers in your studio, or eight. Or plug into a big array of speakers at a planetarium or a cinema. You just have to route the outputs, and the software decoding adapts.
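
To make that scalability a little more concrete, here’s a rough first-order Ambisonics sketch in Python – not the Envelop/ICST code, just the textbook idea: a mono source is encoded once into four channels (W, X, Y, Z), and that same B-format signal can then be decoded to whatever speaker layout you happen to have.

```python
import numpy as np

def encode_fo(mono, azimuth, elevation):
    """Encode a mono signal into first-order B-format (W, X, Y, Z)."""
    w = mono / np.sqrt(2.0)                          # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    return np.stack([w, x, y, z])

def decode_fo(bformat, speaker_angles):
    """Very basic projection decode to any list of (azimuth, elevation) speakers.
    Real decoders weight this per layout, but the point stands: the encoded
    signal doesn't know or care how many speakers you have."""
    w, x, y, z = bformat
    return np.stack([
        0.5 * (np.sqrt(2.0) * w
               + np.cos(az) * np.cos(el) * x
               + np.sin(az) * np.cos(el) * y
               + np.sin(el) * z)
        for az, el in speaker_angles
    ])

# Encode one source at a fixed position...
t = np.linspace(0, 1, 44100)
b = encode_fo(np.sin(2 * np.pi * 220 * t), azimuth=np.pi / 4, elevation=0.2)

# ...then decode the same B-format to quad, or to a ring of eight.
quad  = decode_fo(b, [(np.pi/4, 0), (3*np.pi/4, 0), (-3*np.pi/4, 0), (-np.pi/4, 0)])
ring8 = decode_fo(b, [(k * np.pi / 4, 0) for k in range(8)])
```

The source position lives in the encode step; the speaker layout lives entirely in the decode step – which is why the same session can go from headphones to a planetarium.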

Envelop is by no means the first set of tools to help you do this – the technique dates back to the 70s, and various software implementations have evolved over the years, many of them free – but it is uniquely easy to use inside Ableton Live.

Open source, standards

Free software. It’s significant that Envelop’s tools are available as free and open source. Max/MSP, Max for Live, and Ableton Live are proprietary tools, but the patches and externals exist independently, and a free license means you’re free to learn from or modify the code and patches. Plus, because they’re free in cost, you can share your projects across machines and users, provided everybody’s on Live 10 Suite.

Advanced Max/MSP users will probably already be familiar with the basic tools on which the Envelop team have built. They’re the work of the Institute for Computer Music and Sound Technology (ICST) at the Zürcher Hochschule der Künste in Zurich, Switzerland. ICST have produced a set of open source externals for Max/MSP:

https://www.zhdk.ch/downloads-ambisonics-externals-for-maxmsp-5381

Their site is a wealth of research and other free tools, many of them additionally applicable to fully free and open source environments like Pure Data and Csound.

But Live has always been uniquely accessible for trying out ideas. Building a set of friendly Live devices takes these tools and makes them far more approachable within the Live paradigm.

Non-proprietary standards. There’s a strong push toward proprietary techniques in spatial audio in the cinema – Dolby, for instance, we’re looking at you. But while proprietary technology and licensing may make sense for big cinema distributors, it’s absolute death for musicians, who likely want to tour with their work from place to place.

The underlying techniques here are all fully open and standardized. Ambisonics work with a whole lot of different 3D use cases, from personal VR to big live performances. By definition, they don’t define the sound space in a way that’s particular to any specific set of speakers, so they’re mobile by design.

The larger open ecosystem. Envelop will make these tools new to people who haven’t seen them before, but it’s also important that they share an approach, a basis in research, and technological compatibility with other tools.

That includes the German ZKM’s Zirkonium system, HoaLibrary (that repository is deprecated but links to a bunch of implementations for Pd, Csound, OpenFrameworks, and so on), and IRCAM’s SPAT. All these systems support ambisonics – some support other systems, too – and some or all components include free and open licensing.

I bring that up because I think Envelop is stronger for being part of that ecosystem. None of these systems requires a proprietary speaker delivery system – though they’ll work with those cinema setups, too, if called upon to do so. Musical techniques, and even some encoded spatial data, can transfer between systems.

That is, if you’re learning spatial sound as a kind of instrument, here you don’t have to learn each new corporate-controlled system as if it’s a new instrument, or remake your music to move from one setting to another.

Envelop, the physical version

You do need compelling venues to make spatial sound’s payoff apparent – and Envelop are building their own venues for musicians. Their Envelop SF venue is a permanent space in San Francisco, dedicated to spatial listening and research. Envelop Satellite is a mobile counterpart to that, which can tour festivals and so on.

Envelop SF: 32 speakers in total, including overheads – 24 set in 3 rings of 8 (the speakers in the columns), plus 4 subs and 4 ceiling speakers. (28.4)

Envelop Satellite: 28 speakers – 24 in 3 rings plus 4 subs, with overhead speakers coming soon. (24.4)

The competition, as far as venues: 4DSOUND and Berlin’s Monom, which houses a 4DSOUND system, are similar in function, but use their own proprietary tools paired with the system. They’ve said they plan a mobile system, but no word on when it will be available. The Berlin Institute of Sound and Music’s Hexadome uses off-the-shelf ZKM and IRCAM tools and pairs them with projection surfaces. It’s a mobile system by design, but there’s nothing particularly unique about its sound array or toolset. In fact, you could certainly use Envelop’s tools with any of these venues, and I suspect some musicians will.

There are also many multi-speaker arrays housed in music venues, immersive audiovisual venues, planetariums, cinemas, and so on. So long as you can get access to multichannel interfacing with those systems, you could use Envelop for Live with all of these. The only obstacle, really, is whether these venues embrace immersive, 3D programming and live performance.

But if you thought you had to be Brian Eno to get to play with this stuff, that’s not likely to be the situation for long.

VR, AR, and beyond

In addition to venues, there’s also a growing ecosystem of products for production and delivery, one that spans musical venues and personal immersive media.

To put that more simply: after well over a century of recording devices and production products assuming mono or stereo, now they’re also accommodating the three dimensions your two ears and brain have always been able to perceive. And you’ll be able to enjoy the results whether you’re on your couch with a headset on, or whether you prefer to go out to a live venue.

Ambisonics-powered products now include Facebook 360, Google VR, Waves, GoPro, and others, with more on the way, for virtual and augmented reality. So you can use Live 10 and Envelop for Live as a production tool for making music and sound design for those environments.

Steinberg are adopting ambisonics, too (via Nuendo). Here’s Waves’ guide – they now make plug-ins that support the format, and this is perhaps easier to follow than the Wikipedia article (and relevant to Envelop for Live, too):

https://www.waves.com/ambisonics-explained-guide-for-sound-engineers

Ableton Live with Max for Live has served as an effective prototyping environment for audio plug-ins, too. So developers could pick up Envelop for Live’s components, try out an idea, and later turn that into other software or hardware.

I’m personally excited about these tools and the direction of live venues and new art experiences – well beyond what’s just in commercial VR and gaming. And I’ve worked enough on spatial audio systems to at least say, there’s real potential. I wouldn’t want to keep stereo panning to myself, so it’s great to get to share this with you, too. Let us know what you’d like to see in terms of coverage, tutorial or otherwise, and if there’s more you want to know from the Envelop team.

Thanks to Christopher Willits for his help on this.

More to follow…

http://envelop.us

https://github.com/EnvelopSound/EnvelopForLive/

Further reading

Inside a new immersive AV system, as Brian Eno premieres it in Berlin [Extensive coverage of the Hexadome system and how it works]

Here’s a report from the hacklab on 4DSOUND I co-hosted during Amsterdam Dance Event in 2014 – relevant to these other contexts, having open tools and more experimentation will expand our understanding of what’s possible, what works, and what doesn’t work:

Spatial Sound, in Play: Watch What Hackers Did in One Weekend with 4DSOUND

And some history and reflection on the significance of that system:
Spatial Audio, Explained: How the 4DSOUND System Could Change How You Hear [Videos]

Plus, for fun, here’s Robert Lippok [Raster] and me playing live on that system and exploring architecture in sound, as captured in a binaural recording by Frank Bretschneider [also Raster] during our performance for 2014 ADE. Binaural recording of spatial systems is really challenging, but I found it interesting in that it created its own sort of sonic entity. Frank’s work was just on the Hexadome.

One thing we couldn’t easily do was move that performance to other systems. Now, this begins to evolve.

The post Free new tools for Live 10 unlock 3D spatial audio, VR, AR appeared first on CDM Create Digital Music.

Mod Max: One free download fixes Live 10’s new kick

Delivered... Peter Kirn | Scene | Tue 24 Apr 2018 4:12 pm

Ableton Live 10 has some great new drum synth devices, as part of Max for Live. But that kick could be better. Max modifications, to the rescue!

The Max for Live kick sounds great – especially if you combine it with a Drum Buss or even some distortion via the Pedal, also both new in Live 10. But it makes some peculiar decisions. The biggest problem is, it ignores the pitch of incoming MIDI.

Green Kick fixes that by mapping the incoming MIDI note to the Kick’s Pitch parameter, so you can tap different pads or keyboard keys to pitch the kick where you want it. (You can still trigger a C0 by pressing the Kick button in the interface.)
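
For reference – and this is just the standard equal-temperament conversion, not a peek inside the patch itself – mapping a MIDI note to a pitch in Hz is one line of math:

```python
def midi_note_to_hz(note: int, a4: float = 440.0) -> float:
    """MIDI note 69 = A4 = 440 Hz; each semitone is a factor of 2**(1/12)."""
    return a4 * 2.0 ** ((note - 69) / 12.0)

print(midi_note_to_hz(24))  # the C0 mentioned above (in Live's naming), ~32.7 Hz
print(midi_note_to_hz(31))  # a fifth up, if you want the kick tuned to G
```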

Also: “It seemed strange to have Attack as a numbox and the Decay as a dial.”

Yes, that does seem strange. So you also get knobs for both Attack and Decay, which makes more sense.

Now, all of this is possible thanks to the fact that this is a Max for Live device, not a closed-box internal device. While it’s a pain to have to pony up for the full cost of Live Suite to get Max for Live, the upside is, everything is editable and modifiable. And it’d be great to see that kind of openness in other tools, for reasons just like this.

Likewise, if this green color bothers you, you can edit this mod and … so on.

Go grab it:

http://maxforlive.com/library/device/4680/green-kick

Thanks to Sonic Bloom for this one. They’ve got tons more tips like this, so go check them out:

https://twitter.com/sonicbloomtuts

The post Mod Max: One free download fixes Live 10’s new kick appeared first on CDM Create Digital Music.

Mix Ableton and Maschine, Komplete Kontrol, in new updates

Delivered... Peter Kirn | Scene | Thu 19 Apr 2018 2:29 pm

There’s a big push among software makers to deliver integrated solutions – and that’s great. But if you’re a big user of both, say, MASCHINE MK3 and Ableton Live, here’s some good news.

NI made available two software updates yesterday, for their Maschine groove workstation software and for Komplete Kontrol, their software layer for hosting instruments and effects and interfacing with their keyboards. So, the hardware proposition there is the 4×4 pad grid of the MK3, and the Komplete Kontrol keyboards.

For Maschine users, the ability to move between Ableton Live and Maschine could make a lot of producers and live performers happy. Now, unlike working with Ableton Push, the setup isn’t entirely seamless, and there’s not total integration of hardware and software. But it’s still a big step forward. For instance, I often find myself starting a project with Maschine, because I’ve got a kit I like (including my own samples), or I’m using some of its internal drum synths or bass synth, or just want to wail on four pads and use its workflow for sampling and groove creation. But then, once I’ve built up some materials, I may shift back to playing with Ableton’s workflow in Session or Arrange view to compose an idea. And I know lots of users work the same way. It makes sense, given the whole idea of Maschine is to have the feeling of a piece of hardware.

So, you’ve got this big square piece of gear plugged in. Then sometimes literally you’re unplugging the USB port and connecting Push or something else… or it just sits there, useless.

Having these templates means you switch from one tool to the other, without changing workflow. You could already do this with Maschine Jam, which has a bunch of shortcuts for different tasks and a big grid of triggers (which fits Session View). But the appeal of Maschine for a lot of us is those big, expressive pads on the MK3, so this is what we were waiting for.

On the Komplete Kontrol side, there’s a related set of use cases. Whether you’re the sort to just pull up some presets from Komplete, or at the opposite end of the spectrum, you’re using Komplete Kontrol to manipulate custom Reaktor ensembles, it’s nice to have a set of encoders and transport controls at the ready. The MK2 keyboards brought that to the party – so, for instance, now it’s really easy in Apple’s Logic Pro to play some stuff on the keys, then do another take, without, like – ugh – moving over to the table your computer is on, fumbling for the mouse or keyboard shortcut … you get the idea.

And again, a lot of us are using Ableton Live. I love Logic, but there have been times where I find myself comically missing the Session View as a way of storing ideas.

The notion here is, of course, to get you to buy into Native Instruments’ keyboards. But there is an awfully big ecosystem now of third-party instruments (like those from Output, among some of my favorites) that take advantage of compatibility via the NKS format. (NI likes to call that a “standard,” which I think is a bit of a stretch, given for now there’s no SDK for other hardware and host software makers. But it’s a useful step for now, anyway.)

So, here’s how to get going and what else is new.

Maschine 2.7.4

The big deal with 2.7.4 is new controller workflows (JAM, MK3) and Live integration (MK3). Live users, you’ll want to begin here:

How to Set Up the MASCHINE MK3 Integration for Ableton Live [Native Instruments Support]

There are actually two big improvements here workflow-wise. One is Live support, but the other is easier creation of Loop recordings. With the “Target” parameter, you can drop recordings into:

1. Takes
2. “Sounds” (the Audio plug-in, where you can layer up sounds)
3. Pattern (creates both an Audio plug-in recording and a pattern with the playback)

I think the two together could be a godsend, actually, for composing ideas in a more improvisatory flow. The Target workflow also works on MASCHINE JAM (via different controllers).

There’s also footswitch-triggered recording.

So, Native Instruments are finally listening to feedback from people for whom live sampling is at the heart of their music making process. It’s about time, given that Maschine was modeled on hardware samplers.

The Live integration includes just the basics, but important basics – and it might still be useful even with Push and Maschine side-by-side. The MK3 can access the mixer (Volume, Pan, Mute / Solo / Arm states), clip navigation and launching, recording and quantize, undo/redo, automation toggle, tap tempo, and loop tempo.

As always, you also get various other fixes.

Komplete Kontrol 2.0

Again, you’ll start with the (slightly annoying) installation process, and then you’ll get to playing. NI support has a set of instructions with that, plus some useful detailed links on how the integration works (scroll to the botto, read the whole thing!):

Setting Up Ableton Live for KOMPLETE KONTROL

The other big update here is all about supporting more plug-ins, so your NI keyboard becomes the command center for lots of other instruments and effects you own. NI now boasts hundreds of plug-ins supporting its NKS format, which maps hardware controls to instrument parameters.

Now that includes effects, too. And that’s cool, since sometimes playing is about loading an instrument on the keys, but manipulating the parameters of an effect that processes that instrument. Those plug-ins show up in the browser, now, if they’ve added support, and they also map to the controls.

Scoff if you like, but I know these keyboards have been big sellers. If nothing else, the lesson here is that making your software sounds and effects accessible with a keyboard for tangible control is something people like.

By the way, NI also quietly pushed out a Kontakt sampler update with a whole bunch of power-user improvements to KSP, their custom language for extending/scripting sound patches. That’s of immediate interest only to Kontakt sound content developers, but you can bet some of those little things will mean more improvements to Kontakt-based content you use, if you’re on NI’s ecosystem.

All three updates are available from NI’s Service Center.

If you’ve found a useful workflow with any of this, if you’ve got any tips or hacks, as always – shout out; we’re curious to hear! (I assume you might even be making some music with all this, so that, too.)

The post Mix Ableton and Maschine, Komplete Kontrol, in new updates appeared first on CDM Create Digital Music.

Learn how Tennyson translate between Ableton and percussion on kits

Delivered... Peter Kirn | Artists,Scene | Fri 6 Apr 2018 5:19 pm

One of them likes to solve Rubik’s Cubes, blindfolded, on tour. The other is capable of executing elaborate drum programs programmed on a computer, on live percussion. Meet Tennyson and learn how they work.

As we saw before, Ableton Loop is a place not just for learning about a particular product for musicians, but gathering together ideas from the electronic music community as a whole. And Ableton have been sharing some of that work in an online minisite, so you get a free front row ticket to some of the event from wherever you are.

Tennyson is a good example of how explorations at Loop can cover playing technology as instrument – and everything that means for musicians. Watch:

Tennyson are a young Canadian brother and sister duo, with a unique musical idiom they tested together in live acoustic-electronic improvisations in jazz cafes. Complicated, angular rhythms flow effortlessly and gently, the line between kit and machine blurring. For Loop, they’re interviewed by Jesse Terry, who is product owner for Ableton Push (and has a long history working with the hardware that interacts with Live).

And the sample programming is insane: you get Runescape samples. A baby sneezing. The Mac volume control sound. It’s obsessive Internet-age programming – and then Tess plays this all as acoustic percussion and kit.

In this talk, they talk about jazz education, getting started as kids, Skype lessons. And then they get into the workings of a song.

The big trick here: the duo use Live’s Racks, using the Chain function, so that consistently mapped drum parts can cycle through different sounds as she plays. (I’ll review that technique in more detail soon.) 24 variable pads play all the sounds as Tess is playing.

Working with Chains in Ableton Live’s Device Racks can let you cycle through samples, patches, and layered/split instrument settings.

Part of why the video is interesting to watch is it’s really as much about how Tess has gradually learned how to memorize and recall these elaborate percussion parts. It’s a beautiful example of the human brain expanding to keep up with, then surpass, what the machine makes available.

For Luke’s part, there’s a monome [grid controller], keyboard triggers, and still more electronic pads. The monome loops chopped up samples, sticks can trigger more samples manually — it’s dense. He plays melodic parts both on keyboard and 4×4 pad grid.

The track makeup:

  • Arrangement view contains the song structure
  • A click track (obviously)
  • Software synths each have set lists of sounds, with clips triggering sound changes as MIDI program changes (see the quick sketch after this list)
  • The monome / mlrv sequencer

Here’s an (older) extended live set, so you can see more of how they play:

Here’s their dreamy, poppy latest music video (released March) – made all the more impressive when you realize they basically sound like this live:

More background on the band:

Welcome to the Magically Playful World of Tennyson [Red Bull Music]

New band of the week: Tennyson (No 14) [The Guardian]

Image courtesy the artists.

Check out a growing selection of content from Loop on Ableton’s minisite:

https://www.ableton.com/en/blog/loop/

Bonus: for a quick run-down on chains, here’s AfroDjMac:

The post Learn how Tennyson translate between Ableton and percussion on kits appeared first on CDM Create Digital Music.

Route audio from anywhere to anywhere in Ableton, free

Delivered... Peter Kirn | Scene | Thu 29 Mar 2018 3:20 pm

The quiet addition of arbitrary audio routing in Max for Live in Live 10 has opened the floodgates to new tools. This one free device could transform how you route signal in the software.

One of the frustrations of longtime Ableton Live users, in fact, is that routing options are fairly restricted. You’ve got sends and returns, sure, plus some easy and convenient drop-downs in the I/O section of each channel. But if you’ve ever discovered a particular sidechain routing wasn’t possible, or you just couldn’t get there from here, you know what I’m talking about.

And so, you knew something like Outist was coming. Amidst a bunch of Max for Live plug-in developers thinking up creative things to do with the new routing tools (like spatialization or visualization), this one is dead-simple. It just uses that loophole to give you a device you can easily insert to add a routing wherever you want – a bit like having a virtual patch cable you can plug into your DAW.

And it’s free.

Description (which spells “buss” instead of “bus” for some reason):

outist is a maxforlive device that lets you route any signal to any internal or external destination.

It’s originally designed to bypass Live’s restricted return buss routing. With outist you can have pre and post send PER return channel.

You can also simply use it to send the signal to any physical output or just anywhere in your set…

Find Outist and a bunch of other weird and interesting stuff:

https://gumroad.com/valiumdupeuple

With those floodgates open, as I said, there may well be a better tool out there. So please, readers – don’t be shy. Happy to hear other tips, or about your patch that’s better, or other ideas – shoot!

And yeah, I definitely wish Ableton just did this by default, natively – but I’ll take this hack as a solution!

The post Route audio from anywhere to anywhere in Ableton, free appeared first on CDM Create Digital Music.
