VCV Rack modular is about to get gamepad support

Delivered... Peter Kirn | Scene | Wed 23 May 2018 4:14 pm

Computer or Eurorack, you still want to get those grubby hands on your sounds. So the latest update to the free and open modular platform Rack makes that cheap and easy, with gamepad support.

Developer Andrew Belt is clearly a busy man. His latest update maps gamepads to virtual voltage inside the software modular environment. Watch via this – uh, gentle ambient demo?

Andrew explains on Facebook:

Just added gamepad and computer keyboard support to VCV Rack, soon to be released in Rack 0.6.1.

Joysticks are mapped to voltages -10 to 10V for each axis using the MIDI-CC module from Core with the new “Gamepad” MIDI driver. Buttons can be converted to 10V gates using MIDI-Trig. Similarly to actual MIDI controllers, click the CC or note name display to learn/assign a gamepad joystick/button.
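[Ed.: To make the numbers concrete – a gamepad driver reports each axis normalized to ±1.0, and Rack re-expresses that as ±10V, per the description above. A minimal Python sketch of that mapping, not Rack’s actual code (which is C++):]

    def axis_to_voltage(axis):
        """Map a gamepad axis in [-1.0, 1.0] to a -10V..+10V control signal."""
        axis = max(-1.0, min(1.0, axis))  # guard against out-of-range drivers
        return axis * 10.0

    def button_to_gate(pressed):
        """Buttons become 10V gates, as MIDI-Trig does."""
        return 10.0 if pressed else 0.0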

“But I don’t have a USB gamepad controller!”

They’re super cheap on eBay or Amazon by searching “usb gamepad” for around $10. Compare that with $300 MIDI controllers, and this is more fun per dollar if you’re on a budget (so you can save your money for the next upcoming VCV module!)

The “Computer Keyboard” driver supports the QWERTY US layout and spans two octaves with octave up/down buttons.

This update also adds the ability to use the same MIDI device on Windows with multiple virtual MIDI modules. Previously, because the Windows MIDI API requires exclusive access to each MIDI device, running multiple instances would crash. I have written a MIDI “multiplexer” that solves this.
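(The “multiplexer” idea deserves a quick gloss: claim the device once, then fan incoming messages out to every virtual module that wants them. A hypothetical Python sketch of the pattern – `open_device` stands in for whatever exclusive OS call actually claims the hardware, and none of this is Rack’s real implementation:)

    class MidiMultiplexer:
        """Share one exclusive MIDI device handle among many subscribers."""
        def __init__(self, open_device, device_id):
            self.handle = open_device(device_id)  # the device is claimed exactly once
            self.subscribers = []

        def subscribe(self, callback):
            """Each virtual module registers a callback instead of opening the device."""
            self.subscribers.append(callback)

        def on_message(self, msg):
            """The single driver callback rebroadcasts to every module."""
            for cb in self.subscribers:
                cb(msg)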

Good stuff. I can also imagine an ultra-portable sound rig with a compact PC and a gamepad and keyboard attached – running Linux, of course.

More:
http://vcvrack.com/

Speaking of Linux: native JACK support will hopefully arrive at some point, but until then, there’s already a hack for the best audio system on Linux (and the best way of piping sound between software):

https://gitlab.com/sonusdept/hijack

Oh yeah, and while VCV Rack is free (with inexpensive software add-ons for high-quality modules), there is this problem – it could make you buy hardware.

The post VCV Rack modular is about to get gamepad support appeared first on CDM Create Digital Music.

Get a powerful spectral delay, free, in MSpectralDelay plug-in

Delivered... Peter Kirn | Scene | Wed 23 May 2018 3:29 pm

What makes a delay more interesting? A delay that’s combined with spectral controls. What makes that better? Getting it for free. MSpectralDelay is here – and looks like a must-download.

It’s been a while – I’m sure I’m not alone in missing Native Instruments’ Spektral Delay, discontinued some years back. MSpectralDelay is a different animal – NI’s offering had a whopping 160 bands, whereas this has just six – but you do get a powerful, musical interface that lets you treat delays in a different way.

The idea is this: divide up your sound by frequency, with one to six bands, then add the delay effect with tempo sync and apply modulation.
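In sketch form, that architecture – split first, then an independent, tempo-synced delay per band – looks something like the Python below. It assumes the band-splitting filter bank has already run, and it’s only the general shape of a multiband delay, not Melda’s code:

    import numpy as np

    def tempo_sync_delay_samples(bpm, beats, sample_rate=48000):
        """Delay length for a musical division, e.g. beats=0.25 for a 16th note."""
        return int(round(beats * 60.0 / bpm * sample_rate))

    def multiband_delay(bands, delays, mix=0.5):
        """bands: equal-length float arrays, one per frequency band.
        delays: per-band delay in samples. Returns the wet/dry sum."""
        out = np.zeros_like(bands[0])
        for band, d in zip(bands, delays):
            # shift the band right by d samples, zero-padded at the front
            wet = np.concatenate([np.zeros(d), band[:len(band) - d]])
            out += (1.0 - mix) * band + mix * wet
        return out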

What the developers Melda have done that sets their offering apart is to provide really precise parameter controls with clear visual feedback, MIDI control of everything, and clever features like automatic gain compensation and a “safety” limiter to prevent you from overdriving the results.

Also surprising: not only is there mid/side processing, but you can set up to eight channels of surround, offering some spatial applications.

Melda plugins also feature some nice standard features like modulators with time signatures, morphing and preset recall, different channel modes, and more.

Full feature list from the devs:

The most advanced user interface on the market – stylable, resizable, GPU accelerated
Dual user interface, easy screen for beginners, edit screen for professionals
Unique visualisation engine with classic meters and time graphs
1-6 fully configurable independent bands
Modulators
Adjustable oscillator shape technology
Multiparameters
M/S, single channel, up to 8 channels surround processing…
Smart randomization
Automatic gain compensation (AGC)
Safety limiter
Adjustable up-sampling 1x-16x
Synchronization to host tempo
MIDI controllers with MIDI learn
64-bit processing and an unlimited sampling rate
Extremely fast, optimized for newest AVX2 capable processors
Global preset management and online preset exchange
Supports VST, VST3, AU and AAX interfaces on Windows & Mac, both 32-bit and 64-bit
No dongle nor internet access is required for activation
Free-for-life updates

There’s also this kind of funny demo video, which first explains why you want a delay, and then – as is the custom in our industry – tells you that, naturally, everyone from complete beginners who barely know how to switch on their computer to advanced professionals will be able to have exactly the same experience because presets parameters blah blah.

That said… well, you do need a delay. And this is awesome. And beginners and pros will probably have fun with it. And there are presets. So… fair points, all.

Go grab it:

http://www.meldaproduction.com/MSpectralDelay

via Sonic State

Free download requires registration; the offer ends June 3.

The post Get a powerful spectral delay, free, in MSpectralDelay plug-in appeared first on CDM Create Digital Music.

FL Studio 20 for Windows and now Mac, with Hell-freezing functionality

Delivered... Peter Kirn | Scene | Tue 22 May 2018 1:21 pm

FL Studio aka Fruity Loops has hit a version the developers are dubbing FL Studio 20. At age 20, the software still includes lifetime free updates – and a bunch of new features, including freezing of audio, and Hell freezing over.

The “Hell freezing over” bit you’ll see a lot around this release. It’s a reference to a claim developers Image-Line made that they’d add native Mac support “when Hell freezes over.” The comment at the time wasn’t so outrageous: FL Studio had been built on a Windows-native development toolchain that made porting unthinkable. And while about ten years ago the company flirted with using the WINE compatibility layer to provide rudimentary support, that approach wasn’t terribly satisfying.

Now, Mac users can be first-class FL Studio citizens if they so choose. FL Studio 20 is entirely Mac native – not running any kind of emulation. Of course, it may be hard for Image-Line to shake the Windows association, and some Mac users are moving in the opposite direction, opting for the power-for-price ratio of Windows PCs. But the Mac still represents a huge portion of musicians, and this means choosing FL doesn’t require choosing a particular OS.

(I will say, though – a new Razer Blade is out. And even the old Razer Blade remains cheaper and better equipped than the Mac. Now you do have to disable some Windows 10 annoyances, like a CPU-hogging malware check and automatic updates on by default. Ahem.)

Hell isn’t the only thing FL Studio can freeze. You can now bounce selected audio and pattern clips to audio, render clips to audio, consolidate clips or tracks or takes by bouncing, and more. That’s a huge difference in the FL workflow.

There are plenty of other new features in version 20, too:

Time Signature support (both in playlists and patterns, independently – so, yes, polymetric support if you like – and you thought FL Studio was just for 4/4 trance.)

Playlist Arrangements. Here’s something I find I’m often missing in linear DAWs – you can now set up multiple alternate arrangements, including audio, automation, and pattern clips, all in one project. That could be massive for tasks from trying out alternative song ideas to specific game or live performance sound designs. (I could see a theater show design using this … or fitting a score to different versions of a film trailer … and so on.)

Plugin Delay Compensation, rebuilt. FL already had delay compensation, both automatic and plugin varieties, but it’s been rebuilt from the ground up, say the developers. And it sounds very useful: “Mixer send compensation, Wet/Dry mixer FX compensation, Audio input compensation, Metronome compensation, Plugin Wrapper custom values remembered per-plugin and improved PDC controls in the Mixer.”
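The core of any delay compensation scheme fits in a few lines – this is a sketch of the general principle only, not Image-Line’s rebuilt implementation:

    def delay_compensation(latencies):
        """Given the plugin latency (in samples) reported by each parallel
        mixer path, return the padding delay to insert on each path so
        that everything arrives in sync."""
        longest = max(latencies)
        return [longest - latency for latency in latencies]

    # Three tracks whose plugin chains report different latencies:
    print(delay_compensation([0, 256, 64]))  # -> [256, 0, 192]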

Graph Editor is back! This should never really have left, but a “classic” FL feature has returned, letting you edit MIDI information from the Channel Rack – a very Fruity Loops workflow.

Better recording. There’s now a live display of recorded audio and automatic grouping of tracks as you record – both overdue but welcome.

There are loads of improvements to various plugins, of course, plus lots of other fixes and improvements. Details in the manual:

New Features in FL Studio 20

It’s also pretty remarkable that FL Studio has hit 20 years without ditching its lifetime free upgrade policy. FL users have a substantially different relationship with the software than do users of most typical DAWs, both because of its unique workflow and interface and that lifetime policy. But I’m personally intrigued to give it another go – bouncing and working delay compensation make a big difference, and FL remains a peculiar, interesting toybox full of nice stuff. I think the fact that FL has perhaps not been taken as seriously as tools like Cubase or Ableton Live might itself be a badge of honor – if you can adapt to its often nonstandard ways of working, it offers some big rewards on a small budget.

Announcing FL STUDIO 20 [FL Studio News]

And… uff… Image-Line again launch with a video with truly terrible music. (Sorry, guys!) But… who cares? Go make whatever music you want in it. It’s a production tool!

How to watch the Image-Line launch video without clawing out your eardrums

Okay, so… I have a theory.

Maybe one reason people assume FL Studio is for people making terrible dance music is … because Image-Line (sorry, guys) insist on putting terrible dance music beds underneath the videos. Oh, sure, Ableton can throw a big posh party in Berlin and toss moody high-contrast artist photos beneath a stylish typeface they hired a London design consultancy to choose for them. FL Studio’s video is slightly more … uh … pedestrian.

So I’ve found a solution. First, cue up this delightful live performance of “Söngur heiftar” by classic Icelandic black metal band Misþyrming. It’s a little longer than the FL Studio 20 launch video, so don’t panic … you’ve got up to 60 seconds to then hit play on the FL Studio launch video, and hit the mute button in YouTube.

It’s the “Dark Side of the Moon” / Wizard of Oz approach to making music tech marketing videos more palatable. And it kind of fits. You’re welcome.

You’ll need the sound back on for this one, but here’s an extended tutorial video explaining what’s new:

The post FL Studio 20 for Windows and now Mac, with Hell-freezing functionality appeared first on CDM Create Digital Music.

iZotope adds modeling features to Vocal Synth, makes a creative bundle

Delivered... Peter Kirn | Scene | Thu 10 May 2018 6:48 pm

Singing – it’s the single most important human instrument, but it’s too often overlooked in technology. iZotope has doubled up on innovations there with Vocal Synth 2 – and in case you haven’t been keeping track, they’ve bundled all their production tools together into the new Creative Suite.

Vocal Synth 2 upgrade

Vocal Synth was already a compelling, semi-modular set of tools for processing vocals and applying vocal tech to incoming signal – something you can do creative stuff with whether you’re a singer, or a producer who sings, or a producer working with vocalists, or a producer pretending you can sing. (Yes, it’s useful even on other inputs, even if you lack the vocal chops yourself.)

It’s really, really good, but – the one thing I sort of expected when I first heard about the product was something like physical vocal modeling in the box. Now, sure enough, they’ve added just that.

So, Vocal Synth 2 delivers:

Biovox: A module for physical modeling of the human vocal tract (with “science,” say iZotope), from nasality to formants. This isn’t something like Vocaloid – it’s not about voice synthesis or faking vocals – but in a way, it’s something more musically useful: a model of all the good stuff that happens inside your vocal tract and the resonant cavities in your head, delivered as an effect. That’s really important, because our perception is trained to take all this sort of nuance for granted.
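To get a feel for what “a model of the vocal tract, delivered as an effect” means, here’s a deliberately crude sketch: a bank of resonant peaks parked at vowel-ish formant frequencies. The numbers are illustrative guesses, and this is nowhere near Biovox’s actual science:

    import numpy as np
    from scipy.signal import iirpeak, lfilter

    # Rough formant centers for an "ah" vowel - illustrative values only.
    FORMANTS_AH = [(700, 5), (1220, 8), (2600, 10)]  # (center Hz, Q) pairs

    def vowel_filter(signal, formants=FORMANTS_AH, fs=48000):
        """Sum of resonant peaks: a toy stand-in for vocal-tract resonance."""
        out = np.zeros_like(signal)
        for freq, q in formants:
            b, a = iirpeak(freq, q, fs=fs)  # one resonance per formant
            out += lfilter(b, a, signal)
        return out / len(formants)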

There’s more, too…

Chorus and Ring Mod effects. Yep, less futuristic than Biovox, but very essential.

Improved Shred effect.

Advanced sound, advanced controls. So, hiding controls doesn’t always make things more intuitive – sometimes you actually want to dive down and get something that’s missing in the panel. iZotope say they’ve both improved the sound model, and added the ability to get advanced control over parameters, including “access to Vocoder band controls, per module Oscillator presets, and per module panning and filters.”

Integration with other plug-ins. Since iZotope are selling their production stuff as a suite, they’ve also added the ability for Vocal Synth 2 to show up in Neutron 2’s Masking Meter and Visual Mixer, and in Tonal Balance Control. That means a nice chance to apply Vocal Synth where it does – and doesn’t – belong.

I definitely will review this one soon; this stuff is very much up my alley – and I’m sure a lot of yours, too.

Creative Suite

Okay, those of us who also do design or video editing work may shudder and think of big monthly subscription fees from Adobe when we read those words but – don’t panic.

Creative Suite is just the new bundle of iZotope production tools. While they may be better known for mastering and post production offerings, iZotope have applied sonic science to an impressive and unique stable of stuff you’d use when actually making the music and designing sounds. So, what had been “Creative Bundle” is now the more complete “Creative Suite.”

Included: VocalSynth 2, Iris 2, Trash 2 Expanded, BreakTweaker Expanded, Stutter Edit, DDLY, and Mobius Filter. (I’m pretty sure someone caught on to that filter, because I’ve heard it cropping up in new releases. Don’t know if that’s CDM’s fault in part or not. But it is great fun.)

You can buy Creative Bundle for US$349 now, a steep discount, and then get the bigger Suite when it ships – including the new VocalSynth 2. See:

https://www.izotope.com/en/store/deals.html#vox

There are of course equivalent suites for the other interest areas – an RX Suite for post production and correction/cleanup, plus the O8N2 Bundle that covers mixing and mastering, including the industry favorite Ozone.

Yeah, Ozone – there are definitely some mastering engineers out there keeping big racks of impressive looking gear, then, like, doing most of the mastering on Ozone. (And why not? Just sayin’. Ducks…)

More:

Coming Soon: VocalSynth 2 and New iZotope Creative Suite

The post iZotope adds modeling features to Vocal Synth, makes a creative bundle appeared first on CDM Create Digital Music.

MIDI Polyphonic Expression is now a thing, with new gear and software

Delivered... Peter Kirn | Scene | Mon 7 May 2018 5:37 pm

MIDI Polyphonic Expression (MPE) is now an official part of the MIDI standard. And Superbooth Berlin shows it’s catching on everywhere from granular synths to modular gear.

For decades now, it’s been easy enough to add expression to a single, monophonic line, via various additional controls. But humans have more than one finger. And with MIDI, there was until recently no standard way of adding additional expressiveness for multiple notes/fingers at the same time. All of that changed with the adoption of the MPE (MIDI Polyphonic Expression) specification.
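The trick the spec standardizes is simple to state: each sounding note gets its own MIDI channel, so ordinary per-channel messages – pitch bend, channel pressure, CC74 – become per-note expression. A minimal sketch of the channel bookkeeping, assuming the standard lower-zone layout (channel 1 global, channels 2–16 for notes) and ignoring voice stealing:

    class MpeAllocator:
        """Round-robin note-per-channel assignment, the core of MPE."""
        def __init__(self, member_channels=range(2, 17)):
            self.free = list(member_channels)  # channel 1 stays global
            self.held = {}  # note -> channel

        def note_on(self, note):
            ch = self.free.pop(0)
            self.held[note] = ch
            return ch  # send note-on, then per-note bend/pressure, on this channel

        def note_off(self, note):
            ch = self.held.pop(note)
            self.free.append(ch)
            return ch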

Here’s a nice video explanation from our friend, musician and developer Geert Bevin:

“Oh, fine,” naysayers were able to say, “but is that really for very many people?” And sure enough, there haven’t been so many instruments that knew what to do with the MPE data from a controller. So while you can pick up a controller like the ROLI Seaboard (or more boutique items from Roger Linn and Madrona Labs), and see support in major DAWs like Logic, Cubase, Reaper, GarageBand, and Bitwig Studio, mostly what you’d play would be specialized instruments made for them.

But that’s changing. It’s changing fast enough that you could spot the theme even at an analog-focused show like Superbooth.

Here’s a round-up of what was shown just at that show – and that isn’t even a complete list of the hardware and software support available now.

Thanks to Konstantin Hess from ROLI who helped me compile this list and provided some photos.

Polyend/Dreadbox Medusa. This all-in-one sequencer/synth is one I’ll write up separately. That grid has dedicated X/Y/Z movement on it, and it’s terrifically expressive. What’s great is, it uses MPE so you can record and play that data in supported hosts – or presumably use the same to sequence other MPE-compatible gear. And that also means:

Polyend SEQ. The Polish builder’s standalone sequencer also works with MPE. As on the Medusa, you can play that live, or increment through, or step sequence control input.

Tasty Chips GR-1 Granular Synthesizer. Granular instruments have always posed a challenge when it comes to live performance, because they require manipulating multiple parameters at once. That of course makes them a natural for MPE – and sure enough, when Tasty Chips crowd-funded their GR-1 grain synth, they made MPE one of the selling points. Connect something like a Seaboard, and you have a granular instrument at your command. (An ultra-mobile, affordable Seaboard BLOCK was there for the demo in Berlin.)

The singular Gaz Williams recently gave this a go:

Audio Damage Quanta. The newest iOS app/desktop plug-in from Audio Damage isn’t ready to use yet, but an early build was already at Superbooth connected to both a Linnstrument and a ROLI Seaboard for control. Pair an iPad with your controller, and you have a mobile grain instrument solution.
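A toy grain render shows why these engines crave multidimensional control: buffer position, grain size, density, and pitch are all independently playable – exactly what one MPE-tracked finger can supply. This is a naive sketch, nothing like the GR-1’s or Quanta’s engines, and it assumes `source` is a float array comfortably longer than two grains, with `pitch` kept below 2:

    import numpy as np

    def render_grains(source, fs=48000, position=0.5, size_ms=80,
                      density_hz=20, pitch=1.0, seconds=1.0):
        """Overlap windowed slices ("grains") read near one buffer position."""
        out = np.zeros(int(seconds * fs))
        grain_len = int(size_ms / 1000 * fs)
        window = np.hanning(grain_len)
        hop = int(fs / density_hz)  # one new grain every hop samples
        start = int(position * (len(source) - 2 * grain_len))
        for onset in range(0, len(out) - grain_len, hop):
            idx = start + (np.arange(grain_len) * pitch).astype(int)
            out[onset:onset + grain_len] += source[idx] * window
        return out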

Expert Sleepers FH-1. The FH-1 is a unique MIDI-to-CV modular interface, with both onboard USB host capabilities and polyphonic support. But what would polyphonic input be if you couldn’t also add polyphonic expression? And sure enough, the FH-1 is adding support for that natively. I’m hopeful that Bastl Instruments will choose to do the same with their own 1983 MIDI module.

Polyend Poly module. Also from Polyend, the Poly is designed around polyphony – note the eight-row matrix of CV out jacks, which makes it a sophisticated gateway from MIDI and USB MIDI to voltage. But this digital-to-analog gateway also has native support for MPE, meaning the moment you connect an MPE-sending controller, you can patch that expression into whatever you like.

Endorphin.es Shuttle Control. Shuttle Control is both a (high res) 12-bit MIDI-to-CV converter and practically a little computer-in-a-module all its own. It’s got MPE support, and was showing off that capability at Superbooth.

Once you have that MIDI bridge to voltage, of course, MPE gives you additional powers over a modular rig, so this opens up a lot more than just the stuff mentioned here.

I even know some people switching from Ableton Live to Bitwig Studio just for the added convenience of native MPE support. (That’s a niche, for sure, but it’s real.) I guess the key here is, it takes just one instrument or one controller you love to get you hooked – and then sophisticated modular and software environments can connect to still more possibilities.

It’s not something you’re going to need for every bassline or use all the time, but for some instruments, it adds another dimension to sound and playability.

Got some MPE-supporting picks of your own, or your own creations? Do let us know.

The post MIDI Polyphonic Expression is now a thing, with new gear and software appeared first on CDM Create Digital Music.

Deadbeat’s secret sauce Reaktor picks for “weirdo” production

Delivered... Peter Kirn | Scene | Mon 30 Apr 2018 10:39 am

It’s time for another trip into the strange and wonderful world of artist-created Reaktor ensembles. This time, our guide is dub techno maestro Deadbeat.

The Canadian-born, Berlin-based Scott Monteith is an artist whose chops are at peak maturity, from timbre to rhythm, recording to mix. And Scott’s latest, Wax Poetic For This Our Great Resolve, is both more personal — pulling from inspirational texts from friends — and more sonically intimate. The entire album sounds open and airy and organic, thanks to using acoustic re-recording of electronic elements. Every percussion hit, every synth line was either recorded in real space in the studio or recorded out of the box and into that open space and then miked.

Scott and I got to spend a pleasurably leisurely interview talking about the record, which I wrote up for Native Instruments’ blog:
Deadbeat on a return to hope, sound in real space

With all this focus on acoustic recording and re-recording, you’d think there wouldn’t be much to say about software – but you’d be wrong. There’s yet more shade and color around these sounds that’s produced by synthetic processing, a whole lot of it in Reaktor.

“There’s tons and tons of extra stuff that you would normally delete in vocal takes or guitar takes or whatever that ended up as sauce for feeding vocoders or feeding [Reaktor ensemble] grainstates,” says Scott, “or even some of the real classic [ensembles].” You’re hearing some of that in the hyperreal, clear color of the arrangements and mix.

“I think it’s nice to treat that stuff completely independently,” Scott says, “and then you end up with this bank of stuff that you know is going to be in key. And it’s somehow relatable, whether it be melodically or aesthetically – because you’ve fed it this stuff from a particular track. And then you go back to arrangement mode, because then I can take off my sound designer’s hat and put on my arrangers’ hat.”

Scott is confident enough in his skills to give that secret sauce away, so here’s a tour. Some of these are long-lost gems of the library, too, so don’t expect to find them just by sorting for the latest or most popular ensembles. Some were used on this particular record; others represent related techniques he’s used on other productions.

g-Transcoder
Gabriel Mulzer
Spectral vocoder/delay/reverb

“I’m using that just to add color to things. I love vocoders, period.

It’s like taking the vocals of Gudrun talking or Fatima talking, and using that as the modulator and the carrier signal being the chords in the track. Or it could also be the extra recording of the high hats in the room, and vocoding the vocals with that. So, then you have something rhythmic that’s the same, and in the same air, but then can be free as its own track. Or taking the guitar or the bass…”
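What Scott describes is classic channel vocoding: the modulator’s per-band loudness contour gets imposed on the matching band of the carrier. A bare-bones sketch of that idea – not g-Transcoder itself, which works spectrally – assuming equal-length float signals:

    import numpy as np
    from scipy.signal import butter, lfilter

    def bandpass(x, lo, hi, fs):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return lfilter(b, a, x)

    def envelope(x, fs, cutoff=30.0):
        b, a = butter(1, cutoff / (fs / 2))  # smooth the rectified signal
        return lfilter(b, a, np.abs(x))

    def vocode(modulator, carrier, fs=48000, edges=(100, 300, 800, 2000, 5000)):
        """Impose the modulator's band envelopes on the carrier's bands."""
        out = np.zeros_like(carrier)
        for lo, hi in zip(edges[:-1], edges[1:]):
            level = envelope(bandpass(modulator, lo, hi, fs), fs)
            out += bandpass(carrier, lo, hi, fs) * level
        return out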

GRIP Grain Cloud Synth
Uwe G. Hoenig
Polyphonic granular synth

“This is a playable one – this is one you can play with the keyboard. And the oscillator is whatever you load into it.”

MOLEKULAR
Denis Gökdag / zynaptiq, Native Instruments
Modular multi-effects
KOMPLETE effect; available à la carte or in KOMPLETE ULTIMATE

“It’s fantastic. It’s beautiful. It’s a beautiful combination of super, super simple granular synth process combined with lovely lush reverb. And it’s just amazing.”

The Swarm
Eduard Telik
Random sound generator

“There goes a few hours of time,” says Scott. “This whole frequency modulation and detune and weird shit that’s going on in these guys is amazing.”

Ultimate Reverb
Guenther Fleischmann
Reverberator

“There’s this preset – ‘Coming Up From Hell.’ I use that a lot – I’ve been using that for years. If you’re rolling along, and you want to create density, it’s like, okay, flip this into the Ultimate Reverb, and all of a sudden you’ve got this underlying cloud of ffffoooooosssssh. You’ve made things thick without adding another element.

And that with some sort of distortion, and some sort of sidechain compression to make sure that it doesn’t get in the way of anything — all of a sudden, you’ve created raging hell.”

grainstates
Martin Brinkmann
Granular effect processor

Don’t forget the granular Reaktor ensemble that started the craze. Martin’s landmark granular processor has had an influence even outside the Reaktor community on imagining how grain processing effects can be used as instruments.

Hacking together custom ensembles

The biggest advantage of using Reaktor as a modular environment is, you can hack together what you need if a particular tool doesn’t do exactly what you want. Scott long ago made his name as a Reaktor patcher, but don’t feel obligated to achieve mastery — even he doesn’t necessarily go that route now. “The last one that I did … this thing [Deadbeats] 13 years ago.”

The aforementioned Grain Cloud synth, for instance, he used to substitute oscillators inside a drum machine. Or with granular processors, he’s swapped a sample player with a live input, as on The Swarm. These aren’t complicated hacks – you barely need to know how to operate Reaktor to pull them off. But they then open worlds of new performance and sound design possibilities.

In another instance, Scott had a happy accident hacking mmmd1, the “morphing minimal drum machine” by grainstates creator Martin Brinkmann. That ensemble includes a series of assignable X/Y controllers which can modulate the filter, bitcrush, and so on, with step-based sequencing.

Scott tried applying a child ensemble with a crossfader for interpolating between presets – and that’s when he was surprised. “Because this is step-based, morphing between presets on this thing, as you would go across, it would go thththththththththt …. and you would get these totally twisted, glitchy crossfade things.”

Thanks, Scott! Got more favorite Reaktor ensembles, other granular tools, or the like? Let us know in comments.

Deadbeat on a return to hope, sound in real space [NI Blog]

Deadbeat Wax Poetic For This Our Great Resolve [Review: XLR8R]

https://soundcloud.com/deadbeat

The post Deadbeat’s secret sauce Reaktor picks for “weirdo” production appeared first on CDM Create Digital Music.

The story of the Eventide gear that transformed music, coined “plug-ins”

Delivered... Peter Kirn | Scene | Thu 26 Apr 2018 3:57 pm

From the extraordinary first digital breakthroughs of the 70s, when lightbulbs stood in for LEDs, to what may have been the first use of the word “plug-in,” we meet the inventors of Eventide’s classics – who now have a Grammy nod of their own.

Rock and pop have their heroes, their great records. But when you’ve got an engineering hero, their work finds realization behind the scenes in all that music, in hit music and obscure music. And then it can find its way into your work, too.

These inventions have already indirectly won plenty of Grammy Awards, if you care about that sort of thing. But at the beginning of this year, the pioneers at Eventide got a Lifetime Achievement Award, putting their technical achievements alongside the musical contributions of Tina Turner, Emmylou Harris, and Queen, among others.

Why are these engineers smiling? Because they got a Grammy for their inventions. Tony Agnello (left) and Richard Factor (right) at the headquarters.

Electrical engineers and inventors are rarely household names. But you’ve heard the creations of Richard Factor and Tony Agnello, who remain at Eventide today (as do those inventions, in various hardware and software recreations, including for the Universal Audio platform). For instance, David Bowie’s “Low,” Kraftwerk’s “Computer World” and AC/DC’s “Back In Black” all use their H910 harmonizer, the gear called out specifically by the Grammy organization. And that’s before even getting into Eventide’s harmonizers, delays, the Omnipressor, and many others.

1974 radio advertising:

Here’s the thing – whether or not you care about sounding like a classic record or lived through all of the 1970s (that’s, uh, “not so much” for me on both of those, sorry), the story of how this gear was made is totally fascinating. You’d expect an electrical engineering tale to be dry as dust, but – this is frontier adventure stuff, like, if you’re a total nerd.

Here’s the story of the DDL 1745 from 1971, back when engineers had to “rewind the f***ing tape machines” just to hear a delay.

Eventide founder Richard Factor started experimenting with digital delays while holding down a day job in the defense industry at the height of the Vietnam War, working with shift registers that store signals as bits.

Their advice from the 70s still holds. What do you do with a delay? “Put stuff in it!” Do you need to know what the knobs are doing? No! (Sorry, I may have just spoiled potentially thousands of dollars in audio training. My apologies to the sound schools of the world.)

Susan Rogers of Prince fame (who we’ve been talking about lately) also talks about how she “had to have” her Eventide harmonizer and delays. I’ve now come to feel that way about my plug-in folder and their software recreations, just because then you have the ability to dial up unexpected possibilities.

Or, there’s the Omnipressor, the classic early 70s gear that introduced the very concept of the dynamics processor. Here, inventor Richard Factor explains how its creation grew out of the Richard Nixon tapes. No – seriously. I’ll let him tell the story:

Tony deals with those philosophical questions of imaginative possibility, perhaps most eloquently – in a way perhaps only an engineer can. Let’s get to it.

The first commercial digital delay looked like… this. DDL1745, 1971.

So you’ve already told this amazing story of the Omnipressor. Maybe you can tell us a bit about how the H910 came about?

When I joined Eventide in early 1973, the first model of the Digital Delay Line, the DDL1745, had just started shipping. At that time, there were no digital audio products of any kind in any studio anywhere.

The DDL was a primitive box. It predated memory (no RAM), LEDs (it had incandescent bulbs), and integrated Analog-to-Digital Converters [ADCs]. It offered 200 msec of delay for the price of a new car — US$4,100 in 1973 which is equivalent to ~$22,000 today! The fact is that DDLs were expensive and rare and only installed in a few world-class studios. They were used to replace tape delay.
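[Ed.: In modern software terms, the whole DDL reduces to a circular buffer – samples go in one end and re-emerge a fixed count later, and 200 msec at 48 kHz is a 9,600-sample buffer. A sketch of that software equivalent, not the original shift-register circuit, which predated RAM entirely:]

    class DelayLine:
        """Software equivalent of the DDL's shift registers."""
        def __init__(self, delay_samples):
            self.buf = [0.0] * delay_samples
            self.pos = 0

        def process(self, x):
            y = self.buf[self.pos]  # the sample written delay_samples ago
            self.buf[self.pos] = x
            self.pos = (self.pos + 1) % len(self.buf)
            return y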

At the time, studios were using tape delay for ADT (automatic double tracking) and, in some cases, as a pre-delay to feed plate reverbs. Plate reverbs had replaced ‘echo chambers’ but fell short in that, unlike a real room, a plate reverb’s onset is instantaneous.

I don’t believe that any recording studio had more than one DDL installed because they were so expensive. I was lucky. On the second floor of Eventide’s building was a recording studio – Sound Exchange. I was able to use the studio when it wasn’t booked to record my friends and relatives. And I had access to several DDLs! I remember carrying a few DDLs up to the studio and patching them into the console and having fun (a la Les Paul) with varying delay and using the console’s faders and feedback. By 1974 Richard Factor had designed the 1745M DDL which used RAM and had an option for a simple pitch change module.

At that point, I became convinced that I could create a product that combined delay, feedback, and pitch change that would open up a world of possible effects. I also thought that a keyboard would make it possible to ‘play’ a harmony while singing. In fact, my prototype had a 2-octave keyboard bolted to the top. Playing the keyboard was unorthodox in that center C was unison, C# would shift the voice up a half step, B down a half step, etc.

The H910 – tagline: “F@*ks with the Fabric of Time.” (Cool – kind of like me and deadlines, actually.)

Now you can “f***” (to use the technical term) with the H910 in plug-in form, which turns out to be f***ing fun, actually.

Squint at this outboard gear shot for Michael Jackson’s “Thriller” and you can see the H910 – essential.

I liked in particular the idea of trying things out from an engineering perspective – as you put it, from what you think might sound interesting, rather than guessing in advance what the musical application would be. So, how do you decide something will sound interesting before it exists? How much is trial and error; how much do you envision how things will sound in advance?

Hmmm. First off, it starts with a technical advance. Integrated circuits made digital audio practical and every advance in technology makes new techniques/things possible, and new capabilities ensue.

At the dawn of digital audio, the mission was clear and simple from my perspective. I had studied DSP in grad school and read about the work being done at places like Bell Labs. At the time, the researchers couldn’t experiment with real-time audio, which was a huge limitation.

It was obvious that if you could digitize audio, you could delay it. It was also somewhat obvious that you should be able to play the audio back at a different rate than it was recorded (sampled). The question was, how can you do that without changing duration? In retrospect, splicing is obvious and that’s what I did in the H910. Splicing resulted in glitches, however (I’m pretty sure that we introduced that word into the audio lexicon). So, my next challenge: I needed to come up with a method for splicing without glitches.
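[Ed.: For readers who’ve never met time-domain splicing – read the input at a shifted rate (which changes pitch), chop the result into chunks, and butt-splice the chunks back onto the original timeline (which restores duration). A toy version with the clicks left in, using linear interpolation where the H910 had converters and RAM:]

    import numpy as np

    def splice_pitch_shift(x, ratio, chunk=2048):
        """Pitch-shift by `ratio` without changing duration; the seams click."""
        out = []
        for start in range(0, len(x) - int(chunk * ratio), chunk):
            pos = start + np.arange(chunk) * ratio  # read at the shifted rate
            out.append(np.interp(pos, np.arange(len(x)), x))
        return np.concatenate(out)

    # One semitone up - the interval the H910 keyboard's C# key played:
    # shifted = splice_pitch_shift(x, 2 ** (1 / 12))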

My design of the H949 was the first de-glitched pitch changer. With that project behind me, the next obvious challenge was digitally simulating a room – reverb. At Bell Labs, Manfred Schroeder had done some preliminary work, and I tried implementing his approach, but the results were awful. I came to the conclusion that I needed a programmable array processor to meet this challenge. This was before DSP chips became available. I designed the SP2016 and developed reverb algorithms that are now available as plug-ins and still highly regarded.

The “de-glitched” classic, the H949, also in plug-in form (thanks to Eventide Anthology).

Given that the SP2016 was general purpose, I had some other ideas that seemed obvious. For instance, Band Delays — create a set of band pass filters and delay their outputs differentially. Suzanne Ciani famously used Band Delays on her ground-breaking “Seven Waves” composition.

http://sevwave.com/

I also developed vocoders, timescramble, and gated reverb for the SP2016. The SP2016 had a complete development system that allowed third parties to create their own effects. The effects were stored in EPROMs (Erasable Programmable Read Only Memory) that plugged into sockets. We called them ‘plug-ins’ back in 1982, long before anyone else in the audio community used that phrase.

Did I think that these effects would be musical? Yes! For example, while my goal with reverb was to create a convincing simulation of a real room, I mindfully brought out user controls to allow the algorithm to sound unreal. I was never concerned that an artist would have a ‘failure of imagination.’ I simply strove to create new and flexible tools.

On that same note, I wonder if maybe what made these inventions – and hopefully future inventions – useful to musicians is that they were just some new sound. Do you get the sense that this makes them more useful in different musical applications, more novel? Or maybe you just don’t know in advance?

I think that novel is good in that it broadens the acoustic palette. Music is a uniquely human phenomenon. It conveys emotion in a rich and powerful way. Broadening the palette broadens the impact. We don’t create a single static effect; we create a tool that can be manipulated. Our recent breakthrough with Physion is a wonderful example. We’re now able to surgically separate the tonal and transient components of a sound – what the artist does with those pieces of the puzzle is up to them.

It’s funny in that a sound is a sound. Its tonal and transient components are simply how we perceive the sound. I find it amazing that our team has developed software that perceives these components of sound the way that we humans do and has figured out how to split sounds accordingly.

We’re really fortunate to have all these reissues. Your Grammy nomination referred mainly to seminal, big-selling records. Do you think there’s special significance to that – or have you found interest in more experimental applications? What about your users, are they largely looking to recreate those things, or to find new applications – or is it a balance of those two things?

Well, the H910 was used not only because it did something new but because it had a particular sound. In the same sense that artists prefer different mics or EQs or amps, a device like the H910 has a certain characteristic. The digital portion of the H910 was simple – most of the audio path was analog, and the analog portion was tuned to sound good to me! Recreating the analog subtleties (and not-so-subtleties) was quite the challenge, but I think we nailed it. The Omnipressor is another case in point. That product deserves a lot more respect and attention than it gets, and the plugin emulation is excellent. On the other hand, our emulation of the Instant Phaser isn’t even close. That’s why we don’t offer it as a standalone plugin. In fact, we’re working on a much improved version of it and are getting pretty darn close. Stay tuned…

On the third hand, our Stereo Room emulation of the original reverb of the SP2016 is very close, but even so, we’re not satisfied so we’re busily measuring it in fine detail with the hope of improving it. In fact, there are a couple of other SP2016 reverbs that were popular and we’ve taken a look at emulating those.

The Stereo Room plug-in recreates the Eventide SP2016 reverb. And while it’s really good, Tony says they’re still thinking how to make it better – ah, obsessive engineers, we love you.

And, yes, while there’s a balance between old and new, our goal is always to take the next step. The algorithms in our stompboxes and plugins are mostly new and in a few cases ground-breaking. Crushstation, PitchFuzz and Sculpt represent advances in simulating the non-linearities of analog distortion.

[Ed.: This is a topic I’ve heard repeated many, many times by DSP engineers. If you’re curious why software sounds better, and why it now can pass for outboard gear whereas in the past it very much couldn’t, the ability to recreate analog distortion is a big key. And it turns out our ears seem to like those kind of non-linearities, with or without a historical context.]

What’s the relationship you have with engineers and artists? What kind of feedback do you get from them – and does it change your products at all? (Any specific examples in terms of products we’d know?)

We have a good relationship with artists. They give us ideas for new products and, more often, help us create better UIs by explaining how they would like to work.

One specific example is our work with Tony Visconti. I am honored that he was open to working with us to create a plug-in, Tverb, that emulated his three-mic recording technique used on Bowie’s “Heroes.” Tony was generous with his time and brilliant in suggesting enhancements that weren’t possible in the real world. The industry response to Tverb has been incredibly gratifying – there is nothing else like it.

https://www.eventideaudio.com/products/plugins/visconti-reverb/tverb

Eventide’s Tverb plug-in, which allows you, impossibly, to say “I wish I had Tony Visconti’s entire recording studio rig from ‘Heroes’ on this channel in my DAW.” And it does still more from there. Visconti himself was a collaborator.

We are currently exploring new ways to use our structural effects method and having discussions with engineers and artists. We also have a few secret projects.

How would you relate what something like the H9 or the H9000 [Eventide’s new digital effects platforms] is to the early history like the H910 and Omnipressor? What does that heritage mean – and what do you do to move it forward? Where do recreations fit in with the newer ideas?

The consistent thread over all these years is ‘the next step.’ As technology advances, as processing power increases, new techniques and new approaches become possible. The H9000 is capable of thousands of times the sheer processing power of the H910, plus it is our first network-attached processor. Its ability to sit on an audio network and handle 32 channels of audio opens up possibilities for surround processing.

Ed.: I tried out the H9000 in a technical demo at AES in Berlin last year. It’s astonishingly powerful – and also represents the first Eventide gear to make use of the ARM platform instead of DSPs (or native software running on Intel, etc.).

One major difference, obviously, is that you now have so many plug-in users – even so many more hardware users than before. What does it mean for Eventide to have a global culture where there are so many producers? Is that expanding the kind of musical applications?

As I said earlier, there is no fear of failure of imagination of our species. Art and music define us, enrich us. The more the merrier.

What was your experience of the Grammies – obviously, nice to have this recognition; did anything come out of it personally or in terms of how this made people reflect on Eventide’s history and present?

The ‘lifetime achievement’ aspect of the Grammy award is confirmation that I’m old.

Ha, well you just have to achieve more after, and you’re fine! Thanks, Tony – as far as I’m concerned, your stuff always makes me feel like a kid.

Eventide’s Richard Factor and Tony Agnello Join Queen, Tina Turner, Neil Diamond, Bill Graham and Others Named as Grammy Honorees [Eventide Press Release]

Check out Eventide’s stuff at their site:

https://www.eventideaudio.com/

Including the Anthology bundle:

https://www.eventideaudio.com/products/plugins/bundle/anthology-xi

Also, because I know that bundle is out of reach of beginning producers or musicians on a budget, it’s worth checking out Gobbler’s subscription plans. That gives you all the essentials here, including my personal must-haves – the H3000 band delays, Omnipressor, Blackhole reverb, and the H910 – plus, well, a lot of other great ones, too:

https://www.gobbler.com/subscription-plan/eventide-ensemble-bundle/

This is both cheaper and way more fun than many of the Adobe subscription bundles. Just sayin’.

The post The story of the Eventide gear that transformed music, coined “plug-ins” appeared first on CDM Create Digital Music.

The story of the Eventide gear that transformed music, coined “plug-ins”

Delivered... Peter Kirn | Scene | Thu 26 Apr 2018 3:57 pm

From the extraordinary first digital breakthroughs of the 70s, when lightbulbs stood in for LEDs, to what may have been the first use of the word “plug-in,” we the inventors of Eventide’s classics – who now have a Grammy nod of their own.

Rock and pop have their heroes, their great records. But when you’ve got an engineering hero, their work finds realization behind the scenes in all that music, in hit music and obscure music. And then it can find its way into your work, too.

These inventions have already indirectly won plenty of Grammy Awards, if you care about that sort of thing. But at the beginning of this year, the pioneers at Eventide got a Lifetime Achievement Award, putting their technical achievements alongside the musical contributions of Tina Turner, Emmylou Harris, and Queen, among others.

Why are these engineers smiling? Because they got a Grammy for their inventions. Tony Agnello (left) and Richard Factor (right) at the headquarters.

Electrical engineers and inventors are rarely household names. But you’ve heard the creations of Richard Factor and Tony Agnello, who remain at Eventide today (as do those inventions, in various hardware and software recreations, including for the Universal Audio platform). For instance, David Bowie’s “Low,” Kraftwerk’s “Computer World” and AC/DC’s “Back In Black” all use their H910 harmonizer, the gear called out specifically by the Grammy organization. And that’s before even getting into Eventide’s harmonizers, delays, the Omnipressor, and many others.

1974 radio advertising:

Here’s the thing – whether or not you care about sounding like a classic record or lived through all of the 1970s (that’s, uh, “not so much” for me on both of those, sorry), the story of how this gear was made is totally fascinating. You’d expect an electrical engineering tale to be dry as dust, but – this is frontier adventure stuff, like, if you’re a total nerd.

Here’s the story of the DDL 1745 from 1971, back when engineers had to “rewind the f***ing tape machines” just to hear a delay.

Eventide founder Richard Factor started experimenting with digital delays while working a day job in the defense industry, at the height of the Vietnam War, working with shift registers that work in bits.

Their advice from the 70s still holds. What do you do with a delay? “Put stuff in it!” Do you need to know what the knobs are doing? No! (Sorry, I may have just spoiled potentially thousands of dollars in audio training. My apologies to the sound schools of the world.)

Susan Rogers of Prince fame (who we’ve been talking about lately) also talks about how she “had to have” her Eventide harmonizer and delays. I now have come to feel that way about my plug-in folder, and their software recreations, just because then you have the ability to dial up unexpected possibilities.

Or, there’s the Omnipressor, the classic early 70s gear that introduced the very concept of the dynamics processor. Here, inventor Richard Factor explains how its creation grew out of the Richard Nixon tapes. No – seriously. I’ll let him tell the story:

Tony deals with those philosophical questions of imaginative possibility, perhaps most eloquently – in a way perhaps only an engineer can. Let’s get to it.

The first commercial digital delay looked like… this. DDL1745, 1971.

So you’ve already told this amazing story of the Omnipressor. Maybe you can tell us a bit about how the H910 came about?

When I joined Eventide in early 1973, the first model of the Digital Delay Line, the DDL1745, had just started shipping. At that time, there were no digital audio products of any kind in any studio anywhere.

The DDL was a primitive box. It predated memory (no RAM), LEDs (it had incandescent bulbs), and integrated Analog-to-Digital Converters [ADCs]. It offered 200 msec of delay for the price of a new car — US$4,100 in 1973 which is equivalent to ~$22,000 today! The fact is that DDLs were expensive and rare and only installed in a few world-class studios. They were used to replace tape delay.

At the time, studios were using tape delay for ADT (automatic double tracking) and, in some cases, as a pre-delay to feed plate reverbs. Plate reverbs had replaced ‘echo chambers’ but fell short in that, unlike a real room, a plate reverb’s onset is instantaneous.

I don’t believe that any recording studio had more than one DDL installed because they were so expensive. I was lucky. On the second floor of Eventide’s building was a recording studio – Sound Exchange. I was able to use the studio when it wasn’t booked to record my friends and relatives. And I had access to several DDLs! I remember carrying a few DDLs up to the studio and patching them into the console and having fun (a la Les Paul) with varying delay and using the console’s faders and feedback. By 1974 Richard Factor had designed the 1745M DDL which used RAM and had an option for a simple pitch change module.

At that point, I became convinced that I could create a product that combined delay, feedback, and pitch change that would open up a world of possible effects. I also thought that a keyboard would make it possible to ‘play’ a harmony while singing. In fact, my prototype had a 2-octave keyboard bolted to the top. Playing the keyboard was unorthodox in that center C was unison, C# would shift the voice up a half step, B down a half step, etc.

The H910 – tagline: F@*ks with the Fabric of Time”. (Cool – kind of like me and deadlines, actually.)

Now you can “f***” (to use the technical term) with the H910 in plug-in form, which turns out to be f***ing fun, actually.

Squint at this outboard gear shot for Michael Jackson’s “Thriller” and you can see the H910 – essential.

I liked in particular the idea of trying things out from an engineering perspective – as you put it, from what you think might sound interesting, rather than guessing in advance what the musical application would be. So, how do you decide something will sound interesting before it exists? How much is trial and error; how much do you envision how things will sound in advance?

Hmmm. First off, it starts with a technical advance. Integrated circuits made digital audio practical and every advance in technology makes new techniques/things possible, and new capabilities ensue.

At the dawn of digital audio, the mission was clear and simple from my perspective. I had studied DSP in grad school and read about the work being done at places like Bell Labs. At the time, the researchers couldn’t experiment with real-time audio, which was a huge limitation.

It was obvious that if you could digitize audio, you could delay it. It was also somewhat obvious that you should be able to play the audio back at a different rate than it was recorded (sampled). The question was, how can you do that without changing duration? In retrospect, splicing is obvious and that’s what I did in the H910. Splicing resulted in glitches, however (I’m pretty sure that we introduced that word into the audio lexicon). So, my next challenge: I needed to come up with a method for splicing without glitches.

My design of the H949 was the first de-glitched pitch changer. With that project behind me, the next obvious challenge was digitally simulating a room – reverb. At Bell Labs, Manfred Schroeder had done some preliminary work, and I tried implementing his approach, but the results were awful. I came to the conclusion that I needed a programmable array processor to meet this challenge. This was before DSP chips became available. I designed the SP2016 and developed reverb algorithms that are now available as plug-ins and still highly regarded.

The “de-glitched” classic, the H949, also in plug-in form (thanks to Eventide Anthology).

Given that the SP2016 was general purpose, I had some other ideas that seemed obvious. For instance, Band Delays — create a set of band pass filters and delay their outputs differentially. Suzanne Ciani famously used Band Delays on her ground-breaking “Seven Waves” composition.

http://sevwave.com/

I also developed vocoders, timescramble, and gated reverb for the SP2016. The SP2016 had a complete development system that allowed third parties to create their own effects. The effects were stored in EPROMs (Erasable Programmable Read Only Memory) that plugged into sockets. We called them ‘plug-ins’ back in 1982 long before anyone else in the audio community used that phase.

Did I think that these effects would be musical? Yes! For example, while my goal with reverb was to create a convincing simulation of a real room, I mindfully brought out user controls to allow the algorithm to sound unreal. I was never concerned that an artist would have a ‘failure of imagination.’ I simply strove to create new and flexible tools.

On that same note, I wonder if maybe what made this inventions – and hopefully future inventions – useful to musicians is that they were just some new sound. Do you get the sense that this makes them more useful in different musical applications, more novel? Or maybe you just don’t know in advance?

I think that novel is good in that it broadens the acoustic pallet. Music is a uniquely human phenomenon. It conveys emotion in a rich and powerful way. Broadening the pallet broadens the impact. We don’t create a single static effect; we create a tool that can be manipulated. Our recent breakthrough with Physion is a wonderful example. We’re now able to surgically separate the tonal and transient components of a sound – what the artist does what does pieces of the puzzle is up to them.

It’s funny in that a sound is a sound. It’s tonal and transient components are simply have we perceive the sound. I find it amazing that our team has developed software that perceives these components of sound the way that we humans do and have figured out how to split sounds accordingly.

We’re really fortunate to have all these reissues. Your Grammy nomination referred mainly seminal, big-selling records. Do you think there’s special significance to that – or have you found interest in more experimental applications? What about your users, are they largely looking to recreate those things, or to find new applications – or is it a balance of those two things?

Well the H910 was used not only because it did something new but because it had a particular sound. In the same sense that artists prefer different mics or EQs or amps, a device like the H910 has a certain characteristic. The digital portion of the H910 was simple – most of the audio path was analog and the analog portion was tuned to sound good to me! Recreating the analog subtleties and (not so subtleties) was quite the challenge but I think nailed it. The Omnipressor is another case in point. That product deserves a lot more respect and attention than it gets and the plugin emulation is excellent. On the other hand, our emulation of the Instant Phaser isn’t even close. That’s why we don’t offer it as a standalone plugin. In fact, we’re working on a much improved version of it and are getting pretty darn close. Stay tuned…

On the third hand, our Stereo Room emulation of the original reverb of the SP2016 is very close, but even so, we’re not satisfied so we’re busily measuring it in fine detail with the hope of improving it. In fact, there are a couple of other SP2016 reverbs that were popular and we’ve taken a look at emulating those.

The Stereo Room plug-in recreates the Eventide SP2016 reverb. And while it’s really good, Tony says they’re still thinking how to make it better – ah, obsessive engineers, we love you.

And, yes while there’s a balance between old and new, our goal is always to take the next step. The algorithms in our stompboxes and plugins are mostly new and in a few cases ground-breaking. Crushstation, PitchFuzz and Sculpt represent advances in simulating the non-linearities of analog distortion.

[Ed.: This is a topic I’ve heard repeated many, many times by DSP engineers. If you’re curious why software sounds better, and why it now can pass for outboard gear whereas in the past it very much couldn’t, the ability to recreate analog distortion is a big key. And it turns out our ears seem to like those kind of non-linearities, with or without a historical context.]

What’s the relationship you have with engineers and artists? What kind of feedback do you get from them – and does it change your products at all? (Any specific examples in terms of products we’d know?)

We have a good relationship with artists. They give us ideas for new products and, more often, help us create better UIs by explaining how they would like to work.

One specific example that is our work with Tony Visconti. I am honored that he was open to working with us to create a plug-in, Tverb, that emulated his 3 mic recording technique used on Bowie’s “Heroes.” Tony was generous with his time and brilliant in suggesting enhancements that weren’t possible in the real world. The industry response to Tverb has been incredibly gratifying – there is nothing else like it.

https://www.eventideaudio.com/products/plugins/visconti-reverb/tverb

Eventide’s Tverb plug-in, which allows you, impossibly, to say “I wish I had Tony Visconti’s entire recording studio rig from “Heroes” on this channel in my DAW.” And it does still more from there. Visconti himself was a collaborator.

We are currently exploring new ways to use our structural effects method and having discussions with engineers and artists. We also have a few secret projects.

How would you relate what something like the H9 or the H9000 [Eventide’s new digital effects platforms] is to the early history like the H910 and Omnipressor? What does that heritage mean – and what do you do to move it forward? Where do recreations fit in with the newer ideas?

The consistent thread over all these years is ‘the next step.’ As technology advances, as processing power increases, new techniques and new approaches become possible. The H9000 is capable of thousands of times the sheer processing power of the H910, plus it is our first network-attached processor. Its ability to sit on an audio network and handle 32 channels of audio opens up possibilities for surround processing.

Ed.: I tried out the H9000 in a technical demo at AES in Berlin last year. It’s astonishingly powerful – and also represents the first Eventide gear to make use of the ARM platform instead of DSPs (or native software running on Intel, etc.).

One major difference, obviously, is that you now have so many plug-in users – even so many more hardware users than before. What does it mean for Eventide to have a global culture where there are so many producers? Is that expanding the kind of musical applications?

As I said earlier, there is no fear of failure of imagination of our species. Art and music define us, enrich us. The more the merrier.

What was your experience of the Grammies – obviously, nice to have this recognition; did anything come out of it personally or in terms of how this made people reflect on Eventide’s history and present?

The ‘lifetime achievement’ aspect of the Grammy award is confirmation that I’m old.

Ha, well, you just have to achieve more afterwards, and you’re fine! Thanks, Tony – as far as I’m concerned, your stuff always makes me feel like a kid.

Eventide’s Richard Factor and Tony Agnello Join Queen, Tina Turner, Neil Diamond, Bill Graham and Others Named as Grammy Honorees [Eventide Press Release]

Check out Eventide’s stuff at their site:

https://www.eventideaudio.com/

Including the Anthology bundle:

https://www.eventideaudio.com/products/plugins/bundle/anthology-xi

Also, because I know that bundle is out of reach for beginning producers or musicians on a budget, it’s worth checking out Gobbler’s subscription plans. That gives you all the essentials, including my personal must-haves – the H3000 band delays, Omnipressor, Blackhole reverb, and the H910 – plus a lot of other great ones, too:

https://www.gobbler.com/subscription-plan/eventide-ensemble-bundle/

This is both cheaper and way more fun than many of the Adobe subscription bundles. Just sayin’.

The post The story of the Eventide gear that transformed music, coined “plug-ins” appeared first on CDM Create Digital Music.

Free new tools for Live 10 unlock 3D spatial audio, VR, AR

Delivered... Peter Kirn | Scene | Wed 25 Apr 2018 7:06 pm

Envelop began life by opening a space for exploring 3D sound, directed by Christopher Willits. But today, the nonprofit is also releasing a set of free spatial sound tools you can use in Ableton Live 10 – and we’ve got an exclusive first look.

First, let’s back up. Listening to sound in three dimensions is not just some high-tech gimmick. It’s how you hear naturally with two ears. The way that actually works is complex – the Wikipedia overview alone is dense – but close your eyes, tilt your head a little, and listen to what’s around you. Space is everything.

And just as in the leap from mono to stereo, space can change a musical mix – it allows clarity and composition of sonic elements in a new way, which can transform its impact. So it really feels like the time is right to add three dimensions to the experience of music and sound, personally and in performance.

Intuitively, 3D sound seems even more natural than its visual counterparts. You don’t need to strap weird new gear to your head, or accept disorienting inputs, or rely on something like 19th century stereoscopic illusions. Sound is already as ephemeral as air (quite literally), and so, too, is 3D sound.

So, what’s holding us back?

Well, stereo sound required a chain of gear, from delivery to speaker. But those delivery mechanisms are fast evolving for 3D, and not just in terms of proprietary cinema setups.

But stereo audio also required something else to take off: mixers with pan pots. Stereo effects. (Okay, some musicians still don’t know how to use this and leave everything dead center, but that only proves my point.) Stereo only happened because tools made its use accessible to musicians.

Looking at something like Envelop’s new tools for Ableton Live 10, you see something like the equivalent of those first pan pots. Add some free devices to Live, and you can improvise with space, hear the results through headphones, and scale up to as many speakers as you want, or deliver to a growing, standardized set of virtual reality / 3D / game / immersive environments.

And that could open the floodgates for mixing music in 3D. (Maybe it could even open your own floodgates there.)

Envelop tools for Live 10

Today, Envelop for Live (E4L) has hit GitHub. It’s not a completely free set of tools – you need the full version of Ableton Live Suite, version 10 minimum (since that’s what provides the requisite multi-point audio plumbing). Provided you’re working from that as a base, though, musicians get a set of Max for Live-powered devices for spatial audio production and live performance, and developers get a set of tools for creating their own effects.

Start here for the download, installation instructions, and overview:

https://github.com/EnvelopSound/EnvelopForLive/

Read an overview of the system, and some basic explanations of how it works (including some definitions of 3D sound terminology):

https://github.com/EnvelopSound/EnvelopForLive/wiki/System-Overview

And then find a getting started guide, routing, devices, and other reference materials on the wiki:

https://github.com/EnvelopSound/EnvelopForLive/wiki

Here’s the basic idea of how the whole package works, though.

Output. There’s a Master Bus device that stands in for your output buses. It decodes your spatial audio, and adapts routing to however many speakers you’ve got connected – whether that’s just your headphones or four speakers or a huge speaker array. (That’s the advantage of having a scalable system – more on that in a moment.)

Sources. Live 10’s Mixer may be built largely with the idea of mixing tracks down to stereo, but you probably already think of it sort of as a set of particular musical materials – as sources. The Source Panner device, added to each track, lets you position that particular musical/sonic entity in three-dimensional space.

Processors. Any good 3D system needs not only 3D positioning, but also separate effects and tools – because normal delays, reverbs, and the like presume left/right or mid/side stereo output. (Part of what completes the immersive effect is hearing not only the positioning of the source, but reflections around it.)

In this package, you get:
  • Spinner: automates motion in 3D space horizontally and with vertical oscillations
  • B-Format Sampler: plays back existing Ambisonics wave files (think samples with spatial information already encoded in them)
  • B-Format Convolution Reverb: imagine a convolution reverb that works with three-dimensional information, not just two-dimensional – in other words, exactly what you’d want from a convolution reverb
  • Multi-Delay: cascading, three-dimensional delays out of a mono source
  • HOA Transform: without explaining Ambisonics, this basically molds and shapes the spatial sound field in real time
  • Meter: spatial metering. Cool.

Spinner, for automating movement.

Spatial multi-delay.

Convolution reverb, Ambisonics style.

Envelop SF and Envelop Satellite venues also have some LED effects, so you’ll find some devices for controlling those (which might also be useful templates for stuff you’re doing).

All of this spatial information is represented via a technique called Ambisonics. Basically, any spatial system – even stereo – involves applying some maths to determine relative amplitude and timing of a signal to create particular impressions of space and depth. What sets Ambisonics apart is, it represents the spatial field – the sphere of sound positions around the listener – separately from the individual speakers. So you can imagine your sound positions existing in some perfect virtual space, then being translated back to however many speakers are available.
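As a rough illustration of that separation – a sketch in Python, mine and not Envelop’s actual code – first-order ambisonic encoding turns a mono source into four channels (W, X, Y, Z) using only the source’s direction, with no reference to any speaker:

    import numpy as np

    def encode_foa(mono, azimuth, elevation):
        # First-order B-format encode (traditional/FuMa-style weights,
        # one common convention among several). Angles are in radians.
        w = mono / np.sqrt(2.0)                         # omni component
        x = mono * np.cos(azimuth) * np.cos(elevation)  # front/back
        y = mono * np.sin(azimuth) * np.cos(elevation)  # left/right
        z = mono * np.sin(elevation)                    # up/down
        return np.stack([w, x, y, z])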

This scalability really matters. Just want to check things out with headphones? Set your master device to “binaural,” and you’ll get a decent approximation through your headphones. Or set up four speakers in your studio, or eight. Or plug into a big array of speakers at a planetarium or a cinema. You just have to route the outputs, and the software decoding adapts.
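Decoding, continuing that sketch, is just as layout-agnostic: a naive projection decoder simply samples the encoded field in each speaker’s direction, which is why the same four channels can feed a quad rig or a large array (real decoders – binaural ones especially – are considerably more refined):

    def decode_foa(bformat, speaker_azimuths):
        # Naive projection decode for a horizontal ring of speakers.
        w, x, y, z = bformat
        return np.stack([0.5 * (np.sqrt(2.0) * w + np.cos(az) * x + np.sin(az) * y)
                         for az in speaker_azimuths])

    # Quad setup: speakers at 45, 135, 225, 315 degrees.
    sig = np.sin(2 * np.pi * 220 * np.arange(48000) / 48000)
    quad = decode_foa(encode_foa(sig, azimuth=0.3, elevation=0.0),
                      np.radians([45, 135, 225, 315]))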

Envelop is by no means the first set of tools to help you do this – the technique dates back to the 70s, and various software implementations have evolved over the years, many of them free – but it is uniquely easy to use inside Ableton Live.

Open source, standards

Free software. It’s significant that Envelop’s tools are available as free and open source. Max/MSP, Max for Live, and Ableton Live are proprietary tools, but the patches and externals exist independently, and a free license means you’re free to learn from or modify the code and patches. Plus, because they’re free in cost, you can share your projects across machines and users, provided everybody’s on Live 10 Suite.

Advanced Max/MSP users will probably already be familiar with the basic tools on which the Envelop team have built. They’re the work of the Institute for Computer Music and Sound Technology (ICST) at the Zürcher Hochschule der Künste in Zurich, Switzerland. ICST have produced a set of open source externals for Max/MSP:

https://www.zhdk.ch/downloads-ambisonics-externals-for-maxmsp-5381

Their site is a wealth of research and other free tools, many of them additionally applicable to fully free and open source environments like Pure Data and Csound.

But Live has always been uniquely accessible for trying out ideas. Building a set of friendly Live devices takes these tools and makes them fit naturally into the Live paradigm.

Non-proprietary standards. There’s a strong push toward proprietary techniques in cinema spatial audio – Dolby, for instance, we’re looking at you. But while proprietary technology and licensing may make sense for big cinema distributors, it’s absolute death for musicians, who likely want to tour with their work from place to place.

The underlying techniques here are all fully open and standardized. Ambisonics work with a whole lot of different 3D use cases, from personal VR to big live performances. By definition, they don’t define the sound space in a way that’s particular to any specific set of speakers, so they’re mobile by design.

The larger open ecosystem. Envelop will make these tools new to people who haven’t seen them before, but it’s also important that they share an approach, a basis in research, and technological compatibility with other tools.

That includes the German ZKM’s Zirkonium system, HoaLibrary (that repository is deprecated but links to a bunch of implementations for Pd, Csound, OpenFrameworks, and so on), and IRCAM’s SPAT. All these systems support ambisonics – some support other systems, too – and some or all components include free and open licensing.

I bring that up because I think Envelop is stronger for being part of that ecosystem. None of these systems requires a proprietary speaker delivery system – though they’ll work with those cinema setups, too, if called upon to do so. Musical techniques, and even some encoded spatial data, can transfer between systems.

That is, if you’re learning spatial sound as a kind of instrument, here you don’t have to learn each new corporate-controlled system as if it’s a new instrument, or remake your music to move from one setting to another.

Envelop, the physical version

You do need compelling venues to make spatial sound’s payoff apparent – and Envelop are building their own venues for musicians. Their Envelop SF venue is a permanent space in San Francisco, dedicated to spatial listening and research. Envelop Satellite is a mobile counterpart to that, which can tour festivals and so on.

Envelop SF: 32 speakers in all, including overheads – 24 set in 3 rings of 8 (the speakers in the columns), plus 4 ceiling speakers and 4 subs. (28.4)

Envelop Satellite: 28 speakers – 24 in 3 rings plus 4 subs, with overhead speakers coming soon. (24.4)

The competition, as far as venues go: 4DSOUND and Berlin’s Monom, which houses a 4DSOUND system, are similar in function, but use their own proprietary tools paired with the system. They’ve said they plan a mobile system, but there’s no word on when it will be available. The Berlin Institute for Sound and Music’s Hexadome uses off-the-shelf ZKM and IRCAM tools and pairs them with projection surfaces. It’s a mobile system by design, but there’s nothing particularly unique about its sound array or toolset. In fact, you could certainly use Envelop’s tools with any of these venues, and I suspect some musicians will.

There are also many multi-speaker arrays housed in music venues, immersive audiovisual venues, planetariums, cinemas, and so on. So long as you can get access to multichannel interfacing with those systems, you could use Envelop for Live with all of these. The only obstacle, really, is whether these venues embrace immersive, 3D programming and live performance.

But if you thought you had to be Brian Eno to get to play with this stuff, that’s not likely to be the situation for long.

VR, AR, and beyond

In addition to venues, there’s also a growing ecosystem of products for production and delivery, one that spans musical venues and personal immersive media.

To put that more simply: after well over a century of recording devices and production products assuming mono or stereo, now they’re also accommodating the three dimensions your two ears and brain have always been able to perceive. And you’ll be able to enjoy the results whether you’re on your couch with a headset on, or whether you prefer to go out to a live venue.

Ambisonics-powered products now include Facebook 360, Google VR, Waves, GoPro, and others, with more on the way, for virtual and augmented reality. So you can use Live 10 and Envelop for Live as a production tool for making music and sound design for those environments.

Steinberg are adopting ambisonics, too (via Nuendo). Here’s Waves’ guide – they now make plug-ins that support the format, and this is perhaps easier to follow than the Wikipedia article (and relevant to Envelop for Live, too):

https://www.waves.com/ambisonics-explained-guide-for-sound-engineers

Ableton Live with Max for Live has served as an effective prototyping environment for audio plug-ins, too. So developers could pick up Envelop for Live’s components, try out an idea, and later turn that into other software or hardware.

I’m personally excited about these tools and the direction of live venues and new art experiences – well beyond what’s just in commercial VR and gaming. And I’ve worked enough on spatial audio systems to at least say, there’s real potential. I wouldn’t want to keep stereo panning to myself, so it’s great to get to share this with you, too. Let us know what you’d like to see in terms of coverage, tutorial or otherwise, and if there’s more you want to know from the Envelop team.

Thanks to Christopher Willits for his help on this.

More to follow…

http://envelop.us

https://github.com/EnvelopSound/EnvelopForLive/

Further reading

Inside a new immersive AV system, as Brian Eno premieres it in Berlin [Extensive coverage of the Hexadome system and how it works]

Here’s a report from the hacklab on 4DSOUND I co-hosted during Amsterdam Dance Event in 2014 – relevant to these other contexts, having open tools and more experimentation will expand our understanding of what’s possible, what works, and what doesn’t work:

Spatial Sound, in Play: Watch What Hackers Did in One Weekend with 4DSOUND

And some history and reflection on the significance of that system:
Spatial Audio, Explained: How the 4DSOUND System Could Change How You Hear [Videos]

Plus, for fun, here’s Robert Lippok [Raster] and me playing live on that system and exploring architecture in sound, as captured in a binaural recording by Frank Bretschneider [also Raster] during our performance for 2014 ADE. Binaural recording of spatial systems is really challenging, but I found it interesting in that it created its own sort of sonic entity. Frank’s work was also just featured on the Hexadome.

One thing we couldn’t easily do was move that performance to other systems. Now, that is beginning to change.

The post Free new tools for Live 10 unlock 3D spatial audio, VR, AR appeared first on CDM Create Digital Music.

The 90s are alive, with a free, modern clone of FastTracker II

Delivered... Peter Kirn | Scene | Tue 24 Apr 2018 6:44 pm

It ran natively in MS-DOS, then died by the end of the 90s. But now it’s back: one of the greatest chip music trackers of all time has been cloned to run on modern machines.

FastTracker II will now run on Windows and Mac (and should run on Linux). The clone project started last year, but it seems to have picked up pace – a new set of binaries is out this week, and MIDI input support was added this month.

FastTracker II is a singular piece of software that helped define trackers, demoscene, and the music produced with it. If you’ve used it, I don’t really have to say more. If you haven’t, but you’ve used other trackers – even up to modern takes on the genre like Renoise – you’ve used software influenced by its design.

Like all trackers, the fundamental use of the tool is as a sequencer. But unlike other sequencer concepts – piano rolls, which represent time visually as pianolas and music boxes do; multitrack recorders and DAWs modeled on mixers and tape; or notation views – the tracker is a natively computer-oriented tool. Its paradigm is simply a vertical grid, with quick entry of notes and values (represented as characters and numerals) via the computer keyboard.

That makes trackers uncommonly quick to work in. In the case of FastTracker II, you program every note and timbral change via mouse or keyboard shortcut, and it’s all represented compactly in characters onscreen. FT2’s doubling up of mouse and keyboard shortcuts also makes it quick to learn, and quicker still to use once you’ve mastered it.
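If you’ve never touched a tracker, here’s a tiny sketch of the paradigm (a hypothetical structure in Python, not FT2’s actual file format): a pattern is a grid of rows, each row holding one cell per channel – note, instrument, effect – and playback simply steps down the rows at a tempo-derived interval.

    from dataclasses import dataclass
    import time

    @dataclass
    class Cell:
        note: str = ""        # e.g. "C-4"; empty means "no new note"
        instrument: int = 0
        effect: str = ""      # e.g. "F06" sets speed in FT2

    # Four rows, two channels: kick on the beat, offbeat hat.
    pattern = [
        [Cell("C-2", 1), Cell()],
        [Cell(),         Cell("A-5", 3)],
        [Cell("C-2", 1), Cell()],
        [Cell(),         Cell("A-5", 3)],
    ]

    def play(pattern, bpm=125, rows_per_beat=4):
        for row in pattern:
            for ch, cell in enumerate(row):
                if cell.note:
                    print(f"ch{ch}: {cell.note} (inst {cell.instrument})")
            time.sleep(60.0 / bpm / rows_per_beat)

Everything – pitch, timbre, timing – lives in that one compact grid, which is exactly what makes entry so fast once the shortcuts are in your fingers.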

In fact, firing up this build (in 64-bit on Windows 10, no less), I’m struck by how friendly and immediate it is. It’s not a bad introduction to the genre.

MIDI in is great, too, though MIDI out will “never” happen (per a message from the developer on the 13th of April).

But it’s kind of amazing this thing even exists. The clone is built with SDL, a cross-platform media library, and is the work of one Olav “8bitbubsy” Sørensen, who apparently got permission to do this. It was never supposed to happen at all. Heck, the original was even buried with this note:

“FT2 has been put on hold indefinitely. […] If this was an ideal world, where there was infinite time and no need to make a living, there would definitely be a multiplatform Fasttracker3. Unfortunately this world is nothing like that.”

So, we may not live in an ideal world. But we live in a world where FT2 again runs on our machines. (Amiga fans, there’s also a ProTracker clone.)

Download it:

https://16-bits.org/ft2.php

Thanks to Nicolas Bougaïeff for this one, fresh off his Berghain debut. I want some new chip music from you, man.

And it’s … like the 90s are alive.

The post The 90s are alive, with a free, modern clone of FastTracker II appeared first on CDM Create Digital Music.

Mod Max: One free download fixes Live 10’s new kick

Delivered... Peter Kirn | Scene | Tue 24 Apr 2018 4:12 pm

Ableton Live 10 has some great new drum synth devices, as part of Max for Live. But that kick could be better. Max modifications, to the rescue!

The Max for Live kick sounds great – especially if you combine it with a Drum Buss or even some distortion via the Pedal, also both new in Live 10. But it makes some peculiar decisions. The biggest problem is, it ignores the pitch of incoming MIDI.

Green Kick fixes that, mapping incoming MIDI notes to the Kick’s Pitch parameter, so you can tap different pads or keyboard keys to pitch the kick where you want it. (You can still trigger a C0 by pressing the Kick button in the interface.)
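The mapping itself is just the standard equal-temperament conversion – here it is in a few lines of Python (my illustration; the device itself does this inside Max):

    def midi_note_to_hz(note: int) -> float:
        # Equal temperament, A4 = MIDI note 69 = 440 Hz.
        return 440.0 * 2 ** ((note - 69) / 12)

    # C1 (MIDI note 24) lands around 32.7 Hz - classic kick territory.
    print(round(midi_note_to_hz(24), 1))  # 32.7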

Also: “It seemed strange to have Attack as a numbox and the Decay as a dial.”

Yes, that does seem strange. So you also get knobs for both Attack and Decay, which makes more sense.

Now, all of this is possible thanks to the fact that this is a Max for Live device, not a closed-box internal device. While it’s a pain to have to pony up for the full cost of Live Suite to get Max for Live, the upside is, everything is editable and modifiable. And it’d be great to see that kind of openness in other tools, for reasons just like this.

Likewise, if this green color bothers you, you can edit this mod and … so on.

Go grab it:

http://maxforlive.com/library/device/4680/green-kick

Thanks to Sonic Bloom for this one. They’ve got tons more tips like this, so go check them out:

https://twitter.com/sonicbloomtuts

The post Mod Max: One free download fixes Live 10’s new kick appeared first on CDM Create Digital Music.

Mix Ableton and Maschine, Komplete Kontrol, in new updates

Delivered... Peter Kirn | Scene | Thu 19 Apr 2018 2:29 pm

There’s a big push among software makers to deliver integrated solutions – and that’s great. But if you’re a big user of both, say, MASCHINE MK3 and Ableton Live, here’s some good news.

NI made available two software updates yesterday, for their Maschine groove workstation software and for Komplete Kontrol, their software layer for hosting instruments and effects and interfacing with their keyboards. So, the hardware proposition there is the 4×4 pad grid of the MK3, and the Komplete Kontrol keyboards.

For Maschine users, the ability to use Ableton Live and Maschine seamlessly could make a lot of producers and live performers happy. Now, unlike working with Ableton Push, the setup isn’t entirely seamless, and there’s not total integration of hardware and software. But it’s still a big step forward. For instance, I often find myself starting a project with Maschine, because I’ve got a kit I like (including my own samples), or I’m using some of its internal drum synths or bass synth, or just want to wail on four pads and use its workflow for sampling and groove creation. But then, once I’ve built up some materials, I may shift back to playing with Ableton’s workflow in Session or Arrange view to compose an idea. And I know lots of users work the same way. It makes sense, given the whole idea of Maschine is to have the feeling of a piece of hardware.

So, you’ve got this big square piece of gear plugged in. Then sometimes you’re literally unplugging the USB cable and connecting Push or something else… or it just sits there, useless.

Having these templates means you switch from one tool to the other, without changing workflow. You could already do this with Maschine Jam, which has a bunch of shortcuts for different tasks and a big grid of triggers (which fits Session View). But the appeal of Maschine for a lot of us is those big, expressive pads on the MK3, so this is what we were waiting for.

On the Komplete Kontrol side, there’s a related set of use cases. Whether you’re the sort to just pull up some presets from Komplete, or at the opposite end of the spectrum, you’re using Komplete Kontrol to manipulate custom Reaktor ensembles, it’s nice to have a set of encoders and transport controls at the ready. The MK2 keyboards brought that to the party – so, for instance, now it’s really easy in Apple’s Logic Pro to play some stuff on the keys, then do another take, without, like – ugh – moving over to the table your computer is on, fumbling for the mouse or keyboard shortcut … you get the idea.

And again, a lot of us are using Ableton Live. I love Logic, but there have been times where I find myself comically missing the Session View as a way of storing ideas.

The notion here is, of course, to get you to buy into Native Instruments’ keyboards. But there is an awfully big ecosystem now of third-party instruments (like those from Output, among some of my favorites) that take advantage of compatibility via the NKS format. (NI likes to call that a “standard,” which I think is a bit of a stretch, given that for now there’s no SDK for other hardware and host software makers. But it’s a useful step for now, anyway.)

So, here’s how to get going and what else is new.

Maschine 2.7.4

The big deal with 2.7.4 is new controller workflows (JAM, MK3) and Live integration (MK3). Live users, you’ll want to begin here:

How to Set Up the MASCHINE MK3 Integration for Ableton Live [Native Instruments Support]

There are actually two big improvements here workflow-wise. One is Live support, but the other is easier creation of Loop recordings. With the “Target” parameter, you can drop recordings into:

1. Takes
2. “Sounds” (the Audio plug-in, where you can layer up sounds)
3. Pattern (creates both an Audio plug-in recording and a pattern with the playback)

I think the two together could be a godsend, actually, for composing ideas in a more improvisatory flow. The Target workflow also works on MASCHINE JAM (via different controls).

There’s also footswitch-triggered recording.

So, Native Instruments are finally listening to feedback from people for whom live sampling is at the heart of their music making process. It’s about time, given that Maschine was modeled on hardware samplers.

The Live integration includes just the basics, but important basics – and it might still be useful even with Push and Maschine side-by-side. The MK3 can access the mixer (Volume, Pan, Mute / Solo / Arm states), clip navigation and launching, recording and quantize, undo/redo, automation toggle, tap tempo, and loop tempo.

As always, you also get various other fixes.

Komplete Kontrol 2.0

Again, you’ll start with the (slightly annoying) installation process, and then you’ll get to playing. NI support has a set of instructions for that, plus some useful detailed links on how the integration works (scroll to the bottom – read the whole thing!):

Setting Up Ableton Live for KOMPLETE KONTROL

The other big update here is all about supporting more plug-ins, so your NI keyboard becomes the command center for lots of other instruments and effects you own. NI now boasts hundreds of plug-ins supporting its NKS format, which maps hardware controls to instrument parameters.

Now that includes effects, too. And that’s cool, since sometimes playing is about loading an instrument on the keys, but manipulating the parameters of an effect that processes that instrument. Those plug-ins now show up in the browser if they’ve added support, and they also map to the hardware controls.

Scoff if you like, but I know these keyboards have been big sellers. If nothing else, the lesson here is that making your software sounds and effects accessible with a keyboard for tangible control is something people like.

By the way, NI also quietly pushed out a Kontakt sampler update with a whole bunch of power-user improvements to KSP, their custom language for extending/scripting sound patches. That’s of immediate interest only to Kontakt sound content developers, but you can bet some of those little things will mean more improvements to Kontakt-based content you use, if you’re on NI’s ecosystem.

All three updates are available from NI’s Service Center.

If you’ve found a useful workflow with any of this, if you’ve got any tips or hacks, as always – shout out; we’re curious to hear! (I assume you might even be making some music with all this, so that, too.)

The post Mix Ableton and Maschine, Komplete Kontrol, in new updates appeared first on CDM Create Digital Music.

This hidden gem adds a sub bass to anything, because you want that

Delivered... Peter Kirn | Scene | Mon 16 Apr 2018 10:48 pm

Serendipitous collaboration can be magical. Combine an eccentric high-tech guitar company from Switzerland with some high-powered nerds from the USA, and you get some spectacular ways of adding sub octaves and picking apart and modulating sounds.

From Memphis to Messe: on a hot tip from one of the engineers, I found myself roaming Hall 8.0 at Musikmesse in Frankfurt on Friday. Just this one hall is already cavernous; I passed a portrait of Hilary Hahn in a violin booth, stumbled across two nice women giving away CDs of unsigned Estonian concert music, and strolled past the signature blue of the G. Henle Urtext editions (which my piano teacher called the “Voice of God edition”).

But this is how music instrument design should work. It should be collaborative; it should have unexpected combinations of new and old. I love Berlin’s SuperBooth, but by no means would I ever imagine modular synths to exist at the center of the music world.

And so I found myself in the narrow booth of Paradis Products. They’re a legendary boutique guitar maker from a small Swiss town, producing exotic creations that look like what you’d splurge on if you’d just won a Eurovision contest. But they know their stuff, from electrical engineering to woodworking.

The woodworking side of the equation is who I got on Friday afternoon – so apologies to Heinz, whom I think I terrorized. (I kept repeating the word “Eurorack” to his utter befuddlement; I unfortunately have less to say about mechanical engineering and wood. Matthias Grob is the engineer on the electrical side.)

Paradis make wonderful guitars, but they also make leading guitar technology. The Polybass is an instrument that seems enchanted, as bass notes follow every articulation. And it’s analog technology, which means there’s nothing stopping it from appearing outside guitars.

Side by side comparisons of the original and the new Polybass board – the latter coming soon to a Eurorack near you.

So here’s the plan: take the Polybass and make – hopefully by the end of the year – a Eurorack module. That’s where America’s Delta Sound Labs comes in. They explain to CDM: “Polybass by Paradis is a radical rework of the legendary Polysubbass that provides an audibly clear, sub-octave effect below performed notes.”
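The classic analog route to a clean sub-octave – and a plausible cousin of what’s happening here, since Paradis haven’t published the design – is a flip-flop divider: toggle a square wave at each rising zero crossing of the input, so the output completes one cycle for every two input cycles, i.e. one octave down. A toy sketch in Python:

    import numpy as np

    def sub_octave(x):
        # Flip-flop divider: toggling once per input cycle halves the frequency.
        # A real design tracks, shapes, and filters far more carefully.
        out = np.zeros_like(x)
        state = 1.0
        for i in range(1, len(x)):
            if x[i - 1] < 0.0 <= x[i]:  # rising zero crossing
                state = -state
            out[i] = state
        return out

    sr = 48000
    t = np.arange(sr) / sr
    note = np.sin(2 * np.pi * 196 * t)   # G3
    sub = 0.3 * sub_octave(note)         # square wave an octave below (G2)

Low-pass that divided-down square and mix it under the dry signal, and you get bass that shadows every note you play.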

On the guitar, I could already hear how it sounds – that is to say, incredible. I can’t wait to hear this applied to other things.

And there’s more. The CHOPhilter is a classic attack detection and modulation VST. It’s got a UI that’s ugly as sin, but Paradis, Mathons, and Delta Sound Labs are working together to port it to 64-bit (done) and add a more aesthetically pleasing Delta skin (coming soon).

This is also a very Good Thing: apply amplitude modulation on note attacks, with amplitude and filter modulation effects and envelope controls. It also responds to MIDI input for more live performance options. (A quick play-around revealed some crazy possibilities – look past the UI at those parameters for a sense of what this can do.)

Memphis-based Delta Sound Labs, for their part, have done sound research and technology work across the gaming, film, and music industries. And they do modules. And they’re musicians. Here’s Ricky playing around with their other project – a pitch follower that interfaces both with Ableton Live and, via control voltage, with other gear:

CTRL Module + Helmholtz Pitch Follower – Initial Tests

Stay tuned. We’ll be watching for these finished products.

http://www.paradis-guitars.com/

https://www.deltasoundlabs.com/

The post This hidden gem adds a sub bass to anything, because you want that appeared first on CDM Create Digital Music.

Learn how Tennyson translate between Ableton and percussion on kits

Delivered... Peter Kirn | Artists,Scene | Fri 6 Apr 2018 5:19 pm

One of them likes to solve Rubik’s Cubes, blindfolded, on tour. The other is capable of executing, on live percussion, elaborate drum parts programmed on a computer. Meet Tennyson and learn how they work.

As we saw before, Ableton Loop is a place not just for learning about a particular product for musicians, but gathering together ideas from the electronic music community as a whole. And Ableton have been sharing some of that work in an online minisite, so you get a free front row ticket to some of the event from wherever you are.

Tennyson is a good example of how explorations at Loop can cover playing technology as instrument – and everything that means for musicians. Watch:

Tennyson are a young Canadian brother and sister duo, with a unique musical idiom they tested together in live acoustic-electronic improvisations in jazz cafes. Complicated, angular rhythms flow effortlessly and gently, the line between kit and machine blurring. For Loop, they’re interviewed by Jesse Terry, who is product owner for Ableton Push (and has a long history working with the hardware that interacts with Live).

And the sample programming is insane: you get Runescape samples. A baby sneezing. The Mac volume control sound. It’s obsessive Internet-age programming – and then Tess plays this all as acoustic percussion and kit.

In this talk, they cover jazz education, getting started as kids, and Skype lessons. And then they get into the workings of a song.

The big trick here: the duo use Live’s Racks and their Chain function, so that consistently mapped drum parts can cycle through different sounds during performance. (I’ll review that technique in more detail soon.) 24 variable pads play all the sounds as Tess is playing.

Working with Chains in Ableton Live’s Device Racks can let you cycle through samples, patches, and layered/split instrument settings.
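Conceptually, it works like this sketch (Python standing in for a Live Rack; the names are made up): the pad mapping never changes, while a chain selector – driven in Live by clips, envelopes, or program changes – decides which set of sounds those pads trigger.

    # Hypothetical stand-in for a Drum Rack with a chain selector.
    chains = [
        {"pad_1": "kick_tight.wav", "pad_2": "snare_a.wav"},  # chain 0
        {"pad_1": "kick_boomy.wav", "pad_2": "snare_b.wav"},  # chain 1
        {"pad_1": "kick_808.wav",   "pad_2": "snare_c.wav"},  # chain 2
    ]
    selector = 0  # in Live, a clip envelope or macro would set this

    def hit(pad):
        print(f"{pad}: play {chains[selector][pad]}")

    hit("pad_1")   # kick_tight.wav
    selector = 1   # song section changes -> same pads, new sounds
    hit("pad_1")   # kick_boomy.wav

Same gesture, consistent layout, different timbre per section – which is what lets the parts live in muscle memory.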

Part of why the video is interesting to watch is it’s really as much about how Tess has gradually learned how to memorize and recall these elaborate percussion parts. It’s a beautiful example of the human brain expanding to keep up with, then surpass, what the machine makes available.

For Luke’s part, there’s a monome [grid controller], keyboard triggers, and still more electronic pads. The monome loops chopped up samples, sticks can trigger more samples manually — it’s dense. He plays melodic parts both on keyboard and 4×4 pad grid.

The track makeup:

  • Arrangement view contains the song structure
  • A click track (obviously)
  • Software synths each have set lists of sounds, with clips triggering sound changes as MIDI program changes
  • The monome / mlrv sequencer

Here’s an (older) extended live set, so you can see more of how they play:

Here’s their dreamy, poppy latest music video (released March) – made all the more impressive when you realize they basically sound like this live:

More background on the band:

Welcome to the Magically Playful World of Tennyson [Red Bull Music]

New band of the week: Tennyson (No 14) [The Guardian]

Image courtesy the artists.

Check out a growing selection of content from Loop on Ableton’s minisite:

https://www.ableton.com/en/blog/loop/

Bonus: for a quick run-down on chains, here’s AfroDjMac:

The post Learn how Tennyson translate between Ableton and percussion on kits appeared first on CDM Create Digital Music.
