Touché now puts expressive control at hand for $229

Delivered... Peter Kirn | Scene | Tue 18 Sep 2018 5:10 pm

“Expressive control” has largely translated to “wiggly keyboards” and “squishy grids,” with one notable exception – the unique, paddle-like Touché from Expressive E. And while keeping essentially the same design, they’ve gotten the price down to just US$/EUR229, making this potentially a no-brainer.

The result: add this little device to your rig, and play gesturally with a whole bunch of instruments, either using provided examples or creating your own.

Preset-packed paddle?

Expressive E’s approach has set itself apart in two key ways. First, they’ve gone with a design that’s completely different than anyone else working in expressive control. It’s not a ribbon, not a grid, not an X/Y pad, and not a keyboard, in other words.

The Touché is best described as a paddle, a standalone object that you sit next to your computer or instrument. There’s a patented mechanism in there that responds to mechanical movements, so with the slightest pressure or tap, you can activate it, or push harder for multi-axis control.

And that, in turn, opens this up to lots of different control applications. Expressive E market this mainly for controlling instruments, like synthesizers, but any music or visual performance input could be relevant.

The second clever element in Expressive E’s approach is to bundle a whole bunch of presets. The first Touché had loads of support even for hardware synths. The new one is focused more on software. But together, this means that while you can map your own ideas, you’ve got a load of places to start.

Touché SE

The original Touché is US$/EUR 399.

Touché SE is just $/EUR 229.

Here’s the cool thing about that price break: the only real sacrifice here is the standalone operation with hardware. (The SE works with bus-powered USB only.)

Other than that, it’s the same hardware as before, though with a polycarbonate touch plate.

In fact, otherwise you get more:

  • Lié hosting software, with VST hosting so you can use your own plug-ins
  • UVI-powered internal sound engine with leads and mallets and loads of other things
  • 200 ready-to-play internal sounds, which you can call up using dedicated buttons on the device
  • 200+ presets for popular plug-ins (like Native Instruments’ Massive and Prism, Serum, Arturia software, etc.)

So connect this USB bus-powered device (they put a huge four-foot cable in the box), and you get multi-dimensional gestural control.

Standalone, VST, AU, Mac, Windows. (Would love to see a Linux/Raspi version!)

I’ve been playing one for a bit and – it’s hugely powerful, likely of appeal both to plug-in and synth lovers and DIYers alike.

http://www.expressivee.com/touche-se

Hack a Launchpad Pro into a 16-channel step sequencer, free

Delivered... Peter Kirn | Scene | Tue 11 Sep 2018 4:59 pm

Novation’s Launchpad Pro is unique among controller hardware: not only does it operate in standalone mode, but it has an easy-to-modify, open source firmware. This mod lets you exploit that to transform it into a 32-step sequencer.

French musician and engineer Quentin Lamerand writes us to share his mod for Novation’s firmware. And you don’t have to be a coder to use this – you can easily install it without any coding background, which was part of the idea of opening up the firmware in the first place.

The project looks really useful. You get 16 channels (for controlling multiple sound parts or devices), plus 32-steps for longer phrases. And since the Launchpad Pro works as standalone hardware, you could use all of this without a computer. (You can output notes on either the USB port – even in standalone mode – or the MIDI DIN out port.)

You’ll need something else to supply clock – the sequencer only works in slave mode – but once you do that (hihi, drum machine), you’re good to go.

Bonus features:

  • Note input with velocity (adjustable using aftertouch on the pads)
  • Repeat notes
  • Adjustable octave
  • Setup mode with track selection, parameters, mute, clear, and MIDI thru toggle
  • Tap steps to select track length
  • Adjust step length (to 32nd, 16th, 16th note triplet, 8th, 8th note triplet, quarter, quarter note triplet, half note)
  • Rotate steps

On one hand, this is what I think most of us believe Novation should have shipped in the first place. On the other hand, look at some of those power-user features – by opening up the firmware, we get some extras the manufacturer probably wouldn’t have added. And if you are handy with some simple code, you can modify this further to get it exactly how you want.

It’s a shame, actually, that we haven’t seen more hackable tools like this. But that’s all the more reason to go grab this – especially as Launchpads Pro can be had on the cheap. (Time to dust mine off, which was the other beauty of this project!)

Go try Quentin’s work and let us know what you think:

http://faqtor.fr/launchpadpro.html

Got some hacks of your own, or inspired by this to give it a try? Definitely give a shout.

The open firmware project you’ll find on Novation’s GitHub:

https://github.com/dvhdr/launchpad-pro

More:

Hack a Grid: Novation Makes Launchpad Pro Firmware Open Source

Launchpad Pro Grid Controller: Hands-on Comprehensive Guide

Inside Cypher2, and what could be a more expressive future for synths

Delivered... Peter Kirn | Scene | Mon 3 Sep 2018 11:01 am

For all the great sounds they can make, software synths eventually fit a repetitive mold: lots of knobs onscreen, simplistic keyboard controls when you actually play. ROLI’s Cypher2 could change that. Lead developer Angus chats with us about why.

Angus Hewlett has been in the plug-in synth game a while, having founded his own FXpansion, maker of various wonderful software instruments and drums. That London company is now part of another London company, fast-paced ROLI, and thus has a unique charge to make instruments that can exploit the additional control potential of ROLI’s controllers. The old MIDI model – note on, note off, and wheels and aftertouch that impact all notes at once – gives way to something that maps more of the synth’s sounds to the gestures you make with your hands.

So let’s nerd out with Angus a bit about what they’ve done with Cypher2, the new instrument. Background:

A soft synth that’s made to be played with futuristic, expressive control

Peter: Okay, Cypher2 is sounding terrific! Who made the demos and so on?

Angus: Demos – Rafael Szaban, Heen-Wah Wai, Rory Dow. Sound Design – Rory Dow, Mayur Maha, Lawrence King & Rafael Szaban

Can you tell us a little bit about what architecture lies under the hood here?

Sure – think of it as a multi-oscillator subtractive synth. Three oscillators with audio-rate intermodulation (FM, S&H, waveshape modulation and ring mod), each switchable between Saw and Sin cores. Then you’ve got two waveshapers (each with a selection of analogue circuit models and tone controls, and a couple of digital wavefolders), and two filters, each with a choice of five different analogue filter circuit models – two variations on the diode ladder type, OTA ladder, state variable, Sallen-Key – and a digital comb filter. Finally, you’ve got a polyphonic, twin stereo output amp stage which gives you a lot of control over how the signal hits the effects chain – for example, you can send just the attack of every note to the “A” chain and the sustain/release phase to the “B” chain, all manner of possibilities there.

Controlling all of that, you’ve got our most powerful TransMod yet. 16 assignable modulation slots, each with over a hundred possible sources to choose from, everything from basics like Velocity and LFO through to function processors, step sequencers, paraphonic mod sources and other exotics. Then there’s eight fixed-function mod slots to support the five dimensions of MPE control and the three performance macros. So 24 TransMods in total, three times as many as v1.

Okay, so Cypher2 is built around MPE, or MIDI Polyphonic Expression. For those readers just joining us, this is a development of the existing MIDI specification that standardizes additional control around polyphonic inputs – that is, instead of adding expression to the whole sound all at once, you can get control under each finger, which makes way more sense and is more fun to play. What does it mean to build a synth around MPE control? How did you think about that in designing it?

It’s all about giving the sound designers maximum possibility to create expressive sound, and to manage how their sound behaves across the instrument’s range. When you’re patching for a conventional synth, you really only need to think about pitch and velocity: does the sound play nicely across the keyboard? With 5D MPE sounds, sound designers start having to think more like a software engineer or a game world designer – there are so many possibilities for how the player might interact with the sound, and they’ve got to have the tools to make it sound musical and believable across the whole range.

What this translates to in the specific case of Cypher2 is adapting our TransMod system (which is, at its heart, a sophisticated modulation matrix) to make it easy for sound designers to map the various MPE control inputs, via dynamically controllable transfer function curves, on to any and every parameter on the synth.
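A quick aside for readers who want to see what “control under each finger” looks like on the wire: in MPE, each sounding note is assigned its own MIDI channel, so per-note pitch bend, channel pressure, and timbre (CC74) affect only that note. Here’s a rough sketch of the idea in Python – the channel assignments and helper functions are hypothetical illustrations of the general MPE scheme, not anything from Cypher2’s code:

```python
# Illustrative sketch of MPE's per-note channels (not Cypher2 internals).
# In an MPE "lower zone", channel 1 acts as the master and notes are rotated
# across member channels 2-16, so per-note messages stay independent.

def note_on(channel, note, velocity):
    return bytes([0x90 | (channel - 1), note, velocity])

def pitch_bend(channel, value_14bit):
    lsb, msb = value_14bit & 0x7F, (value_14bit >> 7) & 0x7F
    return bytes([0xE0 | (channel - 1), lsb, msb])

def channel_pressure(channel, amount):
    return bytes([0xD0 | (channel - 1), amount])

def timbre_cc74(channel, amount):
    return bytes([0xB0 | (channel - 1), 74, amount])

# Two fingers on two member channels: bending one note leaves the other alone.
finger_a = note_on(2, 60, 100) + pitch_bend(2, 8192 + 512)  # slight upward bend
finger_b = note_on(3, 64, 90) + channel_pressure(3, 70)     # pressure only
print(finger_a.hex(" "), "|", finger_b.hex(" "))
```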

How does this relate to your past line of instruments?

Clearly, Cypher2 is a successor to the original Cypher which was one of the DCAM Synth Squad synths; it inherits many of the same functional upgrades that Strobe 2 gained over its predecessor a couple of years ago – the extended TransMod system, the effects engine, the Retina-friendly, scalable, skinnable GUI – but goes further, and builds on a lot of user and sound-designer feedback we had from Strobe2. So the modulation system is friendlier, the effects engine is more powerful, and it’s got a brand new and much more powerful step-sequencer and arpeggiator. In terms of its relationship to the original Cypher – the overall layout is similar, but the oscillator section has been upgraded with the sine cores and additional FM paths; the shaper section gains wavefolders and tone controls; the filters have six circuits to choose from, up from two in the original, so there’s a much wider range of tones available there; the envelopes give you more choice of curve responses; the LFOs each have a sub oscillator and quadrature outputs; and obviously there’s MPE as described above.

Of course, ROLI hope that folks will use this with their hardware, naturally. But since part of the beauty is that this is open on MPE, are there any interesting applications working with some other MPE hardware? Have you tried it out on non-ROLI stuff (or with testers, etc.)?

Yes, we’ve tried it (with Linnstrument, mainly), and yes, it very much works – although with one caveat. Namely, MPE, as with MIDI, is a protocol which specifies how devices should talk to one another – but it doesn’t specify, at a higher level, what the interaction between the musician and their sound should feel like.

That’s a problem that I actually first encountered during the development of BFD2 in the mid-2000s: “MIDI Velocity 0-127” is adequate to specify the interaction between a basic keyboard and a sound module, and some of the more sophisticated stage controller boards (Kurzweil, etc.) have had velocity curves at least since the 90s. But as you increase the realism and resolution of the sounds – and BFD2 was the first time we really did so in software to the extent that it became a problem – it becomes apparent that MIDI doesn’t specify how velocity should map on to dB, or foot-pounds-per-second force equivalent, or any real-world units.

That’s tolerable for a keyboard, where a discerning user can set one range for the whole instrument, but when you’re dealing with a V-Drums kit with, potentially, ten or twelve pads, of different types, to set up, and little in the way of a standard curve to aim for, the process becomes cumbersome and off-putting for the end-user. What does “Velocity 72” actually mean from Manufacturer A’s snare drum controller, at a sensitivity setting B, via drum brain C triggering sample D?

Essentially, you run into something of an Uncanny Valley effect (a term from the world of movies / games where, as computer generated graphics moved from obviously artificial 8-bit pixel art to today’s motion-captured, super-sampled cinematic epics, paradoxically audiences would in some cases be less satisfied with the result). So it’s certainly a necessary step to get expressive hardware and software talking to one another – and MPE accomplishes that very nicely indeed – but it’s not sufficient to guarantee that a patch will result in a satisfactory, believable playing experience OOTB.

Some sound-synth-controller-player combinations will be fine, others may not quite live up to expectations, but right now I think it’s natural to expect that it may be a bit hit-and-miss. Feedback on this is something I’d like to actively encourage; we have a great dialogue with the other hardware vendors and are keen to achieve a high standard of interoperation, but it’s a learning process for all involved.

Thanks, Angus! I’ll be playing with Cypher2 and seeing what I can do with it – but fascinating to hear this take on synths and control mapping. More food for thought.
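A footnote to make that velocity-mapping point concrete: two sound modules can both be perfectly “legal” MIDI citizens and still translate the same velocity into quite different loudness. The curves below are illustrative assumptions, not BFD2’s or Cypher2’s actual response:

```python
import math

def gain_db_power_curve(velocity, exponent=2.0):
    # Amplitude follows (v/127)^exponent, expressed in dB below full scale.
    amp = (velocity / 127.0) ** exponent
    return 20.0 * math.log10(max(amp, 1e-6))

def gain_db_linear(velocity, range_db=40.0):
    # Straight-line map: velocity 0 -> -40 dB, velocity 127 -> 0 dB.
    return -range_db * (1.0 - velocity / 127.0)

v = 72
print(f"velocity {v}: power curve {gain_db_power_curve(v):.1f} dB, "
      f"linear {gain_db_linear(v):.1f} dB")
# Same "Velocity 72", noticeably different loudness - MIDI doesn't say which is right.
```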

https://fxpansion.com/products/cypher2/

http://roli.com/

Numerical Audio brings us a new virtual tape machine in RE-1

Delivered... Ashley Elsdon | Scene | Mon 27 Aug 2018 10:51 pm

Numerical Audio / Kai Aras brings us yet another highly capable audio unit for giving your iOS setup a distinctive and unique sound. Some of Numerical’s other audio units are pretty special. Some of my favourites include RF-1 and RP-1, Volt (an excellent synth) and Theremidi.

Now we get RE-1, a full-featured virtual tape machine capable of delivering authentic tape-based echo and chorus effects. And it doesn’t stop there: RE-1 is an interactive tape player, and with its sample, loop and overdub features it’s possible to use it like a virtual tape recorder, sample player, looper, or simply as a master effect.
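In signal processing terms, a tape echo with multiple read heads is essentially a multi-tap delay: one buffer standing in for the tape, several read positions, and feedback from the mix written back onto the tape. Here’s a deliberately rough sketch of that idea – plain NumPy, purely illustrative, and not Numerical Audio’s code (no wow, flutter or saturation):

```python
import numpy as np

def tape_echo(x, sr, head_times=(0.12, 0.24, 0.36),
              head_gains=(0.8, 0.6, 0.4), feedback=0.35):
    """Very rough 3-head echo: three delay taps plus feedback."""
    tape = np.zeros(len(x))          # the "tape" we write onto
    out = np.zeros(len(x))
    heads = [int(sr * t) for t in head_times]
    for n in range(len(x)):
        wet = sum(g * tape[n - d] if n >= d else 0.0
                  for d, g in zip(heads, head_gains))
        out[n] = x[n] + wet
        tape[n] = x[n] + feedback * wet   # input plus feedback goes back to the tape
    return out

# Example: echo an impulse at 44.1 kHz and see where the repeats land.
sr = 44100
impulse = np.zeros(sr)
impulse[0] = 1.0
print(np.nonzero(tape_echo(impulse, sr) > 0.05)[0][:8])
```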

Overview:

  • Virtual Tape deck including 3 individually controllable read heads, variable delay time and feedback
  • Interactive user interface and realtime visualisation of various parameters related to the tape simulation
  • Dedicated controls for Wow, Flutter, Color, Tape Hiss & saturation amount
  • Tape Loop mode with overdub, tape reverse and time-stretching
  • Sample/Loop library loads wave files onto the virtual tape loop
  • Transport and Tempo Sync
  • Stereo Spread & Stereo Panner
  • Input processing: Highpass filter
  • Output processing: 2 band eq

Tape Echo:

  • Authentic tape echo emulation
  • Multi-tap delay with 3 individually controllable read heads
  • Color Control adjusts the echo’s tone from dark to bright
  • Delay Time: 5ms – 1000ms (or 1/32nd to 1/1)
  • Wow & Flutter
  • Tape Hiss and Saturation
  • Stereo spread
  • Stereo panner

Looper / Sample Player:

  • Uses the tape like a traditional looper with unlimited overdubs
  • Configurable Loop length with 1 – 8 bars and tempo sync and tape reverse
  • Tape transport can be linked to the host’s transport controls
  • Time stretching keeps loops in sync
  • Samples/loops can be loaded onto the tape directly
  • Samples can be created from the tape loop and stored in the sample library at any time
  • Factory content includes a variety of loops grouped by style
  • Samples/loops can be imported from and exported to other apps

Connectivity:

  • Standalone
  • AUv3
  • InterApp-Audio
  • Audiobus
  • Ableton Link
  • MIDI (Tempo, CC, Program Change)

RE-1 requires iOS 11+

RE-1 costs $4.99 on the app store now

It’s official: minijack connections are now kosher for MIDI

Delivered... Peter Kirn | Scene | Tue 21 Aug 2018 8:40 pm

For years, manufacturers have been substituting small minijack connectors for MIDI – but there wasn’t any official word on how to do that, or how to wire them. That changes now, as these space saving connections get official.

Our story so far:

MIDI, the de facto standard first introduced in the early 1980s, specifies a really big physical connector. That’ll be the 5-pin DIN connection, named for the earlier German standard connector, one that once served other serial connections but nowadays is seen more or less exclusively on MIDI devices. It’s rugged. It’s time tested. It’s … too big to fit in a lot of smaller housings.

So, manufacturers have solved the problem by substituting 2.5mm “minijack” connections and providing adapters in the box. Here’s the problem: since there wasn’t a standard, no one knew which way to wire them. A jack connection is called TRS because it has three electrical points – tip, ring, and sleeve. There are three necessary electrical connections for MIDI. And sure enough, not everyone did it the same way.

In the summer of 2015, I had been talking to a handful of people interested in getting some kind of convention:

What if we used stereo minijack cables for MIDI?

That in turn was based on a 2011 forum discussion of people making their own adapters.

Some manufacturers even used that diagram as the basis for their own wiring, but since no one was really checking with anyone else, two half-standards emerged. KORG, Akai, and others did it one way … Novation, Arturia, and ilk did it another.

The good news is, we now have an official standard from the MIDI Manufacturers Association (MMA). The bad news is, there can be only one – the KORG standard beat out the Arturia one, so sorry, BeatStep Pro.

Wiring diagram. The “mating face” is also what I put on when I start a flirtatious conversation about TRS wiring.

That said, now that there is a standard, you could certainly wire up an adapter.
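If you do wire up your own adapter, the mapping below reflects the adopted KORG-style convention as I understand it – treat it as a sketch and double-check against the actual RP-054 document before soldering anything:

```python
# TRS-to-DIN MIDI wiring per the adopted KORG-style convention
# (from memory, not quoted from RP-054 - verify before building hardware).
TRS_TYPE_A = {
    "tip":    {"din_pin": 5, "role": "current sink (data)"},
    "ring":   {"din_pin": 4, "role": "current source"},
    "sleeve": {"din_pin": 2, "role": "shield / ground"},
}

for contact, wire in TRS_TYPE_A.items():
    print(f"{contact:>6} -> DIN pin {wire['din_pin']} ({wire['role']})")
```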

2.5mm is recommended, though a bigger TRS jack (1/4″) is also possible. Mainly, your caveat is this: standard audio cables are not recommended.

If you’re thinking this now means you can use any old stereo audio minijack cable, note that the MMA document adds that you should use specialized cables with shielded twisted pair internal wiring. Shhh — audio cables probably would work, but you might have signal quality issues.

Twisted what? That’s literally twisting the wires together and adding an extra layer of shielding, which reduces electrical interference and improves reliability. (See Wikipedia for an explanation, plus the fun factoid that you can thank Alexander Graham Bell.)

The recommendation is made by the MMA together with the Association of Musical Electronics Industry (AMEI), and was ratified over the summer:

MMA Technical Standards Board/ AMEI MIDI Committee
Letter of Agreement for Recommend Practice
Specification for use of TRS Connectors with MIDI Devices [RP-054]

News and (for members) link to the PDF download on the MMA blog:

Specification for TRS Adapters Adopted and Released

Updated: I feel specifically obligated to respond to one concern that’s been making the rounds – that people will now start plugging microphones, guitars, or headphones into these MIDI minijacks and break something.

Actually, no, not really.

The most likely use case would be users plugging in minijack headphone adapters. But part of the reason to use 2.5mm minijack is, those other examples – microphones and guitar jacks – don’t typically use the smaller plug.

Anyway, to the extent that people would do this, presumably they were already doing it wrong on gear from various manufacturers that use these adapters. Those makers helpfully include adapter dongles in the box, though, and as the MMA/AMEI doc recommends, manufacturers may still want to include electrical protection so someone doesn’t accidentally fry their hardware. (And engineers do try to anticipate all those mistakes as best they can, in my experience.)

Really, nothing much changes here, apart from this: because there’s an official MMA document out there, it’s more likely makers will choose one system of wiring for these plugs, so those dongles and cables are interchangeable. And that’s good.

Fred Anton Corvest’s Envolver app takes a big step forward with version 2

Delivered... Ashley Elsdon | Scene | Wed 8 Aug 2018 6:07 am

Apps from the FAC stable are fast becoming some of my most used AUv3 apps at the moment. Fred’s Maxima and Transient seem to get used almost all the time, but his Envolver app is one that had, so far, not really hit my radar. That is, until now, with version 2 – and the reason version 2 has caught my attention is that it is a seriously significant upgrade in this app’s capability. First, it’s probably worth giving you a quick idea of what the app itself does. Here’s the description from the app store.

Being a MIDI effect in its essence, controlling your synthesizers’ parameters (e.g. volume, cutoff, res) with MIDI CC and notes, FAC Envolver is also able to transform the input audio via two exclusive effects: a Noise Gate and a Trance Gate. This combination is the key to breathe life into your sound, providing interesting natural modulation and sequences that will always be different by nature.

Under the hood, FAC Envolver is an envelope follower delivering MIDI data generated from the contour of a signal. The source of this signal may come from two distinctive circuits: the first one is a classic envelope follower and the second one is a rhythmical pattern called trigger gate. Both circuits are responsible for providing a signal envelope that can be altered by a set of parameters and delivered as MIDI control change messages (CC).

The generated signal envelope also passes through a gate detection mechanism driven by a threshold. When the signal goes above the threshold, the gate is open and when the signal goes below, the gate is closed. This open/close sequence provides a second envelope which is delivered as MIDI CC (on/off) and notes. The pitch of the note may be set within the effect or from an external value provided by MIDI input. Both may follow a particular scale progression.

The envelope follower circuit also provides a noise gate to control precisely the level of your audio signal. This uses the same open/close sequence built by the gate detection mechanism that delivers MIDI data but acts on the audio. Following the same logic, when the second circuit – the trigger gate – is engaged, the open/close sequence provides a trance gate style effect to the audio.

FAC Envolver is a stereo effect and provides two slots, one slot per channel. Both can be linked to facilitate the manipulation. The audio can also be turned to mono and the dry and wet audio can be mixed.
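To unpack that description a little: strip away the details and an envelope follower is a rectifier plus asymmetric smoothing, and the gate is a threshold test on the resulting contour. Here’s a toy version that emits CC values and gate on/off flags – purely illustrative, and not FAC’s actual DSP:

```python
import math

def envelope_follower(samples, rise=0.01, fall=0.001, threshold=0.2):
    """Yield (cc_value, gate_open) pairs from a mono sample stream in [-1, 1]."""
    env = 0.0
    for s in samples:
        level = abs(s)                       # rectify
        coeff = rise if level > env else fall
        env += coeff * (level - env)         # asymmetric one-pole smoothing
        gate = env > threshold               # the open/close sequence
        yield int(env * 127), gate           # contour as a MIDI CC, gate as on/off

# Example: a short 220 Hz burst followed by silence opens, then closes, the gate.
burst = [math.sin(2 * math.pi * 220 * n / 8000) for n in range(2000)] + [0.0] * 2000
events = list(envelope_follower(burst))
print(events[500], events[3500])   # (cc, True) during the burst, (cc, False) after
```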

What’s arrived in version 2.0 really does take this AUv3 up to the next level though:

• Two circuits: Env Follower and Trigger Gate (Audio/MIDI fx)
• Each circuit provides MIDI out (CC/Note) and optional audio transformation
• Env Follower provides a Noise Gate with threshold, hysteresis and curve smoother
• Trigger Gate provides a Trance Gate Style fx with optional resolution and patterns
• Stereo/Mono processing – a slot per channel
• Fine tuning of each slot; rising and falling time, bias, depth and inverter
• MIDI IN/OUT support: CC#, channels, int/ext note pitch and scale progression
• Multi-waveform graph: input signal, env contour, gate signal and threshold
• Host tempo sync

Note: The MIDI/AUDIO combo makes FAC Envolver usable in every host, even the ones that still do not provide MIDI out support; in this case the MIDI generator is disabled and the fx can be used as a standard AUDIO fx.

I’m really looking forward to getting to grips with FAC Envolver very soon, as I can imagine that it’s going to be yet another FAC app that ends up being heavily used by me on a regular basis.

FAC Envolver is on the app store and costs $9.99

Put some physics in your MIDI with this iOS Audio Unit

Delivered... Ashley Elsdon | Scene | Wed 1 Aug 2018 11:25 pm

We seem to be getting more and more unusual and interesting audio units these days. Which, in my view, can only be a good thing. It reminds me that these days you can do almost everything that you might want to do on a desktop, but now on mobile. A new audio unit I’ve just noticed is Physicles. The app is a container of physics-based MIDI Audio Unit plugins. These AU MIDI plugins generate MIDI messages through the use of an underlying physics engine, which models the physical interactions between various entities.
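The underlying idea is easy to picture with a toy model: simulate a ball in a box and turn every wall collision into a note-on, with pitch taken from where the ball hits and velocity from how hard. The sketch below is just an illustration of physics-driven MIDI generation in general – the numbers and mappings are made up, and it’s not Physicles’ actual engine:

```python
def bouncing_ball_notes(steps=2000, dt=0.01):
    """Toy 2D ball in a unit box; each wall bounce yields a (time, note, velocity) event."""
    x, y, vx, vy = 0.2, 0.8, 0.73, -0.41
    events = []
    for i in range(steps):
        x, y = x + vx * dt, y + vy * dt
        if not 0.0 <= x <= 1.0:              # hit a left/right wall
            vx = -vx
            x = min(max(x, 0.0), 1.0)
            # map vertical position to pitch, impact speed to velocity
            events.append((i * dt, 48 + int(y * 24), min(127, int(abs(vx) * 150))))
        if not 0.0 <= y <= 1.0:              # hit the floor/ceiling
            vy = -vy
            y = min(max(y, 0.0), 1.0)
            events.append((i * dt, 48 + int(x * 24), min(127, int(abs(vy) * 150))))
    return events

for t, note, vel in bouncing_ball_notes()[:5]:
    print(f"t={t:5.2f}s  note={note}  velocity={vel}")
```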

In some ways it shares some similarities with Bram Bos’ Rozeta Sequencer Suite.

In the current version, only the following plugin is included:

  • Physicle Bouncy: In this playground, multiple balls bounce inside a polygon. MIDI messages are generated whenever a ball collides with the side of the polygon.

I’m guessing that the developer is planning on adding additional units / functionality over time. The app’s description sort of suggests that.

Physicles is currently free on the app store

Please Note:

  • The plugin requires a compatible AU Host to work. You could use AUM, AudioBus 3, apeMatrix, Beatmaker, Cubasis 2, or Sequencism
  • The plugin does not generate any sound at all, and only creates and sends MIDI messages

What do you play? Berklee adds electronic digital instrument program

Delivered... Peter Kirn | Scene | Tue 31 Jul 2018 11:14 pm

Musicians have majored in trumpets and voice, conducting and reeds. Now, they can choose the “electronic digital instrument” at Berklee College of Music, as music education works to redefine itself in the post-digital age.

The underlying idea here itself isn’t new – turntables and computers have been singled out before as instrumental or educational categories – but making a complete program in this way is novel. And maybe the most interesting thing about Berklee’s approach is bringing a range of different subcategories into one theme, the “electronic digital instrument,” or EDI. (Uh… okay, the search for a great name here continues. Maybe we can give away an Ableton Push as a naming contest?)

In Berklee’s formulation, this is computing device + software + controller.

I wonder if the “controller” formulation will stand the test of time, as computation and sound modeling is brought increasingly into the same box as whatever has controls on it. (You don’t think of the knobs on a synthesizer as a distinct “controller,” even though the functional relationship is the same.)

But most encouraging is the cast of characters and the program Berklee is assembling here. I’m very interested to hear more about their curriculum and how it’s taught – plus I apparently know quite a few people involved – so let’s definitely follow up soon with an interview. Here’s their launch video:

The curricular objectives:

Upon completion of the performance core program with an electronic digital instrument, you will be able to:

  • design and configure a versatile, responsive, and musically expressive electronic performance system;
  • synthesize and integrate knowledge of musical styles to develop effective electronic performance strategies;
  • play in a variety of electronic performance modes using a variety of controllers;
  • use common types of synthesizers;
  • produce audio assets from a variety of sources, and use them in a live performance;
  • demonstrate proficiency in effect processing in a live performance; and
  • perform in solo and ensemble settings, taking on melodic, harmonic, rhythmic, and textural roles as well as arranging, mixing, remixing, and real-time compositional musical roles using all parts of one’s performance system.

And the required coursework is interesting, as well. The program includes improvisation, and a bunch of ensemble work – with turntables, techno/rave and “DJ sampling,” hip-hop, and synth technique for live ensembles. That builds in turn on the development of laptop ensembles and more experimental improvisational work in programs in some other schools. Berklee students in the program will work with turntables (which some schools have offered in the past, if sporadically), but also studies in “performance” and “grid” controllers. (Dear Brian Crabtree, Toshio Iwai, and Roger Linn – did you imagine you would all help turn “grids” into an instrumental study?)

All of this takes place over four semesters of study.

The program announcement:

Principal Instruments: Electronic Digital Instrument

https://www.berklee.edu/

Creative software can now configure itself for control, with OSC

Delivered... Peter Kirn | Scene | Tue 31 Jul 2018 5:10 pm

Wouldn’t it be nice if, instead of manually assigning every knob and parameter, software was smart enough to configure itself? Now, visual software and OSC are making that possible.

Creative tech has been moving forward lately thanks to a new attitude among developers: want something cool? Do it. Open source and/or publish it. Get other people to do it, too. We’ve seen that as Ableton Link transformed sync wirelessly across iOS and desktop. And we saw it again as software and hardware makers embraced more expression data with MIDI Polyphonic Expression. It’s a way around “chicken and egg” worries – make your own chickens.

Open Sound Control (OSC) has for years been a way of getting descriptive, high-resolution data around. It’s mostly been used in visual apps and DIY audiovisual creations, with some exceptions – Native Instruments’ Reaktor has a nice implementation on the audio side. But what it was missing was a way to query those descriptive messages.

What would that mean? Well, basically, the idea would be for you to connect a new visual app or audio tool or hardware instrument and interactively navigate and assign parameters and controls.

That can make tools smarter and auto-configuring. Or to put it another way – no more typing in the names of parameters you want to control. (MIDI is moving in a similar direction, if via a very different structure and implementation, with something called MIDI-CI or “Capability Inquiry.” It doesn’t really work the same way, but the basic goal – and, with some work, the end user experience – is more or less the same.)

OSC Queries are something I’ve heard people talk about for almost a decade now. But now we have something real you can use right away. Not only is there a detailed proposal for how to make the idea work, but visual tools VDMX, Mad Mapper, and Mitti all have support now, and there’s an open source implementation for others to follow.

Vidvox (makers of VDMX) have led the way, as they have with a number of open source ideas lately. (See also: a video codec called Hap, and an interoperable shader standard for hardware-accelerated graphics.)

Their implementation is already in a new build of VDMX, their live visuals / audiovisual media software:

https://docs.vidvox.net/vdmx_b8700.html

You can check out the proposal on their site:

https://github.com/vidvox/oscqueryproposal

Plus there’s a whole dump of open source code. Developers on the Mac get a Cocoa framework that’s ready to use, but you’ll find some code examples that could be very easily ported to a platform / language of your choice:

https://github.com/Vidvox/VVOSCQueryProtocol

There’s even an implementation that provides compatibility in apps that support MIDI but don’t support OSC (which is to say, a whole mess of apps). That could also be a choice for hardware and not just software.

They’ve even done this in-progress implementation in a browser (though they say they will make it prettier):

Here’s how it works in practice:

Let’s say you’ve got one application you want to control (like some software running generative visuals for a live show), and then another tool – or a computer with a browser open – connected on the same network. You want the controller tool to map to the visual tool.

Now, the moment you open the right address and port, all the parameters you want in the visual tool just show up automatically, complete with widgets to control them.

And it’s (optionally) bi-directional. If you change your visual patch, the controls update.

In VDMX, for instance, you can browse parameters you want to control in a tool elsewhere (whether that’s someone else’s VDMX rig or MadMapper or something altogether different):

And then you can take the parameters you’ve selected and control them via a client module:

All of this is stored as structured data – JSON files, if you’re curious. But this means you could also save and assign mappings from OSC to MIDI, for instance.
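To make that a bit more tangible, here’s a rough sketch of what consuming that structured data can look like: fetch the JSON over HTTP and walk the node tree. The attribute names (CONTENTS, FULL_PATH, TYPE, VALUE) follow the Vidvox proposal as I read it, and the host and port here are made up – check the spec and your app’s settings before relying on any of this:

```python
import json
import urllib.request

def walk(node, depth=0):
    """Recursively print an OSCQuery-style namespace tree."""
    for name, child in node.get("CONTENTS", {}).items():
        path = child.get("FULL_PATH", name)
        print("  " * depth + f"{path}  type={child.get('TYPE', '?')} "
              f"value={child.get('VALUE')}")
        walk(child, depth + 1)

# Hypothetical address - VDMX / MadMapper expose their query server on a local port.
with urllib.request.urlopen("http://127.0.0.1:2345/") as response:
    root = json.load(response)

walk(root)
```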

Another example: you could have an Ableton Live file with a bunch of MIDI mappings. Then you could, via experimental code in the archive above, read that ALS file, and have a utility assign all those arbitrary MIDI CC numbers to automatically-queried OSC controls.

Think about that for a second: then your animation software could automatically be assigned to trigger controls in your Live set, or your live music controls could automatically be assigned to generative visuals, or an iPad control surface could automatically map to the music set when you don’t have your hardware controller handy, or… well, a lot of things become possible.

We’ll be watching OSCquery. But this may be of enough interest to developers to facilitate some discussion here on CDM to move things forward.

Follow Vidvox:

https://vdmx.vidvox.net/blog

And previously, watching MIDI get smarter (smarter is better, we think):

MIDI evolves, adding more expressiveness and easier configuration

MIDI Polyphonic Expression is now a thing, with new gear and software

Plus an example of cool things done with VDMX, by artist Lucy Benson:

Free Ableton Live tool lets you control even more arcane hardware

Delivered... Peter Kirn | Artists,Scene | Tue 24 Jul 2018 5:21 pm

They’re called “NRPNs.” It sounds like some covert military code, or your cat walked on your keyboard. But they’re a key way to control certain instruments via MIDI – and now you have a powerful way to do just that in Ableton Live, for free.

NRPN stands for “Non-Registered Parameter Number” in MIDI, which is a fancy way of saying “we have a bunch of extra MIDI messages and no earthly clue how to identify them.” But what that means in practical terms is, many of your favorite synthesizers have powerful features you’d like to control and automate and … you can’t. Ableton Live doesn’t support these messages out of the box.

It’s likely a lot of people own synths that require NRPN messages, even if they’ve never heard of them. The Dave Smith Instruments Prophet series, DSI Tetra, Novation Peak, Roger Linn Linnstrument, and Korg EMX are just a few examples. (Check your manual and you’ll see.)

Now, you could dig into Max for Live and do this by hand. But better than that is to download a powerful free tool that does the hard work for you, via a friendly interface.

Uruguay-born, Brazil-based superstar artist and ultra-hacker Gustavo Bravetti has come to our rescue. This is now the second generation version of his free Max for Live device – and it’s got some serious power inside. The original version was already the first programmable NRPN generator for Live; the new edition adds MIDI learn and bidirectional communication.

It’s built in Max 8 with Live 10, so for consistency you’ll likely want to use Live 10 or later. (Max for Live is required, which is also included in Suite.)

Features:

  • Up to 8 NRPN messages per device
  • Multiple devices can be stacked
  • Setup parameters in NRPN or MSB/LSB [that’s “most significant” and “least significant” byte – basically, a method of packing extra data resolution into MIDI by combining two values; see the sketch after this list]
  • Bidirectional control and visual feedback
  • Record automation directly from your synthesizer
  • MIDI Learn function for easy parameter and data size setup
  • Adjustable data rate and redundancy filters
  • Configurable MIDI Thru Filter
  • Easy draw and edit automation with multiple Data Sizes
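For the curious, an NRPN “message” is really just four ordinary CC messages in a row – CC 99 and 98 select the parameter number, and CC 6 and 38 carry the 14-bit value – which is why a tool like this has to assemble and stream them for you. A minimal sketch of that packing (the parameter and value numbers are arbitrary examples):

```python
def nrpn_messages(channel, parameter, value):
    """Return the four CC messages (as byte triples) that make up one NRPN write."""
    status = 0xB0 | (channel - 1)
    return [
        bytes([status, 99, (parameter >> 7) & 0x7F]),  # NRPN parameter MSB (CC 99)
        bytes([status, 98, parameter & 0x7F]),         # NRPN parameter LSB (CC 98)
        bytes([status, 6,  (value >> 7) & 0x7F]),      # data entry MSB (CC 6)
        bytes([status, 38, value & 0x7F]),             # data entry LSB (CC 38)
    ]

# e.g. set NRPN parameter 300 to 8000 on MIDI channel 1
for msg in nrpn_messages(1, 300, 8000):
    print(msg.hex(" "))
```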

User guide

Download from Maxforlive.com

https://www.facebook.com/gustavobravettilive/

iBassist is a virtual bass player for your iPad

Delivered... Ashley Elsdon | Scene | Tue 17 Jul 2018 10:57 pm

Unless you routinely have a bassist on hand whenever you need one, you might want to take a look at iBassist. It’s a cheaper option than keeping a bass player on retainer, and almost certainly a cheaper route to getting a bass line done.

The app basically turns your iPad into a versatile bass player to jam or compose anywhere and create grooves for installed drum apps. The app sends progression chords by MIDI, so you can have any synth app running in background audio for a more consistent jamming experience.

According to the developer’s description:

Bass lines are based in degrees, so you can apply any chord progression to any bass line. A valuable tool to apply different bass grooves to your songs. And the jam tool brings musical variations and new ideas on the way.

The Chord Progression editor is quick, easy to use and allows you to create or edit your progressions by choosing key notes – harmony by steps, MIDI detection or randomizing.

iBassist includes 10 Round Robin sampled natural bass sounds. Different styles and colors, from Modern Finger Bass, to warm Double Bass.

Live Pads lets you play live sessions on the go, with 8 assignable pads for Line-Progression-Jam, and lets you change between them by MIDI.

Song Mode. Choosing “Make Drums” in song mode will create drums for the whole song structure.

An Export MIDI function creates MIDI files from the Bass Line / Progression / Jam combination, or from whole song structures.

Built-In Effects: Compressor, Delay, Chorus, Reverb.
Parametric EQ
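To unpack the “based in degrees” idea from that description: the bass line stores scale positions rather than fixed pitches, so the same pattern re-spells itself over whatever chord is playing. A tiny illustration of the concept – not iBassist’s implementation, and the note numbers are just examples:

```python
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]    # semitone offsets for degrees 1-7

def bass_line(degrees, root_midi_note):
    """Turn a degree pattern (1-7) into MIDI notes relative to a chord root."""
    return [root_midi_note + MAJOR_SCALE[d - 1] for d in degrees]

pattern = [1, 1, 5, 6]                   # root, root, fifth, sixth
for chord_root in (36, 41, 43):          # C2, F2, G2 - a I-IV-V progression
    print(chord_root, "->", bass_line(pattern, chord_root))
```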

Whilst this might not be my regular choice of app I can see the appeal of something like iBassist for jamming and working out tracks.

iBassist costs $17.99 on the app store now

Exploring a journey from Bengali heritage to electronic invention

Delivered... Peter Kirn | Artists,Labels,Scene | Mon 16 Jul 2018 8:42 pm

Can electronic music tell a story about who we are? Debashis Sinha talks about his LP for Establishment, The White Dog, and how everything from Toronto noodle bowls to Bengali field recordings got involved.

The Canadian artist has a unique knack for melding live percussion techniques and electro-acoustic sound with digital manipulation, and in The White Dog, he dives deep into his own Bengali heritage. Just don’t think of “world music.” What emerges is deeply his, and composed in a way that’s entirely electro-acoustic in character, not a pastiche of someone else’s musical tradition glued onto some beats. And that’s what drew me to it – this is really the sound of the culture of Debashis, the individual.

And that seems connected to what electronic music production can be – where its relative ease and accessibility can allow us to focus on our own performance technique and a deeper sense of expression. So it’s a great chance not just to explore this album, but what that trip in this work might say to the rest of us.

CDM’s label side project Establishment put out the new release. I spoke to Debashis just after he finished a trip to Germany and a live performance of the album at our event in Berlin. He writes us from his home in Toronto.

First, the album:

I want to start with this journey you took across India. What was that experience like? How did you manage to gather research while in that process?

I’ve been to India many times to travel on my own since I turned 18 – usually I spend time with family in and near Kolkata, West Bengal and then travel around, backpacking style. Since the days of Walkman cassette recorders, I’ve always carried something with me to record sound. I didn’t have a real agenda in mind when I started doing it – it was the time of cassettes, really, so in my mind there wasn’t much I could do with these recordings – but it seemed like an important process to undertake. I never really knew what I was going to do with them. I had no knowledge of what sound art was, or radio art, or electroacoustic music. I switched on the recorder when I felt I had to – I just knew I had to collect these sounds, somehow, for me.

As the years went on and I understood the possibilities for using sound captured in the wild on both a conceptual and technical level, and with the advent of tools to use them easily, I found that to my surprise that the act of recording (when in India, at least) didn’t really change. I still felt I was documenting something that was personal and vital to my identity or heart, and the urge to turn on the recorder still came from a very deep place. It could easily have been that I gathered field sound in response to or in order to complete some kind of musical idea, but every time I tried to turn on the recorder in order to gather “assets” for my music, I found myself resisting. So in the end I just let it be, safe in the knowledge that whatever I gathered had a function for me, and may (or may not) in future have a function for my music or sound work. It didn’t feel authentic to gather sound otherwise.

Even though this is your own heritage, I suppose it’s simultaneously something foreign. How did you relate to that, both before and after the trip?

My father moved to Winnipeg, in the center of Canada, almost 60 years ago, and at the time there were next to no Indians (i.e., people from India) there. I grew up knowing all the brown people in the city. It was a different time, and the community was so small, and from all over India and the subcontinent. Passing on art, stories, myth and music was important, but not so much language, and it was easy to feel overwhelmed – I think that passing on of culture operated very differently from family to family, with no overall cultural support at large to bolster that identity for us.

My mom – who used to dance with Uday Shankar’s troupe – would corral all the community children to choreograph “dance-dramas” based on Hindu myths. The first wave of Indian people in Winnipeg finally built the first Hindu temple in my childhood – until then we would congregate in people’s basement altars, or in apartment building common rooms.

There was definitely a relationship with India, but it was one that left me what I call “in/between” cultures. I had to find my own way to incorporate my cultural heritage with my life in Canada. For a long time, I had two parallel lives — which seemed to work fine, but when I started getting serious about music it became something I really had to wrestle with. On the one hand, there was this deep and rich musical heritage that I had tenuous connections to. On the other hand, I was also interested in the 2-Tone music of the UK, American hardcore, and experimental music. I took tabla lessons in my youth, as I was interested in and playing drums, but I knew enough to know I would never be a classical player, and had no interest in pursuing that path, understanding even then that my practice would be eclectic.

I did have a desire to contribute to my Indian heritage from where I sat – to express somehow that “in/between”-ness. And the various trips I undertook on my own to India since I was a young person were in part an effort to explore what that expression might take, whether I knew it or not. The collections of field recordings (audio and later video) became a parcel of sound that somehow was a thread to my practice in Canada on the “world music” stage and later in the realms of sound art and composition.

One of the projects I do is a durational improvised concert called “The (X) Music Conference”, which is modeled after the all-night classical music concerts that take place across India. They start in the evening and the headliner usually goes on around 4am and plays for 3 or more hours. Listening to music for that long, and all night, does something to your brain. I wanted to give that experience to audience members, but I’m only one person, so my concert starts at midnight and goes to 7am. There is tea and other snacks, and people can sit or lie down. I wanted to actualize this idea of form (the classical music concert) suffused with my own content (sound improvisations) – it was a way to connect the music culture of India to my own practice. Using field recordings in my solo work is another, or re-presenting/-imagining Hindu myths another.

I think with the development of the various facets of my sound practice, I’ve found a way to incorporate this “form and content” approach, allowing the way that my cultural heritage functions in my psyche to express itself through the tools I use in various ways. It wasn’t an easy process to come to this balance, but along the way I played music with a lot of amazing people that encouraged me in my explorations.

In terms of integrating what you learned, what was the process of applying that material to your work? How did your work change from its usual idioms?

I went through a long process of compartmentalizing when I discovered (and consumer technology supported) producing electroacoustic work easily. When I was concentrating on playing live music with others on the stage, I spent a lot of time studying various drumming traditions under masters all over – Cairo, Athens, NYC, LA, Toronto – and that was really what kept me curious and driven, knowing I was only glimpsing something that was almost unknowable completely.

As the “world music” industry developed, though, I found the “story” of playing music based on these traditions less and less engaging, and the straight folk festival concert format more and more trivial – fun, but trivial – in some ways. I was driven to tell stories with sound in ways that were more satisfying to me, that ran deeper. These field recordings were a way in, and I made my first record with this in mind – Quell. I simply sat down and gathered my ideas and field recordings, and started to work. It was the first time I really sustained an artistic intention all the way through a major project on my own. As I gained facility with my tools, and as I became more educated on what was out there in the world of this kind of sound practice, I found myself seeking these kinds of sound contexts more and more.

However, what I also started to do was eschew my percussion experience. I’m not sure why, but it was a long time before I gave myself permission to introduce more musical and percussion elements into the sound art type of work I was producing. I think in retrospect I was making up rules that I thought applied, in an effort to navigate this new world of sound production – maybe that was what was happening. I think now I’m finding a balance between music, sound, and story that feels good to me. It took a while though.

I’m curious about how you constructed this. You’ve talked a bit about assembling materials over a longer span of time (which is interesting, too, as I know Robert is working the same way). As we come along on this journey of the album, what are we hearing; how did it come together? I know some of it is live… how did you then organize it?

This balance between the various facets of my sound practice is a delicate one, but it’s also driven by instinct, because really, instinct is all I have to depend on. Whereas before I would give myself very strict parameters about how or what I would produce for a given project, now I’m more comfortable drawing from many kinds of sound production practice.

Many of the pieces on “The White Dog” started as small ideas – procedural or mixing explorations. The “Harmonium” pieces were from a remix of the soundtrack to a video art piece I made at the Banff Centre in Canada, where I wanted to make that video piece a kind of club project. “entr’acte” is from a live concert I did with prepared guitar and laptop accompanying the works of Canadian visual artist Clive Holden. Tracks on other records were part of scores for contemporary dance choreographer Peggy Baker (who has been a huge influence on how I make music, speaking of being open). What brought all these pieces together was in large part instinct, but also a kind of story that I felt was being told. This cross-pollination of an implied dramatic thread is important to me.

And there’s some really beautiful range of percussion and the like. What are the sources for the record? How did you layer them?

I’ve quite a collection, and luckily I’ve built that collection through real relationships with the instruments, both technical and emotional/spiritual. They aren’t just cool sounds (although they’re that, too) — but each has a kind of voice that I’ve explored and understood in how I play it. In that regard, it’s pretty clear to me what instrument needs to be played or added as I build a track.

Something new happens when you add a live person playing a real thing inside an electronic environment. It’s something I feel is a deep part of my voice. It’s not the only way to hear a person inside a piece of music, but it’s the way I put myself in my works. I love metallic sounds, and sounds with a lot of sustain, or power. I’m intrigued by how percussion can be a texture as well as a rhythm, so that is something I explore. I’m a huge fan of French percussionist Le Quan Ninh, so the bass-drum-as-tabletop is a big part of my live setup and also my studio setup.

This programmatic element is part of what makes this so compelling to me as a full LP. How has your experience in the theater imprinted on your musical narratives?

My theater work encompasses a wide range of theater practice – from very experimental and small to quite large stages. Usually I do both the sound design and the music, meaning I’m responsible for pretty much anything coming out of a speaker, from sound effects to music.

My inspiration starts from many non-musical places. Mostly that’s the text/story, but not always — anything could spark a cue, from the set design to the director’s ideas to even how an actor moves. Being open to these elements has made me a better composer, as I often end up reacting to something that someone says or does, and following a path that ends up in music that I never would have made on my own. It has also made me understand better how to tell stories, or rather maybe how not to – the importance of inviting the audience into the construction of the story and the emotion of it in real time. Making the listener lean forward instead of lean back, if you get me.

This practice of collaborative storytelling of course has impact on my solo work (and vice versa) – it’s made me find a voice that is more rooted in story, in comparison to when I was spending all my time in bands. I think it’s made my work deeper and simpler in many ways — distilled it, maybe — so that the story becomes the main focus. Of course when I say “story” I mean not necessarily an explicit narrative, but something that draws the listener from end to end. This is really what drives the collecting and composition of a group of tracks for me (as well as the tracks themselves) and even my improvisations.

Oh, and on the narrative side – what’s going on with Buddha here, actually, as narrated by the ever Buddha-like Robert Lippok [composer/artist on Raster Media]?

I asked Robert Lippok to record some text for me many years ago, a kind of reimagining of the mind of Gautama Buddha under the bodhi tree in the days leading to his enlightenment. I had this idea that maybe what was going through his mind might not have been what we imagine when we think of the myth itself. I’m not sure where this idea came from – although I’m sure that hearing many different versions of the same myths from various sources while growing up had its effect – but it was something I thought was interesting. I do this often with my works (see above link to Kailash) and again, it’s a way I feel I can contribute to the understanding of my own cultural heritage in a way that is rooted in both my ancestors’ history and my own.

And of course, when one thinks of what the Buddha might have sounded like, I defy you to find someone who sounds more perfect than Robert Lippok.

Techno is some kind of undercurrent for this label, maybe not in the strict definition of the genre… I wonder actually if you could talk a bit about pattern and structure. There are these rhythms throughout that are really hypnotic; that regularity seems really important. How do you go about thinking about those musical structures?

The rhythms I seem drawn to run the gamut of time signatures and tempos. Of course, this comes from my studies of various music traditions and repertoire (Arabic, Greek, Turkish, West Asian, South Indian…). As a hand percussionist who spent many years playing and studying music from various cultures, I found a lot of parallels and cross-talk, particularly in the rhythms of the material I encountered. I delighted in finding the groove in various tempos and time signatures. There is a certain lilt to any rhythm; if you put your mind and hands to it, the muscles will reveal this lilt. At the same time, I find the sound material of electronic music very satisfying and clear. I’m at best a middling recording engineer, so capturing audio is not my forte – working in the box I find way easier. As I developed skills in programming and sound design, I found myself drawn to expressing the rhythms I’ve encountered in my life with new tools and sounds.

Regularity and grid are important in rhythm – even breaking the grid, or stretching it to its breaking point, has a place. (You can hear this very well in South Indian music, among others.) This grid undercurrent is the basis of electronic music and the tools used to make it. The juxtaposition of the human element with various degrees of quantization of electronic sound is something I think I’ll never stop exploring. Even working tightly with a grid has a kind of energy and urgency to it if you’re playing acoustic instruments. There’s a lot to dive into, and I’m planning to work with that idea a lot more for the next release(s).
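
A quick aside for the technically curious: “degrees of quantization” is easy to picture in code. Here’s a tiny, purely illustrative Python sketch – not from any particular tool, with made-up timing values – that pulls hand-played onsets toward a grid by an adjustable amount:

```python
def quantize(onsets_sec, grid_sec=0.125, strength=0.5):
    """Nudge each onset toward the nearest grid line.

    strength=0.0 keeps the human timing untouched;
    strength=1.0 snaps fully to the grid.
    """
    quantized = []
    for t in onsets_sec:
        nearest = round(t / grid_sec) * grid_sec
        quantized.append(t + (nearest - t) * strength)
    return quantized

# A slightly loose hand-played pattern, half-pulled onto a 16th-note grid at 120 BPM.
played = [0.02, 0.24, 0.51, 0.74, 1.01]
print(quantize(played, grid_sec=0.125, strength=0.5))
```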

And where does Alvin Lucier fit in, amidst this Bengali context?

The real interest for me in creating art lies in actualizing ideas, and Lucier is perhaps one of the masters of this – taking an idea about sound and making it real and spellbinding. “Ng Ta (Lucier Mix)” was a piece I started to make with a number of noodle bowls I found in Toronto’s Chinatown – the white ones with blue fishes on them. The (over)tones and rhythms of the piece as it came together reminded me of a work I’m really interested in performing, “Silver Streetcar for the Orchestra”, Lucier’s piece for amplified triangle. Essentially the musician plays an amplified triangle, muting and striking it in various places for the duration of the piece. It’s an incredible meditation, and to me “Ng Ta” on The White Dog is a meditation as well – it certainly came together in that way. And so the title.

I wrestle with the degree to which I invoke my cultural heritage in my work. Sometimes it’s very close to the surface, and the work is derived very directly from Hindu myth, say, or field recordings from Kolkata. Sometimes it simmers in other ways, and with varying strength. I struggle with whether to let it express itself instinctually or to shape it more directly, with more intent. Ultimately, the music I make is from me, and all those ideas apply whether or not I think of them consciously.

One of the problems I have with the term “world music” is it’s a marketing term to allow the lumping together of basically “music not made by white people”, which is ludicrous (as well as other harsher words that could apply). To that end, the urge to classify my music as “Indian” in some way, while true, can also be a misnomer or an “out” for lazy listening. There are a billion people in India, I believe, and more on the subcontinent and abroad. Why wouldn’t a track like “entr’acte” be “Indian”? On the other hand, why would it? I’m also a product of the west. How can I manage those worlds and expectations and still be authentic? It’s something I work on and think about all the time – but not when I’m actually making music, thank goodness.

I’m curious about your live set, how you were working with the Novation controllers, and how you were looping, etc.

My live sets are always, always constructed differently – I’m horrible that way. I design new effects chains and different ways of using my outboard MIDI gear depending on the context. I might use contact mics on a kalimba and a prepared guitar for one show, then a bunch of external percussion that I loop and chop live for another, just my voice for another, and only field recordings from India for yet another. I’ve used Ableton Live to drive a lot of sound installations as well, using follow actions on clips (“any” comes in handy a lot), and I’ve even made some installations that do the same thing with live input (making sure I have a 5-second delay on that input has… been occasionally useful, shall we say).

The concert I put together for The White Dog project is one that I try and keep live as much as possible. It’s important to me to make sure there is room in the set for me to react to the room or the moment of performance – this is generally true for my live shows, but since I’m re-presenting songs that have a life on a record, finding a meaningful space for improv was trickier.

Essentially, I try and have as many physical knobs and faders as possible – either a Novation Launch Control XL or a Behringer BCR2000 [rotary controller], which is a fantastic piece of gear (I know – Behringer?!). I use a Launchpad Mini to launch clips and deal with grid-based effects, and I also have a little Launch Control mapped to the effects parameters and track views or effects I need to see and interact with quickly. Since I’m usually using both hands to play/mix, I always have a Logidy UMI3 to control live looping from a microphone. It’s a 3-button pedal which is luckily built like a tank, considering how many times I’ve dropped it. I program it in various ways depending on the project – for The White Dog concerts it’s mapped via MIDI learn to the Ableton Looper’s record/overdub, undo, and clear controls – but the Logidy software allows you to go a lot deeper. I have the option to feed up to 3 effects chains, which I sometimes switch on the fly with dummy clips.
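
For anyone wondering what that kind of pedal routing looks like in practice, here’s a rough, hypothetical sketch – Python with the mido library, made-up port names and note-to-CC assignments, not the artist’s actual configuration – that turns three footswitch notes into three distinct CCs a DAW’s MIDI-learn could map to record/overdub, undo, and clear:

```python
import mido  # requires mido + python-rtmidi; virtual ports need a backend that supports them

PEDAL_PORT = "Logidy UMI3"                 # hypothetical input port name
BUTTON_TO_CC = {60: 20, 61: 21, 62: 22}    # pedal note -> CC (record/overdub, undo, clear)

with mido.open_input(PEDAL_PORT) as pedal, \
     mido.open_output("Looper Control", virtual=True) as looper_out:
    for msg in pedal:
        if msg.type == "note_on" and msg.note in BUTTON_TO_CC:
            # Emit a momentary CC that the looper (or a MIDI-learn mapping) can latch onto.
            looper_out.send(mido.Message("control_change",
                                         control=BUTTON_TO_CC[msg.note], value=127))
```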

The Max For Live community has been amazing and I often keep some kind of chopper on one of the effect chains, and use the User mode on the Launchpad Mini to punch in and out or alter the length of the loop or whatnot. Sometimes I keep controls for another looper on that grid.

Basically, if you want an overview – I’m triggering clips, and have a live mic that I use for percussion and voice for the looper. I try and keep the mixer in a 1:1 relationship with what’s being played/played back/routed to effects because I’m old school – I find it tricky to do much jumping around when I’m playing live instruments. It’s not the most complicated setup but it gets the job done, and I feel like I’ve struck a balance between electronics and live percussion, at least for this project.

What else are you listening to? Do you find that your musical diet is part of keeping you creative, or is it somehow partly separate?

I jump back and forth – sometimes I listen to tons of music with an ear to expanding my mind, sometimes just to enjoy myself. Sometimes I stop listening to music just because I’m making a lot on my own. One thing I try to always take care of is my mind. I try to keep it open and curious, and try to always find new ideas to ponder. I am inspired by a lot of different things – paintings, visual art, music, sound art, books – and in general I’m really curious about how people make an idea manifest – science, art, economics, architecture, fashion, it doesn’t matter. Looking into, or trying to work out, that jump from the idea in the mind to its actual real-life expression is something I find endlessly fascinating and inspiring, even when I’m not totally sure how it might have happened. It’s the guessing that fuels me.

That being said, at the moment I’m listening to lots of things that I feel are percolating ideas in me for future projects, most of it coming from digging around the amazing Bandcamp site. Frank Bretschneider turned me on to goat(jp), an incredible quartet from Japan with serious rhythmic and textural muscle. I’ve rediscovered the fun of listening to lots of Stereolab, who always seem to release the same record but still make it sound fresh. Our pal Robert Lippok just released a new record and I am so down with it – he always makes music that straddles the emotional and the electronic, which is something I’m so interested in doing.

I continue to make my way through the catalog of French percussionist Le Quan Ninh, who is an absolute warrior in his solo percussion improvisations. Tanya Tagaq is an incredible singer from Canada – I’m sure many of the people reading this know of her – and her live band – drummer Jean Martin, violinist Jesse Zubot, and choirmaster Christine Duncan, an incredible improv vocalist in her own right – are unstoppable. We have a great free music scene in Toronto, and I love so many of the musicians who are active in it, many of them internationally known – Nick Fraser (drummer/composer), Lina Allemano (trumpet), Andrew Downing (cello/composer), Brodie West (sax) – not to mention folks like Sandro Perri and Ryan Driver. They’ve really lit a fire under me to be fierce and in the moment – listening to them is a recurring lesson in what it means to be really punk rock.

Buy and download the album now on Bandcamp.

https://debsinha.bandcamp.com/album/the-white-dog

The post Exploring a journey from Bengali heritage to electronic invention appeared first on CDM Create Digital Music.

Bitwig Studio 2.4: crazy powerful sampler, easier control

Delivered... Peter Kirn | Scene | Thu 12 Jul 2018 6:44 pm

The folks at Bitwig have been picking up speed. And version 2.4, beta testing now, brings some promising sampler and controller features.

The big deal here is that Bitwig is going with a full-featured sampler. And as Ableton Live and Native Instruments’ Maschine pursue somewhat complex and fragmented approaches, maybe Bitwig will step in and deliver a sampler that just does all the stuff you expect in one place. (I’m ready to put these different devices head to head. I like to switch workflows to keep fresh, anyway, so no complaints. Bitwig just wins by default on Linux, since Ableton and NI don’t show up for the competition. Ahem.)

Meet the new Sampler: manipulate pitch and time, either locked together in the traditional fashion or independently, as a digital wavetable or granular instrument. Those modes on their own aren’t new, but this is a nice way of combining everything into a single interface.

Sampler

The rebuilt Sampler introduces a powerful wavetable/granular instrument. At its heart are multiple modes that combine what are effectively different instruments and ways of working with sound into a single interface:

“Repitch” / Speed + pitch together: The traditional sampler mode, with negative speeds, too (allowing it to behave the way a record player / record-scratch / tape transport does).

“Cycles” / Speed only: Speed changes, pitches stay the same. There’s also a Formant control, and the ability to switch on and off keyboard tracking. (In other words, you can scale from realistic-sounding speed changes to extreme metallic variations.)

“Textures” / Granular resampling / independent pitch and speed: Granular resynthesis divides the sound into tiny grains, allowing pitch and time to be manipulated independently (or in combination), plus textural effects. Independent speed, grain size, and grain motion (randomization) are all available as parameters.

Freeze: Every mode lets you directly manipulate the sample playhead live, using a controller or the Bitwig modulators – emulating the position of a needle on a record, a playhead on a tape, or the read position in a granular playback device, depending on the mode.

Oh. Okay. Yeah, so those last two are to me the way Ableton Live should have worked from the beginning – and the way a lot of Max, Reaktor, Pd, and SuperCollider patches/code might work – but it’s fantastic to see them in a DAW. This opens up a lot of live performance and production options. If they’ve nailed it, it could be a reason to switch to Bitwig.
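
If you’re wondering what “independent pitch and speed” means under the hood, here’s a bare-bones granular resampling sketch in Python/NumPy – obviously not Bitwig’s code, just the general idea: a read position crawls through the source at one rate (speed) while each grain is transposed by another (pitch), and the grains are overlapped back together:

```python
import numpy as np

def granular(src, speed=0.5, pitch=1.5, grain=2048, hop=512):
    """Toy granular resampler: `speed` sets how fast we move through the source,
    `pitch` transposes each grain independently of that motion."""
    out = np.zeros(int(len(src) / max(speed, 1e-6)) + grain)
    window = np.hanning(grain)
    read, write = 0.0, 0
    while read + grain * pitch + 1 < len(src) and write + grain < len(out):
        idx = read + np.arange(grain) * pitch      # resample one grain by `pitch`
        lo = idx.astype(int)
        frac = idx - lo
        g = src[lo] * (1 - frac) + src[lo + 1] * frac
        out[write:write + grain] += g * window     # overlap-add the windowed grain
        read += hop * speed                        # advance through the source at `speed`
        write += hop                               # advance through the output at a fixed rate
    return out

# Example: a 220 Hz test tone played at half speed, transposed up a fifth.
sr = 44100
tone = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
stretched = granular(tone, speed=0.5, pitch=1.5)
```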

But there’s more:

Updated Multisampler Editor: Bitwig’s Sampler already had multisampler capabilities – letting you combine different samples into a single patch, as you might do for a complex instrument, for instance. Now, you can make groups, choose more easily what you see when editing (revealing samples as you play, for instance), and set modulation per zone. There’s also ping-pong looping and automatic zero-crossing edits (so you can slice up sounds without getting pops and clicks).

Multi-sample mode lets you work with zones in new ways, for more complex sampling patches.
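
Those automatic zero-crossing edits are a small but lovely detail. The idea, sketched in Python/NumPy (an illustration, not Bitwig’s implementation): nudge any slice point to the nearest sample where the waveform changes sign, so the cut doesn’t click:

```python
import numpy as np

def snap_to_zero_crossing(audio, index, search=512):
    """Return the sample index nearest `index` where the signal changes sign."""
    lo = max(index - search, 0)
    hi = min(index + search, len(audio))
    signs = np.signbit(audio[lo:hi]).astype(np.int8)
    crossings = np.where(np.diff(signs) != 0)[0] + lo
    if crossings.size == 0:
        return index        # nothing nearby; keep the original slice point
    return int(crossings[np.abs(crossings - index).argmin()])

# Example: clean up a slice marker placed at sample 44100.
# start = snap_to_zero_crossing(audio, 44100)
# slice_ = audio[start:start + 22050]
```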

Sequence modulation

There’s a new device that lets you step sequence modulation. Here’s how they describe that:

ParSeq-8 is a step sequencer for modulation.

ParSeq-8 is a unique parameter modulation sequencer, where each step is its own modulation source. It can use the project’s clock, advance on note input, or just run freely in either direction. As it advances, each step’s targets are modulated and then reset. It’s a great way to make projects more dynamic, whether in the studio or on the stage. (Along the way, our Steps modulator got some improvements such as ping-pong looping so check it out too.)
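
To make the concept concrete – this is a toy model, not ParSeq-8 itself – a modulation step sequencer boils down to something like the following: each step holds a value; each clock tick or incoming note applies that value to whatever targets you’ve attached, then moves on:

```python
class StepModSequencer:
    """Toy parameter-modulation sequencer: one value per step, any number of targets."""

    def __init__(self, steps, direction=1):
        self.steps = list(steps)      # e.g. [0.0, 0.2, 0.5, 1.0, ...]
        self.direction = direction    # +1 runs forward, -1 runs backward
        self.pos = 0
        self.targets = []             # callables that accept a modulation value

    def add_target(self, fn):
        self.targets.append(fn)

    def advance(self):
        """Call on each clock tick or note input: modulate targets, then step on."""
        value = self.steps[self.pos]
        for target in self.targets:
            target(value)
        self.pos = (self.pos + self.direction) % len(self.steps)

# Example: sweep a hypothetical filter cutoff over an 8-step cycle.
seq = StepModSequencer([0.0, 0.2, 0.5, 1.0, 0.5, 0.2, 0.1, 0.0])
seq.add_target(lambda v: print(f"cutoff -> {v:.2f}"))
for _ in range(8):
    seq.advance()
```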

Also in the modulation category, there’s a Note Counter — count up each incoming note and create cycles of modulation as a result.

Note Counter.

Note FX Layer.

More powerful with controllers

Bitwig has been moving forward in making it easy to map hardware controls to software, even as rival tools (cough, Ableton) haven’t advanced since early versions. That’s useful if you have a particular custom hardware controller you want to use to manipulate the instruments, effects, and mixing onscreen.

Now there’s a new visualization to give you clear onscreen feedback of what you’re doing, making that hardware/software connection much easier to see.

Visualize controllers as you use them – so the knob you turn on your hardware makes something visible onscreen.

There’s also MIDI channel support. MIDI has had channels since the protocol was unveiled in the 80s – a way of dividing up multiple streams of information. Now you can put them to use: incoming MIDI can be mapped and filtered by channel. That’s … not exciting, okay, but there are dedicated devices for making those channels useful in chains and so on. And that is fairly exciting.

MIDI channel support – essential for working with MIDI, but implemented here in a way that’s powerful for manipulating streams of control and information.
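
If you’ve never had a reason to care about channels, the idea is simply that one MIDI stream can carry sixteen separate ones. Here’s a rough Python sketch with the mido library – hypothetical port and handler names – of what a channel filter/router does:

```python
import mido  # requires mido + python-rtmidi

def split_by_channel(port_name, routes):
    """`routes` maps a MIDI channel (0-15) to a handler for messages on that channel."""
    with mido.open_input(port_name) as inport:
        for msg in inport:
            handler = routes.get(getattr(msg, "channel", None))  # system messages have no channel
            if handler is not None:
                handler(msg)

# Example: channel 1 (index 0) feeds a synth chain, channel 10 (index 9) feeds a drum chain.
# split_by_channel("My Controller", {0: synth_chain, 9: drum_chain})
```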

And more stuff

Also in this release:

  • Bit-8 audio degrader gets new quantization and parameters for glitching or lightly distorting sound
  • Note FX Layer creates parallel note effects
  • There’s more feedback in the footer of the screen when you hover over parameters/values
  • Resize track widths and scene widths
  • Color-code scenes

Looks like a great upgrade. Beta testing starts soon, to be followed by a release as a free upgrade for Upgrade Plan users this summer.

http://bitwig.com

The post Bitwig Studio 2.4: crazy powerful sampler, easier control appeared first on CDM Create Digital Music.

Audeonic bring us StreamByter, an AUv3 that lets you make your own MIDI FX

Delivered... Ashley Elsdon | Scene | Sun 1 Jul 2018 10:26 pm

Audeonic are well known for their MIDI apps on iOS, and on macOS too. Now they’ve broken out a module from their very popular app MidiFire. StreamByter is now available as an Audio Unit plugin for creating your own custom MIDI effects. StreamByter can be used as an Apple Audio Unit (AU) effect or as a standalone app connected via CoreMIDI virtual ports.

To make use of StreamByter, your iOS device needs to be running at least iOS 11, and you’ll also need a suitable Audio Unit host app like AUM, apeMatrix, Cubasis or Sequencism. If you want to use StreamByter with CoreMIDI instead, a routing app like MidiFire is recommended, and an iOS device with at least iOS 8 is required.

So what can you actually use StreamByter to do? Here are some examples:

  • You can extend the MIDI processing functionality of any AU host, such as AUM, apeMatrix, Cubasis
  • Remap channels, notes, controllers (anything MIDI)
  • Filter MIDI events coarsely or finely
  • Clone or Delay any event
  • Send any event automatically when plugin is loaded
  • You can also create complex effects using programming concepts like conditionals, loops, variables (including array, timing and random), and math operators.

StreamByter is configured using a textual rules ‘language’ that defines how the effect should operate; Audeonic’s site has the full documentation.
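
Purely as an illustration of the kinds of byte-level transforms those rules can describe – remap a channel, filter an event, clone it elsewhere – here’s the same sort of logic written out in plain Python (this is not StreamByter syntax, and the channel and CC choices are arbitrary):

```python
def process(event):
    """`event` is a raw MIDI message as a list of bytes, e.g. [0x90, 60, 100]."""
    status, channel = event[0] & 0xF0, event[0] & 0x0F
    out = []

    if status == 0xB0 and event[1] == 1:
        return out                              # filter: drop mod wheel (CC 1) entirely

    if status == 0x90 and channel == 0:
        event = [0x91] + event[1:]              # remap: note-ons from channel 1 to channel 2

    out.append(event)

    if status == 0x90:
        out.append([0x95] + event[1:])          # clone: echo note-ons to channel 6 as well

    return out

# A note-on on channel 1 comes out remapped to channel 2, plus a clone on channel 6.
print(process([0x90, 60, 100]))   # [[145, 60, 100], [149, 60, 100]]
```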

StreamByter costs just $6.99 on the App Store, which is a lot cheaper than I expected, given how powerful it is.

I had a look on the App Store and saw this review, which was quite inspiring:

“Whether in the context of MIDIFire or now as a stand-alone app, StreamByter has allowed me to realize my musical intentions more fully than ever before. This is true because of Audeonic’s unwavering support in helping its users to create StreamByter code to realize creative musical ideas and functions.

StreamByter should be part of every MIDI system.

Thank you Audeonic! You have brought joy to my musical world and that of many others.”

So, if you’ve been looking for a way to bring your own MIDI FX to life, I think StreamByter may be a good place to start.

The post Audeonic bring us StreamByter, an AUv3 that lets you make your own MIDI FX appeared first on CDM Create Digital Music.
