NI Massive X synth sees first features, interface revealed

Delivered... Peter Kirn | Scene | Thu 14 Mar 2019 2:58 pm

Native Instruments’ Massive synth defined a generation of soft synths and left a whole genre or two in its wake. But its sequel remains mysterious. Now the company is revealing some of what we can expect.

First, temper your expectations: NI aren’t giving us any sound samples or a release date. (It’s unclear whether the blog talking about “coming months” refers just to this blog series or … whether we’re waiting some months for the software, which seems possible.)

What you do get to see, though, is some of what I got a preview of last fall.

After a decade and a half, making a satisfying reboot of Massive is a tall order. What’s encouraging about Massive X is that it seems to return to some of the original vision of creator Mike Daliot. (Mike is still heavily involved in the new release, too, having crafted all 125 wavetables himself, among other things.)

So Massive X, like Massive before it, is all about making complex modulation accessible – about providing some of the depth of a modular in a fully designed semi-modular environment. Those are packaged into a UI that’s cleaner, clearer, prettier – and finally, scalable. And since this is not 2006, the sound engine beneath has been rewritten – another reason I’m eager to finally hear it in public form.

Massive X is still Massive. That means it incorporates features that are now so widely copied, you would be forgiven for forgetting that Massive did them first. That includes drag-and-drop modulation, the signature ‘saturn ring’ indicators of modulation around knobs, and even aspects of the approach to sections in the UI.

What’s promising is really the approach to sound and modulation. In short, revealed publicly in this blog piece for the first time:

Two dedicated phase modulation oscillators. Phase modulation was one of the deeper features of the original – and, if you could figure out Yamaha’s arcane approach to programming, of instruments like the DX7. But now it’s more deeply integrated with the Massive architecture, and there’s more of it.

Lots of noise. In addition to those hundred-plus wavetables for the oscillators, you also get dozens of noise sources. (Rain! Birdies!) That rather makes Massive into an interesting noise synth, and should open up lots of sounds that aren’t, you know, angry EDM risers and basslines.

New filters. Comb filters, parallel and serial routing, and new sound. The filters are really what make a lot of NI’s latest generation stuff sound so good (as with a lot of newer software), so this is one to listen for.

New effects algorithms. Ditto.

Expanded Insert FX. This was another of the deeper features in Massive – and a case of the semi-modular offering some of the power of a full-blown modular, in a different (arguably, if you like, more useful) context. Since this can include both effects and oscillators, there are some major routing possibilities. Speaking of which:

Audio routing. Route an oscillator to itself (phase feedback), or route oscillators to one another (yet more phase modulation), and make other connections you would normally expect of a modular synth, not necessarily even a semi-modular one.

Modulators route to the audio bus, too – so again like modular hardware, you can treat audio and modulation interchangeably.

More envelopes. Now you get up to nine of these, and unique new devices like a “switcher” LFO. New “Performers” can use time signature-specific rhythms for modulation, and you can trigger snapshots.

It’s a “tracker.” Four Trackers let you use MIDI as assignable modulation.

Maybe this is an oversimplification, but at the end of the day, it seems to me this is really about whether you want to get deep with this specific, semi-modular design, or go into a more open-ended modular environment. The tricky thing about Massive X is, it might have just enough goodies to draw in even the latter camp.

And, yeah, sure, it’s late. But … Reaktor has proven to us in the past that some of the stuff NI does slowest can also be the stuff the company does best. Blame some obsessive engineers who are totally uninterested in your calendar dates, or, like, the forward progression of time.

For a lot of us, Massive X will have to compete with the fact that on the one hand, the original Massive is easy and light on CPU, and on the other, there are so many new synths and modulars to play with in software. But let’s keep an eye on this one.

And yes, NI, can we please hear the thing soon?

https://blog.native-instruments.com/massive-x-lab-welcome-to-massive-x/

Hey, at least I can say – I think I was the first foreign press to see the original (maybe even the first press meeting, full stop), I’m sure because at the time, NI figured Massive would appeal only to CDM-ish synth nerds. (Then, oops, Skrillex happened.) So I look forward to Massive X accidentally creating the Hardstyle Bluegrass Laser Tag craze. Be ready.

Use Ableton Live faster with the free Live Enhancement Suite

Delivered... Peter Kirn | Scene | Mon 11 Mar 2019 6:30 pm

Day in, day out, a lot of producers spend a lot of time editing in Ableton Live. Here’s a free tool that automates some common tasks so you can work more quickly – easing some FL Studio envy in the process.

This one comes to us from Madeleine Bloom’s terrific Sonic Bloom, the best destination for resources on learning and using Ableton Live. Live Enhancement Suite is Windows-only for the moment, but a Mac version is coming soon.

The basic idea is, LES adds shortcuts for producers, and some custom features (like sane drawing) you might expect from other tools:

Add devices (like your favorite plug-ins) using a customizable pop-up menu of your favorites (with a double right-click)

Draw notes easily with the ~ key in Piano Roll.

Pop up a shortcut menu with scales in Piano Roll

Add locators (right shift + L) at the cursor

Pan with your mouse, not just the keyboard (via the middle mouse button, so you’ll need a three-button mouse for this one)

Save multiple versions (a feature FL Studio users know well)

Ctrl-shift-Z to redo

Alt-E to view envelope mode in piano roll

And there are more customizations and multi-monitor support, too.

Ableton are gradually addressing long-running user requests to make editing easier; Live 10.1 builds on the work of Live 10. Case in point: 10.1 finally lets you solo a selected track (mentioned in the video as previously requiring one of these shortcuts). But it’s likewise nice to see users add in what’s missing.

Oh, and… you’re totally allowed to call it “Ableton.” People regularly refer to cars by the make rather than the model. We know what you mean.

Here’s a video walking through these tools and the creator Dylan Tallchief’s approach:

More info:

LES Collaborators:
Inverted Silence: https://soundcloud.com/invertedsilence
Aevi: https://twitter.com/aevitunes
Sylvian: https://sylvian.co/

https://www.patreon.com/dylantallchief
https://www.twitter.com/dylantallchief
https://soundcloud.com/dylantallchief
https://facebook.com/dylantallchief
https://www.twitch.tv/dylantallchief

Give it a go – I’ll try to check in when there’s a Mac version.

https://enhancementsuite.me/

PS, Windows users will want to check out the excellent open source AutoHotkey for automation, generally.

Unique takes on delay and tremolo from K-Devices, now as plug-ins

Delivered... Peter Kirn | Scene | Fri 8 Mar 2019 7:20 pm

K-Devices have brought alien interfaces and deep modulation to Max patches – now they’re doing plug-ins. And their approach to delay and tremolo isn’t quite like what you’ve seen before, a chance to break out of the usual patterns of how those effects work. Meet TTAP and WOV.

“Phoenix” is the new series of plug-ins from K-Devices, who previously had focused on Max for Live. Think part glitchy IDM, part spacey analog retro – and the ability to mix the two.

TTAP

TTAP is obviously a play on both multi-tap delay and tape, and it’s another multi-faceted experiment with analog and digital effects.

At its heart, there are two buffers with controls for delay time, speed, and feedback. You can sync time controls or set them free. But the basic idea here is you get smooth or glitchy buffers warping around based on modulation and time you can control. There are some really beautiful effects possible:

WOV

WOV is a tremolo that’s evolved into something new. You can leave it as a plain vanilla tremolo (a regular-rate amplitude shifter), but you can also adjust how sensitively it responds to an incoming signal. And there’s an eight-step sequencer. There are extensive controls for shaping waves for the effect, and a Depth section that’s, well, deep – or that lets you turn this tremolo into a kind of gate.

These are the sorts of things you could do with a modular and a number of modules, but having it in a single, efficient, integrated plug-in where you get straight at the controls without having to do a bunch of patching – that’s something.


Right now, each plug-in is on sale (25% off) for 45 EUR including VAT (about forty-two bucks for the USA), or 40% off if you buy both. Through March 17.

VST/VST3/AU/AAX, Mac and Windows.

More:

https://k-devices.com/

How to make a multitrack recording in VCV Rack modular, free

Delivered... Peter Kirn | Scene | Wed 6 Mar 2019 6:00 pm

In the original modular synth era, your only way to capture ideas was to record to tape. But that same approach can be liberating even in the digital age – and it’s a perfect match for the open VCV Rack software modular platform.

Competing modular environments like Reaktor, Softube Modular, and Cherry Audio Voltage Modular all run well as plug-ins. That functionality is coming soon to a VCV Rack update, too – see my recent write-up on that. In the meanwhile, VCV Rack is already capable of routing audio into a DAW or multitrack recorder – via the existing (though soon-to-be-deprecated) VST Bridge, or via inter-app routing schemes on each OS, including JACK.

Those are all good solutions, so why would you bother with a module inside the rack?

Well, for one, there’s workflow. There’s something nice about being able to just keep this record module handy and grab a weird sound or nice groove at will, without having to shift to another tool.

Two, the big ongoing disadvantage of software modular is that it’s still pretty CPU intensive – sometimes unpredictably so. Running Rack standalone means you don’t have to worry about overhead from the host, or its audio driver settings, or anything like that.

A free recording solution inside VCV Rack

What you’ll need to make this work are the free NYSTHI modules for VCV Rack, available via Rack’s plug-in manager. They’re free, though – get ready, there are a hell of a lot of them.

Type “recorder” into the search box for modules, and you’ll see different options from NYSTHI – current at least as of this writing.

2 Channel MasterRecorder is a simple stereo recorder.
2 Channel MasterRecorder 2 adds various features: monitoring outs, autosave, a compressor, and “stereo massaging.”
Multitrack Recorder is a multitrack recorder with 4- or 8-channel modes.

The multitrack is the one I use the most. It allows you to create stems you can then mix in another host, or turn into samples (or, say, load onto a drum machine or the like), making this a great sound design tool and sound starter.

This is creatively liberating for the same reason it’s actually fun to have a multitrack tape recorder in the same studio as a modular, speaking of vintage gear. You can muck about with knobs, find something magical, and record it – and then not worry about going on to do something else later.

The AS mixer, routed into NYSTHI’s multitrack recorder.

Set up your mix. The free included Fundamental modules in Rack will cover the basics, but I would also go download Alfredo Santamaria’s excellent selection, the AS modules, also in the Plugin Manager, and also free. Alfredo has created friendly, easy-to-use 2-, 4-, and 8-channel mixers that pair perfectly with the NYSTHI recorders.

Add the mixer, route your various parts, set level (maybe with some temporary panning), and route the output of the mixer to the Audio device for monitoring. Then use the ‘O’ row to get a post-fader output with the level.

Configure the recorder. Right-click on the recorder for an option to set 24-bit audio if you want more headroom, or to pre-select a destination. Set 4- or 8-track mode with the switch. Set CHOOSE FILE if you want to manually select where to record.

There are trigger ins and outs, too, so apart from just pressing the START and STOP buttons, you can either trigger a sequencer or clock directly from the recorder, or vice versa.

Record away! And go to town… when you’re done, you’ll get a stereo WAV file, or a 4- or 8-track WAV file. Yes, that’s one file with all the tracks. So about that…

Splitting up the multitrack file

This module produces a single, multichannel WAV file. Some software will know what to do with that. Reaper, for instance, has excellent multichannel support throughout, so you can just drag and drop into it. Adobe’s Audition CS also opens these files, but it can’t quickly export all the stems.

Software like Ableton Live, meanwhile, will just throw up an error if you try to open the file. (Bad Ableton! No!)

It’s useful to have individual stems anyway. ffmpeg is an insanely powerful cross-platform tool capable of doing all kinds of things with media. It’s completely free and open source, it runs on every platform, and it’s fast and deep. (It converts! It streams! It records!)

Installing is easier than it used to be, thanks to a cleaned-up site and pre-built binaries for Mac and Windows (plus of course the usual easy Linux installs):

https://ffmpeg.org/

Unfortunately, it’s so deep and powerful, it can also be confusing to figure out how to do something. Case in point – this audio channel manipulation wiki page.

In this case, you can use the map channel “filter” to make this happen. So for eight channels, I do this:

ffmpeg -i input.wav -map_channel 0.0.0 0.wav -map_channel 0.0.1 1.wav -map_channel 0.0.2 2.wav -map_channel 0.0.3 3.wav -map_channel 0.0.4 4.wav -map_channel 0.0.5 5.wav -map_channel 0.0.6 6.wav -map_channel 0.0.7 7.wav

But because this is a command line tool, you could create some powerful automated workflows for your modular outputs now that you know this technique.
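If you find yourself doing this a lot, here’s a minimal Python sketch of that kind of automation – my own illustration, not something from NYSTHI or the ffmpeg docs, and the input filename and channel count are just placeholders:

import subprocess

def split_multichannel(input_wav, channels=8):
    """Split a multichannel WAV into mono stems using ffmpeg's -map_channel."""
    cmd = ["ffmpeg", "-y", "-i", input_wav]
    for ch in range(channels):
        # 0.0.N = first input file, first audio stream, channel N
        cmd += ["-map_channel", f"0.0.{ch}", f"{ch}.wav"]
    subprocess.run(cmd, check=True)

# Example: an 8-track file from the NYSTHI Multitrack Recorder
split_multichannel("input.wav", channels=8)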

Sound Devices, the folks who make excellent multichannel recorders, also have a free Mac and Windows tool called Wave Agent which handles this task if you want a GUI instead of the command line.

https://www.sounddevices.com/products/accessories/software/wave-agent

That’s worth keeping around, too, since it can also mix and monitor your output. (No Linux version, though.)

Record away!

I really like this way of working, in that it lets you focus on the modular environment instead of juggling tools. I actually hope we’ll see a Fundamental module for the task in the future. Rack’s modular ecosystem changes fast, so if you find other useful recorders, let us know.

https://vcvrack.com/

This free Ableton Live device makes images into wavetables

Delivered... Peter Kirn | Scene | Thu 28 Feb 2019 7:46 pm

It’s the season of the wavetable – again. With Ableton Live 10.1 on the horizon and its free Wavetable device, we’ve got yet another free Max for Live device for making sound materials – and this time, you can make your wavetables from images.

Let’s catch you up first.

Ableton Live 10.1 will bring Wavetable as a new instrument to Standard and Suite editions – arguably one of the bigger native synth additions to Live in its history, ranking with the likes of Operator. And sure, as when Operator came out, you already have plug-ins that do the same; Ableton’s pitch is as always their unique approach to UI (love it or hate it), and integration with the host, and … having it right in the box:

Ableton Live 10.1: more sound shaping, work faster, free update

Earlier this week, we saw one free device that makes wavetables for you, built as a Max for Live device. (Odds are anyone able to run this will have a copy of Live with Wavetable in it, since it targets 10.1, but it also exports to other tools). Wave Weld focuses on dialing in the sounds you need and spitting out precise, algorithmic results:

Generate wavetables for free, for Ableton Live 10.1 and other synths

One thing Wave Weld cannot do, however, is make a wavetable out of a picture of a cat.

For that, you want Image2Wavetable. The name says it all: it generates wavetable samples from image data.

This means if you’re handy with graphics software, or graphics code like Processing, you can also make visual patterns that generate interesting wavetables. It reminds me of my happy hours and hours spent using U&I Software’s ground-breaking MetaSynth, which employs some similar concepts to build an entire sound laboratory around graphic tools. (It’s still worth a spin today if you’ve got a Mac; among other things, it is evidently responsible for those sweeping digital sounds in the original Matrix film, I’m told.)
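Just to illustrate the concept – this is a rough sketch of the general idea, not how Image2Wavetable itself is implemented – here’s how you might turn the rows of an image into wavetable frames in Python, assuming Pillow and NumPy:

import numpy as np
from PIL import Image

def image_to_wavetable(path, frame_size=2048, frames=64):
    """Sketch: read pixel brightness row by row, one row per wavetable frame."""
    img = Image.open(path).convert("L").resize((frame_size, frames))
    table = np.asarray(img, dtype=np.float32) / 255.0   # brightness 0..1
    table = table * 2.0 - 1.0                           # center around zero
    table -= table.mean(axis=1, keepdims=True)          # remove DC offset per frame
    return table                                        # shape: (frames, frame_size)

wavetable = image_to_wavetable("cat.png")
print(wavetable.shape)

Concatenate the frames and write them out as audio at the frame size your synth expects, and you have something you can drop onto a wavetable oscillator.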

Image2Wavetable is new, the creation of Dillon Bastan and Carlo Cattano – and there are some rough edges, so be patient and it sounds like they’re ready to hear some feedback on how it works.

But the workflow is really simple: drag and drop image, drag and drop resulting wavetable into the Wavetable instrument.

Okay, I suspect I know what I’m doing for the rest of the night.

Image2Wavetable Device [maxforlive.com]

Generate wavetables for free, for Ableton Live 10.1 and other synths

Delivered... Peter Kirn | Scene | Tue 26 Feb 2019 10:08 pm

Wavetables are capable of a vast array of sounds. But just dumping arbitrary audio content into a wavetable is unlikely to get the results you want. And that’s why Wave Weld looks invaluable: it makes it easy to generate useful wavetables, in an add-on that’s free for Max for Live.

Ableton Live users are going to want their own wavetable maker very soon. Live 10.1 will add Wavetable, a new synth based on the technique. See our previous preview:

Ableton Live 10.1: more sound shaping, work faster, free update

Live 10.1 is in public beta now, and will be free to all Live 10 users soon.

So long as you have Max for Live to run it, Wave Weld will be useful with other synths as well – including the developer’s own Wave Junction.

Because wavetables are periodic by their very nature, it’s more helpful to generate content algorithmically than to just dump in sample content of your own. (Nothing against the latter – it’s definitely fun – but you may soon find yourself limited by the results.)

Wave Weld handles generating those materials for you, as well as exporting them in the format you need.

1. Make the wavetable: use waveshaping controls to dial in the sound materials you want.

2. Build up a library: adapt existing content or collect your own custom creations.

3. Export in the format you need: adjusting the size lets you support Live 10.1’s own device or other hardware and plug-ins.

The waveshaping features are really the cool part:

Unique waveshaping controls to generate custom wavetables
Sine waveshape phase shift and curve shape controls
Additive-style synthesis via a choice of twenty-four sine waveshape harmonics for both positive and negative phase angles
Saw waveshape curve sharpen and partial controls
Pulse waveshape width, phase shift, curve smooth and curve sharpen controls
Triangle waveshape phase shift, curve smooth and curve sharpen controls
Random waveshape quantization, curve smooth and thinning controls

Wave Weld isn’t really intended as a synth, but one advantage of it being an M4L device is, you can easily preview sounds as you work.

More information on the developer’s site – http://metafunction.co.uk/wave-weld/

The download is free with a sign-up for their mailing list.

They’ve got a bunch of walkthrough videos to get you started, too:

Major kudos to Phelan Kane of Meta Function for this release. (Phelan is an Ableton Certified Trainer as well as a specialist in Reaktor and Maschine on the Native Instruments side, and London chairman for AES.)

I’m also interested in other ways to go about this – SuperCollider code, anyone?
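Until someone sends that in, here’s a rough Python sketch of the additive idea – my own illustration, to be clear, not Wave Weld’s algorithm: sum a few sine harmonics with chosen amplitudes into a single-cycle frame, then write it out as a 16-bit WAV at whatever frame size your synth expects.

import wave
import numpy as np

def additive_frame(harmonics, frame_size=2048):
    """One single-cycle wavetable frame from {harmonic_number: amplitude} pairs."""
    phase = np.linspace(0.0, 2.0 * np.pi, frame_size, endpoint=False)
    frame = np.zeros(frame_size)
    for n, amp in harmonics.items():
        frame += amp * np.sin(n * phase)
    return frame / np.max(np.abs(frame))          # normalize to -1..1

def write_wav(path, frame, samplerate=44100):
    """Write the frame as a mono 16-bit WAV most wavetable synths can import."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(samplerate)
        f.writeframes((frame * 32767).astype(np.int16).tobytes())

# Odd harmonics rolling off -- a soft, square-ish single cycle
write_wav("table.wav", additive_frame({1: 1.0, 3: 0.33, 5: 0.2, 7: 0.14}))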

Wavetable on!

A free, shared visual playground in the browser: Olivia Jack talks Hydra

Delivered... Peter Kirn | Scene | Fri 22 Feb 2019 7:50 pm

Reimagine pixels and color, melt your screen live into glitches and textures, and do it all for free on the Web – as you play with others. We talk to Olivia Jack about her invention, live coding visual environment Hydra.

Inspired by analog video synths and vintage image processors, Hydra is open, free, collaborative, and all runs as code in the browser. It’s the creation of US-born, Colombia-based artist Olivia Jack. Olivia joined our MusicMakers Hacklab at CTM Festival earlier this winter, where she presented her creation and its inspirations, and jumped in as a participant – spreading Hydra along the way.

Olivia’s Hydra performances are explosions of color and texture, where even the code becomes part of the aesthetic. And it’s helped take Olivia’s ideas across borders, both in the Americas and Europe. It’s part of a growing interest in the live coding scene, even as that scene enters its second or third decade (depending on how you count), but Hydra also represents an exploration of what visuals can mean and what it means for them to be shared between participants. Olivia has rooted those concepts in the legacy of cybernetic thought.

Oh, and this isn’t just for nerd gatherings – her work has also lit up one of Bogota’s hotter queer parties. (Not that such things need be thought of as a binary, anyway, but in case you had a particular expectation about that.) And yes, that also means you might catch Olivia at a JavaScript conference; I last saw her back from making Hydra run off solar power in Hawaii.

Following her CTM appearance in Berlin, I wanted to find out more about how Olivia’s tool has evolved and its relation to DIY culture and self-fashioned tools for expression.

Olivia with Alexandra Cardenas in Madrid. Photo: Tatiana Soshenina.

CDM: Can you tell us a little about your background? Did you come from some experience in programming?

Olivia: I have been programming now for ten years. Since 2011, I’ve worked freelance — doing audiovisual installations and data visualization, interactive visuals for dance performances, teaching video games to kids, and teaching programming to art students at a university, and all of these things have involved programming.

Had you worked with any existing VJ tools before you started creating your own?

Very few; almost all of my visual experience has been through creating my own software in Processing, openFrameworks, or JavaScript rather than using software. I have used Resolume in one or two projects. I don’t even really know how to edit video, but I sometimes use [Adobe] After Effects. I had no intention of making software for visuals, but started an investigative process related to streaming on the internet and also trying to learn about analog video synthesis without having access to modular synth hardware.

Alexandra Cárdenas and Olivia Jack @ ICLC 2019:

In your presentation in Berlin, you walked us through some of the origins of this project. Can you share a bit about how this germinated, what some of the precursors to Hydra were and why you made them?

It’s based on an ongoing investigation of:

  • Collaboration in the creation of live visuals
  • Possibilities of peer-to-peer [P2P] technology on the web
  • Feedback loops

Precursors:

A significant moment came as I was doing a residency in Platohedro in Medellin in May of 2017. I was teaching beginning programming, but also wanted to have larger conversations about the internet and talk about some possibilities of peer-to-peer protocols. So I taught programming using p5.js (the JavaScript version of Processing). I developed a library so that the participants of the workshop could share in real-time what they were doing, and the other participants could use what they were doing as part of the visuals they were developing in their own code. I created a class/library in JavaScript called pixel parche to make this sharing possible. “Parche” is a very Colombian word in Spanish for group of friends; this reflected the community I felt while at Platohedro, the idea of just hanging out and jamming and bouncing ideas off of each other. The tool clogged the network and I tried to cram too much information in a very short amount of time, but I learned a lot.

I was also questioning some of the metaphors we use to understand and interact with the web. “Visiting” a website is exchanging a bunch of bytes with a faraway place, routed through other faraway places. Rather than think about a webpage as a “page”, “site”, or “place” that you can “go” to, what if we think about it as a flow of information where you can configure connections in realtime? I like the browser as a place to share creative ideas – anyone can load it without having to go to a gallery or install something.

And I was interested in using the idea of a modular synthesizer as a way to understand the web. Each window can receive video streams from and send video to other windows, and you can configure them in real time using WebRTC (realtime web streaming).

Here’s one of the early tests I did:

https://vimeo.com/218574728

I really liked this philosophical idea you introduced of putting yourself in a feedback loop. What does that mean to you? Did you discover any new reflections of that during our hacklab, for that matter, or in other community environments?

It’s about processes of creation, not having a specific idea of where it will end up – trying something, seeing what happens, and then trying something else.

Code tries to define the world using a specific set of rules, but at the end of the day ends up chaotic. Maybe the world is chaotic. It’s important to be self-reflective.

How did you come to developing Hydra itself? I love that it has this analog synth model – and these multiple frame buffers. What was some of the inspiration?

I had no intention of creating a “tool”… I gave a workshop at the International Conference on Live Coding in December 2017 about collaborative visuals on the web, and made an editor to make the workshop easier. Then afterwards people kept using it.

I didn’t think too much about the name but [had in mind] something about multiplicity. Hydra organisms have no central nervous system; their nervous system is distributed. There’s no hierarchy of one thing controlling everything else, but rather interconnections between pieces.

Ed.: Okay, Olivia asked me to look this up and – wow, check out nerve nets. There’s nothing like a head, let alone a central brain. Instead, the aquatic creatures in the genus Hydra have senses and neurons essentially as one interconnected network, with cells that detect light and touch forming a distributed sensory awareness.

Most graphics abstractions are based on the idea of a 2d canvas or 3d rendering, but the computer graphics card actually knows nothing about this; it’s just concerned with pixel colors. I wanted to make it easy to play with the idea of routing and transforming a signal rather than drawing on a canvas or creating a 3d scene.

This also contrasts with directly programming a shader (one of the other common ways that people make visuals using live coding), where you generally only have access to one frame buffer for rendering things to. In Hydra, you have multiple frame buffers that you can dynamically route and feed into each other.

MusicMakers Hacklab in Berlin. Photo: Malitzin Cortes.

Livecoding is of course what a lot of people focus on in your work. But what’s the significance of code as the interface here? How important is it that it’s functional coding?

It’s inspired by [Alex McLean’s sound/music pattern environment] TidalCycles — the idea of taking a simple concept and working from there. In Tidal, the base element is a pattern in time, and everything is a transformation of that pattern. In Hydra, the base element is a transformation from coordinates to color. All of the other functions either transform coordinates or transform colors. This directly corresponds to how fragment shaders and low-level graphics programming work — the GPU runs a program simultaneously on each pixel, and that receives the coordinates of that pixel and outputs a single color.

I think immutability in functional (and declarative) coding paradigms is helpful in live coding; you don’t have to worry about mentally keeping track of a variable and what its value is or the ways you’ve changed it leading up to this moment. Functional paradigms are really helpful in describing analog synthesis – each module is a function that always does the same thing when it receives the same input. (Parameters are like knobs.) I’m very inspired by the modular idea of defining the pieces to maximize the amount that they can be rearranged with each other. The code describes the composition of those functions with each other. The main logic is functional, but things like setting up external sources from a webcam or live stream are not at all; JavaScript allows mixing these things as needed. I’m not super opinionated about it, just interested in the ways that the code is legible and makes it easy to describe what is happening.
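Ed.: to make that model concrete, here’s a tiny sketch of the idea in Python – Hydra itself is JavaScript running per pixel on the GPU, so this is only an illustration of “functions from coordinates to color” composing with coordinate and color transforms:

import math

# Illustration only -- Hydra is JavaScript and runs this kind of thing per pixel on the GPU.
# Base element: a function from coordinates (x, y in 0..1) to an (r, g, b) color.
def osc(freq=10.0):
    return lambda x, y: ((math.sin(x * freq) + 1.0) / 2.0,) * 3   # grayscale oscillator

# Coordinate transform: rotate the coordinates before sampling the source.
def rotate(source, angle):
    c, s = math.cos(angle), math.sin(angle)
    return lambda x, y: source(x * c - y * s, x * s + y * c)

# Color transform: scale each channel of whatever the source returns.
def color(source, r, g, b):
    return lambda x, y: tuple(v * k for v, k in zip(source(x, y), (r, g, b)))

pattern = color(rotate(osc(40.0), 0.5), 1.0, 0.2, 0.8)
print(pattern(0.25, 0.75))   # the color of one pixel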

What’s the experience you have of the code being onscreen? Are some people actually reading it / learning from it? I mean, in your work it also seems like a texture.

I am interested in it being somewhat understandable even if you don’t know what it is doing or that much about coding.

Code is often a visual element in a live coding performance, but I am not always sure how to integrate it in a way that feels intentional. I like using my screen itself as a video texture within the visuals, because then everything I do — like highlighting, scrolling, moving the mouse, or changing the size of the text — becomes part of the performance. It is really fun! Recently I learned about prepared desktop performances, and, relating to the live-coding mantra of “show your screens,” I like the idea that everything I’m doing is a part of the performance. And that’s also why I directly mirror the screen from my laptop to the projector. You can contrast that to just seeing the output of an AV set, and having no idea how it was created or what the performer is doing. I don’t think it’s necessary all the time, but it feels like using the computer as an instrument and exploring different ways that it is an interface.

The algorave thing is now getting a lot of attention, but you’re taking this tool into other contexts. Can you talk about some of the other parties you’ve played in Colombia, or when you turned the live code display off?

Most of my inspiration and references for what I’ve been researching and creating have been outside of live coding — analog video synthesis, net art, graphics programming, peer-to-peer technology.

Having just said I like showing the screen, I think it can sometimes be distracting and isn’t always necessary. I did visuals for Putivuelta, a queer collective and party focused on diasporic Latin club music and wanted to just focus on the visuals. Also I am just getting started with this and I like to experiment each time; I usually develop a new function or try something new every time I do visuals.

Community is such an interesting element of this whole scene. So I know with Hydra so far there haven’t been a lot of outside contributions to the codebase – though this is a typical experience of open source projects. But how has it been significant to your work to both use this as an artist, and teach and spread the tool? And what does it mean to do that in this larger livecoding scene?

I’m interested in how technical details of Hydra foster community — as soon as you log in, you see something that someone has made. It’s easy to share via twitter bot, see and edit the code live of what someone has made, and make your own. It acts as a gallery of shareable things that people have made:

https://twitter.com/hydra_patterns

Although I’ve developed this tool, I’m still learning how to use it myself. Seeing how other people use it has also helped me learn how to use it.

I’m inspired by work that Alex McLean and Alexandra Cardenas and many others in live coding have done on this — just the idea that you’re showing your screen and sharing your code with other people to me opens a conversation about what is going on, that as a community we learn and share knowledge about what we are doing. Also I like online communities such as talk.lurk.org and streaming events where you can participate no matter where you are.

I’m also really amazed at how this is spreading through Latin America. Do you feel like there’s some reason the region has been so fertile with these tools?

It’s definitely influenced me rather than the other way around, getting to know Alexandra [Cardenas’] work, Esteban [Betancur, author of live coding visual environment Cine Vivo], rggtrn, and Mexican live coders.

Madrid performance. Photo: Tatiana Soshenina.

What has the scene been like there for you – especially now living in Bogota, having grown up in California?

I think people are more critical about technology and so that makes the art involving technology more interesting to me. (I grew up in San Francisco.) I’m impressed by the amount of interest in art and technology spaces such as Plataforma Bogota that provide funding and opportunities at the intersection of art, science, and technology.

The press lately has fixated on live coding or algorave but maybe not seen connections to other open source / DIY / shared music technologies. But – maybe now especially after the hacklab – do you see some potential there to make other connections?

To me it is all really related, about creating and hacking your own tools, learning, and sharing knowledge with other people.

Oh, and lastly – want to tell us a little about where Hydra itself is at now, and what comes next?

Right now, it’s improving documentation and making it easier for others to contribute.

Personally, I’m interested in performing more and developing my own performance process.

Thanks, Olivia!

Check out Hydra for yourself, right now:

https://hydra-editor.glitch.me/

Previously:

Inside the livecoding algorave movement, and what it says about music

Magical 3D visuals, patched together with wires in browser: Cables.gl

VCV Rack nears 1.0, new features, as software modular matures

Delivered... Peter Kirn | Scene | Mon 18 Feb 2019 7:42 pm

VCV Rack, the open source platform for software modular, keeps blossoming. If what you were waiting for was more maturity and stability and integration, the current pipeline looks promising. Here’s a breakdown.

Even with other software modulars on the scene, Rack stands out. Its model is unique – build a free, open source platform, and then build the business on adding commercial modules, supporting both the platform maker (VCV) and third parties (the module makers). That has opened up some new possibilities: a mixed module ecosystem of free and paid stuff, support for ports of open source hardware to software (Music Thing Modular, Mutable Instruments), robust Linux support (which other Eurorack-emulation tools currently lack), and a particular community ethos.

Of course, the trade-off with Rack 0.xx is that the software has been fairly experimental. Versions 1.0 and 2.0 are now in the pipeline, though, and they promise a more refined interface, greater performance, a more stable roadmap, and more integration with conventional DAWs.

New for end users

VCV founder and lead developer Andrew Belt has been teasing out what’s coming in 1.0 (and 2.0) online.

Here’s an overview:

  • Polyphony, polyphonic cables, polyphonic MIDI support and MPE
  • Multithreading and hardware acceleration
  • Tooltips, manual data entry, and right-click menus with more information on modules
  • Virtual CV to MIDI and direct MIDI mapping
  • 2.0 version coming with fully-integrated DAW plug-in

More on that:

Polyphony and polyphonic cables. The big one – you can now use polyphonic modules and even polyphonic patching. Here’s an explanation:

https://community.vcvrack.com/t/how-polyphonic-cables-will-work-in-rack-v1/

New modules will help you manage this.

Polyphonic MIDI and MPE. Yep, native MPE support. We’ve seen this in some competing platforms, so great to see here.

Multithreading. Rack will now use multiple cores on your CPU more efficiently. There’s also a new DSP framework that adds CPU acceleration (which helps efficiency for polyphony, for example). (See the developer section below.)

Oversampling for better audio quality. Users can set higher settings in the engine to reduce aliasing.

Tooltips and manual value entry. Get more feedback from the UI and precise control. You can also right-click to open other stuff – links to developer’s website, manual (yes!), source code (for those that have it readily available), or factory presets.

Core CV-MIDI. Send virtual CV to outboard gear as MIDI CC, gate, note data. This also integrates with the new polyphonic features. But even better –

Map MIDI directly. The MIDI map module lets you map parameters without having to patch through another module. A lot of software has been pretty literal with the modular metaphor, so this is a welcome change.

And that’s just what’s been announced. 1.0 is imminent, in the coming months, but 2.0 is coming, as well…

Rack 2.0 and VCV for DAWs. After 1.0, 2.0 isn’t far behind. “Shortly after” 2.0 is released, a DAW plug-in will be launched as a paid add-on, with support for “multiple instances, DAW automation with parameter labels, offline rendering, MIDI input, DAW transport, and multi-channel audio.”

These plans aren’t totally set yet, but a price around a hundred bucks and multiple ins and outs are also planned. (Multiple I/O also means some interesting integrations will be possible with Eurorack or other analog systems, for software/hardware hybrids.)

VCV Bridge is already deprecated, and will be removed from Rack 2.0. Bridge was effectively a stopgap for allowing crude audio and MIDI integration with DAWs. The planned plug-in sounds more like what users want.

Rack 2.0 itself will still be free and open source software, under the same license. The good thing about the plug-in is, it’s another way to support VCV’s work and pay the bills for the developer.

New for developers

Rack v1 is under a BSD license – proper free and open source software. There’s even a mission statement that deals with this.

Rack v1 will bring a new, stabilized API – meaning you will need to do some work to port your modules. It’s not a difficult process, though – and I think part of Rack’s appeal is the friendly API and SDK from VCV.

https://vcvrack.com/manual/Migrate1.html

You’ll also be able to use an SSE wrapper (simd.hpp) to take advantage of accelerated code on desktop CPUs, without hard coding manual calls to hardware that could break your plug-ins in the future. This also theoretically opens up future support for other platforms – like NEON or AVX acceleration. (It does seem like ARM platforms are the future, after all.)

Plus check this port for adding polyphony to your stuff.

And in other Rack news…

Also worth mentioning:

While the Facebook group is still active and a place where a lot of people share work, there’s a new dedicated forum. That does things Facebook doesn’t allow, like efficient search, structured sections in chronological order so it’s easy to find answers, and generally not being part of a giant, evil, destructive platform.

https://community.vcvrack.com/

It’s powered by open source forum software Discourse.

For a bunch of newly free add-ons, check out the wonderful XFX stuff (I paid for at least one of these, and would do so again if they add more commercial stuff):

http://blamsoft.com/vcv-rack/

Vult is a favorite of mine, and there’s a great review this week, with 79 demo patches too:

There’s also a new version of Mutable Instruments Tides, Tidal Modular 2, available in the Audible Instruments Preview add-on – and 80% of your money goes to charity.

https://vcvrack.com/AudibleInstruments.html#preview

And oh yeah, remember that in the fall Rack already added support for hosting VST plugins, with VST Host. It will even work inside the forthcoming plugin, so you can host plugins inside a plugin.

https://vcvrack.com/Host.html

Here it is with the awesome d16 stuff, another of my addictions:

Great stuff. I’m looking forward to some quality patching time.

http://vcvrack.com/

Live coding group toplap celebrates days of live streaming, events

Delivered... Peter Kirn | Scene | Fri 15 Feb 2019 5:21 pm

What began as a niche field populated mainly by code jockeys has grown into a worldwide movement of artists, many of them new to programming. One key group, TOPLAP, celebrates 15 years of operation with live streams and events.

Image at top – Olivia Jack’s Hydra in action, earlier this month at our MusicMakers Hacklab at CTM Festival. We’ll be talking to Olivia over the weekend about live coding visuals, and you can catch her in Berlin tonight – or online – see below.

Here’s the full announcement – eloquently worded enough that I’ll just copy it here – check this crazy schedule, which began yesterday:

Live coding is about making live music, visuals and other time-based arts by writing and manipulating code. Recently it’s been popularised as Algorave, but is a technique used in all kinds of genres and artforms.

The open worldwide live coding community, which goes by the name of TOPLAP (Temporary Organisation for the Promotion of Live Algorithm Programming), was formed 15 years ago (14th February, 2004) at an event called Changing Grammars in Hamburg.

Now this worldwide community is coming together to make a continuous 3.5 day live stream with over 168 half-hour performance slots.

Watch here:
http://toplap.org/wearefifteen/

Join the livestream chat here:
https://talk.lurk.org/channel/toplap15

There are over 168 performances from 14th-17th February, quite a few beamed from local celebratory events being organised around the place (Prague, London, NYC, Amsterdam, Madison, Bath, Argentina, Richmond, Hamilton, …), and others by individuals who’ll be live coding from their sofa.

Anyone going to stay up to watch the whole thing?

Here in Berlin tonight, there’s a live and in-person event featuring 𝕭𝖅𝕲𝕽𝕷, Calum Gunn, Olivia Jack with Alexandra Cardenas, Yaxu (who we hosted here last year), and Renick Bell:

KEYS: computer music ~ digital arts | Renick Bell • Yaxu & more [Facebook event]

Algorave and TOPLAP have made major efforts to be more gender balanced and inclusive and community driven – a topic deep enough that I’ll leave it for another time, as they’ve worked on some specific techniques to enable this. But it’s extraordinary what people are doing with code – and yes, if typing isn’t your favorite mode of control, some are also extending these tools to physical controllers and other live performance techniques. Live coding in one form or another has been around decades, but now is possibly the best time yet for this scene. We’ll be watching – and streaming. Stay tuned.

Why is this Valentine’s song made by an AI app so awful?

Delivered... Peter Kirn | Scene | Wed 13 Feb 2019 11:19 pm

Do you hate AI as a buzzword? Do you despise the millennial whoop? Do you cringe every time Valentine’s Day arrives? Well – get ready for all those things you hate in one place. But hang in there – there’s a moral to this story.

Now, really, the song is bad. Like laugh-out-loud bad. Here’s iOS app Amadeus Code “composing” a song for Valentine’s Day, which says love much in the way a half-melted milk chocolate heart does, but – well, I’ll let you listen, millennial pop cliches and all:

Fortunately this comes after yesterday’s quite stimulating ideas from a Google research team – proof that you might actually use machine learning for stuff you want, like improved groove quantization and rhythm humanization. In case you missed that:

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Now, as a trained composer / musicologist, I do find this sort of exercise fascinating. And on reflection, I think the failure of this app tells us a lot – not just about machines, but about humans. Here’s what I mean.

Amadeus Code is an interesting idea – a “songwriting assistant” powered by machine learning, delivered as an app. And it seems machine learning could generate, for example, smarter auto accompaniment tools or harmonizers. Traditionally, those technologies have been driven by rigid heuristics that sound “off” to our ears, because they aren’t able to adequately follow harmonic changes in the way a human would. Machine learning could – well, theoretically, with the right dataset and interpretation – make those tools work more effectively. (I won’t re-hash an explanation of neural network machine learning, since I got into that in yesterday’s article on Magenta Studio.)

https://amadeuscode.com/

You might well find some usefulness from Amadeus, too.

This particular example does not sound useful, though. It sounds soulless and horrible.

Okay, so what happened here? Music theory at least cheers me up even when Valentine’s Day brings me down. Here’s what the developers sent CDM in a pre-packaged press release:

We wanted to create a song with a specific singer in mind, and for this demo, it was Taylor Swift. With that in mind, here are the parameters we set in the app.

Bpm set to slow to create a pop ballad
To give the verses a rhythmic feel, the note length settings were set to “short” and also since her vocals have great presence below C, the note range was also set from low~mid range.
For the chorus, to give contrast to the rhythmic verses, the note lengths were set longer and a wider note range was set to give a dynamic range overall.

After re-generating a few ideas in the app, the midi file was exported and handed to an arranger who made the track.

Wait – Taylor Swift is there just how, you say?

Taylor’s vocal range is somewhere in the range of C#3-G5. The key of the song created with Amadeus Code was raised a half step in order to accommodate this range making the song F3-D5.

From the exported midi, 90% of the topline was used. The rest of the 10% was edited by the human arranger/producer: The bass and harmony files are 100% from the AC midi files.

Now, first – these results are really impressive. I don’t think traditional melodic models – theoretical and mathematical in nature – are capable of generating anything like this. They’ll tend to fit melodic material into a continuous line, and as a result will come out fairly featureless.

No, what’s compelling here is not so much that this sounds like Taylor Swift, or that it sounds like a computer, as it sounds like one of those awful commercial music beds trying to be a faux Taylor Swift song. It’s gotten some of the repetition, some of the basic syncopation, and oh yeah, that awful overused millennial whoop. It sounds like a parody, perhaps because partly it is – the machine learning has repeated the most recognizable cliches from these melodic materials, strung together, and then that was further selected / arranged by humans who did the same. (If the machines had been left alone without as much human intervention, I suspect the results wouldn’t be as good.)

In fact, it picks up Swift’s tics – some of the funny syncopations and repetitions – but without stringing them together, like watching someone do a bad impression. (That’s still impressive, though, as it does represent one element of learning – if a crude one.)

To understand why this matters, we’re going to have to listen to a real Taylor Swift song. Let’s take this one:

Okay, first, the fact that the real Taylor Swift song has words is not a trivial detail. Adding words means adding prosody – so elements like intonation, tone, stress, and rhythm. To the extent those elements have resurfaced as musical elements in the machine learning-generated example, they’ve done so in a way that no longer is attached to meaning.

No amount of analysis, machine or human, can be generative of lyrical prosody for the simple reason that analysis alone doesn’t give you intention and play. A lyricist will make decisions based on past experience and on the desired effect of the song, and because there’s no real right or wrong to how to do that, they can play around with our expectations.

Part of the reason we should stop using AI as a term is that artificial intelligence implies decision making, and these kinds of models can’t make decisions. (I did say “AI” again because it fits into the headline. Or, uh, oops, I did it again. AI lyricists can’t yet hammer “oops” as an interjection or learn the playful setting of that line – again, sorry.)

Now, you can hate the Taylor Swift song if you like. But it’s catchy not because of a predictable set of pop music rules so much as its unpredictability and irregularity – the very things machine learning models of melodic space are trying to remove in order to create smooth interpolations. In fact, most of the melody of “Blank Space” is a repeated tonic note over the chord progression. Repetition and rhythm are also combined into repeated motives – something else these simple melodic models can’t generate, by design. (Well, you’ll hear basic repetition, but making a relationship between repeated motives again will require a human.)

It may sound like I’m dismissing computer analysis. I’m actually saying something more (maybe) radical – I’m saying part of the mistake here is assuming an analytical model will work as a generative model. Not just a machine model – any model.

This mistake is familiar, because almost everyone who has ever studied music theory has made the same mistake. (Theory teachers then have to listen to the results, which are often about as much fun as these AI results.)

Music theory analysis can lead you to a deeper understanding of how music works, and how the mechanical elements of music interrelate. But it’s tough to turn an analytical model into a generative model, because the “generating” process involves decisions based on intention. If the machine learning models sometimes sound like a first-year graduate composition student, that may be because that student is steeped in the analysis but not in the experience of decision making. But that’s important. The machine learning model won’t get better, because while it can keep learning, it can’t really make decisions. It can’t learn from what it’s learned, as you can.

Yes, yes, app developers – I can hear you aren’t sold yet.

For a sense of why this can go deep, let’s turn back to this same Taylor Swift song. The band Imagine Dragons picked it up and did a cover, and, well, the chord progression will sound more familiar than before.

As it happens, in a different live take I heard the lead singer comment (unironically) that he really loves Swift’s melodic writing.

But, oh yeah, even though pop music recycles elements like chord progressions and even groove (there’s the analytic part), the results take on singular personalities (there’s the human-generative side).

“Stand by Me” dispenses with some of the tics of our current pop age – millennial whoops, I’m looking at you – and at least as well as you can with the English language, hits some emotional meaning of the words in the way they’re set musically. It’s not a mathematical average of a bunch of tunes, either. It’s a reference to a particular song that meant something to its composer and singer, Ben E. King.

This is his voice, not just the emergent results of a model. It’s a singer recalling a spiritual that hit him with those same three words, which sets a particular psalm from the Bible. So yes, drum machines have no soul – at least until we give them one.

“Sure,” you say, “but couldn’t the machine learning eventually learn how to set the words ‘stand by me’ to music”? No, it can’t – because there are too many possibilities for exactly the same words in the same range in the same meter. Think about it: how many ways can you say these three words?

“Stand by me.”

Where do you put the emphasis, the pitch? There’s prosody. What melody do you use? Keep in mind just how different Taylor Swift and Ben E. King were, even with the same harmonic structure. “Stand,” the word, is repeated as a suspension – a dissonant note – above the tonic.

And even those observations still lie in the realm of analysis. The texture of this coming out of someone’s vocal cords, the nuances to their performance – that never happens the same way twice.

Analyzing this will not tell you how to write a song like this. But it will throw light on each decision, make you hear it that much more deeply – which is why we teach analysis, and why we don’t worry that it will rob music of its magic. It means you’ll really listen to this song and what it’s saying, listen to how mournful that song is.

And that’s what a love song really is:

If the sky that we look upon
Should tumble and fall
Or the mountain should crumble to the sea
I won’t cry, I won’t cry
No, I won’t shed a tear
Just as long as you stand
Stand by me

Stand by me.

Now that’s a love song.

So happy Valentine’s Day. And if you’re alone, well – make some music. People singing about heartbreak and longing have gotten us this far – and it seems if a machine does join in, it’ll happen when the machine’s heart can break, too.

PS – let’s give credit to the songwriters, and a gentle reminder that we each have something to sing that only we can:
Singer Ben E. King, Best Known For ‘Stand By Me,’ Dies At 76 [NPR]

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Delivered... Peter Kirn | Scene | Tue 12 Feb 2019 8:34 pm

Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.

Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, you can generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.

If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Max, though, there’s also a standalone version. Developers can dig far deeper into the tools and modify them for their own purposes – and even if you have just a little comfort with the command line, you can also train your own models. (More on that in a bit.)

Side note of interest to developers: this is also a great showcase for doing powerful stuff with machine learning using just JavaScript, applying even GPU acceleration without having to handle a bunch of complex, platform-specific libraries.

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:

g.co/magenta
g.co/magenta/studio

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.
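
If the word “tensor” sounds abstract, a quick illustration may help. Here’s a minimal sketch – assuming Python and NumPy, neither of which is part of Magenta Studio itself – of what a tensor of musical data can look like: just a multi-dimensional array of numbers for a model to crunch.

    import numpy as np

    # A batch of 16 melodies, each 32 steps long, one-hot encoded over 128 MIDI
    # pitches. This 3-dimensional array is a "tensor" in the TensorFlow sense.
    melodies = np.zeros((16, 32, 128), dtype=np.float32)
    melodies[0, 0, 60] = 1.0   # the first melody starts on middle C
    print(melodies.shape)      # (16, 32, 128)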

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer —

https://nsynthsuper.withgoogle.com/

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that loops over its own output, step after step. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is much more basic – running the algorithm repeatedly over a particular data set means it can predict sequences more and more effectively.
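
If it helps to see what “looping over and over” means in practice, here’s a tiny, purely illustrative sketch of a single recurrent step in plain NumPy – made-up sizes, random weights, nothing like the much larger trained models Magenta actually ships.

    import numpy as np

    def rnn_step(x_t, h_prev, W_x, W_h, b):
        # One step of a vanilla recurrent network: mix the current input with
        # the previous hidden state, squash, and carry the result forward.
        return np.tanh(x_t @ W_x + h_prev @ W_h + b)

    rng = np.random.default_rng(0)
    input_size, hidden_size = 128, 64   # e.g. one-hot pitches in, "memory" out
    W_x = rng.normal(scale=0.1, size=(input_size, hidden_size))
    W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
    b = np.zeros(hidden_size)

    h = np.zeros(hidden_size)
    for x_t in rng.normal(size=(32, input_size)):   # 32 steps of dummy input
        h = rnn_step(x_t, h, W_x, W_h, b)           # the loop is the "recurrent" part

Training is the process of nudging W_x, W_h, and b so that the hidden state gets better at predicting the next step – which is where the data set discussed below comes in.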

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.

One reason it’s cool that Magenta and Magenta Studio are open source is that you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)

What’s in Magenta Studio

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.

Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence the new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations and the length in bars.
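
For a rough intuition of what the Temperature control is doing mathematically, here’s an illustrative sketch of temperature-scaled sampling from a model’s output scores. The numbers and names are invented; this isn’t code from Magenta Studio.

    import numpy as np

    def sample_with_temperature(logits, temperature=1.0):
        # Lower temperature -> sharper distribution -> more predictable choices.
        # Higher temperature -> flatter distribution -> more surprising choices.
        scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-6)
        probs = np.exp(scaled - scaled.max())   # numerically stable softmax
        probs /= probs.sum()
        return np.random.choice(len(probs), p=probs)

    note_scores = [2.0, 1.0, 0.2, -1.0]               # model scores for four candidate notes
    print(sample_with_temperature(note_scores, 0.5))  # usually picks index 0
    print(sample_with_temperature(note_scores, 1.5))  # wanders more often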

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is already set up with expectations about what a drum kit is, and with melodies built around a 12-tone equal-tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options:

Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.

Interpolate: Instead of one clip, use two clips and merge/morph between them (there’s a sketch of the underlying idea after this list).

Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.

Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.
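
Under the hood, MusicVAE-style interpolation encodes each clip into a latent vector, blends between the two vectors, and decodes the in-between points. Here’s a toy sketch of just the blending step in NumPy – the encoder and decoder are stubbed out, since the real ones are trained neural networks, and the vector size is an assumption.

    import numpy as np

    def interpolate_latents(z_a, z_b, steps=4):
        # Walk in a straight line from one latent vector to the other; each
        # intermediate point decodes to a clip "between" the two inputs.
        ts = np.linspace(0.0, 1.0, steps)
        return [(1.0 - t) * z_a + t * z_b for t in ts]

    latent_size = 256                        # made-up dimensionality
    rng = np.random.default_rng(1)
    z_clip_a = rng.normal(size=latent_size)  # stand-in for encode(clip_a)
    z_clip_b = rng.normal(size=latent_size)  # stand-in for encode(clip_b)

    for z in interpolate_latents(z_clip_a, z_clip_b, steps=4):
        pass  # decode(z) would yield a melody or rhythm morphing from A to B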

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.

More to the point with something like Magenta is, do you really get musically useful results?

Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as they do give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.

The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

g.co/magenta
g.co/magenta/studio

The post Magenta Studio lets you use AI tools for inspiration in Ableton Live appeared first on CDM Create Digital Music.

Ableton Live 10.1: more sound shaping, work faster, free update

Delivered... Peter Kirn | Scene | Wed 6 Feb 2019 12:17 pm

There’s something about point releases – not the ones with any radical changes, but just the ones that give you a bunch of little stuff you want. That’s Live 10.1; here’s a tour.

Live 10.1 was announced today, but I sat down with the team at Ableton last month and have been working with pre-release software to try some stuff out. Words like “workflow” are always a bit funny to me. We’re talking, of course, mostly music making. The deal with Live 10.1 is, it gives you some new toys on the sound side, and makes mangling sounds more fun on the arrangement side.

Oh, and VST3 plug-ins work now, too. (MOTU’s DP10 also has that in an upcoming build, among others, so look forward to the Spring of VST3 Support.)

Let’s look at those two groups.

Sound tools and toys

User wavetables. Wavetable just got more fun – you can drag and drop samples onto Wavetable’s oscillator now, via the new User bank. You can get some very edgy, glitchy results this way, or if you’re careful with sample selection and sound design, more organic sounds.

This looks compelling.

Here’s how it works: Live splits up your audio snippet into 1024-sample chunks. It then smooths out the results – fading the edges of each table to avoid zero-crossing clicks and pops, and normalizing and minimizing phase differences. You can also tick a box called “Raw” that just slices up the audio as-is, for samples that are exactly 1024 samples long or a regular periodic multiple of that.
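
To picture what that slicing involves, here’s a rough sketch in Python/NumPy of chopping audio into 1024-sample frames, fading the frame edges, and normalizing each frame. It approximates the process as described above – it’s not Ableton’s actual code, and the fade length is a guess.

    import numpy as np

    FRAME = 1024

    def to_wavetable(audio, raw=False):
        # Trim to a whole number of 1024-sample frames, then stack them.
        n_frames = len(audio) // FRAME
        frames = np.asarray(audio[:n_frames * FRAME], dtype=float).reshape(n_frames, FRAME).copy()
        if raw:
            return frames                        # "Raw" mode: slice only, no smoothing
        fade = np.hanning(64)
        frames[:, :32] *= fade[:32]              # fade in at each frame edge...
        frames[:, -32:] *= fade[32:]             # ...and fade out, to avoid clicks
        peaks = np.abs(frames).max(axis=1, keepdims=True)
        return frames / np.maximum(peaks, 1e-9)  # normalize each frame

    snippet = np.sin(np.linspace(0, 200 * np.pi, 48000))  # a dummy recorded sample
    table = to_wavetable(snippet)
    print(table.shape)                                    # (46, 1024)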

Give me some time and we can whip up some examples of this, but basically you can glitch out, mangle sounds you’ve recorded, carefully construct sounds, or just grab ready-to-use wavetables from other sources.

But it is a whole lot of fun and it suggests Wavetable is an instrument that will grow over time.

Here’s that feature in action:

Delay. Simple Delay and Ping Pong Delay have merged into a single lifeform called … Delay. That finally updates an effect that hasn’t seen love since the last decade. (The original ones will still work for backwards project compatibility, though you won’t see them in a device list when you create a new project – don’t panic.)

At first glance, you might think that’s all that’s here, but in typical Ableton fashion, there are some major updates hidden behind those vanilla, minimalist controls. So now you have Repitch, Fade, and Jump modes. And there’s a Modulation section with rate, filter, and time controls (as found on Echo). Oh, and look at that little infinity sign next to the Feedback control.

Yeah, all of those things are actually huge from a sound design perspective. So since Echo has turned out to be a bit too much for some tasks, I expect we’ll be using Delay a lot. (It’s a bit like that moment when you figure out you really want Simpler and Drum Racks way more than you do Sampler.)

The old delays. Ah, memories…

And the new Delay. Look closely – there are some major new additions in there.

Channel EQ. This is a new EQ with visual feedback and filter curves that adapt across the frequency range – that is, “Low,” “Mid,” and “High” each adjust their curves as you change their controls. Since it has just three controls, that means Channel EQ sits somewhere between the dumbed down EQ Three and the complexity of EQ Eight. But it also means this could be useful as a live performance EQ when you don’t necessarily want a big DJ-style sweep / cut.

Here it is in action:

Arranging

The stuff above is fun, but you obviously don’t need it. Where Live 10.1 might help you actually finish music is in a slew of new arrangement features.

Live 10 felt like a work in progress as far as the Arrange view was concerned. I think it immediately made sense to some of us that Ableton were adjusting arrangement tools, and ironing out the difference between, say, moving chunks of audio around and editing automation (drawing all those lovely lines to fade things in and out, for instance).

But it felt like the story there wasn’t totally complete. In fact, the change may have been too subtle – different enough to disturb some existing users, but without a big enough payoff.

So here’s the payoff: Ableton have refined all those subtle Arrange tweaks with user feedback, and added some very cool shape-drawing features that let you get creative in this view in a way that isn’t possible with other tools.

Fixing “$#(*& augh undo I didn’t want to do that!” Okay, this problem isn’t unique to Live. In every traditional DAW, your mouse cursor does conflicting things in a small amount of space. Maybe you’re trying to move a chunk of audio. Maybe you want to resize it. Maybe you want to fade in and out the edges of the clip. Maybe it’s not the clip you’re trying to edit, but the automation curves around it.

In studio terms, this sounds like one of the following:

[silent, happy clicking, music production getting … erm … produced]

OR ….
$#(*&*%#*% …. Noo errrrrrrrgggggg … GAACK! SDKJJufffff ahhh….

Live 10 added a toggle between automation editing and audio editing modes. For me, I was already doing less of the latter. But 10.1 is dramatically better, thanks to some nearly imperceptible adjustments to the way those clip handles work, because you can more quickly change modes, and because you can zoom more easily. (The zoom part may not immediately seem connected to this, but it’s actually the most important part – because navigating from your larger project length to the bit you’re actually trying to edit is usually where things break down.)

In technical terms, that means the following:

Quick zoom shortcuts. I’ll do a separate story on these, because they’re so vital, but you can now jump to the whole song, details, zoom various heights, and toggle between zoom states via keyboard shortcuts. There are even a couple of MIDI-mappable ones.

Clips in Arrangement have been adjusted. From the release notes: “The visualisation of Arrangement clips has been improved with adjusted clip borders and refinements to the way items are colored.” Honestly, you won’t notice, but ask the person next to you how much you’re grunting / swearing like someone is sticking something pointy into your ribs.

Pitch gestures! You can pitch-zoom Arrangement and MIDI editor with Option or Alt keys – that works well on Apple trackpads and newer PC trackpads. And yeah, this means you don’t have to use Apple Logic Pro just to pinch zoom. Ahem.

The Clip Detail View is clearer, too, with a toggle between automation and modulation clearly visible, and color-coded modulation for everything.

The Arrangement Overview was also adjusted with better color coding and new resizing.

In addition, Ableton have worked a lot with how automation editing functions. New in 10.1:

Enter numerical values. Finally.

Free-hand curves more easily. With grid off, your free-hand, wonky mouse curves now get smoothed into something more logical and with fewer breakpoints – as if you can draw better with the mouse/trackpad than you actually can.

Simplify automation. There’s also a command that simplifies existing recorded automation. Again – finally.
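
For a rough idea of what “fewer breakpoints” can mean algorithmically, here’s a sketch of classic Ramer-Douglas-Peucker simplification applied to an automation lane – one standard approach to the problem, not necessarily what Ableton implemented.

    import numpy as np

    def simplify(points, tolerance=0.05):
        # Ramer-Douglas-Peucker: keep only breakpoints that stray from a straight
        # line between the endpoints by more than `tolerance`, recursively.
        points = np.asarray(points, dtype=float)
        if len(points) < 3:
            return points
        start, end = points[0], points[-1]
        dx, dy = end - start
        norm = max(np.hypot(dx, dy), 1e-12)
        rel = points - start
        dists = np.abs(dx * rel[:, 1] - dy * rel[:, 0]) / norm   # perpendicular distances
        idx = int(np.argmax(dists))
        if dists[idx] <= tolerance:
            return np.array([start, end])        # the whole span is "straight enough"
        left = simplify(points[:idx + 1], tolerance)
        right = simplify(points[idx:], tolerance)
        return np.vstack([left[:-1], right])     # drop the duplicated split point

    # A wobbly hand-drawn fade as (time, value) breakpoints:
    t = np.linspace(0, 4, 200)
    curve = np.column_stack([t, t / 4 + 0.1 * np.sin(40 * t)])
    print(len(curve), "->", len(simplify(curve)))   # far fewer breakpoints, same shape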

So that fixes a bunch of stuff, and while this is pretty close to what other DAWs do, I actually find Ableton’s implementation to be (at last) quicker and friendlier than most other DAWs. But Ableton kept going and added some more creative ideas.

Insert shapes. Now you have some predefined shapes that you can draw over automation lanes. It’s a bit like having an LFO / modulation, but you can work with it visually – so it’s nice for those who prefer that editing phase as a way to do their composition. Sadly, you can only access these via the mouse menu – I’d love some keyboard shortcuts, please – but it’s still reasonably quick to work with.

Modify curves. Hold down Option/Ctrl and you can change the shape of curves.

Stretch and skew. Reshape envelopes to stretch or skew them, or to stretch time / ripple edit.

Insert Shapes promises loads of fun in the Arrangement – words that have never been uttered before.

Check out those curve drawing and skewing/scaling features in action:

Freeze/Export

You can freeze tracks with sidechains, instead of a stupid dialog box popping up to tell you you can’t, because it would break the space-time continuum or crash the warp core injectors or … no, there’s no earthly reason why you shouldn’t be able to freeze sidechains on a computer.

You can export return and master effects on the actual tracks. I know, I know. You really loved bouncing out stems from Ableton or getting stems to remix and having little bits of effects from all the tracks on separate stems that were just echoes, like some weird ghost of whatever it was you were trying to do. And I’m a lazy kid, who for some reason thinks that’s completely illogical since, again, this is a computer and all this material is digital. But yes, for people who are soft like me, this will be a welcome feature.

So there you have it. Plus you now get VST3s, which is great, because VST3 … is so much … actually, you know, even I don’t care all that much about that, so let’s just say now you don’t have to check if all your plug-ins will run or not.

Go get it

One final note – Max for Live. Live 10.0.6 synchronized with Max 8.0.2. See the release notes from Cycling ’74:

https://cycling74.com/forums/max-8-0-2-released

Live 10.1 is keeping pace, with the beta you download now including Max 8.0.3.

Ableton haven’t really “integrated” Max for Live; they’re still separate products. And so that means you probably don’t want perfect lockstep between Max and Live, because that could mean instability on the Live side. It’d be more accurate to say that what Ableton have done is to improve the relationship between Max and Live, so that you don’t have to wait as long for Max improvements to appear in Max for Live.

Live 10.1 is in beta now with a final release coming soon.

Ableton Live 10.1 release notes

And if you own a Live 10 license, you can join the beta group:

Beta signup

Live 10.1: User wavetables, new devices and workflow upgrades

Thanks to Ableton for those short videos. More on these features soon.

The post Ableton Live 10.1: more sound shaping, work faster, free update appeared first on CDM Create Digital Music.

DP10 adds clip launching, improved audio editing to MOTU’s DAW

Delivered... Peter Kirn | Scene | Mon 4 Feb 2019 6:24 pm

DP10 might just grant two big wishes to DAW power users. One: pull off Ableton Live-style clip launching. Two: give us serious, integrated waveform editing. Here’s why DP10 might get your attention.

A handful of music tools has stood the test of time because the developers have built relationships with users over years and decades. DP is definitely in that category, established in fields like TV and film scoring.

This also means, however, it’s rare for an update to seem like news. DP10 is a potential exception. I haven’t had hands-on time with it yet, but this makes me interested in investing that time.

Bride of Ableton Live?

The big surprise is, MOTU are tackling nonlinear loop triggering, with what they call the Clips window.

The connection to Ableton Live here is obvious; MOTU even drives home the point with a similar gray color scheme, round indicators showing play status, clips grouped into Scenes (as a separate column) horizontally, and into tracks vertically.

And hey, this works for users – all of those decisions are really intuitive.

Here’s where MOTU has an edge on Ableton, though. DP10 adds the obvious – but new – idea of queuing clips in advance. These drop like Tetris pieces into your tracks so you can chain together clips and let them play automatically. The queue is dynamic, meaning you can add and remove those bits at will.

That sounds like a potential revelation. It’s way easier to grok – and more visible – than Live’s Follow Actions. And it frees users from taking their focus off their instruments and other work just to manually trigger clips.
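
The mechanics are easy to picture as a plain data structure. Here’s a toy sketch of a per-track clip queue in Python, just to illustrate the “Tetris piece” idea – it has nothing to do with MOTU’s actual implementation.

    from collections import deque

    class TrackQueue:
        """Clips queued on one track play back in order; the queue stays editable."""

        def __init__(self):
            self.pending = deque()

        def queue(self, clip):
            self.pending.append(clip)     # drop a clip onto the pile of upcoming clips

        def unqueue(self, clip):
            self.pending.remove(clip)     # change your mind before it plays

        def next_clip(self):
            # Called when the current clip finishes playing.
            return self.pending.popleft() if self.pending else None

    bass = TrackQueue()
    bass.queue("verse_bass")
    bass.queue("chorus_bass")
    bass.unqueue("chorus_bass")
    print(bass.next_clip())   # verse_bass plays next, with no manual trigger needed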

Also, as with Bitwig Studio, MOTU lets you trigger multiple clips both as scenes and as clip groups. (Live is more rigid; the only way to trigger multiple clips in one step is as a complete row.)

I have a lot of questions here that require some real test time. Could MOTU’s non-linear features here pair with their sophisticated marker tools, the functionality that has earned them loyalty with people doing scoring? How do these mesh with the existing DP editing tools, generally – does this feel like a tacked-on new mode, or does it integrate well with DP? And just how good is DP as a live performance tool, if you want to use this for that use case? (Live performance is a demanding thing.)

But MOTU do appear to have a shot at succeeding where others haven’t. Cakewalk added clip triggering years ago to SONAR (and a long-defunct tool called Project 5), but it made barely a dent in Live’s meteoric rise, and my experience of trying to use it was that it was relatively clunky. That is, I’d normally rather use Live for its workflow and bounce stems to another DAW if I want that. And I suspect that’s not just me – that’s really now the competition.

More audio manipulation

Every major DAW seems locked now in a sort of arms race in detecting beats and stretching audio, as the various developers gradually add new audio mangling algorithms and refine usability features.

So here we go with DP10 – detect beats, stretch audio, adjust tempo, yadda yadda.

Under the hood, most developers are now licensing the algorithms that manipulate audio – MOTU now works with ZTX Pro from Zynaptiq. But how you then integrate that mathemagical stuff with user interface design is really important, so this is down to implementation.

It’s certainly doubly relevant that MOTU are adding new beat detection and pitch-independent audio stretching in DP10, because of course this is a natural combination for the new Clips View.

More research needed.

Maybe just as welcome, though, is that MOTU have updated the integrated waveform editor in DP. And let’s be honest – even after decades of development, most DAWs have really terrible editors when it comes down to precise work on individual bits of audio. (I cringe every time I open the one in Logic, for instance. Ableton doesn’t really even have waveform editing apart from the limited tools in the main Arrangement view. And even users of something like Pro Tools or Cubase will often jump out to use a dedicated program.)

MOTU say they’ve streamlined and improved their Waveform Editor. And there’s reason to stay in the DAW – in DP10, they’ve integrated all those beat editing and time stretching and pitch correction tools. They’re also promising dynamic editing tools and menus and shortcuts and … yeah, just have to try this one. But those integrated tools and views look great, and – spectral view!

Other improvements

There’s some other cool stuff in DP10:

A new integrated Browser (this will also be familiar to users of Ableton Live and other tools, but it seems nicely implemented)

“VCA Faders” – which let you control multiple tracks with relative volumes, grouping however you like and with full automation support. This looks like a really intuitive way to mix.

VST3 support – yep, the new format is slowly gaining adoption across the industry.

Shift-spacebar to run commands. This is terrific to me – skip the manual, skip memorizing shortcuts for everything, but quickly access commands. (I think a lot of us use Spotlight and other launchers in a similar way, so this is totally logical.)

Transport bar skips by bars and beats. (Wait… why doesn’t every program out there do this, actually?)

Streamlined tools for grid snapping, Region menu, tool swapping, zooming, and more.

Quantize now applies to controllers (CC data), not just notes. (Yes. Good.)

Scalable resolution.

Okay, actually, that last one – I was all set to try the previous version of DP, but discovered it was impossible for my weak eyes to see the UI on my PC. So now I’m in. If you hadn’t given DP a second look because you actually couldn’t see it – it seems that problem is finally solved.

And by the way, you also really see DP’s heritage as a MIDI editor, with event list editing, clear displays of MIDI notes, and more MIDI-specific improvements.

All in all, it looks great. DP has to compete now with a lot of younger DAWs, the popularity of software like Ableton Live, and then the recent development on Windows of Cakewalk (aka SONAR) being available for free. But this looks like a pretty solid argument against all of that – and worth a test.

And I’ll be totally honest here – while I’ve been cursing some of DP’s competition for being awkward to set up and navigate for these same tasks, I’m personally interested.

It means a lot to have one DAW with everything from a mature notation view editor to video scoring to MIDI editing and audio and mixing. It means something you don’t outgrow. But that makes it even more important to have it grow and evolve with you. We’ll see how DP10 is maturing.

64-bit macOS, and 32-bit/64-bit Windows 7/8/10, shipping this quarter.

Pricing:
Full version: $499USD (street price)
Competitive upgrade: $395USD
AudioDesk upgrade: $395USD
Upgrade from previous version: $195USD

http://motu.com/products/software/dp/

I have just one piece of constructive criticism, MOTU. You should change your name back to Mark of the Unicorn and win over millennials. And me, too; I like unicorns.

The post DP10 adds clip launching, improved audio editing to MOTU’s DAW appeared first on CDM Create Digital Music.

Synth One is a free, no-strings-attached, iPad and iPhone synthesizer

Delivered... Peter Kirn | Scene | Thu 31 Jan 2019 6:52 pm

Call it the people’s iOS synth: Synth One is free – without ads or registration or anything like that – and loved. And now it’s reached 1.0, with iPad and iPhone support and some expert-designed sounds.

First off – if you’ve been wondering what happened to Ashley Elsdon, aka Palm Sounds and editor of our Apps section, he’s been on a sabbatical since September. We’ll be thinking soon about how best to feature his work on this site and how to integrate app coverage in the current landscape. But you can read his take on why AudioKit matters, and if Ashley says something is awesome, that counts.

But with lots of software synths out there, why does Synth One matter in 2019? Easy:

It’s really free. Okay, sure, it’s easy for Apple to “give away” software when they make more on their dongles and adapters than most app developers charge. But here’s an independent app that’s totally free, without needing you to join a mailing list or look at ads or log into some cloud service.

It’s a full-featured, balanced synth. Under the hood, Synth One is a polysynth with hybrid virtual analog / FM, with five oscillators, a step sequencer, a poly arpeggiator, loads of filtering and modulation, a rich reverb, multi-tap delay, and plenty of extras.

There’s standards support up the wazoo. Are you visually impaired? There’s Voice Over accessibility. Want Ableton Link support? MIDI learn on everything? Compatibility with Audiobus 3 and Inter App Audio so you can run this in your favorite iOS DAW? You’re set.

It’s got some hot presets. Sound designer Francis Preve has been on fire lately, making presets for everyone from KORG to the popular Serum plug-in. And version 1.0 launches with Fran’s sound designs – just what you need to get going right away. (Fran’s sound designs are also usually great for learning how a synth works.)

It’s the flagship of an essential framework. Okay the above matters to users – this matters to developers (who make stuff users care about, naturally). Synth One is the synthesizer from the people who make AudioKit. That’s good for making sure the framework is solid, plus

You can check out the source code. Everything is up at github.com/AudioKit/AudioKitSynthOne – meaning Synth One is also an (incredibly sophisticated) example app for AudioKit.

More is coming… MPE (MIDI Polyphonic Expression) and AUv3 are coming soon, say the developers.

And now the big addition —

It runs on iPhone, too. I have to say, I’ve been waiting for a synth that’s pocket sized for extreme portability, but few really are compelling. Now you can run this on any iPhone 6 or better – and if you’ve got a higher-end iPhone (iPhone X/XS/XR / iPhone XS Max / 6/7/8 Plus size), you’ll get a specially optimized UI with even more space.

Check out this nice UI:

On iPhone:

More:

AudioKit Synth One 1.0 arrives, is universal, is awesome

The post Synth One is a free, no-strings-attached, iPad and iPhone synthesizer appeared first on CDM Create Digital Music.

Bitwig Studio is about to deliver on a fully modular core in a DAW

Delivered... Peter Kirn | Scene | Mon 21 Jan 2019 5:21 pm

Bitwig Studio may have started in the shadow of Ableton, but one of its initial promises was building a DAW that was modular from the ground up. Bitwig Studio 3 is poised to finally deliver on that promise, with “The Grid.”

Having a truly modular system inside a DAW offers some tantalizing possibilities. It means, in theory at least, you can construct whatever you want from basic building blocks. And in the very opposite of today’s age of presets, that could make your music tool feel more your own.

Oh yeah, and if there is such an engine inside your DAW, you can also count on other people building a bunch of stuff you can reuse.

Why modularity? It doesn’t have to just be about tinkering (though that can be fun for a lot of people).

A modular setup is the very opposite of a preset mentality for music production. Experienced users of these environments (software especially, since it’s open-ended) do often find that patching exactly what they need can be more creative and inspirational. It can even save time versus the effort spent trying to whittle away at a big, monolithic tool just to get to the bit you actually want. But the traditional environments for modular development are fairly unfriendly to new users – that’s why people’s first encounters with Max/MSP, SuperCollider, Pd, Reaktor, and the like are very often in a college course. (And not everyone has access to those.) Here, you get a toolset that could prove more manageable. And then once you have a patch you like, you can still interconnect premade devices – and you can work with clips and linear arrangement to actually finish songs. With the other tools, that often means coding out the structure of your song or trying to link up to a different piece of software.

We’ve seen other DAWs go modular in different ways. There’s Apple Logic’s now rarely-used Environment. There’s Reason with its rich, patchable rack and devices. There’s Sensomusic Usine, which is a fully modular DAW / audio environment, plus a DMX lighting and video tool – perhaps the most modular of these (even relative to Bitwig Studio and The Grid). And of course there’s Ableton Live with Max for Live, though that’s really a different animal – it’s a full patching development environment that runs inside Live via a runtime, with API and interface hooks that allow you to access its devices. The upside: Max for Live can do just about everything. The downside: it’s mostly foreign to Ableton Live (as it’s a different piece of software with its own history), and it could be too deep for someone just wanting to build an effect or instrument.

So, enter The Grid. This is really the first time a relatively conventional DAW has gotten its own, native modular environment that can build instruments and effects. And it looks like it could be accomplished in a way that feels comfortable to existing users. You get a toolset for patching your own stuff inside the DAW, and you can even mix and match signal to outboard hardware modular if that’s your thing.

And it really focuses on sound applications, too, with three devices. One is dedicated to monophonic synths, one to polyphonic synths, and one to effects.

From there, you get a fully modular setup with a modern-looking UI and 120+ modules to choose from.

They’ve done a whole lot to ease the learning curve normally associated with these environments – smoothing out some of the wrinkles that usually baffle beginners:

You can patch anything to anything, in to out. All signals are interchangeable – connect any out to any in. Most other software environments don’t work that way, which can mean a steeper learning curve. (We’ll have to see how this works in practice inside The Grid).

Everything’s stereo. Here’s another way of reducing complexity. Normally, you have to duplicate signals to get stereo, which can be confusing for beginners. Here, every audio cable and every control cable routes stereo (there’s a sketch of this idea after the list).

There are default patchings. Funny enough, this idea has actually been seen on hardware – there are default routings so modules automatically wire themselves if you want, via what Bitwig calls “pre-cords.” That means if you’re new to the environment, you can always plug stuff in.

They’ve also promised to make phase easier to understand, which should open up creative use of time and modulation to those who may have been intimidated by these concepts before.
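
To see why “everything is stereo, and anything patches to anything” flattens the learning curve, here’s a toy illustration in Python/NumPy: every signal – audio or control – uses one format, a stereo pair of sample arrays, so any output really can feed any input. It’s a sketch of the concept only, not how Bitwig’s engine works.

    import numpy as np

    SAMPLE_RATE = 48000

    def stereo(signal):
        """Every cable in this toy patch carries a stereo pair, shape (2, n)."""
        signal = np.atleast_2d(np.asarray(signal, dtype=float))
        return np.vstack([signal, signal]) if signal.shape[0] == 1 else signal

    def sine(freq, seconds=1.0):
        t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
        return stereo(np.sin(2 * np.pi * freq * t))

    # Audio-rate and control-rate signals share one format, so an audible tone
    # and a slow LFO are interchangeable at any input.
    audio = sine(220.0)
    lfo = sine(0.5)
    tremolo = audio * (0.5 + 0.5 * lfo)   # "patch" the LFO into an amplitude input
    print(tremolo.shape)                  # (2, 48000)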

There’s also a big advantage to this being native to the environment – again, something you could only really say about Sensomusic Usine before now (at least as far as things that could double as DAWs).

This unlocks:

  • Nesting and layering devices alongside other Bitwig devices
  • Full support from the Open Controller API. (Wow, this is a pain the moment you put something like Reaktor into another host, too.)
  • Routing modulation out of your Grid patches into other Bitwig devices.
  • Complete hardware modular integration – yeah, you can mix your software with hardware as if they’re one environment. Bitwig says they’ve included “dedicated grid modules for sending any control, trigger, or pitch signal as CV Out and receiving any CV In.”

I’ve been waiting for this basically since the beginning. This is an unprecedented level of integration, where every device you see in Bitwig Studio is already based on this modular environment. Bitwig had even touted that early on, but I think they were overzealous in letting people know about their plans. It unsurprisingly took a while to make that interface user friendly, which is why it’ll be a pleasure to try this now and see how they’ve done. But Bitwig tells us this is in fact the same engine – and that the interface “melds our twin focus on modularity and swift workflows.”

There’s also a significant dedication to signal fidelity. There’s 4X oversampling throughout. That should generally sound better, but it also has implications for control and modularity. And it’ll make modulation more powerful in synthesis, Bitwig tells CDM:

With phase, sync, and pitch inputs on most every oscillator, there are many opportunities here for complex setups. Providing this additional bandwidth keeps most any patch or experiment from audible aliasing. As an open system, this type of optimization works for the most cases without overtaxing processors.
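
For a sense of why oversampling matters, here’s a generic sketch using NumPy and SciPy: run a nonlinear stage (the kind of thing waveshaping or feedback FM produces) at four times the sample rate, then band-limit on the way back down. It illustrates the technique in general, not Bitwig’s engine.

    import numpy as np
    from scipy.signal import resample_poly

    SR = 48000
    t = np.arange(SR) / SR
    x = np.sin(2 * np.pi * 5000 * t)            # a 5 kHz sine

    def distort(signal):
        return np.tanh(4.0 * signal)            # the nonlinearity creates new harmonics

    # Naive: harmonics above Nyquist (24 kHz) fold back as inharmonic aliases.
    aliased = distort(x)

    # Oversampled: upsample 4x, distort, then low-pass and decimate back down.
    up = resample_poly(x, 4, 1)                 # effectively running at 192 kHz
    clean = resample_poly(distort(up), 1, 4)    # filtered back to 48 kHz

    print(aliased.shape, clean.shape)           # both (48000,)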

It’s stereo only, which puts it behind some of the multichannel capabilities of Reaktor, Max, SuperCollider, and others – Max/MSP especially given its recent developments. But that could see some growth in a later release, Bitwig hints. For now, I think stereo will keep us plenty busy.

They’ve also been busy optimizing, Bitwig tells us:

This is something we worked a lot on in early development, particularly optimizing performance on the oversampled, stereo paths to align with the vector units of desktop processors. In addition, the modules are compiled at runtime for the best performance on the particular CPU in use.

That’s a big deal. I’m also excited about using this on Linux – where, by the way, you can really easily use JACK to integrate other environments like SuperCollider or live coding tools.

If you’re at NAMM, Bitwig will show The Grid as part of Bitwig Studio 3. They have a release coming in the second quarter, but we’ll sit down with them here in Berlin for a detailed closer look (minus NAMM noise in the background or jetlag)!

Oh yeah, and if you’ve got the Upgrade Plan, it’s free.

This is really about making a fully modular DAW – as opposed to the fixed multitrack tape/mixer models of the past. Bitwig have even written up an article about how they see modularity and how it’s evolved over various release versions:

BEHIND THE SCENES: MODULARITY IN BITWIG STUDIO

More on Bitwig Studio 3:

https://www.bitwig.com/en/19/bitwig-studio-3

Obligatory:

Oh yeah, also Tron: Legacy seems like a better movie with French subtitles…

That last line fits: “And the world was more beautiful than I ever dreamed – and also more dangerous … hop in bed now, come on.”

Yeah, personal life / sleep … in trouble.

The post Bitwig Studio is about to deliver on a fully modular core in a DAW appeared first on CDM Create Digital Music.
