

Master your Roland TR-8S drum machine settings with a plug-in editor

Delivered... Peter Kirn | Scene | Mon 14 Jan 2019 5:38 pm

Roland’s TR-8S added loads of parameters for shaping drum kits and effects. Now you can get at all of those without diving through menus with this VST/AU plug-in – and keep your drum machine settings stored with your project.

Hardware is great, but it introduces two problems. First, there are inevitably some parameters buried in menus that are hard to reach on the front panel, no matter how many knobs and faders makers add. Second, stuff you do on the hardware is likely to get out of sync with your DAW, leading to that unavoidable “what the Hell was this supposed to be?” feeling when you power things up. (Okay, sometimes that leads to happy accidents. Sometimes it just leads to misery.)

Momo Miller has been trucking through the full Roland range (plus KORG and Novation Circuit). He’s been adding plug-ins for just this reason. You get more accessible editing and control, and your settings stay inside your DAW projects for easy recall.

Now, first, what this isn’t: it isn’t a full-blown editor for the TR-8S. And it’s a shame, given Roland Cloud, that the manufacturer didn’t provide one. That also means loading custom samples on the TR-8S is a manual affair. This unofficial editor isn’t able to load sample files. And you don’t get full access to all of the TR-8S’ hidden parameters, like the deep settings per kit. So, Roland, if you’re listening – please, give us that.

You do, however, get a lot of access to parameters per sound and kit – basically, anything that has a MIDI CC assignment. And you can still save your changes on the hardware, for anything this controls. Plus you can save parameters separately in software. And there are some useful performance controller mappings.
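To make “anything with a MIDI CC assignment” concrete, here’s a minimal sketch of sending one such message using Java’s built-in MIDI API – the channel and CC number are placeholders, so check Roland’s TR-8S MIDI implementation chart for the real assignments. This illustrates how an editor like this talks to the hardware in principle; it’s not Momo’s actual code.

```java
import javax.sound.midi.*;

public class SendCC {
    public static void main(String[] args) throws Exception {
        // Default receiver for illustration; in practice you'd pick the TR-8S port.
        try (Receiver out = MidiSystem.getReceiver()) {
            ShortMessage cc = new ShortMessage();
            // Hypothetical mapping: CC 74, value 100, on channel 10 (zero-indexed 9).
            cc.setMessage(ShortMessage.CONTROL_CHANGE, 9, 74, 100);
            out.send(cc, -1); // -1 = deliver immediately
        }
    }
}
```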

Here’s what you get:

  • Full access to TR-8S parameters (as accessible via MIDI)
  • Control effects via custom-mapped X/Y performance controllers
  • Automation of parameters inside your DAW
  • Save parameter data with your DAW – including which kit was selected, which is invaluable on its own
  • Interactive visual display
  • 32-bit and 64-bit VST (Windows, Mac), AU (Mac), and standalone (Windows, Mac) versions

Have a look:

Price: 5,90€ / US$6.90

TR-8S editor/controller


Reloop’s new RP-8000 MK2: instrumental pitch control, Serato integration

Delivered... Peter Kirn | Scene | Thu 10 Jan 2019 6:20 pm

Like the relaunched Technics 1200, the new Reloop decks sport digitally controlled motors. But Reloop have gone somewhere very different from Technics: platters that can be controlled at a full range of pitches, and even play scales. And the RP-8000 MK2 is a MIDI controller, too, for Serato and other software.

Oh yeah, and one other thing – Reloop, as always, is more affordable: a pair of RP-8000 MK2s costs the same as one SL-1200 MK7. (One deck is EUR600 / USD700 / GBP525.)

And there’s a trend beyond these decks. Mechanical engineers rejoice – the age of the motor is here.

Reloop RP-8000 MK2

We’re seeing digitally controlled motors for haptic feedback, as on the new Native Instruments S4 DJ controllers. And we’re seeing digital control on motors providing greater reliability, more precision, and broader ranges of speed on conventional turntables.

So digitally controlled motors were what Technics was boasting earlier this week with their SL-1200 MK7, which they say borrows from Blu-Ray drive technology (Technics is a Panasonic brand).

Reloop have gone one step further on the RP-8000 MK2. “Platter Play” rotates the turntable platter at different speeds to produce different pitches – rapidly. You can use the colored pads on the turntable, or connect an external MIDI keyboard.
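Roland and Reloop don’t publish the math, but the relationship Platter Play exploits is simple: playback pitch is proportional to rotation speed, so each semitone step multiplies the RPM by 2^(1/12). A quick sketch, assuming a 33 1/3 RPM baseline:

```java
public class PlatterPitch {
    public static void main(String[] args) {
        double baseRpm = 100.0 / 3.0; // 33 1/3 RPM baseline (an assumption)
        for (int semitones = 0; semitones <= 12; semitones++) {
            // Pitch scales linearly with speed: +n semitones = 2^(n/12) times the RPM.
            double rpm = baseRpm * Math.pow(2.0, semitones / 12.0);
            System.out.printf("+%2d st -> %6.2f RPM (%+5.1f%%)%n",
                    semitones, rpm, (rpm / baseRpm - 1) * 100);
        }
    }
}
```

That also puts the pitch fader ranges below in perspective: +/- 8% is only about +/- 1.3 semitones, while +50% is roughly a fifth up.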

That gives the pads a new life, as something integral to the turntable instead of just a set of triggers for software. (I’m checking with Reloop to find out if the performance pads require Serato to work, but either way, they do actually impact the platter rotation – it’s a physical result.)

Reloop RP-8000 MK2

Serato and Reloop have built a close relationship with turntablists; this lets them build the vinyl deck into a more versatile instrument. It’s still an analog/mechanical device, but with a greater range of playing options thanks to digital tech under the hood. Call it digital-kinetic-mechanical.

Also digital: the pitch fader. (Reloop call it “high-resolution.”) Set it to +/- 8% (hello, Technics-style pitch), +/- 16% for a wider range (hello, Romanian techno, -16%), or an insane +/- 50%. That’s actual platter speed we’re talking about. (Makes sense – discs in CD and Blu-Ray drives spin far, far faster.)

With quartz lock on, the same mechanism will simply play your records more accurately at a steady pitch (0%).

The pitch fader and motor mechanism are both available on the RP-7000 MK2, for more traditional turntable operation. The performance pad melodic control is on the 8000, the model intended for Serato users.

Serato integration

I expect some people want their controller and their deck separate – playing vinyl means bringing actual vinyl records, and playing digital means using a controller and computer, or for many people, just a USB stick and CDJs.

If you want that, you can grab the RP-7000 MK2 for just 500 bucks a deck, minus the controller features.

On the RP-8000 MK2, you get a deck that adds digital features you’ve seen on controllers and CDJs directly on the deck. As on the original RP-8000, Reloop are the first to offer Serato integration. And it’s implemented as MIDI, so you can work with third-party software as well. The market is obviously DVS users.

The original RP offered Cue, Loop, Sample and Slicer modes with triggers on the left-hand side. Plus you get a digital readout above the pitch fader.

On the MK2, the numeric display gives you even more feedback: pitch, BPM, deck assignment, scales and notes, elapsed/remaining time of current track, plus firmware settings.

New playback and platter control options on the Reloop RP-8000 MK2.

The pads have new performance modes, too: Cue, Sampler, Saved Loops, Pitch Play, Loop, Loop Roll, Slicer, and two user-assignable modes (for whatever functions you want).

Reloop have also upgraded the tone arm base for greater reliability and more adjustments.

And those performance modes look great – 22 scales and 34 notes, plus up to 9 user-defined scales.

For more integration, Reloop are also offering the Reloop Elite, a DVS-focused mixer with a bunch of I/O, displays that integrate with the software, and more RGB-colored performance triggers and other shortcuts.

https://www.reloop.com/reloop-elite

One of these things is not like the others: the new kit still requires a laptop to run Serato.

If I had any complaint, it’s this: when will Serato do their own standalone embedded hardware in place of the computer? I know many DJs are glad to bring a computer – and Reloop claims the controls on the deck eliminate the need for a standalone controller (plus they have that new mixer with still more Serato integration). But it still seems a bummer to have to buy and maintain a PC or Mac laptop as part of the deal. And if you’re laying out a couple grand on hardware, wouldn’t you be willing to buy an embedded solution that lets you work without a computer? (Especially since Serato is an integrated environment, and would run on embedded machines. Why not stick an ARM board in there to run those displays and just read your music off USB?)

As for Reloop, they’re totally killing it with affordable turntables. If you just want some vinyl playback and basic DJing for your home or studio, in December they also unveiled the RP-2000 USB MK2. USB interface (for digitization or DVS control), direct drive control (so you can scratch on it), under 300 bucks.

https://www.reloop.com/

Previously in phonographs:

The Technics SL-1200 is back, and this time for DJs again


This playlist is full of wonderful ARP music – some might surprise you

Delivered... Peter Kirn | Scene | Wed 9 Jan 2019 5:46 pm

As we remember Alan R. Pearlman and the impact his instruments had on music, here’s a survey of the many places ARP sounds appeared in music culture. It’s a reminder of just how profound electronic music tools can be in their influence – and of the unique age in which we live.

Perhaps now is the perfect time for an ARP revival. With modular synthesis reaching ever-wider audiences, the ARP creations – the 2500, 2600, and Odyssey featured here – represent something special. Listen across these tracks, and you’re struck by the unique colors of those ARP creations across a range of genres. It’s also significant that each of these designs in their own way struck a balance between modularity and accessibility, sound design and playability. That includes making instruments that had modular patching capability but also produced useful sounds at each patch point by default – that is, you don’t have to wire things up just to make something happen. That in turn also reduces cable spaghetti, because the patch connections you make represent the particular decisions you made deviating from the defaults. On the 2500, this involves a matrix (think Battleship games, kids), which is also a compelling design in the age of digital instruments and software.

And lest we get lost in sound design, it’s also worth noting how much these things get played. In the era of Eurorack, it’s easy to think music is just about tweaking … but sometimes it’s just as useful to have a simple, fresh sound and then just wail on it. (Hello, Herbie Hancock.)

It’s easy to forget just how fast musical sound has moved in a couple of generations. An instrument like the piano or violin evolved over centuries. Alan R. Pearlman literally worked on some of the first amplifiers to head into space – the Mercury and Gemini programs that first sent Americans into space and orbit, prior to Apollo’s journey to the moon. And then he joined the unique club of engineers who have remade music – a group that now includes a lot of you. (All of you, in fact, once you pick up these instruments.)

So I say go for it. Play a preset in a software emulation. Try KORG’s remake of the Odyssey. Turn a knob or re-patch something. Make your own sound design – and don’t worry about whether it’s ingenious or ground-breaking, but see what happens when you play it. (Many of my, uh, friends and colleagues are in the business of creating paid presets, but I have the luxury of making some for my own nefarious music production purposes that no one else has to use, so I’m with you!)

David Abravanel put together this playlist for CDM:

Some notes on this music:

You know, we keep talking about Close Encounters, but the ARP 2500’s actual presence in the film’s sound is very limited. The clip I embedded Monday left out the ARP sound, as did the soundtrack release of John Williams’ score. The scene is maybe more notable for the appearance of ARP co-founder David Friend at the instrument – about as much Hollywood screen time as any synth manufacturer has ever gotten. Oh, and … don’t we all want that console in our studio? But yes, following this bit, Williams takes over with some instrumental orchestration – gorgeous, but sans ARP.

So maybe a better example of a major Hollywood composer is Jerry Goldsmith. The irony here is, I think you could probably get away with releasing this now. Freaky. Family Guy reused it (at the end). We’ll never defeat The Corporation; it’s true.

It’s also about time to acknowledge that Stevie Wonder combined Moog and ARP instruments, not just Moog. As our industry looks at greater accessibility, it’s also worth noting that Wonder was able to do so without sight.

What about U2? Well, that’s The Edge’s guitar routed through the ARP 2600 for filter distortion and spring reverb. That’s a trick you can steal, of course – especially easily now that Arturia has an emulation of the 2600.

I expect our collective reader knowledge exceeds anything we can contribute, so – let us know what other artists using ARP inspired you, and whether you have any notes on these selections.


What could make APC Live, MPC cool: Akai’s new software direction

Delivered... Peter Kirn | Scene | Wed 2 Jan 2019 11:01 pm

Akai tipped their hand late last year that they were moving more toward live performance. With APC Live hardware leaked and in the wild, maybe it’s time to take another look. MPC software improvements might interest you with or without new hardware.

MPC 2.3 software dropped mid-November. We missed talking about it at the time. But now that we’re reasonably certain (if unofficially) that Akai is releasing new hardware, the update appears in a new light. Background on that:

APC as standalone hardware? Leaked Akai APC Live

Whether or not the leaked APC Live hardware appeals to you, Akai are clearly moving their software in some new directions – which is relevant whatever hardware you choose. We don’t yet know if the MPC Live hardware will get access to the APC Live’s Matrix Mode, but it seems a reasonable bet some if not all of the APC Live features are bound for MPC Live, too.

And MPC 2.3 added major new live performance features, as well as significant internal synths, to that standalone package. Having that built in means you get it even without a computer.

New in 2.3:

Three synths:

  • A vintage-style, modeled analog polysynth
  • A bass synth
  • A tweakable, physically modeled electric piano

Tubesynth – an analog poly.

Electric’s physically-modeled keys.

Electric inside the MPC Live environment.

As with NI’s Maschine, each of those can be played from chords and scales with the pads mode. But Maschine requires a laptop, of course – MPC Live doesn’t.

A new arpeggiator, with four modes of operation, ranging from traditional vintage-style arp to more modern, advanced pattern playback

And there’s an “auto-sampler.”

That auto-sampler looks even more relevant when you see the APC Live. On MPC Live (and by extension APC Live), you can sample external synths, sample VST plug-ins, and even capture outboard CV patches.

Of course, this is a big deal for live performance. Plug-ins won’t work in standalone mode – and can be CPU hogs, anyway – so you can conveniently capture what you’re doing. Got some big, valuable vintage gear or a modular setup you don’t want to take to the gig? Same deal. And then this box gives you the thing modular instruments don’t do terribly well – saving and recalling settings – since you can record and restore those via the control voltage I/O (also found on that new APC Live). The auto-sampler is an all-in-one solution for making your performances more portable.

Full details of the 2.3 update – though I expect we’ve got even more new stuff around the corner:

http://www.akaipro.com/pages/mpc-2.3-desktop-software-and-firmware-update

With or without the APC Live, you get the picture. While Ableton and Native Instruments focus on studio production and leave you dependent on the computer, Akai’s angle is creating an integrated package you can play live with – like, onstage.

Sure enough, Akai have been picking up large acts with their MPC Live solution, too – John Mayer, Metallica, and Chvrches all got name-dropped. Of those, let’s check out Chvrches – 18 minutes in, the MPC Live gets showcased nicely:

It makes sense Akai would come to rely on its own software. When Akai and Novation released their first controllers for Ableton Live, Ableton had no hardware of their own, which changed with Push. But of course even the first APC invoked the legendary MPC legacy – and Akai has for years been working on bringing desktop software functionality to the MPC name. So, while some of us (me included) first suspected a standalone APC Live might mean a collaboration with Ableton, it does make more sense that it’s a fully independent Akai-made, MPC-style tool.

It also makes sense that this means, for now, more internal functionality. (The reference to “plugins” in the APC Live manual that leaked probably means those internal instruments and effects.) That makes resource consumption more predictable, and it avoids the licensing issues and the like that come with running plug-ins on embedded Linux. This could change, by the way – Propellerhead’s Rack Extensions format is now easily portable to ARM processors, for example – but that’s another story. As for VST, AU, and AAX, portability to embedded hardware is still problematic.

The upshot of this, though, is that InMusic at least has a strategy for hardware that functions on its own – not just as a couple of one-off MPC pieces, but in terms of integrated hardware/software development across a full product line. Native Instruments, Ableton, and others might be working on something like that that lets you untether from the computer, but InMusic is shipping now, and they aren’t.

Now the question is whether InMusic can capitalize on its MPC legacy and the affection for the MPC and APC brands and workflows – and get people to switch from other solutions.


More surprise in your sequences, with ESQ for Ableton Live

Delivered... Peter Kirn | Scene | Sun 30 Dec 2018 5:39 pm

With interfaces that look lifted from a Romulan warbird and esoteric instruments, effects, and sequencers, K-Devices have been spawning surprising outcomes in Ableton Live for some time now. ESQ is the culmination of that: a cure for preset sounds and ideas in a single device.

You likely know the problem already: all of the tools in software like Ableton Live that make it easy to quickly generate sounds and patterns also tend to do so in a way that’s … always the same. So instead of being inspiring, you can quickly feel stuck in a rut.

ESQ is a probability-based sequencer with parameters, so you adjust a few controls to generate a wide variety of possibilities – velocity, chance, and relative delay for each step. You can create polyrhythms (multiple tracks of the same length, but different steps), or different-length tracks, you can copy and paste, and there are various random functions to keep things fresh. The results are still somehow yours – maybe even more so – it’s just that you use probability and generative rules to get you to what you want when you aren’t sure how to describe what you want. Or maybe before you knew you wanted it.
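As a toy illustration of that idea (not K-Devices’ actual algorithm): give each step a velocity, a trigger probability, and a timing offset, then roll the dice on every pass through the pattern.

```java
import java.util.Random;

public class ChanceSteps {
    public static void main(String[] args) {
        Random rng = new Random();
        double[] velocity = { 1.0, 0.6, 0.8, 0.4 };   // per-step velocity
        double[] chance   = { 1.0, 0.5, 0.75, 0.25 }; // per-step trigger probability
        double[] delayMs  = { 0, 5, -3, 10 };         // per-step relative delay

        for (int pass = 0; pass < 2; pass++) {        // two passes over the pattern
            for (int step = 0; step < velocity.length; step++) {
                if (rng.nextDouble() < chance[step]) {
                    System.out.printf("pass %d, step %d: vel %.2f, shifted %+.0f ms%n",
                            pass, step, velocity[step], delayMs[step]);
                } // otherwise the step stays silent this time around
            }
        }
    }
}
```

Every pass comes out slightly different, which is exactly the kind of surprise the device is selling.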

Because you can trigger up to 12 notes, you can use ESQ to turn bland presets into something unexpected (like working with preset Live patches). Or you can use it as a sequencer with all those fun modular toys we’ve been talking about lately (VCV Rack, Softube Modular, Cherry Audio Voltage Modular, and so on) – because 5- and 8-step sequencers are often just dull.

There’s no sound produced by ESQ – it’s just a sequencer – but it can have a big enough impact on devices that this “audio” demo is just one instance of ESQ and one Drum Rack. Even those vanilla kits start to get more interesting.

K-Devices has been working this way for a while, but ESQ feels like a breakthrough. The generative sequence tools are uniquely complete and especially powerful for producing rhythms. You can make this sound crazy and random and IDM-y, but you can also add complexity without heading into deep space – it’s really up to you.

And they’ve cleverly made two screens – a full parameter screen that gets deep and detailed, and a compact device screen that lets you shift everything with single gestures or adjust everything as macros – ideal for live performance or for making bigger changes.

It seems like a good wildcard to keep at your disposal … for any of those moments when you’re getting stuck and boring.

And yes, of course Richard Devine already has it:

But you can certainly make things unlike Devine, too, if you want.

Right now ESQ is on sale, 40% off through December 31 – €29 instead of 49. So it can be your last buy of 2018.

Have fun, send sequences!

https://k-devices.com/products/esq/


Your questions answered: Sonarworks Reference calibration tools

Delivered... Peter Kirn | Scene | Thu 27 Dec 2018 7:13 pm

If getting your headphones and studio monitors calibrated sounds like a good New Year’s resolution, we’ve got you covered. Some good questions came up in our last story on Sonarworks Reference, the automated calibration tool, so we’ve gotten answers for you.

First, if you’re just joining us, Sonarworks Reference is a tool for automatically calibrating your studio listening environment and headphones so that the sound you hear is as uncolored as possible – more consistent with the source material. Here’s our previous write-up, produced in cooperation with Sonarworks:

What it’s like calibrating headphones and monitors with Sonarworks tools

CDM is partnering with Sonarworks to help users better understand how to use the tool to their benefit. That means, in part, answering some questions with Sonarworks engineers. If you’re interested in the product, there’s also a special bundle discount on now: you get the True-Fi mobile app for calibration on your mobile device, free with a Sonarworks Studio Edition purchase (usually US$79):

https://try.sonarworks.com/christmasspecial/

Readers have been sending in questions, so I’ll answer as many as I can as accurately as possible.

Does it work?

Oh yeah, this one is easy. I found it instantly easier to mix both on headphones and sitting in the studio, in that you hear far more consistency from one listening environment / device to another, and in that you get a clearer sense of the mix. It feels a little bit like how I feel when I clean my eyeglasses. You’re removing stuff that’s in the way. That’s my own personal experience, anyway; I linked some full reviews and comparisons with other products in the original story. But my sense in general is that automated calibration has become a fact of life for production and live situations. It doesn’t eliminate the role of human experts, not by a long shot – but then color calibration in graphics didn’t get rid of the need for designers and people who know how to operate the printing press, either. It’s just a tool.

Does it work when outside of the sweet spot in the studio?

This is a harder question, actually, but anecdotally – yeah, I still left it on. You’re calibrating for the sweet spot in your studio, so from a calibration perspective you do want to sit in that location when monitoring, just as you always would. But since a lot of what Sonarworks Reference does concerns frequency response as much as space, I found it was still useful to leave the calibration on even when wandering around my studio. It’s not as though the calibration suddenly stops working when you move around. You only notice something is off if you have the wrong calibration profile selected, or if you make the mistake of bouncing audio with it left on (oops). But that’s of course exactly what you’d expect to happen.

What about Linux support?

Linux is officially unsupported, but you can easily calibrate on Windows (or Mac) and then use the calibration profile on Linux. It’s a 64-bit Linux-native VST, in beta form.

If you run the plug-in in the handy plug-in host Carla, you can calibrate any source you like (via JACK). So this is really great – it means you can have calibrated results while working with SuperCollider or Bitwig Studio on Linux, for example.

This is beta only, so I’m really keen to hear results. Do let us know – I suspect that if a bunch of CDM readers start trying the Linux build, there will be added incentive for Sonarworks to expand Linux support. And we have seen some commercial vendors from the Mac/Windows side (Pianoteq, Bitwig, Renoise, etc.) start to toy with supporting this OS.

If you want to try this out, go check the Facebook group:
https://www.facebook.com/groups/1751390588461118/

(Direct compiled VST download link is available here, though that may change later.)

What’s up with latency?

You get a choice of either more accuracy and higher latency, or lower accuracy and lower latency. So if you need real-time responsiveness, you can prioritize low latency performance – and in that mode, you basically won’t notice the plug-in is on at all in my experience. Or if you aren’t working live / tracking live, and don’t mind adding latency, you can prioritize accuracy.

Sonarworks clarifies for us:

Reference 4 line-up has two different *filter* modes – zero latency and linear phase. Zero latency filter adds, like the name states, zero latency, whereas linear phase mode really depends on sample-rate but typically adds about 20ms of latency. These numbers hold true in plugin form. Systemwide, however, has the variable of driver introduced latency which is set on top of the filter latency (zero for Zero latency and approx 20ms for linear phase mode) so the numbers for actual Systemwide latency can vary depending on CPU load, hardware specs etc. Sometimes on MacOS, latency can get up to very high numbers which we are investigating at the moment.
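That ~20 ms figure is consistent with how linear-phase filtering works: a symmetric FIR filter of N taps delays everything by (N-1)/2 samples, regardless of content. A back-of-envelope sketch – the tap count here is an assumption for illustration, since Sonarworks doesn’t publish it:

```java
public class LinearPhaseLatency {
    public static void main(String[] args) {
        int taps = 2048;            // assumed FIR length, illustration only
        double sampleRate = 48000;  // Hz
        double latencySamples = (taps - 1) / 2.0; // group delay of a symmetric FIR
        double latencyMs = 1000.0 * latencySamples / sampleRate;
        // 2048 taps at 48 kHz -> about 21.3 ms, in the ballpark of the quoted ~20 ms.
        System.out.printf("%d taps at %.0f Hz -> %.1f ms%n", taps, sampleRate, latencyMs);
    }
}
```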

What about loudness? Will this work in post production, for instance?

Some of you are obviously concerned about loudness as you work on projects where that’s important. Here’s an explanation from Sonarworks:

So what we do in terms of loudness as a dynamic range character is – nothing. What we do apply is overall volume reduction to account for the highest peak in correction to avoid potential clipping of output signal. This being said, you can turn the feature off and have full 0dBFS volume coming out of our software, controlled by either physical or virtual volume control.

Which headphones are supported?

There’s a big range of headphones with calibration profiles included with Sonarworks Reference. Right now, I’ve got that folder open, and here’s what you get at the moment:

AIAIAI TMA-1

AKG K72, K77, K121, K141 MKII, K240, K240 MKII, K271 MKII, K550 MKII, K553 Pro, K612 Pro, K701, K702, K712 Pro, K812, Q701

Apple AirPods

Audeze KCD-2, LCD-X

Audio-Technica ATH-M20x, M30x, M40x, M50x, M70x, MSR7, R70x

Beats EP, Mixr, Pro, Solo2, Solo3 wireless, Studio (2nd generation), X Average

Beyerdynamic Custom One Pro, DT 150, DT 250 80 Ohm, DT 770 Pro (80 Ohm, 32 Ohm Pro, 80 Ohm Pro, 250 Ohm Pro), DT 990 Pro 250 Ohm, DT 1770 Pro, DT 1990 Pro (analytical + balanced), T 1

Blue Lola, Mo-Fi (On/On+)

Bose QuietComfort 25, 35, 35 II, SoundLink II

Bowers & Wilkins P7 Wireless

Extreme Isolation EX-25, EX-29

Focal Clear Professional, Clear, Listen Professional, Spirit Professional

Fostex TH900 mk2, TH-X00

Grado SR60e, SR80e

HiFiMan HE400i

HyperX Cloud II

JBL Everest Elite 700

Koss Porta Pro Classic

KRK KNS 6400, 8400

Marshall Major II, Monitor

Master & Dynamic MH40

Meze 99, 99 NEO

Oppo PM-3

Philips Fidelio X2HR, SHP9500

Phonon SMB-02

Pioneer HDJ-500

Plantronics BackBeat Pro 2

PreSonus HD 7

Samson SR850

Sennheiser HD 25 (70 Ohm, Light), HD-25-C II, HD 201, HD 202, HD 205, HD 206, HD 215-II, HD 280 Pro (incl. new facelift version), HD 380 Pro, HD 518, HD 598, HD 598 C, HD 600, HD 650, HD 660 S, HD 700, HD 800, HD 800 S, Momentum On-Ear Wireless, PX 100-II

Shure SE215, SRH440, SRH840, SRH940, SRH1440, SRH1540, SRH1840

Skullcandy Crusher (with and without battery), Hesh 2.0

Sony MDR-1A, MDR-1000X, MDR-7506, MDR-7520, MDR-CD900ST, MDR-V150, MDR-XB450, MDR-XB450AP, MDR-XB650BT, MDR-XB950AP, MDR-XB950BT, MDR-Z7, MDR-ZX110, MDR-ZX110AP, MDR-ZX310, MDR-ZX310AP, MDR-ZX770BN, WH-1000XM2

Status Audio CB-1

Superlux HD 668B, HD-330, HD681

Ultrasone Pro 580i, 780i, Signature Studio

V-Moda Crossfade II, M-100

Yamaha HPH-MT5, HPH-MT7, HPH-MT8, HPH-MT220

So there you have it – lots of favorites, and … well, actually, some truly horrible consumer headphones in the mix, too. But I know lots of serious mixers like testing a mix on consumer cans. The advantage of doing that with calibration is presumably that you get to hear the limitations of different headphones while still hearing the reference version of the mix – not the one exaggerated by those particular headphones. That way, you get greater benefit from those additional tests. And clearly, you can make better use of random headphones you have around – even if they’re … well, fairly awful, they can still be usable now.

Even after that long list, I’m sure there’s some stuff you want that’s missing. Sonarworks doesn’t yet support in-ear headphones for its calibration tools, so you can rule that out. For everything else, you can either request support or if you want to get really serious, opt for individual mail-in calibration in Latvia.

More:

https://www.sonarworks.com/reference


Ethereal, enchanting Winter Solstice drone album, made in VCV Rack

Delivered... Peter Kirn | Scene | Fri 21 Dec 2018 5:35 pm

It’s the shortest day of the year and first astronomical day of winter in the Northern Hemisphere. Don’t fight it. Embrace all that darkness – with this transcendent album full of drones, made in the free VCV Rack modular platform.

And really, what better way to celebrate modular than with expansive drones? Leave the on-the-grid “mad beats” and EDM wavetable presets to commercial tools. Enjoy as each modular patch achingly, slowly shifts, like a frost across a snowbank. Or something like that.

These aren’t just any drones. The compilation, for its part, is absolutely gorgeous, start to finish. It’s the work of ablaut, a Dutch-born artist based in Suzhou, China, with a winter wonderland’s worth of lush sonic shapes to send a chill up your spine. And everything came from the active VCV Rack community, where users of the open source modular platform have been avidly sharing patches and music alongside.

There’s terrific attention to detail. The group were inspired by the work of composers like La Monte Young, and … this is no lazy “pad through some reverb” work here. It’s utterly otherworldly:

We’ll hopefully take a look at some of these patches soon. If you’ve got ambient Rack creations of your own and missed out on the collaboration, we’d love to hear those, too.

The album is pay-what-you-will.

https://ablaut.bandcamp.com/album/winter-solstice-drone

https://vcvrack.com/

VCV Rack Official Facebook group


Hands-on: Complex-1 puts West Coast-inspired modular in Reason

Delivered... Peter Kirn | Scene | Tue 18 Dec 2018 1:53 pm

Propellerhead has unveiled a modular instrument add-on for Reason, Complex-1. It puts a patchable, West Coast-inspired synth inside the already patchable Reason environment – and it sounds fabulous.

Complex-1 is a monophonic modular synth delivered as a Rack Extension, available now. What you get is a selection of modules, with a combination of Buchla- and Moog-inspired synths, and some twists from Propellerhead. You can patch these right on the front panel – not the back panel as you normally would in Reason – and combine the results with your existing Reason rack. The ensemble is very West Coast-ish, as in Buchla-inspired, but also with some unique character of its own and modern twists and amenities you would expect now.

Propellerhead have also made a lot of design decisions that allow you to easily patch anything to anything, which is great for happy mistakes and unusual sounds – for beginners and advanced users alike. The three oscillators each have ranges large enough to act as modulation sources, and to tune paraphonic setups if you so wish.

Prepare to get lost in this: the recent Quad Note Generator is a perfect pairing with Complex-1.

What’s inside:
Complex Osc: This is the most directly Buchla-like module – subsonic to ultrasonic range, FM & AM, and lots of choices for shaping its dual oscillators.

Noise source, OSC 3: Noise sources including red, plus an additional oscillator (OSC 3) with a range large enough to double as a modulation source.

Comb delay: If the Complex Osc didn’t get you, the comb delay should – you can use this for string models by tuning the delay with feedback (see the sketch after this list), as well as all the usual comb delay business.

Filter: Here’s the East Coast ingredient – a Moog-style ladder filter with drive, plus both high-pass and low-pass outputs you can use simultaneously.

Low Pass Gates: Two LPGs (envelope + filter you can trigger) give you more West Coast-style options, including envelope follower functions.

Shaper: Distortion, wavefolding, and whatnot.

More modules: LFO, ADSR envelope, output mixer, plus a really handy Mix unit, Lag, Scale & Amp, and Clock & LFO + Clock 2. There’s also a useful oscilloscope.

Sequencer plus Quant: You can easily use step sequencers from around Reason, but there’s also a step sequencer in Complex-1 itself, useful for storing integrated patches. Quant also lets you tune to a range of scales.

Function: A lot of the hidden power of Complex-1 is here – there’s a function module with various algorithms.
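About that comb-delay-as-string trick mentioned above: a feedback comb resonates at the frequency whose period matches the delay time, so you “tune” it by setting the delay to one period of the target pitch – the classic Karplus-Strong move. A minimal sketch of the general technique (not Propellerhead’s code):

```java
public class CombString {
    public static void main(String[] args) {
        double sampleRate = 48000;
        double freq = 220.0; // target pitch (A3)
        // Resonance sits where the delay equals one period of the pitch:
        int delaySamples = (int) Math.round(sampleRate / freq);
        double[] delayLine = new double[delaySamples];
        double feedback = 0.995; // closer to 1.0 = longer ring
        java.util.Random rng = new java.util.Random(42);
        int idx = 0;

        for (int n = 0; n < (int) sampleRate; n++) {
            // Excite with a short noise burst, then let the comb ring out.
            double in = (n < delaySamples) ? rng.nextDouble() * 2 - 1 : 0.0;
            double out = in + feedback * delayLine[idx]; // y[n] = x[n] + g*y[n-D]
            delayLine[idx] = out;
            idx = (idx + 1) % delaySamples;
            // 'out' is a plucked-string-like tone at ~220 Hz.
        }
        System.out.println("Comb tuned to " + freq + " Hz = " + delaySamples + " samples");
    }
}
```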

Yes, you can make complex patches with Complex-1.

The dual advantages of Complex-1: one, it’s an integrated instrument all its own, but two, it can live inside the existing Reason environment.

I’ve had my hands on Complex-1 since I visited Propellerhead HQ last week and walked through a late build. Full disclosure: I was not immediately convinced this was something I needed personally. The thing is, we’re spoiled for choice, and software lovers are budget-minded. So while a hundred bucks barely buys you one module in the hardware world, in software, it buys a heck of a lot. That’s the entry price for Softube Modular, for VCV Rack and a couple of nice add-ons, and for Cherry Audio’s Voltage Modular (at least at its current sale price, with a big bundle of extras).

Not to mention, Reason itself is a modular environment.

But there are a few things that make Complex-1 really special.

It’s a complete, integrated modular rig. This is important – VCV Rack, Softube Modular, Voltage Modular, and Reason itself are all fun because you can mix and match modules.

But it’s creatively inspiring to work with Complex-1 for the opposite reason. You have a fixed selection of modules, with some basic workflows already in mind. It immediately takes me back to the first vintage Buchla system I worked on for that reason. You still have expansive possibilities, but within something that feels like an instrument – modular patching, but not the added step of choosing which modules. The team at Propellerhead talked about their admiration for the Buchla Music Easel. This isn’t an emulation of that – Arturia have a nice Music Easel in software if that’s what you want – but rather takes that same feeling of focusing on a toolkit and provides a modern, Propellerhead-style take on the concept.

It sounds fantastic. This one’s hard to overstate, so it’s better to just go give the trial a spin. In terms of specs, Propellerhead points to their own DSP and 4X oversampling everywhere. In practice, it means even a stupidly simple patch with raw oscillators sounds gorgeous and lush. I love digital sounds and aliasing and so on, but… it’s nice to have this end of the spectrum, too. You get a weird, uncanny feeling of lying in bed with a laptop and some studio headphones and hearing your own music as if it’s a long-lost 1970s electronic classic. It’s almost too easy to sound good. Tell your friends you’ll see them in the spring, because for now you want to spend some time alone pretending you’re Laurie Spiegel.

It lives inside Reason. The other reality is, it’s really fun having this inside Reason, where you can combine your patches into Combinators and work with all the other pattern sequencers and effects and whatnot. You can also make elaborate polysynths by stacking instances of Complex-1.

There’s basic CV and audio interconnectivity with your rack. This may look meager at first, but I found this in addition to the Combinator opens a lot of possibilities, especially for playing live/improvising.

You get loads of presets, of course, which will appeal to those not wanting to get lost in patching. But I also welcome that Propellerhead included a set of basic templates as starting points for those who do want to explore.

Patching is also really easy, though I miss being able to re-patch from both sides of a cable as in a lot of software modulars. Better is the hide/unhide cables functionality, so you can make the patch cables disappear for easier control of the front panel. (Why don’t all software modulars have this feature, actually?)

You don’t get unlimited patchability between Complex-1 and the rest of Reason. For simplicity, you’re limited to note/MIDI input (from other devices as well as externally), basic CV input and output, and input to the sequencer. There’s also a very useful audio input. That may disappoint some people who wanted more options, though it still provides a lot of power.

Mostly I want to buy a really big touch display for Windows and use that. And with this kind of software out there, I may not be looking at hardware so much. I even expect to use this live.

Some sounds for you (while I work on sharing some of my own):

Complex-1 Rack Extension

Complex-1 in the shop


Generative music from Shanghai’s AYRTBH: interview, download

Delivered... David Abravanel | Scene | Mon 17 Dec 2018 9:07 pm

The mysterious, murky, glitchy-future sounds you hear from AYRTBH, Shanghai’s Wang Changcun, emerge from an algorithm. That software can make this album over and over again without sounding the same. The artist explains – and shares a specially generated exclusive for CDM readers.

Here at CDM, we’re no strangers to experiments with iterations in music. Icarus’ 2012 release Fake Fish Distribution used custom Max for Live devices to produce 1,000 generative variations, each sold to one customer and providing an individualized experience.

Six years later, Shanghai-based Wang Changcun aka AYRTBH has released Song of Anon, an album available in two formats: as an eight-track traditional listen, or as a stand-alone generative app.

Listen and download (M4A) via SoundCloud:

Or download lossless FLAC from WeTransfer.

I spoke to Wang about creating such a unique listen, and how it challenges our perceptions of authorship, what constitutes a piece of music, and composition.

Generative app.

How does the Song of Anon app work?

The focus of Song of Anon both App and album is the construction and dividing of rhythm. The App is not a “tool” software; it can only make Song of Anon-alike music. After a pattern is played many times, the App will re-generate a new pattern based on the packed-in JavaScript file. The synth parameters of the App are also randomly locked in a specific range. The listener can toggle the App on and off 🙂
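A toy sketch of the scheme Wang describes – his app does this with JavaScript inside Max, so this is only an illustration of the logic, not his code: synth parameters get locked to a random value in a preset range at startup, and the pattern regenerates after a fixed number of plays.

```java
import java.util.Random;

public class AnonLike {
    public static void main(String[] args) {
        Random rng = new Random();
        // "Randomly locked in a specific range" at startup, then fixed:
        double cutoffHz = 400 + rng.nextDouble() * (2000 - 400);
        System.out.printf("locked cutoff: %.0f Hz%n", cutoffHz);

        int playsPerPattern = 4; // after this many plays, regenerate
        for (int gen = 0; gen < 3; gen++) {
            boolean[] pattern = new boolean[16];
            for (int i = 0; i < pattern.length; i++) pattern[i] = rng.nextDouble() < 0.4;
            for (int play = 0; play < playsPerPattern; play++) {
                StringBuilder sb = new StringBuilder("gen " + gen + ": ");
                for (boolean hit : pattern) sb.append(hit ? 'x' : '.');
                System.out.println(sb);
            }
        }
    }
}
```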

Why did you decide to distribute Song of Anon as both an app and an album?

The App is a prototype of the album track’s rhythm algorithm, though for convenience all sounds in the App are synthesized through [Max/]MSP. Before recording Song of Anon tracks, I use the prototype system for testing.

Inside the Max patcher for the Song of Anon app.

Where would you draw the line of authorship using algorithmic software with user controls? If I adjust the controls on the Song of Anon app, is the song mine? Is it yours? Is it a collaboration between the two of us?

Song of Anon app is not a “tool” software. It’s not supposed to be controlled, it runs in its own world outputting sounds. I think outputs of the app belong to the Song of Anon album. Yeah the app can be adjusted and interfered with, but after a certain duration of time it will go back to its own logic again.

Song of Anon the App is composed and synthesized entirely in Max. Do you normally prefer synthesis over samples, or was this a choice relating to making Song of Anon its own system?

Song of Anon Prototype the App is made entirely in Max, but not the album. In the album I also used the Madrona Labs Aalto synth for more complex sounds. For Song of Anon yes I only use synthesis, but in my 2017 EP MTK I mainly used samples.

Is performing Song of Anon different from performing around previous albums?

Yes, it’s a new performing system I built for Song of Anon. But basically the software (Live, Max, Numerology) are the same; the ways of using them are changed.

Currently my performance setup is: Ableton Live as the central host, some selfmade M4L devices, [Five12] Numerology which is synced to Live’s clock, and sometimes Terminal and Max for generating parameter values of Numerology’s sequencer.

AYRTBH live.

How does Numerology fit in to the workflow? Are you using the generative/algorithmic features of Numerology, or just the basic step sequencing?

I’ve been using Numerology since maybe 2009, and it’s still my favorite go-to sequencer if I want to quickly implement a sequencing idea. The modulation system of Numerology is very powerful, and you can even make a synthesizer in it using the LFO as an oscillator; there are envelopes, VCAs, and filters after all. Normally I use Numerology and Max together, learn things/transfer ideas from each other. Yes I also use the generative/algorithmic features of Numerology, and sometimes if doing something is not easy in Numerology I’ll make a Max patch as a helper.

Have you thought of making/selling software (standalone, plug-in, or perhaps Max for Live devices) based on your sequencing work for Song of Anon?

I do have a plan on making/selling software (standalone and Max for Live device) with another Shanghai-based electronic musician Gooooose, but not the patches I used in Song of Anon. It should be something can be easily “shared” to others, patches/devices from Song of Anon are too personal, they can only make Song of Anon tracks.

What are you working on next?

I just finished the work for my first solo exhibition in Shanghai’s OCAT museum last month. Now I’m working on two live computer music sets next month in Shanghai and Beijing, improving the Song of Anon system.

AYRTBH live.

During the interview, Wang directed me to Zhao Yue, who runs the Beijing-based label D-Force, on which Song of Anon was released. I spoke briefly to Zhao about her label, and she offered some perspectives and excellent recommendations for further exploring what D-Force has to offer.

How do you go about marketing a release like Song of Anon, knowing that there’s also an algorithmic app version of it out there?

The algorithmic app is a major part of our promotion actually. To us, the release works on two levels: the musical level and the conceptual level. On the music side it is a bit weird, but humorous and chill, and it speaks to fans who already know and trust Wang. But we did put more emphasis on the concepts and technology behind the record, which has more of a wow-factor.

We deliberately contacted more media in the IT, art and academic circles than what we’d usually do for a release. They responded very well to this and helped a lot with putting out press releases and giving us interviews. To be honest we were expecting obstacles “selling” this to media and platforms but their enthusiasm gave us a pleasant surprise. One contact from a major media platform gave us this feedback: “Of course we are tired of reporting on the idols and pop stars everyday. Something fun like this is refreshing for us too.” We have since realized that the Chinese audience have a very open mind when it comes to technical ideas, and this “art meets algorithm meets AI” idea really fires up people’s imaginations. Perhaps we should thank the general emphasis on tech and science in the Chinese society for this? The strong concept helps to get Wang Changcun’s idea over to more people than our usual music fans.

How did you first get connected to Wang?

We have heard of his name quite a long time ago. He’s considered to be one of the gurus in the field of experimental electronic music. Then we were introduced by one of our other artists called Han Han (he also produces and performs under the name Gooooose, and is the frontman of the band Duck Fight Goose). Han Han connected us because he felt that we were one of the few labels that were open to more experimental releases and, simply, that we could be personal friends. And he was proven right.

What are some other artists/albums that you’re working with that you’d recommend, especially for fans of Song of Anon?

For fans of Song of Anon, we could certainly recommend:

Synthetic China Vol.1 by Various Artists
This is a compilation curated by Han Han, and it is a collection of tracks from the pioneers of Chinese electronic music. Wang Changcun and Han Han (Gooooose) all contributed.

They by Gooooose
This is a concept album about an imagined synthetic alien lifeform that exists in Han Han’s mind. Each track depicts one aspect of their lives.

Listen to Song of Anon on Spotify

Download the standalone app [Cycling ’74]


Reason 10.3 will improve VST performance – here’s how

Delivered... Peter Kirn | Scene | Fri 14 Dec 2018 3:06 pm

VST brings more choice to Reason, but more support demands, too. Here’s an update on how Propellerhead are optimizing Reason to bring plug-in performance in line with what users expect.

For years, Reason was a walled-off garden. Propellerhead resisted supporting third-party plug-ins, and when they did, introduced their own native Rack Extensions technology for supporting them. That enables more integrated workflows, better user experience, greater stability, and easier installation and updates than a format like VST or AU allows.

But hey, we have a lot of VSTs we want to run inside Reason, engineering arguments be damned. And so Propellerhead finally listened to users, delivering support for VST effects and instruments on Mac and Windows in Reason 9.5. (Currently only VST2 plug-ins are supported, not VST3.)

Propellerhead have been working on improving stability and performance continuously since then. Reason 10.3 is a much-anticipated update, because it addresses a significant performance issue with VST plug-ins – without disrupting one of the things that makes Reason’s native devices work well.

The bad news is, 10.3 is delayed.

The good news is, it works really well. It puts Reason on par with other DAWs as far as VST performance. That’s a big deal to Reason users, just because in many other ways Reason is unlike other DAWs.

I met with Propellerhead engineers yesterday in Stockholm, including Mattias Häggström Gerdt (product manager for Reason). We got to discuss the issue, their whole development effort, and get hands-on with their alpha version.

Why this took a while

Okay, first, some technical discussion. “Real time” is actually not a thing in digital hardware and software. The illusion of a system working in real time is created by buffering – using very small windows of time to pass audio information, so small that the results seem instantaneous to the user.

There’s a buffer size you set for your audio interface – this one you may already know about. But software also has internal buffers for processing, hidden from the user. In a modular environment, you really want this buffer to be as small as possible, so that patching and processing feel responsive – just as they would if you were using analog hardware. Reason accordingly has an internal buffer of 64 frames to do just that. That means without any interruptions to your audio stream, you can patch and repatch and tweak and play to your heart’s content.

Here’s the catch: some plug-in developers, for design reasons, prefer larger buffers (higher latency) in order to reduce CPU consumption – even though their plug-ins technically work in Reason’s small-buffer environment. This is common in plug-ins where ultra-low-latency internal processing isn’t as important. But running inside Reason, that approach adds strain to your CPU. Some users won’t notice anything, because they don’t use these plug-ins, or use fewer instances of them. But some will see their machine run out of CPU resources faster in Reason than in other DAWs. The result: the same plug-in setup you used in another DAW will make Reason sputter, which is of course not what you want.

Another catch: if you have ever tried adjusting the audio buffer size on your interface to reduce CPU usage, in this case, that won’t help. So users encountering this issue are left frustrated.

This is a fixable problem. You give those plug-ins larger buffers when they demand them, while Reason and its devices continue to work as they always have. It’s just that there’s a lot of work going back through all the rest of Reason’s code to adjust for the change. And like a lot of coding work, that takes time. Adding more people doesn’t necessarily speed this up, either. (Ever tried adding more people to a kitchen to “speed up” cooking dinner? Like that.)
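Conceptually, the fix is an accumulate-and-flush adapter between Reason’s 64-frame engine and a plug-in that wants bigger blocks: the plug-in runs once per big block instead of being forced through every small one, at the cost of one big block’s worth of latency. A sketch of the general technique (not Propellerhead’s actual code):

```java
/** Adapts a small host block size to a plug-in's larger preferred block.
 *  Assumes pluginBlock is a multiple of the host's block size (e.g. 64). */
public class BufferAdapter {
    private final int pluginBlock; // e.g. 256 frames preferred by the plug-in
    private final float[] inBuf, outBuf;
    private int filled = 0;

    BufferAdapter(int pluginBlock) {
        this.pluginBlock = pluginBlock;
        inBuf = new float[pluginBlock];
        outBuf = new float[pluginBlock];
    }

    /** Called by the host every 64 frames; adds pluginBlock frames of latency. */
    void process(float[] host, int frames) {
        for (int i = 0; i < frames; i++) {
            float in = host[i];
            host[i] = outBuf[filled];         // hand back previously processed audio
            inBuf[filled++] = in;
            if (filled == pluginBlock) {      // big block full:
                pluginProcess(inBuf, outBuf); // run the plug-in once
                filled = 0;
            }
        }
    }

    void pluginProcess(float[] in, float[] out) {
        System.arraycopy(in, 0, out, 0, in.length); // placeholder DSP
    }
}
```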

When it’s done, existing Reason users won’t notice anything. But users of the affected plug-ins will see big performance gains.

What to expect when it ships

I sat with the engineers looking at an alpha and we measured CPU usage. The results by plug-in are what you might expect.

We worked with three plug-ins by way of example – charts are here. With iZotope Ozone 7, there’s a massive gain in the new build. That makes sense – a mastering plug-in isn’t so concerned about low-latency performance. With Xfer Records Serum, there’s almost no change. Native Instruments’ Massive is somewhere in between. These are just typical examples – many other plug-ins will fall somewhere along this range.

Native Instruments’ Massive gets a modest but measurable performance boost. Left: before. Right: after.

iZotope’s Ozone is a more dramatic example. Stack some instances of this mastering-focused plug-in, and you can max out the CPU quickly in Reason (left). But in the Reason 10.3 alpha, you can see the “big batch” approach resolves that performance issue (right).

Those graphs are from the Mac, but the OS in this case won’t really matter.

The fix is coming to the public. The alpha is not something you want to run; it’s already in the hands of testers who don’t mind working with prerelease software. A public beta won’t happen in the couple of weeks we have left in 2018, but it is coming soon – as soon as it’s done. And of course 10.3 will be a free upgrade for Reason 10 users.

When it ships, Reason 10.3 will give you performance on par with other DAWs. That is, your performance will depend on your CPU and which plug-ins you’re using, but Reason will be more or less the same as other hosts beyond that.

So this isn’t really exciting stuff, but it will make your life easier. We’ll let you know how it comes along and try to test that final version.

Official announcement:

Update on Reason and VST performance

For more on Reason and VST support, see their support section:

Propellerhead Software: Rack Extensions, ReFills and VSTs


Cherry Audio Voltage Modular: a full synth platform, open to developers

Delivered... Peter Kirn | Scene | Thu 13 Dec 2018 4:43 pm

Hey, hardware modular – the computer is back. Cherry Audio’s Voltage Modular is another software modular platform. Its angle: be better for users — and now, easier and more open to developers, with a new free tool.

Voltage Modular was shown at the beginning of the year, but its official release came in September – and now is when it’s really hitting its stride. Cherry Audio’s take certainly isn’t alone; see also, in particular, Softube Modular, the open source VCV Rack, and Reason’s Rack Extensions. Each of these supports live patching of audio and control signal, hardware-style interfaces, and has rich third-party support for modules with a store for add-ons. But they’re all also finding their own particular take on the category. That means now is suddenly a really nice time for people interested in modular on computers, whether for the computer’s flexibility, as a supplement to hardware modular, or even just because physical modular is bulky and/or out of budget.

So, what’s special about Voltage Modular?

Easy patching. Audio and control signals can be freely mixed, and there’s even a six-way pop-up multi on every jack, so each jack has tons of routing options. (This is a computer, after all.)

Each jack can pop up to reveal a multi.

It’s polyphonic. This one’s huge – you get true polyphony via patch cables and poly-equipped modules. Again, you know, like a computer.

It’s open to development. There’s now a free Module Designer app (commercial licenses available), and it’s impressively easy to code for. You write DSP in Java, and Cherry Audio say they’ve made it easy to port existing code. The app also looks like it reduces a lot of friction in this regard.

There’s an online store for modules – and already some strong early contenders. You can buy modules, bundles, and presets right inside the app. The mighty PSP Audioware, as well as Vult (who make some of my favorite VCV stuff) are already available in the store.

Free and paid add-ons – modules and presets – populate that store. And right now, a hundred bucks gets you started with a bunch of stuff right out of the gate.

Voltage Modular is a VST/AU/AAX plug-in and runs standalone. And it supports 64-bit double-precision math with zero-latency module processing – but, impressively in our tests, isn’t as hard on your CPU as some of its rivals.

Right now, Voltage Modular Core + Electro Drums are on sale for just US$99.

Real knobs and patch cords are fun, but … let’s be honest, this is a hell of a lot of fun, too.

For developers

So what about that development side, if that interests you? Well, Apple-style, there’s a 70/30 split in developers’ favor. And it looks really easy to develop on their platform:

Java may be something of a bad word to developers these days, but I talked to Cherry Audio about why they chose it, and it definitely makes some sense here. Apart from being a reasonably friendly language, and having unparalleled support (particularly on the Internet connectivity side), Java solves some of the pitfalls that might make a modular environment full of third-party code unstable. You don’t have to worry about memory management, for one. I can also imagine some wackier, creative applications using Java libraries. (Want to code a MetaSynth-style image-to-sound module, and even pull those images from online APIs? Java makes it easy.)

Just don’t think of “Java” as in legacy Java applications. Here, DSP code runs on a HotSpot virtual machine, so your DSP is actually running as machine language by the time it’s in an end user’s patch. It seems Cherry have also thought through the GUI: the UI is coded natively in C++, while you can create custom graphics like oscilloscopes (again, using just Java on your side). This is similar to the models chosen by VCV and Propellerhead for their own environments, and it suggests a direction for plug-ins that involves far less extra work and greater portability. It’s no stretch to imagine experienced developers porting to multiple modular platforms reasonably easily. Vult of course is already in that category … and their stuff is so good I might almost buy it twice.
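Cherry Audio’s actual Module Designer API isn’t documented in this article, so the names below are hypothetical – but the shape of the job is roughly this: plain Java DSP callbacks, with the JVM handling memory so a misbehaving module can’t easily corrupt the host, and HotSpot JIT-compiling the hot loop down to machine code.

```java
// Hypothetical module sketch - the real Module Designer API will differ.
public class GainModule {
    double gain = 1.0; // imagined front-panel knob value

    /** Per-sample DSP: one input jack in, one output jack out. */
    double processSample(double in) {
        // No manual memory management here - the JVM takes care of it,
        // which removes a whole class of crashes from third-party code.
        return in * gain;
    }

    /** Hot loop: after warm-up, HotSpot runs this as native machine code. */
    void processBlock(double[] in, double[] out) {
        for (int i = 0; i < in.length; i++) out[i] = processSample(in[i]);
    }
}
```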

Or to put that in fewer words: the VM can match or even best native environments, while saving developers time and trouble.

Cherry also tell us that iOS, Linux, and Android could theoretically be supported in the future using their architecture.

Of course, the big question here is installed user base and whether it’ll justify effort by developers, but at least by reducing friction and work and getting things rolling fairly aggressively, Cherry Audio have a shot at bypassing the chicken-and-egg dangers of trying to launch your own module store. Plus, while this may sound counterintuitive, I actually think that having multiple players in the market may call more attention to the idea of computers as modular tools. And since porting between platforms isn’t so hard (in comparison to VST and AU plug-in architectures), some interested developers may jump on board.

Well, that, and there’s the simple matter that in music, we synth nerds love to toy around with this stuff both as end users and as developers. It’s fun and stuff. On that note:

Modulars gone soft

Stay tuned; I’ve got this for testing and will let you know how it goes.

https://cherryaudio.com/voltage-modular

https://cherryaudio.com/voltage-module-designer

The post Cherry Audio Voltage Modular: a full synth platform, open to developers appeared first on CDM Create Digital Music.

FL Studio 20.1 arrives, studio-er, loop-ier, better

Delivered... Peter Kirn | Scene | Wed 12 Dec 2018 9:03 pm

The just-before-the-holiday-break software updates just keep coming. Next: the evergreen, lifetime-free-updates latest release of the DAW the developer calls FL Studio, and everyone else calls “Fruity Loops.”

FL Studio has given people reason to take it more seriously of late, too. There’s a real native Mac version, so FL is no longer a PC-vs-Mac thing. There’s integrated controller hardware from Akai (the new Fire), and that in turn exploits all those quick-access record and step sequence features that made people love FL in the first place.

Akai Fire and the Mac version might make lapsed or new users interested anew – but hardcore users, this software release is really for you.

The snapshot view:

Does your DAW have a visualizer built on a game engine inside it? No? FL does. And you thought you were going to just have to make your next music video be a bunch of shaky iPhone footage you ran through some weird black and white filter. No!

Step sequencer looping is back (previously seen in FL 11), but now with more per-channel controls, so you can make polyrhythms – or not, lining everything up instead if you’d rather.

Plus if you’re using Fire hardware, you get options to set channel loop length and the ability to burn to Patterns.

Audio recording is improved, making it easier to arm and record and get audio and pre/post effects where you want.

And there are 55 new minimal kick drum samples.

And now you can display the GUI FPS.

And you have a great way of making music videos by exporting from the included video game engine visualizer.

Actually, you know, I’m just going to stop – there’s just a whole bunch of new stuff, and you get it for free. And they’ve made a YouTube video. And as you watch the tutorial, it’s evident that FL really has matured into a serious DAW that stands toe-to-toe with everything else, without losing its personality.

https://www.image-line.com/flstudio/

20.1 update

The post FL Studio 20.1 arrives, studio-er, loop-ier, better appeared first on CDM Create Digital Music.

Pigments is a new hybrid synth from Arturia, and you can try it free now

Delivered... Peter Kirn | Scene | Tue 11 Dec 2018 6:04 pm

Arturia made their name emulating classic synths, and then made their name again in hardware synths and handy hardware accessories. But they’re back with an original synthesizer in software. It’s called Pigments, and it mixes vintage and new together. You know, like colors.

The funny thing is, wavetable synthesis as an idea is as old as – or older than – a lot of the vintage synths that spring to mind. You can trace it back to the 1970s and Wolfgang Palm, before instruments from PPG and Waldorf.

But “new” is about sound, not history. It’s only recently become practical to build powerful morphing wavetable engines with this much voice complexity and modulation – plus now we have computer displays for visualizing what’s going on.

Pigments brings together the full range of possible colors to work with – vintage to modern, analog to advanced digital. And it does so in a way that feels coherent and focused.

I’ve just started playing around with Pigments – expect a real hands-on shortly – and it’s impressive. You get the edgier sounds of wavetable synthesis with all the sonic language you expect from virtual analog, including all those classic and dirty and grimy sounds. (I can continue my ongoing mission to make everyone think I’m using analog hardware when I’m in the box. Fun.)

Arturia’s marketing copy here is clever – like I wish I’d thought of this phrase: “Pigments can sound like other synths, [but] no other synth can sound like Pigments.”

Okay, so what’s under the hood that makes them claim that?

Two engines: one wavetable, one virtual analog, each representing Arturia’s latest work. The waveshaping side gives you lots of options for sculpting the oscillator and fluidly controlling the amount of aliasing, which determines so much of the sound’s harmonic character.
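If you want the mechanics behind the buzzword, here’s a generic sketch of the core wavetable technique – nothing Arturia-specific, just the textbook version: read through a stored single-cycle table at a rate set by pitch, interpolate between samples, and crossfade (“morph”) between tables.

```java
// A textbook wavetable oscillator -- generic, not Pigments' engine.
public class WavetableOsc {
    static final int SIZE = 2048;
    double[] tableA = new double[SIZE]; // sine
    double[] tableB = new double[SIZE]; // naive saw -- it aliases, part of the character
    double phase = 0.0;                 // normalized 0..1

    public WavetableOsc() {
        for (int i = 0; i < SIZE; i++) {
            tableA[i] = Math.sin(2.0 * Math.PI * i / SIZE);
            tableB[i] = 2.0 * i / SIZE - 1.0;
        }
    }

    // morph: 0.0 = pure tableA, 1.0 = pure tableB
    public double next(double freq, double sampleRate, double morph) {
        double idx = phase * SIZE;
        int i0 = (int) idx;
        int i1 = (i0 + 1) % SIZE;
        double frac = idx - i0;
        double a = tableA[i0] + frac * (tableA[i1] - tableA[i0]); // linear interpolation
        double b = tableB[i0] + frac * (tableB[i1] - tableB[i0]);
        phase += freq / sampleRate;
        if (phase >= 1.0) phase -= 1.0;
        return a + morph * (b - a); // crossfade between the two tables
    }
}
```

Real engines layer anti-aliasing, bigger table sets, and spectral tricks on top – and that control over how much aliasing survives is exactly what Arturia is exposing.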

Advanced pitch modulation, which you can quantize to a scale – so you can make complex modulations melodic.
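The quantizing trick is easy to illustrate – this is the generic version of the technique, not Arturia’s implementation: snap a continuous modulation value, measured in semitones, to the nearest scale degree.

```java
// Snap a continuous pitch modulation (in semitones) to a scale --
// the generic technique, not Arturia's code.
public class ScaleQuantizer {
    static final int[] NATURAL_MINOR = {0, 2, 3, 5, 7, 8, 10};

    public static double quantize(double semitones, int[] scale) {
        int octave = (int) Math.floor(semitones / 12.0);
        double within = semitones - 12.0 * octave; // 0 <= within < 12
        double best = scale[0];
        for (int degree : scale) {
            if (Math.abs(within - degree) < Math.abs(within - best)) best = degree;
        }
        // The next octave's root may be closer than any degree below it
        if (Math.abs(within - 12.0) < Math.abs(within - best)) best = 12.0;
        return 12.0 * octave + best;
    }

    public static void main(String[] args) {
        // A smooth LFO sweep becomes a melodic line over the scale
        for (double mod = -7.0; mod <= 7.0; mod += 1.3) {
            System.out.println(mod + " -> " + quantize(mod, NATURAL_MINOR));
        }
    }
}
```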

From the modeling Arturia has been doing in their V Collection, you get a full range of filters, classic and modern (including Surgeon and Comb types). There’s also a bunch of effects, like wavefolder, overdrive, parametric EQ, and delay.

There’s also extensive routing for all those toys – drag and drop effects into inserts or sends, choose series or parallel routings, and so on.

The effects section is as deep as modulation, but somehow everything is neatly organized, visual, and never overwhelming.

You can modulate anything with anything, Arturia says – which sounds about right. And for modulation, you have tons of choices in envelopes, modulation shapes, and even function generators and randomization sources. But all of this is also graphical and neatly organized, so you don’t get lost. Best of all, there are “heads-up” graphical displays that show you what’s happening under the hood of even the most complex patch.

The polyphonic sequencer alone is huge, meaning you could work entirely inside Pigments.


Color-coded and tabbed, the UI constantly gives you subtle visual feedback on what modulation, oscillators, and processors are doing at any given time – useful both when building up sounds from scratch and when picking apart the extensive presets. You can build something step by step if you like, with a sense that inside this semi-modular world, you’re free to focus on one thing at a time while making something more multi-layered.

Then on top of all of that, it’s not an exaggeration to say that Pigments is really a synth combined with a sequencer. The polyphonic sequencer/arpeggiator is full of trigger options and settings that mean it’s totally possible to fire up Pigments in standalone mode and make a whole piece, just as you would with a full synth workstation or modular rig.

Instead of a short trial, you get a full month to enjoy this – a free release for everyone, expiring on January 10. So now you know what to do with any holiday break. During that time, pricing is $149 / 149€, rising to $199 / 199€ after that.

I’m having a great deal of fun with it already. And we’re clearly looking at a new generation of advanced soft synths. Stay tuned.

Product page:

https://www.arturia.com/products/analog-classics/pigments/media

The post Pigments is a new hybrid synth from Arturia, and you can try it free now appeared first on CDM Create Digital Music.

Bitwig Studio 2.5 beta arrives with features inspired by the community

Delivered... Peter Kirn | Scene | Tue 11 Dec 2018 2:37 pm

We’re coasting to the end of 2018, but Bitwig has managed to squeeze in Studio 2.5, with features the company says were inspired by or directly requested by users.

The most interesting of these adds some interactive arrangement features to the linear side of the DAW. Traditional DAWs like Cubase have offered interactive features before, but those generally live on the timeline. Or you can loop individual regions in most DAWs, but that’s about it.

Bitwig are adding interactive actions to the clips themselves, right in the arrangement. “Clip Blocks” apply Next Action features to individual clips.
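To make the concept concrete, here’s a toy model of the idea – purely my own illustration, not Bitwig’s implementation: each clip loops some number of times, then a per-clip action decides what happens next.

```java
import java.util.List;

// A toy model of per-clip "next actions" -- a conceptual illustration,
// not Bitwig's implementation.
public class ClipChain {
    enum Action { NEXT_CLIP, REPEAT, STOP }

    record Clip(String name, int repeats, Action then) {}

    public static void main(String[] args) {
        List<Clip> arrangement = List.of(
                new Clip("Intro", 2, Action.NEXT_CLIP),
                new Clip("Groove", 4, Action.NEXT_CLIP),
                new Clip("Outro", 1, Action.STOP));

        int i = 0;
        while (i < arrangement.size()) {
            Clip c = arrangement.get(i);
            for (int r = 0; r < c.repeats(); r++) System.out.println("play " + c.name());
            if (c.then() == Action.STOP) break;
            if (c.then() == Action.NEXT_CLIP) i++;
            // Action.REPEAT would leave i unchanged, looping the same clip
        }
    }
}
```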

Also in this release:

“Audio Slide” lets you slide audio inside clips without leaving the arranger. That’s possible in many other DAWs, but it’s definitely a welcome addition in Bitwig Studio – especially because an audio clip can contain multiple audio events, which isn’t necessarily possible elsewhere.

Note FX Selector lets you sweep through multiple layers of MIDI effects. We’ve seen something like this before, too, but this implementation is really nice.

There’s also a new set of 60 Sampler presets with hundreds of full-frequency waveforms – looks great for building up instruments. (This makes me ready to boot into Linux with Bitwig, too, where I don’t necessarily have my full plug-in library at my disposal.)

Other improvements:

  • Browser results sorted by relevance
  • Faster plug-in scanning
  • 50 more functions accessible as user-definable key commands

To me, the thing that makes this newsworthy, and the one to test, is really this notion of an interactive arrangement view.

Ableton pioneered Follow Actions in Live’s Session View years back, but they’ve failed to apply that concept even to scenes inside Session View. (Some Max for Live hacks fill in the gap, but that only proves people are looking for this feature.)

Making the arrangement itself interactive at the clip level – that’s really something new.

Now, that said, let’s play with Clip Blocks in Bitwig 2.5 and see if this is helpful or just confusing or superfluous in arrangements. (Presumably you can toy with different arrangement possibilities and then bounce out whatever you’ve chosen? I have to test this myself.) And there’s also the question of whether this much interactivity actually just has you messing around instead of making decisions, but that’s another story.

Go check out the release, and if you’re a Bitwig user, you can immediately try out the beta. Let us know what you think and how those Clip Blocks impact your creative process. (Or share what you make!)

Just please – no EDM tabla. (I think that moment sent a chill of terror down my spine in the demo video.)

https://www.bitwig.com/en/18/bitwig-studio-2_5.html

The post Bitwig Studio 2.5 beta arrives with features inspired by the community appeared first on CDM Create Digital Music.

Jlin, Holly Herndon, and ‘Spawn’ find beauty in AI’s flaws

Delivered... Peter Kirn | Artists,Scene | Mon 10 Dec 2018 6:03 pm

Musicians don’t just endure technology when it breaks. They embrace the broken. So it’s fitting that Holly Herndon’s team have produced a demonic spawn of machine learning algorithms – and that the results are wonderful.

The new music video for the Holly Herndon + Jlin collaboration has been making the rounds online, so you may have seen it already:


But let’s talk about what’s going on here. Holly is continuing a long-running collaboration with producer Jlin, here joined by technologist Mat Dryhurst and coder Jules LaPlace. (The music video itself is directed by Daniel Costa Neves with software developer Leif Ryge, employing still more machine learning technique to merge the two artists’ faces.)

Machine learning processes are being explored in different media in parallel – text, images, sound, voice, and music. But the results can be all over the place. And ultimately, humans are the last stage. We judge the results of the algorithms, project our own desires and fears onto what they produce, and imagine anthropomorphic intents and characteristics.

Sometimes errors like over-fitting then take on a personality all their own – even as mathematically sophisticated results fail to inspire.

But that’s not to say these reactions aren’t just as real. Part of what may make the video “Godmother” compelling is not just the buzzword of AI, but the fact that it genuinely sounds different.

The software ‘Spawn,’ developed by Ryge working with the team, is a machine learning-powered encoder. Herndon and company have anthropomorphized that code in their description, but that itself is also fair – not least because the track is composed in such a way to suggest a distinct vocalist.

I love Holly’s poetic description below, but I think it’s also important to be precise about what we’re hearing. That is, we can talk about the evocative qualities of an oboe, but we should definitely still call an oboe an oboe.

So in this case, I confirmed with Dryhurst what I was hearing. The analysis stage employs neural network style transfer – some links on that below, though LaPlace and the artists here did make their own special code brew. And then they merged that with a unique vocoder – the high-quality WORLD vocoder. That is, they feed a bunch of sounds into the encoder, and get some really wild results out.
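WORLD decomposes a voice into pitch, spectral envelope, and aperiodicity, then resynthesizes from those parameters – reimplementing it is well beyond a blog post. But here’s a toy channel vocoder, a much simpler relative, just to illustrate the core vocoder idea: measure one signal’s spectral envelope per band, and impose it on another. (The test signals are stand-ins; you’d feed it real audio buffers in practice.)

```java
// A toy *channel* vocoder -- a much simpler relative of WORLD -- showing the
// core idea: measure a modulator's level per band, impose it on a carrier.
public class ToyVocoder {
    // RBJ-cookbook bandpass biquad (constant 0 dB peak gain), direct form I.
    static class BandPass {
        double b0, b2, a1, a2, x1, x2, y1, y2;
        BandPass(double freq, double q, double sr) {
            double w0 = 2 * Math.PI * freq / sr;
            double alpha = Math.sin(w0) / (2 * q);
            double a0 = 1 + alpha;
            b0 = alpha / a0;
            b2 = -alpha / a0;
            a1 = -2 * Math.cos(w0) / a0;
            a2 = (1 - alpha) / a0;
        }
        double process(double x) {
            double y = b0 * x + b2 * x2 - a1 * y1 - a2 * y2;
            x2 = x1; x1 = x; y2 = y1; y1 = y;
            return y;
        }
    }

    public static void main(String[] args) {
        double sr = 48000;
        int bands = 16;
        BandPass[] modBank = new BandPass[bands], carBank = new BandPass[bands];
        double[] env = new double[bands];
        for (int b = 0; b < bands; b++) {
            double f = 100 * Math.pow(2, b * 6.5 / bands); // log-spaced from 100 Hz up
            modBank[b] = new BandPass(f, 4, sr);
            carBank[b] = new BandPass(f, 4, sr);
        }
        double phase = 0;
        for (int n = 0; n < (int) sr; n++) {
            double modulator = Math.sin(2 * Math.PI * 220 * n / sr); // stand-in for a voice
            double carrier = 2 * phase - 1;                          // naive saw, the "synth" side
            phase += 110 / sr;
            if (phase >= 1) phase -= 1;
            double out = 0;
            for (int b = 0; b < bands; b++) {
                double m = Math.abs(modBank[b].process(modulator));
                env[b] += 0.001 * (m - env[b]); // per-band envelope follower
                out += env[b] * carBank[b].process(carrier);
            }
            // 'out' is one vocoded sample; in real use, write it to a buffer/WAV
        }
        System.out.println("rendered one second of vocoded audio");
    }
}
```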

And all of that in turn makes heavy use of the unique qualities of Jlin’s voice, Holly’s own particular compositional approach and her arresting percussive take on these fragmented sounds, Mat’s technological sensibilities, LaPlace’s code, and a whole lot of time spent on parameters and training and adaptation…

Forget automation in this instance. All of this involves more human input and more combined human effort than any conventionally produced track would.

Is it worth it? Well, aesthetically, you could make comparisons to artists like Autechre, but then you could do that with anything with mangled sample content in it. And on a literal level, the result is the equivalent of a mangled sample. The results retain recognizable spectral components of the original samples, and they add a whole bunch of sonic artifacts which sound (correctly, really) ‘digital’ and computer-based to our ears.

But it’s also worth noting that what you hear is particular to this vocoder technique, and especially to audio texture synthesis and neural network-based style transfer of sound. It’s a commentary on 2018 machine learning not just conceptually, but sonically: what you hear sounds the way it does because of the state of that tech.

And that’s always been the spirit of music. The peculiar sound and behavior of a Theremin says a lot about how radios and circuits respond to a human presence. Vocoders have ultimately proven culturally significant for their aesthetic peculiarities even if their original intention was encoding speech. We respond to broken circuits and broken code on an emotional and cultural level, just as we do acoustic instruments.

In a blog post that’s now a couple of years old – ancient history in machine learning terms, perhaps – Dmitry Ulyanov and Vadim Lebedev acknowledged that some of the techniques they used for “audio texture synthesis and style transfer” used a technique intended for something else. And they implied that the results didn’t work – that they had “stylistic” interest more than functional ones.

Dmitry even calls this a partial failure: “I see a slow but consistent interest increase in music/audio by the community, for sure amazing things are just yet to come. I bet in 2017 already we will find a way to make WaveNet practical but my attempts failed so far :)”

Spoiler – that hasn’t really happened in 2017 or 2018. But “failure” to be practical isn’t necessarily a failure. The rising interest has been partly in producing strange results – again, recalling that the vocoder, Theremin, FM synthesis, and many other techniques evolved largely because musicians thought the sounds were cool.

But this also suggests that musicians may uniquely be able to cut through the hype around so-called AI techniques. And that’s important, because these techniques are assigned mystical powers, Wizard of Oz-style.

Big corporations can only hype machine learning when it seems to be magical. But musicians can hype up machine learning even when it breaks – and knowing how and when it breaks is more important than ever. Here’s Holly’s official statement on the release:

For the past two years, we have been building an ensemble in Berlin.

One member is a nascent machine intelligence we have named Spawn. She is being raised by listening to and learning from her parents, and those people close to us who come through our home or participate at our performances.

Spawn can already do quite a few wonderful things. ‘Godmother’ was generated from her listening to the artworks of her godmother Jlin, and attempting to reimagine them in her mother’s voice.

This piece of music was generated from silence with no samples, edits, or overdubs, and trained with the guidance of Spawn’s godfather Jules LaPlace.

In nurturing collaboration with the enhanced capacities of Spawn, I am able to create music with my voice that far surpass the physical limitations of my body.

Going through this process has brought about interesting questions about the future of music. The advent of sampling raised many concerns about the ethical use of material created by others, but the era of machine legible culture accelerates and abstracts that conversation. Simply through witnessing music, Spawn is already pretty good at learning to recreate signature composition styles or vocal characters, and will only get better, sufficient that anyone collaborating with her might be able to mimic the work of, or communicate through the voice of, another.

Are we to recoil from these developments, and place limitations on the ability for non-human entities like Spawn to witness things that we want to protect? Is permission-less mimicry the logical end point of a data-driven new musical ecosystem surgically tailored to give people more of what they like, with less and less emphasis on the provenance, or identity, of an idea? Or is there a more beautiful, symbiotic, path of machine/human collaboration, owing to the legacies of pioneers like George Lewis, that view these developments as an opportunity to reconsider who we are, and dream up new ways of creating and organizing accordingly.

I find something hopeful about the roughness of this piece of music. Amidst a lot of misleading AI hype, it communicates something honest about the state of this technology; it is still a baby. It is important to be cautious that we are not raising a monster.

– Holly Herndon

Some interesting code:
https://github.com/DmitryUlyanov/neural-style-audio-tf

https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder

Go hear the music:

http://smarturl.it/Godmother

Previously, from the hacklab program I direct, talks and a performance lab with CTM Festival:

What culture, ritual will be like in the age of AI, as imagined by a Hacklab

A look at AI’s strange and dystopian future for art, music, and society

I also wrote about machine learning:

Minds, machines, and centralization: AI and music

The post Jlin, Holly Herndon, and ‘Spawn’ find beauty in AI’s flaws appeared first on CDM Create Digital Music.
