Indian E-music – The right mix of Indian Vibes… » Music tech


Pigments is a new hybrid synth from Arturia, and you can try it free now

Delivered... Peter Kirn | Scene | Tue 11 Dec 2018 6:04 pm

Arturia made their name emulating classic synths, and then made their name again in hardware synths and handy hardware accessories. But they’re back with an original synthesizer in software. It’s called Pigments, and it mixes vintage and new together. You know, like colors.

The funny thing is, wavetable synthesis as an idea is as old or older than a lot of the vintage synths that spring to mind – you can trace it back to the 1970s and Wolfgang Palm, before instruments from PPG and Waldorf.

But “new” is about sound, not history. And it’s only recently become practical to build powerful morphing wavetable engines with this much voice complexity and modulation – plus we now have computer displays for visualizing what’s going on.

Pigments brings together the full range of possible colors to work with – vintage to modern, analog to advanced digital. And it does so in a way that feels coherent and focused.

I’ve just started playing around with Pigments – expect a real hands-on shortly – and it’s impressive. You get the edgier sounds of wavetable synthesis with all the sonic language you expect from virtual analog, including all those classic and dirty and grimy sounds. (I can continue my ongoing mission to make everyone think I’m using analog hardware when I’m in the box. Fun.)

Arturia’s marketing copy here is clever – like I wish I’d thought of this phrase: “Pigments can sound like other synths, [but] no other synth can sound like Pigments.”

Okay, so what’s under the hood that makes them claim that?

Two engines: one wavetable, one virtual analog, each representing Arturia’s latest work. The waveshaping side gives you lots of options for sculpting the oscillator and fluidly controlling the amount of aliasing, which determines so much of the sound’s harmonic character.

Advanced pitch modulation which you can quantize to scale – so you can make complex modulations melodic.
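To make the idea concrete, here is a toy sketch of scale quantization – this is purely illustrative and not Arturia’s implementation; the scale table and function name are my own:

```python
# Snap a pitch-modulation offset (in semitones) to the nearest note of a scale,
# so even a wild LFO or random source stays melodic.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees, in semitones above the tonic

def quantize_to_scale(semitones, scale=C_MAJOR):
    """Return the nearest scale tone to a raw modulation value in semitones."""
    octave, degree = divmod(round(semitones), 12)
    # Include the tonic of the next octave so values near the top snap upward.
    nearest = min(scale + [12], key=lambda d: abs(d - degree))
    return octave * 12 + nearest
```

Feed this with any modulation source and the output lands on scale tones only – for example, a raw value of 6.2 semitones snaps down to the fourth (5 semitones) in C major.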

From the modeling Arturia has been doing and their V Collection, you get the full range of filters, classic and modern (surgeon and comb). There’s also a bunch of effects, like wavefolder, overdrive, parametric EQ, and delay.

There’s also extensive routing for all those toys – drag and drop effects into inserts or sends, choose series or parallel routings, and so on.

The effects section is as deep as modulation, but somehow everything is neatly organized, visual, and never overwhelming.

You can modulate anything with anything, Arturia says – which sounds about right. And for modulation, you have tons of choices in envelopes, modulation shapes, and even function generators and randomization sources. But all of this is also graphical and neatly organized, so you don’t get lost. Best of all, there are “heads-up” graphical displays that show you what’s happening under the hood of even the most complex patch.
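The “anything modulates anything” idea reduces to a routing table of sources, targets, and amounts. A minimal sketch, with names and structure that are mine rather than Pigments internals:

```python
# A tiny modulation matrix: each route pairs a source function (LFO, envelope,
# random generator...) with a target parameter name and a scaling amount.

class ModMatrix:
    def __init__(self):
        self.routes = []  # list of (source_fn, target_name, amount)

    def connect(self, source_fn, target_name, amount):
        self.routes.append((source_fn, target_name, amount))

    def apply(self, params, t):
        """Return a copy of params with every routed source scaled and summed in."""
        out = dict(params)
        for source_fn, target, amount in self.routes:
            out[target] = out.get(target, 0.0) + amount * source_fn(t)
        return out
```

Because sources are just functions and targets are just names, any source can drive any parameter – which is the property the marketing copy is describing.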

The polyphonic sequencer alone is huge, meaning you could work entirely inside Pigments.


Color-coded and tabbed, the UI is constantly giving you subtle visual feedback of what waveforms of modulation, oscillators, and processors are doing at any given time, which is useful both in building up sounds from scratch or picking apart the extensive presets available. You can build something step by step if you like, with a sense that inside this semi-modular world, you’re free to focus on one thing at a time while doing something more multi-layered.

Then on top of all of that, it’s not an exaggeration to say that Pigments is really a synth combined with a sequencer. The polyphonic sequencer/arpeggiator is full of trigger options and settings that mean it’s totally possible to fire up Pigments in standalone mode and make a whole piece, just as you would with a full synth workstation or modular rig.

Instead of a short trial, you get a full month to enjoy this – a free release for everyone, expiring only on January 10. So now you know what to do with any holiday break. During that time, pricing is $149 / 149€, rising to $199 / 199€ after that.

I’m having a great deal of fun with it already. And we’re clearly at a new generation of advanced soft synths. Stay tuned.

Product page:

https://www.arturia.com/products/analog-classics/pigments/media

The post Pigments is a new hybrid synth from Arturia, and you can try it free now appeared first on CDM Create Digital Music.

Bitwig Studio 2.5 beta arrives with features inspired by the community

Delivered... Peter Kirn | Scene | Tue 11 Dec 2018 2:37 pm

We’re coasting to the end of 2018, but Bitwig has managed to squeeze in Studio 2.5, with features the company says were inspired by or directly requested by users.

The most interesting of these adds some interactive arrangement features to the linear side of the DAW. Traditional DAWs like Cubase have offered interactive features, but they generally take place on the timeline. Or you can loop individual regions in most DAWs, but that’s it.

Bitwig are adding interactive actions to the clips themselves, right in the arrangement. “Clip Blocks” apply Next Action features to individual clips.

Also in this release:

“Audio Slide” lets you slide audio inside clips without leaving the arranger. That’s possible in many other DAWs, but it’s definitely a welcome addition in Bitwig Studio – especially because an audio clip can contain multiple audio events, which isn’t necessarily possible elsewhere.

Note FX Selector lets you sweep through multiple layers of MIDI effects. We’ve seen something like this before, too, but this implementation is really nice.

There’s also a new set of 60 Sampler presets with hundreds of full-frequency waveforms – looks great for building up instruments. (This makes me ready to boot into Linux with Bitwig, too, where I don’t necessarily have my full plug-in library at my disposal.)

Other improvements:

  • Browser results by relevance
  • Faster plug-in scanning
  • 50 more functions accessible as user-definable key commands

To me, the thing that makes this newsworthy, and the one to test, is really this notion of an interactive arrangement view.

Ableton pioneered Follow Actions in Live’s Session View years back, but they’ve failed to apply that concept even to scenes inside Session View. (Some Max for Live hacks fill in the gap, but that only proves that people are looking for this feature.)

Making the arrangement itself interactive at the clip level – that’s really something new.

Now, that said, let’s play with Clip Blocks in Bitwig 2.5 and see if this is helpful or just confusing or superfluous in arrangements. (Presumably you can toy with different arrangement possibilities and then bounce out whatever you’ve chosen? I have to test this myself.) And there’s also the question of whether this much interactivity actually just has you messing around instead of making decisions, but that’s another story.

Go check out the release, and if you’re a Bitwig user, you can immediately try out the beta. Let us know what you think and how those Clip Blocks impact your creative process. (Or share what you make!)

Just please – no EDM tabla. (I think that moment sent a chill of terror down my spine in the demo video.)

https://www.bitwig.com/en/18/bitwig-studio-2_5.html

The post Bitwig Studio 2.5 beta arrives with features inspired by the community appeared first on CDM Create Digital Music.

Split MIDI, without latency, for under $50: meet MeeBlip cubit

Delivered... Peter Kirn | Scene | Mon 10 Dec 2018 6:54 pm

You want to play with your music toys together, and instead you wind up unplugging and repatching MIDI. That’s no fun. We wanted to solve this problem for ourselves, without having to trade high performance for low cost or simplicity. The result is MeeBlip cubit.

cubit is the first of a new generation of MeeBlip tools from us, as we work to make synths more accessible and fun for everybody. cubit’s mission is simple: take one input, and turn it into four outputs, with minimum latency, minimum fuss, and at the lowest price possible.

Why cubit?

Rock-solid timing. Everything you throw at the input jack is copied to the four outputs with ultra-low latency. Result: you can use it for MIDI messages, you can use it for clock, and keep your timing tight. (Under the hood is a hardware MIDI passthrough circuit, with active processing for each individual output – but that just translates to you not having to worry.)
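In software terms, the splitter’s job can be sketched like this – the real cubit does this in hardware with no parsing at all, so take this only as an illustration of the thru behavior:

```python
# MIDI thru, sketched: every byte arriving at the input is copied verbatim to
# all outputs, with no interpretation, so clock (0xF8) and note data stay tight.

def midi_thru(in_stream, out_streams):
    for byte in in_stream:
        for out in out_streams:
            out.append(byte)  # stand-in for writing the byte to a UART
```

Because nothing is parsed or buffered per-message, timing-critical bytes like MIDI clock pass straight through to every output.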

It fits anywhere. Ports are on the top, so you can use it in tight spaces. It’s small and lightweight, so you can always keep it with you.

You’ll always have power. The USB connection means you can get power from your laptop or a standard USB power adapter (optional).

Don’t hum along. Opto-isolated MIDI IN to reduce ground loops. Everything should have this, but not everything does.

Blink! LED light flashes so you know MIDI is coming in – ideal for troubleshooting.

One input, four outputs, no lag – easy.

We love little, inexpensive gear. But those makers often leave out MIDI out/thru just to save space. With cubit, you can put together a portable rig and keep everything jamming together – alone or with friends.

Right now, cubit is launching at just US$39.95 with a USB cable thrown in.

If you need extra adapters or cables, we’ve got those too, so you can start playing right when the box arrives. (Shipping in the US is free, with affordable shipping to Canada and worldwide.) And if you’re grabbing some stocking stuffers, don’t forget to add in a cubit so your gifts can play along and play with others.

Get one while our stocks last. And don’t look in stores – we sell direct to keep costs low.

Full specs from our engineer, James:

  • Passes all data from the MIDI IN to four MIDI OUT jacks
  • Ultra-low latency hardware MIDI pass-through
  • Runs on 5V Power from a computer USB port or optional USB power adapter
  • Opto-isolated MIDI IN to reduce ground loops
  • Individual active signal processing for each MIDI OUT
  • Bright green MIDI data indicator LED flashes when you’re receiving MIDI
  • Measures: 4.25″ x 3″ x 1″, weighs 92 g (3.25 oz)
  • Includes 3 ft (1 m) USB cable
  • Optional 5V USB power adapter available
  • Made in Canada

MeeBlip cubit product and order page

The post Split MIDI, without latency, for under $50: meet MeeBlip cubit appeared first on CDM Create Digital Music.

Jlin, Holly Herndon, and ‘Spawn’ find beauty in AI’s flaws

Delivered... Peter Kirn | Artists,Scene | Mon 10 Dec 2018 6:03 pm

Musicians don’t just endure technology when it breaks. They embrace the broken. So it’s fitting that Holly Herndon’s team have produced a demonic spawn of machine learning algorithms – and that the results are wonderful.

The new music video for the Holly Herndon + Jlin collaboration has been making the rounds online, so you may have seen it already:


But let’s talk about what’s going on here. Holly is continuing a long-running collaboration with producer Jlin, here joined by technologist Mat Dryhurst and coder Jules LaPlace. (The music video itself is directed by Daniel Costa Neves with software developer Leif Ryge, employing still more machine learning technique to merge the two artists’ faces.)

Machine learning processes are being explored in different media in parallel – characters and text, images, and sound, voice, and music. But the results can be all over the place. And ultimately, there are humans as the last stage. We judge the results of the algorithms, project our own desires and fears on what they produce, and imagine anthropomorphic intents and characteristics.

Sometimes errors like over-fitting then take on a personality all their own – even as mathematically sophisticated results fail to inspire.

But that’s not to say these reactions aren’t just as real. Part of what makes the video “Godmother” compelling is not just the buzzword of AI, but the fact that it genuinely sounds different.

The software ‘Spawn,’ developed by Ryge working with the team, is a machine learning-powered encoder. Herndon and company have anthropomorphized that code in their description, but that itself is also fair – not least because the track is composed in such a way to suggest a distinct vocalist.

I love Holly’s poetic description below, but I think it’s also important to be precise about what we’re hearing. That is, we can talk about the evocative qualities of an oboe, but we should definitely still call an oboe an oboe.

So in this case, I confirmed with Dryhurst what I was hearing. The analysis stage employs neural network style transfers – some links on that below, though LaPlace and the artists here did make their own special code brew. And then they merged that with a unique vocoder – the high-quality WORLD vocoder. That is, they feed a bunch of sounds into the encoder, and get some really wild results.

And all of that in turn makes heavy use of the unique qualities of Jlin’s voice, Holly’s own particular compositional approach and the arresting percussive take on these fragmented sounds, Matt’s technological sensibilities, LaPlace’s code, a whole lot of time spent on parameters and training and adaptation…

Forget automation in this instance. All of this involves more human input and more combined human effort than any conventionally produced track would.

Is it worth it? Well, aesthetically, you could make comparisons to artists like Autechre, but then you could do that with anything with mangled sample content in it. And on a literal level, the result is the equivalent of a mangled sample. The results retain recognizable spectral components of the original samples, and they add a whole bunch of sonic artifacts which sound (correctly, really) ‘digital’ and computer-based to our ears.

But it’s also worth noting that what you hear is particular to this vocoder technique and especially to audio texture synthesis and neural network-based style transfer of sound. It’s a commentary on 2018 machine learning not just conceptually, but because what you hear sounds the way it does because of the state of that tech.

And that’s always been the spirit of music. The peculiar sound and behavior of a Theremin says a lot about how radios and circuits respond to a human presence. Vocoders have ultimately proven culturally significant for their aesthetic peculiarities even if their original intention was encoding speech. We respond to broken circuits and broken code on an emotional and cultural level, just as we do acoustic instruments.

In a blog post that’s now a couple of years old – ancient history in machine learning terms, perhaps – Dmitry Ulyanov and Vadim Lebedev acknowledged that some of the techniques they used for “audio texture synthesis and style transfer” used a technique intended for something else. And they implied that the results didn’t work – that they had “stylistic” interest more than functional ones.

Dmitry even calls this a partial failure: “I see a slow but consistent interest increase in music/audio by the community, for sure amazing things are just yet to come. I bet in 2017 already we will find a way to make WaveNet practical but my attempts failed so far :)”

Spoiler – that hasn’t really happened in 2017 or 2018. But “failure” to be practical isn’t necessarily a failure. The rising interest has been partly in producing strange results – again, recalling that the vocoder, Theremin, FM synthesis, and many other techniques evolved largely because musicians thought the sounds were cool.

But this also suggests that musicians may uniquely be able to cut through the hype around so-called AI techniques. And that’s important, because these techniques are assigned mystical powers, Wizard of Oz-style.

Big corporations can only hype machine learning when it seems to be magical. But musicians can hype up machine learning even when it breaks – and knowing how and when it breaks is more important than ever. Here’s Holly’s official statement on the release:

For the past two years, we have been building an ensemble in Berlin.

One member is a nascent machine intelligence we have named Spawn. She is being raised by listening to and learning from her parents, and those people close to us who come through our home or participate at our performances.

Spawn can already do quite a few wonderful things. ‘Godmother’ was generated from her listening to the artworks of her godmother Jlin, and attempting to reimagine them in her mother’s voice.

This piece of music was generated from silence with no samples, edits, or overdubs, and trained with the guidance of Spawn’s godfather Jules LaPlace.

In nurturing collaboration with the enhanced capacities of Spawn, I am able to create music with my voice that far surpass the physical limitations of my body.

Going through this process has brought about interesting questions about the future of music. The advent of sampling raised many concerns about the ethical use of material created by others, but the era of machine legible culture accelerates and abstracts that conversation. Simply through witnessing music, Spawn is already pretty good at learning to recreate signature composition styles or vocal characters, and will only get better, sufficient that anyone collaborating with her might be able to mimic the work of, or communicate through the voice of, another.

Are we to recoil from these developments, and place limitations on the ability for non-human entities like Spawn to witness things that we want to protect? Is permission-less mimicry the logical end point of a data-driven new musical ecosystem surgically tailored to give people more of what they like, with less and less emphasis on the provenance, or identity, of an idea? Or is there a more beautiful, symbiotic, path of machine/human collaboration, owing to the legacies of pioneers like George Lewis, that view these developments as an opportunity to reconsider who we are, and dream up new ways of creating and organizing accordingly.

I find something hopeful about the roughness of this piece of music. Amidst a lot of misleading AI hype, it communicates something honest about the state of this technology; it is still a baby. It is important to be cautious that we are not raising a monster.

– Holly Herndon

Some interesting code:
https://github.com/DmitryUlyanov/neural-style-audio-tf

https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder

Go hear the music:

http://smarturl.it/Godmother

Previously, from the hacklab program I direct, talks and a performance lab with CTM Festival:

What culture, ritual will be like in the age of AI, as imagined by a Hacklab

A look at AI’s strange and dystopian future for art, music, and society

I also wrote about machine learning:

Minds, machines, and centralization: AI and music

The post Jlin, Holly Herndon, and ‘Spawn’ find beauty in AI’s flaws appeared first on CDM Create Digital Music.

TUNNELS imagines Eurorack if you could multiply and patch anywhere

Delivered... Peter Kirn | Scene | Fri 7 Dec 2018 10:03 am

Kids today. First, they want synth modules with the power of computers but the faceplate of vintage hardware – and get just that. Next, they take for granted the flexibility of patching that virtual systems in software have. Well, enter TUNNELS: “infinite multiple” for your Eurorack.

TUNNELS is a set of modules that doesn’t do anything on its own. It’s just a clever patch bay for your modular system. But with the IN and OUT modules, what you get is the ability to duplicate signals (so a signal from one patch cord can go multiple places), and then route signals anywhere you like.

“Infinite” is maybe a bit hyperbolic. (Well, I suppose what you might do with this is potentially, uh, infinite.) It’s really a bus for signals. And maybe not surprisingly, this freer, ‘virtual’ way of thinking about signal comes from people with some software background on one side, and the more flexible Buchla patching methodology on the other. TUNNELS is being launched by Olympia Modular, a collaboration between Patterning developer Ben Kamen and Buchla Development Engineer Charles Seeholzer.

There are two module types. TUNNEL IN just takes a signal and duplicates it to multiple outs. In signal to out signal, that’s 1:6, 2:3 (each signal gets three duplicates, for two signals), or 3:2 (each signal gets two duplicates, for three signals).

You might be fine with just IN, but you can also add one or more OUT modules. That connects via a signal link cable, but duplicates the outputs from the IN module. (Cool!) So as you add more OUT modules, this can get a lot fancier, if you so desire. It means some patches that were impossible before become possible, and other patches that were messy tangles of spaghetti become clean and efficient.
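The three routing modes of TUNNEL IN are easy to express as a mapping – a sketch for illustration only, not how the module is actually wired:

```python
# TUNNEL IN's routing, sketched: n input signals (n = 1, 2, or 3) are spread
# across 6 outputs, each input duplicated 6 // n times, matching the module's
# 1:6, 2:3, and 3:2 configurations.

def tunnel_in(inputs):
    n = len(inputs)
    assert 6 % n == 0, "module supports 1, 2, or 3 inputs"
    copies = 6 // n
    return [sig for sig in inputs for _ in range(copies)]
```

So one signal fans out to six copies, two signals get three copies each, and three signals get two copies each.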

Actually, I’m comparing to software (think Reaktor, Pd, Max), but even some dataflow software could use some utility modules like this just to clean things up. (Most dataflow software does let you connect as many outputs from a patch point as you want. Code environments like SuperCollider also make it really easy to work with virtual ‘buses’ for signal… but then hardware has the advantage of making the results visible.)

Tunnels is on Kickstarter, with a module for as little as US$75 (limited supply). But, come on, spring for the t-shirt, right?

Specs:
TUNNEL IN: buffered multiple, duplicate input across multiple outputs
TUNNEL OUT: add additional outputs at another location – chain infinitely for massive multiple banks, or use as sends for signals like clock and 1v/oct

Add more OUTs, and you get a big bank of multiples.

I’d say it’s like send and receive objects in Max/Pd, but… that’ll only make sense to Max/Pd people, huh? But yeah, like that.

On Kickstarter:
https://www.kickstarter.com/projects/639167978/tunnels-infinite-multiple-for-eurorack-synthesizer

The post TUNNELS imagines Eurorack if you could multiply and patch anywhere appeared first on CDM Create Digital Music.

The new iPad Pro has a USB-C port – so what can it do, exactly?

Delivered... Peter Kirn | Scene | Wed 5 Dec 2018 1:46 pm

The iPad finally gets a dedicated port for connectivity, as you’d find on a “desktop” computer – and it’s loaded with potential uses, from power to music gear. Let’s break down exactly what it can do.

“USB-C” is a port type; it refers to the reversible, slim, oval-shaped connector on the newest gadgets. But it doesn’t actually describe what the port can do as far as capabilities. So initially, Apple’s reference to the “USB-C” port on the latest iPad Pro generation was pretty vague.

Since then, press have gotten their hands on hardware and Apple themselves have posted technical documentation. Specifically, they’ve got a story up explaining the port’s powers:

https://support.apple.com/en-us/HT209186

Now, keep in mind the most confusing thing about Apple and USB-C is the two different kinds of ports. There’s a Thunderbolt 3 port, as found on the high-end MacBook Pro models and the Mac mini. It’s got a bolt of lightning indicator on it, and is compatible with audio devices like those from Universal Audio, and high-performance video gadgetry. And then there’s the plain-vanilla USB-C port, which has the standard USB icon on it.

All Thunderbolt 3 ports also double as USB-C ports, just not the other way around. The Thunderbolt 3 one is the faster port.

Also important, USB-C is backwards compatible with older USB formats if you have the right cable.

So here’s what you can do with USB-C. The basic story: do more, with fewer specialized adapters and dongles.

You can charge your iPad. It works with standard USB-C power devices as well as Apple’s own adapter. Nicely enough, you might even charge faster with a third-party adapter – like one you could share with a laptop that uses USB-C power.

Connect your iPad to a computer. Just as with Lightning-to-USB, you can use USB cables to connect to a USB-C port or older standard USB-A port, for charge and sync.

Connect to displays, projectors, TVs. Here you’ve got a few options, but they all max out at far higher quality than before:

  • USB-C to HDMI. (up to 4K resolution, 60 Hz, with HDMI 2.0 adapter.)
  • USB-C Digital AV Multiport. Apple’s own adapter supports up to 4K resolution, 30Hz. (The iPad display itself is 1080p / 60Hz, video up to 4K, 30Hz.)
  • USB-C displays. Up to 5K, with HDR10 high dynamic range support. Some will even charge the iPad Pro in the process.

High end video makes the new iPad Pro look indispensable as a delivery device for many visual applications – including live visuals. It’s not hard to imagine people carrying these to demo high-end graphics with, or even writing custom software using the latest Apple APIs for 3D graphics and using the iPad Pro live.

Connect storage – a lot of it. Fast. USB-C is now becoming the standard for fast hard drives – USB 3.1/3.2. That theoretically allows for up to 2500 MB/s data access, and Apple says the iPad Pro will now work with 1 TB of storage. I’ve asked them for more clarification, but basically, yes, you can plug in big, fast storage and use it with your iPad, not limiting yourself to internal storage capacity. So that’s a revelation for pros, especially when using the iPad as an accessory to process video and photos and field recordings on the go.

Play audio. There’s no minijack audio output (grrr), but what you do get is audio playback to USB-C audio interfaces, docks, and specialized headphones. There’s also a USB-C to 3.5 mm headphone jack adapter, but that’s pretty useless because it doesn’t include power passthrough – it’s a step backward from what you had before. Better to use a specialized USB-C adapter, which could also mean getting an analog audio output that’s higher quality than the one previously included internally on the iPad range.

And of course you can use AirPlay or Bluetooth, though it doesn’t appear Apple yet supports higher quality Bluetooth streaming, so wires seem to win for those of us who care about sound.

Oh, also interesting – Apple says they’ve added Dolby Digital Plus support over HDMI, but not Dolby Atmos. That hints a bit at consumer devices that do support Atmos – these are rare so far, but it’ll be interesting to watch, and to see whether Apple and Dolby work together or compete in this space.

Speaking of audio and music, though, here’s the other big one:

Work with USB devices. Apple specifically calls out audio and MIDI tools, presumably because musicians remain a big target Pro audience. What’s great here is, you no longer have the extra Lightning to USB “Camera” adapter required on older iPads, which was expensive and only worked with the iPad, and you should be free of some of the more restrictive electrical power capabilities of those past models.

You could also use a standard external keyboard to type on, or wired Ethernet – the latter great for wired use of applications like Liine’s Lemur.

The important thing here is there’s more bandwidth and more power. (Hardware that draws more power may still require external power – but that’s already true on a computer, too.)

The iPad Pro is at last closer to a computer, which makes it a much more serious tool for soft synths, controller tools, audio production, and more.

Charge other stuff. This is also cool – if you ever relied on a laptop as a mobile battery for phones and other accessories, now you can do that with the USB-C on the iPad Pro, too. So that means iPhones as well as other non-Apple phones. You can even plug one iPad into another iPad Pro.

Thunderbolt – no. Note that what you can’t do is connect Thunderbolt hardware. For that, you still want a laptop or desktop computer.

What about Made for iPhone? Apple’s somewhat infamous “MFI” program, which began as “Made for iPod,” is meant to certify certain hardware as compatible with their products. Presumably, that still exists – it would have to do so for the Lightning port products, but it seems likely certain iPad-specific products will still carry the certification.

That isn’t all bad – there are a lot of dodgy USB-C products out there, so some Apple seal of approval may be welcome. But MFI has hamstrung some real “pro” products. The good news as far as USB-C is, because it’s a standard port, devices made for particular “pro” music and audio and video uses no longer need to go through Apple’s certification just to plug directly into the iPad Pro. (And they don’t have to rely on something like the Camera Connection Kit to act as a bridge.)

Apple did not initially respond to CDM’s request for comment on MFI as it relates to the USB-C port.

More resources

MacStories tests the new fast charging and power adapter.

9to5Mac go into some detail on what works and what doesn’t (largely working from the same information I am, I think, but you get another take):
What can you connect to the new iPad Pro with USB-C?

And yeah, this headline gives it away, but I agree totally. Note that Android is offering USB-C across a lot of devices, but that platform lacks some of the support for high-end displays and robust music hardware that iOS offers – meaning USB-C is more useful coming from Apple than coming from those Android vendors.

The iPad Pro’s USB-C port is great. It should be on my iPhone, too

The post The new iPad Pro has a USB-C port – so what can it do, exactly? appeared first on CDM Create Digital Music.

What it’s like calibrating headphones and monitors with Sonarworks tools

Delivered... Peter Kirn | Scene | Mon 3 Dec 2018 5:55 pm

No studio monitors or headphones are entirely flat. Sonarworks Reference calibrates any studio monitors or headphones, with any source. Here’s an explanation of how that works and what the results are like – even if you’re not someone who’s considered calibration before.

CDM is partnering with Sonarworks to bring some content on listening with artist features this month, and I wanted to explore specifically what calibration might mean for the independent producer working at home, in studios, and on the go.

That means this isn’t a review and isn’t independent, but I would prefer to leave that to someone with more engineering background anyway. Sam Inglis wrote one at the start of this year for Sound on Sound of the latest version; Adam Kagan reviewed version 3 for Tape Op. (Pro Tools Expert also compared IK Multimedia’s ARC and chose Sonarworks for its UI and systemwide monitoring tools.)

With that out of the way, let’s actually explain what this is for people who might not be familiar with calibration software.

In a way, it’s funny that calibration isn’t part of most music and sound discussions. People working with photos and video and print all expect to calibrate color. Without calibration, no listening environment is really truly neutral and flat. You can adjust a studio to reduce how much it impacts the sound, and you can choose reasonably neutral headphones and studio monitors. But those elements nonetheless color the sound.

I came across Sonarworks Reference partly because a bunch of the engineers and producers I know were already using it – even my mastering engineer.

But as I introduced it to first-time calibration product users, I found they had a lot of questions.

How does calibration work?

First, let’s understand what calibration is. Even studio headphones will color sound – emphasizing certain frequencies, de-emphasizing others. That’s with the sound source right next to your head. Put studio monitors in a room – even a relatively well-treated studio – and you combine the coloration of the speakers themselves as well as reflections and character of the environment around them.

The idea of calibration is to process the sound to cancel out those colorations. Headphones can use existing calibration data. For studio speakers, you take some measurements: play a known test signal, record it inside the listening environment, then compare the recording to the original and compensate.
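Conceptually, the compensation step boils down to inverting the measured deviation from flat, frequency band by frequency band. Here’s a minimal sketch of that idea in Python – our own illustration, not Sonarworks’ actual algorithm, and the function and parameter names are ours:

```python
import numpy as np

def correction_curve_db(reference, measured, max_boost_db=12.0):
    """Per-frequency correction, in dB, from a measured response.

    reference, measured: magnitude spectra of the test signal and of the
    in-room recording, on the same frequency bins. The correction is the
    inverse of the measured deviation, clamped so we never boost or cut
    by more than max_boost_db (real products limit correction to protect
    headroom).
    """
    deviation_db = 20.0 * np.log10(measured / reference)
    return np.clip(-deviation_db, -max_boost_db, max_boost_db)
```

A bin where the room measures 6 dB too loud gets a 6 dB cut, and vice versa. The real software does much more – smoothing across bins, phase handling, multiple measurement positions – but this is the core idea.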

Hold up this mic, measure some whooping sounds, and you’re done calibrating. No expertise needed.

What can I calibrate?

One of the things that sets Sonarworks Reference apart is that it’s flexible enough to deal with both headphones and studio monitors, and works both as a plug-in and a convenient universal driver.

The Systemwide driver works on Mac and Windows with the final output. That means you can listen everywhere – I’ve listened to SoundCloud audio through Systemwide, for instance, which has been useful for checking how the streaming versions of my mixes sound. The driver supports Core Audio on the Mac and the latest WASAPI on Windows – which these days is perfectly usable and reliable on my Windows 10 machine. (There’s unfortunately no Linux support, though maybe some enterprising user could get the Windows VST working.)

On the Mac, you select the calibrated output via a pop-up on the menu bar. On Windows, you switch to it just like you would any other audio interface. Once selected, everything you listen to in iTunes, Rekordbox, your Web browser, and anywhere else will be calibrated.

That works for everyday listening, but in production you often want your DAW to control the audio output. (Choosing the plug-in is essential on Windows for use with ASIO; Systemwide doesn’t yet support ASIO though Sonarworks says that’s coming.) In this case, you just add a plug-in to the master bus and the output will be calibrated. You just have to remember to switch it off when you bounce or export audio, since that output is calibrated for your setup, not anyone else’s.

Three pieces of software and a microphone: Sonarworks comprises a measurement tool, a plug-in and a systemwide tool for outputting calibrated sound from any source, and a microphone for measuring.

Do I need a special microphone?

If you’re just calibrating your headphones, you don’t need to do any measurement. But for any other monitoring environment, you’ll need to take a few minutes to record a profile. And so you need a microphone for the job.

Calibrating your headphones is as simple as choosing the make and model number for most popular models.

Part of the convenience of the Sonarworks package is that it includes a ready-to-use measurement mic, and the software is already pre-configured to work with the calibration. These mics are omnidirectional – since the whole point is to pick up a complete image of the sound. And they’re meant to be especially neutral.

Sonarworks’ software is pre-calibrated for use with their included microphone.

Any microphone whose vendor provides a calibration profile – available in standard text form – can also be used with the software in fully calibrated mode. If you have some cheap musician-friendly omni mic, though, its maker usually doesn’t provide anything of the sort in the way a calibration mic maker would.

I think it’s easier to just use the included mic, but then, I don’t have a big mic cabinet. Production Expert did a test of generic omni mics – mics that aren’t specifically for calibration – and got results approximating those of the test mic. In short, they’re good enough if you want to try this out – though Production Expert were pretty specific about which omni mics they tested, and you don’t get the same level of integration with the calibration software.

Once you’ve got the mics, you can test different environments – so your untreated home studio and a treated studio, for instance. And you wind up with what might be a useful mic in other situations – I’ve been playing with mine to sample reverb environments, like playing and re-recording sound in a tile bathroom, for instance.

What’s the calibration process like?

Let’s actually walk through what happens.

With headphones, this job is easy. You select your pair of headphones – all the major models are covered – and then you’re done. So when I switch from my Sony to my Beyerdynamic, for instance, I can smooth out some of the irregularities of each of those. That’s made it easier to mix on the road.

For monitors, you run the Reference 4 Measure tool. Beginners I showed the software to got slightly discouraged when they saw the measurement would take 20 minutes – but relax. It’s weirdly kind of fun, and once you’ve done it once, it’ll probably take you half that time on subsequent runs.

The whole thing feels a bit like a Nintendo Wii game. You start by making a longer measurement at the point where your head would normally be sitting. Then you move around to different targets as the software makes whooping sounds through the speakers. Once you’ve covered the full area, you will have dotted a screen with measurements. Then you’ve got a customized measurement for your studio.

Here’s what it looks like in pictures:

Simulate your head! The Measure tool walks you through exactly how to do this with friendly illustrations. It’s easier than putting together IKEA furniture.

You’ll also measure the speakers themselves.

Eventually, you measure the main listening spot in your studio. (And you can see why this might be helpful in studio setup, too.)

Next, you move the mic to each measurement location. There’s interactive visual feedback showing you as you get it in the right position.

Hold the mic steady, and listen as a whooping sound comes out of your speakers and each measurement is completed.

You’ll make your way through a series of these measurements until you’ve dotted the whole screen – a bit like the fingerprint calibration on smartphones.

Oh yeah, so my studio monitors aren’t so flat. When you’re done, you’ll see a curve that shows you the irregularities introduced by both your monitors and your room.

Now you’re ready to listen – switch your new calibration on, and if all goes to plan, you’ll get a much cleaner, clearer, more neutral sound.

There are other useful features packed into the software, like the ability to apply the curve used by the motion picture industry. (I loved this one – it was like, oh, yeah, that sound!)

It’s also worth noting that Sonarworks have created different calibration types made for real-time usage (great for tracking and improv) and accuracy (great for mixing).

Is all of this useful?

Okay, the disclosure statement is at the top, but … my reaction was genuinely holy s***. I thought there would be some subtle impact on the sound. Instead, it was more like the feeling – as an eyeglass wearer – when my filthy glasses finally get cleaned and I can actually see again. Suddenly details of the mix were audible once more, and moving between different headphones and listening environments was no longer jarring.

Double blind A/B tests are really important when evaluating the accuracy of these things, but I can at least say this was a big impact, not a small one. (That is, you’d want a double blind test for distinguishing wines – but this was more like the difference between wine and beer.)

How you might actually use this: once they adapt to the calibrated results, most people leave the calibrated version on and work from a more neutral environment. Cheap monitors and headphones work a little more like expensive ones; expensive ones work more as intended.

There are other use cases, too, however. Previously I didn’t feel comfortable taking mixes and working on them on the road, because the headphone results were just too different from the studio ones. With calibration, it’s far easier to move back and forth. (And you can always double-check with the calibration switched off, of course.)

The other advantage of Sonarworks’ software is that it gives you so much feedback as you measure from different locations, and that it produces detailed reports. That means if you’re making changes to a studio setup and moving things around, it’s valuable not just for adapting to the results, but also for giving you measurements as you work. (It’s not a measurement suite per se, but you can make it double as one.)

Calibrated listening is very likely the future even for consumers. As computation has gotten cheaper and software analysis smarter, it makes sense that these sorts of calibration routines will be applied to giving consumers more reliable sound, and to adapting immersive and 3D listening. For now, though, they’re great for us as creative people – it’s nice to have them in our working process, and not only in the hands of other engineers.

If you’ve got any questions about how this process works as an end user, or other questions for the developers, let us know.

And if you’ve found uses for calibration, we’d love to hear from you.

Sonarworks Reference is available with a free trial:

https://www.sonarworks.com/reference


The post What it’s like calibrating headphones and monitors with Sonarworks tools appeared first on CDM Create Digital Music.

Roland’s little VT-4 vocal wonder box just got new reverbs

Delivered... Peter Kirn | Scene | Fri 30 Nov 2018 5:06 pm

Roland’s VT-4 is more than a vocal processor. It’s best thought of as a multi-effects box that happens to be vocal friendly. And it’s getting deeper, with new reverb models, downloadable now.

Roland tried this once before with an AIRA vocoder/vocal processor, the VT-1. But that model proved a bit shallow: limited presets and pitch control only through the vocal input meant that it worked great in some situations, but didn’t fit others.

The VT-4 is really about retaining a simple interface, but adding a lot more versatility (and better sound).

As some of you noted in comments when I wrote it up last time, it’s not a looper. (Roland or someone else will gladly sell you one of those.) But what you do get are dead simple controls, including intuitive access to pitch, formant, balance, and reverb on faders. And you can control pitch through either a dial on the front panel or MIDI input. I’ll have a full hands-on review soon, as I’m particularly interested in this as a live processor for vocalists and other live situations.

If you sometimes want a vocoder, sometimes some extra effects – and sometimes you’re playing with gear, sometimes with a laptop – the VT-4 is all of those things. It’s got USB audio support, so you can pack it as your only interface if you just need a mic in and stereo out.

And it has a bunch of effects now: re-pitch, harmonize, feedback, chorus, vocoder, echo, tempo-synced delay, dub delay … and some oddities like robot and megaphone and radio. More on that next time.

This update brings new reverb effects. They’re thick, lush, digital-style reverbs:

DEEP REVERB
LARGE REVERB
DARK REVERB
… and the VT-1’s rather nice retro-ish reverb is back as VT-1 REVERB

Deep dark say what? The VT-1’s reverb was already deeper (more reflections) and had a longer tail than the new VT-4 default; that preset restores those possibilities. “Deep” is deeper still (more reflections). “Large” has longer reflections, simulating a larger room. And “Dark” is like the default, but with more high-frequency filtering. You flash the new settings to the hardware via USB.
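To make those descriptions concrete, here’s a toy feedback delay in Python – strictly our illustration of the knobs involved, not Roland’s actual algorithm: feedback strength sets the tail length (“Large”-ish), delay time sets reflection density (“Deep”-ish), and a lowpass in the feedback path darkens the tail (“Dark”-ish).

```python
import numpy as np

def toy_reverb(x, sr=44100, delay_ms=50.0, feedback=0.6, damping=0.0):
    """Single feedback delay line with a one-pole lowpass in the loop.

    Illustrative only: higher feedback = longer tail, shorter delay_ms =
    denser reflections, higher damping (0..1) = darker, more filtered tail.
    """
    d = max(1, int(sr * delay_ms / 1000.0))
    y = np.asarray(x, dtype=float).copy()
    lp = 0.0  # lowpass state in the feedback path
    for n in range(d, len(y)):
        lp = (1.0 - damping) * y[n - d] + damping * lp
        y[n] += feedback * lp
    return y
```

Feed it an impulse and you get a decaying train of echoes; real algorithmic reverbs combine many such delay lines with diffusion, but the parameter trade-offs are the same.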

Roland is pushing more toward adding features to their gear over time, now via the AIRA minisite, so you can grab this pack there:
https://aira.roland.com/soundlibrary/reverb-pack-1/

And this being Japan, they introduce the pack by saying “It will set you in a magnificent space.” Yes, indeed, it will. That’s lovely.

The VT-4 got a firmware update, too.

1. PITCH AND FORMANT can now be active irrespective of input signal level and length, via a new setting. (Basically, this lets you disable a tracking threshold, I think. I have to play with this a bit.)
2. ROBOT VOICE now won’t hang notes; it disables with note off events.
3. There’s a new MUTE function setting.

VT-4 page:
http://www.roland.co.in/products/vt-4/

I mean, a really easy-to-use pitch + vocoder + delay + reverb for just over $200, and sometimes you can swap it for an audio interface? Seems a no-brainer to me. So if you have some questions or things you’d like me to try with this unit I just got in, let me know.


The post Roland’s little VT-4 vocal wonder box just got new reverbs appeared first on CDM Create Digital Music.

You can now add VST support to VCV Rack, the virtual modular

Delivered... Peter Kirn | Scene | Tue 27 Nov 2018 4:59 pm

VCV Rack is already a powerful, free modular platform that synth and modular fans will want. But a $30 add-on makes it more powerful when integrating with your current hardware and software – VST plug-in support.

Watch:

It’s called Host, and for $30, it adds full support for VST2 instruments and effects, including the ability to route control, gate, audio, and MIDI to the appropriate places. This is a big deal, because it means you can integrate VST plug-ins with your virtual modular environment, for additional software instruments and effects. And it also means you can work with hardware more easily, because you can add in VST MIDI controller plug-ins. For instance, without our urging, someone just made a MIDI controller plug-in for our own MeeBlip hardware synth (currently not in stock, new hardware coming soon).

You can already integrate VCV’s virtual modular with hardware modular using audio and a compatible audio interface (one with DC coupling, like the MOTU range). Now you can also easily integrate outboard MIDI hardware, without having to manually select CC numbers and so on as before.

Hell, you could go totally crazy and run Softube Modular inside VCV Rack. (Yo dawg, I heard you like modular, so I put a modular inside your modular so you can modulate the modular modular modules. Uh… kids, ask your parents who Xzibit was? Or what MTV was, even?)

What you need to know

Is this part of the free VCV Rack? No. Rack itself is free, but you have to buy “Host” as a US$30 add-on. Still, that means the modular environment and a whole bunch of amazing modules are totally free, so that thirty bucks is pretty easy to swallow!

What plug-ins will work? Plug-ins need to be 64-bit, they need to be VST 2.x (that’s most plugs, but not some recent VST3-only models), and you can run on Windows and Mac.

What can you route? Modular is no fun without patching! So here we go:

There’s Host for instruments – 1v/octave CV for controlling pitch, and gate input for controlling note events. (Forget MIDI and start thinking in voltages for a second here: VCV notes that “When the gate voltage rises, a MIDI note is triggered according to the current 1V/oct signal, rounded to the nearest note. This note is held until the gate falls to 0V.”)
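That gate/CV behavior is easy to sketch in code. Here’s a rough Python model of the logic VCV describes – assuming the common convention that 0V maps to C4 (MIDI note 60) and that a gate reads “high” from about 1V; the class and function names are ours, not VCV’s:

```python
def cv_to_midi_note(volts, zero_volt_note=60):
    """Map a 1V/oct control voltage to the nearest MIDI note.

    Assumes 0 V = C4 (MIDI 60); each volt spans one octave (12 semitones).
    """
    return zero_volt_note + round(volts * 12)

class GateTracker:
    """Emit note-on when the gate rises, note-off when it falls to 0 V."""

    def __init__(self):
        self.high = False
        self.active_note = None

    def step(self, gate_volts, cv_volts, threshold=1.0):
        events = []
        if not self.high and gate_volts >= threshold:
            # Gate rose: latch the note from the current 1V/oct signal.
            self.high = True
            self.active_note = cv_to_midi_note(cv_volts)
            events.append(("note_on", self.active_note))
        elif self.high and gate_volts <= 0.0:
            # Gate fell to 0 V: release the held note.
            self.high = False
            events.append(("note_off", self.active_note))
            self.active_note = None
        return events
```

Note that the pitch is sampled at the moment the gate rises and then held – which is why sliding the CV mid-note won’t retrigger anything.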

Right now there’s only monophonic input. But you do also get easy access to note velocity and pitch wheel mappings.

Host-FX handles effects, pedals, and processors. Input stereo audio (or mono mapped to stereo), get stereo output. It doesn’t sound like multichannel plug-ins are supported yet.

Both Host and Host-FX let you choose plug-in parameters and map them to CV – just be careful mapping fast modulation signals, as plug-ins aren’t normally built for audio-rate modulation. (We’ll have to play with this and report back on some approaches.)

Will I need a fast computer? Not for MIDI integration, no. But I find the happiness level of VCV Rack – like a lot of recent synth and modular efforts – is directly proportional to people having fast CPUs. (The Windows platform has some affordable options there if Apple is too rich for your blood.)

What platforms? Mac and Windows, it seems. VCV also supports Linux, but there your best bet is probably to add the optional installation of JACK, and … this is really the subject for a different article.

How to record your work

I actually was just pondering this. I’ve been using ReaRoute with Reaper to record VCV Rack on Windows, which for me was the most stable option. But it also makes sense to have a recorder inside the modular environment.

Our friend Chaircrusher recommends the NYSTHI modules for VCV Rack. It’s a huge collection but there’s both a 2-channel and 4-/8-track recorder in there, among many others – see pic:

NYSTHI modules for VCV Rack (free):
https://vcvrack.com/plugins.html#nysthi
https://github.com/nysthi/nysthi/blob/master/README.md

And have fun with the latest Rack updates.

Just remember when adding Host, plug-ins inside a host can cause… stability issues.

But it’s definitely a good excuse to crack open VCV Rack again! And also nice to have this when traveling… a modular studio in your hotel room, without needing a carry-on allowance. Or hide from your family over the holiday and make modular patches. Whatever.

https://vcvrack.com/Host.html

The post You can now add VST support to VCV Rack, the virtual modular appeared first on CDM Create Digital Music.

Cyber Monday means still more deals on music software

Delivered... Peter Kirn | Scene | Mon 26 Nov 2018 6:28 pm

If you snoozed on some deals this weekend, and you’re longing to build out your software arsenal, erm, legally, it’s not too late. Here are some of the best deals we missed over the weekend plus some Cyber Monday news.

And yes, if you think I’d do this just as an excuse to run an image of some Cybermen – vintage ones looking like BBC actors dressed in balaclavas and assorted hardware store parts, making it look like they have an air conditioner strapped to their chest – oh, absolutely I would.

Ah, back to deals.

pluginboutique.com continues the weekend’s sale with a bunch of Monday “flash” deals. That includes ROLI’s wonderful new Cypher2 synth on sale, Softube Tape for thirty bucks, and many others – plus loads of plug-ins are $1 or free, meaning you can go shopping for next to nothing, or actually nothing. Also, pluginboutique.com’s site is up, which isn’t always the case with some of these flash deals from plug-in developers, so they’re a good place to check out.

Some examples:
Loopmasters Studio Bundle at 90% off, or $132 for a bunch of stuff.

iZotope at 78% off (weird number, but great)!

AAS / Applied Acoustics for 50% off – I’ve always loved their unique physical modeling creations.

The beautiful Sinevibes creations for 30% off.

Harrison make wonderful consoles, and Mixbus was already kind of ridiculously affordable – US$79 buys you a full console emulation that’s great for mixdowns, mastering, and the like. But for Cyber Monday, it’s an “okay, you have to buy this” $19, which is just stupidly good. Alternatively, get Mixbus plus 5 plug-ins for $39. They didn’t pay me to say that, either; at those prices, I don’t imagine they have much marketing budget!

Enter code CYBERMON18 when you shop their store.

Tracktion are back with 50% off everything today only. Try entering code EPIC2018, too.

Spitfire Audio have Black Weekend sales still going – think 25% off individual products, or up to 77% off of collections, for their unique and delightful sound libraries.

Sugarbytes have everything on sale: EUR69 plug-ins, EUR333 bundle, plus up to 50% on iOS Apps.

Propellerhead have a huge Cyber Monday sale, and with loads of big discounts on Reason add-ons and the cheapest ever price on an upgrade, it’s nice fodder for their loyal users. Euclidean rhythms, the KORG Polysix, the Parsec “spectral synth,” the Resonans physical modeling synth – some serious goodies there on sale. And €99 for the upgrade means you can finally stop putting off getting the latest Reason 10. (Not only is VST compatibility in there, but the Props have done a lot lately on usability and stability meaning now seems a good time to jump for Reason users.)

Eventide have their software on sale through the end of the month. This is really the most affordable way to get Eventide sound in your productions (short of a subscription deal).

Anthology XI for US$699 instead of the usual $1799 is especially notable. Having those 23 plug-ins feels a bit like you’ve just rented a serious studio, virtually.

If that’s too much to budget, consider also the new Elevate Bundle – makes your sounds utterly massive, and the three do fit well together, so $79 is a steal.

There’s also the excellent H3000 delay on steep discount, and the luscious Blackhole reverb for just $69. (Or for more studio reverb sounds, the ‘Heroes’/Visconti-inspired Tverb for $99.) And of course the rest of the lineup, too.

Waves had a big sale over the weekend, but for Cyber Monday they also have a new synth – the Flow Motion FM Synth. This crazy UI is certainly a new take on making FM easier to grasp, and it’s got an intro price of US$39. (I have no idea how good it is as I haven’t tried it yet, but they’ve got my attention – and NI aren’t shipping the new Massive yet, so Waves gets in here first with their own hybrid take!) And Waves are doing a buy 2 get 1 free deal, as well.

After introducing a vocal plug-in over the weekend, Waves are using Cyber Monday for a product launch, too – the Flow Motion FM synth seen here.

Output have added a 25% off discount on their software, even including their already discounted bundle, for Cyber Monday.

Steinberg have a big sale this week, including apps, with up to 60% off. That’s a big deal for fans of their production software and plug-ins, but also take note that their terrific mobile app Cubasis – perhaps the most feature-complete DAW for iOS – is half off, as is the Waves in-app purchase for the same.

App lovers, it’s worth checking the App Store and Google Play, as a bunch of stuff is on sale now – too much to track, probably. But some top picks this week: Imaginando’s Traktor and Live controllers, iOS and Android, are all 40% off – everything.

KORG’s apps are still 50% off.

And the terrific MoMinstruments line is all on sale:
Elastic Drums: 10,99€ -> 5,49€, $9.99 -> $4.99
Elastic FX: 10,99€ -> 5,49€, $9.99 -> $4.99
iLep: 10,99€ -> 5,49€, $9.99 -> $4.99
fluXpad: 8,99€ -> 3,99€, $7.99 -> $4.49
WretchUp: 4,49€ -> 2,29€, $3.99 -> $1.99

Puremagnetik have US$10 Cyber Monday deals – $20 each, then enter code BLACKFRIDAY18 for 50% off on top of that – so ten bucks for String Machines XL, Retro Computers +, and Soniq’s classic synths.

Still going… A lot of the deals I wrote up over the weekend are still on, including Arturia and Soundtoys.

Native Instruments have a 50% off sale still going. Tons of stuff in there, but Reaktor 6 for a hundred bucks – full version, meaning you don’t need a past version – that’s insane. That’s a hundred bucks to buy you what could be the last plug-in you ever need.

IRRUPT/audio have a 50% off deal on their unique sound selection if you enter code IRRUPT-VIP.

Sonic Faction have a 40% off sale on instruments for Ableton Live and Native Instruments Kontakt – enter code CYBRMNDY40

Need to learn things and not just buy them? Askvideo/Macprovideo have a deal for today only with US$75 for a yearly pass (the price that usually gets you just three months), or 75% off all à la carte training.

And SONAR+D in Barcelona has a EUR200 delegate pass sale today only.

Some of the deals are expiring, but some last through today or through Friday (with a few straggling into December), so check out our previous guide – and its guide to other guides:

Here’s where to find all the don’t-miss deals for Black Friday weekend

The post Cyber Monday means still more deals on music software appeared first on CDM Create Digital Music.

Here’s where to find all the don’t-miss deals for Black Friday weekend

Delivered... Peter Kirn | Scene | Sat 24 Nov 2018 3:45 pm

Black Friday, Cyber Monday, end of November, whatever … your inbox is likely overflowing with random sale prices. Here are a few of our favorites, in case you want to shop for yourself or others this weekend.

(Food coma, America, for instance? Winter blues, everyone in the northern bit of Earth?)

And yeah, with so much noise out there, these are the ones you might actually want to take advantage of.

The insane-est free-ish software deals

A reader tips us off that a whopping 46 GB of samples from Samples from Mars – 5 years of work, their entire product line – is US$39 instead of $1,367.00. In terms of percentage, that may be the steepest non-free discount I’ve seen yet. (Thanks, Davo!)

Waves have their brand-new Sibilance vocal plug-in free with an email signup. (Yeah, I signed up for it, actually.) Looks promising – a free Black Friday plug-in.

And Waves have a bunch of other discounts and free plug-ins with purchase, meaning you can stock up for a fraction of the normal price.

Sibilance, Waves’ newest plug-in, is free this weekend.

pluginboutique has iZotope Neutron Elements for free if you sign up for their newsletter, and they’ve put all their plug-in discounts in one place. That means instead of digging through your inbox for lots of different plug-in developers, you can check everything on one page – and pricing starts at just US$1 for some of those plugs (really – check the AIR instruments and effects).

And Arturia have the entire V Collection – that’s a massive bundle of emulations of just about every instrument you can imagine – for just US$249, or even $199 if you already own something from them:

https://www.arturia.com/black-friday

Arturia want you to spring for it and own all of this … well, virtually speaking.

Yes, gear deals

US retailer Sweetwater are all over this one. Even rarely-discounted gear like Moog is on sale (Mother-32 for $100 off?) They also have some picks of their own on their blog.

But Sweetwater’s Roland deals are especially sweet. $399 for the SE-02 analog synth is a steal, and the original TR-8 drum machine is just $299, which might beat what you can get it for used.

ProAudioStar has a load of deep discounts, too. How about a KORG Monologue for US$229? (That’s cheaper than I can get from KORG’s sticker price even if I twist their arm, and it means the Minilogue I just sold will turn me a profit. Plus they have all three colors. And it’s one of my favorite recent synths – with microtuning, too.)

They’ve also got the Beatstep Pro for $159 (that’s a no-brainer – if you don’t have it, just get it), and Behringer Model D and Neutron for $225 (plus the DeepMind 6 for $369.99):

https://www.proaudiostar.com/landing/black-friday-2018/keyboard-synth.html

Going back in time a bit with KORG, Kraft have the beloved original microKORG for US$254.99.

And Sam Ash has the brand new IK Multimedia Uno synth for $150.

More favorite tools on sale

We just covered Tracktion making their engine open source; well, their full range of software is on sale, too. That includes Tracktion Waveform 9 Basic for $60 (or $30 for an upgrade).

Visualists, all of Resolume’s VJ and visual tools are half off through Monday.

I already mentioned that Softube has deep discounts on their software, including their modular platform and the superb Console 1. These Swedish developers make some of the best-sounding stuff out there, so I have to give them a nod.

Speaking of excellent boutique emulation, Soundtoys has some serious 50-80% discounts on their plug-ins, including some starting for just US$30. I’ve been addicted to their EchoBoy Jr.; you can go ahead and leap for the non-junior edition, for instance.

And Universal Audio have been rolling out one new discount each day for 12 days which are … not the 12 Days of Christmas, but whatever. It seems those discounts stay to the end, so you’ll have all the deals by the last day of November if you set an alarm.

And we covered at the beginning of the month a big sale on Output’s bundle of instruments and effects.

Accusonus’ powerful mixing and production tools are on sale, as well.

Round-up of round-ups!

GearSlutz has a running discussion thread

Attack Magazine has a handful of picks

itsoundsfuture did a nice job of grabbing all the steepest software discounts.

SynthAnatomy has a beautiful, super comprehensive spreadsheet-style view with hundreds of deals. Exhaustive! (Nice, Tom!)

Get some music, too!

Indie darling Bandcamp is seeing loads of self-released producers and small labels doing their own sale. Assuming you signed up for mailing lists when you bought music – and you really should, as it helps artists – you can just search your inbox for loads of little deals. For instance, fans of techno producer Truncate, check out 60% off his whole discography!

Beatport, which has also been a home to a lot of sales for indie and underground labels (really, not just the big deep house hits) – type in CYBERSALE at checkout and unlock up to 50%. (Yeah, time to clear out my current cart there, too.)

Most unexpected Black Friday deal?

noiiz.com used the opportunity to post their jobs board. So if you need to get a job to, uh, pay for all the stuff above – give them a go.

The post Here’s where to find all the don’t-miss deals for Black Friday weekend appeared first on CDM Create Digital Music.

How to recreate vintage polyphonic character, using Softube Modular

Delivered... Peter Kirn | Scene | Wed 21 Nov 2018 5:54 pm

It’s not about which gear you own any more – it’s about understanding techniques. That’s especially true when a complete modular rig in software runs you roughly the cost of a single hardware module. All that remains is learning – so let’s get going, with Softube Modular as an example.

David Abravanel joins us to walk us through technique here using Softube’s Modular platform, all with built-in modules. If you missed the last sale, by the way, Modular is on sale now for US$65, as are a number of the add-on modules that might draw you into their platform in the first place. But if you have other hardware or software, of course, this same approach applies. -Ed.

Classic Style Polyphony with Softube Modular

If you’ve ever played an original Korg Mono/Poly synthesizer, then you know why it’s so prized for its polyphonic character. Compared to fully polyphonic offerings (such as Korg’s own Polysix synthesizer), the Mono/Poly features four analog oscillators which can either be played stacked (monophonic), or triggered in order for “polyphony” (though still with just the one filter).

The original KORG classic Mono/Poly synth, introduced in 1981.

The resulting sound is richly imperfect – each time a chord is played, minute differences in timing between individual fingers produce differences in the sound.

The cool thing is – we can easily re-create this in the Softube Modular environment, using the unique “Quad MIDI to CV” interface module. Follow along:

Our chord progression.

To start with, I need a reason for having four voices. In this case, it’s the simple chord sequence above. In order to play those notes simultaneously using Modular, I’ll need a dedicated oscillator for each. Each virtual voice will consist of one oscillator, ADSR envelope, and VCA amplifier. Here’s the basic setup – the VCO / ADSR / VCA modules will be repeated three more times to give us four voices:

Wiring up the first oscillator.

For the first oscillator, I’ve selected a pulse wave – go with whichever sounds you’d like to hear (things sound especially nice with multiple waveforms stacked on top of one another). With all four voices, the patch should look like this:

Note that each voice has its own dedicated note and gate channels from the Quad MIDI to CV. Now, we need to combine the voices – for this, we’ll use the Audio Mix module. I’m also adding a VCF filter, with its own ADSR. Because the filter needs to be triggered every time any note is input, I’m going to add a single MIDI to CV module to gate the filter envelope. It all looks like this:

Now, let’s hear what we’ve got:

That’s not bad, but we can spice it up a little bit. I went with two pulse waves, a saw wave, and a triangle wave for my four oscillators – I’ll add a couple of LFOs to modulate the pulse widths of the two pulse waves and add some thickness. For extra dubby space, I’m also adding the Doepfer BBD module, a recent addition to Softube Modular which includes a toggle option for the clock noise bleed-through of the analog original. I’m also adding one more LFO, for a bit of modulation on the filter.

Adding in some additional modules for flavor. The Doepfer BBD (an add-on for the Softube Modular) adds unique retro delays and other effects, including bitcrushing, distortion, and lots of other chorusing, flanging, ambience, and general swirly crunchy stuff.

Honestly, the characterful BBD module deserves its own article – and may get one! Stay tuned.

Here’s our progression, really moving and spacey now:

And there we have it! A polyphonic patch with serious analog character. You can also try playing monophonic melodies through it – in Quad MIDI to CV’s “rotate” mode, each incoming note will go to a different oscillator.
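The “rotate” behavior is easy to model in code. Here’s a minimal Python sketch of round-robin voice allocation – the class and its names are my own invention for illustration, not anything from Softube or Korg – showing how a chord spreads across the four voices and how a fifth note wraps back around:

```python
# Minimal sketch of "rotate" voice allocation, Mono/Poly style:
# each incoming note goes to the next oscillator in a fixed cycle.

class RotateAllocator:
    def __init__(self, num_voices=4):
        self.num_voices = num_voices
        self.next_voice = 0
        self.active = {}  # note number -> voice index

    def note_on(self, note):
        voice = self.next_voice
        self.next_voice = (self.next_voice + 1) % self.num_voices
        self.active[note] = voice
        return voice  # which oscillator/gate channel should fire

    def note_off(self, note):
        # Release the voice that was holding this note (if any)
        return self.active.pop(note, None)

alloc = RotateAllocator(4)
# A four-note chord lands on four separate voices:
chord = [alloc.note_on(n) for n in (60, 64, 67, 71)]
print(chord)  # -> [0, 1, 2, 3]
# A fifth note wraps around to voice 0 again:
print(alloc.note_on(72))  # -> 0
```

This is also why monophonic melodies sound so lively through a patch like this: consecutive notes land on differently tuned, differently configured oscillators.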

Want to try this out for yourself? Download the preset and run it in Modular (requires Modular and the BBD add-on, both of which you can demo from Softube).

DHLA poly + BBD.softubepreset

We’re just scratching the surface with Modular here – there’s an enormous well of potential, and they’ve really nailed the sound of many of these modules. Modular is a CPU-hungry beast – don’t try to run more than one or two instances of a rich patch like this one without freezing some tracks – but sound-wise it’s really proved its worth.

Stay tuned for future features, as we dive into some of Modular’s other possibilities, including the vast potential found in the first-ever model of Buchla’s legendary Twisted Waveform oscillator!

Softube Modular

The post How to recreate vintage polyphonic character, using Softube Modular appeared first on CDM Create Digital Music.

Escape vanilla modulation: Nikol shows you waveshaping powers

Delivered... Peter Kirn | Scene | Tue 20 Nov 2018 1:57 pm

You wouldn’t make music with just simple oscillators, so why only use basic, repetitive modulation? In the latest video in Bastl’s how-to series hosted by Patchení’s Nikol, waveshaping gets applied to control signals.

A-ha! But what’s waveshaping? Well, Nikol teaches basic classes in modular synthesis to beginners, but she did skip over that. Waveshapers add more complex harmonic content to simple waveform inputs. Basic vanilla waveform in, nice wiggly complex waveform out. (See Wikipedia for that moment when you say, oh, well, why didn’t my math teacher bring in synthesizers when she taught us polynomials, then I would have stayed awake!)

Bastl unveiled the Timber waveshaping module back in May, and we all thought it was cool:

Bastl do waveshaping, MIDI, and magically tune your modules

But when most people hear waveshapers, they think of them just as a fancy oscillator – as a sound source. But in the modular world, you can also imagine it as a way of adding harmonics (read: complexity) to simple control signals, which is what Nikol demonstrates here.

That is, instead of Waveshaper -> out, you’ll route [modulation/control signal/LFO] -> Waveshaper in, and mess with that signal. WahWahWahWah can turn into WahwrrEEEEkittyglrblMrcbb… ok, okay, video:
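To make that concrete, here’s a small Python sketch of shaping a control signal: a plain sine LFO run through a polynomial waveshaper. The function names are mine and the curve is a generic Chebyshev-style cubic – an assumption for illustration, not Timber’s actual transfer function:

```python
import math

# Hypothetical polynomial waveshaper applied to a slow control signal.
# Basic sine LFO in, harmonically enriched modulation curve out.

def lfo(t, freq=0.5):
    """Plain sine LFO, output range -1..1."""
    return math.sin(2 * math.pi * freq * t)

def waveshape(x):
    """Chebyshev T3 polynomial: adds third-harmonic content while
    staying bounded to -1..1 for inputs in -1..1."""
    return 4 * x**3 - 3 * x

# Sample one stretch of the LFO before and after shaping:
samples = [lfo(t / 100) for t in range(200)]
shaped = [waveshape(s) for s in samples]
```

Because T3(sin θ) folds the input onto its third harmonic, the shaped control signal wiggles three times faster and less smoothly than the original – exactly the “WahWah in, wrrEEEE out” effect described above.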

Keep watching, because this eventually gets into adding variation to a sequenced signal.

You can try this in any software or hardware environment, but you do need your waveshaper to work with your control input. What’s relatively special about Timber, in the hardware domain at least, is its ability to process slow-moving control signals.

https://www.bastl-instruments.com/modular/timber/

You can also follow Nikol on Instagram.

But more of Deina the modular dog, please!

Tragically, while Nikol’s English is getting ever more fluent, us Americans are not doing any better with our Czech. So, Bastl, we may need an immersion language program more than synthesis.

The post Escape vanilla modulation: Nikol shows you waveshaping powers appeared first on CDM Create Digital Music.

The guts of Tracktion are now open source for devs to make new stuff

Delivered... Peter Kirn | Scene | Fri 16 Nov 2018 8:33 pm

Game developers have Unreal Engine and Unity Engine. Well, now it’s audio’s turn. Tracktion Engine is an open source engine based on the guts of a major DAW, but created as a building block developers can use for all sorts of new music and audio tools.

You can build new music apps not only for Windows, Mac, and Linux (including embedded platforms like Raspberry Pi), but for iOS and Android, too. And while developers might go create their own DAW, they might also build other creative tools for performance and production.

The tutorials section already includes examples for simple playback, independent manipulation of pitch and time (meaning you could conceivably turn this into your own DJ deck), and a step sequencer.
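As a rough illustration of why time and pitch can be decoupled at all – plain resampling changes both together, while grain-based playback changes duration without transposing the material inside each grain – here’s a toy Python sketch. It’s a naive granular stretch with no windowing or crossfades (so it would sound artifact-heavy), and it has nothing to do with Tracktion’s actual implementation:

```python
# Toy granular time-stretch: re-read fixed-size grains while advancing
# the read head slower (or faster) than the grain length. Duration
# changes; the content of each grain, hence its pitch, does not.
# No crossfading here, so a real version would need overlap windows.

def granular_stretch(samples, factor, grain=256):
    """Stretch a mono signal by `factor` (>1 = longer)."""
    out = []
    pos = 0.0
    hop = grain / factor  # read-head advance per emitted grain
    while pos + grain <= len(samples):
        out.extend(samples[int(pos):int(pos) + grain])
        pos += hop
    return out

signal = list(range(4096))  # stand-in for audio samples
stretched = granular_stretch(signal, factor=2.0)
print(len(stretched) / len(signal))  # -> 1.9375 (roughly doubled)
```

Swapping which quantity you fix – read rate within grains versus grain spacing – is the same trade that lets a DJ deck change tempo without changing key.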

We’ve had an open source DAW for years – Ardour. But this is something different – it’s clear the developers have created this with the intention of producing a reusable engine for other things, rather than just dumping the whole codebase for an entire DAW.

Okay, my Unreal and Unity examples are a little optimistic – those are friendly to hobbyists and first-time game designers. Tracktion Engine definitely needs you to be a competent C++ programmer.

But the entire engine is delivered as a JUCE module, meaning you can drop it into an existing project. JUCE has rapidly become the go-to for reasonably painless C++ development of audio tools across plug-ins and operating systems and mobile devices. It’s huge that this is available in JUCE.

Even if you’re not a developer, you should still care about this news. It could be a sign that we’ll see more rapid development, as music-loving developers try out new ideas, both in software and in hardware with JUCE-powered software under the hood. And with this idea out there, even if it doesn’t deliver, it may spur someone else to try the same notion.

I’ll be really interested to hear if developers find this is practical in use, but here’s what they’re promising developers will be able to use from their engine:

A wide range of supported platforms (Windows, macOS, Linux, Raspberry Pi, iOS and Android)
Tempo, key and time-signature curves
Fast audio file playback via memory mapping
Audio editing including time-stretching and pitch shifting
MIDI with quantisation, groove, MPE and pattern generation
Built-in and external plugin support for all the major formats
Parameter adjustments with automation curves or algorithmic modifiers
Modular plugin patching Racks
Recording with punch, overdub and loop modes along with comp editing
External control surface support
Fully customizable rendering of arrangements

The licensing is also stunningly generous. The code is under a GPLv3 license – meaning if you’re making a GPLv3 project (artists included), you can use the engine freely under that open source license.

But even commercial licensing is wide open. Educational projects get forum support and have no revenue limit whatsoever. (I hope that’s a cue to academic institutions to open up some of their licensing, too.)

Personal projects are free, too, with revenue up to US$50k. (Not to burst anyone’s bubble, but many small developers are below that threshold.)

For $35/mo, with a minimum 12 month commitment, “indie” developers can make up to $200k. Enterprise licensing requires getting in touch, and then offers premium support and the ability to remove branding. They promise paid licenses by next month.

Check out their code and the Tracktion Engine page:

https://www.tracktion.com/develop/tracktion-engine

https://github.com/Tracktion/tracktion_engine/

I think a lot of people will be excited about this, enough so that … well, it’s been a long time. Let’s Ballmer this.

The post The guts of Tracktion are now open source for devs to make new stuff appeared first on CDM Create Digital Music.

Datalooper lets you play Ableton Live with your feet

Delivered... Peter Kirn | Scene | Thu 15 Nov 2018 5:27 pm

It’s a looper, it’s a Session View controller. It’s USB powered, and you play it with your feet. But unlike other options, Datalooper integrates directly with how you work in Ableton Live – and it doesn’t require Max for Live to operate. Here’s a first look – and an exclusive discount.

http://www.datalooperpedal.com/cdmspecial

Ableton may have called their event “Loop,” but that doesn’t mean there’s an obvious way to control the software’s looping capability via hardware out of the box. And that’s essential – Ableton Push is great, but it doesn’t fit a lot of instrumental and vocal uses. It’s too complicated, and involves too much hand-eye coordination – stuff you want to focus elsewhere. I’m not sure what Ableton would have called their own foot hardware – Ableton Tap? Ableton Toes? But instead, users have been stepping up … sorry, unintentional pun … and giving Live the kind of immediacy you’d expect of a looper pedal.

Demand seems higher than ever – there were two projects floating around Ableton Loop in LA last week. I covered State of the Loop already:

Ableton Live Looping gets its own custom controller

That project focused mainly on the Looper instrument and the use of scenes, all via Max for Live. It also seems well suited to running a lot of loopers at once.

Datalooper – the work of musician/creator Vince Cimo – is a similar project, but finds its own niche. First off, Max for Live isn’t required, meaning any edition of Live will work. (It uses a standard Live Control Script to communicate with Live.)

We got hands-on with Datalooper at Ableton Loop this year.

Datalooper will use the Looper device if you want. In that mode, it’s basically a controller for the Looper instrument – and supports up to three at once by default (which will be enough for most people anyway).

But there’s not much difference between the Looper device and other plug-ins or dedicated looping tools. “Natively” looping in Live still logically involves Session View. Before Ableton had a Looper, the company would advise customers to just record into clips in the Session View. That’s all fine and well, except that users of hardware pedals were accustomed to being able to set a tempo with the length of their initial recording, so the loop kept time with them instead of having to adjust to an arbitrary metronome.

Datalooper does both. You can use Session View, taking advantage of all those clips and arrangement tools and track routing and effects chains. But you can also use the looper to set the tempo. As the developers describe it:

If you long press on the clear button, the metronome will turn off, and the tempo will re-calculate based on the next loop you record, so you can fluidly move between pieces without having to listen to a click track. Throughout this process, the transport never stops, meaning you can linearly record your whole set and capture every loop and overdub in pristine quality.
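The arithmetic behind that loop-sets-the-tempo trick is simple. Here’s a hypothetical helper – my own sketch, not Datalooper’s actual control script – deriving the session BPM from the length of the first recorded loop:

```python
# Sketch of deriving tempo from a recorded loop's duration, so the
# transport follows the performer instead of a preset metronome.
# Names and signature are illustrative assumptions.

def tempo_from_loop(duration_sec, bars=1, beats_per_bar=4):
    """Return BPM so that `bars` bars fit exactly in `duration_sec`."""
    beats = bars * beats_per_bar
    return 60.0 * beats / duration_sec

# A 2-second single-bar loop in 4/4 implies 120 BPM:
print(tempo_from_loop(2.0))  # -> 120.0
```

The only judgment call a script like this has to make is how many bars the performer intended – which is why loopers typically ask you to declare a bar count, or assume one.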

Datalooper is also a handy foot-powered control system for working with clips in general. So even if you weren’t necessarily in the market for a looper or looper pedal, you might want Datalooper in your studio just to facilitate working quickly with clips.

(And of course, this also makes it an ideal companion to Ableton Push … or Maschine with a Live template, or an APC, or a Launchpad, or whatever.)

Session Control mode lets you hop in and record quickly to wherever you wish. I imagine this will be great for improvisation not only solo but when you invite a friend to play with you.

For users who are more familiar with the clip system, the Datalooper also features a ‘session control’ mode, built to allow users to quickly record clips. In this mode, the Datalooper script will link up with a track, then ‘auto-scan’ and latch on to the first unused clip slot. You can then use the first buttons in a row to control the recording, deletion, and playback of the clip. Best of all, when you want to record another clip, you can simply press record again and the script will find you another unused clip slot. This is a game-changer if you’re trying to quickly record ideas and want your hands free.
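The auto-scan behavior can be modeled in a few lines. This is a hypothetical sketch of the idea, not the actual Live control script:

```python
# Model of 'session control' auto-scan: find the first unused clip
# slot on a track, record into it, then find the next one on demand.
# An empty slot is represented here as None.

def first_empty_slot(track):
    """Return the index of the first unused clip slot, or None."""
    for i, clip in enumerate(track):
        if clip is None:
            return i
    return None

track = ["loop A", None, None]  # slot 0 already holds a clip
slot = first_empty_slot(track)
track[slot] = "new recording"
print(slot, first_empty_slot(track))  # -> 1 2
```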

Videos:

You get all of this in a nice metal box – die-cast aluminum, weighing 3 lbs (1.4 kg), a micro USB bus-powered, standard MIDI device. The onboard LEDs light up to show status and feedback from the metronome.

By default, it uses three loopers, but all the behaviors are customizable. In fact, when you want to dive into customization, there’s drag-and-drop customization of commands.

A graphical controller editor lets you customize how the Datalooper works. This could be the future of all custom control.

US$199 is the target price, or $179 early bird (while supplies last). It’s now on Indiegogo; creator Vince Cimo needs enough supporters to be able to pull the trigger on a $10k manufacturing run, or it won’t happen.

Vince has offered CDM readers a special discount. Head here for another $20 off the already discounted price:

http://www.datalooperpedal.com/cdmspecial

(No promotional fee paid for that – he just asked if we wanted a discount, and I said sure!)

Having gotten hands-on with this thing and seen how the integration and configuration work … I want one. I didn’t even know I wanted a pedal. I think it could well make Live use far more improvisatory. And I think it’s great that we have two projects approaching this from different angles. I hope both find enough support to get manufactured – so if you want to see them, do spread the word to other musicians who might want them.

The post Datalooper lets you play Ableton Live with your feet appeared first on CDM Create Digital Music.
