Indian E-music – The right mix of Indian Vibes… » Software


This free phaser from NI is a must, even if you don’t like phasers

Delivered... Peter Kirn | Scene | Fri 15 Dec 2017 4:36 pm

Native Instruments has a free phaser plug-in called Phasis as a holiday special – and, wow, definitely don’t skip this one.

Here’s the deal: as NI do yearly, they’ve got a holiday special going. This year, there’s an e-voucher and a giveaway contest and blah blah — let’s skip to Phasis.

Phasis is a free plug-in (VST, AU, AAX) for Mac and Windows. You’ll need to sign up for the mailing list, then get a serial number to enter into Native Access, NI’s latest all-in-one software for managing licenses and updates. That tool works well, though one note on Windows: look for the phasis.dll file on your hard drive, as I had to manually copy it to the correct VST plug-in folder.

Phasers may call to mind cheesy guitar effects and overused pop sounds, but this one’s different. Here’s how NI describe it:

PHASIS is a brand new phaser. It offers timeless phasing sounds – adding movement, soul, and creative magic to any signal. PHASIS draws inspiration from classic phasers but adds powerful new features for never-heard-before results. The Spread control changes the spacing of the phaser’s notches, for vocal-style effects. Ultra mode pushes modulation to ultra high rates, producing unique FM-esque tones. Download the VST/AU/AAX plug-in for free now!

It’s the combination of the phaser with those notch filters and “ultra” extreme audio rate modulation that produces something genuinely novel. I apply it here to a bland 909 drum loop, and already you get some more radical results:

Holiday Deal or …

Phasis download page

Wow, backwards compatibility has gotten way easier on Windows than on the Mac… Mac users will need 10.11 or later (10.13 if you use Cubase); Windows support runs back to Windows 7. Well, once we find the darned VST plug-in folder. I’ll put it on both my machines. I only wish we’d gotten a Reaktor ensemble here so we could play around with the innards.

The post This free phaser from NI is a must, even if you don’t like phasers appeared first on CDM Create Digital Music.

Try a new physical model of a pipe organ for free

Delivered... Peter Kirn | Scene | Wed 13 Dec 2017 5:01 pm

Now, all your realistic pipe organ dreams may be about to come true in software – without samples.

MODARTT are the French firm behind the terrific Pianoteq physically modeled instrument, which covers various classic keys and acoustic pianos. That mathematical model is good enough to find applications in teaching and training.

Now, they’re turning their attentions to the pipe organ – parts of which turn out to be surprisingly hard to model.

For now, we get just a four-octave preview of the organ flue pipe. But that’s free, and fun to play with – and it sounds amazing enough that I spent some part of the afternoon just listening to the demos. (Pair this with a convolution reverb of a church and I think you could be really happy.)

The standalone version is free, and like all their software runs on Linux as well as Mac and Windows. Stay tuned for the full version. Description:

ORGANTEQ Alpha is a new generation physically modeled pipe organ that reproduces the complex behaviour of the organ flue pipe.
It is a small organ with a keyboard range of 4 octaves (from F1 to F5) and with 2 stops: a Flute 8′ and a Flute 4′ (octave).
It is provided in standalone mode only and should be regarded as a foretaste of a more advanced commercial version in development, due to be released during 2018.

https://www.modartt.com/organteq

The post Try a new physical model of a pipe organ for free appeared first on CDM Create Digital Music.

djay Pro 2 brings algorithms and machine learning to DJing

Delivered... Peter Kirn | Scene | Tue 12 Dec 2017 6:42 pm

A.I.D.J.? The next-generation djay Pro 2 for Mac adds mixing and recommendations powered by machine learning – and more human-powered features, too.

When Big Data meets the DJ

The biggest break from how we’ve normally thought about DJ software comes in the form of automatic mixing and selection tools. One is powered by machine learning trained on DJ sets, and the other by data collected from listening habits (via Spotify).

Automix AI is a new mixing technology. And hold on to your hats, folks, if the “sync” button was unnerving to you, this goes further.

When we say “A.I.,” we’re really talking machine learning – that is, “training” algorithms on large sets of data. In this case, that data comes from existing DJ sets. (Algoriddim tells CDM that was drawn from a variety of DJs, mostly in hip-hop and electronic genres.) Those sets were analyzed according to various sonic features, and the automixing applies those to your music. So this isn’t just about mixing two different techno tracks with mechanical efficiency – it’s meant to go further across different tempos and genres.

It’s also more than matching tempo. Automix AI will identify where the transition occurs, decide how long the fade should be, and apply filters and EQ. So, if you’ve ever listened to existing Automix features and how clumsy they are with starting and stopping tracks, this takes a different approach. Algoriddim explains to CDM:

The core of this tech is finding good start and end regions for transition between two songs, while also respecting the corresponding sound energies and choosing an appropriate transition accordingly (e.g. most likely EQ or short filter transition if you have two high energy parts of the song for the transition)
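The quoted approach — pick a transition style from the sound energy of the two regions — can be sketched naively. To be clear, Algoriddim’s actual analysis is proprietary and unpublished; the function names and the threshold below are invented purely for illustration.

```python
# Illustrative sketch only – not Algoriddim's code.
# The 0.5 energy threshold and all names are invented here.

def rms_energy(frame):
    """Root-mean-square energy of one frame of samples."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def choose_transition(outgoing, incoming, high=0.5):
    """Two high-energy regions suggest a quick EQ/filter cut;
    anything else gets a longer crossfade."""
    if rms_energy(outgoing) > high and rms_energy(incoming) > high:
        return "short filter transition"
    return "long crossfade"

loud = [0.8, -0.8] * 512     # stand-in for a high-energy section
quiet = [0.05, -0.05] * 512  # stand-in for a breakdown or intro
print(choose_transition(loud, loud))   # short filter transition
print(choose_transition(loud, quiet))  # long crossfade
```

The real system also has to find *where* those regions are in each track, which is the harder part of the problem.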

Then there’s “Morph” – which Algoriddim argue opens up new ways of mixing:

This actually goes beyond what a regular DJ can do with two hands. Morph not only syncs the songs but seamlessly ramps the changed tempo of the inactive deck to its regular speed as the transition progresses. E.g. in the past if you had a hip-hop song at say 95 BPM and an electronic track at 130 BPM, syncing the two and making a transition would leave the new track in an awkwardly rate changed state (even with time-stretching enabled). So as the transition starts, both songs (in this example) would be playing at 130 BPM but as we are doing a simultaneous tempo “crossfade”, the hip-hop track ends up being back at 95 BPM at the end of the transition. This ensures the tracks always play at their regular tempo and these types of mixes sound very natural, allowing for seamless cross-genre transitions.
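The tempo “crossfade” described above is easy to picture as a ramp between the synced tempo and the track’s natural tempo. This is a minimal sketch of the idea using the 95/130 BPM example from the quote — a linear ramp, which is an assumption; Algoriddim’s actual curve isn’t specified.

```python
def morph_tempo(sync_bpm, natural_bpm, progress):
    """Tempo of the incoming deck during a Morph-style transition.
    progress runs from 0.0 (start: synced to the outgoing deck)
    to 1.0 (end: back at the track's natural tempo)."""
    return sync_bpm + (natural_bpm - sync_bpm) * progress

# The example from the quote: a 95 BPM hip-hop track mixed
# in over a 130 BPM electronic track.
print(morph_tempo(130.0, 95.0, 0.0))  # 130.0 - plays in sync at the start
print(morph_tempo(130.0, 95.0, 0.5))  # 112.5 - halfway through the fade
print(morph_tempo(130.0, 95.0, 1.0))  # 95.0 - natural tempo restored
```

Time-stretching both decks along this ramp is what keeps the mix sounding natural across genres.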

Also impressive: while you might think this sort of technology would be licensed externally, the whiz kids over at Algoriddim did all of this on their own, in-house.

On the Spotify integration side, and also related to automating DJing tasks, “Match” technology recommends music based on BPM, key, and music style. Existing Spotify users will be familiar with some of this recommendation engine already. Where it could be good for producers: it means there’s an avenue by which algorithms expose your music. And that in turn is potentially good news if you’re a producer whose music isn’t always charting the top of a genre on Beatport.

These “autopilot” features are all under your control, too: you can choose which parameters are used, choose your own tracks, switch it off at will – as you like. Or you can sit back and let djay Pro run in the background while you’re doing something else, if you want to let the machine do the DJing while you cook dinner, for instance.

Pro features, for humans

Okay, so at this point, djay Pro 2 may sound a bit like this:

But one of the disruptive things about Algoriddim’s approach to DJ software is that it has challenged rivals among entry-level and casual users and more advanced users at the same time.

So, here’s the more “Pro” sounding side of this. Some of these are features that are either missing or not implemented quite the way we’d like in industry leaders like Serato and Traktor.

A new audio engine with master AU plug-ins. A rewrite of the engine now allows high-res waveforms, post-fader effects, higher-quality filters, plus the ability to add Audio Unit plug-ins as master output effects.

Integrated libraries. iTunes, Spotify, and music in the file system / Finder are now all integrated and can be viewed side-by-side.


Smart filters. Set up dynamic playlists sorted by BPM, key, date, genre, and other metadata. (Those columns are available in other tools, but here you get them dynamically, a bit like the ones in iTunes.)

Keyboard Shortcuts Editor. There’s a full editor for assigning individual features to custom shortcuts – which in turn can also map to custom hardware or the MacBook Pro Touch Bar.

CDJ and third-party hardware support. Whereas some other players make their own hardware or limit compatibility (or even require specific hardware just to launch, ahem), Algoriddim’s approach is more open. They’re fully certified by Pioneer for CDJ compatibility, they include support for 60 MIDI controllers in the box, and they have an extensive MIDI learn function.

More cueing and looping. Version 2 now has up to eight cue points and loops, with naming, per song. (I recently lauded Soda for adding this.) You can also now assign loop triggers to cue points.

Single deck mode for preparation. Okay, some (cough, again Serato) lock you into this view if you don’t have authorized hardware plugged in. But here, it’s designed specifically for the purpose of making set prep easier.

Accessibility. VoiceOver support makes djay Pro 2 work for vision-impaired users. We really need more commitment to this in the industry; it’s also been great to see this technology from Algoriddim showcased at Apple’s developer conference. If you’re using this (and hopefully CDM is working well with screen readers), do let us know.

New photo / still image support.

And it does photos

Back to less club/pro features, the other breakthrough for casual users, weddings, and commercial gigs is photo integration. Drag and drop photos or albums onto the visual decks, and the software will make beat-matched slide shows.

The photo decks also work with existing, fairly powerful VJ features, which includes external output, effects, and the like. You can also adjust beat sync.

Still image support builds on an existing video/VJ facility.

Plus a no-brainer price

The other thing that’s disruptive about djay Pro 2: price. It’s US$49.99, with an intro price of US$39.99, on the App Store.

You’ll need Spotify Premium for those features, of course, and macOS 10.11 or later is required.

https://www.algoriddim.com/

The post djay Pro 2 brings algorithms and machine learning to DJing appeared first on CDM Create Digital Music.

How BeatMaker caught the iOS music trend before it even started

Delivered... Ashley Elsdon | Scene | Mon 11 Dec 2017 6:39 pm

It was one of the first apps to define what mobile music making on iOS could be. We talk to its creators to understand the story behind Intua BeatMaker.

CDM’s mobile editor Ashley Elsdon has always been ahead of the curve in understanding the potential of mobile music making. The clue is right in the name of his ground-breaking blog “Palm Sounds” – started back when Palm devices were state of the art and iOS didn’t even exist yet. Those Palm gadgets included some all-in-one production tools, but BeatMaker took advantage of Apple’s generational boost in power and multi-touch interface. And that journey starts even before Apple had an App Store, let alone an iPad and a cadre of music making tools running on desktop-class architectures. No one has really told that story until now – and Ashley is the person to investigate. -PK

Before even the introduction of the iPad or the App Store, BeatMaker 1 helped define the iPhone as a mobile music platform. From there, it’s grown continuously in feature set and community, with BeatMaker 2 and now BeatMaker 3 each representing not just incremental updates but ground-up new apps and radical landmarks in functionality. Ed.: You might look at those older releases if you’ve got a ‘vintage’ device running an earlier OS.

Following BeatMaker 3’s release, I wanted to understand direct from the developers how that journey took place. I was curious what had driven them and how they’d made decisions about what to keep and what to throw away. Hopefully you’ll find Intua’s responses as interesting as I have. Intua developer and co-founder Mathieu Garcia responds.

Ashley: What was it that first made you think about developing BM1, and how did you go about making it happen in a pre-App Store world?

Mathieu: Back in 2006, I was an IT consultant and was sent on a mission to London. The company was looking to create a “proof-of-concept” app that would allow VoIP calls on the iPhone. At the time, months before the launch of the App Store, you had to go through all kinds of homemade toolchains and rough documentation. It was a pretty interesting project, and one of my tasks was to reverse-engineer the audio layer of iOS 1.x. By the end of the project, they gifted me the development iPhone. During the flight back home, I looked at this futuristic phone and thought it would be pretty nice to write a small drum machine on it, just for the sake of it.

Luckily I had a couple of free days ahead and basically spent them reverse-engineering, designing and coding this modest drum-machine called “BeatPhone”. I would be sleeping only a couple of hours a day and barely eating. It was a really creative “rush.” I connected with a very nice IRC community of hackers / devs; George Hotz was one of them.

At the time, third-party apps were distributed on a platform called “Cydia,” that was installed automatically after jail-breaking. Ed.: For those not familiar with this process, basically you’d hack an exploit in the phone, allowing custom, non-authorized open source software to run its own application installer on the device. Apple was routinely patching these holes, with hackers rushing to stay one step ahead.

Every day, new apps would be made available. I can imagine that a lot of now-established iOS developers started during this period, too. So I uploaded “BeatPhone” in there. It looked pretty horrible, to be honest, and was barely usable at first. I had a blog, too, with install instructions, dev updates, etc. People would reach out, sending encouragement emails, asking for new features, etc.

Before the iPhone was even announced, some close friends organized a meetup in Barcelona to brainstorm around a touch-screen based device for music production. It was tricky, since we were not living in the same place, but we kept exchanging ideas for a couple of months. Work and budget got in the way as well. Two of them, Colin and Vincent, who would later become co-founders of INTUA, were part of the project. We attended the same engineering school back in Paris, and we knew each other pretty well.

Anyway, a couple of months later, I decided to show them the app “BeatPhone”. During that time, it was evolving quickly. The interest in music creation apps was growing steadily. In a couple of weeks, this turned from a complete hobby side-project into my daily activity. I think I reached somewhere around one million hits on the blog. Vincent and Colin came over to Geneva, I introduced them to the unofficial SDK/toolchain, and naturally, we started brainstorming and designing a new app. UIKit wasn’t even a “thing” at the time, but we had a good friend who had been working on a cross-platform OpenGL widget library for a few years. We ported it to iOS, and we still use this framework today.

We also wrote an audio engine from scratch, we made blueprints — it was so creative. A few months later, in March 2007, I think, Apple made the big announcement: the App Store was launching in July – perfect timing. We quickly set up a company, got the official SDK, and started adapting the existing code. We were immensely productive, and BeatMaker 1 was made available two days after the initial App Store launch. We thought, oh well, at least if we can cover just the basic cost of a modest lifestyle, that’d be great! We had no idea about sales, and I think 15 days later, someone from Apple gave us a call, congratulating us and giving us the first numbers. It was very unexpected. It was becoming real! That was it: we were now convinced we could really continue working on BeatMaker. We quickly went back to the whiteboard and started planning features ahead. INTUA was now a real mobile app company.

What was the reaction to BM1?

Amazing, really, at least from people who had bought the app. We would get daily encouragement, some super nice fellows reached out, and we naturally started working on artist kits, sound packs, etc. New opportunities would open almost every week, press would reach out, etc. That said, it was still very “niche.” Most artists and producers wouldn’t even consider sketching out a few beats on the iPhone. It made no sense to them, and honestly we could understand why, knowing the limitations of the devices. At the same time, people started sharing their tracks, or even full albums, with us, entirely made on iOS. It was taking off, though it was clear it would take time for the platform to be really considered “viable.” The community involved was very supportive, and that really drove us in the right direction.

After BM1, what was it that helped you to form the ideas around BM2?

Basically, we thought BM1 was focusing too much on the drum aspect and had no real track/instrument paradigm. Limitations are good, but you really had only 16 pads for your track. We looked at what was available on iOS and started scratching our heads, brainstorming a lot. This was maybe only a few months after the initial BM1 launch. We looked at desktop software, too, and decided it made sense to follow the multitrack path, while also focusing on the sampling aspect. It came pretty naturally to improve the existing BM1 drum sampler layout and complement it with a keyboard sampler. By adding a more advanced sequencer, we’d let people compose full tracks. Originally BM2 had no audio tracks and was designed for iPhones. The iPad came out and gave us even more room for improvements while also letting us focus on bringing meaningful features.

What did you want to achieve with BM2?

We wanted to bring a solid new app to the iOS world. For us, innovation is paramount. The feature set had to be powerful and [not something users had] seen elsewhere. We knew big names from the industry had a growing interest in developing for iOS, so we absolutely needed to be one step ahead. BM2 was feature-rich, sometimes maybe even too much. The learning curve was a bit steep, but after a while, people started finding crazy (genius) workarounds, tricks, and ways to compose. Basically, you got to invent your own workflow to materialize your idea in the app.

Keyboard Sampler Interface v3

BM3 was a big step from BM2. How did those ideas come about, and how much did user input help you to make decisions?

It was really important for us to address all the workflow issues and discrepancies BM2 suffered from. The idea was to bring something new not only in terms of features, but also on the UX [user experience design]. Again, we like to start fresh while improving concepts that have proven to work well. It was clear BM2‘s strongest points were the sampling and chopping capabilities. This time, we decided to look a bit further than the software world and see what modern gear had to offer. After all, the iPad is a controller, too. Before even hitting the whiteboard once more, we went on our own forums, collected all the feedback (positive & negative), and printed it. We would constantly read and get back to this huge pile of paper — a goldmine, really. The more we were reading it, the more we would grasp what people expected: a concept that would blur the line between a controller and an app. Digesting all of this information took time, but we did not want to rush anything, and we wanted to be sure to come up with a novel design. It took us around three years in total.

One thing you haven’t done is move out from iOS, either to Android, or indeed to the desktop. Can you imagine BeatMaker as a desktop DAW?

It’s the next logical step, since more and more of our users are asking for BM3 on their Mac or PC. Competition is tough on desktop platforms, and I don’t see BM3 ever replacing the big DAWs out there, and that’s not what we have in mind, anyway. Our users want to transfer their productions back and forth to their studio/computer, without ever getting into manual file transfers and things like that. Offering the same feature set on the go or back at home is what we can provide. We do have a Mac version for internal development and to make the life of sound designers easier, but this isn’t quite what we want to release to the public. Hopefully, 2018 will be the year INTUA makes its first move to the desktop world — it’s a really good opportunity for us.

As for Android, well, it’s a tough one. If we can’t provide a similar experience on it, then we’ll keep waiting until it gets a bit more unified. There are so many devices out there that it could really become a nightmare to ensure the app works correctly on all of them. However, I think [Microsoft] Surface / the Universal Windows Platform is something to look out for! Ed.: That’s Microsoft’s family of touch-equipped hardware laptops and tablets, plus the means of targeting traditional desktop Windows users and users of a variety of hardware platforms at the same time – even including things like Xbox and HoloLens.

CDM: What does the future look like on iOS for Intua?

Intua: The latest iPads and iPhones are often benchmarked against laptops, and I think this says a lot about what’s coming next. Also, some of the frameworks we use to develop on macOS and iOS are merging into a single entity, so clearly, Apple is blurring the lines between both worlds. It’s ambitious to ever consider replacing laptops with tablets, but they can surely complement each other.

If we look back, iOS has evolved so much in the past couple of years; iOS 11 brings file management a step closer to the desktop experience.

On the audio side, well, Audio Units V3 [plug-in support] was a huge milestone, and our users love integrating their favorite synths and effects directly into BM3. This is a real creativity booster and gives a new dimension to mobile production. It’s even bringing devs together, which is great! We’ll keep working on iOS, for sure, and staying in line with Apple’s products and technology is something we actually enjoy doing – especially since the introduction of the “pro” iPads (and now iPhones).

If you could give new iOS developers a piece of advice, what would it be?

Be sure to bring something unique to the user. Competition is tough, and there are so many synths, effects, and DAWs out there that you need to differentiate yourself in a clever way. Being close to the community is also paramount — understanding how your users create with your app is something to watch constantly. As developers, we often focus on testing parts of the app; it’s a very methodical approach, but testing it as a whole entity is a completely different thing. Your users do, so keep listening to them and make sure you don’t break their creativity with a clumsy interface. Even the smallest detail can become a productivity killer.

That said, iOS is a land of opportunity; you see indie devs “living” alongside big companies such as Korg in the music app charts — this is pretty unique!

What would you change about iOS, if you could?

Even though iOS 11 was released not so long ago, one thing would be a more streamlined way to manage and transfer files to and from the device. Also, on the hardware side, we need more storage space! Samples, projects, exports, archives, etc. eat space very quickly. The latest iPads and iPhones come with better storage options (but you pay the price), so I guess it’s going in the right direction.


All three versions of BeatMaker are still on the App Store – BM1, BM2 and the latest BM3.

The post How BeatMaker caught the iOS music trend before it even started appeared first on CDM Create Digital Music.

What you need to know about VCV Rack, a free Eurorack emulation

Delivered... Ted Pallas | Scene | Thu 7 Dec 2017 11:38 pm

In the few short weeks since it was released, VCV Rack has transformed how you might start with modular – by making it run in software, for free or cheap.

VCV Rack now lets you run an entire simulated Eurorack on your computer – or interface with hardware modular. And you can get started without spending a cent, with add-on modules available by the day for free or inexpensively. Ted Pallas has been working with VCV since the beginning, and gives us a complete hands-on guide.

There’s always a reason people fall in love with modular music set-ups. For some, it’s having a consistent, tactile interface. For others, it’s about the way open-ended architectures let the user, rather than a manufacturer, determine the system’s limits. For me, the main attraction to modulars is access to tools that can run free from a rigid musical timeline, but still play a sequence. It means they let me dial in interesting poly-rhythmic parts without stress.

An example: I hooked a Mutable Instruments Braids oscillator up to a Veils VCA, triggered the VCA with an LFO, and ran the resulting pulse through a Befaco Spring Reverb. I used this patch to thicken the stew on a very minimal DJ mix. I also had a simple LFO pointed at a solenoid attached to a small spring reverb tank boinging away in a channel on the master mixer.

This is all pretty standard Eurorack deployment, except for one tiny detail – all of the modules exist in software, contained inside a cross-platform app called VCV Rack.

VCV Rack is an open-source Eurorack emulation environment. Developer Andrew Belt has built a system to simulate interactions between 0-5 volt signals and various circuits. He’s paired this system with a UI that mimics conventions of Eurorack use. Third-party developers are armed with an API and a strong community.
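Rack’s actual engine is C++ and far more involved, but the core idea – digital samples standing in for 0–5 volt control signals feeding module code – can be sketched in a few lines. Everything below is a toy illustration of the LFO-into-VCA patch described earlier, not Rack’s implementation; all names are invented.

```python
import math

SAMPLE_RATE = 48000  # samples per simulated second

def lfo(t, rate_hz):
    """Unipolar sine LFO emitting a 0-5 V style control signal."""
    return 2.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * t))

def vca(sample, cv_volts):
    """A VCA scales its audio input by the control voltage,
    mapping 0-5 V onto a 0-1 gain."""
    return sample * (cv_volts / 5.0)

# One second of a 220 Hz oscillator, amplitude-modulated by a 2 Hz LFO.
osc = (math.sin(2.0 * math.pi * 220.0 * n / SAMPLE_RATE) for n in range(SAMPLE_RATE))
out = [vca(s, lfo(n / SAMPLE_RATE, 2.0)) for n, s in enumerate(osc)]
```

The point is only that “voltage” in software is just another stream of numbers, which is why patching virtual modules together composes so freely.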

VCV Rack is open-source, and the core software is free to download and use. The VCV Rack website also features several sets of modules as expansions, many of which are free. The most notable cost-free VCV offering is a near complete set of Mutable Instruments modules, under the name Audible. Beyond the modules distributed by developer Andrew Belt, there’s an ecosystem of several dozen developers, all working on building and supporting their own sets of tools – the vast majority of these are free as well, as of the time of this writing.

The result is a wide array of tools, covering both real-world modules (including the notable recent addition of the Turing Machine and a full collection of Audible Instruments emulations) and original circuits made just for Rack. The software runs in Windows, Mac OS and Linux, though the system doesn’t force third-party developers to support all three platforms.

VCV Rack is a young project, with its first public build only having become available September 10th. I became a user the same day, and have been using it several times a week for several months. I don’t usually take to new software so quickly, but in Rack’s case I found myself opening the app first and only moving on to a DAW after I had a good thing going. What continues to keep me engaged is the software’s usability – drop modules into a Rack, connect them with cables, and the patch does what it’s patched to do. Integration with a larger system is simple – I use a MOTU 828 mk2 to send and receive audio and CV through an audio interface module, and MIDI interfacing is handled in a similar fashion through a MIDI module. I can choose to clock the system to my midiclock+, or I can let it run free.

VCV Rack runs great on my late 2014 MacBook Pro – I’ve heard crackling audio just a handful of times, and in those cases only because I was doing dumb things with shared sound cards. To a lesser degree, VCV Rack also runs well on a Microsoft Surface Pro 3, though using the interface via touch input on the Surface is fiddly at best. Knobs tend to run all the way up or all the way down at the slightest nudge, and the hitbox for patch cable insert points is a bit small for your fingers on any touch screens smaller than 15”. Using a stylus is more comfortable.

Stability is impressive overall, even at this early pre-1.0 development stage. Crashes are exceptionally rare, at least on my systems – I can’t specifically remember the last one, though there have been a few times the aforementioned crackles forced me to restart Rack. Restarting Rack is no big deal, though – on relaunch, it restores the last state of your patch with audio running, and more than likely everything is ok. Rack will mute lines causing feedback loops, a restriction which ultimately serves to keep your ears and your gear safe.
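How Rack detects those feedback loops isn’t documented here, but the general technique is a cycle check on the patch graph. A toy depth-first version (the module names and graph layout are invented for illustration; this is not Rack’s code):

```python
def has_feedback(patch):
    """Depth-first check for a cycle in a patch graph, where
    patch maps each module to the modules its outputs feed."""
    visiting, done = set(), set()

    def visit(node):
        if node in done:        # already proven cycle-free
            return False
        if node in visiting:    # back on our own path: a loop
            return True
        visiting.add(node)
        if any(visit(nxt) for nxt in patch.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(visit(n) for n in patch)

# A straight-through patch vs. a delay feeding back through a mixer.
straight = {"lfo": ["vca"], "osc": ["vca"], "vca": ["out"]}
looped = {"osc": ["delay"], "delay": ["mixer"], "mixer": ["delay", "out"]}
print(has_feedback(straight))  # False
print(has_feedback(looped))    # True
```

A host that finds such a cycle can then mute one cable in the loop rather than let the signal run away.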

As part of my field work for this write-up, I decided to run a survey. The VCV Rack community is more approachable, open, and down to get dirty with problem-solving than any other software community I’ve participated in directly. I figured I’d get a handful of responses, with variations of “it’s Eurorack but on my computer and for free” as the most common response.

Instead, I got a peek inside a community excited about the product bringing them all together. Over a third of the respondents have been using VCV since early September, and a quarter of the respondents have only been using the tool for a few weeks. Across the board, though, there’s a few key points I think deserve a highlight.

“Modular is for everybody”, and VCV Rack is modular for everybody.

Almost every single one of our 62 respondents in some way indicated that they love hardware modular for its creative possibilities, but also see cost as a barrier. VCV Rack gets right around the cost issue by being free upfront, with some more exotic modules costing money to access. There’s also a solid chunk of users coming from a university experience with large modular systems, such as Montreal’s SYSTMS, who say what initially appealed to them was “getting to explore modular, whereas before that was just not available to a low income musician. I had been introduced to Doepfer systems in university, and since then I have of course not had access to any very expensive physical Eurorack set ups. Also the idea of introducing and teaching my friends, who I knew would be into this!”

(While Rack is especially hardware-like, I do want to shout out fellow open-source modular solution Automatonism – you won’t find anything like a complete set of Mutable modules, but you will find a healthy Pd-driven open source modular synth with the ability to run easily away from a computer via the Critter & Guitari Organelle.)

VCV Rack can be used in as many ways as a real Eurorack system.

The Rack Github describes Rack as an “Open-source virtual Eurorack DAW,” and while I wouldn’t use it to edit audio, Rack can handle a wide enough set of roles in a larger system to fairly call the software a workstation. There are several options for recording audio provided by the community, with an equal number of ways to mix and otherwise manipulate sets of signals. It’s possible to create stems of audio data and control data. It’s possible to get multiple channels of audio into another piece of software for further editing, directly via virtual soundcards.

VCV Rack also has a home within hardware modular systems, with users engineering soundcard-driven solutions for getting CV and audio in and out of a modular rack running alongside VCV. User Chris Beckstrom describes a typical broad array of uses: “standalone to make cool sounds (sampling for later), using Tidal Cycles (algorithmic sequencer) to trigger midi, using other midi sources like Bitwig to trigger Rack, and also sending and receiving audio to and from my diy modular.”

8th graders can make M-nus-grade techno with it.

I mean, check it out.

If you build it, they will come.

For having been around only since early September, VCV Rack already has a very healthy ecosystem of third-party modules. Devs universally describe Rack’s source as especially easy to work with – Jeremy Wentworth, maker of the JW-Modules series, says “[Andrew Belt’s] code for Rack is so easy to follow. There is even a tutorial module. I looked at that and said, hey, maybe I can actually build a module, and then I did.” Jeremy is joined by over 40 other plug-in developers, most of whom are managing to find their own Eurorack recipe. VCV Rack also has a very active Facebook community, with over 100 posts appearing over the three days it took to write this article. I’ve been on the Internet for a long time – it’s unusual to find something this cohesive, cool-headed and capable outside of a forum.

The community aren’t just freeloaders.

Almost two-thirds of our respondents have already purchased some Rack modules, or are going to be purchasing some soon. Only a handful plan not to purchase any. There’s a market here, a path to that market via VCV Rack, and a group of developers already working to keep people interested and engaged with both new modules and recreations of real-world Eurorack hardware. Two-thirds of respondents is a big number – if you’re a DSP-savvy developer, it’s worth investigating VCV Rack.

DSP is portable.

The portability of signal processing algorithms isn’t a phenomenon unique to VCV Rack, but in my opinion, VCV Rack will be uniquely well-served by the ability to easily port DSP code and concepts from other platforms. Michael Hetrick’s beloved Euro Reakt Blocks are being partially ported from Reaktor Core patches into VCV Rack, for example, and Martin Lueder has ported over the classic Freeverb algorithm as part of his plugin pack. As the community cements itself, we’ll likely only see more and more beloved bits of code find their way into VCV Rack.
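Part of why such ports are straightforward: most module DSP reduces to a small, stateful per-sample function with no platform dependencies. As a purely hypothetical illustration (this is not code from any of the modules mentioned), a one-pole lowpass filter written that way moves between environments unchanged:

```python
import math

class OnePoleLowpass:
    """Per-sample one-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1]).

    Nothing here depends on a host environment, which is what makes
    DSP code like this easy to port between platforms.
    """

    def __init__(self, cutoff_hz, sample_rate):
        # Standard one-pole coefficient derived from the cutoff frequency.
        self.a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
        self.y = 0.0

    def process(self, x):
        # One sample in, one sample out, with the filter state carried over.
        self.y += self.a * (x - self.y)
        return self.y
```

The same process() body could sit inside a Reaktor Core structure or a VCV Rack module’s audio callback; only the surrounding plumbing changes.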

A handful of cool, recent VCV developments

VCV Rack are selling commercial modules. Pulse 8 and Pulse 16 are drum-style sequencers, and there’s also an 8-channel mixer with built-in VCA level CV inputs. You’ll find them on the official VCV Rack site. Instead of donations, Andrew prefers people purchase his modules, or buy the modules of other devs. All the modules are highly usable, with logical front-panel layouts and powerful CV control. Ed.: This in turn is encouraging, as it suggests a business model pathway for the developers of this unexpected runaway (initially) free hit. -PK

An open Music Thing module has come to VCV. Stellare Modular has released a port of Music Thing Modular’s Turing Machine mkII – a classic looping random CV generator, typically used for lead melodies or basslines, now brought into VCV Rack by a third-party dev. Open source hardware is being modeled and deployed in an open source environment.

There’s now Ableton Link support. A module supporting Ableton Link, the live jamming / wireless sync protocol for desktop and mobile software, is available via a module released by Stellare. In addition to letting you join in with any software supporting Link, there’s a very handy clock offset.

Reaktor to VCV. Michael Hetrick is porting over Euro Reakt stuff from Reaktor Blocks, and making new modules in the process. Especially worth pointing out is his GitHub page, which includes ideas on what to actually do with the modules in the context of a patch: https://github.com/mhetrick/hetrickcv

VCV meets monome. Dewb’s Monome Modules allow users to connect their monome Grid controllers, or use a virtual monome within Rack itself. He’s currently also got a build of Monome’s White Whale module: https://github.com/Dewb/monome-rack

Hora’s upper-class tools and drums. Hora Music is, to my knowledge, the first “premium”-priced module release, at €40 for the package. With a combination of sequencers, mixers, and drums, it could be the basis of whole projects. See: https://gumroad.com/horamusic

I’ll be back next week with a few different recipes for ways you can make Rack part of your set-up, as well as a Q&A with the developer.

Ted Pallas is a producer and technologist based out of Chicago, Illinois. Find him at http://www.savag.es/

The post What you need to know about VCV Rack, a free Eurorack emulation appeared first on CDM Create Digital Music.

Arturia add CMI, DX7, Clavinet – and Buchla Easel – in software

Delivered... Peter Kirn | Scene | Thu 7 Dec 2017 5:06 pm

Arturia refreshed their mega-collection of synths and keyboard instruments, with new sought-after additions – including a recreation of the Buchla Easel.

Get ready for some numbers and letters here. The resulting product is the Arturia V Collection 6. The ancient Roman in me apparently wants to read that as “5 collection 6” but, uh, yeah, that’s the letter “v” as in “virtual.”

And what you’re now up to is 21 separate products bundled as one. Inception-style, some of those products contain the other products, too. (If you just want the Buchla, sit tight – yes, you can get it separately.)

So, what we’re talking about is this:

Synths: models of the Synclavier, Oberheim Matrix 12 and SEM, Roland Jupiter-8, ARP 2600, Dave Smith’s Sequential Prophet V and vector Prophet VS, Yamaha CS-80, a Minimoog, and a Moog modular. To that roster, you can now add a Yamaha DX7, Fairlight CMI, and a Buchla Music Easel.

Keys: Fender Rhodes Stage 73 (suitcase and stage alike), ARP Solina String Ensemble, Wurlitzer. And now there’s a Clavinet, too.

Organs: Hammond B-3, Farfisa, VOX Continental.

And some pianos. Various pianos – uprights and grands – plus other parameters via physical modeling are bundled into Piano V.

The bundle also includes Analog Lab, which pulls together presets and performance parameters for all the rest into a unified interface.

This isn’t all sampled soundware, either – well, if it were, it’d be impossibly huge. Instead, Arturia use physical modeling and electronics modeling techniques to produce emulations of the inner workings of all these instruments.

About those new instruments…

There’s no question the Clavinet and DX7 round out the offerings, making this a fairly complete selection of just about everything you can play with keys. (Okay, no harpsichords or pipe organs – so, every relatively modern instrument.) And the Fairlight CMI, while resurrected as a nifty mobile app on iOS, is welcome, too. But because it’s been so rare, and because of the renaissance of interest in Don Buchla and so-called “West Coast” synthesis for sound design, the Buchla addition is obviously stealing the show.

Here’s a look at those additions:

The DX7 V promises to build on the great sound of the Yamaha original while addressing the thing that wasn’t so great about the DX7 – interface and performance functionality. So you get an improved interface, plus a new mod matrix, customizable envelopes, extra waveforms, a 2nd LFO, effects, sequencer, and arpeggiator, among other additions.

Funk fans get the Clavinet V, with control over new parameters via physical modeling (in parallel with the Arturia piano offering), and the addition of amp and effect combos.

Okay, but let’s get on to the two really exciting offerings (ahem, I’m biased):

The CMI V recreates the 1979 instrument that led the move to digital sampling and additive synthesis. And this might be the first Fairlight recreation that you’d want in a modern setup: you get 10 multitimbral, polyphonic slots, plus real-time waveform shaping, effects, and a sequencer. And Arturia have thrown us a curveball, too: to create your own wavetables, there’s a “Spectral” synth that scans and mixes bits of audio.

I’m really keen to play with this one – it sounds like what you’ll want to do is to go Back to the Future and limit yourself to making some entire tracks using just the Fairlight emulation. If you read my children’s TV round-up, maybe Steve Horelick and Reading Rainbow had you thinking of this already. Now you just need a PC with a stylus so you can imagine you’ve got a light pen.

The Buchla Easel goes further back to 1973. It’s arguably the most musical of Don Buchla’s wild instruments, bringing the best ideas from the modular into a single performance-oriented design. And here, it looks like we get a complete, authentic reproduction.

Everything that makes the Buchla approach unique is there. Think amplitude modulation and frequency modulation and the “complex” oscillator’s wave folding, gating that allows for unique tuned sounds, and sophisticated routing of modulation. It all adds up to granting the ability to make strange, new timbres, to seek out new performance life and new sound designs – to boldly go where only privileged experimentalists have gone before.

This video explains the whole “West Coast” synthesis notion (as opposed to Moog’s “East Coast” modular approach):

Arturia make up for the fact that this is now an in-the-box software synth by opening up the worlds of modulation. So you get something called “gravity,” which applies game physics to modulation, and other modulation sources (the curves of the “left hand,” for instance) to make all the organic changes happen inside software. It’s a new take on the Buchla, and not really like anything we’ve seen before. And it suggests this software may elevate beyond just faux replication onscreen, with a genuinely new hybrid.

My only regret: I would love to have this with touch controls, on iOS or Windows, to really complete the feeling. It’s odd seeing the images from Arturia with that interface locked on a PC screen. But I think of all the software instruments in 2017, this late addition could be near the top (alongside VCV Rack’s modular world, though more on that later).

But it’s big news – a last-minute change to upset the world of sound making in 2017.

Watch for our hands-on soon.

Intro price and more new features

Also new in this version: the Analog Lab software, which acts as a hub for all those instruments, parameters, and presets, now has been updated, as well. There’s a new browser, more controller keyboard integration, and other improvements.

Piano V has three new piano models (Japanese Grand, a Plucked Grand, and a Tack Upright), enhanced mic positioning, an improved EQ, a new stereo delay, and its own built-in compressor.

There are improvements throughout, Arturia say.

There’s also a lower intro price: new users get US$/€ 249 instead of 499, through January 10.

And that Buchla is 99 bucks if that’s really what you want out of this set.

More:

V Collection

Buchla Easel V

The post Arturia add CMI, DX7, Clavinet – and Buchla Easel – in software appeared first on CDM Create Digital Music.

Why Soda could finally make you take DJ apps seriously again

Delivered... Peter Kirn | Scene | Mon 4 Dec 2017 4:56 pm

Soda for iOS is the first DJ app that is whatever you want it to be – with fully customizable interfaces, powerful specs, AU plug-ins, and Ableton Link.

The need for something new

Let’s be honest: we’re not exactly at the high water mark for DJ software. Even vinyl (not digital vinyl – like the stuff you hurt your back carrying) seems to be on a stronger upswing than DJ software. The Pioneer CDJ reigns supreme, to the extent that you can get laughed out of a club when you show up with a computer.

And software, instead of seeming innovative, is looking awfully rigid. You’re generally stuck with pre-fabbed interfaces and hardware mappings. Innovation seems to be slowing. And then there’s the laptop itself – requiring a separate audio interface, driver configuration, and physical space in the booth that often isn’t there.

Tablets running iOS and Windows could offer solace. But so far, iOS and Windows touch-based apps have focused on entry-level users, either to avoid cannibalizing high-end products (TRAKTOR, Rekordbox) or in an attempt to attract casual DJs.

Your way, right away?

A new DJ app called Soda goes a different direction – it’s built from the ground up to be a serious, flexible app, but on a mobile/touch platform. It comes from the developers of the Modstep sequencer/production tool and Ableton Live controller app touchAble. And as a result – since those developers work… in my office – I’ve been watching it evolve from the very first sketch and have gotten some hands-on time with it. And much to my own surprise, it’s made me reconsider the value of touch DJ software at a time when I’d more or less written it off.

The basic idea of Soda: let the user tailor the DJ software to their needs, instead of the other way around.

First, how many decks do you want? You can choose from one to an absurd eight.
How do you want to mix? You choose: switch off sync and use pitch, or turn sync on and let everything be automatic. Time stretch to keep things locked to key, or use pitch to change speed. And when sync is on, you can even choose what quantization you want for tracks – just like launch quantization of clips in Ableton Live.

What should the screen look like? Vertical decks? Horizontal decks? Effects controls? Library? Instead of giving you a handful of pre-selected options, Soda ships with a complete interface editor, so you choose what you see and how, and every element on the screen can be moved and resized.

Do you want to focus on the screen and touch? There’s a color waveform display, which you can cue and zoom with your fingers.

Do you prefer MIDI controller hardware? Every single element on-screen can be MIDI mapped, opening up endless custom MIDI configurations.

Effects work more the way they do in traditional production tools. You get two send effects chains, with five internal effects (Delay, Reverb, Phaser, Flanger, EQ 3) and Audio Unit support (AUv3). And you can browse both the iTunes music library and new Files support on iOS 11.

Cue points and loop points are more powerful, too – you get 16 per deck and per track, you can name them, and cue points can double as loop points.

From there, you have all the features you’d expect – recording, playlist management, key and BPM detection, compatibility with all iOS-compatible (Core Audio/Core MIDI) audio and MIDI devices, cueing, and split cable support (in case you don’t have an audio interface for separate cueing).

But let’s back up: this is generally more powerful than a lot of desktop DJ software available now. Certainly, it bests the deck and cue capabilities of leading tools Serato and TRAKTOR, and that’s before you get into the interface customization capabilities.

Here’s the key: endless customization of the UI, and modules for decks, effects, and more.

Promo video:

There’s also a video walkthrough from the beta:

Who’s this for?

I’m not suggesting iPads will unseat CDJs any time soon. But Soda doesn’t have to do that to be a radical new solution. I can see a number of use cases here:

On-the-go prep and mixing. For one, you’ve finally got an ideal mobile app for preparing music and practicing on the road. It’s also ideal for that situation where someone asks you for a DJ mix and… you’re not near decks. You get an interface that’s tremendously customizable, and the ability to differentiate that mix by adding effects and the like. Plus, while you can’t sync cue points this way, iTunes support means you can sync libraries with a desktop machine to bring into Rekordbox (for use with CDJs) or other DJ software (if you must).

Mobile computer replacement for DJing. Laptops are awkward in a booth, especially if the DJ software maker (cough) locks you into unwieldy, big controllers. But an iPad or Windows tablet is far easier. And you could pair Soda with some compact DJ controllers, like Faderfox.

Hybrid sets. Here, Soda really excels. The flexibility with decks and audio effect support make Soda a powerful DJ add-on. And Ableton Link support means you can wirelessly sync to live sets on a laptop running Ableton Live … or a laptop running Reason, or an iPad running Modstep, or whatever. There’s no MIDI clock support for running Soda alongside, say, an Elektron Octatrack, but developers say that should appear in an update soon.

Live sets and sampling. Of course, who says this is really even a “DJ app” in the conventional sense? With all that loop and name-able cue support, eight decks, and effects, you could use Soda with stems or backing tracks for your live set, or think of the “decks” as samplers. It could be an ideal production tool on iOS.

The iPad should be a great platform for this app, particularly with the rich app and effect ecosystem there. But if you prefer Windows, Soda won’t necessarily be wedded to iOS forever. The core software is developed in C, and is largely platform agnostic, with Windows support planned (and already privately tested). As Microsoft improves Surface and other partners deliver tablets and hybrids, that could be a strong option. It’s doubly encouraging not to be locked to one vendor, given Apple’s recent shaky OS quality and frequent updates.

Stay tuned – I’ll do a full hands-on / review soon. I’m also very interested in custom controller support, so we’ll talk about that soon – and possibly enlist some of the CDM community, if you’re interested.

For now, the app is a measly US$9.99 – for an app that (at least in some categories) objectively bests alternatives costing many times that.

Developer site:
http://www.soda.world/

The post Why Soda could finally make you take DJ apps seriously again appeared first on CDM Create Digital Music.

Try AI remixing in Regroover with these tips and exclusive sounds

Delivered... Peter Kirn | Scene | Wed 29 Nov 2017 5:11 pm

Regroover opens up new ways of transforming sounds and remixing materials, as powered by machine learning. Here’s how you can try that out, for free.

CDM got the chance to partner with developer Accusonus to help introduce this way of working. And it is a somewhat new approach: you’re separating audio components from rhythmic material, starting with a stereo file. It’s new enough that you might not immediately know where to begin.

So, to get you started, we’ve collaborated on a tutorial and a sound pack.

You don’t need to buy anything here. There’s a 14-day unlimited trial version for download:
https://accusonus.com/products/regroover#downloads

Then, the trick is really understanding the different creative possibilities of Regroover’s toolset. I put together a video – the challenge to myself being really to take a generic sound and do something new with it. I usually ignore all those loops that come with music software, but here it wound up being useful. Sure, I could have programmed my own loop here from scratch, but by working with Regroover, I got to chop up the groove/rhythmic feel and sounds themselves, independent of one another.

Here’s a fast step-by-step walkthrough of the interface:

First, to load the sound pack we’re giving you, choose “load project.” Then navigate to your download, which is grouped by different kits and loops (yeah, there’s a lot of stuff in there).

Second, check tempo settings. Sometimes it’s necessary to halve or double the detected bpm, just as in other time stretching tools. Also, you need to manually sync to the host tempo any time it changes – that’s because it takes a moment for those machine learning-powered algorithms to analyze the file.
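The halve-or-double correction itself is simple arithmetic – here’s a hypothetical sketch (not Regroover’s actual code) of picking whichever tempo octave sits closest to the host:

```python
def best_bpm(detected_bpm, host_bpm):
    """Pick whichever of the detected tempo, its half, or its double
    sits closest to the host tempo - the usual fix when a loop's BPM
    is detected at the wrong octave."""
    candidates = (detected_bpm / 2, detected_bpm, detected_bpm * 2)
    return min(candidates, key=lambda bpm: abs(bpm - host_bpm))

# A loop detected at 240 BPM in a 126 BPM project almost certainly
# wants to be treated as 120 BPM.
corrected = best_bpm(240.0, 126.0)  # → 120.0
```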

You may want to transform the default analysis. The “split” tool allows for some creative manipulation of the number of layers, and how dense different layers are.

Not all Regroover manipulations have to be radical. You can start out just by emphasizing or de-emphasizing portions of the loop – adjusting its relative amplitude and mid/side and left/right panning. I suspect some of you will be happy just making subtle modifications to loops and otherwise leaving them as-is; if you don’t change the tempo, those will sound fairly close to the original. But this is still really different from the usual EQ and compression tools available to you.

As I demonstrate in the video, you can create polyrhythms inside an existing loop by adjusting the in and out points on each layer. Again, that’s normally impossible with a stereo audio mix.

You can pull out individual portions of a sound by double-clicking, then dragging a selection. From there, you can drag and drop either into Regroover’s own sampler facility, or back into a host/DAW like Ableton Live.

You may want to check out Regroover’s built-in sampler tools. You’ll find all the usual facilities for amplitude envelope and so on, and you can create a playable pad of sounds you’ve extracted from a loop.

Exclusive CDM sound pack

Just for you, we’ve got a sound pack entitled “Hyper Abstract Electronica.” It’s the work of London/Surrey artist Aneek Thapar, who has an extensive resume in mixing, mastering, and teaching, and has also worked with Novation and Ninja Tune’s iOS/Android remix app Ninja Jamm.

Aneek created something that’s really special, I think, in that it seems perfectly suited to creative abuse inside Regroover. Putting the two together makes this feel almost like a unique instrument.

Aneek clearly thinks of it that way. Watch what happens when he controls it with gestures and the Leap Motion (plus Ableton Push):

The pack is free; we’ll add you to our respective newsletters (which have opt-out options, of course).

Download Hyper Abstract Electronica – CDM Exclusive

I am actually really, really interested if people make any music with this, so please don’t be shy and do send us tracks if you come up with something. (If you aren’t ready to invest, of course, you’ve got a nice 14-day deadline to keep you productive!) I’ll share any really good ones with readers.

For more background on the research behind this:
Accusonus explain how they’re using AI to make tools for musicians

Disclosure: Accusonus sponsored the creation of this content with CDM.

The post Try AI remixing in Regroover with these tips and exclusive sounds appeared first on CDM Create Digital Music.

Maschine will finally get time stretching, melodic shifting for loops

Delivered... Peter Kirn | Scene | Tue 28 Nov 2017 7:27 pm

You can already sample and slice with Native Instruments’ groove production instrument. But soon, you’ll change loops’ pitch and time in real-time, too.

Maschine has been guided by focusing on certain means of working, ignoring others. The hardware/software combination from the start began with an MPC-style sampling workflow and drum machine features, and it’s added from there – eventually getting features like more elaborate pattern generation and editing, drum synths, more sound tools, and deeper arrangement powers.

But hang on – that’s not really an excuse for not doing time stretching. Real-time time stretching has been a feature on many similar hardware and software tools.

Now, it’s sort of nice that Maschine isn’t Ableton Live. In fact, it’s so nice that the combination of the two is one of the most common use cases for Maschine. But independent control of pitch and time for loops is so expected that it’s almost distracting when it isn’t there.

So, Maschine 2.7 adds that functionality. In addition to the existing Sampler, which lets you trigger sounds and loops and slice audio into chunks, there’s now an Audio plug-in device you can add to your projects. Audio will play loops in time with the project, and has the ability to time stretch in real-time.

The features we’re getting:

Real-time time stretching keeps loops in time with a project, without changing pitch

Loop hot swapping lets you change loops as you play – apparently without missing a beat, so you can audition lots of different loops or trigger different loops on the fly

Gate Mode lets you play a loop just by hitting a pad

Melodic re-pitching lets you change pitch in Gate Mode of a whole loop or portion of a loop, just by playing pads

Gate Mode: trigger loops, change pitch, from pads.
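Under the hood, keeping a loop locked to the project without changing its pitch starts from one number – the ratio of project tempo to loop tempo – which the stretching algorithm then applies while preserving pitch. A hypothetical sketch of that arithmetic only (not NI’s actual engine):

```python
def stretch_ratio(loop_bpm, project_bpm):
    """Playback-speed ratio that keeps a loop locked to the project tempo.

    A ratio > 1 means the loop must play faster (and so end up shorter);
    the time-stretch algorithm preserves the original pitch regardless.
    """
    return project_bpm / loop_bpm

def stretched_length(loop_seconds, loop_bpm, project_bpm):
    # A loop recorded at 120 BPM dropped into a 140 BPM project has to
    # shrink by the same ratio to stay on the grid.
    return loop_seconds / stretch_ratio(loop_bpm, project_bpm)
```

So a four-second, 120 BPM loop in a 140 BPM project plays back at a 7/6 speed ratio and lasts about 3.43 seconds – with pitch untouched, which is the whole point of the feature.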

More discussion on the NI blog.

The combination of pads and Gate Mode sounds really performer-friendly, and different from what you see elsewhere. That’s crucial, because since you can already do a lot of this in other tools, you need some reason to do it in Maschine.

I’m eager to get my hands on this and test it. It’s funny, I had some samples I wanted to play around with in the studio just before I saw this, and decided not to use Maschine because, well, this was missing. But because the pads on the Maschine MK3 hardware feel really, really great, and because sometimes you want to get hands-on with material using something other than the mouse, I’m intrigued by this. I find this sort of way of working can often generate different ideas. I’m sure a lot of you feel the same way. Actually, I know you do, because you’ve been yelling at NI to do this since the start. It looks like the wait might pay off with a unique, reflective implementation.

We’ll know soon enough – stay tuned.

The old way of doing things: the Sampling workflow:

The post Maschine will finally get time stretching, melodic shifting for loops appeared first on CDM Create Digital Music.

It’s Cyber Monday; here’s where to find deals on music gear and apps

Delivered... Peter Kirn | Scene | Mon 27 Nov 2017 4:20 pm

Over the holiday season, developers of software and hardware for music making are offering steep discounts. Here’s where to find them.

First, it’s really a no-brainer to pick these big sales to load up your iPad, iPhone, or other mobile device with apps cheaply.

Mobile shopping: Our very own Ashley Elsdon, he of Palm Sounds, has an absolutely insane list of music apps, covering the gamut of tools from experimental soundscape generators to DJ software, instruments and effects, drums and synths, and powerful sequencers and production tools. It’s worth a skim just to see if there’s anything you’re missing that you wanted:
Black Friday means some seriously good discounts on excellent apps

Shopping for everything: Another great place to start is a thread on Reddit tracking different deals (including our MeeBlip, so thanks!):
Holiday Sales Thread! (self.synthesizers)

These tend more to software than hardware, of course, because of margins, but you’ll see even the likes of Moog, Waldorf, Audio Damage, and Critter & Guitari in there.

As seen on CDM: A few products we’ve written up recently are also discounted.

That includes Reason 10 at 25% off and Ableton’s ongoing 20% off Live sale (which includes a free upgrade to Live 10 early next year).

Accusonus is discounting Regroover.

Cakewalk’s excellent z3ta+ waveshaping synth is just US$35 – which might be your last chance to snap it up now that Cakewalk are going the way of the dodo.

Native Instruments have a huge sale on – including a great time to buy Reaktor, which I’ve been talking about lately (and, genuinely, using sort of nonstop). See discussion.

Isotonik’s stuff is on sale, including their tools and add-ons for Ableton Live, etc.

More tidbits: Moog apparel and merch is on 20% discount with code TURKEYMOOG in the USA only.

Soundtoys has 50% off everything, including upgrades.

Plugin Boutique have an up-to-80% sale.

DJ Tech Tools have discounts on their store, including their Midi Fighter hardware (thanks to comments for this one).

And if you’ve written a Christmas song, these folks are offering a competition and free mastering.

CDM deals: Back here in CDM territory, we’ve got deals on the stuff we’re producing.

Our MeeBlip hardware is available now for US$119.95, with all the cables you’ll need – power plus MIDI plus audio.
MeeBlip triode

And our record label Establishment has its whole catalog available for 50% off. Use code “cybersale” when you check out, through tomorrow (Tuesday) evening:
https://establishmentrecords.bandcamp.com/

Have a great Monday, and do remember the reason for the season – we purchase gear and apps now because the superior race of Cybermen overlords demand it of us. They’re already using the Gravitron on Berlin, where I haven’t seen the sun in eons, and I suspect if we don’t do their bidding, we will soon face full conversion to cyberpeople and see our home planet destroyed entirely as we’re hauled back to the planet Mondas. Now, happy cyber-holidays! Go shopping, because resistance is futile … especially when it comes to the need to acquire synthesizers!

The post It’s Cyber Monday; here’s where to find deals on music gear and apps appeared first on CDM Create Digital Music.

Waves give you the old-school VU meter your DAW is missing, for free

Delivered... Peter Kirn | Scene | Fri 24 Nov 2017 7:28 pm

Funny thing about those old analog mixing desks: the VU meters gave really good visual feedback. Now you can add that to your modern DAW, for free.

In the latest “here’s free stuff because we want your e-mail address” play, Waves are giving away a handsome VU meter with simulated needle. And it’s not just some twee retro touch: the way these meters respond to audio signal is actually often easier to see.

Mixing is all about listening. But there’s no shame in giving your ears a little extra reinforcement. I actually suspect that metering is part of what’s to blame when people have trouble mixing on computers. You’ll hear people say they moved from one DAW to another to improve how a mix “sounds” – which is peculiar, given most DAWs literally mix by adding together numbers, and most DAWs even share the same mix accuracy in terms of how those numbers are represented. If you and a friend add two and two, one of your fours isn’t more awesome than the other one, so you get the point. (Also suspect: these complaints very often involve Ableton Live, whose meters I find a bit hard to see, even after Live 9 refurbished them a bit.)
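That summing claim is easy to verify: a digital mix bus is fundamentally sample-wise addition, so any two engines adding the same numbers at the same precision agree exactly. A toy sketch (not any DAW’s actual code):

```python
def mix(*tracks):
    """Sum equal-length tracks sample by sample - which is all a digital
    mix bus fundamentally does before any processing is applied."""
    return [sum(samples) for samples in zip(*tracks)]

# Two "different DAWs" performing the same additions get identical results.
kick = [0.5, -0.5, 0.25]
hat = [0.125, 0.125, -0.125]
bus = mix(kick, hat)  # → [0.625, -0.375, 0.125]
```

Where DAWs genuinely differ is in metering, default pan laws, and processing on the signal path, not in the arithmetic of summing itself.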

Now, of course, it’s (very) possible people just don’t know how to mix. But then, if you’re learning mixing, this kind of visual feedback may be even more useful to newcomers – and old-timers will appreciate its familiarity.

While we’re on the topic, you might also consider mixing down in the superb (and almost weirdly inexpensive) Harrison Mixbus, which includes lots of sonic and usability features from traditional consoles – metering included. It even runs on Linux.

Harrison Mixbus

In the meantime, though, have fun with turning back the clock for free with this:

https://www.waves.com/plugins/vu-meter#

The post Waves give you the old-school VU meter your DAW is missing, for free appeared first on CDM Create Digital Music.

What you can learn from Belief Defect’s modular-PC live rig

Delivered... Peter Kirn | Scene | Wed 22 Nov 2017 5:42 pm

Belief Defect’s dark, grungy, distorted sounds come from hardware modulars in tandem with Reaktor and Maschine. Here’s how the Raster artists make it work.

Belief Defect is a duo from two known techno artists, minus their usual identities, with a full-length out on Raster (the label formerly known as Raster-Noton). It digresses from techno into aggressively crunchy left-field sonic tableau and gothic song constructions. There are some video excerpts from their stunning live debut at Berlin’s Atonal Festival, featuring visuals by OKTAform:

See also: STREAM BELIEF DEFECT’S DECADENT YET DEPRAVED ALBUM AND READ THE STORIES BEHIND THEIR CREEPY SAMPLES

They’ve got analog modulars in the studio and onstage, but a whole lot of the live set’s sounds emanate from computers – and the computer pulls the live show together. That’s no less expressive or performative – on the contrary, the combination with Maschine hardware means easy access to playing percussion live and controlling parameters.

Native Instruments asked me to do an in-depth interview for the new NI Blog, talking about their music. The full interview:

Belief Defect on their Maschine and Reaktor modular rig [blog.native-instruments.com]

They’ve got a diverse setup: modular gear across two studios, Bitwig Studio running some stems (and useful in the studio for interfacing with modulars), a Nord Drum connected via MIDI, and then one laptop running Maschine and Reaktor that ties it all together.

Here are some tips picked up from that interview and reviewing the Reaktor patch at the heart of their album and live rig:

1. Embrace your Dr. Frankenstein.

Patching together something from existing stuff to get what you want can give you a tool that gets used and reused. In this case, Belief Defect used some familiar Reaktor ensemble bits to produce their versatile drum kit and effects combo.

2. Saturator love.

Don’t overlook the simple. A lot of the sound of Belief Defect is clever, economical use of the distinctive sound of delay, reverb, filter, and distortion. The distortion, for instance, is the sound of Reaktor’s built-in Saturator 2 module, which is routed after the filter. I suspect that’s not accidental – by not overcomplicating layers of effects, it frees up the artists to use their ears, focus on their source material, and dial in just the sound they want.

And remember if you’re playing with the excellent Reaktor Blocks, you can always modify a module using these tried-and-true bits and pieces from the Reaktor library.

For more saturation, check out the free download they recommend, which you can drop into your Blocks modular rig, too:

ThatOneKnob Compressor [Reaktor User Library]

3. Check out Molekular for vocals.

Also included with Reaktor 6, Molekular is its own modular multi-effects environment. Belief Defect used it on vocals via the harmonic quantizer. And it’s “free” once you have Reaktor – waiting to be used, or even picked apart.

“Using the harmonic quantizer, and then going crazy and have everything not drift into gibberish was just amazing.”

Maschine clips in the upper left trigger snapshots in Reaktor – simple, effective.

4. Maschine can act as a controller and snapshot recall for Reaktor.

One challenge for some Reaktor users, I suspect: your patching and sound design process is initially all about the mouse and computer, but when you play, you want to get tangible. Here, Belief Defect have used Reaktor inside Maschine. Then the Maschine pads trigger drum sounds, and the encoders control parameters.

Group A on Maschine houses the Reaktor ensemble. Macro controls are mapped consistently, so that turning the third encoder always has the same result. Then Reaktor snapshots are triggered from clips, so that each track can have presets ready to go.

This is so significant, in fact, that I’ll be looking at this in some future tutorials. (Reaktor also pairs nicely with Ableton Push in the same way; I’ve done that live with Reaktor Blocks rigs. Since what you lose going virtual is hands-on control, this gets it back – and handles that preset recall that analog modulars, cough, don’t exactly do.)

5. Maschine can also act as a bridge to hardware.

On a separate group, Belief Defect control their Nord Drum – this time using MIDI CC messages mapped to encoders. That group is color-coded Nord red (cute).
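Under the hood, that bridge boils down to standard MIDI Control Change messages. Here's a minimal sketch of how a CC message is packed into bytes – the channel and controller numbers are illustrative assumptions, not Belief Defect's actual mapping:

```python
def midi_cc(channel: int, controller: int, value: int) -> bytes:
    """Pack a MIDI Control Change message into its 3-byte wire format.

    Status byte: 0xB0 | channel. Channels are 0-15 on the wire,
    though hardware UIs usually display them as 1-16.
    """
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("channel must be 0-15; controller and value 0-127")
    return bytes([0xB0 | channel, controller, value])

# e.g. an encoder mapped to CC 74 (commonly filter cutoff) on channel 10:
msg = midi_cc(channel=9, controller=74, value=100)
```

Maschine's MIDI mode sends exactly these three bytes per encoder move; the Nord Drum just needs its parameters assigned to matching CC numbers.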

Belief Defect, the duo, in disguise. (You… might recognize them in the video, if you know them.)

6. Build a committed relationship.

Well, with an instrument, that is. By practicing with that one Reaktor ensemble, they built a coherent sound, tied the album together, and then had room to play – live and in the studio – by really making it an instrument and an extension of themselves. Some of the drum sounds, they point out, have lasted ten years. There's a parallel on the hardware side – they talk about taking out their Buchla Music Easel to work on.

Check out the full interview:

Belief Defect on their Maschine and Reaktor modular rig [blog.native-instruments.com]

Whoa.

Follow Belief Defect on Twitter:
https://twitter.com/Belief_Defect

and Instagram:
https://www.instagram.com/belief_defect/

Reaktor 6

Reaktor User Library

Photo credits: Giovanni Dominice.

The post What you can learn from Belief Defect’s modular-PC live rig appeared first on CDM Create Digital Music.

Gibson just killed Cakewalk, because Philips?!

Delivered... Peter Kirn | Scene | Tue 21 Nov 2017 7:06 pm

Gibson, the company known for legendary guitars and killing your favorite DAW in the 90s … now gets the chance to remind the pro audio crowd of the latter.

Gibson is discontinuing all development of Cakewalk products, which would include the SONAR flagship DAW. The explanation: they want to focus on consumer audio electronics, namely Philips:

Gibson Brands announced today that it is ceasing active development and production of Cakewalk branded products. The decision was made to better align with the company’s acquisition strategy that is heavily focused on growth in the global consumer electronics audio business under the Philips brand.

Cakewalk has been an industry leader in music software for over 25 years by fusing cutting-edge technology with creative approaches to tools that create, edit, mix, and publish music for professional and amateur musicians. Gibson Brands acquired Cakewalk in 2013.

For perspective, this means Gibson is pointing to an acquisition that took place just one year after the acquisition of Cakewalk, namely WOOX Innovations. That sale, which cost US$135 million (plus an unspecified brand licensing fee), covered home audio and music accessories, with video products moving to Gibson this year.

And it means that just as Dutch giant Philips moves to “health and well being,” Gibson is moving from being a guitar company into being a consumer electronics megacorp.

Armin van Buuren selling his collaboration with Philips – a product included in the acquisition.

Cakewalk’s SONAR DAW, while it may not be relevant to each reader here personally, had retained a passionate following with many producers, particularly because of its focus on the Windows platform. It’s also one of a handful of tools that has survived multiple decades of technological change. (From the same generation: Logic, Cubase.)

It may be a mistake to focus on the high end here, though. Cakewalk’s entry-level products were a generally overlooked cash cow. As the entry-level market has refocused on mobile, it’s unclear whether a desktop tool aligned with higher-end products makes sense in the same way. To their credit, Apple has managed to position their GarageBand product across iOS and desktop – but, then, Apple gives away that product and they make iOS.

The announcement comes on the heels of Momentum, a tool for capturing ideas on mobile and then translating them to a DAW. But then, discontinuing the Cakewalk products means Momentum doesn’t have a DAW vendor to migrate to – only a plug-in. And it loses the Cakewalk name.

Momentum already was a questionable investment: for anything better than MP3-quality audio, you pay a hundred bucks a year – a steep price, given that tools like GarageBand are free or a few bucks on iOS, and $100 a year easily buys you massive amounts of storage for whatever you want.

Now, Momentum’s future is called into question, which I think makes investing in the subscription downright insane.

At the risk of being blunt and making some enemies, though, I think musicians might well be suspicious of corporate acquisitions and whether they really further innovation. There’s reason for users to be hurt and angry. And telling users of a professional music creation product line with a 30-year history that some branded speakers are the new direction adds to the sting.

There’s some business risk for Gibson, too. Consumer sound electronics are commodity markets – and big players can set themselves up for big failures.

For pro music creation, of course, terrific alternatives abound on Windows, including software developed by independent companies, from Reaper to Renoise, FL Studio to Ableton Live. And it seems independence and longevity go hand in hand.

But I have to be personally nostalgic. Cakewalk for DOS was the first sequencer I ever used, the first music software I ever owned. (My parents actually bought me the box.) Greg, the developer, had his name right on the screen.

To this day, I still like knowing the engineers behind the tools we use by their first names. I wish everyone at Cakewalk the best – and I’m certainly happy to keep getting to know individuals who work on stuff, and not just faceless brands.

And thanks, Greg – because without your work, I probably wouldn’t be writing this now.

PS – hey, by the way, Gibson, my second DAW wound up being Opcode Vision, so this is what I’ve got to say to you:

The post Gibson just killed Cakewalk, because Philips?! appeared first on CDM Create Digital Music.

Accusonus explain how they’re using AI to make tools for musicians

Delivered... Peter Kirn | Scene | Fri 17 Nov 2017 12:32 am

First, there was DSP (digital signal processing). Now, there’s AI. But what does that mean? Let’s find out from the people developing it.

We spoke to Accusonus, the developers of loop unmixer/remixer Regroover, to try to better understand what artificial intelligence will do for music making – beyond just the buzzwords. It’s a topic they presented recently at the Audio Engineering Society conference, alongside some other developers exploring machine learning.

At a time when a lot of music software retreads existing ground, machine learning is a relatively fresh frontier. One important distinction to make: machine learning involves training the software in advance, then applying those algorithms on your computer. But that already opens up some new sound capabilities, as I wrote about in our preview of Regroover, and can change how you work as a producer.

And the timing is great, too, as we take on the topic of AI and art with CTM Festival and our 2018 edition of our MusicMakers Hacklab. (That call is still open!)

CDM spoke with Accusonus’ co-founders, Alex Tsilfidis (CEO) and Elias Kokkinis (CTO). Elias explains the story from a behind-the-scenes perspective – but in a way that I think remains accessible to us non-mathematicians!

Elias (left) and Alex (right). As Elias is the CTO, he filled us in on the technical inside track.

How do you wind up getting into machine learning in the first place? What led this team to that place; what research background do they have?

Elias: Alex and I started out our academic work with audio enhancement, combining DSP with the study of human hearing. Toward the end of our studies, we realized that the convergence of machine learning and signal processing was the way to actually solve problems in real life. After the release of drumatom, the team started growing, and we brought people on board who had diverse backgrounds, from audio effect design to image processing. For me, audio is hard because it’s one of the most interdisciplinary fields out there, and we believe a successful team must reflect that.

It seems like there’s been movement in audio software from what had been pure electrical engineering or signal processing to, additionally, understanding how machines learn. Has that shifted somehow?

I think of this more as a convergence than a “shift.” Electrical engineering (EE) and signal processing (SP) are always at the heart of what we do, but when combined with machine learning (ML), it can lead to powerful solutions. We are far from understanding how machines learn. What we can actually do today is “teach” machines to perform specific tasks with very good accuracy and performance. In the case of audio, these tasks are always related to some underlying electrical engineering or signal processing concept. The convergence of these principles (EE, SP and ML) is what allows us to develop products that help people make music in new or better ways.

What does it mean when you can approach software with that background in machine learning? Does it change how you solve problems?

Machine learning is just another tool in our toolbox. It’s easy to get carried away, especially with all the hype surrounding it now, and use ML to solve any kind of problem, but sometimes it’s like using a bazooka to kill a mosquito. We approach our software products from various perspectives and use the best tools for the job.

What do we mean when we talk about machine learning? What is it, for someone who isn’t a researcher/developer?

The term “machine learning” describes a set of methods and principles engineers and scientists use to teach a computer to perform a specific task. An example would be the identification of the music genre of a given song. Let’s say we’d like to know if a song we’re currently listening to is an EDM song or not. The “traditional” approach would be to create a set of rules that say EDM songs are in this BPM range and have that tonal balance, etc. Then we’d have to implement specific algorithms that detect a song’s BPM value, a song’s tonal balance, etc. Then we’d have to analyze the results according to the rules we specified and decide if the song is EDM or not. You can see how this gets time-consuming and complicated, even for relatively simple tasks. The machine learning approach is to show the computer thousands of EDM songs and thousands of songs from other genres and train the computer to distinguish between EDM and other genres.
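The contrast Elias draws can be sketched in a few lines of code. Below, a hand-written rule sits next to a toy “trained” classifier – a nearest-centroid model over made-up BPM/brightness features. This is purely illustrative; a real genre classifier would use far richer features and models:

```python
import math

# "Traditional" approach: hard thresholds chosen by a human.
def is_edm_rule(bpm: float, brightness: float) -> bool:
    return 120 <= bpm <= 150 and brightness > 0.6

# Machine learning approach, reduced to its simplest form:
# average the feature vectors of each labelled class (the "training"),
# then classify new songs by the nearest class centroid.
def train_centroids(examples):
    """examples: list of ((bpm, brightness), label) pairs."""
    sums, counts = {}, {}
    for (bpm, brightness), label in examples:
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += bpm
        s[1] += brightness
        counts[label] = counts.get(label, 0) + 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def classify(centroids, bpm, brightness):
    return min(centroids,
               key=lambda label: math.dist(centroids[label], (bpm, brightness)))

# Tiny made-up "training set":
data = [((128, 0.8), "edm"), ((140, 0.7), "edm"),
        ((90, 0.3), "other"), ((70, 0.2), "other")]
model = train_centroids(data)
```

The point of the sketch: nobody wrote the decision boundary by hand – it fell out of the examples, which is what scales when the task gets harder than two features and two genres.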

Computers can get very good at this sort of very specific task. But they don’t learn like humans do. Humans also learn by example, but don’t need thousands of examples. Sometimes a few or just one example can be enough. This is because humans can truly learn, reason and abstract information and create knowledge that helps them perform the same task in the future and also get better. If a computer could do this, it would be truly intelligent, and it would make sense to talk about Artificial Intelligence (A.I.), but we’re still far away from that. Ed.: lest the use of that term seem disingenuous, machine learning is still seen as a subset of AI. -PK

If a reader would like to read more into the subject, a great blog post by NVIDIA and a slightly more technical blog post by F. Chollet will shed more light on what machine learning actually is.

We talked a little bit on background about the math behind this. But in terms of what the effect of doing that number crunching is, how would you describe how the machine hears? What is it actually analyzing, in terms of rhythm, timbre?

I don’t think machines “hear,” at least not now, and not as we might think. I understand the need we all have to explain what’s going on and find some reference that makes sense, but what actually goes behind the scenes is more mundane. For now, there’s no way for a machine to understand what it’s listening to, and hence start hearing in the sense a human does.

Inside Accusonus products, we have to choose what part of the audio file/data to “feed” the machine. We might send an audio track’s rhythm or pitch, along with instructions on what to look for in that data. The data we send are “representations” and are limited by our understanding of, for instance, rhythm or pitch. For example, Regroover analyses the energy of the audio loop across time and frequency. It then tries to identify patterns that are musically meaningful and extract them as individual layers.
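“Energy across time and frequency” here means something like a spectrogram. A bare-bones sketch of that representation, using only the standard library – a real implementation would use an FFT with windowing and overlap, so this naive frame-by-frame DFT is just to show the shape of the data Elias describes:

```python
import cmath
import math

def spectrogram(signal, frame_size=64):
    """Split a signal into frames and return per-frame magnitude spectra:
    a grid of energy over time (rows) and frequency (columns)."""
    grid = []
    for start in range(0, len(signal) - frame_size + 1, frame_size):
        frame = signal[start:start + frame_size]
        row = []
        for k in range(frame_size // 2):  # keep only positive frequencies
            acc = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_size)
                      for n in range(frame_size))
            row.append(abs(acc))
        grid.append(row)
    return grid

# A pure tone at frequency bin 4 shows up as one bright column:
tone = [math.sin(2 * math.pi * 4 * n / 64) for n in range(256)]
grid = spectrogram(tone)
```

A drum loop fed through this becomes a grid where kicks light up low-frequency columns and hats light up high ones – the raw material in which a tool like Regroover looks for musically meaningful patterns.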

Is all that analysis done in advance, or does it also learn as I use it?

Most of the time, the analysis is done in advance, or just when the audio files are loaded. But it is possible to have products that get better with time – i.e., “learn” as you use them. There are several technical challenges for our products to learn by using, including significant processing load and having to run inside old-school DAW and plug-in platforms that were primarily developed for more “traditional” applications. As plug-in creators, we are forced to constantly fight our way around obstacles, and this comes at a cost for the user.


What’s different about this versus another approach – what does this let me do that maybe I wasn’t able to do before?

Sampled loops and beats have been around for many years and people have many ways to edit, slice and repurpose them. Before Regroover, everything happened in one dimension: time. Now people can edit and reshape loops and beats in both time and frequency. They can also go beyond the traditional multi-band approach by using our tech to extract musical layers and original sounds. The possibilities for unique beat production and sound design are practically endless. A simple loop can be a starting point for many musical ideas.

How would you compare this to other tools on the market – those performing these kind of analyses or solving these problems? (How particular is what you’re doing?)

The most important thing to keep in mind when developing products that rely on advanced technologies and machine learning is what the user wants to achieve. We try to “hide” as much of the complexity as possible from the user and provide a familiar and intuitive user interface that allows them to focus on the music and not the science. Our single-knob noise and reverb removal plug-ins are very good examples of this. The number of parameters and options in the algorithms would be too confusing to expose to the end user, so we created a simple UI to deliver a quick result.

If you take something as simple as being able to re-pitch samples, each time there’s some new audio process, various uses and abuses follow. Is there a chance to make new kinds of sounds here? Do you expect people to also abuse this to come up with creative uses? (Or has that happened already?)

Users are always the best “hackers” of our products. They come up with really interesting applications that push the boundaries of what we originally had in mind. And that’s the beauty of developing products that expand the sound processing horizons for music. Regroover is the best example of this. Stavros Gasparatos has used Regroover in an installation where he split industrial recordings, routing the layers to six speakers inside a big venue. He tried to push the algorithm to create all kinds of crazy splits and extract inspiring layers. The effect was that in the middle of the room you could hear the whole sound, and when you approached one of the speakers, crazy things happened. We even had some users who extracted inspiring layers from washing machine recordings! I’m sure the CDM audience can think of even more uses and abuses!

Regroover gets used in Gasparatos’ expanded piano project:

Looking at the larger scene, do you think machine learning techniques and other analyses will expand what digital software can do in music? Does it mean we get away from just modeling analog components and things like that?

I believe machine learning can be the driving force for a much-needed paradigm shift in our industry. The computational resources available today not only on our desktop computers but also on the cloud are tremendous and machine learning is a great way to utilize them to expand what software can do in music and audio. Essentially, the only limit is our imagination. And if we keep being haunted by the analog sounds of the past, we can never imagine the sound of the future. We hope accusonus can play its part and change this.

Where do you fit into that larger scene? Obviously, your particular work here is proprietary – but then, what’s shared? Is there larger AI and machine learning knowledge (inside or outside music) that’s advancing? Do you see other music developers going this direction? (Well, starting with those you shared an AES panel on?)

I think we fit among the forward-thinking companies that try to bring this paradigm shift by actually solving problems and providing new ways of processing audio and creating music. Think of iZotope with their newest Neutron release, Adobe Audition’s Sound Remover, and Apple Logic’s Drummer. What we need to share between us (and we already do with some of those companies) is the vision of moving things forward, beyond the analog world, and our experiences on designing great products using machine learning (here’s our CEO’s keynote in a recent workshop for this).

Can you talk a little bit about your respective backgrounds in music – not just in software, but your experiences as a musician?

Elias: I started out as a drummer in my teens. I played with several bands during high school and as a student in the university. At the same time, I started getting into sound engineering, where my studies really helped. I ended up working a lot of gigs from small venues to stadiums, from cabling and PA setup to mixing the show and monitors. During this time I got interested in signal processing and acoustics and I focused my studies on these fields. Towards the end of university I spent a couple of years in a small recording studio, where I did some acoustic design for the control room, recording and mixing local bands. After graduating I started working on my PhD thesis on microphone bleed reduction and general audio enhancement. Funny enough, Alex was the one who built the first version of the studio, he was the supervisor of my undergraduate thesis, and we spent most of our PhDs working together in the same research group. It was almost meant to be that we would start Accusonus together!

Alex: I studied classical piano and music composition as a kid, and turned to synthesizers and electronic music later. As many students do, I formed a band with some friends, and that band happened to be one of the few abstract electronic/trip hop bands in Greece. We started making music around an old Atari computer, an early MIDI-only version of Cubase that triggered some cheap synthesizers and recorded our first demo in a crappy 4-channel tape recorder in a friend’s bedroom. Fun days!

We then bought a PC and more fancy equipment and started making our living from writing soundtracks for theater and dance shows. During that period I practically lived as a professional musician/producer and quit my studies. But after a couple of years, I realized that I was more and more fascinated by the technology side of music, so I returned to the university and focused on audio signal processing. After graduating from the Electrical and Computer Engineering Department, I studied acoustics in France and then started my PhD in de-reverberation and room acoustics at the same lab as Elias. We became friends, worked together as researchers for many years, and realized that we share the same vision of how we want to create innovative products to help everyone make great music! That’s why we founded Accusonus!

So much of software development is just modeling what analog circuits or acoustic instruments do. Is there a chance for software based on machine learning to sound different, to go in different directions?

Yes, I think machine learning can help us create new inspiring sounds and lead us to different directions. Google Magenta’s NSynth is a great example of this, I think. While still mostly a research prototype, it shows the new directions that can be opened by these new techniques.

Can you recommend some resources showing the larger picture with machine learning? Where might people find more on this larger topic?

https://openai.com/

Siraj Raval’s YouTube channel:

Google Magenta’s blog for audio/music applications https://magenta.tensorflow.org/blog/

Machine learning for artists https://ml4a.github.io/

Thanks, Accusonus! Readers, if you have more questions for the developers – or the machine learning field in general, in music industry developments and in art – do sound out. For more:

Regroover is the AI-powered loop unmixer, now with drag-and-drop clips

http://accusonus.com

The post Accusonus explain how they’re using AI to make tools for musicians appeared first on CDM Create Digital Music.

Regroover is the AI-powered loop unmixer, now with drag-and-drop clips

Delivered... Peter Kirn | Scene | Thu 16 Nov 2017 9:36 pm

You’ve sampled. You’ve sliced. You’ve warped. So what’s left to do with loops? Accusonus have turned to machine learning for a new answer.

Software for years has been able to apply rhythmic analysis (like looking for transients or guessing at tempo), and frequency analysis (filtering by band). The more recent development involves training algorithms with big data sets using machine learning. That’s commonly called “A.I.,” though of course artificial intelligence makes most of us scifi fans start to think killer robots and Agent Smith and the like – and this isn’t really anything to do with that. Behind the flashy names, what you’re really dealing with is some heavy-duty mathematics. The “machine learning” element means the software has been trained on pre-existing materials to give you results that are less brute-force, and more what you’d expect musically.

What is exciting about that is the results. With Regroover, what you get is a tool that analyzes audio into “layers” instead of just transients, slices, and bands. And now, it supports drag and drop into and out of the tool. So individual sounds and layers can now be dragged to your host, to an arrangement, or to a sampler – anything that also has drag-and-drop support.
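Accusonus haven't published the algorithm behind Regroover, but the classic technique for this kind of spectrogram “unmixing” is non-negative matrix factorization (NMF): approximate the time-frequency grid V as W·H, where the columns of W are spectral templates (candidate “layers”) and the rows of H say when each layer is active. A toy pure-Python sketch using multiplicative updates – illustrative only, not a claim about Regroover's actual method:

```python
import random

def nmf(V, rank, iters=400, seed=0):
    """Factor a non-negative matrix V (freq x time) into W (freq x rank)
    and H (rank x time) using multiplicative updates for Euclidean error."""
    rng = random.Random(seed)
    F, T = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(F)]
    H = [[rng.random() + 0.1 for _ in range(T)] for _ in range(rank)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(col) for col in zip(*A)]

    eps = 1e-9
    for _ in range(iters):
        WH = matmul(W, H)
        # H <- H * (W^T V) / (W^T W H): boosts activations that explain V
        WtV, WtWH = matmul(transpose(W), V), matmul(transpose(W), WH)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps)
              for j in range(T)] for i in range(rank)]
        WH = matmul(W, H)
        # W <- W * (V H^T) / (W H H^T): refines the spectral templates
        VHt, WHHt = matmul(V, transpose(H)), matmul(WH, transpose(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps)
              for j in range(rank)] for i in range(F)]
    return W, H
```

Each extracted “layer” is then the outer product of one column of W with the matching row of H – which maps neatly onto Regroover's option to re-run the algorithm with a different layer count (the `rank` here) when the first split isn't the one you wanted.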

Add Regroover to Ableton Live, for instance, and it’s a bit like having a new way to process sounds, on top of the warping techniques you’ve had for a few years. Instead of working with the whole stereo loop at once, you now are presented with various layers – which might separate out a melodic part, or even get as precise as specific pieces of percussion. It’s using time and frequency and that machine learning all at once.

Regroover joins a handful of tools providing this sort of “unmixing” capability, with a particular focus on percussive loops. If you didn’t get exactly the isolation you wanted, you can then adjust the density of the layers and run the algorithm again. Or for additional precision, you can select a portion and split the layers based on particular material.

Sometimes the “mistakes” are as interesting as the results you’re looking for: you get the chance to unearth portions of a loop you may not have even heard before.

Around this layers interface, the developers have wrapped various tools for mixing, processing, and slicing up the resulting materials. You’re given an interface that lets you then adjust the level and panning (both mid/side and left/right) of each layer, which lets you emphasize or de-emphasize parts of the loop. And you can route layers to effects, either in Regroover or by sending to external buses to your host.

You can just stop there, or you can take portions of a clip – individual layers, bits of time – and divide them up into pads. There’s a built in drum pad sampler, but now with version 1.7, you can also drag and drop out to your host. In Live or Maschine, to give two Berlin software examples, that means you can then use your favorite sampling tools to work with further.

This could mean everything from minor surgery on a clip to isolating individual parts of the groove or even individual percussion parts.

Sometimes, the simple tricks Regroover can pull are actually the most appealing. So while you could do some fancy sampling or kick drum replacement (takes one minute) or something like that, you can also just mess with polyrhythms inside a loop by dividing into layers, and changing length:

Production guru Thavius Beck has a great tutorial explaining the whole thing from a creative standpoint:

I’ve been playing with Regroover for a few weeks. It definitely takes a little getting into, because it is different – and you’re hearing different results than you would with other tools. Yes, there are other remixing and unmixing tools out there, too – and this isn’t quite that. It’s really geared for percussion and loops specifically, and the interface makes it a kind of AI-mad sampling drum machine loop re-processor.

The most important expectation to adjust is, this won’t sound quite like what you’ve heard before. Remember when you first played with warping in a tool like Live, ReCycle, or Acid? (Old timers, anyone?) It has that feeling.

There are some mathematical and perceptual realities of sound that you’re going to hit up against. You’re pulling out elements of a single audio file, which means because your ears are sensitive, you’ll start to hear the sound as less natural as you process it. The quality of the source material will matter – to the point that Accusonus are even producing their own libraries. On the other hand, that opens up some new possibilities. For one, some of the digital-sounding timbres that result have aesthetic potential all their own.

Or, you can look at this as a way not to just extract sound itself, but groove – because the results are very precise about rhythmic elements inside a loop.

CDM are teaming up with Accusonus to demonstrate how this works and give you some tips, so we’ll check in again with that.

As I see it, you get a few major use cases.

People who want to mess with loop libraries. If you’ve got loops that are stereo files, this lets you modify them in ways subtle or radical and make them your own – a bit more like what you can do with MIDI patterns.

A remix tool. Well, obviously. This gets really interesting, though, from a number of angles. There are some new options when someone says “oops, sorry, I have the stereo mix and no stems.” There are new ways of treating the stems you have. And there are new ways of treating additional materials outside the mix. (All of this holds whether it’s your music or someone else’s.)

A way to process your own materials. I’m fond of quoting something I overheard about French cooking once – that the kitchen was all about doing something to an ingredient, then doing something else. So if you’re in the middle of a project and want to take some of the material a different direction, this is a new way of doing that. And I think in electronic music, where we’re constantly getting away from the obvious solution, that’s compelling.

A groove extraction tool. Frankly, this works a whole lot better than the groove tools in conventional DAWs, because you can pull out elements of a loop, then use that either as a trigger or work with the audio directly.

An “alternative” sampling drum machine. Since you can pull out individual bits, you can make new drum kits out of sounds. And that includes —

Creative abuse. Regroover is really designed for drum loops – both in the interface and the way in which the machine learning algorithms were trained and adapted. But that doesn’t mean you have to follow the rules. Dropping any AIFF or WAV file will work, so you can take field recordings or whatever you can get your hands on and see what happens. There are some strange perceptions you may have of the results, but that’s the fun.
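On the groove-extraction use case above: pulling timing out of a layer usually reduces to onset detection – find where the signal's energy jumps from frame to frame, and keep the times. A crude energy-flux sketch, with illustrative frame size and threshold values:

```python
def onset_times(signal, frame_size=32, sample_rate=8000, threshold=2.0):
    """Return times (in seconds) where frame energy jumps by `threshold`x
    over the previous frame - a crude transient detector."""
    energies = []
    for start in range(0, len(signal) - frame_size + 1, frame_size):
        frame = signal[start:start + frame_size]
        energies.append(sum(x * x for x in frame))

    onsets = []
    prev = 1e-9
    for i, e in enumerate(energies):
        # fire when energy rises sharply and isn't just noise-floor jitter
        if e > threshold * prev and e > 1e-6:
            onsets.append(i * frame_size / sample_rate)
        prev = max(e, 1e-9)
    return onsets

# Silence, then a burst: exactly one onset, at the burst.
sig = [0.0] * 64 + [1.0] * 32 + [0.0] * 64
```

Run that on an extracted hi-hat layer and the resulting time list is the groove, ready to quantize against or to use as a trigger pattern.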

Next week, we’ll have a tutorial and a special giveaway so you can give this a try.

Regroover is available as a free trial, a US$99 Essentials version, or a $219 Pro version.

Here’s what’s new in 1.7:

A complete set of tutorials is available:

Product site:

Accusonus Regroover

The post Regroover is the AI-powered loop unmixer, now with drag-and-drop clips appeared first on CDM Create Digital Music.
