

iPad Eurorack: An unofficial port is bringing VCV Rack to iOS

Peter Kirn | Scene | Mon 16 Sep 2019 3:10 pm

Get ready for some tablet patching. A developer has revealed a port of popular open source modular environment VCV Rack to the iPad.

Synth Anatomy gets the scoop on this one. New Zealand-based developer Vitaly Pronkin has been working on a project that promises to put the free rack synthesizer platform on the iOS app store soon.

The most encouraging thing here is probably seeing an easy interface for adding modules from VCV and third parties. That would open up an additional platform for developers’ modules.

Don’t get too excited too fast – this is best seen as a proof of concept, especially since it forks an earlier version (0.x rather than 1.0). But it could be a good indication of performance on Apple’s tablets, and might well be the basis for a more polished, finished project.

VCV Rack 1.0 is licensed under the GPLv3, which generally is not allowed on Apple’s App Store. (There are some loopholes, as we discovered when licensing the iOS port of Pure Data, libpd – but that has to do with the fact that Pd itself is under a more permissive license, and patches, for instance, are not compiled.)

Another way to go if this is what you want – try running Rack on a Surface or similar Windows tablet. That also allows greater compatibility with your usual audio tools than you get from iOS, and without Apple’s App Store restrictions.

I’m still happy with Rack on a PC, where it can take advantage of some unique performance enhancements, while externalizing control to other hardware. (Playing live, I don’t really want to be re-patching at all, but that’s me…)

Check out the full blog post – there is also an interesting note on an abortive port to the Web and JavaScript and some embedded hardware:

miRack is coming to iOS

The other ports: https://github.com/mi-rack/Rack

The post iPad Eurorack: An unofficial port is bringing VCV Rack to iOS appeared first on CDM Create Digital Music.

KORG are making Pokémon metronomes and tuners

Peter Kirn | Scene | Sun 8 Sep 2019 11:30 pm

If there was any doubt that KORG wants to be the Nintendo of music brands, here’s yet another partnership with the iconic game maker – but it’s sadly only skin deep.

Yes, it’s true, you get insanely cute Pokémon metronomes and clip-on pitch tuners. But there’s a missed opportunity here – whereas Teenage Engineering recently made full-on Rick & Morty Pocket Operators, KORG are only changing the paint job on their hardware.

The mind reels at the possibilities. You could have a Tamagotchi-style creature on your metronome. Or you could use Pokémon Go-style real-world capture to find synths for KORG Gadget. (Hang around Kottbusser Tor, Berlin to snag a rare Eurorackosaur; get a Prophetee 5 in Berkeley, California.)

Okay, I guess this may not help you with violin practice. (Maybe some gamification element to music learning?)

The point is, KORG continue to play on their relationship with gaming. So even if it’s just a cute tuner or metronome for kids, I think they’ve been very clever continuing to associate fun with their music tech. And fun is supposed to be part of the point, right?

The tuners (Pitchclip 2)

The metronome (MA-2-PK/EV)


Arturia’s KeyStep just got way more useful

Peter Kirn | Scene | Wed 14 Aug 2019 3:03 pm

Arturia’s KeyStep was already appealing – a mobile MIDI keyboard with sequencer and arpeggiator. But the 1.1 update improves some details and adds major new musicality.

Let’s look at this in detail – though the sequence length and arp octaves alone already have me sold.

A ton of power is now available on the fly, as you play.

Three new features are now available from the KeyStep’s physical controls, as you play:

Sequence length. Hold Record, and press one of the MIDI Channel keys, and you set the length of the sequence on the fly. This actually works from 1 – 64 steps, just by pressing a few keys in sequence.

Quantized tempo adjustment: Now you can hold shift and turn the tempo knob to move by increments of 1 bpm. That lets you round off bpms from the tap tempo or quickly dial in a bpm without winding up with something weird. (127.62, anyone?)
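The arithmetic behind that quantization is simple, but a sketch makes it concrete. This is hypothetical logic, not Arturia’s firmware, and the 30–240 bpm range is my assumption, not from their spec:

```python
# Sketch of tempo quantization: a tapped tempo like 127.62 bpm snaps to
# the nearest whole bpm, and shift+knob then nudges in 1 bpm steps.
# The 30-240 bpm range here is an assumption, not Arturia's spec.

BPM_MIN, BPM_MAX = 30, 240

def quantize_bpm(tapped_bpm):
    """Round a free-running tap tempo to the nearest whole bpm, clamped."""
    return int(min(BPM_MAX, max(BPM_MIN, round(tapped_bpm))))

def nudge_bpm(bpm, clicks):
    """Shift + tempo knob: move in 1 bpm increments."""
    return min(BPM_MAX, max(BPM_MIN, bpm + clicks))

print(quantize_bpm(127.62))   # -> 128
print(nudge_bpm(128, -8))     # -> 120
```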

Arp Octaves: With the arpeggiator running, you can now shift notes you’re playing up or down the octave. (The Arturia site is a little unclear on this – it sounds like they mean just shifting the arpeggiator up and down by octave. It’s actually cooler than this.) So hold Shift+Octave + or -, and whichever notes you’re playing will be arpeggiated up or down by octave. Hit the +/- key multiple times for multiple octaves. I can’t think of anything that works quite like this; it’s really cool and performative, because it’s all on the fly.
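To make the behavior concrete, here’s a minimal sketch of the idea – not Arturia’s implementation, just the logic as described: whatever notes you hold get arpeggiated, and each octave press transposes the whole pass by a further 12 MIDI note numbers.

```python
# Minimal 'up' arpeggiator sketch (not Arturia's code): held notes are
# cycled in order, and each Shift+Octave press shifts the whole pass
# by a further octave (12 MIDI note numbers).

def arp_cycle(held_notes, octave_shift=0, direction="up"):
    """One pass of the arpeggio over the held notes, octave-shifted."""
    notes = sorted(held_notes)
    if direction == "down":
        notes = notes[::-1]
    return [n + 12 * octave_shift for n in notes]

held = [60, 64, 67]          # a C major triad held on the keys
print(arp_cycle(held))       # -> [60, 64, 67]
print(arp_cycle(held, 1))    # one Shift+Oct+ press: [72, 76, 79]
print(arp_cycle(held, -2))   # two Shift+Oct- presses: [36, 40, 43]
```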

You’ll need the editor to access some new features.

Three modes are available in the updated MIDI Control Center software editor (so not onboard, but something you set in advance):

“Armed” clock. This gives you the option of using external sync, and passing it along, but controlling the KeyStep’s sequencer with the play button. There’s now a new parameter for switching on or off Arm to Start, which determines how the KeyStep responds to external clock.

Off is the original mode – the KeyStep will just run or pause or stop with your external clock signal. But switch this to on, and the KeyStep lets you start and stop the sequencer as you see fit. You still pass the sync on to other gear. So for example, you could keep your drum machine running with the master clock, but turn on and off the sequencer on the keyboard, stop and jam for a second live, or whatever.
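As a sketch of that logic (a hypothetical class, not Arturia’s firmware): incoming clock is always forwarded downstream, but whether the sequencer itself advances depends on the mode and the play button.

```python
# Sketch of the 'Arm to Start' behavior: external clock always passes
# through to other gear, but when armed, the sequencer only advances
# once the play button has been pressed. Hypothetical names throughout.

class ArmedClock:
    def __init__(self, arm_to_start):
        self.arm_to_start = arm_to_start
        self.playing = False   # play-button state, used only when armed
        self.step = 0

    def press_play(self):
        self.playing = not self.playing

    def on_clock_tick(self):
        """Handle one external clock tick; sync always passes through."""
        advance = self.playing if self.arm_to_start else True
        if advance:
            self.step += 1
        return advance

seq = ArmedClock(arm_to_start=True)
seq.on_clock_tick()   # clock forwarded, but sequencer stays put
seq.press_play()      # now start the sequencer independently
seq.on_clock_tick()
print(seq.step)       # -> 1
```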

Pattern and Brownian Randomness. You can set randomness to Brownian Motion (“drunken walk”) or “Pattern,” which creates randomized but repeating patterns. Pattern Mode is borrowed from Arturia’s MicroFreak synth.
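If you’re curious how those two flavors of randomness differ, here’s a quick illustrative sketch – a drunken walk versus a seeded, repeating sequence. The parameters are my own toy choices, not Arturia’s algorithm:

```python
import random

# Two flavors of randomness: 'Brownian' wanders a little from note to
# note, while 'Pattern' fixes a seed so the same 'random' sequence
# repeats identically every loop. Toy parameters, not Arturia's.

def brownian(start=60, steps=8, max_step=2, seed=None):
    """Drunken walk: each note drifts a little from the previous one."""
    rng = random.Random(seed)
    out, note = [], start
    for _ in range(steps):
        note += rng.randint(-max_step, max_step)
        out.append(note)
    return out

def pattern(length=8, low=48, high=72, seed=42):
    """Randomized but repeating: the fixed seed freezes the pattern."""
    rng = random.Random(seed)
    return [rng.randint(low, high) for _ in range(length)]

print(brownian())               # wanders near 60, different each run
print(pattern() == pattern())   # -> True: identical on every repeat
```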

Change LED brightness. Finally. No more blindness.

I still would love to see a KeyStep Pro, akin to the way the BeatStep Pro built on the original BeatStep. It’d be terrific to have a keyboard with some knobs for parameter controls. Having to use tiny DIP switches to set sync modes is a pain. And obviously there will be limits to how much Arturia can do with key combos (which already mean a little time spent cracking the manual), or software editor options. It’s not hard to imagine something that expanded this with extra features.

But for now, the KeyStep stays nice and compact – and you could always add a little box with some faders or knobs, since it is so small. Plus, even with some of its rivals, Arturia has a serious edge:

  • The keys feel great.
  • There’s MIDI DIN support for external gear.
  • There’s a standalone option (including a dedicated power plug).
  • It works with USB when you need it – no drivers required. (Hello, Linux/Raspi, etc., in addition to mobile, of course)
  • Its power consumption is low enough to work with iPad, etc., without additional power.
  • It’s stupidly affordable.

I think that with the additional performance options, this is the one to beat.

https://www.arturia.com/products/keystep/details


Make music with mobile, MeeBlip, and one connection – here’s how (iOS, Android)

Peter Kirn | Scene | Thu 25 Jul 2019 7:40 pm

It’s liberating – just take your phone or tablet, plug in a USB cable, and you can make music on this hardware synth anywhere. Here’s how to do that, with our MeeBlip geode, plus some tips on the best apps for both iOS and Android.

Inspiration is a funny thing, and somehow in the process of hunting around for interfaces and power sockets, you can wind up staring at a tangle of cables and no idea of what it was you were trying to do. So, I’m already finding it surprisingly empowering to be able to use the new USB port on the MeeBlip geode for both power and MIDI (sequencing notes and control). Every smartphone I’ve tested, plus the iPad, will gladly power the geode from the same connection.

Why not just use an app? Well, with the geode plugged in, you get some nice feeling knobs and switches, plus that grimy, dirty MeeBlip sound – and its screaming analog filter. To look at it the other way, all you need for different interfaces for playing this module, from step sequencers to touch keyboards, is your handy mobile gadget.

That also led me on a search for the best apps that support MIDI out. Not all do, Apple’s own GarageBand for iOS being notably incapable of the feat (unlike its Mac sibling). I also spoke with Ashley Elsdon, our resident mobile geek, for additional tips. So these apps will be working with lots of my other MIDI gear, too. And while I thought the Huawei Android handheld that I just got to replace my iPhone would leave me disappointed as far as music apps, I was glad to find some excellent Android-platform stuff, too. (For once, we don’t have to leave y’all out.)

First, here are a couple of jams on iOS, audio straight from the out jack of the MeeBlip. And these two I think count as my two favorite live performance tools for iOS (so far):

Mobile MeeBlip in action!

StepPolyArp may have been one of the first music apps I got for the iPad, actually. It’s an intuitive, deep combination of a piano roll editor for graphically drawing patterns, an arpeggiator, and a step sequencer. It syncs to Ableton Link, though I’ve also used plain MIDI clock. And yes, you can get grimy sounds out of geode, in case you didn’t know that.
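A quick aside on plain MIDI clock, since it comes up here: the protocol runs at a fixed 24 pulses per quarter note, so an app derives tempo from the spacing of the ticks. The relationship is a one-liner:

```python
# MIDI beat clock runs at 24 pulses per quarter note (ppqn); a receiving
# app recovers tempo from the interval between tick messages.

def midi_clock_interval(bpm):
    """Seconds between successive MIDI clock ticks at a given tempo."""
    return 60.0 / bpm / 24.0

def bpm_from_interval(seconds):
    """The inverse: tempo implied by a measured tick spacing."""
    return 60.0 / (seconds * 24.0)

print(midi_clock_interval(120.0))   # 120 bpm -> ~0.0208 s per tick
```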

https://dev.laurentcolson.com/steppolyarp.html

Arpeggionome Pro has a unique grid (influenced by the likes of the Tenori-On), and runs on both iPhone and iPad – it’s great handheld. Because of its particular approach to harmony and rhythm, it can lead you to some patterns you’d never play on a normal arpeggiator, let alone on a keyboard (unless you’re seriously some kind of pinball wizard). And yes, it also boasts Ableton Link support, so you can wirelessly sync up to another app or computer running lots of different software (not just Ableton Live).

Arpeggionome itself is iOS-only, though ARPIO, an Android port from the original developer, is also available – it just lacks MIDI support. Please, please!

More app ideas

On Android, there’s a powerful MIDI sequencer/arpeggiator toolkit that lets you build your own patterns:

https://play.google.com/store/apps/details?id=midi.midi.midi.looper.free&hl=en_US

Wildly enough, you can even use the Virtual ANS, a reimagining of a vintage Soviet synth, with MIDI output. The developer tells me he’s working on bringing that same MIDI output to his excellent tracker/production tool SunVox, where it makes more sense:

https://play.google.com/store/apps/details?id=nightradio.virtualans3

Various production tools on Android also do MIDI output, though perhaps the easiest to use would be Touch DAW, which simply acts as a general-purpose MIDI controller for everything – including a keyboard.

iOS is as usual richer with options. Ashley / Palm Sounds recommends considering MIDI plug-ins, too.

apeMatrix as host + AUv3 MIDI plug-ins

Rozeta sequencer suite from our friend Ruismaker (or if you want to get really fancy, try scripting your own MIDI with Mozaic)

And there’s Fugue Machine, also from Alexandernaut who built Arpeggionome above, which could be wild. I might have to try that with multiple MeeBlips, uh, fuguing. Stay tuned.

Or think of Modstep, a powerful sequencer with scene triggering

What do you need for the connection?

On many new Android devices, you can actually plug a cable directly between your phone (USB-C) and the MeeBlip (USB-B). Otherwise, you’ll need a USB OTG adapter. These run about ten bucks (ah, this obviously isn’t from Apple).

On iOS with only Lightning connections, you need an adapter. The best of these is Apple’s Lightning to USB3 Camera Connection Kit. It’s pricey, but it gives you both a USB-A and a separate Lightning breakout, so you can power your iPad or iPhone and connect USB at the same time, rather than drain the battery. It’s reliable enough to use live onstage, and it’s what you’ll see me using in these images.

Of course, on a computer with a standard USB connection, you don’t need any special adapters.

Regardless, you’re sure to be able to quickly connect your MeeBlip in the studio or at home, and you can even mess around with ideas on the go or busk at the park or picnic.

MeeBlip geode is shipping now. Grab one if you don’t have it already for US$149.95, direct from us.

https://meeblip.com/


Music on the go – Auxy app now has tweakable sounds, Ableton export

Peter Kirn | Scene | Thu 18 Jul 2019 2:24 pm

For all the app choices in music, a lot feel like plug-ins crammed onto the mobile screen. Auxy may have the essential combination of ingredients – a simple, quick UI, but now the ability to make sketches you finish in Ableton Live, and sounds you can more easily tweak.

Auxy always had an elegant, approachable UI. The tool basically strips the essential function of the familiar piano roll-style view so you can quickly sketch ideas with your fingertips.

But just being simple isn’t quite enough. Mobile apps all face the common problem of having to satisfy two very different use cases or workflows. Some people want to focus on music making right on the phone or tablet, stay away from their computers (or other gear), and yet make finished tracks. Others want the app to be a rough sketchpad for ideas they can use on the go, then finish in the more comfortable environs of their computer rig or studio. The problem is, of course, those come with different demands.

Swedish app Auxy has had two updates that address some of these cases.

First, Auxy 5.4 in April added direct export to Ableton Live projects. Cleverly, this exports both audio and MIDI, so you retain your sound designs from the app as stems, but can also use patterns to work with new sounds inside Live.

Auxy 5.4 also represents a new high water mark for Ableton’s SDK. Auxy encouraged Ableton to add features for populating the Arrangement, so that song ideas and arranging choices you make on the go are reflected when you open up your project in Live. These features will be available to other developers, too, so if you’re a dev, you can get in touch with Ableton. (And that’s important, too – the better this support works in different apps, the more useful mobile-to-Live workflows become.)

5.4 also added improved import/export for samples, imported samples that now travel along when you share projects, and updated Ableton Link support.

Auxy 6 is a major update just released this month, focusing on giving you more control over sounds and effects. And that addresses the other thing that might have kept you from adopting Auxy in the past – the simplicity is great, but you might feel constrained by the available sounds.

Auxy launched as a kind of preset machine. That makes things simpler, but might be uninspiring if you feel like you can’t shape your own sounds. That changes with some significant features:

The new tweak panel. Hmm, Build Up Stress? Been there.

More effects for instrumental sounds: distortion, delay, reverb, chorus, filter, ducker, and EQ sounds everywhere – customizable, not locked to presets.

More effects for drums, too: delay, distortion, compressor, filter, EQ, and ducker are now available on drums.

Shape sound envelopes: attack, release, glide, offset. (works on drums, too)

Free grid mode: move notes and automation freely as you edit.

Browse sounds by category.
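Of those, the envelope shaping is the easiest to picture in code. Here’s a generic linear attack/release sketch – the standard idea, not Auxy’s actual engine:

```python
# Generic linear attack/release envelope (not Auxy's engine): amplitude
# ramps up over 'attack', holds at full level for the note, then ramps
# down over 'release'. All times are in seconds.

def envelope(t, attack, release, note_len):
    """Amplitude 0..1 at time t for a note lasting note_len seconds."""
    if t < attack:
        return t / attack                      # ramping up
    if t < note_len:
        return 1.0                             # sustaining
    if t < note_len + release:
        return 1.0 - (t - note_len) / release  # ramping down
    return 0.0                                 # silent

print(envelope(0.05, attack=0.1, release=0.2, note_len=0.5))  # -> 0.5
```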

This isn’t going to sound so revolutionary, but of course that is always the challenge when trying to keep things simple – there’s a lot to think about adding even simple features.

All in all, Auxy has really evolved into one of the easiest, most elegant sketchpads for music on mobile. There are many things it isn’t – it’s not really about live playing, it’s not a full-featured DAW (and doesn’t try to be), it’s not really an audio multitrack. But what it is, it really focuses on. And with Live export, that could prove invaluable.

Auxy regularly select favorite user tracks, which is a nice way to get a feel for what people are doing. Here are the Staff Picks for last month:

Plus one creation made in this latest release:

Check out Auxy for iOS (no Android version, sorry):

https://auxy.co/


SEGA, Taito arcade come to KORG Gadget on Nintendo Switch

Peter Kirn | Scene | Thu 4 Jul 2019 6:46 pm

Here’s one serious Japanese game + music nerdgasm: legendary arcade maker Taito and game giant SEGA come together on the KORG platform on the Nintendo platform.

KORG Gadget on the Nintendo Switch was always at least an intriguing novelty. As with titles for Nintendo DS and Game Boy before it, bringing a music creation tool to a game platform means the ability to swap between gaming and music making for maximum fun. The Switch doesn’t have a unique onboard hardware synth like the Commodore 64 or vintage Nintendo machines. But it does also have the twist of connecting to a TV.

That’s cool, but frankly, it’s also not quite enough. Handheld gaming for musicians caught on partly because of a unique sound, and it happened before platforms like iPhone, iPad, and Android were available. If you have a choice between using Gadget on a Switch or in its original version on the iPad, well, it’s no contest – the iPad is more capable.

That’s what makes this a development. Now you get something that seems tailored to a game platform, from two titans of the arcade era.

Otorii is a sample-based instrument and rhythm generator, based on 80s SEGA arcade titles.

Titles: Out Run, After Burner

Ebina is a synthesizer built on FM sounds (apparently not doing FM itself, but capturing some signature FM sound samples), also with 80s colors in mind.

Titles: Darius, The Ninja Warriors

Kamata is a sound engine (already part of the Switch title) developed with Bandai Namco.

SEGA and Bandai Namco presumably need no introduction to anyone interested enough in gaming to even read this far. If Taito is familiar and you don’t know why, that’s because its name has graced the likes of Space Invaders, Bubble Bobble, Arkanoid, Battle Gear, and Kick Master. Sometimes Americans saw these titles with other distributors onboard, and Taito hasn’t been independent since the mid-90s, but you’ve likely also encountered the development house as part of its new life as part of Square Enix.

In short – this is Japan at its best, making us fall in love with something fun in childhood and then staying with us through our adult lives. Whether or not you’re particularly bound to Taito in the arcade, that’s something other Japanese music tech makers might learn from. (Partnership is key to the success of KORG here – they work with experienced mobile and game developer and Japanese neighbor DETUNE for these titles.) Roland, Yamaha, and Casio continue to have a rocky relationship with their own legacy (with some promising recent signs). But if the games industry has fended off clones and rivals, surely music tech could do the same – with plenty of back catalog to mine.

In any event, I know plenty of electronic musicians who are just as addicted to gaming – men and women, young and old, and plenty who even work inside the gaming industry. There’s nothing to do but smile when you see it come together. Game on.

http://www.detune.co.jp/

http://gadget.korg.com/nintendo_switch/


SubLab is an 808 bass synth and more, from makers of Circle

Peter Kirn | Scene | Tue 11 Jun 2019 4:36 pm

Hard-hitting sub bass and percussion is the focus of SubLab, a new instrument from Future Audio Workshop. And it puts a ton of sound elements into an uncommonly friendly interface. Let’s get our hands on it.

This begins our Tools of Summer series of selections – stuff you’ll want to use when the days are long (erm, northern hemisphere) and you need some new inspiration from instruments to actually use.

We hadn’t heard much lately from Future Audio Workshop. Their ground-breaking Circle instrument was uniquely friendly, clean, and easy to use. At a time when nearly all virtual instruments had virtually unreadable, tiny UIs, Circle broke from the norm with displays you could see easily. Beginners could track signal flow and modulation, and experts (erm, many of them, you know, older and with aging eyes) could be more productive and focused.

SubLab takes that same approach – so much so that a couple of quick shots I posted to Instagram got immediate feedback.

And then it’s just chock full of bass – with a whole lot of potential applications.

Sound layers, plus filter, plus distortion, plus compressor – deceptively simple and powerful.

So, sure, FAW talk trap and hip-hop and future bass and sub basslines – you’ll get those, for sure. But I think you’ll start using SubLab all over the place.

If you just want a recipe for 808 bass, this instrument is there for you. You can layer and filter and overdrive and distort sounds into basslines made from punchy drum bits. Then you discover that this produces interesting melodic lines, too. Or that while you have all the elements of various kick drums not only from Roland but sampled from a studio full of drum machines (Vermona to JoMoX), you … might as well make some punchy kicks and toms.

It’s just too fast. And that’s not because the interface is particularly dumbed down – on the contrary, it’s because once all the chrome and tiny controls are out of the way and the designers focused on what this does, you can get at a lot of options more quickly.

The synth has an easy-to-follow structure – sound, distortion, compressor. Sound is divided into a simple multi-oscillator synth, a sample playback engine, and then the trademarked ‘x-sub’ sub-oscillator. You can then mix these separately, and route a percentage of the synth and sampler to a multi-mode filter. (Don’t miss the essential ‘glide’ control lurking just at the bottom, as I did at first.) Pulling it all together, you get a ‘master’ overview that shows you how each element layers in the resulting sound spectrum.

Also in the sound > synth section, you can easily access multiple envelopes with visual feedback. (Arturia, who I’m also writing about this week, have also gone this route, and it makes a big difference being able to see as well as hear.)

The sampler has essential tracking, pitching, and looping features for this application. The x-sub bit is uniquely controllable – you can set individual harmonic levels just by dragging around purple vertical bars. It’s rare to sculpt sub-bass like this so easily, and it’s addictive.

X-sub (trademarked?) means you can sculpt the harmonics inside the sub-oscillator section just by dragging.
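For the curious, the underlying idea is plain additive synthesis. Here’s a toy sketch – not FAW’s code – where each purple bar is just the gain of one sine partial over the fundamental:

```python
import math

# Sketch of the x-sub idea (not FAW's code): a sub-oscillator whose
# individual harmonic levels - the purple bars - are set directly,
# then summed as sine partials over the fundamental.

def xsub_sample(t, fundamental_hz, harmonic_levels):
    """One output sample: sum of sine harmonics with per-harmonic gain."""
    return sum(
        level * math.sin(2 * math.pi * fundamental_hz * (i + 1) * t)
        for i, level in enumerate(harmonic_levels)
    )

# Dragging a bar = editing this list: strong fundamental, a touch of the
# 2nd and 3rd harmonics so the bass still reads on small speakers.
levels = [1.0, 0.4, 0.2]
samples = [xsub_sample(n / 44100, 55.0, levels) for n in range(64)]
print(max(samples))
```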

The interface is easy enough, but a couple of characteristic additions really complete the package. The sampler section is full of inspiring hardware samples to use as building blocks – great stuff that you might use for your non-melodic kicks, or try out for punchy percussion and melodies even in higher registers. The Distortion also has some compelling modes, like the lovely “darkdrive” and convincing tube and overdrive options.

Tons of hardware samples abound for layering.

There aren’t a lot of presets – it looks like FAW’s plan is to get you hooked, then add more patch packs. But with enough sound design options here, including custom sample loading, you might be fine just making your own.

Really, my only complaint is that I find the filter and compressor a bit vanilla, particularly in this age of so many beautiful modeled options from Native Instruments, Arturia, u-he, and others.

I figured I would be writing this glowing review and telling you, oh yeah, it’s definitely worth $149.

But — damn, this thing is $70, on sale for $40.

Sheesh. Just get it, then. There are lots of deeper and more complex things out there. But this is something else – simple enough that you’ll actually use it to design your own creative sounds. As FAW has shown us before, visual feedback and accessible interfaces combine to make sound design connect with your brain more effectively.

https://futureaudioworkshop.com/sublab

Here’s me messing around with it to prove it can do things other than what it was intended for:

And more hands-on videos from the creators:


Jam like you’re in a Tarkovsky film with this major app update

Peter Kirn | Scene | Wed 5 Jun 2019 11:33 pm

Virtual ANS from prolific omni-platform developer Alexander Zolotov brings back spectral synthesis like it’s the mid-century USSR. But it also future-proofs that tech – full Android and iOS (plus desktop) support, and now a version that’s polyphonic and MIDI playable.

Alexander Zolotov can single-handedly make a mobile device useful. On my new Android phone, it was his stuff I grabbed first – and, well, last. Once you’ve got a tracker like SunVox that runs anywhere, what more do you need?

And for anyone bored with the world of knobs and subtractive synthesis (yawn), enter the eerily beautiful alien sound world of the ANS – an alternate timeline of synth history in which sound is painted as well as made electrical. The creation of Russian engineer Evgeny Murzin, the ANS used a unique analog-optical hybrid approach. Borrowing from the graphic scores used in early film audio, waveforms were optically produced. It’s What You See Is What You Get For Sound – the spectrogram is the interface as well as a representation of what you hear. This technique is what creates the gorgeous, otherworldly timbres of Tarkovsky’s Solaris – and now it can be on your phone.

The original ANS – its name drawn from the initials of Alexander Nikolayevich Scriabin, the synesthesia-experiencing esoteric composer – used a series of optical discs. It’s easier to do this in software, of course. Everything works in real time, you can have as many pure tone generators as you like (since you won’t just run out of optical-mechanical wheels), and you can convert to and from digital files of both images and sounds.
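The principle is easy to sketch in software: treat an image as a bank of tone generators – rows for frequency, columns for time, brightness for amplitude. This toy version assumes exponentially spaced frequencies, which is my simplification, not the real ANS tuning:

```python
import math

# Toy ANS-style renderer: rows of an image are pure tone generators,
# columns are time, and pixel brightness is that tone's amplitude.
# The exponential frequency spacing below is an assumption for the sketch.

def row_freq(row, n_rows, lo=55.0, hi=880.0):
    """Exponentially spaced frequency for a given image row."""
    return lo * (hi / lo) ** (row / (n_rows - 1))

def render_column(column, n_rows, t):
    """Mix every active tone generator for one spectrogram column at time t."""
    return sum(
        bright * math.sin(2 * math.pi * row_freq(r, n_rows) * t)
        for r, bright in enumerate(column)
        if bright > 0
    )

# A 4-row 'painted' column: only the lowest and highest tones are drawn in.
col = [1.0, 0.0, 0.0, 0.5]
print(render_column(col, 4, t=0.001))
```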

Sound from pictures, pictures from sounds.

Now with MIDI support on both Android and iOS (not to mention desktop OSes).

ANS 3.0 is a major update that moves the whole affair from fascinating proof of concept to a full-featured instrument. You can now map polyphony, and you can play your creations via MIDI – including via external MIDI controllers.

Adding MIDI controllers actually makes for a wild instrument:

Oh, and remember how I just said that AUv3 was the way forward on iOS? Well, Sasha is of course supporting AUv3 – as he’s supported Audiobus, IAA, JACK, ALSA, OSS, MME, DirectSound, and ASIO in the past. (That long list of formats comes from supporting Mac, Windows, Linux, Android, and iOS all at once.)

And there’s more. On iOS, you get high-res support and MIDI. Android 6+ has MIDI support. Linux gets multitouch support. Files are accessible in the file system of both iOS and Android – including all those project, image, and sound files. There are more audio export options, new brushes, new lighten and darkening layering modes like you’d expect in Photoshop, and lots of shortcuts. Check the full changelog:

http://warmplace.ru/soft/ans/changelog.txt

Of course, because it runs on every platform (well, every modern platform), you can sketch an idea on your Android phone, move to iPad and work some more, then load it onto your PC and drop it into a DAW.

Frankly, I think it’s more exciting than anything from Apple this week, but I am impossibly biased toward this esoterica so … that goes without saying.

Enjoy:

http://warmplace.ru/soft/ans/


The future of inter-app sound on iOS: a chat with Audiobus’ creator

Peter Kirn | Scene | Wed 5 Jun 2019 11:25 am

Many iOS music makers want to route audio between apps – just as you would in a studio. But news came this week that Apple would drop support for its own IAA (Inter App Audio), used by apps like KORG Gadget, Animoog, and Reason Compact. What will that mean? I spoke with Audiobus’ creator to find out.

Michael Tyson created popular music apps Audiobus and Loopy. And he’s made frameworks for other developers, too, not only supporting countless developers working with Audiobus, but also creating the framework The Amazing Audio Engine, now part of Audiokit. So he’s familiar with both what users and developers want here.

Audiobus is key. At first, iOS music apps were each an island. Audiobus changed all that, by suggesting users might want to combine apps the way they do on a stompbox pedalboard or by wiring gear together in a studio. Take an interesting synth, add a delay that sounds nice with it, patch that into a recording app – you get the idea. That expectation was also familiar from plug-in formats on desktop and inter-app tools like the open source JACK and Soundflower. And Tyson’s team developed this before Apple followed with their own IAA or the plug-in format AUv3.

So now, having pushed their own format, Apple is abandoning it. iOS and the new iPadOS will deprecate IAA, according to the iOS 13 beta release notes.

This won’t mean you lose access to your IAA apps right away. “Deprecated” in Apple speak generally means that something remains available in this OS release but will disappear in some major release that follows. Apple often deprecates tech quickly – as in one major release later (iOS 14?) – but that’s anyone’s guess, and can take longer.

That is still a worry for many users, as many iOS developers do abandon apps without updates. It’s tough enough to make money on an initial release, tougher still to squeeze any money out of upgrades – and iOS developers are often as small as one-person operations. Sometimes they just go get another job. That may mean that, for backwards compatibility, it even makes sense to hold on to one old iPad and keep it from updating – not only because of this development, but to retain consistent support for a selection of instruments and effects.

But if you’re worried about Audiobus dying in iOS 13 – don’t. Michael explains to CDM what’s going on.

Audiobus 3.

Can you comment on the deprecation of Audiobus and IAA for iOS? It’s safe to say this should mean compatibility at least for the foreseeable future, but not much future in OS updates after that, given Apple’s past record?

To be specific, this is a deprecation of IAA rather than Audiobus – Audiobus is a combination of a host app and a communication technology built into supporting third party apps. The latter is presently based on IAA, but doesn’t have to be.

As for the IAA deprecation, I consider this a very positive move by Apple. The technology that replaces it, Audio Unit v3, is a big step forward in terms of usability and robustness, and focusing their own attention and that of the developer community on AUv3 is a good thing. I doubt IAA is going anywhere any time soon though; deprecations can last many years.

Does this mean the Audiobus app will reach its end of life? Do you have plans for further development in other areas?

Not at all. I’ve got lots of plans for Audiobus, to increase its value as an audio unit host, and possibly to fill the gap left by IAA if it’s ever switched off.

Do we lose anything by shifting to AUv3 versus IAA? (I have to admit I have a slightly tough time wrapping my head round this myself, in that there’s a workflow paradigm shift here, so it’s not so fair to compare the enabling technologies alone…)

AUv3 is actually quite impressive lately, and continues to grow. As you say, they’re pretty different workflows, so it can be tricky to compare. The shortcomings we see I largely put down to developers not fully exploiting the opportunities of the platform – myself included! This will only improve going forward, I suspect.

There is one pretty big downside, which is that implementing AUv3 support in an app is a lot harder than implementing IAA, which itself is harder than implementing Audiobus support. It’s the difference between just a few lines of code, and a whole restructure of an app. Minutes vs days or weeks; worse if there’s file management involved. For apps that want to host audio units (on the receiving end), it’s a lot more work too, as they would need to implement all of the audio unit selection and routing themselves, rather than letting Audiobus do all the work and just receiving the audio at the end.

This is the reason there are still plenty of apps that only do Audiobus or IAA – my own apps Loopy and Samplebot included! If the apps that don’t have AUv3 yet don’t update in time and Apple ever pull the plug on IAA, they will just stop working. And it’s possible we’ll see less adoption of AUv3 for new apps.

But if things do go that way, I’m completely open to the possibility of stepping in to fill the gap left by IAA; there’s no reason Audiobus couldn’t continue to function as it does right now without IAA, as this is how it worked in the beginning. But we’ll wait and see what happens.

AUv3 plug-in format is supported by instruments and effects, like this RM-1 Wave Modulator from Numerical Audio.

Is there some way to re-imagine Audiobus using AUv3?

Audiobus actually already has great AUv3 support built in, and lots of users are already on exclusively AUv3 setups. I’m continuing to add stuff to make the workflow even better, like MIDI learn and MIDI sync – and 2-up split screen coming soon.

Have you heard reaction from other developers?

Not as yet, no.

So you see a justification to Apple going this direction?

Sure, I’d say it’s so we can all focus on the new hotness that is AUv3. IAA was never enormously stable, and felt like a bridging technology until something like AUv3 came along. The resources of the audio team at Apple are just better put towards working on AUv3.

Thanks, Michael. We’ll keep an eye on this one, and if there’s anything CDM can do to pass on useful information to developers interested in adding AUv3 support, I imagine we can do that, too.

https://audiob.us/

The post The future of inter-app sound on iOS: a chat with Audiobus’ creator appeared first on CDM Create Digital Music.

Playdate is an indie game handheld with a crank from Teenage Engineering, Panic

Delivered... Peter Kirn | Scene | Thu 23 May 2019 9:08 pm

Playdate is a Game Boy-ish gaming handheld with a hand crank on it, wired for delivering indie and experimental games weekly. And it comes from an unlikely collaboration: Mac/iOS developer Panic with synth maker Teenage Engineering.

Yes, that svelte retro industrial look and unmistakable hand crank are the influence of prolific Swedish design house Teenage Engineering. And TE have already demonstrated their love of cranks on their synths, the OP-1 and OP-Z.

This isn’t a Teenage Engineering product, though – and here’s the even more surprising part. The handheld hardware comes from Panic, the long-time Mac and iOS developer. I’ve been a Panic customer over the years, having used their FTP and Web dev products early on in CDM’s life (as did a couple of my designers), and even obsessively messing around with Mac icons back in the day.

But now Panic are doing games – the spooky Wyoming mystery Firewatch, which has earned them some real street cred, and an upcoming thing with a goose.

The really interesting twist here is that the “Playdate” title is a reference to games that appear weekly. And this is where I might imagine this whole thing dovetailing with music. I mean, first, music and indie games naturally go hand in hand, and from the very start of CDM, the game community have been into strange music stuff.

The obvious crossover at some point would be some unusual music games and without question some kind of music creation tool – like nanoloop or LittleGPTracker. nanoloop got its own handheld iteration recently – see below – but this would be a natural hardware platform too.

Even barring that, though, I imagine some dovetailing audiences for this. And it does look cute.

Specs:
400×240 (that’s way more resolution than the original Game Boy), black and white screen
No backlight (okay, so kind of a pain for handheld chip music performance)
Built-in speaker (a little one)
D-pad, A and B switches
USB-C connector
… and it looks like there is a headphone jack

Not sure what the buttons on top and next to the display do – power and lock, maybe?

The game designers involved are tantalizing, too – and have some interesting music connections:

Keita Takahashi (Katamari Damacy)

Zach Gage (SpellTower, Ridiculous Fishing)

Bennett Foddy (QWOP, Getting Over It with Bennett Foddy – and another music link: he was the bassist in Cut Copy, remember them?)

Shaun Inman (also a game composer, as well as a designer of Retro Game Crunch, The Last Rocket, Flip’s Escape, etc.)

This takes me back to that one time I hosted a one-button game exhibition at GDC (the game developer conference) with Kokoromi, the Montreal game collective. That has accessibility implications, too, including for music. (Flashback to their game showcase at the same time.) So there is crossover here, I mean – and intersecting interests between composers and game designers, too.

US$149 will buy you the console and a 12-game subscription. Coming early 2020.

Music connections or no, it looks like a toy we’ll want to have.

https://play.date/

EDGE, the print mag, has an exclusive – with an excerpt of that feature online:

https://play.date/edge/

Thanks to Oliver Chesler for the tip.

An obvious marketing campaign, though, if only for Panic wanting to market to Americans of about my age…

The post Playdate is an indie game handheld with a crank from Teenage Engineering, Panic appeared first on CDM Create Digital Music.

Turn your iPad or iPhone into a scriptable MIDI tool with Mozaic

Delivered... Peter Kirn | Scene | Mon 20 May 2019 6:07 pm

Its creator describes it as a “workshop in a plug-in.” Mozaic lets you turn your iOS device into a MIDI filter/controller that does whatever you want – a toolkit for making your own MIDI gadgets.

Oh yeah and it’s just US$6.99, which is absurd but awesome.

The beauty of this, of course, is that you can have whatever tools you want without having to wait for someone else to make them for you. Developer Bram Bos has been an innovator in music software for years – he created one of the first software drum machines, along with some ground-breaking (and sometimes weird) plug-ins, and is now one of the more accomplished iOS developers. So you can count on the quality of this one. It might move my iPad Pro back into must-have territory.

Bram writes to CDM that he thought this kind of DIY plug-in could let you make what you need:

“I noticed there is a lot of demand for MIDI filters and plugins (such as Rozeta) in the mobile music world,” he says, “especially with the rising popularity of DAW-less, modular plugin-based jamming and music making. Much of this demand is highly specific and difficult to satisfy with general-purpose apps. So I decided to make it easier for people to create such plugins themselves.”
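To make “MIDI filter” concrete: a note transposer is the classic example of the kind of plugin Bram is describing. Mozaic has its own script language, so this is just the underlying logic sketched in Python – raw three-byte MIDI messages in, messages out.

```python
# The kind of transform a simple MIDI filter performs: transpose
# note-on/note-off messages, pass everything else (CCs, etc.) through.
# Sketched in plain Python; Mozaic itself uses its own script language.

NOTE_OFF, NOTE_ON = 0x80, 0x90

def transpose_filter(msg, semitones=12):
    """msg is a (status, data1, data2) tuple of raw MIDI bytes."""
    status, data1, data2 = msg
    if status & 0xF0 in (NOTE_ON, NOTE_OFF):
        # Clamp the transposed note to the valid MIDI range 0-127.
        return (status, max(0, min(127, data1 + semitones)), data2)
    return msg
```

So `(0x90, 60, 100)` – middle C at velocity 100 – comes out an octave up as `(0x90, 72, 100)`, while a mod-wheel message passes through untouched.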

You get ready-to-use LFOs, graphic interface layouts, musical scales, random generators, and “a very easy-to-learn, easy-to-read script language.” And yeah, don’t be afraid, first-time programmers, Bram says: “I’ve designed the language from the ground up to be as accessible and readable as possible.”

To get you started, you’ll find example scripts and modular-style filters, and a big preset collection – with more coming, in response to your requests, Bram tells us. There’s a programming manual, meant both to get beginners going in as friendly a way as possible, and to give more advanced scripters an in-depth guide. And you get plenty of real-world examples.

There are some things you can do with your iOS gadget that you can’t do with most MIDI gadgets, too – like map your tilt sensors to MIDI.

This is an AUv3-compatible plug-in so you can use it in hosts like AUM, ApeMatrix, Cubasis, Nanostudio 2, Audiobus 3, and the like.

Full description/specs:

Mozaic runs inside your favorite AU MIDI host, and gives you practical building blocks such as LFOs, pre-fab GUI layouts, musical scales, AUv3 support (with AU Parameters, transport events, tempo syncing, etc.), random generators and a super-simple yet powerful script language. Mozaic even offers quick access to your device’s Tilt Sensors for expressive interaction concepts!

The Mozaic Script language is designed from the ground up to be the easiest and most flexible MIDI language on iOS. A language by creatives, for creatives. You’ll only need to write a few lines of script to achieve impressive things – or to create that uber-specific thing that was missing from your MIDI setup.

Check out the Programming Manual on Ruismaker.com to learn about the script language and to get inspiration for awesome scripts of your own.

Mozaic comes with a sizable collection of tutorials and pre-made scripts which you can use out of the box, or which can be a starting point for your own plugin adventures.

Features in a nutshell:

– Mozaic Script language: easy to learn, easy to read
– Sample-accurate-everything: the tightest MIDI timing possible
– Built-in script editor with code-completion, syntax hints, etc.
– 5 immediately usable GUI layouts, with knobs, sliders, pads, etc.
– In-depth, helpful programming manual available on Ruismaker.com
– Easy access to LFOs, scales, MIDI I/O, AU parameters, timers
– AUv3; so you’ll get multi-instance, state-saving, tempo sync and resource efficiency out of the box

Mozaic opens up the world of creative MIDI plugins to anyone willing to put in a few hours and a hot beverage or two.

Practical notes:
– Mozaic requires a plugin host with support for AUv3 MIDI plugins (AUM, ApeMatrix, Cubasis, Auria, Audiobus 3, etc.)
– The standalone mode of Mozaic lets you edit, test and export projects, but for MIDI connections you need to run it inside an AUv3 MIDI host
– MIDI is not sound; Mozaic on its own does not make noise… so bring your own synths, drum machines and other instruments!
– AUv3 MIDI requires iOS 11 or higher

With some other MIDI controllers looking long in the tooth, and Liine’s Lemur also getting up in years, I wonder if this might not be the foundation for a universal controller/utility for music. So, yeah, I’d love to see some more touch-savvy widgets, OSC, and even Android support if this catches on. Now go forth, readers, and help it catch on!

Mozaic on the iTunes App Store

http://ruismaker.com/

The post Turn your iPad or iPhone into a scriptable MIDI tool with Mozaic appeared first on CDM Create Digital Music.

No, Beatport’s subscription will not kill music – here’s how it really works

Delivered... Peter Kirn | Labels,Scene | Fri 17 May 2019 7:18 pm

Pioneer and Beatport this week announced new streaming offerings for DJs. And then lots of people kind of freaked out. Let’s see what’s actually going on, if any of it is useful to DJs and music lovers, and what we should or shouldn’t worry about.

Artists, labels, and DJs are understandably on edge about digital music subscriptions – and thoughtless DJing. Independent music makers tend not to see any useful revenue or fan acquisition from streaming. So the fear is that a move to the kinds of pricing on Spotify, Amazon, and Apple services would be devastating.

And, well – that’s totally right, you obviously should be afraid of those things if you’re making music. Forget even getting rich – if big services take over, just getting heard could become an expensive endeavor, a trend we’ve already begun to see.

So I talked to Beatport to get some clarity on what they’re doing. We’re fortunate now that the person doing artist and label relations for Beatport is Heiko Hoffmann, who has an enormous resume in the trenches of the German electronic underground, including some 17 years under his belt as editor of Groove, which has about as much of a reputation for credibility as any German-language rag.

TL;DR

The skinny:

Beatport LINK: fifteen bucks a month, but aimed at beginners – 128k only. Use it for previews if you’re a serious Beatport user, recommend it to your friends bugging you about how they should start DJing, and otherwise don’t worry about it.

Beatport CLOUD: five bucks a month, gives you sync for your Beatport collection. Included in the other offerings here – and it saves you from losing your Beatport purchases and gives you previews. 128k only. Will work with Rekordbox in the fall, but you’ll want to pay extra for extra features (or stick with your existing download approach).

Beatport LINK PRO: the real news – but it’s not here yet. Works with Rekordbox, costs 40-60 bucks, but isn’t entirely unlimited. Won’t destroy music (uh, not saying something else won’t, but this won’t). The first sign of real streaming DJs – but the companies catering to serious DJs aren’t going to give away the farm the way Apple and Spotify have. In fact, if there’s any problem here, it’s that no one will buy this – but that’s Beatport’s problem, not yours (as it should be).

WeDJ streaming is for beginners, not Pioneer pros

This first point is probably the most important. Beatport (and SoundCloud) have each created a subscription offering that works exclusively with Pioneer’s WeDJ mobile DJ tool. That is, neither of these works with Rekordbox – not yet.

Just in case there’s any doubt, Pioneer has literally made the dominant product image a photo of some people DJing in their kitchen. So there you go: Rekordbox and CDJ and TORAIZ equals nightclub; WeDJ equals countertop next to a pan of fajitas.

So yeah, SoundCloud streaming is now in a DJ app. And Beatport is offering its catalog of tracks for US$14.99 a month for the beta, which is a pretty phenomenally low price – and one that would rightfully scare labels and artists.

But it’s important that, as far as DJing goes, this lives in WeDJ. Pioneer aren’t planning on endangering their business ecosystem in Rekordbox, higher-end controllers, and standalone hardware like the CDJ. They’re trying to attract beginners in the hope that some of those people will expand the high-end market down the road.

By the same token, it’d be incredibly short-sighted if Beatport were to give up on customers paying a hundred bucks a month or so on downloads just to chase growth. Instead, Beatport will split its offerings into a consumer/beginner product (LINK for WeDJ) and two products for serious DJs (LINK Pro and Beatport CLOUD).

And there’s reason to believe that what disrupts the consumer/beginner side might not make ripples when it comes to pros – as we’ve been there already. Spotify is in Algoriddim’s djay. It’s actually a really solid product. But the djay user base doesn’t impact what people use in the clubs, where the CDJ (or sometimes Serato or TRAKTOR) reign supreme. So if streaming in DJ software were going to crash the download market, you could argue it would have happened already.

That’s still a precarious situation, so let’s break down the different Beatport options, both to see how they’ll impact music makers’ business – and whether they’re something you might want to use yourself.

Ce n’est pas un CDJ.

Beatport LINK – the beginner one

First, that consumer service – yeah, it’s fifteen bucks a month and includes the Beatport catalog. But it’s quality-limited and works only in the WeDJ app (and with the fairly toy-like new DDJ-200 controller, which I’ll look at separately).

Who’s it for? “Beginner DJs that are just starting out will have millions of tracks to practice and play with,” says Heiko. “Previously, a lot of this market would have been lost to piracy. The bit rate is 128kbps AAC and is not meant for public performance.”

But those of us who are serious Beatport users might want to mess around with it, too – it’s a place you can audition new tracks for a fairly low monthly fee. “It’s like having a record shop in your home,” says Heiko.

Just don’t think Beatport are making this their new subscription offering. If you think fifteen bucks a month for everything Beatport is a terrible business idea, don’t worry – Beatport agree. “This is the first of our Beatport LINK products,” says Heiko. “This is not a ‘Spotify for dance music.’ It’s a streaming service for DJs and makes Beatport’s extensive electronic music catalog available to stream audio into the WeDJ app.” And yeah, Spotify want more money for that, which is good – because you want more money charged for that as a producer or label. But before we get to that, let’s talk about the locker, the other thing available now:

WeDJ – a mobile gateway drug for DJs, or so Pioneer hopes. (NI and Algoriddim did it first; let’s see who does it better.)

Beatport CLOUD – the locker/sync one

Okay, so streaming may be destroying music but … you’ve probably still sometimes wanted to have access to digital downloads you’ve bought without having to worry about hard drive management or drive and laptop failures. And there’s the “locker” concept.

Some folks will remember that Beatport bought the major “locker” service for digital music – when it acquired Pulselocker.

Beatport CLOUD is the sync/locker making a comeback, with a €/$4.99 a month fee and no obligation or contract. It’s also included free in LINK – so for me, for instance, since I hate promos and like to dig for my own music even as press and DJ, I’m seriously considering the fifteen bucks to get full streaming previews, mixing in WeDJ, and CLOUD.

There are some other features here, too:

Re-download anything, unlimited. I heard from a friend – let’s call him Pietro Kerning – that maybe a stupid amount of music he’d (uh, or “she’d”) bought on Beatport was now scattered across a random assortment of hard drives. I would never do such a thing, because I organize everything immaculately in all aspects of my life in a manner becoming a true professional, but now this “friend” will easily be able to grab music anywhere in the event of that last-minute DJ gig.

By the same token you can:

Filter all your existing music in a cloud library. Not that I need to, perfectly organized individual, but you slobs need this, of course.

Needle-drop full previews. Hear 120 seconds from anywhere in a track – for better informed purchases. (Frankly, this makes me calmer as a label owner, even – I would totally rather you hear more of our music.)

There should be some obvious bad news here – this only works with Beatport purchased music. You can’t upload music the way some sync/locker services have worked in the past. But I think given the current legal landscape, if you want that, set up your own backup server.

What I like about this, at least, is that this store isn’t losing stuff you’ve bought from them. I think other download sites should consider something similar. (Bandcamp does a nice job in this respect – and of course it’s the store I use the most when not using Beatport.)

The new Beatport cloud.

Beatport LINK Pro – what’s coming

There are very few cases where someone says, “hey, good news – this will be expensive.” But music right now is a special case. And it’s good news that Beatport is launching a more expensive service.

For labels and artists, it means a serious chance to stay alive. (I mean, even for a label doing a tiny amount of download sales, this can mean that little bit of cash to pay the mastering engineer and the person who did the design for the cover, or to host a showcase in your local club.)

For serious users using that service, it means a higher quality way of getting music than other subscription services – and that you support the people who make the music you love, so they keep using it.

Or, at least, that’s the hope.

What Beatport is offering at the “pro” tiers does more and costs more. Just like Pioneer doesn’t want you to stop buying CDJs just because they have a cheap controller and app, Beatport doesn’t want you to stop spending money for music just because they have a subscription for that controller and app. Heiko explains:

With the upcoming Pioneer rekordbox integration, Beatport will roll out two new plans – Beatport LINK Pro and Beatport LINK Pro+ – with an offline locker and 256kbps AAC audio quality (which is equivalent to 320kbps MP3, but you’re the expert here). This will be club ready, but will be aimed at DJs who take their laptops to clubs, for now. They will cost €39,99/month and €59,99/month depending on how many tracks you can put in the offline locker (50 and 100 respectively).

You’ll get streaming inside Rekordbox with the basic LINK, too – but only at 128k. So it’ll work for previewing and trying out mixes, but the idea is you’ll still pay more for higher quality. (And of course that also still means paying more to work with CDJs, which is also a big deal.)

And yeah, Beatport agree with me. “We think streaming for professional DJ use should be priced higher,” says Heiko. “And we also need to be sure that this is not biting into the indie labels and artists (and therefore also Beatport’s own) revenues,” he says.

What Heiko doesn’t say is that this could increase spending – but I think it actually could. Looking at my own purchase habits and talking to others, a lot of the time you splash out $100 for a big gig, but then lapse for a few months. A subscription fee might actually encourage you to spend more and keep your catalog up to date gig to gig.

It’s also fair to hope this could be good for under-the-radar labels and artists even relative to the status quo. If serious DJs are locked into subscription plans, they might well take a chance on lesser known labels and artists since they’re already paying. I don’t want to be overly optimistic, though – a lot of this will be down to how Beatport handles its editorial offerings and UX on the site as this subscription grows. That means it’s good someone like Heiko is handling relations, though, as I expect he’ll be hearing from us.

Really, one very plausible scenario is that streaming DJing doesn’t catch on initially because it’s more expensive – and people in the DJ world may stick to downloads. A lot of that in turn depends on things like how 5G rolls out worldwide (which right now involves a major battle between the US government and Chinese hardware vendor Huawei, among other things), plus how Pioneer deals with a “Streaming CDJ.”

The point is, you shouldn’t have to worry about any of that. And there’s no rush – smart companies like Beatport will charge sustainable amounts of money for subscriptions and move slowly. The thing to be afraid of is if Apple or Spotify rush out a DJ product and, like, destroy independent music. If they try it, we should fight back.

Will labels and artists benefit?

If it sounds like I’m trying to be a cheerleader for Beatport, I’m really not. If you look at the top charts in genres, a lot of Beatport is, frankly, dreck – even with great editorial teams trying to guide consumers to good stuff. And centralization in general has a poor track record when it comes to underground music.

No, what I am biased toward is products that are real, shipping, and based on serious economics. So much as I’m interested in radical ideas for decentralizing music distribution, I think those services have yet to prove their feasibility.

And I think it’s fair to give Beatport some credit for being a business that’s real, based on actual revenue that’s shared between labels and artists. It may mean little to your speedcore goth neo-Baroque label (BLACK HYPERACID LEIPZIG INDUSTRIES, obviously – please let’s make that). But Beatport really is a cornerstone for a lot of the people making dance music now, on a unique scale.

The vision for LINK seems to be solid when it comes to revenue. Heiko again:

LINK will provide an additional revenue source to the labels and artists. The people who are buying downloads on Beatport are doing so because they want to DJ/perform with them. LINK is not there to replace that.

But I think for the reason I’ve already repeated – that the “serious” and “amateur”/wedding/beginner DJ gulf is real and not just a thing snobs talk about – LINK and WeDJ probably won’t disrupt label business much, even in the positive direction. Look ahead to the Rekordbox integration and the higher tiers. And yeah, I’m happy to spend the money, because I never get tired of listening to music – really.

And what if you don’t like this? Talk to your label and distributor. And really, you should be doing that anyway. Heiko explains:

Unlike other DSPs, Beatport LINK has been conceived and developed in close cooperation with the labels and distributors on Beatport. Over the past year, new contracts were signed and all music used for LINK has been licensed by the rights holders. However, if labels whose distributors have signed the new contract don’t want their catalog to be available for LINK, they can opt out. But again: LINK is meant to provide an additional revenue source to the labels and artists.

Have a good weekend, and let us know if you have questions or comments. I’ll be looking at this for sure, as I think there isn’t enough perspective coming from serious producers who care about the details of technology.

https://www.beatport.com/get-link

The post No, Beatport’s subscription will not kill music – here’s how it really works appeared first on CDM Create Digital Music.

AUM is perfect iOS music hub, now with Ableton Link and MIDI updates

Delivered... Peter Kirn | Scene | Wed 24 Apr 2019 10:49 pm

Speaking of tools to glue together your gear and serve as the heartbeat of your studio – AUM. This iOS super-tool can serve as an essential hub for combining apps and hardware in any combination – and now it’s even more savvy with Ableton Link and MIDI.

You’d be forgiven for thinking AUM was just some sort of fancy mixer for the iPad. But it’s more like a studio for combining software with software, software with hardware, and hardware with hardware. So it might be a way to combine stuff that’s on your iOS device, or a convenient tool for mobile recording, or a way to let your iPad sit in a studio of other gear and make them play together, or a combination of all those things.

It does this by letting you do whatever you like with inputs and outputs, iOS plug-ins (Audio Unit extensions), audio between apps (Audiobus and Inter-App Audio), and multichannel audio and MIDI interfaces. It’s a host, a virtual patch bay (for both MIDI and audio), and a recording/playback device. And it’s a tool to center other tools. There’s also Ableton Link and MIDI clock support.
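The “virtual patch bay” idea is simple at heart: a routing graph in which any source – an app, a plug-in, a hardware input – can feed any number of destinations. A minimal sketch, with names that are my own illustration rather than anything in AUM:

```python
# Minimal routing-graph sketch of a patch bay: sources fan out to any
# number of destinations. Names are illustrative, not AUM's actual API.

from collections import defaultdict

class PatchBay:
    def __init__(self):
        self.routes = defaultdict(list)  # source -> [destinations]

    def connect(self, src, dst):
        self.routes[src].append(dst)

    def destinations(self, src):
        return list(self.routes[src])

bay = PatchBay()
bay.connect("Synth App", "Delay AU")     # software into software
bay.connect("Delay AU", "Main Out 1/2")  # software out to hardware
bay.connect("Delay AU", "Recorder")      # fan out to a recorder, too
```

The point of the structure is that routing is many-to-many: the delay feeds both the main outputs and a recorder at once, which is exactly the kind of setup described above.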

It’s worth bringing up AUM right now, because a minor point update – 1.3 – brings some major new features that really make this invaluable.

  • Ableton Link 3 support means you can start/stop transport.
  • You get “MIDI strips” for hosting useful MIDI-only Audio unit extensions.
  • You can import channels between sessions, and duplicate channel strips.
  • And you get tons of new MIDI mappings: program changes, tap tempo, loading presets – even loading whole sessions can now be done via MIDI. I could imagine this being used in some pretty major stage shows.

Jakob Haq has shown some useful ways of approaching the app, including MIDI mapping control:

Lots more tutorials and resources on the official site:

http://kymatica.com/apps/aum

The full feature list:

High quality audio up to 32-bit 96kHz
Clean and intuitive user interface with crisp vector graphics
Extremely compact and optimized code, very small app size
Unlimited* number of channels
Unlimited* number of effect slots
Inserts and sends are configurable pre/post-fader
Internal busses for mixing or effect sends
Supports multi-channel audio interfaces
Supports Audio Unit extensions, Inter-App Audio and Audiobus
Audiobus state saving
Highly accurate transport clock
Metronome with selectable output and optional pre-roll
Sends host sync to Audio Unit plugins and IAA apps
Send MIDI clock to external hardware
Play in time with Ableton Link
FilePlayer with sync and looping, access to all AudioShare files
Records straight into AudioShare storage space
Record synchronized beat-perfect loops
Built-in nodes for stereo processing, filtering and dynamics
Latency compensation makes everything align at the outputs
Separate Inter-App Audio / Audiobus output ports
Built-in MIDI keyboard
Fully MIDI controllable
MIDI Matrix for routing MIDI anywhere
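On the “send MIDI clock to external hardware” point: MIDI beat clock is just a steady stream of 0xF8 status bytes at 24 pulses per quarter note, so the timing math is tiny. A sketch (the helper name is mine):

```python
# MIDI beat clock: 24 pulses per quarter note, each pulse a 0xF8 byte.
# At 120 BPM a quarter note lasts 0.5 s, so pulses arrive every 1/48 s.

PPQN = 24  # MIDI clock resolution, fixed by the MIDI spec

def clock_interval(bpm):
    """Seconds between successive 0xF8 clock bytes at a given tempo."""
    return 60.0 / (bpm * PPQN)
```

A sender schedules a byte every `clock_interval(bpm)` seconds; a receiver inverts the same formula to derive tempo from incoming pulses – which is why tight, sample-accurate scheduling matters for clock stability.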

The post AUM is perfect iOS music hub, now with Ableton Link and MIDI updates appeared first on CDM Create Digital Music.

MidiWrist aids instrumentalists by giving Siri and Apple Watch control

Delivered... Peter Kirn | Scene | Tue 26 Feb 2019 9:02 pm

Grabbing the mouse, keyboard, or other controller while playing an instrument is no fun. Developer Geert Bevin has a solution: put an Apple Watch or (soon) iPhone’s Siri voice command in control.

We’ve been watching MidiWrist evolve over the past weeks. It’s a classic story of what happens when a developer is also a musician, making a tool for themselves. Geert has long been an advocate for combining traditional instrumental technique and futuristic electronic instruments; in this case, he’s applying his musicianship and developer chops to solving a practical issue.

If you’ve got an iPhone but no watch – like me – there are some solutions coming (more on that in a bit). But the Apple Watch is really ideally suited to the task. The fact that you have the controller strapped to your body already means controls are at hand. Haptic feedback on the digital crown means you can adjust parameters without even having to look at the display. (The digital crown is the dial on the side of the watch, modeled on the crown used to wind and/or set time on analog watches. Haptic feedback uses vibration to simulate the physical response a tangible control would give, both on the crown and on the touch surface of the watch face – what Apple calls “taptic” feedback, since it works with the existing touch interface. Even if you’re not a fan of the Apple Watch, it’s a fascinating design feature.)

How this works in practice: you can use the transport and even overdub new tracks easily, here pictured in Logic Pro X:

Just seeing the Digital Crown mapped as a new physical control is a compelling tech demo – and very useful to mobile apps, which tend to lack physical feedback. Here it is in a pre-release demo with the Minimoog Model D on iPhone:

Or here it is with the Eventide H9 (though, yeah, you could just put the pedal on a table and get the same impact):

Here it is with IK Multimedia’s UNO synth, though this rather makes me wish the iPhone just had its own Digital Crown:

Version 1.1 will include voice control via Siri. That’ll work with iPhones, too, so you don’t necessarily need an Apple Watch. With voice-controlled interfaces coming to various home devices, it’s not hard to imagine sitting at home and recording ideas right when the mood strikes you, Star Trek: The Next Generation style.

Geert, please, can we set up a DAW that lets us dictate melodies like this?

It’s a simple app at its core, but it really embodies three features: wearable interfaces, hands-free control (with voice), and haptic feedback. And there are lots of options for custom control, MIDI functionality, and connectivity. Check it out – this really is insane for just a watch app:

Four knobs can be controlled with the digital crown
Macro control over multiple synth parameters from the digital crown
Remotely Play / Stop / Record / Rewind your DAW from your Watch
Knobs can be controlled individually or simultaneously
Knobs can be linked to preserve their offsets
Four buttons can be toggled by tapping the Watch
Buttons can either be stateful or momentary
Program changes through the digital crown or by tapping the Watch
Transport control over Midi Machine Control (MMC)
XY pad with individual messages for each axis
Optional haptic feedback for all Watch interactions
Optional value display on the Watch
Configurable colors for all knobs and buttons
Configurable MIDI channels and CC numbers
Save your configurations to preset for easy retrieval
MIDI learn for easy controller configuration
MIDI input to sync the state of the controllers with the controlled synths
Advertise as a Bluetooth MIDI device
Connect to other Bluetooth MIDI devices
Monitor the MIDI values on the iPhone
Low latency and fast response
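Under the hood, features like these come down to standard MIDI bytes. Here’s a minimal sketch of the kinds of messages a controller like this might emit – the CC number, channel, and MMC device ID below are illustrative assumptions, not MidiWrist’s actual configuration:

```python
# Build raw MIDI bytes for a few of the features listed above.
# CC #74 on channel 1 and the "all devices" MMC ID are assumptions,
# chosen only for illustration.

def knob_cc(value, control=74, channel=0):
    """Control Change for a knob position (0-127): status 0xB0 | channel."""
    return bytes([0xB0 | channel, control & 0x7F, value & 0x7F])

# MIDI Machine Control transport commands are short universal SysEx messages:
# F0 7F <device-id> 06 <command> F7, with 01 = Stop, 02 = Play, 06 = Record.
MMC_STOP, MMC_PLAY, MMC_RECORD = 0x01, 0x02, 0x06

def mmc(command, device_id=0x7F):  # 0x7F addresses all devices
    """SysEx bytes for an MMC transport command (play/stop/record)."""
    return bytes([0xF0, 0x7F, device_id, 0x06, command, 0xF7])

def move_linked(values, delta):
    """Linked knobs: shift every knob by the same delta, clamped to 0-127,
    so relative offsets are preserved (until a knob hits a limit)."""
    return [max(0, min(127, v + delta)) for v in values]

print(list(mmc(MMC_PLAY)))             # [240, 127, 127, 6, 2, 247]
print(move_linked([10, 60, 120], 10))  # [20, 70, 127]
```

Whether the messages travel over Bluetooth MIDI or a virtual port, the bytes on the wire are the same – which is why the app can target any DAW or hardware synth that speaks MIDI.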

http://uwyn.com/midiwrist/

All of this really does make me want a dedicated DIY haptic device. I had an extended conversation with the engineers at Native Instruments about their haptic efforts with TRAKTOR; I personally believe there’s a lot of potential for human-machine interfaces for music with this approach. But that will depend in the long run on more hardware adopting haptic interfaces beyond just the passive haptics of nice-feeling knobs and faders and whatnot.

It’s a good space to keep an eye on. (I almost wrote “a good space to watch.” No. That’s not the point. You know.)

Geert shares a bit about development here:

Fun anecdote — in a way, this app has been more than three years in the making. I got the first Apple Watch in the hope of creating this, but the technology was way too slow without a direct real-time communication protocol between the Watch and the iPhone. I’ve been watching every Watch release (teehee) up until the last one, the Series 4. The customer reception was so good overall that I decided to give this another go, and only after a few hours of prototyping, I could see that this would now work and feel great. I did buy a Watch Series 3 afterwards also to include in my testing during development.

The post MidiWrist aids instrumentalists by giving Siri and Apple Watch control appeared first on CDM Create Digital Music.

Why is this Valentine’s song made by an AI app so awful?

Delivered... Peter Kirn | Scene | Wed 13 Feb 2019 11:19 pm

Do you hate AI as a buzzword? Do you despise the millennial whoop? Do you cringe every time Valentine’s Day arrives? Well – get ready for all those things you hate in one place. But hang in there – there’s a moral to this story.

Now, really, the song is bad. Like laugh-out-loud bad. Here’s iOS app Amadeus Code “composing” a song for Valentine’s Day, which says “love” much in the way a half-melted milk chocolate heart does, but – well, I’ll let you listen, millennial pop cliches and all:

Fortunately this comes after yesterday’s quite stimulating ideas from a Google research team – proof that you might actually use machine learning for stuff you want, like improved groove quantization and rhythm humanization. In case you missed that:

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Now, as a trained composer / musicologist, I do find this sort of exercise fascinating. And on reflection, I think the failure of this app tells us a lot – not just about machines, but about humans. Here’s what I mean.

Amadeus Code is an interesting idea – a “songwriting assistant” powered by machine learning, delivered as an app. And it seems machine learning could generate, for example, smarter auto accompaniment tools or harmonizers. Traditionally, those technologies have been driven by rigid heuristics that sound “off” to our ears, because they aren’t able to adequately follow harmonic changes in the way a human would. Machine learning could – well, theoretically, with the right dataset and interpretation – make those tools work more effectively. (I won’t re-hash an explanation of neural network machine learning, since I got into that in yesterday’s article on Magenta Studio.)

https://amadeuscode.com/

You might well find some usefulness from Amadeus, too.

This particular example does not sound useful, though. It sounds soulless and horrible.

Okay, so what happened here? Music theory at least cheers me up even when Valentine’s Day brings me down. Here’s what the developers sent CDM in a pre-packaged press release:

We wanted to create a song with a specific singer in mind, and for this demo, it was Taylor Swift. With that in mind, here are the parameters we set in the app.

Bpm set to slow to create a pop ballad
To give the verses a rhythmic feel, the note length settings were set to “short” and also since her vocals have great presence below C, the note range was also set from low~mid range.
For the chorus, to give contrast to the rhythmic verses, the note lengths were set longer and a wider note range was set to give a dynamic range overall.

After re-generating a few ideas in the app, the midi file was exported and handed to an arranger who made the track.

Wait – Taylor Swift is there just how, you say?

Taylor’s vocal range is somewhere in the range of C#3-G5. The key of the song created with Amadeus Code was raised a half step in order to accommodate this range making the song F3-D5.

From the exported midi, 90% of the topline was used. The rest of the 10% was edited by the human arranger/producer: The bass and harmony files are 100% from the AC midi files.
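The transposition the developers describe is simple MIDI arithmetic: shift every note number up one semitone, then check the result against the singer’s range. A minimal sketch, assuming the common C4 = 60 note-number convention – the melody values here are made up for illustration, not taken from the actual export:

```python
# Transpose a melody and check it against a vocal range, using MIDI
# note numbers (C4 = 60 convention). The melody below is illustrative.

def transpose(notes, semitones):
    """Shift every note number by the given number of semitones."""
    return [n + semitones for n in notes]

def fits_range(notes, low, high):
    """True if every note lies within [low, high] inclusive."""
    return low <= min(notes) and max(notes) <= high

CS3, G5 = 49, 79   # the singer's approximate range, C#3-G5
F3, D5 = 53, 74    # the transposed song's range, F3-D5

melody = [52, 57, 64, 73]         # spans E3-C#5 before transposing
shifted = transpose(melody, 1)    # raised a half step: now F3-D5
print(fits_range(shifted, CS3, G5))  # True
```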

Now, first – these results are really impressive. I don’t think traditional melodic models – theoretical and mathematical in nature – are capable of generating anything like this. They’ll tend to fit melodic material into a continuous line, and as a result will come out fairly featureless.

No, what’s compelling here is not so much that this sounds like Taylor Swift, or that it sounds like a computer, as it sounds like one of those awful commercial music beds trying to be a faux Taylor Swift song. It’s gotten some of the repetition, some of the basic syncopation, and oh yeah, that awful overused millennial whoop. It sounds like a parody, perhaps because partly it is – the machine learning has repeated the most recognizable cliches from these melodic materials, strung together, and then that was further selected / arranged by humans who did the same. (If the machines had been left alone without as much human intervention, I suspect the results wouldn’t be as good.)

In fact, it picks up Swift’s tics – some of the funny syncopations and repetitions – but without stringing them together, like watching someone do a bad impression. (That’s still impressive, though, as it does represent one element of learning – if a crude one.)

To understand why this matters, we’re going to have to listen to a real Taylor Swift song. Let’s take this one:

Okay, first, the fact that the real Taylor Swift song has words is not a trivial detail. Adding words means adding prosody – so elements like intonation, tone, stress, and rhythm. To the extent those elements have resurfaced as musical elements in the machine learning-generated example, they’ve done so in a way that no longer is attached to meaning.

No amount of analysis, machine or human, can be generative of lyrical prosody, for the simple reason that analysis alone doesn’t give you intention and play. A lyricist will make decisions based on past experience and on the desired effect of the song, and because there’s no real right or wrong to how to do that, they can play around with our expectations.

Part of the reason we should stop using AI as a term is that artificial intelligence implies decision making, and these kinds of models can’t make decisions. (I did say “AI” again because it fits into the headline. Or, uh, oops, I did it again. AI lyricists can’t yet hammer “oops” as an interjection or learn the playful setting of that line – again, sorry.)

Now, you can hate the Taylor Swift song if you like. But it’s catchy not because of a predictable set of pop music rules so much as its unpredictability and irregularity – the very things machine learning models of melodic space are trying to remove in order to create smooth interpolations. In fact, most of the melody of “Blank Space” is a repeated tonic note over the chord progression. Repetition and rhythm are also combined into repeated motives – something else these simple melodic models can’t generate, by design. (Well, you’ll hear basic repetition, but making a relationship between repeated motives again will require a human.)

It may sound like I’m dismissing computer analysis. I’m actually saying something more (maybe) radical – I’m saying part of the mistake here is assuming an analytical model will work as a generative model. Not just a machine model – any model.

This mistake is familiar, because almost everyone who has ever studied music theory has made the same mistake. (Theory teachers then have to listen to the results, which are often about as much fun as these AI results.)

Music theory analysis can lead you to a deeper understanding of how music works, and how the mechanical elements of music interrelate. But it’s tough to turn an analytical model into a generative model, because the “generating” process involves decisions based on intention. If the machine learning models sometimes sound like a first-year graduate composition student, that may be because that student, too, is steeped in analysis but not in the experience of making decisions. And that experience is important: the machine learning model won’t get better, because while it can keep learning, it can’t really make decisions. It can’t learn from what it’s learned, as you can.

Yes, yes, app developers – I can hear you aren’t sold yet.

For a sense of why this can go deep, let’s turn back to this same Taylor Swift song. The band Imagine Dragons picked it up and did a cover, and, well, the chord progression will sound more familiar than before.

As it happens, in a different live take I heard the lead singer comment (unironically) that he really loves Swift’s melodic writing.

But, oh yeah, even though pop music recycles elements like chord progressions and even groove (there’s the analytic part), the results take on singular personalities (there’s the human-generative side).

“Stand by Me” dispenses with some of the tics of our current pop age – millennial whoops, I’m looking at you – and, at least as well as you can in the English language, hits the emotional meaning of the words in the way they’re set musically. It’s not a mathematical average of a bunch of tunes, either. It’s a reference to a particular song that meant something to its composer and singer, Ben E. King.

This is his voice, not just the emergent results of a model. It’s a singer recalling a spiritual that hit him with those same three words, which sets a particular psalm from the Bible. So yes, drum machines have no soul – at least until we give them one.

“Sure,” you say, “but couldn’t the machine learning eventually learn how to set the words ‘stand by me’ to music?” No, it can’t – because there are too many possibilities for exactly the same words in the same range in the same meter. Think about it: how many ways can you say these three words?

“Stand by me.”

Where do you put the emphasis, the pitch? There’s prosody. What melody do you use? Keep in mind just how different Taylor Swift and Ben E. King were, even with the same harmonic structure. “Stand,” the word, is repeated as a suspension – a dissonant note – above the tonic.

And even those observations still lie in the realm of analysis. The texture of this coming out of someone’s vocal cords, the nuances to their performance – that never happens the same way twice.

Analyzing this will not tell you how to write a song like this. But it will throw light on each decision, make you hear it that much more deeply – which is why we teach analysis, and why we don’t worry that it will rob music of its magic. It means you’ll really listen to this song and what it’s saying, listen to how mournful that song is.

And that’s what a love song really is:

If the sky that we look upon
Should tumble and fall
Or the mountain should crumble to the sea
I won’t cry, I won’t cry
No, I won’t shed a tear
Just as long as you stand
Stand by me

Stand by me.

Now that’s a love song.

So happy Valentine’s Day. And if you’re alone, well – make some music. People singing about heartbreak and longing have gotten us this far – and it seems if a machine does join in, it’ll happen when the machine’s heart can break, too.

PS – let’s give credit to the songwriters, and a gentle reminder that we each have something to sing that only we can:
Singer Ben E. King, Best Known For ‘Stand By Me,’ Dies At 76 [NPR]

The post Why is this Valentine’s song made by an AI app so awful? appeared first on CDM Create Digital Music.
