Indian E-music – The right mix of Indian Vibes… » Music tech


SoundCloud will now handle distributing your music – and give you a 100% cut

Delivered... Peter Kirn | Scene | Tue 19 Feb 2019 6:34 pm

SoundCloud has a new pitch to creators: upload your music not just to SoundCloud, but to all major music services, too. Distribution is launching in a new beta as part of Premier service, and the terms look appealing.

Okay, first, to understand what digital distribution is, let’s go back in time. Digital music for many years meant primarily CDs and … well, piracy, despite some early (fairly horrible) stores. Then along came Apple’s iTunes Music Store. When it launched, you needed to have a label deal of some kind to make your music available; Apple dealt with those labels much as brick and mortar stores deal with labels and distributors. The first loophole was CDBaby – the name is a reminder that at the time, independent music producers were still largely duplicating releases on CDs. Pay for CDBaby, and you get your music on iTunes for sale.

Now, the landscape is different. Apart from DJs and specialists, most people get their music through streaming services. But the only major destination where you can upload music directly has been SoundCloud (though Apple and Spotify may soon change that).

So if you want your music on other services, you typically sign a distribution deal. Some of these are pay-once or subscription services open to anyone. More traditional distributors require multi-year contracts you can’t get out of – though they may offer personal relationships with curators at online stores, and the promise, at least, of getting you placed as “featured music” or on playlists.

If you just want to get your music out there, the issue is that distribution can actually cost more than you bring in.

SoundCloud’s offering, then, could be at least cheap and convenient. Here’s how it works:

Qualified users with a SoundCloud Pro or Pro Unlimited account can sign up for an open beta right now.

You can select original music to distribute to a range of services, including Amazon Music, Apple Music, Instagram, Spotify, Tencent (the leading Chinese network), and YouTube Music, inside your SoundCloud account.

Then you keep 100% “of your rights” (need to read the fine print on that), plus 100% of distribution royalties from third-party services. There’s no additional cost for distribution.

Most other services either take a cut of royalties or charge fees for distribution; here, what you’re already paying for your account covers those costs.
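To put rough numbers on that trade-off, here’s a back-of-envelope break-even sketch. The fee and per-stream payout are illustrative assumptions, not quoted rates from any service:

```python
# Rough break-even estimate: how many streams must a release earn
# per year before it covers a flat annual distribution fee?
# All numbers below are illustrative assumptions, not quoted rates.

def breakeven_streams(annual_fee_usd, payout_per_stream_usd):
    """Streams needed per year just to cover the distribution fee."""
    return annual_fee_usd / payout_per_stream_usd

# Assumed: a $20/year distributor fee and ~$0.004 per-stream payout.
streams = breakeven_streams(20.0, 0.004)
print(f"{streams:.0f} streams/year just to break even")  # 5000 streams/year just to break even
```

At payouts that small, a flat fee that sounds cheap can still exceed what a low-traffic release earns, which is exactly the problem a no-extra-cost offering sidesteps.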

So wait, what’s in it for SoundCloud if you get all the money? It seems the main goal is to attract users to its subscription services and provide monetization options to keep them there. In fact, you don’t have to include your music on SoundCloud or monetize it there if you don’t want to – say, if you want it just on Apple or just on Spotify or some other combination. SoundCloud hopes you will, though; a spokesperson for the company tells us, “Monetizing tracks through SoundCloud Premier monetization gets creators the best revenue share rate on SoundCloud and fast payouts.”

I suspect SoundCloud does hope to use this offering to help build up their catalog, of course – which makes sense for them. The big challenge SoundCloud’s business faces is that, while the service has a lot of original music the likes of Spotify and Apple lack, its catalog still lacks much of the major-label music a lot of people want to listen to. And they’re in the unique position of wanting to attract both creators and listeners. That could be good in the long run for us as creators, but so far it’s meant that we tend to use SoundCloud as a way of building audience for other services (and, for a lot of us, trying to convince people to buy downloads or physical music).

SoundCloud’s creator-facing tools are essentially unparalleled; the limited tools on Spotify and Apple are fairly weak and confusing. The real pitfalls here aren’t so much about SoundCloud as they are about streaming – streaming revenue for a lot of smaller artists is disappointing or even nonexistent. And this won’t help your music get playlisted or found on those services; it’ll just get you over the initial barrier of distribution.

In other words, I think generally the pricier services for distribution that just dump music on streaming are going to get run out of business, in favor of offerings like SoundCloud’s. But that leaves opportunities for distributors who do work on promotion, as well as the “we’re not dead yet” strangeness of cassette tapes and vinyl still being viable distribution formats in 2019.

Do you qualify?

The open beta requires a SoundCloud Pro / Pro Unlimited subscription, and you have to be an adult (18+ or age of majority).

You have to control all the rights to your music. So if you’ve signed music to a label, for instance, or you have an existing distribution deal, you can’t upload even your own music – technically, you’ve signed away the right to do so.

You also can’t have any copyright strikes against you on SoundCloud. That’s a dicey issue, I know, though SoundCloud points CDM readers to copyright@soundcloud.com if you’ve got a question about copyright policy or you have a strike against you.

And you need at least 1000 plays in countries that have advertising available – US, UK, Canada, Australia, France, Germany, Ireland, The Netherlands, New Zealand.

It seems you don’t necessarily have to be living in one of those countries, however.

When do you get data or get paid?

This is the part I really like.

You get monthly reporting of numbers from all the services where you’re distributing.

There are monthly royalty payments, with no minimums.

This is a big break from the truly terrible way the industry often operates, which is to lock you into long-term contracts, take a big slice of the money you’ve earned, and then make data hard to retrieve and slow, and hold up what money is left based on weird payment schedules or minimum thresholds.

So just logging into a SoundCloud account and taking care of all of this – leaving you time to go figure out whom to talk to to make your music popular – is hugely appealing.

There’s a separate music ecosystem of DJ services like Beatport and Traxsource, plus of course the isolated but artist-friendly world of Bandcamp. I hope to check in with those services soon.

And there will still be room for distributors who offer more advanced customer service and relationships with those outlets, or bundle distribution with other services (including label management).

For everything else, though, the new SoundCloud offering looks like a significant breakthrough. I’ll be testing the beta, for my own music – even though the label we operate, Establishment, has a few weeks left on one of those terrible contracts I mentioned. Let us know if you have questions about this and we can ask our Berlin neighbors at SoundCloud.

For more or to sign up:

creators.soundcloud.com/premier
@creatorsonSC on Twitter

The post SoundCloud will now handle distributing your music – and give you a 100% cut appeared first on CDM Create Digital Music.

Apple’s latest Macs have a serious audio glitching bug

Delivered... Peter Kirn | Scene | Mon 18 Feb 2019 8:28 pm

Apple has a serious, apparently unresolved bug that causes problems with audio performance on external devices across all its latest Macs, thanks to the company’s own software and custom security chip. The only good news: there is a workaround.

According to bug reports online, the impacted machines are all the newest computers – those with Apple’s own T2 security chip:

  • iMac Pro
  • Mac mini models introduced in 2018
  • MacBook Air models introduced in 2018
  • MacBook Pro models introduced in 2018

The T2, in Apple’s words, “is Apple’s second-generation, custom silicon for Mac. By redesigning and integrating several controllers found in other Mac computers—such as the System Management Controller, image signal processor, audio controller, and SSD controller—the T2 chip delivers new capabilities to your Mac.”

The problem is, it appears that this new chip has introduced glitches on a wide variety of external audio hardware from across the pro audio industry, thanks to a bug in Apple’s software. When your Mac updates its system clock, dropouts and glitches appear in the audio stream. (Any hardware with a non-default clock source appears to be impacted. It’s a good bet that any popular external audio interface may exhibit the problem.)

The workaround is fairly easy: switch off “Set date and time automatically” in System Preferences.

More:
https://www.reddit.com/r/apple/comments/anvufc/psa_2018_macs_with_t2_chip_unusable_with_external/

https://discussions.apple.com/thread/8509051

https://www.logicprohelp.com/forum/viewtopic.php?t=138992

https://www.gearslutz.com/board/music-computers/1232030-usb-audio-glitches-macbook-pro-2018-a.html

https://openradar.appspot.com/46918065

But more alarming is that this is another serious quality control fumble from Apple. The value proposition with Apple has always been that the company’s control over its own hardware, software, and industrial engineering meant a more predictable product. But when Apple botches the quality of its own products and doesn’t test creative audio and video use cases, that value case quickly flips. You’re sacrificing choice and paying a higher price for a product that’s actually worse.

Apple’s recent Mac line has also come under fire for charging a premium price while sacrificing things users want (like NVIDIA graphics cards, affordable internal storage, or extra ports) – and, on the thin new MacBook and MacBook Pro lines, for keyboard reliability issues.

Before Windows users start gloating, of course, PCs can have reliability issues of their own. They’re just distributed across a wider range of vendors – which is part of the reason some musicians sought out Apple in the first place.

Regardless, Apple needs to test and address these kinds of issues. Apple’s iPad Pro line is fantastic and essentially unchallenged because of its unique software ecosystem and the weakness of low-cost PC and Android tablet options. But the Mac has to compete with increasingly impressive PC laptops and desktops at low costs, and a Windows operating system that has improved its audio plumbing (to say nothing of the fact that Linux now lets you run tools like Bitwig Studio and VCV Rack). And that’s why competition is a good thing – you might be happier with a different choice.

Anyway, if you do have one of these machines, let us know if you’ve been having trouble with this issue and if this workaround (hopefully) solves your problem.

The post Apple’s latest Macs have a serious audio glitching bug appeared first on CDM Create Digital Music.

VCV Rack nears 1.0, new features, as software modular matures

Delivered... Peter Kirn | Scene | Mon 18 Feb 2019 7:42 pm

VCV Rack, the open source platform for software modular, keeps blossoming. If what you were waiting for was more maturity and stability and integration, the current pipeline looks promising. Here’s a breakdown.

Even with other software modulars on the scene, Rack stands out. Its model is unique – build a free, open source platform, and then build the business on adding commercial modules, supporting both the platform maker (VCV) and third parties (the module makers). That has opened up some new possibilities: a mixed module ecosystem of free and paid stuff, support for ports of open source hardware to software (Music Thing Modular, Mutable Instruments), robust Linux support (which other Eurorack-emulation tools currently lack), and a particular community ethos.

Of course, the trade-off with Rack 0.xx is that the software has been fairly experimental. Versions 1.0 and 2.0 are now in the pipeline, though, and they promise a more refined interface, greater performance, a more stable roadmap, and more integration with conventional DAWs.

New for end users

VCV founder and lead developer Andrew Belt has been teasing out what’s coming in 1.0 (and 2.0) online.

Here’s an overview:

  • Polyphony, polyphonic cables, polyphonic MIDI support and MPE
  • Multithreading and hardware acceleration
  • Tooltips, manual data entry, and right-click menus for more information on modules
  • Virtual CV to MIDI and direct MIDI mapping
  • 2.0 version coming with fully-integrated DAW plug-in

More on that:

Polyphony and polyphonic cables. The big one – you can now use polyphonic modules and even polyphonic patching. Here’s an explanation:

https://community.vcvrack.com/t/how-polyphonic-cables-will-work-in-rack-v1/

New modules will help you manage this.
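As a conceptual sketch of how a polyphonic cable differs from a mono one: one cable carries up to 16 channel voltages, a mono input just sees one channel, and a summing input mixes all channels down. This is a toy model only – the real Rack v1 API is C++, and the `Cable` class and method names here are hypothetical:

```python
# Toy model of Rack-style polyphonic cables. Conceptual sketch only;
# the actual Rack v1 engine is C++, and these names are hypothetical.

MAX_CHANNELS = 16  # a polyphonic cable carries up to 16 channels

class Cable:
    def __init__(self, voltages):
        assert 1 <= len(voltages) <= MAX_CHANNELS
        self.voltages = list(voltages)

    def get_voltage(self, channel=0):
        """What a monophonic input sees: a single channel (default 0)."""
        return self.voltages[channel] if channel < len(self.voltages) else 0.0

    def get_voltage_sum(self):
        """Mixdown used when a poly cable feeds a mono summing input."""
        return sum(self.voltages)

poly = Cable([1.0, 0.5, -0.25])   # a 3-voice polyphonic cable
print(poly.get_voltage())         # 1.0  (mono view: channel 0 only)
print(poly.get_voltage_sum())     # 1.25 (all three voices summed)
```

The practical upshot is backwards compatibility: a patch cable is still just a cable, and mono modules keep working even when fed polyphonic signals.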

Polyphonic MIDI and MPE. Yep, native MPE support. We’ve seen this in some competing platforms, so great to see here.

Multithreading. Rack will now use multiple cores on your CPU more efficiently. There’s also a new DSP framework that adds CPU acceleration (which helps efficiency for polyphony, for example). (See the developer section below.)

Oversampling for better audio quality. Users can set higher settings in the engine to reduce aliasing.
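Why oversampling reduces aliasing comes down to folding arithmetic: any partial above half the sample rate folds back into the audible band. Here’s a sketch assuming an ideal sampler; `alias_frequency` is a hypothetical helper, not anything from Rack itself:

```python
# Aliasing in one function: an ideal sampler cannot distinguish a
# partial at f from one folded around multiples of the sample rate,
# so out-of-band partials land back in the audible range.

def alias_frequency(f, sample_rate):
    """Frequency an ideal sampler actually records for a partial at f (Hz)."""
    folded = f % sample_rate
    return min(folded, sample_rate - folded)

# A 30 kHz partial at 44.1 kHz sampling folds down into the audible band:
print(alias_frequency(30_000, 44_100))   # 14100
# Run the engine at 2x oversampling and the same partial stays where it is:
print(alias_frequency(30_000, 88_200))   # 30000
```

That’s the whole trick: render at a higher internal rate so harmonics land above audibility (or can be filtered off) instead of folding down as inharmonic junk.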

Tooltips and manual value entry. Get more feedback from the UI and more precise control. You can also right-click to open other stuff – links to the developer’s website, manual (yes!), source code (for those that have it readily available), or factory presets.

Core CV-MIDI. Send virtual CV to outboard gear as MIDI CC, gate, note data. This also integrates with the new polyphonic features. But even better –

Map MIDI directly. The MIDI map module lets you map parameters without having to patch through another module. A lot of software has been pretty literal with the modular metaphor, so this is a welcome change.
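The CV-to-MIDI conversion above rests on the 1 V/octave convention. A minimal sketch, assuming Rack’s usual mapping of 0 V to middle C (MIDI note 60); `cv_to_midi_note` is a hypothetical helper, not part of any Rack API:

```python
def cv_to_midi_note(volts):
    """Map a 1 V/oct control voltage to a MIDI note number.

    Assumes the common Rack convention: 0 V = C4 = MIDI note 60,
    so each volt adds 12 semitones.
    """
    return 60 + round(volts * 12)

print(cv_to_midi_note(0.0))    # 60 (C4)
print(cv_to_midi_note(1.0))    # 72 (C5 -- one volt, one octave up)
print(cv_to_midi_note(-0.25))  # 57 (A3 -- three semitones down)
```

The same arithmetic run in reverse is what a MIDI-to-CV module does, which is why the two directions pair so naturally with outboard gear.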

And that’s just what’s been announced. 1.0 is due in the coming months, and 2.0 is coming as well…

Rack 2.0 and VCV for DAWs. After 1.0, 2.0 isn’t far behind. “Shortly after” 2.0 is released, a DAW plug-in will be launched as a paid add-on, with support for “multiple instances, DAW automation with parameter labels, offline rendering, MIDI input, DAW transport, and multi-channel audio.”

These plans aren’t totally set yet, but a price around a hundred bucks and multiple ins and outs are also planned. (Multiple I/O also means some interesting integrations will be possible with Eurorack or other analog systems, for software/hardware hybrids.)

VCV Bridge is already deprecated, and will be removed from Rack 2.0. Bridge was effectively a stopgap for allowing crude audio and MIDI integration with DAWs. The planned plug-in sounds more like what users want.

Rack 2.0 itself will still be free and open source software, under the same license. The good thing about the plug-in is, it’s another way to support VCV’s work and pay the bills for the developer.

New for developers

Rack v1 is under a BSD license – proper free and open source software. There’s even a mission statement that deals with this.

Rack v1 will bring a new, stabilized API – meaning you will need to do some work to port your modules. It’s not a difficult process, though – and I think part of Rack’s appeal is the friendly API and SDK from VCV.

https://vcvrack.com/manual/Migrate1.html

You’ll also be able to use an SSE wrapper (simd.hpp) to take advantage of accelerated code on desktop CPUs, without hard-coding manual calls to hardware that could break your plug-ins in the future. This also theoretically opens up future support for other platforms – like NEON or AVX acceleration. (It does seem like ARM platforms are the future, after all.)

Plus check this post for adding polyphony to your stuff.

And in other Rack news…

Also worth mentioning:

While the Facebook group is still active and a place where a lot of people share work, there’s a new dedicated forum. That does things Facebook doesn’t allow, like efficient search, structured sections in chronological order so it’s easy to find answers, and generally not being part of a giant, evil, destructive platform.

https://community.vcvrack.com/

It’s powered by open source forum software Discourse.

For a bunch of newly free add-ons, check out the wonderful XFX stuff (I paid for at least one of these, and would do so again if they add more commercial stuff):

http://blamsoft.com/vcv-rack/

Vult is a favorite of mine, and there’s a great review this week, with 79 demo patches too:

There’s also a new version of Mutable Instruments Tides, Tidal Modular 2, available in the Audible Instruments Preview add-on – and 80% of your money goes to charity.

https://vcvrack.com/AudibleInstruments.html#preview

And oh yeah, remember that in the fall Rack already added support for hosting VST plugins, with VST Host. It will even work inside the forthcoming plugin, so you can host plugins inside a plugin.

https://vcvrack.com/Host.html

Here it is with the awesome d16 stuff, another of my addictions:

Great stuff. I’m looking forward to some quality patching time.

http://vcvrack.com/

The post VCV Rack nears 1.0, new features, as software modular matures appeared first on CDM Create Digital Music.

Live coding group toplap celebrates days of live streaming, events

Delivered... Peter Kirn | Scene | Fri 15 Feb 2019 5:21 pm

What began as a niche field populated mainly by code jockeys has grown into a worldwide movement of artists, many of them new to programming. One key group, TOPLAP, celebrates 15 years of operation with live streams and events.

Image at top – Olivia Jack’s Hydra in action, earlier this month at our MusicMakers Hacklab at CTM Festival. We’ll be talking to Olivia over the weekend about live coding visuals, and you can catch her in Berlin tonight – or online – see below.

Here’s the full announcement – eloquently worded enough that I’ll just copy it here – check this crazy schedule, which began yesterday:

Live coding is about making live music, visuals and other time-based arts by writing and manipulating code. Recently it’s been popularised as Algorave, but is a technique used in all kinds of genres and artforms.

The open worldwide live coding community, which goes by the name of TOPLAP (Temporary Organisation for the Promotion of Live Algorithm Programming), was formed 15 years ago (14th February, 2004) at an event called Changing Grammars in Hamburg.

Now this worldwide community is coming together to make a continuous 3.5-day live stream with over 168 half-hour performance slots.

Watch here:
http://toplap.org/wearefifteen/

Join the livestream chat here:
https://talk.lurk.org/channel/toplap15

There are over 168 performances from 14th–17th February, quite a few beamed from local celebratory events being organised around the place (Prague, London, NYC, Amsterdam, Madison, Bath, Argentina, Richmond, Hamilton, …), and others by individuals who’ll be live coding from their sofas.

Anyone going to stay up to watch the whole thing?

Here in Berlin tonight, there’s a live and in-person event featuring 𝕭𝖅𝕲𝕽𝕷, Calum Gunn, Olivia Jack with Alexandra Cardenas, Yaxu (who we hosted here last year), and Renick Bell:

KEYS: computer music ~ digital arts | Renick Bell • Yaxu & more [Facebook event]

Algorave and TOPLAP have made major efforts to be more gender balanced and inclusive and community driven – a topic deep enough that I’ll leave it for another time, as they’ve worked on some specific techniques to enable this. But it’s extraordinary what people are doing with code – and yes, if typing isn’t your favorite mode of control, some are also extending these tools to physical controllers and other live performance techniques. Live coding in one form or another has been around decades, but now is possibly the best time yet for this scene. We’ll be watching – and streaming. Stay tuned.

The post Live coding group toplap celebrates days of live streaming, events appeared first on CDM Create Digital Music.

Teenage Engineering OP-1 synth is back in stock, here to stay

Delivered... Peter Kirn | Scene | Thu 14 Feb 2019 9:30 pm

It put the boutique Swedish maker on the music map, and helped usher in new interest in mobile devices and slick design. Now the OP-1 from Teenage Engineering is back in stock, and its makers say it’s here to stay.

That should be good news for OP-1 fans. Sure, the OP-Z has some fancy new features, but it loses the all-in-one functionality and inviting display of the OP-1. And Pocket Operators – both in their original mini-calculator form and now in a line of inexpensive kit modular – well, that’s for another audience. The OP-1, love it or hate it, is really unlike anything else out there. And someone must want it, because it’s been in demand nearly a decade after its first appearance.

Teenage Engineering shared today that they were resurrecting the OP-1 (under the headline “love never dies,” for Valentine’s Day). Here’s that announcement:

after being out of stock for more than a year with rumours of its demise, we are very happy to let you know that finally, the OP-1 is back and here to stay!

so what happened?

during our nine years of production, we have been very lucky in having a steady supply of the components needed for the OP-1. but last year we suddenly found ourselves without the amoled screen needed and nowhere to find new ones in the same high quality. but after a long time sourcing the perfect replacement, we have finally found it, and we will now be able to fulfil the demand that’s been growing for the past year.

Hmm, maybe the Teenagers want to start a side business reselling that display part? I’m interested.

Anyway, you can buy an OP-1 new now if you couldn’t find it on the used market – or watch for used prices to come down accordingly. Let’s celebrate with a little OP-1 reminiscence, as I know for some of you, Teenage Engineering’s other stuff just doesn’t compare.

Also – shoes!

TĀLĀ is right – Teenage Engineering OP-1 is a great desert island synth

Teenage Engineering: Opbox Sensors and Shoes, OP-1 Drums and MIDI Sync

Teenage Engineering’s OP-1 Instrument: Hands-on, Videos, Why it’s Different

Someday I hope Elijah Wood says nice things about me:

https://teenageengineering.com/products/op-1

The post Teenage Engineering OP-1 synth is back in stock, here to stay appeared first on CDM Create Digital Music.

Why is this Valentine’s song made by an AI app so awful?

Delivered... Peter Kirn | Scene | Wed 13 Feb 2019 11:19 pm

Do you hate AI as a buzzword? Do you despise the millennial whoop? Do you cringe every time Valentine’s Day arrives? Well – get ready for all those things you hate in one place. But hang in there – there’s a moral to this story.

Now, really, the song is bad. Like laugh-out-loud bad. Here’s iOS app Amadeus Code “composing” a song for Valentine’s Day, which says love much in the way a half-melted milk chocolate heart does, but – well, I’ll let you listen, millennial pop cliches and all:

Fortunately this comes after yesterday’s quite stimulating ideas from a Google research team – proof that you might actually use machine learning for stuff you want, like improved groove quantization and rhythm humanization. In case you missed that:

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Now, as a trained composer / musicologist, I do find this sort of exercise fascinating. And on reflection, I think the failure of this app tells us a lot – not just about machines, but about humans. Here’s what I mean.

Amadeus Code is an interesting idea – a “songwriting assistant” powered by machine learning, delivered as an app. And it seems machine learning could generate, for example, smarter auto accompaniment tools or harmonizers. Traditionally, those technologies have been driven by rigid heuristics that sound “off” to our ears, because they aren’t able to adequately follow harmonic changes in the way a human would. Machine learning could – well, theoretically, with the right dataset and interpretation – make those tools work more effectively. (I won’t re-hash an explanation of neural network machine learning, since I got into that in yesterday’s article on Magenta Studio.)

https://amadeuscode.com/

You might well find some usefulness from Amadeus, too.

This particular example does not sound useful, though. It sounds soulless and horrible.

Okay, so what happened here? Music theory at least cheers me up even when Valentine’s Day brings me down. Here’s what the developers sent CDM in a pre-packaged press release:

We wanted to create a song with a specific singer in mind, and for this demo, it was Taylor Swift. With that in mind, here are the parameters we set in the app.

Bpm set to slow to create a pop ballad
To give the verses a rhythmic feel, the note length settings were set to “short” and also since her vocals have great presence below C, the note range was also set from low~mid range.
For the chorus, to give contrast to the rhythmic verses, the note lengths were set longer and a wider note range was set to give a dynamic range overall.

After re-generating a few ideas in the app, the midi file was exported and handed to an arranger who made the track.

Wait – Taylor Swift is there just how, you say?

Taylor’s vocal range is somewhere in the range of C#3-G5. The key of the song created with Amadeus Code was raised a half step in order to accommodate this range making the song F3-D5.

From the exported midi, 90% of the topline was used. The rest of the 10% was edited by the human arranger/producer: The bass and harmony files are 100% from the AC midi files.
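The half-step transposition and range check described in that quote come down to simple MIDI arithmetic. The melody notes below are hypothetical; only the C#3–G5 range is taken from the text (MIDI convention: C4 = 60, so C#3 = 49 and G5 = 79):

```python
# Sketch of the range-fit check: transpose a melody up a semitone and
# confirm it still sits inside the singer's range. Melody notes are
# made up for illustration; the C#3..G5 range comes from the article.

def transpose(notes, semitones):
    """Shift every MIDI note by the given number of semitones."""
    return [n + semitones for n in notes]

def fits_range(notes, low, high):
    """True if every note lies within [low, high] inclusive."""
    return min(notes) >= low and max(notes) <= high

SINGER_LOW, SINGER_HIGH = 49, 79     # C#3..G5, the cited vocal range
melody = [52, 57, 64, 73]            # hypothetical exported topline
raised = transpose(melody, 1)        # the half-step lift from the quote
print(fits_range(raised, SINGER_LOW, SINGER_HIGH))  # True
```

It’s trivial math, but it shows how little of this workflow is actually “AI” – the model proposes notes, and humans shift and filter them to fit a voice.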

Now, first – these results are really impressive. I don’t think traditional melodic models – theoretical and mathematical in nature – are capable of generating anything like this. They’ll tend to fit melodic material into a continuous line, and as a result will come out fairly featureless.

No, what’s compelling here is not so much that this sounds like Taylor Swift, or that it sounds like a computer, as it sounds like one of those awful commercial music beds trying to be a faux Taylor Swift song. It’s gotten some of the repetition, some of the basic syncopation, and oh yeah, that awful overused millennial whoop. It sounds like a parody, perhaps because partly it is – the machine learning has repeated the most recognizable cliches from these melodic materials, strung together, and then that was further selected / arranged by humans who did the same. (If the machines had been left alone without as much human intervention, I suspect the results wouldn’t be as good.)

In fact, it picks up Swift’s tics – some of the funny syncopations and repetitions – but without stringing them together, like watching someone do a bad impression. (That’s still impressive, though, as it does represent one element of learning – if a crude one.)

To understand why this matters, we’re going to have to listen to a real Taylor Swift song. Let’s take this one:

Okay, first, the fact that the real Taylor Swift song has words is not a trivial detail. Adding words means adding prosody – so elements like intonation, tone, stress, and rhythm. To the extent those elements have resurfaced as musical elements in the machine learning-generated example, they’ve done so in a way that no longer is attached to meaning.

No amount of analysis, machine or human, can be generative of lyrical prosody for the simple reason that analysis alone doesn’t give you intention and play. A lyricist will make decisions based on past experience and on the desired effect of the song, and because there’s no real right or wrong to how to do that, they can play around with our expectations.

Part of the reason we should stop using AI as a term is that artificial intelligence implies decision making, and these kinds of models can’t make decisions. (I did say “AI” again because it fits into the headline. Or, uh, oops, I did it again. AI lyricists can’t yet hammer “oops” as an interjection or learn the playful setting of that line – again, sorry.)

Now, you can hate the Taylor Swift song if you like. But it’s catchy not because of a predictable set of pop music rules so much as its unpredictability and irregularity – the very things machine learning models of melodic space are trying to remove in order to create smooth interpolations. In fact, most of the melody of “Blank Space” is a repeated tonic note over the chord progression. Repetition and rhythm are also combined into repeated motives – something else these simple melodic models can’t generate, by design. (Well, you’ll hear basic repetition, but making a relationship between repeated motives again will require a human.)

It may sound like I’m dismissing computer analysis. I’m actually saying something more (maybe) radical – I’m saying part of the mistake here is assuming an analytical model will work as a generative model. Not just a machine model – any model.

This mistake is familiar, because almost everyone who has ever studied music theory has made the same mistake. (Theory teachers then have to listen to the results, which are often about as much fun as these AI results.)

Music theory analysis can lead you to a deeper understanding of how music works, and how the mechanical elements of music interrelate. But it’s tough to turn an analytical model into a generative model, because the “generating” process involves decisions based on intention. If the machine learning models sometimes sound like a first-year graduate composition student, that may be because, like that student, they’re steeped in the analysis but not in the experience of decision making. But that’s important. The machine learning model won’t get better, because while it can keep learning, it can’t really make decisions. It can’t learn from what it’s learned, as you can.

Yes, yes, app developers – I can hear you aren’t sold yet.

For a sense of why this can go deep, let’s turn back to this same Taylor Swift song. The band Imagine Dragons picked it up and did a cover, and, well, the chord progression will sound more familiar than before.

As it happens, in a different live take I heard the lead singer comment (unironically) that he really loves Swift’s melodic writing.

But, oh yeah, even though pop music recycles elements like chord progressions and even groove (there’s the analytic part), the results take on singular personalities (there’s the human-generative side).

“Stand by Me” dispenses with some of the tics of our current pop age – millennial whoops, I’m looking at you – and, at least as well as you can in the English language, hits some emotional meaning of the words in the way they’re set musically. It’s not a mathematical average of a bunch of tunes, either. It’s a reference to a particular song that meant something to its composer and singer, Ben E. King.

This is his voice, not just the emergent results of a model. It’s a singer recalling a spiritual that hit him with those same three words, which sets a particular psalm from the Bible. So yes, drum machines have no soul – at least until we give them one.

“Sure,” you say, “but couldn’t the machine learning eventually learn how to set the words ‘stand by me’ to music”? No, it can’t – because there are too many possibilities for exactly the same words in the same range in the same meter. Think about it: how many ways can you say these three words?

“Stand by me.”

Where do you put the emphasis, the pitch? There’s prosody. What melody do you use? Keep in mind just how different Taylor Swift and Ben E. King were, even with the same harmonic structure. “Stand,” the word, is repeated as a suspension – a dissonant note – above the tonic.

And even those observations still lie in the realm of analysis. The texture of this coming out of someone’s vocal cords, the nuances to their performance – that never happens the same way twice.

Analyzing this will not tell you how to write a song like this. But it will throw light on each decision, make you hear it that much more deeply – which is why we teach analysis, and why we don’t worry that it will rob music of its magic. It means you’ll really listen to this song and what it’s saying, listen to how mournful that song is.

And that’s what a love song really is:

If the sky that we look upon
Should tumble and fall
Or the mountain should crumble to the sea
I won’t cry, I won’t cry
No, I won’t shed a tear
Just as long as you stand
Stand by me

Stand by me.

Now that’s a love song.

So happy Valentine’s Day. And if you’re alone, well – make some music. People singing about heartbreak and longing have gotten us this far – and it seems if a machine does join in, it’ll happen when the machine’s heart can break, too.

PS – let’s give credit to the songwriters, and a gentle reminder that we each have something to sing that only we can:
Singer Ben E. King, Best Known For ‘Stand By Me,’ Dies At 76 [NPR]

The post Why is this Valentine’s song made by an AI app so awful? appeared first on CDM Create Digital Music.

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Delivered... Peter Kirn | Scene | Tue 12 Feb 2019 8:34 pm

Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.

Magenta Studio provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, it lets you generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.

If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Max, though, there’s also a standalone version. Developers can dig far deeper into the tools and modify them for their own purposes – and even if you have just a little comfort with the command line, you can also train your own models. (More on that in a bit.)

Side note of interest to developers: this is also a great showcase for doing powerful stuff with machine learning using just JavaScript, applying even GPU acceleration without having to handle a bunch of complex, platform-specific libraries.

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:

g.co/magenta
g.co/magenta/studio

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer —

https://nsynthsuper.withgoogle.com/

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that algorithmically loops over and over. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly means that you can predict sequences more and more effectively given a particular data set.
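To make that loop concrete, here’s a toy Python sketch of the recurrence – with random, untrained weights, so it’s an illustration of the mechanism only, not Magenta’s code or anything that would produce musical output. The hidden state carries what the model has “heard” so far; each pass folds in a new note and scores the possible next ones.

```python
import numpy as np

# Toy recurrent step: random weights stand in for a trained network.
rng = np.random.default_rng(0)

HIDDEN, VOCAB = 16, 8                        # hidden state size, number of possible notes
Wxh = rng.normal(0, 0.1, (HIDDEN, VOCAB))    # input -> hidden
Whh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))   # hidden -> hidden (this is the "loop")
Why = rng.normal(0, 0.1, (VOCAB, HIDDEN))    # hidden -> output scores

def step(h, note):
    """One pass of the loop: fold the new note into the running state."""
    x = np.zeros(VOCAB)
    x[note] = 1.0                            # one-hot encoding of the note
    h = np.tanh(Wxh @ x + Whh @ h)           # state now reflects the whole sequence so far
    scores = Why @ h                         # a score for each candidate next note
    return h, scores

h = np.zeros(HIDDEN)
for note in [0, 2, 4, 2]:                    # feed in a short melody
    h, scores = step(h, note)

predicted_next = int(np.argmax(scores))      # the model's "prediction"
print(predicted_next)
```

Training adjusts those weight matrices so that, over a large data set, the scores actually predict what tends to come next – which is the sense in which the model “learns.”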

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.

One reason that it’s cool that Magenta and Magenta Studio are open source is that you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)

What’s in Magenta Studio

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.

Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence they chose this new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations, and the length in bars.
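A common way temperature-style controls work – a sketch of the general idea, not necessarily the exact math inside Magenta Studio – is to divide the model’s output scores by the temperature before converting them to probabilities and sampling. Low temperature makes the most likely choice dominate; high temperature flattens the distribution so less likely choices come up more often:

```python
import numpy as np

# Temperature sampling sketch: scale scores, softmax, then draw.
def sample(scores, temperature, rng):
    scaled = np.asarray(scores, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())    # softmax, numerically stable
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(1)
scores = [2.0, 1.0, 0.2, 0.1]                # model strongly prefers note 0

cold = [sample(scores, 0.1, rng) for _ in range(20)]  # low: very predictable
hot = [sample(scores, 5.0, rng) for _ in range(20)]   # high: more surprising
print(cold, hot)
```

At very low temperatures the output collapses onto the single most likely choice, which is also why the control doesn’t feel like a simple “randomness” knob.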

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is fairly set up with expectations about what a drum kit is, and with melodies around a 12-tone equal tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options:

Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue: This is actually a bit closer to what Magenta’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.

Interpolate: Instead of one clip, use two clips and merge/morph between them.

Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.

Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.
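To picture what Interpolate is doing conceptually: a MusicVAE-style model encodes each clip into a latent vector, blends between the two vectors, and decodes each blend back into notes. Here’s a toy Python sketch where `encode` and `decode` are deliberately trivial stand-ins – in the real model both are learned networks, and the latent space is where the interesting morphing happens:

```python
import numpy as np

# Toy stand-ins: the real MusicVAE encoder/decoder are trained networks.
def encode(clip):
    return np.asarray(clip, dtype=float)     # toy: the clip is its own "latent vector"

def decode(z):
    return [int(round(v)) for v in z]        # toy: round back to note numbers

clip_a = [60, 62, 64, 65]                    # two 4-note melodies (MIDI note numbers)
clip_b = [72, 71, 69, 67]

z_a, z_b = encode(clip_a), encode(clip_b)
for alpha in [0.0, 0.25, 0.5, 0.75, 1.0]:    # morph from clip A to clip B
    z = (1 - alpha) * z_a + alpha * z_b      # linear blend in latent space
    print(alpha, decode(z))
```

With a learned latent space, the intermediate points aren’t just numeric averages of the notes – they decode to melodies the model considers plausible, which is what makes the morphing musical rather than mushy.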

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.

More to the point with something like Magenta is, do you really get musically useful results?

Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as they do give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.

The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

g.co/magenta
g.co/magenta/studio

The post Magenta Studio lets you use AI tools for inspiration in Ableton Live appeared first on CDM Create Digital Music.

Two twisted desktop grooveboxes: hapiNES L, Acid8 MKIII

Delivered... Peter Kirn | Scene | Tue 12 Feb 2019 12:54 pm

Now the Nintendo NES inspires a new groovebox, with the desktop hapiNES. And not to be outdone, Twisted Electrons’ acid line is back with a MKIII model, too.

Twisted Electrons have been making acid- and chip music-flavored groovemakers of various sorts. That started with enclosed desktop boxes like the Acid8. But lately, we’d gotten some tiny models on exposed circuit boards, inspired by the Pocket Operator line from Teenage Engineering (and combining well with those Swedish devices, too).

Well, if you liked that Nintendo-flavored chip music sound but longed for a finished case and finger-friendly proper knobs and buttons, you’re in luck. The hapiNES L is here in preorder now, and shipping next month. It’s a groovebox with a 303-style sequencer and tons of parameter controls, but with a sound engine inspired by the RP2A07 chip.

“RP2A07” is not something that likely brings you back to your childhood (uh, unless you spent your childhood on a Famicom assembly line in Japan for some reason – very cool). Think back to the Nintendo Entertainment System and that unique, strident sound from the video games of the era – here with controls you can sequence and tweak rather than having to hard-code.

You get a huge range of features here:

Hardware MIDI input (sync, notes and parameter modulation)
Analog trigger sync in and out
USB-MIDI input (sync, notes and parameter modulation)
Dedicated VST/AU plugin for full DAW integration
4 tracks for real-time composing
Authentic triangle bass
2 squares with variable pulsewidth
59 synthesized preset drum sounds + 1 self-evolving drum sound
16 arpeggiator modes with variable speed
Vibrato with variable depth and speed
18 Buttons
32 LEDs
6 high quality potentiometers
16 pattern memory
3 levels of LED brightness (Beach, Studio, Club)
Live recording, key change and pattern chaining (up to 16 patterns/ 256 steps)
Pattern copy/pasting
Ratcheting (up to 4 hits per step)
Reset on any step (1-16 step patterns)

If you want to revisit the bare board version, here you go:

255 EUR before VAT.

https://twisted-electrons.com/product/hapines-l/

Okay, so that’s all well and good. But if you want an original 8-bit synth, the Acid8 is still worth a look. It’s got plenty of sound features all its own, and the MKIII release loads in a ton of new digital goodies – very possibly enough to break the Nintendo spell and woo you away from the NES device.

In the MKIII, there’s a new digital filter, new real-time effects (transposition automation, filter wobble, stutter, vinyl spin-down, and more), and dual oscillators.

Dual oscillators alone are interesting, and the digital filter gives this some of the edge you presumably crave if drawn to this device.

And if you are upgrading from the baby uAcid8 board, you add hardware MIDI, analog sync in and out, and of course proper controls and a metal case.

Specs:

USB-MIDI input (sync, notes and parameter modulation)
Hardware MIDI input (sync, notes and parameter modulation)
Analog sync trigger input and output
Dedicated VST/AU plugin for full DAW integration
18 Buttons
32 LEDs
6 high quality potentiometers
Arp Fx with variable depth and decay time
Filter Wobble with variable speed and depth
Crush Fx with variable depth
Pattern Copy/Pasting
Variable VCA decay (note length)
Tap tempo, variable Swing
Patterns can reset at any step (1-16 step pattern lengths)
Variable pulse-width (for square waveforms)
12 sounds: Square, Saw and Triangle each in 4 flavors (Normal, Distorted, Fat/Detuned, Harmonized/Techno).
3 levels of LED brightness (Beach, Studio, Club)
Live recording, key change and pattern chaining

Again, we have just the video of the board, but it gives you the idea. Quite clever, really, putting out these devices first as the inexpensive bare boards and then offering the full desktop releases.

More; also shipping next month with preorders now:

https://twisted-electrons.com/product/acid8-mkiii/

The post Two twisted desktop grooveboxes: hapiNES L, Acid8 MKIII appeared first on CDM Create Digital Music.

Live compositions on oscilloscope: nuuun, ATOM TM

Delivered... Peter Kirn | Scene | Mon 11 Feb 2019 6:26 pm

The Well-Tempered vector rescanner? A new audiovisual release finds poetry in vintage video synthesis and scan processors – and launches a new AV platform for ATOM TM.

nuuun, a collaboration between Atom™ (raster, formerly Raster-Noton) and Americans Jahnavi Stenflo and Nathan Jantz, have produced a “current suite.” These are all recorded live – sound and visuals alike – in Uwe Schmidt’s Chilean studio.

Minimalistic, exposed presentation of electronic elements is nothing new to the Raster crowd, who are known for bringing this raw aesthetic to their work. You could read that as part punk aesthetic, part fascination with visual imagery, rooted in the collective’s history in East Germany’s underground. But as these elements cycle back, now there’s a fresh interest in working with vectors as medium (see link below, in fact). As we move from novelty to more refined technique, more artists are finding ways of turning these technologies into instruments.

And it’s really the fact that these are instruments – a chamber trio, in title and construct – that’s essential to the work here. It’s not just about the impression of the tech, in other words, but the fact that working on technique brings the different media closer together. As nuuun describe the release:

Informed and inspired by Scan Processors of the early 1970’s such as the Rutt/Etra video synthesizer, “Current Suite No.1” uses the oscillographic medium as an opportunity to bring the observer closer to the signal. Through a technique known as “vector-rescanning”, one can program and produce complex encoded wave forms that can only be observed through and captured from analog vector displays. These signals modulate an electron-beam of a cathode-ray tube where the resulting phosphorescent traces reveal a world of hidden forms. Both the music and imagery in each of these videos were recorded as live compositions, as if they were intertwined two-way conversations between sound and visual form to produce a unique synesthetic experience.

“These signals modulate an electron-beam of a cathode-ray tube where the resulting phosphorescent traces reveal a world of hidden forms.”

So that covers the visual element. I was curious about sound, too. Nathan explains:

As for audio source, we used an MPC-3000 which did live sampling and playback of the video oscillators coming from the video synth, Korg PS-3300(!). So it was a bit of a two-way conversation happening between the images and the sound/modulation sources.

Even with lots of prominent festivals, audiovisual work – and putting visuals on equal footing with music – still faces an uphill battle. Online music distribution isn’t really geared for AV work; it’s not even obvious how audiovisual work is meant to be uploaded and disseminated apart from channels like YouTube or Vimeo. So it’s also worth noting that Atom™ is promising that NN will be a platform for more audiovisual work. We’ll see what that brings.

Of course, NOTON and Carsten Nicolai (aka Alva Noto) already have a rich fine art / high-end media art career going, and the “raster-media” launched by Olaf Bender in 2017 describes itself as a “platform – a network covering the overlapping border areas of pop, art, and science.” We at least saw raster continue to present installations and other works, extending their footprint beyond just the usual routine of record releases.

There’s perhaps not a lot that can be done about the fleeting value of music in distribution, but then music has always been ephemeral. Let’s look at it this way – for those of us who see sound as interconnected with image and science, any conduit to that work is welcome. So watch this space.

For now, we’ve got this first release:

http://atom-tm.com/NN/1/Current-Suite-No-IVideo/

Previously:

Vectors are getting their own festival: lasers and oscilloscopes, go!

In Dreamy, Electrified Landscapes, Nalepa ‘Daytime’ Music Video Meets Rutt-Etra

The post Live compositions on oscilloscope: nuuun, ATOM TM appeared first on CDM Create Digital Music.

NI now has killer, budget audio interfaces and compact keys

Delivered... Peter Kirn | Scene | Thu 7 Feb 2019 7:24 pm

Questions like “I just need a simple audio interface,” and “I want a compact keyboard that doesn’t suck,” along with “did I mention I’ve got almost no money?” – just got some new answers.

Native Instruments launched the new audio interfaces and the latest addition to their keyboard line as part of some grand, abstract PR idea called “for the music in you,” and said a bunch of things about starting points and ecosystems.

To cut to the chase – these are inexpensive, very mobile devices with a ton of bundled software extras that make sense for anyone on a budget, beginner or otherwise. And whereas most inexpensive stuff looks really cheap, they look pretty nice. (That holds up in person – I got a hands-on in Berlin just before NAMM.)

KOMPLETE AUDIO 1, AUDIO 2

There are two audio interfaces – KOMPLETE AUDIO 1 and KOMPLETE AUDIO 2. These take one of the best features of NI’s past audio interfaces – they put a big volume knob right on top so you can quickly adjust your level, and they’ve got meters so you can see what that level is. But crucially, they promise better audio quality.

There are two models here, but let me break it down for you: you don’t want the AUDIO 1, you want the AUDIO 2. Why?

The AUDIO 1 was clearly made with the idea that singers just want one mic input (so there’s only a single XLR in), and for some reason also with RCA jacks on the back (because consumers, I suppose).

But if you spend just a little more on the AUDIO 2, you get a lot more usefulness.

First, two inputs – both XLR/jack combo, for mics and instruments, with mic preamps and phantom power so you can use any microphone. My guess is at some point everyone wants to record two inputs rather than one. (Think line inputs, stereo instruments, a mic and an instrument… you get the point.)

And you get jack outputs instead of RCA.

So, quietly, NI just created the most affordable way of connecting a computer and a modular.

If you are a beginner, you get a bunch of software to play around with. Ableton Live 10 Lite is actually a reasonable version of Live to try – only 8 tracks, but all of the core functionality of the software and many instruments and effects. There’s also MASCHINE Essentials, MONARK, REPLIKA, PHASIS, SOLID BUS COMP, and KOMPLETE START, which represents plenty of music making time.

The price is really the big point: US$109 / 99 EUR and $139 / 129 EUR. Coming in March.

https://www.native-instruments.com/en/products/komplete/audio-interfaces/komplete-audio-1-audio-2/

A micro keyboard

If you want some sort of mobile input, there are now some wild multi-touch expressive controllers out there, like ROLI’s Seaboard Block and the Sensel Morph.

But what if you don’t want some new-fangled touch insanity? What if you just want a piano keyboard?

And you want it to be inexpensive, and fit in a backpack so you can take it with you or fit it on cramped desks?

Good news: you’ve got loads of options.

Bad news: they’re all kind of horrible. They’re ugly, and they feel cheap. And they have extras you may not need (like drum pads, mapped to the same channel as the keyboard, raising the question of why you wouldn’t just play the keys).

So I welcome the introduction of Native Instruments’ KOMPLETE KONTROL M32. This is one that I figured I needed myself the moment I saw it. (Normally, my reaction on keyboard product launches is more on the lines of – “God, please don’t make me write about another generic keyboard controller.”)

The feel is solid – a bit like some of the mini-key keyboards from Roland/Edirol a few years back. They don’t have the travel of full-sized keys, allowing this low profile, but seemed reasonably velocity sensitive.

Plus there are transport buttons and encoders, and two very usable touch strips. In software like Ableton Live and Apple Logic, these map to the usual transport features, and the encoders are assignable. In Native Instruments’ software, of course, you get the usual deep integration with parameters, browsing, and production.

The M32 will be a particularly strong companion to Maschine on the go, finally with a small footprint – something simply not possible with a 4×4 pad layout, much as I love it.

Speaking of Maschine – this is the full Maschine software. There’s a smaller sound bank, but even that is still 1.5GB. So when they say “Maschine Essentials,” they’re practically giving Maschine away. The other extras I mentioned above are slick, too – Reaktor Prism alone you could lose weeks or months in. Monark is a gorgeous Minimoog emulation with realistic filters and some sound design twists not on the original.

And it’s just US$129 (119 EUR). So it looks twice as expensive, but is actually cheaper than a lot of other options out there.

NI are trying to tell a lot of stories at once – something about Sounds.com, something about DJs, something about producers… and they’re following us all over social media and Google with constant ads.

But here’s the bottom line: this is the only compact keyboard at any price that both feels good and looks good, it’s still just over a hundred bucks, and the “beginners” bundle is likely to please advanced users for months.

Coming in March.

https://www.native-instruments.com/en/products/komplete/keyboards/komplete-kontrol-m32/

The post NI now has killer, budget audio interfaces and compact keys appeared first on CDM Create Digital Music.

Roland has registered the 303, 808 designs as trademarks

Delivered... Peter Kirn | Scene | Wed 6 Feb 2019 4:29 pm

Roland has quietly filed for trademark protection (Unionsmarkenanmeldung, an EU trade mark application) in Germany for the designs of the TB-303 and TR-808.

The filings were uncovered by a poster on the sequencer.de forum. The discussion is in German:

Roland versucht aktuell sich die 808-Farben und das 303-Design als Marke schützen zu lassen (“Roland is currently trying to have the 808 colors and the 303 design protected as trademarks”) [sequencer.de]

https://register.dpma.de/DPMAregister/marke/registerHABM?AKZ=018016159&CURSOR=34

https://register.dpma.de/DPMAregister/marke/registerHABM?AKZ=018016158&CURSOR=33

The “trademark” here is trade dress, the design of the actual appearance of the 303 and 808 – the signature layout of the keyboard and knobs of the 303, and the sequence of colored buttons on the 808. “Iconic” is a word that’s wildly overused, but here we can take it to be almost literally true: you can draw out these layouts and even a lot of lay people with a passing interest in electronic music will immediately recognize this bassline synth and drum machine.

Forum posters conclude that this is about Behringer, who announced last month at the NAMM show that they would ship their “RD-808” drum machine – matching the original TR-808 color scheme and button layout – in March. But the registration in Germany could be a sign Roland are generally planning to more aggressively protect their intellectual property, with respect to Behringer or others. And the RD-808 could, for instance, wind up being subject to litigation outside Germany – that is, anywhere the drum machine ships.

That said, Behringer without fanfare reversed the order of the colors on their RD-808, from a production prototype (orange / light orange / yellow / white, as on the original Roland) to what was shown at NAMM.

The one thing I can say for sure is – the artwork Roland filed from Japan is gorgeous. So, Roland, please don’t sue us for sharing. (And yeah, I’d buy this if you want to turn it into merch.)

No idea how long processing will take, or really how the law works; if I can find out, I’ll share. At least Germany should appreciate the aesthetics of combining gold, bright red, and black – check the flag.

Meanwhile, in America… Roland last year filed applications for trademark protection in the USA for the TR-808 and TR-909 (also right after the NAMM show, January 25, 2018). You can find these (pending) applications at the United States Patent and Trademark Office, under 87769864 and 87769891.

It’s routine practice to file for things you might want to protect, not necessarily manufacture, but that doesn’t make it any less privately amusing to read this list of apparel that would be covered under that application:

“Jackets; sweaters; sport shirts; polo shirts; shirts; overcoats; raincoats; underwear; pajamas; undershirts; Tee-shirts; wind-resistant jackets; swimming costumes; sleep masks; neckties; aprons; socks and stockings; bandanas; headwear; caps as a headwear; hats”

I totally want a Roland swimming costume. But yeah, if you’re thinking of making one yourself, you should read this:

https://www.roland.com/global/company/intellectual_property/

The post Roland has registered the 303, 808 designs as trademarks appeared first on CDM Create Digital Music.

Ableton Live 10.1: more sound shaping, work faster, free update

Delivered... Peter Kirn | Scene | Wed 6 Feb 2019 12:17 pm

There’s something about point releases – not the ones with any radical changes, but just the ones that give you a bunch of little stuff you want. That’s Live 10.1; here’s a tour.

Live 10.1 was announced today, but I sat down with the team at Ableton last month and have been working with pre-release software to try some stuff out. Words like “workflow” are always a bit funny to me. We’re talking, of course, mostly music making. The deal with Live 10.1 is, it gives you some new toys on the sound side, and makes mangling sounds more fun on the arrangement side.

Oh, and VST3 plug-ins work now, too. (MOTU’s DP10 also has that in an upcoming build, among others, so look forward to the Spring of VST3 Support.)

Let’s look at those two groups.

Sound tools and toys

User wavetables. Wavetable just got more fun – you can drag and drop samples onto Wavetable’s oscillator now, via the new User bank. You can get some very edgy, glitchy results this way, or if you’re careful with sample selection and sound design, more organic sounds.

This looks compelling.

Here’s how it works: Live splits up your audio snippet into 1024 sample chunks. It then smooths out the results – fading the edges of each table to avoid zero-crossing clicks and pops, and normalizing and minimizing phase differences. You can also tick a box called “Raw” that just slices up the wavetable, for samples that are exactly 1024 samples or a regular periodic multiple of that.
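In rough Python, the slicing-and-fading idea looks something like the sketch below. This is only the general concept as described above – Ableton’s actual smoothing and normalization code isn’t public, and the function and parameter names (`to_wavetable`, `fade_len`) are made up for illustration:

```python
import numpy as np

FRAME = 1024  # Live slices the sample into frames of this many samples

def to_wavetable(audio, frame=FRAME, fade_len=16):
    """Chop audio into fixed-size frames, fade each frame's edges to avoid
    clicks at the seams, and normalize each frame. A conceptual sketch only."""
    n = len(audio) // frame
    frames = audio[:n * frame].reshape(n, frame).copy()
    ramp = np.linspace(0.0, 1.0, fade_len)
    frames[:, :fade_len] *= ramp               # fade in at each frame's start
    frames[:, -fade_len:] *= ramp[::-1]        # fade out at each frame's end
    peaks = np.abs(frames).max(axis=1, keepdims=True)
    frames /= np.maximum(peaks, 1e-12)         # normalize each frame's level
    return frames

audio = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)  # one second of A3
table = to_wavetable(audio)
print(table.shape)
```

The “Raw” checkbox described above would correspond to skipping the fade/normalize steps entirely and just reshaping – which is why it only behaves well for material that’s already an exact multiple of 1024 samples per period.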

Give me some time and we can whip up some examples of this, but basically you can glitch out, mangle sounds you’ve recorded, carefully construct sounds, or just grab ready-to-use wavetables from other sources.

But it is a whole lot of fun and it suggests Wavetable is an instrument that will grow over time.

Here’s that feature in action:

Delay. Simple Delay and Ping Pong Delay have merged into a single lifeform called … Delay. That finally updates an effect that hasn’t seen love since the last decade. (The original ones will still work for backwards project compatibility, though you won’t see them in a device list when you create a new project – don’t panic.)

At first glance, you might think that’s all that’s here, but in typical Ableton fashion, there are some major updates hidden behind those vanilla, minimalist controls. So now you have Repitch, Fade, and Jump modes. And there’s a Modulation section with rate, filter, and time controls (as found on Echo). Oh, and look at that little infinity sign next to the Feedback control.

Yeah, all of those things are actually huge from a sound design perspective. So since Echo has turned out to be a bit too much for some tasks, I expect we’ll be using Delay a lot. (It’s a bit like that moment when you figure out you really want Simpler and Drum Racks way more than you do Sampler.)

The old delays. Ah, memories…

And the new Delay. Look closely – there are some major new additions in there.

Channel EQ. This is a new EQ with visual feedback and filter curves that adapt across the frequency range – that is, “Low,” “Mid,” and “High” each adjust their curves as you change their controls. Since it has just three controls, that means Channel EQ sits somewhere between the dumbed down EQ Three and the complexity of EQ Eight. But it also means this could be useful as a live performance EQ when you don’t necessarily want a big DJ-style sweep / cut.
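You can get a feel for the three-band idea with a toy crossover split. This models only the band split and per-band gains – not Channel EQ’s adaptive curves, which is the part Ableton actually added – and the crossover frequencies are made up:

```python
import numpy as np

def onepole_lp(x, sr, fc):
    """One-pole lowpass, coefficient from the usual exp() mapping."""
    a = np.exp(-2 * np.pi * fc / sr)
    y, prev = np.zeros(len(x)), 0.0
    for n in range(len(x)):
        prev = (1 - a) * x[n] + a * prev
        y[n] = prev
    return y

def channel_eq(x, sr, low=1.0, mid=1.0, high=1.0,
               low_x=200.0, high_x=2000.0):
    """Three bands from two lowpass splits. Because mid is defined as
    the remainder, the bands sum back to the input exactly when all
    gains are 1."""
    lo = onepole_lp(x, sr, low_x)
    hi = x - onepole_lp(x, sr, high_x)
    md = x - lo - hi
    return low * lo + mid * md + high * hi
```

The nice property of the remainder-based split is perfect reconstruction at unity gain – turn any knob and only that band moves.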

Here it is in action:

Arranging

The stuff above is fun, but you obviously don’t need it. Where Live 10.1 might help you actually finish music is in a slew of new arrangement features.

Live 10 felt like a work in progress as far as the Arrange view. I think it immediately made sense to some of us that Ableton were adjusting arrangement tools, and ironing out the difference between, say, moving chunks of audio around and editing automation (drawing all those lovely lines to fade things in and out, for instance).

But it felt like the story there wasn’t totally complete. In fact, the change may have been too subtle – different enough to disturb some existing users, but without a big enough payoff.

So here’s the payoff: Ableton have refined all those subtle Arrange tweaks with user feedback, and added some very cool shape drawing features that let you get creative in this view in a way that isn’t possible in other tools.

Fixing “$#(*& augh undo I didn’t want to do that!” Okay, this problem isn’t unique to Live. In every traditional DAW, your mouse cursor does conflicting things in a small amount of space. Maybe you’re trying to move a chunk of audio. Maybe you want to resize it. Maybe you want to fade in and out the edges of the clip. Maybe it’s not the clip you’re trying to edit, but the automation curves around it.

In studio terms, this sounds like one of the following:

[silent, happy clicking, music production getting … erm … produced]

OR ….
$#(*&*%#*% …. Noo errrrrrrrgggggg … GAACK! SDKJJufffff ahhh….

Live 10 added a toggle between automation editing and audio editing modes. For me, I was already doing less of the latter. But 10.1 is dramatically better, thanks to some nearly imperceptible adjustments to the way those clip handles work, because you can more quickly change modes, and because you can zoom more easily. (The zoom part may not immediately seem connected to this, but it’s actually the most important part – because navigating from your larger project length to the bit you’re actually trying to edit is usually where things break down.)

In technical terms, that means the following:

Quick zoom shortcuts. I’ll do a separate story on these, because they’re so vital, but you can now jump to the whole song, details, zoom various heights, and toggle between zoom states via keyboard shortcuts. There are even a couple of MIDI-mappable ones.

Clips in Arrangement have been adjusted. From the release notes: “The visualisation of Arrangement clips has been improved with adjusted clip borders and refinements to the way items are colored.” Honestly, you won’t notice, but ask the person next to you how much you’re grunting / swearing like someone is sticking something pointy into your ribs.

Pinch gestures! You can pinch-zoom the Arrangement and MIDI editor with Option or Alt keys – that works well on Apple trackpads and newer PC trackpads. And yeah, this means you don’t have to use Apple Logic Pro just to pinch zoom. Ahem.

The Clip Detail View is clearer, too, with a toggle between automation and modulation clearly visible, and color-coded modulation for everything.

The Arrangement Overview was also adjusted with better color coding and new resizing.

In addition, Ableton have worked a lot with how automation editing functions. New in 10.1:

Enter numerical values. Finally.

Free-hand curves more easily. With grid off, your free-hand, wonky mouse curves now get smoothed into something more logical and with fewer breakpoints – as if you can draw better with the mouse/trackpad than you actually can.

Simplify automation. There’s also a command that simplifies existing recorded automation. Again – finally.
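Breakpoint thinning like this is classically done with Ramer–Douglas–Peucker: drop any point that deviates from the line between its neighbors by less than a tolerance. Here’s a sketch of that approach – Live’s actual algorithm is undocumented, so treat this as the generic technique, not Ableton’s:

```python
import math

def simplify(points, tol=0.01):
    """Ramer-Douglas-Peucker thinning on (time, value) breakpoints."""
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    length = math.hypot(x1 - x0, y1 - y0) or 1e-12
    dmax, imax = 0.0, 1
    for i in range(1, len(points) - 1):
        x, y = points[i]
        # perpendicular distance from the chord between the endpoints
        d = abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
        if d > dmax:
            dmax, imax = d, i
    if dmax <= tol:
        return [points[0], points[-1]]   # everything in between is noise
    # keep the worst offender and recurse on both halves
    return simplify(points[:imax + 1], tol)[:-1] + simplify(points[imax:], tol)
```

Nearly-collinear wobble collapses to two points; genuine corners survive. Raising `tol` is the equivalent of a more aggressive “Simplify” command.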

So that fixes a bunch of stuff, and while this is pretty close to what other DAWs do, I actually find Ableton’s implementation to be (at last) quicker and friendlier than most other DAWs. But Ableton kept going and added some more creative ideas.

Insert shapes. Now you have some predefined shapes that you can draw over automation lanes. It’s a bit like having an LFO / modulation, but you can work with it visually – so it’s nice for those who prefer that editing phase as a way to do their composition. Sadly, you can only access these via the mouse menu – I’d love some keyboard shortcuts, please – but it’s still reasonably quick to work with.

Modify curves. Hold down Option/Ctrl and you can change the shape of curves.

Stretch and skew. Reshape envelopes by stretching and skewing them, or stretch time / ripple edit.

Insert Shapes promises loads of fun in the Arrangement – words that have never been uttered before.
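Conceptually, inserting a shape just means rendering a waveform as breakpoints over a time range. A sketch of that idea – the shape names and call signature here are illustrative, not Live’s menu entries:

```python
import math

def shape_points(kind, start, end, cycles=4, steps=32):
    """Render a predefined shape as (time, value) automation
    breakpoints over [start, end] in beats, values normalized 0..1."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        phase = (t * cycles) % 1.0
        if kind == 'saw':
            v = phase
        elif kind == 'triangle':
            v = 1 - abs(2 * phase - 1)
        elif kind == 'square':
            v = 1.0 if phase < 0.5 else 0.0
        else:  # 'sine'
            v = 0.5 - 0.5 * math.cos(2 * math.pi * phase)
        pts.append((start + t * (end - start), v))
    return pts
```

The appeal over an LFO is exactly what the article says: the result is ordinary breakpoints you can then grab, skew, and simplify like anything else in the lane.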

Check out those curve drawing and skewing/scaling features in action:

Freeze/Export

You can freeze tracks with sidechains, instead of a stupid dialog box popping up to tell you you can’t, because it would break the space-time continuum or crash the warp core injectors or … no, there’s no earthly reason why you shouldn’t be able to freeze sidechains on a computer.

You can export return and master effects on the actual tracks. I know, I know. You really loved bouncing out stems from Ableton or getting stems to remix and having little bits of effects from all the tracks on separate stems that were just echos, like some weird ghost of whatever it was you were trying to do. And I’m a lazy kid, who for some reason thinks that’s completely illogical since, again, this is a computer and all this material is digital. But yes, for people who are soft like me, this will be a welcome feature.

So there you have it. Plus you now get VST3s, which is great, because VST3 … is so much … actually, you know, even I don’t care all that much about that, so let’s just say now you don’t have to check if all your plug-ins will run or not.

Go get it

One final note – Max for Live. 10.0.6 synchronized with Max 8.0.2. See those release notes from Cycling ’74:

https://cycling74.com/forums/max-8-0-2-released

Live 10.1 is keeping pace, with the beta you download now including Max 8.0.3.

Ableton haven’t really “integrated” Max for Live; they’re still separate products. And so that means you probably don’t want perfect lockstep between Max and Live, because that could mean instability on the Live side. It’d be more accurate to say that what Ableton have done is to improve the relationship between Max and Live, so that you don’t have to wait as long for Max improvements to appear in Max for Live.

Live 10.1 is in beta now with a final release coming soon.

Ableton Live 10.1 release notes

And if you own a Live 10.1 license, you can join the beta group:

Beta signup

Live 10.1: User wavetables, new devices and workflow upgrades

Thanks to Ableton for those short videos. More on these features soon.

The post Ableton Live 10.1: more sound shaping, work faster, free update appeared first on CDM Create Digital Music.

The synth modules of winter: your Eurorack radar

Delivered... Peter Kirn | Scene | Tue 5 Feb 2019 7:40 pm

The waves of synth modules never stop coming, as obsessed engineers keep making them and sound tinkerers keep buying them. So let’s catch up with what’s out there, in the wake of the NAMM show in California late last month.

Most of these are from NAMM, but there are some other sightings recently, as well.

Make Noise’s new modulation monster. Make Noise have made a name for themselves with some real weirdness that then shaped a lot of the music scene. The Quad Peak Animation System is the latest from them – a wild modulation system that can make vocalization-like sounds, with fast-responding multiple resonant filter peaks across a stereo image. In other words, this thing can sing – in an odd way – in stereo.

The best part of the story behind this is that Tony Rolando of Make Noise partly got the idea while calibrating Moog Voyagers … and now he’s applying that to making something crazy and new.

http://makenoisemusic.com/modules/qpas

Now we have multiple videos of that:

Low-cost Buchla. There’s a phrase I’ve never typed before. The Buchla USA company themselves are working to bring Buchla to the masses, with the new low-cost Red Label line of modules. This is 100 series stuff, the historical modules that really launched the West Coast sound – mixer, quad gate, dual-channel oscillator, filters, reverb, and more. There’s even a case and – of course – a touch surface for input, because keyboards are the devil’s playground. Good people are involved – Dave Small (Catalyst Audio) and Todd Barton – so this is one to watch.

https://buchla.com/

A module that’s whatever you want it to be. Nozori is a Kickstarter-backed project to make multifunctional modules – buy a module once, then switch modes via software (and of course coordinated faceplates). People must like the idea, because it’s already well funded, and you still have a week left if you want in.

http://kck.st/2TPKDdT

Lightning in a bottle. Gamechanger have a wild technology that lets you “play a lightning bolt” – basically, incorporating Tesla coils into their hardware. They’ve done that once with the Plasma Pedal, which we hope to test soon. With Erica Synths, they’ll stick this in a module – and let you use high-voltage discharges in a xenon-filled tube. That looks cool and should sound wild; you get distortion with CV control in this module, octave up/down tracking oscillators for still more harmonics, and even an assignable pre/post EQ. 310 EUR before VAT, coming late February.

Erica Synths does the Sample Drum. This one’s sure to be a big hit, I think – not only for people wanting a drum module, per se, but presumably anyone interested in sample manipulation. Sample Drum plays and (finally!) records, with manual and automatic sample slicing, and three assignable CV inputs per channel. There are even effects onboard … which actually makes me wonder why we can’t have something like this as a desktop unit, too. You can even embed cue points in WAV files. SD card storage. Looks terrific – 300 EUR (not including VAT), coming late February.
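Automatic slicing in this spirit usually means watching for energy jumps in the audio. A naive sketch of the idea – Erica Synths haven’t published their algorithm, and the window size, threshold, and minimum gap here are arbitrary:

```python
import numpy as np

def auto_slice(x, sr, window=512, threshold=2.0, min_gap=0.05):
    """Mark a cue point wherever short-term energy jumps by
    `threshold` times over the previous window."""
    hops = len(x) // window
    energy = [float(np.sum(x[i * window:(i + 1) * window] ** 2))
              for i in range(hops)]
    cues, last_t = [0], -1e9
    for i in range(1, hops):
        t = i * window / sr
        # an energy jump past the threshold, and not too close to the last cue
        if energy[i] > threshold * (energy[i - 1] + 1e-12) and t - last_t > min_gap:
            cues.append(i * window)
            last_t = t
    return cues
```

Real slicers refine this with spectral flux and per-band analysis, but the transient-hunting core is the same – and manual slicing is just letting you override the list.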

One massive oscillator with zing, from Rossum. TRIDENT is a “multi-synchronic oscillator ensemble” – basically three oscillators in one, with loads of modulation and options for FM and phase and … uh, “zing.” Of course you could get a whole bunch of modules and do something similar, but the advantage here is a kind of integrated approach to making a lot of rich timbres – and while the sticker price here is US$599, that may well be less than wrangling a bunch of individual modules.

Actually, let’s let Dave himself talk about this:

http://www.rossum-electro.com/products/trident/

A module for drawing. LZX Industries’ Escher Sketch is a stylus pen controller with XY, pressure, and “directional velocity” (expression). LZX are thinking of this for video synthesis, though I’m sure it’ll get abused. US$499.

MIDI to CV, with autotuning and polyphony. Bastl Instruments’ 1983 4-channel MIDI to CV interface, complete with automatic tuning and other features, is one we’ve been following for a while. It’s now officially out as of 1 February.

Previously, including an explanation of why this is so cool:

Bastl do waveshaping, MIDI, and magically tune your modules

Don’t forget that Bastl also worked with Casper Electronics on Dark Matter, which I covered last month:

Bastl’s Dark Matter module unleashes the joys of feedback

Inexpensive Soundlazer modules. This LA company is actually known more for its directional speakers, but it looks like they’re getting into modules. Opening salvo: $99 bass drum, $69 VCA – evidence that it’s not just Behringer who may get into lower cost Eurorack. Check out their site for more.

Mix with vectors and quad. v3kt is really cool. Plug in joysticks, envelopes, LFOs, automatically calibrate them with push-button sampling, and then mix and connect all that CV to other stuff, with save states. Oh and you can use this as a quad panner, too. $199 now.

http://www.antimatteraudio.com/modules/v3kt

STG and Radiophonic 1 synthesizer. Radiophonic 1 is a terrific-sounding all-in-one, with a gorgeous oscillator at its core (also available separately). See Synthtopia’s video for explanation:

And Matt Chadra demonstrates how it sounds:

Slice and recombine waveforms in a module. Hey, you know how everyone keeps complaining there are no new ideas in synthesis? Well, Waverazor at least claims to be a new idea (with patent pending, too). Cut individual waveform cycles into slices, individually modify and modulate the slices, recombine. Okay – that sounds a lot like wavetable synthesis with a twist (albeit a compelling one), but we’ll bite. Or rather if you didn’t bite when this was a standalone plug-in, maybe you’ll like real knobs and a bunch of patch points:

https://mok.com/wr_dual.php

Control your modular with a ring. It’s funny how this idea never goes away. But here we are again – this time with crowd funding on IndieGogo, so maybe a larger group of people to actually use it. Wave is a ring you wear so you can make music by waving your hand around and … this time it plugs into a modular (the Wavefront module).

Watch this video and marvel at how you can do something you could do with an expression pedal or by using the same free hand to turn a knob, but, like, with a ring.

(Sorry, probably someone does want this, but… yes, it is truly a NAMM tradition to see someone trying it, again.)

Behringer are promising Roland System 100M modules. The German mass manufacturer was out ahead of the NAMM show with pre-production designs and prototypes based on Roland’s 100M series. Price is the lead here – US$49-99. Interestingly, what I didn’t see was people saying they’d opt for Behringer over other makers so much as that they might expand their system with these because of that low cost. Teenage Engineering also made a play for that “modular for the masses” mantle, though not in Eurorack.

Synthtopia did a good write-up of the prototype plans:
Behringer Plans 40 Eurorack Modules In The Next 2 Years, Priced at $49-99

Behringer did make this promise already back in April of last year – then, just in advance of the Superbooth show in Berlin – which I expect annoys other modular makers. But if you want Roland remakes right now, you can get them from – well, Roland, if at higher prices:

Roland’s new SYSTEM-500 modules, and why you might want them

Low cost, 2hp bells and grains and stuff.

Pocket operator modular system. And yes, we might talk about Behringer as the IKEA of modular, but the label fits Teenage Engineering, too. TE have extended their pocket operator brand to a line of modular. It’s not Eurorack, but it is patchable and you can buy individual modules or a complete kit. I’m working on an in-depth interview with the teenagers, so stay tuned.

You actually do fold these things together – and prices run 399-549 EUR for a complete system.

https://teenageengineering.com/

That’s far from everything, but for me it’s the standouts. Any you’re excited about here – or anything I missed? Sound off in comments.


DP10 adds clip launching, improved audio editing to MOTU’s DAW

Delivered... Peter Kirn | Scene | Mon 4 Feb 2019 6:24 pm

DP10 might just grant two big wishes to DAW power users. One: pull off Ableton Live-style clip launching. Two: give us serious, integrated waveform editing. Here’s why DP10 might get your attention.

A handful of music tools has stood the test of time because the developers have built relationships with users over years and decades. DP is definitely in that category, established in fields like TV and film scoring.

This also means, however, it’s rare for an update to seem like news. DP10 is a potential exception. I haven’t had hands-on time with it yet, but this makes me interested in investing that time.

Bride of Ableton Live?

The big surprise is, MOTU are tackling nonlinear loop triggering, with what they call the Clips window.

The connection to Ableton Live here is obvious; MOTU even drives home the point with a similar gray color scheme, round indicators showing play status, clips grouped into Scenes (as a separate column) horizontally, and into tracks vertically.

And hey, this works for users – all of those decisions are really intuitive.

Here’s where MOTU has an edge on Ableton, though. DP10 adds the obvious – but new – idea of queuing clips in advance. These drop like Tetris pieces into your tracks so you can chain together clips and let them play automatically. The queue is dynamic, meaning you can add and remove those bits at will.
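That queueing model is simple to sketch as a data structure. This is a hypothetical API just to illustrate the described behavior – it’s not MOTU’s code:

```python
from collections import deque

class Track:
    """Per-track clip queueing: queued clips chain automatically when
    the current one ends, and the queue stays editable while playing."""
    def __init__(self, name):
        self.name = name
        self.playing = None
        self.queue = deque()

    def enqueue(self, clip):
        self.queue.append(clip)

    def remove(self, clip):
        self.queue.remove(clip)   # dynamic: drop a queued clip at will

    def on_clip_end(self):
        # when the current clip finishes, the next queued one drops in
        self.playing = self.queue.popleft() if self.queue else None
        return self.playing

t = Track('drums')
for clip in ('intro', 'verse', 'chorus'):
    t.enqueue(clip)
t.remove('verse')            # changed our mind mid-set
print(t.on_clip_end())       # intro
print(t.on_clip_end())       # chorus
```

The contrast with Live is that this chaining is declarative and visible in advance, where Follow Actions make you encode the same intent as per-clip probability rules.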

That sounds like a potential revelation. It’s way easier to grok – and more visible – than Live’s Follow Actions. And it frees users from taking their focus off their instruments and other work just to manually trigger clips.

Also, as with Bitwig Studio, MOTU lets you trigger multiple clips both as scenes and as clip groups. (Live is more rigid; the only way to trigger multiple clips in one step is as a complete row.)

I have a lot of questions here that require some real test time. Could MOTU’s non-linear features here pair with their sophisticated marker tools – the functionality that has earned them loyalty with people doing scoring? How do these mesh with the existing DP editing tools, generally – does this feel like a tacked-on new mode, or does it integrate well with DP? And just how good is DP as a live performance tool, if you want to use this for that use case? (Live performance is a demanding thing.)

But MOTU do appear to have a shot to succeed where others haven’t. Cakewalk added clip triggering years ago to SONAR (and a long-defunct tool called Project 5), but it made barely a dent on Live’s meteoric rise and my experience of trying to use it was that it was relatively clunky. That is, I’d normally rather use Live for its workflow and bounce stems to another DAW if I want that. And I suspect that’s not just me – that’s really now the competition.

More audio manipulation

Every major DAW seems locked now in a sort of arms race in detecting beats and stretching audio, as the various developers gradually add new audio mangling algorithms and refine usability features.

So here we go with DP10 – detect beats, stretch audio, adjust tempo, yadda yadda.

Under the hood, most developers are now licensing the algorithms that manipulate audio – MOTU now works with ZTX Pro from Zynaptiq. But how you then integrate that mathemagical stuff with user interface design is really important, so this is down to implementation.

It’s certainly doubly relevant that MOTU are adding new beat detection and pitch-independent audio stretching in DP10, because of course this is a natural combination for the new Clips View.

More research needed.

Maybe just as welcome, though, is that MOTU have updated the integrated waveform editor in DP. And let’s be honest – even after decades of development, most DAWs have really terrible editors when it comes down to precise work on individual bits of audio. (I cringe every time I open the one in Logic, for instance. Ableton doesn’t really even have waveform editing apart from the limited tools in the main Arrangement view. And even users of something like Pro Tools or Cubase will often jump out to use a dedicated program.)

MOTU say they’ve streamlined and improved their Waveform Editor. And there’s reason to stay in the DAW – in DP10, they’ve integrated all those beat editing and time stretching and pitch correction tools. They’re also promising dynamic editing tools and menus and shortcuts and … yeah, just have to try this one. But those integrated tools and views look great, and – spectral view!

Other improvements

There’s some other cool stuff in DP10:

A new integrated Browser (this will also be familiar to users of Ableton Live and other tools, but it seems nicely implemented)

“VCA Faders” – which let you control multiple tracks with relative volumes, grouping however you like and with full automation support. This looks like a really intuitive way to mix.

VST3 support – yep, the new format is slowly gaining adoption across the industry.

Shift-spacebar to run commands. This is terrific to me – skip the manual, skip memorizing shortcuts for everything, but quickly access commands. (I think a lot of us use Spotlight and other launchers in a similar way, so this is totally logical.)

Transport bar skips by bars and beats. (Wait… why doesn’t every program out there do this, actually?)

Streamlined tools for grid snapping, Region menu, tool swapping, zooming, and more.

Quantize now applies to controllers (CC data), not just notes. (Yes. Good.)

Scalable resolution.

Okay, actually, that last one – I was all set to try the previous version of DP, but discovered it was impossible for my weak eyes to see the UI on my PC. So now I’m in. If you hadn’t given DP a second look because you actually couldn’t see it – it seems that problem is finally solved.

And by the way, you also really see DP’s heritage as a MIDI editor, with event list editing, clear displays of MIDI notes, and more MIDI-specific improvements.

All in all, it looks great. DP has to compete now with a lot of younger DAWs, the popularity of software like Ableton Live, and then the recent development on Windows of Cakewalk (aka SONAR) being available for free. But this looks like a pretty solid argument against all of that – and worth a test.

And I’ll be totally honest here – while I’ve been cursing some of DP’s competition for being awkward to set up and navigate for these same tasks, I’m personally interested.

It means a lot to have one DAW with everything from a mature notation view editor to video scoring to MIDI editing and audio and mixing. It means something you don’t outgrow. But that makes it even more important to have it grow and evolve with you. We’ll see how DP10 is maturing.

64-bit macOS, and 32-bit/64-bit Windows 7/8/10, shipping this quarter.

Pricing:
Full version: $499USD (street price)
Competitive upgrade: $395USD
AudioDesk upgrade: $395USD
Upgrade from previous version: $195USD

http://motu.com/products/software/dp/

I have just one piece of constructive criticism, MOTU. You should change your name back to Mark of the Unicorn and win over millennials. And me, too; I like unicorns.


Synth One is a free, no-strings-attached, iPad and iPhone synthesizer

Delivered... Peter Kirn | Scene | Thu 31 Jan 2019 6:52 pm

Call it the people’s iOS synth: Synth One is free – without ads or registration or anything like that – and loved. And now it’s reached 1.0, with iPad and iPhone support and some expert-designed sounds.

First off – if you’ve been wondering what happened to Ashley Elsdon, aka Palm Sounds and editor of our Apps section, he’s been on a sabbatical since September. We’ll be thinking soon about how best to feature his work on this site and how to integrate app coverage in the current landscape. But you can read his take on why AudioKit matters, and if Ashley says something is awesome, that counts.

But with lots of software synths out there, why does Synth One matter in 2019? Easy:

It’s really free. Okay, sure, it’s easy for Apple to “give away” software when they make more on their dongles and adapters than most app developers charge. But here’s an independent app that’s totally free, without needing you to join a mailing list or look at ads or log into some cloud service.

It’s a full-featured, balanced synth. Under the hood, Synth One is a polysynth with hybrid virtual analog / FM, with five oscillators, step sequencer, poly arpeggiator, loads of filtering and modulation, a rich reverb, multi-tap delay, and loads of extras.

There’s standards support up the wazoo. Are you visually impaired? There’s Voice Over accessibility. Want Ableton Link support? MIDI learn on everything? Compatibility with Audiobus 3 and Inter App Audio so you can run this in your favorite iOS DAW? You’re set.

It’s got some hot presets. Sound designer Francis Preve has been on fire lately, making presets for everyone from KORG to the popular Serum plug-in. And version 1.0 launches with Fran’s sound designs – just what you need to get going right away. (Fran’s sound designs are also usually great for learning how a synth works.)

It’s the flagship of an essential framework. Okay the above matters to users – this matters to developers (who make stuff users care about, naturally). Synth One is the synthesizer from the people who make AudioKit. That’s good for making sure the framework is solid, plus

You can check out the source code. Everything is up at github.com/AudioKit/AudioKitSynthOne – meaning Synth One is also an (incredibly sophisticated) example app for Audio Kit.

More is coming… MPE (MIDI Polyphonic Expression) and AUv3 are coming soon, say the developers.

And now the big addition —

It runs on iPhone, too. I have to say, I’ve been waiting for a synth that’s pocket sized for extreme portability, but few really are compelling. Now you can run this on any iPhone 6 or better – and if you’ve got a higher-end iPhone (iPhone X/XS/XR / iPhone XS Max / 6/7/8 Plus size), you’ll get a specially optimized UI with even more space.

Check out this nice UI:

On iPhone:

More:

AudioKit Synth One 1.0 arrives, is universal, is awesome

