Indian E-music – The right mix of Indian Vibes… » News Feed

Pairing mode: music to listen to right now – a new series

Delivered... David Abravanel | Labels, Scene | Thu 9 Apr 2020 12:54 am

Ed.: Pairing mode is a new series focused on music to which we feel connection – mostly new, some back catalogs, all stuff we’re listening to. And maybe that’s the most essential way to approach music, finding what excites us. Resident music editor David Abravanel launches his new column.

Interested in getting covered? Promos can be sent to david[at]dhla[dot]me, or hit up David on twitter at @dabravanel

Quiet no more: cLOUDDEAD reissues

About a decade before the likes of Clams Casino, A$AP Rocky, and Lil B made Cloud Rap into a genre with major-label appeal, Oakland’s cLOUDDEAD were forging an experimental hip-hop path that was so cloudy it literally featured “cloud” in the name and clouds on the cover of a self-titled compilation of early EPs.

cLOUDDEAD – none more cloudy

Consisting of Why?, DoseOne, and Odd Nosdam, all of whom would later go on to solo success, cLOUDDEAD was a seminal moment for the late-90s/early-00s “undie” hip-hop sound. A series of EP releases featured side-long tracks that collaged together stream-of-consciousness raps, lo-fi drone beats, and the occasional bizarre skit. Predating Burial’s track collages and the lo-fi/chill beats explosion, this was noncommercial music for the time, but sounds like the kind of thing that could easily have taken off towards wider appeal in the SoundCloud/Spotify/Bandcamp era.

cLOUDDEAD – “The Sound of a Handshake”

And there’s a lot to listen to from cLOUDDEAD, on top of everything. With a self-titled compilation of early EPs (2000), the album Ten (2003), a couple of Peel sessions, and an EP featuring career highlight “The Sound Of A Handshake”, it’s hard to know exactly where to start. For beginners, Ten is a pretty consistent listen – and its single, “Dead Dogs Two”, featured a rare Boards of Canada remix (Odd Nosdam later returned the favor, remixing “Dayvan Cowboy” in 2006).

cLOUDDEAD – “Dead Dogs Two (Boards of Canada Remix)”

The new remastering from Daddy Kev brings things to a better general level while respecting the extremely lo-fi origins of some of this material. Dig in and surf some clouds.

The Sound of metal

For decades, Electric Indigo aka Susanne Kirchmayr has explored the experimental nooks and crannies of techno and its adjacent microgenres. Following 2018’s 5 1 1 5 9 3, a granular-heavy album on Robert Henke’s Imbalance Computer Music, Kirchmayr moves to another impressive imprint, Editions Mego, for 2020’s Ferrum. Inspired by the sounds of metal (“ferrum” is the Latin name for iron), Ferrum sounds appropriately clangy, exploring digital synthesis and the metallic tones it enables.

While the album’s first two 10+ minute pieces focus more on evolving and immersing sounds (this is a headphone album par excellence), Kirchmayr’s affinity for and roots in techno come through on pounding numbers like “Ferrum 5” and “Ferrum 7”. 

Ed.: I fell in love with this material when I first heard her live set associated with the release. I think that may even have been the last time I was out in Berlin before the lockdowns, at about blank. Anyway, the point of this story – this work is equally engaging live. If you’re thinking of whom to book in 2021…

A welcome return from Windy and Carl

Isolation lends itself well to drone music, and Detroit’s Windy Weber and Carl Hultgren are two of the best ever to do it. Eight years after their last album, new LP Allegiance and Conviction on Kranky is another winner full of the duo’s trademark heavenly guitar, bass, and organ soundscapes. Windy Weber’s singing, previously used on other albums as a sparing treat, is a more frequent feature this time around – and adds an extra emotional punch to the sonic tapestry. 

If we’re going to continue the navel-gazing narrative around “ambient” as a buzz term, we can pause and show some respect for truly classic artists who have advanced ambient music, and continue to provide engulfing and beautiful post-rock experiences with deceptively simple guitar and bass lines.

Iheartnoise hearts space rock

Having followed the label Iheartnoise for a while, I’m hard-pressed to pinpoint their specialty, other than perhaps “all that is outside the norm”. There’s label stalwart Petridisch’s plunderphonic collages (including a forthcoming MiniDisc exclusive release – take that, cassette fetish culture), and in another corner there’s the slow psychedelia of Skyjelly and Solilians, two acts whose split release forms Iheartnoise’s first-ever vinyl release.

Skyjelly’s side reminds me of back when Animal Collective was a bit more disjointed and noisy, while Solilians’ self-described “tireless Jewish space rock” sounds somewhere between a bootleg of Seefeel on “Gowron Breaths” and drone rockers Loop on the live “Planet”.

Until next time, you can tell David what you think of his opinions on Twitter.

The post Pairing mode: music to listen to right now – a new series appeared first on CDM Create Digital Music.

Spin abstract geometries from your music, with Ableton Live visualizers by artist Arash Azadi

Delivered... Peter Kirn | Scene | Wed 8 Apr 2020 5:22 pm

Many of us imagine visuals when we close our eyes and listen to music. Here are two devices you can drop directly into Ableton Live to make that happen – from an artist whose work weaves together visual and sonic realms.

Iranian-born, Armenia-based composer and music and media artist Arash Azadi has built his own body of evocative work that explores imagined topographies of sound and image. (We put out one on our Establishment project – see below.)

What’s special about these devices is you can connect to his imagination – and let these inventions interpret your music live, too. One works with generative visuals, and one with a camera.

Sonic Geometry is a reactive visual generator that spits out gorgeous abstract imagery in response to your sound input. It’s a minimalistic mathematical sacred sonic geometrical trip.

It’s also a great example of Max’s power to allow people to build on one another’s work and create variations. Sonic Geometry began its life as Sound Particles by Kevin Kripper, and Arash took it in another direction. That’s long been a part of music composition (see cantus firmus tradition for one example); patches and code in these environments make it easier in the medium of software.


Here’s how to use it, step by step:

If camera input is more your speed, look to Body Glitch, which uses live video as input instead of sound.


Arash’s music

Come for the Max for Live Devices, stay for the experimental releases? Arash has been prolific lately across a variety of great projects; here are some of the most recent.

His new Structure Experience serves as a platform for artists around Armenia, across the full electroacoustic and electronic spectrum, through-composed and improvised.


That includes Totem and the Fears:

The EP is a sonic pilgrimage of the soul liberating itself from the mind. Through repetitive phrases and circular rhythms, Azadi and Marutian create hypnotic soundscapes to open the windows of the listener’s subconscious. The recording is the outcome of a fully improvised set at Azadi’s studio. This is the first time that Arash Azadi appears as the pianist on a record.

Marut Marutian: electric guitar and pedals.
Video by: Karen Khachaturov Photography


There’s the side project Marginal Twilight, which marked the occasion of the Persian new year already disrupted by quarantine and lockdowns – a solitary new beginning:

In these times that we all are separated from each other and in fear of death, it’s good to realize that nature is becoming new and spring is bringing life to earth. Even now we can choose to celebrate life and Nowruz the Persian New Year (the New Day) through music and dance.

It’s earlier work, but I’m still quite fond of Arash’s Geosonic Journeys for us – and people slowly keep discovering its aural landscapes:

All the best to all our readers and my friends in Iran and Armenia and around the world. We’re listening. And I miss a lot of you.

The post Spin abstract geometries from your music, with Ableton Live visualizers by artist Arash Azadi appeared first on CDM Create Digital Music.

Not in C: transform pitch with Scale-O-Mat for Ableton Live [M4L/Suite]

Delivered... Peter Kirn | Scene | Wed 8 Apr 2020 4:48 pm

120 bpm. 4/4. C major. Yawn. What if you could use those same Ableton Live project defaults to do something different? A new set of Max for Live devices dares you to do just that.

It all started with an idea from the mighty composer/artist Tyondai Braxton. Developer Tim Charlemagne wove that notion into Scale-O-Mat – an all-in-one pitch transformer device for Max for Live (so compatible with any copy of Ableton Live Suite).

You can start simple – the devices let you change a scale over the whole project. You can filter out notes that don’t fit the scale, or constrain notes to the scale you want. That could mean basic transpositions, too – for instance, if needed by instrumentalists or vocalists.
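To make that idea concrete, here’s a minimal sketch – in plain Python, not the device’s actual Max patching – of what “constrain notes to the scale” means in practice: snap each incoming MIDI note to the nearest pitch that belongs to the chosen scale. The scale set and function name are mine for illustration, not Scale-O-Mat’s:

```python
# Scales as sets of pitch classes (0 = C, 1 = C#, ... 11 = B).
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def snap_to_scale(note, scale=C_MAJOR):
    """Return the in-scale MIDI note closest to `note`, preferring downward on ties."""
    for offset in (0, -1, 1, -2, 2, -3, 3, -4, 4, -5, 5, -6, 6):
        candidate = note + offset
        if candidate % 12 in scale:
            return candidate
    return note  # unreachable for any non-empty scale

# C# (61) snaps down to C (60); F# (66) snaps down to F (65).
```

Filtering (the other mode described above) would instead drop out-of-scale notes entirely rather than moving them.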

But Scale-O-Mat goes deeper, too, with multiple devices that talk to one another, up to four different groups, a chord feature, presets, and of course, a ton of scales.

Ableton’s own Push hardware comes with a decent selection of scales and modes, from “church” modes like Dorian to Indonesian and Japanese selections. Tobias Hunke has added to those selections, which are worth grabbing both for use with this device and outside it. Check those here:



Scale-O-Mat works in tandem with Tim’s other Max for Live creations, so you can work with his chord device and elaborate modular sequencer.

It’s great stuff, both for people who have had some theory and those who haven’t but want to spice up their lives a bit.

Download for 10 € / US$12 (includes VAT).


The post Not in C: transform pitch with Scale-O-Mat for Ableton Live [M4L/Suite] appeared first on CDM Create Digital Music.

Kero and Defasten made our virus dreams into a futuristic music video (CDM premieres)

Delivered... Peter Kirn | Scene | Tue 7 Apr 2020 9:48 pm

What do you do when you can’t get a virus off your mind? Channel that into a bioscience audio-vision of immunity that reflects that new reality. We talk to Defasten and Kero about their music video for “Lodge.”

We need this sort of fantasy and escape now, I think. But do also check in on the reality in Detroit and the USA – to all our colleagues and friends and family there, we are with you, from Berlin to LA and around the world.

Highways is the kind of EP that might soothe your mood now – it’s a pulsing, electronic, unfamiliar world, but somehow comfortable. It’s music to disinfect to – dry, irregular acid lines, asymmetrical rhythms, but then mellow harmonies set against them. “Chrysler” sounds like a floating Detroit concept car, after hours, a stylish opener punctuated by a wonderful, bizarre bass line. “Southfield” is urgent and groovy; “Fisher” a growling post-apocalyptic IDM deconstructed-electro. “Davison” is delightfully weird reserved glitch. This is Michigan, yes, but through some Tron filter – enter the sadistic game grid.

“Lodge” rounds out the release, and it’s to me the most ambitious – and striking – culmination of Kero’s concept here. Its abstract cycle never quite materializes, a stuttering sound sculpture trying to escape an Enterprise transporter pattern buffer, but with beautiful, murky pad clusters breathing in and out in the background.

That music is evocative even if you close your eyes, but Defasten gives us a bio-science concept visual – unsettling but eerily pretty turquoise and purple 3D imagery. Watch:

Video: www.defasten.com

This is built in Notch, the same 3D software Ted Pallas used in the xR experiments I wrote about yesterday. (Ted reviewed this 3D software for CDM – I’m editing that review now.)

Here’s what Kero and Defasten had to say about their work here.

Peter: Want to say anything about the music, sounds? Love the vibe so – could be either the gear or the feeling you had, or both?

I have always had a love for futuristic GUIs, as an ode to classic 80s Miami Vice and futurism / cyberpunk aesthetics. Defasten basically just worked with the aesthetic concepts he knew I already loved, but added a few elements of medical and scientific imagery that we both felt were inspired by the world’s current crisis.

Kero’s rig for this release: Eurorack modular (of course, classic Doepfer stuff at top anchoring the setup), Elektron Analog Rytm MKII, Elektron Analog Four MKII, and Teenage Engineering OP-Z, which is nearly camouflaged in gray against its larger fellow Swedish gear.

For both of you – I mean, is there something cathartic and calming about really diving into the science here, understanding what this thing is we’re up against? Or how did you feel about the viral content?

Patrick from Defasten: The Lodge video visibly is a comment on what we’re experiencing right now globally. I found the microsounds of the track to evoke the biology of our bodies, the microscopic world that constitutes our being. It was of interest to interpret this graphically, with the real-time synthetic imagery discreetly reacting to Kero’s sonic pulses.

That said, there is indeed something calming when you’re focusing on crafting an idea, isolated at home. I think a lot of academics and researchers of any field can relate to the isolation required to develop an idea. This is the strange calm required to quell the storm.

CDM: To ask another question, is there a feeling of being on the side of science and technology as an approach, not just sort of giving into 1918 chaos?

Patrick: Let’s hope we don’t give into the 1918 chaos. Looking at the numbers now, we are not experiencing the loss of life at such magnitude. We are grateful that science, technology, and general quality of life have improved over the last 100 years. That said, massive loss of life in 2020 is still to be taken very seriously.

But yes – I am on the side of science and technology to combat the pandemic, in addition to the cooperation of everyone to respect the temporary measures in place in reducing the spread of the coronavirus. In 1918, they didn’t have the internet – we now have this luxury to have the latest info – either fake or real – relayed to our phones. In many ways, we’re equipped to handle a pandemic, however it doesn’t mean we should put all our faith in science – in reality, politics has played a huge role in the pandemic’s acceleration, and science only responds as a result.

Can you talk about how this collaboration came together? Obviously there’s tons of visual-sonic collaboration and boundary pushing on DU.

PD: It came about quite spontaneously. I’ve done more than a few works for Kero’s label for a few years now; I think we understand each other musically/aesthetically and are generally in the same zone with our tastes and interests. So once again – he gave me carte blanche to design what I felt was right. This kind of creative collaboration is what I value most – when there is solid trust among the members involved, and the constructive dialog between the creatives enhances the process.

What’s next for this project and others? Will we see this bio-future-opera expanded?

PD: I’m interested in exploring the themes in the video further, but is it really a “bio-future-opera”? That is up to the public to decide. Prior to the pandemic, I was already interested in the intersection of biology, technology, the need to improve/augment our bodies, and the inner world that is within us all. I think, instead of the over-emphasis on AI we’ve seen in the 2010s, the time is ripe for the creative and tech industries to re-examine themselves, expand their interests, and push towards a new awareness and understanding of what is already all around us – not only to gain immunity to timeless viruses, but to understand/unlock the secrets of the microscopic world which we rely on to be alive, and of course to respect its boundaries. This of course will not be a smooth journey.

How do you hope people will watch this? I turned out the lights and went VERY full screen in the dark. But with all these streams around, I wonder if you have a vision for how we can have some more, say, quality immersion.

PD: What you did sounds like a good idea. The video is a slow burner and doesn’t require your constant attention – watch it on your phone if you like. There’s a lot of micro detail, so the higher the screen resolution, the better. The original content was made in Notch, so it’s generated in real-time, and it could loop forever, so ideally – a multi-screen installation setup running on real-time data in a large, dimly lit, architectural space. Sound familiar? 🙂 I also see it as a kind of backdrop to a sober, advanced tech “mission briefing”, in a large auditorium or hangar, with speakers of various expertise explaining to an audience the stakes at hand. Like a TED Talk PowerPoint presentation replaced with a holographic visual data presence.

The latest from Detroit, on the front lines

Next up from Detroit Underground is Joe Sousa, whose Infinite Cold Distance is out next on audio tape. He had these sobering words to share about the current situation in Detroit; he’s a respiratory therapist by day, so he’s right out there on the front lines.

Joe, you stay safe, too – and thanks for the update, especially as we deal with this worldwide. We can’t wait to hear your music.

Covid-19, week 3 southeast Michigan update:

This is going to be an account of my experiences during the boom of this virus, and to the point.

I was the charge therapist staffed during the initial weekend of SARS-CoV-2 infected admissions. Admittedly, a time of extreme uncertainty, with high anxieties felt across every profession in my building. You could feel it as you walked into rooms, as you talked to providers, as you tried to guide those around you with what knowledge you had. I am at a minimum pleased to say that phase has passed.

While my hospital (40 minutes away from epicenters such as Detroit or central Oakland County) is not yet challenged with at-capacity status, we are faced with the highest acute workload we have ever seen as a respiratory department. We have more ventilated patients than I’ve seen in my 8-year tenure. The patient population who requires critical care typically has at least one other health issue, but not all. Age is barely a factor, but most people are between 40 and 80. That being said, there have been individuals in their early 20s, and let that be a reality check.

Regarding diagnosis and treatment: this is an ever-evolving beast. Too many unknowns, and much is questioned daily, not only based on my personal research but based on conversation with providers across the field. Those with hypertension, diabetes, and ACE inhibitor usage seem to be at highest risk. Conjecture between cytokine storm, vasculopathy, thromboembolism, and more, point to an atypical presentation of ARDS. Some things I’ve read are saying it’s actually not true ARDS at all. Many lung mechanic strategies we implement end up being similar, but there is so much food for thought that I’m kept with a consistently open mind. For the layman: this is a unique virus. Too new (“Novel”), and lots to study. Full disclosure: I am not a physician or an infectious disease specialist for example, but knowing how others think is important as a clinician; to best integrate my respiratory tactic into the care plan. I put this here just as an insight to perspective on how every frontline team member is integrally involved in outcomes.

Regarding PPE: we as an institution early-implemented conservative measures, so we are not yet on the shortage side of the line. I anticipate this happening, though, if the supply chain has not been figured out by now. I also don’t believe in a first world country we should have to worry about “conserving” single-use protective equipment, so that thought is slightly daunting. Key point: I take my time putting on my gear and taking it off, properly, effectively, safely. It goes without saying that we, as the health care workforce, are exponentially more at risk than anyone social distancing or locked in their homes.

Beyond this, I truly feel for my other southeast Michigan hospitals nearing or at capacity. Michigan is currently at the 4th highest case count, and the 3rd highest deaths. And in the spirit of honesty: when they do, the deaths come swiftly. It is taxing to all of us, mentally and emotionally. I’m not a proponent of fear, but please stay home. Please be clean. Keep up your immunity and cardiopulmonary system with proper diet and exercise. Please take my word for it. And probably best to lay off the NSAIDS for a while.

Finally, there has been a token of uplifting measure: to see the level of support from so many friends and family of mine, checking in, giving thanks, it’s just all extremely encouraging. This matters more than you know, and I know my peers feel the same. Thank you to all the restaurants and individuals who donate meals and endless snacks to our ICU units, and while these aren’t always the healthiest options, it does plenty for morale.

Listen to facts from experts, not headlines in the news. Knowledge is power more than ever, and I hope it quells some stress for some of you, I know it does for me. Stay safe and diligent everyone. This too shall pass.

Kero’s release and videos

Well, we’ll need something while we’re home. So for those of you who can, go get that record, which comes adorned with fantastic urban topography from Berlin’s graphic design shop www.neubauberlin.com, pressed in Detroit at Archer.

And for everybody, we get some eye candy. Dim the lights. Start with the opener:

Another one by Katya Ganya, for “Fisher”:

And Bryant CPU Place [www.cyberpatrolunit.com], for “Southfield”:

The post Kero and Defasten made our virus dreams into a futuristic music video (CDM premieres) appeared first on CDM Create Digital Music.

Control free streaming tool OBS Studio with OSC – and more essential tricks

Delivered... Peter Kirn | Scene | Tue 7 Apr 2020 5:27 pm

Control live streaming and recording tool OBS Studio with other apps and tools, and route video live. Free add-ons make it all possible.

Keep in mind this isn’t just for the live streaming craze – it’s for recording, too. But if you’re going to stream, by all means, do something interesting.

Carlo Cattano has made a free tool with some major implications – and it’s simple enough that it’s also a nice demo of how to write this in Python, generally. This code lets you route Open Sound Control – the high-res, open communication protocol used by many VJ apps, touch apps on iOS, and other applications – into OBS Studio:

Control OBS Studio with Open Sound Control template example [https://github.com/CarloCattano/ObSC]

That opens up all sorts of possibilities – script and automate video switching, jam live with the input, automate screencasts and recording, and more.
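As a rough illustration of the plumbing involved (ObSC is the real thing; this is only a sketch using the python-osc library, and the OSC address `/obs/scene` and scene names here are made up, not ObSC’s actual addresses):

```python
def scene_for_message(address, args, scenes):
    """Map an incoming OSC message to an OBS scene name (or None)."""
    if address == "/obs/scene" and args:
        index = int(args[0])
        if 0 <= index < len(scenes):
            return scenes[index]
    return None

def run_server(host="127.0.0.1", port=9000):
    """Listen for OSC and react - requires `pip install python-osc`."""
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    scenes = ["Camera", "Desktop", "Slides"]  # hypothetical OBS scene names

    def handler(address, *args):
        scene = scene_for_message(address, args, scenes)
        if scene is not None:
            print("switch OBS to scene:", scene)  # here ObSC would drive OBS itself

    dispatcher = Dispatcher()
    dispatcher.map("/obs/scene", handler)
    BlockingOSCUDPServer((host, port), dispatcher).serve_forever()  # blocks; Ctrl-C to stop
```

A VJ app or iOS touch controller pointed at that port could then flip scenes, start recordings, and so on – which is exactly the kind of automation Carlo’s tool enables.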

Also useful in OBS – you can route input from other applications directly.

On the Mac, you can use Syphon, open tech that lets you route 3D textures in OpenGL as easily between apps as you might an audio signal in a patch bay. That’s native in the latest OBS release.

By the way, you might even go in the opposite direction – using this as output to mapping, for example:

On Windows, there’s Spout2 support (the Windows DirectX 11 equivalent of Syphon):


For an example of what this is for, here’s someone recording live visuals – alongside Ableton Live – using OBS and Spout. And this is from 2017, so again, it’s not just about live streaming during the pandemic.

And across platforms, you can use obs-ndi, which supports NewTek’s NDI for networked audiovisual routing:


That’s useful, because it lets you freely specify sources, outputs, and filters using OBS over a network.

Streamers – and gamers in particular – have already been using this to turn phones into remote cameras and to stream across multiple computers.

You can even use it to skip buying a capture card:

More tips:

And yes, you could also use NDI to build your own switcher using something like TouchDesigner:

Full tutorial:


So there you have it. Let other people keep running horrible sound from their phone, while you use OBS as an all-purpose tool for routing, switching, capturing, and streaming video. Oh yeah, and – you can use all of this to turn your phone into a capture device, while your computer makes light work of streaming/recording audio feeds and mic in high quality.

And the essential glue here is all free.

That makes this streaming craze a perfectly reasonable time for the rest of us to hone some of our video chops, whether we’re musicians or visualists. So I hope you’re staying safe at home, and happily patching video switchers any time the news makes you a bit too anxious. At least … that’s part of my plan, for sure. Best to all of you and – yes, you can actually invite me to your streams.

The post Control free streaming tool OBS Studio with OSC – and more essential tricks appeared first on CDM Create Digital Music.

FCC April Meeting to Consider LPFM and Video Captioning – Looking at the LPFM Proposed Order (Including Interference Protections for TV Channel 6)

Delivered... David Oxenford | Scene | Tue 7 Apr 2020 4:06 pm

The FCC last week released its tentative agenda for its April 23 open meeting.  For broadcasters, that meeting will include consideration of the adoption of a Notice of Proposed Rulemaking (draft NPRM here) looking to broaden obligations for the audio description of television programming (referred to as the Video Description proceeding) – which we will write about in more detail later.  The agenda also includes a Report and Order modifying rules relating to Low Power FM stations, which also addresses the protection of TV Channel 6 stations by FM stations (full-power or LPFM) operating in the portion of the FM band reserved for use by noncommercial stations.  The FCC’s draft order in this proceeding is here.  We initially wrote here about the FCC’s proposals when the Notice of Proposed Rulemaking in the proceeding was adopted last year. Today, we will look at how the FCC has tentatively decided to resolve some of the issues.

One of the most controversial issues was the proposal to allow LPFM stations to operate with a directional antenna.  While some directional operations had been approved by waiver in the past, there was some fear that allowing these antennas more broadly could create the potential for more interference to full-power stations.  As a directional antenna requires greater care in installation and maintenance to ensure that it works as designed, some feared that LPFM operators, usually community groups often without a broadcast background or substantial resources, would not be able to properly operate such facilities.  The FCC has tentatively decided to allow use of directional antenna by LPFM stations. However, it will require LPFM stations installing such antennas to conduct proof of performance measurements to assure that the antenna is operating as designed.  The cost of such antennas, the limited situations in which such antennas will be needed (principally when protecting translators and in border areas), and the additional cost of the proof of performance should, in the FCC’s opinion, help to limit their use to entities that can afford to maintain them properly.

Also, the FCC has tentatively decided to allow LPFM stations to operate FM boosters.  As with any other FM station, the booster cannot extend the signal of the primary LPFM station.  Boosters will be helpful principally in areas with irregular terrain that shields part of an LPFM’s service area from receiving the main station’s signal.  As these stations operate on the same channel as the LPFM itself, if not properly shielded, they can create interference to the primary station.  The FCC will allow any LPFM station to operate up to two boosters (or two translators) or one translator and one booster.

The definition of a minor change in the transmission facilities of an LPFM would be broadened if the FCC adopts the draft Order.  Instead of limiting a minor change to moves of 5.6 kilometers, the FCC is now doubling that limitation – allowing moves of up to 11.2 kilometers or to any location where the present and proposed 60 dBu contour of the LPFM station would overlap.  This change is important to LPFM advocates as it significantly increases the area in which a station can be moved without waiting for the infrequent filing windows for new stations and major changes.  Minor changes can be filed at any time.
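The distance half of that test is simple great-circle arithmetic. Here is a sketch of a checker for the proposed 11.2-kilometer limit (the contour-overlap alternative would require actual 60 dBu contour data, so it is omitted; the coordinates and function names are mine, not the FCC’s):

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def is_minor_move(lat1, lon1, lat2, lon2, limit_km=11.2):
    """True if a proposed transmitter move stays within the draft Order's distance limit."""
    return haversine_km(lat1, lon1, lat2, lon2) <= limit_km

# A move of 0.1 degrees of latitude (~11.1 km) just squeaks under the new limit;
# it would have failed the old 5.6 km test.
```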

The FCC declined to allow LPFMs to increase maximum power from 100 to 250 watts.  The FCC has previously rejected similar proposals and decided that there was no reason to change that decision now.  The FCC felt that there would be too many potential interference issues, including issues that have already been raised by some full-power stations about LPFMs in “foothills” areas – where their height above average terrain is low and can put a vast signal over a metropolitan area.  That can occur if the LPFM is in an area that is high relative to the metropolitan area in one direction, but the height of the proposed antenna above average terrain is lowered because there are mountains behind the transmitter site, thus lowering the average terrain height (which is computed on the average of the heights along 12 radials extending from the proposed transmitter site).  A higher antenna can dramatically increase coverage over the lower metropolitan area far beyond what the FCC predicted when it adopted the required mileage separation requirements that apply to LPFM stations.
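The parenthetical above describes how height above average terrain (HAAT) is computed, and the “foothills” effect falls straight out of the arithmetic. A sketch (the terrain numbers are invented; real FCC studies sample terrain at prescribed distances along each radial, which this simplification skips):

```python
def haat_meters(antenna_elevation_amsl, radial_terrain_elevations):
    """Height above average terrain: antenna elevation (above mean sea level)
    minus the mean of the terrain elevations along the radials."""
    average_terrain = sum(radial_terrain_elevations) / len(radial_terrain_elevations)
    return antenna_elevation_amsl - average_terrain

# Flat terrain: a 500 m antenna over uniform 200 m terrain has a 300 m HAAT.
flat = [200] * 12
# Foothills: 800 m mountains behind the site raise the average terrain,
# dropping the computed HAAT to zero - even though the antenna still
# towers 300 m over the city on the other side.
foothills = [200] * 6 + [800] * 6
```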

The one issue in the order with ramifications beyond LPFMs is the decision not to lift all restrictions preventing FM stations – full-power or low power – operating in the reserved portion of the FM band from locating too close to Channel 6 TV (or LPTV) stations.  Prior to the digital television transition, as the FM band is adjacent to Channel 6 and the analog TV transmission system involved FM-like transmission of audio signals, there was the potential for interference between analog FM stations operating low on the FM band and analog TV stations on Channel 6.  While the conversion to digital television has removed many of these issues, some TV operators argued that the potential for interference to digital signals has not been fully analyzed.  They also pointed to the fact that certain LPTV stations may still be operating in analog until July 2021.  Thus, the FCC declined to abolish the interference protections entirely at this point in time.  However, the FCC will permit noncommercial operators in the reserved band (full power or LPFM) to seek a waiver of Channel 6 protection requirements if they can show that the proposed operations would not create interference to any nearby Channel 6 TV station.  That showing would be made using the FM translator criteria in Section 74.1205(c), which establishes an interfering contour for the FM station depending on the frequency on which it operates and a protected Grade B contour for the Channel 6 TV station.  Where those contours don’t overlap, an FM in the reserved band can be located.
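That Section 74.1205(c) showing boils down to a geometric test: compute the FM station’s interfering contour and the TV station’s protected Grade B contour, and confirm they don’t overlap. Treating both contours as idealized circles (real contours follow terrain and directional patterns, so this is only a sketch, and the radii are invented):

```python
def contours_overlap(distance_km, interfering_radius_km, protected_radius_km):
    """Two idealized circular contours overlap when the distance between the
    transmitter sites is less than the sum of the contour radii."""
    return distance_km < interfering_radius_km + protected_radius_km

def waiver_showing_passes(distance_km, fm_interfering_km, tv_grade_b_km):
    """The Channel 6 waiver showing succeeds only if the contours do NOT overlap."""
    return not contours_overlap(distance_km, fm_interfering_km, tv_grade_b_km)

# Sites 100 km apart with a 30 km interfering contour and a 60 km Grade B
# contour clear the test; move them to 80 km apart and the showing fails.
```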

The FCC proposes to make other changes regarding LPFM operations in this Order – so review the order to see how they may affect your operations and watch for action at the April 23 meeting to see if these draft rule changes are adopted.

Uganda’s Afrorack goes from modular synths to a DIY disinfectant; more efforts worldwide

Delivered... Peter Kirn | Scene | Tue 7 Apr 2020 1:54 pm

Brian Bamanya made a name making DIY modular synths, but now he’s applying voltage to another task – making sodium hypochlorite (aka bleach). Science! That joins a growing number of efforts of DIYers turning to fight the pandemic head-on.

Please, do not try anything like this before reading advisories below.

First off, this stuff is what’s known as household bleach or liquid bleach. Despite the fact that it’s sold readily, it is potentially very toxic – don’t mix it with ammonia-based or acidic cleaners, for instance, or you’ll brew some harmful fumes. In fact, don’t even leave it in sunlight. (Here’s a list of don’ts.) Don’t drink it, obviously (okay, not obvious to some), but also don’t let it touch anything that you’re going to consume – don’t get this anywhere near food.

But used with care, bleach is fantastic. You’ll see it in the toolkits of professional cleaners for a reason – it’s good at certain tasks. And it is very effective on surfaces against SARS-CoV-2, that virus known as the coron— yeah, I know, you hear about it every 15 seconds. Let’s get back to bleach and chemistry, because they’re cool.

But the important thing here is – yes, this can produce a WHO-approved surface cleaner. And no, you should not take any advice in chemistry or health from CDM. Honestly, I’m not sure I would claim you should take synth advice from CDM. Here are reliable sources on bleach and SARS-CoV-2:

World Health Organization on disinfecting [WHO PDF]
COVID-19 – Disinfecting with Bleach [Michigan State University]
National Center for Biotechnology Information on bleach specifically [they’re part of the National Institutes of Health, a US government branch]
Environmental Protection Agency document on the topic
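The sources above all come down to dilution arithmetic: household bleach is sold at a much higher sodium hypochlorite concentration than you want on a surface, so you dilute using C₁V₁ = C₂V₂. As a minimal sketch only – the ~5% stock and ~0.1% target figures here are typical numbers, not a substitute for the WHO/CDC guidance linked above:

```python
# Estimate how much household bleach to dilute into water to hit a target
# sodium hypochlorite concentration, using C1*V1 = C2*V2.
# ASSUMPTIONS: stock bleach ~5% NaOCl, surface-disinfection target ~0.1% --
# illustrative numbers only; check current official guidance before mixing.

def bleach_volume_ml(total_ml, stock_pct=5.0, target_pct=0.1):
    """Return (bleach_ml, water_ml) needed to mix `total_ml` of solution."""
    if target_pct >= stock_pct:
        raise ValueError("target concentration must be below stock concentration")
    bleach_ml = total_ml * target_pct / stock_pct
    return bleach_ml, total_ml - bleach_ml

# One litre of 0.1% solution from 5% bleach: 20 ml bleach, 980 ml water.
print(bleach_volume_ml(1000))
```

The same two-line calculation scales to any stock concentration printed on the bottle.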

Brian’s approach leans as much on his electronics background as on chemistry, because you can make the stuff by running electricity through a sodium chloride (salt) solution. Yeah – it’s analog. And that’s how it’s manufactured industrially.

What Brian is doing that’s clever is making this on a small scale when industrially-produced material has been subject to price hikes – and reusing plastic bottle trash in the process.
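For a sense of scale, Faraday’s law puts an upper bound on how much hypochlorite a small electrolysis cell can make: the chlorine generated at the anode is at most I·t/(2F) moles, and each mole of Cl₂ can yield at most one mole of NaOCl. A hedged back-of-envelope sketch – this assumes 100% current efficiency, which no real DIY cell achieves, and it is an illustration, not safety or production guidance:

```python
# Rough upper bound on sodium hypochlorite from electrolysis of brine,
# via Faraday's law: n(Cl2) = I*t / (2*F); at most 1 mol NaOCl per mol Cl2.
# Assumes 100% current efficiency -- a real cell produces less.

F = 96485.0        # Faraday constant, C/mol
M_NAOCL = 74.44    # molar mass of NaOCl, g/mol

def max_naocl_grams(amps, hours):
    coulombs = amps * hours * 3600.0
    mol_cl2 = coulombs / (2.0 * F)   # 2 electrons transferred per Cl2 molecule
    return mol_cl2 * M_NAOCL         # best case: all Cl2 converted to NaOCl

# e.g. 2 A for 3 hours -> roughly 8 g of NaOCl, at best
print(round(max_naocl_grams(2, 3), 1))
```

Those few grams dissolved in a litre of solution land in the right order of magnitude for a dilute disinfectant, which is why small-scale production is plausible at all.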

Is this a good idea? I don’t want to comment, as I am neither an expert on infectious disease nor anything like a chemist. So I want to put it out there to hear reactions – normally, given the range of backgrounds on this site, someone has an answer. I’ll update this story and our social channels with whatever we hear.

You can support the project here:


And find Brian here:

Bleach is effective in small concentrations; alcohol requires greater purity. But theoretically it should be possible to DIY ethanol alcohol, and off-the-grid types have been doing that since before the COVID-19 outbreak. Also, unlike distillation, this will be legal in most places – though be careful not to sell it or make health claims, as that requires a license.

Let me again restate that I am not in any way qualified to talk about this, and you should not listen to me, though you should get in touch if you are qualified, and it is worth reading the experts – if for no other reason than to pass the time.

More efforts from the music makers

It’s also an indication of the changed world we’re in that the synth DIY community in general is in some cases turning to things other than musical instruments.

From Slovakia, Jonáš Gruska of LOM – an experimental music label and maker of various sound electronics – is one of many people making 3D-printed face masks. (He’s also experimenting with UV hardware, but the face masks, I know, are being actively requested by health care professionals around the world to supplement their supplies.)

Groups like NYCResistor, who had been a partner of ours back in NYC, are engaged in similar projects – though the calls are as diverse as places looking for plexiglass boxes for intubation equipment.

Our friend Geert Bevin, now at Moog, has been making protective gear with UNC Asheville students working at the STEAM Studio:

UNCA students help make protective gear for health care workers [WLOS news]

People are sewing cloth masks, too – originally specifically excluded from guidance, but now part of international recommendations as the contagion and our knowledge of it evolve. Take for instance SewnMasksNYC, and (too many to list here) various efforts undertaken by musicians and media artists in our circle.

Places to find DIY help

I’ll refer to the official US Centers for Disease Control and Prevention instructions here (English + Spanish), just posted as the agency updated its guidance to begin advocating cloth masks. After some mixed messages, this document is clear and concise and applicable everywhere – uh, once you convert from inches. (Some day, my native country will go metric.)

Use of Cloth Face Coverings to Help Slow the Spread of COVID-19 [CDC]

You’ll also find active open source groups for equipment. The main hub is currently on Facebook:


With a preferred 3D-printed face shield plan living at:


And here’s some music to accompany this article, by Ana Quiroga as NWRMNTC, who I understand has been sewing masks together with curator/artist Estela Oliva in the UK:

We needed some music, for sure, somewhere in this.

Let us know your feedback and what you may be involved in. I certainly don’t mean to suggest that everyone in our community needs to contribute in this way – staying at home or doing your day job may be your best bet, and there’s plenty that matters in music itself these days. But I do hope we can use our networks to stay informed and connected.

The post Uganda’s Afrorack goes from modular synths to a DIY disinfectant; more efforts worldwide appeared first on CDM Create Digital Music.

No shape: how tech helped musicians melt the gender binary

Delivered... Sasha Geffen | Scene | Tue 7 Apr 2020 12:57 pm

In new book Glitter Up the Dark: How Pop Music Broke the Binary, Sasha Geffen explores music’s new gender nonconformists - here’s an extract

In the 21st century, the proliferation of internet-equipped consumer electronics enabled a new generation of gender nonconformists to communicate across any distance. Trans kids no longer had to move to New York or San Francisco to speak with others like them; they could use Facebook, Twitter, Tumblr and YouTube to find community. Communication didn’t depend on the presence of the physical body, and even the voice was no longer necessary to speak instantaneously to another person in a different town or a different continent, which was useful if you were trans and still literally finding a voice that felt right in your throat.

Against this cultural backdrop, an increasing number of musicians have begun to make work that unstitches the gendered body from its usual schematic of meaning. In 2010, the Seattle songwriter Mike Hadreas released his debut LP under the name Perfume Genius. He wrote Learning, a raw collection written on piano, while living with his parents and in recovery from drug addiction. The album was quietly popular and Hadreas soon had to figure out how to tour his new songs. He enlisted help from Alan Wyffels, a friend who had taken Hadreas to AA meetings in the early days of his recovery. They proved an excellent musical match, and while playing Hadreas’s songs together, they also fell in love.

Related: Pop star, producer or pariah? The conflicted brilliance of Grimes

Continue reading...

URL to IRL: How Gen Z Are Using Technology to Change the World

Delivered... whitney | Scene | Tue 7 Apr 2020 9:04 am

Technology has always had a fraught relationship with humans. From the invention of the phonograph by Thomas Edison in 1877 to the Pioneer CDJs introduced in 1994, every new apparatus, it seems, is often met with a suspicious side-eye from older generations. These traditionalists, or Luddites even, regard every advancement as the impetus for a deteriorating society. Those who do take these...



Delivered... Spacelab - Independent Music and Media | Scene | Mon 6 Apr 2020 9:15 pm
The lineup stays the same!

Holodeck DJ: I played techno on an XR stage – here’s what it was like

Delivered... Peter Kirn | Scene | Mon 6 Apr 2020 3:38 pm

There are cameras. There’s video and 3D. What happens when you create a futuristic mixed reality space that combines them, live? I headed to a cavernous northern New Jersey warehouse to find out.

With or without the pandemic crisis, our lives in the digital age straddle physical and imagined, meatspace and electronic worlds. XR – “cross” or “mixed” reality – represents a collection of current techniques to mediate between these, a way to play in the worlds between what’s on screen or video and what exists in physical space.

Now, with all these webcasts and video conferencing that have become the norm, the reality of mixing these media is thrown into relief in the mainstream public imagination. There’s the physical – you’re still a person in a room. Then there’s the virtual – maybe your appearance, and the appearance of your physical room, is actually not the thing you want to express. And between lies a gap – even with a camera, the viewpoint is its own virtual version of your space, different than the way we see when we’re in the same space with another person. XR the buzzword can melt away, and you begin to see it as a toolkit for exploring alternatives to the simple, single optical camera point of view.

To experience first-hand what this might mean for playing music, I decided to get myself physically to Secaucus (earlier in March, when such things were not yet entirely inadvisable). Secaucus itself lies in a liminal space of New Jersey that exists between the distant realities of the Newark International Airport, the New Jersey Turnpike, and Manhattan.

Tucked behind a small entrance to a nondescript, low-slung beige building, WorldStage hides one of the biggest event resources on the eastern seaboard. Their facility holds an expert team of AV engineers backed by a gargantuan treasure trove of lighting, video, and theatrical gear. Edgewater-based artist/engineer Ted Pallas and his creative agency Savages have partnered with this uniquely advanced setup to realize new XR possibilities.

“Digital artists collaborating with this new technology pave the road for where xR can go,” says Shelly Sabel, WorldStage’s Director of Design. “Giving content creators like Savages opportunities to play on the xR stage helps us understand the potential and continue in this new direction.”

I was the guinea pig in experimenting with how this might work with a live artist. The mission: get out of a Lyft from the airport, minimizing social contact, unpack my backpack of live gear (VCV Rack and a mic and controller), and try jamming on an XR stage – no rehearsal, no excuses. It really did feel like stepping onto a Holodeck program and playing some techno.

And I do mean stage. The first thing I found was a decent-sized surface, LEDs on the floor, a grid of moving head lights above, and over-sized fine-grade LED tiles as a backdrop on two sides. Count this as a seven-figure array of gear powering a high-end event stage.

The virtual magic is all about transforming that conventional stage with software. It’s nothing if not the latest digital expression of Neo-Baroque aesthetics and illusion – trompe-l’œil projection in real space, blended with a second layer of deception as that real-world LED wall imagery is extended in virtual space on the computer for a seamless, immersive picture.

It’s a very different feeling than being on a green screen or doing chroma key. You look behind you and you see the arches of the architecture Ted and his team have cooked up; the illusion is already real onstage. And that reality pulls the product out of the uncanny valley back into something your brain can process. It’s light years away from the weather reporter / 80s music video cheesiness of keying.

I’m a big believer in hacking together trial runs and proofs of concept, so fortunately, Ted and team were, too – as I was the first to try out this XR setup in this way. He tells CDM:

This was our first time having an artist in one of our xR environments, in a specific performance context – we’d previously had some come visit, but Peter is the first to bring his process into the picture. As such, we decided to keep things mellow – there was a lot of integration getting blessed as “stable” for the first time, and I wanted to minimize the potential for crashing during the performance – my strong preference is to do performances in one take.

The effects you’ll see in the video are pretty simple and subtle by design. Plus I was entirely improvising – I had no idea what I would walk onto in advance, really. But the experience already had my head reeling with possibilities. From here, you can certainly add additional layers of augmentation – mapping motion graphics to the space in three dimensions, for instance – but we kept to the background for this first experiment.

Just as in any layered illusion, there’s some substantial coordination work to be done. The Savages team are roping together a number of tools – tools which are not necessarily engineered to run together in this way.

The basic ingredients:

Stype – camera tracking
disguise gx 2c – media server (optimized for Notch)
Notch – real-time content hosted natively in disguise media software
Unreal Engine – running on a second machine feeding disguise
BOXX hardware for Unreal, running RTX 6000 GPUs from NVIDIA
SideFX Houdini software for visual effects

The view from Notch.

Camera tracking is essential – in order to extend the optically-captured imagery with virtual imagery as if it were in-camera, it’s necessary for each tiny camera move to be tracked in real time. You can see the precision partly in things like camera vibrations – the tiniest quiver has a corresponding move in the virtual video. Your first reaction may actually be that it’s unimpressive, but that’s the point – your eye accepts what it sees as real, even when it isn’t.
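The underlying math is easy to sketch: a tracked pose is a rotation plus a position, and the renderer inverts it into a view matrix so virtual points move in lockstep with the physical camera. A minimal sketch of that idea – this is assumed textbook camera math, not the actual Stype or disguise API:

```python
# Minimal camera-tracking sketch: each tracked physical-camera pose drives
# the virtual camera, so virtual set extensions stay locked to the optics.
# A pose is rotation R (3x3) plus position t; the view matrix maps world
# points into camera space.
import numpy as np

def view_matrix(R, t):
    """World-to-camera 4x4 from camera rotation R and position t."""
    V = np.eye(4)
    V[:3, :3] = R.T          # inverse of a rotation is its transpose
    V[:3, 3] = -R.T @ t      # then translate by the inverted position
    return V

# A 1 cm camera vibration shifts every rendered virtual point by the same
# 1 cm -- which is why the virtual image "quivers" with the real camera.
R = np.eye(3)
p_world = np.array([0.0, 0.0, 5.0, 1.0])     # a point on the virtual set
steady = view_matrix(R, np.zeros(3)) @ p_world
shaken = view_matrix(R, np.array([0.01, 0.0, 0.0])) @ p_world
print(shaken - steady)
```

Run per frame at tracking rate, this is the calibration that makes the extension read as in-camera rather than composited.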

Media servers are normally tasked with just spitting out video. Here, disguise is processing data and output mapping at the same time as it is crunching video signal – hiding the seams between Stype camera tracking data and video – and then passing that control data on to Notch and Unreal Engine so they’re calibrated, too. It erases the gap between the physical, optical camera and the simulated computer one.

Those of you who do follow this kind of setup – Ted notes that disguise is instancing Notch directly on its timeline, while Unreal is being hosted on that outboard BOXX server. And the point, he says, is flexibility – because this is virtual, generative architecture. He explains:

All about the parameters.

Apart from the screen surface in the first set, all geometry was instanced and specified inside of the Unreal Engine via studio-built Houdini Digital Assets. HDAs allow Houdini to express itself in other pieces of software via the Houdini Engine – instead of importing finished geometry, we import the concept of finished geometry and specify it within the project, usually looking through the point of view of the [virtual 3d] camera.

This is similar in concept to a composer writing a very specific score for an unknown synthesizer, and then working out a patch with a performer specific to a performance. It’s a very powerful way to think about geometry from the perspective of the studio. Instead of worrying about finishing during the most expensive part of our process time-wise — the part that uses Houdini — we buffer off worrying about finishing until we are considering a render. This is our approach to building out our digital backlot.

The “concept of the geometry” – think a parameterized model for what that geometry will be. There’s that Holodeck aspect again – you’re free to play around with what appears in virtual space.
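A hedged sketch of what “importing the concept of geometry” means in code: instead of shipping a fixed mesh, you ship a generator plus parameters and decide the final shape per show, which is roughly what an HDA does via Houdini Engine. The stair profile below is a hypothetical stand-in, not Savages’ actual asset:

```python
# Parametric geometry sketch: a generator plus parameters stands in for a
# finished mesh, so the "concept" is specified per project (as an HDA is).
from dataclasses import dataclass

@dataclass
class StairParams:
    steps: int = 8
    rise: float = 0.3   # height per step, metres
    run: float = 0.4    # depth per step, metres

def stair_profile(p: StairParams):
    """Return the 2D outline points of a staircase side profile."""
    pts = [(0.0, 0.0)]
    for i in range(p.steps):
        x, y = i * p.run, i * p.rise
        pts.append((x, y + p.rise))          # go up one riser
        pts.append((x + p.run, y + p.rise))  # go across one tread
    return pts

# Same "concept", two different sets: tweak parameters, regenerate.
print(len(stair_profile(StairParams(steps=4))))
print(len(stair_profile(StairParams(steps=12))))
```

That matches the score-for-an-unknown-synthesizer analogy: the generator is the score, the parameters are the patch worked out for a specific performance.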

Set pieces in Houdini.

There are two set pieces here as demo. I actually quite liked the simple first set, even, to which they mapped a Minimoog picture on the fly – partly because it really looks like I’m on some giant synth conference stage in a world that doesn’t yet exist. Ted describes the set:

The first set is purposefully pedestrian – in as little time as possible, we took a screen layout drawing for an existing show, added a bit of brand-relevant scenic, and chucked it in a Notch block. The name of the game here was speed – start to finish production time was about three hours. On the one hand, it looks it. On the other hand, this is the cheapest possible path to authoring content for xR – treat it like you’re making a stage, and then map it from the media server like it’s a screen. What’s on the screen can even be someone else’s problem, allowing digital media people to masquerade as scenic and lighting designers.

The second piece is more ambitious – and it lets a crew transport an artist to a genuinely new location:

Inside the layers of Savages’ virtual architecture.

The second set design was inspired by architect Ricardo Bofill’s project La Muralla Roja. As the world was gearing up to shut down, we spent a lot of time discussing community. La Muralla Roja was built to challenge modern perspectives of public and private spaces. Our Muralla is intended to do the same. We see it as a set for multiple performers, each with their own “staged location,” or as a tool to support a single performer.

Courtesy Ricardo Bofill, architects – see the full project page (and prepare to get lost in photos transporting you to the North African Mediterranean for a while).

And yes, placing an artist (that’ll be me, bear with me here) – that adds an additional layer to the process. Ted says:

[Bofill’s] language for the site is built out of plaster and the profile of a set of stairs, modulated by perpendicularity and level. An artist standing on [our] LED cube is modulating a perpendicular set of surfaces by adding levels of depth to the composition.

This struck me as a good peg for us all to use to hang our hats. Without you [Peter] standing there, the screens are very flat – no matter how much depth is in the image. Likewise, without the stairs, Muralla Roja would be very flat. When I was looking for references, this is what struck me.

It may not be apparent, but there is a lot still to be explored here. Because the graphics are generative and real-time, we could develop entire AV shows that make the visuals as performative as the sound, or even directly link the two. We could use that to produce a virtual performance (ideal for quarantine times), but also extend what’s possible in a live performance. We could blur the boundary between a game and a stage performance.

It’s basically a special effect as a performance. And that opens up new possibilities for the performer. Here I was pretty occupied just playing live, but having dipped into these waters for the first time, of course I’m eager to re-imagine the performance for this context – since the set I played here was really conceived as something that fits into a (real-world) DJ booth or stage area.

Ted and Savages continue to develop new techniques for combining software, including getting live MIDI control into the environment. So we’ll have more to look at soon.

To me, the pandemic experience is humbling partly in that it reminds us that many audiences can’t physically attend performances. It also reveals how virtual a lot of our connections were even before they were forced to be that way – and reveals some of the weakness of our technologies for communicating with each other in that virtual space. So to sound one hopeful note, I think that doubling down on figuring out how XR technologies work is a way for us to be more aware of our presence and how to make the most of it. Our distance now is necessary to save lives; figuring out how to bridge that distance is an extreme but essential way to develop skills we may need in the future.

Full set:

Artist: Peter Kirn
Designer (Scenography, Lighting, VFX): Ted Pallas, Savages
Director of Photography: Art Jones
Creative Director: Alex Hartman, Savages
Technical Director: Michael Kohler, WorldStage



Footnote: If you’re interested in exploring XR, there’s an open call out now for the GAMMA_LAB XR laboratory my friends and partners are running in St. Petersburg, Russia. Fittingly, they have adapted the format to allow virtual presence, allowing the event itself to go on, and it will bring together some leading figures in this field. It’s another way worlds are coming together – including Russia and the international scene.

Gamma_LAB XR [Facebook event / open call information in Russian and English]

The post Holodeck DJ: I played techno on an XR stage – here’s what it was like appeared first on CDM Create Digital Music.

A Webinar on FCC Issues for Broadcasters During the Current Crisis – And One More FCC Action on Sponsorship Identification on Paid PSAs

Delivered... David Oxenford | Scene | Mon 6 Apr 2020 3:24 pm

In the last three weeks, we have written about actions that the FCC has taken to help broadcasters through the current crisis caused by the COVID-19 virus.  The FCC appears to realize that the business of broadcasting in the current crisis is vastly different than it was just a month ago.  The FCC has provided relief on TV newsgathering and news sharing arrangements,  issued a determination that no charge spots unrelated to an existing advertising schedule do not affect lowest unit rates, granted liberal extensions to stations in Phase 9 of the TV repacking, deferred the filing of Quarterly Issues Programs Lists and the Annual Children’s Television Reports to July 10, and recognized that college-owned stations that are silent when students are no longer on campus do not need an STA to remain silent.  In a webinar I conducted for a number of state broadcast associations last Thursday, I summarized these developments and talked about other FCC rules and policies that broadcasters need to continue to observe during the current crisis.  That webinar is available on the website of the Indiana Broadcasters Association which hosted the session and can be viewed here.

On Friday, the FCC added to the actions that it has taken to assist broadcasters – issuing a Public Notice adopting a policy that, through June 30, commercial advertisers can donate ad time to government agencies or charities to run PSAs dealing with issues relating to COVID-19 without the station having to identify the companies donating the spots as sponsors of the PSA.  Even though the commercial sponsors paid for the time, they don’t need to associate themselves with the virus spots.  This was at the request of the Ad Council, which suggested that some advertisers had ad time that they no longer needed but were reluctant to donate it to COVID PSAs as they feared that, if they were identified as sponsors, their businesses would somehow be associated with the virus.  While it may be the unusual situation where an advertiser cancels its ad schedule and is willing to donate the advertising time for charitable uses without acknowledgement, in some cases it may give broadcasters one more way to try to convince advertisers not to totally cancel their schedules.  And it shows that the FCC is continuing to do its best to assist advertisers in this trying time.  Watch for more developments in the coming weeks.

Begone, webcams: Dixon will premiere an album in gorgeous 3D mixed reality, today

Delivered... Peter Kirn | Scene | Fri 3 Apr 2020 7:25 pm

We’ve gazed into grainy video feeds and literally watched multi-camera shoots of empty clubs. But we’re also starting to see a move into futuristic 3D and mixed reality – starting with one unique album premiere from the Innervisions mastermind.

The feed is tonight Berlin time, that’s 10PM CET so 4PM NYC time, 1PM California. (And yeah, my heart is with you right now, America, even as I type the letters NYC – and many other places worldwide. I know this is beyond tough; I’m watching and listening. If you want to join for Dixon distraction in 3D, please say hi.)

In a global crisis, one key element to look for in culture is people who were working on something before all of this, and that might endure through and after. So Dixon qualifies. He already DJed virtually (thanks to motion capture, a collaboration with Rockstar Games, and an appearance in Grand Theft Auto V Online, which you can still go visit in the game). And he had unveiled his Transmoderna project, which at least had the lofty goal of turning a club into something immersive. It’s hard to untangle what that means from the description, and I don’t tend to hang out in Ibiza. It seems in PR materials, everything in Ibiza starts to turn into some high-concept club-marketing gobbledygook – but yeah, his residency hosting everyone from the Innervisions crew to Mano le Tough to Honey Dijon also had the notion of developing immersive technology and reimagining what a club was.

Let’s skip to what is actually happening now, tonight Berlin time, as it at least starts to plumb this question of “what could be on a video feed that isn’t just a camera pointed at a bunch of clubgoers?”

The clubgoers for something like Boiler Room are now legally removed, their absence mandated by German contact limits in this pandemic. But the supercomputer on your desk and the supercomputer in your pocket were already capable of doing other things.

So Dixon tonight will perform a mixed reality DJ set as part of the Transmoderna collaboration, and to debut unreleased music of the same name. The artist says he’s building his DJ set already from this material. (No word on who yet, but previously announced collaborators on this moniker included Âme, Mathew Jonson, Echonomist, Trikk, Frank Wiedemann, and Roman Flügel.)

It’s really the visual material that starts to show promise, though – see the images here. Bleeding-edge visualist studio SELAM X has created an alien fishtank for the artist to inhabit tonight – and you can see the wonderful creatures they’re producing.

The whole thing will run in a game engine – fitting, as platforms like Twitch were streaming games while the club community was still just showing, well, clubs. And beyond that, I don’t know what to expect. But I’ll be tuning in, as this feels less like “DJ mix plus webcam” and more like something worth seeing on a screen.

I hope to talk to one of the artists at SELAM X soon, so take a look, let us know what you think, and if you have questions.

But I do suspect there’s a lot of potential here. And hey, if you want to catch Dixon and The Black Madonna in GTA V, too, I’m game. It’s more fun than watching Facebook Live chat hang my browser tabs, I know that. (Hey – I believe in computers and the Internet. We will get there, because we can.)



The post Begone, webcams: Dixon will premiere an album in gorgeous 3D mixed reality, today appeared first on CDM Create Digital Music.

The Rules to Live By, Part I

Delivered... whitney | Scene | Fri 3 Apr 2020 5:55 pm

Our newest series Ode to the Night captures the rave as a rite of passage. Each personal essay reveals how the party scene is as much about hedonism and celebration as it is about coming of age. In the inaugural piece, writer Geoffrey Mak searches for himself within a helix of self-destruction and enlightenment during his first year in a new city. This is part one of his two-part personal essay.


Streamlabs is an easier, free all-in-one streaming app, now on Mac, Windows, iOS, and Android

Delivered... Peter Kirn | Scene | Fri 3 Apr 2020 5:40 pm

Start with OBS, the now industry-standard streaming app, and add a bunch of special sauce to make it easier and friendlier. Now you’ve got Streamlabs – and it just added Mac support to its other platforms.

Mention live streaming any time in the past year or so, and someone no doubt told you to use OBS. Open Broadcaster Software, aka OBS Studio, is indeed free and powerful – not only for streaming but live recording, too. (It quietly displaced a lot of pricey and often incomplete commercial screencasting software, too.)

OBS has gotten a lot easier – a cash infusion from Twitch, Facebook, NVIDIA, and Logitech no doubt helped. But it’s still a bit intimidating as far as configuring settings for recording, to say nothing of the manual settings required to then make it upload to various streaming platforms.

That’s where Streamlabs comes in. It’s got its own desktop apps based on OBS, plus apps that let you easily stream from Android and iOS, too. So while you could do all of this on OBS desktop, Streamlabs makes it easier – basically, it’s a bit like having a custom distro of OBS. And then by adding mobile access, those platforms become easier, too.
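To see what’s being automated, it helps to look at what a raw push to a streaming platform involves: an H.264/AAC encode wrapped in FLV, sent to the platform’s RTMP ingest with your stream key. A hedged sketch of that plumbing – the ingest URL and key are placeholders, and these are common ffmpeg flags, not Streamlabs’ actual internals:

```python
# Build the kind of ffmpeg command that streaming presets configure for you.
# URL and stream key are placeholders; flags are typical live-streaming
# choices, not any particular platform's required settings.
def rtmp_command(input_source, stream_key,
                 ingest="rtmp://live.example.com/app", vbitrate="3000k"):
    return [
        "ffmpeg",
        "-re", "-i", input_source,          # read input at native frame rate
        "-c:v", "libx264",                  # H.264 video, the usual RTMP codec
        "-preset", "veryfast",              # favor encode speed for live use
        "-b:v", vbitrate,                   # target video bitrate
        "-c:a", "aac", "-b:a", "160k",      # AAC audio
        "-f", "flv",                        # RTMP expects an FLV container
        f"{ingest}/{stream_key}",
    ]

print(" ".join(rtmp_command("screen.mp4", "MY-STREAM-KEY")))
```

Every one of those choices – codec, preset, bitrate, container, endpoint – is a knob Streamlabs pre-configures per platform, which is the whole value proposition.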

Looks like OBS – but 100% less intimidating.

So in addition to all the things that make OBS powerful – using any video source or onscreen input, switching between them, handling resolutions, recording, and connecting to services – you get:

  • Pre-configured streaming platforms and easy login (think YouTube, Twitch, Facebook, etc.)
  • Auto-optimized video settings
  • Custom alerts (so you can also beg for donations, add engagement)
  • Themes and widgets for customizing your stream
  • Built-in chat (normally requiring you to open another window in OBS, which gets surprisingly clumsy fast)
  • Easy recording
  • Cloud backups (so you don’t lose your recording)


Honestly, having played around with it a bit, maybe the best part of Streamlabs is that all the power of OBS is there, but easier to use. So it doesn’t feel like a dumbed-down version of OBS so much as a polished, beginner-friendly interface with all the same features – and some useful additions.

The easier-to-follow Sources dialog alone is probably worth the price of admission. And price of admission is free, anyway.

The mobile apps also feature a lot of nice integrations on these lines, too. Think similar cross-platform streaming support, importing OBS settings from desktop, and adding widgets for events, donations, and chat.


This spin on OBS is open source, like its sibling. It’s based on Electron, so I hope that now that macOS has been added, we’ll see Linux, too. Linux users should meanwhile note that OBS packaging has improved a lot across distros, and Ubuntu Studio for instance even bakes a pre-configured OBS right into the OS. I have no idea how much work would be required to do the same with Streamlabs. (PS, you can beta test 20.04 LTS right now and help them squash bugs before what I think will be a very essential global pandemic stay-at-home OS release!)

So, since this is free and open source, what’s the business model?

Basically, you can grab this for free and have a nicer version of OBS. Tips and donations to content makers go 100% to you – no cut for Streamlabs. (Good – and a major difference with a lot of horrible startups.)

Then for a monthly fee, you can add additional effects (US$4.99/month, “PRO”), or a bunch of custom widgets, custom domain and website, and other extras (Prime, $12/mo billed annually).


I hope they allow month-to-month billing, but regardless, it’s nice to see a business built on open source software and that still has sustainable business support. (CDM is possible because of just that idea – thank WordPress.)

I’m sure some people are groaning at me even sharing this information, given how many streams are out there right now. But “streaming” doesn’t necessarily mean to a wide audience – it’s useful in any case where you want to teleport yourself around the world (while under stay-at-home orders, for instance), even if it’s to a small group. Plus, even if you haven’t been struggling with this yourself, now you can tip off your friends so they don’t a) bug you for how to set up their stream and/or b) stream really low-quality material you then have to watch.

And I think just as with blogs, the question is not really quantity or openness, but quality – and whether there’s a model for supporting the people putting out that quality. More on this soon.

The post Streamlabs is an easier, free all-in-one streaming app, now on Mac, Windows, iOS, and Android appeared first on CDM Create Digital Music.
