

apeSoft bring a whole new way to route audio and MIDI in iOS with apeMatrix

Delivered... Ashley Elsdon | Scene | Mon 11 Jun 2018 11:25 pm

apeSoft have a history in the iOS world of bringing us unique and innovative apps and concepts. But I think that this time they may have outdone even themselves. They’ve now brought us apeMatrix, which is a truly unique and innovative AU/IAA routing tool that aims to give you total control over what you want to do with your audio and how you want to do it.

Here’s what apeSoft have to say about their new app apeMatrix:

Holding true to the standard of apeSoft, apeMatrix continues to push the limits of possibilities in music creation. Automating almost every aspect of linked control, apeMatrix brings all your creation tools together and links them in both MIDI and audio, letting you inject all your favorite FX, both AU and IAA, in a very easy and intuitive way. apeMatrix can also assign its Control Manager (MIDI, Accelerometer, Scrub and LFOs) to modulate every built-in parameter and any parameter available inside AUv3 Audio Unit plugins.

apeMatrix offers 10 slots on each of its three Matrix grids, with 2 bus slots on each grid that make it possible to interconnect all three Matrix grids. The MIDI Patchbay offers similarly endless possibilities for MIDI routing and control, using the same grid design for both internal and external MIDI control, so you can send and receive MIDI with complete control over where it’s routed.

Each audio connection on the grid has its own gain control: just tap on a node and drag to get the full range of volume control. Alternatively, use the Mixer to control the AU outputs and automate them with apeSoft’s unique Control Manager (MIDI, Accelerometer, Scrub and LFOs), which can modulate every built-in parameter and any parameter available inside AUv3 Audio Unit plugins.

Turn each slot on or off with a control switch on the Matrix or via MIDI. Control the output, pan, mute and solo of each slot, and the master output of each Matrix, with the built-in mixer or via MIDI control.

Route your sounds through a series of FX, or route multiple sounds through the very same FX. The possibilities are as endless as your creativity.
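The grid-with-gain idea described in the last few paragraphs can be sketched as a simple routing matrix. This is a conceptual illustration only; the class, method and signal names are invented for the sketch and are not apeSoft’s implementation:

```python
# Conceptual sketch of a matrix-style audio router with a per-connection
# gain, similar in spirit to the grid described above (all names here are
# illustrative assumptions, not apeMatrix's actual code).

class RoutingMatrix:
    def __init__(self, sources, destinations):
        self.sources = sources
        self.destinations = destinations
        # gain[src][dst] is None when the node is off; otherwise a 0.0-1.0 gain
        self.gain = {s: {d: None for d in destinations} for s in sources}

    def connect(self, src, dst, gain=1.0):
        self.gain[src][dst] = gain   # tap a node: enable it with a gain

    def disconnect(self, src, dst):
        self.gain[src][dst] = None   # switch the slot off

    def mix(self, inputs):
        """Mix source signals into each destination, scaled per connection."""
        out = {d: 0.0 for d in self.destinations}
        for s, sig in inputs.items():
            for d, g in self.gain[s].items():
                if g is not None:
                    out[d] += sig * g
        return out

m = RoutingMatrix(["synth", "drum"], ["reverb", "master"])
m.connect("synth", "reverb", 0.5)   # one source into an FX send...
m.connect("synth", "master", 1.0)   # ...and into the master at full gain
m.connect("drum", "master", 0.8)
print(m.mix({"synth": 1.0, "drum": 1.0}))  # {'reverb': 0.5, 'master': 1.8}
```

Routing several sources into the same destination, or one source into several FX, is then just a matter of which cells of the grid are switched on.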

Features include:

  • 3 Audio Matrix
  • 3 MIDI Matrix
  • 10 Open Slots per Matrix (hosts up to 30 plugins)
  • 2 Audio Bus Slots per Matrix
  • 2 MIDI Bus Slots per Matrix
  • MIDI Monitor
  • MIDI Scale Filtering, Transpose etc…
  • Control Manager (MIDI, Accelerometer, LFO, Scrub) for all Built-In Parameters
  • Control Manager (MIDI, Accelerometer, LFO, Scrub) for all AUv3 (Audio Unit Plugins)
  • Presets Manager and Morphing Pad
  • Save Custom AUv3 Presets
  • Session Save/Load
  • Save/Load View’s Frame in Presets
  • Transport sends host sync to Audio Unit plugins and IAA
  • Integrated and configurable MIDI keyboard with scales
  • Connection output and panning controls
  • Post Dynamic Processor
  • Audiobus and Inter-Audio App (Sender and Fx)
  • Audiobus state saving
  • Ableton Link
  • Precise MIDI Clock In/Out
  • AudioShare Compatible
  • AudioCopy Compatible
  • MIDI Manager: Virtual MIDI and Network, 14-bit NRPN controllers
  • File Manager, sharing common audio files via iTunes, Dropbox and AudioCopy etc…
  • Variable Sampling Rate (up to 96 kHz)
  • Variable Buffering Size
  • Variable UI Color Schemes

Personally I think that this is going to be a big step forward in iOS music making, and I’m really looking forward to seeing how people use it and how it evolves the use of iOS music in general.

apeMatrix from apeSoft costs $9.99 on the App Store now.

The post apeSoft bring a whole new way to route audio and MIDI in iOS with apeMatrix appeared first on CDM Create Digital Music.

If noise is your thing then Noise Maschine Premium may be just what you’re looking for

Delivered... Ashley Elsdon | Scene | Mon 11 Jun 2018 11:02 pm

The original version of NOISE MASCHINE arrived back in September last year as an ad-supported app. Now Noise Maschine Premium has arrived: another version of the digitally controlled noise synthesizer, this time with no ads at all.

The app bills itself as giving you the ability to “create powerful noise music or aural relaxation sounds for health and sleep phase improvements”. It goes on to say that the app is “an easy to use noise machine dedicated to instantly generate continuous playing of feedback loops and drone sounds”.

In the premium version of the app the developer has added a new sequencer with TR-style buttons that can store up to 16 sounds permanently. The trigger pads also let you switch quickly between your patches. Once a step is selected it remembers all control values, including volume. You can randomize a single step via the RNDC button, or randomize all steps with the same random sound by pressing RNDA, which lets you create variations when played in sequence. For further experimentation, try running Evolver together with the sequencer at a slow BPM. To enable the sequencer panel, press the dedicated SEQ button in the menu; you can seamlessly switch back to the touch control pad whenever you prefer.
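The step-store and randomize behavior described above can be sketched roughly as follows. The control names, step count handling and seeding are assumptions made for illustration; this is not the app’s actual code:

```python
import random

# Sketch of a 16-step patch sequencer where each step remembers a full set
# of control values, RNDC randomizes only the current step, and RNDA applies
# one random sound to every step. Control names are invented for this sketch.

STEPS = 16
CONTROLS = ["volume", "feedback", "tone"]

def random_patch(rng):
    """One random sound: a value for every control."""
    return {c: round(rng.random(), 3) for c in CONTROLS}

class StepSequencer:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.steps = [random_patch(self.rng) for _ in range(STEPS)]
        self.current = 0  # the currently selected step

    def rndc(self):
        """Randomize only the currently selected step."""
        self.steps[self.current] = random_patch(self.rng)

    def rnda(self):
        """Apply the same random sound to every step."""
        patch = random_patch(self.rng)
        self.steps = [dict(patch) for _ in range(STEPS)]
```

Playing the steps in order after a few RNDC presses then yields the kind of sequenced variation the description mentions.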

NOISE MASCHINE PREMIUM is on the App Store and costs $1.99.

The post If noise is your thing then Noise Maschine Premium may be just what you’re looking for appeared first on CDM Create Digital Music.

THE BONNAROO 2019 DATES HAVE BEEN ANNOUNCED!

Delivered... Spacelab - Independent Music and Media | Scene | Mon 11 Jun 2018 7:00 pm
It's a brand new weekend in 2019, different from the dates this year. Get the details.

A look at AI’s strange and dystopian future for art, music, and society

Delivered... Peter Kirn | Scene | Mon 11 Jun 2018 6:01 pm

Machine learning and new technologies could unlock new frontiers of human creativity – or they could take humans out of the loop, ushering in a new nightmare of corporate control. Or both.

Machine learning, the field of applying neural networks to data analysis, unites a range of issues from technological to societal. And audio and music are very much at the center of the transformative effects of these technologies. Commonly dubbed (partly inaccurately) “artificial intelligence,” they suggest a relationship between humans and machines, individuals and larger state and corporate structures, far beyond what has existed traditionally. And that change has gone from far-off science fiction to a reality that’s very present in our homes, our lives, and of course the smartphones in our pockets.

I had the chance to co-curate with CTM Festival a day of inputs from a range of thinkers and artist/curators earlier this year. Working with my co-host, artist and researcher Ioann Maria, we packed a day full of ideas and futures both enticing and terrifying. We’ve got that full afternoon, even including audience discussion, online for you to soak in.

Me, with Moritz, pondering the future. Photo: CTM Festival / Isla Kriss.

And there are tons of surprises. There are various terrifying dystopias, with some well-reasoned arguments for why they might actually come to fruition (or evidence demonstrating these scenarios are already in progress). There are more hopeful visions of how to get ethics, and humans, back in the loop. There are surveys of artistic responses.

All of this kicked off our MusicMakers Hacklab at CTM Festival, which set a group of invited artists on collaborative, improvisatory explorations of these same technologies as applied to performance.

These imaginative and speculative possibilities become not just idle thoughts, but entertaining and necessary explorations of what might be soon. This is the Ghost of Christmas Yet-to-Come, if a whole lot more fun to watch, here not just to scare us, but to spur us into action and invention.

Let’s have a look at our four speakers.

Machine learning and neural networks

Moritz Simon Geist: speculative futures

Who he is: Moritz is an artist and researcher; he joined us for my first-ever event for CTM Festival with a giant robotic 808, but he’s just as adept at researching history and the future.

Topics: Futurism, speculation, machine learning and its impact on music, body enhancement and drugs

Takeaways: Moritz gives a strong introduction to style transfer and other machine learning techniques, then jumps into speculating on where these could go in the future.

In this future, remixes and styles and timbres might all become separate from a more fluid creativity – but that might, in turn, dissolve artistic value.

“In the future … music will not be conceived as an art form any more.” – Moritz Simon Geist

Then, Moritz goes somewhere else entirely – dreaming up speculative drugs that could transform humans, rather than only machines. (The historical basis for this line of thought: Alexander Shulgin and his drug notebooks, which might even propose a drug that transforms perception of pitch.)

Moritz imagines an “UNSTYLE” plug-in that can extract vocals – then change genre.

What if self-transformation – or even fame – were in a pill?

Gene Cogan: future dystopias

Who he is: An artist/technologist who works with generative systems and their overlap with creativity and expression. Don’t miss Gene’s expansive open source resource for code and learning, machine learning for artists.

Topics: Instrument creation, machine learning – and eventually AI’s ability to generate its own music

Takeaways: Gene’s talk began with “automation of songwriting, production, and curation” as a topic – but tilted enough toward dystopia that he changed the title.

“This is probably going to be the most depressing talk.”

In a more hopeful vision, he presented the latest work of Snyderphonics – instruments that train themselves as musicians play, rather than only the other way around.

He turned to his own work in generative models and artistic works like his Donald Trump “meat puppet,” but presented a scary image of what would happen if eventually analytic and generative machine learning models combined, producing music without human involvement:

“We’re nowhere near anything like this happening. But it’s worth asking now, if this technology comes to fruition, what does that mean about musicians? What is the future of musicians if algorithms can generate all the music we need?”

References: GRUV, a generative model for producing music

WaveNet, the DeepMind tech being used by Google for audio

Sander Dieleman’s content-based recommendations for music

Gene presents – the death of the human musician.

Wesley Goatley: machine capitalism, dark systems

Who he is: A sound artist and researcher in “critical data aesthetics,” plumbing the meaning of data from London in his own work and as a media theorist

Topics: Capitalism, machines, aesthetics, Amazon Echo … and what they may all be doing to our own agency and freedom

Takeaways: Wesley began with “capitalism at machine-to-machine speeds,” then led to ways this informed systems that, hidden away from criticism, can enforce bias and power. In particular, he pitted claims like “it’s not minority report – it’s science; it’s math!” against the realities of how these systems were built – by whom, for whom, and with what reason.

“You are not working them; they are working you.”

As companies like Amazon and Google extend control, under the banner of words like “smart” and “ecosystem,” Wesley argues, what they’re really building is “dark systems”:

“We can’t get access or critique; they’re made in places that resemble prisons.”

The issue then becomes signal-to-noise. Data isn’t really ever neutral, so the position of power lets a small group of people set an agenda:

“[It] isn’t a constant; it’s really about power and space.”

Wesley on dark connectionism, from economics to design. Photo: CTM Festival / Isla Kriss.

Deconstructing an Amazon Echo – and data and AI as echo chamber. Photo: CTM Festival / Isla Kriss.

What John Cage can teach us: silence is never neutral, and neither is data.

Estela Oliva: digital artists respond

Who she is: Estela is a creative director / curator / digital consultant, an anchor of London’s digital art scene, with work on Alpha-ville Festival, a residency at Somerset House, and her new Clon project.

Topics: Digital art responding to these topics, in hopeful and speculative and critical ways – and a conclusion to the dystopian warnings woven through the afternoon.

Takeaways: Estela grounded the conclusion of our afternoon in a set of examples from across digital arts disciplines and perspectives, showing how AI is seen by artists.

Works shown:

Terence Broad and his autoencoder

Sougwen Chung and Doug, her drawing mate

https://www.bell-labs.com/var/articles/discussion-sougwen-chung-about-human-robotic-collaborations/

Marija Bozinovska Jones and her artistic reimaginings of voice assistants and machine training:

Memo Akten’s work (also featured in the image at top), “you are what you see”

Archillect’s machine-curated feed of artwork

Superflux’s speculative project, “Our Friends Electric”:

OUR FRIENDS ELECTRIC

Estela also found dystopian possibilities – as bias, racism, and sexism are echoed in the automated machines. (Contrast, indeed, the machine-to-machine amplification of those worst characteristics with the more hopeful human-machine artistic collaborations here, perhaps contrasting algorithmic capitalism with individual humanism.)

But she also contrasted that with more emotionally intelligent futures, especially with the richness and dimensions of data sets:

“We need to build algorithms that represent our values better – but I’m just worried that unless we really talk about it more seriously, it’s not going to happen.”

Estela Oliva, framed by Memo Akten’s work. Photo: CTM Festival / Isla Kriss.

It was really a pleasure to put this together. There’s obviously a deep set of topics here, and ones I know we need to continue to cover. Let us know your thoughts – and we’re always glad to share in your research, artwork, and ideas.

Thanks to CTM Festival for hosting us.

https://www.ctm-festival.de/news/

The post A look at AI’s strange and dystopian future for art, music, and society appeared first on CDM Create Digital Music.

Hey, Alexa, How Much Did You Raise My SoundExchange Royalties?

Delivered... David Oxenford | Scene | Mon 11 Jun 2018 5:09 pm

In the last year, the popularity of Alexa, Google Home and similar “smart speaker” devices has led to discussions at almost every broadcast conference of how radio broadcasters should embrace the technology as the new way for listeners to access radio programming in their homes. Broadcasters are urged to adopt strategies to take advantage of the technology to keep listeners listening to their radio stations through these new devices. Obviously, broadcasters want their content where the listeners are, and they have to take advantage of new platforms like the smart speaker. But in doing so, they also need to be cognizant that the technology imposes new costs on their operations – in particular increased fees payable to SoundExchange.

Never mentioned at these broadcast conferences that urge broadcasters to take advantage of these smart speakers is the fact that these speakers, when asked to play a radio station, end up playing that station’s stream, not its over-the-air signal. For the most part, these devices are not equipped with FM chips or any other technology to receive over-the-air signals. So, when you ask Alexa or Google to play your station, you are calling up a digital stream, and each digital stream gives rise to the same royalties to SoundExchange that a station pays for its webcast stream on its app or through a platform like TuneIn or iHeartRadio. For 2018, those royalties are $.0018 per song per listener (see our article here). In other words, for each song you play, you pay SoundExchange about one-fifth of a cent for each listener who hears it. These royalties are in addition to the royalties paid to ASCAP, BMI, SESAC and, for most commercial stations, GMR.
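The per-performance arithmetic above is easy to run for a hypothetical station. Only the $0.0018-per-song, per-listener rate comes from the article; the listening figures below are invented for illustration:

```python
# Back-of-envelope SoundExchange cost at the 2018 commercial rate of
# $0.0018 per song, per listener, as described above. The station's
# listening figures are assumptions chosen for illustration only.

RATE_PER_PERFORMANCE = 0.0018  # dollars per song, per listener (2018)

def monthly_royalty(songs_per_hour, hours_per_day, avg_listeners, days=30):
    """Total streaming performances in a month, times the per-performance rate."""
    performances = songs_per_hour * hours_per_day * avg_listeners * days
    return performances * RATE_PER_PERFORMANCE

# A hypothetical music station streaming 12 songs/hour, 24 hours/day,
# to an average of 100 concurrent smart-speaker listeners:
cost = monthly_royalty(12, 24, 100)
print(f"${cost:,.2f} per month")  # roughly $1,555 for that audience
```

The key point the article makes falls out of the formula: the cost scales linearly with the audience, so doubling the smart-speaker listenership doubles the royalty bill.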

In addition, if the station provides other content through these smart speakers, other royalty issues can arise. When a listener can ask for a certain DJ’s program at any time, the tendency for stations is to want to make it available on demand. Before doing that, stations need to get legal advice as to whether their royalties to SoundExchange cover such uses. As we have written before, podcasts and other on-demand media for the most part are not covered by these royalties. Instead, to use music in podcasts, you need to directly negotiate with the publishing company that owns the rights to the underlying musical composition and the record company that owns the song as recorded by a particular artist – or find a musician who owns both the words and the recording and will give you rights to their music. The same would be true for on-demand streams delivered through a smart speaker unless the program segments are at least 3 hours long and accessible only at random points within a 3-hour loop, or unless the program is at least 5 hours long and made accessible for less than 2 weeks. There are nuances in these rules that need to be observed to avoid going beyond the limits of the SoundExchange license and potentially incurring significant liability for copyright infringement.
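The two carve-outs in the paragraph above can be restated as a small check. This is only a sketch of the conditions exactly as summarized here, not the full statutory text and not legal advice:

```python
# Sketch of the two on-demand carve-outs summarized above: a looped
# program of at least 3 hours accessible only at random points within
# the loop, or an archived program of at least 5 hours made available
# for less than 2 weeks. Parameter names are invented for this sketch.

def covered_by_statutory_license(hours, is_loop,
                                 random_access_only=False,
                                 weeks_available=None):
    """True if the program fits one of the two carve-outs described above."""
    if is_loop:
        return hours >= 3 and random_access_only
    return (hours >= 5
            and weeks_available is not None
            and weeks_available < 2)
```

For example, a 3-hour loop entered only at random points would pass, while a 2-hour on-demand DJ show would not, which is exactly the situation the article warns stations to get legal advice about.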

In essence, as these smart speakers grow in popularity, the business of the broadcaster providing its programming through these speakers will change. Unlike programming received over-the-air, which bears no SoundExchange royalty (see our articles here and here), broadcasters growing a smart-speaker-based audience need to budget for the cost of the sound recording performance royalty paid to SoundExchange. As the aggregate fees grow right along with the audience size, the broadcaster faces the conundrum that many pure webcasters face – that the royalties grow faster than the additional income generated from the streams as audiences increase.

Is there a solution? For talk and sports radio, there are far fewer issues: as long as a station has the digital rights to stream the programming that it airs, the SoundExchange royalties are generally low. But for music-intensive stations, the royalties grow and need to be dealt with. The vast majority of all digital audio services have thus far been unprofitable, primarily because of the royalties they have to pay. Perhaps, as broadcasters end up more and more reliant on digitally delivered streams like those heard on Alexa and Google Home, it is time for broadcasters to consider discussions with the record labels about royalties that would perhaps include a “piece of the action” from over-the-air broadcasting in exchange for dramatically lower digital royalties, at a level that would allow for a profitable operation. Something to think about next time you ask Alexa to play your favorite radio station.

THE MUSIC MIDTOWN LINEUP IS OUT!

Delivered... Spacelab - Independent Music and Media | Scene | Mon 11 Jun 2018 2:00 pm
Kendrick Lamar, Imagine Dragons, Post Malone and Fall Out Boy headline! Khalid, Thirty Seconds To Mars, Gucci Mane, Janelle Monae and Portugal. The Man also top the lineup!

Parklife festival review – Manchester turns night to day with punishing party energy

Delivered... Daniel Dylan Wray | Scene | Mon 11 Jun 2018 1:44 pm

Heaton Park, Manchester
Liam Gallagher is incongruous, but prompts massive singalongs, while the xx and Confidence Man are other big successes

“Who here is on drugs?” asks a member of Levelz, at 4.30 on Saturday afternoon. This may seem like an odd question, but at Parklife time feels mirrored, with the 11am-11pm festival feeling as if it’s 11pm-11am. Drugs seem to be ubiquitous; wide eyes, wonky gurns and euphoric grins cannot be hidden in the beaming sun. The Manchester rap collective then play the infectious Drug Dealer as part of a genre-hopping set.

With the crowd full of shirtless dudes, buckets of glitter and people wearing sparkly outfits, Parklife seems like Coachella relocated to Prestwich and sponsored by the North Face. There are a lot of stages: the tower block-shaped Valley, the Bronx-themed Elrow tent, the foliage-filled Palm House, the oil rig-resembling Temple (which shoots flames) and a giant airplane hangar. The production is impressive and the sound systems are pristine, with afternoon DJ sets from Jackmaster and Peggy Gou pushing their limits.
