Indian E-music – The right mix of Indian Vibes… » Video


Videosync 1.0 arrives: visuals integrate with Ableton Live Session, Arrangement, Warping

Delivered... Peter Kirn | Scene | Mon 26 Apr 2021 4:22 pm

Now at 1.0, Videosync from Showsync is a deep Max for Live visual engine, complete with integration with Ableton Live's native interface, modulation, Warp Markers, and edit/play workflows.

The post Videosync 1.0 arrives: visuals integrate with Ableton Live Session, Arrangement, Warping appeared first on CDM Create Digital Music.

OP-Z VJ: app from Teenage Engineering adds video powers

Delivered... Peter Kirn | Scene | Mon 21 Dec 2020 6:31 pm

The OP-Z's ultra-minimalist, candybar form factor hides some serious synthesis, sampling, and audiovisual powers - and now new live visual/VJ functions, too.

The post OP-Z VJ: app from Teenage Engineering adds video powers appeared first on CDM Create Digital Music.

Colourlab Ai 1.0 arrives, and the future of color grading comes with it – details

Delivered... Peter Kirn | Scene | Wed 25 Nov 2020 12:49 am

This is some Kodachrome level color voodoo - color grading and shot matching powered by machine-learning. And it comes from a collaboration with some friends of ours from the artist and live visual side, so it's doubly worth mentioning.

The post Colourlab Ai 1.0 arrives, and the future of color grading comes with it – details appeared first on CDM Create Digital Music.

AI upscaling makes this Lumiere Bros film look new – and you can use the same technique

Delivered... Peter Kirn | Scene | Tue 4 Feb 2020 11:17 pm

A.I.! Good gawd y’all – what is it good for? Absolutely … upscaling, actually. Some of machine learning’s powers may prove to be simple but transformative.

And in fact, this “enhance” feature we always imagined from sci-fi becomes real. Just watch as a pioneering Lumiere Brothers film is transformed so it seems like something shot with money from the Polish government and screened at a big arty film festival, not 1896. It’s spooky.

It’s the work of Denis Shiryaev. (If you speak Russian, you can also follow his Telegram channel.) Here’s the original source, which isn’t necessarily even a perfect archive:

It’s easy to see the possibilities here – this is a dream both for archivists and people wanting to economically and creatively push the boundaries of high-framerate and slow-motion footage. What’s remarkable is that there’s a workflow here you might use on your own computer.

And while there are legitimate fears of AI in black boxes controlled by states and large corporations, here the results are either open source or available commercially. There are two tools here.

Enlarging photos and videos is the work of a commercial tool – Topaz Labs' Gigapixel AI – which promises up to 600% upscaling “while preserving image quality.”

It’s US$99.99, which seems well worth it for the quality payoff. (More for commercial licenses. There’s also a free trial available.) Uniquely, that tool is also optimized for Intel Core processors with Iris Plus graphics, so you don’t need to fire up a dedicated GPU like an NVIDIA card. They don’t say a lot about how it works, other than that it’s a deep learning neural network.

We can guess, though. The trick is that machine learning trains on existing high-res images, allowing mathematical prediction of detail in lower-resolution images. There’s been copious documentation of AI-powered upscaling, and why it works mathematically better than traditional interpolation algorithms. (This video is an example.) Many of those approaches used GANs (generative adversarial networks), though, and I think it’s a safe bet that Gigapixel is closer to this (as slightly implied by the language Gigapixel uses):

Deep learning based super resolution, without using a GAN [Towards data science]

Some more expert data scientists may be able to fill in details, but at least that article would get you started if you’re curious to roll your own custom solution. (Unless you’re handy with Intel optimization, it’s worth the hundred bucks – but for those of you who are advanced coders and data scientists, knock yourselves out.)
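If you’re tempted to try, here’s roughly what that non-GAN approach looks like in practice. This is a minimal PyTorch sketch of the general idea – emphatically not Gigapixel’s actual, undisclosed network: upscale conventionally first, then train a small convolutional network on downscaled/original image pairs to predict the detail interpolation misses.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRNet(nn.Module):
    """SRCNN-style toy: bicubic upscale, then a learned residual correction."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # Conventional interpolation first; the network only predicts detail.
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        return x + self.body(x)

# Training sketch: downscale high-res images to make (low, high) pairs.
net = TinySRNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
high = torch.rand(8, 3, 128, 128)  # stand-in for a batch of real photos
low = F.interpolate(high, scale_factor=0.5, mode="bicubic", align_corners=False)
for step in range(100):
    opt.zero_grad()
    loss = F.l1_loss(net(low), high)  # plain pixel loss - no GAN involved
    loss.backward()
    opt.step()

The residual design is the key trick: the network never has to reproduce the whole image, only the high-frequency detail that bicubic interpolation smears away.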

The quality of motion may be just as important, and that side of this example is free. To increase the framerate, they employ a technique developed by an academic-private partnership (Google, the University of California, Merced, and Shanghai Jiao Tong University):

Depth-Aware Video Frame Interpolation

Short version – you combine some good old-fashioned optical flow prediction together with convolutional neural networks, and then use a depth map so that big objects moving through the frame don’t totally screw up the processing.

Result – freakin’ awesome slow mo go karts, that’s what! Go, math!
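For the curious, the optical flow half of that recipe is easy to try with nothing but OpenCV. A naive sketch of my own follows – this is not the DAIN code, and it has no depth map, so you’ll see exactly the occlusion smearing the paper sets out to fix:

import cv2
import numpy as np

def midpoint_frame(a_bgr, b_bgr):
    # Dense optical flow from frame A to frame B (Farneback's classic method).
    a_gray = cv2.cvtColor(a_bgr, cv2.COLOR_BGR2GRAY)
    b_gray = cv2.cvtColor(b_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(a_gray, b_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = a_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    # Sample frame A half a step backward along the flow, frame B half a step
    # forward, so both estimates land on the in-between moment.
    mid_a = cv2.remap(a_bgr, grid_x - 0.5 * flow[..., 0],
                      grid_y - 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    mid_b = cv2.remap(b_bgr, grid_x + 0.5 * flow[..., 0],
                      grid_y + 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    # Blend both estimates; without depth, occluded regions will smear.
    return cv2.addWeighted(mid_a, 0.5, mid_b, 0.5, 0)

Doubling a clip’s framerate is then just a matter of interleaving these synthetic frames between the originals – which is where the depth map and the CNNs earn their keep.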

This also illustrates that automation isn’t necessarily the enemy. Remember watching huge lists of low-wage animators scroll past at the end of movies? That might well be something you want to automate (in-betweening) in favor of more-skilled design. Watch this:

A lot of the public misperception of AI is that it will make the animated movie, because technology is “always getting better” (which rather confuses Moore’s Law and the human brain – not related). It may be more accurate to say that these processes will excel at pushing the boundaries of some of our tech (like CCD sensors, which eventually run into the laws of physics). And they may well automate processes that were rote work to begin with, like in-betweening frames of animation, which is a tedious task that was already getting pushed to cheap labor markets.

I don’t want to wade into that, necessarily – animation isn’t my field, let alone labor practices. But suffice it to say, even a quick Google search will turn up stories like this article on Filipino animators and low wages and poor conditions. Of course, the bad news is, just as those workers collectivize, AI could automate their jobs away entirely. But it might also mean a Filipino animation company could compete on a level playing field, using this software, with the companies that once hired them – only now with the ability to do actual creative work.

Anyway, that’s only animation; you can’t outsource your crappy video and photos, so it’s a moot point there.

Another common misconception – perhaps one even shared by some sloppy programmers – is that processes improve the more computational resources you throw at them. That’s not necessarily the case – and objectively, not always the case. In any event, the fact that these techniques work now, and in ways that are pleasing to the eye, means you don’t have to mess with ill-informed hypothetical futures.

I spotted this on the VJ Union Facebook group, where Sean Caruso suggests this workflow: since you can only use Topaz on sequences of images, you can import those into After Effects – and go on and use Twixtor Pro to double the framerate, too. Of course, coders and people handy with tools like ffmpeg won’t need the Adobe subscription. (ffmpeg, not so much? There’s a CDM story for that, with a useful comment thread, too.)
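For reference, here’s a sketch of that ffmpeg route, wrapped in Python. File names, directories, and frame rates are placeholders; the last step uses ffmpeg’s own minterpolate motion-compensation filter as a rough stand-in for Twixtor or DAIN:

import subprocess
from pathlib import Path

SRC = "lumiere.mp4"  # placeholder input clip
Path("frames").mkdir(exist_ok=True)
Path("upscaled").mkdir(exist_ok=True)

# 1. Explode the video into a numbered PNG sequence for Topaz (or anything else).
subprocess.run(["ffmpeg", "-i", SRC, "frames/%06d.png"], check=True)

# ... run your upscaler over frames/, writing results into upscaled/ ...

# 2. Reassemble the processed frames; -framerate sets the input sequence rate.
subprocess.run([
    "ffmpeg", "-framerate", "24", "-i", "upscaled/%06d.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "upscaled.mp4",
], check=True)

# 3. Optionally let ffmpeg synthesize in-between frames via motion estimation.
subprocess.run([
    "ffmpeg", "-i", "upscaled.mp4",
    "-vf", "minterpolate=fps=48:mi_mode=mci",
    "smooth.mp4",
], check=True)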

Having blabbered on like this, I’m sure someone can now say something more intelligent or something I’ve missed – which I would welcome, fire away!

Now if you’ll excuse me, I want to escape to that 1896 train platform again. Ahhhh…

The post AI upscaling makes this Lumiere Bros film look new – and you can use the same technique appeared first on CDM Create Digital Music.

Exquisite paintings, with analog signal and modular gear as brush

Delivered... Peter Kirn | Scene | Thu 23 Jan 2020 11:15 pm

Working with image signal instead of sound remains to many an undiscovered country. One artist is producing beautiful meditations on analog video – and charting his work.

Christopher Konopka treats the Eurorack modular as a canvas, carefully producing slow-moving, richly organic work. He calls them “emotional abstractions” – and relates the experience of navigating new textures to that of our perception of time and memory.

You can watch this, along with musings on what he’s doing and how the patches work, in an exhaustive YouTube channel. It’s some mesmerizing inspiration:

Oh yeah – so how do you remember work, when it exists as ephemeral combinations of knobs and patch cables? Christopher has added one obsessive layer of digital organization, a data project he calls “broadcast-research.” Using scripts and code he shares on his GitHub, he automates the process of recording and organizing texture output, all in open source tools.

So there’s a meeting of digital and analog – and Christopher even suggests this data set could be used with machine learning.

(Hot tip – even if you’re happy to let your own creations disappear “like tears in the rain” and all that jazz, you might poke around his GitHub repository and fork it, as you’ll find some handy recipes and models for working with these tools on other projects. It’s done in Go + Bash command line scripts + the free graphics tools FFprobe, FFmpeg, and ImageMagick, which are great alternatives to getting sucked into Photoshop glacially loading and then crashing. Ahem.)
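The repo itself is Go and Bash, but the core move is portable. As a hedged Python equivalent (the directory names here are hypothetical, not Christopher’s): ask FFprobe for each capture’s metadata, then file the recordings into folders accordingly.

import json
import shutil
import subprocess
from pathlib import Path

def probe_format(path):
    """Return ffprobe's format metadata for a media file as a dict."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", str(path)],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["format"]

# Sort capture files into folders by duration bucket - one simple way to keep
# ephemeral patch recordings browsable later.
for clip in Path("captures").glob("*.mov"):
    seconds = float(probe_format(clip).get("duration", 0))
    bucket = Path("organized") / f"{int(seconds // 60):02d}min"
    bucket.mkdir(parents=True, exist_ok=True)
    shutil.move(str(clip), str(bucket / clip.name))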

The hardware in question:

Lots more – including an artist statement – on his site:

https://github.com/cskonopka/broadcast-research

ImageMagick is genius, by the way – time to do another recipe round-up, a la (see also comments here):

Previously, related:

The post Exquisite paintings, with analog signal and modular gear as brush appeared first on CDM Create Digital Music.

With AI and GPUs, Vadim Epstein creates an engrossing, fragmented mirror

Delivered... Peter Kirn | Scene | Fri 2 Aug 2019 6:11 pm

AI and GPU tricks, with their rigid, rigorous demands, can often come out as carbon-copy clones. Not so in the work of Vadim Epstein, an artist and theoretical physicist based in Moscow. In his hands, machine learning becomes painterly.

Vadim was formally educated as a physicist, but taught himself a range of skills in computer science, media art, and VJing, starting in the mid-1990s. In his autobiographical text, he explains that the web was a solution to the feeling that there was “no need to create something new” – instead offering the chance to re-imagine existing material as DJ, VJ, or even webmaster.

Vadim’s work is hyperactive and relentless. I asked him for documentation and got folders upon folders of images. Because he’s doing club gigs and VJ gigs and work for fun and experimentation and corporate events and so on and so forth, it’s also very often real-time and improvisatory. He’s not just doing a precious gallery piece or demo here or there. As a result, it’s also more exhaustively iterative, with his techniques evolving daily.

This week, he updated his showreel, full of “old and recent” work and live shows, made with the graphical dataflow development environment vvvv, plus custom shader code (utilizing the capabilities of internal graphics hardware), and TensorFlow, the machine learning library.

(Tokee’s “Numbers” is the music.)

Even as the technique of using GANs – Generative Adversarial Networks – spreads, Epstein’s work stands out, a kind of Old Dutch Master. The approach is based on using two neural networks for set-it-and-leave-it AI magic. One network (the generator) uses a model to spit out images; the other (the discriminator) tests whether those images score well against known existing images. It’s a little bit like a couple of grade school kids quizzing one another on language flash cards – a game where one has the answers and tries to drill the other so they improve.
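In code, that flash-card game is surprisingly compact. Here’s a generic, toy-scale sketch of a GAN training loop – random tensors standing in for a real image dataset, and in no way Vadim’s actual setup:

import torch
import torch.nn as nn

# G invents images from noise; D grades them against real ones. Each gradient
# step makes the other network's job a little harder.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, 784) * 2 - 1  # stand-in for real training images
for step in range(200):
    # Discriminator turn: real images should score 1, generated ones 0.
    fake = G(torch.randn(32, 64))
    d_loss = (bce(D(real_batch), torch.ones(32, 1)) +
              bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator turn: try to make D score the fakes as real.
    g_loss = bce(D(G(torch.randn(32, 64))), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

When training goes well, neither side can win outright – and the generator’s outputs start passing for the real thing.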

Both Vadim’s still and motion images are ethereal acid trips, beginning to push toward some strange new fluidity of the image.

To anyone who might think this could turn into a trope available everywhere, like the popular image-processing filters of the 90s, Vadim says – bring it on. He’s prolific in pushing this work further, and is even readying courses and code that will help others give it a go. I think it’s basically the VJ’s creed, following the ethos of the DJ: rather than worry that other people have access to the same toys, you freely embrace democratization – in the belief that you can hold your own in a battle against other artists, in the heat of the moment.

Vadim isn’t a one-trick pony, of course, having done live visuals since the mid-90s; machine learning is just the latest live visual novelty. He does the usual “AI” processing with stock TensorFlow and PyTorch libraries, then builds a performance around live-processing the results. To make that a reality, he works with vvvv as a convenience wrapper, so he can manipulate the image output in real time. Our friend Stanislav Glazov, also a Moscow native but now based in Berlin, has assisted with a TouchDesigner port – I expect it might make a cameo at the summit coming later this month.

Just like Stas Glazov, Vadim makes part of his living by offering in-depth paid courseware. For now, it’s Russian-language only, but he says an English version is in the works. (Meanwhile, you just have to accept Russian as the new lingua franca – certainly for post-industrial, generative visuals of the ex-Soviet sphere, obviously!)

https://bangbangeducation.ru/course/neuralnet

If you don’t want to wait, and for a brush-up of the basics, it’s worth checking David Foster’s book on the topic (just published this summer by O’Reilly) and accompanying GitHub repository:

https://github.com/davidADSP/GDL_code

Beautiful digital LSD, perhaps. Nightmare transporter accident rendition of Michael Jackson’s “Black or White,” maybe. I find them irresistible. Vadim for his part calls it a new genre, “trash neurror.”

Regardless, it’s worth checking out Vadim’s other work and its long-running evolution. Even with all the push for the new and shiny, making visuals expressive might be akin to learning an instrument – it comes from years in the woodshed, and playing in the trenches.

More:

http://eps.here.ru/ [Vadim’s main homepage]

https://www.instagram.com/eps696/

See also this interview from 2015 and installation from late 2014:

INSIDE CAMP #14 – Art is just outcome of the human life

And if you like this sort of thing, and you’re near Montreal in August, well:

The post With AI and GPUs, Vadim Epstein creates an engrossing, fragmented mirror appeared first on CDM Create Digital Music.

10 hours of live drones celebrate Drone Day, a noise round the world

Delivered... Peter Kirn | Scene | Wed 29 May 2019 1:47 pm

Ready to drone the f*** out? Here’s your own personal all-night chillout stage, full of ten hours of drones. It’s all part of a growing international annual celebration of drone sounds.

Oh sure, if you’re American you probably had Memorial Day weekend on the mind last weekend. But there was another holiday, too, dedicated to ambient and experimental music.

“Every year we make a noise together that stretches around the world,” proclaim the organizers on the site. “The answer comes through tiny vibrations in our skin and between our bones,” they say. “Gather and drone with friends, with the public, or alone (though you are never truly alone in the drone).”

Drone, community, and experimental sounds are all welcome. The ritual began a few years ago with organizers Marie Claire LeBlanc Flanagan and Weird Canada. This year’s edition had some 60 drone events worldwide.

But if you missed Drone Day on Saturday, don’t worry – you didn’t miss out. We’ve got a full ten hours recorded (and streamed live) in Berlin for your droning needs.

The details of this broadcast, plus the (very lovely) performing lineup:

For Drone Day, May 25th 2019, a live studio broadcast and deep listening session was held in Berlin, with funding support from Musicboard Berlin GmbH. An audio broadcast was also streamed, with kind thanks to Radio nunc, from 14:00–22:00 CET.

0:00:00 improvisation with diane + vida vojić
0:31:00 DuChamp
1:13:00 sn(50)
1:58:00 -akis
2:22:30 adsx
3:34:10 vida vojić
4:28:31 improvisation with diane + DuChamp
5:15:30 Auguste + Nina Guo
5:55:30 Nina Pixel
6:58:32 Inter Lineas
7:44:05 improvisation with diane + Alexandra Macià + sn(50)

It’s not actually shot in black and white murk; we just live like that in Berlin – it follows us around, like a fog.

Happy droning.

The post 10 hours of live drones celebrate Drone Day, a noise round the world appeared first on CDM Create Digital Music.

A haunting ambient sci-fi album about a message from Neptune

Delivered... Peter Kirn | Artists,Scene | Fri 30 Nov 2018 10:54 pm

Latlaus Sky’s Pythian Drift is a gorgeous ambient concept album, the kind that’s easy to get lost in. The set-up: a probe discovered on Neptune in the 26th Century will communicate with just one woman back on Earth.

The Portland, Oregon-based artists write CDM to share the project, which is accompanied by this ghostly video (still at top). It’s the work of Ukrainian-born filmmaker Viktoria Haiboniuk (now also based in Portland), who composed it from three years’ worth of 120mm film images.

Taking in the album even before checking the artists’ perspective, I was struck by the sense of post-rocket-age music about the cosmos. In a week when images of Mars’ surface spread as soon as they were received, to a generation that grew up as the first native space-faring humans, space is no longer alien and unreachable, but present.

In slow-motion harmonies and long, aching textures, this seems to be cosmic music that sings of longing. It calls out past the Earth in hope of some answer.

The music is the work of duo Brett and Abby Larson. Brett explains his thinking behind this album:

This album has roots in my early years of visiting the observatory in Sunriver, Oregon with my Dad. Seeing the moons of Jupiter with my own eyes had a profound effect on my understanding of who and where I was. It slowly came to me that it would actually be possible to stand on those moons. The ice is real, it would hold you up. And looking out your black sky would be filled with the swirling storms of Jupiter’s upper clouds. From the ice of Europa, the red planet would be 24 times the size of the full moon.

Though these thoughts inspire awe, they begin to chill your bones as you move farther away from the sun. Temperatures plunge. There is no air to breathe. Radiation is immense. Standing upon Neptune’s moon Triton, the sun would begin to resemble the rest of the stars as you faded into the nothing.

Voyager two took one of the only clear images we have of Neptune. I don’t believe we were meant to see that kind of image. Unaided our eyes are only prepared to see the sun, the moon, and the stars. Looking into the blue clouds of the last planet you cannot help but think of the black halo of space that surrounds the planet and extends forever.

I cannot un-see those images. They have become a part of human consciousness. They are the dawn of an unnamed religion. They are more powerful and more fearsome than the old God. In a sense, they are the very face of God. And perhaps we were not meant to see such things.

This album was my feeble attempt to make peace with the blackness. The immense cold that surrounds and beckons us all. Our past and our future.

The album closes with an image of standing amidst Pluto’s Norgay mountains. Peaks of 20,000 feet of solid ice. Evening comes early in the mountains. On this final planet we face the decision of looking back toward Earth or moving onward into the darkness.

Abby with pedals. BOSS RC-50 LoopStation (predecessor to today’s RC-300), Strymon BlueSky, Electro Harmonix Soul Food stand out.

Plus more on the story:

Pythia was the actual name of the Oracle at Delphi in ancient Greece. She was a real person who, reportedly, could see the future. This album, “Pythian Drift” is only the first of three parts. In this part, the craft is discovered and Dr. Amala Chandra begins a dialogue with the craft. Dr Chandra then begins publishing papers that rock the scientific world and reformulate our understanding of mathematics and physics. There is also a phenomenon called Pythian Drift that begins to spread from the craft. People begin to see images and hear voices, prophecies. Some prepare for an interstellar pilgrimage to the craft’s home galaxy in Andromeda.

Part two will be called Black Sea. Part three will be Andromeda.

And some personal images connected to that back story:

Brett as a kid, with skis.

Abby beside a faux fire.

More on the duo and their music at the Látlaus Ský site:

http://www.latlaussky.com/

Check out Viktoria’s work, too:

https://www.jmiid.com/

The post A haunting ambient sci-fi album about a message from Neptune appeared first on CDM Create Digital Music.

«Sampling can be political»

Delivered... norient | Scene | Wed 17 Oct 2018 5:30 am

How and why do musicians use samples? What positions lie behind their choice of samples? Can these samples even function as a form of social commentary? These are the questions the Bern-based musicologist and Norient editor Hannes Liechti poses in his doctoral dissertation. This video is part two of the eight-part series «Musikethnologie in der Schweiz» (Ethnomusicology in Switzerland).

Hannes Liechti studied ethnomusicology, musicology, and history in Bern and Munich. Since 2016, he has been pursuing a doctorate on creative sampling strategies in experimental electronic pop music at the University of Bern and the Bern University of the Arts (HKB), and he is a member of the Graduate School of the Arts (GSA) Bern, where in autumn 2018 he organized the conference «Pop–Power–Positions» in collaboration with Norient. Hannes Liechti is part of the Norient core team (bio here).

The eight-part video series «Musikethnologie in der Schweiz» was produced as part of the Agora science communication project «Communicating Music Research» (funded by the Swiss National Science Foundation), which Norient carried out jointly with the Seminar for Cultural Studies and European Ethnology at the University of Basel.

Video: Valentin Mettler
Interview: Elisabeth Stoudmann
Editing: Theresa Beyer
Production: Norient 2018

«Ethnomusicology helps counter prejudice»

Delivered... norient | Scene | Sun 14 Oct 2018 6:00 am

«In Europe, African music is still viewed very one-dimensionally,» says ethnomusicologist Anja Brunner. For her, ethnomusicological research has the task of counteracting such clichéd images. In the video, Brunner talks about how a stay in Senegal brought her to ethnomusicology, how deeply music and politics are intertwined in African countries, and how the power of the researcher is changing today. This video opens the six-part series «Musikethnologie in der Schweiz».

The six-part video series «Musikethnologie in der Schweiz» was produced as part of the Agora science communication project «Communicating Music Research» (funded by the Swiss National Science Foundation), which Norient carried out jointly with the Seminar for Cultural Studies and European Ethnology at the University of Basel.

Video: Valentin Mettler
Interview: Hannes Liechti
Editing: Theresa Beyer
Production: Norient 2018

Dr. Anja Brunner is an ethnomusicologist, currently an associate researcher at the Institute of Musicology at the University of Bern. She wrote her dissertation at the University of Vienna on the emergence of the popular music genre bikutsi in Cameroon. Her current research focuses on Syrian women musicians in German-speaking countries. Her research interests include music and migration, changes in musical practices in a global context, world music, and postcolonialism.

Max 8: Multichannel, mappable, faster patching is here

Delivered... Peter Kirn | Scene | Tue 25 Sep 2018 8:15 pm

Max 8 is released today, as the latest version of the audiovisual development environment brings new tools, faster performance, multichannel patching, MIDI learn, and more.

Max is now 30 years old, with a direct lineage to the beginning of visual programming for musicians – creating your own custom tools by connecting virtual cables on-screen instead of typing in code. Since then, its developers have incorporated additional facilities for other code languages (like JavaScript), different data types, real-time visuals (3D and video), and integrated support inside Ableton Live (with Max for Live). Max 8 actually hits all of those different points with improvements. Here’s what’s new:

MC multichannel patching.

It’s always been possible to do multichannel patching – and therefore support multichannel audio (as with spatial sound) – in Max and Pure Data. But Max’s new MC approach makes this far easier and more powerful.

  • Any sound object can be made into multiples, just by typing mc. in front of the object name.
  • A single patch cord can incorporate any number of channels.
  • You can edit multiple objects all at once.

So, yes, this is about multichannel audio output and spatial audio. But it’s also about way more than that – and it addresses one of the most significant limitations of the Max/Pd patching paradigm.

Polyphony? MC.

Synthesis approaches with loads of oscillators (like granular synthesis or complex additive synthesis)? MC.

MPE assignments (from controllers like the Linnstrument and ROLI Seaboard)? MC.

MC means the ability to use a small number of objects and cords to do a lot – from spatial sound to mass polyphony to anything else that involves multiples.

It’s just a much easier way to work with a lot of stuff at once. That was possible in the code-based environment SuperCollider, for instance, if you were willing to put in some time learning SC’s language. But it was never terribly easy in Max. (Pure Data, your move!)

MIDI mapping

Mappings lets you MIDI-learn from controllers, keyboards, and whatnot, just by selecting a control and moving your controller.

Computer keyboard mappings work the same way.

The whole implementation looks very much borrowed from Ableton Live, down to the list of mappings for keyboard and MIDI. It’s slightly disappointing they didn’t cover OSC messages with the same interface, though, given this is Max.

It’s faster

Max 8 has various performance optimizations, says Cycling ’74. In particular, look for 2x (Mac) to 20x (Windows) faster launch times, 4x faster patch loading, and performance enhancements in the UI, Jitter, physics, and objects like coll.

Also, Max 8’s Vizzie library of video modules is now OpenGL-accelerated, which additionally means you can mix and match with Jitter OpenGL patching. (No word yet on what that means for OpenGL deprecation by Apple.)

Node.JS

This is, I suspect, a pretty big deal for a lot of Max patchers who moonlight in some JavaScript coding. Node.js support lets you run Node applications from inside a patch – for extending what Max can do, running servers, connecting to the outside world, and whatnot.

There’s full NPM support, which is to say all the ability to share code via that package manager is now available inside Max.

Patching works better, and other stuff that will make you say “finally”

Actually, this may be the bit that a lot of long-time Max users find most exciting, even despite the banner features.

Patching is now significantly enhanced. You can patch and unpatch objects just by dragging them in and out of patch cords, instead of doing this in multiple steps. Group dragging and whatnot finally works the way it should, without accidentally selecting other objects. And you get real “probing” of data flowing through patch cords by hovering over the cords.

There’s also finally an “Operate While Unlocked” option so you can use controls without constantly locking and unlocking patches.

There’s also a refreshed console, color themes, and a search sidebar for quickly bringing up help.

Plus there’s external editor support (coll, JavaScript, etc.). You can use “waypoints” to print stuff to the console.

And additionally, essential:

High definition and multitouch support on Windows
UI support for the latest Mac OS
Plug-in scanning

And of course a ton of new improvements for Max objects and Jitter.

What about Max for Live?

Okay, Ableton and Cycling ’74 did talk about “lockstep” releases of Max and Max for Live. But… what’s happening is not what lockstep usually means. Maybe it’s better to say that the releases of the two will be better coordinated.

Max 8 today is ahead of the Max for Live that ships with Ableton Live. But we know Max for Live incorporated elements of Max 8, even before its release.

For their part, Cycling ’74 today say that “in the coming months, Max 8 will become the basis of Max for Live.”

Based on past conversations, that means that as much functionality as can practically be delivered in Max for Live will be there. And with all these Max 8 improvements, that’s good news. I’ll try to get more clarity on this as information becomes available.

Max 8 now…

There’s a 30-day free trial. Upgrades are US$149; the full version is US$399, plus subscription and academic discount options.

Full details on the new release are neatly laid out on Cycling’s website today:

https://cycling74.com/products/max-features

The post Max 8: Multichannel, mappable, faster patching is here appeared first on CDM Create Digital Music.

DU-NTSC is a modular visualizer, AV toy, and video generator

Delivered... Peter Kirn | Scene | Tue 14 Aug 2018 10:35 am

From hyper-nerdy label Detroit Underground comes a new tool that both visualizes signals and acts as a modular visual instrument – a new way of looking at what you’re doing in modular sound, and an object for visual creation.

Eurorack may be saturated with modules that do sound, but get ready for the next frontier: more visuals. Boutique label Detroit Underground has been a retro-futuristic hub for just those sorts of audiovisual fascinations. Label head Kero is an audiovisual artist himself, and has gravitated to music with visual elements – not to mention he’s put out a glitchy app and website and even started a series of album releases on VHS. (Yes, VHS. Full disclosure: I somehow wound up on this series; my VHS tape drops in September. I’m still shopping around flea markets for a deck before the tapes arrive.)

DU-NTSC is really two modules in one. It’s a visualizer/oscilloscope – so you can look at signal in your patches. And it’s a hackable, creative visual module, capable of outputting visuals and visual signals. (Not everyone is interested exclusively in their rack doing sound; some also want to stimulate sight.)

Plus, since it’s analog, it’s time to dig up all your analog inputs and outputs again. Edirol video mixer? Sega Genesis? CRT tubes? Half-broken projectors? Yes, yes, yes, yes!

Watch (with some Richard Devine sounds, of course):

The whole thing is based on the Arduino Nano, making it easy to hack your own patterns through some simple coding, for more of these wild black-and-white creations:

Highlights:

  • Input video – there’s a composite (analog, natch) video input
  • Input audio – it’s a one-channel oscilloscope with a wide sample rate range (228 Hz – 700 kHz), just with unusually cool visualizations
  • Generate video patterns via multiple presets (or make your own with Arduino code!)
  • Control patterns with CV and gate control of visual parameters
  • Output video – the video generator can be routed to composite, as can video signal and sync (with a hack) through the Video Cinch feature

That patch-ability extends in all directions: you can use control voltage both to generate and control visuals, and gate keeps everything in time.

Here’s a video (via Kero / DU) of the module getting connected:

The project was built in collaboration with Razmasynth, a video modular maker based in France specializing in open source kits. And if you like this, you should definitely also check out their Telewizor.

Full specs:

  • One channel AV Display for Eurorack Modular, PAL, NTSC Video Generator and One Channel Oscilloscope
  • CV Input controls Video Generator (Jack Mono 3.5)
  • Universal CV from modular, bipolar (+5/-5V) and unipolar (+5V)
  • Gate Input: resets the current pattern and turns the screen black when a positive signal is applied (Jack Mono 3.5)
  • Chaos Button: inverts on-screen colors in all patterns; use this button to decrease the sample rate in the Oscilloscope sub-program in order to display signals from an LFO
  • Video In: supports PAL/NTSC; it can connect with DVD, VCD, TFT Camera, Super Nintendo, VHS Tape, etc. via Cinch input
  • Composite Video Out: exports the video generator signal via the Cinch output
  • DU-NTSC is based on the Arduino Nano platform, allowing you to easily hack and create your own video patterns.
  • Over 16 video patterns (demo available)
  • One Channel Oscilloscope, with a sample rate from 228 Hz to 700 kHz

Like Razmasynth’s Telewizor, DU-NTSC is based on the Arduino Nano, so you can easily hack and create your own video patterns (making your own 120×96-pixel images) or upload code.

And that hardware:

  • Digital TFT Chimei LCD
  • Display size: 3.5″
  • Display format: 4:3
  • System: PAL/NTSC
  • Pixels: 480 (W) × 272 (H)
  • 10P Eurorack bus connector
  • 191 mA power draw (+12/-12V)
  • Reverse polarity protection
  • Depth: 16hp
  • Skiff friendly
  • Optional solder pads on the back of the PCB, for powering the display from an external 12V DC supply without causing the main power supply to drop

More video action – this video from Kero gives you a sense of what it’s like to use:

And Richard Devine accidentally teased this over the weekend with this tripping-in-space video using the module:

Here’s more. Dig the pink lighting:

There are only twenty of these units, but I suspect this may be a sign of more to come in visual modules – both from Detroit Underground and in the scene generally. US$250 per module, available from Bandcamp (and including a free Richard Devine music download, of course):

https://detund.bandcamp.com/merch/du-ntsc

Previously:

This Eurorack module was coded wrong – and you’ll like it

Speaking in signal, across the divide between video and sound: SIGINT

The post DU-NTSC is a modular visualizer, AV toy, and video generator appeared first on CDM Create Digital Music.

Watch five hours of one of SONAR’s best stages in video

Delivered... Peter Kirn | Artists,Scene | Mon 18 Jun 2018 5:13 pm

Got some festival envy? Relax, sit back – one of the best stages from SONAR Festival in Barcelona last week is now online.

Of course, there’s no substitute for checking out live music. On the other hand, there’s also no substitute for partying at home, with no queues when you get thirsty and no one around but you. It’s all balance.

CDM will be bringing you a bit of SONAR Festival, but having scoped out the place myself, the Resident Advisor-sponsored night stage – and specifically this particular night of programming on said stage – was one of the best programmed. And it seems that’s what our friends at RA chose to put online. So whether you already know these artists or are getting a first introduction, full endorsement.

Octo Octa’s hair swinging back and forth while she killed that set is actually one of my enduring visual memories of this festival. I think things are currently truncated from the live stream but I’ll ask. Certainly this Saturday night on the RA stage was ideal – like a dream lineup.

The artists – DJ sets from Octo Octa on, but the rest live – with more links to more music and resources:

JASSS

Lanark Artefax

Errorsmith (interview with him coming soon to CDM, finally!)

Ben Klock B2B [back to back] with DJ Nobu

DJ Nobu official Facebook page

Motor City Drum Ensemble B2B Jeremy Underground

http://motorcitydrumensemble.com

The post Watch five hours of one of SONAR’s best stages in video appeared first on CDM Create Digital Music.

Debate: Archive and Sampling

Delivered... norient | Scene | Wed 23 May 2018 6:00 am

Today the copying and sampling of not just sound but of all material from infinite sources challenges the «spectacular aura» of the pre-recorded original in order to claim autonomy. We asked musicians from the Norient network: How Does the Digital Availability of Sources Change Music? A virtual debate from the Norient exhibition Seismographic Sounds (see and order corresponding book here).

Abandoned School Archive (Photo © by publicdomainpictures/Lode Van de Velde, 2018)

Complete Debate: The Video

Quotes

«My sample library is full of glitchy sounds. I started to build it years ago and I’m continuously updating it. It works like this: I make recordings from prepared instruments or amplified objects, or I record jams with digital instruments. Then I work with these sounds, paying close attention to details. I can spend hours designing just one three hundred milliseconds glitch, or I can build a huge wall of sound out of intersecting layers. These layers create beautiful and dense textures that I’m gradually transforming in my software by changing many parameters at each moment. I edit my samples to the point that they gain a totally new identity — all associations are gone and in the end just their aesthetic qualities count. Success is when I can make thousands of variations from a single sample. These sounds define my library. I think that gives a certain stamp to all of my works.»

Svetlana Maraš, composer and sound artist (Serbia)

«I sample my own music. It helps to exaggerate my egomania. By recombining myself my self-referential cosmos grows day by day.»

Christoph Ogiermann, composer, singer, instrumentalist and conductor in the fields of contemporary music and free improvisation (Germany)

«Everything is a remix.»

Joe Bennett, Popular Music Scholar (Great Britain)

«Art and music in an archive will function like words in our minds. In the near future we will reuse them at will, just like we create sentences.»

Eduardo Navas, Remix Studies Scholar (USA)

Video Debate Credits

Statements by
Eduardo Navas, Remix Studies Scholar (USA)
Joe Bennett, Popular Music Scholar (Great Britain)
Christoph Ogiermann, composer, singer, instrumentalist and conductor in the fields of contemporary music and free improvisation (Germany)
Garo Gdanian, Metal Musician (Lebanon)
Svetlana Maraš, composer and sound artist (Serbia)

Video Cut: Stephan Hermann, Coupdoeil

Some quotes from this debate were published in the second Norient book, Seismographic Sounds. Click on the image to learn more.

Read More on Norient

> Eduardo Navas: «Regenerative Culture»
> Hannes Liechti: «Perspectives on Sampling»
> Thomas Burkhalter: «The Sample Shapes the Song»

Debates from Seismographic Sounds

> on Bedroom Producers
> on Power and Positions
> on Music and War

Speaking in signal, across the divide between video and sound: SIGINT

Delivered... Peter Kirn | Artists,Labels,Scene | Wed 16 May 2018 5:58 pm

Performing voltages. The notion is now familiar in synthesis – improvising with signals – but what about the dance between noise and image? Artist Oliver Dodd has been exploring the audiovisual modular.

Integrated sound-image systems have been a fascination of the avant-garde through the history of electronic art. But if there’s a return to the raw signal, maybe that’s born of a desire to regain a sense of fusion of media that can be lost in overcomplicated newer work.

Detroit Underground has had one foot in technology, one in audiovisual output. DU have their own line of Eurorack modules and a deep interest in electronics and invention, matching a line of audiovisual works. And the label is even putting out AV releases on VHS tape. (Well, visuals need some answer to the vinyl phonograph. You were expecting maybe laserdiscs?)

And SIGINT, Oliver Dodd’s project, is one of the more compelling releases in that series. It debuted over the winter, but now feels a perfect time to delve into what it’s about – and some of Oliver’s other, evocative work.

First, the full description, which draws on images of scanning transmissions from space, but takes place in a very localized, Earthbound rig:

The concept of SIGINT is based on the idea of scanning, searching, and recording satellite transmissions in the pursuit of capturing what appear to be anomalies as intelligent signals hidden within the transmission spectrum.

SIGINT represents these raw recordings, captured in their live, original form. These audio-video recordings were performed and rendered to VHS in real-time in an attempt to experience, explore, decipher, study, and decode this deeply evocative, secret, and embedded form of communication whose origins appear both alien and unknown, like paranormal imprints or reflections of inter-dimensional beings reflected within the transmission stream.

The amazing thing about this project is the synchronicities formed between the audio and the video in real time. By connecting with the aural and the visual in this way, one generates and discovers strange, new, and interesting communications and compositions between these two spaces. The Modular Audio/Video system allows a direct connection between the video and the audio, and vice versa. A single patch cable can span between the two worlds and create new possibilities for each. The modular system used for SIGINT was one 6U case of only Industrial Music Electronics (Harvestman) modules for audio and one 3U case of LZX Industries modules for video.

Videos:

Album:

CDM: I’m going through all these lovely experiments on your YouTube channel. How do these experiments come about?

Oliver: My Instagram and YouTube content is mostly just a snapshot of a larger picture of what I am currently working on, either that day, or of a larger project or work generally, which could be either a live performance, for example, or a release, or a video project.

That’s one hell of an AV modular system. Can you walk us through the modules in there? What’s your workflow like working in an audiovisual system like this, as opposed to systems (software or hardware) that tend to focus on one medium or another?

It’s a two-part system. There is one part that is audio (Industrial Music Electronics, or “Harvestman”), and there is one part that is video (LZX Industries). They communicate with each other via control voltages and audio rate signals, and they can independently influence each other in both ways or directions. For example, the audio can control the video, and the control voltages generated in the video system can also control sources in the audio system.

Many of the triggers and control voltages are shared between the two systems, which creates a cohesive audio/video experience. However, not every audio signal that sounds good — or produces a nice sound — looks good visually, and therefore, further tweaking and conditioning of the voltages are required to develop a more cohesive and harmonious relationship between them.

The two systems: a 3U (smaller) audio case on the left holds the Harvestman audio modules, and a 6U (taller) case on the right holds video processing modules from LZX Industries. Cases designed by Elite Modular.

I’m curious about your notion of finding patterns or paranormal in the content. Why is that significant to you? Carl Sagan gets at this idea of listening to noise in his original novel Contact (using the main character listening to a washing machine at one point, if I recall). What drew you to this sort of idea – and does it only say something about the listener, or the data, too?

Data transmission surrounds us at all times. There are always invisible frequencies that are outside our ability to perceive them, flowing through the air and which are as unobstructed as the air itself. We can only perceive a small fraction of these phenomena. There are limitations placed on our ability to perceive as humans, and there are more frequencies than we can experience. There are some frequencies we can experience, and there are some that we cannot. Perhaps the latter can move or pass throughout the range of perception, leaving a trail or trace or impressions on the frequencies that we can perceive as it passes through, and which we can then decode.

What about the fact that this is an audiovisual creation? What does it mean to fuse those media for a project?

The amazing thing about this project is the synchronicities formed between the audio and the video in real time. By connecting with the aural and the visual in this way, one generates and discovers strange, new, and interesting communications and compositions between these two spaces. The modular audio/video system allows direct connection between the video and the audio, and vice versa. A single patch cable can span between the two worlds and create new possibilities for each.

And now, some loops…

Oliver’s “experiments” series is transcendent and mesmerizing:

If this were a less cruel world, the YouTube algorithm would only feed you this. But in the meantime, you can subscribe to his channel. And ignore the view counts, actually. One person watching this one video is already sublime.

Plus, from Oliver’s gorgeous Instagram account, some ambient AV sketches to round things out.

More at: https://www.instagram.com/_oliverdodd/

https://detund.bandcamp.com/

https://detund.bandcamp.com/album/sigint

The post Speaking in signal, across the divide between video and sound: SIGINT appeared first on CDM Create Digital Music.
