The future of music performance

12. February, 2015


Last week Berlin transformed once more into a widespread electronic laboratory: the annual Transmediale Festival organized an almost gender-equal media-art program, with performances, lectures, concerts and exhibitions. The festival engaged with the complex issue of human-machine interfacing for a new generation.

This year's theme, "CAPTURE ALL", is neatly reflected in the video trailer, a collaboration between the Berlin-based artists Hanne Lippard (text), Giacomo Gianetta (sound), Martin Kohout (animation) and The Laboratory of Manuel Buerger (art direction).

At the same time, CTM Festival teamed up with Transmediale. This year's title, UN TUNE, speaks for itself. My key highlight of the event was the week-long MusicMakersHacklab, thanks to the fresh way it set in motion what appears immobile. Initiator Peter Kirn and co-initiator Leslie Garcia selected creative professionals from various disciplines and countries to test the limits of interdisciplinary collaboration.

The result of this artistic study was presented in the form of multimedia performances at HAU 2 on Sunday. Each one addressed the audience in its own individual, sensory way.

I have reviewed two performances to highlight the diversity of the overall program. For a more in-depth report, I recommend anyone interested in this topic check out CreateDigitalMusic soon.

3cycles by Muharrem Yildirim (visuals and analysis) and Juan Duarte (sound)


Hi Muharrem and Juan. What is the story behind this mesmerizing performance?
For the work we collected about 3 months of images and videos from NASA's Solar Dynamics Observatory (in that time the sun rotates about 3 times on its axis) and analyzed them in real time during the performance. The output is a visual interface that serves as a performance tool for the visuals, a sonification tool for the sound output, and also a visually appealing projected composition for the audience.

And for more advanced readers, can you go more into detail?
Images captured by NASA's Solar Dynamics Observatory (SDO) over 3 months (about 3 solar rotations) are analyzed with several computer vision algorithms and fed into sound-generation software and hardware to create an interactive sonification and visualization of the process, which is then turned into a performance.

The sonification part strives to present sonic information through digital and analog synthesis processes that interact with and modify each other. The data is translated into tables of values for FM synthesizers and into control voltages for a modular synth, depending on the data shown in the visuals.
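To make the pipeline concrete, here is a minimal sketch of the idea in Python: a frame's mean brightness is mapped to FM synthesis parameters. The mapping ranges, function names and the frequency-ratio value are illustrative assumptions, not the artists' actual values or code.

```python
import math

def analyze_frame(frame):
    """Mean brightness (0-255) of a grayscale frame given as rows of pixels."""
    total = sum(sum(row) for row in frame)
    count = sum(len(row) for row in frame)
    return total / count

def brightness_to_fm_params(brightness, base_freq=110.0):
    """Map brightness to an FM carrier frequency and modulation index.

    Hypothetical mapping: brighter frames push the carrier up to 4x the
    base frequency and deepen the modulation.
    """
    carrier = base_freq * (1.0 + brightness / 255.0 * 3.0)  # 110-440 Hz
    mod_index = brightness / 255.0 * 8.0                    # 0-8
    return carrier, mod_index

def fm_sample(t, carrier, mod_index, ratio=2.0):
    """One FM sample: sin(carrier*t + index*sin(modulator*t))."""
    modulator = carrier * ratio
    return math.sin(2 * math.pi * carrier * t
                    + mod_index * math.sin(2 * math.pi * modulator * t))

# A tiny 2x2 "solar frame": its brightness drives the synth parameters.
frame = [[200, 180], [220, 190]]
carrier, index = brightness_to_fm_params(analyze_frame(frame))
samples = [fm_sample(n / 44100.0, carrier, index) for n in range(4)]
```

In the performance this table of values would be refreshed per analyzed frame, and the same data would drive control voltages for the modular synth.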


Tell us about your background, and what brought you to the Hacklab:
Muharrem: My background is in Computer Science, Visual Communication Design and Media Arts+Sciences. I have been creating interactive installations and tools for a while now, mostly for gallery and exhibition settings, and I have worked with dancers, performers and musicians before. The Hacklab seemed like a great environment to find new collaborators and be exposed to new ideas. Personally I also like this kind of setting, where you can focus on one project for a limited amount of time and be really productive in that period. And I like the approach to art as a practice of research, where I create the tools as well as what ends up on the stage or in the installation, sometimes alone and sometimes in collaboration with others.

Juan: I came to the Hacklab because the topic of tuning machines is quite close to the subject of my studies in new media at Media Lab Helsinki, in Finland. There I develop projects around interactive media, especially interfaces for ludic and sonic interaction. Lately I have also been working on the crossover between analog and digital platforms for experimentation with open technologies.

How is your collaboration going to develop in the future?
We are planning both to further develop and experiment with the tools we created, and to refine the concept and the idea behind them into a more polished performance piece.

TITOMB by Omer Eilam, with a muscle-contraction-sensor by Marco Donnarumma


Tell us about your performance in short, Omer.
For my work I'm using data from my muscle contractions and from electromagnetic radiation and transforming it into noise. This sound is fed back to my body in the form of electric pulses, and appears as a visual output in the form of a distorted webcam stream.

And for more advanced readers, what kind of equipment are you using?
TITOMB (Two Input Three Output Mixing Board) explores the notion of feedback systems. Based on Toshimaru Nakamura's No Input Mixing Board (NIMB) concept, I create a feedback loop amplifying the internal noise of the mixer, but instead of using no inputs I feed the mixer with two signals: one for muscle contractions (Xth Sense) and another for electromagnetic radiation, thus blending body and machine sounds into a textural, noisy soundscape. As a second, outer layer of feedback, the sound produced feeds into a computer and is used to distort a mixed live/delayed webcam stream and to drive electric pulses from electrodes attached to my body.
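The inner feedback loop can be sketched as a few lines of Python: each output sample is fed back into the mix alongside the two input signals and soft-clipped. This is a toy digital model under my own assumptions (the loop gain, the tanh clipping, and the signal names are hypothetical); the real piece works with an analog mixer's internal noise, which has no simple digital equivalent.

```python
import math

def feedback_mixer(muscle, em, loop_gain=0.9):
    """Toy model of a two-input feedback mixer.

    The previous output sample is fed back with `loop_gain`, summed
    with the muscle and electromagnetic inputs, then soft-clipped so
    the loop stays bounded, as an analog mixer would saturate.
    """
    out, prev = [], 0.0
    for m, e in zip(muscle, em):
        mixed = loop_gain * prev + m + e  # feedback plus the two inputs
        prev = math.tanh(mixed)           # soft clipping bounds the loop
        out.append(prev)
    return out

# Example: a short muscle-contraction burst against a steady EM hum.
muscle = [0.0, 0.8, 0.8, 0.0, 0.0, 0.0]
em = [0.1] * 6
signal = feedback_mixer(muscle, em)
```

Because the loop gain is close to 1, the output keeps ringing after the muscle burst ends, which is the self-sustaining quality that makes no-input mixing interesting as an instrument.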


Tell us about your background, and what brought you to the Hacklab:
I was born and raised in Tel Aviv, Israel. I studied computational biology at Tel Aviv University and am currently finishing writing up my PhD thesis. In parallel I started studying music composition in Tel Aviv two years ago, but after one year I decided to leave and move to The Hague, where I am currently studying electroacoustic music at the Institute of Sonology.

My interest lies in the interplay between artists from different disciplines working together to create new, innovative art forms. I'm especially interested in harnessing technological advances, not so much from a technical point of view, but because they allow us to invent and explore new musical languages.
