credits Kyle McLean
A playful project investigating real-time generation of singing anime characters, a neural mashup if you will.
All of the animation is generated in real time using a StyleGAN neural network trained on the Danbooru2018 dataset, a large-scale anime image database with 3.33m+ images annotated with 99.7m+ tags.
Lyrics were produced with GPT-2, a large-scale language model trained on 40GB of internet text. I used the recently released 345-million-parameter version; the full model has 1.5 billion parameters and has not yet been released due to concerns about malicious use (think fake news).
Music was made in part using models from Magenta, a research project exploring the role of machine learning in the process of creating art and music.
StyleGAN, Danbooru2018, GPT-2 and Magenta were developed by Nvidia, gwern.net/Danbooru2018, OpenAI and Google, respectively.
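The smooth real-time animation in projects like this typically comes from interpolating in the generator's latent space: two random vectors produce two faces, and the in-between vectors morph one into the other. A minimal sketch of that idea, with the generator call itself omitted (the names and 60 fps figure are illustrative, not this project's actual code):

```python
import random

def interpolate_latents(z_start, z_end, n_frames):
    """Linearly blend two latent vectors into a sequence of
    intermediate vectors; feeding each one to the generator
    network yields a smooth morph between two faces."""
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)
        frames.append([(1 - t) * a + t * b for a, b in zip(z_start, z_end)])
    return frames

# Two random 512-dimensional latents, the input size StyleGAN uses.
random.seed(0)
z_a = [random.gauss(0, 1) for _ in range(512)]
z_b = [random.gauss(0, 1) for _ in range(512)]
frames = interpolate_latents(z_a, z_b, 60)  # one second at 60 fps
```

Spherical interpolation (slerp) is often preferred over the linear blend shown here, since it keeps intermediate vectors closer to the distribution the generator was trained on.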
After two years of fighting cancer, my senses of touch and hearing had diminished as a side effect of chemotherapy, a condition called "neuropathy". While recovering, I wanted to return to art-technology projects, which I had found no opportunity for while working, and my situation inspired me to start a project about it.
People who enter the art installation are connected to an invisible second self on the other (virtual) side. They interact with it without touching anything, using their body movements to manipulate the parameters and shape the sound.
The software used in this project is vvvv for skeletal motion tracking, VCV Rack for sound synthesis and Reaper for recording OSC parameters.
Made with vvvv.
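Driving sound parameters from body movement, as described above, usually comes down to range-mapping tracked values before sending them over OSC. A hedged sketch of that mapping step (the ranges and the hand-height-to-frequency pairing are hypothetical, not taken from this installation):

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly map a tracked value (e.g. hand height from the
    skeleton tracker) into a synthesis parameter range, clamping
    so out-of-range tracking data cannot blow up the synth."""
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))
    return out_min + t * (out_max - out_min)

# Hypothetical example: hand height in metres -> oscillator frequency in Hz,
# ready to be packed into an OSC message for the synthesis patch.
freq = map_range(1.2, in_min=0.5, in_max=2.0, out_min=110.0, out_max=880.0)
```

In practice each tracked joint would feed one such mapping, and the results would be sent as OSC messages to the sound engine every frame.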
.inf is an abstract, constantly changing visualization of data taken in real time from current weather conditions on Earth.
Incoming data such as humidity, temperature, atmospheric pressure, wind speed, wind direction and cloudiness at a specific location changes the position, speed, lighting and colouring of two million virtual points floating in a three-dimensional space.
Atmospheric noise, received via online radio transmission, is used for the sound synthesis. The real-time data switches randomly between different places on Earth and their current weather conditions.
This causes unpredictable images.
Every frame of the resulting image and representation of the sound is unique and not reproducible.
Large jumps in the values, caused by switching randomly between the current weather conditions of different locations, can trigger interrupting events, phases and flickering effects in image and sound.
The endless generative visual and sound patterns, in which one can easily lose oneself, leave viewers and listeners with a different perception of time and a third-person view.
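As an illustration of how weather values can drive particle motion, wind speed and direction might be converted into a velocity vector for the point cloud. A sketch under the standard meteorological convention (this is an assumed mapping, not the piece's actual code):

```python
import math

def wind_vector(speed_ms, direction_deg):
    """Convert wind speed (m/s) and meteorological direction
    (degrees; the direction the wind comes FROM, 0 = north)
    into an x/y velocity that can push virtual particles."""
    # Flip by 180 degrees so the vector points where the wind blows TO.
    rad = math.radians((direction_deg + 180.0) % 360.0)
    return (speed_ms * math.sin(rad), speed_ms * math.cos(rad))

# Wind from the north at 5 m/s pushes the particles southward.
vx, vy = wind_vector(5.0, 0.0)
```

Other parameters (humidity, cloudiness, pressure) would be range-mapped in the same spirit onto colour, lighting and point density.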
FOAM CORE, MIRRORS, LIGHT, SOUND.
A collaboration with Shawn “Jay Z” Carter, commissioned by Barneys New York
Art Director: Joanie Lemercier
Technical designer and developer: Kyle McDonald
Sound designer: Thomas Vaquié
Producers: Julia Kaganskiy & Juliette Bibasse
Short excerpts of each chapter, recorded at the Kubus at ZKM Karlsruhe.
The audiovisual performance LGM#2 is based on an extensive database of pulsars and neutron stars from the Australia Telescope National Facility. Exploring the rhythms and wavelengths of these rotating sources of radio emission for aesthetic patterns and harmonies, it transforms the extremely fast pulses and strong gamma rays of these exotic celestial bodies into clicks, sine waves and light. The minimalistic multi-channel piece stringently follows the specifications of the 2,659 currently known pulsars. When discovered in 1967, the precision and apparent artificiality of the received signal led researchers to nickname the unknown phenomenon LGM-1, for "little green men". Over three movements, Quadrature's interpretation builds an increasingly dense space of sound and light in honor of these metronomes of the universe.
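One common way to sonify such a catalogue, sketched below, is to transpose each pulsar's rotation frequency (the inverse of its period) by whole octaves into an audible band, so the relative pitch relationships between pulsars survive. Whether this matches Quadrature's actual mapping is an assumption:

```python
def pulsar_to_pitch(period_s, low=55.0, high=3520.0):
    """Map a pulsar's rotation period (seconds) to an audible
    frequency by octave transposition: doubling or halving the
    spin frequency until it lands between `low` and `high` Hz
    preserves its pitch class relative to other pulsars."""
    f = 1.0 / period_s  # rotation frequency in Hz
    while f < low:
        f *= 2.0
    while f > high:
        f /= 2.0
    return f

# The Crab pulsar spins roughly 30 times per second; one octave up
# places it at an audible ~60 Hz.
tone = pulsar_to_pitch(1.0 / 30.0)
```

Slow pulsars (periods of seconds) work equally well: a 5-second period starts at 0.2 Hz and is doubled up into the audible band, while millisecond pulsars are already audible as raw click trains.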
In collaboration with sound artist Kerim Karaoglu.
Duration: 20 minutes
ATNF Pulsar Catalogue (www.atnf.csiro.au)
Manchester, R. N., Hobbs, G.B., Teoh, A. & Hobbs, M., AJ, 129, 1993-2006 (2005)
Documentation of an audiovisual performance for radio telescope, artificial intelligence and self-playing organ.
Located outside the venue, the radio telescope listens to the vastness of space. The received electromagnetic waves are shifted into the audible frequency range; electronic noises, hums and beeps fill the air. An algorithm transcribes the noise of the universe into MIDI notes, which are reproduced by the organ with the help of an electronic apparatus. During the performance, various selection and filtering methods (e.g. specific artificial intelligences) are applied in a fixed order, serving as a kind of score and dividing the piece into three main chapters. The artists on stage have a set of parameters at their disposal to respond live to the incoming signals, using both the radio telescope and the AI as musical instruments. On the floating canvas above the stage, the sounds turn into abstract images, a visual representation of what is heard.
Little by little, neural networks take control of the organ and search these non-worldly noises for known harmonies, for the smallest traces of human music. In the last chapter, ideas of melodies evolve as the artificial intelligence begins to fantasize about familiar tunes in these archaic sounds from outer space.
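The transcription step described above, turning detected pitches into MIDI notes the organ can play, comes down to the standard frequency-to-note formula (A4 = 440 Hz = note 69). A minimal sketch; the piece's actual pipeline, with its AI filtering stages, is of course more involved:

```python
import math

def freq_to_midi(freq_hz):
    """Quantize a detected frequency to the nearest MIDI note
    number using the equal-temperament formula
    note = 69 + 12 * log2(f / 440)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# A 261.63 Hz peak in the telescope signal becomes middle C (note 60),
# ready to be sent to the organ's electronic apparatus.
note = freq_to_midi(261.63)
```

A note-on message with this number is all the self-playing organ needs; the AI stages then decide which of these candidate notes actually sound.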
In collaboration with Christian Losert.
Thanks to Sebastian Müllauer for the telescope and Klaus Holzapfel for his Orgamat.
Duration: 25-35 minutes
Flos, B&B Italia, Louis Poulsen
Milano Design Week, Salone del Mobile 2019
Interaction Design & User Experience by Dotdotdot
Illustration by Loris F. Alessandria
Motion Graphics & Animation by Workroom / Alessandro Barzaghi
Exhibition Design by Calvi Brambilla
Video by Malaka Studio