
Blog


Credits
Concept, production and video: VOLNA
Engineering: Alexey Belyakov vnm
Mechanics: Viktor Smolensky
Camera/photo: Polina Korotaeva, VOLNA
Special thanks: William Cohen, Michael Gira, Igor Matveev, Alexander Nebozhin, Jason Strudler, Ivan Ustichenko, Artem Zotikov
Music: Swans – Lunacy
Project commissioned by Roots United for Present Perfect Festival 2019
© VOLNA (2019) © Swans ℗ 2012 Young God Records

More info: https://volna-media.com/projects/duel

The gloom ripples while something whispers,
Silence has set and coils like a ring,
Someone’s pale face glimmers
From a mire of venomous color,
And the sun, black as the night,
Takes its leave, absorbing the light.

M. Voloshin

iam404, Tuesday, Aug 20th 2019 | 2 comments


http://everyoneishappy.com
https://www.instagram.com/everyoneishappy/
Bit of a playful project investigating real-time generation of singing anime characters, a neural mashup if you will.

All of the animation is made in real time using a StyleGAN neural network trained on the Danbooru2018 dataset, a large-scale anime image database with 3.33m+ images annotated with 99.7m+ tags.
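The post doesn't spell out how the real-time animation is driven, but StyleGAN animation is usually done by interpolating between latent vectors. A hedged, self-contained sketch of spherical linear interpolation (slerp), the interpolation commonly used for GAN latents — not necessarily this project's exact method:

```python
import math

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.

    Keeps intermediate vectors at a plausible norm, which matters for
    Gaussian-distributed GAN latents."""
    dot = sum(a * b for a, b in zip(z0, z1))
    n0 = math.sqrt(sum(a * a for a in z0))
    n1 = math.sqrt(sum(b * b for b in z1))
    omega = math.acos(max(-1.0, min(1.0, dot / (n0 * n1))))
    if abs(omega) < 1e-8:  # vectors almost parallel: plain lerp is fine
        return [(1 - t) * a + t * b for a, b in zip(z0, z1)]
    s = math.sin(omega)
    return [
        (math.sin((1 - t) * omega) / s) * a + (math.sin(t * omega) / s) * b
        for a, b in zip(z0, z1)
    ]

# Each video frame would feed an interpolated latent into the generator:
frame = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
```

Slerp is usually preferred over straight linear interpolation here because linear midpoints between Gaussian samples fall into low-probability regions of the latent space.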

Lyrics were produced with GPT-2, a large-scale language model trained on 40GB of internet text. I used the recently released 345-million-parameter version; the full model has 1.5 billion parameters and has currently not been released, due to concerns about malicious use (think fake news).

Music was made in part using models from Magenta, a research project exploring the role of machine learning in the process of creating art and music.

The setup uses vvvv, Python and Ableton Live.
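The post doesn't say how vvvv, Python and Ableton Live exchange data; OSC over UDP is the usual bridge between these tools, so the following is a speculative, stdlib-only sketch of encoding an OSC message (the address is invented for illustration):

```python
import struct

def _pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    msg = _pad(address.encode("ascii"))
    msg += _pad(("," + "f" * len(floats)).encode("ascii"))  # type-tag string
    for f in floats:
        msg += struct.pack(">f", f)  # OSC floats are big-endian float32
    return msg

# Sent over a UDP socket, a packet like this could drive a vvvv or Live
# parameter (the address "/anime/mouth_open" is hypothetical):
packet = osc_message("/anime/mouth_open", 0.75)
```

Both vvvv and Ableton Live (via Max for Live) can receive such packets directly, which is why OSC is a common glue layer in setups like this.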

StyleGAN, Danbooru2018, GPT-2 and Magenta were developed by Nvidia, gwern.net/Danbooru2018, OpenAI and Google, respectively.

Kyle McLean, Wednesday, Aug 7th 2019 | comments on vimeo


The installation consisted of a hyperrealistic 3D production simulating water flowing down the wall of a dam. The 6 x 14 meter projection showed the dam closing and opening its gates; the water falling from the top would flood the surrounding 350 m² area, allowing visitors to step onto, play with and interact with the inundated surface.

PasosLargos, Wednesday, Aug 7th 2019 | comments on vimeo


Part of the live-coding gatherings at dotdotdot, Milan. This session was recorded on 29 May 2019.

Sound :
Nicola Ariutti
Davide Bonafede
Daniele Ciminieri
Simone Bacchini
Antonio Garosi

Visual :
Jib Ambhika Samsen

jib, Wednesday, Aug 7th 2019 | comments on vimeo

Credits
Kyle McLean

Waifu Synthesis – real-time generative anime

everyoneishappy, Monday, Jul 1st 2019 | 1 comment
Neuropathy is an interactive sound installation and a presentation about technology and art. I've presented this work at Digilogue - Future Tellers 2018 and at Sonar+D Istanbul 2019. This video is from my workshop with the METU (ODTU) contemporary dance group in March 2019; I also gave a workshop at the ODTU Contemporary Dance Days in May 2019.

After two years of fighting cancer, my sense of touch and hearing decreased as a side effect of chemotherapy - they call that "neuropathy". While I was recovering, I wanted to go back to art-technology projects, for which I hadn't found any opportunity while working, and my situation inspired me to start a project about it.

People who enter the area of the installation are connected to an invisible second self on the other (virtual) side, and they interact with it without touching anything, using their body movements to manipulate the parameters and shape the sound.

The software used in this project is vvvv for skeletal motion tracking, VCV Rack for sound synthesis and Reaper for recording OSC parameters.
https://vimeo.com/336382441
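The post doesn't detail how body movements become synthesis parameters, but the usual pattern is to normalize a tracked joint coordinate into a 0-1 control value and smooth it before sending it on as OSC. A hypothetical sketch (class name, ranges and the hand-height example are mine, not the project's):

```python
class SmoothedControl:
    """Maps a raw sensor reading to a 0-1 control value, with one-pole
    smoothing so jittery skeletal-tracking data doesn't cause zipper noise."""

    def __init__(self, lo: float, hi: float, smoothing: float = 0.8):
        self.lo, self.hi = lo, hi
        self.smoothing = smoothing  # 0 = no smoothing, closer to 1 = slower
        self.value = 0.0

    def update(self, raw: float) -> float:
        # normalize into the expected range, clamped to [0, 1]
        target = min(1.0, max(0.0, (raw - self.lo) / (self.hi - self.lo)))
        # one-pole low-pass: move a fraction of the way toward the target
        self.value += (1.0 - self.smoothing) * (target - self.value)
        return self.value

# e.g. hand height in meters driving a (hypothetical) filter-cutoff param:
cutoff = SmoothedControl(lo=0.5, hi=2.0)
level = cutoff.update(1.25)
```

Each frame, the smoothed value would be packed into an OSC message and sent to VCV Rack, with Reaper recording the same stream for playback.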

onur77, Tuesday, Jun 11th 2019 | 0 comments

made with vvvv,
by
Paul Schengber
Emma Chapuy
Felix Deufel

.inf is an abstract, constantly changing visualization of data taken in real time from current weather conditions on earth.
Incoming data such as humidity, temperature, atmospheric pressure, wind speed, wind direction and cloudiness at a specific location drives the position, speed, lighting and colouring of two million virtual points floating inside a three-dimensional space.
Atmospheric noise, captured via online radio transmission, is used for the sound synthesis. The real-time data switches randomly between different places on earth and their current weather conditions.
This produces unpredictable images.
Every frame of the resulting image, and every moment of the sound, is unique and not reproducible.
Large jumps in the values, caused by switching randomly between the current weather conditions of different locations, can trigger interrupting events, phases and flickering effects in image and sound.
The endless generative visual and sound patterns, in which one can easily lose oneself, leave viewers and listeners with a different perception of time and a third-person view.
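The exact mapping from weather readings to the point cloud isn't given; purely as an illustration, this sketch linearly remaps each reading onto a visual parameter range (the ranges and mappings are invented, not .inf's):

```python
def remap(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly remap x from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = (x - in_lo) / (in_hi - in_lo)
    t = min(1.0, max(0.0, t))
    return out_lo + t * (out_hi - out_lo)

def particle_params(weather):
    """Turn one weather reading into parameters for a point cloud."""
    return {
        # warmer air -> warmer hue (illustrative mapping)
        "hue":     remap(weather["temperature_c"], -30, 45, 0.6, 0.0),
        # stronger wind -> faster particles
        "speed":   remap(weather["wind_speed_ms"], 0, 40, 0.1, 3.0),
        # more humidity -> denser, brighter cloud
        "density": remap(weather["humidity_pct"], 0, 100, 0.2, 1.0),
    }

params = particle_params(
    {"temperature_c": 7.5, "wind_speed_ms": 40.0, "humidity_pct": 50.0}
)
```

The abrupt "interrupting events" the text describes would fall out of such a mapping for free: switching to a far-away location makes every input jump at once, so every output parameter jumps with it.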

wisp, Monday, Jun 10th 2019 | 0 comments

scenography for Picnic Fonic festival

ggml, Sunday, Jun 9th 2019 | 0 comments


AUDIOVISUAL INSTALLATION
FOAM CORE, MIRRORS, LIGHT, SOUND.

A collaboration with Shawn “Jay Z” Carter, commissioned by Barneys New York

Art Director: Joanie Lemercier
Technical designer and developer: Kyle McDonald
Sound designer: Thomas Vaquié
Producers: Julia Kaganskiy & Juliette Bibasse

http://joanielemercier.com/quartz/


Short excerpts of each chapter, recorded at the Kubus at ZKM Karlsruhe.

The audiovisual performance LGM#2 is based on an extensive database of pulsars and neutron stars from the Australian National Telescope Facility. Exploring the rhythms and wavelengths of these rotating sources of radio emission for aesthetic patterns and harmonies, the extremely fast pulses and strong gamma rays of these exotic celestial bodies are transformed into clicks, sine waves and light. The minimalistic multi-channel piece stringently follows the specifications of the currently known 2659 pulsars. When discovered in 1967, the precision and apparent artificiality of the received signal led the researchers to nickname the unknown phenomenon LGM-1, for “little green men”. Over three movements, Quadrature's interpretation builds an increasingly dense space of sound and light in honor of these metronomes of the universe.
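How exactly a pulsar's rotation period becomes a click or sine wave isn't specified; a common audification approach, sketched here as an assumption rather than Quadrature's actual method, is to octave-shift the spin frequency into the audible band, which preserves each pulsar's pitch class:

```python
def audible_frequency(period_s, lo=27.5, hi=4186.0):
    """Octave-shift a pulsar's spin frequency into an audible band.

    Doubling or halving a frequency keeps its pitch class, so the
    'character' of each pulsar survives the transposition.
    The default band spans a piano's range (A0 to C8)."""
    f = 1.0 / period_s  # rotation frequency in Hz
    while f < lo:
        f *= 2.0
    while f > hi:
        f /= 2.0
    return f

# PSR B1919+21 - the original "LGM-1" - spins once every ~1.337 s:
tone = audible_frequency(1.337)
```

Millisecond pulsars need no shifting at all: a 1.4 ms period is already a 714 Hz tone, while slow pulsars like LGM-1 arrive as sub-hertz click trains that must be transposed up several octaves.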

In collaboration with sound artist Kerim Karaoglu.

Duration: 20 minutes

ATNF Pulsar Catalogue (www.atnf.csiro.au)
Manchester, R. N., Hobbs, G.B., Teoh, A. & Hobbs, M., AJ, 129, 1993-2006 (2005)

Quadrature, Thursday, Jun 6th 2019 | comments on vimeo


Shoutbox

~2h ago

joreg: Don't forget: This months #vvvv meetup in #berlin is happening on the 27th: 12-berlin-vvvv-meetup

~2h ago

joreg: @u7 @CeeYaa, we're investigating this...

~4h ago

tonfilm: Under the hood: We switched from #SharpDX to #Xenko math. vl-switch-to-xenko-math VL vvvv #visualprogramming #dotnet #creativecoding

~9h ago

CeeYaa: haha it was frozen for 10 seconds when I send this Shout before - using Firefox

~9h ago

CeeYaa: hui really slow speed on MainPage, Contributions ... speed in Forum quite OK

~12h ago

u7angel: mmm, the site is really slow now.

~21h ago

joreg: PSA: and we're back!

~2d ago

joreg: PSA: Thursday night starting 11pm CET vvvv.org will move servers. If all goes well we should be back soon after.

~3d ago

joreg: But first: This Friday in Berlin: Join our full day "Getting started with Generative Design Algorithms" workshop https://nodeforum.org/announcements/workshop-getting-started-with-generative-design/

~3d ago

joreg: In #Linz for #ArsElectronica? Join us for a free 2 days #vvvv workshop sponsored by businesses/responsive-spaces-gmbh Apply here: 2-day-gamma-vvvvorkshop-at-responsive-spaces-in-linz