Blog

Credits
Concept, production and video: VOLNA
Engineering: Alexey Belyakov vnm
Mechanics: Viktor Smolensky
Camera/photo: Polina Korotaeva, VOLNA
Special thanks: William Cohen, Michael Gira, Igor Matveev, Alexander Nebozhin, Jason Strudler, Ivan Ustichenko, Artem Zotikov
Music: Swans – Lunacy
Project commissioned by Roots United for Present Perfect Festival 2019
© VOLNA (2019) © Swans ℗ 2012 Young God Records

More info: https://volna-media.com/projects/duel

The gloom ripples while something whispers,
Silence has set and coils like a ring,
Someone’s pale face glimmers
From a mire of venomous color,
And the sun, black as the night,
Takes its leave, absorbing the light.

M. Voloshin

iam404, Tuesday, Aug 20th 2019

Credits
Kyle McLean

Waifu Synthesis - real-time generative anime

Bit of a playful project investigating real-time generation of singing anime characters, a neural mashup if you will.

All of the animation is made in real time using a StyleGAN neural network trained on the Danbooru2018 dataset, a large-scale anime image database with 3.33m+ images annotated with 99.7m+ tags.
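The post doesn't include any code, but the smooth motion between faces is typical of interpolating through a StyleGAN latent space. Below is a minimal, hypothetical sketch of that idea; the trained Danbooru generator itself is not included, so generate_frame is only a stub, and all names and frame counts are assumptions.

```python
# Hypothetical sketch: smooth animation via latent-space interpolation, as one
# might drive a StyleGAN generator in real time. The trained Danbooru2018
# generator is not included here; generate_frame is a stub.
import numpy as np

LATENT_DIM = 512  # StyleGAN's default z-dimension

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if omega < 1e-6:
        return z0
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def generate_frame(z):
    # Placeholder for a forward pass through the trained StyleGAN generator,
    # i.e. image = G(z) running on the GPU.
    return z  # stub

# Walk between random latents to produce a continuous stream of faces.
rng = np.random.default_rng(0)
current, target = rng.standard_normal(LATENT_DIM), rng.standard_normal(LATENT_DIM)
for step in range(120):                       # roughly 2 seconds at 60 fps
    frame = generate_frame(slerp(current, target, step / 120))
```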

Lyrics were produced with GPT-2, a large-scale language model trained on 40 GB of internet text. I used the recently released 345-million-parameter version; the full model has 1.5 billion parameters and has not yet been released due to concerns about malicious use (think fake news).
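For reference, here is a minimal sketch of sampling lyric-like text from that 345M checkpoint. It uses the Hugging Face transformers library, where the checkpoint is published as "gpt2-medium"; this is a stand-in rather than the author's actual pipeline, and the prompt and sampling settings are illustrative.

```python
# Hypothetical sketch: sampling lyric-like text from the 345M-parameter GPT-2
# ("gpt2-medium" in the Hugging Face naming). Prompt and sampling settings are
# made up, not the author's actual pipeline.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

prompt = "Neon hearts are singing in the rain"   # invented seed line
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=64,
    do_sample=True,        # sample instead of greedy decoding
    top_k=40,              # top-k truncation, as in OpenAI's released samples
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```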

Music was made in part using models from Magenta, a research project exploring the role of machine learning in the process of creating art and music.

The setup uses vvvv, Python and Ableton Live.
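The post doesn't say how these pieces talk to each other; a common way to glue Python to vvvv and Ableton Live is OSC over UDP. The sketch below uses python-osc; the ports, address patterns and the assumption of an OSC-capable bridge into Live (such as a Max for Live device) are all mine, not the author's.

```python
# Hypothetical glue sketch: pushing generated values from Python to vvvv (and
# to Ableton Live via an OSC-capable bridge such as a Max for Live device)
# over OSC. Ports and address patterns are assumptions.
from pythonosc.udp_client import SimpleUDPClient

vvvv = SimpleUDPClient("127.0.0.1", 4444)      # vvvv OSC receiver (assumed port)
ableton = SimpleUDPClient("127.0.0.1", 9000)   # OSC-to-Live bridge (assumed port)

def push_frame(latent_energy: float, lyric_line: str, bpm: float) -> None:
    """Send one update of the generative state to the visual and audio hosts."""
    vvvv.send_message("/waifu/latent_energy", latent_energy)
    vvvv.send_message("/waifu/lyric", lyric_line)
    ableton.send_message("/live/tempo", bpm)

push_frame(0.42, "Neon hearts are singing in the rain", 120.0)
```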
everyoneishappy.com
instagram.com/everyoneishappy/

StyleGAN, Danbooru2018, GPT-2 and Magenta were developed by Nvidia, gwern.net/Danbooru2018, OpenAI and Google respectively.

Models
GPT-2 Text
everyoneishappy, Monday, Jul 1st 2019
Neuropathy is an interactive sound installation and a presentation about technology and art. I presented this work at Digilogue - Future Tellers 2018 and Sonar +D Istanbul 2019. This video is from my workshop with the METU (ODTU) contemporary dance group in March 2019. I also gave a workshop at the ODTU Contemporary Dance Days in May 2019.

After two years of fighting cancer, my sense of touch and hearing decreased as a side effect of chemotherapy. They call that "neuropathy". While I was recovering, I wanted to go back to art-technology projects, which I hadn't found any opportunity for while working. Inspired by my situation, I started a project about it.

People who enter the area of the art installation are connected to an invisible second self on the other (virtual) side. They interact with it without touching anything, using their body movements to manipulate the parameters and shape the sound.

The software used in this project is vvvv for skeletal motion tracking, VCV Rack for sound synthesis and Reaper for recording OSC parameters.
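To illustrate the body-to-sound mapping described above, here is a small hypothetical sketch. The project's actual mapping lives inside vvvv; this just shows the idea of normalizing a tracked joint position into a 0..1 control value and sending it as OSC, which an OSC-capable VCV Rack module (for example a third-party cvOSCcv-style module) could turn into a control voltage. Joint ranges, port and address are assumptions.

```python
# Hypothetical illustration of the mapping idea (the real mapping is patched
# in vvvv): a tracked hand height is normalized to 0..1 and sent as an OSC
# message for an OSC-capable VCV Rack module. Ranges, port and address are
# assumptions.
from pythonosc.udp_client import SimpleUDPClient

rack = SimpleUDPClient("127.0.0.1", 7001)   # assumed OSC input port

def hand_height_to_cutoff(hand_y: float, floor_y: float = 0.0, head_y: float = 1.8) -> float:
    """Map the hand's vertical position (metres) to a 0..1 filter-cutoff parameter."""
    t = (hand_y - floor_y) / (head_y - floor_y)
    return min(max(t, 0.0), 1.0)

# One tracking frame: a hand at 1.2 m drives the cutoff parameter.
rack.send_message("/neuropathy/cutoff", hand_height_to_cutoff(1.2))
```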
https://vimeo.com/336382441

onur77, Tuesday, Jun 11th 2019

made with vvvv,
by
Paul Schengber
Emma Chapuy
Felix Deufel

.inf is an abstract, constantly changing visualization of data taken in real time from current weather situations on earth.
Incoming data such as humidity, temperature, atmospheric pressure, wind speed, wind direction and cloudiness at a specific location changes the position, speed, lighting and colouring of two million virtual points floating inside a three-dimensional space.
Atmospheric noise, received through online radio transmission, is used for the sound synthesis. The data, taken in real time, switches randomly between different places and their current weather situations on earth.
This produces unpredictable images.
Every frame of the resulting image and representation of the sound is unique and not reproducible.
Large differences in the values, caused by switching randomly between the current weather situations of specific locations, can trigger interrupting events, phases and flickering effects in the image and sound.
The endless generative visual and sound patterns, in which one can easily lose oneself, leave the viewers and listeners with a different perception of time and a third-person view.
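The post does not name the weather data source, so here is a hypothetical sketch of the data side, assuming a service such as OpenWeatherMap: fetch the current weather for a location and normalize a few readings into 0..1 parameters that could drive particle speed, lighting and colouring. The API key, parameter ranges and mappings are assumptions.

```python
# Hypothetical data-side sketch (the actual source is not named in the post):
# fetch current weather via OpenWeatherMap and normalize readings into 0..1
# control parameters.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

def fetch_weather(city: str) -> dict:
    url = "https://api.openweathermap.org/data/2.5/weather"
    resp = requests.get(url, params={"q": city, "units": "metric", "appid": API_KEY})
    resp.raise_for_status()
    return resp.json()

def normalize(value: float, low: float, high: float) -> float:
    """Clamp a raw reading into a 0..1 control parameter (ranges are assumptions)."""
    return min(max((value - low) / (high - low), 0.0), 1.0)

data = fetch_weather("Reykjavik")
params = {
    "speed":   normalize(data["wind"]["speed"], 0.0, 30.0),      # m/s
    "light":   normalize(data["main"]["temp"], -30.0, 45.0),     # degrees C
    "density": normalize(data["main"]["humidity"], 0.0, 100.0),  # %
}
```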

wisp, Monday, Jun 10th 2019

Scenography for the Picnic Fonic festival.

ggml, Sunday, Jun 9th 2019

In the era of Big Data, we like to be watched and be safe at the same time. We share our lives on social media and watch other people's lives. Amnesia is an interactive artwork about people who forget their true identity on social media and become addicted to data and algorithms.

Amnesia is an interactive video installation by Onur Comlekci. Programmed in vvvv. (2018)

onur77, Saturday, May 18th 2019

Credits
Client: Humanscale | Concept & Design: Todd Bracher & Studio TheGreenEyl | Technical director: Andreas Schmelas | Sound design: Marian Mentrup | Video: Maco Film Venice | Photography: David Zanardi


Bodies in Motion is an immersive light installation created by Todd Bracher and Studio TheGreenEyl for Humanscale at Milan Design Week 2019.

The installation is inspired by Humanscale's history as pioneers in human factors and natural ergonomics, bringing a scientific approach to furniture design. A related influence was the 1973 research of psychophysicist Gunnar Johansson, which involved placing lights on key points of the human body to highlight movement. Situated in the warehouse of Ventura Centrale in Milan, the work features a minimal representation of the human body formed of lights that respond to the movements of visitors. As visitors' bodies are scanned, 15 motorised lights project tightly focused white beams onto a screen fifteen metres away. The points of light on the screen correspond to key points of the person's body, including the head, shoulders, elbows, hands, sternum, hips, knees and feet. Each person that interacts triggers a specific visual and sound experience that is tightly synced across the two.
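The production software is not published, but the geometry described above can be sketched: map a tracked key point to a dot position on the distant screen, then compute the pan/tilt a moving light would need to hit that dot. All coordinates, scales and mounting positions below are assumptions for illustration.

```python
# Hypothetical geometry sketch: project a tracked body key point onto a screen
# ~15 m away and compute pan/tilt for a moving light to hit that point. All
# coordinates and scales are assumptions.
import math

SCREEN_DISTANCE = 15.0       # metres from the tracked person to the screen
BODY_TO_SCREEN_SCALE = 2.0   # enlarge the figure on the distant screen (assumed)

def body_point_to_screen(x: float, y: float) -> tuple[float, float]:
    """Scale a key point (metres, person-relative) to screen coordinates."""
    return x * BODY_TO_SCREEN_SCALE, y * BODY_TO_SCREEN_SCALE

def pan_tilt(light_pos, target_on_screen):
    """Angles (degrees) for a light at light_pos to hit a point on the screen plane."""
    lx, ly, lz = light_pos
    tx, ty = target_on_screen
    dx, dy, dz = tx - lx, ty - ly, SCREEN_DISTANCE - lz
    pan = math.degrees(math.atan2(dx, dz))
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return pan, tilt

# A right hand at (0.4, 1.3) m drives one of the 15 lights mounted at (0, 3, 0).
print(pan_tilt((0.0, 3.0, 0.0), body_point_to_screen(0.4, 1.3)))
```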

https://inmotion.humanscale.com/

Year: 2019
Client: Humanscale
Concept & Design: Todd Bracher & Studio TheGreenEyl
Technical director: Andreas Schmelas
Sound design: Marian Mentrup
Video: Maco Film Venice & Dezeen
Photography: David Zanardi

m9dfukc, Saturday, Apr 13th 2019

I needed to implement a BVH acceleration structure before I could proceed to more complicated scenes. Now this thing rips.
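The renderer itself isn't shared in the post, so here is a heavily simplified sketch of the core idea behind a BVH: build a tree by recursively splitting primitives along the longest axis at the median, and test rays against axis-aligned bounding boxes with a slab test so that most primitives are never touched. All names and the leaf size are illustrative.

```python
# Hypothetical, heavily simplified BVH sketch: median-split build over
# primitive bounding boxes plus a slab test, so a ray only visits boxes it
# actually hits.
from dataclasses import dataclass

@dataclass
class AABB:
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

def union(a: AABB, b: AABB) -> AABB:
    return AABB(tuple(map(min, a.lo, b.lo)), tuple(map(max, a.hi, b.hi)))

@dataclass
class Node:
    box: AABB
    left: "Node" = None
    right: "Node" = None
    prims: list = None          # leaf payload: primitive indices

def build(prims, boxes, leaf_size=4):
    box = boxes[prims[0]]
    for i in prims[1:]:
        box = union(box, boxes[i])
    if len(prims) <= leaf_size:
        return Node(box, prims=prims)
    axis = max(range(3), key=lambda a: box.hi[a] - box.lo[a])   # longest axis
    prims = sorted(prims, key=lambda i: boxes[i].lo[axis])      # median split
    mid = len(prims) // 2
    return Node(box, build(prims[:mid], boxes, leaf_size),
                     build(prims[mid:], boxes, leaf_size))

def hit_box(box, origin, inv_dir):
    """Slab test: does the ray (origin, 1/direction) intersect the box?"""
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box.lo, box.hi):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        tmin, tmax = max(tmin, min(t0, t1)), min(tmax, max(t0, t1))
    return tmin <= tmax

def traverse(node, origin, inv_dir, out):
    """Collect candidate primitives; only these need exact intersection tests."""
    if node is None or not hit_box(node.box, origin, inv_dir):
        return
    if node.prims is not None:
        out.extend(node.prims)
        return
    traverse(node.left, origin, inv_dir, out)
    traverse(node.right, origin, inv_dir, out)
```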

polyrhythm, Tuesday, Mar 19th 2019

Credits
Concept & lead: Christoph Diederichs (Atelier Markgraph), Software, interaction & motion design: Chris Engler (wirmachenbunt)

In times of 4K and even higher resolutions, the pixel vanishes. It becomes a thing of the past. Think again, with 756 very physical pixels, 3 color channels and ~20 cm of depth. It's a limited canvas which demands careful decisions in terms of movement, speed, sound and light composition. It breathes; it is a living, yet mechanical thing, communicating with its surroundings.
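To make the constraint concrete, here is a hypothetical content-preparation sketch: downsample a frame to 756 RGB samples and derive a 0..20 cm depth offset per pixel from brightness. The grid layout (36 x 21), the brightness-to-depth mapping and the fact that the real pipeline is built in vvvv rather than Python are all assumptions for illustration.

```python
# Hypothetical content-prep sketch (layout and mapping are assumptions; the
# installation's real pipeline is built in vvvv): downsample a frame to a
# 36 x 21 grid of 756 RGB pixels and derive a per-pixel depth offset.
import numpy as np

GRID_W, GRID_H = 36, 21          # assumed layout; 36 * 21 = 756 physical pixels
MAX_DEPTH_CM = 20.0

def frame_to_pixels(frame: np.ndarray):
    """frame: (H, W, 3) uint8 image -> (756, 3) colours and (756,) depth in cm."""
    h, w, _ = frame.shape
    ys = np.linspace(0, h - 1, GRID_H).astype(int)
    xs = np.linspace(0, w - 1, GRID_W).astype(int)
    grid = frame[np.ix_(ys, xs)].reshape(-1, 3)    # 756 RGB samples
    brightness = grid.mean(axis=1) / 255.0
    depth_cm = brightness * MAX_DEPTH_CM            # brighter = further out (assumed)
    return grid, depth_cm

colours, depth = frame_to_pixels(np.zeros((1080, 1920, 3), dtype=np.uint8))
```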

Big thanks to vux & colorsound for support.

u7angel, Thursday, Feb 21st 2019

Credits
Project by onformative. Production: Julia Laub, Creative Direction: Cedric Kiefer, Art Direction: Moco Ziegler, Technical Direction & Code: Moco Ziegler, Code: João Fonseca, Aristides Garcia, Max Mittermeier, Commissioned by: VOK DAMS, Architecture & Interior: Universal Design Studio / MAP Project Office, Technical Setup: Archimedes Exhibitions, Built with vvvv

»FLUX« is a data-driven art installation visualizing the different facets of the Internet of Things and cognitive technologies. Across four unique visual modes, the sculpture cycles through mesmerizing imagery created from streams of living data. Intelligently engaging with its viewers, the piece is a focal point of the IBM Watson Headquarters in Germany.

More information: https://onformative.com/work/ibm-flux
Documentation video: https://vimeo.com/263903586

julialaub, Monday, Feb 11th 2019
