
Blog


Blog posts are sorted by the tags you see below. You can filter the listing by checking or unchecking individual tags. Double-click or Shift-click a tag to see only its entries. For more information see: About the Blog.

Tags: addon-release, core-release, date, devvvv, gallery, news, screenshot, stuff, delicious, flickr, vimeo


Steel City is an interactive model of a futuristic steel plant. Using four terminals, each assigned to a specific topic, the user can dive deep into the world of steelmaking. Economic strategies become vibrant visual experiences, ecological considerations are suddenly evident, and logistics somehow starts to look sexy.

The tangible key that unlocks the experience is the combination of a screen and a highly innovative rotary knob with a haptic feedback engine, originally developed for the automotive industry.

zepi, Tuesday, Aug 27th 2019 | 0 comments

credits Concept, production and video: VOLNA
Engineering: Alexey Belyakov vnm
Mechanics: Viktor Smolensky
Camera/photo: Polina Korotaeva, VOLNA
Special thanks: William Cohen, Michael Gira, Igor Matveev, Alexander Nebozhin, Jason Strudler, Ivan Ustichenko, Artem Zotikov
Music: Swans – Lunacy
Project commissioned by Roots United for Present Perfect Festival 2019
© VOLNA (2019) © Swans ℗ 2012 Young God Records

More info: https://volna-media.com/projects/duel

The gloom ripples while something whispers,
Silence has set and coils like a ring,
Someone’s pale face glimmers
From a mire of venomous color,
And the sun, black as the night,
Takes its leave, absorbing the light.

M. Voloshin

iam404, Tuesday, Aug 20th 2019 | 2 comments

Watch on Vimeo

The installation consisted of a hyperrealistic 3D production simulating water flowing down the wall of a dam. The 6 × 14 m projection showed the dam closing and opening its gates; the water falling from the top would flood the 350 m² surrounding area, allowing visitors to step on, play with, and interact with the inundated surface.

PasosLargos, Wednesday, Aug 7th 2019 | comments on vimeo

Watch on Vimeo

Waifu Synthesis: real-time generative anime

http://everyoneishappy.com
https://www.instagram.com/everyoneishappy/

A bit of a playful project investigating real-time generation of singing anime characters; a neural mashup, if you will.

All of the animation is made in real time using a StyleGAN neural network trained on the Danbooru2018 dataset, a large-scale anime image database with 3.33m+ images annotated with 99.7m+ tags.
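The post doesn't show how the frames are driven, but the usual way to animate a GAN over time is a walk through latent space: pick latent vectors and interpolate between them, rendering one image per step. A minimal sketch of that idea, with a placeholder standing in for the actual Danbooru2018-trained StyleGAN generator:

```python
import numpy as np

LATENT_DIM = 512  # StyleGAN's default z dimension

def generate_frame(z):
    # Hypothetical stand-in: in the real setup this would run the
    # Danbooru2018-trained StyleGAN generator and return an RGB image.
    return np.zeros((256, 256, 3), dtype=np.uint8)

def slerp(a, b, t):
    # Spherical interpolation keeps intermediate latents at a plausible
    # norm, which tends to give cleaner in-between frames than lerp.
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Walk between random latents, one generated image per step: this is what
# makes the characters morph continuously over time.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(LATENT_DIM), rng.standard_normal(LATENT_DIM)
frames = [generate_frame(slerp(a, b, t)) for t in np.linspace(0.0, 1.0, 60)]
```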

Lyrics were produced with GPT-2, a large-scale language model trained on 40 GB of internet text. I used the recently released 345-million-parameter version; the full model has 1.5 billion parameters and has not yet been released, due to concerns about malicious use (think fake news).
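For reference, a hedged sketch of lyric generation with that 345M checkpoint. The post doesn't say which tooling was used; this assumes the Hugging Face transformers wrapper, where the checkpoint is published as "gpt2-medium", and a made-up prompt:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# "gpt2-medium" is the 345M-parameter checkpoint mentioned in the post.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

prompt = "Verse 1:\n"  # hypothetical seed; nudges the model toward song form
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_length=80,                        # keep generated verses short
    do_sample=True,                       # sample for variety, not greedy decode
    top_k=40,                             # a common top-k setting for GPT-2 samples
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```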

Music was made in part using models from Magenta, a research project exploring the role of machine learning in the process of creating art and music.

The setup uses vvvv, Python and Ableton Live.
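The post doesn't detail how the three talk to each other; a common glue in such setups (an assumption here, not a statement about this project) is OSC over UDP, for instance from the Python side into a vvvv patch. A sketch using the python-osc package, with made-up addresses and ports:

```python
from pythonosc.udp_client import SimpleUDPClient

# vvvv patch assumed to be listening for OSC on this port (assumption).
vvvv = SimpleUDPClient("127.0.0.1", 4444)

# Push values to the renderer; both addresses are made up for illustration.
vvvv.send_message("/stylegan/seed", 42)
vvvv.send_message("/lyrics/line", "hypothetical lyric text")

# Ableton Live itself is more commonly driven via MIDI or Max for Live;
# OSC on that end would need a bridge device.
```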

StyleGAN, Danbooru2018, GPT-2 and Magenta were developed by Nvidia, gwern.net/Danbooru2018, OpenAI and Google, respectively.

Kyle McLean, Wednesday, Aug 7th 2019 | comments on vimeo

Watch on Vimeo

Part of the live-coding gatherings at dotdotdot, Milan. This session was recorded on 29.05.2019.

Sound:
Nicola Ariutti
Davide Bonafede
Daniele Ciminieri
Simone Bacchini
Antonio Garosi

Visual:
Jib Ambhika Samsen

jib, Wednesday, Aug 7th 2019 | comments on vimeo

Neuropathy is an interactive sound installation and a presentation about technology and art. I presented this work at Digilogue's Future Tellers 2018 and Sonar+D Istanbul 2019. This video is from my workshop with the METU (ODTU) contemporary dance group in March 2019. I also gave a workshop at the ODTU Contemporary Dance Days in May 2019.

After two years of fighting cancer, my sense of touch and hearing decreased as a side effect of chemotherapy. They call that "neuropathy". While I was recovering, I wanted to get back to art-technology projects, for which I hadn't found any opportunity while working. Inspired by my situation, I started a project about it.

People who enter the area of the installation are connected to an invisible second self on the other (virtual) side, and they interact with it without touching anything, using their body movements to manipulate the parameters and shape the sound.

The software used in this project is vvvv for skeletal motion tracking, VCV Rack for sound synthesis, and Reaper for recording OSC parameters.
https://vimeo.com/336382441
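For illustration, a minimal sketch of that mapping idea, not the artist's actual patch: normalized joint positions come in from the tracker, are scaled onto synthesis parameters, and go out over OSC. VCV Rack would need an OSC-receiving module (e.g. a third-party one such as trowaSoft's cvOSCcv); the port and addresses below are assumptions:

```python
from pythonosc.udp_client import SimpleUDPClient

# Port and OSC addresses are assumptions, not the artist's values.
rack = SimpleUDPClient("127.0.0.1", 7001)

def on_skeleton(right_hand_y: float, hands_distance: float) -> None:
    """Called once per tracked frame; inputs assumed normalized to 0..1."""
    cutoff = 40.0 + right_hand_y * 4000.0       # hand height -> filter cutoff (Hz)
    rack.send_message("/ch/1", cutoff)
    rack.send_message("/ch/2", hands_distance)  # hand spread -> a second parameter

on_skeleton(0.5, 0.2)  # example frame
```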

onur77, Tuesday, Jun 11th 2019 | 0 comments

made with vvvv by
Paul Schengber
Emma Chapuy
Felix Deufel

.inf is an abstract and constantly changing visualization of data, taken in real time from current weather conditions on earth.
Incoming data such as humidity, temperature, atmospheric pressure, wind speed, wind direction and cloudiness at a specific location drives changes in the position, speed, lighting and colouring of two million virtual points floating inside a three-dimensional space.
Atmospheric noise, captured through online radio transmission, is used for the sound synthesis. The system switches randomly between different places on earth and their current weather conditions.
This causes unpredictable images.
Every frame of the resulting image, and every representation of the sound, is unique and not reproducible.
Large jumps in the values, caused by the random switching between the current weather of specific locations, can trigger interrupting events, phases and flickering effects in image and sound.
The endless generative visual and sound patterns, in which one can easily lose oneself, leave viewers and listeners with a different perception of time and a third-person view.
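As a rough illustration of the data side only: the post doesn't name its weather source, so this sketch assumes the OpenWeatherMap current-weather endpoint (which exposes the fields listed above) and a placeholder API key; the mappings at the end are hypothetical.

```python
import random
import requests

def fetch_weather(city: str) -> dict:
    # Current-weather endpoint; "YOUR_API_KEY" is a placeholder.
    r = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "units": "metric", "appid": "YOUR_API_KEY"},
        timeout=10,
    )
    r.raise_for_status()
    d = r.json()
    return {
        "temperature": d["main"]["temp"],     # °C
        "humidity": d["main"]["humidity"],    # %
        "pressure": d["main"]["pressure"],    # hPa
        "wind_speed": d["wind"]["speed"],     # m/s
        "wind_deg": d["wind"].get("deg", 0),  # degrees
        "clouds": d["clouds"]["all"],         # % cloud cover
    }

# Jump to a random place, as the piece does; abrupt changes in these values
# are what trigger the interrupting events described above.
w = fetch_weather(random.choice(["Reykjavik", "Singapore", "Ushuaia", "Yakutsk"]))

# Hypothetical mappings onto the point cloud:
particle_speed = 0.1 + w["wind_speed"] / 30.0
hue = (w["temperature"] + 30.0) / 60.0
```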

wisp, Monday, Jun 10th 2019 | 0 comments

scenography for Picnic Fonic festival

ggml, Sunday, Jun 9th 2019 | 0 comments

Watch on Vimeo

AUDIOVISUAL INSTALLATION
FOAM CORE, MIRRORS, LIGHT, SOUND.

A collaboration with Shawn “Jay Z” Carter, commissioned by Barneys New York

Art Director: Joanie Lemercier
Technical designer and developer: Kyle McDonald
Sound designer: Thomas Vaquié
Producers: Julia Kaganskiy & Juliette Bibasse

http://joanielemercier.com/quartz/
