credits wirmachenbunt / Atelier Markgraph
This is an attempt to show a little more of a project than just a video. There are some interesting bits and pieces, relying on brilliant contributions and some sometimes-overlooked goodies.
But first, let's have the video anyway.
The EQC-Scanner is sort of an augmented, kinetic installation. One could argue with the term augmented here, but it certainly adds information to the "real" layer. The whole thing is controlled by a touch screen, allowing you to pick topics or move the screens with your fingertip. All in all it is car technology communication, but in a playful package.
This is based on some articles like this LINK. A pretty neat recursive routine to learn what recursion is. I used C#, but I bet this is easy in VL. Anyone?
The plugin was used for the battery scene. Thank you captain obvious :)
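The post doesn't include the routine itself, so here is a minimal recursion sketch (hypothetical, in Python rather than the project's C#) of the classic subdivide-and-recurse pattern:

```python
def subdivide(start, end, depth, out):
    """Recursively split the interval [start, end] in half,
    collecting the midpoints -- a tiny way to grasp recursion."""
    if depth == 0:
        return
    mid = (start + end) / 2.0
    out.append(mid)
    subdivide(start, mid, depth - 1, out)
    subdivide(mid, end, depth - 1, out)

points = []
subdivide(0.0, 1.0, 3, points)
# depth 3 yields 1 + 2 + 4 = 7 midpoints
```

Each call spawns two smaller calls until the depth counter runs out, which is the same shape most recursive geometry routines take.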
Usually, when you are in RS232 or some fancy protocol land, you have to decode and encode values efficiently, e.g. encoding high values with as few bytes as possible. I always come back to jens.a.e's BitWiseOps; this is one of the overlooked contributions.
For this project, the plugins were used to encode the engine controller messages.
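BitWiseOps itself is a vvvv plugin pack, but the underlying idea can be sketched in a few lines of Python (a hypothetical example, not the project's actual message format): pack a value into 7-bit payload bytes so the high bit of each byte stays free for protocol flags.

```python
def encode_14bit(value):
    """Pack a 0..16383 value into two 7-bit payload bytes."""
    assert 0 <= value < 1 << 14
    return bytes([(value >> 7) & 0x7F, value & 0x7F])

def decode_14bit(payload):
    """Reassemble the original value from the two payload bytes."""
    hi, lo = payload
    return (hi << 7) | lo

msg = encode_14bit(12345)
```

The same shift-and-mask moves are what the BitWiseOps nodes do on the patch level.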
The image above shows some sensor recordings of the screen movement. Creating this data viz in vvvv was quick and revealed a physical feedback loop. The violet curve shows the frame difference of the screen position. And while the real screen movement actually looked kind of smooth, the sensor data showed some heavy regular spikes. Obviously, the engines did some regular overshooting. Not a big problem and solvable on the hardware side. Interesting how data visualization can help to track down problems.
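The same spike hunting can be sketched outside vvvv. The numbers below are made up; only the technique (frame difference plus a threshold) matches what the post describes:

```python
# Made-up position samples standing in for the recorded sensor data.
positions = [0.0, 1.0, 2.1, 3.0, 9.5, 4.0, 5.0]

# Frame difference: how far the screen moved between two samples.
diffs = [b - a for a, b in zip(positions, positions[1:])]

# Regular outliers in this curve are the overshoots described above.
spikes = [i + 1 for i, d in enumerate(diffs) if abs(d) > 3.0]
```

Plotting `diffs` instead of `positions` is exactly what made the overshoot visible in the violet curve.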
Controlling particles in a meaningful way can be painful. Using vector fields can bring some structure into the chaos. The tool VectorayGen helps generate the vectors the easy way. It even has a node-based environment to drop some organic variation into your fields. And btw, the devs are very friendly: https://jangafx.com/
The tool was used to create the key visual, 500k floating particles along the car exterior.
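VectorayGen exports baked fields, but the particle side can be sketched generically (hypothetical Python, with an analytic field standing in for the baked texture):

```python
import math

def field(x, y):
    """A hypothetical swirling vector field, standing in for a
    baked field texture: simple rotation around the origin."""
    return -y, x

def advect(particles, dt=0.01, steps=100):
    """Forward-Euler integration of particles through the field."""
    for _ in range(steps):
        new = []
        for x, y in particles:
            vx, vy = field(x, y)
            new.append((x + vx * dt, y + vy * dt))
        particles = new
    return particles

out = advect([(1.0, 0.0)])
```

In the installation the field lookup happens on the GPU for all 500k particles, but the per-particle step is this simple.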
Sure, this is not a big secret; it's one of vvvv's selling points. But I have to say, it just works. Bringing together 3 machines was actually fun. Hopefully VL preserves this killer feature.
This was pretty much the first time I really used my own plugin, and it was surprisingly helpful. :) It consists of an initialization tree for the engines & sensors and an abstract transition model of the visual software part. This is an attempt to leave content out of the state machine and instead use the single state TOPIC for every topic in use. It might be harder to read and it doesn't allow you to jump to a specific content, but it makes your software expandable (without touching Automata again).
Not sure if this is a best practice for everything; let's see how this works out in the future.
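The pattern is independent of the plugin. A toy sketch (hypothetical names, Python instead of an Automata patch) of keeping content out of the state machine:

```python
# The machine only knows an abstract TOPIC state; which topic
# is currently shown lives outside it, as plain data.
TRANSITIONS = {
    ("IDLE", "select"): "TOPIC",
    ("TOPIC", "back"): "IDLE",
    ("TOPIC", "select"): "TOPIC",  # switch topics, no new states
}

class Machine:
    def __init__(self):
        self.state = "IDLE"
        self.topic = None  # content is data, not a state

    def fire(self, event, topic=None):
        self.state = TRANSITIONS[(self.state, event)]
        self.topic = topic if self.state == "TOPIC" else None

m = Machine()
m.fire("select", topic="battery")
m.fire("select", topic="drivetrain")  # new content, machine untouched
```

Adding a topic means adding data, not editing the transition table, which is the expandability argument made above.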
There is always the dream of a fully, totally dynamic vvvv project. Our Ruby-on-Rails-based web tool helps to manage all texts and images. It even renders the texts as images, freeing vvvv from rendering crappy text and letting it display smooth pre-rendered images instead. Most of the vvvv project depends on the CMS, but of course there are some bits which are always hard-wired, like 3D stuff.
I hope you find this "article" informative. Any questions or comments?
credits Students: Jan Barth (jancouver), Roman Stefan Grasy (roman_g), Mark Lukas, Markus Lorenz Schilling (dl-110) Academics: Prof. Hartmut Bohnacker, Prof. Ralf Dringenberg
Interaction designers develop our digital everyday life. They conceive, sketch and create new possibilities of interaction, which are evaluated with functional and technical prototypes.
The book “Prototyping Interfaces – Interactive Sketches with VVVV” covers, across 280 pages, the applied handling of interactive sketches with the visual programming language VVVV.
credits Kyle McLean
Bit of a playful project investigating real-time generation of singing anime characters, a neural mashup if you will.
All of the animation is made in real-time using a StyleGan neural network trained on the Danbooru2018 dataset, a large scale anime image database with 3.33m+ images annotated with 99.7m+ tags.
Lyrics were produced with GPT-2, a large-scale language model trained on 40GB of internet text. I used the recently released 345-million-parameter version; the full model has 1.5 billion parameters and has not yet been released due to concerns about malicious use (think fake news).
Music was made in part using models from Magenta, a research project exploring the role of machine learning in the process of creating art and music.
StyleGan, Danbooru2018, GPT-2 and Magenta were developed by Nvidia, gwern.net/Danbooru2018, OpenAI and Google respectively.
We'd like to share our latest VVVV / VL project.
In collaboration with ART+COM, we breathed life into a kinetic installation which explores a dynamic interaction between the digital and the physical. In this choreography the screens interact with generative images, and vice versa, in real time.
More information and project video: http://schnellebuntebilder.de/projects/inspiration-wall/
All the visuals are procedurally created in VVVV to interact with the movement of the screens.
We researched a couple of generative algorithms and approaches for this project and implemented a deferred shading pipeline for raymarched content. This was used for 3D reaction diffusion and growth systems and also to shade the 3D fluid system we've been developing over the last year.
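The raymarching pipeline is out of scope here, but the reaction-diffusion part is compact enough to sketch. This is a hypothetical 1D Gray-Scott step in Python (the project ran a 3D version shaded on the GPU); the parameter values are common textbook choices, not the project's:

```python
def laplacian(a):
    """Discrete 1D Laplacian with wrap-around boundaries."""
    n = len(a)
    return [a[(i - 1) % n] - 2 * a[i] + a[(i + 1) % n] for i in range(n)]

def step(u, v, du=0.16, dv=0.08, feed=0.035, kill=0.065, dt=1.0):
    """One explicit Gray-Scott update of chemicals u and v."""
    lu, lv = laplacian(u), laplacian(v)
    un = [u[i] + dt * (du * lu[i] - u[i] * v[i] ** 2 + feed * (1 - u[i]))
          for i in range(len(u))]
    vn = [v[i] + dt * (dv * lv[i] + u[i] * v[i] ** 2 - (feed + kill) * v[i])
          for i in range(len(v))]
    return un, vn

u = [1.0] * 64
v = [0.0] * 64
v[32] = 0.5  # seed a disturbance in the middle
for _ in range(10):
    u, v = step(u, v)
```

The u/v concentration volume is what a raymarcher would then turn into a surface via a density threshold.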
lasal supported us with a couple of contents. A fair part of the project was implemented in VL, for example the pre-simulation of the screen movements, which ensured valid driving data for the motors. We also used lasal's VL-based timeline and implemented a content-switcher system in VL, which selected contents based on a set of rules and distributed scene parameters like colors between the content patches.
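The switcher idea can be sketched in a few lines (hypothetical names and rules; the real VL version is richer):

```python
import random

# Each content patch declares tags; a rule picks the next patch
# that matches the current mood and avoids immediate repeats.
CONTENTS = [
    {"name": "fluid", "tags": {"calm"}},
    {"name": "growth", "tags": {"busy"}},
    {"name": "reaction", "tags": {"busy"}},
]

def pick(mood, previous=None, rng=random):
    candidates = [c for c in CONTENTS
                  if mood in c["tags"] and c["name"] != previous]
    return rng.choice(candidates)

scene = pick("busy", previous="growth")
```

Shared parameters such as colors would then be handed to whichever patch `pick` returns, keeping transitions coherent.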
credits Alexey Beljakov, Andrey Boyarintsev, Anna Kerbo, Nikolay Matveev, Julien Vulliet, Minoru Ito
Architectural Light Studio (The ALS) is an unrivaled software suite designed for creating, editing and playing back architectural and art lighting scenarios.
The debut of The ALS took place at Light+Building 2014 in Frankfurt, March 30 to April 4, 2014.
The ALS is written in vvvv by Ivan Rastr electronic crafts.
Please take a few minutes for a video preview of The ALS v0.95 beta.
For further information about The Architectural Light Studio, please contact the IntiLED office.
Sergey Matveev, Evgeniy Moskov, Uljana Vinogradova, Anton Lysenko, Elena Belova, Oleg Yanchenkov, Denis Ichalovsky
Ivan Rastr electronic crafts team:
Alexey Beljakov vnm, Andrey Boyarintsev bo27, Anna Kerbo kerbo, Nikolay Matveev unc and specially invited Julien Vulliet vux Thanks to Minoru Ito mino for helping with AntTweakBar.
credits elliot woods, chris plant (catweasel)
In April 2012, Elliot Woods and I were approached to create an application to project onto a car on a turntable. The brief expanded as the project went on until, by the end, it was to be a different car (of the same model) for every show; they wanted to be able to do multiple shows per day, and the alignment was to be done by a member of staff at the Land Rover Experience.
A challenging brief.
The building finally opened in December 2012. The car takes approximately 10-15 minutes to line up using 3 sliders and a rotary control on an iPad, with a couple of buttons to spin the turntable around so you can check the setup.
It uses 7 projectiondesign projectors, all running at 1920x1200: 3 for the back wall, 2 for the car and 2 for the floor. We also had to minimise any visible shadows. Content was rendered at 4K and is all run off 1 PC.
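The operator-facing alignment can be sketched as a single 2D similarity transform driven by the iPad controls (a hypothetical Python sketch; the real setup aligns calibrated 3D content per projector):

```python
import math

def align(points, x, y, scale, angle_deg):
    """Apply the three sliders (x, y, scale) and the rotary control
    (angle) as one rotate-scale-translate transform."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(x + scale * (c * px - s * py),
             y + scale * (s * px + c * py)) for px, py in points]

car_outline = [(0.0, 0.0), (1.0, 0.0)]  # stand-in content points
aligned = align(car_outline, x=0.1, y=0.0, scale=2.0, angle_deg=90.0)
```

Collapsing the adjustment to four intuitive controls is what lets a non-technical staff member do the lineup in 10-15 minutes.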
credits Chris Plant (catweasel) , Tonfilm for VAudio.
Harmonic Portal is a new work commissioned for Lumiere Festival Durham, 2017 http://www.lumiere-festival.com/durham-2017/
There are 5 pieces arranged around the walls of St Godric's Church, Castle Chare, Durham. Each is a portal into the surface of the wall; the light amplifies surface details due to the angle of incidence and feels like a magnifying glass (or even an electron microscope at times).
They question how we perceive colour: the inside and outside contrast and harmonise as time progresses, mirrored by the soundtrack, which reflects the colours. The frequencies of red, green and blue light have been dropped by 42 octaves and vary in proportion to those colours of light.
It is partly a contemplation of how our brains create colour from 3 frequencies, but also of how limited a bandwidth that actually is. The fundamental notes are a flattened Ab, A and B.
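The 42-octave drop is simple arithmetic: halving a frequency lowers it one octave, so dividing an optical frequency by 2^42 lands it in the audible range. The wavelengths below are rough textbook values (my assumption, not the artist's exact choices):

```python
C = 299_792_458  # speed of light, m/s

def light_to_audio_hz(wavelength_nm, octaves=42):
    """Convert a wavelength of light to a frequency, then drop it
    by the given number of octaves (one halving per octave)."""
    optical_hz = C / (wavelength_nm * 1e-9)
    return optical_hz / 2 ** octaves

red = light_to_audio_hz(650)    # roughly 105 Hz
green = light_to_audio_hz(530)  # roughly 129 Hz
blue = light_to_audio_hz(460)   # roughly 148 Hz
```

With these assumed wavelengths all three land comfortably in the low audible register, close to the bass notes named above.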