Live AV show of generative real-time visuals and live electronic music.
An abstract extravaganza of raymarching, Kinect point clouds, particle systems and geometry shaders.
Entirely made with code: marching tamed noise functions and mixing procedural geometries with more mundane polygons.
Visuals made by evvvvil in HLSL and vvvv, played live with a Novation Launch Control XL.
Music made by OddJohn and played live in Ableton with a Minibrute and Push 2.
For the project “Deep Sound of Maramures”, several visual effects were created alongside Peter Gate's (Petru Pap) composition, in a distinctly geometric graphic style guided by a spatial artistic philosophy. A sequence of spatial visual elements was inspired by Peter's “The Deep Sound of Maramures”. Neither music nor visuals are fixed forms; they are alive. To give the audience a unique journey, the approach is to create living visual elements that follow the emotion of the music: not the usual background visuals running through a concert, but a set of real-time images that interact with the live performance. In the “sea waves” scene, for example, the speed and curvature of the waves are generated live in response to the music. The piece illustrates a bird's fantastical journey through different spatial environments, from the nature of Earth to outer space, from concrete landscapes to abstract imagination. Adding the time dimension of the music turns this into a 4D immersive space in which the audience can fully engage with the show.
Commissioned to work with SALT Research collections, artist Refik Anadol employed machine learning algorithms to search and sort relations among 1,700,000 documents. Interactions of the multidimensional data found in the archives are, in turn, translated into an immersive media installation. Archive Dreaming, which is presented as part of The Uses of Art: Final Exhibition with the support of the Culture Programme of the European Union, is user-driven; however, when idle, the installation "dreams" of unexpected correlations among documents. The resulting high-dimensional data and interactions are translated into an architectural immersive space.
Shortly after receiving the commission, Anadol was a resident artist for Google's Artists and Machine Intelligence Program where he closely collaborated with Mike Tyka and explored cutting-edge developments in the field of machine intelligence in an environment that brings together artists and engineers. Developed during this residency, his intervention Archive Dreaming transforms the gallery space on floor -1 at SALT Galata into an all-encompassing environment that intertwines history with the contemporary, and challenges immutable concepts of the archive, while destabilizing archive-related questions with machine learning algorithms.
In this project, a temporary immersive architectural space is created as a canvas, with light and data applied as materials. This radical effort to deconstruct the framework of an illusory space transgresses the normal boundaries of the viewing experience of a library, and of the conventional flat cinema projection screen, into a three-dimensional kinetic and architectonic space of an archive visualized with machine learning algorithms. By training a neural network on images of 1,700,000 documents from SALT Research, the main idea is to create an immersive installation with architectural intelligence that reframes memory, history and culture within museum perception for the 21st century through the lens of machine intelligence.
SALT is grateful to Google's Artists and Machine Intelligence program, and Doğuş Technology, ŠKODA, Volkswagen Doğuş Finansman for supporting Archive Dreaming.
Location : SALT Galata, Istanbul, Turkey
Exhibition Dates : April 20 - June 11
6-Meter-Wide Circular Architectural Installation
4 Channel Video, 8 Channel Audio
Custom Software, Media Server, Table for UI Interaction
For more information:
Jac de Haan
Refik Anadol Studio Members & Collaborators
Raman K. Mustafa
Ho Man Leung
Welcome dear patchers to a new episode of devvvvs giving you control over your PC mainboard.
When you work in vvvv or VL, the evaluation of your patch is automatically driven by a mainloop. It executes the nodes in your patch (usually) 60 times per second, thereby allowing changes to happen in your patch over time.
If you look at the PerfMeter in a renderer while the mainloop timer runs without any tweaks, you will see lots of flickering like this:
Those flickers indicate that the time between two frames of the mainloop changes a little every frame. In an ideal world those flickers would not be there and the time between two frames would always be the same. An unstable mainloop like this creates jitter in animations, drops video frames and makes the visual output of your patch look less smooth.
It's quite a difficult task to get high-precision timer events on a modern computer architecture. Timers and I go way back to the early vvvv days at MESO, when I worked on the vvvv mainloop and the Filtered time mode. Since then we have improved the vvvv mainloop's time stability quite a bit with tricks like changing the Windows system timer resolution and introducing a short busy-wait phase at the end of the mainloop. The result of this work looks like this:
The experience gathered from the vvvv mainloop improvements is now available in the VL library, so you can build your own sub-mainloops.
But why would you need your own timer at all if you have a good mainloop already? There are a few reasons:
In VL, the patch of a process node has a Create and an Update operation by default. Create gets called once, when an instance of the process is created, and Update gets called periodically by the mainloop. In this process node patch you can place other process nodes that 'plug into' those two operations by placing their own Create on the Create of the surrounding patch and their Update on the Update of the surrounding patch.
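As a rough analogy, the Create/Update lifecycle can be sketched in Python. VL is a visual language, so the class and method names here are purely illustrative:

```python
# Hypothetical sketch of VL's process-node lifecycle.
# These names are illustrative only; VL patches are visual, not textual.

class Counter:
    def __init__(self):        # corresponds to Create: runs once per instance
        self.count = 0

    def update(self):          # corresponds to Update: runs every frame
        self.count += 1
        return self.count

class MainLoop:
    """Drives all process nodes, like the vvvv/VL mainloop."""
    def __init__(self, nodes):
        self.nodes = nodes     # instances were 'created' on construction

    def frame(self):
        # one mainloop iteration: call Update on every node
        return [node.update() for node in self.nodes]

loop = MainLoop([Counter(), Counter()])
loop.frame()
print(loop.frame())            # → [2, 2]: each Update has now run twice
```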
This is the same for stateful regions like ForEach [Reactive], only that the Update of the ForEach region doesn't get called automatically by the surrounding patch but gets called by the events of the incoming observable. More on that in this blog post: VL: Reactive Programming
There are many sources of observable events, for example Mouse, Keyboard and other input devices, as well as AsyncTask or MidiIn. The timer nodes work in the same way. The output is an Observable that runs on its own thread and sends either the frame number (for the system timer nodes) or a TimerClock (for the MultimediaTimer or BusyWaitTimer). A patch would look like this:
The use of observables also makes it easy to swap one timer for another if necessary.
Basically there are three ways to set up timers in Windows, and now VL has them all!
This is the most common timer, but it usually only has a precision of about 16 ms. It can be used for recurring events when accuracy is not critical and the interval is in the seconds range or a high-milliseconds range. Nodes that use this timer include Interval and Timer in the Reactive category:
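You can get a feel for this kind of jitter by measuring the actual period of a simple sleep-based loop. This sketch uses Python's `time` module; on Windows the default system timer resolution (~16 ms) caps the achievable precision, and results will differ on other systems:

```python
import time

# Measure the actual period of a sleep-based timer loop.
# The spread between min and max period is the 'flickering'
# you see on the PerfMeter.

def measure_periods(target_period, ticks):
    periods = []
    last = time.perf_counter()
    for _ in range(ticks):
        time.sleep(target_period)      # precision limited by the OS timer
        now = time.perf_counter()
        periods.append(now - last)
        last = now
    return periods

periods = measure_periods(0.02, 10)    # ask for a 20 ms period
jitter = max(periods) - min(periods)
print(f"mean {sum(periods) / len(periods) * 1000:.2f} ms, "
      f"jitter {jitter * 1000:.2f} ms")
```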
This is a dedicated timer for applications that do video or MIDI event playback. It is accurate to about 1 ms and doesn't need much CPU power, so it can be used for most time-critical scenarios. To use this timer, make sure you enable the Experimental button in the VL node browser and create the node MultimediaTimer:
So that's nice, but it has two little drawbacks. You can only specify the period in whole milliseconds, and as you can see there is still some flickering in the measured period times. The flickering is well below 1 ms, but still, we can improve on that:
Since it is possible to measure time with high accuracy, one can write an infinite loop that constantly checks the clock and fires an event once the specified interval has passed. This timer always uses 100% of one CPU core because it checks the time as often as it can. But hey, how many cores do you have these days? With this method you can achieve precision in the microsecond range, which is insane!
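The busy-wait idea fits in a few lines. This Python sketch spins on a high-resolution clock until each deadline passes; note that it schedules the next deadline from the previous one, not from "now", so small overshoots don't accumulate:

```python
import time

# Busy-wait timer sketch: spin on a high-resolution clock until the
# interval has elapsed. Very precise, but pins one CPU core at 100%.

def busy_wait_timer(period, ticks, on_tick):
    next_deadline = time.perf_counter() + period
    for frame in range(1, ticks + 1):
        while time.perf_counter() < next_deadline:
            pass                       # spin: check the clock as fast as possible
        on_tick(frame)
        next_deadline += period        # schedule from the deadline, not 'now',
                                       # so timing errors don't accumulate

periods = []
last = [time.perf_counter()]

def record(frame):
    now = time.perf_counter()
    periods.append(now - last[0])
    last[0] = now

busy_wait_timer(0.005, 20, record)     # 5 ms period
print(f"max deviation: "
      f"{max(abs(p - 0.005) for p in periods) * 1e6:.0f} µs")
```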
If any patch processing happens on the timer event, the power of that core is of course shared with the busy wait. Just make sure that the processing doesn't take longer than the specified period:
This timer has an option to reduce CPU load for periods longer than the accuracy of your system timer. You can specify a time span called Wait Accuracy: the span before the desired end of the period at which the busy-wait phase should start. Before that point, the timer sleeps in 1 ms steps. 16 ms is a safe value, but to reduce CPU load even further you can decrease it until the Last Period starts to jump.
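The hybrid strategy can be sketched like this: sleep in cheap 1 ms steps until you are within the Wait Accuracy window of the deadline, then switch to busy-waiting for the final stretch (parameter names here are illustrative):

```python
import time

# Hybrid wait sketch, mirroring the BusyWaitTimer's Wait Accuracy option:
# coarse sleeping far from the deadline, busy-waiting close to it.

def hybrid_wait(deadline, wait_accuracy=0.016):
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= 0:
            return                     # deadline reached
        if remaining > wait_accuracy:
            time.sleep(0.001)          # cheap coarse waiting phase
        # else: busy-wait phase, just loop and re-check the clock

ticks = []
deadline = time.perf_counter()
for frame in range(5):
    deadline += 0.05                   # 50 ms period
    hybrid_wait(deadline)
    ticks.append(time.perf_counter())
print(len(ticks))                      # → 5
```

Shrinking `wait_accuracy` trades CPU load against the risk that a late wake-up from `sleep` overshoots the deadline, which is exactly the trade-off the Wait Accuracy pin exposes.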
Both the MultimediaTimer and the BusyWaitTimer start their own background thread with priority AboveNormal. The thread priority setting might become an input pin in the future.
So now download the latest alpha, enable the Experimental button in the VL node browser and give it a shot. If anything unexpected happens, let us know in the forums.