Blog

This is to let you all know about the existence of MediaFutures. In a nutshell:

MediaFutures ... brings together startups, SMEs and artists in the media value chain to expand on standard models and comes up with unconventional ways for people to engage with quality journalism, science education and democratic processes. It aims to create products, services, digital artworks and experiences that will reshape the media value chain through innovative, inclusive and participatory applications of data and user-generated content.

...

MediaFutures funds and supports products, services, artworks and experiences that transform for the better the ways people consume news and engage with facts, and the ways experts make decisions and contribute to society.

...

MediaFutures will support 51 startups or SMEs and 43 artists through a total of three Open Calls in the coming three years, distributing a total amount of €2.5M. It will support the selected applicants through a 6-month acceleration (startups & SMEs)/ residency (artists) programme including funding, mentoring and training.

If you're interested, check out the details for the 1st Open Call, which closes on January 28!

From their FAQ, here is who is eligible for the calls:

Only applicants legally established, and working, in the case of groups of individuals, in any of the following countries will be eligible:

  • The Member States (MS) of the European Union (EU), including their outermost regions;
  • The Overseas Countries and Territories (OCT) linked to the Member States;
  • H2020 Associated countries: according to the updated list published by the EC
  • The UK applicants are eligible under the conditions set by the EC for H2020 participation at the time of the deadline of the call.

Note that individual artist applicants will be eligible from any country in the world, provided that they are able to travel to Europe for the MediaFutures programme and always provided that the Covid-19 situation allows it.

On short notice: they are holding a webinar on January 13 at 1pm CET where you can learn more about the programme.

We hope some of you have an idea to apply with, and if your idea involves vvvv, don't hesitate to get in touch. It may be worth exploring opportunities together...

joreg, Tuesday, Jan 12th 2021

Hello everyone!

2020.2

Here is the second big release this year for vvvv gamma! Available for download and purchase. Now!

It comes with improvements that strengthen the toolkit character of vvvv gamma and VL, which are outlined over here: vvvv-the-tool.

It also comes with small improvements for object-oriented programming patterns, introducing the This node. Many of these changes were motivated by the need to roll out a certain library, so this is the last version without that exact library.

Bugfix releases:

  • 2020.2.2 on October 1st, 2020
  • 2020.2.3 on November 17, 2020
  • 2020.2.4 on November 25, 2020
  • 2020.2.5 on January 8, 2021

For the full list of changes, see The Log of Changes.

Updating libraries

This release introduces a breaking change for certain NuGet packages! If you use any of the following packages then you'll have to make sure to use the latest available version:

  • VL.OpenCV
  • VL.Devices.Kinect2
  • VL.Devices.AzureKinect
  • VL.Devices.AzureKinect.Body
  • VL.Devices.Nuitrack
  • VL.Devices.RealSense
  • VL.MediaFoundation
  • VL.Elementa -pre
  • VL.Audio -pre
  • VL.IO.OSC -pre
  • VL.RunwayML -pre

So when updating to 2020.2.x, you need to update to the latest version of these libraries. See Manage NuGets for how to do so via the command line.

Running 2020.1.x on the same machine

If, after running 2020.2.x and updating the above libraries, you want to run 2020.1.x again, you'll first have to remove the new packs that only work with 2020.2.x.

To do so, you'd go to your

    AppData\Local\vvvv\gamma\nugets

and either delete packs selectively or simply clean the whole directory and re-install NuGets as you need them.
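If you'd rather script the selective deletion than click through folders, here is a small sketch. It assumes each pack lives in its own folder named after the package (optionally with a version suffix), which is how NuGet lays them out; the helper name and the example pack list are mine, not part of vvvv:

```python
import shutil
from pathlib import Path

# A subset of the 2020.2.x-only packs from the list above (extend as needed)
PACKS_2020_2 = ["VL.OpenCV", "VL.Devices.Kinect2", "VL.MediaFoundation"]

def remove_packs(nugets_dir, pack_names):
    """Delete the folders of the given packs from the vvvv gamma nugets directory."""
    removed = []
    for folder in Path(nugets_dir).iterdir():
        # Match both plain names ("VL.OpenCV") and versioned ones ("VL.OpenCV.1.2.0")
        if folder.is_dir() and any(
            folder.name == p or folder.name.startswith(p + ".") for p in pack_names
        ):
            shutil.rmtree(folder)
            removed.append(folder.name)
    return sorted(removed)
```

Run it once with your `AppData\Local\vvvv\gamma\nugets` path before switching back to 2020.1.x.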

Info for library developers

This breaking change only affects your library if it uses a C# project that references VL.Core! In that case, all you have to do is update VL.Core to 2020.2.x.

Then please communicate which version of your library is compatible with which version of vvvv.

Sorry for the inconvenience, we hope to make it up soon, by sharing a certain library...

Happy patching & see you soon,
yours devvvvs!

gregsn, Friday, Jan 8th 2021

Previously on vvvv: vvvvhat happened in November 2020


So,

December happened, and what did not happen is the planned release of 2020.3, the first stable version to include the shiny new 3D engine VL.Stride. We're in the final commits for it, so please bear with us and just grab a preview if you can't wait.

Meanwhile, the promised rework of the OSC and TUIO nodes has landed in previews, which should make working with those two ubiquitous protocols easier than ever.

Working Groups

I think it makes sense to start giving you an overview of a few bigger community efforts that have recently started. Please add your expertise where you see fit! We hope to gather regular updates from them in the coming months:

Contributions

New:

Updates:

Learning material

Gallery

Binärtransformation by ravel

And more:

Jobs


That was it for December. Anything to add? Please do so in the comments!

joreg, Wednesday, Jan 6th 2021

credits Concept, production and video: VOLNA (incl. vnm)
Camera / photo: Polina Korotaeva, VOLNA
Music (video documentation): Hush Pup – Flower Power
© VOLNA (2020) © Hush Pup (2018)

The kinetic light installation “Nymphéas” is inspired by the lyrical images of ponds overgrown with water lilies by whose side poets and artists have spent countless hours in contemplation of clean, cold flowers.

read more...
iam404, Monday, Jan 4th 2021

Watch on Vimeo

Kykeon is an immersive virtual reality experience exploring shamanism as a way to reimagine our current society. Inspired by ancestral knowledge and wisdom, it invites the audience to take part in a new ritual.

concept, creation, creative direction | Mária Júdová

choreographer | Taneli Törmä
dancers | Tanzmainz / Staatstheater Mainz, Amber Pansters, Bojana Mitrović, Finn Lakeberg, Milena Wiese, Zachary Chant

sound designer | Alexandra Timpau
VFX support | Florian Friedrich / Narranoid
HW support | Marko Júda
producer | Mara Nedelcu
co-producers | Motion Bank / Hochschule Mainz, Sensorium festival
project title, logo design | Constantine Nisidis

videographers | Andreas Etter, Erik Čermák, Ivona Lichá, Lucia Byšfyová
sound recordist | René Bošeľa / Môlča Records
gong performer | Stanislav Abrahám
script editor | Salwa dor Carmenaissa

maria judova, Monday, Dec 21st 2020

credits Nils Weger (phlegma), Christine Mayerhofer (ravel), Jan Mayerhofer (Hoerfeld), Sebastian Hohberg (Hoerfeld)

view full show at https://www.youtube.com/watch?v=Ckq2Sn75qtg

The light sculpture hanging above the heads of the spectators is 8 meters high. The animated light moves in complete symbiosis with the music, which combines electronic sounds with classical instruments.

In four acts, the audience in the Dreikönigskirche follows the change of society through the digitalization of our lives.

The digital lifestyle is illuminated artistically. The story begins with the early days of the computer, a mechanical calculating machine. It tells of machines becoming ever faster and ever higher-resolution, until they are finally more powerful than their designers and users in many areas. The border between the digital and analogue worlds becomes blurred.

Among other things, human dimensions of change, our permanent interaction with machines and their algorithmic filter worlds, emotional highs and lows, and the speed of life in the digital age are addressed.

Where will this journey take us?

Thanks to dottore for the runtime model editor and nsynk for the timeline tilda and the vvvv group and the amazing community.

ravel, Monday, Dec 21st 2020

After NODE 2020, having seen all the wonderful things Stride and vvvv can do together, it was inevitable to fall head first into the adventure that has been bringing Stride and VL.OpenCV into a playful and seamless friendship.

I am happy to announce that as of version 1.2.0 of VL.OpenCV, you can effortlessly and painlessly:

Find your camera's position based on a pattern or marker

Need to know where your camera is and what it's looking at based on an Aruco marker or a chessboard calibration pattern?

Say no more:

Find camera position and rotation based on Aruco marker 2

And now from the outside:

Find camera position and rotation based on Aruco marker 2

Dizzy yet?
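Under the hood, this kind of camera-position estimation typically reduces to OpenCV's solvePnP, which returns a rotation vector (rvec) and a translation vector (tvec). A minimal NumPy sketch of the last step, turning those extrinsics into a view matrix and a camera position; this is illustrative math, not the actual VL.OpenCV implementation, and the function names are mine:

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector (axis * angle) -> 3x3 rotation matrix (Rodrigues formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float) / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def extrinsics_to_view(rvec, tvec):
    """Build a 4x4 view matrix (world -> camera space) from solvePnP-style extrinsics."""
    view = np.eye(4)
    view[:3, :3] = rodrigues(rvec)
    view[:3, 3] = np.asarray(tvec, dtype=float)
    return view

def camera_position(rvec, tvec):
    """Camera center in world space: C = -R^T @ t."""
    return -rodrigues(rvec).T @ np.asarray(tvec, dtype=float)
```

In vvvv itself the corresponding nodes do this wiring for you; the sketch just shows why a single rvec/tvec pair is enough to place a virtual camera.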

Estimate the pose of an Aruco marker to create AR applications

Bring 3D objects and animations into your image using Aruco markers to create augmented reality projects:

AR Teapot

Calibrate a projector

Remember this beauty? It helps you figure out the position and characteristics of your projector in your 3D scene.

Calibrate projector

And reproject

Once you know where your projector and the spectator are, you only need to worry about the content. 3D projection mapping made easy!

Reproject

Not bad huh?

So there you have it boys and girls, 3D computer vision based adventures for all! Head to your local nuget distributor and grab a copy while it's still hot.

A big thank you to motzi, gregsn and tebjan for their invaluable help as well as to many others who contributed one way or another.

And as always, please test and report.

Keep your cameras calibrated kids!

Happy holidays!

Changelog

Added Stride compatible versions of
  • SolvePnP
  • ApplyNearAndFar
  • Perspective
  • ExtrinsicsToViewMatrix
  • ExtrinsicsToProjectionMatrix
New and improved help patches
  • Calculate a camera position using Aruco
  • Calculate a camera position using SolvePnP
  • Estimate the pose of Aruco markers
  • Calibrate a projector and reproject
  • Calibrate a camera
Bug fixes for
  • EstimatePose
  • FindChessboardCornersSB
  • VideoIn nodes
  • VideoPlayer nodes
  • CalibrateCamera
  • Others
New nodes
  • VideoIn nodes with lower-level access to the device index, enabling the use of previously unsupported devices
General cleanup
Improved documentation
Moved beta OpenCV to DX11 transformation nodes to a separate document (\vvvv\nodes\vl\VVVV.OpenCV.vl)
ravazquez, Wednesday, Dec 16th 2020

Here is,

another addition to the series of things that took too long. But then, they also say it is never too late... VL shipped with OSC and TUIO nodes from the beginning but, frankly, they were a bit cumbersome to use. So here is a new take on working with those two ubiquitous protocols:

OSC

Receiving OSC messages

To receive OSC messages, place an OSCServer node and configure it with the IP and Port you want to listen on. Its Data Preview output will immediately show you whether it is receiving any OSC messages at all.

Then use OSCReceiver nodes to listen to specific addresses. Either specify the address manually or hit the "Learn" input to make the node listen for the address of the next OSC message it receives.

Note that the OSCReceiver is generic, meaning it'll connect to whatever datatype you want to receive. Supported typetags are:

  • i: Integer32, h: Integer64
  • f: Float32, d: Float64
  • s: String, c: Char
  • r: RGBA color
  • b: Blob (byte[])
  • T: true, F: false

In case of multiple floats, you can also directly receive them as vectors. And this works on spreads of the above types and even on tuples, in case you're receiving a message consisting of multiple different types.
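On the wire, these typetags are part of OSC's binary format: a null-padded address string, a comma-prefixed typetag string, then the arguments encoded big-endian. A minimal pure-Python sketch of that encoding, covering only the i, f, s and T/F tags from the list above; the function names are illustrative and not part of vvvv:

```python
import struct

def _pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def encode_osc_message(address: str, *args) -> bytes:
    """Encode one OSC message: padded address + typetag string + arguments."""
    typetags = ","
    payload = b""
    for a in args:
        if isinstance(a, bool):          # check bool before int (bool is an int subclass)
            typetags += "T" if a else "F"  # T/F carry no payload bytes
        elif isinstance(a, int):
            typetags += "i"
            payload += struct.pack(">i", a)  # big-endian Integer32
        elif isinstance(a, float):
            typetags += "f"
            payload += struct.pack(">f", a)  # big-endian Float32
        elif isinstance(a, str):
            typetags += "s"
            payload += _pad(a.encode())
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return _pad(address.encode()) + _pad(typetags.encode()) + payload
```

This is also why the generic receiver can work: the typetag string tells the decoder exactly which types to expect, argument by argument.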

Sending OSC messages

To send OSC messages, you first need an OSCClient, which you configure with a ServerIP and Port. Then you use SendMessage nodes to specify the OSC address and arguments to send. Again, note that the "Arguments" input is generic, so you can send any of the above types, spreads of those and even tuples combining different types!

By default, vvvv collects all the data you send and sends it out in one bundle per frame. For optimal usage of the UDP datagram size (depending on your network), you can even specify the maximum bundle size on the OSCClient node.
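That per-frame bundling corresponds to OSC's #bundle container: an 8-byte "#bundle" header, an 8-byte timetag (the value 1 means "process immediately"), then a size-prefixed list of messages. A sketch of the framing, assuming already-encoded messages as input (the function name is mine):

```python
import struct

def encode_osc_bundle(*elements: bytes) -> bytes:
    """Wrap already-encoded OSC messages into one bundle, marked 'immediately'."""
    out = b"#bundle\x00" + struct.pack(">Q", 1)    # header + timetag 1 = process now
    for el in elements:
        out += struct.pack(">i", len(el)) + el     # each element is size-prefixed
    return out
```

A maximum-bundle-size setting, as on the OSCClient node, simply controls how many such elements are packed before the datagram is flushed.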

These are the basics. There are a couple more things, which are demonstrated in the howto patches!

TUIO

Receiving TUIO data

To receive TUIO data, use a TUIOClient, which you configure with the IP and Port you want to listen on. The client directly returns spreads of cursors, objects and blobs that you can readily access.

Sending TUIO data

To send TUIO data, use a TUIOTracker node, which you configure with a ServerIP and Port. Then give it spreads of cursors, objects and blobs to send out.
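TUIO itself is just a convention on top of OSC: per tracker frame, each profile sends an "alive" message (current session ids), "set" messages (per-object state) and an "fseq" message (frame counter). A hedged sketch of the TUIO 1.1 2D cursor profile's frame structure, building message tuples rather than wire bytes; the function name is mine:

```python
def tuio_cursor_frame(frame_id, cursors):
    """Build the per-frame /tuio/2Dcur message list of the TUIO 1.1 cursor profile.

    cursors: dict of session_id -> (x, y), with coordinates normalized to [0..1].
    Velocities and acceleration are zeroed here for simplicity.
    """
    msgs = [("/tuio/2Dcur", "alive", *cursors.keys())]      # which cursors exist
    for sid, (x, y) in cursors.items():
        msgs.append(("/tuio/2Dcur", "set", sid, x, y, 0.0, 0.0, 0.0))
    msgs.append(("/tuio/2Dcur", "fseq", frame_id))          # frame sequence number
    return msgs
```

The TUIOClient/TUIOTracker nodes hide exactly this bookkeeping and hand you the cursors, objects and blobs directly.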


Available for testing now, in the latest 2020.3 previews!

joreg, Wednesday, Dec 9th 2020
