This was long requested and it's finally here! The latest VVVV.OpenVR can use Vive trackers without an HMD (head-mounted display). There is a dedicated pose output on the Poser (OpenVR) node and you can request the serial numbers of all connected devices.
Here is how to get started with high-performance 6DOF positional tracking for as little as $230. The minimum hardware requirement is one base station and one tracker, although two base stations are recommended for much better tracking stability.
In order to get the trackers running without an HMD you need to do the following steps:
Find this file on your drive:
Change "enable" to "true".
Then open this file:
Add the following entries to the "steamvr" section:
"forcedDriver": "null", "activateMultipleDrivers": "true",
SteamDirectory is usually C:\Program Files (x86)\Steam.
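Putting the edit together, the "steamvr" section of your settings file (typically found at SteamDirectory\config\steamvr.vrsettings, though the exact layout of the file on your machine may differ) would contain the two entries mentioned above, alongside whatever keys are already there:

```json
{
  "steamvr": {
    "forcedDriver": "null",
    "activateMultipleDrivers": "true"
  }
}
```

Keep any existing keys in the section as they are; only add the two new ones.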
Also make sure to disable "SteamVR Home" on startup. Otherwise it will try to render into the null HMD and consume 100% of one CPU core:
If SteamVR was running, close and restart it.
When SteamVR restarts, you can connect a tracker or controller without the HMD. Follow these instructions to pair the trackers ("Pair Tracker" is now "Pair Controller"): Pairing Vive Tracker
SteamVR should then look similar to this:
Note: The red “Not Ready” text can appear occasionally but that should be no problem if you are using the null driver.
If you don't run the calibration process, the first Vive Lighthouse base station found will be the origin of the tracking space. If you can't live with that, you need to provide your own calibration matrix in vvvv and multiply it with the pose matrices coming out of the Poser node.
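That calibration step is just a matrix multiply. Since vvvv patches can't be shown as text, here is a NumPy sketch of the idea; the matrix values are made up for illustration, and we assume the row-vector convention vvvv uses, where the translation sits in the bottom row:

```python
import numpy as np

# Hypothetical calibration: maps base-station space into our desired
# room space. Here: shift the origin by (2, 0, 1).
calibration = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [2.0, 0.0, 1.0, 1.0],   # translation in the last row
])

def calibrate(pose: np.ndarray) -> np.ndarray:
    """Apply the calibration to a 4x4 pose matrix from the Poser node."""
    return pose @ calibration

# A tracker sitting at the base-station origin ends up at (2, 0, 1):
identity_pose = np.eye(4)
print(calibrate(identity_pose)[3, :3])  # -> [2. 0. 1.]
```

In the patch you would do the same with a Multiply (Transform) node between the Poser output and whatever consumes the pose.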
If you have Vive controllers you can run the room setup normally (no need for the HMD to be connected if you use the null driver).
You can also use the tracker as a controller for calibration, but you need to connect a simple circuit to the pogo pins to be able to activate the 'trigger' button during the calibration process.
More detailed developer info on the pogo pins can be found here: Vive Tracker For Developers
You can download the new OpenVR pack here: VVVV.OpenVR
Open the demo patch 02_TrackersOnlyDemo.v4p in the VVVV.OpenVR\girlpower folder and enjoy tracking!
Please welcome beta38.1,
which basically only fixes one bug that was introduced with the beta38 release and prevented certain VL patches from loading.
Sorry about that! Heads will roll in quality management, guaranteed!
Apart from that, you get some new swizzle nodes as well as more help texts for nodes in our core library. Also, the VL splash screen is no longer top-most.
And if you haven't already, now is a good time to test-drive our two "prerelease" packages, both of which come with plenty of examples to explore:
And for the very brave there is a lot of good stuff ready for testing in the work-in-progress section of the forum.
That's about it,
Who maarja, id144
When Wed, Dec 12th 2018 - 12:00 until Thu, Dec 13th 2018 - 17:00
Where Perte de Signal, 5445 avenue De Gaspé, Montreal, Canada
Photo by Mario Hernandez
Supported using public funding by Slovak Arts Council
Welcome back to the second sneak peek into our adventures with xenko. Together with MLF we've been busy patching the first project done entirely with vl and xenko: Ocean of Air. So far the combination works superbly, and you can experience it for yourself until the 20th of January if you are in the London area. Alongside the project we explored the xenko code base, and now that it's live we can give you some more insights into our research.
In the last blog post we used predefined entities to set up a little scene graph. This time we will dig a little bit deeper.
Having primitive objects like Box, Sphere, Plane etc. is nice for casual patching and quickly visualizing something, but you will need more sophisticated objects for the final output of your project. What you want is to design your own objects that are specific to your use case.
Luckily, game engines have quite similar requirements and came up with a good solution: they call it entity/component/system, ECS for short, which is also the latest hype in Unity. Xenko has a good documentation page if you want to go into detail. But for now let's stay on topic and keep two things in mind:
We found two appealing ways to create custom entities that can also be combined with each other in any way that suits you. You can either patch them or design them in xenko's game studio using their prefab workflow. Here is a simple example for both cases:
Let's look inside the BoxEntity from the last blog post:
As you can see, it adds a BoxComponent to the entity on Create (white) and exposes parameters like Color, Transformation, Enabled etc. as input pins on Update (gray). This is more or less an arbitrary choice of how the BoxEntity is designed and it will probably change a bit before it becomes official. The patch is also an example of how vl's process nodes work nicely together with the entity component model. Each instance of an entity or a component can be represented by a process node and connected with each other in an understandable way.
In the patch we saw the EmptyEntity node, which is a general entity that contains nothing more than a TransformComponent, hence the transform input pin. To make something useful with it, we add more components (e.g. model, material, audio, physics etc.). There are many of them and you can combine them as suits your use case. The big advantage here is that the components are able to interact with each other via the common parent entity, and that the scene graph system automatically processes them in an optimized way. This is where it gets interesting!
Let's say we want the box from the patch above to emit a sound from its current position. In order to do that we only have to add a SpatialAudioComponent to the same entity as the box component:
Since the SpatialAudioComponent and the BoxComponent have a common parent entity they will share the same transformation. Also, if an entity has child entities, the children get transformed by the parent. We could use that feature to add an AxisEntity to our custom entity:
Again, there is no need to connect the input transformation to the AxisEntity since it gets added as a child to the main entity and gets transformed automatically.
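Since VL patches can't be shown as text here, the entity/component idea from the last few paragraphs can be sketched in plain Python. All class and method names below are illustrative, not Xenko's actual API; the point is only the structure: components interact via their shared parent entity.

```python
class Component:
    """Base class: every component holds a back-reference to its entity."""
    def __init__(self):
        self.entity = None

class TransformComponent(Component):
    def __init__(self, position=(0.0, 0.0, 0.0)):
        super().__init__()
        self.position = position

class BoxComponent(Component):
    pass  # would hold mesh/color data in a real engine

class SpatialAudioComponent(Component):
    def world_position(self):
        # Components interact via their common parent entity:
        return self.entity.transform.position

class Entity:
    """An entity is little more than a bag of components plus a transform."""
    def __init__(self, position=(0.0, 0.0, 0.0)):
        self.transform = TransformComponent(position)
        self.transform.entity = self
        self.components = [self.transform]

    def add(self, component):
        component.entity = self
        self.components.append(component)
        return component

# A box that emits sound from its own position:
box = Entity(position=(1.0, 0.0, 2.0))
box.add(BoxComponent())
audio = box.add(SpatialAudioComponent())
print(audio.world_position())  # -> (1.0, 0.0, 2.0)
```

Moving the entity's transform automatically moves the sound source along with the box, which is exactly the behavior described above.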
Here is what a little scene could look like:
Let's add a second one and let them rotate in the scene to hear the spatial audio effect. Aaaaaand action! (works best with headphones):
There is also a super easy way to design custom entities in xenko's game studio and use them in your patch. Suppose we have a 3d model with animation and skinning imported and edited in game studio. All we have to do now is to create a prefab from it and give it a meaningful name:
Learn more about xenko's prefab workflow here. Once we have that it's as simple as this to use it in your scene:
And finally we will start the walk animation:
Starting the animation is also patched in this case, but let's save that one for another post.
The entity component model of xenko works very well together with vl's process node feature. VL's automatic recompile and instant stateful hot reload let you dynamically combine and configure entities and the scene graph in real-time while the application is running. You can combine different workflows with each other: simple primitives, custom patched entities or imported prefabs. There is no right or wrong, just build up the scene in a way that suits your way of thinking and the requirements of your project.
We still have only scratched the surface here; there is much more to come.
Who id144, dvj_jenda
When Sat, Dec 15th 2018 - 10:30 until Sun, Dec 16th 2018 - 17:30
Where National Gallery Prague – Trade Fair Palace, Dukelských hrdinů 47, 170 00 Praha 7- Holešovice, Czech Republic
<WORKSHOP> #8 w/ Andrej Boleslavský
➖ Mixing Reality using VVVV toolkit ➖
The workshop is intended for artists, designers and developers who want to get started with VVVV, the platform for visual programming used for real-time motion graphics, video, audio and even VR.
Andrej will introduce the basic concepts of VVVV, an environment designed for creative use which allows fast and flexible runtime editing without syntax errors. If you have never tried programming - here called patching - you will love VVVV. If you have tried programming before, you will love it even more.
➖ Instructions for participants:
Andrej Boleslavský (SK/CZ) is an independent digital artist purposing technology in the fields of new media art, virtual reality, light installations and physical computing. His work also maintains a strong fascination with the entanglement of nature and technology. In his collaborative works with digital artist Mária Júdová he explores the boundary between physical and digital. He has developed many interactive installations and lectured on open source and creative coding tools. In addition to his work he is actively involved as a technologist for other artists and interaction designers. VVVV is his tool of choice since 2008.
➖15. - 16. 12. 2018
10.30 - 17.30 – Veletržní palác, National Gallery in Prague - Dukelských hrdinů 47
➖ entry: 1700 CZK / 1300 CZK for students
➖ reserve your seat at email@example.com
➖ capacity is limited
➖ please note: workshops will be held in English
In his short artist talk, Andrej Boleslavský will tell us more about his creative work as an independent digital artist purposing technology in the fields of new media art, virtual reality, light installations and physical computing. He will also walk us through the latest projects he has been working on.
15. 12. - 11:00 - 11:45 – Veletržní palác, National Gallery in Prague
// FREE ENTRY
MORE --> INPUT #8 Artist talk w/ Andrej Boleslavský
// An intensive series of new media workshops and artist talks presented by notable artists focused on the intersection of art & technology.
// Experience the artists through different perspectives as lecturers, workshop leaders and their new media practice (performance, installation, video art)
// more info on www.lunchmeat.cz/input
// organised by Lunchmeat z.s
// co-funded by Ministry of Culture, Czech Republic
// supported by Prague College
previously on vvvv: vvvvhat happened in October 2018
one week to go until the opening of We live in an Ocean of Air. normally you wouldn't have to care, but in this special case it connects to you through this blogpost: it is the first fully vl + Xenko project we've helped to realize. if you're in london between December 7th and January 20th, go see it, it is good, promised! if not, let's hope that everything goes well, because then the thing should go on tour and maybe come a bit closer to you.. here is a little teaser:
but that is not all:
next up: we're going to evaluate where we are with vl-standalone and xenko after the above-mentioned project, and we should have an update for you by the end of this year. and everybody yayy.
looking for a job where it all once started? here are two opportunities at http://meso.design
several works in progress announced:
and something in chinese:
that was it for november. anything to add? please do so in the comments!
welcome the first update to our beloved 2D graphics library.
Since this summer's release quite a few things have happened.
The main focus was on covering every single aspect of Skia's Paint (which defines how everything looks when rendered) and cleaning it up.
We would say it is now complete.
We've also introduced some special gems like Masks, Precompositions and Ellipsis, plus some more examples.
Never heard of Skia before? Check Skia on Wikipedia.
You were already able to clip layers by rectangles or paths (ClipRect, ClipPath); now you can use any layer as a mask (as you know it from Photoshop), be it an image or a very complicated layer pipeline. Welcome the Mask. It comes in two flavors: one uses a layer as a mask, the other one just an image. The node has useful helpers which let you see what the mask looks like and where it is applied.
While researching the masks we stumbled upon Skia's superpower: we call it precomposing. Layers can be precomposed (leaving the canvas unchanged) and then applied (grouped) with other layers. It's like having an extra render pass that works like a layer.
So now you've got two options: you can blur every single particle on its own (by setting SetMaskFilter > Blur) or prepare all of them (precompose) and then blur the whole scene at once (by setting SetImageFilter > Blur).
In the screenshot above:
Left - every single circle is blurred on its own, so the background comes into play.
Right - the whole precomposition is blurred.
Then we have these boring Ellipsis nod... nodes. They clip your text (left, right or center) by the number of letters or by the width in units. Like this:
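The screenshot shows the nodes in action. As a rough textual analogue of what "clipping by the number of letters" means, here is a hypothetical Python sketch (not the Skia nodes' actual implementation, and it ignores the width-in-units variant):

```python
def ellipsize(text: str, max_chars: int, mode: str = "right") -> str:
    """Clip text to max_chars characters, inserting '...' on the
    right, left or in the center. Assumes max_chars >= 4."""
    ell = "..."
    if len(text) <= max_chars:
        return text
    keep = max_chars - len(ell)  # characters of the original we keep
    if mode == "right":
        return text[:keep] + ell
    if mode == "left":
        return ell + text[-keep:]
    # center: split the kept characters between head and tail
    head = (keep + 1) // 2
    tail = keep - head
    return text[:head] + ell + text[-tail:]

print(ellipsize("Hello World", 8))                  # -> Hello...
print(ellipsize("Hello World", 8, mode="left"))     # -> ...World
print(ellipsize("Hello World", 8, mode="center"))   # -> Hel...ld
```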
We've cleaned up the Paint a lot and now you can set or get any of its properties. Don't forget to turn on the "Advanced" filter of the NodeBrowser (like in the screenshot, or just press the TAB key) to get the full power. SetFakeBoldText, anyone?
In the Paint category:
Don't forget to check the /examples folder of the package (your-vvvv-folder/lib/packs/VL.Skia.xxxx/).
There are also some updates to the /examples/demos, like Slideshow.vl, which lets you click through your-very-big-images while asynchronously preloading them in the background.
1. Install the latest vvvv_50beta38.1 (older versions are not supported)
2. In vvvv, middleclick > Show VL
3. In VL, go to: Dependencies > Manage Nugets > Commandline and type:
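Assuming the package id is VL.Skia (as suggested by the pack folder path mentioned above), the command would be:

```shell
nuget install VL.Skia
```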
When Wed, Nov 28th 2018 - 19:30 until Wed, Nov 28th 2018 - 23:00
Where Spektrum Berlin, Bürknerstraße 12, 12047 Berlin, Germany
It is happening: vvvv berlin meetup #6
As always, feel free to bring your project/notebook/questions or whatever you want to share with the community. We have space and time for spontaneous presentations!
There will be a bar serving us drinks. Thanks go to Lieke and Alfredo, who run the fantastic space Spektrum Berlin.
If you feel like it, please RSVP on our Getogether page!