After NODE 2020, having seen all the wonderful things Stride and vvvv can do together, it was inevitable to fall head first into the adventure that has been bringing Stride and VL.OpenCV into a playful and seamless friendship.
I am happy to announce that as of version 1.2.0 of VL.OpenCV, you can effortlessly and painlessly:
Need to know where your camera is and what it's looking at based on an Aruco marker or a chessboard calibration pattern?
Say no more:
And now from the outside:
Bring 3D objects and animations into your image using Aruco markers to create augmented reality projects:
Remember this beauty? It helps you figure out the position and characteristics of your projector in your 3D scene.
Once you know where your projector and the spectator are, you only need to worry about the content. 3D projection mapping made easy!
Not bad huh?
So there you have it boys and girls, 3D computer vision based adventures for all! Head to your local nuget distributor and grab a copy while it's still hot.
A big thank you to motzi, gregsn and tebjan for their invaluable help as well as to many others who contributed one way or another.
And as always, please test and report.
Keep your cameras calibrated kids!
Another addition to the series of things that took too long. But then they also say that it is never too late... VL shipped with OSC and TUIO nodes from the beginning, but frankly, they were a bit cumbersome to use. So here is a new take on working with these two ubiquitous protocols:
To receive OSC messages, place an OSCServer node and configure it with the IP and port you want to listen on. Its Data Preview output immediately shows whether any OSC messages are being received at all.
Then use OSCReceiver nodes to listen to specific addresses. Either specify an address manually or hit the "Learn" input to make the node adopt the address of the next OSC message it receives.
Note that the OSCReceiver is generic, meaning it'll connect to whatever datatype you want to receive. Supported typetags are:
In case of multiple floats, you can also directly receive them as vectors. And this works on spreads of the above types and even on tuples, in case you're receiving a message consisting of multiple different types.
To send OSC messages you first need an OSCClient which you configure with a ServerIP and Port. Then you're using SendMessage nodes to specify the OSC address and arguments to send. Again note that the "Arguments" input is generic, so you can send any of the above types, spreads of those and even tuples combining different types!
By default, vvvv collects all the data you send and sends it out in bundles once per frame. For optimal use of the UDP datagram size (which depends on your network), you can even specify a maximum bundle size on the OSCClient node.
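For a feel of what actually travels over the wire: an OSC message is just the padded address string, a typetag string and the big-endian encoded arguments, and a bundle wraps size-prefixed messages behind a `#bundle` header and a timetag. A minimal sketch in Python, purely to illustrate the format (this is not vvvv's implementation, and node names above stay as they are):

```python
import struct

def _pad_str(s: str) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    b = s.encode("ascii")
    return b + b"\x00" * (4 - len(b) % 4)

def encode_message(address: str, *args) -> bytes:
    """Encode one OSC message: address, typetag string, big-endian arguments."""
    typetags, payload = ",", b""
    for a in args:
        if isinstance(a, bool):            # check bool before int (bool is an int subclass)
            typetags += "T" if a else "F"  # booleans carry no payload bytes
        elif isinstance(a, int):
            typetags += "i"
            payload += struct.pack(">i", a)
        elif isinstance(a, float):
            typetags += "f"
            payload += struct.pack(">f", a)
        elif isinstance(a, str):
            typetags += "s"
            payload += _pad_str(a)
        else:
            raise TypeError(f"unsupported argument type: {type(a)}")
    return _pad_str(address) + _pad_str(typetags) + payload

def encode_bundle(messages, timetag: int = 1) -> bytes:
    """Wrap messages in one OSC bundle; timetag 1 means 'immediately'."""
    out = _pad_str("#bundle") + struct.pack(">Q", timetag)
    for m in messages:
        out += struct.pack(">i", len(m)) + m  # each element is size-prefixed
    return out
```

A datagram built this way could be sent to a listening OSCServer with a plain UDP socket, e.g. `socket.sendto(encode_message("/live/fader", 0.5), ("127.0.0.1", 4444))` (address and port here are made up).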
These are the basics. There are a couple of more things which are demonstrated in the howto patches!
For receiving TUIO data you're using a TUIOClient which you configure to the IP and Port you want to listen on. The client already returns a spread of cursors, objects and blobs that you can readily access.
For sending TUIO data you're using a TUIOTracker node which you configure with a ServerIP and Port. Then you give it a spread of cursors, objects and blobs to send out.
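For context, TUIO rides on top of OSC: a TUIO 1.1 `/tuio/2Dcur` bundle carries "alive", "set" and "fseq" messages, and each "set" describes one cursor with its session id, normalized position, velocity and acceleration. A minimal sketch of interpreting an already-decoded "set" message in Python (illustrative only, not vvvv's implementation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cursor2D:
    session_id: int
    x: float      # normalized position, 0..1
    y: float
    vx: float     # velocity components
    vy: float
    accel: float  # motion acceleration

def parse_2dcur(args) -> Optional[Cursor2D]:
    # A /tuio/2Dcur bundle also contains "alive" (current session ids)
    # and "fseq" (frame sequence) messages; only "set" describes a cursor,
    # so everything else is ignored here for brevity.
    if not args or args[0] != "set":
        return None
    sid, x, y, vx, vy, accel = args[1:7]
    return Cursor2D(int(sid), x, y, vx, vy, accel)
```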
Available for testing now, in latest 2020.3 previews!
Previously on vvvv: vvvvhat happened in October 2020
A seemingly calm month, but it is boiling under the covvvvers: First, you'll notice that we continue to update the 2020.2 release with bugfixes. The latest release is vvvv gamma 2020.2.4.
Then, as mentioned previously, we're currently mostly focused on getting a stable 2020.3 out which will include VL.Stride, our shiny new 3d engine. Best of it: you can follow our daily progress by downloading the preview releases. Already comes with tons of help and demo patches. Give it a spin!
And finally done are the completely reworked, easy-to-use OSC and TUIO nodes, which will show up in one of the upcoming previews!
Two new ones:
A little teaser:
And some new works in progress:
That was it for November. Anything to add? Please do so in the comments!
Previously on vvvv: vvvvhat happened in August 2020
So once again, where were we...
If you haven't noticed yet, the latest previews for vvvv gamma now include VL.Stride, the fancy new 3d engine. We're quite happy with the feedback so far. Things mostly seem to work as expected. We're now focusing on making this preview into the first 2020.3 stable release including VL.Stride. But 3d is not all, we've also included a few other goodies in the 2020.3 branch, which are summarized in a separate blog post with the juicy title: vvvv - The Tool.
Quite a few new works in progress:
That was it for September and October. Anything to add? Please do so in the comments!
The biggest NODE so far, in terms of reach. At least if you believe the viewing numbers on the videos of the daily streams. This time the whole world was able to participate, not only a handful of privileged people able to travel to Frankfurt. What an undertaking to run a pop-up TV station for 7 days alongside a 2-track, 9-hours-a-day workshop program...
On behalf of the whole team that made this edition possible, vvvv wants to thank david and Jeanne Charlotte Vogt, directors of NODE20 - Second Nature, for pulling the strings. Once again very well done, chapeau!
The team was huge and a lot of different things happened over the course of this week, too numerous to recap here. So in this blogpost I want to particularly summarize the vvvv focused parts and highlight the members of the vvvv community who helped make NODE20 possible.
You should watch them all: 7 days of quality panels and discussions around this year's topic "Second Nature". But then, as promised, the following is a listing of the more vvvv-related shows for your viewing pleasure:
And of course to every single one of the 26 workshop hosts and co-hosts who took the time to bring their knowledge to all of us: andresc4, Anna Meik, antokhio, baxtan, domj, dottore, elias, everyoneishappy, gregsn, Gene Kogan, hayden, idwyr, joreg, jule, kleinkariert, lasal, Maria Heine, Marian Dziubiak, motzi, ravazquez, sebl, sunep, Takuma, tonfilm, untone, vux.
NODE is a community effort. Everyone is chipping in what they can. So finally I want to list a few companies without whose continued support in the form of material or human resources, NODE20 would not have been possible:
vvvv takes a deep bow in front of everyone mentioned. I sincerely hope I didn't forget anyone's contribution but am well aware that this is not unlikely. So in case I missed someone, please let me know so I can add the info here!
After NODE is before the next NODE.
Back to work!
The whole team and I are slowly recovering. NODE was a blast and we can be incredibly proud to have made it happen under the 2020 conditions. I do believe the hybrid approach has real future potential, and heads are already spinning with ideas for what the next NODE could look like.
To get some structured feedback we have set up this survey for all participants:
You can help make NODE better by filling it out. Thank you so much!
Many have asked us for the workshop recordings. And here is some good news:
Here is the story behind the decision: When we announced the festival in July/August, it was clear that we had to give ticket owners exclusive access to the recordings afterwards to actually get them on board. Otherwise, we assumed, many might simply have waited for a public recording after the festival was over, and the festival would not have worked at all.
Now, after the festival, it feels a bit unnatural to hide the recordings from curious newcomers. Why not ride the wave of attention we created? Selling the recordings became an option. It would also help close a financial gap in the overall festival budget. After some talks with the hosts about how to handle this in a fair way, we came to the conclusion that we will split the income between the instructors and the festival. This feels natural, as the institute's idea is to help the community sustain itself and help instructors earn revenue for their educational work. The income does not go to the vvvv group but to all community instructors.
Love goes to all of the instructors, organizers and contributors. We are all deeply thankful for their effort and contribution: ravel, sebescudie, Rayment, katzenfresser and Ben Schiek, andresc4, Anna Meik, antokhio, baxtan, domj, dottore, elias, everyoneishappy, gregsn, Gene Kogan, hayden, idwyr, joreg, jule, kleinkariert, lasal, Maria Heine, Marian Dziubiak, motzi, ravazquez, sebl, sunep, Takuma, tonfilm, untone, vux, readme, bjoern, kopffarben and more vvvv people in the program.
Thank you !
David for The NODE Institute and Festival Team
The long wait is over!
vvvv gamma 2020.3 public previews now include VL.Stride, the new 3d rendering library, based on the open-source Stride 3d engine. You be the judge, but spoiler: this is rather huge!
Massive thanks go out to all early accessors who helped us uncover and fix countless bugs that you no longer have to run into. So this is also on their behalf. You're welcome!
All of the basics are now in place. Find your favorite among these:
To give you an idea, here is a random collection of screenshots of what early accessors have created with this already.
To give you a heads-up, here are things you might expect already but are yet to come:
And then some more, but the above should be the most obvious ones you'll stumble upon.
Open the Helpbrowser (F1) and check out the explanations, howtos and examples. Remember the preview status, i.e. they are not yet in their best shape. But they should help you find your way.
And if you really got nothing better to do in the week of October 2nd to 8th, then consider joining us for NODE20 where we have the following series of workshops dedicated to getting you started with VL.Stride:
A couple of people believed in the development of VL.Stride from the beginning and substantially supported its development. We bow before you:
Following tests from a few months ago regarding publishing your shiny VL nugets with Github Actions, we now have a dedicated action ready to be used!
For more information on nugets, please refer to this section of the Gray Book.
Github actions are small scripts with a specific purpose, allowing you to automate tasks on your repos. They are the building blocks of what's called a workflow: you chain several actions one after the other in your own small script and decide under which condition the workflow is triggered (a new commit on master, a new tag, etc.).
There are already tons of actions allowing you to do all sorts of things, from creating issues to running unit tests, and even making phone calls with a predefined text!
This action allows you to easily do the following tasks:
In other words, you just have to push your code/patches and nuspec to github, and the script takes care of bringing it to nuget.org for you.
You can find the list of input parameters the action expects here.
To get started with workflows, head over to the Github documentation.
The action can adapt to many different scenarios. Let's cover three cases so you get an idea of how the action works, and how to adapt it to your needs.
So your plugin solely consists of one (or many) VL documents and some help patches, plus your nuspec file that describes how your package will be structured.
Here, we are just using the nuspec and nuget-key inputs of the action.
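Put together, a minimal workflow for this case might look roughly like the sketch below. The action reference, nuspec path and secret name are placeholders for illustration; check the action's repository for its exact name and current version:

```
name: Publish VL Nuget
on:
  push:
    branches: [ master ]    # publish whenever master changes
jobs:
  publish:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
      - uses: vvvv/PublishVLNuget@1.0          # hypothetical ref, see the action's repo
        with:
          nuspec: deployment/VL.MyLib.nuspec   # placeholder path to your nuspec
          nuget-key: ${{ secrets.NUGET_KEY }}  # a secret you create in your repo settings
```

The API key comes from your nuget.org account and is stored as a repository secret, so it never appears in the workflow file itself.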
Your csproj file can also describe how your nuget will be packed. In that case, simply omit the nuspec input. Note that if you provide a nuspec file anyway, it will take priority over your csproj.
By default, the action will push your package to nuget.org. You can simply use the nuget-feed input to push to a different feed.
The icon must be downloaded to an existing folder in your repo. We suggest you simply download it to its root:
Here, we ask the Github Action to download the icon from our URL and place it at the root of the repo.
Then, in the file section, your nuspec file must reference it from where the action will download it (src attribute) and place it wherever you like (target attribute), making sure target matches where the metadata section expects it.
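For illustration, the relevant parts of such a nuspec might look like this (id, version and paths are placeholders):

```
<?xml version="1.0" encoding="utf-8"?>
<package>
  <metadata>
    <id>VL.MyLib</id>
    <version>1.0.0</version>
    <authors>you</authors>
    <description>A VL library.</description>
    <icon>icon.png</icon>             <!-- where the metadata section expects the icon -->
  </metadata>
  <files>
    <file src="icon.png" target="" /> <!-- src: where the action downloads it; target: package root -->
    <file src="*.vl" target="" />
  </files>
</package>
```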
You can set up an icon for your project inside Visual Studio. The tricky part here is that you'll have to specify a path to a file that does not exist yet, since the action will take care of downloading it later on. This can feel weird, since Visual Studio's UI gives you a Browse button to pick a file. Simply fill in the path manually to match the icon-src property of your workflow file.
For instance, your workflow file would look like this:
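As a sketch, the publishing step could read as follows. Only icon-src, nuget-key and nuspec/nuget-feed are input names mentioned here; the action reference and the other input names are assumptions, so check the action's documentation:

```
      - uses: vvvv/PublishVLNuget@1.0   # hypothetical ref
        with:
          csproj: src/VL.MyLib.csproj   # assumed input name for the csproj case
          icon-src: https://raw.githubusercontent.com/username/VL.MyLib/master/icon.png
          icon-dst: ./icon.png          # assumed input: where the icon gets placed
          nuget-key: ${{ secrets.NUGET_KEY }}
```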
and your VS package settings:
Thanks for reading, hope you'll enjoy using this one! If you are stuck or want more precision, don't hesitate to shout in the comments or in the forums.
I'd like to give you an update on the toolkit that vvvv has always been. While vvvv beta can be described as a dynamic system, mutating while you mold your patches, vvvv gamma and its workhorse VL are of a different kind. With VL we embraced features like
In short, we embraced robust software developing strategies that at first seem to contradict the idea of a playful development toolkit that allows you to mold your app. We went for compiled patches, running, and hot-swapping them while you are building them.
But we envisioned vvvv to be both
While my last blog post was about the language, let's focus on the toolkit this time.
Let's have a look at some features that allow you to interact with the VL runtime, the system that runs your patch while allowing you to edit it. These features empower you to enrich the patching experience. We understand that these improvements need to "trickle up" into the libraries and only then will have an effect for all users.
So the following is probably mostly interesting for advanced users and library developers.
You can now react to a selection within the patch editor. The more libraries do this, the more playful the environment gets. We still have to figure out all the use cases, but here is a list of what's possible already:
And there is more:
You can get a Live Element for a certain Pin or Pad. This is useful when you want to always inspect a specific pin or pad of some patch, which can be helpful for debugging.
When a Skia Renderer is your active window, Ctrl-^ lets you jump to the patch in which it is used. This is handy when you've opened a bunch of help patches and want to see which help patch is responsible for the output.
You can use the node ShowPatchOfNode to do the same trick.
Here you can see a custom tooltip for a user patched type "Person".
You can now patch your own tooltip with RegisterViewer. This way the patching experience will be so much more fun. We're in the process of adding more and more viewers for our types.
Up to now, we had
And now we introduce to you:
You can try it yourself by using the Warn or the Warn (Reactive) node.
The warning will not only show up on the Warn node, but also on the applications of your patch.
Sometimes it's just convenient to be able to send data from one patch to another without feeding it via pins. We now have send and receive nodes, as known from beta.
Some libraries focus on a simple idea:
Let the user build an object graph that describes what they want in a declarative manner, and the library will do the hard work of following that description.
Examples for this approach are
VL.Stride and VL.Elementa have in common that they focus on a very specific type of object graph: a tree made out of entities and components.
Libraries like these can now talk to the user and ensure they build not just any kind of graph, but a tree-shaped one (where one child doesn't have many parents).
VL.Stride uses TreeNodeParentManagers, Warn nodes and S&R nodes internally to deliver this feature:
You'll very soon be able to inspect those patches.
Help patches to all those topics will show up in the CoreLib API section (at the bottom of the listing).
We hope you'll enjoy these ways of integrating with the development environment.
Thank you and we'll see you soon!
Are you teaching or studying vvvv in an educational institution? Want to join NODE20 with a group of students? Please get in touch, we want to offer you a discount!
NODE20 features 25+ online vvvv workshops on various topics covering the needs of beginners and advanced users. This means the week of October 2nd to 8th will be a very good moment to divvvve deep. Besides, there'll also be a rather fine conference program you'll not want to miss.
You may say we're biased, but we believe that vvvv is one of the more suitable ways to get people in touch with topics like creative coding, generative design, computer graphics, interaction design, data visualization, computer vision, physical computing, machine learning and similar. This is even more true for the all-new vvvv gamma. Why? Here is a list of pros and cons with a focus on use in education:
Have more? Let us know in the comments.
Hope to see you at NODE!