Here we go!
We have a candidate for our upcoming release:
This is going to be the first stable release including the huge shiny new 3d rendering library we created based on the Stride Engine!
For more details of what's new, please consult the Change Log.
So please test and report your findings. If we don't find any complete show-stoppers within the next few days, this is going to be it!
Here is,
another addition to the series of things that took too long. But then they also say that it is never too late... VL was shipping with OSC and TUIO nodes from the beginning, but frankly, they were a bit cumbersome to use. So here is a new take on working with those two ubiquitous protocols:
To receive OSC messages you need to place an OSCServer node which you configure to the IP and Port you want to listen on. Immediately it will show you if it is receiving OSC messages at all on the Data Preview output.
Then use OSCReceiver nodes to listen to specific addresses. Either specify the address manually or hit the "Learn" input to make the node listen to the address of the first OSC message it then receives.
Note that the OSCReceiver is generic, meaning it'll connect to whatever datatype you want to receive. Supported typetags are:
In case of multiple floats, you can also directly receive them as vectors. And this works on spreads of the above types and even on tuples, in case you're receiving a message consisting of multiple different types.
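If you're curious what those nodes do on the wire, here is a minimal sketch of the receiving side written with the python-osc package rather than the VL nodes, just to illustrate the underlying protocol. The port 4444 and the address "/example/value" are made-up placeholders.

# Minimal OSC receiver sketch using python-osc (pip install python-osc).
# Port and address are placeholders, not defaults of the vvvv nodes.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_value(address, *args):
    # args holds the decoded arguments of the message, e.g. one or more floats
    print(address, args)

dispatcher = Dispatcher()
dispatcher.map("/example/value", on_value)  # comparable to an OSCReceiver bound to one address
dispatcher.set_default_handler(print)       # everything else, comparable to the Data Preview output

server = BlockingOSCUDPServer(("127.0.0.1", 4444), dispatcher)
server.serve_forever()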
To send OSC messages you first need an OSCClient which you configure with a ServerIP and Port. Then you're using SendMessage nodes to specify the OSC address and arguments to send. Again note that the "Arguments" input is generic, so you can send any of the above types, spreads of those and even tuples combining different types!
By default, vvvv collects all the data you send and sends it out in bundles, once per frame. For optimal usage of the UDP datagram size (depending on your network) you can even specify the maximum bundle size on the OSCClient node.
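For comparison, the sending side could look roughly like this in python-osc; again, IP, port and addresses are placeholders, and the explicit bundle imitates the per-frame bundling that vvvv does for you automatically.

# Sketch of sending OSC messages and bundles with python-osc.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.osc_bundle_builder import OscBundleBuilder, IMMEDIATELY
from pythonosc.osc_message_builder import OscMessageBuilder

client = SimpleUDPClient("127.0.0.1", 4444)   # comparable to an OSCClient node

client.send_message("/example/value", 0.5)    # comparable to a single SendMessage node

# several messages collected into one bundle, like one frame's worth of data in vvvv
bundle = OscBundleBuilder(IMMEDIATELY)
for address, value in [("/example/x", 1.0), ("/example/y", 2.0)]:
    msg = OscMessageBuilder(address=address)
    msg.add_arg(value)
    bundle.add_content(msg.build())
client.send(bundle.build())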
These are the basics. There are a couple of more things which are demonstrated in the howto patches!
For receiving TUIO data you're using a TUIOClient which you configure to the IP and Port you want to listen on. The client already returns a spread of cursors, objects and blobs that you can readily access.
For sending TUIO data you're using a TUIOTracker node which you configure with a ServerIP and Port. Then you give it a spread of cursors, objects and blobs to send out.
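Since TUIO is just a convention on top of OSC, here is an illustrative sketch of what a single cursor frame of the TUIO 1.1 /tuio/2Dcur profile looks like on the wire, again using python-osc rather than the TUIOTracker node; port, session id and coordinates are example values.

# One TUIO 1.1 cursor frame (/tuio/2Dcur profile) sent as an OSC bundle.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.osc_bundle_builder import OscBundleBuilder, IMMEDIATELY
from pythonosc.osc_message_builder import OscMessageBuilder

def tuio_msg(*args):
    m = OscMessageBuilder(address="/tuio/2Dcur")
    for a in args:
        m.add_arg(a)
    return m.build()

bundle = OscBundleBuilder(IMMEDIATELY)
bundle.add_content(tuio_msg("alive", 1))                         # session ids currently alive
bundle.add_content(tuio_msg("set", 1, 0.5, 0.5, 0.0, 0.0, 0.0))  # id, x, y, velocity x/y, acceleration
bundle.add_content(tuio_msg("fseq", 42))                         # frame sequence number
SimpleUDPClient("127.0.0.1", 3333).send(bundle.build())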
Available for testing now, in latest 2020.3 previews!
Hello everyone,
I'd like to give you an update on the toolkit that vvvv has always been. While vvvv beta can be described as a dynamic system, mutating while you mold your patches, vvvv gamma and its workhorse VL are of a different kind. With VL we embraced features like
In short, we embraced robust software development strategies that at first seem to contradict the idea of a playful development toolkit that allows you to mold your app. We went for compiled patches, running and hot-swapping them while you are building them.
But we envisioned vvvv to be both
While my last blog post was about the language, let's focus on the toolkit this time.
Let's have a look at some features that allow you to interact with the VL runtime, the system that runs your patch while allowing you to edit it. These features empower you to enrich the patching experience. We understand that these improvements need to "trickle up" into the libraries before they have an effect for all users.
So the following is probably mostly interesting for advanced users and library developers.
You can now react to a selection within the patch editor. The more libraries do this, the more playful the environment gets. We still have to figure out all the use cases, but here is a list of what's possible already:
And there is more:
You can get a Live Element for a certain Pin or Pad.
This is useful for cases where you always want to inspect a specific pin or pad of some patch and can be helpful for debugging.
When a Skia Renderer is your active window, Ctrl-^ lets you jump to the patch in which it is used. This is handy when you've opened a bunch of help patches and want to see which help patch is responsible for the output.
You can use the node ShowPatchOfNode to do the same trick.
Here you can see a custom tooltip for a user patched type "Person".
You can now patch your own tooltip with RegisterViewer. This way the patching experience will be so much more fun. We're in the process of adding more and more viewers for our types.
Up to now, we had
And now we introduce to you:
You can try it yourself by using the Warn or the Warn (Reactive) node.
The warning will not only show up on the Warn node, but also on the applications of your patch.
Sometimes it's just convenient to be able to send data from one patch to another without the need of feeding the data via pins. We now have send and receive nodes, as known from beta.
Features:
Some libraries focus on a simple idea:
Let the user build an object graph that describes what they want in a declarative manner, and the library will do the hard work of following that description.
Examples of this approach are
VL.Stride and VL.Elementa have in common that they focus on one specific type of object graph: a tree made out of entities and components.
Libraries like these can now talk to the user and enforce that they don't build just any kind of graph, but a tree-shaped graph (where a child doesn't have multiple parents).
VL.Stride uses TreeNodeParentManagers, Warn nodes and S&R nodes internally to deliver this feature.
You'll very soon be able to inspect those patches.
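To make that constraint concrete, here is a small conceptual sketch in Python (not the actual VL.Stride API) of what such a parent manager has to enforce: a child may have at most one parent, and attaching it to a second parent raises a warning instead of silently turning the tree into a general graph.

# Conceptual sketch of the single-parent rule a tree-node parent manager enforces.
import warnings

class TreeNode:
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = []

    def attach(self, child):
        # a child with two parents would no longer form a tree
        if child.parent is not None and child.parent is not self:
            warnings.warn(f"'{child.name}' already has parent '{child.parent.name}', "
                          f"ignoring additional parent '{self.name}'")
            return
        child.parent = self
        if child not in self.children:
            self.children.append(child)

scene = TreeNode("Scene")
camera = TreeNode("Camera")
scene.attach(camera)
TreeNode("OtherEntity").attach(camera)  # warns instead of creating a second parent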
Help patches to all those topics will show up in the CoreLib API section (at the bottom of the listing).
We hope you'll enjoy these ways of integrating with the development environment.
Thank you and we'll see you soon!
yours devvvvs
Did you ever wonder what the first things were that the cool kids in the VL.Stride EarlyAccess program created with the new 3d rendering engine for vvvv gamma?
It's been only a few weeks, but stunning pixel combinations got posted into our early access chat room.
And we collected them in a gallery for you:
A big THANK YOU to everyone involved!
We can't wait to see what you will create with it. And don't miss the workshops at NODE20 if you want to learn how to use it.
We are looking forward to the public release as much as you do,
yours devvvvs
In a quest to get more basic things working out of the box with VL (i.e. using vvvv beta >= 40 or the all-new vvvv gamma), we set out to support your favorite depth cameras. Most of the cameras and their APIs share basically the same features as a baseline, and then some of them have a few extra features. This means that using them in vvvv works mostly the same for all of them.
You have the main device node, to which you connect ColorImage, DepthImage, PointCloud, Skeleton, ... nodes to get the desired info out of them. See the help patches coming with the packs for details.
Here is a list and comparison of all available depth cameras with links to the respective packs on nuget.org. To learn how to use nuget packs with vvvv please watch HowTo use Nugets.
The original Microsoft Kinect for the XBOX 360, or the Kinect for Windows version that was released a bit later.
Get the VL pack on nuget.org.
Created with support by chaupow.
Pros & Cons
The second version of the Microsoft Kinect.
Get the VL pack on nuget.org.
Created with support by ravazquez.
Pros & Cons
The third version: the Azure Kinect.
Get the VL pack on nuget.org.
Get the VL pack for skeleton tracking on nuget.org.
Pros & Cons
Orbbec Astra.
Get the VL pack on nuget.org.
Pros & Cons
Intel RealSense.
Get the VL pack on nuget.org.
Pros & Cons
Nuitrack is a piece of software that works with all of the above cameras and provides skeleton, hand and face tracking.
Get the VL pack on nuget.org.
Created with support by ravazquez.
Pros & Cons
The Leap Motion Controller device provides hand and finger tracking.
Get the VL pack on nuget.org.
Pros & Cons
Please help us improve this list of pros and cons. If you know any others or disagree with some of those mentioned, please add them in the comments! This could eventually grow into a page of The Gray Book.
Evvvveryone,
is happy that we're finally looking at a candidate for beta40, which will ship with the latest and, may I say, greatest integration of VL to date, that is 2020.1.4!
For the latest changes in VL, please consult the vvvv gamma series 2020.1.X changelog.
Most notably this gives you access to all the latest goodies that are popping up as .NET nugets lately. Yes, we're still missing a convenient overview of those, but meanwhile a search for VL on nuget.org gives you an idea. Still wondering how to use those? Watch this tutorial on How to use Nugets to find out.
So please test this against your current projects. Make sure that everything is running as expected. If not, please leave a comment below or let us know in the forum.
vvvv beta40 x64 RC2
vvvv beta40 x86 RC2
Hello everyone!
We are pleased to announce that from now on VL language design ideas will be specified in public.
This allows you to see
But it also allows you to join forces with us. Since in the end, it's all about your patches, we appreciate your feedback.
We decided to start clean: for now, we didn't throw all our language ideas into this repository. In its current state it only contains issues that came up over the last few days, so the selection of issues is quite incomplete. Other ideas that might be more important but didn't come up recently will eventually make it there as soon as they come up again.
We'll address requests by you or us with proposals that might be fresh or have been around for some time. We'll try to communicate different approaches and their pros and cons. And we'll try to single out the very few issues that are just too promising not to have a shot at. Changing the language is quite hard, so expect an insane ratio of proposalsThatSoundNice / featuresComingSoon.
We were quite impressed by how this was handled by the C# Language Design Team. So we copied the approach.
Sometimes it's hard to distinguish between the language and ways of expression within the language. You just search for a way to address a certain problem. How would I structure my patches? We'll allow these so-called design patterns to be discussed in this repository as we want the language to be able to follow well-established ideas on how to solve certain types of problems. Here is an example.
But for now: Welcome to the club! \o/
helo evvvveryone,
we're preparing for a vvvv beta39.1 release and here is a first release candidate. As you'll see in the change-log, it is a rather minor update with only fixes. It does not yet include the anticipated update to the latest VL, which we're saving for the upcoming beta40. We just want to get another stable version out before the bigger update that including the latest VL will mean.
Remember that via VL you have access to many more goodies. Here is a convenient list of VL nugets that work with this release. To learn how to install nugets please consult this documentation and then use these commands to install them:
nuget install VL.OpenCV -Version 0.2.141-alpha
nuget install VL.Devices.Kinect2 -Version 0.1.45-alpha
nuget install VL.Devices.Realsense -Version 0.1.7-alpha
nuget install VL.GStreamer -Version 1.0.18-gadcd7f95e5
nuget install VL.Audio -pre
nuget install VL.IO.M2MQTT -pre
nuget install VL.IO.NetMQ -pre
nuget install VL.2D.DollarQ -pre
nuget install VL.2D.Voronoi -pre
If you have other public nugets that you tested to work with this release, please post them in the comments so we can all mention them in the upcoming release notes!
vvvv beta39.1 x64 RC1
vvvv beta39.1 x86 RC1
And as always, please test and report your findings!
This is my talk at Ircam earlier this year, where I tried to introduce vvvv gamma to an audience not necessarily familiar with vvvv but most likely already familiar with the idea of visual programming, given that Ircam was the birthplace of Max and PD, both of which are still in heavy use there.
In 25 minutes I tried to give a glimpse at vvvv by focusing on four things that I believe make it shine:
Also I pretended that it is completely normal to already have a 3d engine with it...
For talk description and recording of other talks see: https://medias.ircam.fr/xcc0abe
And here we go!
Only about a year after the first public preview of vvvv gamma we hereby announce what will be the final round of previews:
The vvvv gamma 2020.1 series.
We have a code-freeze. This is essentially what will be in the final release. We'll only be adding to the documentation and fixing showstopper bugs, should they come up. Of course we're aware of many more issues, but at this point we hope to have squashed all the biggest buggers and are confident to release a first stable version within the next few weeks.
Language
Besides staying true to its nature of being an easy-to-use and quick prototyping environment, vvvv is also a proper programming language with modern features, combining concepts of dataflow and object-oriented programming:
Node Library
While for now the number of libraries we ship is limited, the fact that you can essentially use any .NET library directly mitigates this problem at least a bit. Besides, there are already quite a few user contributions available in the wip forum, and here is what we ship:
The integrated help-browser comes with a lot of examples and howto-patches and a growing number of video tutorials is available on youtube.
We've announced the pricing model of vvvv gamma in a separate post. Until further notice, the previews of vvvv gamma are free of charge but due to its preview-nature we don't recommend using it in commercial scenarios yet.
Here you go: vvvv gamma 2020.1 preview 0040
Upcoming
0040 27 03 20
0032 24 03 20
Compared to the 2019.2 series
Ideally this will be the last preview; realistically we'll have to release some more. So please check back often and report your findings in the chat, forum or a comment below!
Yours truly,
devvvvs.