With the release of vvvv gamma 2019.2 preview it's now finally possible to create standalone applications out of any vvvv patch.
The steps to create an application are as simple as this:
To learn how to deal with referenced assets, read Exporting Applications in the gray book.
So please try it out with your projects and report any findings in the forums or the chat.
With the release of vvvv gamma 2019.2 we introduced a new backend that compiles patches in real time using Roslyn. This blog post is primarily intended for a technical audience; if you're solely interested in what new features it brings to the table, have a look at the aforementioned blog post.
In the past VL (the language behind vvvv gamma) compiled in-memory directly to CIL using CCI. With the recent changes in the .NET ecosystem and CCI being superseded by Roslyn it became more and more apparent that at some point we'd also have to make the switch to keep up with the latest developments happening at Microsoft.
What finally pushed us into making the switch was twofold:
Then in March this year @sebl came to the rescue by randomly dropping us a link in the chat pointing to a neat little "trick" which suddenly made it possible to translate our adaptive nodes feature directly.
After initial tests in March and April, and with the patched tooltip feature still pending for the final release, we decided that I should jump down the rabbit hole, which I've finally crawled out of after more than half a year ;)
Let's go back to the example of the LERP node and let's further try to write it down in C#:
Looks neat, but sadly it won't work: C# will tell us that the operators +, - and * and the constant 1 are not available on T.
The trick to make it work is to outsource those operators to a so-called "witness" which in turn provides the implementation when the LERP gets instantiated with, say, a vector. So let's see what the actual C# code needs to look like:
and when applying it with, say, float, we need to define a witness implementing the needed interfaces
which finally allows us to call
Fancy, no? The beauty is that when the JIT compiler hits such a code path it will be smart enough to inline all calls so in the end for the CPU the code to execute is the same as the initial naive attempt. But don't worry, this is all happening behind the scenes. In the patching world, it is as simple as it was before to patch generic numeric algorithms.
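Spelled out end-to-end, the pattern could look roughly like this. Note this is an illustrative sketch, not the code VL actually generates, and the interface and type names (ILerpWitness, FloatWitness) are made up for the example:

```csharp
using System;

// Hypothetical witness interface: the operators the generic code needs
// are declared here instead of being required on T itself.
public interface ILerpWitness<T>
{
    T Add(T a, T b);
    T Subtract(T a, T b);
    T Scale(T a, float scalar);
}

// Generic LERP, written as a + (b - a) * t. The witness is a struct
// type parameter, so the JIT can specialize and inline every call.
public static class Lerp
{
    public static T Invoke<TWitness, T>(T a, T b, float t)
        where TWitness : struct, ILerpWitness<T>
    {
        var w = default(TWitness);
        return w.Add(a, w.Scale(w.Subtract(b, a), t));
    }
}

// The witness for float simply forwards to the built-in operators.
public struct FloatWitness : ILerpWitness<float>
{
    public float Add(float a, float b) => a + b;
    public float Subtract(float a, float b) => a - b;
    public float Scale(float a, float s) => a * s;
}
```

With that in place, `Lerp.Invoke<FloatWitness, float>(0f, 10f, 0.5f)` returns 5, and a vector version only needs its own small witness struct.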
So now that we're able to translate patches directly to C# what are the implications apart from being able to export an application?
Well, for us as developers it will be much easier to bring in new language features, because the code we generate will be checked by the C# compiler and, more importantly, we can fully debug the generated code with Visual Studio. That, by the way, is not restricted to us: anyone can now attach a debugger to vvvv (or the exported app) and debug the patches.
The generated C# code will make full use of .NET generics. So when building a generic patch, the generated class will also be generic in the pure .NET world. As an example, let's consider the Changed node: while in the CCI-based backend a Changed class was emitted for each instantiation (Changed_Float, Changed_Vector, etc.), the new Roslyn-based backend only emits one Changed<T> class and leaves it to the .NET JIT compiler to create the different versions of the needed target code. This should lead to much less code the CPU needs to execute, as the JIT compiler is much smarter about when to generate new code and when not to.
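To illustrate the idea, a single generic class can serve all instantiations. This is a hand-written sketch of what a Changed-style node boils down to, not the actual generated code:

```csharp
using System.Collections.Generic;

// Illustrative sketch: one generic Changed<T> class covers every
// instantiation; the .NET JIT emits specialized machine code per
// value type only when that instantiation is actually used.
public class Changed<T>
{
    private T _last;
    private bool _first = true;

    // Returns true whenever the input differs from the previous frame.
    public bool Update(T input)
    {
        var changed = _first || !EqualityComparer<T>.Default.Equals(input, _last);
        _first = false;
        _last = input;
        return changed;
    }
}
```

So `new Changed<float>()` and `new Changed<Vector2>()` share one class definition instead of two emitted classes.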
But what's even more important is the fact that it opens up the world of compiling VL patches as pure .NET libraries. So we can finally pre-compile our libraries (like VL.CoreLib, VL.Skia, etc.), which in turn reduces the general overhead and leads to much quicker loading times and less memory usage. As an example, loading the Elementa "All At Once" help patch takes ~15 seconds the first time (compared to ~33 seconds in the old backend) and, thanks to caching to disk, only ~8 seconds when opening at a later time.
Apart from better loading times, it also gives the patcher the ability to instantiate any VL patch during runtime. In the previous backend, one had to use a hack and put all possible instantiations into a non-executing if-region. This is not necessary anymore as all the patches get compiled. However, I should mention here that this is only true for non-generic patches. Generic patches usually require a witness, which is not so straightforward to provide.
Sadly the new backend also required some major internal changes in the frontend so it wasn't possible to guarantee existing patches would work the same way as they did before. Here follows a list of potential breaking changes:
As mentioned previously, Elias and I were at DotNextConf in early November, where we presented vvvv to a whole different audience than you. We titled our presentation "vvvv - Visualprogramming for .NET", hoping to get the benefits of what you all take for granted (that is, "live, visual programming") across to people who are still stuck with an EDIT/COMPILE/RUN mode of development, and to hear if they could see some application of it in their fields.
The general feedback can be summed up as "Huh, this looks interesting, but I don't know what to do with it". Clearly our demonstrated use-cases didn't resonate with them. A few people came up to us after the talk and were interested in details about the state hot-reload, and one guy told us he works in aerospace engineering and believes they could very well use it: when working together with physicists they often need to exchange code, but the physicists always deliver very bad code, he claimed. He saw vvvv as a perfect fit: a tool they could prepare some high-level nodes in, so that the physicists have something to experiment with where they cannot make too many mistakes... An interesting take, and in a sense exactly what we were hoping for. So let's see where this goes...
Now grab a drink and some snacks, this goes for 50 minutes: Watch on YouTube
Just in time!
Only a whopping 6 years and one and a half months after its first mention during Keynode 13, and to the day exactly 5 years after the release of The Humble Quad Bundle, you can finally hold it in your own hands. Not exactly as the full release we had planned, but as a preview:
To our own surprise we couldn't finish all the things we had planned to release today. Most notably, the "windows executable export" didn't make it. We know this is a bummer, but we want to get this right and it just needs some more time.
Apart from that, we figured there is no need at this point to keep it to ourselves. It is definitely good enough for a preview, and definitely good enough to gather some feedback to incorporate into the final 1.0 release, for which we'll take some more time to finish our plans. So let's have a look at what we've got:
Besides staying true to its nature of being an easy-to-use and quick prototyping environment, vvvv is also a proper programming language with modern features, combining concepts of dataflow and object-oriented programming:
While for now the number of libraries we ship is limited, the fact that you can essentially use any .NET library directly mitigates this problem at least a bit. Besides, there are already quite a few user contributions available in the wip forum. Here is what we ship:
To accommodate the fact that from now on we essentially have 2 products, we added two main categories to the forum:
The existing question, feature, bug, general sections were moved into vvvv beta, and the vvvv gamma section got its own question, feature, bug and general sub-sections. Note that by default the search is constrained to the category you're currently viewing. When you're using vl in beta, still feel free to ask questions in the beta forum. We'll handle that.
Head over to this forum section to watch some video tutorials: https://discourse.vvvv.org/c/tutorials
We've previously announced the upcoming pricing model for vvvv gamma, which we're currently refining and we'll update you on changes to it soon.
Until further notice, the previews of vvvv gamma are free of charge but due to its preview-nature we don't recommend using it in commercial scenarios yet.
Here you go: vvvv gamma 2019.1 preview 975
975: 26 11 19
959: 19 11 19
930: 02 11 19
923: 31 10 19
827: 09 10 19
703: 16 09 19
667: 03 09 19
618: 22 08 19
615: 21 08 19
573: 08 08 19
552: 01 08 19
411: 12 06 19
406: 10 06 19
398: 05 06 19
380: 01 06 19
369: 27 05 19
344: 14 05 19
318: 09 05 19
303: 08 05 19
301: 07 05 19
287: 06 05 19
273: 02 05 19
252: 27 04 19
230: 24 04 19
222: 18 04 19
200: 15 04 19
191: 13 04 19
180: 11 04 19
177: 10 04 19
Apart from the promised and still missing parts, we're aware of quite a few little glitches and will update the download link above periodically. So please check back often and report your findings!
It's been a while since the b38.1 release, but finally we're getting ready to release an update to vvvv beta. Here is the release candidate, meaning it has everything we wanted to add for beta39. We only want to give you a chance to test it with your current projects, so we have a last chance to squash any new bugs you may encounter.
Here are the highlights of the upcoming release:
If you're also using VL already, good for you, because here you'll find even more goodies you will benefit from:
Besides those, it is important to understand that with VL you also have access to many more libraries that have been released recently. A lot of new packs these days come as nugets. For an overview, see VL packs on nuget.org, and you can use them all in vvvv beta, via VL...
So please give this release candidate a spin and be sure to report your findings, preferably in the forum using the "preview" tag, or just by posting a comment below.
Previously on vvvv: vvvvhat happened in July 2019
What's been the happenings?
Glad you ask! First off: mid-August saw the 2nd incarnation of the LINK summercamp, this time organized by StiX near Bratislava. He wanted to write a little report of our activities there, so I'm not gonna spoil it. I only wanna say this much: it was a blast! We cannot thank StiX and his friends enough, who did all the organization and prepared all the amazing food. Chapeau! And I witnessed talks of possible future LINKs in France and Spain. Let's see who gets this done first...
Then we just released a new vvvv gamma preview which finally includes nodes like the MultiFlipFlop, a Switch with multiple inputs, Resample nodes and more... Otherwise we're still polishing executable export and hope to be able to give you a preview of it soon. Besides that VL.Xenko is progressing quite well. More interesting for developers, read about how we switched to Xenko.Math and a more popular one:
As I hope you've noticed already, we've been strong on education lately: We're now offering personal training including a desk in our studio in Berlin: vvvv Training at the Source and we have regular activities going:
Besides two new contributions:
and one update:
We saw quite some activity in the forum's work-in-progress section:
As always, if you're looking for a vvvv job or even have one to announce, remember these:
That was it for August. Anything to add? Please do so in the comments!
In preparation for the Xenko game engine integration we decided to change the default math library of VL from SharpDX to Xenko. The decision was particularly easy since both math libraries have the same origin and most types and methods are identical. And thanks to the VL import layer it's easy to switch out the types, without any noticeable changes for the VL user.
What you get:
We are (again) in luck with Xenko since it just so happened that Alexandre Mutel, who developed SharpDX, was a core developer at Xenko. We actually didn't know that at the time we started to work on the VL core library. We chose SharpDX mainly because it was well established, complete and open source. So it was quite a nice surprise when we browsed the Xenko source code for the first time and saw that they basically use the same math code.
This section is only relevant for library developers.
Xenko's 4x4 matrices have a transposed memory layout compared to SharpDX. This is not to be confused with transposed matrix elements (M11, M12, M13 etc.); it is only relevant when doing low-level operations with memory and pointers, such as uploading matrices to the GPU. The big advantage is that Xenko's matrices can be uploaded to the GPU directly, without the overhead of transposing them.
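To make the layout difference concrete, here is a small sketch (not any library's actual code): the element values (M11, M41, ...) are the same in both libraries, only their flat offsets in the 16-float buffer differ, because the two layouts use swapped offset formulas. For example, the translation X component M41 (zero-based row 3, column 0) sits at offset 12 in one layout and at offset 3 in the transposed one:

```csharp
// Sketch: flat buffer offset of element (row, col) of a 4x4 matrix,
// zero-based, for the two memory layouts. Transposing the layout
// swaps the roles of row and column in the offset formula.
public static class MatrixLayout
{
    public static int RowMajor(int row, int col) => row * 4 + col;
    public static int Transposed(int row, int col) => col * 4 + row;
}
```

This is why code that copies matrix memory to the GPU (or casts it to a pointer) cares about the layout, while code that only reads and writes named elements does not.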
Most C# projects written for VL don't need to be changed. Only if they use the SharpDX.Mathematics nuget to work with vectors, matrices, rectangles etc.:
If you then get an error on compilation, your project might be in the old format. Upgrading is quite easy; it just involves changing the header and deleting most lines in the project file. Follow this guide or join our chat if you need help.
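For reference, an upgraded SDK-style project file looks roughly like this. This is a minimal sketch; the target framework shown is an assumption and depends on your project:

```xml
<!-- New SDK-style header replaces the old <Project ToolsVersion=...>
     boilerplate; most of the old file's contents can simply be deleted. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net472</TargetFramework>
  </PropertyGroup>
</Project>
```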
Please give the new version a spin and send us a report if anything doesn't work as before.
Here we go!
As mentioned previously, an update to how tooltips look and work was one of the two main things missing before we call vvvv gamma a 1.0 release. And they have just landed in the preview, hooray!
Previously tooltips were text-only, rendered all in one style and often contained rather cryptic information. Now we have structured information that is nicely presented, and we also tried to replace weird messages with human-readable text where possible.
Tooltips on nodes foremost show the node's full name and category plus its "Summary" and "Remarks" help information in two separate paragraphs. Additionally, if available, you'll see timing information, i.e. the amount of time the node needs to execute. Operation nodes can also show you the name of the operation they are currently executed on.
In case a node has an error or warning, we try to help you understand what's going on by answering the following three questions:
Also, while a warning/error tooltip is visible, pressing CTRL+C copies the message for convenient pasting, e.g. in the forums.
Tooltips on pins foremost show the pin's name and datatype. For primitive types (like numbers, strings, colors, ...) that can easily be displayed, we also show the current value.
In the case of collections (like spreads), we also show the current count and again, if the datatype is displayable, we now show up to three slices, compared to only one previously.
Oh, and the obvious:
Tooltips on links are by default only visible if the link has an error or warning. To get a tooltip showing on normal links, to see their datatype, press CTRL while hovering them.
Zooming patches is nice, but we figured that, independent of that, we also want to be able to define the size of a tooltip. So zooming tooltips it is:
Also the patch explorer got a bit more informative using the new tooltips.
Same goes for the nodebrowser, which should make it easier to find the right node as the summary and remarks are now much more pleasant to read.
And finally, there are now a couple more settings to tweak for tooltips:
A few tweaks here and there and more viewers to come for more special datatypes over time...but the biggest part is hereby done. To test, download the latest preview and then please let us know what you think in the comments.
One of the more important features for quick prototyping in vvvv has always been the IOBoxes. Here is an update that finally brings the VL IOBoxes up to par (and beyond) with what you were used to from vvvv beta.
Most notably missing so far was proper support for spreads. Sorted. When creating an IOBox via "start link -> middleclick" you now always get an interactive IOBox for the supported primitive types: ints, floats, bool, string, path, color, enum, even if they are spreaded or spread-of-spreaded or...
Or configure your own, by first creating a normal IOBox via right doubleclick and then configuring its type (middleclick it) via the Inspektor to a Spread type:
Key to spread IOBoxes is that you can directly set their count without the need to open an Inspektor. By default they now show a maximum of 5 entries and add a scrollbar to show more. If you want to see more, you can change the "Maximum Visible Entries" count via the Inspektor.
To quickly modify a constant spread, you can also insert/remove slices when the Inspektor is active:
Same as with other editors, the spread editors also work on inputs of a node to quickly tweak values:
And you can now specify defaults for input pins that are spreads:
Mostly useful for numbers and bools: in VL you can override upstream values directly, by manipulating an IOBox that sits in the middle:
What we're used to from beta: Entering values via formula now also works:
Vectors now allow you to change all components at once:
Also, the Inspektor now shows all properties that you get on a float IOBox, so you can now also configure e.g. a vector's precision.
Both can now optionally show non-printable characters:
Color IOBoxes now also show you transparency:
Paths can finally be reduced to smaller sizes and show proper path ellipsis, i.e. preferring to keep the last part of the value visible:
Click the little O icon to open the current file/directory with their associated program. ALT+click the icon to show the file/directory in the explorer.
As you know, efforts have been going for the last year and a half into bringing a computer vision nodeset to VL.
The goal was to incorporate as many of the features covered by the world-renowned ImagePack, contributed by Elliot Woods some years ago, while bringing in as many new features and improvements as we could.
Since then, listening to your needs and constant feedback, we have tried to polish every corner, fix every bug, document every odd scenario, and add plenty of demos. Especially, we tried to give you a clean, consistent and easy-to-use nodeset.
At this point in time, we are happy to announce that the goal has been nearly met. Most of the features available in the ImagePack made it into VL.OpenCV with the exception of Structured Light, Feature Detection, StereoCalibrate and some of the Contour detection functionality. At the same time, newer features such as YOLO, Aruco marker detection and others have been brought in for you to play with.
So what's next? Even better documentation and loads of examples!
In the mean time, here is a summary of the new things that have been brought into the package in the last couple of months:
The new CvImage wrapper around OpenCV's Mat type allows for some optimizations, especially when dealing with non-changing images.
CvImage allows nodes to run only once when the image has changed, significantly reducing CPU usage
Since it is now possible to detect if an image has changed, CvImage is a perfect candidate to benefit from Cache regions.
Cache regions can now make proper usage of image inputs and outputs
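The mechanism can be sketched like this. The names (VersionedImage, CachedFilter) are hypothetical and not the actual VL.OpenCV API; the point is only that a change marker on the image lets downstream work be skipped when nothing changed:

```csharp
// Hypothetical stand-in for CvImage: a version counter marks changes.
public class VersionedImage
{
    public int Version { get; private set; }
    public void MarkChanged() => Version++;
}

// Hypothetical stand-in for an OpenCV processing node: it re-runs
// its (expensive) work only when the input image's version changed.
public class CachedFilter
{
    private int _lastVersion = -1;
    public int Executions { get; private set; }

    public void Update(VersionedImage input)
    {
        if (input.Version == _lastVersion) return; // unchanged: skip work
        _lastVersion = input.Version;
        Executions++; // stand-in for the actual per-image OpenCV work
    }
}
```

Calling Update every frame then only pays the processing cost on frames where the image actually changed, which is the saving described above.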
The Renderer was re-built from the ground up to improve usability and to fix bugs and issues. Title, Render Mode and Show Info features were added. Renderer also remembers its bounds on document reload.
New Renderer implementation introduces Title, Renderer Mode and Show Info pins
Histogram analysis has been added to VL.OpenCV. A useful tool in many scenarios.
Histograms allow you to analyze pixel value tendencies per channel
Homography and reverse homography are now available in VL.OpenCV.
Homography (vvvv used only for point IOBox)
Two new Stereo Matchers were added; these allow you to create a depth map from a set of stereo images. For more, see the StereoDepth demo patch in VL.OpenCV.
Depth map obtained from a pair of stereo images
Serialization support was added for the CvImage and Mat types, allowing you to use CvImage as part of more complex data structures which get serialized with no effort. This can be a heavy operation, so make sure to trigger it only when needed.
For a hands on demonstration check out the Serialization VL demo that ships with VL.OpenCV.
As part of this final effort to clean everything up even further and make the nodeset consistent and properly organized, we needed to rename and move a few things around, which, as you can imagine, means the introduction of breaking changes. We understand this is an annoying thing to cope with, but it is basically the reason why we chose to keep this pack in a pre-release state until we felt confident with its structure and approach.
In summary: yes, you will get red nodes when you upgrade your VL.OpenCV projects to the latest version, but in most cases fixing them should be as easy as double-clicking and finding the node in its new category.
An exception to this are the nodes that got renamed, which we list below:
Remember that in order to use VL.OpenCV you first need to manually install it as explained here. Also, until we move away from pre-release you need to use the latest alpha builds.
We hope you find good use for this library in your computer vision projects and as always, please test and report.