Blog posts are sorted by the tags you see below. You can filter the listing by checking/unchecking individual tags. Double-click or Shift-click a tag to see only its entries. For more information see: About the Blog.
As you know, efforts have been going for the last year and a half into bringing a computer vision nodeset to VL.
The goal was to incorporate as many of the features covered by the world-renowned ImagePack, contributed by Elliot Woods some years ago, while bringing in as many new features and improvements as we could.
Since then, listening to your needs and constant feedback, we have tried to polish every corner, fix every bug, document every odd scenario, add plenty of demos and, especially, give you a clean, consistent and easy-to-use nodeset.
At this point in time, we are happy to announce that the goal has been nearly met. Most of the features available in the ImagePack made it into VL.OpenCV, with the exception of Structured Light, Feature Detection, StereoCalibrate and some of the contour detection functionality. At the same time, newer features such as YOLO, Aruco marker detection and others have been brought in for you to play with.
So what's next? Even better documentation and loads of examples!
In the meantime, here is a summary of the new things that have been brought into the package in the last couple of months:
The new CvImage wrapper around OpenCV's Mat type allows for some optimizations, especially when dealing with non-changing images.
CvImage allows nodes to run only once when the image has changed, significantly reducing CPU usage
Since it is now possible to detect if an image has changed, CvImage is a perfect candidate to benefit from Cache regions.
Cache regions can now make proper usage of image inputs and outputs
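As a rough illustration of this idea (plain Python, not the actual VL.OpenCV API — the `CvImage` class and `make_cached` helper below are hypothetical), an image wrapper that tracks a version counter lets downstream work run only when the image actually changed, which is exactly what makes it a good fit for cache regions:

```python
# Hypothetical sketch: a CvImage-style wrapper with a change counter, plus a
# cache-region-like helper that re-runs an operation only on change.
class CvImage:
    def __init__(self, pixels):
        self.pixels = pixels
        self.version = 0              # bumped whenever the pixels change

    def write(self, pixels):
        self.pixels = pixels
        self.version += 1

def make_cached(op):
    """Re-run 'op' only when the image version changed."""
    last_version = None
    last_result = None
    def cached(image):
        nonlocal last_version, last_result
        if image.version != last_version:
            last_version = image.version
            last_result = op(image)
        return last_result
    return cached

calls = []
def invert(image):
    calls.append(1)                   # count how often the heavy op runs
    return [255 - p for p in image.pixels]

cached_invert = make_cached(invert)
img = CvImage([0, 128, 255])
cached_invert(img)
cached_invert(img)                    # image unchanged: op is not re-run
img.write([10, 20, 30])
cached_invert(img)                    # image changed: op runs again
```

With a static input image, the heavy operation runs once instead of every frame, which is where the CPU savings come from.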
The Renderer was rebuilt from the ground up to improve usability and to fix bugs and issues. Title, Render Mode and Show Info features were added. The Renderer also remembers its bounds on document reload.
The new Renderer implementation introduces Title, Render Mode and Show Info pins
Histogram analysis, a useful tool in many scenarios, has been added to VL.OpenCV.
Histograms allow you to analyze pixel value tendencies per channel
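Conceptually, a histogram just counts how many pixel values of one channel fall into each intensity bin. A minimal pure-Python sketch (illustrative only — VL.OpenCV computes this per channel on real images):

```python
# Count pixel values of one channel into equal-width intensity bins.
def histogram(channel, bins=4, max_value=256):
    width = max_value // bins
    counts = [0] * bins
    for value in channel:
        counts[min(value // width, bins - 1)] += 1
    return counts

# A toy "red channel" with 6 pixels, split into 4 bins of width 64:
red = [0, 10, 200, 255, 63, 64]
print(histogram(red))  # → [3, 1, 0, 2]
```

Peaks in the low bins indicate a dark image, peaks in the high bins an overexposed one — that is the "pixel value tendency" the analysis reveals.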
Homography and reverse homography are now available in VL.OpenCV.
Homography (vvvv used only for point IOBox)
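For those new to the concept: a homography is a 3x3 matrix that maps 2D points expressed in homogeneous coordinates, which is what lets it model perspective warps between planes. A hedged pure-Python sketch of applying a known homography to a point (illustrative, not the VL.OpenCV node):

```python
# Apply a 3x3 homography H to the 2D point (x, y) via homogeneous coordinates.
def apply_homography(H, x, y):
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w            # perspective divide

# A pure translation by (5, 3) expressed as a homography:
H = [[1, 0, 5],
     [0, 1, 3],
     [0, 0, 1]]
print(apply_homography(H, 2, 2))     # → (7.0, 5.0)
```

Reverse homography is simply the same operation with the inverted matrix, mapping points back from the warped plane to the original one.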
Two new Stereo Matchers were added; these allow you to create a depth map from a set of stereo images. For more, see the StereoDepth demo patch in VL.OpenCV.
Depth map obtained from a pair of stereo images
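The core idea behind a stereo matcher: for each pixel in the left image, find the horizontal shift (disparity) at which the right image matches best — nearby objects shift more than far ones, so disparity encodes depth. A toy 1D sum-of-absolute-differences sketch (purely illustrative; the real matchers work on full images with many refinements):

```python
# Toy 1D block matcher: for each pixel of the left row, pick the disparity
# (horizontal shift) with the lowest sum-of-absolute-differences cost.
def disparity_1d(left, right, max_disp=3, window=1):
    disparities = []
    for x in range(len(left)):
        best_d, best_cost = 0, float('inf')
        for d in range(min(max_disp, x) + 1):
            cost = 0
            for o in range(-window, window + 1):
                xl, xr = x + o, x - d + o
                if 0 <= xl < len(left) and 0 <= xr < len(right):
                    cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

left  = [0, 0, 9, 0, 0, 0]   # a feature at x = 2 in the left image
right = [0, 9, 0, 0, 0, 0]   # the same feature at x = 1 in the right image
print(disparity_1d(left, right)[2])  # → 1 (the shift between the two views)
```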
Serialization support was added for the CvImage and Mat types, allowing you to use CvImage as part of more complex data structures which then get serialized with no extra effort. This can be a heavy operation, so make sure to trigger it only when needed.
For a hands-on demonstration, check out the Serialization VL demo that ships with VL.OpenCV.
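The payoff of serializable image types is that nested structures serialize "for free". Using Python's pickle purely as an analogy for VL's serialization (the `Image` and `Calibration` types below are made up for illustration):

```python
# Once the image type is serializable, any structure containing it is too.
import pickle
from dataclasses import dataclass

@dataclass
class Image:
    width: int
    height: int
    pixels: bytes          # raw pixel data

@dataclass
class Calibration:
    camera_name: str
    reference: Image       # nested image, serialized along with the structure

cal = Calibration("cam-1", Image(2, 2, bytes([0, 64, 128, 255])))
blob = pickle.dumps(cal)   # potentially heavy: trigger only when needed
restored = pickle.loads(blob)
assert restored == cal     # full round trip, image included
```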
As part of this final effort to clean everything up further and make the nodeset consistent and properly organized, we needed to rename and move a few things around, which, as you can imagine, means the introduction of breaking changes. We understand this is an annoying thing to cope with, but it is basically the reason why we chose to keep this pack in a pre-release state until we felt confident with its structure and approach.
In summary: yes, you will get red nodes when you upgrade your VL.OpenCV projects to the latest version, but in most cases fixing them should be as easy as double-clicking and finding the node in its new category.
An exception to this are the nodes that got renamed, which we list below:
Remember that in order to use VL.OpenCV you first need to install it manually as explained here. Also, until we move away from pre-release, you need to use the latest alpha builds.
We hope you find good use for this library in your computer vision projects and as always, please test and report.
Up until now, VL had rather rudimentary support for pin groups. Only nodes following a certain pattern had the option of a dynamic number of input pins. For simple nodes like a plus or a multiply this worked out fine, but for others it either felt like a hack or was simply impossible to use at all. A node with a dynamic number of outputs was never supported.
This all changes now with the introduction of proper support for pin groups. So let's jump right in and have a look at the definition of the very famous Cons node:
As we can see, the pin inspektor is showing a new entry called "Pin Group". This flag has to be enabled. Then we annotate the pin with the type Spread. This creates pins named "Input", "Input 2", "Input 3" etc. on the node.
If we now look at an application of the Cons node we can already see a couple of nice new features:
Pin groups are not limited to inputs; they also work for outputs, which brings us to a new node called Decons - deconstructing a spread into its outputs:
Cons and Decons are examples of using a pin group as a Spread. But there is another variant, where the group gets annotated as a Dictionary&lt;string,*&gt;. Instead of being addressed by index, the pins get addressed by their actual name. Let's have a look at two other new nodes, again called Cons and Decons, but residing in the Dictionary category:
Pins can be added as usual with Ctrl - +, but what's new is that those pins can be renamed in the inspektor UI, giving us the ability to quickly build up dictionaries.
The patch of the Cons building up a dictionary compared to the one building up a spread only differs in the type annotation of the input pin.
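As a rough textual analogy (Python, not VL — the function names are made up), the two flavors differ only in whether values are collected by position or by pin name:

```python
# Spread variant: pins are addressed by index.
def cons_spread(*inputs):            # Cons: inputs -> spread
    return list(inputs)

def decons_spread(spread, count):    # Decons: spread -> fixed outputs
    return tuple(spread[:count])

# Dictionary variant: pins are addressed by their name.
def cons_dict(**named_inputs):       # Cons (Dictionary): named pins -> dict
    return dict(named_inputs)

print(cons_spread(1, 2, 3))              # → [1, 2, 3]
print(decons_spread([1, 2, 3], 2))       # → (1, 2)
print(cons_dict(width=640, height=480))  # → {'width': 640, 'height': 480}
```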
Apart from Spread and Dictionary, the system also supports pin groups of type Array, MutableArray and MutableDictionary. The corresponding Cons and Decons nodes can be found when enabling the Advanced view in the node browser.
So far, the pins of a pin group have always been created via the user interface of the patch editor. Things get really interesting, though, when creating them from within the patch itself:
Imagine the string being an expression of some sort generating inputs for each unbound variable. The possibilities are endless :)
The nodes needed to create and remove pins can be found in the VL category after adding a reference to VL.Lang - the patch from the gif above can be found in the help folder of the VL.CoreLib package.
More information on those nodes will be covered in an upcoming blog post. Until then you can try these new pin groups in our latest alpha downloads and happy patching,
previously on vvvv: vvvvhat happened in January 2019
right when we had only one month left to the announced release of vvvv gamma, february cut us short by another 2 days... had we only known that... so where are we?
also a lot of work happened in the VL.OpenCV library which we now consider to be in a pretty mature state and shall therefore release it as a proper (ie. non-prerelease) package soon.
but most importantly we've announced the pricing for vvvv gamma, which has faced some criticism as you can read yourself in the comments. being aware that this would not be an easy topic, we consulted with a couple of longterm vvvv users to get some outside views on our thinking before the announcement. from the public responses, we now understand that this was not enough. our attempt to keep the announcement simple and compact backfired with quite a few misunderstandings of the terms we presented. we appreciate your feedback. please add your thoughts and also feel free to contact us via mail if you don't feel like adding to the public discussion. every thought helps us clarify the discussion.
so, less than a month left till the announced release..we make very good progress but still this is going to be tough...
also last call to take part in the 2019 Survvvvey. we've reached more than 300 participants and will be closing it and releasing the results soon!
quite a couple of new things, and one update:
and some more fine projects:
that was it for february. anything to add? please do so in the comments!
For a while now, VL has come with the idea that you can organize node and type definitions in your VL document.
But now, we want to give you back another, alternative way to look at things - an organization structure, which is more intuitive and also well known from vvvv beta: The application side of things...
And also, we did this in reaction to the feedback we got from Link festival:
You want to be able to navigate the running object graph, where it's about the instances of patches, not about their definitions. You want to be able to navigate into a running patch and see the values that flow in this instance, not in another instance of the same patch...
Also, typically you approach your project top-down and just add more details to it since this is the basic idea of rapid prototyping: patching a running system that you incrementally grow and modify.
So we took the chance to shift the focus a bit so that in VL you again get confronted with the application side of things first.
This is what you know from vvvv beta: a patch can contain a sub-patch - you navigate into it and inspect the values flowing. You go outwards - to the caller - via "Ctrl-^". With the ^-Key we actually refer to a key at a certain position on the keyboard.
In VL this now is just exactly the same. Navigating into a process node shows you the right values. Ctrl-^ brings you back to the caller. So you are basically navigating the living node tree of the application. In VL it's been hard to think in these terms, but now it's the default. We also made sure that this works when navigating between vvvv beta and embedded VL nodes.
Also, try the back and forth mouse buttons if you happen to have a 5-button mouse. Ctrl-MouseBack will bring you to the calling patch and Ctrl-MouseForth will travel back into wherever you were coming from.
Every VL document comes with an Application patch, which will open by default. You can start patching right away - a bit like in vvvv beta.
Patching top-down has never been easier. Creating an Ape simulation from scratch:
You can run many applications at the same time, e.g. several example patches in addition to your project app. The application menu lists all documents that actually make use of the application patch.
Definitions in vvvv beta basically correspond to the .v4p files, in VL you can have more of them per document.
Library developers or more advanced users will of course still want to organize types and nodes and approach them from the definition side. This is like saying "There is one idea of a wheel, but if you feel like it, you can instantiate three of them".
For an overview of the definitions, each document comes with a separate Definitions patch - basically what used to be the document patch.
Here you see what happened during patching top-down: on the definition side, we now have two Processes.
That's where you would from now on also place your Classes, Records...
Navigation within the current document structure works with Ctrl-Shift-^, Ctrl-Shift-MouseBack, Ctrl-Shift-MouseForth.
When navigating into a patch like that, you will see some instance of the patch - or maybe none, if none is instantiated or currently running. In that case, you will not be able to see any values.
If the patch is not yet inspecting a particular instance it will wait for the moment an instance gets executed and then attach to this particular instance.
We took the chance to clean up some bits in the node browser and the patch explorer as well.
The application patch, for example, no longer offers confusing options, but basically only shows the properties stemming from pads; the Process Node Definition is now called that way (it used to be "Order"); Process Nodes in the node browser look a bit like process nodes in the patch; choices like "Input" and "Node" appear at the top of the list of choices in the node browser...
That should be it for now!
Thanks, yours devvvs
here we are to talk about numbers once again.
We recently introduced you to vvvv gamma and what it will initially be about, but also what you can expect from it in the future. Now is the time to inform you about the licensing options we have planned for it.
Trust me when I tell you that this is the most rewritten blog post we have ever published. Creating a suitable licensing model seems far more complex than creating the software itself. Initially this resulted in rather complex models, which we were only able to boil down to the below by talking to some long-term commercial users and incorporating their (sometimes contradicting) feedback. So, many thanks to those who listened and engaged in a discussion with us!
Foremost it is important for us to keep our (pattern pending) T.R.U.S.T model, which means:
That's the world we want to live in. We don't believe in any form of copy-protection or artificial feature limitations that usually only restrict honest users. Others will always find a way around such restrictions and thus not be bothered anyway. We're all grown-ups here. If vvvv helps you make a living, then help us make a living by providing vvvv for you. How simple is that?!
Same as above, this is what we believe in. In the end everything always comes down to education and equal access to it. We don't want to be responsible for anyone to not have the pleasure of learning vvvv.
There are commercial educational institutions that could indeed make us a lot of money. But we're also smart businessmen and know how to cash in later on the free drugs we hand out now.
Furthermore, it is in the interest of any professional user of vvvv to effectively support free educational use, as this keeps the flow of new talent steady. WinWinWin.
So why not simply keep the existing licensing model? Indeed, to make this clear: The licensing for vvvv beta will not change! As long as you're not interested in vvvv gamma, everything stays the same for you (but you're missing out! Just sayin...).
But then, regarding vvvv gamma, there are a couple of reasons to adapt the licensing model:
So bottom line up front: vvvv gamma is more and therefore requires a more defined licensing model.
Having said all this, we want you to declare your use of vvvv gamma using the following licenses:
In other words: the idea is that every commercial devvvveloper will have an Editor Plus license and that is it. With it you can work on and export as many projects as you please. For cases where you want to run the vvvv development environment also on a device in, say an exhibition, to use the advantage that you can e.g. remotely maintain the program while it is running, you need additional Editor licenses for each device. This is comparable to how you'd buy a full license for each PC running vvvv beta so far. Obviously you can still use your developer seat license also in an installation/show, as long as you're running only one instance of vvvv and don't need the license to work on another project at the same time.
If you're not working on projects that will be deployed as executables, then you can stick to the Editor license also during development.
And finally, the Product license, which is similar to the vvvv beta OEM license. Only now it comes in addition to the Editor Plus license that every developer working on a product needs.
Another thing we wanted to improve over the beta licensing model stems from the fact that vvvv is used in scenarios of quite diverse financial potency. Accommodating all of those on an individual basis is not really feasible, but we thought we could at least add an option on either end of the default "professional" user.
Different use-cases also demand different payment options. Here is what we have planned:
Accommodating the various requirements of all types of users and use-cases is a tricky task. This paired with trying not to completely disregard the pricing politics of "the competition" but also adding our own ideas and still balancing an economically viable solution didn't make it any easier. We still hope that we found a way that can be sustainable for all of us.
We are aware that the above may leave some questions open and we are ready to further refine the fine print and add examples to make it easier for everyone to declare their licenses. Please help us do so by asking your questions in the comments below, so we can understand where we need to get more specific.
we have regular expressions in vl. What the? Here is the gist:
vvvv beta comes with the RegExpr (String) node, which is quite handy but doesn't cover all cases. vux provides a RegExpr (String Replace) via the addonpack, which adds the "replace" case, but there is more. So let's see what we have in store for vvvv gamma:
The simplest case: Just figure out if a given string matches a given pattern:
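As a rough stand-in for the VL node (which uses .NET regular expressions under the hood), here is the same "match" idea with Python's re module:

```python
import re

# Does the whole string look like a simple date "Month Day, Year"?
pattern = r"[A-Z][a-z]+ \d{1,2}, \d{4}"
print(bool(re.fullmatch(pattern, "August 24, 2018")))  # → True
print(bool(re.fullmatch(pattern, "not a date")))       # → False
```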
Sometimes a simple replace-by-string is not enough. See this example where we're stripping a string of all occurrences of html-tags, i.e. replacing them with nothing.
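The equivalent tag-stripping replace, sketched with Python's re module as an analogy for the VL node:

```python
import re

# Replace every html-like tag <...> with nothing.
html = "<p>hello <b>world</b></p>"
print(re.sub(r"<[^>]+>", "", html))  # → hello world
```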
Sometimes a split-by-string is not enough. See this example where we're splitting a string by any multiple occurrences of lowercase letters:
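The same pattern-based split, again sketched in Python rather than the VL node itself:

```python
import re

# Split a string at every run of one or more lowercase letters.
print(re.split(r"[a-z]+", "A1bcB2deC3"))  # → ['A1', 'B2', 'C3']
```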
Find all substrings that match a given pattern. Imagine a string that contains many dates written in the format "Month Day, Year" and you want to get all of those:
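The "find all matches" case described above, as a Python sketch (the date format is the one from the text):

```python
import re

text = "Born May 3, 1999 and graduated June 12, 2021."
print(re.findall(r"[A-Z][a-z]+ \d{1,2}, \d{4}", text))
# → ['May 3, 1999', 'June 12, 2021']
```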
The last pin on all of the above nodes is the Options enum pin. Since this enum allows multiple selections (ie. a bitwise combination of its member values), there is a RegexOptions node that allows you to set multiple of the options at the same time:
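Python's re flags work the same way as the bitwise-combinable .NET RegexOptions, so the idea can be sketched like this:

```python
import re

# Combine several options into one value with the bitwise "|" operator,
# just like a bitwise combination of RegexOptions member values in .NET.
flags = re.IGNORECASE | re.MULTILINE
print(bool(re.search(r"^vl", "hello\nVL rocks", flags)))  # → True
```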
The above should cover the most typical use cases. But regular expressions can do even more. Luckily, with vl you're not restricted to what we decide to provide for you; you have direct access to the full set of functionality .NET regular expressions offer. For example, there are situations where you want to use the static operations that .NET provides instead of the process nodes shown above. If so, simply choose "Advanced" in the nodebrowser, navigate to the "Regex" type and choose the static operations from there...
Available for testing in latest alphas now!
When Tue, Feb 26th 2019 - 19:30 until Tue, Feb 26th 2019 - 22:00
Where Spektrum, Bürknerstraße 12, 12047, Berlin, Germany
It is happening: vvvv berlin meetup #7
We'll have Mr. ravazquez tell us everything you were always afraid to ask about VL.OpenCV, the premium computer vision library in development for vvvv. He'll show us a few things you can do with the library and we're hoping to get into a bit of a discussion about your current and future requirements when it comes to computervvvvision. bring your wildest ideas...
As always, feel free to bring your project/notebook/questions or whatever you want to share with the community. We have space and time for spontaneous discussions and presentations!
There will be a bar serving us drinks. Thanks go to the fantastic team of Spektrum Berlin
If you feel like it, please RSVP on our Getogether page!
previously on vvvv: vvvvhat happened in December 2018
here we go,
we're still on track regarding our roadmap and here is a little status report:
none of these are in alphas yet, but if things go well, they should land soon.
while you're waiting and haven't yet, please fill out the 2019 Survvvvey!
released as work in progress:
and three more:
that was it for january. anything to add? please do so in the comments!
welcome back to everyone's favorite number-show, 2018 edition. don't know where you're finding yourself? no worries, you can read up on all of it. par exemple you may wanna first read about the 2017 numbers before diving into the recap of this season with the soothing title:
"the calm before the storm"
as a loyal reader of this segment you rightfully ask: where is my table showing the access to vvvv.org per country? i'm afraid, in the wake of the great GDPR, we couldn't be bothered to figure out how legal it still was to track you around with google-analytics, so we simply dropped it. that means no such data for 2018. we're planning to install a more privacy-friendly tool sometime this year, so there should be some such data again next year.
did you fall asleep yet? how about instead we offer you a peek at your favorite forum-search terms of this past year:
note: terms are listed in order of frequency, where "kinect" is curiously about 3 times ahead of the follow-up "dx11". the rest is rather evenly distributed. i'm only a bit concerned about that "integer value"...
overall it seems 2018 was a rather stagnant year for vvvv.org as we can also see from the graph above depicting the number of new daily topics on the forum. nothing we'll be able to impress our investors with... but can you blame yourself? the same old website for over 10 years now. wish i could tell you about what's brewing, but i'm afraid, we've signed an NDA with ourselves...
* x86 and x64 combined
don't be fooled by the spike: certainly the high download-counts can mostly be attributed to the fact that we also had the highest number of releases this year. but if you take the ratio of addons/core you'll see that the number of serious users (those with addons) keeps slightly decreasing...
boom. and still. despite the rather modest numbers shown above, arguably one of the more important numbers went up again. 2018 brought us the 3rd best result in terms of licenses and dongles sold. so let's see who contributed to this:
ahm..not so good. the number of individual commercial users is still going down, even though those who're using vvvv are apparently getting more productive with it. team marketing, your turn!
looking more closely we can totally infer global economic trends from the listing of "licenses sold per country": for the first time in recorded history the UK is brexiting from spot 2, overtaken by russia who made a surprise jump with its by far highest percentage to date. also big up austria for their best share so far. and both first time showing in the ranks: china and italy. anything to learn from that?
|austria||3%||russia||2.5%||aut, aus, usa||4.2%||austria||3.2%||switzerland||1.5%||russia||2.6%||china||4.1%|
|spain||2%||france||2.5%||russia, norway, czech||2.8%||russia||2.9%||France||1.6%||denmark||2%||italy||2.5%|
so here we are. after 16 years in development vvvv still hasn't made it out of beta and it may seem this fact takes a toll on the numbers. we were really hoping we could finally release our next big thing (that is vvvv gamma) by the end of 2018, but then we luckily got sidetracked by Xenko. while we're quite optimistic that gamma will conquer new worlds where no 3d-animation has ever been seen before, Xenko completes our vision and will only make vvvv gamma more useful for many of you.
we've done our homework by realizing the first vvvv gamma/xenko-only project Ocean Of Air together with MLF. we learned a lot and it showed us where workflows are still not optimal. we're now in the process of polishing those and fixing the most annoying buggers before we'll be releasing it into your precious hands.
but don't lean back and wait! according to a recent study, 55% of you are not using VL at all yet. who do you think we're doing this for? and 40% of those are whining "because i just didn't find the time to get into it yet". well fellows, it is about time to find that time! because it may take you a while. it may be hard in the beginning. so let us take you by the hand and lead you through the struggles of learning; we'll show you something to make you change your mind. we're there for you here and here with our unprecedented 24/7-free-unlimited (fair use) helplines to guide you along all of your steps. it is only up to you to take that offer, because as i wisely just came up with: "s/he who asks a question is already one step closer to the answer".
btw. that study i just mentioned is still running. if you haven't already, please fill it out now.
so up next: 2019, allegedly the most exciting year in vvvv history (so far). the road ahead is packed with goodies and we can't wait to start working on them and share them with you. to those of you who happily bought their licenses: it is right and just. stay who you are and you'll be forever in our quads! to those who didn't: not so smart. grow up!
and everybody: happy new!
previously on vvvv: vvvvhat happened in November 2018
aaand that was it again..
but before we head off into this happy new year, let's recap what happened on the last mile:
we finished our work on "Ocean Of Air" on time (hooray) and it has been running ever since December 7th. that is 12 networked PCs + server running a vl+xenko multiuser VR installation without any regrets. if you happen to be in london, treat yourself to a visit. click here for info and tickets.
if you're interested in how we pulled this off, here is the second in a series of blogposts on how we're integrating Xenko with vl.
with the job out of our heads, we took to defining the coming milestones for vvvv and paved the road to vvvv gamma <- must read. and if you're interested in a few more details about the milestone that brought us to where we are now and what more is to come after the initial release of vvvv gamma, then please check out our new roadmap.
and if you haven't already, then prettyplease fill out the 2019 survvvvey. it takes no more than 5 minutes. promised!
hooray for 3 new, and two updates:
two works in progress:
and a little tease:
beautiful beautiful renderings from the pros:
and some more fine stuff:
these have been announced for a while now. if you're interested, be quick!
that was it for december. anything to add? please do so in the comments!