Dear patchers worldwide,
we're planning to take the vvvv academy on tour this summer. There will be another course in Berlin in July and we'd like to also take it to 2 more cities in Europe or beyond.
The course ideally runs like this:
• Wednesday, Thursday, Friday, plus Monday, Tuesday, Wednesday of the next week
• 2x3 hours a day
• with ~6 participants
• I'll be coming to run the course and could use a local assistant
So here is our question:
• Do you have an office space (~30sqm room with power, internet, projector) that you could offer for a week?
• Sometime between the end of July and the end of September?
• Can you help us activate your local community so we get ~6 people signing up for a course that will cost 800€?
What's in it for you?
• To be discussed, but for one thing you'll meet a crowd of motivated people who could potentially be interested in working with you in the future.
Please let us know your thoughts and questions. If you're interested please get in touch via email@example.com
it is happening: beta36 is scheduled for a release in early february. we're quite confident with the state of the new features we've added and would like to ask you to give it a final spin with the release-candidates as listed below. please open the projects you're currently working on and see if they run as expected. if not, please let us know in the forum using the alpha tag.
since the dawn of vl, vvvv has become increasingly more powerful. we see initial proof in the works of schnellebuntebilder and intolight who are using the combination already to their advantage. it allows them to create projects of a complexity that would have been very hard if not impossible to realize with vvvv alone.
so far though, vl could only be used for IO and logic tasks. anything related to rendering was still in the hands of vvvv DX9/DX11 only. with beta36 we're introducing a new bridge that will allow you to prepare textures and buffers with the convenience of vl features and hand them over to vvvv using a new set of nodes. have a look at \girlpower\VL\DX\DynamicBuffersAndTextures.v4p to see how this works! and here are some more highlights:
for an in depth list of changes have a look at the changelog.
VL documents you save with these candidates will not open anymore with beta35.8!
if you have the feeling that this release will not have anything for you, we'd only partly agree. true, maybe not directly. but we'd like to point out that what's hidden behind the unpretentious bullet point "Use .NET Libraries and Write Custom Nodes" listed under vl above can conservatively be understood as a bombshell. it means that anyone now has access to a vast range of .NET libraries in vl and therefore can also use those in vvvv. while this may exceed your personal abilities, it lowers the barrier to contribute to vvvv/vl in general by far and if we get this communicated right, this should be a win-win for evvvveryone. so tell your .NET developer friends about this..they should understand the implications.
at the same time this makes it easier for ourselves to now start building more interesting libraries for vl, which in turn will be a win for all vvvv users as well. hope this makes sense..
but now we wish you all some happy holidays and are waiting for your feedback on the candidate!
Who maarja, tobyk
When Tue, May 1st 2018 - 10:00 until Fri, May 4th 2018 - 20:00
Where Pistoriho palác, Štefánikova 834/25, Bratislava, Slovakia
CCL is an internationally traveling format offering digital media ‘code savvy’ artists the opportunity to translate aspects of choreography and dance into digital form and apply choreographic thinking to their own practice. In a one-week interdisciplinary peer-to-peer setting, the participants work on their own projects, share knowledge, find new collaboration partners, and discuss ideas and challenges. Previous CCLs have been joined by digital artists, musicians, architects, physicists and dancers, all bringing along a variety of emerging new media tools and setups that can be experimented with and explored for new artistic ideas. Results from the week might range from prototypes for artworks to new plug-ins for working with dance-related datasets (www.choreographiccoding.org). CCLs finally seek to initiate a sustainable collaborative practice among their participants, encouraging ongoing exchange in an artistic research community of individuals.
Limited number of participants. Submission deadline is the 23rd of March.
Workshop is free of charge thanks to the support of the Goethe-Institut Bratislava.
When Fri, Mar 16th 2018 - 14:00 until Sun, Mar 18th 2018 - 02:30
Where Ri-make, Via Artesani 47, Milan, Italy
We are happy to be hosted by Ri-Make, a sort of squat in a disused bank, for a few installations and a free vvvv workshop!
The workshop will be on Saturday 17th March 2018, in the afternoon. At the last Patcher Kucha I organized, I met many people from the Brera Academy in Milan and noticed an increased interest in vvvv.
Thanks also to fusomario, who is spreading the vvvvord really hard, a few vvvv installations will be hosted.
My installation is related to mathematics, of course: it lets users interact with music generated by prime numbers and tells them the history of those numbers, while a real-time bitcoin data graph is shown... in some way they are related.
On the other hand, fusomario will present two installations (of course using vvvv, otherwise the Italian mafia gang would be disappointed): one is hard to believe at first - a vegetable connected to an Arduino that gets its feelings into an IOBox. I saw it with my own eyes and it is true: the vegetable responds to stimuli (light, sounds and caresses), connects with people, and manifests its feelings.
I have heard other people from Brera will show other installations using vvvv.
Come to vvvvisit us!
previously on vvvv: vvvvhat happened in January 2018
have we mentioned beta36 yet? it is still a release candidate and about to be released soon. quasi. so please take this last chance and make sure your latest projects will work with it once it is released, by getting the latest candidate and testing them against it. in the unlikely event that you have something bugging you, prettyplease let us know. in the comments, in the forum or in the chat. thanks.
in case you've missed the february news from the devvvv room, please catch up here. both of these can be tested in the candidate already:
and meanwhile we're already working on a branch towards b36.1, scheduled for about 2 months after the b36 release. it will mostly focus on a cleanup of all vl libraries and finally open-sourcing them so we can start accepting contributions.
and in general this place seems to have some related listings: das auge
two new ones
plus a work in progress
and a teaser
and quite a couple of updates
the DX11 pack by vux comes with fixes and new features, and its further development can now be supported: if you're using the pack you should definitely consider becoming a patron!
Who David Gann, schnellebuntebilder
When Sat, Apr 28th 2018 - 12:00 until Sat, Apr 28th 2018 - 18:00
Where schnellebuntebilder, Rudolfstr. 11, Berlin, Germany
Recently VVVV.js received a huge update which implements advanced rendering techniques like physically based rendering, instancing and depth-buffer based post effects.
To jump right into action follow this link: https://tekcor.github.io/vvvv.js-examples/
There is also a very detailed overview in written form here: http://000.graphics/tutorial/02_VVVV.js_Introduction.html
or watch the video:
In detail we will look at the following topics:
• Physically Based Rendering
• glTF Import and three.js model loading
• Instancing Engine
• Deferred Effects
• Collision Detection
• Terrain Rendering
• Derivative Maps (tangent-free Normal and Parallax Occlusion Mapping)
• Shader and Node Development for VVVV.js
Booking and Fees
Reserve your workshop seat now via mailto:firstname.lastname@example.org
There will be a fee of 50,00 € per attendee (reserving the room + support for the lecturer).
If you consider yourself a professional or your company sends you here, we use T.R.U.S.T to encourage you to pay a professional fee, which is 200,00 €.
Become a Patron
If you can't make it to Berlin but want to learn it, consider joining my online course or receiving direct mentoring by becoming a patron. Your support will accelerate the development of this amazing framework.
After a little while, here we are, a new version bump for dx11 rendering.
This as usual comes with some bug fixes and new features.
First, there were 2 reasonably major issues which are now fixed:
Next, of course, there are new features; here are a few selected ones (full changelog below):
For more than 6 years the DX11 pack has been free, and it will stay free.
The question about supporting development has been asked several times, but until now there was no official way to do so (except contacting me privately).
So from now on, dx11 development has a Patreon page to which you can provide monthly donations (various pledges with various rewards are available, including access to upcoming video workshop patches and custom support).
Who maarja, id144, nissidis
When Tue, Feb 13th 2018 - 20:00 until Tue, Feb 13th 2018 - 21:00
Where Prague, Františka Křížka 36, 170 00 Prague, Czech Republic
Everywhen is a dark, depressive and captivating contemplation on recurrence in history. Everywhen orbits around two sides of life - the personal and the political. An often caring, gentle and well-meaning personal life is contrasted here with the reality of the social individual, who can often be sinister, vengeful and hateful. But how can we be critical of our own world views when they are created by those closest to us?
Visuals, concept: Mária Júdová
Dance, concept: Soňa Ferienčíková
Music, concept: Alexandra Timpau
Lights: Ints Plavnieks
Technical support: Andrej Boleslavský, Constantine Nisidis
Produced by: BOD.Y – Zuzana Hájková
Supported using public funding by Slovak Arts Council
When we talk with our trusted VL pioneers, we often find them implementing timeline-like applications, whose main problem is to find the keyframe closest to a given time - often even finding the two closest keyframes and interpolating between them, weighted by the position of the current time.
Easy? Just order all keyframes by time, start at the first keyframe and go through the collection until you find one with a time greater than the time you are looking for. This is called linear search and might work very well at first, but obviously has two performance problems:
Binary search does the same task in a much smarter way: it starts with a keyframe in the middle of the collection and checks whether the time you are looking for is greater or smaller than this middle keyframe's. Now it can already rule out half of all keyframes and search the interesting half in the same way: take the middle keyframe and compare its time. As this rules out half of all remaining keyframes in every step, the search is over very quickly. In fact it's so stupid fast that on a 64-bit machine the maximum number of steps it has to perform is 64, because the machine cannot manage memory with more than 2^64 elements.
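As a rough textual sketch of that idea (plain Python, not the actual VL implementation; all names are made up for illustration):

```python
def binary_search(keys, key):
    """Return (lower_index, upper_index, success) for a sorted list:
    the indices of the closest elements at or below and above `key`."""
    if not keys or key < keys[0] or key > keys[-1]:
        return -1, -1, False       # search key out of range
    lo, hi = 0, len(keys) - 1
    while lo + 1 < hi:
        mid = (lo + hi) // 2       # look at the middle element...
        if keys[mid] <= key:
            lo = mid               # ...and rule out the lower half
        else:
            hi = mid               # ...or the upper half
    return lo, hi, True

times = [0.0, 1.0, 2.5, 4.0, 7.0, 9.0]   # keyframe times, sorted
print(binary_search(times, 3.0))          # -> (2, 3, True)
```

Each iteration halves the remaining range, which is where the at-most-64-steps bound comes from.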
The VL nodes cover several use cases. Depending on how your data is present you can choose from the following options.
The simplest node is just called BinarySearch and takes a collection of values. It returns the element that is lower and the one that is higher, their indices and a success boolean indicating whether the search key was in the range of the input values at all:
For simple scenarios that don't require a custom keyframe data type the BinarySearch (KeyValuePair) version can be used. It operates on the simple data type KeyValuePair that comes with VL.CoreLib and returns the values, keys and indices:
It also comes as BinarySearch (KeyValuePair Lerp) with an integrated linear interpolation between the values that is weighted by how far the search key is from the two found keyframes:
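The weighting it applies can be sketched like this (a hypothetical Python equivalent, not the node's actual code):

```python
def binary_search_lerp(keyframes, time):
    """keyframes: sorted list of (time, value) pairs.
    Returns the value linearly interpolated between the two
    keyframes surrounding `time`."""
    keys = [t for t, _ in keyframes]
    lo, hi = 0, len(keys) - 1
    while lo + 1 < hi:                 # plain binary search for the pair
        mid = (lo + hi) // 2
        if keys[mid] <= time:
            lo = mid
        else:
            hi = mid
    (t0, v0), (t1, v1) = keyframes[lo], keyframes[hi]
    if t1 == t0:
        return v0
    weight = (time - t0) / (t1 - t0)   # how far we are between the two keys
    return v0 + (v1 - v0) * weight

frames = [(0.0, 0.0), (1.0, 10.0), (2.0, 40.0)]
print(binary_search_lerp(frames, 1.5))   # -> 25.0
```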
If you have your own keyframe data type the BinarySearch (KeySelector) is your friend. It can be created as a region with a delegate inside that tells the binary search how to get the key from your custom type:
There is also BinarySearch (KeySelector Lerp) which has the same delegate and needs a Lerp defined for your keyframe that it can use internally. Your keyframe data type could look like this:
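In textual code, the key-selector idea corresponds to passing a function that extracts the key from your custom type. A Python sketch with an invented Keyframe type:

```python
from dataclasses import dataclass

@dataclass
class Keyframe:                    # hypothetical custom keyframe type
    time: float
    value: float

def binary_search_by(items, key, key_selector):
    """Binary search over `items` sorted by key_selector(item).
    Returns the surrounding lower and upper elements."""
    lo, hi = 0, len(items) - 1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if key_selector(items[mid]) <= key:
            lo = mid
        else:
            hi = mid
    return items[lo], items[hi]

frames = [Keyframe(0.0, 1.0), Keyframe(2.0, 5.0), Keyframe(5.0, -1.0)]
lower, upper = binary_search_by(frames, 3.0, lambda kf: kf.time)
print(lower.time, upper.time)      # -> 2.0 5.0
```

The lambda plays the role of the delegate region: the search itself never needs to know what a keyframe is, only how to get its key.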
The usage is then basically the same:
A timeline is of course just one use case where binary search is useful. All data that can be sorted by a specific key can be searched by it.
Speaking of sorting, if you add elements to a sorted collection binary search can help you to find the index at which to insert the new element. Use the Upper Index output as insert index like this:
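Python's standard library happens to ship the same idea as the bisect module, which makes the insert-index use case easy to demonstrate (a sketch of the concept, not the VL patch):

```python
import bisect

times = [0.0, 1.0, 4.0, 7.0]

# the upper index of the binary search is exactly the insert position
# that keeps the collection sorted
new_time = 3.0
index = bisect.bisect_right(times, new_time)
times.insert(index, new_time)
print(times)   # -> [0.0, 1.0, 3.0, 4.0, 7.0]
```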
So it can help you to keep the very same collection up to date that you use to lookup the elements.
A usage example can be found in girlpower\VL\_Basics\ValueRecorder.
Enjoy the search!
In the VVVV world you'll find four new nodes: UploadImage and UploadImage (Async) - both for DX9 and DX11, returning a texture. The former just takes an image and, when requested, uploads the image to the GPU; the latter takes an IObservable<IImage> and will upload whenever a new image gets pushed.
In the VL world you'll find ToImage nodes which allow you to build images out of arbitrary data. Here is a little Game Of Life example:
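For readers without the patch at hand, the data side of such an example can be sketched in ordinary Python: compute one Game of Life generation and flatten it into one grayscale byte per pixel - the kind of buffer a ToImage node would then turn into an image (a sketch only, not the girlpower patch):

```python
def life_step(grid):
    """One Game of Life generation on a 2D list of 0/1 cells
    (edges wrap around)."""
    h, w = len(grid), len(grid[0])
    def neighbours(y, x):
        return sum(grid[(y + dy) % h][(x + dx) % w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))
    return [[1 if (n := neighbours(y, x)) == 3 or (n == 2 and grid[y][x])
             else 0
             for x in range(w)]
            for y in range(h)]

def to_gray_pixels(grid):
    """Flatten the grid into one byte per pixel: live = 255, dead = 0."""
    return bytes(255 * cell for row in grid for cell in row)

# a 'blinker' oscillates between a horizontal and a vertical bar
grid = [[0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
print(to_gray_pixels(life_step(grid)))   # the bar is now vertical
```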
That one image is gray and the other red comes from the fact that we map a pixel format with one red channel to a format with one luminance channel in DX9 - not entirely correct, but better than seeing nothing at all.
So what is this new image interface exactly? Well, it came up in the past (https://discourse.vvvv.org/t/bitmap-data-type/6612) and re-surfaced again in VL - the topic of how to exchange images between different libraries. Nearly all of them come with their own image representation, like a Mat in OpenCV, a Sample in GStreamer, a Bitmap in GDI, an Image in WPF or just plain pointers in CEF - just to name a few we stumbled across in the past.
All of those libraries provide different sets of operations one can perform on their image representation, they have different sets of supported pixel formats and they also differ in how they reason about the lifetime of an image. In the end though we want all those node sets which will be built around those libraries to work together.
We therefore decided to add a new interface - simply called IImage - to our base types in VL with the intention to allow different node libraries to exchange their images. The idea is that the node libraries itself work with the image type they see fit and only provide ToImage and FromImage nodes which will act as the exit and entry points. Whether or not those entry and exit points have to copy the image is up to the library designer and probably also the library itself. For some it will be possible to write simple lightweight wrappers, for others a full copy will have to be done. If a certain pixel format is not supported by the library it is fine to throw an UnsupportedPixelFormatException which will inform the user to either change the whole image pipeline to a different pixel format or insert a conversion node so the sink can deal with it.
Before diving any deeper here are two screenshots from a little example image pipeline, getting images pushed in the streaming thread from a GStreamer based video player, using OpenCV to apply a dilate operator on them and passing them down to vvvv for rendering:
The image interface comes with a property Info returning a little struct of type ImageInfo containing size and pixel format information. With this struct it's easy to check whether the size or the pixel format of an image changed. The pixel format is an enumeration with just a few entries of what we thought are the most commonly used formats. Since there are many, many others, the image info also comes with an OriginalFormat property where an image source can simply put in the original format string - whatever that is. But it at least gives sinks a little chance to interpret the image data correctly.
The second method on the interface, called GetData, is used for reading the image. It returns the IImageData interface pointing to the actual memory. Since IImageData inherits from IDisposable, the returned image data needs to be disposed by the caller. With this design it should be possible to implement all sorts of image reading facilities - such as pin/unpin, map/unmap, lock/unlock etc.
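To make the shape of the interface concrete, here is a minimal Python model of the pieces described so far; the names mirror the VL interfaces, but the code is purely illustrative, not the actual .NET types:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImageInfo:
    """Mirrors the ImageInfo struct: size plus pixel format information."""
    width: int
    height: int
    format: str             # one of the few common enum entries, e.g. "R8"
    original_format: str    # whatever string the source library reported

class MemoryImageData:
    """Read access to the pixels; must be disposed by the caller, so it
    is modeled as a context manager (like IImageData : IDisposable)."""
    def __init__(self, data):
        self._data = data
    def __enter__(self):
        return self._data
    def __exit__(self, *exc):
        self._data = None          # release the 'pinned' memory
        return False

class MemoryImage:
    """A minimal in-memory stand-in for an IImage implementation."""
    def __init__(self, info, pixels, is_volatile=False):
        self.info = info
        self.is_volatile = is_volatile
        self._pixels = pixels
    def get_data(self):
        return MemoryImageData(self._pixels)

img = MemoryImage(ImageInfo(2, 2, "R8", "GRAY8"), bytes([0, 255, 255, 0]))
with img.get_data() as pixels:     # disposal happens when the block ends
    print(len(pixels))             # -> 4
```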
In order to avoid copying data, the image interface comes with a last property, IsVolatile, which when set tells a sink that the data in the image is only valid in the current call stack - so it can either read from the image immediately or, if that is not possible, it will need to clone it. We expect image implementations to return the data of a default image in case the read access happened too late. Imagine one puts volatile images into a queue without copying them first: the result should be a bunch of white quads, so those errors become visible immediately.
In case the volatile flag is not set we expect the image data to stay the same so no further copying is necessary on the sink. It can hold on to the image as long as it wants.
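A sink respecting that contract might look like the following Python sketch; FakeImage and keep_for_later are invented names for illustration:

```python
class FakeImage:
    """Illustrative stand-in for an image carrying the IsVolatile flag."""
    def __init__(self, data, is_volatile):
        self.data = bytearray(data)
        self.is_volatile = is_volatile

def keep_for_later(image):
    """A sink that wants to hold on to an image beyond the current call:
    volatile images must be copied immediately, stable ones can be kept."""
    if image.is_volatile:
        # data is only valid inside this call stack: deep-copy it now
        return FakeImage(bytes(image.data), is_volatile=False)
    # non-volatile images are promised to stay unchanged: keep as-is
    return image

volatile = FakeImage(b"\x01\x02", is_volatile=True)
kept = keep_for_later(volatile)
volatile.data[0] = 0        # the source reuses its buffer after the call...
print(kept.data[0])         # -> 1, the copy is unaffected
```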
We further provide a couple of helpful extension methods on the IImage interface, like Clone/CloneEmpty, or making an image accessible as a System.IO.Stream.
With this in mind let's look how to expose library specific image types: