
Growing a consistent library of nodes and presenting it in a clear way in the NodeBrowser is one of the great challenges we are facing. For the upcoming release we made this our focus: clean up the core library VL offers and add new features to the NodeBrowser for easier browsing. So here is what you get:

Filtering by Aspects

The NodeBrowser got a couple of new toggle buttons that let you filter for different aspects. Aspects include "Advanced", "Experimental", "Obsolete"...

Node list filter buttons

Button layout: Time goes from left to right (Obsolete -> Experimental), high level to low level from top to bottom (Basic -> Advanced).
By default you see all normal nodes, i.e. those without any aspect. This is what we consider the 80% most important nodes for the casual user. Browsing the categories with this setting should give you a good, not too overwhelming overview of what functionality is available. For example it excludes all nodes using types we consider advanced, like Float64, Integer64, ..., mutable collections (like Array, ...) and often nodes that we hope you won't have to use every day.

Browsing the "Primitive" Category

Advanced

If you're looking for a node and it doesn't show up you can enable the "Advanced" aspect. This will include many more nodes. Obviously the distinction between normal and advanced is very subjective and will vary widely for different users and use-cases. So we are aware that there is room for debate and nodes may be moved between normal and advanced in future versions. The good thing about this is that such a move between aspects never breaks any patches!
You can toggle this filter button with the TAB key.

Experimental

Includes nodes that we haven't yet fully committed to. Use those at your own risk. We don't guarantee that they'll still work the same in future versions.

Obsolete

Includes nodes that we only still ship to not break existing patches. Don't use them in new patches. Most likely there is a better version of the node available already.

Internal

Nodes can have the "Internal" aspect set, meaning they are only visible within their .vl document but don't get exposed. So if another document references the first one, it will only see nodes that are not Internal. Using the toggle you can hide those internal nodes from the NodeBrowser to have a view on the node set that other documents would see.

Defaults

The default setting for the filter buttons for Advanced, Experimental and External on startup of VL can be adjusted in the settings.xml file. If you are working a lot with .dlls from C# projects and you need to restart often, you probably want to enable the 'External' button by default.

Assign Aspects to Nodes

For information about assigning nodes to aspects, see this gray book section and feel free to open a thread in the forums for specific questions.

Filter externals

Referencing an external .dll often leads to a very large number of extra nodes in the NodeBrowser. Using the "External" toggle you can decide whether to see those in the NodeBrowser or not. The NodeBrowser is also faster when it doesn't have to deal with those.
You can toggle this filter button with the SHIFT+TAB keyboard shortcut.

Stateful Nodes

Stateful nodes, aka process nodes, got merged with their stateless siblings of the same name. If there is a process node and a stateless node with the same name, you get a "..." icon in the NodeBrowser and after clicking on it you choose which one you want.

More Collections

This category got the most rework, so it's worth mentioning its specific changes here.

VL encourages the use of immutable collections because they work very well together with the data flow paradigm. But when working with .NET libraries you often have to deal with mutable collections, so we now provide all commonly used collections of the .NET framework!

In general we try to do as little renaming as possible when importing data types. But for the collections we took the liberty of renaming them as follows:

All immutable .NET collections drop the Immutable prefix since it's the default in VL:

  • ImmutableArray -> Array
  • ImmutableDictionary -> Dictionary
  • ...
All mutable .NET collections get a Mutable prefix:

  • Array -> MutableArray
  • Dictionary -> MutableDictionary
  • ...

All interfaces for collections moved into the Interfaces sub-category and we introduced a Common category that contains data types and nodes that are used together with collections like KeyValuePair or CustomEqualityComparer.

Rectangle Improvements

The nodes to create rectangles are much more flexible now. You can specify the size of the rectangle and an anchor position. Depending on the enum input Anchor, the position gets interpreted as any one of the significant points of a rectangle, like TopLeft, Center, BottomRight, ... The split node also got this enum to specify which point of the rectangle the output position should refer to.

Additionally there is a node to create a rectangle spanned by two points and one to create a rectangle from the coordinate values of its edges.

We also added Inflate and Scale nodes. Inflate can offset the rectangle edges in each direction by a specific amount and Scale multiplies the size of the rectangle in each direction. Both nodes come in Centered and Uniform versions, which assign the same value to the horizontal and vertical direction or the same value to all directions.
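
To illustrate the math behind these two nodes, here is a minimal C# sketch (our own reading of the descriptions above, not the VL implementation; the Rect type is made up for the example):

using System.Numerics;

// Hypothetical rectangle type, given by center and size, used only to illustrate the node math.
public readonly struct Rect
{
    public readonly Vector2 Center;
    public readonly Vector2 Size;
    public Rect(Vector2 center, Vector2 size) { Center = center; Size = size; }

    // Inflate: offset each edge outward by the given amount per direction,
    // so the width grows by 2 * amount.X and the height by 2 * amount.Y.
    public Rect Inflate(Vector2 amount) => new Rect(Center, Size + amount * 2);

    // Scale: multiply the size in each direction, keeping the center fixed.
    public Rect Scale(Vector2 factor) => new Rect(Center, Size * factor);
}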

Easier switch from Adaptive to specific

Some nodes (like +, ...) that you place in a patch are not tied to a specific type (like Float32) unless you make a connection to them. This is a very useful feature in VL and works 99% of the time. Sometimes though you are smarter than the compiler and want to specify a concrete implementation. In this case, after placing the node you now simply double-click it to get all available options and choose one of them.

Destroy is now Dispose

In general we're planning to stick as close as possible to the naming in the .NET world. This will make VL more inviting for external developers to join and also makes documentation easier to find. Therefore we decided to rename "Disposable" back to "IDisposable" and "Destroy" back to "Dispose".

As always, head over to the alpha builds and report your findings.
Enjoy the new order and happy patching!

yours,
devvvvs

tonfilm, Thursday, Jun 7th 2018

just because!

since we can, the addonpack is now shipping with 4 new nodes:

  • HTMLTexture (DX11.Texture String CEF)
  • HTMLTexture (DX11.Texture Url CEF)
  • HTMLView (Image String)
  • HTMLView (Image Url)

What happened here is that the original EX9-based HTMLTexture nodes have been replaced by the more generic HTMLView (Image) nodes, which can easily be used for both EX9 and DX11 via the respective UploadTexture nodes introduced recently. So the EX9 and DX11 variants of the HTMLTexture nodes are now mere modules wrapping HTMLView -> UploadTexture.

The 'CEF' in the version of the DX11 modules distinguishes them from the also available contribution HTMLTexture (DX11) by gumilastik, which has the same name but internally uses a different backend than CEF. So hooray, more choices for you!

Just in case you wonder: Obviously the DX11 variants still require the DX11 Pack to work!

And while we were at it, we also updated the underlying backend to reflect Chromium 66.0.3359.117.

Bonustrack:
With the Image nodes you can also pipe websites into the recently introduced VL.OpenCV. This may be of quite special interest, but i'm sure one fine day someone will need to do exactly that:

HTMLView processed via VL.OpenCV and displayed via Preview (DX11.Texture). Thankyouverymuch!

Available in latest alphas now. Please test and report your findings!

joreg, Thursday, May 31st 2018

Welcome dear patchers to a new episode of devvvvs giving you control over your PC mainboard.

When you work in vvvv or VL the evaluation of your patch is automatically driven by a mainloop. It executes the nodes in your patch (usually) 60 times per second and by this allows changes to happen in your patch over time.

If you have a look at the PerfMeter in a renderer with a mainloop timer without any tweaks you will see lots of flickering like this:

Those flickers indicate that the time between two frames of the mainloop is changing a bit every frame. In an ideal world those flickers would not be there and the time between two frames would always be the same. An unstable mainloop like this creates jitter in animations, drops video frames and lets the visual output of your patch look less smooth.

It's quite a difficult task to get high-precision timer events on a modern computer architecture. Timers and I go way back to the early vvvv days at MESO, when I worked on the vvvv mainloop and the Filtered time mode. Since then we could improve the vvvv mainloop time stability quite a bit by doing tricks like changing the Windows system timer resolution and introducing a short busy wait phase at the end of the mainloop. The result of this work looks like this:

The experience gathered from the vvvv mainloop improvements is now available in the VL library, so you can build your own sub-mainloops.

But why would you need your own timer at all if you have a good mainloop already? There are a few reasons:

  • Have a second mainloop on a different thread to split up the work or avoid blocking the main one
  • Process something at a slower rate than the mainloop to save performance, e.g. web requests every few seconds
  • Process something at a higher rate than the mainloop, e.g. output to a micro controller or servo motor
  • Run parts of your patch in their own mainloop to avoid blocking user input from vvvv

General Node Design

In VL the patch of a process node by default has a Create and an Update operation. Create gets called when an instance of the process is created and Update gets called periodically by the mainloop. In this process node patch you can place other process nodes that 'plug into' those two operations by placing their own Create on the Create of the surrounding patch and their Update on the Update of the surrounding patch.

This is the same for stateful regions like ForEach [Reactive], only that the Update of the ForEach region doesn't get called automatically by the surrounding patch but gets called by the events of the incoming observable. More on that in this blog post: VL: Reactive Programming

There are many sources of observable events, for example Mouse, Keyboard and other input devices as well as AsyncTask or MidiIn. The timer nodes work in the same way. The output is an Observable that runs on a new thread and sends either the frame number (for the system timer nodes) or a TimerClock (for the MultimediaTimer or BusyWaitTimer). A patch would look like this:

The use of observables also makes it easy to swap one timer for another if necessary.
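
If you are more at home in C# than in patches, here is a rough sketch of the same idea using System.Reactive (our own illustration, not the actual VL node implementation): the timer is just an observable that pushes ticks on its own thread, and swapping timers simply means swapping the observable.

using System;
using System.Reactive.Linq;

class TimerSketch
{
    static void Main()
    {
        // A "system timer" style source: pushes an increasing tick count roughly every 16 ms
        // on a scheduler thread, comparable to the frame number the system timer nodes send.
        IObservable<long> timer = Observable.Interval(TimeSpan.FromMilliseconds(16));

        // Whatever you patch inside a ForEach [Reactive] region corresponds to this subscription.
        using (timer.Subscribe(frame => Console.WriteLine($"frame {frame}")))
        {
            Console.ReadLine();   // keep the program alive while ticks arrive
        }
        // Swapping the timer only means building a different observable; the subscriber stays the same.
    }
}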

Basically there are 3 ways to set up timers in Windows and now VL has them all!

System Timer

This is the most common timer but it usually only has a precision of 16ms. It can be used for recurring events when accuracy is not the most important issue and the interval is in the seconds range or a higher millisecond range. Nodes that use these timers are for example Interval and Timer in category Reactive:

Multimedia Timer

This is a dedicated timer for applications that do video or midi event playback. It is fairly accurate to about 1ms and doesn't need much CPU power. So it can be used for most time critical scenarios. To use this timer, make sure you enable the Experimental button in the VL node browser and create the node MultimediaTimer:

So that's nice, but it has two little drawbacks. You can only specify the period in whole milliseconds and, as you can see, there is still some flickering in the measured period times. The flickering is well below 1ms, but still, we can improve that:

Busy Wait Timer

Since it's possible to measure time with high accuracy, one can write an infinite loop that always checks the time and executes an event once the specified interval time has passed. This timer always uses 100% of one CPU core because it checks the time as often as it can. But hey, how many cores do you have these days? With this method you can achieve precision in the microsecond range, which is insane!

If any patch processing is happening on the timer event, the power of your core is of course shared with the busy wait. Just make sure that the processing doesn't take longer than the specified period:

This timer has an option to reduce CPU load for period times that are higher than the accuracy of your system timer. You can specify a time span called Wait Accuracy. This is a time span before the desired end of the period that specifies when the busy wait phase should start. Before that time the timer is set to sleep for 1ms periodically. 16ms is a safe value, but you can decrease it until the Last Period starts to jump in order to reduce CPU load even more.
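
For the curious, here is a rough C# sketch of the busy wait idea described above, including the Wait Accuracy optimization (our own illustration, not the actual VL node code):

using System;
using System.Diagnostics;
using System.Threading;

class BusyWaitTimerSketch
{
    // Calls 'onTick' once per 'period'. Only the last 'waitAccuracy' of each period
    // is spent busy waiting; before that the thread sleeps in 1 ms steps.
    static void Run(TimeSpan period, TimeSpan waitAccuracy, Action onTick)
    {
        var clock = Stopwatch.StartNew();
        var nextDeadline = period;
        while (true)
        {
            // Cheap phase: sleep while we are still far away from the deadline.
            while (nextDeadline - clock.Elapsed > waitAccuracy)
                Thread.Sleep(1);

            // Busy wait phase: spin until the deadline has actually passed.
            while (clock.Elapsed < nextDeadline) { /* spin */ }

            onTick();
            nextDeadline += period;
        }
    }
}

With a period of, say, 100 ms and a Wait Accuracy of 16 ms, the loop sleeps for most of the period and only burns a core for the last few milliseconds; with very short periods the sleep phase never kicks in and it behaves like a pure busy wait.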

Threading

Both the MultimediaTimer and the BusyWaitTimer start their own background thread with priority AboveNormal. The thread priority setting might become an input pin in the future.

So now download the latest alpha, enable the Experimental button in the VL node browser and give it a shot. If anything unexpected happens, let us know in the forums.

yours,
devvvvs

tonfilm, Wednesday, Apr 18th 2018

computervisionados!

we're starting to collect the fruits of our hard efforts of making it easy to use third-party libraries. please give a warm welcome to VL.OpenCV

What the?

remember the amazing ImagePack initiated by @elliotwoods years ago? VL.OpenCV is essentially the same, only for vl: a collection of nodes for computer-vision tasks based on the industry standard library OpenCV.

OpenCV is a vast library with an endless number of interesting features. elliot back in the day did a great job in hand-picking some of the most interesting ones and wrapping them into easy to use nodes for evvvveryone.

meanwhile OpenCV has progressed and so we thought we'd give it a try and make it accessible for everyone in vl. watch this first episode of vvvvTv where ravazquez, who has been working on this for the past 2.5 months, explains how you can use the prerelease package today.

Status

if we haven't missed anything, most of the functionality you know from the ImagePack should already be available, except for some special video input devices and the StructuredLight and FeatureDetection stuff, but on the other hand there is already much more:

so we have:

  • sources: ImageFile, VideoFile, VideoIn (Directshow)
  • sinks: ImageFile, VideoFile
  • filters: Blur, EdgeDetection, Average, FrameDifference...
  • different background subtraction options
  • optical flow
  • camera/projector calibration
  • trackers: contours, face recognizer, houghlines, object detection (via haarcascades), different 2d tracker options
  • ar marker tracking and pose estimation
  • utils: Info, CountNonZero, FindNonZero, Resize, Crop, Insert, Split, Merge, Delaunay, Voronoi,...

and most of the nodes and pins come with documentation in the tooltips!

Threading

as opposed to the ImagePack, this library is completely free of the complexities of threading. instead a user can use the threading regions of vl to define their own threading. while this indeed puts a bit more effort on the user, we hope that the flexibility of dealing with their own threading outweighs the cons of this.

Next

the library is open for everyone to contribute. since it is mostly done in pure vl, with hardly any C# written, it is quite accessible for everyone to extend. so please do so and best join us in the chat to discuss matters when they arise.

joreg, Thursday, Mar 29th 2018

helo evvvveryone,

it is happening: beta36 is scheduled for a release in early february. we're quite confident with the state of the new features we've added and would like to ask you to give it a final spin with the release-candidates as listed below. please open the projects you're currently working on and see if they run as expected. if not, please let us know in the forum using the alpha tag.

Release Highlights

since the dawn of vl, vvvv has become increasingly powerful. we see initial proof in the works of schnellebuntebilder and intolight, who are already using the combination to their advantage. it allows them to create projects of a complexity that would have been very hard if not impossible to realize with vvvv alone.

so far though, vl could only be used for IO and logic tasks. anything related to rendering was still in the hands of vvvv DX9/DX11 only. with beta36 we're introducing a new bridge that will allow you to prepare textures and buffers with the convenience of vl features and hand them over to vvvv using a new set of nodes. have a look at \girlpower\VL\DX\DynamicBuffersAndTextures.v4p to see how this works! and here are some more highlights:

VVVV

VL

for an in-depth list of changes have a look at the changelog.

Download

  • 64-bit: vvvv (beta36_rc15), addons (beta36_rc15)
  • 32-bit: vvvv (beta36_rc15), addons (beta36_rc15)
VL documents you save with these candidates will not open anymore with beta35.8!

if you have the feeling that this release will not have anything for you, we'd only partly agree. true, maybe not directly. but we'd like to point out that what's hidden behind the unpretentious bullet point "Use .NET Libraries and Write Custom Nodes" listed under vl above can conservatively be understood as a bombshell. it means that anyone now has access to a vast range of .NET libraries in vl and therefore can also use those in vvvv. while this may exceed your personal abilities, it lowers the barrier to contribute to vvvv/vl in general by far and if we get this communicated right, this should be a win-win for evvvveryone. so tell your .NET developer friends about this..they should understand the implications.

at the same time this makes it easier for ourselves to now start building more interesting libraries for vl, which in turn will be a win for all vvvv users as well. hope this makes sense..

but now we wish you all some happy holidays and are waiting for your feedback on the candidate!

joreg, Saturday, Mar 17th 2018

When we talk with our trusted VL pioneers we often find them implementing timeline-like applications, whose main problem is finding the keyframe closest to a given time, often even finding the two closest keyframes and interpolating between them, weighted by the position of the current time.

Easy? Just order all keyframes by time, start at the first keyframe and go through the collection until you find one that has a time greater than the time you are looking for. This is called linear search and might work very well at first, but it obviously has two performance problems:

  • The bigger the time you are looking for gets, the more checks you have to perform
  • The more keyframes you have, the more checks you have to perform

Enter Binary Search

Binary search does the same task in a much smarter way: It starts with a keyframe in the middle of the collection and checks whether the time you are looking for is greater or smaller than this middle keyframe. Now it can rule out half of all keyframes already and search the interesting half in the same way: Take the middle keyframe and compare its time. As this rules out half of all remaining keyframes in every step, the search is over very quickly. In fact it's so stupid fast that on a 64-bit machine the maximum number of steps it has to perform is 64, because the machine cannot manage memory with more than 2^64 elements.
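
If you prefer reading code over prose, here is a compact C# sketch of such a search (our own illustration of the principle, not the code behind the VL nodes): it returns the indices of the keyframes directly below and above the search key.

using System.Collections.Generic;

static class BinarySearchSketch
{
    // Keys must be ordered from low to high.
    // 'lower' is the index of the last key <= searchKey (-1 if none),
    // 'upper' is the index of the first key > searchKey (keys.Count if none).
    public static bool Find(IReadOnlyList<float> keys, float searchKey, out int lower, out int upper)
    {
        int lo = 0, hi = keys.Count;          // current search window [lo, hi)
        while (lo < hi)
        {
            int mid = (lo + hi) / 2;          // take the middle keyframe
            if (keys[mid] <= searchKey)
                lo = mid + 1;                 // search key is above: rule out the lower half
            else
                hi = mid;                     // search key is below: rule out the upper half
        }
        lower = lo - 1;
        upper = lo;
        return lower >= 0 && upper < keys.Count;   // was the search key inside the key range?
    }
}

The upper index is also the position at which a new element with this key would have to be inserted to keep the collection sorted, which is exactly how the Upper Index output is used further down.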

VL Nodes

The VL nodes cover several use cases. Depending on how your data is present you can choose from the following options.

All nodes expect that the input collection is ordered by the search key from low to high.
Only Values

The simplest node is just called BinarySearch and takes a collection of values. It returns the element that is lower and the one that is higher, their indices and a success boolean indicating whether the search key was in the range of the input values at all:

Key Value Pairs

For simple scenarios that don't require a custom keyframe data type the BinarySearch (KeyValuePair) version can be used. It operates on the simple data type KeyValuePair that comes with VL.CoreLib and returns the values, keys and indices:

It also comes as BinarySearch (KeyValuePair Lerp) with an integrated linear interpolation between the values that is weighted by how far the search key is from the two found keyframes:

Custom Data Types

If you have your own keyframe data type the BinarySearch (KeySelector) is your friend. It can be created as a region with a delegate inside that tells the binary search how to get the key from your custom type:

There is also BinarySearch (KeySelector Lerp) which has the same delegate and needs a Lerp defined for your keyframe that it can use internally. Your keyframe data type could look like this:

The usage is then basically the same:

Other usages

A timeline is of course just one use case where binary search is useful. All data that can be sorted by a specific key can be searched by it.
Speaking of sorting, if you add elements to a sorted collection, binary search can help you find the index at which to insert the new element. Use the Upper Index output as insert index like this:

So it can help you to keep the very same collection up to date that you use to lookup the elements.
A usage example can be found in girlpower\VL\_Basics\ValueRecorder.

Enjoy the search!

Yours,
devvvvs

tonfilm, Monday, Feb 12th 2018

In the VVVV world you'll find four new nodes, UploadImage and UploadImage (Async) - both for DX9 and DX11, returning a texture. The former just takes an image and, when requested, uploads it to the GPU; the latter takes an IObservable<IImage> and will upload whenever a new image gets pushed.
In the VL world you'll find ToImage nodes which allow you to build images out of arbitrary data. Here is a little Game Of Life example:

Generating images in VL
Rendering images in DX9 and DX11

That one image is gray and the other red comes from the fact that we map a pixel format with one red channel to a format with one luminance channel in DX9 - not entirely correct, but better than seeing nothing at all.

The interface in detail

So what is this new image interface exactly? Well, it came up in the past (https://discourse.vvvv.org/t/bitmap-data-type/6612) and resurfaced again in VL - the topic of how to exchange images between different libraries. Nearly all of them come with their own image representation, like a Mat in OpenCV, a Sample in GStreamer, a Bitmap in GDI, an Image in WPF or just plain pointers in CEF - just to name a few we stumbled across in the past.
All of those libraries provide different sets of operations one can perform on their image representation, they have different sets of supported pixel formats and they also differ in how they reason about the lifetime of an image. In the end though we want all those node sets which will be built around those libraries to work together.

We therefore decided to add a new interface - simply called IImage - to our base types in VL with the intention to allow different node libraries to exchange their images. The idea is that the node libraries itself work with the image type they see fit and only provide ToImage and FromImage nodes which will act as the exit and entry points. Whether or not those entry and exit points have to copy the image is up to the library designer and probably also the library itself. For some it will be possible to write simple lightweight wrappers, for others a full copy will have to be done. If a certain pixel format is not supported by the library it is fine to throw an UnsupportedPixelFormatException which will inform the user to either change the whole image pipeline to a different pixel format or insert a conversion node so the sink can deal with it.

Before diving any deeper here are two screenshots from a little example image pipeline, getting images pushed in the streaming thread from a GStreamer based video player, using OpenCV to apply a dilate operator on them and passing them down to vvvv for rendering:

The image interface comes with a property Info returning a little struct of type ImageInfo containing size and pixel format information. With this struct it's easy to check whether the size or the pixel format of an image changed. The pixel format is an enumeration with just a few entries of what we thought are the most commonly used formats. Since there are many, many others, the image info also comes with an OriginalFormat property where an image source can simply put in the original format string - whatever that is. But it at least gives sinks a little chance to interpret the image data correctly.

/// <summary>
/// Gives read-only access to images.
/// </summary>
public interface IImage
{
    /// <summary>
    /// A structure containing size and format information of the image.
    /// </summary>
    ImageInfo Info { get; }
 
    /// <summary>
    /// Gives access to image's data. Must be disposed after being used.
    /// </summary>
    IImageData GetData();
 
    /// <summary>
    /// A volatile image is only valid in the current call stack.
    /// </summary>
    bool IsVolatile { get; }
}

The second member of the interface, called GetData, is used for reading the image. It returns the IImageData interface pointing to the actual memory. Since IImageData inherits from IDisposable, the returned image data needs to be disposed by the caller. With this design it should be possible to implement all sorts of image reading facilities - such as pin/unpin, map/unmap, lock/unlock etc.

/// <summary>
/// Used for reading images.
/// </summary>
public interface IImageData : IDisposable
{
    /// <summary>
    /// The pointer to the data.
    /// </summary>
    IntPtr Pointer { get; }
 
    /// <summary>
    /// The data size in bytes.
    /// </summary>
    int Size { get; }
 
    /// <summary>
    /// The scan size (one row of pixels including padding) in bytes.
    /// </summary>
    /// <remarks>If the scan size times the image height is not equal to the size data copying has to be done row by row.</remarks>
    int ScanSize { get; }
}
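
To make the ScanSize remark concrete, a sink that copies the pixels out might look roughly like this (a sketch; only IImage, IImageData and their members come from the interfaces above, 'height' and 'rowLength' stand for whatever the ImageInfo struct exposes for the image height and the packed bytes per row):

using System;
using System.Runtime.InteropServices;

static class ImageReadingSketch
{
    public static byte[] CopyPixels(IImage image, int height, int rowLength)
    {
        using (IImageData data = image.GetData())      // must be disposed after being used
        {
            var pixels = new byte[height * rowLength];
            for (int y = 0; y < height; y++)
            {
                // ScanSize includes any padding at the end of a row, rowLength does not,
                // so stepping by ScanSize and copying rowLength bytes per row works either way.
                Marshal.Copy(data.Pointer + y * data.ScanSize, pixels, y * rowLength, rowLength);
            }
            return pixels;
        }
    }
}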

In order to avoid copying data the image interface comes with a last property IsVolatile which, when set, tells a sink that the data in the image is only valid in the current call stack - so it can either read from the image immediately or, if that is not possible, it will need to clone it. We expect image implementations to return the data of a default image in case the read access happened too late. Imagine one puts volatile images into a queue without copying them first: the result should be a bunch of white quads, so those errors become visible immediately.
In case the volatile flag is not set we expect the image data to stay the same so no further copying is necessary on the sink. It can hold on to the image as long as it wants.

We further provide a couple of helpful extension methods on the IImage interface, like Clone/CloneEmpty, or making an image accessible as a System.IO.Stream.
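
For a sink that wants to hold on to an image beyond the current call stack, the volatile flag therefore boils down to a single check; a minimal sketch using the Clone extension mentioned above:

static IImage KeepForLater(IImage image)
{
    // Volatile images are only valid right now, so take a private copy;
    // non-volatile images can be held on to as they are.
    return image.IsVolatile ? image.Clone() : image;
}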

With this in mind let's look how to expose library specific image types:

  • In case the library newly allocates the memory for the image on the managed heap, not much has to be done except writing a little wrapper implementing our image interface, returning false on the IsVolatile property and basically just forwarding all interface calls to the original image type (see the sketch after this list).
  • The library takes the memory from a pool or uses some ref count mechanism. In this case it's most certainly mandatory to ensure that the original image gets disposed. If the image gets pushed from the library we recommend simply pushing the image further and disposing it right after. If the image needs to get pulled from the library the wrapper should also implement the IDisposable interface and hand it downstream inside the resource provider monad so that the disposal behavior is correct once all the sinks are done using the wrapper. The third option is to simply copy the data into a private image one can hand downstream.
  • The library always returns an image pointing to the same memory. Similar to the previous case, except that one must not call dispose on the original image.
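
For the first case such a wrapper can stay very small. Here is a sketch (assuming the library hands us a plain managed byte array; the ManagedImageWrapper name and the ImageInfo construction are ours):

using System;
using System.Runtime.InteropServices;

// Wraps pixel data that lives in a managed array: non-volatile, pinned only while being read.
sealed class ManagedImageWrapper : IImage
{
    readonly byte[] pixels;
    readonly int scanSize;

    public ManagedImageWrapper(byte[] pixels, int scanSize, ImageInfo info)
    {
        this.pixels = pixels;
        this.scanSize = scanSize;
        Info = info;
    }

    public ImageInfo Info { get; }
    public bool IsVolatile => false;                  // the array stays valid, sinks may hold on to it
    public IImageData GetData() => new PinnedData(pixels, scanSize);

    sealed class PinnedData : IImageData
    {
        GCHandle handle;
        public PinnedData(byte[] pixels, int scanSize)
        {
            handle = GCHandle.Alloc(pixels, GCHandleType.Pinned);   // pin while the sink reads
            Size = pixels.Length;
            ScanSize = scanSize;
        }
        public IntPtr Pointer => handle.AddrOfPinnedObject();
        public int Size { get; }
        public int ScanSize { get; }
        public void Dispose() => handle.Free();                     // unpin once the sink is done
    }
}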

Example implementations can be found in VL.Core, VL.OpenCV and VL.GStreamer.

Elias, Monday, Feb 5th 2018

Dynamic Buffers

The current vvvv alpha and the upcoming vvvv beta36 have a new set of nodes that allows you to quickly upload data from VL to the graphics card. We had a WIP forum discussion about it here: VL - Custom Dynamic Buffer

On the VL side the nodes are called ToBufferDescription and we have them for the basic data types that usually hold big chunks of data: Spread, Array, IntPtr and Stream. The vvvv side is rather easy and only has one node called UploadBuffer (DX11.Buffer).

Primitive Data Types

Primitive types work out of the box and don't need any special treatment. Just make sure you define the correct Buffer type in the shader. This works for Integers, Floats, Vectors and so on, everything that is available in the shader as a primitive type. Here is an example for Float32:

The only exception is Matrix: it needs to be transposed in order to work like a normal transformation input. If you send a large number of individual matrices to the shader, the most efficient way is to do the transpose in the shader directly:

If the same matrix is re-used very often or you don't have access to the shader code, simply transpose in VL:

Custom Data Types

If you want to define your own data types like light information or a custom vertex type in the shader then you need to pack the data accordingly in the buffer description. For this task the ToBufferDescription (Stride) nodes are used. They allow you to make a buffer description out of primitive types like float or even byte and set the stride size of your custom type in bytes so that the shader can read the custom type directly out of the buffer.

Matrix hint: If you define a matrix in a custom type in the shader you can use the row_major modifier to automate the transpose operation.

struct MyLightType
{
    float3 Direction;
    float Brightness; 
    row_major float4x4 Transformation; //set matrix type
};

Performance hint: If you can, design your custom types in a way that the byte count is a multiple of 16; sometimes it makes sense to insert unused floats as padding:

//would have 20 bytes, but blown up to 32 bytes (2 x 16) for faster read performance
struct Circle
{
    float4 Position;
    float  Radius;
    float pad0;
    float pad1;
    float pad2;
};

More info: https://developer.nvidia.com/content/understanding-structured-buffer-performance

Custom types in C#

If you are a C# coder you can also define a struct in C# with attribute StructLayout(LayoutKind.Sequential) and the same byte layout, import it in VL and pass that directly into the buffer. Then you don't need the node with version StrideSize because the data type size already matches.

[StructLayout(LayoutKind.Sequential)]
public struct Circle
{
    public Vector4 Position;
    public float Radius;
    float pad0;
    float pad1;
    float pad2;
 
    public Circle(Vector4 position, float radius)
    {
        Position = position;
        Radius = radius;
    }
}

Dynamic Raw Buffers

While we were at it doing the dynamic buffer nodes, it was easy to also add raw buffers. These buffers are from older shader models and can only be filled with bytes. On the shader side, however, you can still define custom types. The only difference in HLSL is that you write Buffer<YourType> instead of StructuredBuffer<YourType>.

The node set is basically the same except that the VL part is not generic and only accepts bytes as input. The node names are ToRawBufferDescription in VL and UploadBuffer (DX11.Buffer Raw) in vvvv.

Raw buffers have no advantage except when you have to deal with an older graphics card, driver or shader code.

Examples

A VL patch with shader code can be found in the latest alphas under girlpower\VL\DX\DynamicBuffersAndTextures.v4p. It is also used by @mburk for material management in his latest superphysical pack.

So now you can start sending your data up to the card and enjoy the speed. As always, if any questions arise hit us up in the forums.

yours,
devvvvs

tonfilm, Thursday, Feb 1st 2018
Some brief statements


A quad is quadratic

This quad has a quadratic shape.


Positioning an element at the mouse position results in that element being shown at the mouse position

This small quad is aligned to the mouse.


Using touch positions for positioning results in elements drawn at your fingertips

These quads show up where I touched the screen.


Interact with objects in world space, even in complex multi-screen setups. Do that with the system cursor, not a displaced rendered cursor

Star

All this wasn't something that you could take for granted. Up to now.
I had to tease you first, before going into detail. If you think about the statements above, or even if you don't, all of the above should be just normal, no-brainers. Having a non-quadratic screen is the case 99% of the time. Since these cases occur that often, we should make them easy to work with.

So from now on we have

  • Auto Aspect Ratio in the renderer, so you don't need to do that AspectRatio (Transform) involving cyclic graph with the 3 links
  • you can disable Auto Aspect Ratio and still feed your own for the more complex cases
  • mouse, touch and gesture nodes now report positions in our notion of projection space, an undistorted space that didn't get treated by the aspect-ratio transformation. These positions are just easy to work with, as you saw above.
Projection Space vs. Distorted Normalized Projection Space

The main output of the mouse is the Position (Projection) XY pin, values in the case above go from (-1.78, -1) to (+1.78, +1), reflecting that the renderer is not quadratic.
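
As a small worked example: assuming the aspect ratio transformation only scales the horizontal axis (which is what the value range above suggests for a 16:9 renderer, 1.78 ≈ 16/9), converting between the two spaces is a single multiplication:

using System.Numerics;

static class ProjectionSpaceSketch
{
    // Normalized projection space always spans -1..+1 in both directions;
    // projection space spans -aspect..+aspect horizontally.
    public static Vector2 NormalizedToProjection(Vector2 normalized, float aspectRatio)
        => new Vector2(normalized.X * aspectRatio, normalized.Y);
}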


All the details

What's that Projection Space?

The underlying technology (DirectX) comes with the following spaces and transformations, to get from one space to the other:

              World T.           View T.           Proj. T.          
 Object Space   ->   World Space   ->   View Space   ->   Proj. Space  

World Transformation typically is set by the Transform pin at your "quad", that takes it from object space and places that object within the world (the 3d scene).
View Transformation is what you connect to the renderer and is about the position and orientation of your camera.
Projection Transformation is the other input on your renderer, that is for making your scene compatible with a 2d screen. It projects that 3d stuff onto a screen.
Now, while the underlying DirectX also mixes aspect ratio into that transformation, vvvv at some point started to distinguish lens projection and aspect ratio transformation, which now pays off in the end.
So here is our notion of spaces and transforms:

              World T.           View T.           Proj. T.          Aspect Ratio T.
 Object Space   ->   World Space   ->   View Space   ->   Proj. Space   ->   Norm. Proj. Space 

Our renderer comes with this additional pin Aspect Ratio (and now also comes with that auto aspect ratio feature), treating this transformation as a separate step. Since the transformations are separated, we get an additional space that you can think in.
And this is the space you want to be in. This at least is our theory. In our projection space the aspect ratio transformation hasn't been applied yet.

Let's look at some gif before we theorize further:

Operating in projection space

Here we see how to operate in projection space when a camera is attached.
With the node WithinProjection (Transform) we tell the system that we want to operate in projection space, which is the same as saying "do not care about the camera (don't apply view and projection transformation as we already are in the right space)". So the spheres get affected by the camera, the quad does not get affected by the camera.
So what you should take from the lesson is that the mouse pin Position (Projection) XY goes well together with the WithinProjection (Transform) node. You only need the node if a camera is connected to the renderer.

Normalized Projection Space

Now, the next step the pipeline does is applying aspect ratio, which distorts everything in a way that a quadratic space matches the rectangular window or viewport. This is just a technical necessity as DirectX asks for it. We are now in normalized projection space. You know, that space where the left & bottom borders are at -1, and the right & top borders are at +1. The one that you learned about in your first tutorial.
We always thought that this is the nicest space to think in, which is obviously not true. It feels nicely quadratic in size, which just doesn't align with the fact that your renderer typically is not. So it is a distorted space.

Several render passes

Here is how we still give it a raison d'être:
If you have several render passes you often just want to have a fullscreen quad textured by a previous render pass. Now how would you place a quad so that it goes from the left to the right border and from the bottom to the top border? Well, this is obviously easy to do in a space where these borders are always at a fixed position, like in the normalized projection space.

not so quadratic

What if you want to use and render the mouse in an early render pass, maybe with many viewports, softedge and aspect ratio settings, while actually hovering with the mouse over the final renderer, that comes with different settings? Does this align?
Well, this is a rare case where you again need to use manual aspect ratio nodes. With them you can adjust how to map to meaningful mouse positions that make sense in an earlier render pass. Actually you just need to reason about the aspect ratio of your original scene to make this work nicely. Note however, that in this special case - especially when softedge is involved - system cursor position and rendered cursor position don't align anymore, as you were used to in earlier vvvversions. Note that the editors from the editing framework still work, you just need to use the Cursor node to visualize the cursor, since the system cursor is off.

Cursor gets rendered in another renderer that you hover. Softedge adds to the complexity.

Old patches and a breaking change

Patches get converted so that they now work with the new mouse positions, those in projection space.
By that all patches fit well together. We are pretty sure that the benefits outweigh the cons. This however still is a breaking change. If you have a patch where you don't use the mouse position for positioning elements, but map it to something else, and experience that the new value range doesn't feel right, you need to manually switch to the old behavior. Check the mouse node to access the now hidden Position (Normalized Window) XY pin, which gives you the exact old behavior. Gesture and Touch nodes come with the same pins.
Old renderers get converted in a way that the Auto Aspect Ratio is turned off - on newly created renderers it's turned on.
Patches working with touch or gesture were complicated, as they had to correct the touch position by manually transforming it in compliance with the aspect ratio. Where with the mouse you got away with showing a rendered cursor that is just displaced, touch and gesture just don't let you do the same trick. You really expect the elements under your fingers to react. Those patches get converted in a way that they still work by using the Position (Normalized Window) XY, but you should consider cleaning that up by using the standard output Position (Projection) XY and throwing away all the unnecessary aspect-ratio-related tweaks and hacks.

DX11

DirectX 11 doesn't come with these features for now. There would of course be a way to do the same with DX11, but let's first see if the new system proves to be easier to use for the majority of tasks, while not failing at the more complex setups. When we have that proof of concept, it'll be doable to copy the concepts over to DX11. Let's wait for that first.
Depending on whether new DX11 builds shall still support older vvvversions or not, the implementation gets trickier or easier. So give us some time here to figure out what route to take. Thank you!

The fact that DX11 works a bit differently for now isn't a big issue. Most patches that are supposed to work for both node sets actually do work for both environments. The only difference typically is how a rendered cursor comes into view. Interaction in most cases should feel the same though. For DX11 nothing has changed and all patches should work exactly like before.
gregsn, Friday, Jan 26th 2018

As you might know, enums in vvvv got our attention several times in the past. But still, we found something to improve.

There's been the NULL (Enumerations) node, which we now decided to drop.
Often when using Ord2Enum, String2Enum, Enum2Ord or Enum2String you additionally needed this node to specify which enum you actually want to work with.

Now, Ord2Enum, String2Enum, Enum2Ord, and Enum2String come with a configuration pin that lets you specify the enum. So no need for NULL anymore.

The mentioned nodes are now legacy. Old patches will be converted in a way that they still use these legacy versions. (NULL (Enumerations Legacy), Enum2Ord (Enumerations Legacy)...)

If you want to update your patches, so that they work with the new versions

  • delete the null node
  • double click on the legacy Ord2Enum (..) node and select the new node in the node browser
  • select the right enum (using the inspector). Yes, the list wasn't sorted alphabetically in earlier versions. Sorry for that!

The patches should get cleaner in the end, which should make them easier to understand.
The system has less to infer over links (less magic = less unwanted side effects). It just takes the enum specified.


Side note:

As the enum encoding changed (in vvvv50beta35.7) and now works with strings, you now are allowed to connect a source of one enum to a sink of another enum:

Bingo

There just might be cases where this makes sense.


EDIT:
It's a bit unfortunate, but we had to keep the old nodes still active. There are cases where the enum in question is not available via the global enum list. E.g. a shader has this technique pin that can differ from shader to shader and sometimes even between instances of one and the same shader. So these enums need to be "pushed" towards the connected Ord2Enum node. So you still need the old nodes.

The old ones keep their names.
The new nodes now are named Enum2Ord (Enumerations Explicit), ...
Null (Enumerations) is legacy.

Please excuse the confusion.

gregsn, Friday, Jan 26th 2018
