Blog

Who David Gann, schnellebuntebilder
When Sat, Apr 28th 2018 - 12:00 until Sat, Apr 28th 2018 - 18:00
Where schnellebuntebilder, Rudolfstr. 11, Berlin, Germany

Introduction

VVVV.js features many essential JavaScript and WebGL programming methods packed into a set of over 300 nodes. The browser-based patch editor enables you to get started right now on any platform, and you can quickly deploy your application to web and mobile.
Recently VVVV.js received a huge update which implements advanced rendering techniques like physically based rendering, instancing and depth-buffer-based post effects.

To jump right into the action, follow this link: https://tekcor.github.io/vvvv.js-examples/
There is also a very detailed overview in written form here: http://000.graphics/tutorial/02_VVVV.js_Introduction.html
or watch the video:

Workshop Content

In detail we will look at the following topics:

• Physically Based Rendering
• glTF Import and three.js model loading
• Instancing Engine
• Deferred Effects
• Collision Detection
• Terrain Rendering
• Multi-Texturing
• Particles
• Derivative Maps (Tangent-free Normal and Parallax Occlusion Mapping)
• Shader and Node Development for VVVV.js

Booking and Fees

Reserve your workshop seat now via email: vvvvorkshop@schnellebuntebilder.de

There will be a fee of 50,00 € per attendee (reserving the room + support for the lecturer).
If you consider yourself a professional or your company sends you here, we use T.R.U.S.T to encourage you to pay a professional fee, which is 200,00 €.

Become a Patron

If you can't make it to Berlin but want to learn it,
consider joining my online course or receiving direct mentoring by becoming a patron. Your support will accelerate the development of this amazing framework.
https://www.patreon.com/davidgann

tekcor, Friday, Mar 2nd 2018 · 0 comments

After a little while, here we are, a new version bump for dx11 rendering.

This as usual comes with some bug fixes and new features.

Bug fixes

First, there were two reasonably major issues which are now fixed:

  • The Quad node would start to throw errors if fed a nil at any time, and would not recover from it.
  • Deleting a texture fx or a renderer temp target would not release the associated resources, resulting in a leak (only while authoring, not while a patch is running).

New features

Next, of course, there are new features. Here are a few selected ones (full changelog below):

  • Kinect 2 nodes now also have help patches, so all plugin nodes now have a help file.
  • Sample and Hold for buffers, 2d and 3d textures.
  • Support for shared buffers. This works just like shared textures, but for buffers, so you can, for example, share compute work across processes.
  • New library (TexProc), which contains nodes for image processing tasks that are not conveniently done using texture fx. Check the RGBASplit, HSLASplit, ExtractChannel and Composite nodes.

Supporting DirectX11 development

For more than 6 years the DX11 pack has been free, and it will stay free.

The question of supporting development has been asked several times, and until now there was no official way to do so (except contacting me privately).

So from now on, the dx11 contribution has a Patreon page, through which you can provide monthly donations (various pledges with various rewards are available, including access to upcoming video workshop patches and custom support).

https://www.patreon.com/mrvux

Full changelog:

  • Pack version info is now integrated, which allows using the pack versioning feature (as well as diffing).
  • Kinect 2 nodes now all have help patches as well, so now every node in the pack has a help file.
  • Fix spelling on Frustum (Transform)
  • Fix spelling on FrustumTest (DX11.Validator)
  • Fix Quad layer which would not recover if fed a Nil input.
  • Renderer (DX11.TextureArray) now has a UAV pin (disabled by default), so the texture can be written to by compute shaders.
  • Fix Softworld node, which did not allow creating constraints.
  • Fix temp target renderer not releasing its resources when the node was deleted, which created a memory leak while authoring.
  • Fix shaders not releasing their resources when deleted from the patch.
  • Add ExtractChannel (DX11.Texture), which allows picking an individual channel of a texture (also auto-handles input/output formats when required).
  • Add RGBASplit (DX11.Texture), extracts each texture channel into a separate texture.
  • Add HSLASplit (DX11.Texture), extracts each texture channel into a separate texture (converting into either HSL or HSV).
  • Add Composite (DX11.Texture), combines a spread of textures into a single one; each texture can have an individual blend mode, opacity and texture transform.
  • AsSharedTexture (DX11.Texture) now forces evaluation by default.
  • Add AsSharedResource (DX11.Buffer), to allow sharing a dx11 buffer between processes.
  • Add FromSharedResource (DX11.Buffer Structured), the receiver side for shared buffers.
  • Add FrameDelay (DX11.Texture 3d), analogous to the other frame delays, for 3d textures.
  • Add WriteMask (DX11.RenderState): allows controlling which channels are written to.
  • Add WithinSphere (DX11.Validator): only draws objects whose bounding box is contained within a sphere.
  • Added ConstantFactor preset in Blend (DX11.RenderState), to allow using BlendFactor (DX11.RenderState) more easily.
  • All Create Body nodes now have a custom string input (which was missing from the previous version).
  • Add IsYoungerThan (Bullet Rigid.Filter), a new node.
  • Add AlphaOperation (DX11.RenderState): allows controlling how the alpha channel is written to the texture (independently of color blending).
  • Add S+H (DX11.Texture 2d): same as the standard S+H nodes, copies the resource if the Set pin is on, blocks evaluation and rendering otherwise (also optimizes resource flags/usage behind the scenes).
  • Add S+H (DX11.Texture 3d), for 3d textures.
  • Add S+H (DX11.Buffer Structured), for structured buffers.

Download here:
directx11-nodes

vux, Saturday, Feb 17th 2018 · 6 comments

Who maarja, id144, nissidis
When Tue, Feb 13th 2018 - 20:00 until Tue, Feb 13th 2018 - 21:00
Where Prague, Františka Křížka 36, 170 00 Prague, Czech Republic

Everywhen is a dark, depressive and captivating contemplation on recurrence in history. Everywhen orbits around two sides of life - the personal and the political. An often caring, gentle and well-meaning personal life is contrasted here with the reality of the social individual, who can often be sinister, vengeful and hateful. But how can we be critical of our own world views when they are created by those closest to us?

Trailer https://vimeo.com/249193177

Visuals, concept: Mária Júdová
Dance, concept: Soňa Ferienčíková
Music, concept: Alexandra Timpau
Lights: Ints Plavnieks
Technical support: Andrej Boleslavský, Constantine Nisidis
Produced by: BOD.Y – Zuzana Hájková
Supported using public funding by Slovak Arts Council

maarja, Tuesday, Feb 13th 2018 · 0 comments

When we talk with our trusted VL pioneers, we often find them implementing timeline-like applications. These come with the core problem of finding the keyframe closest to a given time, often even finding the two closest keyframes and interpolating between them, weighted by the position of the current time.

Easy? Just order all keyframes by time, start at the first keyframe and go through the collection until you find one with a time greater than the time you are looking for. This is called linear search and might work very well at first, but it obviously has two performance problems:

  • The bigger the time you are looking for gets, the more checks you have to perform
  • The more keyframes you have, the more checks you have to perform

Enter Binary Search

Binary search does the same task in a much smarter way: It starts with a keyframe in the middle of the collection and checks whether the time you are looking for is greater or smaller than this middle keyframe's time. Now it can already rule out half of all keyframes and search the interesting half in the same way: take the middle keyframe and compare its time. As this rules out half of all remaining keyframes in every step, the search is over very quickly. In fact it's so stupid fast that on a 64-bit machine the maximum number of steps it has to perform is 64, because the machine cannot address memory with more than 2^64 elements.

VL Nodes

The VL nodes cover several use cases. Depending on how your data is present you can choose from the following options.

All nodes expect that the input collection is ordered by the search key from low to high.
Only Values

The simplest node is just called BinarySearch and takes a collection of values. It returns the element that is lower and the one that is higher, their indices and a success boolean indicating whether the search key was within the range of the input values at all:
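
The screenshot is not reproduced here; as a rough C# sketch (an illustration of the idea, not the actual VL implementation), the node computes something like this:

using System.Collections.Generic;

static class Search
{
    // Returns the indices of the closest lower and upper keys plus a success flag
    // telling whether the search key lies within the range of the input at all.
    // The input is expected to be sorted from low to high.
    public static (int LowerIndex, int UpperIndex, bool Success) BinarySearch(
        IReadOnlyList<float> keys, float searchKey)
    {
        if (keys.Count == 0 || searchKey < keys[0] || searchKey > keys[keys.Count - 1])
            return (-1, -1, false);

        int low = 0, high = keys.Count - 1;
        while (high - low > 1)
        {
            int mid = (low + high) / 2;             // look at the middle key
            if (keys[mid] <= searchKey) low = mid;  // rule out the lower half
            else high = mid;                        // rule out the upper half
        }
        return (low, high, true);
    }
}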

Key Value Pairs

For simple scenarios that don't require a custom keyframe data type the BinarySearch (KeyValuePair) version can be used. It operates on the simple data type KeyValuePair that comes with VL.CoreLib and returns the values, keys and indices:

It also comes as BinarySearch (KeyValuePair Lerp) with an integrated linear interpolation between the values that is weighted by how far the search key is from the two found keyframes:

Custom Data Types

If you have your own keyframe data type the BinarySearch (KeySelector) is your friend. It can be created as a region with a delegate inside that tells the binary search how to get the key from your custom type:

There is also BinarySearch (KeySelector Lerp), which has the same delegate and needs a Lerp defined for your keyframe that it can use internally. Your keyframe data type could look like this:
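
Since the screenshot is not reproduced here, a keyframe type with such a Lerp might look roughly like this (names are illustrative, not the actual patch):

// Hypothetical keyframe type: Time is the search key, Value the payload.
public struct Keyframe
{
    public float Time;
    public float Value;

    // Linear interpolation between two keyframes, weighted by scalar in [0, 1].
    public static Keyframe Lerp(Keyframe a, Keyframe b, float scalar) =>
        new Keyframe
        {
            Time  = a.Time  + (b.Time  - a.Time)  * scalar,
            Value = a.Value + (b.Value - a.Value) * scalar
        };
}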

The usage is then basically the same:

Other usages

A timeline is of course just one use case where binary search is useful. All data that can be sorted by a specific key can be searched by it.
Speaking of sorting, if you add elements to a sorted collection binary search can help you to find the index at which to insert the new element. Use the Upper Index output as insert index like this:
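
The screenshot is missing here, but reusing the BinarySearch sketch from above, the idea looks roughly like this (the helper name is made up):

using System.Collections.Generic;

static class SortedInsert
{
    // Keeps 'keys' sorted by inserting the new key at the upper index
    // reported by the binary search.
    public static void Insert(List<float> keys, float newKey)
    {
        var (_, upperIndex, success) = Search.BinarySearch(keys, newKey);
        int insertIndex = success
            ? upperIndex
            : (keys.Count > 0 && newKey < keys[0] ? 0 : keys.Count); // outside the range: prepend or append
        keys.Insert(insertIndex, newKey);
    }
}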

So it can help you to keep the very same collection up to date that you use to lookup the elements.
A usage example can be found in girlpower\VL\_Basics\ValueRecorder.

Enjoy the search!

Yours,
devvvvs

tonfilm, Monday, Feb 12th 2018 · 17 comments

In the VVVV world you'll find four new nodes: UploadImage and UploadImage (Async) - both for DX9 and DX11, returning a texture. The former just takes an image and, when requested, uploads it to the GPU; the latter takes an IObservable<IImage> and will upload whenever a new image gets pushed.
In the VL world you'll find ToImage nodes which allow you to build images out of arbitrary data. Here is a little Game Of Life example:

Generating images in VL
Rendering images in DX9 and DX11

That one image is gray and the other red comes from the fact that we map a pixel format with one red channel to a format with one luminance channel in DX9 - not entirely correct, but better than seeing nothing at all.

The interface in detail

So what is this new image interface exactly? Well, it came up in the past (https://discourse.vvvv.org/t/bitmap-data-type/6612) and re-surfaced again in VL - the topic of how to exchange images between different libraries. Nearly all of them come with their own image representation, like a Mat in OpenCV, a Sample in GStreamer, a Bitmap in GDI, an Image in WPF or just plain pointers in CEF - just to name a few we stumbled across in the past.
All of those libraries provide different sets of operations one can perform on their image representation, they have different sets of supported pixel formats and they also differ in how they reason about the lifetime of an image. In the end though we want all those node sets which will be built around those libraries to work together.

We therefore decided to add a new interface - simply called IImage - to our base types in VL with the intention of allowing different node libraries to exchange their images. The idea is that the node libraries themselves work with whichever image type they see fit and only provide ToImage and FromImage nodes which act as the exit and entry points. Whether or not those entry and exit points have to copy the image is up to the library designer and probably also the library itself. For some it will be possible to write simple lightweight wrappers, for others a full copy will have to be done. If a certain pixel format is not supported by the library it is fine to throw an UnsupportedPixelFormatException which will inform the user to either change the whole image pipeline to a different pixel format or insert a conversion node so the sink can deal with it.

Before diving any deeper here are two screenshots from a little example image pipeline, getting images pushed in the streaming thread from a GStreamer based video player, using OpenCV to apply a dilate operator on them and passing them down to vvvv for rendering:

The image interface comes with a property Info returning a little struct of type ImageInfo containing size and pixel format information. With this struct it's easy to check whether the size or the pixel format of an image changed. The pixel format is an enumeration with just a few entries of what we thought are the most commonly used formats. Since there are many, many others, the image info also comes with an OriginalFormat property where an image source can simply put in the original format string - whatever that is. But it at least gives sinks a little chance to interpret the image data correctly.

/// <summary>
/// Gives read-only access to images.
/// </summary>
public interface IImage
{
    /// <summary>
    /// A structure containing size and format information of the image.
    /// </summary>
    ImageInfo Info { get; }
 
    /// <summary>
    /// Gives access to image's data. Must be disposed after being used.
    /// </summary>
    IImageData GetData();
 
    /// <summary>
    /// A volatile image is only valid in the current call stack.
    /// </summary>
    bool IsVolatile { get; }
}

The second member on the interface, called GetData, is used for reading the image. It returns the IImageData interface pointing to the actual memory. Since IImageData inherits from IDisposable, the returned image data needs to be disposed by the caller. With this design it should be possible to implement all sorts of image reading facilities - such as pin/unpin, map/unmap, lock/unlock etc.

/// <summary>
/// Used for reading images.
/// </summary>
public interface IImageData : IDisposable
{
    /// <summary>
    /// The pointer to the data.
    /// </summary>
    IntPtr Pointer { get; }
 
    /// <summary>
    /// The data size in bytes.
    /// </summary>
    int Size { get; }
 
    /// <summary>
    /// The scan size (one row of pixels including padding) in bytes.
    /// </summary>
    /// <remarks>If the scan size times the image height is not equal to the size data copying has to be done row by row.</remarks>
    int ScanSize { get; }
}
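
To illustrate the ScanSize remark, here is a minimal, hypothetical sink-side helper that copies the pixels into a tightly packed byte array; height and the packed row size would come from the ImageInfo, and the IImageData returned by GetData must still be disposed by the caller:

using System;
using System.Runtime.InteropServices;

static class ImageDataExtensions
{
    // Copies in one go when the rows are tightly packed, otherwise row by row.
    public static byte[] ToTightlyPackedBytes(this IImageData data, int height, int packedRowSize)
    {
        var result = new byte[packedRowSize * height];
        if (data.ScanSize == packedRowSize && data.ScanSize * height == data.Size)
        {
            Marshal.Copy(data.Pointer, result, 0, data.Size);
        }
        else
        {
            for (int y = 0; y < height; y++)
                Marshal.Copy(data.Pointer + y * data.ScanSize, result, y * packedRowSize, packedRowSize);
        }
        return result;
    }
}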

In order to avoid copying data the image interface comes with a last property, IsVolatile, which when set tells a sink that the data in the image is only valid in the current call stack - so it can either read from the image immediately or, if that is not possible, it will need to clone it. We expect image implementations to return the data of a default image in case the read access happened too late. Imagine one puts volatile images into a queue without copying them first: the result should be a bunch of white quads, so those errors become visible immediately.
In case the volatile flag is not set, we expect the image data to stay the same, so no further copying is necessary on the sink side. It can hold on to the image as long as it wants.
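
A sink that wants to hold on to an image beyond the current call stack could therefore guard itself like this (a sketch only; Clone refers to the extension method mentioned below and is assumed to return a non-volatile copy):

// Volatile images are only valid in the current call stack, so copy them now;
// non-volatile images can simply be referenced for as long as needed.
static IImage Retain(IImage image) => image.IsVolatile ? image.Clone() : image;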

We further provide a couple of helpful extension methods on the IImage interface, like Clone/CloneEmpty or making an image accessible as a System.IO.Stream.

With this in mind let's look how to expose library specific image types:

  • In case the library newly allocates the memory for the image on the managed heap, not much has to be done except writing a little wrapper implementing our image interface, returning false on the IsVolatile property and basically just forwarding all interface calls to the original image type (see the sketch after this list).
  • The library takes the memory from a pool or uses some ref-count mechanism. In this case it's most certainly mandatory to ensure that the original image gets disposed. If the image gets pushed from the library, we recommend simply pushing the image further and disposing it right after. If the image needs to get pulled from the library, the wrapper should also implement the IDisposable interface and be handed downstream inside the resource provider monad so that the disposal behavior is correct once all the sinks are done using the wrapper. The third option is to simply copy the data into a private image one can hand downstream.
  • The library always returns an image pointing to the same memory. Similar to the previous case, except that one must not call dispose on the original image.
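
As a sketch of the first case, a wrapper around a hypothetical library image that owns its pixels on the managed heap could look like this (ManagedImage and its members are assumptions, not a real API):

using System;
using System.Runtime.InteropServices;

// Illustrative wrapper: the wrapped pixels live on the managed heap, so the
// image is non-volatile and the wrapper only forwards the interface calls.
class ManagedImage : IImage
{
    readonly byte[] pixels;
    readonly ImageInfo info;
    readonly int scanSize;

    public ManagedImage(byte[] pixels, ImageInfo info, int scanSize)
    {
        this.pixels = pixels;
        this.info = info;
        this.scanSize = scanSize;
    }

    public ImageInfo Info => info;

    // The data stays valid beyond the current call stack.
    public bool IsVolatile => false;

    public IImageData GetData() => new PinnedData(pixels, scanSize);

    // Pins the managed array while a reader holds on to the data.
    sealed class PinnedData : IImageData
    {
        readonly GCHandle handle;

        public PinnedData(byte[] pixels, int scanSize)
        {
            handle = GCHandle.Alloc(pixels, GCHandleType.Pinned);
            Pointer = handle.AddrOfPinnedObject();
            Size = pixels.Length;
            ScanSize = scanSize;
        }

        public IntPtr Pointer { get; }
        public int Size { get; }
        public int ScanSize { get; }

        public void Dispose() => handle.Free();
    }
}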

Example implementations can be found in VL.Core, VL.OpenCV and VL.GStreamer

Elias, Monday, Feb 5th 2018 · 0 comments

previously on vvvv: vvvvhat happened in December 2017


january 2018,

or as i'd like to call it: a new dawn.

so many things... where to start... probably here, if you haven't yet: vvvv-in-numbers-2017
then an apology for the current release-candidate hiatus. we've been too optimistic at the end of december, when we let out the first one. meanwhile 3 more things got finalized:

a new RC is scheduled to be announced this week.

also, the five-part vl for vvvv users series of videos is now complete: part 1, part 2, part 3, part 4, part 5. it is basically the workshop i gave at node17. it picks you up where you are as a vvvv user and shows you how the basic things are mostly the same in vl; only when it comes to spreading do you need to learn about loops instead. and once you're there you'll not want to look back...

...like e.g. this doctor, who was kind enough to give us an update on the status of his latest developments. must watch!

what next:
we have a few independent libraries in the works at the moment that we're planning to announce properly in the coming weeks each. for the bravvvve ones among you, go check our public repositories already:

and even more to come... but beware: there aren't really instructions yet on how to use those. so you might as well want to wait for the announcements.

with beta36 hopefully out soon, we'll then make plans for the next release. the list of high priorities is endless; there will be some tough decisions to make again.

vvvv Academy

it happened. it was great. 6 participants from zero to vvovv in 6 days. a few impressions are here. we're planning to do this again, stay tuned!

Contributions

here is a big one: u7angel open sourced his Automata UI and in addition is now also giving it away for free for commercial projects! you'd be stoopid not to use it, and get him at least a drink the next time you see him. thanks man!

further quite a load of new stuff:

and updates to many top contributions:

Gallery

Recursive Infinity - Endless Procedural Crazyshit by evvvvil

otherwise sadly not so much in the gallery last month. seems everyone is too busy networking their stuff socially. and then i stumbled upon this: https://vimeo.com/search/sort:latest?q=vvvv with many more recent uploads.. but please don't forget to share your works also in our gallery to show people having their first contact with vvvv.org, what you're doing with it.

and on a final note: tekcor has announced a workshop titled Chaos - Noise - Motion that will take place on march 10th in leipzig.


that was it for january. anything to add? please do so in the comments!

joreg, Sunday, Feb 4th 2018 · 2 comments

Dynamic Buffers

The current vvvv alpha and the upcoming vvvv beta36 have a new set of nodes that allow you to quickly upload data from VL to the graphics card. We had a WIP forum discussion about it here: VL - Custom Dynamic Buffer

On the VL side the nodes are called ToBufferDescription and we have them for the basic data types that usually hold big chunks of data: Spread, Array, IntPtr and Stream. The vvvv side is rather easy and only has one node called UploadBuffer (DX11.Buffer).

Primitive Data Types

Primitive types work out of the box and don't need any special treatment. Just make sure you define the correct Buffer type in the shader. This works for Integers, Floats, Vectors and so on, everything that is available in the shader as primitive type. Here is an example for Float32:

The only exception is Matrix: it needs to be transposed in order to work like a normal transformation input. If you send a large number of individual matrices to the shader, the most efficient way is to do the transpose in the shader directly:

If the same matrix is re-used very often or you don't have access to the shader code, simply transpose in VL:

Custom Data Types

If you want to define your own data types like light information or a custom vertex type in the shader then you need to pack the data accordingly in the buffer description. For this task the ToBufferDescription (Stride) nodes are used. They allow you to make a buffer description out of primitive types like float or even byte and set the stride size of your custom type in bytes so that the shader can read the custom type directly out of the buffer.

Matrix hint: If you define a matrix in a custom type in the shader you can use the row_major modifier to automate the transpose operation.

struct MyLightType
{
    float3 Direction;
    float Brightness; 
    row_major float4x4 Transformation; //set matrix type
};

Performance hint: If you can, design your custom types in a way that the byte count is a multiple of 16; sometimes it makes sense to insert unused floats as padding:

//would have 20 bytes, but blown up to 32 bytes (2 x 16) for faster read performance
struct Circle
{
    float4 Position;
    float  Radius;
    float pad0;
    float pad1;
    float pad2;
};

More info: https://developer.nvidia.com/content/understanding-structured-buffer-performance

Custom types in C#

If you are a C# coder you can also define a struct in C# with the attribute StructLayout(LayoutKind.Sequential) and the same byte layout, import it in VL and pass that directly into the buffer. Then you don't need the StrideSize node version, because the data type size already matches.

[StructLayout(LayoutKind.Sequential)]
public struct Circle
{
    public Vector4 Position;
    public float Radius;
    float pad0;
    float pad1;
    float pad2;
 
    public Circle(Vector4 position, float radius)
    {
        Position = position;
        Radius = radius;
    }
}
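
To double-check that such a struct really matches the 32 bytes expected on the shader side, a quick sanity check (assuming the Vector4 used consists of four floats):

using System.Runtime.InteropServices;

// 16 bytes (Vector4) + 4 * 4 bytes (floats) = 32 bytes, a multiple of 16 as recommended above.
int stride = Marshal.SizeOf<Circle>();
System.Console.WriteLine(stride); // expected: 32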

Dynamic Raw Buffers

While in the process of doing the dynamic buffer nodes it was easy to also add raw buffers. These buffers are from older shader models and can only be filled with bytes. On the shader side, however, you can also define custom types. The only difference in HLSL is that you write Buffer<YourType> instead of StructuredBuffer<YourType>.

The node set is basically the same except that the VL part is not generic and only accepts bytes as input. The node names are ToRawBufferDescription in VL and UploadBuffer (DX11.Buffer Raw) in vvvv.

Raw buffers have no advantage except when you have to deal with an older graphics card, driver or shader code.

Examples

A VL patch with shader code can be found in the latest alphas under girlpower\VL\DX\DynamicBuffersAndTextures.v4p. It is also used by @mburk for material management in his latest superphysical pack.

So now you can start sending your data up to the card and enjoy the speed. As always, if any questions arise hit us up in the forums.

yours,
devvvvs

tonfilm, Thursday, Feb 1st 2018 · 3 comments
Some brief statements


A quad is quadratic

This quad has a quadratic shape.


Positioning an element at the mouse position results in that element being shown at the mouse position

This small quad is aligned to the mouse.


Using touch positions for positioning results in elements drawn at your finger tips

These quads show up where i touched the screen.


Interact with objects in world space, even in complex multi screen setups. Do that with the system cursor, not a displaced rendered cursor

Star

All this wasn't something that you could take for granted. Up to now.
I had to tease you first, before going into detail. Whether you think about the statements above or not, all of the above should be just normal, no-brainers. Having a non-quadratic screen is the case 99% of the time. Since these cases occur that often, we should make them easier to work with.

So from now on we have

  • Auto Aspect Ratio in the renderer, so you don't need that cyclic graph with the 3 links involving AspectRatio (Transform)
  • you can disable Auto Aspect Ratio and still feed your own for the more complex cases
  • mouse, touch and gesture nodes now report positions in our notion of projection space, an undistorted space that hasn't been treated by the aspect-ratio transformation. These positions are just easy to work with, as you saw above.
Projection Space vs. Distorted Normalized Projection Space

The main output of the mouse is the Position (Projection) XY pin; values in the case above go from (-1.78, -1) to (+1.78, +1), reflecting that the renderer is not quadratic.
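
For a 16:9 renderer that range follows directly from the aspect ratio. A minimal sketch of the assumed mapping (not actual vvvv code):

// Assumption: Position (Projection) XY spans ±aspect horizontally and ±1 vertically.
float width = 1920f, height = 1080f;
float aspect = width / height;            // ≈ 1.78 for a 16:9 renderer
var bottomLeft = (X: -aspect, Y: -1f);    // (-1.78, -1)
var topRight   = (X: +aspect, Y: +1f);    // (+1.78, +1)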


All the details

What's that Projection Space?

The underlying technology (DirectX) comes with the following spaces and transformations, to get from one space to the other:

              World T.           View T.           Proj. T.          
 Object Space   ->   World Space   ->   View Space   ->   Proj. Space  

World Transformation typically is set by the Transform pin on your "quad"; it takes the object from object space and places it within the world (the 3d scene).
View Transformation is what you connect to the renderer and is about the position and orientation of your camera.
Projection Transformation is the other input on your renderer; it is for making your scene compatible with a 2d screen. It projects that 3d stuff onto a screen.
Now, while the underlying DirectX also mixes aspect ratio into that transformation, vvvv at some point started to distinguish lens projection and aspect-ratio transformation, which now seems to pay off in the end.
So here is our notion of spaces and transforms:

              World T.           View T.           Proj. T.          Aspect Ratio T.
 Object Space   ->   World Space   ->   View Space   ->   Proj. Space   ->   Norm. Proj. Space 

Our renderer comes with this additional pin Aspect Ratio (and now also with that auto aspect ratio feature), treating this transformation as a separate step. Since the transformations are separated, we get an additional space that you can think in.
And this is the space you want to be in. At least that is our theory. In our projection space the aspect-ratio transformation hasn't been applied yet.

Let's look at some gif before we theorize further:

Operating in projection space

Here we see how to operate in projection space when a camera is attached.
With the node WithinProjection (Transform) we tell the system that we want to operate in projection space, which is the same as saying "do not care about the camera (don't apply view and projection transformation, as we already are in the right space)". So the spheres get affected by the camera, while the quad does not.
So what you should take from the lesson is that the mouse pin Position (Projection) XY goes well together with the WithinProjection (Transform) node. You only need the node if a camera is connected to the renderer.

Normalized Projection Space

Now, the next step the pipeline does is applying aspect ratio, which distorts everything in a way that a quadratic space matches the rectangular window or viewport. This is just a technical necessity, as DirectX asks for it. We are now in normalized projection space. You know, that space where the left & bottom borders are at -1 and the right & top borders are at +1. The one you learned about in your first tutorial.
We always thought that this is the nicest space to think in, which is obviously not true. It feels nicely quadratic in size, which just doesn't align with the fact that your renderer typically is not. So it is a distorted space.

Several render passes

Here is how we still give it a raison d'être:
If you have several render passes you often just want a fullscreen quad textured by a previous render pass. Now how would you place a quad so that it goes from the left to the right border and from the bottom to the top border? Well, this is obviously easy to do in a space where these borders are always at a fixed position, like in the normalized projection space.

not so quadratic

What if you want to use and render the mouse in an early render pass, maybe with many viewports, softedge and aspect-ratio settings, while actually hovering with the mouse over the final renderer, which comes with different settings? Does this align?
Well, this is a rare case where you again need to use manual aspect-ratio nodes. With them you can adjust how to map to meaningful mouse positions that make sense in an earlier render pass. Actually you just need to reason about the aspect ratio of your original scene to make this work nicely. Note however, that in this special case - especially when softedge is involved - the system cursor position and the rendered cursor position don't align anymore, as you were used to in earlier vvvversions. Note that the editors from the editing framework still work; you just need to use the Cursor node to visualize the cursor, since the system cursor is off.

Cursor gets rendered in another renderer that you hover. Softedge adds to the complexity.

Old patches and a breaking change

Patches get converted so that they now work with the new mouse positions, those in projection space.
By that all patches fit well together. We are pretty sure that the benefits outweigh the cons. This however still is a breaking change. If you have a patch where you don't use the mouse position for positioning elements, but map it to something else, and experience that the new value range doesn't feel right, you need to manually switch to the old behavior. Check the mouse node for the now hidden Position (Normalized Window) XY pin to access the exact old behavior. Gesture and Touch nodes come with the same pins.
Old renderers get converted in a way that the Auto Aspect Ratio is turned off - on newly created renderers it's turned on.
Patches working with touch or gesture were complicated, as they had to correct the touch position by manually transforming it in compliance with the aspect ratio. Where with the mouse you got away with showing a rendered cursor that is just displaced, touch and gesture just don't let you do the same trick. You really expect the elements under your fingers to react. Those patches get converted in a way that they still work by using Position (Normalized Window) XY, but you should consider cleaning that up by using the standard output Position (Projection) XY and throwing away all the unnecessary aspect-ratio-related tweaks and hacks.

DX11

DirectX 11 doesn't come with these features for now. There would of course be a way to do the same with DX11, but let's first see if the new system proves to be easier to use for the majority of tasks, while not failing at the more complex setups. When we have that proof of concept, it'll be doable to copy the concepts over to DX11. Let's wait for that first.
Depending on whether new DX11 builds shall still support older vvvversions or not, the implementation gets trickier or easier. So give us some time here to figure out what route to take. Thank you!

The fact that DX11 works a bit differently for now isn't a big issue. Most patches that are supposed to work for both node sets actually do work in both environments. The only difference typically is how a rendered cursor comes into view. Interaction in most cases should feel the same though. For DX11 nothing has changed and all patches should work exactly like before.
gregsn, Friday, Jan 26th 2018 · 10 comments

As you might know, enums in vvvv got our attention several times in the past. But still, we found something to improve.

There's been the NULL (Enumerations) node, that we now decided to drop.
Often when using Ord2Enum, String2Enum, Enum2Ord or Enum2String you additionally needed this node to specify which enum you actually want to work with.

Now, Ord2Enum, String2Enum, Enum2Ord, and Enum2String come with a configuration pin that lets you specify the enum. So no need for NULL anymore.

The mentioned nodes are now legacy. Old patches will be converted in a way that they still use these legacy versions (NULL (Enumerations Legacy), Enum2Ord (Enumerations Legacy), ...).

If you want to update your patches, so that they work with the new versions

  • delete the null node
  • double click on the legacy Ord2Enum (..) node and select the new node in the node browser
  • select the right enum (using the inspector). Yes, the list wasn't sorted alphabetically in earlier versions. Sorry for that!

The patches should get cleaner in the end, which should make them easier to understand.
The system has less to infer over links (less magic = less unwanted side effects). It just takes the enum specified.


Side note:

As the enum encoding changed (in vvvv50beta35.7) and now works with strings, you now are allowed to connect a source of one enum to a sink of another enum:

Bingo

There just might be cases where this makes sense.


EDIT:
It's a bit unfortunate, but we had to keep the old nodes still active. There are cases where the enum in question is not available via the global enum list. E.g. a shader has this technique pin that can differ from shader to shader and sometimes even between instances of one and the same shader. So these enums need to be "pushed" towards the connected Ord2Enum node. So you still need the old nodes.

The old ones keep their names.
The new nodes now are named Enum2Ord (Enumerations Explicit), ...
Null (Enumerations) is legacy.

Please excuse the confusion.

gregsn, Friday, Jan 26th 2018 · 1 comment

Who David Gann
When Sat, Mar 10th 2018 - 14:00 until Sat, Mar 10th 2018 - 17:30
Where LOFFT Verein zur Förderung des Leipziger OFF-Theaters e.V., Lindenauer Markt 21 , 04177, Leipzig, Germany

ATLAS @ Deep Space Ars Electronica

WORKSHOP: CHAOS - NOISE - MOTION
WITH A VISIT TO A PERFORMANCE OF CABOOM
DAVID GANN / 000.GRAPHICS

Wearable, body-fitted interfaces enable dancers and performers to interact directly with real-time generated sound and computer graphics. Body movement thereby becomes an expression vector for audio-visual compositions based on chaos and noise.

The workshop provides a foundation for combining dance and performance with interactive art and media art. The participants approach methods of digitally capturing body movements and use that movement data in real time to generate audio-visual elements. The primary focus is on the visual programming language VVVV, which is especially well suited for flexible use on stage (even for beginners). Special attention is given to the topics of chaos and noise in computer graphics and sound design, with the aim of finding ways to create complex and impressive images and sounds through movement.

David Gann studied biology and interface art and works as an artist, developer and designer in the field of interactive audio-visual media, computer graphics and sound art.

Wearable Motion Controllers
FBM Noise

NOTE: The workshop fee includes a ticket for a performance of CABOOM between March 9th and 11th, 2018.

Registration: workshop@lofft.de

000.graphics

CONCEPT+REALIZATION+PHOTO David Gann. A workshop by David Gann/ooo.graphics in cooperation with LOFFT – Das Theater. Funded by the City of Leipzig, Kulturamt (Cultural Office). This measure is co-financed by tax funds on the basis of the budget passed by the members of the Saxon state parliament.

tekcor, Thursday, Jan 25th 2018 · 0 comments
