Motion tracker to shader

Hey folks,

I’m trying to make a shader that takes data from the motion tracker and creates the effect of writing on the screen.
So far I’ve been able to take the motion-tracking data and transform it into coords the shader can use, and I now have a little box around the tracked coords (it doesn’t look like much, but I’m very proud of getting this far :)).

But in order to make it appear as though you are writing on the screen, the box has to be drawn on each coord where the tracked object has been, as well as where it is at the moment. The most logical thing I could think of was feeding the coords to a spread, but is it possible to feed that spread to the shader?

Or am I wrong with my spread idea and should I take a different route entirely?

Sure. There are basically two ways a shader deals with spreads.

(1) Create a spread of transformations (use something like a Transform (2d) to connect your tracker to your shader). Your meshes will then be drawn multiple times. The usual vvvv rules apply here: the shader is executed as many times as the maximum spread count across all of its inputs. Lower spread counts get repeated to fill up larger ones.

So
*Ten meshes with one transform - All ten meshes will be drawn as expected
*One mesh with ten transforms - Draws the mesh ten times with ten different transforms applied
*Three meshes with three transforms - Each mesh gets drawn with its own transform
*Two meshes with ten transforms - The even-numbered transforms draw the first mesh five times; the odd-numbered transforms draw the second mesh five times.
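The repetition rule above can be sketched in plain Python (illustrative only, this is not how vvvv itself is programmed):

```python
# Sketch of vvvv's spreading rule: a node evaluates max(spread counts)
# slices, and shorter spreads repeat cyclically to fill up longer ones.
def spread_slices(meshes, transforms):
    count = max(len(meshes), len(transforms))
    return [(meshes[i % len(meshes)], transforms[i % len(transforms)])
            for i in range(count)]

# 2 meshes, 10 transforms: mesh "A" is drawn with transforms 0,2,4,6,8
# and mesh "B" with transforms 1,3,5,7,9 -- five draws each.
draws = spread_slices(["A", "B"], list(range(10)))
```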

The second technique would be to assemble a vertex buffer containing a list of the coordinates of the polygons, based on your tracker positions. That way all coordinates are part of one big mesh and can be rendered by the shader as one unit. This is the more advanced way, usually taken for more complex and more static data.
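As a rough Python sketch of what "assembling a vertex buffer" means here, assuming simple axis-aligned boxes (the function name and box size are made up for illustration, not a vvvv API):

```python
# Flatten all tracked positions into one vertex list of quads
# (two triangles per box), so the whole trail is a single mesh.
def quads_from_positions(positions, size=0.05):
    h = size / 2.0
    vertices = []
    for (x, y) in positions:
        # two triangles per quad, counter-clockwise
        vertices += [(x - h, y - h), (x + h, y - h), (x + h, y + h),
                     (x - h, y - h), (x + h, y + h), (x - h, y + h)]
    return vertices

# 2 tracked positions -> 12 vertices in one buffer
buf = quads_from_positions([(0.2, 0.3), (0.5, 0.5)])
```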

And p.s.: for storing where the object has been, check Queue (Spreads) or RingBuffer (Spreads).
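In plain Python terms, those nodes behave roughly like a fixed-length deque holding the last N tracked coordinates (this is just an analogy for the node's behaviour; FrameCount and DoInsert are the actual pin names):

```python
from collections import deque

# Keep the last N tracked coordinates so the trail can be drawn
# each frame. frame_count mirrors the Queue's FrameCount pin.
frame_count = 5
trail = deque(maxlen=frame_count)  # old slices fall out automatically

for pos in [(0.1, 0.1), (0.2, 0.15), (0.3, 0.2),
            (0.4, 0.3), (0.5, 0.4), (0.6, 0.5)]:
    trail.append(pos)  # analogous to DoInsert = 1 on that frame

# after 6 inserts only the newest 5 positions remain
```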

Oh, I’m terribly sorry, at the moment I’m just trying to use a pixelshader to draw a texture on all coords.
Creating a 3d object sounds really cool tho, I’ll get on that as soon as I’ve got the pixelshader going.
I’m very much a beginner at all this shading stuff, so I thought it’d be easier to start with just a pixelshader.
So say I make a queue with all the coords, how do I send that data to my pixelshader?
Normally I’d make an array and use a for loop, but I don’t know how to do that here. I’ve tried some stuff, but none of it seems to do anything.

It’s as easy as
Queue -> Transform -> Shader

make sure to connect the Shader with a Mesh, so that the shader has some geometry to draw (a Grid (EX9.Geometry) could be a good choice)

Note that shaders are not required to have Transform inputs; most of them do, as they are quite useful - see the Template shader for an example with a Transform input.

Also make sure to change the FrameCount on the Queue to something like 100, and occasionally set DoInsert to 1.

First of all, thanks a lot for all the help so far. You’ve been very patient with my noob questions, you don’t get that every day on the internet anymore, and I’m very grateful for that.

Having said that, I’m not quite following you. I’ve tried making a queue of 750 frames (at 25 frames per second that’s 30 seconds), so my queue now holds 750 x and y coordinates with values between 0 and 1.
I’ve tried adding the queue to my shader, but it starts running very slowly, even if I bring the queue down to 100 frames.
But even if I get it running smoothly, how do I interpret the queue in the shader? Is it passed through 100 times, once for each frame?

I thought I was getting somewhere, but now I’ve been stuck on this… I’ve included the patch & fx file if that clears anything up.

  • edit, sorry uploaded the wrong version, the 2nd one is the right one.

colortracker2pixelshader.zip (4.3 kB)

Well, turns out I’m an idiot.
The thing works by doing what you told me, Oschatz, but I was drawing the color of the incoming texture over the older point drawings; they were being drawn, I just couldn’t see them.
Performance is also ok now; it still slows down a bit when I use 750 frames in my queue, but it works perfectly at around 100 frames.
Would an S-Video input through a capture card be faster than using FireWire like I am at the moment?
I’ll happily upload the patch and shader if anyone’s interested.
Thanks for all the help Oschatz!