DirectX is a Windows technology that offers high-performance graphics for personal computers. It achieves its speed by giving direct access to the sophisticated 3D rendering hardware of today's graphics cards. vvvv requires at least DirectX 9.
vvvv offers shape generators that render graphical objects such as quads, grids, lines, etc. It also contains nodes for transforming these graphical elements using the classic matrix transformations (translation, scaling, rotation, aspect-ratio modification, inversion, and perspective projection). Complex motion hierarchies can be generated by chaining multiple transformation nodes.
As modern graphics cards are built with a strong focus on texture mapping, vvvv has features for mapping images onto polygons. Because these calculations are performed by the graphics card, vvvv handles textured graphical objects extremely fast. vvvv imports standard image formats such as BMP, JPEG, and PNG. Textures can be transformed (scaled, translated, etc.) in the same fashion as graphical shapes. With its ability to read image data with transparency, vvvv can use file textures to create masks and alpha channels.
The DirectX full-screen mode essentially takes over the machine, letting it output graphics at the best possible frame rate. Full-screen mode is designed to run without any tearing or jittering: motion is always rendered to exactly match the video frame rate, giving results well suited to broadcast graphics applications.
vvvv objects can access low-level features and parameters, offering precise control over the hardware circuits used for rendering. vvvv is meant to be hardware-specific: certain features work better with certain graphics cards. This may be unusual for users of conventional programs, but it allows the kind of high-performance rendering known from today's computer games.
In the following tutorial we want to create a simple scene with the Renderer (EX9).
First, open a new patch:
First of all, we need a Renderer (EX9). Make sure you create the DX9 version and not the GDI or TTY version. The computer may pause for a few seconds while DirectX loads. A second, black window titled "DirectX Renderer" will then appear.
Next we need something to render: We select Quad (DX9) from the menu.
Note that only DX9 nodes will show up in this renderer. The GDI, SVG and DX11 render systems are independent of each other: you cannot mix and match the different rendering systems. Use the GDITexture (EX9.Texture) or SVGTexture (EX9.Texture) nodes to use GDI or SVG content as a texture in a DX9 window.
The Quad will appear inside the Renderer as a white rectangle with a default size and position. To move objects you will need one or more transform nodes. Create a couple of them: a Scale (Transform), a Translate (Transform), a Rotate (Transform), a Transform (Transform 2D) and a Trapeze (Transform) node:
Connect the output of the Scale (Transform) to the Quad (DX9) and change the X and Y inputs of the Scale (Transform) node. You will see the quad changing its width and height. Now disconnect the link and watch the quad jump back to its default size.
Now try the Translate (Transform) and Rotate (Transform) nodes. You will see that you can translate or rotate the quad with these nodes. While experimenting with the pins you might wonder why certain inputs don't seem to work: this is because these nodes actually work in 3D, but we haven't yet connected a camera. So for now ignore these inputs and focus on the working pins.
Next connect multiple transformation nodes:
What does that mean? It is important to understand that this represents a sequence of operations: first we do a scaling, then a translation, then a rotation. When thinking about it and experimenting a little, you will notice that the sequence usually matters, and that it is important to do things in the right order.
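Why the sequence matters can be sketched outside of vvvv with a few lines of plain Python (an illustrative sketch, not vvvv code; the function names are ours):

```python
# Why the order of transforms matters: apply scale-then-translate and
# translate-then-scale to the same point and compare the results.

def scale(p, sx, sy):
    return (p[0] * sx, p[1] * sy)

def translate(p, tx, ty):
    return (p[0] + tx, p[1] + ty)

corner = (0.5, 0.5)  # a corner of the default quad

# Scale by 2, then move right by 1:
a = translate(scale(corner, 2, 2), 1, 0)   # -> (2.0, 1.0)

# Move right by 1, then scale by 2 -- the translation gets scaled too:
b = scale(translate(corner, 1, 0), 2, 2)   # -> (3.0, 1.0)

print(a, b)  # different results: the sequence matters
```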
Next try the Transform (Transform 2D) node:
The Transform (2D) node has all the parameters necessary for working in 2D. This node is merely a shortcut for common operations: if you just want to place an object somewhere in the renderer, you don't have to deal with individual Scale, Translate and Rotate nodes -- the Transform (2D) node does it all.
There is another version of this node, called Transform (Transform 3D). This node has a lot more inputs and will allow you to place the object in 3D. But wait, without a camera you won’t get the right 3D feeling.
If you like, you can try out the Trapeze node. It will deform your Quad (DX9) into a trapeze.
Now remove all transformations. Why is this node called a Quad? You will notice that the image isn't square: the Quad node obviously doesn't draw a square but a rectangle. Why is that?
To understand this we need to take a look at the coordinate system of the Renderer:
The window area has a default range of -1 to +1 in both dimensions. The origin with coordinate 0 is in the middle of the window.
A standard quad has width and height of 1 with its center at (0, 0). Therefore the quad extends from -0.5 to +0.5 in both dimensions.
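These two facts can be put together in a tiny Python sketch (illustrative only, not vvvv code; `quad_corners` is a name we made up):

```python
# The renderer window spans -1..+1 in x and y; a unit quad centred at the
# origin therefore covers -0.5..+0.5 in both dimensions.

def quad_corners(cx=0.0, cy=0.0, width=1.0, height=1.0):
    hw, hh = width / 2, height / 2
    return [(cx - hw, cy - hh), (cx + hw, cy - hh),
            (cx + hw, cy + hh), (cx - hw, cy + hh)]

print(quad_corners())
# [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
```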
Now resize the window of the renderer to a very wide rectangle:
You will see the quad stretching more and more, because the coordinate system gets stretched in the same way as the window.
The aspect ratio of the rendering depends on the size of the window: if you make your render window square, the quad will be square as well.
But how can we make our quad square? Having a square renderer is usually quite nice, but when you switch to full-screen mode you will most probably have a 4:3 aspect ratio. An obvious solution to this problem would be to use a Scale node on the Quad. Try it: resize your render window back to a 4:3 aspect ratio (like a normal computer screen) and connect a Scale node to the quad:
The proportions of the render window are 4:3, so if we scale the object in the X direction by 0.75, the object will be square. Try it out.
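The arithmetic behind the 0.75 can be checked with a short Python sketch (illustrative only; the variable names are ours):

```python
# The -1..+1 coordinate range is stretched over the whole window, so on a
# 4:3 window one x-unit is 4/3 as wide as one y-unit on screen. Scaling x
# by 3/4 = 0.75 compensates for that stretch.

window_w, window_h = 4, 3
x_scale = window_h / window_w          # 0.75

quad_w = 1.0 * x_scale                 # scaled quad width in units
quad_h = 1.0                           # quad height in units

# On-screen extent (units times window size per coordinate range of 2):
on_screen_w = quad_w * window_w / 2    # 1.5
on_screen_h = quad_h * window_h / 2    # 1.5

print(on_screen_w, on_screen_h)  # equal -> the quad appears square
```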
But that sounds tedious: attaching a scaling operation to every render node in the patch. The solution is to use the transform pin of the Renderer. If you connect the Transform (Transform 2D) node to the View Transform pin, it will influence all objects in the Renderer: attaching a transformation to this pin looks the same as attaching that transformation to every displayed render node.
Interestingly enough, the quad looks exactly the same as before. But what happens behind the scenes? See the diagram below: if we connect the Scale node to the Renderer, the coordinate system gets transformed. The point at (x=1, y=1), which in the previous diagram was at the upper right corner of the renderer, is now inside the viewing area.
Now connect a Translate node to the Scale: you will see the Quad moving. You can think of the Scale and Translate nodes connected to the View Transform pin as a very basic camera for the scene. If you connect something to that pin, you are not manipulating the objects but the camera through which the objects are displayed.
Also see the video tutorial about the Axis system.
Armed with these abstract concepts, we will now place a more interesting image on the screen. First, clean up the patch, keeping the Scale node on the Renderer.
Next we create a FileTexture (EX9.Texture) node and connect it to the quad. Right-click the FileName input of the texture node and select an image file from disk. The FileTexture node can load BMP, JPG, TIF and TGA files into memory; in this example we will use the file ‘flower four.jpg’ from the lib/assets/images folder.
See HowTo Prepare Textures if you want to create your own textures.
Now connect a Transform node to the TextureTransform pin on the Quad. You will see that you can use the transform to move and zoom the image on the quad.
In the same way, it is possible to zoom outwards and have multiple repetitions of the texture on the Quad. Set ScaleX and ScaleY to something like 5:
Next, create an Address (EX9.SamplerState) node and attach it to the Quad. The SamplerState series of nodes controls individual properties of the texturing process. To set these properties, just connect the SamplerState node to the object. If you want to apply multiple parameters, chain several SamplerState nodes in a row.
Now try the different settings of the Address (EX9.SamplerState). You will see different patterns depending on the mapping mode:
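The behaviour of the usual address modes can be sketched in Python (an illustrative sketch of assumed Wrap/Mirror/Clamp semantics, not vvvv code; the `address` function is ours):

```python
# How an address mode maps a texture coordinate outside 0..1 back into
# the texture. The mode names follow the common Direct3D address modes.

def address(u, mode):
    if mode == "Wrap":                 # repeat the texture endlessly
        return u % 1.0
    if mode == "Mirror":               # repeat, flipping every other tile
        t = u % 2.0
        return t if t <= 1.0 else 2.0 - t
    if mode == "Clamp":                # stretch the border pixels outward
        return min(max(u, 0.0), 1.0)
    raise ValueError(mode)

for mode in ("Wrap", "Mirror", "Clamp"):
    print(mode, [address(u, mode) for u in (-0.25, 0.5, 1.25)])
# Wrap   [0.75, 0.5, 0.25]
# Mirror [0.25, 0.5, 0.75]
# Clamp  [0.0,  0.5, 1.0]
```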
Of course, everything we did with spreads can also be applied to the 3D objects. Set up the following patch:
Set the SpreadCount of the LinearSpread to 6 and play with the inputs of the Translate object. You will see the quads moving out of the center.
Note that where multiple objects lie over each other, their colours are added, similar to the way the images of multiple slide projectors would overlay. This is different from GDI mode, where objects simply paint over each other.
Create a Blend (EX9.RenderState) node and connect it to the quad (make sure not to select the advanced version of the node). With its input you can select different blend modes, similar to the layer operations in programs like Adobe Photoshop. Try out the different modes and pick one at will. The RenderState series of nodes works like the SamplerState nodes.
Another interesting option is giving the quad a color. The effect is similar to putting a colored filter in front of a lamp.
Until now, we haven't really explored 3D. We have seen various pins dealing with z-values, but so far they have done nothing really spectacular. We have also seen how to use the View Transform pin to change our coordinate system.
In this section we want to explore how this pin can be used to create a perspective view. Now the coordinate systems will be even more distorted, but this will allow us a three dimensional view on our scenes. An example of a typical coordinate system is shown in the following diagram.
Create a Perspective (Transform) node. This is a special transformation which implements the very basic inner function of a perspective camera:
As soon as you connect the Perspective node the screen turns black. Why? It is a simple consequence of the basic camera function: objects become larger the nearer they get. We haven't yet specified a distance between the objects and the camera, so by simply creating a Perspective node we placed the camera right inside our object. The object is therefore infinitely large and nothing is shown.
So let's create a small distance: insert a Translate node and set Z to approximately 1. You will notice that as soon as you change the value, the image re-appears and its size changes. That's because we are moving the scene away from the camera.
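The core of the perspective camera is a division by distance, which a short Python sketch can illustrate (a simplified model, not vvvv's actual projection math; `project` is our own name):

```python
# A perspective camera divides x and y by the distance z from the camera,
# so nearer objects appear larger. At distance 0 the projection blows up,
# which is why the screen stays black until we translate the scene away.

def project(x, y, z):
    if z <= 0:
        return None            # at or behind the camera: nothing sensible to draw
    return (x / z, y / z)

corner = (0.5, 0.5)
print(project(*corner, 0.0))   # None: camera sits inside the object
print(project(*corner, 1.0))   # (0.5, 0.5)
print(project(*corner, 2.0))   # (0.25, 0.25): farther away -> smaller
```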
Try changing the values at the Rotate and Translate nodes of the Quad; You will see the objects moving in a 3D space.
Actually this structure looks somewhat boring in 3D, as it is only a simple two-dimensional disc. To correct this, insert a Rotate node between the Translate and the Quad. In this position the Rotate node turns each Quad around its own centre before the spreaded translation is applied.
Now change the X input of the newly created Rotate to something like 0.3 and you will see the individual Quads turning.
Something is wrong with this image: it is not quite clear which objects are in front and which are behind. Obviously the objects don't occlude each other the way one would expect.
Why is this? Usually vvvv draws objects in the order they are attached to a Group (EX9) node: an object attached to a Group's leftmost pin is drawn first, an object attached to the rightmost pin is drawn last and will appear on top of all the others.
In our example, the spread generates a number of Quads: the first slices get drawn first, and the last slices last. Unfortunately our structure forms a circle with no clear front or back, so the objects drawn last overdraw things which should be visible.
To solve this there is a technique called depth buffering. To activate it, select the Renderer (EX9) and view its configuration pins in an Inspektor. Note the Windowed DepthBuffer and Fullscreen DepthBuffer pins: setting their enums to a value other than None activates depth buffering. The value 16/24/32 in the enum denotes the precision of the depth buffer; typically one would choose 24-bit precision. As soon as you activate the depth buffer you will notice that the occlusion handling is correct.
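The difference between draw-order occlusion and depth buffering can be sketched in Python (an illustrative model of the general technique, not vvvv internals; all names are ours):

```python
# "Render" two overlapping fragments: painter's order keeps whatever was
# drawn last, while a depth buffer keeps whatever is nearest to the camera.

fragments = [  # (depth, colour) in draw order
    (0.3, "near-quad"),   # drawn first, but nearest
    (0.9, "far-quad"),    # drawn last
]

# Painter's algorithm: the last drawn fragment wins, regardless of depth.
painter = None
for depth, colour in fragments:
    painter = colour

# Depth buffering: keep the fragment with the smallest depth seen so far.
depth_buffer, colour_buffer = float("inf"), None
for depth, colour in fragments:
    if depth < depth_buffer:
        depth_buffer, colour_buffer = depth, colour

print(painter)        # far-quad  -- wrong occlusion
print(colour_buffer)  # near-quad -- correct
```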
While the object is quite strange, it is completely plausible.
Camera control tends to be quite complicated, and at times you probably don't want to program a camera yourself but rather deal with the objects you are creating. Therefore a module is provided which resembles the camera controls of the 3D modelling program SoftImage.
To use that module, just create a new node by double clicking in the patch and select the Camera (Transform Softimage) from the list.
To control the camera you have to hold certain keys and drag the mouse at the same time. The following commands are implemented:
|O||Orbit||Rotate around a given point of view|
|Z||Zoom||Right mouse button: Zoom (move mouse up and down) <br>Left mouse button: Move camera|
|P||Position||Move the camera forward and backward<br>Right mouse button: Fast<br>Left mouse button: Slow|
|R||Reset||Reset View to Default|
At the same time it might be interesting to add the patch “modules\AxisAndGrids.v4p” to your project. This provides two more handy keyboard shortcuts:
|A||Axis||Display coordinate axis<br>x-axis: red<br>y-axis: green<br>z-axis: blue|
|G||Grids||Shows a grid in the x/z plane. <br> Note that this grid will appear as a white line in the default view.|