
FAQ Rendering

I'm working on a patch that uses textured objects in a 3D environment, but my objects at the back sometimes appear in front of the ones at the front.

The most important thing to know is how to enable the depth buffer on the Renderer (EX9). If depth buffering is off (which is the default), all quads are simply drawn in the order of their priority and their slice index, which looks wrong as soon as they are seen in the wrong order. You can turn on depth buffering by selecting the Renderer (EX9) node in an Inspektor. There you'll see a "Windowed Depthbuffer" and a "Fullscreen Depthbuffer" pin. Change their default "None" values to D16, D24 or similar (where 16 or 24 denotes the number of bits available for the depth buffer).

The classic name for drawing without a depth buffer is the painter's algorithm: start with the things in the background and then paint everything else on top of them, one after another.
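As a minimal illustration (plain Python with hypothetical scene data, not vvvv or DirectX API), the painter's algorithm is just a back-to-front sort before drawing:

```python
def painters_sort(objects, camera_z):
    """Painter's algorithm: sort objects by distance to the camera, farthest first."""
    return sorted(objects, key=lambda o: abs(o["z"] - camera_z), reverse=True)

scene = [
    {"name": "tree", "z": -2.0},
    {"name": "house", "z": -10.0},
    {"name": "bird", "z": -1.0},
]
for obj in painters_sort(scene, camera_z=0.0):
    print("draw", obj["name"])  # house first, bird last
```

Anything drawn later simply overwrites what was drawn before it, so the farthest object must come first.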

Using the depth buffer enables a test that is done for each pixel drawn: if a pixel nearer to the camera has already been drawn, the new pixel is rejected. Or, to put it the other way round: if the current object is nearer to the camera than everything drawn so far, it is simply drawn on top of the existing color buffer.
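The per-pixel test described above can be sketched as a toy software z-buffer (Python, assumed names, nothing vvvv-specific):

```python
def draw_pixel(zbuf, cbuf, pos, z, color):
    """Depth test per pixel: draw only if nothing nearer was drawn yet."""
    if z < zbuf.get(pos, float("inf")):  # smaller z = nearer to the camera
        zbuf[pos] = z
        cbuf[pos] = color

zbuf, cbuf = {}, {}
draw_pixel(zbuf, cbuf, (0, 0), 10.0, "far red")   # drawn
draw_pixel(zbuf, cbuf, (0, 0), 2.0, "near blue")  # nearer, overwrites
draw_pixel(zbuf, cbuf, (0, 0), 5.0, "mid green")  # rejected by the test
print(cbuf[(0, 0)])  # near blue
```

Note that with this test the drawing order of solid objects no longer matters.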

But it still doesn't work. I am using transparent objects.

Unfortunately this wonderfully simple and effective technique doesn't work with transparent objects: if you first (at high priority) draw a transparent object near the camera and after that (at lower priority) draw a solid object behind it, the depth buffer check will simply reject the drawing of the solid object behind the transparent one, because it knows nothing about semitransparency.

So note that alpha channels and depth buffering don't go well together. Make sure to draw all objects with alpha textures after everything else (that last sentence exactly describes the problem).

When using a Group (EX9) node, the drawing order is determined by the order of inputs on the group node: left (first drawn) to right (last drawn). A Group (EX9 Priority) node allows you to change the drawing order/priority programmatically via the "Priority" pins that are associated with each "Layer" input pin.

If this turns out to be difficult, DirectX supports one classic computer graphics hack for dealing with transparent objects drawn in arbitrary order -- vvvv exposes this hack with the AlphaTest (EX9.Renderstate) node.

Basically it allows you to skip processing for all pixels whose alpha value fails a comparison against a given reference value. These pixels are neither rendered nor written into the depth buffer. Connect the AlphaTest node to the render node in question, enable it, set the compare function to "Greater" (so that only pixels whose alpha is above the reference get drawn) and play with the "Reference Alpha" pin. This should cut out all transparent areas in your objects. Consult the Microsoft DirectX documentation for details.
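In pseudocode terms, the test is just a comparison per pixel (a Python sketch of the idea, not the actual vvvv pin names or the DirectX API):

```python
def alpha_test(alpha, reference, func="greater"):
    """Return True if the pixel passes the alpha test and gets drawn/depth-written."""
    return alpha > reference if func == "greater" else alpha < reference

pixels = [0.0, 0.2, 0.9, 1.0]
passed = [a for a in pixels if alpha_test(a, reference=0.5)]
print(passed)  # [0.9, 1.0]: mostly opaque pixels survive, transparent ones are cut out
```

Because rejected pixels never touch the depth buffer, they can no longer occlude solid objects drawn after them.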

Obviously this method will not give you smooth antialiased borders, but it is still better than nothing.

My textures look blurry. How can I turn off the bilinear texture filtering?

The Filter (EX9.SamplerState) node will help you; just connect it to your render object. It controls how the circuitry within your graphics card (the so-called sampler) that maps the texture bitmap onto your triangles will behave.
Note that you can specify different filters for scaling the image up (Magnification Filter) and down (Minification Filter). You are probably asking for setting the magnification filter to Point, which does pixel replication, so you will get a nice retro blocky pixellated image.
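Pixel replication means each texel is simply repeated instead of interpolated; a one-dimensional Python sketch of what Point magnification does:

```python
def upscale_point(row, factor):
    """'Point' magnification: replicate each texel, no interpolation."""
    return [texel for texel in row for _ in range(factor)]

print(upscale_point([10, 20, 30], 3))
# [10, 10, 10, 20, 20, 20, 30, 30, 30]: hard blocky edges instead of a blur
```

Linear magnification would instead blend neighbouring texels, which is exactly the blur you are trying to get rid of.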

Note that the node shows all options DirectX provides; graphics card vendors are not required to implement all of them. Consequently I haven't yet seen graphics cards that support the more interesting ones, so options like FlatCubic or GaussianBicubic will probably not have any effect. Ask ATI or NVidia for details.

The "MipMap" pins on the Filter node decide how your image gets displayed in case you are using mipmaps. Mipmaps are generated automatically within vvvv and will greatly improve performance when large textures are displayed small on the screen. This is a very common case, e.g. a 1024x1024 texture displayed on a 10 by 10 pixel quad.

Mipmaps are pre-calculated low-resolution versions of your texture, created when the file is loaded (hence the additional load time): half size, quarter size, one eighth, one sixteenth and so on. The renderer then automatically uses a low-resolution version when drawing only a small image on the screen. To avoid visible jumps between the levels as rendered objects get smaller and larger, usually two mipmap levels are used and interpolated between. The MipMap filter pin sets the right filter for that: Linear will do a crossfade, Point will just jump, etc.
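The memory cost of a full mipmap chain is easy to estimate (plain Python arithmetic, no vvvv API involved):

```python
def mip_levels(width, height):
    """List (w, h) for every mipmap level, halving each time down to 1x1."""
    levels, w, h = [], width, height
    while True:
        levels.append((w, h))
        if w == 1 and h == 1:
            return levels
        w, h = max(1, w // 2), max(1, h // 2)

levels = mip_levels(1024, 1024)        # 11 levels: 1024x1024 down to 1x1
base = 1024 * 1024                     # pixels in the full-size image
total = sum(w * h for w, h in levels)  # pixels in the whole chain
print(len(levels), total / base)       # 11 levels, roughly 1.33x the base size
```

So the whole chain costs only about one third extra memory on top of the base texture.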

The "MipMap LOD Bias" pin allows you to use a mipmap level other than the optimal one; you can get some nice blurring effects by changing the value.

I'd like to draw on a white background, but I don't see anything?

In vvvv the default blend mode is "Add"; with these default settings your animations will glow. But even more important: with additive blending all your 3d objects are visible by default only on a black background.
If you want to change the blending behaviour, create a node called Blend (EX9.RenderState) (use the simple, not the advanced version of this node). There is one pin that lets you specify the blend mode; choose "Blend", which means pixels are drawn opaque where their alpha is 1.
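The arithmetic explains the white-background problem; a minimal Python sketch of the two blend equations (one color channel, values in 0..1):

```python
def blend_add(src, dst):
    """'Add' blending: source and destination accumulate (clamped at white)."""
    return min(1.0, src + dst)

def blend_alpha(src, dst, alpha):
    """'Blend': classic alpha blending, fully opaque where alpha is 1."""
    return src * alpha + dst * (1.0 - alpha)

white = 1.0
print(blend_add(0.5, white))         # 1.0: adding to white stays white, the object vanishes
print(blend_alpha(0.5, white, 1.0))  # 0.5: the object is drawn and clearly visible
```

On a black background (dst = 0) the two modes look similar, which is why the problem only shows up once you switch to white.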

Is it possible to disable individual slices of a spreaded render object?

This is only possible for Effects (see Effects category in the node-list). Slicewise enabling is not available for the primitives of the DX9 category. You can colorize the individual slices to transparent black, or scale them to zero size. The latter method should have slightly better performance.

How can I preload my textures?

Use the Preloader (EX9.Texture) and connect it to the same Renderer (EX9) you go fullscreen with. It takes a spread of directories containing textures.

Now you can use the FileTexture (EX9.Texture) anywhere in your patch and access the textures from those same directories you specified to preload. Watch the Renderer (TTY) and check that it does not load/unload any of the preloaded textures when you choose them via a FileTexture (EX9.Texture). Note though that a FileTexture can only be shared with the preloader if it not only has the same filename, but also the same options (including width, height, texture format, ...).

What is the deal with those Device nodes?

Please see Device (EX9 Auto) or Device (EX9 Manual) for details.

I tried to use the REF device for rendering. It is so slow...

Yes, that is exactly what it is meant to be. The reference rasterizer, which is built into the DirectX debug runtime, is so slow that even Microsoft doesn't believe in it. But while it is slow, it is free of bugs, so it can be used to identify bugs in your graphics card driver: if a scene looks different (apart from the framerate) rendered with the REF device than with the HAL device, you are likely encountering a driver bug.

Does anyone know to what resolution vvvv switches when it goes fullscreen? My PC doesn't seem to be powerful enough, so I want to lower the resolution.

Did you have a look at Renderer (EX9) with Herr Inspektor? There should be a pin named "Fullscreen Dimensions" with which you should be able to change the fullscreen resolution. By default the renderer tries to switch to 1024x768x24 bit, or lower if that resolution doesn't exist.

How to set the renderer fullscreen on a second monitor?

Simply drag the renderer onto the second monitor and press ALT+Enter to make it fullscreen. If you want a renderer to be on one monitor in windowed mode and have it go fullscreen on the second monitor, you have to use a Device (EX9 Manual) node.

How to capture the output of a Renderer?

There are many different ways to capture the output of a renderer that all have their drawbacks.

Make still Screenshots

On the FAQ GUI page, search for "screenshot" to master the art of taking snapshots within vvvv.

Record Monitor Output

Simply connect a video recording device to the video output of your graphics card, then switch to fullscreen and press record. Try to use S-Video or DV connections to get the best quality. This method has the big advantage that it doesn't cost any additional PC performance.

Screen Recorder

There are several screen capture tools listed on the Video Software Links site that can record portions of the screen to a movie file. You can also use vvvv's ScreenShot (EX9.Texture) node in connection with a Writer (EX9.Texture) node and patch your own screen recorder. Note that screen recorders have the advantage of capturing antialiased versions of your image, but the resolution is limited to the actual resolution of the renderer.

Texture Writer

Writer (EX9.Texture)

A node that lets you save textures to files in different graphics formats. Best used in connection with DX9Texture (EX9.Texture), which converts the output of a Renderer (EX9) node into a texture. This combination of nodes does not give you antialiased output!

Writer (EX9.Texture NRT)

For high resolution output, try Ampop's non-realtime renderer (included in the vvvv release as Writer (EX9.Texture NRT)), which renders a still image sequence at any resolution your graphics card supports (typically up to 4096x4096 or 8192x8192). Use VirtualDub or another video tool to render the images into your favourite movie format. Best quality, but you have to deal with the non-realtime mainloop issue: the NRT renderer has a MainLoop node inside, which can conflict with a MainLoop node inside your patch.

Writer (EX9.Texture AVI)

Quite similar to the Writer (EX9.Texture) node, but saves uncompressed .avi files with a given framerate. Make sure to use it like the Writer (EX9.Texture NRT) module, with a MainLoop (VVVV) node whose "Time Mode" is set to 'Increment'.

Capture DirectX Backbuffer

There are some special tools available that seem very promising: Fraps, Dxtory and OBS directly capture the backbuffer of any DirectX application. Note that this only works with fullscreen renderers in vvvv. Their websites claim recording at high framerates and high resolutions directly to .avi files.
There is also the promising NVIDIA ShadowPlay software (no vvvv support yet).

Writing a DirectShow Video Stream using Writer (DShow9)

This should work well when connected directly to a VideoIn (DShow9) or FileStream (DShow9) node, but not so well when connected to an AsVideo (EX9.Texture) node: in the latter case the rendered image has to be transferred back from the graphics card's memory to CPU memory, which is traditionally a slow process.

Writer (DShow9) writes .avis with a fixed 25 frames per second. For the resulting .avi to be reasonably smooth (as smooth as 25fps can be...) you'll therefore have to make sure that your patch is running at exactly 25fps. Use the Timing (Debug) node to check this and the MainLoop (VVVV) node to limit the framerate. Good luck.

How to calculate the size of an image in graphics card RAM?

Basically the data sits uncompressed in memory, so its size is width*height*depth. This is independent of whether the image was read from a compressed file on disk (like a jpg or png image). The bit depth plays an important role: a color RGBA image is four times as expensive as a greyscale or indexed one. You can set the bit depth with the Format pin of the FileTexture (EX9.Texture) node (the resulting bit depth is the sum of all the numbers in the format name).
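The width*height*depth rule in plain Python (format names shown only as illustration):

```python
def texture_bytes(width, height, bits_per_pixel):
    """Uncompressed size of a texture in video memory, in bytes."""
    return width * height * bits_per_pixel // 8

# A8R8G8B8: 8+8+8+8 = 32 bits per pixel
print(texture_bytes(1024, 1024, 32))  # 4194304 bytes = 4 MiB
# L8 greyscale: 8 bits per pixel, a quarter of the RGBA cost
print(texture_bytes(1024, 1024, 8))   # 1048576 bytes = 1 MiB
```

A 4 MiB cost for a single 1024x1024 RGBA texture shows why large spreads of textures fill video memory quickly.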

In practice the memory consumption is driver dependent in many ways.

Some drivers allocate more memory to round height/width to the next power of 2; the driver might create additional copies for better reading; mipmaps will add to this etc.
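The power-of-two rounding mentioned above can be estimated like this (a Python sketch; actual driver behaviour varies):

```python
def next_pow2(n):
    """Smallest power of two that is >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

# A 1000x600 texture may actually occupy 1024x1024 texels on such drivers:
print(next_pow2(1000), next_pow2(600))  # 1024 1024
```

In that example the rounded texture holds roughly 75% more texels than the image itself, before mipmaps or driver-side copies are even counted.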

So the Memory (Debug EX9) node will give authoritative answers, and it makes sense to create automated tests and do some scientific reasoning (and post the results here).
