Plugin Texture In

Hi, what I would like is a TextureIn pin for plugins, so some of the data-flow “logic” can be moved easily into C# computation (like custom canny detection/contours).
By the way, a node able to write a texture into a SharedMemory segment would also be nice: useful with plugins (until there is a TextureIn) and useful for sharing textures with third parties. Not that I need it, but you never know :D

hi!
it’s impossible with plugin interface V2, don’t know about V1…

don’t know V1 or V2 or whatever, I just know nothing is impossible, except getting rid of Berlusconi ;D

+1!!!

ah, and so using System.Drawing; has no effect? (cause i can’t use Bitmap)

sorry, my fault, didn’t add the reference

There is no texture/mesh input pin for the moment in either plugin interface.

There are a couple of things to think through, as a texture/mesh input node would behave differently compared to a standard one (multiple devices in mind).

so one of the problems is to have an input pin able to know which video device (in a multi-device setup) is handling that texture?

yeah, amongst other stuff. it’s not really just about the pin but also the node, and texture mode/usage/pool can be tricky too.

so, what are the chances of getting it? :)

certainly not in the upcoming release (beta25.1), but soon thereafter. like vux already pointed out, it’s not that simple because of the device handling.
right now it’s like this:

public class TexturePlugin : IPluginEvaluate, IPluginDXTexture2
{
  [Output("Texture")]
  IDXTextureOut FTextureOut;

  public void Evaluate(int spreadMax)
  {
    // Based on input data, do we need to create a new texture?
  }

  public void GetTexture(IDXTextureOut forPin, int onDevice, int slice, out int texture)
  {
    // Is forPin our FTextureOut?
    // Find the texture for this slice and this device and return it (texture.ComPointer.ToInt32())
  }

  // Called by the PluginHost every frame for every device. 
  // Therefore a plugin should only do device specific operations here 
  // and still keep node specific calculations in the Evaluate call.
  public void UpdateResource(IPluginOut forPin, int onDevice)
  {
    // Is forPin our FTextureOut?
    // Do we know that device?
    // Do we have a texture for this device?
    // onDevice is an address, we need to call Device.FromPointer and later device.Dispose()
  }

  // Called by the PluginHost whenever a resource for a specific pin needs to be destroyed on a specific device.
  public void DestroyResource(IPluginOut forPin, int onDevice, bool onlyUnManaged)
  {
    // See UpdateResource
  }	
}

to handle all these possible scenarios, a lot of stuff had to be done by the plugin writer, involving dictionaries and whatnot. simply put: the signal-to-noise ratio was very low.
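
just to give an idea, the bookkeeping inside UpdateResource/DestroyResource typically ended up looking something like this (a rough sketch only, texture size and format are made up, assuming SlimDX):

// rough sketch of the per-device bookkeeping the old interface forced on you,
// keyed by the device address passed in as onDevice
Dictionary<int, Texture> FTextures = new Dictionary<int, Texture>();

public void UpdateResource(IPluginOut forPin, int onDevice)
{
  if (forPin != FTextureOut) return;

  if (!FTextures.ContainsKey(onDevice))
  {
    // onDevice is just an address, so wrap it and dispose the wrapper afterwards
    var device = Device.FromPointer(new IntPtr(onDevice));
    try
    {
      // size and format are arbitrary for the sketch
      FTextures[onDevice] = new Texture(device, 256, 256, 1, Usage.None, Format.A8R8G8B8, Pool.Managed);
    }
    finally
    {
      device.Dispose();
    }
  }
}

public void DestroyResource(IPluginOut forPin, int onDevice, bool onlyUnManaged)
{
  Texture texture;
  if (forPin == FTextureOut && FTextures.TryGetValue(onDevice, out texture))
  {
    texture.Dispose();
    FTextures.Remove(onDevice);
  }
}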

with beta24.1 tebjan added a few base classes to our plugin interfaces project which improved the situation:

public class TexturePlugin2 : DXTextureOutPluginBase, IPluginEvaluate
{
  [ImportingConstructor]
  public TexturePlugin2(IPluginHost host) : base(host)
  {
  }

  public void Evaluate(int spreadMax)
  {
    // If inputs changed in a way that the textures need to be recreated, call
    Reinitialize();

    // If inputs changed in a way that only the texture content needs to be updated, call
    Update();
  }

  protected override Texture CreateTexture(int slice, Device device)
  {
    // Create and return the texture for this device and slice, e.g. return new Texture(...)
  }

  protected override void UpdateTexture(int slice, Texture texture)
  {
    // Update the texture for this slice, e.g. fill it with data based on an input pin.
  }
}

much cleaner. but there are still a few limitations: the texture pin is created by the base class. so what if we wanted an additional texture output pin? how would we handle a future texture input pin? and what if we need to inherit from another base class?

i gave the issue some thought this weekend and came to the conclusion that our interface needs some rework in this matter. let’s start with the constraints i was able to think of:

  • the lifetime of a texture is bound to its device. devices can get lost or new devices can pop up during runtime. if a device gets lost, all associated resources (like textures) need to get disposed of. if a new device pops up, new textures must be created for that device. so we certainly need a mechanism to get informed about those scenarios.
  • the device handling should be done automatically by the system like it is done now. for example to be able to move the renderer to another display without worrying about device management is a super nice feature we certainly want to hold on to. therefore something like a device input for each node which needs a device in order to accomplish its work is not an option.
  • the interface should be super simple for a plugin writer. we want to encourage users to write their own code, which won’t happen if it gets that complicated like it is now (regarding textures of course).

so point 1) tells us that something as simple as a plain ISpread<Texture> won’t be possible.
point 2) tells us that the device comes from a sink node, like a renderer on a specific display.
and point 3) tells us that we should avoid new interfaces or base classes at all costs.

having those things in mind i started to experiment a little, and so far i came up with this. example first:

public class TexturePlugin3 : IPluginEvaluate
{
  [Input("Texture In")]
  ISpread<DXResource<Texture>> FTextureIn;

  [Output("Texture Out")]
  ISpread<DXResource<Texture>> FTextureOut;

  public void Evaluate(int spreadMax)
  {
    FTextureOut.SliceCount = spreadMax;
    for (int i = 0; i < spreadMax; i++)
    {
      FTextureOut[i] = new DXResource<Texture>(CreateTexture, UpdateTexture);
      // Optional: new DXResource<Texture>(CreateTexture, UpdateTexture, DestroyTexture);
    }

    // To access the texture input we'd need a device. We could get that by adding another pin
    // with a device enumeration for example. Or we could try something like:
    foreach (var dxResource in FTextureIn)
    {
      foreach (var texture in dxResource.CreatedResources)
      {
        // ...
      }
    }
  }

  private Texture CreateTexture(Device device, int slice)
  {
    // Create the texture for device and slice. Since we have a device at this point we could
    // also use the texture input like this:
    return FTextureIn[slice][device];
  }

  // This one is optional. If not set, nothing will be done with the texture.
  private void UpdateTexture(Texture texture, int slice)
  {
    // Fill the texture with data
  }

  // This one is optional. If not set, the texture will be disposed.
  private void DestroyTexture(Texture texture)
  {
    // Destroy the texture
  }
}

as you see: no additional interface, unlimited texture outputs or mesh outputs or whatever, and the methods used to create/update/destroy the resource are user defined. they could also be anonymous methods.
the only tricky part would be accessing the input textures if there were no output dealing with DirectX, since we’d need somewhere to get a device from. like i said in the comment, one possibility would be to add another input pin for the user to select a specific device, like the renderer does based on the display it’s working on. but i think i’ll leave it up to the reader to figure that part out.
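
just to sketch that device-selection idea (everything here is an assumption: the pin name, treating the selected device as an address on an input pin, …):

// hypothetical "Device" input carrying the address of a device the user picked somewhere
[Input("Device")]
ISpread<int> FDeviceIn;

public void Evaluate(int spreadMax)
{
  if (FDeviceIn[0] == 0) return; // nothing selected

  var device = Device.FromPointer(new IntPtr(FDeviceIn[0]));
  try
  {
    for (int i = 0; i < FTextureIn.SliceCount; i++)
    {
      // same per-device indexer as used in CreateTexture above
      var texture = FTextureIn[i][device];
      // ... read from or copy the texture here
    }
  }
  finally
  {
    device.Dispose();
  }
}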

btw. DXResource is based on a very generic base class which could be used for all these kinds of problems where one has to deal with a device, a world, a pipeline or whatever you call it. or at least i think it should be possible.
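
to make that a bit more concrete, the generic base could look roughly like this (just a sketch of the idea, not the actual implementation, and the slice handling is left out):

// sketch: a resource whose lifetime is bound to some context (a device, a "world", a pipeline...).
// DXResource<T> would then simply be Resource<Device, T>.
public class Resource<TContext, TResource> : IDisposable
  where TResource : IDisposable
{
  private readonly Dictionary<TContext, TResource> FResources = new Dictionary<TContext, TResource>();
  private readonly Func<TContext, TResource> FCreate;
  private readonly Action<TResource> FDestroy;

  public Resource(Func<TContext, TResource> create, Action<TResource> destroy = null)
  {
    FCreate = create;
    FDestroy = destroy;
  }

  // lazily create the resource for a given context (device)
  public TResource this[TContext context]
  {
    get
    {
      TResource resource;
      if (!FResources.TryGetValue(context, out resource))
      {
        resource = FCreate(context);
        FResources.Add(context, resource);
      }
      return resource;
    }
  }

  // everything created so far, as used with FTextureIn in the example above
  public IEnumerable<TResource> CreatedResources
  {
    get { return FResources.Values; }
  }

  public void Dispose()
  {
    foreach (var resource in FResources.Values)
    {
      // if no destroy callback was set, just dispose (as described above)
      if (FDestroy != null) FDestroy(resource);
      else resource.Dispose();
    }
    FResources.Clear();
  }
}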

Ok, good thinking on starting to have resource inputs.
I had a couple of thoughts about that a while ago, so I’m gonna comment on some bits and add a few.

CreateTexture(Device device, int slice)
UpdateTexture(Texture texture, int slice)
DestroyTexture(Texture texture)

I would add a sender to it as well (maybe the related pin)

CreateTexture(Pin pin, Device device, int slice)
UpdateTexture(Pin pin, Texture texture, int slice)
DestroyTexture(Pin pin, Texture texture)

So if you have multiple texture out pins you can either use the same callback and branch on the pin parameter, or use a different callback per pin and ignore it.
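
roughly like this (just a sketch, FColorOut/FDepthOut and the formats are made up):

// one callback serving two texture out pins, branching on the pin parameter
Texture CreateTexture(Pin pin, Device device, int slice)
{
  // hypothetical depth output gets a single channel format, the colour output gets ARGB
  if (pin == FDepthOut)
    return new Texture(device, 320, 240, 1, Usage.None, Format.L16, Pool.Managed);

  return new Texture(device, 320, 240, 1, Usage.None, Format.A8R8G8B8, Pool.Managed);
}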

About the device:
Well, first I would not do device calls in the Evaluate method (except for a node like Info or Pipet, which is not purely device related).
If I use a texture input I’m quite likely to do some device processing (a layer node for example), so you can easily know which device you render for.

Now the more fun bit: node connections and handling. These are more general considerations about how to handle some cases:

  • Connect/Disconnect pin
  • Multiple pin connections.
  • Connections to random other nodes, which don’t provide a texture on their own (iobox/switch…)

Got more ideas, but will post them around.

Also, a tiny bit off topic, but having a texture input for that is not necessarily the best idea, it adds quite some overhead.

I don’t know of a good GPU contour implementation (if there is any implementation at all), so I guess you’d call cvContour in Emgu.

This is the pipeline:

  • Get camera image
  • Convert to texture (gpu copy)
  • Get texture input
  • Copy back to cpu
  • Do contours

Well, I’m sure you see the problem: you’re doing many memcopies for textures from GPU to CPU and back.

You also generally want to have your processing in a thread, so you end up with several locking issues (some camera drivers are blocking, some non-blocking), which can be quite tricky to implement.
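
The usual pattern for that is a double buffer swapped under a lock, so Evaluate never waits on the driver. A rough sketch (buffer layout and method names are made up):

// sketch: double buffer shared between a camera thread and Evaluate
private readonly object FLock = new object();
private byte[] FFrontBuffer; // read on the mainloop side
private byte[] FBackBuffer;  // written by the camera callback
private bool FFrameReady;

// called on the camera/driver thread
void OnNewFrame(byte[] pixels)
{
  lock (FLock)
  {
    if (FBackBuffer == null || FBackBuffer.Length != pixels.Length)
      FBackBuffer = new byte[pixels.Length];
    Buffer.BlockCopy(pixels, 0, FBackBuffer, 0, pixels.Length);
    FFrameReady = true;
  }
}

// called on the mainloop thread (e.g. from Evaluate)
bool TryGetFrame(ref byte[] frame)
{
  lock (FLock)
  {
    if (!FFrameReady) return false;
    var tmp = FFrontBuffer; FFrontBuffer = FBackBuffer; FBackBuffer = tmp;
    FFrameReady = false;
    frame = FFrontBuffer;
    return true;
  }
}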

Also you need to double check what texture format comes in/out: different cameras can give you different formats (depth, channel count…). Most of the time DirectShow will bring your texture in as ARGB, whereas contour extraction expects a binary thresholded single channel image.
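
For illustration, that last conversion boils down to something like this on the CPU (plain sketch, assuming 32-bit ARGB stored as B,G,R,A bytes and an arbitrary threshold):

// sketch: turn a 32-bit ARGB frame into the single channel, binary thresholded
// image a contour finder expects
byte[] ToBinaryMask(byte[] argb, int width, int height, byte threshold)
{
  var mask = new byte[width * height];
  for (int i = 0; i < mask.Length; i++)
  {
    // assumed in-memory byte order per pixel: B, G, R, A
    byte b = argb[i * 4 + 0];
    byte g = argb[i * 4 + 1];
    byte r = argb[i * 4 + 2];
    var gray = (byte)((r * 77 + g * 150 + b * 29) >> 8); // integer luma approximation
    mask[i] = gray >= threshold ? (byte)255 : (byte)0;
  }
  return mask;
}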

Hope that helps as well :)

vux:
I would add a sender to it as well (maybe the related pin)

hmm, what about changing the resource type a little like this:

public class DXResource<Texture, TMetadata> : Resource<Device, Texture, TMetadata>
// this way we leave it up to the user what things to take into the callback
...
ISpread<DXResource<Texture, Tuple<object, int>>> FTextureOut; // funny i need to use object here
...
FTextureOut[i] = new DXResource<Texture, Tuple<object, int>>(CreateTexture, Tuple.Create(FTextureOut, i));
...
// and the callback would be
Texture CreateTexture(Device device, Tuple<object, int> metadata);

TMetadata would be perfect, especially if you want to generalize the concept outside of device dependent objects.

Tuple sounds good, although the Item1 property is not as clear as Sender, nor Item2 as Slice. You also lose the ability to add a third property, even if it’s true you can put whatever you want in the object.
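
That readability issue could also be solved with a tiny metadata type instead of Tuple, something like (sketch only, names made up):

// sketch: a small metadata class so the callback reads better than Item1/Item2
public class ResourceMetadata
{
  public object Sender; // the pin the resource belongs to
  public int Slice;
  // room for a third property if ever needed
}

// FTextureOut[i] = new DXResource<Texture, ResourceMetadata>(
//   CreateTexture, new ResourceMetadata { Sender = FTextureOut, Slice = i });

Texture CreateTexture(Device device, ResourceMetadata metadata)
{
  // metadata.Sender and metadata.Slice instead of Item1/Item2
  return null; // create the actual texture here
}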

i love all these moving neurons :)

regarding this :

* Get camera image
* Convert to texture (gpu copy)
* Get texture input
* Copy back to cpu
* Do contours

any technique used needs to go through at least the first 3 steps anyway; if you want to use a camera device as a texture you do that already.
In the contours example, with Emgu, you would additionally need background suppression, 2 texture conversions and the managed contours call; currently contours are extracted from a video flow, with some obvious limitations and conversions between formats.
What I was pointing to, beyond the contour example, is the chance to run calculations on a texture when the GPU can’t be used for that, i.e. when you need a texture in and values out.

But these are just reflections from Italy :)

hey all!

has there been any update on this?
or are we currently still taking an int input from an EX9 Info node?

Hey!

Continuing this thread…
I’m trying to read a texture back to the CPU from a texture input.

As far as I know, the only way to create a texture input is to accept the handle, as in the SharedTexture plugin. So I’m doing it that way, but since I’m then instantiating a texture which is a shared texture, I can’t lock the pixels on it (and therefore can’t read back the contents).

I presume the workaround for this is to create my own render context, blit the contents of the shared surface to a render target, and read back the contents of that render target. This all sounds like the long way around.
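
For reference, the readback half of that long way around would look roughly like this in SlimDX (sketch only; device, the render target texture, width and height are assumed to exist already):

// sketch: copy a render target into system memory and lock it for reading (SlimDX, D3D9)
Surface renderTarget = texture.GetSurfaceLevel(0);
Surface offscreen = Surface.CreateOffscreenPlain(device, width, height, Format.A8R8G8B8, Pool.SystemMemory);

device.GetRenderTargetData(renderTarget, offscreen);

DataRectangle rect = offscreen.LockRectangle(LockFlags.ReadOnly);
try
{
  // rect.Data is a DataStream over the pixels, rect.Pitch is the row stride in bytes
  var pixels = new byte[rect.Pitch * height];
  rect.Data.Read(pixels, 0, pixels.Length);
}
finally
{
  offscreen.UnlockRectangle();
  offscreen.Dispose();
  renderTarget.Dispose();
}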

So just checking first… is there currently a way to read back data from a texture in a VVVV plugin?

Also, why does AsRaw read back as an image format? surely it makes more sense just to get the raw image data? (and surely that function would be much quicker!)

Elliot

another question for this thread:
I’m getting a lot of integer overflows with the handles being reported, e.g. I’m getting 64-bit IntPtr values in a 32-bit app.

Ok. I’ve got this working now.

Check https://gist.github.com/anonymous/5884203 for a single threaded example*.

Joreg informed me that we need to let the IntPtr wrap, e.g.:

int p = unchecked((int)FHandleIn[slice]);
IntPtr share = new IntPtr(p);

which fixed the previous error with the 64-bit-looking IntPtrs.

(* i think all you need is a Dictionary to store a spread of offscreen buffers rather than just having 1 per node, then it should be threaded).
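
for reference, that bookkeeping could be as simple as this (sketch only, names made up):

// sketch: one offscreen buffer per (device address, slice) instead of one per node
Dictionary<Tuple<int, int>, Surface> FOffscreenBuffers = new Dictionary<Tuple<int, int>, Surface>();

Surface GetOffscreenBuffer(Device device, int deviceAddress, int slice, int width, int height)
{
  var key = Tuple.Create(deviceAddress, slice);
  Surface buffer;
  if (!FOffscreenBuffers.TryGetValue(key, out buffer))
  {
    buffer = Surface.CreateOffscreenPlain(device, width, height, Format.A8R8G8B8, Pool.SystemMemory);
    FOffscreenBuffers.Add(key, buffer);
  }
  return buffer;
}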