Replacing DirectShow with managed OpenCV. Video playback / capture / CV

Simple video playback / video capture using EmguCV in VVVV
Options are open for a full replacement of DirectShow, including spreadable video playback, CV tasks, access to images in dynamic plugins, and CV in dynamic plugins.

Please check the readme on GitHub / download from GitHub if you want to play around:

VideoPlayer
sites/default/files/imagecache/large/screenshot1314517446.png

VideoIn
sites/default/files/imagecache/large/screenshot1314514524.png

The forum doesn’t seem to let me embed these images (some auto-formatting issue :()

Brilliant! Here's my test: it seems like it is dragging some extra ticks, but the VideoTexture often jumps to over 40. I am going to use your plugin for my next video performance rehearsal and report back.
The only problem I see right now is that vvvv.exe won't close properly when I use it; I have to kill it from the Task Manager.
S.

Had a couple of sudden quits when changing files, but I have a project on so I can't get into too much testing. Probably funny files, but it's a nice start, and I look forward to testing some more. Nice one E :)

I’m actually working on the spreading at the moment.
I wouldn’t suggest that this is even ready for testing yet, but hopefully will get some minds thinking / looking at what’s possible with this / which direction this should go in.

Currently working on the frame locking between nodes
(e.g. the VideoIn node uses a thread to produce frames, and AsTexture reads those frames back in the main thread, so the two need to be locked properly)
Then there’s the details of correctly initialising / destroying / reformatting when you change file selections / capture sources.
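To illustrate the kind of locking I mean, here's a minimal sketch of a frame holder shared between the capture thread and the main thread (the CVImageBuffer name and layout are made up for illustration, not the actual plugin code):

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

// Hypothetical shared buffer: the capture thread writes frames in,
// the main thread (e.g. AsTexture in Evaluate) copies the latest one out.
class CVImageBuffer
{
    readonly object FLock = new object();
    Image<Bgr, byte> FLatest;

    // Called from the capture thread whenever a new frame arrives.
    public void Write(Image<Bgr, byte> frame)
    {
        lock (FLock)
        {
            FLatest = frame.Copy();
        }
    }

    // Called from the main thread; returns a copy so the capture thread
    // can keep writing while the texture upload happens.
    public Image<Bgr, byte> Read()
    {
        lock (FLock)
        {
            return FLatest == null ? null : FLatest.Copy();
        }
    }
}
```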

Had some good results so far, and things are cleaning up quickly. I think we're only a few hours of attentive work away from a reliable video player, but I definitely wouldn't suggest testing this for reliability right now.

But let's open up the development process! (that's what GitHub's for :))

Concerning the license: I'm considering buying one here for EmguCV, so I could distribute EmguCV-utilising plugins and you could use them without GPL restrictions, but if you wanted to write any of your own EmguCV code, then you'd need to buy a license yourself.

VVVV not closing will be a thread close issue.

@io - fixed the closing issue. I thought IPluginEvaluate inherited from IDisposable; I had to implement IDisposable explicitly to get vvvv to call my thread-closing code.
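For anyone hitting the same thing, a minimal sketch of the pattern (node and member names here are illustrative, not the actual VideoIn source):

```csharp
using System;
using System.Threading;
using VVVV.PluginInterfaces.V2;

[PluginInfo(Name = "VideoIn", Category = "EmguCV", Help = "Illustrative capture node")]
public class VideoInNode : IPluginEvaluate, IDisposable
{
    Thread FCaptureThread;
    volatile bool FRunning = true;

    public VideoInNode()
    {
        FCaptureThread = new Thread(Capture);
        FCaptureThread.Start();
    }

    void Capture()
    {
        while (FRunning)
        {
            // grab frames here
            Thread.Sleep(1);
        }
    }

    public void Evaluate(int SpreadMax)
    {
        // push the latest frame to output pins here
    }

    // IPluginEvaluate does not inherit from IDisposable, so implement it
    // explicitly to make sure the capture thread shuts down when the node is deleted.
    public void Dispose()
    {
        FRunning = false;
        FCaptureThread.Join();
    }
}
```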

Latest commit is fairly stable with spreaded videos / spreaded capture.
Capture ID is definitely sporadic; a quick fix inside the node will likely sort it, but I'm not looking at that right now.
Please read the readme on GitHub for full notes on this effort.

I tried this on another machine and it totally failed.
Will see if I can get it running…

EDIT: camera capture works fine, but there's a codec issue when loading files.

It’s using the HighGUI video loader, which uses VfW on Windows (i.e. not the same codecs as DirectShow)
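For context, this presumably goes through EmguCV's Capture class, which wraps that HighGUI loader; a minimal example of the call involved (the filename is just a placeholder):

```csharp
using System;
using Emgu.CV;
using Emgu.CV.Structure;

class FileLoadExample
{
    static void Main()
    {
        // Capture(string) opens a file via OpenCV's HighGUI backend,
        // which on Windows means VfW codecs rather than DirectShow ones.
        using (var capture = new Capture("someVideo.avi"))
        {
            Image<Bgr, byte> frame = capture.QueryFrame();
            Console.WriteLine(frame == null ? "no frame decoded" : "decoded a frame");
        }
    }
}
```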

I’m failing on Win32/XP but succeeding on Win64/7 with same codecs installed (i think :) )

I think we can deprecate WinXP support. Btw, what codecs do I need to install on Win7 x64? On Windows we have several different ffmpeg compilations like ffdshow etc.

I was installing ffdshow, which has VfW support,
and I also installed the Media Player Codec Pack.

I still presume the app is running 32-bit, unless .NET is doing something magic that allows 64-bit code to run inside a 32-bit process.

I’m upgrading that computer to Windows 7 this morning, then hopefully will report back better results.

Elliot

Hey guys,
this EmguCV looks really cool!
Btw, in the latest build, in the test-FaceTracking patch the FaceTracking (EmguCV) node appears in red.
Any ideas?
thanks

Hello, circuitb.

Can you provide some error info from a Renderer (TTY)?
I will also commit a new version today.

hi alg,
here is the error log:

00:00:23 - : Texture (Width: 1, Height: 1, Format: X8R8G8B8, Mip Map Count: 1) loaded in 0.000 seconds.
00:00:23 * : couldn’t connect pins of nodes FaceTracking (EmguCV) and Vector (2d Split).
00:00:23 * : couldn’t connect pins of nodes FaceTracking (EmguCV) and Vector (2d Split).
00:00:23 * : couldn’t connect pins of nodes VideoIn (EmguCV) and FaceTracking (EmguCV).
00:00:24 ERR : Corrupt link-message in Patch D:\projects\contributions\plugins\EmguCV\test-FaceTracking.v4p. srcViewPin or dstViewPin is nil!
00:00:24 ERR : Corrupt link-message in Patch D:\projects\contributions\plugins\EmguCV\test-FaceTracking.v4p. srcViewPin or dstViewPin is nil!
00:00:24 ERR : Corrupt link-message in Patch D:\projects\contributions\plugins\EmguCV\test-FaceTracking.v4p. srcViewPin or dstViewPin is nil!
00:00:24 : Creating new texture at slice: 0

Hello. Currently the new ObjectTracking node is still in development - we have some minor problems with the whole graph. You can track progress here: https://github.com/elliotwoods/VVVV.Nodes.EmguCV/network

Some notes on threading:

I think that the capture node (VideoIn, ImageLoader, VideoPlayer, etc.) should offer an enum pin (probably in config, but it could also be a default hidden input pin) with these options (a rough sketch of the enum follows the lists):

  • Very Immediate - completely unthreaded

  • Immediate - threaded capture (i.e. double buffered), unthreaded processing

  • Threaded - threaded capture, threaded processing (all processing in one thread)

  • Background - threaded capture, threaded processing (all processing nodes have their own thread)

  • Very Immediate = 0 frame latency, but vvvv fps can be locked to capture fps

  • Immediate = 0 to 1 frame latency, vvvv fps not hindered by capture fps

  • Threaded = 0 to N frame latency, all processing must run on 1 core (happens inside the capture’s thread)

  • Background = 0 to N frame latency, processing can be shared across cores
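To make this concrete, a minimal sketch of what such an enum could look like (TImmediacy is a hypothetical name, not committed code):

```csharp
// Hypothetical enum for the proposed config/hidden input pin.
// The capture node would pick one of these and the setting would be
// carried down the graph beneath it.
public enum TImmediacy
{
    VeryImmediate, // completely unthreaded: 0 frame latency, vvvv fps can lock to capture fps
    Immediate,     // threaded capture (double buffered), unthreaded processing
    Threaded,      // threaded capture + processing, all processing inside the capture's thread
    Background     // threaded capture, each processing node on its own thread
}
```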

I think it makes sense for the capture node to decide this option for the graph beneath it (not sure right now what happens if we have a node further down that takes 2 separate inputs from different captures set to different options; I presume the most immediate will take precedence and that immediacy will be carried down the graph).

Examples:
For structured light I'd choose Very Immediate
For face tracking with not so much processing I might choose Immediate
For face tracking with lots of processing I might choose Threaded.

It’s also possible that a node Immediacy (EmguCV) could accept frames, and change the immediacy mid-graph.

Sounds quite nice to look into that.

Had a play at working on a nice back-end system for it; I only went through the subsystem, but the idea was to use AddonFactory + Hde nodes to deal with threading.

Roughly, filters implement a separate interface, like IImageFilter/IImageSource (so they don't have to deal with threading/sync, and technically they know nothing about vvvv).

AddonFactory would register node infos/pins, and use Hde features to build a subgraph (which runs on its own thread), with a node holder to wrap the filters.

The advantage of this is that any improvement to the node holder/graph immediately benefits all filters.

Made some working prototypes for that concept; AddonFactory is fairly complete for this, but Hde was missing a few features (mainly Pin Connected/Disconnected events, Pin Direction, and global graph events) to make it reliable enough.

As datatypes, filters have:

  • Streams: roughly an input/output image; it is passed on the separate thread.
  • Parameters (vvvv input pins like value/string), which are synced to the subgraph.
  • Output Data (same as above but for output - things like contour data, for example).

The IImageFilter interface has the following methods (a rough sketch follows the list):

  • CreateInputStream
  • CreateOutputStream
  • CreateParameter
  • CreateOutputData
  • Evaluation model (keeping that for later, enough to do already).
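For illustration, a rough sketch of the shape such an interface might take (the signatures and the IImageStream/IParameter/IOutputData types are hypothetical, not the actual prototype):

```csharp
// Hypothetical placeholder types - the real prototype defines its own.
public interface IImageStream { }
public interface IParameter { }
public interface IOutputData { }

// Hypothetical shape of the filter interface described above.
// Filters know nothing about vvvv or threading; the node holder in the
// subgraph calls these to wire up streams, parameters and output data.
public interface IImageFilter
{
    IImageStream CreateInputStream(string name);
    IImageStream CreateOutputStream(string name);
    IParameter CreateParameter(string name, double defaultValue);
    IOutputData CreateOutputData(string name);
}
```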

Questions/Ideas welcome :)

Hi,
here we are trying to use CUDA instead of the CPU for face tracking; it seems much better for stability.

EDIT: now with compiled dll by @Ale

FaceTrackingCudaNode.zip (12.3 kB)
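For reference, a rough sketch of what a CUDA-backed detection call can look like with EmguCV's GPU wrappers (this assumes the Emgu.CV.GPU namespace and GpuCascadeClassifier from EmguCV 2.x; the actual FaceTrackingCudaNode in the zip may differ):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.GPU;
using Emgu.CV.Structure;

class FaceTrackingCudaSketch
{
    // The cascade file name is a placeholder - use whatever cascade the node ships with.
    GpuCascadeClassifier FCascade = new GpuCascadeClassifier("haarcascade_frontalface_default.xml");

    public Rectangle[] Detect(Image<Bgr, byte> frame)
    {
        // Upload the frame to the GPU, convert to greyscale and run the classifier there.
        using (var gpuFrame = new GpuImage<Bgr, byte>(frame))
        using (var gpuGray = gpuFrame.Convert<Gray, byte>())
        {
            return FCascade.DetectMultiScale(gpuGray, 1.2, 4, Size.Empty);
        }
    }
}
```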

I like this thread :)

Thanks for the compiled dll, perfect! xD

Here's a screen capture:

@ale I think we can make CUDA and CPU versions of the ObjectTracking node. Need to test how CUDA works on ATI cards; maybe we don't even need another version.