Project for deaf people

Hello people!

I need help with a friend’s project:

He’s trying to recognize letters and words from deaf people’s sign-language gestures.

He has a Kinect for Windows (with near mode and finger detection).

Is vvvv useful for this goal?

Does he have to use OpenNI drivers or official drivers?

Any help will be highly appreciated!

Thanks in advance,

Pablo

Hi Pablo

Interesting project. Using Kinect to detect hand positions is going to be complicated (for me at least). Kinect works best when detecting body gestures; near mode lets the body come closer to the Kinect, but does not enable the tracking of individual fingers.

A possible solution would be for you to investigate Detect Object; Detect Object can be trained to recognise hands in certain positions.

Other hardware options soon to be released include Leap Motion (designed for hand tracking; reviews are not good at the moment, let’s see what happens on release) and Intel’s Perceptual Computing.

Obvious Engine looks really cool - I’ve never tested it though, might be worth contacting the company behind it:

http://obviousengine.com/

Please let us know how your research goes.

This is interesting. From what I know, real-time sign language recognition is pretty advanced.

It also seems possible to implement it in vvvv. I don’t think it has been yet, though there is quite some OpenCV stuff around.
http://www.youtube.com/watch?v=cxHMgl2_5zg
Don’t know if there is an implementable library.

gaz, thanks for your reply. I believe “Kinect for Windows” DOES enable the tracking of individual fingers.

There are tons of videos and texts that prove that, e.g.: http://www.youtube.com/watch?v=TwlWIdSUGQ8

aivenhoe, thanks for the link.

Intel’s Perceptual Computing Toolkit is already released and I did the vvvv port: plugin-for-softkinetic-ds325-creative-interactive-gesture-camera-with-intel-perceptual-
(has quite good finger recognition, but does essentially the same as Kinect in near mode)

Also there’s a $P Point Cloud Recognizer which ethermammoth ported to vvvv: p-point-cloud-recognizer, you could try to just route all detected hand points (fingertips, knuckles, whatever) through that for detection. Or maybe even detect the hand and route the contour.
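To make the routing idea concrete, here is a minimal sketch of the $P point-cloud matching approach in plain Python (all function names here are my own; the actual vvvv port by ethermammoth will differ). The idea: normalize each set of hand points (resample, scale, center), then greedily match a candidate cloud against stored templates and pick the closest one.

```python
import math

def normalize(points, n=32):
    """Resample the point sequence to n points along its path, scale to a
    unit box, and translate the centroid to the origin ($P-style)."""
    pts = list(points)
    path_len = sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
    step = path_len / (n - 1)
    resampled, accum, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if accum + d >= step and d > 0:
            t = (step - accum) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            resampled.append(q)
            pts.insert(i, q)          # continue walking from the new point
            accum = 0.0
        else:
            accum += d
        i += 1
    while len(resampled) < n:         # guard against float round-off
        resampled.append(pts[-1])
    xs, ys = zip(*resampled)
    size = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / n, sum(ys) / n
    return [((x - cx) / size, (y - cy) / size) for x, y in resampled]

def cloud_distance(a, b):
    """Greedy cloud matching: each point in a is paired with its nearest
    still-unused point in b; the sum of pair distances is the score."""
    unused, total = list(b), 0.0
    for p in a:
        j = min(range(len(unused)), key=lambda k: math.dist(p, unused[k]))
        total += math.dist(p, unused[j])
        unused.pop(j)
    return total

def recognize(candidate, templates):
    """Return the name of the template closest to the candidate cloud."""
    norm = normalize(candidate)
    return min(templates, key=lambda name: cloud_distance(norm, templates[name]))
```

In practice the points you feed in would be whatever the hand tracker gives you per frame (fingertips, knuckles, or contour samples), and the templates would be recorded examples of each sign.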

Leap Motion is probably of no use for you, as it works nearly exclusively for detecting downward-facing gestures (because the camera is below your hands, not in front).

Pablo, hand tracking and finger tracking are very different.

The demo you sent nicely shows how many fingers are being held up, but that really is the limit. If the person demoing the software instead pointed their hand directly at the Kinect with one or two fingers extended toward it, the same count of zero would be reported.

Can you post some example photos of hands in the positions you would like to detect?

Mid-air hand and gesture recognition is being developed now for Windows Kinect SDK: http://research.microsoft.com/apps/video/default.aspx?id=185502

Here is an older example of 3D tracking of the complete hand shape and its articulation:

Yes, there are many hand recognition developments out there using Kinect, but what you need next is to implement a system whose algorithm translates the different recognised shapes into letters, words and language. Of course you could use already existing gesture and hand pose recognition, for example:

http://www.microsoft.com/education/facultyconnection/articles/articledetails.aspx?cid=2466&c1=en-us&c2=0

http://developkinect.com/news/development/kinect-hand-pose-recognition

https://www.jitouch.com/characters/ (multi-touch based)

vvvv is certainly capable of doing the cross-domain mapping using Kinect or just computer vision via cameras; it depends how accurate you need it to be… it takes a fair bit of intelligent patching!
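As a rough illustration of that translation layer (everything here is hypothetical, not from any of the linked toolkits): a stream of per-frame pose labels from whatever recognizer you use could be turned into spelled-out letters by requiring each pose to be held for a number of consecutive frames, which filters out the noisy transitions between signs.

```python
class SignSpeller:
    """Sketch of a pose-to-letter translation layer. A pose label must be
    held for `hold_frames` consecutive frames before its letter is emitted,
    and it is not emitted again until the pose changes."""

    def __init__(self, pose_to_letter, hold_frames=15):
        self.pose_to_letter = pose_to_letter   # e.g. {"fist": "a", "flat": "b"}
        self.hold_frames = hold_frames
        self._current = None                   # pose seen on the last frame
        self._count = 0                        # how long it has been held
        self._emitted = False                  # letter already emitted for it?
        self.text = ""                         # accumulated output

    def feed(self, pose):
        """Call once per frame with the recognizer's current pose label."""
        if pose == self._current:
            self._count += 1
        else:                                  # pose changed: restart the timer
            self._current, self._count, self._emitted = pose, 1, False
        if (self._count >= self.hold_frames
                and not self._emitted
                and pose in self.pose_to_letter):
            self.text += self.pose_to_letter[pose]
            self._emitted = True
```

The same debouncing idea could be patched in vvvv with a FrameDelay/counter loop; the point is only that recognition and translation are separate stages, and the translation stage needs its own noise handling.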