OpenNI for alpha27.1

hellos,

this is a new pre-release of the openni-plugin contribution only working with the latest b27.1 alphas.

the kinect node now returns the versions of the drivers it is using, which i hope will help with driver-version troubles. hand- and gesture-tracker got a cleanup, there are no more crashes when re-creating the kinect-node multiple times, and some general stability issues are fixed.

tested with:

  • OpenNI 1.5.2.23 (32bit, stable, redist edition)
  • NITE 1.5.2.21 (32bit, stable, redist edition)
  • SensorKinect091 5.1.0.25 (32bit)

OpenNI_Kinect.zip (29.7 kB)

new version fixing a problem with nodes only running after one disable/enable cycle and a problem with the handtracker not working with high-precision values on its inputs. also, adapt-to-rgb-view on the depth-node now defaults to 0.

OpenNI_Kinect.zip (29.9 kB)

thanks, i’m on it! seems to work well and fast, even on an old centrino.
the user node has a texture pin but i get nothing out of it, is it supposed to send out the user without background?
The Adapt to RGB view pin didn’t work until i added an RGB node to the patch.
is it possible to have an entry for ALL the joints in the skeleton enum?
will test more…
ciao!

hei sapo, good to hear it works.
check the user node’s helppatch. yes, the texture returns a mask to distinguish user from background. in order not to do costly pixelwise operations, the node returns the texture exactly as it comes from openni, where a user-pixel is only set to a value of 1 (out of a 16bit value range). therefore you won’t see it, but you can still easily work with it in a shader or even make it visible. i’ve been thinking that an option to choose an output-mode would make sense (as on the depth node).
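to illustrate why the raw mask looks black, here is a pure-python sketch with made-up values (not the actual openni api): a user-pixel of 1 in a 0..65535 range is practically invisible, but a simple threshold-rescale makes it viewable:

```python
# a user mask as it might come from openni (hypothetical values):
# 16bit pixels, background = 0, user-pixels set to the user id (here 1).
W, H = 4, 4
mask = [[0] * W for _ in range(H)]
mask[1][1] = mask[1][2] = mask[2][1] = mask[2][2] = 1  # a tiny "user" blob

# displayed as-is, a value of 1 out of 0..65535 is indistinguishable
# from black. rescaling user-pixels to full range makes the mask visible:
visible = [[65535 if px > 0 else 0 for px in row] for row in mask]
```

the same rescale is what a one-line pixel shader would do on the gpu.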

true, adapt-to-rgb only works when an rgb-node is present. i’d argue that makes sense: if you’re not using rgb, you won’t need to adapt to its view.

concerning the skeleton enum…you mean on the Joint enum you want an entry All instead of having to spread that pin and select all joints manually?

Ah, got it! i can’t see anything in the user help patch because of the intel videocard. yes, a non-GPU option could be helpful for some.

Yes, Joint is what i meant, sorry. About the Skeleton helppatch: i don’t understand the relation between the Skeleton Profile and the Joint pin, which allows selecting parts not specified in the Skeleton Profile. In this patch, if i set the Skeleton Profile to Lower, the patch freezes.
Attached a small patch made for testing and fun.

SVGeleton.v4p (23.8 kB)

ok, this is with a viewable option on the user-node.

i don’t actually know what the profile is for. i don’t get a freeze when i set it to ‘Lower’ in your patch. concerning an ‘All’ option…that would certainly be possible, but it works a bit against the current implementation, where for every input joint-slice you get one output. with ‘All’ you’d get multiple output-slices for that one input. i understand the convenience-issue, but it should be possible to prepare a module that selects the desired joints for you…

OpenNI_Kinect.zip (30.1 kB)

nice, testing it. i see the user node outputs one texture for each user detected, but they seem to be the same texture repeated. i’m trying to give a color to each user.
Also, a few times i pointed the kinect at some objects, which it tried to calibrate for some time until the patch crashed completely. The same happened when i was not quite within the field of view while it was calibrating me.

right, the user-texture was not spread correctly. i’ve now changed that so the node always returns only one texture, colored according to the given input colors. see latest alpha.

cannot say anything about the crashes as i haven’t experienced them.

great, the users node works well! maybe the user node should also have an Adapt to RGB pin? would it be possible to have the depth information on the user maps too, also with colors, ala hierro’s plugin? that’s nice for using FX shaders like colorramp on the usermap. it can probably be done with a mask shader and the depthmap, but might be faster natively. details apart, it seems like we finally have a robust update/replacement for the good old hierro plugin :) thanks! will test more and report

not sure exactly what you mean by this, but it sounds like something one would rather do by combining depth and usermap in a custom shader. the basics are there for the user to combine them as needed.

concerning Adapt to RGB view on the User node…the user node accesses the same depth-generator as the depth node, so connecting a depth-node to the kinect-node and changing that pin also influences the output of the user node. i.e. there is only one depth-generator per kinect node…not sure yet if this is the final way to go, but i’ll leave it like that for now…

Like in this image, but without the background:

the green color is not flat, it’s showing the depth data as luminosity too. i thought it was something already coming like that from OpenNI, but yes, it can also be done through shaders by combining the 2 textures.
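as a rough sketch of that shader idea, in plain python with hypothetical values (just the per-pixel math, not how vvvv/openni actually hand over the textures):

```python
# hypothetical data: a normalized depth map (0..1) and a user mask
# (0 = background, otherwise a user id), both 4x4 here.
depth = [[(y * 4 + x) / 15.0 for x in range(4)] for y in range(4)]
mask = [[0] * 4 for _ in range(4)]
mask[1][1] = mask[1][2] = mask[2][1] = mask[2][2] = 1

user_color = (0.0, 1.0, 0.0)  # green, as in the screenshot

# tint user-pixels with the color, modulated by depth as luminosity;
# background pixels stay black.
rgb = [[tuple(c * depth[y][x] if mask[y][x] else 0.0 for c in user_color)
        for x in range(4)] for y in range(4)]
```

in a pixel shader this would be a single multiply of the two sampled textures with the color.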

About the depth-generator: ok, understood.
Would it be possible with OpenNI to get the raw IR camera as a texture too?

thanks&ciao

ja, that’s definitely a shader application. and i haven’t checked for the IR image. what would you do with it?

ok, just for using the kinect as an ir camera, maybe even covering the ir laser and using another ir light. in my case i’d want to do face tracking while projecting onto the same face.

Hey!

I installed the drivers u7angel mentioned in another post but I can’t get the kinect to work under beta 27.1. The Driver pin of the Kinect (Devices OpenNI) node says “Unable to connect to Device!” but I can receive a depth picture. The Skeleton node and everything else refuse to work… The Kinect (Devices Microsoft) node turns red after a few seconds…

Here the list of the installed drivers:
OpenNI 1.5.2.23 (32bit, stable, redist edition)
NITE 1.5.2.21 (32bit, stable, redist edition)
SensorKinect091 5.1.0.25 (32bit)

As I mentioned earlier, I’m working on a MacBook Pro with Bootcamp and Win 7 32bit. Here is a picture of the patch including the nodes I’m working with and a TTY renderer.

Any help is much appreciated.

Thanks in advance,
dl-110

sounds like some driver issue. try uninstalling everything very carefully and reinstalling, or try another pc

Yeah, I thought the same thing.

So I tried another PC. I am now working on my Lenovo laptop and everything is fine here. I installed EXACTLY the same drivers, in the same order and from the same source. But now I want to get it to work on the MacBook Pro.
I will give it another try tomorrow and hopefully it’ll work. Does anyone have a suggestion what might be the problem when working with the Kinect on a MacBook Pro running Win7 32bit with Bootcamp?

Thanks!
dl-110

just uninstall everything in the uninstall window; some entries are at the very bottom of the list, named primesense. Give it a reboot too, then try plugging in the kinect without installing anything and make sure no driver gets auto-installed. then unplug and reinstall the stuff.
You could also google that “can’t find any node of the requested type” error for specific answers about it, it’s a common OpenNI error

Didn’t notice that “can’t find any node of the requested type” is a common OpenNI error. I thought this error was directly connected with vvvv… Anyhow, I’ll google it. Thanks for the hint.

i just had the same problem on a system with windows 7 64bit; solved it by running vvvv as administrator. let me know if it works for you too

Hi, I was trying to get some rotations to work and couldn’t figure out how to use the skeleton orientation right. I’ve written a little piece of code which converts the 3 orientation vectors to a quaternion, which can then be used with the Quaternion (Rotate) node. It seems to work somehow, though sometimes the rotations might be inverted etc. Maybe someone with better math skills than mine wants to check it. It could also be nice to have something like this as an additional output on the Skeleton node. If I am totally wrong and someone wants to explain how to use the orientations right, I would also be very glad. Thanks

#region usings
using System;
using System.ComponentModel.Composition;

using VVVV.PluginInterfaces.V1;
using VVVV.PluginInterfaces.V2;
using VVVV.Utils.VColor;
using VVVV.Utils.VMath;

using VVVV.Core.Logging;
#endregion usings

namespace VVVV.Nodes
{
	#region PluginInfo
	[PluginInfo(Name = "OrientationToQuaternion", Category = "Kinect", Help = "transforms Kinect orientation to Quaternion", Tags = "quaternion")]
	#endregion PluginInfo
	public class KinectOrientationToQuaternionNode : IPluginEvaluate
	{
		#region fields & pins
		[Input("OrientationX")]
		ISpread<Vector3D> FJointOrientationXIn;
		[Input("OrientationY")]
		ISpread<Vector3D> FJointOrientationYIn;
		[Input("OrientationZ")]
		ISpread<Vector3D> FJointOrientationZIn;
		
		[Output("QuaternionOUT")]
		ISpread<Vector4D> FQuaternionOut;

		[Import()]
		ILogger FLogger;
		#endregion fields & pins

		//called when data for any output pin is requested
		public void Evaluate(int SpreadMax)
		{
			
			int inputCount = FJointOrientationXIn.SliceCount;
			FQuaternionOut.SliceCount = inputCount;
			
			for(int x=0; x<inputCount; x++)
			{
				Matrix4x4 o = new Matrix4x4();
				o.row1 = new Vector4D(FJointOrientationXIn[x].x, FJointOrientationXIn[x].y, FJointOrientationXIn[x].z, 0);
				o.row2 = new Vector4D(FJointOrientationYIn[x].x, FJointOrientationYIn[x].y, FJointOrientationYIn[x].z, 0);
				o.row3 = new Vector4D(FJointOrientationZIn[x].x, FJointOrientationZIn[x].y, FJointOrientationZIn[x].z, 0);
				
				double tr = FJointOrientationXIn[x].x + FJointOrientationYIn[x].y + FJointOrientationZIn[x].z;
				
				double qw = 0.0d;
				double qx = 0.0d;
				double qy = 0.0d;
				double qz = 0.0d;
				
				if(tr > 0)
				{
					double S = Math.Sqrt(tr+1.0d) * 2.0d;
					qw = 0.25d * S;
					qx = (o.m23 - o.m32) / S;
					qy = (o.m31 - o.m13) / S;
					qz = (o.m12 - o.m21) / S;
				}else if((o.m11 > o.m22) && (o.m11 > o.m33))
				{
					double S = Math.Sqrt(1.0d + o.m11 - o.m22 - o.m33) * 2.0d;
					qw = (o.m23 - o.m32) / S;
					qx = 0.25d * S;
					qy = (o.m21 + o.m12) / S;
					qz = (o.m31 + o.m13) / S;
				}else if(o.m22 > o.m33)
				{
					double S = Math.Sqrt(1.0d + o.m22 - o.m11 - o.m33) * 2.0d;
					qw = (o.m31 - o.m13) / S;
					qx = (o.m21 + o.m12) / S;
					qy = 0.25d * S;
					qz = (o.m32 + o.m23) / S;
				}else{
					double S = Math.Sqrt(1.0d + o.m33 - o.m11 - o.m22) * 2.0d;
					qw = (o.m12 - o.m21) / S;
					qx = (o.m31 + o.m13) / S;
					qy = (o.m32 + o.m23) / S;
					qz = 0.25d * S;
				}
				
				FQuaternionOut[x] = new Vector4D(qx, qy, qz, qw);		
			}
			
		}
	}
}
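as a sanity-check of the branchy conversion above, here is the same matrix-to-quaternion logic transcribed to plain python (a sketch for the row-major convention the plugin uses; the function name and test matrices are mine, not from the plugin):

```python
import math

def mat_to_quat(m):
    """Convert a 3x3 rotation matrix (row-major, row-vector convention
    as in the plugin above) to a quaternion (x, y, z, w)."""
    m11, m12, m13 = m[0]
    m21, m22, m23 = m[1]
    m31, m32, m33 = m[2]
    tr = m11 + m22 + m33
    if tr > 0:
        s = math.sqrt(tr + 1.0) * 2.0
        return ((m23 - m32) / s, (m31 - m13) / s, (m12 - m21) / s, 0.25 * s)
    elif m11 > m22 and m11 > m33:
        s = math.sqrt(1.0 + m11 - m22 - m33) * 2.0
        return (0.25 * s, (m21 + m12) / s, (m31 + m13) / s, (m23 - m32) / s)
    elif m22 > m33:
        s = math.sqrt(1.0 + m22 - m11 - m33) * 2.0
        return ((m21 + m12) / s, 0.25 * s, (m32 + m23) / s, (m31 - m13) / s)
    else:
        s = math.sqrt(1.0 + m33 - m11 - m22) * 2.0
        return ((m31 + m13) / s, (m32 + m23) / s, 0.25 * s, (m12 - m21) / s)

# the identity matrix should map to the identity quaternion
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(mat_to_quat(identity))  # → (0.0, 0.0, 0.0, 1.0)
```

feeding in a known rotation (e.g. 90° about z) is an easy way to check whether the inverted rotations come from the matrix convention rather than from the branch math.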