Kinecttoolkitdx11 and Kinect 2

Hi all,

Firstly, thanks tmp for the amazing contribution.
I am trying to get it to work with Kinect 2 and, as recommended, I started with the pointcloud examples.

I am using the drivers from: kinect2-nodes
The ones built by noobusdeer (thanks btw!) and the latest Kinect drivers.

First issue: there are two RGB-depth Kinect2 nodes:
RGBDepth and DepthRGB

If I connect the Depth, the RGB and the RGBDepth (or the DepthRGB) to the pointcloud, I can see nothing being generated.

If I connect the Depth, the RGB and the DepthRGB (without the RGBDepth being connected on the Kinect Runtime), I can see the pointcloud.

My question is: what does the RGBDepth do, and am I missing something from the pointcloud, since it is essentially not using it?

thanks.

best,

Doros

First of all, you should use the RGBDepth node (by sebl).

It delivers a texture that contains the offsets from the depth image to the RGB image (the texture coordinates in your depth frame are not the same as in your RGB image, because two different cameras are used for depth and RGB).

I cannot help you at the moment, because I have no Kinect2 right now. But did you make sure that in your Kinect2 node the Enable Color pin is set to 1?

Yes, use sebl's node and set Raw Data to 0, if I recall correctly. Then you have a UV map you can use to correctly sample the RGB.

Simple tfx:

//@author: Everyoneishappy
//@help: template for texture fx
//@tags: fx
//@credits: 

Texture2D texture2d : PREVIOUS; // RGB image
Texture2D uvTex;                // UV map from the RGBDepth node

SamplerState linearSampler : IMMUTABLE
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

struct psInput
{
	float4 p : SV_Position;
	float2 uv : TEXCOORD0;
};


float4 PS(psInput input) : SV_Target
{
	// The UV map stores, per depth pixel, the coordinates at which
	// to sample the RGB image; only .xy are needed
	float2 tUV = uvTex.Sample(linearSampler, input.uv).xy;
	float4 c = texture2d.Sample(linearSampler, tUV);
	return c;
}

technique10 Process
{
	pass P0
	{
		SetPixelShader(CompileShader(ps_4_0,PS()));
	}
}
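
One caveat: the Kinect 2 coordinate mapper reports depth pixels that have no colour correspondence with invalid UV values (negative infinity), and with a wrapping sampler those would pull in colour from the wrong side of the image. A hedged variant of the pixel shader above that masks them out instead (same textures and sampler as in the tfx; the black fallback colour is just an assumption, use whatever suits your patch):

```hlsl
float4 PS(psInput input) : SV_Target
{
	// Look up where this depth pixel falls in the RGB image
	float2 tUV = uvTex.Sample(linearSampler, input.uv).xy;

	// Unmapped depth pixels carry invalid UVs (e.g. -infinity);
	// return black for them instead of letting the sampler wrap
	if (any(tUV < 0.0) || any(tUV > 1.0))
		return float4(0, 0, 0, 1);

	return texture2d.Sample(linearSampler, tUV);
}
```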