Image masking from pointcloud data

Hello,

I have a pile of small cubes on a table. The cubes can be grouped into heaps or arranged flat on the table.

I am now trying to use the Kinect to detect, from above, whether a particular area of the table is free of cubes. This filter already works in a rudimentary way.

In this free area I would then like to show a graphic, masked to the free area. The user should then be able to reveal more of the image.

How can I now create a mask from the xy data of the point cloud?

And how do I create the masked image? With a shader?

regards

Next try: I have a color array with values of 1 where the image should be visible.

I can now use a DynamicTexture.DX11 with the x and y resolution of the point cloud. As a result I get a black and white texture that is not hard-edged, and the image masked with it does not look good.

Also, I would like to create the mask and do the masking in one step.

Hi, a few screenshots would describe your problem much better…
In any case, you want the texture to be white at a certain Kinect depth? Why don’t you just take the raw Kinect depth image and do something like:

float low;
float high;

if (col.r > low && col.r < high)
{
	col = 1;
}
else
{
	col = 0;
}

return col;

Perfect, thank you. I would then take the original depth image for the masking and do all other math with the pointcloud.

I tried to scale down the Kinect2 depth image before converting it to a pointcloud, see screenshot. I get echoes from the image borders; the pointcloud from the original image is clean.

I tried several settings of the Renderer (DX11 - TempTarget)…

Well, post if you have more troubles…

Make sure in your scaling you are doing POINT not LINEAR etc. in the shader or you will get interpolation at depth edges, introducing artifacts like the above.
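In DX11 effect code that means declaring the sampler with point filtering, something like this sketch (the name sPoint is made up):

SamplerState sPoint
{
    Filter = MIN_MAG_MIP_POINT;  // nearest-neighbor: no blending between depth texels
    AddressU = Clamp;
    AddressV = Clamp;
};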

@mediadog: thanks!

If I make the mask from the depth image, I get problems with the float precision of the IOBox: when I set the precision to 5 or 6, it stays at 4 decimals.

The other problem is that the threshold is closer vertically below the Kinect than in a corner…

I’d recommend handling everything in millimeters at that level, as that’s how the data is represented in the depth image (16 bit luminance as I recall); keeps it all nice and integer-y with no float rounding or comparison problems. Only convert to float meters for pointcloud data.
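A minimal sketch of that idea, assuming the depth texture is 16-bit luminance that the sampler returns normalized to 0..1 (all names here are made up):

Texture2D texDepth;     // raw Kinect depth texture
SamplerState sPoint;    // point-filtered sampler, as above

int DepthMin;           // thresholds in integer millimeters
int DepthMax;

float4 PS_MM(float4 PosWVP : SV_POSITION, float2 uv : TEXCOORD0) : SV_Target
{
    // 0..1 luminance back to integer millimeters
    int depth = (int)(texDepth.Sample(sPoint, uv).r * 65535.0 + 0.5);

    // integer compare: no float rounding or comparison problems
    return (depth >= DepthMin && depth <= DepthMax) ? 1.0 : 0.0;
}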

I’m a little bit slow in the brain area responsible for shaders. How can I get more resolution in depth?

You mean I should use values from 0 to 50000 and scale them down inside the shader to do the threshold?

Texture2D tex0;       // the image to be masked
Texture2D tex1;       // the Kinect depth texture
SamplerState s0;

float high = 1.0;
float low = 0.0;

float4 PS_Threshold(float4 PosWVP : SV_POSITION, float2 x : TEXCOORD0) : SV_TARGET
{
	float4 c0 = tex0.Sample(s0, x);
	float4 c1 = tex1.Sample(s0, x) * float4(1, 1, 1, 0);  // zero the alpha of the depth sample

	// pass the image through only where the depth value lies inside the band
	if (c1.r > low && c1.r < high) {
		return c0;
	} else {
		return 0;
	}
}

Yes, that’s what I do, sorry I was not more clear. For a DX9 example, here’s what I do to get a greyscale image from a depth image with specifiable white and black limits, very similar I think to what you are talking about. For DX11 I actually use compute shaders for everything, but a similar idea.

DepthMin and DepthMax are integer millimeters, and you can see in the pixel shader where I multiply the red value by 65535 to get back to millimeters before doing the comparisons (and note the sampler states set to POINT):

//@author: mediadog
//@help: Convert Primesense depth map to inverted grayscale
//@tags:
//@credits:

// --------------------------------------------------------------------------------------------------
// PARAMETERS:
// --------------------------------------------------------------------------------------------------

//transforms
float4x4 tW: WORLD;        //the models world matrix
float4x4 tV: VIEW;         //view matrix as set via Renderer (EX9)
float4x4 tP: PROJECTION;
float4x4 tWVP: WORLDVIEWPROJECTION;

//texture
texture Tex <string uiname="Texture";>;
sampler Samp = sampler_state    //sampler for doing the texture-lookup
{
    Texture   = (Tex);          //apply a texture to the sampler
    MipFilter = POINT;         //sampler states
    MinFilter = POINT;
    MagFilter = POINT;
};

//texture transformation marked with semantic TEXTUREMATRIX to achieve symmetric transformations
float4x4 tTex: TEXTUREMATRIX <string uiname="Texture Transform";>;

int DepthMin <string uiname="DepthMin";>;

int DepthMax <string uiname="DepthMax";>;

float Pedestal <string uiname="Pedestal";>;

//the data structure: "vertexshader to pixelshader"
//used as output data with the VS function
//and as input data with the PS function
struct vs2ps
{
    float4 Pos  : POSITION;
    float2 TexCd : TEXCOORD0;
};

// --------------------------------------------------------------------------------------------------
// VERTEXSHADERS
// --------------------------------------------------------------------------------------------------
vs2ps VS(
    float4 PosO  : POSITION,
    float4 TexCd : TEXCOORD0)
{
    //declare output struct
    vs2ps Out;

    //transform position
    Out.Pos = mul(PosO, tWVP);
    
    //transform texturecoordinates
    Out.TexCd = mul(TexCd, tTex);

    return Out;
}

// --------------------------------------------------------------------------------------------------
// PIXELSHADERS:
// --------------------------------------------------------------------------------------------------

float4 PS(vs2ps In): COLOR
{
	int Depth;	// depth in integer millimeters
	
    float4 col = tex2D(Samp, In.TexCd);
	
	// Get depth data from R
	Depth = col.r * 65535;
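	// (the sampler returns the 16-bit luminance normalized to 0..1,
	// so scaling by 65535 recovers integer millimeters)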
	
	bool good = (Depth >= DepthMin) && (Depth <= DepthMax);
	
	col.rgb = good * (Pedestal + (DepthMax - Depth) / (float)(DepthMax - DepthMin));
	col.a *= good;
	
    return col;
}

// --------------------------------------------------------------------------------------------------
// TECHNIQUES:
// --------------------------------------------------------------------------------------------------

technique TSimpleShader
{
    pass P0
    {
        //Wrap0 = U;  // useful when mesh is round like a sphere
        VertexShader = compile vs_1_1 VS();
        PixelShader  = compile ps_2_0 PS();
    }
}

technique TFixedFunction
{
    pass P0
    {
        //transforms
        WorldTransform[0]   = (tW);
        ViewTransform       = (tV);
        ProjectionTransform = (tP);

        //texturing
        Sampler[0] = (Samp);
        TextureTransform[0] = (tTex);
        TexCoordIndex[0] = 0;
        TextureTransformFlags[0] = COUNT2;
        //Wrap0 = U;  // useful when mesh is round like a sphere
        
        Lighting       = FALSE;

        //shaders
        VertexShader = NULL;
        PixelShader  = NULL;
    }
}

In the pointcloud pack there are other ways to filter points independently of the actual position/orientation of the Kinect (or of the floor, since everything is relative).

The easiest is to just use the points of the cloud that are inside a “safe box” (aka a box transform) and filter all others out, as sketched below.
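A minimal sketch of that test in HLSL, assuming the box is supplied as a transform and each point is checked against the unit box in the box’s local space (the parameter name tBoxInv is made up):

float4x4 tBoxInv;  // inverse of the safe-box transform

// a point survives the filter if it lands inside the unit box
// once moved into the box's local space
bool InsideSafeBox(float3 p)
{
    float3 q = mul(float4(p, 1.0), tBoxInv).xyz;
    return all(abs(q) <= 0.5);  // unit box centered at the origin
}

Each cloud point would then be kept or discarded based on this test, wherever the filtering happens (e.g. in a compute shader over the point buffer).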

thanks for sharing the shader!

@velcrome - Absolutely the way to go for pointcloud data! But I think what is needed here is an image mask, and trying to build that from a pointcloud can get more complicated than doing an image shader.

@mindthegap - You’re welcome! Oh, and the reason there is a boolean expression to get “good”, which is then used in the following expressions, is to avoid “if” conditional statements, which I am told would slow the shader down. I have not benchmarked it, but doing it this way ensures the same execution path is taken in all cases, something GPUs like. That “&&” operator may break that if the left side of it evaluates false (in optimized C code it would do a test and branch there), but hey…
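If that “&&” short-circuit is a concern, one branch-free alternative (my suggestion, not something from this thread) is to build the flag from the step() intrinsic, as a drop-in replacement for the two “good” lines in the shader above:

	// 1 when DepthMin <= Depth <= DepthMax, else 0 -- built purely from
	// arithmetic, so every pixel takes the same execution path
	float good = step((float)DepthMin, (float)Depth) * step((float)Depth, (float)DepthMax);

	col.rgb = good * (Pedestal + (DepthMax - Depth) / (float)(DepthMax - DepthMin));
	col.a *= good;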