Render Shading to Texture

Hi,

here is what I’d like to do:

1.) Load a 3D file plus its texture (DX11).

2.) Do some shading, e.g. via PhongPoint.

3.) Render an updated version of just the texture that includes the shading information (base texture + Phong shading).

Sounds so easy, but I’ve no clue how to achieve point 3.) on the list.

Any ideas? Is it necessary to write my own renderer for this, or is there maybe a way to output a plane with the 2D texture from the pixel shader of an adapted PhongPoint?

Best,
timpi

Well, technically speaking, anything that passes through the regular renderer is a texture afterwards…
Beyond that I'm struggling with guesses:
If you want multiple texture slices, you can use the texture array renderer.
If you want separate passes like texture and lighting, you have to use the MRT technique.

Maybe I have to be more specific: I want to bake the shading effects, for example those of the Phong shader, into the initial texture (the diffuse color texture of the model).

So in the end I need a flat 2D render output with the original texture plus the shading effects, in the original UV layout.

Hey Johannes,
just pass different screenspace positions to PS, based on UV instead of object space.
Example Constant.fx in VS:

Replace

Out.Pos = mul(Pos, tWVP);

by

Out.Pos = float4((TexCd.xy-0.5)*2*float2(1,-1),0,1);
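
For context, here is a minimal sketch of what a complete bake pass could look like with this trick. It is only an illustration of the idea, not the actual Constant.fx or PhongPoint code: the names (DiffuseTex, LinSampler, tW, LightDir) and the simple diffuse term in the PS are assumptions you would replace with your own shading.

Texture2D DiffuseTex;
SamplerState LinSampler { Filter = MIN_MAG_MIP_LINEAR; };

cbuffer cbPerObject : register(b0)
{
    float4x4 tW;       // world transform, only needed so the baked lighting matches the 3D view
    float3 LightDir;   // example light direction, an assumption for this sketch
};

struct VS_IN
{
    float4 PosO  : POSITION;   // object-space position, unused for the bake itself
    float3 NormO : NORMAL;
    float2 TexCd : TEXCOORD0;
};

struct VS_OUT
{
    float4 Pos   : SV_Position;
    float3 NormW : TEXCOORD1;
    float2 TexCd : TEXCOORD0;
};

VS_OUT VS(VS_IN input)
{
    VS_OUT output;
    // Rasterize the mesh into its UV layout:
    // UV [0..1] -> clip space [-1..1], with Y flipped.
    output.Pos = float4((input.TexCd - 0.5) * 2 * float2(1, -1), 0, 1);
    // Normals are still transformed as usual so the baked shading looks the same as in 3D.
    output.NormW = normalize(mul(float4(input.NormO, 0), tW).xyz);
    output.TexCd = input.TexCd;
    return output;
}

float4 PS(VS_OUT input) : SV_Target
{
    float4 albedo = DiffuseTex.Sample(LinSampler, input.TexCd);
    // Placeholder diffuse term; put the full PhongPoint lighting code here instead.
    float diffuse = saturate(dot(normalize(input.NormW), -normalize(LightDir)));
    return float4(albedo.rgb * diffuse, albedo.a);
}

technique11 BakeToUV
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_4_0, VS()));
        SetPixelShader(CompileShader(ps_4_0, PS()));
    }
}

Render that into a render target with the same resolution as the diffuse map and you get the shaded texture back in the model's original UV layout.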

Astroboy Normalmap:

Yay! Works great with the standard Phong shader! Now we have to tackle some problems with our home-cooked Phong, but it will work.

Super handy shader snippet. Would you like to add it here --> shadersnippets as well?

Done…

That is indeed very cool.
Probably this could be used for baking high-poly models down to low-poly models via normal mapping too. I thought that would be way more complicated :D

It would be interesting to extend it for baking ambient occlusion into models too.
With forward methods like the one from DX9 that could be possible.

But the approach for baking screen-space effects would be different, I guess.
Anyone have an idea?

The whole point of using screen-space approximations is not to bake, though. If you bake, you can easily afford better quality by not doing stuff in screen space.

This is debatable: you can render depth, normals and positions this way, but the occlusion is going to need to know about other objects in the scene. Unless you only do self-occlusion, but that is also screen space.

@m4d I know what you mean and of course it makes sense.
I was just thinking about exporting advanced effects from deferred shading with DX11 to, for example, WebGL applications,
but yeah, it's probably relatively clumsy, since you need to do something like create a cube map and reproject it onto the model texture.

Regarding occlusion… it can make sense to bake it, for example for terrain ( http://codeflow.org/entries/2011/nov/10/webgl-gpu-landscaping-and-erosion/#ambient-occlusion ).

I can imagine it being nice to have different frequencies of ambient occlusion, baked and real-time at the same time, for different scales.

Also, global illumination can be baked in many scenarios.

There is also an interesting blog article about baked ambient occlusion and its interaction with movable objects (shadow reprojection etc.): http://blog.wolfire.com/2010/12/Overgrowth-graphics-overview

cheers, tekcor

I wasn’t really trying to say that baking makes no sense. I just wanted to point out that for baking occlusion it makes more sense not to sample in screen space, as other techniques are better suited imho (e.g. the codeflow article you linked, where he calculates the occlusion from the heightmap).