Screenspace Normals - Creation, Normal Maps and Unpacking



I'm trying to condense my deferred rendering G-Buffer, so I have some questions about getting 2-component screenspace normals. I know Frostbite and Killzone (the only two AAA companies' G-Buffers I could find) use them.

How are screenspace normals created, and is this step before or after using normal maps or bump maps? If it's done before using normal maps, how are normal maps going to be affected by the screenspace-ness of the normals, and if after, how can you justify using a Model-View matrix on all fragments rather than on vertices? Isn't that a lot more calculations?

Finally, how are they unpacked? I realize you can get the blue component using Pythagoras, but how are they returned to world space?


How are screenspace normals created, and is this step before or after using normal maps or bump maps?

They are created after using normal maps. In deferred rendering, you write to the various buffers (diffuse, normal, depth, etc.) in fragment shaders. By this time, the normal maps would have already been read.
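
As a rough illustration of that ordering, here is a CPU-side sketch in C++ using the GLM math library to stand in for shader vectors; the struct and function names are mine, not from any engine. The point is that the value reaching the normal attachment is the normal after the map has already been applied.

```cpp
// Hypothetical G-buffer texel and write routine; GLM stands in for shader vectors.
#include <glm/glm.hpp>

struct GBufferTexel {
    glm::vec3 diffuse;  // albedo
    glm::vec3 normal;   // already includes the normal-map perturbation
    float     depth;
};

// By the time this runs (per fragment), the normal map has been sampled
// and folded into perturbedNormal; that final value is what gets stored.
GBufferTexel writeGBuffer(glm::vec3 albedo, glm::vec3 perturbedNormal, float depth)
{
    return { albedo, glm::normalize(perturbedNormal), depth };
}
```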

If it's done before using normal maps, how are normal maps going to be affected by the screenspace-ness of the normals,...

If you are referring to normal maps on models, normal maps can stay the same. You can compress normal maps as well if you want, but this becomes irrelevant to what you do for writing to the normal buffer.

...and if after, how can you justify using a Model-View matrix on all fragments rather than on vertices? Isn't that a lot more calculations?

Your model's normals are computed per vertex and interpolated through rasterization to be per fragment. The normal maps are processed per fragment, but those are just texture lookups, and the result can be applied to the camera-space normal produced by the previous pipeline steps. Therefore, a per-fragment model-view multiplication is not necessary.
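
A minimal sketch of that per-fragment step, assuming the interpolated basis vectors already arrive in camera (view) space; C++ with GLM standing in for the shader math, and the names are illustrative:

```cpp
#include <glm/glm.hpp>

// n, t, b: interpolated per-vertex normal/tangent/bitangent, already in view space.
// mapSample: the normal-map texel, already remapped to [-1, 1].
// The whole per-fragment cost is one 3x3 basis change, with no model-view multiply.
glm::vec3 perturbNormal(glm::vec3 n, glm::vec3 t, glm::vec3 b, glm::vec3 mapSample)
{
    glm::mat3 tbn(glm::normalize(t),   // column 0: tangent
                  glm::normalize(b),   // column 1: bitangent
                  glm::normalize(n));  // column 2: normal
    return glm::normalize(tbn * mapSample);  // tangent space -> view space
}
```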

Finally, how are they unpacked? I realize you can get the blue component using pythagoras, but how are they returned to world space?

The idea behind 2-component normals versus 3-component normals is that the vector magnitude usually isn't taken into account for normals. Also, normals facing away from the camera generally aren't useful. With two components, you can model a hemisphere facing the camera. You unpack them by reversing the way you pack them.

Think about normals as positions. When converting from world space (x, y, z) to screenspace (x, y, depth), the depth does not affect where on the screen a mesh gets displayed. Convert the 3D normal to a vector in terms of screenspace (through view-matrix multiplication).

Then you can discard the z-component (the one aligned with the camera) as long as your normals are unit vectors, because you can derive z from the other two components.
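
A minimal pack/unpack sketch under those assumptions (unit-length view-space normals on the camera-facing hemisphere); C++ with GLM, and the function names are mine:

```cpp
#include <glm/glm.hpp>
#include <algorithm>
#include <cmath>

// Pack: keep only x and y of the unit-length view-space normal.
glm::vec2 packNormal(glm::vec3 viewNormal)
{
    return glm::vec2(viewNormal.x, viewNormal.y);
}

// Unpack: rebuild z via Pythagoras, assuming the normal faces the camera (z >= 0).
glm::vec3 unpackNormal(glm::vec2 packed)
{
    float zSq = std::max(0.0f, 1.0f - packed.x * packed.x - packed.y * packed.y);
    return glm::vec3(packed.x, packed.y, std::sqrt(zSq));
}
```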



I'm trying to condense my Deferred Rendering G-Buffer. So I have some questions about getting 2-component Screenspace Normals. I know Frostbite and Killzone (the only two AAA companies' G-Buffers I could find) use them.

I'm confused when you say "screenspace normals". Killzone uses view-space normals: they store the X & Y coordinates of the normal in FP16 format and reconstruct Z using $z = \sqrt{1.0 - Normal.x^2 - Normal.y^2}$. A problem with that is that we lack the sign of Z; even in view space the normals can point away from the camera, i.e. Z is not always positive.
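
To make the sign problem concrete, here is a small CPU-side illustration in C++ with GLM (the values are illustrative): two unit normals that differ only in the sign of Z pack to identical components, and the square-root reconstruction can only ever return the positive Z.

```cpp
#include <glm/glm.hpp>
#include <cmath>
#include <cstdio>

int main()
{
    // Two unit-length view-space normals, identical except for the sign of z.
    float z = std::sqrt(1.0f - 0.3f * 0.3f - 0.4f * 0.4f);
    glm::vec3 towardCamera(0.3f, 0.4f,  z);
    glm::vec3 awayFromCamera(0.3f, 0.4f, -z);

    // Both pack to exactly the same two components...
    glm::vec2 packedA(towardCamera.x,   towardCamera.y);
    glm::vec2 packedB(awayFromCamera.x, awayFromCamera.y);

    // ...and the sqrt reconstruction always yields the positive z.
    float zA = std::sqrt(1.0f - packedA.x * packedA.x - packedA.y * packedA.y);
    float zB = std::sqrt(1.0f - packedB.x * packedB.x - packedB.y * packedB.y);
    std::printf("reconstructed: %f and %f, originals: %f and %f\n",
                zA, zB, towardCamera.z, awayFromCamera.z);
    return 0;
}
```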

How are screenspace normals created, and is this step before or after using normal maps or bump maps? If it's done before using normal maps, how are normal maps going to be affected by the screenspace-ness of the normals, and if after, how can you justify using a Model-View matrix on all fragments rather than on vertices? Isn't that a lot more calculations?

After loading your normal maps, you construct a TBN matrix to transform from tangent space to, e.g., world space before writing to the G-Buffer. So in essence, your normals in the G-Buffer are stored in world space, not screen space. With that you can apply lighting in world space; as far as I'm aware, Crytek & Epic do their lighting in world space now.
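
A sketch of that TBN step, written in C++ with GLM mirroring the shader math; the identifiers are mine, and the tangent.w handedness trick is just one common convention, not necessarily the one any particular engine uses.

```cpp
#include <glm/glm.hpp>

// mapSample: normal-map texel already remapped to [-1, 1].
// worldNormal / worldTangent: per-vertex attributes transformed to world space;
// the tangent's w component carries the bitangent handedness (a common convention).
glm::vec3 tangentToWorld(glm::vec3 mapSample, glm::vec3 worldNormal, glm::vec4 worldTangent)
{
    glm::vec3 n = glm::normalize(worldNormal);
    glm::vec3 t = glm::normalize(glm::vec3(worldTangent));
    glm::vec3 b = glm::cross(n, t) * worldTangent.w;   // reconstruct the bitangent
    glm::mat3 tbn(t, b, n);                            // tangent-space basis in world space
    return glm::normalize(tbn * mapSample);            // this is what the G-Buffer stores
}
```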

I'm not sure what you mean by "screenspace-ness", but when writing any sort of data to the G-Buffer, you're essentially writing its current value: if the normals are in world space, they will remain in world space when written to the G-Buffer. When you sample the G-Buffer on the second pass, for each texel you get an RGB color (assuming you're using RGB) that maps back to your coordinate data.
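
For instance, a common (but not universal) choice is to remap the world-space normal from [-1, 1] into [0, 1] before storing it as an RGB color, and to undo that remap on the lighting pass; here is a sketch in C++ with GLM, with names of my own choosing.

```cpp
#include <glm/glm.hpp>

// First pass: remap the world-space normal into color range before storage.
glm::vec3 encodeNormalRGB(glm::vec3 worldNormal)
{
    return glm::normalize(worldNormal) * 0.5f + 0.5f;   // [-1, 1] -> [0, 1]
}

// Second (lighting) pass: recover the world-space normal from the sampled texel.
glm::vec3 decodeNormalRGB(glm::vec3 rgb)
{
    return glm::normalize(rgb * 2.0f - 1.0f);            // [0, 1] -> [-1, 1]
}
```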