Monday, August 1, 2016

Mess With Matrix Multiplication

I am currently refactoring the code and I bumped into the shader once again. Something is definitely wrong. I have read the "mul" documentation and the row/column matrix/vector conventions, and in my opinion the math in my shader is wrong, yet it turns out to render properly...

So, inside my shader file I have:

cbuffer cbPerFrameViewProjection : register(b1)
{
    column_major float4x4 PV;
};

cbuffer cbPerObjectWorldMatrix : register(b0)
{
    column_major float4x4 World;
};

Where "PV" is set once at the beginning of a frame, and "World" is set per object.

The multiplication inside the vertex shader:

float4x4 wvp = mul(World, PV);
output.Pos = mul(wvp, inPos);

I set the constant buffers like this:

Camera:

&XMMatrixTranspose(cam.GetViewMatrix()*cam.GetProjectionMatrix())

And for every object:

&XMMatrixTranspose(world.GetTransform())

Now the maths: C++ (DirectXMath) stores matrices in row-major order, while HLSL defaults to column-major.

Camera: (view row * projection row) transposed equals ((projection column * view column) transposed) transposed, which is just (projection column * view column).

The world row matrix transposed equals the world column matrix.

So both constant buffers are fed column-major matrices.

Then, inside the shader, to compute the transposed WVP (which in column order is P*V*W) I should do mul(PV, World) on the constant buffer data to get the right answer, but NOT mul(World, PV) as I do now. And yet the second (wrong, in my opinion) version is the one that renders correctly. Why is that?

And the second problem:

Why is this correct:

output.Pos = mul(inPos, wvp);

and this is wrong?

output.Pos = mul(wvp, inPos);

HLSL uses column-major matrices, and as far as I know from theory, to transform a vector you multiply matrix * vector.

 

Could someone explain what is going on here? What am I missing in this matrix mess?

Thanks in advance

