COVARIANCE MATRIX

If you have ever had to compute an object-oriented bounding box yourself, you have probably come across the term covariance matrix. This is no accident, since covariance matrices are widely used in computer graphics for many different applications such as data fitting, rigid transformations, point cloud smoothing and many more. A covariance matrix is an N x N matrix measuring the variance of data in N dimensions. In the case of Houdini's three-dimensional coordinate system it is therefore a symmetric 3 x 3 matrix, capturing the variances on its diagonal and the covariances in its off-diagonal entries. Computing the covariance matrix is fairly easy and can be done efficiently in VEX or Python/Numpy.

The interesting thing about this matrix is that, by representing the variance and covariance of the data, it describes the data's "shape". This means that we can, for example, find the direction of the largest variance measured from the centroid, which is consequently the largest dimension of the data. This is done by decomposing the matrix into its eigenvalues and eigenvectors. Since we are working with a 3 x 3 covariance matrix, we get three eigenvectors. The eigenvector with the largest eigenvalue points in the direction of the largest variance. The eigenvector with the smallest eigenvalue is orthogonal to it and, of course, points in the direction of the smallest variance.
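As a minimal sketch, assuming a detail wrangle and a detail attribute name "cov" chosen just for this example, building the matrix could look like this:

    // Detail wrangle: build the 3 x 3 covariance matrix of all point positions.
    int npts = npoints(0);

    // Centroid of the points.
    vector centroid = 0;
    for (int i = 0; i < npts; i++)
        centroid += point(0, "P", i);
    centroid /= npts;

    // Sum the outer products of the centered positions and average them.
    matrix3 cov = 0;
    for (int i = 0; i < npts; i++)
    {
        vector d = point(0, "P", i) - centroid;
        cov += outerproduct(d, d);
    }
    cov *= 1.0 / npts;

    // Store the matrix for downstream wrangles.
    3@cov = cov;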


Finding the eigenvectors of a small matrix is quite easy and can be done, for example, by using singular value decomposition (SVD) in Numpy. In VEX we don't have such a function, but we can implement an (inverse) power iteration algorithm to find the largest and/or the smallest eigenvector. The missing third eigenvector is then computed simply by taking the cross product of the two vectors we already know.
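Here is a minimal sketch of such a power iteration in a detail wrangle, reading the matrix from the "cov" attribute of the sketch above; the start vectors and the iteration count are arbitrary choices:

    // Power iteration: repeatedly multiplying an arbitrary start vector by the
    // matrix and renormalizing converges to the eigenvector with the largest
    // eigenvalue (assuming that eigenvalue is unique and the start vector is
    // not orthogonal to the corresponding eigenvector).
    matrix3 cov = detail(0, "cov");

    vector v = normalize(set(0.2, 0.5, 0.8));
    for (int i = 0; i < 64; i++)
        v = normalize(v * cov);
    vector largest = v;

    // Inverse power iteration: the same loop run with the inverted matrix
    // converges to the eigenvector with the smallest eigenvalue.
    matrix3 inv = invert(cov);
    vector w = normalize(set(0.8, 0.2, 0.5));
    for (int i = 0; i < 64; i++)
        w = normalize(w * inv);
    vector smallest = w;

    // The third eigenvector is simply the cross product of the other two.
    vector third = cross(largest, smallest);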
Finding these eigenvectors is useful for many applications, for instance, if we need to compute object-oriented bounding boxes for each polygon of a mesh. We can do this by building the covariance matrix for the points of each polygon and then searching for the largest eigenvector. Together with the normal, which is the smallest eigenvector, we can finally compute the oriented bounding box. Another useful application of the covariance matrix is the computation of point normals for a point cloud. In this case we need to find the smallest eigenvector, which is the normal of the best fitting plane through the point's neighbourhood. Building the covariance matrix is also important if we want to find the rigid transformation from one object to another. These are just a few examples and, to cut a long story short, covariance matrices are quite useful and definitely worth looking into.
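For the point normals, for example, a minimal sketch in a point wrangle could look like the code below; the search radius and the maximum point count of the pcfind() lookup are arbitrary values and would need tuning to the actual point cloud:

    // Point wrangle: estimate the normal of each point by fitting a plane to
    // its neighbourhood. The smallest eigenvector of the neighbourhood's
    // covariance matrix is the normal of that best fitting plane.
    int nbrs[] = pcfind(0, "P", @P, 1.0, 16);

    vector centroid = 0;
    foreach (int pt; nbrs)
        centroid += point(0, "P", pt);
    centroid /= len(nbrs);

    matrix3 cov = 0;
    foreach (int pt; nbrs)
    {
        vector d = point(0, "P", pt) - centroid;
        cov += outerproduct(d, d);
    }

    // Inverse power iteration for the smallest eigenvector. Note that for a
    // perfectly flat neighbourhood the matrix is singular and invert() fails,
    // which is rarely a problem with real world scan data.
    matrix3 inv = invert(cov);
    vector n = normalize(set(0.2, 0.5, 0.8));
    for (int i = 0; i < 64; i++)
        n = normalize(n * inv);

    @N = n;

Note that normals estimated this way have an ambiguous sign, so they usually need a consistent orientation pass afterwards.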

