FIEDLER VECTOR AND THE CURVE SKELETON

Computing curve skeletons of 3D shapes is an important tool in computer graphics with many applications in shape analysis, shape matching, character skeleton construction, rigging and more. It’s an active field of research and over the years numerous algorithms have been developed. Most of them are based on mesh contraction using smoothing or thinning. Another popular approach is to use the Voronoi diagram generated by the points of the mesh. Quite an interesting alternative, however, is to rely on spectral analysis for computing the skeleton. The idea is fairly simple and works as follows:

  • Compute the eigenvector of the Laplace-Beltrami operator corresponding to the first non-zero eigenvalue, the so-called Fiedler vector (see the eigenproblem written out below).
  • Compute contour lines based on the Fiedler vector.
  • Find the center point of every contour.
  • Fit curves to the center points.
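
In case you’d like to see how the first step looks on paper: discretizing the Laplace-Beltrami operator on a triangle mesh, for instance with cotangent weights L and a lumped mass matrix M, leads to the generalized eigenvalue problem

    L \phi_i = \lambda_i M \phi_i, \qquad 0 = \lambda_0 < \lambda_1 \le \lambda_2 \le \dots

and the Fiedler vector is the eigenvector \phi_1 belonging to the smallest non-zero eigenvalue \lambda_1. The cotangent/lumped-mass combination is just one common discretization, not the only possible one.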

 

VECTOR FIELDS … AGAIN

I guess I know what you’re thinking: “Not another post about vector fields!” Well, that might be true, but vector fields are omnipresent in computer graphics and therefore it’s incredibly important to get a basic understanding of vector calculus and how it’s used in Houdini. A vector field is, as the name implies, a field of vectors in n-dimensional space. In Houdini terms, it’s a bunch of vectors assigned to points, vertices, polys or voxels of a mesh or a volume. That’s roughly what a vector field is. It’s easy to understand and easy to visualize. However, much more interesting than what it is, is what we can do with it and how we compute it. And that’s exactly what I’m trying to explain in this blog post. I’m not going into all the details but trying to give a general overview in the context of Houdini’s Surface Operators, better known as SOPs.
So, how do we compute a vector field in Houdini? Well, honestly, there is no single answer to this question because it mainly depends on what kind of field we would like to generate. One of the most important vector fields, however, is the gradient vector field. So, what is a gradient vector field? The gradient is the first-order derivative of a multivariate function and, apart from divergence and curl, one of the main differential operators used in vector calculus. In three-dimensional space we typically get it by computing the partial derivatives in x, y and z of a scalar function. In other words, if we apply the gradient operator to a scalar field, we get a vector field. Well, that’s more or less the general definition, but what does this mean in the context of Houdini? I’ll try to explain it with a simple example.
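
Before diving into the example, here is the definition written out. The gradient is simply the vector of first-order partial derivatives:

    \nabla f = \left( \frac{\partial f}{\partial x}, \; \frac{\partial f}{\partial y}, \; \frac{\partial f}{\partial z} \right)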

Let’s create a grid and wire it into a mountainSop, and we’ll get some kind of landscape like in the images below. Now let’s have a look at different points on that “landscape” and measure their height. For each of these positions, the gradient is the direction in which the height (P.y) changes the most. In other words, it is the direction of steepest ascent. We could check this by simply sampling the height values of our landscape around the points: just copy small circles onto the points, use a raySop to import the height attribute and search for the point with the largest value. This way we get a good approximation of the gradient, depending on the number of sampling points. Of course, that’s not how the gradient is actually computed, but it might help to get a better understanding of its meaning. The height is simply the scalar input for the gradient operator and the result is a vector. If we apply the gradient operator to every point of the geometry, we finally get our gradient vector field over the mesh.
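
If you prefer VEX over copying circles and a raySop, the same brute-force check could look roughly like the sketch below. Instead of a raySop it uses xyzdist/primuv to look up the height on the landscape, and the attribute and parameter names are just examples. It’s only an illustration of the idea, not the proper way to compute the gradient:

    // Point wrangle (input 0: sample points on the landscape, input 1: the landscape mesh).
    // Sample the height on a small circle around each point and point towards the largest value.
    int    nsamples = 32;
    float  radius   = 0.05;
    vector bestdir  = {0, 0, 0};
    float  besth    = -1e9;
    for (int i = 0; i < nsamples; i++) {
        float  a   = radians(360.0 * i / nsamples);
        vector dir = set(cos(a), 0.0, sin(a));
        int prim; vector uv;
        xyzdist(1, @P + radius * dir, prim, uv);
        vector pos = primuv(1, "P", prim, uv);   // closest position on the landscape
        if (pos.y > besth) { besth = pos.y; bestdir = dir; }
    }
    // crude finite-difference estimate of the gradient (in the xz-plane)
    v@grad_approx = bestdir * (besth - @P.y) / radius;

The more sampling points we use, the closer the estimated direction gets to the true gradient.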

 

So far so good, but what’s so special about the gradient vector field, you might ask. Well, a gradient field has several special and important properties and we are going to explore some of them by having a closer look at our previous example. For instance, if we generate contour lines of the scalar field (our height attribute, which is the input for the gradient operator), we can see that the vectors are locally perpendicular to the isolines. If we rotate the vectors by 90 degrees, or rather compute the cross product with N, we get a vector field which is tangent to the contour lines. While both vector fields are tangent to the surface, the rotated vector field is, in terms of its properties, pretty much the opposite of the gradient. I won’t write much about it since I already did in another post, but nevertheless it’s important to mention what the main difference is. The gradient vector field is curl-free; its rotated counterpart, however, is a solenoidal vector field and hence divergence-free. If the field is both curl- and divergence-free, it’s a Laplacian (harmonic) vector field. But let’s go back to the gradient for now and have another look at our “landscape” example.
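
In VEX the rotation is a one-liner, assuming the gradient is already stored in v@grad and the point normals in v@N:

    // Point wrangle: rotate the gradient by 90 degrees around the normal.
    // The result is tangent to the contour lines and divergence-free.
    v@tangent = normalize(cross(v@N, v@grad));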

 

Imagine you’re standing on top of the hill, put a ball on the ground and give it a light push in a specific direction. Most probably the ball will start rolling, and if you’re not fast enough to stop it, it will roll down the hillside to the very bottom of the valley. Well, that’s nothing special and not much of a surprise. What is special, however, is the path the ball takes on its way down. It’s the path along which the height decreases locally as fast as possible, and this route follows exactly the gradient vector field, or more precisely, its opposite direction in our example. This principle is called gradient descent/ascent and is used extensively for local minimization and maximization, and for local optimization problems in general. But mind you, the keyword here is local. For global optimization problems we have to use different methods. This is quite obvious if we look at our example again. Depending on the sample point, or rather the starting point of our path, we reach a local but not necessarily the global minimum/maximum. To make sure that we actually find the global minimum, in other words, the point at the very bottom of the valley, we have to use a rather large number of samples.
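
To trace such a path in Houdini, one possible sketch is a point wrangle that runs over a few seed points and repeatedly steps against the interpolated gradient. The attribute names, step size and iteration count are assumptions, and it’s only meant to illustrate the principle:

    // Point wrangle (input 0: seed points, input 1: the landscape with v@grad stored on its points).
    // Each seed traces its own polyline downhill.
    int    prim = addprim(0, "polyline");
    vector p    = @P;
    for (int i = 0; i < 300; i++) {
        addvertex(0, prim, addpoint(0, p));
        int pr; vector uv;
        xyzdist(1, p, pr, uv);
        vector g = primuv(1, "grad", pr, uv);   // interpolated gradient on the landscape
        if (length(g) < 1e-5) break;            // a (local) minimum has been reached
        p -= 0.02 * normalize(g);               // step against the gradient
        xyzdist(1, p, pr, uv);                  // re-project onto the surface
        p = primuv(1, "P", pr, uv);
    }

Running this from many seed points gives a whole family of steepest descent paths.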
Since the paths are following the gradient vector field, they are perpendicular to the contour lines of our height field and together they form a network of conjugate curves. Conjugate curve networks are quite important for many different applications such as remeshing, polygon planarization, parameterization and so on. The paths following the rotated vector field are, of course, parallel to the contour lines of our scalar function.

 

Talking about the gradient typically means that we have a scalar field, apply the gradient operator and finally get a vector field. The good thing about a gradient vector field is that we can reverse this operation. In other words, if we have a vector field which is a gradient field, we can reconstruct the original scalar field. This is possible as long as the vector field is curl-free, which a gradient field is by definition.
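
The reconstruction boils down to integrating the field: pick a base point p_0 and integrate along any path \gamma from p_0 to p,

    f(p) = f(p_0) + \int_{\gamma} \mathbf{v} \cdot d\mathbf{l}

Because the field is curl-free, the result doesn’t depend on the chosen path (on a simply connected domain), only on the constant f(p_0). On a mesh this is usually set up as a least-squares/Poisson problem rather than an explicit path integration.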
Understanding the concept of the gradient operator is quite important since it’s interrelated with pretty much everything in (discrete) differential geometry. For instance, if we apply the gradient operator to a scalar function, in other words, compute the first-order partial derivatives, we get a vector, as we already know. If we compute the second-order partial derivatives, the result is a matrix called the Hessian matrix. The trace of this matrix is then the Laplacian of the scalar function.
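
Written out for a function f(x, y, z):

    H_{ij} = \frac{\partial^2 f}{\partial x_i \, \partial x_j}, \qquad \Delta f = \operatorname{tr}(H) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}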

But now back to the beginning of the post. So, how do we compute the gradient? Well, the definition is quite simple: we have to compute the first-order partial derivatives in x, y and z. On volumes this is pretty easy to do since a volume is basically a spatial grid, and we can translate the equation pretty much one-to-one to VEX and use it in Houdini. On a parameterized surface we have to change it slightly and compute the derivatives in u and v. On meshes, however, Houdini’s preferred geometry type, we have to do it differently. The simplest method is just to use the polyFrameSop in attribute gradient style. This way Houdini does the work for us and we don’t have to care about the underlying algorithm. If we don’t want to use the polyFrameSop for some reason, it’s fairly easy to implement our own version in VEX. If you take a look at the HIP you can find three examples. The first method uses the fact that the gradient vector field is perpendicular to contour lines, the second is based on a piecewise linear function and the third uses point clouds to compute the gradient.
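
For the volume case, the one-to-one translation of the definition could look roughly like this. The volume name and step size are assumptions, and in practice the built-in volumegradient() function does the same thing in a single call:

    // Point wrangle (input 0: the points that should receive the gradient,
    // input 1: a scalar volume named "height").
    // Central differences in x, y and z, straight from the definition of the gradient.
    string vol = "height";
    float  h   = 0.05;     // step size, roughly one voxel
    vector g;
    g.x = volumesample(1, vol, @P + set(h, 0, 0)) - volumesample(1, vol, @P - set(h, 0, 0));
    g.y = volumesample(1, vol, @P + set(0, h, 0)) - volumesample(1, vol, @P - set(0, h, 0));
    g.z = volumesample(1, vol, @P + set(0, 0, h)) - volumesample(1, vol, @P - set(0, 0, h));
    v@grad = g / (2.0 * h);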

And finally some examples of using the gradient operator for various applications, such as magnetic field line generation, flow field generation, medial axis extraction and so on.

 

SPECTRAL ANALYSIS

Some time ago I wrote a blog post about the covariance matrix and how its eigenvectors can be used to compute the principal axes of any geometry. This time I’m writing again about eigendecompositions, but instead of the covariance matrix it’s the Laplace-Beltrami operator which is decomposed into its eigenvalues and eigenvectors. Now, the question is why we would like to find eigenvalues and eigenvectors of the Laplacian. Well, there are a number of applications for which this is very useful, for instance calculating the biharmonic distance on a mesh. Dimensionality reduction and compression are obvious examples, but besides that it’s quite important for various other things too, such as parameterization, smoothing, correspondence analysis, shape analysis, shape matching and many more.

Eigenvectors 1-50

By projecting the point coordinates onto the eigenvectors we can transform the geometry to the corresponding eigenspace. Similar to the Discrete Fourier Transform, the new mapping could then be used for various calculations which are easier to perform in the spectral domain, such as clustering and segmentation.
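
If the eigenvectors \phi_k are orthonormal with respect to the mass matrix M (for the simple graph Laplacian, M is just the identity), the projection and its inverse are

    \hat{x}_k = \phi_k^{\mathsf T} M \, x, \qquad x = \sum_k \hat{x}_k \, \phi_k

applied to each coordinate of P separately. Truncating the sum after the first few eigenvectors gives a low-pass filtered, “compressed” version of the mesh.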

 


A very important property of the eigenvalues and eigenvectors of the Laplace-Beltrami operator is that they are invariant under isometric transformations/deformations. In other words, for two isometric surfaces the eigenvalues and eigenvectors coincide and consequently their eigenvector projections also coincide. Since a character in different poses is usually a near-isometric deformation, eigenfunctions are quite useful for finding non-rigid transformations or for matching different poses. For instance, if we have two characters like in the example below, we can rely on eigenvectors to compute a correspondence matrix and therefore find corresponding points on the two meshes. Subsequently, these points could then be used to match one pose to the other. Now you might ask why we need eigenvectors for this. Well, if it is the same character in different poses, or more precisely, the same geometry with the same point count, this is easy. If it’s not the same geometry, things get much harder. This is typically the case if we are working with 3D scanned models or shapes which have been remeshed, like in the example below. The two characters have different point counts: the mesh of the left character has 1749 points, the mesh of the right character has 1928 points.
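
The rough idea behind such a correspondence, following the spectral matching literature (e.g. Jain & Zhang, listed below), is to embed every point into the space spanned by the first k eigenvectors, for instance

    \Psi(p) = \left( \frac{\phi_1(p)}{\sqrt{\lambda_1}}, \; \dots, \; \frac{\phi_k(p)}{\sqrt{\lambda_k}} \right)

and then to look for closest points between the two embeddings, after resolving the sign and ordering ambiguities of the eigenvectors. The exact weighting of the eigenvectors varies from method to method; the scaling by 1/\sqrt{\lambda} above is just one common choice.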

 

 

A very useful tool is the Fiedler vector, the eigenvector corresponding to the smallest non-zero eigenvalue. It is a handy shape descriptor and can be used to compute shape-aware tangent vector fields, to align shapes, to compute UVs, for remeshing and many other things. The images below show that isolines based on the Fiedler vector nicely follow the shape of different meshes.

 

 

H. Zhang, O. van Kaick, R. Dyer: Spectral Mesh Processing
V. Jain, H. Zhang: Robust 3D Shape Correspondence in the Spectral Domain
S. Dong, P.-T. Bremer, M. Garland, V. Pascucci, J. C. Hart: Spectral Surface Quadrangulation
B. Vallet, B. Lévy: Spectral Geometry Processing with Manifold Harmonics
A. Shtern, R. Kimmel: Matching the LBO Eigenspace of Non-Rigid Shapes via High Order Statistics

 

 

TOPOLOGY OPTIMIZATION

Just an old experiment I did using ToPy in Houdini. ToPy was written by William Hunter and is the Python implementation and 3D extension of Ole Sigmund’s famous 99-line topology optimization code in Matlab. Since ToPy is written entirely in Python, it’s easy to use in Houdini. Preparing and converting the input and output files worked pretty flawlessly using the Python VTK library.

 

 

For more about topology optimization, check out the following links:
https://github.com/williamhunter/topy
http://www.topopt.dtu.dk/
http://www.topopt.dtu.dk/files/matlab.pdf