Just some fun doing direction field guided reaction diffusion with OpenCL.

# FIELD GUIDED REACTION DIFFUSION

# POLYGON PLANARIZATION

# CURVES … appendix

# CURVES

# FIEDLER VECTOR AND THE CURVE SKELETON

# VECTOR FIELDS … AGAIN

# SPECTRAL ANALYSIS

Last week I had a great conversation with someone in the Houdini Discord chat group about an interesting problem. He had a surface made up of quads and wanted to “planarize” the polygons in order to build a large physical model from it. He told me that he usually uses Grasshopper and Kangaroo for this but was wondering if it’s also doable in Houdini. Well, I don’t know Kangaroo very well but as far as I know it’s basically a particle-spring based solver. It uses a force based approach which works quite well for many different problems such as planarizing a polygonal mesh. Pretty much the same, of course, could be done in Houdini. By using a “planarization force” together with constraints in the grainSolver it’s easy to replicate such a system. Instead of using grains it’s also possible with wires. Most probably they’re not as stable as grains but nevertheless they should do the job. If we like we could also build our own force based solver. Implementing verlet integration in a solverSop is easy and apart from that there’s the gasIntegrator microsolver which we can use in DOPs for that purpose. However, to get a stable system it’s important to choose the right step size for the integration. If it’s too large the system explodes and if it’s too small it takes forever to reach its equilibrium. Of course, higher order integrators like Runge-Kutta minimize this problem and generally they are quite stable. The same is true for the position based approach of grains but the problem still remains, at least to some degree.

Anyway, it’s an interesting problem and even though it’s typically not too hard to solve, it can become quite tricky depending on the geometry. The hard part is not so much obtaining planar polygons but staying as close as possible to the original mesh. In the worst case we might end up with an entirely flat mesh, which is clearly not the intended result. The reason might be, for example, that the mesh describing the surface has an unfavourable structure. And sometimes it’s simply not possible, because not every surface can be made up of planar polygons, unless we use triangles.

During our chat I remembered that there was a thread on OdForce about pretty much the same topic. Someone had a saddle-shaped surface for which he computed the dual mesh and subsequently wanted to make the resulting hexagons planar. At the same time the boundary points of the surface should keep their original position. Well, even though a saddle surface seems to be rather simple, it is not and I think this is a good example to highlight some of the problems we might run into when we’re trying to planarize polygons.

But first let’s take one step back and start simple. Instead of using hexagons let’s assume it is a quad mesh. One way to tackle this problem is to use a force based approach similar to what Kangaroo/Grasshopper is doing. We need to set up a number of forces and run the system until it reaches equilibrium. For calculating the “planarization-force”, for instance, we could project the polygon points onto the plane defined by center and normal. Or we compute the diagonals and check their distance to each other. To preserve the mesh structure we can rely on edges and diagonals as spring forces. Additionally we need to keep the points of the boundary at their positions. The key point, however, is to adjust the weighting between all these forces to finally get the result we are looking for. Usually there is no way to find a solution satisfying all the objectives and hence the result is highly dependent on how we balance the forces. As said above, such an approach is easy to set up and it works quite well under most circumstances. However, for a number of reasons I thought it might be better to take a slightly different approach. Instead of using a force based system I used a method which is conceptually somewhat similar to the “As Rigid As Possible Surface Modelling” approach. Intuitively it’s easy to understand and works basically in the following way:

For every polygon of the mesh, the planar counterpart is computed and then fitted to the original poly in the least squares sense. For our specific “planarization problem” we could also just project the points onto the tangent plane, but in general we may want to rely on rigid shape matching to find the best fitting orientation. This could be done easily in VEX by using polardecomp() or by using Numpy. Even though polar decomposition in VEX is faster, I tend to use Numpy’s SVD algorithm since it doesn’t have problems with flat, axis-aligned polys. Of course, we could make the matrix non-singular by adding a very small value, but this might lead to other problems instead. For a high number of polygons it’s a good idea to implement a simplified SVD algorithm in VEX which doesn’t have such problems, but in most cases Numpy should be fast enough. But let’s not get off track; back to the planarization problem.
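
As a sketch of the shape matching step, here is how the best fitting rigid transformation could be computed with Numpy’s SVD. This is the standard Kabsch/Procrustes scheme, not the exact code I used; the function name `fit_rigid` is made up:

```python
import numpy as np

def fit_rigid(source, target):
    """Rotation R and translation t that best map `source` onto `target`
    in the least squares sense (Kabsch / rigid shape matching)."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)              # SVD also handles flat, rank-deficient polys
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t
```

Because the SVD copes with rank-deficient cross-covariance matrices, this also works for flat, axis-aligned polygons, which is exactly where a naive polar decomposition runs into trouble.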

After the “planarization” step is applied to each polygon the mesh is obviously torn apart. What we need to do next is to reconstruct the mesh in the least squares sense based on the planar polygons. In other words, we need to set up and solve a linear system. If we repeat these steps iteratively a number of times, the system finally converges to the solution we are looking for. The result is a mesh made up of planar polygons. So, the process boils down to the following three steps:

- Fit planar polygon to original polygon in the least squares sense
- Reconstruct mesh based on planar polygons in the least squares sense
- Repeat until system converges
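
The three steps can be sketched in a few lines of Numpy. This is a toy stand-in for the actual setup, with one simplification: because every stacked equation here just says “point = per-face target”, the least squares solution of the global system is simply the average of the per-face projection targets, so no explicit solve is needed. All names are hypothetical:

```python
import numpy as np

def planarize(points, faces, fixed, iters=200):
    """Local/global planarization sketch: project each face onto its
    best-fit plane (local step), then move every free point to the
    average of its per-face targets (global step)."""
    P = points.copy()
    for _ in range(iters):
        acc = np.zeros_like(P)
        cnt = np.zeros(len(P))
        for f in faces:
            Q = P[f]
            c = Q.mean(axis=0)
            # plane normal = direction of least variance (smallest singular vector)
            _, _, Vt = np.linalg.svd(Q - c)
            n = Vt[-1]
            acc[f] += Q - np.outer((Q - c) @ n, n)   # project onto the plane
            cnt[f] += 1
        cnt[cnt == 0] = 1                            # points not in any face stay put
        newP = acc / cnt[:, None]
        newP[fixed] = points[fixed]                  # keep constrained points fixed
        P = newP
    return P
```

On a small quad grid with a raised center point and fixed corners, this drives the smallest singular value of every face (a simple planarity measure) toward zero.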

Of course, we also need to implement all the other constraints too, such as fixed boundary points, mesh structure preservation and so on. In the example below, the four corner points are fixed at their positions while all the other points are allowed to move in all directions. In my tests the algorithm works quite well and is also relatively fast. For the mesh shown below it needs about 175 iterations and converges in about 280 ms on my rather old laptop.

But now let’s go back to the example file from OdForce. The original question was how to obtain planar hexagons while retaining the overall shape of the mesh. Additionally, even though it’s not mentioned in the post, we may want to keep the mesh structure intact. In other words, the polygons should stay connected. Well, this sounds easier than it is. It is indeed possible but we might end up with unwanted results. The problem is that the saddle-shaped surface has negative gaussian curvature and hence it can’t be approximated by convex hexagonal polygons. However, if we allow non-convex polygons it’s doable, although the deviation from the original shape is rather large. This is clearly visible in images 2 and 3 below. If we additionally allow for open edges the overall shape is preserved very well (image 4). However, this is just the case for hexagonal polygons and won’t work for quads in the same way. So let’s take a look at what happens if we choose four-sided instead of six-sided polygons.

(Side note: For computing planar hexagonal meshes it’s most probably more efficient to rebuild the mesh instead of planarizing its polygons but that’s maybe a topic for another blog post)

Using the planarization algorithm on the quad mesh results in a completely flat surface which is clearly not what we want. But how could this happen? Well, the simple reason is that it’s just not possible to obtain planar AND connected polygons based on our initial mesh without getting large deviations in the overall shape. This is clearly visible in the images below. Picture 1 is the original mesh made up of non-planar polygons. Picture 2 shows the result of the planarization with fixed corner points and picture 3 is the planarized poly mesh without any additional constraints.

So what can we do to get a better result? The answer is simple. We need to provide a better starting point for the algorithm. In other words, we need to remesh the surface. The most natural way to remesh a surface is based on its principal curvature directions. By integrating the fields, for example, we could extract a conjugate curve network which we then use to construct the mesh. Such a curvature aligned mesh is exactly what we need since its quads are naturally close to planar. For simple geometries this is not too hard to implement and the resulting mesh is a perfect starting point for the subsequent planarization algorithm. In the case of our saddle-shaped surface it’s even better because we don’t need to do any planarization at all. The new curvature aligned mesh is already composed of fully planar polygons.

For some related reading:

R. Poranne, E. Ovreiu, C. Gotsman: Interactive Planarization and Optimization of 3D Meshes

S. Bouaziz, S. Martin, T. Liu, L. Kavan, M.Pauly: Projective Dynamics: Fusing Constraint Projections for Fast Simulations

W. Wang, Y. Liu: A Note on Planar Hexagonal Meshes

W. Wang, Y. Liu, B. Chan, R. Ling, F. Sun: Hexagonal Meshes with Planar Faces

My last blog post was mainly about curves and how some of the tools known from Grasshopper/Rhino could be implemented in Houdini. Somewhere at the very end of the entry I mentioned inflection points but I didn’t explain what it is. Well, I’ll try to do it now.

An inflection point is a point on a curve at which the curvature changes sign. For example, when we look at a two dimensional s-shaped curve and visualize its normals, there’s a point at which the normals flip from one side of the curve to the other. This point is the inflection point. Not every curve has such a point but if it has one (or several), it’s sometimes useful to know how to find it. For instance, it was important in the example in which I was approximating a curve by circular arcs. However, the inflection point is just an example and maybe we need to search for other points, such as global or local extreme points, points of max. or min. curvature and so on.

To find such a point we basically have two options. One way is to recursively sample the curve in smaller and smaller steps until we get the point of our interest. This is easy to implement and generally works very well for all kinds of curves. A more elegant and more accurate method, however, is to remember that we are dealing with polynomials and hence to find the point analytically. In the case of the inflection point this simply means that we have to solve the polynomial for the parameter at which curvature is zero.
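
For a planar cubic Bezier curve the analytic route boils down to a quadratic: curvature is zero where the 2D cross product of first and second derivative vanishes. A small Numpy sketch, assuming the usual four control point representation (the function name is made up):

```python
import numpy as np

def inflection_params(P):
    """Inflection parameter(s) of a planar cubic Bezier curve.
    P: (4, 2) control points. Solve cross(B'(t), B''(t)) = 0,
    which reduces to a quadratic in t."""
    a, b, c = P[1] - P[0], P[2] - P[1], P[3] - P[2]
    cross = lambda u, v: u[0]*v[1] - u[1]*v[0]
    x, y, z = cross(a, b), cross(b, c), cross(a, c)
    # cross(B', B'') is proportional to (x + y - z) t^2 + (z - 2x) t + x
    roots = np.roots([x + y - z, z - 2*x, x])
    return [t.real for t in roots if abs(t.imag) < 1e-9 and 0.0 <= t.real <= 1.0]
```

For a symmetric s-shaped cubic the single inflection point lands, as expected, at t = 0.5.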

From time to time I’m having interesting talks with people who are mainly using Grasshopper and Rhino for their job. They are working in the field of architecture and/or design and even though they are usually quite interested in Houdini one of the first things I always hear is how bad the curve tools are compared to Rhino/Grasshopper. Well, that’s not true! Or let’s say, it’s just half the truth. Even though curves, and Nurbs in general, could get some love they are quite robust and not too bad to work with. (As long as we don’t need to fiddle around with trimmed surfaces). Houdini is not a CAD program and naturally it doesn’t have all the features which are available in Rhino/Grasshopper. So, when I asked what the main curve tools are that Grasshopper users are missing the most in Houdini, I got a list with the following five points:

- Curvature
- Curvature graph
- Curve deviation
- Curvature based resampling
- Approximate curves by circular arcs

Of course, there are many other things which are missing, for instance, better handles to create and modify curves but since the points above seem to be the most important features to Grasshopper users I’ll stick with those. However, curves are not too complicated and if you ever need to implement or generate “custom” curves there’s plenty of good information available online. For a start you could take a look at these simple example files on OdForce.

But now back to the missing feature list. Before we start, it’s important to note that there are mainly two different kinds of curves in Houdini. On one hand we have Nurbs and Bezier curves which are basically polynomial curves parameterized by a function. On the other hand we have polygonal curves which are usually just points connected by straight lines. If we like, we could call the two types smooth and discrete curves.

**Curvature**

Curvature is fairly easy to calculate if we understand what it actually means in the context of curves. Basically it tells us how fast a curve is changing direction when we travel along it. In more mathematical terms, it’s the magnitude of acceleration. So far so good but what exactly does this mean? Well, acceleration is the second derivative, or in other words, the normal of the curve (assuming the curve is parameterized by arc length). The first derivative is velocity or slope and this is nothing more than just the tangent. So, if we want to compute curvature it comes down to implementing the following three simple steps:

- Compute first derivative namely the tangent
- Compute second derivative (which is the normal) based on tangents
- Measure magnitude of normal which finally is curvature
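
A minimal Numpy version of these three steps for a discrete (polygonal) curve might look like this. It’s not how the slopeChop or a VEX implementation works internally, just the same idea, using the cumulative arc length as curve parameter:

```python
import numpy as np

def polyline_curvature(pts):
    """Discrete curvature of a polygonal curve: compute the tangent
    (first derivative) and the curvature normal (second derivative)
    with respect to arc length, then return the normal's magnitude."""
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    T = np.gradient(pts, s, axis=0)               # unit tangent (approximately)
    N = np.gradient(T, s, axis=0)                 # curvature normal
    return np.linalg.norm(N, axis=1)
```

For points sampled on a circle of radius r the result is close to 1/r, which is also the reciprocal of the osculating circle radius mentioned below.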

A nice side effect of this procedure is that we can easily compute a local frame since we already have tangent and normal. We just need to compute the cross product of these two known vectors and we’ll get the third vector and hence an orthogonal local frame on our curve. To get the osculating circle at a given point, like it’s possible in Grasshopper, we need to calculate the radius which is just the inverse of curvature.

All this is relatively easy to implement in VEX, both for Nurbs and polygonal curves and it’s also quite fast. However, for discrete curves the easiest method is most probably the use of a slopeChop in CHOPS. On OdForce there is a rather old example file using this technique to compute the Frenet frame. It’s not exactly about curvature but it shows the general setup.

**Curvature graph**

Well, this is easy because it’s just the normals reversed and scaled by curvature and a user-defined multiplier.

**Curve deviation**

This is easy too. We just need to use xyzdist() at sampling points on the first curve to get the distance to the second curve.
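
As a rough Python stand-in for the xyzdist() based setup, using dense sampling of the second curve instead of exact primitive distances (the function name is made up):

```python
import numpy as np

def curve_deviation(a, b):
    """Approximate per-sample deviation: for every sample on curve `a`,
    the distance to the nearest sample on a densely resampled curve `b`.
    A crude stand-in for what xyzdist() computes exactly in VEX."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1)
```

The accuracy obviously depends on how densely `b` is sampled; xyzdist() doesn’t have that limitation since it measures against the primitive itself.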

**Curvature based resampling**

This is again a job for xyzdist() which we could use to implement our own curve sampling algorithm in VEX. This could be done based on a minimal distance or maximal angle threshold by using a bit of basic trigonometry together with the xyzdist() and primuv() combo.

**Approximate curves by circular arcs**

This was the most important point on the list, which makes total sense for applications in the field of architecture and design. If someone is actually going to build something physically, it might be a good idea to find a way to approximate an arbitrary 3D curve by circular arcs as closely as possible. So, how could we implement this in Houdini? Well, there are many different ways but the easiest would be to “brute-force” it. Since an arc is well defined by three points, we could start with two close points at one end of the curve. Next we find the point in between, construct an arc and estimate the maximal deviation from the curve. If it’s smaller than a user-given threshold, we increase the distance between the points and repeat the steps until we get the largest arc within our tolerances. Then we just repeat the whole procedure until we finally get our arc-approximated curve. VEX is pretty fast and hence this algorithm should work quite well performance-wise, even for very densely resampled curves.
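
The two building blocks of this brute-force scheme, the circle through three points and the deviation test, could be sketched like this in Numpy (function names are made up; the degenerate case of three collinear points, where no circle exists, isn’t handled):

```python
import numpy as np

def circumcircle(p1, p2, p3):
    """Center and radius of the circle through three 2D points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))   # zero if collinear
    ux = ((ax**2 + ay**2)*(by - cy) + (bx**2 + by**2)*(cy - ay)
          + (cx**2 + cy**2)*(ay - by)) / d
    uy = ((ax**2 + ay**2)*(cx - bx) + (bx**2 + by**2)*(ax - cx)
          + (cx**2 + cy**2)*(bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(center - p1)

def arc_deviation(pts, i, j):
    """Max distance of the samples pts[i:j+1] from the circle through
    pts[i], the midpoint sample and pts[j] (the greedy deviation test)."""
    c, r = circumcircle(pts[i], pts[(i + j) // 2], pts[j])
    return np.abs(np.linalg.norm(pts[i:j + 1] - c, axis=1) - r).max()
```

The greedy loop then just grows j while `arc_deviation` stays below the tolerance, emits the arc and restarts from j.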

However, I thought it might be better to choose a different and probably smarter method, mainly because the approximation should be as smooth as possible without large kinks along the curve. Instead of using the algorithm described above, which is not guaranteed to result in tangent arcs, we could use a method called biarc fitting. So, what is a biarc? A biarc is a pair of arcs with the same tangent at the point at which they meet. And this is exactly what we need for a good and smooth approximation of the curve. The general algorithm is fairly easy to implement and works basically as follows:

- Get tangent at endpoints
- Sample curve and search for best connection point for biarc
- Estimate deviation between biarc and curve
- If deviation is larger than given tolerance split curve and repeat
- If deviation is smaller than given tolerance compute the biarc

Instead of using just the distance between biarc and curve I’m using various different measures to estimate the error, such as max. distance, average distance, torsion angle and tangent angle. The images below show some results using the distance with different tolerance settings. Of course, there are some things we need to take care of, for instance, splitting curves at inflection points, but generally biarc fitting is fairly easy to implement and works quite well for all sorts of curves.

Computing curve skeletons of 3D shapes is an important tool in computer graphics with many different applications for shape analysis, shape matching, character skeleton construction, rigging and many more. It’s an active field of research and over the years, numerous algorithms have been developed. Most of these algorithms are based on mesh contraction using smoothing or thinning. Another popular approach is to use the voronoi diagram generated by the points of the mesh. Quite an interesting alternative, however, is to rely on spectral analysis for computing the skeleton. The idea is fairly simple and works as follows:

- Compute the eigenvector of the Laplace-Beltrami operator for the smallest non-zero eigenvalue, namely the Fiedler vector
- Compute contour lines based on the Fiedler vector
- Find the center point for every contour
- Fit curves to the points
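
A minimal sketch of the first step, using the uniform graph Laplacian of the mesh’s edge graph as a simple stand-in for the (usually cotangent weighted) Laplace-Beltrami operator; the function name is made up:

```python
import numpy as np

def fiedler_vector(edges, n):
    """Fiedler vector of a graph with n nodes: the eigenvector of the
    combinatorial graph Laplacian for the smallest non-zero eigenvalue."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    w, v = np.linalg.eigh(L)   # eigenvalues in ascending order
    return v[:, 1]             # column 0 is the constant eigenvector (eigenvalue 0)
```

On a path graph the Fiedler vector increases monotonically from one end to the other (up to sign), which is exactly why its contours sweep nicely along an elongated shape.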

I guess I know what you think: “Not again another post about vector fields!” Well, that might be true but vector fields are omnipresent in computer graphics and therefore it’s incredibly important to get a basic understanding of vector calculus and how it’s used in Houdini. A vector field is, as the name implies, a field of vectors in n-dimensional space. In Houdini terms, it’s a bunch of vectors assigned to points, vertices, polys or voxels of a mesh or a volume. That’s roughly what a vector field is. It’s easy to understand and easy to visualize. However, much more interesting than what it is, is to know what we can do with it and how we compute it. And that’s exactly what I’m trying to explain in this blog post. I’m not going into all the details but trying to give a general overview in the context of Houdini’s Surface Operators, better known as SOPs.

So, how do we compute a vector field in Houdini? Well, honestly, there is no single answer to this question because it mainly depends on what kind of field we would like to generate. One of the most important vector fields, however, is the gradient vector field. So, what is a gradient vector field? The gradient is the first order derivative of a multivariate function and apart from divergence and curl one of the main differential operators used in vector calculus. In three-dimensional space we typically get it by computing the partial derivatives in x, y and z of a scalar function. In other words, if we apply the gradient operator on a scalar field we get a vector field. Well, that’s more or less the general definition but what does this mean in the context of Houdini? I’ll try to explain it on a simple example.

Let’s create a grid, wire it into a mountainSop and we get some kind of landscape like in the images below. Now let’s have a look at different points on that “landscape” and measure their height. For each of these positions, the gradient is the direction in which the height (P.y) is changing the most. In other words, it is the direction of steepest ascent. We could check this by simply sampling the height values of our landscape around the points. Just copy small circles onto the points, use a raySop to import the height attribute and search for the point with the largest value. This way we get a good approximation of the gradient, depending on the number of sampling points. Of course, that’s not how the gradient is actually computed but it might help to get a better understanding of its meaning. The height is simply the scalar input for the gradient operator and the result is a vector. If we apply the gradient operator to every point of the geometry, we finally get our gradient vector field over the mesh.
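
The same idea can be checked numerically with central differences instead of the circle-sampling trick. A generic Python sketch, with a made-up height function standing in for the mountainSop landscape:

```python
import numpy as np

# hypothetical height function of our "landscape" (stands in for P.y)
def height(x, y):
    return np.sin(x) * np.cos(y)

def gradient(x, y, h=1e-5):
    """Approximate the gradient, the direction of steepest ascent,
    by central differences of the height function."""
    gx = (height(x + h, y) - height(x - h, y)) / (2*h)
    gy = (height(x, y + h) - height(x, y - h)) / (2*h)
    return np.array([gx, gy])
```

For this particular height function the result matches the analytic gradient (cos x cos y, -sin x sin y) very closely.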

So far so good, but what’s so special about the gradient vector field you might ask. Well, a gradient field has several special and important properties and we are going to explore some of them by having a closer look at our previous example. For instance, if we generate contour lines of the scalar field (our height attribute which is the input for the gradient operator) we can see that the vectors are locally perpendicular to the isolines. If we rotate the vectors by 90 degrees, or rather compute the cross product with N, we get a vector field which is tangent to the contour lines. While both vector fields are tangent to the surface, the rotated vector field is in terms of its properties pretty much the opposite of the gradient. I won’t write much about it since I already did in another post, but nevertheless it’s important to mention what the main difference is. The gradient vector field is curl-free; its rotated counterpart, however, is a solenoidal vector field and hence divergence-free. If the field is both curl- and divergence-free, it’s a Laplacian (harmonic) vector field. But let’s go back to the gradient for now and have again a look at our “landscape” example.

Imagine, you’re standing on the top of the hill, put a ball on the ground and give it a light push in a specific direction. Most probably the ball will start rolling and if you’re not fast enough to stop it, it’s rolling down the hillside to the very bottom of the valley. Well, that’s nothing special and not much of a surprise. What is special, however, is the path the ball will take on its way down. It’s the path on which the height decreases locally as fast as possible and this route is exactly following the gradient vector field, or more precisely, the opposite direction in our example. This principle is called gradient descent/ascent and is used extensively for local minimization and maximization problems, as well as for local optimization problems in general. But mind, the keyword here is local. For global optimization problems we have to use different methods. This is quite obvious if we again look at our example. Depending on the sample point, or rather starting point of our path, we reach the local but not necessarily the global minimum/maximum. To make sure, that we actually find the global minimum, in other words, the point at the very bottom of the valley, we have to use a rather large number of samples.
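
A bare-bones sketch of gradient descent in Python, using a simple bowl-shaped function where the local minimum happens to be the global one (names are made up):

```python
import numpy as np

def descend(p, grad, step=0.1, iters=500):
    """Follow the negative gradient, i.e. the direction of steepest
    descent, and record the path the "ball" takes downhill."""
    path = [np.asarray(p, float)]
    for _ in range(iters):
        path.append(path[-1] - step * grad(path[-1]))
    return np.array(path)
```

Along the returned path the height never increases, and for a convex bowl it ends up at the minimum; on a real landscape with several valleys, which starting point we pick decides which local minimum we reach.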

Since the paths are following the gradient vector field, they are perpendicular to the contour lines of our height field and together they form a network of conjugate curves. Conjugate curve networks are quite important for many different applications such as remeshing, polygon planarization, parameterization and so on. The paths following the rotated vector field are, of course, parallel to the contour lines of our scalar function.

Talking about the gradient typically means that we have a scalar field, apply the gradient operator and finally get a vector field. The good thing about a gradient vector field is that we can reverse this operation. In other words, if we have a vector field which is a gradient field, we can calculate the original scalar field. This is possible as long as the vector field is curl-free, which a gradient field is by definition.
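
As a small illustration of this inverse operation, here is a least squares reconstruction of a scalar field from its finite differences on a regular grid. It’s a toy version of what is usually done with a Poisson solve; all names are hypothetical:

```python
import numpy as np

def integrate_gradient(dx, dy):
    """Reconstruct a scalar field phi (up to a constant) from its finite
    differences dx[i,j] = phi[i,j+1]-phi[i,j] and dy[i,j] = phi[i+1,j]-phi[i,j]
    by solving the stacked linear system in the least squares sense."""
    n, m = dy.shape[0] + 1, dx.shape[1] + 1
    rows, rhs = [], []
    def eq(a, b, d):                      # one equation: phi[b] - phi[a] = d
        r = np.zeros(n * m); r[b] = 1; r[a] = -1
        rows.append(r); rhs.append(d)
    for i in range(n):
        for j in range(m - 1):
            eq(i*m + j, i*m + j + 1, dx[i, j])
    for i in range(n - 1):
        for j in range(m):
            eq(i*m + j, (i + 1)*m + j, dy[i, j])
    anchor = np.zeros(n * m); anchor[0] = 1
    rows.append(anchor); rhs.append(0.0)  # pin the free constant
    phi, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return phi.reshape(n, m)
```

If the differences actually come from a scalar field (i.e. the discrete field is curl-free), the system is consistent and the reconstruction is exact up to the pinned constant.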

Understanding the concept of the gradient operator is quite important since it’s interrelated to pretty much everything in (discrete) differential geometry. For instance, if we apply the gradient operator to a scalar function, in other words, compute the first order partial derivatives, we get a vector, as we already know. If we compute the second order partial derivatives, the result is a matrix called the Hessian matrix. The trace of this matrix is then the Laplacian of the scalar function.
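
This chain (gradient, then Hessian, then the Laplacian as its trace) can be verified numerically with a few lines of Python. Finite differences again; the helper name is made up:

```python
import numpy as np

def hessian(f, p, h=1e-3):
    """Finite-difference Hessian: the matrix of second order partial
    derivatives of a scalar function f at point p."""
    p = np.asarray(p, float)
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4*h*h)
    return H
```

For f(x, y) = x² + 3y² the Hessian is diag(2, 6) everywhere, and its trace, 8, is exactly the Laplacian of f.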

But now back to the beginning of the post. So, how do we compute the gradient? Well, the definition is quite simple: We have to compute the first order partial derivatives in x, y, and z. On volumes this is pretty easy to do since it’s basically a spatial grid and we can translate the equation pretty much one-to-one to VEX and use it in Houdini. On a parameterized surface we have to change it slightly and compute the derivatives in u and v. On meshes, however, Houdini’s preferred geometry type, we have to do it differently. The simplest method is just to use the polyFrameSop in attribute gradient style. This way Houdini is doing the work for us and we don’t have to care about the underlying algorithm. If we don’t want to use the polyFrameSop for some reason, it’s fairly easy to implement our own version in VEX. If you take a look at the HIP you can find three examples. The first method uses the fact that the gradient vector field is perpendicular to contour lines. The second method is basically based on a piecewise linear function and the third is using point clouds to compute the gradient.

And finally some examples of using the gradient operator for various applications, such as magnetic field line generation, flow field generation, medial axis extraction and so on.

Some time ago I wrote a blog post about the covariance matrix and how its eigenvectors can be used to compute the principal axes of any geometry. This time I’m writing again about eigendecompositions but instead of the covariance matrix it’s the Laplace-Beltrami operator which is decomposed into its eigenvalues and eigenvectors. Now, the question is why would we like to find eigenvalues and eigenvectors of the Laplacian? Well, there are a number of applications for which this is very useful. For instance, calculating the biharmonic distance on a mesh. Dimensionality reduction and compression are obvious examples but besides that it’s quite important for various other things too, such as parameterization, smoothing, correspondence analysis, shape analysis, shape matching and many more.

By projecting the point coordinates onto the eigenvectors we can transform the geometry to the corresponding eigenspace. Similar to the Discrete Fourier Transform, the new mapping can then be used for various calculations which are easier to perform in the spectral domain, such as clustering and segmentation.
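
A tiny Numpy sketch of this projection, using the Laplacian of a simple path graph as a stand-in for the mesh Laplacian. Keeping only the first k eigenvectors acts like a low-pass filter on the per-point signal (here it could be one coordinate channel of the geometry); the function name is made up:

```python
import numpy as np

def spectral_reconstruct(L, signal, k):
    """Project a per-point signal onto the first k eigenvectors of the
    Laplacian L and transform back: the "low frequency" part of the signal."""
    w, v = np.linalg.eigh(L)          # eigenvalues ascending, v orthonormal
    basis = v[:, :k]
    return basis @ (basis.T @ signal)
```

With all n eigenvectors the reconstruction is exact; with only a few, a smooth signal is already captured very well, which is the basis for spectral compression.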

A very important property of eigenvalues and eigenvectors of the Laplace-Beltrami operator is that they are invariant under isometric transformations/deformations. In other words, for two isometric surfaces the eigenvalues and eigenvectors coincide and consequently their eigenvector projections also coincide. Since a character in different poses is usually a near-isometric deformation, eigenfunctions are quite useful for finding non-rigid transformations or for matching different poses. For instance, if we have two characters like in the example below, we can rely on eigenvectors to compute a correspondence matrix and therefore find corresponding points on the two meshes. Subsequently, these points could then be used to match one pose to the other. Now you might ask why do we need eigenvectors for this? Well, if it is the same character in different poses, or more precisely, the same geometry with the same pointcount, this is easy. If it’s not the same geometry, things get much harder. This is typically the case if we are working with 3D scanned models or shapes which have been remeshed like in the example below. The two characters have different pointcounts: The mesh of the left character has 1749 points, the mesh of the right character has 1928 points.

A very useful tool is the Fiedler vector, the eigenvector corresponding to the smallest non-zero eigenvalue. It is a handy shape descriptor and can be used to compute shape aware tangent vector fields, to align shapes, to compute UVs, for remeshing and many other things. The images below show that isolines based on the Fiedler vector nicely follow the shape of different meshes.

H. Zhang, O. Kaick, R. Dyer: Spectral Mesh Processing

V. Jain, H. Zhang: Robust 3D Shape Correspondence in the Spectral Domain

S. Dong, P. T. Bremer, M. Garland, V. Pascucci, J. C. Hart: Spectral Surface Quadrangulation

B. Vallet, B. Lévy: Spectral Geometry Processing with Manifold Harmonics

A. Shtern, R. Kimmel: Matching the LBO Eigenspace of Non-Rigid Shapes via High Order Statistics