FROM NORMAL MAP TO HEIGHT MAP

The question is: why would we want height maps instead of normal maps in the first place? Well, there are various reasons, but the most obvious are that they are easier to edit, generally easier to understand and that the result, when applied as a displacement map, is easier to anticipate. In other words, while normal maps do have some advantages, they are generally less intuitive to work with and that’s probably a good reason why we might want to convert them. But before going into the details of how the conversion is done, let’s quickly recap what a normal map is and how it could be generated.

In fact it’s pretty easy, so let’s set up a simple example. Take a grid and apply a noise function to it (in our case a mountainSop). Next, compute point normals for the deformed grid and encode these normals as colors. Finally, write out the colors as a texture map or generate the map directly with the texture baker. And that’s basically it, the result is our normal map.

But now let’s reverse the process and assume we have our normal map but the geometry is just a flat grid. How can we reconstruct the original deformed geometry?

Well, let’s start simple. As we already know, and as the name “normal map” implies, normals are encoded as colors. To be more precise, the RGB values of our map correspond to the XYZ components of the normals we are searching for. This means we can pretty much do a 1:1 conversion. The only thing we have to take care of is changing the range from [0, 1] (colors) to [-1, 1] (normals). Once we’ve done that we indeed end up with proper normals; the grid, however, is still flat. What we need to do next is adjust the points’ positions/heights so that the displaced grid finally matches the original deformed geometry. And that’s the tricky part. So, the question now is: how are we going to get the height information out of our normals?
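As an aside, the color-to-normal conversion itself is just that range remap. A minimal Numpy sketch, assuming the map has already been loaded as a float array (in Houdini the same thing could of course be done in a wrangle):

```python
import numpy as np

# Stand-in for a normal map loaded as an (H, W, 3) float array with values in [0, 1].
normal_map = np.random.rand(64, 64, 3)

# Remap colors [0, 1] -> normals [-1, 1].
normals = normal_map * 2.0 - 1.0
# Renormalize, since quantization usually leaves the vectors slightly off unit length.
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

# The reverse direction (baking normals into colors) is just the inverse remap.
colors = normals * 0.5 + 0.5
```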

Well, one way, for instance, is to try to solve this problem geometrically. This could be done by iteratively moving the points vertically up/down until the newly computed normals match the ones we got from our normal map. For a simple mesh/map this technique is fast and reasonably accurate, but for more complicated maps and/or irregular meshes it starts failing and the result might just not be what we’d have expected. To overcome this problem there exist various other methods we could use instead, but my personal favourite is, once again, to rely on the famous Poisson equation.

If we assume height is a function on the mesh, it can be computed by minimizing the difference between our pre-computed normals (projected onto the tangent plane of the mesh) and the gradient of this function in the least squares sense. Since this leads to the Poisson equation it’s easy to set up a linear system in the typical form A x = b. If we then use a fast sparse matrix library and let it do its magic by solving the system for x, we finally get what we were looking for, namely our desired height.
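To make the A x = b part a bit more concrete, here is a hedged sketch of the same idea on a regular grid instead of an arbitrary mesh (so no tangent-plane projection and no cotangent weights): the target gradient is derived from the normals of a heightfield and the resulting Poisson system is assembled and solved with SciPy’s sparse solver. All names are made up for illustration and the boundary handling is kept deliberately simple.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def height_from_normals(n):
    """Recover a height field from per-pixel normals (H, W, 3) by solving the
    Poisson equation lap(h) = div(g), where g = (-nx/nz, -ny/nz) is the
    gradient implied by the normals. Assumes nz > 0 everywhere."""
    H, W, _ = n.shape
    gx = -n[..., 0] / n[..., 2]                  # target dh/dx
    gy = -n[..., 1] / n[..., 2]                  # target dh/dy

    # Divergence of the target gradient via backward differences.
    div = np.zeros((H, W))
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]

    # 5-point Laplacian with simple Neumann boundaries as a sparse matrix.
    def lap1d(k):
        d = np.full(k, -2.0)
        d[0] = d[-1] = -1.0
        return sp.diags([np.ones(k - 1), d, np.ones(k - 1)], [-1, 0, 1])

    L = sp.kron(sp.identity(H), lap1d(W)) + sp.kron(lap1d(H), sp.identity(W))

    # Heights are only defined up to a constant, so pin the first pixel to zero.
    L = L.tolil()
    L[0, :] = 0.0
    L[0, 0] = 1.0
    b = div.ravel().copy()
    b[0] = 0.0
    return spla.spsolve(L.tocsr(), b).reshape(H, W)
```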

And of course, it works on non-flat meshes too.

Instead of setting up and solving everything in matrix form it’s also possible to do it with VEX/OpenCl, although the setup is a bit different and it takes longer to solve. Anyway, for those who don’t have access to a fast matrix solver it’s a reasonable alternative. The process is pretty much the same as the one described in this blog post.

THE JOY OF USING MATRICES

From VEX/OpenCl to sparse matrices

In my last blog post I wrote a bit about the possibility of (mostly) circumventing the need for sparse matrices in Houdini by setting up and solving the system in VEX/OpenCl instead. This time, however, it’s about the other way around and how a typically VEX/OpenCl-based setup could be implemented and solved in matrix form.

Pretty much everybody using Houdini has at some point done basic Laplacian smoothing on a mesh (with uniform weights), perhaps without even knowing it. It’s a widely used algorithm and fairly easy to implement by averaging the values of the 1-ring around the current point. In fact, in Houdini it’s even easier than that and we can just use the attribBlurSop. AttribBlur uses OpenCl under the hood and relies on the basic graph Laplacian to blur/smooth point data, such as P for instance. This is done by performing the smoothing/averaging step a number of times, which is conveniently exposed as an iteration slider in the UI of the attribBlurSop.

However, this is of course not the only way Laplacian smoothing could be implemented. Instead of iteratively applying the smoothing step per point we could also set up a simple linear system and subsequently solve it for the final point positions. This is what I’ve done in the file below. It’s a simple example where I’m using a direct solver in Scipy, which basically solves all the equations simultaneously in one go.
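To illustrate what “solving it in one go” could look like, here is a hedged SciPy sketch using the implicit formulation (I + λL) P' = P with a uniform graph Laplacian. The linked file may set the system up differently; the neighbour lists are assumed to have been gathered beforehand, e.g. with neighbours() in a wrangle and passed in via a Python SOP.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def smooth_points(points, neighbours, strength=1.0):
    """One-shot Laplacian smoothing: solve (I + strength * L) P' = P,
    where L is the uniform graph Laplacian. 'points' is an (n, 3) array and
    'neighbours' a list of neighbour-index lists per point."""
    n = len(points)
    rows, cols, vals = [], [], []
    for i, nbrs in enumerate(neighbours):
        rows.append(i); cols.append(i); vals.append(1.0)
        for j in nbrs:
            rows.append(i); cols.append(j); vals.append(-1.0 / len(nbrs))
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    A = sp.identity(n, format="csr") + strength * L
    # Solve once per coordinate; A is the same for x, y and z.
    return np.column_stack([spla.spsolve(A, points[:, k]) for k in range(3)])
```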

In practice it doesn’t make much of a difference which method we use because both will lead to the same result in pretty much the same amount of time (at least for our example). But since Laplacian smoothing is already provided by the attribBlurSop it’s probably best to just use what’s already there instead of implementing it in matrix form.

Anyway, apart from practicality I hope this simple example, together with my last post, helps to clear up some misunderstandings and to answer the question of whether it’s absolutely necessary to use sparse matrix libraries in Houdini or not.

FILE

PS: To run the example file without problems you’ll need to have Scipy installed on your system.

THE JOY OF NOT USING MATRICES

From sparse matrices to VEX/OpenCl

Over the last months I had some conversations about the need for (sparse) matrices and/or matrix solvers in Houdini to implement various fundamental geometry processing algorithms, such as the Poisson equation. Of course, most of the time we definitely want to work with matrices because everything is easier to understand, faster to set up, and also faster to solve. Nevertheless, I think it’s important to know that even if that’s the preferred method, it’s not the only one. This becomes particularly apparent when working with Houdini because it doesn’t ship with Scipy or any other easy-to-use sparse matrix library. So, as long as we don’t want to compile Scipy on our own, we’re stuck with either using Numpy (which doesn’t support sparse matrices) or using the HDK.

However, the good thing is that if we don’t want to, we don’t have to use matrices at all. In other words, if we don’t have access to Scipy and don’t want to use the HDK, pretty much all of it can also be done in VEX and/or OpenCl. It will be slower and most probably less accurate, but it works using only standard SOPs without the need for external dependencies such as Scipy, EIGEN, …

About two years ago I wrote a little blog post about various algorithms for computing geodesic distance fields on meshes. Nowadays we can simply use surfacedist(), but back in 2017 you had to code it yourself. And of course, if someone is talking about geodesics, there is no way of getting around Keenan Crane’s wonderful paper “The Heat Method for Distance Computation”. So, what would be a better example than an implementation of exactly this paper!?

I won’t go into details about the method described in the paper since everybody should read it, but it basically works by solving the heat equation as well as the Poisson equation. Both are typically set up and solved as a linear system in matrix form and that’s also what I did for my “Geodesics on Meshes” blog entry. This time, however, we are going to do it differently and use only standard SOPs together with a bit of VEX and OpenCl.

So, below is the file with two slightly different methods. Example one simply uses Jacobi iteration to solve the heat and Poisson equations and example two relies on Successive Over-Relaxation to speed up convergence (if Jacobi takes n iterations, SOR converges in approximately sqrt(n) iterations, depending on the relaxation factor). Additionally, for reference and/or performance comparison, there’s also an example which uses sparse matrices in EIGEN together with the HDK and another one which uses Scipy. To run the HDK example you’ll need a working compiler since it’s implemented using inlinecpp(), and for the Scipy example, well, you’ll need Scipy.
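To show the basic idea of the matrix-free relaxation, here is a hedged Python sketch of a per-point relaxation solver for a uniform-weight Poisson problem. The heat method itself uses cotangent weights and proper boundary conditions, so treat this purely as an illustration of the iteration; in the file the same loop lives in VEX/OpenCl.

```python
import numpy as np

def solve_poisson_relaxation(b, neighbours, iterations=500, omega=1.0, in_place=False):
    """Matrix-free relaxation for the uniform-weight Poisson problem
    sum_j (x_j - x_i) = b_i. With in_place=False and omega=1.0 this is plain
    Jacobi; updating in place turns it into Gauss-Seidel, and omega > 1.0
    adds (successive) over-relaxation."""
    x = np.zeros(len(b))
    for _ in range(iterations):
        target = x if in_place else np.copy(x)
        for i, nbrs in enumerate(neighbours):
            update = (sum(x[j] for j in nbrs) - b[i]) / len(nbrs)
            target[i] = (1.0 - omega) * x[i] + omega * update
        x = target
    return x
```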

FILE

K. Crane, C. Weischedel, M. Wardetzky: The Heat Method for Distance Computation

MESH DEVELOPABILITY

Some weeks ago I had an interesting conversation on the Houdini Discord server about Keenan Crane’s fairly new paper Developability of Triangle Meshes. The paper is, of course, excellent as always and the topic quite interesting in general. Nevertheless I somehow missed it when it was officially published and, even worse, I completely forgot about it although I was at a conference around two years ago where Keenan Crane gave a beautiful talk about their new method. Fortunately there are people around who are really quick at finding new papers and after it was pointed out to me by Spinynormal I did a quick and dirty implementation in Houdini. It’s just a rather rough prototype without taking too much care about all the details but nonetheless it seems to work quite well.

To be honest, at the time when I saw Keenan Crane’s talk I was a bit sceptical at first. I really liked the idea and thought wow, that’s brilliant, but I also had my doubts about the practicality of their method. Since the result depends largely on the structure of the mesh and not so much on its shape (or other properties), flipping some edges, for instance, might lead to a very different solution. And to put it simply, I wasn’t sure if this is an advantage or rather the opposite. Well, now, years later and after playing around a bit with the implementation in Houdini, I’m pretty much convinced that it’s an advantage. It’s quite interesting to watch how the mesh evolves into different shapes by just changing small parts of the triangulation. But then again, I don’t need to use it in practice.

K. Crane, O. Stein, E. Grinspun: Developability of Triangle Meshes

POLYGON PLANARIZATION

Last week I had a great conversation with someone in the Houdini Discord chat group about an interesting problem. He had a surface made up of quads and wanted to “planarize” the polygons in order to build a large physical model from it. He told me that he usually uses Grasshopper and Kangaroo for this but was wondering if it’s also doable in Houdini. Well, I don’t know Kangaroo very well but as far as I know it’s basically a particle-spring based solver. It uses a force based approach which works quite well for many different problems such as planarizing a polygonal mesh. (This isn’t true any more! Daniel wrote me in the comments below that Kangaroo has been using a projective solver since 2014). Pretty much the same, of course, could be done in Houdini. By using a “planarization force” together with constraints in the grainSolver it’s easy to replicate such a system. Instead of using grains it’s also possible with wires. Most probably they’re not as stable as grains but nevertheless it should do the job. If we like we could also build our own force based solver. Implementing verlet integration in a solverSop is easy and apart from that there’s the gasIntegrator microsolver which we can use in DOPs for that purpose. However, to get a stable system it’s important to choose the right step size for the integration. If it’s too large the system explodes and if it’s too small it takes forever to find its equilibrium. Of course, higher order integrators like Runge-Kutta minimize this problem and generally they are quite stable. The same is true for the position based approach of grains but the problem still remains, at least to some degree.
Anyway, it’s an interesting problem and even though it’s typically not too hard to solve, it can become quite tricky depending on the geometry. The hard part is not so much obtaining planar polygons but staying as close as possible to the original mesh. In the worst case we might end up with an entirely flat mesh, which is clearly not the intended result. The reason might be, for example, that the mesh describing the surface has an unfavourable structure. And sometimes it’s simply not possible, because not every surface can be made up of planar polygons unless we use triangles.
During our chat I remembered that there was a thread on OdForce about pretty much the same topic. Someone had a saddle-shaped surface for which he computed the dual mesh and subsequently wanted to make the resulting hexagons planar. At the same time the boundary points of the surface should keep their original positions. Well, even though a saddle surface seems rather simple, it is not, and I think this is a good example to highlight some of the problems we might run into when trying to planarize polygons.

But first let’s take one step back and start simple. Instead of using hexagons let’s assume it is a quad mesh. One way to tackle this problem is to use a force based approach similar to what Kangaroo/Grasshopper is doing. We need to set up a number of forces and run the system until it reaches equilibrium. To calculate the “planarization force” we could, for instance, project the polygon points onto the plane defined by center and normal. Or we compute the diagonals and check their distance to each other. To preserve the mesh structure we can rely on edges and diagonals as spring forces. Additionally we need to keep the points of the boundary at their positions. The key point, however, is to adjust the weighting between all these forces to finally get the result we were looking for. Usually there is no way to find a solution satisfying all the objectives and hence the result depends highly on how we balance the forces. As said above, such an approach is easy to set up and it works quite well under most circumstances. However, for a number of reasons I thought it might be better to take a slightly different approach. Instead of using a force based system I used a method which is conceptually somewhat similar to the “As Rigid As Possible Surface Modelling” approach. Intuitively it’s easy to understand and works basically in the following way:

For every polygon of the mesh, the planar counterpart is computed and then fitted to the original poly in the least squares sense. For our specific “planarization problem” we could also just project the points onto the tangent plane, but in general we may want to rely on rigid shape matching to find the best fitting orientation. This could easily be done in VEX by using polardecomp() or by using Numpy (see the sketch after the list below). Even though polar decomposition in VEX is faster, I tend to use Numpy’s SVD algorithm since it doesn’t have problems with flat, axis-aligned polys. Of course, we could make the matrix non-degenerate by adding a very small value, but this might lead to other problems instead. For a high number of polygons it’s a good idea to implement a simplified SVD algorithm in VEX which doesn’t have such problems, but in most cases Numpy should be fast enough. But let’s not get off track and get back to the planarization problem.
After the “planarization” step is applied to each polygon the mesh is obviously torn apart. What we need to do next is to reconstruct the mesh in the least squares sense based on the planar polygons. In other words, we need to set up and solve a linear system. If we repeat these steps iteratively a number of times the system finally converges to the solution we are looking for. The result is a mesh made up of planar polygons. So, the process boils down to the following three steps:

  • Fit planar polygon to original polygon in the least squares sense
  • Reconstruct mesh based on planar polygons in the least squares sense
  • Repeat until system converges
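Here is a small Numpy sketch of the local step in its simplest form, fitting a plane through the polygon points with SVD and projecting the points onto it (the rigid shape matching variant would additionally extract a rotation). The names are made up for illustration.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of points: returns centroid and unit normal.
    The right singular vector belonging to the smallest singular value spans the
    direction of least variance, i.e. the plane normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def planarize(points):
    """Project the polygon points onto their best-fit plane."""
    c, n = fit_plane(points)
    return points - np.outer((points - c) @ n, n)
```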

Of course, we also need to implement all the other constraints, such as fixed boundary points, mesh structure preservation and so on. In the example below, the four corner points are fixed at their positions while all the other points are allowed to move in all directions. In my tests the algorithm works quite well and is also relatively fast. For the mesh shown below it needs about 175 iterations and converges in about 280 ms on my rather old laptop.
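To give an idea of how the pieces fit together, below is a deliberately simplified local/global loop reusing the planarize() helper from the previous sketch. The global step here merely averages the planar copies of every point, which is the least-squares solution when each point is constrained only by its per-polygon copies; the setup described above solves a proper linear system with additional structure-preserving terms, so this is only a rough stand-in.

```python
import numpy as np

def planarize_mesh(points, polys, fixed=(), iterations=200):
    """Local/global planarization sketch. 'polys' is a list of point-index
    lists, 'fixed' a collection of pinned point indices. Assumes every point
    belongs to at least one polygon."""
    P = points.copy()
    pinned = list(fixed)
    for _ in range(iterations):
        acc = np.zeros_like(P)
        cnt = np.zeros(len(P))
        for poly in polys:
            acc[poly] += planarize(P[poly])   # local step: flatten each polygon
            cnt[poly] += 1.0
        P = acc / cnt[:, None]                # global step: average the copies
        P[pinned] = points[pinned]            # keep the pinned points in place
    return P
```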

But now let’s go back to the example file from OdForce. The original question was how to obtain planar hexagons while retaining the overall shape of the mesh. Additionally, even though it’s not mentioned in the post, we may want to keep the mesh structure intact. In other words, the polygons should stay connected. Well, this sounds easier than it is. It is indeed possible but we might end up with unwanted results. The problem is that the saddle-shaped surface has negative Gaussian curvature and hence it can’t be approximated by convex hexagonal polygons. However, if we allow non-convex polygons it’s doable, although the deviation from the original shape is rather large. This is clearly visible in images 2 and 3 below. If we additionally allow for open edges the overall shape is preserved very well (image 4). However, this is just the case for hexagonal polygons and won’t work for quads in the same way. So let’s take a look at what happens if we choose four-sided instead of six-sided polygons.
(Side note: for computing planar hexagonal meshes it’s most probably more efficient to rebuild the mesh instead of planarizing its polygons, but that’s maybe a topic for another blog post.)

Using the planarization algorithm on the quad mesh results in a completely flat surface, which is clearly not what we want. But how could this happen? Well, the simple reason is that it’s just not possible to obtain planar AND connected polygons based on our initial mesh without getting large deviations in the overall shape. This is clearly visible in the images below. Picture 1 is the original mesh made up of non-planar polygons. Picture 2 shows the result of the planarization with fixed corner points and picture 3 is the planarized poly mesh without any additional constraints.

So what can we do to get a better result? The answer is simple: we need to provide a better starting point for the algorithm. In other words, we need to remesh the surface. The most natural way to remesh a surface is based on its principal curvature directions. By integrating the fields, for example, we could extract a conjugate curve network which we then use to construct the mesh. Such a curvature-aligned mesh is exactly what we need since its quads are naturally close to planar. For simple geometries this is not too hard to implement and the resulting mesh is a perfect starting point for the subsequent planarization algorithm. In the case of our saddle-shaped surface it’s even better because we don’t need to do any planarization at all. The new curvature-aligned mesh is already composed of fully planar polygons.

For some related reading:
R. Poranne, E. Ovreiu, C. Gotsman: Interactive Planarization and Optimization of 3D Meshes
S. Bouaziz, S. Martin, T. Liu, L. Kavan, M.Pauly: Projective Dynamics: Fusing Constraint Projections for Fast Simulations
W. Wang, Y. Liu: A Note on Planar Hexagonal Meshes
W. Wang, Y. Liu, B. Chan, R. Ling, F. Sun: Hexagonal Meshes with Planar Faces

CURVES … appendix

My last blog post was mainly about curves and how some of the tools known from Grasshopper/Rhino could be implemented in Houdini. Somewhere at the very end of the entry I mentioned inflection points but I didn’t explain what they are. Well, I’ll try to do it now.
An inflection point is a point on a curve at which the curvature changes sign. For example, when we look at a two-dimensional s-shaped curve and visualize its normals, there’s a point at which the normals flip from one side of the curve to the other. This point is the inflection point. Not every curve has such a point, but if it has one (or several), it’s sometimes useful to know how to find it. For instance, it was important in the example in which I was approximating a curve by circular arcs. However, the inflection point is just an example and maybe we need to search for other points, such as global or local extreme points, points of maximum or minimum curvature and so on.
To find such a point we basically have two options. One way is to recursively sample the curve in smaller and smaller steps until we get the point of interest. This is easy to implement and generally works very well for all kinds of curves. A more elegant and more accurate method, however, is to remember that we are dealing with polynomials and hence to find the point analytically. In the case of the inflection point this simply means that we have to solve for the parameter at which the signed curvature is zero.
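For a planar polynomial curve (x(t), y(t)) the signed curvature is proportional to x'(t)·y''(t) − y'(t)·x''(t), so finding the inflection points boils down to finding the roots of that polynomial. A small Numpy sketch, with made-up example coefficients:

```python
import numpy as np
from numpy.polynomial import Polynomial

def inflection_params(x_poly, y_poly):
    """Parameter values of inflection points of the planar curve (x(t), y(t))
    for t in [0, 1]: roots of x'(t) y''(t) - y'(t) x''(t)."""
    num = x_poly.deriv(1) * y_poly.deriv(2) - y_poly.deriv(1) * x_poly.deriv(2)
    roots = np.atleast_1d(num.roots())
    return [t.real for t in roots if abs(t.imag) < 1e-9 and 0.0 <= t.real <= 1.0]

# Example: a cubic curve given directly by its polynomial coefficients (c0 + c1*t + ...).
x = Polynomial([0.0, 3.0, -3.0, 1.0])
y = Polynomial([0.0, 1.0, -3.0, 2.0])
print(inflection_params(x, y))
```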

CURVES

From time to time I have interesting talks with people who mainly use Grasshopper and Rhino for their job. They work in the field of architecture and/or design and even though they are usually quite interested in Houdini, one of the first things I always hear is how bad the curve tools are compared to Rhino/Grasshopper. Well, that’s not true! Or let’s say, it’s just half the truth. Even though curves, and NURBS in general, could get some love, they are quite robust and not too bad to work with (as long as we don’t need to fiddle around with trimmed surfaces). Houdini is not a CAD program and naturally it doesn’t have all the features which are available in Rhino/Grasshopper. So, when I asked what the main curve tools are that Grasshopper users miss the most in Houdini, I got a list with the following five points:

  • Curvature
  • Curvature graph
  • Curve deviation
  • Curvature based resampling
  • Approximate curves by circular arcs

Of course, there are many other things which are missing, for instance better handles to create and modify curves, but since the points above seem to be the most important features to Grasshopper users I’ll stick with those. However, curves are not too complicated and if you ever need to implement or generate “custom” curves there’s plenty of good information available online. For a start you could take a look at these simple example files on OdForce.
But now back to the missing feature list. Before we start, it’s important to note that there are mainly two different kinds of curves in Houdini. On the one hand we have NURBS and Bezier curves, which are basically polynomial curves parameterized by a function. On the other hand we have polygonal curves, which are usually just points connected by straight lines. If we like, we could call the two types smooth and discrete curves.

Curvature
Curvature is fairly easy to calculate if we understand what it actually means in the context of curves. Basically it tells us how fast a curve changes direction when we travel along it. In more mathematical terms, it’s the magnitude of the acceleration (assuming we travel along the curve at unit speed, i.e. the curve is parameterized by arc length). So far so good, but what exactly does this mean? Well, acceleration is the second derivative, which points in the direction of the curve’s normal. The first derivative is velocity or slope, and this is nothing more than the tangent. So, if we want to compute curvature it comes down to implementing the following three simple steps:

  • Compute the first derivative, namely the tangent
  • Compute the second derivative (the curvature vector, which points along the normal) based on the tangents
  • Measure the magnitude of that vector, which finally is the curvature

A nice side effect of this procedure is that we can easily compute a local frame since we already have the tangent and the normal. We just need to compute the cross product of these two known vectors to get the third vector and hence an orthogonal local frame on our curve. To get the osculating circle at a given point, as is possible in Grasshopper, we need to calculate the radius, which is just the inverse of the curvature.
All this is relatively easy to implement in VEX, both for NURBS and polygonal curves, and it’s also quite fast. However, for discrete curves the easiest method is most probably to use a slopeChop in CHOPs. On OdForce there is a rather old example file using this technique to compute the Frenet frame. It’s not exactly about curvature but it shows the general setup.
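For reference, here is a small Numpy sketch of the three steps on a discrete (polygonal) curve, using central differences. It’s only an approximation; Houdini’s own derivatives or a slopeChop will differ in detail.

```python
import numpy as np

def polyline_curvature(points):
    """Per-point curvature estimate of a polygonal curve (n, 3):
    tangent -> change of the unit tangent -> magnitude, divided by the
    local step length to approximate the arc-length derivative."""
    dp = np.gradient(points, axis=0)                       # first derivative
    t = dp / np.linalg.norm(dp, axis=1, keepdims=True)     # unit tangents
    dt = np.gradient(t, axis=0)                            # change of the tangent
    return np.linalg.norm(dt, axis=1) / np.linalg.norm(dp, axis=1)
```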

 

Curvature graph
Well, this is easy because it’s just the normals reversed and scaled by the curvature and a user-defined multiplier.

 

Curve deviation
This is easy too. We just need to use xyzdist() at sampling points on the first curve to get the distance to the second curve.

Curvature based resampling
This is again a job for xyzdist() which we could use to implement our own curve sampling algorithm in VEX. This could be done based on a minimal distance or maximal angle threshold by using a bit of basic trigonometry together with the xyzdist() and primuv() combo.

 

Approximate curves by circular arcs
This was the most important point on the list, which makes total sense for applications in the field of architecture and design. If someone is actually going to build something physically, it might be a good idea to find a way to approximate an arbitrary 3D curve by circular arcs as closely as possible. So, how could we implement this in Houdini? Well, there are many different ways but the easiest would be to “brute-force” it. Since an arc is well defined by three points, we could start with two close points at one end of the curve. Next we find the point in between, construct an arc and estimate the maximal deviation from the curve. If it’s smaller than a user-given threshold we increase the distance between the points and repeat the steps until we get the largest arc within our tolerances. Then we just repeat the whole procedure until we finally get our arc-approximated curve. VEX is pretty fast and hence this algorithm should work quite well performance-wise, even for very densely resampled curves.
However, I thought it might be better to choose a different and probably smarter method. The reason is mainly that the approximation should be as smooth as possible without large kinks along the curve. Instead of using the algorithm described above, which is not guaranteed to result in tangent-continuous arcs, we could use a method called biarc fitting. So, what is a biarc? A biarc is a pair of arcs with the same tangent at the point at which they meet. And this is exactly what we need for a good and smooth approximation of the curve. The general algorithm is fairly easy to implement and basically works as follows:

  • Get tangent at endpoints
  • Sample curve and search for best connection point for biarc
  • Estimate deviation between biarc and curve
  • If deviation is larger than given tolerance split curve and repeat
  • If deviation is smaller than given tolerance compute the biarc

Instead of using just the distance between biarc and curve I’m using various different measures to estimate the error, such as maximum distance, average distance, torsion angle and tangent angle. The images below show some results using the distance with different tolerance settings. Of course, there are some things we need to take care of, for instance splitting curves at inflection points, but generally biarc fitting is fairly easy to implement and works quite well for all sorts of curves.
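Whether we brute-force single arcs or fit biarcs, the basic building block is constructing a circle (center, radius, plane) from three points. A small Numpy sketch of just that part; the deviation test against the curve samples is left out.

```python
import numpy as np

def circle_through_points(p1, p2, p3):
    """Circumcircle through three points in 3D: returns (center, radius, plane normal)."""
    a, b = p1 - p3, p2 - p3
    axb = np.cross(a, b)
    denom = 2.0 * np.dot(axb, axb)
    if denom < 1e-12:
        raise ValueError("points are (nearly) collinear")
    center = p3 + np.cross(np.dot(a, a) * b - np.dot(b, b) * a, axb) / denom
    radius = np.linalg.norm(center - p1)
    return center, radius, axb / np.linalg.norm(axb)
```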

 

 

 

FIEDLER VECTOR AND THE CURVE SKELETON

Computing curve skeletons of 3D shapes is an important tool in computer graphics with many different applications such as shape analysis, shape matching, character skeleton construction, rigging and many more. It’s an active field of research and over the years numerous algorithms have been developed. Most of these algorithms are based on mesh contraction using smoothing or thinning. Another popular approach is to use the Voronoi diagram generated by the points of the mesh. Quite an interesting alternative, however, is to rely on spectral analysis for computing the skeleton. The idea is fairly simple and works as follows:

  • Compute the eigenvector of the Laplace-Beltrami operator corresponding to the smallest non-zero eigenvalue, namely the Fiedler vector (see the sketch after this list).
  • Compute contour lines based on the Fiedler vector
  • Find the center point for every contour
  • Fit curves to the points
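A minimal SciPy sketch of the first step, using a uniform graph Laplacian for simplicity (on an actual mesh a cotangent Laplacian would be the better choice). The neighbour lists are assumed to be exported from Houdini.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def fiedler_vector(neighbours):
    """Fiedler vector of the (uniform) graph Laplacian built from per-point
    neighbour-index lists."""
    n = len(neighbours)
    rows, cols, data = [], [], []
    for i, nbrs in enumerate(neighbours):
        rows.append(i); cols.append(i); data.append(float(len(nbrs)))
        for j in nbrs:
            rows.append(i); cols.append(j); data.append(-1.0)
    L = sp.csr_matrix((data, (rows, cols)), shape=(n, n))
    # Two smallest eigenpairs: the first eigenvalue is ~0 (constant eigenvector),
    # the second eigenvector is the Fiedler vector. For large meshes shift-invert
    # (sigma=...) converges much faster than which="SM".
    w, v = spla.eigsh(L, k=2, which="SM")
    return v[:, 1]
```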

VECTOR FIELDS … AGAIN

I guess I know what you’re thinking: “Not another post about vector fields!” Well, that might be true, but vector fields are omnipresent in computer graphics and therefore it’s incredibly important to get a basic understanding of vector calculus and how it’s used in Houdini. A vector field is, as the name implies, a field of vectors in n-dimensional space. In Houdini terms, it’s a bunch of vectors assigned to points, vertices, polys or voxels of a mesh or a volume. That’s roughly what a vector field is. It’s easy to understand and easy to visualize. However, much more interesting than what it is, is what we can do with it and how we compute it. And that’s exactly what I’m trying to explain in this blog post. I’m not going into all the details but trying to give a general overview in the context of Houdini’s surface operators, better known as SOPs.
So, how do we compute a vector field in Houdini? Well, honestly, there is no single answer to this question because it mainly depends on what kind of field we would like to generate. One of the most important vector fields, however, is the gradient vector field. So, what is a gradient vector field? The gradient is the first order derivative of a multivariate function and, apart from divergence and curl, one of the main differential operators used in vector calculus. In three-dimensional space we typically get it by computing the partial derivatives in x, y and z of a scalar function. In other words, if we apply the gradient operator to a scalar field we get a vector field. Well, that’s more or less the general definition, but what does this mean in the context of Houdini? I’ll try to explain it with a simple example.

Let’s create a grid, wire it into a mountainSop and we’ll get some kind of landscape like in the images below. Now let’s look at different points on that “landscape” and measure their height. For each of these positions, the gradient is the direction in which the height (P.y) changes the most. In other words, it is the direction of steepest ascent. We could check this by simply sampling the height values of our landscape around the points: copy small circles onto the points, use a raySop to import the height attribute and search for the point with the largest value. This way we get a good approximation of the gradient, depending on the number of sampling points. Of course, that’s not how the gradient is computed, but it might help to get a better understanding of its meaning. The height is simply the scalar input for the gradient operator and the result is a vector. If we apply the gradient operator to every point of the geometry, we finally get our gradient vector field over the mesh.
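On a regular grid (a heightfield) the gradient is just the pair of partial derivatives, which Numpy computes directly. A hedged sketch with a made-up analytic landscape standing in for the mountain-deformed grid:

```python
import numpy as np

# Toy landscape sampled on a regular grid.
x, y = np.meshgrid(np.linspace(-3, 3, 128), np.linspace(-3, 3, 128))
height = np.exp(-(x**2 + y**2)) * np.sin(2.0 * x)

# Partial derivatives per axis; together they form the gradient vector field.
dh_dy, dh_dx = np.gradient(height, y[:, 0], x[0, :])
gradient = np.stack([dh_dx, dh_dy], axis=-1)

# Direction of steepest ascent at some grid point:
print(gradient[64, 64])
```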

 

So far so good, but what’s so special about the gradient vector field, you might ask. Well, a gradient field has several special and important properties and we are going to explore some of them by taking a closer look at our previous example. For instance, if we generate contour lines of the scalar field (our height attribute, which is the input for the gradient operator) we can see that the vectors are locally perpendicular to the isolines. If we rotate the vectors by 90 degrees, or rather compute the cross product with N, we get a vector field which is tangent to the contour lines. While both vector fields are tangent to the surface, the rotated vector field is, in terms of its properties, pretty much the opposite of the gradient. I won’t write much about it since I already did in another post, but nevertheless it’s important to mention what the main difference is. The gradient vector field is curl-free; its rotated counterpart, however, is a solenoidal vector field and hence divergence-free. If a field is both curl- and divergence-free, it’s a Laplacian (harmonic) vector field. But let’s go back to the gradient for now and have another look at our “landscape” example.

 

Imagine you’re standing on top of the hill, put a ball on the ground and give it a light push in a specific direction. Most probably the ball will start rolling and if you’re not fast enough to stop it, it rolls down the hillside to the very bottom of the valley. Well, that’s nothing special and not much of a surprise. What is special, however, is the path the ball takes on its way down. It’s the path along which the height decreases locally as fast as possible, and this route follows exactly the gradient vector field, or more precisely, its opposite direction in our example. This principle is called gradient descent/ascent and is used extensively for local minimization and maximization problems, as well as for local optimization problems in general. But mind, the keyword here is local. For global optimization problems we have to use different methods. This is quite obvious if we again look at our example. Depending on the sample point, or rather the starting point of our path, we reach a local but not necessarily the global minimum/maximum. To make sure that we actually find the global minimum, in other words the point at the very bottom of the valley, we have to use a rather large number of samples.
Since the paths follow the gradient vector field, they are perpendicular to the contour lines of our height field and together they form a network of conjugate curves. Conjugate curve networks are quite important for many different applications such as remeshing, polygon planarization, parameterization and so on. The paths following the rotated vector field are, of course, parallel to the contour lines of our scalar function.
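A tiny sketch of the “rolling ball”, i.e. tracing a steepest-descent path on a sampled height field by repeatedly stepping against the gradient (nearest-cell lookup and a fixed step size, purely illustrative):

```python
import numpy as np

def descend(height, start, steps=500, rate=0.2):
    """Trace a path of steepest descent on a 2D height field, starting at the
    (row, col) index 'start'. Returns the path in fractional grid coordinates."""
    gy, gx = np.gradient(height)
    pos = np.array(start, dtype=float)
    path = [pos.copy()]
    for _ in range(steps):
        r = int(np.clip(round(pos[0]), 0, height.shape[0] - 1))
        c = int(np.clip(round(pos[1]), 0, height.shape[1] - 1))
        g = np.array([gy[r, c], gx[r, c]])
        if np.linalg.norm(g) < 1e-8:      # reached a (local) minimum
            break
        pos -= rate * g
        path.append(pos.copy())
    return np.array(path)
```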

 

Talking about the gradient typically means that we have a scalar field, apply the gradient operator and finally get a vector field. The good thing about a gradient vector field is that we can reverse this operation. In other words, if we have a vector field which is a gradient field, we can calculate the original scalar field. This is possible as long as the vector field is curl-free, which a gradient field is by definition.
Understanding the concept of the gradient operator is quite important since it’s interrelated with pretty much everything in (discrete) differential geometry. For instance, if we apply the gradient operator to a scalar function, in other words compute the first order partial derivatives, we get a vector as we already know. If we compute the second order partial derivatives, the result is a matrix called the Hessian matrix. The trace of this matrix is the Laplacian of the scalar function.

But now back to the beginning of the post. So, how do we compute the gradient? Well, the definition is quite simple: we have to compute the first order partial derivatives in x, y and z. On volumes this is pretty easy to do since a volume is basically a spatial grid and we can translate the equation pretty much one-to-one to VEX and use it in Houdini. On a parameterized surface we have to change it slightly and compute the derivatives in u and v. On meshes, however, Houdini’s preferred geometry type, we have to do it differently. The simplest method is just to use the polyFrameSop in attribute gradient style. This way Houdini does the work for us and we don’t have to care about the underlying algorithm. If we don’t want to use the polyFrameSop for some reason, it’s fairly easy to implement our own version in VEX. If you take a look at the HIP you can find three examples. The first method uses the fact that the gradient vector field is perpendicular to contour lines. The second method is based on a piecewise linear function and the third uses point clouds to compute the gradient.

And finally some examples of using the gradient operator for various applications, such as magnetic field line generation, flow field generation, medial axis extraction and so on.

 

SPECTRAL ANALYSIS

Some time ago I wrote a blog post about the covariance matrix and how its eigenvectors can be used to compute the principal axes of any geometry. This time I’m writing again about eigendecompositions, but instead of the covariance matrix it’s the Laplace-Beltrami operator which is decomposed into its eigenvalues and eigenvectors. Now, the question is why would we like to find eigenvalues and eigenvectors of the Laplacian? Well, there are a number of applications for which this is very useful, for instance calculating the biharmonic distance on a mesh. Dimensionality reduction and compression are obvious examples, but beside that it’s quite important for various other things too, such as parameterization, smoothing, correspondence analysis, shape analysis, shape matching and many more.

Eigenvectors 1-50

By projecting the point coordinates onto the eigenvectors we can transform the geometry into the corresponding eigenspace. Similar to the Discrete Fourier Transform, the new mapping could then be used for various calculations which are easier to perform in the spectral domain, such as clustering and segmentation.
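As a small illustration of the projection, here is a hedged SciPy sketch that computes the first k eigenvectors of a uniform graph Laplacian, projects the point positions onto them and reconstructs a low-frequency version of the mesh, which is essentially spectral compression.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def spectral_reconstruct(points, neighbours, k=50):
    """Project point positions onto the first k Laplacian eigenvectors and
    reconstruct: a small k keeps only the low-frequency part of the shape."""
    n = len(points)
    rows, cols, data = [], [], []
    for i, nbrs in enumerate(neighbours):
        rows.append(i); cols.append(i); data.append(float(len(nbrs)))
        for j in nbrs:
            rows.append(i); cols.append(j); data.append(-1.0)
    L = sp.csr_matrix((data, (rows, cols)), shape=(n, n))
    _, basis = spla.eigsh(L, k=k, which="SM")   # k smallest eigenpairs
    coeffs = basis.T @ points                   # spectral coefficients, (k, 3)
    return basis @ coeffs                       # reconstructed positions
```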

 

 


A very important property of the eigenvalues and eigenvectors of the Laplace-Beltrami operator is that they are invariant under isometric transformations/deformations. In other words, for two isometric surfaces the eigenvalues and eigenvectors coincide and consequently their eigenvector projections also coincide. Since a character in different poses is usually a near-isometric deformation, eigenfunctions are quite useful for finding non-rigid transformations or for matching different poses. For instance, if we have two characters like in the example below, we can rely on eigenvectors to compute a correspondence matrix and therefore find corresponding points on the two meshes. Subsequently, these points could then be used to match one pose to the other. Now you might ask why we need eigenvectors for this. Well, if it is the same character in different poses, or more precisely, the same geometry with the same point count, this is easy. If it’s not the same geometry, things get much harder. This is typically the case if we are working with 3D scanned models or shapes which have been remeshed like in the example below. The two characters have different point counts: the mesh of the left character has 1749 points, the mesh of the right character has 1928 points.

 

 

 

A very useful tool is the Fiedler vector, which is the eigenvector corresponding to the smallest non-zero eigenvalue. It is a handy shape descriptor and can be used to compute shape-aware tangent vector fields, to align shapes, to compute UVs, for remeshing and many other things. The images below show that isolines based on the Fiedler vector nicely follow the shape of different meshes.

 

 

 

H. Zhang, O. Kaick, R. Dyer: Spectral Mesh Processing
V. Jain, H. Zhang: Robust 3D Shape Correspondence in the Spectral Domain
S. Dong, P. T. Bremer, M. Garland, V. Pascucci, J. C. Hart: Spectral Surface Quadrangulation
B. Vallet, B. Lévy: Spectral Geometry Processing with Manifold Harmonics
A. Shtern, R. Kimmel: Matching the LBO Eigenspace of Non-Rigid Shapes via High Order Statistics

 

 

TOPOLOGY OPTIMIZATION

Just an old experiment I did using ToPy in Houdini. ToPy is written by William Hunter and it is the Python implementation and 3D extension of Ole Sigmund’s famous 99-line topology optimization code in Matlab. Since ToPy is written entirely in Python, it’s easy to use in Houdini. Preparing and converting the input and output files worked pretty flawlessly using the Python VTK library.

 

 

For more about topology optimization, check out the following links:
https://github.com/williamhunter/topy
http://www.topopt.dtu.dk/
http://www.topopt.dtu.dk/files/matlab.pdf

 

COVARIANCE MATRIX

If you ever had to compute an oriented bounding box by yourself you probably came across the term covariance matrix. Well, this is no accident since covariance matrices are widely used in computer graphics for many different applications such as data fitting, rigid transformation, point cloud smoothing and many more. A covariance matrix is an N x N matrix basically measuring the variance of data in N dimensions. In the case of Houdini’s three-dimensional coordinate system it’s therefore a symmetric 3 x 3 matrix capturing the variances in its diagonal and the covariances in its off-diagonal entries. Computing the covariance matrix is fairly easy and could be done efficiently in VEX or Python/Numpy. The interesting thing about the matrix is that it represents the variance and the covariance of the data. In other words, it describes the “shape” of the data. This means that we can, for example, find the direction of the largest variance measured from the centroid, which consequently is the largest dimension of the data. This is done by decomposing the matrix into its eigenvalues and eigenvectors. In case we are working with a 3 x 3 covariance matrix we get three eigenvectors. The eigenvector with the largest eigenvalue is the vector pointing in the direction of the largest variance. The eigenvector with the smallest eigenvalue is orthogonal to the largest eigenvector and, of course, points in the direction of the smallest variance.
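A minimal Numpy sketch of exactly that: build the 3 x 3 covariance matrix of a point cloud and read the principal axes off its eigenvectors. The point cloud here is made up; in Houdini the positions could be fetched via a Python SOP.

```python
import numpy as np

# Stand-in point cloud (n, 3) with different spreads along each axis.
points = np.random.rand(500, 3) * np.array([4.0, 1.0, 0.5])

centered = points - points.mean(axis=0)
# Symmetric 3x3 covariance matrix: variances on the diagonal, covariances off it.
cov = centered.T @ centered / len(points)

# eigh is fine since the matrix is symmetric; eigenvalues come back in ascending
# order, so the last column is the axis of largest variance and the first column
# the direction of smallest variance (the best-fit plane normal).
eigenvalues, eigenvectors = np.linalg.eigh(cov)
largest_axis = eigenvectors[:, -1]
smallest_axis = eigenvectors[:, 0]
```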

 

Finding the eigenvectors of a small matrix is quite easy and could be done, for example, by using singular value decomposition (SVD) in Numpy. In VEX we don’t have such a function, but we could implement an (inverse) power iteration algorithm to find the largest and/or smallest eigenvector. The missing third eigenvector is then computed simply by taking the cross product of the other two already known vectors.
Finding these eigenvectors is quite useful for many applications. For instance, if we need to compute oriented bounding boxes for each polygon of a mesh, we could do this by building the covariance matrix for the points of each polygon and then searching for the largest eigenvector. Together with the normal, which is the smallest eigenvector, we can finally compute the oriented bounding box. Another useful application of the covariance matrix is the computation of point normals for a point cloud. In this case we need to find the smallest eigenvector, which represents the normal of the best-fitting plane and thus the normal of the point. Building the covariance matrix is also important if we want to find the rigid transformation from one object to another. These are just a few examples and to cut a long story short, covariance matrices are quite useful and definitely worth looking into.

 

 

 

CP MESHES

About a month ago I wrote my first serious post in this blog. It was about circle packings. Well, that’s nothing special because pretty much everybody knows what a circle packing is and many have used it in the field of generative art to produce stunning images and patterns. But what most people don’t know is how many uses such a packing has in the context of discrete differential geometry. Quite interesting, for instance, is the so-called CP mesh. Originally developed to rationalize freeform surfaces in architecture and design, a CP mesh is a triangle mesh whose incircles form a packing.
To compute the CP mesh we first need to set up an optimization problem based on the original mesh. This is done by formulating an energy function measuring the incircle packing property, the distance to the original mesh and the distance to the boundary. After solving the nonlinear minimization problem in the least squares sense we finally obtain new point positions forming the required CP mesh. However, it is essential to bear in mind that it’s not possible to exactly reconstruct an arbitrary surface as a CP mesh. In other words, there is always a deviation from the original mesh no matter what energy function we use. This is clearly visible in the images below. The picture in the middle shows a CP mesh for which the surface proximity is used in the energy function, whereas for the picture on the right the boundary proximity is used.

CP meshes have some useful properties which are quite relevant to various algorithms and applications. One of these properties is that they form an ideal primal/dual structure which guarantees orthogonality, non-intersection and convexity. This is important in case we are using discrete differential geometry operators: Orthogonality ensures good numerical qualities of the Laplacian and furthermore it ensures that we can safely compute primal values based on the dual values and dual values based on primal values respectively.

The combinatorial relation between primal and dual mesh is quite important in the context of discrete differential geometry, especially when we are using DEC methods. For tight error bounds it’s therefore essential to work with a good primal triangulation which ensures the above mentioned properties such as non-intersection, orthogonality and convexity of the dual. To achieve orthogonality, we usually place the dual vertices at the circumcenters of their corresponding primal triangles. This, however, could lead to intersections in case our primal mesh contains obtuse triangles. To avoid this problem we could use the barycentric instead of the circumcentric dual, but this in turn would fail to satisfy orthogonality and convexity. We could partially circumvent this new problem by generally using the circumcenter but switching to the barycenter in case the dual vertex lies outside of its corresponding primal triangle. What we could do as well is project the vertex onto the triangle’s boundary. However, to cut a long story short, the only way to avoid all these problems is to use a proper primal triangle mesh. And a CP mesh is such a mesh.

(a) Triangle mesh. (b1, b2) Dual mesh based on the circumcenters of its primal triangulation; obtuse triangles for which the circumcenter lies outside the triangle’s boundary lead to intersections of the dual mesh. (c) Dual mesh based on the barycenters of its primal triangulation; in this case there are no intersections but orthogonality is not satisfied. (d) Dual vertex projected onto the edge of its corresponding triangle. (e) Optimized CP mesh. (f) Incircle packing. (g) Dual mesh based on the incircle packing of the CP mesh.

Another important property of a CP mesh is that if we extrude the edges of the dual polygons we get a torsion-free structure. In other words, its physical realization could be built out of beams which intersect in a common axis passing through the incircle centers. And last but not least, CP meshes can be used to approximate circle packings on a surface. This could be done either by fitting a circle into each vertex gap between the incircle packing or by using the contact points of the sphere packing. Both methods generally produce a fairly good packing. A real circle packing, however, is not possible on an arbitrary surface.

 

A. Schiftner, M. Höbinger, J. Wallner, H. Pottmann: Packing Circles and Spheres on Surfaces

BALANCING 3D SHAPES

3D printing has become rather popular over the last few years and many people have started to print their own models. The technology is great and the workflow generally quite hassle-free: just do the design in your favourite 3D software, save the file and then send it to manufacturing.
There is one problem, though (apart from the necessity of having a watertight model): it’s easy to forget about the laws of physics, or more precisely, to forget about gravity. Consequently, the printed model which is intended to stand simply doesn’t. The reason for this is most probably that it’s unbalanced and thus just topples over. However, a great solution to this problem is provided in the excellent paper “Make It Stand: Balancing Shapes for 3D Fabrication” published by the IGL Zürich. The authors propose a two-step balance optimization algorithm which works by firstly modifying the volume of the object and secondly deforming the original shape. Since I like 3D printing and have first-hand experience with the “it doesn’t stand” problem I thought an implementation of the paper might be worth a try.

The method described in the paper basically works as follows:

  • Define support polygons of the shape
  • Calculate center of mass
  • Carve out inner void based on computed center point
  • Calculate new center point and check if its projection along gravity is on support polygons
  • If the center point isn’t on the support polygons, additionally deform the original shape to ensure stability

Computing the center of mass (not to be confused with the centroid) is easy. It can be done quite efficiently by using the divergence theorem. It’s described in the paper but in case you are interested in the details, check out the link. For the mass carving part Houdini already has the perfect toolset, namely VDBs. After converting the mesh to a VDB I used a vdbReshapeSdf to compute the overall offset of the surface. The final inner volume is then computed iteratively during optimization in a volumeWrangle based on a mask. For the shape deformation I used an old custom SOP based on Laplacian editing. And finally, the overall minimization problem was solved in Scipy within a SopSolver. Even though this setup was quite inefficient since all matrices have to be rebuilt in every iteration, it worked surprisingly well. Of course, there is much room for improvement but to be honest, I was more interested in the general method than in an exact implementation of the paper.
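For the center of mass part, here is a hedged Numpy sketch of the divergence-theorem trick: sum up the signed tetrahedra spanned by the origin and each triangle of a closed, consistently oriented mesh (uniform density assumed; the paper additionally accounts for the carved-out inner void).

```python
import numpy as np

def center_of_mass(vertices, triangles):
    """Center of mass of a closed, consistently oriented triangle mesh.
    'vertices' is (n, 3), 'triangles' an (m, 3) index array."""
    a = vertices[triangles[:, 0]]
    b = vertices[triangles[:, 1]]
    c = vertices[triangles[:, 2]]
    signed_vol = np.einsum("ij,ij->i", a, np.cross(b, c)) / 6.0  # per-tet signed volume
    centroids = (a + b + c) / 4.0                                # per-tet centroid (4th corner is the origin)
    return (centroids * signed_vol[:, None]).sum(axis=0) / signed_vol.sum()
```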

 

 

R. Prévost, E. Whiting, S. Lefebvre, O. Sorkine-Hornung: Make It Stand: Balancing Shapes for 3D Fabrication

NON-PHOTOREALISTIC RENDERING

Non-photorealistic rendering is something I have been quite fascinated with for a long time. Unfortunately I never had enough time to really get into it. But when a good old friend asked me if I could help him with one of his projects I thought this would be a good opportunity to finally dig deeper into NPR land.
My friend is one of the few persons who pretty much never uses a computer. And for his work he doesn’t have to. He is a brilliant artist making wonderful drawings, mainly with colored pencils. Around six to seven years ago he started one of his newer projects: he developed his own alphabet as a kind of abstract notation based on rectangles and four different colors. In the following years he made hundreds (or maybe even thousands) of drawings which show all kinds of variations of this notation. Usually these drawings are translations of one or two words into his alphabet. At some point he finally wanted to translate more than a few words, but since he does everything by hand this became rather time consuming. One evening, when we sat together over a drink, we talked about his project and he told me about his idea of translating “The Pacific”, a chapter from the famous book Moby Dick, into one of his drawings. It wasn’t that much text, but since he also wanted to try out different variations it became rather tedious to draw it by hand. In Houdini, however, this could be done in a fraction of a second. To test it out I quickly threw together some nodes and with the help of Python and the mighty copySop I had a working setup shortly after. After that I simply used Wren to render out the wireframe as a PostScript file which could then be plotted on paper.

My friend was quite happy with the result. It saved him a lot of work and time and now he could concentrate on coloring and finalizing the drawing. I for one, however, was asking myself if it was even necessary to hand-colorize the prints. Wouldn’t it be possible to just write a shader and let Houdini and Mantra do all the work? I was rather confident and so I started working on the shader. On the one hand it was important to simulate my friend’s drawing style as closely as possible, and on the other hand the shader had to be as flexible as possible. I first thought I could scan parts of his drawings and simply use them as textures, but this was neither flexible enough nor did it look natural. Quite the opposite, it looked terribly unnatural. After some more tests I finally decided to use only layered procedural textures. This made rendering rather slow, but since I had to render just a few large stills, this wasn’t much of a problem. In the end it worked quite nicely and using fully procedural textures made it possible to simulate many different drawing styles.

VECTOR FIELD DECOMPOSITION

Some time ago I wrote a blog post about generating tangent and smoothly varying vector fields on arbitrary meshes. The method I used works basically by specifying some constraints such as vortices, sources, sinks and guiding curves/edges, which are then interpolated over the surface to compute the final field. Depending on the constraints we get very different kinds of vector fields which have different properties and hence different applications. So, while the resulting vector field might be exactly what you need as a velocity field in a fluid simulation, it might be completely useless for remeshing. The main reason for this is related to curl and/or divergence. Generally speaking, whenever we are computing or just working with vector fields, we always have to deal with at least one of the three interrelated main differential operators: gradient, divergence and curl.

For instance, applying the gradient operator to a scalar field results in a curl-free vector field. If we then apply a local rotation by 90 degrees, we get the orthogonal counterpart in the form of a divergence-free field. This is, for example, exactly what we need as a velocity field for a fluid simulation since we usually want fluids to be incompressible. Furthermore we have to take care of open edges, at which we usually want tangential boundary conditions, at least in case we are working with fluids.

Well, though all this sounds easy, the problem is that in most cases we just don’t have these nice velocity fields. Normally they are neither gradient fields nor solenoidal fields but rather a mixture of both. In other words, we are dealing with vector fields which aren’t completely divergence-free. That’s the reason why we need to apply pressure projection in DOPs, which finally computes a proper velocity field needed for fluids. This works nicely in the simulation environment on volumes. But what if we need to do the same in SOPs and on surfaces? Of course we could still rely on volumes, do all the operations we need to do and bring the result back onto the original mesh. While this works to some degree, it is far from optimal: the result will most probably be inaccurate and we will be limited to solenoidal and gradient vector fields. What we could do instead is work directly with vector fields on the surface. Typically any vector field on a simply-connected domain can be decomposed into the sum of an irrotational (curl-free), a solenoidal (divergence-free) and a harmonic (divergence-free and curl-free) field. This technique is known as Hodge-Helmholtz decomposition and is basically achieved by minimizing the energy functionals for the irrotational and the solenoidal components of the field by solving the Poisson equation. The harmonic component is finally computed as the residual.

While divergence-free vector fields are essential for any kind of fluid simulation, curl-free and harmonic fields are essential for many other applications, such as remeshing, texture synthesis and parameterization, to name a few. The irrotational component of the vector field is curl-free and hence the field is integrable. It is equivalent to the gradient of a scalar potential function on simply-connected domains. This means that we can reproduce the scalar field from the irrotational vector field because it is exactly the gradient of the desired scalar field. The isolines of the scalar field are therefore exactly orthogonal to the vector field. In case the vector field is not curl-free, we can treat it as the gradient of a function and minimize the difference between the gradient and the given vector field in the least squares sense. Again this leads to the well known Poisson equation.

 

 

 

UNFOLDING MESHES

A friend of mine is pretty much obsessed with papercraft. Over the years he has built many beautiful models and I really like the aesthetic of these white, minimalistic and precisely folded paper geometries. He always starts by modeling them in Blender and when he’s satisfied with the shape he uses Pepakura to unfold it onto a plane so that he can actually build it out of paper. Pepakura is a simple yet powerful little program and can unfold pretty much everything into a foldable pattern. I found it quite fascinating and so I thought it might be a good idea to try to implement something like this in Houdini. Not that I’m building many paper models, but nevertheless it’s sometimes useful to unfold meshes. After some research and several tests it turned out to be actually quite easy. And after some more tests I finally implemented a simple version of an unfolding algorithm in VEX. Basically it works as follows (a small sketch of the first two steps follows after the list):

  • Build the dual graph of the mesh
  • Calculate a spanning tree to minimize cuts and prevent faces from overlapping
  • Cut the spanning tree
  • Unfold mesh
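
Just to make the first two steps a bit more concrete, here is a small Python sketch (not the original VEX implementation; faces is assumed to be a list of vertex-index tuples of a manifold mesh). It builds the dual graph by connecting faces that share an edge and then grows a plain breadth-first spanning tree; in practice the tree would be weighted to reduce cuts and avoid overlaps. Every dual edge that is not part of the tree becomes a cut, and the actual unfolding then rotates each face about its hinge edge into the plane of its parent face.

from collections import defaultdict, deque

def dual_graph(faces):
    # connect faces that share an edge (this is the dual graph of the mesh)
    edge_to_faces = defaultdict(list)
    for fi, face in enumerate(faces):
        for a in range(len(face)):
            edge = tuple(sorted((face[a], face[(a + 1) % len(face)])))
            edge_to_faces[edge].append(fi)
    graph = defaultdict(set)
    for edge, fs in edge_to_faces.items():
        if len(fs) == 2:                        # interior edge -> edge in the dual graph
            f0, f1 = fs
            graph[f0].add((f1, edge))
            graph[f1].add((f0, edge))
    return graph

def spanning_tree(graph, root=0):
    # breadth-first spanning tree; every dual edge not in the tree becomes a cut
    parent = {root: (None, None)}
    queue = deque([root])
    while queue:
        f = queue.popleft()
        for nb, hinge in graph[f]:
            if nb not in parent:
                parent[nb] = (f, hinge)         # later: unfold nb about this hinge edge
                queue.append(nb)
    return parent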

AS-RIGID-AS-POSSIBLE SURFACE EDITING

In Houdini we are mainly working with meshes and quite often these meshes need to be deformed. For this we can use several tools such as cages or proxies together with the LatticeSop or the newer PointDeformSop. However, sometimes it’s handy to deform the shape directly. Of course, there is the EditSop, but getting smooth deformations can be tricky. Even if soft radius is activated, you might end up with unsatisfying results, especially when rotations are involved. This might be because the EditSop uses just a spherical distance falloff instead of a geometry-aware distance, because local details don’t get maintained during editing, or because the deformed shape loses too much local volume/area compared to the original. Anyway, one of the best techniques to avoid pretty much all of these problems is “as-rigid-as-possible” deformation, described in detail in the paper “As-Rigid-As-Possible Surface Modeling” by Olga Sorkine and Marc Alexa.

A perfect mesh editing tool should enable us to select some vertices and move and rotate them around. The deformation of the handle should get propagated smoothly to the rest of the geometry. And of course, this should happen without removing or distorting surface details and obviously as fast as possible. In other words, the deformation should be smooth and properties such as area, curvature, distance and so on should be preserved as much as possible. Well, the problem is that if local mesh properties are to be preserved, only rotations and translations are allowed, since scaling would destroy the local structure. This, however, is clearly not possible in the case of mesh editing because as soon as we select a vertex and transform it, some parts of the mesh will get scaled and/or shifted. As a consequence we’ll end up with distorted surface details. “As-rigid-as-possible” editing, however, presents a nice solution to this problem: it tries to preserve local rigidity as much as possible, which means the shape is locally preserved while the overall deformation stays smooth.

 

Basically this is done in a two-stage approach: first, get an initial deformation guess by propagating the handle transformation to the mesh; second, find the best deformation that maximizes local rigidity by minimizing an energy function. The energy function is based on the one-ring neighbourhood around each vertex and measures the deviation from a rigid transformation of that cell. Summing up the energy functions of all vertices/cells of the mesh gives the global energy, which is then minimized to obtain the final deformation.
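
To make the two stages a bit more concrete, here is a rough Python sketch of one local/global iteration with uniform weights (illustrative only; the actual implementation uses the HDK, Eigen and cotangent weights). P holds the rest positions, Pd the current deformed guess, neighbors[i] the one-ring of vertex i, L a prebuilt uniform Laplacian (degree on the diagonal, -1 per neighbour) and handles a dictionary of pinned vertex positions. The local step finds the best rigid rotation per cell via an SVD, the global step solves a sparse linear system for the new positions.

import numpy as np
import scipy.sparse.linalg as spla

def local_step(P, Pd, neighbors):
    # per vertex: best rigid rotation between rest and deformed one-ring edges
    R = []
    for i, nbrs in enumerate(neighbors):
        S = (P[nbrs] - P[i]).T @ (Pd[nbrs] - Pd[i])   # 3x3 covariance of edge vectors
        U, _, Vt = np.linalg.svd(S)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:                     # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        R.append(Ri)
    return R

def global_step(P, neighbors, R, L, handles):
    # solve L p' = b for the new positions, with the handle vertices pinned
    b = np.zeros_like(P)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            b[i] += 0.5 * (R[i] + R[j]) @ (P[i] - P[j])
    L = L.tolil()
    for i, pos in handles.items():                    # hard constraints by row replacement
        L.rows[i], L.data[i] = [i], [1.0]
        b[i] = pos
    solve = spla.factorized(L.tocsc())
    return np.column_stack([solve(b[:, k]) for k in range(3)])

Alternating these two steps a few times is exactly the handful of iterations mentioned below.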

 

The number of iterations required to retain local properties depends mainly on the size of the mesh. Generally, just a few iterations (10-20) are needed until convergence.

The implementation is done with the HDK but instead of using Houdini’s own matrix solver I’ve used the linear algebra library Eigen. For more information about “as-rigid-as-possible” check out the following paper:

O. Sorkine, M. Alexa: As-Rigid-As-Possible Surface Modeling

GLOBAL OPTIMIZATION

Around 2006 I was working on a project where I had to find a specific distribution of objects on a surface. In other words, I had to position approximately 200 cubes on the surface in such a way that all the cubes could be seen from a given location. Because I didn’t want to spend the next two weeks moving around boxes, I was looking for a way to automate the process. After some research I found the solution: I had to use a global optimization algorithm which would be able to solve a problem with 400 variables as efficiently as possible. Well, sounds easy, doesn’t it? Funnily enough it really was. After some more research I finally implemented a simple variant of the classical multiobjective genetic optimization algorithm. The implementation at the time was done entirely in CHOPS with some help of Python. Apart from being rather slow, the solver did a good job. I really had fun implementing the algorithm and became quite fascinated by optimization in general, but unfortunately I had no time to get deeper into the topic. Half a year later I started to work on it again as a little side project in my spare time. Well, 10 years and around 18000 lines of code later it’s still a side project and still far from being finished. Over time I have implemented the following algorithms in Houdini with varying success (a minimal sketch of the basic genetic algorithm loop is shown after the lists):

Single-Objective Optimization Algorithms

  • GA Genetic Algorithm
  • DE Differential Evolution Algorithm
  • SaDE Self-adaptive Differential Evolution Algorithm
  • SA Simulated Annealing Algorithm
  • PSO Particle Swarm Optimization
  • GCPSO Guaranteed Convergence Particle Swarm Optimization
  • ABC Artificial Bee Colony Algorithm
  • HBA Honeybee Algorithm
  • IWD Intelligent Water Drops Algorithm
  • FPA Flower Pollination Algorithm
  • HS Harmony Search Algorithm
  • CS Cuckoo Search Algorithm
  • CMAES Covariance Matrix Adaptation Evolution Strategy

Multi-Objective Optimization Algorithms:

  • NSGA  Classical non-dominated Sorting Genetic Algorithm
  • NSGAII Non-dominated Sorting Genetic Algorithm
  • ssNSGAII Steady state Non-dominated Sorting Genetic Algorithm
  • SPEA2 Strength-based Evolutionary Algorithm
  • ISPEA Improved Strength-based Evolutionary Algorithm
  • MOGA Multi-Objective Genetic Algorithm
  • PAES Pareto Archived Evolution Strategy
  • IPESA2 Improved Pareto Envelope-based Selection Algorithm
  • VEGA Vector Evaluated Genetic Algorithm
  • GDE3 Generalized Differential Evolution
  • MODE Multi-Objective Differential Evolution
  • EMOEA e-Dominance-based Multi-Objective Evolutionary Algorithm
  • IBEA Indicator-Based Evolutionary Algorithm
  • MOEAD Multi-Objective Evolutionary Algorithm with Decomposition
  • MOPSO Multi-Objective Particle Swarm Optimization
  • SMPSO Speed-Constrained Multi-objective Particle Swarm Optimization
  • MOFPA Multi-Objective Flower Pollination Algorithm
  • MOSA Multi-Objective Simulated Annealing

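Just to illustrate the basic principle behind most of these population-based methods, here is a tiny sketch of a classical single-objective genetic algorithm loop (purely illustrative, much simpler than the multi-objective variants listed above, and not the original CHOPS/Python implementation): tournament selection, uniform crossover and gaussian mutation applied to a population of real-valued candidate solutions.

import numpy as np

def genetic_algorithm(fitness, dim, pop_size=50, generations=200, mutation_rate=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))            # random initial population
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        children = [pop[np.argmin(scores)].copy()]            # elitism: keep the best as-is
        while len(children) < pop_size:
            a = rng.integers(0, pop_size, 2)                  # tournament selection
            b = rng.integers(0, pop_size, 2)
            p1 = pop[a[np.argmin(scores[a])]]
            p2 = pop[b[np.argmin(scores[b])]]
            child = np.where(rng.random(dim) < 0.5, p1, p2)   # uniform crossover
            mutation = rng.random(dim) < mutation_rate        # gaussian mutation
            child = child + mutation * rng.normal(0.0, 0.1, dim)
            children.append(child)
        pop = np.array(children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)]

# usage: minimize a simple test function (sphere) in 10 dimensions
best = genetic_algorithm(lambda v: float(np.sum(v * v)), dim=10)
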
Below are some of the results of using different algorithms on various test problems. More about it can be found here.

 

 

BIHARMONIC DISTANCE COMPUTATION

Some time ago I wrote a blog post about geodesics and different methods for computing them in Houdini. Geodesic distances are great since they represent the distance, or more precisely the shortest distance, between points on a surface. While this is an important property and useful for many things, we might run into situations in which other properties are even more important, for instance smoothness. This might be the case if we want to generate a smooth vector field based on the distance, or if we are working on meshes with holes, for example. A nice solution to these problems is to compute biharmonic distances instead of geodesic distances. The biharmonic distance is not necessarily the true distance between points on a surface but it is smooth and globally shape-aware.

Contrary to the geodesic distance, the biharmonic distance is not computed directly on the surface but from the eigenvectors and eigenvalues of the Laplacian. Basically, the squared distance between two points is the sum of the squared differences of the eigenvectors evaluated at these points, weighted by the inverse squared eigenvalues. The accuracy is therefore highly dependent on the number of computed eigenvectors – the more the better. If we need exact distances we have to compute all eigenvectors, or in other words, perform the computationally rather expensive full eigendecomposition of the Laplacian. For a sufficient approximation, however, we can usually get away with the first 20-100 eigenvectors, depending on mesh density and shape. For the implementation in Houdini I’m using Spectra and Eigen together with the HDK. Even though the partial eigendecomposition is quite fast to compute, the computation of geodesic distances is of course much faster, especially on large meshes.
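
For illustration, here is a small Python/SciPy sketch (a stand-in for the Spectra/Eigen based HDK implementation; it uses a uniform graph Laplacian built from the edge list instead of the cotangent Laplacian and mass matrix used in the paper):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def biharmonic_distance(edges, n_points, source, m=50):
    # uniform graph Laplacian: degree on the diagonal, -1 per edge
    i, j = np.asarray(edges).T
    W = sp.coo_matrix((np.ones(len(edges)), (i, j)), shape=(n_points, n_points))
    W = W + W.T
    L = sp.diags(np.ravel(W.sum(axis=1))) - W

    # first m+1 eigenpairs, sorted; drop the constant eigenvector (eigenvalue 0)
    vals, vecs = spla.eigsh(L.tocsc(), k=m + 1, which="SM")
    order = np.argsort(vals)
    vals, vecs = vals[order][1:], vecs[:, order][:, 1:]

    # d(source, q)^2 = sum_k (phi_k(source) - phi_k(q))^2 / lambda_k^2
    diff = vecs - vecs[source]
    return np.sqrt(np.sum(diff**2 / vals**2, axis=1))

With the full eigendecomposition the distance is exact; truncating the sum at m eigenpairs gives the approximation mentioned above.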

Y. Lipman, R. M. Rustamov, T. A. Funkhouser: Biharmonic Distance

GUIDANCE VECTOR FIELD

A good friend of mine is an artist. Last year, while he was working on an installation, he asked me if I could help him with an interesting problem. For an interactive installation he basically wanted to work with reflections on a three-dimensional surface. The general idea was quite simple and the setup basically as follows:
He wanted to have two sculptures in a room which could change their position in space. Pretty much in the middle of the room there would be a three-dimensional surface which could be deformed by servos driven by an Arduino. This surface would respond to the movement and position of the visitor and get modified in such a way that the two sculptures are always visible as reflections on the surface, even if they are not visible directly. Well, that was the plan. He already had great ideas how to build the reflecting surface in reality and also knew how to build the servo rig. Tracking the visitors wasn’t a problem either, but what he didn’t know was how to define the shape of the surface based on the incoming data, which in turn was needed to control the servos. That was the problem I was supposed to take care of since I had promised to help him.
So, just to summarize the “surface-problem”:

  • The surface should be deformed in such a way that both sculptures are visible as reflections as much as possible
  • The surface should be as smooth as possible because of aesthetic reasons
  • Due to material constraints, the surface area should stay pretty much the same
  • Since it’s all about a responsive surface in an interactive installation, it has to work fast

So, because I had no clue how to solve this problem I started doing some prototyping in Houdini. I first set up a simple scene with two spheres representing the sculptures, a grid representing the surface and of course a “visitor” (in fact, another sphere). After some unsuccessful tests I started to compute various vector fields describing visual rays and reflections in the scene. And suddenly it clicked – I knew how to do it. Looking at the setup I had so far, it became quite apparent that I could just use the vector field as the gradient field in the Poisson equation. I tried it immediately and it worked.

The vector field was easy to compute. For every polygon of the grid I computed the vectors pointing to both “sculptures” and blended between them using a weight function based on distance and reflection angle. This way I got the direction in which visual rays needed to be reflected and hence I also knew the optimal orientation of each polygon. After that I applied a local rotation to each polygon of the mesh and obtained the new orientation. By doing so, every polygon turned to its “best” direction and the mesh was torn apart. Subsequently it needed to be reconstructed based on the new orientation of its polygons. And this is exactly the point where the Poisson equation came into play.

The setup was quite simple. Based on the new orientations I calculated a gradient for x, y and z, computed the divergence for each of these and used the result as the right-hand side of the Poisson equation. After solving the Poisson equation, which is in fact a sparse linear system, the result is a continuous function for each coordinate, in other words, the final mesh reconstructed in the least squares sense.
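
For the curious, here is a rough sketch of that reconstruction step for a triangle mesh in Python/SciPy (illustrative only; the original setup was built in Houdini and the weighting/rotation step is omitted). V holds the original vertex positions, F the triangles and target_grads[f][c] the desired gradient of coordinate c on face f, e.g. the original per-face gradients rotated into the new orientation. Minimizing the difference between the gradients of the new positions and these target gradients leads to the normal equations G^T W G x = G^T W g, i.e. the Poisson equation.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def face_gradient_operator(V, F):
    # G maps vertex values to per-face 3D gradients (rows: face*3 + component)
    rows, cols, vals, areas = [], [], [], []
    for fi, (i, j, k) in enumerate(F):
        n = np.cross(V[j] - V[i], V[k] - V[i])
        A = 0.5 * np.linalg.norm(n)
        n = n / (2.0 * A)                       # unit normal
        areas.append(A)
        # gradient of the hat function of each corner: (normal x opposite edge) / (2A)
        for v, e in ((i, V[k] - V[j]), (j, V[i] - V[k]), (k, V[j] - V[i])):
            g = np.cross(n, e) / (2.0 * A)
            for c in range(3):
                rows.append(fi * 3 + c)
                cols.append(v)
                vals.append(g[c])
    G = sp.coo_matrix((vals, (rows, cols)), shape=(3 * len(F), len(V))).tocsr()
    return G, np.asarray(areas)

def poisson_reconstruct(V, F, target_grads):
    G, areas = face_gradient_operator(V, F)
    W = sp.diags(np.repeat(areas, 3))           # area weights, one per gradient row
    A = (G.T @ W @ G).tolil()                   # this is the cotangent Laplacian
    newV = np.zeros_like(V)
    for c in range(3):                          # solve once per coordinate
        b = G.T @ W @ target_grads[:, c, :].ravel()
        Ac = A.copy()
        Ac[0, :] = 0                            # pin one vertex (constant nullspace)
        Ac[0, 0] = 1.0
        b[0] = V[0, c]
        newV[:, c] = spla.spsolve(Ac.tocsr(), b)
    return newV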

 

ART GALLERY PROBLEM

Many years ago GI in rendering wasn’t as fast and easy to use as it is today. It was indeed possible, but for animations you had to pre-bake indirect illumination as much as possible. One way of doing this was to place a bunch of cameras at good positions before shooting rays into the scene. Generally this sounds easy, but the interesting question was the following: what are good camera positions? To save render time you had to use as few cameras as possible, but at the same time it was necessary to cover as much of the scene as possible. Basically this is a visibility problem and exactly what the classical art gallery problem is all about. There exist various algorithms which could be used in different ways. Some are listed here:

https://en.wikipedia.org/wiki/Art_gallery_problem

A fairly simple and efficient algorithm is described in a paper by Michael Batty and Sanjay Rana. It’s closely related to spatial analysis and since I already had some experience in this field it was finally the algorithm I implemented in Houdini.

M. Batty, S. Rana: The automatic definition and generation of axial lines and axial maps

 

ISOVIST AND VISIBILITY GRAPH

Isovists and visibility graphs are quite interesting and have been used in different fields throughout the last 30 years. They were used to analyse and understand space, helped to predict how people move through museums and were the conceptual basis for numerous path- and wayfinding applications. Broadly speaking, both the isovist and the visibility graph do generally the same thing but in different ways. An isovist is basically a polygon describing the visible area or volume around a given point, whereas a visibility graph is, as the name implies, a graph connecting visible points, stored internally as an adjacency matrix. Although the algorithms are different, the measures you’ll get by using one or the other are quite similar, for example: max and min distance to obstacles, average distance, area, perimeter, different statistics based on distances, and so on. These measures can later be used to derive properties of the analysed space. Thus it became important in the fields of architecture and urban planning, robot motion planning and computational geometry, but it’s also used in other, less obvious areas like sociology, spatial cognition and architectural psychology. For a more complete explanation check out the following links:

https://en.wikipedia.org/wiki/Isovist
https://en.wikipedia.org/wiki/Visibility_graph

There exist various specialized software packages which can compute such a visibility analysis, but out of interest I thought I’d implement my own version in Houdini. Most of the algorithms are written in C++, though it might be possible to do most of it in VEX nowadays. The main reason why I used the HDK is simply that I used BGL (Boost Graph Library) for some calculations on the visibility graph, and also because of better performance. While VEX is fast and gives you multithreading for free, C++ is still faster in this case, mainly due to the fact that I needed to do a lot of calculations on large arrays.

The images below show the main difference between the isovist and a visibility graph in 2D. The isovist on the left is a polygon describing the visual area around a viewpoint, whereas the visibility graph on the right is basically an adjacency graph. If we do the calculation on more than just one point, the result is a field with various measures indicating different spatial properties.

 

The obvious way to compute the isovist is to rotate a vector around the viewpoint and test for intersections with the environment. It is easy to implement but leads to a number of problems. For instance, in the left image below the vector is rotated in one degree steps. This is sufficient for nearby walls but results in poor precision at larger distances and acute angles. A much better approach is to directly sample the points of the surrounding geometry (image on the right side below). This works simply by checking each point’s visibility: if the point is visible, check for subsequent ray intersections and finally compute the isovist. This solution is not just much better precision-wise but also performance-wise. Instead of at least 360 rays I had to shoot just 56 rays into the scene.
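
As a small illustration of the first, angle-stepping approach, here is a Python sketch (purely illustrative; obstacles are given as 2D line segments and one ray is cast per angular step, which is exactly where the precision problem at larger distances comes from):

import numpy as np

def ray_segment_hit(o, d, a, b):
    # returns the ray parameter t of the intersection with segment a-b, or None
    v2 = b - a
    denom = d[0] * v2[1] - d[1] * v2[0]
    if abs(denom) < 1e-12:
        return None                                   # ray and segment are parallel
    ao = a - o
    t = (ao[0] * v2[1] - ao[1] * v2[0]) / denom       # distance along the ray
    s = (ao[0] * d[1] - ao[1] * d[0]) / denom         # position along the segment
    return t if t >= 0.0 and 0.0 <= s <= 1.0 else None

def isovist(viewpoint, segments, step_deg=1.0, max_dist=1e6):
    o = np.asarray(viewpoint, dtype=float)
    points = []
    for ang in np.arange(0.0, 360.0, step_deg):
        d = np.array([np.cos(np.radians(ang)), np.sin(np.radians(ang))])
        t_min = max_dist
        for a, b in segments:
            t = ray_segment_hit(o, d, np.asarray(a, float), np.asarray(b, float))
            if t is not None and t < t_min:
                t_min = t
        points.append(o + t_min * d)                  # nearest hit along this ray
    return np.array(points)                           # isovist polygon, one point per ray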

 

Most of the measurands derived from the isovists are scalars. By computing the gradient, however, it is easy to get a vector field. This is useful for many things, for instance to guide agents in a simulation.

 

Currently the following measurands are implemented:
Min. distance, max. distance, average distance, min. diameter, max. diameter, min. distance direction, max. distance direction, standard deviation distance, variance distance, skewness distance, dispersion distance, area, perimeter, closed perimeter, open perimeter, openness, min. convex core, compactness, convexity, circularity, centrality, jaggedness, roundness, clustering coefficient, revelation, connectivity. Some of these measurands are shown in the images below.

 

 

For some related reading:
M. L. Benedikt: To take hold of space: isovists and isovist fields
A. Turner, M. Doxa, D. O’Sullivan, A. Penn: From isovists to visibility graphs: a methodology for the analysis of architectural space
M. Batty: Exploring isovist fields: space and shape in architectural and urban morphology
G. Franz, J. M. Wiener: Exploring isovist-based correlates of spatial behavior and experience

QUADRILATERAL REMESHING

This is just an old and fairly rough implementation of an algorithm to convert triangular meshes into mostly quadrilateral meshes. The method is quite simple and basically works by computing principal curvature directions and subsequently trying to build quads based on those directions. It does a good job on smooth surfaces like the one in the images below but fails on meshes with hard edges. In other words, it’s far from being really usable …

 

GEODESICS ON MESHES

Measuring geodesic distances and computing paths on a mesh is widely used in computational geometry for many different applications, ranging from mesh parameterization and remeshing to skinning and mesh deformation. After a longer conversation about this topic, and since I thought it could be useful at some point, I finally got my feet wet with different algorithms to compute such geodesics in Houdini.
The first and obvious choice was to compute the shortest path between the source point and all the other points on the geometry. This was easy and fast to do but unfortunately it doesn’t compute true geodesics. Since the underlying algorithm of the ShortestPathSop (Dijkstra) relies on the edges of the mesh, it computes just a rough approximation of the real distances. Of course, the inaccuracy could be minimized by iteratively smoothing the paths, but still, the result will be far from optimal and the method slow as well.

To circumvent this problem, I tried another method. Instead of measuring the length of the shortest paths directly it relied on the fact that a real geodesic path unfolds into a straight line. So, what I basically did was:

  • Compute the shortest path between point A and point B
  • Unfold the path onto a plane
  • Measure the Euclidean distance between point A and point B on the unfolded path.

This method worked quite nicely but it has one big drawback – it doesn’t work on meshes with holes. As soon as the line drawn between the start and end point of the unfolded path crosses a hole, the result is no longer the true geodesic distance.

The third algorithm I’ve used was the well known Fast Marching Front Propagation Method. You start at the single source point and compute the distance to the 1-ring neighbouring points around it. By doing this iteratively and summing up the distances you’ll finally get the geodesic distance to all points in the mesh. This works because it is easy to calculate the distance to the third point of a triangle as soon as the other two distances are known.
For the first quick implementation I used Python but it was way too slow to be usable. After replacing the slow list lookups in the code I got a 3 times speedup, but it was still too slow, which was the reason for porting the algorithm to the HDK. After that, it worked quite fast (around 270 times faster), but the general problem with this algorithm is that it doesn’t work with obtuse triangles. In other words, it needs a fairly clean and regularly triangulated mesh. If this is not the case, you’ll end up with errors in the distance calculation which lead to ever larger errors and so forth. To prevent this from happening I had to insert conditional statements for obtuse triangles as well as for all other special cases, and hence the code got much longer and somewhat cluttered. Usually, when fast marching front propagation is used on 2D and 3D grids, this is not a problem, but for arbitrarily triangulated meshes it is.

Finally I decided to implement the great paper “Geodesics in Heat” by Keenan Crane and colleagues. The method described in the paper works basically as follows (a small sketch follows after the list):

  • Integrate the heat flow for a source point across the mesh
  • Compute the normalized (and negated) gradient of the heat
  • Solve the Poisson equation to get the actual distance
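
Here is a rough sketch of these three steps in Python/SciPy (illustrative and not the Eigen based implementation; it assumes the sparse operators are already built: the cotangent Laplacian Lc, the lumped mass matrix M, the per-face gradient G and its area-weighted adjoint D = G^T W, so that Lc = D G, e.g. constructed like the gradient operator sketched in the guidance vector field section above). The time step t is typically chosen as the squared mean edge length.

import numpy as np
import scipy.sparse.linalg as spla

def heat_geodesic(Lc, M, G, D, source, t):
    n = Lc.shape[0]

    # 1) integrate the heat flow: solve (M + t * Lc) u = delta_source
    delta = np.zeros(n)
    delta[source] = 1.0
    u = spla.spsolve((M + t * Lc).tocsc(), delta)

    # 2) normalized (and negated) gradient of the heat, one 3D vector per face
    gu = (G @ u).reshape(-1, 3)
    X = -gu / (np.linalg.norm(gu, axis=1, keepdims=True) + 1e-20)

    # 3) solve the Poisson equation Lc phi = D X (least squares fit of the gradient to X)
    b = np.asarray(D @ X.ravel()).ravel()
    A = Lc.tolil()
    A[0, :] = 0                                   # pin one vertex (constant nullspace)
    A[0, 0] = 1.0
    b[0] = 0.0
    phi = spla.spsolve(A.tocsr(), b)
    return phi - phi[source]                      # shift so the source point is at distance 0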

One of the great advantages of the heat method is its brilliant simplicity. Basically it relies only on solving sparse linear systems, which can be done quite efficiently by various linear algebra libraries (I’ve used Eigen for this). Since these systems can be prefactorized, it is also possible to compute the geodesic distances from more than one source point in pretty much the same computation time, quite contrary to the fast marching method, for example.

For some related reading:
K. Crane, C. Weischedel, M. Wardetzky: The Heat Method for Distance Computation
M. Novotni, R. Klein: Computing Geodesic Distances on Triangular Meshes
V. Surazhsky, T. Surazhsky, D. Kirsanov, S. J. Gortler, H. Hoppe: Fast Exact and Approximate Geodesics on Meshes
D. Bommes, L. Kobbelt: Accurate Computation of Geodesic Distance Fields for Polygonal Curves on Triangle Meshes