FROM NORMAL MAP TO HEIGHT MAP

The question is: why would we want height maps instead of normal maps in the first place? Well, there are various reasons, but the most obvious are that they are easier to edit, generally easier to understand and that the result, when applied as a displacement map, is easier to anticipate. In other words, while normal maps do have some advantages, they are generally less intuitive to work with, and that’s probably a good reason why we might want to convert them. But before going into the details of how the conversion is done, let’s quickly recap what a normal map is and how it can be generated.

In fact it’s pretty easy, so let’s set up a simple example. Take a grid and apply a noise function to it (in our case a mountainSop). Next, compute point normals for the deformed grid and encode these normals as colors. Finally, write out the colors as a texture map or generate the map directly with the texture baker. And that’s basically it: the result is our normal map.

But now let’s reverse the process and assume we have our normal map but the geometry is just a flat grid. How can we reconstruct the original deformed geometry?

Well, let’s start simple. As we already know, and as the name “normal map” implies, normals are encoded as colors. To be more precise, the RGB values of our map correspond to the XYZ components of the normals we are searching for. This means we can do a pretty much 1:1 conversion. The only thing we have to take care of is changing the range from [0, 1] (colors) to [-1, 1] (normals). Once we’ve done that we indeed end up with proper normals; the grid, however, is still flat. What we need to do next is adjust the points’ position/height so that the displaced grid finally matches the original deformed geometry. And that’s the tricky part. So the question now is: how are we going to get the height information out of our normals?
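As a tiny, hedged sketch (plain Numpy, not tied to any particular Houdini node), the range conversion in both directions looks like this:

```python
import numpy as np

def normal_to_color(n):
    # [-1, 1] -> [0, 1], used when baking normals into the map
    return n * 0.5 + 0.5

def color_to_normal(c):
    # [0, 1] -> [-1, 1], used when decoding the map back into normals
    return c * 2.0 - 1.0
```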

Well, one way, for instance, is to try to solve the problem geometrically. This could be done by iteratively moving the points vertically up/down until the newly computed normals match the ones we got from our normal map. For a simple mesh/map this technique is fast and reasonably accurate, but for more complicated maps and/or irregular meshes it starts to fail and the result might just not be what we’d expect. To overcome this problem there are various other methods we could use instead, but my personal favourite is, once again, to rely on the famous Poisson equation.

If we assume height is a function on the mesh, it can be computed by minimizing, in the least squares sense, the difference between the gradient of this function and our pre-computed normals (projected onto the tangent plane of the mesh). Since this leads to the Poisson equation, it’s easy to set up a linear system in the typical form A x = b. If we then use a fast sparse matrix library and let it do its magic by solving the system for x, we finally get what we were looking for, namely our desired height.
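To make this a bit more concrete, here’s a minimal, hedged sketch in Python/Scipy for the simple flat-grid case from above (on an arbitrary mesh we’d use a cotangent Laplacian and per-triangle divergence instead). The target gradient comes straight from the decoded normals: for a height field the gradient is (-Nx/Nz, -Ny/Nz).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64
# Target gradients decoded from the normal map: gx = -Nx/Nz, gy = -Ny/Nz
gx = np.zeros((n, n))
gy = np.zeros((n, n))

# 5-point graph Laplacian on the n x n grid, built via Kronecker products
I = sp.identity(n)
D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
L = (sp.kron(I, D) + sp.kron(D, I)).tolil()

# Divergence of the target gradient field (backward differences)
div = np.zeros((n, n))
div[:, 1:] += gx[:, 1:] - gx[:, :-1]
div[1:, :] += gy[1:, :] - gy[:-1, :]
b = div.ravel()

# Height is only defined up to a constant: pin one point, then solve A x = b
L[0, :] = 0.0
L[0, 0] = 1.0
b[0] = 0.0
height = spla.spsolve(L.tocsr(), b).reshape(n, n)
```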

And of course, it works on non-flat meshes too.

Instead of setting up and solving everything in matrix form, it’s also possible to do it in VEX/OpenCL, although the setup is a bit different and it takes longer to solve. Anyways, for those who don’t have access to a fast matrix solver it’s a reasonable alternative. The process is pretty much the same as the one described in this blog post.

THE JOY OF USING MATRICES

From VEX/OpenCL to sparse matrices

In my last blog post I wrote a bit about the possibility of (mostly) circumventing the need for sparse matrices in Houdini by setting up and solving the system in VEX/OpenCL. This time, however, it’s about the other way around: how a typically VEX/OpenCL-based setup could be implemented and solved in matrix form.

Pretty much everybody using Houdini has at some point done basic Laplacian smoothing on a mesh (with uniform weights), perhaps without even knowing it. It’s a widely used algorithm and fairly easy to implement by averaging the values of the 1-ring around the current point. In fact, in Houdini it’s even easier than that and we can just use the attribBlurSop. AttribBlur uses OpenCL under the hood and relies on the basic graph Laplacian to blur/smooth point data, such as P for instance. This is done by performing the smoothing/averaging step a number of times, which is conveniently exposed as the iteration slider in the UI of the attribBlurSop.
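For illustration, here’s a minimal, hedged sketch of one such uniform smoothing step in Numpy (a schematic stand-in, not the actual OpenCL kernel attribBlur runs):

```python
import numpy as np

def smooth_step(points, neighbours, strength=0.5):
    """One uniform Laplacian smoothing step.
    points: (n, 3) array, neighbours: list of 1-ring index lists per point."""
    avg = np.array([points[nbrs].mean(axis=0) for nbrs in neighbours])
    # Move every point a bit towards the average of its 1-ring
    return points + strength * (avg - points)
```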

However, this is of course not the only way Laplacian smoothing could be implemented. Instead of iteratively applying the smoothing step per point, we could also set up a simple linear system and subsequently solve it for the final point positions. This is what I’ve done in the file below. It’s a simple example where I’m using a direct solver in Scipy which basically solves all the equations simultaneously in one go.
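As a hedged sketch of this matrix-form variant (assuming the same point/neighbour data as above; the actual file may set the system up differently), we solve (I + lambda * L) x = x0 once with a direct solver:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def smooth_direct(points, neighbours, lam=10.0):
    """Implicit uniform Laplacian smoothing: solve (I + lam * L) x = points."""
    n = len(points)
    rows, cols, vals = [], [], []
    for i, nbrs in enumerate(neighbours):
        w = 1.0 / len(nbrs)
        for j in nbrs:
            rows.append(i); cols.append(j); vals.append(-w)
        rows.append(i); cols.append(i); vals.append(1.0)
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))  # uniform graph Laplacian
    A = (sp.identity(n) + lam * L).tocsc()
    lu = spla.splu(A)                     # direct sparse LU factorization
    return lu.solve(np.asarray(points, dtype=float))  # all 3 columns at once
```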

In practice it doesn’t make much of a difference which method we use, because both lead to the same result in pretty much the same amount of time (at least for our example). But since Laplacian smoothing is already provided by the attribBlurSop, it’s most probably best to just use what’s already there instead of implementing it in matrix form.

Anyways, apart from practicality, I hope this simple example, together with my last post, helps to clear up some misunderstandings about whether it’s absolutely necessary to use sparse matrix libraries in Houdini or not.

FILE

ps.: To run the example file without problems you’ll need to have Scipy installed on your system.

THE JOY OF NOT USING MATRICES

From sparse matrices to VEX/OpenCL

Over the last months I had some conversations about the need for (sparse) matrices and/or matrix solvers in Houdini to implement various fundamental geometry processing algorithms, such as the Poisson equation. Of course, most of the time we definitely want to work with matrices because everything is easier to understand, faster to set up, and also faster to solve. Nevertheless, I think it’s important to know that even if that’s the preferred method, it’s not the only one. This becomes particularly apparent when working with Houdini because it doesn’t ship with Scipy or any other easy-to-use sparse matrix library. So, as long as we don’t want to compile Scipy on our own, we’re stuck with either using Numpy (which doesn’t support sparse matrices) or using the HDK.

However, the good thing is that if we don’t want to, we don’t have to use matrices at all. In other words, if we don’t have access to Scipy and don’t want to use the HDK, pretty much all of it can also be done in VEX and/or OpenCL. It will be slower and most probably less accurate, but it works using only standard SOPs, without the need for external dependencies such as Scipy, EIGEN, …

About two years ago I wrote a little blog post about various algorithms for computing geodesic distance fields on meshes. Nowadays we can simply use surfacedist(), but back in 2017 you had to code it yourself. And of course, if someone is talking about geodesics, there is no way of getting around Keenan Crane’s wonderful paper “The Heat Method for Distance Computation”. So, what better example could there be than an implementation of exactly this paper!?

I won’t go into details about the method described in the paper since everybody should read it, but it basically works by solving the heat equation as well as the Poisson equation. Both are typically set up and solved as linear systems in matrix form, and that’s also what I did for my “Geodesics on Meshes” blog entry. This time, however, we are going to do it differently and use only standard SOPs together with a bit of VEX and OpenCL.

So, below is the file with two slightly different methods. Example one simply uses Jacobi iteration to solve the heat and Poisson equations, and example two relies on Successive Over-Relaxation (SOR) to speed up convergence (if Jacobi takes n iterations, SOR converges in approx. sqrt(n) iterations, depending on the relaxation factor). Additionally, for reference and/or performance comparison, there’s also an example which uses sparse matrices in EIGEN together with the HDK, and another one which uses Scipy. To run the HDK example you’ll need a working compiler since it’s implemented using inlinecpp(), and for the Scipy example, well, you’ll need Scipy.
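For reference, here’s a small, hedged Python sketch of the two iterative schemes (dense Numpy for readability; the file does the equivalent per point in VEX/OpenCL):

```python
import numpy as np

def jacobi(A, b, iters=1000):
    """Plain Jacobi iteration for A x = b (assumes A is diagonally dominant)."""
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)
    R = A - np.diagflat(D)        # off-diagonal part of A
    for _ in range(iters):
        x = (b - R @ x) / D       # every unknown updated from the previous iterate
    return x

def sor(A, b, omega=1.8, iters=100):
    """Successive Over-Relaxation: Gauss-Seidel sweeps scaled by omega."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for i in range(len(b)):   # in-place updates accelerate convergence
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] += omega * ((b[i] - s) / A[i, i] - x[i])
    return x
```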

FILE

K. Crane, C. Weischedel, M. Wardetzky: The Heat Method for Distance Computation

MESH DEVELOPABILITY

Some weeks ago I had an interesting conversation on the Houdini Discord server about Keenan Crane’s fairly new paper Developability of Triangle Meshes. The paper is, of course, excellent as always and the topic quite interesting in general. Nevertheless, I somehow missed it when it was officially published and, even worse, I completely forgot about it although I was at a conference around two years ago where Keenan Crane gave a beautiful talk about their new method. Fortunately there are people around who are really quick at finding new papers, and after it was pointed out to me by Spinynormal I did a quick and dirty implementation in Houdini. It’s a rather rough prototype that doesn’t take too much care of all the details, but nonetheless it seems to work quite well.

To be honest, when I saw Keenan Crane’s talk I was at first a bit sceptical. I really liked the idea and thought wow, that’s brilliant, but I also had my doubts about the practicality of their method. Since the result depends largely on the structure of the mesh and not so much on its shape (or other properties), flipping some edges, for instance, might lead to a very different solution. And to put it simply, I wasn’t sure if this is an advantage or rather the opposite. Well, now, years later and after playing around a bit with the implementation in Houdini, I’m pretty much convinced that it’s an advantage. It’s quite interesting to watch how the mesh evolves into different shapes by just changing small parts of the triangulation. But then again, I don’t need to use it in practice.

K. Crane, O. Stein, E. Grinspun: Developability of Triangle Meshes

POLYGON PLANARIZATION

Last week I had a great conversation with someone in the Houdini Discord chat group about an interesting problem. He had a surface made up of quads and wanted to “planarize” the polygons in order to build a large physical model from it. He told me that he usually uses Grasshopper and Kangaroo for this but was wondering if it’s also doable in Houdini. Well, I don’t know Kangaroo very well, but as far as I know it’s basically a particle-spring based solver. It uses a force based approach which works quite well for many different problems, such as planarizing a polygonal mesh. (This isn’t true any more! Daniel wrote me in the comments below that Kangaroo has been using a projective solver since 2014.)

Pretty much the same, of course, could be done in Houdini. By using a “planarization force” together with constraints in the grainSolver it’s easy to replicate such a system. Instead of grains it’s also possible with wires. Most probably they’re not as stable as grains, but nevertheless it should do the job. If we like, we could also build our own force based solver. Implementing verlet integration in a solverSop is easy, and apart from that there’s the gasIntegrator microsolver which we can use in DOPs for that purpose. However, to get a stable system it’s important to choose the right step size for the integration. If it’s too large the system explodes, and if it’s too small it takes forever to find its equilibrium. Of course, higher order integrators like Runge-Kutta minimize this problem and are generally quite stable. The same is true for the position based approach of grains, but the problem still remains, at least to some degree.
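As a tiny, hedged sketch of what such a step could look like in a solverSop (position Verlet, hypothetical attribute layout):

```python
import numpy as np

def verlet_step(p, p_prev, accel, h):
    """One position-Verlet step. p, p_prev: (n, 3) current/previous positions,
    accel: (n, 3) acceleration (force/mass), h: integration step size."""
    # Too large an h blows up, too small an h crawls - exactly the trade-off
    # mentioned above.
    p_next = 2.0 * p - p_prev + accel * h * h
    return p_next, p   # new current and new previous positions
```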
Anyway, it’s an interesting problem and even though it’s typically not too hard to solve, it can become quite tricky depending on the geometry. The hard part is not so much obtaining planar polygons but staying as close as possible to the original mesh. In the worst case we might end up with an entirely flat mesh, which is clearly not the intended result. The reason might be, for example, that the mesh describing the surface has an unfavourable structure. And sometimes it’s simply not possible, because not every surface can be made up of planar polygons other than triangles.
During our chat I remembered that there was a thread on OdForce about pretty much the same topic. Someone had a saddle-shaped surface for which he computed the dual mesh and subsequently wanted to make the resulting hexagons planar. At the same time, the boundary points of the surface should keep their original positions. Well, even though a saddle surface seems rather simple, it is not, and I think this is a good example to highlight some of the problems we might run into when trying to planarize polygons.

But first let’s take one step back and start simple. Instead of hexagons, let’s assume it is a quad mesh. One way to tackle this problem is to use a force based approach similar to what Kangaroo/Grasshopper is doing. We need to set up a number of forces and run the system until it reaches equilibrium. To calculate the “planarization force” we could, for instance, project the polygon points onto the plane defined by center and normal. Or we could compute the diagonals and check their distance to each other. To preserve the mesh structure we can rely on edges and diagonals as spring forces. Additionally, we need to keep the points of the boundary at their positions. The key point, however, is to adjust the weighting between all these forces to finally get the result we were looking for. Usually there is no way to find a solution satisfying all the objectives, and hence the result depends highly on how we balance the forces. As said above, such an approach is easy to set up and works quite well under most circumstances. However, for a number of reasons I thought it might be better to take a slightly different approach. Instead of a force based system I used a method which is conceptually somewhat similar to the “As Rigid As Possible Surface Modelling” approach. Intuitively it’s easy to understand and basically works in the following way:

For every polygon of the mesh, the planar counterpart is computed and then fitted to the original poly in the least squares sense. For our specific “planarization problem” we could also just project the points onto the tangent plane, but in general we may want to rely on rigid shape matching to find the best fitting orientation. This can be done easily in VEX using polardecomp() or by using Numpy. Even though polar decomposition in VEX is faster, I tend to use Numpy’s SVD algorithm since it doesn’t have problems with flat, axis-aligned polys. Of course, we could make our matrix positive by adding a very small value, but this might lead to other problems instead. For a high number of polygons it’s a good idea to implement a simplified SVD algorithm in VEX which doesn’t have such problems, but in most cases Numpy should be fast enough. A sketch of both ingredients is shown below.
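Here’s a hedged Numpy sketch of the two building blocks (hypothetical helpers, not the exact code from the file): projecting a polygon onto its best-fit plane, and rigid shape matching via SVD (the Kabsch algorithm):

```python
import numpy as np

def planarize(pts):
    """Project a polygon's points (k, 3) onto the best-fit plane through
    their centroid."""
    c = pts.mean(axis=0)
    q = pts - c
    # Plane normal = right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(q)
    n = vt[2]
    return pts - np.outer(q @ n, n)

def rigid_fit(src, dst):
    """Rotate and translate src (k, 3) onto dst (k, 3) in the least squares
    sense (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    u, _, vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return (src - cs) @ R.T + cd
```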
But back to the planarization problem. After the “planarization” step is applied to each polygon, the mesh is obviously torn apart. What we need to do next is reconstruct the mesh, based on the planar polygons, in the least squares sense. In other words, we need to set up and solve a linear system. If we repeat these steps iteratively a number of times, the system finally converges to the solution we are looking for: a mesh made up of planar polygons. So, the process boils down to the following three steps:

  • Fit planar polygon to original polygon in the least squares sense
  • Reconstruct mesh based on planar polygons in the least squares sense
  • Repeat until system converges

Of course, we also need to implement all the other constraints, such as fixed boundary points, mesh structure preservation and so on. In the example below, the four corner points are fixed at their positions while all the other points are allowed to move in all directions. In my tests the algorithm works quite well and is also relatively fast. For the mesh shown below it needs about 175 iterations and converges in about 280 ms on my rather old laptop.

But now let’s go back to the example file from OdForce. The original question was how to obtain planar hexagons while retaining the overall shape of the mesh. Additionally, even though it’s not mentioned in the post, we may want to keep the mesh structure intact. In other words, the polygons should stay connected. Well, this sounds easier than it is. It is indeed possible, but we might end up with unwanted results. The problem is that the saddle-shaped surface has negative Gaussian curvature and hence can’t be approximated by convex hexagonal polygons. However, if we allow non-convex polygons it’s doable, although the deviation from the original shape is rather large. This is clearly visible in images 2 and 3 below. If we additionally allow open edges, the overall shape is preserved very well (image 4). However, this is just the case for hexagonal polygons and won’t work for quads in the same way. So let’s take a look at what happens if we choose four-sided instead of six-sided polygons.
(Side note: for computing planar hexagonal meshes it’s most probably more efficient to rebuild the mesh instead of planarizing its polygons, but that’s maybe a topic for another blog post.)

Using the planarization algorithm on the quad mesh results in a completely flat surface, which is clearly not what we want. But how could this happen? Well, the simple reason is that it’s just not possible to obtain planar AND connected polygons based on our initial mesh without getting large deviations in the overall shape. This is clearly visible in the images below. Picture 1 is the original mesh made up of non-planar polygons. Picture 2 shows the result of the planarization with fixed corner points, and picture 3 is the planarized poly mesh without any additional constraints.

So what can we do to get a better result? The answer is simple: we need to provide a better starting point for the algorithm. In other words, we need to remesh the surface. The most natural way to remesh a surface is based on its principal curvature directions. By integrating the fields, for example, we could extract a conjugate curve network which we then use to construct the mesh. Such a curvature-aligned mesh is exactly what we need since its quads are naturally close to planar. For simple geometries this is not too hard to implement and the resulting mesh is a perfect starting point for the subsequent planarization algorithm. In the case of our saddle-shaped surface it’s even better, because we don’t need to do any planarization at all: the new curvature-aligned mesh is already composed of fully planar polygons.

For some related reading:
R. Poranne, E. Ovreiu, C. Gotsman: Interactive Planarization and Optimization of 3D Meshes
S. Bouaziz, S. Martin, T. Liu, L. Kavan, M. Pauly: Projective Dynamics: Fusing Constraint Projections for Fast Simulations
W. Wang, Y. Liu: A Note on Planar Hexagonal Meshes
W. Wang, Y. Liu, B. Chan, R. Ling, F. Sun: Hexagonal Meshes with Planar Faces

CURVES … appendix

My last blog post was mainly about curves and how some of the tools known from Grasshopper/Rhino could be implemented in Houdini. Somewhere at the very end of the entry I mentioned inflection points but didn’t explain what they are. Well, I’ll try to do it now.
An inflection point is a point on a curve at which the curvature changes sign. For example, if we look at a two-dimensional s-shaped curve and visualize its normals, there’s a point at which the normals flip from one side of the curve to the other. This point is the inflection point. Not every curve has such a point, but if it has one (or several), it’s sometimes useful to know how to find it. For instance, it was important in the example in which I was approximating a curve by circular arcs. However, the inflection point is just one example and maybe we need to search for other points, such as global or local extreme points, points of maximum or minimum curvature and so on.
To find such a point we basically have two options. One way is to recursively sample the curve in smaller and smaller steps until we get the point of interest. This is easy to implement and generally works very well for all kinds of curves. A more elegant and more accurate method, however, is to remember that we are dealing with polynomials and hence find the point analytically. In the case of the inflection point this simply means solving for the parameter at which the curvature is zero; for a 2D curve (x(t), y(t)) that’s where the curvature’s numerator x'(t)y''(t) - y'(t)x''(t) vanishes.
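As a hedged sketch (assuming a 2D polynomial curve with coefficients in ascending order; inflection_params is a hypothetical helper), the analytic approach boils down to finding the real roots of that numerator:

```python
import numpy as np
from numpy.polynomial import Polynomial

def inflection_params(x_coeffs, y_coeffs):
    """Parameter values t in [0, 1] where x'(t)*y''(t) - y'(t)*x''(t) = 0."""
    x = Polynomial(x_coeffs)
    y = Polynomial(y_coeffs)
    num = x.deriv() * y.deriv(2) - y.deriv() * x.deriv(2)
    roots = np.atleast_1d(num.roots())
    real = roots[np.isclose(roots.imag, 0.0)].real   # keep real roots only
    return real[(real >= 0.0) & (real <= 1.0)]

# Example: x(t) = t, y(t) = t^3 - 1.5*t^2 has its inflection point at t = 0.5
print(inflection_params([0.0, 1.0], [0.0, 0.0, -1.5, 1.0]))
```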