FROM NORMAL MAP TO HEIGHT MAP

The question is: why would we want height maps instead of normal maps in the first place? Well, there are various reasons, but the most obvious are that they are easier to edit, generally easier to understand, and that the result, when applied as a displacement map, is easier to anticipate. In other words, while normal maps do have some advantages, they are generally less intuitive to work with, and that's probably a good reason why we might want to convert them. But before going into the details of how the conversion is done, let's quickly recap what a normal map is and how it can be generated.

In fact it's pretty easy and hence we'll set up a simple example. Let's take a grid and apply a noise function to it (in our case a mountainSop). Next, compute point normals for the deformed grid and encode these normals as colors. Finally, just write out the colors as a texture map or generate the map directly with the texture baker. And that's basically it, the result is our normal map.

But now let’s reverse the process and assume we have our normal map but the geometry is just a flat grid. How can we reconstruct the original deformed geometry?

Well, let's start simple. As we already know, and as the name "normal map" implies, normals are encoded as colors. To be more precise, the RGB values of our map correspond to the XYZ components of the normals we are searching for. This means we can do a pretty much 1:1 conversion. The only thing we have to take care of is changing the range from [0, 1] (colors) to [-1, 1] (normals). If we've done that we indeed end up with proper normals; the grid, however, is still flat. What we need to do next is adjust the point positions/heights so that the displaced grid finally matches the original deformed geometry. And that's the tricky part. So, the question now is: how are we going to get the height information out of our normals?
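As a small aside, the decoding step just described is basically a one-liner. Here is a minimal Python sketch of it (the array name "Cd" and the function names are purely illustrative); the height question is tackled next.

```python
# Minimal sketch of the range change between colors and normals.
import numpy as np

def colors_to_normals(Cd):
    N = Cd * 2.0 - 1.0                                  # [0, 1] -> [-1, 1]
    return N / np.linalg.norm(N, axis=-1, keepdims=True)  # renormalize

def normals_to_colors(N):
    return N * 0.5 + 0.5                                # [-1, 1] -> [0, 1]
```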

Well, one way, for instance, is to try to solve the problem geometrically. This could be done by iteratively moving the points vertically up/down until the newly computed normals match the ones we got from our normal map. For a simple mesh/map this technique is fast and reasonably accurate, but for more complicated maps and/or irregular meshes it starts failing and the result might just not be what we'd have expected. To overcome this problem there exist various other methods which could be used instead, but my personal favourite is, once again, to rely on the famous Poisson equation.

If we assume height is a function on the mesh, it can be computed by minimizing, in the least squares sense, the difference between the gradient of this function and our pre-computed normals (projected onto the tangent plane of the mesh). Since this leads to the Poisson equation, it's easy to set up a linear system in the typical form A x = b. If we then use a fast sparse matrix library and let it do its magic by solving the system for x, we finally get what we were looking for, namely our desired height.
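To make the idea more concrete, below is a minimal sketch of such a Poisson reconstruction for the simple flat-grid case, using SciPy's sparse solver. The normal map is assumed to be an H x W x 3 array already remapped to [-1, 1]; the function name and the zero (Dirichlet) boundary handling are my own choices here, not taken from the original setup.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def height_from_normals(nmap):
    h, w = nmap.shape[:2]
    nx, ny = nmap[..., 0], nmap[..., 1]
    nz = np.clip(nmap[..., 2], 1e-6, None)

    # Height gradients implied by the normals: grad(h) = (-nx/nz, -ny/nz).
    gx, gy = -nx / nz, -ny / nz

    # Divergence of the target gradient field (right-hand side b of A x = b).
    b = np.zeros((h, w))
    b[:, 1:] += gx[:, 1:] - gx[:, :-1]
    b[1:, :] += gy[1:, :] - gy[:-1, :]

    # 5-point Laplacian assembled from Kronecker products (zero Dirichlet boundary).
    def lap1d(n):
        return sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    A = sp.kron(sp.eye(h), lap1d(w)) + sp.kron(lap1d(h), sp.eye(w))

    # Solve the sparse system for the height of all grid points in one go.
    x = spla.spsolve(A.tocsc(), b.ravel())
    return x.reshape(h, w)
```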

And of course, it works on non-flat meshes too.

Instead of setting up and solving everything in matrix form it's also possible to do it in VEX/OpenCl, although the setup is a bit different and it takes longer to solve. Anyway, for those who don't have access to a fast matrix solver it's a reasonable alternative. The process is pretty much the same as the one described in this blog post.

THE JOY OF USING MATRICES

From VEX/OpenCl to sparse matrices

In my last blog post I wrote a bit about the possibility of (mostly) circumventing the need for sparse matrices in Houdini by setting up and solving the system in VEX/OpenCl. This time, however, it's about the other way around and how a typically VEX/OpenCl-based setup could be implemented and solved in matrix form.

Pretty much everybody using Houdini has at some point done basic Laplacian smoothing on a mesh (with uniform weights), perhaps without even knowing it. It's a widely used algorithm and fairly easy to implement by averaging the values of the 1-ring around the current point. In fact, in Houdini it's even easier than that and we can just use the attribBlurSop. AttribBlur uses OpenCl under the hood and relies on the basic graph Laplacian to blur/smooth point data, such as P for instance. This is done by performing the smoothing/averaging step a number of times, which is conveniently exposed as the iteration slider in the UI of the attribBlurSop.
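For illustration, here is a minimal Python sketch of that iterative averaging. It assumes a precomputed neighbour list per point and an n x 3 array of positions; the names are illustrative and this is not what attribBlur does internally line for line.

```python
import numpy as np

def smooth_iterative(P, neighbours, iterations=10, blend=0.5):
    P = np.array(P, dtype=float)
    for _ in range(iterations):
        # Average of the 1-ring around each point (uniform weights).
        avg = np.array([P[ring].mean(axis=0) for ring in neighbours])
        # Move each point towards its 1-ring average.
        P = (1.0 - blend) * P + blend * avg
    return P
```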

However, this is of course not the only way Laplacian smoothing could be implemented. Instead of iteratively applying the smoothing step per point, we could also set up a simple linear system and subsequently solve it for the final point positions. This is what I've done in the file below. It's a simple example where I'm using a direct solver in Scipy, which basically solves all the equations simultaneously in one go.
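Purely as a sketch of the idea (not the actual example file), the matrix form can look like this: a uniform graph Laplacian is assembled once and the smoothed positions come out of a single sparse solve.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def smooth_direct(P, neighbours, strength=1.0):
    P = np.asarray(P, dtype=float)
    n = len(P)
    rows, cols, vals = [], [], []
    for i, ring in enumerate(neighbours):
        w = 1.0 / len(ring)
        rows.append(i); cols.append(i); vals.append(1.0)     # diagonal
        for j in ring:
            rows.append(i); cols.append(j); vals.append(-w)  # uniform 1-ring weights
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))    # graph Laplacian

    # Implicit smoothing: solve (I + s*L) P_new = P in one go instead of iterating.
    A = (sp.eye(n) + strength * L).tocsc()
    solve = spla.factorized(A)               # factorize once, reuse for x, y and z
    return np.column_stack([solve(P[:, d]) for d in range(3)])
```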

In practice it doesn't make much of a difference which method we use because both will lead to the same result in pretty much the same amount of time (at least for our example). But since Laplacian smoothing is already provided to us by the attribBlurSop, it's most probably best to just use what's already there instead of implementing it in matrix form.

Anyway, apart from practicality, I hope this simple example, together with my last post, helps to clear up some misunderstandings and the question of whether it's absolutely necessary to use sparse matrix libraries in Houdini or not.

FILE

PS: To run the example file without problems you'll need to have Scipy installed on your system.

THE JOY OF NOT USING MATRICES

From sparse matrices to VEX/OpenCl

Over the last months I had some conversations about the need for (sparse) matrices and/or matrix solvers in Houdini to implement various fundamental algorithms for geometry processing, such as the Poisson equation. Of course, most of the time we definitely want to work with matrices because everything is easier to understand, faster to set up, and also faster to solve. Nevertheless, I think it's important to know that even if that's the preferred method, it's not the only one. This becomes particularly apparent when working with Houdini because it doesn't ship with Scipy or any other easy-to-use sparse matrix library. So, as long as we don't want to compile Scipy on our own, we're stuck with either using Numpy (which doesn't support sparse matrices) or using the HDK.

However, the good thing is that if we don't want to, we don't have to use matrices at all. In other words, if we don't have access to Scipy and don't want to use the HDK, pretty much all of it can also be done in VEX and/or OpenCl. It will be slower and most probably less accurate, but it works using only standard SOPs without the need for external dependencies such as Scipy, Eigen, …

About two years ago I wrote a little blog post about various algorithms for computing geodesic distance fields on meshes. Nowadays we can simply use surfacedist(), but back in 2017 you had to code it yourself. And of course, if someone is talking about geodesics, there is no way of getting around Keenan Crane's wonderful paper "The Heat Method for Distance Computation". So, what better example could there be than an implementation of exactly this paper!?

I won't go into details about the method described in the paper, since everybody should read it, but it basically works by solving the heat equation as well as the Poisson equation. Both are typically set up and solved as linear systems in matrix form, and that's also what I did for my "Geodesics on Meshes" blog entry. This time, however, we are going to do it differently and use only standard SOPs together with a bit of VEX and OpenCl.

So, below is the file with two slightly different methods. Example one simply uses Jacobi iteration to solve the heat and Poisson equations, and example two relies on Successive Over-Relaxation to speed up convergence (if Jacobi takes n iterations, SOR converges in approximately sqrt(n) iterations, depending on the relaxation factor). Additionally, for reference and/or performance comparison, there's also an example which uses sparse matrices in Eigen together with the HDK and another one which uses Scipy. To run the HDK example you'll need a working compiler, since it's implemented using inlinecpp(), and for the Scipy example, well, you'll need Scipy.
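For reference, the cores of both solvers are tiny. Here is a dense NumPy sketch of the textbook forms (the actual implementations in the file run per point over the 1-ring in VEX/OpenCl; names and iteration counts are illustrative):

```python
import numpy as np

def jacobi(A, b, iterations=500):
    D = np.diag(A)                      # diagonal of A
    R = A - np.diagflat(D)              # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(iterations):
        x = (b - R @ x) / D             # x_new = D^-1 (b - R x_old)
    return x

def sor(A, b, iterations=200, omega=1.8):
    # Gauss-Seidel sweep blended with the previous value by the relaxation factor omega.
    n = len(b)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iterations):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x
```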

FILE

K. Crane, C. Weischedel, M. Wardetzky: The Heat Method for Distance Computation

POLYGON PLANARIZATION

Last week I had a great conversation with someone in the Houdini discord chat group about an interesting problem. He had a surface made up of quads and wanted to “planarize” the polygons in order to build a large physical model from it. He told me that he usually uses Grasshopper and Kangaroo for this but was wondering if it's also doable in Houdini. Well, I don't know Kangaroo very well, but as far as I know it's basically a particle-spring based solver. It uses a force-based approach which works quite well for many different problems such as planarizing a polygonal mesh. (This isn't true any more! Daniel wrote me in the comments below that Kangaroo has been using a projective solver since 2014.)

Pretty much the same, of course, could be done in Houdini. By using a “planarization force” together with constraints in the grainSolver it's easy to replicate such a system. Instead of using grains it's also possible with wires. Most probably they're not as stable as grains, but nevertheless it should do the job. If we like we could also build our own force-based solver. Implementing verlet integration in a solverSop is easy and apart from that there's the gasIntegrator microsolver which we can use in DOPs for that purpose. However, to get a stable system it's important to choose the right step size for the integration. If it's too large the system explodes and if it's too small it takes forever to find its equilibrium. Of course, higher order integrators like Runge-Kutta minimize this problem and generally they are quite stable. The same is true for the position based approach of grains, but the problem still remains, at least to some degree.

Anyway, it's an interesting problem and even though it's typically not too hard to solve, it can become quite tricky depending on the geometry. The hard part is not so much obtaining planar polygons but staying as close as possible to the original mesh. In the worst case we might end up with an entirely flat mesh, which is clearly not the intended result. The reason might be, for example, that the mesh describing the surface has an unfavourable structure. And sometimes it's simply not possible, because apart from triangles not every surface can be made up of planar polygons.

During our chat I remembered that there was a thread on OdForce about pretty much the same topic. Someone had a saddle-shaped surface for which he computed the dual mesh and subsequently wanted to make the resulting hexagons planar. At the same time the boundary points of the surface should keep their original positions. Well, even though a saddle surface seems rather simple, it is not, and I think this is a good example to highlight some of the problems we might run into when trying to planarize polygons.

But first let's take one step back and start simple. Instead of using hexagons let's assume it is a quad mesh. One way to tackle this problem is to use a force-based approach similar to what Kangaroo/Grasshopper is doing. We need to set up a number of forces and run the system until it reaches equilibrium. To calculate the “planarization force” we could, for instance, project the polygon points onto the plane defined by the polygon's center and normal. Or we compute the diagonals and check their distance to each other. To preserve the mesh structure we can rely on edges and diagonals as spring forces. Additionally we need to keep the points of the boundary at their positions. The key point, however, is to adjust the weighting between all these forces to finally get the result we are looking for. Usually there is no way to find a solution satisfying all the objectives, and hence the result depends heavily on how we balance the forces. As said above, such an approach is easy to set up and it works quite well under most circumstances. However, for a number of reasons I thought it might be better to take a slightly different approach. Instead of using a force-based system I used a method which is conceptually somewhat similar to the “As Rigid As Possible Surface Modelling” approach. Intuitively it's easy to understand and basically works in the following way:

For every polygon of the mesh, the planar counterpart is computed and then fitted to the original poly in the least squares sense. For our specific “planarization problem” we could also just project the points onto the tangent plane, but in general we may want to rely on rigid shape matching to find the best fitting orientation. This could be done easily in VEX by using polardecomp() or by using Numpy. Even though polar decomposition in VEX is faster, I tend to use Numpy's SVD algorithm since it doesn't have problems with flat, axis-aligned polys. Of course, we could make our matrix positive by adding a very small value, but this might lead to other problems instead. For a high number of polygons it's a good idea to implement a simplified SVD algorithm in VEX which doesn't have such problems, but in most cases Numpy should be fast enough. But let's not get off track and back to the planarization problem.
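To make the per-polygon step concrete, here is a minimal NumPy sketch of the two ingredients just mentioned: projecting a polygon onto its best-fit plane, and an SVD-based rigid shape match (the Kabsch algorithm). The function names are mine and this is a simplification, not the actual setup from the file.

```python
import numpy as np

def project_to_plane(pts):
    c = pts.mean(axis=0)
    q = pts - c
    # Plane normal = right singular vector belonging to the smallest singular value.
    n = np.linalg.svd(q)[2][-1]
    return c + q - np.outer(q @ n, n)

def rigid_fit(source, target):
    # Best rotation + translation aligning "source" with "target" in the least squares sense.
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    h = (source - cs).T @ (target - ct)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T   # remove possible reflection
    return (source - cs) @ r.T + ct
```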
After the “planarization” step is applied to each polygon, the mesh is obviously torn apart. What we need to do next is to reconstruct the mesh, in the least squares sense, based on the planar polygons. In other words, we need to set up and solve a linear system. If we repeat these steps iteratively a number of times, the system finally converges to the solution we are looking for: a mesh made up of planar polygons. So, the process boils down to the following three steps:

  • Fit planar polygon to original polygon in the least squares sense
  • Reconstruct mesh based on planar polygons in the least squares sense
  • Repeat until system converges

Of course, we also need to implement all the other constraints, such as fixed boundary points, mesh structure preservation and so on. In the example below, the four corner points are fixed at their positions while all the other points are allowed to move in all directions. In my tests the algorithm works quite well and is also relatively fast. For the mesh shown below it needs about 175 iterations and converges in about 280 ms on my rather old laptop.
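Putting the three steps together, a heavily simplified sketch of the loop could look like the snippet below. It uses plain averaging for the global step and pins a set of points, and reuses the hypothetical project_to_plane() from above; the real setup solves the global step as a proper least squares system and includes the structure-preserving constraints.

```python
import numpy as np

def planarize_mesh(P, polys, pinned, iterations=200):
    # P: n x 3 positions, polys: list of point-index lists, pinned: indices to keep fixed.
    # Assumes every point belongs to at least one polygon.
    P = np.array(P, dtype=float)
    for _ in range(iterations):
        accum = np.zeros_like(P)
        count = np.zeros(len(P))
        # Local step: planarize every polygon on its own (step 1).
        for poly in polys:
            accum[poly] += project_to_plane(P[poly])
            count[poly] += 1
        # Global step: average the per-polygon targets per point (step 2),
        # keeping the pinned points at their current positions.
        newP = accum / count[:, None]
        for i in pinned:
            newP[i] = P[i]
        P = newP                        # step 3: repeat until the system converges
    return P
```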

But now let's go back to the example file from OdForce. The original question was how to obtain planar hexagons while retaining the overall shape of the mesh. Additionally, even though it's not mentioned in the post, we may want to keep the mesh structure intact. In other words, the polygons should stay connected. Well, this sounds easier than it is. It is indeed possible, but we might end up with unwanted results. The problem is that the saddle-shaped surface has negative Gaussian curvature and hence it can't be approximated by convex hexagonal polygons. However, if we allow non-convex polygons it's doable, although the deviation from the original shape is rather large. This is clearly visible in images 2 and 3 below. If we additionally allow for open edges, the overall shape is preserved very well (image 4). However, this is just the case for hexagonal polygons and won't work for quads in the same way. So let's take a look at what happens if we choose four-sided instead of six-sided polygons.
(Side note: For computing planar hexagonal meshes it's most probably more efficient to rebuild the mesh instead of planarizing its polygons, but that's maybe a topic for another blog post.)

Using the planarization algorithm on the quad mesh results in a completely flat surface, which is clearly not what we want. But how could this happen? Well, the simple reason is that it's just not possible to obtain planar AND connected polygons based on our initial mesh without getting large deviations in the overall shape. This is clearly visible in the images below. Picture 1 is the original mesh made up of non-planar polygons. Picture 2 shows the result of the planarization with fixed corner points, and picture 3 is the planarized poly mesh without any additional constraints.

So what can we do to get a better result? The answer is simple: we need to provide a better starting point for the algorithm. In other words, we need to remesh the surface. The most natural way to remesh a surface is based on its principal curvature directions. By integrating the fields, for example, we can extract a conjugate curve network which we then use to construct the mesh. Such a curvature-aligned mesh is exactly what we need since its quads are naturally close to planar. For simple geometries this is not too hard to implement and the resulting mesh is a perfect starting point for the subsequent planarization algorithm. In the case of our saddle-shaped surface it's even better, because we don't need to do any planarization at all. The new curvature-aligned mesh is already composed of fully planar polygons.

For some related reading:
R. Poranne, E. Ovreiu, C. Gotsman: Interactive Planarization and Optimization of 3D Meshes
S. Bouaziz, S. Martin, T. Liu, L. Kavan, M. Pauly: Projective Dynamics: Fusing Constraint Projections for Fast Simulations
W. Wang, Y. Liu: A Note on Planar Hexagonal Meshes
W. Wang, Y. Liu, B. Chan, R. Ling, F. Sun: Hexagonal Meshes with Planar Faces

VECTOR FIELD DECOMPOSITION

Some time ago I wrote a blog post about generating tangent and smoothly varying vector fields on arbitrary meshes. The method I used basically works by specifying some constraints, such as vortices, sources, sinks and guiding curves/edges, which are then interpolated over the surface to compute the final field. Depending on the constraints we get very different kinds of vector fields which have different properties and hence different applications. So, while the resulting vector field might be exactly what you need as a velocity field in a fluid simulation, it might be completely useless for remeshing. The main reason for this is related to curl and/or divergence. Generally speaking, whenever we are computing or just working with vector fields, we always have to deal with at least one of the three interrelated main differential operators: gradient, divergence and curl.

For instance, applying the gradient operator to a scalar field results in a curl-free vector field. If we then apply a local rotation of 90 degrees, we get the orthogonal counterpart in the form of a divergence-free field. This is, for example, exactly what we need as a velocity field for a fluid simulation, since we usually want fluids to be incompressible. Furthermore we have to take care of open edges, at which we usually want tangential boundary conditions, at least in case we are working with fluids.

Well, though all this sounds easy, the problem is that in most cases we just don't have these nice velocity fields. Normally they are neither gradient fields nor solenoidal fields but rather a mixture of both. In other words, we are dealing with vector fields which aren't completely divergence-free. That's the reason why we need to apply pressure projection in DOPs, which finally computes a proper velocity field needed for fluids. This works nicely in the simulation environment on volumes. But what if we need to do the same in SOPs and on surfaces? Of course we could still rely on volumes, do all the operations we need to do and bring the result back onto the original mesh. While this works to some degree, it is far from optimal: the result will most probably be inaccurate and we will be limited to solenoidal and gradient vector fields. What we could do instead is to work directly with vector fields on the surface. Typically, any vector field on a simply-connected domain can be decomposed into the sum of an irrotational (curl-free), a solenoidal (divergence-free) and a harmonic (divergence-free and curl-free) field. This technique is known as Hodge-Helmholtz decomposition and is basically achieved by minimizing the energy functionals for the irrotational and the solenoidal components of the field by solving the Poisson equation. Finally, the harmonic component is computed as the residual.
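As an illustration of the idea (not of the mesh-based implementation), here is a minimal NumPy sketch of the decomposition on a regular, periodic 2D grid. In Fourier space the two Poisson solves reduce to a division by |k|^2, and the harmonic part of a periodic field is just its constant mean flow; on a mesh the same two solves would be set up with a Laplacian instead. All names are my own.

```python
import numpy as np

def hodge_helmholtz_2d(vx, vy):
    h, w = vx.shape
    kx = np.fft.fftfreq(w) * 2.0 * np.pi
    ky = np.fft.fftfreq(h) * 2.0 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                   # avoid 0/0 for the constant mode

    Vx, Vy = np.fft.fft2(vx), np.fft.fft2(vy)

    # Curl-free part: gradient of the potential phi with lap(phi) = div(v),
    # which in Fourier space is the projection k (k . V) / |k|^2.
    dot = (KX * Vx + KY * Vy) / k2
    cf_x = np.real(np.fft.ifft2(KX * dot))
    cf_y = np.real(np.fft.ifft2(KY * dot))

    # Harmonic part of a periodic field: the constant mean flow.
    hm_x = np.full_like(vx, vx.mean())
    hm_y = np.full_like(vy, vy.mean())

    # Divergence-free part: the residual.
    df_x, df_y = vx - cf_x - hm_x, vy - cf_y - hm_y
    return (cf_x, cf_y), (df_x, df_y), (hm_x, hm_y)
```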

While divergence-free vector fields are essential for any kind of fluid simulation, curl-free and harmonic fields are essential for many other applications, such as remeshing, texture synthesis and parameterization, to name a few. The irrotational component of the vector field is curl-free and hence the field is integrable. On simply-connected domains it is equivalent to the gradient of a scalar potential function. This means that we can reproduce the scalar field from the irrotational vector field because it is exactly the gradient of the desired scalar field. The isolines of the scalar field are therefore exactly orthogonal to the vector field. In case the vector field is not curl-free, we can treat it as the gradient of a function and minimize the difference between the gradient and the given vector field in the least squares sense. Again, this leads to the well known Poisson equation.

GLOBAL OPTIMIZATION

Around 2006 I was working on a project where I had to find a specific distribution of objects on a surface. In other words, I had to position approximately 200 cubes on the surface in such a way that all the cubes could be seen from a given location. Because I didn't want to spend the next two weeks moving around boxes, I was looking for a way to automate the process. After some research I found the solution: I had to use a global optimization algorithm which would be able to solve a problem with 400 variables as efficiently as possible. Well, sounds easy, doesn't it? Funnily enough it really was. After some more research I finally implemented a simple variant of the classical multi-objective genetic optimization algorithm. The implementation at the time was done entirely in CHOPs with some help from Python. Apart from being rather slow, the solver did a good job. I really had fun implementing the algorithm and became quite fascinated by optimization in general, but unfortunately I had no time to get deeper into the topic. Half a year later I started to work on it again as a little side project in my spare time. Well, 10 years and around 18000 lines of code later it's still a side project and still far from finished. Over the years I have implemented the following algorithms in Houdini with varying success:

Single-Objective Optimization Algorithms

  • GA Genetic Algorithm
  • DE Differential Evolution Algorithm
  • SaDE Self-adaptive Differential Evolution Algorithm
  • SA Simulated Annealing Algorithm
  • PSO Particle Swarm Optimization
  • GCPSO Guaranteed Convergence Particle Swarm Optimization
  • ABC Artificial Bee Colony Algorithm
  • HBA Honeybee Algorithm
  • IWD Intelligent Water Drops Algorithm
  • FPA Flower Pollination Algorithm
  • HS Harmony Search Algorithm
  • CS Cuckoo Search Algorithm
  • CMAES Covariance Matrix Adaptation Evolution Strategy

Multi-Objective Optimization Algorithms:

  • NSGA Classical Non-dominated Sorting Genetic Algorithm
  • NSGAII Non-dominated Sorting Genetic Algorithm
  • ssNSGAII Steady state Non-dominated Sorting Genetic Algorithm
  • SPEA2 Strength-based Evolutionary Algorithm
  • ISPEA Improved Strength-based Evolutionary Algorithm
  • MOGA Multi-Objective Genetic Algorithm
  • PAES Pareto Archived Evolution Strategy
  • IPESA2 Improved Pareto Envelope-based Selection Algorithm
  • VEGA Vector Evaluated Genetic Algorithm
  • GDE3 Generalized Differential Evolution
  • MODE Multi-Objective Differential Evolution
  • EMOEA e-Dominance-based Multi-Objective Evolutionary Algorithm
  • IBEA Indicator-Based Evolutionary Algorithm
  • MOEAD Multi-Objective Evolutionary Algorithm with Decomposition
  • MOPSO Multi-Objective Particle Swarm Optimization
  • SMPSO Speed-Constrained Multi-objective Particle Swarm Optimization
  • MOFPA Multi-Objective Flower Pollination Algorithm
  • MOSA Multi-Objective Simulated Annealing

Below are some of the results using different algorithms on various test problems. More about it can be found here.

BIHARMONIC DISTANCE COMPUTATION

Some time ago I wrote a blog post about geodesics and different methods for computing them in Houdini. Geodesic distances are great since they represent the distance, or more precisely, the shortest distance between points on a surface. While this is an important property and useful for many things, we might run into situations in which other properties are even more important, for instance smoothness. This might be the case if we want to generate a smooth vector field based on the distance, or if we are working on meshes with holes, for example. However, a nice solution to these problems is to compute biharmonic distances instead of geodesic distances. The biharmonic distance is not necessarily the true distance between points on a surface, but it is smooth and globally shape-aware.

Contrary to the geodesic distance, the biharmonic distance is not computed directly on the surface but from the eigenvectors of the Laplacian. Basically, it's the average distance between pairwise points projected onto incrementally increasing eigenvectors. The general accuracy is therefore highly dependent on the number of computed eigenvectors: the more, the better. If we need exact distances we have to compute all eigenvectors, or in other words, we need to perform the computationally rather expensive full eigendecomposition of the Laplacian. For a sufficient approximation, however, we can usually get away with the first 20-100 eigenvectors, depending on mesh density and shape. For the implementation in Houdini I'm using Spectra and Eigen together with the HDK. Even though the eigenvalue/eigenvector decomposition is quite fast, the computation of geodesic distances is of course much faster, especially on large meshes.
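For anyone without the HDK, the same computation can be sketched with SciPy's eigensolver. "L" stands for a sparse Laplacian of the mesh and all names are illustrative; the HDK version uses Spectra/Eigen, and a proper cotangent Laplacian with mass weighting is omitted here.

```python
import numpy as np
import scipy.sparse.linalg as spla

def biharmonic_distance(L, source, k=50):
    # k smallest eigenpairs via shift-invert; the first one (the constant vector
    # with eigenvalue ~0) carries no distance information and is skipped.
    vals, vecs = spla.eigsh(L.tocsc().astype(float), k=k + 1, sigma=1e-8, which='LM')
    vals, vecs = vals[1:], vecs[:, 1:]

    # d(x, source)^2 = sum_k (phi_k(x) - phi_k(source))^2 / lambda_k^2
    diff = vecs - vecs[source]
    return np.sqrt(np.sum(diff**2 / vals**2, axis=1))
```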

Y. Lipman, R. M. Rustamov, T. A. Funkhouser: Biharmonic Distance