FROM NORMAL MAP TO HEIGHT MAP

The question is: why would we want height maps instead of normal maps in the first place? Well, there are various reasons, but the most obvious are that they are easier to edit, generally easier to understand and that the result, when applied as a displacement map, is easier to anticipate. In other words, while normal maps do have some advantages, they are generally less intuitive to work with, and that's probably a good reason why we might want to convert them. But before going into the details of how the conversion is done, let's quickly recap what a normal map is and how it could be generated.

In fact it's pretty easy, so let's set up a simple example. Take a grid and apply a noise function to it (in our case a Mountain SOP). Next, compute point normals for the deformed grid and encode these normals as colors. Finally, write out the colors as a texture map or generate the map directly with the texture baker. And that's basically it: the result is our normal map.

But now let’s reverse the process and assume we have our normal map but the geometry is just a flat grid. How can we reconstruct the original deformed geometry?

Well, let's start simple. As we already know, and as the name "normal map" implies, normals are encoded as colors. To be more precise, the RGB values of our map correspond to the XYZ components of the normals we are looking for. This means we can pretty much do a 1:1 conversion. The only thing we have to take care of is changing the range from [0, 1] (colors) to [-1, 1] (normals). Once that's done we indeed end up with proper normals; the grid, however, is still flat. What we need to do next is adjust the points' position/height so that the displaced grid finally matches the original deformed geometry. And that's the tricky part. So, the question now is: how are we going to get the height information out of our normals?
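
In code the decoding step is a one-liner. Here's a tiny Python/NumPy sketch (the function name is made up for illustration, and it assumes a map without a flipped green channel):

import numpy as np

def decode_normal(rgb):
    # rgb in [0, 1]  ->  normal components in [-1, 1]
    n = 2.0 * np.asarray(rgb, dtype=float) - 1.0
    # renormalize to compensate for quantization in the texture
    return n / np.linalg.norm(n)

print(decode_normal([0.5, 0.5, 1.0]))   # roughly (0, 0, 1): an undisplaced, flat area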

Well, one way, for instance, is to solve the problem geometrically. This could be done by iteratively moving the points vertically up/down until the newly computed normals match the ones we got from our normal map. For a simple mesh/map this technique is fast and reasonably accurate, but for more complicated maps and/or irregular meshes it starts to fail and the result might just not be what we expected. To overcome this problem there are various other methods we could use instead, but my personal favourite is, once again, to rely on the famous Poisson equation.

If we treat height as a function on the mesh, it can be computed by minimizing, in the least squares sense, the difference between the gradient of this function and our pre-computed normals (projected onto the tangent plane of the mesh). Since this leads to the Poisson equation, it's easy to set up a linear system in the typical form A x = b. If we then use a fast sparse matrix library and let it do its magic by solving the system for x, we finally get what we were looking for, namely our desired height.
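
Just to make this concrete outside of Houdini, here is a minimal SciPy sketch for the simplest case of a regular grid: the normals prescribe the gradients of the height field, the forward differences are stacked into a sparse matrix A, and the least squares solve of A x = b is exactly the Poisson problem described above. Function names, unit grid spacing and the normal convention are assumptions for the sake of the example; on an irregular mesh the same pattern applies with a mesh gradient operator.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def diff_op(m):
    # (m-1) x m forward difference operator
    return sp.diags([-np.ones(m - 1), np.ones(m - 1)], [0, 1], shape=(m - 1, m))

def height_from_normals(nx, ny, nz):
    # a height field z = h(x, y) has a normal proportional to (-dh/dx, -dh/dy, 1),
    # so the normals prescribe the target gradients of h (x = columns, y = rows)
    gx = -nx / nz
    gy = -ny / nz
    rows, cols = gx.shape

    Dx = sp.kron(sp.identity(rows), diff_op(cols), format='csr')
    Dy = sp.kron(diff_op(rows), sp.identity(cols), format='csr')
    A = sp.vstack([Dx, Dy]).tocsr()

    # sample the target gradients at the staggered positions of the differences
    b = np.concatenate([0.5 * (gx[:, :-1] + gx[:, 1:]).ravel(),
                        0.5 * (gy[:-1, :] + gy[1:, :]).ravel()])

    # least squares solve of grad(h) = g, i.e. the Poisson problem; the height
    # is only defined up to a constant offset, so remove the mean afterwards
    h = spla.lsqr(A, b)[0].reshape(rows, cols)
    return h - h.mean()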

And of course, it works on non-flat meshes too.

Instead of setting up and solving everything in matrix form, it's also possible to do it with VEX/OpenCL, although the setup is a bit different and it takes longer to solve. Anyway, for those who don't have access to a fast matrix solver it's a reasonable alternative. The process is pretty much the same as the one described in this blog post.

THE JOY OF NOT USING MATRICES

From sparse matrices to VEX/OpenCL

Over the last few months I've had some conversations about the need to use (sparse) matrices and/or matrix solvers in Houdini to implement various fundamental algorithms for geometry processing, such as solving the Poisson equation. Of course, most of the time we definitely want to work with matrices because everything is easier to understand, faster to set up and also faster to solve. Nevertheless, I think it's important to know that even if that's the preferred method, it's not the only one. This becomes particularly apparent when working with Houdini because it doesn't ship with SciPy or any other easy-to-use sparse matrix library. So, as long as we don't want to compile SciPy ourselves, we're stuck with either using NumPy (which doesn't support sparse matrices) or using the HDK.

However, the good thing is that if we don't want to, we don't have to use matrices at all. In other words, if we don't have access to SciPy and don't want to use the HDK, pretty much all of it can also be done in VEX and/or OpenCL. It will be slower and most probably less accurate, but it works with nothing but standard SOPs and without external dependencies such as SciPy, Eigen, …
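
To illustrate what such a matrix-free solver boils down to, here is a small Python sketch of a Jacobi iteration for the Poisson equation using a simple uniform (graph) Laplacian. In Houdini the same update would live in a point wrangle or an OpenCL kernel, and a serious implementation would rather use cotangent weights, but the principle is identical. The function and parameter names are made up for illustration:

import numpy as np

def jacobi_solve(neighbors, b, fixed=None, iterations=1000):
    """neighbors[i] is the list of point indices connected to point i, b is the
    right-hand side of the Poisson equation, and fixed is an optional dict of
    {point index: prescribed value} used as a Dirichlet constraint (without it
    the pure Neumann problem is only determined up to a constant)."""
    x = np.zeros(len(b))
    fixed = fixed or {}
    for i, v in fixed.items():
        x[i] = v
    for _ in range(iterations):
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            if i in fixed:
                continue
            # discrete Poisson equation with uniform weights:
            # sum_j (x_j - x_i) = b_i   ->   x_i = (sum_j x_j - b_i) / deg(i)
            x_new[i] = (sum(x[j] for j in nbrs) - b[i]) / len(nbrs)
        x = x_new
    return x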

About two years ago I wrote a little blog post about various algorithms for computing geodesic distance fields on meshes. Nowadays we can simply use surfacedist(), but back in 2017 you had to code it yourself. And of course, if someone is talking about geodesics, there is no way of getting around Keenan Crane's wonderful paper "The Heat Method for Distance Computation". So, what better example than an implementation of exactly this paper!?

I won't go into details about the method described in the paper, since everybody should read it, but it basically works by solving the heat equation as well as the Poisson equation. Both are typically set up and solved as linear systems in matrix form, and that's also what I did for my "Geodesics on Meshes" blog entry. This time, however, we are going to do it differently and use only standard SOPs together with a bit of VEX and OpenCL.

So, below is the file with two slightly different methods. Example one simply uses Jacobi iteration to solve the heat and Poisson equations, and example two relies on Successive Over-Relaxation to speed up convergence (if Jacobi takes n iterations, SOR converges in approximately sqrt(n) iterations, depending on the relaxation factor). Additionally, for reference and/or performance comparison, there's also an example which uses sparse matrices in Eigen together with the HDK and another one which uses SciPy. To run the HDK example you'll need a working compiler since it's implemented using inlinecpp(), and for the SciPy example, well, you'll need SciPy.
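
The SOR example differs from a plain Jacobi sweep only in that it updates the values in place (Gauss-Seidel style) and over-relaxes every update with a factor between 1 and 2. Sticking with the uniform-weight Python sketch from above (again just an illustration of the update rule, not the OpenCL code from the file):

import numpy as np

def sor_solve(neighbors, b, omega=1.8, fixed=None, iterations=200):
    # same system as in the Jacobi sketch, but updated in place and over-relaxed
    x = np.zeros(len(b))
    fixed = fixed or {}
    for i, v in fixed.items():
        x[i] = v
    for _ in range(iterations):
        for i, nbrs in enumerate(neighbors):
            if i in fixed:
                continue
            gs = (sum(x[j] for j in nbrs) - b[i]) / len(nbrs)   # Gauss-Seidel value
            x[i] += omega * (gs - x[i])                         # over-relaxed update
    return x

Note that such sequential in-place sweeps don't map directly onto OpenCL; parallel implementations typically fall back to a red-black/multi-colour ordering or to Jacobi-style updates.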

FILE

K. Crane, C. Weischedel, M. Wardetzky: The Heat Method for Distance Computation

VECTOR FIELD DECOMPOSITION

Some time ago I wrote a blog post about generating tangent and smoothly varying vector fields on arbitrary meshes. The method I used basically works by specifying constraints such as vortices, sources, sinks and guiding curves/edges, which are then interpolated over the surface to compute the final field. Depending on the constraints we get very different kinds of vector fields, which have different properties and hence different applications. So, while the resulting vector field might be exactly what you need as a velocity field in a fluid simulation, it might be completely useless for remeshing. The main reason for this is related to curl and/or divergence. Generally speaking, whenever we compute or simply work with vector fields, we always have to deal with at least one of the three interrelated differential operators: gradient, divergence and curl.

For instance, applying the gradient operator to a scalar field results in a curl-free vector field. If we then apply a local rotation of 90 degrees, we get the orthogonal counterpart in the form of a divergence-free field. This is, for example, exactly what we need as a velocity field for a fluid simulation, since we usually want fluids to be incompressible. Furthermore, we have to take care of open edges, at which we usually want tangential boundary conditions, at least in case we are working with fluids.
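
As a tiny illustration in NumPy (assuming the per-face gradients have already been computed and lie in their face planes), the 90 degree rotation inside the tangent plane is nothing more than a cross product with the unit face normal:

import numpy as np

# two example faces: unit normals N and in-plane gradient vectors g
N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
g = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

g_rot = np.cross(N, g)   # rotate each gradient by 90 degrees around its face normal

# the rotated vectors stay in the tangent plane and keep their length
assert np.allclose(np.einsum('ij,ij->i', g_rot, N), 0.0)
assert np.allclose(np.linalg.norm(g_rot, axis=1), np.linalg.norm(g, axis=1))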

Well, though all this sounds easy, the problem is that in most cases we just don't have these nice velocity fields. Normally they are neither pure gradient fields nor solenoidal fields but rather a mixture of both. In other words, we are dealing with vector fields which aren't completely divergence-free. That's the reason why we need to apply pressure projection in DOPs, which finally computes a proper velocity field for fluids. This works nicely in the simulation environment on volumes. But what if we need to do the same in SOPs and on surfaces? Of course we could still rely on volumes, do all the operations we need and bring the result back onto the original mesh. While this works to some degree, it is far from optimal: the result will most probably be inaccurate and we are limited to solenoidal and gradient vector fields. What we could do instead is work directly with vector fields on the surface. Any vector field on a simply-connected domain can be decomposed into the sum of an irrotational (curl-free), a solenoidal (divergence-free) and a harmonic (divergence-free and curl-free) field. This technique is known as Hodge-Helmholtz decomposition and is basically achieved by minimizing the energy functionals for the irrotational and the solenoidal components of the field by solving the Poisson equation. The harmonic component is then computed as the residual.
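
For the sake of illustration, here is a minimal NumPy sketch of the decomposition on a flat, periodic 2D grid, where both Poisson solves can be done directly in Fourier space. It is of course not the mesh-based surface version described above (and on a periodic domain the harmonic part reduces to the constant mean flow), but it shows how each of the three components is obtained:

import numpy as np

n = 128
x = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')

# test field = gradient part + divergence-free part + constant (harmonic) part
vx = np.cos(X)*np.sin(Y) + np.sin(Y) + 0.3
vy = np.sin(X)*np.cos(Y) - np.sin(X) - 0.1

k = np.fft.fftfreq(n, d=1.0/n)               # integer wavenumbers for a 2*pi domain
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                               # avoid division by zero for the mean mode

vx_h, vy_h = np.fft.fft2(vx), np.fft.fft2(vy)

# irrotational part: solve  laplace(alpha) = div(v)  and take grad(alpha)
div_h = 1j*KX*vx_h + 1j*KY*vy_h
alpha_h = -div_h / K2
gx = np.real(np.fft.ifft2(1j*KX*alpha_h))
gy = np.real(np.fft.ifft2(1j*KY*alpha_h))

# solenoidal part: solve  laplace(psi) = curl(v)  and take the rotated gradient
curl_h = 1j*KX*vy_h - 1j*KY*vx_h
psi_h = -curl_h / K2
sx = -np.real(np.fft.ifft2(1j*KY*psi_h))
sy =  np.real(np.fft.ifft2(1j*KX*psi_h))

# harmonic part: the residual (here just the constant mean flow)
hx, hy = vx - gx - sx, vy - gy - sy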

While divergence-free vector fields are essential for any kind of fluid simulation, curl-free and harmonic fields are essential for many other applications, such as remeshing, texture synthesis and parameterization, to name a few. The irrotational component of the vector field is curl-free and hence integrable: on a simply-connected domain it is the gradient of a scalar potential function. This means we can recover that scalar field from the irrotational vector field, because the latter is exactly the gradient of the desired scalar field. The isolines of the scalar field are therefore exactly orthogonal to the vector field. In case the vector field is not curl-free, we can still treat it as the gradient of a function and minimize the difference between its gradient and the given vector field in the least squares sense. Again, this leads to the well-known Poisson equation.

GUIDANCE VECTOR FIELD

A good friend of mine is an artist. Last year, while working on an interactive installation, he asked me if I could help him with an interesting problem. He basically wanted to work with reflections on a three-dimensional surface. The general idea was quite simple and the setup basically as follows:
He wanted to have two sculptures in a room which could change their position in space. Pretty much in the middle of the room there is a three-dimensional surface which can be deformed by servos driven by an Arduino. This surface responds to the movement and position of the visitor and gets modified in such a way that the two sculptures are always visible as reflections on the surface, even if they are not visible directly. Well, that was the plan. He already had great ideas about how to build the reflecting surface in reality and also knew how to build the servo rig. Tracking the visitors wasn't a problem either, but what he didn't know was how to define the shape of the surface based on the incoming data, which in turn he needed to control the servos. That was the problem I was supposed to take care of, since I had promised to help him.
So, just to summarize the "surface problem":

  • The surface should be deformed in such a way that both sculptures are visible as reflections as much as possible
  • The surface should be as smooth as possible for aesthetic reasons
  • Due to material constraints, the surface area should stay pretty much the same
  • Since it’s all about a responsive surface in an interactive installation, it has to work fast

So, because I had no clue how to solve this problem, I started doing some prototyping in Houdini. I first set up a simple scene with two spheres representing the sculptures, a grid representing the surface and, of course, a "visitor" (in fact, another sphere). After some unsuccessful tests I started to compute various vector fields describing visual rays and reflections in the scene. And suddenly it clicked: I knew how to do it. Looking at the setup I had so far, it became quite apparent that I could just use the vector field as the gradient field in the Poisson equation. I tried it immediately and it worked.

The vector field was easy to compute. For every polygon of the grid I computed the vectors pointing to both "sculptures" and blended between them with a weight function based on distance and reflection angle. This way I got the direction in which visual rays needed to be reflected and hence also the optimal orientation of each polygon. After that I applied a local rotation to each polygon of the mesh and obtained its new orientation. By doing so, every polygon turned towards its "best" direction and the mesh was torn apart. Subsequently it needed to be reconstructed based on the new orientation of its polygons. And this is exactly the point where the Poisson equation came into play.
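
Just to sketch the idea in Python (the concrete weight function isn't given here, so it is left as a placeholder parameter; all names are made up for illustration):

import numpy as np

def target_directions(face_centers, face_normals, s1, s2, weight):
    """Blend, for every polygon, the directions towards the two 'sculpture'
    positions s1 and s2 into a single target direction. The weighting (based on
    distance and reflection angle) is supplied as weight(distance, angle)."""
    d1 = s1 - face_centers
    d2 = s2 - face_centers
    dist1 = np.linalg.norm(d1, axis=1)
    dist2 = np.linalg.norm(d2, axis=1)
    d1 /= dist1[:, None]
    d2 /= dist2[:, None]
    # angle between the current face normal and each direction
    a1 = np.arccos(np.clip(np.einsum('ij,ij->i', face_normals, d1), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.einsum('ij,ij->i', face_normals, d2), -1.0, 1.0))
    w1 = weight(dist1, a1)
    w2 = weight(dist2, a2)
    t = w1[:, None] * d1 + w2[:, None] * d2
    return t / np.linalg.norm(t, axis=1, keepdims=True)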

The setup was quite simple. Based on the new orientations I calculated a gradient for x, y and z, computed the divergence of each and used the results as the right-hand side of the Poisson equation. After solving the Poisson equation, which is in fact a sparse linear system, the result is a continuous, piecewise linear function per coordinate; in other words, the final mesh reconstructed in the least squares sense.
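
In SciPy terms, and assuming a precomputed sparse gradient operator G of the mesh (how that operator is built is not shown here), the reconstruction is one least squares solve per coordinate. The normal equations of that solve, G^T G x = G^T g, are exactly the "Laplacian equals divergence of the target field" Poisson system mentioned above:

import numpy as np
import scipy.sparse.linalg as spla

def reconstruct(G, gx, gy, gz, old_positions):
    """G: sparse per-face gradient operator of the mesh (assumed precomputed),
    gx, gy, gz: stacked components of the rotated target gradients."""
    new = np.column_stack([spla.lsqr(G, g)[0] for g in (gx, gy, gz)])
    # the solution is only defined up to a translation, so re-anchor the mesh
    return new - new.mean(axis=0) + old_positions.mean(axis=0)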

GEODESICS ON MESHES

Measuring geodesic distances and computing paths on a mesh is widely used in computational geometry for many different applications, ranging from mesh parameterization and remeshing to skinning and mesh deformation. After a longer conversation about this topic, and since I thought it could be useful at some point, I finally got my feet wet with different algorithms for computing such geodesics in Houdini.
The first and obvious choice was to compute the shortest path between the source point and all the other points on the geometry. This was easy and fast to do, but unfortunately it doesn't compute geodesics. Since the underlying algorithm of the Shortest Path SOP (Dijkstra) relies on the edges of the mesh, it computes just a rough approximation of the real distances. Of course, the inaccuracy could be reduced by iteratively smoothing the paths, but still, the result would be far from optimal and the method slow as well.

To circumvent this problem, I tried another method. Instead of measuring the length of the shortest path directly, it relies on the fact that a true geodesic path unfolds into a straight line. So, what I basically did was:

  • Compute the shortest path between point A and point B
  • Unfold the path onto a plane
  • Measure the Euclidean distance between point A and point B on the unfolded path.

This method worked quite nicely, but it has one big drawback: it doesn't work on meshes with holes. As soon as the line drawn between the start and end point of the unfolded path crosses a hole, the result is no longer the true geodesic distance.

The third algorithm I used was the well-known fast marching front propagation method. You start at the single source point and compute the distance to the 1-ring of neighbouring points around it. By doing this iteratively and summing up the distances you finally get the geodesic distance to all points of the mesh. This works because it is easy to calculate the distance to the third point of a triangle as soon as the other two distances are known.
For the first quick implementation I used Python, but it was way too slow to be usable. After replacing the slow list lookups in the code I got a threefold speedup, but it was still too slow, which was the reason for porting the algorithm to the HDK. After that it was quite fast (around 270 times faster), but the general problem with this algorithm is that it doesn't work with obtuse triangles. In other words, it needs a fairly clean and regularly triangulated mesh. If this is not the case, you'll end up with errors in the distance calculation, which lead to larger errors and so forth. To prevent this from happening I had to insert conditional statements for obtuse triangles as well as for all the other special cases, and hence the code got much longer and somewhat cluttered. Usually, when fast marching front propagation is used on 2D and 3D grids, this is not a problem, but for arbitrarily triangulated meshes it is.
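
To give a rough picture of that local update (this is only a sketch built on the unfolding idea, not the exact update rule from the fast marching literature, which adds further monotonicity and obtuse-angle checks), one can place a virtual source behind the already-known edge and measure a straight line to the third vertex:

import numpy as np

def third_vertex_distance(dA, dB, A, B, C):
    """Estimate the distance at C of triangle ABC when the distances dA at A and
    dB at B are already known: put a virtual source S in the triangle plane with
    |SA| = dA and |SB| = dB on the far side of edge AB, then take |SC| if the
    straight line S->C actually crosses the edge; otherwise fall back to Dijkstra."""
    c = np.linalg.norm(B - A)
    b = np.linalg.norm(C - A)
    a = np.linalg.norm(C - B)
    fallback = min(dA + b, dB + a)
    if dA + dB < c or abs(dA - dB) > c:   # circles around A and B don't intersect
        return fallback
    # local 2D frame: A at the origin, B at (c, 0), C above the x-axis
    xC = (b*b - a*a + c*c) / (2.0*c)
    yC = np.sqrt(max(b*b - xC*xC, 0.0))
    # virtual source below the x-axis
    xS = (dA*dA - dB*dB + c*c) / (2.0*c)
    yS = -np.sqrt(max(dA*dA - xS*xS, 0.0))
    # the segment S -> C has to cross the edge AB between A and B
    t = yC / (yC - yS) if yC != yS else 0.0
    if 0.0 <= xC + t*(xS - xC) <= c:
        return float(np.hypot(xC - xS, yC - yS))
    return fallback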

Finally I decided to implement the great paper "Geodesics in Heat" by Keenan Crane et al. The method described in the paper basically works as follows:

  • Integrate the heat flow for a source point across the mesh
  • Compute the normalized gradient for the heat
  • Solve the Poisson equation to get the actual distance

One of the great advantages of the heat method is its brilliant simplicity. Basically it relies only on solving sparse linear systems, which can be done quite efficiently by various linear algebra libraries (I've used Eigen for this). This makes it possible to compute the geodesic distances from more than one source point in pretty much the same computation time, quite unlike the marching method, for example.
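
As a rough sketch of those three steps in SciPy (Lc, M, grad and div stand for a precomputed cotangent Laplacian in its negative semi-definite convention, a lumped mass matrix and the gradient/divergence operators of the mesh; all names, signs and helpers are assumptions for illustration, not taken from my actual implementation):

import numpy as np
import scipy.sparse.linalg as spla

def heat_geodesics(Lc, M, grad, div, source, t):
    delta = np.zeros(M.shape[0])
    delta[source] = 1.0
    # 1) integrate the heat flow for a short time:  (M - t * Lc) u = delta
    u = spla.spsolve((M - t * Lc).tocsc(), delta)
    # 2) normalized (and negated) gradient of the heat
    X = grad(u)
    X = -X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    # 3) Poisson equation  Lc * phi = div(X); Lc is singular up to constants,
    #    so use a least squares solve and shift the result to start at zero
    phi = spla.lsqr(Lc.tocsr(), div(X))[0]
    return phi - phi.min()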

For some related reading:
K. Crane, C. Weischedel, M. Wardetzky: The Heat Method for Distance Computation
M. Novotni, R. Klein: Computing Geodesic Distances on Triangular Meshes
V. Surazhsky, T. Surazhsky, D. Kirsanov, S. J. Gortler, H. Hoppe: Fast Exact and Approximate Geodesics on Meshes
D. Bommes, L. Kobbelt: Accurate Computation of Geodesic Distance Fields for Polygonal Curves on Triangle Meshes