VECTOR FIELD DECOMPOSITION

Some time ago I wrote a blog post about generating tangent and smoothly varying vector fields  on arbitrary meshes.  The method I’ve used works basically by specifying some constraints such as vortices, sources, sinks and guiding curves/edges, which are then interpolated over the surface to compute the final field. Depending on the constraints we achieve very different kinds of vector fields which have different properties and hence different applications. So, while the resulting vector field might be exactly what you need as velocity field in a fluid simulation, it might be completely useless for remeshing. The main reason for this is related to curl and/or divergence. Generally speaking, whenever we are computing or just working with vector fields, we always have to deal with at least one of the three interrelated main differential operators – gradient, divergence and curl.

For instance, applying the gradient operator to a scalar field results in a curl-free vector field. If we then apply a local rotation by 90 degrees, we get the orthogonal counterpart in the form of a divergence-free field. This is, for example, exactly what we need as a velocity field for a fluid simulation, since we usually want fluids to be incompressible. Furthermore, we have to take care of open edges, at which we usually want tangential boundary conditions, at least when we are working with fluids.

Well, though all this sounds easy, the problem is that in most cases we just don’t have these nice velocity fields. Normally they are neither gradient fields nor solenoidal fields but rather a mixture of both. In other words, we are dealing with vector fields which aren’t completely divergence-free. That’s the reason why we need to apply pressure projection in DOPS, which finally computes a proper velocity field for fluids. This works nicely in the simulation environment on volumes. But what if we need to do the same in SOPS and on surfaces? Of course we could still rely on volumes, do all the operations we need to do and bring the result back onto the original mesh. While this works to some degree, it is far from optimal – the result will most probably be inaccurate and we will be limited to solenoidal and gradient vector fields. What we could do instead is to work directly with vector fields on the surface. Any vector field on a simply-connected domain can be decomposed into the sum of an irrotational (curl-free), a solenoidal (divergence-free) and a harmonic (divergence-free and curl-free) field. This technique is known as Hodge-Helmholtz decomposition and is basically achieved by minimizing the energy functionals for the irrotational and the solenoidal component of the field by solving the Poisson equation. Finally, the harmonic component is computed as the residual.
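Just to illustrate the idea outside of Houdini, below is a minimal numpy sketch of the decomposition on a flat, periodic grid. It is not the surface version – on a mesh the FFT is replaced by a Poisson solve built from the surface operators – but the structure is the same: extract a scalar potential from the divergence, take its gradient as the curl-free part and keep the remainder as the divergence-free part.

    import numpy as np

    def helmholtz_decompose(u, v):
        """Hodge-Helmholtz split of a 2D vector field (u, v) sampled on a periodic grid.
        Returns the curl-free and the divergence-free component; on the periodic plane
        the harmonic component is simply the mean flow, which is subtracted here."""
        ny, nx = u.shape
        kx = 2j * np.pi * np.fft.fftfreq(nx)[None, :]      # spectral d/dx
        ky = 2j * np.pi * np.fft.fftfreq(ny)[:, None]      # spectral d/dy
        U, V = np.fft.fft2(u), np.fft.fft2(v)

        # scalar potential from the divergence: laplacian(phi) = div(u, v)
        k2 = kx**2 + ky**2
        k2[0, 0] = 1.0                                     # avoid dividing by zero at the constant mode
        phi = (kx * U + ky * V) / k2
        phi[0, 0] = 0.0

        # curl-free part = grad(phi); divergence-free part = remainder minus the mean flow
        u_curlfree = np.real(np.fft.ifft2(kx * phi))
        v_curlfree = np.real(np.fft.ifft2(ky * phi))
        u_divfree = u - u_curlfree - u.mean()
        v_divfree = v - v_curlfree - v.mean()
        return (u_curlfree, v_curlfree), (u_divfree, v_divfree)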

While divergence-free vector fields are essential for any kind of fluid simulation, curl-free and harmonic fields are essential for many other applications, such as remeshing, texture synthesis and parameterization, to name a few. The irrotational component of the vector field is curl-free and therefore, on simply-connected domains, equivalent to the gradient of a scalar potential function. In other words, it is integrable. This means that we can reconstruct the scalar field from the irrotational vector field, because the field is exactly the gradient of the desired scalar field. The isolines of the scalar field are therefore exactly orthogonal to the vector field. In case the vector field is not curl-free, we can still treat it as the gradient of a function and minimize the difference between the gradient and the given vector field in the least-squares sense. Again, this leads to the well-known Poisson equation.
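This least-squares fit is again only a small sparse linear system. The sketch below does it on a regular grid with simple finite differences – the grid, the forward differences and the pinned value are my own simplifications; on a mesh the surface gradient and divergence operators take their place:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def fit_scalar_potential(u, v):
        """Scalar field s whose gradient best matches (u, v) in the least-squares sense,
        on a regular grid with unit spacing. The normal equations of the fit are exactly
        the Poisson equation; the isolines of s run orthogonal to the fitted field."""
        ny, nx = u.shape
        n = nx * ny
        idx = lambda y, x: y * nx + x

        # forward-difference gradient operator G and the field samples it should match
        rows, cols, vals, rhs = [], [], [], []
        for y in range(ny):
            for x in range(nx):
                if x + 1 < nx:                                  # d/dx
                    rows += [len(rhs), len(rhs)]
                    cols += [idx(y, x + 1), idx(y, x)]
                    vals += [1.0, -1.0]
                    rhs.append(0.5 * (u[y, x] + u[y, x + 1]))
                if y + 1 < ny:                                  # d/dy
                    rows += [len(rhs), len(rhs)]
                    cols += [idx(y + 1, x), idx(y, x)]
                    vals += [1.0, -1.0]
                    rhs.append(0.5 * (v[y, x] + v[y + 1, x]))
        G = sp.coo_matrix((vals, (rows, cols)), shape=(len(rhs), n)).tocsr()
        b = np.asarray(rhs)

        # normal equations G^T G s = G^T b; pin one value to fix the free constant
        A = (G.T @ G).tolil()
        r = G.T @ b
        A[0, :] = 0.0
        A[0, 0] = 1.0
        r[0] = 0.0
        return spla.spsolve(A.tocsc(), r).reshape(ny, nx)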

 

 

 

UNFOLDING MESHES

A friend of mine is pretty much obsessed with papercraft. Over the years he has built many beautiful models and I really like the aesthetic of these white, minimalistic and precisely folded paper geometries. He always starts by modeling them in Blender and when he’s satisfied with the shape he uses Pepakura to unfold it to a plane so that he can actually build it out of paper. Pepakura is a simple yet powerful little program that can unfold pretty much everything into a foldable pattern. I found it quite fascinating, and so I thought it might be a good idea to try to implement something like this in Houdini. Not that I’m building many paper models, but nevertheless it’s sometimes useful to unfold meshes. After some research and several tests it turned out to be actually quite easy. And after some more tests I finally implemented a simple version of an unfolding algorithm in VEX. Basically it works as follows:

  • Build the dual graph of the mesh
  • Calculate a spanning tree to minimize cuts and prevent faces from overlapping
  • Cut the mesh open along the edges that are not covered by the spanning tree
  • Unfold the mesh face by face along the tree (see the sketch below)
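For illustration, here's a small Python sketch of the core unfolding step for a manifold triangle mesh. It uses a plain BFS tree through the dual graph instead of a proper overlap-aware spanning tree, so it's a simplification of the idea, not a copy of the VEX version:

    import numpy as np
    from collections import defaultdict, deque

    def unfold(V, F):
        """Unfold a manifold triangle mesh into the plane along a BFS spanning tree of
        its dual graph. V: (n,3) vertex positions, F: (m,3) triangles. Returns a dict
        mapping each face index to a dict {vertex index: 2D position}."""
        # dual graph: faces sharing an edge become neighbours
        edge_faces = defaultdict(list)
        for fi, f in enumerate(F):
            for c in range(3):
                edge_faces[tuple(sorted((f[c], f[(c + 1) % 3])))].append(fi)
        adj = defaultdict(list)
        for e, faces in edge_faces.items():
            if len(faces) == 2:
                adj[faces[0]].append((faces[1], e))
                adj[faces[1]].append((faces[0], e))

        def place_third(pa, pb, da, db, avoid):
            # 2D point at distance da from pa and db from pb, on the side away from 'avoid'
            d = np.linalg.norm(pb - pa)
            x = (da**2 - db**2 + d**2) / (2 * d)
            h = np.sqrt(max(da**2 - x**2, 0.0))
            base = pa + x * (pb - pa) / d
            n = np.array([-(pb - pa)[1], (pb - pa)[0]]) / d
            c1, c2 = base + h * n, base - h * n
            return c1 if np.linalg.norm(c1 - avoid) >= np.linalg.norm(c2 - avoid) else c2

        # place the root face, then unfold neighbours across their shared (hinge) edges
        root = 0
        a, b, c = F[root]
        pa, pb = np.zeros(2), np.array([np.linalg.norm(V[b] - V[a]), 0.0])
        pc = place_third(pa, pb, np.linalg.norm(V[c] - V[a]),
                         np.linalg.norm(V[c] - V[b]), np.array([0.0, -1.0]))
        flat = {root: {a: pa, b: pb, c: pc}}
        seen, queue = {root}, deque([root])
        while queue:
            parent = queue.popleft()
            for child, (i, j) in adj[parent]:
                if child in seen:
                    continue
                seen.add(child)
                k = [v for v in F[child] if v not in (i, j)][0]     # new corner to place
                other = [v for v in F[parent] if v not in (i, j)][0]
                pk = place_third(flat[parent][i], flat[parent][j],
                                 np.linalg.norm(V[k] - V[i]),
                                 np.linalg.norm(V[k] - V[j]), flat[parent][other])
                flat[child] = {i: flat[parent][i], j: flat[parent][j], k: pk}
                queue.append(child)
        return flat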

AS-RIGID-AS-POSSIBLE SURFACE EDITING

In Houdini we are mainly working with meshes, and quite often these meshes need to be deformed. For this we can use several tools such as cages or proxies together with the LatticeSop or the newer PointDeformSop. However, sometimes it’s handy to deform the shape directly. Of course, there is the EditSop, but getting smooth deformations can be tricky. Even with soft radius activated, you might end up with unsatisfying results, especially when rotations are involved. This might be because the EditSop uses just a spherical distance falloff and no geometry-aware distance, or because local details don’t get maintained during editing, or because the deformed shape loses too much local volume/area compared to the original. Anyway, one of the best techniques to avoid pretty much all of these problems is “as-rigid-as-possible” deformation, described in detail in the paper “As-Rigid-As-Possible Surface Modeling” by Olga Sorkine and Marc Alexa.

A perfect mesh editing tool should enable us to select some vertices and move and rotate them around. The deformation of the handle should be propagated smoothly to the rest of the geometry. And of course, this should happen without removing or distorting surface details and, obviously, as fast as possible. In other words, the deformation should be smooth, and properties such as area, curvature, distance and so on should be preserved as much as possible. Well, the problem is that if local mesh properties are to be preserved, only rotation and translation may be involved, since scaling would destroy the local structure. This, however, is clearly not possible in the case of mesh editing, because as soon as we select a vertex and transform it, some parts of the mesh will get scaled and/or shifted. As a consequence we’ll end up with distorted surface details. “As-rigid-as-possible”, however, presents a nice solution to this problem. It tries to preserve local rigidity as much as possible, which means the shape is locally preserved and the deformation stays smooth.

 

Basically this is done in a two-stage approach. First, an initial deformation guess is obtained by propagating the handle transformation to the mesh. Second, the best deformation is found by maximizing local rigidity, which means minimizing an energy function. The energy function is based on the one-ring neighbourhood around each vertex and measures the deviation from a rigid transformation of that cell. By summing up the energy functions for all vertices/cells of the mesh we get the global energy, which is minimized to obtain the final deformation.
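To make the two stages a bit more concrete, below is a rough Python/SciPy sketch of the local/global loop using uniform weights and a plain one-ring neighbour list. The real implementation described further down uses cotangent weights, the HDK and Eigen, so treat this purely as an outline of the idea:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def arap_deform(V, neighbors, handles, iterations=10):
        """ARAP local/global iteration with uniform weights.
        V: (n,3) rest positions, neighbors: list of one-ring index lists,
        handles: dict {vertex index: pinned 3D position}."""
        n = len(V)
        # uniform Laplacian; handle rows are replaced by identity rows
        L = sp.lil_matrix((n, n))
        for i, nbrs in enumerate(neighbors):
            if i in handles:
                L[i, i] = 1.0
                continue
            L[i, i] = float(len(nbrs))
            for j in nbrs:
                L[i, j] = -1.0
        solve = spla.factorized(L.tocsc())

        P = V.copy()
        for i, p in handles.items():
            P[i] = p

        for _ in range(iterations):
            # local step: best-fitting rotation per one-ring (Kabsch via SVD)
            R = np.empty((n, 3, 3))
            for i, nbrs in enumerate(neighbors):
                S = (V[nbrs] - V[i]).T @ (P[nbrs] - P[i])   # covariance of rest vs. current edges
                U, _, Vt = np.linalg.svd(S)
                if np.linalg.det((U @ Vt).T) < 0:           # avoid reflections
                    U[:, -1] *= -1
                R[i] = (U @ Vt).T

            # global step: solve L p = b with the rotated rest edges on the right-hand side
            b = np.zeros((n, 3))
            for i, nbrs in enumerate(neighbors):
                if i in handles:
                    b[i] = handles[i]
                    continue
                for j in nbrs:
                    b[i] += 0.5 * ((R[i] + R[j]) @ (V[i] - V[j]))
            P = np.column_stack([solve(b[:, c]) for c in range(3)])
        return P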

 

The number of iterations required to retain local properties depends mainly on the size of the mesh. Generally, just a few iterations (10-20) are needed until convergence.

The implementation is done with the HDK, but instead of using Houdini’s own matrix solver I’ve used the linear algebra library Eigen. For more information about “as-rigid-as-possible” check out the following paper:

O. Sorkine, M. Alexa: As-Rigid-As-Possible Surface Modeling

GLOBAL OPTIMIZATION

Around 2006 I was working on a project where I had to find a specific distribution of objects on a surface. In other words, I had to position approximately 200 cubes on the surface in such a way that all the cubes could be seen from a given location. Because I didn’t want to spend the next two weeks moving around boxes, I was looking for a way to automate the process. After some research I found the solution: I had to use a global optimization algorithm which would be able to solve a problem with 400 variables as efficiently as possible. Well, sounds easy, doesn’t it? Funnily enough it really was. After some more research I finally implemented a simple variant of the classical multi-objective genetic optimization algorithm. The implementation at the time was done entirely in CHOPS with some help from Python. Apart from being rather slow, the solver did a good job. I really had fun implementing the algorithm and became quite fascinated by optimization in general, but unfortunately I had no time to get deeper into the topic. Half a year later I started to work on it again as a little side project in my spare time. Well, 10 years and around 18000 lines of code later it’s still a side project and still far from being finished. Over time I have implemented the following algorithms in Houdini with varying success (a small sketch of one of the simplest, differential evolution, follows the lists below):

Single-Objective Optimization Algorithms

  • GA Genetic Algorithm
  • DE Differential Evolution Algorithm
  • SaDE Self-adaptive Differential Evolution Algorithm
  • SA Simulated Annealing Algorithm
  • PSO Particle Swarm Optimization
  • GCPSO Guaranteed Convergence Particle Swarm Optimization
  • ABC Artificial Bee Colony Algorithm
  • HBA Honeybee Algorithm
  • IWD Intelligent Water Drops Algorithm
  • FPA Flower Pollination Algorithm
  • HS Harmony Search Algorithm
  • CS Cuckoo Search Algorithm
  • CMAES Covariance Matrix Adaptation Evolution Strategy

Multi-Objective Optimization Algorithms:

  • NSGA Classical Non-dominated Sorting Genetic Algorithm
  • NSGAII Non-dominated Sorting Genetic Algorithm
  • ssNSGAII Steady state Non-dominated Sorting Genetic Algorithm
  • SPEA2 Strength Pareto Evolutionary Algorithm
  • ISPEA Improved Strength Pareto Evolutionary Algorithm
  • MOGA Multi-Objective Genetic Algorithm
  • PAES Pareto Archived Evolution Strategy
  • IPESA2 Improved Pareto Envelope-based Selection Algorithm
  • VEGA Vector Evaluated Genetic Algorithm
  • GDE3 Generalized Differential Evolution
  • MODE Multi-Objective Differential Evolution
  • EMOEA e-Dominance-based Multi-Objective Evolutionary Algorithm
  • IBEA Indicator-Based Evolutionary Algorithm
  • MOEAD Multi-Objective Evolutionary Algorithm with Decomposition
  • MOPSO Multi-Objective Particle Swarm Optimization
  • SMPSO Speed-Constrained Multi-objective Particle Swarm Optimization
  • MOFPA Multi-Objective Flower Pollination Algorithm
  • MOSA Multi-Objective Simulated Annealing
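As promised, here's a minimal sketch of DE/rand/1/bin in Python – just the textbook scheme as a taste of the single-objective family, not the CHOPS/Python implementation used in Houdini:

    import numpy as np

    def differential_evolution(objective, bounds, pop_size=40, F=0.7, CR=0.9,
                               generations=200, seed=0):
        """Minimal DE/rand/1/bin. objective: callable on a 1D parameter vector,
        bounds: (d,2) array of [low, high] per variable."""
        rng = np.random.default_rng(seed)
        low, high = bounds[:, 0], bounds[:, 1]
        d = len(bounds)
        pop = rng.uniform(low, high, size=(pop_size, d))
        fitness = np.array([objective(x) for x in pop])

        for _ in range(generations):
            for i in range(pop_size):
                # mutation: combine three distinct individuals other than the current one
                a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
                mutant = np.clip(a + F * (b - c), low, high)
                # binomial crossover with the current individual
                mask = rng.random(d) < CR
                mask[rng.integers(d)] = True
                trial = np.where(mask, mutant, pop[i])
                # greedy selection
                f = objective(trial)
                if f < fitness[i]:
                    pop[i], fitness[i] = trial, f
        best = np.argmin(fitness)
        return pop[best], fitness[best]

    # e.g. minimizing the Rosenbrock function in 10 dimensions
    rosen = lambda x: np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)
    x_best, f_best = differential_evolution(rosen, np.array([[-5.0, 5.0]] * 10))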

Below are some of the results of using different algorithms on various test problems. More about it can be found here

 

 

BIHARMONIC DISTANCE COMPUTATION

Some time ago I wrote a blog post about geodesics and the different methods for computing them in Houdini. Geodesic distances are great since they represent the distance, or more precisely the shortest distance, between points on a surface. While this is an important property and useful for many things, we might run into situations in which other properties are even more important, for instance smoothness. This might be the case if we want to generate a smooth vector field based on the distance, or if we are working on meshes with holes, for example. A nice solution to these problems is to compute biharmonic distances instead of geodesic distances. The biharmonic distance is not necessarily the true distance between points on a surface, but it is smooth and globally shape-aware.

Contrary to geodesic distance, the biharmonic distance is not computed directly on the surface but from the eigenvectors of the Laplacian. Basically, it is the distance between two points in the embedding spanned by the eigenvectors, where each eigenvector is weighted by its inverse squared eigenvalue. The accuracy is therefore highly dependent on the number of computed eigenvectors – the more the better. If we need exact distances we have to compute all eigenvectors, or in other words, we need to perform the computationally rather expensive full eigendecomposition of the Laplacian. For a sufficient approximation, however, we can usually get away with the first 20-100 eigenvectors, depending on mesh density and shape. For the implementation in Houdini I’m using Spectra and Eigen together with the HDK. Even though it’s quite fast to perform the eigenvalue/eigenvector decomposition, the computation of geodesic distances is of course much faster, especially on large meshes.
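Written out, the distance is d(x,y)^2 = sum_k (phi_k(x) - phi_k(y))^2 / lambda_k^2, with phi_k and lambda_k the eigenvectors and eigenvalues of the Laplacian. Below is a hedged SciPy sketch of exactly that, assuming you already have a positive semi-definite Laplacian and a mass matrix for the mesh (a uniform graph Laplacian and unit masses serve as stand-ins here); the Houdini version does the same with Spectra and Eigen through the HDK:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def graph_laplacian(n, edges):
        """Uniform graph Laplacian as a simple stand-in for the cotangent Laplacian."""
        i, j = np.array(edges).T
        W = sp.coo_matrix((np.ones(len(edges)), (i, j)), shape=(n, n))
        W = W + W.T
        return sp.csr_matrix(sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W)

    def biharmonic_distance(L, M, source, k=60):
        """Approximate biharmonic distance from 'source' to all vertices via the
        eigen-expansion of Lipman et al. L: positive semi-definite Laplacian,
        M: (lumped) mass matrix, k: number of eigenpairs (more = more accurate)."""
        # smallest k+1 eigenpairs of L phi = lambda M phi via shift-invert
        vals, vecs = spla.eigsh(L, k=k + 1, M=M, sigma=-1e-6, which="LM")
        order = np.argsort(vals)
        vals, vecs = vals[order][1:], vecs[:, order][:, 1:]    # drop the constant mode
        diff = vecs - vecs[source]                             # phi_k(x) - phi_k(source)
        return np.sqrt(np.sum(diff**2 / vals**2, axis=1))

    # usage with the uniform stand-in, given n vertices and an edge list:
    # d = biharmonic_distance(graph_laplacian(n, edges), sp.identity(n, format="csc"), source=0)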

Y. Lipman, R. M. Rustamov, T. A. Funkhouser: Biharmonic Distance

GUIDANCE VECTOR FIELD

A good friend of mine is an artist. Last year, while he was working on some kind of installation, he asked me if I could help him with an interesting problem. For an interactive installation he basically wanted to work with reflections on a three-dimensional surface. The general idea was quite simple and the setup basically as follows:
He wanted to have two sculptures in a room which could change their position in space. Pretty much in the middle of the room there would be a three-dimensional surface, which could be deformed by servos driven by an Arduino. This surface responds to the movement and position of the visitor and gets modified in such a way that the two sculptures are always visible as reflections on the surface, even if they are not visible directly. Well, that was the plan. He already had great ideas on how to build the reflecting surface in reality and also knew how to build the servo rig. Tracking the visitors wasn’t a problem either, but what he didn’t know was how to define the shape of the surface based on the incoming data, which in turn he needed to control the servos. That was the problem I was supposed to take care of, since I had promised to help him.
So, just to summarize the “surface problem”:

  • The surface should be deformed in such a way that both sculptures are visible as reflections as much as possible
  • The surface should be as smooth as possible, for aesthetic reasons
  • Due to material constraints, the surface area should stay pretty much the same
  • Since it’s all about a responsive surface in an interactive installation, it has to work fast

So, because I had no clue how to solve this problem, I started doing some prototyping in Houdini. I first set up a simple scene with two spheres representing the sculptures, a grid representing the surface and, of course, a “visitor” (in fact, another sphere). After some unsuccessful tests I started to compute various vector fields describing visual rays and reflections in the scene. And suddenly it clicked – I knew how to do it. Looking at the setup I had so far, it became quite apparent that I could just use the vector field as a gradient field in the Poisson equation. I tried it immediately and it worked.

The vector field was easy to compute. For every polygon of the grid I had to compute the vectors pointing to both “sculptures” and blend between them using a weight function based on distance and reflection angle. This way I got the direction in which visual rays needed to be reflected, and hence I also knew the optimal orientation of each polygon. After that I applied a local rotation to each polygon of the mesh and obtained the new orientation. By doing so, every polygon turned to its “best” direction and the mesh was torn apart. Subsequently it needed to be reconstructed based on the new orientation of its polygons. And this is exactly the point where the Poisson equation came into play.

The setup was quite simple. Based on the new orientations I calculated the target gradients for the x, y and z coordinates, computed the divergence of each and used these as the right-hand side of the Poisson equation. After solving the Poisson equation, which is in fact a sparse linear system, the result is a piecewise linear coordinate function per axis. In other words, it is the final mesh, reconstructed in the least-squares sense.
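For the curious, here is roughly what that reconstruction looks like in Python/SciPy for a triangle mesh: the per-face gradients of the position are rotated by the per-face rotations and the vertex positions are then re-solved in the least-squares sense. The operators and the pinned-vertex handling are my own simplifications of the idea, not the original grid setup:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def face_gradient_operator(V, F):
        """Stacked per-face gradient operator G (3m x n) of a triangle mesh:
        G @ s gives the 3D gradient of the per-vertex scalar s on every face."""
        m, n = len(F), len(V)
        rows, cols, vals = [], [], []
        area = np.empty(m)
        for f, (i, j, k) in enumerate(F):
            e_jk, e_ki, e_ij = V[k] - V[j], V[i] - V[k], V[j] - V[i]
            nrm = np.cross(e_ij, -e_ki)
            dbl_area = np.linalg.norm(nrm)
            area[f] = 0.5 * dbl_area
            nrm /= dbl_area
            # gradient of the hat basis function of each corner
            for vert, edge in ((i, e_jk), (j, e_ki), (k, e_ij)):
                g = np.cross(nrm, edge) / dbl_area
                for d in range(3):
                    rows.append(3 * f + d)
                    cols.append(vert)
                    vals.append(g[d])
        return sp.coo_matrix((vals, (rows, cols)), shape=(3 * m, n)).tocsr(), area

    def reconstruct_from_rotations(V, F, R, pinned):
        """Poisson reconstruction: new vertex positions whose per-face gradients match
        the rotated gradients R[f] @ J[f] in the least-squares sense.
        R: (m,3,3) per-face rotations, pinned: dict {vertex index: fixed position}."""
        G, area = face_gradient_operator(V, F)
        A = sp.diags(np.repeat(area, 3))                       # area weights per gradient row
        J = (G @ V).reshape(len(F), 3, 3)                      # per-face Jacobian of the position
        T = np.einsum('fab,fbc->fac', R, J).reshape(3 * len(F), 3)   # rotated target gradients

        L = (G.T @ A @ G).tolil()                              # this is the (scaled) cotangent Laplacian
        b = G.T @ A @ T
        for i, p in pinned.items():                            # pin vertices to remove the null space
            L[i, :] = 0.0
            L[i, i] = 1.0
            b[i] = p
        solve = spla.factorized(L.tocsc())
        return np.column_stack([solve(b[:, c]) for c in range(3)])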

 

ART GALLERY PROBLEM

Many years ago, GI in rendering wasn’t as fast and easy to use as it is today. It was possible, but for animations you had to pre-bake indirect illumination as much as possible. One way of doing this was to place a bunch of cameras at good positions before shooting rays into the scene. Generally this sounds easy, but the interesting question was the following: what are good camera positions? To save render time you had to use as few cameras as possible, but at the same time it was necessary to cover as much of the scene as possible. Basically this is a visibility problem and exactly what the classical art gallery problem is all about. There exist various algorithms which can be used in different ways. Some are listed here:

https://en.wikipedia.org/wiki/Art_gallery_problem

A fairly simple and efficient algorithm is described in a paper by Michael Batty and Sanjay Rana. It’s closely related to spatial analysis, and since I already had some experience in this field it was the algorithm I finally implemented in Houdini.
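Just as a baseline for intuition – and explicitly not the Batty/Rana method from the paper below – a greedy set cover over a precomputed visibility matrix already goes a long way for the camera-placement flavour of the problem:

    import numpy as np

    def greedy_camera_placement(visible, coverage=0.98):
        """Greedy set-cover baseline: pick candidate cameras until the requested
        fraction of scene samples is covered. visible: boolean matrix of shape
        (num_candidates, num_samples), visible[c, s] = True if camera c sees sample s."""
        visible = np.asarray(visible, dtype=bool)
        covered = np.zeros(visible.shape[1], dtype=bool)
        chosen = []
        while covered.mean() < coverage:
            gains = (visible & ~covered).sum(axis=1)   # newly covered samples per candidate
            best = int(np.argmax(gains))
            if gains[best] == 0:                       # no candidate improves coverage anymore
                break
            chosen.append(best)
            covered |= visible[best]
        return chosen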

M. Batty, S. Rana: The automatic definition and generation of axial lines and axial maps