Just some fun implementing a coral growth algorithm in Houdini roughly based on the following papers:

J. A. Kaandorp: Macroscopic Modelling of Environmental Influence on Growth and Form of Sponges and Corals Using the Accretive Growth Model
R. Merks, A. Hoekstra, J. Kaandorp, P. Sloot: Models of coral growth: spontaneous branching, compactification and the Laplacian growth assumption

The setup is quite simple and basically a bidirectional simulation model in which the environment (sunlight, fluid flow, nutrient concentration) influences the growth process but is in turn influenced by the growing structure itself. The images below visualize this interrelation by showing the resulting geometry after some of the environmental conditions have been changed.

This is a topic I came across a long time ago when I was confronted with the following question: How do you distribute objects in a way that, for a particular location and for a given time frame, the overall amount of sunlight on the surface gets maximized? Well, theoretically it’s quite easy since it is a classical optimization problem, but how does one accurately calculate the illuminance values needed for this? The first and obvious choice is probably to use Mantra. Even though Mantra is a great path tracer, it won’t work for various reasons. But mainly just because it is simply not meant to be used for daylight and solar analysis but to render huge amounts of data and to produce stunning pictures.
A render engine which is quite the opposite in this regard and commonly used for such simulations is Radiance. It provides everything needed for this kind of analysis and it is fairly easy to use. So, I wrote a couple of Python scripts to share geometry and data between Houdini and Radiance. While this generally worked hassle-free, it was unfortunately slow and the lack of interactivity became a problem. This was mainly due to the principal workflow, which was as follows:

Export geometry and sampling grid as OBJ

Generate Radiance description files for materials, sky and so on

Render the scene in Radiance

Retrieve the result and bring it back into Houdini for visualization/optimization

This was more or less acceptable for simple tests, but for the optimization problem it was simply too slow. Half of the computation time got lost in sharing data between Houdini and Radiance. So I thought about how this could be avoided, and the next logical step was to leave Radiance aside and implement everything in Houdini.
And that’s exactly what I finally did. To calculate the sun position I implemented NREL’s Solar Position Algorithm (SPA). It is available as a C version but I used Python since its calendar library made things easier. This first step was pretty easy. The hard part was to find a good radiance and luminance distribution model for the sky. Most of the physical sky models used by render engines are largely based on the paper “A Practical Analytic Model for Daylight” by Preetham, Shirley and Smits. For simulation purposes, however, it is more common to rely on the CIE Standard Sky, as Radiance does. Since I couldn’t decide which one to implement, I finally did the implementation for both, together with the CIE Standard General Sky and the Perez Sky Model.
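Just to illustrate the basic idea (not the actual SPA implementation, which is far more involved), a crude solar position estimate can be derived from little more than the solar declination and the hour angle. The function below is such a simplification, with a hypothetical name and nowhere near SPA’s accuracy, but it shows the underlying spherical trigonometry:

```python
import math

def sun_position(day_of_year, solar_hour, latitude_deg):
    """Very rough solar altitude/azimuth in degrees, from day of year,
    local solar time and latitude. A toy stand-in for NREL's SPA."""
    # Solar declination (Cooper's approximation)
    decl = math.radians(23.45) * math.sin(2.0 * math.pi * (284 + day_of_year) / 365.0)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))  # 15 degrees per hour
    lat = math.radians(latitude_deg)
    # Altitude from the standard spherical-trigonometry relation
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    alt = math.asin(sin_alt)
    # Azimuth measured from north, clockwise
    cos_az = ((math.sin(decl) - math.sin(alt) * math.sin(lat))
              / (math.cos(alt) * math.cos(lat)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if hour_angle > 0.0:        # afternoon: the sun has passed south
        az = 2.0 * math.pi - az
    return math.degrees(alt), math.degrees(az)
```

At 40° north around the June solstice, this puts the sun close to 73° above the horizon at solar noon, due south, which matches expectations well enough for a sketch.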

In the UI you have to set up the position in terms of latitude and longitude and the time period for which the calculation should be performed. Currently the following sky models with different luminance distributions are implemented: CIE overcast sky, CIE standard clear sky, CIE standard general sky, Preetham, Perez.
The different luminance distributions of the CIE standard sky can be seen on the fourth image below.

Generally, the following sunlight/daylight simulation types are currently implemented: illuminance analysis, solar radiation, daylight factor and sun hours. To check their accuracy I’ve set up some test scenes and compared the results to the ones I got with Radiance. They are not completely identical but the average difference is under 1.5%, which is more than enough for my usage.
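At its core, a sun hours analysis boils down to counting, for every sample point, how many of the sun positions over the calculation period actually illuminate it. The sketch below shows this in a heavily simplified form (all names are just for illustration): a single sphere stands in for the occluding scene geometry, where the real setup would of course ray-cast against the full scene:

```python
import numpy as np

def sun_hours(points, normals, sun_dirs, sphere_c, sphere_r):
    """Count per sample point how many of the given (hourly) sun
    directions reach it. Occlusion is tested against a single sphere
    as a stand-in for the scene geometry."""
    points = np.asarray(points, float)
    normals = np.asarray(normals, float)
    hours = np.zeros(len(points), dtype=int)
    for d in sun_dirs:                        # one direction per time step
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        facing = normals @ d > 0.0            # surface turned towards the sun
        oc = points - np.asarray(sphere_c, float)
        b = oc @ d                            # ray/sphere intersection terms
        disc = b * b - (np.einsum('ij,ij->i', oc, oc) - sphere_r ** 2)
        sqrtd = np.sqrt(np.maximum(disc, 0.0))
        occluded = (disc > 0.0) & (-b + sqrtd > 1e-6)  # hit in front of point
        hours += facing & ~occluded
    return hours

# Two upward-facing sample points, one shadowed by a sphere directly overhead
pts = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
nrm = [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]
h = sun_hours(pts, nrm, [[0.0, 0.0, 1.0]], [0.0, 0.0, 5.0], 1.0)
```

The shadowed point collects zero hours, the unobstructed one collects one per sun direction tested.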

Below are some example images showing the results of a sun hour analysis over different calculation periods. The vector field in the last image shows the average luminance direction, which is quite useful for various things. For instance, to simulate different growth processes based on sunlight and direction.

Line integral convolution (LIC) is a texture synthesis method and a very nice way to visualize a vector field. Compared to streamline generation, for instance, it has the advantage that no seed points are needed and the direction of the field is usually easier to read. Furthermore, it can be used to visualize the vector field in the viewport but also for showing it in a render. LIC is a simple but flexible technique and basically works by blurring a noise texture in the direction of a vector field. This is done by using a convolution filter kernel which is oriented along the field, which is fairly easy to do in COPs. The resulting texture is then applied to the UVs of the mesh.
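For the curious, here is a minimal LIC implementation on a plain 2D grid. This is not how the COPs setup works internally, just the bare algorithm: trace a short streamline forward and backward from every pixel and average the noise values along it:

```python
import numpy as np

def lic(vx, vy, noise, length=10):
    """Minimal line integral convolution: average a noise texture along
    short streamlines of the field (vx, vy), traced in both directions
    with simple Euler steps and nearest-neighbour lookups."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    for sign in (1.0, -1.0):                  # forward and backward pass
        x, y = xs.copy(), ys.copy()
        for _ in range(length):
            i = np.clip(y.round().astype(int), 0, h - 1)
            j = np.clip(x.round().astype(int), 0, w - 1)
            out += noise[i, j]                # accumulate along the streamline
            u, v = vx[i, j], vy[i, j]
            n = np.hypot(u, v) + 1e-9         # normalize the step direction
            x += sign * u / n
            y += sign * v / n
    return out / (2 * length)

rng = np.random.default_rng(0)
noise = rng.random((64, 64))
# A constant horizontal field: the result shows streaks along the rows
tex = lic(np.ones((64, 64)), np.zeros((64, 64)), noise)
```

Since every output pixel averages roughly twenty noise samples along its streamline, the result is visibly smoother along the field direction while staying noisy across it, which is exactly what makes the direction readable.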

While such texture-based methods are most naturally used on two-dimensional vector fields and generally easy to generate, it becomes difficult when the field is defined on a three-dimensional surface. As long as the surface can be parameterized using two-dimensional coordinates (in other words, mapping the 3D space into a 2D parameter space), it works pretty much the same way. The LIC texture is generated based on the vector field in parameter space, and finally mapped back onto the original surface. The problem with this method is that the parameterization typically introduces distortion and the resulting LIC texture is therefore distorted too. To avoid this problem and to use LIC on any arbitrary three-dimensional surface, the texture is computed in the local space of each triangle. To do this, I’ve simply encoded the vector field and point positions as colors which I used in COPs for the convolution process. The streamline integration, or more precisely the direction evaluation, however, is done in SOPs on the original surface to ensure that no triangle boundaries are visible in the final texture.

The vector field used in the images below is the maximal principal curvature direction. You can find more about it here.

Vector fields on meshes are useful for numerous things and hence have many applications in computer graphics, such as texture synthesis, anisotropic shading, non-photorealistic rendering, hair grooming, surface parameterization, remeshing and many more. Sometimes they are quite easy to compute and sometimes it gets tricky. Depending on the task, we may have to work with an existing vector field, but most of the time we need to generate a new one, typically based on a small set of user-defined constraints. Either way, we usually want the vector field to be tangent and, more importantly, smoothly varying over the surface. Well, the tangent part is easy. One way to generate such a field is simply by computing the gradient of a scalar function, and the result is of course a tangent vector field. This is easy to do but the resulting field might not be what we want. If, for instance, we need to compute a vector field which should rotate around a point, it won’t be possible using the above-mentioned method. The reason is quite simple: since it is a gradient field, it is by definition always rotation-free. In other words, its curl is always zero. Of course, if we compute the orthogonal complement of each vector, in other words rotate it by 90 degrees, we would get rotations. But at the same time we would naturally end up with a divergence-free vector field, which again might not be what we want.
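This curl-free/divergence-free behaviour is easy to verify numerically. On a flat 2D grid (standing in here for the tangent plane of a surface), the discrete curl of a gradient field vanishes away from the boundary, and so does the divergence of the 90-degree-rotated field:

```python
import numpy as np

# A smooth scalar "height" function on a flat 2D grid
x, y = np.meshgrid(np.linspace(-1.0, 1.0, 64), np.linspace(-1.0, 1.0, 64))
f = np.exp(-(x ** 2 + y ** 2))

# Its gradient is a tangent field that is rotation-free by construction
gy, gx = np.gradient(f)                 # np.gradient returns (d/drow, d/dcol)

# Rotating every vector by 90 degrees yields a divergence-free field instead
rx, ry = -gy, gx

def curl_z(u, v):
    """z-component of the discrete curl of the 2D field (u, v)."""
    return np.gradient(v, axis=1) - np.gradient(u, axis=0)

def divergence(u, v):
    """Discrete divergence of the 2D field (u, v)."""
    return np.gradient(u, axis=1) + np.gradient(v, axis=0)

inner = (slice(2, -2), slice(2, -2))    # skip the one-sided boundary stencils
```

In the interior, `curl_z(gx, gy)` is zero up to floating-point noise (mixed partial derivatives commute), while `divergence(rx, ry)` vanishes for the rotated field, which is exactly the trade-off described above.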

Instead of utilizing the gradient, a much more flexible way to design vector fields is by prescribing user-defined constraints, which are then interpolated over the surface. These constraints are typically sinks, sources and vortices but could also be some edges on the mesh which guide the direction of the field. Well, at this point things become a bit tricky, but fortunately it is a very well researched topic and Google unveils the great wisdom hidden in many research papers. A paper I’ve particularly enjoyed is “Design of Tangent Vector Fields” by Fisher, Schröder, Desbrun and Hoppe. The described method is based on quadratic energy minimization and relies only on standard operators from discrete exterior calculus.

After reading the paper I did some prototyping in Python, and while it generally worked quite well, it was of course rather slow. It was probably around 2014 at the time and I wanted to port the algorithm to the HDK or at least use a sparse matrix library in Python, but something intervened and later I somewhat lost interest in it. Well, now it’s 2017 and this blog seemed to be a good opportunity to dig out the old file. It’s still slow (even on a fast computer) but if you’re not in a hurry it does a pretty good job.

When I saw Entagma’s great tutorial about minimal surfaces I remembered an old file I did to help a friend some years ago. He wanted to generate a smooth surface between some given boundaries, and it should behave similarly to an elastic textile. To simulate it he used the same method that had been invented about 40 years ago by Frei Otto: he built a physical model out of thin wire and dipped it into a soap solution. After taking it out of the fluid, the result was a perfectly smooth surface in the form of a soap film spanning between the boundaries.
After some experimentation and many models later, the question was finally how to digitize the surface. 3D scanning was obviously not possible. He tried to model it based on photos but that didn’t work out very well. Repeatedly subdividing a coarse mesh wasn’t successful either. As it turned out, the solution was quite simple: don’t try to digitize the surface, just digitize the whole process and skip the physical model.
Mathematically such a surface is a minimal surface or, more precisely, a surface of minimal area for given boundary conditions. Another important definition is that it has zero mean curvature at all points. What he had to do was minimize the overall area of the surface until the mesh reaches equilibrium. The resulting surface is a minimal surface, or at least a very good approximation of it. So, what’s the best method to compute such a surface?
Well, one way is of course to use a spring model, set the rest length to 0 and run the simulation. That’s basically what Entagma did in their tutorial. The problem with this method is that it’s rather slow and works best only on a fairly regular and clean mesh. On irregular meshes it’s highly likely that it won’t converge to a minimal surface.
A probably better method is Laplacian smoothing. Again, this works best on a regular mesh, but it’s readily adaptable by using proper weights. While both methods might be slow due to their iterative approach, it’s generally better to rely on Laplacian smoothing since it results in a better approximation of a minimal surface.
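A minimal sketch of this approach (using uniform weights instead of the proper cotangent weights, and a hypothetical function name) could look like this: every free vertex is repeatedly pulled towards the centroid of its neighbours while the boundary stays pinned:

```python
import numpy as np

def laplacian_smooth(points, neighbors, fixed, iterations=200, step=0.5):
    """Uniform-weight Laplacian smoothing: repeatedly move every free
    vertex towards the centroid of its neighbours while the vertices
    listed in `fixed` (the boundary) stay pinned."""
    P = np.array(points, dtype=float)
    for _ in range(iterations):
        Q = P.copy()
        for i, nbrs in enumerate(neighbors):
            if i in fixed or not nbrs:
                continue
            centroid = P[nbrs].mean(axis=0)
            Q[i] = P[i] + step * (centroid - P[i])
        P = Q
    return P

# Toy example: four pinned points at different heights and one free
# centre vertex, which should settle at the average of its neighbours
pts = [[1, 0, 0], [-1, 0, 0], [0, 1, 1], [0, -1, 1], [0, 0, 5]]
nbrs = [[], [], [], [], [0, 1, 2, 3]]
P = laplacian_smooth(pts, nbrs, fixed={0, 1, 2, 3})
```

Swapping the uniform weights for cotangent weights is what makes this robust on irregular meshes, but the iterative structure stays the same.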
Of course, there is a third method which again relies on the Laplacian but uses a slightly different approach to solve the same problem. It’s based on the paper “Computing Discrete Minimal Surfaces and Their Conjugates” by Pinkall and Polthier and works by minimizing the Dirichlet energy. This leads to an efficient numerical method to compute the desired minimal surface. The basic idea is simple: just move the vertices of the mesh in a specific direction for a specific distance and the result is a minimal surface.
To do this, we first need to find the direction in which the area changes the most. In other words, we need the gradient of the area. This is the mean curvature normal, which we get fairly easily by using the Laplace-Beltrami operator. The length of the mean curvature normal is the mean curvature and, as we already know, it should be 0 for minimal surfaces. With this information it’s easy to set up a linear system in which we set the curvature to 0 and simply solve the system of equations for the surface with harmonic coordinate functions. The solution gives us the new point positions for the minimal surface we were looking for, or at least a very good approximation of it.
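To give an idea of the structure of that linear system, the sketch below uses a uniform graph Laplacian instead of the proper cotangent Laplace-Beltrami operator, and a dense solver instead of a sparse one (the function name is made up for illustration). The shape is the same: L P = 0 for the free vertices, with the boundary vertices as Dirichlet constraints:

```python
import numpy as np

def harmonic_positions(points, neighbors, fixed):
    """Solve L P = 0 for the free vertices with Dirichlet boundary
    constraints. A uniform graph Laplacian and a dense solver stand in
    for the cotangent Laplace-Beltrami operator and a sparse solver."""
    n = len(points)
    P = np.array(points, dtype=float)
    L = np.zeros((n, n))
    rhs = np.zeros((n, 3))
    for i in range(n):
        if i in fixed:
            L[i, i] = 1.0
            rhs[i] = P[i]              # pin the constrained vertices
        else:
            L[i, i] = float(len(neighbors[i]))
            for j in neighbors[i]:
                L[i, j] = -1.0         # row sums to zero on free vertices
    return np.linalg.solve(L, rhs)     # one solve for all coordinate functions

# Toy setup: four pinned points and one free centre vertex, which lands
# exactly at the neighbour average in a single solve, no iterations needed
pts = [[1, 0, 0], [-1, 0, 0], [0, 1, 1], [0, -1, 1], [0, 0, 5]]
nbrs = [[], [], [], [], [0, 1, 2, 3]]
X = harmonic_positions(pts, nbrs, fixed={0, 1, 2, 3})
```

Replacing the dense `np.linalg.solve` with a sparse factorization is what makes the real thing fast enough for interactive use.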
When using a fast sparse matrix solver, this method is not just quite accurate but also fast. I’ve used Eigen as the linear algebra library in the HDK for this purpose and had no problems interactively modifying a moderately dense mesh in real time on my rather old laptop.

The same setup could be used for membrane and tension surfaces since they share many similarities with minimal surfaces, which is quite nice for playing around with the shape.

Curvature is a fundamental tool in computer graphics and important for many techniques such as remeshing, smoothing, simplification, NPR rendering and many more. Unfortunately, Houdini doesn’t provide something like a CurvatureSop out of the box. Of course, there is the MeasureSop, which can calculate some kind of curvature, but the result is far from optimal. It doesn’t differentiate between different types of curvature, such as Gaussian or mean curvature, nor does it give you the important information of whether it’s positive or negative. In other words, whether the shape is concave or convex.

Since I needed to calculate curvature quite often in the past, I decided to build my own CurvatureSop. Well, one might think that this should be quite easy, but in fact it isn’t. The problem is that in Houdini we are mostly working with meshes. Meshes are nice to work with for all sorts of things, but they are not that great if we need to compute derivatives. And computing derivatives, or more precisely the second derivative, is exactly what curvature is about. In the continuous case this is simple and straightforward to do, as it is on volumes (structured spatial grids). On an arbitrary mesh, however, we face the problem that the second derivative of a piecewise linear function is zero almost everywhere, at least if we are working with triangulations. The same is not necessarily true on quadrilateral meshes, but to keep it simple we’ll stick with triangles. On a triangular mesh, curvature exists at vertices or along edges and is therefore highly dependent on the quality of the mesh. One way to partially minimize this problem is to locally fit a polynomial to the points around the vertex and use the fitted surface to compute the curvature analytically. While curvature estimation using this method is quite accurate, it is also very sensitive to the distribution of points in the neighbourhood and thus still dependent on a regular triangulation. A probably more straightforward way to measure curvature is to apply the Laplace-Beltrami operator directly on the triangular mesh. This generally works quite robustly as long as we are working with non-obtuse triangles. To cut a long story short, there exist many different algorithms to compute curvature on meshes, but all have their pros and cons, very much depending on the mesh. To compensate for this, I finally implemented three different methods and usually use whichever gives me the best result for the mesh I’m working with.
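The polynomial fitting approach mentioned above is easy to sketch (the function name is made up here): express the neighbourhood in the local frame of the vertex, with the vertex at the origin and z along the normal, fit a quadric and read the curvatures off its second-order coefficients:

```python
import numpy as np

def fit_curvature(neigh_xy, neigh_z):
    """Least-squares fit of the quadric z = a x^2 + b x y + c y^2 to a
    vertex neighbourhood (vertex at the origin, z along the normal),
    then read mean and Gaussian curvature off the fitted coefficients."""
    x, y = neigh_xy[:, 0], neigh_xy[:, 1]
    A = np.column_stack([x * x, x * y, y * y])
    (a, b, c), *_ = np.linalg.lstsq(A, neigh_z, rcond=None)
    return a + c, 4.0 * a * c - b * b   # mean curvature H, Gaussian K

# Sanity check on a sphere of radius 2: expect H = 1/R and K = 1/R^2
R = 2.0
theta = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
pts = []
for r in (0.1, 0.2):                    # two rings around the "pole" vertex
    z = R - np.sqrt(R * R - r * r)      # sphere height relative to the pole
    pts.extend((r * np.cos(t), r * np.sin(t), z) for t in theta)
pts = np.array(pts)
H, K = fit_curvature(pts[:, :2], pts[:, 2])
```

On this nicely symmetric neighbourhood the fit recovers H ≈ 0.5 and K ≈ 0.25 as expected; the sensitivity to irregular point distributions shows up as soon as the rings are no longer evenly sampled.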
All three algorithms use different approaches and are based on the following papers:

Besides offering different algorithms, the CurvatureSop can compute the main curvature types, such as mean curvature, Gaussian curvature and principal curvatures, as well as principal curvature directions.