Just a small test of interactive path planning with obstacle avoidance. The minimum distance to obstacles and walls can easily be adjusted.

# Month: May 2017

# STEINER TREE

Well, it’s not really a Steiner minimal tree. It’s rather a very rough approximation and the result of a quick test I did using edge bundling. The general method I’ve used is described here.

# ISOVIST AND VISIBILITY GRAPH

Isovists and visibility graphs are quite interesting and have been used in various fields over the last 30 years. They have been used to analyse and understand space, have helped to predict how people move through museums, and were the conceptual basis for numerous path- and wayfinding applications. Broadly speaking, the isovist and the visibility graph capture much the same information in different ways. An isovist is basically a polygon describing the visible area or volume around a given point, whereas a visibility graph is, as the name implies, a graph connecting mutually visible points, stored internally as an adjacency matrix. Although the algorithms are different, the measures you get from one or the other are quite similar: max and min distance to obstacles, average distance, area, perimeter, various statistics based on distances, and so on. These measures can be used to derive properties of the analysed space, which is why the technique became important in architecture and urban planning, robot motion planning and computational geometry. But it’s also used in other, less obvious areas like sociology, spatial cognition and architectural psychology. For a more complete explanation check out the following links:

https://en.wikipedia.org/wiki/Isovist

https://en.wikipedia.org/wiki/Visibility_graph
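To make the adjacency-matrix idea concrete, here is a minimal 2D sketch (hypothetical toy code, not the HDK/BGL implementation used for the actual project): two points are connected in the graph if the straight segment between them crosses no obstacle segment.

```python
def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p, q, r, s):
    """True if segment pq properly crosses segment rs."""
    o1, o2 = orient(p, q, r), orient(p, q, s)
    o3, o4 = orient(r, s, p), orient(r, s, q)
    # proper crossings only; collinear/touching cases are ignored here
    return o1 != o2 and o3 != o4 and 0 not in (o1, o2, o3, o4)

def visibility_matrix(points, walls):
    """Adjacency matrix: A[i][j] == 1 iff point j is visible from point i."""
    n = len(points)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            blocked = any(segments_cross(points[i], points[j], w0, w1)
                          for w0, w1 in walls)
            A[i][j] = A[j][i] = int(not blocked)
    return A
```

A wall between two points removes their edge from the graph while leaving sightlines that pass above it intact.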

There are various specialized software packages which can compute such a visibility analysis, but out of interest I thought I’d implement my own version in Houdini. Most of the algorithms are written in C++, though it might be possible to do most of it in VEX nowadays. The main reason I used the HDK is simply that I needed BGL (Boost Graph Library) for some calculations on the visibility graph, and also performance. While VEX is fast and gives you multithreading for free, C++ is still faster in this case, mainly because I had to do a lot of calculations on large arrays.

The images below show the main difference between an isovist and a visibility graph in 2D. The isovist on the left is a polygon describing the visible area around a viewpoint, whereas the visibility graph on the right is basically an adjacency graph. If we do the calculation on more than just one point, the result is a field of various measures indicating different spatial properties.

The obvious way to compute the isovist is to rotate a vector around the viewpoint and test for intersections with the environment. It is easy to implement but leads to a number of problems. For instance, in the left image below the vector is rotated in one-degree steps. This is sufficient for nearby walls but results in poor precision at larger distances and acute angles. A much better approach is to directly sample the points of the surrounding geometry (image on the right side below). This works by checking each point’s visibility: if the point is visible, check for subsequent ray intersections and finally compute the isovist. This solution is much better not just precision-wise but also performance-wise. Instead of at least 360 rays I had to shoot just 56 rays into the scene.
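A stripped-down 2D version of the point-sampling approach might look like this (pure Python sketch under simplifying assumptions: a convex room and no extra offset rays cast just past each vertex, which a real implementation needs in order to see around corners):

```python
import math

def cross2(ax, ay, bx, by):
    return ax * by - ay * bx

def ray_hit(ox, oy, dx, dy, a, b):
    """Parameter t >= 0 where the ray o + t*d meets segment ab, or None."""
    ex, ey = b[0] - a[0], b[1] - a[1]
    denom = cross2(dx, dy, ex, ey)
    if abs(denom) < 1e-12:
        return None                      # ray parallel to the segment
    t = cross2(a[0] - ox, a[1] - oy, ex, ey) / denom
    s = cross2(a[0] - ox, a[1] - oy, dx, dy) / denom
    return t if t >= 0.0 and 0.0 <= s <= 1.0 else None

def isovist(view, segments):
    ox, oy = view
    hits = []
    # sample directions toward every segment endpoint instead of
    # sweeping a fixed angular step
    verts = {p for seg in segments for p in seg}
    for vx, vy in verts:
        dx, dy = vx - ox, vy - oy
        ts = [ray_hit(ox, oy, dx, dy, a, b) for a, b in segments]
        ts = [t for t in ts if t is not None]
        if ts:
            t = min(ts)                  # nearest intersection along this ray
            hits.append((ox + t * dx, oy + t * dy))
    hits.sort(key=lambda p: math.atan2(p[1] - oy, p[0] - ox))
    return hits

def area(poly):
    """Shoelace area of the isovist polygon."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] - poly[(i + 1) % n][0] * poly[i][1]
            for i in range(n))
    return abs(s) / 2.0
```

For a viewpoint in the middle of an empty square room, the four corner vertices are enough to reconstruct the whole room as the isovist.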

Most of the measurands derived from the isovists are scalars. By computing the gradient, however, it is easy to get a vector field. This is useful for many things, for instance to guide agents in a simulation.
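A quick sketch of that idea with NumPy, using a made-up scalar field (distance to the left wall) sampled on a regular grid:

```python
import numpy as np

# Hypothetical scalar measure on a regular grid, here simply the
# distance to the left wall of a room: f(x, y) = x.
h = 0.1
x = np.arange(0, 2, h)
y = np.arange(0, 1, h)
f = np.tile(x, (len(y), 1))       # shape (ny, nx)

# np.gradient returns derivatives along each axis (rows, then columns)
gy, gx = np.gradient(f, h)

# normalize to a unit direction field, e.g. to steer agents toward
# larger values of the measure (away from the wall)
mag = np.hypot(gx, gy)
mag[mag == 0] = 1.0               # avoid division by zero in flat regions
dirx, diry = gx / mag, gy / mag
```

For this linear field the gradient is constant, so every agent would simply be pushed straight along +x.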

Currently the following measurands are implemented:

Min. distance, max. distance, average distance, min. diameter, max. diameter, min. distance direction, max. distance direction, standard deviation distance, variance distance, skewness distance, dispersion distance, area, perimeter, closed perimeter, open perimeter, openness, min. convex core, compactness, convexity, circularity, centrality, jaggedness, roundness, clustering coefficient, revelation, connectivity. Some of these measurands are shown in the images below.
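A few of the simpler measurands can be computed directly from the isovist polygon. The sketch below is a simplified stand-in for the real thing (distances are measured only at the polygon vertices; an actual implementation samples much more densely):

```python
import math

def isovist_measures(view, poly):
    """A handful of the listed measurands for a 2D isovist polygon."""
    n = len(poly)
    # radial distances from the viewpoint to the polygon vertices only
    d = [math.dist(view, p) for p in poly]
    area = abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                   - poly[(i + 1) % n][0] * poly[i][1]
                   for i in range(n))) / 2.0
    perim = sum(math.dist(poly[i], poly[(i + 1) % n]) for i in range(n))
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / n
    return {
        "min_dist": min(d), "max_dist": max(d), "avg_dist": mean,
        "variance_dist": var, "area": area, "perimeter": perim,
        # 1.0 for a circle, smaller for less compact shapes
        "circularity": 4.0 * math.pi * area / perim ** 2,
    }
```

For a square isovist the circularity comes out as pi/4, noticeably below the circle's value of 1.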

For some related reading:

M. L. Benedikt: To take hold of space: isovists and isovist fields

A. Turner, M. Doxa, D. O’Sullivan, A. Penn: From isovists to visibility graphs: a methodology for the analysis of architectural space

M. Batty: Exploring isovist fields: space and shape in architectural and urban morphology

G. Franz, J. M. Wiener: Exploring isovist-based correlates of spatial behavior and experience

# QUADRILATERAL REMESHING

This is just an old and fairly rough implementation of an algorithm to convert triangular meshes into mostly quadrilateral meshes. The method is quite simple and works basically by computing principal curvature directions and subsequently trying to build quads based on those directions. It does a good job on smooth surfaces like the one in the images below but fails on meshes with hard edges. In other words, it’s far from being really usable …
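As a toy illustration of the first step, computing principal curvature directions: for a surface written locally as a height field over its tangent plane, the shape operator at the origin is just the Hessian of the height function, and its eigenvectors are the directions the quads would be aligned to. The quadratic patch below is a made-up example, not taken from the original implementation:

```python
import numpy as np

# Hypothetical local patch z = 0.5 * (3x^2 + 2xy + y^2) over the
# tangent plane. At the origin the shape operator equals the Hessian.
H = np.array([[3.0, 1.0],
              [1.0, 1.0]])

# eigenvalues  -> principal curvatures (ascending order)
# eigenvectors -> principal curvature directions (columns, orthonormal)
k, dirs = np.linalg.eigh(H)
```

On a discrete mesh the shape operator has to be estimated per vertex from the neighbouring geometry, which is where most of the robustness problems on hard edges come from.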

# GEODESICS ON MESHES

Measuring geodesic distances and computing paths on a mesh is widely used in computational geometry, with applications ranging from mesh parameterization and remeshing to skinning and mesh deformation. After a long conversation about this topic, and since I thought it could be useful at some point, I finally got my feet wet with different algorithms for computing such geodesics in Houdini.

The first and obvious choice was to compute the shortest path between the source point and all the other points on the geometry. This was easy and fast to do, but unfortunately it doesn’t compute geodesics. Since the underlying algorithm of the ShortestPath SOP (Dijkstra) is restricted to the edges of the mesh, it computes just a rough approximation of the real distances. Of course, the inaccuracy could be reduced by iteratively smoothing the paths, but the result would still be far from optimal and the method slow as well.
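A tiny example of why edge-bound distances are off, using a plain Dijkstra (hypothetical toy code): on a unit square split into two triangles, the true geodesic between the two corners not joined by the diagonal is sqrt(2), but the shortest walk along mesh edges has length 2.

```python
import heapq
import math

def dijkstra(adj, src):
    """Shortest edge-graph distances from src, as a ShortestPath SOP computes them."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# unit square split by the diagonal 0-2:
# vertices 0=(0,0), 1=(1,0), 2=(1,1), 3=(0,1)
s2 = math.sqrt(2)
adj = {
    0: [(1, 1.0), (2, s2), (3, 1.0)],
    1: [(0, 1.0), (2, 1.0)],
    2: [(0, s2), (1, 1.0), (3, 1.0)],
    3: [(0, 1.0), (2, 1.0)],
}
d = dijkstra(adj, 1)
# d[3] is 2.0, while the true geodesic across the flat square is sqrt(2)
```

Refining the mesh shrinks the error but never removes it, since the path is always forced onto the edge network.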

To circumvent this problem, I tried another method. Instead of measuring the length of the shortest paths directly it relied on the fact that a real geodesic path unfolds into a straight line. So, what I basically did was:

- Compute the shortest path between point A and point B
- Unfold the path onto a plane
- Measure the Euclidean distance between point A and point B on the unfolded path.
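For two triangles sharing an edge, the unfolding step can be done purely intrinsically, using only edge lengths (a sketch, not the original implementation):

```python
import math

def unfolded_distance(lE, a0, a1, b0, b1):
    """Geodesic distance across two triangles sharing an edge of length lE.

    a0, a1: distances from point A to the shared edge's endpoints E0, E1
    b0, b1: the same for point B on the other triangle. Both triangles
    are unfolded into the plane on opposite sides of the shared edge.
    """
    # lay the shared edge on the x-axis: E0 = (0, 0), E1 = (lE, 0)
    def place(d0, d1, side):
        x = (d0 * d0 - d1 * d1 + lE * lE) / (2.0 * lE)
        y = side * math.sqrt(max(d0 * d0 - x * x, 0.0))
        return x, y

    ax, ay = place(a0, a1, -1.0)   # A below the shared edge
    bx, by = place(b0, b1, +1.0)   # B above the shared edge
    return math.hypot(bx - ax, by - ay)
```

Because only lengths go in, it doesn’t matter how sharply the strip is folded in 3D: a square folded along its diagonal still yields the flat distance sqrt(2) between its two free corners. The result is only the true geodesic distance if the straight segment actually crosses the shared edge between its endpoints.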

This method worked quite nicely but it has one big drawback: it doesn’t work on meshes with holes. As soon as the line drawn between the start and end point of the unfolded path crosses a hole, the result is no longer the true geodesic distance.

The third algorithm I’ve used was the well-known fast marching front propagation method. You start at the single source point and compute the distance to the 1-ring of neighbouring points around it. By doing this iteratively and summing up the distances you’ll finally get the geodesic distance to all points in the mesh. This works because it is easy to calculate the distance to the third point of a triangle as soon as the other two distances are known.
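That local update can be sketched in 2D as follows (a simplified planar-front update assuming a well-shaped, non-obtuse triangle; real implementations such as Kimmel and Sethian’s handle more cases):

```python
import math

def fmm_update(A, B, C, dA, dB):
    """Planar-front update for the third vertex of a 2D triangle.

    Assumes a locally planar wavefront: d(x) = dA + g.(x - A) with
    |g| = 1 and g.(B - A) = dB - dA. Falls back to a Dijkstra-style
    update when no causal planar solution exists.
    """
    ex, ey = B[0] - A[0], B[1] - A[1]
    c = math.hypot(ex, ey)
    u = (dB - dA) / c
    fallback = min(dA + math.dist(A, C), dB + math.dist(B, C))
    if abs(u) >= 1.0:
        return fallback
    tx, ty = ex / c, ey / c            # unit vector along edge AB
    nx, ny = -ty, tx                   # unit normal of edge AB
    # flip the normal so it points from edge AB toward C
    if nx * (C[0] - A[0]) + ny * (C[1] - A[1]) < 0:
        nx, ny = -nx, -ny
    s = math.sqrt(1.0 - u * u)
    gx, gy = u * tx + s * nx, u * ty + s * ny
    dC = dA + gx * (C[0] - A[0]) + gy * (C[1] - A[1])
    # causality: the front must arrive at C no earlier than at A and B
    return dC if dC >= max(dA, dB) else fallback
```

For a front that really is planar the update is exact; for a point source on a coarse mesh it overestimates slightly, which is the discretization error mentioned above.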

For the first quick implementation I used Python, but it was way too slow to be usable. After replacing the slow list lookups in the code I got a 3× speedup, but it was still too slow, which was the reason for porting the algorithm to the HDK. After that, it was quite fast (around 270 times faster), but the general problem with this algorithm is that it doesn’t work with obtuse triangles. In other words, it needs a fairly clean and regularly triangulated mesh. If this is not the case, you’ll end up with errors in the distance calculation which lead to larger errors and so forth. To prevent this from happening I had to insert conditional statements for obtuse triangles as well as for all other special cases, and hence the code got much longer and somewhat cluttered. Usually, when fast marching front propagation is used on 2D and 3D grids, this is not a problem, but for arbitrarily triangulated meshes it is.

Finally I decided to implement the great paper “Geodesics in Heat” by Keenan Crane et al. The method described in the paper works basically as follows:

- Integrate the heat flow for a source point across the mesh
- Compute the normalized gradient of the heat
- Solve the Poisson equation to get the actual distance
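The three steps are easiest to see in a 1D toy version, where the “mesh” is just a polyline of N points with spacing h and the exact geodesic distance from point 0 is simply i*h. The real method uses the cotangent Laplacian and a mass matrix on the triangle mesh; the NumPy sketch below only illustrates the structure of the computation:

```python
import numpy as np

N, h = 50, 0.1
t = h * h                                # short diffusion time

# 1D Laplacian with Neumann boundaries (every row sums to zero)
L = (np.diag(np.full(N - 1, 1.0), -1) + np.diag(np.full(N - 1, 1.0), 1)
     + np.diag(np.concatenate(([-1.0], np.full(N - 2, -2.0), [-1.0])))) / (h * h)

# step 1: integrate heat flow from the source point (backward Euler)
delta = np.zeros(N)
delta[0] = 1.0
u = np.linalg.solve(np.eye(N) - t * L, delta)

# step 2: normalized negative gradient of the heat, one value per edge
g = np.diff(u) / h
X = -np.sign(g)

# step 3: solve the Poisson equation L*phi = div(X), pinning phi[0] = 0
div = np.zeros(N)
div[0] = X[0] / h
div[1:-1] = np.diff(X) / h
div[-1] = -X[-1] / h
A = L.copy()
A[0, :] = 0.0
A[0, 0] = 1.0
rhs = div.copy()
rhs[0] = 0.0
phi = np.linalg.solve(A, rhs)            # recovered geodesic distances
```

In this toy setup the recovered phi matches the exact distances i*h; on a triangle mesh the same three solves are done with sparse matrices.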

One of the great advantages of the heat method is its brilliant simplicity. Basically it relies only on solving sparse linear systems, which can be done quite efficiently by various linear algebra libraries (I’ve used Eigen for this). This makes it possible to compute the geodesic distances from more than one point in pretty much the same computation time, quite contrary to the fast marching method, for example.
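This is also why multiple sources come almost for free: the expensive factorization of the system matrix can be reused for every new right-hand side. A sketch with SciPy standing in for Eigen, using a made-up 1D Laplacian as the system matrix:

```python
import numpy as np
from scipy.sparse import csc_matrix, diags
from scipy.sparse.linalg import splu

# hypothetical sparse system matrix (1D Laplacian-like, SPD)
n = 1000
A = csc_matrix(diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)))

lu = splu(A)                     # factorize once (the expensive part)
for src in (0, 250, 500):        # many sources, same factorization
    b = np.zeros(n)
    b[src] = 1.0
    x = lu.solve(b)              # each solve is just cheap back-substitution
```

With fast marching, by contrast, every additional source requires rerunning the whole front propagation.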

For some related reading:

K. Crane, C. Weischedel, M. Wardetzky: Geodesics in Heat

M. Novotni, R. Klein: Computing Geodesic Distances on Triangular Meshes

V. Surazhsky, T. Surazhsky, D. Kirsanov, S. J. Gortler, H. Hoppe: Fast Exact and Approximate Geodesics on Meshes

D. Bommes, L. Kobbelt: Accurate Computation of Geodesic Distance Fields for Polygonal Curves on Triangle Meshes

# EDGE BUNDLING

I have always been a big fan of Frei Otto. His work has been revolutionary on many levels, but the most fascinating area for me is his early experiments. One of these experiments is the famous “optimized path system”, where he used wet wool threads to form a network of minimal pathways. This technique works in 2D and in 3D, and due to its self-organizing characteristics it was and still is heavily used for form-finding experiments in many different areas. Quite interesting in this regard, for obvious reasons, is edge bundling, which is used mainly in scientific visualization. Conceptually it is very similar to Otto’s wool thread experiments and is used to avoid visual clutter by bundling together edges in large graphs. Out of interest, and because I love visually appealing diagrams, I tried to implement something like this in Houdini. It’s a well-researched area and you can find various solutions to the same problem, but after some digging I found the two papers which I finally implemented in Houdini:

C. Hurter, O. Ersoy, A. Telea: Graph Bundling by Kernel Density Estimation

D. Holten, J. van Wijk: Force-Directed Edge Bundling for Graph Visualization

Both algorithms worked quite nicely, and even though they are conceptually very different I got more or less similar results. Personally, though, I prefer kernel-density-based bundling since I like the brilliant simplicity of the idea, and overall it seemed to produce better results, at least in my tests (which doesn’t mean much, tbh). Anyway, in the end I implemented a third, homegrown version which is somewhat a combination of the other two. Basically it is a spring-based model which is guided by a density function. The advantage of this setup is that it is quite flexible and art-directable.
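A minimal density-guided bundling sketch in the same spirit (hypothetical NumPy code, not the actual Houdini setup, and all parameters are made up): each edge is resampled into a polyline whose interior points are advected along the gradient of a Gaussian kernel density of all sample points, then lightly smoothed.

```python
import numpy as np

def bundle(edges, n_samples=16, iters=30, sigma=0.3, step=0.02):
    """Density-guided edge bundling sketch.

    edges: float array (E, 2, 2) of 2D start/end points. Endpoints stay
    fixed; interior polyline points drift toward regions of high point
    density, which pulls nearby edges together into bundles.
    """
    t = np.linspace(0.0, 1.0, n_samples)[None, :, None]
    P = edges[:, 0:1, :] * (1.0 - t) + edges[:, 1:2, :] * t    # (E, S, 2)
    for _ in range(iters):
        pts = P.reshape(-1, 2)
        # gradient (up to scale) of a Gaussian kernel density estimate
        diff = pts[None, :, :] - pts[:, None, :]               # vectors i -> j
        w = np.exp(-(diff ** 2).sum(-1) / (2.0 * sigma ** 2))
        grad = (w[:, :, None] * diff).sum(axis=1)
        grad /= np.maximum(np.linalg.norm(grad, axis=-1, keepdims=True), 1e-12)
        G = grad.reshape(P.shape)
        P[:, 1:-1] += step * G[:, 1:-1]                        # advect interior
        # light Laplacian smoothing keeps the polylines coherent
        P[:, 1:-1] = 0.5 * P[:, 1:-1] + 0.25 * (P[:, :-2] + P[:, 2:])
    return P
```

Two parallel edges attract each other and meet in the middle, which is exactly the wool-thread behaviour, just driven by a density field instead of wet wool.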

After rendering with Mantra and adding some grain, the bundled edges took on a somewhat hand-drawn character.