Brute-Force Voxelization

In game dev, nailing performance is like painting a masterpiece – you tweak and refine, but eventually, you have to step back. It’s never quite ‘done,’ merely released. Contrast that with the ‘10% Rule’ of my university professor: if it won’t net you a double-digit boost, why bother? Just run it on a bigger cluster. To most game programmers, this sounds like blasphemy. But even in game development, there are instances where this brute-force approach actually hits the mark. In my case, it was voxelization.

Reliable Tool

To give a bit of background, my professor was working in the field of computational fluid dynamics, so he was more concerned with the accuracy of the physical model than with performance. In the end, it doesn’t really matter whether the calculations took 10 or 11 hours if the results are wrong. The 10% rule was not encouraging sloppy code; it was about prioritizing results.

Right now, I would benefit from a similar result-focused approach. In my destruction tech demo, there are multiple instances where I need to determine how much two irregular shapes overlap.

The overlap data will then be used by the physics engine to determine the strength of the bond between two objects. There will be thousands of such pairs, so manual setup is impractical. These calculations will be done in the editor as part of a destruction authoring tool, and performed on user request, not every frame. This imposes a set of requirements:

  • Ease of implementation: The aim of the tool is to reduce the amount of work, not trade time spent on manual setup for time spent coding.
  • Robustness: It should handle all edge cases without requiring manual correction.
  • Performance: It may take several minutes to complete. I’m not planning to use it in real time; it will run solely in the editor.
  • Accuracy: Deviations of up to 30% in overlap volume are acceptable.

In essence, the algorithm should mimic human judgment in determining if two meshes overlap sufficiently. While not requiring absolute accuracy, it must avoid false positives – visibly separated meshes should not be classified as overlapping.


I didn’t want to spend days researching the subject, so I only considered the three most obvious approaches:

  • AABB overlap
  • Boolean operation on meshes
  • Boolean operations on voxelized meshes

Axis-Aligned Bounding Boxes Overlap

Using Axis-Aligned Bounding Boxes (AABBs) to determine overlap volume is straightforward: it involves finding the AABB for each mesh and calculating the overlap volume. However, this approach can lead to false positives in certain scenarios.
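To make the idea concrete, here is a minimal Python sketch of the AABB overlap calculation (the project’s own code is C#; the function name and box representation here are mine): the overlap extent is computed per axis and the extents are multiplied together.

```python
def aabb_overlap_volume(min_a, max_a, min_b, max_b):
    """Overlap volume of two axis-aligned boxes given as (x, y, z) corners."""
    volume = 1.0
    for axis in range(3):
        lo = max(min_a[axis], min_b[axis])
        hi = min(max_a[axis], max_b[axis])
        if hi <= lo:
            return 0.0  # separated along this axis, so no overlap at all
        volume *= hi - lo
    return volume
```

The false positives come from the boxes themselves: two concave meshes can have heavily overlapping AABBs while the meshes never touch.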

Boolean Operation on Meshes

Boolean operations on meshes are simple in theory. Find the overlapping part and calculate its volume. However, in practice, these operations are known for their unreliability due to issues such as geometric complexity, numerical precision errors, and topological problems.

Boolean Operation on Voxelized Objects

Boolean operations on voxelized objects – voxelize both meshes using the same grid, then use a boolean AND to get the common part. This is easy to implement and robust, provided that the voxelization was performed correctly. Moreover, with sufficiently high resolution, it can yield good accuracy.


While boolean operations on voxelized meshes are reliable, the accuracy of the results hinges on the quality of voxelization. In this particular case, the voxelization is binary – for each voxel, or volume pixel, a boolean value is stored: true if its center is inside the mesh, false if outside:

Both objects are voxelized using the same grid. If a voxel’s value is true for both objects, then the objects overlap in that voxel:
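The AND step itself is trivial; here is a Python sketch (an illustration, not the actual tool code), with each binary voxelization represented as a set of occupied grid coordinates:

```python
def overlap_volume(voxels_a, voxels_b, voxel_size):
    """Volume of the boolean AND of two binary voxelizations on the same grid.

    voxels_a, voxels_b: sets of (i, j, k) grid coordinates marked 'inside'.
    """
    shared = voxels_a & voxels_b          # boolean AND of the two grids
    return len(shared) * voxel_size ** 3  # each shared voxel contributes size^3
```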

One crucial aspect yet to be clarified is the definition of “inside” and “outside.”

Inside vs Outside

While it may seem immediately apparent whether a point resides inside a shape, translating this concept into an algorithm can be tricky. The ‘even–odd rule’ is helpful here, provided that the shape is closed. This rule determines whether a point lies inside a closed shape by counting the number of times a ray from the point intersects with the shape’s outline; if the count is odd, the point is inside, and if it’s even, the point is outside.

If the total number of intersections is odd – 1, 3, 5, 7… the point is inside:

If the number is even – 0, 2, 4, 6… the point is lying outside:

Real-life situations are rarely as straightforward as textbook examples. Even when most meshes are designed to be watertight, there are often irregularities like odd shapes, gaps, or overlaps. These irregularities can cause thin, elongated lines to stick out from the voxelized object:

3D Voxelization

The ‘Utah Teapot’ will be the test case, as it is a great example of a shape that looks solid from the outside but is tricky to voxelize properly:

Voxelization is an ideal candidate for parallel computing – tens of thousands of simple operations on the same set of data. I will use the GPU to speed up the calculations.

First, I have to convert the mesh into a form that is easy to process. Typically, meshes are stored as lists of vertex positions followed by sets of indices forming triangles. To simplify the code, I’ll embed each vertex’s position directly within its corresponding triangle. All mesh triangles are processed in this manner and then stored in a buffer:

struct Triangle
{
    float3 vertexA;
    float3 vertexB;
    float3 vertexC;
};

StructuredBuffer<Triangle> TriangleBuffer;

With the mesh prepared, the next step is to develop a version of the ‘even-odd rule’ algorithm that operates with an array of triangles. This algorithm will iterate through all mesh triangles for each voxel, checking for intersections:

bool IsInside(float3 origin, float3 direction)
{
    int intersections = 0;
    for (int i = 0; i < TriangleCount; i++)
    {
        if (IntersectTriangle(origin, direction, TriangleBuffer[i]))
            intersections++;
    }
    return intersections % 2 != 0;
}

For each voxel, a ray is cast from its origin in the direction specified by the user, which remains consistent for all voxels. Ray-triangle intersection checks are carried out using the Möller–Trumbore method:

bool IntersectTriangle(float3 origin, float3 direction, Triangle tri)
{
    float epsilon = 0.000001f;
    float3 edgeAB = tri.vertexB - tri.vertexA;
    float3 edgeAC = tri.vertexC - tri.vertexA;
    float3 pVector = cross(direction, edgeAC);
    float determinant = dot(edgeAB, pVector);
    // Ray is parallel to the triangle plane
    if (determinant > -epsilon && determinant < epsilon)
        return false;
    float inverseDeterminant = 1.0f / determinant;
    float3 tVector = origin - tri.vertexA;
    float u = dot(tVector, pVector) * inverseDeterminant;
    if (u < 0.0f || u > 1.0f)
        return false;
    float3 qVector = cross(tVector, edgeAB);
    float v = dot(direction, qVector) * inverseDeterminant;
    if (v < 0.0f || v + u > 1.0f)
        return false;
    float t = dot(edgeAC, qVector) * inverseDeterminant;
    if (t > epsilon)
        return true;
    return false;
}

Voxelization Failures

As expected, the results were not without their problems:

I tested several ray directions, and each voxelization exhibited numerous artifacts. To compound the issue, the protruding deformations can intersect with nearby objects. This could potentially result in false-positive overlap checks, precisely the situation I’m trying to prevent.

A quick inspection of the teapot reveals the root of the problem. The mesh is plagued with topological issues such as holes, gaps, and overlaps.

Brute Force Voxelization

When designing the Space Shuttle Orbiter, NASA incorporated five General Purpose Computers (GPC) into its avionics. The idea behind using multiple computers in the Space Shuttle’s flight control system was to ensure redundancy and reliability. By employing several identical computers running in parallel, any failing hardware could be isolated without disruption, as the majority-voting system allowed the operational computers to override the malfunctioning one. The system provided fail-operational, fail-safe capability of the spacecraft’s avionics.

I plan to apply a similar logic to reduce errors in the voxelization algorithm. Each voxel will undergo voting across different directions to decide if it’s inside or outside. Using a ‘majority voting system’, where each direction’s vote counts, should give us a mostly accurate result, with fewer errors.

While using the space shuttle analogy to explain multisampling may seem like a bit of a stretch, it does a great job of illustrating the concept. With such a setup, I should be able to improve the accuracy by increasing the number of raycasting directions.
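The voting logic itself reduces to a few lines. A Python sketch (the names are mine; `is_inside_single` stands for the single-ray even–odd test described earlier):

```python
def is_inside_voted(point, directions, is_inside_single):
    """Majority vote of per-direction even-odd tests for one voxel center.

    is_inside_single(point, direction) is assumed to be the single-ray
    even-odd test; the voxel is 'inside' only if most rays agree.
    """
    votes = sum(1 for d in directions if is_inside_single(point, d))
    return votes * 2 > len(directions)  # strict majority says 'inside'
```

With an odd number of directions there is never a tie, which is one reason to pick counts like 5 or 7.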

Voxelization Directions

Currently, the raycasting directions have been defined by the user. However, as I plan to use an arbitrary number of directions, I’ll need an algorithm to generate them automatically. While this is straightforward in a 2D scenario – simply dividing 360° by the number of directions – it’s less obvious in 3 dimensions.

Luckily, evenly distributing points on a sphere has been tackled before and is known as the ‘Thomson problem’. It describes how electrons would arrange themselves when placed on a unit sphere.

Electrons, being negatively charged, repel each other, leading to an increase in distance from neighboring particles until they reach a point where they can no longer move further. At this stage, we can assume that they are evenly spaced.

Thomson problem iterations

The same procedure can be applied to distribute directions evenly. Here, the end of each direction vector can be likened to an electron that repels other directions. The function takes an array of directions and executes one step of distribution, returning the degree of change in the arrangement. The first direction in an array is treated as stationary.

public static float Distribute(Vector3[] directions, float stepSize, float strength)
{
    if (directions.Length <= 1)
        return 0;
    float maxDelta = float.MinValue;
    for (int i = 1; i < directions.Length; i++)
    {
        Vector3 correction = Vector3.zero;
        for (int j = 0; j < directions.Length; j++)
        {
            if (i != j)
            {
                Vector3 difference = directions[i] - directions[j];
                Vector3 direction = difference.normalized;
                float distance = difference.magnitude;
                float falloff = Mathf.Pow(Mathf.Clamp01((2 - distance) / 2), 2);
                float magnitude = falloff * strength;
                magnitude = Mathf.Min(magnitude, stepSize);
                correction += direction * magnitude / directions.Length;
            }
        }
        Vector3 oldDirection = directions[i];
        directions[i] += correction;
        directions[i] = Vector3.Normalize(directions[i]);
        maxDelta = Mathf.Max(maxDelta, Vector3.Distance(oldDirection, directions[i]));
    }
    return maxDelta;
}

With this function we can start with any random arrangement of directions and iterate the Distribute step until the directions are evenly spaced.

List<Vector3> GenerateDistributed(int count)
{
    // Generate random directions, starting with one fixed direction
    Vector3[] directions = GenerateRandom(count, Vector3.up);

    // Distribute the directions until convergence or 1000 iterations
    int iteration = 0;
    float maxDelta = float.MaxValue;
    while (iteration < 1000 && maxDelta >= 0.001f)
    {
        maxDelta = Distribute(directions, 0.01f, 100000);
        iteration++;
    }
    return new List<Vector3>(directions);
}

Here, the algorithm is presented one step per frame, making it easier to follow along with the process:

It works well for any number of directions, although the convergence may be slow in some cases.


Although sampling multiple directions is a crude approach to solving the problem, it is undoubtedly effective. Increasing the number of samples can largely mitigate voxelization errors. Remarkably, satisfactory results for the teapot test case can be achieved with just 5 directions.

Saved Time

At every step of the process, I opted for the path of least resistance – from checking every voxel in the domain to pushing away the directions. I chose the simplest and probably most computationally intensive algorithms available.

Making the connection graph for the stone wall asset takes about a second. Although it’s not perfect, this step is only done once per asset. With tens of assets expected, a faster algorithm could potentially save a few minutes at best. However, developing such an algorithm would likely require about a week. Not exactly a good trade.


In cases when you are not trying to fit within 16.6 milliseconds, but within the deadline, simplicity often trumps sophistication. Completing the job “the right way” isn’t always more valuable than completing it on time.

I could have invested weeks or even months in extensive research and meticulous code polishing, only to potentially end up with a system that nobody would appreciate or use. Instead, I opted for shortcuts and hacks, and surprisingly, they did the job. The voxelization system is functional, was quick to implement, and now I have the time left to spend on other problems.


Realistic is not necessarily the most convincing. Audio designers know that well and use frozen leeks and watermelons to create the sounds of breaking bones and tearing flesh. Chalk shot from a slingshot was a safer alternative to actual firearms in old westerns. Fake doesn’t have to mean worse, especially when it is hard to tell the difference. With that in mind, I will try to create complex flowfields without using computational fluid dynamics.

New, Better Jupiter

Jupiter is undeniably beautiful, but it’s safest to admire it from a distance. The characteristic patterns observed on its surface are, in fact, massive storms and Earth-sized cyclones. Jovian winds can reach speeds of hundreds of kilometers per hour, although they may appear tranquil from orbit.

I plan to feature a Jupiter-like planet as part of the skybox. While not a unique concept and often seen in the sci-fi genre, I want to take it a step further by animating it. In reality, the motion of the atmosphere is too slow to be noticed. However, I will speed it up substantially to make the movement obvious. This should capture players’ attention and remind them that the action is taking place on a distant, alien world.


While I don’t want to create a carbon copy of Jupiter, there is a set of features that immediately comes to mind when thinking about a gas planet. Hopefully, by recreating these, I will get visuals reminiscent of Jupiter.

There are three distinct flow patterns I want to replicate:

  • Cyclones: A couple of large, easily identifiable vortices, much like Jupiter’s ‘Great Red Spot.’ Typically elongated, they are often accompanied by a ‘wake’ – a trail of secondary vortices.
  • Jets: These are linear currents that run parallel to the equator, with easily visible turbulent transition layers.
  • Storms: Smaller, more volatile, and less defined flow structures that contribute to the texture of the atmosphere.

Animating Fluids

Recreating water or any other sort of fluid in games is challenging. Computational fluid dynamics is demanding in terms of memory and processing power, but that hasn’t prevented game developers from including water in their games. With a set of clever hacks and workarounds, there is no need for expensive simulation.

Color Cycling

Color cycling is a technique with a long history, dating back to systems like the NES. It’s akin to painting by numbers, where colors representing different numbers change each frame. Despite its simplicity, when applied to well-prepared input sprites, it can produce eye-catching effects.

Frame-by-Frame Animation

For a while, the frame-by-frame approach was the standard solution. Each frame was stored as a separate sprite, which demanded a lot of memory. As a result, this method was typically reserved for short animation loops consisting of only a few frames.

The limited number of animation frames leads to the ‘choppy’ motion typical of 90s shooters.

Texture Scrolling

As soon as games moved into fully textured 3D environments, texture scrolling became a viable option for creating rivers. This method involves incrementing one of the UV components over time to create the illusion of moving texture. When combined with appropriate geometry and shaders, texture scrolling can yield impressive results and remains a popular choice.

While versatile, this technique is limited to laminar flow – it is non-trivial to add vortices or any other complex flow patterns.

Quake UV Distortion

Id Software, the developer behind Quake, is renowned for its innovations. The water distortion they created is rarely listed among them, but is still worth looking into. This simple formula lends Quake’s lava, water, and portals their distinctive appearance.

Although it’s straightforward to replicate using shaders, the original effect was made without them, relying instead on software rendering.

Unreal WaterPaint

WaterPaint is one of the most intriguing methods for simulating fluids in games, yet it’s also one of the most challenging to understand. It appears to generate the surface of the liquid and subsequently uses this information to distort a texture. The system’s complexity borders on overengineering, particularly given that the resulting effect is often overlooked in a fast-paced shooter.

Similarly to Quake, this technique predates shaders. Texture pixels are manipulated by the CPU and then sampled just like a normal texture.

The methods mentioned are decades old and, on their own, may not hold up well today. However, they still have value as components within larger, more complex systems.

Velocity Texture

Let’s create a “universal” flow shader capable of representing any motion of the fluid. Intuitively, we’ll need two textures: one representing the color of the flowing substance, and another one storing the velocity field. This velocity field texture will be a 2-component 2D texture, with the red component representing the x component of the normalized velocity and the green component representing the y component.

Mapping vectors to color
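Since color channels store values in [0, 1] while the normalized velocity components live in [-1, 1], a remapping is needed in both directions. A Python sketch of that mapping (the function names are mine):

```python
def encode_velocity(vx, vy):
    """Pack a normalized 2D velocity (components in [-1, 1]) into RG channels."""
    return (vx * 0.5 + 0.5, vy * 0.5 + 0.5)

def decode_velocity(r, g):
    """Recover the velocity stored in a pixel's red and green channels."""
    return (r * 2.0 - 1.0, g * 2.0 - 1.0)
```

Zero velocity thus maps to the (0.5, 0.5) gray-green that dominates a typical velocity texture.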

The velocity field is then used to incrementally modify the texture coordinates of the color texture, in the same manner as in the ‘Texture Scrolling’ approach described previously. However, in this case, the velocity values vary for each pixel.

The result, while interesting, is rather disappointing as it doesn’t accurately simulate fluid motion. Instead, it is an animated distortion that gradually bends the color texture over time.

The formula worked for simple scrolling because in that case the motion was linear and constant. At each point, the velocity was the same. However, here the velocity is more complex – it is defined per pixel. The correct approach would be to use integration.

Euler Method

Let’s simplify the problem. Imagine we have a tiny speck of dust sitting on the surface of water that’s moving. We describe the water’s movement using a texture that shows its velocity. Now, we want to figure out the path this speck of dust would take.

The first solution that comes to mind is to take a small step forward, check and update the velocity, then take another step using the updated velocity, and repeat this process. This is called Explicit Euler Method:

The Explicit or Forward Euler method is often seen as the most basic and least accurate numerical integration method. The larger the integration step, denoted as “h,” the greater the error, and these errors accumulate over time. Even in the example shown, the integrated path represented by the orange arrows deviates significantly from the particle’s true path, depicted by the gray line. Fortunately this inaccuracy won’t be noticeable in the animation.
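The method itself fits in a few lines. A Python sketch of tracing the speck of dust (names are mine; `velocity_at` stands for sampling the velocity texture):

```python
def euler_path(start, velocity_at, h, steps):
    """Trace a particle through a velocity field with the explicit Euler method.

    velocity_at(x, y) samples the field; h is the integration step size.
    Returns the list of visited positions, starting with `start`.
    """
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        vx, vy = velocity_at(x, y)      # sample velocity at the current position
        x, y = x + vx * h, y + vy * h   # take one step along it
        path.append((x, y))
    return path
```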

The problem is, it’s not easy to translate the Euler method directly into a shader. We need to keep track of the particle’s position after each step, and this is something a shader alone cannot do. The position, in the form of a deformed color texture, has to be stored in a texture.

The shader reads the texture storing the color of the fluid and deforms it slightly based on the velocity texture. The resulting deformation is then written back into the color texture. This operation is performed every frame of the animation.
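As a CPU-side illustration of that per-frame update (the real thing is a fragment shader writing to a Render Target; grids of lists stand in for textures here, and nearest-neighbor sampling replaces bilinear filtering), each pixel pulls its color from slightly upstream of its position:

```python
def advect(color, velocity, h):
    """One feedback-loop step: every pixel samples color slightly upstream.

    color: 2D grid of values; velocity: matching grid of (vx, vy) in
    cells per frame; h: step size. Reads the previous frame and writes a
    new grid, mimicking the separate read/write targets a shader needs.
    """
    height, width = len(color), len(color[0])
    result = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            vx, vy = velocity[y][x]
            # Nearest-neighbor sample at the upstream position, clamped to edges
            sx = min(max(int(round(x - vx * h)), 0), width - 1)
            sy = min(max(int(round(y - vy * h)), 0), height - 1)
            result[y][x] = color[sy][sx]
    return result
```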

To complicate the problem further, it’s generally not possible to read from and write to the same texture in a fragment shader. Unreal Engine solves this problem with Canvas.

Canvas Node Setup

Canvas enables the use of a flow shader (Material) to be drawn into a Render Target texture. What sets Canvas apart is its capability to use the same Render Target in both the input and output of the flow shader, forming a feedback loop. To make this process work, several components are necessary:

  • Render Target Texture: This image stores the state of the flow, or the color of the fluid in our case. It must be created before the animation begins and initial color has to be set.
  • Rendering Event: The process of updating the animation has to be performed every frame, or at least every frame the animated object is visible.
  • Flow Material Instance: An instance of the Flow Material is necessary, and it has to be supplied with its own Render Target Texture.

Once all these elements are in place, the Rendering Event, which corresponds to one step of fluid simulation, can be achieved using just 3 nodes:

  • Begin Draw Canvas to Render Target
  • Draw Material
  • End Draw

With everything set up, the resulting animation should look like this:

The initial image is shifted in a more fluid manner, with each pixel moving more continuously. However, it still falls short of resembling the motion of a liquid.

Improving Velocity Field

Up until now, we’ve relied on basic smoothed 2D vector noise. While sufficient for testing basic functionality, it is not enough to realistically represent fluid flow. Liquids tend to swirl around, forming vortices and other complex patterns, which simple noise cannot approximate effectively.

Fortunately, a mathematical operator, the curl, can be particularly useful here. By applying it to scalar noise, we transform it into a velocity field full of vortices.

To describe curl in the simplest way possible – in the 2D case, curl will create a clockwise flow around areas brighter than their surroundings, and a counterclockwise flow around darker areas. I describe curl in more detail in the Dissecting Curl Noise article.
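In 2D, the curl of a scalar field is simply its gradient rotated by 90 degrees, which is why the flow circles around bright and dark spots instead of climbing them. A Python sketch using central differences (the function name is mine; `field` stands for sampling the grayscale noise):

```python
def curl(field, x, y, eps=1.0):
    """2D curl of a scalar field, approximated with central differences.

    field(x, y) returns brightness; the result is the gradient rotated
    by 90 degrees, i.e. a velocity that flows along brightness contours.
    """
    dn_dx = (field(x + eps, y) - field(x - eps, y)) / (2 * eps)
    dn_dy = (field(x, y + eps) - field(x, y - eps)) / (2 * eps)
    return (dn_dy, -dn_dx)  # perpendicular to the gradient (dn_dx, dn_dy)
```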

There are multiple ways to calculate curl. DDX and DDY operators are useful in Shadertoy, where the input is not a static texture but a procedural noise. For more traditional applications like Unreal and Unity, it’s probably better to generate it using image generation software like Substance Designer or Photoshop. Any software capable of generating a normal map from a grayscale image will be helpful here, as converting a normal map to curl is simply a matter of swizzling and inverting channels.

The addition of swirly motion adds a fluid-like quality to the animation, although it still appears somewhat static.

The stationary vortices are the reason behind the artificial appearance. This can be addressed by distorting the velocity field using the same function that manipulated lava in Quake.

The flow still lacks the complexity needed to resemble natural motion, but this can be remedied with a more detailed velocity field.


When the algorithm runs for too long, another problem becomes apparent: mixing. While it’s a desired feature, after a while, it turns the flow colors into a solid mass, devoid of any visual interest.

To remedy that, color can be reintroduced by sampling the initial color texture and blending it with the Render Target texture. This process involves using a point grid mask to mimic pigment dissolving in the fluid.

That solves the issue of mixing, but the pattern of points remains too noticeable. By using a noise texture and applying Quake distortion to it, the effect becomes less conspicuous and more natural.

Jupiter’s appearance is attributed to its water-ammonia clouds, which have a range of compositions and colors. These clouds undergo atmospheric circulation, occasionally pushing layers from below to the surface. This phenomenon results in changes in surface colors and structure over time.

I’ll artificially limit cloud compositions to three and assign a texture channel to each. Then, to simulate shifts in composition, I’ll utilize color cycling. In shader terms, the initial color, before being sampled and mixed with the render target, will undergo slight modifications over time. The result may look psychedelic, but ultimately, it will be replaced by a more natural set of colors. Right now, those rainbow patterns serve as a useful placeholder.

The left side displays the initial color, while the right side shows the flow.


Another side effect of mixing is the blurring of the texture. As the image becomes progressively smoother, the details are lost, causing the texture to appear low-resolution, which is certainly an undesirable outcome.

The obvious solution is to use sharpening – an operation opposite to blurring. In its simplest form, it samples five pixels – the original one and its four neighbors – and returns their weighted sum. The layout of the pixels with their respective weights is called a kernel.

I will use a slightly different formula, one that isolates the ‘delta’ or change in color. This delta is then multiplied by the strength of sharpening. This approach gives me more control over the effect.

The material graph representation might seem daunting at first glance, but it’s essentially the result of repeating the same sampling operation multiple times with different parameters.

Sharpen is a separate Canvas rendering operation that follows the animation step.

Sharpening enhances the details but also introduces stripe artifacts.

This occurs because it is part of the feedback loop. It amplifies the difference between pixels, and the next sharpening step further magnifies the difference. This continues until the color values reach their maximum or minimum value.

The solution to that problem is far from elegant but very effective – clamping the calculated difference. This way, the difference doesn’t increase exponentially and the artifacts have no chance to form.
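Putting the delta formula and the clamp together for a single pixel, in Python (a sketch of the idea, not the material graph; a plus-shaped five-pixel kernel is assumed):

```python
def sharpen(image, x, y, strength, limit):
    """Delta-based sharpening with clamping, safe inside a feedback loop.

    The difference between a pixel and the average of its four neighbors
    is scaled by `strength` and clamped to +/- `limit`, so repeated
    application cannot amplify it into stripe artifacts.
    """
    center = image[y][x]
    neighbors = (image[y - 1][x] + image[y + 1][x] +
                 image[y][x - 1] + image[y][x + 1]) / 4.0
    delta = (center - neighbors) * strength
    delta = max(-limit, min(limit, delta))  # clamp to stop runaway feedback
    return center + delta
```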

Sharpening with clamping:

With that in place, we can consider the whole system complete – we have a set of tools that allow for recreating a wide array of flow types in real-time. Now, to make further improvements, we need to enhance the input data – specifically, the velocity field.

Flow Patterns

There are 3 flow patterns that can be easily identified on Jupiter:

  • Cyclones
  • Jets
  • Storms

This list is by no means exhaustive. While there are numerous smaller and less noticeable flow details, many of them can be replicated using the same techniques employed for the main three patterns.

I will attempt to translate these patterns into corresponding velocity fields. This way, the complex flow on Jupiter can be broken down into individual flow components. These components could then be rearranged later to create a new, unique gas giant.

Creating Flowfields

As mentioned earlier, in a real-game scenario like Unreal or Unity, it’s not practical to generate the velocity field from scratch. It’s more efficient to generate most of the components in Substance Designer or Photoshop and then combine them in the shader to achieve the desired result. This approach allows us to create complex patterns at no additional cost.

I chose to create velocity textures in Substance Designer due to its flexibility and non-destructive workflow.


A cyclone is essentially a large vortex. Creating one involves generating a large blurred black or white dot and passing it through the curl operator. To add more complexity, the result can be combined with another operator – the gradient. This allows the cyclone to either suck in or expel matter, making it more dynamic.

The relation between curl and gradient is explained here in more detail.

The velocity field is generated by blending a mixture of curl and gradient operators over the previously created flow pattern.

Substance Designer enables the creation of more complex and detailed velocity fields. In this case, the cyclone flowfield was slightly deformed and elongated, featuring a non-linear speed distribution. Unlike a flowfield generated in a shader from scratch, all these details incur no additional cost – everything is baked into the texture.


The bands around Jupiter are known as belts and zones. Belts consist of darker, warmer clouds, while zones are characterized by brighter clouds made of ice. Strong currents form at the transitions between these bands. These currents, known as jets, run parallel to the equator and alternate in direction. Where two jets meet, the flow becomes turbulent, creating chains of vortices.

Replicating that is relatively simple: blurred stripes represent the laminar flow of the jets, while a curl applied to a series of dots creates vortices. It’s worth noting the color of the dots; the spin of the resulting vortices has to match the direction of the surrounding jets.

Once again, the flowfield generated in Designer exhibits more detail. Transition vortices are more scattered and vary in size. Additionally, jets are slightly disturbed to create a more wavy flow.


“Storms” is the term I used to encompass all the smaller vortices and turbulent streams that accompany the main currents. They are essentially noise, and I will approach creating them in the same way I would create noise.

Noise is typically composed of multiple layers called octaves. Each subsequent octave contains smaller details and has a diminishing influence. In the case of a velocity field, each layer also has to be animated separately.

Those layers are then blended together to form complex, turbulent motion.
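The octave blending can be sketched in Python (an illustration under my own assumptions: each octave doubles the spatial frequency, halves the amplitude, and drifts at its own speed; `base_layer` stands for sampling one animated velocity layer):

```python
def storm_field(base_layer, x, y, t, octaves=3):
    """Sum animated octaves of a base velocity layer into turbulent motion.

    base_layer(x, y, t) returns a (vx, vy) sample of one noise layer.
    """
    vx, vy = 0.0, 0.0
    frequency, amplitude = 1.0, 1.0
    for i in range(octaves):
        ox, oy = base_layer(x * frequency, y * frequency, t * (i + 1))
        vx += ox * amplitude
        vy += oy * amplitude
        frequency *= 2.0  # smaller details in each subsequent octave...
        amplitude *= 0.5  # ...with diminishing influence
    return (vx, vy)
```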

Substance Designer features storms gathered into clusters. Two versions of that texture are packed into a single texture and blended using moving masks to simulate quickly shifting currents.

It’s a different approach that results in patches of turbulent flow, as opposed to the uniformly distributed storms generated in Shadertoy. These patches resemble what can be observed on Jupiter more closely.


The division into cyclones, jets, and storms was artificial but proved quite useful for illustrating some of the techniques that can be used to mimic real flowfields. Each flow pattern can be achieved in many different ways, with no single approach that can be described as the “right” one.

To merge all these components together, a simple addition would suffice, but using alpha blending allows for accentuating some features like cyclones and toning down turbulence in certain areas.

At this stage, when all the components are ready, blending them together is more a matter of artistic choice than mathematics. After all, none of the presented techniques have solid grounding in physics – they are just approximations of natural phenomena.

2010: The Year We Make Contact

When I started working on the animation of the gas giant, I was really excited about the idea because I naively thought that this was going to be something novel, never tried before. Obviously, I was wrong. Films like ‘Outland’, ‘2010: The Year We Make Contact’, and ‘Contact’ all featured animated Jupiter.

The most interesting portrayal here is the rendition created by Digital Productions for ‘2010: The Year We Make Contact’. The technology behind it is a marvel of CGI, even though it looks like a perfectly executed practical effect.

The basic idea remains largely the same: utilizing a flowfield to deform the initial image. However, the execution differs significantly. While I used Substance Designer to generate flow textures, the team at Digital Productions utilized actual fluid mechanics to simulate the flow. My solution to the problem of mixing was to artificially reintroduce the color, whereas they sidestepped the problem entirely by converting the image into particles.

Remarkably, all of this was accomplished without the aid of modern CGI software or computing power. Instead, it relied on the ingenuity of a team of brilliant engineers and artists, supported by the CRAY X-MP.

The work of Larry Yaeger and Craig Upson is described in greater detail in Siggraph and Cinefex articles. Additionally, there is a documentary available on YouTube.

Further Development

The presented methods should be sufficient to create convincing-looking flow, but not necessarily a visually appealing planet. Achieving that requires several additional steps:

  • Colors: Currently, the R, G, and B channels represent different substances. Ultimately they will be replaced with a color texture.
  • UV Mapping: Currently, the texture is just a square; it needs to be wrapped around a sphere. However, I plan to use the method described in Flat Planets and apply it to a flat disc.
  • Shading: Atmosphere is lit differently than a solid, opaque object. A specialized shading model has to be created to complete the effect.


The initial setup required some effort, both in Unreal and in Substance Designer, but once in place, it allowed for easy tweaking and modifications. Since it does not rely on computational fluid dynamics, the motion can be handcrafted, which is both a strength and a challenge. It offers total freedom to create any flow imaginable, but requires the artist to have a basic understanding of fluid dynamics.

Most importantly, it can compete with actual fluid simulations while using only a fraction of resources. A full planet with a 1024×1024 texture takes less than 0.5ms to render, which is a modest price for such VFX.

Flat Planets

I often envy those who can accurately estimate the time their tasks will take, particularly the seemingly small ones. Experience has taught me that it’s these ‘simple’ tasks that are often the most deceptive. What appears to be a 15-minute fix can unexpectedly turn into a bizarre edge case, transforming a supposed one-line code change into a day-long hunt for answers on the Wayback Machine. And yet, even with all that experience, the task that led me down the deepest rabbit hole, surprisingly, was texturing a sphere.


Sky can serve as an excellent narrative device. Movie directors and game developers understand this well, often using it for exposition. What better way to convey the sense of being on a far-off planet than by presenting an alien sky, complete with unfamiliar-looking moons and planets hanging over the horizon?

I’m taking a similar approach in my project. The skybox I’m working on will feature an animated moon and a gas giant, adding some extra visual flair.

Both planets will spin, and in addition, the gas giant will feature moving atmospheric currents. Normally, these motions are too subtle to be seen, but I’ll accelerate them for greater visual impact. As for the artistic direction, I’m going for a semi-realistic style, inspired by the sci-fi art of the ’70s and ’80s.

Planet’s Surface

While the gas giant’s surface texture will be generated in real time using a pixel shader and a render-to-texture approach, the moon texture will be created in Substance Designer. This allows for experimenting with various looks by simply changing the color palette and material seed.

An animated gas giant, while visually captivating, presents its own set of challenges. Rendering the texture every frame will impact performance, with the rendering cost quadrupling each time the texture resolution is doubled.

I want one solution that works consistently, regardless of whether the input is a static texture created in Substance Designer or dynamically rendered in Unreal as described in Flowfields.

This leaves two options for storing the planet surface – a square or a rectangular texture. Although a cubemap was considered initially, its potential performance impact – consuming six times the resources of its square-texture counterpart – makes it completely impractical. Furthermore, Substance Designer doesn’t support cubemap generation.

UV Sphere Problems

With texture choices narrowed down to square or rectangle, the next step is mapping them onto a sphere. The UV sphere is an obvious choice here, as it is easy to align its texture coordinates with a 2×1 rectangular texture.

Given that only one side of the sphere is visible at a time, there is no need to use a full, unique rectangular texture; a tiling square one can be used instead.

Jagged Edges

Using a UV sphere might be the easiest option, but unfortunately, in this case, it is not the best one. Firstly, the outline is visibly angular – it doesn’t have to be a perfect circle, but it should at least be smooth.

When triangle count is not an issue, this problem can be solved by using a sphere with more subdivisions. Alternatively, the edge can be masked using a pixel shader.

Texture Usage

Most of the sphere’s surface is viewed at an angle, leading to sampling from lower mipmaps. This is particularly problematic for dynamic textures, where every pixel is rendered each frame. Most of the generated image gets wasted on areas where it is barely visible.

While increasing tiling can mitigate this issue, it introduces another problem.


To maintain a continuous pattern across the whole sphere, the tiling can only be increased in whole numbers. However, even doubling the initial tiling makes the repetition noticeable.

Fractional values, on the other hand, create a seam running from the north to the south pole.

Polar Distortion

Another problem is texture pinching at the poles. Not only is the pinching visually jarring, but mapping onto triangles generates additional distortion, similar to the PSX’s ‘floaty’ texturing.

A common workaround in game development has been to conceal these pinched areas with a textured patch.


Finally, when considering the UV sphere for a planetary model, the inability to depict atmospheric halos becomes a limitation. Since the mesh only allows drawing the surface, an additional model is required for the halo, complicating the setup.

The UV sphere may seem like a feasible solution, but the laundry list of necessary modifications and creative hacks turns it into a Frankenstein’s monster. It is still used for modeling planets, but for my application, the skybox, I was considering another option.

Flat Disk

Opting for an actual spherical mesh to represent the planet is primarily for the ease of texture mapping. In the case of a skybox, however, there’s no need for a complex 3D mesh – it will be observed from a distance and from one position. A basic disk will work just as well. The main issue, texture mapping, can be effectively managed through a pixel shader. This technique also has the added benefit of allowing the atmospheric halo to be drawn on the same mesh.

The disk mesh I’ll use is a straightforward filled circular polygon. The key feature to note is that its texture coordinates have their origin exactly at the center of the disk. If we assume that the planet has a radius of 1, then the UV coordinates have to extend beyond that to make some space for rendering the atmosphere.

Pixel Shader

There are several features of the 3D UV Sphere mesh I want to recreate in the shader. Obviously, the shader version of a planet has to be visually indistinguishable from the mesh one. It also has to be free of all the problems of the mesh listed before. The shader creation process will be broken down to several steps.

  • Surface Equation: Without the triangle mesh, the surface has to be defined in analytical terms.
  • Texture Mapping: UV coordinates have to be defined as a function of position on a sphere. This includes scaling and TBN calculation
  • Orientation: The shader must allow for the tilting of the planet’s axis.
  • Rotation: The planet should be able to spin around its axis.

Surface Equation

One important note on coordinates: I’ll be using a left-handed, Y-up system, as it is most consistent with DirectX. I will also use the DirectX normal map format. While Unreal Engine also uses a left-handed system, Z is the up-direction. It is a good practice to double- or triple-check orientations and the format of normal maps used.

The surface equation maps a 2D position on a disc plane to a corresponding 3D position on a spherical surface. In simpler terms – it is a function that draws a ball.

The first step is figuring out if the pixel lies on the surface of a sphere. To create a bounding circle mask, verify if the length of the UV vector is less than the sphere’s radius:

float CircleMask(float2 uv, float radius)
{
    return length(uv) < radius ? 1.0 : 0.0;
}

It’s important to note that this approach will function correctly only when the UV coordinates are aligned with the center of the mesh.

The next step is to create the surface vector – the local position on the surface of the sphere. Luckily, both the x and y components of the UV coordinates are the same as the corresponding surface vector components. The only thing left to do is to reconstruct the z component.

float3 ReconstructSurface(float2 uv)
{
    // Clamped to zero so pixels outside the circle don't produce NaNs.
    float zSquared = max(1.0 - dot(uv, uv), 0.0);
    float z = sqrt(zSquared);
    return float3(uv, z);
}

Texture Mapping

Mapping the square texture to a spherical surface can be broken down into three operations:

  1. Wrapping the texture around a cylinder
  2. Repeating the same for the y-axis
  3. Deforming the result into a circle

The process of wrapping might seem counterintuitive. The x coordinate of the texture will be proportional to the angle at which the texture wraps around the cylinder. To find this angle, it is enough to calculate arcsine(x). Then the angle has to be remapped from [-π/2, π/2] to [0, 1].

To form a circle, it is necessary to define the circular curve as a distance from the y-axis:

The ‘local width’ of the sphere, or the generatrix, is then used to pinch the coordinates at the poles. This is achieved by dividing the x-component of the surface position by this width.

float2 GenerateSphericalUV(float3 position)
{
    float width = sqrt(1.0 - position.y * position.y);
    float generatrixX = position.x / width * sign(position.z);
    float2 generatrix = float2(generatrixX, position.y);
    float2 uv = asin(generatrix) / 3.14159 + float2(0.5, 0.5);
    return uv;
}
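As a sanity check, the same mapping can be ported to plain Python (an illustrative sketch, not part of the shader): the point facing the viewer should land exactly in the middle of the texture, and the north pole should map to the top edge.

```python
import math

def generate_spherical_uv(position):
    # Python port of GenerateSphericalUV, for checking the math on the CPU.
    x, y, z = position  # unit-length point on the sphere
    width = math.sqrt(max(1.0 - y * y, 0.0))  # 'local width' (generatrix)
    sign_z = (z > 0) - (z < 0)
    generatrix_x = x / width * sign_z if width > 0.0 else 0.0
    u = math.asin(max(-1.0, min(1.0, generatrix_x))) / math.pi + 0.5
    v = math.asin(max(-1.0, min(1.0, y))) / math.pi + 0.5
    return (u, v)

# The point facing the viewer maps to the texture center:
print(generate_spherical_uv((0.0, 0.0, 1.0)))  # → (0.5, 0.5)
```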


To make the planet look more natural, we need to adjust the orientation of its axis, specifically the pitch and roll. Planets in sci-fi art are usually tilted, as it creates a more dynamic composition and makes it possible to show ice caps at the poles.

The yaw, which corresponds to the planet’s spinning motion, will be dealt with in a separate step to avoid the seam issue mentioned when discussing UV spheres.

In the conventional approach, a mesh sphere is typically reoriented or transformed using a matrix. In our case, where each pixel defines the surface position, we can use a simplified 3⨯3 matrix limited to rotation – eliminating the need for a full set of transformations.

Building such a matrix is pretty straightforward. Identify the ‘right,’ ‘up,’ and ‘forward’ vectors of the rotated sphere and organize them as columns in the matrix.
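That construction can be sketched in NumPy (hypothetical pitch and roll values; the column entries mirror the HLSL matrix used in this shader). A pure rotation matrix must satisfy RᵀR = I and det(R) = 1, which makes the result easy to verify:

```python
import numpy as np

def rotation_from_pitch_roll(pitch, roll):
    # Assemble the 'right', 'up', and 'forward' vectors of the tilted
    # sphere and pack them as columns of a 3x3 rotation matrix.
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    right   = np.array([cr, sr, 0.0])
    up      = np.array([-sr * cp, cr * cp, sp])
    forward = np.array([sr * sp, -cr * sp, cp])
    return np.column_stack([right, up, forward])

R = rotation_from_pitch_roll(0.3, 0.2)  # hypothetical tilt angles
# A pure rotation satisfies R^T R = I and det(R) = 1:
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```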

The material editor doesn’t support matrices as a data format, but Unreal offers a workaround:

When the rotations are limited to just pitch and roll the matrix looks like this:

In HLSL, a 3⨯3 matrix can be represented using float3x3:

float3x3 CreateRotationMatrix(float pitch, float roll) {
    float cosPitch = cos(pitch);
    float sinPitch = sin(pitch);
    float cosRoll = cos(roll);
    float sinRoll = sin(roll);

    return float3x3(
        cosRoll, -sinRoll * cosPitch, sinRoll * sinPitch,
        sinRoll, cosRoll * cosPitch, -cosRoll * sinPitch,
        0.0, sinPitch, cosPitch);
}

Before adding the spinning motion, there is one problem that needs to be addressed – UV seams. Right now the texture is mapped in such a way that the seams are aligned with the edges of the texture, and thus invisible. But if we change the scale, the discontinuity becomes obvious. It is not a problem that can be easily solved, but it can be partially masked by moving the seam to the backside. This requires changes in the mapping function:

float2 GenerateSphericalUV(float3 position, float scale)
{
    float width = sqrt(1.0 - position.y * position.y);
    float generatrixX = position.x / width * sign(position.z);
    float2 generatrix = float2(generatrixX, position.y);
    // Scaling around the UV center keeps the seam on the backside.
    float2 uv = asin(generatrix) / 3.14159 * scale + float2(0.5, 0.5);
    return uv;
}


With all the prerequisites in place, adding the spinning is straightforward, even if counterintuitive. To keep the seam in a well-masked position, we will not rotate the sphere’s surface, but pan the surface texture along the x-axis.

float2 GenerateSphericalUV(float3 position, float scale, float spin)
{
    float width = sqrt(1.0 - position.y * position.y);
    float generatrixX = position.x / width * sign(position.z);
    float2 generatrix = float2(generatrixX, position.y);
    // The spin pans the texture along x instead of rotating the sphere,
    // so the seam stays hidden on the backside.
    float2 uv = asin(generatrix) / 3.14159 * scale + float2(0.5 + spin, 0.5);
    return uv;
}

float2 sphericalUV = GenerateSphericalUV(position, scale, time * speed);


For the ball to appear truly round, shading is crucial. Typically, it would be handled by the engine, but in the case of a planet, it is not that easy. A planet consists of two distinctive layers – the surface and the atmosphere. Typical materials describe only one medium. Therefore, we have to define how both layers are lit and blend them together manually.


Regardless of the chosen shading model, a surface normal must be provided. Fortunately, it is already there – the surface position before rotation, when the radius is 1, is equal to the surface normal. That is enough to perform basic shading.


I want to replicate the aesthetics of sci-fi book covers, with a stylized version inspired by NASA photos. Realism isn’t my primary concern. I’ll be using the simplest Lambertian model, where the light is equal to the dot product of the surface normal and light vector.

float LambertianLight(float3 normal, float3 lightDirection) {
    float NdotL = max(dot(normal, lightDirection), 0.0);
    return NdotL;
}
Bump Mapping

A simple normal might suffice for cases where the planet appears smooth, such as gas giants. However, planets with rocky surfaces are often covered with geological formations like mountains, ridges, or craters. These are best simulated using normal maps.

To use normal maps, we require a complete set of vectors: Tangent, Bitangent, and Normal. This set comprises three unit vectors perpendicular to each other and aligned to UV mapping. These vectors can be thought of as ‘UV Local Space’.

In the case of a sphere, calculations need to be done per pixel. It’s often convenient to store these calculations as columns of the TBN matrix. This matrix can then be used to recalculate the normal from the normal map into World Space Coordinates.

Shaded surface and the corresponding normalmap:

UV Discontinuities

When discussing the generation of UV coordinates in the pixel shader, one critical topic to address is discontinuities. Take, for instance, the case of the back seam – the horizontal component of the UV should wrap perfectly as it is equal to 0.0 and 1.0 on the borders. However, instead of a seamless transition, there is a line of blocky artifacts along the seam.

This line corresponds to the DDX of the UV. DDX and DDY, in this case, measure the rate of change of UV in the respective screen space axes. They are then fed into the texture sampler and used to determine which mipmap to use. Low values of UV derivative correspond to a high-resolution mipmap, and vice versa.

There’s a ‘jump’ in values between the left and right sides of the seam, causing the DDX to be high. Consequently, the texture is sampled from the lowest mipmap, resulting in the gray color.
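The sampler’s choice can be reproduced with back-of-the-envelope math: the mip level is roughly the log2 of the UV derivative multiplied by the texture size. This is a simplified model with hypothetical numbers – real samplers also consider DDY and anisotropy:

```python
import math

def mip_level(ddx_uv, texture_size):
    # Simplified mip selection: log2 of the screen-space texel footprint,
    # clamped so smooth regions stay at mip 0.
    return max(0.0, math.log2(abs(ddx_uv) * texture_size))

# Smooth region: U changes by ~0.001 per pixel on a 1024-wide texture.
print(mip_level(0.001, 1024) < 1.0)   # → True, near the sharpest mip
# At the seam, U jumps from ~1.0 back to ~0.0 within one pixel:
print(mip_level(0.98, 1024) > 9.0)    # → True, near the lowest mip
```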

In the case of a planet, it won’t be a significant issue, as it will be covered with a polar patch anyway. However, in other visually jarring scenarios, the DDX and DDY of the seam have to be manually corrected and fed into the sampler.

The presented ‘fix’ is merely for illustrative purposes. Typically, formulas for proper values of DDX and DDY have to be derived manually to match the mapping accurately.

Polar Patches

The last problem that needs addressing is the distortion at the poles. As mentioned before, the tried and tested solution is to just cover them. It seems like a hack, but in the case of a planet or a moon it makes a lot of sense. Polar areas are often covered with ice and visually distinct from the rest of the planet, so having a patch of different texture not only hides the problem but also adds visual interest.

The easiest way to add a polar patch would be to map it in the plane perpendicular to the axis of rotation and rotate the resulting coordinates:

float2 PolarPatchMapping(float3 position, float scale, float spin)
{
    // Map in the plane perpendicular to the rotation axis (y),
    // then rotate the resulting coordinates by the spin angle.
    float sinSpin, cosSpin;
    sincos(spin, sinSpin, cosSpin);
    float2 planar = position.xz * scale;
    float2 uv = float2(planar.x * cosSpin - planar.y * sinSpin,
                       planar.x * sinSpin + planar.y * cosSpin);
    return uv + float2(0.5, 0.5);
}


The planet’s surface doesn’t occupy the entire area of the disk, leaving enough space to draw the atmospheric halo. Drawn with the same shader, it can be seamlessly merged with the atmosphere drawn over the planet.

There are countless whitepapers discussing atmosphere rendering. Most are physically based and rely on raymarching to deliver the most realistic image possible. I will use none of them. There is a nice trick from Quake I want to try.

In both Quake 1 and 2, lighting was achieved using mostly lightmaps, a technique popular at the time and still used until recently for static light scenarios. In this approach, computationally expensive lighting calculations are precomputed during development and stored as a separate texture covering all surfaces in the level.

However, since lightmaps only work for static lighting, id Software had to devise a solution for dynamic objects like monsters. They sampled the lightmap below the dynamic object and used it to modulate the color of the object. While this technique wasn’t physically accurate, it was good enough to make the monster blend with its surroundings.

In a similar manner, I’ll use the planet’s surface to approximate the lighting of the atmospheric halo. It is enough to extend the surface normal before bump mapping to calculate the halo’s brightness.

To further enhance the effect, the brightness of each channel can be remapped differently. This is a simple yet effective method for simulating the absorption of different light wavelengths.

The visibility of the atmosphere is more pronounced at oblique angles. This effect can be replicated by remapping the Z component of the surface normal. Additionally, to recreate the halo’s fading effect, the distance from the sphere’s surface must be calculated.
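One possible shape for that fade – an illustrative guess, not the exact curve used here – is a smoothstep falloff driven by the distance beyond the sphere’s radius:

```python
def halo_fade(uv_distance, radius=1.0, thickness=0.25):
    # Fade the halo from full strength at the surface (uv_distance == radius)
    # to zero at the outer edge of the disk. Hypothetical falloff shape.
    t = (uv_distance - radius) / thickness
    t = min(max(t, 0.0), 1.0)
    return 1.0 - (3.0 * t * t - 2.0 * t * t * t)  # inverted smoothstep

print(halo_fade(1.0))   # → 1.0 at the surface
print(halo_fade(1.25))  # → 0.0 at the outer edge
```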

Finally, both the surface and atmosphere need to be alpha-blended together.


Although it took more steps than anticipated, the result works great. The planet is a perfect sphere, and the mapping allows for all the texture and shader manipulations I wanted. I can create a single Substance Designer graph generating surface texture, and the result doesn’t require any further post-processing. The same goes for animated texture – I can use a smaller size and thus save performance, then adjust the tiling as needed.

Dissecting Curl Noise

This formula, F(p) = ∇⨯(∇(SDF(p))*K(p)) – ∇(|SDF(p)|), looks exactly like the arcane mathematical gibberish I would have memorized for a semester, only to forget a month after the exam. There’s a weird thing about these math oddities – the more abstract and convoluted they are, the higher the likelihood they’ll find their way into game development. This time, it’s no different; curl is a step necessary to transform a clay miniature into an animated creature.

Case Study

I’m developing an interactive GPU-based destruction tech demo that requires a player’s avatar. It will be a humanoid entity made from floating stone debris, a departure from the traditional golem design. In this concept, the stones are not part of a solid body but are manipulated by some force to form an external shell, while the interior remains hollow.


Similar themes are popular in film and gaming, with notable examples being Imhotep’s face materializing at the forefront of a sandstorm in ‘The Mummy’ and the Sandman character in ‘Spider-Man 3’.

‘Knack’ represents a similar concept but differs significantly: its creature is hand-animated rather than procedurally generated. On the other hand, ‘Katamari’ uses a procedural method for its growing ball but it is not animated.


A key requirement for my project is modeling the stone fragments as rigidbodies. This allows for two-way interaction between the environment and the creature, enabling it to navigate and interact with the physical world similarly to a typical player character, with the ability to push objects and destroy obstacles.

Although I plan to build the simulation from scratch using a compute shader, my current focus is on creating the creature. Several questions must be addressed before proceeding:

  • How should the stones be moved?
  • What is the minimum number of stones needed to clearly define a humanoid shape?
  • What method should be used to animate the humanoid shape?
  • Can real-time physics and rigidbodies be feasibly used for this purpose?

I have some ideas for arranging the stones into a specific shape, but ensuring their fluid flow on the surface is a different issue. To solve this, I’ll try to use curl noise, which seems to possess the visual qualities I need. Before fully committing to it, I want to understand curl noise and its properties to ensure it’s the right solution for the project.

“Curl of Noise”

The name ‘curl noise’ might suggest it’s a type of noise, like Worley or Perlin, but that can be somewhat misleading. Rather than indicating a specific type, ‘curl’ denotes an operation applied to input noise. Importantly, this operation isn’t confined to noise alone; various inputs such as vector fields or textures can also undergo the curl treatment. The concept is the same as in the case of heightfield-to-normal-map algorithms, just with a different formula.

Simply put, it can be considered as noise being subjected to the curl operation. It is probably easier to think of it as “curl of noise”.

The versatility of curl lies in its ability to handle various types of data. It extends beyond conventional applications like particle systems – it can be used to animate materials and aid in generating procedural geometry, to name a few examples.

Curl in 2D

As highlighted earlier, curl noise shares certain similarities with algorithms used in normal map generation. It is easier to understand how it works by comparing it to something familiar. To illustrate, let’s use a few layers of Perlin noise generated in Substance Designer. These layers will be blended together into a heightmap, the base for a normal map.

Bright spots represent hills, and dark areas represent valleys. To create a normal map, the next step is to calculate the slope or gradient.


To achieve this, for any given point P, we need to sample four nearby points. The height difference between the east (E) and west (W) points will be represented as the x-component of the slope, while the difference between the north (N) and south (S) points will stand for the y-component accordingly.

Vector2 SampleGradient( Texture2D texture, Vector2 coords, float delta )
{
    float dX = texture.GetPixelBilinear(coords.x + delta, coords.y).r
             - texture.GetPixelBilinear(coords.x - delta, coords.y).r;
    float dY = texture.GetPixelBilinear(coords.x, coords.y + delta).r
             - texture.GetPixelBilinear(coords.x, coords.y - delta).r;
    return new Vector2(dX, dY) / ( 2 * delta );
}

Repeating this process for every point in the heightfield generates a vector field where each vector indicates the direction uphill. After calculating the gradient, additional steps are needed to generate a normal map. Initially, the gradient is inverted to convert slope information into normals. The Z component is subsequently reconstructed, ensuring the normal remains a unit vector. Following this, the X and Y components are packed into the 0-1 range for storage, allowing them to be preserved without clipping negative values. When juxtaposed side by side, the relationship between the gradient and the normal map becomes apparent.

Note that the arrows overlaid on both the gradient and normal map represent the same gradient values for easy comparison.
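The inversion, z-reconstruction, and packing steps described above can be sketched in Python (an illustrative helper, with gradient components assumed to be in the -1..1 range):

```python
import math

def gradient_to_normal(gx, gy):
    # Invert the gradient to turn slope information into a normal,
    # reconstruct z so the vector stays unit-length, then pack the
    # x and y components into the 0..1 range for storage.
    nx, ny = -gx, -gy
    nz = math.sqrt(max(1.0 - nx * nx - ny * ny, 0.0))
    return (nx * 0.5 + 0.5, ny * 0.5 + 0.5, nz)

# A flat area (zero gradient) yields the neutral normal-map value:
print(gradient_to_normal(0.0, 0.0))  # → (0.5, 0.5, 1.0)
```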


The steps for calculating curl are nearly identical:

Vector2 SampleCurl( Texture2D texture, Vector2 coords, float delta )
{
    float dX = texture.GetPixelBilinear(coords.x + delta, coords.y).r
             - texture.GetPixelBilinear(coords.x - delta, coords.y).r;
    float dY = texture.GetPixelBilinear(coords.x, coords.y + delta).r
             - texture.GetPixelBilinear(coords.x, coords.y - delta).r;
    return new Vector2(dY,-dX) / ( 2 * delta );
}

The key difference lies in the direction. While both curl and gradient vectors have the same length, the curl vectors don’t point uphill; rather, they point sideways. When arranged in an array, these curl vectors create a pattern resembling contour lines on a map.

Overlaying curl and gradient vectors reveals that they are essentially the same, only rotated 90 degrees. A closer examination of the code confirms this.


return new Vector2(dX, dY) / ( 2 * delta );


return new Vector2(dY,-dX) / ( 2 * delta );

The relationship between curl, gradient, and normal map becomes convenient when generating curl. Curl can be obtained by simply converting a normal map.

The normal map, essentially an inverted gradient with an additional z component, can be transformed into curl through a few simple operations. It’s important to note that differences exist between OpenGL and DirectX normal maps, potentially requiring adjustments to the sign of the Y component during the transformation process.
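A minimal sketch of that conversion, assuming a DirectX-format map with channels in the 0..1 range:

```python
def normal_to_curl(r, g):
    # Unpack the normal-map channels back to the -1..1 range.
    nx = r * 2.0 - 1.0
    ny = g * 2.0 - 1.0
    # The normal stores the inverted gradient: (nx, ny) == (-gx, -gy).
    gx, gy = -nx, -ny
    # Curl is the gradient rotated 90 degrees: (gy, -gx).
    return (gy, -gx)

# A slope rising due east (gradient (1, 0)) becomes flow heading south:
print(normal_to_curl(0.0, 0.5))
```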

Curl vs Gradient

Why bother with curl when the gradient is essentially the same and more intuitive? Let’s imagine that both curl and gradient represent velocity fields describing the motion of a fluid. If we place a particle in that fluid, we can trace its path.

The dot is our starting point, and the path unfolds after 10,000 integration steps. We’re focused on the nature of the motion rather than achieving pinpoint accuracy, so the forward Euler method works well:

Vector2[] IntegrateCurl( Texture2D texture, Vector2 start, float stepSize, int stepCount )
{
    Vector2[] path = new Vector2[stepCount];
    path[0] = start;
    for( int i = 1; i < stepCount; i++ )
        path[i] = path[i - 1] + SampleCurl( texture, path[i - 1], 0.01f ) * stepSize;
    return path;
}

Let’s calculate multiple paths with origins located nearby. Each path will have different shade:

Gradient paths typically converge to a single point rather swiftly. In contrast, curl generates long, conformal lines that follow a twisting pattern. This distinctive feature is not a byproduct of utilizing Perlin noise as an input; instead, it is an inherent trait of curl, known for being divergence-free.


In fluid flow, divergence tells us if fluid is spreading out or coming together at a point. Positive divergence means spreading out, and negative means coming together. To imitate the flow of water in a pool, we use an incompressible fluid as an approximation. An incompressible fluid has zero divergence, meaning it doesn’t spread out or come together. That’s why opting for a divergence-free curl is the better choice when creating a velocity field.
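This property is easy to verify numerically: take any smooth ‘heightmap’, rotate its gradient 90 degrees, and measure the divergence with central differences – it vanishes up to floating-point error. A small self-contained sketch, where the sine/cosine field is an arbitrary stand-in for noise:

```python
import math

def height(x, y):
    # An arbitrary smooth stand-in for the Perlin heightmap.
    return math.sin(3.0 * x) * math.cos(2.0 * y)

def curl2d(x, y, d=1e-4):
    # Rotated central-difference gradient, as in SampleCurl.
    dx = (height(x + d, y) - height(x - d, y)) / (2 * d)
    dy = (height(x, y + d) - height(x, y - d)) / (2 * d)
    return (dy, -dx)

def divergence(field, x, y, d=1e-4):
    # Central-difference divergence of a 2D vector field.
    fx_plus, _ = field(x + d, y)
    fx_minus, _ = field(x - d, y)
    _, fy_plus = field(x, y + d)
    _, fy_minus = field(x, y - d)
    return (fx_plus - fx_minus + fy_plus - fy_minus) / (2 * d)

# Divergence of the curl field is (numerically) zero:
print(abs(divergence(curl2d, 0.37, 0.81)) < 1e-4)  # → True
```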

Curl in 3D

The comparison between gradient and curl was helpful in illustrating the concept in 2D. However, in 3D, the relationship between the two breaks down. Gradient is defined for scalar-valued fields, while curl in 3D can only be defined for vector-valued fields. In 2D, we made the comparison work by treating the scalar field like a vector field. The vectors were oriented perpendicular to the texture plane, and their magnitudes were based on brightness.

To extend this approach to 3D, we need a new source of noise – similar in nature to Perlin noise but in vector field form.

For equivalent results to Perlin noise in 3D, we need a 3D vector noise field that fulfills specific criteria:

  • Tileable in all directions:
    Essential for storing the result in a 3D texture. Imagine the texture as a cube; the left side should seamlessly match the right, the top should fit the bottom, and so on.
  • Isotropic:
    It should behave uniformly in all directions, maintaining the same frequency and amplitude along each axis. This ensures that, regardless of where or in which direction we sample, the resulting turbulent motion remains consistent.
  • Differentiable:
    The sampling method used is essentially a symmetric difference quotient. To achieve a smooth-looking curl, at least the first derivative needs to be continuous.

3D Noise

Similar to Perlin noise, the process begins by generating a set of random directions, evenly distributed on a uniform grid and constrained within a bounding cube. Each node on the grid is assigned a random unit vector.

While a magnitude of one is not obligatory, it tends to make the field more visually interesting.

Using a uniform grid with randomized directions leads to an isotropic field. Interpolating between these generated vectors forms a continuous 3D field. Wrapping indices during interpolation ensures tileability, creating a seamless and repeating pattern.

int WrapIndex(int index, int gridSize)
{
    return ((index % gridSize) + gridSize) % gridSize;
}
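A quick way to see why wrapping buys tileability is a 1D Python sketch with hypothetical grid values: interpolating one full period apart returns identical results.

```python
import random

def wrap_index(index, grid_size):
    # Same formula as the C# WrapIndex; keeps negative indices in range too.
    return ((index % grid_size) + grid_size) % grid_size

def sample_tileable(grid, t):
    # Linear interpolation between grid nodes with wrapped indices.
    i = int(t // 1)
    frac = t - i
    a = grid[wrap_index(i, len(grid))]
    b = grid[wrap_index(i + 1, len(grid))]
    return a + (b - a) * frac

random.seed(7)
grid = [random.uniform(-1.0, 1.0) for _ in range(8)]
# Sampling one full period (the grid size) apart returns the same value:
print(sample_tileable(grid, 2.25) == sample_tileable(grid, 2.25 + 8))  # → True
```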

Linear interpolation between direction vectors doesn’t provide a differentiable function. A piecewise linear function results in a piecewise constant derivative, leading to a blocky curl texture. Smoothstep interpolation will solve that problem in a single line of code:

Vector3 Smoothstep(Vector3 vectorA, Vector3 vectorB, float t)
{
    t = 3 * t * t - 2 * t * t * t;
    return Vector3.Lerp(vectorA, vectorB, t);
}

Higher-order interpolation methods yield noticeably smoother transitions between values:

While the current result meets all the requirements, it lacks complexity. To add more detail, multiple layers, called octaves, will be blended together.

public class MultiOctaveNoise : ScriptableObject
{
    TileableNoise[] _octaves = new TileableNoise[0];
}

Higher-frequency octaves have a progressively smaller impact on the final result.

public Vector3 Sample(Vector3 position)
{
    Vector3 sum = Vector3.zero;
    float weight = 1.0f;
    float accumulatedWeight = 0.0f;
    for (int i = 0; i < _octaves.Length; i++)
    {
        sum += _octaves[i].Sample(position) * weight;
        accumulatedWeight += weight;
        weight *= 0.5f;
    }
    return sum / accumulatedWeight;
}

This type of texture is commonly generated using Perlin noise with different seeds for each of the X, Y, and Z components. The approach presented here, inspired by Perlin but rearranging some steps to obtain a vector field directly, is not only computationally more efficient but also easier to code.

Curl in 3D Space

The 3D vector noise serves as the foundation, but to transform it into a flow field, the next step is to compute its curl. Calculating curl in 3D space isn’t notably more complex or computationally expensive than its 2D counterpart. In this implementation, we utilize 6 samples for the 3D case, compared to the 4 used in the 2D scenario.

Unit vectors i, j, and k correspond to the x, y, and z axes. The curl of a 3D vector field P can be written as:
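Written out component-wise (reconstructed here from the standard definition; each term matches the code that follows):

```latex
\nabla \times \mathbf{P} =
\left(\frac{\partial P_z}{\partial y} - \frac{\partial P_y}{\partial z}\right)\mathbf{i} +
\left(\frac{\partial P_x}{\partial z} - \frac{\partial P_z}{\partial x}\right)\mathbf{j} +
\left(\frac{\partial P_y}{\partial x} - \frac{\partial P_x}{\partial y}\right)\mathbf{k}
```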

Code representing the formula:

public static Vector3 SampleCurl(MultiOctaveNoise noise, Vector3 position, float delta)
{
    Vector3 deltaX = Vector3.right   * delta;
    Vector3 deltaY = Vector3.up      * delta;
    Vector3 deltaZ = Vector3.forward * delta;
    float   span   = 2 * delta;
    Vector3 dX = (noise.Sample(position + deltaX)
                - noise.Sample(position - deltaX)) / span;
    Vector3 dY = (noise.Sample(position + deltaY)
                - noise.Sample(position - deltaY)) / span;
    Vector3 dZ = (noise.Sample(position + deltaZ)
                - noise.Sample(position - deltaZ)) / span;
    Vector3 curl = (dY.z - dZ.y) * Vector3.right  +
                   (dZ.x - dX.z) * Vector3.up     +
                   (dX.y - dY.x) * Vector3.forward;
    return curl;
}

Applying curl to the tileable noise vector field yields the following result:

Curl of a tileable vector field is also tileable:

However, whether it can effectively serve as a flow field is still to be determined.

Curl Noise Motion

The most effective way to visualize the motion is, unsurprisingly, through animation. If the curl vector field can serve as a flow field, it should mimic particle behavior as if they were suspended in turbulent fluid.

A grid of 30 x 30 spheres was arranged to represent particles. Each sphere had a path generated through 3000 steps using the Euler method. These paths were then transformed into rope meshes. To indicate tiling boundaries, a wireframe cube was included.
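The path generation described above reduces to a few lines. A generic standalone Python sketch of the explicit Euler step (the function name and signature are illustrative, not taken from the project):

```python
def euler_path(velocity, start, step, count):
    """Trace a particle through a velocity field with the explicit
    Euler method: p(t + dt) = p(t) + v(p(t)) * dt."""
    p = tuple(start)
    path = [p]
    for _ in range(count):
        v = velocity(*p)
        p = tuple(c + dc * step for c, dc in zip(p, v))
        path.append(p)
    return path

# With a constant velocity field the particle moves in a straight line;
# with a sampled curl field it traces the turbulent trajectories shown here.
path = euler_path(lambda x, y, z: (1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 0.1, 3000)
```

Euler is the crudest integrator available, but for visualization the accumulated error only bends the trajectories slightly, which is acceptable here.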

The flow blends several main currents with random motion. I want to use it to simulate debris moving in a way that mimics bird flocks, and the result comes very close to that.

The trajectories of particles not only simulate fluid-like movements but also create formations resembling vines or tentacles. It will certainly find use in procedural mesh generation.


Curl relies on lots of samples from a layered vector field, making it computationally heavy. To handle this, the tileability requirement was introduced, making it possible to store calculations in a 3D texture. But there’s a trade-off – it introduces inaccuracies because the analytical definition is per point, while the 3D texture stores values per voxel and interpolates them. Further complicating matters, the values are stored with limited precision.

Let’s explore how these inaccuracies impact the motion within the vector field. The previously generated field is cached as a texture with dimensions 64x64x64 and a precision of 32 bits per channel.

The trajectory is rapidly diverging, yet the crucial aspect, the character of the motion, is remarkably well-preserved. Through caching, it becomes feasible to substitute all the operations necessary for obtaining curl with a single sampling of a 3D texture. This is a significant improvement, especially considering that we need the curl value for thousands of objects every frame.


Until now, this has been a mere replication of the systems already found in Unreal’s Niagara and Unity’s particle system. However, the capability to store curl in an easily accessible and modifiable form opens up new possibilities. Now, it can be applied with rigid bodies as a force field, as opposed to the velocity field used previously. This allows debris fragments to interact with each other and with the surrounding environment.

To set the object in motion, you only need to sample the cached curl at the object’s center of gravity, convert the color to a vector, and apply it as a force:

void ApplyForce(Rigidbody rigidbody, Texture3D curl, float fieldSize, float strength)
{
    Vector3 samplingPoint = rigidbody.position / fieldSize;
    float u = samplingPoint.x;
    float v = samplingPoint.y;
    float w = samplingPoint.z;
    Color sample = curl.GetPixelBilinear(u, v, w);
    Vector3 force = new Vector3(sample.r, sample.g, sample.b) * strength;
    rigidbody.AddForce(force);
}

In this proof of concept, NVidia PhysX was employed for the physics calculations. While effective, it’s not the optimal solution, as it restricts the number of objects. A final solution would likely require a more customized approach, better suited to the parallel nature of the problem, utilizing the GPU and compute shaders.

The forcefield texture is scrolled to give the illusion of the force being animated:

    Vector3 samplingPoint = rigidbody.position / fieldSize + panningSpeed * Time.time;


While the conventional method of representing 3D shapes via triangle mesh is effective for rendering, it presents challenges in volumetric applications, particularly when storing a force field as a 3D texture. Fortunately, there’s a solution available – Signed Distance Fields.

SDFs represent shapes by assigning each space point a signed distance value relative to the nearest surface. Negative values denote points inside the shape, while positive values indicate points outside.
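For a concrete example of the sign convention, the SDF of a sphere is a one-liner. A hypothetical Python snippet:

```python
import math

def sphere_sdf(point, center, radius):
    # Signed distance to a sphere: negative inside, zero on the
    # surface, positive outside.
    return math.dist(point, center) - radius

# The origin lies one unit inside a unit sphere centered there:
d = sphere_sdf((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0)  # -> -1.0
```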

I won’t delve into the topic of SDF generation here; instead, I’ll concentrate on transforming SDF into a force field.

Surface Attraction

Similar to how gradients are crucial in generating normal maps from heightfields, the SDF gradient is key in providing extended surface normals in 3D. When used as a force, this gradient will push rigidbodies away from the shape. To make it more useful, some transformations are necessary. Firstly, taking the absolute value of the SDF yields the distance to the surface. The gradient of this value, in its raw form, propels objects away from the surface rather than away from the center. Multiplying it by -1 reverses the direction. This transformation is sufficient to guide rigidbodies into the desired shape.
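The transformation chain described above (absolute value, then gradient, then negation) can be verified numerically. A standalone Python sketch using a unit sphere as the SDF; names and the finite-difference approach are illustrative, not the project's implementation:

```python
import math

def sphere_sdf(x, y, z):
    # Unit sphere at the origin: negative inside, positive outside.
    return math.sqrt(x * x + y * y + z * z) - 1.0

def attraction_force(sdf, p, delta=1e-4):
    """-gradient(|sdf|): the absolute value measures distance to the
    surface, its gradient points away from the surface, and negating
    it turns the field into surface attraction."""
    force = []
    for axis in range(3):
        hi, lo = list(p), list(p)
        hi[axis] += delta
        lo[axis] -= delta
        d = (abs(sdf(*hi)) - abs(sdf(*lo))) / (2.0 * delta)
        force.append(-d)
    return tuple(force)

# Outside the sphere the force points inward, inside it points outward -
# both sides get pulled toward the surface.
outside = attraction_force(sphere_sdf, (2.0, 0.0, 0.0))
inside  = attraction_force(sphere_sdf, (0.5, 0.0, 0.0))
```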

The transformed field is then cached into a 32x32x32 RGB float 3D texture and used as a force applied to the rigidbodies.

Surface Flow

A few issues became apparent. Firstly, movement is limited, primarily just jittering at the edges. This was anticipated, and I plan to address it by implementing curl.

Secondly, the visible voxel layer lines on some walls suggest that an increase in the forcefield resolution may be necessary.

The application of curl noise here is inspired by the way curl is generated in 2D. In that case scalar values from Perlin noise are interpreted as lengths of vectors perpendicular to the plane. This same approach is applied to the surfaces of 3D objects.

The gradient of the Signed Distance Field (SDF) generates a vector field where vectors are perpendicular to the shape’s surface. These vectors are then multiplied by the values of 3D Perlin noise. The curl of this modified gradient produces vectors parallel to the object’s surface. Using this curl as a velocity field results in motion paths that roughly conform to the surface.
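The claim that this curl runs parallel to the surface can be checked numerically: curl(n·∇sdf) = ∇n × ∇sdf, which is always perpendicular to the SDF gradient, i.e. tangent to the surface. A standalone Python sketch, using a sphere SDF and a product of sines as a stand-in for Perlin noise (all of it illustrative):

```python
import math

def sphere_sdf(x, y, z):
    # Unit sphere at the origin.
    return math.sqrt(x * x + y * y + z * z) - 1.0

def noise(x, y, z):
    # A smooth scalar field standing in for 3D Perlin noise.
    return math.sin(1.7 * x) * math.sin(2.3 * y) * math.sin(3.1 * z)

def gradient(f, x, y, z, d=1e-4):
    # Central-difference gradient of a scalar field.
    return ((f(x + d, y, z) - f(x - d, y, z)) / (2 * d),
            (f(x, y + d, z) - f(x, y - d, z)) / (2 * d),
            (f(x, y, z + d) - f(x, y, z - d)) / (2 * d))

def field(x, y, z):
    # SDF gradient scaled by the scalar noise value.
    n = noise(x, y, z)
    return tuple(n * g for g in gradient(sphere_sdf, x, y, z))

def curl(F, x, y, z, d=1e-3):
    dX = [(a - b) / (2 * d) for a, b in zip(F(x + d, y, z), F(x - d, y, z))]
    dY = [(a - b) / (2 * d) for a, b in zip(F(x, y + d, z), F(x, y - d, z))]
    dZ = [(a - b) / (2 * d) for a, b in zip(F(x, y, z + d), F(x, y, z - d))]
    return (dY[2] - dZ[1], dZ[0] - dX[2], dX[1] - dY[0])

# At a point on the sphere's surface, the curl is perpendicular to the
# surface normal - motion stays roughly on the surface.
p = (1 / math.sqrt(3),) * 3
c = curl(field, *p)
normal = gradient(sphere_sdf, *p)
dot = sum(a * b for a, b in zip(c, normal))
```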

The combination of the surface attraction forcefield and surface curl has effectively produced the intended result for a simpler shape. The next step involves testing this method on a more complex humanoid form to assess its effectiveness in a detailed scenario.

Proof of Concept

When sculpting the creature’s model, I focused only on its primary features. There is no reason to add any details – they would be lost in the final effect.

I used oil-based clay and photogrammetry. It’s a matter of personal preference – the model could have been made in a 3D modelling package with the same results.

Following the scanning and cleanup in Blender, the mesh underwent the same procedure as the initial test shape. A Signed Distance Field was generated and then processed to obtain the forcefield. Subsequently, the forcefield was stored as a 32x32x32 pixel 3D float texture.

The initial test answered two of the questions: creating a procedural creature with real-time rigidbodies is feasible, and storing force fields as textures appears to be an effective approach.

The next challenge is determining the required number of stone fragments to accurately represent the shape.

The humanoid shape is discernible even with a surprisingly low number of objects, which also gives the creature a rough and untamed appearance.

The final and most daunting challenge is animating the creature. Given that its shape is texture-defined, the most straightforward method of animation is a frame-by-frame approach. While not ideal due to the substantial memory usage of 3D textures, this method is easy to implement. For this, the original mesh was rigged and posed. Each frame then underwent the same forcefield generation process.

The result is rather underwhelming. The creature simply ‘morphs’ between frames, lacking any discernible movement of the limbs. While adding more in-between frames might alleviate the issue, a more robust solution, potentially based on skeletal animation, would be more appropriate.

Further Development

Using a procedural creature as the player’s avatar shows promise. The shape remains easily identifiable even with a limited number of debris chunks. Incorporating curl noise to introduce secondary motion to the surface was a good decision – it feels natural and brings the creature to life.

Animation presents a challenge; the basic frame-by-frame approach resulted in chaotic motion. A more sophisticated system will be necessary, particularly given that the creature will be driven by player input.

Performance was intentionally set aside for now; PhysX served as a convenient testbed but not necessarily an optimal one. Ultimately, the entire simulation will be moved to the GPU, so optimizing anything right now would be wasted effort.


I am really pleased with the outcomes of this test. Initially, I perceived curl noise as a mathematical novelty with limited application, but it turned into a valuable addition to my toolbox. Moreover, guiding the rigidbodies into the desired shape turned out to be more straightforward than I expected. I had prepared to use various tricks and hacks, but the implementation was surprisingly simple.

Creating my own tools for generating forcefields from scratch was a valuable learning experience, but it was more time-consuming than I anticipated. In a typical production environment, where time constraints are a significant factor, using a more established tool like Houdini would likely be a better choice.