Dissecting Curl Noise


This formula, F(p) = ∇⨯(∇(SDF(p))*K(p)) – ∇(|SDF(p)|), looks exactly like the arcane mathematical gibberish I would have memorized for a semester, only to forget a month after the exam. There’s a weird thing about these math oddities – the more abstract and convoluted they are, the higher the likelihood they’ll find their way into game development. This time, it’s no different; curl is a step necessary to transform a clay miniature into an animated creature.

Case Study

I’m developing an interactive GPU-based destruction tech demo that requires a player’s avatar. It will be a humanoid entity made from floating stone debris, a departure from the traditional golem design. In this concept, the stones are not part of a solid body but are manipulated by some force to form an external shell, while the interior remains hollow.


Similar themes are popular in film and gaming, with notable examples being Imhotep’s face materializing at the forefront of a sandstorm in ‘The Mummy’ and the Sandman character in ‘Spider-Man 3’.

‘Knack’ represents a similar concept but differs significantly: its creature is hand-animated rather than procedurally generated. On the other hand, ‘Katamari’ uses a procedural method for its growing ball but it is not animated.


A key requirement for my project is modeling the stone fragments as rigidbodies. This allows for two-way interaction between the environment and the creature, enabling it to navigate and interact with the physical world similarly to a typical player character, with the ability to push objects and destroy obstacles.

Although I plan to build the simulation from scratch using a compute shader, my current focus is on creating the creature. Several questions must be addressed before proceeding:

  • How should the stones be moved?
  • What is the minimum number of stones needed to clearly define a humanoid shape?
  • What method should be used to animate the humanoid shape?
  • Can real-time physics and rigidbodies be feasibly used for this purpose?

I have some ideas for arranging the stones into a specific shape, but ensuring their fluid flow on the surface is a different issue. To solve this, I’ll try to use curl noise, which seems to possess the visual qualities I need. Before fully committing to it, I want to understand curl noise and its properties to ensure it’s the right solution for the project.

“Curl of Noise”

The name ‘curl noise’ might suggest it’s a type of noise, like Worley or Perlin, but that can be somewhat misleading. Rather than indicating a specific type, ‘curl’ denotes an operation applied to input noise. Importantly, this operation isn’t confined to noise alone; various inputs such as vector fields or textures can also undergo the curl treatment. The concept is the same as in heightfield-to-normalmap algorithms, just with a different formula.

Simply put, it can be considered noise subjected to the curl operation. It is probably easier to think of it as “curl of noise”.

The versatility of curl lies in its ability to handle various types of data. It extends beyond conventional applications like particle systems – it can be used to animate materials and aid in generating procedural geometry, to name a few examples.

Curl in 2D

As highlighted earlier, curl noise shares certain similarities with algorithms used in normal map generation. It is easier to understand how it works when comparing it to something familiar. To illustrate, let’s use a few layers of Perlin noise generated in Substance Designer. These layers will be blended together into a heightmap, the base for a normal map.

Bright spots represent hills, and dark areas represent valleys. To create a normal map, the next step is to calculate the slope or gradient.


To achieve this, for any given point P, we need to sample four nearby points. The height difference between the east (E) and west (W) points will be represented as the x-component of the slope, while the difference between the north (N) and south (S) points will stand for the y-component accordingly.

Vector2 SampleGradient( Texture2D texture, Vector2 coords, float delta )
{
    float dX = texture.GetPixelBilinear(coords.x + delta, coords.y).r
             - texture.GetPixelBilinear(coords.x - delta, coords.y).r;
    float dY = texture.GetPixelBilinear(coords.x, coords.y + delta).r
             - texture.GetPixelBilinear(coords.x, coords.y - delta).r;
    return new Vector2(dX, dY) / ( 2 * delta );
}

Repeating this process for every point in the heightfield generates a vector field where each vector indicates the direction uphill. After calculating the gradient, additional steps are needed to generate a normal map. Initially, the gradient is inverted to convert slope information into normals. The Z component is subsequently reconstructed, ensuring the normal remains a unit vector. Following this, the X and Y components are packed into the 0-1 range for storage, allowing them to be preserved without clipping negative values. When juxtaposed side by side, the relationship between the gradient and the normal map becomes apparent.
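Outside of any engine, the gradient-to-normal conversion described above can be sketched in a few lines. This is a minimal Python version (the function name is mine, not from the original tooling):

```python
import math

def gradient_to_normal_pixel(dx, dy):
    # Invert the gradient: the normal leans away from the uphill direction.
    nx, ny = -dx, -dy
    # Reconstruct Z so that (nx, ny, nz) is a unit vector; a heightfield
    # z = h(x, y) has the (unnormalized) normal (-dh/dx, -dh/dy, 1).
    length = math.sqrt(nx * nx + ny * ny + 1.0)
    nx, ny, nz = nx / length, ny / length, 1.0 / length
    # Pack from [-1, 1] into [0, 1] so negative X and Y survive storage.
    return (nx * 0.5 + 0.5, ny * 0.5 + 0.5, nz * 0.5 + 0.5)
```

A flat patch (zero gradient) maps to the familiar normal-map blue, (0.5, 0.5, 1.0).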

Note that the arrows overlaid on both the gradient and normal map represent the same gradient values for easy comparison.


The steps for calculating curl are nearly identical:

Vector2 SampleCurl( Texture2D texture, Vector2 coords, float delta )
{
    float dX = texture.GetPixelBilinear(coords.x + delta, coords.y).r
             - texture.GetPixelBilinear(coords.x - delta, coords.y).r;
    float dY = texture.GetPixelBilinear(coords.x, coords.y + delta).r
             - texture.GetPixelBilinear(coords.x, coords.y - delta).r;
    return new Vector2(dY,-dX) / ( 2 * delta );
}

The key difference lies in the direction. While both curl and gradient vectors have the same length, the curl vectors don’t point uphill; rather, they point sideways. When arranged in an array, these curl vectors create a pattern resembling contour lines on a map.

Overlaying curl and gradient vectors reveals that they are essentially the same, only rotated 90 degrees. A closer examination of the code confirms this.


return new Vector2(dX, dY) / ( 2 * delta );  // gradient


return new Vector2(dY,-dX) / ( 2 * delta );  // curl

This relationship between curl, gradient, and normal map comes in handy when generating curl: a curl texture can be obtained by simply converting a normal map.

The normal map, essentially an inverted gradient with an additional z component, can be transformed into curl through a few simple operations. It’s important to note that differences exist between OpenGL and DirectX normal maps, potentially requiring adjustments to the sign of the Y component during the transformation process.
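For illustration, here is that conversion in Python: unpacking a normal-map texel and rotating it into a curl vector. It glosses over the fact that the stored normal was normalized, so the result is a scaled version of the true curl – fine for a direction field. An OpenGL-style green channel is assumed:

```python
def normal_pixel_to_curl(r, g):
    # Unpack from [0, 1] back to [-1, 1]; the normal map stores the
    # inverted gradient, so nx ~ -dX and ny ~ -dY.
    nx = r * 2.0 - 1.0
    ny = g * 2.0 - 1.0
    dx, dy = -nx, -ny
    # For a DirectX-style normal map, flip the sign of ny first.
    # Rotating the gradient 90 degrees gives the curl direction.
    return (dy, -dx)
```

The neutral texel (0.5, 0.5) yields a zero vector, as expected.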

Curl vs Gradient

Why bother with curl when the gradient is essentially the same and more intuitive? Let’s imagine that both curl and gradient represent velocity fields describing the motion of a fluid. If we place a particle in that fluid, we can trace its path.

The dot is our starting point, and the path unfolds after 10,000 integration steps. We’re focused on the nature of the motion rather than achieving pinpoint accuracy, so the forward Euler method works well:

Vector2[] IntegrateCurl( Texture2D texture, Vector2 start, float stepSize, int stepCount )
{
    Vector2[] path = new Vector2[stepCount];
    path[0] = start;
    for( int i = 1; i < stepCount; i++ )
    {
        path[i] = path[i - 1] + SampleCurl( texture, path[i - 1], 0.01f ) * stepSize;
    }
    return path;
}

Let’s calculate multiple paths with origins located nearby. Each path will have a different shade:

Gradient paths typically converge to a single point rather swiftly. In contrast, curl generates long, contour-like lines that follow a twisting pattern. This distinctive feature is not a byproduct of utilizing Perlin noise as an input; instead, it is an inherent trait of curl, known for being divergence-free.


In fluid flow, divergence tells us if fluid is spreading out or coming together at a point. Positive divergence means spreading out, and negative means coming together. To imitate the flow of water in a pool, we use incompressible fluid as an approximation. Incompressible fluid has zero divergence, meaning it doesn’t spread out or come together. That’s why opting for a divergence-free curl is the better choice when creating a velocity field.
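The divergence-free claim is easy to verify numerically. The sketch below (plain Python, with a pair of sine waves standing in for noise) builds the 2D curl of a scalar field and then measures its divergence with central differences – it vanishes up to truncation and floating-point error:

```python
import math

def height(x, y):
    # Any smooth scalar field works; sines stand in for Perlin noise.
    return math.sin(3.0 * x) * math.cos(2.0 * y)

def curl2d(x, y, d=1e-4):
    # 2D curl of a scalar field: its gradient rotated 90 degrees.
    dx = (height(x + d, y) - height(x - d, y)) / (2 * d)
    dy = (height(x, y + d) - height(x, y - d)) / (2 * d)
    return (dy, -dx)

def divergence(x, y, d=1e-3):
    # div F = dFx/dx + dFy/dy, again with central differences.
    cxp = curl2d(x + d, y)[0]
    cxm = curl2d(x - d, y)[0]
    cyp = curl2d(x, y + d)[1]
    cym = curl2d(x, y - d)[1]
    return (cxp - cxm) / (2 * d) + (cyp - cym) / (2 * d)
```

Analytically the result is ∂²h/∂y∂x − ∂²h/∂x∂y, which is identically zero for any twice-differentiable h – the same cancellation of mixed partial derivatives that makes curl divergence-free in 3D.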

Curl in 3D

The comparison between gradient and curl was helpful in illustrating the concept in 2D. However, in 3D, the relationship between the two breaks down. Gradient is defined for scalar-valued fields, while curl in 3D can only be defined for vector-valued fields. In 2D, we made the comparison work by treating the scalar field like a vector field. The vectors were oriented perpendicular to the texture plane, and their magnitudes were based on brightness.

To extend this approach to 3D, we need a new source of noise – similar in nature to Perlin noise but in vector field form.

For equivalent results to Perlin noise in 3D, we need a 3D vector noise field that fulfills specific criteria:

  • Tileable in all directions:
    Essential for storing the result in a 3D texture. Imagine the texture as a cube; the left side should seamlessly match the right, the top should fit the bottom, and so on.
  • Isotropic:
    It should behave uniformly in all directions, maintaining the same frequency and amplitude along each axis. This ensures that, regardless of where or in which direction we sample, the resulting turbulent motion remains consistent.
  • Differentiable:
    The sampling method used is essentially a symmetric difference quotient. To achieve a smooth-looking curl, at least the first derivative needs to be continuous.

3D Noise

Similar to Perlin noise, the process begins by generating a set of random directions, evenly distributed on a uniform grid and constrained within a bounding cube. Each node on the grid is assigned a random unit vector.

While a magnitude of one is not obligatory, it tends to make the field more visually interesting.
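A uniform random direction is commonly drawn by normalizing a vector of Gaussian samples; picking each component uniformly from [-1, 1] instead would bias the diagonals. A small Python sketch of this standard trick (not the project’s actual code):

```python
import math
import random

def random_unit_vector(rng=random):
    # Independent Gaussian components give a spherically symmetric
    # distribution, so the normalized vector is uniform on the sphere.
    x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)
```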

Using a uniform grid with randomized directions leads to an isotropic field. Interpolating between these generated vectors forms a continuous 3D field. Wrapping indices during interpolation ensures tileability, creating a seamless and repeating pattern.

int WrapIndex(int index, int gridSize)
{
    return ((index % gridSize) + gridSize) % gridSize;
}
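In one dimension the wrapped interpolation looks like this – a simplified Python sketch; the 3D version applies the same lerp along each axis in turn, blending 8 grid corners. Python’s `%` already handles negatives, but the double modulo is kept to mirror the C# version:

```python
import math

def wrap_index(index, grid_size):
    # Double modulo keeps negative indices in range as well.
    return ((index % grid_size) + grid_size) % grid_size

def sample_periodic(values, x):
    # Linear interpolation over a periodic 1D grid of values.
    n = len(values)
    i = math.floor(x)        # cell index
    t = x - i                # fractional position inside the cell
    a = values[wrap_index(i, n)]
    b = values[wrap_index(i + 1, n)]
    return a + (b - a) * t
```

Sampling at -0.5 and at n - 0.5 returns the same value – exactly the seamless wrap-around the 3D texture needs.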

Linear interpolation between direction vectors doesn’t provide a differentiable function. A piecewise linear function results in a piecewise constant derivative, leading to a blocky curl texture. Smoothstep interpolation will solve that problem in a single line of code:

Vector3 Smoothstep(Vector3 vectorA, Vector3 vectorB, float t)
{
    t = 3 * t * t - 2 * t * t * t;
    return Vector3.Lerp(vectorA, vectorB, t);
}

Higher-order interpolation yields noticeably smoother transitions between values:

While the current result meets all the requirements, it lacks complexity. To add more detail, multiple layers, called octaves, will be blended together.

public class MultiOctaveNoise : ScriptableObject
{
    TileableNoise[] _octaves = new TileableNoise[0];
}

Higher-frequency octaves have a progressively smaller impact on the final result.

public Vector3 Sample(Vector3 position)
{
    Vector3 sum = Vector3.zero;
    float weight = 1.0f;
    float accumulatedWeight = 0.0f;
    for (int i = 0; i < _octaves.Length; i++)
    {
        sum += _octaves[i].Sample(position) * weight;
        accumulatedWeight += weight;
        weight *= 0.5f;
    }
    return sum / accumulatedWeight;
}

This type of texture is commonly generated using Perlin noise with different seeds for each of the X, Y, and Z components. The approach presented here, inspired by Perlin but rearranging some steps to obtain a vector field directly, is not only computationally more efficient but also easier to code.

Curl in 3D Space

The 3D vector noise serves as the foundation, but to transform it into a flow field, the next step is to compute its curl. Calculating curl in 3D space isn’t notably more complex or computationally expensive than its 2D counterpart. In this implementation, we utilize 6 samples for the 3D case, compared to the 4 used in the 2D scenario.

Unit vectors i, j, k correspond to the x, y, and z axes. The curl of a 3D vector field P can be written as:

∇⨯P = (∂Pz/∂y − ∂Py/∂z)i + (∂Px/∂z − ∂Pz/∂x)j + (∂Py/∂x − ∂Px/∂y)k

Code representing the formula:

public static Vector3 SampleCurl(MultiOctaveNoise noise, Vector3 position, float delta)
{
    Vector3 deltaX = Vector3.right   * delta;
    Vector3 deltaY = Vector3.up      * delta;
    Vector3 deltaZ = Vector3.forward * delta;
    float   span   = 2 * delta;
    Vector3 dX = (noise.Sample(position + deltaX)
                - noise.Sample(position - deltaX)) / span;
    Vector3 dY = (noise.Sample(position + deltaY)
                - noise.Sample(position - deltaY)) / span;
    Vector3 dZ = (noise.Sample(position + deltaZ)
                - noise.Sample(position - deltaZ)) / span;
    Vector3 curl = (dY.z - dZ.y) * Vector3.right  +
                   (dZ.x - dX.z) * Vector3.up     +
                   (dX.y - dY.x) * Vector3.forward;
    return curl;
}

Applying curl to the tileable noise vector field yields the following result:

Curl of a tileable vector field is also tileable:

However, whether it can effectively serve as a flow field is still to be determined.

Curl Noise Motion

The most effective way to visualize the motion is, unsurprisingly, through animation. If the curl vector field can serve as a flow field, it should mimic particle behavior as if they were suspended in turbulent fluid.

A grid of 30 x 30 spheres was arranged to represent particles. Each sphere had a path generated through 3000 steps using the Euler method. These paths were then transformed into rope meshes. To indicate tiling boundaries, a wireframe cube was included.

The flow blends several main currents with random motion. I want to use it for simulating debris movement in a way that mimics the motion of bird flocks; the result is very close to that.

The trajectories of particles not only simulate fluid-like movements but also create formations resembling vines or tentacles. It will certainly find use in procedural mesh generation.


Curl relies on lots of samples from a layered vector field, making it computationally heavy. To handle this, the tileability requirement was introduced, making it possible to store calculations in a 3D texture. But there’s a trade-off – it introduces inaccuracies because the analytical definition is per point, while the 3D texture stores values per voxel and interpolates them. Further complicating matters, the values are stored with limited precision.

Let’s explore how these inaccuracies impact the motion within the vector field. The previously generated field is cached as a texture with dimensions 64x64x64 and a precision of 32 bits per channel.

The trajectory is rapidly diverging, yet the crucial aspect, the character of the motion, is remarkably well-preserved. Through caching, it becomes feasible to substitute all the operations necessary for obtaining curl with a single sampling of a 3D texture. This is a significant improvement, especially considering that we need the curl value for thousands of objects each frame.


Until now, this has been a mere replication of the systems already found in Unreal’s Niagara and Unity’s particle system. However, the capability to store curl in an easily accessible and modifiable form opens up new possibilities. Now, it can be applied with rigid bodies as a force field, as opposed to the velocity field used previously. This allows debris fragments to interact with each other and with the surrounding environment.

To set the object in motion, you only need to sample the cached curl at the object’s center of gravity, convert the color to a vector, and apply it as a force:

void ApplyForce(Rigidbody rigidbody, Texture3D curl, float fieldSize, float strength)
{
    Vector3 samplingPoint = rigidbody.position / fieldSize;
    float u = samplingPoint.x;
    float v = samplingPoint.y;
    float w = samplingPoint.z;
    Color sample = curl.GetPixelBilinear(u, v, w);
    Vector3 force = new Vector3(sample.r, sample.g, sample.b) * strength;
    rigidbody.AddForce(force);
}

In this proof of concept, NVidia PhysX was employed for physical calculations. While effective, it’s not the optimal solution, as it restricts the number of objects. A final solution would likely require a more customized approach, better suited to the parallel nature of the problem, utilizing the GPU and compute shaders.

The forcefield texture is scrolled to give the illusion of the force being animated:

    Vector3 samplingPoint = rigidbody.position / fieldSize + panningSpeed * Time.time;


While the conventional method of representing 3D shapes via triangle mesh is effective for rendering, it presents challenges in volumetric applications, particularly when storing a force field as a 3D texture. Fortunately, there’s a solution available – Signed Distance Fields.

SDFs represent shapes by assigning each space point a signed distance value relative to the nearest surface. Negative values denote points inside the shape, while positive values indicate points outside.

I won’t delve into the topic of SDF generation here; instead, I’ll concentrate on transforming SDF into a force field.

Surface Attraction

Similar to how gradients are crucial in generating normal maps from heightfields, the SDF gradient is key in providing extended surface normals in 3D. When used as a force, this gradient will push rigidbodies away from the shape. To make it more useful, some transformations are necessary. Firstly, taking the absolute value of the SDF yields the distance to the surface. The gradient of this value, in its raw form, propels objects away from the surface rather than away from the center. Multiplying it by -1 reverses the direction. This transformation is sufficient to guide rigidbodies into the desired shape.
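The transformation is easiest to sanity-check on the simplest SDF, a sphere. The sketch below (Python, with a hypothetical analytic sphere SDF) computes F(p) = -∇|SDF(p)| with central differences; the force points toward the surface from both sides:

```python
import math

def sphere_sdf(p, radius=1.0):
    # Signed distance to a sphere at the origin: negative inside.
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - radius

def attraction_force(p, d=1e-4):
    # F(p) = -gradient(|SDF(p)|), sampled with central differences.
    def f(q):
        return abs(sphere_sdf(q))
    force = []
    for i in range(3):
        qp = [p[j] + (d if j == i else 0.0) for j in range(3)]
        qm = [p[j] - (d if j == i else 0.0) for j in range(3)]
        force.append(-(f(qp) - f(qm)) / (2 * d))
    return force
```

At (2, 0, 0) the force is roughly (-1, 0, 0), pulling inward; at (0.5, 0, 0), inside the sphere, it is roughly (+1, 0, 0), pushing outward – both toward the surface. The absolute value makes the field non-differentiable exactly on the surface, which in practice shows up only as jitter there.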

The transformed field is then cached into a 32x32x32 RGB float 3D texture and used as a force applied to the rigidbodies.

Surface Flow

A few issues became apparent. Firstly, movement is limited, primarily just jittering at the edges. This was anticipated, and I plan to address it by implementing curl.

Secondly, the visible voxel layer lines on some walls suggest that an increase in the forcefield resolution may be necessary.

The application of curl noise here is inspired by the way curl is generated in 2D. In that case, scalar values from Perlin noise are interpreted as the lengths of vectors perpendicular to the plane. This same approach is applied to the surfaces of 3D objects.

The gradient of the Signed Distance Field (SDF) generates a vector field where vectors are perpendicular to the shape’s surface. These vectors are then multiplied by the values of 3D Perlin noise. The curl of this modified gradient produces vectors parallel to the object’s surface. Using this curl as a velocity field results in motion paths that roughly conform to the surface.
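That the resulting vectors stay parallel to the surface follows from the identity ∇⨯(k∇φ) = ∇k⨯∇φ, which is always perpendicular to ∇φ. A numeric spot check in Python, with a sphere SDF and a smooth stand-in for Perlin noise (both hypothetical placeholders, not the project’s fields):

```python
import math

def sdf(p):  # unit sphere
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - 1.0

def noise(p):
    # Smooth stand-in for 3D Perlin noise.
    return math.sin(2 * p[0]) * math.cos(3 * p[1]) + math.sin(p[2])

def grad(f, p, d=1e-4):
    # Central-difference gradient of a scalar function.
    return [(f([p[j] + (d if j == i else 0.0) for j in range(3)])
           - f([p[j] - (d if j == i else 0.0) for j in range(3)])) / (2 * d)
            for i in range(3)]

def field(p):
    # SDF gradient scaled by noise: the field under the curl operator.
    g = grad(sdf, p)
    k = noise(p)
    return [k * g[0], k * g[1], k * g[2]]

def curl(p, d=1e-3):
    def partial(axis, along):
        qp = [p[j] + (d if j == along else 0.0) for j in range(3)]
        qm = [p[j] - (d if j == along else 0.0) for j in range(3)]
        return (field(qp)[axis] - field(qm)[axis]) / (2 * d)
    return [partial(2, 1) - partial(1, 2),
            partial(0, 2) - partial(2, 0),
            partial(1, 0) - partial(0, 1)]
```

At any point on the sphere, the dot product of this curl with the surface normal (the SDF gradient) is numerically zero, confirming that the flow hugs the surface.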

The combination of the surface attraction forcefield and surface curl has effectively produced the intended result for a simpler shape. The next step involves testing this method on a more complex humanoid form to assess its effectiveness in a detailed scenario.

Proof of Concept

When sculpting the creature’s model, I focused only on its primary features. There was no reason to add any details – they would be lost in the final effect.

I used oil-based clay and photogrammetry. It is a matter of personal preference – the model could have been made in a 3D modelling package with the same results.

Following the scanning and cleanup in Blender, the mesh underwent the same procedure as the initial test shape. A Signed Distance Field was generated and then processed to obtain the forcefield. Subsequently, the forcefield was stored as a 32x32x32 pixel 3D float texture.

The initial test answered two of the questions: creating a procedural creature with real-time rigidbodies is feasible, and storing force fields as textures appears to be an effective approach.

The next challenge is determining the required number of stone fragments to accurately represent the shape.

The humanoid shape is discernible even with a surprisingly low number of objects, which also gives the creature a rough and untamed appearance.

The final and most daunting challenge is animating the creature. Given that its shape is texture-defined, the most straightforward method of animation is a frame-by-frame approach. While not ideal due to the substantial memory usage of 3D textures, this method is easy to implement. For this, the original mesh was rigged and posed. Each frame then underwent the same forcefield generation process.

The result is rather underwhelming. The creature simply ‘morphs’ between frames, lacking any discernible movement of the limbs. While adding more in-between frames might alleviate the issue, a more robust solution, potentially based on skeletal animation, would be more appropriate.

Further Development

Using a procedural creature as the player’s avatar shows promise. The shape remains easily identifiable even with a limited number of debris chunks. Incorporating curl noise to introduce secondary motion to the surface was a good decision – it feels natural and brings the creature to life.

Animation presents a challenge; using basic frame-by-frame approach resulted in chaotic motion. A more sophisticated system will be necessary, particularly given that the creature will be driven by player input.

Performance was intentionally set aside for now; PhysX served as a convenient testbed but not necessarily an optimal one. Ultimately, the entire simulation will be moved to the GPU, so optimizing anything right now will be a wasted effort.


I am really pleased with the outcomes of this test. Initially, I perceived curl noise as a mathematical novelty with limited application, but it turned into a valuable addition to my toolbox. Moreover, guiding the rigidbodies into the desired shape turned out to be more straightforward than I expected. I had prepared to use various tricks and hacks, but the implementation was surprisingly simple.

Creating my own tools for generating forcefields from scratch was a valuable learning experience, but it was more time-consuming than I anticipated. In a typical production environment, where time constraints are a significant factor, using a more established tool like Houdini would likely be a better choice.