The most realistic effect is not necessarily the most convincing. Audio designers know that well and use frozen leeks and watermelons to create the sounds of breaking bones and tearing flesh. Chalk shot from a slingshot was a safer alternative to actual firearms in old westerns. Fake doesn’t have to mean worse, especially when it is hard to tell the difference. With that in mind, I will try to create complex flowfields without using computational fluid dynamics.

New, Better Jupiter

Jupiter is undeniably beautiful, but it’s safest to admire it from a distance. The characteristic patterns observed on its surface are, in fact, massive storms and Earth-sized cyclones. Jovian winds can reach speeds of hundreds of kilometers per hour, although they may appear tranquil from orbit.

I plan to feature a Jupiter-like planet as part of the skybox. While not a unique concept and often seen in the sci-fi genre, I want to take it a step further by animating it. In reality, the motion of the atmosphere is too slow to be noticed. However, I will speed it up substantially to make the movement obvious. This should capture players’ attention and remind them that the action is taking place on a distant, alien world.


While I don’t want to create a carbon copy of Jupiter, there is a set of features that immediately comes to mind when thinking about a gas planet. Hopefully, by recreating these, I will get visuals reminiscent of Jupiter.

There are three distinct flow patterns I want to replicate:

  • Cyclones: A couple of large, easily identifiable vortices, much like Jupiter’s ‘Great Red Spot.’ Typically elongated, they are often accompanied by a ‘wake’ – a trail of secondary vortices.
  • Jets: These are linear currents that run parallel to the equator, with easily visible turbulent transition layers.
  • Storms: Smaller, more volatile, and less defined flow structures that contribute to the texture of the atmosphere.

Animating Fluids

Recreating water or any other sort of fluid in games is challenging. Computational fluid dynamics is demanding in terms of memory and processing power, but that hasn’t prevented game developers from including water in their games. With a set of clever hacks and workarounds, there is no need for expensive simulation.

Color Cycling

Color cycling is a technique with a long history, dating back to systems like the NES. It’s akin to painting by numbers, where colors representing different numbers change each frame. Despite its simplicity, when applied to well-prepared input sprites, it can produce eye-catching effects.
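The mechanism is easy to sketch: the indexed sprite never changes, only the palette rotates each frame. A minimal plain-Python illustration (the palette and sprite here are made up for the example, not taken from any real game):

```python
# Color cycling: the indexed image is static; only the palette rotates.
PALETTE = [(0, 0, 64), (0, 64, 128), (0, 128, 255), (128, 200, 255)]

# A tiny 'water' sprite stored as palette indices, not colors.
SPRITE = [
    [0, 1, 2, 3],
    [1, 2, 3, 0],
]

def render(sprite, palette, frame):
    """Resolve indices through a palette rotated by `frame` steps."""
    n = len(palette)
    return [[palette[(index + frame) % n] for index in row] for row in sprite]

frame0 = render(SPRITE, PALETTE, 0)
frame1 = render(SPRITE, PALETTE, 1)
```

Because only the palette lookup changes, animating even a full-screen image costs almost nothing, which is exactly why the technique thrived on index-color hardware.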

Frame-by-Frame Animation

For a while, the frame-by-frame approach was the standard solution. Each frame was stored as a separate sprite, which demanded a lot of memory. As a result, this method was typically reserved for short animation loops consisting of only a few frames.

A limited number of animation frames leads to the ‘choppy’ motion typical of 90s shooters.

Texture Scrolling

As soon as games moved into fully textured 3D environments, texture scrolling became a viable option for creating rivers. This method involves incrementing one of the UV components over time to create the illusion of moving texture. When combined with appropriate geometry and shaders, texture scrolling can yield impressive results and remains a popular choice.

While versatile, this technique is limited to laminar flow – it is non-trivial to add vortices or any other complex flow patterns.
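At its core, the method is a single line of shader math: add velocity times time to one UV component and wrap back into the 0–1 range. A plain-Python sketch (the function name is mine, not from any engine):

```python
def scroll_uv(u, v, velocity, time):
    """Shift UVs along one axis and wrap into [0, 1), as GPU 'repeat' addressing does."""
    return (u + velocity * time) % 1.0, v

# After 2.5 time units at speed 0.2, u has wrapped around the texture once.
u, v = scroll_uv(0.75, 0.5, 0.2, 2.5)
```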

Quake UV Distortion

Id Software, the developer behind Quake, is renowned for its innovations. The water distortion they created is rarely listed among them, but is still worth looking into. This simple formula lends Quake’s lava, water, and portals their distinctive appearance.

Although it’s straightforward to replicate using shaders, the original effect was made without them, relying instead on software rendering.
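The commonly cited form of the effect offsets each texture coordinate by a sine of the *other* coordinate plus time; the exact constants used by the original engine differ, so treat the numbers below as placeholders:

```python
import math

def quake_distort(u, v, time, amplitude=0.05, frequency=2.0 * math.pi):
    """Offset each UV axis by a sine wave driven by the other axis and time.

    Amplitude and frequency here are illustrative, not Quake's actual constants.
    """
    du = amplitude * math.sin(v * frequency + time)
    dv = amplitude * math.sin(u * frequency + time)
    return u + du, v + dv
```

Sampling a texture with these wobbling coordinates produces the characteristic underwater shimmer.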

Unreal WaterPaint

WaterPaint is one of the most intriguing methods for simulating fluids in games, yet it’s also one of the most challenging to understand. It appears to generate the surface of the liquid and subsequently uses this information to distort a texture. The system’s complexity borders on overengineering, particularly given that the resulting effect is often overlooked in a fast-paced shooter.

Like Quake’s effect, this technique predates shaders. Texture pixels are manipulated by the CPU, and the result is then sampled just like a normal texture.

The methods mentioned are decades old and, on their own, may not hold up well today. However, they still have value as components within larger, more complex systems.

Velocity Texture

Let’s create a “universal” flow shader capable of representing any motion of the fluid. Intuitively, we’ll need two textures: one representing the color of the flowing substance, and another one storing the velocity field. This velocity field texture will be a 2-component 2D texture, with the red component representing the x component of the normalized velocity and the green component representing the y component.
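Since texture channels store values in [0, 1] while normalized velocity components span [-1, 1], a remapping is needed in both directions. A minimal sketch:

```python
def decode_velocity(r, g):
    """Map color channels [0, 1] back to velocity components [-1, 1]."""
    return 2.0 * r - 1.0, 2.0 * g - 1.0

def encode_velocity(vx, vy):
    """Inverse mapping, used when authoring the velocity texture."""
    return (vx + 1.0) * 0.5, (vy + 1.0) * 0.5
```

A mid-gray pixel (0.5, 0.5) therefore means “no motion,” which is worth remembering when painting velocity textures by hand.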

Mapping vectors to color

The velocity field is then used to incrementally modify the texture coordinates of the color texture, in the same manner as in the ‘Texture Scrolling’ technique described previously. However, in this case, the velocity values vary for each pixel.

The result, while interesting, is rather disappointing as it doesn’t accurately simulate fluid motion. Instead, it is an animated distortion that gradually bends the color texture over time.

The formula worked for simple scrolling because in that case the motion was linear and constant. At each point, the velocity was the same. However, here the velocity is more complex – it is defined per pixel. The correct approach would be to use integration.

Euler Method

Let’s simplify the problem. Imagine a tiny speck of dust sitting on the surface of moving water. We describe the water’s movement using a texture that stores its velocity. Now, we want to figure out the path this speck of dust would take.

The first solution that comes to mind is to take a small step forward, check and update the velocity, then take another step using the updated velocity, and repeat this process. This is called Explicit Euler Method:

The Explicit or Forward Euler method is often seen as the most basic and least accurate numerical integration method. The larger the integration step, denoted as “h,” the greater the error, and these errors accumulate over time. Even in the example shown, the integrated path represented by the orange arrows deviates significantly from the particle’s true path, depicted by the gray line. Fortunately, this inaccuracy won’t be noticeable in the animation.
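The stepping procedure can be written down in a few lines. The rotational test field below is my own example: its true paths are circles, so the outward drift of the integrated path is exactly the accumulating Euler error described above.

```python
def euler_path(position, velocity_at, h, steps):
    """Trace a particle through a velocity field with the Explicit Euler Method."""
    x, y = position
    path = [(x, y)]
    for _ in range(steps):
        vx, vy = velocity_at(x, y)     # sample the field at the current position
        x, y = x + h * vx, y + h * vy  # step forward along the local velocity
        path.append((x, y))
    return path

# A pure rotation around the origin: the exact paths are circles.
rotation = lambda x, y: (-y, x)
path = euler_path((1.0, 0.0), rotation, 0.1, 10)
```

With step h and a pure rotation, each step multiplies the radius by sqrt(1 + h²), so the integrated particle slowly spirals outward instead of staying on its circle.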

The problem is, it’s not easy to translate the Euler Method directly into a shader. We need to keep track of the particle’s position after each step, and this is something a shader alone cannot do. The position, in the form of a deformed color texture, has to be stored in a texture.

The shader reads the texture storing the color of the fluid and deforms it slightly based on the velocity texture. The resulting deformation is then written back into the color texture. This operation is performed every frame of the animation.
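One such step can be sketched on a tiny grid in plain Python. Here each pixel pulls its color from slightly upstream, which is one common way (among several) to implement the per-pixel deformation; a real shader would use the GPU’s bilinear filtering instead of nearest-neighbour sampling.

```python
def advect(color, velocity, h):
    """One feedback-loop step: every pixel pulls color from slightly upstream.

    `color` is a 2D grid of values; `velocity` maps (x, y) to a vector.
    """
    height, width = len(color), len(color[0])
    out = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            vx, vy = velocity(x, y)
            # Look upstream, wrapping at the edges like a repeating texture.
            sx = round(x - h * vx) % width
            sy = round(y - h * vy) % height
            out[y][x] = color[sy][sx]
    return out

# A uniform rightward flow shifts the whole image one pixel per step.
grid = [[0.0, 1.0, 0.0, 0.0]]
step = advect(grid, lambda x, y: (1.0, 0.0), 1.0)
```

Feeding `step` back in as the next frame’s `color` is the texture-based equivalent of repeating the Euler step.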

To complicate the problem further, it’s generally not possible to read from and write to the same texture in a fragment shader. Unreal Engine solves this problem with Canvas.

Canvas Node Setup

Canvas enables a flow shader (Material) to be drawn into a Render Target texture. What sets Canvas apart is its capability to use the same Render Target in both the input and output of the flow shader, forming a feedback loop. To make this process work, several components are necessary:

  • Render Target Texture: This image stores the state of the flow, or the color of the fluid in our case. It must be created before the animation begins, and its initial color has to be set.
  • Rendering Event: The process of updating the animation has to be performed every frame, or at least every frame the animated object is visible.
  • Flow Material Instance: An instance of the Flow Material is necessary, and it has to be supplied with its own Render Target Texture.

Once all these elements are in place, the Rendering Event, which corresponds to one step of fluid simulation, can be achieved using just 3 nodes:

  • Begin Draw Canvas to Render Target
  • Draw Material
  • End Draw

With everything set up, the resulting animation should look like this:

The initial image is shifted in a more fluid manner, with each pixel moving more continuously. However, it still falls short of resembling the motion of a liquid.

Improving Velocity Field

Up until now, we’ve relied on basic smoothed 2D vector noise. While sufficient for testing basic functionality, it is not enough to realistically represent fluid flow. Liquids tend to swirl around, forming vortices and other complex patterns, which simple noise cannot approximate effectively.

Fortunately, a mathematical operator, the curl, can be particularly useful here. By applying it to scalar noise, we transform it into a velocity field full of vortices.

To describe curl in the simplest way possible: in the 2D case, curl will create a clockwise flow around areas brighter than their surroundings, and a counterclockwise flow around darker areas. I describe curl in more detail in the Dissecting Curl Noise article.

There are multiple ways to calculate curl. DDX and DDY operators are useful in Shadertoy, where the input is not a static texture but a procedural noise. For more traditional applications like Unreal and Unity, it’s probably better to generate it using image generation software like Substance Designer or Photoshop. Any software capable of generating a normal map from a grayscale image will be helpful here, as converting a normal map to curl is simply a matter of swizzling and inverting channels.
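For reference, a finite-difference version of the 2D curl operator is tiny. This plain-Python sketch uses one of several equivalent sign conventions; with image coordinates (y growing downward), it matches the “clockwise around bright spots” behaviour described above.

```python
def curl(field, x, y):
    """2D curl of a scalar field on a grid: v = (dF/dy, -dF/dx), central differences.

    Assumes (x, y) is an interior pixel of `field`.
    """
    dfdx = (field[y][x + 1] - field[y][x - 1]) * 0.5
    dfdy = (field[y + 1][x] - field[y - 1][x]) * 0.5
    return (dfdy, -dfdx)

# A bright bump centred on a 5x5 patch.
bump = [[-((x - 2) ** 2 + (y - 2) ** 2) for x in range(5)] for y in range(5)]

# To the right of the bright centre the flow points 'down' the y axis --
# with y growing downward (image convention), that is a clockwise swirl.
vx, vy = curl(bump, 3, 2)
```

The normal-map shortcut mentioned above works because a normal map already stores these same screen-space derivatives; swizzling and flipping its channels reorders them into the curl.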

The addition of swirly motion adds a fluid-like quality to the animation, although it still appears somewhat static.

The stationary vortices are the reason behind the artificial appearance. This can be addressed by distorting the velocity field using the same function that manipulated lava in Quake.

The flow still lacks the complexity needed to resemble natural motion, but this can be remedied with a more detailed velocity field.


When the algorithm runs for too long, another problem becomes apparent: mixing. While it’s a desired feature, after a while, it turns the flow colors into a solid mass, devoid of any visual interest.

To remedy that, color can be reintroduced by sampling the initial color texture and blending it with the Render Target texture. This process involves using a point grid mask to mimic pigment dissolving in the fluid.

That solves the issue of mixing, but the pattern of points remains too noticeable. By using a noise texture and applying Quake distortion to it, the effect becomes less conspicuous and more natural.

Jupiter’s appearance is attributed to its water-ammonia clouds, which have a range of compositions and colors. These clouds undergo atmospheric circulation, occasionally pushing layers from below to the surface. This phenomenon results in changes in surface colors and structure over time.

I’ll artificially limit cloud compositions to 3 and assign a texture channel to each. Then, to simulate shifts in composition, I’ll utilize color cycling. In shader terms, the initial color, before being sampled and mixed with the render target, will undergo slight modifications over time. The result may look psychedelic, but ultimately, it will be replaced by a more natural set of colors. Right now, those rainbow patterns serve as a useful placeholder.

The left side displays the initial color, while the right side shows the flow.


Another side effect of mixing is the blurring of the texture. As the image becomes progressively smoother, the details are lost, causing the texture to appear low-resolution, which is certainly an undesirable outcome.

The obvious solution is to use sharpening – an operation opposite to blurring. In its simplest form, it samples five pixels – the original one and its four neighbors – and returns their weighted sum. The layout of the pixels with their respective weights is called a kernel.

I will use a slightly different formula, one that isolates the ‘delta’ or change in color. This delta is then multiplied by the strength of sharpening. This approach gives me more control over the effect.

The material graph representation might seem daunting at first glance, but it’s essentially the result of repeating the same sampling operation multiple times with different parameters.

Sharpen is a separate Canvas rendering operation that follows the animation step.

Sharpening enhances the details but also introduces stripe artifacts.

This occurs because it is part of the feedback loop. It amplifies the difference between pixels, and the next sharpening step further magnifies the difference. This continues until the color values reach their maximum or minimum value.

The solution to that problem is far from elegant but very effective – clamping the calculated difference. This way, the difference doesn’t increase exponentially and the artifacts have no chance to form.
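Putting the delta formulation and the clamp together, the whole per-pixel operation can be sketched as follows (plain Python, single channel, interior pixels only):

```python
def sharpen(img, x, y, strength, limit):
    """5-tap sharpen: amplify the clamped difference between a pixel and its neighbours."""
    centre = img[y][x]
    neighbours = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1]) / 4.0
    delta = centre - neighbours
    # Clamping keeps the feedback loop from amplifying the delta without bound.
    delta = max(-limit, min(limit, delta))
    return centre + strength * delta

# A lone bright pixel: the raw delta is 1.0, but the clamp caps it at 0.1.
img = [[0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0],
       [0.0, 0.0, 0.0]]
result = sharpen(img, 1, 1, strength=2.0, limit=0.1)
```

Without the `limit`, repeated application in the feedback loop would keep multiplying the delta until the stripe artifacts appear.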

Sharpening with clamping:

With that in place, we can consider the whole system complete – we have a set of tools that allow for recreating a wide array of flow types in real-time. Now, to make further improvements, we need to enhance the input data – specifically, the velocity field.

Flow Patterns

There are 3 flow patterns that can be easily identified on Jupiter:

  • Cyclones
  • Jets
  • Storms

This list is by no means exhaustive. While there are numerous smaller and less noticeable flow details, many of them can be replicated using the same techniques employed for the main three patterns.

I will attempt to translate these patterns into corresponding velocity fields. This way, the complex flow on Jupiter can be broken down into individual flow components. These components could then be rearranged later to create a new, unique gas giant.

Creating Flowfields

As mentioned earlier, in a real-game scenario like Unreal or Unity, it’s not practical to generate the velocity field from scratch. It’s more efficient to generate most of the components in Substance Designer or Photoshop and then combine them in the shader to achieve the desired result. This approach allows us to create complex patterns at no additional cost.

I chose to create velocity textures in Substance Designer due to its flexibility and non-destructive workflow.


A cyclone is essentially a large vortex. Creating one involves generating a large blurred black or white dot and passing it through the curl operator. To add more complexity, the result can be combined with another operator – the gradient. This allows the cyclone to either suck in or expel matter, making it more dynamic.

The relation between curl and gradient is explained here in more detail.

The velocity field is generated by blending a mixture of curl and gradient operators over the previously created flow pattern.

Substance Designer enables the creation of more complex and detailed velocity fields. In this case, the cyclone flowfield was slightly deformed and elongated, featuring a non-linear speed distribution. Unlike a flowfield generated in a shader from scratch, all these details incur no additional cost – everything is baked into the texture.


The bands around Jupiter are known as belts and zones. Belts consist of darker, warmer clouds, while zones are characterized by brighter clouds made of ice. Strong currents form at the transitions between these bands. These currents, known as jets, run parallel to the equator and alternate in direction. Where two jets meet, the flow becomes turbulent, creating chains of vortices.

Replicating that is relatively simple: blurred stripes represent the laminar flow of the jets, while a curl applied to a series of dots creates vortices. It’s worth noting the color of the dots; the spin of the resulting vortices has to match the direction of the surrounding jets.

Once again, the flowfield generated in Designer exhibits more detail. Transition vortices are more scattered and vary in size. Additionally, jets are slightly disturbed to create a more wavy flow.


“Storms” is the term I used to encompass all the smaller vortices and turbulent streams that accompany the main currents. They are essentially noise, and I will approach creating them in the same way I would create noise.

Noise typically consists of multiple layers called octaves. Each subsequent octave contains smaller details and has a diminishing influence. In the case of a velocity field, each layer also has to be animated separately.

Those layers are then blended together to form complex, turbulent motion.
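The layering scheme mirrors ordinary fractal noise. In this sketch, `base_velocity` stands in for any velocity field function, and the per-octave time offset is one simple way (of many) to animate the layers independently:

```python
def layered_velocity(base_velocity, x, y, time, octaves=3):
    """Sum octaves of a velocity field: each layer is finer, weaker,
    and animated at its own speed."""
    vx, vy = 0.0, 0.0
    frequency, amplitude = 1.0, 1.0
    for i in range(octaves):
        # Offsetting each octave by a different multiple of time
        # animates the layers independently.
        ox, oy = base_velocity(x * frequency + time * (i + 1), y * frequency)
        vx += amplitude * ox
        vy += amplitude * oy
        frequency *= 2.0   # smaller details...
        amplitude *= 0.5   # ...with diminishing influence
    return vx, vy
```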

The Substance Designer version features storms gathered into clusters. Two variants of that texture are packed into a single texture and blended using moving masks to simulate quickly shifting currents.

It’s a different approach that results in patches of turbulent flow, as opposed to the uniformly distributed storms generated in Shadertoy. These patches resemble what can be observed on Jupiter more closely.


The division into cyclones, jets, and storms was artificial but proved quite useful for illustrating some of the techniques that can be used to mimic real flowfields. Each flow pattern can be achieved in many different ways, with no single approach that can be described as the “right” one.

To merge all these components together, a simple addition would suffice, but using alpha blending allows for accentuating some features like cyclones and toning down turbulence in certain areas.

At this stage, when all the components are ready, blending them together is more a matter of artistic choice than mathematics. After all, none of the presented techniques have solid grounding in physics – they are just approximations of natural phenomena.

2010: The Year We Make Contact

When I started working on the animation of the gas giant, I was really excited about the idea because I naively thought that this was going to be something novel, never tried before. Obviously, I was wrong. Films like ‘Outland’, ‘2010: The Year We Make Contact’, and ‘Contact’ all featured animated Jupiter.

The most interesting portrayal here is the rendition created by Digital Productions for ‘2010: The Year We Make Contact’. The technology behind it is a marvel of CGI, even though it looks like a perfectly executed practical effect.

The basic idea remains largely the same: utilizing a flowfield to deform the initial image. However, the execution differs significantly. While I used Substance Designer to generate flow textures, the team at Digital Productions utilized actual fluid mechanics to simulate the flow. My solution to the problem of mixing was to artificially reintroduce the color, whereas they sidestepped the problem entirely by converting the image into particles.

Remarkably, all of this was accomplished without the aid of modern CGI software or computing power. Instead, it relied on the ingenuity of a team of brilliant engineers and artists, supported by the CRAY X-MP.

The work of Larry Yaeger and Craig Upson is described in greater detail in Siggraph and Cinefex articles. Additionally, there is a documentary available on YouTube.

Further Development

The presented methods should be sufficient to create convincing-looking flow, but not necessarily a visually appealing planet. Achieving that requires several additional steps:

  • Colors: Currently, the R, G, and B channels represent different substances. Ultimately, they will be replaced with a color texture.
  • UV Mapping: Currently, the texture is just a square; it needs to be wrapped around a sphere. However, I plan to use the method described in Flat Planets and apply it to a flat disc.
  • Shading: Atmosphere is lit differently than a solid, opaque object. A specialized shading model has to be created to complete the effect.


The initial setup required some effort, both in Unreal and in Substance Designer, but once in place, it allowed for easy tweaking and modifications. Since it does not rely on computational fluid dynamics, the motion can be handcrafted, which is both a strength and a challenge. It offers total freedom to create any flow imaginable, but requires the artist to have a basic understanding of fluid dynamics.

Most importantly, it can compete with actual fluid simulations while using only a fraction of resources. A full planet with a 1024×1024 texture takes less than 0.5ms to render, which is a modest price for such VFX.