
Making Atmosphere

Walking Through The Air

There are many ways of rendering skies in games. A classic technique, which is still one of the best, is to do it with images and map these images to a skybox (or sometimes a skydome). The humble skybox has been used for a long time and consists of one or several images. Sometimes they are rendered in 3D software, but they can also be painted directly by the artist. It is not uncommon to represent a sky using multiple skyboxes and cross-fade between them. The main benefit of using images for skyboxes is that they can hold a lot of detail, like clouds or landscape. However, they also come with drawbacks, extra memory usage being one of them.

[Image: sky flying showcase]
[Image: sky editor showcase]

The night sky in AER was made with a skybox; most of the sky, however, is rendered procedurally. One of our goals for AER was to make a dynamic sky that transitions beautifully and seamlessly as the player moves through it. This is one of the reasons why texturing the entire sky with images was not a suitable solution, and why I made a procedural sky instead. The procedural sky is not rendered with images but with a collection of parameters and properties that determine what the sky looks like.

[Image: example of procedural sky in AER]
[Image: the same sky with different settings]

Properties of a Procedural Sky

A procedural sky can have a lot of properties, some more important than others. Each zone of the game gets its own set of properties. The properties are first mapped to textures; these textures are then transformed into world space and scaled to fit the size of the entire world map of the game.

[Image: world projection]

Not all of the properties are texture mapped to the entire world, as that would be too expensive performance-wise, but some of the more important properties are chosen for it, like:

  • Sun height
  • Sun color
  • Fog thickness
  • Sky color

We sample these textures with the player's world position (the player's position relative to the center of the world) and viewpoint. This technique is typically called volume ray marching or volume ray casting (“Volume ray casting,” Wikipedia).
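
To make the world-space lookup concrete, here is a minimal CPU-side sketch in Python of the idea: remap the player's world position into the [0, 1] space of a property texture and sample it there. The world size, the tiny fog-thickness map and the bilinear helper are all made-up stand-ins, not the actual AER data or shader code.

```python
# Minimal sketch: sample a world-space property map (e.g. fog thickness)
# at the player's position. The 4x4 "texture" and world size are made up.

WORLD_SIZE = 2048.0  # width/depth of the world map in world units (assumed)

# Tiny placeholder property texture, values in [0, 1] (e.g. fog thickness)
FOG_MAP = [
    [0.1, 0.2, 0.3, 0.2],
    [0.2, 0.5, 0.6, 0.3],
    [0.1, 0.7, 0.9, 0.4],
    [0.0, 0.3, 0.4, 0.2],
]

def world_to_uv(x, z):
    """Remap a world-space XZ position (centered on the world origin) to [0, 1] UVs."""
    u = x / WORLD_SIZE + 0.5
    v = z / WORLD_SIZE + 0.5
    return min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0)

def sample_bilinear(tex, u, v):
    """Bilinearly sample a 2D list of floats with normalized UVs."""
    h, w = len(tex), len(tex[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bottom = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# Sample the fog thickness at the player's world position
player_x, player_z = 300.0, -150.0
fog_thickness = sample_bilinear(FOG_MAP, *world_to_uv(player_x, player_z))
print(fog_thickness)
```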

Texture Sampling With Ray Marching

In some cases the player might have two or more different zones in view at the same time, and we would expect to see that represented in the sky. In real life we would simply see the transition take place; to achieve the same thing in the game we use ray marching.

[Diagram: ray marching]

Ray marching is a technique computed in a shader. Our ray marcher for the fog breaks down into these simplified steps:

  1. Ray casting. A ray is sent from the view once for each pixel of the screen.
  2. Sampling. Several texture samples are taken at different distances along the ray.
  3. Compositing. Each sample is combined into the final fog color.

In other words, we take several steps through the fog texture along a path, taking a sample of the fog properties at each step. We take these samples along the view vector for every pixel of the screen. To compute the final sky/fog color we then composite all of the samples we gathered. Compositing is simple in our case: we take the average value of all the samples.
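
To make the three steps concrete, here is a minimal CPU-side sketch in Python of such a ray marcher. The step count, the distances and the fog_at() lookup are illustrative stand-ins rather than the shader used in AER; the compositing is the simple average described above.

```python
import math

def fog_at(x, y, z):
    """Placeholder fog property lookup: in the real shader this would be a
    texture sample of the world-space fog map at this point."""
    return max(0.0, 1.0 - y / 100.0)  # made-up: fog thins out with height

def march_fog(origin, direction, max_distance=500.0, steps=16):
    """Step along the view ray, sample the fog at each step,
    and composite by averaging the samples."""
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps * max_distance          # distance along the ray
        px = origin[0] + direction[0] * t             # sample position
        py = origin[1] + direction[1] * t
        pz = origin[2] + direction[2] * t
        total += fog_at(px, py, pz)                   # sampling step
    return total / steps                              # compositing: average

# One ray per screen pixel in the real shader; here just a single example ray.
camera = (0.0, 20.0, 0.0)
view_dir = (0.0, math.sin(math.radians(10)), math.cos(math.radians(10)))
print(march_fog(camera, view_dir))
```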

[Image: aurora editor showcase]
[Screenshot: aurora]

Aurora

The aurora is another example of ray marching. Although it looks very different from the fog, it works by the same principles. The key difference with the aurora is the compositing step, the calculation that spits out the final color. The effect still breaks down into the same basic steps: ray casting, sampling and compositing.
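
The aurora's actual compositing is its own calculation; as an illustration of how only that step changes, here is a hypothetical variant in the same Python sketch style. The ray casting and sampling loop stays the same, while the samples are accumulated additively with a brightness factor and clamped instead of averaged. The aurora_at() lookup and the glow constant are assumptions, not the AER implementation.

```python
def aurora_at(x, y, z):
    """Placeholder aurora intensity lookup (stand-in for a texture/noise sample)."""
    return max(0.0, 1.0 - abs(y - 80.0) / 10.0)   # made-up horizontal band

def march_aurora(origin, direction, max_distance=500.0, steps=16, glow=4.0):
    """Same ray casting and sampling steps as the fog; only the compositing differs.
    Here samples are accumulated additively and clamped (an assumed variant)."""
    accumulated = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps * max_distance
        p = tuple(o + d * t for o, d in zip(origin, direction))
        accumulated += aurora_at(*p) * glow / steps   # additive, brightness-weighted
    return min(accumulated, 1.0)                      # clamp instead of averaging

print(march_aurora((0.0, 20.0, 0.0), (0.0, 0.6, 0.8)))
```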

Stylized Effects

[Image: waterfalls]
[Image: water in AER]
[Image: rift editor showcase]

The interesting thing about the water in AER is the technique used to map the procedural wave animations and textures. Typically when we texture 3D objects we use a technique called UV-mapping, which is the process of projecting a 2D image onto a 3D model.

When a 3D artist creates a 3D model they typically also create a UV-mapping for that specific model before texturing it. A UV-map is a 2D coordinate system, because the textures we use are 2D images most of the time. Typically every 3D model has UV coordinates stored in it so that we can apply the textures.


The water in AER is no exception and uses UV-mapping for the water streams. But there are parts of the water effect that only use a 1D mapping, as opposed to UV-mapping. Along the water's edge the waves are only 1D mapped, and this mapping is also stored in the 3D model. So the question is: how is this data stored in a 3D model? Luckily both OpenGL and DirectX let us store and use a bunch of data per vertex, like UV coordinates, normals, colors and positions.
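
As an illustration of the kind of per-vertex data involved, here is a small Python sketch of a vertex record carrying position, normal, UV coordinates and a color, with one color channel repurposed for a 1D mapping. The layout and field names are illustrative, not AER's actual vertex format.

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    """Typical per-vertex attributes a graphics API lets us store and use."""
    position: tuple   # (x, y, z) object-space position
    normal:   tuple   # (x, y, z) surface normal
    uv:       tuple   # (u, v) 2D texture coordinates
    color:    tuple   # (r, g, b, a) vertex color; one channel can carry a 1D mapping

# Example: the red channel stores a 0..1 coordinate along the water's edge
# (an assumed encoding, for illustration only).
v = Vertex(position=(1.0, 0.0, 2.0),
           normal=(0.0, 1.0, 0.0),
           uv=(0.25, 0.75),
           color=(0.4, 0.0, 0.0, 1.0))
```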

 
 
[Image: water showcase]
[Image: Rift Tear]
[Image: Rift]

For the 1D water waves we store the mapping inside the vertex colors. The benefit of this is that vertex colors are editable in any 3D package, just like the UV coordinates are; instead of using the UV-editor in Maya you would use the vertex paint tool. For game productions of a larger scale, though, I would recommend looking for an automated solution or making purpose-built tools for the job. I hardly scratched the surface of what could be done as far as tools go, but I think the first thing to try would be a tool for baking a 1D mapping into a texture, or possibly into the vertices.
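
As a rough sketch of how such a 1D coordinate could drive the edge waves, here is an illustrative Python function that reads the coordinate from the red vertex-color channel and turns it into an animated wave value. The channel choice, the wave shape and the constants are assumptions, not the actual AER water shader.

```python
import math

def edge_wave(vertex_color, time, wavelength=0.2, speed=1.5):
    """Turn a 1D coordinate stored in the red vertex-color channel into an
    animated wave intensity in [0, 1]. All constants here are illustrative."""
    coord = vertex_color[0]                      # 1D mapping along the water's edge
    phase = coord / wavelength - time * speed    # scroll the wave along the edge
    return 0.5 + 0.5 * math.sin(phase * 2.0 * math.pi)

# Example: a vertex halfway along the edge, sampled at t = 2 seconds.
print(edge_wave((0.5, 0.0, 0.0, 1.0), time=2.0))
```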

Even though the wave mapping is 1D (per vertex), the result is a 2D effect because the vertex colors are interpolated over the triangles. The actual geometry therefore becomes very important: this is a 1D effect that depends on the geometry of the model. That fits perfectly with the polygonal art direction of AER and was the main reason to actually use this approach, because the approach itself is quite cumbersome, especially without purpose-built tools.

We also used 1D mapping the same way for a dissolve effect, again storing it in the vertex colors, for both magical and water effects.
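
The dissolve can be sketched the same way: a per-vertex 1D value, again read from a color channel, is compared against an animated threshold to decide whether that part of the surface is kept or discarded. The channel and the threshold behaviour below are assumptions for illustration.

```python
def dissolved(vertex_color, dissolve_amount):
    """Return True if this part of the surface should be discarded.
    `dissolve_amount` animates from 0 (fully visible) to 1 (fully dissolved);
    the green channel is assumed to hold the 1D dissolve coordinate."""
    coord = vertex_color[1]
    return coord < dissolve_amount

# Example: at 30% dissolve, parts with a coordinate below 0.3 disappear first.
print(dissolved((0.0, 0.2, 0.0, 1.0), dissolve_amount=0.3))  # True
print(dissolved((0.0, 0.8, 0.0, 1.0), dissolve_amount=0.3))  # False
```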

That’s all folks, thanks for reading!