Unity uses the standard shader language HLSL and supports general HLSL data types. However, Unity handles some data types differently from HLSL to provide better support on mobile platforms.
Shaders carry out the majority of calculations using floating point numbers (also called float in regular programming languages like C#). In Unity’s implementation of HLSL, the scalar floating point data types are float and half. These data types differ in precision and, consequently, in performance and power usage. There are also related vector and matrix data types such as half3 and float4x4.
float
This is the highest precision floating point data type. On most platforms, float values are 32 bits, as in regular programming languages.
Full float precision is typically useful for world space positions, texture coordinates, or scalar calculations that involve complex functions such as trigonometry or power/exponentiation. If you use lower precision floating point data types for these purposes, it can cause precision-related artifacts. For example, with texture coordinates, a half doesn’t have enough precision to accurately represent 1-texel offsets in larger textures.
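You can see this loss of precision by emulating half rounding on the CPU. The sketch below uses Python’s struct module (format code "e" is IEEE 754 binary16, the same layout as a GPU half) to show that texel-center coordinates of a 4096-wide texture stop being distinct once rounded to half. The texture width and the to_half helper are illustrative, not part of Unity’s API:

```python
import struct

def to_half(x):
    """Round a Python float to the nearest IEEE 754 binary16 (half) value."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

width = 4096
# Texel-center U coordinates: (i + 0.5) / width for each texel i.
coords = [(i + 0.5) / width for i in range(width)]
unique_halves = {to_half(u) for u in coords}

# Near u = 1.0 a half value has a spacing of 2**-11, which is coarser than
# the 1/4096 gap between texels, so neighbouring texels collapse together.
print(to_half(4093.5 / 4096) == to_half(4094.5 / 4096))  # True
print(len(unique_halves) < width)                        # True
```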
half
This is a medium precision floating point data type. half values can have a smaller range and precision than float values. Half precision provides better shader performance for values that don’t require high precision, such as short vectors or directions.
By default, half is min16float on platforms like mobile, and float on other platforms. Set Shader Precision Model to Unified to set half as min16float on all platforms.
min16float is an HLSL type that indicates that floating point arithmetic operations can be performed in 16-bit precision. Depending on the target platform, operations might still be performed in 32-bit instead.
The size and alignment of min16float is always 4 bytes (32 bits) if it’s used in a CPU-visible buffer, for example a constant buffer, a structured buffer, or a vertex buffer input. If your project uses Unity 2023.2 or an earlier version and you target Metal, min16float has a size and alignment of 2 bytes instead.
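This matters when you fill a CPU-visible buffer from script or native code: each min16float member still occupies a full 4-byte slot. As an illustration (not Unity API), the sketch below uses Python’s struct module to compare the current 4-byte-per-value layout with the 2-byte layout that older Unity versions used on Metal:

```python
import struct

# A hypothetical constant buffer with two min16float members.
# On current Unity versions, each member occupies a full 32-bit float slot:
current_layout = struct.pack("<2f", 0.5, 1.5)

# On Unity 2023.2 and earlier targeting Metal, each member really was
# 2 bytes (IEEE 754 binary16, struct format code "e"):
legacy_metal_layout = struct.pack("<2e", 0.5, 1.5)

print(len(current_layout))       # 8 bytes: 4 per value
print(len(legacy_metal_layout))  # 4 bytes: 2 per value
```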
In texture declarations (such as RWTexture2D&lt;min16float&gt;), min16float is the input type for store operations and the type returned by load operations.
Unity’s shader compiler ignores floating point number suffixes from HLSL. Floating point numbers with a suffix therefore all become float.
This code shows a possible negative impact of numbers with the h suffix in Unity:
half3 packedNormal = ...;
half3 normal = packedNormal * 2.0h - 1.0h;
Since the h suffix is ignored, the shader compiler generates code that executes these steps:
1. Calculate an intermediary normal value in high precision (float3).
2. Convert the intermediary value to half3.
This reduces your shader’s performance.
This code is more efficient because it only uses half values in its calculations:
half3 packedNormal = ...;
half3 normal = packedNormal * half(2.0) - half(1.0);
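To make the difference concrete, you can emulate both code paths on the CPU. In this sketch (Python’s struct format "e" stands in for a GPU half; the input value is arbitrary), the "suffix ignored" path computes the intermediate in full precision and converts once at the end, while the explicit-cast path rounds after every half operation. The results agree here; the cost of the first path is the extra conversion work on the GPU:

```python
import struct

def to_half(x):
    """Round a Python float to the nearest IEEE 754 binary16 (half) value."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

packed = to_half(0.7)  # e.g. one channel of a packed normal

# Path 1: h suffix ignored -- arithmetic in full precision, then one
# conversion of the final value back to half.
suffix_ignored = to_half(packed * 2.0 - 1.0)

# Path 2: explicit half() casts -- every operation rounds to half.
explicit_half = to_half(to_half(packed * 2.0) - to_half(1.0))

print(suffix_ignored == explicit_half)  # True for this input
```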
Integers (the int data type) are often used as loop counters or array indices, and typically work fine across various platforms.
Depending on the platform you target, your GPU might not fully support integer types. Direct3D 11, OpenGL ES 3, Metal, and other modern platforms have proper support for integer data types, so bit shifts and bit masking work as expected.
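As an illustration of the kind of integer bit manipulation this enables, the sketch below unpacks 8-bit channels from a packed 32-bit color with shifts and masks. The packing order is a hypothetical example; the same operators work in HLSL on platforms with integer support:

```python
# A 32-bit color packed as 0xAARRGGBB (hypothetical packing order).
color = 0x80FF4020

a = (color >> 24) & 0xFF   # 0x80
r = (color >> 16) & 0xFF   # 0xFF
g = (color >> 8) & 0xFF    # 0x40
b = color & 0xFF           # 0x20

print(a, r, g, b)  # 128 255 64 32
```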
HLSL has built-in vector and matrix types that are created from the basic types. For example, float3 is a 3D vector with .x, .y, .z components, and half4 is a medium precision 4D vector with .x, .y, .z, .w components. Alternatively, you can index vectors using the .r, .g, .b, .a components, which is useful when working with colors. For example:
float4 myColor = ...
float redValue = myColor.r;
Matrix types are built in a similar way; for example, float4x4 is a 4x4 transformation matrix. However, some platforms only support square matrices.
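For example, a float4x4 commonly stores an affine transform that is applied to a float4 position with a matrix multiply. This sketch (plain Python, column-vector convention assumed; Unity’s actual convention depends on how the matrix is built) shows a translation matrix moving a point:

```python
def mul(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Translation by (3, 4, 5); the w = 1 component picks up the offset.
translate = [
    [1.0, 0.0, 0.0, 3.0],
    [0.0, 1.0, 0.0, 4.0],
    [0.0, 0.0, 1.0, 5.0],
    [0.0, 0.0, 0.0, 1.0],
]
position = [1.0, 2.0, 3.0, 1.0]
print(mul(translate, position))  # [4.0, 6.0, 8.0, 1.0]
```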
Typically, you declare combined texture samplers in your HLSL code in the following way:
sampler2D _MainTex;
samplerCUBE _Cubemap;
These samplers use the default sampler precision. You can change the default sampler precision for the whole Unity project in the Player Settings, using the Shader Precision Model option. If Unified is selected, the default sampler precision is full precision on all platforms. If Platform Specific is selected, the default precision is half precision on mobile platforms and full precision on other platforms.
You can specify half precision explicitly by adding a _half suffix to the sampler declaration:
sampler2D_half _MainTex;
samplerCUBE_half _Cubemap;
Or you can specify full precision explicitly by adding a _float suffix:
sampler2D_float _MainTex;
samplerCUBE_float _Cubemap;
GPUs in desktop platforms and most modern mobile platforms support 32-bit floating point precision in the vertex and fragment shader stages. However, mobile GPUs perform better and are more energy efficient if you use lower precision.
If the platform supports lower precision, using half improves performance and power efficiency. You should start with lower precision for everything except world space coordinates and texture coordinates, then check whether the lower precision causes visible errors in shader calculations (for example, color bands, or geometry that jumps between positions). If you see errors, increase precision.
Support for special floating point values can differ depending on the GPU family you run on, particularly on mobile.
All PC GPUs that support Direct3D 10 implement the well-specified IEEE 754 floating point standard. This means that float numbers behave exactly like they do in regular programming languages on the CPU.
Mobile GPUs can have slightly different levels of support. On some, dividing zero by zero might result in NaN (“not a number”); on others it might result in infinity, zero, or some other unspecified value. Make sure to test your shaders on the target device to check that they behave as expected.
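On IEEE 754-compliant hardware, the defining property of NaN is that it compares unequal to everything, including itself. The sketch below demonstrates this with Python floats (note that Python raises an exception for a literal 0.0 / 0.0, so the NaN is constructed directly; this mirrors what a compliant GPU produces):

```python
import math

nan = float("nan")  # what 0.0 / 0.0 produces on IEEE 754-compliant hardware

print(math.isnan(nan))       # True
print(nan != nan)            # True: NaN is unequal even to itself
print(nan < 1.0, nan > 1.0)  # False False: ordered comparisons with NaN fail
```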
GPU vendors publish in-depth guides about the performance and capabilities of their GPUs; see their documentation for details.