# Neural LOD Selector
Training a small network (using C++, Metal and PyTorch) to predict the pixel error of different LOD levels. The trained network is then embedded in an HLSL compute shader, implementing neural LOD selection directly on the GPU.
Combined with indirect drawing, this lets us render tens of thousands of instances more efficiently while applying GPU-side culling and LOD selection.
The model automatically learns the relationship between different input features (e.g. object distance, screen size, velocity, FOV) and the loss in image quality at each LOD level. We can then use the predicted pixel error to pick an optimal LOD.
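Once the network predicts a pixel error per LOD, the selection step itself is simple. The sketch below is a hypothetical illustration (function name and error budget are not from the project): pick the coarsest LOD whose predicted error stays under a budget.

```python
# Hypothetical sketch of the selection step, given per-LOD pixel-error
# predictions from the network. The threshold value is illustrative.

def select_lod(predicted_errors, max_error=1.5):
    """Pick the coarsest (highest-index) LOD whose predicted pixel
    error stays under the budget; fall back to LOD 0 otherwise."""
    for lod in range(len(predicted_errors) - 1, -1, -1):
        if predicted_errors[lod] <= max_error:
            return lod
    return 0

# Errors grow with coarser LODs; LOD 2 is the coarsest under budget here.
print(select_lod([0.1, 0.6, 1.2, 3.4]))  # -> 2
```

In the real pipeline this decision runs per instance inside the HLSL compute shader, using the embedded network's output.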
The neural network is first trained using a standalone C++ application:
Training LOD levels: pixel error highlighted in green
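The actual training runs in the standalone C++/Metal/PyTorch application; as a rough pure-Python stand-in for the idea, the sketch below fits a per-LOD model mapping a single feature (inverse distance) to a measured pixel error, using plain gradient descent on synthetic data. All names and data are illustrative.

```python
# Pure-Python stand-in for the training step (the project itself uses
# C++/Metal/PyTorch). Fits error ~ w * (1/distance) + b by gradient descent.

def train_lod_error_model(samples, lr=0.1, epochs=500):
    """samples: list of (inverse_distance, measured_pixel_error) pairs."""
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in samples:
            pred = w * x + b
            gw += 2.0 * (pred - y) * x / n
            gb += 2.0 * (pred - y) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Synthetic data: pixel error shrinks as the object moves farther away.
data = [(1.0 / d, 4.0 / d) for d in (1.0, 2.0, 4.0, 8.0, 16.0)]
w, b = train_lod_error_model(data)
# The learned model should roughly recover error ~ 4 * (1/distance).
```

The real network is a small MLP over several features rather than a linear fit, but the training loop has the same shape: render each LOD, measure its pixel error against the reference, and regress the error from the input features.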
We can use the model's predictions to apply content-aware LODs based on mesh characteristics. In this example, the delicate Sword asset maintains higher-quality LODs at a distance, while the Rock uses more aggressive LODs. Feature selection is used to train the model on different combinations of input parameters. By including the camera FOV as an input feature, the model can also switch to higher-quality LODs when zooming in on objects:
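Why FOV matters as a feature can be seen from the projection itself: narrowing the FOV enlarges the object's on-screen footprint, which justifies a finer LOD even though the distance is unchanged. The sketch below is an illustrative feature computation, not the project's actual feature set.

```python
import math

# Illustrative screen-size feature (not the project's exact feature set):
# a narrower FOV enlarges the projected size, so zooming in should push
# the selector toward finer LODs.

def screen_size_feature(object_radius, distance, fov_deg, screen_height_px):
    """Approximate on-screen height in pixels of a sphere of the given
    radius, under a pinhole projection with vertical FOV `fov_deg`."""
    half_fov = math.radians(fov_deg) * 0.5
    return (object_radius / (distance * math.tan(half_fov))) * screen_height_px

wide = screen_size_feature(1.0, 20.0, 60.0, 1080)  # default FOV
zoom = screen_size_feature(1.0, 20.0, 20.0, 1080)  # zoomed in
# Same distance, narrower FOV -> larger on-screen size -> finer LOD.
```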
To learn more, check the GitHub repo: https://github.com/eldnach/neural-lod
# Procedural VFX Demo
A small demo implementing custom passes, compute, and indirect rendering. Besides being fun and dynamic, procedural drawing enables GPU-side culling while reducing the need for prebaked assets.
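The core of indirect rendering is that a compute pass, not the CPU, decides what gets drawn: it culls instances and writes per-LOD instance counts into an indirect-arguments buffer that the draw calls then consume. The sketch below mimics that logic on the CPU in Python (positions are 1D for simplicity; all names are illustrative, and on the GPU this would be an HLSL compute shader).

```python
# CPU-side sketch of what a culling/LOD compute pass produces: per-LOD
# instance counts, mimicking an indirect-draw arguments buffer.
# All names and cutoff values are illustrative.

def build_indirect_args(instance_positions, camera_pos, far_plane, lod_cutoffs):
    """Returns {lod: instance_count} after distance culling and
    distance-based LOD binning (positions are 1D for simplicity)."""
    counts = {lod: 0 for lod in range(len(lod_cutoffs) + 1)}
    for pos in instance_positions:
        dist = abs(pos - camera_pos)
        if dist > far_plane:  # distance cull: skip the instance entirely
            continue
        lod = sum(dist > cut for cut in lod_cutoffs)
        counts[lod] += 1
    return counts

args = build_indirect_args([1.0, 5.0, 12.0, 40.0, 200.0],
                           camera_pos=0.0, far_plane=100.0,
                           lod_cutoffs=[4.0, 10.0, 30.0])
# -> {0: 1, 1: 1, 2: 1, 3: 1}; the instance at 200.0 is culled.
```

Because the counts never round-trip to the CPU, the instance count can scale without draw-call overhead; the GPU issues one indirect draw per LOD.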
Particle effects created using shaders and VFX Graph: