Polyphony Digital is developing a new rendering system for Gran Turismo that uses neural networks to determine which objects in a scene need to be drawn, and the early results suggest it could meaningfully improve performance on PlayStation 5.

The system, called “NeuralPVS”, was detailed in a technical presentation at the Computer Entertainment Developers Conference (CEDEC) last year. The talk was given by two Polyphony graphics engineers, Yu Chengzhong and Hajime Uchimura. Polyphony Digital published the slides here: https://s3.amazonaws.com/gran-turismo.com/pdi_publications/CEDEC2025_NeuralPVS.pdf
Yu joined Polyphony Digital in 2024 after graduating from Tokyo University of Science, where he presented a technical paper on real-time volumetric rendering at SIGGRAPH 2023. Hajime is a longer-tenured engineer who has been with the studio since 2008 and is responsible for much of the existing technology that NeuralPVS aims to improve, including GT7’s current precomputed occlusion culling system. He has also worked on the game’s color reproduction systems (particularly car body paint) and on HDR image processing for the Scapes photo mode.

Polyphony Digital has a history of sharing its technical work at CEDEC and other academic conferences, and these talks offer a fascinating and rare look behind the curtains at how the secretive studio’s custom technologies actually work. Past presentations have covered topics ranging from the “Iris” ray tracing system used in GT Sport to the studio’s circuit scanning and course creation methods and its procedural landscape generation techniques.
Every frame Gran Turismo renders contains thousands of objects: buildings, trees, grandstands, barriers, track surfaces, and everything else that makes up a course environment. But at any given moment, only a fraction of those objects are actually visible to the player. Some are behind the camera, some are off to the side, and some are hidden behind other objects in the scene.

Drawing all of those invisible objects would be a waste of processing power, so the game uses a process called “culling” to figure out which objects can be skipped. The better the culling, the less work the CPU and GPU have to do, which means more stable frame rates and potentially more room for visual detail.
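To make the idea concrete, here is a minimal, hypothetical Python sketch of one of the simpler checks described above: dropping objects that sit behind the camera plane. All names and the data layout are illustrative assumptions, not engine code.

```python
def cull_behind_camera(objects, cam_pos, cam_forward):
    """Keep only objects in front of the camera plane.

    objects: list of (name, (x, y, z)) tuples -- hypothetical format.
    """
    visible = []
    for name, pos in objects:
        # Vector from the camera to the object.
        to_obj = tuple(p - c for p, c in zip(pos, cam_pos))
        # A positive dot product with the view direction means the
        # object lies in front of the camera.
        if sum(a * b for a, b in zip(to_obj, cam_forward)) > 0.0:
            visible.append(name)
    return visible
```

Occlusion culling, skipping objects hidden behind other geometry, is the much harder part, and it is what both the precomputed system and NeuralPVS address.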
Gran Turismo 7 currently uses a precomputed culling system. Before a course ships, Polyphony’s tools render the track from thousands of camera positions along the driving surface, recording which objects are visible from each spot. Those results are stored as visibility lists (internally referred to as “vision lists”) that the game looks up at runtime.

To keep the data manageable, the system clusters those thousands of sample points into a smaller set of zones using a mathematical technique called Voronoi partitioning. At runtime, the game figures out which zone the camera is in and uses that zone’s visibility list to decide what to draw.
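Based on that description, the runtime side of the precomputed system can be sketched as a nearest-centre lookup; the Voronoi cell containing the camera is, by definition, the one whose centre point is closest. The function and variable names below are assumptions for illustration, not Polyphony’s actual code.

```python
def visible_objects(camera_pos, zone_centers, vision_lists):
    # Squared distance is enough for a nearest-neighbour comparison.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Find the zone whose centre is nearest to the camera, i.e. the
    # Voronoi cell the camera currently sits in ...
    nearest = min(range(len(zone_centers)),
                  key=lambda i: dist2(camera_pos, zone_centers[i]))
    # ... and return that zone's precomputed visibility list.
    return vision_lists[nearest]
```

This is also where the hard-boundary problem comes from: the answer changes all at once the instant `nearest` flips from one zone index to another.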
This clustering approach works, but it has some inherent limitations.

The boundaries between zones are hard lines, which means visibility can only change in abrupt, discontinuous jumps as the camera crosses from one zone to the next. Those boundaries don’t always line up neatly with the actual geometry of the course, either, which can lead to objects popping in or out at moments that don’t look natural.

The number of zones is also a manual tuning parameter. Too few and the culling is too coarse; too many and the data gets unwieldy. It’s a balancing act that has to be revisited for each track.
Neural Networks to the Rescue!

NeuralPVS replaces that zone-based lookup with a neural network that learns the relationship between a camera position and which objects should be visible. Instead of snapping to the nearest precomputed zone, the network takes the camera’s exact coordinates and outputs a visibility prediction for every object in the scene.

The result is a smooth, continuous visibility field rather than a patchwork of discrete zones. Objects transition in and out of visibility gradually as the camera moves, rather than flipping on and off at arbitrary zone boundaries.
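A hedged sketch of what that inference step could look like: a tiny fully connected network maps a 3D camera position to a per-object visibility probability, and anything above a threshold gets drawn. The layer sizes, activation choices, and threshold here are illustrative assumptions; the slides describe the real architecture.

```python
import numpy as np

def predict_visibility(cam_pos, w1, b1, w2, b2, threshold=0.5):
    # Hidden layer with ReLU activation.
    h = np.maximum(0.0, w1 @ cam_pos + b1)
    # One output logit per object in the region, squashed to (0, 1)
    # with a sigmoid to give a visibility probability.
    probs = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))
    # Draw any object the network considers likely visible.
    return probs > threshold
```

Because the network is a continuous function of `cam_pos`, the predictions vary smoothly as the camera moves, which is exactly the property the zone lookup lacks.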
The approach was inspired by NeRF (Neural Radiance Fields), a technique from the research community that uses neural networks to represent 3D scenes. Polyphony’s team recognized that their visibility mapping problem (position in, visibility out) had a structurally similar shape and adapted the concept.

Each course is divided into regions, and each region gets its own small neural network. The team tested several network architectures and settled on one using Fourier Feature Mapping, which maps input coordinates into a higher-dimensional space before processing them. This gave the best balance of accuracy and speed.
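Fourier feature mapping in the general sense used in the NeRF literature can be sketched in a few lines. In that literature the projection matrix `B` is typically drawn once from a Gaussian and then frozen; everything below is an illustrative assumption rather than Polyphony’s exact formulation.

```python
import numpy as np

def fourier_features(pos, B):
    # Project the low-dimensional position through a fixed matrix B,
    # then embed the result with sines and cosines. The MLP sees this
    # higher-dimensional vector instead of the raw coordinates, which
    # helps a small network represent rapid visibility changes.
    proj = 2.0 * np.pi * (B @ pos)
    return np.concatenate([np.sin(proj), np.cos(proj)])
```

A 3D position projected through a `k × 3` matrix becomes a `2k`-dimensional feature vector, so the cost of the mapping is one extra matrix multiply before the network proper.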
Very interesting thread

Polyphony putting in heavy work for sony.

Its the only game that makes me think about buying a ps5.
sonys inhouse engine studios put out a lot of work.

https://www.iryoku.com/aacourse/downloads/06-MLAA-on-PS3.pdf
too much mumbo jumbo for literal snoyslop LMAO
>>735030010
bump for actually interesting thread
For anons who only understand things in horrible analogies: neural networks are like vector graphics for rendering. If you have a texture or reference library they can greatly reduce compression artifacting and loading issues.
>>735030010
That's really neat

Machine learning is cool, I hope we get more good uses for it in games for optimizing or good procedural generation

Man, I wish AI wasn't such a buzzword general umbrella term at times like this. Some people will see this thread and think "new sony car game is slopping all of their assets" when it's something completely different
>>735030010
just realized the racing suit is in the wheel's reflection
Gran Slop
>>735030010
This is just a fancy way of saying they're offloading occlusion culling to the GPU's slop module and hoping for the best instead of handling it programmatically.
>>735036669
same. slop machines were a mistake, i wish that machine learning was strictly used for cool shit like this instead