Intel Machine Learning Makes GTA V Look Incredibly, Unsettlingly Photorealistic
One of the more impressive aspects of Grand Theft Auto V is how closely the game’s San Andreas approximates real-life Los Angeles and Southern California, but a new machine learning project from Intel Labs called “Enhancing Photorealism Enhancement” might take that realism in an unsettlingly photorealistic direction.
Running the game through the processing that researchers Stephan R. Richter, Hassan Abu Alhaija, and Vladlen Koltun created produces a striking result: a visual look with clear similarities to the kinds of photos you might casually take through the smudged windshield of your car. You have to see it in motion to really appreciate the effect, but the mixture of slightly washed-out lighting, smoother pavement, and believably reflective cars convinces you that you’re looking out at a real street from a real dashboard. It’s unreal and real on a new level.
The Intel researchers suggest some of that photorealism comes from the datasets they feed their neural network. The group offers a more in-depth and precise explanation of how the image enhancement actually works in their paper (PDF), but the Cityscapes Dataset they used, built mostly from photographs of German streets, packs in a lot of detail. The result is fainter and shot from a different angle, but it resembles what I imagine a smoother, more interactive version of Google Maps Street View would look like. It doesn’t completely behave like it’s real, but it looks very much like it’s made from real things.
The researchers say their enhancements go beyond what other photorealistic rendering processes are capable of by also incorporating geometric information from GTA V itself. Those “G-buffers,” as the researchers call them, can supply data like the distance between objects in the game and the camera, and the quality of textures, like the shine of cars.
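To get a feel for what that extra geometric information looks like, here is a minimal Python sketch of the idea. The field names and values are illustrative assumptions, not Intel’s actual buffer format: the point is just that each rendered pixel can carry auxiliary channels, like depth and glossiness, alongside its color, and the enhancement network can consume those channels as extra inputs.

```python
# Hypothetical sketch of a G-buffer record: alongside each rendered color,
# the engine exposes geometric and material data the network can use.
# Field names and values are illustrative, not Intel's real format.

def make_gbuffer_pixel(color, depth_m, glossiness):
    """Bundle a rendered color with auxiliary channels an enhancement
    network could consume as extra inputs."""
    return {
        "color": color,            # (r, g, b) from the rendered frame
        "depth": depth_m,          # distance from the camera, in meters
        "glossiness": glossiness,  # 0.0 = matte, 1.0 = mirror-like shine
    }

# A shiny car close to the camera vs. matte pavement far away:
car = make_gbuffer_pixel((180, 30, 30), depth_m=4.5, glossiness=0.9)
road = make_gbuffer_pixel((90, 90, 90), depth_m=40.0, glossiness=0.1)

# With these channels, the network sees not just what colors are on
# screen but *why* they look that way; e.g. nearby glossy surfaces
# (car paint) can be treated differently from distant matte asphalt:
shiny_nearby = [p for p in (car, road)
                if p["glossiness"] > 0.5 and p["depth"] < 10.0]
```

In other words, where a purely image-to-image filter only ever sees pixels, this approach also sees the scene geometry and materials behind them.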
While you probably won’t see an official “photorealism update” roll out for GTA V tomorrow, you may have already played a game or watched a video that’s benefited from another kind of machine learning: AI upscaling. The technique of using machine learning to blow up graphics to higher resolutions doesn’t show up everywhere, but it has been featured in Nvidia’s Shield TV and in many mod projects focused on improving the graphics of older games. In those cases, a neural network makes predictions to fill in the missing pixels of detail when a lower-resolution game, movie, or TV show is scaled up to a higher resolution.
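The fill-in problem those upscalers solve can be sketched in a few lines. This is an assumption-laden toy, not any shipping upscaler: a real system uses a trained neural network to predict the new pixels, while here plain linear interpolation stands in for that prediction on a 1-D row of intensities.

```python
# Toy sketch of upscaling: doubling a row of pixel intensities by
# "predicting" each new in-between pixel. A real AI upscaler replaces
# this averaging with a trained neural network's prediction.

def upscale_row_2x(row):
    """Return the row at roughly double width, inserting one predicted
    pixel between each pair of original neighbors."""
    out = []
    for i, px in enumerate(row):
        out.append(px)                          # keep the original pixel
        if i + 1 < len(row):
            out.append((px + row[i + 1]) // 2)  # predicted in-between pixel
    return out

low_res = [0, 100, 50]
high_res = upscale_row_2x(low_res)  # → [0, 50, 100, 75, 50]
```

The appeal of the learned version is that a network trained on real images can hallucinate plausible texture and edges where simple averaging would only produce blur.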
Photorealism probably shouldn’t be the only graphical goal for video games, but this Intel Labs project does show there’s probably as much room to improve in software as there is in the raw GPU power of new gaming consoles and brute-force gaming PCs.