If a picture is worth a thousand words, GTA V is now worth about ten thousand.
Grand Theft Auto V is the game that simply refuses to die. After multiple iterations on consoles both old and new, it’s still alive and kicking. One reason is that the modding community is absolutely in love with the GTA franchise and has poured its creativity into the game for years. One of the most interesting developments is the attempt to bring photorealism to the expansive sandbox world that Rockstar originally created. Until now, this has mostly been accomplished with the iCEnhancer graphical mod, which has continually improved shadows, textures and shaders to generate photorealistic lighting and give the world a true-to-life feel, more than an average video game has any right to.

But Intel Labs researchers Stephan Richter, Hassan Abu AlHaija and Vladlen Koltun have taken things to the next level, using machine-learning techniques previously thought to be far beyond our means in the gaming world. Their recently released study completely blows the doors off the photorealism chase (their gameplay test can be seen above). While they clearly state the approach is far from practical application across an entire game, NVIDIA’s DLSS technology has already shown that neural networks can run inside a game engine in real time, so we have some serious deepfake gaming potential in the future. The researchers’ network enhances each rendered frame on the fly, drawing on a pool of comparable real-world imagery to make the game’s output match its look. It opens the door to a staggering new level of graphical immersion.

An excerpt from the study, which briefly explains the technical side of the work, can be found below (via Kotaku). We hope more developments like this come to games like GTA in the future. If successful, the technique is likely to trickle up to the professional development level and provide new creative jumping-off points for companies like Rockstar and others.
We present an approach to enhancing the realism of synthetic images. – Researchers Stephan Richter, Hassan Abu AlHaija and Vladlen Koltun
From the Enhancing Photorealism Enhancement study (via Kotaku): We present an approach to enhancing the realism of synthetic images. The images are enhanced by a convolutional network that leverages intermediate representations produced by conventional rendering pipelines. The network is trained via a novel adversarial objective, which provides strong supervision at multiple perceptual levels. We analyze scene layout distributions in commonly used datasets and find that they differ in important ways. We hypothesize that this is one of the causes of strong artifacts that can be observed in the results of many prior methods. To address this we propose a new strategy for sampling image patches during training. We also introduce multiple architectural improvements in the deep network modules used for photorealism enhancement. We confirm the benefits of our contributions in controlled experiments and report substantial gains in stability and realism in comparison to recent image-to-image translation methods and a variety of other baselines.
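To make the abstract a little more concrete, here is a minimal NumPy sketch of the data flow it describes: a rendered frame is concatenated with the intermediate representations a conventional pipeline already produces (G-buffers such as depth or material masks), passed through a convolution standing in for the enhancement network, and then scored in patches at more than one scale, echoing the idea of supervision at multiple perceptual levels. All shapes, channel choices and the tiny hand-rolled convolution are illustrative assumptions; this is not the researchers’ code or architecture.

```python
import numpy as np

def conv2d(x, kernel):
    """Naive valid-mode 2D convolution: sum over input channels (illustrative only)."""
    c, h, w = x.shape
    kh, kw = kernel.shape[1:]
    out = np.zeros((h - kh + 1, w - kw + 1))
    for ci in range(c):
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] += np.sum(x[ci, i:i + kh, j:j + kw] * kernel[ci])
    return out

rng = np.random.default_rng(0)
frame = rng.random((3, 16, 16))     # stand-in for a rendered RGB game frame
gbuffers = rng.random((2, 16, 16))  # assumed auxiliary buffers, e.g. depth + mask

# The network sees the frame *and* the rendering pipeline's intermediate buffers.
inputs = np.concatenate([frame, gbuffers])

kernel = rng.standard_normal((5, 3, 3)) * 0.1
enhanced = conv2d(inputs, kernel)   # stand-in for the enhancement network's output

def patch_scores(img, size):
    """Score non-overlapping patches; a critic would judge realism per patch."""
    h, w = img.shape
    return [img[i:i + size, j:j + size].mean()
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

# "Supervision at multiple perceptual levels": evaluate at two patch scales.
coarse = patch_scores(enhanced, 7)
fine = patch_scores(enhanced, 3)
print(len(coarse), len(fine))  # prints: 4 16
```

In the actual study the per-patch scoring is done by a trained discriminator and the patch-sampling strategy is itself one of the paper’s contributions; the sketch only shows why feeding G-buffers alongside the frame gives the network more to work with than pixels alone.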