Nick Evanson, Hardware writer
This month I’ve been testing: Ghost of Tsushima. Nixxes has done one heck of a job porting it to PC, especially its support for the PS5 DualSense controller. Oh, and I’ve also been delving into a Ryzen 7 5700X3D as an upgrade for a 5600X. More on this soon.
Last week, UL Benchmarks released Steel Nomad, a new graphics test for 3DMark, with the ambition that it will eventually replace Time Spy Extreme as the most-used GPU benchmark. I’ve been running it a fair bit of late on a variety of systems, though I first tried it briefly before launch, while putting together a performance analysis of Ghost of Tsushima.
Steel Nomad and Ghost of Tsushima both have superb graphics, whether through a combination of the very latest rendering techniques and high-resolution assets or simply because the art direction is top-notch.
But as I was writing up the analysis, it got me thinking about the days of the Final Reality benchmark, the precursor to 3DMark, and the first Unreal game. Back then, I had an Intel Pentium II 233 MHz gaming PC with an Nvidia Riva TNT graphics card paired with a 3dfx Voodoo2 accelerator.
By modern standards, it was all incredibly basic stuff, but I used to get goosebumps seeing Unreal’s use of multitexturing and lighting. Final Reality was far less pretty than some examples from the ’90s demo scene, but it was exciting to see how well my PC could run it.
Each year, new benchmarks and games would be released and raise the graphics bar to another level. For me, as for most of the PC crowd I hung around with back then, the Nature test in 3DMark2001 was a real ‘wow’ moment, as was the Advanced Pixel Shader test.
I couldn’t believe that such graphics were possible on everyday hardware, let alone at a decent frame rate. Successive generations of GPUs offered ever more features, and both ATI and Nvidia would make demos to show off what their chips could do.
Fast forward to now and you have mid-range graphics cards capable of achieving high frame rates at high resolutions, all without getting too stressed. Some of the more recent games we’ve seen possess visuals of such detail and intricacy that they wouldn’t look out of place in a movie or TV show from just a few years ago. But as much as I like them all, none of them gives me quite the same feeling the Nature test or the Voodoo2 running Unreal gave me.
Even the introduction of real-time ray tracing in games didn’t move me like seeing per-pixel water reflections for the first time. Cyberpunk 2077, with all the bells and whistles turned to their maximum values, looks staggering and it’s a great benchmark for grinding any GPU to dust.
And yet, while I admire its technical achievements, I don’t spend anywhere near the time just staring at the graphics that I used to in older games.
My partner has been gaming for most of her life, but I’ve only recently introduced her to PC gaming. Her current game of choice is Hogwarts Legacy, and our reactions to its textures, lighting, and overall detail couldn’t be more different. Where I felt the developers did a good job of capturing the whole Harry Potter vibe but fell short of giving it great graphics, her opinion has been the complete opposite.
“Have you seen this? Just look at that! Wow, that is so cool…”
That’s the kind of giddy excitement I had with 3DMark2001, Unreal, Quake III Arena, and countless others, so I don’t think my subdued feelings have anything to do with the fact that many of today’s games have poor environment readability: artists pack so much detail into the game world that scenes become too busy and complex for any one element to really stand out.
So I guess it must come down to familiarity. Graphics and games have been a part of my life, either professionally or merely for entertainment, for over 40 years. While it’s certainly not a case of ‘familiarity breeds contempt’ or the like, I suspect that it’s harder to surprise someone who’s seen so much of it, for so long.
Don’t get me wrong, though. I’m thoroughly enjoying Ghost of Tsushima, both its graphics and its gameplay. I’m also really looking forward to seeing what AMD, Intel, and Nvidia do with their next generation of GPU architectures, even though I know there won’t be any significant breakthroughs in design, performance, or features. I know full well that the days of 50%+ increases in rendering power between successive chip releases are long gone, just as they are for single-thread CPU performance.
Advances in hardware and software technology have pushed chip makers towards fairly homogeneous designs, and while there are still some fundamental differences between an AMD and an Nvidia GPU, they mostly concern things like shader occupancy or cache hierarchies: stuff that affects overall performance rather than what the GPU can or can’t do.
Pick up any new graphics card and it’ll fully support the Direct3D and Vulkan graphics APIs, a far cry from the early days of GPUs.
What transpired 26 years ago was groundbreaking, and both game developers and hardware engineers were constantly stepping into uncharted territory. Explorers of a new world, so to speak. I guess it’s just not new to me anymore, and although I can enjoy the easy living in this world that coders and engineers have built, I can’t experience the wonder of seeing it for the first time.
But watching my partner beam with delight upon seeing a glorious 3D environment, replete with meshes, textures, lights and shadows, I know that there will always be fresh arrivals in this world who’ve yet to fully experience what it has to offer.
And that encourages me greatly, even though the entry fee to this PC Wonderland is as high as it has ever been. Games and graphics might not be a quantum leap better in another 26 years, but I can’t wait to see what they’ll be like—because they’ll still give someone that ‘wow’ factor.