Gamers Hate Nvidia's DLSS 5. Developers Aren’t Crazy About It, Either

Photo: Wired AI
Nvidia's DLSS 5, an artificial intelligence technology for upscaling graphics in games, is facing widespread criticism from both players and game developers. Players complain about image quality and visual artifacts, while developers cite implementation complexity and the burden of supporting multiple platforms. The core issue is that DLSS 5 demands significant computational resources and does not always deliver the results Nvidia promises. Players report that upscaled graphics often look worse than native resolution, particularly in fast-moving scenes. Developers, in turn, worry about compatibility with other technologies, the time needed for optimization, and the pressure to support competing solutions. The situation highlights a broader problem in the gaming industry — the rush to ship advanced AI technologies before they have been perfected. For users, the takeaway is that upgrading isn't always worthwhile when current solutions already work well. Nvidia will need to improve DLSS 5 substantially to regain the community's trust.
Nvidia has just released DLSS 5, its latest AI-powered scaling technology, which was supposed to be a breakthrough in computer gaming. Instead, it has faced a wave of criticism from two sides — both from players who find the effects unpleasant to the eye, and from developers who are not convinced of its practical application. The irony is thick: technology that was supposed to be the future of gaming is already struggling with resistance from the industry it's meant to improve.
The history of scaling in games is a long road from simple interpolation techniques to today's AI-based solutions. DLSS (Deep Learning Super Sampling) has built itself a solid reputation in previous versions, but DLSS 5 introduces a radically different paradigm — instead of just scaling the image, it uses AI-generated motion prediction to reconstruct entire frames. This theoretically sounds revolutionary. In practice, it turns out to be something completely different.
Uncanny Valley of Visual Effects
When the first games with DLSS 5 reached players' hands, the community quickly noticed a problem that is difficult to describe but very easy to see: the images look strange, unnatural, as if something is wrong, yet it's hard to point out exactly what. Artifacts, interpolation errors, and stiff character animations make the game look like a processed deepfake rather than an actual video game. This is a classic case of uncanny valley — a phenomenon where something almost realistic but not quite triggers discomfort instead of delight.
Players report specific issues: characters moving in strange ways, objects flickering or blurring in motion, textures dancing across the screen in ways that shouldn't happen. Gameplay footage from DLSS 5 quickly spread across Reddit and YouTube, and the comments were unanimous — turn this off. The problem is that for many players, DLSS 5 is the only way to achieve decent fps in new titles. It's a choice between a game that looks weird and a game that doesn't play.
Nvidia argues that the problems are temporary and that developers need time to optimize. Perhaps they're right — first implementations of new technologies are rarely perfect. However, the fact that so many players immediately noticed these problems suggests that something fundamental in the approach to frame prediction may be problematic. Reconstructing entire images from motion prediction is a task far more difficult than traditional scaling, and the boundary between what AI can do well and what it does poorly is clearly visible to the human eye.
Developers Have Other Problems
If players complain about how DLSS 5 looks, developers complain about how to implement it. Integrating any new technology is extra work, and DLSS 5 requires significantly more effort than previous versions. Developers must feed additional data to the system, optimize their code for it, and test the results. All of this costs time and money.
Additionally, some developers express concerns about control over the final appearance of the game. When AI generates frames, game creators have less direct influence over what the player ultimately sees. This is a problem for artists and designers who spent months tweaking lighting, shadows, and details. If AI rewrites all of this on the fly, their work could be distorted in unpredictable ways.
There's also the issue of compatibility and long-term support. Developers remember how much time it took them to support older Nvidia technologies. Is it worth investing in DLSS 5 if DLSS 6 could appear next year, requiring everything to be rebuilt from scratch? For smaller studios, this question could be decisive — they might simply decide it's not worth it.
The Technical Paradox of Motion Prediction
To understand why DLSS 5 goes so wrong, it's worth understanding what exactly it does. Previous DLSS versions rendered the image at low resolution and scaled it up while preserving the original's features. DLSS 5 goes further — it renders some frames at full resolution, then uses AI to predict how the next frame should look based on motion analysis.
The problem emerges when AI must predict something it cannot predict — for example, a character appearing from around a corner, or a quick camera pan. In these moments, the system tries to guess what should be on screen, and very often gets it wrong. The artifacts players see are, at bottom, the errors of an AI that lacked the information to make a correct guess.
This is a fundamental problem — motion prediction cannot be perfect because the future doesn't exist yet. You can approximate it, but there will always be situations where the system gets it wrong. Nvidia hoped that AI would be smart enough to minimize these errors, but reality turned out to be more complicated than theoretical models.
Where DLSS 5 Actually Works
To be fair, DLSS 5 is not a total failure. In some scenarios — particularly in games with slower pacing, where cameras move smoothly and predictably — the technology works quite well. Turn-based games, strategy titles, or even some RPGs can benefit from DLSS 5 without drastic artifacts.
The problem appears primarily in fast-action games, where every pixel matters and movements are dynamic and unpredictable. In FPS games where the player quickly turns the camera, DLSS 5 falls apart. In racing games where vehicles move at high speed, the system can't keep up. These are exactly the games that players want to play with maximum performance, but DLSS 5 fails here.
Ironically, games that need aggressive scaling the least — calm, artistic experiences — are the ones where DLSS 5 works best. For games that truly need performance, the technology is less useful.
Competition Isn't Sleeping
Nvidia is not the only player in the upscaling market. AMD FSR 3 (FidelityFX Super Resolution) offers similar capabilities, though it's based on a different technical approach. Intel XeSS is a third option. Meanwhile, Sony and Microsoft are working on their own solutions for consoles.
Importantly, the competition doesn't have the same artifact problems as DLSS 5. FSR 3 is more conservative but stable. XeSS works solidly. This means that players can have alternatives, and developers can choose which technology to support. Nvidia has lost its monopoly on innovation in this area, and its technological advantage suddenly seems less certain.
The history of technology shows that dominance doesn't guarantee success. A better solution can come from an unexpected direction, and users quickly switch when they have a better option. For Nvidia, this is a warning — DLSS 5 must improve, and quickly, before competition takes over the market.
The Future of Scaling: Slow Evolution Instead of Revolution
Although DLSS 5 doesn't make a good first impression, Nvidia is probably right in claiming that the technology will evolve. History shows that AI needs time to reach the point where its output looks natural to the human eye. Deep learning requires enormous amounts of training data, and each new version can be better than the previous one.
However, this process won't be quick. Before DLSS 5 becomes a true standard, years could pass. In the meantime, players will play with DLSS 4, FSR 2, or traditional scaling. Developers will optimize for multiple technologies simultaneously rather than focusing exclusively on Nvidia's latest solution.
The reality is that AI-generated frame prediction may never be perfect. There will always be scenarios where the system gets it wrong. The question is whether the errors will be small enough to be acceptable, or whether they will always be visible to demanding players. Today, the answer is: too visible.
What This Means for the Future of Gaming
DLSS 5 is above all a lesson in humility for Nvidia and the tech industry in general. Advanced technology doesn't always mean a better user experience. Sometimes a simpler solution that works reliably is better than complicated technology that looks weird.
For players, this means they should be skeptical of promises about magical solutions. DLSS 5 may be the future, but it's not the future today. It's better to play with DLSS 4 or FSR 2 enabled, which work stably, than to experiment with DLSS 5 and watch artifacts.
For developers, this means that integrating new technologies should be optional, not mandatory. If a game looks better without DLSS 5, ship it without it. Players care about a good-looking game, not which technologies were used to create it.
Ultimately, DLSS 5 shows that the path to real artificial intelligence in games is longer than it seemed. AI can do amazing things, but it also has clear limitations. Before the technology truly becomes invisible to the player — before it works so well that no one wonders whether it's enabled — it must go through many more iterations. Nvidia knows this, developers know this, and players are beginning to understand it. The question is not whether DLSS 5 will eventually dominate — the question is how long it will take and whether something better won't appear in the meantime.

