Picture the scene. A development team, talented and ambitious, boots up Unreal Engine 5 for the first time. Nanite. Lumen. Virtual Shadow Maps. Temporal Super Resolution. The feature list reads like a wish list written by someone with an unlimited budget and an RTX 5090 on their desk. So they turn it all on. Why wouldn't they? It looks incredible. The tech demo gets screenshotted, posted to social media, and racks up thirty thousand likes. The publisher is ecstatic. The studio is riding high.
Then someone boots the build on a mid-range laptop. Or a PlayStation 5 in a warm room after two hours. Or — heaven forbid — the machine that most of their actual customers own. And suddenly that stunning tech demo becomes a slideshow. The memory climbs. The frame times spike. The shaders stutter. And six months later, on launch day, the Steam reviews roll in like a slow-motion disaster, and everyone agrees: it's Unreal Engine's fault.
Except it isn't. It never was.
The Convenient Scapegoat
It has become fashionable to blame Unreal Engine 5 for the performance sins of modern AAA games, and there is no shortage of ammunition for that argument. The complaints have become a familiar litany — games released with abysmal optimisation, stutters, frame rates dropping to single digits, frequent crashes, shaders compiling every time you launch, and an overall sense of disaster. STALKER 2, Metal Gear Solid Delta, Mafia: The Old Country — the problem seems to follow the engine like a ghost in the machine.
Metal Gear Solid Delta, one of the most anticipated releases in recent memory, struggled to hit 4K 60fps even on medium settings with a card costing over a thousand dollars — and even when it did, frametime stutters remained. Most players were forgiving, rewarding the game's underlying quality with positive reviews regardless, but the performance criticism was widespread and fair. Wuchang: Fallen Feathers was less fortunate: players appreciated the design but could not overlook the performance, and its Steam review score paid the price at launch. Borderlands 4, switching to UE5 from a proven and well-optimised UE4 codebase, found its open world struggling to maintain 60fps on consoles after an hour or so — due, in no small part, to memory leaks.
The instinct is understandable. When the same engine appears in the credits of enough broken launches, it starts to look like the common denominator. But correlation is not causation, and the real common denominator is something rather more uncomfortable to admit: most of these studios simply weren't ready.
The counter-evidence is right there in plain sight. Black Myth: Wukong, built on Unreal Engine 5 by a team with deep technical investment in the platform, launched to near-universal acclaim for its visuals and its performance. Stellar Blade, also UE5, ran beautifully at launch. And then there's Fortnite — a UE5 title running Nanite, Lumen, and more, at scale, across an enormous range of hardware, and doing so reliably. Of course, Fortnite has a slight advantage: it's made by Epic Games, the people who built the engine. Hard to complain the toolkit doesn't work when the toolmaker is using it to run one of the most played games on the planet. The engine is not broken. Some teams just know how to use it.
The Temptation of the Feature Buffet
Here is the fundamental problem. Unreal Engine 5 ships with a set of rendering systems — Nanite, Lumen, Virtual Shadow Maps — that are, genuinely, extraordinary pieces of technology. They promise to eliminate entire categories of manual work. No more hand-crafted LODs. No more baked lighting. No more shadow map budgeting. Just enable the feature and let the engine handle it.
The problem is that these systems are expensive, and that expense is not always obvious until you're deep in production, staring at a GPU frame capture that looks like a ransom note.
One Steam user, dissecting a well-performing UE5 release, noted that the game ran well specifically because it used none of the default heavy features — no Nanite meshes, no Lumen, no Virtual Shadow Maps — instead using classic LOD setups and conventional global illumination. His conclusion was pointed: "the TOOLS are a bigger problem than the actual engine. The Nanite and Lumen suites simply weren't refined enough yet for modern hardware, and the defaults as UE5 uses them are set up so poorly that developers have fallen straight into the trap."
That trap is a familiar one. Turning features off requires understanding what they cost and what they replace. It requires experienced engineers who've shipped products on this engine, who've sat in the profiling trenches, who know that Virtual Shadow Maps will eat your frame budget if you let them run unchecked on a console. That knowledge doesn't come from a tutorial series. It comes from hard experience — usually the experience of having shipped something, learned what went wrong, and carried those lessons into the next project.
Enabling everything is easy. Being selective is work. And that work requires people who know what they're doing.
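Being selective usually starts in config rather than code. As a sketch — and these console variable names should be checked against your engine version, since they shift between releases — opting back out of the heavyweight defaults in `DefaultEngine.ini` looks something like this:

```ini
[/Script/Engine.RendererSettings]
; Fall back to baked / conventional GI instead of Lumen
; (0 = none, 1 = Lumen, 2 = screen space)
r.DynamicGlobalIlluminationMethod=0
r.ReflectionMethod=0
; Use conventional shadow maps instead of Virtual Shadow Maps
r.Shadow.Virtual.Enable=0
; Disable Nanite support project-wide and rely on classic LOD chains
r.Nanite.ProjectEnabled=False
```

None of which says these features are bad — only that each one should be a deliberate line in a config file, justified by a frame capture, rather than a default nobody questioned.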
The Physics Problem Nobody Talks About
And it's not always a programmer making this mistake. One of the most common performance killers I encounter — and one that rarely makes it into the post-mortems — comes from the art and tech-art side of production.
Picture a Tech Artist tracking down a bug. Objects in a particular scene aren't behaving quite right — something about the physics interaction looks off. They dig through the settings, find the relevant physics feature, enable it on the affected objects, and the bug goes away. Problem solved. Then a thought creeps in: "Wait — could this be causing similar issues elsewhere in the game? I should probably just turn this on for everything, to be safe."
Unreal Engine obliges. It's the artist's choice; the engine will dutifully do whatever it's asked. And so, with a few property changes in the editor, physics simulation that was previously running on a handful of specific objects is now running on every prop, every piece of environmental geometry, every asset across the entire game world — large and small, interactive and static, significant and completely irrelevant.
The frame budget consequences of that decision are enormous. Physics simulation on a single object is cheap. Physics simulation running unnecessarily on hundreds or thousands of objects per scene is a tax on every single frame, forever, until someone notices. And often nobody does notice — at least not until the game is running on a minimum-spec machine, or until a profiler session reveals a physics tick that has absolutely no business being that expensive.
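The scale of that tax is easy to underestimate, so here is the back-of-envelope arithmetic. The per-body cost below is a deliberately invented placeholder, not a measurement from any real title:

```python
# Back-of-envelope cost of globally enabled physics simulation.
# All figures are illustrative assumptions, not profiled measurements.

FRAME_BUDGET_MS = 1000.0 / 60.0   # ~16.7 ms per frame at 60 fps
COST_PER_BODY_MS = 0.005          # assumed per-object simulation cost

def physics_tax(active_bodies: int) -> float:
    """Total per-frame physics cost for N simulating bodies, in ms."""
    return active_bodies * COST_PER_BODY_MS

# A handful of intentionally simulated props: negligible.
assert physics_tax(20) < 0.2

# Every prop in a dense scene simulating "to be safe": nearly a third
# of the frame budget gone before a single pixel is rendered.
tax = physics_tax(1000)
print(f"{tax:.1f} ms of a {FRAME_BUDGET_MS:.1f} ms budget")  # 5.0 ms of a 16.7 ms budget
```

Swap in your own profiled per-body cost and the shape of the conclusion doesn't change: the multiplier is the object count, and "enable it everywhere" sets that multiplier to the whole level.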
This is the kind of problem that experienced technical oversight catches early. Someone who has shipped games with Unreal Engine knows to ask "why is this enabled globally?" before approving the change. Without that experience in the room, perfectly well-intentioned decisions compound quietly into a performance budget that's been eaten alive before anyone realised it was happening.
Contrast that with Black Myth: Wukong, whose success was no accident. The development team had real technical depth on UE5 — years of investment, close collaboration with Epic, and the willingness to do the hard performance work rather than lean on defaults.
A Tale of Two Disasters
I've spent a long time working in the engine-side trenches of this industry, and I've seen these patterns play out up close, in ways that were both instructive and, at times, deeply concerning.
When "Technology Experts" Aren't
Some years ago, my team and I were called in to assist a development studio that carried, formally, the title of technology experts for their publisher. This was a respected, experienced studio — not a newcomer, not a team that had never shipped. They had real pedigree. And yet, when we arrived, what we found in their source control system was extraordinary, and not in a good way.
They were running Perforce, standard practice for a studio of their size working with Unreal Engine source. The problem was what they weren't running: any kind of layered depot structure at all.
Working with Unreal Engine source correctly requires a branching hierarchy that mirrors how the engine is actually composed. At the base you have Epic's Unreal Engine. On top of that, a stream for the engine plus any licensed third-party middleware and extensions. Above that, a stream for the studio's own engine improvements and modifications. And finally, the game itself — which may or may not need to be a further fork, depending on the project. Each layer can receive updates from the one below it cleanly, which means when Epic ships a new engine revision, you integrate it at the base, propagate it upward through the layers, and your modifications remain intact and coherent throughout.
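In Perforce terms, that hierarchy maps naturally onto streams. The depot and stream names below are illustrative, not a standard:

```
# One stream per layer; each receives changes cleanly from its parent.
p4 stream -t mainline //Engine/Epic                                  # vanilla engine drops from Epic
p4 stream -t development -P //Engine/Epic       //Engine/ThirdParty  # + licensed middleware
p4 stream -t development -P //Engine/ThirdParty //Engine/Studio      # + studio engine modifications
p4 stream -t development -P //Engine/Studio     //Game/Main          # the game itself
```

When a new engine drop arrives from Epic, it is submitted to the base stream and merged upward one layer at a time, so each layer resolves only the conflicts it actually owns.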
This studio had none of that. Their entire setup was one stream: the game. Unreal Engine, third-party code, engine modifications, and game code — all in a single undifferentiated mass. When a new engine version arrived from Epic, there was no clean mechanism to integrate it. Instead, engineers were using Araxis Merge to manually diff the incoming changes against their codebase, line by line, and attempt to apply them by hand. For major version updates this was painful. For minor revisions it was still painful. And the further behind they fell, the more changes accumulated between their current state and the latest engine, and the worse each subsequent merge became. They were in a death spiral of their own making — not through malice, but simply through never having set the structure up correctly in the first place.
Here's the thing about Unreal Engine development: Epic's engineering team is large. At the time we were working with this studio, there were somewhere in the region of seventy to a hundred developers actively contributing improvements, optimisations, and fixes to the engine on a rolling basis. My client's technology team — talented people, making genuinely good engine contributions — numbered three. Three engineers, however skilled, producing changes in isolation from a team of nearly a hundred.
The solution wasn't glamorous. We had to rebuild their source management tree from a clean foundation, carefully reapply their legitimate engine modifications onto an up-to-date base, and put in place a structure that could actually sustain ongoing engine updates. We got them back on track. But the cost — in time, in lost improvements, in the compounding technical debt of the months they'd fallen behind — was significant. And this was the studio the publisher trusted as their engine authority.
Source control strategy for Unreal Engine is not a secondary concern. If your depot structure can't cleanly receive upstream changes from Epic, you are not running Unreal Engine — you are running a slowly diverging fork of it, and forks grow more expensive to maintain with every passing month.
When Cleverness Isn't Enough
The second story is less about infrastructure and more about the human cost of misplaced confidence.
A developer came to me after their game had already launched. The Steam reviews were, by that point, damning — and justifiably so. The game was crashing. Not occasionally; not on specific hardware configurations that could be debugged with a targeted patch. It was crashing routinely, often within an hour of play. The player experience was, charitably, poor.
When we dug in, there was a complicated picture. The team had done some genuinely impressive technical work — novel additions to Unreal's rendering pipeline that showed real creativity and ambition. You could see the talent. But running alongside all of that cleverness, the game was haemorrhaging memory faster than a publisher haemorrhages goodwill on a broken launch day — which, as it turned out, was precisely what was unfolding.
The critical context: this was the team's first AAA project. Their previous titles had been successful, charming, commercially viable — 2D platformers with a dedicated audience. They were good programmers. They understood their previous codebase deeply. But Unreal Engine at AAA scale is a different discipline entirely, and nobody on the team had shipped a AAA game before. The memory management practices that are perfectly adequate for a 2D platformer are not the practices you need when you're managing streaming assets across a large 3D world on constrained console hardware.
We found the leaks. We fixed the stability. We delivered a v1.1 that was, genuinely, a good game — solid, playable, much closer to what the team had originally intended. But it arrived a month after launch. The reviews were already filed. The refund requests had already gone in. Some damage, in this industry, cannot be repaired with a patch.
This Is Happening Right Now
These anecdotes are not historical curiosities. The exact same dynamics are playing out in 2025 and 2026, at scale, across studios that are adopting UE5 because it is, frankly, the industry standard — and because it is genuinely spectacular when used well.
The hardware reality compounds the problem. The majority of PC gamers are running hardware in the RTX 3060 range or below, at 1080p resolution. UE5's showcase features — Lumen, Nanite, Virtual Shadow Maps at full resolution — were not designed for that hardware profile. They were designed for the machine on the lead developer's desk. Shipping a game that performs beautifully on a 4090 development rig and struggles on the GPU that most of your customers actually own is not an engine problem. It is a scoping and testing failure.
The majority of your players sit at the lower end of that hardware range. Enabling Lumen and Nanite with default settings is a choice to make your game inaccessible to more than half the people who might buy it, unless you invest the time and the expertise to profile, tune, and scale those features correctly.
The concern surrounding The Witcher 4 — CDPR's high-profile move from their proprietary REDengine to Unreal Engine 5 — illustrates just how much anxiety has built up around this topic. CD Projekt Red are working closely with Epic, and they have the budget and the scale to do this properly. But the question being asked is a reasonable one: if studios with far fewer resources are shipping broken UE5 games month after month, what's actually different here? The answer, I'd argue, is investment in engine expertise — and CDPR have made that investment. The question for everyone else is whether they're prepared to do the same.
The Actual Solution
None of this is a counsel of despair. Unreal Engine 5 is an extraordinary piece of technology, and the studios that have invested properly in understanding it are producing games of exceptional quality and performance. The gap between a mediocre UE5 implementation and a top-tier one is not primarily a budget gap. It is an expertise gap.
The studios that ship well-optimised UE5 games share several characteristics:
- They have engineers with prior, shipped experience on Unreal Engine — not just developers who are learning on the job
- They treat source control architecture as a first-class concern, not an afterthought
- They profile on target hardware throughout development, not just on high-end development machines
- They are selective about which engine features they enable, and they understand the cost of each one
- They stay current with upstream engine improvements, rather than letting their fork drift
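On the profiling point specifically, the tooling ships with the engine; no third-party licence required. A few of the standard entry points (console commands current as of UE5 — check them against your engine version's documentation):

```
stat unit          -- game, draw and GPU thread timings overlaid in real time
stat gpu           -- per-pass GPU cost breakdown
memreport -full    -- detailed memory report written to the log
```

For deeper captures, launch with `-trace=default` and open the resulting trace in Unreal Insights. The point is not which tool you pick; it is that these sessions happen on minimum-spec hardware, on a schedule, throughout production.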
The good news is that this expertise doesn't have to live permanently on your headcount. Consultancy and targeted technical intervention — bringing in experienced engine engineers for an audit, a performance review, a source management overhaul, or a structured handover of best practices — can close that gap faster and more cost-effectively than building the capability in-house from scratch.
If your team is mid-production on an Unreal Engine title and you haven't yet done a dedicated performance profile on minimum-spec target hardware, that profiling session is the most valuable thing you could schedule this week. The earlier problems are found, the cheaper they are to fix.
The pattern I've described in this post — talented teams, impressive technology, insufficient engine expertise, broken launch — is not inevitable. It is, in fact, entirely preventable. But prevention requires humility: the recognition that Unreal Engine 5 is a deep, complex platform that rewards genuine expertise, and that turning on every feature in the settings panel is not the same as knowing how to use them.
The engine isn't the problem. Treating it as a magic box that does the hard work for you is the problem.
And right now, across studios in Southeast Asia, in Europe, in North America — teams are making exactly that mistake, in exactly that way, on projects that are months from launch. Some of them will ship something wonderful. The rest will be writing forum posts about how it's the engine's fault.
If your studio is navigating an Unreal Engine project — whether you're in early production, approaching a performance crunch, or trying to understand why your frame budget doesn't add up — I'm available for technical consultancy and studio improvement engagements. Reach out, and let's talk before the reviews do.