
For the last few years, there's been an ongoing debate about the benefits and advantages (or lack thereof) of DirectX 12. It hasn't helped that the argument has been bitterly partisan, with Nvidia GPUs often showing minimal benefits or even performance regressions, while AMD cards have frequently shown significant performance increases.

[H]ardOCP recently compared AMD and Nvidia performance in Ashes of the Singularity, Battlefield 1, Deus Ex: Mankind Divided, Hitman, Rise of the Tomb Raider, Sniper Elite 4, and Tom Clancy's The Division. Bear in mind that this was specifically designed as a high-end comparison that would pit the two APIs against each other in GPU-limited scenarios at high resolutions and detail levels, with a Core i7-6700K clocked at 4.7GHz powering the testbed. The GTX 1080 Ti was tested in 4K, while the less-powerful GTX 1080 and RX 480 were tested in 1440p. Before you squawk about comparing the GTX 1080 and the RX 480, keep in mind that each GPU was only compared against itself in DX11 versus DX12.

Sniper Elite 4 was a rare game with very strong results in DX12 versus DX11 on Polaris. Data and graph by [H]ardOCP.

The answer to whether DirectX 12 was better or worse than DirectX 11 boils down to "It depends." Specifically, it depends on whether you're using an AMD or an Nvidia GPU, and it depends on the game itself. AMD GPUs were less likely to show a performance delta between the two APIs, while Nvidia cards still tended to tilt towards DX11 overall. [H]ardOCP notes in its conclusion that DX11 is still the better overall API option, but that DX12 support has improved from both companies, performance deltas between the two APIs have dropped, and in a few cases, DX12 pulls out strong wins.

Why DirectX 12 hasn't transformed gaming

A few years ago, when low-overhead APIs like DirectX 12 and Vulkan hadn't been released and even Mantle was in its infancy, there were a lot of overconfident predictions about how these upcoming APIs would be fundamentally transformative to gaming, unleash the latent power in all of our computers, and transform the gaming industry. The truth, thus far, has been more prosaic. How much a game benefits from DirectX 12 depends on what kind of CPU you're testing it on, how GPU-limited your quality settings are, how much experience the developer has with the API to begin with, and whether the title was developed from the ground up to take advantage of DX12, or if its support for the API was patched in at a later date.

And the components you choose can have a significant impact on what kind of scaling you see. Consider the graph below, from TechSpot, which compares a variety of CPUs while using the Fury X.

Intel's Core i7-6700K barely twitches, while the Core i3-6100T has an average frame rate 1.14x higher and a minimum frame rate less than half as high. AMD's FX-6350 and FX-8370 both see average frame rates rise by nearly 27%, but, again, minimum frame rates drop severely.

A similar point is demonstrated below, with a graph of Hitman results. The 6700K is capable of driving the Fury X nearly as fast in DX11 as it is in DX12, while the FX-8370 improved enormously.

Hitman DirectX 12 CPU scaling.

One reason why we see things playing out the way they do is that the goals and performance-improving functions of low-overhead APIs have been misunderstood. It's been known for years that Nvidia GPUs are frequently faster with lower-end Intel or AMD CPUs (pre-Ryzen) than AMD's own GPUs are. Part of the reason for this is that Nvidia's DX11 drivers implement multi-threading, whereas AMD's do not. That's one reason why, in games like Ashes of the Singularity, AMD's GPU performance skyrocketed so much in DX12. But fundamentally, DX12, Vulkan, and Mantle are methods of compensating for weak single-threaded performance (or for spreading out a workload more evenly so it isn't bottlenecked by a single thread).
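The thread-scaling effect can be illustrated with a toy model. This is not real Direct3D code; it simply assumes a fixed CPU cost per draw call and that a DX12-style engine can record command lists on several threads in parallel, while frame time is set by whichever side (CPU submission or GPU rendering) finishes last. All numbers are invented for illustration.

```python
def frame_time_ms(draw_calls, cost_per_call_us, threads, gpu_time_ms):
    """Toy model: frame time is the slower of CPU submission and GPU rendering.

    Assumes submission work divides perfectly across threads, which real
    engines only approximate.
    """
    cpu_time_ms = draw_calls * cost_per_call_us / 1000.0 / threads
    return max(cpu_time_ms, gpu_time_ms)

# A weak CPU submitting 10,000 draw calls at 4 microseconds each,
# paired with a GPU that needs 16 ms to render the frame:
single_threaded = frame_time_ms(10_000, 4, threads=1, gpu_time_ms=16.0)  # CPU-bound: 40 ms
multi_threaded = frame_time_ms(10_000, 4, threads=4, gpu_time_ms=16.0)   # GPU-bound: 16 ms
```

Under these assumed numbers, spreading submission across four threads takes the frame from 40ms (25fps) to 16ms (62fps), which is the flavor of gain weak-CPU systems saw in draw-call-heavy DX12 titles.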

This article from Eurogamer is older, but it still makes an important point: the improvements to performance shown by Mantle and DX12 come from allowing the CPU to process more draw calls per second. If the GPU is already saturated with all the processing it can handle, stuffing more draw calls into the pipe isn't going to improve anything.
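That saturation point can be sketched with a small, illustrative calculation (the timings are invented, not measured): if the frame finishes only when the slower of the CPU and GPU does, then speeding up CPU-side submission changes nothing once the GPU is the bottleneck.

```python
def frame_time_ms(cpu_submit_ms, gpu_render_ms):
    # The frame is done when the slower side is done.
    return max(cpu_submit_ms, gpu_render_ms)

# GPU-limited scenario, e.g. 4K at high detail: the GPU needs 25 ms
# per frame no matter how quickly draw calls arrive.
dx11_frame = frame_time_ms(cpu_submit_ms=8.0, gpu_render_ms=25.0)
dx12_frame = frame_time_ms(cpu_submit_ms=2.0, gpu_render_ms=25.0)  # 4x faster submission
# Both come out to 25 ms: no visible gain from the lower-overhead API.
```

This is why the high-resolution, GPU-limited [H]ardOCP testing above is close to a worst case for showing DX12 gains.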

Now, having said all this, was there any point to DirectX 12 at all? Absolutely yes. Games, as a category of applications, have been among the slowest to embrace and benefit from multi-core processors. Even today, the number of games that can scale above four cores is quite small. Giving lower-end CPUs the freedom to use their resources more effectively can absolutely pay dividends for consumers on lower-end hardware. DirectX 12 is also still fairly new, with just a handful of supporting titles. It's not unusual for a new API to take several years to find its feet and for developers to begin supporting it as a primary option. Game engines have to be developed to work well with it. Developers have to become comfortable using it. AMD, Nvidia, and Intel need to release drivers that use it more effectively, and in some cases, may make hardware changes to their own GPUs to make low-latency APIs run more efficiently.

Neither the fact that DX12's gains over DX11 are less dramatic than many would prefer nor its limited adoption at this point in time is unusual for a new API that makes as many fundamental changes as DX12 makes relative to DX11. How those changes will shape games of the future remains to be seen.