
DirectX 12 Performance Tested In Ashes of the Singularity

Vigile writes: The future of graphics APIs lies in DirectX 12 and Vulkan, both built to target GPU hardware at a lower level than previously available. The advantages are better performance, better efficiency on all hardware, and more control for developers willing to put in the time and effort to understand the hardware in question. Until today we had only heard or seen theoretical "peak" performance claims for DX12 compared to DX11. PC Perspective just posted an article that uses a pre-beta version of Ashes of the Singularity, an upcoming RTS built on the Oxide Games Nitrous engine, to evaluate DX12's performance claims and gains against DX11. The story covers five different processor platforms tested with two different GPUs at two different resolutions. The results are interesting and show that DX12 levels the playing field for AMD, with its R9 390X gaining enough ground under DX12 to overcome the significant deficit it shows against the GTX 980 under DX11.

  • by Anonymous Coward on Monday August 17, 2015 @07:06PM (#50335829)

    The Developer now must know MORE about the underlying hardware to make the best use of Direct X 12?

    This is a total step in the WRONG direction. So now having Direct X 12 hardware doesn't mean your game just works, oh no. If you want the full experience you now must have the HARDWARE that your game was written for, or forget all this compatible Direct X stuff. How's this different from the game developer just coding directly to the video hardware of their choice? That's what they do now, especially when they are funded by the video hardware guys in an effort to sell more hardware.

    For this Direct X thing to really be useful, it needs to isolate the developer from the hardware implementation. You need to abstract away the vendor specifics and make the programming agnostic to what hardware it's running on... Otherwise this is all just going to be what it has always been, vendor lock-in for specific games, and it will drive us towards only ONE video hardware chip maker....

    • The Developer now must know MORE about the underlying hardware to make the best use of Direct X 12?

      Uh, yeah, that's what a lower level API means.

    • by exomondo ( 1725132 ) on Monday August 17, 2015 @08:08PM (#50336081)

      I'm not quite sure why you've been modded down; this is a perfectly valid and legitimate concern.

      Part of the problem with PC gaming has always been that the variety of hardware means we have had to have abstraction layers, and these introduce inefficiencies that have reached significant levels nowadays. Remember how, every time a new console generation is released, PC gamers proclaim it terrible because their PC hardware is theoretically more powerful? Well, that is true; however, being more powerful is pointless if that power cannot be efficiently utilized, it just goes to waste. Heavy abstractions and generic implementations mean a lot of that increased power is wasted. So this is about bringing APIs up to scratch with modern hardware and removing some of the legacy cruft. It means more responsibility on the part of the application developer, in the same way that it did when we went from the fixed-function pipeline to the programmable pipeline many years ago.

      For this Direct X thing to really be useful, it needs to isolate the developer from the hardware implementation. You need to abstract away the vendor specifics and make the programming agnostic to what hardware it's running on.

      Well, it still is relatively hardware agnostic; the difference is that we have had many advances in hardware that are not reflected in modern APIs. Take resource binding, for example. Currently, resources are bound to "slots" when you define a shader pipeline and are fixed at draw time, so if you want to change the resources that the shader pipeline uses, you need to bind the new resources to those slots and draw again. This was a great general view of hardware at the time and is forward-compatible. But modern hardware has long had the capability to index a table of resources rather than just using whatever is currently bound to the resource slots, yet the APIs are not architected to allow this. So this is generally implemented on an application-specific basis in the driver: the driver author (usually the hardware manufacturer) works with the application developer to understand what they are trying to accomplish that the API doesn't provide, and then creates a kind of munging layer in the driver that converts the application's "bind -> draw bound, rebind -> draw bound" workflow into a "create table -> draw all" workflow. This is partly why you see significant performance differences in applications between driver versions, but also why an application on similarly capable hardware can perform so vastly differently between vendors.

      That's just one example, and I hope it's somewhat clear. Yes, there is less of an abstraction in some areas, but at the API level it doesn't do things like expose an AMD R9-specific feature or an nVidia GTX 980-specific feature.
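      To make that concrete, here's a rough C++ sketch of the two workflows. The Mesh struct and the root signature layout are hypothetical and error handling is omitted; this illustrates the pattern, not code from any actual engine or driver:

```cpp
#include <d3d11.h>
#include <d3d12.h>
#include <vector>

// Hypothetical per-mesh data, only for illustration.
struct Mesh {
    ID3D11ShaderResourceView* diffuseSrv;  // used by the D3D11 path
    UINT materialIndex;                    // used by the D3D12 path to index a texture table
    UINT indexCount;
    UINT firstIndex;
};

// D3D11-style "bind to slots, then draw": every material change goes through
// the driver to rebind slot 0 before the next draw call.
void DrawSceneD3D11(ID3D11DeviceContext* ctx, const std::vector<Mesh>& meshes)
{
    for (const Mesh& m : meshes) {
        ctx->PSSetShaderResources(0, 1, &m.diffuseSrv);
        ctx->DrawIndexed(m.indexCount, m.firstIndex, 0);
    }
}

// D3D12-style "table of resources": the descriptor table is bound once and the
// shader indexes into it, so the binding state is never touched per mesh.
// Assumes a root signature with a 32-bit constant at parameter 0 and a
// descriptor table at parameter 1.
void DrawSceneD3D12(ID3D12GraphicsCommandList* cmd,
                    D3D12_GPU_DESCRIPTOR_HANDLE textureTable,
                    const std::vector<Mesh>& meshes)
{
    cmd->SetGraphicsRootDescriptorTable(1, textureTable);
    for (const Mesh& m : meshes) {
        cmd->SetGraphicsRoot32BitConstant(0, m.materialIndex, 0);
        cmd->DrawIndexedInstanced(m.indexCount, 1, m.firstIndex, 0, 0);
    }
}
```

      The second version is much closer to what the hardware has been able to do for years; the first is the model the older API forces the driver to emulate.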

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        I know slashdot usually does not read AC, but I reply nonetheless.

        Your post and the grandparent are spot on, but I still do not think it is a problem. The ecosystem has changed a lot; "back then" basically every studio wrote their own engine, whereas today we have three big engine makers (Epic with UE, Crytek with CryEngine, and Unity) on the market plus a few big in-house engines of the publishers that they do not license out.
        A few other larger projects make their own engines, but if we are down to half a dozen major

    • by Dutch Gun ( 899105 ) on Monday August 17, 2015 @08:27PM (#50336149)

      No, don't worry. I haven't seen the DirectX 12 API yet myself (I'm still working in DX9-11 land), but I'm pretty sure all this is doing is making the abstraction layer more closely match the realities of the existing hardware designs. That is, it's not eliminating the abstraction altogether, but making it a much thinner layer, so as to avoid imposing unnecessary overhead.

      All the GPUs work in roughly the same manner, because they all have to consume the same common shader bytecode, which each vendor's driver translates to its own native micro-code. In order to be labeled "DX11" or "DX12" compliant hardware, a GPU must support a minimum set of functionality. Moreover, the vast majority of this functionality is accessed via shader languages, and this doesn't change from GPU to GPU.

      I'd be surprised if there was any significant divergence at all in the code between different types of GPUs. DirectX 12 looks like it's going to be a very good thing, both for developers and for gamers.
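      As a small illustration of what "hardware-agnostic" looks like in practice: the same HLSL source compiles to the same vendor-neutral bytecode no matter whose GPU it ends up on, and only the driver's final translation to native instructions differs. A minimal sketch (trivial example shader, error handling omitted):

```cpp
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// A trivial HLSL pixel shader: the source and the resulting bytecode are the
// same no matter which vendor's GPU eventually runs it.
static const char kPixelShader[] =
    "float4 main(float4 pos : SV_Position) : SV_Target "
    "{ return float4(1.0f, 0.5f, 0.25f, 1.0f); }";

ID3DBlob* CompilePixelShader()
{
    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors = nullptr;
    D3DCompile(kPixelShader, sizeof(kPixelShader) - 1, "example.hlsl",
               nullptr, nullptr,       // no macros, no include handler
               "main", "ps_5_0",       // entry point and shader model 5 target
               0, 0, &bytecode, &errors);
    if (errors) errors->Release();     // compile errors would be reported here
    return bytecode;                   // vendor-neutral blob, usable on any DX11/DX12 device
}
```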

    • by Anonymous Coward

      The Developer now must know MORE about the underlying hardware to make the best use of Direct X 12?

      If you think that is bad, you should see the shit developers have to do for current APIs. If you're not a AAA game, technically you can only use the standard APIs, but those are incredibly slow because making lots of system calls is inherently slow. Want to know why driver updates list increased performance in certain games? Because the driver devs look at how those games are using the APIs, then detect those games and do black magic behind the scenes that is all a bunch of bandaids to work around the horrible per

  • Driver Differences (Score:3, Interesting)

    by nateman1352 ( 971364 ) on Monday August 17, 2015 @07:08PM (#50335841)

    I think what this benchmark really tells us is two things:

    1. nVidia has not optimized their driver stack for DX12 as much as AMD has optimized for DX12
    2. The performance difference between AMD and nVidia is likely a software issue, not a hardware issue (nVidia's driver has a more optimized DX11 implementation than AMD's). However, it is possible that nVidia's silicon architecture is designed to run DX11 workloads better than AMD's.

    Bullet #1 makes sense: AMD has been developing Mantle for years now, so they likely have a more mature stack for these low-level APIs. Bullet #2 also makes sense: AMD/ATI's driver has been a known weak point for a long time now.

    • by Anonymous Coward

      Not really.

      It has been known for a long time that AMD has high driver overhead in Windows. This removes that. It's also known that in terms of brute power, AMD cards are in the vicinity of 15% faster overall. I fully expect AMD to catch up and perhaps overtake in DX12 benchmarking.

    • I think what this benchmark really tells us is two things:

      1. nVidia has not optimized their driver stack for DX12 as much as AMD has optimized for DX12

      Maybe, but the whole idea is that this should have little impact. These new APIs are about reducing driver overhead by redesigning the API so that it is a more accurate representation of the underlying hardware, requiring far less work from the driver to translate between what the application thinks the hardware looks like and what the hardware actually looks like.
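      For a concrete example of what "a more accurate representation of the underlying hardware" means: in D3D11 you can flip rasterizer, blend and depth state with separate calls at any time, so the driver has to re-validate (and sometimes re-patch shaders) behind your back at draw time. In D3D12 the whole configuration is baked into a single immutable pipeline state object up front, which is roughly how the hardware consumes it anyway. A rough sketch, with example formats and state values, and the vertex input layout omitted:

```cpp
#include <windows.h>
#include <d3d12.h>

ID3D12PipelineState* CreateOpaquePipeline(ID3D12Device* device,
                                          ID3D12RootSignature* rootSig,
                                          D3D12_SHADER_BYTECODE vs,
                                          D3D12_SHADER_BYTECODE ps)
{
    // Everything the hardware needs to know about this draw configuration is
    // declared once; the driver validates and compiles it here rather than at
    // draw time.
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature = rootSig;
    desc.VS = vs;
    desc.PS = ps;
    desc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;
    desc.RasterizerState.CullMode = D3D12_CULL_MODE_BACK;
    desc.BlendState.RenderTarget[0].RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
    desc.DepthStencilState.DepthEnable = TRUE;
    desc.DepthStencilState.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ALL;
    desc.DepthStencilState.DepthFunc = D3D12_COMPARISON_FUNC_LESS;
    desc.SampleMask = 0xFFFFFFFFu;
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets = 1;
    desc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.DSVFormat = DXGI_FORMAT_D32_FLOAT;
    desc.SampleDesc.Count = 1;

    ID3D12PipelineState* pso = nullptr;
    device->CreateGraphicsPipelineState(&desc, __uuidof(ID3D12PipelineState),
                                        reinterpret_cast<void**>(&pso));
    return pso;  // all validation happened up front, not per draw call
}
```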

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Monday August 17, 2015 @07:38PM (#50335959)
    Comment removed based on user account deletion
  • I'm more interested in the fact that the game used for benchmarking has the following in its backstory: "Computronium became the ultimate currency."

  • No. This engine's implementation *may* level the playing field for AMD, but that does not mean the API does.
  • by Kjella ( 173770 ) on Monday August 17, 2015 @08:11PM (#50336087) Homepage

    It is widely known that DX12 will reduce draw call overhead, making weaker CPUs perform better relative to stronger CPUs. This is of course good for AMD, since they don't have high-end CPUs anymore, though it's a bit of a "scorched earth" result where gamers don't need expensive CPUs at all. But if you look at "Ashes (Heavy) DX11 to DX12 Scaling - Radeon R9 390X" and at an extremely powerful CPU like the Core i7-6700, you're seeing 50-100% gains. If you're that severely bottle-necked by a 4+ GHz quad core then this is not a typical DX11 game.

    We can compare the "typical" difference between an R9 390X and a GTX 980 in Anandtech's bench [anandtech.com], though I have to substitute an R9 290X "Uber", so the differences should actually be even smaller. Normally these cards are almost head to head; the question is not why DX12 is closing the gap but why there's such a huge DX11 gap to begin with. The only reason I can come up with is that they're pushing way, way more draw calls than normal. That may be DX12 enabling developers to do things they wanted to but couldn't before, or it could be there to make someone look good or bad.

    • If you're that severely bottle-necked by a 4+ GHz quad core then this is not a typical DX11 game.

      We already know this isn't a typical DX11 game. They're using way more draw calls than a typical DX11 game simply because DX12 allows them to make more. That's even stated in the article.

    • Anyway, I have no idea what's up with this obsession with increasing the number of API calls. You don't need to make a lot of them if you design your rendering pipeline right. One glDrawElements can accomplish a lot of work, especially if you're using vertex and pixel shaders.
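    For instance, here's a rough OpenGL sketch of the difference between submitting one object per call and letting a single instanced call do the work (assumes a GL 3.3+ context with a loader such as GLAD; buffer, VAO and shader setup omitted):

```cpp
#include <glad/glad.h>  // assumption: GL 3.3+ context loaded via GLAD

// Drawing a forest of identical trees. The naive loop pays driver overhead
// once per tree; the instanced version pays it once for the whole forest.
void DrawForestNaive(GLint modelLoc, GLsizei indexCount,
                     const float* transforms /* 16 floats per tree */,
                     GLsizei treeCount)
{
    for (GLsizei i = 0; i < treeCount; ++i) {
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, transforms + 16 * i);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    }
}

void DrawForestInstanced(GLsizei indexCount, GLsizei treeCount)
{
    // The per-tree transforms are assumed to already live in a buffer that the
    // vertex shader reads via gl_InstanceID or an instanced vertex attribute.
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, treeCount);
}
```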
      • It's not just the number of API calls, it's all the things that go on in the driver. Things like shader recompiles to match new hardware state, mutexes and blocks on resource use, resource use tracking to make sure the next call doesn't interfere with the previous one and so on. There's a huge amount of bloat in drivers at the moment and it all contributes to the relative lack of efficiency. There's a fantastic post here at gamedev on the subject [gamedev.net]
  • by Anonymous Coward
    Direct3D 11 introduced the capability to render with multiple threads via deferred contexts. NVidia chose to support that feature, AMD did not. Direct3D 12 mandates multithreaded rendering.
    • Gains from threaded rendering with D3D11 were marginal to non-existent because of the way the driver worked. With D3D12 (and Vulkan), threading is a first-class citizen. You'll be able to call into the driver without a blocking penalty, so it will genuinely be faster (all else being equal). D3D12 doesn't "mandate" MT rendering, by the way. You can still do everything on a single thread if you want.
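    As a rough sketch of what first-class threading can look like in D3D12 (command list and allocator creation, fencing and the actual draw recording are omitted, and the per-thread split is just one way you might structure it):

```cpp
#include <d3d12.h>
#include <thread>
#include <vector>

// Each worker records its own command list in parallel without taking a
// driver-wide lock; the main thread then submits them in one ordered call.
void RecordAndSubmit(ID3D12CommandQueue* queue,
                     const std::vector<ID3D12GraphicsCommandList*>& lists)
{
    std::vector<std::thread> workers;
    for (size_t i = 0; i < lists.size(); ++i) {
        workers.emplace_back([&lists, i] {
            ID3D12GraphicsCommandList* cmd = lists[i];
            // ... record this thread's slice of the scene into cmd ...
            cmd->Close();  // finish recording; no implicit global serialization
        });
    }
    for (std::thread& t : workers) t.join();

    // One ordered submission from the main thread.
    queue->ExecuteCommandLists(
        static_cast<UINT>(lists.size()),
        reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
}
```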
  • All that these results show is that AMD has higher draw call overhead than nVidia does on DX11, but DX11 and older games were not designed to make massive numbers of draw calls, so it doesn't matter all that much when playing games designed for those older APIs. DX12 was designed to minimize API overhead so that games can start drawing way more stuff, and games designed to take advantage of this are going to suck on older APIs when they support them. If developers were to write support for DX12 in

  • by Luckyo ( 1726890 ) on Monday August 17, 2015 @08:50PM (#50336237)

    For those who haven't clued in yet: this is the same engine that was used for the "unreleased game turned DX12 synthetic benchmark" Star Swarm. All the same caveats apply:
    1. The engine is unknown and not available to the public, with unknown performance. We have no idea how the DX11 implementation is made, or why DX12 is so much faster here than anywhere else seen so far.
    2. The game is in pre-alpha, meaning performance is all over the place and a complete black box; it could render faster in DX11 in the next build for all we know.

    We've been there with Mantle already. Specialized tech demos showed massive performance boosts from using Mantle over DX11. Then came release, Frostbite et al. started supporting it, and we saw minimal to no performance boost outside of really low-end CPUs paired with really high-end GPUs.

    Show me these kinds of numbers on a known engine that has a polished DX11 implementation, like Unreal Engine 4, and I'll actually believe you. Until then, all I see is more marketing BS.

    • We do know why DX12 is a lot faster. For example here's one of a thousand articles [eurogamer.net] about it. Also please see NVIDIA's SIGGRAPH 2015 presentation on Vulkan (same kind of technology as D3D 12) [gputechconf.com].
    • by Kaitiff ( 167826 )

      There's a problem with your argument there, chief... this is a game that is about to release. It's not an alpha... from the video I just watched, the company is about to release 'Ashes' for purchase.

      DX11 is a dead tree, man. Might as well have used that same argument a few years ago with DX9... when M$ moves on with its API, you either get on board or you're left in the dust. I've been hoping for years and years that someone could make the push for OpenGL to become competitive again, but that's not gonna happen. At

      • the i3 is faster than AMD's flagship processor in this game. Ouch.

        Right now, for non-power users, the sweet spot is a Haswell i5; it is relatively cheap and is more powerful than anything AMD has to offer. However, it will cost you at least a couple hundred bucks more than the AMD solution when you factor in the motherboard as well, and it doesn't get you significantly better maximum frame rates. What it does get you is notably better minimum rates. What that suggests to me is that AMD's cores are just as fast as Intel's, but they can't shovel data into them as quickly. That'

  • I'm hoping it'll
    - Make good use of DX12
    - Have a stable and well-performing Windows 10
    - Have the new Intel processor
    - Be super quiet
