DirectX 12 Performance Tested In Ashes of the Singularity
Vigile writes: The future of graphics APIs lies in DirectX 12 and Vulkan, both built to target GPU hardware at a lower level than previously available. The advantages are better performance, better efficiency on all hardware, and more control for the developer who is willing to put in the time and effort to understand the hardware in question. Until today we have only heard or seen theoretical "peak" performance claims for DX12 compared to DX11. PC Perspective just posted an article that uses a pre-beta version of Ashes of the Singularity, an upcoming RTS built on the Oxide Games Nitrous engine, to evaluate DX12's performance claims and gains against DX11. In the story, five different processor platforms are tested with two different GPUs at two different resolutions. The results are interesting and show that DX12 levels the playing field for AMD, with its R9 390X gaining enough ground under DX12 to overcome the significant performance deficit it has against the GTX 980 under DX11.
Re: (Score:2)
NVIDIA is just as terrible for drivers. Just look at the massive failure at the Windows 10 launch.
A tiny minority of nVidia users had a crash bug, and a new driver was rolled out a day later. That doesn't sound like massive failure to me. Massive failure is when the GUI for your driver is larger, slower, and more bloated than Adobe Reader.
Wait a min, Direct X 12 needs what? (Score:5, Interesting)
The Developer now must know MORE about the underlying hardware to make the best use of Direct X 12?
This is a total step in the WRONG direction. So now having Direct X 12 hardware doesn't mean your game just works, oh no. If you want the full experience you now must have the HARDWARE that your game was written for, or forget all this compatible Direct X stuff. How's this different from the game developer just coding directly to the video hardware of choice? That's what they do now, especially when they are funded by the video hardware guys in an effort to sell more hardware.
For this Direct X thing to really be useful, it needs to isolate the developer from the hardware implementation. You need to abstract away the vendor specifics and make the programming agnostic to what hardware it's running on... Otherwise this is all just going to be what it has always been: vendor lock-in for specific games, driving us towards only ONE video hardware chip maker....
Re: (Score:3)
The Developer now must know MORE about the underlying hardware to make the best use of Direct X 12?
Uh, yeah, that's what a lower level API means.
Re:Wait a min, Direct X 12 needs what? (Score:5, Interesting)
I'm not quite sure why you were modded down; this is a perfectly valid and legitimate concern.
Part of the problem with PC gaming has always been that the variety of hardware forces us to use abstraction layers, and the inefficiencies those layers introduce have reached significant levels by now. Remember how, every time a new console generation is released, PC gamers proclaim it is terrible because their PC hardware is theoretically more powerful? That is true, but being more powerful is pointless if that power cannot be efficiently utilized; it just goes to waste. Heavy abstractions and generic implementations mean a lot of that extra power is wasted. So this is about bringing the APIs up to scratch with modern hardware and removing some of the legacy cruft. It means more responsibility on the part of the application developer, in the same way it did when we went from the fixed-function pipeline to the programmable pipeline many years ago.
For this Direct X thing to really be useful, it needs to isolate the developer from the hardware implementation. You need to abstract away the vendor specifics and make the programming agnostic to what hardware it's running on.
Well, it still is relatively hardware agnostic; the difference is that we have had many advances in hardware that are not reflected in the current APIs. Take resource binding, for example. Currently resources are bound to "slots" when you define a shader pipeline, and they are fixed at draw time, so if you want to change the resources the pipeline uses you need to bind the new resources to those slots and draw again. This was a great general view of hardware at the time and is forward-compatible. But modern hardware has long been able to index into a table of resources rather than using only whatever is currently bound to the slots, and the APIs are not architected to allow this. So it is generally implemented on an application-specific basis in the driver: the driver author (usually the hardware manufacturer) works with the application developer to understand what they are trying to accomplish that the API doesn't provide, then creates a kind of munging layer in the driver that converts the application's "bind, draw, rebind, draw" workflow into a "create table, draw all" workflow. This is partly why you see significant performance differences in applications between driver versions, and also why an application can perform so differently on similarly capable hardware from different vendors.
That's just one example; I hope it's somewhat clear (a rough sketch of the two binding workflows is below). Yes, there is less of an abstraction in some areas, but at the API level it doesn't do things like expose an AMD R9-specific feature or an nVidia GTX 980-specific feature.
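To make the contrast concrete, here is a rough conceptual sketch in C++ of the two workflows. The types and functions (Texture, FakeDevice, bindSlot, drawBound, drawIndexed) are invented for illustration and are NOT real Direct3D calls; they just model "bind, draw, rebind, draw" versus "build one table, draw everything by index".

    #include <cstdio>
    #include <string>
    #include <vector>

    // Hypothetical stand-ins for GPU resources and a device -- not real D3D types.
    struct Texture { std::string name; };

    struct FakeDevice {
        // Slot-style binding: a small fixed set of "currently bound" resources.
        static constexpr int kNumSlots = 4;
        const Texture* slots[kNumSlots] = {};

        void bindSlot(int slot, const Texture& t) { slots[slot] = &t; }
        void drawBound() { std::printf("draw using %s\n", slots[0]->name.c_str()); }

        // Table-style binding: the draw just picks an index into a big resource table.
        void drawIndexed(const std::vector<const Texture*>& table, size_t index) {
            std::printf("draw using %s (indexed from table)\n", table[index]->name.c_str());
        }
    };

    int main() {
        FakeDevice dev;
        std::vector<Texture> textures = {{"rock"}, {"grass"}, {"water"}};

        // DX11-style: bind -> draw -> rebind -> draw; the driver does work on every rebind.
        for (const Texture& t : textures) {
            dev.bindSlot(0, t);
            dev.drawBound();
        }

        // Table-style ("bindless"): build one table up front, then each draw just passes
        // an index, so there is far less per-draw binding traffic for the driver to translate.
        std::vector<const Texture*> table;
        for (const Texture& t : textures) table.push_back(&t);
        for (size_t i = 0; i < table.size(); ++i) dev.drawIndexed(table, i);

        return 0;
    }

In real DX12 the table would be a descriptor heap and the draws would be recorded into command lists, but the point is the same: the per-draw binding traffic that the driver used to have to translate mostly disappears.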
Re: (Score:3, Insightful)
I know Slashdot usually does not read AC, but I reply nonetheless.
Your post and the grandparent are spot on, but I still do not think it is a problem. The ecosystem has changed a lot: "back then" basically every studio wrote their own engine, whereas today we have three big engine makers on the market (Epic with UE, Crytek with CryEngine, and Unity), plus a few big in-house engines that the publishers do not license out.
A few other larger projects make their own engines, but if we are down to half a dozen major
Re:Wait a min, Direct X 12 needs what? (Score:4, Insightful)
No, don't worry. I haven't seen the DirectX 12 API yet myself (I'm still working in DX9-11 land), but I'm pretty sure all this is doing is making the abstraction layer more closely match the realities of the existing hardware designs. That is, it's not eliminating the abstraction altogether, but making it a much thinner layer, so as to avoid imposing unnecessary overhead.
All the GPUs work in roughly the same manner, because they have to execute the same common shader bytecode. In order to be labeled "DX11" or "DX12" compliant hardware, a GPU must support a minimum set of functionality. Moreover, the vast majority of this functionality is accessed via shader languages, and this doesn't change from GPU to GPU.
I'd be surprised if there was any significant divergence at all between the code paths for different GPUs. DirectX 12 looks like it's going to be a very good thing, both for developers and for gamers.
Re: (Score:1)
Re: (Score:2)
No, it's not weird. You just seem to be confused by the description of "lower level hardware access".
These aren't vendor-specific extensions. This is simply a redesigned API that eliminates a lot of the software bottlenecks, allowing developers to get a bit closer to the hardware in a GPU-independent manner. You're still programming to a common API. It's just that the API better reflects what a 2015-era GPU can do, rather than impose a lot of software overhead that isn't really needed.
Re: (Score:1)
The Developer now must know MORE about the underlying hardware to make the best use of Direct X 12?
If you think that is bad, you should see the shit developers have to do for current APIs. If you're not a AAA game, technically you can only use the standard APIs, but these are incredibly slow because making lots of system calls is inherently slow. Want to know why driver updates list increased performance in certain games? Because the driver devs look at how those games are using the APIs, then detect those games and do black magic behind the scenes that is all a bunch of bandaids to work around the horrible per
Driver Differences (Score:3, Interesting)
I think what this benchmark really tells us is two things:
1. nVidia has not optimized their driver stack for DX12 as much as AMD has optimized for DX12
2. The performance difference between AMD and nVidia is likely a software issue, not a hardware issue (nVidia's driver has a more optimized DX11 implementation than AMD's). However, it is possible that nVidia's silicon architecture is designed to run DX11 workloads better than AMD's.
Bullet #1 makes sense: AMD has been developing Mantle for years now, so they likely have a more mature stack for these low-level APIs. Bullet #2 also makes sense: AMD/ATI's driver has been a known weak point for a long time now.
Re: (Score:2)
They aren't offloading to DX12, since DX12 is a much smaller API that does a lot less than the previous ones. They are offloading to the game engine developers.
Re: (Score:1)
Not really.
It has been known for a long time that AMD has high driver overhead in Windows. DX12 removes much of that. It's also known that, in terms of brute power, AMD cards are in the vicinity of 15% faster overall. I fully expect AMD to catch up and perhaps overtake nVidia in DX12 benchmarking.
Re: (Score:2)
I think what this benchmark really tells us is two things:
1. nVidia has not optimized their driver stack for DX12 as much as AMD has optimized for DX12
Maybe, but the whole idea is that this should have little impact. These new APIs are about reducing driver overhead by redesigning the API so that it is a more accurate representation of the underlying hardware, requiring far less work from the driver to convert between what the application thinks the hardware looks like and what the hardware actually looks like.
Re: (Score:2)
Re: (Score:2)
>> DX12 is all hype.
Not according to this:
https://www.youtube.com/watch?... [youtube.com]
The big issue seems to be whether the benchmark is synthetic/representative or not, but since it is really just an early version of a real game that will be released next year I tend towards believing it is legit.
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
what about putting
127.0.0.1 microsoft.com
in your hosts file? Would that work?
Re: (Score:2)
I'm VERY interested in Vulkan, but my fear is that all the AAA developers other than Valve will just keep assuming DirectX-only even for new game development, mostly through an incorrect belief that they're not losing sales by failing to support Linux.
I find it very frustrating that Bethesda especially keep plodding along on their own tired old Windows-only engine instead of switching to, say, UT4. Apart from the Linux-support-for-free that would bring, judging from what they showed at IGN the upcoming Fall
Re: (Score:1)
Re: (Score:2)
You shouldn't be in a hurry, but upgrading to Windows 8.1 will be worth it eventually, with EOL in 2023 rather than 2020 and an update from WDDM 1.1 to WDDM 1.3.
Re: (Score:2)
I REALLY don't like the UI of Windows 8.1. I think it's completely unworkable.
I use Linux for anything other than gaming, so I also don't care about the Win7 EOL since it doesn't seem to stop you actually using the OS; it just means no more annoying alerts about ambiguous security patches that usually don't fix anything relevant/significant anyway.
Would there be any noticeable benefit of WDDM 1.3 over WDDM 1.1 when just using Windows for gaming?
Re: (Score:2)
Except for the ugly window borders and other pet peeves.
I don't even like Windows 7 that much either; it's mostly fine, but I hate the file manager.
Re: (Score:2)
Windows 7 has the Home Basic version; install that and you only get the 2D desktop.
Changing the visual style? With Windows 8 they removed the classic look, and with Windows 7 they removed the color schemes in the classic look. I could use a very old third-party tool, but it didn't include the old styles, only additional ugly ones.
With Windows 8 you can get such a train wreck of a theme http://kizo2703.deviantart.com... [deviantart.com]
So with Windows the easy way I found is to install 7 Home Basic and leave it alone ("Aero Basic"). I don't
Re: (Score:2)
Some more efficient.. things.. the most easily understandable is better multitasking on the GPU.
https://en.wikipedia.org/wiki/... [wikipedia.org]
And some.. things.. a wild guess is that it allows great performance when alt-tabbing out of and back into a game. We all cordially hate Windows 8, but it is amazingly fast and smooth at showing you a Metro thing or the Charms bar, whether you wanted it or not.
Comment removed (Score:5, Insightful)
Re: (Score:2)
Re: (Score:1)
If they didn't give a shit then why are they collecting data?
Let me guess, you use the computer mommy and daddy bought for you purely for video games and Facebook, right? Of course *you* don't have privacy concerns, you have no responsibility nor any data worth harvesting.
Re: (Score:2)
Do you even know (or care) what it's sending back or did you just read the headlines and start freaking out about your privacy?
"Computronium" (Score:2)
I'm more interested in the fact that the game used for benchmarking has the following in its backstory: "Computronium became the ultimate currency."
DX12 levels the playing field for AMD (Score:2)
These results don't make much sense (Score:3)
It is widely known that DX12 will reduce draw call overhead, making weaker CPUs perform better relative to stronger CPUs. This is of course good for AMD, since they don't have high-end CPUs anymore, though it's a bit of a "scorched earth" result where gamers don't need expensive CPUs at all. But if you look at "Ashes (Heavy) DX11 to DX12 Scaling - Radeon R9 390X" and focus on an extremely powerful CPU like the Core i7-6700, you're seeing 50-100% gains. If you're that severely bottlenecked by a 4+ GHz quad core, then this is not a typical DX11 game.
We can compare the "typical" difference between a R9 390X and a GTX 980 in Anandtech's bench [anandtech.com], though I have to substitute a R9 290X "Uber", so the differences should actually be even smaller. Normally these cards are almost head to head; the question is not why DX12 is closing the gap, but why there's such a huge DX11 gap to begin with. The only reason I can come up with is that they're pushing way, way more draw calls than normal. That may be DX12 enabling developers to do things they wanted to but couldn't before, or it could be there to make someone look good/bad.
Re: (Score:2)
If you're that severely bottle-necked by a 4+ GHz quad core then this is not a typical DX11 game.
We already know this isn't a typical DX11 game. They're using way more draw calls than a typical DX11 game simply because DX12 allows them to make more. That's even stated in the article.
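As a back-of-the-envelope illustration of why that matters (every number below is invented for the example, not taken from the article or the benchmark), the two levers are a lower CPU cost per draw call and the ability to build command lists on several cores:

    #include <cstdio>

    int main() {
        // All figures are made up purely for illustration -- NOT measurements.
        const double dx11_us_per_draw = 30.0;   // hypothetical CPU cost of one DX11 draw call
        const double dx12_us_per_draw = 3.0;    // hypothetical CPU cost of one DX12 draw call
        const int    draws_per_frame  = 20000;  // "way more draw calls than a typical DX11 game"
        const int    worker_threads   = 6;      // DX12 command lists can be built on several cores
        const double frame_budget_ms  = 16.7;   // one frame at 60 fps

        const double dx11_ms = dx11_us_per_draw * draws_per_frame / 1000.0;                  // 600 ms
        const double dx12_ms = dx12_us_per_draw * draws_per_frame / 1000.0 / worker_threads; // 10 ms

        std::printf("DX11 draw submission: %.0f ms per frame (budget %.1f ms)\n", dx11_ms, frame_budget_ms);
        std::printf("DX12 draw submission: %.0f ms per frame (budget %.1f ms)\n", dx12_ms, frame_budget_ms);
        return 0;
    }

With these made-up figures the DX11 path blows the 16.7 ms frame budget many times over while the DX12 path fits easily, which is the kind of asymmetry you only see in a title pushing far more draws than a typical DX11 game.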
Re: (Score:2)
Re: (Score:1)
Its the threading model (Score:2, Interesting)
Re: (Score:1)
Re: (Score:1)
OpenGL has existed, and has been open, for longer than Direct3D itself has. Direct3D still exists. Direct3D isn't even the current most popular graphics API if you consider mobile devices, but if you're just looking at high-end PC games, it still hasn't been dethroned and doesn't show any particular signs of being dethroned anytime soon.
Re: (Score:1)
Re: (Score:1)
OpenGL has existed, and has been open, for longer than Direct3D itself has. Direct3D still exists. Direct3D isn't even the current most popular graphics API if you consider mobile devices, but if you're just looking at high-end PC games, it still hasn't been dethroned and doesn't show any particular signs of being dethroned anytime soon.
Indeed, the only reason that Direct3D even exists is that 3dfx decided to create their own interface (GLIDE) instead of just using OpenGL. They later brought out a "MiniGL" driver which provided a reasonable subset of OpenGL, but by then the damage had been done and Microsoft had already created Direct3D. If they had just gone with OpenGL to start with, Microsoft would likely have followed suit. Microsoft never has an original idea; they only copy people. They copied 3dfx and the rest is the present, and unfor
Re: (Score:2)
What it actually shows (Score:2)
All that these results show is that AMD has higher draw call overhead than nVidia does on DX11. But DX11 and older games were not designed to make massive numbers of draw calls, so it doesn't matter all that much when playing games designed for those older APIs. DX12 was designed to minimize API overhead and allow games to start drawing way more stuff, and games designed to take advantage of that are going to suck on older APIs if they support them. If developers were to write support for DX12 in
Star Swarm style marketing continues (Score:5, Interesting)
For those who haven't clued in yet: this is the same engine that was used for "unreleased game turned DX12 synthetic benchmark" Star Swarm. All the same caveats apply:
1. Unknown engine, not available to the public, with unknown performance. We have no idea how the DX11 implementation is written, or why DX12 is so much faster here than anywhere else seen so far.
2. It is in pre-alpha, meaning performance is all over the place and a complete black box; it could render faster in DX11 in the next build for all we know.
We've been here with Mantle already. Specialized tech demos showed massive performance boosts from using Mantle over DX11. Then it released, Frostbite et al. started supporting it, and we saw minimal to no performance boost outside of really low-end CPUs paired with really high-end GPUs.
Show me these kinds of numbers on a known engine that has a polished DX11 implementation, like Unreal Engine 4, and I'll actually believe you. Until then, all I see is more marketing BS.
Re: (Score:2)
Re: (Score:2)
There's a problem with your argument there, chief... this is a game that is about to release. It's not an alpha; from the video I just watched, the company is about to release 'Ashes' for purchase.
DX11 is a dead tree, man. Might as well use that same argument a few years ago with DX9. When M$ moves on with its API, you either get on board or you're left in the dust. I've been hoping for years and years that someone could make the push for OpenGL to become competitive again, but that's not gonna happen. At
Re: (Score:2)
the i3 is faster than AMD's flagship processor in this game. Ouch.
Right now, for non-power users, the sweet spot is a Haswell i5: it is relatively cheap and more powerful than anything AMD has to offer. However, it will cost you at least a couple hundred bucks more than the AMD solution when you factor in the motherboard as well, and it doesn't get you significantly better maximum frame rates. What it does get you is notably better minimum rates. What that suggests to me is that AMD's cores are just as fast as Intel's, but they can't shovel data into them as quickly. That'
Re: (Score:1)
For my new PC (Score:2)
- Make good use of DX12
- Have a stable and performing Windows 10
- Have the new Intel processor
- Be super quiet