DX12/Vulkan Thread

I'm just pissed: 980 and now 1080 Ti, and I have seen very little progress out of them on DX12 and Vulkan.

I get the impression they are just butthurt about the whole Vulkan/DX12 thing and don't want to work on it, and AMD ****ed them with FreeSync at the same time, adding to the butthurt.

....
but we need to take this to the new thread

http://www.rage3d.com/board/showthread.php?t=34038252

I don't think NV is ignoring DX12/Vulkan - I think they are working on it, and the driver released last month that gave some performance boost in DX12 is evidence of that. Not sure how AMD has ****ed them with Freesync since they're still making money hand-over-fist with GSYNC.

As for DX12/Vulkan - how much can you do when the ability to code faster paths and optimizations has been stripped from you, though? I think the problem is that there isn't much they can do, and AMD's faster hardware is coming out to play now without AMD's terrible DX11 driver limiting it.

NV's coders that worked on DX11 have proven to be just as efficient or better than what the coders at game studios have been able to produce with DX12 thus far. NV has found some ways to make DX12 a bit more efficient on their driver end, getting 10-15% increases across the board (from what I've read it was more like 7-12% for actual users), but again, I'm not sure how much they can really do. You could blame the driver for poor DX11 performance, but you can't really do that with DX12, as that is the main feature of the API - taking control away from the driver.
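To make that "control moved from the driver to the app" point a bit more concrete, here's a rough sketch - not NV's or anyone's actual code, just my own minimal illustration assuming Windows and the D3D12 SDK headers - of the submission and synchronization housekeeping the application now owns itself. Under DX11 the driver did all of this implicitly behind your draw calls:

[code]
// Minimal illustration (error checks mostly omitted): under DX12 the app owns
// the command queue, the command recording, and the CPU/GPU synchronization
// that a DX11 driver handled implicitly. Build on Windows, link d3d12.lib.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // The app picks the device (default adapter here) and owns its lifetime.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) {
        printf("No DX12-capable device found.\n");
        return 1;
    }

    // The app creates the queue the GPU will consume work from.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // The app allocates and records its own command list (left empty here).
    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&allocator));
    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, allocator.Get(), nullptr,
                              IID_PPV_ARGS(&cmdList));
    cmdList->Close();

    // The app submits the work and then synchronizes with its own fence --
    // none of this is hidden behind the driver any more.
    ID3D12CommandList* lists[] = { cmdList.Get() };
    queue->ExecuteCommandLists(1, lists);

    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    queue->Signal(fence.Get(), 1);
    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, done);
    WaitForSingleObject(done, INFINITE);
    CloseHandle(done);

    printf("Submitted and synchronized an (empty) command list ourselves.\n");
    return 0;
}
[/code]

All of that bookkeeping used to be the driver team's problem; now it's the game engine's, which is exactly why a great DX11 driver doesn't automatically buy you anything under DX12.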
 
I don't think NV is ignoring DX12/Vulkan - I think they are working on it, and the driver released last month that gave some performance boost in DX12 is evidence of that. Not sure how AMD has ****ed them with Freesync since they're still making money hand-over-fist with GSYNC.

As for DX12/Vulkan - How much can you do when the ability to code faster paths and optimizations has been stripped from you, though? I think the problem is that there isn't much they can do, and AMDs faster hardware is coming out to play now without AMDs terrible DX11 driver limiting it.

NVs coders that worked on DX11 have proven to be just as efficient or better than what the coders at game studios have been able to produce with DX12 thus far. NV has found some ways to make DX12 a bit more efficient on their driver end, getting 10-15% (from what I've read it was more like 7-12% from actual users) increases across the board, but again .. I'm not sure how much they can really do. You could blame the driver for poor DX11 performance, but you can't really do that with DX12 as that is the main feature of the API - taking control away from the driver.
not that much

[yt]EPws8eF9alQ[/yt]


.......
and with G-SYNC, NV spent a fortune on R&D to make a proprietary monitor sync, then another fortune on PR: "look what we can do"

and AMD, within a week: "oh, we can do that for free"

what a mind **** :lol:
 
GSYNC was superior to FreeSync for quite a while; I think you're forgetting that aspect. FreeSync used to work only within certain refresh ranges while GSYNC worked across the entire frequency range. That's off topic anyway though.

Anyway, different systems saw different increases. Some actually got around 15%, but it seems to have benefited lower-end CPUs more, which would make sense, as the driver update seemed to offload some of the CPU bottleneck (if there was one).
 
I don't think NV is ignoring DX12/Vulkan - I think they are working on it, and the driver released last month that gave some performance boost in DX12 is evidence of that. Not sure how AMD has ****ed them with Freesync since they're still making money hand-over-fist with GSYNC.

As for DX12/Vulkan - How much can you do when the ability to code faster paths and optimizations has been stripped from you, though? I think the problem is that there isn't much they can do, and AMDs faster hardware is coming out to play now without AMDs terrible DX11 driver limiting it.

NVs coders that worked on DX11 have proven to be just as efficient or better than what the coders at game studios have been able to produce with DX12 thus far. NV has found some ways to make DX12 a bit more efficient on their driver end, getting 10-15% (from what I've read it was more like 7-12% from actual users) increases across the board, but again .. I'm not sure how much they can really do. You could blame the driver for poor DX11 performance, but you can't really do that with DX12 as that is the main feature of the API - taking control away from the driver.

It is that their hardware is meant to brute force its way through things; it always has been, since the pre-DX9 era. If there were any questions of what 'drivers' could do with DX12/Vulkan, one need only look at the 9xx series cards: the ones that were promised to have async compute support, but then..... had it disabled at the driver level because of the massive performance hit it took due to the hardware's limitations. The 980 Ti was touted as having it, but I think that gets overshadowed by the issues with its little brothers and the class-action lawsuit over the 970's memory configuration.

That the 10xx line has it is interesting, but... once again, Pascal owes a lot of its design to Maxwell (why fix what isn't broken, amirite?), so you're not going to see sweeping changes in how well they can run compute in parallel. They've added dynamic scheduling, but.... when your pipelines are built to serialize the **** out of some monster draw calls efficiently... there is only so much you can do in parallel. Nvidia bet on the right horse, it would seem, with their focus on DX11 for this generation, or... you could argue that they're what is holding back the developer community from new fancy toys...
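On the async compute point: from the application side it mostly comes down to whether the implementation exposes a compute-only queue family you can feed alongside the graphics queue - whether that work actually overlaps on the GPU is up to the hardware and driver. A quick sketch (plain Vulkan, no vendor extensions, just my own illustration) of checking what each installed GPU reports:

[code]
// Illustration only: list each Vulkan device's queue families and note whether
// a compute-only family exists. Exposing one is the app-visible prerequisite
// for the "async compute" submission pattern; actual overlap is up to the
// driver and hardware.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo appInfo{};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo instInfo{};
    instInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    instInfo.pApplicationInfo = &appInfo;

    VkInstance instance;
    if (vkCreateInstance(&instInfo, nullptr, &instance) != VK_SUCCESS) {
        printf("No Vulkan loader/driver available.\n");
        return 1;
    }

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);

        uint32_t familyCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &familyCount, nullptr);
        std::vector<VkQueueFamilyProperties> families(familyCount);
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &familyCount, families.data());

        bool hasComputeOnlyFamily = false;
        for (const VkQueueFamilyProperties& f : families) {
            if ((f.queueFlags & VK_QUEUE_COMPUTE_BIT) && !(f.queueFlags & VK_QUEUE_GRAPHICS_BIT))
                hasComputeOnlyFamily = true;
        }
        printf("%s: %u queue families, compute-only family: %s\n",
               props.deviceName, familyCount, hasComputeOnlyFamily ? "yes" : "no");
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
[/code]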

Either way, with all of the major APIs (and I really hope Vulkan takes off; OS-agnostic stuff is just nice to see because of what it means to the community as a whole), they will have to start to become more 'elegant' in their solutions, which I'd expect Volta to be.
 
If there were any questions of what 'drivers' could do with DX12/Vulkan one need only look at the 9xx series cards. The ones that were promised to have a-sync support, but then..... had it disabled at the driver level because of the massive performance hit it took due to the hardware's limitations. 980 Ti was touted as having it, but I think that gets overshadowed by the issues with its little brothers and the class-action lawsuit over their memory configurations.

Entirely separate issue, and one I agree with you on, but then again async compute is not a DX12-compliance feature; it's a side feature. Maxwell still has full DX12 support. I don't remember if my 980 Ti had "ASYNC SUPPORT" on the box - I don't think it did, and I can't seem to find anything with a quick Google search on that either. Either way, I agree with you on that.
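For what it's worth, "full DX12 support" boils down to what the device reports as a feature level plus a pile of optional tiers - there isn't even a cap you can query called "async compute". A rough sketch of the kind of check I mean (my illustration only, assuming the Windows D3D12 SDK headers):

[code]
// Illustration: "DX12 support" is reported as a feature level plus optional
// tiers (resource binding, tiled resources, etc.). There's no cap bit for
// "async compute" -- the API exposes compute queues on every DX12 device and
// leaves whether the work actually overlaps up to the hardware and driver.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) {
        printf("No DX12-capable device found.\n");
        return 1;
    }

    // Highest feature level the device supports.
    D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
                                      D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = _countof(requested);
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));

    // A couple of the optional tiers that do get advertised per device.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    printf("Max feature level: 0x%x\n", (unsigned)levels.MaxSupportedFeatureLevel);
    printf("Resource binding tier: %d, tiled resources tier: %d\n",
           (int)opts.ResourceBindingTier, (int)opts.TiledResourcesTier);
    return 0;
}
[/code]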

Nvidia bet on the right horse it would seem with their focusing on DX11 for this generation, or... you could argue that they're what is holding back the developer community from new fancy toys...

Completely disagree with you here. There is zero evidence that NV is holding back developers from using DX12, and I haven't seen NV publicly complain that DX12 is putting them at a disadvantage either. NV used a lot of money and resources to get their DX11 driver as optimized as possible, and I'm confused why I should view that as a bad thing. What annoys me is that people act as if performance is terrible in games simply because DX12 is not on par with DX11 on NV right now, even though NV's DX11 is outperforming AMD's DX12 anyway. It's not like you're losing anything by using DX11 over DX12 - there are zero graphical features added with DX12, and by the time DX11 is phased out entirely, we'll be a generation past Volta at least. DX11 is a matured API with matured drivers on NV's side, while DX12 is still an infant. Why anyone expects a performance increase, when we knew the entire time that NV has focused on better driver pathing and coding and was going to lose that advantage with DX12, is beyond me.

Either way, with all of the major APIs, and I really hope Vulkan takes off, as OS-agnostic stuff is just nice to see because of what it means to the community as a whole... they will have to start to become more 'elegant' in their solutions, which I'd expect Volta to be.

I disagree. NV has been elegant with the use of great driver coding, pathing, and optimizations in order to make AMD's better hardware look slow. Now is where NV has to use brute force, as AMD has, in order to get the numbers up in DX12. Time to drop elegance and bust out the hammer.

Pascal is no slouch and was a huge increase over Maxwell. Volta will most likely be a smaller increase than Maxwell->Pascal was. I can't see a 40-50% increase (if not more) happening two architectures in a row, just doesn't seem feasible.
 
Completely disagree with you here. There is zero evidence that NV is holding back developers from using DX12, and I haven't seen NV publicly complain that DX12 is putting them at a disadvantage either.

Maybe not directly doing it, but when Nvidia, the top dog (roughly 70% of the discrete GPU market), focuses just on current technology (DX11) and pretty much ignored future technology (DX12/Vulkan) until recently, it holds back developers; there is no way around it. Developers can't/won't focus on DX12 or even Vulkan, or spend as many resources developing for it, if it only works well on 30% of the hardware - that would be shooting themselves in the foot, and Nvidia knows this.

That is why Nvidia has such "great" DX11 drivers, as you believe: that is where nearly all of their focus and driver resources went. AMD didn't focus and tie up all their resources on DX11 as Nvidia did; they looked at and focused on the future - a decision that may not have paid off for DX11, but may pay off hugely down the road when DX12/Vulkan become mainstream.

If Vega is what people hope it is, you may see developers focus more on DX12/Vulkan going forward, and it may have a huge impact on Nvidia in the long run.
 
Entirely separate issue, and one I agree with you on, but then again ASYNC is not a DX12-compliance feature, it's a side-feature. Maxwell still has full DX12 support. I don't remember if my 980TI had "ASYNC SUPPORT" on the box, I don't think it did, and I can't seem to find anything with a quick google on that either. Either way, I agree with you on that.

"Introducing the 980 Ti" By Nvidia Scroll down a bit to see what their DX12 features have... and you get to this image

Completely disagree with you here. There is zero evidence that NV is holding back developers from using DX12, and I haven't seen NV publicly complain that DX12 is putting them at a disadvantage either. NV used a lot of money and resources to get their DX11 driver as optimized as possible, and I'm confused why I should view that as a bad thing. What annoys me is that people act as if performance is terrible in games simply because DX12 is not on-par with DX11 on NV right now, except the fact that NVs DX11 is outperforming AMDs DX12 anyway. It's not like you're losing anything by using DX11 over DX12 - there are zero graphical features added with DX12, and by the time DX11 is phased out entirely, we'll be a generation past Volta at least. DX11 is a matured API with matured drivers on NVs side, while DX12 is still an infant. Why anyone expects a performance increase when we knew the entire time that NV has focused on better driver pathing and coding, and were going to lose that advantage with DX12, is beyond me.

I get it - why break what isn't broken? Yet DX12 and the more modern APIs are adding things that only benefit the developers and, in turn, us consumers. Ever wonder why some console games running on archaic hardware are still able to squeeze out some respectable scenes? It is the access those developers have to the entirety of the hardware that enables it. Getting that close to the metal to eke out just another ounce of performance is something you can't do in the general PC sphere because of the full range of brands and types of hardware. With these new APIs there are more ways to take advantage of the hardware present, even if not quite at the console level....

There are also things in there aimed at ensuring that we're never (or hardly ever) CPU limited - the reduction of draw call overhead, so the GPU is not simply waiting on the CPU to tell it what it needs to do next.

And really, the thing that I'm hoping will somehow take off is the whole multiple-adapter bit. They mention the integrated GPUs on most modern CPUs as something that is just sitting there idle. Well, if they can use multiple adapters to help render the scene, why the hell not give me a bit more performance? It was mentioned that this is brand-agnostic as well. Imagine replacing your "last gen" GPU with a "next gen" GPU and, instead of putting the old one in a box in the closet, another PC, or on Craigslist, putting it in your second PCIe slot and running that **** in tandem with the new hotness. No bridge needed; no extra hardware. Is this really going to come about? Who knows - it is supposedly a nightmare to code/plan for, but... it is an interesting possibility thanks to a new API.
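The multi-adapter part is the bit I keep coming back to. Explicit multi-adapter in DX12 basically lets the app create a device on every adapter it can see - iGPU included - and split the frame itself. The easy first step is just enumerating what's in the box; here's a tiny sketch of that (my own illustration, Windows/DXGI only - the cross-adapter copying and workload splitting is the nightmare-to-plan-for part):

[code]
// Illustration: enumerate every adapter DXGI can see (discrete GPUs, the iGPU,
// even the software rasterizer). Explicit multi-adapter in DX12 starts here --
// the app, not the driver, decides which of these to create devices on.
#include <windows.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) {
        printf("Could not create DXGI factory.\n");
        return 1;
    }

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        wprintf(L"Adapter %u: %s, %zu MB dedicated VRAM%s\n",
                i, desc.Description, desc.DedicatedVideoMemory / (1024 * 1024),
                (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) ? L" (software)" : L"");
        // A real multi-adapter renderer would create a D3D12 device on each
        // usable entry here and split work (e.g. post-processing on the iGPU).
    }
    return 0;
}
[/code]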

I disagree. NV has been elegant with the use of great driver coding, pathing, and optimizations in order to make AMDs better hardware look slow. Now is where NV has to use brute force, as AMD has, in order to get the numbers up in DX12. Time to drop elegance and bust out the hammer.

Pascal is no slouch and was a huge increase over Maxwell. Volta will most likely be a smaller increase than Maxwell->Pascal was. I can't see a 40-50% increase (if not more) happening two architectures in a row, just doesn't seem feasible.

You act like I'm knocking their hardware - I'm not. It is just that I genuinely believe they are meant to be high-MHz, high-bandwidth parts that just overpower things, vs. finding solutions that aren't all muscle. I was getting my computer engineering degree when the "modern" GPU started to take shape: hardware T&L, vertex and pixel shaders, and a way-too-early-for-its-time hardware version of tessellation on the Radeon 8500 called TruForm. Throughout all of this there were very few instances of Nvidia really pushing the boundary on "new" tech. That isn't to say there weren't times, but... on the whole, the impression I've genuinely gotten from them has been to get the highest FPS regardless of the other things, when the other things are usually what interest me. And this isn't to say that I'm right or they're wrong, but rather, this is what I find interesting in this hobby....
 
Maybe not directly doing it, but when Nvidia, the top dog (roughly 70% of the discrete GPU market) focuses just on current technology (DX11), and pretty much ignored future technology until recently (DX 12/ Vulkan), it holds back developers, there is no way around it. Developers can't/won't focus on DX12 or Even Vulkan, or spend as much resources in developing it if it only works good on 30% of the Hardware, that would be shooting themselves in the foot, and Nvidia knows this.

That is why Nvidia has such "great" DX 11 drivers as you believe because that is where nearly all of their focus and driver resources was at. AMD didn't focus and tie up all their resources on DX11 as Nvidia did, they looked and focused on the future, a decision that may not have paid off for DX11, but may Pay of huge for the future when DX12/Vulkan become Main stream.

If Vega is what people hope it is, you may see developers focus more on DX12/Vulkan going forward, and it may have a huge impact on Nvidia in the long run.

NWR, what are you rambling on about again? I posted a link yesterday that showed the 1080 Ti stomping through Doom on Vulkan, so what's their big problem? You flip-flop between arguments so much that no one seems to know which argument you are actually supporting. Why don't you post some proof of the 1080/1080 Ti doing very poorly in DX12 and Vulkan? Maybe I've missed it and that's my bad, who knows.

The quote I highlighted is also complete boll**ks, because firstly AMD didn't have the money to develop their drivers, and they actually made a shedload of engineers redundant to try to reduce their losses and stem the hemorrhaging of cash.
 
NWR what are you rambling on about again? I posted a link yesterday that showed the 1080ti stomping through Doom on Vulkan so what's their big problem? You just seem to flip flop between arguments that no one seems to know which argument you are actually supporting. Why don't you post some proof of the 1080/ti doing very poorly in DX12 and Vulkan. Maybe I've missed them and that's my bad who knows.

The quote I highlighted is also complete boll**ks because firstly AMD didn't have the money to develop their drivers and they actually made a shed load of engineers redundant to try and reduce their losses and and stem the hemorrhaging of cash.

I did post a good computerbase.de article yesterday with Doom benchmarks.

https://www.computerbase.de/2017-03/geforce-gtx-1080-ti-test/#diagramm-doom-3840-2160

To me, in that specific game, Nvidia's performance is very unimpressive. In relation to the 980 Ti (the card it was designed to compete with), the Fury is way faster - it's not even close (37% faster). Faster than a stock 1080 even, and only 33% slower than a 1080 Ti. So yes, in my opinion, in this game only, NV does do poorly compared to its AMD counterparts.
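(For scale, if those two numbers hold, the 1080 Ti works out to roughly 1 / (1 - 0.33) ≈ 1.5x the Fury in that test, or about 1.37 x 1.5 ≈ 2x the 980 Ti.)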
 
NWR what are you rambling on about again? I posted a link yesterday that showed the 1080ti stomping through Doom on Vulkan so what's their big problem? You just seem to flip flop between arguments that no one seems to know which argument you are actually supporting. Why don't you post some proof of the 1080/ti doing very poorly in DX12 and Vulkan. Maybe I've missed them and that's my bad who knows.

The quote I highlighted is also complete boll**ks because firstly AMD didn't have the money to develop their drivers and they actually made a shed load of engineers redundant to try and reduce their losses and and stem the hemorrhaging of cash.

Your post was in the Vega thread; I suggest you go read my response there. Then, if you so choose, or can, while keeping the correct topic in the thread it was started in, please show me where I am flip-flopping. Nowhere is that the case. (Pay very close attention to my words if you choose to take on this task.) If the sentence you bolded in my quote is what you are talking about, please show me where I stated otherwise concerning the focus of resources.
 