Nvidia Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs

HU is not saying that; they're saying don't pair a low-end 6-core CPU with a high-end GPU, because Nvidia will use 20-30% more of your CPU just to run their driver.

It's also not a good idea to pair one with a high-end AMD GPU, as CPU-limited situations will show up sooner.

Considering games will push towards CPU limitations over time, it's good info for knowing what CPU to buy if you intend to hold onto it for a long time.

Most people I know buy a CPU/mobo, then swap out the GPU two or three times, not the other way around. But to see a huge 20-30% CPU penalty just to run the GPU driver is insane. Nvidia has gotten lazy due to its market dominance and isn't optimizing properly. Something we've seen a lot of in the past.
 
Nvidia has gotten lazy due to its market dominance and isn't optimizing properly. Something we've seen a lot of in the past.

It's more like: AMD has always had less overhead in Vulkan/DX12, while the opposite is true in DX11.


In any case it doesn't matter much, as AMD GPUs are not a real consideration for me: poor RT performance, poor VR performance, lack of features, etc. My only option is to get faster CPUs.
 
I dunno, looks to me like AMD has better-optimized drivers, which implies that nVidia has a good chunk of potential performance wasted on driver overhead, possibly limiting performance with more powerful CPUs as well (even though high core counts mitigate the impact, that driver processing still has to occur and will lower performance). My last ATI card was an 800XL, so it's not as if I'm a fanboy of either company...

So, rather than acting as an apologist for nVidia, why not put pressure on them to better optimize their drivers?
 
You're worse than bill. It's more like: AMD has always had less overhead in Vulkan/DX12, while the opposite is true in DX11.


In any case it doesn't matter much, as AMD GPUs are not a real consideration for me: poor RT performance, poor VR performance, lack of features, etc. My only option is to get faster CPUs.

Obviously those who spend an inordinate amount on upgrades on a regular basis don't need to worry. This is meant for people who care that their upgrades last. When 6 cores/12 threads is now considered low end due to this massive driver overhead, it's hardly reassuring. Losing 20-30% of the FPS on that kind of CPU is dismaying.


I've been critical of AMD as well. They fabbed almost literally half the number of GPUs in Q4 2020 that they did in Q4 2019, all the while telling us they had ample supply.
 
I've been critical of AMD as well.

Whatever, pax. :rolleyes:

So, rather than acting as an apologist for nVidia, why not put pressure on them to better optimize their drivers?

No apologies for Nvidia. In DX11 it's the other way around, but I don't call AMD lazy because of it. Instead of just throwing insults like 'lazy', how about trying to provide some insight? It comes down to architectural differences and their approaches to scheduling. Nvidia has actually done far more work when it comes to scheduling, which is why their DX11 implementation is far superior: they don't rely on developers or the API to do it for them. They do it via their driver, on the CPU.

In a nutshell:

They moved away from handling scheduling in hardware because not enough developers were multithreading their draw calls, so it was becoming a bottleneck.

Both AMD and Nvidia support deferred contexts, but Nvidia also supports driver command lists. So when DX11 games are single-thread heavy, Nvidia's driver will spread the work out and make it thread-friendly. When developers take the time to make a game thread-friendly to begin with, both AMD and Nvidia work well and neither ends up CPU-bound.

So yes, Nvidia is more reliant on the CPU than AMD, since they handle it through the driver, but they aren't limited to a single CPU core if the game isn't optimized well.

So: poor CPU utilization = Nvidia faster (the driver moves work to other cores); good CPU utilization = AMD has lower overhead, because the GPU handles the scheduling.
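
To make the two paths concrete, here's a minimal sketch (C++/Direct3D 11, Windows-only; error handling trimmed, and nothing here is from HUB's testing) of how an application can ask the driver whether it natively supports multithreaded command lists, and how deferred contexts are used either way. If DriverCommandLists comes back FALSE, the D3D11 runtime emulates command lists by replaying them on the immediate context.

```cpp
// Illustrative probe of D3D11 threading capabilities.
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* immediate = nullptr;
    D3D_FEATURE_LEVEL level;

    // Create a device on the default adapter just to inspect driver caps.
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, &level, &immediate)))
        return 1;

    // DriverCommandLists == TRUE means the driver builds command lists
    // natively (the Nvidia approach); FALSE means the D3D11 runtime emulates
    // them on the immediate context (historically the AMD path).
    D3D11_FEATURE_DATA_THREADING caps = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &caps, sizeof(caps));
    printf("DriverConcurrentCreates: %d\n", caps.DriverConcurrentCreates);
    printf("DriverCommandLists:      %d\n", caps.DriverCommandLists);

    // Deferred contexts work either way: worker threads record on one of
    // these, then the render thread plays the result back on the immediate
    // context.
    ID3D11DeviceContext* deferred = nullptr;
    if (SUCCEEDED(device->CreateDeferredContext(0, &deferred))) {
        ID3D11CommandList* list = nullptr;
        // ... record draw calls on `deferred` here ...
        deferred->FinishCommandList(FALSE, &list);
        immediate->ExecuteCommandList(list, TRUE);
        list->Release();
        deferred->Release();
    }

    immediate->Release();
    device->Release();
    return 0;
}
```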


This is nothing new, and it's not limited to Ampere. Almost double the draw calls for Nvidia in DX11, while about 30% more for AMD in DX12.


[Attached: draw call benchmark charts for DX11 and DX12]
 
I personally like investigations like this and would like to see this improve, especially for gamers who don't have a higher-end CPU but demand more raw frames for high-refresh monitors.


Awareness of limitations or cons is always welcome, so companies may improve.
 
I mean, didn't we all know this? Nvidia driver downloads are closing in on 700 MB. Sheesh.
 
I personally like investigations like this and would like to see this improve, especially for gamers who don't have a higher-end CPU but demand more raw frames for high-refresh monitors.


Awareness of limitations or cons is always welcome, so companies may improve.


Certainly, there's always room for improvement.

In comparison to DX11, DX12/Vulkan draw call rates are through the roof, so it was probably seen as a non-issue; the focus instead went to improving DX11 scheduling via the driver on the CPU.
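
For contrast, here's a minimal sketch (C++/Direct3D 12, Windows-only; error handling and GPU synchronization trimmed, purely illustrative) of why DX12 draw call rates are so much higher by design: every worker thread records into its own command list, and the main thread submits them all with one cheap call, leaving nothing for the driver to untangle.

```cpp
// Illustrative only: parallel command-list recording, the core of DX12's model.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    D3D12_COMMAND_QUEUE_DESC qdesc = {};
    qdesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qdesc, IID_PPV_ARGS(&queue));

    // One allocator + command list per worker thread; allocators are not
    // thread-safe, so each thread owns its own.
    constexpr int kThreads = 4;
    ComPtr<ID3D12CommandAllocator> allocs[kThreads];
    ComPtr<ID3D12GraphicsCommandList> lists[kThreads];
    for (int i = 0; i < kThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Workers record in parallel; the API requires no driver-side fixups.
    std::vector<std::thread> workers;
    for (int i = 0; i < kThreads; ++i)
        workers.emplace_back([&lists, i] {
            // ... record draw/dispatch calls on lists[i] here ...
            lists[i]->Close();
        });
    for (auto& w : workers) w.join();

    // Single submission from the main thread (a real app would then fence
    // and wait for the GPU before tearing anything down).
    ID3D12CommandList* raw[kThreads];
    for (int i = 0; i < kThreads; ++i) raw[i] = lists[i].Get();
    queue->ExecuteCommandLists(kThreads, raw);

    printf("Recorded %d command lists in parallel, submitted once.\n", kThreads);
    return 0;
}
```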
 
Another, perhaps related, issue is that nVidia is behind AMD in terms of DPC latency. People who use their PCs for digital audio recording require very low DPC latency for uninterrupted realtime audio processing. So this has led many in the DAW world to use AMD cards, or just on-board Intel graphics, as nVidia cards/drivers cause more pops and crackles due to increased overhead.

A bit more of a digression, but I've always taken issue with the "just throw more CPU/RAM/etc. at the problem" type of solution rather than the "let's actually optimize the drivers" approach. In that spirit, I appreciate the things creative programmers have been able to accomplish with older hardware, coaxing out more performance than the original hardware designers ever imagined possible. That's why something like the ray-tracing SNES interests me.
 
but I've always taken issue with the "just throw more CPU/RAM/etc. at the problem" type of solution rather than the "let's actually optimize the drivers" approach. In that spirit, I appreciate the things creative programmers have been able to accomplish

Then you should appreciate Nvidia's efforts to take it upon themselves to handle scheduling on the CPU via the driver, instead of relying on the API, developers, or faster hardware. There are thousands of DX11 games, which is where the real problem with draw calls lies, compared to barely 100 DX12 games, where draw call rates are already 10x higher by virtue of the API and not a real-world issue.


HUB are 100% correct. But keep in mind they only tested two games in DX12, where it's not really an issue. They had to run DX12 at 1080p medium to expose Nvidia's driver overhead, or should I say AMD's lack of it, since AMD's driver doesn't actually have to do anything there.
 
Don't think it is too relevant at all. I ran a 3080 paired with an 8086K and didn't have any issues. I also ran a 3070 paired with a vanilla 10700 and had no issues. Not sure who that article is for. Seems like someone had time to waste, and they wasted it.
 
I dunno, looks to me like AMD has better-optimized drivers, which implies that nVidia has a good chunk of potential performance wasted on driver overhead, possibly limiting performance with more powerful CPUs as well (even though high core counts mitigate the impact, that driver processing still has to occur and will lower performance). My last ATI card was an 800XL, so it's not as if I'm a fanboy of either company...

So, rather than acting as an apologist for nVidia, why not put pressure on them to better optimize their drivers?

Also, AMD's hardware scheduler vs Nvidia's software one might be coming into play. It might not matter at higher resolutions now, but if people stick with their newish 6-core CPUs and face new games in the near future, it could hobble performance at higher resolutions with current GPUs as well.

Heck, performance in CP2077 is still odd on top-end GPUs. It'd be nice if proper drivers or the addition of hardware scheduling gave us 20-30% more FPS.

And to think many said 4 cores were plenty not too long ago.
 