Rage3D Discussion Area

Rage3D Discussion Area (http://www.rage3d.com/board/index.php)
-   Other Graphics Cards and 3D Technologies (http://www.rage3d.com/board/forumdisplay.php?f=65)
-   -   Nvidia Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs (http://www.rage3d.com/board/showthread.php?t=34052371)

badsykes Mar 11, 2021 04:19 AM

Nvidia Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs
 


xCLAVEx Mar 11, 2021 08:43 AM

Pfft, not like anybody can find any of those cards listed anyway.

Mangler Mar 11, 2021 08:59 AM

It feels like this thread belongs in the Other Graphics Cards and 3D Technologies section.

acroig Mar 11, 2021 09:45 AM

Quote:

Originally Posted by Mangler (Post 1338273824)
It feels like this thread belongs in the Other Graphics Cards and 3D Technologies section.

Report it so a mod can move it there.

Munkus Mar 11, 2021 11:12 AM

Quote:

Originally Posted by acroig (Post 1338273836)
Report it so a mod can move it there.

Done!

Hapatingjaky Mar 11, 2021 11:13 AM

Good ol' Hardware Unboxed: if it's not AMD, it's bad.

pax Mar 11, 2021 11:39 AM

HU is not saying that. It's saying don't pair a low-end 6-core CPU with a high-end GPU, because Nvidia will use 20-30% more of your CPU just to run their driver.

It's also not a good idea to pair a high-end AMD GPU with a CPU like that, as CPU-limited situations will show up sooner.

Considering games will push towards CPU limitations over time, it's good info for knowing what CPU to buy if you intend to hold onto it a long time.

Most people I know buy a CPU/mobo and then swap out the GPU 2-3 times, not the other way around. But seeing a huge 20-30% CPU penalty just to run the GPU driver is insane. Nvidia has gotten lazy due to its market dominance and isn't optimizing properly, something we've seen a lot of in the past.

acroig Mar 11, 2021 11:49 AM

Quote:

Originally Posted by Munkus (Post 1338273846)
Done!

Thanks! :)

demo Mar 11, 2021 05:15 PM

Quote:

Originally Posted by pax (Post 1338273853)
Nvidia has gotten lazy due to its market dominance and isn't optimizing properly, something we've seen a lot of in the past.

It's more like, AMD has always had less overhead in Vulkan/DX12, while the opposite is true in DX11.


In any case it doesn't matter much as AMD GPU's are not a real consideration for me. Poor RT performance, poor VR performance, lack of features etc etc. My only option is to get faster CPU's..

12Bass Mar 11, 2021 06:22 PM

I dunno, looks to me like AMD has better-optimized drivers, which implies that nVidia has a good chunk of potential performance wasted on driver overhead, possibly limiting performance with more powerful CPUs as well (even though high core counts mitigate the impact, that driver processing still needs to occur and will lower performance). My last ATI card was an 800XL, so it's not as if I'm a fanboy of either company....

So, rather than acting as an apologist for nVidia, why not put pressure on them to better optimize their drivers?

pax Mar 11, 2021 06:47 PM

Quote:

Originally Posted by demo (Post 1338273930)
You're worse than bill. It's more like, AMD has always had less overhead in Vulkan/DX12, while the opposite is true in DX11.


In any case it doesn't matter much as AMD GPU's are not a real consideration for me. Poor RT performance, poor VR performance, lack of features etc etc. My only option is to get faster CPU's..

Obviously those who spend an inordinate amount on upgrades on a regular basis don't need to worry. This is meant for people who care that their upgrades last. When 6 cores/12 threads is now considered low end due to this massive driver overhead, that's hardly reassuring. Eating 20-30% of the FPS on that kind of CPU is alarming.


I've been critical of AMD as well. They fabbed almost literally half the number of GPUs in Q4 2020 that they did in Q4 2019, all the while telling us they had ample supply.

demo Mar 11, 2021 07:12 PM

Quote:

Originally Posted by pax (Post 1338273941)
I've been critical of AMD as well.

Whatever pax. :rolleyes:

Quote:

Originally Posted by 12Bass (Post 1338273940)
So, rather than acting as an apologist for nVidia, why not put pressure on them to better optimize their drivers?

No apologies for Nvidia. In DX11 it's the opposite way around, but I don't call AMD lazy because of it. Instead of just throwing insults like 'lazy', how about trying to provide some insight? It's because of architectural differences and their approaches to scheduling. Nvidia have actually done far more work when it comes to scheduling, which is why their DX11 implementation is far superior - they don't rely on developers or the API to do it for them; they do it via their driver on the CPU.

In a nutshell:

Quote:

They moved away from handling the scheduling on hardware because not enough developers were multithreading the calls and thus it was bottlenecking.

Both AMD and Nvidia support deferred contexts, but Nvidia also supports driver command lists. So when DX11 games are single-thread heavy, Nvidia's driver will spread the work out and make it thread friendly. When developers take the time to make it thread friendly to begin with, both AMD and Nvidia work well and neither ends up CPU bound.

So yes, Nvidia is more reliant on the CPU than AMD since they handle it through drivers, but they aren't limited to a single CPU core if the game isn't optimized well.

So, poor CPU utilization = Nvidia faster (driver moves to other cores), good CPU utilization = AMD has lower overhead because the GPU handles it.

This is nothing new and not limited to Ampere. Almost double the draw calls for Nv in DX11, while about 30% more for AMD in DX12.
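
If you want to see which side of that split your own setup lands on, here's a quick sketch (my own untested example using the standard D3D11 API, nothing vendor-specific) that asks the driver whether it natively supports the multithreaded resource creation and driver command lists mentioned above:

Code:

// check_d3d11_threading.cpp - build with: cl check_d3d11_threading.cpp d3d11.lib
#include <d3d11.h>
#include <cstdio>

int main()
{
    ID3D11Device* device = nullptr;

    // Create a plain hardware device; no swap chain is needed just to query caps.
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                   nullptr, 0, D3D11_SDK_VERSION,
                                   &device, nullptr, nullptr);
    if (FAILED(hr)) { std::printf("Device creation failed\n"); return 1; }

    D3D11_FEATURE_DATA_THREADING caps = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &caps, sizeof(caps));

    // Deferred contexts are always available (the runtime emulates them if needed);
    // these flags report what the driver itself supports natively.
    std::printf("Driver concurrent resource creates: %s\n",
                caps.DriverConcurrentCreates ? "yes" : "no");
    std::printf("Driver command lists (native multithreaded submit): %s\n",
                caps.DriverCommandLists ? "yes" : "no");

    device->Release();
    return 0;
}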




SIrPauly Mar 11, 2021 07:25 PM

I personally like investigations and would like to see this improve, especially for gamers who demand more raw frames for higher-refresh monitors but don't have a higher-end CPU.


Awareness of limitations or cons is always welcome, so companies may improve.

mizzer Mar 11, 2021 07:33 PM

I mean, didn't we all know this? Nvidia driver downloads are closing in on 700 MB. Sheesh.

demo Mar 11, 2021 07:43 PM

Quote:

Originally Posted by SIrPauly (Post 1338273943)
I personally like investigations and would like to see this improve, especially for gamers who demand more raw frames for higher-refresh monitors but don't have a higher-end CPU.


Awareness of limitations or cons is always welcome, so companies may improve.


Certainly, there's always room for improvement.

Compared to DX11, DX12/Vulkan draw call rates are through the roof, so it was probably seen as a non-issue; the focus went instead to improving DX11 scheduling via the driver on the CPU.

12Bass Mar 11, 2021 07:48 PM

Another, perhaps related, issue is that nVidia is behind AMD in terms of DPC latency. People who use their PCs for digital audio recording require very low DPC latency for uninterrupted realtime audio processing. So this has led many in the DAW world to use AMD cards, or just on-board Intel graphics, as nVidia cards/drivers cause more pops and crackles due to increased overhead.

A bit more of a digression, but I've always taken issue with the "just throw more CPU/RAM/etc. at the problem" type of solution rather than the "let's actually optimize the drivers" approach. In that spirit, I appreciate the things that creative programmers have been able to accomplish with older hardware, coaxing more performance out of it than the original hardware designers had ever imagined possible. That's why something like the ray-tracing SNES interests me.

demo Mar 11, 2021 08:05 PM

Quote:

Originally Posted by 12Bass (Post 1338273951)
but I've always taken issue with the "just throw more CPU/RAM/etc. at the problem" type of solution rather than the "let's actually optimize the drivers" approach. In that spirit, I appreciate the things that creative programmers have been able to accomplish

Then you should appreciate Nvidia's efforts to take it upon themselves to handle scheduling on the CPU via drivers, instead of relying on the API, developers, or faster hardware. There are thousands of DX11 games, which is where the real problem with draw calls lies, compared to barely 100 DX12 games, where draw call rates are already 10x higher by virtue of the API and not a real-world issue.


HUB are 100% correct. But keep in mind they only tested 2 games in DX12, where it's not really an issue. They had to run DX12 at 1080p medium to expose Nv's driver overhead - or should I say AMD's lack of it, because AMD's driver doesn't actually do anything there.

SIrPauly Mar 11, 2021 08:59 PM

Here are more examples:

https://www.techspot.com/article/220...5-gpu-scaling/

KAC Mar 12, 2021 12:13 AM

Don't think it is too relevant at all. I ran a 3080 paired with an 8086K and didn't have any issues. I also ran a 3070 paired with a vanilla 10700 and had no issues. Not sure who that article is for. Seems like someone had time to waste and they wasted it.

pax Mar 12, 2021 12:51 AM

Quote:

Originally Posted by 12Bass (Post 1338273940)
I dunno, looks to me like AMD has better-optimized drivers, which implies that nVidia has a good chunk of potential performance wasted on driver overhead, possibly limiting performance with more powerful CPUs as well (even though high core counts mitigate the impact, that driver processing still needs to occur and will lower performance). My last ATI card was an 800XL, so it's not as if I'm a fanboy of either company....

So, rather than acting as an apologist for nVidia, why not put pressure on them to better optimize their drivers?

Also, the hardware scheduler on AMD vs the software one on Nvidia might be coming into play. It might not matter at higher resolutions now, but if people stick with their newish 6-core CPUs and face new games in the near future, it could hobble performance at higher resolutions with current GPUs as well.

Heck, performance in CP2077 is still odd on top-end GPUs. It'd be nice if proper drivers or the addition of hardware scheduling freed up 20-30% more FPS.

And to think many said 4 cores were plenty not too long ago.

demo Mar 12, 2021 01:00 AM

You didn't understand a word I said, did you, pax? I even gave you graphs.

badsykes Mar 12, 2021 04:20 AM

Quote:

Originally Posted by KAC (Post 1338273984)
Don't think it is too relevant at all. I ran a 3080 paired with an 8086K and didn't have any issues. I also ran a 3070 paired with a vanilla 10700 and had no issues. Not sure who that article is for. Seems like someone had time to waste and they wasted it.

I am in this case. If I have to upgrade my GPU and keep my Ryzen 1700, I would get a 5700 or a 6800 instead of a 3070.
The 5700 will have better performance than a 3070 or even a 3090.

acroig Mar 12, 2021 07:08 AM

Quote:

Originally Posted by demo (Post 1338273987)
You didn't understand a word I said, did you, pax? I even gave you graphs.

Demo, keep comparisons to other members out of your arguments, please.

Carry on.

bobvodka Mar 12, 2021 09:48 AM

Quote:

Originally Posted by 12Bass (Post 1338273940)
I dunno, looks to me like AMD has better-optimized drivers, which implies that nVidia has a good chunk of potential performance wasted on driver overhead

That's not really the case.

The reason why NV has always had better DX9/11/OpenGL performance than AMD is because their drivers do a lot more behind the scenes to optimise their GPU performance.

So the draw calls the app makes end up being sent to what is effectively a 'server' process, which will be doing things like on-demand shader recompiling, caching, draw call rewriting and reordering (per game/engine), and correcting mistakes/issues that can happen.

AMD does a lot less of this aggressive optimisation, which is why their performance on DX9/11/OpenGL has traditionally been lower, or doesn't increase the way NV's so often does.

This is also why DX12/Vulkan saw AMD suddenly jump up performance-wise: the driver basically got 'out of the way' and NV lost their 'server' advantage, as those APIs are very much "do what I say" whereas previous APIs are a bit more "do what I mean" - and NV has also been far more forgiving in that regard. My rule of thumb tends to be that if it works on NV and not on AMD, I've done something wrong; if it works on AMD and not on NV, then NV have an optimisation which has done something wrong.

That's not to say that the drivers don't make a difference; however, how much of a difference they can make, and when they can make it, is now heavily constrained.

Anyway, the point of all that is that this isn't "poorly optimised", it's just doing a lot of useful work which, yes, on lower performance CPUs might cause an issue.

GTwannabe Mar 12, 2021 09:50 AM

The latest NV driver package is ~630MB. No bloat there. :rolleyes:

pax Mar 12, 2021 10:05 AM

Quote:

Anyway, the point of all that is that this isn't "poorly optimised", it's just doing a lot of useful work which, yes, on lower performance CPUs might cause an issue.

Could more of that work be done on the GPU instead of the CPU in Nvidia's case, or is their architecture not suited for that?

bobvodka Mar 12, 2021 10:06 AM

Quote:

Originally Posted by GTwannabe (Post 1338274035)
The latest NV driver package is ~630MB. No bloat there. :rolleyes:

Large size != high CPU usage.

I could write a tiny program which burnt CPU time for no reason at all if I felt the need.
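
Purely for illustration (a throwaway, made-up example of mine, nothing to do with any actual driver code), the whole program could be:

Code:

#include <cstdint>

int main()
{
    // A binary this small will still happily peg a CPU core forever;
    // package size tells you nothing about CPU cost on the critical path.
    volatile std::uint64_t counter = 0;
    for (;;) ++counter;   // burn CPU time for no reason at all
}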

The large package is likely down to all the various driver profiles, which probably include both code and custom shaders and other support things for games.

The thing that matters is the code which executes on the critical path while it is doing work.

bobvodka Mar 12, 2021 10:12 AM

Quote:

Originally Posted by pax (Post 1338274038)
Could more of that work be done on the GPU instead of the CPU in Nvidia's case, or is their architecture not suited for that?

Not really, because it's all setup work that is being carried out before the commands are written to the command queue that the front end consumes.

An example would be if a game issues, say, four draw calls; it might issue them in the order 1,2,3,4, but via profiling NV might know that for their hardware (and maybe even just a particular generation of GPU) doing 4,3,1,2 yields better results, so they have to cache everything for the first three draw calls before doing them in that order - that's something you'd do CPU-side, as it's more efficient to do it there than to try to jam more logic and resources into the GPU.
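
In toy form (to be clear, this is just my own made-up sketch of the idea, not how any real driver is structured), the CPU-side buffering and reordering looks something like this:

Code:

#include <cstdio>
#include <vector>

struct DrawCall { int id; /* state, buffers, shaders, etc. would live here */ };

int main()
{
    // The game issues its draws in its own order: 1, 2, 3, 4.
    std::vector<DrawCall> recorded = { {1}, {2}, {3}, {4} };

    // A hypothetical per-game/per-GPU profile says 4, 3, 1, 2 runs faster,
    // so the "driver" caches the calls and replays them in that order.
    const int profileOrder[] = { 3, 2, 0, 1 };   // indices into 'recorded'

    // All of this happens on the CPU, before anything is written to the
    // command queue that the GPU front end consumes.
    for (int idx : profileOrder)
        std::printf("submit draw %d\n", recorded[idx].id);

    return 0;
}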

Nagorak Mar 12, 2021 03:18 PM

I wonder if this would be less of an issue on an 8-core processor like a 2700X or 1800X, since you'd have two more cores to spread the load out to? Maybe all the talk about games not utilizing more than 6 cores wouldn't really turn out to be true once you take into account driver overhead on a slower system? If so, the 2700X might turn out to have been a better buy than the 2600X.

I guess I should look and see if HU has already run the 8 core Ryzen tests, since that should answer my question.

pax Mar 12, 2021 09:14 PM

I'm wondering, with graphical and AI complexity in games increasing going forward, whether the driver overhead will grow faster than CPU IPC.

