RTX 30X0 power issue thread.

Welcome to the discussion :lol:

Thanks for the invite.

I just don't understand how the 3090 would be much faster if the clocks are the same.

If you can only process X amount of work per clock to feed your shaders and so on, why would you expect it to be much faster?
 
Thanks for the invite.

I just don't understand how the 3090 would be much faster if the clocks are the same.

If you can only process X amount of work per clock to feed your shaders and so on, why would you expect it to be much faster?

If you have more cores available to work, even if you're operating at the same clock speed, then you'd still accomplish more work.

The reason we aren't seeing linear ~20% scaling with the cores, in my opinion, is that the power limit is being hit and the clocks drop. I've said this multiple times, so I feel a bit odd repeating it, but we see it with the 3080: the card is hamstrung by hitting the power limit, downclocking and downvolting to stay within its specified range, and it will keep scaling up and down as the load increases or decreases depending on what is being rendered.

With the 3090, the card has 20-40 W more power available to it, yet it has 14 GB more GDDR6X, a wider memory bus, and roughly 2,000 more cores to feed. I believe the extra 20-40 W is not enough to compensate for those extra cores, so the card automatically scales the clock speed down to stay within the power target. The extra cores are then partly negated by the lower clock speed: the card hits the power limit and has to drop clock speed and/or voltage to stay within its designated range.

There are a multitude of possible reasons, which I couldn't really pin down without having the card in hand, why we're seeing better scaling in content-creation work. It could be the way those programs load the GPU, or that they make better use of the extra memory bandwidth. I do not think there's a driver issue specific to the 3090.
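To put some rough numbers on the power-budget argument, here's a quick toy model (the constant is made up just to land near real-world power figures, and this is nothing like NVIDIA's actual GPU Boost algorithm; it only assumes dynamic power scales roughly with cores × frequency × voltage²):

[CODE]
# Toy model only: dynamic power ~ K * cores * frequency * V^2.
# K is a made-up fudge factor, not anything from NVIDIA.
K = 2.1e-11

def estimated_power_w(cores, clock_mhz, voltage_v):
    """Very rough dynamic-power estimate in watts."""
    return K * cores * (clock_mhz * 1e6) * voltage_v ** 2

def sustained_clock_mhz(cores, power_limit_w, voltage_v):
    """Highest clock that fits under the power limit at a given voltage."""
    return power_limit_w / (K * cores * 1e6 * voltage_v ** 2)

for name, cores, limit_w in [("3080-ish", 8704, 320), ("3090-ish", 10496, 350)]:
    same_clock_w = estimated_power_w(cores, clock_mhz=1935, voltage_v=0.95)
    max_clock = sustained_clock_mhz(cores, limit_w, voltage_v=0.95)
    print(f"{name}: ~{same_clock_w:.0f} W at 1935 MHz, "
          f"so ~{max_clock:.0f} MHz fits the {limit_w} W budget")
[/CODE]

With those made-up constants, running the bigger chip at the same clock as the smaller one would want roughly 385 W against a 350 W budget, so it settles around the mid-1700s MHz instead, which is in the ballpark of the sustained clocks people are reporting.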
 
The reason we aren't seeing linear ~20% scaling with the cores, in my opinion, is that the power limit is being hit and the clocks drop.

I have seen clocks drop due to the temperature envelope, but I haven't seen it from power. Have you seen reviews showing this?

It's a good theory, just wondering if others have seen this behavior.
 
I have seen clocks drop due to the temperature envelope, but I haven't seen it from power. Have you seen reviews showing this?

It's a good theory, just wondering if others have seen this behavior.

I can watch it happen with my 2080 Ti. If the card hits its power limit, how else will it bring down power draw? The main factors in GPU Boost are temperature and power target. If the target is exceeded, the card will always downclock and/or downvolt in order to stay within its range. This feature has existed since GPU Boost became a thing, but it became much more relevant with the 2080 Ti.
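If you want to watch it on your own card rather than taking reviewers' word for it, nvidia-smi exposes the active throttle reasons. A quick sketch like this (standard nvidia-smi query fields, arbitrary one-second polling) will show sw_power_cap going Active while the temperature is still nowhere near its limit:

[CODE]
# Quick sketch: poll nvidia-smi once a second and log clocks, power, temp,
# and whether the power-cap or thermal-slowdown throttle reasons are active.
import subprocess, time

FIELDS = ",".join([
    "clocks.sm",
    "power.draw",
    "power.limit",
    "temperature.gpu",
    "clocks_throttle_reasons.sw_power_cap",
    "clocks_throttle_reasons.hw_thermal_slowdown",
])

while True:
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # e.g. "1905 MHz, 318.42 W, 320.00 W, 62, Active, Not Active"
    time.sleep(1)
[/CODE]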
 
I can watch it happen with my 2080 Ti. If the card hits its power limit, how else will it bring down power draw? The main factors in GPU Boost are temperature and power target. If the target is exceeded, the card will always downclock and/or downvolt in order to stay within its range.

Interesting, I'll have to look for this. Usually temps get me the downclock faster.
 
Interesting, I'll have to look for this. Usually temps get me the downclock faster.

Yes, temperature is usually the main cause of downclocking; those are the 15 MHz steps the 2080 Ti abides by. This is why voltage/frequency curve OCing was a thing with the 2080 Ti: you can individually control each voltage point on your card. On my 2080 Ti, for example, I have a more aggressive clock speed set at the higher voltage points and a less aggressive one at the lower voltage points, since the only reason I ever land in the low voltage range is temperature forcing the card to downclock and downvolt. The BIOS I run rarely hits power limits, but I can make it happen by running with no framerate cap, or with my bench settings, which aren't stable above 48°C.
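If it helps to picture the curve, here's the idea in code form, with made-up points rather than my actual Afterburner profile:

[CODE]
# Illustrative voltage/frequency curve in the spirit described above
# (made-up points). The idea: push the clock harder at the high-voltage
# end, back off at the low-voltage end the card only falls into when it
# is already hot and downclocking.
VF_CURVE_MHZ = {          # voltage (V) -> target clock (MHz), 15 MHz steps
    0.800: 1830,          # conservative: only reached when thermally limited
    0.875: 1905,
    0.950: 1995,
    1.025: 2070,          # aggressive: cool card, high voltage points
}

def target_clock(voltage_v):
    """Return the curve point for the highest voltage not above voltage_v."""
    usable = [v for v in VF_CURVE_MHZ if v <= voltage_v]
    return VF_CURVE_MHZ[max(usable)] if usable else min(VF_CURVE_MHZ.values())

print(target_clock(1.043))  # 2070 -> full boost while the card is cool
print(target_clock(0.820))  # 1830 -> backed off once it's hot and downvolted
[/CODE]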

Kingpin actually modded a 2080 Ti to remove the temperature throttling the 2080 Ti does (I believe it starts at 20°C, no matter what), and it caused a whole slew of issues and basically made the card unusable.
 
My 3080 hits its power limit long before it ever hits its temperature limit. My card runs below 65°C even in extreme cases.
 
My 3080 hits its power limit long before it ever hits its temperature limit. My card runs below 65°C even in extreme cases.

Yes, Ampere hits the power limit faster than Turing, but from the videos of modded Ampere cards I've seen, more power seems to give little gain in actual FPS.
 
My 3080 hits its power limit long before it ever hits its temperature limit. My card runs below 65°C even in extreme cases.

Now if you got the temps into, let's say, the mid-40s, I guarantee you'd see higher clocks.

Yes, Ampere hits the power limit faster than Turing, but from the videos of modded Ampere cards I've seen, more power seems to give little gain in actual FPS.

More power does not always equate to more performance when other factors are causing issues. Once we see actual 3080/3090 waterblocks, I think we'll see the shunted cards performing better. der8auer got a 3090 up to 2100 MHz with the shunt mod and +80 mV, but it wasn't stable because of the air cooler. Put that thing under water and I think we'd see 2150 MHz 24/7. I would be wary of pumping extra voltage into the chips, though, simply because we don't know the limits or durability of the Samsung 8nm node. I wouldn't take that risk on an $1,800 GPU.
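A quick back-of-the-envelope on why the extra power buys so little, assuming dynamic power goes roughly with frequency × voltage² (the voltages here are illustrative; I don't know der8auer's exact figures):

[CODE]
# Back-of-envelope: why a shunt mod's extra power buys relatively little.
# Dynamic power scales roughly with frequency * voltage^2, so a ~10% clock
# bump that needs extra voltage costs far more than 10% extra power.
base_clock, base_volt = 1900, 0.950     # typical stock sustained point
oc_clock,   oc_volt   = 2100, 1.030     # shunt-modded, +80 mV style point

power_ratio = (oc_clock / base_clock) * (oc_volt / base_volt) ** 2
print(f"clock gain: {oc_clock / base_clock - 1:.1%}")   # ~10.5%
print(f"power cost: {power_ratio - 1:.1%}")             # ~30%
[/CODE]

Roughly 30% more power for about 10% more clock, and less than 10% more FPS if the game isn't purely clock-bound. That's why the modded-card videos look so underwhelming.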
 
Very rarely does it only happen for one specific GPU. The 3080 and 3090 are running the same architecture; any improvement to the 3090 will apply to the 3080. If the argument is that the 3090 will benefit more... well, I don't agree with that, but time will tell, I suppose.

And this is where we diverge completely, because I know for a fact that MANY examples exist of a family of GPUs all built on the same chip where, as long as it's the top card and its resources aren't being fully used, optimizations only happen for that chip and do NOT trickle down.

I hold that the PROOF of this is right in your face and you are dancing around it. ALL of the content-creation applications benchmarked and shown publicly have shown the full difference in resources reflected in performance, while games, which are notoriously optimization-driven, do not. No, the 3080 will not receive the same benefits, because it got them first: optimization for its LOWER core count compared to the 3090.

I suggest that the 3090 is only being recognized by the drivers from release, with almost NO optimization yet for its increased resources in any specific game whatsoever. The 3080 is the main gaming focus for Nvidia and is likely getting the lion's share of attention at first. I'm guessing on that part, but it's more logical than your assumption that ALL GPUs in the same family benefit equally from ALL optimizations, which is patently false.
 
And this is where we diverge completely, because I know for a fact that MANY examples exist of a family of GPUs all built on the same chip where, as long as it's the top card and its resources aren't being fully used, optimizations only happen for that chip and do NOT trickle down.

That is under the assumption that games just don't know what to do with all the cores on the 3090, which I do not agree with. It's not like the 3090 is running double the CUDA cores.

I hold that the PROOF of this is right in your face and you are dancing around it. ALL of the content-creation applications benchmarked and shown publicly have shown the full difference in resources reflected in performance, while games, which are notoriously optimization-driven, do not. No, the 3080 will not receive the same benefits, because it got them first: optimization for its LOWER core count compared to the 3090.

There are many other factors to this, though. Maybe the content-creation applications benefit from the higher memory throughput, whereas in games that may not be the bottleneck, so it goes unused? It's not just about CUDA cores either; the 3090 has more Tensor cores and more RT cores. It's entirely possible that the content-creation apps are able to use those extra Tensor cores while games only touch them for the specific scenarios they're designed for. Extra RT cores don't mean anything if the game doesn't have RT, for example.

We do see the 3090 perform marginally better than the 3080 in RT performance; does that mean game A is somehow able to use the extra RT cores but can't figure out how to use the extra CUDA cores? Eh... I have a hard time following that one. It's also worth pointing out that most 3090s run in the 1750-1900 MHz range under sustained load, while most 3080s run 1975 MHz or higher from what I've seen, at least the ones that aren't garbage or sitting in poor-airflow cases. Maybe not 10% worth of performance from clock speed, but 5% I can see from a 100-200 MHz difference.
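For what it's worth, a quick back-of-the-envelope using the public core counts and the ballpark sustained clocks I just mentioned:

[CODE]
# Rough scaling estimate: public core counts, ballpark sustained clocks
# quoted in this post.
cores_3080, cores_3090 = 8704, 10496
clock_3080, clock_3090 = 1975, 1850       # MHz, rough sustained clocks

core_ratio  = cores_3090 / cores_3080     # ~1.21
clock_ratio = clock_3090 / clock_3080     # ~0.94
print(f"ideal core-bound uplift: {core_ratio * clock_ratio - 1:.1%}")  # ~13%
[/CODE]

So even in a perfectly core-bound case you'd only expect around 13% before memory, power, or any other bottleneck comes into play, not the ~21% the raw core count suggests.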

I suggest that the 3090 is only being recognized by the drivers from release, with almost NO optimization yet for its increased resources in any specific game whatsoever. The 3080 is the main gaming focus for Nvidia and is likely getting the lion's share of attention at first. I'm guessing on that part, but it's more logical than your assumption that ALL GPUs in the same family benefit equally from ALL optimizations, which is patently false.

Again, I've owned the card with the biggest die in every generation since Kepler. I do not remember a single time the TITAN received performance increases that the other cards did not. I mean, I hope the 3090 does perform better over time thanks to drivers, I do, because $1,500+ dictates that it should be a much faster card than it is.

I hope you're right! Maybe it's the pessimist in me that believes people buying the card shouldn't get their hopes up for more performance. If the card isn't worth it to you (not you specifically, mosh) with the performance it's putting out right now, don't buy it expecting it to get better.

*Looks at the HVAC to my left in the garage.

Hmmmmmm

Where's the eye emoji when you need it :lol:
 
You are doing a TON of speculation for something that falls under Occam's Razor. The simplest explanation is that the drivers need optimization to take advantage of the additional resources, and your assertion that this isn't needed is where you are wrong to the core.

Beyond that, debating this with you is less interesting than waiting on the STUPID UPS MAN WHO IS MAKING MY LIFE HELL!!! WHY OH WHY DO THEY MAKE ME WAIT?!? It's an evul conspiracy, and THAT is where the real optimizations are needed! UPS to my door, optimize that shiz!
 
Oh, as a quick follow-up, I do have to say that the somewhat more civilized debate around here is VERY refreshing and not like the old days at all. I do approve...
 