RTX 3080: 8704 CUDA cores
RTX 3090: 10496 CUDA cores
That's a ~20.6% increase in cores, yet the 3090 is only ~10% faster in real-world performance. So how does it scale?
Welcome to the discussion.
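Quick back-of-the-envelope math on those numbers (a rough sketch; "scaling efficiency" here just means observed speedup divided by the theoretical speedup from the extra cores):

# Rough scaling-efficiency math for the figures quoted above.
cores_3080 = 8704
cores_3090 = 10496

core_increase = cores_3090 / cores_3080 - 1    # ~0.206, i.e. ~20.6% more cores
observed_speedup = 0.10                        # ~10% faster in games (quoted figure)

efficiency = observed_speedup / core_increase  # ~0.49: roughly half the extra cores "show up"
print(f"{core_increase:.1%} more cores -> {observed_speedup:.0%} faster "
      f"({efficiency:.0%} scaling efficiency)")

So in games, only about half of the added hardware translates into frames, which is what the rest of this thread tries to explain.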
Thanks for the invite.
I just don't understand how the 3090 would be much faster if the clocks are the same.
If you can only process X amount of work per clock to feed your shaders and so on, why would you expect it to be much faster?
The reason we aren't seeing linear ~20% scaling with the cores, in my opinion, is that the power limit is being hit, causing clocks to drop.
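If you want to see it yourself, nvidia-smi exposes the throttle reasons. A minimal Python polling sketch (assumes nvidia-smi is on your PATH; field names can vary by driver version, so check nvidia-smi --help-query-gpu on yours):

import subprocess
import time

# Poll nvidia-smi once a second: SM clock, power draw, temperature,
# and whether the software power cap is the active throttle reason.
QUERY = ("clocks.sm,power.draw,temperature.gpu,"
         "clocks_throttle_reasons.sw_power_cap")

for _ in range(30):
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # e.g. "1905 MHz, 319.84 W, 64, Active"
    time.sleep(1)

Run a GPU-bound game or benchmark while this is going; if the last column flips to "Active" while temps are still low, you're power-limited, not thermally limited.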
I have seen clocks drop due to the thermal envelope, but not due to power. Have you seen reviews showing this?
It's a good theory; I'm just wondering if others have seen this behavior.
I can watch it happen with my 2080 Ti. If the card hits its power limit, how else would it bring down power draw? The main factors in GPU Boost are temperature and the power target. If the target is exceeded, the card will always downclock and/or downvolt in order to stay within its range.
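Roughly, the behavior looks like this (a toy sketch, not NVIDIA's actual algorithm; the limit values are made-up examples, though the ~15 MHz step matches the commonly observed boost bin size):

# Toy model of a boost loop: step the clock up each tick while there is
# headroom, step it down when the power or temperature target is exceeded.
POWER_LIMIT_W = 320   # hypothetical board power target
TEMP_LIMIT_C = 83     # hypothetical thermal target
STEP_MHZ = 15         # GPU Boost moves in ~15 MHz bins

def next_clock(clock_mhz, power_w, temp_c):
    if power_w > POWER_LIMIT_W or temp_c > TEMP_LIMIT_C:
        return clock_mhz - STEP_MHZ  # over target: downclock (and downvolt) to comply
    return clock_mhz + STEP_MHZ     # headroom left: opportunistically boost

print(next_clock(1905, power_w=325, temp_c=64))  # power-limited -> 1890
print(next_clock(1905, power_w=300, temp_c=64))  # headroom -> 1920

The point is that power and temperature are independent triggers: the card can sit well under its thermal limit and still shed clocks purely because of the power cap.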
Interesting, I'll have to look for this. Usually temps trigger the downclock for me first.
My 3080 hits its power limit long before it even hits its temperature limit. My card runs below 65°C even in extreme cases.
Yes, Ampere hits the power limit faster than Turing, but from videos of modded Ampere cards, more power seems to give little performance gain in actual FPS from what I've seen.
Now if you got the temps into, let's say, the mid-40s, I guarantee you'd see higher clocks.
Let's get an A/C mod like Jay Z did.
I should make a duct that just directs all the A/C flow out of the vent into my PC. Full send!
Driver optimizations very rarely apply to only one specific GPU. The 3080 and 3090 are running the same architecture, so any improvement to the 3090 will apply to the 3080. If the argument is that the 3090 will benefit more... well, I don't agree with that, but time will tell, I suppose.
And this is where we diverge completely. I know for a fact that MANY examples exist of a single GPU at the top of a family, all built on the same chip, where its extra resources simply aren't being fully used, and where optimizations happen only for that chip and do NOT trickle down.
I hold that the PROOF of this is right in front of you and you're dancing around it. Every content-creation application benchmarked and shown publicly has reflected the full resource differential in performance, while games, which are notoriously optimization-driven, have not. And no, the 3080 will not receive the same benefits, because it got them first: optimization targeted at its LOWER core count relative to the 3090.
I suggest that the 3090 is, as of release, merely recognized in the drivers, with almost NO optimization for its increased resources in any specific game whatsoever yet. The 3080 is the main gaming focus for Nvidia and is likely getting the lion's share of attention at first. I'm guessing on that part, but it's more logical than your assumption that ALL GPUs in the same family benefit equally from ALL optimizations, which is patently false.
*Looks at the HVAC to my left in the garage.*
Hmmmmmm