It is just ridiculous! (NV40 related)

EDITED BY HANNERS:

Can we keep the comments constructive please?

I don't want to risk threads here turning into flamewars, which is why I'm editing this post out as it could easily become flame bait.

Thanks.
 
This is business - nothing personal - the fastest card with a comparable price tag wins.

i.e. if:
ATI 20000 3DMarks, NVIDIA 21000 3DMarks
and:
ATI £250, NVIDIA £500
then:
ATI wins, NVIDIA blows
 
Just remember that the Geforce 3 is "significantly" faster than the Geforce 2 by only a "slight" margin...

You can only imagine the PR these days..
 
Deathlike2 said:
Just remember that the Geforce 3 is "significantly" faster than the Geforce 2 by only a "slight" margin...

You can only imagine the PR these days..
I beg to differ, the GF3 is a hell of a lot faster than a GF2 and has extra features... it's not that "slight" of a margin, it's a decent bump.

Same thing from the GF3 to GF4. The benchmarks may not reflect it that much, but my games ran a HELL of a lot better when I upgraded from a GF3 to a GF4.

(In fairness, the 9700 Pro makes 'em all look like yesterday's news though. ;) )
 
I don't believe there has ever been a two-fold increase in performance from one generation to the next, apart from the Voodoo to the first NVIDIA card (although don't quote me on that).
 
It's just my opinion..

The GF3 in Q3 was about the same as the GF2 (the GF3 being slightly better) before the DetFX series of drivers... after the DetFX, NVidia probably didn't bother optimizing the GF2 as much, so it wouldn't beat its newer product (of course, it wouldn't be surprising if both ATI and NVidia did that)...

The "Programmable T&L" stuff that was hyped in that release... I haven't heard much about who has actually used it...

At this moment... it's similar to the "Programmable AA" on the Radeon 9700/9800... who is really going to use that? Maybe a few developers... You need to take into account the number of features developers actually use versus the number of features on a video card... (which are too many to count)
 
You know, ATi was fairly accurate about the performance increase the R300 core has over the R200 core. It is roughly 2.5 to 3x faster in most situations. Turn on FSAA and it totally dominates.


I mean, look at Quake III or UT2K3 with FSAA: there is NOTHING that can compete with the R300, not even the nV35, and that is why nVidia has to hack the FSAA settings and IQ of the card in its drivers... nVidia KNOWS they screwed the pooch and will not openly admit it. They only admitted that the nV30 was a failure, and since the nV35 is derived from it, it can be classified as a failure as well (albeit less of one), especially once you make IQ the same between the cards...
 