Non "cheating" shaders results.

I really didn't want to get into the details, but there were some concerns that this re-arrangement can affect other parts of the benchmark. It's possible that other scenes reuse part of that shader, in which case the ATI tweak could have negative effects.

ATI should work more closely with developers to make sure they code their software to push ATI's hardware to the limit, not go behind their backs and bloat up drivers. I would rather download a small update to games/benchmarks that includes tweaks than have them in my drivers.

What ATI did simply cannot be compared to what NVIDIA did, but that's not what is important. ATI should not have done anything to the benchmark if it was a case-specific tweak. Their tweak only applies to this benchmark and only to that test. If they had some on-the-fly shader optimizer built in that tweaked shaders for multiple titles, then that would be OK given that image quality was the same.

Back to the thread: it seems that ATI did more than just swap around some shader code for them to lose just about half their score. I think that if we're going to ***** out NVIDIA for foul play, ATI should get what's coming to them too...
 
Just to clear up what I think is a misunderstanding.
The original tests were done using DX9 mip filter reflections:

The tests with the shuffled shaders were run using DX8 mip filter reflections because of supposed problems with the DX9 mip filter reflections.

Anybody can download ShaderMark 1.7D & do two runs; one with each type of DX mip filter reflections.

From doing the tests myself, using ShaderMark 1.7D, I've found there IS normally a much lower score when using DX8 mip filter reflections on the R3xx compared to the score obtained using DX9 mip filter reflections. I hardly used the scientific method when running my own tests, so I'll leave it for others to report the % difference.

If anybody cares.
 
If it is the same instructions just reordered, how would that have a negative impact on the code if it was reused?
 
First, I am sorry to have upset you. That was not my intention.

Second, if assembly instruction re-ordering is not valid, then what is?

Third, I have in fact written in 2K of pure assembly what takes 500K when compiled in C. So I don't get the 'bloat' statement, since we are talking assembly.

Fourth, I felt I was still addressing the topic at hand. Here's how:

Let's go back to your addition example with my modification, and tweak it once again:

Benchmark workload
1+2+3 ... + x = y

x = some number at the end of the sequence
y = result of all that addition

Again, let's say that something about ATI's architecture likes to handle all the odd numbers first, then all the evens. So in the driver, they add all the odds, then the evens.
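To make the toy example concrete, here's a minimal sketch in C (entirely my own, purely illustrative; the odd/even split just stands in for a hypothetical hardware preference) showing that adding the odds first and the evens second still produces the same y as adding in the original order:

```c
#include <stdio.h>

/* Toy illustration of the re-ordering argument: the very same additions,
 * scheduled differently, still produce the same result y. */
int main(void)
{
    const int x = 100;                 /* end of the sequence, arbitrary */
    int in_order = 0, reordered = 0;

    for (int i = 1; i <= x; i++)       /* benchmark's original order     */
        in_order += i;

    for (int i = 1; i <= x; i += 2)    /* all odds first...              */
        reordered += i;
    for (int i = 2; i <= x; i += 2)    /* ...then all evens              */
        reordered += i;

    /* both print 5050: same workload, same answer, different schedule */
    printf("in order: %d, reordered: %d\n", in_order, reordered);
    return 0;
}
```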

Now ATI states that they re-ordered instructions to get more performance out of their hardware. OK, what if ATI identified stages where pipes would be stalled, and made sure each pipe had something to do, or was loaded enough to do something? So let's say a hypothetical five-stage pipe has the fourth stage waiting on a result and the previous stages are clear. We'll say they are clear because at each clock since stage 1, some part of the core wasn't ready with a result. So it finally hands off the result at stage 5.

Now on the next clock, I have to fill the pipe from scratch. Not very efficient. But what if I re-ordered the instructions to make sure stages 1 through 4 all had something there? Then on the next clock the hardware is ready to work on something else. Very efficient.
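Here's a rough sketch of that scheduling idea, again just my own illustration with made-up names and latencies: the same operations in two orders, where the second version puts independent work between a long-latency multiply and the instruction that consumes its result, so a pipelined core has something to do while it waits. Both functions return the same value.

```c
#include <stdio.h>

float naive(float a, float b, float c, float d)
{
    float t = a * b;   /* long-latency op...                 */
    float u = t + c;   /* ...consumed immediately: a stall   */
    float v = c + d;   /* independent work, done too late    */
    return u + v;
}

float scheduled(float a, float b, float c, float d)
{
    float t = a * b;   /* long-latency op issues first       */
    float v = c + d;   /* independent work fills the bubble  */
    float u = t + c;   /* result is ready (or nearly) by now */
    return u + v;      /* same instructions, same answer     */
}

int main(void)
{
    /* both calls print the same value */
    printf("%f %f\n", naive(1, 2, 3, 4), scheduled(1, 2, 3, 4));
    return 0;
}
```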

If the very same instructions are going to flow through the hardware, and I just re-order them to provide a catalyst to the processing of the hardware, I see that as a valid optimization and in fact the very nature of an optimization.

Hold on, this ties into shadermark...

Back to the addition example: if x is a small number, then performance differences will be minimal. See, no matter how I schedule the assembly instructions, I get minimal performance gain/loss because there isn't much work to do. But since the benchmark we are talking about here is called ShaderMark, I'm assuming these are long shaders (x is large in our example). In that case, scheduling tweaks would make a much more dramatic performance difference because there are many more instructions over which to gain or lose performance.

If ATI steps up and says they re-ordered instructions here too, which led to the performance loss when the makers patched their benchmark, then we can see that across all benchmarks and across all games, ATI is making the exact same optimization. Again, this cannot be said of NVIDIA's methods.

Using Cg, or any such 'application-specific optimizations', is an NVIDIA-specific optimization, not a general one. What ATI has done looks to be going across the board, games and benchmarks alike. That's how I can justify it as a valid optimization, and how I can see the performance difference happening in ShaderMark.

I will say I can see how you would want each piece of hardware to run the same instructions in the same order. But re-scheduling the assembly is how you get performance out of hardware without compromising the developer's original work. We all know that NVIDIA has a lot of cards out there that get developed on. And their way of doing things is not the same as ATI's way of doing it. If it were, then why not just have one graphics company running the show? But anyway, if both are re-ordering instructions to squeeze performance out of their cores, then that is valid to me. Then it's a match of who has the better hardware, because they are both doing the exact same instructions, just in a different order. What I object to is not running the same set of instructions. That is the point where you cross the line.

Again, sorry for upsetting some of you on this.
 
We seem to be debating completely different topics here. I am not debating whether certain optimizations are valid versus clear cheats. My original point was that doing such tweaks on a benchmark serves only one purpose: to increase the score and make said card look better to potential customers.

There is no benefit to users because no one plays benchmarks. Again, I feel the need to repeat this over and over. Changing the way a benchmark executes defeats the purpose of the benchmark. It's one thing to disagree with the overall validity of a benchmark, but entirely another to sabotage it. If you don't like the benchmark, don't use it.

You are going into a different subject altogether. I think that game developers should code for APIs, not specific hardware. If the hardware vendor has tweaks, they should give them to the developer and let them release updates.

Once again, I think what ATI did with 3DMark2K3 is only bad because it is a benchmark. If they do such things in games without lowering image quality, I am all for it.

Bottom line: both ATI and NVIDIA should not have messed with the original code at all. NVIDIA went way overboard in every way, and ATI really only deserves a warning for what they did. However, that doesn't change the fact that they special-cased the benchmark to get better sales.
 
OK, I was merely arguing that since what they are doing in benchmarks is the exact same thing they are doing in games, it isn't a cheat. Oh, and since they never threw away the developer's code, only re-ordered it, then again, no cheat.

No two apps are alike, so performance is/was not a given. If they only tweaked 3DMark, then I'd say they're cheating. But it's the same optimization across the board... games, benchmarks, it doesn't matter. And through it all, they have not thrown out any of the developer's original code.

So my point is/was that they are doing the same workload because they are using the same instructions. Across the benchmarks and the games, it's the same workload because it's the same instructions. The performance came from using the instructions in the order that worked best with their hardware.

True, nobody plays benchmarks. But with what ATI did in, since you brought it up, 3DMark, and what we are speculating happened in ShaderMark, what we see is representative of the performance gains in games, because they apply the very same optimization to what you actually PLAY.

Again, they have not changed the workload and are using the same instructions. Ripping out the instructions and replacing them with your own, which is what is speculated to be going on with the FX line even now in ShaderMark, that is crossing the line. That is not the same workload, because it's doing different instructions, as you clearly point out in your example.

Cat Maker and OpenGl guy are applauding the benchmark in the catalyst discussion area. Do you think they'd do that if they pulled an nvidia with shadermark? I think they are proud of their hardware and drivers as they should be. It is THE wysiwyg hardware out there, which again, cannot be said of the other guy.
 
If ATI's shader tweaking were truly generic and automatic, then changing the order of the instructions would have no effect; the drivers would optimize it back to the fastest, or at least a fast, version. It is only when the driver is detecting one particular shader that you lose all the ATI tweaks when you change that shader so it can't be detected.
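To spell out the distinction being argued here, a hypothetical sketch in C (none of this is real driver code; the function names and the hash value are made up): a generic optimizer would re-schedule whatever shader bytes it is handed, while a detection-based path only fires when the shader matches one stored fingerprint, and stops firing the moment a single instruction is moved.

```c
#include <stdint.h>
#include <stddef.h>

/* Hash the shader byte code with FNV-1a; a detection-based "optimization"
 * compares this against a known fingerprint, so any re-ordering of the
 * shader changes the hash and the hand-tuned path is never taken. */
static uint64_t fnv1a_64(const unsigned char *p, size_t n)
{
    uint64_t h = 0xcbf29ce484222325ULL;      /* FNV-1a offset basis */
    for (size_t i = 0; i < n; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;               /* FNV-1a prime */
    }
    return h;
}

/* fingerprint of one specific benchmark shader (value invented here) */
#define KNOWN_SHADER_HASH 0x1234abcd5678ef00ULL

/* returns nonzero only for the one detected shader */
int should_use_hand_tuned_replacement(const unsigned char *shader, size_t len)
{
    return fnv1a_64(shader, len) == KNOWN_SHADER_HASH;
}
```

A truly generic reordering pass, by contrast, would take any shader as input and never compare it against anything, which is why shuffling the benchmark's source should not have cost it the optimization.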
 
IISquintsII said:
So can anyone explain the huge drop in performance in the radeons too?

As was stated before, the drop in performance is not only due to any "optimizations" that might have been in place, but also due to the fact that the new tests use DX8 filters as opposed to DX9 ones. (Refer to Senile's post above.) Just how much is due to each is not known as of now :)
 
The Radeons probably got rather shafted due to the shuffled coding.. either that or ATI did optimize for it..

This doesn't change the fact that the FX was CRIPPLED when it was done.. scary, isn't it?...

Let's wait for a game (that uses DX9) to surface (other than Doom3) and see the damage...
 
http://www.3dvelocity.com/cgi-bin/ikonboard/ikonboard.cgi?;act=ST;f=2;t=7

Seems nVidia replied to Pixelat0r (3dvelocity.com) regarding his concerns over the ShaderMark results. The relevant post is the fifth one in the thread linked above. I'll copy-paste it in full below.


" Okay, having spoken to NVIDIA I'm going to try and spell out their thoughts with as little of my own opinion thrown in as possible so you can decide for yourselves. Here's what they told me.

NVIDIA works closely with games developers and 9 out of 10, and eventually nearer 10 out of 10 games will be either developed on their hardware, developed with Cg or developed with their direct input. How can they be accused of not conforming to industry standard shader routines when they ARE the driving force that sets the industry standards in shaders. Games developers are not likely to go shuffling instructions the way benchmark creators are and any games developer that wants to succeed will write their code so that it runs the way NVIDIA's shaders expect it to. The fact that their shaders don't cut it on rarely heard of benchmarks and code that's been "tinkered" with is of little or no concern to them. They won't spend valuable time defending themselves against something they don't see as worth defending. Changing code doesn't expose cheating, it simply feeds code to their shaders in a way that they're not designed to handle it. Games developers will never do that, and if it's games performance that matters then NVIDIA is where the clever money is being spent.
When I asked about the FX's relatively poor performance in our "real" game tests the reply wasn't entirely clear but they certainly claim to have doubts on the reliability of FRAPS and the reliability of those using it.
In a nutshell they're saying that you can analyse all you want, future titles will be coded to run at their best on NVIDIA hardware and they suggested we ask the developers who they think is on top right now.

As I said, I have many, many thoughts on this statement but for now I'm biting my tongue to let others have their say first. "

Edited by pixelat0r on June 18 2003,18:31

END.
 
Senile said:
NVIDIA works closely with games developers and 9 out of 10, and eventually nearer 10 out of 10 games will be either developed on their hardware, developed with Cg or developed with their direct input. How can they be accused of not conforming to industry standard shader routines when they ARE the driving force that sets the industry standards in shaders. Games developers are not likely to go shuffling instructions the way benchmark creators are and any games developer that wants to succeed will write their code so that it runs the way NVIDIA's shaders expect it to. The fact that their shaders don't cut it on rarely heard of benchmarks and code that's been "tinkered" with is of little or no concern to them. They won't spend valuable time defending themselves against something they don't see as worth defending. Changing code doesn't expose cheating, it simply feeds code to their shaders in a way that they're not designed to handle it. Games developers will never do that, and if it's games performance that matters then NVIDIA is where the clever money is being spent.
When I asked about the FX's relatively poor performance in our "real" game tests the reply wasn't entirely clear but they certainly claim to have doubts on the reliability of FRAPS and the reliability of those using it.
In a nutshell they're saying that you can analyse all you want, future titles will be coded to run at their best on NVIDIA hardware and they suggested we ask the developers who they think is on top right now.

This has to be possibly the single most arrogant statement I've heard from any IHV.

I particularly love how first 3DMark 2003 wasn't a fair test, now all benchmarks are unfair, and to make things worse using FRAPS is unfair?!?!

* Hands nVidia the spade again *

Keep digging guys, keep digging... :rolleyes:
 
Hanners said:
This has to be possibly the single most arrogant statement I've heard from any IHV.

I particularly love how first 3DMark 2003 wasn't a fair test, now all benchmarks are unfair, and to make things worse using FRAPS is unfair?!?!

* Hands nVidia the spade again *

Keep digging guys, keep digging... :rolleyes:

I couldn't have said it better :)
 
To translate that message.. NVidia is saying this:

We, NVidia are the driving force for the best graphics in the industry. The 1 of 10 developers that don't care about NVidia.. screw them.. they will eventually succumb to supporting NVidia.

If the industry doesn't comply to use optimal NVidia optimizations, their games will look ugly and slow as intended. We are trying to make sure the code works best on NVidia products so anything our competitors run will run slow as a result. Obviously anything coded any other way is inefficient. The developers aren't too bright.

Anything that tests our shader performance will be skewed and you can't do anything about it.

We don't care for FRAPS.. we'll "optimize" it soon. It will be a useless tool in due time.

Developers, it's our way or the highway.

//End Transmission

This is funny as heck if you ask me.

Edited: Increased content
 
Just a thought.. anything that exposes the new Shader requirements seems to need a "replaced and optimized shader"...

So if you are going to write some shader code.. don't bother.. NVidia will make it allllllllllllllll better *sarcasm*

Maybe people should write those programs though.. the more.. the merrier :)
 
http://www.beyond3d.com/forum/viewtopic.php?t=6510

Richard Huddy is ATI's European developer relations guy.

Richard Huddy wrote:
3DVelocity wrote:
NVIDIA works closely with games developers and 9 out of 10, and eventually nearer 10 out of 10 games will be either developed on their hardware, developed with Cg or developed with their direct input. How can they be accused of not conforming to industry standard shader routines when they ARE the driving force that sets the industry standards in shaders. Games developers are not likely to go shuffling instructions the way benchmark creators are and any games developer that wants to succeed will write their code so that it runs the way NVIDIA's shaders expect it to. The fact that their shaders don't cut it on rarely heard of benchmarks and code that's been "tinkered" with is of little or no concern to them. They won't spend valuable time defending themselves against something they don't see as worth defending. Changing code doesn't expose cheating, it simply feeds code to their shaders in a way that they're not designed to handle it. Games developers will never do that, and if it's games performance that matters then NVIDIA is where the clever money is being spent.


It's fair to say that NVIDIA hardware is used in the development of most games. That's true - but it's not as spectacular a domination as NVIDIA would have people believe. ATI is also used in the development of most games. In fact I suspect that you couldn't find a single example of a game which was developed without some use of both vendors' hardware. [Even GunMetal, which is intended for NVIDIA hardware only, was clearly fixed up at some point on ATI hardware.] That's the way the process works. No game could be released without a respectable QA process, which involves everyone's hardware.

But what NVIDIA are trying to claim is that developers produce code which is specifically tuned to work best on their hardware - and that claim is completely bogus. Sure they have an active DevRel team who try to intervene in the development process and steer things NVIDIA's way - but we all know that the two real dominating forces in games are (a) schedule and (b) the game. For that reason most shaders are written by the games developers in general kinds of ways - most of them are not tuned for NVIDIA hardware at all. NVIDIA don't control this industry nor will they ever.

They claim to be "the driving force that sets the industry standards in shaders". If that's the case then it's odd that they arrived late with their DX9 support (about 6 months behind ATI), that they have been shown to re-write several DX9 benchmark shaders to run on their DX8-style fixed point interfaces, that the OpenGL ARB declined to use Cg as the basis for OpenGL's high level shading language, that their own demo 'Dawn' runs faster on ATI hardware than on NVIDIA hardware even with the extra layers of software involved etc., etc.

NVIDIA are trailing for the moment. Now I don't think they're going to trail forever - but they still haven't come to terms with the fact that they're very much second best at the moment. And in that sense they're like alcoholics... The first step to recovery is to come to terms with the truth in the situation. Right now NVIDIA can't seem to do that, instead they're just saying that everyone else is wrong.

As for the claim that games developers don't shuffle shader ops around, well, that's an odd statement. Developers clearly do tweak shaders during the development process, mostly to make experimental tweaks to the functionality - and often that means that reordering happens many times along the way. But sure, when a game is released it tends to remain unchanging. Then, if NVIDIA become interested in the benchmarking built into the game, they can, if they want to, go in and detect the shaders and consider substituting in other shaders that do 'similar' things (usually with some image quality reduction) but run faster. At that point NVIDIA would describe them as Application Specific Optimisations - but since they are authored solely with the purpose of getting higher benchmark scores, the so-called optimisations may be totally inactive during actual game play.

It's also clear that NVIDIA have been involved in this process in a very extensive way. The revelations regarding both ShaderMark and 3DMark03 make it abundantly clear that NVIDIA do re-write their shaders specifically to raise their scores in synthetic benchmarks. Clearly they're very interested in benchmark scores no matter how synthetic.

The statement that, "Changing code doesn't expose cheating, it simply feeds code to their shaders in a way that they're not designed to handle it" is also very obviously not true. If this were the case you might expect to see a small reduction in shader performance - but you cannot explain the massive performance drops that have been seen in recent cases. It would be remarkable indeed if NVIDIA had designed hardware that could only run the shaders from this "rarely heard of benchmark" at decent speed and any changes to that setup would be many times slower. That would suggest that their hardware was perhaps the most badly designed you could imagine. Where's all this programmability they keep claiming to have? If you use it then you lose all your performance?

Actually on reflection I guess that you could argue that the above quote from NVIDIA _is_ true. Take it literally - and don't worry about the word 'cheating' in there - we'll let them use their "Get Out Of Jail Free" card for that. What the NVIDIA defence could be claiming is that their hardware is not designed to handle DX9 shaders. Something I guess I'd be happy to accept.

3DVelocity wrote:
"When I asked about the FX's relatively poor performance in our "real" game tests the reply wasn't entirely clear but they certainly claim to have doubts on the reliability of FRAPS and the reliability of those using it. In a nutshell they're saying that you can analyse all you want, future titles will be coded to run at their best on NVIDIA hardware and they suggested we ask the developers who they think is on top right now."

It's a fine sight isn't it? A company that used to lead by example with innovative technology and honest product positioning is reduced to saying that anyone who uses FRAPS to check on NVIDIA's story is unreliable. There's no reason I know of to doubt FRAPS - it's widely used and well respected.

It reminds me of the guy who was talking to his psychologist and his psychologist said, "You're in denial". To which the guy's simple response was, "No I'm not".

Developers genuinely like the fact that there's some intense competition in graphics these days. They see that as a good thing - and many of them like the spectacle of the struggle for technological supremacy. I don't think they're impressed by this kind of nonsense.
 
What in the heck are you guys talking about? NVIDIA responded to the benchmark by blowing it off, and then ATI responded to NVIDIA's statement. I am still waiting for ATI to make their statement about ShaderMark and why their hardware takes a dive.

I can see that NVIDIA is just trying to blow the whole thing off, but they have no more options left. ATI still has the option to fess up as to why there is such a hit, or to start pulling NVIDIA-style BS. Let the people decide if their explanation is cheating or legit optimizing.
 
Pojo,

I think this remark just about sums it up

The statement that, "Changing code doesn't expose cheating, it simply feeds code to their shaders in a way that they're not designed to handle it" is also very obviously not true. If this were the case you might expect to see a small reduction in shader performance - but you cannot explain the massive performance drops that have been seen in recent cases. It would be remarkable indeed if NVIDIA had designed hardware that could only run the shaders from this "rarely heard of benchmark" at decent speed and any changes to that setup would be many times slower. That would suggest that their hardware was perhaps the most badly designed you could imagine. Where's all this programmability they keep claiming to have? If you use it then you lose all your performance?


This quote from the thread sums up what I think exactly

I think every company is optimizing, it's just that nVIDIA has a huge necessity to do it. As recently discovered (today?), nVIDIA is even optimizing for regular time-demos, so as I originally suspected, the 9700 Pro can even be faster than the 5900U (and is a lot of the time), while the 9800 Pro completely dominates. Every time word of cheating has come out, it seems ATI has taken a fall of 2% or less, nVIDIA of 20% or more (up to 150% in some UT2003 benchmarks). The difference as well: ATI's "cheats" are still rendering the whole scene, nVIDIA's "cheats" are rendering only what you see.
 
I guess the real question is: did ATI rearrange shaders again, or completely substitute shaders the way NVIDIA did in 3DMark2K3? If they substituted shaders, do they produce identical or better image quality? I just want a direct reply from ATI without any PR ******** mixed in.
 