AMD Gaming is going to PAX West

AMD Confirms New 7nm Radeon Graphics Cards Launching in 2018

AMD’s 7nm Vega is a Monster – 1.25x Turing’s Compute at Half The Size

Whilst the company hasn’t disclosed detailed specifications for the new GPU, we could reasonably expect around one terabyte/s of memory bandwidth, higher clock speeds and significantly better power efficiency thanks to TSMC’s leading-edge 7nm process technology, which has reportedly enabled the company to extract an unbelievable 20.9 TFLOPS of graphics compute out of 7nm Vega, according to one source. Which, if true, would make it the world’s first 20 TFLOPS GPU.
https://wccftech.com/amd-confirms-new-7nm-radeon-graphics-cards-launching-in-2018/

:hmm:
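For what it's worth, here's a quick back-of-envelope check on that 20.9 TFLOPS figure. It's only a sketch: it assumes the usual 2 FLOPs per shader per clock (FMA), and the 6144-shader count is just the rumour that comes up later in the thread, not a confirmed spec.

[code]
# Quick Python sanity check on the 20.9 TFLOPS claim.
# Assumes peak FP32 rate = shaders * 2 FLOPs per clock (FMA) * clock.

def fp32_tflops(shaders, clock_ghz):
    """Peak FP32 throughput in TFLOPS."""
    return shaders * 2 * clock_ghz / 1000.0

print(fp32_tflops(4096, 1.536))      # current 14nm Vega 64: ~12.6 TFLOPS
print(20.9 * 1000 / (4096 * 2))      # clock needed for 20.9 TFLOPS with 4096 shaders: ~2.55 GHz
print(20.9 * 1000 / (6144 * 2))      # clock needed with a rumoured 6144 shaders: ~1.70 GHz
[/code]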
 
Regardless of whether it's Nvidia or AMD, the cost of production is going up radically at ever smaller fab processes, because EUV equipment prices are going through the roof and because of the time it takes to get a process working with good yields while hitting the intended clocks and power usage. The days of affordable anything are behind us, at least for the high-end parts.....Even Intel's 10nm is already 2 years late and will only be on store shelves by late 2019.


TSMC is no different; just remember the 20nm process and how traditional planar (horizontal) transistors made the power usage go nuts. It was only when they used Intel's trick of FinFETs, where the transistors are built vertically like a high-rise building, that power use became reasonable, and at that point they renamed it the 16nm process..... Their current 12nm is a mix of both 12nm and 16nm parts, and it looks like the 7nm process is a mix of 7nm for the easier parts and 12nm for the harder ones, so we're down to using hybrids of a given fab process instead of making the whole chip on one process.....Intel learned this the hard way with 10nm.


GlobalFoundries, we know the story there: stick with 12nm and give up on anything smaller, since developing it was costing a fortune. So how long will the competition last if 7nm is the last step? None of the big companies have enough money to develop anything smaller, or physics itself gets in the way.
 

We will most likely get 3 or 4 cards out of 7nm from AMD: one Vega and 2 or 3 Navi generations. After that, who knows; the one marked "next gen GPU" on the roadmap may be at 7nm+.
After that it depends on TSMC, there may be a 7nm+++.
 



The process needs to be smaller simply to allow much higher transistor budgets. Refinements such as the ones you mentioned reduce power consumption and increase clocks, but they yield relatively minor performance boosts, like the 5 to 10% seen in Intel CPUs from one generation to the next, where the follow-up is on an even more refined process.


A good example is what we see now with big Turing and big Pascal......Big Turing is 18.6 billion transistors at 754mm² on 12nm, while big Pascal is 12 billion transistors at 471mm² on 16nm. Turing is using a lot of that transistor increase to add new features, but it's still 6 billion extra making the die huge while not being used by shipping games, so a big price is being paid on the production cost of these, and the gains over the 1080 Ti may be no larger than 35% in current games that still don't use RT.



The link you provided shows a potential 7nm gaming Vega topping 20 teraflops of single-precision math on a 366mm² die, and doubling the bandwidth at 1.2 TB/sec (likely HBM using 4 stacks). So the next big evolution beyond that, one that adds several billion more transistors, would require an even smaller process to keep die sizes reasonable.....something that may no longer be possible.
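A rough density comparison makes the point. Treat the last row as an assumption: it just reuses Vega 10's 12.5 billion transistors against the rumoured 366mm² die, since nothing official has been disclosed.

[code]
# Rough transistor-density comparison behind the die-size argument.
# GP102/TU102/Vega 10 figures are the commonly quoted ones; the "7nm Vega" row
# is a placeholder assumption (Vega 10's count on the rumoured 366 mm^2 die).

dies = {
    "GP102 (big Pascal, 16nm)": (12.0e9, 471),
    "TU102 (big Turing, 12nm)": (18.6e9, 754),
    "Vega 10 (14nm)":           (12.5e9, 486),
    "7nm Vega (rumoured)":      (12.5e9, 366),
}

for name, (transistors, area_mm2) in dies.items():
    density = transistors / 1e6 / area_mm2        # million transistors per mm^2
    print(f"{name:26s} {density:5.1f} MTr/mm^2")
[/code]

The takeaway is that 12nm barely improves density over 16nm, which is why the extra 6 billion transistors push big Turing past 750mm², while even a modest density gain at 7nm would pull a similar budget back under 400mm².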
 

Well, it should be somewhere between a 2080 and a 2080 Ti with those specs.

As for what comes after, with refinements on the same node we have had 3 new cards on the same node before.
They may get bigger than the 2080 Ti and need water cooling too.
 


The 2080 Ti has nowhere near 20 teraflops of single-precision math, which is what's used in games. The fully unlocked large Turing in the Quadro RTX 8000 has 4608 shaders, about 300 more than the 2080 Ti, and hits around 16 teraflops, so the latter is a little lower. The same applies in memory-bandwidth-bound scenarios: the 2080 Ti sits at 616 GB/sec, so a potential 7nm Vega with 1.2 terabytes per second would be close to twice as fast in those situations.


Some say that this 7nm Vega has 6000-plus shaders onboard, roughly a 50% increase over 14nm Vega's 4096, and the RTX 2080 Ti is only now slightly passing the latter in shader math power.....It's been 18 months since 14nm Vega was presented and 12 months since retail availability, and AMD has always released chips with more grunt in shader operations earlier than the competition....It's one of their hallmark tendencies.
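A quick sketch of those peak numbers, using the standard shaders × 2 × clock formula. The boost clocks below are typical/assumed values rather than official sustained figures, and the 7nm Vega row is purely the rumour.

[code]
# Peak FP32 rate = shaders * 2 FLOPs per clock * clock (assumed boost clocks).

def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

print(tflops(4352, 1.545))    # RTX 2080 Ti:        ~13.4 TFLOPS
print(tflops(4608, 1.77))     # Quadro RTX 8000:    ~16.3 TFLOPS (full TU102)
print(tflops(6144, 1.70))     # rumoured 7nm Vega:  ~20.9 TFLOPS

# Bandwidth-bound case: rumoured 1.2 TB/s against the 2080 Ti's 616 GB/s.
print(1200 / 616)             # ~1.95x
[/code]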
 

Compute performance hasn't been an issue with AMD hardware for a while. On the other hand, rasterization and geometry performance has, imo. I recall reading a while back that the GCN arch was limited to 64 ROPs. Don't quote me on that, but AMD has been stuck on 64 since Hawaii.
 
On the opposite side of the fence, Nvidia has had a tendency to release designs with more geometry, texturing and raw fill rate onboard their chips....It's perhaps the reason why they perform so well in already-released games once you crank up settings and resolutions....no special support or patches from developers are needed.


Shading and memory bandwidth, though, are the hallmark features of AMD GPUs, and it's only recently that geometry got a solid kick in the ass upwards too, but they are still behind in texturing and raw fill rate.


The upcoming 7nm generation brings up an interesting prospect: AMD will keep using HBM, yielding that massive 1.2 TB/sec from 4 stacks (we all saw the picture of Lisa Su holding one), while Nvidia is sticking with GDDR6. The maximum per-pin rate for GDDR6 is 18 Gbit/sec, about 30% more than the 14 Gbit/sec the RTX 2080 and 2080 Ti ship with (616 GB/sec on the Ti's 352-bit bus), and even that 30% bump only gets them to just under 800 GB/sec.....


That's still about 400 GB/sec behind 7nm Vega, so to even get close they would have to move to a 512-bit memory bus with 16 GDDR6 chips clocked at their maximum (not just 12), which would get them to roughly 1.15 TB/sec, or take large 7nm Turing down the same HBM route as Volta....Vega at 7nm has every chance of having some very serious teeth performance-wise, at least when it comes to memory bandwidth.
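Spelled out, the bandwidth arithmetic looks like this. The 18 Gbit/sec and 512-bit rows are hypothetical configurations rather than announced products, and the HBM2 pin rate is an assumption picked to land on the rumoured ~1.2 TB/sec.

[code]
# GDDR6: bandwidth (GB/s) = bus width in bits * per-pin rate in Gbit/s / 8.

def gddr6_bw(bus_bits, gbps_per_pin):
    """Peak GDDR6 bandwidth in GB/s."""
    return bus_bits * gbps_per_pin / 8

print(gddr6_bw(352, 14))        # RTX 2080 Ti as shipped:            616 GB/s
print(gddr6_bw(384, 14))        # full TU102 (Quadro RTX 8000):      672 GB/s
print(gddr6_bw(352, 18))        # same 352-bit bus at max pin rate:  792 GB/s
print(gddr6_bw(512, 18))        # hypothetical 512-bit board:       1152 GB/s

# HBM2: 4 stacks, 1024-bit per stack, at an assumed 2.4 Gbit/s per pin.
print(4 * 1024 * 2.4 / 8)       # ~1229 GB/s, i.e. the rumoured ~1.2 TB/s
[/code]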
 


Indeed, AMD has been stuck at 64 ROPs, but geometry got boosted big time in Vega, so there's at least that.....I think the way they figure it for ROPs is that if there isn't enough memory bandwidth to feed them in all scenarios, what's the point of adding more, while Nvidia keeps adding more even when they don't have the bandwidth for every scenario.....Might be why we see so much variation on a per-game basis with Nvidia hardware.
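As a rough illustration of that feeding problem, here is a minimal sketch; all values are illustrative assumptions (64 ROPs, ~1.5 GHz, 4 bytes per written pixel, no compression, blending or depth traffic), not measured figures.

[code]
# Back-of-envelope: how much bandwidth 64 ROPs could demand at peak.

rops = 64
clock_ghz = 1.5
bytes_per_pixel = 4                                    # plain 32-bit colour write

fill_rate_gpix = rops * clock_ghz                      # ~96 Gpixels/s peak
write_traffic_gbs = fill_rate_gpix * bytes_per_pixel   # ~384 GB/s of colour writes alone

print(fill_rate_gpix, write_traffic_gbs)
# Vega 64 has roughly 484 GB/s of HBM2, so 64 ROPs can already eat most of the
# bus in the worst case; doubling the ROP count without more bandwidth would
# mostly leave the extra ones waiting on memory.
[/code]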
 
[yt]IWS5fkOOTwg[/yt]


Nice touches overall.....I didn't know you don't need a CPU installed in X399 to flash the BIOS. Then there's ray tracing support built in on both the workstation and gaming Vega using open standards, and finally the 12-core 1920X for X399 boards drops to $399 officially, so AMD is ready for Intel's 8-core / 16-thread 9900K right now, as it will likely be more expensive while being short 4 cores / 8 threads.


I can see Intel getting pissed big time about that last point, and Nvidia also being pissed that Vega has been able to do ray tracing for the last 6 months using open standards and driver updates.... :lol:
 