Has AMD Reduced RX Vega Supply & Prioritized Frontier Edition?

But in all seriousness, everyone needs to take a deep breath and relax. Use the ignore function if civility and mutual respect isn't an option. Respect others' opinions and debate like you would IF YOU WERE FACE TO FACE IN REAL LIFE.


Ignore function is indeed my friend......


Anyhow, back to the real source of the problem and its possible solution, which is that no matter what GPU one chooses, it shouldn't cost the consumer a kidney in the process. Samsung has started mass volume production of a custom ASIC built specifically for cryptocurrency mining, though there's no information on whether it targets Bitcoin or some other virtual currency, and TSMC has been selling one since last year and has supposedly made some 500 million off them so far. In both cases the customers are Chinese, but there's no clue as to their exact identity.


Hopefully it'll sell like hotcakes all over the world and bring GPU prices back to sane levels, including for future releases from both the green and red teams.
 
Some cryptocurrencies, like Ethereum, XMR, and Zcash, are designed to be ASIC resistant. Also, used GPUs are easier to sell than ASICs; gaming cards in particular hold high resale value, unlike mining-specific cards, which have practically none. How do you sell an ASIC if nobody wants it?
 
True on all counts. The only counter, at least when it comes to overall value, is if these ASICs are much cheaper to begin with and can process the calculations much faster than any GPU for the crypto coins that allow their use, which isn't hard to believe, since a GPU isn't natively and exclusively designed for that workload alone.


I'm sure that when mining, a lot of the hardware in the GPU is just sitting there twiddling its proverbial thumbs...
 
Actually, AMD will be using both TSMC and GlobalFoundries for GPUs. I think Navi will be made at TSMC if they want to use 7nm, but probably both... who knows?
 
TSMC's 7nm will likely clock higher, whereas GF's 7nm will be more tuned for power savings...
 
Something about which libraries are used for a given fab process, which I can only assume are the basic design rules required to hit a specific power consumption or clock speed on the process in question. They're track-based standard cell libraries, and the latest cutting-edge one from TSMC seems to be the 7.5-track (7.5T) one, but an earlier library that's more mature and has had the kinks ironed out for better yields and/or specific goals such as lower power consumption may be used instead. It's up to AMD to pick which one they want.
 
This couldn't be further from the truth, and it shows a gross disregard for how critical (and how long) high-level simulations can be.

Modeling anything beyond Newtonian physics is going to require a level of precision that can make or break the research in question. You certainly don't want your engine propulsion design to fail because your simulations failed to accurately model minute perturbations in the fluid dynamics, for example.

Nor do you want your model of the formation and evolution of galactic superclusters and their gravitational effects dismissed outright due to a lack of precision. Cosmological simulations are notorious for taking weeks, if not months, to complete at the level of precision they require.
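
Just to make that concrete, here's a toy sketch in Python with numpy (nothing to do with any real CFD or cosmology code): the same leapfrog integration of a circular two-body orbit, run once in float32 and once in float64. The exact numbers will vary, but the float32 run typically shows noticeably larger energy drift from accumulated roundoff, which is exactly the kind of creeping error that can poison a long simulation.

import numpy as np

def relative_energy_drift(dtype, n_steps=100_000, dt=1e-3):
    # Leapfrog-integrate a circular two-body orbit (G*M = 1) at the given
    # floating-point precision and return the relative change in total energy.
    dt = dtype(dt)
    pos = np.array([1.0, 0.0], dtype=dtype)   # unit circular orbit: r = 1
    vel = np.array([0.0, 1.0], dtype=dtype)   # ... and v = 1

    def accel(p):
        r2 = p @ p
        return -p / (r2 * np.sqrt(r2))

    def energy(p, v):
        return dtype(0.5) * (v @ v) - dtype(1.0) / np.sqrt(p @ p)

    e0 = energy(pos, vel)
    a = accel(pos)
    for _ in range(n_steps):
        vel = vel + dtype(0.5) * dt * a    # half kick
        pos = pos + dt * vel               # drift
        a = accel(pos)
        vel = vel + dtype(0.5) * dt * a    # half kick
    return abs((energy(pos, vel) - e0) / e0)

print("float32 relative energy drift:", relative_energy_drift(np.float32))
print("float64 relative energy drift:", relative_energy_drift(np.float64))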

There are definitely use cases for FP64; all I said was that most scientists just use double because it's not worth the time tinkering with precision.

I'm the author of a path integral Monte Carlo (PIMC) code used to study entanglement/superfluidity in a long-range interacting system. Basically, quantum ('beyond Newtonian physics') and, well, not a far cry from gravitation (long-range interaction). It really boils down to the code you are running: Monte Carlo in general offers a level of resilience that things like exact diagonalization may not, and by design it equilibrates into a region of the solution space and then proceeds to bounce around mostly in that region, sampling as it goes.

I can give you an example where I need higher than FP64: I have a solution to an N-body model that I use to confirm my PIMC code has converged. Building the partition function to calculate finite temperature energies requires higher precision: I use mpmath for that.
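
For anyone curious what that looks like in practice, here's a minimal sketch with a made-up spectrum (it is not the actual N-body solution I mentioned, just an illustration): at low temperature the Boltzmann weights underflow double precision, so the partition function naively comes out as zero in float64, while mpmath at 60 digits handles the same sum directly. (In pure float64 you'd normally work around this with a log-sum-exp shift instead.)

import numpy as np
from mpmath import mp, mpf, exp, fsum, log

mp.dps = 60                                          # 60 decimal digits of working precision

E_levels = [900.0 + 0.5 * n for n in range(200)]     # hypothetical energy levels
beta = 2.0                                           # inverse temperature (low T)

# float64: exp(-2.0 * 900) is below the double-precision underflow threshold,
# so every Boltzmann weight is 0.0 and the partition function is meaningless.
weights64 = np.exp(-beta * np.array(E_levels))
print("float64:  Z =", weights64.sum())

# mpmath: the same sum carried out with 60-digit floats.
weights = [exp(-mpf(beta) * mpf(En)) for En in E_levels]
Z = fsum(weights)
E_thermal = fsum(w * mpf(En) for w, En in zip(weights, E_levels)) / Z
print("mpmath:   ln Z =", log(Z))
print("mpmath:   <E>  =", E_thermal)                 # thermal average: sum_n E_n w_n / Z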

The point is, as I said, most scientists don't bother testing to see whether they need FP64: they just use it. If it fails, they will often look at increasing precision, but it is rare that they test FP32.
 
Most credible scientists won't even bother with FP32 precision if they're running any serious simulations, because that would invalidate their results from the get-go. Running student-level simulations on a personal machine is quite different from engineering and validating a design for commercial use, or modeling cosmological effects in general relativity as part of a research grant.

In fact, I can't think of many situations where FP32 would even be applicable, where less precision would be desirable for the sake of simplicity and speed. About the only one that really stands out is, ironically, physics simulations in games. The rest, as you said, would need FP64 or higher.
 
I'm not running student-level simulations on a personal computer. I'm a scientist with validated results using FP32, as well as FP64 and higher, depending on the application. I run jobs on thousands of cores for days at a time, accumulating months of data, for publications. I'm giving a talk at APS in March.

I'm not sure what sort of work you do, but it doesn't seem likely you are in my field. Perhaps we are simply exposed to different environments.
 
Not questioning your work at all. I am very curious about which applications FP32 is good enough for in engineering/research, where FP64 precision isn't needed. I can only imagine these are cases where single-precision errors would be negligible or where the measured output already comes close to the double-precision result. I can't see that being the case for aerospace engineering or cosmological models; however, when I looked for case studies I did see it being the case for modeling X-ray CT imaging. But the majority of Monte Carlo calculations in other fields, be it finance, climate change, etc., in published research all seem to favor FP64 or greater. It is as you said: dependent on the field of study and the level of precision needed.
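
One deliberately toy illustration of where that line can sit, using plain numpy rather than anything from the fields above: whether FP32 is "good enough" for a big Monte Carlo-style average often depends less on the precision of the samples than on how the accumulation is done. A naive float32 running sum over ten million samples loses digits badly, while numpy's pairwise float32 summation of the same data stays very close to the float64 reference.

import numpy as np

rng = np.random.default_rng(42)
samples = (rng.random(10_000_000) + 1.0).astype(np.float32)   # ten million values in [1, 2)

# float64 reference mean
ref = samples.sum(dtype=np.float64) / samples.size

# naive sequential float32 accumulation (what a simple running-sum loop does):
# once the running total is large, each new ~1.5 is mostly rounded away.
naive32 = np.cumsum(samples, dtype=np.float32)[-1] / np.float32(samples.size)

# numpy's pairwise float32 summation: same 32-bit storage, far better error growth.
pairwise32 = samples.sum(dtype=np.float32) / np.float32(samples.size)

print(f"float64 reference mean: {ref:.9f}")
print(f"float32 naive sum:      {float(naive32):.9f}  (rel. err {abs(float(naive32) - ref) / ref:.1e})")
print(f"float32 pairwise sum:   {float(pairwise32):.9f}  (rel. err {abs(float(pairwise32) - ref) / ref:.1e})")

The point being that "FP32 vs FP64" is rarely the whole story; the algorithm (compensated or pairwise sums, error analysis, convergence checks) largely decides whether single precision is usable for a given study.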
 