Days of 3d accelerators numbered?

This thread started in 2002, and was continued in 2004. Time for the 2006 edition :)

AMD-ATI are into Fusion, which seems to have been predicted by some of the posts above.

AMD plans to create a new class of x86 processor that integrates the central processing unit (CPU) and graphics processing unit (GPU) at the silicon level with a broad set of design initiatives collectively codenamed “Fusion.”
(from the press release)

Meanwhile, Valve is working on multicore optimizations for its games, counting on future high-count multicore processors and no dedicated GPU:

Newell even talked about a trend he sees happening in the future that he calls the "Post-GPU Era." He predicts that as more and more cores appear on single chip dies, companies like Intel and AMD will add more CPU instructions that perform tasks normally handled by the GPU. This could lead to a point where coders and gamers no longer have to worry if a certain game is "CPU-bound" or "GPU-bound," only that the more cores they have available the better the game will perform. Newell says that if it does, his company is in an even better position to take advantage of it.
(article at Ars Technica)
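Newell's "more cores, better performance" prediction only holds if the engine actually splits its work evenly across whatever cores are present. A minimal C++ sketch of that idea, with an invented stand-in "shader" function (nothing here is from Valve's actual engine):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Toy "post-GPU" workload: shade a buffer of pixels on the CPU,
// splitting the range evenly across however many hardware threads exist.
// The per-pixel "shader" is a dummy hash, purely for illustration.
std::vector<uint32_t> shade_buffer(std::size_t pixel_count) {
    std::vector<uint32_t> pixels(pixel_count);
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    std::size_t chunk = (pixel_count + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = std::min(pixel_count, begin + chunk);
        pool.emplace_back([&pixels, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                pixels[i] = static_cast<uint32_t>(i * 2654435761u); // dummy "shader"
        });
    }
    for (auto& t : pool) t.join();
    return pixels;
}
```

The point of the sketch is that the code never names a core count: on a 2-core chip it spawns 2 workers, on a 16-core chip 16, which is exactly the scaling story Newell is describing.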
 
The idea of converging GPU and CPU didn't come from IBM, Intel, AMD, NVidia or ATI, but from Commodore/Amiga and their secret AAA project from 1989. They went even a step further and actually had the CPU, GPU and sound chip all together in one package. ;) They were ahead of their time, and the IBM PC concept was and still is an utterly crap concept for a personal computer.
It was supposed to be released in 1990 and they had a working prototype, but it seems the greedy Commodore CEO was doing everything possible to kill the company; heck, I wouldn't even be surprised if IBM had their dirty fingers in there. Oh man, I hate ****ing IBM, because everything that came from them is utter crap. I can't wait for the day that ****ing company dies, because it did so much damage to the IT industry with its crappy products.

Anyway, speaking of sound chips: in 1989 the Amiga came up with an 18-bit sound chip... way, way, way ahead of its time. Just to toss some numbers out here:
the CPU/GPU package had the power of today's $40 low-end card. Guys, we are talking about 18 years in the past ;)

The biggest loss IT ever had was when Commodore/Amiga went out of business. Apple and IBM together did not have better sales than Commodore. And the true reason Steve Jobs left Apple was their inability to compete with Commodore/Amiga, not with the IBM PC. It's the biggest lie, and for some reason IBM and Apple don't mention Commodore at all, a company that simply outperformed them in every possible way.

Should I even mention AmigaOS? Everything else at the time was a ****ing joke compared to AmigaOS, written for the new AAA chip.
 
Kind of shows how most ideas get recycled, but this one turns the idea on its head in a way: AMD's marketing concept is almost a case of a GPU with an integrated CPU. This is a bold play for AMD, and if successful it could redefine a large segment of the market.

This will be excellent for low-power/mobile, mainstream and embedded applications. For enthusiast/performance markets, though, discrete components will still be the only real option, as there will still be bandwidth and power limitations that an integrated solution wouldn't be able to overcome in a cost-effective manner. Imagine trying to put a 125W 256-bit-bus part and a 250W 384-bit-bus part on one piece of silicon in a single socket, while trying to keep the board layer count down so it wouldn't cost as much as a car, plus dealing with the power supply and heat dissipation requirements: the engineers would explode.
 
Kind of shows how most ideas get recycled, but this one turns the idea on its head in a way: AMD's marketing concept is almost a case of a GPU with an integrated CPU. This is a bold play for AMD, and if successful it could redefine a large segment of the market.

You're still thinking in the wrong terms: this is AMD acting as a design company and allowing third-party IP to be inserted into their designs. What this leverages is that companies who currently make custom ASICs or build custom platforms can move to HSA and get the benefits of a common architecture/platform with their custom IP. It massively expands AMD's markets while giving more people access to broader capabilities.


This will be excellent for low-power/mobile, mainstream and embedded applications. For enthusiast/performance markets, though, discrete components will still be the only real option, as there will still be bandwidth and power limitations that an integrated solution wouldn't be able to overcome in a cost-effective manner. Imagine trying to put a 125W 256-bit-bus part and a 250W 384-bit-bus part on one piece of silicon in a single socket, while trying to keep the board layer count down so it wouldn't cost as much as a car, plus dealing with the power supply and heat dissipation requirements: the engineers would explode.

In form factors like a PC, you're right. However, in a custom design environment where there is no expectation of changing components or upgrading, that's perfectly doable. There are plenty of boxes out there with 1200W TDP designs that sell very well: they're called servers. PC gamers spend $1-2K+ on a bunch of components they put together themselves, or on an OEM prebuilt. The hurdle stopping AMD (or Intel, or NVIDIA) from putting together a box with a 125W CPU and a 250W GPU on a board with adequate cooling is expansion slots and upgradability: people won't accept not being able to swap out parts when the new stuff comes out. It's not an engineering challenge; if you've got sign-off on the form factor and TDP, you can make that easily. It's harder to get a 125W CPU and a 250W GPU on a single chip, though... not impossible, just hard to manufacture, and hard to price so that it isn't cheaper to make discrete components (until system architecture changes).
 
Almost sounds like they are moving to a more ARM-like approach, at least in terms of viewing the CPU and GPU as blocks below the chip level that can be integrated with other companies' IP onto the same chip. I wonder if, in these arrangements, they will license AMD's IP and let the other company take care of manufacturing.
 
You're still thinking in the wrong terms: this is AMD acting as a design company and allowing third-party IP to be inserted into their designs. What this leverages is that companies who currently make custom ASICs or build custom platforms can move to HSA and get the benefits of a common architecture/platform with their custom IP. It massively expands AMD's markets while giving more people access to broader capabilities.




In form factors like a PC, you're right. However, in a custom design environment where there is no expectation of changing components or upgrading, that's perfectly doable. There are plenty of boxes out there with 1200W TDP designs that sell very well: they're called servers. PC gamers spend $1-2K+ on a bunch of components they put together themselves, or on an OEM prebuilt. The hurdle stopping AMD (or Intel, or NVIDIA) from putting together a box with a 125W CPU and a 250W GPU on a board with adequate cooling is expansion slots and upgradability: people won't accept not being able to swap out parts when the new stuff comes out. It's not an engineering challenge; if you've got sign-off on the form factor and TDP, you can make that easily. It's harder to get a 125W CPU and a 250W GPU on a single chip, though... not impossible, just hard to manufacture, and hard to price so that it isn't cheaper to make discrete components (until system architecture changes).

The bolded part was exactly my point: a 1200W server box (I've owned a few) is not the same as a 400W chip. Trying to make a 350-400W TDP processor on a single chip without pricing it out of the consumer market would be very challenging, particularly with the crazy-wide data buses required to keep the monster fed. The chip would be very complex and expensive, the motherboard would be very complex and expensive, and upgrading to a new generation would involve throwing out pretty much everything and starting over. This keeps the enthusiast/performance market locked to discrete components rather than a monolithic unit, at least for the foreseeable future.
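For reference, the back-of-the-envelope math behind those bus widths is simple: bytes per second = (bus width in bits / 8) × effective transfer rate. A quick sketch; the 1.8 GT/s figure is a plausible GDDR3-era data rate used only as an illustrative assumption:

```cpp
// Rough memory-bandwidth arithmetic for the bus widths mentioned above.
// GB/s = (bus width in bits / 8 bytes) * effective rate in GT/s,
// since 1 GT/s on a 1-byte-wide path moves 1 GB/s.
constexpr double bandwidth_gb_s(int bus_bits, double gtransfers_per_s) {
    return (bus_bits / 8.0) * gtransfers_per_s;
}
```

At an assumed 1.8 GT/s, a 256-bit bus works out to roughly 57.6 GB/s and a 384-bit bus to roughly 86.4 GB/s, which is why feeding both parts through a single socket's pins is such a board-design headache.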
 
Fusion is an ongoing process (not sure of the new name; good grief, AMD). CPU and GPU look like they will blur into a common structure free from bus slowdowns. I am wondering when AMD will create a new instruction set extending x86/x64 with GPU-like instructions, meaning programming-to-the-metal possibilities. As for memory bandwidth, I still hope to see fiber optics used for memory, with virtually unlimited bandwidth potential. The APU, or whatever it will be called, would need a fiber-optic receiver/transmitter/decoder to hook up to the rest of the computer, while still having pins for power and other stuff.
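Worth noting that x86 has been absorbing "GPU-like" instructions for a while now (MMX/SSE apply one operation across several data lanes per instruction); the wish above is essentially for more of that model. A plain-C++ sketch of the lane-parallel shape such instructions expose to the programmer, with no intrinsics, so this only illustrates the model rather than real vector hardware:

```cpp
#include <array>

// A 4-lane float vector, the shape SSE-style registers take.
// On real vector hardware, operator+ below would be a single
// instruction performing all four additions at once.
struct Vec4 {
    std::array<float, 4> lane;

    Vec4 operator+(const Vec4& o) const {
        Vec4 r;
        for (int i = 0; i < 4; ++i)
            r.lane[i] = lane[i] + o.lane[i]; // one "instruction", four adds
        return r;
    }
};
```

The appeal of pushing this further on the CPU side is exactly the "programming to the metal" point: the same scalar-looking code drives wide data-parallel hardware without a separate GPU programming model.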
 