Xilinx and Radeon

SuperGeil

New member
My Galaxy S20 has some of them, and most of us on here are normal people, not nerds, so try to remember that.

You've probably used more devices with Xilinx chips in them than you realize.

In chips past RDNA 3, this is probably going to be game-changing. 3D-stacked chiplet Radeons... with a deep-learning Xilinx FPGA chip.
 
I'm not entirely sure the tech will directly benefit GPUs. Although given how programmable they are... maybe???
 
There was a great comment here that got deleted. Shame.

Reading around, it seems AMD is betting on FPGAs to help with compute acceleration as process nodes get more expensive and complicated. Considering how promising FPGAs are for AI, raytracing, and video encoding/streaming, newer Radeons could be radically different pretty soon.
 
This is a long but really good read on the deal.

Good paragraph here...

But there is probably more to the shift to newer FPGA iron at Xilinx than just some of the Super 8 doing big rollouts. First, the Vitis environment for programming FPGAs from Xilinx has made it easier to deploy them as industry-specific and application-specific offload engines, and with the slowdown in Moore’s Law, there is greater need to do something. Both the GPU and the FPGA have emerged as a new kind of general-purpose offload engine, and the FPGA is getting its share here.
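For anyone who hasn't touched Vitis: the point of that paragraph is that you program the FPGA in C/C++ and the toolchain synthesizes it into hardware, which is what makes these chips usable as offload engines without HDL expertise. A minimal sketch of what such a kernel looks like (the pragmas follow Vitis HLS conventions, but treat this as illustrative rather than a buildable project):

```cpp
// Minimal Vitis HLS-style offload kernel: element-wise vector add.
// The INTERFACE pragmas expose the pointers over AXI memory ports;
// the PIPELINE pragma asks for one loop iteration per clock cycle.
extern "C" void vadd(const int *a, const int *b, int *out, int n) {
#pragma HLS INTERFACE m_axi port=a offset=slave bundle=gmem0
#pragma HLS INTERFACE m_axi port=b offset=slave bundle=gmem1
#pragma HLS INTERFACE m_axi port=out offset=slave bundle=gmem0
    for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
        out[i] = a[i] + b[i];
    }
}
```

The host side then just enqueues the kernel over OpenCL/XRT like it would a GPU kernel, which is why the article groups FPGAs and GPUs together as general-purpose offload engines.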
 
AMD reveals CPU-FPGA hybrid in released patents

also, if there was any doubt that CDNA will be an FPGA in a few years...

AMD has been working on different ways to speed up AI calculations for years. First the company announced and released the Radeon Instinct series of AI accelerators, which were just big headless Radeon graphics processors with custom drivers. The company doubled down on that with the 2018 release of the Instinct MI60, its first 7 nm GPU, ahead of the Radeon RX 5000 series launch. A shift to focusing on AI via FPGAs after the Xilinx acquisition makes sense, and we're excited to see what the company comes up with.
 
Unless FPGAs make a massive, immense leap in their ability to run at high clocks efficiently, you're not going to see FPGA GPUs. The perf/W just won't be there.

There is a place for FPGA-like structures in AI, where a CNN implementation stays flexible (or a DNN, RNN, etc.). Reconfiguring the weighting for different inference models, perhaps.
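To make that concrete, here's a software model of the idea (names are hypothetical, not any AMD/Xilinx API): the multiply-accumulate datapath stays fixed in fabric, and switching inference models only rewrites the coefficient registers instead of reconfiguring the whole bitstream.

```cpp
#include <array>
#include <cstddef>

// Software model of a fixed 3x3 convolution datapath with swappable
// weights. On an FPGA the MAC wiring would stay fixed; only the
// coefficient registers change between inference models.
// Illustrative sketch only, not a real AMD/Xilinx interface.
struct Conv3x3 {
    std::array<float, 9> w{};  // coefficient "registers"

    // Swap in a different model's kernel without touching the datapath.
    void load_weights(const std::array<float, 9> &model_w) { w = model_w; }

    // Compute one output pixel of a row-major image with the given stride.
    float apply(const float *img, std::size_t stride,
                std::size_t x, std::size_t y) const {
        float acc = 0.0f;
        for (std::size_t ky = 0; ky < 3; ++ky)
            for (std::size_t kx = 0; kx < 3; ++kx)
                acc += w[ky * 3 + kx] * img[(y + ky) * stride + (x + kx)];
        return acc;
    }
};
```

Switching from one model's kernel to another is then a register write rather than a full (or even partial) reconfiguration, which is the kind of flexibility the post is gesturing at.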
 