Official Radeon VII Thread

It shouldn't matter where the NVMe is. On the Maximus XI Formula, the single GPU slot (top slot) will feed 3.0 16x no matter the population of the M.2 slots.

I'm not sure why the Radeon is acting up. Sounds like the 1080 issue is tied to your monitor being weird, which is fine; however it doesn't make much sense that the Radeon refuses to run above 3.0 8x. Have you tried running DDU to clean out any remnants of prior drivers? Not sure if you had an NV card prior to the AMD one, but that could be causing an issue, I suppose.
 
I'm guessing it's due to some sort of M.2/NVMe/SATA bandwidth setting in combination with the number of drives you have. Not sure about the Formula, but many boards drop the first PCIe slot to x8 under certain storage configurations.

I know you updated the BIOS, but silly question: did you reset everything to (optimised) defaults after the update?
 

The Maximus XI Extreme has three x16 mechanical slots. The top two are CPU-fed slots while the last is chipset/DMI. It is worth noting that even though the slots are mechanically x16 they are electrically dependent on slot and device population. The topmost slot is the primary and when the 2nd slot is not populated it will give 16 PCIe lanes to the installed GPU or other devices. However, since this board employs a DIMM.2 it is worth noting those are CPU lanes as well, so if you populate two M.2 drives in the DIMM.2 module your main slot will drop to x8. Keep this in mind as this means if you are running SLI, you will need to use the DMI-based M.2s below the PCH cover. This is not a limitation of the ASUS board but the platform as a whole, since mainstream platforms do not have the massive quantity of PCIe lanes available like we see on the HEDT parts.

https://bjorn3d.com/2018/11/asus-maximus-xi-extreme-review/3/

After reading the part I bolded, I am willing to bet it works identically to my C7H. If I populate the top M.2 bay with a drive, my main PCI-E slot 1 drops to x8. I have to put my M.2 drive in the bottom slot for that not to happen, as one slot runs off CPU PCIe lanes and the other runs off the chipset lanes. (I suspect the Maximus XI is identical in that way: one slot shares the main GPU slot's lanes, the second is chipset.)


To the OP:

Move your M.2 drive to the other slot and test. If you are still getting x8, remove the M.2 drive and test (boot from a Linux live CD or similar that will let you see if the GPU is running at the full x16, as I suspect the M.2 drive is your OS drive). You might be able to check in the BIOS.
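If you go the live-CD route, `sudo lspci -vv` prints a `LnkCap:` line (what the card supports) and a `LnkSta:` line (what it actually negotiated) for the GPU. A minimal sketch of what to look for - the lspci output below is a hard-coded sample for illustration, not from the OP's machine:

```python
import re

# Sample of the two relevant lines from `sudo lspci -vv` for a GPU.
# On a live CD you would run the command yourself and read these lines.
sample = """\
LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1
LnkSta: Speed 8GT/s, Width x8 (downgraded)
"""

def link_width(text, line="LnkSta"):
    """Pull the link width (e.g. 8) out of an lspci dump for the given line."""
    m = re.search(rf"{line}:.*Width x(\d+)", text)
    return int(m.group(1)) if m else None

print("card capability: x%d" % link_width(sample, "LnkCap"))
print("negotiated link: x%d" % link_width(sample, "LnkSta"))
```

If `LnkCap` says x16 but `LnkSta` stays at x8 even under load, the downgrade is real and not just a GPU-Z reading quirk.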
 
https://bjorn3d.com/2018/11/asus-maximus-xi-extreme-review/3/

After reading the part I bolded, I am willing to bet it works identically to my C7H. If I populate the top M.2 bay with a drive, my main PCI-E slot 1 drops to x8. I have to put my M.2 drive in the bottom slot for that not to happen, as one slot runs off CPU PCIe lanes and the other runs off the chipset lanes. It says that with two drives the main slot is either x8 or x16 (you can't run it at x12), and the M.2 slots run at x4.

The MAXIMUS XI FORMULA (what he has, not the EXTREME) does not have the DIMM.2 slot. He only has 2 M.2 slots. I have all M.2 slots occupied (2 of them, granted one is SATA) and my 2080 Ti runs at 3.0 16x. He only owns 1 NVMe drive, like I do.

The NVMe/M.2 SSD he has is not causing the issue.

Just to settle this even further -- the 9900K has 20 PCIe lanes. His single NVMe eats x4, and the GPU x16.

Imchv, I just went through the BIOS on my Maximus XI Hero and there is nothing you can adjust. Everything should be taken care of automatically. Try updating your BIOS if you haven't, I just updated mine to the 1401 BIOS.

Clear the CMOS (should have a button on the back) and pull the power plug out of the PC. Hit the power button 4-5 times to drain any remnant current and then start her up. Do the minimal BIOS settings to boot into Windows and see what you get.

For sure, the NVME is not the problem, especially since you're only running one. Even with the soundcard in (I have one in as well), the board shuts down SATA5 and SATA6 ports to feed 4x chipset lanes to the soundcard.

Oh, and another thing.. when you update the BIOS, it will delete all saved profiles. I say this after updating and not thinking about it, and having not touched any settings in months :lol: ****!!
 
Thanks everyone for the feedback.

PCIe lane sharing between the devices was my initial thought on the issue. I haven't had time to properly test all the scenarios, but I surely will next week.

Right now, the easiest way to determine if it's the mainboard or the video card is to make my 1080 work on the Maximus XI Formula or the VII on the X99, will try that again today.

The 1401 was just released yesterday! Will try it today for sure. I'm aware of the profile thing; luckily I have all the changes I made to the BIOS memorized :)

One more thing: this is not driver related; in the BIOS itself it is shown as x8.
 
I wouldn't be surprised if it turns out to be a bad GPU. Judging by your setup, it's pretty much guaranteed that the NVME or expansion devices you have plugged in are not causing the problem.
 

You are right, the info I gave was for the Extreme, but the same holds true for the Formula; it just has 2 M.2 slots instead of the DIMM.2 module. Nearly all flavors of the motherboard have 24 PCIe lanes (16 from the CPU, 8 from the chipset).

The 9900K only has 16 PCIe lanes:
https://www.intel.com/content/www/us/en/products/processors/core/i9-processors/i9-9900k.html

Even the manual for the Formula, section 1-8, shows how slot one will drop to x8 if configured in the BIOS for Hyper M.2 X16 drives (granted, it talks about a third SSD), which shows that the M.2 slots are tied to the GPU's PCIe lanes. Depending on which slot he has it in, it may or may not affect the GPU slot. Because you have your NVMe drive in slot one and an SSD in slot two (which disables the second M.2 support per the manual), you may not see it; but if he has it in slot 2, not slot 1, with NO SSD, it is possible it affects the GPU. That is exactly what the majority of ASUS motherboards do: one slot doesn't affect the GPU lanes, the second slot does.

However, there are some who have had his problem with the GPU only running at x8: one had a bent pin on the CPU socket (had to fix the pin and remount the CPU), others were affected by botched BIOSes, and one had a defective motherboard (his was with a 1080 Ti).

I would check the BIOS to ensure Hyper M.2 X16 support is not turned on, as it appears that it will affect the GPU slot speed per section 1-8 in the manual.
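The either/or behavior being argued about here can be sketched as a toy model. The function and parameter names below are hypothetical and the rules are modeled loosely on the C7H-style sharing described above, NOT the Formula's actual lane map - the board manual is the authority:

```python
# Toy model of CPU-lane bifurcation on a mainstream board.
# Illustrative only: real slot-to-lane mappings are board-specific.

def gpu_link_width(cpu_fed_m2_populated=False, hyper_m2_x16=False):
    """Return the GPU slot's electrical width under this toy model.

    The 16 CPU lanes only bifurcate as x16 or x8/x8 (there is no
    x12 mode), so giving even 4 lanes to storage still costs the
    GPU a full 8 lanes.
    """
    if cpu_fed_m2_populated or hyper_m2_x16:
        return 8
    return 16

print(gpu_link_width())                          # nothing on CPU lanes
print(gpu_link_width(hyper_m2_x16=True))         # storage stealing lanes
```

The all-or-half behavior is the point: that's why a single misattributed x4 device shows up as a drop all the way to x8, never x12.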
 
I had a nice reply written up and just as I was finishing, my PC hard locked. This is a sick new thing it's doing since I installed that garbage iCUE software.. even after uninstalling it.

I'm going to burn my PC
 
Tried the new BIOS, same result.

I'm almost convinced it's the NVMe drive. My old 5930K had 40 PCIe lanes, plenty to connect 2 cards in CrossFire, an NVMe drive and a sound card. When I was deciding to upgrade, it was between my current setup and an X299 + Core i9 7920X.

Will try the GTX1080 next week, will let you know

PS: Hyper M.2 X16 is disabled.
 
What slot is the sound blaster in?
 
In the only x1 slot the mobo has. I have already removed the sound card for testing, and the result was the same.

Ok, I was making sure you didn't use the PCIE_2 (x16) slot under the GPU. Even an x1 PCIe card (sound card) in the PCIE_2 slot will cause PCIE_1 (the GPU slot) to revert to x8. In the past I have had to avoid using the x1 slot above the GPU due to heat issues (usually caused by poor airflow, or heat from the GPU due to overclocking it), so I wasn't assuming you automatically used the x1 slot.

After you test all possible ideas that have been discussed and have not solved the issue, you may want to test with the motherboard outside the case. It's very possible you have a grounding issue (motherboard touching the case where it isn't supposed to), usually caused by a solder joint that extends from the bottom of the motherboard slightly farther than it is supposed to, making it almost undetectable.

You would be surprised what little off-the-wall things can cause the weirdest issues. I had a cat bite a mouse cable a number of years ago. The machine worked fine, as long as you didn't want to turn it off. You could not power it down no matter what was tried (the power button did nothing, holding the power button down did nothing, shutting down in Windows just rebooted the machine). The only way to turn it off was to unplug the power cord. I was floored when it turned out to be the USB mouse causing it, due to the cat chewing on the cable.
 
Need to get the 1080 in there to see if that card shows x8 as well. If it does, it's the motherboard, as the 1080 used to show x16 before (I assume?). If it shows x16, I'd RMA or return the Radeon.
 

Could still be a motherboard issue even if the 1080 shows x16, as it could be a bug in the BIOS, or the card not properly supported. I learned with my son's Dell (I know, it's a Dell.. lol) that the BIOS actually has to be coded to support various GPUs, which includes PCIe speed (the Dell, which is pretty old by our standards, would take an Nvidia 1060 but not an equivalent AMD-branded card, or even a tier lower). But I would also choose to return the GPU first over the motherboard.


OP: Here is a thread about the same issue: https://rog.asus.com/forum/showthread.php?97581-PCI-E-runs-at-x8-instead-of-x16

One suggestion that caught my eye on page 2, was this about the card's power cables:


Are your 6+2 pin PCIe power cables properly connected to the VGA ports of your EVGA power supply? VGA1 and VGA2? Try VGA3 and VGA4. I have a friend who had the same worries, and by swapping his cables he got PCIe x16 back.

I would verify all power connectors are connected properly, on the card and on the power supply. Also make sure you are using 2 separate cables, not one that has dual connectors. (I can't tell from the picture below that you put up in the pics-of-your-computer section, but from researching your power supply, it has this cable (https://cdn.mos.cms.futurecdn.net/fDdN8YZvicC4a4Fmaq2Tx4.jpg), a dual-connector cable, which it looks like you may be using with both connectors plugged in, which is bad.)

New card!

Radeon VII

[photo of the new Radeon VII]
 
I don't remember if the GTX 1080 was running @x16 after I built this computer in July; it never crossed my mind.

I have a Thor 1200W Platinum PSU; a single cable should be more than enough, but honestly it doesn't hurt to try a second cable for the other 8-pin connector, and I will for sure. My GTX 1080 is 8+6; this card is 8+8.

The computer turns on and off properly, no ground issues that I can tell.

Also, I can't return the card, I live in Peru, sending it back will cost me several hundred dollars, I may as well buy a new card!
 

When you use only 1 cable with dual connectors, you are trying to pull 300 watts through that one cable (each PCIe 8-pin connector is supposed to supply 150 watts). It's not the power supply you need to worry about (well, not yours, as yours is all 1 rail; if you had one with 2 rails, then you would also worry about the power supply). Most cables are not designed to carry 300 watts; the resistance keeps them from feeding the full 300 watts to the GPU and makes them run hot. Some people have gotten away with it for short amounts of time; others have ended up with melted cables and/or fire.
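Rough arithmetic behind that warning. The 150 W figure is the usual PCIe 8-pin spec number and the three 12 V conductors are standard; actual terminal ratings vary by wire gauge, so treat this as ballpark:

```python
# Why daisy-chaining one cable to both 8-pin plugs on a 300 W card
# is dicey: per-pin current roughly doubles versus running at spec.

watts_per_connector = 150  # PCIe 8-pin spec power
volts = 12                 # GPU power rail voltage
live_pins = 3              # 12 V conductors in an 8-pin PCIe plug

amps_per_pin_at_spec = watts_per_connector / volts / live_pins
print(f"{amps_per_pin_at_spec:.1f} A per pin at 150 W")

# One daisy-chained cable feeding both plugs of a 300 W card:
amps_per_pin_doubled = 2 * amps_per_pin_at_spec
print(f"{amps_per_pin_doubled:.1f} A per pin at 300 W")
```

Doubled-up current sits close to the common rating of Mini-Fit-style terminals, which is why the margin (and the cable temperature) gets ugly on a single cable.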

Also, when are you checking what speed the PCIe slot is at? Because it can scale down to x8 when not under load, such as at the desktop. So you may want to verify by starting up a game in windowed mode and checking whether it is x8 or x16. Edit: if you click the ? next to the bus interface speed in GPU-Z, it explains why, and gives you an option to put the card under load to verify its speed (I never knew this till I clicked the ? just now to see what it said; I can now go back to sleep as I learned something today :D)

I would also re-seat your CPU; many people with such problems find out it's the CPU causing it, either through a bent pin or alignment that's just off enough to cause problems, because the GPU relies fully on the CPU lanes.
 

Will test the cable thing for sure. It's my first 8+8 card; I usually had 8+6 and connected them using a single cable, even when CrossFiring my older cards.

I have tried GPU-Z and was aware of the stress tool; it makes no difference:

[GPU-Z screenshot: bus interface still reads x8 under load]


The GPU has been reseated several times, I'm pretty sure it's properly installed.
 

NO, not the GPU.. the CPU (the processor). Remove it and re-seat it, as it controls the PCIe lanes for the GPU. My i7 920 would give me hassles if it wasn't seated perfectly, or if I tightened the Noctua D14 down unevenly.

You very well could have a bad card; I am just trying to give you things to try to verify 100%, cuz I hate RMAing something only to find out it was something I missed in the end. I would even go as far as sticking the Radeon VII in the X99 motherboard and seeing if it gets x16 or x8.
 
X370 only has 20 lanes and most boards don't have a PLX chip, so any PCIe card added beyond an M.2 and the graphics card will drop the graphics card to an x8 slot. I was using an x1 sound card for a while and it dropped me to x8 on the graphics card. Could be a similar issue.
 

No. Or rather, it depends on which slot you use, of course.

Zen has 20 lanes; X370 has an additional 8 lanes.
 