Nvidia's RTX 5090 Power Connectors Are Melting (arstechnica.com)
An anonymous reader quotes a report from Ars Technica: Two owners of Nvidia's new RTX 5090 Founders Edition GPUs have reported melted power connectors and damage to their PSUs. The images look identical to reports of RTX 4090 power cables burning or melting from two years ago. Nvidia blamed the issue on people not properly plugging the 12VHPWR power connection in fully and the PCI standards body blamed Nvidia.
A Reddit poster upgraded from an RTX 4090 to an RTX 5090 and noticed "a burning smell playing Battlefield 5," before turning off their PC and finding the damage. The images show burnt plastic at both the PSU end of the power connector and the part that connects directly to the GPU. The cable is one from MODDIY, a popular manufacturer of custom cables, and the poster claims it was "securely fastened and clicked on both sides (GPU and PSU)." While it's tempting to blame the MODDIY cable, Spanish YouTuber Toro Tocho has experienced the same burnt cable (both at the GPU and PSU ends) with an RTX 5090 Founders Edition while using a cable supplied by PSU manufacturer FSP. Plastic has also melted into the PCIe 5.0 power connector on the power supply.
It's like real-life (Score:1)
...AI errors.
Re: (Score:2)
"a burning smell playing Battlefield 5,"
A whole new level of realism, smell-o-vision
Thermocouple? (Score:3)
How much would it cost nVidia to put a temperature sensor on the PCB at the power connector, seeing as how this is a recurring problem?
Re: Thermocouple? (Score:4, Interesting)
Putting it exactly where the problem is would be expensive because there is no directly suitable connector.
It's not really necessary anyway. What IS necessary is moving up to a connector that is less crap, i.e. not based on the Molex Mini-Fit Jr. Molex has connectors which will suit just fine, like the ones used for powering electric jack legs on RVs. The PC industry is just still dicking around with this cheap crap instead of moving to a real connector.
Re: (Score:3)
Re: (Score:2)
You know what? I thought about this some more and I kind of changed my mind. Not about what they should do, but what it would cost to add a temp sensor while still using the same inadequate connectors. I think you could do it reasonably cheaply by designing thermistors (or whatever kind of sensor) which fit into the spaces where a Mini-Fit Jr. pin goes. However you would still need a new, wider connector on the GPU so that you could intersperse the connector pins with the temperature sensors.
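To make the thermistor idea concrete: a minimal sketch of the sensing side, assuming a hypothetical NTC thermistor embedded next to the connector and read out via the simple beta model. The part values (10 kΩ at 25 °C, beta 3950) are illustrative assumptions, not anything NVIDIA actually ships.

```python
import math

def ntc_temperature_c(r_ohms, r0=10_000.0, t0_c=25.0, beta=3950.0):
    """Convert an NTC thermistor resistance to temperature using the
    beta-parameter model: 1/T = 1/T0 + ln(R/R0)/beta (temperatures in kelvin)."""
    t0_k = t0_c + 273.15
    inv_t = 1.0 / t0_k + math.log(r_ohms / r0) / beta
    return 1.0 / inv_t - 273.15

# At the nominal resistance the model returns the reference temperature (25 C);
# as the contact area heats up, resistance falls and the reading rises.
nominal = ntc_temperature_c(10_000.0)
hot = ntc_temperature_c(5_000.0)
```

Firmware would poll this and throttle or shut down the card past some threshold, which is exactly the kind of safety net the parent is asking for.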
The right thing
Re: (Score:2)
Indeed. There are enough high-current low voltage converters around that work reliably. For example, solar routinely needs them.
The real problem they have here is that they run the same voltage over multiple pins without balancing and go close to the limits. A sane design might use multiple connectors for redundancy, but it would make sure each pin can take the full current, exactly to not run into this problem.
Re: (Score:2)
12VHPWR is a pile of shit, and PCI-SIG are specifying connectors with the all-too-infamous Mixture Of Morons specification model.
Re:Thermocouple? (Score:5, Interesting)
Not ideal. The problem is that NVIDIA doesn't decide how board partners lay out their devices; they followed a standard. Now the standards body may have some soul-searching to do. They've already tried to address this problem once by mandating proper current-available signalling and adjusting the pins so that the connector has to be seated all the way, but ultimately a temperature sensor may not help much. Fundamentally there's still not enough margin for error, so simply straining the connector can cause the metal contacts to not seat perfectly. And they need perfect contact for their max power rating.
The fundamental problem is they made the connector too damn compact. Bigger pins, better contact would have resolved this. If they had the space for a temp sensor in the middle they could have just opted for a larger connector in the first place. Cars don't have this problem because their connectors are *looks up technical term* fucking massive.
Really this whole 12VHPWR thing needs to just be relegated to the dustbin of history and something better developed.
Re:Thermocouple? (Score:4, Interesting)
Re: (Score:2)
The problem is that NVIDIA doesn't decide how board partners lay out their device.
The report is about Founder Edition cards, which are manufactured by NVIDIA itself, not by partners.
Re: (Score:3)
Probably easier to just design it with a connector rated for the wattage that could possibly be flowing through it.
Well, we would think so at least. And then we get two generations of insanely priced GPUs that melt their power connections.
Re: Thermocouple? (Score:2)
GPU bug? (Score:4, Interesting)
Wild guess: the GPU is pulling way too much energy through one set of pins. Otherwise, if there were a weak connection or poor crimps, it would likely be localized to one side, not both... pin 6 is toasty on both ends.
Re: (Score:2)
No need to guess. This will be the same thing as last time: Running an unbalanced multi-wire-per-voltage connector up to the pin-limits is a _very_ bad idea. It just takes one pin with a not so good connection and the whole thing melts. Sane people will restrict themselves to something like 50% per pin or even less.
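The failure mode described here can be sketched numerically. A minimal model, assuming parallel contacts share one total current by conductance: when one contact goes bad (high resistance or nearly open), the remaining pins absorb its share and run well past their balanced load. The 5 mΩ and 50 A figures are illustrative assumptions.

```python
def per_pin_dissipation(total_current_a, pin_resistances_ohm):
    """Split a shared current across parallel contacts by conductance,
    then return the I^2 * R heat generated in each contact."""
    conductances = [1.0 / r for r in pin_resistances_ohm]
    g_total = sum(conductances)
    currents = [total_current_a * g / g_total for g in conductances]
    return [i * i * r for i, r in zip(currents, pin_resistances_ohm)]

# Six healthy 5 mOhm contacts sharing 50 A: each carries ~8.3 A, ~0.35 W.
healthy = per_pin_dissipation(50.0, [0.005] * 6)

# One contact degraded to 50 mOhm: it sheds its current onto the other five,
# which now each carry ~9.8 A and run roughly 40% hotter than their fair share.
degraded = per_pin_dissipation(50.0, [0.05] + [0.005] * 5)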
Re: GPU bug? (Score:2)
First time? (Score:2)
Re: (Score:2)
Or more likely that $3000 price tag did not leave $30 for a connector that can reliably take the power.
Re: (Score:2)
You're suggesting they spend $30 for a quality connector when the CEO can buy a bigger yacht? What are you? A communist?!
Maybe 12V just isn't enough? (Score:3)
Why hasn't a 24V rail been added as standard? Or even 48V?
Re: (Score:3)
Re: Maybe 12V just isn't enough? (Score:2)
Re: (Score:2)
A100 GPUs use 48vdc in datacenters, apparently:
https://l4rz.net/running-nvidi... [l4rz.net]
And the GaN based buck converters are apparently much smaller in dimension than what we've been used to:
https://www.eenewseurope.com/e... [eenewseurope.com]
https://www.eetimes.com/how-ga... [eetimes.com]
So... yeah, I would like to see them update the standard to allow for 48v PSUs in consumer PCs.
Maybe somebody should first explain why... (Score:2)
Re:Maybe somebody should first explain why... (Score:4, Informative)
Maybe somebody should first explain why we can safely push 1800 watts
Maybe because 1800 Watts at 120 Volts is 15 Amps. 600 Watts at 12 Volts* is 50 Amps.
*Just guessing about the RTX power requirements based on what a typical PS provides.
Re: (Score:2)
And why "amps" is a problem is because the power dissipated in a wire (or connector) increases w/ the square of current, whereas it's only linear in voltage.
power = I^2 * R
So if you have a connector with say 0.0001 ohms (which is a random number and pretty high for a connector based on the math below) ... it is dissipating:
in the 50A@12V -> 50 * 50 *12 * 0.0001 = 3W
vs 15A@120V -> 15 * 15 * 120 * 0.0001 = 2.7W
Huh, that's closer than I would've thought.
Re: (Score:2)
Re: (Score:2)
That's not how loss in a section of conductor works. First, the supply voltage doesn't contribute to the loss, the voltage drop across the conductor section does. So:
50A * 0.0001 = 0.005 V
15A * 0.0001 = 0.0015 V
And the power dissipation for each case is V*I:
50 A * 0.005 V = 0.25 Watts
15 A * 0.0015 V = 0.0225 Watts
Quite a difference. It may seem like a small amount of power in each case, but it is dissipated in a very small volume of metal. So it gets hot.
Re: (Score:2)
yow. sorry about that. I have NO idea why i multiplied by voltage when plugging the numbers in my previous post.. :facepalm: it's been way too long. Thanks for pointing it out.
i^2 * r just combines the two steps into one to give the power dissipation. (it comes from plugging v=IR into P = IV )
50A * 50A * 0.0001 ohm = 0.25W
15A * 15A * 0.0001 ohm = 0.0225W
as you pointed out.
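The corrected arithmetic in this subthread boils down to one function. A quick sketch, using the thread's own numbers (0.0001 Ω per contact is an assumed value, as noted upthread):

```python
def contact_loss_w(load_w, supply_v, r_contact_ohm):
    """I^2 * R heat in a single contact carrying the full load current.
    Note the supply voltage only sets the current; it does not appear
    in the dissipation itself."""
    i = load_w / supply_v  # load current in amps
    return i * i * r_contact_ohm

gpu_loss = contact_loss_w(600, 12, 0.0001)     # 50 A  -> ~0.25 W
wall_loss = contact_loss_w(1800, 120, 0.0001)  # 15 A  -> ~0.0225 W
```

Same resistance, but the 12 V path dissipates about 11x more heat in the contact, which is why the mains plug stays cool while the GPU connector cooks.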
Re: (Score:2)
0.0001 ohms seems pretty low actually.
The actual connector used here is rated at a maximum of 0.005 ohms per pin, with 6 pins for power and 6 for ground; that is 0.005/6 + 0.005/6 = 0.0017 ohms.
Pushing 50A through that is about 4.2W of heat in the connector alone.
They're also only rated at 50 mating cycles before the resistance goes up by another 0.005R per pin. That'll double the heat.
Another 5mR for thermal aging: 105C for 240 hours.
Humidity rating adds another 5mR.
The connector is rated for 600W, but at that power...
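Putting the derating numbers from this comment into a sketch, assuming 6 pins per rail and the current path running through both the 12 V and ground contact sets (the per-pin resistances are the rated figures quoted above, treated as illustrative):

```python
def connector_heat_w(current_a, per_pin_mohm, pins_per_rail=6):
    """Heat dissipated in the connector: the current splits across the
    pins of each rail, and flows through both the supply and return sets."""
    r_path_ohm = 2 * (per_pin_mohm / 1000.0) / pins_per_rail
    return current_a ** 2 * r_path_ohm

fresh = connector_heat_w(50.0, 5.0)           # new contacts: ~4.2 W
worn = connector_heat_w(50.0, 5.0 + 5.0)      # after 50 mating cycles: ~8.3 W
aged = connector_heat_w(50.0, 5.0 + 5.0 + 5.0 + 5.0)  # + thermal aging + humidity
```

With every allowed derating stacked, the same 50 A load is dumping roughly four times the heat of a fresh connector into the same small block of plastic.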
Re: (Score:2)
Re: (Score:2)
time for a hardware manufacturer to put an aux power plug on a card!
Re: (Score:1)
Re: (Score:2)
Ohm's law. You can't escape it.
Re: (Score:2)
Re: (Score:2)
Simple: That standard outlet and plug has to conform to requirements that mandate generous reserves. Connectors that can safely do 1000W at 12V exist. They just cost money.
Meanwhile... (Score:1)
Move to 48 volts already (Score:2)
Time to modernize the standard and bump 12 volts up to 48. It doesn't look like GPU power usage is going to decrease any time soon, and going to 48 volts will drop the current to 1/4 of what it was. The cards already have built-in power supplies to step the 12 volts down.
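The 48 V argument is simple arithmetic. A sketch using an assumed 600 W card (the exact board power is an assumption for illustration):

```python
def rail_current_a(load_w, rail_v):
    """Current drawn from a DC rail for a given load: I = P / V."""
    return load_w / rail_v

# A 600 W card on today's 12 V rail vs a hypothetical 48 V rail:
i12 = rail_current_a(600, 12)  # 50.0 A
i48 = rail_current_a(600, 48)  # 12.5 A

# Resistive heating scales with I^2, so the same cable and contacts
# would dissipate 1/16 the heat at 48 V.
loss_ratio = (i12 / i48) ** 2  # 16.0
```

Quartering the current is why telecom and datacenter gear standardized on 48 V DC long ago; the cost is a redesigned PSU ecosystem, which is the objection raised in the reply below.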
Re: (Score:2)
But then people would have to throw away their old PSUs and these loads are just temporary insanity for a small customer segment anyways. They simply should have used good connectors and put in a reasonable safety margin.
My 7900XTX Has 3 PCIe From Power Supply (Score:1)
And has been running 24/7 for over a year with no burnt cables. It is a Seasonic 1000w unit purchased when I bought the video card.
How can they fuck up a standard connector? We all know how much current you can pump through a certain gauge of wire.
I am so glad I went with AMD this last time around on my build.
I don't understand the design (Score:2)
They appear to be using connectors with little or no safety margin, where everything must be perfect or they overheat. It seems so simple to use more pins, bigger pins, or some other strategy to make them more reliable.
Re: (Score:2)
Last time that was the case: running a "max 8A" connector at 8A, with the additional problem that this is a multi-pin design with no balancing on the power lines, hence anybody smart would never go up to the max in the first place. And then you have people who are not very competent plugging these in. I would not feel comfortable with more than half the rated current per pin under these conditions.
As to why they do not use better connectors? Simple: greed. I mean, they would make less in profits if they put in
Start including an external AC connection. (Score:2)
No personal computer should need a 1000w supply. Or even a 500w supply.
Especially when just playing a game, for which modern GPUs shouldn't be at max power dissipation...
Time for external power (Score:2)
Somebody has learned nothing, it seems (Score:2)
Using a connector again that already caused bad issues is not smart. Sure, most or all of these problems will be because some people are too incompetent to plug in a connector. You still have to idiot-proof that connection, unless you start only selling to EEs with proven credentials.
Uh oh (Score:2)
Heard this before (Score:1)
It's not user error (Score:2)