
Nvidia on Melting RTX 4090 Cables: You're Plugging It In Wrong (theverge.com) 127

NVIDIA has responded to a class action lawsuit over melting RTX 4090 GPU adapters. The Verge reports: Weeks after Nvidia announced that it was investigating reports that the power cables for its RTX 4090 graphics card were melting and burning, the company says it may know why: they just weren't plugged in all the way.

In a post to its customer support forum on Friday, Nvidia says that it's still investigating the reports, but that its findings "suggest" an insecure connector has been a common issue. It also says that it's gotten around 50 reports of the issue.

Nvidia's flagship card uses what's known as a 12VHPWR power connector, a new standard that isn't natively supported by most of the power supplies that people already have in their PCs. Because of that, it ships an adapter — or "power dongle," as Friday's post calls it — in the box. Users' initial reports blamed the adapter, with some saying that the melting cable had damaged their $1,599 GPU as well....

GamersNexus, an outlet that's respected in the PC-building community for its rigorous testing, basically came to the same conclusion earlier this week. A video posted on Wednesday by the outlet, which inspected damaged adapters sent in by viewers and has done extensive testing and reporting on the issue, showed that the connectors had wear lines, implying that they hadn't been completely inserted into the slot. GamersNexus even says that some people seem to have missed a full connection by several millimeters. Its video shows that a loose connection could cause the plug to heat up dramatically if it were plugged in poorly and tilted at an angle... [A]n unnamed spokesperson for the company told GamersNexus on Friday that "any issues with the burned cable or GPU, regardless of cable or GPU, it will be processed" for a replacement.

Thanks to long-time Slashdot reader fahrbot-bot for submitting the article.
  • Why is the power not being cut when the cable gets too hot? The resistance changes, and many electrical components have this basic safety feature.
    • Comment removed based on user account deletion
      • Re:Seriously? (Score:4, Informative)

        by Joce640k ( 829181 ) on Sunday November 20, 2022 @05:29PM (#63066781) Homepage

        Because hindsight is 20/20.

        Yep.

        The problem also appears to be with people trying to jam the massive cards into cases which are too small and stressing the connectors against the side of the case.

        eg. https://cablemod.com/12vhpwr/ [cablemod.com]

        • The problem is cables not designed to be installed into actual PC cases.

          • The problem is cables not designed to be installed into actual PC cases.

            Think: Lots of people have managed to use these cards without a meltdown.

            • Think: Lots of people have managed to use these cards without a meltdown.

              Think: They overcame unnecessary adversity.

              Think: You're jerking off nvidia for no apparent reason.

      • It'd also be a pretty odd thing to put in a power cable or socket. I mean, you've got temperature sensors in or near various active components where you'd expect heat buildup, but I suspect an engineer who proposed adding a temperature sensor and associated circuitry and signalling to a power cable would be sent out for drug screening or something.
    • Re:Seriously? (Score:4, Informative)

      by gweihir ( 88907 ) on Sunday November 20, 2022 @05:51PM (#63066815)

      You clearly have no clue what you are talking about. There would need to be special sensors (heat or voltage-drop) in there for that to work. Almost no connectors have this feature and it is not at all "basic". It would also need additional circuitry to cut the power.

      • Literally all it would take to solve this problem is individual polyfuses on the input leads, have the card able to run in 2D mode from bus power (which some GPUs with additional power connectors can do, and some can't), and enough hardware to determine whether power is coming in on each of the pins on the connector — which is (again, literally) just some voltage divider resistors and some digital inputs, or one input, one output, and a mux. If power is lost on any pin, stop drawing from high power.
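
        A minimal sketch of the sensing logic being described, with invented pin names and a placeholder read_pin() helper; a real card would do this in its power-management hardware, not in Python.

        ```python
        # Hypothetical per-pin presence check: each 12 V input pin feeds a
        # voltage divider into a digital input, and the card falls back to
        # slot power if any pin reads dead. All names here are assumptions.

        PIN_INPUTS = ["12v_pin1", "12v_pin2", "12v_pin3",
                      "12v_pin4", "12v_pin5", "12v_pin6"]

        def read_pin(name: str) -> bool:
            """Placeholder for one divider-fed digital input (True = live)."""
            raise NotImplementedError

        def choose_power_mode() -> str:
            # If power is lost on any pin, stop drawing from the high-power
            # connector and run on what the PCIe slot alone provides (~75 W).
            return "full" if all(read_pin(p) for p in PIN_INPUTS) else "slot-power-only"
        ```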

        • by gweihir ( 88907 )

          Polyfuses for 8A on connector pins in a 2.54mm spacing? Yes, probably doable. If you do not mind the connector getting bulky and a _lot_ more expensive. And the Polyfuses heating stuff up themselves at this amperage. And polyfuses have rather large tolerances and work best as short-circuit protection. There is a reason nobody does that and the reason is not that all EEs are stupid. The reason is that, all things considered, the suggestion to do this is stupid.

          Just as a sample, a 0ZRB0500FF1A costs about 50

          • No, it doesn't matter where you put the fuses, because current is constant throughout an entire circuit. All that matters is that you have one per lead. They can go in the PS, in the cable, or on the GPU PCB.

            I'm open to it being done some other way; my point was that it's not necessarily complicated.

            The cost of the polyfuses is small compared to the cost of the board.

            • by gweihir ( 88907 )

              You do not understand the problem. For this application (bad connectivity in the connector), you need a thermal coupling between the polyfuse and the connector pin. That works because polyfuses can also be heat-triggered when loaded near their trip-point. The contacts are _not_ loaded with too much current, hence a current trigger does not work at all. The contacts are badly plugged in hence have too high inner resistance and that makes them heat up.

              • The contacts are badly plugged in hence have too high inner resistance and that makes them heat up.

                PC power supplies are constant voltage, not constant current, and the GPU is a resistive load, so no. (Fans are inductive, but they are also low current devices.) With constant voltage, increased resistance means reduced current. Whichever pins have the lowest resistance will experience increased current, and THEY will heat up.
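
                To put rough numbers on that, a quick illustration with invented contact resistances: six 12 V pins in parallel feeding one load, where the draw divides in proportion to each pin's conductance.

                ```python
                # Current sharing across parallel pins at constant voltage.
                # Resistances are made-up values: five good pins, one loose.
                total_amps = 40.0
                r_contact = [0.004] * 5 + [0.040]   # ohms

                g = [1 / r for r in r_contact]
                g_sum = sum(g)

                for n, (gi, ri) in enumerate(zip(g, r_contact), start=1):
                    amps = total_amps * gi / g_sum
                    print(f"pin {n}: {amps:5.2f} A, {amps * amps * ri:5.3f} W")

                # The five good pins each carry ~7.8 A (~0.25 W each); the
                # loose pin carries only ~0.8 A. In this simple model the
                # low-resistance pins take the extra current, as stated above.
                ```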

    • Apparently, if you have a new compliant PSU that has native 12VHPWR connectors, it does actually have the ability to communicate with the dGPU and manage current accordingly. Whether or not it has built-in protections to change behavior based on the heat of the cable is not clear (to me anyway). Initial reports of melted connectors on the dGPU side have been from people using the four-prong adapters that let you use the standard 8-pin PCIe connectors in groups of four to power a 12VHPWR connector that goes into the card.

    • Why is the power not being cut when the cable gets too hot?

      Are you asking why our cables are cables rather than expensive complex electronics containing sensors?

      There are few connectors in the world that monitor their cable temperature, and those are almost exclusively used where current demand may exceed the capacity of the cable. That isn't the case here; both connector and cable are adequately sized for the load.

      We don't implement complex electronics simply because a small handful of users misuse their equipment.

  • by 93 Escort Wagon ( 326346 ) on Sunday November 20, 2022 @04:49PM (#63066689)

    But then it appears the company is likely correct:

    "GamersNexus, an outlet that's respected in the PC-building community for its rigorous testing, basically came to the same conclusion earlier this week. A video posted on Wednesday by the outlet, which inspected damaged adapters sent in by viewers and done extensive testing and reporting on the issue, showed that the connectors had wear lines, implying that they hadn't been completely inserted into the slot. GamersNexus even says that some people seem to have missed a full connection by several millimeters."

    So the mocking tone of the headline is just an attempt at clickbait, which I guess shouldn't be a surprise - given what we've seen from the Verge in the recent past.

    • by ravenshrike ( 808508 ) on Sunday November 20, 2022 @05:23PM (#63066757)

      Except the socket can come out those several mm while doing cable management. The retention clip is crap, with little to no feedback, and there's no visual difference between clipped/not clipped. The "solution" is to plug the cable into the card before socketing it, and rock the power cable firmly back and forth to make sure it is secure. It's a shit design.

      • by Nrrqshrr ( 1879148 ) on Sunday November 20, 2022 @06:04PM (#63066835)

        Exactly my thought. If I just bought a $1,600 GPU, I'm not gonna hammer that bitch in. As soon as I feel any kind of resistance or haptic feedback, I would assume that extra force is unnecessary or even harmful. A good old *click* with a bit of plastic to lock the cable in would have done wonders in this case.
        Yes, hindsight is 20/20. But this seems like basic UX design to me.
        Keep the user informed.

        • UI would be about good design, haptic feedback, and not blaming the user.

          UX is about more LEDs and go-fast stickers.

        • The connector does latch. It's the same fundamental mating design we've been using for 20 years. It sounds like the problem is limp-wristed people thinking that just because something is expensive, they shouldn't push as hard.

          Push that connector in just as hard as you would on a $10 GPU and you will be fine, and so will your GPU.

          Funny story, I have had a connector catch fire as well. Years ago. It also wasn't seated correctly. But it didn't make international news because it wasn't a new connector design and I wasn't attempting to blame my incompetence on a vendor.

          • Funny story, I have had a connector catch fire as well. Years ago. It also wasn't seated correctly. But it didn't make international news because it wasn't a new connector design and I wasn't attempting to blame my incompetence on a vendor.

            This isn't a new connector design. Molex Mini-Fit Jr. has been around for decades. What's new is that nvidia is trying to deliver an unprecedented amount of current per pin, without any circuit protection whatsoever.
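
            For scale, a rough per-pin comparison using the published power ratings of each connector and simple division; the numbers are approximate.

            ```python
            # Approximate current per 12 V contact at each connector's rated power.
            connectors = {
                "PCIe 8-pin, 150 W over 3x 12 V pins": (150.0, 3),
                "12VHPWR, 600 W over 6x 12 V pins": (600.0, 6),
            }
            for name, (watts, pins) in connectors.items():
                print(f"{name}: ~{watts / 12.0 / pins:.1f} A per pin")
            # ~4.2 A vs ~8.3 A: roughly double the current per contact.
            ```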

    • by fazig ( 2909523 )
      Headline from the linked Verge article:

      Nvidia thinks RTX 4090 cables melted because they weren’t fully plugged in

      The mocking tone of the headline primarily comes from TFS.

      • by fahrbot-bot ( 874524 ) on Sunday November 20, 2022 @05:37PM (#63066797)

        The mocking tone of the headline primarily comes from TFS.

        It was me. I just noted it here [slashdot.org] as an homage to Apple's response to the iPhone 4's poor reception, "you're holding it wrong", which was also technically correct, but ridiculous.

        • by fazig ( 2909523 ) on Sunday November 20, 2022 @05:45PM (#63066801)
          Yes, it is technically correct.
          NVIDIA's quality control should have caught this, though, and put measures in place to prevent it. Especially for such an expensive product, I can understand the frustration.
          I've been building and selling PCs for over a decade, and I don't think I would be above making such a mistake.
          • To be fair, though...
            These cards represent a new, uhhh... bar... in PC component loads.
            These fuckers are drawing near 40 amps. You've never had to deal with the kind of care you need to take when connecting up 40 amps.

            Part of it is probably NV's fault for not explaining to people that these take a bit more care than the stuff they're used to and understand.
            Part of it is definitely the fault of hubris on the side of installers.
            • by fazig ( 2909523 )
              Since P = U^2 / R = I^2 * R, theoretically you could use higher voltages to deliver the same power at lower resistive losses.
              With 12V*40A you get 480W in total. The same 480W total can be accomplished by 24V*20A while the resistive losses are kept lower.
              Perhaps only do so up to the VRMs on the card, which then convert it down to safer levels. But of course that would require new power supplies, because as far as I know contemporary ATX power supplies either can't deliver +24VDC or can only do so at bad efficiency.
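
              As a back-of-the-envelope check (the contact resistance here is an assumed, illustrative value, not a measurement):

              ```python
              # Same 480 W delivered at 12 V vs. 24 V through an assumed
              # 10 mOhm of total contact resistance.
              POWER_W, R_CONTACT = 480.0, 0.010
              for volts in (12.0, 24.0):
                  amps = POWER_W / volts
                  loss = amps ** 2 * R_CONTACT
                  print(f"{volts:4.0f} V: {amps:4.1f} A, contact loss ~{loss:.0f} W")
              # 12 V: 40 A -> ~16 W in the contacts; 24 V: 20 A -> ~4 W.
              ```
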
              • Raising voltage is absolutely an option. But not one without consequences.
                Stepping the voltage back down means more active electronics and heat.

                I think an easier option is to just use more fucking power delivery cables.
                These aren't the first 400+W GPUs. They're just the first meant for home consumers.
                AMD's 400W pro cards have their power delivery over a second PCIe-like connector, so that the per-conductor power delivery is tiny.

                Someone at PCI-SIG thought it'd be a good idea to design a connector to do the opposite.
          • by AmiMoJo ( 196126 )

            They should make GPUs that don't require 600W.

            This is what happens when they haven't been able to engineer the expected efficiency boost. Just throw more transistors at it to ensure that it performs better than the competition and last year's model. Figure out how to supply it with 600W.

            • by fazig ( 2909523 )
              Well, it doesn't require 600W, technically.
              It does draw that much power, and thus current, because it's stupidly overclocked out of the box so that they can squeeze out the last half percent of performance that places them high on some synthetic bullshit chart, while just keeping it below the threshold of burning your house down.

              That's the new trend for newly released hardware, which you can also see with the latest Intel CPUs as well as AMD Ryzens. If you manually lower their power target a bit you can drastically reduce their power draw.
    • But then it appears the company is likely correct:

      So the mocking tone of the headline is just an attempt at clickbait, which I guess shouldn't be a surprise - given what we've seen from the Verge in the recent past.

      Actually, I added the "plugging it in wrong" bit as an homage to the Apple iPhone 4 reception issue where Apple (basically) asserted people were holding the phone wrong -- which was technically correct given the design of the antenna, but ridiculous from an end-user perspective.

    • Yes and no... the problem is that the cable is hard to seat in the first place, and it can come out during cable management, especially if it's not fully clipped. That means a cable that looks properly inserted at first glance may not be fully clipped; put some mechanical load on it and it moves a few mm and suddenly burns... This is clearly a construction problem, as GN also stated; there should be some safety mechanism in place that instantly switches off the connection, and the problem is that there isn't one.

  • by sxpert ( 139117 ) on Sunday November 20, 2022 @04:53PM (#63066703)

    all NVidia has to say is "you're doing it wrong" ?
    well, no problem, AMD has a decent offer for about 1/2 the price... off goes my money

    • Re:seriously ? (Score:4, Insightful)

      by Joce640k ( 829181 ) on Sunday November 20, 2022 @05:36PM (#63066795) Homepage

      all NVidia has to say is "you're doing it wrong" ?
      well, no problem, AMD has a decent offer for about 1/2 the price... off goes my money

      It's not just NVIDIA saying it, third parties are agreeing - there's an example in the summary.

      • NVidia: "From now on, every GPU sold will come with its own power supply soldered directly to the video board to prevent this from happening."
    • AMD has a decent offer for about 1/2 the price

      If only that were true. Some of us actually use the features that separate NVIDIA from AMD. But if you're just a casual gamer then you'd be mad not to choose AMD right now.

  • Still bad design (Score:5, Informative)

    by Retired Chemist ( 5039029 ) on Sunday November 20, 2022 @05:10PM (#63066733)
    If it is that easy to plug it in wrong, it is still bad design. No excuse.
    • Re:Still bad design (Score:5, Informative)

      by jacks smirking reven ( 909048 ) on Sunday November 20, 2022 @05:20PM (#63066751)

      That's what I was thinking and apparently that may have already been acknowledged

      PCI-SIG now considering changes to problematic 12VHPWR connector [tweaktown.com]

      • Good, working as it should.

        Gotta get more spazzes and kindergartners in the QA lab.

      • This is going to be a big pain for existing PSUs which already have the 12VHPWR as part of the ATX 3.0 standards.

        Not to mention the current 4090 nvidia cards.

        Will that mean an ATX 3.1 or some other new standard with the new / updated connector is forthcoming?

        And will ATX 3.0 PSU vendors be supplying adapters for the upcoming connectors? Or will PSU vendors shipping the new connectors provide adapters to down-convert to the current 4090 / 12VHPWR connectors?

        This will be an interesting transition.

        • One of the fixes I have seen mentioned (maybe on GN?) was to shorten the contacts so that if the connector isn't fully inserted, it doesn't make a connection. This would be compatible with the existing stuff without significantly changing things. The female side (power supply) would have the dimples closer together, while the male side (video card) would stay the same.

    • Maybe it’s time to ditch 12 volts and bump it up to 48.

      • by dohzer ( 867770 ) on Sunday November 20, 2022 @07:39PM (#63067005)

        640kV ought to be enough for anybody.

      • Can't get the automotive world on board, so what makes you think the PC world wants any part of it either?

      • Maybe it’s time to ditch 12 volts and bump it up to 48.

        Why? That doesn't solve improperly mated connectors, and currently the connector is designed to provide nearly double the amount of power that is being drawn. We're far from any limit that would necessitate changing supply voltage. Additionally, if you do that it creates additional complexities for the peripherals, which need to cope with a bigger jump. It's harder to regulate 48V down to 1.3V than it is 12V.

        • Voltage goes up 4x and the current goes down 4x. You don’t have to worry about milliohms of contact resistance heating up either. For decades everything in the telecom world ran 48 volts.
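
          Since the loss in a given contact is (P/V)^2 * R, the improvement depends only on the voltage ratio, whatever R actually is:

          ```python
          # Contact heating ratio for the same delivered power at 12 V vs 48 V.
          loss = lambda p, v, r: (p / v) ** 2 * r
          print(loss(600, 12, 1.0) / loss(600, 48, 1.0))  # 16.0: 4x the volts, 1/16th the heat
          ```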

      • That's exactly what I said in the last discussion of this story. This new connector is associated with a new power supply standard, and that standard could have included a 48V rail... but it didn't. Instead they just said fuck it, we'll do more 12V with more leads. But running multiple leads safely requires independent current protection on every lead, and all most power supplies have is per-rail overcurrent protection (if that). The assorted ATX standards we were using already fucked this up; there are six

      • by AmiMoJo ( 196126 )

        Problem is that you have far fewer choices with your power electronics to handle 48V. Many devices that can handle 12V exist, far fewer that are 48V capable.

        It would also be a new voltage that PSUs need to generate. The industry is moving towards just 12V for everything already, removing the 5V, 3.3V, and -12V supplies.

    • If it is that easy to plug it in wrong, it is still bad design. No excuse.

      It's not that easy to plug in wrong. It's the same design we've been using for 20 years, just with a couple of extra pins on the top. The only thing new is the size of the card and people cramming them into undersized cases and putting extreme stress on the connector.

      Cables have been inserted incorrectly for decades in PCs. The only thing here is *OMG NVIDIA MADE A NEW CONNECTOR IT'S THEIR FAULT NOOOT MIIINEEEE*

    • Like soldering irons [openstem.com.au]?
  • It is people messing things up. Sure, the connectors are loaded close to maximum and you should probably not do that for something that is getting installed by amateurs, but technically Nvidia uses the part within spec.

  • It's time to start designing these extremely power hungry video cards to sit in external boxes. A PCIe card with an external connector could extend the bus to the video card. This would solve power issues, as it would have its own dedicated supply, and reduce the heat in game systems. I'm still clunking along with a GTX 960 and i7 6th gen CPU. I can't imagine what kind of heat and power draw today's newest cards create and need, but melting cables, even because they are a bit loose, is a design flaw AFAIK and moving these monsters externally seems reasonable.
    • moving these monsters externally seems reasonable.

      Unfortunately, the current trend for external PCIe devices is to use Thunderbolt connections from the host to the external enclosure. (Mostly for Apple devices that lack an internal slot.) Other mechanisms exist [amazon.com] (mainly just reuse of existing DVI or USB 3.0 cables to carry the raw PCIe signaling), but those are relatively few in number. The most affordable (around $60-70 on Amazon [amazon.com]) are cheap connectors mounted in a 3D-printed project box without any shell or support structure for the card itself. (No PCI-

    • It's time to start designing these extremely power hungry video cards to sit in external boxes.

      Why? Just to be wasteful? In any case, you do not need to do anything to a video card to make it sit in an external box, except follow a standard. All video cards already do this, so it's done.

      This would solve power issues as it would have its own dedicated supply

      Nonsense. The power issue with these cards is a connector issue, and you're going to have the same connector if you use an external enclosure.

      and reduce the heat in game systems

      This is a non-issue. Fans move the heat out of the case without any trouble. Managing case heat is not a problem, it's not even difficult except in small cases, and even then all it takes is decent airflow.

  • British plugs have insulted electrical tape half way up the live and neutral pins. This makes shorts and fires much less likely.

    If you have, for example, an American plug that is only halfway plugged in, a small piece of metal (dime, paperclip, etc.) can fall down and connect the live and neutral pins, causing a short.

    Any plug that is having issues when it is only partly seated should be redesigned to prevent this issue.

    Remember, always design your devices for morons that failed first grade, not for electrical engineers.

    • by XanC ( 644172 )

      What happens when you insult the electrical tape? Does it challenge you to a duel?

      You're totally right. Modern GPUs are demanding more power from the power supply than ever before. The standardized power connectors were never designed to pass the current they're being asked to carry nowadays. They have always been cheaply made and mass produced. NVidia should rethink the connector design and make it more robust and fool-proof, or require many more power supply connections to reduce the load on any given cable.
      • by ceoyoyo ( 59147 )

        Apparently, they did. The actual connector is a new type. The problem seems to be with the adapter they put in so people wouldn't scream that they needed to upgrade their power supply as well.

    • I bridged a 20A circuit with a quarter once.
      The problem seemed to resolve itself within a few microseconds.
    • Any plug that is having issues when it is only partly seated should be redesigned to prevent this issue.

      No power plug in the world (including British plugs) can deal with incorrect seating unless it has active thermal monitoring (as some EV charging cables do). British plugs are capable of arcing and starting a fire if you seat them incorrectly. The insulation *exclusively* prevents short circuits from the paperclip scenario you mentioned.

      This scenario is not what is happening with these GPUs.

  • People first noticed problems with the 4-prong PCIe 8-pin adapters included with 4090 cards intended to allow people to use older PSUs that have enough juice but don't have the new 12VHPWR connector. It's not hard to see how a 4-prong adapter would have problems staying seated in the card.

    However, now there have been reports of newer PSUs with native 12VHPWR connectors melting in 4090s:

    https://www.digitaltrends.com/... [digitaltrends.com]

    Looks like the entire connector design is bunk.

    • Looks like the entire connector design is bunk.

      It's definitely... optimistic.

      The current being carried by these is non-trivial.
      Nothing short of some kind of tightened locking mechanism is going to stop these things from melting, and even then, sometimes some are going to melt anyways because of damage to the prongs or corrosion.
      There's zero fucking room for extra resistance. The connector is designed to get hot even with a perfect connection.

      Frankly, sending this much current through connectors with only 12 conductors needs to be carefully re-thought.
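
      To see how little margin there is, a sketch with assumed contact resistances (illustrative values, not measurements):

      ```python
      # Per-contact heating at ~8.3 A (600 W / 12 V / 6 power pins), comparing
      # an assumed healthy contact with a mildly degraded one.
      AMPS_PER_PIN = 600.0 / 12.0 / 6

      for label, r_mohm in [("healthy, ~3 mOhm", 3.0), ("worn, ~30 mOhm", 30.0)]:
          watts = AMPS_PER_PIN ** 2 * (r_mohm / 1000.0)
          print(f"{label}: {watts:.2f} W per contact")
      # ~0.21 W vs ~2.08 W: a 10x bump in contact resistance turns a warm
      # pin into a hot one, inside a small plastic housing.
      ```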

  • by DrXym ( 126579 )
    The real problem here is that NVidia is selling graphics cards that draw 450W, requiring new PSUs and connectors. Personally I'm all in favour of some bitcoin mining asshole watching their operation go up in smoke, but even better if it happens to NVidia's sales strategy.
