Nvidia on Melting RTX 4090 Cables: You're Plugging It In Wrong (theverge.com) 127
NVIDIA has responded to a class action lawsuit over melting RTX 4090 GPU adapters. The Verge reports:
Weeks after Nvidia announced that it was investigating reports that the power cables for its RTX 4090 graphics card were melting and burning, the company says it may know why: they just weren't plugged in all the way.
In a post to its customer support forum on Friday, Nvidia says that it's still investigating the reports, but that its findings "suggest" an insecure connector has been a common issue. It also says that it's gotten around 50 reports of the issue.
Nvidia's flagship card uses what's known as a 12VHPWR power connector, a new standard that isn't natively supported by most of the power supplies that people already have in their PCs. Because of that, it ships an adapter — or "power dongle," as Friday's post calls it — in the box. Users' initial reports blamed the adapter, with some saying that the melting cable had damaged their $1,599 GPU as well....
GamersNexus, an outlet that's respected in the PC-building community for its rigorous testing, basically came to the same conclusion earlier this week. A video posted on Wednesday by the outlet, which inspected damaged adapters sent in by viewers and has done extensive testing and reporting on the issue, showed that the connectors had wear lines, implying that they hadn't been completely inserted into the slot. GamersNexus even says that some people seem to have missed a full connection by several millimeters. Its video shows that a loose connection could cause the plug to heat up dramatically if it were plugged in poorly and tilted at an angle... [A]n unnamed spokesperson for the company told GamersNexus on Friday that "any issues with the burned cable or GPU, regardless of cable or GPU, it will be processed" for a replacement.
Thanks to long-time Slashdot reader fahrbot-bot for submitting the article.
Seriously? (Score:2)
Re: (Score:2)
Re:Seriously? (Score:4, Informative)
Because hindsight is 20/20.
Yep.
The problem also appears to be with people trying to jam the massive cards into cases which are too small and stressing the connectors against the side of the case.
eg. https://cablemod.com/12vhpwr/ [cablemod.com]
Re: (Score:3)
The problem is cables not designed to be installed into actual PC cases.
Re: (Score:2)
The problem is cables not designed to be installed into actual PC cases.
Think: Lots of people have managed to use these cards without a meltdown.
Re: (Score:3)
Think: Lots of people have managed to use these cards without a meltdown.
Think: They overcame unnecessary adversity.
Think: You're jerking off nvidia for no apparent reason.
Re: (Score:2)
Re:Seriously? (Score:4, Informative)
You clearly have no clue what you are talking about. There would need to be special sensors (heat or voltage-drop) in there for that to work. Almost no connectors have this feature and it is not at all "basic". It would also need additional circuitry to cut the power.
Re: (Score:2)
Literally all it would take to solve this problem is individual polyfuses on the input leads, and have the card able to run in 2d mode from bus power (which some GPUs with additional power connectors can do, and some can't.) and enough hardware to determine whether power is coming in on each of the pins on the connector — which is (again, literally) just some voltage divider resistors and some digital inputs, or one input, one output, and a mux. If power is lost on any pin, stop drawing from high power.
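To make the scheme above concrete, here is a rough Python sketch of the decision logic being proposed: per-pin voltage sensing through dividers, and a fallback to slot power if any pin reads dead. The pin count, divider ratio, and threshold are made-up illustration values, not anything from a real card's sensing hardware or firmware.

    # Rough sketch of the per-pin presence check proposed above. Pin count,
    # divider ratio, and thresholds are made-up illustration values, not any
    # real card's sensing hardware or firmware.
    PIN_COUNT = 6          # 12 V supply pins on a 12VHPWR connector
    DIVIDER_RATIO = 0.25   # assumed resistor divider feeding a logic-level input
    V_THRESHOLD = 2.5      # divider output below this => pin treated as unpowered

    def pins_ok(divider_readings):
        """True only if every supply pin shows voltage at the card side."""
        return all(v >= V_THRESHOLD for v in divider_readings)

    def choose_power_state(divider_readings):
        # If any pin reads dead, stop drawing from the external connector and
        # fall back to slot (bus) power only -- a low-power "2D" mode.
        return "full_power" if pins_ok(divider_readings) else "bus_power_only"

    # One pin not making contact reads ~0 V at its divider:
    print(choose_power_state([3.0, 3.0, 3.0, 3.0, 3.0, 0.1]))  # bus_power_only
    print(choose_power_state([3.0] * PIN_COUNT))                # full_power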
Re: (Score:2)
Polyfuses for 8A on connector pins in a 2.54mm spacing? Yes, probably doable. If you do not mind the connector getting bulky and a _lot_ more expensive. And the Polyfuses heating stuff up themselves at this amperage. And polyfuses have rather large tolerances and work best as short-circuit protection. There is a reason nobody does that and the reason is not that all EEs are stupid. The reason is that, all things considered, the suggestion to do this is stupid.
Just as a sample, a 0ZRB0500FF1A costs about 50
Re: (Score:2)
No, it doesn't matter where you put the fuses, because current is constant throughout an entire circuit. All that matters is that you have one per lead. They can go in the PS, in the cable, or on the GPU PCB.
I'm open to it being done some other way, my point was that it's not necessarily complicated.
The cost of the polyfuses is small compared to the cost of the board.
Re: (Score:2)
You do not understand the problem. For this application (bad connectivity in the connector), you need a thermal coupling between the polyfuse and the connector pin. That works because polyfuses can also be heat-triggered when loaded near their trip-point. The contacts are _not_ loaded with too much current, hence a current trigger does not work at all. The contacts are badly plugged in and hence have too high a contact resistance, and that makes them heat up.
Re: (Score:2)
The contacts are badly plugged in and hence have too high a contact resistance, and that makes them heat up.
PC power supplies are constant voltage, not constant current, and the GPU is a resistive load, so no. (Fans are inductive, but they are also low current devices.) With constant voltage, increased resistance means reduced current. Whichever pins have the lowest resistance will experience increased current, and THEY will heat up.
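To put rough numbers on that argument, here is a quick Python model of a constant-voltage rail feeding a fixed resistive load through six parallel pins, with one pin making poor contact. The contact resistances and load power are illustrative guesses, not measured values.

    # Constant-voltage rail feeding a fixed resistive load through six parallel
    # pins; contact resistances and load power are illustrative guesses.
    V_RAIL = 12.0              # volts, held constant by the PSU
    R_LOAD = 12.0 ** 2 / 450   # ohms: a ~450 W resistive load at 12 V

    def per_pin_report(contact_resistances):
        r_par = 1.0 / sum(1.0 / r for r in contact_resistances)  # pins in parallel
        i_total = V_RAIL / (R_LOAD + r_par)
        v_contacts = i_total * r_par          # drop across the connector interface
        for r in contact_resistances:
            i = v_contacts / r                # current through this particular pin
            print(f"R={r*1000:6.1f} mOhm  I={i:5.2f} A  P={i*i*r:5.3f} W at contact")

    print("all six pins seated (5 mOhm each):")
    per_pin_report([0.005] * 6)
    print("one pin loose (500 mOhm):")
    per_pin_report([0.005] * 5 + [0.5])

With these assumed values the loose pin ends up carrying almost nothing, while the remaining pins each carry more current and dissipate noticeably more heat at their contacts — the redistribution described above.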
Re: (Score:2)
The misspelling is intentional. Yes, I do understand that is too complicated for you to understand.
Re: (Score:2)
Apparently, if you have a new compliant PSU that has native 12VHPWR connectors, it does actually have the ability to communicate with the dGPU and manage current accordingly. Whether or not it has built-in protections to change behavior based on the heat of the cable is not clear (to me anyway). Initial reports of melted connectors on the dGPU side have been from people using the four-prong adapters that let you use the standard 8-pin PCIe connectors in groups of four to power a 12VHPWR connector that goes into the card.
Re: (Score:2)
Why is the power not being cut when the cable gets too hot?
Are you asking why our cables are cables rather than expensive complex electronics containing sensors?
There are few connectors in the world that monitor their cable temperature. The situation is almost exclusively limited to situations where current demand may be higher than capacity of the cable. This isn't the case here, both connector and cable are adequately sized for the load.
We don't implement complex electronics simply because a small handful of users misuse their equipment.
Re: (Score:3)
After all, the card is a mere $1,599.
Question. I'm not a gamer, or someone who needs a high-end graphics card like this, so I don't have any experience with them. And while I imagine they do what they're supposed to do very well, that's a LOT of money for a graphics card. Perhaps it's worth it -- which is also a question -- but anyone got an idea on what the profit margin / mark up is on stuff like this?
Re:Seriously? (Score:5, Informative)
Re: (Score:3)
Thanks!
Everything above that is pure profit.
I think the technical term is "gravy". :-)
Re:Seriously? (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
The rest is not pure profit. The majority of the cost is in the R&D of the things.
Re: (Score:3)
Re: (Score:2)
Clearly they should give away the flagship models and only charge for the low end.
Re: (Score:2)
The same R&D is funded by everything because all the GPUs are based on the same designs, just with differing numbers of cores. They've been selling increasing numbers of the halo products ever since GPU compute became a thing. nvidia's enterprise compute business generated $3.83 billion or 64% of total revenue in Q3 22 [seekingalpha.com]. So if anything, you have it backwards.
Re: (Score:2)
The 24GB ram costs maybe another $100
Last price I saw for GDDR6X was ~$24/GB. That's a lot more than $100.
Re: (Score:3)
Everything above that is pure profit.
No. Profit is not everything above BOM, profit is what's left after the cost of production. You've ignored shipping, packaging, R&D, support, and staff overheads, all of which factor into the cost of producing a unit.
Re: (Score:3)
Honestly, almost nobody. The last-generation cards could play most AAA games at 4K resolution with the details turned up to high at 90 frames a second. This new card can play them at speeds around 130 frames a second... but most people can't tell the difference.
These cards are made for next-generation games that aren't available yet. COVID seems to have delayed the games that were supposed to be released in 2022 by a year.
Re: (Score:2)
These cards are also sold for GPU compute tasks. Plenty of us use our GPUs both for gaming and other types of computing. I can't imagine spending this much on a single card at the moment personally, but it's not the dumbest thing someone could do. You can spend that much on a CPU/MB/RAM combo without too much effort, if you buy a chip with a ton of cores for video editing or whatever.
Re: (Score:2)
Power supplies should protect themselves from per-pin overcurrent, but nvidia could literally solve this problem with some polyfuses.
Re: Seriously? (Score:4, Informative)
It's too much current on some pins because there's not enough current on other pins. If they just polyfuse them near to their maximum then they'll all overload when this happens and cut off. Since the voltage is constant and the pins that aren't connected properly have high resistance, they are carrying little current. But other pins have to take up the slack...
Re: (Score:2)
It's too much current on some pins because there's not enough current on other pins.
No. The scenario you are talking about would be a broken connection on one pin causing a high load on another. If instead you only half plug in the connector then all pins have increased resistance, voltage drop, and start heating up, all without any overcurrent on any of the pins.
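A two-line sanity check of that scenario, with illustrative numbers: hold the per-pin current fixed at a value within the pin's rating and only vary the contact resistance.

    # Same per-pin current, only the contact resistance varies; numbers are
    # illustrative guesses, not measurements.
    I_PIN = 8.0  # amps per pin, within the pin's rating throughout
    for r in (0.005, 0.020, 0.050):  # ohms: good, degraded, badly seated contact
        print(f"{r*1000:4.0f} mOhm contact -> {I_PIN**2 * r:4.2f} W dissipated there")

Dissipation at the contact scales linearly with contact resistance even though the current never exceeds the pin's rating.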
Re: (Score:2)
The scenario you are talking about would be a broken connection on one pin causing a high load on another. If instead you only half plug in the connector then all pins have increased resistance,
Oh, my sweet summer child. That's not how anything works. If you had ever worked with these pins and connectors then you would know that. If the connector isn't inserted all the way and the cable is pulling it one way or another then it absolutely makes contact more on some pins than others. Please, for the love of all that is factual, just hush up and let people who know things communicate facts.
Re: (Score:2)
Nope. It's an over current. That's where the heat comes from! High resistance leads to under voltage (at the device) and over current.
Same thing can happen from using wires that are too small for the load. In a household situation that will trip a breaker because it draws too much current.
Re: Seriously? (Score:2)
You might need to rethink what you are saying.
Increasing the resistance is unlikely to increase the current.
Re: (Score:2)
It all depends on the dynamics of the circuit. Any resistance across the plug is going to cause heat at that spot (as well as voltage drop). And if the load is constant in its demand, the power supply will ramp up to try to supply both the load and the power being dissipated as heat. Sooner or later it will melt the connection at that spot, possibly burning through and breaking the circuit. Which is what was happening here. So yes, definitely over current that could be detected by the power supply.
Re: (Score:2)
Probably should have used a different word than "constant." I just meant that if the load demanded so many watts, and if the voltage was sagging (which it is here), the current drawn would increase. It's probably more clear to say "non-linear load." With a non-linear load, as wire resistance goes up, the load starts demanding more and more current to bring up the voltage level.
Motors are bad offenders here as well. For example a vacuum cleaner or air compressor that would run fine on a 15 amp breaker plugged directly in the wall will start tripping the breaker when you use a longer, under-sized extension cord.
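For the curious, here is the constant-power claim as plain arithmetic: treat the GPU as a roughly constant-power load behind its VRMs and solve P = I * (V_supply - I * R) for the current as the series connector-plus-cable resistance grows. The supply voltage, load power, and resistances below are illustrative assumptions only.

    # Treat the GPU as a constant-power load behind its VRMs and solve
    # P = I * (V_supply - I * R) for I as the series resistance R grows.
    # Supply voltage, load power, and resistances are illustrative assumptions.
    import math

    V_SUPPLY = 12.0   # volts at the PSU output
    P_LOAD = 450.0    # watts the load keeps demanding

    for r in (0.001, 0.02, 0.05):  # ohms of total connector/cable resistance
        disc = V_SUPPLY ** 2 - 4 * r * P_LOAD
        i = (V_SUPPLY - math.sqrt(disc)) / (2 * r)  # smaller root = normal operating point
        print(f"R={r*1000:4.0f} mOhm  I={i:5.1f} A  lost in the connection: {i*i*r:6.1f} W")

Under these assumptions the current climbs from roughly 38 A to roughly 47 A as the connection degrades, and the heat dumped into the connection grows far faster than that.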
Re: (Score:2)
Motors are bad offenders here as well. For example a vacuum cleaner or air compressor that would run fine on a 15 amp breaker plugged directly in the wall will start tripping the breaker when you use a longer, under-sized extension cord.
The reason this happens is you need a critical level of current to start up the motor. While it’s stalled, like on startup, the A/C input isn’t limited by the inductance and back emf and is essentially a short circuit. When it’s at full speed, the inductance limits current. With a short cord the current threshold is reached quickly enough that the motor starts up and runs and does not pull enough current long enough to trip the breaker. With a cord that is too long or thin, the
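A back-of-envelope illustration of that motor-start point, with entirely made-up numbers: a stalled motor looks like a low impedance, cord resistance drops the voltage at its terminals, and starting torque falls off roughly with the square of that voltage, so the high-current start phase drags on.

    # Stalled motor approximated as a low fixed impedance; starting torque taken
    # as roughly proportional to the square of the voltage at the motor terminals.
    # All numbers are made up for illustration.
    V_LINE = 120.0   # volts
    R_LOCKED = 2.0   # ohms, assumed locked-rotor impedance of a small motor

    for r_cord in (0.05, 0.5, 1.0):  # ohms: short cord vs. long undersized cord
        i_start = V_LINE / (R_LOCKED + r_cord)
        v_motor = i_start * R_LOCKED
        torque = (v_motor / V_LINE) ** 2
        print(f"cord {r_cord:4.2f} ohm: start current {i_start:5.1f} A, "
              f"starting torque ~{torque:4.0%} of nominal")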
Re: (Score:2)
Nice explanation, but no, that's not what I'm talking about. I mean under regular use, not during starting. Read up on non-linear loads. It's a real phenomenon. Even if voltage is reduced from the line resistance, a motor will pull whatever current is needed to power its load, hence the rise in current. There are other types of non-linear loads besides motors that do the same thing.
In fact recently I had a 35 hp motor that would start and run fine for hours and then it would trip the over-current breake
Re: (Score:2)
, a motor will pull whatever current is needed to power its load, hence the rise
This isn’t a very good way to explain power factor and harmonics, if that’s what you’re talking about. Yes, the motor and load are essentially inseparable, but a better way to say it in this case, in my opinion, is to say that a motor, its load, and the supply are all part of a balanced system, and harmonics and balance are just as important on the electrical side as the mechanical side. Just as a mechanical resonance in your load can cause excessive vibration and damage, so can the identic
Re: Seriously? (Score:4, Insightful)
Computer power supplies are constant voltage, not constant current. They don't "ramp up" to compensate for a high resistance connection. And no, they don't ramp up to compensate for a voltage drop either, because they monitor the voltage they're providing.
Re: (Score:2)
Yes we're in violent agreement, then, even though you're contradicting yourself. The power supply does indeed "ramp up" to provide whatever current is necessary to maintain the voltage it's set to. It meets whatever the load demands. Thus when the plug is heating, the GPU is still demanding a certain amount of current, even though the voltage has sagged across the bad connection.
Now tell me, how does the power supply know how much current to supply? It's based on voltage. As the voltage drops, the curre
Re: (Score:2)
I guess you were in such a hurry to "agree" with me that you missed my third sentence, which I specifically wrote to head this off.
The computer power supply supplies 12 volts *at its output*. The voltage drop you're talking about is across the connector to the GPU. The power supply doesn't know anything about that, and even if it did, it wouldn't do anything about it.
Re: (Score:2)
Yep, as ceoyoyo has said, the power supply provides a constant voltage, and it's not a four-wire sense either, so it only regulates the voltage at the output of the supply, not at the load; the power supply doesn't react to any of this. The problem is that as the resistance of the connector increases, the voltage across the connector increases (E=I*R; as R increases, the voltage increases). This increased voltage across the connector, for the same current as before, will increase the power dissipation in the connector.
Re: (Score:2)
It's not. No one uses "overcurrent" to describe this. Over current is the delivery of current beyond spec, that's not what is happening here. No one ever uses "over-current" to describe heating due to a connector mating incorrectly.
High resistance leads to under voltage (at the device) and over current.
You're making assumptions about the device that are not always correct. Given the current through the connector, the voltage drop across it is borderline insignificant. In fact the resulting change in current would still be well within the limits of the mated connector.
So the headline is decidedly mocking (Score:5, Insightful)
But then it appears the company is likely correct:
"GamersNexus, an outlet that's respected in the PC-building community for its rigorous testing, basically came to the same conclusion earlier this week. A video posted on Wednesday by the outlet, which inspected damaged adapters sent in by viewers and done extensive testing and reporting on the issue, showed that the connectors had wear lines, implying that they hadn't been completely inserted into the slot. GamersNexus even says that some people seem to have missed a full connection by several millimeters."
So the mocking tone of the headline is just an attempt at clickbait, which I guess shouldn't be a surprise - given what we've seen from the Verge in the recent past.
Re:So the headline is decidedly mocking (Score:5, Insightful)
Except the connector can come out those several mm while doing cable management. The retention clip is crap, with little to no feedback, and there's no visual difference between clipped and not clipped. The "solution" is to plug the cable into the card before socketing the card, and rock the power cable firmly back and forth to make sure it is secure. It's a shit design.
Re:So the headline is decidedly mocking (Score:5, Insightful)
Exactly my thought. If I just bought a $1,600 GPU, I'm not gonna hammer that bitch in. As soon as I feel any kind of resistance or haptic feedback, I would assume that extra force is unnecessary or even harmful. A good old *click* with a bit of plastic to lock the cable in would have done wonders in this case.
Yes, hindsight is 20/20. But this seems like basic UX design to me.
Keep the user informed.
Re: (Score:2)
UI would be about good design, haptic feedback, and not blaming the user.
UX is about more LEDs and go-fast stickers.
Re: (Score:2)
The connector does latch. It's the same fundamental mating design we've been using for 20 years. It sounds like the problem is limp-wristed people thinking that just because something is expensive, they shouldn't push as hard.
Push that connector in just as hard as you would on a $10 GPU and you will be fine, and so will your GPU.
Funny story, I have had a connector catch fire as well. Years ago. It also wasn't seated correctly. But it didn't make international news because it wasn't a new connector design and I wasn't attempting to blame my incompetence on a vendor.
Re: (Score:2)
Funny story, I have had a connector catch fire as well. Years ago. It also wasn't seated correctly. But it didn't make international news because it wasn't a new connector design and I wasn't attempting to blame my incompetence on a vendor.
This isn't a new connector design. Molex Mini-Fit Jr. has been around for decades. What's new is that nvidia is trying to deliver an unprecedented amount of current per pin, without any circuit protection whatsoever.
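The "unprecedented amount of current per pin" part is easy to put numbers on; the only assumptions below are the commonly cited 600 W ceiling for the 12VHPWR connector and its six 12 V supply pins.

    # Per-pin arithmetic for the connector; the 600 W figure is the commonly
    # cited ceiling for 12VHPWR, and six 12 V pins carry the supply current.
    POWER_LIMIT_W = 600.0
    RAIL_V = 12.0
    SUPPLY_PINS = 6

    i_total = POWER_LIMIT_W / RAIL_V
    print(f"{i_total:.0f} A total, {i_total / SUPPLY_PINS:.1f} A per pin")
    # ~50 A total, ~8.3 A per pin -- little headroom against the single-digit-amp
    # ratings typically quoted for small power terminals of this kind.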
Re: (Score:2)
The mocking tone of the headline primarily comes from TFS.
Re:So the headline is decidedly mocking (Score:5, Insightful)
The mocking tone of the headline primarily comes from TFS.
It was me. I just noted it here [slashdot.org] as an homage to Apple's response to the iPhone 4's poor reception, "you're holding it wrong", which was also technically correct, but ridiculous.
Re:So the headline is decidedly mocking (Score:4)
NVIDIA's quality control should have caught this, though, and put measures in place to prevent it. Especially for such an expensive product, I can understand the frustration.
I've been building and selling PCs for over a decade, and I don't think I would be above making such a mistake.
Re: (Score:2)
These cards represent a new, uhhh... bar... in PC component loads.
These fuckers are drawing near 40 amps. You've never had to deal with the kind of careful you need to be when connecting up 40 amps.
Part of it is probably NV's fault for not explaining to people that these take a bit more care than the stuff you're used to and understand.
Part of it is definitely the fault of hubris on the side of installers.
Re: (Score:2)
With 12V*40A you get 480W in total. The same 480W total can be accomplished by 24V*20A while the resistive losses are kept lower.
Perhaps only do so up to the VRMs on the card, which then convert it down to safer levels. But of course that would require new power supplies, because as far as I know contemporary ATX power supplies either can't deliver +24VDC or can only do so at bad efficiency.
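A quick comparison of what the higher-voltage argument buys, assuming the same delivered power and an arbitrary 20 mOhm of total cable and contact resistance: loss in the path scales with the square of the current, i.e. inversely with the square of the rail voltage.

    # Same delivered power, same assumed 20 mOhm total cable/contact resistance,
    # different rail voltages; resistive loss scales as 1/V^2.
    POWER_W = 480.0
    R_PATH = 0.020  # ohms, illustrative round-trip resistance

    for v in (12.0, 24.0, 48.0):
        i = POWER_W / v
        print(f"{v:4.0f} V rail: {i:4.1f} A, {i**2 * R_PATH:5.1f} W lost in cable/contacts")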
Re: (Score:2)
Stepping the voltage back down means more active electronics and heat.
I think an easier option is to just use more fucking power delivery cables.
These aren't the first 400+W GPUs. They're just the first meant for home consumers.
AMD's 400W pro cards have their power delivery over a second PCIe-like connector, so that the per-conductor power delivery is tiny.
Someone at PCI-SIG thought it'd be a good idea to design a connector to
Re: (Score:2)
They should make GPUs that don't require 600W.
This is what happens when they haven't been able to engineer the expected efficiency boost. Just throw more transistors at it to ensure that it performs better than the competition and last year's model. Figure out how to supply it with 600W.
Re: (Score:2)
It does draw so much power and thus current because it's stupidly overclocked out of the box, so that they can squeeze out the last half percent of performance that places them high on some synthetic bullshit chart, while just keeping it below the threshold of burning your house down.
That's the new trend for new hardware released which you can also see with the latest Intel CPUs as well as AMD Ryzens. If you manually lower their power target a bit you can drastically re
Re: (Score:3)
But then it appears the company is likely correct:
So the mocking tone of the headline is just an attempt at clickbait, which I guess shouldn't be a surprise - given what we've seen from the Verge in the recent past.
Actually, I added the "plugging it in wrong" bit as an homage to the Apple iPhone 4 reception issue where Apple (basically) asserted people were holding the phone wrong -- which was technically correct given the design of the antenna, but ridiculous from an end-user perspective.
Re: (Score:2)
which was technically correct
The best kind of correct. [youtu.be]
Okay, I withdraw this particular criticism of the Verge.
Re: (Score:2)
Yes and no... the problem is that the cable is hard to get in in the first place, and then it can come out during cable management, especially if not fully clipped. Which means you might have a cable that looks properly inserted at first glance but is not fully clipped; then some mechanical load on the cable moves it a few mm and suddenly it burns... This is clearly a design problem, as GN also stated; there should be some safety mechanism in place that instantly switches off the connection. The proble
seriously ? (Score:3)
all NVidia has to say is "you're doing it wrong" ?
well, no problem, AMD has a decent offer for about 1/2 the price... off there goes my money
Re:seriously ? (Score:4, Insightful)
all NVidia has to say is "you're doing it wrong" ?
well, no problem, AMD has a decent offer for about 1/2 the price... off there goes my money
It's not just NVIDIA saying it, third parties are agreeing - there's an example in the summary.
Re: (Score:2)
Re: (Score:2)
AMD has a decent offer for about 1/2 the price
If only that was true. Some of us actually use the features that separate NVIDIA from AMD. But if you're just a casual gamer then you'd be mad not to choose AMD right now.
Re: (Score:2)
Which AMD card has a 12VHPWR connector?
https://www.pcmag.com/news/amd... [pcmag.com]
None of the RDNA2 cards or earlier have it. RDNA3 won't have it.
Still bad design (Score:5, Informative)
Re:Still bad design (Score:5, Informative)
That's what I was thinking and apparently that may have already been acknowledged
PCI-SIG now considering changes to problematic 12VHPWR connector [tweaktown.com]
Re: (Score:2)
Good, working as it should.
Gotta get more spazzes and kindergartners in the QA lab.
Re: (Score:2)
This is going to be a big pain for existing PSUs which already have the 12VHPWR as part of the ATX 3.0 standards.
Not to mention the current 4090 nvidia cards.
Will that mean that an ATX 3.1 or some other new standard with the new / updated connector is forthcoming?
And will ATX 3.0 PSU vendors be supplying adapters for the upcoming connectors? Or are PSU vendors shipping the new upcoming connectors going to provide adapters to downconvert to the 4090 / current 12VHPWR connectors?
This will be an inter
Re: (Score:2)
One of the fixes I have seen mentioned (maybe on GN?) was to shorten the connector so that if it isn't fully inserted, it doesn't make the connection. This would be compatible with the existing stuff, without significantly changing things. The female side (power supply) would have the dimples closer together, while the male side (video card) would stay the same.
Re: (Score:3)
Maybe it’s time to ditch 12 volts and bump it up to 48.
Re:Still bad design (Score:5, Funny)
640kV ought to be enough for anybody.
Re: (Score:2)
640kV ought to be enough for anybody.
These jokes never get old.
Re: (Score:2)
Can't get the automotive world on board, what makes you think the PC world wants any part of that too?
Re: (Score:2)
Maybe it’s time to ditch 12 volts and bump it up to 48.
Why? That doesn't solve the problem of improperly mated connectors, and currently the connector is designed to provide nearly double the amount of power that is being drawn. We're far from any limit that would necessitate changing supply voltage. Additionally, if you do that it creates additional complexities for the peripherals, which need to cope with a bigger jump. It's harder to regulate 48V down to 1.3V than it is 12V.
Re: (Score:2)
Voltage goes up 4x and the current goes down 4x. You don’t have to worry about milliohms of contact resistance heating up either. For decades everything in the telecom world ran 48 volts.
Re: (Score:2)
That's exactly what I said in the last discussion of this story. This new connector is associated with a new power supply standard, and that standard could have included a 48V rail... but it didn't. Instead they just said fuck it, we'll do more 12v with more leads. But running multiple leads safely requires independent current protection on every lead, and all most power supplies have is per-rail overcurrent protection (if that.) The assorted ATX standards we were using already fucked this up, there are six
Re: (Score:2)
Problem is that you have far fewer choices with your power electronics to handle 48V. Many devices that can handle 12V exist, far fewer that are 48V capable.
It would also be a new voltage that PSUs need to generate. The industry is moving towards just 12V for everything already, removing the 5V, 3.3V, and -5V supplies.
Re: (Score:2)
If it is that easy to plug it in wrong, it is still bad design. No excuse.
It's not that easy to plug in wrong. It's the same design we've been using for 20 years, just with a couple of extra pins on the top. The only thing new is the size of the card and people cramming them into undersized cases and putting extreme stress on the connector.
Cables have been inserted incorrectly for decades in PCs. The only thing here is *OMG NVIDIA MADE A NEW CONNECTOR IT'S THEIR FAULT NOOOT MIIINEEEE*
Re: (Score:2)
Re: (Score:2)
You're both right. It's not a very good connector, AND you should take care when connecting it. There's no reason not to replace it with a better connector, it's only used on high dollar products anyway.
Re: (Score:2)
It's funny that you think the "pros" would do any better. Prebuilts tend to get sloppy outside of the highest priced vendors. Indeed, I would expect they are more likely to leave it unclipped, but since they tend to do the bare minimum regarding cable management, the cable isn't worked out those several mm necessary to create the issue.
As I said the first time (Score:2)
It is people messing things up. Sure, the connectors are loaded close to maximum and you should probably not do that for something that is getting installed by amateurs, but technically Nvidia uses the part within spec.
Outside the box (Score:2)
Re: (Score:2)
moving these monsters externally seems reasonable.
Unfortunately, the current trend for external PCI-E devices is to use Thunderbolt connections from the host to the external enclosure. (Mostly for Apple devices that lack an internal slot.) Other mechanisms exist [amazon.com], (mainly just reuse of existing DVI or USB 3.0 cables to carry the raw PCI signaling) but those are relatively few in number. The most affordable (around $60-70 on Amazon [amazon.com]) are cheap connectors mounted in a 3d printed project box without any shell or support structure for the card itself. (No PCI-
Re: (Score:2)
It's time to start designing these extremely power hungry video cards to sit in external boxes.
Why? Just to be wasteful? In any case, you do not need to do anything to a video card to make it sit in an external box, except follow a standard. All video cards already do this, so it's done.
This would solve power issues as it would have its own dedicated supply
Nonsense. The power issue with these cards is a connector issue, and you're going to have the same connector if you use an external enclosure.
and reduce the heat in game systems
This is a non-issue. Fans move the heat out of the case without any trouble. Managing case heat is not a problem, it's not even difficult except in small cases, and even then al
Poorly designed plugs (Score:2)
British plugs have insulted electrical tape half way up the live and neutral pins. This makes shorts and fires much less likely.
If you have, for example, an american plug that is only half way plugged in, a small piece of metal (dime, paperclip, etc) can fall down and connect the live and neutral pins, causing a short.
Any plug that is having issues when it is only partly seated should be redesigned to prevent this issue.
Remember, always design your devices for morons that failed first grade, not for electr
Re: (Score:3)
What happens when you insult the electrical tape? Does it challenge you to a duel?
Re: (Score:2)
Re: (Score:2)
Apparently, they did. The actual connector is a new type. The problem seems to be with the adapter they put in so people wouldn't scream that they needed to upgrade their power supply as well.
Re: (Score:2)
That's just the first few problems they noticed. Then people with new PSUs that have native 12VHPWR connectors reported problems:
https://www.digitaltrends.com/... [digitaltrends.com]
Re: (Score:2)
The problem seemed to resolve itself within a few microseconds.
Re: (Score:2)
Any plug that is having issues when it is only partly seated should be redesigned to prevent this issue.
No plug in the world that provides power (including British plugs) can deal with incorrect seating unless it includes active thermal monitoring (as some EV charging cables do). British plugs are capable of arcing and starting a fire if you seat them incorrectly. The insulation *exclusively* prevents short circuits from the paperclip scenario you mentioned.
This scenario is not what is happening with these GPUs.
Re: (Score:2)
...which would in turn, within milliseconds, trip the safety breaker in the GFCI-protected outlet or circuit that is basically required in most modern American house designs, particularly in high-risk areas.
You do not know what GFCI does or how it works. It compares the live and neutral, and if the same amount of current is not flowing through both, the imbalance produces a voltage which causes a trip. GFCI does nothing to protect against overcurrent [stackexchange.com], that's the circuit breaker's job.
You're also free to install GFCI upgrades to make older homes a bit safer.
Yes, they do that, but not for any reason you imagined.
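A minimal sketch of the residual-current check described above: the GFCI compares live and neutral current and trips on the difference, not on the magnitude of either. The 5 mA threshold is the usual ballpark for US personal-protection devices; exact figures vary by device class.

    # A GFCI trips on the difference between live and neutral current, not on
    # the magnitude of either. 5 mA is the usual ballpark for US
    # personal-protection devices; exact thresholds vary by device class.
    TRIP_THRESHOLD_A = 0.005

    def gfci_trips(i_live, i_neutral):
        return abs(i_live - i_neutral) > TRIP_THRESHOLD_A

    print(gfci_trips(12.0, 12.0))    # heavy but balanced load: no trip
    print(gfci_trips(12.0, 11.99))   # ~10 mA leaking to ground: trip
    print(gfci_trips(30.0, 30.0))    # overcurrent alone: the GFCI does not care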
It isn't just the adapters (Score:2)
People first noticed problems with the 4-prong PCIe 8-pin adapters included with 4090 cards intended to allow people to use older PSUs that have enough juice but don't have the new 12VHPWR connector. It's not hard to see how a 4-prong adapter would have problems staying seated in the card.
However, now there have been reports of newer PSUs with native 12VHPWR connectors melting in 4090s:
https://www.digitaltrends.com/... [digitaltrends.com]
Looks like the entire connector design is bunk.
Re: (Score:2)
Looks like the entire connector design is bunk.
It's definitely... optimistic.
The current being carried by these is non-trivial.
Nothing short of some kind of tightened locking mechanism is going to stop these things from melting, and even then, sometimes some are going to melt anyways because of damage to the prongs or corrosion.
There's zero fucking room for extra resistance. The connector is designed to get hot even with a perfect connection.
Frankly, sending this much current through connectors with only 12 conductors needs to be carefully re-thought.
Nah (Score:2)
Re: (Score:2)
That can't even plug in a power cable with enough thrust to seat the connector.
I wish I could +1 your post.