The Courts IT Technology

Nvidia Hit With Class Action Suit Over Melting RTX 4090 GPU Adapters 45

A frustrated owner of an RTX 4090 graphics card, suffering from the infamous melty power connector problem, has filed a class action suit against Nvidia. From a report: Filed in a California court on November 11th, the suit may make for painful reading for Nvidia and includes numerous allegations from fraud to unjust enrichment. The case refers to widely reported instances of the new-style 16-pin power connector used by Nvidia's GeForce RTX 4090 boards overheating and melting under heavy load. Reportedly, the lawsuit claims that Nvidia sold RTX 4090s with, "defective and dangerous power cable plug and socket(s), which has rendered consumers' cards inoperable and poses a serious electrical and fire hazard for each and every purchaser." It's notable that the claimant, one Lucas Genova, describes himself as "experienced in the installation of computer componentry like graphics cards," thereby aiming to head off any implication of user error at the pass.
  • User error (Score:4, Insightful)

    by blackomegax ( 807080 ) on Thursday November 17, 2022 @05:28PM (#63059210) Journal
    The forensic evidence that Gamers Nexus just posted has proved that every cable melt has been user error. As long as it's plugged in firmly, it can shunt all 600W over a single pin safely. It's when it isn't seated correctly that it melted.
    • Nope, they only proved that a single device could.

      It was an outlier that was safer than expected, is all. The real trouble is all the other devices, right?

      • It was an outlier that was safer than expected, is all.

        Nope. It was exactly as safe as expected, meeting the specification exactly: a specification with a significantly higher power capability than the cards in question.

        When the connector vendor provides a specification, the GPU vendor designs to that specification, PCI-SIG signs off on the specification, and torture tests validate the specification, it ceases being an outlier.

        • That's like saying Samsung had no explosive batteries problem because the one you tested didn't explode.
          • That's like saying Samsung had no explosive batteries problem because the one you tested didn't explode.

            No. It's saying nothing of the sort. Go back and read my post. You don't even need to understand what I said, you just need to count the number of subjects in my post to realise it doesn't have anything to do with checking any single sample.

    • The forensic evidence that Gamers Nexus just posted has proved that every cable melt has been user error.

      That is not how anything works.

      As long as it's plugged in firmly, it can shunt all 600W over a single pin safely.

      As long as by "it" you mean "the one card they tested", sure.

      It's when it isn't seated correctly that it melted.

      So you have some evidence that the plugs weren't seated correctly on the cards that melted in the wild? Or Gamers Nexus does? That would be relevant, unlike everything you've said so far.

      The problem with Molex Mini-Fit Jr. connectors, which is what all of the ATX-specific power connectors are, is that they are really fiddly and fragile. It is seriously easy to get the contacts fucked up, and even when the connector i

      • Yeah I wish those stupid connectors were easier to repin. I wouldn't mind getting my own and making my own cables out of high gauge copper.
      • GM even stated that the connector can be a problem in end users' hands, because it is too hard to get full contact, and many users won't even notice, or with cable bending will reach a critical distance because it is not 100% plugged in. This is definitely a broken design!

        • Re:User error (Score:5, Insightful)

          by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday November 17, 2022 @07:33PM (#63059570) Homepage Journal

          Well, I've connected and disconnected a whole lot of these connectors, as I imagine a lot of us have around here, and I have noticed the opposite. The locking clip on the connector is extremely positive. It is super obvious whether it is locked or not. Almost every time you seat a mini-fit Jr. connector there is an audible snap, and also, what kind of tool doesn't pull on a locking connector after seating it to make sure that it's locked? These connectors are enormously common outside of computing as well; for example, they are typically used for RV fridge brow boards and these days sensor wires as well, and I have seen them on some car stereos and some CB radios... They are not commonly bursting into flames because they are not commonly expected to carry high current. That's why there are two +12 lines and 3 +5 lines in the traditional ATX connector, then two more +12 lines on the +4 connector, and then 2-4 MORE lines on modern computers going to the separated 12V connector near the voltage regulator... and that's just for a ~150W CPU and any expansion cards that don't have their own power connectors.

          The 4090 can draw over 350W, and it only has six leads for 12V (plus the six grounds) where the motherboard has 8 leads for 12V even though it typically draws a lot less. This just doesn't make any sense. But what makes even less sense is not having overload protection on every pin. If you need to run multiple conductors to something in a DC system because you don't have heavy enough wire, or you can't fit it, or it can't make a bend or whatever, you fuse every lead separately, even if they all go to the same terminal. That way you never have a failure (cut wire, whatever) which results in wiring carrying less current than recommended. By contrast, nvidia doesn't seem to have included any overcurrent protection. You can get surface mount polyfuses in 0805 size (2.0 mm × 1.2 mm) that will handle up to 40A, and they are carrying only about 5.5A per pin, so it's hard to imagine that they couldn't implement some protection. What's more, Molex literally does not commit to a current rating on these connectors [rs-online.com]! "The ratings listed in the chart above are per Molex test method based on a 30°C maximum temperature rise over ambient temperature and are provided as a guideline." (emphasis mine)

          My takeaway from all this, whatever the problem turns out to be, comes down to three things. First, these connectors suck ass; I have always hated everything about them. Second, the new ATX spec just has more 12V, and that's a huge fucking miss: if we're going to get cards that need this much power, they need more voltage as well. And third, if they are going to have multiple pins, they should have current protection on each pin.
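The per-pin arithmetic in the comment above can be sanity-checked in a few lines. This is a rough sketch: the six-pin count matches the 12VHPWR layout described above, but the contact resistance values are illustrative assumptions, not Molex figures.

```python
# Per-pin current and I^2*R heating for a 12VHPWR-style connector.
# Contact resistance values below are illustrative assumptions.
RAIL_V = 12.0    # supply rail voltage
POWER_W = 600.0  # connector's rated power
N_PINS = 6       # 12V current-carrying pins

total_a = POWER_W / RAIL_V        # 50 A total
per_pin_a = total_a / N_PINS      # ~8.3 A per pin if shared evenly

def pin_heat_w(current_a, contact_mohm):
    """I^2 * R dissipation in a single contact, resistance in milliohms."""
    return current_a ** 2 * (contact_mohm / 1000.0)

good_contact = pin_heat_w(per_pin_a, 6)    # assumed healthy contact: under 1 W
bad_contact = pin_heat_w(per_pin_a, 50)    # assumed degraded contact: several W
```

With those assumed values a healthy pin dissipates a fraction of a watt, while a partially mated contact an order of magnitude higher in resistance dissipates several watts inside a small plastic housing, which is why seating quality matters so much more than the nominal rating.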

          • and also, what kind of tool doesn't pull on a locking connector after seating it to make sure that it's locked?

            Clearly you haven't been around people that long. If there's one thing certain it's that any amount of extra effort to do anything is shunned. Is the plug in? Let's rock!

          • I agree with everything you posted, but raising the voltage coming out of the power supply in order to deliver more wattage gives you a chicken / egg problem - we now need to replace every power supply when someone buys a new GPU.

            Granted, if you're buying a RTX 4090 you probably don't give a shit about buying a $150 power supply with it, but these problems won't always just be something that happens at the extreme high end - eventually we'll have midrange cards that require the higher voltage as well and no

            • Yeah, the 4090 is already going to "require" a new PS even for a lot of power users, because some cards' specs are calling for a 1200W unit...

      • That is not how anything works.

        Actually it is. Forensic analysis is precisely how you investigate failure. Now you may want to shit on Gamers Nexus, but before you do, note that they didn't do the analysis themselves; they sent it to a qualified third-party lab. Specifically, they sent the lab failed connectors that users sent in.

        As long as by "it" you mean "the one card they tested", sure.

        The one card they tested is within specification. The specification of the cables wasn't exceeded. The vendor specifications for cables are given with certification and verified by PCI-SIG. Additionally with the spec

    • Re: (Score:3, Informative)

      by cats-paw ( 34890 )

      600W over a single pin? LOL. You don't shunt 600W over a pin. 600W is 600V @ 1A, and it's also 6V @ 100A; those are VERY different conditions. You push current through a pin. That current and the resistance of the pin determine the power dissipation of the pin and tell you everything you need to know about whether it will melt.

      There is a LOT of current in those pins and NVIDIA damn well knew that.

      It shouldn't work at all if it's not plugged in properly.

      Making sure it has a good connection is as easy as putting i

      • by Anonymous Coward

        There are 8 pins that the power is distributed over. It is 12V at 600 watts, which is 50 amps. Each pin is half the standard 1.27mm width.

        Assuming copper, this is only about a 4mV voltage drop, dissipating 210 milliwatts of heat over all 8 pins.
        Even with the failure of a single pin, this shouldn't be anywhere close to enough heat to melt the plastic, let alone cause the damages seen.

        There is something more going wrong with these connectors than a simple claim that the pins are too small.

        The connector might
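The estimate in the comment above can be reproduced directly. Note this keeps the commenter's own assumptions (8 power pins and a ~4 mV contact drop), even though the 12VHPWR layout described elsewhere in the thread puts the 12V load on 6 pins.

```python
# Reproducing the connector-heating estimate from the comment above.
# 8 pins and a 4 mV per-contact drop are the commenter's assumptions.
POWER_W, RAIL_V, N_PINS = 600.0, 12.0, 8
DROP_V = 0.004                              # assumed voltage drop per contact

per_pin_a = POWER_W / RAIL_V / N_PINS       # 6.25 A per pin
total_heat_w = per_pin_a * DROP_V * N_PINS  # ~0.2 W across the whole connector
```

Roughly 200 mW spread over eight contacts is nowhere near enough to melt a nylon housing, which is the commenter's point: a healthy connector runs cool, so the observed failures imply contact resistance far above nominal.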

      • is still bad engineering on NVIDIA's part.

        NVIDIA didn't engineer any part of this. It's the same connector pin design we've been using for 3 decades.

        • by Anonymous Coward

          And therein lies the problem. They should have changed the connector type to handle the increase in power rather than using the same connector.

          • And therein lies the problem. They should have changed the connector type to handle the increase in power rather than using the same connector.

            No. There's nothing wrong with the connector's power handling. It literally can handle over double the power this card pulls when mated properly.

    • What they proved was that poorly seated cables and cables with the wrong sort of foreign object debris from poor manufacturing in them could cause the melting. The former is obviously much more common than the latter, but the latter is very possible. In any case, both the "user error"(which with a good clip system or even properly designed sense pins would not be an issue) and FOD issues are because of poor cable design. That being said, it's the initial spec itself that is a large part of the problem.

    • by ljw1004 ( 764174 )

      The forensic evidence that Gamers Nexus just posted has proved that every cable melt has been user error. As long as it's plugged in firmly, it can shunt all 600W over a single pin safely. It's when it isn't seated correctly that it melted.

      And if you design a product which has a higher than reasonable incidence of user error, moreover a user error that a reasonable and proficient user wouldn't realize? -- class action.

      • If all the user has to do is click the connector in fully, when it has a clip on it clearly indicating that it is intended to click in and be held securely to the card, there is no legal escape for users. Nvidia is legally protected from user stupidity.
    • by Z80a ( 971949 )

      I suspect many cases could be the lid of the case itself pressing the cable loose, given how big of a turn you have to make to even fit the card in most cases.
      If the latch actually held on to something and the connector angle was less dumb, you wouldn't get badly fitted connections.

    • by fazig ( 2909523 )
      Yes, it's good that they finally put a stop to all the speculation with the first solid evidence for how it occurs.
      Regardless, for a graphics card that expensive you can expect the manufacturer to implement some mechanism that makes sure the power plug is plugged in correctly before allowing so much current to flow through a single pin. At least you could expect them to put up some serious warnings.
      Maybe there are warnings I'm not aware of (while I would want one of these GPUs for all production wor
    • by tlhIngan ( 30335 )

      The forensic evidence that Gamers Nexus just posted has proved that every cable melt has been user error. As long as it's plugged in firmly, it can shunt all 600W over a single pin safely. It's when it isn't seated correctly that it melted.

      The problem is there is no feedback indicating the plug is correctly plugged in. So the user might not notice it is not completely plugged in, and this is a fundamental design flaw.

      It's several design flaws, actually - because the design shouldn't be reliant on every pin m

    • Re:User error (Score:4, Insightful)

      by Waccoon ( 1186667 ) on Thursday November 17, 2022 @10:54PM (#63060018)

      I don't recall hearing about melting GPU power cables before, so clearly we've reached a point where this needs to be addressed. Contrary to how geeks may feel, blaming the user is never a solution.

      You really, really have to push hard to get those cables to seat correctly, and even when it's flush that doesn't mean it won't pop off if you pull on the cable sideways. They're also always mounted in such a place that they're impossible to get off when they are properly locked in place.

      It's a bad design and insufficient for such high power loads. I always hated these ATX-style connectors and the flimsy tabs that hold them together, and we really could use a newer design.

    • it can shunt all 600W over a single pin safely

      600W at 12V means 50A. You would need 6 AWG wire to safely carry 50A (typically rated for 55A, more or less depending on insulation). The cables are 14 AWG wire, which is good for about 15A.

      As for the pins themselves, I don't know what 12VHPWR is rated for; the pins are probably not the weakest link in these adapter cables.
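The gauge arithmetic in the comment above checks out; a quick sketch, using common chassis-wiring rule-of-thumb ampacities rather than figures from any single standard.

```python
# Ampacity check for the 12VHPWR adapter-cable argument above.
# Ratings are rule-of-thumb chassis-wiring values (assumptions).
AMPACITY_A = {6: 55.0, 14: 15.0}   # assumed safe continuous amps per AWG

POWER_W, RAIL_V, N_LEADS = 600.0, 12.0, 6
total_a = POWER_W / RAIL_V                        # 50 A total
per_lead_a = total_a / N_LEADS                    # ~8.3 A per 12 V lead

single_6awg_ok = AMPACITY_A[6] >= total_a         # one 6 AWG wire could carry it all
per_lead_14awg_ok = AMPACITY_A[14] >= per_lead_a  # each 14 AWG lead has margin
```

Each lead carries well under its rule-of-thumb rating, so the copper itself has margin; the open question the comment raises is the rating of the pins and contacts.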

  • Put "GPU adapters may melt." in product notes/documentation as a "feature". :-)

  • While this case has been widely reported, I've not heard of any other similar incidents so far. In order for a suit to be given class action status, the plaintiff needs to prove that numerous people have suffered harm or loss, not just one or a small handful. Even if this wasn't user error, and was in fact a faulty cable, this guy is going to need to identify a bunch more people whose cables have failed in the same or similar ways if he's to get the court to treat it as a class action.

    • by nickovs ( 115935 )

      (And before someone jumps on me to RTFA, yes, he cites a Reddit thread of me-too posts, but the complaint appears to only name one person, himself.)

  • seems to be the problem. For what those cards sell for they should come with a proprietary PSU and far more robust hardware.

    • No, four standard 8-pin PCI-E connectors would have worked just fine. But that would have required standard length PCBs to fit them all which wouldn't have let nVidia have their nifty pass through cooler design. So they shoved a shitload of power through an inferior connector design.

  • We use lots of Mini-Fit standard and Jr connectors.
    We have had the standard ones fail at 5A continuous.
    I wouldn't want to push more than 3A through a Jr pin on a 16-way connector.
    8 * 3A * 12V = 288W.
    That's pushing the friendship for 500W loads!

    Never been a fan of current sharing across parallel pins in connectors. They should have used a properly rated connector where 2 pins could take the current.
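The derating budget in the comment above, parameterised; the 3 A per Jr pin figure is the commenter's own conservative limit, not a datasheet rating.

```python
# Power budget for a connector at a conservative per-pin derating.
def connector_budget_w(n_power_pins, amps_per_pin, rail_v=12.0):
    """Total deliverable power given a per-pin current limit."""
    return n_power_pins * amps_per_pin * rail_v

budget = connector_budget_w(8, 3.0)   # 8 pins * 3 A * 12 V = 288 W
```

At that derating, a 500-600 W load leaves no margin at all, which is the comment's point.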
