Tesla Is Building Its Own AI Chips For Self-Driving Cars (techcrunch.com) 157

Yesterday, during his quarterly earnings call, Tesla CEO Elon Musk revealed a new piece of hardware that the company is working on to perform all the calculations required to advance the self-driving capabilities of its vehicles. The specialized chip, known as "Hardware 3," will be "swapped into the Model S, X, and 3," reports TechCrunch. From the report: Tesla has thus far relied on Nvidia's Drive platform. So why switch now? By building things in-house, Tesla says it's able to focus on its own needs for the sake of efficiency. "We had the benefit [...] of knowing what our neural networks look like, and what they'll look like in the future," said Pete Bannon, director of the Hardware 3 project. Bannon also noted that the hardware upgrade should start rolling out next year. "The key," adds Elon, "is to be able to run the neural network at a fundamental, bare metal level. You have to do these calculations in the circuit itself, not in some sort of emulation mode, which is how a GPU or CPU would operate. You want to do a massive amount of [calculations] with the memory right there." The final outcome, according to Elon, is pretty dramatic: He says that whereas Tesla's computer vision software running on Nvidia's hardware was handling about 200 frames per second, its specialized chip is able to crunch out 2,000 frames per second "with full redundancy and failover." Plus, as AI analyst James Wang points out, it gives Tesla more control over its own future.
  • by DontBeAMoran ( 4843879 ) on Thursday August 02, 2018 @04:23PM (#57059752)

    From the previous thread about Tesla, I expected this headline to read "Tesla is now building their own arcade cabinets".

    • Re: (Score:2, Insightful)

      by saloomy ( 2817221 )

      It was either going to be "Elon's folly: why his chips won't work and he will become homeless trying to make them", or "Elon Musk set to disrupt the modern world as we know it with new Super-Computer on a chip stroke of genius".

      What I can say is, the reason Tesla will be the biggest innovator and market leader in their field is simple: people are passionate about it. Good or bad, everyone has a strong opinion. Tesla these days reminds me of the "Pray" cover WiReD published about Apple before the second coming.

    • by ebvwfbw ( 864834 )

      That'll be made by his Boring company.

  • by mykepredko ( 40154 ) on Thursday August 02, 2018 @04:30PM (#57059798) Homepage

    You know if you've ever said anything nasty about Elon.

    Now, his vehicles know.

    Be afraid. Be very, very afraid.

    • You know if you've ever said anything nasty about Elon.

      Now, his vehicles know.

      Be afraid. Be very, very afraid.

      Wouldn't that kind of murder be more likely to be perpetrated by the government or organized crime? If there's a difference between the two, that is.

      • Wouldn't that kind of murder be more likely to be perpetrated by the government or organized crime? If there's a difference between the two, that is.

        Of course there's a difference. Organized crime is - see the clue in the name - organized.

        Seafood platter, etc.

      • by jeremyp ( 130771 )

        I love the quaint belief that Americans have that corporations are more trustworthy than the government.

        • It's not that we trust them more, we just know they have less power to enforce their will. You can still choose to associate with a corporation or not; not so much with Government.
    • Don't worry, he will launch them all to Mars.
  • by postbigbang ( 761081 ) on Thursday August 02, 2018 @04:33PM (#57059806)

    For better and worse, keeping things proprietary means it's by definition both closed source, and tested only to one's own environment. Although it produces fast yields, it doesn't have many eyes. Many eyes and many hours are needed to vet the integrity and edge cases (like cliff edges) before safety can be assured.

    It's a risky, expensive, and proprietary endeavor. If everyone (systems builders) were using similar development, the testing stage could be completed concurrently rather than serially/iteratively. I'm betting against this turning out well.

    • by religionofpeas ( 4511805 ) on Thursday August 02, 2018 @04:47PM (#57059896)

      Neural net calculations are pretty simple, just repeated many times over. Testing the silicon should be relatively simple compared to general purpose CPU or even GPU design.
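For anyone wondering what "pretty simple, just repeated many times over" looks like in practice: the workhorse operation is a multiply-accumulate (MAC) per connection, plus a cheap nonlinearity. A minimal sketch in Python (purely illustrative; the function name and toy sizes are made up, and real accelerators run thousands of these MACs in parallel in fixed-function silicon):

```python
# A minimal sketch of the core operation a neural-net accelerator repeats:
# one fused multiply-accumulate (dot product) per neuron, plus a nonlinearity.

def dense_layer(inputs, weights, biases):
    """One fully connected layer: out[j] = relu(sum_i in[i]*w[j][i] + b[j])."""
    outputs = []
    for w_row, b in zip(weights, biases):
        acc = b
        for x, w in zip(inputs, w_row):
            acc += x * w               # the multiply-accumulate primitive
        outputs.append(max(0.0, acc))  # ReLU activation
    return outputs

# Tiny example: 3 inputs feeding 2 neurons.
print(dense_layer([1.0, 2.0, 3.0],
                  [[0.1, 0.2, 0.3], [-1.0, 0.0, 0.5]],
                  [0.0, 0.1]))
```

Everything else in inference silicon (caches, memory layout, scheduling) basically exists to keep that inner multiply-add loop fed with data.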

    • by RhettLivingston ( 544140 ) on Thursday August 02, 2018 @05:14PM (#57060058) Journal

      Every NN is proprietary, and that is where the functionality to worry about is at. The performance on "edge cases" in driving is directly related to how much compute power you can throw at it. Tesla is multiplying its compute power. The edge cases will improve. Staying with the general purpose GPU instead of true NN hardware will guarantee continued unhandled edge cases.

      This HW is undoubtedly also more energy efficient. That is the real key. They could stack on more boards, but these units are already consuming a significant amount of the vehicle's energy. The trick is to get more compute power with the same or less energy. NN specific HW is going to be a requirement to have that happen.

      Everyone in the industry has known that GPUs will not be used past the first generation or so. They are development HW. Someone will eventually come up with a general purpose NPU that will win the market, but it hasn't happened yet - mostly because NN implementations haven't settled.

      • No, not yet.

        Tesla PURPORTS to be able to add more NN intelligence using proprietary silicon. None of this is proven yet, just like many other Tesla missed goals. This is really rocket science, even with NN. Your rocket is an auto navigating known/unknown obstacles through its use cycle. It can't talk to other cars so as to update them with info about how squirrels leap out in front of you, then go back suddenly. The car will swerve anyway. Kill the squirrel is my opinion, personally, but these cars don't le

        • The cars themselves don't learn, but that would be very inefficient anyway. Tesla can review accident logs, modify their NN, and then send updates to all the cars.

          FPGA technology makes no sense. FPGAs are slow and power hungry for NN applications. Besides, there's no reason to change the hardware. The same silicon can implement a wide variety of neural net topologies and weights.

          • FPGAs make great sense for execution. If you use a kernel that learns in a small environment and execute in an FPGA for routines, you get the limbic/pre-frontal cortex analog. FPGAs can run circles around GPUs as well, given the scope of execution needed.

            As far as reviewing accident logs, that's much tougher. Ingesting the logs to discern actions to take/conditions to learn becomes not so much arbitrary, but a difficult regimen itself. And it means that it's forensic, rather than preventative. Knowing what

            • FPGAs can run circles around GPUs as well, given the scope of execution needed

              You have no idea what you're talking about. NN evaluation requires multiplications, big caches and fast external memory access. FPGAs are lousy for all of those things. Big FPGAs are crazy expensive too. And their key property, the ability to upgrade them in the field, is totally useless for this application.

              • NNs learn. FPGAs are easily sent through (new) subroutines. Wanna do algorithms? Field programmable ones, ones that don't have to wait for storage? And they're not that expensive. As a microcontroller in this app, they're great for sensor monitoring.

                Whatever the actual design that arrives, it's proprietary, and isn't going to be open to scrutiny, and will be expensive. I still believe it's a bad move.

                • by larkost ( 79011 )

                  No. Neural nets are “trained” in the lab. Then you choose the best one for your deployment and make it as efficient as possible. The thing you send out into the field is quite static. It no longer “learns”.

        • I was on the road very early this morning and I was very happy to say I avoided around 10 frogs enjoying the warm pavement. Automated cars should be even better at it than I am.
      • by AmiMoJo ( 196126 )

        Problem is, they've already sold hundreds of thousands of vehicles with the full self-driving option, so they will all need to be retrofitted with this chip if that's what self-driving requires. And I'd guess more sensors too, because the cameras they have are probably inadequate and don't have the self-cleaning functionality that will be essential.

        It's no wonder they jacked up the price of the self driving option.

        The whole thing is another lawsuit waiting to happen.

    • By going baremetal, they're reducing complexity and eliminating most of the potential for what you're talking about. Also, I'm guessing this might be a realtime system, in which case tolerances will be even tighter and bugs would be more obvious and thus more likely to get identified.

    • it's by definition both closed source, and tested only to one's own environment

      So basically the entire self driving car industry with the concept of custom silicon being completely irrelevant?

  • by dfn5 ( 524972 ) on Thursday August 02, 2018 @04:41PM (#57059848) Journal
    I'll be in the basement hiding from the cars
  • by mugurel ( 1424497 ) on Thursday August 02, 2018 @04:49PM (#57059906)

    "We had the benefit [...] of knowing what our neural networks look like, and what they'll look like in the future,"

    Really? If they take their neural network development seriously I don't think they know what their networks will look like in ten years. It's a research area in the middle of a transformation. Using architectures molded into hardware is probably just costly and will act as an antagonist to innovation. I don't think having 2000 vs 200 frames per second right now outweighs that downside.

    • Not really. Musk has it all figured out. Just ask Rei! He will tell you the exact specifications of the new system as well.
    • To speak with Knuth, premature optimization is the root of all evil.
    • There's no guarantee that off-the-shelf hardware from NVidia is a better match for networks 10 years from now.

      • by Jeremi ( 14640 )

        ... and if Nvidia is better in 10 years, Tesla will have the option of buying Nvidia hardware at that time. Developing in-house hardware doesn't mean they can only use in-house hardware for the rest of time.

    • And 10 NVidia boards instead of 1 would drop the range of the vehicle in half and multiply the cost of an already expensive computer by 10. If there is something better in 10 years and the car is still on the road, you just unplug the old and plug in the new - probably for less than half what this one costs.
    • What he's saying, very badly probably, is that they're building an inference chip, which is designed to deploy neural nets, rather than a repurposed GPU like Nvidia offers, which is much better at training neural nets. Difference being floating point precision requirements, as well as bandwidth v alu balance.
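To make the training-vs-inference precision point concrete, here's a hedged sketch of post-training quantization: float32 weights get mapped to 8-bit integers, the dot product then runs in cheap integer MACs, and a single rescale at the end recovers an approximate float result. The scale values and numbers below are invented for illustration, not anything Tesla or Nvidia actually uses:

```python
# Why inference chips can get away with lower precision than training
# hardware: quantize trained float weights to int8, do the dot product in
# integer math, and rescale once at the end.

def quantize(values, scale):
    """Map floats to the int8 range [-127, 127] using a fixed scale."""
    return [max(-127, min(127, round(v / scale))) for v in values]

weights = [0.50, -0.25, 0.10]
inputs = [1.0, 2.0, -1.0]

w_scale, x_scale = 0.01, 0.05          # illustrative scales, not from Tesla
wq, xq = quantize(weights, w_scale), quantize(inputs, x_scale)

int_dot = sum(w * x for w, x in zip(wq, xq))   # pure integer math
approx = int_dot * w_scale * x_scale           # rescale once at the end
exact = sum(w * x for w, x in zip(weights, inputs))

print(exact, approx)  # the quantized result closely tracks the float result
```

Training needs the wide dynamic range of floating point for gradients; deployment mostly doesn't, which is one reason a dedicated inference chip can spend its transistors differently than a GPU.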
  • by Kjella ( 173770 ) on Thursday August 02, 2018 @04:51PM (#57059916) Homepage

    A car travelling at 90 mph is moving about 4 cm per millisecond. So going from 200 fps to 2000 fps is going from 20 cm to 2 cm per cycle. What's the use of recognizing a car every two centimeters? For a jogger at 9 mph it's down from 2 cm to 2 mm. It's neat and all, but I don't see how that's necessary to react in the time frames a car needs to react. Even if it takes 3-4 frames for the car to get a motion vector, 0.2 seconds is still way quicker than a human, and 0.02 seconds doesn't bring that much.
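The parent's arithmetic checks out; a quick sketch (only a unit conversion, nothing Tesla-specific):

```python
# Distance covered between consecutive frames at a given speed and frame rate.

MPH_TO_M_PER_S = 0.44704  # exact mph -> m/s conversion factor

def cm_per_frame(speed_mph: float, fps: float) -> float:
    """Distance travelled between consecutive frames, in centimetres."""
    return speed_mph * MPH_TO_M_PER_S / fps * 100.0

print(round(cm_per_frame(90, 200), 1))   # ~20.1 cm per frame at 200 fps
print(round(cm_per_frame(90, 2000), 1))  # ~2.0 cm per frame at 2000 fps
print(round(cm_per_frame(9, 2000), 2))   # jogger: ~0.2 cm (2 mm) per frame
```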

    • Agreed. The main risk of a self-driving car is probably not in reacting too slowly, but in scene detection breaking down. For that you need better architectures, not faster architectures.
      • Re:To what end? (Score:5, Interesting)

        by dgatwood ( 11270 ) on Thursday August 02, 2018 @05:33PM (#57060140) Homepage Journal

        Maybe after a point, but up until that point, the main risk is reacting too slowly. Ask anybody with an AP2 Tesla how well it handled curves prior to earlier this year. If they don't use the word "lag", they don't know software, and if their eyes don't bug out in abject terror, they don't know how to drive.

        Basically, it had (and still has, to a lesser extent) trouble with lane keeping, because its reactions lagged behind reality, and it started turning way too late, resulting in uncomfortable turns, getting dangerously close to barriers and center lines, etc. This is better in current versions, but I still get scared enough to take manual control a couple of times per day.

        So right now, performance is still their main problem. This is a very welcome announcement.

        • Very practical question regarding Tesla handling (never drove a Level 2+ car before, only my parents' car with FCAS and LDWS, and various similar system on all the various fleets of car-sharing).

          Normally, a Tesla should handle both steering and accelerating/braking.
          If the driver were to grab the wheel to adjust steering - BUT kept their feet hovering above the pedals without pushing them (yet) - would this disengage only the autopilot steering? While keeping engaged the usual distance keeping / emergency aut

          • by dgatwood ( 11270 )

            If the driver were to grab the wheel to adjust steering - BUT kept their feet hovering above the pedals without pushing them (yet) - would this disengage only the autopilot steering? While keeping engaged the usual distance keeping / emergency autonomous braking features (FCAS)?

            If you press the gas pedal too much, you'll disable automatic braking, and nothing else. If you touch the brake, it disengages everything. (This is, IMO, a bug; it is too easy to accidentally tap it on a curve and have the wheels s

    • Re:To what end? (Score:5, Insightful)

      by TomGreenhaw ( 929233 ) on Thursday August 02, 2018 @05:28PM (#57060118)
      I bet that this allows them to have more cameras. 2000 fps for one camera could be 250 fps each for 8 cameras. It could also be used for much higher resolution cameras that have fisheye or insect-like lenses.
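The parent's budget arithmetic, spelled out (trivial, but it makes the trade explicit; the function name is just illustrative):

```python
# A fixed per-chip frame budget can be split across cameras, so more raw
# throughput buys more cameras (or higher resolution) at the same rate.

def fps_per_camera(total_fps: float, num_cameras: int) -> float:
    """Evenly divide a chip's total frame budget across cameras."""
    return total_fps / num_cameras

print(fps_per_camera(2000, 1))  # 2000.0 fps with a single camera
print(fps_per_camera(2000, 8))  # 250.0 fps each across 8 cameras
```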
    • by Anonymous Coward

      Think of a video game. If you dial your resolution and detail back to 1990s levels, most modern GPUs could perform thousands of FPS on the top modern games. They don't because we've used the rate gain to produce higher fidelity and more detail.

      Same here. It may run their current NN at 2K frames per second. But they will change that network to utilize the extra time to get better results than they currently have at 200 FPS.

      Think of the NN as a Chess engine. It can consider more with more compute power or it

    • You're calculating based on a single camera. The Model 3 has 8: 8 x 200fps = 1600fps
    • by AHuxley ( 892839 )
      "control over its own future".
      No company wants to be the coachmaker, placing their design over another brands "computer".
    • Re:To what end? (Score:4, Interesting)

      by philmarcracken ( 1412453 ) on Thursday August 02, 2018 @06:12PM (#57060362)

      That's the answer they want people to hear. The real answer is no longer having to pay nvidia.

    • 0.2 seconds is nearly two car lengths at 90mph
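That checks out; a quick back-of-envelope (assuming a typical ~4 m car length):

```python
# Distance covered during a 0.2 s reaction window at 90 mph.
MPH_TO_M_PER_S = 0.44704  # exact mph -> m/s conversion
distance_m = 90 * MPH_TO_M_PER_S * 0.2
print(round(distance_m, 2))  # 8.05 m, i.e. roughly two ~4 m car lengths
```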

    • You can get better estimates of the position and orientation of objects by using more measurements. Remember these neural networks are susceptible to sensor noise and environmental effects. I worked on an object recognition and tracking system once during my undergrad. When we boosted the frame rate it drastically improved the performance of the system.
    • A car travelling at 90 mph is moving about 4 cm/millisecond. So going from 200 fps to 2000 fps is going from 20 cm to 2 cm per cycle. What's the use of recognizing a car every two centimeters? For a jogger at 9 mph it's down from 2 cm to 2 mm. It's neat and all but I don't see how that necessary to react in the time frames a car needs to react. Even if it takes 3-4 frames for the car to get a motion vector 0.2 seconds is still way quicker than a human and 0.02 seconds doesn't bring that much.

      2 cm is about 3/4 inch. Could the vehicle recalculate and take a trend-line view of a problem? That would lead to sufficient lead time to prevent a fatality. If the car could know what to do within one foot of travel (at 90 mph), wow.

  • As an existing owner, how do I upgrade to the new hardware system? Is it an over the air update?
    • There are two components in an autonomous system.

      The sensors and the computer.

      Tesla has publicly stated that they've deliberately designed the computer to be modular, and the current cars since recently (forgot the exact date, it's google-able, I think it's since the introduction of the triple front cam) are designed in such a way that you could swap the computer with a newer one in the future.
      So in theory yes, if they keep their word, you should be able to install the newer computer with the better NN when it's

  • What happens if (or, when) Tesla realizes they need to make a significant change to their code?

    Automated driving and AI are both hot research areas. I wouldn't take a bet that there won't be big changes in the near future.

    This smells like an unholy combination of two things: a development team getting burnt by premature optimization, with just a hint of "painting oneself into a corner".

    Between this and the omission of lidar, I'm not enthusiastic about Tesla's self-driving capability. My pessimism applies ac

    • I wouldn't take a bet that there won't be big changes in the near future.

      Most likely, the silicon can handle future changes just as well as current graphics cards, if not better. Also, I would expect Tesla to keep evolving their hardware.

    • Between stereo and inter frame processing using the known car speed/direction you can reconstruct a 3D representation without LIDAR. LIDAR has problems too. The mutual interference which LIDAR suffers from requires some very expensive tricks to prevent ... especially expensive for non scanning LIDAR, which is the best LIDAR because fuck mechanical 2D scanners. LIDAR is a dead end. high resolution radar would be nice though for heavy rain and fog.

      As for designing their own neural network ICs, better than be

    • This smells like an unholy combination of two things: a development team getting burnt by premature optimization, with just a hint of "painting oneself into a corner".

      Well, on the other hand, it's just number-crunching hardware, used to run their NN.
      Maybe they'll come up with a good computer, maybe in the future they'll realise that silicon by Nvidia or AMD ends up being the best to run their nets.

      It's more important to them (they'll be investing money into that R&D) than to users (it's just number-crunching silicon to run a NN on it. You're supposed to be able to swap the computer for a better one in the future, according to Tesla).

      Between this and the omission of lidar, I'm not enthusiastic about Tesla's self-driving capability.

      I totally agree with this. At a ti

  • They have been advertising for a long time that what you buy now contains all the hardware required for fully autonomous driving, with a free software upgrade in the future.

  • In other words, they're making an ASIC. Great, but I would have expected that most of these computer vision-based self-driving platforms would indeed be using ASICs. So the news here is probably that they are moving away from NVidia's turnkey hardware solution.
  • The reasoning is because Apple did it?

    Apple has to fit its chip into a 4 oz container slightly larger than a credit card. You've got an entire f'in car, put 100 chips in it genius.
  • If your trip has more than two stops you may get to see places you've never seen before - the "hidden" stops; hopefully there is a supercharger along that path.
  • Why does he think he can start up a new business like that and eclipse Nvidia that has decades of experience? He's been the dog that caught the car. The dog can't keep doing that.

    Could be the first (or second... or so on) nail in this coffin.
