Elon Musk Says Tesla Is Building Dedicated Chips For Autopilot (theregister.co.uk)
Elon Musk says Tesla is developing its own chip to run the Autopilot system in future vehicles from the firm. The news was revealed at a Tesla party held at the machine-learning conference NIPS. Attendees at the party told The Register that Musk said, "I wanted to make it clear that Tesla is serious about AI, both on the software and hardware fronts. We are developing custom AI hardware chips." From the report: Musk offered no details of his company's plans, but did tell the party that "Jim is developing specialized AI hardware that we think will be the best in the world." "Jim" is Jim Keller, a well-known chip engineer who was lead architect on a range of silicon at AMD and Apple and who joined Tesla in 2016. Keller later joined Musk on a panel discussing AI at the Tesla party, alongside Andrej Karpathy, Tesla's Director of AI, and chaired by Shivon Zilis, a partner and founding member at Bloomberg Beta, a VC firm. Musk is well known for his optimism about driverless cars and his pessimism about whether AI can operate safely. At the party he voiced a belief that "about half of new cars built ten years from now will be autonomous." He added his opinion that artificial general intelligence (AGI) will arrive in about seven or eight years.
Re:ADHD... (Score:5, Insightful)
If there are dedicated functions or custom designs it can certainly make sense to drop the investment to develop one if you expect volumes to be high.
Re: (Score:2)
Meanwhile, the second round of configuration invites to non-Tesla/SpaceX Model 3 customers just went out, and the first round's deliveries start in the next few days.
Re: (Score:3)
Tesla is likely working with AMD on this. It was reported a couple of months ago. [cnbc.com]
Also, these AI ASICs are not that complicated. They are just a big array of FP16 or FP8 multipliers with a really wide data path. Sort of like a low precision GPU with all the graphics features removed.
Google already makes their own [wikipedia.org] but they don't sell it, it is for internal use only. They used it for AlphaGo, and they also use it for image processing.
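The "big array of low-precision multipliers with a really wide data path" idea above can be sketched in a few lines: quantize values to int8, do the multiply-accumulates in a wide integer accumulator, and rescale at the end. This is a simplified stand-in for what a TPU-style MAC array does in hardware, not any vendor's actual design; the scales and matrices are made up for illustration.

```python
import numpy as np

def quantize(x, scale):
    """Map float values to int8, the low-precision format these ASICs favor."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def int8_matmul(a, b, scale_a, scale_b):
    """Multiply-accumulate in int32 (the 'wide data path'), then rescale."""
    acc = a.astype(np.int32) @ b.astype(np.int32)  # wide accumulator avoids overflow
    return acc * (scale_a * scale_b)

a = quantize(np.array([[0.5, -0.25]]), scale=1 / 64)
b = quantize(np.array([[0.5], [1.0]]), scale=1 / 64)
print(int8_matmul(a, b, 1 / 64, 1 / 64))  # prints [[0.]] -- matches the float result
```

The graphics-free-GPU comparison holds: this is the whole chip, repeated thousands of times, with nothing else on the die.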
Re:ADHD... (Score:4, Informative)
I expect these to be less general purpose than Google's offerings. The EV industry doesn't need to train nets in-vehicle and has to be concerned with power consumption. The traditional vector FPU approach to AI is power hungry, many orders of magnitude more so than the human brain. So, we know there is room for improvement.
Something in the direction of IBM's TrueNorth chip, which initially had problems with convolutional neural nets but can now handle them, might be better.
In any case, I hope that Elon's allusions mean that a different approach is being taken - something between Google's TPU and IBM's TrueNorth. AMD would be eager to throw in some of their own funds if the development could be marketed to others, perhaps after some delay. Elon is usually amenable to spreading the tech.
Re: (Score:2)
The EV industry doesn't need to train nets in-vehicle
They still need to train them in their labs, which will definitely benefit from the same technology.
Re: (Score:3)
More specifically: Tesla's biggest challenge with improving Autopilot is processing power; this was discussed a month or two ago. With a car, you can't offload something like that to the cloud, since you can't trust in 100% unbroken connectivity. But better capabilities require neural nets for image-recognition tasks (you can't decide how to respond to something if you can't recognize what you're seeing) - and neural nets are extremely computation-intensive. They've been working (and having success) at
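To put "extremely computation-intensive" in numbers: even one 3x3 convolution layer at a modest resolution costs on the order of billions of multiply-accumulates per frame. The layer sizes below are illustrative, not anything from Tesla's actual network.

```python
def conv_macs(out_h, out_w, c_in, c_out, k):
    """Multiply-accumulate count for one k x k convolution layer."""
    return out_h * out_w * c_out * c_in * k * k

# One illustrative 3x3 layer on a 224x224 feature map, 64 -> 64 channels:
macs = conv_macs(224, 224, 64, 64, 3)
print(f"{macs / 1e9:.2f} GMACs per frame")  # prints "1.85 GMACs per frame"
```

Multiply that by dozens of layers, several cameras, and 30+ frames per second, and the appeal of dedicated silicon over a general-purpose CPU is obvious.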
Re:ADHD... (Score:4, Interesting)
Some people over at the TMC forums have been extracting data from Tesla cars, so we know quite a bit about the current state of their system.
They have cameras that do black and white plus a red channel. I guess red for road signs. They have neural networks looking at the images. They are quite primitive though, and have some severe limitations.
For example, they only look at one frame at a time. That means no depth perception. You need either two cameras or two frames separated by time for that, and in the latter case it doesn't work while you are stationary.
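The two-views point can be made concrete with the standard stereo relation depth = focal_length x baseline / disparity. With two frames separated in time, the car's own motion supplies the baseline, which is exactly why depth vanishes when stationary. The numbers below are made up for illustration.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo: depth = f * B / d. Undefined when disparity is 0."""
    if disparity_px == 0:
        raise ValueError("no disparity -> no depth (e.g. a stationary single camera)")
    return focal_px * baseline_m / disparity_px

# Two frames taken 0.54 m apart (car moving), feature shifted 20 px,
# assumed 700 px focal length:
print(depth_from_disparity(700, 0.54, 20))  # prints 18.9 (metres)
```

With a single camera and zero motion, baseline is 0, disparity is 0, and the equation gives you nothing - which is the limitation being described.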
It really looks like they underestimated the difficulty of doing full self-driving with just cameras and no lidar. The level of AI needed to navigate around a car park with just cameras and ultrasonics, for example, seems beyond anything anyone is doing right now. They could fudge it, but then Tesla cars will be easy to herd and prone to getting stuck without a driver. Wouldn't want your kids getting robbed by thieves with spray paint.
Re:ADHD... (Score:4, Interesting)
Wouldn't want your kids getting robbed by thieves with spray paint.
Lots of things can wreak havoc with just about every navigation approach that self-driving cars can use.
- Spray paint on cameras if you're using optical recognition.
- The same can be done with lidar, and there hasn't been much talk of the interference you get with multiple lidars in view of each other.
- Radar sensors can be stopped with mylar party balloons or anything metallic.
- Ultrasonic sensors can be stopped with just cardboard, or even chewing gum on the sensor.
- A metallic blanket thrown over the top of a car can stop GPS (e.g., an alfoil windscreen sunshade that you can buy for $2, if you place it in the right spot).
Thinking about that, I reckon you could get a big painter's drop sheet, spray it with metallic paint, weight the corners, and then you would have effectively made a large "net" that you could use to catch self-driving cars for fun and profit.
So what's left? Hopefully a pop-out joystick that an occupant can grab and drive the car with when it all goes pear-shaped.
Re: (Score:2)
If you look at Google's self driving cars, they use lidar and cameras to build up a 3D model of the world around them. Tesla is either planning to do that just with cameras, or hopes it doesn't need to.
Lidar can tell a poster from an actual object. Cameras alone... It will be like those Road Runner cartoons where Wile E. Coyote paints a fake tunnel mouth on a wall.
Re: (Score:2)
"Tesla's biggest challenge with improving Autopilot is processing power"
Not so sure. Their AP1.0 system from Mobileye is vastly inferior in terms of raw power, yet AP2.0 is still not at feature parity.
Re: (Score:3)
He probably means the design of chips rather than the actual fabrication. Anything vision-related is more DSP than CPU or GPU. In the past, chips like the TMS320 series or Intel's i860 were used, but nothing beats a custom ASIC with all the unused instructions stripped out and new custom instructions added.
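The kind of "custom instruction" being described is typically a fused multiply-accumulate over low-precision integers with saturation, the bread and butter of DSP work. A software model of one such hypothetical instruction (the operation and widths are illustrative, not any real chip's ISA):

```python
def sat16(x):
    """Saturate to the int16 range, as DSP accumulate instructions often do."""
    return max(-32768, min(32767, x))

def mac_dot(xs, ws):
    """Model of one hypothetical fused dot-product instruction: MAC with saturation."""
    acc = 0
    for x, w in zip(xs, ws):
        acc = sat16(acc + x * w)
    return acc

print(mac_dot([100, -50, 25], [3, 2, 4]))    # prints 300
print(mac_dot([30000, 30000], [2, 2]))       # prints 32767 (saturated, not wrapped)
```

In an ASIC, this whole loop is a single cycle-per-element datapath; everything else a general-purpose core carries (branch prediction, caches, unused instructions) is stripped out.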
Too far (Score:3)