What exactly is "built in" machine learning? (Score:3)

Setting aside the question of what "machine learning" actually means... Machine learning is just software, and if the device is a computer running software, then machine learning is already "built in" to some extent. I assume the claim is that there is some specific hardware support to accelerate an algorithm currently in vogue for machine learning. Can someone comment on what kind of hardware acceleration is technically considered state of the art for machine learning, and whether that is what's on the table for the Raspberry Pi?
Re:What exactly is "built in" machine learning? (Score:3)

Can someone comment on what kind of hardware acceleration is technically considered state of the art for machine learning
The new features described in TFA are designed to speed up neural networks using platforms such as TensorFlow. You want fast operations, especially multiplication, on large matrices of low-precision floating-point values, typically FP16.
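To make that concrete, here is a toy sketch in NumPy (the matrix sizes are arbitrary, and NumPy does this in software; the point of ML acceleration hardware is to run exactly this kind of low-precision multiply much faster):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two matrices of low-precision (FP16) values; real NN layers are often
# much larger than this.
a = rng.standard_normal((1024, 1024)).astype(np.float16)
b = rng.standard_normal((1024, 1024)).astype(np.float16)

# The core operation NN inference spends most of its time on:
c = a @ b
print(c.dtype, c.shape)  # float16 (1024, 1024)
```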
Training a NN is very computationally expensive, so even with the new features, it is not something you will want to do on an RPi. It makes much more sense to upload your data to the cloud, do the training there using TPUs or high-end GPUs, and then download the fully trained model (its weights) to the Pi for inference.
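For illustration, that train-in-the-cloud, run-on-the-Pi workflow looks roughly like this (a minimal sketch using TensorFlow's TFLite converter; the tiny model, the omitted training data, and the output file name are placeholders, not anything from TFA):

```python
import tensorflow as tf

# A tiny stand-in model; in practice you would train something far larger,
# on cloud GPUs or TPUs rather than on the Pi itself.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, epochs=...)  # training happens off-device

# Convert the trained model to TensorFlow Lite with FP16 weights for
# inference on a small device such as a Raspberry Pi.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```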
This is the state of the art: Tensor Processing Unit [wikipedia.org]