What was hitherto limited to big machines is now a reality on small devices. With the arrival of Face ID (Apple’s biometric authentication system) and Google’s federated learning models, embedded artificial intelligence (AI) and machine learning (ML) algorithms now run on phones and other small hardware. Some are referring to this as the second stage of embedded AI.
Challenges for small-scale machine learning
The main constraint for machine learning on smartphones and IoT (Internet of Things) devices has been that AI techniques tend to be resource intensive. Large, complex neural network models impose heavy computing loads, calling for complex concurrency, large storage, and substantial memory. On ultra-light devices such as smartphones, with their dependence on continuous connectivity, authentication, and battery life, running ML algorithms posed huge challenges.
With organizations looking to optimize and improve the performance of ML on resource-constrained devices, small-scale machine learning has been a research focus for some time now. While some of this could be achieved by offloading work to the cloud, latency quickly becomes a problem for small devices. Most computation for such devices is therefore best kept local, both to avoid the security vulnerabilities of transmitting data and to make computing more reliable.
Some recent innovations
Google’s federated learning model is one way to handle these challenges. They have developed a shared prediction model that decouples ML training from centralized data storage: the data stays on the device. Users download the model from the cloud, and the model learns from the data on the user’s phone. The resulting incremental changes are then encrypted and sent back to the cloud in what Google calls small, focused updates.
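The idea above, often called federated averaging, can be sketched in a few lines. This is a minimal, illustrative simulation, not Google’s production system: three simulated "phones" each hold private data for a simple linear model, compute an update locally, and only the averaged updates reach the shared model. All names and data here are made up for illustration.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=5):
    """Train a copy of the shared model on one device's private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w - weights  # only this small, focused update leaves the device

def federated_round(weights, devices):
    """One round: average per-device updates into the shared model."""
    updates = [local_update(weights, X, y) for X, y in devices]
    return weights + np.mean(updates, axis=0)

# Three simulated devices, each with its own local data (never pooled).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, devices)
# w should now be close to true_w, learned without centralizing any data
```

In the real system the updates are also encrypted and secure aggregation prevents the server from inspecting any single device’s contribution; this sketch only shows the averaging step.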
Engineers have now designed learning software and embedded hardware that can run parallel algorithms efficiently. Some of this has been made possible by improved GPU performance. As a result, we now see more applications in areas like logistics, data protection, and cybersecurity.
Advances in semiconductor process integration now make it possible for algorithms to run on mobile devices and perform tasks like authentication, tagging, and even robotic control without relying on power-hungry servers. It is not only improved GPUs but also custom silicon such as Google’s TPU, which can run larger models, that has accelerated the ability to build business-specific AI capabilities into small devices.
While the recent advances are significant, research is under way on the next phase of development: moving complex algorithms out of the cloud and onto small physical devices such as sensors and phones. The focus has now shifted to consolidating and optimizing existing machine learning models so they fit a small device’s limited processor and minimal memory footprint.
This calls for a combination of hardware and software innovation to extend the power of embedded AI, enabling on-device business applications across domains and industries, especially mobile and IoT software that is agile, intelligent, and self-correcting.
Guest Contributor: Carolette Alcoran, owner of thelifestyleofmaria.com, is a freelance content writer and digital nomad. She writes on a wide range of topics and has worked with startups, corporates, and individuals globally.