Smart technology puts intelligence where it’s needed
We often associate artificial intelligence (AI) and machine learning (ML) with exotic applications such as self-driving cars, speech and facial recognition, robotic control and medical diagnosis, all powered by massive rows of servers filled with CPUs or GPUs in some distant data center. But in fact, AI and ML are moving steadily closer to all of us.
That’s because companies such as Google, Microsoft, Nvidia and others have recently introduced technologies, platforms and devices that cost-effectively extend AI and ML capabilities to the edge of the network. Working in concert with cloud services, these devices can process large volumes of data locally, enabling highly localized and timely “inference”, industry jargon for AI- and ML-driven predictions executed at the edge by models trained in the cloud, where data storage and processing power are plentiful and scalable.
Previously, deploying machine learning capability meant running it on some kind of server: detecting a pattern required sending data from the device to the cloud to generate an inference. Putting intelligence onboard a device has key advantages:
- minimal latency, since AI and ML functions are no longer dependent on an internet connection;
- data transmission can be filtered, prioritized or summarized to fit communication constraints;
- security and data privacy can be enhanced by keeping sensitive data on the device.
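To make the second point concrete, here is a minimal sketch in Python of how an edge device might filter and summarize its readings before anything goes upstream. All names, thresholds and values are illustrative assumptions, not part of any real utility API.

```python
# Illustrative sketch: keep raw samples on the device, forward only a
# compact summary plus any out-of-band readings. Nominal value and band
# are assumed figures for a 230 V feeder, not real configuration.
NOMINAL_VOLTS = 230.0
BAND = 0.05  # flag readings more than +/-5% from nominal

def summarize_and_filter(readings):
    """Return a small payload: aggregate stats plus only the outliers."""
    lo, hi = NOMINAL_VOLTS * (1 - BAND), NOMINAL_VOLTS * (1 + BAND)
    outliers = [r for r in readings if not lo <= r <= hi]
    return {
        "count": len(readings),
        "mean": round(sum(readings) / len(readings), 2),
        "outliers": outliers,  # only these need timely upstream attention
    }

# Instead of streaming every sample to the cloud, send one small payload.
print(summarize_and_filter([230.1, 229.8, 230.4, 251.9, 230.0]))
# -> {'count': 5, 'mean': 234.44, 'outliers': [251.9]}
```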
Coral – bringing deep learning models to the edge
Coral is Google’s new developer platform for local AI. It is powered by the company’s Edge TPU (tensor processing unit) chip and is specifically designed to run machine learning models for edge computing. Coral’s initial hardware includes a credit-card sized development board and a USB accelerator, a tiny module in an aluminum housing that connects to any Linux-based system. In other words, it gives developers everything they need to prototype applications for on-device machine learning inference.
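As a rough sketch of what that looks like in practice, the snippet below runs a classification model on the Edge TPU via the tflite_runtime package and its Edge TPU delegate. The model file name is a placeholder for any Edge TPU-compiled .tflite model, and the zeroed input stands in for a real camera frame.

```python
# Minimal sketch of on-device classification with an Edge TPU.
# Assumes the tflite_runtime package and Edge TPU runtime are installed;
# "model_edgetpu.tflite" is a placeholder model file.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],  # Edge TPU
)
interpreter.allocate_tensors()

# Feed one input tensor and read back the classification scores.
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()  # inference runs locally, no network round trip
scores = interpreter.get_tensor(out["index"])
print("top class:", int(np.argmax(scores)))
```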
Coral provides a perfect illustration of how cloud and edge work together to put intelligence where it’s needed. The heavy lifting of developing and training a machine learning model happens in the cloud, where massive amounts of computing power are needed to process the vast quantities of data required to build a proper deep learning model. The result of this training is then compressed and distilled into smaller, faster models that can be applied quickly to new data at the edge using specialized hardware designed for this task.
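A hedged sketch of this hand-off, using TensorFlow Lite’s post-training quantization: the paths are placeholders, and targeting the Edge TPU specifically would additionally require full integer quantization and Google’s edgetpu_compiler.

```python
# Sketch of the cloud-to-edge hand-off: a model trained in the cloud is
# quantized into a smaller, faster artifact for edge devices.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("cloud_trained_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("model_for_edge.tflite", "wb") as f:
    f.write(tflite_model)  # compact artifact shipped to edge devices
```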
Implications for utility companies and grid management
Google’s initial performance benchmarks for an image classification application with a remote camera show edge AI performing 70-100x faster than a CPU-based approach. Speed and low latency will be key criteria for utility grid management using AI and ML.
Bringing this technology to the power grid domain requires expertise in developing specific use cases and applications that deliver new value to utilities. Many industry vendors, from smart metering to distribution automation to demand response, are beginning to integrate edge intelligence into their product lines.
But as many are finding out, this is a complex endeavor, both technologically and culturally. By engaging with vendors and system integrators to communicate their requirements and define their use cases for edge intelligence, utilities can drive a technology revolution in the industry, one that unlocks new benefit streams and empowers customers with more energy choices.
The more data that is collected in the cloud, the better the training models become, resulting in smarter, faster and more accurate inference, all of which makes edge intelligence a true business differentiator for utilities.
For more information, please visit: https://engage.atos.net/nao-iot-utility-2019