With the Internet of Things (IoT) expanding further into factories and across supply chains, latency reduction and bandwidth optimization have both become top priorities for manufacturers and other IoT-invested organizations like logistics providers and transit agencies. Meeting these two objectives is key to business-sustaining innovation: Over 60% of respondents to a 2020 Mindbowser survey of industrial IoT adopters saw their implementations as enabling competitive advantage and higher revenue.
Individual IoT sensors collect data and transmit it over a network. They can perform tasks ranging from using computer vision to inspect drug bottles on an assembly line for tiny defects, to orchestrating the movements of robots on a factory floor to prevent collisions. AI and machine learning power many of these use cases: Two-thirds of manufacturers told Google Cloud that they had increased their use of AI in day-to-day operations in 2020.
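To make the "collect and transmit" loop concrete, here is a minimal sketch of what such a sensor might run. The gateway URL, sensor ID and payload schema are all hypothetical placeholders for illustration, not any real product's API:

```python
# Minimal sketch of a "dumb" sensor's collect-and-transmit loop.
# The gateway endpoint and payload fields are hypothetical.
import json
import time
import urllib.request

GATEWAY_URL = "http://edge-gateway.local/readings"  # hypothetical endpoint

def read_sensor() -> float:
    """Stand-in for a real driver call, e.g. an I2C temperature read."""
    return 21.7  # placeholder value

while True:
    payload = json.dumps({
        "sensor_id": "line-3-temp-01",   # hypothetical device ID
        "value_c": read_sensor(),
        "ts": time.time(),
    }).encode()
    req = urllib.request.Request(
        GATEWAY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # transmit the reading over the network
    time.sleep(1.0)              # sample once per second
```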
However, effectively completing such activities requires very low-latency connectivity between the IoT sensors themselves and the infrastructure on which their data gets analyzed and stored. Otherwise, time-critical tasks become impractical, and safety and cost concerns escalate. An AI-powered motion sensor in a car or a robotic control system can't afford to wait multiple seconds for its readings to trombone through a faraway cloud and back again.
Although the cloud does provide scalable, on-demand resources for these connected IoT devices — storage, analytics engines, etc. — it also introduces unique bottlenecks, due to the laws of physics. Because cloud-based servers are physically far away from IoT sensors, round trips for data workloads take longer than if those same servers were nearer. Copious bandwidth is needed as well, to support this ongoing massive data exchange.
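A back-of-the-envelope calculation shows why distance matters. Light in optical fiber travels at roughly two-thirds of its vacuum speed, so propagation delay alone puts a hard floor under round-trip time; real networks add queuing, processing and transmission delays on top of it:

```python
# Lower bound on round-trip time from propagation delay alone.
# Light in fiber travels at roughly 200,000 km/s (~2/3 of c).
SPEED_IN_FIBER_KM_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round trip: out and back at the speed of light in fiber."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

print(f"Cloud region 1,500 km away: >= {min_rtt_ms(1500):.1f} ms")  # >= 15.0 ms
print(f"Edge node 10 km away:       >= {min_rtt_ms(10):.2f} ms")    # >= 0.10 ms
```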
In contrast, edge computing moves data processing closer to the point of consumption: to the network edge, where IoT sensors, servers and end-user devices are most concentrated. It doesn't replace the cloud so much as complement it.
Edge computing is computing conducted at or near the location of a data source and its user(s). The main advantages of edge computing over cloud computing are reduced latency and bandwidth consumption, both of which are essential in supporting cameras, robotics and other IoT infrastructure.
Within a setting like a factory, these devices are usually “dumb” — not a pejorative, but simply a way to describe how they only collect data and offload it somewhere else, either to the cloud or to a company data center, for analysis and for decisions about what gets retained, sent to other applications or discarded. This setup often leads to the latency and bandwidth issues highlighted earlier, while the sheer scale of the IoT — with thousands of sensors and terabytes of data collected each month — amplifies those issues.
Edge computing offers a solution by, for instance, embedding an AI chip within such IoT devices (or within other hardware nearby) so that they can process that data on-device, without sending it to a cloud, thereby reducing round-trip times. Amazon has taken this specific approach with Alexa hardware, moving tasks to dedicated silicon to curb the amount of cloud processing required for each query.
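As an illustration of what on-device inference can look like, here is a hedged sketch using the TensorFlow Lite runtime, which is commonly deployed on edge hardware. The model file, the defect-detection task and the float32 input are assumptions for illustration, not Amazon's implementation:

```python
# Sketch of on-device inference with the TensorFlow Lite runtime.
# Assumes a float32 model file named "defect_detector.tflite" (hypothetical).
import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight edge runtime

interpreter = Interpreter(model_path="defect_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=np.float32)  # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()                               # runs entirely on-device
score = interpreter.get_tensor(out["index"])       # no cloud round trip needed
print("defect score:", score.ravel()[0])
```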
By moving processing to the edge, organizations can improve performance and save money on cloud storage and bandwidth. Tasks such as AI-driven analysis of images and sensor data for predictive maintenance can be completed entirely on-site. Moreover, 5G networking has made running edge processes much more efficient, offering lower latency, higher throughput and support for far denser device deployments than 4G.
Over 90% of manufacturers see 5G as important to their future, per a Manufacturing Institute survey. Tasks like remote monitoring, real-time communications between equipment and the use of mobile robots were all seen as easier with 5G than with 4G or Wi-Fi.
NVIDIA has described this convergence of the IoT, AI, edge computing and 5G networking — plus the cloud-native applications that interact with and provide a backend for all of it — as “large-scale AI at the edge,” a combination that supports truly real-time decision making. The rollout of edge-specific solutions plus the entry of numerous cloud providers into the edge market — AWS, Azure, Google, et al. — has made the actual building of a full edge environment more practical than ever.
This democratization of infrastructure, on par with what the cloud has done for business collaboration software and data storage, means that harnessing the power of AI at the edge, across the IoT, is more realistic than ever. Edge computing has come a long way from its origins and is now at the forefront of how numerous organizations innovate.
In sectors such as industrial manufacturing and transportation, AI and ML-enhanced interactions spanning the IoT, edge and cloud are rapidly becoming more common. But it took time for the ecosystems that support them to become practical to build for most companies.
The roots of edge computing go back to the 1990s, when content delivery networks emerged and used nodes geographically close to end-users to cache content like photos and videos. This reduced latency and became a model for what an influential 2009 IEEE article called “cloudlets” — a dispersed architecture that would perform state caching at the edge to improve the performance of processor-intensive tasks like natural language processing (NLP), on mobile devices in particular.
As the name indicates, the idea was to work in tandem with a cloud backend. In practice, that is how modern edge computing implementations often function, with a combination of edge-specific and cloud resources easily procured from a service provider and configured into a coherent, consistent architecture (e.g., AWS Snowball for edge, AWS EC2 and AWS Lambda for cloud). Alternatively, edge computing may be entirely self-contained on a device, with data transmitted over a WAN only later, if at all.
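That second, self-contained pattern might look like the following sketch: analyze readings locally and ship only a compact summary over the WAN later. boto3 and S3 stand in for any cloud backend here; the bucket name, key and anomaly threshold are hypothetical:

```python
# "Process locally, ship summaries later" pattern.
# Bucket, key and threshold below are hypothetical placeholders.
import json
import statistics
import boto3

readings = [21.4, 21.9, 22.3, 35.8, 21.6]   # stand-in for locally buffered data

# All analysis happens at the edge; only a compact summary leaves the site.
summary = {
    "mean": statistics.mean(readings),
    "max": max(readings),
    "anomalies": [r for r in readings if r > 30.0],  # assumed threshold
}

s3 = boto3.client("s3")
s3.put_object(                               # deferred WAN transmission
    Bucket="example-edge-summaries",         # hypothetical bucket
    Key="line-3/2024-01-01.json",
    Body=json.dumps(summary).encode(),
)
```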
All of these relatively easy-to-build edge, and edge-to-cloud, ecosystems provide superior latency, bandwidth efficiency and cost control.
These benefits extend to numerous possible use cases.
Predictive maintenance is a classic IoT-edge-AI use case, and one that remains important, particularly as advances in AI have enabled faster and more accurate interpretation of data collected straight from factory floors. For example, IoT sensors and programmable logic controllers might gather information once or more every 100 milliseconds, look for signs of equipment failure and alert stakeholders if necessary.
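A minimal sketch of such a monitoring loop, with an assumed vibration threshold and a stand-in sensor driver, might look like this:

```python
# Sketch of a 100 ms predictive-maintenance polling loop.
# The threshold and read_vibration() driver are assumptions.
import time

VIBRATION_LIMIT_MM_S = 7.1  # hypothetical alarm threshold

def read_vibration() -> float:
    """Stand-in for a PLC or sensor driver call."""
    return 2.3

def alert(msg: str) -> None:
    print("ALERT:", msg)  # in practice: message queue, SMS, dashboard

while True:
    v = read_vibration()
    if v > VIBRATION_LIMIT_MM_S:   # possible sign of impending failure
        alert(f"vibration {v:.1f} mm/s exceeds {VIBRATION_LIMIT_MM_S} mm/s")
    time.sleep(0.1)                # sample once every 100 milliseconds
```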
Cameras are vital IoT devices, with utility in contexts beyond traditional surveillance. According to NVIDIA, BMW has implemented inspection cameras in its factories to help improve the safety of its manufacturing. A camera's embedded AI might flag a defective product or a worker near a hazardous area, and transmit only the most important data for analysis. The city of Dubuque, Iowa has also relied on edge-connected cameras to monitor dangerous driving and drivers requiring assistance in real time.
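The "transmit only what matters" pattern can be as simple as a confidence filter in front of the uplink. In this sketch, detect() and the confidence cutoff are hypothetical stand-ins for an on-camera model:

```python
# Edge-side event filtering: forward only high-confidence detections,
# discard everything else on-device to save bandwidth.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def detect(frame) -> list[Detection]:
    """Stand-in for an on-camera model (e.g., the TFLite sketch earlier)."""
    return [Detection("worker_in_hazard_zone", 0.94)]

SEND_THRESHOLD = 0.9  # assumed confidence cutoff

def events_worth_sending(frame) -> list[Detection]:
    # Only these detections are worth a network round trip.
    return [d for d in detect(frame) if d.confidence >= SEND_THRESHOLD]

print(events_worth_sending(frame=None))
```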
The precision of robot movements in factories has greatly improved over time, with some models being able to control their movements to within 0.02 millimeters as of 2017, per McKinsey. AI on the edge can help coordinate these movements and enable robots to autonomously avoid collisions with humans and infrastructure. The University of California San Diego has developed a learning algorithm called Fastron that models a robot’s configuration space and continuously classifies collision and non-collision points.
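To give a flavor of how such a classifier works, here is a toy kernel perceptron that labels sampled configuration-space points as collision or free. It is a simplified illustration of the idea, not the published Fastron algorithm:

```python
# Toy Fastron-style collision classifier: a kernel perceptron over
# sampled points in a 2-D configuration space. Simplified sketch only.
import numpy as np

rng = np.random.default_rng(0)

def in_collision(q: np.ndarray) -> int:
    """Toy 'ground truth': a disc-shaped obstacle in the config space."""
    return 1 if np.linalg.norm(q - 0.5) < 0.25 else -1

X = rng.random((200, 2))                     # sampled configurations
y = np.array([in_collision(q) for q in X])   # +1 = collision, -1 = free

def rbf(A: np.ndarray, q: np.ndarray, gamma: float = 20.0) -> np.ndarray:
    """Gaussian kernel between each row of A and the query point q."""
    return np.exp(-gamma * np.sum((A - q) ** 2, axis=-1))

alpha = np.zeros(len(X))                     # kernel perceptron weights
for _ in range(20):                          # a few mistake-driven passes
    for i, q in enumerate(X):
        score = np.sum(alpha * y * rbf(X, q))
        pred = 1 if score >= 0 else -1
        if pred != y[i]:
            alpha[i] += 1.0                  # strengthen the misclassified point

query = np.array([0.5, 0.55])                # configuration to check
score = np.sum(alpha * y * rbf(X, query))
print("collision" if score >= 0 else "free")
```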
Even for older factories and other facilities requiring low-latency data processing, spinning up an edge environment that converges with the IoT and integrates the power of AI is now relatively straightforward.
Solutions such as AWS Snowball, Azure IoT Edge and Google Edge TPU make it simple to get started with just a credit card, and may offer free trials and account credits as well. This on-demand setup means that even small, lightly funded teams can quickly tap into edge computing to support their innovation initiatives.
In other words, size and significant amounts of raised capital are no longer prerequisites for operational success. A PwC survey found that “digital champions” that had moved early to adopt industrial automation technologies like AI and ML were far ahead of other manufacturers in realizing benefits such as supply chain transparency and cost-to-serve optimization. These firms are also well-positioned to capitalize on 5G’s strengths.
Overall, due to the democratization of building an edge environment and its clear-cut benefits, 69% of manufacturers are adopting edge computing and AI to modernize their operations, according to IDC Research and Lumen. Almost three-fourths of operational data may eventually be acquired, analyzed and acted upon directly within factories themselves.
As your organization considers how to develop the edge and cloud applications that will shape the future of its operations, Transcenda can help. Our experienced team will work closely with you on projects tailored to your specific requirements. Connect with us to learn more!