Amid the trendiness of the term “edge AI” and talk of “having more intelligence at the network edge,” it’s easy to lose sight of the benefits of local, real-time processing that doesn’t rely on cloud-based resources to run artificial intelligence (AI) models. By enabling the electronics we interact with daily to make decisions in the real world based on AI models, we can increase their responsiveness, safety and overall efficiency.
Of course, some AI-powered systems will likely always need cloud-based resources. Still, many low-power applications – specifically those with one or two cameras – can be greatly enhanced with processing capabilities such as people and object classification, anomaly detection and human pose estimation. Implementing these capabilities in low-power applications can be challenging, however, because of cost constraints as well as the amount of power this level of processing requires.
Newer Arm® Cortex®-based vision processors such as the AM62A processor family help designers expand vision and AI processing capabilities in applications ranging from video doorbells to smart retail.
Let’s look at these applications in more depth to understand what expanded vision and AI capabilities can enable.
Making the future of embedded possible for edge AI
Watch the video “Making the future of embedded possible for edge AI” to learn how TI enables advanced AI analytics and real-time responsiveness in edge AI applications.
AI cameras in video doorbells
In video doorbells and home security systems (as shown in Figure 1), even a momentary delay in detecting a theft or identifying a person could make the difference in preventing loss of life or property.
Figure 1: Demonstration of people and object recognition running on a video doorbell
By analyzing real-time video data locally, video doorbells can respond faster and more reliably, with fewer false positives and no need for network connectivity. But power and size constraints have traditionally limited the level of AI processing necessary to achieve this real-time responsiveness.
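One common way to cut false positives in a locally processed video feed is to require that a detection persist across several consecutive frames before raising an alert. The sketch below is a generic illustration of that idea in Python – the class name, thresholds and simulated confidences are hypothetical, not part of any TI software or the AM62A toolchain:

```python
from collections import deque

class DetectionDebouncer:
    """Raise an alert only when the local model reports a person with
    sufficient confidence in several recent frames, suppressing
    single-frame false positives without any cloud round trip."""

    def __init__(self, threshold=0.6, window=5, min_hits=3):
        self.threshold = threshold          # per-frame confidence cutoff
        self.recent = deque(maxlen=window)  # rolling window of hit/miss flags
        self.min_hits = min_hits            # agreeing frames needed to confirm

    def update(self, confidence):
        """Feed one frame's person-detection confidence; return True
        once enough recent frames agree that a person is present."""
        self.recent.append(confidence >= self.threshold)
        return sum(self.recent) >= self.min_hits

debouncer = DetectionDebouncer()
frames = [0.2, 0.7, 0.9, 0.1, 0.8]  # simulated per-frame confidences
alerts = [debouncer.update(c) for c in frames]
# Only the last frame confirms: three of the five recent frames agree.
```

Because this logic runs per frame on the device itself, the latency of the alert is bounded by the camera’s frame rate rather than by a network round trip.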
The AM62A family, which includes the AM62A3, AM62A7, AM62A3-Q1 and AM62A7-Q1, is designed to operate at 2 to 3 W, in a form factor small enough for compact video doorbell enclosures. Video doorbell designers can implement higher levels of human and object detection by leveraging the 1 to 2 teraoperations per second of AI processing in AM62A processors. Read the technical white paper, “Edge AI Smart Cameras Using Energy-Efficient AM62A Processor,” to learn more about implementing AI processing in video doorbells.
AI cameras in smart retail
Smart retail, also known as “grab-and-go retail,” is a new shopping experience where customers select their purchases and then leave the store without having to pay a cashier – it’s all handled automatically.
The vision-based systems managing this experience rely on AI object-detection models as well as barcode scanners to identify which items customers put in their baskets and ultimately purchase when they leave the store (as shown in Figure 2).
Figure 2: AI camera using AI model to monitor customer activity in a smart retail store
By processing data locally, smart retail applications can decrease response times during transactions and increase data security. For data security in particular, running AI models locally requires no network connection to cloud resources – limiting the potential for unauthorized access of that data, since it is never transmitted externally.
As with video doorbells, power consumption is a primary design challenge for smart retail AI cameras, especially given high-frame-rate video analysis.
The energy-efficient, highly integrated system-on-a-chip architecture of AM62A processors unlocks the local AI processing capabilities of smart retail cameras. Through their integrated AI hardware accelerators, these processors enable object classification, anomaly detection, orientation detection and barcode identification – even on nonstandard surfaces such as fruits and vegetables.
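To make the anomaly-detection idea concrete, here is a minimal, device-friendly sketch that flags an out-of-family sensor reading (for example, an unexpected item weight at a checkout) using a running mean and variance. It uses Welford’s online algorithm, which needs only constant memory; the class, the z-score cutoff and the sample readings are illustrative assumptions, not TI software:

```python
import math

class RunningAnomalyDetector:
    """Flag readings far from the running distribution of values seen
    so far, keeping all statistics on-device (Welford's algorithm)."""

    def __init__(self, z_cutoff=3.0):
        self.n = 0          # samples seen
        self.mean = 0.0     # running mean
        self.m2 = 0.0       # running sum of squared deviations
        self.z_cutoff = z_cutoff

    def observe(self, x):
        """Update statistics with x; return True if x is an outlier
        relative to the values observed before it."""
        is_anomaly = False
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.z_cutoff:
                is_anomaly = True
        # Welford update: incorporate x into mean and m2.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return is_anomaly

det = RunningAnomalyDetector()
readings = [100, 101, 99, 100, 102, 250]  # last reading is out of family
flags = [det.observe(r) for r in readings]
# Only the final reading (250) is flagged as anomalous.
```

A production system would run a trained model on the accelerator rather than a z-score test, but the control flow is the same: every frame or reading is scored locally, and only the verdict – not the raw data – ever needs to leave the device.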
More intelligence at the edge means more real-time responsiveness and more reliable human-machine interaction. While I focused on only two applications in this article, the list of electronics that can benefit from locally run AI models grows daily. Highly capable, highly integrated vision processors are making this transformation possible – and our world smarter.