BrainChip Accelerates AI at the Edge With New Low-Power Neural Processor
The Akida Pico low-power acceleration co-processor may drive compact AI-enabled wearables and sensors for consumer, healthcare, and IoT applications.
BrainChip Holdings has introduced the Akida Pico, a low-power acceleration co-processor designed for the smallest and most compact AI-enabled wearables and remote sensors used in consumer, healthcare, and IoT applications.
The Akida Pico is built on BrainChip’s Akida low-power, event-based compute platform, which accelerates neural network models for securely personalized applications such as voice wake-word detection, vital-sign anomaly detection, speech noise reduction, audio enhancement, and presence detection.
BrainChip's Akida Pico is based on the company's original Akida architecture.
The new neural processing unit (NPU) operates on less than 1 mW of power to extend the operating life of battery-powered devices, and its compact 0.18 mm² die reduces both cost and PCB footprint.
AI Acceleration at the Edge
The Akida Pico is an edge-acceleration NPU that can be used as a co-processor or as a standalone core. It is built on BrainChip’s Akida platform, which optimizes temporal event-based neural network models for ultra-low-power, event-based processing.
Akida Pico functional block diagram.
At the heart of the Akida Pico is a neural processing engine built around a minimal core for temporal event-based neural networks (TENNs). TENNs were developed to address limitations of traditional artificial neural networks (ANNs) such as convolutional neural networks (CNNs): CNNs excel at processing spatial information in images but are far less effective with temporal information, that is, data whose meaning unfolds over time.
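To make the spatial-versus-temporal distinction concrete, here is a toy, stdlib-only sketch (not BrainChip's TENN design): an instantaneous threshold looks at each sample in isolation, while a causal temporal kernel weights a short history and can therefore react to a trend, such as a slowly rising vital sign, that no single sample reveals.

```python
# Illustrative only: a toy causal temporal filter, not BrainChip's TENN.

def framewise_alarm(signal, threshold=1.0):
    """Fires only when a single sample alone exceeds the threshold."""
    return [x > threshold for x in signal]

def temporal_alarm(signal, kernel=(-1.0, 0.0, 1.0), threshold=0.3):
    """Fires when a causal convolution over the last len(kernel) samples
    (here a simple rising-slope detector, x[t] - x[t-2]) exceeds the
    threshold."""
    k = len(kernel)
    padded = [signal[0]] * (k - 1) + list(signal)  # causal padding
    out = []
    for t in range(len(signal)):
        window = padded[t:t + k]
        out.append(sum(w * x for w, x in zip(kernel, window)) > threshold)
    return out

# A slow upward drift: every sample stays below the framewise threshold,
# but the temporal detector flags the sustained rise.
signal = [0.0, 0.2, 0.4, 0.6, 0.8]
print(framewise_alarm(signal))  # [False, False, False, False, False]
print(temporal_alarm(signal))   # [False, False, True, True, True]
```

The same intuition scales up: a temporal model carries state across frames, so it can classify patterns in audio or vital signs rather than isolated snapshots.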
Operated in standalone mode, the Akida Pico can independently process audio and vital-signs data at very low power to generate instant, personalized event triggers, whether from a detected health anomaly (such as an elevated heart rate) or a voice command from the device’s owner.
As a co-processor, the Akida Pico allows engineers to offload otherwise intensive AI processing tasks from the system MCU/CPU to the application-specific and far more energy-efficient NPU. For continuous monitoring applications, it also offers a wake-up function for more power-hungry system processors, preserving power by filtering out false alarms and ensuring that only qualified events initiate a wake-up.
MetaTF Software Flow
BrainChip’s proprietary MetaTF software flow allows developers to optimize and compile Akida Pico’s TENNs as their application requires. MetaTF supports models created with TensorFlow/Keras and PyTorch, so developers can rapidly deploy edge AI applications without learning a new machine-learning framework.
Further, MetaTF's tools convert models with floating-point weights and activations into alternative models that use more energy-efficient, low-bit-width weights and activations without compromising model performance.
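To show what low-bit-width conversion means in principle, here is a minimal, stdlib-only sketch of symmetric weight quantization. This is a generic textbook scheme, not MetaTF's actual algorithm: each float tensor is mapped to small signed integers plus a single shared scale factor, shrinking storage and arithmetic cost.

```python
# Illustrative sketch of symmetric low-bit-width quantization (generic
# technique, not MetaTF's algorithm): floats -> 4-bit signed integers
# plus one per-tensor scale factor.

def quantize(weights, bits=4):
    """Map float weights to signed integers in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1               # 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return [v * scale for v in q]

w = [0.7, -0.3, 0.1, 0.0]
q, scale = quantize(w)
print(q)                    # small 4-bit integers sharing one scale
print(dequantize(q, scale)) # close to the original floats
```

The round-trip error is bounded by half the quantization step, which is why carefully trained low-bit models can match full-precision accuracy.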
Akida Pico audio use case flow.
The machine learning framework also converts quantized models trained using traditional deep-learning methods to event-domain models with low latency and low power consumption. A hardware abstraction layer (HAL) within the platform-agnostic Akida Runtime executes models on Akida hardware. Akida Runtime also includes a software simulator for model evaluation without hardware.