Coral USB Accelerator

Do you think that this external USB Neural Processor (USB Accelerator | Coral) could be of some use to run full NAM models?

That’s a smart one! I have one running on my Frigate camera system.
Would there be a way to improve the modelling on the Dwarf?

I don’t understand this USB thing!?

So, could you tell us?

It is a hardware accelerator for neural network processing.

That could be a game changer if it works!!

There is still a long way to go here. You would either need a more powerful device or a smaller model that keeps the same quality.

I can recommend this video that @madmaxwell sent me for an explanation of how to do it with images.

We are not going to train models on the device.

That accelerator is intended to execute a (TensorFlow-compatible) model.

I’ve played a bit with the PCIe version. The basic issue is that the runtime libraries are not made for real-time audio. One of the problems addressed by RTNeural, the engine we use in AIDA-X, is that it was specifically designed for real-time audio applications, which in turn boils down to memory management tricks. Rationale here: SIMD vs memory management performance gain · Issue #39 · jatinchowdhury18/RTNeural · GitHub.
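To illustrate what "memory management tricks" means in practice, here is a minimal sketch of the pattern real-time inference engines follow: every buffer and state variable is allocated up front, so the audio callback never touches the heap, never locks, and never blocks. The `TinyModel` class is a hypothetical toy layer, not RTNeural's actual API.

```cpp
#include <array>
#include <cstddef>

// Toy 1-in/1-out layer with all storage preallocated at construction.
struct TinyModel {
    std::array<float, 8> w{};   // weights, fixed-size, no heap
    std::array<float, 8> h{};   // hidden state, preallocated

    float forward(float x) {
        float y = 0.0f;
        for (std::size_t i = 0; i < w.size(); ++i) {
            h[i] = w[i] * x;    // no allocation, no locks, no syscalls
            y += h[i];
        }
        return y;
    }
};

// The audio callback: safe to run on a real-time thread because it only
// reads and writes memory that already exists.
void process_block(TinyModel& model, float* buf, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        buf[i] = model.forward(buf[i]);
}
```

A runtime that hides a heap allocation or a driver round-trip inside its `invoke()` call breaks this contract, which is why a library designed for 30 fps vision workloads can be unusable for low-latency audio even if it is fast on average.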

Not to mention that we don’t want mutexes and the other constructs typically required to access external hardware when we execute code in an rt thread. These NPUs (in this case a TPU) often ship with libraries whose rt profile is video at 30 fps at best. With audio, and in particular low-latency audio, we need to be a bit more careful. Think about using a USB audio card instead of I2S on an embedded Linux system: with a standard preempt-rt patch you will be forced to relax your RTL spec.
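For context on the mutex point: the usual alternative on an rt thread is a wait-free structure such as a single-producer/single-consumer ring buffer, so the audio thread can exchange data with a thread that talks to external hardware without ever blocking. This is a generic sketch of the technique, not anything from a specific NPU SDK.

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Single-producer/single-consumer ring buffer: the producer (e.g. a
// device I/O thread) and the consumer (the rt audio thread) each own one
// index, so no mutex is needed and neither side ever blocks.
template <typename T, std::size_t N>
struct SpscRing {
    std::array<T, N> buf{};
    std::atomic<std::size_t> head{0}; // written only by producer
    std::atomic<std::size_t> tail{0}; // written only by consumer

    bool push(const T& v) {           // device thread
        const auto h = head.load(std::memory_order_relaxed);
        const auto next = (h + 1) % N;
        if (next == tail.load(std::memory_order_acquire))
            return false;             // full: drop rather than block
        buf[h] = v;
        head.store(next, std::memory_order_release);
        return true;
    }

    bool pop(T& out) {                // rt audio thread
        const auto t = tail.load(std::memory_order_relaxed);
        if (t == head.load(std::memory_order_acquire))
            return false;             // empty: return, never wait
        out = buf[t];
        tail.store((t + 1) % N, std::memory_order_release);
        return true;
    }
};
```

The catch is that a round-trip through such a queue to an external device and back still costs at least one scheduling hop, which is exactly why USB-attached accelerators struggle to fit inside a low-latency audio budget.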

One thing to consider is that RTNeural already supports some forms of HW acceleration, for example when compiling the engine for Apple or even Intel platforms. If I had to choose where to integrate an NPU, I would try to integrate it into RTNeural, so that it would be benchmarkable with the tools the engine provides.
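Whatever backend you integrate, the benchmark question is always the same: does per-block inference time fit inside the real-time budget? A minimal sketch of that check, where `run_inference()` is a hypothetical stand-in for whatever engine/backend is under test:

```cpp
#include <chrono>

// Time available to process one block before the audio driver underruns.
// e.g. 128 samples at 48 kHz -> ~2666.7 microseconds.
inline double rt_budget_us(double sample_rate, int block_size) {
    return 1e6 * static_cast<double>(block_size) / sample_rate;
}

// Hypothetical stand-in workload; replace with the real model's process call.
inline void run_inference(float* buf, int n) {
    for (int i = 0; i < n; ++i)
        buf[i] = 0.5f * buf[i] + 0.1f;
}

// Average microseconds spent per block over `iters` runs.
inline double measure_us_per_block(int block_size, int iters) {
    float buf[512] = {};
    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i)
        run_inference(buf, block_size);
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::micro>(t1 - t0).count() / iters;
}
```

Worst-case time matters more than the average here: a backend whose mean fits the budget but that occasionally stalls (e.g. on a USB transaction) will still produce audible dropouts.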