The Neural Compute Application Zoo (NCAppZoo) downloads and compiles a number of pre-trained deep neural networks such as GoogLeNet, AlexNet, SqueezeNet, MobileNets, and many more. Most of these networks are trained on the ImageNet dataset, which has over a thousand classes (also called categories) of images. These example networks and applications make it easy for developers to evaluate the platform and build simple projects.

If you plan on building a proof of concept (PoC) for an edge product, such as a smart digital camera, a gesture-controlled drone, or an industrial smart camera, you will probably need to customize your neural network.

Let us suppose you are building a smart front-door security camera. You won't need 'zebra', 'armadillo', 'lionfish', or many of the other thousand classes defined in ImageNet; instead, you probably need just 15 to 20 classes such as 'person', 'dog', 'mailman', 'person wearing a hoodie', etc. By reducing your dataset from a thousand classes down to 20, you are also reducing the number of features that need to be extracted. This has a direct impact on your neural network's complexity, which in turn impacts its size, training time, and inference time. In other words, by optimizing your neural network, you can achieve the following:

- Save time during network training, because you have a reduced dataset. This in turn saves money spent on keeping the training hardware up and running.
- Speed up development time, so you can get to market faster.
- Reduce hardware BOM cost by minimizing the memory footprint of your model.
- Run the forward pass faster during inference because of the reduced complexity (i.e., the edge device can process camera frames much faster).
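As a rough sanity check on those size savings, here is a small Python sketch comparing the parameter count of a 1000-class final classification layer against a 20-class one. The 1024-dimensional feature vector is an assumption based on GoogLeNet's global-average-pooled output; this is an illustration, not code from the NCAppZoo:

```python
# Illustration only: how shrinking the class count shrinks the final
# classification layer. A 1024-dim input matches GoogLeNet's pooled
# feature vector (an assumption here); other networks differ.

def fc_param_count(in_features, num_classes):
    # A fully connected layer has one weight per (input, class) pair
    # plus one bias per class.
    return in_features * num_classes + num_classes

full = fc_param_count(1024, 1000)   # ImageNet's 1000 classes
reduced = fc_param_count(1024, 20)  # a 20-class security-camera model

print(full)     # 1,025,000 parameters
print(reduced)  # 20,500 parameters
print(f"final-layer reduction: {full / reduced:.0f}x")  # 50x
```

The classifier is only one layer, of course; the larger savings in training and inference time come from the fact that a smaller class set needs fewer learned features throughout the network, as described above.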