The library makes it possible to decentralize computing power, for example by allowing small embedded systems to process incoming data locally and pass only the results on to a superordinate system. This dramatically reduces the amount of data that has to be transferred. In addition, it is possible to implement a network of small learning-capable systems that distribute tasks among themselves.
AIfES currently contains a neural network with a feed-forward structure, which also supports deep neural networks. "We programmed our solution so that we can describe a complete network with one single function," says Gembaczka. The integration of additional network types and structures is currently in development. Furthermore, the researcher and his colleagues are developing hardware components for neural networks, along with other learning algorithms and demonstrators. Fraunhofer IMS is currently working on a RISC-V microprocessor that will feature a hardware accelerator designed specifically for neural networks. A special version of AIfES is being optimized for this hardware in order to make the best possible use of its resources.
Fraunhofer IMS - www.ims.fraunhofer.de