Optimization of Deep Neural Networks
M. Ziegler, C. Schinabeck, A. Mishra, M. Sabih, F. Hannig
Seminar Optimization of Deep Neural Networks (OptDNN)
Artificial intelligence enriches our everyday life in many different ways. Once a data scientist has fully trained a model, it can be integrated into a complete application. Inside such an application, input data is fed to the neural network, which provides its output at the end of the processing pipeline. This inference step can run either in the cloud or on a local edge device.
Especially if the model needs to be executed frequently on an edge device, it should be optimized beforehand. Such optimizations may include structured or unstructured pruning, quantization and compression, subspace methods, or neural network compilers. Changing the structure or the numerical precision of the network (i.e., float or integer) typically impacts accuracy as well as computational efficiency.
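To make two of these methods concrete, the following is a minimal, framework-free sketch of unstructured magnitude pruning and symmetric int8 quantization applied to a weight matrix. The function names and NumPy-based setup are illustrative assumptions, not the seminar's actual toolchain:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Unstructured pruning: zero out the smallest-magnitude weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold is the k-th smallest absolute value.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.5)
q, scale = quantize_int8(w_pruned)
w_dequant = q.astype(np.float32) * scale

print(f"sparsity: {np.mean(w_pruned == 0):.2f}")
print(f"max quantization error: {np.abs(w_pruned - w_dequant).max():.4f}")
```

In practice, pruning is usually followed by fine-tuning to recover accuracy, and quantization is applied per layer or per channel; libraries such as PyTorch provide these operations out of the box.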
Based on selected applications from the domains of audio processing (e.g., speaker localization) and computer vision (e.g., depth extraction), each group optimizes one of the selected applications using the methods and tools above (e.g., the Neural Network Distiller), evaluates the performance gain, and presents the respective methods and results to all participants.
Further Information and Registration:
By email to Matthias Ziegler
Due to the coronavirus pandemic, the seminar will be held online until further notice. Accordingly, the initial meeting is planned as a web conference call. Please register with Matthias Ziegler (see above), and you will receive all further information from us by email in time.