Our group aims to bring together specialists in machine learning and embedded systems. While deep learning models excel at solving complex problems, they require extensive training data and computational resources. Our goal is to develop machine learning methods suitable for embedded devices with limited computational power and memory. This involves techniques such as compressing model parameters, pruning insignificant weights, and encoding parameters with fewer bits, potentially down to single-bit encoding in binary neural networks. Additionally, we focus on few-shot learning methods for scenarios with limited training data. These algorithms have practical applications in fields such as the Internet of Things and particle physics experiments.
Scope:
• Machine learning model compression
• Neural network pruning and quantization
• Binary neural networks
• Few-shot learning
• Applications of embedded machine learning (EML), e.g. in IoT, robotics, military, particle physics
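
Two of the techniques above can be illustrated in a few lines of NumPy. The sketch below is illustrative only, not our group's implementation: `prune` zeroes out the weights with the smallest magnitudes, and `binarize` keeps only the sign of each weight together with a single scaling factor (the mean absolute value), as in binary neural networks. The function names and the toy weight matrix are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))  # toy weight matrix (hypothetical example)

def prune(weights, sparsity=0.5):
    """Magnitude pruning: zero the given fraction of smallest-|w| weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def binarize(weights):
    """1-bit encoding: keep only the sign, scaled by alpha = mean(|w|)
    so the binary matrix approximates the original."""
    alpha = np.abs(weights).mean()
    return alpha * np.sign(weights)

W_pruned = prune(W, sparsity=0.5)   # half the entries become exactly zero
W_binary = binarize(W)              # every entry is +alpha or -alpha
```

Pruned weights can be stored sparsely, and a binarized matrix needs only one bit per weight plus one float, which is what makes these models attractive for memory-constrained embedded devices.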
