EnCoDe: Enhancing Compressed Deep Learning Models Through Feature Distillation and Informative Sample Selection

International Conference on Machine Learning and Applications (ICMLA)

This paper presents EnCoDe, a novel technique that merges active learning, model compression, and knowledge distillation to optimize deep learning models. The method tackles issues that typically impede compressed models' performance, such as loss of generalization, resource intensity, and data redundancy. It actively selects the most informative samples for labeling, thus enhancing the student model's performance while economizing on labeled data and computational resources. EnCoDe's utility is empirically validated on the SVHN and CIFAR-10 datasets, demonstrating improved model compactness, enhanced generalization, reduced computational complexity, and lessened labeling effort. In our evaluations, applied to compressed versions of the VGG11 and AlexNet models, EnCoDe consistently outperforms baselines even when trained with only 60% of the total training samples. It thus establishes an effective framework for enhancing the accuracy and generalization of compressed models, which is especially beneficial in settings with limited resources and scarce labeled data.
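The two ingredients the abstract combines can be illustrated with a minimal sketch: a temperature-softened distillation loss that transfers the teacher's knowledge to the compressed student, and an uncertainty-based criterion that picks the most informative unlabeled samples for annotation. This is an assumption-laden simplification (NumPy, entropy-based uncertainty, hypothetical function names), not the paper's exact implementation.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields softer distributions.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence from the student's soft predictions to the teacher's
    # soft targets, scaled by T^2 as in standard knowledge distillation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)

def select_informative(student_logits, budget):
    # Active-learning step (sketch): rank unlabeled samples by the
    # student's predictive entropy and return the `budget` most uncertain.
    p = softmax(student_logits)
    entropy = -np.sum(p * np.log(p + 1e-12), axis=-1)
    return np.argsort(entropy)[::-1][:budget]
```

In a training loop, one would alternate between fitting the student on the currently labeled set with the distillation loss and querying labels for the samples returned by `select_informative`, which is how the method economizes on annotation.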


Rebati Gaire, Sepehr Tabrizchi, Arman Roohi

bottom of page