Lightweight Deep Neural Network Model With Padding-free Downsampling
First published: 2024-02-05
Abstract: Deep neural networks have achieved impressive performance in image classification tasks. However, due to limitations in hardware resources, such as computing units and storage capacity, deploying these networks directly on resource-constrained devices such as mobile and edge devices is challenging. While lightweight network models have made significant advances, the downsampling stage has received little attention. Because the feature map is reused many times, reducing its size during the downsampling stage not only lowers the computational cost of the downsampling module itself but also reduces the computational burden of all subsequent stages. This paper addresses this gap by proposing a padding-free downsampling module that effectively reduces computational costs and can be seamlessly integrated into various deep learning models. Furthermore, we introduce a hybrid stem layer to maintain competitive accuracy. Extensive experiments were conducted on the CIFAR-100, Stanford Dogs, and ImageNet datasets. On CIFAR-100, the results show that the proposed module reduces computational costs by approximately 20% and improves inference speed on resource-constrained devices by around 10%.
Keywords: Computer Technology; Neural Network Lightweighting; Deep Learning; Image Classification; Model Architecture Design
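The full paper is not reproduced on this page, so the exact module definition is not available here. As a rough illustration of the idea the abstract describes, the following PyTorch sketch contrasts a conventional padded strided convolution with a padding-free one and prints the resulting feature-map sizes. The class names, kernel size, and channel counts are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: contrasts a padded vs. a padding-free strided
# convolution for downsampling. Module names and hyperparameters are
# assumptions, not the implementation from the paper.
import torch
import torch.nn as nn


class PaddedDownsample(nn.Module):
    """Common baseline: 3x3 conv, stride 2, padding 1 (output H' = ceil(H / 2))."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        return self.conv(x)


class PaddingFreeDownsample(nn.Module):
    """Padding-free variant: 3x3 conv, stride 2, no padding.

    Output H' = floor((H - 3) / 2) + 1, slightly smaller than the padded
    baseline; every subsequent stage then operates on the smaller map,
    so the saving compounds through the network.
    """

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=0)

    def forward(self, x):
        return self.conv(x)


if __name__ == "__main__":
    for size in (224, 32):  # ImageNet-style and CIFAR-style spatial sizes
        x = torch.randn(1, 32, size, size)
        print(PaddedDownsample(32, 64)(x).shape,
              PaddingFreeDownsample(32, 64)(x).shape)
    # 224x224 input: 112x112 (padded) vs. 111x111 (padding-free)
    # 32x32 input:   16x16  (padded) vs. 15x15  (padding-free)
```

On a 32×32 CIFAR-style input, one padding-free step yields 15×15 instead of 16×16 feature maps, roughly 12% fewer spatial positions at that stage; because every later layer reuses the smaller map, the reduction compounds, which is consistent in spirit with (though not a derivation of) the ~20% cost reduction the abstract reports.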