A Large Model Pruning and Heterogeneous Replacement Scheme Based on Consecutive Layer Influence
First published: 2025-02-19
Abstract: This paper proposes an efficient compression scheme for large models that substantially reduces the parameter count and speeds up inference while keeping the original accuracy unchanged or only slightly degraded. The scheme first computes the influence of every span of consecutive layers in the model, then iteratively selects the layer or layers with the lowest influence and replaces them with a lightweight heterogeneous structure, and finally applies recovery training via knowledge distillation to the compressed model to obtain the final lightweight model. Experiments evaluating the lightweight model show that its performance is close to that of the original large model. These results demonstrate the effectiveness of the proposed pruning and heterogeneous replacement scheme based on consecutive layer influence: it significantly reduces inference latency while preserving model accuracy, striking a balance between model acceleration and accuracy retention.
Keywords: Computer Science and Technology; Large Model Compression; Model Pruning; Knowledge Distillation
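The abstract outlines the pipeline only at a high level. As a concrete illustration of the layer-selection step, below is a minimal PyTorch sketch, assuming influence is measured as one minus the mean cosine similarity between a span's input and output hidden states; the paper's exact influence metric, replacement module, and distillation setup are not given in the abstract, so the function names and the metric here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def block_influence(hidden_in: torch.Tensor, hidden_out: torch.Tensor) -> float:
    # Influence of a span of consecutive layers, estimated as
    # 1 - mean cosine similarity between the span's input and output
    # hidden states: a low score means the span barely transforms its
    # input, marking it as a candidate for removal or replacement.
    # (Assumed metric; the abstract does not state the exact formula.)
    x = hidden_in.flatten(0, -2)   # (batch*seq, hidden)
    y = hidden_out.flatten(0, -2)
    return 1.0 - F.cosine_similarity(x, y, dim=-1).mean().item()

def least_influential_span(hidden_states: list, span_len: int) -> int:
    # hidden_states: per-layer activations [h_0, ..., h_L] collected
    # from a calibration batch, where h_i is the input to layer i.
    # Returns the start index of the consecutive span of `span_len`
    # layers with the smallest influence score.
    scores = [
        block_influence(hidden_states[i], hidden_states[i + span_len])
        for i in range(len(hidden_states) - span_len)
    ]
    return min(range(len(scores)), key=scores.__getitem__)
```

In a full pipeline, the selected span would then be replaced with a smaller heterogeneous module and the compressed model fine-tuned against the original via knowledge distillation, as the abstract describes.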