EdgeGNN: Efficient GNN Architecture Search for Resource-Constrained Devices
First published: 2025-04-03
Abstract: Graph Neural Networks (GNNs) excel at processing non-Euclidean data, but their design often demands substantial domain expertise and iterative tuning. While Neural Architecture Search (NAS) has successfully automated the design of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), applying it to GNNs, particularly when optimizing for resource-constrained edge devices, remains challenging. This paper aims to reduce the reliance on specialized knowledge and manual tuning in GNN architecture design, with particular emphasis on efficient deployment on edge devices. To this end, we propose EdgeGNN, a reinforcement learning-based neural architecture search framework that automatically discovers high-accuracy, low-latency GNN architectures. Our approach employs a joint optimization strategy that accounts for both overall model accuracy and hardware-dependent inference latency. Experimental results on multiple benchmark datasets show that the GNN architectures discovered by our framework significantly outperform existing baselines in model size and inference speed, while maintaining competitive predictive accuracy.
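The joint optimization strategy described in the abstract rewards the search controller for architectures that are both accurate and fast. The abstract does not give the exact formulation, so the sketch below uses a common weighted-product reward (MnasNet-style); the function name `joint_reward`, the latency target, and the exponent `beta` are illustrative assumptions, not details from the paper.

```python
def joint_reward(accuracy: float, latency_ms: float,
                 target_latency_ms: float = 20.0,
                 beta: float = -0.07) -> float:
    """Score a candidate architecture by accuracy and measured latency.

    Candidates slower than the target latency are penalized and faster
    ones mildly rewarded, so the RL controller learns to trade a small
    amount of accuracy for large latency gains. `target_latency_ms` and
    `beta` are hypothetical hyperparameters for illustration.
    """
    return accuracy * (latency_ms / target_latency_ms) ** beta

# A slightly less accurate but much faster candidate can outscore a
# slower, more accurate one:
fast = joint_reward(accuracy=0.90, latency_ms=15.0)
slow = joint_reward(accuracy=0.92, latency_ms=60.0)
```

In practice the latency term would be measured on the target edge hardware rather than estimated, which is what makes the search hardware-aware.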
Keywords: Artificial Intelligence; Graph Neural Network; Neural Architecture Search