A structurally re-parameterized convolution neural network-based method for gearbox fault diagnosis in edge computing scenarios
Detailed Description
Gearboxes operate in harsh environments, and cloud-based techniques have traditionally been used for their fault diagnosis. However, cloud-based methods are prone to transmission delays and loss of information, which makes edge computing-based fault diagnosis an attractive alternative. Because edge devices have limited hardware resources, balancing the model's diagnostic capability against its runtime performance becomes a challenge. This paper proposes a lightweight convolutional neural network for gearbox fault diagnosis in edge computing scenarios that achieves accurate diagnosis together with lightweight model deployment. By constructing a Mel-Frequency Cepstral Coefficients (MFCC) feature matrix from the input data, the method suppresses noise interference and improves diagnostic accuracy. Through structural re-parameterization, the model is transformed from a multi-branch architecture at training time into a single-branch architecture at inference time, which increases inference speed and reduces hardware cost at deployment while leaving the model's diagnostic capability unchanged. Validation experiments were conducted on a public dataset and a custom experimental device, using the NVIDIA Jetson Xavier NX kit as the edge computing platform. After extracting the MFCC feature matrix, the average diagnostic accuracy in noisy environments improves by 12.22% and 9.44% in the two validation settings, respectively. After structural re-parameterization, the model's memory footprint decreases by 52.58% and its inference speed increases by 38.83%.
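To make the two steps concrete, the sketches below are minimal illustrations, not the authors' implementation; all layer sizes, sampling parameters, and library choices (librosa, PyTorch) are assumptions introduced here.

A hypothetical MFCC front end for a vibration segment might look like this, where the sampling rate, frame length, and number of coefficients are placeholders rather than the paper's settings:

import numpy as np
import librosa

fs = 12_000                              # assumed sampling rate (Hz)
signal = np.random.randn(fs)             # placeholder 1-second vibration segment

# Compute the MFCC feature matrix that would be fed to the CNN.
mfcc = librosa.feature.mfcc(
    y=signal.astype(np.float32),
    sr=fs,
    n_mfcc=20,                           # number of cepstral coefficients (assumed)
    n_fft=512,                           # frame length (assumed)
    hop_length=256,                      # frame step (assumed)
)
print(mfcc.shape)                        # (20, n_frames) feature matrix

The structural re-parameterization step can be sketched in the RepVGG style (used here only as a stand-in for the paper's specific architecture): a training-time block with a 3x3 branch, a 1x1 branch, and an identity branch, each followed by batch normalization, is folded into a single 3x3 convolution for inference, so the multi-branch and single-branch forms compute the same function:

import torch
import torch.nn as nn
import torch.nn.functional as F

def fuse_conv_bn(conv_weight, bn):
    """Fold a BatchNorm layer into the preceding conv's weight and bias."""
    std = (bn.running_var + bn.eps).sqrt()
    scale = bn.weight / std                          # per-output-channel scale
    fused_w = conv_weight * scale.reshape(-1, 1, 1, 1)
    fused_b = bn.bias - bn.running_mean * scale
    return fused_w, fused_b

class RepBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn3 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn_id = nn.BatchNorm2d(channels)        # identity branch

    def forward(self, x):
        # Training-time form: sum of three parallel branches.
        return self.bn3(self.conv3(x)) + self.bn1(self.conv1(x)) + self.bn_id(x)

    @torch.no_grad()
    def reparameterize(self):
        """Merge all branches into a single 3x3 conv for inference."""
        w3, b3 = fuse_conv_bn(self.conv3.weight, self.bn3)
        w1, b1 = fuse_conv_bn(self.conv1.weight, self.bn1)
        w1 = F.pad(w1, [1, 1, 1, 1])                 # place 1x1 kernel at center of 3x3
        c = self.conv3.in_channels
        id_w = torch.zeros(c, c, 3, 3)
        for i in range(c):
            id_w[i, i, 1, 1] = 1.0                   # identity expressed as a 3x3 conv
        wi, bi = fuse_conv_bn(id_w, self.bn_id)
        fused = nn.Conv2d(c, c, 3, padding=1)
        fused.weight.data = w3 + w1 + wi
        fused.bias.data = b3 + b1 + bi
        return fused

# Sanity check: the fused single-branch conv reproduces the multi-branch output.
block = RepBlock(8).eval()
x = torch.randn(1, 8, 32, 32)
fused = block.reparameterize().eval()
print(torch.allclose(block(x), fused(x), atol=1e-5))  # True

The final check illustrates the property claimed in the abstract: the fused single-branch convolution reproduces the multi-branch output exactly, so inference cost and memory drop without changing the model's diagnostic capability.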