Machine Perception & Robotics Group
Chubu University

Deep Learning / International Conference

Analyzing the Accuracy, Representations, and Explainability of Various Loss Functions for Deep Learning

Author
Tenshi Ito, Hiroki Adachi, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi
Publication
The International Joint Conference on Neural Networks, 2023

Download: PDF (English)

Deep learning utilizes vast amounts of training data and updates weight parameters so as to minimize the loss between a predicted probability and a ground-truth label. Cross-entropy is generally used as the loss function. Although loss functions other than cross-entropy exist for image classification, their efficacy has not been adequately investigated. In this work, we extensively analyze models trained with different loss functions and clarify the properties of each. Specifically, we analyze the feature space and explainability, as well as the classification accuracy, on various benchmark datasets and network architectures. For feature space and explainability, we investigate the effectiveness of each loss function through quantitative and qualitative evaluations. We then discuss the properties and possible improvements of each.
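As a minimal illustration of the cross-entropy baseline the abstract refers to, the sketch below computes the cross-entropy loss of a single prediction in pure Python (the function names and example logits are our own, not from the paper): softmax converts raw scores into a probability distribution, and the loss is the negative log-likelihood of the ground-truth class.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution.
    # Subtracting the max improves numerical stability.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, label):
    # Negative log-likelihood of the ground-truth class.
    probs = softmax(logits)
    return -math.log(probs[label])

# A confident, correct prediction yields a small loss.
low_loss = cross_entropy([4.0, 1.0, 0.5], 0)

# Uniform logits yield the maximum-entropy loss, log(num_classes).
uniform_loss = cross_entropy([0.0, 0.0, 0.0], 0)
```

Minimizing this quantity over a training set is the standard objective that the alternative loss functions analyzed in the paper are compared against.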
