MITIGATING CONFIRMATION BIAS IN DEEP LEARNING WITH NOISY LABELS THROUGH COLLABORATIVE NETWORK TRAINING
Abstract
Confirmation bias in deep learning arises when a model trained on noisily labeled data reinforces its own incorrect predictions, degrading learning and generalization performance. This paper proposes a collaborative network training framework to mitigate confirmation bias in the presence of label noise. In the proposed method, two networks are trained simultaneously, each selecting clean samples for the other to learn from. This cross-training strategy prevents either network from overfitting to noisy labels and helps preserve model diversity. The framework also incorporates a sample agreement mechanism and consistency regularization to further stabilize training and improve robustness. Experimental evaluations on benchmark datasets including CIFAR-10, CIFAR-100, and Clothing1M show that the proposed approach outperforms existing noise-robust training methods, achieving higher accuracy and better noise tolerance. The results validate the effectiveness of collaborative learning in reducing confirmation bias and improving model reliability under label noise.
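The cross-training idea described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's actual implementation: two tiny logistic-regression "networks" stand in for deep models, each ranks the batch by its own per-sample loss, and the peer network updates only on the small-loss (likely clean) samples. The names (`TinyNet`, `co_train_step`, `keep_ratio`) and the small-loss selection criterion are assumptions made for illustration; the paper's sample agreement mechanism and consistency regularization are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(p, y):
    # Per-sample binary cross-entropy; eps guards against log(0).
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

class TinyNet:
    """A logistic-regression stand-in for a deep network (illustrative only)."""
    def __init__(self, dim):
        self.w = rng.normal(scale=0.01, size=dim)
        self.b = 0.0

    def predict(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))

    def sgd_step(self, X, y, lr=0.5):
        p = self.predict(X)
        grad = p - y  # gradient of cross-entropy w.r.t. the logit
        self.w -= lr * (X.T @ grad) / len(y)
        self.b -= lr * grad.mean()

def co_train_step(net_a, net_b, X, y_noisy, keep_ratio):
    """Each network selects its small-loss samples; the PEER trains on them."""
    for selector, learner in ((net_a, net_b), (net_b, net_a)):
        losses = cross_entropy(selector.predict(X), y_noisy)
        k = max(1, int(keep_ratio * len(y_noisy)))
        clean_idx = np.argsort(losses)[:k]  # small loss = likely clean label
        learner.sgd_step(X[clean_idx], y_noisy[clean_idx])

# Toy experiment: 2-D linearly separable task with 20% symmetric label noise.
X = rng.normal(size=(400, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(float)
y_noisy = y_true.copy()
flip = rng.random(400) < 0.2
y_noisy[flip] = 1.0 - y_noisy[flip]

net_a, net_b = TinyNet(2), TinyNet(2)
for epoch in range(200):
    co_train_step(net_a, net_b, X, y_noisy, keep_ratio=0.8)

acc = float(((net_a.predict(X) > 0.5) == (y_true > 0.5)).mean())
```

Because each network's selection errors are corrected by the other network's updates rather than its own, the sketch captures why cross-training limits the self-reinforcing loop behind confirmation bias: a network never trains directly on the samples it judged for itself.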
Similar Articles
- Sneha R. Patil, Dr. Liam O. Hughes, ENHANCED MALWARE DETECTION THROUGH FUNCTION PARAMETER ENCODING AND API DEPENDENCY MODELING, International Journal of Modern Computer Science and IT Innovations: Vol. 1 No. 01 (2024): Volume 01 Issue 01